Compare commits

...

103 Commits

Author SHA1 Message Date
Olivier 'reivilibre
c47d8e0ee1 1.131.0 2025-06-03 14:37:27 +01:00
Quentin Gliech
461571fcf2 Changelog fixes
Co-Authored-By: Andrew Morgan <andrew@amorgan.xyz>
2025-05-28 12:36:28 +02:00
Quentin Gliech
22db145da3 1.131.0rc1 2025-05-28 12:29:07 +02:00
dependabot[bot]
d82ad6e554 Bump lxml from 5.3.0 to 5.4.0 (#18480)
Bumps [lxml](https://github.com/lxml/lxml) from 5.3.0 to 5.4.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/lxml/lxml/releases">lxml's
releases</a>.</em></p>
<blockquote>
<h2>lxml-5.4.0</h2>
<h1>5.4.0 (2025-04-22)</h1>
<h2>Bugs fixed</h2>
<ul>
<li>LP#2107279: Binary wheels use libxml2 2.13.8 and libxslt 1.1.43 to
resolve several CVEs.
(Binary wheels for Windows continue to use a patched libxml2 2.11.9 and
libxslt 1.1.39.)
Issue found by Anatoly Katyushin, see <a
href="https://bugs.launchpad.net/lxml/+bug/2107279">https://bugs.launchpad.net/lxml/+bug/2107279</a></li>
</ul>
<h2>lxml-5.3.2</h2>
<p>No release notes provided.</p>
<h2>lxml-5.3.1</h2>
<p>No release notes provided.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/lxml/lxml/blob/master/CHANGES.txt">lxml's
changelog</a>.</em></p>
<blockquote>
<h1>5.4.0 (2025-04-22)</h1>
<h2>Bugs fixed</h2>
<ul>
<li>LP#2107279: Binary wheels use libxml2 2.13.8 and libxslt 1.1.43 to
resolve several CVEs.
(Binary wheels for Windows continue to use a patched libxml2 2.11.9 and
libxslt 1.1.39.)
Issue found by Anatoly Katyushin.</li>
</ul>
<h1>5.3.2 (2025-04-05)</h1>
<p>This release resolves CVE-2025-24928 as described in
<a
href="https://gitlab.gnome.org/GNOME/libxml2/-/issues/847">https://gitlab.gnome.org/GNOME/libxml2/-/issues/847</a></p>
<h2>Bugs fixed</h2>
<ul>
<li>
<p>Binary wheels use libxml2 2.12.10 and libxslt 1.1.42.</p>
</li>
<li>
<p>Binary wheels for Windows use a patched libxml2 2.11.9 and libxslt
1.1.39.</p>
</li>
</ul>
<h1>5.3.1 (2025-02-09)</h1>
<h2>Bugs fixed</h2>
<ul>
<li>
<p>GH#440: Some tests were adapted for libxml2 2.14.0.
Patch by Nick Wellnhofer.</p>
</li>
<li>
<p>LP#2097175: <code>DTD(external_id=&quot;…&quot;)</code> erroneously
required a byte string as ID value.</p>
</li>
<li>
<p>GH#450: <code>iterparse()</code> internally triggered the
<code>DeprecationWarning</code> added in lxml 5.3.0 when parsing HTML.</p>
</li>
</ul>
<h2>Other changes</h2>
<ul>
<li>GH#442: Binary wheels for macOS no longer use the linker flag
<code>-flat_namespace</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="6e76d57af8"><code>6e76d57</code></a>
Build: Exclude slow Py3.9 wheel builds for s390/ppc and Py3.7 for
ARM64.</li>
<li><a
href="ee10c02bb7"><code>ee10c02</code></a>
Prepare release of lxml 5.4.0.</li>
<li><a
href="0e4f3c3372"><code>0e4f3c3</code></a>
Prepare release of lxml 5.3.3.</li>
<li><a
href="b4703fc2e7"><code>b4703fc</code></a>
Update changelog.</li>
<li><a
href="db723bb3b9"><code>db723bb</code></a>
Build: Use libxslt 1.1.43 instead of 1.1.42 to resolve some CVEs.</li>
<li><a
href="a664877bde"><code>a664877</code></a>
Build: Use libxml2 2.13.8 instead of 2.12.x to resolve some CVEs.</li>
<li><a
href="df4633e7a9"><code>df4633e</code></a>
Remove appveyor usage.</li>
<li><a
href="820db896be"><code>820db89</code></a>
CI: Allow Py3.14 jobs to fail.</li>
<li><a
href="93ad02aad6"><code>93ad02a</code></a>
docs: Add a note about C compiler installation to error message (<a
href="https://redirect.github.com/lxml/lxml/issues/454">GH-454</a>)</li>
<li><a
href="16878dac70"><code>16878da</code></a>
Add some hints to the documentation on how to build lxml (<a
href="https://redirect.github.com/lxml/lxml/issues/453">GH-453</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/lxml/lxml/compare/lxml-5.3.0...lxml-5.4.0">compare
view</a></li>
</ul>
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-28 11:59:59 +02:00
dependabot[bot]
58e8521313 Bump ruff from 0.11.10 to 0.11.11 (#18482)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.11.10 to 0.11.11.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.11.11</h2>
<h2>Release Notes</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add autofixes for <code>AIR302</code> and
<code>AIR312</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17942">#17942</a>)</li>
<li>[<code>airflow</code>] Move rules from <code>AIR312</code> to
<code>AIR302</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17940">#17940</a>)</li>
<li>[<code>airflow</code>] Update <code>AIR301</code> and
<code>AIR311</code> with the latest Airflow implementations (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17985">#17985</a>)</li>
<li>[<code>flake8-simplify</code>] Enable fix in preview mode
(<code>SIM117</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18208">#18208</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>Fix inconsistent formatting of match-case on <code>[]</code> and
<code>_</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18147">#18147</a>)</li>
<li>[<code>pylint</code>] Fix <code>PLW1514</code> not recognizing the
<code>encoding</code> positional argument of <code>codecs.open</code>
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/18109">#18109</a>)</li>
</ul>
<h3>CLI</h3>
<ul>
<li>Add full option name in formatter warning (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18217">#18217</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix rendering of admonition in docs (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18163">#18163</a>)</li>
<li>[<code>flake8-print</code>] Improve print/pprint docs for
<code>T201</code> and <code>T203</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18130">#18130</a>)</li>
<li>[<code>flake8-simplify</code>] Add fix safety section
(<code>SIM110</code>,<code>SIM210</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18114">#18114</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18100">#18100</a>)</li>
<li>[<code>pylint</code>] Fix docs example that produced different
output (<code>PLW0603</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18216">#18216</a>)</li>
</ul>
<h2>Contributors</h2>
<ul>
<li><a
href="https://github.com/AlexWaygood"><code>@​AlexWaygood</code></a></li>
<li><a
href="https://github.com/BradonZhang"><code>@​BradonZhang</code></a></li>
<li><a
href="https://github.com/BurntSushi"><code>@​BurntSushi</code></a></li>
<li><a
href="https://github.com/CodeMan62"><code>@​CodeMan62</code></a></li>
<li><a
href="https://github.com/InSyncWithFoo"><code>@​InSyncWithFoo</code></a></li>
<li><a
href="https://github.com/LaBatata101"><code>@​LaBatata101</code></a></li>
<li><a href="https://github.com/Lee-W"><code>@​Lee-W</code></a></li>
<li><a
href="https://github.com/Mathemmagician"><code>@​Mathemmagician</code></a></li>
<li><a
href="https://github.com/MatthewMckee4"><code>@​MatthewMckee4</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a
href="https://github.com/TomerBin"><code>@​TomerBin</code></a></li>
<li><a
href="https://github.com/VascoSch92"><code>@​VascoSch92</code></a></li>
<li><a
href="https://github.com/adamaaronson"><code>@​adamaaronson</code></a></li>
<li><a
href="https://github.com/brainwane"><code>@​brainwane</code></a></li>
<li><a
href="https://github.com/brandtbucher"><code>@​brandtbucher</code></a></li>
<li><a href="https://github.com/carljm"><code>@​carljm</code></a></li>
<li><a
href="https://github.com/dcreager"><code>@​dcreager</code></a></li>
<li><a
href="https://github.com/dhruvmanila"><code>@​dhruvmanila</code></a></li>
<li><a
href="https://github.com/dragon-dxw"><code>@​dragon-dxw</code></a></li>
<li><a
href="https://github.com/felixscherz"><code>@​felixscherz</code></a></li>
<li><a
href="https://github.com/kiran-4444"><code>@​kiran-4444</code></a></li>
<li><a
href="https://github.com/maxmynter"><code>@​maxmynter</code></a></li>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.11.11</h2>
<h3>Preview features</h3>
<ul>
<li>[<code>airflow</code>] Add autofixes for <code>AIR302</code> and
<code>AIR312</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17942">#17942</a>)</li>
<li>[<code>airflow</code>] Move rules from <code>AIR312</code> to
<code>AIR302</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17940">#17940</a>)</li>
<li>[<code>airflow</code>] Update <code>AIR301</code> and
<code>AIR311</code> with the latest Airflow implementations (<a
href="https://redirect.github.com/astral-sh/ruff/pull/17985">#17985</a>)</li>
<li>[<code>flake8-simplify</code>] Enable fix in preview mode
(<code>SIM117</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18208">#18208</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>Fix inconsistent formatting of match-case on <code>[]</code> and
<code>_</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18147">#18147</a>)</li>
<li>[<code>pylint</code>] Fix <code>PLW1514</code> not recognizing the
<code>encoding</code> positional argument of <code>codecs.open</code>
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/18109">#18109</a>)</li>
</ul>
<h3>CLI</h3>
<ul>
<li>Add full option name in formatter warning (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18217">#18217</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Fix rendering of admonition in docs (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18163">#18163</a>)</li>
<li>[<code>flake8-print</code>] Improve print/pprint docs for
<code>T201</code> and <code>T203</code> (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18130">#18130</a>)</li>
<li>[<code>flake8-simplify</code>] Add fix safety section
(<code>SIM110</code>,<code>SIM210</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18114">#18114</a>,<a
href="https://redirect.github.com/astral-sh/ruff/pull/18100">#18100</a>)</li>
<li>[<code>pylint</code>] Fix docs example that produced different
output (<code>PLW0603</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/18216">#18216</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0397682f1f"><code>0397682</code></a>
Bump 0.11.11 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18259">#18259</a>)</li>
<li><a
href="bcefa459f4"><code>bcefa45</code></a>
[ty] Rename <code>call-possibly-unbound-method</code> to
<code>possibly-unbound-implicit-call</code>...</li>
<li><a
href="91b7a570c2"><code>91b7a57</code></a>
[ty] Implement Python's floor division semantics for
<code>Literal</code> <code>int</code>s (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18249">#18249</a>)</li>
<li><a
href="98da200d45"><code>98da200</code></a>
[ty] Fix server panic when calling <code>system_mut</code> (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18252">#18252</a>)</li>
<li><a
href="029085fa72"><code>029085f</code></a>
[ty] Clarify <code>ty check</code> output default in documentation. (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18246">#18246</a>)</li>
<li><a
href="6df10c638e"><code>6df10c6</code></a>
[<code>pylint</code>] Fix docs example that produced different output
(<code>PLW0603</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18216">#18216</a>)</li>
<li><a
href="bdf488462a"><code>bdf4884</code></a>
Preserve tuple parentheses in case patterns (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18147">#18147</a>)</li>
<li><a
href="01eeb2f0d6"><code>01eeb2f</code></a>
[ty] Support frozen dataclasses (<a
href="https://redirect.github.com/astral-sh/ruff/issues/17974">#17974</a>)</li>
<li><a
href="cb04343b3b"><code>cb04343</code></a>
[ty] Split <code>invalid-base</code> error code into two error codes (<a
href="https://redirect.github.com/astral-sh/ruff/issues/18245">#18245</a>)</li>
<li><a
href="02394b8049"><code>02394b8</code></a>
[ty] Improve <code>invalid-type-form</code> diagnostic where a
module-literal type is us...</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.11.10...0.11.11">compare
view</a></li>
</ul>
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-28 11:58:45 +02:00
dependabot[bot]
3680de63a7 Bump types-jsonschema from 4.23.0.20241208 to 4.23.0.20250516 (#18481)
Bumps
[types-jsonschema](https://github.com/typeshed-internal/stub_uploader)
from 4.23.0.20241208 to 4.23.0.20250516.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/typeshed-internal/stub_uploader/commits">compare
view</a></li>
</ul>
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-28 11:58:30 +02:00
Johannes Marbach
c8733be8aa Add option to limit key queries to users sharing rooms as per MSC4263 (#18180)
This implements
https://github.com/matrix-org/matrix-spec-proposals/pull/4263.

---------

Signed-off-by: Johannes Marbach <n0-0ne+github@mailbox.org>
2025-05-28 11:58:08 +02:00
gui-yue
07468a0f1c Increase timeout for test_lock_contention on RISC-V (#18430)
This PR addresses a test failure for
`tests.handlers.test_worker_lock.WorkerLockTestCase.test_lock_contention`
which consistently times out on the RISC-V (specifically `riscv64`)
architecture.

The test simulates high lock contention and has a default timeout of 5
seconds, which seems sufficient for architectures like x86_64 but proves
too short for current RISC-V hardware/environment performance
characteristics, leading to spurious `tests.utils.TestTimeout` failures.

This fix introduces architecture detection using `platform.machine()`.
If a RISC-V architecture is detected:
* The timeout for this specific test is increased (e.g., to 15 seconds).

The original, stricter timeout (5 seconds) and lock count (500) are
maintained for all other architectures to avoid masking potential
performance regressions elsewhere.

This change has been tested locally on RISC-V, where the test now passes
reliably, and on x86_64, where it continues to pass with the original
constraints.
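
A minimal sketch of the gating described above (constant names are
illustrative, not the actual test code):

```python
import platform

# Architectures that need a longer timeout; platform.machine() reports
# "riscv64" on 64-bit RISC-V Linux.
SLOW_ARCHITECTURES = {"riscv32", "riscv64"}

if platform.machine() in SLOW_ARCHITECTURES:
    LOCK_CONTENTION_TIMEOUT = 15  # seconds; generous for slower hardware
else:
    LOCK_CONTENTION_TIMEOUT = 5   # original strict timeout elsewhere
```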

2025-05-27 17:17:04 +00:00
3nprob
33ba8860c4 fix(device-handler): make _maybe_retry_device_resync thread-safe (#18391)
A race condition could allow concurrent retry loops to run.

Use an actual `Lock` to ensure only one device resync retry loop runs at
a time.
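
A rough sketch of the guard (illustrative names; the real handler is
asynchronous Twisted code, so the details differ):

```python
import threading

class DeviceResyncRetrier:
    """Illustrative only, not the actual Synapse handler."""

    def __init__(self) -> None:
        # Guards the retry loop so only one instance runs at a time.
        self._retry_lock = threading.Lock()

    def maybe_retry_device_resync(self) -> None:
        # Non-blocking acquire: if a retry loop is already in flight,
        # bail out instead of starting a concurrent one.
        if not self._retry_lock.acquire(blocking=False):
            return
        try:
            pass  # retry outstanding device resyncs here
        finally:
            self._retry_lock.release()
```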

2025-05-26 16:21:43 +02:00
Shay
24e849e483 Don't move invited users to new room when shutting down room (#18471)
This is confusing to users who received unwanted invites.
2025-05-23 09:59:40 +01:00
Andrew Morgan
1624073191 Bump Tornado from 6.4.2 to 6.5.0 (#18459)
Bumps tornado to 6.5.0 to mitigate
[CVE-2025-47287](https://nvd.nist.gov/vuln/detail/CVE-2025-47287).

This dependency is only used indirectly through our sentry dependency.

2025-05-21 22:24:12 +00:00
Andrew Morgan
ed6b7ba9c3 Bump pyo3 from 0.23.5 to 0.24.2 (#18460)
Also bump pythonize from 0.23.0 to 0.24.0; otherwise we couldn't
compile, as pythonize 0.23.0 required pyo3 "^0.23.0".

Addresses
[RUSTSEC-2025-0020](https://rustsec.org/advisories/RUSTSEC-2025-0020),
although Synapse is not affected as we don't make use of
`PyString::from_object`.

[pyo3 0.24.x](https://github.com/PyO3/pyo3/releases/tag/v0.24.0)
apparently includes some performance optimisations, and no breaking changes.

2025-05-21 22:12:01 +00:00
Travis Ralston
b7d4841947 Policy server part 1: Actually call the policy server (#18387)
Roughly reviewable commit-by-commit.

This is the first part of adding policy server support to Synapse. Other
parts (unordered), which may or may not be bundled into fewer PRs,
include:

* Implementation of a bulk API
* Supporting a moderation server config (the `fallback_*` options of
https://github.com/element-hq/policyserv_spam_checker)
* Adding an "early event hook" for appservices to receive federation
transactions *before* events are processed formally
* Performance and stability improvements


---------

Co-authored-by: turt2live <1190097+turt2live@users.noreply.github.com>
Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
2025-05-21 22:09:09 +00:00
Dagfinn Ilmari Mannsåker
553e124f76 Include room ID in room deletion status response (#18318)
When querying by `delete_id` it's handy to see which room the delete
pertains to.
2025-05-20 11:53:30 -05:00
Devon Hudson
99cbd33630 Merge branch 'master' into develop 2025-05-20 09:36:05 -06:00
Andrew Morgan
4b1d9d5d0e Add a unit test for the phone home stats (#18463) 2025-05-20 16:26:45 +01:00
Devon Hudson
f92c6455ef Tweak changelog 2025-05-20 08:46:37 -06:00
Devon Hudson
a36f3a6d87 1.130.0 2025-05-20 08:35:23 -06:00
dependabot[bot]
9d43bec326 Bump ruff from 0.7.3 to 0.11.10 (#18451)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-20 15:23:30 +01:00
Strac Consulting Engineers Pty Ltd
a6cb3533db Update postgres.md (#18445) 2025-05-20 13:31:05 +00:00
dependabot[bot]
303c5c4daa Bump setuptools from 72.1.0 to 78.1.1 (#18461)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-20 12:03:10 +01:00
Andrew Morgan
1f4ae2f9eb Allow only requiring a field be present in an SSO response, rather than specifying a required value (#18454) 2025-05-19 17:50:02 +01:00
Erik Johnston
67920c0aca Fix up the topological ordering for events above MAX_DEPTH (#18447)
Synapse previously did not correctly cap the max depth of an event to
the max canonical json int. This can cause ordering issues for any
events that were sent locally at the time.

This background update goes and correctly caps the topological ordering
to the new `MAX_DEPTH`.

c.f. GHSA-v56r-hwv5-mxg6
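
A sketch of the cap itself (the background update applies the same
clamp to existing rows; `MAX_DEPTH` here assumes the canonical JSON
integer maximum):

```python
# Canonical JSON restricts integers to [-(2**53) + 1, 2**53 - 1], so an
# event's depth (and hence its topological ordering) must be capped there.
MAX_DEPTH = 2**53 - 1

def capped_depth(depth: int) -> int:
    """Clamp an event's depth to the canonical JSON integer maximum."""
    return min(depth, MAX_DEPTH)
```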

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-19 13:36:30 +01:00
dependabot[bot]
17e6b32966 Bump docker/build-push-action from 6.16.0 to 6.17.0 (#18449)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 13:07:24 +01:00
dependabot[bot]
afeb0e01c5 Bump pyopenssl from 25.0.0 to 25.1.0 (#18450)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 13:06:45 +01:00
dependabot[bot]
cd1a3ac584 Bump authlib from 1.5.1 to 1.5.2 (#18452)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 13:06:11 +01:00
dependabot[bot]
b3b24c69fc Bump pyo3-log from 0.12.3 to 0.12.4 (#18453)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 13:04:15 +01:00
Erik Johnston
fa4a00a2da Check for CREATE/DROP INDEX in schema deltas (#18440)
As these should be background updates.
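
A sketch of the kind of check this adds (illustrative, not the actual
lint):

```python
import re
from pathlib import Path

# Flag schema delta files that CREATE or DROP an index directly, since
# index (re)builds should be registered as background updates instead.
INDEX_RE = re.compile(r"\b(CREATE|DROP)\s+INDEX\b", re.IGNORECASE)

def offending_deltas(delta_dir: str) -> list[Path]:
    return [
        path
        for path in sorted(Path(delta_dir).rglob("*.sql"))
        if INDEX_RE.search(path.read_text())
    ]
```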
2025-05-19 10:52:05 +00:00
dependabot[bot]
7d4c3b64e3 Bump docker/build-push-action from 6.15.0 to 6.16.0 (#18397)
Bumps
[docker/build-push-action](https://github.com/docker/build-push-action)
from 6.15.0 to 6.16.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/docker/build-push-action/releases">docker/build-push-action's
releases</a>.</em></p>
<blockquote>
<h2>v6.16.0</h2>
<ul>
<li>Handle no default attestations env var by <a
href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a
href="https://redirect.github.com/docker/build-push-action/pull/1343">docker/build-push-action#1343</a></li>
<li>Only print secret keys in build summary output by <a
href="https://github.com/crazy-max"><code>@​crazy-max</code></a> in <a
href="https://redirect.github.com/docker/build-push-action/pull/1353">docker/build-push-action#1353</a></li>
<li>Bump <code>@​docker/actions-toolkit</code> from 0.56.0 to 0.59.0 in
<a
href="https://redirect.github.com/docker/build-push-action/pull/1352">docker/build-push-action#1352</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/docker/build-push-action/compare/v6.15.0...v6.16.0">https://github.com/docker/build-push-action/compare/v6.15.0...v6.16.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="14487ce63c"><code>14487ce</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/build-push-action/issues/1343">#1343</a>
from crazy-max/fix-no-default-attest</li>
<li><a
href="0ec91264d8"><code>0ec9126</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/build-push-action/issues/1366">#1366</a>
from crazy-max/pr-assign-author</li>
<li><a
href="b749522b90"><code>b749522</code></a>
pr-assign-author workflow</li>
<li><a
href="c566248492"><code>c566248</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/build-push-action/issues/1363">#1363</a>
from crazy-max/fix-codecov</li>
<li><a
href="13275dd76e"><code>13275dd</code></a>
ci: fix missing source for codecov</li>
<li><a
href="67dc78bbaf"><code>67dc78b</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/build-push-action/issues/1361">#1361</a>
from mschoettle/patch-1</li>
<li><a
href="0760504437"><code>0760504</code></a>
docs: add validating build configuration example</li>
<li><a
href="1c198f4467"><code>1c198f4</code></a>
chore: update generated content</li>
<li><a
href="288d9e2e4a"><code>288d9e2</code></a>
handle no default attestations env var</li>
<li><a
href="88844b95d8"><code>88844b9</code></a>
Merge pull request <a
href="https://redirect.github.com/docker/build-push-action/issues/1353">#1353</a>
from crazy-max/summary-secret-keys</li>
<li>Additional commits viewable in <a
href="471d1dc4e0...14487ce63c">compare
view</a></li>
</ul>
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 09:51:52 +01:00
dependabot[bot]
078cefd014 Bump actions/setup-python from 5.5.0 to 5.6.0 (#18398)
Bumps [actions/setup-python](https://github.com/actions/setup-python)
from 5.5.0 to 5.6.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/setup-python/releases">actions/setup-python's
releases</a>.</em></p>
<blockquote>
<h2>v5.6.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Workflow updates related to Ubuntu 20.04 by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1065">actions/setup-python#1065</a></li>
<li>Fix for Candidate Not Iterable Error by <a
href="https://github.com/aparnajyothi-y"><code>@​aparnajyothi-y</code></a>
in <a
href="https://redirect.github.com/actions/setup-python/pull/1082">actions/setup-python#1082</a></li>
<li>Upgrade semver and <code>@​types/semver</code> by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1091">actions/setup-python#1091</a></li>
<li>Upgrade prettier from 2.8.8 to 3.5.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1046">actions/setup-python#1046</a></li>
<li>Upgrade ts-jest from 29.1.2 to 29.3.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/setup-python/pull/1081">actions/setup-python#1081</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/setup-python/compare/v5...v5.6.0">https://github.com/actions/setup-python/compare/v5...v5.6.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a26af69be9"><code>a26af69</code></a>
Bump ts-jest from 29.1.2 to 29.3.2 (<a
href="https://redirect.github.com/actions/setup-python/issues/1081">#1081</a>)</li>
<li><a
href="30eafe9548"><code>30eafe9</code></a>
Bump prettier from 2.8.8 to 3.5.3 (<a
href="https://redirect.github.com/actions/setup-python/issues/1046">#1046</a>)</li>
<li><a
href="5d95bc16d4"><code>5d95bc1</code></a>
Bump semver and <code>@​types/semver</code> (<a
href="https://redirect.github.com/actions/setup-python/issues/1091">#1091</a>)</li>
<li><a
href="6ed2c67c8a"><code>6ed2c67</code></a>
Fix for Candidate Not Iterable Error (<a
href="https://redirect.github.com/actions/setup-python/issues/1082">#1082</a>)</li>
<li><a
href="e348410e00"><code>e348410</code></a>
Remove Ubuntu 20.04 from workflows due to deprecation from 2025-04-15
(<a
href="https://redirect.github.com/actions/setup-python/issues/1065">#1065</a>)</li>
<li>See full diff in <a
href="8d9ed9ac5c...a26af69be9">compare
view</a></li>
</ul>
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-19 09:51:08 +01:00
Shay
74e2f028bb Fix admin redaction endpoint not redacting encrypted messages (#18434) 2025-05-19 09:48:46 +01:00
Stanislav Kazantsev
0afdc0fc7f remove room without listeners from Notifier.room_to_user_streams (#18380)
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
2025-05-15 18:18:17 +01:00
Erik Johnston
f5ed52c1e2 Move index creation to background update (#18439)
Follow on from #18375. This prevents blocking startup on creating the
index, which can take a while.

---------

Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
2025-05-15 12:43:24 +01:00
_
44ae5362fd Add option to allow registrations that begin with '_' (#18262)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-15 11:31:52 +00:00
Kim Brose
194b923a6e Fix room_list_publication_rules docs for v1.126.0 (#18286)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-14 11:36:54 +01:00
Eric Eastwood
a3bbd7eeab Explain why we flush_buffer() for Python print(...) output (#18420)
Spawning from using this code elsewhere and not knowing why it's there.

Based on this article and @reivilibre's experience mentioning
`PYTHONUNBUFFERED=1`,

> #### programming languages where the default “print” statement buffers
> 
> Also, here are a few programming language where the default print
statement will buffer output when writing to a pipe, and some ways to
disable buffering if you want:
> 
> - Python (disable with `python -u`, or `PYTHONUNBUFFERED=1`, or
`sys.stdout.reconfigure(line_buffering=False)`, or `print(x,
flush=True)`)
> 
> _--
https://jvns.ca/blog/2024/11/29/why-pipes-get-stuck-buffering/#programming-languages-where-the-default-print-statement-buffers_
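
For illustration, the two standard ways to force the output out (a
`flush_buffer()`-style helper boils down to the explicit flushes):

```python
import sys

# When stdout is a pipe rather than a TTY, Python block-buffers output,
# so print()ed lines may not appear until the buffer fills or the
# process exits. Either flush per call:
print("progress...", flush=True)

# ...or flush explicitly after a burst of writes:
sys.stdout.flush()
sys.stderr.flush()
```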
2025-05-13 10:40:49 -05:00
Eric Eastwood
6e910e2b2c Fix a couple type annotations in the RootConfig/Config (#18409)
Fix a couple type annotations in the `RootConfig`/`Config`. Discovered
while cribbing this code for another project.

It really sucks that `mypy` type checking doesn't catch this. I assume
this is because we also have a `synapse/config/_base.pyi` that overrides
all of this. Still unclear to me why the `Iterable[str]` vs
`StrSequence` issue wasn't caught as that's what `ConfigError` expects.
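
An illustration of the stub-shadowing effect, with hypothetical module
names:

```python
# config.py (runtime module): this annotation is wrong (should be str),
# but mypy never looks at it...
def set_server_name(name: int) -> None:
    print("server name: " + name)  # would fail type checking

# config.pyi (stub): ...because when a stub file exists, mypy checks
# callers against the stub and ignores annotations in config.py entirely:
#
#     def set_server_name(name: str) -> None: ...
```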
2025-05-13 10:22:15 -05:00
Andrew Morgan
2db54c88ff Explicitly enable pypy for cibuildwheel (#18417) 2025-05-13 15:19:30 +01:00
Andrew Morgan
480d4faa38 Remove newline from final bullet point of PR template (#18419) 2025-05-13 15:14:00 +01:00
dependabot[bot]
ba2f1be891 Bump types-requests from 2.32.0.20241016 to 2.32.0.20250328 (#18427) 2025-05-13 15:12:34 +01:00
dependabot[bot]
c626d54cea Bump mypy-zope from 1.0.9 to 1.0.11 (#18428) 2025-05-13 15:12:22 +01:00
Erik Johnston
99c15f4630 Fix up changelog 2025-05-13 10:54:23 +01:00
Erik Johnston
09b4109c2e 1.130.0rc1 2025-05-13 10:44:11 +01:00
dependabot[bot]
40ce11ded0 Bump pillow from 11.1.0 to 11.2.1 (#18429)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-13 09:46:03 +01:00
dependabot[bot]
3dade08e7c Bump actions/setup-go from 5.4.0 to 5.5.0 (#18426)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-13 09:34:23 +01:00
dependabot[bot]
1920dfff40 Bump pydantic from 2.10.3 to 2.11.4 (#18394) 2025-05-09 16:36:54 +01:00
dependabot[bot]
b7728a2df1 Bump packaging from 24.2 to 25.0 (#18393) 2025-05-09 15:37:05 +01:00
dependabot[bot]
c6dfe70014 Bump txredisapi from 1.4.10 to 1.4.11 (#18392) 2025-05-09 15:36:41 +01:00
dependabot[bot]
b5d94f654c Bump sha2 from 0.10.8 to 0.10.9 (#18395) 2025-05-09 15:35:18 +01:00
Devon Hudson
7c633f1a58 Pass leave from remote invite rejection down Sliding Sync (#18375)
Fixes #17753 


### Dev notes

The `sliding_sync_membership_snapshots` and `sliding_sync_joined_rooms`
database tables were added in
https://github.com/element-hq/synapse/pull/17512


---------

Co-authored-by: Erik Johnston <erik@matrix.org>
Co-authored-by: Olivier 'reivilibre <oliverw@matrix.org>
Co-authored-by: Eric Eastwood <erice@element.io>
2025-05-08 14:28:23 +00:00
Devon Hudson
ae877aa101 Convert Sliding Sync tests to use higher-level compute_interested_rooms (#18399)
Spawning from
https://github.com/element-hq/synapse/pull/18375#discussion_r2071768635,

This updates some sliding sync tests to use a higher level function in
order to move test coverage to cover both fallback & new tables.
Important when https://github.com/element-hq/synapse/pull/18375 is
merged.

In other words, adjust tests to target `compute_interested_room(...)`
(relevant to both new and fallback path) instead of the lower level
`get_room_membership_for_user_at_to_token(...)` that only applies to the
fallback path.

### Dev notes

```
SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial tests.handlers.test_sliding_sync.ComputeInterestedRoomsTestCase_new
```

```
SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial tests.rest.client.sliding_sync
```

```
SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_USER=postgres SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial tests.handlers.test_sliding_sync.ComputeInterestedRoomsTestCase_new.test_display_name_changes_leave_after_token_range
```


---------

Co-authored-by: Eric Eastwood <erice@element.io>
2025-05-07 15:07:58 +00:00
Andrew Morgan
740fc885cd Merge branch 'master' into develop 2025-05-06 13:31:41 +01:00
Andrew Morgan
9a62b2d47a 1.129.0 2025-05-06 12:22:27 +01:00
Will Hunt
d0873d549a Ensure the url previewer also hashes and quarantines media (#18297)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-06 11:04:31 +01:00
Florian Klink
c9adbc6a1c make tests tolerant to authlib 1.5.2 error messages (#18390)
authlib 1.5.2 now single-quotes error messages in the claims, causing
three tests to fail.

Replace the comparison with a regex that accepts both single or double
quotes.

The tests now pass with both authlib 1.5.1 and 1.5.2.

See https://github.com/NixOS/nixpkgs/pull/402797 for context.
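
A sketch of the tolerant match (the claim name here is illustrative):

```python
import re

# Accept the error message whether the claim name is wrapped in double
# quotes (authlib <= 1.5.1) or single quotes (authlib >= 1.5.2).
CLAIM_ERROR_RE = re.compile("Invalid claim ['\"]iss['\"]")

assert CLAIM_ERROR_RE.search('Invalid claim "iss"')  # double-quoted
assert CLAIM_ERROR_RE.search("Invalid claim 'iss'")  # single-quoted
```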

2025-05-05 10:09:39 +00:00
David Baker
9f9eb56333 Return specific error code when email / phone not supported (#17578)
Implements https://github.com/matrix-org/matrix-spec-proposals/pull/4178

If this would need tests, could you give some idea of what tests would
be needed and how best to add them?

2025-05-05 11:08:50 +02:00
Will Lewis
fe8bb620de Add the ability to exclude remote users in user directory search results (#18300)
This change adds a new configuration
`user_directory.exclude_remote_users`, which defaults to False.
When set to True, remote users will not appear in user directory search
results.
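
Per the description, the option would be set in `homeserver.yaml` along
these lines (a sketch):

```yaml
user_directory:
  # When true, remote users are omitted from user directory search
  # results. Defaults to false.
  exclude_remote_users: true
```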


---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-02 15:38:02 +01:00
Quentin Gliech
b8146d4b03 Allow a few admin APIs used by MAS to run on workers (#18313)
This should be reviewed commit by commit.

It allows a few admin servlets that are used by MAS when in delegation
mode to run on workers.

---------

Co-authored-by: Olivier 'reivilibre <oliverw@matrix.org>
Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-02 15:37:58 +02:00
Shay
411d239db4 Apply should_drop_federated_event to federation invites (#18330)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-02 13:04:01 +00:00
Quentin Gliech
d18edf67d6 Fix lint which broke in #18374 (#18385)
https://github.com/element-hq/synapse/pull/18374 did not pass linting
but was merged.
2025-05-02 12:07:23 +00:00
Andrew Morgan
fd5d3d852d Don't check the at_hash (access token hash) in OIDC ID Tokens if we don't use the access token (#18374)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-05-02 12:16:14 +01:00
Shay
ea376126a0 Fix typo in doc for Scheduled Tasks Admin API (#18384) 2025-05-02 12:14:31 +01:00
Quentin Gliech
74be5cfdbc Do not auto-provision missing users & devices when delegating auth to MAS (#18181)
Since MAS 0.13.0, the provisioning of devices and users is done
synchronously and reliably enough that we don't need to auto-provision
on the Synapse side anymore.

It's important to remove this behaviour if we want to start caching
token introspection results.
2025-05-02 12:13:26 +02:00
Andrew Ferrazzutti
f2ca2e31f7 Readme tweaks (#18218) 2025-05-02 12:11:48 +02:00
Shay
6dc1ecd359 Add an Admin API endpoint to fetch scheduled tasks (#18214) 2025-05-01 18:30:00 +00:00
Sebastian Spaeth
2965c9970c docs/workers.md: Add ^/_matrix/federation/v1/event/ to list of delegatable endpoints (#18377) 2025-05-01 15:11:59 +01:00
Martin Lavén
d59bbd8b6b Added Pocket ID to openid.md (#18237) 2025-04-30 16:13:09 +00:00
Andrew Ferrazzutti
7be6c711d4 start_for_complement.sh: use more shell builtins (#18293)
Avoid calling external tools when shell builtins suffice.


---------

Co-authored-by: Quentin Gliech <quenting@element.io>
2025-04-30 15:53:15 +00:00
Andrew Ferrazzutti
5ab05e7b95 docker: use shebangs to invoke generated scripts (#18295)
When generating scripts from templates, don't add a leading newline so
that their shebangs may be handled correctly.
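
A minimal illustration of why the leading newline matters:

```python
# A script file must start with the two bytes "#!" for the kernel to
# honour the shebang; a leading newline from a template breaks that.
rendered = "#!/bin/sh\nexec echo started\n"

with open("start.sh", "w") as f:
    f.write(rendered)           # correct: file begins with "#!"
    # f.write("\n" + rendered)  # broken: shebang no longer at byte 0
```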


---------

Co-authored-by: Quentin Gliech <quenting@element.io>
2025-04-30 14:26:08 +00:00
Andrew Ferrazzutti
7563b2a2a3 configure_workers_and_start.py: unify python path (#18291)
Use absolute path for python in script shebang, and invoke child python
processes with sys.executable. This is consistent with the absolute path
used to invoke python elsewhere (like in the supervisor config).
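
In practice the change boils down to this pattern; a minimal sketch:

```python
import subprocess
import sys

# Re-use the interpreter running this script rather than assuming a fixed
# path; this stays consistent with the absolute path used elsewhere (such
# as in the supervisor config).
subprocess.run([sys.executable, "/start.py", "migrate_config"], check=True)
```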

---------

Co-authored-by: Quentin Gliech <quenting@element.io>
2025-04-30 14:22:09 +00:00
Andrew Ferrazzutti
4097ada89f Optimize Dockerfile-workers (#18292)
- Use a `uv:python` image for the first build layer, to reduce the
number of intermediate images required, as the
main Dockerfile uses that image already
- Use a cache mount for `apt` commands
- Skip a pointless install of `redis-server`, since the redis Docker
image is copied from instead
- Move some RUN steps out of the final image layer & into the build
layer

Depends on https://github.com/element-hq/synapse/pull/18275

2025-04-30 15:54:30 +02:00
Kim Brose
f79811ed80 Fix typo in docs about push (#18320) 2025-04-30 14:27:08 +01:00
Quentin Gliech
5f587dfd38 Adjust changelog
Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-04-30 15:25:59 +02:00
Quentin Gliech
a4ec96ca34 1.129.0rc2 2025-04-30 15:17:19 +02:00
Quentin Gliech
02dca7c67a Unschedule the background update scheduled in #18068. (#18372)
Fixes #18356
2025-04-30 12:35:32 +00:00
Quentin Gliech
dbf5b0be67 Remove the trigger added in #18260 and then reverted (#18373)
See #18260

This is useful for anyone who tried out Synapse v1.129.0rc1.

Fixes #18349

To test:

- check out v1.129.0rc1 and start
- check that the events table has the trigger (`\dS events` with postgres)
- check out this PR and start
- check that the events table doesn't have the trigger anymore
2025-04-30 14:07:21 +02:00
Quentin Gliech
b2f12d22e4 Merge commit '89cb613a4e' into release-v1.129 2025-04-29 16:43:35 +02:00
Erik Johnston
4eaab31757 Minor performance improvements to notifier/replication (#18367)
These are some improvements to `on_new_event`, which is a hot path. Not
sure how much this will save, but maybe ~5%?

Possibly easier to review commit-by-commit
2025-04-29 14:08:32 +01:00
Erik Johnston
ad140130cc Slight performance increase when using the ratelimiter (#18369)
See the commits.
2025-04-29 14:08:22 +01:00
Erik Johnston
e47de2b32d Do not retry push during backoff period (#18363)
This fixes a bug where, if a pusher got told about a new event to push,
it would ignore the backoff and immediately retry sending any pending
push.
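
A minimal sketch of the intended behaviour, with hypothetical names (the real logic lives in Synapse's HTTP pusher):

```python
import time
from typing import Optional


class PusherSketch:
    """Hypothetical illustration: honour the backoff window after failures."""

    def __init__(self) -> None:
        self.failing_since: Optional[float] = None
        self.next_retry_at: float = 0.0

    def on_new_notification(self) -> None:
        # The bug: a new event used to trigger an immediate retry even
        # while we were backing off from a failing push endpoint.
        if self.failing_since is not None and time.time() < self.next_retry_at:
            return  # let the scheduled retry timer fire instead
        self.send_pending_push()

    def send_pending_push(self) -> None:
        ...  # POST the notification to the push gateway
```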
2025-04-29 14:08:11 +01:00
dependabot[bot]
0384fd72ee Bump softprops/action-gh-release from 1 to 2 (#18264) 2025-04-29 10:08:20 +01:00
dependabot[bot]
75832f25b0 Bump types-jsonschema from 4.23.0.20240813 to 4.23.0.20241208 (#18305) 2025-04-29 10:07:49 +01:00
dependabot[bot]
7346760aed Bump pyopenssl from 24.3.0 to 25.0.0 (#18315) 2025-04-29 10:07:33 +01:00
dependabot[bot]
b0795d0cb6 Bump types-psycopg2 from 2.9.21.20250121 to 2.9.21.20250318 (#18316)
Bumps [types-psycopg2](https://github.com/python/typeshed) from
2.9.21.20250121 to 2.9.21.20250318.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/python/typeshed/commits">compare view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-29 10:07:15 +01:00
dependabot[bot]
2ef7824620 Bump pyo3-log from 0.12.2 to 0.12.3 (#18317) 2025-04-29 10:07:06 +01:00
dependabot[bot]
39e17856a3 Bump anyhow from 1.0.97 to 1.0.98 (#18336) 2025-04-29 10:06:36 +01:00
dependabot[bot]
4c958c679a Bump stefanzweifel/git-auto-commit-action from 5.1.0 to 5.2.0 (#18354) 2025-04-29 10:06:26 +01:00
dependabot[bot]
a87981f673 Bump actions/download-artifact from 4.2.1 to 4.3.0 (#18364) 2025-04-29 10:06:13 +01:00
dependabot[bot]
2ff977a6c3 Bump actions/add-to-project from 280af8ae1f83a494cfad2cb10f02f6d13529caa9 to 5b1a254a3546aef88e0a7724a77a623fa2e47c36 (#18365) 2025-04-29 10:05:55 +01:00
dependabot[bot]
1482ad1917 Bump sigstore/cosign-installer from 3.8.1 to 3.8.2 (#18366) 2025-04-29 10:05:43 +01:00
Erik Johnston
5b89c92643 Allow /rooms/ admin API to be on workers (#18360)
Tested by https://github.com/matrix-org/sytest/pull/1400
2025-04-25 15:18:22 +01:00
Erik Johnston
33824495ba Move GET /devices/ off main process (#18355)
We can't move PUT/DELETE as they do need to happen on main process (due
to notification of device changes).

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-04-25 15:08:33 +01:00
Devon Hudson
89cb613a4e Revert "Add total event, unencrypted message, and e2ee event counts to stats reporting" (#18346)
Reverts element-hq/synapse#18260

It is causing a failure when building release debs for `debian:bullseye`
with the following error:
```
sqlite3.OperationalError: near "RETURNING": syntax error
```
2025-04-16 16:41:41 +00:00
Devon Hudson
d67e9c5367 Update changelog 2025-04-16 07:19:27 -06:00
Devon Hudson
2b5c6239de Merge branch 'develop' into release-v1.129 2025-04-16 07:17:07 -06:00
Erik Johnston
c16a981f22 Fix query for room participation (#18345)
Follow on from #18068

Currently the subquery in the `UPDATE` is pointless, as it will still just
update all `room_membership` rows. Instead, we should look at the
current membership event ID (which is easily retrieved from
`local_current_membership`). We also add an `AND NOT participant` clause
to no-op the `UPDATE` when the `participant` flag is already set.

cc @H-Shay
2025-04-16 14:14:56 +01:00
Quentin Gliech
0046d7278b Fix ExternalIDReuse exception for concurrent transactions (#18342) 2025-04-16 07:34:58 +00:00
Devon Hudson
9b8eebbe4e Changelog tweaks 2025-04-15 11:12:04 -06:00
Devon Hudson
5ced4efe1d 1.129.0rc1 2025-04-15 10:48:32 -06:00
Quentin Gliech
2c7a61e311 Don't cache introspection failures (#18339) 2025-04-15 17:30:45 +02:00
Erik Johnston
45420b1d42 Fix force_tracing_for_users config when using MAS (#18334)
This is a copy of what we do for internal auth, and we should figure out
a way to deduplicate some of this stuff:


dd05cc55ee/synapse/api/auth/internal.py (L62-L110)
2025-04-15 16:02:27 +01:00
reivilibre
19b0e23c3d Fix the token introspection cache logging access tokens when MAS integration is in use. (#18335)
The `ResponseCache` logs keys by default.

Let's not do that for access tokens.

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-04-15 15:58:30 +01:00
Andrew Morgan
a832375bfb Add total event, unencrypted message, and e2ee event counts to stats reporting (#18260)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-04-15 07:49:08 -07:00
173 changed files with 5377 additions and 1656 deletions

View File

@@ -9,5 +9,4 @@
- End with either a period (.) or an exclamation mark (!).
- Start with a capital letter.
- Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
* [ ] [Code style](https://element-hq.github.io/synapse/latest/code_style.html) is correct
(run the [linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
* [ ] [Code style](https://element-hq.github.io/synapse/latest/code_style.html) is correct (run the [linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))

View File

@@ -30,7 +30,7 @@ jobs:
run: docker buildx inspect
- name: Install Cosign
uses: sigstore/cosign-installer@d7d6bc7722e3daa8354c50bcb52f4837da5e9b6a # v3.8.1
uses: sigstore/cosign-installer@3454372f43399081ed03b604cb2d021dabca52bb # v3.8.2
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
@@ -72,7 +72,7 @@ jobs:
- name: Build and push all platforms
id: build-and-push
uses: docker/build-push-action@471d1dc4e07e5cdedd4c2171150001c434f0b7a4 # v6.15.0
uses: docker/build-push-action@1dc73863535b631f98b2378be8619f83b136f4a0 # v6.17.0
with:
push: true
labels: |

View File

@@ -24,7 +24,7 @@ jobs:
mdbook-version: '0.4.17'
- name: Setup python
uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"

View File

@@ -64,7 +64,7 @@ jobs:
run: echo 'window.SYNAPSE_VERSION = "${{ needs.pre.outputs.branch-version }}";' > ./docs/website_files/version.js
- name: Setup python
uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"

View File

@@ -44,6 +44,6 @@ jobs:
- run: cargo fmt
continue-on-error: true
- uses: stefanzweifel/git-auto-commit-action@e348103e9026cc0eee72ae06630dbe30c8bf7a79 # v5.1.0
- uses: stefanzweifel/git-auto-commit-action@b863ae1933cb653a53c021fe36dbb774e1fb9403 # v5.2.0
with:
commit_message: "Attempt to fix linting"

View File

@@ -86,7 +86,7 @@ jobs:
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
postgres:${{ matrix.postgres-version }}
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
- run: pip install .[all,test]
@@ -200,7 +200,7 @@ jobs:
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod

View File

@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.x'
- run: pip install tomli

View File

@@ -28,7 +28,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.x'
- id: set-distros
@@ -74,7 +74,7 @@ jobs:
${{ runner.os }}-buildx-
- name: Set up python
uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.x'
@@ -132,7 +132,7 @@ jobs:
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
# setup-python@v4 doesn't impose a default python version. Need to use 3.x
# here, because `python` on osx points to Python 2.7.
@@ -177,7 +177,7 @@ jobs:
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.10'
@@ -203,7 +203,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Download all workflow run artifacts
uses: actions/download-artifact@95815c38cf2ff2164869cbab79da8d1f422bc89e # v4.2.1
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
- name: Build a tarball for the debs
# We need to merge all the debs uploads into one folder, then compress
# that.
@@ -213,7 +213,7 @@ jobs:
tar -cvJf debs.tar.xz debs
- name: Attach to release
# Pinned to work around https://github.com/softprops/action-gh-release/issues/445
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
uses: softprops/action-gh-release@c95fe1489396fe8a9eb87c0abf8aa5b2ef267fda # v0.1.15
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:

View File

@@ -102,7 +102,7 @@ jobs:
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
- run: "pip install 'click==8.1.1' 'GitPython>=3.1.20'"
@@ -112,7 +112,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
- run: .ci/scripts/check_lockfile.py
@@ -192,7 +192,7 @@ jobs:
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
- run: "pip install 'towncrier>=18.6.0rc1'"
@@ -279,7 +279,7 @@ jobs:
if: ${{ needs.changes.outputs.linting_readme == 'true' }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
- run: "pip install rstcheck"
@@ -327,7 +327,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
- id: get-matrix
@@ -414,7 +414,7 @@ jobs:
sudo apt-get -qq install build-essential libffi-dev python3-dev \
libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.9'
@@ -669,7 +669,7 @@ jobs:
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod

View File

@@ -11,7 +11,7 @@ jobs:
if: >
contains(github.event.issue.labels.*.name, 'X-Needs-Info')
steps:
- uses: actions/add-to-project@280af8ae1f83a494cfad2cb10f02f6d13529caa9 # main (v1.0.2 + 10 commits)
- uses: actions/add-to-project@5b1a254a3546aef88e0a7724a77a623fa2e47c36 # main (v1.0.2 + 10 commits)
id: add_project
with:
project-url: "https://github.com/orgs/matrix-org/projects/67"

View File

@@ -173,7 +173,7 @@ jobs:
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod

View File

@@ -1,3 +1,176 @@
# Synapse 1.131.0 (2025-06-03)
No significant changes since 1.131.0rc1.
# Synapse 1.131.0rc1 (2025-05-28)
### Features
- Add `msc4263_limit_key_queries_to_users_who_share_rooms` config option as per [MSC4263](https://github.com/matrix-org/matrix-spec-proposals/pull/4263). ([\#18180](https://github.com/element-hq/synapse/issues/18180))
- Add option to allow registrations that begin with `_`. Contributed by `_` (@hex5f). ([\#18262](https://github.com/element-hq/synapse/issues/18262))
- Include room ID in response to the [Room Deletion Status Admin API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#status-of-deleting-rooms). ([\#18318](https://github.com/element-hq/synapse/issues/18318))
- Add support for calling Policy Servers ([MSC4284](https://github.com/matrix-org/matrix-spec-proposals/pull/4284)) to mark events as spam. ([\#18387](https://github.com/element-hq/synapse/issues/18387))
### Bugfixes
- Prevent race-condition in `_maybe_retry_device_resync` entrance. ([\#18391](https://github.com/element-hq/synapse/issues/18391))
- Fix the `tests.handlers.test_worker_lock.WorkerLockTestCase.test_lock_contention` test which could spuriously time out on RISC-V architectures due to performance differences. ([\#18430](https://github.com/element-hq/synapse/issues/18430))
- Fix admin redaction endpoint not redacting encrypted messages. ([\#18434](https://github.com/element-hq/synapse/issues/18434))
### Improved Documentation
- Update `room_list_publication_rules` docs to consider defaults that changed in v1.126.0. Contributed by @HarHarLinks. ([\#18286](https://github.com/element-hq/synapse/issues/18286))
- Add advice for upgrading between major PostgreSQL versions to the database documentation. ([\#18445](https://github.com/element-hq/synapse/issues/18445))
### Internal Changes
- Fix a memory leak in `_NotifierUserStream`. ([\#18380](https://github.com/element-hq/synapse/issues/18380))
- Fix a couple type annotations in the `RootConfig`/`Config`. ([\#18409](https://github.com/element-hq/synapse/issues/18409))
- Explicitly enable PyPy builds in `cibuildwheel`'s config to avoid it being disabled on a future upgrade to `cibuildwheel` v3. ([\#18417](https://github.com/element-hq/synapse/issues/18417))
- Update the PR review template to remove an erroneous line break from the final bullet point. ([\#18419](https://github.com/element-hq/synapse/issues/18419))
- Explain why we `flush_buffer()` for Python `print(...)` output. ([\#18420](https://github.com/element-hq/synapse/issues/18420))
- Add lint to ensure we don't add a `CREATE/DROP INDEX` in a schema delta. ([\#18440](https://github.com/element-hq/synapse/issues/18440))
- Allow checking only for the existence of a field in an SSO provider's response, rather than requiring the value(s) to check. ([\#18454](https://github.com/element-hq/synapse/issues/18454))
- Add unit tests for homeserver usage statistics. ([\#18463](https://github.com/element-hq/synapse/issues/18463))
- Don't move invited users to new room when shutting down room. ([\#18471](https://github.com/element-hq/synapse/issues/18471))
### Updates to locked dependencies
* Bump actions/setup-python from 5.5.0 to 5.6.0. ([\#18398](https://github.com/element-hq/synapse/issues/18398))
* Bump authlib from 1.5.1 to 1.5.2. ([\#18452](https://github.com/element-hq/synapse/issues/18452))
* Bump docker/build-push-action from 6.15.0 to 6.17.0. ([\#18397](https://github.com/element-hq/synapse/issues/18397), [\#18449](https://github.com/element-hq/synapse/issues/18449))
* Bump lxml from 5.3.0 to 5.4.0. ([\#18480](https://github.com/element-hq/synapse/issues/18480))
* Bump mypy-zope from 1.0.9 to 1.0.11. ([\#18428](https://github.com/element-hq/synapse/issues/18428))
* Bump pyo3 from 0.23.5 to 0.24.2. ([\#18460](https://github.com/element-hq/synapse/issues/18460))
* Bump pyo3-log from 0.12.3 to 0.12.4. ([\#18453](https://github.com/element-hq/synapse/issues/18453))
* Bump pyopenssl from 25.0.0 to 25.1.0. ([\#18450](https://github.com/element-hq/synapse/issues/18450))
* Bump ruff from 0.7.3 to 0.11.11. ([\#18451](https://github.com/element-hq/synapse/issues/18451), [\#18482](https://github.com/element-hq/synapse/issues/18482))
* Bump tornado from 6.4.2 to 6.5.0. ([\#18459](https://github.com/element-hq/synapse/issues/18459))
* Bump setuptools from 72.1.0 to 78.1.1. ([\#18461](https://github.com/element-hq/synapse/issues/18461))
* Bump types-jsonschema from 4.23.0.20241208 to 4.23.0.20250516. ([\#18481](https://github.com/element-hq/synapse/issues/18481))
* Bump types-requests from 2.32.0.20241016 to 2.32.0.20250328. ([\#18427](https://github.com/element-hq/synapse/issues/18427))
# Synapse 1.130.0 (2025-05-20)
### Bugfixes
- Fix startup being blocked on creating a new index that was introduced in v1.130.0rc1. ([\#18439](https://github.com/element-hq/synapse/issues/18439))
- Fix the ordering of local messages in rooms that were affected by [GHSA-v56r-hwv5-mxg6](https://github.com/advisories/GHSA-v56r-hwv5-mxg6). ([\#18447](https://github.com/element-hq/synapse/issues/18447))
# Synapse 1.130.0rc1 (2025-05-13)
### Features
- Add an Admin API endpoint `GET /_synapse/admin/v1/scheduled_tasks` to fetch scheduled tasks. ([\#18214](https://github.com/element-hq/synapse/issues/18214))
- Add config option `user_directory.exclude_remote_users` which, when enabled, excludes remote users from user directory search results. ([\#18300](https://github.com/element-hq/synapse/issues/18300))
- Add support for handling `GET /devices/` on workers. ([\#18355](https://github.com/element-hq/synapse/issues/18355))
### Bugfixes
- Fix a longstanding bug where Synapse would immediately retry a failing push endpoint when a new event is received, ignoring any backoff timers. ([\#18363](https://github.com/element-hq/synapse/issues/18363))
- Pass leave from remote invite rejection down Sliding Sync. ([\#18375](https://github.com/element-hq/synapse/issues/18375))
### Updates to the Docker image
- In `configure_workers_and_start.py`, use the same absolute path of Python in the interpreter shebang, and invoke child Python processes with `sys.executable`. ([\#18291](https://github.com/element-hq/synapse/issues/18291))
- Optimize the build of the workers image. ([\#18292](https://github.com/element-hq/synapse/issues/18292))
- In `start_for_complement.sh`, replace some external program calls with shell builtins. ([\#18293](https://github.com/element-hq/synapse/issues/18293))
- When generating container scripts from templates, don't add a leading newline so that their shebangs may be handled correctly. ([\#18295](https://github.com/element-hq/synapse/issues/18295))
### Improved Documentation
- Improve formatting of the README file. ([\#18218](https://github.com/element-hq/synapse/issues/18218))
- Add documentation for configuring [Pocket ID](https://github.com/pocket-id/pocket-id) as an OIDC provider. ([\#18237](https://github.com/element-hq/synapse/issues/18237))
- Fix typo in docs about the `push` config option. Contributed by @HarHarLinks. ([\#18320](https://github.com/element-hq/synapse/issues/18320))
- Add `/_matrix/federation/v1/version` to list of federation endpoints that can be handled by workers. ([\#18377](https://github.com/element-hq/synapse/issues/18377))
- Add an Admin API endpoint `GET /_synapse/admin/v1/scheduled_tasks` to fetch scheduled tasks. ([\#18384](https://github.com/element-hq/synapse/issues/18384))
### Internal Changes
- Return specific error code when adding an email address / phone number to account is not supported ([MSC4178](https://github.com/matrix-org/matrix-spec-proposals/pull/4178)). ([\#17578](https://github.com/element-hq/synapse/issues/17578))
- Stop auto-provisioning missing users & devices when delegating auth to Matrix Authentication Service. Requires MAS 0.13.0 or later. ([\#18181](https://github.com/element-hq/synapse/issues/18181))
- Apply file hashing and existing quarantines to media downloaded for URL previews. ([\#18297](https://github.com/element-hq/synapse/issues/18297))
- Allow a few admin APIs used by matrix-authentication-service to run on workers. ([\#18313](https://github.com/element-hq/synapse/issues/18313))
- Apply `should_drop_federated_event` to federation invites. ([\#18330](https://github.com/element-hq/synapse/issues/18330))
- Allow `/rooms/` admin API to be run on workers. ([\#18360](https://github.com/element-hq/synapse/issues/18360))
- Minor performance improvements to the notifier. ([\#18367](https://github.com/element-hq/synapse/issues/18367))
- Slight performance increase when using the ratelimiter. ([\#18369](https://github.com/element-hq/synapse/issues/18369))
- Don't validate the `at_hash` (access token hash) field in OIDC ID Tokens if we don't end up actually using the OIDC Access Token. ([\#18374](https://github.com/element-hq/synapse/issues/18374), [\#18385](https://github.com/element-hq/synapse/issues/18385))
- Fixed test failures when using authlib 1.5.2. ([\#18390](https://github.com/element-hq/synapse/issues/18390))
- Refactor [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Simplified Sliding Sync room list tests to cover both new and fallback logic paths. ([\#18399](https://github.com/element-hq/synapse/issues/18399))
### Updates to locked dependencies
* Bump actions/add-to-project from 280af8ae1f83a494cfad2cb10f02f6d13529caa9 to 5b1a254a3546aef88e0a7724a77a623fa2e47c36. ([\#18365](https://github.com/element-hq/synapse/issues/18365))
* Bump actions/download-artifact from 4.2.1 to 4.3.0. ([\#18364](https://github.com/element-hq/synapse/issues/18364))
* Bump actions/setup-go from 5.4.0 to 5.5.0. ([\#18426](https://github.com/element-hq/synapse/issues/18426))
* Bump anyhow from 1.0.97 to 1.0.98. ([\#18336](https://github.com/element-hq/synapse/issues/18336))
* Bump packaging from 24.2 to 25.0. ([\#18393](https://github.com/element-hq/synapse/issues/18393))
* Bump pillow from 11.1.0 to 11.2.1. ([\#18429](https://github.com/element-hq/synapse/issues/18429))
* Bump pydantic from 2.10.3 to 2.11.4. ([\#18394](https://github.com/element-hq/synapse/issues/18394))
* Bump pyo3-log from 0.12.2 to 0.12.3. ([\#18317](https://github.com/element-hq/synapse/issues/18317))
* Bump pyopenssl from 24.3.0 to 25.0.0. ([\#18315](https://github.com/element-hq/synapse/issues/18315))
* Bump sha2 from 0.10.8 to 0.10.9. ([\#18395](https://github.com/element-hq/synapse/issues/18395))
* Bump sigstore/cosign-installer from 3.8.1 to 3.8.2. ([\#18366](https://github.com/element-hq/synapse/issues/18366))
* Bump softprops/action-gh-release from 1 to 2. ([\#18264](https://github.com/element-hq/synapse/issues/18264))
* Bump stefanzweifel/git-auto-commit-action from 5.1.0 to 5.2.0. ([\#18354](https://github.com/element-hq/synapse/issues/18354))
* Bump txredisapi from 1.4.10 to 1.4.11. ([\#18392](https://github.com/element-hq/synapse/issues/18392))
* Bump types-jsonschema from 4.23.0.20240813 to 4.23.0.20241208. ([\#18305](https://github.com/element-hq/synapse/issues/18305))
* Bump types-psycopg2 from 2.9.21.20250121 to 2.9.21.20250318. ([\#18316](https://github.com/element-hq/synapse/issues/18316))
# Synapse 1.129.0 (2025-05-06)
No significant changes since 1.129.0rc2.
# Synapse 1.129.0rc2 (2025-04-30)
Synapse 1.129.0rc1 was never formally released due to regressions discovered during the release process. 1.129.0rc2 fixes those regressions by reverting the affected PRs.
### Internal Changes
- Revert the slow background update introduced by [\#18068](https://github.com/element-hq/synapse/issues/18068) in v1.128.0. ([\#18372](https://github.com/element-hq/synapse/issues/18372))
- Revert "Add total event, unencrypted message, and e2ee event counts to stats reporting", added in v1.129.0rc1. ([\#18373](https://github.com/element-hq/synapse/issues/18373))
# Synapse 1.129.0rc1 (2025-04-15)
### Features
- Add `passthrough_authorization_parameters` in OIDC configuration to allow passing parameters to the authorization grant URL. ([\#18232](https://github.com/element-hq/synapse/issues/18232))
- Add `total_event_count`, `total_message_count`, and `total_e2ee_event_count` fields to the homeserver usage statistics. ([\#18260](https://github.com/element-hq/synapse/issues/18260))
### Bugfixes
- Fix `force_tracing_for_users` config when using delegated auth. ([\#18334](https://github.com/element-hq/synapse/issues/18334))
- Fix the token introspection cache logging access tokens when MAS integration is in use. ([\#18335](https://github.com/element-hq/synapse/issues/18335))
- Stop caching introspection failures when delegating auth to MAS. ([\#18339](https://github.com/element-hq/synapse/issues/18339))
- Fix `ExternalIDReuse` exception after migrating to MAS on workers with a high traffic. ([\#18342](https://github.com/element-hq/synapse/issues/18342))
- Fix minor performance regression caused by tracking of room participation. Regressed in v1.128.0. ([\#18345](https://github.com/element-hq/synapse/issues/18345))
### Updates to the Docker image
- Optimize the build of the complement-synapse image. ([\#18294](https://github.com/element-hq/synapse/issues/18294))
### Internal Changes
- Disable statement timeout during room purge. ([\#18133](https://github.com/element-hq/synapse/issues/18133))
- Add cache to storage functions used to auth requests when using delegated auth. ([\#18337](https://github.com/element-hq/synapse/issues/18337))
# Synapse 1.128.0 (2025-04-08)
No significant changes since 1.128.0rc1.

Cargo.lock (generated, 40 lines changed)
View File

@@ -13,9 +13,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.97"
version = "1.0.98"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dcfed56ad506cb2c684a14971b8861fdc3baaaae314b9e5f9bb532cbe3ba7a4f"
checksum = "e16d2d3311acee920a9eb8d33b8cbc1787ce4a264e85f964c2404b969bdcd487"
[[package]]
name = "arc-swap"
@@ -277,9 +277,9 @@ dependencies = [
[[package]]
name = "pyo3"
version = "0.23.5"
version = "0.24.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7778bffd85cf38175ac1f545509665d0b9b92a198ca7941f131f85f7a4f9a872"
checksum = "e5203598f366b11a02b13aa20cab591229ff0a89fd121a308a5df751d5fc9219"
dependencies = [
"anyhow",
"cfg-if",
@@ -296,9 +296,9 @@ dependencies = [
[[package]]
name = "pyo3-build-config"
version = "0.23.5"
version = "0.24.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94f6cbe86ef3bf18998d9df6e0f3fc1050a8c5efa409bf712e661a4366e010fb"
checksum = "99636d423fa2ca130fa5acde3059308006d46f98caac629418e53f7ebb1e9999"
dependencies = [
"once_cell",
"target-lexicon",
@@ -306,9 +306,9 @@ dependencies = [
[[package]]
name = "pyo3-ffi"
version = "0.23.5"
version = "0.24.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9f1b4c431c0bb1c8fb0a338709859eed0d030ff6daa34368d3b152a63dfdd8d"
checksum = "78f9cf92ba9c409279bc3305b5409d90db2d2c22392d443a87df3a1adad59e33"
dependencies = [
"libc",
"pyo3-build-config",
@@ -316,9 +316,9 @@ dependencies = [
[[package]]
name = "pyo3-log"
version = "0.12.2"
version = "0.12.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4b78e4983ba15bc62833a0e0941d965bc03690163f1127864f1408db25063466"
checksum = "45192e5e4a4d2505587e27806c7b710c231c40c56f3bfc19535d0bb25df52264"
dependencies = [
"arc-swap",
"log",
@@ -327,9 +327,9 @@ dependencies = [
[[package]]
name = "pyo3-macros"
version = "0.23.5"
version = "0.24.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fbc2201328f63c4710f68abdf653c89d8dbc2858b88c5d88b0ff38a75288a9da"
checksum = "0b999cb1a6ce21f9a6b147dcf1be9ffedf02e0043aec74dc390f3007047cecd9"
dependencies = [
"proc-macro2",
"pyo3-macros-backend",
@@ -339,9 +339,9 @@ dependencies = [
[[package]]
name = "pyo3-macros-backend"
version = "0.23.5"
version = "0.24.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fca6726ad0f3da9c9de093d6f116a93c1a38e417ed73bf138472cf4064f72028"
checksum = "822ece1c7e1012745607d5cf0bcb2874769f0f7cb34c4cde03b9358eb9ef911a"
dependencies = [
"heck",
"proc-macro2",
@@ -352,9 +352,9 @@ dependencies = [
[[package]]
name = "pythonize"
version = "0.23.0"
version = "0.24.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91a6ee7a084f913f98d70cdc3ebec07e852b735ae3059a1500db2661265da9ff"
checksum = "d5bcac0d0b71821f0d69e42654f1e15e5c94b85196446c4de9588951a2117e7b"
dependencies = [
"pyo3",
"serde",
@@ -480,9 +480,9 @@ dependencies = [
[[package]]
name = "sha2"
version = "0.10.8"
version = "0.10.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "793db75ad2bcafc3ffa7c68b215fee268f537982cd901d132f89c6343f3a3dc8"
checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283"
dependencies = [
"cfg-if",
"cpufeatures",
@@ -532,9 +532,9 @@ dependencies = [
[[package]]
name = "target-lexicon"
version = "0.12.14"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e1fc403891a21bcfb7c37834ba66a547a8f402146eba7265b5a6d88059c9ff2f"
checksum = "e502f78cdbb8ba4718f566c418c52bc729126ffd16baee5baa718cf25dd5a69a"
[[package]]
name = "typenum"

View File

@@ -253,15 +253,17 @@ Alongside all that, join our developer community on Matrix:
Copyright and Licensing
=======================
Copyright 2014-2017 OpenMarket Ltd
Copyright 2017 Vector Creations Ltd
Copyright 2017-2025 New Vector Ltd
| Copyright 2014-2017 OpenMarket Ltd
| Copyright 2017 Vector Creations Ltd
| Copyright 2017-2025 New Vector Ltd
|
This software is dual-licensed by New Vector Ltd (Element). It can be used either:
(1) for free under the terms of the GNU Affero General Public License (as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version); OR
(2) under the terms of a paid-for Element Commercial License agreement between you and Element (the terms of which may vary depending on what you and Element have agreed to).
Unless required by applicable law or agreed to in writing, software distributed under the Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.

View File

@@ -1 +0,0 @@
Disable statement timeout during room purge.

View File

@@ -1 +0,0 @@
Add `passthrough_authorization_parameters` in OIDC configuration to allow to pass parameters to the authorization grant URL.

View File

@@ -1 +0,0 @@
Optimize the build of the complement-synapse image.

View File

@@ -1 +0,0 @@
Add cache to storage functions used to auth requests when using delegated auth.

debian/changelog (vendored, 42 lines changed)
View File

@@ -1,3 +1,45 @@
matrix-synapse-py3 (1.131.0) stable; urgency=medium
* New Synapse release 1.131.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Jun 2025 14:36:55 +0100
matrix-synapse-py3 (1.131.0~rc1) stable; urgency=medium
* New synapse release 1.131.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 28 May 2025 10:25:44 +0000
matrix-synapse-py3 (1.130.0) stable; urgency=medium
* New Synapse release 1.130.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 May 2025 08:34:13 -0600
matrix-synapse-py3 (1.130.0~rc1) stable; urgency=medium
* New Synapse release 1.130.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 13 May 2025 10:44:04 +0100
matrix-synapse-py3 (1.129.0) stable; urgency=medium
* New Synapse release 1.129.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 06 May 2025 12:22:11 +0100
matrix-synapse-py3 (1.129.0~rc2) stable; urgency=medium
* New synapse release 1.129.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 30 Apr 2025 13:13:16 +0000
matrix-synapse-py3 (1.129.0~rc1) stable; urgency=medium
* New Synapse release 1.129.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Apr 2025 10:47:43 -0600
matrix-synapse-py3 (1.128.0) stable; urgency=medium
* New Synapse release 1.128.0.

View File

@@ -3,18 +3,37 @@
ARG SYNAPSE_VERSION=latest
ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
ARG DEBIAN_VERSION=bookworm
ARG PYTHON_VERSION=3.12
# first of all, we create a base image with an nginx which we can copy into the
# first of all, we create a base image with dependencies which we can copy into the
# target image. For repeated rebuilds, this is much faster than apt installing
# each time.
FROM docker.io/library/debian:${DEBIAN_VERSION}-slim AS deps_base
FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base
# Tell apt to keep downloaded package files, as we're using cache mounts.
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
redis-server nginx-light
nginx-light
RUN \
# remove default page
rm /etc/nginx/sites-enabled/default && \
# have nginx log to stderr/out
ln -sf /dev/stdout /var/log/nginx/access.log && \
ln -sf /dev/stderr /var/log/nginx/error.log
# --link-mode=copy silences a warning as uv isn't able to do hardlinks between its cache
# (mounted as --mount=type=cache) and the target directory.
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --link-mode=copy --prefix="/uv/usr/local" supervisor~=4.2
RUN mkdir -p /uv/etc/supervisor/conf.d
# Similarly, a base to copy the redis server from.
#
@@ -27,31 +46,16 @@ FROM docker.io/library/redis:7-${DEBIAN_VERSION} AS redis_base
# now build the final image, based on the regular Synapse docker image
FROM $FROM
# Install supervisord with uv pip instead of apt, to avoid installing a second
# copy of python.
# --link-mode=copy silences a warning as uv isn't able to do hardlinks between its cache
# (mounted as --mount=type=cache) and the target directory.
RUN \
--mount=type=bind,from=ghcr.io/astral-sh/uv:0.6.8,source=/uv,target=/uv \
--mount=type=cache,target=/root/.cache/uv \
/uv pip install --link-mode=copy --prefix="/usr/local" supervisor~=4.2
RUN mkdir -p /etc/supervisor/conf.d
# Copy over redis and nginx
# Copy over dependencies
COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin
COPY --from=deps_base /uv /
COPY --from=deps_base /usr/sbin/nginx /usr/sbin
COPY --from=deps_base /usr/share/nginx /usr/share/nginx
COPY --from=deps_base /usr/lib/nginx /usr/lib/nginx
COPY --from=deps_base /etc/nginx /etc/nginx
RUN rm /etc/nginx/sites-enabled/default
RUN mkdir /var/log/nginx /var/lib/nginx
RUN chown www-data /var/lib/nginx
# have nginx log to stderr/out
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
COPY --from=deps_base /var/log/nginx /var/log/nginx
# chown to allow non-root user to write to http-*-temp-path dirs
COPY --from=deps_base --chown=www-data:root /var/lib/nginx /var/lib/nginx
# Copy Synapse worker, nginx and supervisord configuration template files
COPY ./docker/conf-workers/* /conf/
@@ -70,4 +74,4 @@ FROM $FROM
# Replace the healthcheck with one which checks *all* the workers. The script
# is generated by configure_workers_and_start.py.
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
CMD /bin/sh /healthcheck.sh
CMD ["/healthcheck.sh"]

View File

@@ -58,4 +58,4 @@ ENTRYPOINT ["/start_for_complement.sh"]
# Update the healthcheck to have a shorter check interval
HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
CMD /bin/sh /healthcheck.sh
CMD ["/healthcheck.sh"]

View File

@@ -9,7 +9,7 @@ echo " Args: $*"
echo " Env: SYNAPSE_COMPLEMENT_DATABASE=$SYNAPSE_COMPLEMENT_DATABASE SYNAPSE_COMPLEMENT_USE_WORKERS=$SYNAPSE_COMPLEMENT_USE_WORKERS SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR=$SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR"
function log {
d=$(date +"%Y-%m-%d %H:%M:%S,%3N")
d=$(printf '%(%Y-%m-%d %H:%M:%S)T,%.3s\n' ${EPOCHREALTIME/./ })
echo "$d $*"
}
@@ -103,12 +103,11 @@ fi
# Note that both the key and certificate are in PEM format (not DER).
# First generate a configuration file to set up a Subject Alternative Name.
cat > /conf/server.tls.conf <<EOF
echo "\
.include /etc/ssl/openssl.cnf
[SAN]
subjectAltName=DNS:${SERVER_NAME}
EOF
subjectAltName=DNS:${SERVER_NAME}" > /conf/server.tls.conf
# Generate an RSA key
openssl genrsa -out /conf/server.tls.key 2048
@@ -123,8 +122,8 @@ openssl x509 -req -in /conf/server.tls.csr \
-out /conf/server.tls.crt -extfile /conf/server.tls.conf -extensions SAN
# Assert that we have a Subject Alternative Name in the certificate.
# (grep will exit with 1 here if there isn't a SAN in the certificate.)
openssl x509 -in /conf/server.tls.crt -noout -text | grep DNS:
# (the test will exit with 1 here if there isn't a SAN in the certificate.)
[[ $(openssl x509 -in /conf/server.tls.crt -noout -text) == *DNS:* ]]
export SYNAPSE_TLS_CERT=/conf/server.tls.crt
export SYNAPSE_TLS_KEY=/conf/server.tls.key

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/local/bin/python
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
@@ -202,6 +202,7 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"app": "synapse.app.generic_worker",
"listener_resources": ["federation"],
"endpoint_patterns": [
"^/_matrix/federation/v1/version$",
"^/_matrix/federation/(v1|v2)/event/",
"^/_matrix/federation/(v1|v2)/state/",
"^/_matrix/federation/(v1|v2)/state_ids/",
@@ -351,6 +352,11 @@ def error(txt: str) -> NoReturn:
def flush_buffers() -> None:
"""
Python's `print()` buffers output by default, typically waiting until ~8KB
accumulates. This method can be used to flush the buffers so we can see the output
of any print statements so far.
"""
sys.stdout.flush()
sys.stderr.flush()
@@ -376,9 +382,11 @@ def convert(src: str, dst: str, **template_vars: object) -> None:
#
# We use append mode in case the files have already been written to by something else
# (for instance, as part of the instructions in a dockerfile).
exists = os.path.isfile(dst)
with open(dst, "a") as outfile:
# In case the existing file doesn't end with a newline
outfile.write("\n")
if exists:
outfile.write("\n")
outfile.write(rendered)
@@ -604,7 +612,7 @@ def generate_base_homeserver_config() -> None:
# start.py already does this for us, so just call that.
# note that this script is copied in by the official, monolith dockerfile
os.environ["SYNAPSE_HTTP_PORT"] = str(MAIN_PROCESS_HTTP_LISTENER_PORT)
subprocess.run(["/usr/local/bin/python", "/start.py", "migrate_config"], check=True)
subprocess.run([sys.executable, "/start.py", "migrate_config"], check=True)
def parse_worker_types(
@@ -998,6 +1006,7 @@ def generate_worker_files(
"/healthcheck.sh",
healthcheck_urls=healthcheck_urls,
)
os.chmod("/healthcheck.sh", 0o755)
# Ensure the logging directory exists
log_dir = data_dir + "/logs"

View File

@@ -22,6 +22,11 @@ def error(txt: str) -> NoReturn:
def flush_buffers() -> None:
"""
Python's `print()` buffers output by default, typically waiting until ~8KB
accumulates. This method can be used to flush the buffers so we can see the output
of any print statements so far.
"""
sys.stdout.flush()
sys.stderr.flush()

View File

@@ -794,6 +794,7 @@ A response body like the following is returned:
"results": [
{
"delete_id": "delete_id1",
"room_id": "!roomid:example.com",
"status": "failed",
"error": "error message",
"shutdown_room": {
@@ -804,6 +805,7 @@ A response body like the following is returned:
}
}, {
"delete_id": "delete_id2",
"room_id": "!roomid:example.com",
"status": "purging",
"shutdown_room": {
"kicked_users": [
@@ -842,6 +844,8 @@ A response body like the following is returned:
```json
{
"status": "purging",
"delete_id": "bHkCNQpHqOaFhPtK",
"room_id": "!roomid:example.com",
"shutdown_room": {
"kicked_users": [
"@foobar:example.com"
@@ -869,7 +873,8 @@ The following fields are returned in the JSON response body:
- `results` - An array of objects, each containing information about one task.
This field is omitted from the result when you query by `delete_id`.
Task objects contain the following fields:
- `delete_id` - The ID for this purge if you query by `room_id`.
- `delete_id` - The ID for this purge
- `room_id` - The ID of the room being deleted
- `status` - The status will be one of:
- `shutting_down` - The process is removing users from the room.
- `purging` - The process is purging the room and event data from database.
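For illustration, the status of a purge can be polled by `delete_id`; a minimal sketch using `requests` (the endpoint path follows the Room Deletion Status Admin API docs; the base URL and token are placeholders):

```python
import requests

BASE_URL = "https://synapse.example.com"  # assumption: your homeserver
TOKEN = "<admin access token>"            # assumption: a server admin's token

resp = requests.get(
    f"{BASE_URL}/_synapse/admin/v2/rooms/delete_status/bHkCNQpHqOaFhPtK",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
body = resp.json()
# As of this change, the response also identifies the room being deleted.
print(body["status"], body.get("room_id"))
```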

View File

@@ -0,0 +1,54 @@
# Show scheduled tasks
This API returns information about scheduled tasks.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).
The API is:
```
GET /_synapse/admin/v1/scheduled_tasks
```
It returns a JSON body like the following:
```json
{
"scheduled_tasks": [
{
"id": "GSA124oegf1",
"action": "shutdown_room",
"status": "complete",
"timestamp_ms": 23423523,
"resource_id": "!roomid",
"result": "some result",
"error": null
}
]
}
```
**Query parameters:**
* `action_name`: string - Is optional. Returns only the scheduled tasks with the given action name.
* `resource_id`: string - Is optional. Returns only the scheduled tasks with the given resource id.
* `status`: string - Is optional. Returns only the scheduled tasks matching the given status, one of
- "scheduled" - Task is scheduled but not active
- "active" - Task is active and probably running; if not, it will be run on the next scheduler loop run
- "complete" - Task has completed successfully
- "failed" - Task is over and either returned a failed status, or had an exception
* `max_timestamp`: int - Is optional. Returns only the scheduled tasks with a timestamp earlier than the specified one.
**Response**
The following fields are returned in the JSON response body along with a `200` HTTP status code:
* `id`: string - ID of scheduled task.
* `action`: string - The name of the scheduled task's action.
* `status`: string - The status of the scheduled task.
* `timestamp_ms`: integer - The timestamp (in milliseconds since the unix epoch) of the given task - If the status is "scheduled" then this represents when it should be launched.
Otherwise it represents the last time this task got a change of state.
* `resource_id`: Optional string - The resource id of the scheduled task, if it possesses one
* `result`: Optional Json - Any result of the scheduled task, if given
* `error`: Optional string - If the task has the status "failed", the error associated with this failure
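
A minimal sketch of calling this endpoint with `requests` (hypothetical homeserver URL and token), filtering for failed `shutdown_room` tasks:

```python
import requests

BASE_URL = "https://synapse.example.com"  # assumption: your homeserver
TOKEN = "<admin access token>"            # assumption: a server admin's token

resp = requests.get(
    f"{BASE_URL}/_synapse/admin/v1/scheduled_tasks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"action_name": "shutdown_room", "status": "failed"},
)
resp.raise_for_status()
for task in resp.json()["scheduled_tasks"]:
    print(task["id"], task["timestamp_ms"], task.get("error"))
```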

View File

@@ -353,6 +353,8 @@ callback returns `False`, Synapse falls through to the next one. The value of th
callback that does not return `False` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
Note that this check is applied to federation invites as of Synapse v1.130.0.
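A minimal sketch of a module registering this callback via the module API (the names follow the documented spam-checker interface; treat the filtering rule itself as illustrative):

```python
from synapse.events import EventBase
from synapse.module_api import ModuleApi


class DropBadFederatedEvents:
    def __init__(self, config: dict, api: ModuleApi):
        api.register_spam_checker_callbacks(
            should_drop_federated_event=self.should_drop_federated_event,
        )

    async def should_drop_federated_event(self, event: EventBase) -> bool:
        # Returning True silently drops the federated event; returning
        # False falls through to the next registered callback. As of
        # v1.130.0 this also runs for federation invites.
        return event.sender.endswith(":spam.example.org")
```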
### `check_login_for_spam`

View File

@@ -23,6 +23,7 @@ such as [Github][github-idp].
[auth0]: https://auth0.com/
[authentik]: https://goauthentik.io/
[lemonldap]: https://lemonldap-ng.org/
[pocket-id]: https://pocket-id.org/
[okta]: https://www.okta.com/
[dex-idp]: https://github.com/dexidp/dex
[keycloak-idp]: https://www.keycloak.org/docs/latest/server_admin/#sso-protocols
@@ -624,6 +625,32 @@ oidc_providers:
Note that the fields `client_id` and `client_secret` are taken from the CURL response above.
### Pocket ID
[Pocket ID][pocket-id] is a simple OIDC provider that allows users to authenticate with their passkeys.
1. Go to `OIDC Clients`
2. Click on `Add OIDC Client`
3. Add a name, for example `Synapse`
4. Add `"https://auth.example.org/_synapse/client/oidc/callback` to `Callback URLs` # Replace `auth.example.org` with your domain
5. Click on `Save`
6. Note down your `Client ID` and `Client secret`, these will be used later
Synapse config:
```yaml
oidc_providers:
- idp_id: pocket_id
idp_name: Pocket ID
issuer: "https://auth.example.org/" # Replace with your domain
client_id: "your-client-id" # Replace with the "Client ID" you noted down before
client_secret: "your-client-secret" # Replace with the "Client secret" you noted down before
scopes: ["openid", "profile"]
user_mapping_provider:
config:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
```
### Shibboleth with OIDC Plugin
[Shibboleth](https://www.shibboleth.net/) is an open Standard IdP solution widely used by Universities.

View File

@@ -100,6 +100,14 @@ database:
keepalives_count: 3
```
## PostgreSQL major version upgrades
Postgres uses separate directories for database locations between major versions (typically `/var/lib/postgresql/<version>/main`).
Therefore, it is recommended to stop Synapse and other services (MAS, etc.) before upgrading Postgres major versions.
It is also strongly recommended to [back up](./usage/administration/backups.md#database) your database beforehand to ensure no data loss arising from a failed upgrade.
## Backups
Don't forget to [back up](./usage/administration/backups.md#database) your database!

View File

@@ -117,6 +117,16 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
# Upgrading to v1.130.0
## Documented endpoint which can be delegated to a federation worker
The endpoint `^/_matrix/federation/v1/version$` can be delegated to a federation
worker. This is not new behaviour, but had not been documented yet. The
[list of delegatable endpoints](workers.md#synapseappgeneric_worker) has
been updated to include it. Make sure to check your reverse proxy rules if you
are using workers.
# Upgrading to v1.126.0
## Room list publication rules change

View File

@@ -30,7 +30,7 @@ The following statistics are sent to the configured reporting endpoint:
| `python_version` | string | The Python version number in use (e.g "3.7.1"). Taken from `sys.version_info`. |
| `total_users` | int | The number of registered users on the homeserver. |
| `total_nonbridged_users` | int | The number of users, excluding those created by an Application Service. |
| `daily_user_type_native` | int | The number of native users created in the last 24 hours. |
| `daily_user_type_native` | int | The number of native, non-guest users created in the last 24 hours. |
| `daily_user_type_guest` | int | The number of guest users created in the last 24 hours. |
| `daily_user_type_bridged` | int | The number of users created by Application Services in the last 24 hours. |
| `total_room_count` | int | The total number of rooms present on the homeserver. |
@@ -50,8 +50,8 @@ The following statistics are sent to the configured reporting endpoint:
| `cache_factor` | int | The configured [`global factor`](../../configuration/config_documentation.md#caching) value for caching. |
| `event_cache_size` | int | The configured [`event_cache_size`](../../configuration/config_documentation.md#caching) value for caching. |
| `database_engine` | string | The database engine that is in use. Either "psycopg2" meaning PostgreSQL is in use, or "sqlite3" for SQLite3. |
| `database_server_version` | string | The version of the database server. Examples being "10.10" for PostgreSQL server version 10.10, and "3.38.5" for SQLite 3.38.5 installed on the system. |
| `log_level` | string | The log level in use. Examples are "INFO", "WARNING", "ERROR", "DEBUG", etc. |
[^1]: Native matrix users and guests are always counted. If the

View File

@@ -2887,6 +2887,20 @@ Example configuration:
inhibit_user_in_use_error: true
```
---
### `allow_underscore_prefixed_registration`
Whether users are allowed to register with an underscore-prefixed localpart.
By default, AppServices use prefixes like `_example` to namespace their
associated ghost users. If turned on, this may result in clashes or confusion.
Useful when provisioning users from an external identity provider.
Defaults to false.
Example configuration:
```yaml
allow_underscore_prefixed_registration: false
```
---
## User session management
---
### `session_lifetime`
@@ -3768,17 +3782,23 @@ match particular values in the OIDC userinfo. The requirements can be listed und
```yaml
attribute_requirements:
- attribute: family_name
value: "Stephensson"
one_of: ["Stephensson", "Smith"]
- attribute: groups
value: "admin"
# If neither `value` nor `one_of` is specified, the attribute only needs
# to exist, regardless of value.
- attribute: picture
```
`attribute` is a required field, while `value` and `one_of` are optional.
All of the listed attributes must match for the login to be permitted. Additional attributes can be added to
userinfo by expanding the `scopes` section of the OIDC config to retrieve
additional information from the OIDC provider.
If the OIDC claim is a list, then the attribute must match any value in the list.
Otherwise, it must exactly match the value of the claim. Using the example
above, the `family_name` claim MUST be "Stephensson", but the `groups`
above, the `family_name` claim MUST be either "Stephensson" or "Smith", but the `groups`
claim MUST contain "admin".
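To make the matching rules concrete, here is a minimal standalone sketch of the semantics described above (a hypothetical helper for illustration, not Synapse's actual implementation; `claims` stands in for the parsed userinfo):

```python
from typing import Any, List, Mapping, Optional

def attribute_requirement_met(
    claims: Mapping[str, Any],
    attribute: str,
    value: Optional[str] = None,
    one_of: Optional[List[str]] = None,
) -> bool:
    """List claims match if they contain an accepted value; scalar claims must
    equal one; with neither `value` nor `one_of`, existence alone suffices."""
    if attribute not in claims:
        return False
    accepted = one_of if one_of is not None else ([value] if value is not None else None)
    if accepted is None:
        return True  # the attribute only needs to exist
    claim = claims[attribute]
    if isinstance(claim, list):
        return any(v in claim for v in accepted)
    return claim in accepted

# Mirroring the example above: family_name must be Stephensson or Smith,
# groups must contain "admin", and picture must merely exist.
claims = {"family_name": "Smith", "groups": ["admin", "staff"], "picture": "mxc://example"}
assert attribute_requirement_met(claims, "family_name", one_of=["Stephensson", "Smith"])
assert attribute_requirement_met(claims, "groups", value="admin")
assert attribute_requirement_met(claims, "picture")
```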
Example configuration:
@@ -4018,7 +4038,7 @@ This option has a number of sub-options. They are as follows:
* `include_content`: Clients requesting push notifications can either have the body of
the message sent in the notification poke along with other details
like the sender, or just the event ID and room ID (`event_id_only`).
If clients choose the to have the body sent, this option controls whether the
If clients choose to have the body sent, this option controls whether the
notification request includes the content of the event (other details
like the sender are still included). If `event_id_only` is enabled, it
has no effect.
@@ -4095,6 +4115,7 @@ This option has the following sub-options:
* `prefer_local_users`: Defines whether to prefer local users in search query results.
If set to true, local users are more likely to appear above remote users when searching the
user directory. Defaults to false.
* `exclude_remote_users`: If set to true, the search will only return local users. Defaults to false.
* `show_locked_users`: Defines whether to show locked users in search query results. Defaults to false.
Example configuration:
@@ -4103,6 +4124,7 @@ user_directory:
enabled: false
search_all_users: true
prefer_local_users: true
exclude_remote_users: false
show_locked_users: true
```
---
@@ -4329,23 +4351,11 @@ room list by default_
Example configuration:
```yaml
# No rule list specified. Anyone may publish any room to the public list.
# No rule list specified. No one may publish any room to the public list, except server admins.
# This is the default behaviour.
room_list_publication_rules:
```
```yaml
# A list of one rule which allows everything.
# This has the same effect as the previous example.
room_list_publication_rules:
- "action": "allow"
```
```yaml
# An empty list of rules. No-one may publish to the room list.
room_list_publication_rules: []
```
```yaml
# A list of one rule which denies everything.
# This has the same effect as the previous example.
@@ -4353,6 +4363,19 @@ room_list_publication_rules:
- "action": "deny"
```
```yaml
# An empty list of rules.
# This has the same effect as the previous example.
room_list_publication_rules: []
```
```yaml
# A list of one rule which allows everything.
# This was the default behaviour pre v1.126.0.
room_list_publication_rules:
- "action": "allow"
```
```yaml
# Prevent a specific user from publishing rooms.
# Allow other users to publish anything.

View File

@@ -200,6 +200,7 @@ information.
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
# Federation requests
^/_matrix/federation/v1/version$
^/_matrix/federation/v1/event/
^/_matrix/federation/v1/state/
^/_matrix/federation/v1/state_ids/
@@ -249,6 +250,7 @@ information.
^/_matrix/client/(api/v1|r0|v3|unstable)/directory/room/.*$
^/_matrix/client/(r0|v3|unstable)/capabilities$
^/_matrix/client/(r0|v3|unstable)/notifications$
^/_synapse/admin/v1/rooms/
# Encryption requests
^/_matrix/client/(r0|v3|unstable)/keys/query$
@@ -280,6 +282,7 @@ Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
^/_matrix/client/unstable/org.matrix.msc4140/delayed_events
^/_matrix/client/(api/v1|r0|v3|unstable)/devices/
# Account data requests
^/_matrix/client/(r0|v3|unstable)/.*/tags
@@ -320,6 +323,15 @@ For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530) and
[#9427](https://github.com/matrix-org/synapse/issues/9427).
Additionally, when MSC3861 is enabled (`experimental_features.msc3861.enabled`
set to `true`), the following endpoints can be handled by the worker:
^/_synapse/admin/v2/users/[^/]+$
^/_synapse/admin/v1/username_available$
^/_synapse/admin/v1/users/[^/]+/_allow_cross_signing_replacement_without_uia$
# Only the GET method:
^/_synapse/admin/v1/users/[^/]+/devices$
Note that an [HTTP listener](usage/configuration/config_documentation.md#listeners)
with `client` and `federation` `resources` must be configured in the
[`worker_listeners`](usage/configuration/config_documentation.md#worker_listeners)

poetry.lock (generated, 804 lines): diff suppressed because it is too large.

View File

@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.128.0"
version = "1.131.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -320,7 +320,7 @@ all = [
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevent merge conflicts when running a batch of dependabot updates.
ruff = "0.7.3"
ruff = "0.11.11"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"
@@ -385,6 +385,9 @@ build-backend = "poetry.core.masonry.api"
# - PyPy on Aarch64 and musllinux on aarch64: too slow to build.
# c.f. https://github.com/matrix-org/synapse/pull/14259
skip = "cp36* cp37* cp38* pp37* pp38* *-musllinux_i686 pp*aarch64 *-musllinux_aarch64"
# Enable non-default builds.
# "pypy" used to be included by default up until cibuildwheel 3.
enable = "pypy"
# We need a rust compiler.
#

View File

@@ -30,14 +30,14 @@ http = "1.1.0"
lazy_static = "1.4.0"
log = "0.4.17"
mime = "0.3.17"
pyo3 = { version = "0.23.5", features = [
pyo3 = { version = "0.24.2", features = [
"macros",
"anyhow",
"abi3",
"abi3-py39",
] }
pyo3-log = "0.12.0"
pythonize = "0.23.0"
pythonize = "0.24.0"
regex = "1.6.0"
sha2 = "0.10.8"
serde = { version = "1.0.144", features = ["derive"] }

View File

@@ -1,6 +1,8 @@
#!/usr/bin/env python3
# Check that no schema deltas have been added to the wrong version.
#
# Also checks that schema deltas do not try and create or drop indices.
import re
from typing import Any, Dict, List
@@ -9,6 +11,13 @@ import click
import git
SCHEMA_FILE_REGEX = re.compile(r"^synapse/storage/schema/(.*)/delta/(.*)/(.*)$")
INDEX_CREATION_REGEX = re.compile(r"CREATE .*INDEX .*ON ([a-z_]+)", flags=re.IGNORECASE)
INDEX_DELETION_REGEX = re.compile(r"DROP .*INDEX ([a-z_]+)", flags=re.IGNORECASE)
TABLE_CREATION_REGEX = re.compile(r"CREATE .*TABLE ([a-z_]+)", flags=re.IGNORECASE)
# The base branch we want to check against. We use the main development branch
# on the assumption that it is what we are developing against.
DEVELOP_BRANCH = "develop"
@click.command()
@@ -20,6 +29,9 @@ SCHEMA_FILE_REGEX = re.compile(r"^synapse/storage/schema/(.*)/delta/(.*)/(.*)$")
help="Always output ANSI colours",
)
def main(force_colors: bool) -> None:
# Return code. Set to non-zero when we encounter an error
return_code = 0
click.secho(
"+++ Checking schema deltas are in the right folder",
fg="green",
@@ -30,17 +42,17 @@ def main(force_colors: bool) -> None:
click.secho("Updating repo...")
repo = git.Repo()
repo.remote().fetch()
repo.remote().fetch(refspec=DEVELOP_BRANCH)
click.secho("Getting current schema version...")
r = repo.git.show("origin/develop:synapse/storage/schema/__init__.py")
r = repo.git.show(f"origin/{DEVELOP_BRANCH}:synapse/storage/schema/__init__.py")
locals: Dict[str, Any] = {}
exec(r, locals)
current_schema_version = locals["SCHEMA_VERSION"]
diffs: List[git.Diff] = repo.remote().refs.develop.commit.diff(None)
diffs: List[git.Diff] = repo.remote().refs[DEVELOP_BRANCH].commit.diff(None)
# Get the schema version of the local file to check against current schema on develop
with open("synapse/storage/schema/__init__.py") as file:
@@ -53,7 +65,7 @@ def main(force_colors: bool) -> None:
# local schema version must be +/-1 the current schema version on develop
if abs(local_schema_version - current_schema_version) != 1:
click.secho(
"The proposed schema version has diverged more than one version from develop, please fix!",
f"The proposed schema version has diverged more than one version from {DEVELOP_BRANCH}, please fix!",
fg="red",
bold=True,
color=force_colors,
@@ -67,21 +79,28 @@ def main(force_colors: bool) -> None:
click.secho(f"Current schema version: {current_schema_version}")
seen_deltas = False
bad_files = []
bad_delta_files = []
changed_delta_files = []
for diff in diffs:
if not diff.new_file or diff.b_path is None:
if diff.b_path is None:
# We don't lint deleted files.
continue
match = SCHEMA_FILE_REGEX.match(diff.b_path)
if not match:
continue
changed_delta_files.append(diff.b_path)
if not diff.new_file:
continue
seen_deltas = True
_, delta_version, _ = match.groups()
if delta_version != str(current_schema_version):
bad_files.append(diff.b_path)
bad_delta_files.append(diff.b_path)
if not seen_deltas:
click.secho(
@@ -92,41 +111,91 @@ def main(force_colors: bool) -> None:
)
return
if not bad_files:
if bad_delta_files:
bad_delta_files.sort()
click.secho(
"Found deltas in the wrong folder!",
fg="red",
bold=True,
color=force_colors,
)
for f in bad_delta_files:
click.secho(
f"\t{f}",
fg="red",
bold=True,
color=force_colors,
)
click.secho()
click.secho(
f"Please move these files to delta/{current_schema_version}/",
fg="red",
bold=True,
color=force_colors,
)
else:
click.secho(
f"All deltas are in the correct folder: {current_schema_version}!",
fg="green",
bold=True,
color=force_colors,
)
return
bad_files.sort()
# Make sure we process them in order. This sort works because deltas are numbered
# and delta files are also numbered in order.
changed_delta_files.sort()
click.secho(
"Found deltas in the wrong folder!",
fg="red",
bold=True,
color=force_colors,
)
# Now check that we're not trying to create or drop indices. If we want to
# do that they should be in background updates. The exception is when we
# create indices on tables we've just created.
created_tables = set()
for delta_file in changed_delta_files:
with open(delta_file) as fd:
delta_lines = fd.readlines()
for f in bad_files:
click.secho(
f"\t{f}",
fg="red",
bold=True,
color=force_colors,
)
for line in delta_lines:
# Strip SQL comments
line = line.split("--", maxsplit=1)[0]
click.secho()
click.secho(
f"Please move these files to delta/{current_schema_version}/",
fg="red",
bold=True,
color=force_colors,
)
# Check and track any tables we create
match = TABLE_CREATION_REGEX.search(line)
if match:
table_name = match.group(1)
created_tables.add(table_name)
click.get_current_context().exit(1)
# Check for dropping indices, these are always banned
match = INDEX_DELETION_REGEX.search(line)
if match:
clause = match.group()
click.secho(
f"Found delta with index deletion: '{clause}' in {delta_file}\nThese should be in background updates.",
fg="red",
bold=True,
color=force_colors,
)
return_code = 1
# Check for index creation, which is only allowed for tables we've
# created.
match = INDEX_CREATION_REGEX.search(line)
if match:
clause = match.group()
table_name = match.group(1)
if table_name not in created_tables:
click.secho(
f"Found delta with index creation: '{clause}' in {delta_file}\nThese should be in background updates.",
fg="red",
bold=True,
color=force_colors,
)
return_code = 1
click.get_current_context().exit(return_code)
if __name__ == "__main__":

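As a quick illustration of what the three regexes above catch, here is a minimal standalone sketch (the SQL lines are made-up examples, not real Synapse deltas):

```python
import re

INDEX_CREATION_REGEX = re.compile(r"CREATE .*INDEX .*ON ([a-z_]+)", flags=re.IGNORECASE)
INDEX_DELETION_REGEX = re.compile(r"DROP .*INDEX ([a-z_]+)", flags=re.IGNORECASE)
TABLE_CREATION_REGEX = re.compile(r"CREATE .*TABLE ([a-z_]+)", flags=re.IGNORECASE)

# An index on a table created in the same delta is fine; an index on a
# pre-existing table and any DROP INDEX are flagged for background updates.
delta = [
    "CREATE TABLE new_table (id BIGINT);",
    "CREATE INDEX new_table_idx ON new_table (id);",
    "CREATE UNIQUE INDEX old_table_idx ON old_table (id);",
    "DROP INDEX stale_idx;",
]

created_tables = set()
for line in delta:
    line = line.split("--", maxsplit=1)[0]  # strip SQL comments, as the script does
    if m := TABLE_CREATION_REGEX.search(line):
        created_tables.add(m.group(1))
    if m := INDEX_DELETION_REGEX.search(line):
        print(f"banned: {m.group()}")
    if m := INDEX_CREATION_REGEX.search(line):
        if m.group(1) not in created_tables:
            print(f"banned: {m.group()}")
```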
View File

@@ -1065,7 +1065,7 @@ class Porter:
def get_sent_table_size(txn: LoggingTransaction) -> int:
txn.execute(
"SELECT count(*) FROM sent_transactions" " WHERE ts >= ?", (yesterday,)
"SELECT count(*) FROM sent_transactions WHERE ts >= ?", (yesterday,)
)
result = txn.fetchone()
assert result is not None

View File

@@ -292,9 +292,9 @@ def main() -> None:
for key in worker_config:
if key == "worker_app": # But we allow worker_app
continue
assert not key.startswith(
"worker_"
), "Main process cannot use worker_* config"
assert not key.startswith("worker_"), (
"Main process cannot use worker_* config"
)
else:
worker_pidfile = worker_config["worker_pid_file"]
worker_cache_factor = worker_config.get("synctl_cache_factor")

View File

@@ -39,16 +39,16 @@ from synapse.api.errors import (
HttpResponseException,
InvalidClientTokenError,
OAuthInsufficientScopeError,
StoreError,
SynapseError,
UnrecognizedRequestError,
)
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.opentracing import active_span, force_tracing, start_active_span
from synapse.types import Requester, UserID, create_requester
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
if TYPE_CHECKING:
from synapse.rest.admin.experimental_features import ExperimentalFeature
@@ -177,6 +177,7 @@ class MSC3861DelegatedAuth(BaseAuth):
self._http_client = hs.get_proxied_http_client()
self._hostname = hs.hostname
self._admin_token: Callable[[], Optional[str]] = self._config.admin_token
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
# # Token Introspection Cache
# This remembers what users/devices are represented by which access tokens,
@@ -201,6 +202,8 @@ class MSC3861DelegatedAuth(BaseAuth):
self._clock,
"token_introspection",
timeout_ms=120_000,
# don't log because the keys are access tokens
enable_logging=False,
)
self._issuer_metadata = RetryOnExceptionCachedCall[OpenIDProviderMetadata](
@@ -275,7 +278,9 @@ class MSC3861DelegatedAuth(BaseAuth):
metadata = await self._issuer_metadata.get()
return metadata.get("introspection_endpoint")
async def _introspect_token(self, token: str) -> IntrospectionResult:
async def _introspect_token(
self, token: str, cache_context: ResponseCacheContext[str]
) -> IntrospectionResult:
"""
Send a token to the introspection endpoint and return the introspection response
@@ -291,6 +296,8 @@ class MSC3861DelegatedAuth(BaseAuth):
Returns:
The introspection response
"""
# By default, we shouldn't cache the result unless we know it's valid
cache_context.should_cache = False
introspection_endpoint = await self._introspection_endpoint()
raw_headers: Dict[str, str] = {
"Content-Type": "application/x-www-form-urlencoded",
@@ -348,6 +355,8 @@ class MSC3861DelegatedAuth(BaseAuth):
"The introspection endpoint returned an invalid JSON response."
)
# We had a valid response, so we can cache it
cache_context.should_cache = True
return IntrospectionResult(
IntrospectionToken(**resp), retrieved_at_ms=self._clock.time_msec()
)
@@ -361,6 +370,55 @@ class MSC3861DelegatedAuth(BaseAuth):
allow_guest: bool = False,
allow_expired: bool = False,
allow_locked: bool = False,
) -> Requester:
"""Get a registered user's ID.
Args:
request: An HTTP request with an access_token query parameter.
allow_guest: If False, will raise an AuthError if the user making the
request is a guest.
allow_expired: If True, allow the request through even if the account
is expired, or session token lifetime has ended. Note that
/login will deliver access tokens regardless of expiration.
Returns:
Resolves to the requester
Raises:
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
AuthError if access is denied for the user in the access token
"""
parent_span = active_span()
with start_active_span("get_user_by_req"):
requester = await self._wrapped_get_user_by_req(
request, allow_guest, allow_expired, allow_locked
)
if parent_span:
if requester.authenticated_entity in self._force_tracing_for_users:
# request tracing is enabled for this user, so we need to force it
# tracing on for the parent span (which will be the servlet span).
#
# It's too late for the get_user_by_req span to inherit the setting,
# so we also force it on for that.
force_tracing()
force_tracing(parent_span)
parent_span.set_tag(
"authenticated_entity", requester.authenticated_entity
)
parent_span.set_tag("user_id", requester.user.to_string())
if requester.device_id is not None:
parent_span.set_tag("device_id", requester.device_id)
if requester.app_service is not None:
parent_span.set_tag("appservice_id", requester.app_service.id)
return requester
async def _wrapped_get_user_by_req(
self,
request: SynapseRequest,
allow_guest: bool = False,
allow_expired: bool = False,
allow_locked: bool = False,
) -> Requester:
access_token = self.get_access_token_from_request(request)
@@ -429,7 +487,7 @@ class MSC3861DelegatedAuth(BaseAuth):
try:
introspection_result = await self._introspection_cache.wrap(
token, self._introspect_token, token
token, self._introspect_token, token, cache_context=True
)
except Exception:
logger.exception("Failed to introspect token")
@@ -453,7 +511,7 @@ class MSC3861DelegatedAuth(BaseAuth):
raise InvalidClientTokenError("No scope in token granting user rights")
# Match via the sub claim
sub: Optional[str] = introspection_result.get_sub()
sub = introspection_result.get_sub()
if sub is None:
raise InvalidClientTokenError(
"Invalid sub claim in the introspection result"
@@ -466,29 +524,20 @@ class MSC3861DelegatedAuth(BaseAuth):
# If we could not find a user via the external_id, it either does not exist,
# or the external_id was never recorded
# TODO: claim mapping should be configurable
username: Optional[str] = introspection_result.get_username()
if username is None or not isinstance(username, str):
username = introspection_result.get_username()
if username is None:
raise AuthError(
500,
"Invalid username claim in the introspection result",
)
user_id = UserID(username, self._hostname)
# First try to find a user from the username claim
# Try to find a user from the username claim
user_info = await self.store.get_user_by_id(user_id=user_id.to_string())
if user_info is None:
# If the user does not exist, we should create it on the fly
# TODO: we could use SCIM to provision users ahead of time and listen
# for SCIM SET events if those ever become standard:
# https://datatracker.ietf.org/doc/html/draft-hunt-scim-notify-00
# TODO: claim mapping should be configurable
# If present, use the name claim as the displayname
name: Optional[str] = introspection_result.get_name()
await self.store.register_user(
user_id=user_id.to_string(), create_profile_with_displayname=name
raise AuthError(
500,
"User not found",
)
# And record the sub as external_id
@@ -528,17 +577,10 @@ class MSC3861DelegatedAuth(BaseAuth):
"Invalid device ID in introspection result",
)
# Create the device on the fly if it does not exist
try:
await self.store.get_device(
user_id=user_id.to_string(), device_id=device_id
)
except StoreError:
await self.store.store_device(
user_id=user_id.to_string(),
device_id=device_id,
initial_device_display_name="OIDC-native client",
)
# Make sure the device exists
await self.store.get_device(
user_id=user_id.to_string(), device_id=device_id
)
# TODO: there are a few things missing in the requester here, which still need
# to be figured out, like:

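The `cache_context` changes above follow a general conditional-caching pattern: the cached callback starts from a pessimistic "don't cache" default and only opts in once it has seen a valid response. A minimal standalone sketch of the idea (a toy cache, not Synapse's `ResponseCache`):

```python
import asyncio
from typing import Awaitable, Callable, Dict

class CacheContext:
    """Lets the wrapped callback opt in or out of having its result cached."""
    def __init__(self) -> None:
        self.should_cache = True

class ConditionalCache:
    def __init__(self) -> None:
        self._results: Dict[str, str] = {}

    async def wrap(
        self, key: str, cb: Callable[[str, CacheContext], Awaitable[str]]
    ) -> str:
        if key in self._results:
            return self._results[key]
        ctx = CacheContext()
        result = await cb(key, ctx)
        if ctx.should_cache:  # only remember results the callback vouched for
            self._results[key] = result
        return result

async def introspect(token: str, ctx: CacheContext) -> str:
    ctx.should_cache = False  # pessimistic default, as in the diff above
    result = f"introspected-{token}"  # pretend this came back as a valid response
    ctx.should_cache = True  # valid response, so it is safe to cache
    return result

print(asyncio.run(ConditionalCache().wrap("tok", introspect)))  # introspected-tok
```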
View File

@@ -70,6 +70,7 @@ class Codes(str, Enum):
THREEPID_NOT_FOUND = "M_THREEPID_NOT_FOUND"
THREEPID_DENIED = "M_THREEPID_DENIED"
INVALID_USERNAME = "M_INVALID_USERNAME"
THREEPID_MEDIUM_NOT_SUPPORTED = "M_THREEPID_MEDIUM_NOT_SUPPORTED"
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
CONSENT_NOT_GIVEN = "M_CONSENT_NOT_GIVEN"
CANNOT_LEAVE_SERVER_NOTICE_ROOM = "M_CANNOT_LEAVE_SERVER_NOTICE_ROOM"

View File

@@ -20,8 +20,7 @@
#
#
from collections import OrderedDict
from typing import Hashable, Optional, Tuple
from typing import Dict, Hashable, Optional, Tuple
from synapse.api.errors import LimitExceededError
from synapse.config.ratelimiting import RatelimitSettings
@@ -80,12 +79,14 @@ class Ratelimiter:
self.store = store
self._limiter_name = cfg.key
# An ordered dictionary representing the token buckets tracked by this rate
# A dictionary representing the token buckets tracked by this rate
# limiter. Each entry maps a key of arbitrary type to a tuple representing:
# * The number of tokens currently in the bucket,
# * The time point when the bucket was last completely empty, and
# * The rate_hz (leak rate) of this particular bucket.
self.actions: OrderedDict[Hashable, Tuple[float, float, float]] = OrderedDict()
self.actions: Dict[Hashable, Tuple[float, float, float]] = {}
self.clock.looping_call(self._prune_message_counts, 60 * 1000)
def _get_key(
self, requester: Optional[Requester], key: Optional[Hashable]
@@ -169,9 +170,6 @@ class Ratelimiter:
rate_hz = rate_hz if rate_hz is not None else self.rate_hz
burst_count = burst_count if burst_count is not None else self.burst_count
# Remove any expired entries
self._prune_message_counts(time_now_s)
# Check if there is an existing count entry for this key
action_count, time_start, _ = self._get_action_counts(key, time_now_s)
@@ -246,13 +244,12 @@ class Ratelimiter:
action_count, time_start, rate_hz = self._get_action_counts(key, time_now_s)
self.actions[key] = (action_count + n_actions, time_start, rate_hz)
def _prune_message_counts(self, time_now_s: float) -> None:
def _prune_message_counts(self) -> None:
"""Remove message count entries that have not exceeded their defined
rate_hz limit
Args:
time_now_s: The current time
"""
time_now_s = self.clock.time()
# We create a copy of the key list here as the dictionary is modified during
# the loop
for key in list(self.actions.keys()):

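For intuition, here is a minimal standalone model of the bucket bookkeeping the comment above describes (deliberately simplified, not Synapse's `Ratelimiter`; `now` is passed in explicitly to keep the example deterministic):

```python
from typing import Dict, Hashable, Tuple

class TinyRatelimiter:
    """Each key maps to (tokens in bucket, time bucket was last empty, rate_hz)."""

    def __init__(self, rate_hz: float, burst_count: float) -> None:
        self.rate_hz = rate_hz
        self.burst_count = burst_count
        self.actions: Dict[Hashable, Tuple[float, float, float]] = {}

    def can_do_action(self, key: Hashable, now: float) -> bool:
        count, start, rate = self.actions.get(key, (0.0, now, self.rate_hz))
        # Tokens leak out of the bucket at rate_hz since it was last empty.
        count = max(0.0, count - (now - start) * rate)
        if count >= self.burst_count:
            return False
        self.actions[key] = (count + 1.0, now if count == 0.0 else start, rate)
        return True

rl = TinyRatelimiter(rate_hz=1.0, burst_count=3.0)
# Three actions in the same instant fill the bucket; the fourth is rejected,
# but after two seconds enough tokens have leaked out to allow another.
print([rl.can_do_action("user", now=0.0) for _ in range(4)])  # [True, True, True, False]
print(rl.can_do_action("user", now=2.0))  # True
```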
View File

@@ -51,8 +51,7 @@ from synapse.http.server import JsonResource, OptionsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.rest import ClientRestResource
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest import ClientRestResource, admin
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyResource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
@@ -176,8 +175,13 @@ class GenericWorkerServer(HomeServer):
def _listen_http(self, listener_config: ListenerConfig) -> None:
assert listener_config.http_options is not None
# We always include a health resource.
resources: Dict[str, Resource] = {"/health": HealthResource()}
# We always include an admin resource that we populate with servlets as needed
admin_resource = JsonResource(self, canonical_json=False)
resources: Dict[str, Resource] = {
# We always include a health resource.
"/health": HealthResource(),
"/_synapse/admin": admin_resource,
}
for res in listener_config.http_options.resources:
for name in res.names:
@@ -190,6 +194,7 @@ class GenericWorkerServer(HomeServer):
resources.update(build_synapse_client_resource_tree(self))
resources["/.well-known"] = well_known_resource(self)
admin.register_servlets(self, admin_resource)
elif name == "federation":
resources[FEDERATION_PREFIX] = TransportLayerServer(self)
@@ -199,15 +204,13 @@ class GenericWorkerServer(HomeServer):
# We need to serve the admin servlets for media on the
# worker.
admin_resource = JsonResource(self, canonical_json=False)
register_servlets_for_media_repo(self, admin_resource)
admin.register_servlets_for_media_repo(self, admin_resource)
resources.update(
{
MEDIA_R0_PREFIX: media_repo,
MEDIA_V3_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
"/_synapse/admin": admin_resource,
}
)
@@ -284,8 +287,7 @@ class GenericWorkerServer(HomeServer):
elif listener.type == "metrics":
if not self.config.metrics.enable_metrics:
logger.warning(
"Metrics listener configured, but "
"enable_metrics is not True!"
"Metrics listener configured, but enable_metrics is not True!"
)
else:
if isinstance(listener, TCPListenerConfig):

View File

@@ -54,6 +54,7 @@ from synapse.config.server import ListenerConfig, TCPListenerConfig
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import (
JsonResource,
OptionsResource,
RootOptionsRedirectResource,
StaticResource,
@@ -61,8 +62,7 @@ from synapse.http.server import (
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.rest import ClientRestResource
from synapse.rest.admin import AdminRestResource
from synapse.rest import ClientRestResource, admin
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyResource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
@@ -180,11 +180,14 @@ class SynapseHomeServer(HomeServer):
if compress:
client_resource = gz_wrap(client_resource)
admin_resource = JsonResource(self, canonical_json=False)
admin.register_servlets(self, admin_resource)
resources.update(
{
CLIENT_API_PREFIX: client_resource,
"/.well-known": well_known_resource(self),
"/_synapse/admin": AdminRestResource(self),
"/_synapse/admin": admin_resource,
**build_synapse_client_resource_tree(self),
}
)
@@ -286,8 +289,7 @@ class SynapseHomeServer(HomeServer):
elif listener.type == "metrics":
if not self.config.metrics.enable_metrics:
logger.warning(
"Metrics listener configured, but "
"enable_metrics is not True!"
"Metrics listener configured, but enable_metrics is not True!"
)
else:
if isinstance(listener, TCPListenerConfig):

View File

@@ -34,6 +34,22 @@ if TYPE_CHECKING:
logger = logging.getLogger("synapse.app.homeserver")
ONE_MINUTE_SECONDS = 60
ONE_HOUR_SECONDS = 60 * ONE_MINUTE_SECONDS
MILLISECONDS_PER_SECOND = 1000
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME_SECONDS = 5 * ONE_MINUTE_SECONDS
"""
We wait 5 minutes to send the first set of stats as the server can be quite busy the
first few minutes
"""
PHONE_HOME_INTERVAL_SECONDS = 3 * ONE_HOUR_SECONDS
"""
Phone home stats are sent every 3 hours
"""
# Contains the list of processes we will be monitoring
# currently either 0 or 1
_stats_process: List[Tuple[int, "resource.struct_rusage"]] = []
@@ -185,12 +201,14 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
# If you increase the loop period, the accuracy of user_daily_visits
# table will decrease
clock.looping_call(
hs.get_datastores().main.generate_user_daily_visits, 5 * 60 * 1000
hs.get_datastores().main.generate_user_daily_visits,
5 * ONE_MINUTE_SECONDS * MILLISECONDS_PER_SECOND,
)
# monthly active user limiting functionality
clock.looping_call(
hs.get_datastores().main.reap_monthly_active_users, 1000 * 60 * 60
hs.get_datastores().main.reap_monthly_active_users,
ONE_HOUR_SECONDS * MILLISECONDS_PER_SECOND,
)
hs.get_datastores().main.reap_monthly_active_users()
@@ -221,7 +239,12 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
if hs.config.metrics.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000, hs, stats)
clock.looping_call(
phone_stats_home,
PHONE_HOME_INTERVAL_SECONDS * MILLISECONDS_PER_SECOND,
hs,
stats,
)
# We need to defer this init for the cases that we daemonize
# otherwise the process ID we get is that of the non-daemon process
@@ -229,4 +252,6 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(5 * 60, phone_stats_home, hs, stats)
clock.call_later(
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME_SECONDS, phone_stats_home, hs, stats
)

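`looping_call` takes its interval in milliseconds (as the magic numbers in the old code imply), so the named constants are pure unit bookkeeping. A quick standalone check of the arithmetic, using the same definitions as above:

```python
ONE_MINUTE_SECONDS = 60
ONE_HOUR_SECONDS = 60 * ONE_MINUTE_SECONDS
MILLISECONDS_PER_SECOND = 1000

PHONE_HOME_INTERVAL_SECONDS = 3 * ONE_HOUR_SECONDS

# Identical values to the magic numbers they replace:
assert 5 * ONE_MINUTE_SECONDS * MILLISECONDS_PER_SECOND == 5 * 60 * 1000
assert ONE_HOUR_SECONDS * MILLISECONDS_PER_SECOND == 1000 * 60 * 60
assert PHONE_HOME_INTERVAL_SECONDS * MILLISECONDS_PER_SECOND == 3 * 60 * 60 * 1000
```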
View File

@@ -170,7 +170,7 @@ class Config:
section: ClassVar[str]
def __init__(self, root_config: "RootConfig" = None):
def __init__(self, root_config: "RootConfig"):
self.root = root_config
# Get the path to the default Synapse template directory
@@ -445,7 +445,7 @@ class RootConfig:
return res
@classmethod
def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: any) -> None:
def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: Any) -> None:
"""
Invoke a static function on config objects this RootConfig is
configured to use.
@@ -1047,7 +1047,7 @@ class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
return self._get_instance(key)
def read_file(file_path: Any, config_path: Iterable[str]) -> str:
def read_file(file_path: Any, config_path: StrSequence) -> str:
"""Check the given file exists, and read it into a string
If it does not, emit an error indicating the problem

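The `Iterable[str]` → `StrSequence` tightening here (and the one-element tuples now passed at call sites later in this diff) guards against a classic pitfall: a bare `str` *is* an `Iterable[str]`, just of its own characters. A minimal illustration (hypothetical helper, not Synapse code):

```python
from typing import Iterable

def join_config_path(config_path: Iterable[str]) -> str:
    # Builds the dotted option name used in an error message.
    return ".".join(config_path)

# A bare string type-checks as Iterable[str] but iterates per character:
print(join_config_path("form_secret_path"))     # f.o.r.m._.s.e.c.r.e.t._.p.a.t.h
# A one-element tuple gives the intended result:
print(join_config_path(("form_secret_path",)))  # form_secret_path
```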
View File

@@ -179,7 +179,7 @@ class RootConfig:
class Config:
root: RootConfig
default_template_dir: str
def __init__(self, root_config: Optional[RootConfig] = ...) -> None: ...
def __init__(self, root_config: RootConfig = ...) -> None: ...
@staticmethod
def parse_size(value: Union[str, int]) -> int: ...
@staticmethod
@@ -212,4 +212,4 @@ class ShardedWorkerHandlingConfig:
class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
def get_instance(self, key: str) -> str: ... # noqa: F811
def read_file(file_path: Any, config_path: Iterable[str]) -> str: ...
def read_file(file_path: Any, config_path: StrSequence) -> str: ...

View File

@@ -21,7 +21,7 @@
import enum
from functools import cache
from typing import TYPE_CHECKING, Any, Iterable, Optional
from typing import TYPE_CHECKING, Any, Optional
import attr
import attr.validators
@@ -29,7 +29,7 @@ import attr.validators
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.config import ConfigError
from synapse.config._base import Config, RootConfig, read_file
from synapse.types import JsonDict
from synapse.types import JsonDict, StrSequence
# Determine whether authlib is installed.
try:
@@ -45,7 +45,7 @@ if TYPE_CHECKING:
@cache
def read_secret_from_file_once(file_path: Any, config_path: Iterable[str]) -> str:
def read_secret_from_file_once(file_path: Any, config_path: StrSequence) -> str:
"""Returns the memoized secret read from file."""
return read_file(file_path, config_path).strip()
@@ -560,3 +560,9 @@ class ExperimentalConfig(Config):
# MSC4076: Add `disable_badge_count`` to pusher configuration
self.msc4076_enabled: bool = experimental.get("msc4076_enabled", False)
# MSC4263: Preventing MXID enumeration via key queries
self.msc4263_limit_key_queries_to_users_who_share_rooms = experimental.get(
"msc4263_limit_key_queries_to_users_who_share_rooms",
False,
)

View File

@@ -191,7 +191,7 @@ class KeyConfig(Config):
if macaroon_secret_key:
raise ConfigError(CONFLICTING_MACAROON_SECRET_KEY_OPTS_ERROR)
macaroon_secret_key = read_file(
macaroon_secret_key_path, "macaroon_secret_key_path"
macaroon_secret_key_path, ("macaroon_secret_key_path",)
).strip()
if not macaroon_secret_key:
macaroon_secret_key = self.root.registration.registration_shared_secret
@@ -216,7 +216,9 @@ class KeyConfig(Config):
if form_secret_path:
if form_secret:
raise ConfigError(CONFLICTING_FORM_SECRET_OPTS_ERROR)
self.form_secret = read_file(form_secret_path, "form_secret_path").strip()
self.form_secret = read_file(
form_secret_path, ("form_secret_path",)
).strip()
else:
self.form_secret = form_secret

View File

@@ -162,6 +162,10 @@ class RegistrationConfig(Config):
"disable_msisdn_registration", False
)
self.allow_underscore_prefixed_localpart = config.get(
"allow_underscore_prefixed_localpart", False
)
session_lifetime = config.get("session_lifetime")
if session_lifetime is not None:
session_lifetime = self.parse_duration(session_lifetime)

View File

@@ -43,8 +43,7 @@ class SsoAttributeRequirement:
"""Object describing a single requirement for SSO attributes."""
attribute: str
# If neither value nor one_of is given, the attribute must simply exist. This is
# only true for CAS configs which use a different JSON schema than the one below.
# If neither `value` nor `one_of` is given, the attribute must simply exist.
value: Optional[str] = None
one_of: Optional[List[str]] = None
@@ -56,10 +55,6 @@ class SsoAttributeRequirement:
"one_of": {"type": "array", "items": {"type": "string"}},
},
"required": ["attribute"],
"oneOf": [
{"required": ["value"]},
{"required": ["one_of"]},
],
}

View File

@@ -108,8 +108,7 @@ class TlsConfig(Config):
# Raise an error if this option has been specified without any
# corresponding certificates.
raise ConfigError(
"federation_custom_ca_list specified without "
"any certificate files"
"federation_custom_ca_list specified without any certificate files"
)
certs = []

View File

@@ -38,6 +38,9 @@ class UserDirectoryConfig(Config):
self.user_directory_search_all_users = user_directory_config.get(
"search_all_users", False
)
self.user_directory_exclude_remote_users = user_directory_config.get(
"exclude_remote_users", False
)
self.user_directory_search_prefer_local_users = user_directory_config.get(
"prefer_local_users", False
)

View File

@@ -263,7 +263,7 @@ class WorkerConfig(Config):
if worker_replication_secret:
raise ConfigError(CONFLICTING_WORKER_REPLICATION_SECRET_OPTS_ERROR)
self.worker_replication_secret = read_file(
worker_replication_secret_path, "worker_replication_secret_path"
worker_replication_secret_path, ("worker_replication_secret_path",)
).strip()
else:
self.worker_replication_secret = worker_replication_secret

View File

@@ -986,8 +986,7 @@ def _check_power_levels(
if old_level == user_level:
raise AuthError(
403,
"You don't have permission to remove ops level equal "
"to your own",
"You don't have permission to remove ops level equal to your own",
)
# Check if the old and new levels are greater than the user level

View File

@@ -30,6 +30,7 @@ from synapse.crypto.keyring import Keyring
from synapse.events import EventBase, make_event_from_dict
from synapse.events.utils import prune_event, validate_canonicaljson
from synapse.federation.units import filter_pdus_for_valid_depth
from synapse.handlers.room_policy import RoomPolicyHandler
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.opentracing import log_kv, trace
from synapse.types import JsonDict, get_domain_from_id
@@ -64,6 +65,24 @@ class FederationBase:
self._clock = hs.get_clock()
self._storage_controllers = hs.get_storage_controllers()
# We need to define this lazily otherwise we get a cyclic dependency.
# self._policy_handler = hs.get_room_policy_handler()
self._policy_handler: Optional[RoomPolicyHandler] = None
def _lazily_get_policy_handler(self) -> RoomPolicyHandler:
"""Lazily get the room policy handler.
This is required to avoid an import cycle: RoomPolicyHandler requires a
FederationClient, which requires a FederationBase, which requires a
RoomPolicyHandler.
Returns:
RoomPolicyHandler: The room policy handler.
"""
if self._policy_handler is None:
self._policy_handler = self.hs.get_room_policy_handler()
return self._policy_handler
@trace
async def _check_sigs_and_hash(
self,
@@ -80,6 +99,10 @@ class FederationBase:
Also runs the event through the spam checker; if it fails, redacts the event
and flags it as soft-failed.
Also checks that the event is allowed by the policy server, if the room uses
a policy server. If the event is not allowed, the event is flagged as
soft-failed but not redacted.
Args:
room_version: The room version of the PDU
pdu: the event to be checked
@@ -145,6 +168,17 @@ class FederationBase:
)
return redacted_event
policy_allowed = await self._lazily_get_policy_handler().is_event_allowed(pdu)
if not policy_allowed:
logger.warning(
"Event not allowed by policy server, soft-failing %s", pdu.event_id
)
pdu.internal_metadata.soft_failed = True
# Note: we don't redact the event so admins can inspect the event after the
# fact. Other processes may redact the event, but that won't be applied to
# the database copy of the event until the server's config requires it.
return pdu
spam_check = await self._spam_checker_module_callbacks.check_event_for_spam(pdu)
if spam_check != self._spam_checker_module_callbacks.NOT_SPAM:

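The `_lazily_get_policy_handler` pattern above is a standard way to break a construction cycle: store `None` at init time and resolve the dependency on first use. In isolation (a generic sketch, not Synapse's `HomeServer` wiring):

```python
from typing import Optional

class Container:
    """Toy service locator: A is built eagerly, B only on demand."""

    def __init__(self) -> None:
        self.a = A(self)

    def get_b(self) -> "B":
        return B(self)

class A:
    def __init__(self, container: Container) -> None:
        self._container = container
        # Resolving B here would recurse back into constructing A in the
        # truly cyclic case, so defer the lookup until first use.
        self._b: Optional["B"] = None

    def _lazily_get_b(self) -> "B":
        if self._b is None:
            self._b = self._container.get_b()
        return self._b

class B:
    def __init__(self, container: Container) -> None:
        self._container = container

print(type(Container().a._lazily_get_b()).__name__)  # B
```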
View File

@@ -75,6 +75,7 @@ from synapse.http.client import is_unknown_endpoint
from synapse.http.types import QueryParams
from synapse.logging.opentracing import SynapseTags, log_kv, set_tag, tag_args, trace
from synapse.types import JsonDict, StrCollection, UserID, get_domain_from_id
from synapse.types.handlers.policy_server import RECOMMENDATION_OK, RECOMMENDATION_SPAM
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
@@ -421,6 +422,62 @@ class FederationClient(FederationBase):
return None
@trace
@tag_args
async def get_pdu_policy_recommendation(
self, destination: str, pdu: EventBase, timeout: Optional[int] = None
) -> str:
"""Requests that the destination server (typically a policy server)
check the event and return its recommendation on how to handle the
event.
If the policy server could not be contacted or the policy server
returned an unknown recommendation, this returns an OK recommendation.
This defaulting behaviour exists because the typical caller will be
in a critical call path and would generally interpret a `None` or similar
response as "weird value; don't care; move on without taking action". We
just front-load that logic here.
Args:
destination: The remote homeserver to ask (a policy server)
pdu: The event to check
timeout: How long to try (in ms) the destination for before
giving up. None indicates no timeout.
Returns:
The policy recommendation, or RECOMMENDATION_OK if the policy server was
uncontactable or returned an unknown recommendation.
"""
logger.debug(
"get_pdu_policy_recommendation for event_id=%s from %s",
pdu.event_id,
destination,
)
try:
res = await self.transport_layer.get_policy_recommendation_for_pdu(
destination, pdu, timeout=timeout
)
recommendation = res.get("recommendation")
if not isinstance(recommendation, str):
raise InvalidResponseError("recommendation is not a string")
if recommendation not in (RECOMMENDATION_OK, RECOMMENDATION_SPAM):
logger.warning(
"get_pdu_policy_recommendation: unknown recommendation: %s",
recommendation,
)
return RECOMMENDATION_OK
return recommendation
except Exception as e:
logger.warning(
"get_pdu_policy_recommendation: server %s responded with error, assuming OK recommendation: %s",
destination,
e,
)
return RECOMMENDATION_OK
@trace
@tag_args
async def get_pdu(

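The defaulting described in the docstring boils down to clamping anything other than the two known recommendations to OK. A standalone sketch (the string values are assumptions based on MSC4284; the real constants live in `synapse.types.handlers.policy_server`):

```python
RECOMMENDATION_OK = "ok"      # assumed value
RECOMMENDATION_SPAM = "spam"  # assumed value

def clamp_recommendation(response: dict) -> str:
    """Anything unknown, missing, or malformed is treated as OK."""
    recommendation = response.get("recommendation")
    if recommendation in (RECOMMENDATION_OK, RECOMMENDATION_SPAM):
        return recommendation
    return RECOMMENDATION_OK

assert clamp_recommendation({"recommendation": "spam"}) == RECOMMENDATION_SPAM
assert clamp_recommendation({"recommendation": "maybe?"}) == RECOMMENDATION_OK
assert clamp_recommendation({}) == RECOMMENDATION_OK
```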
View File

@@ -701,6 +701,12 @@ class FederationServer(FederationBase):
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
if await self._spam_checker_module_callbacks.should_drop_federated_event(pdu):
logger.info(
"Federated event contains spam, dropping %s",
pdu.event_id,
)
raise SynapseError(403, Codes.FORBIDDEN)
try:
pdu = await self._check_sigs_and_hash(room_version, pdu)
except InvalidEventSignatureError as e:

View File

@@ -143,6 +143,33 @@ class TransportLayerClient:
destination, path=path, timeout=timeout, try_trailing_slash_on_400=True
)
async def get_policy_recommendation_for_pdu(
self, destination: str, event: EventBase, timeout: Optional[int] = None
) -> JsonDict:
"""Requests the policy recommendation for the given pdu from the given policy server.
Args:
destination: The host name of the remote homeserver checking the event.
event: The event to check.
timeout: How long to try (in ms) the destination for before giving up.
None indicates no timeout.
Returns:
The full recommendation object from the remote server.
"""
logger.debug(
"get_policy_recommendation_for_pdu dest=%s, event_id=%s",
destination,
event.event_id,
)
return await self.client.post_json(
destination=destination,
path=f"/_matrix/policy/unstable/org.matrix.msc4284/event/{event.event_id}/check",
data=event.get_pdu_json(),
ignore_backoff=True,
timeout=timeout,
)
async def backfill(
self, destination: str, room_id: str, event_tuples: Collection[str], limit: int
) -> Optional[Union[JsonDict, list]]:

View File

@@ -445,7 +445,7 @@ class AdminHandler:
user_id,
room,
limit,
["m.room.member", "m.room.message"],
["m.room.member", "m.room.message", "m.room.encrypted"],
)
if not event_ids:
# nothing to redact in this room

View File

@@ -20,6 +20,7 @@
#
#
import logging
from threading import Lock
from typing import (
TYPE_CHECKING,
AbstractSet,
@@ -1237,7 +1238,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
)
# Attempt to resync out of sync device lists every 30s.
self._resync_retry_in_progress = False
self._resync_retry_lock = Lock()
self.clock.looping_call(
run_as_background_process,
30 * 1000,
@@ -1419,13 +1420,10 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
"""Retry to resync device lists that are out of sync, except if another retry is
in progress.
"""
if self._resync_retry_in_progress:
# If the lock cannot be acquired, we want to return immediately instead of blocking here
if not self._resync_retry_lock.acquire(blocking=False):
return
try:
# Prevent another call of this function to retry resyncing device lists so
# we don't send too many requests.
self._resync_retry_in_progress = True
# Get all of the users that need resyncing.
need_resync = await self.store.get_user_ids_requiring_device_list_resync()
@@ -1466,8 +1464,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
e,
)
finally:
# Allow future calls to retry resyncinc out of sync device lists.
self._resync_retry_in_progress = False
self._resync_retry_lock.release()
async def multi_user_device_resync(
self, user_ids: List[str], mark_failed_as_stale: bool = True

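The switch from a boolean flag to `Lock.acquire(blocking=False)` closes a check-then-set race: the acquire is atomic, so two concurrent callers can never both get past the guard. The pattern in isolation (a minimal sketch):

```python
from threading import Lock

_resync_lock = Lock()

def retry_resync() -> None:
    # Unlike a plain `if in_progress: return` / `in_progress = True` pair,
    # acquire(blocking=False) checks and takes the lock in one atomic step.
    if not _resync_lock.acquire(blocking=False):
        return  # another retry is already running
    try:
        ...  # do the resync work
    finally:
        _resync_lock.release()
```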
View File

@@ -158,7 +158,37 @@ class E2eKeysHandler:
the number of in-flight queries at a time.
"""
async with self._query_devices_linearizer.queue((from_user_id, from_device_id)):
device_keys_query: Dict[str, List[str]] = query_body.get("device_keys", {})
async def filter_device_key_query(
query: Dict[str, List[str]],
) -> Dict[str, List[str]]:
if not self.config.experimental.msc4263_limit_key_queries_to_users_who_share_rooms:
# Only ignore invalid user IDs, which is the same behaviour as if
# the user existed but had no keys.
return {
user_id: v
for user_id, v in query.items()
if UserID.is_valid(user_id)
}
# Strip invalid user IDs and user IDs the requesting user does not share rooms with.
valid_user_ids = [
user_id for user_id in query.keys() if UserID.is_valid(user_id)
]
allowed_user_ids = set(
await self.store.do_users_share_a_room_joined_or_invited(
from_user_id, valid_user_ids
)
)
return {
user_id: v
for user_id, v in query.items()
if user_id in allowed_user_ids
}
device_keys_query: Dict[str, List[str]] = await filter_device_key_query(
query_body.get("device_keys", {})
)
# separate users by domain.
# make a map from domain to user_id to device_ids
@@ -166,11 +196,6 @@ class E2eKeysHandler:
remote_queries = {}
for user_id, device_ids in device_keys_query.items():
if not UserID.is_valid(user_id):
# Ignore invalid user IDs, which is the same behaviour as if
# the user existed but had no keys.
continue
# we use UserID.from_string to catch invalid user ids
if self.is_mine(UserID.from_string(user_id)):
local_query[user_id] = device_ids
@@ -1163,7 +1188,7 @@ class E2eKeysHandler:
devices = devices[user_id]
except SynapseError as e:
failure = _exception_to_failure(e)
failures[user_id] = {device: failure for device in signatures.keys()}
failures[user_id] = dict.fromkeys(signatures.keys(), failure)
return signature_list, failures
for device_id, device in signatures.items():
@@ -1303,7 +1328,7 @@ class E2eKeysHandler:
except SynapseError as e:
failure = _exception_to_failure(e)
for user, devicemap in signatures.items():
failures[user] = {device_id: failure for device_id in devicemap.keys()}
failures[user] = dict.fromkeys(devicemap.keys(), failure)
return signature_list, failures
for target_user, devicemap in signatures.items():
@@ -1344,9 +1369,7 @@ class E2eKeysHandler:
# other devices were signed -- mark those as failures
logger.debug("upload signature: too many devices specified")
failure = _exception_to_failure(NotFoundError("Unknown device"))
failures[target_user] = {
device: failure for device in other_devices
}
failures[target_user] = dict.fromkeys(other_devices, failure)
if user_signing_key_id in master_key.get("signatures", {}).get(
user_id, {}
@@ -1367,9 +1390,7 @@ class E2eKeysHandler:
except SynapseError as e:
failure = _exception_to_failure(e)
if device_id is None:
failures[target_user] = {
device_id: failure for device_id in devicemap.keys()
}
failures[target_user] = dict.fromkeys(devicemap.keys(), failure)
else:
failures.setdefault(target_user, {})[device_id] = failure

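The `dict.fromkeys` rewrites above are behaviour-preserving: every key maps to the *same* `failure` object, exactly as the comprehensions did. A quick standalone check:

```python
failure = {"status": 503, "message": "Not ready"}
devices = ["DEV1", "DEV2"]

via_comprehension = {device: failure for device in devices}
via_fromkeys = dict.fromkeys(devices, failure)

assert via_comprehension == via_fromkeys
# Both share a single value object, which is fine here because `failure`
# is never mutated after being fanned out:
assert all(v is failure for v in via_fromkeys.values())
```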
View File

@@ -1312,9 +1312,9 @@ class FederationHandler:
if state_key is not None:
# the event was not rejected (get_event raises a NotFoundError for rejected
# events) so the state at the event should include the event itself.
assert (
state_map.get((event.type, state_key)) == event.event_id
), "State at event did not include event itself"
assert state_map.get((event.type, state_key)) == event.event_id, (
"State at event did not include event itself"
)
# ... but we need the state *before* that event
if "replaces_state" in event.unsigned:

View File

@@ -143,9 +143,9 @@ class MessageHandler:
elif membership == Membership.LEAVE:
key = (event_type, state_key)
# If the membership is not JOIN, then the event ID should exist.
assert (
membership_event_id is not None
), "check_user_in_room_or_world_readable returned invalid data"
assert membership_event_id is not None, (
"check_user_in_room_or_world_readable returned invalid data"
)
room_state = await self._state_storage_controller.get_state_for_events(
[membership_event_id], StateFilter.from_types([key])
)
@@ -242,9 +242,9 @@ class MessageHandler:
room_state = await self.store.get_events(state_ids.values())
elif membership == Membership.LEAVE:
# If the membership is not JOIN, then the event ID should exist.
assert (
membership_event_id is not None
), "check_user_in_room_or_world_readable returned invalid data"
assert membership_event_id is not None, (
"check_user_in_room_or_world_readable returned invalid data"
)
room_state_events = (
await self._state_storage_controller.get_state_for_events(
[membership_event_id], state_filter=state_filter
@@ -495,6 +495,7 @@ class EventCreationHandler:
self._instance_name = hs.get_instance_name()
self._notifier = hs.get_notifier()
self._worker_lock_handler = hs.get_worker_locks_handler()
self._policy_handler = hs.get_room_policy_handler()
self.room_prejoin_state_types = self.hs.config.api.room_prejoin_state
@@ -1108,6 +1109,18 @@ class EventCreationHandler:
event.sender,
)
policy_allowed = await self._policy_handler.is_event_allowed(event)
if not policy_allowed:
logger.warning(
"Event not allowed by policy server, rejecting %s",
event.event_id,
)
raise SynapseError(
403,
"This message has been rejected as probable spam",
Codes.FORBIDDEN,
)
spam_check_result = (
await self._spam_checker_module_callbacks.check_event_for_spam(
event
@@ -1119,7 +1132,7 @@ class EventCreationHandler:
[code, dict] = spam_check_result
raise SynapseError(
403,
"This message had been rejected as probable spam",
"This message has been rejected as probable spam",
code,
dict,
)
@@ -1266,12 +1279,14 @@ class EventCreationHandler:
# Allow an event to have empty list of prev_event_ids
# only if it has auth_event_ids.
or auth_event_ids
), "Attempting to create a non-m.room.create event with no prev_events or auth_event_ids"
), (
"Attempting to create a non-m.room.create event with no prev_events or auth_event_ids"
)
else:
# we now ought to have some prev_events (unless it's a create event).
assert (
builder.type == EventTypes.Create or prev_event_ids
), "Attempting to create a non-m.room.create event with no prev_events"
assert builder.type == EventTypes.Create or prev_event_ids, (
"Attempting to create a non-m.room.create event with no prev_events"
)
if for_batch:
assert prev_event_ids is not None

View File

@@ -586,6 +586,24 @@ class OidcProvider:
or self._user_profile_method == "userinfo_endpoint"
)
@property
def _uses_access_token(self) -> bool:
"""Return True if the `access_token` will be used during the login process.
This is useful to determine whether the access token
returned by the identity provider, and
any related metadata (such as the `at_hash` field in
the ID token), should be validated.
"""
# Currently, Synapse only uses the access_token to fetch user metadata
# from the userinfo endpoint. Therefore we only have a single criteria
# to check right now but this may change in the future and this function
# should be updated if more usages are introduced.
#
# For example, if we start to use the access_token given to us by the
# IdP for more things, such as accessing Resource Server APIs.
return self._uses_userinfo
@property
def issuer(self) -> str:
"""The issuer identifying this provider."""
@@ -957,9 +975,16 @@ class OidcProvider:
"nonce": nonce,
"client_id": self._client_auth.client_id,
}
if "access_token" in token:
if self._uses_access_token and "access_token" in token:
# If we got an `access_token`, there should be an `at_hash` claim
# in the `id_token` that we can check against.
# in the `id_token` that we can check against. Setting this
# instructs authlib to check the value of `at_hash` in the
# ID token.
#
# We only need to verify the access token if we actually make use of it,
# which currently only happens when we need to fetch the user's information
# from the userinfo_endpoint. Thus, this check is effectively gated on
# self._uses_userinfo (via self._uses_access_token).
claims_params["access_token"] = token["access_token"]
claims_options = {"iss": {"values": [metadata["issuer"]]}}

View File

@@ -159,7 +159,10 @@ class RegistrationHandler:
if not localpart:
raise SynapseError(400, "User ID cannot be empty", Codes.INVALID_USERNAME)
if localpart[0] == "_":
if (
localpart[0] == "_"
and not self.hs.config.registration.allow_underscore_prefixed_localpart
):
raise SynapseError(
400, "User ID may not begin with _", Codes.INVALID_USERNAME
)

View File

@@ -1806,7 +1806,7 @@ class RoomShutdownHandler:
] = None,
) -> Optional[ShutdownRoomResponse]:
"""
Shuts down a room. Moves all local users and room aliases automatically
Shuts down a room. Moves all joined local users and room aliases automatically
to a new room if `new_room_user_id` is set. Otherwise local users only
leave the room without any information.
@@ -1949,16 +1949,17 @@ class RoomShutdownHandler:
# Join users to new room
if new_room_user_id:
assert new_room_id is not None
await self.room_member_handler.update_membership(
requester=target_requester,
target=target_requester.user,
room_id=new_room_id,
action=Membership.JOIN,
content={},
ratelimit=False,
require_consent=False,
)
if membership == Membership.JOIN:
assert new_room_id is not None
await self.room_member_handler.update_membership(
requester=target_requester,
target=target_requester.user,
room_id=new_room_id,
action=Membership.JOIN,
content={},
ratelimit=False,
require_consent=False,
)
result["kicked_users"].append(user_id)
if update_result_fct:

View File

@@ -0,0 +1,89 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2016-2021 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#
import logging
from typing import TYPE_CHECKING
from synapse.events import EventBase
from synapse.types.handlers.policy_server import RECOMMENDATION_OK
from synapse.util.stringutils import parse_and_validate_server_name
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class RoomPolicyHandler:
def __init__(self, hs: "HomeServer"):
self._hs = hs
self._store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self._event_auth_handler = hs.get_event_auth_handler()
self._federation_client = hs.get_federation_client()
async def is_event_allowed(self, event: EventBase) -> bool:
"""Check if the given event is allowed in the room by the policy server.
Note: This will *always* return True if the room's policy server is Synapse
itself. This is because Synapse can't be a policy server (currently).
If no policy server is configured in the room, this returns True. Similarly, if
the policy server is invalid in any way (not joined, not a server, etc), this
returns True.
If a valid and contactable policy server is configured in the room, this returns
True if that server suggests the event is not spammy, and False otherwise.
Args:
event: The event to check. This should be a fully-formed PDU.
Returns:
bool: True if the event is allowed in the room, False otherwise.
"""
policy_event = await self._storage_controllers.state.get_current_state_event(
event.room_id, "org.matrix.msc4284.policy", ""
)
if not policy_event:
return True # no policy server == default allow
policy_server = policy_event.content.get("via", "")
if policy_server is None or not isinstance(policy_server, str):
return True # no policy server == default allow
if policy_server == self._hs.hostname:
return True # Synapse itself can't be a policy server (currently)
try:
parse_and_validate_server_name(policy_server)
except ValueError:
return True # invalid policy server == default allow
is_in_room = await self._event_auth_handler.is_host_in_room(
event.room_id, policy_server
)
if not is_in_room:
return True # policy server not in room == default allow
# At this point, the server appears valid and is in the room, so ask it to check
# the event.
recommendation = await self._federation_client.get_pdu_policy_recommendation(
policy_server, event
)
if recommendation != RECOMMENDATION_OK:
return False
return True # default allow

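For reference, the room state the handler inspects would plausibly look like the following (a hypothetical example; only the fields read above, the `org.matrix.msc4284.policy` event type, the empty state key, and `content["via"]`, are grounded in the code):

```python
# Hypothetical room state enabling a policy server, as consumed by
# RoomPolicyHandler.is_event_allowed above:
policy_state_event = {
    "type": "org.matrix.msc4284.policy",
    "state_key": "",
    "content": {
        "via": "policy.example.org",  # hypothetical policy server name
    },
}
```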
View File

@@ -36,10 +36,17 @@ class SetPasswordHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastores().main
self._auth_handler = hs.get_auth_handler()
# This can only be instantiated on the main process.
device_handler = hs.get_device_handler()
assert isinstance(device_handler, DeviceHandler)
self._device_handler = device_handler
# We don't need the device handler if password changing is disabled.
# This allows us to instantiate the SetPasswordHandler on the workers
# that have admin APIs for MAS
if self._auth_handler.can_change_password():
# This can only be instantiated on the main process.
device_handler = hs.get_device_handler()
assert isinstance(device_handler, DeviceHandler)
self._device_handler: Optional[DeviceHandler] = device_handler
else:
self._device_handler = None
async def set_password(
self,
@@ -51,6 +58,9 @@ class SetPasswordHandler:
if not self._auth_handler.can_change_password():
raise SynapseError(403, "Password change disabled", errcode=Codes.FORBIDDEN)
# We should have this available only if password changing is enabled.
assert self._device_handler is not None
try:
await self.store.user_set_password_hash(user_id, password_hash)
except StoreError as e:

View File

@@ -271,6 +271,7 @@ class SlidingSyncHandler:
from_token=from_token,
to_token=to_token,
newly_joined=room_id in interested_rooms.newly_joined_rooms,
newly_left=room_id in interested_rooms.newly_left_rooms,
is_dm=room_id in interested_rooms.dm_room_ids,
)
@@ -542,6 +543,7 @@ class SlidingSyncHandler:
from_token: Optional[SlidingSyncStreamToken],
to_token: StreamToken,
newly_joined: bool,
newly_left: bool,
is_dm: bool,
) -> SlidingSyncResult.RoomResult:
"""
@@ -559,6 +561,7 @@ class SlidingSyncHandler:
from_token: The point in the stream to sync from.
to_token: The point in the stream to sync up to.
newly_joined: If the user has newly joined the room
newly_left: If the user has newly left the room
is_dm: Whether the room is a DM room
"""
user = sync_config.user
@@ -856,6 +859,26 @@ class SlidingSyncHandler:
# TODO: Limit the number of state events we're about to send down
# the room, if its too many we should change this to an
# `initial=True`?
# For the case of rejecting remote invites, the leave event won't be
# returned by `get_current_state_deltas_for_room`. This is due to the current
# state only being filled out for rooms the server is in, and so doesn't pick
# up out-of-band leaves (including locally rejected invites) as these events
# are outliers and not added to the `current_state_delta_stream`.
#
# We rely on being explicitly told that the room has been `newly_left` to
# ensure we extract the out-of-band leave.
if newly_left and room_membership_for_user_at_to_token.event_id is not None:
membership_changed = True
leave_event = await self.store.get_event(
room_membership_for_user_at_to_token.event_id
)
state_key = leave_event.get_state_key()
if state_key is not None:
room_state_delta_id_map[(leave_event.type, state_key)] = (
room_membership_for_user_at_to_token.event_id
)
deltas = await self.get_current_state_deltas_for_room(
room_id=room_id,
room_membership_for_user_at_to_token=room_membership_for_user_at_to_token,

View File

@@ -244,14 +244,47 @@ class SlidingSyncRoomLists:
# Note: this won't include rooms the user has left themselves. We add back
# `newly_left` rooms below. This is more efficient than fetching all rooms and
# then filtering out the old left rooms.
room_membership_for_user_map = await self.store.get_sliding_sync_rooms_for_user(
user_id
room_membership_for_user_map = (
await self.store.get_sliding_sync_rooms_for_user_from_membership_snapshots(
user_id
)
)
# To play nice with the rewind logic below, we also need to fetch the rooms the
# user has left themselves, but only if that membership changed after the `to_token`.
#
# If a leave happens *after* the token range, we may still have been joined to
# the room (or had some other non-self-leave membership relevant to sync)
# before it, so we need to include the room in the list of potentially relevant
# rooms and apply our rewind logic (outside of this function) to see if it's
# actually relevant.
#
# We do this separately from
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` because those
# results are cached and the `to_token` isn't very cache friendly (people are
# constantly requesting with new tokens).
self_leave_room_membership_for_user_map = (
await self.store.get_sliding_sync_self_leave_rooms_after_to_token(
user_id, to_token
)
)
if self_leave_room_membership_for_user_map:
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
room_membership_for_user_map.update(self_leave_room_membership_for_user_map)
# Remove invites from ignored users
ignored_users = await self.store.ignored_users(user_id)
if ignored_users:
# TODO: It would be nice to avoid these copies
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
# Make a copy so we don't run into an error: `dictionary changed size during
# iteration`, when we remove items
@@ -263,11 +296,23 @@ class SlidingSyncRoomLists:
):
room_membership_for_user_map.pop(room_id, None)
(
newly_joined_room_ids,
newly_left_room_map,
) = await self._get_newly_joined_and_left_rooms(
user_id, from_token=from_token, to_token=to_token
)
changes = await self._get_rewind_changes_to_current_membership_to_token(
sync_config.user, room_membership_for_user_map, to_token=to_token
)
if changes:
# TODO: It would be nice to avoid these copies
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id, change in changes.items():
if change is None:
@@ -278,7 +323,7 @@ class SlidingSyncRoomLists:
existing_room = room_membership_for_user_map.get(room_id)
if existing_room is not None:
# Update room membership events to the point in time of the `to_token`
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_for_user = RoomsForUserSlidingSync(
room_id=room_id,
sender=change.sender,
membership=change.membership,
@@ -290,18 +335,18 @@ class SlidingSyncRoomLists:
room_type=existing_room.room_type,
is_encrypted=existing_room.is_encrypted,
)
(
newly_joined_room_ids,
newly_left_room_map,
) = await self._get_newly_joined_and_left_rooms(
user_id, from_token=from_token, to_token=to_token
)
dm_room_ids = await self._get_dm_rooms_for_user(user_id)
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=room_for_user,
newly_left=room_id in newly_left_room_map,
):
room_membership_for_user_map[room_id] = room_for_user
else:
room_membership_for_user_map.pop(room_id, None)
# Add back `newly_left` rooms (rooms left in the from -> to token range).
#
# We do this because `get_sliding_sync_rooms_for_user(...)` doesn't include
# We do this because `get_sliding_sync_rooms_for_user_from_membership_snapshots(...)` doesn't include
# rooms that the user left themselves as it's more efficient to add them back
# here than to fetch all rooms and then filter out the old left rooms. The user
# only leaves a room once in a blue moon so this barely needs to run.
@@ -310,7 +355,12 @@ class SlidingSyncRoomLists:
newly_left_room_map.keys() - room_membership_for_user_map.keys()
)
if missing_newly_left_rooms:
# TODO: It would be nice to avoid these copies
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id in missing_newly_left_rooms:
newly_left_room_for_user = newly_left_room_map[room_id]
@@ -327,14 +377,21 @@ class SlidingSyncRoomLists:
# If the membership exists, it's just a normal user left the room on
# their own
if newly_left_room_for_user_sliding_sync is not None:
room_membership_for_user_map[room_id] = (
newly_left_room_for_user_sliding_sync
)
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=newly_left_room_for_user_sliding_sync,
newly_left=room_id in newly_left_room_map,
):
room_membership_for_user_map[room_id] = (
newly_left_room_for_user_sliding_sync
)
else:
room_membership_for_user_map.pop(room_id, None)
change = changes.get(room_id)
if change is not None:
# Update room membership events to the point in time of the `to_token`
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_for_user = RoomsForUserSlidingSync(
room_id=room_id,
sender=change.sender,
membership=change.membership,
@@ -346,6 +403,14 @@ class SlidingSyncRoomLists:
room_type=newly_left_room_for_user_sliding_sync.room_type,
is_encrypted=newly_left_room_for_user_sliding_sync.is_encrypted,
)
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=room_for_user,
newly_left=room_id in newly_left_room_map,
):
room_membership_for_user_map[room_id] = room_for_user
else:
room_membership_for_user_map.pop(room_id, None)
# If we are `newly_left` from the room but can't find any membership,
# then we have been "state reset" out of the room
@@ -367,7 +432,7 @@ class SlidingSyncRoomLists:
newly_left_room_for_user.event_pos.to_room_stream_token(),
)
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_for_user = RoomsForUserSlidingSync(
room_id=room_id,
sender=newly_left_room_for_user.sender,
membership=newly_left_room_for_user.membership,
@@ -378,6 +443,16 @@ class SlidingSyncRoomLists:
room_type=room_type,
is_encrypted=is_encrypted,
)
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=room_for_user,
newly_left=room_id in newly_left_room_map,
):
room_membership_for_user_map[room_id] = room_for_user
else:
room_membership_for_user_map.pop(room_id, None)
dm_room_ids = await self._get_dm_rooms_for_user(user_id)
if sync_config.lists:
sync_room_map = room_membership_for_user_map
@@ -493,7 +568,12 @@ class SlidingSyncRoomLists:
if sync_config.room_subscriptions:
with start_active_span("assemble_room_subscriptions"):
# TODO: It would be nice to avoid these copies
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
# Find which rooms are partially stated and may need to be filtered out
@@ -1040,7 +1120,7 @@ class SlidingSyncRoomLists:
(
newly_joined_room_ids,
newly_left_room_map,
) = await self._get_newly_joined_and_left_rooms(
) = await self._get_newly_joined_and_left_rooms_fallback(
user_id, to_token=to_token, from_token=from_token
)
@@ -1096,6 +1176,53 @@ class SlidingSyncRoomLists:
"state reset" out of the room, and so that room would not be part of the
"current memberships" of the user.
Returns:
A 2-tuple of newly joined room IDs and a map of newly_left room
IDs to the `RoomsForUserStateReset` entry.
We're using `RoomsForUserStateReset`, but that doesn't necessarily mean the
user was state reset out of the rooms. It's just that the `event_id`/`sender`
are optional, so we can't tell the difference between the server leaving the
room (because the user was the last participant and left) and the user being
state reset out of the room. To actually check for a state reset, you need to
check whether a membership still exists in the room.
"""
newly_joined_room_ids: Set[str] = set()
newly_left_room_map: Dict[str, RoomsForUserStateReset] = {}
if not from_token:
return newly_joined_room_ids, newly_left_room_map
changes = await self.store.get_sliding_sync_membership_changes(
user_id,
from_key=from_token.room_key,
to_key=to_token.room_key,
excluded_room_ids=set(self.rooms_to_exclude_globally),
)
for room_id, entry in changes.items():
if entry.membership == Membership.JOIN:
newly_joined_room_ids.add(room_id)
elif entry.membership == Membership.LEAVE:
newly_left_room_map[room_id] = entry
return newly_joined_room_ids, newly_left_room_map
@trace
async def _get_newly_joined_and_left_rooms_fallback(
self,
user_id: str,
to_token: StreamToken,
from_token: Optional[StreamToken],
) -> Tuple[AbstractSet[str], Mapping[str, RoomsForUserStateReset]]:
"""Fetch the sets of rooms that the user newly joined or left in the
given token range.
Note: there may be rooms in the newly left rooms where the user was
"state reset" out of the room, and so that room would not be part of the
"current memberships" of the user.
Returns:
A 2-tuple of newly joined room IDs and a map of newly_left room
IDs to the `RoomsForUserStateReset` entry.

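The repeated FIXME comments all describe the same copy-on-write discipline: a cached call such as `get_sliding_sync_rooms_for_user_from_membership_snapshots` must return an immutable `Mapping` (mutating it would corrupt the cache for every other caller), so the code lazily copies it into a plain `dict`, and only on the paths that actually need to mutate. A hedged sketch of the idea in isolation:

from types import MappingProxyType
from typing import Dict, Mapping

def get_cached_rooms() -> Mapping[str, str]:
    # Stand-in for a cached storage call: callers must not mutate the result.
    return MappingProxyType({"!a:example.org": "join", "!b:example.org": "invite"})

rooms: Mapping[str, str] = get_cached_rooms()
updates = {"!c:example.org": "leave"}
if updates:
    # Copy only when there is actually something to change.
    mutable: Dict[str, str] = dict(rooms)
    mutable.update(updates)
    rooms = mutable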
View File

@@ -1192,9 +1192,9 @@ class SsoHandler:
"""
# It is expected that this is the main process.
assert isinstance(
self._device_handler, DeviceHandler
), "revoking SSO sessions can only be called on the main process"
assert isinstance(self._device_handler, DeviceHandler), (
"revoking SSO sessions can only be called on the main process"
)
# Invalidate any running user-mapping sessions
to_delete = []

View File

@@ -108,6 +108,9 @@ class UserDirectoryHandler(StateDeltasHandler):
self.is_mine_id = hs.is_mine_id
self.update_user_directory = hs.config.worker.should_update_user_directory
self.search_all_users = hs.config.userdirectory.user_directory_search_all_users
self.exclude_remote_users = (
hs.config.userdirectory.user_directory_exclude_remote_users
)
self.show_locked_users = hs.config.userdirectory.show_locked_users
self._spam_checker_module_callbacks = hs.get_module_api_callbacks().spam_checker
self._hs = hs

View File

@@ -425,9 +425,9 @@ class MatrixFederationHttpClient:
)
else:
proxy_authorization_secret = hs.config.worker.worker_replication_secret
assert (
proxy_authorization_secret is not None
), "`worker_replication_secret` must be set when using `outbound_federation_restricted_to` (used to authenticate requests across workers)"
assert proxy_authorization_secret is not None, (
"`worker_replication_secret` must be set when using `outbound_federation_restricted_to` (used to authenticate requests across workers)"
)
federation_proxy_credentials = BearerProxyCredentials(
proxy_authorization_secret.encode("ascii")
)

View File

@@ -173,9 +173,9 @@ class ProxyAgent(_AgentBase):
self._federation_proxy_endpoint: Optional[IStreamClientEndpoint] = None
self._federation_proxy_credentials: Optional[ProxyCredentials] = None
if federation_proxy_locations:
assert (
federation_proxy_credentials is not None
), "`federation_proxy_credentials` are required when using `federation_proxy_locations`"
assert federation_proxy_credentials is not None, (
"`federation_proxy_credentials` are required when using `federation_proxy_locations`"
)
endpoints: List[IStreamClientEndpoint] = []
for federation_proxy_location in federation_proxy_locations:
@@ -302,9 +302,9 @@ class ProxyAgent(_AgentBase):
parsed_uri.scheme == b"matrix-federation"
and self._federation_proxy_endpoint
):
assert (
self._federation_proxy_credentials is not None
), "`federation_proxy_credentials` are required when using `federation_proxy_locations`"
assert self._federation_proxy_credentials is not None, (
"`federation_proxy_credentials` are required when using `federation_proxy_locations`"
)
# Set a Proxy-Authorization header
if headers is None:

View File

@@ -582,9 +582,9 @@ def parse_enum(
is not one of those allowed values.
"""
# Assert the enum values are strings.
assert all(
isinstance(e.value, str) for e in E
), "parse_enum only works with string values"
assert all(isinstance(e.value, str) for e in E), (
"parse_enum only works with string values"
)
str_value = parse_string(
request,
name,

View File

@@ -378,7 +378,6 @@ class MediaRepository:
media_length=content_length,
user_id=auth_user,
sha256=sha256,
# TODO: Better name?
quarantined_by="system" if should_quarantine else None,
)

View File

@@ -41,7 +41,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.http.client import SimpleHttpClient
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.media._base import FileInfo, get_filename_from_headers
from synapse.media.media_storage import MediaStorage
from synapse.media.media_storage import MediaStorage, SHA256TransparentIOWriter
from synapse.media.oembed import OEmbedProvider
from synapse.media.preview_html import decode_body, parse_html_to_open_graph
from synapse.metrics.background_process_metrics import run_as_background_process
@@ -593,17 +593,26 @@ class UrlPreviewer:
file_info = FileInfo(server_name=None, file_id=file_id, url_cache=True)
async with self.media_storage.store_into_file(file_info) as (f, fname):
sha256writer = SHA256TransparentIOWriter(f)
if url.startswith("data:"):
if not allow_data_urls:
raise SynapseError(
500, "Previewing of data: URLs is forbidden", Codes.UNKNOWN
)
download_result = await self._parse_data_url(url, f)
download_result = await self._parse_data_url(url, sha256writer.wrap())
else:
download_result = await self._download_url(url, f)
download_result = await self._download_url(url, sha256writer.wrap())
try:
sha256 = sha256writer.hexdigest()
should_quarantine = await self.store.get_is_hash_quarantined(sha256)
if should_quarantine:
logger.warning(
"Media has been automatically quarantined as it matched existing quarantined media"
)
time_now_ms = self.clock.time_msec()
await self.store.store_local_media(
@@ -614,6 +623,8 @@ class UrlPreviewer:
media_length=download_result.length,
user_id=user,
url_cache=url,
sha256=sha256,
quarantined_by="system" if should_quarantine else None,
)
except Exception as e:

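`SHA256TransparentIOWriter` lets the previewer hash media as it streams to disk, so the digest can be checked against the quarantine list without re-reading the file. A simplified stand-in for such a transparent writer (the real class lives in `synapse.media.media_storage`; this sketch assumes only the behaviour visible in the diff):

import hashlib
from typing import BinaryIO

class HashingWriter:
    """Forwards writes to an underlying file while updating a SHA-256 digest."""

    def __init__(self, inner: BinaryIO) -> None:
        self._inner = inner
        self._hash = hashlib.sha256()

    def write(self, data: bytes) -> int:
        self._hash.update(data)
        return self._inner.write(data)

    def hexdigest(self) -> str:
        return self._hash.hexdigest()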
View File

@@ -894,9 +894,9 @@ class ModuleApi:
Raises:
synapse.api.errors.AuthError: the access token is invalid
"""
assert isinstance(
self._device_handler, DeviceHandler
), "invalidate_access_token can only be called on the main process"
assert isinstance(self._device_handler, DeviceHandler), (
"invalidate_access_token can only be called on the main process"
)
# see if the access token corresponds to a device
user_info = yield defer.ensureDeferred(

View File

@@ -66,7 +66,6 @@ from synapse.types import (
from synapse.util.async_helpers import (
timeout_deferred,
)
from synapse.util.metrics import Measure
from synapse.util.stringutils import shortstr
from synapse.visibility import filter_events_for_client
@@ -159,6 +158,9 @@ class _NotifierUserStream:
lst = notifier.room_to_user_streams.get(room, set())
lst.discard(self)
if not lst:
notifier.room_to_user_streams.pop(room, None)
notifier.user_to_user_stream.pop(self.user_id)
def count_listeners(self) -> int:
@@ -520,20 +522,22 @@ class Notifier:
users = users or []
rooms = rooms or []
with Measure(self.clock, "on_new_event"):
user_streams: Set[_NotifierUserStream] = set()
log_kv(
{
"waking_up_explicit_users": len(users),
"waking_up_explicit_rooms": len(rooms),
"users": shortstr(users),
"rooms": shortstr(rooms),
"stream": stream_key,
"stream_id": new_token,
}
)
# Only calculate which user streams to wake up if there are, in fact,
# any user streams registered.
if self.user_to_user_stream or self.room_to_user_streams:
for user in users:
user_stream = self.user_to_user_stream.get(str(user))
if user_stream is not None:
@@ -565,25 +569,25 @@ class Notifier:
# We resolve all these deferreds in one go so that we only need to
# call `PreserveLoggingContext` once, as it has a bunch of overhead
# (to calculate performance stats)
with PreserveLoggingContext():
for listener in listeners:
listener.callback(current_token)
if listeners:
with PreserveLoggingContext():
for listener in listeners:
listener.callback(current_token)
users_woken_by_stream_counter.labels(stream_key).inc(len(user_streams))
if user_streams:
users_woken_by_stream_counter.labels(stream_key).inc(len(user_streams))
self.notify_replication()
# Notify appservices.
try:
self.appservice_handler.notify_interested_services_ephemeral(
stream_key,
new_token,
users,
)
except Exception:
logger.exception(
"Error notifying application services of ephemeral events"
)
# Notify appservices.
try:
self.appservice_handler.notify_interested_services_ephemeral(
stream_key,
new_token,
users,
)
except Exception:
logger.exception("Error notifying application services of ephemeral events")
def on_new_replication_data(self) -> None:
"""Used to inform replication listeners that something has happened

View File

@@ -205,6 +205,12 @@ class HttpPusher(Pusher):
if self._is_processing:
return
# Check if we are trying, but failing, to contact the pusher. If so, we
# don't try and start processing immediately and instead wait for the
# retry loop to try again later (which is controlled by the timer).
if self.failing_since and self.timed_call and self.timed_call.active():
return
run_as_background_process("httppush.process", self._process)
async def _process(self) -> None:

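The new guard stops a failing pusher from being kicked awake by every incoming event: while `failing_since` is set and the retry `timed_call` (a Twisted `IDelayedCall`) is still pending, processing is left to the timer. The same check in isolation (attribute names mirror the diff; a sketch, not the actual implementation):

def should_start_processing(failing_since, timed_call) -> bool:
    # In a failure backoff with a retry already scheduled: let the timer run it.
    if failing_since and timed_call is not None and timed_call.active():
        return False
    return True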
View File

@@ -128,9 +128,9 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
# We reserve `instance_name` as a parameter to sending requests, so we
# assert here that sub classes don't try and use the name.
assert (
"instance_name" not in self.PATH_ARGS
), "`instance_name` is a reserved parameter name"
assert "instance_name" not in self.PATH_ARGS, (
"`instance_name` is a reserved parameter name"
)
assert (
"instance_name"
not in signature(self.__class__._serialize_payload).parameters

View File

@@ -200,9 +200,9 @@ class EventsStream(_StreamFromIdGen):
# we rely on get_all_new_forward_event_rows strictly honouring the limit, so
# that we know it is safe to just take upper_limit = event_rows[-1][0].
assert (
len(event_rows) <= target_row_count
), "get_all_new_forward_event_rows did not honour row limit"
assert len(event_rows) <= target_row_count, (
"get_all_new_forward_event_rows did not honour row limit"
)
# if we hit the limit on event_updates, there's no point in going beyond the
# last stream_id in the batch for the other sources.

View File

@@ -187,7 +187,6 @@ class ClientRestResource(JsonResource):
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_metadata.register_servlets,
]:
continue

View File

@@ -39,7 +39,7 @@ from typing import TYPE_CHECKING, Optional, Tuple
from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.handlers.pagination import PURGE_HISTORY_ACTION_NAME
from synapse.http.server import HttpServer, JsonResource
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
@@ -51,6 +51,7 @@ from synapse.rest.admin.background_updates import (
from synapse.rest.admin.devices import (
DeleteDevicesRestServlet,
DeviceRestServlet,
DevicesGetRestServlet,
DevicesRestServlet,
)
from synapse.rest.admin.event_reports import (
@@ -86,6 +87,7 @@ from synapse.rest.admin.rooms import (
RoomStateRestServlet,
RoomTimestampToEventRestServlet,
)
from synapse.rest.admin.scheduled_tasks import ScheduledTasksRestServlet
from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
from synapse.rest.admin.statistics import (
LargestRoomsStatistics,
@@ -205,8 +207,7 @@ class PurgeHistoryRestServlet(RestServlet):
(stream, topo, _event_id) = r
token = "t%d-%d" % (topo, stream)
logger.info(
"[purge] purging up to token %s (received_ts %i => "
"stream_ordering %i)",
"[purge] purging up to token %s (received_ts %i => stream_ordering %i)",
token,
ts,
stream_ordering,
@@ -263,27 +264,24 @@ class PurgeHistoryStatusRestServlet(RestServlet):
########################################################################################
class AdminRestResource(JsonResource):
"""The REST resource which gets mounted at /_synapse/admin"""
def __init__(self, hs: "HomeServer"):
JsonResource.__init__(self, hs, canonical_json=False)
register_servlets(hs, self)
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
"""
Register all the admin servlets.
"""
# Admin servlets aren't registered on workers.
RoomRestServlet(hs).register(http_server)
# Admin servlets below may not work on workers.
if hs.config.worker.worker_app is not None:
# Some admin servlets can be mounted on workers when MSC3861 is enabled.
if hs.config.experimental.msc3861.enabled:
register_servlets_for_msc3861_delegation(hs, http_server)
return
register_servlets_for_client_rest_resource(hs, http_server)
BlockRoomRestServlet(hs).register(http_server)
ListRoomRestServlet(hs).register(http_server)
RoomStateRestServlet(hs).register(http_server)
RoomRestServlet(hs).register(http_server)
RoomRestV2Servlet(hs).register(http_server)
RoomMembersRestServlet(hs).register(http_server)
DeleteRoomStatusByDeleteIdRestServlet(hs).register(http_server)
@@ -337,6 +335,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
BackgroundUpdateStartJobRestServlet(hs).register(http_server)
ExperimentalFeaturesRestServlet(hs).register(http_server)
SuspendAccountRestServlet(hs).register(http_server)
ScheduledTasksRestServlet(hs).register(http_server)
def register_servlets_for_client_rest_resource(
@@ -364,4 +363,16 @@ def register_servlets_for_client_rest_resource(
ListMediaInRoom(hs).register(http_server)
# don't add more things here: new servlets should only be exposed on
# /_synapse/admin so should not go here. Instead register them in AdminRestResource.
# /_synapse/admin so should not go here. Instead register them in register_servlets.
def register_servlets_for_msc3861_delegation(
hs: "HomeServer", http_server: HttpServer
) -> None:
"""Register servlets needed by MAS when MSC3861 is enabled"""
assert hs.config.experimental.msc3861.enabled
UserRestServletV2(hs).register(http_server)
UsernameAvailableRestServlet(hs).register(http_server)
UserReplaceMasterCrossSigningKeyRestServlet(hs).register(http_server)
DevicesGetRestServlet(hs).register(http_server)

View File

@@ -113,18 +113,19 @@ class DeviceRestServlet(RestServlet):
return HTTPStatus.OK, {}
class DevicesRestServlet(RestServlet):
class DevicesGetRestServlet(RestServlet):
"""
Retrieve the given user's devices
This can be mounted on workers as it is read-only, as opposed
to `DevicesRestServlet`.
"""
PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/devices$", "v2")
def __init__(self, hs: "HomeServer"):
self.auth = hs.get_auth()
handler = hs.get_device_handler()
assert isinstance(handler, DeviceHandler)
self.device_handler = handler
self.device_worker_handler = hs.get_device_handler()
self.store = hs.get_datastores().main
self.is_mine = hs.is_mine
@@ -141,9 +142,24 @@ class DevicesRestServlet(RestServlet):
if u is None:
raise NotFoundError("Unknown user")
devices = await self.device_handler.get_devices_by_user(target_user.to_string())
devices = await self.device_worker_handler.get_devices_by_user(
target_user.to_string()
)
return HTTPStatus.OK, {"devices": devices, "total": len(devices)}
class DevicesRestServlet(DevicesGetRestServlet):
"""
Retrieve the given user's devices
"""
PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/devices$", "v2")
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
assert isinstance(self.device_worker_handler, DeviceHandler)
self.device_handler = self.device_worker_handler
async def on_POST(
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:

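The split yields a read-only base servlet that is safe to mount on any worker, with mutation layered on in a subclass that can only be constructed where the full `DeviceHandler` exists. In miniature (hypothetical names and methods, for illustration only):

class ReadOnlyDevicesServlet:
    def __init__(self, worker_handler) -> None:
        # The worker-safe handler is enough for reads.
        self.worker_handler = worker_handler

    def on_GET(self, request):
        return self.worker_handler.get_devices_by_user(request.user_id)

class ReadWriteDevicesServlet(ReadOnlyDevicesServlet):
    def __init__(self, full_handler) -> None:
        super().__init__(full_handler)
        # Writes need the full handler, present only on the main process.
        self.full_handler = full_handler

    def on_POST(self, request):
        return self.full_handler.delete_devices(request.user_id, request.body)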
View File

@@ -150,6 +150,7 @@ class RoomRestV2Servlet(RestServlet):
def _convert_delete_task_to_response(task: ScheduledTask) -> JsonDict:
return {
"delete_id": task.id,
"room_id": task.resource_id,
"status": task.status,
"shutdown_room": task.result,
}

View File

@@ -0,0 +1,70 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#
#
from typing import TYPE_CHECKING, Tuple
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.rest.admin import admin_patterns, assert_requester_is_admin
from synapse.types import JsonDict, TaskStatus
if TYPE_CHECKING:
from synapse.server import HomeServer
class ScheduledTasksRestServlet(RestServlet):
"""Get a list of scheduled tasks and their statuses
optionally filtered by action name, resource id, status, and max timestamp
"""
PATTERNS = admin_patterns("/scheduled_tasks$")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastores().main
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self._auth, request)
# extract query params
action_name = parse_string(request, "action_name")
resource_id = parse_string(request, "resource_id")
status = parse_string(request, "job_status")
max_timestamp = parse_integer(request, "max_timestamp")
actions = [action_name] if action_name else None
statuses = [TaskStatus(status)] if status else None
tasks = await self._store.get_scheduled_tasks(
actions=actions,
resource_id=resource_id,
statuses=statuses,
max_timestamp=max_timestamp,
)
json_tasks = []
for task in tasks:
result_task = {
"id": task.id,
"action": task.action,
"status": task.status,
"timestamp_ms": task.timestamp,
"resource_id": task.resource_id,
"result": task.result,
"error": task.error,
}
json_tasks.append(result_task)
return 200, {"scheduled_tasks": json_tasks}

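Assuming the default `v1` admin prefix that `admin_patterns` applies, the new endpoint is queried like any other admin API. A hypothetical example using `requests` with a placeholder homeserver and token (the `job_status` values come from `TaskStatus`):

import requests

resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v1/scheduled_tasks",
    params={"job_status": "active"},
    headers={"Authorization": "Bearer <admin_access_token>"},
)
for task in resp.json()["scheduled_tasks"]:
    print(task["id"], task["action"], task["status"])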
View File

@@ -350,6 +350,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
raise SynapseError(
400,
"Adding an email to your account is disabled on this server",
Codes.THREEPID_MEDIUM_NOT_SUPPORTED,
)
body = parse_and_validate_json_object_from_request(
@@ -456,6 +457,7 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
raise SynapseError(
400,
"Adding phone numbers to user account is not supported by this homeserver",
Codes.THREEPID_MEDIUM_NOT_SUPPORTED,
)
ret = await self.identity_handler.requestMsisdnToken(
@@ -498,7 +500,9 @@ class AddThreepidEmailSubmitTokenServlet(RestServlet):
"Adding emails have been disabled due to lack of an email config"
)
raise SynapseError(
400, "Adding an email to your account is disabled on this server"
400,
"Adding an email to your account is disabled on this server",
Codes.THREEPID_MEDIUM_NOT_SUPPORTED,
)
sid = parse_string(request, "sid", required=True)

View File

@@ -143,11 +143,11 @@ class DeviceRestServlet(RestServlet):
self.hs = hs
self.auth = hs.get_auth()
handler = hs.get_device_handler()
assert isinstance(handler, DeviceHandler)
self.device_handler = handler
self.auth_handler = hs.get_auth_handler()
self._msc3852_enabled = hs.config.experimental.msc3852_enabled
self._msc3861_oauth_delegation_enabled = hs.config.experimental.msc3861.enabled
self._is_main_process = hs.config.worker.worker_app is None
async def on_GET(
self, request: SynapseRequest, device_id: str
@@ -179,6 +179,14 @@ class DeviceRestServlet(RestServlet):
async def on_DELETE(
self, request: SynapseRequest, device_id: str
) -> Tuple[int, JsonDict]:
# Can only be run on main process, as changes to device lists must
# happen on main.
if not self._is_main_process:
error_message = "DELETE on /devices/ must be routed to main process"
logger.error(error_message)
raise SynapseError(500, error_message)
assert isinstance(self.device_handler, DeviceHandler)
requester = await self.auth.get_user_by_req(request)
try:
@@ -223,6 +231,14 @@ class DeviceRestServlet(RestServlet):
async def on_PUT(
self, request: SynapseRequest, device_id: str
) -> Tuple[int, JsonDict]:
# Can only be run on main process, as changes to device lists must
# happen on main.
if not self._is_main_process:
error_message = "PUT on /devices/ must be routed to main process"
logger.error(error_message)
raise SynapseError(500, error_message)
assert isinstance(self.device_handler, DeviceHandler)
requester = await self.auth.get_user_by_req(request, allow_guest=True)
body = parse_and_validate_json_object_from_request(request, self.PutBody)
@@ -585,9 +601,9 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
):
DeleteDevicesRestServlet(hs).register(http_server)
DevicesRestServlet(hs).register(http_server)
DeviceRestServlet(hs).register(http_server)
if hs.config.worker.worker_app is None:
DeviceRestServlet(hs).register(http_server)
if hs.config.experimental.msc2697_enabled:
DehydratedDeviceServlet(hs).register(http_server)
ClaimDehydratedDeviceServlet(hs).register(http_server)

View File

@@ -39,9 +39,7 @@ logger = logging.getLogger(__name__)
class ReceiptRestServlet(RestServlet):
PATTERNS = client_patterns(
"/rooms/(?P<room_id>[^/]*)"
"/receipt/(?P<receipt_type>[^/]*)"
"/(?P<event_id>[^/]*)$"
"/rooms/(?P<room_id>[^/]*)/receipt/(?P<receipt_type>[^/]*)/(?P<event_id>[^/]*)$"
)
CATEGORY = "Receipts requests"

View File

@@ -44,9 +44,9 @@ class MSC4108DelegationRendezvousServlet(RestServlet):
redirection_target: Optional[str] = (
hs.config.experimental.msc4108_delegation_endpoint
)
assert (
redirection_target is not None
), "Servlet is only registered if there is a delegation target"
assert redirection_target is not None, (
"Servlet is only registered if there is a delegation target"
)
self.endpoint = redirection_target.encode("utf-8")
async def on_POST(self, request: SynapseRequest) -> None:

View File

@@ -24,7 +24,7 @@ from collections import defaultdict
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Tuple, Union
from synapse.api.constants import AccountDataTypes, EduTypes, Membership, PresenceState
from synapse.api.errors import Codes, LimitExceededError, StoreError, SynapseError
from synapse.api.errors import Codes, StoreError, SynapseError
from synapse.api.filtering import FilterCollection
from synapse.api.presence import UserPresenceState
from synapse.api.ratelimiting import Ratelimiter
@@ -248,9 +248,8 @@ class SyncRestServlet(RestServlet):
await self._server_notices_sender.on_user_syncing(user.to_string())
# ignore the presence update if the ratelimit is exceeded but do not pause the request
try:
await self._presence_per_user_limiter.ratelimit(requester, pause=0.0)
except LimitExceededError:
allowed, _ = await self._presence_per_user_limiter.can_do_action(requester)
if not allowed:
affect_presence = False
logger.debug("User set_presence ratelimit exceeded; ignoring it.")
else:

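The change swaps the exception-driven `ratelimit(...)` call for `can_do_action(...)`, which reports the decision instead of raising `LimitExceededError` (and never pauses the request). Distilled, with the signature assumed from the surrounding diff:

async def presence_update_allowed(limiter, requester) -> bool:
    # can_do_action returns (allowed, time_allowed) without raising or pausing.
    allowed, _time_allowed = await limiter.can_do_action(requester)
    return allowed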
View File

@@ -94,9 +94,9 @@ class HttpTransactionCache:
# (appservice and guest users), but does not cover access tokens minted
# by the admin API. Use the access token ID instead.
else:
assert (
requester.access_token_id is not None
), "Requester must have an access_token_id"
assert requester.access_token_id is not None, (
"Requester must have an access_token_id"
)
return (path, "user_admin", requester.access_token_id)
def fetch_or_execute_request(

View File

@@ -107,6 +107,7 @@ from synapse.handlers.room_member import (
RoomMemberMasterHandler,
)
from synapse.handlers.room_member_worker import RoomMemberWorkerHandler
from synapse.handlers.room_policy import RoomPolicyHandler
from synapse.handlers.room_summary import RoomSummaryHandler
from synapse.handlers.search import SearchHandler
from synapse.handlers.send_email import SendEmailHandler
@@ -807,6 +808,10 @@ class HomeServer(metaclass=abc.ABCMeta):
return OidcHandler(self)
@cache_in_self
def get_room_policy_handler(self) -> RoomPolicyHandler:
return RoomPolicyHandler(self)
@cache_in_self
def get_event_client_serializer(self) -> EventClientSerializer:
return EventClientSerializer(self)

View File

@@ -130,7 +130,7 @@ class SQLBaseStore(metaclass=ABCMeta):
"_get_rooms_for_local_user_where_membership_is_inner", (user_id,)
)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", (user_id,)
"get_sliding_sync_rooms_for_user_from_membership_snapshots", (user_id,)
)
# Purge other caches based on room state.
@@ -138,7 +138,9 @@ class SQLBaseStore(metaclass=ABCMeta):
self._attempt_to_invalidate_cache("get_partial_current_state_ids", (room_id,))
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
def _invalidate_state_caches_all(self, room_id: str) -> None:
"""Invalidates caches that are based on the current state, but does
@@ -168,7 +170,9 @@ class SQLBaseStore(metaclass=ABCMeta):
self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
def _attempt_to_invalidate_cache(
self, cache_name: str, key: Optional[Collection[Any]]

View File

@@ -739,9 +739,9 @@ class BackgroundUpdater:
c.execute(sql)
async def updater(progress: JsonDict, batch_size: int) -> int:
assert isinstance(
self.db_pool.engine, engines.PostgresEngine
), "validate constraint background update registered for non-Postgres database"
assert isinstance(self.db_pool.engine, engines.PostgresEngine), (
"validate constraint background update registered for non-Postgres database"
)
logger.info("Validating constraint %s to %s", constraint_name, table)
await self.db_pool.runWithConnection(runner)
@@ -900,9 +900,9 @@ class BackgroundUpdater:
on the table. Used to iterate over the table.
"""
assert isinstance(
self.db_pool.engine, engines.PostgresEngine
), "validate constraint background update registered for non-Postgres database"
assert isinstance(self.db_pool.engine, engines.PostgresEngine), (
"validate constraint background update registered for non-Postgres database"
)
async def updater(progress: JsonDict, batch_size: int) -> int:
return await self.validate_constraint_and_delete_in_background(

View File

@@ -870,8 +870,7 @@ class EventsPersistenceStorageController:
# This should only happen for outlier events.
if not ev.internal_metadata.is_outlier():
raise Exception(
"Context for new event %s has no state "
"group" % (ev.event_id,)
"Context for new event %s has no state group" % (ev.event_id,)
)
continue
if ctx.state_group_deltas:

View File

@@ -307,7 +307,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
"get_rooms_for_user", (data.state_key,)
)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", None
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
self._membership_stream_cache.entity_has_changed(data.state_key, token) # type: ignore[attr-defined]
elif data.type == EventTypes.RoomEncryption:
@@ -319,7 +319,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
if (data.type, data.state_key) in SLIDING_SYNC_RELEVANT_STATE_SET:
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", None
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
elif row.type == EventsStreamAllStateRow.TypeId:
assert isinstance(data, EventsStreamAllStateRow)
@@ -330,7 +330,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache("get_rooms_for_user", None)
self._attempt_to_invalidate_cache("get_room_type", (data.room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (data.room_id,))
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
else:
raise Exception("Unknown events stream row type %s" % (row.type,))
@@ -394,7 +396,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
"_get_rooms_for_local_user_where_membership_is_inner", (state_key,)
)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", (state_key,)
"get_sliding_sync_rooms_for_user_from_membership_snapshots",
(state_key,),
)
self._attempt_to_invalidate_cache(
@@ -413,7 +416,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
if (etype, state_key) in SLIDING_SYNC_RELEVANT_STATE_SET:
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
if relates_to:
self._attempt_to_invalidate_cache(
@@ -470,7 +475,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache(
"_get_rooms_for_local_user_where_membership_is_inner", None
)
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
self._attempt_to_invalidate_cache("did_forget", None)
self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
self._attempt_to_invalidate_cache("get_references_for_event", None)
@@ -529,7 +536,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache(
"get_current_hosts_in_room_ordered", (room_id,)
)
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
self._attempt_to_invalidate_cache("did_forget", None)
self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)

View File

@@ -650,9 +650,9 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
@wrap_as_background_process("update_client_ips")
async def _update_client_ips_batch(self) -> None:
assert (
self._update_on_this_worker
), "This worker is not designated to update client IPs"
assert self._update_on_this_worker, (
"This worker is not designated to update client IPs"
)
# If the DB pool has already terminated, don't try updating
if not self.db_pool.is_running():
@@ -671,9 +671,9 @@ class ClientIpWorkerStore(ClientIpBackgroundUpdateStore, MonthlyActiveUsersWorke
txn: LoggingTransaction,
to_update: Mapping[Tuple[str, str, str], Tuple[str, Optional[str], int]],
) -> None:
assert (
self._update_on_this_worker
), "This worker is not designated to update client IPs"
assert self._update_on_this_worker, (
"This worker is not designated to update client IPs"
)
# Keys and values for the `user_ips` upsert.
user_ips_keys = []

Some files were not shown because too many files have changed in this diff.