Compare commits


67 Commits

Author SHA1 Message Date
Erik Johnston
b30fcb03cc 1.128.0 2025-04-08 14:09:59 +01:00
Quentin Gliech
770768614b Merge changelog entries 2025-04-01 16:49:19 +02:00
Quentin Gliech
b8b3896b1d Fix rendering of the changelog 2025-04-01 16:45:11 +02:00
Quentin Gliech
01efc49554 1.128.0rc1 2025-04-01 16:41:42 +02:00
Quentin Gliech
fa53a8512a Make sure media hashes are not queried until the index is up (#18302) 2025-04-01 14:21:35 +00:00
dependabot[bot]
fdbcb821ff Bump phonenumbers from 8.13.50 to 9.0.2 (#18299)
Bumps
[phonenumbers](https://github.com/daviddrysdale/python-phonenumbers)
from 8.13.50 to 9.0.2.

Commits:
- [`73ef5e6`](73ef5e664b) Prep for 9.0.2 release
- [`528a98b`](528a98bc75) Generated files for metadata
- [`28f5958`](28f5958abd) Merge metadata changes from upstream 9.0.2
- [`25ae49c`](25ae49c160) Prep for 9.0.1 release
- [`b8a1459`](b8a1459cef) Generated files for metadata
- [`f6cd233`](f6cd233359) Merge metadata changes from upstream 9.0.1
- [`c46f104`](c46f1049ba) Prep for 9.0.0 release
- [`d542ec2`](d542ec2abc) Generated files for metadata
- [`a4da80e`](a4da80e252) Merge metadata changes from upstream 9.0.0
- [`45c822e`](45c822e887) Prep for 8.13.55 release
- Additional commits viewable in the [compare view](https://github.com/daviddrysdale/python-phonenumbers/compare/v8.13.50...v9.0.2)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 13:56:32 +00:00
dependabot[bot]
8eb991b746 Bump authlib from 1.4.1 to 1.5.1 (#18306)
Bumps [authlib](https://github.com/lepture/authlib) from 1.4.1 to 1.5.1.

Release notes (from [authlib's releases](https://github.com/lepture/authlib/releases); the same entries appear in [authlib's changelog](https://github.com/lepture/authlib/blob/main/docs/changelog.rst)):

Version 1.5.1, released on Feb 28, 2025:
- Fix RFC9207 `iss` parameter. [#715](https://redirect.github.com/lepture/authlib/issues/715)

Version 1.5.0, released on Feb 25, 2025:
- Fix token introspection auth method for clients. [#662](https://redirect.github.com/lepture/authlib/pull/662)
- Optional `typ` claim in JWT tokens. [#696](https://redirect.github.com/lepture/authlib/pull/696)
- JWT validation leeway. [#689](https://redirect.github.com/lepture/authlib/pull/689)
- Implement server-side [RFC9207](https://datatracker.ietf.org/doc/html/rfc9207.html). [#700](https://redirect.github.com/lepture/authlib/issues/700) [#701](https://redirect.github.com/lepture/authlib/pull/701)
- `generate_id_token` can take a `kid` parameter. [#702](https://redirect.github.com/lepture/authlib/pull/702)
- More detailed `InvalidClientError`. [#706](https://redirect.github.com/lepture/authlib/pull/706)
- OpenID Connect Dynamic Client Registration implementation. [#707](https://redirect.github.com/lepture/authlib/pull/707)
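
Of these changes, the JWT validation leeway is the one most visible to integrators, since it absorbs clock skew between token issuer and verifier. A minimal, hypothetical sketch of applying a leeway with authlib's claims API (the key and the 30-second leeway are illustrative, not anything Synapse configures):

```python
import time

from authlib.jose import jwt

key = "illustrative-hmac-key"
header = {"alg": "HS256"}
# A token whose `exp` lies one second in the past.
payload = {"iss": "https://issuer.example", "exp": int(time.time()) - 1}
token = jwt.encode(header, payload, key)

claims = jwt.decode(token, key)
# Without leeway this raises ExpiredTokenError; a 30-second leeway
# absorbs small clock differences between issuer and verifier.
claims.validate(leeway=30)
print("token accepted within leeway")
```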

Commits:
- [`4eafdc2`](4eafdc2189) chore: release 1.5.1
- [`0e7e344`](0e7e344344) Merge pull request [#715](https://redirect.github.com/lepture/authlib/issues/715) from azmeuk/rfc9207
- [`b57932b`](b57932bc7e) fix: RFC9207 iss parameter
- [`7833a88`](7833a887da) Merge pull request [#713](https://redirect.github.com/lepture/authlib/issues/713) from geigerzaehler/full-entropy
- [`642dfa3`](642dfa3264) doc: fix an example import for rfc9207
- [`5c507a8`](5c507a8473) fix: Use full entropy from specified oct key size
- [`2d0396e`](2d0396e3fc) chore: release 1.5.0
- [`da87c8b`](da87c8b2ec) doc: update changelog
- [`b79d868`](b79d868e7f) Merge pull request [#662](https://redirect.github.com/lepture/authlib/issues/662) from AdamWill/oauth2-fix-introspect-endpoint
- [`24c2bd8`](24c2bd8718) chore: add a dependency group for the documentation
- Additional commits viewable in the [compare view](https://github.com/lepture/authlib/compare/v1.4.1...v1.5.1)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 15:36:25 +02:00
Andrew Ferrazzutti
87d374c639 Tweaks to prefix-log (#18274)
- Explicitly use `mawk` instead of `awk`, since an extension of the
former is used
- Use `fflush` to reduce interleaving of the output of different
processes & streams
- Move the `mawk` command to a shell function, instead of writing it
twice
- Look up the `SUPERVISOR_PROCESS_NAME` environment variable in `mawk`,
instead of reading it in the shell & using complex quoting to pass it to
`mawk` (see the sketch below)
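
For context, a minimal Python sketch of the line-prefixing behaviour the `mawk` command implements: the process name comes from the environment rather than shell quoting, and output is flushed per line. This is an illustration of the idea, not the actual script:

```python
import os
import sys


def prefix_lines(stream_in, stream_out):
    """Prefix each input line with the supervised process name.

    Mirrors the prefix-log behaviour: the name is read from the
    SUPERVISOR_PROCESS_NAME environment variable instead of being
    passed in via the shell, and output is flushed after every line
    to reduce interleaving between processes and streams.
    """
    name = os.environ.get("SUPERVISOR_PROCESS_NAME", "unknown")
    for line in stream_in:
        stream_out.write(f"{name} | {line}")
        stream_out.flush()  # the fflush analogue


if __name__ == "__main__":
    prefix_lines(sys.stdin, sys.stdout)
```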

---------

Co-authored-by: Quentin Gliech <quenting@element.io>
2025-04-01 15:36:13 +02:00
reivilibre
1709234311 Add an access token introspection cache to make Matrix Authentication Service integration (MSC3861) more efficient. (#18231)
Evolution of
cd78f3d2ee

This cache does not have any explicit invalidation, but this is deemed
acceptable (see code comment).

We may still prefer to add it eventually, which would let us raise the
Time-To-Live (TTL) on the cache; we currently set a 2-minute expiry to
balance the fact that we have no explicit invalidation.


This cache makes several things more efficient:

- reduces the number of outbound requests from Synapse, cutting CPU
utilisation and network I/O
- reduces request handling time in Synapse, which improves
client-visible latency
- reduces load on MAS and its database

---

This PR also introduces support for `expires_in` (seconds) on the
introspection response, letting cached responses expire at the access
token's actual expiry time whilst avoiding clock skew issues.

Corresponds to:
https://github.com/element-hq/matrix-authentication-service/pull/4241
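
As a rough illustration of the shape of such a cache (the 2-minute cap and `expires_in` handling follow the description above; class and variable names are illustrative, not Synapse's actual implementation):

```python
import time
from typing import Any, Dict, Optional, Tuple

# Short TTL because the cache has no explicit invalidation.
INTROSPECTION_CACHE_TTL = 120.0  # seconds


class IntrospectionCache:
    """Illustrative TTL cache for token introspection responses."""

    def __init__(self) -> None:
        self._entries: Dict[str, Tuple[float, Dict[str, Any]]] = {}

    def get(self, token: str) -> Optional[Dict[str, Any]]:
        entry = self._entries.get(token)
        if entry is None:
            return None
        expires_at, response = entry
        if time.monotonic() >= expires_at:
            del self._entries[token]
            return None
        return response

    def put(self, token: str, response: Dict[str, Any]) -> None:
        ttl = INTROSPECTION_CACHE_TTL
        # `expires_in` is relative to "now", so honouring it avoids
        # clock skew while never caching past the token's own expiry.
        if "expires_in" in response:
            ttl = min(ttl, float(response["expires_in"]))
        self._entries[token] = (time.monotonic() + ttl, response)
```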

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-04-01 14:31:19 +01:00
dependabot[bot]
80b62d7903 Bump actions/upload-artifact from 4.6.1 to 4.6.2 (#18304)
Bumps
[actions/upload-artifact](https://github.com/actions/upload-artifact)
from 4.6.1 to 4.6.2.

Release notes (from [actions/upload-artifact's releases](https://github.com/actions/upload-artifact/releases)):

v4.6.2
- Update to use artifact 2.3.2 package & prepare for new upload-artifact release by [@salmanmkc](https://github.com/salmanmkc) in [actions/upload-artifact#685](https://redirect.github.com/actions/upload-artifact/pull/685)

New contributors: [@salmanmkc](https://github.com/salmanmkc) made their first contribution in actions/upload-artifact#685.

Full changelog: https://github.com/actions/upload-artifact/compare/v4...v4.6.2

Commits:
- [`ea165f8`](ea165f8d65) Merge pull request [#685](https://redirect.github.com/actions/upload-artifact/issues/685) from salmanmkc/salmanmkc/3-new-upload-artifacts-release
- [`0839620`](08396203c1) Prepare for new release of actions/upload-artifact with new toolkit cache ver...
- See full diff in the [compare view](4cec3d8aa0...ea165f8d65)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 13:30:44 +00:00
dependabot[bot]
7ace290f07 Bump actions/add-to-project from f5473ace9aeee8b97717b281e26980aa5097023f to 280af8ae1f83a494cfad2cb10f02f6d13529caa9 (#18303)
Bumps
[actions/add-to-project](https://github.com/actions/add-to-project) from
f5473ace9aeee8b97717b281e26980aa5097023f to
280af8ae1f83a494cfad2cb10f02f6d13529caa9.

Commits:
- [`280af8a`](280af8ae1f) Merge pull request [#688](https://redirect.github.com/actions/add-to-project/issues/688) from actions/dependabot/npm_and_yarn/vercel/ncc-0.38.3
- [`a5abfeb`](a5abfebda9) Update licensed cache and dist/ directory
- [`f30c2e6`](f30c2e67f8) Bump `@vercel/ncc` from 0.38.1 to 0.38.3
- [`81dd5ce`](81dd5ce97f) Merge pull request [#687](https://redirect.github.com/actions/add-to-project/issues/687) from actions/dependabot/npm_and_yarn/types/jest-29.5.14
- [`122a803`](122a803742) Bump `@types/jest` from 29.5.12 to 29.5.14
- [`29c72ac`](29c72ac924) Merge pull request [#686](https://redirect.github.com/actions/add-to-project/issues/686) from actions/dependabot/npm_and_yarn/types/node-22.13.14
- [`46316d9`](46316d9a20) Bump `@types/node` from 16.18.101 to 22.13.14
- [`95df5ae`](95df5ae4db) Merge pull request [#685](https://redirect.github.com/actions/add-to-project/issues/685) from actions/dependabot/npm_and_yarn/eslint-plugin-je...
- [`f14f229`](f14f229b02) Bump eslint-plugin-jest from 28.6.0 to 28.11.0
- [`cc69618`](cc696180af) Exit without failure if nothing to commit
- Additional commits viewable in the [compare view](f5473ace9a...280af8ae1f)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 13:13:58 +00:00
dependabot[bot]
2f812c2eb6 Bump jinja2 from 3.1.5 to 3.1.6 (#18223)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.5 to 3.1.6.

Release notes (from [jinja2's releases](https://github.com/pallets/jinja/releases); the same entry appears in [jinja2's changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)):

Version 3.1.6, released 2025-03-05

This is the Jinja 3.1.6 security release, which fixes security issues but does not otherwise change behavior and should not result in breaking changes compared to the latest feature release.

PyPI: https://pypi.org/project/Jinja2/3.1.6/
Changes: https://jinja.palletsprojects.com/en/stable/changes/#version-3-1-6

- The `|attr` filter does not bypass the environment's attribute lookup, allowing the sandbox to apply its checks. https://github.com/pallets/jinja/security/advisories/GHSA-cpwx-vrp4-4pq7
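
To make the advisory concrete, a small sketch of the behaviour (assuming Jinja 3.1.6, where the filter delegates to the environment's attribute lookup; with the default `Undefined`, the blocked lookup renders as an empty string rather than raising):

```python
from jinja2 import Environment
from jinja2.sandbox import SandboxedEnvironment

src = '{{ obj | attr("__class__") }}'

# A plain environment resolves the dunder attribute directly.
print(Environment().from_string(src).render(obj=object()))
# -> <class 'object'>

# On 3.1.6 the |attr filter goes through the environment's attribute
# lookup, so the sandbox rejects the unsafe attribute and the
# expression evaluates to undefined instead of the real attribute.
print(repr(SandboxedEnvironment().from_string(src).render(obj=object())))
```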

Commits:
- [`1520688`](15206881c0) release version 3.1.6
- [`90457bb`](90457bbf33) Merge commit from fork
- [`065334d`](065334d1ee) attr filter uses env.getattr
- [`033c200`](033c20015c) start version 3.1.6
- [`bc68d4e`](bc68d4efa9) use global contributing guide ([#2070](https://redirect.github.com/pallets/jinja/issues/2070))
- [`247de5e`](247de5e0c5) use global contributing guide
- [`ab8218c`](ab8218c7a1) use project advisory link instead of global
- [`b4ffc8f`](b4ffc8ff29) release version 3.1.5 ([#2066](https://redirect.github.com/pallets/jinja/issues/2066))
- See full diff in the [compare view](https://github.com/pallets/jinja/compare/3.1.5...3.1.6)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 12:42:01 +00:00
Andrew Ferrazzutti
90f346183a Use uv pip to install supervisor in workers image (#18275) 2025-04-01 12:32:56 +00:00
Andrew Ferrazzutti
f638a76ba4 Avoid relying on rsync during Docker build (#18287)
Use targeted COPY commands instead of rsync to avoid having a symlinked
/lib as the destination of a COPY (which buildkit does not support).

2025-04-01 12:32:34 +00:00
dependabot[bot]
cf02b8fea5 Bump actions/setup-python from 5.4.0 to 5.5.0 (#18298)
Bumps [actions/setup-python](https://github.com/actions/setup-python)
from 5.4.0 to 5.5.0.

Release notes (from [actions/setup-python's releases](https://github.com/actions/setup-python/releases)):

v5.5.0

Enhancements:
- Support free threaded Python versions like '3.13t' by [@colesbury](https://github.com/colesbury) in [actions/setup-python#973](https://redirect.github.com/actions/setup-python/pull/973)
- Enhance Workflows: Include ubuntu-arm runners, Add e2e Testing for free threaded and Upgrade `@action/cache` from 4.0.0 to 4.0.3 by [@priya-kinthali](https://github.com/priya-kinthali) in [actions/setup-python#1056](https://redirect.github.com/actions/setup-python/pull/1056)
- Add support for .tool-versions file in setup-python by [@mahabaleshwars](https://github.com/mahabaleshwars) in [actions/setup-python#1043](https://redirect.github.com/actions/setup-python/pull/1043)

Bug fixes:
- Fix architecture for pypy on Linux ARM64 by [@mayeut](https://github.com/mayeut) in [actions/setup-python#1011](https://redirect.github.com/actions/setup-python/pull/1011). This update maps arm64 to aarch64 for Linux ARM64 PyPy installations.

Dependency updates:
- Upgrade `@vercel/ncc` from 0.38.1 to 0.38.3 by [@dependabot](https://github.com/dependabot) in [actions/setup-python#1016](https://redirect.github.com/actions/setup-python/pull/1016)
- Upgrade `@actions/glob` from 0.4.0 to 0.5.0 by [@dependabot](https://github.com/dependabot) in [actions/setup-python#1015](https://redirect.github.com/actions/setup-python/pull/1015)

New contributors: [@colesbury](https://github.com/colesbury) (#973) and [@mahabaleshwars](https://github.com/mahabaleshwars) (#1043) made their first contributions.

Full changelog: https://github.com/actions/setup-python/compare/v5...v5.5.0

Commits:
- [`8d9ed9a`](8d9ed9ac5c) Add e2e Testing for free threaded and Bump `@action/cache` from 4.0.0 to 4.0.3 ...
- [`19e4675`](19e4675e06) Add support for .tool-versions file in setup-python ([#1043](https://redirect.github.com/actions/setup-python/issues/1043))
- [`6fd11e1`](6fd11e170a) Bump `@actions/glob` from 0.4.0 to 0.5.0 ([#1015](https://redirect.github.com/actions/setup-python/issues/1015))
- [`9e62be8`](9e62be81b2) Support free threaded Python versions like '3.13t' ([#973](https://redirect.github.com/actions/setup-python/issues/973))
- [`6ca8e85`](6ca8e8598f) Bump `@vercel/ncc` from 0.38.1 to 0.38.3 ([#1016](https://redirect.github.com/actions/setup-python/issues/1016))
- [`8039c45`](8039c45ed9) fix: install PyPy on Linux ARM64 ([#1011](https://redirect.github.com/actions/setup-python/issues/1011))
- See full diff in the [compare view](42375524e2...8d9ed9ac5c)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 12:31:59 +00:00
dependabot[bot]
1deb6e03e0 Bump pyo3-log from 0.12.1 to 0.12.2 (#18269)
Bumps [pyo3-log](https://github.com/vorner/pyo3-log) from 0.12.1 to
0.12.2.

Changelog (from [pyo3-log's changelog](https://github.com/vorner/pyo3-log/blob/main/CHANGELOG.md)):

0.12.2
- Allow pyo3 0.24.

Commits:
- [`99ee890`](99ee890b2b) Release 0.12.2
- [`d1a27f5`](d1a27f574f) Merge pull request [#61](https://redirect.github.com/vorner/pyo3-log/issues/61) from gi0baro/pyo3-024
- [`66fd949`](66fd9498c3) Allow PyO3 0.24
- See full diff in the [compare view](https://github.com/vorner/pyo3-log/compare/v0.12.1...v0.12.2)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 14:12:58 +02:00
Will Hunt
02eed668b8 Document media hashing changes (#18296)
Essentially documents the change in behaviour introduced in #18277.

2025-04-01 12:43:05 +02:00
dependabot[bot]
9f8ed14535 Bump actions/download-artifact from 4.2.0 to 4.2.1 (#18268)
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 4.2.0 to 4.2.1.

Release notes (from [actions/download-artifact's releases](https://github.com/actions/download-artifact/releases)):

v4.2.1
- Add unit tests by [@GhadimiR](https://github.com/GhadimiR) in [actions/download-artifact#392](https://redirect.github.com/actions/download-artifact/pull/392)
- Fix bug introduced in 4.2.0 by [@GhadimiR](https://github.com/GhadimiR) in [actions/download-artifact#391](https://redirect.github.com/actions/download-artifact/pull/391)

Full changelog: https://github.com/actions/download-artifact/compare/v4.2.0...v4.2.1

Commits:
- [`95815c3`](95815c38cf) Merge pull request [#391](https://redirect.github.com/actions/download-artifact/issues/391) from GhadimiR/main
- [`278fca4`](278fca438a) Move log statements
- [`6890984`](68909842a1) Merge branch 'main' into main
- [`f9415c0`](f9415c0ec3) Run unit tests in CI
- [`76a6eb5`](76a6eb5cbc) Merge pull request [#392](https://redirect.github.com/actions/download-artifact/issues/392) from GhadimiR/add_unit_tests
- [`a2426d7`](a2426d7c45) Merge branch 'main' into add_unit_tests
- [`3ffa694`](3ffa694f6f) lint
- [`53f6aa5`](53f6aa5f93) Add extra assertion to download single artifact test
- [`b456700`](b456700053) lint
- [`9eab798`](9eab798a98) Configure tsconfig
- Additional commits viewable in the [compare view](b14cf4c926...95815c38cf)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 08:08:57 +00:00
dependabot[bot]
3bc04d05a4 Bump pygithub from 2.5.0 to 2.6.1 (#18243)
Bumps [pygithub](https://github.com/pygithub/pygithub) from 2.5.0 to
2.6.1.

Release notes (from [pygithub's releases](https://github.com/pygithub/pygithub/releases); the same entries, with release dates, appear in [pygithub's changelog](https://github.com/PyGithub/PyGithub/blob/v2.6.1/doc/changes.rst)):

Version 2.6.1 (February 21, 2025)

Bug fixes:
- Fix broken pickle support for `Auth` classes by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#3211](https://redirect.github.com/PyGithub/PyGithub/pull/3211)
- Remove schema from `Deployment`, remove `message` attribute by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#3223](https://redirect.github.com/PyGithub/PyGithub/pull/3223)
- Fix incorrect deprecated import by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#3225](https://redirect.github.com/PyGithub/PyGithub/pull/3225)
- Add `CodeSecurityConfigRepository` returned by `get_repos_for_code_security_config` by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#3219](https://redirect.github.com/PyGithub/PyGithub/pull/3219)
- Make `GitTag.verification` return `GitCommitVerification` by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#3226](https://redirect.github.com/PyGithub/PyGithub/pull/3226)

Maintenance:
- Mention removal of `AppAuth.private_key` in changelog by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#3212](https://redirect.github.com/PyGithub/PyGithub/pull/3212)

Full changelog: https://github.com/PyGithub/PyGithub/compare/v2.6.0...v2.6.1

Version 2.6.0 (February 15, 2025)

Breaking changes:
- Rework `Views` and `Clones` by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#3168](https://redirect.github.com/PyGithub/PyGithub/pull/3168): view and clones traffic information returned by `Repository.get_views_traffic` and `Repository.get_clones_traffic` now return proper PyGithub objects, instead of a `dict`, with all information that used to be provided by the `dict`. Code like

  ```python
  repo.get_views_traffic()["views"].timestamp
  repo.get_clones_traffic()["clones"].timestamp
  ```

  should be replaced with

  ```python
  repo.get_views_traffic().views.timestamp
  repo.get_clones_traffic().clones.timestamp
  ```

- Add `GitCommitVerification` class ([PyGithub/PyGithub#3028](https://redirect.github.com/PyGithub/PyGithub/pull/3028)): changes the return value of `GitTag.verification` and `GitCommit.verification` from `dict` to `GitCommitVerification`. Code like

  ```python
  tag.verification["reason"]
  commit.verification["reason"]
  ```

  ... (truncated)

- Fix typos by [@kianmeng](https://github.com/kianmeng) in [PyGithub/PyGithub#3086](https://redirect.github.com/PyGithub/PyGithub/pull/3086): property `OrganizationCustomProperty.respository_id` renamed to `OrganizationCustomProperty.repository_id`.

New features:
- Add capability for global laziness by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#2746](https://redirect.github.com/PyGithub/PyGithub/pull/2746)
- Add Support for GitHub Copilot Seat Management in Organizations by [@pashafateev](https://github.com/pashafateev) in [PyGithub/PyGithub#3082](https://redirect.github.com/PyGithub/PyGithub/pull/3082)
- Get branches where commit is head by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#3083](https://redirect.github.com/PyGithub/PyGithub/pull/3083)
- Support downloading a Release Asset by [@neel-m](https://github.com/neel-m) in [PyGithub/PyGithub#3060](https://redirect.github.com/PyGithub/PyGithub/pull/3060)
- Add `Repository.merge_upstream` method by [@Felixoid](https://github.com/Felixoid) in [PyGithub/PyGithub#3175](https://redirect.github.com/PyGithub/PyGithub/pull/3175)
- Support updating pull request draft status by [@didot](https://github.com/didot) in [PyGithub/PyGithub#3104](https://redirect.github.com/PyGithub/PyGithub/pull/3104)
- Add transfer ownership method to Repository by [@tanannie22](https://github.com/tanannie22) in [PyGithub/PyGithub#3091](https://redirect.github.com/PyGithub/PyGithub/pull/3091)
- Add enable and disable a Workflow by [@nickrmcclorey](https://github.com/nickrmcclorey) in [PyGithub/PyGithub#3088](https://redirect.github.com/PyGithub/PyGithub/pull/3088)
- Add support for managing Code Security Configurations by [@billnapier](https://github.com/billnapier) in [PyGithub/PyGithub#3095](https://redirect.github.com/PyGithub/PyGithub/pull/3095)
- Allow for private_key / sign function in AppAuth by [@EnricoMi](https://github.com/EnricoMi) in [PyGithub/PyGithub#3065](https://redirect.github.com/PyGithub/PyGithub/pull/3065)

Improvements:
- Update RateLimit object with all the new categories GitHub added, by [@billnapier](https://github.com/billnapier) in [PyGithub/PyGithub#3096](https://redirect.github.com/PyGithub/PyGithub/pull/3096)

... (truncated)

Commits:
- [`da30d6e`](da30d6e793) Releasing v2.6.1 ([#3230](https://redirect.github.com/pygithub/pygithub/issues/3230))
- [`f997a2f`](f997a2f653) Add `CodeSecurityConfigRepository` returned by `get_repos_for_code_security_c...`
- [`048a1a3`](048a1a3837) Make `GitTag.verification` return `GitCommitVerification` ([#3226](https://redirect.github.com/pygithub/pygithub/issues/3226))
- [`9329744`](93297440ce) Fix incorrect deprecated import ([#3225](https://redirect.github.com/pygithub/pygithub/issues/3225))
- [`d12e7d4`](d12e7d4cb4) Remove schema from `Deployment`, remove `message` attribute ([#3223](https://redirect.github.com/pygithub/pygithub/issues/3223))
- [`f975552`](f975552acd) Fix broken pickle support for `Auth` classes ([#3211](https://redirect.github.com/pygithub/pygithub/issues/3211))
- [`f5dc1c7`](f5dc1c762f) Mention removal of `AppAuth.private_key` in changelog ([#3212](https://redirect.github.com/pygithub/pygithub/issues/3212))
- [`e3e07d7`](e3e07d7466) Fix PyPi upload ([#3200](https://redirect.github.com/pygithub/pygithub/issues/3200))
- [`620c839`](620c83994a) Fix PyPi upload ([#3199](https://redirect.github.com/pygithub/pygithub/issues/3199))
- [`bf98e17`](bf98e17854) Release 2.6.0 ([#3198](https://redirect.github.com/pygithub/pygithub/issues/3198))
- Additional commits viewable in the [compare view](https://github.com/pygithub/pygithub/compare/v2.5.0...v2.6.1)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 09:58:27 +02:00
dependabot[bot]
4dba011c31 Bump dawidd6/action-download-artifact from 8 to 9 (#18204)
Bumps
[dawidd6/action-download-artifact](https://github.com/dawidd6/action-download-artifact)
from 8 to 9.

Release notes (from [dawidd6/action-download-artifact's releases](https://github.com/dawidd6/action-download-artifact/releases)):

v9
- add merge_multiple option by [@timostroehlein](https://github.com/timostroehlein) in [dawidd6/action-download-artifact#327](https://redirect.github.com/dawidd6/action-download-artifact/pull/327)

New contributors: [@timostroehlein](https://github.com/timostroehlein) made their first contribution in dawidd6/action-download-artifact#327.

Full changelog: https://github.com/dawidd6/action-download-artifact/compare/v8...v9

Commits:
- [`07ab29f`](07ab29fd4a) add merge_multiple option ([#327](https://redirect.github.com/dawidd6/action-download-artifact/issues/327))
- See full diff in the [compare view](20319c5641...07ab29fd4a)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 09:56:53 +02:00
dependabot[bot]
76ffd3ba01 Bump actions/cache from 4.2.2 to 4.2.3 (#18266)
Bumps [actions/cache](https://github.com/actions/cache) from 4.2.2 to
4.2.3.
Release notes (sourced from actions/cache's releases):

v4.2.3 - What's Changed:
- Update to use the `@actions/cache` 4.0.3 package & prepare for new release by @salmanmkc in actions/cache#1577 (SAS tokens for cache entries are now masked in debug logs; their first contribution)

Full Changelog: https://github.com/actions/cache/compare/v4.2.2...v4.2.3
Changelog (sourced from actions/cache's RELEASES.md):

4.2.3
- Bump `@actions/cache` to v4.0.3 (obfuscates SAS token in debug logs for cache entries)

4.2.2
- Bump `@actions/cache` to v4.0.2

4.2.1
- Bump `@actions/cache` to v4.0.1

4.2.0
TLDR: The cache backend service has been rewritten from the ground up for improved performance and reliability. actions/cache now integrates with the new cache service (v2) APIs.
The new service will gradually roll out as of February 1st, 2025. The legacy service will be sunset on the same date. Changes in these releases are fully backward compatible.
We are deprecating some versions of this action. We recommend upgrading to version v4 or v3 as soon as possible before February 1st, 2025 (upgrade instructions below).
If you are using pinned SHAs, please use the SHAs of versions v4.2.0 or v3.4.0.
If you do not upgrade, all workflow runs using any of the deprecated versions of actions/cache will fail.
Upgrading to the recommended versions will not break your workflows.

4.1.2
- Add GitHub Enterprise Cloud instances hostname filters to inform API endpoint choices - actions/cache#1474
- Security fix: Bump braces from 3.0.2 to 3.0.3 - actions/cache#1475

4.1.1
- Restore original behavior of `cache-hit` output - actions/cache#1467

4.1.0
- Ensure `cache-hit` output is set when a cache is missed - actions/cache#1404
- Deprecate `save-always` input - actions/cache#1452

4.0.2
- Fixed restore `fail-on-cache-miss` not working.

4.0.1
- Updated `isGhes` check

... (truncated)
Commits:
- 5a3ec84: Merge pull request actions/cache#1577 from salmanmkc/salmanmkc/4-test
- 7de2102: Update releases.md
- 76d40dd: Update to use the latest version of the cache package to obfuscate the SAS
- 76dd5eb: update cache with main
- 8c80c27: new package
- 45cfd0e: updates
- edd449b: updated cache with latest changes
- 0576707: latest test before pr
- 3105dc9: update
- 9450d42: mask
- Additional commits viewable in the compare view: d4323d4df1...5a3ec84eff

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-01 09:55:30 +02:00
Marcel Pennewiß
3c188231c7 Update admin_faq - Fix how to obtain access token (#18225)
Riot is now known as Element, and the access token has moved to Help & About.
2025-03-27 17:31:37 +00:00
Will Hunt
d17295e5c3 Store hashes of media files, and allow quarantining by hash. (#18277)
This PR makes a few radical changes to media. Synapse now stores the SHA256
hash of each file in the database (excluding thumbnails; more on
that below). If a set of media is quarantined, any additional uploads of
the same file contents, or any other files with the same hash, will be
quarantined at the same time (see the sketch after the list below).

Currently this does NOT:
- De-duplicate media, although a future extension could do that.
- Run any background jobs to identify the hashes of older files. This
could also be a future extension, though its value is limited, since the
aim is to combat abuse of recent media.
- Hash thumbnails. It's assumed that thumbnails are parented to some
form of media, so you'd likely want to quarantine the media and
the thumbnail at the same time.
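A minimal sketch of the hash-matching idea, in plain Python with hypothetical names (`quarantined_hashes` stands in for the real DB lookup):

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 64 * 1024) -> str:
    """Hash the file in chunks so large media never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def should_quarantine(upload_path: str, quarantined_hashes: set[str]) -> bool:
    # Any new upload whose content hash matches previously quarantined media
    # is quarantined immediately, regardless of filename or uploader.
    return sha256_of_file(upload_path) in quarantined_hashes
```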
2025-03-27 17:26:34 +00:00
Devon Hudson
a39b856cf0 Add DB delta to remove the old state group deletion job (#18284)
This background DB delta removes the old state group deletion background
update from the `background_updates` table if it exists.
The `delete_unreferenced_state_groups_bg_update` update should only
exist in that table if a homeserver ran v1.126.0rc1/v1.126.0rc2, and
rolled back or forward to any other version of Synapse before letting
the update finish.
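As a rough, hypothetical illustration of what such a delta boils down to (not the actual delta file shipped in the PR), the removal is a single idempotent `DELETE` against the `background_updates` table:

```python
def run_upgrade(cur) -> None:
    # Hypothetical sketch: drop the orphaned background update row, if any.
    # Safe to run on homeservers that never had the row.
    cur.execute(
        "DELETE FROM background_updates"
        " WHERE update_name = 'delete_unreferenced_state_groups_bg_update'"
    )
```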

2025-03-27 14:56:16 +00:00
Andrew Morgan
2830013e5e Merge branch 'master' into develop 2025-03-26 22:00:52 +00:00
Andrew Morgan
ecc09b15f1 1.127.1 2025-03-26 21:08:00 +00:00
Eric Eastwood
31110f35d9 Add docs for how to clear out the Poetry wheel cache (#18283)
As shared by @reivilibre,
https://github.com/element-hq/synapse/pull/18261#issuecomment-2754607816

Relevant Poetry issue around how this should be handled by them:
https://github.com/python-poetry/poetry/issues/10304
2025-03-26 14:35:54 -05:00
Erik Johnston
2277df2a1e Fix GHSA-v56r-hwv5-mxg6 — Federation denial
Fixes https://github.com/element-hq/synapse/security/advisories/GHSA-v56r-hwv5-mxg6

Federation denial of service via malformed events.
2025-03-26 18:44:45 +00:00
dependabot[bot]
5e83434f3a Bump log from 0.4.26 to 0.4.27 (#18267) 2025-03-25 14:11:51 +00:00
Andrew Ferrazzutti
a227d20c25 Pass args to start_for_complement.sh (#18273) 2025-03-25 14:09:38 +00:00
Andrew Ferrazzutti
bd08a01fc8 Dockerfile: set package arch via APT config option (#18271) 2025-03-25 13:58:40 +00:00
Andrew Ferrazzutti
92a29dcffc Docker: Use an ARG for debian version more often (#18272) 2025-03-25 13:57:55 +00:00
Olivier 'reivilibre
2719bd1794 Merge branch 'master' into develop 2025-03-25 13:47:01 +00:00
Olivier 'reivilibre
7af299b365 1.127.0 2025-03-25 12:04:21 +00:00
Andrew Morgan
d8fef721a0 Correct typo "SAML" -> SSO in mapping providers docs (#18276) 2025-03-25 10:35:01 +00:00
Devon Hudson
1efb826b54 Delete unreferenced state groups in background (#18254)
This PR fixes #18154 to avoid de-deltaing state groups which resulted in
DB size temporarily increasing until the DB was `VACUUM`'ed. As a
result, less state groups will get deleted now.
It also attempts to improve performance by not duplicating work when
processing state groups it has already processed in previous iterations.

---------

Co-authored-by: Erik Johnston <erikj@element.io>
2025-03-21 17:09:49 +00:00
reivilibre
33bcef9dc7 Update Poetry to 2.1.1, including updating the lock file version. (#18251) 2025-03-21 15:32:52 +00:00
Andrew Morgan
51deadec41 Pin our GitHub Actions dependencies (#18255)
After the [recent supply chain attack](https://www.wiz.io/blog/new-github-action-supply-chain-attack-reviewdog-action-setup)
in `tj-actions/changed-files` and actions based on it, it's become clear
that relying on git tags to pin our dependencies is not enough (as tags
can simply be replaced). Therefore we need to switch to hashes.

Dependabot should continue to update these dependencies for us.

Best reviewed commit-by-commit. Though if CI passes, we're *probably*
fine.
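For illustration only (this is not part of the PR): one way to find the commit hash behind a tag, so a workflow can reference the immutable hash rather than the mutable tag, is `git ls-remote`. The helper below is a hypothetical sketch:

```python
import subprocess


def tag_commit_sha(repo_url: str, tag: str) -> str:
    """Resolve a tag to the commit it points at, for hash-pinning an action."""
    out = subprocess.run(
        ["git", "ls-remote", repo_url, f"refs/tags/{tag}"],
        check=True, capture_output=True, text=True,
    ).stdout
    peeled = plain = None
    for line in out.splitlines():
        sha, _, ref = line.partition("\t")
        # For annotated tags, ls-remote also prints a peeled "<ref>^{}" line
        # pointing at the underlying commit; prefer that when present.
        if ref.endswith("^{}"):
            peeled = sha
        else:
            plain = sha
    sha = peeled or plain
    if sha is None:
        raise ValueError(f"tag {tag!r} not found in {repo_url}")
    return sha


# e.g. tag_commit_sha("https://github.com/actions/checkout", "v4.2.2")
# resolves to 11bd71901bbe5b1630ceea73d27597364c9af683, as pinned in the
# workflow diffs below.
```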
2025-03-19 14:16:04 +00:00
reivilibre
47e295bf3a Add index to sliding sync membership snapshot table, to fix a performance issue. (#18074)
To address a performance problem due to the foreign key on the same
column.

cc @erikjohnston

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-03-18 18:38:18 +00:00
Shay
4b8dbe22c0 Add a column participant to room_memberships table (#18068) 2025-03-18 17:59:57 +00:00
Erik Johnston
bfafd0f2c7 1.127.0rc1 2025-03-18 13:30:45 +00:00
Eric Eastwood
d61bdff7a4 Remove SYNAPSE_USE_FROZEN_DICTS environment variable (#18123)
I got rid of the `SYNAPSE_USE_FROZEN_DICTS` environment variable because
it is overridden by the Synapse worker apps anyway, and if we want
to support `SYNAPSE_USE_FROZEN_DICTS`, it should live in
`synapse/config/server.py`. It's also not documented, so I'm assuming no
one is using it.

This spawned from looking at the frozen dict handling during the review of
https://github.com/element-hq/synapse/pull/18103#discussion_r1935876168
2025-03-18 05:53:21 -05:00
dependabot[bot]
4d2c4ce92b Bump ulid from 1.2.0 to 1.2.1 (#18246) 2025-03-18 10:01:09 +00:00
dependabot[bot]
79081e1be5 Bump http from 1.2.0 to 1.3.1 (#18245) 2025-03-18 10:00:57 +00:00
Andrew Ferrazzutti
51df675c05 MSC4140: don't cancel delayed state on own state (#17810)
When a user sends a state event, do not cancel their own delayed events
for the same piece of state.

For context, see the section "Delayed state events are cancelled by a more
recent state event" in the MSC (`a09a883d9a/proposals/4140-delayed-events-futures.md`).
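A minimal sketch of the resulting cancellation rule, with hypothetical names rather than Synapse's actual internals:

```python
def cancels_delayed_state(
    new_sender: str,
    new_key: tuple[str, str],      # (event_type, state_key)
    delayed_sender: str,
    delayed_key: tuple[str, str],
) -> bool:
    # A more recent state event for the same (event_type, state_key) cancels
    # a pending delayed state event, unless both come from the same user.
    return new_key == delayed_key and new_sender != delayed_sender
```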
2025-03-17 16:21:45 +00:00
Erik Johnston
59a15da433 Add caching support to media endpoints (#18235)
We do a few things in this PR to better support caching (see the sketch after this list):

1. Change `Cache-Control` header to allow intermediary proxies to cache
media *only* if they revalidate on every request. This means that the
intermediary cache will still send the request to Synapse but with a
`If-None-Match` header, at which point Synapse can check auth and
respond with a 304 and empty content.
2. Add `ETag` response header to all media responses. We hardcode this
to `1` since all media is immutable (beyond being deleted).
3. Check for `If-None-Match` header (after checking for auth), and if it
matches then respond with a 304 and empty body.
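A minimal sketch of that request flow, assuming a hypothetical handler (real Synapse handlers are Twisted-based, and the exact `Cache-Control` directives may differ):

```python
from http import HTTPStatus

MEDIA_ETAG = '"1"'  # media is immutable (beyond deletion), so a constant ETag works


def respond_with_media(headers_in: dict[str, str], authed: bool, media: bytes):
    if not authed:
        return HTTPStatus.UNAUTHORIZED, {}, b""

    headers_out = {
        # "no-cache" lets intermediary caches store the body but forces them
        # to revalidate with the origin on every request, so auth always runs.
        "Cache-Control": "no-cache",
        "ETag": MEDIA_ETAG,
    }
    # Only honour If-None-Match *after* auth has passed.
    if headers_in.get("If-None-Match") == MEDIA_ETAG:
        return HTTPStatus.NOT_MODIFIED, headers_out, b""
    return HTTPStatus.OK, headers_out, media
```

On a 304, the intermediary serves its stored copy, so Synapse still checks auth but skips sending the body.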

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-03-13 16:28:19 +00:00
reivilibre
a278c0d852 Fix detection of workflow failures in the release script. (#18211)
If one workflow is successful and one fails, currently that is reported
as success.

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-03-13 14:52:00 +00:00
karuto
929f19b472 Fix: corrected routing path for workers doc (#18224)
Closes: https://github.com/element-hq/synapse/issues/17926
2025-03-13 11:56:22 +00:00
dependabot[bot]
60b3cd0650 Bump anyhow from 1.0.96 to 1.0.97 (#18201)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.96 to 1.0.97.
Release notes (sourced from anyhow's releases):

1.0.97
- Documentation improvements

Commits:
- bfb89ef: Release 1.0.97
- c7fca9b: Ignore elidable_lifetime_names pedantic clippy lint
- 427c0bb: Point standard library links to stable
- Full diff: https://github.com/dtolnay/anyhow/compare/1.0.96...1.0.97

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-13 11:48:37 +00:00
dependabot[bot]
df044a3667 Bump bcrypt from 4.2.1 to 4.3.0 (#18207) 2025-03-13 11:44:49 +00:00
dependabot[bot]
04814a48de Bump sentry-sdk from 2.19.2 to 2.22.0 (#18205) 2025-03-13 11:44:39 +00:00
dependabot[bot]
698278ba50 Bump bytes from 1.10.0 to 1.10.1 (#18227) 2025-03-13 11:40:09 +00:00
dependabot[bot]
74cc353961 Bump serde from 1.0.218 to 1.0.219 (#18228) 2025-03-13 11:39:57 +00:00
Andrew Morgan
caa2012154 Merge branch 'master' into develop 2025-03-11 16:33:00 +00:00
Andrew Morgan
5064f35958 Move debian signing key expiry notice to top of 1.126.0 notes 2025-03-11 13:15:44 +00:00
Andrew Morgan
c30157b3cb 1.126.0 2025-03-11 13:11:45 +00:00
dependabot[bot]
fda1ffe5b8 Bump serde_json from 1.0.139 to 1.0.140 (#18202) 2025-03-11 10:27:19 +00:00
Olivier 'reivilibre
a4c476305e Tweak changelog 2025-03-07 16:03:18 +00:00
Olivier 'reivilibre
1803a62db4 1.126.0rc3 2025-03-07 15:45:11 +00:00
reivilibre
8295de87a7 Revert the background job to clear unreferenced state groups (that was introduced in v1.126.0rc1), due to a suspected issue that causes increased disk usage. (#18222)
Revert "Add background job to clear unreferenced state groups (#18154)"

This mechanism is suspected of inserting large numbers of rows into
`state_groups_state`,
thus unreasonably increasing disk usage.

See: https://github.com/element-hq/synapse/issues/18217

This reverts commit 5121f9210c (#18154).

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-03-07 15:44:13 +00:00
Olivier 'reivilibre
350e84a8a4 1.126.0rc2 2025-03-05 14:35:21 +00:00
reivilibre
69aceef8f6 Actually fix CI build wheels. (#18213)
Follows: #18212

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-03-05 14:20:17 +00:00
reivilibre
b7946c29be Fix wheel building configuration in CI by installing libatomic1. (#18212)
Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-03-04 17:37:28 +00:00
Olivier 'reivilibre
d7e238c8ee Tweak changelog to linkify MSCs 2025-03-04 14:31:47 +00:00
Olivier 'reivilibre
70f41c4541 Tweak changelog notice for debian repo signing key expiry change 2025-03-04 14:31:13 +00:00
Olivier 'reivilibre
26d9ce80c5 Add upgrade notes for the debian repo signing key expiry change 2025-03-04 14:29:38 +00:00
Olivier 'reivilibre
aa4a7b75d7 1.126.0rc1 2025-03-04 13:29:36 +00:00
93 changed files with 2702 additions and 586 deletions

10
.ci/before_build_wheel.sh Normal file
View File

@@ -0,0 +1,10 @@
#!/bin/sh
set -xeu
# On 32-bit Linux platforms, we need libatomic1 to use rustup
if command -v yum &> /dev/null; then
yum install -y libatomic
fi
# Install a Rust toolchain
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain 1.82.0 -y --profile minimal

View File

@@ -11,12 +11,12 @@ with open("poetry.lock", "rb") as f:
try:
lock_version = lockfile["metadata"]["lock-version"]
assert lock_version == "2.0"
assert lock_version == "2.1"
except Exception:
print(
"""\
Lockfile is not version 2.0. You probably need to upgrade poetry on your local box
and re-run `poetry lock --no-update`. See the Poetry cheat sheet at
Lockfile is not version 2.1. You probably need to upgrade poetry on your local box
and re-run `poetry lock`. See the Poetry cheat sheet at
https://element-hq.github.io/synapse/develop/development/dependencies.html
"""
)

View File

@@ -18,22 +18,22 @@ jobs:
steps:
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
with:
platforms: arm64
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Inspect builder
run: docker buildx inspect
- name: Install Cosign
uses: sigstore/cosign-installer@v3.8.1
uses: sigstore/cosign-installer@d7d6bc7722e3daa8354c50bcb52f4837da5e9b6a # v3.8.1
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Extract version from pyproject.toml
# Note: explicitly requesting bash will mean bash is invoked with `-eo pipefail`, see
@@ -43,13 +43,13 @@ jobs:
echo "SYNAPSE_VERSION=$(grep "^version" pyproject.toml | sed -E 's/version\s*=\s*["]([^"]*)["]/\1/')" >> $GITHUB_ENV
- name: Log in to DockerHub
uses: docker/login-action@v3
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@v3
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -57,7 +57,7 @@ jobs:
- name: Calculate docker image tag
id: set-tag
uses: docker/metadata-action@master
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: |
docker.io/matrixdotorg/synapse
@@ -72,7 +72,7 @@ jobs:
- name: Build and push all platforms
id: build-and-push
uses: docker/build-push-action@v6
uses: docker/build-push-action@471d1dc4e07e5cdedd4c2171150001c434f0b7a4 # v6.15.0
with:
push: true
labels: |

View File

@@ -14,7 +14,7 @@ jobs:
# There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
# (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
- name: 📥 Download artifact
uses: dawidd6/action-download-artifact@20319c5641d495c8a52e688b7dc5fada6c3a9fbc # v8
uses: dawidd6/action-download-artifact@07ab29fd4a977ae4d2b275087cf67563dfdf0295 # v9
with:
workflow: docs-pr.yaml
run_id: ${{ github.event.workflow_run.id }}
@@ -22,7 +22,7 @@ jobs:
path: book
- name: 📤 Deploy to Netlify
uses: matrix-org/netlify-pr-preview@v3
uses: matrix-org/netlify-pr-preview@9805cd123fc9a7e421e35340a05e1ebc5dee46b5 # v3
with:
path: book
owner: ${{ github.event.workflow_run.head_repository.owner.login }}

View File

@@ -13,7 +13,7 @@ jobs:
name: GitHub Pages
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
@@ -24,7 +24,7 @@ jobs:
mdbook-version: '0.4.17'
- name: Setup python
uses: actions/setup-python@v5
uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: "3.x"
@@ -39,7 +39,7 @@ jobs:
cp book/welcome_and_overview.html book/index.html
- name: Upload Artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: book
path: book
@@ -50,7 +50,7 @@ jobs:
name: Check links in documentation
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Setup mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0

View File

@@ -50,7 +50,7 @@ jobs:
needs:
- pre
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
@@ -64,7 +64,7 @@ jobs:
run: echo 'window.SYNAPSE_VERSION = "${{ needs.pre.outputs.branch-version }}";' > ./docs/website_files/version.js
- name: Setup python
uses: actions/setup-python@v5
uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: "3.x"

View File

@@ -13,21 +13,22 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@master
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master (rust 1.85.1)
with:
# We use nightly so that `fmt` correctly groups together imports, and
# clippy correctly fixes up the benchmarks.
toolchain: nightly-2022-12-01
components: clippy, rustfmt
- uses: Swatinem/rust-cache@v2
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
install-project: "false"
poetry-version: "2.1.1"
- name: Run ruff check
continue-on-error: true
@@ -43,6 +44,6 @@ jobs:
- run: cargo fmt
continue-on-error: true
- uses: stefanzweifel/git-auto-commit-action@v5
- uses: stefanzweifel/git-auto-commit-action@e348103e9026cc0eee72ae06630dbe30c8bf7a79 # v5.1.0
with:
commit_message: "Attempt to fix linting"

View File

@@ -39,17 +39,17 @@ jobs:
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@fcf085fcb4b4b8f63f96906cd713eb52181b5ea4 # stable (rust 1.85.1)
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
# The dev dependencies aren't exposed in the wheel metadata (at least with current
# poetry-core versions), so we install with poetry.
- uses: matrix-org/setup-python-poetry@v1
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: "3.x"
poetry-version: "1.3.2"
poetry-version: "2.1.1"
extras: "all"
# Dump installed versions for debugging.
- run: poetry run pip list > before.txt
@@ -72,11 +72,11 @@ jobs:
postgres-version: "14"
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@fcf085fcb4b4b8f63f96906cd713eb52181b5ea4 # stable (rust 1.85.1)
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.postgres-version }}
@@ -86,7 +86,7 @@ jobs:
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
postgres:${{ matrix.postgres-version }}
- uses: actions/setup-python@v5
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: "3.x"
- run: pip install .[all,test]
@@ -145,11 +145,11 @@ jobs:
BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@fcf085fcb4b4b8f63f96906cd713eb52181b5ea4 # stable (rust 1.85.1)
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- name: Ensure sytest runs `pip install`
# Delete the lockfile so sytest will `pip install` rather than `poetry install`
@@ -164,7 +164,7 @@ jobs:
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
@@ -192,15 +192,15 @@ jobs:
database: Postgres
steps:
- name: Run actions/checkout@v4 for synapse
uses: actions/checkout@v4
- name: Check out synapse codebase
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@v5
- uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
@@ -225,7 +225,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -16,8 +16,8 @@ jobs:
name: "Check locked dependencies have sdists"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: '3.x'
- run: pip install tomli

View File

@@ -33,29 +33,29 @@ jobs:
packages: write
steps:
- name: Checkout specific branch (debug build)
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
if: github.event_name == 'workflow_dispatch'
with:
ref: ${{ inputs.branch }}
- name: Checkout clean copy of develop (scheduled build)
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
if: github.event_name == 'schedule'
with:
ref: develop
- name: Checkout clean copy of master (on-push)
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
if: github.event_name == 'push'
with:
ref: master
- name: Login to registry
uses: docker/login-action@v3
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Work out labels for complement image
id: meta
uses: docker/metadata-action@v5
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ghcr.io/${{ github.repository }}/complement-synapse
tags: |

View File

@@ -27,8 +27,8 @@ jobs:
name: "Calculate list of debian distros"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: '3.x'
- id: set-distros
@@ -55,18 +55,18 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: src
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
with:
install: true
- name: Set up docker layer caching
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
@@ -74,7 +74,7 @@ jobs:
${{ runner.os }}-buildx-
- name: Set up python
uses: actions/setup-python@v5
uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: '3.x'
@@ -101,7 +101,7 @@ jobs:
echo "ARTIFACT_NAME=${DISTRO#*:}" >> "$GITHUB_OUTPUT"
- name: Upload debs as artifacts
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: debs-${{ steps.artifact-name.outputs.ARTIFACT_NAME }}
path: debs/*
@@ -130,20 +130,20 @@ jobs:
arch: aarch64
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@v5
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
# setup-python@v4 doesn't impose a default python version. Need to use 3.x
# here, because `python` on osx points to Python 2.7.
python-version: "3.x"
- name: Install cibuildwheel
run: python -m pip install cibuildwheel==2.19.1
run: python -m pip install cibuildwheel==2.23.0
- name: Set up QEMU to emulate aarch64
if: matrix.arch == 'aarch64'
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
with:
platforms: arm64
@@ -165,7 +165,7 @@ jobs:
CARGO_NET_GIT_FETCH_WITH_CLI: true
CIBW_ENVIRONMENT_PASS_LINUX: CARGO_NET_GIT_FETCH_WITH_CLI
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Wheel-${{ matrix.os }}-${{ matrix.arch }}
path: ./wheelhouse/*.whl
@@ -176,8 +176,8 @@ jobs:
if: ${{ !startsWith(github.ref, 'refs/pull/') }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: '3.10'
@@ -186,7 +186,7 @@ jobs:
- name: Build sdist
run: python -m build --sdist
- uses: actions/upload-artifact@v4
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: Sdist
path: dist/*.tar.gz
@@ -203,7 +203,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Download all workflow run artifacts
uses: actions/download-artifact@v4
uses: actions/download-artifact@95815c38cf2ff2164869cbab79da8d1f422bc89e # v4.2.1
- name: Build a tarball for the debs
# We need to merge all the debs uploads into one folder, then compress
# that.
@@ -213,7 +213,7 @@ jobs:
tar -cvJf debs.tar.xz debs
- name: Attach to release
# Pinned to work around https://github.com/softprops/action-gh-release/issues/445
uses: softprops/action-gh-release@v0.1.15
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:

View File

@@ -23,7 +23,7 @@ jobs:
linting: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.linting }}
linting_readme: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.linting_readme }}
steps:
- uses: dorny/paths-filter@v3
- uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
id: filter
# We only check on PRs
if: startsWith(github.ref, 'refs/pull/')
@@ -83,14 +83,14 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: "3.x"
poetry-version: "1.3.2"
poetry-version: "2.1.1"
extras: "all"
- run: poetry run scripts-dev/generate_sample_config.sh --check
- run: poetry run scripts-dev/config-lint.sh
@@ -101,8 +101,8 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: "3.x"
- run: "pip install 'click==8.1.1' 'GitPython>=3.1.20'"
@@ -111,8 +111,8 @@ jobs:
check-lockfile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: "3.x"
- run: .ci/scripts/check_lockfile.py
@@ -124,11 +124,12 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
poetry-version: "2.1.1"
install-project: "false"
- name: Run ruff check
@@ -145,14 +146,14 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
# We want to make use of type hints in optional dependencies too.
extras: all
@@ -161,11 +162,12 @@ jobs:
# https://github.com/matrix-org/synapse/pull/15376#issuecomment-1498983775
# To make CI green, err towards caution and install the project.
install-project: "true"
poetry-version: "2.1.1"
# Cribbed from
# https://github.com/AustinScola/mypy-cache-github-action/blob/85ea4f2972abed39b33bd02c36e341b28ca59213/src/restore.ts#L10-L17
- name: Restore/persist mypy's cache
uses: actions/cache@v4
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: |
.mypy_cache
@@ -178,7 +180,7 @@ jobs:
lint-crlf:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Check line endings
run: scripts-dev/check_line_terminators.sh
@@ -186,11 +188,11 @@ jobs:
if: ${{ (github.base_ref == 'develop' || contains(github.base_ref, 'release-')) && github.actor != 'dependabot[bot]' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: actions/setup-python@v5
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: "3.x"
- run: "pip install 'towncrier>=18.6.0rc1'"
@@ -204,15 +206,15 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
poetry-version: "1.3.2"
poetry-version: "2.1.1"
extras: "all"
- run: poetry run scripts-dev/check_pydantic_models.py
@@ -222,13 +224,13 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
with:
components: clippy
- uses: Swatinem/rust-cache@v2
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- run: cargo clippy -- -D warnings
@@ -240,14 +242,14 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@master
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master (rust 1.85.1)
with:
toolchain: nightly-2022-12-01
components: clippy
- uses: Swatinem/rust-cache@v2
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- run: cargo clippy --all-features -- -D warnings
@@ -257,15 +259,15 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@master
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master (rust 1.85.1)
with:
# We use nightly so that it correctly groups together imports
toolchain: nightly-2022-12-01
components: rustfmt
- uses: Swatinem/rust-cache@v2
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- run: cargo fmt --check
@@ -276,8 +278,8 @@ jobs:
needs: changes
if: ${{ needs.changes.outputs.linting_readme == 'true' }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: "3.x"
- run: "pip install rstcheck"
@@ -301,7 +303,7 @@ jobs:
- lint-readme
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@v3
- uses: matrix-org/done-action@3409aa904e8a2aaf2220f09bc954d3d0b0a2ee67 # v3
with:
needs: ${{ toJSON(needs) }}
@@ -324,8 +326,8 @@ jobs:
needs: linting-done
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: "3.x"
- id: get-matrix
@@ -345,7 +347,7 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.trial_test_matrix) }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.job.postgres-version }}
if: ${{ matrix.job.postgres-version }}
@@ -360,13 +362,13 @@ jobs:
postgres:${{ matrix.job.postgres-version }}
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- uses: matrix-org/setup-python-poetry@v1
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: ${{ matrix.job.python-version }}
poetry-version: "1.3.2"
poetry-version: "2.1.1"
extras: ${{ matrix.job.extras }}
- name: Await PostgreSQL
if: ${{ matrix.job.postgres-version }}
@@ -399,11 +401,11 @@ jobs:
- changes
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
# There aren't wheels for some of the older deps, so we need to install
# their build dependencies
@@ -412,7 +414,7 @@ jobs:
sudo apt-get -qq install build-essential libffi-dev python3-dev \
libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev
- uses: actions/setup-python@v5
- uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
with:
python-version: '3.9'
@@ -462,13 +464,13 @@ jobs:
extras: ["all"]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
# Install libs necessary for PyPy to build binary wheels for dependencies
- run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
- uses: matrix-org/setup-python-poetry@v1
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: ${{ matrix.python-version }}
poetry-version: "1.3.2"
poetry-version: "2.1.1"
extras: ${{ matrix.extras }}
- run: poetry run trial --jobs=2 tests
- name: Dump logs
@@ -512,13 +514,13 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.sytest_test_matrix) }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- name: Run SyTest
run: /bootstrap.sh synapse
@@ -527,7 +529,7 @@ jobs:
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.job.*, ', ') }})
@@ -557,11 +559,11 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
poetry-version: "1.3.2"
poetry-version: "2.1.1"
extras: "postgres"
- run: .ci/scripts/test_export_data_command.sh
env:
@@ -601,7 +603,7 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Add PostgreSQL apt repository
# We need a version of pg_dump that can handle the version of
# PostgreSQL being tested against. The Ubuntu package repository lags
@@ -612,10 +614,10 @@ jobs:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: ${{ matrix.python-version }}
poetry-version: "1.3.2"
poetry-version: "2.1.1"
extras: "postgres"
- run: .ci/scripts/test_synapse_port_db.sh
id: run_tester_script
@@ -625,7 +627,7 @@ jobs:
PGPASSWORD: postgres
PGDATABASE: postgres
- name: "Upload schema differences"
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ failure() && !cancelled() && steps.run_tester_script.outcome == 'failure' }}
with:
name: Schema dumps
@@ -655,19 +657,19 @@ jobs:
database: Postgres
steps:
- name: Run actions/checkout@v4 for synapse
uses: actions/checkout@v4
- name: Checkout synapse codebase
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: synapse
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@v5
- uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
@@ -690,11 +692,11 @@ jobs:
- changes
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- run: cargo test
@@ -708,13 +710,13 @@ jobs:
- changes
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@master
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master (rust 1.85.1)
with:
toolchain: nightly-2022-12-01
- uses: Swatinem/rust-cache@v2
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- run: cargo bench --no-run
@@ -733,7 +735,7 @@ jobs:
- linting-done
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@v3
- uses: matrix-org/done-action@3409aa904e8a2aaf2220f09bc954d3d0b0a2ee67 # v3
with:
needs: ${{ toJSON(needs) }}

View File

@@ -6,7 +6,7 @@ on:
jobs:
triage:
uses: matrix-org/backend-meta/.github/workflows/triage-incoming.yml@v2
uses: matrix-org/backend-meta/.github/workflows/triage-incoming.yml@18beaf3c8e536108bd04d18e6c3dc40ba3931e28 # v2.0.3
with:
project_id: 'PVT_kwDOAIB0Bs4AFDdZ'
content_id: ${{ github.event.issue.node_id }}

View File

@@ -11,7 +11,7 @@ jobs:
if: >
contains(github.event.issue.labels.*.name, 'X-Needs-Info')
steps:
- uses: actions/add-to-project@main
- uses: actions/add-to-project@280af8ae1f83a494cfad2cb10f02f6d13529caa9 # main (v1.0.2 + 10 commits)
id: add_project
with:
project-url: "https://github.com/orgs/matrix-org/projects/67"

View File

@@ -40,16 +40,17 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@fcf085fcb4b4b8f63f96906cd713eb52181b5ea4 # stable (rust 1.85.1)
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- uses: matrix-org/setup-python-poetry@v1
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: "3.x"
extras: "all"
poetry-version: "2.1.1"
- run: |
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#${{ inputs.twisted_ref || 'trunk' }}
@@ -64,17 +65,18 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@fcf085fcb4b4b8f63f96906cd713eb52181b5ea4 # stable (rust 1.85.1)
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- uses: matrix-org/setup-python-poetry@v1
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: "3.x"
extras: "all test"
poetry-version: "2.1.1"
- run: |
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#trunk
@@ -108,11 +110,11 @@ jobs:
- ${{ github.workspace }}:/src
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
uses: dtolnay/rust-toolchain@fcf085fcb4b4b8f63f96906cd713eb52181b5ea4 # stable (rust 1.85.1)
- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
- name: Patch dependencies
# Note: The poetry commands want to create a virtualenv in /src/.venv/,
@@ -136,7 +138,7 @@ jobs:
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
@@ -164,14 +166,14 @@ jobs:
steps:
- name: Run actions/checkout@v4 for synapse
uses: actions/checkout@v4
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@v5
- uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
@@ -181,11 +183,11 @@ jobs:
run: |
set -x
DEBIAN_FRONTEND=noninteractive sudo apt-get install -yqq python3 pipx
pipx install poetry==1.3.2
pipx install poetry==2.1.1
poetry remove -n twisted
poetry add -n --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry lock --no-update
poetry lock
working-directory: synapse
- run: |
@@ -206,7 +208,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,10 +1,183 @@
# Synapse 1.128.0 (2025-04-08)
No significant changes since 1.128.0rc1.
# Synapse 1.128.0rc1 (2025-04-01)
### Features
- Add an access token introspection cache to make Matrix Authentication Service integration ([MSC3861](https://github.com/matrix-org/matrix-doc/pull/3861)) more efficient. ([\#18231](https://github.com/element-hq/synapse/issues/18231))
- Add background job to clear unreferenced state groups. ([\#18254](https://github.com/element-hq/synapse/issues/18254))
- Hashes of media files are now tracked by Synapse. Media quarantines will now apply to all files with the same hash. ([\#18277](https://github.com/element-hq/synapse/issues/18277), [\#18302](https://github.com/element-hq/synapse/issues/18302), [\#18296](https://github.com/element-hq/synapse/issues/18296))
### Bugfixes
- Add index to sliding sync ([MSC4186](https://github.com/matrix-org/matrix-doc/pull/4186)) membership snapshot table, to fix a performance issue. ([\#18074](https://github.com/element-hq/synapse/issues/18074))
### Updates to the Docker image
- Specify the architecture of installed packages via an APT config option, which is more reliable than appending package names with `:{arch}`. ([\#18271](https://github.com/element-hq/synapse/issues/18271))
- Always specify base image debian versions with a build argument. ([\#18272](https://github.com/element-hq/synapse/issues/18272))
- Allow passing arguments to `start_for_complement.sh` (to be sent to `configure_workers_and_start.py`). ([\#18273](https://github.com/element-hq/synapse/issues/18273))
- Make some improvements to the `prefix-log` script in the workers image. ([\#18274](https://github.com/element-hq/synapse/issues/18274))
- Use `uv pip` to install `supervisor` in the worker image. ([\#18275](https://github.com/element-hq/synapse/issues/18275))
- Avoid needing to download & use `rsync` in a build layer. ([\#18287](https://github.com/element-hq/synapse/issues/18287))
### Improved Documentation
- Fix how to obtain access token and change naming from riot to element ([\#18225](https://github.com/element-hq/synapse/issues/18225))
- Correct a small typo in the SSO mapping providers documentation. ([\#18276](https://github.com/element-hq/synapse/issues/18276))
- Add docs for how to clear out the Poetry wheel cache. ([\#18283](https://github.com/element-hq/synapse/issues/18283))
### Internal Changes
- Add a column `participant` to `room_memberships` table. ([\#18068](https://github.com/element-hq/synapse/issues/18068))
- Update Poetry to 2.1.1, including updating the lock file version. ([\#18251](https://github.com/element-hq/synapse/issues/18251))
- Pin GitHub Actions dependencies by commit hash. ([\#18255](https://github.com/element-hq/synapse/issues/18255))
- Add DB delta to remove the old state group deletion job. ([\#18284](https://github.com/element-hq/synapse/issues/18284))
### Updates to locked dependencies
* Bump actions/add-to-project from f5473ace9aeee8b97717b281e26980aa5097023f to 280af8ae1f83a494cfad2cb10f02f6d13529caa9. ([\#18303](https://github.com/element-hq/synapse/issues/18303))
* Bump actions/cache from 4.2.2 to 4.2.3. ([\#18266](https://github.com/element-hq/synapse/issues/18266))
* Bump actions/download-artifact from 4.2.0 to 4.2.1. ([\#18268](https://github.com/element-hq/synapse/issues/18268))
* Bump actions/setup-python from 5.4.0 to 5.5.0. ([\#18298](https://github.com/element-hq/synapse/issues/18298))
* Bump actions/upload-artifact from 4.6.1 to 4.6.2. ([\#18304](https://github.com/element-hq/synapse/issues/18304))
* Bump authlib from 1.4.1 to 1.5.1. ([\#18306](https://github.com/element-hq/synapse/issues/18306))
* Bump dawidd6/action-download-artifact from 8 to 9. ([\#18204](https://github.com/element-hq/synapse/issues/18204))
* Bump jinja2 from 3.1.5 to 3.1.6. ([\#18223](https://github.com/element-hq/synapse/issues/18223))
* Bump log from 0.4.26 to 0.4.27. ([\#18267](https://github.com/element-hq/synapse/issues/18267))
* Bump phonenumbers from 8.13.50 to 9.0.2. ([\#18299](https://github.com/element-hq/synapse/issues/18299))
* Bump pygithub from 2.5.0 to 2.6.1. ([\#18243](https://github.com/element-hq/synapse/issues/18243))
* Bump pyo3-log from 0.12.1 to 0.12.2. ([\#18269](https://github.com/element-hq/synapse/issues/18269))
# Synapse 1.127.1 (2025-03-26)
## Security
- Fix [CVE-2025-30355](https://www.cve.org/CVERecord?id=CVE-2025-30355) / [GHSA-v56r-hwv5-mxg6](https://github.com/element-hq/synapse/security/advisories/GHSA-v56r-hwv5-mxg6). **High severity vulnerability affecting federation. The vulnerability has been exploited in the wild.**
# Synapse 1.127.0 (2025-03-25)
No significant changes since 1.127.0rc1.
# Synapse 1.127.0rc1 (2025-03-18)
### Features
- Update [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140) implementation to no longer cancel a user's own delayed state events with an event type & state key that match a more recent state event sent by that user. ([\#17810](https://github.com/element-hq/synapse/issues/17810))
### Improved Documentation
- Fixed a minor typo in the Synapse documentation. Contributed by @karuto12. ([\#18224](https://github.com/element-hq/synapse/issues/18224))
### Internal Changes
- Remove undocumented `SYNAPSE_USE_FROZEN_DICTS` environment variable. ([\#18123](https://github.com/element-hq/synapse/issues/18123))
- Fix detection of workflow failures in the release script. ([\#18211](https://github.com/element-hq/synapse/issues/18211))
- Add caching support to media endpoints. ([\#18235](https://github.com/element-hq/synapse/issues/18235))
### Updates to locked dependencies
* Bump anyhow from 1.0.96 to 1.0.97. ([\#18201](https://github.com/element-hq/synapse/issues/18201))
* Bump bcrypt from 4.2.1 to 4.3.0. ([\#18207](https://github.com/element-hq/synapse/issues/18207))
* Bump bytes from 1.10.0 to 1.10.1. ([\#18227](https://github.com/element-hq/synapse/issues/18227))
* Bump http from 1.2.0 to 1.3.1. ([\#18245](https://github.com/element-hq/synapse/issues/18245))
* Bump sentry-sdk from 2.19.2 to 2.22.0. ([\#18205](https://github.com/element-hq/synapse/issues/18205))
* Bump serde from 1.0.218 to 1.0.219. ([\#18228](https://github.com/element-hq/synapse/issues/18228))
* Bump serde_json from 1.0.139 to 1.0.140. ([\#18202](https://github.com/element-hq/synapse/issues/18202))
* Bump ulid from 1.2.0 to 1.2.1. ([\#18246](https://github.com/element-hq/synapse/issues/18246))
# Synapse 1.126.0 (2025-03-11)
Administrators using the Debian/Ubuntu packages from `packages.matrix.org`, please check
[the relevant section in the upgrade notes](https://github.com/element-hq/synapse/blob/release-v1.126/docs/upgrade.md#change-of-signing-key-expiry-date-for-the-debianubuntu-package-repository)
as we have recently updated the expiry date on the repository's GPG signing key. The old version of the key will expire on `2025-03-15`.
No significant changes since 1.126.0rc3.
# Synapse 1.126.0rc3 (2025-03-07)
### Bugfixes
- Revert the background job to clear unreferenced state groups (that was introduced in v1.126.0rc1), due to [a suspected issue](https://github.com/element-hq/synapse/issues/18217) that causes increased disk usage. ([\#18222](https://github.com/element-hq/synapse/issues/18222))
# Synapse 1.126.0rc2 (2025-03-05)
### Internal Changes
- Fix wheel building configuration in CI by installing libatomic1. ([\#18212](https://github.com/element-hq/synapse/issues/18212), [\#18213](https://github.com/element-hq/synapse/issues/18213))
# Synapse 1.126.0rc1 (2025-03-04)
Synapse 1.126.0rc1 was not fully released due to an error in CI.
### Features
- Define ratelimit configuration for delayed event management. ([\#18019](https://github.com/element-hq/synapse/issues/18019))
- Add `form_secret_path` config option. ([\#18090](https://github.com/element-hq/synapse/issues/18090))
- Add the `--no-secrets-in-config` command line option. ([\#18092](https://github.com/element-hq/synapse/issues/18092))
- Add background job to clear unreferenced state groups. ([\#18154](https://github.com/element-hq/synapse/issues/18154))
- Add support for specifying/overriding `id_token_signing_alg_values_supported` for an OpenID identity provider. ([\#18177](https://github.com/element-hq/synapse/issues/18177))
- Add `worker_replication_secret_path` config option. ([\#18191](https://github.com/element-hq/synapse/issues/18191))
- Add support for specifying/overriding `redirect_uri` in the authorization and token requests against an OpenID identity provider. ([\#18197](https://github.com/element-hq/synapse/issues/18197))
### Bugfixes
- Make sure we advertise registration as disabled when [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) is enabled. ([\#17661](https://github.com/element-hq/synapse/issues/17661))
- Prevent suspended users from sending encrypted messages. ([\#18157](https://github.com/element-hq/synapse/issues/18157))
- Cleanup deleted state group references. ([\#18165](https://github.com/element-hq/synapse/issues/18165))
- Fix [MSC4108 QR-code login](https://github.com/matrix-org/matrix-spec-proposals/pull/4108) not working with some reverse-proxy setups. ([\#18178](https://github.com/element-hq/synapse/issues/18178))
- Support device IDs that can't be represented in a scope when delegating auth to Matrix Authentication Service 0.15.0+. ([\#18174](https://github.com/element-hq/synapse/issues/18174))
### Updates to the Docker image
- Speed up the building of the Docker image. ([\#18038](https://github.com/element-hq/synapse/issues/18038))
### Improved Documentation
- Move incorrectly placed version indicator in User Event Redaction Admin API docs. ([\#18152](https://github.com/element-hq/synapse/issues/18152))
- Document suspension Admin API. ([\#18162](https://github.com/element-hq/synapse/issues/18162))
### Deprecations and Removals
- Disable room list publication by default. ([\#18175](https://github.com/element-hq/synapse/issues/18175))
### Updates to locked dependencies
* Bump anyhow from 1.0.95 to 1.0.96. ([\#18187](https://github.com/element-hq/synapse/issues/18187))
* Bump authlib from 1.4.0 to 1.4.1. ([\#18190](https://github.com/element-hq/synapse/issues/18190))
* Bump click from 8.1.7 to 8.1.8. ([\#18189](https://github.com/element-hq/synapse/issues/18189))
* Bump log from 0.4.25 to 0.4.26. ([\#18184](https://github.com/element-hq/synapse/issues/18184))
* Bump pyo3-log from 0.12.0 to 0.12.1. ([\#18046](https://github.com/element-hq/synapse/issues/18046))
* Bump serde from 1.0.217 to 1.0.218. ([\#18183](https://github.com/element-hq/synapse/issues/18183))
* Bump serde_json from 1.0.138 to 1.0.139. ([\#18186](https://github.com/element-hq/synapse/issues/18186))
* Bump sigstore/cosign-installer from 3.8.0 to 3.8.1. ([\#18185](https://github.com/element-hq/synapse/issues/18185))
* Bump types-psycopg2 from 2.9.21.20241019 to 2.9.21.20250121. ([\#18188](https://github.com/element-hq/synapse/issues/18188))
# Synapse 1.125.0 (2025-02-25)
No significant changes since 1.125.0rc1.
# Synapse 1.125.0rc1 (2025-02-18)
### Features

Cargo.lock generated

@@ -13,9 +13,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.96"
version = "1.0.97"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6b964d184e89d9b6b67dd2715bc8e74cf3107fb2b529990c90cf517326150bf4"
checksum = "dcfed56ad506cb2c684a14971b8861fdc3baaaae314b9e5f9bb532cbe3ba7a4f"
[[package]]
name = "arc-swap"
@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
[[package]]
name = "bytes"
version = "1.10.0"
version = "1.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f61dac84819c6588b558454b194026eb1f09c293b9036ae9b159e74e73ab6cf9"
checksum = "d71b6127be86fdcfddb610f7182ac57211d4b18a3e9c82eb2d17662f2227ad6a"
[[package]]
name = "cfg-if"
@@ -173,9 +173,9 @@ checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "http"
version = "1.2.0"
version = "1.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f16ca2af56261c99fba8bac40a10251ce8188205a4c448fbb745a2e4daa76fea"
checksum = "f4a85d31aea989eead29a3aaf9e1115a180df8282431156e533de47660892565"
dependencies = [
"bytes",
"fnv",
@@ -223,9 +223,9 @@ checksum = "ae743338b92ff9146ce83992f766a31066a91a8c84a45e0e9f21e7cf6de6d346"
[[package]]
name = "log"
version = "0.4.26"
version = "0.4.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "30bde2b3dc3671ae49d8e2e9f044c7c005836e7a023ee57cffa25ab82764bb9e"
checksum = "13dc2df351e3202783a1fe0d44375f7295ffb4049267b0f3018346dc122a1d94"
[[package]]
name = "memchr"
@@ -277,9 +277,9 @@ dependencies = [
[[package]]
name = "pyo3"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57fe09249128b3173d092de9523eaa75136bf7ba85e0d69eca241c7939c933cc"
checksum = "7778bffd85cf38175ac1f545509665d0b9b92a198ca7941f131f85f7a4f9a872"
dependencies = [
"anyhow",
"cfg-if",
@@ -296,9 +296,9 @@ dependencies = [
[[package]]
name = "pyo3-build-config"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1cd3927b5a78757a0d71aa9dff669f903b1eb64b54142a9bd9f757f8fde65fd7"
checksum = "94f6cbe86ef3bf18998d9df6e0f3fc1050a8c5efa409bf712e661a4366e010fb"
dependencies = [
"once_cell",
"target-lexicon",
@@ -306,9 +306,9 @@ dependencies = [
[[package]]
name = "pyo3-ffi"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dab6bb2102bd8f991e7749f130a70d05dd557613e39ed2deeee8e9ca0c4d548d"
checksum = "e9f1b4c431c0bb1c8fb0a338709859eed0d030ff6daa34368d3b152a63dfdd8d"
dependencies = [
"libc",
"pyo3-build-config",
@@ -316,9 +316,9 @@ dependencies = [
[[package]]
name = "pyo3-log"
version = "0.12.1"
version = "0.12.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be5bb22b77965a7b5394e9aae9897a0607b51df5167561ffc3b02643b4200bc7"
checksum = "4b78e4983ba15bc62833a0e0941d965bc03690163f1127864f1408db25063466"
dependencies = [
"arc-swap",
"log",
@@ -327,9 +327,9 @@ dependencies = [
[[package]]
name = "pyo3-macros"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "91871864b353fd5ffcb3f91f2f703a22a9797c91b9ab497b1acac7b07ae509c7"
checksum = "fbc2201328f63c4710f68abdf653c89d8dbc2858b88c5d88b0ff38a75288a9da"
dependencies = [
"proc-macro2",
"pyo3-macros-backend",
@@ -339,9 +339,9 @@ dependencies = [
[[package]]
name = "pyo3-macros-backend"
version = "0.23.4"
version = "0.23.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "43abc3b80bc20f3facd86cd3c60beed58c3e2aa26213f3cda368de39c60a27e4"
checksum = "fca6726ad0f3da9c9de093d6f116a93c1a38e417ed73bf138472cf4064f72028"
dependencies = [
"heck",
"proc-macro2",
@@ -437,18 +437,18 @@ checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f"
[[package]]
name = "serde"
version = "1.0.218"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8dfc9d19bdbf6d17e22319da49161d5d0108e4188e8b680aef6299eed22df60"
checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.218"
version = "1.0.219"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f09503e191f4e797cb8aac08e9a4a4695c5edf6a2e70e376d961ddd5c969f82b"
checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
dependencies = [
"proc-macro2",
"quote",
@@ -457,9 +457,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.139"
version = "1.0.140"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44f86c3acccc9c65b153fe1b85a3be07fe5515274ec9f0653b4a0875731c72a6"
checksum = "20068b6e96dc6c9bd23e01df8827e6c7e1f2fddd43c21810382803c136b99373"
dependencies = [
"itoa",
"memchr",
@@ -544,9 +544,9 @@ checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
[[package]]
name = "ulid"
version = "1.2.0"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ab82fc73182c29b02e2926a6df32f2241dbadb5cfc111fd595515b3598f46bb3"
checksum = "470dbf6591da1b39d43c14523b2b469c86879a53e8b758c8e090a470fe7b1fbe"
dependencies = [
"rand",
"web-time",


@@ -1 +0,0 @@
Make sure we advertise registration as disabled when MSC3861 is enabled.


@@ -1 +0,0 @@
Define ratelimit configuration for delayed event management.


@@ -1 +0,0 @@
Speed up the building of the Docker image.


@@ -1 +0,0 @@
Bump pyo3-log from 0.12.0 to 0.12.1.


@@ -1 +0,0 @@
Add `form_secret_path` config option.


@@ -1 +0,0 @@
Add the `--no-secrets-in-config` command line option.


@@ -1 +0,0 @@
Move incorrectly placed version indicator in User Event Redaction Admin API docs.


@@ -1 +0,0 @@
Add background job to clear unreferenced state groups.


@@ -1 +0,0 @@
Prevent suspended users from sending encrypted messages.


@@ -1 +0,0 @@
Document suspension Admin API.


@@ -1 +0,0 @@
Cleanup deleted state group references.


@@ -1 +0,0 @@
Support device IDs that can't be represented in a scope when delegating auth to Matrix Authentication Service 0.15.0+.


@@ -1 +0,0 @@
Disable room list publication by default.


@@ -1 +0,0 @@
Add support for specifying/overriding `id_token_signing_alg_values_supported` for an OpenID identity provider.


@@ -1 +0,0 @@
Fix MSC4108 QR-code login not working with some reverse-proxy setups.


@@ -1 +0,0 @@
Add `worker_replication_secret_path` config option.


@@ -1 +0,0 @@
Add support for specifying/overriding `redirect_uri` in the authorization and token requests against an OpenID identity provider.


@@ -35,7 +35,7 @@ TEMP_VENV="$(mktemp -d)"
python3 -m venv "$TEMP_VENV"
source "$TEMP_VENV/bin/activate"
pip install -U pip
pip install poetry==1.3.2
pip install poetry==2.1.1 poetry-plugin-export==1.9.0
poetry export \
--extras all \
--extras test \

debian/changelog vendored

@@ -1,3 +1,58 @@
matrix-synapse-py3 (1.128.0) stable; urgency=medium
* New Synapse release 1.128.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 08 Apr 2025 14:09:54 +0100
matrix-synapse-py3 (1.128.0~rc1) stable; urgency=medium
* Update Poetry to 2.1.1.
* New synapse release 1.128.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 01 Apr 2025 14:35:33 +0000
matrix-synapse-py3 (1.127.1) stable; urgency=medium
* New Synapse release 1.127.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 26 Mar 2025 21:07:31 +0000
matrix-synapse-py3 (1.127.0) stable; urgency=medium
* New Synapse release 1.127.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 25 Mar 2025 12:04:15 +0000
matrix-synapse-py3 (1.127.0~rc1) stable; urgency=medium
* New Synapse release 1.127.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Mar 2025 13:30:05 +0000
matrix-synapse-py3 (1.126.0) stable; urgency=medium
* New Synapse release 1.126.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 Mar 2025 13:11:29 +0000
matrix-synapse-py3 (1.126.0~rc3) stable; urgency=medium
* New Synapse release 1.126.0rc3.
-- Synapse Packaging team <packages@matrix.org> Fri, 07 Mar 2025 15:45:05 +0000
matrix-synapse-py3 (1.126.0~rc2) stable; urgency=medium
* New Synapse release 1.126.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 05 Mar 2025 14:29:12 +0000
matrix-synapse-py3 (1.126.0~rc1) stable; urgency=medium
* New Synapse release 1.126.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Mar 2025 13:11:51 +0000
matrix-synapse-py3 (1.125.0) stable; urgency=medium
* New Synapse release 1.125.0.


@@ -22,7 +22,7 @@
ARG DEBIAN_VERSION=bookworm
ARG PYTHON_VERSION=3.12
ARG POETRY_VERSION=1.8.3
ARG POETRY_VERSION=2.1.1
###
### Stage 0: generate requirements.txt
@@ -56,7 +56,7 @@ ENV UV_LINK_MODE=copy
ARG POETRY_VERSION
RUN --mount=type=cache,target=/root/.cache/uv \
if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
uvx --with poetry-plugin-export==1.8.0 \
uvx --with poetry-plugin-export==1.9.0 \
poetry@${POETRY_VERSION} export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}; \
else \
touch /synapse/requirements.txt; \
@@ -134,7 +134,6 @@ RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
apt-get install -y --no-install-recommends rsync && \
apt-cache depends --recurse --no-recommends --no-suggests --no-conflicts --no-breaks --no-replaces --no-enhances --no-pre-depends \
curl \
gosu \
@@ -148,14 +147,10 @@ RUN \
for arch in arm64 amd64; do \
mkdir -p /tmp/debs-${arch} && \
cd /tmp/debs-${arch} && \
apt-get download $(sed "s/$/:${arch}/" /tmp/pkg-list); \
apt-get -o APT::Architecture="${arch}" download $(cat /tmp/pkg-list); \
done
# Extract the debs for each architecture
# On the runtime image, /lib is a symlink to /usr/lib, so we need to copy the
# libraries to the right place, else the `COPY` won't work.
# On amd64, we'll also have a /lib64 folder with ld-linux-x86-64.so.2, which is
# already present in the runtime image.
RUN \
for arch in arm64 amd64; do \
mkdir -p /install-${arch}/var/lib/dpkg/status.d/ && \
@@ -165,8 +160,6 @@ RUN \
dpkg --ctrl-tarfile $deb | tar -Ox ./control > /install-${arch}/var/lib/dpkg/status.d/${package_name}; \
dpkg --extract $deb /install-${arch}; \
done; \
rsync -avr /install-${arch}/lib/ /install-${arch}/usr/lib; \
rm -rf /install-${arch}/lib /install-${arch}/lib64; \
done
@@ -183,7 +176,14 @@ LABEL org.opencontainers.image.documentation='https://github.com/element-hq/syna
LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later'
COPY --from=runtime-deps /install-${TARGETARCH} /
# On the runtime image, /lib is a symlink to /usr/lib, so we need to copy the
# libraries to the right place, else the `COPY` won't work.
# On amd64, we'll also have a /lib64 folder with ld-linux-x86-64.so.2, which is
# already present in the runtime image.
COPY --from=runtime-deps /install-${TARGETARCH}/lib /usr/lib
COPY --from=runtime-deps /install-${TARGETARCH}/etc /etc
COPY --from=runtime-deps /install-${TARGETARCH}/usr /usr
COPY --from=runtime-deps /install-${TARGETARCH}/var /var
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py
COPY ./docker/conf /conf


@@ -2,12 +2,13 @@
ARG SYNAPSE_VERSION=latest
ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
ARG DEBIAN_VERSION=bookworm
# first of all, we create a base image with an nginx which we can copy into the
# target image. For repeated rebuilds, this is much faster than apt installing
# each time.
FROM docker.io/library/debian:bookworm-slim AS deps_base
FROM docker.io/library/debian:${DEBIAN_VERSION}-slim AS deps_base
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
@@ -21,15 +22,20 @@ FROM docker.io/library/debian:bookworm-slim AS deps_base
# which makes it much easier to copy (but we need to make sure we use an image
# based on the same debian version as the synapse image, to make sure we get
# the expected version of libc.
FROM docker.io/library/redis:7-bookworm AS redis_base
FROM docker.io/library/redis:7-${DEBIAN_VERSION} AS redis_base
# now build the final image, based on the regular Synapse docker image
FROM $FROM
# Install supervisord with pip instead of apt, to avoid installing a second
# Install supervisord with uv pip instead of apt, to avoid installing a second
# copy of python.
RUN --mount=type=cache,target=/root/.cache/pip \
pip install supervisor~=4.2
# --link-mode=copy silences a warning as uv isn't able to do hardlinks between its cache
# (mounted as --mount=type=cache) and the target directory.
RUN \
--mount=type=bind,from=ghcr.io/astral-sh/uv:0.6.8,source=/uv,target=/uv \
--mount=type=cache,target=/root/.cache/uv \
/uv pip install --link-mode=copy --prefix="/usr/local" supervisor~=4.2
RUN mkdir -p /etc/supervisor/conf.d
# Copy over redis and nginx


@@ -9,6 +9,9 @@
ARG SYNAPSE_VERSION=latest
# This is an intermediate image, to be built locally (not pulled from a registry).
ARG FROM=matrixdotorg/synapse-workers:$SYNAPSE_VERSION
ARG DEBIAN_VERSION=bookworm
FROM docker.io/library/postgres:13-${DEBIAN_VERSION} AS postgres_base
FROM $FROM
# First of all, we copy postgres server from the official postgres image,
@@ -20,8 +23,8 @@ FROM $FROM
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=docker.io/library/postgres:13-bookworm /usr/lib/postgresql /usr/lib/postgresql
COPY --from=docker.io/library/postgres:13-bookworm /usr/share/postgresql /usr/share/postgresql
COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql
RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data


@@ -5,12 +5,12 @@
set -e
echo "Complement Synapse launcher"
echo " Args: $@"
echo " Args: $*"
echo " Env: SYNAPSE_COMPLEMENT_DATABASE=$SYNAPSE_COMPLEMENT_DATABASE SYNAPSE_COMPLEMENT_USE_WORKERS=$SYNAPSE_COMPLEMENT_USE_WORKERS SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR=$SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR"
function log {
d=$(date +"%Y-%m-%d %H:%M:%S,%3N")
echo "$d $@"
echo "$d $*"
}
# Set the server name of the homeserver
@@ -131,4 +131,4 @@ export SYNAPSE_TLS_KEY=/conf/server.tls.key
# Run the script that writes the necessary config files and starts supervisord, which in turn
# starts everything else
exec /configure_workers_and_start.py
exec /configure_workers_and_start.py "$@"


@@ -10,6 +10,9 @@
# '-W interactive' is a `mawk` extension which disables buffering on stdout and sets line-buffered reads on
# stdin. The effect is that the output is flushed after each line, rather than being batched, which helps reduce
confusion due to interleaving of the different processes.
exec 1> >(awk -W interactive '{print "'"${SUPERVISOR_PROCESS_NAME}"' | "$0 }' >&1)
exec 2> >(awk -W interactive '{print "'"${SUPERVISOR_PROCESS_NAME}"' | "$0 }' >&2)
prefixer() {
mawk -W interactive '{printf("%s | %s\n", ENVIRON["SUPERVISOR_PROCESS_NAME"], $0); fflush() }'
}
exec 1> >(prefixer)
exec 2> >(prefixer >&2)
exec "$@"


@@ -46,6 +46,14 @@ to any local media, and any locally-cached copies of remote media.
The media file itself (and any thumbnails) is not deleted from the server.
Since Synapse 1.128.0, hashes of uploaded media are tracked. If this media
is quarantined, Synapse will:
- Quarantine any media with a matching hash that has already been uploaded.
- Quarantine any future media uploaded with a matching hash.
- Quarantine any existing cached remote media with a matching hash.
- Quarantine any future cached remote media with a matching hash.
## Quarantining media by ID
This API quarantines a single piece of local or remote media.
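For reference, quarantining by ID is a single authenticated `POST`; a minimal sketch in Python, assuming the standard `/_synapse/admin/v1/media/quarantine/<server_name>/<media_id>` admin endpoint and placeholder homeserver URL, token, and media ID:

```python
import requests

# Placeholders: substitute your homeserver, an admin access token,
# and the origin server / ID of the media to quarantine.
BASE_URL = "https://example.com"
ADMIN_TOKEN = "<admin-access-token>"

resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/media/quarantine/example.com/abcdefg12345",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()  # the API returns an empty JSON object on success
```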


@@ -150,6 +150,28 @@ $ poetry shell
$ poetry install --extras all
```
If you want to go even further and remove the Poetry caches:
```shell
# Find your Poetry cache directory
# Docs: https://github.com/python-poetry/poetry/blob/main/docs/configuration.md#cache-directory
$ poetry config cache-dir
# Remove packages from all cached repositories
$ poetry cache clear --all .
# Go completely nuclear and clear out everything Poetry cache related
# including the wheel artifacts which is not covered by the above command
# (see https://github.com/python-poetry/poetry/issues/10304)
#
# This is necessary in order to rebuild or fetch new wheels. For example, if you update
# the `icu` library on your system, you will need to rebuild the PyICU Python package
# in order to incorporate the correct dynamically linked library locations; otherwise you
# will run into errors like: `ImportError: libicui18n.so.75: cannot open shared object file: No such file or directory`
$ rm -rf $(poetry config cache-dir)
```
## ...run a command in the `poetry` virtualenv?
Use `poetry run cmd args` when you need the python virtualenv context.
@@ -187,7 +209,7 @@ useful.
## ...add a new dependency?
Either:
- manually update `pyproject.toml`; then `poetry lock --no-update`; or else
- manually update `pyproject.toml`; then `poetry lock`; or else
- `poetry add packagename`. See `poetry add --help`; note the `--dev`,
`--extras` and `--optional` flags in particular.
@@ -202,12 +224,12 @@ poetry remove packagename
```
ought to do the trick. Alternatively, manually update `pyproject.toml` and
`poetry lock --no-update`. Include the updated `pyproject.toml` and `poetry.lock`
`poetry lock`. Include the updated `pyproject.toml` and `poetry.lock`
files in your commit.
## ...update the version range for an existing dependency?
Best done by manually editing `pyproject.toml`, then `poetry lock --no-update`.
Best done by manually editing `pyproject.toml`, then `poetry lock`.
Include the updated `pyproject.toml` and `poetry.lock` in your commit.
## ...update a dependency in the locked environment?
@@ -233,7 +255,7 @@ poetry add packagename==1.2.3
# Get poetry to recompute the content-hash of pyproject.toml without changing
# the locked package versions.
poetry lock --no-update
poetry lock
```
Either way, include the updated `poetry.lock` file in your commit.


@@ -10,7 +10,7 @@ As an example, a SSO service may return the email address
to turn that into a displayname when creating a Matrix user for this individual.
It may choose `John Smith`, or `Smith, John [Example.com]` or any number of
variations. As each Synapse configuration may want something different, this is
where SAML mapping providers come into play.
where SSO mapping providers come into play.
SSO mapping providers are currently supported for OpenID and SAML SSO
configurations. Please see the details below for how to implement your own.


@@ -137,6 +137,24 @@ room_list_publication_rules:
[`room_list_publication_rules`]: usage/configuration/config_documentation.md#room_list_publication_rules
## Change of signing key expiry date for the Debian/Ubuntu package repository
Administrators using the Debian/Ubuntu packages from `packages.matrix.org`,
please be aware that we have recently updated the expiry date on the repository's GPG signing key,
but this change must be imported into your keyring.
If you have the `matrix-org-archive-keyring` package installed and it updates before the current key expires, this should
happen automatically.
Otherwise, if you see an error similar to `The following signatures were invalid: EXPKEYSIG F473DD4473365DE1`, you
will need to get a fresh copy of the keys. You can do so with:
```sh
sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
```
The old version of the key will expire on `2025-03-15`.
# Upgrading to v1.122.0
## Dropping support for PostgreSQL 11 and 12


@@ -160,7 +160,7 @@ Using the following curl command:
```console
curl -H 'Authorization: Bearer <access-token>' -X DELETE https://matrix.org/_matrix/client/r0/directory/room/<room-alias>
```
`<access-token>` - can be obtained in riot by looking in the riot settings, down the bottom is:
`<access-token>` - can be obtained in Element by opening All settings, clicking Help & About, and scrolling to the bottom, where you will find:
Access Token:\<click to reveal\>
`<room-alias>` - the room alias, e.g. #my_room:matrix.org; this may need to be URL encoded, for example %23my_room%3Amatrix.org
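A quick way to double-check the encoding, sketched in Python (any URL-encoding utility behaves the same):

```python
from urllib.parse import quote

# '#' and ':' are not safe in a URL path segment and must be percent-encoded.
alias = "#my_room:matrix.org"
print(quote(alias, safe=""))  # %23my_room%3Amatrix.org
```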


@@ -255,7 +255,7 @@ information.
^/_matrix/client/(r0|v3|unstable)/keys/changes$
^/_matrix/client/(r0|v3|unstable)/keys/claim$
^/_matrix/client/(r0|v3|unstable)/room_keys/
^/_matrix/client/(r0|v3|unstable)/keys/upload/
^/_matrix/client/(r0|v3|unstable)/keys/upload$
# Registration/login requests
^/_matrix/client/(api/v1|r0|v3|unstable)/login$
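The `keys/upload` pattern above swapped its trailing `/` for `$`: the old regex only matched sub-paths of the endpoint and missed the endpoint itself. A small illustration of the two anchors, assuming Python-flavoured regex matching as a sketch:

```python
import re

old = re.compile(r"^/_matrix/client/(r0|v3|unstable)/keys/upload/")
new = re.compile(r"^/_matrix/client/(r0|v3|unstable)/keys/upload$")

for path in ("/_matrix/client/v3/keys/upload", "/_matrix/client/v3/keys/upload/x"):
    print(path, bool(old.match(path)), bool(new.match(path)))
# /_matrix/client/v3/keys/upload    False True   <- the endpoint clients actually call
# /_matrix/client/v3/keys/upload/x  True  False
```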

poetry.lock generated

File diff suppressed because it is too large


@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.125.0"
version = "1.128.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -390,7 +390,7 @@ skip = "cp36* cp37* cp38* pp37* pp38* *-musllinux_i686 pp*aarch64 *-musllinux_aa
#
# We temporarily pin Rust to 1.82.0 to work around
# https://github.com/element-hq/synapse/issues/17988
before-all = "curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain 1.82.0 -y --profile minimal"
before-all = "sh .ci/before_build_wheel.sh"
environment= { PATH = "$PATH:$HOME/.cargo/bin" }
# For some reason if we don't manually clean the build directory we


@@ -30,7 +30,7 @@ http = "1.1.0"
lazy_static = "1.4.0"
log = "0.4.17"
mime = "0.3.17"
pyo3 = { version = "0.23.2", features = [
pyo3 = { version = "0.23.5", features = [
"macros",
"anyhow",
"abi3",


@@ -592,7 +592,7 @@ def _wait_for_actions(gh_token: Optional[str]) -> None:
if all(
workflow["status"] != "in_progress" for workflow in resp["workflow_runs"]
):
success = (
success = all(
workflow["status"] == "completed" for workflow in resp["workflow_runs"]
)
if success:
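This change fixes a classic Python pitfall: previously the parenthesised generator expression itself was assigned to `success`, and a generator object is always truthy, so failed workflows were never detected. A minimal reproduction:

```python
runs = [{"status": "completed"}, {"status": "failure"}]

success = (r["status"] == "completed" for r in runs)  # a generator object...
print(bool(success))  # True, regardless of the statuses

success = all(r["status"] == "completed" for r in runs)
print(success)  # False, as intended
```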


@@ -128,6 +128,7 @@ BOOLEAN_COLUMNS = {
"pushers": ["enabled"],
"redactions": ["have_censored"],
"remote_media_cache": ["authenticated"],
"room_memberships": ["participant"],
"room_stats_state": ["is_federatable"],
"rooms": ["is_public", "has_auth_chain_index"],
"sliding_sync_joined_rooms": ["is_encrypted"],
@@ -194,7 +195,7 @@ IGNORED_TABLES = {
# Porting the auto generated sequence in this table is non-trivial.
# None of the entries in this list are mandatory for Synapse to keep working.
# If state group disk space is an issue after the port, the
# `delete_unreferenced_state_groups_bg_update` background task can be run again.
# `mark_unreferenced_state_groups_for_deletion_bg_update` background task can be run again.
"state_groups_pending_deletion",
# We don't port these tables, as they're a faff and we can regenerate
# them anyway.
@@ -226,7 +227,7 @@ IGNORED_BACKGROUND_UPDATES = {
# Reapplying this background update to the postgres database is unnecessary after
# already having waited for the SQLite database to complete all running background
# updates.
"delete_unreferenced_state_groups_bg_update",
"mark_unreferenced_state_groups_for_deletion_bg_update",
}


@@ -19,6 +19,7 @@
#
#
import logging
from dataclasses import dataclass
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional
from urllib.parse import urlencode
@@ -47,6 +48,7 @@ from synapse.logging.context import make_deferred_yieldable
from synapse.types import Requester, UserID, create_requester
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache
if TYPE_CHECKING:
from synapse.rest.admin.experimental_features import ExperimentalFeature
@@ -76,6 +78,61 @@ def scope_to_list(scope: str) -> List[str]:
return scope.strip().split(" ")
@dataclass
class IntrospectionResult:
_inner: IntrospectionToken
# when we retrieved this token,
# in milliseconds since the Unix epoch
retrieved_at_ms: int
def is_active(self, now_ms: int) -> bool:
if not self._inner.get("active"):
return False
expires_in = self._inner.get("expires_in")
if expires_in is None:
return True
if not isinstance(expires_in, int):
raise InvalidClientTokenError("token `expires_in` is not an int")
absolute_expiry_ms = expires_in * 1000 + self.retrieved_at_ms
return now_ms < absolute_expiry_ms
def get_scope_list(self) -> List[str]:
value = self._inner.get("scope")
if not isinstance(value, str):
return []
return scope_to_list(value)
def get_sub(self) -> Optional[str]:
value = self._inner.get("sub")
if not isinstance(value, str):
return None
return value
def get_username(self) -> Optional[str]:
value = self._inner.get("username")
if not isinstance(value, str):
return None
return value
def get_name(self) -> Optional[str]:
value = self._inner.get("name")
if not isinstance(value, str):
return None
return value
def get_device_id(self) -> Optional[str]:
value = self._inner.get("device_id")
if value is not None and not isinstance(value, str):
raise AuthError(
500,
"Invalid device ID in introspection result",
)
return value
class PrivateKeyJWTWithKid(PrivateKeyJWT): # type: ignore[misc]
"""An implementation of the private_key_jwt client auth method that includes a kid header.
@@ -121,6 +178,31 @@ class MSC3861DelegatedAuth(BaseAuth):
self._hostname = hs.hostname
self._admin_token: Callable[[], Optional[str]] = self._config.admin_token
# # Token Introspection Cache
# This remembers what users/devices are represented by which access tokens,
# in order to reduce overall system load:
# - on Synapse (as requests are relatively expensive)
# - on the network
# - on MAS
#
# Since there is no invalidation mechanism currently,
# the entries expire after 2 minutes.
# This does mean tokens can be treated as valid by Synapse
# for longer than reality.
#
# Ideally, tokens should logically be invalidated in the following circumstances:
# - If a session logout happens.
# In this case, MAS will delete the device within Synapse
# anyway and this is good enough as an invalidation.
# - If the client refreshes their token in MAS.
# In this case, the device still exists and it's not the end of the world for
# the old access token to continue working for a short time.
self._introspection_cache: ResponseCache[str] = ResponseCache(
self._clock,
"token_introspection",
timeout_ms=120_000,
)
self._issuer_metadata = RetryOnExceptionCachedCall[OpenIDProviderMetadata](
self._load_metadata
)
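For intuition, the behaviour relied on here can be modelled with a plain-asyncio stand-in (an illustrative toy, not Synapse's actual `ResponseCache` implementation): concurrent lookups for the same key share one in-flight call, and the result is remembered for the TTL.

```python
import asyncio
import time
from typing import Awaitable, Callable, Dict, Tuple

class TTLResponseCache:
    """Toy model: one in-flight call per key, results kept for `ttl` seconds."""

    def __init__(self, ttl: float) -> None:
        self._ttl = ttl
        self._entries: Dict[str, Tuple[float, "asyncio.Task[str]"]] = {}

    async def wrap(self, key: str, fn: Callable[[str], Awaitable[str]], arg: str) -> str:
        now = time.monotonic()
        entry = self._entries.get(key)
        if entry is None or now >= entry[0]:
            # Nothing cached (or entry expired): start one shared call for this key.
            entry = (now + self._ttl, asyncio.ensure_future(fn(arg)))
            self._entries[key] = entry
        return await asyncio.shield(entry[1])

CALLS = 0

async def introspect(token: str) -> str:
    global CALLS
    CALLS += 1
    await asyncio.sleep(0.05)  # stand-in for the round-trip to MAS
    return f"introspection-of-{token}"

async def main() -> None:
    cache = TTLResponseCache(ttl=120.0)
    # Five concurrent requests bearing the same token -> a single introspection.
    await asyncio.gather(*(cache.wrap("tok", introspect, "tok") for _ in range(5)))
    print(CALLS)  # 1

asyncio.run(main())
```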
@@ -193,7 +275,7 @@ class MSC3861DelegatedAuth(BaseAuth):
metadata = await self._issuer_metadata.get()
return metadata.get("introspection_endpoint")
async def _introspect_token(self, token: str) -> IntrospectionToken:
async def _introspect_token(self, token: str) -> IntrospectionResult:
"""
Send a token to the introspection endpoint and return the introspection response
@@ -266,7 +348,9 @@ class MSC3861DelegatedAuth(BaseAuth):
"The introspection endpoint returned an invalid JSON response."
)
return IntrospectionToken(**resp)
return IntrospectionResult(
IntrospectionToken(**resp), retrieved_at_ms=self._clock.time_msec()
)
async def is_server_admin(self, requester: Requester) -> bool:
return "urn:synapse:admin:*" in requester.scope
@@ -344,7 +428,9 @@ class MSC3861DelegatedAuth(BaseAuth):
)
try:
introspection_result = await self._introspect_token(token)
introspection_result = await self._introspection_cache.wrap(
token, self._introspect_token, token
)
except Exception:
logger.exception("Failed to introspect token")
raise SynapseError(503, "Unable to introspect the access token")
@@ -353,11 +439,11 @@ class MSC3861DelegatedAuth(BaseAuth):
# TODO: introspection verification should be more extensive, especially:
# - verify the audience
if not introspection_result.get("active"):
if not introspection_result.is_active(self._clock.time_msec()):
raise InvalidClientTokenError("Token is not active")
# Let's look at the scope
scope: List[str] = scope_to_list(introspection_result.get("scope", ""))
scope: List[str] = introspection_result.get_scope_list()
# Determine type of user based on presence of particular scopes
has_user_scope = SCOPE_MATRIX_API in scope
@@ -367,7 +453,7 @@ class MSC3861DelegatedAuth(BaseAuth):
raise InvalidClientTokenError("No scope in token granting user rights")
# Match via the sub claim
sub: Optional[str] = introspection_result.get("sub")
sub: Optional[str] = introspection_result.get_sub()
if sub is None:
raise InvalidClientTokenError(
"Invalid sub claim in the introspection result"
@@ -381,7 +467,7 @@ class MSC3861DelegatedAuth(BaseAuth):
# or the external_id was never recorded
# TODO: claim mapping should be configurable
username: Optional[str] = introspection_result.get("username")
username: Optional[str] = introspection_result.get_username()
if username is None or not isinstance(username, str):
raise AuthError(
500,
@@ -399,7 +485,7 @@ class MSC3861DelegatedAuth(BaseAuth):
# TODO: claim mapping should be configurable
# If present, use the name claim as the displayname
name: Optional[str] = introspection_result.get("name")
name: Optional[str] = introspection_result.get_name()
await self.store.register_user(
user_id=user_id.to_string(), create_profile_with_displayname=name
@@ -414,15 +500,8 @@ class MSC3861DelegatedAuth(BaseAuth):
# MAS 0.15+ will give us the device ID as an explicit value for compatibility sessions
# If present, we get it from here; if not, we get it from the scope
device_id = introspection_result.get("device_id")
if device_id is not None:
# We got the device ID explicitly, just sanity check that it's a string
if not isinstance(device_id, str):
raise AuthError(
500,
"Invalid device ID in introspection result",
)
else:
device_id = introspection_result.get_device_id()
if device_id is None:
# Find device_ids in scope
# We only allow a single device_id in the scope, so we find them all in the
# scope list, and raise if there are more than one. The OIDC server should be


@@ -29,8 +29,13 @@ from typing import Final
# the max size of a (canonical-json-encoded) event
MAX_PDU_SIZE = 65536
# the "depth" field on events is limited to 2**63 - 1
MAX_DEPTH = 2**63 - 1
# Max/min size of ints in canonical JSON
CANONICALJSON_MAX_INT = (2**53) - 1
CANONICALJSON_MIN_INT = -CANONICALJSON_MAX_INT
# the "depth" field on events is limited to the same as what
# canonicaljson accepts
MAX_DEPTH = CANONICALJSON_MAX_INT
# the maximum length for a room alias is 255 characters
MAX_ALIAS_LENGTH = 255
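The new `MAX_DEPTH` bound above exists because canonical JSON only permits integers that survive an IEEE-754 double round-trip; 2**53 is the first integer where that guarantee breaks, as a quick check shows:

```python
CANONICALJSON_MAX_INT = (2**53) - 1

# Doubles carry a 53-bit significand, so 2**53 is the first integer whose
# neighbour collapses onto the same float value.
print(float(2**53) == float(2**53 + 1))   # True - precision lost
print(float(2**53 - 1) == float(2**53))   # False - still exact
```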


@@ -22,7 +22,6 @@
import abc
import collections.abc
import os
from typing import (
TYPE_CHECKING,
Any,
@@ -48,21 +47,21 @@ from synapse.synapse_rust.events import EventInternalMetadata
from synapse.types import JsonDict, StrCollection
from synapse.util.caches import intern_dict
from synapse.util.frozenutils import freeze
from synapse.util.stringutils import strtobool
if TYPE_CHECKING:
from synapse.events.builder import EventBuilder
# Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
# bugs where we accidentally share e.g. signature dicts. However, converting a
# dict to frozen_dicts is expensive.
#
# NOTE: This is overridden by the configuration by the Synapse worker apps, but
# for the sake of tests, it is set here while it cannot be configured on the
# homeserver object itself.
USE_FROZEN_DICTS = strtobool(os.environ.get("SYNAPSE_USE_FROZEN_DICTS", "0"))
USE_FROZEN_DICTS = False
"""
Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
bugs where we accidentally share e.g. signature dicts. However, converting a
dict to frozen_dicts is expensive.
NOTE: This is overridden by the configuration by the Synapse worker apps, but
for the sake of tests, it is set here because it cannot be configured on the
homeserver object itself.
"""
T = TypeVar("T")


@@ -40,6 +40,8 @@ import attr
from canonicaljson import encode_canonical_json
from synapse.api.constants import (
CANONICALJSON_MAX_INT,
CANONICALJSON_MIN_INT,
MAX_PDU_SIZE,
EventContentFields,
EventTypes,
@@ -61,9 +63,6 @@ SPLIT_FIELD_REGEX = re.compile(r"\\*\.")
# Find escaped characters, e.g. those with a \ in front of them.
ESCAPE_SEQUENCE_PATTERN = re.compile(r"\\(.)")
CANONICALJSON_MAX_INT = (2**53) - 1
CANONICALJSON_MIN_INT = -CANONICALJSON_MAX_INT
# Module API callback that allows adding fields to the unsigned section of
# events that are sent to clients.


@@ -86,9 +86,7 @@ class EventValidator:
# Depending on the room version, ensure the data is spec compliant JSON.
if event.room_version.strict_canonicaljson:
# Note that only the client controlled portion of the event is
# checked, since we trust the portions of the event we created.
validate_canonicaljson(event.content)
validate_canonicaljson(event.get_pdu_json())
if event.type == EventTypes.Aliases:
if "aliases" in event.content:


@@ -20,7 +20,7 @@
#
#
import logging
from typing import TYPE_CHECKING, Awaitable, Callable, Optional
from typing import TYPE_CHECKING, Awaitable, Callable, List, Optional, Sequence
from synapse.api.constants import MAX_DEPTH, EventContentFields, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
@@ -29,6 +29,7 @@ from synapse.crypto.event_signing import check_event_content_hash
from synapse.crypto.keyring import Keyring
from synapse.events import EventBase, make_event_from_dict
from synapse.events.utils import prune_event, validate_canonicaljson
from synapse.federation.units import filter_pdus_for_valid_depth
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.opentracing import log_kv, trace
from synapse.types import JsonDict, get_domain_from_id
@@ -267,6 +268,15 @@ def _is_invite_via_3pid(event: EventBase) -> bool:
)
def parse_events_from_pdu_json(
pdus_json: Sequence[JsonDict], room_version: RoomVersion
) -> List[EventBase]:
return [
event_from_pdu_json(pdu_json, room_version)
for pdu_json in filter_pdus_for_valid_depth(pdus_json)
]
def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventBase:
"""Construct an EventBase from an event json received over federation


@@ -68,6 +68,7 @@ from synapse.federation.federation_base import (
FederationBase,
InvalidEventSignatureError,
event_from_pdu_json,
parse_events_from_pdu_json,
)
from synapse.federation.transport.client import SendJoinResponse
from synapse.http.client import is_unknown_endpoint
@@ -349,7 +350,7 @@ class FederationClient(FederationBase):
room_version = await self.store.get_room_version(room_id)
pdus = [event_from_pdu_json(p, room_version) for p in transaction_data_pdus]
pdus = parse_events_from_pdu_json(transaction_data_pdus, room_version)
# Check signatures and hash of pdus, removing any from the list that fail checks
pdus[:] = await self._check_sigs_and_hash_for_pulled_events_and_fetch(
@@ -393,9 +394,7 @@ class FederationClient(FederationBase):
transaction_data,
)
pdu_list: List[EventBase] = [
event_from_pdu_json(p, room_version) for p in transaction_data["pdus"]
]
pdu_list = parse_events_from_pdu_json(transaction_data["pdus"], room_version)
if pdu_list and pdu_list[0]:
pdu = pdu_list[0]
@@ -809,7 +808,7 @@ class FederationClient(FederationBase):
room_version = await self.store.get_room_version(room_id)
auth_chain = [event_from_pdu_json(p, room_version) for p in res["auth_chain"]]
auth_chain = parse_events_from_pdu_json(res["auth_chain"], room_version)
signed_auth = await self._check_sigs_and_hash_for_pulled_events_and_fetch(
destination, auth_chain, room_version=room_version
@@ -1529,9 +1528,7 @@ class FederationClient(FederationBase):
room_version = await self.store.get_room_version(room_id)
events = [
event_from_pdu_json(e, room_version) for e in content.get("events", [])
]
events = parse_events_from_pdu_json(content.get("events", []), room_version)
signed_events = await self._check_sigs_and_hash_for_pulled_events_and_fetch(
destination, events, room_version=room_version


@@ -66,7 +66,7 @@ from synapse.federation.federation_base import (
event_from_pdu_json,
)
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
from synapse.federation.units import Edu, Transaction, serialize_and_filter_pdus
from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import (
@@ -469,7 +469,12 @@ class FederationServer(FederationBase):
logger.info("Ignoring PDU: %s", e)
continue
event = event_from_pdu_json(p, room_version)
try:
event = event_from_pdu_json(p, room_version)
except SynapseError as e:
logger.info("Ignoring PDU for failing to deserialize: %s", e)
continue
pdus_by_room.setdefault(room_id, []).append(event)
if event.origin_server_ts > newest_pdu_ts:
@@ -636,8 +641,8 @@ class FederationServer(FederationBase):
)
return {
"pdus": [pdu.get_pdu_json() for pdu in pdus],
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
"pdus": serialize_and_filter_pdus(pdus),
"auth_chain": serialize_and_filter_pdus(auth_chain),
}
async def on_pdu_request(
@@ -761,8 +766,8 @@ class FederationServer(FederationBase):
event_json = event.get_pdu_json(time_now)
resp = {
"event": event_json,
"state": [p.get_pdu_json(time_now) for p in state_events],
"auth_chain": [p.get_pdu_json(time_now) for p in auth_chain_events],
"state": serialize_and_filter_pdus(state_events, time_now),
"auth_chain": serialize_and_filter_pdus(auth_chain_events, time_now),
"members_omitted": caller_supports_partial_state,
}
@@ -1005,7 +1010,7 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
auth_pdus = await self.handler.on_event_auth(event_id)
res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
res = {"auth_chain": serialize_and_filter_pdus(auth_pdus, time_now)}
return 200, res
async def on_query_client_keys(
@@ -1090,7 +1095,7 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
return {"events": serialize_and_filter_pdus(missing_events, time_now)}
async def on_openid_userinfo(self, token: str) -> Optional[str]:
ts_now_ms = self._clock.time_msec()


@@ -24,10 +24,12 @@ server protocol.
"""
import logging
from typing import List, Optional
from typing import List, Optional, Sequence
import attr
from synapse.api.constants import CANONICALJSON_MAX_INT, CANONICALJSON_MIN_INT
from synapse.events import EventBase
from synapse.types import JsonDict
logger = logging.getLogger(__name__)
@@ -104,8 +106,28 @@ class Transaction:
result = {
"origin": self.origin,
"origin_server_ts": self.origin_server_ts,
"pdus": self.pdus,
"pdus": filter_pdus_for_valid_depth(self.pdus),
}
if self.edus:
result["edus"] = self.edus
return result
def filter_pdus_for_valid_depth(pdus: Sequence[JsonDict]) -> List[JsonDict]:
filtered_pdus = []
for pdu in pdus:
# Drop PDUs that have a depth that is outside of the range allowed
# by canonical json.
if (
"depth" in pdu
and CANONICALJSON_MIN_INT <= pdu["depth"] <= CANONICALJSON_MAX_INT
):
filtered_pdus.append(pdu)
return filtered_pdus
def serialize_and_filter_pdus(
pdus: Sequence[EventBase], time_now: Optional[int] = None
) -> List[JsonDict]:
return filter_pdus_for_valid_depth([pdu.get_pdu_json(time_now) for pdu in pdus])
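The effect of the filter above, sketched with hypothetical PDUs: anything whose `depth` is missing or outside the canonical-JSON integer range is dropped before serialisation.

```python
pdus = [
    {"event_id": "$a", "depth": 42},
    {"event_id": "$b", "depth": 2**63 - 1},  # the old MAX_DEPTH, now rejected
    {"event_id": "$c"},                      # no depth field at all
]

kept = [
    p for p in pdus
    if "depth" in p and -(2**53 - 1) <= p["depth"] <= (2**53 - 1)
]
print([p["event_id"] for p in kept])  # ['$a']
```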


@@ -191,18 +191,36 @@ class DelayedEventsHandler:
async def _handle_state_deltas(self, deltas: List[StateDelta]) -> None:
"""
Process current state deltas to cancel pending delayed events
Process current state deltas to cancel other users' pending delayed events
that target the same state.
"""
for delta in deltas:
if delta.event_id is None:
logger.debug(
"Not handling delta for deleted state: %r %r",
delta.event_type,
delta.state_key,
)
continue
logger.debug(
"Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
)
event = await self._store.get_event(
delta.event_id, check_room_id=delta.room_id
)
sender = UserID.from_string(event.sender)
next_send_ts = await self._store.cancel_delayed_state_events(
room_id=delta.room_id,
event_type=delta.event_type,
state_key=delta.state_key,
not_from_localpart=(
sender.localpart
if sender.domain == self._config.server.server_name
else ""
),
)
if self._next_send_ts_changed(next_send_ts):


@@ -1462,6 +1462,12 @@ class EventCreationHandler:
)
return prev_event
if not event.is_state() and event.type in [
EventTypes.Message,
EventTypes.Encrypted,
]:
await self.store.set_room_participation(event.user_id, event.room_id)
if event.internal_metadata.is_out_of_band_membership():
# the only sort of out-of-band-membership events we expect to see here are
# invite rejections and rescinded knocks that we have generated ourselves.


@@ -118,6 +118,9 @@ DEFAULT_MAX_TIMEOUT_MS = 20_000
# Maximum allowed timeout_ms for download and thumbnail requests
MAXIMUM_ALLOWED_MAX_TIMEOUT_MS = 60_000
# The ETag header value to use for immutable media. This can be anything.
_IMMUTABLE_ETAG = "1"
def respond_404(request: SynapseRequest) -> None:
assert request.path is not None
@@ -224,12 +227,7 @@ def add_file_headers(
request.setHeader(b"Content-Disposition", disposition.encode("ascii"))
# cache for at least a day.
# XXX: we might want to turn this off for data we don't want to
# recommend caching as it's sensitive or private - or at least
# select private. don't bother setting Expires as all our
# clients are smart enough to be happy with Cache-Control
request.setHeader(b"Cache-Control", b"public,max-age=86400,s-maxage=86400")
_add_cache_headers(request)
if file_size is not None:
request.setHeader(b"Content-Length", b"%d" % (file_size,))
@@ -240,6 +238,26 @@ def add_file_headers(
request.setHeader(b"X-Robots-Tag", "noindex, nofollow, noarchive, noimageindex")
def _add_cache_headers(request: Request) -> None:
"""Adds the appropriate cache headers to the response"""
# Cache on the client for at least a day.
#
# We set this to "public,s-maxage=0,proxy-revalidate" to allow CDNs to cache
# the media, so long as they "revalidate" the media on every request. By
# revalidate, we mean send the request to Synapse with a `If-None-Match`
# header, to which Synapse can either respond with a 304 if the user is
# authenticated/authorized, or a 401/403 if they're not.
request.setHeader(
b"Cache-Control", b"public,max-age=86400,s-maxage=0,proxy-revalidate"
)
# Set an ETag header to allow requesters to use it in requests to check if
# the cache is still valid. Since media is immutable (though may be
# deleted), we just set this to a constant.
request.setHeader(b"ETag", _IMMUTABLE_ETAG)
# separators as defined in RFC2616. SP and HT are handled separately.
# see _can_encode_filename_as_token.
_FILENAME_SEPARATOR_CHARS = {
@@ -336,13 +354,15 @@ async def respond_with_multipart_responder(
from synapse.media.media_storage import MultipartFileConsumer
_add_cache_headers(request)
# note that currently the json_object is just {}, this will change when linked media
# is implemented
multipart_consumer = MultipartFileConsumer(
clock,
request,
media_type,
{},
{}, # Note: if we change this we need to change the returned ETag.
disposition,
media_length,
)
@@ -419,6 +439,46 @@ async def respond_with_responder(
finish_request(request)
def respond_with_304(request: SynapseRequest) -> None:
request.setResponseCode(304)
# could alternatively use request.notifyFinish() and flip a flag when
# the Deferred fires, but since the flag is RIGHT THERE it seems like
# a waste.
if request._disconnected:
logger.warning(
"Not sending response to request %s, already disconnected.", request
)
return None
_add_cache_headers(request)
request.finish()
def check_for_cached_entry_and_respond(request: SynapseRequest) -> bool:
"""Check if the request has a conditional header that allows us to return a
304 Not Modified response, and if it does, return a 304 response.
This handles clients and intermediary proxies caching media.
This method assumes that the user has already been
authorised to request the media.
Returns True if we have responded."""
# We've checked the user has access to the media, so we now check if it
# is a "conditional request" and we can just return a `304 Not Modified`
# response. Since media is immutable (though may be deleted), we just
# check this is the expected constant.
etag = request.getHeader("If-None-Match")
if etag == _IMMUTABLE_ETAG:
# Return a `304 Not modified`.
respond_with_304(request)
return True
return False
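Putting the two helpers together, the revalidation round-trip looks roughly like this (a sketch with hypothetical request/response shapes, not Synapse's actual `Request` API): the first response carries `ETag: 1`, a CDN replays it as `If-None-Match`, and an authorised requester gets a body-less `304`.

```python
IMMUTABLE_ETAG = "1"

def handle_media_request(headers: dict, authorised: bool) -> int:
    """Toy model of the conditional-request logic."""
    if not authorised:
        return 403  # auth is always checked before the cache shortcut
    if headers.get("If-None-Match") == IMMUTABLE_ETAG:
        return 304  # cached copy is still valid; no body is sent
    return 200      # full response, with ETag and Cache-Control set

print(handle_media_request({}, authorised=True))                      # 200
print(handle_media_request({"If-None-Match": "1"}, authorised=True))  # 304
print(handle_media_request({"If-None-Match": "1"}, authorised=False)) # 403
```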
class Responder(ABC):
"""Represents a response that can be streamed to the requester.


@@ -52,13 +52,18 @@ from synapse.media._base import (
FileInfo,
Responder,
ThumbnailInfo,
check_for_cached_entry_and_respond,
get_filename_from_headers,
respond_404,
respond_with_multipart_responder,
respond_with_responder,
)
from synapse.media.filepath import MediaFilePaths
from synapse.media.media_storage import MediaStorage
from synapse.media.media_storage import (
MediaStorage,
SHA256TransparentIOReader,
SHA256TransparentIOWriter,
)
from synapse.media.storage_provider import StorageProviderWrapper
from synapse.media.thumbnailer import Thumbnailer, ThumbnailError
from synapse.media.url_previewer import UrlPreviewer
@@ -300,15 +305,26 @@ class MediaRepository:
auth_user: The user_id of the uploader
"""
file_info = FileInfo(server_name=None, file_id=media_id)
fname = await self.media_storage.store_file(content, file_info)
sha256reader = SHA256TransparentIOReader(content)
# This implements all of IO as it has a passthrough
fname = await self.media_storage.store_file(sha256reader.wrap(), file_info)
sha256 = sha256reader.hexdigest()
should_quarantine = await self.store.get_is_hash_quarantined(sha256)
logger.info("Stored local media in file %r", fname)
if should_quarantine:
logger.warning(
"Media has been automatically quarantined as it matched existing quarantined media"
)
await self.store.update_local_media(
media_id=media_id,
media_type=media_type,
upload_name=upload_name,
media_length=content_length,
user_id=auth_user,
sha256=sha256,
quarantined_by="system" if should_quarantine else None,
)
try:
@@ -341,11 +357,19 @@ class MediaRepository:
media_id = random_string(24)
file_info = FileInfo(server_name=None, file_id=media_id)
fname = await self.media_storage.store_file(content, file_info)
# This implements all of IO as it has a passthrough
sha256reader = SHA256TransparentIOReader(content)
fname = await self.media_storage.store_file(sha256reader.wrap(), file_info)
sha256 = sha256reader.hexdigest()
should_quarantine = await self.store.get_is_hash_quarantined(sha256)
logger.info("Stored local media in file %r", fname)
if should_quarantine:
logger.warning(
"Media has been automatically quarantined as it matched existing quarantined media"
)
await self.store.store_local_media(
media_id=media_id,
media_type=media_type,
@@ -353,6 +377,9 @@ class MediaRepository:
upload_name=upload_name,
media_length=content_length,
user_id=auth_user,
sha256=sha256,
# TODO: Better name?
quarantined_by="system" if should_quarantine else None,
)
try:
@@ -459,6 +486,11 @@ class MediaRepository:
self.mark_recently_accessed(None, media_id)
# Once we've checked auth we can return early if the media is cached on
# the client
if check_for_cached_entry_and_respond(request):
return
media_type = media_info.media_type
if not media_type:
media_type = "application/octet-stream"
@@ -538,6 +570,17 @@ class MediaRepository:
allow_authenticated,
)
# Check if the media is cached on the client, if so return 304. We need
# to do this after we have fetched remote media, as we need it to do the
# auth.
if check_for_cached_entry_and_respond(request):
# We always need to use the responder.
if responder:
with responder:
pass
return
# We deliberately stream the file outside the lock
if responder and media_info:
upload_name = name if name else media_info.upload_name
@@ -739,11 +782,13 @@ class MediaRepository:
file_info = FileInfo(server_name=server_name, file_id=file_id)
async with self.media_storage.store_into_file(file_info) as (f, fname):
sha256writer = SHA256TransparentIOWriter(f)
try:
length, headers = await self.client.download_media(
server_name,
media_id,
output_stream=f,
# The wrapper behaves as a full BinaryIO object, since unhandled attributes pass through to the source
output_stream=sha256writer.wrap(),
max_size=self.max_upload_size,
max_timeout_ms=max_timeout_ms,
download_ratelimiter=download_ratelimiter,
@@ -808,6 +853,7 @@ class MediaRepository:
upload_name=upload_name,
media_length=length,
filesystem_id=file_id,
sha256=sha256writer.hexdigest(),
)
logger.info("Stored remote media in file %r", fname)
@@ -828,6 +874,7 @@ class MediaRepository:
last_access_ts=time_now_ms,
quarantined_by=None,
authenticated=authenticated,
sha256=sha256writer.hexdigest(),
)
async def _federation_download_remote_file(
@@ -862,11 +909,13 @@ class MediaRepository:
file_info = FileInfo(server_name=server_name, file_id=file_id)
async with self.media_storage.store_into_file(file_info) as (f, fname):
sha256writer = SHA256TransparentIOWriter(f)
try:
res = await self.client.federation_download_media(
server_name,
media_id,
output_stream=f,
# The wrapper behaves as a full BinaryIO object, since unhandled attributes pass through to the source
output_stream=sha256writer.wrap(),
max_size=self.max_upload_size,
max_timeout_ms=max_timeout_ms,
download_ratelimiter=download_ratelimiter,
@@ -937,6 +986,7 @@ class MediaRepository:
upload_name=upload_name,
media_length=length,
filesystem_id=file_id,
sha256=sha256writer.hexdigest(),
)
logger.debug("Stored remote media in file %r", fname)
@@ -957,6 +1007,7 @@ class MediaRepository:
last_access_ts=time_now_ms,
quarantined_by=None,
authenticated=authenticated,
sha256=sha256writer.hexdigest(),
)
def _get_thumbnail_requirements(

View File

@@ -19,6 +19,7 @@
#
#
import contextlib
import hashlib
import json
import logging
import os
@@ -70,6 +71,88 @@ logger = logging.getLogger(__name__)
CRLF = b"\r\n"
class SHA256TransparentIOWriter:
"""Will generate a SHA256 hash from a source stream transparently.
Args:
source: Source stream.
"""
def __init__(self, source: BinaryIO):
self._hash = hashlib.sha256()
self._source = source
def write(self, buffer: Union[bytes, bytearray]) -> int:
"""Wrapper for source.write()
Args:
buffer
Returns:
the value of source.write()
"""
res = self._source.write(buffer)
self._hash.update(buffer)
return res
def hexdigest(self) -> str:
"""The digest of the written or read value.
Returns:
The digest in hex format.
"""
return self._hash.hexdigest()
def wrap(self) -> BinaryIO:
# This class implements a subset of the BinaryIO interface and passes everything else through via __getattr__
return cast(BinaryIO, self)
# Passthrough any other calls
def __getattr__(self, attr_name: str) -> Any:
return getattr(self._source, attr_name)
class SHA256TransparentIOReader:
"""Will generate a SHA256 hash from a source stream transparently.
Args:
source: Source IO stream.
"""
def __init__(self, source: IO):
self._hash = hashlib.sha256()
self._source = source
def read(self, n: int = -1) -> bytes:
"""Wrapper for source.read()
Args:
n
Returns:
the value of source.read()
"""
data = self._source.read(n)
self._hash.update(data)
return data
def hexdigest(self) -> str:
"""The digest of the written or read value.
Returns:
The digest in hex format.
"""
return self._hash.hexdigest()
def wrap(self) -> IO:
# This class implements a subset of the IO interface and passes everything else through via __getattr__
return cast(IO, self)
# Passthrough any other calls
def __getattr__(self, attr_name: str) -> Any:
return getattr(self._source, attr_name)
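A minimal usage sketch of the two wrappers above, assuming they are imported from synapse.media.media_storage: the digest falls out of ordinary reads and writes, and matches hashing the bytes directly.

import hashlib
import io

payload = b"file_to_stream"

reader = SHA256TransparentIOReader(io.BytesIO(payload))
assert reader.wrap().read() == payload  # passthrough read
assert reader.hexdigest() == hashlib.sha256(payload).hexdigest()

sink = io.BytesIO()
writer = SHA256TransparentIOWriter(sink)
writer.wrap().write(payload)  # passthrough write
assert sink.getvalue() == payload
assert writer.hexdigest() == reader.hexdigest()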
class MediaStorage:
"""Responsible for storing/fetching files from local sources.
@@ -107,7 +190,6 @@ class MediaStorage:
Returns:
the file path written to in the primary media store
"""
async with self.store_into_file(file_info) as (f, fname):
# Write to the main media repository
await self.write_to_file(source, f)

View File

@@ -34,6 +34,7 @@ from synapse.logging.opentracing import trace
from synapse.media._base import (
FileInfo,
ThumbnailInfo,
check_for_cached_entry_and_respond,
respond_404,
respond_with_file,
respond_with_multipart_responder,
@@ -294,6 +295,11 @@ class ThumbnailProvider:
if media_info.authenticated:
raise NotFoundError()
# Once we've checked auth we can return early if the media is cached on
# the client
if check_for_cached_entry_and_respond(request):
return
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
await self._select_and_respond_with_thumbnail(
request,
@@ -334,6 +340,11 @@ class ThumbnailProvider:
if media_info.authenticated:
raise NotFoundError()
# Once we've checked auth we can return early if the media is cached on
# the client
if check_for_cached_entry_and_respond(request):
return
thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
for info in thumbnail_infos:
t_w = info.width == desired_width
@@ -431,6 +442,10 @@ class ThumbnailProvider:
respond_404(request)
return
# Check if the media is cached on the client, if so return 304.
if check_for_cached_entry_and_respond(request):
return
thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
)
@@ -510,6 +525,10 @@ class ThumbnailProvider:
if media_info.authenticated:
raise NotFoundError()
# Check if the media is cached on the client, if so return 304.
if check_for_cached_entry_and_respond(request):
return
thumbnail_infos = await self.store.get_remote_media_thumbnails(
server_name, media_id
)

View File

@@ -25,6 +25,7 @@ from typing import (
TYPE_CHECKING,
Collection,
Mapping,
Optional,
Set,
)
@@ -52,7 +53,7 @@ class PurgeEventsStorageController:
)
self.stores.state.db_pool.updates.register_background_update_handler(
_BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE,
_BackgroundUpdates.MARK_UNREFERENCED_STATE_GROUPS_FOR_DELETION_BG_UPDATE,
self._background_delete_unrefereneced_state_groups,
)
@@ -92,6 +93,69 @@ class PurgeEventsStorageController:
sg_to_delete
)
async def _find_unreferenced_groups(
self,
state_groups: Collection[int],
) -> Set[int]:
"""Used when purging history to figure out which state groups can be
deleted.
Args:
state_groups: Set of state groups referenced by events
that are going to be deleted.
Returns:
The set of state groups that can be deleted.
"""
# Set of events that we have found to be referenced by events
referenced_groups = set()
# Set of state groups we've already seen
state_groups_seen = set(state_groups)
# Set of state groups to handle next.
next_to_search = set(state_groups)
while next_to_search:
# We bound the number of groups we look up at once, to stop the
# SQL query getting too big
if len(next_to_search) < 100:
current_search = next_to_search
next_to_search = set()
else:
current_search = set(itertools.islice(next_to_search, 100))
next_to_search -= current_search
referenced = await self.stores.main.get_referenced_state_groups(
current_search
)
referenced_groups |= referenced
# We don't continue iterating up the state group graphs for state
# groups that are referenced.
current_search -= referenced
edges = await self.stores.state.get_previous_state_groups(current_search)
prevs = set(edges.values())
# We don't bother re-handling groups we've already seen
prevs -= state_groups_seen
next_to_search |= prevs
state_groups_seen |= prevs
# We also check whether anything referencing these state groups is
# itself unreferenced. This helps ensure that we delete unreferenced
# state groups; if we don't, we will de-delta them when we delete the
# other state groups, leading to increased DB usage.
next_edges = await self.stores.state.get_next_state_groups(current_search)
nexts = set(next_edges.keys())
nexts -= state_groups_seen
next_to_search |= nexts
state_groups_seen |= nexts
to_delete = state_groups_seen - referenced_groups
return to_delete
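A self-contained sketch of the walk above, with in-memory stand-ins (assumed values) for the referenced-group lookup and the prev/next edge queries; the batching of lookups is omitted for brevity.

prev_edges = {2: 1, 3: 2, 5: 3}  # hypothetical delta chain 1 <- 2 <- 3 <- 5
referenced = {1}                 # SG 1 is still referenced by an event

def find_unreferenced(seeds: set) -> set:
    seen = set(seeds)
    to_search = set(seeds)
    found_referenced = set()
    while to_search:
        current, to_search = to_search, set()
        found_referenced |= current & referenced
        current -= referenced  # stop walking past referenced groups
        # Walk backwards along prev edges, and forwards to groups
        # that delta on top of the current ones.
        prevs = {prev_edges[g] for g in current if g in prev_edges} - seen
        nexts = {g for g, p in prev_edges.items() if p in current} - seen
        to_search |= prevs | nexts
        seen |= prevs | nexts
    return seen - found_referenced

assert find_unreferenced({3}) == {2, 3, 5}  # everything except referenced SG 1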
@wrap_as_background_process("_delete_state_groups_loop")
async def _delete_state_groups_loop(self) -> None:
"""Background task that deletes any state groups that may be pending
@@ -160,47 +224,47 @@ class PurgeEventsStorageController:
"""This background update will slowly delete any unreferenced state groups"""
last_checked_state_group = progress.get("last_checked_state_group")
max_state_group = progress.get("max_state_group")
if last_checked_state_group is None or max_state_group is None:
if last_checked_state_group is None:
# This is the first run.
last_checked_state_group = 0
max_state_group = await self.stores.state.db_pool.simple_select_one_onecol(
table="state_groups",
keyvalues={},
retcol="MAX(id)",
allow_none=True,
desc="get_max_state_group",
last_checked_state_group = (
await self.stores.state.db_pool.simple_select_one_onecol(
table="state_groups",
keyvalues={},
retcol="MAX(id)",
allow_none=True,
desc="get_max_state_group",
)
)
if max_state_group is None:
if last_checked_state_group is None:
# There are no state groups so the background process is finished.
await self.stores.state.db_pool.updates._end_background_update(
_BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE
_BackgroundUpdates.MARK_UNREFERENCED_STATE_GROUPS_FOR_DELETION_BG_UPDATE
)
return batch_size
last_checked_state_group += 1
(
last_checked_state_group,
final_batch,
) = await self._delete_unreferenced_state_groups_batch(
last_checked_state_group, batch_size, max_state_group
last_checked_state_group,
batch_size,
)
if not final_batch:
# There are more state groups to check.
progress = {
"last_checked_state_group": last_checked_state_group,
"max_state_group": max_state_group,
}
await self.stores.state.db_pool.updates._background_update_progress(
_BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE,
_BackgroundUpdates.MARK_UNREFERENCED_STATE_GROUPS_FOR_DELETION_BG_UPDATE,
progress,
)
else:
# This background process is finished.
await self.stores.state.db_pool.updates._end_background_update(
_BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE
_BackgroundUpdates.MARK_UNREFERENCED_STATE_GROUPS_FOR_DELETION_BG_UPDATE
)
return batch_size
@@ -209,11 +273,9 @@ class PurgeEventsStorageController:
self,
last_checked_state_group: int,
batch_size: int,
max_state_group: int,
) -> tuple[int, bool]:
"""Looks for unreferenced state groups starting from the last state group
checked, and any state groups which would become unreferenced if a state group
was deleted, and marks them for deletion.
checked and marks them for deletion.
Args:
last_checked_state_group: The last state group that was checked.
@@ -223,35 +285,15 @@ class PurgeEventsStorageController:
(last_checked_state_group, final_batch)
"""
# Look for state groups that can be cleaned up.
def get_next_state_groups_txn(txn: LoggingTransaction) -> Set[int]:
state_group_sql = "SELECT id FROM state_groups WHERE ? < id AND id <= ? ORDER BY id LIMIT ?"
txn.execute(
state_group_sql, (last_checked_state_group, max_state_group, batch_size)
)
next_set = {row[0] for row in txn}
return next_set
next_set = await self.stores.state.db_pool.runInteraction(
"get_next_state_groups", get_next_state_groups_txn
# Find all state groups that can be deleted if any of the original set are deleted.
(
to_delete,
last_checked_state_group,
final_batch,
) = await self._find_unreferenced_groups_for_background_deletion(
last_checked_state_group, batch_size
)
final_batch = False
if len(next_set) < batch_size:
final_batch = True
else:
last_checked_state_group = max(next_set)
if len(next_set) == 0:
return last_checked_state_group, final_batch
# Find all state groups that can be deleted if the original set is deleted.
# This set includes the original set, as well as any state groups that would
# become unreferenced upon deleting the original set.
to_delete = await self._find_unreferenced_groups(next_set)
if len(to_delete) == 0:
return last_checked_state_group, final_batch
@@ -261,65 +303,146 @@ class PurgeEventsStorageController:
return last_checked_state_group, final_batch
async def _find_unreferenced_groups(
async def _find_unreferenced_groups_for_background_deletion(
self,
state_groups: Collection[int],
) -> Set[int]:
"""Used when purging history to figure out which state groups can be
deleted.
last_checked_state_group: int,
batch_size: int,
) -> tuple[Set[int], int, bool]:
"""Used when deleting unreferenced state groups in the background to figure out
which state groups can be deleted.
To avoid increased DB usage due to de-deltaing state groups, this returns only
state groups which are free-standing (ie. share no edges with referenced groups) or
state groups whose edge chains never lead to a future referenced group.
The following scenarios outline the possibilities based on state group data in
the DB.
ie. Free standing -> state groups 1-N would be returned:
SG_1
|
...
|
SG_N
ie. Previous reference -> state groups 2-N would be returned:
SG_1 <- referenced by event
|
SG_2
|
...
|
SG_N
ie. Future reference -> none of the following state groups would be returned:
SG_1
|
SG_2
|
...
|
SG_N <- referenced by event
Args:
state_groups: Set of state groups referenced by events
that are going to be deleted.
last_checked_state_group: The last state group that was checked.
batch_size: How many state groups to process in this iteration.
Returns:
The set of state groups that can be deleted.
(to_delete, last_checked_state_group, final_batch)
"""
# Set of events that we have found to be referenced by events
referenced_groups = set()
# Set of state groups we've already seen
state_groups_seen = set(state_groups)
# If a state group's next edge is not pending deletion then we don't delete the state group.
# If there is no next edge or the next edges are all marked for deletion, then delete
# the state group.
# This holds since we walk backwards from the latest state groups, ensuring that
# we've already checked newer state groups for event references along the way.
def get_next_state_groups_marked_for_deletion_txn(
txn: LoggingTransaction,
) -> tuple[dict[int, bool], dict[int, int]]:
state_group_sql = """
SELECT s.id, e.state_group, d.state_group
FROM (
SELECT id FROM state_groups
WHERE id < ? ORDER BY id DESC LIMIT ?
) as s
LEFT JOIN state_group_edges AS e ON (s.id = e.prev_state_group)
LEFT JOIN state_groups_pending_deletion AS d ON (e.state_group = d.state_group)
"""
txn.execute(state_group_sql, (last_checked_state_group, batch_size))
# Set of state groups to handle next.
next_to_search = set(state_groups)
while next_to_search:
# We bound the number of groups we look up at once, to stop the
# SQL query getting too big
if len(next_to_search) < 100:
current_search = next_to_search
next_to_search = set()
else:
current_search = set(itertools.islice(next_to_search, 100))
next_to_search -= current_search
# Mapping from state group to whether we should delete it.
state_groups_to_deletion: dict[int, bool] = {}
referenced = await self.stores.main.get_referenced_state_groups(
current_search
)
referenced_groups |= referenced
# Mapping from state group to prev state group.
state_groups_to_prev: dict[int, int] = {}
# We don't continue iterating up the state group graphs for state
# groups that are referenced.
current_search -= referenced
for row in txn:
state_group = row[0]
next_edge = row[1]
pending_deletion = row[2]
edges = await self.stores.state.get_previous_state_groups(current_search)
if next_edge is not None:
state_groups_to_prev[next_edge] = state_group
prevs = set(edges.values())
# We don't bother re-handling groups we've already seen
prevs -= state_groups_seen
next_to_search |= prevs
state_groups_seen |= prevs
if next_edge is not None and not pending_deletion:
# We have found an edge not marked for deletion.
# Check previous results to see if this group is part of a chain
# within this batch that qualifies for deletion.
# ie. batch contains:
# SG_1 -> SG_2 -> SG_3
# If SG_3 is a candidate for deletion, then SG_2 & SG_1 should also
# be, even though they have edges which may not be marked for
# deletion.
# This relies on SQL results being sorted in DESC order to work.
next_is_deletion_candidate = state_groups_to_deletion.get(next_edge)
if (
next_is_deletion_candidate is None
or not next_is_deletion_candidate
):
state_groups_to_deletion[state_group] = False
else:
state_groups_to_deletion.setdefault(state_group, True)
else:
# This state group may be a candidate for deletion
state_groups_to_deletion.setdefault(state_group, True)
# We also check whether anything referencing these state groups is
# itself unreferenced. This helps ensure that we delete unreferenced
# state groups; if we don't, we will de-delta them when we delete the
# other state groups, leading to increased DB usage.
next_edges = await self.stores.state.get_next_state_groups(current_search)
nexts = set(next_edges.keys())
nexts -= state_groups_seen
next_to_search |= nexts
state_groups_seen |= nexts
return state_groups_to_deletion, state_groups_to_prev
to_delete = state_groups_seen - referenced_groups
(
state_groups_to_deletion,
state_group_edges,
) = await self.stores.state.db_pool.runInteraction(
"get_next_state_groups_marked_for_deletion",
get_next_state_groups_marked_for_deletion_txn,
)
deletion_candidates = {
state_group
for state_group, deletion in state_groups_to_deletion.items()
if deletion
}
return to_delete
final_batch = False
state_groups = state_groups_to_deletion.keys()
if len(state_groups) < batch_size:
final_batch = True
else:
last_checked_state_group = min(state_groups)
if len(state_groups) == 0:
return set(), last_checked_state_group, final_batch
# Determine if any of the remaining state groups are directly referenced.
referenced = await self.stores.main.get_referenced_state_groups(
deletion_candidates
)
# Remove state groups from deletion_candidates which are directly referenced or share a
# future edge with a referenced state group within this batch.
def filter_reference_chains(group: Optional[int]) -> None:
while group is not None:
deletion_candidates.discard(group)
group = state_group_edges.get(group)
for referenced_group in referenced:
filter_reference_chains(referenced_group)
return deletion_candidates, last_checked_state_group, final_batch
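A toy run (assumed values) of the chain filter above, where `state_group_edges` maps a group to its previous group within the batch: any chain reachable from a referenced group is kept out of the deletion set.

deletion_candidates = {10, 11, 12, 40}
state_group_edges = {12: 11, 11: 10}  # chain 10 <- 11 <- 12
referenced = {12}                     # SG 12 is referenced by an event

def filter_reference_chains(group):
    # Walk down the prev chain, rescuing every group on it from deletion.
    while group is not None:
        deletion_candidates.discard(group)
        group = state_group_edges.get(group)

for referenced_group in referenced:
    filter_reference_chains(referenced_group)

assert deletion_candidates == {40}  # the whole 10..12 chain was rescued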

View File

@@ -424,25 +424,37 @@ class DelayedEventsStore(SQLBaseStore):
room_id: str,
event_type: str,
state_key: str,
not_from_localpart: str,
) -> Optional[Timestamp]:
"""
Cancels all matching delayed state events, i.e. removes them as long as they haven't been processed.
Args:
room_id: The room ID to match against.
event_type: The event type to match against.
state_key: The state key to match against.
not_from_localpart: The localpart of a user whose delayed events to not cancel.
If set to the empty string, any users' delayed events may be cancelled.
Returns: The send time of the next delayed event to be sent, if any.
"""
def cancel_delayed_state_events_txn(
txn: LoggingTransaction,
) -> Optional[Timestamp]:
self.db_pool.simple_delete_txn(
txn,
table="delayed_events",
keyvalues={
"room_id": room_id,
"event_type": event_type,
"state_key": state_key,
"is_processed": False,
},
txn.execute(
"""
DELETE FROM delayed_events
WHERE room_id = ? AND event_type = ? AND state_key = ?
AND user_localpart <> ?
AND NOT is_processed
""",
(
room_id,
event_type,
state_key,
not_from_localpart,
),
)
return self._get_next_delayed_event_send_ts_txn(txn)
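A hedged usage sketch (the outer method name is assumed from the `cancel_delayed_state_events_txn` inner function, and all values are hypothetical): an empty `not_from_localpart` satisfies the `user_localpart <> ?` comparison for every non-empty localpart, so all users' matching events are cancelled.

next_send_ts = await store.cancel_delayed_state_events(
    room_id="!room:example.org",
    event_type="com.example.test",
    state_key="",
    not_from_localpart="",  # '' <> user_localpart holds for everyone
)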

View File

@@ -19,6 +19,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
import logging
from enum import Enum
from typing import (
TYPE_CHECKING,
@@ -51,6 +52,8 @@ BG_UPDATE_REMOVE_MEDIA_REPO_INDEX_WITHOUT_METHOD_2 = (
"media_repository_drop_index_wo_method_2"
)
logger = logging.getLogger(__name__)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class LocalMedia:
@@ -65,6 +68,7 @@ class LocalMedia:
safe_from_quarantine: bool
user_id: Optional[str]
authenticated: Optional[bool]
sha256: Optional[str]
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -79,6 +83,7 @@ class RemoteMedia:
last_access_ts: int
quarantined_by: Optional[str]
authenticated: Optional[bool]
sha256: Optional[str]
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -154,6 +159,26 @@ class MediaRepositoryBackgroundUpdateStore(SQLBaseStore):
unique=True,
)
self.db_pool.updates.register_background_index_update(
update_name="local_media_repository_sha256_idx",
index_name="local_media_repository_sha256",
table="local_media_repository",
where_clause="sha256 IS NOT NULL",
columns=[
"sha256",
],
)
self.db_pool.updates.register_background_index_update(
update_name="remote_media_cache_sha256_idx",
index_name="remote_media_cache_sha256",
table="remote_media_cache",
where_clause="sha256 IS NOT NULL",
columns=[
"sha256",
],
)
self.db_pool.updates.register_background_update_handler(
BG_UPDATE_REMOVE_MEDIA_REPO_INDEX_WITHOUT_METHOD_2,
self._drop_media_index_without_method,
@@ -221,6 +246,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
"safe_from_quarantine",
"user_id",
"authenticated",
"sha256",
),
allow_none=True,
desc="get_local_media",
@@ -239,6 +265,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
safe_from_quarantine=row[7],
user_id=row[8],
authenticated=row[9],
sha256=row[10],
)
async def get_local_media_by_user_paginate(
@@ -295,7 +322,8 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
quarantined_by,
safe_from_quarantine,
user_id,
authenticated
authenticated,
sha256
FROM local_media_repository
WHERE user_id = ?
ORDER BY {order_by_column} {order}, media_id ASC
@@ -320,6 +348,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
safe_from_quarantine=bool(row[8]),
user_id=row[9],
authenticated=row[10],
sha256=row[11],
)
for row in txn
]
@@ -449,6 +478,8 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
media_length: int,
user_id: UserID,
url_cache: Optional[str] = None,
sha256: Optional[str] = None,
quarantined_by: Optional[str] = None,
) -> None:
if self.hs.config.media.enable_authenticated_media:
authenticated = True
@@ -466,6 +497,8 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
"user_id": user_id.to_string(),
"url_cache": url_cache,
"authenticated": authenticated,
"sha256": sha256,
"quarantined_by": quarantined_by,
},
desc="store_local_media",
)
@@ -477,20 +510,28 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
upload_name: Optional[str],
media_length: int,
user_id: UserID,
sha256: str,
url_cache: Optional[str] = None,
quarantined_by: Optional[str] = None,
) -> None:
updatevalues = {
"media_type": media_type,
"upload_name": upload_name,
"media_length": media_length,
"url_cache": url_cache,
"sha256": sha256,
}
# This should never be un-set by this function.
if quarantined_by is not None:
updatevalues["quarantined_by"] = quarantined_by
await self.db_pool.simple_update_one(
"local_media_repository",
keyvalues={
"user_id": user_id.to_string(),
"media_id": media_id,
},
updatevalues={
"media_type": media_type,
"upload_name": upload_name,
"media_length": media_length,
"url_cache": url_cache,
},
updatevalues=updatevalues,
desc="update_local_media",
)
@@ -657,6 +698,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
"last_access_ts",
"quarantined_by",
"authenticated",
"sha256",
),
allow_none=True,
desc="get_cached_remote_media",
@@ -674,6 +716,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
last_access_ts=row[5],
quarantined_by=row[6],
authenticated=row[7],
sha256=row[8],
)
async def store_cached_remote_media(
@@ -685,6 +728,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
time_now_ms: int,
upload_name: Optional[str],
filesystem_id: str,
sha256: Optional[str],
) -> None:
if self.hs.config.media.enable_authenticated_media:
authenticated = True
@@ -703,6 +747,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
"filesystem_id": filesystem_id,
"last_access_ts": time_now_ms,
"authenticated": authenticated,
"sha256": sha256,
},
desc="store_cached_remote_media",
)
@@ -946,3 +991,46 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
await self.db_pool.runInteraction(
"delete_url_cache_media", _delete_url_cache_media_txn
)
async def get_is_hash_quarantined(self, sha256: str) -> bool:
"""Get whether a specific sha256 hash digest matches any quarantined media.
Returns:
None if the media_id doesn't exist.
"""
# If we don't have the index yet, performance tanks, so we return False.
# In the background updates, remote_media_cache_sha256_idx is created
# after local_media_repository_sha256_idx, so checking that the remote
# index has completed tells us both indexes exist.
if not await self.db_pool.updates.has_completed_background_update(
"remote_media_cache_sha256_idx"
):
return False
def get_matching_media_txn(
txn: LoggingTransaction, table: str, sha256: str
) -> bool:
# Return on first match
sql = """
SELECT 1
FROM local_media_repository
WHERE sha256 = ? AND quarantined_by IS NOT NULL
UNION ALL
SELECT 1
FROM remote_media_cache
WHERE sha256 = ? AND quarantined_by IS NOT NULL
LIMIT 1
"""
txn.execute(sql, (sha256, sha256))
row = txn.fetchone()
return row is not None
return await self.db_pool.runInteraction(
"get_matching_media_txn",
get_matching_media_txn,
"local_media_repository",
sha256,
)
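The lookup key is the lowercase hex SHA-256 of the raw media bytes, i.e. the same digest the upload path computes. A minimal usage sketch, with the upload bytes assumed in scope:

import hashlib

media_bytes = b"..."  # the uploaded content, assumed in scope
digest = hashlib.sha256(media_bytes).hexdigest()
if await store.get_is_hash_quarantined(digest):
    # Matches existing quarantined media: quarantine this copy too.
    ...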

View File

@@ -51,11 +51,15 @@ from synapse.api.room_versions import RoomVersion, RoomVersions
from synapse.config.homeserver import HomeServerConfig
from synapse.events import EventBase
from synapse.replication.tcp.streams.partial_state import UnPartialStatedRoomStream
from synapse.storage._base import db_to_json, make_in_list_sql_clause
from synapse.storage._base import (
db_to_json,
make_in_list_sql_clause,
)
from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
LoggingTransaction,
make_tuple_in_list_sql_clause,
)
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.storage.types import Cursor
@@ -1127,6 +1131,109 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
return local_media_ids
def _quarantine_local_media_txn(
self,
txn: LoggingTransaction,
hashes: Set[str],
media_ids: Set[str],
quarantined_by: Optional[str],
) -> int:
"""Quarantine and unquarantine local media items.
Args:
txn (cursor)
hashes: A set of sha256 hashes for any media that should be quarantined
media_ids: A set of media IDs for any media that should be quarantined
quarantined_by: The ID of the user who initiated the quarantine request
If it is `None` media will be removed from quarantine
Returns:
The total number of media items quarantined
"""
total_media_quarantined = 0
# Effectively a legacy path, update any media that was explicitly named.
if media_ids:
sql_many_clause_sql, sql_many_clause_args = make_in_list_sql_clause(
txn.database_engine, "media_id", media_ids
)
sql = f"""
UPDATE local_media_repository
SET quarantined_by = ?
WHERE {sql_many_clause_sql}"""
if quarantined_by is not None:
sql += " AND safe_from_quarantine = FALSE"
txn.execute(sql, [quarantined_by] + sql_many_clause_args)
# Note that a rowcount of -1 can be used to indicate no rows were affected.
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0
# Update any media that was identified via hash.
if hashes:
sql_many_clause_sql, sql_many_clause_args = make_in_list_sql_clause(
txn.database_engine, "sha256", hashes
)
sql = f"""
UPDATE local_media_repository
SET quarantined_by = ?
WHERE {sql_many_clause_sql}"""
if quarantined_by is not None:
sql += " AND safe_from_quarantine = FALSE"
txn.execute(sql, [quarantined_by] + sql_many_clause_args)
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0
return total_media_quarantined
def _quarantine_remote_media_txn(
self,
txn: LoggingTransaction,
hashes: Set[str],
media: Set[Tuple[str, str]],
quarantined_by: Optional[str],
) -> int:
"""Quarantine and unquarantine remote items
Args:
txn (cursor)
hashes: A set of sha256 hashes for any media that should be quarantined
media: A set of tuples (media_origin, media_id) for any media that should be quarantined
quarantined_by: The ID of the user who initiated the quarantine request
If it is `None` media will be removed from quarantine
Returns:
The total number of media items quarantined
"""
total_media_quarantined = 0
if media:
sql_in_list_clause, sql_args = make_tuple_in_list_sql_clause(
txn.database_engine,
("media_origin", "media_id"),
media,
)
sql = f"""
UPDATE remote_media_cache
SET quarantined_by = ?
WHERE {sql_in_list_clause}"""
txn.execute(sql, [quarantined_by] + sql_args)
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0
if hashes:
sql_many_clause_sql, sql_many_clause_args = make_in_list_sql_clause(
txn.database_engine, "sha256", hashes
)
sql = f"""
UPDATE remote_media_cache
SET quarantined_by = ?
WHERE {sql_many_clause_sql}"""
txn.execute(sql, [quarantined_by] + sql_many_clause_args)
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0
return total_media_quarantined
def _quarantine_media_txn(
self,
txn: LoggingTransaction,
@@ -1146,40 +1253,49 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
Returns:
The total number of media items quarantined
"""
hashes = set()
media_ids = set()
remote_media = set()
# Update all the tables to set the quarantined_by flag
sql = """
UPDATE local_media_repository
SET quarantined_by = ?
WHERE media_id = ?
"""
# set quarantine
if quarantined_by is not None:
sql += "AND safe_from_quarantine = FALSE"
txn.executemany(
sql, [(quarantined_by, media_id) for media_id in local_mxcs]
# First, determine the hashes of the media we want to delete.
# We also want the media_ids for any media that lacks a hash.
if local_mxcs:
hash_sql_many_clause_sql, hash_sql_many_clause_args = (
make_in_list_sql_clause(txn.database_engine, "media_id", local_mxcs)
)
# remove from quarantine
else:
txn.executemany(
sql, [(quarantined_by, media_id) for media_id in local_mxcs]
hash_sql = f"SELECT sha256, media_id FROM local_media_repository WHERE {hash_sql_many_clause_sql}"
if quarantined_by is not None:
hash_sql += " AND safe_from_quarantine = FALSE"
txn.execute(hash_sql, hash_sql_many_clause_args)
for sha256, media_id in txn:
if sha256:
hashes.add(sha256)
else:
media_ids.add(media_id)
# Do the same for remote media
if remote_mxcs:
hash_sql_in_list_clause, hash_sql_args = make_tuple_in_list_sql_clause(
txn.database_engine,
("media_origin", "media_id"),
remote_mxcs,
)
# Note that a rowcount of -1 can be used to indicate no rows were affected.
total_media_quarantined = txn.rowcount if txn.rowcount > 0 else 0
hash_sql = f"SELECT sha256, media_origin, media_id FROM remote_media_cache WHERE {hash_sql_in_list_clause}"
txn.execute(hash_sql, hash_sql_args)
for sha256, media_origin, media_id in txn:
if sha256:
hashes.add(sha256)
else:
remote_media.add((media_origin, media_id))
txn.executemany(
"""
UPDATE remote_media_cache
SET quarantined_by = ?
WHERE media_origin = ? AND media_id = ?
""",
[(quarantined_by, origin, media_id) for origin, media_id in remote_mxcs],
count = self._quarantine_local_media_txn(txn, hashes, media_ids, quarantined_by)
count += self._quarantine_remote_media_txn(
txn, hashes, remote_media, quarantined_by
)
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0
return total_media_quarantined
return count
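A toy illustration (assumed rows) of the two-phase split above: rows with a sha256 are quarantined by hash, which also catches duplicate copies, while rows without one fall back to being quarantined by ID.

rows = [("abc123", "m1"), ("abc123", "m2"), (None, "m3")]  # (sha256, media_id)
hashes = {sha for sha, _ in rows if sha}
media_ids = {media_id for sha, media_id in rows if not sha}
assert hashes == {"abc123"}  # quarantines m1, m2 and any other copy
assert media_ids == {"m3"}   # legacy row with no hash, handled by ID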
async def block_room(self, room_id: str, user_id: str) -> None:
"""Marks the room as blocked.

View File

@@ -79,6 +79,7 @@ logger = logging.getLogger(__name__)
_MEMBERSHIP_PROFILE_UPDATE_NAME = "room_membership_profile_update"
_CURRENT_STATE_MEMBERSHIP_UPDATE_NAME = "current_state_events_membership"
_POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE = 1000
@attr.s(frozen=True, slots=True, auto_attribs=True)
@@ -1606,6 +1607,66 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
from_ts,
)
async def set_room_participation(self, user_id: str, room_id: str) -> None:
"""
Record the provided user as participating in the given room
Args:
user_id: the user ID of the user
room_id: ID of the room to set the participant in
"""
def _set_room_participation_txn(
txn: LoggingTransaction, user_id: str, room_id: str
) -> None:
sql = """
UPDATE room_memberships
SET participant = true
WHERE (user_id, room_id) IN (
SELECT user_id, room_id
FROM room_memberships
WHERE user_id = ?
AND room_id = ?
ORDER BY event_stream_ordering DESC
LIMIT 1
)
"""
txn.execute(sql, (user_id, room_id))
await self.db_pool.runInteraction(
"_set_room_participation_txn", _set_room_participation_txn, user_id, room_id
)
async def get_room_participation(self, user_id: str, room_id: str) -> bool:
"""
Check whether a user is listed as a participant in a room
Args:
user_id: user ID of the user
room_id: ID of the room to check in
"""
def _get_room_participation_txn(
txn: LoggingTransaction, user_id: str, room_id: str
) -> bool:
sql = """
SELECT participant
FROM room_memberships
WHERE user_id = ?
AND room_id = ?
ORDER BY event_stream_ordering DESC
LIMIT 1
"""
txn.execute(sql, (user_id, room_id))
res = txn.fetchone()
if res:
return res[0]
return False
return await self.db_pool.runInteraction(
"_get_room_participation_txn", _get_room_participation_txn, user_id, room_id
)
class RoomMemberBackgroundUpdateStore(SQLBaseStore):
def __init__(
@@ -1636,6 +1697,93 @@ class RoomMemberBackgroundUpdateStore(SQLBaseStore):
columns=["user_id", "room_id"],
)
self.db_pool.updates.register_background_update_handler(
"populate_participant_bg_update", self._populate_participant
)
async def _populate_participant(self, progress: JsonDict, batch_size: int) -> int:
"""
Background update to populate column `participant` on `room_memberships` table
A 'participant' is someone who is currently joined to a room and has sent at least
one `m.room.message` or `m.room.encrypted` event.
This background update will set the `participant` column across all rows in
`room_memberships` based on the user's *current* join status, and if
they've *ever* sent a message or encrypted event. Therefore one should
never assume the `participant` column's value is based solely on whether
the user participated in a previous "session" (where a "session" is defined
as a period between the user joining and leaving). See
https://github.com/element-hq/synapse/pull/18068#discussion_r1931070291
for further detail.
"""
stream_token = progress.get("last_stream_token", None)
def _get_max_stream_token_txn(txn: LoggingTransaction) -> int:
sql = """
SELECT event_stream_ordering from room_memberships
ORDER BY event_stream_ordering DESC
LIMIT 1;
"""
txn.execute(sql)
res = txn.fetchone()
if not res or not res[0]:
return 0
return res[0]
def _background_populate_participant_txn(
txn: LoggingTransaction, stream_token: str
) -> None:
sql = """
UPDATE room_memberships
SET participant = True
FROM (
SELECT DISTINCT c.state_key, e.room_id
FROM current_state_events AS c
INNER JOIN events AS e ON c.room_id = e.room_id
WHERE c.membership = 'join'
AND c.state_key = e.sender
AND (
e.type = 'm.room.message'
OR e.type = 'm.room.encrypted'
)
) AS subquery
WHERE room_memberships.user_id = subquery.state_key
AND room_memberships.room_id = subquery.room_id
AND room_memberships.event_stream_ordering <= ?
AND room_memberships.event_stream_ordering > ?;
"""
batch = int(stream_token) - _POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE
txn.execute(sql, (stream_token, batch))
if stream_token is None:
stream_token = await self.db_pool.runInteraction(
"_get_max_stream_token", _get_max_stream_token_txn
)
if stream_token < 0:
await self.db_pool.updates._end_background_update(
"populate_participant_bg_update"
)
return _POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE
await self.db_pool.runInteraction(
"_background_populate_participant_txn",
_background_populate_participant_txn,
stream_token,
)
progress["last_stream_token"] = (
stream_token - _POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE
)
await self.db_pool.runInteraction(
"populate_participant_bg_update",
self.db_pool.updates._background_update_progress_txn,
"populate_participant_bg_update",
progress,
)
return _POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE
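A sketch of the batching arithmetic above: each run covers the window `(token - batch, token]` of stream orderings, then stores `token - batch` as progress, and the update ends once the token goes negative. The starting token here is an assumed value.

BATCH = 1000  # _POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE
token = 2500  # assumed max event_stream_ordering on the first run

windows = []
while token >= 0:
    windows.append((token - BATCH, token))  # rows with ordering in (lo, hi]
    token -= BATCH

assert windows == [(1500, 2500), (500, 1500), (-500, 500)]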
async def _background_add_membership_profile(
self, progress: JsonDict, batch_size: int
) -> int:

View File

@@ -1,7 +1,7 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
# Copyright (C) 2023, 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
@@ -61,6 +61,13 @@ class SlidingSyncStore(SQLBaseStore):
columns=("required_state_id",),
)
self.db_pool.updates.register_background_index_update(
update_name="sliding_sync_membership_snapshots_membership_event_id_idx",
index_name="sliding_sync_membership_snapshots_membership_event_id_idx",
table="sliding_sync_membership_snapshots",
columns=("membership_event_id",),
)
async def get_latest_bump_stamp_for_room(
self,
room_id: str,

View File

@@ -19,7 +19,7 @@
#
#
SCHEMA_VERSION = 89 # remember to update the list below when updating
SCHEMA_VERSION = 91 # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema
This should be incremented whenever the codebase changes its requirements on the
@@ -158,6 +158,9 @@ Changes in SCHEMA_VERSION = 88
Changes in SCHEMA_VERSION = 89
- Add `state_groups_pending_deletion` and `state_groups_persisting` tables.
Changes in SCHEMA_VERSION = 90
- Add a column `participant` to `room_memberships` table
- Add background update to delete unreferenced state groups.
"""

View File

@@ -0,0 +1,15 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8901, 'sliding_sync_membership_snapshots_membership_event_id_idx', '{}');

View File

@@ -0,0 +1,20 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Add a column `participant` to `room_memberships` table to track whether a room member has sent
-- an `m.room.message` or `m.room.encrypted` event into a room they are a member of
ALTER TABLE room_memberships ADD COLUMN participant BOOLEAN DEFAULT FALSE;
-- Add a background update to populate `participant` column
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(9001, 'populate_participant_bg_update', '{}');

View File

@@ -0,0 +1,28 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Store the SHA256 content hash of media files.
ALTER TABLE local_media_repository ADD COLUMN sha256 TEXT;
ALTER TABLE remote_media_cache ADD COLUMN sha256 TEXT;
-- Add a background updates to handle creating the new index.
--
-- Note that the ordering of the update is not following the usual scheme. This
-- is because when upgrading from Synapse 1.127, this index is fairly important
-- to have up quickly, so that it doesn't tank performance, which is why it is
-- scheduled before other background updates in the 1.127 -> 1.128 upgrade
INSERT INTO
background_updates (ordering, update_name, progress_json)
VALUES
(8890, 'local_media_repository_sha256_idx', '{}'),
(8891, 'remote_media_cache_sha256_idx', '{}');

View File

@@ -13,4 +13,4 @@
-- Add a background update to delete any unreferenced state groups
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8902, 'delete_unreferenced_state_groups_bg_update', '{}');
(9002, 'mark_unreferenced_state_groups_for_deletion_bg_update', '{}');

View File

@@ -0,0 +1,15 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Remove the old unreferenced state group deletion background update if it exists
DELETE FROM background_updates WHERE update_name = 'delete_unreferenced_state_groups_bg_update';

View File

@@ -49,6 +49,6 @@ class _BackgroundUpdates:
"sliding_sync_membership_snapshots_fix_forgotten_column_bg_update"
)
DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE = (
"delete_unreferenced_state_groups_bg_update"
MARK_UNREFERENCED_STATE_GROUPS_FOR_DELETION_BG_UPDATE = (
"mark_unreferenced_state_groups_for_deletion_bg_update"
)

View File

@@ -147,6 +147,45 @@ class FederationMediaDownloadsTest(unittest.FederatingHomeserverTestCase):
found_file = any(SMALL_PNG in field for field in stripped_bytes)
self.assertTrue(found_file)
def test_federation_etag(self) -> None:
"""Test that federation ETags work"""
content = io.BytesIO(b"file_to_stream")
content_uri = self.get_success(
self.media_repo.create_content(
"text/plain",
"test_upload",
content,
46,
UserID.from_string("@user_id:whatever.org"),
)
)
channel = self.make_signed_federation_request(
"GET",
f"/_matrix/federation/v1/media/download/{content_uri.media_id}",
)
self.pump()
self.assertEqual(200, channel.code)
# We expect exactly one ETag header.
etags = channel.headers.getRawHeaders("ETag")
self.assertIsNotNone(etags)
assert etags is not None # For mypy
self.assertEqual(len(etags), 1)
etag = etags[0]
# Refetching with the etag should result in 304 and empty body.
channel = self.make_signed_federation_request(
"GET",
f"/_matrix/federation/v1/media/download/{content_uri.media_id}",
custom_headers=[("If-None-Match", etag)],
)
self.pump()
self.assertEqual(channel.code, 304)
self.assertEqual(channel.is_finished(), True)
self.assertNotIn("body", channel.result)
class FederationThumbnailTest(unittest.FederatingHomeserverTestCase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:

View File

@@ -539,6 +539,44 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
error = self.get_failure(self.auth.get_user_by_req(request), SynapseError)
self.assertEqual(error.value.code, 503)
def test_cached_expired_introspection(self) -> None:
"""The handler should raise an error if the introspection response gives
an expiry time, the introspection response is cached and then the entry is
re-requested after it has expired."""
self.http_client.request = introspection_mock = AsyncMock(
return_value=FakeResponse.json(
code=200,
payload={
"active": True,
"sub": SUBJECT,
"scope": " ".join(
[
MATRIX_USER_SCOPE,
f"{MATRIX_DEVICE_SCOPE_PREFIX}AABBCC",
]
),
"username": USERNAME,
"expires_in": 60,
},
)
)
request = Mock(args={})
request.args[b"access_token"] = [b"mockAccessToken"]
request.requestHeaders.getRawHeaders = mock_getRawHeaders()
# The first CS-API request causes a successful introspection
self.get_success(self.auth.get_user_by_req(request))
self.assertEqual(introspection_mock.call_count, 1)
# Sleep for 60 seconds so the token expires.
self.reactor.advance(60.0)
# Now the CS-API request fails because the token expired
self.get_failure(self.auth.get_user_by_req(request), InvalidClientTokenError)
# Ensure another introspection request was not sent
self.assertEqual(introspection_mock.call_count, 1)
def make_device_keys(self, user_id: str, device_id: str) -> JsonDict:
# We only generate a master key to simplify the test.
master_signing_key = generate_signing_key(device_id)

View File

@@ -369,6 +369,7 @@ class ProfileTestCase(unittest.HomeserverTestCase):
time_now_ms=self.clock.time_msec(),
upload_name=None,
filesystem_id="xyz",
sha256="abcdefg12345",
)
)

View File

@@ -31,6 +31,9 @@ from synapse.rest.client import login, register, room
from synapse.server import HomeServer
from synapse.types import UserID
from synapse.util import Clock
from synapse.util.stringutils import (
random_string,
)
from tests import unittest
from tests.unittest import override_config
@@ -65,7 +68,6 @@ class MediaRetentionTestCase(unittest.HomeserverTestCase):
# quarantined media) into both the local store and the remote cache, plus
# one additional local media that is marked as protected from quarantine.
media_repository = hs.get_media_repository()
test_media_content = b"example string"
def _create_media_and_set_attributes(
last_accessed_ms: Optional[int],
@@ -73,12 +75,14 @@ class MediaRetentionTestCase(unittest.HomeserverTestCase):
is_protected: Optional[bool] = False,
) -> MXCUri:
# "Upload" some media to the local media store
# Use random content so each media item gets a distinct sha256 hash.
random_content = bytes(random_string(24), "utf-8")
mxc_uri: MXCUri = self.get_success(
media_repository.create_content(
media_type="text/plain",
upload_name=None,
content=io.BytesIO(test_media_content),
content_length=len(test_media_content),
content=io.BytesIO(random_content),
content_length=len(random_content),
auth_user=UserID.from_string(test_user_id),
)
)
@@ -129,6 +133,7 @@ class MediaRetentionTestCase(unittest.HomeserverTestCase):
time_now_ms=clock.time_msec(),
upload_name="testfile.txt",
filesystem_id="abcdefg12345",
sha256=random_string(24),
)
)

View File

@@ -42,6 +42,7 @@ from twisted.web.resource import Resource
from synapse.api.errors import Codes, HttpResponseException
from synapse.api.ratelimiting import Ratelimiter
from synapse.events import EventBase
from synapse.http.client import ByteWriteable
from synapse.http.types import QueryParams
from synapse.logging.context import make_deferred_yieldable
from synapse.media._base import FileInfo, ThumbnailInfo
@@ -59,7 +60,7 @@ from synapse.util import Clock
from tests import unittest
from tests.server import FakeChannel
from tests.test_utils import SMALL_CMYK_JPEG, SMALL_PNG
from tests.test_utils import SMALL_CMYK_JPEG, SMALL_PNG, SMALL_PNG_SHA256
from tests.unittest import override_config
from tests.utils import default_config
@@ -1257,3 +1258,107 @@ class RemoteDownloadLimiterTestCase(unittest.HomeserverTestCase):
)
assert channel.code == 502
assert channel.json_body["errcode"] == "M_TOO_LARGE"
def read_body(
response: IResponse, stream: ByteWriteable, max_size: Optional[int]
) -> Deferred:
d: Deferred = defer.Deferred()
stream.write(SMALL_PNG)
d.callback(len(SMALL_PNG))
return d
class MediaHashesTestCase(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets,
login.register_servlets,
media.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.user = self.register_user("user", "pass")
self.tok = self.login("user", "pass")
self.store = hs.get_datastores().main
self.client = hs.get_federation_http_client()
def create_resource_dict(self) -> Dict[str, Resource]:
resources = super().create_resource_dict()
resources["/_matrix/media"] = self.hs.get_media_repository_resource()
return resources
def test_ensure_correct_sha256(self) -> None:
"""Check that the hash does not change"""
media = self.helper.upload_media(SMALL_PNG, tok=self.tok, expect_code=200)
mxc = media.get("content_uri")
assert mxc
store_media = self.get_success(self.store.get_local_media(mxc[11:]))
assert store_media
self.assertEqual(
store_media.sha256,
SMALL_PNG_SHA256,
)
def test_ensure_multiple_correct_sha256(self) -> None:
"""Check that two media items have the same hash."""
media_a = self.helper.upload_media(SMALL_PNG, tok=self.tok, expect_code=200)
mxc_a = media_a.get("content_uri")
assert mxc_a
store_media_a = self.get_success(self.store.get_local_media(mxc_a[11:]))
assert store_media_a
media_b = self.helper.upload_media(SMALL_PNG, tok=self.tok, expect_code=200)
mxc_b = media_b.get("content_uri")
assert mxc_b
store_media_b = self.get_success(self.store.get_local_media(mxc_b[11:]))
assert store_media_b
self.assertNotEqual(
store_media_a.media_id,
store_media_b.media_id,
)
self.assertEqual(
store_media_a.sha256,
store_media_b.sha256,
)
@override_config(
{
"enable_authenticated_media": False,
}
)
# mock actually reading file body
@patch(
"synapse.http.matrixfederationclient.read_body_with_max_size",
read_body,
)
def test_ensure_correct_sha256_federated(self) -> None:
"""Check that federated media have the same hash."""
# Mock getting a file over federation
async def _send_request(*args: Any, **kwargs: Any) -> IResponse:
resp = MagicMock(spec=IResponse)
resp.code = 200
resp.length = 500
resp.headers = Headers({"Content-Type": ["application/octet-stream"]})
resp.phrase = b"OK"
return resp
self.client._send_request = _send_request # type: ignore
# first request should go through
channel = self.make_request(
"GET",
"/_matrix/media/v3/download/remote.org/abc",
shorthand=False,
access_token=self.tok,
)
assert channel.code == 200
store_media = self.get_success(
self.store.get_cached_remote_media("remote.org", "abc")
)
assert store_media
self.assertEqual(
store_media.sha256,
SMALL_PNG_SHA256,
)

View File

@@ -20,7 +20,7 @@
#
import urllib.parse
from typing import Dict
from typing import Dict, cast
from parameterized import parameterized
@@ -32,6 +32,7 @@ from synapse.http.server import JsonResource
from synapse.rest.admin import VersionServlet
from synapse.rest.client import login, media, room
from synapse.server import HomeServer
from synapse.types import UserID
from synapse.util import Clock
from tests import unittest
@@ -227,10 +228,25 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
# Upload some media
response_1 = self.helper.upload_media(SMALL_PNG, tok=non_admin_user_tok)
response_2 = self.helper.upload_media(SMALL_PNG, tok=non_admin_user_tok)
response_3 = self.helper.upload_media(SMALL_PNG, tok=non_admin_user_tok)
# Extract media IDs
server_and_media_id_1 = response_1["content_uri"][6:]
server_and_media_id_2 = response_2["content_uri"][6:]
server_and_media_id_3 = response_3["content_uri"][6:]
# Remove the hash from the media to simulate historic media.
self.get_success(
self.hs.get_datastores().main.update_local_media(
media_id=server_and_media_id_3.split("/")[1],
media_type="image/png",
upload_name=None,
media_length=123,
user_id=UserID.from_string(non_admin_user),
# Hack to force some media to have no hash.
sha256=cast(str, None),
)
)
# Quarantine all media by this user
url = "/_synapse/admin/v1/user/%s/media/quarantine" % urllib.parse.quote(
@@ -244,12 +260,13 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
self.pump(1.0)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(
channel.json_body, {"num_quarantined": 2}, "Expected 2 quarantined items"
channel.json_body, {"num_quarantined": 3}, "Expected 3 quarantined items"
)
# Attempt to access each piece of media
self._ensure_quarantined(admin_user_tok, server_and_media_id_1)
self._ensure_quarantined(admin_user_tok, server_and_media_id_2)
self._ensure_quarantined(admin_user_tok, server_and_media_id_3)
def test_cannot_quarantine_safe_media(self) -> None:
self.register_user("user_admin", "pass", admin=True)

View File

@@ -35,7 +35,7 @@ from synapse.server import HomeServer
from synapse.util import Clock
from tests import unittest
from tests.test_utils import SMALL_PNG
from tests.test_utils import SMALL_CMYK_JPEG, SMALL_PNG
from tests.unittest import override_config
VALID_TIMESTAMP = 1609459200000 # 2021-01-01 in milliseconds
@@ -598,23 +598,27 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):
class QuarantineMediaByIDTestCase(_AdminMediaTests):
def upload_media_and_return_media_id(self, data: bytes) -> str:
# Upload some media into the room
response = self.helper.upload_media(
data,
tok=self.admin_user_tok,
expect_code=200,
)
# Extract media ID from the response
server_and_media_id = response["content_uri"][6:] # Cut off 'mxc://'
return server_and_media_id.split("/")[1]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = hs.get_datastores().main
self.server_name = hs.hostname
self.admin_user = self.register_user("admin", "pass", admin=True)
self.admin_user_tok = self.login("admin", "pass")
# Upload some media into the room
response = self.helper.upload_media(
SMALL_PNG,
tok=self.admin_user_tok,
expect_code=200,
)
# Extract media ID from the response
server_and_media_id = response["content_uri"][6:] # Cut off 'mxc://'
self.media_id = server_and_media_id.split("/")[1]
self.media_id = self.upload_media_and_return_media_id(SMALL_PNG)
self.media_id_2 = self.upload_media_and_return_media_id(SMALL_PNG)
self.media_id_3 = self.upload_media_and_return_media_id(SMALL_PNG)
self.media_id_other = self.upload_media_and_return_media_id(SMALL_CMYK_JPEG)
self.url = "/_synapse/admin/v1/media/%s/%s/%s"
@parameterized.expand(["quarantine", "unquarantine"])
@@ -686,6 +690,52 @@ class QuarantineMediaByIDTestCase(_AdminMediaTests):
assert media_info is not None
self.assertFalse(media_info.quarantined_by)
def test_quarantine_media_match_hash(self) -> None:
"""
Tests that quarantining removes all media with the same hash
"""
media_info = self.get_success(self.store.get_local_media(self.media_id))
assert media_info is not None
self.assertFalse(media_info.quarantined_by)
# quarantining
channel = self.make_request(
"POST",
self.url % ("quarantine", self.server_name, self.media_id),
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertFalse(channel.json_body)
# Test that ALL similar media was quarantined.
for media in [self.media_id, self.media_id_2, self.media_id_3]:
media_info = self.get_success(self.store.get_local_media(media))
assert media_info is not None
self.assertTrue(media_info.quarantined_by)
# Test that other media was not.
media_info = self.get_success(self.store.get_local_media(self.media_id_other))
assert media_info is not None
self.assertFalse(media_info.quarantined_by)
# remove from quarantine
channel = self.make_request(
"POST",
self.url % ("unquarantine", self.server_name, self.media_id),
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertFalse(channel.json_body)
# Test that ALL similar media is now reset.
for media in [self.media_id, self.media_id_2, self.media_id_3]:
media_info = self.get_success(self.store.get_local_media(media))
assert media_info is not None
self.assertFalse(media_info.quarantined_by)
def test_quarantine_protected_media(self) -> None:
"""
Tests that quarantining from protected media fails

View File

@@ -22,7 +22,8 @@ from parameterized import parameterized
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.errors import Codes
from synapse.rest.client import delayed_events, room, versions
from synapse.rest import admin
from synapse.rest.client import delayed_events, login, room, versions
from synapse.server import HomeServer
from synapse.types import JsonDict
from synapse.util import Clock
@@ -32,7 +33,6 @@ from tests.unittest import HomeserverTestCase
PATH_PREFIX = "/_matrix/client/unstable/org.matrix.msc4140/delayed_events"
_HS_NAME = "red"
_EVENT_TYPE = "com.example.test"
@@ -54,23 +54,41 @@ class DelayedEventsUnstableSupportTestCase(HomeserverTestCase):
class DelayedEventsTestCase(HomeserverTestCase):
"""Tests getting and managing delayed events."""
servlets = [delayed_events.register_servlets, room.register_servlets]
user_id = f"@sid1:{_HS_NAME}"
servlets = [
admin.register_servlets,
delayed_events.register_servlets,
login.register_servlets,
room.register_servlets,
]
def default_config(self) -> JsonDict:
config = super().default_config()
config["server_name"] = _HS_NAME
config["max_event_delay_duration"] = "24h"
return config
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.user1_user_id = self.register_user("user1", "pass")
self.user1_access_token = self.login("user1", "pass")
self.user2_user_id = self.register_user("user2", "pass")
self.user2_access_token = self.login("user2", "pass")
self.room_id = self.helper.create_room_as(
self.user_id,
self.user1_user_id,
tok=self.user1_access_token,
extra_content={
"preset": "trusted_private_chat",
"preset": "public_chat",
"power_level_content_override": {
"events": {
_EVENT_TYPE: 0,
}
},
},
)
self.helper.join(
room=self.room_id, user=self.user2_user_id, tok=self.user2_access_token
)
def test_delayed_events_empty_on_startup(self) -> None:
self.assertListEqual([], self._get_delayed_events())
@@ -85,6 +103,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
{
setter_key: setter_expected,
},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
events = self._get_delayed_events()
@@ -94,7 +113,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
@@ -104,7 +123,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
@@ -113,7 +132,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
{"rc_delayed_event_mgmt": {"per_second": 0.5, "burst_count": 1}}
)
def test_get_delayed_events_ratelimit(self) -> None:
args = ("GET", PATH_PREFIX)
args = ("GET", PATH_PREFIX, b"", self.user1_access_token)
channel = self.make_request(*args)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
@@ -123,7 +142,9 @@ class DelayedEventsTestCase(HomeserverTestCase):
# Add a ratelimit override for the current user, disabling ratelimiting for them.
self.get_success(
self.hs.get_datastores().main.set_ratelimit_for_user(self.user_id, 0, 0)
self.hs.get_datastores().main.set_ratelimit_for_user(
self.user1_user_id, 0, 0
)
)
# Test that the request isn't ratelimited anymore.
@@ -134,6 +155,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/",
access_token=self.user1_access_token,
)
self.assertEqual(HTTPStatus.NOT_FOUND, channel.code, channel.result)
@@ -141,6 +163,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
channel = self.make_request(
"POST",
f"{PATH_PREFIX}/abc",
access_token=self.user1_access_token,
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
@@ -153,6 +176,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/abc",
{},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
@@ -165,6 +189,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/abc",
{"action": "oops"},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.BAD_REQUEST, channel.code, channel.result)
self.assertEqual(
@@ -178,6 +203,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/abc",
{"action": action},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.NOT_FOUND, channel.code, channel.result)
@@ -192,6 +218,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
{
setter_key: setter_expected,
},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
@@ -205,7 +232,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
@@ -214,6 +241,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/{delay_id}",
{"action": "cancel"},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertListEqual([], self._get_delayed_events())
@@ -222,7 +250,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
@@ -237,6 +265,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
_get_path_for_delayed_send(self.room_id, _EVENT_TYPE, 100000),
{},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
@@ -247,6 +276,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/{delay_ids.pop(0)}",
{"action": "cancel"},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
@@ -254,13 +284,16 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/{delay_ids.pop(0)}",
{"action": "cancel"},
self.user1_access_token,
)
channel = self.make_request(*args)
self.assertEqual(HTTPStatus.TOO_MANY_REQUESTS, channel.code, channel.result)
# Add a ratelimit override for the current user, disabling ratelimiting for them.
self.get_success(
self.hs.get_datastores().main.set_ratelimit_for_user(self.user_id, 0, 0)
self.hs.get_datastores().main.set_ratelimit_for_user(
self.user1_user_id, 0, 0
)
)
# Test that the request isn't ratelimited anymore.
@@ -278,6 +311,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
{
setter_key: setter_expected,
},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
@@ -291,7 +325,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
@@ -300,13 +334,14 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/{delay_id}",
{"action": "send"},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
self.assertListEqual([], self._get_delayed_events())
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
@@ -319,6 +354,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
_get_path_for_delayed_send(self.room_id, _EVENT_TYPE, 100000),
{},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
@@ -329,6 +365,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/{delay_ids.pop(0)}",
{"action": "send"},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
@@ -336,13 +373,16 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/{delay_ids.pop(0)}",
{"action": "send"},
self.user1_access_token,
)
channel = self.make_request(*args)
self.assertEqual(HTTPStatus.TOO_MANY_REQUESTS, channel.code, channel.result)
# Add a ratelimit override for the current user, disabling ratelimiting for them.
self.get_success(
self.hs.get_datastores().main.set_ratelimit_for_user(self.user_id, 0, 0)
self.hs.get_datastores().main.set_ratelimit_for_user(
self.user1_user_id, 0, 0
)
)
# Test that the request isn't ratelimited anymore.
@@ -360,6 +400,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
{
setter_key: setter_expected,
},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
@@ -373,7 +414,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
@@ -382,6 +423,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/{delay_id}",
{"action": "restart"},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
@@ -393,7 +435,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
expect_code=HTTPStatus.NOT_FOUND,
)
@@ -403,7 +445,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
@@ -418,6 +460,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
_get_path_for_delayed_send(self.room_id, _EVENT_TYPE, 100000),
{},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
delay_id = channel.json_body.get("delay_id")
@@ -428,6 +471,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/{delay_ids.pop(0)}",
{"action": "restart"},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
@@ -435,21 +479,66 @@ class DelayedEventsTestCase(HomeserverTestCase):
"POST",
f"{PATH_PREFIX}/{delay_ids.pop(0)}",
{"action": "restart"},
self.user1_access_token,
)
channel = self.make_request(*args)
self.assertEqual(HTTPStatus.TOO_MANY_REQUESTS, channel.code, channel.result)
# Add a ratelimit override for the current user, disabling ratelimiting for them.
self.get_success(
self.hs.get_datastores().main.set_ratelimit_for_user(self.user_id, 0, 0)
self.hs.get_datastores().main.set_ratelimit_for_user(
self.user1_user_id, 0, 0
)
)
# Test that the request isn't ratelimited anymore.
channel = self.make_request(*args)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
def test_delayed_state_events_are_cancelled_by_more_recent_state(self) -> None:
state_key = "to_be_cancelled"
def test_delayed_state_is_not_cancelled_by_new_state_from_same_user(
self,
) -> None:
state_key = "to_not_be_cancelled_by_same_user"
setter_key = "setter"
setter_expected = "on_timeout"
channel = self.make_request(
"PUT",
_get_path_for_delayed_state(self.room_id, _EVENT_TYPE, state_key, 900),
{
setter_key: setter_expected,
},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
self.helper.send_state(
self.room_id,
_EVENT_TYPE,
{
setter_key: "manual",
},
self.user1_access_token,
state_key=state_key,
)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
self.reactor.advance(1)
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
self.user1_access_token,
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
def test_delayed_state_is_cancelled_by_new_state_from_other_user(
self,
) -> None:
state_key = "to_be_cancelled_by_other_user"
setter_key = "setter"
channel = self.make_request(
@@ -458,19 +547,20 @@ class DelayedEventsTestCase(HomeserverTestCase):
{
setter_key: "on_timeout",
},
self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
events = self._get_delayed_events()
self.assertEqual(1, len(events), events)
setter_expected = "manual"
setter_expected = "other_user"
self.helper.send_state(
self.room_id,
_EVENT_TYPE,
{
setter_key: setter_expected,
},
None,
self.user2_access_token,
state_key=state_key,
)
self.assertListEqual([], self._get_delayed_events())
@@ -479,7 +569,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
content = self.helper.get_state(
self.room_id,
_EVENT_TYPE,
"",
self.user1_access_token,
state_key=state_key,
)
self.assertEqual(setter_expected, content.get(setter_key), content)
@@ -488,6 +578,7 @@ class DelayedEventsTestCase(HomeserverTestCase):
channel = self.make_request(
"GET",
PATH_PREFIX,
access_token=self.user1_access_token,
)
self.assertEqual(HTTPStatus.OK, channel.code, channel.result)

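In outline, the cancellation rule the two new tests exercise: a pending delayed state event is cancelled by new state for the same (room, event type, state key) only when that state comes from a different user. A minimal sketch, assuming cancellation keys on exactly those fields; the DelayedStateEvent class and should_cancel helper are hypothetical, not Synapse's implementation.

from dataclasses import dataclass

@dataclass
class DelayedStateEvent:
    room_id: str
    event_type: str
    state_key: str
    sender: str  # the user who scheduled the delayed event

def should_cancel(
    pending: DelayedStateEvent,
    room_id: str,
    event_type: str,
    state_key: str,
    sender: str,
) -> bool:
    """New state cancels a pending delayed state event only when it targets
    the same (room, type, state_key) and comes from a different user."""
    return (
        (pending.room_id, pending.event_type, pending.state_key)
        == (room_id, event_type, state_key)
        and pending.sender != sender
    )

pending = DelayedStateEvent("!room:red", "com.example.test", "k", "@user1:red")
assert not should_cancel(pending, "!room:red", "com.example.test", "k", "@user1:red")
assert should_cancel(pending, "!room:red", "com.example.test", "k", "@user2:red")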
View File

@@ -137,6 +137,7 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
time_now_ms=clock.time_msec(),
upload_name="test.png",
filesystem_id=file_id,
sha256=file_id,
)
)
self.register_user("user", "password")
@@ -2593,6 +2594,7 @@ class AuthenticatedMediaTestCase(unittest.HomeserverTestCase):
time_now_ms=self.clock.time_msec(),
upload_name="remote_test.png",
filesystem_id=file_id,
sha256=file_id,
)
)
@@ -2676,3 +2678,114 @@ class AuthenticatedMediaTestCase(unittest.HomeserverTestCase):
access_token=self.tok,
)
self.assertEqual(channel10.code, 200)
def test_authenticated_media_etag(self) -> None:
"""Test that ETag works correctly with authenticated media over client
APIs"""
# upload some local media with authentication on
channel = self.make_request(
"POST",
"_matrix/media/v3/upload?filename=test_png_upload",
SMALL_PNG,
self.tok,
shorthand=False,
content_type=b"image/png",
custom_headers=[("Content-Length", str(67))],
)
self.assertEqual(channel.code, 200)
res = channel.json_body.get("content_uri")
assert res is not None
uri = res.split("mxc://")[1]
# Check standard media endpoint
self._check_caching(f"/download/{uri}")
# check thumbnails as well
params = "?width=32&height=32&method=crop"
self._check_caching(f"/thumbnail/{uri}{params}")
# Inject a piece of remote media.
file_id = "abcdefg12345"
file_info = FileInfo(server_name="lonelyIsland", file_id=file_id)
media_storage = self.hs.get_media_repository().media_storage
ctx = media_storage.store_into_file(file_info)
(f, fname) = self.get_success(ctx.__aenter__())
f.write(SMALL_PNG)
self.get_success(ctx.__aexit__(None, None, None))
# we write the authenticated status when storing media, so this should pick up
# config and authenticate the media
self.get_success(
self.store.store_cached_remote_media(
origin="lonelyIsland",
media_id="52",
media_type="image/png",
media_length=1,
time_now_ms=self.clock.time_msec(),
upload_name="remote_test.png",
filesystem_id=file_id,
sha256=file_id,
)
)
# ensure we have thumbnails for the non-dynamic code path
if self.extra_config == {"dynamic_thumbnails": False}:
self.get_success(
self.repo._generate_thumbnails(
"lonelyIsland", "52", file_id, "image/png"
)
)
self._check_caching("/download/lonelyIsland/52")
params = "?width=32&height=32&method=crop"
self._check_caching(f"/thumbnail/lonelyIsland/52{params}")
def _check_caching(self, path: str) -> None:
"""
Checks that:
1. fetching the path returns an ETag header
2. refetching with the ETag returns a 304 without a body
3. refetching with the ETag but through unauthenticated endpoint
returns 404
"""
# Request media over authenticated endpoint, should be found
channel1 = self.make_request(
"GET",
f"/_matrix/client/v1/media{path}",
access_token=self.tok,
shorthand=False,
)
self.assertEqual(channel1.code, 200)
# Should have a single ETag field
etags = channel1.headers.getRawHeaders("ETag")
self.assertIsNotNone(etags)
assert etags is not None # For mypy
self.assertEqual(len(etags), 1)
etag = etags[0]
# Refetching with the etag should result in 304 and empty body.
channel2 = self.make_request(
"GET",
f"/_matrix/client/v1/media{path}",
access_token=self.tok,
shorthand=False,
custom_headers=[("If-None-Match", etag)],
)
self.assertEqual(channel2.code, 304)
self.assertEqual(channel2.is_finished(), True)
self.assertNotIn("body", channel2.result)
# Refetching with the etag but no access token should result in 404.
channel3 = self.make_request(
"GET",
f"/_matrix/media/r0{path}",
shorthand=False,
custom_headers=[("If-None-Match", etag)],
)
self.assertEqual(channel3.code, 404)

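For reference, the conditional-request handshake that _check_caching verifies works roughly as follows. This is a hedged sketch of generic ETag handling; make_etag and respond are illustrative helpers, not Synapse's media servlet.

import hashlib
from typing import Optional

def make_etag(content: bytes) -> str:
    # A strong ETag derived from the content; any stable fingerprint works.
    return '"%s"' % hashlib.sha256(content).hexdigest()

def respond(content: bytes, if_none_match: Optional[str]) -> tuple:
    """Return (status, body, etag): 304 with an empty body when the client's
    If-None-Match matches the current ETag, else 200 with the full body."""
    etag = make_etag(content)
    if if_none_match == etag:
        return 304, b"", etag
    return 200, content, etag

status, body, etag = respond(b"png-bytes", None)
assert status == 200
status, body, _ = respond(b"png-bytes", etag)
assert (status, body) == (304, b"")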
View File

@@ -4208,3 +4208,196 @@ class UserSuspensionTests(unittest.HomeserverTestCase):
shorthand=False,
)
self.assertEqual(channel.code, 200)
class RoomParticipantTestCase(unittest.HomeserverTestCase):
servlets = [
login.register_servlets,
room.register_servlets,
profile.register_servlets,
admin.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.user1 = self.register_user("thomas", "hackme")
self.tok1 = self.login("thomas", "hackme")
self.user2 = self.register_user("teresa", "hackme")
self.tok2 = self.login("teresa", "hackme")
self.room1 = self.helper.create_room_as(
room_creator=self.user1,
tok=self.tok1,
# Allow user2 to send state events into the room.
extra_content={
"power_level_content_override": {
"state_default": 0,
},
},
)
self.store = hs.get_datastores().main
@parameterized.expand(
[
# Should record participation.
param(
is_state=False,
event_type="m.room.message",
event_content={
"msgtype": "m.text",
"body": "I am engaging in this room",
},
record_participation=True,
),
param(
is_state=False,
event_type="m.room.encrypted",
event_content={
"algorithm": "m.megolm.v1.aes-sha2",
"ciphertext": "AwgAEnACgAkLmt6qF84IK++J7UDH2Za1YVchHyprqTqsg...",
"device_id": "RJYKSTBOIE",
"sender_key": "IlRMeOPX2e0MurIyfWEucYBRVOEEUMrOHqn/8mLqMjA",
"session_id": "X3lUlvLELLYxeTx4yOVu6UDpasGEVO0Jbu+QFnm0cKQ",
},
record_participation=True,
),
# Should not record participation.
param(
is_state=False,
event_type="m.sticker",
event_content={
"body": "My great sticker",
"info": {},
"url": "mxc://unused/mxcurl",
},
record_participation=False,
),
# An invalid **state event** with type `m.room.message`
param(
is_state=True,
event_type="m.room.message",
event_content={
"msgtype": "m.text",
"body": "I am engaging in this room",
},
record_participation=False,
),
# An invalid **state event** with type `m.room.encrypted`
# Note: this may become valid in the future with encrypted state, though we
# still may not want to consider it grounds for marking a user as participating.
param(
is_state=True,
event_type="m.room.encrypted",
event_content={
"algorithm": "m.megolm.v1.aes-sha2",
"ciphertext": "AwgAEnACgAkLmt6qF84IK++J7UDH2Za1YVchHyprqTqsg...",
"device_id": "RJYKSTBOIE",
"sender_key": "IlRMeOPX2e0MurIyfWEucYBRVOEEUMrOHqn/8mLqMjA",
"session_id": "X3lUlvLELLYxeTx4yOVu6UDpasGEVO0Jbu+QFnm0cKQ",
},
record_participation=False,
),
]
)
def test_sending_message_records_participation(
self,
is_state: bool,
event_type: str,
event_content: JsonDict,
record_participation: bool,
) -> None:
"""
Test that sending various events into a room causes the user to be
marked, or not marked, as a participant in that room as appropriate.
"""
self.helper.join(self.room1, self.user2, tok=self.tok2)
# user has not sent any messages, so should not be a participant
participant = self.get_success(
self.store.get_room_participation(self.user2, self.room1)
)
self.assertFalse(participant)
# send an event into the room
if is_state:
# send a state event
self.helper.send_state(
self.room1,
event_type,
body=event_content,
tok=self.tok2,
)
else:
# send a non-state event
self.helper.send_event(
self.room1,
event_type,
content=event_content,
tok=self.tok2,
)
# check whether the user has been marked as a participant
participant = self.get_success(
self.store.get_room_participation(self.user2, self.room1)
)
self.assertEqual(participant, record_participation)
@parameterized.expand(
[
param(
event_type="m.room.message",
event_content={
"msgtype": "m.text",
"body": "I am engaging in this room",
},
),
param(
event_type="m.room.encrypted",
event_content={
"algorithm": "m.megolm.v1.aes-sha2",
"ciphertext": "AwgAEnACgAkLmt6qF84IK++J7UDH2Za1YVchHyprqTqsg...",
"device_id": "RJYKSTBOIE",
"sender_key": "IlRMeOPX2e0MurIyfWEucYBRVOEEUMrOHqn/8mLqMjA",
"session_id": "X3lUlvLELLYxeTx4yOVu6UDpasGEVO0Jbu+QFnm0cKQ",
},
),
]
)
def test_sending_event_and_leaving_does_not_record_participation(
self,
event_type: str,
event_content: JsonDict,
) -> None:
"""
Test that sending an event that should mark a user as a participant,
and then leaving the room, results in the user no longer being marked
as a participant in that room.
"""
self.helper.join(self.room1, self.user2, tok=self.tok2)
# user has not sent any messages, so should not be a participant
participant = self.get_success(
self.store.get_room_participation(self.user2, self.room1)
)
self.assertFalse(participant)
# sending a message should now mark user as participant
self.helper.send_event(
self.room1,
event_type,
content=event_content,
tok=self.tok2,
)
participant = self.get_success(
self.store.get_room_participation(self.user2, self.room1)
)
self.assertTrue(participant)
# leave the room
self.helper.leave(self.room1, self.user2, tok=self.tok2)
# user should no longer be considered a participant
participant = self.get_success(
self.store.get_room_participation(self.user2, self.room1)
)
self.assertFalse(participant)

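The parameterized cases above encode a simple predicate: only non-state m.room.message and m.room.encrypted events count as participation. A minimal sketch of that rule; counts_as_participation is an illustrative helper, not Synapse's implementation.

PARTICIPATION_EVENT_TYPES = {"m.room.message", "m.room.encrypted"}

def counts_as_participation(event_type: str, is_state: bool) -> bool:
    """Only non-state message and encrypted events mark a user as a
    participant; stickers and (invalid) state events with those types do not."""
    return not is_state and event_type in PARTICIPATION_EVENT_TYPES

assert counts_as_participation("m.room.message", is_state=False)
assert counts_as_participation("m.room.encrypted", is_state=False)
assert not counts_as_participation("m.sticker", is_state=False)
assert not counts_as_participation("m.room.message", is_state=True)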
View File

@@ -61,6 +61,7 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
time_now_ms=clock.time_msec(),
upload_name="test.png",
filesystem_id=file_id,
sha256=file_id,
)
)

View File

@@ -313,46 +313,93 @@ class PurgeTests(HomeserverTestCase):
self.room_id, "org.matrix.test", body={"number": 2}
)
# Create enough state events to require multiple batches of
# delete_unreferenced_state_groups_bg_update to be run.
# mark_unreferenced_state_groups_for_deletion_bg_update to be run.
for i in range(200):
self.helper.send_state(self.room_id, "org.matrix.test", body={"number": i})
state2 = self.helper.send_state(
self.room_id, "org.matrix.test", body={"number": 3}
)
self.helper.send(self.room_id, body="test4")
last = self.helper.send(self.room_id, body="test5")
# Create an unreferenced state group that has a prev group of one of the
# to-be-purged events.
# Create an unreferenced state group that has no prev group.
unreferenced_free_state_group = self.get_success(
self.state_store.store_state_group(
event_id=last["event_id"],
room_id=self.room_id,
prev_group=None,
delta_ids={("org.matrix.test", ""): state1["event_id"]},
current_state_ids={("org.matrix.test", ""): ""},
)
)
# Create some unreferenced state groups that have a prev group of one of the
# existing state groups.
prev_group = self.get_success(
self.store._get_state_group_for_event(state1["event_id"])
)
unreferenced_state_group = self.get_success(
unreferenced_end_state_group = self.get_success(
self.state_store.store_state_group(
event_id=last["event_id"],
room_id=self.room_id,
prev_group=prev_group,
delta_ids={("org.matrix.test", ""): state2["event_id"]},
delta_ids={("org.matrix.test", ""): state1["event_id"]},
current_state_ids=None,
)
)
another_unreferenced_end_state_group = self.get_success(
self.state_store.store_state_group(
event_id=last["event_id"],
room_id=self.room_id,
prev_group=unreferenced_end_state_group,
delta_ids={("org.matrix.test", ""): state1["event_id"]},
current_state_ids=None,
)
)
another_unreferenced_state_group = self.get_success(
# Add some other unreferenced state groups which lead to a referenced state
# group.
# These state groups should not get deleted.
chain_state_group = self.get_success(
self.state_store.store_state_group(
event_id=last["event_id"],
room_id=self.room_id,
prev_group=unreferenced_state_group,
delta_ids={("org.matrix.test", ""): state2["event_id"]},
prev_group=None,
delta_ids={("org.matrix.test", ""): ""},
current_state_ids={("org.matrix.test", ""): ""},
)
)
chain_state_group_2 = self.get_success(
self.state_store.store_state_group(
event_id=last["event_id"],
room_id=self.room_id,
prev_group=chain_state_group,
delta_ids={("org.matrix.test", ""): ""},
current_state_ids=None,
)
)
referenced_chain_state_group = self.get_success(
self.state_store.store_state_group(
event_id=last["event_id"],
room_id=self.room_id,
prev_group=chain_state_group_2,
delta_ids={("org.matrix.test", ""): ""},
current_state_ids=None,
)
)
self.get_success(
self.store.db_pool.simple_insert(
"event_to_state_groups",
{
"event_id": "$new_event",
"state_group": referenced_chain_state_group,
},
)
)
# Insert and run the background update.
self.get_success(
self.store.db_pool.simple_insert(
"background_updates",
{
"update_name": _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE,
"update_name": _BackgroundUpdates.MARK_UNREFERENCED_STATE_GROUPS_FOR_DELETION_BG_UPDATE,
"progress_json": "{}",
},
)
@@ -365,11 +412,11 @@ class PurgeTests(HomeserverTestCase):
1 + self.state_deletion_store.DELAY_BEFORE_DELETION_MS / 1000
)
# We expect that the unreferenced state group has been deleted.
# We expect that the unreferenced free state group has been deleted.
row = self.get_success(
self.state_store.db_pool.simple_select_one_onecol(
table="state_groups",
keyvalues={"id": unreferenced_state_group},
keyvalues={"id": unreferenced_free_state_group},
retcol="id",
allow_none=True,
desc="test_purge_unreferenced_state_group",
@@ -377,11 +424,21 @@ class PurgeTests(HomeserverTestCase):
)
self.assertIsNone(row)
# We expect that the other unreferenced state group has also been deleted.
# We expect that both unreferenced end state groups have been deleted.
row = self.get_success(
self.state_store.db_pool.simple_select_one_onecol(
table="state_groups",
keyvalues={"id": another_unreferenced_state_group},
keyvalues={"id": unreferenced_end_state_group},
retcol="id",
allow_none=True,
desc="test_purge_unreferenced_state_group",
)
)
self.assertIsNone(row)
row = self.get_success(
self.state_store.db_pool.simple_select_one_onecol(
table="state_groups",
keyvalues={"id": another_unreferenced_end_state_group},
retcol="id",
allow_none=True,
desc="test_purge_unreferenced_state_group",
@@ -399,4 +456,4 @@ class PurgeTests(HomeserverTestCase):
desc="test_purge_unreferenced_state_group",
)
)
self.assertEqual(len(state_groups), 207)
self.assertEqual(len(state_groups), 210)

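The expected deletions follow a reachability rule: an unreferenced state group may only be deleted if no referenced group reaches it via prev_group links, since its deltas would otherwise still be needed to reconstruct a referenced descendant. A toy model of that rule, with illustrative group names and a hypothetical deletable helper:

# prev_group edges: state_group -> its prev_group (or None).
prev_group = {
    "existing": None,         # an existing state group, referenced by an event
    "free": None,             # unreferenced, no prev group
    "end_1": "existing",      # unreferenced chain hanging off the existing group
    "end_2": "end_1",
    "chain_1": None,          # unreferenced chain a referenced group builds on
    "chain_2": "chain_1",
    "referenced": "chain_2",  # referenced by an event
}
referenced = {"existing", "referenced"}

def deletable(prev_group: dict, referenced: set) -> set:
    """A state group may be deleted iff it is unreferenced and no referenced
    group reaches it by walking prev_group links."""
    keep = set()
    for g in referenced:
        while g is not None:
            keep.add(g)
            g = prev_group.get(g)
    return set(prev_group) - keep

# Mirrors the test: the free group and both end groups go; the chain
# leading to a referenced group stays.
assert deletable(prev_group, referenced) == {"free", "end_1", "end_2"}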
View File

@@ -139,6 +139,8 @@ SMALL_PNG = unhexlify(
b"0000001f15c4890000000a49444154789c63000100000500010d"
b"0a2db40000000049454e44ae426082"
)
# The SHA256 hexdigest for the above bytes.
SMALL_PNG_SHA256 = "ebf4f635a17d10d6eb46ba680b70142419aa3220f228001a036d311a22ee9d2a"
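# If the image bytes above ever change, the digest can be recomputed with a
# one-liner (a sketch, assuming SMALL_PNG is defined as above):
#
#     import hashlib
#     SMALL_PNG_SHA256 = hashlib.sha256(SMALL_PNG).hexdigest()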
# A small CMYK-encoded JPEG image used in some tests.
#