Compare commits

...

120 Commits

Author SHA1 Message Date
H. Shay
145484d010 properly punctuate newsfragment 2023-06-28 11:19:47 -07:00
H. Shay
900064c165 newsfragment 2023-06-28 11:15:51 -07:00
H. Shay
fc8a2ff49c add check constraint to current_state_delta_stream 2023-06-28 11:09:19 -07:00
Shay
78cfa55dad Fix sqlite user_filters upgrade (#15817) 2023-06-27 09:41:42 +01:00
dependabot[bot]
14c1bfd534 Bump serde_json from 1.0.97 to 1.0.99 (#15832)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.97 to 1.0.99.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.97...v1.0.99)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 15:36:33 +01:00
dependabot[bot]
70dc44f667 Bump towncrier from 22.12.0 to 23.6.0 (#15831)
Bumps [towncrier](https://github.com/twisted/towncrier) from 22.12.0 to 23.6.0.
- [Release notes](https://github.com/twisted/towncrier/releases)
- [Changelog](https://github.com/twisted/towncrier/blob/trunk/NEWS.rst)
- [Commits](https://github.com/twisted/towncrier/compare/22.12.0...23.6.0)

---
updated-dependencies:
- dependency-name: towncrier
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 15:36:07 +01:00
Erik Johnston
25c55a9d22 Add login spam checker API (#15838) 2023-06-26 14:12:20 +00:00
dependabot[bot]
52d8131e87 Bump types-opentracing from 2.4.10.4 to 2.4.10.5 (#15830)
Bumps [types-opentracing](https://github.com/python/typeshed) from 2.4.10.4 to 2.4.10.5.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-opentracing
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 09:26:01 +01:00
dependabot[bot]
53ea381ec3 Bump ruff from 0.0.272 to 0.0.275 (#15833)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.0.272 to 0.0.275.
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/BREAKING_CHANGES.md)
- [Commits](https://github.com/astral-sh/ruff/compare/v0.0.272...v0.0.275)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 09:14:20 +01:00
dependabot[bot]
6e65ca0b36 Bump types-setuptools from 67.8.0.0 to 68.0.0.0 (#15835)
Bumps [types-setuptools](https://github.com/python/typeshed) from 67.8.0.0 to 68.0.0.0.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-setuptools
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-26 09:09:57 +01:00
dependabot[bot]
d535473520 Bump cryptography from 40.0.2 to 41.0.1 (#15800)
Bumps [cryptography](https://github.com/pyca/cryptography) from 40.0.2 to 41.0.1.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/40.0.2...41.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-22 16:32:53 +01:00
Nicolas Werner
e0c39d6bb5 Fix forgotten rooms missing in initial sync (#15815)
If you leave a room and forget it, then rejoin it, the room would be
missing from the next initial sync.

fixes #13262

Signed-off-by: Nicolas Werner <n.werner@famedly.com>
2023-06-21 14:56:31 +01:00
Erik Johnston
289ce3b8d9 Fix harmless exception in port DB script (#15814)
The port DB script would try to run database background tasks, which
could fail if the data they acted on was in the process of being ported.
These exceptions were non-fatal.

Fixes #15789
2023-06-21 13:20:46 +00:00
Erik Johnston
6c749c5124 Fix typo in faster join docs (#15812)
Fixes #15756
2023-06-21 11:34:32 +01:00
Mathieu Velten
496f73103d Allow for the configuration of max request retries and min/max retry delays in the matrix federation client (#15783) 2023-06-21 10:41:11 +02:00
Erik Johnston
1fcefd8f3e Merge branch 'master' into develop 2023-06-20 18:56:18 +01:00
Mathieu Velten
7d3da399dd 1.86.0 2023-06-20 17:22:50 +02:00
Shay
6a5cf1a759 Fix Sytest environmental variable evaluation in CI (#15804) 2023-06-20 07:55:46 -07:00
ew-at-vier
2301a09d7a Fix admin api documentation typo (#15805)
* Fix admin api documentation typo

Signed-off-by: Eric Wolf <eric.wolf@vier.ai>
2023-06-20 10:45:26 +00:00
Eric Eastwood
887fa4b66b Switch from matrix:// to matrix-federation:// scheme for internal Synapse routing of outbound federation traffic (#15806)
`matrix://` is a registered, specced scheme nowadays and no longer makes sense for
our internal-to-Synapse use case. ([discussion](https://github.com/matrix-org/synapse/pull/15773#discussion_r1227598679))
2023-06-20 10:05:31 +01:00
dependabot[bot]
4ba528d9c3 Bump ijson from 3.2.0.post0 to 3.2.1 (#15802)
Bumps [ijson](https://github.com/ICRAR/ijson) from 3.2.0.post0 to 3.2.1.
- [Changelog](https://github.com/ICRAR/ijson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ICRAR/ijson/compare/v3.2.0.post0...v3.2.1)

---
updated-dependencies:
- dependency-name: ijson
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:30:17 +01:00
dependabot[bot]
5f9d5190aa Bump attrs from 22.2.0 to 23.1.0 (#15801)
Bumps [attrs](https://github.com/python-attrs/attrs) from 22.2.0 to 23.1.0.
- [Release notes](https://github.com/python-attrs/attrs/releases)
- [Changelog](https://github.com/python-attrs/attrs/blob/main/CHANGELOG.md)
- [Commits](https://github.com/python-attrs/attrs/compare/22.2.0...23.1.0)

---
updated-dependencies:
- dependency-name: attrs
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:30:03 +01:00
dependabot[bot]
207cbe519d Bump phonenumbers from 8.13.13 to 8.13.14 (#15798)
Bumps [phonenumbers](https://github.com/daviddrysdale/python-phonenumbers) from 8.13.13 to 8.13.14.
- [Commits](https://github.com/daviddrysdale/python-phonenumbers/compare/v8.13.13...v8.13.14)

---
updated-dependencies:
- dependency-name: phonenumbers
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:29:10 +01:00
dependabot[bot]
d3cd9881c0 Bump ruff from 0.0.265 to 0.0.272 (#15799)
Bumps [ruff](https://github.com/charliermarsh/ruff) from 0.0.265 to 0.0.272.
- [Release notes](https://github.com/charliermarsh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/BREAKING_CHANGES.md)
- [Commits](https://github.com/charliermarsh/ruff/compare/v0.0.265...v0.0.272)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:28:57 +01:00
dependabot[bot]
10c509425f Bump serde_json from 1.0.96 to 1.0.97 (#15797)
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.96 to 1.0.97.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.96...v1.0.97)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-19 10:28:43 +01:00
Eric Eastwood
0f02f0b4da Remove experimental MSC2716 implementation to incrementally import history into existing rooms (#15748)
Context for why we're removing the implementation:

 - https://github.com/matrix-org/matrix-spec-proposals/pull/2716#issuecomment-1487441010
 - https://github.com/matrix-org/matrix-spec-proposals/pull/2716#issuecomment-1504262734

Anyone wanting to continue MSC2716 should also address these leftover tasks: https://github.com/matrix-org/synapse/issues/10737

Closes https://github.com/matrix-org/synapse/issues/10737, in that it is no longer necessary to track those things.
2023-06-16 14:12:24 -05:00
Andrew Morgan
2ac6c3bbb5 Don't always lock "user_ips" table when performing non-native upsert (#15788) 2023-06-16 15:25:44 +01:00
Mathieu Velten
0618bf94cd push rules: fix internal conversion from _type to value (#15781)
Also fix wrong rule names for `is_user_mention` and `is_room_mention`.
2023-06-16 14:17:02 +02:00
Mathieu Velten
f63d4a3a65 Regularly try to wake up dests instead of waiting for next PDU/EDU (#15743) 2023-06-16 10:15:12 +00:00
Josh Qou
d939120421 Fix unsafe hotserving behaviour for non-multimedia uploads. (#15680)
* Fix unsafe hotserving behaviour for non-multimedia uploads.

* invert disposition assert

* test_media_storage.py: run lint

* test_base.py: /inline/attachment/s

* Only return attachment for disposition type, update tests

* Update synapse/media/_base.py

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Update changelog.d/15680.bugfix

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* add attribution

* Update changelog.

---------

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2023-06-15 14:23:27 +01:00
Tulir Asokan
1404f68a03 Fix joining rooms through aliases where the alias server isn't a real homeserver (#15776) 2023-06-14 15:42:33 +01:00
Mathieu Velten
87e5df9a6e Merge branch 'release-v1.86' into develop 2023-06-14 14:54:19 +02:00
Mathieu Velten
825c5909de 1.86.0rc2 2023-06-14 12:17:29 +02:00
Mathieu Velten
ef0d3d7bd9 Revert "Allow for the configuration of max request retries and min/max retry delays in the matrix federation client (#12504)"
This reverts commit d84e66144d.
2023-06-14 11:55:57 +02:00
Mathieu Velten
14f9d9b452 Fix empty scope when having version mismatch between workers (#15774) 2023-06-14 11:53:55 +02:00
Jason Little
21fea6b749 Prefill events after invalidate not before when persisting events (#15758)
Fixes #15757
2023-06-14 09:42:18 +01:00
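A hypothetical toy illustration of the ordering bug fixed here (invented names; Synapse's real cache machinery is more involved):

```python
# Prefilling before invalidating throws away the entry that was just
# written; the fix is to invalidate first, then prefill.
cache: dict[str, object] = {}

def persist_event_buggy(event_id: str, event: object) -> None:
    cache[event_id] = event    # prefill...
    cache.pop(event_id, None)  # ...then invalidate: the new entry is lost

def persist_event_fixed(event_id: str, event: object) -> None:
    cache.pop(event_id, None)  # invalidate any stale entry first
    cache[event_id] = event    # then prefill with the fresh value
```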
Eric Eastwood
8ddb2de553 Document looping_call() functionality that will wait for the given function to finish before scheduling another (#15772)
Thanks to @erikjohnston for clarifying: https://github.com/matrix-org/synapse/pull/15743#discussion_r1226544457

We don't have to worry about calls stacking up if the given function takes longer than the scheduled time.
2023-06-13 16:34:54 -05:00
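For context, the Twisted behaviour being documented here is that `LoopingCall` will not schedule the next run until a `Deferred` returned by the wrapped function has fired. A minimal sketch using plain Twisted (not Synapse's wrapper):

```python
# twisted's LoopingCall waits for a returned Deferred to fire before
# scheduling the next iteration, so slow runs never stack up.
from twisted.internet import reactor, task


def slow_work():
    print("iteration started")
    # Pretend the work takes 5s, longer than the 1s interval below.
    return task.deferLater(reactor, 5.0, print, "iteration finished")


loop = task.LoopingCall(slow_work)
loop.start(1.0)  # the next run is only scheduled after the Deferred fires
reactor.run()
```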
Shay
553f2f53e7 Replace EventContext fields prev_group and delta_ids with field state_group_deltas (#15233) 2023-06-13 13:22:06 -07:00
Mathieu Velten
59ec4a0dc1 Fix MSC3983 support: only one OTK per device was returned through federation (#15770) 2023-06-13 19:51:47 +02:00
Eric Eastwood
0757d59ec4 Avoid backfill when we already have messages to return (#15737)
We now only block the client on backfill when we see a large gap in the events (more than 2 events missing in a row, according to `depth`), more than 3 single-event holes, or not enough messages to fill the response. Otherwise, we return the messages directly to the client and backfill in the background for eventual consistency's sake.

Fix https://github.com/matrix-org/synapse/issues/15696
2023-06-13 12:31:08 -05:00
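Condensed into Python with invented names (the real logic lives in Synapse's `maybe_backfill()` machinery and inspects event `depth` values), the heuristic above is roughly:

```python
# Hypothetical condensation of the backfill heuristic; the names are
# invented for illustration and do not match Synapse's internals.
def should_block_on_backfill(
    largest_gap: int, single_event_holes: int, num_events: int, limit: int
) -> bool:
    return (
        largest_gap > 2            # more than 2 events missing in a row
        or single_event_holes > 3  # more than 3 single-event holes
        or num_events < limit      # not enough messages to fill the response
    )
```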
Patrick Cloke
df945e0d7c Fix MSC3983 support: Use the unstable /keys/claim federation endpoint if multiple keys are requested (#15755) 2023-06-13 18:07:55 +02:00
Mathieu Velten
629115836f Fix changelog typo 2023-06-13 14:38:53 +02:00
Mathieu Velten
9966eb10a3 1.86.0rc1 2023-06-13 14:30:51 +02:00
dependabot[bot]
99c850f798 Bump regex from 1.7.3 to 1.8.4 (#15769)
Bumps [regex](https://github.com/rust-lang/regex) from 1.7.3 to 1.8.4.
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/compare/1.7.3...1.8.4)

---
updated-dependencies:
- dependency-name: regex
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-13 10:05:29 +01:00
dependabot[bot]
8afc9a4cda Bump log from 0.4.18 to 0.4.19 (#15761)
Bumps [log](https://github.com/rust-lang/log) from 0.4.18 to 0.4.19.
- [Release notes](https://github.com/rust-lang/log/releases)
- [Changelog](https://github.com/rust-lang/log/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/log/compare/0.4.18...0.4.19)

---
updated-dependencies:
- dependency-name: log
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-13 10:05:13 +01:00
Erik Johnston
ba97b39881 Bump minimum supported Rust version (#15768)
Important crates such as `log` and `regex` have bumped their minimum
supported Rust versions to 1.60.0 as well.
2023-06-12 13:27:11 +00:00
dependabot[bot]
0b104364f9 Bump pyo3-log from 0.8.1 to 0.8.2 (#15759)
Bumps [pyo3-log](https://github.com/vorner/pyo3-log) from 0.8.1 to 0.8.2.
- [Changelog](https://github.com/vorner/pyo3-log/blob/main/CHANGELOG.md)
- [Commits](https://github.com/vorner/pyo3-log/compare/v0.8.1...v0.8.2)

---
updated-dependencies:
- dependency-name: pyo3-log
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:22:21 +01:00
dependabot[bot]
42eb4fea1c Bump serde from 1.0.163 to 1.0.164 (#15760)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.163 to 1.0.164.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.163...v1.0.164)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:21:20 +01:00
dependabot[bot]
9e321e0098 Bump pyopenssl from 23.1.1 to 23.2.0 (#15765)
Bumps [pyopenssl](https://github.com/pyca/pyopenssl) from 23.1.1 to 23.2.0.
- [Changelog](https://github.com/pyca/pyopenssl/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/pyopenssl/compare/23.1.1...23.2.0)

---
updated-dependencies:
- dependency-name: pyopenssl
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:20:55 +01:00
dependabot[bot]
0aa731cb6f Bump pydantic from 1.10.8 to 1.10.9 (#15762)
Bumps [pydantic](https://github.com/pydantic/pydantic) from 1.10.8 to 1.10.9.
- [Release notes](https://github.com/pydantic/pydantic/releases)
- [Changelog](https://github.com/pydantic/pydantic/blob/main/HISTORY.md)
- [Commits](https://github.com/pydantic/pydantic/compare/v1.10.8...v1.10.9)

---
updated-dependencies:
- dependency-name: pydantic
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:19:43 +01:00
dependabot[bot]
aad7e2d0c1 Bump sentry-sdk from 1.25.0 to 1.25.1 (#15764)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.25.0 to 1.25.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.25.0...1.25.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:19:01 +01:00
dependabot[bot]
046e7e494a Bump phonenumbers from 8.13.11 to 8.13.13 (#15763)
Bumps [phonenumbers](https://github.com/daviddrysdale/python-phonenumbers) from 8.13.11 to 8.13.13.
- [Commits](https://github.com/daviddrysdale/python-phonenumbers/compare/v8.13.11...v8.13.13)

---
updated-dependencies:
- dependency-name: phonenumbers
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:17:40 +01:00
dependabot[bot]
4f2bd6be69 Bump types-pyopenssl from 23.1.0.2 to 23.2.0.0 (#15766)
Bumps [types-pyopenssl](https://github.com/python/typeshed) from 23.1.0.2 to 23.2.0.0.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyopenssl
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:17:04 +01:00
Eric Eastwood
fcc3ca37e1 Backfill in the background if we're doing it "just because" (#15710)
Fix https://github.com/matrix-org/synapse/issues/15702
2023-06-09 15:39:49 -05:00
Erik Johnston
373c0c7ff7 Speed up typechecking CI (#15752)
By restoring the Rust cache before installing the project.
2023-06-09 15:00:30 +01:00
Shay
d84e66144d Allow for the configuration of max request retries and min/max retry delays in the matrix federation client (#12504)
Co-authored-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Erik Johnston <erik@matrix.org>
2023-06-09 09:00:46 +02:00
Erik Johnston
f6321e386c Merge branch 'master' into develop 2023-06-08 13:16:46 +01:00
Erik Johnston
b5b7bb7c0f Merge branch 'release-v1.85' 2023-06-08 13:16:39 +01:00
Erik Johnston
c485ed1c5a Clear event caches when we purge history (#15609)
This should help a little with #13476

---------

Co-authored-by: Patrick Cloke <patrickc@matrix.org>
2023-06-08 13:14:40 +01:00
David Robertson
d162aecaac Quick & dirty metric for background update status (#15740)
* Quick & dirty metric for background update status

* Changelog

* Remove debug

Co-authored-by: Mathieu Velten <mathieuv@matrix.org>

* Actually write to _aborted

---------

Co-authored-by: Mathieu Velten <mathieuv@matrix.org>
2023-06-07 17:12:23 +00:00
Eric Eastwood
e536f02f68 Remove superfluous room_memberships join from background update (#15733)
Spawning from https://github.com/matrix-org/synapse/pull/15731
2023-06-07 11:47:01 -05:00
Eric Eastwood
195b6a298d Remove redundant room_memberships join to find participating servers in a room (#15732)
Spawning from https://github.com/matrix-org/synapse/pull/15731
2023-06-07 11:45:16 -05:00
Grant McLean
5c24d7b9eb Check required power levels earlier in createRoom handler. (#15695)
* Check required power levels earlier in createRoom handler.

- If a server was configured to reject the creation of rooms with E2EE
  enabled (by specifying an unattainably high power level for
  "m.room.encryption" in default_power_level_content_override), the 403
  error was not being triggered until after the room was created and
  before the "m.room.power_levels" was sent.  This allowed a user to
  access the partially-configured room and complete the setup of E2EE
  and power levels manually.

- This change causes the power level overrides to be checked earlier and
  the request to be rejected before the user gains access to the room.

- A new `_validate_room_config` method is added to contain checks that
  should be run before a room is created.

- The new test case confirms that a user request is rejected by the new
  validation method.

Signed-off-by: Grant McLean <grant@catalyst.net.nz>

* Add a changelog file.

* Formatting fix for black.

* Remove unneeded line from test.

---------

Signed-off-by: Grant McLean <grant@catalyst.net.nz>
2023-06-07 16:21:25 +01:00
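Schematically, the earlier check described in the entry above amounts to something like the following sketch (hypothetical names; the real `_validate_room_config` operates on the merged power level content):

```python
# Hypothetical sketch: reject createRoom up front if the configured
# override demands a power level the room creator could never reach,
# instead of failing after the room already exists.
def validate_power_level_overrides(
    overrides: dict, creator_level: int = 100
) -> None:
    for event_type, required_level in overrides.get("events", {}).items():
        if required_level > creator_level:
            raise PermissionError(
                f"Room creation requires power level {required_level} "
                f"for {event_type}, above the creator's {creator_level}"
            )
```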
Erik Johnston
8934c11935 Merge branch 'master' into develop 2023-06-07 14:45:19 +01:00
Erik Johnston
140a76c00f Merge branch 'release-v1.85' 2023-06-07 14:45:09 +01:00
Eric Eastwood
9d911b0da6 No need for the extra join since membership is built-in to current_state_events (#15731)
This helps with the upstream `is_host_joined()` and `is_host_invited()` functions.

`membership` was added to `current_state_events` in https://github.com/matrix-org/synapse/pull/5706 and forced in https://github.com/matrix-org/synapse/pull/13745
2023-06-06 22:19:57 -05:00
Eric Eastwood
8bfded81f3 Trace functions which return Awaitable (#15650) 2023-06-06 17:39:22 -05:00
Eric Eastwood
4e6390cb10 Update error to more plainly explain we can only authorize our own events (#15725) 2023-06-06 16:26:12 -05:00
Eric Eastwood
33c3550887 Add context for when/why to use the long_retries option when sending Federation requests (#15721) 2023-06-06 16:25:03 -05:00
Shay
6ee96e9366 Improve performance of user directory search (#15729) 2023-06-06 21:16:03 +01:00
Andrew Morgan
d43c72a6c8 Prevent "twisted trunk" and "latest deps" workflows from running on forks (#15726) 2023-06-06 18:29:54 +00:00
Sean Quah
dfd77f426e Remove some unused server_name fields (#15723)
Signed-off-by: Sean Quah <seanq@matrix.org>
2023-06-06 12:32:29 +01:00
Erik Johnston
1a54953473 Merge remote-tracking branch 'origin/master' into develop 2023-06-06 10:59:20 +01:00
Erik Johnston
ad690037de Fix link in changelog 2023-06-06 10:58:32 +01:00
Erik Johnston
07fd6d82d7 Merge branch 'master' into develop 2023-06-06 10:49:04 +01:00
Patrick Cloke
f880e64b11 Stabilize support for MSC3952: Intentional mentions. (#15520) 2023-06-06 09:11:07 +01:00
Eric Eastwood
f9561b9e37 Some house keeping on maybe_backfill() functions (#15709) 2023-06-05 23:38:52 -05:00
dependabot[bot]
ca8906be2c Bump types-requests from 2.31.0.0 to 2.31.0.1 (#15715)
Bumps [types-requests](https://github.com/python/typeshed) from 2.31.0.0 to 2.31.0.1.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-requests
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:39:34 +01:00
dependabot[bot]
2d97d5b1c3 Bump types-jsonschema from 4.17.0.7 to 4.17.0.8 (#15716)
Bumps [types-jsonschema](https://github.com/python/typeshed) from 4.17.0.7 to 4.17.0.8.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-jsonschema
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:32:25 +01:00
dependabot[bot]
1a7aa81715 Bump sentry-sdk from 1.22.1 to 1.25.0 (#15714)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.22.1 to 1.25.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.22.1...1.25.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:32:16 +01:00
dependabot[bot]
5feabbdf06 Bump pyasn1 from 0.4.8 to 0.5.0 (#15713)
Bumps [pyasn1](https://github.com/pyasn1/pyasn1) from 0.4.8 to 0.5.0.
- [Release notes](https://github.com/pyasn1/pyasn1/releases)
- [Changelog](https://github.com/pyasn1/pyasn1/blob/main/CHANGES.rst)
- [Commits](https://github.com/pyasn1/pyasn1/compare/v0.4.8...v0.5.0)

---
updated-dependencies:
- dependency-name: pyasn1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:32:07 +01:00
dependabot[bot]
36a5bcae2c Bump library/redis from 6-bullseye to 7-bullseye in /docker (#15712)
Bumps library/redis from 6-bullseye to 7-bullseye.

---
updated-dependencies:
- dependency-name: library/redis
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:31:54 +01:00
dependabot[bot]
8ba530c0e3 Bump importlib-metadata from 6.1.0 to 6.6.0 (#15711)
Bumps [importlib-metadata](https://github.com/python/importlib_metadata) from 6.1.0 to 6.6.0.
- [Release notes](https://github.com/python/importlib_metadata/releases)
- [Changelog](https://github.com/python/importlib_metadata/blob/main/CHANGES.rst)
- [Commits](https://github.com/python/importlib_metadata/compare/v6.1.0...v6.6.0)

---
updated-dependencies:
- dependency-name: importlib-metadata
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:31:41 +01:00
Shay
d0c4257f14 N + 3: Read from column full_user_id rather than user_id of tables profiles and user_filters (#15649) 2023-06-02 17:24:13 -07:00
Mathieu Velten
e0f2429d13 Add a catch-all * to the supported relation types when redacting (#15705)
This is an update to the MSC3912 implementation
2023-06-02 13:13:50 +00:00
Eric Eastwood
30a5076da8 Log when events are (unexpectedly) filtered out of responses in tests (#14213)
See https://github.com/matrix-org/synapse/pull/14095#discussion_r990335492

This is useful because when see that a relevant event is an `outlier` or `soft-failed`, then that's a good unexpected indicator explaining why it's not showing up. `filter_events_for_client` is used in `/sync`, `/messages`, `/context` which are all common end-to-end assertion touch points (also notifications, relations).
2023-06-01 21:27:18 -05:00
H. Shay
8af29155ec Merge branch 'release-v1.85' into develop 2023-06-01 10:26:37 -07:00
Erik Johnston
5ed0e8c61f Cache requests for user's devices from federation (#15675)
This should mitigate the issue where lots of different servers request
the same user's devices all at once.
2023-06-01 13:25:20 +00:00
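One plausible shape for this kind of mitigation is request coalescing, where concurrent lookups for the same user share a single in-flight fetch. A hedged asyncio sketch (Synapse itself is Twisted-based and its actual cache differs):

```python
# Hypothetical request-coalescing sketch: concurrent callers asking for
# the same user's devices await one shared fetch instead of issuing
# duplicate federation requests.
import asyncio

_inflight: dict[str, asyncio.Task] = {}

async def _fetch_devices(user_id: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for the real federation request
    return {"user_id": user_id, "devices": []}

async def get_user_devices(user_id: str) -> dict:
    task = _inflight.get(user_id)
    if task is None:
        task = asyncio.create_task(_fetch_devices(user_id))
        _inflight[user_id] = task
        task.add_done_callback(lambda _: _inflight.pop(user_id, None))
    return await task
```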
Hugh Nimmo-Smith
d1693f0362 Implement stable support for MSC3882 to allow an existing device/session to generate a login token for use on a new device/session (#15388)
Implements stable support for MSC3882; this involves updating Synapse's support to
match what the MSC / the spec says.

Continue to support the unstable version to allow clients to transition.
2023-06-01 08:52:51 -04:00
Eric Eastwood
0b5f64ff09 Add Synapse version deploy annotations to Grafana dashboard (#15674)
Fix https://github.com/matrix-org/synapse/issues/15662

This manifests as purple lines on all time series panels that you can
hover over to see which version was deployed.

Also added a new "Deployed Synapse versions over time" panel where the
color block changes with each version, and mixed this color block into
the "Up" time series panel.

To get the Grafana dashboard JSON to copy here: use the **Share** icon at the top -> **Export** -> check the **Export for sharing externally** option -> **View JSON** or **Save to file**
2023-05-31 14:35:49 -05:00
Patrick Cloke
6f18812bb0 Add stubs package for lxml. (#15697)
The stubs have some issues, so this has some generous casts
and ignores in it, but it is better than not having stubs.

Note that, confusingly, `Element` is a function which creates
`_Element` instances (and similarly for `Comment`).
2023-05-31 17:06:57 +00:00
Jason Little
874378c052 Docker fully qualified image names (#15689)
* Fully qualified docker image names for the main Dockerfile and Complement-related files.

* Fully qualified docker image names for Dockerfiles associated with building Debian release artifacts.

This one is harder and is separate from the other commit in case it wasn't correct or was unwanted. I decided to
do the expansion of the docker images in the Dockerfile itself, instead of in the various source places that
select which distribution is built, as that would have been more invasive, with the scripts breaking up the string
for tagging and such. This one is untested.

* Changelog

* Update docker/Dockerfile-workers

* Update docker/complement/Dockerfile

---------

Co-authored-by: reivilibre <olivier@librepush.net>
2023-05-31 15:13:31 +00:00
Gabriel Féron
daf3a67908 Add get_canonical_room_alias to module API (#15450)
Co-authored-by: Boxdot <d@zerovolt.org>
2023-05-31 09:18:37 -04:00
Patrick Cloke
c01343de43 Add stricter mypy options (#15694)
Enable warn_unused_configs, strict_concatenate, disallow_subclassing_any,
and disallow_incomplete_defs.
2023-05-31 07:18:29 -04:00
David Robertson
6fc3deb029 Merge branch 'release-v1.85' into develop 2023-05-30 16:08:33 +01:00
Quentin Gliech
ceb3dd77db Enforce that an admin token also has the basic Matrix API scope 2023-05-30 09:43:06 -04:00
Quentin Gliech
32a2f05004 Make the config tests spawn the homeserver only when needed 2023-05-30 09:43:06 -04:00
Quentin Gliech
f739bde962 Reject tokens with multiple device scopes 2023-05-30 09:43:06 -04:00
Quentin Gliech
98afc57d59 Make OIDC scope constants 2023-05-30 09:43:06 -04:00
Quentin Gliech
14a5be9c4d Handle errors when introspecting tokens
This returns a proper 503 when the introspection endpoint is not working
for some reason, which should avoid logging out clients in those cases.
2023-05-30 09:43:06 -04:00
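The intent, sketched with invented names: a failure to reach the introspection endpoint should surface as a retryable 503 rather than a 401 that would make clients discard their session.

```python
# Hypothetical sketch (invented names): map introspection-endpoint
# failures to an HTTP 503 so clients retry later, instead of a 401
# that would make them treat the session as logged out.
class ServiceUnavailableError(Exception):
    """Rendered to the client as HTTP 503."""

async def introspect_token(post_json, endpoint: str, token: str) -> dict:
    try:
        # post_json stands in for the homeserver's async HTTP client.
        return await post_json(endpoint, {"token": token})
    except ConnectionError as exc:
        raise ServiceUnavailableError(
            "Unable to verify the access token right now"
        ) from exc
```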
Quentin Gliech
ec9379d7e2 Newsfile. 2023-05-30 09:43:06 -04:00
Quentin Gliech
e343125b38 Disable incompatible Admin API endpoints 2023-05-30 09:43:06 -04:00
Quentin Gliech
4d0231b364 Make AS tokens work & allow ASes to /register 2023-05-30 09:43:06 -04:00
Quentin Gliech
c008b44b4f Add an admin token for MAS -> Synapse calls 2023-05-30 09:43:06 -04:00
Hugh Nimmo-Smith
bad1f2cd35 Tests for JWKS endpoint 2023-05-30 09:43:06 -04:00
Hugh Nimmo-Smith
249f4a338d Refactor config to be an experimental feature
Also enforce that you can't combine it with incompatible config options
2023-05-30 09:43:06 -04:00
Hugh Nimmo-Smith
03920bdd4e Test MSC2965 implementation: well-known discovery document 2023-05-30 09:43:06 -04:00
Quentin Gliech
31691d6151 Disable account related endpoints when using OAuth delegation 2023-05-30 09:43:06 -04:00
Hugh Nimmo-Smith
5fe96082d0 Actually enforce guest + return www-authenticate header 2023-05-30 09:43:06 -04:00
Hugh Nimmo-Smith
28a9663bdf Initial tests for OAuth delegation 2023-05-30 09:43:06 -04:00
Hugh Nimmo-Smith
a1374b5c70 MSC2967: Check access token scope for use as user and add guest support 2023-05-30 09:43:06 -04:00
Hugh Nimmo-Smith
d20669971a Use name claim as display name when registering users on the fly.
This makes it so that the `name` claim obtained when introspecting the token
is used as the display name when registering a user on the fly.
2023-05-30 09:43:06 -04:00
Quentin Gliech
f9cd549f64 Record the sub claims as an external_id 2023-05-30 09:43:06 -04:00
Quentin Gliech
7628dbf4e9 Handle the Synapse admin scope 2023-05-30 09:43:06 -04:00
Quentin Gliech
c5cf1b421d Save the scopes in the requester 2023-05-30 09:43:06 -04:00
Quentin Gliech
e82ec6d008 MSC2965: OIDC Provider discovery via well-known document 2023-05-30 09:43:06 -04:00
Quentin Gliech
8f576aa462 Expose the public keys used for client authentication on an endpoint 2023-05-30 09:43:06 -04:00
Quentin Gliech
765244faee Initial MSC3861 support: delegation of auth to OIDC server 2023-05-30 09:43:06 -04:00
Quentin Gliech
e2c8458bba Make the api.auth.Auth a Protocol 2023-05-30 09:43:06 -04:00
Sean Quah
5d8c659373 Remove unused FederationServer.__str__ override (#15690)
Signed-off-by: Sean Quah <seanq@matrix.org>
2023-05-30 14:37:39 +01:00
204 changed files with 6326 additions and 3701 deletions


@@ -22,7 +22,21 @@ concurrency:
cancel-in-progress: true
jobs:
check_repo:
# Prevent this workflow from running on any fork of Synapse other than matrix-org/synapse, as it is
# only useful to the Synapse core team.
# All other workflow steps depend on this one, thus if 'should_run_workflow' is not 'true', the rest
# of the workflow will be skipped as well.
runs-on: ubuntu-latest
outputs:
should_run_workflow: ${{ steps.check_condition.outputs.should_run_workflow }}
steps:
- id: check_condition
run: echo "should_run_workflow=${{ github.repository == 'matrix-org/synapse' }}" >> "$GITHUB_OUTPUT"
mypy:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
@@ -47,6 +61,8 @@ jobs:
run: sed '/warn_unused_ignores = True/d' -i mypy.ini
- run: poetry run mypy
trial:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
strategy:
matrix:
@@ -105,6 +121,8 @@ jobs:
sytest:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
container:
image: matrixdotorg/sytest-synapse:testing
@@ -156,7 +174,8 @@ jobs:
complement:
if: "${{ !failure() && !cancelled() }}"
needs: check_repo
if: "!failure() && !cancelled() && needs.check_repo.outputs.should_run_workflow == 'true'"
runs-on: ubuntu-latest
strategy:
@@ -192,7 +211,7 @@ jobs:
# Open an issue if the build fails, so we know about it.
# Only do this if we're not experimenting with this action in a PR.
open-issue:
if: "failure() && github.event_name != 'push' && github.event_name != 'pull_request'"
if: "failure() && github.event_name != 'push' && github.event_name != 'pull_request' && needs.check_repo.outputs.should_run_workflow == 'true'"
needs:
# TODO: should mypy be included here? It feels more brittle than the others.
- mypy


@@ -34,6 +34,7 @@ jobs:
- id: set-distros
run: |
# if we're running from a tag, get the full list of distros; otherwise just use debian:sid
# NOTE: inside the actual Dockerfile-dhvirtualenv, the image name is expanded into its full image path
dists='["debian:sid"]'
if [[ $GITHUB_REF == refs/tags/* ]]; then
dists=$(scripts-dev/build_debian_packages.py --show-dists-json)


@@ -35,7 +35,7 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
@@ -92,6 +92,10 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
with:
@@ -103,10 +107,6 @@ jobs:
# To make CI green, err towards caution and install the project.
install-project: "true"
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
- uses: Swatinem/rust-cache@v2
# Cribbed from
# https://github.com/AustinScola/mypy-cache-github-action/blob/85ea4f2972abed39b33bd02c36e341b28ca59213/src/restore.ts#L10-L17
- name: Restore/persist mypy's cache
@@ -150,7 +150,7 @@ jobs:
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
@@ -167,7 +167,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
with:
components: clippy
- uses: Swatinem/rust-cache@v2
@@ -268,7 +268,7 @@ jobs:
postgres:${{ matrix.job.postgres-version }}
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
@@ -308,7 +308,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
# There aren't wheels for some of the older deps, so we need to install
@@ -399,8 +399,8 @@ jobs:
env:
SYTEST_BRANCH: ${{ github.head_ref }}
POSTGRES: ${{ matrix.job.postgres && 1}}
MULTI_POSTGRES: ${{ (matrix.job.postgres == 'multi-postgres') && 1}}
ASYNCIO_REACTOR: ${{ (matrix.job.reactor == 'asyncio') && 1 }}
MULTI_POSTGRES: ${{ (matrix.job.postgres == 'multi-postgres') || '' }}
ASYNCIO_REACTOR: ${{ (matrix.job.reactor == 'asyncio') || '' }}
WORKERS: ${{ matrix.job.workers && 1 }}
BLACKLIST: ${{ matrix.job.workers && 'synapse-blacklist-with-workers' }}
TOP: ${{ github.workspace }}
@@ -416,7 +416,7 @@ jobs:
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- name: Run SyTest
@@ -556,7 +556,7 @@ jobs:
path: synapse
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- uses: actions/setup-go@v4
@@ -584,7 +584,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- run: cargo test


@@ -18,7 +18,22 @@ concurrency:
cancel-in-progress: true
jobs:
check_repo:
# Prevent this workflow from running on any fork of Synapse other than matrix-org/synapse, as it is
# only useful to the Synapse core team.
# All other workflow steps depend on this one, thus if 'should_run_workflow' is not 'true', the rest
# of the workflow will be skipped as well.
if: github.repository == 'matrix-org/synapse'
runs-on: ubuntu-latest
outputs:
should_run_workflow: ${{ steps.check_condition.outputs.should_run_workflow }}
steps:
- id: check_condition
run: echo "should_run_workflow=${{ github.repository == 'matrix-org/synapse' }}" >> "$GITHUB_OUTPUT"
mypy:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
@@ -41,6 +56,8 @@ jobs:
- run: poetry run mypy
trial:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
@@ -75,6 +92,8 @@ jobs:
|| true
sytest:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
container:
image: matrixdotorg/sytest-synapse:buster
@@ -119,7 +138,8 @@ jobs:
/logs/**/*.log*
complement:
if: "${{ !failure() && !cancelled() }}"
needs: check_repo
if: "!failure() && !cancelled() && needs.check_repo.outputs.should_run_workflow == 'true'"
runs-on: ubuntu-latest
strategy:
@@ -166,7 +186,7 @@ jobs:
# open an issue if the build fails, so we know about it.
open-issue:
if: failure()
if: failure() && needs.check_repo.outputs.should_run_workflow == 'true'
needs:
- mypy
- trial


@@ -1,3 +1,88 @@
Synapse 1.86.0 (2023-06-20)
===========================
No significant changes since 1.86.0rc2.
Synapse 1.86.0rc2 (2023-06-14)
==============================
Bugfixes
--------
- Fix an error when having workers of different versions running. ([\#15774](https://github.com/matrix-org/synapse/issues/15774))
Synapse 1.86.0rc1 (2023-06-13)
==============================
This version was tagged but never released.
Features
--------
- Stable support for [MSC3882](https://github.com/matrix-org/matrix-spec-proposals/pull/3882) to allow an existing device/session to generate a login token for use on a new device/session. ([\#15388](https://github.com/matrix-org/synapse/issues/15388))
- Support resolving a room's [canonical alias](https://spec.matrix.org/v1.7/client-server-api/#mroomcanonical_alias) via the module API. ([\#15450](https://github.com/matrix-org/synapse/issues/15450))
- Enable support for [MSC3952](https://github.com/matrix-org/matrix-spec-proposals/pull/3952): intentional mentions. ([\#15520](https://github.com/matrix-org/synapse/issues/15520))
- Experimental [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) support: delegate auth to an OIDC provider. ([\#15582](https://github.com/matrix-org/synapse/issues/15582))
- Add Synapse version deploy annotations to the Grafana dashboard, which enables easy correlation between behavior changes witnessed in a graph and a certain Synapse version, helping to nail down regressions. ([\#15674](https://github.com/matrix-org/synapse/issues/15674))
- Add a catch-all `*` to the supported relation types when redacting an event and its related events. This is an update to the [MSC3912](https://github.com/matrix-org/matrix-spec-proposals/pull/3912) implementation. ([\#15705](https://github.com/matrix-org/synapse/issues/15705))
- Speed up `/messages` by backfilling in the background when there are no backward extremities where we are directly paginating. ([\#15710](https://github.com/matrix-org/synapse/issues/15710))
- Expose a metric reporting the database background update status. ([\#15740](https://github.com/matrix-org/synapse/issues/15740))
Bugfixes
--------
- Correctly clear caches when we delete a room. ([\#15609](https://github.com/matrix-org/synapse/issues/15609))
- Check permissions for enabling encryption earlier during room creation to avoid creating broken rooms. ([\#15695](https://github.com/matrix-org/synapse/issues/15695))
Improved Documentation
----------------------
- Simplify query to find participating servers in a room. ([\#15732](https://github.com/matrix-org/synapse/issues/15732))
Internal Changes
----------------
- Log when events are (maybe unexpectedly) filtered out of responses in tests. ([\#14213](https://github.com/matrix-org/synapse/issues/14213))
- Read from column `full_user_id` rather than `user_id` of tables `profiles` and `user_filters`. ([\#15649](https://github.com/matrix-org/synapse/issues/15649))
- Add support for tracing functions which return `Awaitable`s. ([\#15650](https://github.com/matrix-org/synapse/issues/15650))
- Cache requests for user's devices over federation. ([\#15675](https://github.com/matrix-org/synapse/issues/15675))
- Add fully qualified docker image names to Dockerfiles. ([\#15689](https://github.com/matrix-org/synapse/issues/15689))
- Remove some unused code. ([\#15690](https://github.com/matrix-org/synapse/issues/15690))
- Improve type hints. ([\#15694](https://github.com/matrix-org/synapse/issues/15694), [\#15697](https://github.com/matrix-org/synapse/issues/15697))
- Update docstring and traces on `maybe_backfill()` functions. ([\#15709](https://github.com/matrix-org/synapse/issues/15709))
- Add context for when/why to use the `long_retries` option when sending Federation requests. ([\#15721](https://github.com/matrix-org/synapse/issues/15721))
- Removed some unused fields. ([\#15723](https://github.com/matrix-org/synapse/issues/15723))
- Update federation error to more plainly explain we can only authorize our own membership events. ([\#15725](https://github.com/matrix-org/synapse/issues/15725))
- Prevent the `latest_deps` and `twisted_trunk` daily GitHub Actions workflows from running on forks of the codebase. ([\#15726](https://github.com/matrix-org/synapse/issues/15726))
- Improve performance of user directory search. ([\#15729](https://github.com/matrix-org/synapse/issues/15729))
- Remove redundant table join with `room_memberships` when doing a `is_host_joined()`/`is_host_invited()` call (`membership` is already part of the `current_state_events`). ([\#15731](https://github.com/matrix-org/synapse/issues/15731))
- Remove superfluous `room_memberships` join from background update. ([\#15733](https://github.com/matrix-org/synapse/issues/15733))
- Speed up typechecking CI. ([\#15752](https://github.com/matrix-org/synapse/issues/15752))
- Bump minimum supported Rust version to 1.60.0. ([\#15768](https://github.com/matrix-org/synapse/issues/15768))
### Updates to locked dependencies
* Bump importlib-metadata from 6.1.0 to 6.6.0. ([\#15711](https://github.com/matrix-org/synapse/issues/15711))
* Bump library/redis from 6-bullseye to 7-bullseye in /docker. ([\#15712](https://github.com/matrix-org/synapse/issues/15712))
* Bump log from 0.4.18 to 0.4.19. ([\#15761](https://github.com/matrix-org/synapse/issues/15761))
* Bump phonenumbers from 8.13.11 to 8.13.13. ([\#15763](https://github.com/matrix-org/synapse/issues/15763))
* Bump pyasn1 from 0.4.8 to 0.5.0. ([\#15713](https://github.com/matrix-org/synapse/issues/15713))
* Bump pydantic from 1.10.8 to 1.10.9. ([\#15762](https://github.com/matrix-org/synapse/issues/15762))
* Bump pyo3-log from 0.8.1 to 0.8.2. ([\#15759](https://github.com/matrix-org/synapse/issues/15759))
* Bump pyopenssl from 23.1.1 to 23.2.0. ([\#15765](https://github.com/matrix-org/synapse/issues/15765))
* Bump regex from 1.7.3 to 1.8.4. ([\#15769](https://github.com/matrix-org/synapse/issues/15769))
* Bump sentry-sdk from 1.22.1 to 1.25.0. ([\#15714](https://github.com/matrix-org/synapse/issues/15714))
* Bump sentry-sdk from 1.25.0 to 1.25.1. ([\#15764](https://github.com/matrix-org/synapse/issues/15764))
* Bump serde from 1.0.163 to 1.0.164. ([\#15760](https://github.com/matrix-org/synapse/issues/15760))
* Bump types-jsonschema from 4.17.0.7 to 4.17.0.8. ([\#15716](https://github.com/matrix-org/synapse/issues/15716))
* Bump types-pyopenssl from 23.1.0.2 to 23.2.0.0. ([\#15766](https://github.com/matrix-org/synapse/issues/15766))
* Bump types-requests from 2.31.0.0 to 2.31.0.1. ([\#15715](https://github.com/matrix-org/synapse/issues/15715))
Synapse 1.85.2 (2023-06-08)
===========================
@@ -28,7 +113,7 @@ No significant changes since 1.85.0rc2.
The following issues are fixed in 1.85.0 (and RCs).
- [GHSA-26c5-ppr8-f33p](https://github.com/matrix-org/synapse/security/advisories/GHSA-26c5-ppr8-f33p) / [CVE-2023-32682](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-32683) — Low Severity
- [GHSA-26c5-ppr8-f33p](https://github.com/matrix-org/synapse/security/advisories/GHSA-26c5-ppr8-f33p) / [CVE-2023-32682](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-32682) — Low Severity
It may be possible for a deactivated user to login when using uncommon configurations.

Cargo.lock

@@ -4,9 +4,9 @@ version = 3
[[package]]
name = "aho-corasick"
version = "0.7.19"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4f55bd91a0978cbfd91c457a164bab8b4001c833b7f323132c0a4e1922dd44e"
checksum = "43f6cb1bf222025340178f382c426f13757b2960e89779dfcb319c32542a5a41"
dependencies = [
"memchr",
]
@@ -132,9 +132,9 @@ dependencies = [
[[package]]
name = "log"
version = "0.4.18"
version = "0.4.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "518ef76f2f87365916b142844c16d8fefd85039bc5699050210a7778ee1cd1de"
checksum = "b06a4cde4c0f271a446782e3eff8de789548ce57dbc8eca9292c27f4a42004b4"
[[package]]
name = "memchr"
@@ -229,9 +229,9 @@ dependencies = [
[[package]]
name = "pyo3-log"
version = "0.8.1"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f9c8b57fe71fb5dcf38970ebedc2b1531cf1c14b1b9b4c560a182a57e115575c"
checksum = "c94ff6535a6bae58d7d0b85e60d4c53f7f84d0d0aa35d6a28c3f3e70bfe51444"
dependencies = [
"arc-swap",
"log",
@@ -291,9 +291,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.7.3"
version = "1.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b1f693b24f6ac912f4893ef08244d70b6067480d2f1a46e950c9691e6749d1d"
checksum = "d0ab3ca65655bb1e41f2a8c8cd662eb4fb035e67c3f78da1d61dffe89d07300f"
dependencies = [
"aho-corasick",
"memchr",
@@ -302,9 +302,9 @@ dependencies = [
[[package]]
name = "regex-syntax"
version = "0.6.29"
version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1"
checksum = "436b050e76ed2903236f032a59761c1eb99e1b0aead2c257922771dab1fc8c78"
[[package]]
name = "ryu"
@@ -320,18 +320,18 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "serde"
version = "1.0.163"
version = "1.0.164"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2113ab51b87a539ae008b5c6c02dc020ffa39afd2d83cffcb3f4eb2722cebec2"
checksum = "9e8c8cf938e98f769bc164923b06dce91cea1751522f46f8466461af04c9027d"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.163"
version = "1.0.164"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c805777e3930c8883389c602315a24224bcc738b63905ef87cd1420353ea93e"
checksum = "d9735b638ccc51c28bf6914d90a2e9725b377144fc612c49a611fddd1b631d68"
dependencies = [
"proc-macro2",
"quote",
@@ -340,9 +340,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.96"
version = "1.0.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "057d394a50403bcac12672b2b18fb387ab6d289d957dab67dd201875391e52f1"
checksum = "46266871c240a00b8f503b877622fe33430b3c7d963bdc0f2adc511e54a1eae3"
dependencies = [
"itoa",
"ryu",

changelog.d/15233.misc

@@ -0,0 +1 @@
Replace `EventContext` fields `prev_group` and `delta_ids` with field `state_group_deltas`.

changelog.d/15680.bugfix

@@ -0,0 +1 @@
Fix a long-standing bug where media files were served in an unsafe manner. Contributed by @joshqou.


@@ -0,0 +1 @@
Improve `/messages` response time by avoiding backfill when we already have messages to return.

changelog.d/15743.misc

@@ -0,0 +1 @@
Regularly try to send transactions to other servers after they failed instead of waiting for a new event to be available before trying.


@@ -0,0 +1 @@
Remove experimental [MSC2716](https://github.com/matrix-org/matrix-spec-proposals/pull/2716) implementation to incrementally import history into existing rooms.

changelog.d/15755.misc

@@ -0,0 +1 @@
Fix requesting multiple keys at once over federation, related to [MSC3983](https://github.com/matrix-org/matrix-spec-proposals/pull/3983).

changelog.d/15758.bugfix

@@ -0,0 +1 @@
Avoid invalidating a cache that was just prefilled.

changelog.d/15770.bugfix

@@ -0,0 +1 @@
Fix requesting multiple keys at once over federation, related to [MSC3983](https://github.com/matrix-org/matrix-spec-proposals/pull/3983).

changelog.d/15772.doc

@@ -0,0 +1 @@
Document `looping_call()` functionality that will wait for the given function to finish before scheduling another.

changelog.d/15776.bugfix

@@ -0,0 +1 @@
Fix joining rooms through aliases where the alias server isn't a real homeserver. Contributed by @tulir @ Beeper.

changelog.d/15781.bugfix

@@ -0,0 +1 @@
Fix a bug in push rules handling leading to an invalid (per spec) `is_user_mention` rule sent to clients. Also fix wrong rule names for `is_user_mention` and `is_room_mention`.

changelog.d/15783.misc

@@ -0,0 +1 @@
Allow for the configuration of max request retries and min/max retry delays in the matrix federation client.

changelog.d/15788.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in 1.57.0 where the wrong table would be locked on updating database rows when using SQLite as the database backend.

changelog.d/15804.bugfix

@@ -0,0 +1 @@
Fix Sytest environmental variable evaluation in CI.

changelog.d/15805.doc

@@ -0,0 +1 @@
Fix a typo in the [Admin API](https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/index.html).

changelog.d/15806.misc

@@ -0,0 +1 @@
Switch from `matrix://` to `matrix-federation://` scheme for internal Synapse routing of outbound federation traffic.

changelog.d/15812.doc

@@ -0,0 +1 @@
Fix typo in MSC number in faster remote room join architecture doc.

changelog.d/15814.misc

@@ -0,0 +1 @@
Fix harmless exceptions being printed when running the port DB script.

changelog.d/15815.bugfix

@@ -0,0 +1 @@
Fix forgotten rooms missing from initial sync after rejoining them. Contributed by Nico from Famedly.

changelog.d/15817.bugfix

@@ -0,0 +1 @@
Fix sqlite `user_filters` upgrade introduced in v1.86.0.


@@ -0,0 +1 @@
Add spam checker module API for logins.

changelog.d/15849.misc

@@ -0,0 +1 @@
Add check constraint to current_state_delta_stream (#15849).

File diff suppressed because it is too large.


@@ -29,7 +29,7 @@
"level": "error"
},
{
"line": "my-matrix-server-federation-sender-1 | 2023-01-25 20:56:20,995 - synapse.http.matrixfederationclient - 709 - WARNING - federation_transaction_transmission_loop-3 - {PUT-O-3} [example.com] Request failed: PUT matrix://example.com/_matrix/federation/v1/send/1674680155797: HttpResponseException('403: Forbidden')",
"line": "my-matrix-server-federation-sender-1 | 2023-01-25 20:56:20,995 - synapse.http.matrixfederationclient - 709 - WARNING - federation_transaction_transmission_loop-3 - {PUT-O-3} [example.com] Request failed: PUT matrix-federation://example.com/_matrix/federation/v1/send/1674680155797: HttpResponseException('403: Forbidden')",
"level": "warning"
},
{

debian/changelog

@@ -1,3 +1,21 @@
matrix-synapse-py3 (1.86.0) stable; urgency=medium
* New Synapse release 1.86.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 Jun 2023 17:22:46 +0200
matrix-synapse-py3 (1.86.0~rc2) stable; urgency=medium
* New Synapse release 1.86.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 14 Jun 2023 12:16:27 +0200
matrix-synapse-py3 (1.86.0~rc1) stable; urgency=medium
* New Synapse release 1.86.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 13 Jun 2023 14:30:45 +0200
matrix-synapse-py3 (1.85.2) stable; urgency=medium
* New Synapse release 1.85.2.


@@ -27,7 +27,7 @@ ARG PYTHON_VERSION=3.11
###
# We hardcode the use of Debian bullseye here because this could change upstream
# and other Dockerfiles used for testing are expecting bullseye.
FROM docker.io/python:${PYTHON_VERSION}-slim-bullseye as requirements
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bullseye as requirements
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
@@ -87,7 +87,7 @@ RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
###
### Stage 1: builder
###
FROM docker.io/python:${PYTHON_VERSION}-slim-bullseye as builder
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bullseye as builder
# install the OS build deps
RUN \
@@ -158,7 +158,7 @@ RUN --mount=type=cache,target=/synapse/target,sharing=locked \
### Stage 2: runtime
###
FROM docker.io/python:${PYTHON_VERSION}-slim-bullseye
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bullseye
LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/synapse/blob/master/docker/README.md'


@@ -24,7 +24,7 @@ ARG distro=""
# https://launchpad.net/~jyrki-pulliainen/+archive/ubuntu/dh-virtualenv, but
# it's not obviously easier to use that than to build our own.)
FROM ${distro} as builder
FROM docker.io/library/${distro} as builder
RUN apt-get update -qq -o Acquire::Languages=none
RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
@@ -55,7 +55,7 @@ RUN cd /dh-virtualenv && DEB_BUILD_OPTIONS=nodoc dpkg-buildpackage -us -uc -b
###
### Stage 1
###
FROM ${distro}
FROM docker.io/library/${distro}
# Get the distro we want to pull from as a dynamic build variable
# (We need to define it in each build stage)


@@ -7,7 +7,7 @@ ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
# target image. For repeated rebuilds, this is much faster than apt installing
# each time.
FROM debian:bullseye-slim AS deps_base
FROM docker.io/library/debian:bullseye-slim AS deps_base
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
@@ -21,7 +21,7 @@ FROM debian:bullseye-slim AS deps_base
# which makes it much easier to copy (but we need to make sure we use an image
# based on the same debian version as the synapse image, to make sure we get
# the expected version of libc.
FROM redis:6-bullseye AS redis_base
FROM docker.io/library/redis:7-bullseye AS redis_base
# now build the final image, based on the regular Synapse docker image
FROM $FROM


@@ -73,7 +73,8 @@ The following environment variables are supported in `generate` mode:
will log sensitive information such as access tokens.
This should not be needed unless you are a developer attempting to debug something
particularly tricky.
* `SYNAPSE_LOG_TESTING`: if set, Synapse will log additional information useful
for testing.
## Postgres


@@ -7,6 +7,7 @@
# https://github.com/matrix-org/synapse/blob/develop/docker/README-testing.md#testing-with-postgresql-and-single-or-multi-process-synapse
ARG SYNAPSE_VERSION=latest
# This is an intermediate image, to be built locally (not pulled from a registry).
ARG FROM=matrixdotorg/synapse-workers:$SYNAPSE_VERSION
FROM $FROM
@@ -19,8 +20,8 @@ FROM $FROM
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres:13-bullseye /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres:13-bullseye /usr/share/postgresql /usr/share/postgresql
COPY --from=docker.io/library/postgres:13-bullseye /usr/lib/postgresql /usr/lib/postgresql
COPY --from=docker.io/library/postgres:13-bullseye /usr/share/postgresql /usr/share/postgresql
RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data


@@ -92,8 +92,6 @@ allow_device_name_lookup_over_federation: true
## Experimental Features ##
experimental_features:
# Enable history backfilling support
msc2716_enabled: true
# client-side support for partial state in /send_join responses
faster_joins: true
# Enable support for polls


@@ -49,17 +49,35 @@ handlers:
class: logging.StreamHandler
formatter: precise
{% if not SYNAPSE_LOG_SENSITIVE %}
{#
If SYNAPSE_LOG_SENSITIVE is unset, then override synapse.storage.SQL to INFO
so that DEBUG entries (containing sensitive information) are not emitted.
#}
loggers:
# This is just here so we can leave `loggers` in the config regardless of whether
# we configure other loggers below (avoid empty yaml dict error).
_placeholder:
level: "INFO"
{% if not SYNAPSE_LOG_SENSITIVE %}
{#
If SYNAPSE_LOG_SENSITIVE is unset, then override synapse.storage.SQL to INFO
so that DEBUG entries (containing sensitive information) are not emitted.
#}
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
level: INFO
{% endif %}
{% endif %}
{% if SYNAPSE_LOG_TESTING %}
{#
If Synapse is under test, log a few more useful things for a developer
attempting to debug something particularly tricky.
With `synapse.visibility.filtered_event_debug`, it logs when events are (maybe
unexpectedly) filtered out of responses in tests. It's just nice to be able to
look at the CI log and figure out why an event isn't being returned.
#}
synapse.visibility.filtered_event_debug:
level: DEBUG
{% endif %}
root:
level: {{ SYNAPSE_LOG_LEVEL or "INFO" }}


@@ -40,6 +40,8 @@
# log level. INFO is the default.
# * SYNAPSE_LOG_SENSITIVE: If unset, SQL and SQL values won't be logged,
# regardless of the SYNAPSE_LOG_LEVEL setting.
# * SYNAPSE_LOG_TESTING: if set, Synapse will log additional information useful
# for testing.
#
# NOTE: According to Complement's ENTRYPOINT expectations for a homeserver image (as defined
# in the project's README), this script may be run multiple times, and functionality should
@@ -242,7 +244,6 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"^/_matrix/client/(api/v1|r0|v3|unstable)/join/",
"^/_matrix/client/(api/v1|r0|v3|unstable)/knock/",
"^/_matrix/client/(api/v1|r0|v3|unstable)/profile/",
"^/_matrix/client/(v1|unstable/org.matrix.msc2716)/rooms/.*/batch_send",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
@@ -947,6 +948,7 @@ def generate_worker_log_config(
extra_log_template_args["SYNAPSE_LOG_SENSITIVE"] = environ.get(
"SYNAPSE_LOG_SENSITIVE"
)
extra_log_template_args["SYNAPSE_LOG_TESTING"] = environ.get("SYNAPSE_LOG_TESTING")
# Render and write the file
log_config_filepath = f"/conf/workers/{worker_name}.log.config"


@@ -10,7 +10,7 @@ ARG PYTHON_VERSION=3.9
###
# We hardcode the use of Debian bullseye here because this could change upstream
# and other Dockerfiles used for testing are expecting bullseye.
FROM docker.io/python:${PYTHON_VERSION}-slim-bullseye
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bullseye
# Install Rust and other dependencies (stolen from normal Dockerfile)
# install the OS build deps


@@ -419,7 +419,7 @@ The following query parameters are available:
* `from` (required) - The token to start returning events from. This token can be obtained from a prev_batch
or next_batch token returned by the /sync endpoint, or from an end token returned by a previous request to this endpoint.
* `to` - The token to spot returning events at.
* `to` - The token to stop returning events at.
* `limit` - The maximum number of events to return. Defaults to `10`.
* `filter` - A JSON RoomEventFilter to filter returned events with.
* `dir` - The direction to return events from. Either `f` for forwards or `b` for backwards. Setting
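For instance, a minimal sketch of calling this endpoint with Python's `requests`; the `/_synapse/admin/v1/rooms/<room_id>/messages` path and all placeholder values are assumptions to be adjusted to your deployment:

```python
import requests

# Placeholders: point these at your homeserver and use a server admin's access token.
BASE_URL = "https://synapse.example.com"
ADMIN_TOKEN = "<admin access token>"
ROOM_ID = "!somewhere:example.com"

resp = requests.get(
    f"{BASE_URL}/_synapse/admin/v1/rooms/{ROOM_ID}/messages",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    params={
        "from": "<token from /sync or a previous call>",  # required
        "limit": 10,  # maximum number of events to return
        "dir": "b",   # paginate backwards
    },
)
resp.raise_for_status()
for event in resp.json().get("chunk", []):
    print(event["event_id"], event["type"])
```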


@@ -6,7 +6,7 @@ This is a work-in-progress set of notes with two goals:
See also [MSC3902](https://github.com/matrix-org/matrix-spec-proposals/pull/3902).
The key idea is described by [MSC706](https://github.com/matrix-org/matrix-spec-proposals/pull/3902). This allows servers to
The key idea is described by [MSC3706](https://github.com/matrix-org/matrix-spec-proposals/pull/3706). This allows servers to
request a lightweight response to the federation `/send_join` endpoint.
This is called a **faster join**, also known as a **partial join**. In these
notes we'll usually use the word "partial" as it matches the database schema.


@@ -348,6 +348,42 @@ callback returns `False`, Synapse falls through to the next one. The value of th
callback that does not return `False` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
### `check_login_for_spam`
_First introduced in Synapse v1.87.0_
```python
async def check_login_for_spam(
user_id: str,
device_id: Optional[str],
initial_display_name: Optional[str],
request_info: Collection[Tuple[Optional[str], str]],
auth_provider_id: Optional[str] = None,
) -> Union["synapse.module_api.NOT_SPAM", "synapse.module_api.errors.Codes"]
```
Called when a user logs in.
The arguments passed to this callback are:
* `user_id`: The user ID the user is logging in with.
* `device_id`: The device ID the user is re-logging into.
* `initial_display_name`: The device display name, if any.
* `request_info`: A collection of tuples, where the first item of each tuple is a
  user agent and the second is an IP address. These user agents and IP addresses
  are the ones that were used during the login process.
* `auth_provider_id`: The identifier of the SSO authentication provider, if any.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `synapse.module_api.NOT_SPAM`, Synapse falls through to the next one.
The value of the first callback that does not return `synapse.module_api.NOT_SPAM` will
be used. If this happens, Synapse will not call any of the subsequent implementations of
this callback.
*Note:* This will not be called when a user registers.
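Ahead of the full example below, here is a minimal sketch of a module using this callback to reject logins from a denylist of IP addresses. The `BANNED_IPS` set and the class name are purely illustrative:

```python
from typing import Collection, Optional, Tuple, Union

import synapse.module_api

# Hypothetical denylist, for illustration only.
BANNED_IPS = {"192.0.2.1"}


class IPDenyLoginChecker:
    def __init__(self, config: dict, api: synapse.module_api.ModuleApi):
        # Register the callback alongside any other spam checker callbacks.
        api.register_spam_checker_callbacks(
            check_login_for_spam=self.check_login_for_spam,
        )

    async def check_login_for_spam(
        self,
        user_id: str,
        device_id: Optional[str],
        initial_display_name: Optional[str],
        request_info: Collection[Tuple[Optional[str], str]],
        auth_provider_id: Optional[str] = None,
    ) -> Union["synapse.module_api.NOT_SPAM", "synapse.module_api.errors.Codes"]:
        # Reject the login if any IP address seen during the login process is denylisted.
        for _user_agent, ip in request_info:
            if ip in BANNED_IPS:
                return synapse.module_api.errors.Codes.FORBIDDEN
        return synapse.module_api.NOT_SPAM
```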
## Example
The example below is a module that implements the spam checker callback


@@ -87,6 +87,14 @@ process, for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.86.0
## Minimum supported Rust version
The minimum supported Rust version has been increased from v1.58.1 to v1.60.0.
Users building from source will need to ensure their `rustc` version is up to
date.
# Upgrading to v1.85.0


@@ -27,9 +27,8 @@ What servers are currently participating in this room?
Run this sql query on your db:
```sql
SELECT DISTINCT split_part(state_key, ':', 2)
FROM current_state_events AS c
INNER JOIN room_memberships AS m USING (room_id, event_id)
WHERE room_id = '!cURbafjkfsMDVwdRDQ:matrix.org' AND membership = 'join';
FROM current_state_events
WHERE room_id = '!cURbafjkfsMDVwdRDQ:matrix.org' AND membership = 'join';
```
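If you would rather run this from a script, here is a minimal `psycopg2` sketch, assuming a PostgreSQL-backed Synapse (the connection string is a placeholder):

```python
import psycopg2

# Placeholder connection string; adjust to your database.
conn = psycopg2.connect("dbname=synapse user=synapse_user host=localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT DISTINCT split_part(state_key, ':', 2)
        FROM current_state_events
        WHERE room_id = %s AND membership = 'join'
        """,
        ("!cURbafjkfsMDVwdRDQ:matrix.org",),
    )
    for (server,) in cur.fetchall():
        print(server)
```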
What users are registered on my server?


@@ -1196,6 +1196,32 @@ Example configuration:
allow_device_name_lookup_over_federation: true
```
---
### `federation`
The federation section defines some sub-options related to federation.
The following options control timeout and retry logic for a single request,
independently of the others.
The short retry algorithm is used when something or someone is waiting for the
request to complete, while the long retry algorithm is used for requests that
happen in the background, like sending a federation transaction.
* `client_timeout`: timeout for federation requests. Defaults to 60s.
* `max_short_retry_delay`: maximum delay to be used for the short retry algorithm. Defaults to 2s.
* `max_long_retry_delay`: maximum delay to be used for the long retry algorithm. Defaults to 60s.
* `max_short_retries`: maximum number of retries for the short retry algorithm. Defaults to 3 attempts.
* `max_long_retries`: maximum number of retries for the long retry algorithm. Defaults to 10 attempts.
Example configuration:
```yaml
federation:
client_timeout: 180s
max_short_retry_delay: 7s
max_long_retry_delay: 100s
max_short_retries: 5
max_long_retries: 20
```
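For intuition, here is a rough sketch of how a capped exponential backoff of this shape behaves. This is illustrative only, not Synapse's actual retry code, and the 0.5s starting delay is an assumption:

```python
from typing import List

# Illustrative capped exponential backoff in the spirit of the
# max_*_retry_delay / max_*_retries options above (not Synapse's code).
def retry_delays(max_retries: int, max_delay: float, base: float = 0.5) -> List[float]:
    delays: List[float] = []
    delay = base
    for _ in range(max_retries):
        delays.append(min(delay, max_delay))  # each retry waits longer, up to the cap
        delay *= 2
    return delays

print(retry_delays(max_retries=3, max_delay=2.0))    # short retry: [0.5, 1.0, 2.0]
print(retry_delays(max_retries=10, max_delay=60.0))  # long retry: grows, then caps at 60.0
```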
---
## Caching
Options related to caching.
@@ -2570,7 +2596,50 @@ Example configuration:
```yaml
nonrefreshable_access_token_lifetime: 24h
```
---
### `ui_auth`
The amount of time to allow a user-interactive authentication session to be active.
This defaults to 0, meaning the user is queried for their credentials
before every action, but this can be overridden to allow a single
validation to be re-used. This weakens the protections afforded by
the user-interactive authentication process, by allowing for multiple
(and potentially different) operations to use the same validation session.
This is ignored for potentially "dangerous" operations (including
deactivating an account, modifying an account password, adding a 3PID,
and minting additional login tokens).
Use the `session_timeout` sub-option here to change the time allowed for credential validation.
Example configuration:
```yaml
ui_auth:
session_timeout: "15s"
```
---
### `login_via_existing_session`
Matrix supports the ability of an existing session to mint a login token for
another client.
Synapse disables this by default as it has security ramifications -- a malicious
client could use the mechanism to spawn more than one session.
The duration of time the generated token is valid for can be configured with the
`token_timeout` sub-option.
User-interactive authentication is required when this is enabled unless the
`require_ui_auth` sub-option is set to `False`.
Example configuration:
```yaml
login_via_existing_session:
enabled: true
require_ui_auth: false
token_timeout: "5m"
```
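Once a token has been minted, the second client redeems it through the standard `m.login.token` login flow; a minimal sketch with `requests` (the homeserver URL and token are placeholders):

```python
import requests

resp = requests.post(
    "https://synapse.example.com/_matrix/client/v3/login",
    json={"type": "m.login.token", "token": "<short-lived login token>"},
)
resp.raise_for_status()
creds = resp.json()
print(creds["user_id"], creds["access_token"])  # credentials for the new session
```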
---
## Metrics
Config options related to metrics.
@@ -3415,28 +3484,6 @@ password_config:
require_uppercase: true
```
---
### `ui_auth`
The amount of time to allow a user-interactive authentication session to be active.
This defaults to 0, meaning the user is queried for their credentials
before every action, but this can be overridden to allow a single
validation to be re-used. This weakens the protections afforded by
the user-interactive authentication process, by allowing for multiple
(and potentially different) operations to use the same validation session.
This is ignored for potentially "dangerous" operations (including
deactivating an account, modifying an account password, and
adding a 3PID).
Use the `session_timeout` sub-option here to change the time allowed for credential validation.
Example configuration:
```yaml
ui_auth:
session_timeout: "15s"
```
---
## Push
Configuration settings related to push notifications


@@ -232,7 +232,6 @@ information.
^/_matrix/client/v1/rooms/.*/hierarchy$
^/_matrix/client/(v1|unstable)/rooms/.*/relations/
^/_matrix/client/v1/rooms/.*/threads$
^/_matrix/client/unstable/org.matrix.msc2716/rooms/.*/batch_send$
^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
^/_matrix/client/(r0|v3|unstable)/account/3pid$
^/_matrix/client/(r0|v3|unstable)/account/whoami$


@@ -2,17 +2,29 @@
namespace_packages = True
plugins = pydantic.mypy, mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
follow_imports = normal
check_untyped_defs = True
show_error_codes = True
show_traceback = True
mypy_path = stubs
warn_unreachable = True
warn_unused_ignores = True
local_partial_types = True
no_implicit_optional = True
# Strict checks, see mypy --help
warn_unused_configs = True
# disallow_any_generics = True
disallow_subclassing_any = True
# disallow_untyped_calls = True
disallow_untyped_defs = True
strict_equality = True
disallow_incomplete_defs = True
# check_untyped_defs = True
# disallow_untyped_decorators = True
warn_redundant_casts = True
warn_unused_ignores = True
# warn_return_any = True
# no_implicit_reexport = True
strict_equality = True
strict_concatenate = True
# Run mypy type checking with the minimum supported Python version to catch new usage
# that isn't backwards-compatible (types, overloads, etc).
python_version = 3.8
@@ -31,6 +43,7 @@ warn_unused_ignores = False
[mypy-synapse.util.caches.treecache]
disallow_untyped_defs = False
disallow_incomplete_defs = False
;; Dependencies without annotations
;; Before ignoring a module, check to see if type stubs are available.
@@ -40,18 +53,18 @@ disallow_untyped_defs = False
;; which we can pull in as a dev dependency by adding to `pyproject.toml`'s
;; `[tool.poetry.dev-dependencies]` list.
# https://github.com/lepture/authlib/issues/460
[mypy-authlib.*]
ignore_missing_imports = True
[mypy-ijson.*]
ignore_missing_imports = True
[mypy-lxml]
ignore_missing_imports = True
# https://github.com/msgpack/msgpack-python/issues/448
[mypy-msgpack]
ignore_missing_imports = True
# https://github.com/wolever/parameterized/issues/143
[mypy-parameterized.*]
ignore_missing_imports = True
@@ -73,6 +86,7 @@ ignore_missing_imports = True
[mypy-srvlookup.*]
ignore_missing_imports = True
# https://github.com/twisted/treq/pull/366
[mypy-treq.*]
ignore_missing_imports = True

poetry.lock

File diff suppressed because it is too large.


@@ -89,7 +89,7 @@ manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.85.2"
version = "1.86.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@@ -311,9 +311,10 @@ all = [
# We pin black so that our tests don't start failing on new releases.
isort = ">=5.10.1"
black = ">=22.3.0"
ruff = "0.0.265"
ruff = "0.0.275"
# Typechecking
lxml-stubs = ">=0.4.0"
mypy = "*"
mypy-zope = "*"
types-bleach = ">=4.1.0"


@@ -7,7 +7,7 @@ name = "synapse"
version = "0.1.0"
edition = "2021"
rust-version = "1.58.1"
rust-version = "1.60.0"
[lib]
name = "synapse"


@@ -13,8 +13,6 @@
// limitations under the License.
#![feature(test)]
use std::collections::BTreeSet;
use synapse::push::{
evaluator::PushRuleEvaluator, Condition, EventMatchCondition, FilteredPushRules, JsonValue,
PushRules, SimpleJsonValue,
@@ -197,7 +195,6 @@ fn bench_eval_message(b: &mut Bencher) {
false,
false,
false,
false,
);
b.iter(|| eval.run(&rules, Some("bob"), Some("person")));


@@ -142,11 +142,11 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(".org.matrix.msc3952.is_user_mention"),
rule_id: Cow::Borrowed("global/override/.m.rule.is_user_mention"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(
KnownCondition::ExactEventPropertyContainsType(EventPropertyIsTypeCondition {
key: Cow::Borrowed("content.org\\.matrix\\.msc3952\\.mentions.user_ids"),
key: Cow::Borrowed("content.m\\.mentions.user_ids"),
value_type: Cow::Borrowed(&EventMatchPatternType::UserId),
}),
)]),
@@ -163,11 +163,11 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(".org.matrix.msc3952.is_room_mention"),
rule_id: Cow::Borrowed("global/override/.m.rule.is_room_mention"),
priority_class: 5,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventPropertyIs(EventPropertyIsCondition {
key: Cow::Borrowed("content.org\\.matrix\\.msc3952\\.mentions.room"),
key: Cow::Borrowed("content.m\\.mentions.room"),
value: Cow::Borrowed(&SimpleJsonValue::Bool(true)),
})),
Condition::Known(KnownCondition::SenderNotificationPermission {


@@ -70,7 +70,9 @@ pub struct PushRuleEvaluator {
/// The "content.body", if any.
body: String,
/// True if the event has a mentions property and MSC3952 support is enabled.
/// True if the event has an m.mentions property. (Note that this is a separate
/// flag instead of checking flattened_keys since the m.mentions property
/// might be an empty map and not appear in flattened_keys.
has_mentions: bool,
/// The number of users in the room.
@@ -155,9 +157,7 @@ impl PushRuleEvaluator {
let rule_id = &push_rule.rule_id().to_string();
// For backwards-compatibility the legacy mention rules are disabled
// if the event contains the 'm.mentions' property (and if the
// experimental feature is enabled, both of these are represented
// by the has_mentions flag).
// if the event contains the 'm.mentions' property.
if self.has_mentions
&& (rule_id == "global/override/.m.rule.contains_display_name"
|| rule_id == "global/content/.m.rule.contains_user_name"
@@ -562,7 +562,7 @@ fn test_requires_room_version_supports_condition() {
};
let rules = PushRules::new(vec![custom_rule]);
result = evaluator.run(
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false, false),
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false),
None,
None,
);
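For context, the stable `m.mentions` property that the `has_mentions` flag tracks lives in event content and looks roughly like this (a minimal illustrative payload):

```python
# Minimal illustrative event content using the stable m.mentions property.
# Note that {"m.mentions": {}} is also valid, which is why the evaluator keeps
# a dedicated has_mentions flag rather than inspecting flattened keys.
event_content = {
    "msgtype": "m.text",
    "body": "hello @bob, heads up everyone",
    "m.mentions": {
        "user_ids": ["@bob:example.com"],  # matched by .m.rule.is_user_mention
        "room": True,                      # matched by .m.rule.is_room_mention
    },
}
```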


@@ -527,7 +527,6 @@ pub struct FilteredPushRules {
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3952_intentional_mentions: bool,
msc3958_suppress_edits_enabled: bool,
}
@@ -540,7 +539,6 @@ impl FilteredPushRules {
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3952_intentional_mentions: bool,
msc3958_suppress_edits_enabled: bool,
) -> Self {
Self {
@@ -549,7 +547,6 @@ impl FilteredPushRules {
msc1767_enabled,
msc3381_polls_enabled,
msc3664_enabled,
msc3952_intentional_mentions,
msc3958_suppress_edits_enabled,
}
}
@@ -587,10 +584,6 @@ impl FilteredPushRules {
return false;
}
if !self.msc3952_intentional_mentions && rule.rule_id.contains("org.matrix.msc3952")
{
return false;
}
if !self.msc3958_suppress_edits_enabled
&& rule.rule_id == "global/override/.com.beeper.suppress_edits"
{


@@ -20,6 +20,8 @@ from concurrent.futures import ThreadPoolExecutor
from types import FrameType
from typing import Collection, Optional, Sequence, Set
# These are expanded inside the dockerfile to be a fully qualified image name.
# e.g. docker.io/library/debian:bullseye
DISTS = (
"debian:buster", # oldstable: EOL 2022-08
"debian:bullseye",


@@ -246,10 +246,6 @@ else
else
export PASS_SYNAPSE_COMPLEMENT_DATABASE=sqlite
fi
# The tests for importing historical messages (MSC2716)
# only pass with monoliths, currently.
test_tags="$test_tags,msc2716"
fi
if [[ -n "$ASYNCIO_REACTOR" ]]; then
@@ -269,6 +265,10 @@ if [[ -n "$SYNAPSE_TEST_LOG_LEVEL" ]]; then
export PASS_SYNAPSE_LOG_SENSITIVE=1
fi
# Log a few more useful things for a developer attempting to debug something
# particularly tricky.
export PASS_SYNAPSE_LOG_TESTING=1
# Run the tests!
echo "Images built; running complement"
cd "$COMPLEMENT_DIR"


@@ -136,11 +136,11 @@ def request(
authorization_headers.append(header)
print("Authorization: %s" % header, file=sys.stderr)
dest = "matrix://%s%s" % (destination, path)
dest = "matrix-federation://%s%s" % (destination, path)
print("Requesting %s" % dest, file=sys.stderr)
s = requests.Session()
s.mount("matrix://", MatrixConnectionAdapter())
s.mount("matrix-federation://", MatrixConnectionAdapter())
headers: Dict[str, str] = {
"Authorization": authorization_headers[0],


@@ -46,7 +46,6 @@ class FilteredPushRules:
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3952_intentional_mentions: bool,
msc3958_suppress_edits_enabled: bool,
): ...
def rules(self) -> Collection[Tuple[PushRule, bool]]: ...


@@ -1369,6 +1369,9 @@ def main() -> None:
sys.stderr.write("Database must use the 'psycopg2' connector.\n")
sys.exit(3)
# Don't run the background tasks that get started by the data stores.
hs_config["run_background_tasks_on"] = "some_other_process"
config = HomeServerConfig()
config.parse_config_dict(hs_config, "", "")


@@ -0,0 +1,175 @@
# Copyright 2023 The Matrix.org Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional, Tuple
from typing_extensions import Protocol
from twisted.web.server import Request
from synapse.appservice import ApplicationService
from synapse.http.site import SynapseRequest
from synapse.types import Requester
# guests always get this device id.
GUEST_DEVICE_ID = "guest_device"
class Auth(Protocol):
"""The interface that an auth provider must implement."""
async def check_user_in_room(
self,
room_id: str,
requester: Requester,
allow_departed_users: bool = False,
) -> Tuple[str, Optional[str]]:
"""Check if the user is in the room, or was at some point.
Args:
room_id: The room to check.
requester: The user to check.
allow_departed_users: if True, accept users that were previously
members but have now departed.
Raises:
AuthError if the user is/was not in the room.
Returns:
The current membership of the user in the room and the
membership event ID of the user.
"""
async def get_user_by_req(
self,
request: SynapseRequest,
allow_guest: bool = False,
allow_expired: bool = False,
) -> Requester:
"""Get a registered user's ID.
Args:
request: An HTTP request with an access_token query parameter.
allow_guest: If False, will raise an AuthError if the user making the
request is a guest.
allow_expired: If True, allow the request through even if the account
is expired, or session token lifetime has ended. Note that
/login will deliver access tokens regardless of expiration.
Returns:
Resolves to the requester
Raises:
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
AuthError if access is denied for the user in the access token
"""
async def validate_appservice_can_control_user_id(
self, app_service: ApplicationService, user_id: str
) -> None:
"""Validates that the app service is allowed to control
the given user.
Args:
app_service: The app service that controls the user
user_id: The author MXID that the app service is controlling
Raises:
AuthError: If the application service is not allowed to control the user
(user namespace regex does not match, wrong homeserver, etc)
or if the user has not been registered yet.
"""
async def get_user_by_access_token(
self,
token: str,
allow_expired: bool = False,
) -> Requester:
"""Validate access token and get user_id from it
Args:
token: The access token to get the user by
allow_expired: If False, raises an InvalidClientTokenError
if the token is expired
Raises:
InvalidClientTokenError if a user by that token exists, but the token is
expired
InvalidClientCredentialsError if no user by that token exists or the token
is invalid
"""
async def is_server_admin(self, requester: Requester) -> bool:
"""Check if the given user is a local server admin.
Args:
requester: user to check
Returns:
True if the user is an admin
"""
async def check_can_change_room_list(
self, room_id: str, requester: Requester
) -> bool:
"""Determine whether the user is allowed to edit the room's entry in the
published room list.
Args:
room_id: The room to check.
requester: The user making the request, according to the access token.
"""
@staticmethod
def has_access_token(request: Request) -> bool:
"""Checks if the request has an access_token.
Returns:
False if no access_token was given, True otherwise.
"""
@staticmethod
def get_access_token_from_request(request: Request) -> str:
"""Extracts the access_token from the request.
Args:
request: The http request.
Returns:
The access_token
Raises:
MissingClientTokenError: If there isn't a single access_token in the
request
"""
async def check_user_in_room_or_world_readable(
self, room_id: str, requester: Requester, allow_departed_users: bool = False
) -> Tuple[str, Optional[str]]:
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.
Args:
room_id: room to check
requester: user to check
allow_departed_users: if True, accept users that were previously
members but have now departed
Returns:
Resolves to the current membership of the user in the room and the
membership event ID of the user. If the user is not in the room and
never has been, then `(Membership.JOIN, None)` is returned.
"""


@@ -1,4 +1,4 @@
# Copyright 2014 - 2016 OpenMarket Ltd
# Copyright 2023 The Matrix.org Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,7 +14,6 @@
import logging
from typing import TYPE_CHECKING, Optional, Tuple
import pymacaroons
from netaddr import IPAddress
from twisted.web.server import Request
@@ -24,19 +23,11 @@ from synapse.api.constants import EventTypes, HistoryVisibility, Membership
from synapse.api.errors import (
AuthError,
Codes,
InvalidClientTokenError,
MissingClientTokenError,
UnstableSpecAuthError,
)
from synapse.appservice import ApplicationService
from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import (
active_span,
force_tracing,
start_active_span,
trace,
)
from synapse.logging.opentracing import trace
from synapse.types import Requester, create_requester
from synapse.util.cancellation import cancellable
@@ -46,26 +37,13 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# guests always get this device id.
GUEST_DEVICE_ID = "guest_device"
class Auth:
"""
This class contains functions for authenticating users of our client-server API.
"""
class BaseAuth:
"""Common base class for all auth implementations."""
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastores().main
self._account_validity_handler = hs.get_account_validity_handler()
self._storage_controllers = hs.get_storage_controllers()
self._macaroon_generator = hs.get_macaroon_generator()
self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
self._track_puppeted_user_ips = hs.config.api.track_puppeted_user_ips
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
async def check_user_in_room(
self,
@@ -119,139 +97,49 @@ class Auth:
errcode=Codes.NOT_JOINED,
)
@cancellable
async def get_user_by_req(
self,
request: SynapseRequest,
allow_guest: bool = False,
allow_expired: bool = False,
) -> Requester:
"""Get a registered user's ID.
@trace
async def check_user_in_room_or_world_readable(
self, room_id: str, requester: Requester, allow_departed_users: bool = False
) -> Tuple[str, Optional[str]]:
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.
Args:
request: An HTTP request with an access_token query parameter.
allow_guest: If False, will raise an AuthError if the user making the
request is a guest.
allow_expired: If True, allow the request through even if the account
is expired, or session token lifetime has ended. Note that
/login will deliver access tokens regardless of expiration.
room_id: room to check
user_id: user to check
allow_departed_users: if True, accept users that were previously
members but have now departed
Returns:
Resolves to the requester
Raises:
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
AuthError if access is denied for the user in the access token
Resolves to the current membership of the user in the room and the
membership event ID of the user. If the user is not in the room and
never has been, then `(Membership.JOIN, None)` is returned.
"""
parent_span = active_span()
with start_active_span("get_user_by_req"):
requester = await self._wrapped_get_user_by_req(
request, allow_guest, allow_expired
)
if parent_span:
if requester.authenticated_entity in self._force_tracing_for_users:
# request tracing is enabled for this user, so we need to force
# tracing on for the parent span (which will be the servlet span).
#
# It's too late for the get_user_by_req span to inherit the setting,
# so we also force it on for that.
force_tracing()
force_tracing(parent_span)
parent_span.set_tag(
"authenticated_entity", requester.authenticated_entity
)
parent_span.set_tag("user_id", requester.user.to_string())
if requester.device_id is not None:
parent_span.set_tag("device_id", requester.device_id)
if requester.app_service is not None:
parent_span.set_tag("appservice_id", requester.app_service.id)
return requester
@cancellable
async def _wrapped_get_user_by_req(
self,
request: SynapseRequest,
allow_guest: bool,
allow_expired: bool,
) -> Requester:
"""Helper for get_user_by_req
Once get_user_by_req has set up the opentracing span, this does the actual work.
"""
try:
ip_addr = request.getClientAddress().host
user_agent = get_request_user_agent(request)
access_token = self.get_access_token_from_request(request)
# First check if it could be a request from an appservice
requester = await self._get_appservice_user(request)
if not requester:
# If not, it should be from a regular user
requester = await self.get_user_by_access_token(
access_token, allow_expired=allow_expired
)
# Deny the request if the user account has expired.
# This check is only done for regular users, not appservice ones.
if not allow_expired:
if await self._account_validity_handler.is_user_expired(
requester.user.to_string()
):
# Raise the error if either an account validity module has determined
# the account has expired, or the legacy account validity
# implementation is enabled and determined the account has expired
raise AuthError(
403,
"User account has expired",
errcode=Codes.EXPIRED_ACCOUNT,
)
if ip_addr and (
not requester.app_service or self._track_appservice_user_ips
# check_user_in_room will return the most recent membership
# event for the user if:
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
return await self.check_user_in_room(
room_id, requester, allow_departed_users=allow_departed_users
)
except AuthError:
visibility = await self._storage_controllers.state.get_current_state_event(
room_id, EventTypes.RoomHistoryVisibility, ""
)
if (
visibility
and visibility.content.get("history_visibility")
== HistoryVisibility.WORLD_READABLE
):
# XXX(quenting): I'm 95% confident that we could skip setting the
# device_id to "dummy-device" for appservices, and that the only impact
# would be some rows which would not deduplicate in the 'user_ips'
# table during the transition
recorded_device_id = (
"dummy-device"
if requester.device_id is None and requester.app_service is not None
else requester.device_id
)
await self.store.insert_client_ip(
user_id=requester.authenticated_entity,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=recorded_device_id,
)
# Also track the puppeted user's client IP if enabled and the user is puppeting
if (
requester.user.to_string() != requester.authenticated_entity
and self._track_puppeted_user_ips
):
await self.store.insert_client_ip(
user_id=requester.user.to_string(),
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=requester.device_id,
)
if requester.is_guest and not allow_guest:
raise AuthError(
403,
"Guest access not allowed",
errcode=Codes.GUEST_ACCESS_FORBIDDEN,
)
request.requester = requester
return requester
except KeyError:
raise MissingClientTokenError()
return Membership.JOIN, None
raise AuthError(
403,
"User %r not in room %s, and room previews are disabled"
% (requester.user, room_id),
)
async def validate_appservice_can_control_user_id(
self, app_service: ApplicationService, user_id: str
@@ -284,184 +172,16 @@ class Auth:
403, "Application service has not registered this user (%s)" % user_id
)
@cancellable
async def _get_appservice_user(self, request: Request) -> Optional[Requester]:
"""
Given a request, reads the request parameters to determine:
- whether it's an application service that's making this request
- what user the application service should be treated as controlling
(the user_id URI parameter allows an application service to masquerade
any applicable user in its namespace)
- what device the application service should be treated as controlling
(the device_id[^1] URI parameter allows an application service to masquerade
as any device that exists for the relevant user)
[^1] Unstable and provided by MSC3202.
Must use `org.matrix.msc3202.device_id` in place of `device_id` for now.
Returns:
the application service `Requester` of that request
Postconditions:
- The `app_service` field in the returned `Requester` is set
- The `user_id` field in the returned `Requester` is either the application
service sender or the controlled user set by the `user_id` URI parameter
- The returned application service is permitted to control the returned user ID.
- The returned device ID, if present, has been checked to be a valid device ID
for the returned user ID.
"""
DEVICE_ID_ARG_NAME = b"org.matrix.msc3202.device_id"
app_service = self.store.get_app_service_by_token(
self.get_access_token_from_request(request)
)
if app_service is None:
return None
if app_service.ip_range_whitelist:
ip_address = IPAddress(request.getClientAddress().host)
if ip_address not in app_service.ip_range_whitelist:
return None
# This will always be set by the time Twisted calls us.
assert request.args is not None
if b"user_id" in request.args:
effective_user_id = request.args[b"user_id"][0].decode("utf8")
await self.validate_appservice_can_control_user_id(
app_service, effective_user_id
)
else:
effective_user_id = app_service.sender
effective_device_id: Optional[str] = None
if (
self.hs.config.experimental.msc3202_device_masquerading_enabled
and DEVICE_ID_ARG_NAME in request.args
):
effective_device_id = request.args[DEVICE_ID_ARG_NAME][0].decode("utf8")
# We only just set this so it can't be None!
assert effective_device_id is not None
device_opt = await self.store.get_device(
effective_user_id, effective_device_id
)
if device_opt is None:
# For now, use 400 M_EXCLUSIVE if the device doesn't exist.
# This is an open thread of discussion on MSC3202 as of 2021-12-09.
raise AuthError(
400,
f"Application service trying to use a device that doesn't exist ('{effective_device_id}' for {effective_user_id})",
Codes.EXCLUSIVE,
)
return create_requester(
effective_user_id, app_service=app_service, device_id=effective_device_id
)
async def get_user_by_access_token(
self,
token: str,
allow_expired: bool = False,
) -> Requester:
"""Validate access token and get user_id from it
Args:
token: The access token to get the user by
allow_expired: If False, raises an InvalidClientTokenError
if the token is expired
Raises:
InvalidClientTokenError if a user by that token exists, but the token is
expired
InvalidClientCredentialsError if no user by that token exists or the token
is invalid
"""
# First look in the database to see if the access token is present
# as an opaque token.
user_info = await self.store.get_user_by_access_token(token)
if user_info:
valid_until_ms = user_info.valid_until_ms
if (
not allow_expired
and valid_until_ms is not None
and valid_until_ms < self.clock.time_msec()
):
# there was a valid access token, but it has expired.
# soft-logout the user.
raise InvalidClientTokenError(
msg="Access token has expired", soft_logout=True
)
# Mark the token as used. This is used to invalidate old refresh
# tokens after some time.
await self.store.mark_access_token_as_used(user_info.token_id)
requester = create_requester(
user_id=user_info.user_id,
access_token_id=user_info.token_id,
is_guest=user_info.is_guest,
shadow_banned=user_info.shadow_banned,
device_id=user_info.device_id,
authenticated_entity=user_info.token_owner,
)
return requester
# If the token isn't found in the database, then it could still be a
# macaroon for a guest, so we check that here.
try:
user_id = self._macaroon_generator.verify_guest_token(token)
# Guest access tokens are not stored in the database (there can
# only be one access token per guest, anyway).
#
# In order to prevent guest access tokens being used as regular
# user access tokens (and hence getting around the invalidation
# process), we look up the user id and check that it is indeed
# a guest user.
#
# It would of course be much easier to store guest access
# tokens in the database as well, but that would break existing
# guest tokens.
stored_user = await self.store.get_user_by_id(user_id)
if not stored_user:
raise InvalidClientTokenError("Unknown user_id %s" % user_id)
if not stored_user["is_guest"]:
raise InvalidClientTokenError(
"Guest access token used for regular user"
)
return create_requester(
user_id=user_id,
is_guest=True,
# all guests get the same device id
device_id=GUEST_DEVICE_ID,
authenticated_entity=user_id,
)
except (
pymacaroons.exceptions.MacaroonException,
TypeError,
ValueError,
) as e:
logger.warning(
"Invalid access token in auth: %s %s.",
type(e),
e,
)
raise InvalidClientTokenError("Invalid access token passed.")
async def is_server_admin(self, requester: Requester) -> bool:
"""Check if the given user is a local server admin.
Args:
requester: The user making the request, according to the access token.
requester: user to check
Returns:
True if the user is an admin
"""
return await self.store.is_server_admin(requester.user)
raise NotImplementedError()
async def check_can_change_room_list(
self, room_id: str, requester: Requester
@@ -470,8 +190,8 @@ class Auth:
published room list.
Args:
room_id: The room to check.
requester: The user making the request, according to the access token.
room_id
user
"""
is_admin = await self.is_server_admin(requester)
@@ -518,7 +238,6 @@ class Auth:
return bool(query_params) or bool(auth_headers)
@staticmethod
@cancellable
def get_access_token_from_request(request: Request) -> str:
"""Extracts the access_token from the request.
@@ -556,47 +275,77 @@ class Auth:
return query_params[0].decode("ascii")
@trace
async def check_user_in_room_or_world_readable(
self, room_id: str, requester: Requester, allow_departed_users: bool = False
) -> Tuple[str, Optional[str]]:
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.
@cancellable
async def get_appservice_user(
self, request: Request, access_token: str
) -> Optional[Requester]:
"""
Given a request, reads the request parameters to determine:
- whether it's an application service that's making this request
- what user the application service should be treated as controlling
(the user_id URI parameter allows an application service to masquerade
any applicable user in its namespace)
- what device the application service should be treated as controlling
(the device_id[^1] URI parameter allows an application service to masquerade
as any device that exists for the relevant user)
Args:
room_id: The room to check.
requester: The user making the request, according to the access token.
allow_departed_users: If True, accept users that were previously
members but have now departed.
[^1] Unstable and provided by MSC3202.
Must use `org.matrix.msc3202.device_id` in place of `device_id` for now.
Returns:
Resolves to the current membership of the user in the room and the
membership event ID of the user. If the user is not in the room and
never has been, then `(Membership.JOIN, None)` is returned.
"""
the application service `Requester` of that request
try:
# check_user_in_room will return the most recent membership
# event for the user if:
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
return await self.check_user_in_room(
room_id, requester, allow_departed_users=allow_departed_users
Postconditions:
- The `app_service` field in the returned `Requester` is set
- The `user_id` field in the returned `Requester` is either the application
service sender or the controlled user set by the `user_id` URI parameter
- The returned application service is permitted to control the returned user ID.
- The returned device ID, if present, has been checked to be a valid device ID
for the returned user ID.
"""
DEVICE_ID_ARG_NAME = b"org.matrix.msc3202.device_id"
app_service = self.store.get_app_service_by_token(access_token)
if app_service is None:
return None
if app_service.ip_range_whitelist:
ip_address = IPAddress(request.getClientAddress().host)
if ip_address not in app_service.ip_range_whitelist:
return None
# This will always be set by the time Twisted calls us.
assert request.args is not None
if b"user_id" in request.args:
effective_user_id = request.args[b"user_id"][0].decode("utf8")
await self.validate_appservice_can_control_user_id(
app_service, effective_user_id
)
except AuthError:
visibility = await self._storage_controllers.state.get_current_state_event(
room_id, EventTypes.RoomHistoryVisibility, ""
)
if (
visibility
and visibility.content.get("history_visibility")
== HistoryVisibility.WORLD_READABLE
):
return Membership.JOIN, None
raise UnstableSpecAuthError(
403,
"User %s not in room %s, and room previews are disabled"
% (requester.user, room_id),
errcode=Codes.NOT_JOINED,
else:
effective_user_id = app_service.sender
effective_device_id: Optional[str] = None
if (
self.hs.config.experimental.msc3202_device_masquerading_enabled
and DEVICE_ID_ARG_NAME in request.args
):
effective_device_id = request.args[DEVICE_ID_ARG_NAME][0].decode("utf8")
# We only just set this so it can't be None!
assert effective_device_id is not None
device_opt = await self.store.get_device(
effective_user_id, effective_device_id
)
if device_opt is None:
# For now, use 400 M_EXCLUSIVE if the device doesn't exist.
# This is an open thread of discussion on MSC3202 as of 2021-12-09.
raise AuthError(
400,
f"Application service trying to use a device that doesn't exist ('{effective_device_id}' for {effective_user_id})",
Codes.EXCLUSIVE,
)
return create_requester(
effective_user_id, app_service=app_service, device_id=effective_device_id
)


@@ -0,0 +1,291 @@
# Copyright 2023 The Matrix.org Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING
import pymacaroons
from synapse.api.errors import (
AuthError,
Codes,
InvalidClientTokenError,
MissingClientTokenError,
)
from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import active_span, force_tracing, start_active_span
from synapse.types import Requester, create_requester
from synapse.util.cancellation import cancellable
from . import GUEST_DEVICE_ID
from .base import BaseAuth
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class InternalAuth(BaseAuth):
"""
This class contains functions for authenticating users of our client-server API.
"""
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.clock = hs.get_clock()
self._account_validity_handler = hs.get_account_validity_handler()
self._macaroon_generator = hs.get_macaroon_generator()
self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
self._track_puppeted_user_ips = hs.config.api.track_puppeted_user_ips
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
@cancellable
async def get_user_by_req(
self,
request: SynapseRequest,
allow_guest: bool = False,
allow_expired: bool = False,
) -> Requester:
"""Get a registered user's ID.
Args:
request: An HTTP request with an access_token query parameter.
allow_guest: If False, will raise an AuthError if the user making the
request is a guest.
allow_expired: If True, allow the request through even if the account
is expired, or session token lifetime has ended. Note that
/login will deliver access tokens regardless of expiration.
Returns:
Resolves to the requester
Raises:
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
AuthError if access is denied for the user in the access token
"""
parent_span = active_span()
with start_active_span("get_user_by_req"):
requester = await self._wrapped_get_user_by_req(
request, allow_guest, allow_expired
)
if parent_span:
if requester.authenticated_entity in self._force_tracing_for_users:
# request tracing is enabled for this user, so we need to force
# tracing on for the parent span (which will be the servlet span).
#
# It's too late for the get_user_by_req span to inherit the setting,
# so we also force it on for that.
force_tracing()
force_tracing(parent_span)
parent_span.set_tag(
"authenticated_entity", requester.authenticated_entity
)
parent_span.set_tag("user_id", requester.user.to_string())
if requester.device_id is not None:
parent_span.set_tag("device_id", requester.device_id)
if requester.app_service is not None:
parent_span.set_tag("appservice_id", requester.app_service.id)
return requester
@cancellable
async def _wrapped_get_user_by_req(
self,
request: SynapseRequest,
allow_guest: bool,
allow_expired: bool,
) -> Requester:
"""Helper for get_user_by_req
Once get_user_by_req has set up the opentracing span, this does the actual work.
"""
try:
ip_addr = request.getClientAddress().host
user_agent = get_request_user_agent(request)
access_token = self.get_access_token_from_request(request)
# First check if it could be a request from an appservice
requester = await self.get_appservice_user(request, access_token)
if not requester:
# If not, it should be from a regular user
requester = await self.get_user_by_access_token(
access_token, allow_expired=allow_expired
)
# Deny the request if the user account has expired.
# This check is only done for regular users, not appservice ones.
if not allow_expired:
if await self._account_validity_handler.is_user_expired(
requester.user.to_string()
):
# Raise the error if either an account validity module has determined
# the account has expired, or the legacy account validity
# implementation is enabled and determined the account has expired
raise AuthError(
403,
"User account has expired",
errcode=Codes.EXPIRED_ACCOUNT,
)
if ip_addr and (
not requester.app_service or self._track_appservice_user_ips
):
# XXX(quenting): I'm 95% confident that we could skip setting the
# device_id to "dummy-device" for appservices, and that the only impact
# would be some rows which would not deduplicate in the 'user_ips'
# table during the transition
recorded_device_id = (
"dummy-device"
if requester.device_id is None and requester.app_service is not None
else requester.device_id
)
await self.store.insert_client_ip(
user_id=requester.authenticated_entity,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=recorded_device_id,
)
# Also track the puppeted user's client IP if enabled and the user is puppeting
if (
requester.user.to_string() != requester.authenticated_entity
and self._track_puppeted_user_ips
):
await self.store.insert_client_ip(
user_id=requester.user.to_string(),
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=requester.device_id,
)
if requester.is_guest and not allow_guest:
raise AuthError(
403,
"Guest access not allowed",
errcode=Codes.GUEST_ACCESS_FORBIDDEN,
)
request.requester = requester
return requester
except KeyError:
raise MissingClientTokenError()
async def get_user_by_access_token(
self,
token: str,
allow_expired: bool = False,
) -> Requester:
"""Validate access token and get user_id from it
Args:
token: The access token to get the user by
allow_expired: If False, raises an InvalidClientTokenError
if the token is expired
Raises:
InvalidClientTokenError if a user by that token exists, but the token is
expired
InvalidClientCredentialsError if no user by that token exists or the token
is invalid
"""
# First look in the database to see if the access token is present
# as an opaque token.
user_info = await self.store.get_user_by_access_token(token)
if user_info:
valid_until_ms = user_info.valid_until_ms
if (
not allow_expired
and valid_until_ms is not None
and valid_until_ms < self.clock.time_msec()
):
# there was a valid access token, but it has expired.
# soft-logout the user.
raise InvalidClientTokenError(
msg="Access token has expired", soft_logout=True
)
# Mark the token as used. This is used to invalidate old refresh
# tokens after some time.
await self.store.mark_access_token_as_used(user_info.token_id)
requester = create_requester(
user_id=user_info.user_id,
access_token_id=user_info.token_id,
is_guest=user_info.is_guest,
shadow_banned=user_info.shadow_banned,
device_id=user_info.device_id,
authenticated_entity=user_info.token_owner,
)
return requester
# If the token isn't found in the database, then it could still be a
# macaroon for a guest, so we check that here.
try:
user_id = self._macaroon_generator.verify_guest_token(token)
# Guest access tokens are not stored in the database (there can
# only be one access token per guest, anyway).
#
# In order to prevent guest access tokens being used as regular
# user access tokens (and hence getting around the invalidation
# process), we look up the user id and check that it is indeed
# a guest user.
#
# It would of course be much easier to store guest access
# tokens in the database as well, but that would break existing
# guest tokens.
stored_user = await self.store.get_user_by_id(user_id)
if not stored_user:
raise InvalidClientTokenError("Unknown user_id %s" % user_id)
if not stored_user["is_guest"]:
raise InvalidClientTokenError(
"Guest access token used for regular user"
)
return create_requester(
user_id=user_id,
is_guest=True,
# all guests get the same device id
device_id=GUEST_DEVICE_ID,
authenticated_entity=user_id,
)
except (
pymacaroons.exceptions.MacaroonException,
TypeError,
ValueError,
) as e:
logger.warning(
"Invalid access token in auth: %s %s.",
type(e),
e,
)
raise InvalidClientTokenError("Invalid access token passed.")
async def is_server_admin(self, requester: Requester) -> bool:
"""Check if the given user is a local server admin.
Args:
requester: The user making the request, according to the access token.
Returns:
True if the user is an admin
"""
return await self.store.is_server_admin(requester.user)


@@ -0,0 +1,352 @@
# Copyright 2023 The Matrix.org Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional
from urllib.parse import urlencode
from authlib.oauth2 import ClientAuth
from authlib.oauth2.auth import encode_client_secret_basic, encode_client_secret_post
from authlib.oauth2.rfc7523 import ClientSecretJWT, PrivateKeyJWT, private_key_jwt_sign
from authlib.oauth2.rfc7662 import IntrospectionToken
from authlib.oidc.discovery import OpenIDProviderMetadata, get_well_known_url
from twisted.web.client import readBody
from twisted.web.http_headers import Headers
from synapse.api.auth.base import BaseAuth
from synapse.api.errors import (
AuthError,
HttpResponseException,
InvalidClientTokenError,
OAuthInsufficientScopeError,
StoreError,
SynapseError,
)
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.types import Requester, UserID, create_requester
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
# Scope as defined by MSC2967
# https://github.com/matrix-org/matrix-spec-proposals/pull/2967
SCOPE_MATRIX_API = "urn:matrix:org.matrix.msc2967.client:api:*"
SCOPE_MATRIX_GUEST = "urn:matrix:org.matrix.msc2967.client:api:guest"
SCOPE_MATRIX_DEVICE_PREFIX = "urn:matrix:org.matrix.msc2967.client:device:"
# Scope which allows access to the Synapse admin API
SCOPE_SYNAPSE_ADMIN = "urn:synapse:admin:*"
def scope_to_list(scope: str) -> List[str]:
"""Convert a scope string to a list of scope tokens"""
return scope.strip().split(" ")
class PrivateKeyJWTWithKid(PrivateKeyJWT): # type: ignore[misc]
"""An implementation of the private_key_jwt client auth method that includes a kid header.
This is needed because some providers (Keycloak) require the kid header to figure
out which key to use to verify the signature.
"""
def sign(self, auth: Any, token_endpoint: str) -> bytes:
return private_key_jwt_sign(
auth.client_secret,
client_id=auth.client_id,
token_endpoint=token_endpoint,
claims=self.claims,
header={"kid": auth.client_secret["kid"]},
)
class MSC3861DelegatedAuth(BaseAuth):
AUTH_METHODS = {
"client_secret_post": encode_client_secret_post,
"client_secret_basic": encode_client_secret_basic,
"client_secret_jwt": ClientSecretJWT(),
"private_key_jwt": PrivateKeyJWTWithKid(),
}
EXTERNAL_ID_PROVIDER = "oauth-delegated"
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self._config = hs.config.experimental.msc3861
auth_method = MSC3861DelegatedAuth.AUTH_METHODS.get(
self._config.client_auth_method.value, None
)
# Those assertions are already checked when parsing the config
assert self._config.enabled, "OAuth delegation is not enabled"
assert self._config.issuer, "No issuer provided"
assert self._config.client_id, "No client_id provided"
assert auth_method is not None, "Invalid client_auth_method provided"
self._http_client = hs.get_proxied_http_client()
self._hostname = hs.hostname
self._admin_token = self._config.admin_token
self._issuer_metadata = RetryOnExceptionCachedCall(self._load_metadata)
if isinstance(auth_method, PrivateKeyJWTWithKid):
# Use the JWK as the client secret when using the private_key_jwt method
assert self._config.jwk, "No JWK provided"
self._client_auth = ClientAuth(
self._config.client_id, self._config.jwk, auth_method
)
else:
# Else use the client secret
assert self._config.client_secret, "No client_secret provided"
self._client_auth = ClientAuth(
self._config.client_id, self._config.client_secret, auth_method
)
async def _load_metadata(self) -> OpenIDProviderMetadata:
if self._config.issuer_metadata is not None:
return OpenIDProviderMetadata(**self._config.issuer_metadata)
url = get_well_known_url(self._config.issuer, external=True)
response = await self._http_client.get_json(url)
metadata = OpenIDProviderMetadata(**response)
# metadata.validate_introspection_endpoint()
return metadata
async def _introspect_token(self, token: str) -> IntrospectionToken:
"""
Send a token to the introspection endpoint and return the introspection response.
Parameters:
token: The token to introspect
Raises:
HttpResponseException: If the introspection endpoint returns a non-2xx response
ValueError: If the introspection endpoint returns an invalid JSON response
JSONDecodeError: If the introspection endpoint returns a non-JSON response
Exception: If the HTTP request fails
Returns:
The introspection response
"""
metadata = await self._issuer_metadata.get()
introspection_endpoint = metadata.get("introspection_endpoint")
raw_headers: Dict[str, str] = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": str(self._http_client.user_agent, "utf-8"),
"Accept": "application/json",
}
args = {"token": token, "token_type_hint": "access_token"}
body = urlencode(args, True)
# Fill the body/headers with credentials
uri, raw_headers, body = self._client_auth.prepare(
method="POST", uri=introspection_endpoint, headers=raw_headers, body=body
)
headers = Headers({k: [v] for (k, v) in raw_headers.items()})
# Do the actual request
# We're not using the SimpleHttpClient util methods as we don't want to
# check the HTTP status code, and we do the body encoding ourselves.
response = await self._http_client.request(
method="POST",
uri=uri,
data=body.encode("utf-8"),
headers=headers,
)
resp_body = await make_deferred_yieldable(readBody(response))
if response.code < 200 or response.code >= 300:
raise HttpResponseException(
response.code,
response.phrase.decode("ascii", errors="replace"),
resp_body,
)
resp = json_decoder.decode(resp_body.decode("utf-8"))
if not isinstance(resp, dict):
raise ValueError(
"The introspection endpoint returned an invalid JSON response."
)
return IntrospectionToken(**resp)
async def is_server_admin(self, requester: Requester) -> bool:
return "urn:synapse:admin:*" in requester.scope
async def get_user_by_req(
self,
request: SynapseRequest,
allow_guest: bool = False,
allow_expired: bool = False,
) -> Requester:
access_token = self.get_access_token_from_request(request)
requester = await self.get_appservice_user(request, access_token)
if not requester:
# TODO: we probably want to assert the allow_guest inside this call
# so that we don't provision the user if they don't have enough permission:
requester = await self.get_user_by_access_token(access_token, allow_expired)
if not allow_guest and requester.is_guest:
raise OAuthInsufficientScopeError([SCOPE_MATRIX_API])
request.requester = requester
return requester
async def get_user_by_access_token(
self,
token: str,
allow_expired: bool = False,
) -> Requester:
if self._admin_token is not None and token == self._admin_token:
# XXX: This is a temporary solution so that the admin API can be called by
# the OIDC provider. This will be removed once we have OIDC client
# credentials grant support in matrix-authentication-service.
logger.info("Admin token used")
# XXX: that user doesn't exist and won't be provisioned.
# This is mostly fine for admin calls, but we should also think about how to
# handle requesters without a user_id.
admin_user = UserID("__oidc_admin", self._hostname)
return create_requester(
user_id=admin_user,
scope=["urn:synapse:admin:*"],
)
try:
introspection_result = await self._introspect_token(token)
except Exception:
logger.exception("Failed to introspect token")
raise SynapseError(503, "Unable to introspect the access token")
logger.info(f"Introspection result: {introspection_result!r}")
# TODO: introspection verification should be more extensive, especially:
# - verify the audience
if not introspection_result.get("active"):
raise InvalidClientTokenError("Token is not active")
# Let's look at the scope
scope: List[str] = scope_to_list(introspection_result.get("scope", ""))
# Determine type of user based on presence of particular scopes
has_user_scope = SCOPE_MATRIX_API in scope
has_guest_scope = SCOPE_MATRIX_GUEST in scope
if not has_user_scope and not has_guest_scope:
raise InvalidClientTokenError("No scope in token granting user rights")
# Match via the sub claim
sub: Optional[str] = introspection_result.get("sub")
if sub is None:
raise InvalidClientTokenError(
"Invalid sub claim in the introspection result"
)
user_id_str = await self.store.get_user_by_external_id(
MSC3861DelegatedAuth.EXTERNAL_ID_PROVIDER, sub
)
if user_id_str is None:
# If we could not find a user via the external_id, it either does not exist,
# or the external_id was never recorded
# TODO: claim mapping should be configurable
username: Optional[str] = introspection_result.get("username")
if username is None or not isinstance(username, str):
raise AuthError(
500,
"Invalid username claim in the introspection result",
)
user_id = UserID(username, self._hostname)
# First try to find a user from the username claim
user_info = await self.store.get_userinfo_by_id(user_id=user_id.to_string())
if user_info is None:
# If the user does not exist, we should create it on the fly
# TODO: we could use SCIM to provision users ahead of time and listen
# for SCIM SET events if those ever become standard:
# https://datatracker.ietf.org/doc/html/draft-hunt-scim-notify-00
# TODO: claim mapping should be configurable
# If present, use the name claim as the displayname
name: Optional[str] = introspection_result.get("name")
await self.store.register_user(
user_id=user_id.to_string(), create_profile_with_displayname=name
)
# And record the sub as external_id
await self.store.record_user_external_id(
MSC3861DelegatedAuth.EXTERNAL_ID_PROVIDER, sub, user_id.to_string()
)
else:
user_id = UserID.from_string(user_id_str)
# Find device_ids in scope
# We only allow a single device_id in the scope, so we find them all in the
# scope list, and raise if there are more than one. The OIDC server should be
# the one enforcing valid scopes, so we raise a 500 if we find an invalid scope.
device_ids = [
tok[len(SCOPE_MATRIX_DEVICE_PREFIX) :]
for tok in scope
if tok.startswith(SCOPE_MATRIX_DEVICE_PREFIX)
]
if len(device_ids) > 1:
raise AuthError(
500,
"Multiple device IDs in scope",
)
device_id = device_ids[0] if device_ids else None
if device_id is not None:
# Sanity check the device_id
if len(device_id) > 255 or len(device_id) < 1:
raise AuthError(
500,
"Invalid device ID in scope",
)
# Create the device on the fly if it does not exist
try:
await self.store.get_device(
user_id=user_id.to_string(), device_id=device_id
)
except StoreError:
await self.store.store_device(
user_id=user_id.to_string(),
device_id=device_id,
initial_device_display_name="OIDC-native client",
)
# TODO: there are a few things missing in the requester here, which still need
# to be figured out, like:
# - impersonation, with the `authenticated_entity`, which is used for
# rate-limiting, MAU limits, etc.
# - shadow-banning, with the `shadow_banned` flag
# - a proper solution for appservices, which still needs to be figured out in
# the context of MSC3861
return create_requester(
user_id=user_id,
device_id=device_id,
scope=scope,
is_guest=(has_guest_scope and not has_user_scope),
)
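
For context, `_introspect_token` above is a plain RFC 7662 exchange. A minimal synchronous sketch of the same flow, assuming the `requests` library and a `client_secret_basic` client (both assumptions for illustration, neither used by this file):

# Illustrative sketch only: mirrors _introspect_token without Twisted.
import requests
from urllib.parse import urlencode
from authlib.oauth2 import ClientAuth
from authlib.oauth2.auth import encode_client_secret_basic

def introspect(endpoint: str, client_id: str, client_secret: str, token: str) -> dict:
    auth = ClientAuth(client_id, client_secret, encode_client_secret_basic)
    headers = {
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "application/json",
    }
    body = urlencode({"token": token, "token_type_hint": "access_token"})
    # Let authlib fill the client credentials into the request, as above.
    uri, headers, body = auth.prepare("POST", endpoint, headers, body)
    resp = requests.post(uri, data=body, headers=headers)
    resp.raise_for_status()
    result = resp.json()
    if not isinstance(result, dict):
        raise ValueError("The introspection endpoint returned an invalid JSON response.")
    return result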

View File

@@ -123,10 +123,6 @@ class EventTypes:
SpaceChild: Final = "m.space.child"
SpaceParent: Final = "m.space.parent"
MSC2716_INSERTION: Final = "org.matrix.msc2716.insertion"
MSC2716_BATCH: Final = "org.matrix.msc2716.batch"
MSC2716_MARKER: Final = "org.matrix.msc2716.marker"
Reaction: Final = "m.reaction"
@@ -222,21 +218,11 @@ class EventContentFields:
# Used in m.room.guest_access events.
GUEST_ACCESS: Final = "guest_access"
# Used on normal messages to indicate they were historically imported after the fact
MSC2716_HISTORICAL: Final = "org.matrix.msc2716.historical"
# For "insertion" events to indicate what the next batch ID should be in
# order to connect to it
MSC2716_NEXT_BATCH_ID: Final = "next_batch_id"
# Used on "batch" events to indicate which insertion event it connects to
MSC2716_BATCH_ID: Final = "batch_id"
# For "marker" events
MSC2716_INSERTION_EVENT_REFERENCE: Final = "insertion_event_reference"
# The authorising user for joining a restricted room.
AUTHORISING_USER: Final = "join_authorised_via_users_server"
# Use for mentioning users.
MSC3952_MENTIONS: Final = "org.matrix.msc3952.mentions"
MENTIONS: Final = "m.mentions"
# an unspecced field added to to-device messages to identify them uniquely-ish
TO_DEVICE_MSGID: Final = "org.matrix.msgid"

View File

@@ -119,14 +119,20 @@ class Codes(str, Enum):
class CodeMessageException(RuntimeError):
"""An exception with integer code and message string attributes.
"""An exception with integer code, a message string attributes and optional headers.
Attributes:
code: HTTP error code
msg: string describing the error
headers: optional response headers to send
"""
def __init__(self, code: Union[int, HTTPStatus], msg: str):
def __init__(
self,
code: Union[int, HTTPStatus],
msg: str,
headers: Optional[Dict[str, str]] = None,
):
super().__init__("%d: %s" % (code, msg))
# Some calls to this method pass instances of http.HTTPStatus for `code`.
@@ -137,6 +143,7 @@ class CodeMessageException(RuntimeError):
# To eliminate this behaviour, we convert them to their integer equivalents here.
self.code = int(code)
self.msg = msg
self.headers = headers
class RedirectException(CodeMessageException):
@@ -182,6 +189,7 @@ class SynapseError(CodeMessageException):
msg: str,
errcode: str = Codes.UNKNOWN,
additional_fields: Optional[Dict] = None,
headers: Optional[Dict[str, str]] = None,
):
"""Constructs a synapse error.
@@ -190,7 +198,7 @@ class SynapseError(CodeMessageException):
msg: The human-readable error message.
errcode: The matrix error code e.g. 'M_FORBIDDEN'
headers: Optional headers to send with the response.
"""
super().__init__(code, msg)
super().__init__(code, msg, headers)
self.errcode = errcode
if additional_fields is None:
self._additional_fields: Dict = {}
@@ -335,6 +343,20 @@ class AuthError(SynapseError):
super().__init__(code, msg, errcode, additional_fields)
class OAuthInsufficientScopeError(SynapseError):
"""An error raised when the caller does not have sufficient scope to perform the requested action"""
def __init__(
self,
required_scopes: List[str],
):
headers = {
"WWW-Authenticate": 'Bearer error="insufficient_scope", scope="%s"'
% (" ".join(required_scopes))
}
super().__init__(401, "Insufficient scope", Codes.FORBIDDEN, None, headers)
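For example, raising OAuthInsufficientScopeError([SCOPE_MATRIX_API]) (the constant defined in the delegated-auth module above) yields a 401 response carrying:

WWW-Authenticate: Bearer error="insufficient_scope", scope="urn:matrix:org.matrix.msc2967.client:api:*"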
class UnstableSpecAuthError(AuthError):
"""An error raised when a new error code is being proposed to replace a previous one.
This error will return a "org.matrix.unstable.errcode" property with the new error code,

View File

@@ -152,9 +152,9 @@ class Filtering:
self.DEFAULT_FILTER_COLLECTION = FilterCollection(hs, {})
async def get_user_filter(
self, user_localpart: str, filter_id: Union[int, str]
self, user_id: UserID, filter_id: Union[int, str]
) -> "FilterCollection":
result = await self.store.get_user_filter(user_localpart, filter_id)
result = await self.store.get_user_filter(user_id, filter_id)
return FilterCollection(self._hs, result)
def add_user_filter(self, user_id: UserID, user_filter: JsonDict) -> Awaitable[int]:
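With this change callers pass a full UserID instead of a bare localpart; a hypothetical call site for illustration:

filter_collection = await filtering.get_user_filter(
    UserID.from_string("@alice:example.com"), filter_id
)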

View File

@@ -91,11 +91,6 @@ class RoomVersion:
# MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
# m.room.membership event with membership 'knock'.
msc2403_knocking: bool
# MSC2716: Adds m.room.power_levels -> content.historical field to control
# whether "insertion", "chunk", "marker" events can be sent
msc2716_historical: bool
# MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
msc2716_redactions: bool
# MSC3389: Protect relation information from redaction.
msc3389_relation_redactions: bool
# MSC3787: Adds support for a `knock_restricted` join rule, mixing concepts of
@@ -130,8 +125,6 @@ class RoomVersions:
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -153,8 +146,6 @@ class RoomVersions:
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -176,8 +167,6 @@ class RoomVersions:
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -199,8 +188,6 @@ class RoomVersions:
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -222,8 +209,6 @@ class RoomVersions:
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -245,8 +230,6 @@ class RoomVersions:
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -268,8 +251,6 @@ class RoomVersions:
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -291,8 +272,6 @@ class RoomVersions:
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -314,8 +293,6 @@ class RoomVersions:
msc3083_join_rules=True,
msc3375_redaction_rules=False,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -337,8 +314,6 @@ class RoomVersions:
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -360,8 +335,6 @@ class RoomVersions:
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=False,
@@ -383,8 +356,6 @@ class RoomVersions:
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
@@ -406,8 +377,6 @@ class RoomVersions:
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
@@ -415,29 +384,6 @@ class RoomVersions:
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
MSC2716v4 = RoomVersion(
"org.matrix.msc2716v4",
RoomDisposition.UNSTABLE,
EventFormatVersions.ROOM_V4_PLUS,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2175_implicit_room_creator=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=True,
msc2716_historical=True,
msc2716_redactions=True,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
msc3821_redaction_rules=False,
msc3931_push_features=(),
msc3989_redaction_rules=False,
)
MSC1767v10 = RoomVersion(
# MSC1767 (Extensible Events) based on room version "10"
"org.matrix.msc1767.10",
@@ -453,8 +399,6 @@ class RoomVersions:
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
@@ -476,8 +420,6 @@ class RoomVersions:
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
@@ -500,8 +442,6 @@ class RoomVersions:
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3389_relation_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
@@ -526,7 +466,6 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.V9,
RoomVersions.MSC3787,
RoomVersions.V10,
RoomVersions.MSC2716v4,
RoomVersions.MSC3989,
RoomVersions.MSC3820opt2,
)

View File

@@ -83,7 +83,6 @@ from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.registration import RegistrationWorkerStore
from synapse.storage.databases.main.relations import RelationsWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.room_batch import RoomBatchStore
from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
@@ -120,7 +119,6 @@ class GenericWorkerStore(
# the races it creates aren't too bad.
KeyStore,
RoomWorkerStore,
RoomBatchStore,
DirectoryWorkerStore,
PushRulesWorkerStore,
ApplicationServiceTransactionWorkerStore,

View File

@@ -29,7 +29,14 @@ class AuthConfig(Config):
if password_config is None:
password_config = {}
passwords_enabled = password_config.get("enabled", True)
# The default value of password_config.enabled is True, unless msc3861 is enabled.
msc3861_enabled = (
config.get("experimental_features", {})
.get("msc3861", {})
.get("enabled", False)
)
passwords_enabled = password_config.get("enabled", not msc3861_enabled)
# 'only_for_reauth' allows users who have previously set a password to use it,
# even though passwords would otherwise be disabled.
passwords_for_reauth_only = passwords_enabled == "only_for_reauth"
@@ -53,3 +60,13 @@ class AuthConfig(Config):
self.ui_auth_session_timeout = self.parse_duration(
ui_auth.get("session_timeout", 0)
)
# Logging in with an existing session.
login_via_existing = config.get("login_via_existing_session", {})
self.login_via_existing_enabled = login_via_existing.get("enabled", False)
self.login_via_existing_require_ui_auth = login_via_existing.get(
"require_ui_auth", True
)
self.login_via_existing_token_timeout = self.parse_duration(
login_via_existing.get("token_timeout", "5m")
)
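A minimal sketch of how the new block is read (hypothetical values; parse_duration returns milliseconds, so "5m" becomes 300000):

login_via_existing_session = {"enabled": True, "require_ui_auth": True, "token_timeout": "5m"}
# -> login_via_existing_enabled         = True
# -> login_via_existing_require_ui_auth = True
# -> login_via_existing_token_timeout   = 300000  (ms)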

View File

@@ -12,15 +12,216 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Optional
import enum
from typing import TYPE_CHECKING, Any, Optional
import attr
import attr.validators
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.config import ConfigError
from synapse.config._base import Config
from synapse.config._base import Config, RootConfig
from synapse.types import JsonDict
# Determine whether authlib is installed.
try:
import authlib # noqa: F401
HAS_AUTHLIB = True
except ImportError:
HAS_AUTHLIB = False
if TYPE_CHECKING:
# Only import this if we're type checking, as it might not be installed at runtime.
from authlib.jose.rfc7517 import JsonWebKey
class ClientAuthMethod(enum.Enum):
"""List of supported client auth methods."""
CLIENT_SECRET_POST = "client_secret_post"
CLIENT_SECRET_BASIC = "client_secret_basic"
CLIENT_SECRET_JWT = "client_secret_jwt"
PRIVATE_KEY_JWT = "private_key_jwt"
def _parse_jwks(jwks: Optional[JsonDict]) -> Optional["JsonWebKey"]:
"""A helper function to parse a JWK dict into a JsonWebKey."""
if jwks is None:
return None
from authlib.jose.rfc7517 import JsonWebKey
return JsonWebKey.import_key(jwks)
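For illustration (hypothetical key material), _parse_jwks takes the JWK as a plain dict and defers the authlib import until it is actually needed:

jwk = _parse_jwks({"kty": "oct", "kid": "key-1", "k": "c2VjcmV0"})
# -> an authlib JsonWebKey instance; _parse_jwks(None) simply returns None.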
@attr.s(slots=True, frozen=True)
class MSC3861:
"""Configuration for MSC3861: Matrix architecture change to delegate authentication via OIDC"""
enabled: bool = attr.ib(default=False, validator=attr.validators.instance_of(bool))
"""Whether to enable MSC3861 auth delegation."""
@enabled.validator
def _check_enabled(self, attribute: attr.Attribute, value: bool) -> None:
# Only allow enabling MSC3861 if authlib is installed
if value and not HAS_AUTHLIB:
raise ConfigError(
"MSC3861 is enabled but authlib is not installed. "
"Please install authlib to use MSC3861.",
("experimental", "msc3861", "enabled"),
)
issuer: str = attr.ib(default="", validator=attr.validators.instance_of(str))
"""The URL of the OIDC Provider."""
issuer_metadata: Optional[JsonDict] = attr.ib(default=None)
"""The issuer metadata to use, otherwise discovered from /.well-known/openid-configuration as per MSC2965."""
client_id: str = attr.ib(
default="",
validator=attr.validators.instance_of(str),
)
"""The client ID to use when calling the introspection endpoint."""
client_auth_method: ClientAuthMethod = attr.ib(
default=ClientAuthMethod.CLIENT_SECRET_POST, converter=ClientAuthMethod
)
"""The auth method used when calling the introspection endpoint."""
client_secret: Optional[str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(str)),
)
"""
The client secret to use when calling the introspection endpoint,
when using any of the client_secret_* client auth methods.
"""
jwk: Optional["JsonWebKey"] = attr.ib(default=None, converter=_parse_jwks)
"""
The JWKS to use when calling the introspection endpoint,
when using the private_key_jwt client auth method.
"""
@client_auth_method.validator
def _check_client_auth_method(
self, attribute: attr.Attribute, value: ClientAuthMethod
) -> None:
# Check that the right client credentials are provided for the client auth method.
if not self.enabled:
return
if value == ClientAuthMethod.PRIVATE_KEY_JWT and self.jwk is None:
raise ConfigError(
"A JWKS must be provided when using the private_key_jwt client auth method",
("experimental", "msc3861", "client_auth_method"),
)
if (
value
in (
ClientAuthMethod.CLIENT_SECRET_POST,
ClientAuthMethod.CLIENT_SECRET_BASIC,
ClientAuthMethod.CLIENT_SECRET_JWT,
)
and self.client_secret is None
):
raise ConfigError(
f"A client secret must be provided when using the {value} client auth method",
("experimental", "msc3861", "client_auth_method"),
)
account_management_url: Optional[str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(str)),
)
"""The URL of the My Account page on the OIDC Provider as per MSC2965."""
admin_token: Optional[str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(str)),
)
"""
A token that should be considered as an admin token.
This is used by the OIDC provider to make admin calls to Synapse.
"""
def check_config_conflicts(self, root: RootConfig) -> None:
"""Checks for any configuration conflicts with other parts of Synapse.
Raises:
ConfigError: If there are any configuration conflicts.
"""
if not self.enabled:
return
if (
root.auth.password_enabled_for_reauth
or root.auth.password_enabled_for_login
):
raise ConfigError(
"Password auth cannot be enabled when OAuth delegation is enabled",
("password_config", "enabled"),
)
if root.registration.enable_registration:
raise ConfigError(
"Registration cannot be enabled when OAuth delegation is enabled",
("enable_registration",),
)
if (
root.oidc.oidc_enabled
or root.saml2.saml2_enabled
or root.cas.cas_enabled
or root.jwt.jwt_enabled
):
raise ConfigError("SSO cannot be enabled when OAuth delegation is enabled")
if bool(root.authproviders.password_providers):
raise ConfigError(
"Password auth providers cannot be enabled when OAuth delegation is enabled"
)
if root.captcha.enable_registration_captcha:
raise ConfigError(
"CAPTCHA cannot be enabled when OAuth delegation is enabled",
("captcha", "enable_registration_captcha"),
)
if root.auth.login_via_existing_enabled:
raise ConfigError(
"Login via existing session cannot be enabled when OAuth delegation is enabled",
("login_via_existing_session", "enabled"),
)
if root.registration.refresh_token_lifetime:
raise ConfigError(
"refresh_token_lifetime cannot be set when OAuth delegation is enabled",
("refresh_token_lifetime",),
)
if root.registration.nonrefreshable_access_token_lifetime:
raise ConfigError(
"nonrefreshable_access_token_lifetime cannot be set when OAuth delegation is enabled",
("nonrefreshable_access_token_lifetime",),
)
if root.registration.session_lifetime:
raise ConfigError(
"session_lifetime cannot be set when OAuth delegation is enabled",
("session_lifetime",),
)
if not root.experimental.msc3970_enabled:
raise ConfigError(
"experimental_features.msc3970_enabled must be 'true' when OAuth delegation is enabled",
("experimental_features", "msc3970_enabled"),
)
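For reference, an illustrative experimental_features fragment (hypothetical values) that this class is built from:

experimental_features = {
    "msc3861": {
        "enabled": True,
        "issuer": "https://auth.example.com/",
        "client_id": "synapse",
        "client_auth_method": "client_secret_basic",
        "client_secret": "<secret>",
    }
}
# check_config_conflicts() would then reject e.g. `enable_registration: true`
# or any enabled SSO/password configuration.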
@attr.s(auto_attribs=True, frozen=True, slots=True)
class MSC3866Config:
@@ -46,9 +247,6 @@ class ExperimentalConfig(Config):
# MSC3026 (busy presence state)
self.msc3026_enabled: bool = experimental.get("msc3026_enabled", False)
# MSC2716 (importing historical messages)
self.msc2716_enabled: bool = experimental.get("msc2716_enabled", False)
# MSC3244 (room version capabilities)
self.msc3244_enabled: bool = experimental.get("msc3244_enabled", True)
@@ -118,13 +316,6 @@ class ExperimentalConfig(Config):
# MSC3881: Remotely toggle push notifications for another client
self.msc3881_enabled: bool = experimental.get("msc3881_enabled", False)
# MSC3882: Allow an existing session to sign in a new session
self.msc3882_enabled: bool = experimental.get("msc3882_enabled", False)
self.msc3882_ui_auth: bool = experimental.get("msc3882_ui_auth", True)
self.msc3882_token_timeout = self.parse_duration(
experimental.get("msc3882_token_timeout", "5m")
)
# MSC3874: Filtering /messages with rel_types / not_rel_types.
self.msc3874_enabled: bool = experimental.get("msc3874_enabled", False)
@@ -164,11 +355,6 @@ class ExperimentalConfig(Config):
# MSC3391: Removing account data.
self.msc3391_enabled = experimental.get("msc3391_enabled", False)
# MSC3952: Intentional mentions, this depends on MSC3966.
self.msc3952_intentional_mentions = experimental.get(
"msc3952_intentional_mentions", False
)
# MSC3958: Do not generate notifications for edits.
self.msc3958_supress_edit_notifs = experimental.get(
"msc3958_supress_edit_notifs", False
@@ -182,8 +368,19 @@ class ExperimentalConfig(Config):
"msc3981_recurse_relations", False
)
# MSC3861: Matrix architecture change to delegate authentication via OIDC
try:
self.msc3861 = MSC3861(**experimental.get("msc3861", {}))
except ValueError as exc:
raise ConfigError(
"Invalid MSC3861 configuration", ("experimental", "msc3861")
) from exc
# MSC3970: Scope transaction IDs to devices
self.msc3970_enabled = experimental.get("msc3970_enabled", False)
self.msc3970_enabled = experimental.get("msc3970_enabled", self.msc3861.enabled)
# Check that none of the other config options conflict with MSC3861 when enabled
self.msc3861.check_config_conflicts(self.root)
# MSC4009: E.164 Matrix IDs
self.msc4009_e164_mxids = experimental.get("msc4009_e164_mxids", False)

View File

@@ -22,6 +22,8 @@ class FederationConfig(Config):
section = "federation"
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
federation_config = config.setdefault("federation", {})
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist: Optional[dict] = None
federation_domain_whitelist = config.get("federation_domain_whitelist", None)
@@ -49,5 +51,19 @@ class FederationConfig(Config):
"allow_device_name_lookup_over_federation", False
)
# Allow for the configuration of timeout, max request retries
# and min/max retry delays in the matrix federation client.
self.client_timeout_ms = Config.parse_duration(
federation_config.get("client_timeout", "60s")
)
self.max_long_retry_delay_ms = Config.parse_duration(
federation_config.get("max_long_retry_delay", "60s")
)
self.max_short_retry_delay_ms = Config.parse_duration(
federation_config.get("max_short_retry_delay", "2s")
)
self.max_long_retries = federation_config.get("max_long_retries", 10)
self.max_short_retries = federation_config.get("max_short_retries", 3)
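For clarity, the resulting defaults (Config.parse_duration yields milliseconds):

# client_timeout_ms        = 60000
# max_long_retry_delay_ms  = 60000
# max_short_retry_delay_ms = 2000
# max_long_retries         = 10
# max_short_retries        = 3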
_METRICS_FOR_DOMAINS_SCHEMA = {"type": "array", "items": {"type": "string"}}

View File

@@ -339,13 +339,6 @@ def check_state_dependent_auth_rules(
if event.type == EventTypes.Redaction:
check_redaction(event.room_version, event, auth_dict)
if (
event.type == EventTypes.MSC2716_INSERTION
or event.type == EventTypes.MSC2716_BATCH
or event.type == EventTypes.MSC2716_MARKER
):
check_historical(event.room_version, event, auth_dict)
logger.debug("Allowing! %s", event)
@@ -365,7 +358,6 @@ LENIENT_EVENT_BYTE_LIMITS_ROOM_VERSIONS = {
RoomVersions.V9,
RoomVersions.MSC3787,
RoomVersions.V10,
RoomVersions.MSC2716v4,
RoomVersions.MSC1767v10,
}
@@ -823,38 +815,6 @@ def check_redaction(
raise AuthError(403, "You don't have permission to redact events")
def check_historical(
room_version_obj: RoomVersion,
event: "EventBase",
auth_events: StateMap["EventBase"],
) -> None:
"""Check whether the event sender is allowed to send historical related
events like "insertion", "batch", and "marker".
Returns:
None
Raises:
AuthError if the event sender is not allowed to send historical related events
("insertion", "batch", and "marker").
"""
# Ignore the auth checks in room versions that do not support historical
# events
if not room_version_obj.msc2716_historical:
return
user_level = get_user_power_level(event.user_id, auth_events)
historical_level = get_named_level(auth_events, "historical", 100)
if user_level < historical_level:
raise UnstableSpecAuthError(
403,
'You don\'t have permission to send historical related events ("insertion", "batch", and "marker")',
errcode=Codes.INSUFFICIENT_POWER,
)
def _check_power_levels(
room_version_obj: RoomVersion,
event: "EventBase",

View File

@@ -198,7 +198,6 @@ class _EventInternalMetadata:
soft_failed: DictProperty[bool] = DictProperty("soft_failed")
proactively_send: DictProperty[bool] = DictProperty("proactively_send")
redacted: DictProperty[bool] = DictProperty("redacted")
historical: DictProperty[bool] = DictProperty("historical")
txn_id: DictProperty[str] = DictProperty("txn_id")
"""The transaction ID, if it was set when the event was created."""
@@ -288,14 +287,6 @@ class _EventInternalMetadata:
"""
return self._dict.get("redacted", False)
def is_historical(self) -> bool:
"""Whether this is a historical message.
This is used by the batchsend historical message endpoint and
is needed to mark the event as backfilled and to skip some checks
like push notifications.
"""
return self._dict.get("historical", False)
def is_notifiable(self) -> bool:
"""Whether this event can trigger a push notification"""
return not self.is_outlier() or self.is_out_of_band_membership()

View File

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, List, Optional, Tuple
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
import attr
from immutabledict import immutabledict
@@ -107,33 +107,32 @@ class EventContext(UnpersistedEventContextBase):
state_delta_due_to_event: If `state_group` and `state_group_before_event` are not None
then this is the delta of the state between the two groups.
prev_group: If it is known, ``state_group``'s prev_group. Note that this being
None does not necessarily mean that ``state_group`` does not have
a prev_group!
state_group_deltas: If not empty, this is a dict collecting a mapping of the state
difference between state groups.
If the event is a state event, this is normally the same as
``state_group_before_event``.
The keys are a tuple of two integers: the initial group and final state group.
The corresponding value is a state map representing the state delta between
these state groups.
If ``state_group`` is None (ie, the event is an outlier), ``prev_group``
will always also be ``None``.
The dictionary is expected to have at most two entries with state groups of:
Note that this is *not* (necessarily) the state group associated with
``_prev_state_ids``.
1. The state group before the event and after the event.
2. The state group preceding the state group before the event and the
state group before the event.
delta_ids: If ``prev_group`` is not None, the state delta between ``prev_group``
and ``state_group``.
This information is collected and stored as part of an optimization for persisting
events.
partial_state: if True, we may be storing this event with a temporary,
incomplete state.
"""
_storage: "StorageControllers"
state_group_deltas: Dict[Tuple[int, int], StateMap[str]]
rejected: Optional[str] = None
_state_group: Optional[int] = None
state_group_before_event: Optional[int] = None
_state_delta_due_to_event: Optional[StateMap[str]] = None
prev_group: Optional[int] = None
delta_ids: Optional[StateMap[str]] = None
app_service: Optional[ApplicationService] = None
partial_state: bool = False
@@ -145,16 +144,14 @@ class EventContext(UnpersistedEventContextBase):
state_group_before_event: Optional[int],
state_delta_due_to_event: Optional[StateMap[str]],
partial_state: bool,
prev_group: Optional[int] = None,
delta_ids: Optional[StateMap[str]] = None,
state_group_deltas: Dict[Tuple[int, int], StateMap[str]],
) -> "EventContext":
return EventContext(
storage=storage,
state_group=state_group,
state_group_before_event=state_group_before_event,
state_delta_due_to_event=state_delta_due_to_event,
prev_group=prev_group,
delta_ids=delta_ids,
state_group_deltas=state_group_deltas,
partial_state=partial_state,
)
@@ -163,7 +160,7 @@ class EventContext(UnpersistedEventContextBase):
storage: "StorageControllers",
) -> "EventContext":
"""Return an EventContext instance suitable for persisting an outlier event"""
return EventContext(storage=storage)
return EventContext(storage=storage, state_group_deltas={})
async def persist(self, event: EventBase) -> "EventContext":
return self
@@ -183,13 +180,15 @@ class EventContext(UnpersistedEventContextBase):
"state_group": self._state_group,
"state_group_before_event": self.state_group_before_event,
"rejected": self.rejected,
"prev_group": self.prev_group,
"state_group_deltas": _encode_state_group_delta(self.state_group_deltas),
"state_delta_due_to_event": _encode_state_dict(
self._state_delta_due_to_event
),
"delta_ids": _encode_state_dict(self.delta_ids),
"app_service_id": self.app_service.id if self.app_service else None,
"partial_state": self.partial_state,
# add dummy delta_ids and prev_group for backwards compatibility
"delta_ids": None,
"prev_group": None,
}
@staticmethod
@@ -204,17 +203,24 @@ class EventContext(UnpersistedEventContextBase):
Returns:
The event context.
"""
# workaround for backwards/forwards compatibility: if the input doesn't have a value
# for "state_group_deltas" just assign an empty dict
state_group_deltas = input.get("state_group_deltas", None)
if state_group_deltas:
state_group_deltas = _decode_state_group_delta(state_group_deltas)
else:
state_group_deltas = {}
context = EventContext(
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
storage=storage,
state_group=input["state_group"],
state_group_before_event=input["state_group_before_event"],
prev_group=input["prev_group"],
state_group_deltas=state_group_deltas,
state_delta_due_to_event=_decode_state_dict(
input["state_delta_due_to_event"]
),
delta_ids=_decode_state_dict(input["delta_ids"]),
rejected=input["rejected"],
partial_state=input.get("partial_state", False),
)
@@ -349,7 +355,7 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
_storage: "StorageControllers"
state_group_before_event: Optional[int]
state_group_after_event: Optional[int]
state_delta_due_to_event: Optional[dict]
state_delta_due_to_event: Optional[StateMap[str]]
prev_group_for_state_group_before_event: Optional[int]
delta_ids_to_state_group_before_event: Optional[StateMap[str]]
partial_state: bool
@@ -380,26 +386,16 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
events_and_persisted_context = []
for event, unpersisted_context in amended_events_and_context:
if event.is_state():
context = EventContext(
storage=unpersisted_context._storage,
state_group=unpersisted_context.state_group_after_event,
state_group_before_event=unpersisted_context.state_group_before_event,
state_delta_due_to_event=unpersisted_context.state_delta_due_to_event,
partial_state=unpersisted_context.partial_state,
prev_group=unpersisted_context.state_group_before_event,
delta_ids=unpersisted_context.state_delta_due_to_event,
)
else:
context = EventContext(
storage=unpersisted_context._storage,
state_group=unpersisted_context.state_group_after_event,
state_group_before_event=unpersisted_context.state_group_before_event,
state_delta_due_to_event=unpersisted_context.state_delta_due_to_event,
partial_state=unpersisted_context.partial_state,
prev_group=unpersisted_context.prev_group_for_state_group_before_event,
delta_ids=unpersisted_context.delta_ids_to_state_group_before_event,
)
state_group_deltas = unpersisted_context._build_state_group_deltas()
context = EventContext(
storage=unpersisted_context._storage,
state_group=unpersisted_context.state_group_after_event,
state_group_before_event=unpersisted_context.state_group_before_event,
state_delta_due_to_event=unpersisted_context.state_delta_due_to_event,
partial_state=unpersisted_context.partial_state,
state_group_deltas=state_group_deltas,
)
events_and_persisted_context.append((event, context))
return events_and_persisted_context
@@ -452,11 +448,11 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
# if the event isn't a state event the state group doesn't change
if not self.state_delta_due_to_event:
state_group_after_event = self.state_group_before_event
self.state_group_after_event = self.state_group_before_event
# otherwise if it is a state event we need to get a state group for it
else:
state_group_after_event = await self._storage.state.store_state_group(
self.state_group_after_event = await self._storage.state.store_state_group(
event.event_id,
event.room_id,
prev_group=self.state_group_before_event,
@@ -464,16 +460,81 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
current_state_ids=None,
)
state_group_deltas = self._build_state_group_deltas()
return EventContext.with_state(
storage=self._storage,
state_group=state_group_after_event,
state_group=self.state_group_after_event,
state_group_before_event=self.state_group_before_event,
state_delta_due_to_event=self.state_delta_due_to_event,
state_group_deltas=state_group_deltas,
partial_state=self.partial_state,
prev_group=self.state_group_before_event,
delta_ids=self.state_delta_due_to_event,
)
def _build_state_group_deltas(self) -> Dict[Tuple[int, int], StateMap]:
"""
Collect deltas between the state groups associated with this context
"""
state_group_deltas = {}
# if we know the state group before the event and after the event, add them and the
# state delta between them to state_group_deltas
if self.state_group_before_event and self.state_group_after_event:
# if we have the state groups we should have the delta
assert self.state_delta_due_to_event is not None
state_group_deltas[
(
self.state_group_before_event,
self.state_group_after_event,
)
] = self.state_delta_due_to_event
# the state group before the event may also have a state group which precedes it, if
# we have that and the state group before the event, add them and the state
# delta between them to state_group_deltas
if (
self.prev_group_for_state_group_before_event
and self.state_group_before_event
):
# if we have both state groups we should have the delta between them
assert self.delta_ids_to_state_group_before_event is not None
state_group_deltas[
(
self.prev_group_for_state_group_before_event,
self.state_group_before_event,
)
] = self.delta_ids_to_state_group_before_event
return state_group_deltas
def _encode_state_group_delta(
state_group_delta: Dict[Tuple[int, int], StateMap[str]]
) -> List[Tuple[int, int, Optional[List[Tuple[str, str, str]]]]]:
if not state_group_delta:
return []
state_group_delta_encoded = []
for key, value in state_group_delta.items():
state_group_delta_encoded.append((key[0], key[1], _encode_state_dict(value)))
return state_group_delta_encoded
def _decode_state_group_delta(
input: List[Tuple[int, int, List[Tuple[str, str, str]]]]
) -> Dict[Tuple[int, int], StateMap[str]]:
if not input:
return {}
state_group_deltas = {}
for state_group_1, state_group_2, state_dict in input:
state_map = _decode_state_dict(state_dict)
assert state_map is not None
state_group_deltas[(state_group_1, state_group_2)] = state_map
return state_group_deltas
def _encode_state_dict(
state_dict: Optional[StateMap[str]],

View File

@@ -164,21 +164,12 @@ def prune_event_dict(room_version: RoomVersion, event_dict: JsonDict) -> JsonDic
if room_version.msc2176_redaction_rules:
add_fields("invite")
if room_version.msc2716_historical:
add_fields("historical")
elif event_type == EventTypes.Aliases and room_version.special_case_aliases_auth:
add_fields("aliases")
elif event_type == EventTypes.RoomHistoryVisibility:
add_fields("history_visibility")
elif event_type == EventTypes.Redaction and room_version.msc2176_redaction_rules:
add_fields("redacts")
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_INSERTION:
add_fields(EventContentFields.MSC2716_NEXT_BATCH_ID)
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_BATCH:
add_fields(EventContentFields.MSC2716_BATCH_ID)
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_MARKER:
add_fields(EventContentFields.MSC2716_INSERTION_EVENT_REFERENCE)
# Protect the rel_type and event_id fields under the m.relates_to field.
if room_version.msc3389_relation_redactions:

View File

@@ -134,13 +134,8 @@ class EventValidator:
)
# If the event contains a mentions key, validate it.
if (
EventContentFields.MSC3952_MENTIONS in event.content
and config.experimental.msc3952_intentional_mentions
):
validate_json_object(
event.content[EventContentFields.MSC3952_MENTIONS], Mentions
)
if EventContentFields.MENTIONS in event.content:
validate_json_object(event.content[EventContentFields.MENTIONS], Mentions)
def _validate_retention(self, event: EventBase) -> None:
"""Checks that an event that defines the retention policy for a room respects the

View File

@@ -260,7 +260,9 @@ class FederationClient(FederationBase):
use_unstable = False
for user_id, one_time_keys in query.items():
for device_id, algorithms in one_time_keys.items():
if any(count > 1 for count in algorithms.values()):
# If more than one algorithm is requested, attempt to use the unstable
# endpoint.
if sum(algorithms.values()) > 1:
use_unstable = True
if algorithms:
# For the stable query, choose only the first algorithm.
@@ -296,6 +298,7 @@ class FederationClient(FederationBase):
else:
logger.debug("Skipping unstable claim client keys API")
# TODO Potentially attempt multiple queries and combine the results?
return await self.transport_layer.claim_client_keys(
user, destination, content, timeout
)
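A worked example of why the condition changed (illustrative algorithm names): a query asking for one key each of two algorithms should use the unstable endpoint, but the old check only fired when a single algorithm was requested more than once:

algorithms = {"signed_curve25519": 1, "signed_curve25519.v2": 1}
any(count > 1 for count in algorithms.values())  # False: old check misses this case
sum(algorithms.values()) > 1                     # True: unstable endpoint is attempted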

View File

@@ -515,7 +515,7 @@ class FederationServer(FederationBase):
logger.error(
"Failed to handle PDU %s",
event_id,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
exc_info=(f.type, f.value, f.getTracebackObject()),
)
return {"error": str(e)}
@@ -944,7 +944,7 @@ class FederationServer(FederationBase):
if not self._is_mine_server_name(authorising_server):
raise SynapseError(
400,
f"Cannot authorise request from resident server: {authorising_server}",
f"Cannot authorise membership event for {authorising_server}. We can only authorise requests from our own homeserver",
)
event.signatures.update(
@@ -1016,7 +1016,9 @@ class FederationServer(FederationBase):
for user_id, device_keys in result.items():
for device_id, keys in device_keys.items():
for key_id, key in keys.items():
json_result.setdefault(user_id, {})[device_id] = {key_id: key}
json_result.setdefault(user_id, {}).setdefault(device_id, {})[
key_id
] = key
logger.info(
"Claimed one-time-keys: %s",
@@ -1247,7 +1249,7 @@ class FederationServer(FederationBase):
logger.error(
"Failed to handle PDU %s",
event.event_id,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
exc_info=(f.type, f.value, f.getTracebackObject()),
)
received_ts = await self.store.remove_received_event_from_staging(
@@ -1291,9 +1293,6 @@ class FederationServer(FederationBase):
return
lock = new_lock
def __str__(self) -> str:
return "<ReplicationLayer(%s)>" % self.server_name
async def exchange_third_party_invite(
self, sender_user_id: str, target_user_id: str, room_id: str, signed: Dict
) -> None:

View File

@@ -109,10 +109,8 @@ was enabled*, Catch-Up Mode is exited and we return to `_transaction_transmissio
If a remote server is unreachable over federation, we back off from that server,
with an exponentially-increasing retry interval.
Whilst we don't automatically retry after the interval, we prevent making new attempts
until such time as the back-off has cleared.
Once the back-off is cleared and a new PDU or EDU arrives for transmission, the transmission
loop resumes and empties the queue by making federation requests.
We automatically retry after the retry interval expires (roughly, the logic to do so
being triggered every minute).
If the backoff grows too large (> 1 hour), the in-memory queue is emptied (to prevent
unbounded growth) and Catch-Up Mode is entered.
@@ -145,7 +143,6 @@ from prometheus_client import Counter
from typing_extensions import Literal
from twisted.internet import defer
from twisted.internet.interfaces import IDelayedCall
import synapse.metrics
from synapse.api.presence import UserPresenceState
@@ -184,14 +181,18 @@ sent_pdus_destination_dist_total = Counter(
"Total number of PDUs queued for sending across all destinations",
)
# Time (in s) after Synapse's startup that we will begin to wake up destinations
# that have catch-up outstanding.
CATCH_UP_STARTUP_DELAY_SEC = 15
# Time (in s) to wait before trying to wake up destinations that have
# catch-up outstanding. This will also be the delay applied at startup
# before trying the same.
# Please note that rate limiting still applies, so while the loop is
# executed every X seconds the destinations may not be woken up because
# they are being rate limited following previous attempt failures.
WAKEUP_RETRY_PERIOD_SEC = 60
# Time (in s) to wait in between waking up each destination, i.e. one destination
# will be woken up every <x> seconds after Synapse's startup until we have woken
# every destination that has outstanding catch-up.
CATCH_UP_STARTUP_INTERVAL_SEC = 5
# will be woken up every <x> seconds until we have woken every destination that
# has outstanding catch-up.
WAKEUP_INTERVAL_BETWEEN_DESTINATIONS_SEC = 5
class AbstractFederationSender(metaclass=abc.ABCMeta):
@@ -415,12 +416,10 @@ class FederationSender(AbstractFederationSender):
/ hs.config.ratelimiting.federation_rr_transactions_per_room_per_second
)
# wake up destinations that have outstanding PDUs to be caught up
self._catchup_after_startup_timer: Optional[
IDelayedCall
] = self.clock.call_later(
CATCH_UP_STARTUP_DELAY_SEC,
# Regularly wake up destinations that have outstanding PDUs to be caught up
self.clock.looping_call(
run_as_background_process,
WAKEUP_RETRY_PERIOD_SEC * 1000.0,
"wake_destinations_needing_catchup",
self._wake_destinations_needing_catchup,
)
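For intuition, a minimal sketch of the scheduling change using Twisted directly (LoopingCall is what Synapse's Clock wraps here; the callable name is an assumption):

from twisted.internet.task import LoopingCall

loop = LoopingCall(wake_destinations_needing_catchup)  # hypothetical callable
loop.start(60.0, now=False)  # every WAKEUP_RETRY_PERIOD_SEC, not just once at startup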
@@ -966,7 +965,6 @@ class FederationSender(AbstractFederationSender):
if not destinations_to_wake:
# finished waking all destinations!
self._catchup_after_startup_timer = None
break
last_processed = destinations_to_wake[-1]
@@ -983,4 +981,4 @@ class FederationSender(AbstractFederationSender):
last_processed,
)
self.wake_destination(destination)
await self.clock.sleep(CATCH_UP_STARTUP_INTERVAL_SEC)
await self.clock.sleep(WAKEUP_INTERVAL_BETWEEN_DESTINATIONS_SEC)

View File

@@ -164,7 +164,7 @@ class AccountValidityHandler:
try:
user_display_name = await self.store.get_profile_displayname(
UserID.from_string(user_id).localpart
UserID.from_string(user_id)
)
if user_display_name is None:
user_display_name = user_id

View File

@@ -89,7 +89,7 @@ class AdminHandler:
}
# Add additional user metadata
profile = await self._store.get_profileinfo(user.localpart)
profile = await self._store.get_profileinfo(user)
threepids = await self._store.user_get_threepids(user.to_string())
external_ids = [
({"auth_provider": auth_provider, "external_id": external_id})

View File

@@ -274,6 +274,8 @@ class AuthHandler:
# response.
self._extra_attributes: Dict[str, SsoLoginExtraAttributes] = {}
self.msc3861_oauth_delegation_enabled = hs.config.experimental.msc3861.enabled
async def validate_user_via_ui_auth(
self,
requester: Requester,
@@ -322,8 +324,12 @@ class AuthHandler:
LimitExceededError if the ratelimiter's failed request count for this
user is too high to proceed
"""
if self.msc3861_oauth_delegation_enabled:
raise SynapseError(
HTTPStatus.INTERNAL_SERVER_ERROR, "UIA shouldn't be used with MSC3861"
)
if not requester.access_token_id:
raise ValueError("Cannot validate a user without an access token")
if can_skip_ui_auth and self._ui_auth_session_timeout:
@@ -1753,7 +1759,7 @@ class AuthHandler:
return
user_profile_data = await self.store.get_profileinfo(
UserID.from_string(registered_user_id).localpart
UserID.from_string(registered_user_id)
)
# Store any extra attributes which will be passed in the login response.

View File

@@ -297,5 +297,5 @@ class DeactivateAccountHandler:
# Add the user to the directory, if necessary. Note that
# this must be done after the user is re-activated, because
# deactivated users are excluded from the user directory.
profile = await self.store.get_profileinfo(user.localpart)
profile = await self.store.get_profileinfo(user)
await self.user_directory_handler.handle_local_profile_change(user_id, profile)

View File

@@ -105,14 +105,12 @@ backfill_processing_before_timer = Histogram(
)
# TODO: We can refactor this away now that there is only one backfill point again
class _BackfillPointType(Enum):
# a regular backwards extremity (ie, an event which we don't yet have, but which
# is referred to by other events in the DAG)
BACKWARDS_EXTREMITY = enum.auto()
# an MSC2716 "insertion event"
INSERTION_PONT = enum.auto()
@attr.s(slots=True, auto_attribs=True, frozen=True)
class _BackfillPoint:
@@ -200,6 +198,7 @@ class FederationHandler:
)
@trace
@tag_args
async def maybe_backfill(
self, room_id: str, current_depth: int, limit: int
) -> bool:
@@ -214,6 +213,9 @@ class FederationHandler:
limit: The number of events that the pagination request will
return. This is used as part of the heuristic to decide if we
should back paginate.
Returns:
True if we actually tried to backfill something, otherwise False.
"""
# Starting the processing time here so we can include the room backfill
# linearizer lock queue in the timing
@@ -227,6 +229,8 @@ class FederationHandler:
processing_start_time=processing_start_time,
)
@trace
@tag_args
async def _maybe_backfill_inner(
self,
room_id: str,
@@ -247,6 +251,9 @@ class FederationHandler:
limit: The max number of events to request from the remote federated server.
processing_start_time: The time when `maybe_backfill` started processing.
Only used for timing. If `None`, no timing observation will be made.
Returns:
True if we actually tried to backfill something, otherwise False.
"""
backwards_extremities = [
_BackfillPoint(event_id, depth, _BackfillPointType.BACKWARDS_EXTREMITY)
@@ -264,32 +271,10 @@ class FederationHandler:
)
]
insertion_events_to_be_backfilled: List[_BackfillPoint] = []
if self.hs.config.experimental.msc2716_enabled:
insertion_events_to_be_backfilled = [
_BackfillPoint(event_id, depth, _BackfillPointType.INSERTION_PONT)
for event_id, depth in await self.store.get_insertion_event_backward_extremities_in_room(
room_id=room_id,
current_depth=current_depth,
# We only need to end up with 5 extremities combined with
# the backfill points to make the `/backfill` request ...
# (see the other comment above for more context).
limit=50,
)
]
logger.debug(
"_maybe_backfill_inner: backwards_extremities=%s insertion_events_to_be_backfilled=%s",
backwards_extremities,
insertion_events_to_be_backfilled,
)
# we now have a list of potential places to backpaginate from. We prefer to
# start with the most recent (ie, max depth), so let's sort the list.
sorted_backfill_points: List[_BackfillPoint] = sorted(
itertools.chain(
backwards_extremities,
insertion_events_to_be_backfilled,
),
backwards_extremities,
key=lambda e: -int(e.depth),
)
@@ -302,15 +287,30 @@ class FederationHandler:
len(sorted_backfill_points),
sorted_backfill_points,
)
set_tag(
SynapseTags.RESULT_PREFIX + "sorted_backfill_points",
str(sorted_backfill_points),
)
set_tag(
SynapseTags.RESULT_PREFIX + "sorted_backfill_points.length",
str(len(sorted_backfill_points)),
)
# If we have no backfill points lower than the `current_depth` then
# either we can a) bail or b) still attempt to backfill. We opt to try
# backfilling anyway just in case we do get relevant events.
# If we have no backfill points lower than the `current_depth` then either we
# can a) bail or b) still attempt to backfill. We opt to try backfilling anyway
# just in case we do get relevant events. This is good for eventual consistency's
# sake but we don't need to block the client for something that is just as
# likely not to return anything relevant so we backfill in the background. The
# only way this could return something relevant is if we discover a new branch
# of history that extends all the way back to where we are currently paginating
# and it's within the 100 events that are returned from `/backfill`.
if not sorted_backfill_points and current_depth != MAX_DEPTH:
logger.debug(
"_maybe_backfill_inner: all backfill points are *after* current depth. Trying again with later backfill points."
)
return await self._maybe_backfill_inner(
run_as_background_process(
"_maybe_backfill_inner_anyway_with_max_depth",
self._maybe_backfill_inner,
room_id=room_id,
# We use `MAX_DEPTH` so that we find all backfill points next
# time (all events are below the `MAX_DEPTH`)
@@ -321,6 +321,9 @@ class FederationHandler:
# overall otherwise the smaller one will throw off the results.
processing_start_time=None,
)
# We return `False` because we're backfilling in the background and there is
# no new events immediately for the caller to know about yet.
return False
# Even after recursing with `MAX_DEPTH`, we didn't find any
# backward extremities to backfill from.
@@ -384,10 +387,7 @@ class FederationHandler:
# event but not anything before it. This would require looking at the
# state *before* the event, ignoring the special casing certain event
# types have.
if bp.type == _BackfillPointType.INSERTION_PONT:
event_ids_to_check = [bp.event_id]
else:
event_ids_to_check = await self.store.get_successor_events(bp.event_id)
event_ids_to_check = await self.store.get_successor_events(bp.event_id)
events_to_check = await self.store.get_events_as_list(
event_ids_to_check,

View File

@@ -601,18 +601,6 @@ class FederationEventHandler:
room_id, [(event, context)]
)
# If we're joining the room again, check if there is new marker
# state indicating that there is new history imported somewhere in
# the DAG. Multiple markers can exist in the current state with
# unique state_keys.
#
# Do this after the state from the remote join was persisted (via
# `persist_events_and_notify`). Otherwise we can run into a
# situation where the create event doesn't exist yet in the
# `current_state_events`
for e in state:
await self._handle_marker_event(origin, e)
return stream_id_after_persist
async def update_state_for_partial_state_event(
@@ -915,13 +903,6 @@ class FederationEventHandler:
)
)
# We construct the event lists in source order from `/backfill` response because
# it's a) easiest, but also b) the order in which we process things matters for
# MSC2716 historical batches because many historical events are all at the same
# `depth` and we rely on the tenuous sort that the other server gave us and hope
# they're doing their best. The brittle nature of this ordering for historical
# messages over federation is one of the reasons why we don't want to continue
# on MSC2716 until we have online topological ordering.
events_with_failed_pull_attempts, fresh_events = partition(
new_events, lambda e: e.event_id in event_ids_with_failed_pull_attempts
)
@@ -1460,8 +1441,6 @@ class FederationEventHandler:
await self._run_push_actions_and_persist_event(event, context, backfilled)
await self._handle_marker_event(origin, event)
if backfilled or context.rejected:
return
@@ -1559,94 +1538,6 @@ class FederationEventHandler:
except Exception:
logger.exception("Failed to resync device for %s", sender)
@trace
async def _handle_marker_event(self, origin: str, marker_event: EventBase) -> None:
"""Handles backfilling the insertion event when we receive a marker
event that points to one.
Args:
origin: Origin of the event. Will be called to get the insertion event
marker_event: The event to process
"""
if marker_event.type != EventTypes.MSC2716_MARKER:
# Not a marker event
return
if marker_event.rejected_reason is not None:
# Rejected event
return
# Skip processing a marker event if the room version doesn't
# support it or the event is not from the room creator.
room_version = await self._store.get_room_version(marker_event.room_id)
create_event = await self._store.get_create_event_for_room(marker_event.room_id)
if not room_version.msc2175_implicit_room_creator:
room_creator = create_event.content.get(EventContentFields.ROOM_CREATOR)
else:
room_creator = create_event.sender
if not room_version.msc2716_historical and (
not self._config.experimental.msc2716_enabled
or marker_event.sender != room_creator
):
return
logger.debug("_handle_marker_event: received %s", marker_event)
insertion_event_id = marker_event.content.get(
EventContentFields.MSC2716_INSERTION_EVENT_REFERENCE
)
if insertion_event_id is None:
# Nothing to retrieve then (invalid marker)
return
already_seen_insertion_event = await self._store.have_seen_event(
marker_event.room_id, insertion_event_id
)
if already_seen_insertion_event:
# No need to process a marker again if we have already seen the
# insertion event that it was pointing to
return
logger.debug(
"_handle_marker_event: backfilling insertion event %s", insertion_event_id
)
await self._get_events_and_persist(
origin,
marker_event.room_id,
[insertion_event_id],
)
insertion_event = await self._store.get_event(
insertion_event_id, allow_none=True
)
if insertion_event is None:
logger.warning(
"_handle_marker_event: server %s didn't return insertion event %s for marker %s",
origin,
insertion_event_id,
marker_event.event_id,
)
return
logger.debug(
"_handle_marker_event: succesfully backfilled insertion event %s from marker event %s",
insertion_event,
marker_event,
)
await self._store.insert_insertion_extremity(
insertion_event_id, marker_event.room_id
)
logger.debug(
"_handle_marker_event: insertion extremity added for %s from marker event %s",
insertion_event,
marker_event,
)
async def backfill_event_id(
self, destinations: List[str], room_id: str, event_id: str
) -> PulledPduInfo:

View File

@@ -60,7 +60,6 @@ from synapse.replication.http.send_event import ReplicationSendEventRestServlet
from synapse.replication.http.send_events import ReplicationSendEventsRestServlet
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
from synapse.types import (
MutableStateMap,
PersistedEventPosition,
Requester,
RoomAlias,
@@ -573,7 +572,6 @@ class EventCreationHandler:
state_event_ids: Optional[List[str]] = None,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
depth: Optional[int] = None,
state_map: Optional[StateMap[str]] = None,
for_batch: bool = False,
@@ -599,7 +597,7 @@ class EventCreationHandler:
allow_no_prev_events: Whether to allow this event to be created with an empty
list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases (previously useful for MSC2716).
prev_event_ids:
the forward extremities to use as the prev_events for the
new event.
@@ -614,13 +612,10 @@ class EventCreationHandler:
If non-None, prev_event_ids must also be provided.
state_event_ids:
The full state at a given event. This was previously used particularly
by the MSC2716 /batch_send endpoint. This should normally be left as
None, which will cause the auth_event_ids to be calculated based on the
room state at the prev_events.
require_consent: Whether to check if the requester has
consented to the privacy policy.
@@ -629,10 +624,6 @@ class EventCreationHandler:
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
@@ -717,8 +708,6 @@ class EventCreationHandler:
builder.internal_metadata.outlier = outlier
builder.internal_metadata.historical = historical
event, unpersisted_context = await self.create_new_client_event(
builder=builder,
requester=requester,
@@ -947,7 +936,6 @@ class EventCreationHandler:
txn_id: Optional[str] = None,
ignore_shadow_ban: bool = False,
outlier: bool = False,
historical: bool = False,
depth: Optional[int] = None,
) -> Tuple[EventBase, int]:
"""
@@ -961,19 +949,16 @@ class EventCreationHandler:
allow_no_prev_events: Whether to allow this event to be created with an empty
list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases (previously useful for MSC2716).
prev_event_ids:
The event IDs to use as the prev events.
Should normally be left as None to automatically request them
from the database.
state_event_ids:
The full state at a given event. This was previously used particularly
by the MSC2716 /batch_send endpoint. This should normally be left as
None, which will cause the auth_event_ids to be calculated based on the
room state at the prev_events.
ratelimit: Whether to rate limit this send.
txn_id: The transaction ID.
ignore_shadow_ban: True if shadow-banned users should be allowed to
@@ -981,9 +966,6 @@ class EventCreationHandler:
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
@@ -1053,7 +1035,6 @@ class EventCreationHandler:
prev_event_ids=prev_event_ids,
state_event_ids=state_event_ids,
outlier=outlier,
historical=historical,
depth=depth,
)
context = await unpersisted_context.persist(event)
@@ -1145,7 +1126,7 @@ class EventCreationHandler:
allow_no_prev_events: Whether to allow this event to be created with an empty
list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases (previously useful for MSC2716).
prev_event_ids:
the forward extremities to use as the prev_events for the
new event.
@@ -1158,13 +1139,10 @@ class EventCreationHandler:
based on the room state at the prev_events.
state_event_ids:
The full state at a given event. This was previously used particularly
by the MSC2716 /batch_send endpoint. This should normally be left as
None, which will cause the auth_event_ids to be calculated based on the
room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
@@ -1261,52 +1239,6 @@ class EventCreationHandler:
if builder.internal_metadata.outlier:
event.internal_metadata.outlier = True
context = EventContext.for_outlier(self._storage_controllers)
elif (
event.type == EventTypes.MSC2716_INSERTION
and state_event_ids
and builder.internal_metadata.is_historical()
):
# Add explicit state to the insertion event so it has state to derive
# from even though it's floating with no `prev_events`. The rest of
# the batch can derive from this state and state_group.
#
# TODO(faster_joins): figure out how this works, and make sure that the
# old state is complete.
# https://github.com/matrix-org/synapse/issues/13003
metadata = await self.store.get_metadata_for_events(state_event_ids)
state_map_for_event: MutableStateMap[str] = {}
for state_id in state_event_ids:
data = metadata.get(state_id)
if data is None:
# We're trying to persist a new historical batch of events
# with the given state, e.g. via
# `RoomBatchSendEventRestServlet`. The state can be inferred
# by Synapse or set directly by the client.
#
# Either way, we should have persisted all the state before
# getting here.
raise Exception(
f"State event {state_id} not found in DB,"
" Synapse should have persisted it before using it."
)
if data.state_key is None:
raise Exception(
f"Trying to set non-state event {state_id} as state"
)
state_map_for_event[(data.event_type, data.state_key)] = state_id
# TODO(faster_joins): check how MSC2716 works and whether we can have
# partial state here
# https://github.com/matrix-org/synapse/issues/13003
context = await self.state.calculate_context_info(
event,
state_ids_before_event=state_map_for_event,
partial_state=False,
)
else:
context = await self.state.calculate_context_info(event)
@@ -1876,28 +1808,6 @@ class EventCreationHandler:
403, "Redacting server ACL events is not permitted"
)
# Add a little safety stop-gap to prevent people from trying to
# redact MSC2716 related events when they're in a room version
# which does not support it yet. We allow people to use MSC2716
# events in existing room versions but only from the room
# creator since it does not require any changes to the auth
# rules and, in effect, the redaction algorithm. In the
# supported room version, we add the `historical` power level to
# auth the MSC2716 related events and adjust the redaction
# algorithm to keep the `historical` field around (redacting an
# event should only strip fields which don't affect the
# structural protocol level).
is_msc2716_event = (
original_event.type == EventTypes.MSC2716_INSERTION
or original_event.type == EventTypes.MSC2716_BATCH
or original_event.type == EventTypes.MSC2716_MARKER
)
if not room_version_obj.msc2716_historical and is_msc2716_event:
raise AuthError(
403,
"Redacting MSC2716 events is not supported in this room version",
)
event_types = event_auth.auth_types_for_event(event.room_version, event)
prev_state_ids = await context.get_prev_state_ids(
StateFilter.from_types(event_types)
@@ -1935,58 +1845,12 @@ class EventCreationHandler:
if prev_state_ids:
raise AuthError(403, "Changing the room create event is forbidden")
if event.type == EventTypes.MSC2716_INSERTION:
room_version = await self.store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
create_event = await self.store.get_create_event_for_room(event.room_id)
if not room_version_obj.msc2175_implicit_room_creator:
room_creator = create_event.content.get(
EventContentFields.ROOM_CREATOR
)
else:
room_creator = create_event.sender
# Only check an insertion event if the room version
# supports it or the event is from the room creator.
if room_version_obj.msc2716_historical or (
self.config.experimental.msc2716_enabled
and event.sender == room_creator
):
next_batch_id = event.content.get(
EventContentFields.MSC2716_NEXT_BATCH_ID
)
conflicting_insertion_event_id = None
if next_batch_id:
conflicting_insertion_event_id = (
await self.store.get_insertion_event_id_by_batch_id(
event.room_id, next_batch_id
)
)
if conflicting_insertion_event_id is not None:
# The current insertion event that we're processing is invalid
# because an insertion event already exists in the room with the
# same next_batch_id. We can't allow multiple because the batch
# pointing will get weird, e.g. we can't determine which insertion
# event the batch event is pointing to.
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Another insertion event already exists with the same next_batch_id",
errcode=Codes.INVALID_PARAM,
)
# Mark any `m.historical` messages as backfilled so they don't appear
# in `/sync` and have the proper decrementing `stream_ordering` as we import
backfilled = False
if event.internal_metadata.is_historical():
backfilled = True
assert self._storage_controllers.persistence is not None
(
persisted_events,
max_stream_token,
) = await self._storage_controllers.persistence.persist_events(
events_and_context,
)
events_and_pos = []

View File

@@ -1354,7 +1354,7 @@ class OidcProvider:
finish_request(request)
class LogoutToken(JWTClaims): # type: ignore[misc]
"""
Holds and verifies claims of a logout token, as per
https://openid.net/specs/openid-connect-backchannel-1_0.html#LogoutToken
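# Non-normative illustration (from the OpenID back-channel logout spec, not
# this diff): a logout token is a JWT whose claims look roughly like this;
# `sub` and/or `sid` must be present, and a `nonce` claim must be absent.
example_logout_token_claims = {
    "iss": "https://issuer.example.org",
    "aud": "my-client-id",
    "iat": 1690000000,
    "jti": "bWJq",
    "sub": "an-opaque-subject-id",
    "sid": "an-opaque-session-id",
    "events": {"http://schemas.openid.net/event/backchannel-logout": {}},
}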

View File

@@ -40,6 +40,11 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# How many single event gaps we tolerate returning in a `/messages` response before we
# backfill and try to fill in the history. This is an arbitrarily picked number so feel
# free to tune it in the future.
BACKFILL_BECAUSE_TOO_MANY_GAPS_THRESHOLD = 3
@attr.s(slots=True, auto_attribs=True)
class PurgeStatus:
@@ -360,7 +365,7 @@ class PaginationHandler:
except Exception:
f = Failure()
logger.error(
"[purge] failed", exc_info=(f.type, f.value, f.getTracebackObject()) # type: ignore
"[purge] failed", exc_info=(f.type, f.value, f.getTracebackObject())
)
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
self._purges_by_id[purge_id].error = f.getErrorMessage()
@@ -486,35 +491,35 @@ class PaginationHandler:
room_id, room_token.stream
)
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
if (
pagin_config.direction == Direction.BACKWARDS
and not use_admin_priviledge
and membership == Membership.LEAVE
):
# This is only None if the room is world_readable, in which
# case "JOIN" would have been returned.
assert member_event_id
leave_token = await self.store.get_topological_token_for_event(
member_event_id
)
assert leave_token.topological is not None
if leave_token.topological < curr_topo:
from_token = from_token.copy_and_replace(
StreamKeyType.ROOM, leave_token
)
to_room_key = None
if pagin_config.to_token:
to_room_key = pagin_config.to_token.room_key
# Initially fetch the events from the database. With any luck, we can return
# these without blocking on backfill (handled below).
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
@@ -524,6 +529,94 @@ class PaginationHandler:
event_filter=event_filter,
)
if pagin_config.direction == Direction.BACKWARDS:
# We use a `Set` because there can be multiple events at a given depth
# and we only care about looking at the unique continuum of depths to
# find gaps.
event_depths: Set[int] = {event.depth for event in events}
sorted_event_depths = sorted(event_depths)
# Inspect the depths of the returned events to see if there are any gaps
found_big_gap = False
number_of_gaps = 0
previous_event_depth = (
sorted_event_depths[0] if len(sorted_event_depths) > 0 else 0
)
for event_depth in sorted_event_depths:
# We don't expect a negative depth but we'll just deal with it in
# any case by taking the absolute value to get the true gap between
# any two integers.
depth_gap = abs(event_depth - previous_event_depth)
# A `depth_gap` of 1 is a normal continuous chain to the next event
# (1 <-- 2 <-- 3) so anything larger indicates a missing event (it's
# also possible there is no event at a given depth but we can't ever
# know that for sure)
if depth_gap > 1:
number_of_gaps += 1
# We only tolerate a small number of single-event gaps in the
# returned events because those are most likely just events we've
# failed to pull in the past. Anything longer than that is probably
# a sign that we're missing a decent chunk of history and we should
# try to backfill it.
#
# XXX: It's possible we could tolerate longer gaps if we checked
# that a given event's `prev_events` are ones that have failed pull
# attempts, and we could just treat it like a dead branch of history
# for now, or at least something that we don't need to block the
# client on to try pulling.
#
# XXX: If we had something like MSC3871 to indicate gaps in the
# timeline to the client, we could also get away with any sized gap
# and just have the client refetch the holes as they see fit.
if depth_gap > 2:
found_big_gap = True
break
previous_event_depth = event_depth
# Backfill in the foreground if we found a big gap, have too many holes,
# or we don't have enough events to fill the limit that the client asked
# for.
missing_too_many_events = (
number_of_gaps > BACKFILL_BECAUSE_TOO_MANY_GAPS_THRESHOLD
)
not_enough_events_to_fill_response = len(events) < pagin_config.limit
if (
found_big_gap
or missing_too_many_events
or not_enough_events_to_fill_response
):
did_backfill = (
await self.hs.get_federation_handler().maybe_backfill(
room_id,
curr_topo,
limit=pagin_config.limit,
)
)
# If we did backfill something, refetch the events from the database to
# catch anything new that might have been added since we last fetched.
if did_backfill:
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
else:
# Otherwise, we can backfill in the background for eventual
# consistency's sake but we don't need to block the client waiting
# for a costly federation call and processing.
run_as_background_process(
"maybe_backfill_in_the_background",
self.hs.get_federation_handler().maybe_backfill,
room_id,
curr_topo,
limit=pagin_config.limit,
)
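# Editorial re-statement (illustrative, not the code Synapse runs) of the gap
# heuristic above as a small pure function over the set of event depths:
from typing import Iterable, Tuple

def classify_gaps(depths: Iterable[int]) -> Tuple[int, bool]:
    """Return (number_of_gaps, found_big_gap) for a collection of depths."""
    sorted_depths = sorted(set(depths))
    number_of_gaps = 0
    found_big_gap = False
    previous = sorted_depths[0] if sorted_depths else 0
    for depth in sorted_depths:
        gap = abs(depth - previous)
        if gap > 1:  # at least one missing depth between events
            number_of_gaps += 1
            if gap > 2:  # more than a single missing event
                found_big_gap = True
                break
        previous = depth
    return number_of_gaps, found_big_gap

assert classify_gaps([1, 2, 3, 5, 6]) == (1, False)  # one single-event hole
assert classify_gaps([1, 2, 6, 7]) == (1, True)  # a big gap forces backfill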
next_token = from_token.copy_and_replace(StreamKeyType.ROOM, next_key)
# if no events are returned from pagination, that implies
@@ -689,7 +782,7 @@ class PaginationHandler:
f = Failure()
logger.error(
"failed",
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
exc_info=(f.type, f.value, f.getTracebackObject()),
)
self._delete_by_id[delete_id].status = DeleteStatus.STATUS_FAILED
self._delete_by_id[delete_id].error = f.getErrorMessage()

View File

@@ -648,7 +648,6 @@ class PresenceHandler(BasePresenceHandler):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.hs = hs
self.server_name = hs.hostname
self.wheel_timer: WheelTimer[str] = WheelTimer()
self.notifier = hs.get_notifier()
self._presence_enabled = hs.config.server.use_presence

View File

@@ -67,7 +67,7 @@ class ProfileHandler:
target_user = UserID.from_string(user_id)
if self.hs.is_mine(target_user):
profileinfo = await self.store.get_profileinfo(target_user)
if profileinfo.display_name is None:
raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
@@ -99,9 +99,7 @@ class ProfileHandler:
async def get_displayname(self, target_user: UserID) -> Optional[str]:
if self.hs.is_mine(target_user):
try:
displayname = await self.store.get_profile_displayname(target_user)
except StoreError as e:
if e.code == 404:
raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
@@ -147,7 +145,7 @@ class ProfileHandler:
raise AuthError(400, "Cannot set another user's displayname")
if not by_admin and not self.hs.config.registration.enable_set_displayname:
profile = await self.store.get_profileinfo(target_user)
if profile.display_name:
raise SynapseError(
400,
@@ -180,7 +178,7 @@ class ProfileHandler:
await self.store.set_profile_displayname(target_user, displayname_to_set)
profile = await self.store.get_profileinfo(target_user)
await self.user_directory_handler.handle_local_profile_change(
target_user.to_string(), profile
)
@@ -194,9 +192,7 @@ class ProfileHandler:
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
if self.hs.is_mine(target_user):
try:
avatar_url = await self.store.get_profile_avatar_url(target_user)
except StoreError as e:
if e.code == 404:
raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
@@ -241,7 +237,7 @@ class ProfileHandler:
raise AuthError(400, "Cannot set another user's avatar_url")
if not by_admin and not self.hs.config.registration.enable_set_avatar_url:
profile = await self.store.get_profileinfo(target_user)
if profile.avatar_url:
raise SynapseError(
400, "Changing avatar is disabled on this server", Codes.FORBIDDEN
@@ -272,7 +268,7 @@ class ProfileHandler:
await self.store.set_profile_avatar_url(target_user, avatar_url_to_set)
profile = await self.store.get_profileinfo(target_user)
await self.user_directory_handler.handle_local_profile_change(
target_user.to_string(), profile
)
@@ -369,14 +365,10 @@ class ProfileHandler:
response = {}
try:
if just_field is None or just_field == "displayname":
response["displayname"] = await self.store.get_profile_displayname(
user.localpart
)
response["displayname"] = await self.store.get_profile_displayname(user)
if just_field is None or just_field == "avatar_url":
response["avatar_url"] = await self.store.get_profile_avatar_url(
user.localpart
)
response["avatar_url"] = await self.store.get_profile_avatar_url(user)
except StoreError as e:
if e.code == 404:
raise SynapseError(404, "Profile was not found", Codes.NOT_FOUND)
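# Editorial note (not part of the diff): the profile storage methods in this
# hunk now take the full `UserID` rather than just its localpart. Illustration:
from synapse.types import UserID

user = UserID.from_string("@alice:example.org")
assert user.localpart == "alice"
# before: await self.store.get_profile_displayname(user.localpart)
# after:  await self.store.get_profile_displayname(user)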

View File

@@ -27,7 +27,6 @@ logger = logging.getLogger(__name__)
class ReadMarkerHandler:
def __init__(self, hs: "HomeServer"):
self.server_name = hs.config.server.server_name
self.store = hs.get_datastores().main
self.account_data_handler = hs.get_account_data_handler()
self.read_marker_linearizer = Linearizer(name="read_marker")

View File

@@ -315,7 +315,7 @@ class RegistrationHandler:
approved=approved,
)
profile = await self.store.get_profileinfo(user)
await self.user_directory_handler.handle_local_profile_change(
user_id, profile
)

View File

@@ -205,16 +205,22 @@ class RelationsHandler:
event_id: The event IDs to look and redact relations of.
initial_redaction_event: The redaction for the event referred to by
event_id.
relation_types: The types of relations to look for. If "*" is in the list,
all related events will be redacted regardless of the type.
Raises:
ShadowBanError if the requester is shadow-banned
"""
if "*" in relation_types:
    related_event_ids = await self._main_store.get_all_relations_for_event(
        event_id
    )
else:
    related_event_ids = (
        await self._main_store.get_all_relations_for_event_with_types(
            event_id, relation_types
        )
    )
for related_event_id in related_event_ids:
try:

View File

@@ -872,6 +872,8 @@ class RoomCreationHandler:
visibility = config.get("visibility", "private")
is_public = visibility == "public"
self._validate_room_config(config, visibility)
room_id = await self._generate_and_create_room_id(
creator_id=user_id,
is_public=is_public,
@@ -1111,20 +1113,7 @@ class RoomCreationHandler:
return new_event, new_unpersisted_context
visibility = room_config.get("visibility", "private")
preset_config = room_config.get(
"preset",
RoomCreationPreset.PRIVATE_CHAT
if visibility == "private"
else RoomCreationPreset.PUBLIC_CHAT,
)
try:
config = self._presets_dict[preset_config]
except KeyError:
raise SynapseError(
400, f"'{preset_config}' is not a valid preset", errcode=Codes.BAD_JSON
)
preset_config, config = self._room_preset_config(room_config)
# MSC2175 removes the creator field from the create event.
if not room_version.msc2175_implicit_room_creator:
@@ -1306,6 +1295,65 @@ class RoomCreationHandler:
assert last_event.internal_metadata.stream_ordering is not None
return last_event.internal_metadata.stream_ordering, last_event.event_id, depth
def _validate_room_config(
self,
config: JsonDict,
visibility: str,
) -> None:
"""Checks configuration parameters for a /createRoom request.
If validation detects invalid parameters an exception may be raised to
cause room creation to be aborted and an error response to be returned
to the client.
Args:
config: A dict of configuration options. Originally from the body of
the /createRoom request
visibility: One of "public" or "private"
"""
# Validate the requested preset, raise a 400 error if not valid
preset_name, preset_config = self._room_preset_config(config)
# If the user is trying to create an encrypted room and this is forbidden
# by the configured default_power_level_content_override, then reject the
# request before the room is created.
raw_initial_state = config.get("initial_state", [])
room_encryption_event = any(
s.get("type", "") == EventTypes.RoomEncryption for s in raw_initial_state
)
if preset_config["encrypted"] or room_encryption_event:
if self._default_power_level_content_override:
override = self._default_power_level_content_override.get(preset_name)
if override is not None:
event_levels = override.get("events", {})
room_admin_level = event_levels.get(EventTypes.PowerLevels, 100)
encryption_level = event_levels.get(EventTypes.RoomEncryption, 100)
if encryption_level > room_admin_level:
raise SynapseError(
403,
f"You cannot create an encrypted room. user_level ({room_admin_level}) < send_level ({encryption_level})",
)
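# Worked illustration with hypothetical numbers (not from the diff): if the
# override for this preset grants `m.room.encryption` a send level of 150
# while room admins sit at power level 100, encrypted room creation is
# rejected with the 403 above.
override = {"events": {"m.room.encryption": 150, "m.room.power_levels": 100}}
event_levels = override.get("events", {})
room_admin_level = event_levels.get("m.room.power_levels", 100)  # -> 100
encryption_level = event_levels.get("m.room.encryption", 100)  # -> 150
assert encryption_level > room_admin_level  # -> SynapseError(403, ...)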
def _room_preset_config(self, room_config: JsonDict) -> Tuple[str, dict]:
# The spec says rooms should default to private visibility if
# `visibility` is not specified.
visibility = room_config.get("visibility", "private")
preset_name = room_config.get(
"preset",
RoomCreationPreset.PRIVATE_CHAT
if visibility == "private"
else RoomCreationPreset.PUBLIC_CHAT,
)
try:
preset_config = self._presets_dict[preset_name]
except KeyError:
raise SynapseError(
400, f"'{preset_name}' is not a valid preset", errcode=Codes.BAD_JSON
)
return preset_name, preset_config
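# Hedged sketch (not from the diff) of the default-preset rule the helper
# implements, assuming the standard presets from synapse.api.constants; the
# real helper additionally raises SynapseError(400) for unknown presets:
from synapse.api.constants import RoomCreationPreset

def pick_preset(room_config: dict) -> str:
    visibility = room_config.get("visibility", "private")
    return room_config.get(
        "preset",
        RoomCreationPreset.PRIVATE_CHAT
        if visibility == "private"
        else RoomCreationPreset.PUBLIC_CHAT,
    )

assert pick_preset({}) == RoomCreationPreset.PRIVATE_CHAT
assert pick_preset({"visibility": "public"}) == RoomCreationPreset.PUBLIC_CHAT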
def _generate_room_id(self) -> str:
"""Generates a random room ID.
@@ -1490,7 +1538,6 @@ class RoomContextHandler:
class TimestampLookupHandler:
def __init__(self, hs: "HomeServer"):
self.server_name = hs.hostname
self.store = hs.get_datastores().main
self.state_handler = hs.get_state_handler()
self.federation_client = hs.get_federation_client()

View File

@@ -1,466 +0,0 @@
import logging
from typing import TYPE_CHECKING, List, Tuple
from synapse.api.constants import EventContentFields, EventTypes
from synapse.appservice import ApplicationService
from synapse.http.servlet import assert_params_in_dict
from synapse.types import JsonDict, Requester, UserID, create_requester
from synapse.util.stringutils import random_string
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class RoomBatchHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastores().main
self._state_storage_controller = hs.get_storage_controllers().state
self.event_creation_handler = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()
self.auth = hs.get_auth()
async def inherit_depth_from_prev_ids(self, prev_event_ids: List[str]) -> int:
"""Finds the depth which would sort it after the most-recent
prev_event_id but before the successors of those events. If no
successors are found, we assume it's an historical extremity part of the
current batch and use the same depth of the prev_event_ids.
Args:
prev_event_ids: List of prev event IDs
Returns:
Inherited depth
"""
(
most_recent_prev_event_id,
most_recent_prev_event_depth,
) = await self.store.get_max_depth_of(prev_event_ids)
# We want to insert the historical event after the `prev_event` but before the successor event
#
# We inherit depth from the successor event instead of the `prev_event`
# because events returned from `/messages` are first sorted by `topological_ordering`
# which is just the `depth` and then tie-break with `stream_ordering`.
#
# We mark these inserted historical events as "backfilled" which gives them a
# negative `stream_ordering`. If we use the same depth as the `prev_event`,
# then our historical event will tie-break and be sorted before the `prev_event`
# when it should come after.
#
# We want to use the successor event's depth so the historical events appear
# after `prev_event` (they have a larger `depth`) but before the successor
# event (at equal depth, their negative `stream_ordering` sorts first).
assert most_recent_prev_event_id is not None
successor_event_ids = await self.store.get_successor_events(
most_recent_prev_event_id
)
# If we can't find any successor events, then it's a forward extremity of
# historical messages and we can just inherit from the previous historical
# event, which we can already assume has the correct depth at which we want
# to insert.
if not successor_event_ids:
depth = most_recent_prev_event_depth
else:
(
_,
oldest_successor_depth,
) = await self.store.get_min_depth_of(successor_event_ids)
depth = oldest_successor_depth
return depth
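# Worked illustration (hypothetical numbers, not from the diff) of the
# tie-break described above. Events sort by (depth, stream_ordering), and
# backfilled events get a *negative* stream_ordering:
prev_event = (5, 20)  # (depth, stream_ordering)
successor = (6, 21)
historical_same_depth = (5, -3)  # wrong: sorts *before* prev_event
historical_successor_depth = (6, -3)  # right: after prev_event, before successor
assert sorted([prev_event, historical_same_depth])[0] == historical_same_depth
assert sorted([prev_event, historical_successor_depth, successor]) == [
    prev_event,
    historical_successor_depth,
    successor,
]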
def create_insertion_event_dict(
self, sender: str, room_id: str, origin_server_ts: int
) -> JsonDict:
"""Creates an event dict for an "insertion" event with the proper fields
and a random batch ID.
Args:
sender: The event author MXID
room_id: The room ID that the event belongs to
origin_server_ts: Timestamp when the event was sent
Returns:
The new event dictionary to insert.
"""
next_batch_id = random_string(8)
insertion_event = {
"type": EventTypes.MSC2716_INSERTION,
"sender": sender,
"room_id": room_id,
"content": {
EventContentFields.MSC2716_NEXT_BATCH_ID: next_batch_id,
EventContentFields.MSC2716_HISTORICAL: True,
},
"origin_server_ts": origin_server_ts,
}
return insertion_event
async def create_requester_for_user_id_from_app_service(
self, user_id: str, app_service: ApplicationService
) -> Requester:
"""Creates a new requester for the given user_id
and validates that the app service is allowed to control
the given user.
Args:
user_id: The author MXID that the app service is controlling
app_service: The app service that controls the user
Returns:
Requester object
"""
await self.auth.validate_appservice_can_control_user_id(app_service, user_id)
return create_requester(user_id, app_service=app_service)
async def get_most_recent_full_state_ids_from_event_id_list(
self, event_ids: List[str]
) -> List[str]:
"""Find the most recent event_id and grab the full state at that event.
We will use this as a base to auth our historical messages against.
Args:
event_ids: List of event ID's to look at
Returns:
List of event ID's
"""
(
most_recent_event_id,
_,
) = await self.store.get_max_depth_of(event_ids)
# mapping from (type, state_key) -> state_event_id
assert most_recent_event_id is not None
prev_state_map = await self._state_storage_controller.get_state_ids_for_event(
most_recent_event_id
)
# List of state event ID's
full_state_ids = list(prev_state_map.values())
return full_state_ids
async def persist_state_events_at_start(
self,
state_events_at_start: List[JsonDict],
room_id: str,
initial_state_event_ids: List[str],
app_service_requester: Requester,
) -> List[str]:
"""Takes all `state_events_at_start` event dictionaries and creates/persists
them in a floating state event chain which don't resolve into the current room
state. They are floating because they reference no prev_events which disconnects
them from the normal DAG.
Args:
state_events_at_start:
room_id: Room where you want the events persisted in.
initial_state_event_ids:
The base set of state for the historical batch which the floating
state chain will derive from. This should probably be the state
from the `prev_event` defined by `/batch_send?prev_event_id=$abc`.
app_service_requester: The requester of an application service.
Returns:
List of state event ID's we just persisted
"""
assert app_service_requester.app_service
state_event_ids_at_start = []
state_event_ids = initial_state_event_ids.copy()
# Make the state events float off on their own by specifying no
# prev_events for the first one in the chain so we don't have a bunch of
# `@mxid joined the room` noise between each batch.
prev_event_ids_for_state_chain: List[str] = []
for index, state_event in enumerate(state_events_at_start):
assert_params_in_dict(
state_event, ["type", "origin_server_ts", "content", "sender"]
)
logger.debug(
"RoomBatchSendEventRestServlet inserting state_event=%s", state_event
)
event_dict = {
"type": state_event["type"],
"origin_server_ts": state_event["origin_server_ts"],
"content": state_event["content"],
"room_id": room_id,
"sender": state_event["sender"],
"state_key": state_event["state_key"],
}
# Mark all events as historical
event_dict["content"][EventContentFields.MSC2716_HISTORICAL] = True
# TODO: This is pretty much the same as some other code to handle inserting state in this file
if event_dict["type"] == EventTypes.Member:
membership = event_dict["content"].get("membership", None)
event_id, _ = await self.room_member_handler.update_membership(
await self.create_requester_for_user_id_from_app_service(
state_event["sender"], app_service_requester.app_service
),
target=UserID.from_string(event_dict["state_key"]),
room_id=room_id,
action=membership,
content=event_dict["content"],
historical=True,
# Only the first event in the state chain should be floating.
# The rest should hang off each other in a chain.
allow_no_prev_events=index == 0,
prev_event_ids=prev_event_ids_for_state_chain,
# The first event in the state chain is floating with no
# `prev_events` which means it can't derive state from
# anywhere automatically. So we need to set some state
# explicitly.
#
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same
# reference and also update in the event when we append
# later.
state_event_ids=state_event_ids.copy(),
)
else:
(
event,
_,
) = await self.event_creation_handler.create_and_send_nonmember_event(
await self.create_requester_for_user_id_from_app_service(
state_event["sender"], app_service_requester.app_service
),
event_dict,
historical=True,
# Only the first event in the state chain should be floating.
# The rest should hang off each other in a chain.
allow_no_prev_events=index == 0,
prev_event_ids=prev_event_ids_for_state_chain,
# The first event in the state chain is floating with no
# `prev_events` which means it can't derive state from
# anywhere automatically. So we need to set some state
# explicitly.
#
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same
# reference and also update in the event when we append later.
state_event_ids=state_event_ids.copy(),
)
event_id = event.event_id
state_event_ids_at_start.append(event_id)
state_event_ids.append(event_id)
# Connect all the state in a floating chain
prev_event_ids_for_state_chain = [event_id]
return state_event_ids_at_start
async def persist_historical_events(
self,
events_to_create: List[JsonDict],
room_id: str,
inherited_depth: int,
initial_state_event_ids: List[str],
app_service_requester: Requester,
) -> List[str]:
"""Create and persists all events provided sequentially. Handles the
complexity of creating events in chronological order so they can
reference each other by prev_event but still persists in
reverse-chronological order so they have the correct
(topological_ordering, stream_ordering) and sort correctly from
/messages.
Args:
events_to_create: List of historical events to create in JSON
dictionary format.
room_id: Room where you want the events persisted in.
inherited_depth: The depth to create the events at (you will
probably get by calling inherit_depth_from_prev_ids(...)).
initial_state_event_ids:
This is used to set explicit state for the insertion event at
the start of the historical batch since it's floating with no
prev_events to derive state from automatically.
app_service_requester: The requester of an application service.
Returns:
List of persisted event IDs
"""
assert app_service_requester.app_service
# We expect the first event in a historical batch to be an insertion event
assert events_to_create[0]["type"] == EventTypes.MSC2716_INSERTION
# We expect the last event in a historical batch to be a batch event
assert events_to_create[-1]["type"] == EventTypes.MSC2716_BATCH
# Make the historical event chain float off on its own by specifying no
# prev_events for the first event in the chain which causes the HS to
# ask for the state at the start of the batch later.
prev_event_ids: List[str] = []
event_ids = []
events_to_persist = []
for index, ev in enumerate(events_to_create):
assert_params_in_dict(ev, ["type", "origin_server_ts", "content", "sender"])
assert self.hs.is_mine_id(ev["sender"]), "User must be our own: %s" % (
ev["sender"],
)
event_dict = {
"type": ev["type"],
"origin_server_ts": ev["origin_server_ts"],
"content": ev["content"],
"room_id": room_id,
"sender": ev["sender"], # requester.user.to_string(),
"prev_events": prev_event_ids.copy(),
}
# Mark all events as historical
event_dict["content"][EventContentFields.MSC2716_HISTORICAL] = True
event, unpersisted_context = await self.event_creation_handler.create_event(
await self.create_requester_for_user_id_from_app_service(
ev["sender"], app_service_requester.app_service
),
event_dict,
# Only the first event (which is the insertion event) in the
# chain should be floating. The rest should hang off each other
# in a chain.
allow_no_prev_events=index == 0,
prev_event_ids=event_dict.get("prev_events"),
# Since the first event (which is the insertion event) in the
# chain is floating with no `prev_events`, it can't derive state
# from anywhere automatically. So we need to set some state
# explicitly.
state_event_ids=initial_state_event_ids if index == 0 else None,
historical=True,
depth=inherited_depth,
)
context = await unpersisted_context.persist(event)
assert context._state_group
# Normally this is done when persisting the event but we have to
# pre-emptively do it here because we create all the events first,
# then persist them in another pass below. And we want to share
# state_groups across the whole batch so this lookup needs to work
# for the next event in the batch in this loop.
await self.store.store_state_group_id_for_event_id(
event_id=event.event_id,
state_group_id=context._state_group,
)
logger.debug(
"RoomBatchSendEventRestServlet inserting event=%s, prev_event_ids=%s",
event,
prev_event_ids,
)
events_to_persist.append((event, context))
event_id = event.event_id
event_ids.append(event_id)
prev_event_ids = [event_id]
# Persist events in reverse-chronological order so they have the
# correct stream_ordering as they are backfilled (which decrements).
# Events are sorted by (topological_ordering, stream_ordering)
# where topological_ordering is just depth.
for event, context in reversed(events_to_persist):
# This call can't raise `PartialStateConflictError` since we forbid
# use of the historical batch API during partial state
await self.event_creation_handler.handle_new_client_event(
await self.create_requester_for_user_id_from_app_service(
event.sender, app_service_requester.app_service
),
events_and_context=[(event, context)],
)
return event_ids
async def handle_batch_of_events(
self,
events_to_create: List[JsonDict],
room_id: str,
batch_id_to_connect_to: str,
inherited_depth: int,
initial_state_event_ids: List[str],
app_service_requester: Requester,
) -> Tuple[List[str], str]:
"""
Handles creating and persisting all of the historical events as well as
insertion and batch meta events to make the batch navigable in the DAG.
Args:
events_to_create: List of historical events to create in JSON
dictionary format.
room_id: Room where you want the events created in.
batch_id_to_connect_to: The batch_id from the insertion event you
want this batch to connect to.
inherited_depth: The depth to create the events at (you will
probably get by calling inherit_depth_from_prev_ids(...)).
initial_state_event_ids:
This is used to set explicit state for the insertion event at
the start of the historical batch since it's floating with no
prev_events to derive state from automatically. This should
probably be the state from the `prev_event` defined by
`/batch_send?prev_event_id=$abc` plus the outcome of
`persist_state_events_at_start`
app_service_requester: The requester of an application service.
Returns:
Tuple containing a list of created events and the next_batch_id
"""
# Connect this current batch to the insertion event from the previous batch
last_event_in_batch = events_to_create[-1]
batch_event = {
"type": EventTypes.MSC2716_BATCH,
"sender": app_service_requester.user.to_string(),
"room_id": room_id,
"content": {
EventContentFields.MSC2716_BATCH_ID: batch_id_to_connect_to,
EventContentFields.MSC2716_HISTORICAL: True,
},
# Since the batch event is put at the end of the batch,
# where the newest-in-time event is, copy the origin_server_ts from
# the last event we're inserting
"origin_server_ts": last_event_in_batch["origin_server_ts"],
}
# Add the batch event to the end of the batch (newest-in-time)
events_to_create.append(batch_event)
# Add an "insertion" event to the start of each batch (next to the oldest-in-time
# event in the batch) so the next batch can be connected to this one.
insertion_event = self.create_insertion_event_dict(
sender=app_service_requester.user.to_string(),
room_id=room_id,
# Since the insertion event is put at the start of the batch,
# where the oldest-in-time event is, copy the origin_server_ts from
# the first event we're inserting
origin_server_ts=events_to_create[0]["origin_server_ts"],
)
next_batch_id = insertion_event["content"][
EventContentFields.MSC2716_NEXT_BATCH_ID
]
# Prepend the insertion event to the start of the batch (oldest-in-time)
events_to_create = [insertion_event] + events_to_create
# Create and persist all of the historical events
event_ids = await self.persist_historical_events(
events_to_create=events_to_create,
room_id=room_id,
inherited_depth=inherited_depth,
initial_state_event_ids=initial_state_event_ids,
app_service_requester=app_service_requester,
)
return event_ids, next_batch_id
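# Schematic recap (editorial, not from the diff) of the batch assembled above,
# oldest-in-time first, with hypothetical event IDs. The batch event points at
# batch_id_to_connect_to; the insertion event carries a fresh next_batch_id:
events = ["$historical_1", "$historical_2"]  # events_to_create as passed in
events = events + ["$batch"]  # batch event appended (newest-in-time)
events = ["$insertion"] + events  # insertion event prepended (oldest-in-time)
assert events == ["$insertion", "$historical_1", "$historical_2", "$batch"]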

View File

@@ -362,7 +362,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content: Optional[dict] = None,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
origin_server_ts: Optional[int] = None,
) -> Tuple[str, int]:
"""
@@ -378,16 +377,13 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
allow_no_prev_events: Whether to allow this event to be created with an empty
list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases (previously useful for MSC2716).
prev_event_ids: The event IDs to use as the prev events
state_event_ids:
The full state at a given event. This was previously used particularly
by the MSC2716 /batch_send endpoint. This should normally be left as
None, which will cause the auth_event_ids to be calculated based on the
room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
@@ -400,9 +396,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
origin_server_ts: The origin_server_ts to use if a new event is created. Uses
the current timestamp if set to None.
@@ -477,7 +470,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
depth=depth,
require_consent=require_consent,
outlier=outlier,
historical=historical,
)
context = await unpersisted_context.persist(event)
prev_state_ids = await context.get_prev_state_ids(
@@ -585,7 +577,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
new_room: bool = False,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None,
@@ -610,22 +601,16 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
allow_no_prev_events: Whether to allow this event to be created with an empty
list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases (previously useful for MSC2716).
prev_event_ids: The event IDs to use as the prev events
state_event_ids:
The full state at a given event. This was previously used particularly
by the MSC2716 /batch_send endpoint. This should normally be left as
None, which will cause the auth_event_ids to be calculated based on the
room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
@@ -667,7 +652,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
new_room=new_room,
require_consent=require_consent,
outlier=outlier,
historical=historical,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
state_event_ids=state_event_ids,
@@ -691,7 +675,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
new_room: bool = False,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None,
@@ -718,22 +701,16 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
allow_no_prev_events: Whether to allow this event to be created with an empty
list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases (previously useful for MSC2716).
prev_event_ids: The event IDs to use as the prev events
state_event_ids:
The full state at a given event. This was previously used particularly
by the MSC2716 /batch_send endpoint. This should normally be left as
None, which will cause the auth_event_ids to be calculated based on the
room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
@@ -877,7 +854,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content=content,
require_consent=require_consent,
outlier=outlier,
historical=historical,
origin_server_ts=origin_server_ts,
)
@@ -1498,7 +1474,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# put the server which owns the alias at the front of the server list.
if room_alias.domain in servers:
servers.remove(room_alias.domain)
servers.insert(0, room_alias.domain)
return RoomID.from_string(room_id), servers

View File

@@ -42,7 +42,6 @@ class StatsHandler:
self.store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self.state = hs.get_state_handler()
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id

View File

@@ -51,8 +51,10 @@ logger = logging.getLogger(__name__)
@implementer(IAgent)
class MatrixFederationAgent:
"""An Agent-like thing which provides a `request` method which correctly
handles resolving matrix server names when using `matrix-federation://`. Handles
standard https URIs as normal. The `matrix-federation://` scheme is internal to
Synapse and we purposely want to avoid colliding with the `matrix://` URL scheme
which is now specced.
Doesn't implement any retries. (Those are done in MatrixFederationHttpClient.)
@@ -167,14 +169,14 @@ class MatrixFederationAgent:
# There must be a valid hostname.
assert parsed_uri.hostname
# If this is a matrix-federation:// URI check if the server has delegated matrix
# traffic using well-known delegation.
#
# We have to do this here and not in the endpoint as we need to rewrite
# the host header with the delegated server name.
delegated_server = None
if (
parsed_uri.scheme == b"matrix"
parsed_uri.scheme == b"matrix-federation"
and not _is_ip_literal(parsed_uri.hostname)
and not parsed_uri.port
):
@@ -250,7 +252,7 @@ class MatrixHostnameEndpointFactory:
@implementer(IStreamClientEndpoint)
class MatrixHostnameEndpoint:
"""An endpoint that resolves matrix:// URLs using Matrix server name
"""An endpoint that resolves matrix-federation:// URLs using Matrix server name
resolution (i.e. via SRV). Does not check for well-known delegation.
Args:
@@ -379,7 +381,7 @@ class MatrixHostnameEndpoint:
connect to.
"""
if self._parsed_uri.scheme != b"matrix":
if self._parsed_uri.scheme != b"matrix-federation":
return [Server(host=self._parsed_uri.host, port=self._parsed_uri.port)]
# Note: We don't do well-known lookup as that needs to have happened

View File

@@ -95,8 +95,6 @@ incoming_responses_counter = Counter(
)
MAX_LONG_RETRIES = 10
MAX_SHORT_RETRIES = 3
MAXINT = sys.maxsize
@@ -174,7 +172,14 @@ class MatrixFederationRequest:
# The object is frozen so we can pre-compute this.
uri = urllib.parse.urlunparse(
(b"matrix", destination_bytes, path_bytes, None, query_bytes, b"")
(
b"matrix-federation",
destination_bytes,
path_bytes,
None,
query_bytes,
b"",
)
)
object.__setattr__(self, "uri", uri)
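# Illustration (hypothetical destination, not from the diff) of the URI the
# tuple form above produces:
import urllib.parse

example = urllib.parse.urlunparse(
    (b"matrix-federation", b"example.org:8448", b"/_matrix/key/v2/server", None, b"", b"")
)
assert example == b"matrix-federation://example.org:8448/_matrix/key/v2/server"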
@@ -406,7 +411,16 @@ class MatrixFederationHttpClient:
self.clock = hs.get_clock()
self._store = hs.get_datastores().main
self.version_string_bytes = hs.version_string.encode("ascii")
self.default_timeout_seconds = hs.config.federation.client_timeout_ms / 1000
self.max_long_retry_delay_seconds = (
hs.config.federation.max_long_retry_delay_ms / 1000
)
self.max_short_retry_delay_seconds = (
hs.config.federation.max_short_retry_delay_ms / 1000
)
self.max_long_retries = hs.config.federation.max_long_retries
self.max_short_retries = hs.config.federation.max_short_retries
self._cooperator = Cooperator(scheduler=_make_scheduler(self.reactor))
@@ -499,8 +513,15 @@ class MatrixFederationHttpClient:
Note that the above intervals are *in addition* to the time spent
waiting for the request to complete (up to `timeout` ms).
NB: the long retry algorithm takes over 20 minutes to complete, with a
default timeout of 60s! It's best not to use the `long_retries` option
for something that is blocking a client, so we don't make them wait for
ages; for some things, like sending transactions (server to server), we
can be a lot more lenient, but it's very fuzzy / hand-wavy.
In the future, we could be more intelligent about doing this sort of
thing by looking at things with the bigger picture in mind,
https://github.com/matrix-org/synapse/issues/8917
ignore_backoff: true to ignore the historical backoff data
and try the request anyway.
@@ -528,10 +549,10 @@ class MatrixFederationHttpClient:
            logger.exception(f"Invalid destination: {request.destination}.")
            raise FederationDeniedError(request.destination)

-        if timeout:
+        if timeout is not None:
             _sec_timeout = timeout / 1000
         else:
-            _sec_timeout = self.default_timeout
+            _sec_timeout = self.default_timeout_seconds

         if (
             self.hs.config.federation.federation_domain_whitelist is not None
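
Switching from `if timeout:` to `if timeout is not None:` also fixes a subtle edge case: a caller passing `timeout=0` previously fell through to the default. A toy demonstration (standalone; values made up):

    DEFAULT_TIMEOUT_SECONDS = 60.0

    def sec_timeout_old(timeout):
        return timeout / 1000 if timeout else DEFAULT_TIMEOUT_SECONDS

    def sec_timeout_new(timeout):
        return timeout / 1000 if timeout is not None else DEFAULT_TIMEOUT_SECONDS

    assert sec_timeout_old(0) == 60.0  # 0 ms silently became "use the default"
    assert sec_timeout_new(0) == 0.0   # 0 ms now means an immediate timeout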
@@ -576,9 +597,9 @@ class MatrixFederationHttpClient:
         # XXX: Would be much nicer to retry only at the transaction-layer
         # (once we have reliable transactions in place)
         if long_retries:
-            retries_left = MAX_LONG_RETRIES
+            retries_left = self.max_long_retries
         else:
-            retries_left = MAX_SHORT_RETRIES
+            retries_left = self.max_short_retries

         url_bytes = request.uri
         url_str = url_bytes.decode("ascii")
@@ -723,24 +744,34 @@ class MatrixFederationHttpClient:
                 if retries_left and not timeout:
                     if long_retries:
-                        delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left)
-                        delay = min(delay, 60)
-                        delay *= random.uniform(0.8, 1.4)
+                        delay_seconds = 4 ** (
+                            self.max_long_retries + 1 - retries_left
+                        )
+                        delay_seconds = min(
+                            delay_seconds, self.max_long_retry_delay_seconds
+                        )
+                        delay_seconds *= random.uniform(0.8, 1.4)
                     else:
-                        delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left)
-                        delay = min(delay, 2)
-                        delay *= random.uniform(0.8, 1.4)
+                        delay_seconds = 0.5 * 2 ** (
+                            self.max_short_retries - retries_left
+                        )
+                        delay_seconds = min(
+                            delay_seconds, self.max_short_retry_delay_seconds
+                        )
+                        delay_seconds *= random.uniform(0.8, 1.4)

                     logger.debug(
                         "{%s} [%s] Waiting %ss before re-sending...",
                         request.txn_id,
                         request.destination,
-                        delay,
+                        delay_seconds,
                     )

                     # Sleep for the calculated delay, or wake up immediately
                     # if we get notified that the server is back up.
-                    await self._sleeper.sleep(request.destination, delay * 1000)
+                    await self._sleeper.sleep(
+                        request.destination, delay_seconds * 1000
+                    )

                     retries_left -= 1
                 else:
                     raise
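
Assuming the new config defaults match the old constants (10 long retries capped at 60s, 3 short retries capped at 2s), the un-jittered long-retry schedule is 4s, 16s, then 60s per remaining attempt, which together with a 60s timeout per attempt is where the docstring's "over 20 minutes" figure comes from. A standalone sketch reproducing the schedule (not Synapse's actual implementation):

    import random

    def retry_delays(max_retries, cap_seconds, long=True):
        # Mirrors the loop above: exponential growth, capped, 0.8-1.4x jitter.
        delays = []
        for retries_left in range(max_retries, 0, -1):
            if long:
                delay = 4 ** (max_retries + 1 - retries_left)
            else:
                delay = 0.5 * 2 ** (max_retries - retries_left)
            delays.append(min(delay, cap_seconds) * random.uniform(0.8, 1.4))
        return delays

    # retry_delays(10, 60) -> roughly [4, 16, 60, 60, ...] before jitter
    # retry_delays(3, 2, long=False) -> roughly [0.5, 1, 2] before jitter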
@@ -939,7 +970,7 @@ class MatrixFederationHttpClient:
         if timeout is not None:
             _sec_timeout = timeout / 1000
         else:
-            _sec_timeout = self.default_timeout
+            _sec_timeout = self.default_timeout_seconds

         if parser is None:
             parser = cast(ByteParser[T], JsonParser())
@@ -1017,10 +1048,10 @@ class MatrixFederationHttpClient:
             ignore_backoff=ignore_backoff,
         )

-        if timeout:
+        if timeout is not None:
             _sec_timeout = timeout / 1000
         else:
-            _sec_timeout = self.default_timeout
+            _sec_timeout = self.default_timeout_seconds

         body = await _handle_response(
             self.reactor, _sec_timeout, request, response, start_ms, parser=JsonParser()
@@ -1128,7 +1159,7 @@ class MatrixFederationHttpClient:
         if timeout is not None:
             _sec_timeout = timeout / 1000
         else:
-            _sec_timeout = self.default_timeout
+            _sec_timeout = self.default_timeout_seconds

         if parser is None:
             parser = cast(ByteParser[T], JsonParser())
@@ -1204,7 +1235,7 @@ class MatrixFederationHttpClient:
         if timeout is not None:
             _sec_timeout = timeout / 1000
         else:
-            _sec_timeout = self.default_timeout
+            _sec_timeout = self.default_timeout_seconds

         body = await _handle_response(
             self.reactor, _sec_timeout, request, response, start_ms, parser=JsonParser()
@@ -1256,7 +1287,7 @@ class MatrixFederationHttpClient:
         try:
             d = read_body_with_max_size(response, output_stream, max_size)
-            d.addTimeout(self.default_timeout, self.reactor)
+            d.addTimeout(self.default_timeout_seconds, self.reactor)
             length = await make_deferred_yieldable(d)
         except BodyExceededMaxSize:
             msg = "Requested file is too large > %r bytes" % (max_size,)
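
Twisted's `Deferred.addTimeout` takes its timeout in seconds, which is why the renamed `default_timeout_seconds` can be passed straight through. A minimal usage sketch of that API:

    from twisted.internet import defer, task

    clock = task.Clock()  # any IReactorTime provider; the real code passes the reactor
    d = defer.Deferred()
    d.addTimeout(60, clock)  # cancel d if it hasn't fired within 60 seconds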
