Compare commits

...

63 Commits

Author SHA1 Message Date
Brendan Abolivier
e713855dca Merge trust_identity_server_for_password_resets PRs 2021-11-30 11:48:06 +00:00
Brendan Abolivier
f663426804 Move notices up 2021-11-30 11:26:18 +00:00
Brendan Abolivier
3d831415cc Fixup changelog 2021-11-30 11:25:11 +00:00
Brendan Abolivier
4bdad80de1 1.48.0 2021-11-30 11:24:21 +00:00
Brendan Abolivier
c54c9df286 Fix docker hub name 2021-11-25 16:22:54 +00:00
Brendan Abolivier
d4dcc0524f Incorporate review from synapse-dev 2021-11-25 16:21:00 +00:00
Brendan Abolivier
b757b68454 Fixup changelog 2021-11-25 16:07:23 +00:00
Brendan Abolivier
946c102ac9 1.48.0rc1 2021-11-25 15:57:04 +00:00
Brendan Abolivier
0d88c4f903 Improve performance of remove_{hidden,deleted}_devices_from_device_inbox (#11421)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2021-11-25 15:14:54 +00:00
Brendan Abolivier
7f9841bdec Lower minimum batch size to 1 for background updates (#11422)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2021-11-24 19:21:44 +00:00
reivilibre
f25c75d376 Rename unstable access_token_lifetime configuration option to refreshable_access_token_lifetime to make it clear it only concerns refreshable access tokens. (#11388) 2021-11-23 17:01:34 +00:00
Patrick Cloke
55669bd3de Add missing type hints to config base classes (#11377) 2021-11-23 15:21:19 +00:00
Shay
7cebaf9644 Remove code invalidated by deprecated config flag 'trust_identity_servers_for_password_resets' (#11395)
* remove background update code related to deprecated config flag

* changelog entry

* update changelog

* Delete 11394.removal

Duplicate, wrong number

* add no-op background update and change newsfragment so it will be consolidated with associated work

* remove unused code

* Remove code associated with deprecated flag from legacy docker dynamic config file

Co-authored-by: reivilibre <oliverw@matrix.org>
2021-11-23 06:46:40 -08:00
Sean Quah
454c3d7694 Merge branch 'master' into develop 2021-11-23 13:06:56 +00:00
Sean Quah
fcb9441791 Merge tag 'v1.47.1'
Synapse 1.47.1 (2021-11-23)
===========================

This release fixes a security issue in the media store, affecting all prior releases of Synapse. Server administrators are encouraged to update Synapse as soon as possible. We are not aware of this vulnerability being exploited in the wild.

Server administrators who are unable to update Synapse may use the workarounds described in the linked GitHub Security Advisory below.

Security advisory
-----------------

The following issue is fixed in 1.47.1.

- **[GHSA-3hfw-x7gx-437c](https://github.com/matrix-org/synapse/security/advisories/GHSA-3hfw-x7gx-437c) / [CVE-2021-41281](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41281): Path traversal when downloading remote media.**

  Synapse instances with the media repository enabled can be tricked into downloading a file from a remote server into an arbitrary directory, potentially outside the media store directory.

  The last two directories and file name of the path are chosen randomly by Synapse and cannot be controlled by an attacker, which limits the impact.

  Homeservers with the media repository disabled are unaffected. Homeservers configured with a federation whitelist are also unaffected.

  Fixed by [91f2bd090](https://github.com/matrix-org/synapse/commit/91f2bd090).
2021-11-23 12:39:09 +00:00
Patrick Cloke
6a5dd485bd Refactor the code to inject bundled relations during serialization. (#11408) 2021-11-23 06:43:56 -05:00
Kostas
1035663833 Add config for customizing the claim used for JWT logins. (#11361)
Allows specifying a different claim (from the default "sub") to use
when calculating the localpart of the Matrix ID used during the
JWT login.
2021-11-22 13:01:03 -05:00
Patrick Cloke
3d893b8cf2 Store arbitrary relations from events. (#11391)
Instead of only known relation types. This also reworks the background
update for thread relations to crawl events and search for any relation
type, not just threaded relations.
2021-11-22 12:01:47 -05:00
Shay
d9e9771d6b Update README.md 2021-11-19 14:01:55 -08:00
Dirk Klimpel
ea20937084 Add an admin API to run background jobs. (#11352)
Instead of having admins poke into the database directly.

Can currently run jobs to populate stats and to populate
the user directory.
2021-11-19 19:39:46 +00:00
Sean Quah
8fa83999d6 Add CVE number 2021-11-19 18:40:13 +00:00
Patrick Cloke
7ae559944a Fix checking whether a room can be published on creation. (#11392)
If `room_list_publication_rules` was configured with a rule with a
non-wildcard alias and a room was created with an alias then an
internal server error would have been thrown.

This fixes the error and properly applies the publication rules
during room creation.
2021-11-19 15:19:32 +00:00
Sean Quah
9c21a68995 Refer to 1.47.1 without the v 2021-11-19 14:11:35 +00:00
Sean Quah
8d4dcac7e9 Update 1.47.1 release date in CHANGES.md 2021-11-19 14:11:05 +00:00
Sean Quah
97a402302c 1.47.1 2021-11-19 14:08:59 +00:00
Sean Quah
91f2bd0907 Prevent the media store from writing outside of the configured directory
Also tighten validation of server names by forbidding invalid characters
in IPv6 addresses and empty domain labels.
2021-11-19 13:39:15 +00:00
Patrick Cloke
4d6d38ac2f Remove dead code from acme support. (#11393) 2021-11-19 07:07:22 -05:00
Patrick Cloke
5505da2109 Remove msc2716 from the list of tests for complement. (#11389)
As the tests are currently failing and not run in CI.
2021-11-19 07:06:16 -05:00
Hubert Chathi
eca7cffb73 Keep fallback key marked as used if it's re-uploaded (#11382) 2021-11-19 11:40:12 +00:00
Richard van der Hoff
e2e9bea1ce Publish a develop docker image (#11380)
I'd find it helpful to have a docker image corresponding to current develop,
without having to build my own.
2021-11-19 10:56:59 +00:00
Richard van der Hoff
a6f7f84570 Fix verification of objects signed with old local keys (#11379)
Fixes a bug introduced in #11129: objects signed by the local server, but with
keys other than the current one, could not be successfully verified.

We need to check the key id in the signature, and track down the right key.
2021-11-19 10:55:09 +00:00
Eric Eastwood
7ffddd819c Prevent historical state from being pushed to an application service via /transactions (MSC2716) (#11265)
Mark historical state from the MSC2716 `/batch_send` endpoint as `historical`, which makes it `backfilled` and gives it a negative `stream_ordering` so it doesn't get queried by `/transactions`.

Fix https://github.com/matrix-org/synapse/issues/11241

Complement tests: https://github.com/matrix-org/complement/pull/221
2021-11-18 14:16:08 -06:00
Shay
92b75388f5 Remove legacy code related to deprecated trust_identity_server_for_password_resets config flag (#11333)
* remove legacy code related to deprecated config flag "trust_identity_server_for_password_resets" from synapse/config/emailconfig.py

* remove legacy code supporting deprecated config flag "trust_identity_server_for_password_resets" from synapse/config/registration.py

* remove legacy code supporting deprecated config flag "trust_identity_server_for_password_resets" from synapse/handlers/identity.py

* add tests to ensure config error is thrown and synapse refuses to start when deprecated config flag is found

* add changelog

* slightly change behavior to only check for deprecated flag if set to 'true'

* Update changelog.d/11333.misc

Co-authored-by: reivilibre <oliverw@matrix.org>

Co-authored-by: reivilibre <oliverw@matrix.org>
2021-11-18 10:56:32 -08:00
Dirk Klimpel
81b18fe5c0 Add dedicated admin API for blocking a room (#11324) 2021-11-18 17:43:49 +00:00
reivilibre
5f81c0ce9c Add/Unerase annotations to Module API (#11341) 2021-11-18 16:55:33 +00:00
reivilibre
433ee159cb Rename get_refresh_token_for_user_id to create_refresh_token_for_user_id (#11370) 2021-11-18 14:45:38 +00:00
reivilibre
539e441399 Use auto_attribs for RefreshTokenLookupResult (#11386) 2021-11-18 14:40:26 +00:00
Patrick Cloke
4bd54b263e Do not allow MSC3440 threads to fork threads (#11161)
Adds validation to the Client-Server API to ensure that
the potential thread head does not relate to another event
already. This results in not allowing a thread to "fork" into
other threads.

If the target event is unknown for some reason (maybe it isn't
visible to your homeserver), but is the target of other events,
it is assumed that the thread can be created from it. Otherwise,
it is rejected as an unknown event.
2021-11-18 13:43:09 +00:00
Nicolai Søborg
e2dabec996 Docs: Quote wildcard federation_certificate_verification_whitelist (#11381)
Otherwise I get this beautiful stacktrace:

```
python3 -m synapse.app.homeserver --config-path /etc/matrix/homeserver.yaml
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/synapse/synapse/app/homeserver.py", line 455, in <module>
    main()
  File "/root/synapse/synapse/app/homeserver.py", line 445, in main
    hs = setup(sys.argv[1:])
  File "/root/synapse/synapse/app/homeserver.py", line 345, in setup
    config = HomeServerConfig.load_or_generate_config(
  File "/root/synapse/synapse/config/_base.py", line 671, in load_or_generate_config
    config_dict = read_config_files(config_files)
  File "/root/synapse/synapse/config/_base.py", line 717, in read_config_files
    yaml_config = yaml.safe_load(file_stream)
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/__init__.py", line 125, in safe_load
    return load(stream, SafeLoader)
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/__init__.py", line 81, in load
    return loader.get_single_data()
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/constructor.py", line 49, in get_single_data
    node = self.get_single_node()
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/composer.py", line 36, in get_single_node
    document = self.compose_document()
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/composer.py", line 55, in compose_document
    node = self.compose_node(None, None)
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/composer.py", line 84, in compose_node
    node = self.compose_mapping_node(anchor)
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/composer.py", line 133, in compose_mapping_node
    item_value = self.compose_node(node, item_key)
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/composer.py", line 82, in compose_node
    node = self.compose_sequence_node(anchor)
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/composer.py", line 110, in compose_sequence_node
    while not self.check_event(SequenceEndEvent):
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/parser.py", line 98, in check_event
    self.current_event = self.state()
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/parser.py", line 379, in parse_block_sequence_first_entry
    return self.parse_block_sequence_entry()
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/parser.py", line 384, in parse_block_sequence_entry
    if not self.check_token(BlockEntryToken, BlockEndToken):
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/scanner.py", line 116, in check_token
    self.fetch_more_tokens()
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/scanner.py", line 227, in fetch_more_tokens
    return self.fetch_alias()
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/scanner.py", line 610, in fetch_alias
    self.tokens.append(self.scan_anchor(AliasToken))
  File "/root/synapse/env/lib/python3.8/site-packages/yaml/scanner.py", line 922, in scan_anchor
    raise ScannerError("while scanning an %s" % name, start_mark,
yaml.scanner.ScannerError: while scanning an alias
  in "/etc/matrix/homeserver.yaml", line 614, column 5
expected alphabetic or numeric character, but found '.'
  in "/etc/matrix/homeserver.yaml", line 614, column 6
```

Signed-off-by: Nicolai Søborg <git@xn--sb-lka.org>
2021-11-18 12:24:40 +00:00
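The root cause is YAML syntax rather than Synapse itself: an unquoted `*` begins a YAML alias, and the `.` that follows is not a legal anchor character, hence the `ScannerError` above. A minimal reproduction with PyYAML (key name shortened for illustration):

```python
import yaml

# Unquoted wildcard: "*" starts a YAML alias and "." is invalid in an
# anchor name, so parsing fails with yaml.scanner.ScannerError.
try:
    yaml.safe_load("whitelist:\n  - *.domain.com\n")
except yaml.scanner.ScannerError as e:
    print(e)  # "expected alphabetic or numeric character, but found '.'"

# Quoting makes the wildcard an ordinary string.
print(yaml.safe_load('whitelist:\n  - "*.domain.com"\n'))
# {'whitelist': ['*.domain.com']}
```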
Sean Quah
84fac0f814 Add type annotations to synapse.metrics (#10847) 2021-11-17 19:07:02 +00:00
Aaron R
d993c3bb1e Add support for /_matrix/media/v3 APIs (#11371)
* Add support for `/_matrix/media/v3` APIs

Signed-off-by: Aaron Raimist <aaron@raim.ist>

* Update `workers.md` to use v3 client and media APIs

Signed-off-by: Aaron Raimist <aaron@raim.ist>

* Add changelog

Signed-off-by: Aaron Raimist <aaron@raim.ist>
2021-11-17 15:30:24 +00:00
David Robertson
b76337fdf8 Merge branch 'master' into develop 2021-11-17 14:19:56 +00:00
David Robertson
077b74929f Merge remote-tracking branch 'origin/release-v1.47' 2021-11-17 14:19:27 +00:00
reivilibre
0d86f6334a Rename get_access_token_for_user_id method to create_access_token_for_user_id (#11369) 2021-11-17 14:10:57 +00:00
Patrick Cloke
60ecb6b4d4 Fix running complement.sh script. (#11368)
By reverting changes from #11166 in this script. Specifically commit
13f084eb58.
2021-11-17 09:04:50 -05:00
David Robertson
9f9d82aa84 1.47.0 2021-11-17 13:10:12 +00:00
Patrick Cloke
319dcb955e Fix incorrect return value in tests. (#11359) 2021-11-16 16:36:46 +00:00
David Robertson
0caf20883c Merge tag 'v1.47.0rc3' into develop
Synapse 1.47.0rc3 (2021-11-16)
==============================

Bugfixes
--------

- Fix a bug introduced in 1.47.0rc1 which caused worker processes to not halt startup in the presence of outstanding database migrations. ([\#11346](https://github.com/matrix-org/synapse/issues/11346))
- Fix a bug introduced in 1.47.0rc1 which prevented the 'remove deleted devices from `device_inbox` column' background process from running when updating from a recent Synapse version. ([\#11303](https://github.com/matrix-org/synapse/issues/11303), [\#11353](https://github.com/matrix-org/synapse/issues/11353))
2021-11-16 15:46:45 +00:00
Sean Quah
88375beeaa Avoid sharing room hierarchy responses between users (#11355)
Different users may be allowed to see different rooms within a space,
so sharing responses between users is inadvisable.
2021-11-16 15:40:47 +00:00
Andrew Morgan
7baa671dc8 fix up changelog language 2021-11-16 14:42:21 +00:00
Andrew Morgan
729acd82c8 mark the migration file migration as a bug 2021-11-16 14:41:21 +00:00
Andrew Morgan
edcdc5fd82 1.47.0rc3 2021-11-16 14:34:46 +00:00
Aaron R
dfa536490e Add support for /_matrix/client/v3 APIs (#11318)
This is one of the changes required to support Matrix 1.1

Signed-off-by: Aaron Raimist <aaron@raim.ist>
2021-11-16 14:47:58 +01:00
Patrick Cloke
7468723697 Add most missing type hints to synapse.util (#11328) 2021-11-16 08:47:36 -05:00
Andrew Morgan
6e084b62b8 Rename remove_deleted_devices_from_device_inbox to ensure it is always run (#11353)
Co-authored-by: reivilibre <oliverw@matrix.org>
2021-11-16 13:16:43 +00:00
reivilibre
3a1462f7e0 Properly register all callback hooks for legacy password authentication providers (#11340)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2021-11-16 12:53:31 +00:00
Patrick Cloke
24b61f379a Add ability to un-shadow-ban via the admin API. (#11347) 2021-11-16 12:43:53 +00:00
David Robertson
0dda1a7968 Misc typing fixes for tests, part 2 of N (#11330) 2021-11-16 10:41:35 +00:00
Ashwin Nair
e72135b9d3 change 'Home Server' to one word 'homeserver' (#11320)
Signed-off-by: Ashwin S. Nair <58840757+Ashwin-exe@users.noreply.github.com>
2021-11-16 10:21:01 +00:00
Andrew Morgan
9c59e117db Run _upgrade_existing_database on workers if at current schema_version (#11346)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2021-11-15 17:34:15 +00:00
Dirk Klimpel
b596a1eb80 Move sql file for remove_deleted_devices_from_device_inbox into v65 (#11303) 2021-11-15 11:47:30 +00:00
reivilibre
4ad5ee9996 Correct target of link to the modules page from the Password Auth Providers page (#11309) 2021-11-12 12:58:39 +00:00
jmcparland
02742fd058 Wrong DTLS port in "Troubleshooting" (#11268)
Port 5349, not 5479.
2021-11-08 10:34:39 +00:00
145 changed files with 3344 additions and 1271 deletions

View File

@@ -5,7 +5,7 @@ name: Build docker images
on:
push:
tags: ["v*"]
branches: [ master, main ]
branches: [ master, main, develop ]
workflow_dispatch:
permissions:
@@ -38,6 +38,9 @@ jobs:
id: set-tag
run: |
case "${GITHUB_REF}" in
refs/heads/develop)
tag=develop
;;
refs/heads/master|refs/heads/main)
tag=latest
;;

View File

@@ -1,3 +1,130 @@
Synapse 1.48.0 (2021-11-30)
===========================
This release removes support for the long-deprecated `trust_identity_server_for_password_resets` configuration flag.
This release also fixes some performance issues with some background database updates introduced in Synapse 1.47.0.
No significant changes since 1.48.0rc1.
Synapse 1.48.0rc1 (2021-11-25)
==============================
Features
--------
- Experimental support for the thread relation defined in [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440). ([\#11161](https://github.com/matrix-org/synapse/issues/11161))
- Support filtering by relation senders & types per [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440). ([\#11236](https://github.com/matrix-org/synapse/issues/11236))
- Add support for the `/_matrix/client/v3` and `/_matrix/media/v3` APIs from Matrix v1.1. ([\#11318](https://github.com/matrix-org/synapse/issues/11318), [\#11371](https://github.com/matrix-org/synapse/issues/11371))
- Support the stable version of [MSC2778](https://github.com/matrix-org/matrix-doc/pull/2778): the `m.login.application_service` login type. Contributed by @tulir. ([\#11335](https://github.com/matrix-org/synapse/issues/11335))
- Add a new version of delete room admin API `DELETE /_synapse/admin/v2/rooms/<room_id>` to run it in the background. Contributed by @dklimpel. ([\#11223](https://github.com/matrix-org/synapse/issues/11223))
- Allow the admin [Delete Room API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#delete-room-api) to block a room without the need to join it. ([\#11228](https://github.com/matrix-org/synapse/issues/11228))
- Add an admin API to un-shadow-ban a user. ([\#11347](https://github.com/matrix-org/synapse/issues/11347))
- Add an admin API to run background database schema updates. ([\#11352](https://github.com/matrix-org/synapse/issues/11352))
- Add an admin API for blocking a room. ([\#11324](https://github.com/matrix-org/synapse/issues/11324))
- Update the JWT login type to support a custom `sub` claim. ([\#11361](https://github.com/matrix-org/synapse/issues/11361))
- Store and allow querying of arbitrary event relations. ([\#11391](https://github.com/matrix-org/synapse/issues/11391))
Bugfixes
--------
- Fix a long-standing bug wherein display names or avatar URLs containing null bytes cause an internal server error when stored in the DB. ([\#11230](https://github.com/matrix-org/synapse/issues/11230))
- Prevent [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) historical state events from being pushed to an application service via `/transactions`. ([\#11265](https://github.com/matrix-org/synapse/issues/11265))
- Fix a long-standing bug where uploading extremely thin images (e.g. 1000x1) would fail. Contributed by @Neeeflix. ([\#11288](https://github.com/matrix-org/synapse/issues/11288))
- Fix a bug, introduced in Synapse 1.46.0, which caused the `check_3pid_auth` and `on_logged_out` callbacks in legacy password authentication provider modules to not be registered. Modules using the generic module interface were not affected. ([\#11340](https://github.com/matrix-org/synapse/issues/11340))
- Fix a bug introduced in 1.41.0 where space hierarchy responses would be incorrectly reused if multiple users were to make the same request at the same time. ([\#11355](https://github.com/matrix-org/synapse/issues/11355))
- Fix a bug introduced in 1.45.0 where the `read_templates` method of the module API would error. ([\#11377](https://github.com/matrix-org/synapse/issues/11377))
- Fix an issue introduced in 1.47.0 which prevented servers re-joining rooms they had previously left, if their signing keys were replaced. ([\#11379](https://github.com/matrix-org/synapse/issues/11379))
- Fix a bug introduced in 1.13.0 where creating and publishing a room could cause errors if `room_list_publication_rules` is configured. ([\#11392](https://github.com/matrix-org/synapse/issues/11392))
- Improve performance of various background database updates. ([\#11421](https://github.com/matrix-org/synapse/issues/11421), [\#11422](https://github.com/matrix-org/synapse/issues/11422))
Improved Documentation
----------------------
- Suggest users of the Debian packages add configuration to `/etc/matrix-synapse/conf.d/` to prevent, upon upgrade, being asked to choose between their configuration and the maintainer's. ([\#11281](https://github.com/matrix-org/synapse/issues/11281))
- Fix typos in the documentation for the `username_available` admin API. Contributed by Stanislav Motylkov. ([\#11286](https://github.com/matrix-org/synapse/issues/11286))
- Add Single Sign-On, SAML and CAS pages to the documentation. ([\#11298](https://github.com/matrix-org/synapse/issues/11298))
- Change the word 'Home server' to the single word 'homeserver' in the documentation. ([\#11320](https://github.com/matrix-org/synapse/issues/11320))
- Fix missing quotes for wildcard domains in `federation_certificate_verification_whitelist`. ([\#11381](https://github.com/matrix-org/synapse/issues/11381))
Deprecations and Removals
-------------------------
- Remove deprecated `trust_identity_server_for_password_resets` configuration flag. ([\#11333](https://github.com/matrix-org/synapse/issues/11333), [\#11395](https://github.com/matrix-org/synapse/issues/11395))
Internal Changes
----------------
- Add type annotations to `synapse.metrics`. ([\#10847](https://github.com/matrix-org/synapse/issues/10847))
- Split out federated PDU retrieval function into a non-cached version. ([\#11242](https://github.com/matrix-org/synapse/issues/11242))
- Clean up code relating to to-device messages and sending ephemeral events to application services. ([\#11247](https://github.com/matrix-org/synapse/issues/11247))
- Fix a small typo in the error response when a relation type other than 'm.annotation' is passed to `GET /rooms/{room_id}/aggregations/{event_id}`. ([\#11278](https://github.com/matrix-org/synapse/issues/11278))
- Drop unused database tables `room_stats_historical` and `user_stats_historical`. ([\#11280](https://github.com/matrix-org/synapse/issues/11280))
- Require all files in synapse/ and tests/ to pass mypy unless specifically excluded. ([\#11282](https://github.com/matrix-org/synapse/issues/11282), [\#11285](https://github.com/matrix-org/synapse/issues/11285), [\#11359](https://github.com/matrix-org/synapse/issues/11359))
- Add missing type hints to `synapse.app`. ([\#11287](https://github.com/matrix-org/synapse/issues/11287))
- Remove unused parameters on `FederationEventHandler._check_event_auth`. ([\#11292](https://github.com/matrix-org/synapse/issues/11292))
- Add type hints to `synapse._scripts`. ([\#11297](https://github.com/matrix-org/synapse/issues/11297))
- Fix an issue which prevented the `remove_deleted_devices_from_device_inbox` background database schema update from running when updating from a recent Synapse version. ([\#11303](https://github.com/matrix-org/synapse/issues/11303))
- Add type hints to storage classes. ([\#11307](https://github.com/matrix-org/synapse/issues/11307), [\#11310](https://github.com/matrix-org/synapse/issues/11310), [\#11311](https://github.com/matrix-org/synapse/issues/11311), [\#11312](https://github.com/matrix-org/synapse/issues/11312), [\#11313](https://github.com/matrix-org/synapse/issues/11313), [\#11314](https://github.com/matrix-org/synapse/issues/11314), [\#11316](https://github.com/matrix-org/synapse/issues/11316), [\#11322](https://github.com/matrix-org/synapse/issues/11322), [\#11332](https://github.com/matrix-org/synapse/issues/11332), [\#11339](https://github.com/matrix-org/synapse/issues/11339), [\#11342](https://github.com/matrix-org/synapse/issues/11342))
- Add type hints to `synapse.util`. ([\#11321](https://github.com/matrix-org/synapse/issues/11321), [\#11328](https://github.com/matrix-org/synapse/issues/11328))
- Improve type annotations in Synapse's test suite. ([\#11323](https://github.com/matrix-org/synapse/issues/11323), [\#11330](https://github.com/matrix-org/synapse/issues/11330))
- Test that room alias deletion works as intended. ([\#11327](https://github.com/matrix-org/synapse/issues/11327))
- Add type annotations for some methods and properties in the module API. ([\#11341](https://github.com/matrix-org/synapse/issues/11341))
- Fix running `scripts-dev/complement.sh`, which was broken in v1.47.0rc1. ([\#11368](https://github.com/matrix-org/synapse/issues/11368))
- Rename internal functions for token generation to better reflect what they do. ([\#11369](https://github.com/matrix-org/synapse/issues/11369), [\#11370](https://github.com/matrix-org/synapse/issues/11370))
- Add type hints to configuration classes. ([\#11377](https://github.com/matrix-org/synapse/issues/11377))
- Publish a `develop` image to Docker Hub. ([\#11380](https://github.com/matrix-org/synapse/issues/11380))
- Keep fallback key marked as used if it's re-uploaded. ([\#11382](https://github.com/matrix-org/synapse/issues/11382))
- Use `auto_attribs` on the `attrs` class `RefreshTokenLookupResult`. ([\#11386](https://github.com/matrix-org/synapse/issues/11386))
- Rename unstable `access_token_lifetime` configuration option to `refreshable_access_token_lifetime` to make it clear it only concerns refreshable access tokens. ([\#11388](https://github.com/matrix-org/synapse/issues/11388))
- Do not run the broken MSC2716 tests when running `scripts-dev/complement.sh`. ([\#11389](https://github.com/matrix-org/synapse/issues/11389))
- Remove dead code from supporting ACME. ([\#11393](https://github.com/matrix-org/synapse/issues/11393))
- Refactor including the bundled relations when serializing an event. ([\#11408](https://github.com/matrix-org/synapse/issues/11408))
Synapse 1.47.1 (2021-11-23)
===========================
This release fixes a security issue in the media store, affecting all prior releases of Synapse. Server administrators are encouraged to update Synapse as soon as possible. We are not aware of this vulnerability being exploited in the wild.
Server administrators who are unable to update Synapse may use the workarounds described in the linked GitHub Security Advisory below.
Security advisory
-----------------
The following issue is fixed in 1.47.1.
- **[GHSA-3hfw-x7gx-437c](https://github.com/matrix-org/synapse/security/advisories/GHSA-3hfw-x7gx-437c) / [CVE-2021-41281](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41281): Path traversal when downloading remote media.**
Synapse instances with the media repository enabled can be tricked into downloading a file from a remote server into an arbitrary directory, potentially outside the media store directory.
The last two directories and file name of the path are chosen randomly by Synapse and cannot be controlled by an attacker, which limits the impact.
Homeservers with the media repository disabled are unaffected. Homeservers configured with a federation whitelist are also unaffected.
Fixed by [91f2bd090](https://github.com/matrix-org/synapse/commit/91f2bd090).
Synapse 1.47.0 (2021-11-17)
===========================
No significant changes since 1.47.0rc3.
Synapse 1.47.0rc3 (2021-11-16)
==============================
Bugfixes
--------
- Fix a bug introduced in 1.47.0rc1 which caused worker processes to not halt startup in the presence of outstanding database migrations. ([\#11346](https://github.com/matrix-org/synapse/issues/11346))
- Fix a bug introduced in 1.47.0rc1 which prevented the 'remove deleted devices from `device_inbox` column' background process from running when updating from a recent Synapse version. ([\#11303](https://github.com/matrix-org/synapse/issues/11303), [\#11353](https://github.com/matrix-org/synapse/issues/11353))
Synapse 1.47.0rc2 (2021-11-10)
==============================
@@ -8678,14 +8805,14 @@ General:
Federation:
- Add key distribution mechanisms for fetching public keys of unavailable remote home servers. See [Retrieving Server Keys](https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys) in the spec.
- Add key distribution mechanisms for fetching public keys of unavailable remote homeservers. See [Retrieving Server Keys](https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys) in the spec.
Configuration:
- Add support for multiple config files.
- Add support for dictionaries in config files.
- Remove support for specifying config options on the command line, except for:
- `--daemonize` - Daemonize the home server.
- `--daemonize` - Daemonize the homeserver.
- `--manhole` - Turn on the twisted telnet manhole service on the given port.
- `--database-path` - The path to a sqlite database to use.
- `--verbose` - The verbosity level.
@@ -8890,7 +9017,7 @@ This version adds support for using a TURN server. See docs/turn-howto.rst on ho
Homeserver:
- Add support for redaction of messages.
- Fix bug where inviting a user on a remote home server could take up to 20-30s.
- Fix bug where inviting a user on a remote homeserver could take up to 20-30s.
- Implement a get current room state API.
- Add support specifying and retrieving turn server configuration.
@@ -8980,7 +9107,7 @@ Changes in synapse 0.2.3 (2014-09-12)
Homeserver:
- Fix bug where we stopped sending events to remote home servers if a user from that home server left, even if there were some still in the room.
- Fix bug where we stopped sending events to remote homeservers if a user from that homeserver left, even if there were some still in the room.
- Fix bugs in the state conflict resolution where it was incorrectly rejecting events.
Webclient:

View File

@@ -1 +0,0 @@
Add a new version of delete room admin API `DELETE /_synapse/admin/v2/rooms/<room_id>` to run it in background. Contributed by @dklimpel.

View File

@@ -1 +0,0 @@
Allow the admin [Delete Room API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#delete-room-api) to block a room without the need to join it.

View File

@@ -1,2 +0,0 @@
Fix a long-standing bug wherein display names or avatar URLs containing null bytes cause an internal server error
when stored in the DB.

View File

@@ -1 +0,0 @@
Support filtering by relation senders & types per [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440).

View File

@@ -1 +0,0 @@
Split out federated PDU retrieval function into a non-cached version.

View File

@@ -1 +0,0 @@
Clean up code relating to to-device messages and sending ephemeral events to application services.

View File

@@ -1 +0,0 @@
Fix a small typo in the error response when a relation type other than 'm.annotation' is passed to `GET /rooms/{room_id}/aggregations/{event_id}`.

View File

@@ -1 +0,0 @@
Drop unused db tables `room_stats_historical` and `user_stats_historical`.

View File

@@ -1 +0,0 @@
Suggest users of the Debian packages add configuration to `/etc/matrix-synapse/conf.d/` to prevent, upon upgrade, being asked to choose between their configuration and the maintainer's.

View File

@@ -1 +0,0 @@
Require all files in synapse/ and tests/ to pass mypy unless specifically excluded.

View File

@@ -1 +0,0 @@
Require all files in synapse/ and tests/ to pass mypy unless specifically excluded.

View File

@@ -1 +0,0 @@
Fix typo in the word `available` and fix HTTP method (should be `GET`) for the `username_available` admin API. Contributed by Stanislav Motylkov.

View File

@@ -1 +0,0 @@
Add missing type hints to `synapse.app`.

View File

@@ -1 +0,0 @@
Fix a long-standing bug where uploading extremely thin images (e.g. 1000x1) would fail. Contributed by @Neeeflix.

View File

@@ -1 +0,0 @@
Remove unused parameters on `FederationEventHandler._check_event_auth`.

View File

@@ -1 +0,0 @@
Add type hints to `synapse._scripts`.

View File

@@ -1 +0,0 @@
Add Single Sign-On, SAML and CAS pages to the documentation.

View File

@@ -1 +0,0 @@
Fix an issue which prevented the 'remove deleted devices from device_inbox column' background process from running when updating from a recent Synapse version.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Add type hints to `synapse.util`.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Improve type annotations in Synapse's test suite.

View File

@@ -1 +0,0 @@
Test that room alias deletion works as intended.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Support the stable version of [MSC2778](https://github.com/matrix-org/matrix-doc/pull/2778): the `m.login.application_service` login type. Contributed by @tulir.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

View File

@@ -1 +0,0 @@
Add type hints to storage classes.

debian/changelog
View File

@@ -1,3 +1,33 @@
matrix-synapse-py3 (1.48.0) stable; urgency=medium
* New synapse release 1.48.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 30 Nov 2021 11:24:15 +0000
matrix-synapse-py3 (1.48.0~rc1) stable; urgency=medium
* New synapse release 1.48.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Nov 2021 15:56:03 +0000
matrix-synapse-py3 (1.47.1) stable; urgency=medium
* New synapse release 1.47.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 19 Nov 2021 13:44:32 +0000
matrix-synapse-py3 (1.47.0) stable; urgency=medium
* New synapse release 1.47.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 17 Nov 2021 13:09:43 +0000
matrix-synapse-py3 (1.47.0~rc3) stable; urgency=medium
* New synapse release 1.47.0~rc3.
-- Synapse Packaging team <packages@matrix.org> Tue, 16 Nov 2021 14:32:47 +0000
matrix-synapse-py3 (1.47.0~rc2) stable; urgency=medium
[ Dan Callahan ]

View File

@@ -148,14 +148,6 @@ bcrypt_rounds: 12
allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
enable_group_creation: true
# The list of identity servers trusted to verify third party
# identifiers by this server.
#
# Also defines the ID server which will be called when an account is
# deactivated (one will be picked arbitrarily).
trusted_third_party_id_servers:
- matrix.org
- vector.im
## Metrics ###

View File

@@ -48,7 +48,7 @@ WORKERS_CONFIG = {
"app": "synapse.app.user_dir",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$"
"^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$"
],
"shared_extra_conf": {"update_user_directory": False},
"worker_extra_conf": "",
@@ -85,10 +85,10 @@ WORKERS_CONFIG = {
"app": "synapse.app.generic_worker",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(v2_alpha|r0)/sync$",
"^/_matrix/client/(api/v1|v2_alpha|r0)/events$",
"^/_matrix/client/(api/v1|r0)/initialSync$",
"^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$",
"^/_matrix/client/(v2_alpha|r0|v3)/sync$",
"^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$",
"^/_matrix/client/(api/v1|r0|v3)/initialSync$",
"^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
@@ -146,11 +146,11 @@ WORKERS_CONFIG = {
"app": "synapse.app.generic_worker",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact",
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
"^/_matrix/client/(api/v1|r0|unstable)/join/",
"^/_matrix/client/(api/v1|r0|unstable)/profile/",
"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact",
"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send",
"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/join/",
"^/_matrix/client/(api/v1|r0|v3|unstable)/profile/",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
@@ -158,7 +158,7 @@ WORKERS_CONFIG = {
"frontend_proxy": {
"app": "synapse.app.frontend_proxy",
"listener_resources": ["client", "replication"],
"endpoint_patterns": ["^/_matrix/client/(api/v1|r0|unstable)/keys/upload"],
"endpoint_patterns": ["^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload"],
"shared_extra_conf": {},
"worker_extra_conf": (
"worker_main_http_uri: http://127.0.0.1:%d"

View File

@@ -50,8 +50,10 @@ build the documentation with:
mdbook build
```
The rendered contents will be outputted to a new `book/` directory at the root of the repository. You can
browse the book by opening `book/index.html` in a web browser.
The rendered contents will be outputted to a new `book/` directory at the root of the repository. Please note that
`index.html` is not built by default; it is created by copying the file `welcome_and_overview.html` to `index.html`
during deployment. Thus, when running `mdbook serve` locally, the book will initially show a 404 in place of the index.
Do not be alarmed!
You can also have mdbook host the docs on a local webserver with hot-reload functionality via:

View File

@@ -3,6 +3,7 @@
- [Room Details API](#room-details-api)
- [Room Members API](#room-members-api)
- [Room State API](#room-state-api)
- [Block Room API](#block-room-api)
- [Delete Room API](#delete-room-api)
* [Version 1 (old version)](#version-1-old-version)
* [Version 2 (new version)](#version-2-new-version)
@@ -386,6 +387,83 @@ A response body like the following is returned:
}
```
# Block Room API
The Block Room admin API allows server admins to block and unblock rooms,
and query to see if a given room is blocked.
This API can be used to pre-emptively block a room, even if it's unknown to this
homeserver. Users will be prevented from joining a blocked room.
## Block or unblock a room
The API is:
```
PUT /_synapse/admin/v1/rooms/<room_id>/block
```
with a body of:
```json
{
"block": true
}
```
A response body like the following is returned:
```json
{
"block": true
}
```
**Parameters**
The following parameters should be set in the URL:
- `room_id` - The ID of the room.
The following JSON body parameters are available:
- `block` - If `true` the room will be blocked and if `false` the room will be unblocked.
**Response**
The following fields are possible in the JSON response body:
- `block` - A boolean. `true` if the room is blocked, otherwise `false`
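For illustration, blocking a room from a script might look like this (a sketch using the third-party `requests` library; the server name, room ID, and access token are placeholders):

```python
import requests

# Pre-emptively block a room; requires a server admin's access token.
resp = requests.put(
    "https://synapse.example.com/_synapse/admin/v1/rooms/!badroom:example.com/block",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={"block": True},
)
resp.raise_for_status()
print(resp.json())  # expected: {"block": True}
```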
## Get block status
The API is:
```
GET /_synapse/admin/v1/rooms/<room_id>/block
```
A response body like the following is returned:
```json
{
"block": true,
"user_id": "<user_id>"
}
```
**Parameters**
The following parameters should be set in the URL:
- `room_id` - The ID of the room.
**Response**
The following fields are possible in the JSON response body:
- `block` - A boolean. `true` if the room is blocked, otherwise `false`
- `user_id` - An optional string. If the room is blocked (`block` is `true`), shows
the user who added the room to the blocking list. Otherwise it is not displayed.
# Delete Room API
The Delete Room admin API allows server admins to remove rooms from the server

View File

@@ -948,7 +948,7 @@ The following fields are returned in the JSON response body:
See also the
[Client-Server API Spec on pushers](https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers).
## Shadow-banning users
## Controlling whether a user is shadow-banned
Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
A shadow-banned user receives successful responses to their client-server API requests,
@@ -961,16 +961,22 @@ or broken behaviour for the client. A shadow-banned user will not receive any
notification and it is generally more appropriate to ban or kick abusive users.
A shadow-banned user will be unable to contact anyone on the server.
The API is:
To shadow-ban a user the API is:
```
POST /_synapse/admin/v1/users/<user_id>/shadow_ban
```
To un-shadow-ban a user the API is:
```
DELETE /_synapse/admin/v1/users/<user_id>/shadow_ban
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.
An empty JSON dict is returned in both cases.
**Parameters**
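As a usage illustration, both operations take no request body and return an empty JSON dict (a sketch using `requests`; user ID, host, and token are placeholders):

```python
import requests

base = "https://synapse.example.com/_synapse/admin/v1/users/@spammer:example.com/shadow_ban"
headers = {"Authorization": "Bearer <admin_access_token>"}

requests.post(base, headers=headers).raise_for_status()    # shadow-ban
requests.delete(base, headers=headers).raise_for_status()  # un-shadow-ban
```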

View File

@@ -7,7 +7,7 @@
## Server to Server Stack
To use the server to server stack, home servers should only need to
To use the server to server stack, homeservers should only need to
interact with the Messaging layer.
The server to server side of things is designed into 4 distinct layers:
@@ -23,7 +23,7 @@ Server with a domain specific API.
1. **Messaging Layer**
This is what the rest of the Home Server hits to send messages, join rooms,
This is what the rest of the homeserver hits to send messages, join rooms,
etc. It also allows you to register callbacks for when it gets notified by
lower levels that e.g. a new message has been received.
@@ -45,7 +45,7 @@ Server with a domain specific API.
For incoming PDUs, it has to check the PDUs it references to see
if we have missed any. If we have go and ask someone (another
home server) for it.
homeserver) for it.
3. **Transaction Layer**

View File

@@ -22,8 +22,9 @@ will be removed in a future version of Synapse.
The `token` field should include the JSON web token with the following claims:
* The `sub` (subject) claim is required and should encode the local part of the
user ID.
* A claim that encodes the local part of the user ID is required. By default,
the `sub` (subject) claim is used, or a custom claim can be set in the
configuration file.
* The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
claims are optional, but validated if present.
* The issuer (`iss`) claim is optional, but required and validated if configured.
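For example, assuming a hypothetical `subject_claim: "preferred_username"` in the JWT configuration, a login token could be minted like this (a sketch using PyJWT with an HS256 shared secret; all values are illustrative):

```python
import time
import jwt  # PyJWT

now = int(time.time())
token = jwt.encode(
    {
        "preferred_username": "alice",  # localpart claim named by subject_claim
        "iat": now,                     # optional, validated if present
        "exp": now + 3600,              # optional, validated if present
    },
    "shared-secret",
    algorithm="HS256",
)
```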

View File

@@ -1,7 +1,7 @@
<h2 style="color:red">
This page of the Synapse documentation is now deprecated. For up to date
documentation on setting up or writing a password auth provider module, please see
<a href="modules.md">this page</a>.
<a href="modules/index.md">this page</a>.
</h2>
# Password auth provider modules

View File

@@ -647,8 +647,8 @@ retention:
#
#federation_certificate_verification_whitelist:
# - lon.example.com
# - *.domain.com
# - *.onion
# - "*.domain.com"
# - "*.onion"
# List of custom certificate authorities for federation traffic.
#
@@ -2039,6 +2039,12 @@ sso:
#
#algorithm: "provided-by-your-issuer"
# Name of the claim containing a unique identifier for the user.
#
# Optional, defaults to `sub`.
#
#subject_claim: "sub"
# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and
@@ -2360,8 +2366,8 @@ user_directory:
# indexes were (re)built was before Synapse 1.44, you'll have to
# rebuild the indexes in order to search through all known users.
# These indexes are built the first time Synapse starts; admins can
# manually trigger a rebuild following the instructions at
# https://matrix-org.github.io/synapse/latest/user_directory.html
# manually trigger a rebuild via API following the instructions at
# https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
#
# Uncomment to return search results containing all known users, even if that
# user does not share a room with the requester.

View File

@@ -1,12 +1,12 @@
# Overview
This document explains how to enable VoIP relaying on your Home Server with
This document explains how to enable VoIP relaying on your homeserver with
TURN.
The synapse Matrix Home Server supports integration with TURN server via the
The synapse Matrix homeserver supports integration with TURN server via the
[TURN server REST API](<https://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This
allows the Home Server to generate credentials that are valid for use on the
TURN server through the use of a secret shared between the Home Server and the
allows the homeserver to generate credentials that are valid for use on the
TURN server through the use of a secret shared between the homeserver and the
TURN server.
The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API) and integrate it with synapse.
@@ -165,18 +165,18 @@ This will install and start a systemd service called `coturn`.
## Synapse setup
Your home server configuration file needs the following extra keys:
Your homeserver configuration file needs the following extra keys:
1. "`turn_uris`": This needs to be a yaml list of public-facing URIs
for your TURN server to be given out to your clients. Add separate
entries for each transport your TURN server supports.
2. "`turn_shared_secret`": This is the secret shared between your
Home server and your TURN server, so you should set it to the same
homeserver and your TURN server, so you should set it to the same
string you used in turnserver.conf.
3. "`turn_user_lifetime`": This is the amount of time credentials
generated by your Home Server are valid for (in milliseconds).
generated by your homeserver are valid for (in milliseconds).
Shorter times offer less potential for abuse at the expense of
increased traffic between web clients and your home server to
increased traffic between web clients and your homeserver to
refresh credentials. The TURN REST API specification recommends
one day (86400000).
4. "`turn_allow_guests`": Whether to allow guest users to use the
@@ -220,7 +220,7 @@ Here are a few things to try:
anyone who has successfully set this up.
* Check that you have opened your firewall to allow TCP and UDP traffic to the
TURN ports (normally 3478 and 5479).
TURN ports (normally 3478 and 5349).
* Check that you have opened your firewall to allow UDP traffic to the UDP
relay ports (49152-65535 by default).

View File

@@ -42,7 +42,6 @@ For each update:
`average_items_per_ms` how many items are processed per millisecond based on an exponential average.
## Enabled
This API allows pausing background updates.
@@ -82,3 +81,29 @@ The API returns the `enabled` param.
```
There is also a `GET` version which returns the `enabled` state.
## Run
This API schedules a specific background update to run. The job starts immediately after calling the API.
The API is:
```
POST /_synapse/admin/v1/background_updates/start_job
```
with the following body:
```json
{
"job_name": "populate_stats_process_rooms"
}
```
The following JSON body parameters are available:
- `job_name` - A string specifying which job to run. Valid values are:
- `populate_stats_process_rooms` - Recalculate the stats for all rooms.
- `regenerate_directory` - Recalculate the [user directory](../../../user_directory.md) if it is stale or out of sync.
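Scheduling a job from a script could look like the following (a sketch using `requests`; host and token are placeholders):

```python
import requests

# Start the user directory rebuild as a background job.
resp = requests.post(
    "https://synapse.example.com/_synapse/admin/v1/background_updates/start_job",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={"job_name": "regenerate_directory"},
)
resp.raise_for_status()
```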

View File

@@ -6,9 +6,9 @@ on this particular server - i.e. ones which your account shares a room with, or
who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to execute the SQL [here](https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/main/delta/53/user_dir_populate.sql)
and then restart synapse. This should then start a background task to
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to use the [admin API](usage/administration/admin_api/background_updates.md#run)
and execute the job `regenerate_directory`. This should then start a background task to
flush the current tables and regenerate the directory.
Data model

View File

@@ -182,10 +182,10 @@ This worker can handle API requests matching the following regular
expressions:
# Sync requests
^/_matrix/client/(v2_alpha|r0)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0)/events$
^/_matrix/client/(api/v1|r0)/initialSync$
^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
^/_matrix/client/(v2_alpha|r0|v3)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$
^/_matrix/client/(api/v1|r0|v3)/initialSync$
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
# Federation requests
^/_matrix/federation/v1/event/
@@ -216,40 +216,40 @@ expressions:
^/_matrix/federation/v1/send/
# Client API requests
^/_matrix/client/(api/v1|r0|unstable)/createRoom$
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
^/_matrix/client/(api/v1|r0|v3|unstable)/createRoom$
^/_matrix/client/(api/v1|r0|v3|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/hierarchy$
^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|unstable)/devices$
^/_matrix/client/(api/v1|r0|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|v3|unstable)/devices$
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/changes$
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/
^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$
^/_matrix/client/(api/v1|r0|unstable)/search$
^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|v3|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups/
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
^/_matrix/client/(api/v1|r0|v3|unstable)/search$
# Registration/login requests
^/_matrix/client/(api/v1|r0|unstable)/login$
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(api/v1|r0|v3|unstable)/login$
^/_matrix/client/(r0|v3|unstable)/register$
^/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity$
# Event sending requests
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|v3|unstable)/join/
^/_matrix/client/(api/v1|r0|v3|unstable)/profile/
Additionally, the following REST endpoints can be handled for GET requests:
@@ -261,14 +261,14 @@ room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/messages$
Additionally, the following endpoints should be included if Synapse is configured
to use SSO (you only need to include the ones for whichever SSO provider you're
using):
# for all SSO providers
^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect
^/_matrix/client/(api/v1|r0|v3|unstable)/login/sso/redirect
^/_synapse/client/pick_idp$
^/_synapse/client/pick_username
^/_synapse/client/new_user_consent$
@@ -281,7 +281,7 @@ using):
^/_synapse/client/saml2/authn_response$
# CAS requests.
^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$
^/_matrix/client/(api/v1|r0|v3|unstable)/login/cas/ticket$
Ensure that all SSO logins go to a single process.
For multiple workers not handling the SSO endpoints properly, see
@@ -465,7 +465,7 @@ Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for
Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$
When using this worker you must also set `update_user_directory: False` in the
shared configuration file to stop the main synapse running background
@@ -477,12 +477,12 @@ Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/keys/upload
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload
If `use_presence` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/[^/]+/status
This "stub" presence handler will pass through `GET` request but make the
`PUT` effectively a no-op.

View File

@@ -151,6 +151,9 @@ disallow_untyped_defs = True
[mypy-synapse.app.*]
disallow_untyped_defs = True
[mypy-synapse.config._base]
disallow_untyped_defs = True
[mypy-synapse.crypto.*]
disallow_untyped_defs = True
@@ -160,6 +163,9 @@ disallow_untyped_defs = True
[mypy-synapse.handlers.*]
disallow_untyped_defs = True
[mypy-synapse.metrics.*]
disallow_untyped_defs = True
[mypy-synapse.push.*]
disallow_untyped_defs = True
@@ -196,92 +202,11 @@ disallow_untyped_defs = True
[mypy-synapse.streams.*]
disallow_untyped_defs = True
[mypy-synapse.util.batching_queue]
[mypy-synapse.util.*]
disallow_untyped_defs = True
[mypy-synapse.util.caches.cached_call]
disallow_untyped_defs = True
[mypy-synapse.util.caches.dictionary_cache]
disallow_untyped_defs = True
[mypy-synapse.util.caches.lrucache]
disallow_untyped_defs = True
[mypy-synapse.util.caches.response_cache]
disallow_untyped_defs = True
[mypy-synapse.util.caches.stream_change_cache]
disallow_untyped_defs = True
[mypy-synapse.util.caches.ttl_cache]
disallow_untyped_defs = True
[mypy-synapse.util.daemonize]
disallow_untyped_defs = True
[mypy-synapse.util.file_consumer]
disallow_untyped_defs = True
[mypy-synapse.util.frozenutils]
disallow_untyped_defs = True
[mypy-synapse.util.hash]
disallow_untyped_defs = True
[mypy-synapse.util.httpresourcetree]
disallow_untyped_defs = True
[mypy-synapse.util.iterutils]
disallow_untyped_defs = True
[mypy-synapse.util.linked_list]
disallow_untyped_defs = True
[mypy-synapse.util.logcontext]
disallow_untyped_defs = True
[mypy-synapse.util.logformatter]
disallow_untyped_defs = True
[mypy-synapse.util.macaroons]
disallow_untyped_defs = True
[mypy-synapse.util.manhole]
disallow_untyped_defs = True
[mypy-synapse.util.module_loader]
disallow_untyped_defs = True
[mypy-synapse.util.msisdn]
disallow_untyped_defs = True
[mypy-synapse.util.patch_inline_callbacks]
disallow_untyped_defs = True
[mypy-synapse.util.ratelimitutils]
disallow_untyped_defs = True
[mypy-synapse.util.retryutils]
disallow_untyped_defs = True
[mypy-synapse.util.rlimit]
disallow_untyped_defs = True
[mypy-synapse.util.stringutils]
disallow_untyped_defs = True
[mypy-synapse.util.templates]
disallow_untyped_defs = True
[mypy-synapse.util.threepids]
disallow_untyped_defs = True
[mypy-synapse.util.wheel_timer]
disallow_untyped_defs = True
[mypy-synapse.util.versionstring]
disallow_untyped_defs = True
[mypy-synapse.util.caches.treecache]
disallow_untyped_defs = False
[mypy-tests.handlers.test_user_directory]
disallow_untyped_defs = True

View File

@@ -24,7 +24,7 @@
set -e
# Change to the repository root
cd "$(dirname "$0")/.."
cd "$(dirname $0)/.."
# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
@@ -61,8 +61,8 @@ cd "$COMPLEMENT_DIR"
EXTRA_COMPLEMENT_ARGS=""
if [[ -n "$1" ]]; then
# A test name regex has been set, supply it to Complement
EXTRA_COMPLEMENT_ARGS=(-run "$1")
EXTRA_COMPLEMENT_ARGS+="-run $1 "
fi
# Run the tests!
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403,msc2716 -count=1 "${EXTRA_COMPLEMENT_ARGS[@]}" ./tests/...
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...

View File

@@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.47.0rc2"
__version__ = "1.48.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -30,7 +30,8 @@ FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
STATIC_PREFIX = "/_matrix/static"
WEB_CLIENT_PREFIX = "/_matrix/client"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_PREFIX = "/_matrix/media/r0"
MEDIA_R0_PREFIX = "/_matrix/media/r0"
MEDIA_V3_PREFIX = "/_matrix/media/v3"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"

View File

@@ -402,7 +402,7 @@ async def start(hs: "HomeServer") -> None:
if hasattr(signal, "SIGHUP"):
@wrap_as_background_process("sighup")
def handle_sighup(*args: Any, **kwargs: Any) -> None:
async def handle_sighup(*args: Any, **kwargs: Any) -> None:
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
sdnotify(b"RELOADING=1")
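Both here and in the typing-handler change later in this diff, functions decorated with @wrap_as_background_process become coroutines. A minimal sketch of the resulting pattern (handler name hypothetical; decorator behaviour as implied by these hunks):

from synapse.metrics.background_process_metrics import wrap_as_background_process

@wrap_as_background_process("example_cleanup")
async def _do_cleanup() -> None:
    # The wrapper schedules this coroutine as a tracked background process.
    ...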

View File

@@ -26,7 +26,8 @@ from synapse.api.urls import (
CLIENT_API_PREFIX,
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,
MEDIA_R0_PREFIX,
MEDIA_V3_PREFIX,
SERVER_KEY_V2_PREFIX,
)
from synapse.app import _base
@@ -338,7 +339,8 @@ class GenericWorkerServer(HomeServer):
resources.update(
{
MEDIA_PREFIX: media_repo,
MEDIA_R0_PREFIX: media_repo,
MEDIA_V3_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
"/_synapse/admin": admin_resource,
}

View File

@@ -29,7 +29,8 @@ from synapse import events
from synapse.api.urls import (
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,
MEDIA_R0_PREFIX,
MEDIA_V3_PREFIX,
SERVER_KEY_V2_PREFIX,
STATIC_PREFIX,
WEB_CLIENT_PREFIX,
@@ -193,6 +194,7 @@ class SynapseHomeServer(HomeServer):
{
"/_matrix/client/api/v1": client_resource,
"/_matrix/client/r0": client_resource,
"/_matrix/client/v3": client_resource,
"/_matrix/client/unstable": client_resource,
"/_matrix/client/v2_alpha": client_resource,
"/_matrix/client/versions": client_resource,
@@ -244,7 +246,11 @@ class SynapseHomeServer(HomeServer):
if self.config.server.enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update(
{MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo}
{
MEDIA_R0_PREFIX: media_repo,
MEDIA_V3_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
}
)
elif name == "media":
raise ConfigError(

View File

@@ -231,13 +231,32 @@ class ApplicationServiceApi(SimpleHttpClient):
json_body=body,
args={"access_token": service.hs_token},
)
if logger.isEnabledFor(logging.DEBUG):
logger.debug(
"push_bulk to %s succeeded! events=%s",
uri,
[event.get("event_id") for event in events],
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
return True
except CodeMessageException as e:
logger.warning("push_bulk to %s received %s", uri, e.code)
logger.warning(
"push_bulk to %s received code=%s msg=%s",
uri,
e.code,
e.msg,
exc_info=logger.isEnabledFor(logging.DEBUG),
)
except Exception as ex:
logger.warning("push_bulk to %s threw exception %s", uri, ex)
logger.warning(
"push_bulk to %s threw exception(%s) %s args=%s",
uri,
type(ex).__name__,
ex,
ex.args,
exc_info=logger.isEnabledFor(logging.DEBUG),
)
failed_transactions_counter.labels(service.id).inc()
return False
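The reworked logging guards the relatively expensive event-id list comprehension behind logger.isEnabledFor, and reuses the same check to decide whether a traceback is attached. A self-contained sketch of the idiom (names illustrative, not Synapse's):

import logging

logger = logging.getLogger(__name__)

def log_success(uri: str, events: list) -> None:
    # Only build the potentially large list of event IDs when DEBUG is active.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug(
            "push to %s succeeded! events=%s",
            uri,
            [event.get("event_id") for event in events],
        )

def log_failure(uri: str, ex: Exception) -> None:
    # Attach exc_info (the traceback) only when DEBUG logging is enabled.
    logger.warning(
        "push to %s threw exception(%s) %s",
        uri,
        type(ex).__name__,
        ex,
        exc_info=logger.isEnabledFor(logging.DEBUG),
    )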

View File

@@ -20,7 +20,18 @@ import os
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import Any, Iterable, List, MutableMapping, Optional, Union
from typing import (
Any,
Dict,
Iterable,
List,
MutableMapping,
Optional,
Tuple,
Type,
TypeVar,
Union,
)
import attr
import jinja2
@@ -78,7 +89,7 @@ CONFIG_FILE_HEADER = """\
"""
def path_exists(file_path):
def path_exists(file_path: str) -> bool:
"""Check if a file exists
Unlike os.path.exists, this throws an exception if there is an error
@@ -86,7 +97,7 @@ def path_exists(file_path):
the parent dir).
Returns:
bool: True if the file exists; False if not.
True if the file exists; False if not.
"""
try:
os.stat(file_path)
@@ -102,15 +113,15 @@ class Config:
A configuration section, containing configuration keys and values.
Attributes:
section (str): The section title of this config object, such as
section: The section title of this config object, such as
"tls" or "logger". This is used to refer to it on the root
logger (for example, `config.tls.some_option`). Must be
defined in subclasses.
"""
section = None
section: str
def __init__(self, root_config=None):
def __init__(self, root_config: "RootConfig" = None):
self.root = root_config
# Get the path to the default Synapse template directory
@@ -119,7 +130,7 @@ class Config:
)
@staticmethod
def parse_size(value):
def parse_size(value: Union[str, int]) -> int:
if isinstance(value, int):
return value
sizes = {"K": 1024, "M": 1024 * 1024}
@@ -162,15 +173,15 @@ class Config:
return int(value) * size
@staticmethod
def abspath(file_path):
def abspath(file_path: str) -> str:
return os.path.abspath(file_path) if file_path else file_path
@classmethod
def path_exists(cls, file_path):
def path_exists(cls, file_path: str) -> bool:
return path_exists(file_path)
@classmethod
def check_file(cls, file_path, config_name):
def check_file(cls, file_path: Optional[str], config_name: str) -> str:
if file_path is None:
raise ConfigError("Missing config for %s." % (config_name,))
try:
@@ -183,7 +194,7 @@ class Config:
return cls.abspath(file_path)
@classmethod
def ensure_directory(cls, dir_path):
def ensure_directory(cls, dir_path: str) -> str:
dir_path = cls.abspath(dir_path)
os.makedirs(dir_path, exist_ok=True)
if not os.path.isdir(dir_path):
@@ -191,7 +202,7 @@ class Config:
return dir_path
@classmethod
def read_file(cls, file_path, config_name):
def read_file(cls, file_path: Any, config_name: str) -> str:
"""Deprecated: call read_file directly"""
return read_file(file_path, (config_name,))
@@ -284,6 +295,9 @@ class Config:
return [env.get_template(filename) for filename in filenames]
TRootConfig = TypeVar("TRootConfig", bound="RootConfig")
class RootConfig:
"""
Holder of an application's configuration.
@@ -308,7 +322,9 @@ class RootConfig:
raise Exception("Failed making %s: %r" % (config_class.section, e))
setattr(self, config_class.section, conf)
def invoke_all(self, func_name: str, *args, **kwargs) -> MutableMapping[str, Any]:
def invoke_all(
self, func_name: str, *args: Any, **kwargs: Any
) -> MutableMapping[str, Any]:
"""
Invoke a function on all instantiated config objects this RootConfig is
configured to use.
@@ -317,6 +333,7 @@ class RootConfig:
func_name: Name of function to invoke
*args
**kwargs
Returns:
ordered dictionary of config section name and the result of the
function from it.
@@ -332,7 +349,7 @@ class RootConfig:
return res
@classmethod
def invoke_all_static(cls, func_name: str, *args, **kwargs):
def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: Any) -> None:
"""
Invoke a static function on config objects this RootConfig is
configured to use.
@@ -341,6 +358,7 @@ class RootConfig:
func_name: Name of function to invoke
*args
**kwargs
Returns:
ordered dictionary of config section name and the result of the
function from it.
@@ -351,16 +369,16 @@ class RootConfig:
def generate_config(
self,
config_dir_path,
data_dir_path,
server_name,
generate_secrets=False,
report_stats=None,
open_private_ports=False,
listeners=None,
tls_certificate_path=None,
tls_private_key_path=None,
):
config_dir_path: str,
data_dir_path: str,
server_name: str,
generate_secrets: bool = False,
report_stats: Optional[bool] = None,
open_private_ports: bool = False,
listeners: Optional[List[dict]] = None,
tls_certificate_path: Optional[str] = None,
tls_private_key_path: Optional[str] = None,
) -> str:
"""
Build a default configuration file
@@ -368,27 +386,27 @@ class RootConfig:
(eg with --generate_config).
Args:
config_dir_path (str): The path where the config files are kept. Used to
config_dir_path: The path where the config files are kept. Used to
create filenames for things like the log config and the signing key.
data_dir_path (str): The path where the data files are kept. Used to create
data_dir_path: The path where the data files are kept. Used to create
filenames for things like the database and media store.
server_name (str): The server name. Used to initialise the server_name
server_name: The server name. Used to initialise the server_name
config param, but also used in the names of some of the config files.
generate_secrets (bool): True if we should generate new secrets for things
generate_secrets: True if we should generate new secrets for things
like the macaroon_secret_key. If False, these parameters will be left
unset.
report_stats (bool|None): Initial setting for the report_stats setting.
report_stats: Initial setting for the report_stats setting.
If None, report_stats will be left unset.
open_private_ports (bool): True to leave private ports (such as the non-TLS
open_private_ports: True to leave private ports (such as the non-TLS
HTTP listener) open to the internet.
listeners (list(dict)|None): A list of descriptions of the listeners
synapse should start with each of which specifies a port (str), a list of
listeners: A list of descriptions of the listeners synapse should
start with, each of which specifies a port (int), a list of
resources (list(str)), tls (bool) and type (str). For example:
[{
"port": 8448,
@@ -403,16 +421,12 @@ class RootConfig:
"type": "http",
}],
tls_certificate_path: The path to the tls certificate.
database (str|None): The database type to configure, either `psycog2`
or `sqlite3`.
tls_certificate_path (str|None): The path to the tls certificate.
tls_private_key_path (str|None): The path to the tls private key.
tls_private_key_path: The path to the tls private key.
Returns:
str: the yaml config file
The yaml config file
"""
return CONFIG_FILE_HEADER + "\n\n".join(
@@ -432,12 +446,15 @@ class RootConfig:
)
@classmethod
def load_config(cls, description, argv):
def load_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> TRootConfig:
"""Parse the commandline and config files
Doesn't support config-file-generation: used by the worker apps.
Returns: Config object.
Returns:
Config object.
"""
config_parser = argparse.ArgumentParser(description=description)
cls.add_arguments_to_parser(config_parser)
@@ -446,7 +463,7 @@ class RootConfig:
return obj
@classmethod
def add_arguments_to_parser(cls, config_parser):
def add_arguments_to_parser(cls, config_parser: argparse.ArgumentParser) -> None:
"""Adds all the config flags to an ArgumentParser.
Doesn't support config-file-generation: used by the worker apps.
@@ -454,7 +471,7 @@ class RootConfig:
Used for workers where we want to add extra flags/subcommands.
Args:
config_parser (ArgumentParser): App description
config_parser: App description
"""
config_parser.add_argument(
@@ -477,7 +494,9 @@ class RootConfig:
cls.invoke_all_static("add_arguments", config_parser)
@classmethod
def load_config_with_parser(cls, parser, argv):
def load_config_with_parser(
cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
) -> Tuple[TRootConfig, argparse.Namespace]:
"""Parse the commandline and config files with the given parser
Doesn't support config-file-generation: used by the worker apps.
@@ -485,13 +504,12 @@ class RootConfig:
Used for workers where we want to add extra flags/subcommands.
Args:
parser (ArgumentParser)
argv (list[str])
parser
argv
Returns:
tuple[HomeServerConfig, argparse.Namespace]: Returns the parsed
config object and the parsed argparse.Namespace object from
`parser.parse_args(..)`
Returns the parsed config object and the parsed argparse.Namespace
object from `parser.parse_args(..)`
"""
obj = cls()
@@ -520,12 +538,15 @@ class RootConfig:
return obj, config_args
@classmethod
def load_or_generate_config(cls, description, argv):
def load_or_generate_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> Optional[TRootConfig]:
"""Parse the commandline and config files
Supports generation of config files, so is used for the main homeserver app.
Returns: Config object, or None if --generate-config or --generate-keys was set
Returns:
Config object, or None if --generate-config or --generate-keys was set
"""
parser = argparse.ArgumentParser(description=description)
parser.add_argument(
@@ -680,16 +701,21 @@ class RootConfig:
return obj
def parse_config_dict(self, config_dict, config_dir_path=None, data_dir_path=None):
def parse_config_dict(
self,
config_dict: Dict[str, Any],
config_dir_path: Optional[str] = None,
data_dir_path: Optional[str] = None,
) -> None:
"""Read the information from the config dict into this Config object.
Args:
config_dict (dict): Configuration data, as read from the yaml
config_dict: Configuration data, as read from the yaml
config_dir_path (str): The path where the config files are kept. Used to
config_dir_path: The path where the config files are kept. Used to
create filenames for things like the log config and the signing key.
data_dir_path (str): The path where the data files are kept. Used to create
data_dir_path: The path where the data files are kept. Used to create
filenames for things like the database and media store.
"""
self.invoke_all(
@@ -699,17 +725,20 @@ class RootConfig:
data_dir_path=data_dir_path,
)
def generate_missing_files(self, config_dict, config_dir_path):
def generate_missing_files(
self, config_dict: Dict[str, Any], config_dir_path: str
) -> None:
self.invoke_all("generate_files", config_dict, config_dir_path)
def read_config_files(config_files):
def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
"""Read the config files into a dict
Args:
config_files (iterable[str]): A list of the config files to read
config_files: A list of the config files to read
Returns: dict
Returns:
The configuration dictionary.
"""
specified_config = {}
for config_file in config_files:
@@ -733,17 +762,17 @@ def read_config_files(config_files):
return specified_config
def find_config_files(search_paths):
def find_config_files(search_paths: List[str]) -> List[str]:
"""Finds config files using a list of search paths. If a path is a file
then that file path is added to the list. If a search path is a directory
then all the "*.yaml" files in that directory are added to the list in
sorted order.
Args:
search_paths(list(str)): A list of paths to search.
search_paths: A list of paths to search.
Returns:
list(str): A list of file paths.
A list of file paths.
"""
config_files = []
@@ -777,7 +806,7 @@ def find_config_files(search_paths):
return config_files
@attr.s
@attr.s(auto_attribs=True)
class ShardedWorkerHandlingConfig:
"""Algorithm for choosing which instance is responsible for handling some
sharded work.
@@ -787,7 +816,7 @@ class ShardedWorkerHandlingConfig:
below).
"""
instances = attr.ib(type=List[str])
instances: List[str]
def should_handle(self, instance_name: str, key: str) -> bool:
"""Whether this instance is responsible for handling the given key."""

View File

@@ -1,4 +1,18 @@
from typing import Any, Iterable, List, Optional
import argparse
from typing import (
Any,
Dict,
Iterable,
List,
MutableMapping,
Optional,
Tuple,
Type,
TypeVar,
Union,
)
import jinja2
from synapse.config import (
account_validity,
@@ -19,6 +33,7 @@ from synapse.config import (
logger,
metrics,
modules,
oembed,
oidc,
password_auth_providers,
push,
@@ -27,6 +42,7 @@ from synapse.config import (
registration,
repository,
retention,
room,
room_directory,
saml2,
server,
@@ -51,7 +67,9 @@ MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
MISSING_REPORT_STATS_SPIEL: str
MISSING_SERVER_NAME: str
def path_exists(file_path: str): ...
def path_exists(file_path: str) -> bool: ...
TRootConfig = TypeVar("TRootConfig", bound="RootConfig")
class RootConfig:
server: server.ServerConfig
@@ -61,6 +79,7 @@ class RootConfig:
logging: logger.LoggingConfig
ratelimiting: ratelimiting.RatelimitConfig
media: repository.ContentRepositoryConfig
oembed: oembed.OembedConfig
captcha: captcha.CaptchaConfig
voip: voip.VoipConfig
registration: registration.RegistrationConfig
@@ -80,6 +99,7 @@ class RootConfig:
authproviders: password_auth_providers.PasswordAuthProviderConfig
push: push.PushConfig
spamchecker: spam_checker.SpamCheckerConfig
room: room.RoomConfig
groups: groups.GroupsConfig
userdirectory: user_directory.UserDirectoryConfig
consent: consent.ConsentConfig
@@ -87,72 +107,85 @@ class RootConfig:
servernotices: server_notices.ServerNoticesConfig
roomdirectory: room_directory.RoomDirectoryConfig
thirdpartyrules: third_party_event_rules.ThirdPartyRulesConfig
tracer: tracer.TracerConfig
tracing: tracer.TracerConfig
redis: redis.RedisConfig
modules: modules.ModulesConfig
caches: cache.CacheConfig
federation: federation.FederationConfig
retention: retention.RetentionConfig
config_classes: List = ...
config_classes: List[Type["Config"]] = ...
def __init__(self) -> None: ...
def invoke_all(self, func_name: str, *args: Any, **kwargs: Any): ...
def invoke_all(
self, func_name: str, *args: Any, **kwargs: Any
) -> MutableMapping[str, Any]: ...
@classmethod
def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: Any) -> None: ...
def __getattr__(self, item: str): ...
def parse_config_dict(
self,
config_dict: Any,
config_dir_path: Optional[Any] = ...,
data_dir_path: Optional[Any] = ...,
config_dict: Dict[str, Any],
config_dir_path: Optional[str] = ...,
data_dir_path: Optional[str] = ...,
) -> None: ...
read_config: Any = ...
def generate_config(
self,
config_dir_path: str,
data_dir_path: str,
server_name: str,
generate_secrets: bool = ...,
report_stats: Optional[str] = ...,
report_stats: Optional[bool] = ...,
open_private_ports: bool = ...,
listeners: Optional[Any] = ...,
database_conf: Optional[Any] = ...,
tls_certificate_path: Optional[str] = ...,
tls_private_key_path: Optional[str] = ...,
): ...
) -> str: ...
@classmethod
def load_or_generate_config(cls, description: Any, argv: Any): ...
def load_or_generate_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> Optional[TRootConfig]: ...
@classmethod
def load_config(cls, description: Any, argv: Any): ...
def load_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> TRootConfig: ...
@classmethod
def add_arguments_to_parser(cls, config_parser: Any) -> None: ...
def add_arguments_to_parser(
cls, config_parser: argparse.ArgumentParser
) -> None: ...
@classmethod
def load_config_with_parser(cls, parser: Any, argv: Any): ...
def load_config_with_parser(
cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
) -> Tuple[TRootConfig, argparse.Namespace]: ...
def generate_missing_files(
self, config_dict: dict, config_dir_path: str
) -> None: ...
class Config:
root: RootConfig
default_template_dir: str
def __init__(self, root_config: Optional[RootConfig] = ...) -> None: ...
def __getattr__(self, item: str, from_root: bool = ...): ...
@staticmethod
def parse_size(value: Any): ...
def parse_size(value: Union[str, int]) -> int: ...
@staticmethod
def parse_duration(value: Any): ...
def parse_duration(value: Union[str, int]) -> int: ...
@staticmethod
def abspath(file_path: Optional[str]): ...
def abspath(file_path: Optional[str]) -> str: ...
@classmethod
def path_exists(cls, file_path: str): ...
def path_exists(cls, file_path: str) -> bool: ...
@classmethod
def check_file(cls, file_path: str, config_name: str): ...
def check_file(cls, file_path: str, config_name: str) -> str: ...
@classmethod
def ensure_directory(cls, dir_path: str): ...
def ensure_directory(cls, dir_path: str) -> str: ...
@classmethod
def read_file(cls, file_path: str, config_name: str): ...
def read_file(cls, file_path: str, config_name: str) -> str: ...
def read_template(self, filenames: str) -> jinja2.Template: ...
def read_templates(
self,
filenames: List[str],
custom_template_directories: Optional[Iterable[str]] = None,
) -> List[jinja2.Template]: ...
def read_config_files(config_files: List[str]): ...
def find_config_files(search_paths: List[str]): ...
def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]: ...
def find_config_files(search_paths: List[str]) -> List[str]: ...
class ShardedWorkerHandlingConfig:
instances: List[str]

View File

@@ -15,7 +15,7 @@
import os
import re
import threading
from typing import Callable, Dict
from typing import Callable, Dict, Optional
from synapse.python_dependencies import DependencyException, check_requirements
@@ -217,7 +217,7 @@ class CacheConfig(Config):
expiry_time = cache_config.get("expiry_time")
if expiry_time:
self.expiry_time_msec = self.parse_duration(expiry_time)
self.expiry_time_msec: Optional[int] = self.parse_duration(expiry_time)
else:
self.expiry_time_msec = None

View File

@@ -137,33 +137,14 @@ class EmailConfig(Config):
if self.root.registration.account_threepid_delegate_email
else ThreepidBehaviour.LOCAL
)
# Prior to Synapse v1.4.0, there was another option that defined whether Synapse would
# use an identity server to password reset tokens on its behalf. We now warn the user
# if they have this set and tell them to use the updated option, while using a default
# identity server in the process.
self.using_identity_server_from_trusted_list = False
if (
not self.root.registration.account_threepid_delegate_email
and config.get("trust_identity_server_for_password_resets", False) is True
):
# Use the first entry in self.trusted_third_party_id_servers instead
if self.trusted_third_party_id_servers:
# XXX: It's a little confusing that account_threepid_delegate_email is modified
# both in RegistrationConfig and here. We should factor this bit out
first_trusted_identity_server = self.trusted_third_party_id_servers[0]
# trusted_third_party_id_servers does not contain a scheme whereas
# account_threepid_delegate_email is expected to. Presume https
self.root.registration.account_threepid_delegate_email = (
"https://" + first_trusted_identity_server
)
self.using_identity_server_from_trusted_list = True
else:
raise ConfigError(
"Attempted to use an identity server from"
'"trusted_third_party_id_servers" but it is empty.'
)
if config.get("trust_identity_server_for_password_resets"):
raise ConfigError(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
"details and update your config file."
)
self.local_threepid_handling_disabled_due_to_email_config = False
if (

View File

@@ -31,6 +31,8 @@ class JWTConfig(Config):
self.jwt_secret = jwt_config["secret"]
self.jwt_algorithm = jwt_config["algorithm"]
self.jwt_subject_claim = jwt_config.get("subject_claim", "sub")
# The issuer and audiences are optional; if provided, it is asserted
# that the claims exist on the JWT.
self.jwt_issuer = jwt_config.get("issuer")
@@ -46,6 +48,7 @@ class JWTConfig(Config):
self.jwt_enabled = False
self.jwt_secret = None
self.jwt_algorithm = None
self.jwt_subject_claim = None
self.jwt_issuer = None
self.jwt_audiences = None
@@ -88,6 +91,12 @@ class JWTConfig(Config):
#
#algorithm: "provided-by-your-issuer"
# Name of the claim containing a unique identifier for the user.
#
# Optional, defaults to `sub`.
#
#subject_claim: "sub"
# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and
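A rough sketch of what the new subject_claim option means for login, assuming PyJWT (the library behind Synapse's JWT support) and illustrative secret and claim names:

import jwt  # PyJWT

secret, algorithm, subject_claim = "shared-secret", "HS256", "preferred_username"

token = jwt.encode({"preferred_username": "alice"}, secret, algorithm=algorithm)
claims = jwt.decode(token, secret, algorithms=[algorithm])

# The user's localpart now comes from the configured claim, not a hard-coded "sub".
assert claims.get(subject_claim) == "alice"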

View File

@@ -16,6 +16,7 @@
import hashlib
import logging
import os
from typing import Any, Dict
import attr
import jsonschema
@@ -312,7 +313,7 @@ class KeyConfig(Config):
)
return keys
def generate_files(self, config, config_dir_path):
def generate_files(self, config: Dict[str, Any], config_dir_path: str) -> None:
if "signing_key" in config:
return

View File

@@ -18,7 +18,7 @@ import os
import sys
import threading
from string import Template
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Any, Dict
import yaml
from zope.interface import implementer
@@ -185,7 +185,7 @@ class LoggingConfig(Config):
help=argparse.SUPPRESS,
)
def generate_files(self, config, config_dir_path):
def generate_files(self, config: Dict[str, Any], config_dir_path: str) -> None:
log_config = config.get("log_config")
if log_config and not os.path.exists(log_config):
log_file = self.abspath("homeserver.log")

View File

@@ -39,9 +39,7 @@ class RegistrationConfig(Config):
self.registration_shared_secret = config.get("registration_shared_secret")
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
self.trusted_third_party_id_servers = config.get(
"trusted_third_party_id_servers", ["matrix.org", "vector.im"]
)
account_threepid_delegates = config.get("account_threepid_delegates") or {}
self.account_threepid_delegate_email = account_threepid_delegates.get("email")
self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
@@ -114,25 +112,32 @@ class RegistrationConfig(Config):
session_lifetime = self.parse_duration(session_lifetime)
self.session_lifetime = session_lifetime
# The `access_token_lifetime` applies for tokens that can be renewed
# The `refreshable_access_token_lifetime` applies for tokens that can be renewed
# using a refresh token, as per MSC2918. If it is `None`, the refresh
# token mechanism is disabled.
#
# Since it is incompatible with the `session_lifetime` mechanism, it is set to
# `None` by default if a `session_lifetime` is set.
access_token_lifetime = config.get(
"access_token_lifetime", "5m" if session_lifetime is None else None
refreshable_access_token_lifetime = config.get(
"refreshable_access_token_lifetime",
"5m" if session_lifetime is None else None,
)
if access_token_lifetime is not None:
access_token_lifetime = self.parse_duration(access_token_lifetime)
self.access_token_lifetime = access_token_lifetime
if refreshable_access_token_lifetime is not None:
refreshable_access_token_lifetime = self.parse_duration(
refreshable_access_token_lifetime
)
self.refreshable_access_token_lifetime = refreshable_access_token_lifetime
if session_lifetime is not None and access_token_lifetime is not None:
if (
session_lifetime is not None
and refreshable_access_token_lifetime is not None
):
raise ConfigError(
"The refresh token mechanism is incompatible with the "
"`session_lifetime` option. Consider disabling the "
"`session_lifetime` option or disabling the refresh token "
"mechanism by removing the `access_token_lifetime` option."
"mechanism by removing the `refreshable_access_token_lifetime` "
"option."
)
# The fallback template used for authenticating using a registration token
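Putting the rename together, a hypothetical homeserver.yaml fragment using the new option name (the 5m default comes from the code above; the two options are mutually exclusive, as the ConfigError enforces):

refreshable_access_token_lifetime: 5m  # access tokens renewable via refresh tokens
# session_lifetime: 24h                # must stay unset while refresh tokens are in use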

View File

@@ -1,4 +1,5 @@
# Copyright 2018 New Vector Ltd
# Copyright 2021 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,6 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List
from synapse.types import JsonDict
from synapse.util import glob_to_regex
from ._base import Config, ConfigError
@@ -20,7 +24,7 @@ from ._base import Config, ConfigError
class RoomDirectoryConfig(Config):
section = "roomdirectory"
def read_config(self, config, **kwargs):
def read_config(self, config, **kwargs) -> None:
self.enable_room_list_search = config.get("enable_room_list_search", True)
alias_creation_rules = config.get("alias_creation_rules")
@@ -47,7 +51,7 @@ class RoomDirectoryConfig(Config):
_RoomDirectoryRule("room_list_publication_rules", {"action": "allow"})
]
def generate_config_section(self, config_dir_path, server_name, **kwargs):
def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
return """
# Uncomment to disable searching the public room list. When disabled
# blocks searching local and remote room lists for local and remote
@@ -113,16 +117,16 @@ class RoomDirectoryConfig(Config):
# action: allow
"""
def is_alias_creation_allowed(self, user_id, room_id, alias):
def is_alias_creation_allowed(self, user_id: str, room_id: str, alias: str) -> bool:
"""Checks if the given user is allowed to create the given alias
Args:
user_id (str)
room_id (str)
alias (str)
user_id: The user to check.
room_id: The room ID for the alias.
alias: The alias being created.
Returns:
boolean: True if user is allowed to create the alias
True if user is allowed to create the alias
"""
for rule in self._alias_creation_rules:
if rule.matches(user_id, room_id, [alias]):
@@ -130,16 +134,18 @@ class RoomDirectoryConfig(Config):
return False
def is_publishing_room_allowed(self, user_id, room_id, aliases):
def is_publishing_room_allowed(
self, user_id: str, room_id: str, aliases: List[str]
) -> bool:
"""Checks if the given user is allowed to publish the room
Args:
user_id (str)
room_id (str)
aliases (list[str]): any local aliases associated with the room
user_id: The user ID publishing the room.
room_id: The room being published.
aliases: any local aliases associated with the room
Returns:
boolean: True if user can publish room
True if user can publish room
"""
for rule in self._room_list_publication_rules:
if rule.matches(user_id, room_id, aliases):
@@ -153,11 +159,11 @@ class _RoomDirectoryRule:
creating an alias or publishing a room.
"""
def __init__(self, option_name, rule):
def __init__(self, option_name: str, rule: JsonDict):
"""
Args:
option_name (str): Name of the config option this rule belongs to
rule (dict): The rule as specified in the config
option_name: Name of the config option this rule belongs to
rule: The rule as specified in the config
"""
action = rule["action"]
@@ -181,18 +187,18 @@ class _RoomDirectoryRule:
except Exception as e:
raise ConfigError("Failed to parse glob into regex") from e
def matches(self, user_id, room_id, aliases):
def matches(self, user_id: str, room_id: str, aliases: List[str]) -> bool:
"""Tests if this rule matches the given user_id, room_id and aliases.
Args:
user_id (str)
room_id (str)
aliases (list[str]): The associated aliases to the room. Will be a
single element for testing alias creation, and can be empty for
testing room publishing.
user_id: The user ID to check.
room_id: The room ID to check.
aliases: The associated aliases to the room. Will be a single element
for testing alias creation, and can be empty for testing room
publishing.
Returns:
boolean
True if the rule matches.
"""
# Note: The regexes are anchored at both ends
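A small sketch of how such a rule evaluates, using the _RoomDirectoryRule class from this diff with a hypothetical allow rule (glob semantics per glob_to_regex; regexes anchored as noted above):

from synapse.config.room_directory import _RoomDirectoryRule

rule = _RoomDirectoryRule(
    "alias_creation_rules",
    {"user_id": "@admin:*", "room_id": "*", "alias": "#official-*", "action": "allow"},
)

# One alias for alias-creation checks; the list may be empty for publication checks.
assert rule.matches("@admin:example.com", "!abc:example.com", ["#official-news"])
assert not rule.matches("@mallory:example.com", "!abc:example.com", ["#official-news"])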

View File

@@ -421,7 +421,7 @@ class ServerConfig(Config):
# before redacting them.
redaction_retention_period = config.get("redaction_retention_period", "7d")
if redaction_retention_period is not None:
self.redaction_retention_period = self.parse_duration(
self.redaction_retention_period: Optional[int] = self.parse_duration(
redaction_retention_period
)
else:
@@ -430,7 +430,7 @@ class ServerConfig(Config):
# How long to keep entries in the `users_ips` table.
user_ips_max_age = config.get("user_ips_max_age", "28d")
if user_ips_max_age is not None:
self.user_ips_max_age = self.parse_duration(user_ips_max_age)
self.user_ips_max_age: Optional[int] = self.parse_duration(user_ips_max_age)
else:
self.user_ips_max_age = None

View File

@@ -14,7 +14,6 @@
import logging
import os
from datetime import datetime
from typing import List, Optional, Pattern
from OpenSSL import SSL, crypto
@@ -133,55 +132,6 @@ class TlsConfig(Config):
self.tls_certificate: Optional[crypto.X509] = None
self.tls_private_key: Optional[crypto.PKey] = None
def is_disk_cert_valid(self, allow_self_signed=True):
"""
Is the certificate we have on disk valid, and if so, for how long?
Args:
allow_self_signed (bool): Should we allow the certificate we
read to be self signed?
Returns:
int: Days remaining of certificate validity.
None: No certificate exists.
"""
if not os.path.exists(self.tls_certificate_file):
return None
try:
with open(self.tls_certificate_file, "rb") as f:
cert_pem = f.read()
except Exception as e:
raise ConfigError(
"Failed to read existing certificate file %s: %s"
% (self.tls_certificate_file, e)
)
try:
tls_certificate = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
except Exception as e:
raise ConfigError(
"Failed to parse existing certificate file %s: %s"
% (self.tls_certificate_file, e)
)
if not allow_self_signed:
if tls_certificate.get_subject() == tls_certificate.get_issuer():
raise ValueError(
"TLS Certificate is self signed, and this is not permitted"
)
# YYYYMMDDhhmmssZ -- in UTC
expiry_data = tls_certificate.get_notAfter()
if expiry_data is None:
raise ValueError(
"TLS Certificate has no expiry date, and this is not permitted"
)
expires_on = datetime.strptime(expiry_data.decode("ascii"), "%Y%m%d%H%M%SZ")
now = datetime.utcnow()
days_remaining = (expires_on - now).days
return days_remaining
def read_certificate_from_disk(self):
"""
Read the certificates and private key from disk.
@@ -263,8 +213,8 @@ class TlsConfig(Config):
#
#federation_certificate_verification_whitelist:
# - lon.example.com
# - *.domain.com
# - *.onion
# - "*.domain.com"
# - "*.onion"
# List of custom certificate authorities for federation traffic.
#
@@ -295,7 +245,7 @@ class TlsConfig(Config):
cert_path = self.tls_certificate_file
logger.info("Loading TLS certificate from %s", cert_path)
cert_pem = self.read_file(cert_path, "tls_certificate_path")
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem.encode())
return cert

View File

@@ -53,8 +53,8 @@ class UserDirectoryConfig(Config):
# indexes were (re)built was before Synapse 1.44, you'll have to
# rebuild the indexes in order to search through all known users.
# These indexes are built the first time Synapse starts; admins can
# manually trigger a rebuild following the instructions at
# https://matrix-org.github.io/synapse/latest/user_directory.html
# manually trigger a rebuild via API following the instructions at
# https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
#
# Uncomment to return search results containing all known users, even if that
# user does not share a room with the requester.

View File

@@ -1,5 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017, 2018 New Vector Ltd
# Copyright 2014-2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -120,16 +119,6 @@ class VerifyJsonRequest:
key_ids=key_ids,
)
def to_fetch_key_request(self) -> "_FetchKeyRequest":
"""Create a key fetch request for all keys needed to satisfy the
verification request.
"""
return _FetchKeyRequest(
server_name=self.server_name,
minimum_valid_until_ts=self.minimum_valid_until_ts,
key_ids=self.key_ids,
)
class KeyLookupError(ValueError):
pass
@@ -179,8 +168,22 @@ class Keyring:
clock=hs.get_clock(),
process_batch_callback=self._inner_fetch_key_requests,
)
self.verify_key = get_verify_key(hs.signing_key)
self.hostname = hs.hostname
self._hostname = hs.hostname
# build a FetchKeyResult for each of our own keys, to shortcircuit the
# fetcher.
self._local_verify_keys: Dict[str, FetchKeyResult] = {}
for key_id, key in hs.config.key.old_signing_keys.items():
self._local_verify_keys[key_id] = FetchKeyResult(
verify_key=key, valid_until_ts=key.expired_ts
)
vk = get_verify_key(hs.signing_key)
self._local_verify_keys[f"{vk.alg}:{vk.version}"] = FetchKeyResult(
verify_key=vk,
valid_until_ts=2 ** 63, # fake future timestamp
)
async def verify_json_for_server(
self,
@@ -267,22 +270,32 @@ class Keyring:
Codes.UNAUTHORIZED,
)
# If we are the originating server don't fetch verify key for self over federation
if verify_request.server_name == self.hostname:
await self._process_json(self.verify_key, verify_request)
return
found_keys: Dict[str, FetchKeyResult] = {}
# Add the keys we need to verify to the queue for retrieval. We queue
# up requests for the same server so we don't end up with many in flight
# requests for the same keys.
key_request = verify_request.to_fetch_key_request()
found_keys_by_server = await self._server_queue.add_to_queue(
key_request, key=verify_request.server_name
)
# If we are the originating server, short-circuit the key-fetch for any keys
# we already have
if verify_request.server_name == self._hostname:
for key_id in verify_request.key_ids:
if key_id in self._local_verify_keys:
found_keys[key_id] = self._local_verify_keys[key_id]
# Since we batch up requests the returned set of keys may contain keys
# from other servers, so we pull out only the ones we care about.s
found_keys = found_keys_by_server.get(verify_request.server_name, {})
key_ids_to_find = set(verify_request.key_ids) - found_keys.keys()
if key_ids_to_find:
# Add the keys we need to verify to the queue for retrieval. We queue
# up requests for the same server so we don't end up with many in flight
# requests for the same keys.
key_request = _FetchKeyRequest(
server_name=verify_request.server_name,
minimum_valid_until_ts=verify_request.minimum_valid_until_ts,
key_ids=list(key_ids_to_find),
)
found_keys_by_server = await self._server_queue.add_to_queue(
key_request, key=verify_request.server_name
)
# Since we batch up requests the returned set of keys may contain keys
# from other servers, so we pull out only the ones we care about.
found_keys.update(found_keys_by_server.get(verify_request.server_name, {}))
# Verify each signature we got valid keys for, raising if we can't
# verify any of them.

View File

@@ -1,4 +1,5 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -392,15 +393,16 @@ class EventClientSerializer:
self,
event: Union[JsonDict, EventBase],
time_now: int,
bundle_aggregations: bool = True,
bundle_relations: bool = True,
**kwargs: Any,
) -> JsonDict:
"""Serializes a single event.
Args:
event
event: The event being serialized.
time_now: The current time in milliseconds
bundle_aggregations: Whether to bundle in related events
bundle_relations: Whether to include the bundled relations for this
event.
**kwargs: Arguments to pass to `serialize_event`
Returns:
@@ -410,77 +412,93 @@ class EventClientSerializer:
if not isinstance(event, EventBase):
return event
event_id = event.event_id
serialized_event = serialize_event(event, time_now, **kwargs)
# If MSC1849 is enabled then we need to look if there are any relations
# we need to bundle in with the event.
# Do not bundle relations if the event has been redacted
if not event.internal_metadata.is_redacted() and (
self._msc1849_enabled and bundle_aggregations
self._msc1849_enabled and bundle_relations
):
annotations = await self.store.get_aggregation_groups_for_event(event_id)
references = await self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f"
)
if annotations.chunk:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.ANNOTATION] = annotations.to_dict()
if references.chunk:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.REFERENCE] = references.to_dict()
edit = None
if event.type == EventTypes.Message:
edit = await self.store.get_applicable_edit(event_id)
if edit:
# If there is an edit replace the content, preserving existing
# relations.
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
edit_content = edit.content.copy()
# Unfreeze the event content if necessary, so that we may modify it below
edit_content = unfreeze(edit_content)
serialized_event["content"] = edit_content.get("m.new_content", {})
# Check for existing relations
relations = event.content.get("m.relates_to")
if relations:
# Keep the relations, ensuring we use a dict copy of the original
serialized_event["content"]["m.relates_to"] = relations.copy()
else:
serialized_event["content"].pop("m.relates_to", None)
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
"sender": edit.sender,
}
# If this event is the start of a thread, include a summary of the replies.
if self._msc3440_enabled:
(
thread_count,
latest_thread_event,
) = await self.store.get_thread_summary(event_id)
if latest_thread_event:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.THREAD] = {
# Don't bundle aggregations as this could recurse forever.
"latest_event": await self.serialize_event(
latest_thread_event, time_now, bundle_aggregations=False
),
"count": thread_count,
}
await self._injected_bundled_relations(event, time_now, serialized_event)
return serialized_event
async def _injected_bundled_relations(
self, event: EventBase, time_now: int, serialized_event: JsonDict
) -> None:
"""Potentially injects bundled relations into the unsigned portion of the serialized event.
Args:
event: The event being serialized.
time_now: The current time in milliseconds
serialized_event: The serialized event which may be modified.
"""
event_id = event.event_id
# The bundled relations to include.
relations = {}
annotations = await self.store.get_aggregation_groups_for_event(event_id)
if annotations.chunk:
relations[RelationTypes.ANNOTATION] = annotations.to_dict()
references = await self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f"
)
if references.chunk:
relations[RelationTypes.REFERENCE] = references.to_dict()
edit = None
if event.type == EventTypes.Message:
edit = await self.store.get_applicable_edit(event_id)
if edit:
# If there is an edit replace the content, preserving existing
# relations.
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
edit_content = edit.content.copy()
# Unfreeze the event content if necessary, so that we may modify it below
edit_content = unfreeze(edit_content)
serialized_event["content"] = edit_content.get("m.new_content", {})
# Check for existing relations
relates_to = event.content.get("m.relates_to")
if relates_to:
# Keep the relations, ensuring we use a dict copy of the original
serialized_event["content"]["m.relates_to"] = relates_to.copy()
else:
serialized_event["content"].pop("m.relates_to", None)
relations[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
"sender": edit.sender,
}
# If this event is the start of a thread, include a summary of the replies.
if self._msc3440_enabled:
(
thread_count,
latest_thread_event,
) = await self.store.get_thread_summary(event_id)
if latest_thread_event:
relations[RelationTypes.THREAD] = {
# Don't bundle relations as this could recurse forever.
"latest_event": await self.serialize_event(
latest_thread_event, time_now, bundle_relations=False
),
"count": thread_count,
}
# If any bundled relations were found, include them.
if relations:
serialized_event["unsigned"].setdefault("m.relations", {}).update(relations)
async def serialize_events(
self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any
) -> List[JsonDict]:
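The refactoring leaves the wire format untouched; for reference, a schematic of the unsigned section the helper builds when annotations, an edit, and a thread summary are all present (values illustrative; the thread key shown is MSC3440's unstable identifier, as used at this point):

unsigned = {
    "m.relations": {
        "m.annotation": {"chunk": [{"type": "m.reaction", "key": "👍", "count": 3}]},
        "m.replace": {
            "event_id": "$edit",
            "origin_server_ts": 1637000000000,
            "sender": "@alice:example.com",
        },
        "io.element.thread": {"latest_event": {"event_id": "$reply"}, "count": 2},
    }
}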

View File

@@ -40,6 +40,8 @@ from typing import TYPE_CHECKING, Optional, Tuple
from signedjson.sign import sign_json
from twisted.internet.defer import Deferred
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import JsonDict, get_domain_from_id
@@ -166,7 +168,7 @@ class GroupAttestionRenewer:
return {}
def _start_renew_attestations(self) -> None:
def _start_renew_attestations(self) -> "Deferred[None]":
return run_as_background_process("renew_attestations", self._renew_attestations)
async def _renew_attestations(self) -> None:

View File

@@ -790,10 +790,10 @@ class AuthHandler:
(
new_refresh_token,
new_refresh_token_id,
) = await self.get_refresh_token_for_user_id(
) = await self.create_refresh_token_for_user_id(
user_id=existing_token.user_id, device_id=existing_token.device_id
)
access_token = await self.get_access_token_for_user_id(
access_token = await self.create_access_token_for_user_id(
user_id=existing_token.user_id,
device_id=existing_token.device_id,
valid_until_ms=valid_until_ms,
@@ -832,7 +832,7 @@ class AuthHandler:
return True
async def get_refresh_token_for_user_id(
async def create_refresh_token_for_user_id(
self,
user_id: str,
device_id: str,
@@ -855,7 +855,7 @@ class AuthHandler:
)
return refresh_token, refresh_token_id
async def get_access_token_for_user_id(
async def create_access_token_for_user_id(
self,
user_id: str,
device_id: Optional[str],
@@ -1828,13 +1828,6 @@ def load_single_legacy_password_auth_provider(
logger.error("Error while initializing %r: %s", module, e)
raise
# The known hooks. If a module implements a method who's name appears in this set
# we'll want to register it
password_auth_provider_methods = {
"check_3pid_auth",
"on_logged_out",
}
# All methods that the module provides should be async, but this wasn't enforced
# in the old module system, so we wrap them if needed
def async_wrapper(f: Optional[Callable]) -> Optional[Callable[..., Awaitable]]:
@@ -1919,11 +1912,14 @@ def load_single_legacy_password_auth_provider(
return run
# populate hooks with the implemented methods, wrapped with async_wrapper
hooks = {
hook: async_wrapper(getattr(provider, hook, None))
for hook in password_auth_provider_methods
}
# If the module has these methods implemented, then we pull them out
# and register them as hooks.
check_3pid_auth_hook: Optional[CHECK_3PID_AUTH_CALLBACK] = async_wrapper(
getattr(provider, "check_3pid_auth", None)
)
on_logged_out_hook: Optional[ON_LOGGED_OUT_CALLBACK] = async_wrapper(
getattr(provider, "on_logged_out", None)
)
supported_login_types = {}
# call get_supported_login_types and add that to the dict
@@ -1950,7 +1946,11 @@ def load_single_legacy_password_auth_provider(
# need to use a tuple here for ("password",) not a list since lists aren't hashable
auth_checkers[(LoginType.PASSWORD, ("password",))] = check_password
api.register_password_auth_provider_callbacks(hooks, auth_checkers=auth_checkers)
api.register_password_auth_provider_callbacks(
check_3pid_auth=check_3pid_auth_hook,
on_logged_out=on_logged_out_hook,
auth_checkers=auth_checkers,
)
CHECK_3PID_AUTH_CALLBACK = Callable[

View File

@@ -124,7 +124,7 @@ class EventStreamHandler:
as_client_event=as_client_event,
# We don't bundle "live" events, as otherwise clients
# will end up double counting annotations.
bundle_aggregations=False,
bundle_relations=False,
)
chunk = {

View File

@@ -464,15 +464,6 @@ class IdentityHandler:
if next_link:
params["next_link"] = next_link
if self.hs.config.email.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
"details and update your config file."
)
try:
data = await self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/email/requestToken",
@@ -517,15 +508,6 @@ class IdentityHandler:
if next_link:
params["next_link"] = next_link
if self.hs.config.email.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
"details and update your config file."
)
try:
data = await self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/msisdn/requestToken",

View File

@@ -252,7 +252,7 @@ class MessageHandler:
now,
# We don't bother bundling aggregations in when asked for state
# events, as clients won't use them.
bundle_aggregations=False,
bundle_relations=False,
)
return events
@@ -1001,13 +1001,52 @@ class EventCreationHandler:
)
self.validator.validate_new(event, self.config)
await self._validate_event_relation(event)
logger.debug("Created event %s", event.event_id)
return event, context
async def _validate_event_relation(self, event: EventBase) -> None:
"""
Ensure the relation data on a new event is not bogus.
Args:
event: The event being created.
Raises:
SynapseError if the event is invalid.
"""
relation = event.content.get("m.relates_to")
if not relation:
return
relation_type = relation.get("rel_type")
if not relation_type:
return
# Ensure the parent is real.
relates_to = relation.get("event_id")
if not relates_to:
return
parent_event = await self.store.get_event(relates_to, allow_none=True)
if parent_event:
# And in the same room.
if parent_event.room_id != event.room_id:
raise SynapseError(400, "Relations must be in the same room")
else:
# There must be some reason that the client knows the event exists,
# see if there are existing relations. If so, assume everything is fine.
if not await self.store.event_is_target_of_relation(relates_to):
# Otherwise, the client can't know about the parent event!
raise SynapseError(400, "Can't send relation to unknown event")
# If this event is an annotation then we check that the sender
# can't annotate the same way twice (e.g. stops users from liking an
# event multiple times).
relation = event.content.get("m.relates_to", {})
if relation.get("rel_type") == RelationTypes.ANNOTATION:
relates_to = relation["event_id"]
if relation_type == RelationTypes.ANNOTATION:
aggregation_key = relation["key"]
already_exists = await self.store.has_user_annotated_event(
@@ -1016,9 +1055,12 @@ class EventCreationHandler:
if already_exists:
raise SynapseError(400, "Can't send same reaction twice")
logger.debug("Created event %s", event.event_id)
return event, context
# Don't attempt to start a thread if the parent event is a relation.
elif relation_type == RelationTypes.THREAD:
if await self.store.event_includes_relation(relates_to):
raise SynapseError(
400, "Cannot start threads from an event with a relation"
)
@measure_func("handle_new_client_event")
async def handle_new_client_event(
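Concretely, the new validation rejects event contents like the following (event IDs hypothetical; the thread rel_type shown is MSC3440's unstable identifier):

# Rejected: the referenced parent lives in a different room.
{"m.relates_to": {"rel_type": "m.annotation", "event_id": "$in_other_room", "key": "👍"}}

# Rejected: the sender has already annotated the parent with the same key.
{"m.relates_to": {"rel_type": "m.annotation", "event_id": "$parent", "key": "👍"}}

# Rejected: starting a thread from an event that itself carries a relation.
{"m.relates_to": {"rel_type": "io.element.thread", "event_id": "$already_related"}}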

View File

@@ -116,7 +116,9 @@ class RegistrationHandler:
self.pusher_pool = hs.get_pusherpool()
self.session_lifetime = hs.config.registration.session_lifetime
self.access_token_lifetime = hs.config.registration.access_token_lifetime
self.refreshable_access_token_lifetime = (
hs.config.registration.refreshable_access_token_lifetime
)
init_counters_for_auth_provider("")
@@ -813,13 +815,15 @@ class RegistrationHandler:
(
refresh_token,
refresh_token_id,
) = await self._auth_handler.get_refresh_token_for_user_id(
) = await self._auth_handler.create_refresh_token_for_user_id(
user_id,
device_id=registered_device_id,
)
valid_until_ms = self.clock.time_msec() + self.access_token_lifetime
valid_until_ms = (
self.clock.time_msec() + self.refreshable_access_token_lifetime
)
access_token = await self._auth_handler.get_access_token_for_user_id(
access_token = await self._auth_handler.create_access_token_for_user_id(
user_id,
device_id=registered_device_id,
valid_until_ms=valid_until_ms,

View File

@@ -775,8 +775,11 @@ class RoomCreationHandler:
raise SynapseError(403, "Room visibility value not allowed.")
if is_public:
room_aliases = []
if room_alias:
room_aliases.append(room_alias.to_string())
if not self.config.roomdirectory.is_publishing_room_allowed(
user_id, room_id, room_alias
user_id, room_id, room_aliases
):
# Let's just return a generic message, as there may be all sorts of
# reasons why we said no. TODO: Allow configurable error messages

View File

@@ -221,6 +221,7 @@ class RoomBatchHandler:
action=membership,
content=event_dict["content"],
outlier=True,
historical=True,
prev_event_ids=[prev_event_id_for_state_chain],
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same
@@ -240,6 +241,7 @@ class RoomBatchHandler:
),
event_dict,
outlier=True,
historical=True,
prev_event_ids=[prev_event_id_for_state_chain],
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same

View File

@@ -268,6 +268,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content: Optional[dict] = None,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
) -> Tuple[str, int]:
"""
Internal membership update function to get an existing event or create
@@ -293,6 +294,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
Returns:
Tuple of event ID and stream ordering position
@@ -337,6 +341,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
auth_event_ids=auth_event_ids,
require_consent=require_consent,
outlier=outlier,
historical=historical,
)
prev_state_ids = await context.get_prev_state_ids()
@@ -433,6 +438,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
new_room: bool = False,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
) -> Tuple[str, int]:
@@ -454,6 +460,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
prev_event_ids: The event IDs to use as the prev events
auth_event_ids:
The event ids to use as the auth_events for the new event.
@@ -487,6 +496,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
new_room=new_room,
require_consent=require_consent,
outlier=outlier,
historical=historical,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
)
@@ -507,6 +517,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
new_room: bool = False,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
) -> Tuple[str, int]:
@@ -530,6 +541,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
outlier: Indicates whether the event is an `outlier`, i.e. if
it's from an arbitrary point and floating in the DAG as
opposed to being inline with the current DAG.
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
prev_event_ids: The event IDs to use as the prev events
auth_event_ids:
The event ids to use as the auth_events for the new event.
@@ -657,6 +671,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content=content,
require_consent=require_consent,
outlier=outlier,
historical=historical,
)
latest_event_ids = await self.store.get_prev_events_for_room(room_id)

View File

@@ -97,7 +97,7 @@ class RoomSummaryHandler:
# If a user tries to fetch the same page multiple times in quick succession,
# only process the first attempt and return its result to subsequent requests.
self._pagination_response_cache: ResponseCache[
Tuple[str, bool, Optional[int], Optional[int], Optional[str]]
Tuple[str, str, bool, Optional[int], Optional[int], Optional[str]]
] = ResponseCache(
hs.get_clock(),
"get_room_hierarchy",
@@ -282,7 +282,14 @@ class RoomSummaryHandler:
# This is due to the pagination process mutating internal state, attempting
# to process multiple requests for the same page will result in errors.
return await self._pagination_response_cache.wrap(
(requested_room_id, suggested_only, max_depth, limit, from_token),
(
requester,
requested_room_id,
suggested_only,
max_depth,
limit,
from_token,
),
self._get_room_hierarchy,
requester,
requested_room_id,

View File

@@ -90,7 +90,7 @@ class FollowerTypingHandler:
self.wheel_timer = WheelTimer(bucket_size=5000)
@wrap_as_background_process("typing._handle_timeouts")
def _handle_timeouts(self) -> None:
async def _handle_timeouts(self) -> None:
logger.debug("Checking for typing timeouts")
now = self.clock.time_msec()
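The def to async def switch above pairs with a change further down in this diff: run_as_background_process now awaits its callable directly instead of going through maybe_awaitable, so anything decorated with @wrap_as_background_process must be a coroutine function. A minimal sketch of why a plain function would now fail (names are illustrative):

import asyncio

async def run(func):
    return await func()  # awaits the result directly, as the new helper does

def sync_cb() -> None: ...
async def async_cb() -> None: ...

asyncio.run(run(async_cb))  # fine
# asyncio.run(run(sync_cb)) # TypeError: object NoneType can't be used in 'await' expression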

View File

@@ -20,10 +20,25 @@ import os
import platform
import threading
import time
from typing import Callable, Dict, Iterable, Mapping, Optional, Tuple, Union
from typing import (
Any,
Callable,
Dict,
Generic,
Iterable,
Mapping,
Optional,
Sequence,
Set,
Tuple,
Type,
TypeVar,
Union,
cast,
)
import attr
from prometheus_client import Counter, Gauge, Histogram
from prometheus_client import CollectorRegistry, Counter, Gauge, Histogram, Metric
from prometheus_client.core import (
REGISTRY,
CounterMetricFamily,
@@ -32,6 +47,7 @@ from prometheus_client.core import (
)
from twisted.internet import reactor
from twisted.internet.base import ReactorBase
from twisted.python.threadpool import ThreadPool
import synapse
@@ -54,7 +70,7 @@ HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
class RegistryProxy:
@staticmethod
def collect():
def collect() -> Iterable[Metric]:
for metric in REGISTRY.collect():
if not metric.name.startswith("__"):
yield metric
@@ -74,7 +90,7 @@ class LaterGauge:
]
)
def collect(self):
def collect(self) -> Iterable[Metric]:
g = GaugeMetricFamily(self.name, self.desc, labels=self.labels)
@@ -93,10 +109,10 @@ class LaterGauge:
yield g
def __attrs_post_init__(self):
def __attrs_post_init__(self) -> None:
self._register()
def _register(self):
def _register(self) -> None:
if self.name in all_gauges.keys():
logger.warning("%s already registered, reregistering" % (self.name,))
REGISTRY.unregister(all_gauges.pop(self.name))
@@ -105,7 +121,12 @@ class LaterGauge:
all_gauges[self.name] = self
class InFlightGauge:
# `MetricsEntry` only makes sense when it is a `Protocol`,
# but `Protocol` can't be used as a `TypeVar` bound.
MetricsEntry = TypeVar("MetricsEntry")
class InFlightGauge(Generic[MetricsEntry]):
"""Tracks number of things (e.g. requests, Measure blocks, etc) in flight
at any given time.
@@ -115,14 +136,19 @@ class InFlightGauge:
callbacks.
Args:
name (str)
desc (str)
labels (list[str])
sub_metrics (list[str]): A list of sub metrics that the callbacks
will update.
name
desc
labels
sub_metrics: A list of sub metrics that the callbacks will update.
"""
def __init__(self, name, desc, labels, sub_metrics):
def __init__(
self,
name: str,
desc: str,
labels: Sequence[str],
sub_metrics: Sequence[str],
):
self.name = name
self.desc = desc
self.labels = labels
@@ -130,19 +156,25 @@ class InFlightGauge:
# Create a class which has the sub_metrics values as attributes, which
# default to 0 on initialization. Used to pass to registered callbacks.
self._metrics_class = attr.make_class(
self._metrics_class: Type[MetricsEntry] = attr.make_class(
"_MetricsEntry", attrs={x: attr.ib(0) for x in sub_metrics}, slots=True
)
# Counts number of in flight blocks for a given set of label values
self._registrations: Dict = {}
self._registrations: Dict[
Tuple[str, ...], Set[Callable[[MetricsEntry], None]]
] = {}
# Protects access to _registrations
self._lock = threading.Lock()
self._register_with_collector()
def register(self, key, callback):
def register(
self,
key: Tuple[str, ...],
callback: Callable[[MetricsEntry], None],
) -> None:
"""Registers that we've entered a new block with labels `key`.
`callback` gets called each time the metrics are collected. The same
@@ -158,13 +190,17 @@ class InFlightGauge:
with self._lock:
self._registrations.setdefault(key, set()).add(callback)
def unregister(self, key, callback):
def unregister(
self,
key: Tuple[str, ...],
callback: Callable[[MetricsEntry], None],
) -> None:
"""Registers that we've exited a block with labels `key`."""
with self._lock:
self._registrations.setdefault(key, set()).discard(callback)
def collect(self):
def collect(self) -> Iterable[Metric]:
"""Called by prometheus client when it reads metrics.
Note: may be called by a separate thread.
@@ -200,7 +236,7 @@ class InFlightGauge:
gauge.add_metric(key, getattr(metrics, name))
yield gauge
def _register_with_collector(self):
def _register_with_collector(self) -> None:
if self.name in all_gauges.keys():
logger.warning("%s already registered, reregistering" % (self.name,))
REGISTRY.unregister(all_gauges.pop(self.name))
@@ -230,7 +266,7 @@ class GaugeBucketCollector:
name: str,
documentation: str,
buckets: Iterable[float],
registry=REGISTRY,
registry: CollectorRegistry = REGISTRY,
):
"""
Args:
@@ -257,12 +293,12 @@ class GaugeBucketCollector:
registry.register(self)
def collect(self):
def collect(self) -> Iterable[Metric]:
# Don't report metrics unless we've already collected some data
if self._metric is not None:
yield self._metric
def update_data(self, values: Iterable[float]):
def update_data(self, values: Iterable[float]) -> None:
"""Update the data to be reported by the metric
The existing data is cleared, and each measurement in the input is assigned
@@ -304,7 +340,7 @@ class GaugeBucketCollector:
class CPUMetrics:
def __init__(self):
def __init__(self) -> None:
ticks_per_sec = 100
try:
# Try and get the system config
@@ -314,7 +350,7 @@ class CPUMetrics:
self.ticks_per_sec = ticks_per_sec
def collect(self):
def collect(self) -> Iterable[Metric]:
if not HAVE_PROC_SELF_STAT:
return
@@ -364,7 +400,7 @@ gc_time = Histogram(
class GCCounts:
def collect(self):
def collect(self) -> Iterable[Metric]:
cm = GaugeMetricFamily("python_gc_counts", "GC object counts", labels=["gen"])
for n, m in enumerate(gc.get_count()):
cm.add_metric([str(n)], m)
@@ -382,7 +418,7 @@ if not running_on_pypy:
class PyPyGCStats:
def collect(self):
def collect(self) -> Iterable[Metric]:
# @stats is a pretty-printer object with __str__() returning a nice table,
# plus some fields that contain data from that table.
@@ -565,7 +601,7 @@ def register_threadpool(name: str, threadpool: ThreadPool) -> None:
class ReactorLastSeenMetric:
def collect(self):
def collect(self) -> Iterable[Metric]:
cm = GaugeMetricFamily(
"python_twisted_reactor_last_seen",
"Seconds since the Twisted reactor was last seen",
@@ -584,9 +620,12 @@ MIN_TIME_BETWEEN_GCS = (1.0, 10.0, 30.0)
_last_gc = [0.0, 0.0, 0.0]
def runUntilCurrentTimer(reactor, func):
F = TypeVar("F", bound=Callable[..., Any])
def runUntilCurrentTimer(reactor: ReactorBase, func: F) -> F:
@functools.wraps(func)
def f(*args, **kwargs):
def f(*args: Any, **kwargs: Any) -> Any:
now = reactor.seconds()
num_pending = 0
@@ -649,7 +688,7 @@ def runUntilCurrentTimer(reactor, func):
return ret
return f
return cast(F, f)
try:
@@ -677,5 +716,5 @@ __all__ = [
"start_http_server",
"LaterGauge",
"InFlightGauge",
"BucketCollector",
"GaugeBucketCollector",
]
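With InFlightGauge now generic in its metrics-entry type, registered callbacks get a typed view of the per-key entry. A usage sketch based only on the signatures shown above (the gauge name, label and sub-metric are illustrative):

from synapse.metrics import InFlightGauge

in_flight = InFlightGauge(
    "synapse_example_in_flight",
    "Example in-flight tracker",
    labels=["name"],
    sub_metrics=["total_elapsed_ms"],
)

def _update(metrics) -> None:
    # Called on each scrape; mutates the attrs-generated entry in place.
    metrics.total_elapsed_ms += 10

key = ("some_label_value",)
in_flight.register(key, _update)    # entered a block with these labels
# ... do the work being tracked ...
in_flight.unregister(key, _update)  # exited the block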

View File

@@ -25,27 +25,25 @@ import math
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
from typing import Dict, List
from typing import Any, Dict, List, Type, Union
from urllib.parse import parse_qs, urlparse
from prometheus_client import REGISTRY
from prometheus_client import REGISTRY, CollectorRegistry
from prometheus_client.core import Sample
from twisted.web.resource import Resource
from twisted.web.server import Request
from synapse.util import caches
CONTENT_TYPE_LATEST = "text/plain; version=0.0.4; charset=utf-8"
INF = float("inf")
MINUS_INF = float("-inf")
def floatToGoString(d):
def floatToGoString(d: Union[int, float]) -> str:
d = float(d)
if d == INF:
if d == math.inf:
return "+Inf"
elif d == MINUS_INF:
elif d == -math.inf:
return "-Inf"
elif math.isnan(d):
return "NaN"
@@ -60,7 +58,7 @@ def floatToGoString(d):
return s
def sample_line(line, name):
def sample_line(line: Sample, name: str) -> str:
if line.labels:
labelstr = "{{{0}}}".format(
",".join(
@@ -82,7 +80,7 @@ def sample_line(line, name):
return "{}{} {}{}\n".format(name, labelstr, floatToGoString(line.value), timestamp)
def generate_latest(registry, emit_help=False):
def generate_latest(registry: CollectorRegistry, emit_help: bool = False) -> bytes:
# Trigger the cache metrics to be rescraped, which updates the common
# metrics but do not produce metrics themselves
@@ -187,7 +185,7 @@ class MetricsHandler(BaseHTTPRequestHandler):
registry = REGISTRY
def do_GET(self):
def do_GET(self) -> None:
registry = self.registry
params = parse_qs(urlparse(self.path).query)
@@ -207,11 +205,11 @@ class MetricsHandler(BaseHTTPRequestHandler):
self.end_headers()
self.wfile.write(output)
def log_message(self, format, *args):
def log_message(self, format: str, *args: Any) -> None:
"""Log nothing."""
@classmethod
def factory(cls, registry):
def factory(cls, registry: CollectorRegistry) -> Type:
"""Returns a dynamic MetricsHandler class tied
to the passed registry.
"""
@@ -236,7 +234,9 @@ class _ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
daemon_threads = True
def start_http_server(port, addr="", registry=REGISTRY):
def start_http_server(
port: int, addr: str = "", registry: CollectorRegistry = REGISTRY
) -> None:
"""Starts an HTTP server for prometheus metrics as a daemon thread"""
CustomMetricsHandler = MetricsHandler.factory(registry)
httpd = _ThreadingSimpleServer((addr, port), CustomMetricsHandler)
@@ -252,10 +252,10 @@ class MetricsResource(Resource):
isLeaf = True
def __init__(self, registry=REGISTRY):
def __init__(self, registry: CollectorRegistry = REGISTRY):
self.registry = registry
def render_GET(self, request):
def render_GET(self, request: Request) -> bytes:
request.setHeader(b"Content-Type", CONTENT_TYPE_LATEST.encode("ascii"))
response = generate_latest(self.registry)
request.setHeader(b"Content-Length", str(len(response)))

View File

@@ -15,19 +15,37 @@
import logging
import threading
from functools import wraps
from typing import TYPE_CHECKING, Dict, Optional, Set, Union
from types import TracebackType
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Dict,
Iterable,
Optional,
Set,
Type,
TypeVar,
Union,
cast,
)
from prometheus_client import Metric
from prometheus_client.core import REGISTRY, Counter, Gauge
from twisted.internet import defer
from synapse.logging.context import LoggingContext, PreserveLoggingContext
from synapse.logging.context import (
ContextResourceUsage,
LoggingContext,
PreserveLoggingContext,
)
from synapse.logging.opentracing import (
SynapseTags,
noop_context_manager,
start_active_span,
)
from synapse.util.async_helpers import maybe_awaitable
if TYPE_CHECKING:
import resource
@@ -116,7 +134,7 @@ class _Collector:
before they are returned.
"""
def collect(self):
def collect(self) -> Iterable[Metric]:
global _background_processes_active_since_last_scrape
# We swap out the _background_processes set with an empty one so that
@@ -144,12 +162,12 @@ REGISTRY.register(_Collector())
class _BackgroundProcess:
def __init__(self, desc, ctx):
def __init__(self, desc: str, ctx: LoggingContext):
self.desc = desc
self._context = ctx
self._reported_stats = None
self._reported_stats: Optional[ContextResourceUsage] = None
def update_metrics(self):
def update_metrics(self) -> None:
"""Updates the metrics with values from this process."""
new_stats = self._context.get_resource_usage()
if self._reported_stats is None:
@@ -169,7 +187,16 @@ class _BackgroundProcess:
)
def run_as_background_process(desc: str, func, *args, bg_start_span=True, **kwargs):
R = TypeVar("R")
def run_as_background_process(
desc: str,
func: Callable[..., Awaitable[Optional[R]]],
*args: Any,
bg_start_span: bool = True,
**kwargs: Any,
) -> "defer.Deferred[Optional[R]]":
"""Run the given function in its own logcontext, with resource metrics
This should be used to wrap processes which are fired off to run in the
@@ -189,11 +216,13 @@ def run_as_background_process(desc: str, func, *args, bg_start_span=True, **kwar
args: positional args for func
kwargs: keyword args for func
Returns: Deferred which returns the result of func, but note that it does not
follow the synapse logcontext rules.
Returns:
Deferred which returns the result of func, or `None` if func raises.
Note that the returned Deferred does not follow the synapse logcontext
rules.
"""
async def run():
async def run() -> Optional[R]:
with _bg_metrics_lock:
count = _background_process_counts.get(desc, 0)
_background_process_counts[desc] = count + 1
@@ -210,12 +239,13 @@ def run_as_background_process(desc: str, func, *args, bg_start_span=True, **kwar
else:
ctx = noop_context_manager()
with ctx:
return await maybe_awaitable(func(*args, **kwargs))
return await func(*args, **kwargs)
except Exception:
logger.exception(
"Background process '%s' threw an exception",
desc,
)
return None
finally:
_background_process_in_flight_count.labels(desc).dec()
@@ -225,19 +255,24 @@ def run_as_background_process(desc: str, func, *args, bg_start_span=True, **kwar
return defer.ensureDeferred(run())
def wrap_as_background_process(desc):
F = TypeVar("F", bound=Callable[..., Awaitable[Optional[Any]]])
def wrap_as_background_process(desc: str) -> Callable[[F], F]:
"""Decorator that wraps a function that gets called as a background
process.
Equivalent of calling the function with `run_as_background_process`
Equivalent to calling the function with `run_as_background_process`
"""
def wrap_as_background_process_inner(func):
def wrap_as_background_process_inner(func: F) -> F:
@wraps(func)
def wrap_as_background_process_inner_2(*args, **kwargs):
def wrap_as_background_process_inner_2(
*args: Any, **kwargs: Any
) -> "defer.Deferred[Optional[R]]":
return run_as_background_process(desc, func, *args, **kwargs)
return wrap_as_background_process_inner_2
return cast(F, wrap_as_background_process_inner_2)
return wrap_as_background_process_inner
@@ -265,7 +300,7 @@ class BackgroundProcessLoggingContext(LoggingContext):
super().__init__("%s-%s" % (name, instance_id))
self._proc = _BackgroundProcess(name, self)
def start(self, rusage: "Optional[resource.struct_rusage]"):
def start(self, rusage: "Optional[resource.struct_rusage]") -> None:
"""Log context has started running (again)."""
super().start(rusage)
@@ -276,7 +311,12 @@ class BackgroundProcessLoggingContext(LoggingContext):
with _bg_metrics_lock:
_background_processes_active_since_last_scrape.add(self._proc)
def __exit__(self, type, value, traceback) -> None:
def __exit__(
self,
type: Optional[Type[BaseException]],
value: Optional[BaseException],
traceback: Optional[TracebackType],
) -> None:
"""Log context has finished."""
super().__exit__(type, value, traceback)
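The retyped helpers make two behavioural points explicit: the callable handed to run_as_background_process must be a coroutine function (it is awaited directly now that maybe_awaitable is gone), and the returned Deferred resolves to None when the function raises, because the exception is logged and swallowed. A hedged usage sketch (the task name and function are invented):

from synapse.metrics.background_process_metrics import (
    run_as_background_process,
    wrap_as_background_process,
)

async def _prune_old_entries(batch_size: int) -> int:
    return 0  # illustrative body

# Fire-and-forget, off the request path. The Deferred yields the int
# result, or None if _prune_old_entries raised (the error is logged).
d = run_as_background_process("prune_old_entries", _prune_old_entries, 100)

# Or equivalently, as a decorator:
@wrap_as_background_process("prune_old_entries")
async def prune_old_entries(batch_size: int) -> int:
    return 0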

View File

@@ -16,14 +16,16 @@ import ctypes
import logging
import os
import re
from typing import Optional
from typing import Iterable, Optional
from prometheus_client import Metric
from synapse.metrics import REGISTRY, GaugeMetricFamily
logger = logging.getLogger(__name__)
def _setup_jemalloc_stats():
def _setup_jemalloc_stats() -> None:
"""Checks to see if jemalloc is loaded, and hooks up a collector to record
statistics exposed by jemalloc.
"""
@@ -135,7 +137,7 @@ def _setup_jemalloc_stats():
class JemallocCollector:
"""Metrics for internal jemalloc stats."""
def collect(self):
def collect(self) -> Iterable[Metric]:
_jemalloc_refresh_stats()
g = GaugeMetricFamily(
@@ -185,7 +187,7 @@ def _setup_jemalloc_stats():
logger.debug("Added jemalloc stats")
def setup_jemalloc_stats():
def setup_jemalloc_stats() -> None:
"""Try to setup jemalloc stats, if jemalloc is loaded."""
try:

View File

@@ -35,7 +35,44 @@ from twisted.web.resource import Resource
from synapse.api.errors import SynapseError
from synapse.events import EventBase
from synapse.events.presence_router import PresenceRouter
from synapse.events.presence_router import (
GET_INTERESTED_USERS_CALLBACK,
GET_USERS_FOR_STATES_CALLBACK,
PresenceRouter,
)
from synapse.events.spamcheck import (
CHECK_EVENT_FOR_SPAM_CALLBACK,
CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK,
CHECK_REGISTRATION_FOR_SPAM_CALLBACK,
CHECK_USERNAME_FOR_SPAM_CALLBACK,
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK,
USER_MAY_CREATE_ROOM_CALLBACK,
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK,
USER_MAY_INVITE_CALLBACK,
USER_MAY_JOIN_ROOM_CALLBACK,
USER_MAY_PUBLISH_ROOM_CALLBACK,
USER_MAY_SEND_3PID_INVITE_CALLBACK,
)
from synapse.events.third_party_rules import (
CHECK_EVENT_ALLOWED_CALLBACK,
CHECK_THREEPID_CAN_BE_INVITED_CALLBACK,
CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK,
ON_CREATE_ROOM_CALLBACK,
ON_NEW_EVENT_CALLBACK,
)
from synapse.handlers.account_validity import (
IS_USER_EXPIRED_CALLBACK,
ON_LEGACY_ADMIN_REQUEST,
ON_LEGACY_RENEW_CALLBACK,
ON_LEGACY_SEND_MAIL_CALLBACK,
ON_USER_REGISTRATION_CALLBACK,
)
from synapse.handlers.auth import (
CHECK_3PID_AUTH_CALLBACK,
CHECK_AUTH_CALLBACK,
ON_LOGGED_OUT_CALLBACK,
AuthHandler,
)
from synapse.http.client import SimpleHttpClient
from synapse.http.server import (
DirectServeHtmlResource,
@@ -114,7 +151,7 @@ class ModuleApi:
can register new users etc if necessary.
"""
def __init__(self, hs: "HomeServer", auth_handler):
def __init__(self, hs: "HomeServer", auth_handler: AuthHandler) -> None:
self._hs = hs
# TODO: Fix this type hint once the types for the data stores have been ironed
@@ -156,45 +193,119 @@ class ModuleApi:
#################################################################################
# The following methods should only be called during the module's initialisation.
@property
def register_spam_checker_callbacks(self):
def register_spam_checker_callbacks(
self,
check_event_for_spam: Optional[CHECK_EVENT_FOR_SPAM_CALLBACK] = None,
user_may_join_room: Optional[USER_MAY_JOIN_ROOM_CALLBACK] = None,
user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,
user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
user_may_create_room_with_invites: Optional[
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
] = None,
user_may_create_room_alias: Optional[
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
] = None,
user_may_publish_room: Optional[USER_MAY_PUBLISH_ROOM_CALLBACK] = None,
check_username_for_spam: Optional[CHECK_USERNAME_FOR_SPAM_CALLBACK] = None,
check_registration_for_spam: Optional[
CHECK_REGISTRATION_FOR_SPAM_CALLBACK
] = None,
check_media_file_for_spam: Optional[CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK] = None,
) -> None:
"""Registers callbacks for spam checking capabilities.
Added in Synapse v1.37.0.
"""
return self._spam_checker.register_callbacks
return self._spam_checker.register_callbacks(
check_event_for_spam=check_event_for_spam,
user_may_join_room=user_may_join_room,
user_may_invite=user_may_invite,
user_may_send_3pid_invite=user_may_send_3pid_invite,
user_may_create_room=user_may_create_room,
user_may_create_room_with_invites=user_may_create_room_with_invites,
user_may_create_room_alias=user_may_create_room_alias,
user_may_publish_room=user_may_publish_room,
check_username_for_spam=check_username_for_spam,
check_registration_for_spam=check_registration_for_spam,
check_media_file_for_spam=check_media_file_for_spam,
)
@property
def register_account_validity_callbacks(self):
def register_account_validity_callbacks(
self,
is_user_expired: Optional[IS_USER_EXPIRED_CALLBACK] = None,
on_user_registration: Optional[ON_USER_REGISTRATION_CALLBACK] = None,
on_legacy_send_mail: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None,
on_legacy_renew: Optional[ON_LEGACY_RENEW_CALLBACK] = None,
on_legacy_admin_request: Optional[ON_LEGACY_ADMIN_REQUEST] = None,
) -> None:
"""Registers callbacks for account validity capabilities.
Added in Synapse v1.39.0.
"""
return self._account_validity_handler.register_account_validity_callbacks
return self._account_validity_handler.register_account_validity_callbacks(
is_user_expired=is_user_expired,
on_user_registration=on_user_registration,
on_legacy_send_mail=on_legacy_send_mail,
on_legacy_renew=on_legacy_renew,
on_legacy_admin_request=on_legacy_admin_request,
)
@property
def register_third_party_rules_callbacks(self):
def register_third_party_rules_callbacks(
self,
check_event_allowed: Optional[CHECK_EVENT_ALLOWED_CALLBACK] = None,
on_create_room: Optional[ON_CREATE_ROOM_CALLBACK] = None,
check_threepid_can_be_invited: Optional[
CHECK_THREEPID_CAN_BE_INVITED_CALLBACK
] = None,
check_visibility_can_be_modified: Optional[
CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK
] = None,
on_new_event: Optional[ON_NEW_EVENT_CALLBACK] = None,
) -> None:
"""Registers callbacks for third party event rules capabilities.
Added in Synapse v1.39.0.
"""
return self._third_party_event_rules.register_third_party_rules_callbacks
return self._third_party_event_rules.register_third_party_rules_callbacks(
check_event_allowed=check_event_allowed,
on_create_room=on_create_room,
check_threepid_can_be_invited=check_threepid_can_be_invited,
check_visibility_can_be_modified=check_visibility_can_be_modified,
on_new_event=on_new_event,
)
@property
def register_presence_router_callbacks(self):
def register_presence_router_callbacks(
self,
get_users_for_states: Optional[GET_USERS_FOR_STATES_CALLBACK] = None,
get_interested_users: Optional[GET_INTERESTED_USERS_CALLBACK] = None,
) -> None:
"""Registers callbacks for presence router capabilities.
Added in Synapse v1.42.0.
"""
return self._presence_router.register_presence_router_callbacks
return self._presence_router.register_presence_router_callbacks(
get_users_for_states=get_users_for_states,
get_interested_users=get_interested_users,
)
@property
def register_password_auth_provider_callbacks(self):
def register_password_auth_provider_callbacks(
self,
check_3pid_auth: Optional[CHECK_3PID_AUTH_CALLBACK] = None,
on_logged_out: Optional[ON_LOGGED_OUT_CALLBACK] = None,
auth_checkers: Optional[
Dict[Tuple[str, Tuple[str, ...]], CHECK_AUTH_CALLBACK]
] = None,
) -> None:
"""Registers callbacks for password auth provider capabilities.
Added in Synapse v1.46.0.
"""
return self._password_auth_provider.register_password_auth_provider_callbacks
return self._password_auth_provider.register_password_auth_provider_callbacks(
check_3pid_auth=check_3pid_auth,
on_logged_out=on_logged_out,
auth_checkers=auth_checkers,
)
def register_web_resource(self, path: str, resource: Resource):
"""Registers a web resource to be served at the given path.
@@ -216,7 +327,7 @@ class ModuleApi:
# The following methods can be called by the module at any point in time.
@property
def http_client(self):
def http_client(self) -> SimpleHttpClient:
"""Allows making outbound HTTP requests to remote resources.
An instance of synapse.http.client.SimpleHttpClient
@@ -226,7 +337,7 @@ class ModuleApi:
return self._http_client
@property
def public_room_list_manager(self):
def public_room_list_manager(self) -> "PublicRoomListManager":
"""Allows adding to, removing from and checking the status of rooms in the
public room list.
@@ -309,7 +420,7 @@ class ModuleApi:
"""
return await self._store.is_server_admin(UserID.from_string(user_id))
def get_qualified_user_id(self, username):
def get_qualified_user_id(self, username: str) -> str:
"""Qualify a user id, if necessary
Takes a user id provided by the user and adds the @ and :domain to
@@ -318,7 +429,7 @@ class ModuleApi:
Added in Synapse v0.25.0.
Args:
username (str): provided user id
username: provided user id
Returns:
str: qualified @user:id
@@ -357,13 +468,13 @@ class ModuleApi:
"""
return await self._store.user_get_threepids(user_id)
def check_user_exists(self, user_id):
def check_user_exists(self, user_id: str):
"""Check if user exists.
Added in Synapse v0.25.0.
Args:
user_id (str): Complete @user:id
user_id: Complete @user:id
Returns:
Deferred[str|None]: Canonical (case-corrected) user_id, or None
@@ -903,7 +1014,7 @@ class ModuleApi:
A list containing the loaded templates, with the orders matching the one of
the filenames parameter.
"""
return self._hs.config.read_templates(
return self._hs.config.server.read_templates(
filenames,
(td for td in (self.custom_template_dir, custom_template_directory) if td),
)
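Because the registration hooks above are now real methods rather than properties returning the underlying register functions, a module passes its callbacks in directly. A sketch of a module initialiser written against this API (the module class, config shape and callback are invented; only the callbacks a module cares about need to be supplied):

from synapse.events import EventBase
from synapse.module_api import ModuleApi

class ExampleModule:
    def __init__(self, config: dict, api: ModuleApi):
        api.register_spam_checker_callbacks(
            check_event_for_spam=self.check_event_for_spam,
        )

    async def check_event_for_spam(self, event: EventBase) -> bool:
        # False means "not spam" in the spam-checker contract.
        return False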

View File

@@ -28,6 +28,7 @@ from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
from synapse.rest.admin.background_updates import (
BackgroundUpdateEnabledRestServlet,
BackgroundUpdateRestServlet,
BackgroundUpdateStartJobRestServlet,
)
from synapse.rest.admin.devices import (
DeleteDevicesRestServlet,
@@ -46,6 +47,7 @@ from synapse.rest.admin.registration_tokens import (
RegistrationTokenRestServlet,
)
from synapse.rest.admin.rooms import (
BlockRoomRestServlet,
DeleteRoomStatusByDeleteIdRestServlet,
DeleteRoomStatusByRoomIdRestServlet,
ForwardExtremitiesRestServlet,
@@ -223,6 +225,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
Register all the admin servlets.
"""
register_servlets_for_client_rest_resource(hs, http_server)
BlockRoomRestServlet(hs).register(http_server)
ListRoomRestServlet(hs).register(http_server)
RoomStateRestServlet(hs).register(http_server)
RoomRestServlet(hs).register(http_server)
@@ -259,6 +262,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
SendServerNoticeServlet(hs).register(http_server)
BackgroundUpdateEnabledRestServlet(hs).register(http_server)
BackgroundUpdateRestServlet(hs).register(http_server)
BackgroundUpdateStartJobRestServlet(hs).register(http_server)
def register_servlets_for_client_rest_resource(

View File

@@ -12,10 +12,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from http import HTTPStatus
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import SynapseError
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.servlet import (
RestServlet,
assert_params_in_dict,
parse_json_object_from_request,
)
from synapse.http.site import SynapseRequest
from synapse.rest.admin._base import admin_patterns, assert_user_is_admin
from synapse.types import JsonDict
@@ -29,37 +34,36 @@ logger = logging.getLogger(__name__)
class BackgroundUpdateEnabledRestServlet(RestServlet):
"""Allows temporarily disabling background updates"""
PATTERNS = admin_patterns("/background_updates/enabled")
PATTERNS = admin_patterns("/background_updates/enabled$")
def __init__(self, hs: "HomeServer"):
self.group_server = hs.get_groups_server_handler()
self.is_mine_id = hs.is_mine_id
self.auth = hs.get_auth()
self.data_stores = hs.get_datastores()
self._auth = hs.get_auth()
self._data_stores = hs.get_datastores()
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester.user)
# We need to check that all configured databases have updates enabled.
# (They *should* all be in sync.)
enabled = all(db.updates.enabled for db in self.data_stores.databases)
enabled = all(db.updates.enabled for db in self._data_stores.databases)
return 200, {"enabled": enabled}
return HTTPStatus.OK, {"enabled": enabled}
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester.user)
body = parse_json_object_from_request(request)
enabled = body.get("enabled", True)
if not isinstance(enabled, bool):
raise SynapseError(400, "'enabled' parameter must be a boolean")
raise SynapseError(
HTTPStatus.BAD_REQUEST, "'enabled' parameter must be a boolean"
)
for db in self.data_stores.databases:
for db in self._data_stores.databases:
db.updates.enabled = enabled
# If we're re-enabling them ensure that we start the background
@@ -67,32 +71,29 @@ class BackgroundUpdateEnabledRestServlet(RestServlet):
if enabled:
db.updates.start_doing_background_updates()
return 200, {"enabled": enabled}
return HTTPStatus.OK, {"enabled": enabled}
class BackgroundUpdateRestServlet(RestServlet):
"""Fetch information about background updates"""
PATTERNS = admin_patterns("/background_updates/status")
PATTERNS = admin_patterns("/background_updates/status$")
def __init__(self, hs: "HomeServer"):
self.group_server = hs.get_groups_server_handler()
self.is_mine_id = hs.is_mine_id
self.auth = hs.get_auth()
self.data_stores = hs.get_datastores()
self._auth = hs.get_auth()
self._data_stores = hs.get_datastores()
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester.user)
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester.user)
# We need to check that all configured databases have updates enabled.
# (They *should* all be in sync.)
enabled = all(db.updates.enabled for db in self.data_stores.databases)
enabled = all(db.updates.enabled for db in self._data_stores.databases)
current_updates = {}
for db in self.data_stores.databases:
for db in self._data_stores.databases:
update = db.updates.get_current_update()
if not update:
continue
@@ -104,4 +105,72 @@ class BackgroundUpdateRestServlet(RestServlet):
"average_items_per_ms": update.average_items_per_ms(),
}
return 200, {"enabled": enabled, "current_updates": current_updates}
return HTTPStatus.OK, {"enabled": enabled, "current_updates": current_updates}
class BackgroundUpdateStartJobRestServlet(RestServlet):
"""Allows to start specific background updates"""
PATTERNS = admin_patterns("/background_updates/start_job")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastore()
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester.user)
body = parse_json_object_from_request(request)
assert_params_in_dict(body, ["job_name"])
job_name = body["job_name"]
if job_name == "populate_stats_process_rooms":
jobs = [
{
"update_name": "populate_stats_process_rooms",
"progress_json": "{}",
},
]
elif job_name == "regenerate_directory":
jobs = [
{
"update_name": "populate_user_directory_createtables",
"progress_json": "{}",
"depends_on": "",
},
{
"update_name": "populate_user_directory_process_rooms",
"progress_json": "{}",
"depends_on": "populate_user_directory_createtables",
},
{
"update_name": "populate_user_directory_process_users",
"progress_json": "{}",
"depends_on": "populate_user_directory_process_rooms",
},
{
"update_name": "populate_user_directory_cleanup",
"progress_json": "{}",
"depends_on": "populate_user_directory_process_users",
},
]
else:
raise SynapseError(HTTPStatus.BAD_REQUEST, "Invalid job_name")
try:
await self._store.db_pool.simple_insert_many(
table="background_updates",
values=jobs,
desc=f"admin_api_run_{job_name}",
)
except self._store.db_pool.engine.module.IntegrityError:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Job %s is already in queue of background updates." % (job_name,),
)
self._store.db_pool.updates.start_doing_background_updates()
return HTTPStatus.OK, {}
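For reference, a request sketch for the new servlet in the same style as the other admin-API docstrings in this diff, assuming the standard /_synapse/admin/v1 prefix that admin_patterns applies:

POST /_synapse/admin/v1/background_updates/start_job
{"job_name": "regenerate_directory"}

200 OK
{}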

View File

@@ -448,7 +448,7 @@ class RoomStateRestServlet(RestServlet):
now,
# We don't bother bundling aggregations in when asked for state
# events, as clients won't use them.
bundle_aggregations=False,
bundle_relations=False,
)
ret = {"state": room_state}
@@ -778,7 +778,70 @@ class RoomEventContextServlet(RestServlet):
results["state"],
time_now,
# No need to bundle aggregations for state events
bundle_aggregations=False,
bundle_relations=False,
)
return 200, results
class BlockRoomRestServlet(RestServlet):
"""
Manage blocking of rooms.
On PUT: Adds or removes a room from the blocking list.
On GET: Returns whether a room is blocked, and if so, which user blocked it.
"""
PATTERNS = admin_patterns("/rooms/(?P<room_id>[^/]+)/block$")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastore()
async def on_GET(
self, request: SynapseRequest, room_id: str
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self._auth, request)
if not RoomID.is_valid(room_id):
raise SynapseError(
HTTPStatus.BAD_REQUEST, "%s is not a legal room ID" % (room_id,)
)
blocked_by = await self._store.room_is_blocked_by(room_id)
# Test for `not None` rather than truthiness: `user_id` may be an empty
# string if someone manually added an entry to the database
if blocked_by is not None:
response = {"block": True, "user_id": blocked_by}
else:
response = {"block": False}
return HTTPStatus.OK, response
async def on_PUT(
self, request: SynapseRequest, room_id: str
) -> Tuple[int, JsonDict]:
requester = await self._auth.get_user_by_req(request)
await assert_user_is_admin(self._auth, requester.user)
content = parse_json_object_from_request(request)
if not RoomID.is_valid(room_id):
raise SynapseError(
HTTPStatus.BAD_REQUEST, "%s is not a legal room ID" % (room_id,)
)
assert_params_in_dict(content, ["block"])
block = content.get("block")
if not isinstance(block, bool):
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Param 'block' must be a boolean.",
Codes.BAD_JSON,
)
if block:
await self._store.block_room(room_id, requester.user.to_string())
else:
await self._store.unblock_room(room_id)
return HTTPStatus.OK, {"block": block}
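A request sketch for the blocking endpoint, again assuming the standard /_synapse/admin/v1 prefix (the room and user IDs are illustrative):

PUT /_synapse/admin/v1/rooms/!room:example.com/block
{"block": true}

200 OK
{"block": true}

GET /_synapse/admin/v1/rooms/!room:example.com/block

200 OK
{"block": true, "user_id": "@admin:example.com"}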

View File

@@ -898,7 +898,7 @@ class UserTokenRestServlet(RestServlet):
if auth_user.to_string() == user_id:
raise SynapseError(400, "Cannot use admin API to login as self")
token = await self.auth_handler.get_access_token_for_user_id(
token = await self.auth_handler.create_access_token_for_user_id(
user_id=auth_user.to_string(),
device_id=None,
valid_until_ms=valid_until_ms,
@@ -909,7 +909,7 @@ class UserTokenRestServlet(RestServlet):
class ShadowBanRestServlet(RestServlet):
"""An admin API for shadow-banning a user.
"""An admin API for controlling whether a user is shadow-banned.
A shadow-banned user receives successful responses to their client-server
API requests, but the events are not propagated into rooms.
@@ -917,11 +917,19 @@ class ShadowBanRestServlet(RestServlet):
Shadow-banning a user should be used as a tool of last resort and may lead
to confusing or broken behaviour for the client.
Example:
Example of shadow-banning a user:
POST /_synapse/admin/v1/users/@test:example.com/shadow_ban
{}
200 OK
{}
Example of removing a user from being shadow-banned:
DELETE /_synapse/admin/v1/users/@test:example.com/shadow_ban
{}
200 OK
{}
"""
@@ -945,6 +953,18 @@ class ShadowBanRestServlet(RestServlet):
return 200, {}
async def on_DELETE(
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
if not self.hs.is_mine_id(user_id):
raise SynapseError(400, "Only local users can be shadow-banned")
await self.store.set_shadow_banned(UserID.from_string(user_id), False)
return 200, {}
class RateLimitRestServlet(RestServlet):
"""An admin API to override ratelimiting for an user.

View File

@@ -27,7 +27,7 @@ logger = logging.getLogger(__name__)
def client_patterns(
path_regex: str,
releases: Iterable[int] = (0,),
releases: Iterable[str] = ("r0", "v3"),
unstable: bool = True,
v1: bool = False,
) -> Iterable[Pattern]:
@@ -52,7 +52,7 @@ def client_patterns(
v1_prefix = CLIENT_API_PREFIX + "/api/v1"
patterns.append(re.compile("^" + v1_prefix + path_regex))
for release in releases:
new_prefix = CLIENT_API_PREFIX + "/r%d" % (release,)
new_prefix = CLIENT_API_PREFIX + f"/{release}"
patterns.append(re.compile("^" + new_prefix + path_regex))
return patterns
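Concretely, the default releases now expand to the stable r0 prefix plus the v3 prefix introduced by Matrix 1.1. A sketch of what the helper yields, assuming CLIENT_API_PREFIX is "/_matrix/client" as elsewhere in Synapse:

# client_patterns("/account/whoami$") now matches, among others:
#   ^/_matrix/client/r0/account/whoami$
#   ^/_matrix/client/v3/account/whoami$
# plus ^/_matrix/client/unstable/..., since unstable=True by default.
patterns = client_patterns("/account/whoami$")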

View File

@@ -262,7 +262,7 @@ class SigningKeyUploadServlet(RestServlet):
}
"""
PATTERNS = client_patterns("/keys/device_signing/upload$", releases=())
PATTERNS = client_patterns("/keys/device_signing/upload$", releases=("v3",))
def __init__(self, hs: "HomeServer"):
super().__init__()

View File

@@ -72,6 +72,7 @@ class LoginRestServlet(RestServlet):
# JWT configuration variables.
self.jwt_enabled = hs.config.jwt.jwt_enabled
self.jwt_secret = hs.config.jwt.jwt_secret
self.jwt_subject_claim = hs.config.jwt.jwt_subject_claim
self.jwt_algorithm = hs.config.jwt.jwt_algorithm
self.jwt_issuer = hs.config.jwt.jwt_issuer
self.jwt_audiences = hs.config.jwt.jwt_audiences
@@ -80,7 +81,9 @@ class LoginRestServlet(RestServlet):
self.saml2_enabled = hs.config.saml2.saml2_enabled
self.cas_enabled = hs.config.cas.cas_enabled
self.oidc_enabled = hs.config.oidc.oidc_enabled
self._msc2918_enabled = hs.config.registration.access_token_lifetime is not None
self._msc2918_enabled = (
hs.config.registration.refreshable_access_token_lifetime is not None
)
self.auth = hs.get_auth()
@@ -413,7 +416,7 @@ class LoginRestServlet(RestServlet):
errcode=Codes.FORBIDDEN,
)
user = payload.get("sub", None)
user = payload.get(self.jwt_subject_claim, None)
if user is None:
raise LoginError(403, "Invalid JWT", errcode=Codes.FORBIDDEN)
@@ -452,7 +455,9 @@ class RefreshTokenServlet(RestServlet):
def __init__(self, hs: "HomeServer"):
self._auth_handler = hs.get_auth_handler()
self._clock = hs.get_clock()
self.access_token_lifetime = hs.config.registration.access_token_lifetime
self.refreshable_access_token_lifetime = (
hs.config.registration.refreshable_access_token_lifetime
)
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
refresh_submission = parse_json_object_from_request(request)
@@ -462,7 +467,9 @@ class RefreshTokenServlet(RestServlet):
if not isinstance(token, str):
raise SynapseError(400, "Invalid param: refresh_token", Codes.INVALID_PARAM)
valid_until_ms = self._clock.time_msec() + self.access_token_lifetime
valid_until_ms = (
self._clock.time_msec() + self.refreshable_access_token_lifetime
)
access_token, refresh_token = await self._auth_handler.refresh_token(
token, valid_until_ms
)
@@ -561,7 +568,7 @@ class CasTicketServlet(RestServlet):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
LoginRestServlet(hs).register(http_server)
if hs.config.registration.access_token_lifetime is not None:
if hs.config.registration.refreshable_access_token_lifetime is not None:
RefreshTokenServlet(hs).register(http_server)
SsoRedirectServlet(hs).register(http_server)
if hs.config.cas.cas_enabled:
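The new jwt_subject_claim option generalises the previously hard-coded "sub" lookup, so deployments whose identity provider carries the Matrix localpart in a different claim can name it in config. A sketch (the claim name and payload are illustrative):

jwt_subject_claim = "preferred_username"  # from the homeserver's jwt config
payload = {"preferred_username": "alice", "iss": "https://idp.example.com"}
user = payload.get(jwt_subject_claim)  # "alice", where the old code read payload["sub"]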

View File

@@ -420,7 +420,9 @@ class RegisterRestServlet(RestServlet):
self.password_policy_handler = hs.get_password_policy_handler()
self.clock = hs.get_clock()
self._registration_enabled = self.hs.config.registration.enable_registration
self._msc2918_enabled = hs.config.registration.access_token_lifetime is not None
self._msc2918_enabled = (
hs.config.registration.refreshable_access_token_lifetime is not None
)
self._registration_flows = _calculate_registration_flows(
hs.config, self.auth_handler

View File

@@ -224,17 +224,17 @@ class RelationPaginationServlet(RestServlet):
)
now = self.clock.time_msec()
# We set bundle_aggregations to False when retrieving the original
# We set bundle_relations to False when retrieving the original
# event because we want the content before relations were applied to
# it.
original_event = await self._event_serializer.serialize_event(
event, now, bundle_aggregations=False
event, now, bundle_relations=False
)
# Similarly, we don't allow relations to be applied to relations, so we
# return the original relations without any aggregations on top of them
# here.
serialized_events = await self._event_serializer.serialize_events(
events, now, bundle_aggregations=False
events, now, bundle_relations=False
)
return_value = pagination_chunk.to_dict()

View File

@@ -719,7 +719,7 @@ class RoomEventContextServlet(RestServlet):
results["state"],
time_now,
# No need to bundle aggregations for state events
bundle_aggregations=False,
bundle_relations=False,
)
return 200, results

View File

@@ -522,7 +522,7 @@ class SyncRestServlet(RestServlet):
time_now=time_now,
# We don't bundle "live" events, as otherwise clients
# will end up double counting annotations.
bundle_aggregations=False,
bundle_relations=False,
token_id=token_id,
event_format=event_formatter,
only_event_fields=only_fields,

View File

@@ -29,7 +29,7 @@ from synapse.api.errors import Codes, SynapseError, cs_error
from synapse.http.server import finish_request, respond_with_json
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.util.stringutils import is_ascii
from synapse.util.stringutils import is_ascii, parse_and_validate_server_name
logger = logging.getLogger(__name__)
@@ -51,6 +51,19 @@ TEXT_CONTENT_TYPES = [
def parse_media_id(request: Request) -> Tuple[str, str, Optional[str]]:
"""Parses the server name, media ID and optional file name from the request URI
Also performs some rough validation on the server name.
Args:
request: The `Request`.
Returns:
A tuple containing the parsed server name, media ID and optional file name.
Raises:
SynapseError(404): if parsing or validation fail for any reason
"""
try:
# The type on postpath seems incorrect in Twisted 21.2.0.
postpath: List[bytes] = request.postpath # type: ignore
@@ -62,6 +75,9 @@ def parse_media_id(request: Request) -> Tuple[str, str, Optional[str]]:
server_name = server_name_bytes.decode("utf-8")
media_id = media_id_bytes.decode("utf8")
# Validate the server name, raising if invalid
parse_and_validate_server_name(server_name)
file_name = None
if len(postpath) > 2:
try:
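The added validation means a malformed or hostile server name in a download or thumbnail URL is rejected before it can reach the filesystem layer. A sketch of the helper's behaviour (the inputs are illustrative; per the docstring above, parse_media_id converts the failure into a 404 SynapseError):

from synapse.util.stringutils import parse_and_validate_server_name

parse_and_validate_server_name("matrix.org")       # ok
parse_and_validate_server_name("matrix.org:8448")  # ok, host plus port
parse_and_validate_server_name("../../etc")        # raises ValueError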

View File

@@ -16,7 +16,8 @@
import functools
import os
import re
from typing import Any, Callable, List, TypeVar, cast
import string
from typing import Any, Callable, List, TypeVar, Union, cast
NEW_FORMAT_ID_RE = re.compile(r"^\d\d\d\d-\d\d-\d\d")
@@ -37,6 +38,85 @@ def _wrap_in_base_path(func: F) -> F:
return cast(F, _wrapped)
GetPathMethod = TypeVar(
"GetPathMethod", bound=Union[Callable[..., str], Callable[..., List[str]]]
)
def _wrap_with_jail_check(func: GetPathMethod) -> GetPathMethod:
"""Wraps a path-returning method to check that the returned path(s) do not escape
the media store directory.
The check is not expected to ever fail, unless `func` is missing a call to
`_validate_path_component`, or `_validate_path_component` is buggy.
Args:
func: The `MediaFilePaths` method to wrap. The method may return either a single
path, or a list of paths. Returned paths may be either absolute or relative.
Returns:
The method, wrapped with a check to ensure that the returned path(s) lie within
the media store directory. Raises a `ValueError` if the check fails.
"""
@functools.wraps(func)
def _wrapped(
self: "MediaFilePaths", *args: Any, **kwargs: Any
) -> Union[str, List[str]]:
path_or_paths = func(self, *args, **kwargs)
if isinstance(path_or_paths, list):
paths_to_check = path_or_paths
else:
paths_to_check = [path_or_paths]
for path in paths_to_check:
# path may be an absolute or relative path, depending on the method being
# wrapped. When "appending" an absolute path, `os.path.join` discards the
# previous path, which is desired here.
normalized_path = os.path.normpath(os.path.join(self.real_base_path, path))
if (
os.path.commonpath([normalized_path, self.real_base_path])
!= self.real_base_path
):
raise ValueError(f"Invalid media store path: {path!r}")
return path_or_paths
return cast(GetPathMethod, _wrapped)
ALLOWED_CHARACTERS = set(
string.ascii_letters
+ string.digits
+ "_-"
+ ".[]:" # Domain names, IPv6 addresses and ports in server names
)
FORBIDDEN_NAMES = {
"",
os.path.curdir, # "." for the current platform
os.path.pardir, # ".." for the current platform
}
def _validate_path_component(name: str) -> str:
"""Checks that the given string can be safely used as a path component
Args:
name: The path component to check.
Returns:
The path component if valid.
Raises:
ValueError: If `name` cannot be safely used as a path component.
"""
if not ALLOWED_CHARACTERS.issuperset(name) or name in FORBIDDEN_NAMES:
raise ValueError(f"Invalid path component: {name!r}")
return name
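Taken together, the two layers enforce defence in depth: _validate_path_component whitelists every path component character-by-character, and _wrap_with_jail_check normalises the final joined path and confirms it still lives under the media store. A self-contained sketch of that second check (the base path is illustrative):

import os

base = os.path.realpath("/var/lib/synapse/media_store")

def is_inside_jail(rel_path: str) -> bool:
    # normpath collapses any ".." segments before the containment test.
    normalized = os.path.normpath(os.path.join(base, rel_path))
    return os.path.commonpath([normalized, base]) == base

assert is_inside_jail("remote_content/example.com/ab/cd/ef")
assert not is_inside_jail("remote_content/../../../etc/passwd")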
class MediaFilePaths:
"""Describes where files are stored on disk.
@@ -48,22 +128,46 @@ class MediaFilePaths:
def __init__(self, primary_base_path: str):
self.base_path = primary_base_path
# The media store directory, with all symlinks resolved.
self.real_base_path = os.path.realpath(primary_base_path)
# Refuse to initialize if paths cannot be validated correctly for the current
# platform.
assert os.path.sep not in ALLOWED_CHARACTERS
assert os.path.altsep not in ALLOWED_CHARACTERS
# On Windows, paths have all sorts of weirdness which `_validate_path_component`
# does not consider. In any case, the remote media store can't work correctly
# for certain homeservers there, since ":"s aren't allowed in paths.
assert os.name == "posix"
@_wrap_with_jail_check
def local_media_filepath_rel(self, media_id: str) -> str:
return os.path.join("local_content", media_id[0:2], media_id[2:4], media_id[4:])
return os.path.join(
"local_content",
_validate_path_component(media_id[0:2]),
_validate_path_component(media_id[2:4]),
_validate_path_component(media_id[4:]),
)
local_media_filepath = _wrap_in_base_path(local_media_filepath_rel)
@_wrap_with_jail_check
def local_media_thumbnail_rel(
self, media_id: str, width: int, height: int, content_type: str, method: str
) -> str:
top_level_type, sub_type = content_type.split("/")
file_name = "%i-%i-%s-%s-%s" % (width, height, top_level_type, sub_type, method)
return os.path.join(
"local_thumbnails", media_id[0:2], media_id[2:4], media_id[4:], file_name
"local_thumbnails",
_validate_path_component(media_id[0:2]),
_validate_path_component(media_id[2:4]),
_validate_path_component(media_id[4:]),
_validate_path_component(file_name),
)
local_media_thumbnail = _wrap_in_base_path(local_media_thumbnail_rel)
@_wrap_with_jail_check
def local_media_thumbnail_dir(self, media_id: str) -> str:
"""
Retrieve the local store path of thumbnails of a given media_id
@@ -76,18 +180,24 @@ class MediaFilePaths:
return os.path.join(
self.base_path,
"local_thumbnails",
media_id[0:2],
media_id[2:4],
media_id[4:],
_validate_path_component(media_id[0:2]),
_validate_path_component(media_id[2:4]),
_validate_path_component(media_id[4:]),
)
@_wrap_with_jail_check
def remote_media_filepath_rel(self, server_name: str, file_id: str) -> str:
return os.path.join(
"remote_content", server_name, file_id[0:2], file_id[2:4], file_id[4:]
"remote_content",
_validate_path_component(server_name),
_validate_path_component(file_id[0:2]),
_validate_path_component(file_id[2:4]),
_validate_path_component(file_id[4:]),
)
remote_media_filepath = _wrap_in_base_path(remote_media_filepath_rel)
@_wrap_with_jail_check
def remote_media_thumbnail_rel(
self,
server_name: str,
@@ -101,11 +211,11 @@ class MediaFilePaths:
file_name = "%i-%i-%s-%s-%s" % (width, height, top_level_type, sub_type, method)
return os.path.join(
"remote_thumbnail",
server_name,
file_id[0:2],
file_id[2:4],
file_id[4:],
file_name,
_validate_path_component(server_name),
_validate_path_component(file_id[0:2]),
_validate_path_component(file_id[2:4]),
_validate_path_component(file_id[4:]),
_validate_path_component(file_name),
)
remote_media_thumbnail = _wrap_in_base_path(remote_media_thumbnail_rel)
@@ -113,6 +223,7 @@ class MediaFilePaths:
# Legacy path that was used to store thumbnails previously.
# Should be removed after some time, when most of the thumbnails are stored
# using the new path.
@_wrap_with_jail_check
def remote_media_thumbnail_rel_legacy(
self, server_name: str, file_id: str, width: int, height: int, content_type: str
) -> str:
@@ -120,43 +231,66 @@ class MediaFilePaths:
file_name = "%i-%i-%s-%s" % (width, height, top_level_type, sub_type)
return os.path.join(
"remote_thumbnail",
server_name,
file_id[0:2],
file_id[2:4],
file_id[4:],
file_name,
_validate_path_component(server_name),
_validate_path_component(file_id[0:2]),
_validate_path_component(file_id[2:4]),
_validate_path_component(file_id[4:]),
_validate_path_component(file_name),
)
def remote_media_thumbnail_dir(self, server_name: str, file_id: str) -> str:
return os.path.join(
self.base_path,
"remote_thumbnail",
server_name,
file_id[0:2],
file_id[2:4],
file_id[4:],
_validate_path_component(server_name),
_validate_path_component(file_id[0:2]),
_validate_path_component(file_id[2:4]),
_validate_path_component(file_id[4:]),
)
@_wrap_with_jail_check
def url_cache_filepath_rel(self, media_id: str) -> str:
if NEW_FORMAT_ID_RE.match(media_id):
# Media id is of the form <DATE><RANDOM_STRING>
# E.g.: 2017-09-28-fsdRDt24DS234dsf
return os.path.join("url_cache", media_id[:10], media_id[11:])
return os.path.join(
"url_cache",
_validate_path_component(media_id[:10]),
_validate_path_component(media_id[11:]),
)
else:
return os.path.join("url_cache", media_id[0:2], media_id[2:4], media_id[4:])
return os.path.join(
"url_cache",
_validate_path_component(media_id[0:2]),
_validate_path_component(media_id[2:4]),
_validate_path_component(media_id[4:]),
)
url_cache_filepath = _wrap_in_base_path(url_cache_filepath_rel)
@_wrap_with_jail_check
def url_cache_filepath_dirs_to_delete(self, media_id: str) -> List[str]:
"The dirs to try and remove if we delete the media_id file"
if NEW_FORMAT_ID_RE.match(media_id):
return [os.path.join(self.base_path, "url_cache", media_id[:10])]
return [
os.path.join(
self.base_path, "url_cache", _validate_path_component(media_id[:10])
)
]
else:
return [
os.path.join(self.base_path, "url_cache", media_id[0:2], media_id[2:4]),
os.path.join(self.base_path, "url_cache", media_id[0:2]),
os.path.join(
self.base_path,
"url_cache",
_validate_path_component(media_id[0:2]),
_validate_path_component(media_id[2:4]),
),
os.path.join(
self.base_path, "url_cache", _validate_path_component(media_id[0:2])
),
]
@_wrap_with_jail_check
def url_cache_thumbnail_rel(
self, media_id: str, width: int, height: int, content_type: str, method: str
) -> str:
@@ -168,37 +302,46 @@ class MediaFilePaths:
if NEW_FORMAT_ID_RE.match(media_id):
return os.path.join(
"url_cache_thumbnails", media_id[:10], media_id[11:], file_name
"url_cache_thumbnails",
_validate_path_component(media_id[:10]),
_validate_path_component(media_id[11:]),
_validate_path_component(file_name),
)
else:
return os.path.join(
"url_cache_thumbnails",
media_id[0:2],
media_id[2:4],
media_id[4:],
file_name,
_validate_path_component(media_id[0:2]),
_validate_path_component(media_id[2:4]),
_validate_path_component(media_id[4:]),
_validate_path_component(file_name),
)
url_cache_thumbnail = _wrap_in_base_path(url_cache_thumbnail_rel)
@_wrap_with_jail_check
def url_cache_thumbnail_directory_rel(self, media_id: str) -> str:
# Media id is of the form <DATE><RANDOM_STRING>
# E.g.: 2017-09-28-fsdRDt24DS234dsf
if NEW_FORMAT_ID_RE.match(media_id):
return os.path.join("url_cache_thumbnails", media_id[:10], media_id[11:])
return os.path.join(
"url_cache_thumbnails",
_validate_path_component(media_id[:10]),
_validate_path_component(media_id[11:]),
)
else:
return os.path.join(
"url_cache_thumbnails",
media_id[0:2],
media_id[2:4],
media_id[4:],
_validate_path_component(media_id[0:2]),
_validate_path_component(media_id[2:4]),
_validate_path_component(media_id[4:]),
)
url_cache_thumbnail_directory = _wrap_in_base_path(
url_cache_thumbnail_directory_rel
)
@_wrap_with_jail_check
def url_cache_thumbnail_dirs_to_delete(self, media_id: str) -> List[str]:
"The dirs to try and remove if we delete the media_id thumbnails"
# Media id is of the form <DATE><RANDOM_STRING>
@@ -206,21 +349,35 @@ class MediaFilePaths:
if NEW_FORMAT_ID_RE.match(media_id):
return [
os.path.join(
self.base_path, "url_cache_thumbnails", media_id[:10], media_id[11:]
self.base_path,
"url_cache_thumbnails",
_validate_path_component(media_id[:10]),
_validate_path_component(media_id[11:]),
),
os.path.join(
self.base_path,
"url_cache_thumbnails",
_validate_path_component(media_id[:10]),
),
os.path.join(self.base_path, "url_cache_thumbnails", media_id[:10]),
]
else:
return [
os.path.join(
self.base_path,
"url_cache_thumbnails",
media_id[0:2],
media_id[2:4],
media_id[4:],
_validate_path_component(media_id[0:2]),
_validate_path_component(media_id[2:4]),
_validate_path_component(media_id[4:]),
),
os.path.join(
self.base_path, "url_cache_thumbnails", media_id[0:2], media_id[2:4]
self.base_path,
"url_cache_thumbnails",
_validate_path_component(media_id[0:2]),
_validate_path_component(media_id[2:4]),
),
os.path.join(
self.base_path,
"url_cache_thumbnails",
_validate_path_component(media_id[0:2]),
),
os.path.join(self.base_path, "url_cache_thumbnails", media_id[0:2]),
]

Some files were not shown because too many files have changed in this diff.