Compare commits


128 Commits

Author SHA1 Message Date
Andrew Morgan
f3e4cf3758 Fix get_users_who_share_room_with_user SQL
Three fixes here:

* Using UNION selects only DISTINCT rows across the combined queries. However, if we apply DISTINCT within each subquery and then use UNION ALL, which doesn't attempt to deduplicate rows, we get roughly a 1/3 speedup in query time!
* We should select other_user_id instead of user_id from users_who_share_private_rooms, as user_id will always be the requesting user.
* Added p1.user_id != p2.user_id to filter out rows that pair a user with themselves in the users_in_public_rooms query (a sketch of the resulting query shape is shown below).
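A rough, hedged sketch of the query shape these fixes point at; the table and column names are taken from the commit message and may not match the real Synapse schema exactly:

```python
def shared_rooms_query(user_id: str):
    # Hedged sketch only: DISTINCT within each subquery plus UNION ALL, selecting
    # other_user_id from the private-rooms table and excluding self-pairs in the
    # public-rooms self-join. Not the exact Synapse implementation.
    sql = """
        SELECT DISTINCT other_user_id
          FROM users_who_share_private_rooms
         WHERE user_id = ?
        UNION ALL
        SELECT DISTINCT p2.user_id
          FROM users_in_public_rooms AS p1
          JOIN users_in_public_rooms AS p2 USING (room_id)
         WHERE p1.user_id = ?
           AND p1.user_id != p2.user_id
    """
    return sql, (user_id, user_id)
```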
2021-02-15 17:41:26 +00:00
Andrew Morgan
32e41cc8e0 Add instead of update requesting user_id to set; invalidate cache context
We were set.update'ing a user_id, instead of set.add. The former treats
the user_id as an iterable, and thus adds every individual character of the
requesting user's ID to the set, as illustrated below. Fun!
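A minimal illustration of the difference (plain Python, not the Synapse code itself):

```python
users = set()
users.update("@alice:example.com")  # treats the string as an iterable of characters
print(users)                        # e.g. {'@', 'a', 'l', 'i', 'c', 'e', ...} (order arbitrary)

users = set()
users.add("@alice:example.com")     # adds the user ID as a single element
print(users)                        # {'@alice:example.com'}
```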
2021-02-15 17:33:35 +00:00
Andrew Morgan
27b0a44a18 Fix server name typo 2021-02-12 22:12:56 +00:00
Andrew Morgan
e10b3b377b Changelog 2021-02-12 20:11:38 +00:00
Andrew Morgan
72c1e61f41 Remove duplicate code where we filter ourselves out of destinations to send presence to
send_presence_to_destinations will already filter out the current server from the
given list. No need to do it again beforehand.
2021-02-12 20:11:37 +00:00
Andrew Morgan
c57f436515 Speed up get_users_who_share_room_with_user with a more efficient query
The old code was pulling all rooms for a given user, *then* looping over each
one in Python and pulling all users from those rooms.

This commit replaces that with a single query which makes use of existing
tables which keep track of user relationships.
2021-02-12 20:11:10 +00:00
Patrick Cloke
8a33d217bd Convert some test cases to use HomeserverTestCase. (#9377)
This has the side-effect of being able to remove use of `inlineCallbacks`
in the test-cases for cleaner tracebacks.
2021-02-11 10:29:09 -05:00
Patrick Cloke
6dade80048 Combine the CAS & SAML implementations for required attributes. (#9326) 2021-02-11 10:05:15 -05:00
Eric Eastwood
80d6dc9783 Remove conflicting sqlite tables that are "reserved" (shadow fts4 tables) (#9003)
Remove conflicting sqlite tables that throw `sqlite3.OperationalError: object name reserved for internal use: event_search_content` when running the Twisted unit tests.

Fix #8996
2021-02-10 20:12:57 +00:00
Brendan Abolivier
fb0e14ee9a Merge pull request #9361 from matrix-org/babolivier/third_party_validation
Remove unneeded type constraints on 3rd party protocol lookup responses
2021-02-09 18:51:44 +01:00
Thomas Mortagne
5f716fa777 Add XWiki OIDC provider example. (#9324) 2021-02-09 11:54:52 -05:00
Brendan Abolivier
29ae04af3b Remove unneeded type constraints on 3rd party protocol lookup responses 2021-02-09 17:50:25 +01:00
Patrick Cloke
3f58fc848d Type hints and validation improvements. (#9321)
* Adds type hints to the groups servlet and stringutils code.
* Assert the maximum length of some input values for spec compliance.
2021-02-08 13:59:54 -05:00
Patrick Cloke
0963d39ea6 Handle additional errors when previewing URLs. (#9333)
* Handle the case of lxml not finding a document tree.
* Parse the document encoding from the XML tag.
2021-02-08 12:33:30 -05:00
David Teller
b0b2cac057 Merge pull request #9150 from Yoric/develop-context
New API /_synapse/admin/rooms/{roomId}/context/{eventId}
2021-02-08 15:53:44 +01:00
Jonathan de Jong
d882fbca38 Update type hints for Cursor to match PEP 249. (#9299) 2021-02-05 15:39:19 -05:00
Dan Callahan
5a9cdaa6e9 Update installation instructions on Fedora (#9322)
Signed-off-by: Joseph Arnault <computerdude90042@outlook.com>

Signed-off-by: Dan Callahan <danc@element.io>

Co-authored-by: compu42 <56663749+compu42@users.noreply.github.com>
2021-02-05 14:20:38 +00:00
Erik Johnston
adc96d4236 Merge branch 'erikj/media_spam_checker' into develop 2021-02-04 17:01:59 +00:00
Erik Johnston
7e8083eb48 Add check_media_file_for_spam spam checker hook 2021-02-04 17:01:30 +00:00
dykstranet
982d9eb211 Correct matrix-synapse.service reference in TURN howto docs. (#9308) 2021-02-04 11:22:44 -05:00
Patrick Cloke
792263c97c Handle empty rooms when generating email notifications. (#9257)
Fixes some exceptions if the room state isn't quite as expected.
If the expected state events aren't found, try to find them in the
historical room state. If they still aren't found, fallback to a reasonable,
although ugly, value.
2021-02-04 10:18:25 -05:00
Patrick Cloke
2ab6e67ab7 Fix escaping of braces in OIDC sample config. (#9317)
This fixes the Jinja2 templates for the mapping provider.
2021-02-04 09:06:20 -05:00
Jonathan de Jong
2814028ce5 Add experimental support for PyPy. (#9123)
* Adds proper dependencies.
* Minor fixes in database layer.
2021-02-04 08:29:47 -05:00
Marcus
b0f4119b8b Add debug logging to DNS SRV requests. (#9305) 2021-02-03 16:47:30 -05:00
Richard van der Hoff
3f534d3fdf Merge branch 'social_login_hotfixes' into develop 2021-02-03 20:34:27 +00:00
Richard van der Hoff
17f2a512f3 Merge remote-tracking branch 'origin/release-v1.27.0' into social_login_hotfixes 2021-02-03 20:33:32 +00:00
Richard van der Hoff
e288499c60 Social login UI polish (#9301) 2021-02-03 20:31:23 +00:00
Patrick Cloke
afa18f1baa Clarify documentation about escaping URLs in templates. (#9310) 2021-02-03 14:51:38 -05:00
Richard van der Hoff
ce669863b9 Add debug for OIDC flow (#9307) 2021-02-03 19:45:34 +00:00
Richard van der Hoff
7a0dcea3e5 social login Fix username validation javascript (#9297)
* fix validation and don't use built-in validation UI

Co-authored-by: Bruno Windels <brunow@element.io>
2021-02-03 17:52:55 +00:00
Richard van der Hoff
f20dadb649 Fix formatting for "bad session" error during sso registration flow (#9296) 2021-02-03 16:13:09 +00:00
dykstranet
e4cdecb310 config: Add detail to auto_join_rooms comment (#9291)
config: Add detail to auto_join_rooms comment

Signed-off-by: Gary Dykstra <gary@dykstranet.com>
2021-02-03 15:21:30 +00:00
Tim Gates
e1943d1353 Typo fix in a comment: subequently -> subsequently. (#8988) 2021-02-03 07:24:53 -05:00
Patrick Cloke
4ca054a4ea Convert blacklisted IPv4 addresses to compatible IPv6 addresses. (#9240)
Also add a few more IP ranges to the default blacklist.
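The conversion itself can be sketched with the standard `ipaddress` module; this is a hedged illustration of the idea rather than the Synapse implementation:

```python
from ipaddress import IPv4Network, IPv6Network

def to_mapped_ipv6(net: IPv4Network) -> IPv6Network:
    # An IPv4 network a.b.c.d/n corresponds to the IPv4-mapped IPv6 network
    # ::ffff:a.b.c.d/(96 + n), so a single IPv6 blacklist can cover both families.
    return IPv6Network("::ffff:%s/%d" % (net.network_address, 96 + net.prefixlen))

blacklist = [to_mapped_ipv6(IPv4Network("10.0.0.0/8"))]
```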
2021-02-03 07:13:46 -05:00
Erik Johnston
ff55300b91 Honour ratelimit flag for application services for invite ratelimiting (#9302) 2021-02-03 10:17:37 +00:00
Richard van der Hoff
96e460df2e social login: add noopener to terms link (#9300) 2021-02-02 18:35:28 +00:00
Erik Johnston
eec9ab3225 Update changelog 2021-02-02 13:51:20 +00:00
Erik Johnston
2610930721 1.27.0rc1 2021-02-02 13:32:05 +00:00
Travis Ralston
b60bb28bbc Add an admin API to get the current room state (#9168)
This could arguably replace the existing admin API for `/members`, however that is out of scope of this change.

This sort of endpoint is ideal for moderation use cases as well as other applications, such as needing to retrieve various bits of information about a room to perform a task (like syncing power levels between two places). This endpoint exposes nothing more than an admin would be able to access with a `select *` query on their database.
2021-02-02 11:16:29 +00:00
Richard van der Hoff
8f75bf1df7 Put SAML callback URI under /_synapse/client. (#9289) 2021-02-02 09:43:50 +00:00
Richard van der Hoff
846b9d3df0 Put OIDC callback URI under /_synapse/client. (#9288) 2021-02-01 22:56:01 +00:00
Oliver Hanikel
d1f13c7485 Add an OpenID example config for Gitea. (#9134) 2021-02-01 16:21:09 -05:00
Richard van der Hoff
8fee6a3ab2 Merge branch 'social_login' into develop 2021-02-01 18:48:11 +00:00
Richard van der Hoff
351845452c fix broken HTML tag 2021-02-01 18:47:01 +00:00
Richard van der Hoff
5963426b95 Merge branch 'social_login' into develop 2021-02-01 18:46:12 +00:00
Bruno Windels
f30c3a99be make primary button not wider than viewport 2021-02-01 18:39:17 +00:00
Richard van der Hoff
c543bf87ec Collect terms consent from the user during SSO registration (#9276) 2021-02-01 18:37:41 +00:00
Richard van der Hoff
e5d70c8a82 Improve styling and wording of SSO UIA templates (#9286)
fixes #9171
2021-02-01 18:36:04 +00:00
Patrick Cloke
5d38a3c97f Refactor email summary generation. (#9260)
* Fixes a case where no summary text was returned.
* The use of messages_from_person vs. messages_from_person_and_others
  was tweaked to depend on whether there was one sender or multiple senders,
  rather than on whether there was one room or multiple rooms.
2021-02-01 13:09:39 -05:00
Richard van der Hoff
419313b06a Improve styling and wording of SSO error templates (#9287) 2021-02-01 18:01:15 +00:00
Richard van der Hoff
85c56b5a67 Make importing display name and email optional (#9277) 2021-02-01 17:30:42 +00:00
Richard van der Hoff
18ab35284a Merge branch 'social_login' into develop 2021-02-01 17:28:37 +00:00
Jan Christian Grünhage
43dd93bb26 Add phone home stats for encrypted messages. (#9283)
Signed-off-by: Jan Christian Grünhage <jan.christian@gruenhage.xyz>
2021-02-01 17:06:22 +00:00
Andrew Morgan
a800603561 Prevent email UIA failures from raising a LoginError (#9265)
Context, Fixes: https://github.com/matrix-org/synapse/issues/9263

In the past to fix an issue with old Riots re-requesting threepid validation tokens, we raised a `LoginError` during UIA instead of `InteractiveAuthIncompleteError`. This is now breaking the way Tchap logs in - which isn't standard, but also isn't disallowed by the spec.

An easy fix is just to remove the 4 year old workaround.
2021-02-01 15:54:39 +00:00
Richard van der Hoff
4167494c90 Replace username picker with a template (#9275)
There's some preliminary work here to pull out the construction of a Jinja environment into a separate function.

I wanted to load the template at display time rather than load time, so that it's easy to update on the fly. Honestly, I think we should do this with all our templates: the risk of ending up with malformed templates is far outweighed by the improved turnaround time for an admin trying to update them.
2021-02-01 15:52:50 +00:00
Richard van der Hoff
8aed29dc61 Improve styling and wording of SSO redirect confirm template (#9272) 2021-02-01 15:50:56 +00:00
Richard van der Hoff
9c715a5f19 Fix SSO on workers (#9271)
Fixes #8966.

* Factor out build_synapse_client_resource_tree

Start a function which will mount resources common to all workers.

* Move sso init into build_synapse_client_resource_tree

... so that we don't have to do it for each worker

* Fix SSO-login-via-a-worker

Expose the SSO login endpoints on workers, like the documentation says.

* Update workers config for new endpoints

Add documentation for endpoints recently added (#8942, #9017, #9262)

* remove submit_token from workers endpoints list

this *doesn't* work on workers (yet).

* changelog

* Add a comment about the odd path for SAML2Resource
2021-02-01 15:47:59 +00:00
Richard van der Hoff
f78d07bf00 Split out a separate endpoint to complete SSO registration (#9262)
There are going to be a couple of paths to get to the final step of SSO registration, and I want the URL in the browser to be consistent. So, let's move the final step onto a separate path, which we redirect to.
2021-02-01 13:15:51 +00:00
Ivan Shapovalov
13c7ab8181 Fixes for PyPy compatibility (#9270)
* synapse.app.base: only call gc.freeze() on CPython

gc.freeze() is an implementation detail of CPython garbage collector,
and notably does not exist on PyPy.

Rather than playing whack-a-mole and skipping the call when under PyPy,
simply restrict it to CPython because the whole gc module is
implementation-defined.
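A hedged sketch of the kind of guard described (not necessarily the exact Synapse code):

```python
import gc
import platform

# gc.freeze() is CPython-specific (added in CPython 3.7) and does not exist on PyPy,
# so only call it when we know we are on CPython and the function is present.
if platform.python_implementation() == "CPython" and hasattr(gc, "freeze"):
    gc.freeze()
```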

Signed-off-by: Ivan Shapovalov <intelfx@intelfx.name>
2021-01-30 17:22:05 +00:00
Erik Johnston
f2c1560eca Ratelimit invites by room and target user (#9258) 2021-01-29 16:38:29 +00:00
Dan Callahan
e19396d622 Fix Debian builds on Xenial (#9254)
Adds note about updating dh-virtualenv once we drop support for Xenial.

We can't update now, because it needs debhelper 12, while Xenial only
backports 10.

Signed-off-by: Dan Callahan <danc@element.io>
2021-01-29 14:56:04 +00:00
Denis Kasak
c14688d44a Fix typo in UPGRADE.rst 2021-01-29 11:27:43 +01:00
Richard van der Hoff
0d81a6fa3e Merge branch 'social_login' into develop 2021-01-28 22:08:11 +00:00
Erik Johnston
4b73488e81 Ratelimit 3PID /requestToken API (#9238) 2021-01-28 17:39:21 +00:00
Erik Johnston
54a6afeee3 Cache config options in SSL verification (#9255)
Reading from the config object is *slow*.
2021-01-28 17:38:59 +00:00
David Teller
31d072aea0 FIXUP: linter 2021-01-28 16:53:40 +01:00
Patrick Cloke
a78016dadf Add type hints to E2E handler. (#9232)
This finishes adding type hints to the `synapse.handlers` module.
2021-01-28 08:34:19 -05:00
David Teller
93f84e0373 FIXUP: Making get_event_context a bit more paranoid 2021-01-28 12:31:07 +01:00
David Teller
b755f60ce2 FIXUP: Removing awaitable 2021-01-28 12:31:07 +01:00
David Teller
a764869623 FIXUP: Doc 2021-01-28 12:31:07 +01:00
David Teller
b859919acc FIXUP: Now testing that the user is admin! 2021-01-28 12:31:07 +01:00
David Teller
de7f049527 FIXUP: Don't filter events at all for admin/v1/rooms/.../context/... 2021-01-28 12:31:07 +01:00
David Teller
fe52dae6bd FIXUP: Documenting /_synapse/admin/v1/rooms/<room_id>/context/<event_id> 2021-01-28 12:30:21 +01:00
David Teller
10332c175c New API /_synapse/admin/rooms/{roomId}/context/{eventId}
Signed-off-by: David Teller <davidt@element.io>
2021-01-28 12:29:49 +01:00
Richard van der Hoff
34efb4c604 Add notes on integrating with Facebook for SSO login. (#9244) 2021-01-27 22:57:16 +00:00
Richard van der Hoff
a083aea396 Add 'brand' field to MSC2858 response (#9242)
We've decided to add a 'brand' field to help clients decide how to style the
buttons.

Also, fix up the allowed characters for idp_id, while I'm in the area.
2021-01-27 21:31:45 +00:00
Richard van der Hoff
869667760f Support for scraping email addresses from OIDC providers (#9245) 2021-01-27 21:28:59 +00:00
Patrick Cloke
00e97a7774 Merge branch 'master' into develop 2021-01-27 12:51:49 -05:00
Patrick Cloke
ccb9616f26 Update debian changelog. 2021-01-27 12:45:02 -05:00
Pankaj Yadav
2e537a0280 Check if a user is in the room before sending a PowerLevel event on their behalf (#9235) 2021-01-27 17:38:08 +00:00
Richard van der Hoff
300d0d756a Merge branch 'social_login' into develop 2021-01-27 17:28:39 +00:00
Richard van der Hoff
fbd9de6d1f Merge tag 'v1.26.0' into social_login
Synapse 1.26.0 (2021-01-27)
===========================

This release brings a new schema version for Synapse and rolling back to a previous
version is not trivial. Please review [UPGRADE.rst](UPGRADE.rst) for more details
on these changes and for general upgrade guidance.

No significant changes since 1.26.0rc2.

Synapse 1.26.0rc2 (2021-01-25)
==============================

Bugfixes
--------

- Fix receipts and account data not being sent down sync. Introduced in v1.26.0rc1. ([\#9193](https://github.com/matrix-org/synapse/issues/9193), [\#9195](https://github.com/matrix-org/synapse/issues/9195))
- Fix chain cover update to handle events with duplicate auth events. Introduced in v1.26.0rc1. ([\#9210](https://github.com/matrix-org/synapse/issues/9210))

Internal Changes
----------------

- Add an `oidc-` prefix to any `idp_id`s which are given in the `oidc_providers` configuration. ([\#9189](https://github.com/matrix-org/synapse/issues/9189))
- Bump minimum `psycopg2` version to v2.8. ([\#9204](https://github.com/matrix-org/synapse/issues/9204))

Synapse 1.26.0rc1 (2021-01-20)
==============================

This release brings a new schema version for Synapse and rolling back to a previous
version is not trivial. Please review [UPGRADE.rst](UPGRADE.rst) for more details
on these changes and for general upgrade guidance.

Features
--------

- Add support for multiple SSO Identity Providers. ([\#9015](https://github.com/matrix-org/synapse/issues/9015), [\#9017](https://github.com/matrix-org/synapse/issues/9017), [\#9036](https://github.com/matrix-org/synapse/issues/9036), [\#9067](https://github.com/matrix-org/synapse/issues/9067), [\#9081](https://github.com/matrix-org/synapse/issues/9081), [\#9082](https://github.com/matrix-org/synapse/issues/9082), [\#9105](https://github.com/matrix-org/synapse/issues/9105), [\#9107](https://github.com/matrix-org/synapse/issues/9107), [\#9109](https://github.com/matrix-org/synapse/issues/9109), [\#9110](https://github.com/matrix-org/synapse/issues/9110), [\#9127](https://github.com/matrix-org/synapse/issues/9127), [\#9153](https://github.com/matrix-org/synapse/issues/9153), [\#9154](https://github.com/matrix-org/synapse/issues/9154), [\#9177](https://github.com/matrix-org/synapse/issues/9177))
- During user-interactive authentication via single-sign-on, give a better error if the user uses the wrong account on the SSO IdP. ([\#9091](https://github.com/matrix-org/synapse/issues/9091))
- Give the `public_baseurl` a default value, if it is not explicitly set in the configuration file. ([\#9159](https://github.com/matrix-org/synapse/issues/9159))
- Improve performance when calculating ignored users in large rooms. ([\#9024](https://github.com/matrix-org/synapse/issues/9024))
- Implement [MSC2176](https://github.com/matrix-org/matrix-doc/pull/2176) in an experimental room version. ([\#8984](https://github.com/matrix-org/synapse/issues/8984))
- Add an admin API for protecting local media from quarantine. ([\#9086](https://github.com/matrix-org/synapse/issues/9086))
- Remove a user's avatar URL and display name when deactivated with the Admin API. ([\#8932](https://github.com/matrix-org/synapse/issues/8932))
- Update `/_synapse/admin/v1/users/<user_id>/joined_rooms` to work for both local and remote users. ([\#8948](https://github.com/matrix-org/synapse/issues/8948))
- Add experimental support for handling to-device messages on worker processes. ([\#9042](https://github.com/matrix-org/synapse/issues/9042), [\#9043](https://github.com/matrix-org/synapse/issues/9043), [\#9044](https://github.com/matrix-org/synapse/issues/9044), [\#9130](https://github.com/matrix-org/synapse/issues/9130))
- Add experimental support for handling `/keys/claim` and `/room_keys` APIs on worker processes. ([\#9068](https://github.com/matrix-org/synapse/issues/9068))
- Add experimental support for handling `/devices` API on worker processes. ([\#9092](https://github.com/matrix-org/synapse/issues/9092))
- Add experimental support for moving receipts and account data persistence off master. ([\#9104](https://github.com/matrix-org/synapse/issues/9104), [\#9166](https://github.com/matrix-org/synapse/issues/9166))

Bugfixes
--------

- Fix a long-standing issue where an internal server error would occur when requesting a profile over federation that did not include a display name / avatar URL. ([\#9023](https://github.com/matrix-org/synapse/issues/9023))
- Fix a long-standing bug where some caches could grow larger than configured. ([\#9028](https://github.com/matrix-org/synapse/issues/9028))
- Fix error handling during insertion of client IPs into the database. ([\#9051](https://github.com/matrix-org/synapse/issues/9051))
- Fix bug where we didn't correctly record CPU time spent in `on_new_event` block. ([\#9053](https://github.com/matrix-org/synapse/issues/9053))
- Fix a minor bug which could cause confusing error messages from invalid configurations. ([\#9054](https://github.com/matrix-org/synapse/issues/9054))
- Fix incorrect exit code when there is an error at startup. ([\#9059](https://github.com/matrix-org/synapse/issues/9059))
- Fix `JSONDecodeError` spamming the logs when sending transactions to remote servers. ([\#9070](https://github.com/matrix-org/synapse/issues/9070))
- Fix "Failed to send request" errors when a client provides an invalid room alias. ([\#9071](https://github.com/matrix-org/synapse/issues/9071))
- Fix bugs in federation catchup logic that caused outbound federation to be delayed for large servers after start up. Introduced in v1.8.0 and v1.21.0. ([\#9114](https://github.com/matrix-org/synapse/issues/9114), [\#9116](https://github.com/matrix-org/synapse/issues/9116))
- Fix corruption of `pushers` data when a postgres bouncer is used. ([\#9117](https://github.com/matrix-org/synapse/issues/9117))
- Fix minor bugs in handling the `clientRedirectUrl` parameter for SSO login. ([\#9128](https://github.com/matrix-org/synapse/issues/9128))
- Fix "Unhandled error in Deferred: BodyExceededMaxSize" errors when .well-known files are too large. ([\#9108](https://github.com/matrix-org/synapse/issues/9108))
- Fix "UnboundLocalError: local variable 'length' referenced before assignment" errors when the response body exceeds the expected size. This bug was introduced in v1.25.0. ([\#9145](https://github.com/matrix-org/synapse/issues/9145))
- Fix a long-standing bug "ValueError: invalid literal for int() with base 10" when `/publicRooms` is requested with an invalid `server` parameter. ([\#9161](https://github.com/matrix-org/synapse/issues/9161))

Improved Documentation
----------------------

- Add some extra docs for getting Synapse running on macOS. ([\#8997](https://github.com/matrix-org/synapse/issues/8997))
- Correct a typo in the `systemd-with-workers` documentation. ([\#9035](https://github.com/matrix-org/synapse/issues/9035))
- Correct a typo in `INSTALL.md`. ([\#9040](https://github.com/matrix-org/synapse/issues/9040))
- Add missing `user_mapping_provider` configuration to the Keycloak OIDC example. Contributed by @chris-ruecker. ([\#9057](https://github.com/matrix-org/synapse/issues/9057))
- Quote `pip install` packages when extras are used to avoid shells interpreting bracket characters. ([\#9151](https://github.com/matrix-org/synapse/issues/9151))

Deprecations and Removals
-------------------------

- Remove broken and unmaintained `demo/webserver.py` script. ([\#9039](https://github.com/matrix-org/synapse/issues/9039))

Internal Changes
----------------

- Improve efficiency of large state resolutions. ([\#8868](https://github.com/matrix-org/synapse/issues/8868), [\#9029](https://github.com/matrix-org/synapse/issues/9029), [\#9115](https://github.com/matrix-org/synapse/issues/9115), [\#9118](https://github.com/matrix-org/synapse/issues/9118), [\#9124](https://github.com/matrix-org/synapse/issues/9124))
- Various clean-ups to the structured logging and logging context code. ([\#8939](https://github.com/matrix-org/synapse/issues/8939))
- Ensure rejected events get added to some metadata tables. ([\#9016](https://github.com/matrix-org/synapse/issues/9016))
- Ignore date-rotated homeserver logs saved to disk. ([\#9018](https://github.com/matrix-org/synapse/issues/9018))
- Remove an unused column from `access_tokens` table. ([\#9025](https://github.com/matrix-org/synapse/issues/9025))
- Add a `-noextras` factor to `tox.ini`, to support running the tests with no optional dependencies. ([\#9030](https://github.com/matrix-org/synapse/issues/9030))
- Fix running unit tests when optional dependencies are not installed. ([\#9031](https://github.com/matrix-org/synapse/issues/9031))
- Allow bumping schema version when using split out state database. ([\#9033](https://github.com/matrix-org/synapse/issues/9033))
- Configure the linters to run on a consistent set of files. ([\#9038](https://github.com/matrix-org/synapse/issues/9038))
- Various cleanups to device inbox store. ([\#9041](https://github.com/matrix-org/synapse/issues/9041))
- Drop unused database tables. ([\#9055](https://github.com/matrix-org/synapse/issues/9055))
- Remove unused `SynapseService` class. ([\#9058](https://github.com/matrix-org/synapse/issues/9058))
- Remove unnecessary declarations in the tests for the admin API. ([\#9063](https://github.com/matrix-org/synapse/issues/9063))
- Remove `SynapseRequest.get_user_agent`. ([\#9069](https://github.com/matrix-org/synapse/issues/9069))
- Remove redundant `Homeserver.get_ip_from_request` method. ([\#9080](https://github.com/matrix-org/synapse/issues/9080))
- Add type hints to media repository. ([\#9093](https://github.com/matrix-org/synapse/issues/9093))
- Fix the wrong arguments being passed to `BlacklistingAgentWrapper` from `MatrixFederationAgent`. Contributed by Timothy Leung. ([\#9098](https://github.com/matrix-org/synapse/issues/9098))
- Reduce the scope of caught exceptions in `BlacklistingAgentWrapper`. ([\#9106](https://github.com/matrix-org/synapse/issues/9106))
- Improve `UsernamePickerTestCase`. ([\#9112](https://github.com/matrix-org/synapse/issues/9112))
- Remove dependency on `distutils`. ([\#9125](https://github.com/matrix-org/synapse/issues/9125))
- Enforce that replication HTTP clients are called with keyword arguments only. ([\#9144](https://github.com/matrix-org/synapse/issues/9144))
- Fix the Python 3.5 / old dependencies build in CI. ([\#9146](https://github.com/matrix-org/synapse/issues/9146))
- Replace the old `perspectives` option in the Synapse docker config file template with `trusted_key_servers`. ([\#9157](https://github.com/matrix-org/synapse/issues/9157))
2021-01-27 17:27:58 +00:00
Richard van der Hoff
7fa1346f93 Merge branch 'social_login' into develop 2021-01-27 17:27:24 +00:00
Patrick Cloke
17b713850f Merge branch 'master' into develop 2021-01-27 11:13:21 -05:00
Patrick Cloke
b685c5e7f1 Move note above changes. 2021-01-27 11:02:04 -05:00
Patrick Cloke
e54746bdf7 Clean-up the template loading code. (#9200)
* Enables autoescape by default for HTML files (see the sketch below).
* Adds a new read_template method for reading a single template.
* Some logic clean-up.
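The autoescape part can be sketched roughly as follows (paths and helper names are illustrative, not Synapse's actual code):

```python
from jinja2 import Environment, FileSystemLoader, select_autoescape

env = Environment(
    loader=FileSystemLoader("/path/to/templates"),
    # Autoescape only templates whose filenames end in .html/.htm/.xml.
    autoescape=select_autoescape(enabled_extensions=("html", "htm", "xml")),
)

def read_template(name: str):
    # Illustrative stand-in for the new single-template helper mentioned above.
    return env.get_template(name)
```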
2021-01-27 10:59:50 -05:00
Patrick Cloke
71c46652a2 Copy the upgrade note to 1.26.0. 2021-01-27 10:52:45 -05:00
Patrick Cloke
73ed289bd2 1.26.0 2021-01-27 10:50:37 -05:00
Erik Johnston
93b61589b0 Add a note to changelog about redis usage (#9227) 2021-01-27 14:06:27 +00:00
Richard van der Hoff
cfcc4bfcaf Merge branch 'social_login' into develop 2021-01-27 12:41:51 +00:00
Richard van der Hoff
a737cc2713 Implement MSC2858 support (#9183)
Fixes #8928.
2021-01-27 12:41:24 +00:00
Andrew Morgan
a64c29926e Pass a dict, instead of None, to modules if a None config is specified in the homeserver config (#9229)
If a Synapse module's config block were empty in YAML, and thus translated to `None` in Python, then some modules could fail as that `None` ends up getting passed to their `parse_config` method. Modules are expected to accept a `dict` instead.

This PR ensures that if the user does end up specifying an empty config block (such as what [the default oidc config in the sample config](5310808d3b/docs/sample_config.yaml (L1816-L1845)) states) then `None` is not passed to the module. An empty dict is passed instead.

This code assumes that no existing modules are relying on receiving a `None` config block, but I'd really hope that they aren't.
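A hedged sketch of the fix (names are illustrative):

```python
def load_module(module_class, config_block):
    # Never hand None to a module's parse_config; substitute an empty dict,
    # as described in the PR body above.
    config = module_class.parse_config(config_block or {})
    return module_class(config)
```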
2021-01-27 11:49:31 +00:00
Patrick Cloke
1baab20352 Add type hints to various handlers. (#9223)
With this change all handlers except the e2e_* ones have
type hints enabled.
2021-01-26 10:50:21 -05:00
Patrick Cloke
26837d5dbe Do not require the CAS service URL setting (use public_baseurl instead). (#9199)
The current configuration is handled for backwards compatibility,
but is considered deprecated.
2021-01-26 10:49:25 -05:00
Erik Johnston
dd8da8c5f6 Precompute joined hosts and store in Redis (#9198) 2021-01-26 13:57:31 +00:00
Patrick Cloke
4937fe3d6b Try to recover from unknown encodings when previewing media. (#9164)
Treat unknown encodings (according to lxml) as UTF-8
when generating a preview for HTML documents. This
isn't fully accurate, but will hopefully give a reasonable
title and summary.
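A hedged sketch of that fallback using only the standard library (not the actual preview code):

```python
import codecs
from typing import Optional

def safe_decode(body: bytes, declared_encoding: Optional[str]) -> str:
    # If the declared encoding is unknown to Python, fall back to UTF-8 so that
    # preview generation can still produce a (possibly imperfect) title/summary.
    if declared_encoding:
        try:
            codecs.lookup(declared_encoding)
        except LookupError:
            declared_encoding = None
    return body.decode(declared_encoding or "utf-8", errors="replace")
```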
2021-01-26 07:32:17 -05:00
Andrew Morgan
e74bb96733 Update isort to v5.7.0 (#9222)
This new version no longer has the problem of adding/removing a blank line in `.pyi` files, which `black` disagrees with. Previously this would cause `isort` to slightly modify `.pyi` files, and `black` would then modify them straight back afterwards.
Relevant `isort` issue: https://github.com/pycqa/isort/issues/1284
2021-01-26 11:36:12 +00:00
Jason Robinson
e5b659e9e1 Merge pull request #9062 from matrix-org/jaywink/admin-forward-extremities
Add forward extremities endpoint to rooms admin API
2021-01-26 12:57:38 +02:00
Erik Johnston
a1ff1e967f Periodically send pings to detect dead Redis connections (#9218)
This is done by creating a custom `RedisFactory` subclass that
periodically pings all connections in its pool.

We also ensure that the `replyTimeout` param is non-null, so that we
timeout waiting for the reply to those pings (and thus triggering a
reconnect).
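A hedged sketch of periodic pinging with Twisted's `LoopingCall` (the `conn.ping()` call stands in for whatever the Redis client actually exposes; this is not the Synapse `RedisFactory` subclass itself):

```python
from twisted.internet import task

def start_redis_pinger(reactor, connections, interval=30.0):
    # Periodically PING every pooled connection so that a dead connection
    # fails to reply within the timeout and triggers a reconnect.
    def ping_all():
        for conn in connections:
            conn.ping()  # assumed client method returning a Deferred
    looper = task.LoopingCall(ping_all)
    looper.clock = reactor
    looper.start(interval, now=False)
    return looper
```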
2021-01-26 10:54:54 +00:00
Jason Robinson
4936fc59fc Fix get forward extremities query
Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-26 10:21:02 +02:00
Jason Robinson
cee4010f94 Merge branch 'develop' into jaywink/admin-forward-extremities
# Conflicts:
#	synapse/rest/admin/__init__.py
2021-01-26 10:15:32 +02:00
Jason Robinson
e20f18a766 Make natural join inner join
Co-authored-by: Erik Johnston <erik@matrix.org>
2021-01-26 10:13:35 +02:00
Patrick Cloke
fdf8346944 Merge remote-tracking branch 'origin/develop' into jaywink/admin-forward-extremities 2021-01-25 14:59:48 -05:00
Patrick Cloke
5b857b77f7 Don't error if deleting a non-existent pusher. (#9121) 2021-01-25 14:52:30 -05:00
Patrick Cloke
4a55d267ee Add an admin API for shadow-banning users. (#9209)
This expands the current shadow-banning feature to be usable via
the admin API and adds documentation for it.

A shadow-banned user receives successful responses to their
client-server API requests, but the events are not propagated into rooms.

Shadow-banning a user should be used as a tool of last resort and may lead
to confusing or broken behaviour for the client.
2021-01-25 14:49:39 -05:00
Patrick Cloke
2547d9d4d7 Fix Python 3.5 old deps build by using a compatible pip version. (#9217)
Co-authored-by: Dan Callahan <danc@element.io>

pip 21.0 stopped supporting Python 3.5.
2021-01-25 19:38:31 +00:00
Richard van der Hoff
65fb3b2e25 Merge tag 'v1.26.0rc2' into social_login
Synapse 1.26.0rc2 (2021-01-25)
==============================

Bugfixes
--------

- Fix receipts and account data not being sent down sync. Introduced in v1.26.0rc1. ([\#9193](https://github.com/matrix-org/synapse/issues/9193), [\#9195](https://github.com/matrix-org/synapse/issues/9195))
- Fix chain cover update to handle events with duplicate auth events. Introduced in v1.26.0rc1. ([\#9210](https://github.com/matrix-org/synapse/issues/9210))

Internal Changes
----------------

- Add an `oidc-` prefix to any `idp_id`s which are given in the `oidc_providers` configuration. ([\#9189](https://github.com/matrix-org/synapse/issues/9189))
- Bump minimum `psycopg2` version to v2.8. ([\#9204](https://github.com/matrix-org/synapse/issues/9204))
2021-01-25 19:37:58 +00:00
Patrick Cloke
a71be9d62d Fix Python 3.5 old deps build by using a compatible pip version. (#9217)
Co-authored-by: Dan Callahan <danc@element.io>

pip 21.0 stopped supporting Python 3.5.
2021-01-25 14:22:35 -05:00
Jason Robinson
fe18882bb5 Merge remote-tracking branch 'origin/develop' into jaywink/admin-forward-extremities 2021-01-25 15:55:54 +02:00
Patrick Cloke
e448dbbf5b Merge tag 'v1.26.0rc2' into develop
Synapse 1.26.0rc2 (2021-01-25)
==============================

Bugfixes
--------

- Fix receipts and account data not being sent down sync. Introduced in v1.26.0rc1. ([\#9193](https://github.com/matrix-org/synapse/issues/9193), [\#9195](https://github.com/matrix-org/synapse/issues/9195))
- Fix chain cover update to handle events with duplicate auth events. Introduced in v1.26.0rc1. ([\#9210](https://github.com/matrix-org/synapse/issues/9210))

Internal Changes
----------------

- Add an `oidc-` prefix to any `idp_id`s which are given in the `oidc_providers` configuration. ([\#9189](https://github.com/matrix-org/synapse/issues/9189))
- Bump minimum `psycopg2` version to v2.8. ([\#9204](https://github.com/matrix-org/synapse/issues/9204))
2021-01-25 08:51:45 -05:00
Patrick Cloke
69961c7e9f Tweak changes. 2021-01-25 08:26:42 -05:00
Patrick Cloke
a01605c136 1.26.0rc2 2021-01-25 08:25:40 -05:00
Patrick Cloke
6f7417c3db Handle missing content keys when calculating presentable names. (#9165)
Treat the content as untrusted and do not assume it is of
the proper form.
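A hedged illustration of treating event content as untrusted when building a presentable name (field names are illustrative, not the exact Synapse logic):

```python
def presentable_name(event_content: dict) -> str:
    # Content may be missing keys or carry values of the wrong type, so check both.
    name = event_content.get("name")
    if isinstance(name, str) and name:
        return name
    alias = event_content.get("canonical_alias")
    if isinstance(alias, str) and alias:
        return alias
    return "Unnamed room"
```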
2021-01-25 07:27:16 -05:00
Jason Robinson
8965b6cfec Merge branch 'develop' into jaywink/admin-forward-extremities 2021-01-23 21:41:35 +02:00
Jason Robinson
930ba00971 Add depth and received_ts to forward_extremities admin API response
Also add a warning on the admin API documentation.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-23 21:34:32 +02:00
Erik Johnston
056327457f Fix chain cover update to handle events with duplicate auth events (#9210) 2021-01-22 19:44:08 +00:00
Erik Johnston
28f255d5f3 Bump psycopg2 version (#9204)
As we use `execute_values` with the `fetch` parameter.
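For context, `fetch=True` (returning the rows produced by a `RETURNING` clause) was added to `execute_values` in psycopg2 2.8. A hedged usage sketch with placeholder table and column names:

```python
from psycopg2.extras import execute_values

def insert_rows(cur, rows):
    # fetch=True returns the rows produced by the RETURNING clause (psycopg2 >= 2.8).
    return execute_values(
        cur,
        "INSERT INTO example_table (id, depth) VALUES %s RETURNING id",
        rows,
        fetch=True,
    )
```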
2021-01-22 11:14:49 +00:00
Jason Robinson
c177faf5a9 Remove trailing whitespace to appease the linter
Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-11 23:55:44 +02:00
Jason Robinson
49c619a9a2 Simplify delete_forward_extremities_for_room_txn SQL
As per feedback.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-11 23:49:58 +02:00
Jason Robinson
da16d06301 Address pr feedback
* docs updates
* prettify SQL
* add missing copyright
* cursor_to_dict
* update touched files copyright years

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-11 23:43:58 +02:00
Jason Robinson
0b77329fe2 Clarify rooms.md
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2021-01-11 23:05:36 +02:00
Jason Robinson
b52fb703f7 Don't try to use f-strings
Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-11 09:47:03 +02:00
Jason Robinson
e2c16edc78 Add changelog and admin API docs
Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-09 22:58:29 +02:00
Jason Robinson
2eb421b606 Merge branch 'develop' into jaywink/admin-forward-extremities 2021-01-09 22:00:04 +02:00
Jason Robinson
90ad4d443a Implement clearing cache after deleting forward extremities
Also run linter.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-09 21:57:41 +02:00
Jason Robinson
85c0999bfb Add Rooms admin forward extremities DELETE endpoint
Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-08 00:12:23 +02:00
Jason Robinson
c91045f56c Move unknown room ID error into resolve_room_id
Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-07 23:03:54 +02:00
Jason Robinson
b849e46139 Add forward extremities endpoint to rooms admin API
GET /_synapse/admin/v1/rooms/<identifier>/forward_extremities now gets forward extremities for a room, returning count and the list of extremities.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-01-07 23:01:59 +02:00
206 changed files with 6395 additions and 2131 deletions

View File

@@ -9,3 +9,8 @@ apt-get update
apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev xmlsec1 zlib1g-dev tox
export LANG="C.UTF-8"
# Prevent virtualenv from auto-updating pip to an incompatible version
export VIRTUALENV_NO_DOWNLOAD=1
exec tox -e py35-old,combine

View File

@@ -1,9 +1,112 @@
Synapse 1.27.0rc1 (2021-02-02)
==============================
Note that this release includes a change in Synapse to use Redis as a cache ─ as well as a pub/sub mechanism ─ if Redis support is enabled. No action is needed by server administrators, and we do not expect resource usage of the Redis instance to change dramatically.
This release also changes the callback URI for OpenID Connect (OIDC) identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
This release also changes escaping of variables in the HTML templates for SSO or email notifications. If you have customised these templates, please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
Features
--------
- Add an admin API for getting and deleting forward extremities for a room. ([\#9062](https://github.com/matrix-org/synapse/issues/9062))
- Add an admin API for retrieving the current room state of a room. ([\#9168](https://github.com/matrix-org/synapse/issues/9168))
- Add experimental support for allowing clients to pick an SSO Identity Provider ([MSC2858](https://github.com/matrix-org/matrix-doc/pull/2858)). ([\#9183](https://github.com/matrix-org/synapse/issues/9183), [\#9242](https://github.com/matrix-org/synapse/issues/9242))
- Add an admin API endpoint for shadow-banning users. ([\#9209](https://github.com/matrix-org/synapse/issues/9209))
- Add ratelimits to the 3PID `/requestToken` APIs. ([\#9238](https://github.com/matrix-org/synapse/issues/9238))
- Add support to the OpenID Connect integration for adding the user's email address. ([\#9245](https://github.com/matrix-org/synapse/issues/9245))
- Add ratelimits to invites in rooms and to specific users. ([\#9258](https://github.com/matrix-org/synapse/issues/9258))
- Improve the user experience of setting up an account via single-sign on. ([\#9262](https://github.com/matrix-org/synapse/issues/9262), [\#9272](https://github.com/matrix-org/synapse/issues/9272), [\#9275](https://github.com/matrix-org/synapse/issues/9275), [\#9276](https://github.com/matrix-org/synapse/issues/9276), [\#9277](https://github.com/matrix-org/synapse/issues/9277), [\#9286](https://github.com/matrix-org/synapse/issues/9286), [\#9287](https://github.com/matrix-org/synapse/issues/9287))
- Add phone home stats for encrypted messages. ([\#9283](https://github.com/matrix-org/synapse/issues/9283))
- Update the redirect URI for OIDC authentication. ([\#9288](https://github.com/matrix-org/synapse/issues/9288))
Bugfixes
--------
- Fix spurious errors in logs when deleting a non-existent pusher. ([\#9121](https://github.com/matrix-org/synapse/issues/9121))
- Fix a long-standing bug where Synapse would return a 500 error when a thumbnail did not exist (and auto-generation of thumbnails was not enabled). ([\#9163](https://github.com/matrix-org/synapse/issues/9163))
- Fix a long-standing bug where an internal server error was raised when attempting to preview an HTML document in an unknown character encoding. ([\#9164](https://github.com/matrix-org/synapse/issues/9164))
- Fix a long-standing bug where invalid data could cause errors when calculating the presentable room name for push. ([\#9165](https://github.com/matrix-org/synapse/issues/9165))
- Fix bug where we sometimes didn't detect that Redis connections had died, causing workers to not see new data. ([\#9218](https://github.com/matrix-org/synapse/issues/9218))
- Fix a bug where `None` was passed to Synapse modules instead of an empty dictionary if an empty module `config` block was provided in the homeserver config. ([\#9229](https://github.com/matrix-org/synapse/issues/9229))
- Fix a bug in the `make_room_admin` admin API where it failed if the admin with the greatest power level was not in the room. Contributed by Pankaj Yadav. ([\#9235](https://github.com/matrix-org/synapse/issues/9235))
- Prevent password hashes from getting dropped if a client failed threepid validation during a User Interactive Auth stage. Removes a workaround for an ancient bug in Riot Web <v0.7.4. ([\#9265](https://github.com/matrix-org/synapse/issues/9265))
- Fix single-sign-on when the endpoints are routed to synapse workers. ([\#9271](https://github.com/matrix-org/synapse/issues/9271))
Improved Documentation
----------------------
- Add docs for using Gitea as OpenID provider. ([\#9134](https://github.com/matrix-org/synapse/issues/9134))
- Add link to Matrix VoIP tester for turn-howto. ([\#9135](https://github.com/matrix-org/synapse/issues/9135))
- Add notes on integrating with Facebook for SSO login. ([\#9244](https://github.com/matrix-org/synapse/issues/9244))
Deprecations and Removals
-------------------------
- The `service_url` parameter in `cas_config` is deprecated in favor of `public_baseurl`. ([\#9199](https://github.com/matrix-org/synapse/issues/9199))
- Add new endpoint `/_synapse/client/saml2` for SAML2 authentication callbacks, and deprecate the old endpoint `/_matrix/saml2`. ([\#9289](https://github.com/matrix-org/synapse/issues/9289))
Internal Changes
----------------
- Add tests to `test_user.UsersListTestCase` for List Users Admin API. ([\#9045](https://github.com/matrix-org/synapse/issues/9045))
- Various improvements to the federation client. ([\#9129](https://github.com/matrix-org/synapse/issues/9129))
- Speed up chain cover calculation when persisting a batch of state events at once. ([\#9176](https://github.com/matrix-org/synapse/issues/9176))
- Add a `long_description_type` to the package metadata. ([\#9180](https://github.com/matrix-org/synapse/issues/9180))
- Speed up batch insertion when using PostgreSQL. ([\#9181](https://github.com/matrix-org/synapse/issues/9181), [\#9188](https://github.com/matrix-org/synapse/issues/9188))
- Emit an error at startup if different Identity Providers are configured with the same `idp_id`. ([\#9184](https://github.com/matrix-org/synapse/issues/9184))
- Improve performance of concurrent use of `StreamIDGenerators`. ([\#9190](https://github.com/matrix-org/synapse/issues/9190))
- Add some missing source directories to the automatic linting script. ([\#9191](https://github.com/matrix-org/synapse/issues/9191))
- Precompute joined hosts and store in Redis. ([\#9198](https://github.com/matrix-org/synapse/issues/9198), [\#9227](https://github.com/matrix-org/synapse/issues/9227))
- Clean-up template loading code. ([\#9200](https://github.com/matrix-org/synapse/issues/9200))
- Fix the Python 3.5 old dependencies build. ([\#9217](https://github.com/matrix-org/synapse/issues/9217))
- Update `isort` to v5.7.0 to bypass a bug where it would disagree with `black` about formatting. ([\#9222](https://github.com/matrix-org/synapse/issues/9222))
- Add type hints to handlers code. ([\#9223](https://github.com/matrix-org/synapse/issues/9223), [\#9232](https://github.com/matrix-org/synapse/issues/9232))
- Fix Debian package building on Ubuntu 16.04 LTS (Xenial). ([\#9254](https://github.com/matrix-org/synapse/issues/9254))
- Minor performance improvement during TLS handshake. ([\#9255](https://github.com/matrix-org/synapse/issues/9255))
- Refactor the generation of summary text for email notifications. ([\#9260](https://github.com/matrix-org/synapse/issues/9260))
- Restore PyPy compatibility by not calling CPython-specific GC methods when under PyPy. ([\#9270](https://github.com/matrix-org/synapse/issues/9270))
Synapse 1.26.0 (2021-01-27)
===========================
This release brings a new schema version for Synapse and rolling back to a previous
version is not trivial. Please review [UPGRADE.rst](UPGRADE.rst) for more details
on these changes and for general upgrade guidance.
No significant changes since 1.26.0rc2.
Synapse 1.26.0rc2 (2021-01-25)
==============================
Bugfixes
--------
- Fix receipts and account data not being sent down sync. Introduced in v1.26.0rc1. ([\#9193](https://github.com/matrix-org/synapse/issues/9193), [\#9195](https://github.com/matrix-org/synapse/issues/9195))
- Fix chain cover update to handle events with duplicate auth events. Introduced in v1.26.0rc1. ([\#9210](https://github.com/matrix-org/synapse/issues/9210))
Internal Changes
----------------
- Add an `oidc-` prefix to any `idp_id`s which are given in the `oidc_providers` configuration. ([\#9189](https://github.com/matrix-org/synapse/issues/9189))
- Bump minimum `psycopg2` version to v2.8. ([\#9204](https://github.com/matrix-org/synapse/issues/9204))
Synapse 1.26.0rc1 (2021-01-20)
==============================
This release brings a new schema version for Synapse and rolling back to a previous
version is not trivial. Please review [UPGRADE.rst](UPGRADE.rst) for more details
on these changes and for general upgrade guidance.
Features
--------

View File

@@ -151,29 +151,15 @@ sudo pacman -S base-devel python python-pip \
##### CentOS/Fedora
Installing prerequisites on CentOS 8 or Fedora>26:
Installing prerequisites on CentOS or Fedora Linux:
```sh
sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
libwebp-devel tk-devel redhat-rpm-config \
python3-virtualenv libffi-devel openssl-devel
libwebp-devel libxml2-devel libxslt-devel libpq-devel \
python3-virtualenv libffi-devel openssl-devel python3-devel
sudo dnf groupinstall "Development Tools"
```
Installing prerequisites on CentOS 7 or Fedora<=25:
```sh
sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \
python3-virtualenv libffi-devel openssl-devel
sudo yum groupinstall "Development Tools"
```
Note that Synapse does not support versions of SQLite before 3.11, and CentOS 7
uses SQLite 3.7. You may be able to work around this by installing a more
recent SQLite version, but it is recommended that you instead use a Postgres
database: see [docs/postgres.md](docs/postgres.md).
##### macOS
Installing prerequisites on macOS:

View File

@@ -85,6 +85,58 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.27.0
====================
Changes to callback URI for OAuth2 / OpenID Connect
---------------------------------------------------
This version changes the URI used for callbacks from OAuth2 identity providers. If
your server is configured for single sign-on via an OpenID Connect or OAuth2 identity
provider, you will need to add ``[synapse public baseurl]/_synapse/client/oidc/callback``
to the list of permitted "redirect URIs" at the identity provider.
See `docs/openid.md <docs/openid.md>`_ for more information on setting up OpenID
Connect.
(Note: a similar change is being made for SAML2; in this case the old URI
``[synapse public baseurl]/_matrix/saml2`` is being deprecated, but will continue to
work, so no immediate changes are required for existing installations.)
Changes to HTML templates
-------------------------
The HTML templates for SSO and email notifications now have `Jinja2's autoescape <https://jinja.palletsprojects.com/en/2.11.x/api/#autoescaping>`_
enabled for files ending in ``.html``, ``.htm``, and ``.xml``. If you have customised
these templates and see issues when viewing them you might need to update them.
It is expected that most configurations will need no changes.
If you have customised the *names* of these templates, it is recommended
to verify they end in ``.html`` to ensure autoescape is enabled.
The above applies to the following templates:
* ``add_threepid.html``
* ``add_threepid_failure.html``
* ``add_threepid_success.html``
* ``notice_expiry.html``
* ``notif_mail.html`` (which, by default, includes ``room.html`` and ``notif.html``)
* ``password_reset.html``
* ``password_reset_confirmation.html``
* ``password_reset_failure.html``
* ``password_reset_success.html``
* ``registration.html``
* ``registration_failure.html``
* ``registration_success.html``
* ``sso_account_deactivated.html``
* ``sso_auth_bad_user.html``
* ``sso_auth_confirm.html``
* ``sso_auth_success.html``
* ``sso_error.html``
* ``sso_login_idp_picker.html``
* ``sso_redirect_confirm.html``
Upgrading to v1.26.0
====================
@@ -198,7 +250,7 @@ shown below:
return {"localpart": localpart}
Removal historical Synapse Admin API
------------------------------------
Historically, the Synapse Admin API has been accessible under:

1
changelog.d/9003.misc Normal file
View File

@@ -0,0 +1 @@
Fix 'object name reserved for internal use' errors with recent versions of SQLite.

View File

@@ -1 +0,0 @@
Add tests to `test_user.UsersListTestCase` for List Users Admin API.

1
changelog.d/9123.misc Normal file
View File

@@ -0,0 +1 @@
Add experimental support for running Synapse with PyPy.

View File

@@ -1 +0,0 @@
Various improvements to the federation client.

View File

@@ -1 +0,0 @@
Add link to Matrix VoIP tester for turn-howto.

1
changelog.d/9150.feature Normal file
View File

@@ -0,0 +1 @@
New API /_synapse/admin/rooms/{roomId}/context/{eventId}.

View File

@@ -1 +0,0 @@
Fix a long-standing bug where Synapse would return a 500 error when a thumbnail did not exist (and auto-generation of thumbnails was not enabled).

View File

@@ -1 +0,0 @@
Speed up chain cover calculation when persisting a batch of state events at once.

View File

@@ -1 +0,0 @@
Add a `long_description_type` to the package metadata.

View File

@@ -1 +0,0 @@
Speed up batch insertion when using PostgreSQL.

View File

@@ -1 +0,0 @@
Emit an error at startup if different Identity Providers are configured with the same `idp_id`.

View File

@@ -1 +0,0 @@
Speed up batch insertion when using PostgreSQL.

View File

@@ -1 +0,0 @@
Add an `oidc-` prefix to any `idp_id`s which are given in the `oidc_providers` configuration.

View File

@@ -1 +0,0 @@
Improve performance of concurrent use of `StreamIDGenerators`.

View File

@@ -1 +0,0 @@
Add some missing source directories to the automatic linting script.

View File

@@ -1 +0,0 @@
Fix receipts or account data not being sent down sync. Introduced in v1.26.0rc1.

View File

@@ -1 +0,0 @@
Fix receipts or account data not being sent down sync. Introduced in v1.26.0rc1.

1
changelog.d/9240.misc Normal file
View File

@@ -0,0 +1 @@
Deny access to additional IP addresses by default.

1
changelog.d/9257.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix long-standing bug where sending email push would fail for rooms that the server had since left.

1
changelog.d/9291.doc Normal file
View File

@@ -0,0 +1 @@
Add note to `auto_join_rooms` config option explaining existing rooms must be publicly joinable.

1
changelog.d/9296.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix bug in Synapse 1.27.0rc1 which meant the "session expired" error page during SSO registration was badly formatted.

1
changelog.d/9297.feature Normal file
View File

@@ -0,0 +1 @@
Further improvements to the user experience of registration via single sign-on.

1
changelog.d/9299.misc Normal file
View File

@@ -0,0 +1 @@
Update the `Cursor` type hints to better match PEP 249.

1
changelog.d/9300.feature Normal file
View File

@@ -0,0 +1 @@
Further improvements to the user experience of registration via single sign-on.

1
changelog.d/9301.feature Normal file
View File

@@ -0,0 +1 @@
Further improvements to the user experience of registration via single sign-on.

1
changelog.d/9302.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix new ratelimiting for invites to respect the `ratelimit` flag on application services. Introduced in v1.27.0rc1.

1
changelog.d/9305.misc Normal file
View File

@@ -0,0 +1 @@
Add debug logging for SRV lookups. Contributed by @Bubu.

1
changelog.d/9307.misc Normal file
View File

@@ -0,0 +1 @@
Improve logging for OIDC login flow.

1
changelog.d/9308.doc Normal file
View File

@@ -0,0 +1 @@
Correct name of Synapse's service file in TURN howto.

1
changelog.d/9310.doc Normal file
View File

@@ -0,0 +1 @@
Clarify the sample configuration for changes made to the template loading code.

1
changelog.d/9311.feature Normal file
View File

@@ -0,0 +1 @@
Add hook to spam checker modules that allow checking file uploads and remote downloads.

1
changelog.d/9317.doc Normal file
View File

@@ -0,0 +1 @@
Fix the braces in the `oidc_providers` section of the sample config.

1
changelog.d/9321.bugfix Normal file
View File

@@ -0,0 +1 @@
Assert a maximum length for the `client_secret` parameter for spec compliance.

1
changelog.d/9322.doc Normal file
View File

@@ -0,0 +1 @@
Update installation instructions on Fedora.

1
changelog.d/9326.misc Normal file
View File

@@ -0,0 +1 @@
Share the code for handling required attributes between the CAS and SAML handlers.

1
changelog.d/9333.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix additional errors when previewing URLs: "AttributeError 'NoneType' object has no attribute 'xpath'" and "ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.".

1
changelog.d/9361.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a bug causing Synapse to impose the wrong type constraints on fields when processing responses from appservices to `/_matrix/app/v1/thirdparty/user/{protocol}`.

1
changelog.d/9377.misc Normal file
View File

@@ -0,0 +1 @@
Convert tests to use `HomeserverTestCase`.

1
changelog.d/9399.misc Normal file
View File

@@ -0,0 +1 @@
Optimise some code behind the user presence feature.

View File

@@ -33,11 +33,13 @@ esac
# Use --builtin-venv to use the better `venv` module from CPython 3.4+ rather
# than the 2/3 compatible `virtualenv`.
# Pin pip to 20.3.4 to fix breakage in 21.0 on py3.5 (xenial)
dh_virtualenv \
--install-suffix "matrix-synapse" \
--builtin-venv \
--python "$SNAKE" \
--upgrade-pip \
--upgrade-pip-to="20.3.4" \
--preinstall="lxml" \
--preinstall="mock" \
--extra-pip-arg="--no-cache-dir" \

14
debian/changelog vendored
View File

@@ -1,8 +1,18 @@
matrix-synapse-py3 (1.25.0ubuntu1) UNRELEASED; urgency=medium
matrix-synapse-py3 (1.26.0+nmu1) UNRELEASED; urgency=medium
* Fix build on Ubuntu 16.04 LTS (Xenial).
-- Dan Callahan <danc@element.io> Thu, 28 Jan 2021 16:21:03 +0000
matrix-synapse-py3 (1.26.0) stable; urgency=medium
[ Richard van der Hoff ]
* Remove dependency on `python3-distutils`.
-- Richard van der Hoff <richard@matrix.org> Fri, 15 Jan 2021 12:44:19 +0000
[ Synapse Packaging team ]
* New synapse release 1.26.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 27 Jan 2021 12:43:35 -0500
matrix-synapse-py3 (1.25.0) stable; urgency=medium

View File

@@ -27,6 +27,7 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
wget
# fetch and unpack the package
# TODO: Upgrade to 1.2.2 once xenial is dropped
RUN mkdir /dh-virtualenv
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/ac6e1b1.tar.gz
RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz

View File

@@ -9,6 +9,8 @@
* [Response](#response)
* [Undoing room shutdowns](#undoing-room-shutdowns)
- [Make Room Admin API](#make-room-admin-api)
- [Forward Extremities Admin API](#forward-extremities-admin-api)
- [Event Context API](#event-context-api)
# List Room API
@@ -367,6 +369,36 @@ Response:
}
```
# Room State API
The Room State admin API allows server admins to get a list of all state events in a room.
The response includes the following fields:
* `state` - The current state of the room at the time of request.
## Usage
A standard request:
```
GET /_synapse/admin/v1/rooms/<room_id>/state
{}
```
Response:
```json
{
"state": [
{"type": "m.room.create", "state_key": "", "etc": true},
{"type": "m.room.power_levels", "state_key": "", "etc": true},
{"type": "m.room.name", "state_key": "", "etc": true}
]
}
```
# Delete Room API
The Delete Room admin API allows server admins to remove rooms from server
@@ -511,3 +543,173 @@ optionally be specified, e.g.:
"user_id": "@foo:example.com"
}
```
# Forward Extremities Admin API
Enables querying and deleting forward extremities from rooms. When a lot of forward
extremities accumulate in a room, performance can become degraded. For details, see
[#1760](https://github.com/matrix-org/synapse/issues/1760).
## Check for forward extremities
To check the status of forward extremities for a room:
```
GET /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities
```
A response as follows will be returned:
```json
{
"count": 1,
"results": [
{
"event_id": "$M5SP266vsnxctfwFgFLNceaCo3ujhRtg_NiiHabcdefgh",
"state_group": 439,
"depth": 123,
"received_ts": 1611263016761
}
]
}
```
## Deleting forward extremities
**WARNING**: Please ensure you know what you're doing and have read
the related issue [#1760](https://github.com/matrix-org/synapse/issues/1760).
Under no circumstances should this API be executed as an automated maintenance task!
If a room has lots of forward extremities, the extra ones can be
deleted as follows:
```
DELETE /_synapse/admin/v1/rooms/<room_id_or_alias>/forward_extremities
```
A response as follows will be returned, indicating the number of forward extremities
that were deleted.
```json
{
"deleted": 1
}
```
# Event Context API
This API lets a client find the context of an event. This is designed primarily to investigate abuse reports.
```
GET /_synapse/admin/v1/rooms/<room_id>/context/<event_id>
```
This API mimics [GET /_matrix/client/r0/rooms/{roomId}/context/{eventId}](https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-rooms-roomid-context-eventid). Please refer to the link for full details on parameters and response.
Example response:
```json
{
"end": "t29-57_2_0_2",
"events_after": [
{
"content": {
"body": "This is an example text message",
"msgtype": "m.text",
"format": "org.matrix.custom.html",
"formatted_body": "<b>This is an example text message</b>"
},
"type": "m.room.message",
"event_id": "$143273582443PhrSn:example.org",
"room_id": "!636q39766251:example.com",
"sender": "@example:example.org",
"origin_server_ts": 1432735824653,
"unsigned": {
"age": 1234
}
}
],
"event": {
"content": {
"body": "filename.jpg",
"info": {
"h": 398,
"w": 394,
"mimetype": "image/jpeg",
"size": 31037
},
"url": "mxc://example.org/JWEIFJgwEIhweiWJE",
"msgtype": "m.image"
},
"type": "m.room.message",
"event_id": "$f3h4d129462ha:example.com",
"room_id": "!636q39766251:example.com",
"sender": "@example:example.org",
"origin_server_ts": 1432735824653,
"unsigned": {
"age": 1234
}
},
"events_before": [
{
"content": {
"body": "something-important.doc",
"filename": "something-important.doc",
"info": {
"mimetype": "application/msword",
"size": 46144
},
"msgtype": "m.file",
"url": "mxc://example.org/FHyPlCeYUSFFxlgbQYZmoEoe"
},
"type": "m.room.message",
"event_id": "$143273582443PhrSn:example.org",
"room_id": "!636q39766251:example.com",
"sender": "@example:example.org",
"origin_server_ts": 1432735824653,
"unsigned": {
"age": 1234
}
}
],
"start": "t27-54_2_0_2",
"state": [
{
"content": {
"creator": "@example:example.org",
"room_version": "1",
"m.federate": true,
"predecessor": {
"event_id": "$something:example.org",
"room_id": "!oldroom:example.org"
}
},
"type": "m.room.create",
"event_id": "$143273582443PhrSn:example.org",
"room_id": "!636q39766251:example.com",
"sender": "@example:example.org",
"origin_server_ts": 1432735824653,
"unsigned": {
"age": 1234
},
"state_key": ""
},
{
"content": {
"membership": "join",
"avatar_url": "mxc://example.org/SEsfnsuifSDFSSEF",
"displayname": "Alice Margatroid"
},
"type": "m.room.member",
"event_id": "$143273582443PhrSn:example.org",
"room_id": "!636q39766251:example.com",
"sender": "@example:example.org",
"origin_server_ts": 1432735824653,
"unsigned": {
"age": 1234
},
"state_key": "@alice:example.org"
}
]
}
```

View File

@@ -760,3 +760,33 @@ The following fields are returned in the JSON response body:
- ``total`` - integer - Number of pushers.
See also `Client-Server API Spec <https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers>`_
Shadow-banning users
====================
Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
A shadow-banned user receives successful responses to their client-server API requests,
but the events are not propagated into rooms. This can be an effective tool as it
(hopefully) takes the user longer to realise they are being moderated before
pivoting to another account.
Shadow-banning a user should be used as a tool of last resort and may lead to confusing
or broken behaviour for the client. A shadow-banned user will not receive any
notification and it is generally more appropriate to ban or kick abusive users.
A shadow-banned user will be unable to contact anyone on the server.
The API is::
POST /_synapse/admin/v1/users/<user_id>/shadow_ban
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
An empty JSON dict is returned.
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - The fully qualified MXID: for example, ``@user:server.com``. The user must
be local.
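For example, a minimal sketch using Python's ``requests`` library (the homeserver URL
and admin access token are placeholders, not part of the API)::

    import requests

    base_url = "https://synapse.example.com"  # placeholder homeserver URL
    headers = {"Authorization": "Bearer YOUR_ADMIN_ACCESS_TOKEN"}  # placeholder token

    # The MXID must be URL-encoded: @user:server.com -> %40user%3Aserver.com
    resp = requests.post(
        base_url + "/_synapse/admin/v1/users/%40user%3Aserver.com/shadow_ban",
        headers=headers,
        json={},
    )
    resp.raise_for_status()
    print(resp.json())  # an empty JSON dict on success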

View File

@@ -44,7 +44,7 @@ as follows:
To enable the OpenID integration, you should then add a section to the `oidc_providers`
setting in your configuration file (or uncomment one of the existing examples).
See [sample_config.yaml](./sample_config.yaml) for some sample settings, as well as
the text below for example configurations for specific providers.
## Sample configs
@@ -52,11 +52,12 @@ the text below for example configurations for specific providers.
Here are a few configs for providers that should work with Synapse.
### Microsoft Azure Active Directory
Azure AD can act as an OpenID Connect Provider. Register a new application under
*App registrations* in the Azure AD management console. The RedirectURI for your
application should point to your matrix server: `[synapse public baseurl]/_synapse/oidc/callback`
application should point to your matrix server:
`[synapse public baseurl]/_synapse/client/oidc/callback`
Go to *Certificates & secrets* and register a new client secret. Make note of your
Directory (tenant) ID as it will be used in the Azure links.
Edit your Synapse config file and change the `oidc_config` section:
@@ -94,7 +95,7 @@ staticClients:
- id: synapse
secret: secret
redirectURIs:
- '[synapse public baseurl]/_synapse/oidc/callback'
- '[synapse public baseurl]/_synapse/client/oidc/callback'
name: 'Synapse'
```
@@ -118,7 +119,7 @@ oidc_providers:
```
### [Keycloak][keycloak-idp]
[Keycloak][keycloak-idp] is an opensource IdP maintained by Red Hat.
Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to install Keycloak and set up a realm.
@@ -140,7 +141,7 @@ Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to
| Enabled | `On` |
| Client Protocol | `openid-connect` |
| Access Type | `confidential` |
| Valid Redirect URIs | `[synapse public baseurl]/_synapse/oidc/callback` |
| Valid Redirect URIs | `[synapse public baseurl]/_synapse/client/oidc/callback` |
5. Click `Save`
6. On the Credentials tab, update the fields:
@@ -168,7 +169,7 @@ oidc_providers:
### [Auth0][auth0]
1. Create a regular web application for Synapse
2. Set the Allowed Callback URLs to `[synapse public baseurl]/_synapse/oidc/callback`
2. Set the Allowed Callback URLs to `[synapse public baseurl]/_synapse/client/oidc/callback`
3. Add a rule to add the `preferred_username` claim.
<details>
<summary>Code sample</summary>
@@ -194,7 +195,7 @@ Synapse config:
```yaml
oidc_providers:
- idp_id: auth0
idp_name: Auth0
issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
client_id: "your-client-id" # TO BE FILLED
@@ -217,7 +218,7 @@ login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.
1. Create a new OAuth application: https://github.com/settings/applications/new.
2. Set the callback URL to `[synapse public baseurl]/_synapse/oidc/callback`.
2. Set the callback URL to `[synapse public baseurl]/_synapse/client/oidc/callback`.
Synapse config:
@@ -225,6 +226,7 @@ Synapse config:
oidc_providers:
- idp_id: github
idp_name: Github
idp_brand: "org.matrix.github" # optional: styling hint for clients
discover: false
issuer: "https://github.com/"
client_id: "your-client-id" # TO BE FILLED
@@ -250,6 +252,7 @@ oidc_providers:
oidc_providers:
- idp_id: google
idp_name: Google
idp_brand: "org.matrix.google" # optional: styling hint for clients
issuer: "https://accounts.google.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
@@ -260,13 +263,13 @@ oidc_providers:
display_name_template: "{{ user.name }}"
```
4. Back in the Google console, add this Authorized redirect URI: `[synapse
public baseurl]/_synapse/oidc/callback`.
public baseurl]/_synapse/client/oidc/callback`.
### Twitch
1. Set up a developer account on [Twitch](https://dev.twitch.tv/)
2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/)
3. Add this OAuth Redirect URL: `[synapse public baseurl]/_synapse/oidc/callback`
3. Add this OAuth Redirect URL: `[synapse public baseurl]/_synapse/client/oidc/callback`
Synapse config:
@@ -288,7 +291,7 @@ oidc_providers:
1. Create a [new application](https://gitlab.com/profile/applications).
2. Add the `read_user` and `openid` scopes.
3. Add this Callback URL: `[synapse public baseurl]/_synapse/oidc/callback`
3. Add this Callback URL: `[synapse public baseurl]/_synapse/client/oidc/callback`
Synapse config:
@@ -296,6 +299,7 @@ Synapse config:
oidc_providers:
- idp_id: gitlab
idp_name: Gitlab
idp_brand: "org.matrix.gitlab" # optional: styling hint for clients
issuer: "https://gitlab.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
@@ -307,3 +311,102 @@ oidc_providers:
localpart_template: '{{ user.nickname }}'
display_name_template: '{{ user.name }}'
```
### Facebook
Like Github, Facebook provides a custom OAuth2 API rather than an OIDC-compliant
one, so it requires a little more configuration.
0. You will need a Facebook developer account. You can register for one
[here](https://developers.facebook.com/async/registration/).
1. On the [apps](https://developers.facebook.com/apps/) page of the developer
console, "Create App", and choose "Build Connected Experiences".
2. Once the app is created, add "Facebook Login" and choose "Web". You don't
need to go through the whole form here.
3. In the left-hand menu, open "Products"/"Facebook Login"/"Settings".
* Add `[synapse public baseurl]/_synapse/client/oidc/callback` as an OAuth Redirect
URL.
4. In the left-hand menu, open "Settings/Basic". Here you can copy the "App ID"
and "App Secret" for use below.
Synapse config:
```yaml
- idp_id: facebook
idp_name: Facebook
idp_brand: "org.matrix.facebook" # optional: styling hint for clients
discover: false
issuer: "https://facebook.com"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes: ["openid", "email"]
authorization_endpoint: https://facebook.com/dialog/oauth
token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token
user_profile_method: "userinfo_endpoint"
userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture"
user_mapping_provider:
config:
subject_claim: "id"
display_name_template: "{{ user.name }}"
```
Relevant documents:
* https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow
* Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/
* Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user
### Gitea
Gitea is, like Github, not an OpenID provider, but just an OAuth2 provider.
The [`/user` API endpoint](https://try.gitea.io/api/swagger#/user/userGetCurrent)
can be used to retrieve information on the authenticated user. As the Synapse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.
1. Create a new application.
2. Add this Callback URL: `[synapse public baseurl]/_synapse/client/oidc/callback`
Synapse config:
```yaml
oidc_providers:
- idp_id: gitea
idp_name: Gitea
discover: false
issuer: "https://your-gitea.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
client_auth_method: client_secret_post
scopes: [] # Gitea doesn't support Scopes
authorization_endpoint: "https://your-gitea.com/login/oauth/authorize"
token_endpoint: "https://your-gitea.com/login/oauth/access_token"
userinfo_endpoint: "https://your-gitea.com/api/v1/user"
user_mapping_provider:
config:
subject_claim: "id"
localpart_template: "{{ user.login }}"
display_name_template: "{{ user.full_name }}"
```
### XWiki
Install the [OpenID Connect Provider](https://extensions.xwiki.org/xwiki/bin/view/Extension/OpenID%20Connect/OpenID%20Connect%20Provider/) extension in your [XWiki](https://www.xwiki.org) instance.
Synapse config:
```yaml
oidc_providers:
- idp_id: xwiki
idp_name: "XWiki"
issuer: "https://myxwikihost/xwiki/oidc/"
client_id: "your-client-id" # TO BE FILLED
# Needed until https://github.com/matrix-org/synapse/issues/9212 is fixed
client_secret: "dontcare"
scopes: ["openid", "profile"]
user_profile_method: "userinfo_endpoint"
user_mapping_provider:
config:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
```

View File

@@ -169,6 +169,7 @@ pid_file: DATADIR/homeserver.pid
# - '100.64.0.0/10'
# - '192.0.0.0/24'
# - '169.254.0.0/16'
# - '192.88.99.0/24'
# - '198.18.0.0/15'
# - '192.0.2.0/24'
# - '198.51.100.0/24'
@@ -177,6 +178,9 @@ pid_file: DATADIR/homeserver.pid
# - '::1/128'
# - 'fe80::/10'
# - 'fc00::/7'
# - '2001:db8::/32'
# - 'ff00::/8'
# - 'fec0::/10'
# List of IP address CIDR ranges that should be allowed for federation,
# identity servers, push servers, and for checking key validity for
@@ -824,6 +828,9 @@ log_config: "CONFDIR/SERVERNAME.log.config"
# users are joining rooms the server is already in (this is cheap) vs
# "remote" for when users are trying to join rooms not on the server (which
# can be more expensive)
# - one for ratelimiting how often a user or IP can attempt to validate a 3PID.
# - two for ratelimiting how often invites can be sent in a room or to a
# specific user.
#
# The defaults are as shown below.
#
@@ -857,7 +864,18 @@ log_config: "CONFDIR/SERVERNAME.log.config"
# remote:
# per_second: 0.01
# burst_count: 3
#
#rc_3pid_validation:
# per_second: 0.003
# burst_count: 5
#
#rc_invites:
# per_room:
# per_second: 0.3
# burst_count: 10
# per_user:
# per_second: 0.003
# burst_count: 5
# Ratelimiting settings for incoming federation
#
@@ -980,6 +998,7 @@ media_store_path: "DATADIR/media_store"
# - '100.64.0.0/10'
# - '192.0.0.0/24'
# - '169.254.0.0/16'
# - '192.88.99.0/24'
# - '198.18.0.0/15'
# - '192.0.2.0/24'
# - '198.51.100.0/24'
@@ -988,6 +1007,9 @@ media_store_path: "DATADIR/media_store"
# - '::1/128'
# - 'fe80::/10'
# - 'fc00::/7'
# - '2001:db8::/32'
# - 'ff00::/8'
# - 'fec0::/10'
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.
@@ -1306,6 +1328,8 @@ account_threepid_delegates:
# By default, any room aliases included in this list will be created
# as a publicly joinable room when the first user registers for the
# homeserver. This behaviour can be customised with the settings below.
# If the room already exists, make certain it is a publicly joinable
# room. The join rule of the room must be set to 'public'.
#
#auto_join_rooms:
# - "#example:example.com"
@@ -1552,10 +1576,10 @@ trusted_key_servers:
# enable SAML login.
#
# Once SAML support is enabled, a metadata file will be exposed at
# https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to
# https://<server>:<port>/_synapse/client/saml2/metadata.xml, which you may be able to
# use to configure your SAML IdP with. Alternatively, you can manually configure
# the IdP to use an ACS location of
# https://<server>:<port>/_matrix/saml2/authn_response.
# https://<server>:<port>/_synapse/client/saml2/authn_response.
#
saml2_config:
# `sp_config` is the configuration for the pysaml2 Service Provider.
@@ -1727,10 +1751,14 @@ saml2_config:
# offer the user a choice of login mechanisms.
#
# idp_icon: An optional icon for this identity provider, which is presented
# by identity picker pages. If given, must be an MXC URI of the format
# mxc://<server-name>/<media-id>. (An easy way to obtain such an MXC URI
# is to upload an image to an (unencrypted) room and then copy the "url"
# from the source of the event.)
# by clients and Synapse's own IdP picker page. If given, must be an
# MXC URI of the format mxc://<server-name>/<media-id>. (An easy way to
# obtain such an MXC URI is to upload an image to an (unencrypted) room
# and then copy the "url" from the source of the event.)
#
# idp_brand: An optional brand for this identity provider, allowing clients
# to style the login flow according to the identity provider in question.
# See the spec for possible options here.
#
# discover: set to 'false' to disable the use of the OIDC discovery mechanism
# to discover endpoints. Defaults to true.
@@ -1791,17 +1819,21 @@ saml2_config:
#
# For the default provider, the following settings are available:
#
# sub: name of the claim containing a unique identifier for the
# user. Defaults to 'sub', which OpenID Connect compliant
# providers should provide.
# subject_claim: name of the claim containing a unique identifier
# for the user. Defaults to 'sub', which OpenID Connect
# compliant providers should provide.
#
# localpart_template: Jinja2 template for the localpart of the MXID.
# If this is not set, the user will be prompted to choose their
# own username.
# own username (see 'sso_auth_account_details.html' in the 'sso'
# section of this file).
#
# display_name_template: Jinja2 template for the display name to set
# on first login. If unset, no displayname will be set.
#
# email_template: Jinja2 template for the email address of the user.
# If unset, no email address will be added to the account.
#
# extra_attributes: a map of Jinja2 templates for extra attributes
# to send back to the client during login.
# Note that these are non-standard and clients will ignore them
@@ -1837,6 +1869,12 @@ oidc_providers:
# userinfo_endpoint: "https://accounts.example.com/userinfo"
# jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
# skip_verification: true
# user_mapping_provider:
# config:
# subject_claim: "id"
# localpart_template: "{{ user.login }}"
# display_name_template: "{{ user.name }}"
# email_template: "{{ user.email }}"
# For use with Keycloak
#
@@ -1851,6 +1889,7 @@ oidc_providers:
#
#- idp_id: github
# idp_name: Github
# idp_brand: org.matrix.github
# discover: false
# issuer: "https://github.com/"
# client_id: "your-client-id" # TO BE FILLED
@@ -1862,8 +1901,8 @@ oidc_providers:
# user_mapping_provider:
# config:
# subject_claim: "id"
# localpart_template: "{ user.login }"
# display_name_template: "{ user.name }"
# localpart_template: "{{ user.login }}"
# display_name_template: "{{ user.name }}"
# Enable Central Authentication Service (CAS) for registration and login.
@@ -1878,10 +1917,6 @@ cas_config:
#
#server_url: "https://cas-server.com"
# The public URL of the homeserver.
#
#service_url: "https://homeserver.domain.com:8448"
# The attribute of the CAS response to use as the display name.
#
# If unset, no displayname will be set.
@@ -1936,15 +1971,19 @@ sso:
#
# When rendering, this template is given the following variables:
# * redirect_url: the URL that the user will be redirected to after
# login. Needs manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
# login.
#
# * server_name: the homeserver's name.
#
# * providers: a list of available Identity Providers. Each element is
# an object with the following attributes:
#
# * idp_id: unique identifier for the IdP
# * idp_name: user-facing name for the IdP
# * idp_icon: if specified in the IdP config, an MXC URI for an icon
# for the IdP
# * idp_brand: if specified in the IdP config, a textual identifier
# for the brand of the IdP
#
# The rendered HTML page should contain a form which submits its results
# back as a GET request, with the following query parameters:
@@ -1954,33 +1993,101 @@ sso:
#
# * idp: the 'idp_id' of the chosen IDP.
#
# * HTML page to prompt new users to enter a userid and confirm other
# details: 'sso_auth_account_details.html'. This is only shown if the
# SSO implementation (with any user_mapping_provider) does not return
# a localpart.
#
# When rendering, this template is given the following variables:
#
# * server_name: the homeserver's name.
#
# * idp: details of the SSO Identity Provider that the user logged in
# with: an object with the following attributes:
#
# * idp_id: unique identifier for the IdP
# * idp_name: user-facing name for the IdP
# * idp_icon: if specified in the IdP config, an MXC URI for an icon
# for the IdP
# * idp_brand: if specified in the IdP config, a textual identifier
# for the brand of the IdP
#
# * user_attributes: an object containing details about the user that
# we received from the IdP. May have the following attributes:
#
# * display_name: the user's display_name
# * emails: a list of email addresses
#
# The template should render a form which submits the following fields:
#
# * username: the localpart of the user's chosen user id
#
# * HTML page allowing the user to consent to the server's terms and
# conditions. This is only shown for new users, and only if
# `user_consent.require_at_registration` is set.
#
# When rendering, this template is given the following variables:
#
# * server_name: the homeserver's name.
#
# * user_id: the user's proposed matrix ID.
#
# * user_profile.display_name: the user's proposed display name, if any.
#
# * consent_version: the version of the terms that the user will be
# shown
#
# * terms_url: a link to the page showing the terms.
#
# The template should render a form which submits the following fields:
#
# * accepted_version: the version of the terms accepted by the user
# (ie, 'consent_version' from the input variables).
#
# * HTML page for a confirmation step before redirecting back to the client
# with the login token: 'sso_redirect_confirm.html'.
#
# When rendering, this template is given three variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
# When rendering, this template is given the following variables:
#
# * redirect_url: the URL the user is about to be redirected to.
#
# * display_url: the same as `redirect_url`, but with the query
# parameters stripped. The intention is to have a
# human-readable URL to show to users, not to use it as
# the final address to redirect to. Needs manual escaping
# (see https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
# the final address to redirect to.
#
# * server_name: the homeserver's name.
#
# * new_user: a boolean indicating whether this is the user's first time
# logging in.
#
# * user_id: the user's matrix ID.
#
# * user_profile.avatar_url: an MXC URI for the user's avatar, if any.
# None if the user has not set an avatar.
#
# * user_profile.display_name: the user's display name. None if the user
# has not set a display name.
#
# * HTML page which notifies the user that they are authenticating to confirm
# an operation on their account during the user interactive authentication
# process: 'sso_auth_confirm.html'.
#
# When rendering, this template is given the following variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
# * redirect_url: the URL the user is about to be redirected to.
#
# * description: the operation which the user is being asked to confirm
#
# * idp: details of the Identity Provider that we will use to confirm
# the user's identity: an object with the following attributes:
#
# * idp_id: unique identifier for the IdP
# * idp_name: user-facing name for the IdP
# * idp_icon: if specified in the IdP config, an MXC URI for an icon
# for the IdP
# * idp_brand: if specified in the IdP config, a textual identifier
# for the brand of the IdP
#
# * HTML page shown after a successful user interactive authentication session:
# 'sso_auth_success.html'.
#

View File

@@ -61,6 +61,9 @@ class ExampleSpamChecker:
async def check_registration_for_spam(self, email_threepid, username, request_info):
return RegistrationBehaviour.ALLOW # allow all registrations
async def check_media_file_for_spam(self, file_wrapper, file_info):
return False # allow all media
```
## Configuration

View File

@@ -187,7 +187,7 @@ After updating the homeserver configuration, you must restart synapse:
```
* If you use systemd:
```
systemctl restart synapse.service
systemctl restart matrix-synapse.service
```
... and then reload any clients (or wait an hour for them to refresh their
settings).

View File

@@ -40,6 +40,9 @@ which relays replication commands between processes. This can give a significant
cpu saving on the main process and will be a prerequisite for upcoming
performance improvements.
If Redis support is enabled, Synapse will use it as a shared cache, as well as a
pub/sub mechanism.
See the [Architectural diagram](#architectural-diagram) section at the end for
a visualisation of what this looks like.
@@ -225,7 +228,6 @@ expressions:
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
^/_synapse/client/password_reset/email/submit_token$
# Registration/login requests
^/_matrix/client/(api/v1|r0|unstable)/login$
@@ -256,25 +258,29 @@ Additionally, the following endpoints should be included if Synapse is configure
to use SSO (you only need to include the ones for whichever SSO provider you're
using):
# for all SSO providers
^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect
^/_synapse/client/pick_idp$
^/_synapse/client/pick_username
^/_synapse/client/new_user_consent$
^/_synapse/client/sso_register$
# OpenID Connect requests.
^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect$
^/_synapse/oidc/callback$
^/_synapse/client/oidc/callback$
# SAML requests.
^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect$
^/_matrix/saml2/authn_response$
^/_synapse/client/saml2/authn_response$
# CAS requests.
^/_matrix/client/(api/v1|r0|unstable)/login/(cas|sso)/redirect$
^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$
Ensure that all SSO logins go to a single process.
For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530).
Note that a HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.
Ensure that all SSO logins go to a single process (usually the main process).
For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530).
#### Load balancing
It is possible to run multiple instances of this worker app, with incoming requests

View File

@@ -23,39 +23,7 @@ files =
synapse/events/validator.py,
synapse/events/spamcheck.py,
synapse/federation,
synapse/handlers/_base.py,
synapse/handlers/account_data.py,
synapse/handlers/account_validity.py,
synapse/handlers/admin.py,
synapse/handlers/appservice.py,
synapse/handlers/auth.py,
synapse/handlers/cas_handler.py,
synapse/handlers/deactivate_account.py,
synapse/handlers/device.py,
synapse/handlers/devicemessage.py,
synapse/handlers/directory.py,
synapse/handlers/events.py,
synapse/handlers/federation.py,
synapse/handlers/identity.py,
synapse/handlers/initial_sync.py,
synapse/handlers/message.py,
synapse/handlers/oidc_handler.py,
synapse/handlers/pagination.py,
synapse/handlers/password_policy.py,
synapse/handlers/presence.py,
synapse/handlers/profile.py,
synapse/handlers/read_marker.py,
synapse/handlers/receipts.py,
synapse/handlers/register.py,
synapse/handlers/room.py,
synapse/handlers/room_list.py,
synapse/handlers/room_member.py,
synapse/handlers/room_member_worker.py,
synapse/handlers/saml_handler.py,
synapse/handlers/sso.py,
synapse/handlers/sync.py,
synapse/handlers/user_directory.py,
synapse/handlers/ui_auth,
synapse/handlers,
synapse/http/client.py,
synapse/http/federation/matrix_federation_agent.py,
synapse/http/federation/well_known_resolver.py,
@@ -194,3 +162,9 @@ ignore_missing_imports = True
[mypy-hiredis]
ignore_missing_imports = True
[mypy-josepy.*]
ignore_missing_imports = True
[mypy-txacme.*]
ignore_missing_imports = True

View File

@@ -162,12 +162,23 @@ else
fi
# Delete schema_version, applied_schema_deltas and applied_module_schemas tables
# Also delete any shadow tables from fts4
# This needs to be done after synapse_port_db is run
echo "Dropping unwanted db tables..."
SQL="
DROP TABLE schema_version;
DROP TABLE applied_schema_deltas;
DROP TABLE applied_module_schemas;
DROP TABLE event_search_content;
DROP TABLE event_search_segments;
DROP TABLE event_search_segdir;
DROP TABLE event_search_docsize;
DROP TABLE event_search_stat;
DROP TABLE user_directory_search_content;
DROP TABLE user_directory_search_segments;
DROP TABLE user_directory_search_segdir;
DROP TABLE user_directory_search_docsize;
DROP TABLE user_directory_search_stat;
"
sqlite3 "$SQLITE_DB" <<< "$SQL"
psql $POSTGRES_DB_NAME -U "$POSTGRES_USERNAME" -w <<< "$SQL"

View File

@@ -96,7 +96,7 @@ CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS)
#
# We pin black so that our tests don't start failing on new releases.
CONDITIONAL_REQUIREMENTS["lint"] = [
"isort==5.0.3",
"isort==5.7.0",
"black==19.10b0",
"flake8-comprehensions",
"flake8",

View File

@@ -15,13 +15,23 @@
"""Contains *incomplete* type hints for txredisapi.
"""
from typing import List, Optional, Type, Union
from typing import Any, List, Optional, Type, Union
class RedisProtocol:
def publish(self, channel: str, message: bytes): ...
async def ping(self) -> None: ...
async def set(
self,
key: str,
value: Any,
expire: Optional[int] = None,
pexpire: Optional[int] = None,
only_if_not_exists: bool = False,
only_if_exists: bool = False,
) -> None: ...
async def get(self, key: str) -> Any: ...
class SubscriberProtocol:
class SubscriberProtocol(RedisProtocol):
def __init__(self, *args, **kwargs): ...
password: Optional[str]
def subscribe(self, channels: Union[str, List[str]]): ...
@@ -40,14 +50,13 @@ def lazyConnection(
convertNumbers: bool = ...,
) -> RedisProtocol: ...
class SubscriberFactory:
def buildProtocol(self, addr): ...
class ConnectionHandler: ...
class RedisFactory:
continueTrying: bool
handler: RedisProtocol
pool: List[RedisProtocol]
replyTimeout: Optional[int]
def __init__(
self,
uuid: str,
@@ -60,3 +69,7 @@ class RedisFactory:
replyTimeout: Optional[int] = None,
convertNumbers: Optional[int] = True,
): ...
def buildProtocol(self, addr) -> RedisProtocol: ...
class SubscriberFactory(RedisFactory):
def __init__(self): ...
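As a rough, standalone sketch (not Synapse code) of how the newly-typed `set` and `get`
methods might be used over Twisted; the Redis host, port, and key name are assumptions:
```python
import txredisapi
from twisted.internet import reactor
from twisted.internet.defer import ensureDeferred


async def main() -> None:
    # lazyConnection queues commands until the connection to Redis is ready.
    redis = txredisapi.lazyConnection(host="localhost", port=6379)  # assumed local Redis
    await redis.set("synapse:example", "hello", expire=60)  # expire is in seconds
    print(await redis.get("synapse:example"))


d = ensureDeferred(main())
d.addBoth(lambda _: reactor.stop())
reactor.run()
```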

View File

@@ -48,7 +48,7 @@ try:
except ImportError:
pass
__version__ = "1.26.0rc1"
__version__ = "1.27.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -16,6 +16,7 @@
import gc
import logging
import os
import platform
import signal
import socket
import sys
@@ -339,7 +340,7 @@ async def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerCon
# rest of time. Doing so means less work each GC (hopefully).
#
# This only works on Python 3.7
if sys.version_info >= (3, 7):
if platform.python_implementation() == "CPython" and sys.version_info >= (3, 7):
gc.collect()
gc.freeze()

View File

@@ -22,6 +22,7 @@ from typing import Dict, Iterable, Optional, Set
from typing_extensions import ContextManager
from twisted.internet import address
from twisted.web.resource import IResource
import synapse
import synapse.events
@@ -90,9 +91,8 @@ from synapse.replication.tcp.streams import (
ToDeviceStream,
)
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client.v1 import events, room
from synapse.rest.client.v1 import events, login, room
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.login import LoginRestServlet
from synapse.rest.client.v1.profile import (
ProfileAvatarURLRestServlet,
ProfileDisplaynameRestServlet,
@@ -127,6 +127,7 @@ from synapse.rest.client.v2_alpha.sendtodevice import SendToDeviceRestServlet
from synapse.rest.client.versions import VersionsRestServlet
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.server import HomeServer, cache_in_self
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
@@ -507,7 +508,7 @@ class GenericWorkerServer(HomeServer):
site_tag = port
# We always include a health resource.
resources = {"/health": HealthResource()}
resources = {"/health": HealthResource()} # type: Dict[str, IResource]
for res in listener_config.http_options.resources:
for name in res.names:
@@ -517,7 +518,7 @@ class GenericWorkerServer(HomeServer):
resource = JsonResource(self, canonical_json=False)
RegisterRestServlet(self).register(resource)
LoginRestServlet(self).register(resource)
login.register_servlets(self, resource)
ThreepidRestServlet(self).register(resource)
DevicesRestServlet(self).register(resource)
KeyQueryServlet(self).register(resource)
@@ -557,6 +558,8 @@ class GenericWorkerServer(HomeServer):
groups.register_servlets(self, resource)
resources.update({CLIENT_API_PREFIX: resource})
resources.update(build_synapse_client_resource_tree(self))
elif name == "federation":
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
elif name == "media":

View File

@@ -60,8 +60,7 @@ from synapse.rest import ClientRestResource
from synapse.rest.admin import AdminRestResource
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client.pick_idp import PickIdpResource
from synapse.rest.synapse.client.pick_username import pick_username_resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import WellKnownResource
from synapse.server import HomeServer
from synapse.storage import DataStore
@@ -190,21 +189,10 @@ class SynapseHomeServer(HomeServer):
"/_matrix/client/versions": client_resource,
"/.well-known/matrix/client": WellKnownResource(self),
"/_synapse/admin": AdminRestResource(self),
"/_synapse/client/pick_username": pick_username_resource(self),
"/_synapse/client/pick_idp": PickIdpResource(self),
**build_synapse_client_resource_tree(self),
}
)
if self.get_config().oidc_enabled:
from synapse.rest.oidc import OIDCResource
resources["/_synapse/oidc"] = OIDCResource(self)
if self.get_config().saml2_enabled:
from synapse.rest.saml2 import SAML2Resource
resources["/_matrix/saml2"] = SAML2Resource(self)
if self.get_config().threepid_behaviour_email == ThreepidBehaviour.LOCAL:
from synapse.rest.synapse.client.password_reset import (
PasswordResetSubmitTokenResource,

View File

@@ -93,15 +93,20 @@ async def phone_stats_home(hs, stats, stats_process=_stats_process):
stats["daily_active_users"] = await hs.get_datastore().count_daily_users()
stats["monthly_active_users"] = await hs.get_datastore().count_monthly_users()
daily_active_e2ee_rooms = await hs.get_datastore().count_daily_active_e2ee_rooms()
stats["daily_active_e2ee_rooms"] = daily_active_e2ee_rooms
stats["daily_e2ee_messages"] = await hs.get_datastore().count_daily_e2ee_messages()
daily_sent_e2ee_messages = await hs.get_datastore().count_daily_sent_e2ee_messages()
stats["daily_sent_e2ee_messages"] = daily_sent_e2ee_messages
stats["daily_active_rooms"] = await hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = await hs.get_datastore().count_daily_messages()
daily_sent_messages = await hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
r30_results = await hs.get_datastore().count_r30_users()
for name, count in r30_results.items():
stats["r30_users_" + name] = count
daily_sent_messages = await hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
stats["cache_factor"] = hs.config.caches.global_factor
stats["event_cache_size"] = hs.config.caches.event_cache_size

View File

@@ -76,9 +76,6 @@ def _is_valid_3pe_result(r, field):
fields = r["fields"]
if not isinstance(fields, dict):
return False
for k in fields.keys():
if not isinstance(fields[k], str):
return False
return True

View File

@@ -18,18 +18,18 @@
import argparse
import errno
import os
import time
import urllib.parse
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import Any, Callable, Iterable, List, MutableMapping, Optional
from typing import Any, Iterable, List, MutableMapping, Optional
import attr
import jinja2
import pkg_resources
import yaml
from synapse.util.templates import _create_mxc_to_http_filter, _format_ts_filter
class ConfigError(Exception):
"""Represents a problem parsing the configuration
@@ -203,11 +203,28 @@ class Config:
with open(file_path) as file_stream:
return file_stream.read()
def read_template(self, filename: str) -> jinja2.Template:
"""Load a template file from disk.
This function will attempt to load the given template from the default Synapse
template directory.
Files read are treated as Jinja templates. The template is not rendered yet
and has autoescape enabled.
Args:
filename: A template filename to read.
Raises:
ConfigError: if the file's path is incorrect or otherwise cannot be read.
Returns:
A jinja2 template.
"""
return self.read_templates([filename])[0]
def read_templates(
self,
filenames: List[str],
custom_template_directory: Optional[str] = None,
autoescape: bool = False,
self, filenames: List[str], custom_template_directory: Optional[str] = None,
) -> List[jinja2.Template]:
"""Load a list of template files from disk using the given variables.
@@ -215,7 +232,8 @@ class Config:
template directory. If `custom_template_directory` is supplied, that directory
is tried first.
Files read are treated as Jinja templates. These templates are not rendered yet.
Files read are treated as Jinja templates. The templates are not rendered yet
and have autoescape enabled.
Args:
filenames: A list of template filenames to read.
@@ -223,16 +241,12 @@ class Config:
custom_template_directory: A directory to try to look for the templates
before using the default Synapse template directory instead.
autoescape: Whether to autoescape variables before inserting them into the
template.
Raises:
ConfigError: if the file's path is incorrect or otherwise cannot be read.
Returns:
A list of jinja2 templates.
"""
templates = []
search_directories = [self.default_template_dir]
# The loader will first look in the custom template directory (if specified) for the
@@ -248,8 +262,9 @@ class Config:
# Search the custom template directory as well
search_directories.insert(0, custom_template_directory)
# TODO: switch to synapse.util.templates.build_jinja_env
loader = jinja2.FileSystemLoader(search_directories)
env = jinja2.Environment(loader=loader, autoescape=autoescape)
env = jinja2.Environment(loader=loader, autoescape=jinja2.select_autoescape(),)
# Update the environment with our custom filters
env.filters.update(
@@ -259,44 +274,8 @@ class Config:
}
)
for filename in filenames:
# Load the template
template = env.get_template(filename)
templates.append(template)
return templates
def _format_ts_filter(value: int, format: str):
return time.strftime(format, time.localtime(value / 1000))
def _create_mxc_to_http_filter(public_baseurl: str) -> Callable:
"""Create and return a jinja2 filter that converts MXC urls to HTTP
Args:
public_baseurl: The public, accessible base URL of the homeserver
"""
def mxc_to_http_filter(value, width, height, resize_method="crop"):
if value[0:6] != "mxc://":
return ""
server_and_media_id = value[6:]
fragment = None
if "#" in server_and_media_id:
server_and_media_id, fragment = server_and_media_id.split("#", 1)
fragment = "#" + fragment
params = {"width": width, "height": height, "method": resize_method}
return "%s_matrix/media/v1/thumbnail/%s?%s%s" % (
public_baseurl,
server_and_media_id,
urllib.parse.urlencode(params),
fragment or "",
)
return mxc_to_http_filter
# Load the templates
return [env.get_template(filename) for filename in filenames]
class RootConfig:
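For reference, a small standalone sketch (not Synapse code; a `DictLoader` stands in for
the real template directories) of the behaviour `select_autoescape()` gives `read_templates`:
templates with HTML-like names are now autoescaped by default:
```python
import jinja2

env = jinja2.Environment(
    loader=jinja2.DictLoader({"example.html": "Hello {{ user }}"}),  # stand-in for the template dirs
    autoescape=jinja2.select_autoescape(),
)
rendered = env.get_template("example.html").render(user="<b>alice</b>")
print(rendered)  # Hello &lt;b&gt;alice&lt;/b&gt; (escaped because the name ends in .html)
```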

View File

@@ -9,6 +9,7 @@ from synapse.config import (
consent_config,
database,
emailconfig,
experimental,
groups,
jwt_config,
key,
@@ -18,6 +19,7 @@ from synapse.config import (
password_auth_providers,
push,
ratelimiting,
redis,
registration,
repository,
room_directory,
@@ -48,10 +50,11 @@ def path_exists(file_path: str): ...
class RootConfig:
server: server.ServerConfig
experimental: experimental.ExperimentalConfig
tls: tls.TlsConfig
database: database.DatabaseConfig
logging: logger.LoggingConfig
ratelimit: ratelimiting.RatelimitConfig
ratelimiting: ratelimiting.RatelimitConfig
media: repository.ContentRepositoryConfig
captcha: captcha.CaptchaConfig
voip: voip.VoipConfig
@@ -79,6 +82,7 @@ class RootConfig:
roomdirectory: room_directory.RoomDirectoryConfig
thirdpartyrules: third_party_event_rules.ThirdPartyRulesConfig
tracer: tracer.TracerConfig
redis: redis.RedisConfig
config_classes: List = ...
def __init__(self) -> None: ...

View File

@@ -28,9 +28,7 @@ class CaptchaConfig(Config):
"recaptcha_siteverify_api",
"https://www.recaptcha.net/recaptcha/api/siteverify",
)
self.recaptcha_template = self.read_templates(
["recaptcha.html"], autoescape=True
)[0]
self.recaptcha_template = self.read_template("recaptcha.html")
def generate_config_section(self, **kwargs):
return """\

View File

@@ -13,7 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, List
from synapse.config.sso import SsoAttributeRequirement
from ._base import Config
from ._util import validate_config
class CasConfig(Config):
@@ -30,14 +35,24 @@ class CasConfig(Config):
if self.cas_enabled:
self.cas_server_url = cas_config["server_url"]
self.cas_service_url = cas_config["service_url"]
public_base_url = cas_config.get("service_url") or self.public_baseurl
if public_base_url[-1] != "/":
public_base_url += "/"
# TODO Update this to a _synapse URL.
self.cas_service_url = (
public_base_url + "_matrix/client/r0/login/cas/ticket"
)
self.cas_displayname_attribute = cas_config.get("displayname_attribute")
self.cas_required_attributes = cas_config.get("required_attributes") or {}
required_attributes = cas_config.get("required_attributes") or {}
self.cas_required_attributes = _parsed_required_attributes_def(
required_attributes
)
else:
self.cas_server_url = None
self.cas_service_url = None
self.cas_displayname_attribute = None
self.cas_required_attributes = {}
self.cas_required_attributes = []
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
@@ -53,10 +68,6 @@ class CasConfig(Config):
#
#server_url: "https://cas-server.com"
# The public URL of the homeserver.
#
#service_url: "https://homeserver.domain.com:8448"
# The attribute of the CAS response to use as the display name.
#
# If unset, no displayname will be set.
@@ -73,3 +84,22 @@ class CasConfig(Config):
# userGroup: "staff"
# department: None
"""
# CAS uses a legacy required attributes mapping, not the one provided by
# SsoAttributeRequirement.
REQUIRED_ATTRIBUTES_SCHEMA = {
"type": "object",
"additionalProperties": {"anyOf": [{"type": "string"}, {"type": "null"}]},
}
def _parsed_required_attributes_def(
required_attributes: Any,
) -> List[SsoAttributeRequirement]:
validate_config(
REQUIRED_ATTRIBUTES_SCHEMA,
required_attributes,
config_path=("cas_config", "required_attributes"),
)
return [SsoAttributeRequirement(k, v) for k, v in required_attributes.items()]

View File

@@ -89,7 +89,7 @@ class ConsentConfig(Config):
def read_config(self, config, **kwargs):
consent_config = config.get("user_consent")
self.terms_template = self.read_templates(["terms.html"], autoescape=True)[0]
self.terms_template = self.read_template("terms.html")
if consent_config is None:
return

View File

@@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.config._base import Config
from synapse.types import JsonDict
class ExperimentalConfig(Config):
"""Config section for enabling experimental features"""
section = "experimental"
def read_config(self, config: JsonDict, **kwargs):
experimental = config.get("experimental_features") or {}
# MSC2858 (multiple SSO identity providers)
self.msc2858_enabled = experimental.get("msc2858_enabled", False) # type: bool

View File

@@ -24,6 +24,7 @@ from .cas import CasConfig
from .consent_config import ConsentConfig
from .database import DatabaseConfig
from .emailconfig import EmailConfig
from .experimental import ExperimentalConfig
from .federation import FederationConfig
from .groups import GroupsConfig
from .jwt_config import JWTConfig
@@ -57,6 +58,7 @@ class HomeServerConfig(RootConfig):
config_classes = [
ServerConfig,
ExperimentalConfig,
TlsConfig,
FederationConfig,
CacheConfig,

View File

@@ -14,7 +14,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import string
from collections import Counter
from typing import Iterable, Optional, Tuple, Type
@@ -54,8 +53,7 @@ class OIDCConfig(Config):
"Multiple OIDC providers have the idp_id %r." % idp_id
)
public_baseurl = self.public_baseurl
self.oidc_callback_url = public_baseurl + "_synapse/oidc/callback"
self.oidc_callback_url = self.public_baseurl + "_synapse/client/oidc/callback"
@property
def oidc_enabled(self) -> bool:
@@ -79,10 +77,14 @@ class OIDCConfig(Config):
# offer the user a choice of login mechanisms.
#
# idp_icon: An optional icon for this identity provider, which is presented
# by identity picker pages. If given, must be an MXC URI of the format
# mxc://<server-name>/<media-id>. (An easy way to obtain such an MXC URI
# is to upload an image to an (unencrypted) room and then copy the "url"
# from the source of the event.)
# by clients and Synapse's own IdP picker page. If given, must be an
# MXC URI of the format mxc://<server-name>/<media-id>. (An easy way to
# obtain such an MXC URI is to upload an image to an (unencrypted) room
# and then copy the "url" from the source of the event.)
#
# idp_brand: An optional brand for this identity provider, allowing clients
# to style the login flow according to the identity provider in question.
# See the spec for possible options here.
#
# discover: set to 'false' to disable the use of the OIDC discovery mechanism
# to discover endpoints. Defaults to true.
@@ -143,17 +145,21 @@ class OIDCConfig(Config):
#
# For the default provider, the following settings are available:
#
# sub: name of the claim containing a unique identifier for the
# user. Defaults to 'sub', which OpenID Connect compliant
# providers should provide.
# subject_claim: name of the claim containing a unique identifier
# for the user. Defaults to 'sub', which OpenID Connect
# compliant providers should provide.
#
# localpart_template: Jinja2 template for the localpart of the MXID.
# If this is not set, the user will be prompted to choose their
# own username.
# own username (see 'sso_auth_account_details.html' in the 'sso'
# section of this file).
#
# display_name_template: Jinja2 template for the display name to set
# on first login. If unset, no displayname will be set.
#
# email_template: Jinja2 template for the email address of the user.
# If unset, no email address will be added to the account.
#
# extra_attributes: a map of Jinja2 templates for extra attributes
# to send back to the client during login.
# Note that these are non-standard and clients will ignore them
@@ -189,6 +195,12 @@ class OIDCConfig(Config):
# userinfo_endpoint: "https://accounts.example.com/userinfo"
# jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
# skip_verification: true
# user_mapping_provider:
# config:
# subject_claim: "id"
# localpart_template: "{{{{ user.login }}}}"
# display_name_template: "{{{{ user.name }}}}"
# email_template: "{{{{ user.email }}}}"
# For use with Keycloak
#
@@ -203,6 +215,7 @@ class OIDCConfig(Config):
#
#- idp_id: github
# idp_name: Github
# idp_brand: org.matrix.github
# discover: false
# issuer: "https://github.com/"
# client_id: "your-client-id" # TO BE FILLED
@@ -214,8 +227,8 @@ class OIDCConfig(Config):
# user_mapping_provider:
# config:
# subject_claim: "id"
# localpart_template: "{{ user.login }}"
# display_name_template: "{{ user.name }}"
# localpart_template: "{{{{ user.login }}}}"
# display_name_template: "{{{{ user.name }}}}"
""".format(
mapping_provider=DEFAULT_USER_MAPPING_PROVIDER
)
@@ -226,11 +239,22 @@ OIDC_PROVIDER_CONFIG_SCHEMA = {
"type": "object",
"required": ["issuer", "client_id", "client_secret"],
"properties": {
# TODO: fix the maxLength here depending on what MSC2528 decides
# remember that we prefix the ID given here with `oidc-`
"idp_id": {"type": "string", "minLength": 1, "maxLength": 128},
"idp_id": {
"type": "string",
"minLength": 1,
# MSC2858 allows a maxlen of 255, but we prefix with "oidc-"
"maxLength": 250,
"pattern": "^[A-Za-z0-9._~-]+$",
},
"idp_name": {"type": "string"},
"idp_icon": {"type": "string"},
"idp_brand": {
"type": "string",
# MSC2758-style namespaced identifier
"minLength": 1,
"maxLength": 255,
"pattern": "^[a-z][a-z0-9_.-]*$",
},
"discover": {"type": "boolean"},
"issuer": {"type": "string"},
"client_id": {"type": "string"},
@@ -349,25 +373,8 @@ def _parse_oidc_config_dict(
config_path + ("user_mapping_provider", "module"),
)
# MSC2858 will apply certain limits in what can be used as an IdP id, so let's
# enforce those limits now.
# TODO: factor out this stuff to a generic function
idp_id = oidc_config.get("idp_id", "oidc")
# TODO: update this validity check based on what MSC2858 decides.
valid_idp_chars = set(string.ascii_lowercase + string.digits + "-._")
if any(c not in valid_idp_chars for c in idp_id):
raise ConfigError(
'idp_id may only contain a-z, 0-9, "-", ".", "_"',
config_path + ("idp_id",),
)
if idp_id[0] not in string.ascii_lowercase:
raise ConfigError(
"idp_id must start with a-z", config_path + ("idp_id",),
)
# prefix the given IDP with a prefix specific to the SSO mechanism, to avoid
# clashes with other mechs (such as SAML, CAS).
#
@@ -393,6 +400,7 @@ def _parse_oidc_config_dict(
idp_id=idp_id,
idp_name=oidc_config.get("idp_name", "OIDC"),
idp_icon=idp_icon,
idp_brand=oidc_config.get("idp_brand"),
discover=oidc_config.get("discover", True),
issuer=oidc_config["issuer"],
client_id=oidc_config["client_id"],
@@ -423,6 +431,9 @@ class OidcProviderConfig:
# Optional MXC URI for icon for this IdP.
idp_icon = attr.ib(type=Optional[str])
# Optional brand identifier for this IdP.
idp_brand = attr.ib(type=Optional[str])
# whether the OIDC discovery mechanism is used to discover endpoints
discover = attr.ib(type=bool)
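As an illustration (a standalone sketch, not Synapse code) of the constraint the new
`idp_id` schema entry enforces: between 1 and 250 characters matching the pattern
`^[A-Za-z0-9._~-]+$`, leaving room for the `oidc-` prefix under MSC2858's 255-character limit:
```python
import re

IDP_ID_PATTERN = re.compile(r"^[A-Za-z0-9._~-]+$")


def idp_id_is_valid(idp_id: str) -> bool:
    # Mirrors the JSON schema: minLength 1, maxLength 250, restricted character set.
    return 1 <= len(idp_id) <= 250 and bool(IDP_ID_PATTERN.fullmatch(idp_id))


assert idp_id_is_valid("github")
assert idp_id_is_valid("my-idp_1.0~beta")
assert not idp_id_is_valid("my idp")  # spaces are not allowed
assert not idp_id_is_valid("")        # must be at least one character
```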

View File

@@ -24,7 +24,7 @@ class RateLimitConfig:
defaults={"per_second": 0.17, "burst_count": 3.0},
):
self.per_second = config.get("per_second", defaults["per_second"])
self.burst_count = config.get("burst_count", defaults["burst_count"])
self.burst_count = int(config.get("burst_count", defaults["burst_count"]))
class FederationRateLimitConfig:
@@ -102,6 +102,20 @@ class RatelimitConfig(Config):
defaults={"per_second": 0.01, "burst_count": 3},
)
self.rc_3pid_validation = RateLimitConfig(
config.get("rc_3pid_validation") or {},
defaults={"per_second": 0.003, "burst_count": 5},
)
self.rc_invites_per_room = RateLimitConfig(
config.get("rc_invites", {}).get("per_room", {}),
defaults={"per_second": 0.3, "burst_count": 10},
)
self.rc_invites_per_user = RateLimitConfig(
config.get("rc_invites", {}).get("per_user", {}),
defaults={"per_second": 0.003, "burst_count": 5},
)
def generate_config_section(self, **kwargs):
return """\
## Ratelimiting ##
@@ -131,6 +145,9 @@ class RatelimitConfig(Config):
# users are joining rooms the server is already in (this is cheap) vs
# "remote" for when users are trying to join rooms not on the server (which
# can be more expensive)
# - one for ratelimiting how often a user or IP can attempt to validate a 3PID.
# - two for ratelimiting how often invites can be sent in a room or to a
# specific user.
#
# The defaults are as shown below.
#
@@ -164,7 +181,18 @@ class RatelimitConfig(Config):
# remote:
# per_second: 0.01
# burst_count: 3
#
#rc_3pid_validation:
# per_second: 0.003
# burst_count: 5
#
#rc_invites:
# per_room:
# per_second: 0.3
# burst_count: 10
# per_user:
# per_second: 0.003
# burst_count: 5
# Ratelimiting settings for incoming federation
#
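To give a feel for what `per_second` and `burst_count` mean in these new sections, here is
a generic token-bucket sketch. This is an illustration only and is not Synapse's actual
`Ratelimiter` implementation:
```python
import time


class TokenBucket:
    """Allows up to `burst_count` actions at once, refilling at `per_second`."""

    def __init__(self, per_second: float, burst_count: int):
        self.per_second = per_second
        self.burst_count = burst_count
        self.tokens = float(burst_count)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, never exceeding the burst size.
        self.tokens = min(self.burst_count, self.tokens + (now - self.last) * self.per_second)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# rc_invites per_room defaults: 0.3 invites per second, with bursts of up to 10.
bucket = TokenBucket(per_second=0.3, burst_count=10)
print(sum(bucket.allow() for _ in range(20)))  # roughly 10: the burst is allowed, the rest throttled
```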

View File

@@ -176,9 +176,7 @@ class RegistrationConfig(Config):
self.session_lifetime = session_lifetime
# The success template used during fallback auth.
self.fallback_success_template = self.read_templates(
["auth_success.html"], autoescape=True
)[0]
self.fallback_success_template = self.read_template("auth_success.html")
def generate_config_section(self, generate_secrets=False, **kwargs):
if generate_secrets:
@@ -380,6 +378,8 @@ class RegistrationConfig(Config):
# By default, any room aliases included in this list will be created
# as a publicly joinable room when the first user registers for the
# homeserver. This behaviour can be customised with the settings below.
# If the room already exists, make certain it is a publicly joinable
# room. The join rule of the room must be set to 'public'.
#
#auto_join_rooms:
# - "#example:example.com"

View File

@@ -17,9 +17,7 @@ import os
from collections import namedtuple
from typing import Dict, List
from netaddr import IPSet
from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST
from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST, generate_ip_set
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module
@@ -187,16 +185,17 @@ class ContentRepositoryConfig(Config):
"to work"
)
self.url_preview_ip_range_blacklist = IPSet(
config["url_preview_ip_range_blacklist"]
)
# we always blacklist '0.0.0.0' and '::', which are supposed to be
# unroutable addresses.
self.url_preview_ip_range_blacklist.update(["0.0.0.0", "::"])
self.url_preview_ip_range_blacklist = generate_ip_set(
config["url_preview_ip_range_blacklist"],
["0.0.0.0", "::"],
config_path=("url_preview_ip_range_blacklist",),
)
self.url_preview_ip_range_whitelist = IPSet(
config.get("url_preview_ip_range_whitelist", ())
self.url_preview_ip_range_whitelist = generate_ip_set(
config.get("url_preview_ip_range_whitelist", ()),
config_path=("url_preview_ip_range_whitelist",),
)
self.url_preview_url_blacklist = config.get("url_preview_url_blacklist", ())

View File

@@ -17,8 +17,7 @@
import logging
from typing import Any, List
import attr
from synapse.config.sso import SsoAttributeRequirement
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module, load_python_module
@@ -194,8 +193,8 @@ class SAML2Config(Config):
optional_attributes.add(self.saml2_grandfathered_mxid_source_attribute)
optional_attributes -= required_attributes
metadata_url = public_baseurl + "_matrix/saml2/metadata.xml"
response_url = public_baseurl + "_matrix/saml2/authn_response"
metadata_url = public_baseurl + "_synapse/client/saml2/metadata.xml"
response_url = public_baseurl + "_synapse/client/saml2/authn_response"
return {
"entityid": metadata_url,
"service": {
@@ -233,10 +232,10 @@ class SAML2Config(Config):
# enable SAML login.
#
# Once SAML support is enabled, a metadata file will be exposed at
# https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to
# https://<server>:<port>/_synapse/client/saml2/metadata.xml, which you may be able to
# use to configure your SAML IdP with. Alternatively, you can manually configure
# the IdP to use an ACS location of
# https://<server>:<port>/_matrix/saml2/authn_response.
# https://<server>:<port>/_synapse/client/saml2/authn_response.
#
saml2_config:
# `sp_config` is the configuration for the pysaml2 Service Provider.
@@ -396,32 +395,18 @@ class SAML2Config(Config):
}
@attr.s(frozen=True)
class SamlAttributeRequirement:
"""Object describing a single requirement for SAML attributes."""
attribute = attr.ib(type=str)
value = attr.ib(type=str)
JSON_SCHEMA = {
"type": "object",
"properties": {"attribute": {"type": "string"}, "value": {"type": "string"}},
"required": ["attribute", "value"],
}
ATTRIBUTE_REQUIREMENTS_SCHEMA = {
"type": "array",
"items": SamlAttributeRequirement.JSON_SCHEMA,
"items": SsoAttributeRequirement.JSON_SCHEMA,
}
def _parse_attribute_requirements_def(
attribute_requirements: Any,
) -> List[SamlAttributeRequirement]:
) -> List[SsoAttributeRequirement]:
validate_config(
ATTRIBUTE_REQUIREMENTS_SCHEMA,
attribute_requirements,
config_path=["saml2_config", "attribute_requirements"],
config_path=("saml2_config", "attribute_requirements"),
)
return [SamlAttributeRequirement(**x) for x in attribute_requirements]
return [SsoAttributeRequirement(**x) for x in attribute_requirements]

View File

@@ -15,6 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import itertools
import logging
import os.path
import re
@@ -23,7 +24,7 @@ from typing import Any, Dict, Iterable, List, Optional, Set
import attr
import yaml
from netaddr import IPSet
from netaddr import AddrFormatError, IPNetwork, IPSet
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.util.stringutils import parse_and_validate_server_name
@@ -40,6 +41,66 @@ logger = logging.Logger(__name__)
# in the list.
DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
def _6to4(network: IPNetwork) -> IPNetwork:
"""Convert an IPv4 network into a 6to4 IPv6 network per RFC 3056."""
# 6to4 networks consist of:
# * 2002 as the first 16 bits
# * The first IPv4 address in the network hex-encoded as the next 32 bits
# * The new prefix length needs to include the bits from the 2002 prefix.
hex_network = hex(network.first)[2:]
hex_network = ("0" * (8 - len(hex_network))) + hex_network
return IPNetwork(
"2002:%s:%s::/%d" % (hex_network[:4], hex_network[4:], 16 + network.prefixlen,)
)
def generate_ip_set(
ip_addresses: Optional[Iterable[str]],
extra_addresses: Optional[Iterable[str]] = None,
config_path: Optional[Iterable[str]] = None,
) -> IPSet:
"""
Generate an IPSet from a list of IP addresses or CIDRs.
Additionally, for each IPv4 network in the list of IP addresses, also
includes the corresponding IPv6 networks.
This includes:
* IPv4-Compatible IPv6 Address (see RFC 4291, section 2.5.5.1)
* IPv4-Mapped IPv6 Address (see RFC 4291, section 2.5.5.2)
* 6to4 Address (see RFC 3056, section 2)
Args:
ip_addresses: An iterable of IP addresses or CIDRs.
extra_addresses: An iterable of IP addresses or CIDRs.
config_path: The path in the configuration for error messages.
Returns:
A new IP set.
"""
result = IPSet()
for ip in itertools.chain(ip_addresses or (), extra_addresses or ()):
try:
network = IPNetwork(ip)
except AddrFormatError as e:
raise ConfigError(
"Invalid IP range provided: %s." % (ip,), config_path
) from e
result.add(network)
# It is possible that these already exist in the set, but that's OK.
if ":" not in str(network):
result.add(IPNetwork(network).ipv6(ipv4_compatible=True))
result.add(IPNetwork(network).ipv6(ipv4_compatible=False))
result.add(_6to4(network))
return result
# IP ranges that are considered private / unroutable / don't make sense.
DEFAULT_IP_RANGE_BLACKLIST = [
# Localhost
"127.0.0.0/8",
@@ -53,6 +114,8 @@ DEFAULT_IP_RANGE_BLACKLIST = [
"192.0.0.0/24",
# Link-local networks.
"169.254.0.0/16",
# Formerly used for 6to4 relay.
"192.88.99.0/24",
# Testing networks.
"198.18.0.0/15",
"192.0.2.0/24",
@@ -66,6 +129,12 @@ DEFAULT_IP_RANGE_BLACKLIST = [
"fe80::/10",
# Unique local addresses.
"fc00::/7",
# Testing networks.
"2001:db8::/32",
# Multicast.
"ff00::/8",
# Site-local addresses
"fec0::/10",
]
DEFAULT_ROOM_VERSION = "6"
@@ -294,17 +363,15 @@ class ServerConfig(Config):
)
# Attempt to create an IPSet from the given ranges
try:
self.ip_range_blacklist = IPSet(ip_range_blacklist)
except Exception as e:
raise ConfigError("Invalid range(s) provided in ip_range_blacklist.") from e
# Always blacklist 0.0.0.0, ::
self.ip_range_blacklist.update(["0.0.0.0", "::"])
try:
self.ip_range_whitelist = IPSet(config.get("ip_range_whitelist", ()))
except Exception as e:
raise ConfigError("Invalid range(s) provided in ip_range_whitelist.") from e
# Always blacklist 0.0.0.0, ::
self.ip_range_blacklist = generate_ip_set(
ip_range_blacklist, ["0.0.0.0", "::"], config_path=("ip_range_blacklist",)
)
self.ip_range_whitelist = generate_ip_set(
config.get("ip_range_whitelist", ()), config_path=("ip_range_whitelist",)
)
# The federation_ip_range_blacklist is used for backwards-compatibility
# and only applies to federation and identity servers. If it is not given,
@@ -312,14 +379,12 @@ class ServerConfig(Config):
federation_ip_range_blacklist = config.get(
"federation_ip_range_blacklist", ip_range_blacklist
)
try:
self.federation_ip_range_blacklist = IPSet(federation_ip_range_blacklist)
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in federation_ip_range_blacklist."
) from e
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
self.federation_ip_range_blacklist = generate_ip_set(
federation_ip_range_blacklist,
["0.0.0.0", "::"],
config_path=("federation_ip_range_blacklist",),
)
self.start_pushers = config.get("start_pushers", True)
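As a rough sanity check of the IPv4-to-IPv6 expansion that generate_ip_set performs above (a sketch only; the import path comes from the diff above, while the example range and addresses are made up):

from netaddr import IPAddress

from synapse.config.server import generate_ip_set

# Expanding a single IPv4 CIDR also covers its IPv6 equivalents.
blacklist = generate_ip_set(["192.0.2.0/24"], ["0.0.0.0", "::"])

assert IPAddress("192.0.2.1") in blacklist          # the original IPv4 range
assert IPAddress("::ffff:192.0.2.1") in blacklist   # IPv4-mapped (RFC 4291, 2.5.5.2)
assert IPAddress("::192.0.2.1") in blacklist        # IPv4-compatible (RFC 4291, 2.5.5.1)
assert IPAddress("2002:c000:200::1") in blacklist   # 6to4 (RFC 3056): 192.0.2.0/24 -> 2002:c000:0200::/40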

View File

@@ -12,11 +12,28 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Dict
from typing import Any, Dict, Optional
import attr
from ._base import Config
@attr.s(frozen=True)
class SsoAttributeRequirement:
"""Object describing a single requirement for SSO attributes."""
attribute = attr.ib(type=str)
# If a value is not given, then the attribute must simply exist.
value = attr.ib(type=Optional[str])
JSON_SCHEMA = {
"type": "object",
"properties": {"attribute": {"type": "string"}, "value": {"type": "string"}},
"required": ["attribute", "value"],
}
class SSOConfig(Config):
"""SSO Configuration
"""
@@ -27,7 +44,7 @@ class SSOConfig(Config):
sso_config = config.get("sso") or {} # type: Dict[str, Any]
# The sso-specific template_dir
template_dir = sso_config.get("template_dir")
self.sso_template_dir = sso_config.get("template_dir")
# Read templates from disk
(
@@ -48,7 +65,7 @@ class SSOConfig(Config):
"sso_auth_success.html",
"sso_auth_bad_user.html",
],
template_dir,
self.sso_template_dir,
)
# These templates have no placeholders, so render them here
@@ -106,15 +123,19 @@ class SSOConfig(Config):
#
# When rendering, this template is given the following variables:
# * redirect_url: the URL that the user will be redirected to after
# login. Needs manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
# login.
#
# * server_name: the homeserver's name.
#
# * providers: a list of available Identity Providers. Each element is
# an object with the following attributes:
#
# * idp_id: unique identifier for the IdP
# * idp_name: user-facing name for the IdP
# * idp_icon: if specified in the IdP config, an MXC URI for an icon
# for the IdP
# * idp_brand: if specified in the IdP config, a textual identifier
# for the brand of the IdP
#
# The rendered HTML page should contain a form which submits its results
# back as a GET request, with the following query parameters:
@@ -124,33 +145,101 @@ class SSOConfig(Config):
#
# * idp: the 'idp_id' of the chosen IDP.
#
# * HTML page to prompt new users to enter a userid and confirm other
# details: 'sso_auth_account_details.html'. This is only shown if the
# SSO implementation (with any user_mapping_provider) does not return
# a localpart.
#
# When rendering, this template is given the following variables:
#
# * server_name: the homeserver's name.
#
# * idp: details of the SSO Identity Provider that the user logged in
# with: an object with the following attributes:
#
# * idp_id: unique identifier for the IdP
# * idp_name: user-facing name for the IdP
# * idp_icon: if specified in the IdP config, an MXC URI for an icon
# for the IdP
# * idp_brand: if specified in the IdP config, a textual identifier
# for the brand of the IdP
#
# * user_attributes: an object containing details about the user that
# we received from the IdP. May have the following attributes:
#
# * display_name: the user's display_name
# * emails: a list of email addresses
#
# The template should render a form which submits the following fields:
#
# * username: the localpart of the user's chosen user id
#
# * HTML page allowing the user to consent to the server's terms and
# conditions. This is only shown for new users, and only if
# `user_consent.require_at_registration` is set.
#
# When rendering, this template is given the following variables:
#
# * server_name: the homeserver's name.
#
# * user_id: the user's proposed Matrix ID.
#
# * user_profile.display_name: the user's proposed display name, if any.
#
# * consent_version: the version of the terms that the user will be
# shown
#
# * terms_url: a link to the page showing the terms.
#
# The template should render a form which submits the following fields:
#
# * accepted_version: the version of the terms accepted by the user
# (ie, 'consent_version' from the input variables).
#
# * HTML page for a confirmation step before redirecting back to the client
# with the login token: 'sso_redirect_confirm.html'.
#
# When rendering, this template is given three variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
# When rendering, this template is given the following variables:
#
# * redirect_url: the URL the user is about to be redirected to.
#
# * display_url: the same as `redirect_url`, but with the query
# parameters stripped. The intention is to have a
# human-readable URL to show to users, not to use it as
# the final address to redirect to. Needs manual escaping
# (see https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
# the final address to redirect to.
#
# * server_name: the homeserver's name.
#
# * new_user: a boolean indicating whether this is the user's first time
# logging in.
#
# * user_id: the user's matrix ID.
#
# * user_profile.avatar_url: an MXC URI for the user's avatar, if any.
# None if the user has not set an avatar.
#
# * user_profile.display_name: the user's display name. None if the user
# has not set a display name.
#
# * HTML page which notifies the user that they are authenticating to confirm
# an operation on their account during the user interactive authentication
# process: 'sso_auth_confirm.html'.
#
# When rendering, this template is given the following variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
# * redirect_url: the URL the user is about to be redirected to.
#
# * description: the operation which the user is being asked to confirm
#
# * idp: details of the Identity Provider that we will use to confirm
# the user's identity: an object with the following attributes:
#
# * idp_id: unique identifier for the IdP
# * idp_name: user-facing name for the IdP
# * idp_icon: if specified in the IdP config, an MXC URI for an icon
# for the IdP
# * idp_brand: if specified in the IdP config, a textual identifier
# for the brand of the IdP
#
# * HTML page shown after a successful user interactive authentication session:
# 'sso_auth_success.html'.
#
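As a rough illustration of how the variables documented above reach a custom template, here is a hypothetical, stripped-down stand-in for sso_redirect_confirm.html rendered with Jinja2. The variable names come from the comment block above; the HTML and values are made up and are not the stock template. Autoescaping is assumed, which would match the removal of the manual-escaping notes above.

import jinja2

# Illustrative stand-in for sso_redirect_confirm.html (not the real template).
TEMPLATE = jinja2.Template(
    '<p>{% if new_user %}Welcome, {{ user_id }}!{% else %}Welcome back, {{ user_id }}.{% endif %}</p>'
    '<p>Continue to <a href="{{ redirect_url }}">{{ display_url }}</a> on {{ server_name }}.</p>',
    autoescape=True,
)

html = TEMPLATE.render(
    redirect_url="https://app.example.com/?loginToken=secret",
    display_url="app.example.com",
    server_name="example.com",
    new_user=True,
    user_id="@alice:example.com",
)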

View File

@@ -125,19 +125,24 @@ class FederationPolicyForHTTPS:
self._no_verify_ssl_context = _no_verify_ssl.getContext()
self._no_verify_ssl_context.set_info_callback(_context_info_cb)
def get_options(self, host: bytes):
self._should_verify = self._config.federation_verify_certificates
self._federation_certificate_verification_whitelist = (
self._config.federation_certificate_verification_whitelist
)
def get_options(self, host: bytes):
# IPolicyForHTTPS.get_options takes bytes, but we want to compare
# against the str whitelist. The hostnames in the whitelist are already
# IDNA-encoded like the hosts will be here.
ascii_host = host.decode("ascii")
# Check if certificate verification has been enabled
should_verify = self._config.federation_verify_certificates
should_verify = self._should_verify
# Check if we've disabled certificate verification for this host
if should_verify:
for regex in self._config.federation_certificate_verification_whitelist:
if self._should_verify:
for regex in self._federation_certificate_verification_whitelist:
if regex.match(ascii_host):
should_verify = False
break

View File

@@ -17,6 +17,8 @@
import inspect
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from synapse.rest.media.v1._base import FileInfo
from synapse.rest.media.v1.media_storage import ReadableFileWrapper
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.types import Collection
from synapse.util.async_helpers import maybe_awaitable
@@ -214,3 +216,48 @@ class SpamChecker:
return behaviour
return RegistrationBehaviour.ALLOW
async def check_media_file_for_spam(
self, file_wrapper: ReadableFileWrapper, file_info: FileInfo
) -> bool:
"""Checks if a piece of newly uploaded media should be blocked.
This will be called for local uploads, downloads of remote media, each
thumbnail generated for those, and web pages/images used for URL
previews.
Note that care should be taken to not do blocking IO operations in the
main thread. For example, to get the contents of a file a module
should do::
async def check_media_file_for_spam(
self, file: ReadableFileWrapper, file_info: FileInfo
) -> bool:
buffer = BytesIO()
await file.write_chunks_to(buffer.write)
if buffer.getvalue() == b"Hello World":
return True
return False
Args:
file: An object that allows reading the contents of the media.
file_info: Metadata about the file.
Returns:
True if the media should be blocked or False if it should be
allowed.
"""
for spam_checker in self.spam_checkers:
# For backwards compatibility, only run if the method exists on the
# spam checker
checker = getattr(spam_checker, "check_media_file_for_spam", None)
if checker:
spam = await maybe_awaitable(checker(file_wrapper, file_info))
if spam:
return True
return False
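A hypothetical module-side implementation of this new hook might look like the sketch below. ExampleSpamChecker and KNOWN_BAD_SHA256 are made-up names; only write_chunks_to is taken from the docstring above, and the rest of the spam-checker module scaffolding (config parsing, registration) is omitted.

import hashlib
from io import BytesIO

KNOWN_BAD_SHA256 = {"<hex digest of some known-bad file>"}

class ExampleSpamChecker:
    async def check_media_file_for_spam(self, file_wrapper, file_info) -> bool:
        # Read the media into memory and block it if its digest is on the list.
        buffer = BytesIO()
        await file_wrapper.write_chunks_to(buffer.write)
        digest = hashlib.sha256(buffer.getvalue()).hexdigest()
        return digest in KNOWN_BAD_SHA256  # True means "block this media"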

View File

@@ -810,7 +810,7 @@ class FederationClient(FederationBase):
"User's homeserver does not support this room version",
Codes.UNSUPPORTED_ROOM_VERSION,
)
elif e.code == 403:
elif e.code in (403, 429):
raise e.to_synapse_error()
else:
raise

View File

@@ -142,6 +142,8 @@ class FederationSender:
self._wake_destinations_needing_catchup,
)
self._external_cache = hs.get_external_cache()
def _get_per_destination_queue(self, destination: str) -> PerDestinationQueue:
"""Get or create a PerDestinationQueue for the given destination
@@ -197,22 +199,40 @@ class FederationSender:
if not event.internal_metadata.should_proactively_send():
return
try:
# Get the state from before the event.
# We need to make sure that this is the state from before
# the event and not from after it.
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
destinations = await self.state.get_hosts_in_room_at_events(
event.room_id, event_ids=event.prev_event_ids()
destinations = None # type: Optional[Set[str]]
if not event.prev_event_ids():
# If there are no prev event IDs then the state is empty
# and so no remote servers in the room
destinations = set()
else:
# We check the external cache for the destinations, which is
# stored per state group.
sg = await self._external_cache.get(
"event_to_prev_state_group", event.event_id
)
except Exception:
logger.exception(
"Failed to calculate hosts in room for event: %s",
event.event_id,
)
return
if sg:
destinations = await self._external_cache.get(
"get_joined_hosts", str(sg)
)
if destinations is None:
try:
# Get the state from before the event.
# We need to make sure that this is the state from before
# the event and not from after it.
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
destinations = await self.state.get_hosts_in_room_at_events(
event.room_id, event_ids=event.prev_event_ids()
)
except Exception:
logger.exception(
"Failed to calculate hosts in room for event: %s",
event.event_id,
)
return
destinations = {
d
@@ -452,7 +472,7 @@ class FederationSender:
self._processing_pending_presence = False
def send_presence_to_destinations(
self, states: List[UserPresenceState], destinations: List[str]
self, states: List[UserPresenceState], destinations: Iterable[str]
) -> None:
"""Send the given presence states to the given destinations.
destinations (list[str])

View File

@@ -18,6 +18,7 @@
import logging
from synapse.api.errors import Codes, SynapseError
from synapse.handlers.profile import MAX_AVATAR_URL_LEN, MAX_DISPLAYNAME_LEN
from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
@@ -32,6 +33,11 @@ logger = logging.getLogger(__name__)
# TODO: Flairs
# Note that the maximum lengths are somewhat arbitrary.
MAX_SHORT_DESC_LEN = 1000
MAX_LONG_DESC_LEN = 10000
class GroupsServerWorkerHandler:
def __init__(self, hs):
self.hs = hs
@@ -508,11 +514,26 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
)
profile = {}
for keyname in ("name", "avatar_url", "short_description", "long_description"):
for keyname, max_length in (
("name", MAX_DISPLAYNAME_LEN),
("avatar_url", MAX_AVATAR_URL_LEN),
("short_description", MAX_SHORT_DESC_LEN),
("long_description", MAX_LONG_DESC_LEN),
):
if keyname in content:
value = content[keyname]
if not isinstance(value, str):
raise SynapseError(400, "%r value is not a string" % (keyname,))
raise SynapseError(
400,
"%r value is not a string" % (keyname,),
errcode=Codes.INVALID_PARAM,
)
if len(value) > max_length:
raise SynapseError(
400,
"Invalid %s parameter" % (keyname,),
errcode=Codes.INVALID_PARAM,
)
profile[keyname] = value
await self.store.update_group_profile(group_id, profile)

View File

@@ -14,6 +14,7 @@
# limitations under the License.
import logging
from typing import TYPE_CHECKING
import twisted
import twisted.internet.error
@@ -22,6 +23,9 @@ from twisted.web.resource import Resource
from synapse.app import check_bind_error
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
ACME_REGISTER_FAIL_ERROR = """
@@ -35,12 +39,12 @@ solutions, please read https://github.com/matrix-org/synapse/blob/master/docs/AC
class AcmeHandler:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.reactor = hs.get_reactor()
self._acme_domain = hs.config.acme_domain
async def start_listening(self):
async def start_listening(self) -> None:
from synapse.handlers import acme_issuing_service
# Configure logging for txacme, if you need to debug
@@ -85,7 +89,7 @@ class AcmeHandler:
logger.error(ACME_REGISTER_FAIL_ERROR)
raise
async def provision_certificate(self):
async def provision_certificate(self) -> None:
logger.warning("Reprovisioning %s", self._acme_domain)
@@ -110,5 +114,3 @@ class AcmeHandler:
except Exception:
logger.exception("Failed saving!")
raise
return True

View File

@@ -22,8 +22,10 @@ only need (and may only have available) if we are doing ACME, so is designed to
imported conditionally.
"""
import logging
from typing import Dict, Iterable, List
import attr
import pem
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from josepy import JWKRSA
@@ -36,20 +38,27 @@ from txacme.util import generate_private_key
from zope.interface import implementer
from twisted.internet import defer
from twisted.internet.interfaces import IReactorTCP
from twisted.python.filepath import FilePath
from twisted.python.url import URL
from twisted.web.resource import IResource
logger = logging.getLogger(__name__)
def create_issuing_service(reactor, acme_url, account_key_file, well_known_resource):
def create_issuing_service(
reactor: IReactorTCP,
acme_url: str,
account_key_file: str,
well_known_resource: IResource,
) -> AcmeIssuingService:
"""Create an ACME issuing service, and attach it to a web Resource
Args:
reactor: twisted reactor
acme_url (str): URL to use to request certificates
account_key_file (str): where to store the account key
well_known_resource (twisted.web.IResource): web resource for .well-known.
acme_url: URL to use to request certificates
account_key_file: where to store the account key
well_known_resource: web resource for .well-known.
we will attach a child resource for "acme-challenge".
Returns:
@@ -83,18 +92,20 @@ class ErsatzStore:
A store that only stores in memory.
"""
certs = attr.ib(default=attr.Factory(dict))
certs = attr.ib(type=Dict[bytes, List[bytes]], default=attr.Factory(dict))
def store(self, server_name, pem_objects):
def store(
self, server_name: bytes, pem_objects: Iterable[pem.AbstractPEMObject]
) -> defer.Deferred:
self.certs[server_name] = [o.as_bytes() for o in pem_objects]
return defer.succeed(None)
def load_or_create_client_key(key_file):
def load_or_create_client_key(key_file: str) -> JWKRSA:
"""Load the ACME account key from a file, creating it if it does not exist.
Args:
key_file (str): name of the file to use as the account key
key_file: name of the file to use as the account key
"""
# this is based on txacme.endpoint.load_or_create_client_key, but doesn't
# hardcode the 'client.key' filename

View File

@@ -61,6 +61,7 @@ from synapse.http.site import SynapseRequest
from synapse.logging.context import defer_to_thread
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.module_api import ModuleApi
from synapse.storage.roommember import ProfileInfo
from synapse.types import JsonDict, Requester, UserID
from synapse.util import stringutils as stringutils
from synapse.util.async_helpers import maybe_awaitable
@@ -567,16 +568,6 @@ class AuthHandler(BaseHandler):
session.session_id, login_type, result
)
except LoginError as e:
if login_type == LoginType.EMAIL_IDENTITY:
# riot used to have a bug where it would request a new
# validation token (thus sending a new email) each time it
# got a 401 with a 'flows' field.
# (https://github.com/vector-im/vector-web/issues/2447).
#
# Grandfather in the old behaviour for now to avoid
# breaking old riot deployments.
raise
# this step failed. Merge the error dict into the response
# so that the client can have another go.
errordict = e.error_dict()
@@ -1387,7 +1378,9 @@ class AuthHandler(BaseHandler):
)
return self._sso_auth_confirm_template.render(
description=session.description, redirect_url=redirect_url,
description=session.description,
redirect_url=redirect_url,
idp=sso_auth_provider,
)
async def complete_sso_login(
@@ -1396,6 +1389,7 @@ class AuthHandler(BaseHandler):
request: Request,
client_redirect_url: str,
extra_attributes: Optional[JsonDict] = None,
new_user: bool = False,
):
"""Having figured out a mxid for this user, complete the HTTP request
@@ -1406,6 +1400,8 @@ class AuthHandler(BaseHandler):
process.
extra_attributes: Extra attributes which will be passed to the client
during successful login. Must be JSON serializable.
new_user: True if we should use wording appropriate to a user who has just
registered.
"""
# If the account has been deactivated, do not proceed with the login
# flow.
@@ -1414,8 +1410,17 @@ class AuthHandler(BaseHandler):
respond_with_html(request, 403, self._sso_account_deactivated_template)
return
profile = await self.store.get_profileinfo(
UserID.from_string(registered_user_id).localpart
)
self._complete_sso_login(
registered_user_id, request, client_redirect_url, extra_attributes
registered_user_id,
request,
client_redirect_url,
extra_attributes,
new_user=new_user,
user_profile_data=profile,
)
def _complete_sso_login(
@@ -1424,12 +1429,18 @@ class AuthHandler(BaseHandler):
request: Request,
client_redirect_url: str,
extra_attributes: Optional[JsonDict] = None,
new_user: bool = False,
user_profile_data: Optional[ProfileInfo] = None,
):
"""
The synchronous portion of complete_sso_login.
This exists purely for backwards compatibility of synapse.module_api.ModuleApi.
"""
if user_profile_data is None:
user_profile_data = ProfileInfo(None, None)
# Store any extra attributes which will be passed in the login response.
# Note that this is per-user so it may overwrite a previous value, this
# is considered OK since the newest SSO attributes should be most valid.
@@ -1461,12 +1472,27 @@ class AuthHandler(BaseHandler):
# Remove the query parameters from the redirect URL to get a shorter version of
# it. This is only to display a human-readable URL in the template, but not the
# URL we redirect users to.
redirect_url_no_params = client_redirect_url.split("?")[0]
url_parts = urllib.parse.urlsplit(client_redirect_url)
if url_parts.scheme == "https":
# for an https uri, just show the netloc (ie, the hostname. Specifically,
# the bit between "//" and "/"; this includes any potential
# "username:password@" prefix.)
display_url = url_parts.netloc
else:
# for other uris, strip the query-params (including the login token) and
# fragment.
display_url = urllib.parse.urlunsplit(
(url_parts.scheme, url_parts.netloc, url_parts.path, "", "")
)
html = self._sso_redirect_confirm_template.render(
display_url=redirect_url_no_params,
display_url=display_url,
redirect_url=redirect_url,
server_name=self._server_name,
new_user=new_user,
user_id=registered_user_id,
user_profile=user_profile_data,
)
respond_with_html(request, 200, html)
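To make the new display_url behaviour concrete, here is a small standalone sketch of the same urlsplit/urlunsplit logic, using made-up URLs:

import urllib.parse

def display_url_for(client_redirect_url: str) -> str:
    # https URLs show only the host; anything else has its query string
    # (including the login token) and fragment stripped.
    url_parts = urllib.parse.urlsplit(client_redirect_url)
    if url_parts.scheme == "https":
        return url_parts.netloc
    return urllib.parse.urlunsplit(
        (url_parts.scheme, url_parts.netloc, url_parts.path, "", "")
    )

assert display_url_for("https://app.example.com/room?loginToken=secret") == "app.example.com"
assert display_url_for("element://vector/webapp?loginToken=secret") == "element://vector/webapp"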

View File

@@ -14,7 +14,7 @@
# limitations under the License.
import logging
import urllib.parse
from typing import TYPE_CHECKING, Dict, Optional
from typing import TYPE_CHECKING, Dict, List, Optional
from xml.etree import ElementTree as ET
import attr
@@ -49,7 +49,7 @@ class CasError(Exception):
@attr.s(slots=True, frozen=True)
class CasResponse:
username = attr.ib(type=str)
attributes = attr.ib(type=Dict[str, Optional[str]])
attributes = attr.ib(type=Dict[str, List[Optional[str]]])
class CasHandler:
@@ -80,9 +80,10 @@ class CasHandler:
# user-facing name of this auth provider
self.idp_name = "CAS"
# we do not currently support icons for CAS auth, but this is required by
# we do not currently support brands/icons for CAS auth, but this is required by
# the SsoIdentityProvider protocol type.
self.idp_icon = None
self.idp_brand = None
self._sso_handler = hs.get_sso_handler()
@@ -99,11 +100,7 @@ class CasHandler:
Returns:
The URL to use as a "service" parameter.
"""
return "%s%s?%s" % (
self._cas_service_url,
"/_matrix/client/r0/login/cas/ticket",
urllib.parse.urlencode(args),
)
return "%s?%s" % (self._cas_service_url, urllib.parse.urlencode(args),)
async def _validate_ticket(
self, ticket: str, service_args: Dict[str, str]
@@ -172,7 +169,7 @@ class CasHandler:
# Iterate through the nodes and pull out the user and any extra attributes.
user = None
attributes = {}
attributes = {} # type: Dict[str, List[Optional[str]]]
for child in root[0]:
if child.tag.endswith("user"):
user = child.text
@@ -185,7 +182,7 @@ class CasHandler:
tag = attribute.tag
if "}" in tag:
tag = tag.split("}")[1]
attributes[tag] = attribute.text
attributes.setdefault(tag, []).append(attribute.text)
# Ensure a user was found.
if user is None:
@@ -306,29 +303,10 @@ class CasHandler:
# Ensure that the attributes of the logged in user meet the required
# attributes.
for required_attribute, required_value in self._cas_required_attributes.items():
# If required attribute was not in CAS Response - Forbidden
if required_attribute not in cas_response.attributes:
self._sso_handler.render_error(
request,
"unauthorised",
"You are not authorised to log in here.",
401,
)
return
# Also need to check value
if required_value is not None:
actual_value = cas_response.attributes[required_attribute]
# If required attribute value does not match expected - Forbidden
if required_value != actual_value:
self._sso_handler.render_error(
request,
"unauthorised",
"You are not authorised to log in here.",
401,
)
return
if not self._sso_handler.check_required_attributes(
request, cas_response.attributes, self._cas_required_attributes
):
return
# Call the mapper to register/login the user
@@ -375,9 +353,10 @@ class CasHandler:
if failures:
raise RuntimeError("CAS is not expected to de-duplicate Matrix IDs")
# Arbitrarily use the first attribute found.
display_name = cas_response.attributes.get(
self._cas_displayname_attribute, None
)
self._cas_displayname_attribute, [None]
)[0]
return UserAttributes(localpart=localpart, display_name=display_name)
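A small sketch of the multi-valued attribute handling introduced above (tags and values are made up): repeated CAS attributes now accumulate into lists, and get_user_attributes picks the first value for the display name.

attributes = {}  # type: dict
for tag, text in [("displayName", "Alice"), ("group", "staff"), ("group", "admins")]:
    attributes.setdefault(tag, []).append(text)

assert attributes == {"displayName": ["Alice"], "group": ["staff", "admins"]}

# Mirrors the display-name lookup above: the first value wins.
display_name = attributes.get("displayName", [None])[0]
assert display_name == "Alice"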

View File

@@ -15,7 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Tuple
from synapse.api import errors
from synapse.api.constants import EventTypes
@@ -62,7 +62,7 @@ class DeviceWorkerHandler(BaseHandler):
self._auth_handler = hs.get_auth_handler()
@trace
async def get_devices_by_user(self, user_id: str) -> List[Dict[str, Any]]:
async def get_devices_by_user(self, user_id: str) -> List[JsonDict]:
"""
Retrieve the given user's devices
@@ -85,7 +85,7 @@ class DeviceWorkerHandler(BaseHandler):
return devices
@trace
async def get_device(self, user_id: str, device_id: str) -> Dict[str, Any]:
async def get_device(self, user_id: str, device_id: str) -> JsonDict:
""" Retrieve the given device
Args:
@@ -598,7 +598,7 @@ class DeviceHandler(DeviceWorkerHandler):
def _update_device_from_client_ips(
device: Dict[str, Any], client_ips: Dict[Tuple[str, str], Dict[str, Any]]
device: JsonDict, client_ips: Dict[Tuple[str, str], JsonDict]
) -> None:
ip = client_ips.get((device["user_id"], device["device_id"]), {})
device.update({"last_seen_ts": ip.get("last_seen"), "last_seen_ip": ip.get("ip")})
@@ -946,8 +946,8 @@ class DeviceListUpdater:
async def process_cross_signing_key_update(
self,
user_id: str,
master_key: Optional[Dict[str, Any]],
self_signing_key: Optional[Dict[str, Any]],
master_key: Optional[JsonDict],
self_signing_key: Optional[JsonDict],
) -> List[str]:
"""Process the given new master and self-signing key for the given remote user.

View File

@@ -16,7 +16,7 @@
# limitations under the License.
import logging
from typing import Dict, List, Optional, Tuple
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple
import attr
from canonicaljson import encode_canonical_json
@@ -31,6 +31,7 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.types import (
JsonDict,
UserID,
get_domain_from_id,
get_verify_key_from_cross_signing_key,
@@ -40,11 +41,14 @@ from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
class E2eKeysHandler:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.federation = hs.get_federation_client()
self.device_handler = hs.get_device_handler()
@@ -78,7 +82,9 @@ class E2eKeysHandler:
)
@trace
async def query_devices(self, query_body, timeout, from_user_id):
async def query_devices(
self, query_body: JsonDict, timeout: int, from_user_id: str
) -> JsonDict:
""" Handle a device key query from a client
{
@@ -98,12 +104,14 @@ class E2eKeysHandler:
}
Args:
from_user_id (str): the user making the query. This is used when
from_user_id: the user making the query. This is used when
adding cross-signing signatures to limit what signatures users
can see.
"""
device_keys_query = query_body.get("device_keys", {})
device_keys_query = query_body.get(
"device_keys", {}
) # type: Dict[str, Iterable[str]]
# separate users by domain.
# make a map from domain to user_id to device_ids
@@ -121,7 +129,8 @@ class E2eKeysHandler:
set_tag("remote_key_query", remote_queries)
# First get local devices.
failures = {}
# A map of destination -> failure response.
failures = {} # type: Dict[str, JsonDict]
results = {}
if local_query:
local_result = await self.query_local_devices(local_query)
@@ -135,9 +144,10 @@ class E2eKeysHandler:
)
# Now attempt to get any remote devices from our local cache.
remote_queries_not_in_cache = {}
# A map of destination -> user ID -> device IDs.
remote_queries_not_in_cache = {} # type: Dict[str, Dict[str, Iterable[str]]]
if remote_queries:
query_list = []
query_list = [] # type: List[Tuple[str, Optional[str]]]
for user_id, device_ids in remote_queries.items():
if device_ids:
query_list.extend((user_id, device_id) for device_id in device_ids)
@@ -284,15 +294,15 @@ class E2eKeysHandler:
return ret
async def get_cross_signing_keys_from_cache(
self, query, from_user_id
self, query: Iterable[str], from_user_id: Optional[str]
) -> Dict[str, Dict[str, dict]]:
"""Get cross-signing keys for users from the database
Args:
query (Iterable[string]) an iterable of user IDs. A dict whose keys
query: an iterable of user IDs. A dict whose keys
are user IDs satisfies this, so the query format used for
query_devices can be used here.
from_user_id (str): the user making the query. This is used when
from_user_id: the user making the query. This is used when
adding cross-signing signatures to limit what signatures users
can see.
@@ -315,14 +325,12 @@ class E2eKeysHandler:
if "self_signing" in user_info:
self_signing_keys[user_id] = user_info["self_signing"]
if (
from_user_id in keys
and keys[from_user_id] is not None
and "user_signing" in keys[from_user_id]
):
# users can see other users' master and self-signing keys, but can
# only see their own user-signing keys
user_signing_keys[from_user_id] = keys[from_user_id]["user_signing"]
# users can see other users' master and self-signing keys, but can
# only see their own user-signing keys
if from_user_id:
from_user_key = keys.get(from_user_id)
if from_user_key and "user_signing" in from_user_key:
user_signing_keys[from_user_id] = from_user_key["user_signing"]
return {
"master_keys": master_keys,
@@ -344,9 +352,9 @@ class E2eKeysHandler:
A map from user_id -> device_id -> device details
"""
set_tag("local_query", query)
local_query = []
local_query = [] # type: List[Tuple[str, Optional[str]]]
result_dict = {}
result_dict = {} # type: Dict[str, Dict[str, dict]]
for user_id, device_ids in query.items():
# we use UserID.from_string to catch invalid user ids
if not self.is_mine(UserID.from_string(user_id)):
@@ -380,10 +388,14 @@ class E2eKeysHandler:
log_kv(results)
return result_dict
async def on_federation_query_client_keys(self, query_body):
async def on_federation_query_client_keys(
self, query_body: Dict[str, Dict[str, Optional[List[str]]]]
) -> JsonDict:
""" Handle a device key query from a federated server
"""
device_keys_query = query_body.get("device_keys", {})
device_keys_query = query_body.get(
"device_keys", {}
) # type: Dict[str, Optional[List[str]]]
res = await self.query_local_devices(device_keys_query)
ret = {"device_keys": res}
@@ -397,31 +409,34 @@ class E2eKeysHandler:
return ret
@trace
async def claim_one_time_keys(self, query, timeout):
local_query = []
remote_queries = {}
async def claim_one_time_keys(
self, query: Dict[str, Dict[str, Dict[str, str]]], timeout: int
) -> JsonDict:
local_query = [] # type: List[Tuple[str, str, str]]
remote_queries = {} # type: Dict[str, Dict[str, Dict[str, str]]]
for user_id, device_keys in query.get("one_time_keys", {}).items():
for user_id, one_time_keys in query.get("one_time_keys", {}).items():
# we use UserID.from_string to catch invalid user ids
if self.is_mine(UserID.from_string(user_id)):
for device_id, algorithm in device_keys.items():
for device_id, algorithm in one_time_keys.items():
local_query.append((user_id, device_id, algorithm))
else:
domain = get_domain_from_id(user_id)
remote_queries.setdefault(domain, {})[user_id] = device_keys
remote_queries.setdefault(domain, {})[user_id] = one_time_keys
set_tag("local_key_query", local_query)
set_tag("remote_key_query", remote_queries)
results = await self.store.claim_e2e_one_time_keys(local_query)
json_result = {}
failures = {}
# A map of user ID -> device ID -> key ID -> key.
json_result = {} # type: Dict[str, Dict[str, Dict[str, JsonDict]]]
failures = {} # type: Dict[str, JsonDict]
for user_id, device_keys in results.items():
for device_id, keys in device_keys.items():
for key_id, json_bytes in keys.items():
for key_id, json_str in keys.items():
json_result.setdefault(user_id, {})[device_id] = {
key_id: json_decoder.decode(json_bytes)
key_id: json_decoder.decode(json_str)
}
@trace
@@ -468,7 +483,9 @@ class E2eKeysHandler:
return {"one_time_keys": json_result, "failures": failures}
@tag_args
async def upload_keys_for_user(self, user_id, device_id, keys):
async def upload_keys_for_user(
self, user_id: str, device_id: str, keys: JsonDict
) -> JsonDict:
time_now = self.clock.time_msec()
@@ -543,8 +560,8 @@ class E2eKeysHandler:
return {"one_time_key_counts": result}
async def _upload_one_time_keys_for_user(
self, user_id, device_id, time_now, one_time_keys
):
self, user_id: str, device_id: str, time_now: int, one_time_keys: JsonDict
) -> None:
logger.info(
"Adding one_time_keys %r for device %r for user %r at %d",
one_time_keys.keys(),
@@ -585,12 +602,14 @@ class E2eKeysHandler:
log_kv({"message": "Inserting new one_time_keys.", "keys": new_keys})
await self.store.add_e2e_one_time_keys(user_id, device_id, time_now, new_keys)
async def upload_signing_keys_for_user(self, user_id, keys):
async def upload_signing_keys_for_user(
self, user_id: str, keys: JsonDict
) -> JsonDict:
"""Upload signing keys for cross-signing
Args:
user_id (string): the user uploading the keys
keys (dict[string, dict]): the signing keys
user_id: the user uploading the keys
keys: the signing keys
"""
# if a master key is uploaded, then check it. Otherwise, load the
@@ -667,16 +686,17 @@ class E2eKeysHandler:
return {}
async def upload_signatures_for_device_keys(self, user_id, signatures):
async def upload_signatures_for_device_keys(
self, user_id: str, signatures: JsonDict
) -> JsonDict:
"""Upload device signatures for cross-signing
Args:
user_id (string): the user uploading the signatures
signatures (dict[string, dict[string, dict]]): map of users to
devices to signed keys. This is the submission from the user; an
exception will be raised if it is malformed.
user_id: the user uploading the signatures
signatures: map of users to devices to signed keys. This is the submission
from the user; an exception will be raised if it is malformed.
Returns:
dict: response to be sent back to the client. The response will have
The response to be sent back to the client. The response will have
a "failures" key, which will be a dict mapping users to devices
to errors for the signatures that failed.
Raises:
@@ -719,7 +739,9 @@ class E2eKeysHandler:
return {"failures": failures}
async def _process_self_signatures(self, user_id, signatures):
async def _process_self_signatures(
self, user_id: str, signatures: JsonDict
) -> Tuple[List["SignatureListItem"], Dict[str, Dict[str, dict]]]:
"""Process uploaded signatures of the user's own keys.
Signatures of the user's own keys from this API come in two forms:
@@ -731,15 +753,14 @@ class E2eKeysHandler:
signatures (dict[string, dict]): map of devices to signed keys
Returns:
(list[SignatureListItem], dict[string, dict[string, dict]]):
a list of signatures to store, and a map of users to devices to failure
reasons
A tuple of a list of signatures to store, and a map of users to
devices to failure reasons
Raises:
SynapseError: if the input is malformed
"""
signature_list = []
failures = {}
signature_list = [] # type: List[SignatureListItem]
failures = {} # type: Dict[str, Dict[str, JsonDict]]
if not signatures:
return signature_list, failures
@@ -834,19 +855,24 @@ class E2eKeysHandler:
return signature_list, failures
def _check_master_key_signature(
self, user_id, master_key_id, signed_master_key, stored_master_key, devices
):
self,
user_id: str,
master_key_id: str,
signed_master_key: JsonDict,
stored_master_key: JsonDict,
devices: Dict[str, Dict[str, JsonDict]],
) -> List["SignatureListItem"]:
"""Check signatures of a user's master key made by their devices.
Args:
user_id (string): the user whose master key is being checked
master_key_id (string): the ID of the user's master key
signed_master_key (dict): the user's signed master key that was uploaded
stored_master_key (dict): our previously-stored copy of the user's master key
devices (iterable(dict)): the user's devices
user_id: the user whose master key is being checked
master_key_id: the ID of the user's master key
signed_master_key: the user's signed master key that was uploaded
stored_master_key: our previously-stored copy of the user's master key
devices: the user's devices
Returns:
list[SignatureListItem]: a list of signatures to store
A list of signatures to store
Raises:
SynapseError: if a signature is invalid
@@ -877,25 +903,26 @@ class E2eKeysHandler:
return master_key_signature_list
async def _process_other_signatures(self, user_id, signatures):
async def _process_other_signatures(
self, user_id: str, signatures: Dict[str, dict]
) -> Tuple[List["SignatureListItem"], Dict[str, Dict[str, dict]]]:
"""Process uploaded signatures of other users' keys. These will be the
target user's master keys, signed by the uploading user's user-signing
key.
Args:
user_id (string): the user uploading the keys
signatures (dict[string, dict]): map of users to devices to signed keys
user_id: the user uploading the keys
signatures: map of users to devices to signed keys
Returns:
(list[SignatureListItem], dict[string, dict[string, dict]]):
a list of signatures to store, and a map of users to devices to failure
A list of signatures to store, and a map of users to devices to failure
reasons
Raises:
SynapseError: if the input is malformed
"""
signature_list = []
failures = {}
signature_list = [] # type: List[SignatureListItem]
failures = {} # type: Dict[str, Dict[str, JsonDict]]
if not signatures:
return signature_list, failures
@@ -983,7 +1010,7 @@ class E2eKeysHandler:
async def _get_e2e_cross_signing_verify_key(
self, user_id: str, key_type: str, from_user_id: str = None
):
) -> Tuple[JsonDict, str, VerifyKey]:
"""Fetch locally or remotely query for a cross-signing public key.
First, attempt to fetch the cross-signing public key from storage.
@@ -997,8 +1024,7 @@ class E2eKeysHandler:
This affects what signatures are fetched.
Returns:
dict, str, VerifyKey: the raw key data, the key ID, and the
signedjson verify key
The raw key data, the key ID, and the signedjson verify key
Raises:
NotFoundError: if the key is not found
@@ -1135,16 +1161,18 @@ class E2eKeysHandler:
return desired_key, desired_key_id, desired_verify_key
def _check_cross_signing_key(key, user_id, key_type, signing_key=None):
def _check_cross_signing_key(
key: JsonDict, user_id: str, key_type: str, signing_key: Optional[VerifyKey] = None
) -> None:
"""Check a cross-signing key uploaded by a user. Performs some basic sanity
checking, and ensures that it is signed, if a signature is required.
Args:
key (dict): the key data to verify
user_id (str): the user whose key is being checked
key_type (str): the type of key that the key should be
signing_key (VerifyKey): (optional) the signing key that the key should
be signed with. If omitted, signatures will not be checked.
key: the key data to verify
user_id: the user whose key is being checked
key_type: the type of key that the key should be
signing_key: the signing key that the key should be signed with. If
omitted, signatures will not be checked.
"""
if (
key.get("user_id") != user_id
@@ -1162,16 +1190,21 @@ def _check_cross_signing_key(key, user_id, key_type, signing_key=None):
)
def _check_device_signature(user_id, verify_key, signed_device, stored_device):
def _check_device_signature(
user_id: str,
verify_key: VerifyKey,
signed_device: JsonDict,
stored_device: JsonDict,
) -> None:
"""Check that a signature on a device or cross-signing key is correct and
matches the copy of the device/key that we have stored. Throws an
exception if an error is detected.
Args:
user_id (str): the user ID whose signature is being checked
verify_key (VerifyKey): the key to verify the device with
signed_device (dict): the uploaded signed device data
stored_device (dict): our previously stored copy of the device
user_id: the user ID whose signature is being checked
verify_key: the key to verify the device with
signed_device: the uploaded signed device data
stored_device: our previously stored copy of the device
Raises:
SynapseError: if the signature was invalid or the sent device is not the
@@ -1201,7 +1234,7 @@ def _check_device_signature(user_id, verify_key, signed_device, stored_device):
raise SynapseError(400, "Invalid signature", Codes.INVALID_SIGNATURE)
def _exception_to_failure(e):
def _exception_to_failure(e: Exception) -> JsonDict:
if isinstance(e, SynapseError):
return {"status": e.code, "errcode": e.errcode, "message": str(e)}
@@ -1218,7 +1251,7 @@ def _exception_to_failure(e):
return {"status": 503, "message": str(e)}
def _one_time_keys_match(old_key_json, new_key):
def _one_time_keys_match(old_key_json: str, new_key: JsonDict) -> bool:
old_key = json_decoder.decode(old_key_json)
# if either is a string rather than an object, they must match exactly
@@ -1239,16 +1272,16 @@ class SignatureListItem:
"""An item in the signature list as used by upload_signatures_for_device_keys.
"""
signing_key_id = attr.ib()
target_user_id = attr.ib()
target_device_id = attr.ib()
signature = attr.ib()
signing_key_id = attr.ib(type=str)
target_user_id = attr.ib(type=str)
target_device_id = attr.ib(type=str)
signature = attr.ib(type=JsonDict)
class SigningKeyEduUpdater:
"""Handles incoming signing key updates from federation and updates the DB"""
def __init__(self, hs, e2e_keys_handler):
def __init__(self, hs: "HomeServer", e2e_keys_handler: E2eKeysHandler):
self.store = hs.get_datastore()
self.federation = hs.get_federation_client()
self.clock = hs.get_clock()
@@ -1257,7 +1290,7 @@ class SigningKeyEduUpdater:
self._remote_edu_linearizer = Linearizer(name="remote_signing_key")
# user_id -> list of updates waiting to be handled.
self._pending_updates = {}
self._pending_updates = {} # type: Dict[str, List[Tuple[JsonDict, JsonDict]]]
# Recently seen stream ids. We don't bother keeping these in the DB,
# but they're useful to have them about to reduce the number of spurious
@@ -1270,13 +1303,15 @@ class SigningKeyEduUpdater:
iterable=True,
)
async def incoming_signing_key_update(self, origin, edu_content):
async def incoming_signing_key_update(
self, origin: str, edu_content: JsonDict
) -> None:
"""Called on incoming signing key update from federation. Responsible for
parsing the EDU and adding to pending updates list.
Args:
origin (string): the server that sent the EDU
edu_content (dict): the contents of the EDU
origin: the server that sent the EDU
edu_content: the contents of the EDU
"""
user_id = edu_content.pop("user_id")
@@ -1299,11 +1334,11 @@ class SigningKeyEduUpdater:
await self._handle_signing_key_updates(user_id)
async def _handle_signing_key_updates(self, user_id):
async def _handle_signing_key_updates(self, user_id: str) -> None:
"""Actually handle pending updates.
Args:
user_id (string): the user whose updates we are processing
user_id: the user whose updates we are processing
"""
device_handler = self.e2e_keys_handler.device_handler
@@ -1315,7 +1350,7 @@ class SigningKeyEduUpdater:
# This can happen since we batch updates
return
device_ids = []
device_ids = [] # type: List[str]
logger.info("pending updates: %r", pending_updates)

View File

@@ -15,6 +15,7 @@
# limitations under the License.
import logging
from typing import TYPE_CHECKING, List, Optional
from synapse.api.errors import (
Codes,
@@ -24,8 +25,12 @@ from synapse.api.errors import (
SynapseError,
)
from synapse.logging.opentracing import log_kv, trace
from synapse.types import JsonDict
from synapse.util.async_helpers import Linearizer
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
@@ -37,7 +42,7 @@ class E2eRoomKeysHandler:
The actual payload of the encrypted keys is completely opaque to the handler.
"""
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
# Used to lock whenever a client is uploading key data. This prevents collisions
@@ -48,21 +53,27 @@ class E2eRoomKeysHandler:
self._upload_linearizer = Linearizer("upload_room_keys_lock")
@trace
async def get_room_keys(self, user_id, version, room_id=None, session_id=None):
async def get_room_keys(
self,
user_id: str,
version: str,
room_id: Optional[str] = None,
session_id: Optional[str] = None,
) -> List[JsonDict]:
"""Bulk get the E2E room keys for a given backup, optionally filtered to a given
room, or a given session.
See EndToEndRoomKeyStore.get_e2e_room_keys for full details.
Args:
user_id(str): the user whose keys we're getting
version(str): the version ID of the backup we're getting keys from
room_id(string): room ID to get keys for, for None to get keys for all rooms
session_id(string): session ID to get keys for, for None to get keys for all
user_id: the user whose keys we're getting
version: the version ID of the backup we're getting keys from
room_id: room ID to get keys for, for None to get keys for all rooms
session_id: session ID to get keys for, for None to get keys for all
sessions
Raises:
NotFoundError: if the backup version does not exist
Returns:
A deferred list of dicts giving the session_data and message metadata for
A list of dicts giving the session_data and message metadata for
these room keys.
"""
@@ -86,17 +97,23 @@ class E2eRoomKeysHandler:
return results
@trace
async def delete_room_keys(self, user_id, version, room_id=None, session_id=None):
async def delete_room_keys(
self,
user_id: str,
version: str,
room_id: Optional[str] = None,
session_id: Optional[str] = None,
) -> JsonDict:
"""Bulk delete the E2E room keys for a given backup, optionally filtered to a given
room or a given session.
See EndToEndRoomKeyStore.delete_e2e_room_keys for full details.
Args:
user_id(str): the user whose backup we're deleting
version(str): the version ID of the backup we're deleting
room_id(string): room ID to delete keys for, for None to delete keys for all
user_id: the user whose backup we're deleting
version: the version ID of the backup we're deleting
room_id: room ID to delete keys for, for None to delete keys for all
rooms
session_id(string): session ID to delete keys for, for None to delete keys
session_id: session ID to delete keys for, for None to delete keys
for all sessions
Raises:
NotFoundError: if the backup version does not exist
@@ -128,15 +145,17 @@ class E2eRoomKeysHandler:
return {"etag": str(version_etag), "count": count}
@trace
async def upload_room_keys(self, user_id, version, room_keys):
async def upload_room_keys(
self, user_id: str, version: str, room_keys: JsonDict
) -> JsonDict:
"""Bulk upload a list of room keys into a given backup version, asserting
that the given version is the current backup version. room_keys are merged
into the current backup as described in RoomKeysServlet.on_PUT().
Args:
user_id(str): the user whose backup we're setting
version(str): the version ID of the backup we're updating
room_keys(dict): a nested dict describing the room_keys we're setting:
user_id: the user whose backup we're setting
version: the version ID of the backup we're updating
room_keys: a nested dict describing the room_keys we're setting:
{
"rooms": {
@@ -254,14 +273,16 @@ class E2eRoomKeysHandler:
return {"etag": str(version_etag), "count": count}
@staticmethod
def _should_replace_room_key(current_room_key, room_key):
def _should_replace_room_key(
current_room_key: Optional[JsonDict], room_key: JsonDict
) -> bool:
"""
Determine whether to replace a given current_room_key (if any)
with a newly uploaded room_key backup
Args:
current_room_key (dict): Optional, the current room_key dict if any
room_key (dict): The new room_key dict which may or may not be fit to
current_room_key: Optional, the current room_key dict if any
room_key : The new room_key dict which may or may not be fit to
replace the current_room_key
Returns:
@@ -286,14 +307,14 @@ class E2eRoomKeysHandler:
return True
@trace
async def create_version(self, user_id, version_info):
async def create_version(self, user_id: str, version_info: JsonDict) -> str:
"""Create a new backup version. This automatically becomes the new
backup version for the user's keys; previous backups will no longer be
writeable to.
Args:
user_id(str): the user whose backup version we're creating
version_info(dict): metadata about the new version being created
user_id: the user whose backup version we're creating
version_info: metadata about the new version being created
{
"algorithm": "m.megolm_backup.v1",
@@ -301,7 +322,7 @@ class E2eRoomKeysHandler:
}
Returns:
A deferred of a string that gives the new version number.
The new version number.
"""
# TODO: Validate the JSON to make sure it has the right keys.
@@ -313,17 +334,19 @@ class E2eRoomKeysHandler:
)
return new_version
async def get_version_info(self, user_id, version=None):
async def get_version_info(
self, user_id: str, version: Optional[str] = None
) -> JsonDict:
"""Get the info about a given version of the user's backup
Args:
user_id(str): the user whose current backup version we're querying
version(str): Optional; if None gives the most recent version
user_id: the user whose current backup version we're querying
version: Optional; if None gives the most recent version
otherwise a historical one.
Raises:
NotFoundError: if the requested backup version doesn't exist
Returns:
A deferred of a info dict that gives the info about the new version.
An info dict that gives the info about the new version.
{
"version": "1234",
@@ -346,7 +369,7 @@ class E2eRoomKeysHandler:
return res
@trace
async def delete_version(self, user_id, version=None):
async def delete_version(self, user_id: str, version: Optional[str] = None) -> None:
"""Deletes a given version of the user's e2e_room_keys backup
Args:
@@ -366,17 +389,19 @@ class E2eRoomKeysHandler:
raise
@trace
async def update_version(self, user_id, version, version_info):
async def update_version(
self, user_id: str, version: str, version_info: JsonDict
) -> JsonDict:
"""Update the info about a given version of the user's backup
Args:
user_id(str): the user whose current backup version we're updating
version(str): the backup version we're updating
version_info(dict): the new information about the backup
user_id: the user whose current backup version we're updating
version: the backup version we're updating
version_info: the new information about the backup
Raises:
NotFoundError: if the requested backup version doesn't exist
Returns:
A deferred of an empty dict.
An empty dict.
"""
if "version" not in version_info:
version_info["version"] = version
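For reference, the nested room_keys structure that upload_room_keys above accepts follows the Matrix key-backup format; a minimal illustrative payload (made-up IDs, opaque session_data) could look like the sketch below. Treat the field set as an illustration, not an exhaustive schema.

room_keys = {
    "rooms": {
        "!room:example.com": {
            "sessions": {
                "megolm_session_id": {
                    "first_message_index": 0,
                    "forwarded_count": 0,
                    "is_verified": False,
                    # session_data is opaque to the handler (encrypted by the client)
                    "session_data": {"ciphertext": "...", "mac": "..."},
                },
            },
        },
    },
}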

View File

@@ -1617,6 +1617,12 @@ class FederationHandler(BaseHandler):
if event.state_key == self._server_notices_mxid:
raise SynapseError(HTTPStatus.FORBIDDEN, "Cannot invite this user")
# We retrieve the room member handler here as to not cause a cyclic dependency
member_handler = self.hs.get_room_member_handler()
# We don't rate limit based on room ID, as that should be done by
# sending server.
member_handler.ratelimit_invite(None, event.state_key)
# keep a record of the room version, if we don't yet know it.
# (this may get overwritten if we later get a different room version in a
# join dance).
@@ -2093,6 +2099,11 @@ class FederationHandler(BaseHandler):
if event.type == EventTypes.GuestAccess and not context.rejected:
await self.maybe_kick_guest_users(event)
# If we are going to send this event over federation we precalculate
# the joined hosts.
if event.internal_metadata.get_send_on_behalf_of():
await self.event_creation_handler.cache_joined_hosts_for_event(event)
return context
async def _check_for_soft_fail(

View File

@@ -15,9 +15,13 @@
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Dict, Iterable, List, Set
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.types import GroupID, get_domain_from_id
from synapse.types import GroupID, JsonDict, get_domain_from_id
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
@@ -56,7 +60,7 @@ def _create_rerouter(func_name):
class GroupsLocalWorkerHandler:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastore()
self.room_list_handler = hs.get_room_list_handler()
@@ -84,7 +88,9 @@ class GroupsLocalWorkerHandler:
get_group_role = _create_rerouter("get_group_role")
get_group_roles = _create_rerouter("get_group_roles")
async def get_group_summary(self, group_id, requester_user_id):
async def get_group_summary(
self, group_id: str, requester_user_id: str
) -> JsonDict:
"""Get the group summary for a group.
If the group is remote we check that the users have valid attestations.
@@ -137,14 +143,15 @@ class GroupsLocalWorkerHandler:
return res
async def get_users_in_group(self, group_id, requester_user_id):
async def get_users_in_group(
self, group_id: str, requester_user_id: str
) -> JsonDict:
"""Get users in a group
"""
if self.is_mine_id(group_id):
res = await self.groups_server_handler.get_users_in_group(
return await self.groups_server_handler.get_users_in_group(
group_id, requester_user_id
)
return res
group_server_name = get_domain_from_id(group_id)
@@ -178,11 +185,11 @@ class GroupsLocalWorkerHandler:
return res
async def get_joined_groups(self, user_id):
async def get_joined_groups(self, user_id: str) -> JsonDict:
group_ids = await self.store.get_joined_groups(user_id)
return {"groups": group_ids}
async def get_publicised_groups_for_user(self, user_id):
async def get_publicised_groups_for_user(self, user_id: str) -> JsonDict:
if self.hs.is_mine_id(user_id):
result = await self.store.get_publicised_groups_for_user(user_id)
@@ -206,8 +213,10 @@ class GroupsLocalWorkerHandler:
# TODO: Verify attestations
return {"groups": result}
async def bulk_get_publicised_groups(self, user_ids, proxy=True):
destinations = {}
async def bulk_get_publicised_groups(
self, user_ids: Iterable[str], proxy: bool = True
) -> JsonDict:
destinations = {} # type: Dict[str, Set[str]]
local_users = set()
for user_id in user_ids:
@@ -220,7 +229,7 @@ class GroupsLocalWorkerHandler:
raise SynapseError(400, "Some user_ids are not local")
results = {}
failed_results = []
failed_results = [] # type: List[str]
for destination, dest_user_ids in destinations.items():
try:
r = await self.transport_client.bulk_get_publicised_groups(
@@ -242,7 +251,7 @@ class GroupsLocalWorkerHandler:
class GroupsLocalHandler(GroupsLocalWorkerHandler):
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
# Ensure attestations get renewed
@@ -271,7 +280,9 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
set_group_join_policy = _create_rerouter("set_group_join_policy")
async def create_group(self, group_id, user_id, content):
async def create_group(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
"""Create a group
"""
@@ -284,27 +295,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
local_attestation = None
remote_attestation = None
else:
local_attestation = self.attestations.create_attestation(group_id, user_id)
content["attestation"] = local_attestation
content["user_profile"] = await self.profile_handler.get_profile(user_id)
try:
res = await self.transport_client.create_group(
get_domain_from_id(group_id), group_id, user_id, content
)
except HttpResponseException as e:
raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
remote_attestation = res["attestation"]
await self.attestations.verify_attestation(
remote_attestation,
group_id=group_id,
user_id=user_id,
server_name=get_domain_from_id(group_id),
)
raise SynapseError(400, "Unable to create remote groups")
is_publicised = content.get("publicise", False)
token = await self.store.register_user_group_membership(
@@ -320,7 +311,9 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return res
async def join_group(self, group_id, user_id, content):
async def join_group(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
"""Request to join a group
"""
if self.is_mine_id(group_id):
@@ -365,7 +358,9 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return {}
async def accept_invite(self, group_id, user_id, content):
async def accept_invite(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
"""Accept an invite to a group
"""
if self.is_mine_id(group_id):
@@ -410,7 +405,9 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return {}
async def invite(self, group_id, user_id, requester_user_id, config):
async def invite(
self, group_id: str, user_id: str, requester_user_id: str, config: JsonDict
) -> JsonDict:
"""Invite a user to a group
"""
content = {"requester_user_id": requester_user_id, "config": config}
@@ -434,7 +431,9 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return res
async def on_invite(self, group_id, user_id, content):
async def on_invite(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
"""One of our users were invited to a group
"""
# TODO: Support auto join and rejection
@@ -465,8 +464,8 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return {"state": "invite", "user_profile": user_profile}
async def remove_user_from_group(
self, group_id, user_id, requester_user_id, content
):
self, group_id: str, user_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
"""Remove a user from a group
"""
if user_id == requester_user_id:
@@ -499,7 +498,9 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return res
async def user_removed_from_group(self, group_id, user_id, content):
async def user_removed_from_group(
self, group_id: str, user_id: str, content: JsonDict
) -> None:
"""One of our users was removed/kicked from a group
"""
# TODO: Check if user in group
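Earlier in this file's diff, `bulk_get_publicised_groups` builds a `destinations` mapping so that one request can be sent per remote homeserver. A minimal, standalone sketch of that grouping pattern follows; it splits Matrix user IDs by hand instead of using Synapse's `get_domain_from_id`, and the names are illustrative only.

```python
# Standalone sketch of grouping Matrix user IDs ("@localpart:domain") by
# homeserver domain, mirroring the destinations dict in
# bulk_get_publicised_groups. Not Synapse's actual implementation.
from typing import Dict, Iterable, Set


def group_users_by_domain(user_ids: Iterable[str]) -> Dict[str, Set[str]]:
    destinations: Dict[str, Set[str]] = {}
    for user_id in user_ids:
        # Everything after the first colon is the server name.
        domain = user_id.split(":", 1)[1]
        destinations.setdefault(domain, set()).add(user_id)
    return destinations


print(group_users_by_domain(
    ["@alice:example.org", "@bob:example.org", "@carol:matrix.org"]
))
# {'example.org': {'@alice:example.org', '@bob:example.org'},
#  'matrix.org': {'@carol:matrix.org'}}
```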

View File

@@ -27,9 +27,11 @@ from synapse.api.errors import (
HttpResponseException,
SynapseError,
)
from synapse.api.ratelimiting import Ratelimiter
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.http import RequestTimedOutError
from synapse.http.client import SimpleHttpClient
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict, Requester
from synapse.util import json_decoder
from synapse.util.hash import sha256_and_url_safe_base64
@@ -57,6 +59,32 @@ class IdentityHandler(BaseHandler):
self._web_client_location = hs.config.invite_client_location
# Ratelimiters for `/requestToken` endpoints.
self._3pid_validation_ratelimiter_ip = Ratelimiter(
clock=hs.get_clock(),
rate_hz=hs.config.ratelimiting.rc_3pid_validation.per_second,
burst_count=hs.config.ratelimiting.rc_3pid_validation.burst_count,
)
self._3pid_validation_ratelimiter_address = Ratelimiter(
clock=hs.get_clock(),
rate_hz=hs.config.ratelimiting.rc_3pid_validation.per_second,
burst_count=hs.config.ratelimiting.rc_3pid_validation.burst_count,
)
def ratelimit_request_token_requests(
self, request: SynapseRequest, medium: str, address: str,
):
"""Used to ratelimit requests to `/requestToken` by IP and address.
Args:
request: The associated request
medium: The type of threepid, e.g. "msisdn" or "email"
address: The actual threepid ID, e.g. the phone number or email address
"""
self._3pid_validation_ratelimiter_ip.ratelimit((medium, request.getClientIP()))
self._3pid_validation_ratelimiter_address.ratelimit((medium, address))
async def threepid_from_creds(
self, id_server: str, creds: Dict[str, str]
) -> Optional[JsonDict]:
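For context on the per-IP and per-address ratelimiting added above, here is a hedged sketch of a token-bucket limiter keyed the same way: `(medium, client IP)` and `(medium, threepid address)`. It is a simplified stand-in, not Synapse's `Ratelimiter` (which takes a clock and raises `LimitExceededError`); the rate and burst values are illustrative.

```python
# Toy keyed token-bucket limiter illustrating the /requestToken ratelimiting
# pattern. Uses time.monotonic() and a plain exception; not Synapse's API.
import time
from typing import Dict, Hashable, Tuple


class ToyRatelimiter:
    def __init__(self, rate_hz: float, burst_count: int):
        self.rate_hz = rate_hz
        self.burst_count = burst_count
        # key -> (tokens remaining, time of last refill)
        self._buckets: Dict[Hashable, Tuple[float, float]] = {}

    def ratelimit(self, key: Hashable) -> None:
        now = time.monotonic()
        tokens, last = self._buckets.get(key, (float(self.burst_count), now))
        # Refill tokens according to the allowed rate since the last call.
        tokens = min(self.burst_count, tokens + (now - last) * self.rate_hz)
        if tokens < 1:
            raise RuntimeError("rate limit exceeded for %r" % (key,))
        self._buckets[key] = (tokens - 1, now)


limiter = ToyRatelimiter(rate_hz=0.003, burst_count=5)
# Limit /requestToken attempts both per (medium, IP) and per (medium, address).
limiter.ratelimit(("email", "203.0.113.7"))
limiter.ratelimit(("email", "alice@example.com"))
```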

View File

@@ -174,7 +174,7 @@ class MessageHandler:
raise NotFoundError("Can't find event for token %s" % (at_token,))
visible_events = await filter_events_for_client(
self.storage, user_id, last_events, filter_send_to_client=False
self.storage, user_id, last_events, filter_send_to_client=False,
)
event = last_events[0]
@@ -432,6 +432,8 @@ class EventCreationHandler:
self._ephemeral_events_enabled = hs.config.enable_ephemeral_messages
self._external_cache = hs.get_external_cache()
async def create_event(
self,
requester: Requester,
@@ -939,6 +941,8 @@ class EventCreationHandler:
await self.action_generator.handle_push_actions_for_event(event, context)
await self.cache_joined_hosts_for_event(event)
try:
# If we're a worker we need to hit out to the master.
writer_instance = self._events_shard_config.get_instance(event.room_id)
@@ -978,6 +982,44 @@ class EventCreationHandler:
await self.store.remove_push_actions_from_staging(event.event_id)
raise
async def cache_joined_hosts_for_event(self, event: EventBase) -> None:
"""Precalculate the joined hosts at the event, when using Redis, so that
external federation senders don't have to recalculate it themselves.
"""
if not self._external_cache.is_enabled():
return
# We actually store two mappings, event ID -> prev state group,
# state group -> joined hosts, which is much more space efficient
# than event ID -> joined hosts.
#
# Note: We have to cache event ID -> prev state group, as we don't
# store that in the DB.
#
# Note: We always set the state group -> joined hosts cache, even if
# we already set it, so that the expiry time is reset.
state_entry = await self.state.resolve_state_groups_for_events(
event.room_id, event_ids=event.prev_event_ids()
)
if state_entry.state_group:
joined_hosts = await self.store.get_joined_hosts(event.room_id, state_entry)
await self._external_cache.set(
"event_to_prev_state_group",
event.event_id,
state_entry.state_group,
expiry_ms=60 * 60 * 1000,
)
await self._external_cache.set(
"get_joined_hosts",
str(state_entry.state_group),
list(joined_hosts),
expiry_ms=60 * 60 * 1000,
)
async def _validate_canonical_alias(
self, directory_handler, room_alias_str: str, expected_room_id: str
) -> None:
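The comments in `cache_joined_hosts_for_event` above describe a two-level cache: event ID -> prev state group, then state group -> joined hosts. A minimal sketch of why that is more space efficient than caching event ID -> joined hosts directly is shown below; it uses plain dicts in place of the Redis-backed external cache, and the names are illustrative.

```python
# Sketch of the two-level caching scheme: many events share a state group,
# so the (potentially large) host list is stored once per state group while
# each event only stores a small integer.
from typing import Dict, List

event_to_prev_state_group: Dict[str, int] = {}
state_group_to_joined_hosts: Dict[int, List[str]] = {}


def cache_joined_hosts(event_id: str, state_group: int, joined_hosts: List[str]) -> None:
    # Small mapping: one int per event.
    event_to_prev_state_group[event_id] = state_group
    # Large mapping: shared by every event with this state group.
    state_group_to_joined_hosts[state_group] = joined_hosts


def get_joined_hosts(event_id: str) -> List[str]:
    return state_group_to_joined_hosts[event_to_prev_state_group[event_id]]


hosts = ["example.org", "matrix.org"]
cache_joined_hosts("$event1", 42, hosts)
cache_joined_hosts("$event2", 42, hosts)  # same state group, host list stored once
print(get_joined_hosts("$event2"))  # ['example.org', 'matrix.org']
```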

View File

@@ -102,7 +102,7 @@ class OidcHandler:
) from e
async def handle_oidc_callback(self, request: SynapseRequest) -> None:
"""Handle an incoming request to /_synapse/oidc/callback
"""Handle an incoming request to /_synapse/client/oidc/callback
Since we might want to display OIDC-related errors in a user-friendly
way, we don't raise SynapseError from here. Instead, we call
@@ -123,7 +123,6 @@ class OidcHandler:
Args:
request: the incoming request from the browser.
"""
# The provider might redirect with an error.
# In that case, just display it as-is.
if b"error" in request.args:
@@ -137,8 +136,12 @@ class OidcHandler:
# either the provider misbehaving or Synapse being misconfigured.
# The only exception of that is "access_denied", where the user
# probably cancelled the login flow. In other cases, log those errors.
if error != "access_denied":
logger.error("Error from the OIDC provider: %s %s", error, description)
logger.log(
logging.INFO if error == "access_denied" else logging.ERROR,
"Received OIDC callback with error: %s %s",
error,
description,
)
self._sso_handler.render_error(request, error, description)
return
@@ -149,7 +152,7 @@ class OidcHandler:
# Fetch the session cookie
session = request.getCookie(SESSION_COOKIE_NAME) # type: Optional[bytes]
if session is None:
logger.info("No session cookie found")
logger.info("Received OIDC callback, with no session cookie")
self._sso_handler.render_error(
request, "missing_session", "No session cookie found"
)
@@ -169,7 +172,7 @@ class OidcHandler:
# Check for the state query parameter
if b"state" not in request.args:
logger.info("State parameter is missing")
logger.info("Received OIDC callback, with no state parameter")
self._sso_handler.render_error(
request, "invalid_request", "State parameter is missing"
)
@@ -183,14 +186,16 @@ class OidcHandler:
session, state
)
except (MacaroonDeserializationException, ValueError) as e:
logger.exception("Invalid session")
logger.exception("Invalid session for OIDC callback")
self._sso_handler.render_error(request, "invalid_session", str(e))
return
except MacaroonInvalidSignatureException as e:
logger.exception("Could not verify session")
logger.exception("Could not verify session for OIDC callback")
self._sso_handler.render_error(request, "mismatching_session", str(e))
return
logger.info("Received OIDC callback for IdP %s", session_data.idp_id)
oidc_provider = self._providers.get(session_data.idp_id)
if not oidc_provider:
logger.error("OIDC session uses unknown IdP %r", oidc_provider)
@@ -274,6 +279,9 @@ class OidcProvider:
# MXC URI for icon for this auth provider
self.idp_icon = provider.idp_icon
# optional brand identifier for this auth provider
self.idp_brand = provider.idp_brand
self._sso_handler = hs.get_sso_handler()
self._sso_handler.register_identity_provider(self)
@@ -562,6 +570,7 @@ class OidcProvider:
Returns:
UserInfo: an object representing the user.
"""
logger.debug("Using the OAuth2 access_token to request userinfo")
metadata = await self.load_metadata()
resp = await self._http_client.get_json(
@@ -569,6 +578,8 @@ class OidcProvider:
headers={"Authorization": ["Bearer {}".format(token["access_token"])]},
)
logger.debug("Retrieved user info from userinfo endpoint: %r", resp)
return UserInfo(resp)
async def _parse_id_token(self, token: Token, nonce: str) -> UserInfo:
@@ -597,17 +608,19 @@ class OidcProvider:
claims_cls = ImplicitIDToken
alg_values = metadata.get("id_token_signing_alg_values_supported", ["RS256"])
jwt = JsonWebToken(alg_values)
claim_options = {"iss": {"values": [metadata["issuer"]]}}
id_token = token["id_token"]
logger.debug("Attempting to decode JWT id_token %r", id_token)
# Try to decode the keys in cache first, then retry by forcing the keys
# to be reloaded
jwk_set = await self.load_jwks()
try:
claims = jwt.decode(
token["id_token"],
id_token,
key=jwk_set,
claims_cls=claims_cls,
claims_options=claim_options,
@@ -617,13 +630,15 @@ class OidcProvider:
logger.info("Reloading JWKS after decode error")
jwk_set = await self.load_jwks(force=True) # try reloading the jwks
claims = jwt.decode(
token["id_token"],
id_token,
key=jwk_set,
claims_cls=claims_cls,
claims_options=claim_options,
claims_params=claims_params,
)
logger.debug("Decoded id_token JWT %r; validating", claims)
claims.validate(leeway=120) # allows 2 min of clock skew
return UserInfo(claims)
@@ -640,7 +655,7 @@ class OidcProvider:
- ``client_id``: the client ID set in ``oidc_config.client_id``
- ``response_type``: ``code``
- ``redirect_uri``: the callback URL ; ``{base url}/_synapse/oidc/callback``
- ``redirect_uri``: the callback URL ; ``{base url}/_synapse/client/oidc/callback``
- ``scope``: the list of scopes set in ``oidc_config.scopes``
- ``state``: a random string
- ``nonce``: a random string
@@ -681,7 +696,7 @@ class OidcProvider:
request.addCookie(
SESSION_COOKIE_NAME,
cookie,
path="/_synapse/oidc",
path="/_synapse/client/oidc",
max_age="3600",
httpOnly=True,
sameSite="lax",
@@ -702,7 +717,7 @@ class OidcProvider:
async def handle_oidc_callback(
self, request: SynapseRequest, session_data: "OidcSessionData", code: str
) -> None:
"""Handle an incoming request to /_synapse/oidc/callback
"""Handle an incoming request to /_synapse/client/oidc/callback
By this time we have already validated the session on the synapse side, and
now need to do the provider-specific operations. This includes:
@@ -723,19 +738,18 @@ class OidcProvider:
"""
# Exchange the code with the provider
try:
logger.debug("Exchanging code")
logger.debug("Exchanging OAuth2 code for a token")
token = await self._exchange_code(code)
except OidcError as e:
logger.exception("Could not exchange code")
logger.exception("Could not exchange OAuth2 code")
self._sso_handler.render_error(request, e.error, e.error_description)
return
logger.debug("Successfully obtained OAuth2 access token")
logger.debug("Successfully obtained OAuth2 token data: %r", token)
# Now that we have a token, get the userinfo, either by decoding the
# `id_token` or by fetching the `userinfo_endpoint`.
if self._uses_userinfo:
logger.debug("Fetching userinfo")
try:
userinfo = await self._fetch_userinfo(token)
except Exception as e:
@@ -743,7 +757,6 @@ class OidcProvider:
self._sso_handler.render_error(request, "fetch_error", str(e))
return
else:
logger.debug("Extracting userinfo from id_token")
try:
userinfo = await self._parse_id_token(token, nonce=session_data.nonce)
except Exception as e:
@@ -1056,7 +1069,8 @@ class OidcSessionData:
UserAttributeDict = TypedDict(
"UserAttributeDict", {"localpart": Optional[str], "display_name": Optional[str]}
"UserAttributeDict",
{"localpart": Optional[str], "display_name": Optional[str], "emails": List[str]},
)
C = TypeVar("C")
@@ -1135,11 +1149,12 @@ def jinja_finalize(thing):
env = Environment(finalize=jinja_finalize)
@attr.s
@attr.s(slots=True, frozen=True)
class JinjaOidcMappingConfig:
subject_claim = attr.ib(type=str)
localpart_template = attr.ib(type=Optional[Template])
display_name_template = attr.ib(type=Optional[Template])
email_template = attr.ib(type=Optional[Template])
extra_attributes = attr.ib(type=Dict[str, Template])
@@ -1156,23 +1171,17 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
def parse_config(config: dict) -> JinjaOidcMappingConfig:
subject_claim = config.get("subject_claim", "sub")
localpart_template = None # type: Optional[Template]
if "localpart_template" in config:
def parse_template_config(option_name: str) -> Optional[Template]:
if option_name not in config:
return None
try:
localpart_template = env.from_string(config["localpart_template"])
return env.from_string(config[option_name])
except Exception as e:
raise ConfigError(
"invalid jinja template", path=["localpart_template"]
) from e
raise ConfigError("invalid jinja template", path=[option_name]) from e
display_name_template = None # type: Optional[Template]
if "display_name_template" in config:
try:
display_name_template = env.from_string(config["display_name_template"])
except Exception as e:
raise ConfigError(
"invalid jinja template", path=["display_name_template"]
) from e
localpart_template = parse_template_config("localpart_template")
display_name_template = parse_template_config("display_name_template")
email_template = parse_template_config("email_template")
extra_attributes = {} # type: Dict[str, Template]
if "extra_attributes" in config:
@@ -1192,6 +1201,7 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
subject_claim=subject_claim,
localpart_template=localpart_template,
display_name_template=display_name_template,
email_template=email_template,
extra_attributes=extra_attributes,
)
@@ -1213,16 +1223,23 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
# a usable mxid.
localpart += str(failures) if failures else ""
display_name = None # type: Optional[str]
if self._config.display_name_template is not None:
display_name = self._config.display_name_template.render(
user=userinfo
).strip()
def render_template_field(template: Optional[Template]) -> Optional[str]:
if template is None:
return None
return template.render(user=userinfo).strip()
if display_name == "":
display_name = None
display_name = render_template_field(self._config.display_name_template)
if display_name == "":
display_name = None
return UserAttributeDict(localpart=localpart, display_name=display_name)
emails = [] # type: List[str]
email = render_template_field(self._config.email_template)
if email:
emails.append(email)
return UserAttributeDict(
localpart=localpart, display_name=display_name, emails=emails
)
async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict:
extras = {} # type: Dict[str, str]
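The refactor above folds the repeated template handling into two helpers: one parses an optional Jinja template from config, the other renders it (or returns None when it is not configured). A hedged, self-contained sketch of that pattern follows; it uses the real jinja2 API (`Environment.from_string`, `Template.render`), but the config keys and the `user` object are illustrative.

```python
# Sketch of the parse-then-render helper pattern for optional Jinja
# templates, as introduced by parse_template_config / render_template_field.
from typing import Optional

from jinja2 import Environment, Template

env = Environment()
config = {
    "localpart_template": "{{ user.preferred_username }}",
    "display_name_template": "{{ user.name }}",
    # "email_template" deliberately absent -> parsed as None
}


def parse_template_config(option_name: str) -> Optional[Template]:
    if option_name not in config:
        return None
    try:
        return env.from_string(config[option_name])
    except Exception as e:
        raise ValueError("invalid jinja template for %s" % option_name) from e


def render_template_field(template: Optional[Template], user: dict) -> Optional[str]:
    if template is None:
        return None
    return template.render(user=user).strip()


user = {"preferred_username": "alice", "name": "Alice"}
print(render_template_field(parse_template_config("localpart_template"), user))  # alice
print(render_template_field(parse_template_config("email_template"), user))      # None
```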

View File

@@ -909,9 +909,6 @@ class PresenceHandler(BasePresenceHandler):
state = await self.current_state_for_user(user_id)
hosts = await self.state.get_current_hosts_in_room(room_id)
# Filter out ourselves.
hosts = {host for host in hosts if host != self.server_name}
self.federation.send_presence_to_destinations(
states=[state], destinations=hosts
)
@@ -1115,13 +1112,11 @@ class PresenceEventSource:
updates for
"""
user_id = user.to_string()
users_interested_in = set()
users_interested_in.add(user_id) # So that we receive our own presence
users_who_share_room = await self.store.get_users_who_share_room_with_user(
users_interested_in = await self.store.get_users_who_share_room_with_user(
user_id, on_invalidate=cache_context.invalidate
)
users_interested_in.update(users_who_share_room)
users_interested_in.add(user_id) # So that we receive our own presence
if explicit_room_id:
user_ids = await self.store.get_users_in_room(

View File

@@ -14,8 +14,9 @@
# limitations under the License.
"""Contains functions for registering clients."""
import logging
from typing import TYPE_CHECKING, List, Optional, Tuple
from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple
from synapse import types
from synapse.api.constants import MAX_USERID_LENGTH, EventTypes, JoinRules, LoginType
@@ -152,7 +153,7 @@ class RegistrationHandler(BaseHandler):
user_type: Optional[str] = None,
default_display_name: Optional[str] = None,
address: Optional[str] = None,
bind_emails: List[str] = [],
bind_emails: Iterable[str] = [],
by_admin: bool = False,
user_agent_ips: Optional[List[Tuple[str, str]]] = None,
) -> str:
@@ -693,6 +694,8 @@ class RegistrationHandler(BaseHandler):
access_token: The access token of the newly logged in device, or
None if `inhibit_login` enabled.
"""
# TODO: 3pid registration can actually happen on the workers. Consider
# refactoring it.
if self.hs.config.worker_app:
await self._post_registration_client(
user_id=user_id, auth_result=auth_result, access_token=access_token

View File

@@ -38,6 +38,7 @@ from synapse.api.filtering import Filter
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.events import EventBase
from synapse.events.utils import copy_power_levels_contents
from synapse.rest.admin._base import assert_user_is_admin
from synapse.storage.state import StateFilter
from synapse.types import (
JsonDict,
@@ -126,6 +127,10 @@ class RoomCreationHandler(BaseHandler):
self.third_party_event_rules = hs.get_third_party_event_rules()
self._invite_burst_count = (
hs.config.ratelimiting.rc_invites_per_room.burst_count
)
async def upgrade_room(
self, requester: Requester, old_room_id: str, new_version: RoomVersion
) -> str:
@@ -662,6 +667,9 @@ class RoomCreationHandler(BaseHandler):
invite_3pid_list = []
invite_list = []
if len(invite_list) + len(invite_3pid_list) > self._invite_burst_count:
raise SynapseError(400, "Cannot invite so many users at once")
await self.event_creation_handler.assert_accepted_privacy_policy(requester)
power_level_content_override = config.get("power_level_content_override")
@@ -997,41 +1005,51 @@ class RoomCreationHandler(BaseHandler):
class RoomContextHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state_store = self.storage.state
async def get_event_context(
self,
user: UserID,
requester: Requester,
room_id: str,
event_id: str,
limit: int,
event_filter: Optional[Filter],
use_admin_priviledge: bool = False,
) -> Optional[JsonDict]:
"""Retrieves events, pagination tokens and state around a given event
in a room.
Args:
user
requester
room_id
event_id
limit: The maximum number of events to return in total
(excluding state).
event_filter: the filter to apply to the events returned
(excluding the target event_id)
use_admin_priviledge: if `True`, return all events, regardless
of whether `user` has access to them. To be used **ONLY**
from the admin API.
Returns:
dict, or None if the event isn't found
"""
user = requester.user
if use_admin_priviledge:
await assert_user_is_admin(self.auth, requester.user)
before_limit = math.floor(limit / 2.0)
after_limit = limit - before_limit
users = await self.store.get_users_in_room(room_id)
is_peeking = user.to_string() not in users
def filter_evts(events):
return filter_events_for_client(
async def filter_evts(events):
if use_admin_priviledge:
return events
return await filter_events_for_client(
self.storage, user.to_string(), events, is_peeking=is_peeking
)

View File

@@ -85,6 +85,17 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
burst_count=hs.config.ratelimiting.rc_joins_remote.burst_count,
)
self._invites_per_room_limiter = Ratelimiter(
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_invites_per_room.per_second,
burst_count=hs.config.ratelimiting.rc_invites_per_room.burst_count,
)
self._invites_per_user_limiter = Ratelimiter(
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_invites_per_user.per_second,
burst_count=hs.config.ratelimiting.rc_invites_per_user.burst_count,
)
# This is only used to get at ratelimit function, and
# maybe_kick_guest_users. It's fine there are multiple of these as
# it doesn't store state.
@@ -144,6 +155,16 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
"""
raise NotImplementedError()
def ratelimit_invite(self, room_id: Optional[str], invitee_user_id: str):
"""Ratelimit invites by room and by target user.
If room ID is missing then we just rate limit by target user.
"""
if room_id:
self._invites_per_room_limiter.ratelimit(room_id)
self._invites_per_user_limiter.ratelimit(invitee_user_id)
async def _local_membership_update(
self,
requester: Requester,
@@ -387,8 +408,14 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
raise SynapseError(403, "This room has been blocked on this server")
if effective_membership_state == Membership.INVITE:
target_id = target.to_string()
if ratelimit:
# Don't ratelimit application services.
if not requester.app_service or requester.app_service.is_rate_limited():
self.ratelimit_invite(room_id, target_id)
# block any attempts to invite the server notices mxid
if target.to_string() == self._server_notices_mxid:
if target_id == self._server_notices_mxid:
raise SynapseError(HTTPStatus.FORBIDDEN, "Cannot invite this user")
block_invite = False
@@ -412,7 +439,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
block_invite = True
if not await self.spam_checker.user_may_invite(
requester.user.to_string(), target.to_string(), room_id
requester.user.to_string(), target_id, room_id
):
logger.info("Blocking invite due to spam checker")
block_invite = True
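To summarise the invite-ratelimiting flow introduced above: invites are limited per target user and, when the room is known, per room, while requests from an application service that is exempt from rate limiting skip the check. The sketch below uses simple counters rather than Synapse's `Ratelimiter`, and all names and limits are illustrative.

```python
# Hedged sketch of dual-key invite ratelimiting with an app-service
# exemption; not Synapse's actual implementation.
from collections import Counter
from typing import Optional

MAX_INVITES_PER_ROOM = 10
MAX_INVITES_PER_USER = 5

invites_per_room: Counter = Counter()
invites_per_user: Counter = Counter()


def ratelimit_invite(room_id: Optional[str], invitee_user_id: str) -> None:
    """Raise if either the room or the target user has seen too many invites."""
    if room_id is not None:
        invites_per_room[room_id] += 1
        if invites_per_room[room_id] > MAX_INVITES_PER_ROOM:
            raise RuntimeError("too many invites into %s" % room_id)
    invites_per_user[invitee_user_id] += 1
    if invites_per_user[invitee_user_id] > MAX_INVITES_PER_USER:
        raise RuntimeError("too many invites for %s" % invitee_user_id)


def maybe_ratelimit(app_service_is_exempt: bool, room_id: Optional[str], target: str) -> None:
    # Mirrors the "don't ratelimit application services" branch above.
    if not app_service_is_exempt:
        ratelimit_invite(room_id, target)


maybe_ratelimit(False, "!room:example.org", "@bob:example.org")
maybe_ratelimit(True, "!room:example.org", "@bob:example.org")  # exempt, no check
```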

View File

@@ -23,7 +23,6 @@ from saml2.client import Saml2Client
from synapse.api.errors import SynapseError
from synapse.config import ConfigError
from synapse.config.saml2_config import SamlAttributeRequirement
from synapse.handlers._base import BaseHandler
from synapse.handlers.sso import MappingException, UserAttributes
from synapse.http.servlet import parse_string
@@ -78,9 +77,10 @@ class SamlHandler(BaseHandler):
# user-facing name of this auth provider
self.idp_name = "SAML"
# we do not currently support icons for SAML auth, but this is required by
# we do not currently support icons/brands for SAML auth, but this is required by
# the SsoIdentityProvider protocol type.
self.idp_icon = None
self.idp_brand = None
# a map from saml session id to Saml2SessionData object
self._outstanding_requests_dict = {} # type: Dict[str, Saml2SessionData]
@@ -132,7 +132,7 @@ class SamlHandler(BaseHandler):
raise Exception("prepare_for_authenticate didn't return a Location header")
async def handle_saml_response(self, request: SynapseRequest) -> None:
"""Handle an incoming request to /_matrix/saml2/authn_response
"""Handle an incoming request to /_synapse/client/saml2/authn_response
Args:
request: the incoming request from the browser. We'll
@@ -238,12 +238,10 @@ class SamlHandler(BaseHandler):
# Ensure that the attributes of the logged in user meet the required
# attributes.
for requirement in self._saml2_attribute_requirements:
if not _check_attribute_requirement(saml2_auth.ava, requirement):
self._sso_handler.render_error(
request, "unauthorised", "You are not authorised to log in here."
)
return
if not self._sso_handler.check_required_attributes(
request, saml2_auth.ava, self._saml2_attribute_requirements
):
return
# Call the mapper to register/login the user
try:
@@ -372,21 +370,6 @@ class SamlHandler(BaseHandler):
del self._outstanding_requests_dict[reqid]
def _check_attribute_requirement(ava: dict, req: SamlAttributeRequirement) -> bool:
values = ava.get(req.attribute, [])
for v in values:
if v == req.value:
return True
logger.info(
"SAML2 attribute %s did not match required value '%s' (was '%s')",
req.attribute,
req.value,
values,
)
return False
DOT_REPLACE_PATTERN = re.compile(
("[^%s]" % (re.escape("".join(mxid_localpart_allowed_characters)),))
)
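The removed `_check_attribute_requirement` helper above is replaced by a shared required-attributes check on the SSO handler. A hedged sketch of what such a check looks like, operating on the attribute-value mapping asserted by the IdP, is shown below; the class and function names are illustrative rather than Synapse's exact API.

```python
# Sketch of a shared "required attributes" check for SSO logins: every
# requirement's expected value must appear among the asserted values.
import logging
from dataclasses import dataclass
from typing import Dict, Iterable, List

logger = logging.getLogger(__name__)


@dataclass
class AttributeRequirement:
    attribute: str
    value: str


def check_required_attributes(
    attributes: Dict[str, List[str]], requirements: Iterable[AttributeRequirement]
) -> bool:
    for req in requirements:
        values = attributes.get(req.attribute, [])
        if req.value not in values:
            logger.info(
                "SSO attribute %s did not match required value '%s' (was '%s')",
                req.attribute, req.value, values,
            )
            return False
    return True


ava = {"groups": ["staff", "synapse-users"], "mail": ["alice@example.org"]}
print(check_required_attributes(ava, [AttributeRequirement("groups", "staff")]))   # True
print(check_required_attributes(ava, [AttributeRequirement("groups", "admins")]))  # False
```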

View File

@@ -15,23 +15,28 @@
import itertools
import logging
from typing import Iterable
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import NotFoundError, SynapseError
from synapse.api.filtering import Filter
from synapse.events import EventBase
from synapse.storage.state import StateFilter
from synapse.types import JsonDict, UserID
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
class SearchHandler(BaseHandler):
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self._event_serializer = hs.get_event_client_serializer()
self.storage = hs.get_storage()
@@ -87,13 +92,15 @@ class SearchHandler(BaseHandler):
return historical_room_ids
async def search(self, user, content, batch=None):
async def search(
self, user: UserID, content: JsonDict, batch: Optional[str] = None
) -> JsonDict:
"""Performs a full text search for a user.
Args:
user (UserID)
content (dict): Search parameters
batch (str): The next_batch parameter. Used for pagination.
user
content: Search parameters
batch: The next_batch parameter. Used for pagination.
Returns:
dict to be returned to the client with results of search
@@ -186,7 +193,7 @@ class SearchHandler(BaseHandler):
# If doing a subset of all rooms search, check if any of the rooms
# are from an upgraded room, and search their contents as well
if search_filter.rooms:
historical_room_ids = []
historical_room_ids = [] # type: List[str]
for room_id in search_filter.rooms:
# Add any previous rooms to the search if they exist
ids = await self.get_old_rooms_from_upgraded_room(room_id)
@@ -209,8 +216,10 @@ class SearchHandler(BaseHandler):
rank_map = {} # event_id -> rank of event
allowed_events = []
room_groups = {} # Holds result of grouping by room, if applicable
sender_group = {} # Holds result of grouping by sender, if applicable
# Holds result of grouping by room, if applicable
room_groups = {} # type: Dict[str, JsonDict]
# Holds result of grouping by sender, if applicable
sender_group = {} # type: Dict[str, JsonDict]
# Holds the next_batch for the entire result set if one of those exists
global_next_batch = None
@@ -254,7 +263,7 @@ class SearchHandler(BaseHandler):
s["results"].append(e.event_id)
elif order_by == "recent":
room_events = []
room_events = [] # type: List[EventBase]
i = 0
pagination_token = batch_token
@@ -418,13 +427,10 @@ class SearchHandler(BaseHandler):
state_results = {}
if include_state:
rooms = {e.room_id for e in allowed_events}
for room_id in rooms:
for room_id in {e.room_id for e in allowed_events}:
state = await self.state_handler.get_current_state(room_id)
state_results[room_id] = list(state.values())
state_results.values()
# We're now about to serialize the events. We should not make any
# blocking calls after this. Otherwise the 'age' will be wrong
@@ -448,9 +454,9 @@ class SearchHandler(BaseHandler):
if state_results:
s = {}
for room_id, state in state_results.items():
for room_id, state_events in state_results.items():
s[room_id] = await self._event_serializer.serialize_events(
state, time_now
state_events, time_now
)
rooms_cat_res["state"] = s

Some files were not shown because too many files have changed in this diff.