Compare commits

...

119 Commits

Author SHA1 Message Date
Andrew Morgan
9d567cc26c change other 404 to a SynapseError 2019-08-06 13:08:35 +01:00
Andrew Morgan
dc361798e5 raise a SynapseError instead of tuple 2019-08-05 12:39:36 +01:00
Andrew Morgan
f536e7c6a7 Add M_NOT_FOUND errcode to response 2019-07-31 15:14:19 +01:00
Andrew Morgan
ad2c415965 lint 2019-07-31 14:28:37 +01:00
Andrew Morgan
2347e3c362 Hide event existence 2019-07-31 14:26:16 +01:00
Andrew Morgan
e906115472 lint 2019-07-31 14:18:35 +01:00
Andrew Morgan
ccddd9b9a8 Add changelog 2019-07-31 14:03:58 +01:00
Andrew Morgan
228c3c405f Return 404 instead of 403 when retrieving an event without perms 2019-07-31 13:59:12 +01:00
Andrew Morgan
58a755cdc3 Remove duplicate return statement 2019-07-31 13:24:51 +01:00
Erik Johnston
8fde611a8c Merge pull request #5794 from matrix-org/erikj/share_ssl_options_for_well_known
Share SSL options for well-known requests
2019-07-31 11:40:02 +01:00
Amber Brown
8f15832950 Remove DelayedCall debugging from test runs (#5787) 2019-07-31 20:39:22 +10:00
Erik Johnston
9fe6ad5fef Merge pull request #5796 from matrix-org/erikj/disable_codecov_report
Disable codecov reports to GH comments.
2019-07-31 11:16:15 +01:00
Erik Johnston
fe2f2fc530 Newsfile 2019-07-31 10:59:39 +01:00
Erik Johnston
6be336c0d8 Disable codecov reports to GH comments.
The double posting is really annoying, and I don't think anyone is
actually reading them. The commit statuses should give a good summary
and will link to a full report.
2019-07-31 10:56:02 +01:00
Erik Johnston
3b7a35a59a Newsfile 2019-07-31 10:39:24 +01:00
Erik Johnston
a9bcae9f50 Share SSL options for well-known requests 2019-07-31 10:39:24 +01:00
Brendan Abolivier
d4f91e7e9f Merge pull request #5793 from matrix-org/erikj/fix_bg_update
Don't recreate current_state_events.membership column
2019-07-30 21:19:39 +02:00
Erik Johnston
4037d3220a Newsfile 2019-07-30 16:43:59 +01:00
Erik Johnston
123c04daa7 Don't recreate column 2019-07-30 16:42:48 +01:00
Erik Johnston
62a2d60d72 Merge pull request #5792 from matrix-org/erikj/fix_bg_update
Fix current_state_events membership background update.
2019-07-30 15:20:09 +01:00
Erik Johnston
958d69f300 Newsfile 2019-07-30 14:53:52 +01:00
Erik Johnston
15056ca208 Fix current_state_events membership background update.
Turns out not all rooms are in `rooms`, so let's fetch the room list from
`current_state_events`. We move the delta file to force it to be run
again.
2019-07-30 14:51:41 +01:00
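A minimal sketch of the idea, using sqlite3 purely for illustration (the real tables have many more columns):

```python
import sqlite3

# Derive the room list from current_state_events rather than the
# incomplete `rooms` table, as described above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE current_state_events (room_id TEXT, type TEXT, state_key TEXT)"
)
conn.execute("INSERT INTO current_state_events VALUES ('!a:hs', 'm.room.create', '')")
conn.execute("INSERT INTO current_state_events VALUES ('!b:hs', 'm.room.create', '')")

# Every room with any current state shows up here, even if it never
# made it into `rooms`.
room_ids = [
    row[0]
    for row in conn.execute("SELECT DISTINCT room_id FROM current_state_events")
]
print(room_ids)  # ['!a:hs', '!b:hs']
```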
Erik Johnston
7a48d0bab8 Merge pull request #5789 from matrix-org/erikj/fix_error_handling_keys
Fix error handling when fetching remote device keys
2019-07-30 13:26:12 +01:00
Erik Johnston
e23ab7f41a Newsfile 2019-07-30 13:10:00 +01:00
Erik Johnston
1ec7d656dd Unwrap error 2019-07-30 13:09:02 +01:00
Erik Johnston
458e51df7a Fix error handling when fetching remote device keys 2019-07-30 13:07:02 +01:00
Erik Johnston
63eb4a1b62 Merge pull request #5746 from matrix-org/erikj/test_bg_update_currnet_state
Add unit test for current state membership bg update
2019-07-30 10:00:02 +01:00
Richard van der Hoff
8c97f6414c Remove non-functional 'expire_access_token' setting (#5782)
The `expire_access_token` didn't do what it sounded like it should do. What it
actually did was make Synapse enforce the 'time' caveat on macaroons used as
access tokens, but since our access token macaroons never contained such a
caveat, it was always a no-op.

(The code to add 'time' caveats was removed back in v0.18.5, in #1656)
2019-07-30 08:25:02 +01:00
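For context, the 'time' caveat mentioned here is just a first-party caveat string that a verifier must satisfy. A rough sketch with pymacaroons, with made-up key and identifier values:

```python
import time

import pymacaroons

key = b"secret-signing-key"  # made up for the example

macaroon = pymacaroons.Macaroon(location="example.com", identifier="access", key=key)
# The kind of caveat Synapse stopped adding in v0.18.5: expire in 60s,
# expressed in milliseconds.
macaroon.add_first_party_caveat("time < %d" % (int(time.time() * 1000) + 60000,))

def verify_expiry(caveat):
    if not caveat.startswith("time < "):
        return False
    return int(time.time() * 1000) < int(caveat[len("time < "):])

v = pymacaroons.Verifier()
v.satisfy_general(verify_expiry)
v.verify(macaroon, key)  # raises if any caveat is unsatisfied
```

Since access-token macaroons carried no such caveat, the enforcement path was dead code, which is why the setting could be removed.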
Amber Brown
865077f1d1 Room Complexity Client Implementation (#5783) 2019-07-30 02:47:27 +10:00
Erik Johnston
7c8c3b8437 Merge pull request #5774 from matrix-org/erikj/fix_rejected_membership
Fix room summary when rejected events are in state
2019-07-29 17:15:15 +01:00
Erik Johnston
3e013b7c8e Merge pull request #5752 from matrix-org/erikj/forgotten_user
Remove some more joins on room_memberships
2019-07-29 17:15:01 +01:00
Erik Johnston
2a12d76646 Merge pull request #5770 from matrix-org/erikj/fix_current_state_event_sqlite
Fix current_state bg update to work on old SQLite
2019-07-29 17:09:01 +01:00
Amber Brown
97a8b4caf7 Move some timeout checking logs to DEBUG #5785 2019-07-30 02:02:18 +10:00
Erik Johnston
df3a5db629 Expand comment 2019-07-29 16:40:25 +01:00
Jorik Schellekens
85b0bd8fe0 Update the device list cache when keys/query is called (#5693) 2019-07-29 16:34:44 +01:00
Erik Johnston
105e7f6ed3 Remove lost comment 2019-07-29 16:09:48 +01:00
Erik Johnston
3b476f5767 Fix debian packages for sid being called buster. (#5775)
* Fix debian packages for sid being called buster.

I don't know why the sid images report buster as their codename in
`lsb_release`, but they do, so let's just grab the codename from the
distro we pass into the dockerfile

* Newsfile
2019-07-30 00:33:32 +10:00
Erik Johnston
d94916852f Newsfile 2019-07-29 13:04:58 +01:00
Erik Johnston
84c6ea1af8 Update old deps unit test to use old sqlite3 2019-07-29 13:04:50 +01:00
Erik Johnston
45df38e61b Fix current_state bg update to work on old SQLite 2019-07-29 13:04:10 +01:00
Brendan Abolivier
fa87004bc1 Merge pull request #5780 from matrix-org/baboliver/loopingcall-args
Add ability to pass arguments to looping calls
2019-07-29 10:58:22 +02:00
Brendan Abolivier
bd083a5fcf Changelog 2019-07-29 10:04:09 +02:00
Brendan Abolivier
244953be3f Add kwargs and doc 2019-07-29 10:03:14 +02:00
Brendan Abolivier
08352d44f8 Add ability to pass arguments to looping calls 2019-07-29 09:54:37 +02:00
Richard van der Hoff
d74595e2ca Merge branch 'master' into develop 2019-07-26 12:39:33 +01:00
Richard van der Hoff
1a93daf353 Merge pull request #5744 from matrix-org/erikj/log_leave_origin_mismatch
Log when we receive a /make_* request from a different origin
2019-07-26 12:38:37 +01:00
Richard van der Hoff
97bf307755 yet more changelog attribution fixes 2019-07-26 12:06:06 +01:00
Erik Johnston
2e9cf7dda5 Newsfile 2019-07-26 10:14:31 +01:00
Erik Johnston
14c24c9037 Fix room summary when rejected events are in state
Annoyingly, the `current_state_events` table can include rejected events,
in which case the membership column will be null. To work around this,
let's just always filter out null membership for now.
2019-07-26 10:11:36 +01:00
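A sketch of the workaround with sqlite3; the point is just the `membership IS NOT NULL` filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE current_state_events "
    "(room_id TEXT, type TEXT, state_key TEXT, membership TEXT)"
)
conn.execute(
    "INSERT INTO current_state_events VALUES ('!r:hs', 'm.room.member', '@ok:hs', 'join')"
)
# A rejected membership event leaves a row with a null membership column:
conn.execute(
    "INSERT INTO current_state_events VALUES ('!r:hs', 'm.room.member', '@rejected:hs', NULL)"
)

members = conn.execute(
    "SELECT state_key FROM current_state_events "
    "WHERE room_id = '!r:hs' AND type = 'm.room.member' "
    "AND membership IS NOT NULL"
).fetchall()
print(members)  # [('@ok:hs',)]
```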
Richard van der Hoff
1cad8d7b6f Convert RedactionTestCase to modern test style (#5768) 2019-07-26 07:38:55 +01:00
Richard van der Hoff
26d742fed6 Merge pull request #5767 from matrix-org/rav/redactions/cross_room_id
log when a redaction attempts to redact an event in a different room
2019-07-25 18:49:56 +01:00
Richard van der Hoff
618bd1ee76 Fix some error cases in the caching layer. (#5749)
There was some inconsistent behaviour in the caching layer around how
exceptions were handled - particularly synchronously-thrown ones.

This seems to be most easily handled by pushing the creation of
ObservableDeferreds down from CacheDescriptor to the Cache.
2019-07-25 15:59:45 +01:00
Andrew Morgan
f16aa3a44b Merge branch 'master' into develop 2019-07-25 15:19:22 +01:00
Andrew Morgan
baf081cd3b Merge tag 'v1.2.0rc2' into develop
Bugfixes
--------

- Fix a regression introduced in v1.2.0rc1 which led to incorrect labels on some prometheus metrics. ([\#5734](https://github.com/matrix-org/synapse/issues/5734))
2019-07-24 13:47:51 +01:00
Erik Johnston
2276936bac Merge pull request #5743 from matrix-org/erikj/log_origin_receipts_mismatch
Log when we receive receipt from a different origin
2019-07-24 13:27:57 +01:00
Richard van der Hoff
f30a71a67b Stop trying to fetch events with event_id=None. (#5753)
`None` is not a valid event id, so queuing up a database fetch for it seems
like a silly thing to do.

I considered making `get_event` return `None` if `event_id is None`, but then
its interaction with `allow_none` seemed unintuitive, and strong typing ftw.
2019-07-24 13:16:18 +01:00
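A hedged sketch of the "strong typing" approach: reject `None` up front instead of queuing a fetch. The class and storage here are toy stand-ins, not Synapse's actual store API:

```python
class EventStore:
    def __init__(self, events):
        self._events = events  # dict standing in for the database

    async def get_event(self, event_id, allow_none=False):
        # `None` is never a valid event id: fail fast rather than queuing
        # a database fetch that can only come back empty.
        if not isinstance(event_id, str):
            raise TypeError("event_id must be a str, not %r" % (event_id,))
        event = self._events.get(event_id)
        if event is None and not allow_none:
            raise RuntimeError("Event %s not found" % (event_id,))
        return event
```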
Erik Johnston
c159803067 Newsfile 2019-07-24 11:51:44 +01:00
Erik Johnston
0c4a99607e Remove join when calculating room summaries. 2019-07-24 11:49:15 +01:00
Erik Johnston
62921fb53e Remove join on room_memberships when fetching rooms for user. 2019-07-24 11:45:58 +01:00
Erik Johnston
32768e96d4 Add function to get all forgotten rooms for user
This will allow us to efficiently filter out rooms that have been
forgotten in other queries without having to join against the
`room_memberships` table.
2019-07-24 11:44:23 +01:00
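The shape of the change, as a sketch: fetch the forgotten-room set once, then filter in Python rather than joining per query. Column names here are assumptions based on the commit message:

```python
def get_forgotten_rooms_for_user(conn, user_id):
    """Return the set of room IDs the user has forgotten (one cheap query)."""
    rows = conn.execute(
        "SELECT room_id FROM room_memberships WHERE user_id = ? AND forgotten = 1",
        (user_id,),
    )
    return {row[0] for row in rows}

# Callers can then filter without a join:
#   forgotten = get_forgotten_rooms_for_user(conn, user_id)
#   visible_rooms = [r for r in candidate_rooms if r not in forgotten]
```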
Richard van der Hoff
418635e68a Add a prometheus metric for active cache lookups. (#5750)
* Add a prometheus metric for active cache lookups.

* changelog
2019-07-24 11:33:13 +01:00
Erik Johnston
adcd5368b0 Newsfile 2019-07-23 17:00:24 +01:00
Erik Johnston
73bbaf2bc6 Add unit test for current state membership bg update 2019-07-23 17:00:22 +01:00
Jorik Schellekens
3641784e8c Make Jaeger fully configurable (#5694)
* Allow Jaeger to be configured

* Update sample config
2019-07-23 15:46:04 +01:00
Erik Johnston
65afc535a6 Update changelog.d/5743.bugfix
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-23 15:14:21 +01:00
Amber Brown
4806651744 Replace returnValue with return (#5736) 2019-07-23 23:00:55 +10:00
Erik Johnston
fadfde9aaa Newsfile 2019-07-23 13:32:37 +01:00
Jorik Schellekens
18a466b84e Opentracing Utils (#5722)
* Add decerators for tracing functions

* Use the new clean contexts

* Context and edu utils

* Move opentracing setters

* Move whitelisting

* Sectioning comments

* Better args wrapper

* Docstrings

Co-Authored-By: Erik Johnston <erik@matrix.org>

* Remove unused methods.

* Don't use global

* One tracing decorator to rule them all.
2019-07-23 13:31:16 +01:00
Erik Johnston
3db1377b26 Log when we receive receipt from a different origin 2019-07-23 13:31:03 +01:00
Erik Johnston
841b12867e Merge pull request #5732 from matrix-org/erikj/sdnotify
Add process hooks to tell systemd our state.
2019-07-23 13:06:53 +01:00
Erik Johnston
73bf452666 Merge pull request #5740 from matrix-org/erikj/worker_flakey_tests
Mark flakey tests as blacklisted for worker mode
2019-07-23 11:32:32 +01:00
Erik Johnston
22d2338ace Newsfile 2019-07-23 10:27:53 +01:00
Erik Johnston
1883223a01 Mark flakey tests as blacklisted for worker mode 2019-07-23 10:26:52 +01:00
Erik Johnston
4f6984aa88 Merge pull request #5738 from matrix-org/erikj/faster_update
Speed up current state background update.
2019-07-23 10:23:12 +01:00
Erik Johnston
cda4460d99 Also update systemd-with-workers contrib examples 2019-07-23 10:14:01 +01:00
Erik Johnston
39e594b765 Merge pull request #5733 from matrix-org/erikj/exlude_sytest_blacklist
Don't package sytest-blacklist file.
2019-07-23 10:11:34 +01:00
Erik Johnston
cf0006719d Newsfile 2019-07-23 10:01:30 +01:00
Erik Johnston
b2a629ef49 Speed up current state background update.
Turns out that storing huge JSON arrays in the progress JSON isn't
something that postgres particularly likes.
2019-07-23 10:01:30 +01:00
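Presumably the fix amounts to keeping a small resume token in the progress blob instead of the full work list; a sketch of the difference:

```python
import json

# Storing the full remaining-room list bloats every UPDATE of the
# background_updates progress row:
bad_progress = json.dumps({"remaining_rooms": ["!a:hs", "!b:hs"]})  # can grow huge

# Storing only a resume token keeps the row tiny; each batch derives its
# work as "rooms after this one":
good_progress = json.dumps({"last_room_id": "!b:hs"})
```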
Erik Johnston
d9ea9881d2 Newsfile 2019-07-22 16:09:15 +01:00
Erik Johnston
c96322c8d2 Don't package sytest-blacklist file.
I don't think it's useful, and I don't even know where it would end up.
2019-07-22 16:07:12 +01:00
Amber Brown
0d0f6d12bc Fix logging in workers (#5729)
This also adds a worker blacklist.
2019-07-22 16:05:00 +01:00
Erik Johnston
17c27df6ea Update example systemd service file 2019-07-22 15:24:25 +01:00
Erik Johnston
80cfad233e Call startup commands as system triggers.
This helps ensure that we only consider ourselves "up" once all the
startup functions have completed.
2019-07-22 15:22:14 +01:00
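The mechanism is Twisted's system event triggers: a "before"/"startup" trigger runs (and, if it returns a Deferred, is waited on) before the reactor considers itself running. A self-contained sketch:

```python
from twisted.internet import reactor

def run_startup_tasks():
    # e.g. register background updates, start listeners, ...
    print("startup tasks done")

# Runs before the reactor reports itself as running, so "ready" really
# means all startup work has completed.
reactor.addSystemEventTrigger("before", "startup", run_startup_tasks)

reactor.callLater(0.1, reactor.stop)  # just so the example exits
reactor.run()
```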
Erik Johnston
720d30469f Merge pull request #5730 from matrix-org/erikj/cache_versions
Cache get_version_string.
2019-07-22 14:52:52 +01:00
Erik Johnston
79f689e6c2 Newsfile 2019-07-22 14:52:19 +01:00
Erik Johnston
c560b791e1 Add process hooks to tell systemd our state.
Fixes #5676.
2019-07-22 14:52:18 +01:00
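The hooks use the sd_notify protocol via the sdnotify package (as the diff later in this page shows); combined with Type=notify in the unit file, systemd only treats the service as started once READY=1 arrives:

```python
import os

import sdnotify

# Silently no-ops when not running under systemd.
channel = sdnotify.SystemdNotifier()

# On startup, once everything is initialised:
channel.notify("READY=1\nMAINPID=%s" % (os.getpid(),))

# And just before shutdown:
channel.notify("STOPPING=1")
```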
Jason Robinson
8e513e7afc Merge pull request #5731 from matrix-org/jaywink/admin-user-list-user-type
Add `user_type` to returned fields in admin API user list endpoints
2019-07-22 16:28:51 +03:00
Erik Johnston
22e862304a Update changelog.d/5730.misc
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-22 14:09:56 +01:00
Richard van der Hoff
0cb72812f9 Fix stack overflow in Keyring (#5724)
* Refactor Keyring._start_key_lookups

There's an awful lot of deferreds and dictionaries flying around here. The
whole thing can be made much simpler and achieve the same effect.

* Add a delay to key lookup lock release to fix stack overflow

A tactical call_later here should fix #5723

* changelog
2019-07-22 13:51:22 +01:00
Andrew Morgan
f477ce4b1a Merge tag 'v1.2.0rc1' into develop
v1.2.0rc1

Features
--------

- Add support for opentracing. ([\#5544](https://github.com/matrix-org/synapse/issues/5544), [\#5712](https://github.com/matrix-org/synapse/issues/5712))
- Add ability to pull all locally stored events out of synapse that a particular user can see. ([\#5589](https://github.com/matrix-org/synapse/issues/5589))
- Add a basic admin command app to allow server operators to run Synapse admin commands separately from the main production instance. ([\#5597](https://github.com/matrix-org/synapse/issues/5597))
- Add `sender` and `origin_server_ts` fields to `m.replace`. ([\#5613](https://github.com/matrix-org/synapse/issues/5613))
- Add default push rule to ignore reactions. ([\#5623](https://github.com/matrix-org/synapse/issues/5623))
- Include the original event when asking for its relations. ([\#5626](https://github.com/matrix-org/synapse/issues/5626))
- Implement `session_lifetime` configuration option, after which access tokens will expire. ([\#5660](https://github.com/matrix-org/synapse/issues/5660))
- Return "This account has been deactivated" when a deactivated user tries to login. ([\#5674](https://github.com/matrix-org/synapse/issues/5674))
- Enable aggregations support by default ([\#5714](https://github.com/matrix-org/synapse/issues/5714))

Bugfixes
--------

- Fix 'utime went backwards' errors on daemonization. ([\#5609](https://github.com/matrix-org/synapse/issues/5609))
- Various minor fixes to the federation request rate limiter. ([\#5621](https://github.com/matrix-org/synapse/issues/5621))
- Forbid viewing relations on an event once it has been redacted. ([\#5629](https://github.com/matrix-org/synapse/issues/5629))
- Fix requests to the `/store_invite` endpoint of identity servers being sent in the wrong format. ([\#5638](https://github.com/matrix-org/synapse/issues/5638))
- Fix newly-registered users not being able to lookup their own profile without joining a room. ([\#5644](https://github.com/matrix-org/synapse/issues/5644))
- Fix bug in #5626 that prevented the original_event field from actually having the contents of the original event in a call to `/relations`. ([\#5654](https://github.com/matrix-org/synapse/issues/5654))
- Fix 3PID bind requests being sent to identity servers as `application/x-form-www-urlencoded` data, which is deprecated. ([\#5658](https://github.com/matrix-org/synapse/issues/5658))
- Fix some problems with authenticating redactions in recent room versions. ([\#5699](https://github.com/matrix-org/synapse/issues/5699), [\#5700](https://github.com/matrix-org/synapse/issues/5700), [\#5707](https://github.com/matrix-org/synapse/issues/5707))
- Ignore redactions of m.room.create events. ([\#5701](https://github.com/matrix-org/synapse/issues/5701))

Updates to the Docker image
---------------------------

- Base Docker image on a newer Alpine Linux version (3.8 -> 3.10). ([\#5619](https://github.com/matrix-org/synapse/issues/5619))
- Add missing space in default logging file format generated by the Docker image. ([\#5620](https://github.com/matrix-org/synapse/issues/5620))

Improved Documentation
----------------------

- Add information about nginx normalisation to reverse_proxy.rst. Contributed by @skalarproduktraum - thanks! ([\#5397](https://github.com/matrix-org/synapse/issues/5397))
- --no-pep517 should be --no-use-pep517 in the documentation to set up the development environment. ([\#5651](https://github.com/matrix-org/synapse/issues/5651))
- Improvements to Postgres setup instructions. Contributed by @Lrizika - thanks! ([\#5661](https://github.com/matrix-org/synapse/issues/5661))
- Minor tweaks to postgres documentation. ([\#5675](https://github.com/matrix-org/synapse/issues/5675))

Deprecations and Removals
-------------------------

- Remove support for the `invite_3pid_guest` configuration setting. ([\#5625](https://github.com/matrix-org/synapse/issues/5625))

Internal Changes
----------------

- Move logging code out of `synapse.util` and into `synapse.logging`. ([\#5606](https://github.com/matrix-org/synapse/issues/5606), [\#5617](https://github.com/matrix-org/synapse/issues/5617))
- Add a blacklist file to the repo to blacklist certain sytests from failing CI. ([\#5611](https://github.com/matrix-org/synapse/issues/5611))
- Make runtime errors surrounding password reset emails much clearer. ([\#5616](https://github.com/matrix-org/synapse/issues/5616))
- Remove dead code for persisting outgoing federation transactions. ([\#5622](https://github.com/matrix-org/synapse/issues/5622))
- Add `lint.sh` to the scripts-dev folder which will run all linting steps required by CI. ([\#5627](https://github.com/matrix-org/synapse/issues/5627))
- Move RegistrationHandler.get_or_create_user to test code. ([\#5628](https://github.com/matrix-org/synapse/issues/5628))
- Add some more common python virtual-environment paths to the black exclusion list. ([\#5630](https://github.com/matrix-org/synapse/issues/5630))
- Some counter metrics exposed over Prometheus have been renamed, with the old names preserved for backwards compatibility and deprecated. See `docs/metrics-howto.rst` for details. ([\#5636](https://github.com/matrix-org/synapse/issues/5636))
- Unblacklist some user_directory sytests. ([\#5637](https://github.com/matrix-org/synapse/issues/5637))
- Factor out some redundant code in the login implementation. ([\#5639](https://github.com/matrix-org/synapse/issues/5639))
- Update ModuleApi to avoid register(generate_token=True). ([\#5640](https://github.com/matrix-org/synapse/issues/5640))
- Remove access-token support from `RegistrationHandler.register`, and rename it. ([\#5641](https://github.com/matrix-org/synapse/issues/5641))
- Remove access-token support from `RegistrationStore.register`, and rename it. ([\#5642](https://github.com/matrix-org/synapse/issues/5642))
- Improve logging for auto-join when a new user is created. ([\#5643](https://github.com/matrix-org/synapse/issues/5643))
- Remove unused and unnecessary check for FederationDeniedError in _exception_to_failure. ([\#5645](https://github.com/matrix-org/synapse/issues/5645))
- Fix a small typo in a code comment. ([\#5655](https://github.com/matrix-org/synapse/issues/5655))
- Clean up exception handling around client access tokens. ([\#5656](https://github.com/matrix-org/synapse/issues/5656))
- Add a mechanism for per-test homeserver configuration in the unit tests. ([\#5657](https://github.com/matrix-org/synapse/issues/5657))
- Inline issue_access_token. ([\#5659](https://github.com/matrix-org/synapse/issues/5659))
- Update the sytest BuildKite configuration to checkout Synapse in `/src`. ([\#5664](https://github.com/matrix-org/synapse/issues/5664))
- Add a `docker` type to the towncrier configuration. ([\#5673](https://github.com/matrix-org/synapse/issues/5673))
- Convert `synapse.federation.transport.server` to `async`. Might improve some stack traces. ([\#5689](https://github.com/matrix-org/synapse/issues/5689))
- Documentation for opentracing. ([\#5703](https://github.com/matrix-org/synapse/issues/5703))
2019-07-22 13:49:16 +01:00
Jason Robinson
66f5ff72fd Add user_type to returned fields in admin API user list endpoints
Mostly the user type will be empty (a normal user), but there is also the
"support" user type.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2019-07-22 15:29:18 +03:00
Erik Johnston
2017369f7d Newsfile 2019-07-22 13:18:25 +01:00
Erik Johnston
5ea773c505 Cache get_version_string.
The version of a module isn't going to change over the lifetime of the
process (assuming no funky hot reloading is going on, which it isn't),
so let's just cache the result to avoid spawning lots of git
subprocesses.

Fixes #5672.
2019-07-22 13:15:08 +01:00
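Memoisation is enough here since the value is fixed for the process lifetime; a simplified sketch (the real function takes a module, but the idea is the same):

```python
import functools
import subprocess

@functools.lru_cache(maxsize=None)
def get_version_string(cwd):
    # Computed once per checkout path, then served from the cache for
    # every subsequent request: no more git subprocess storms.
    return (
        subprocess.check_output(["git", "describe", "--tags", "--always"], cwd=cwd)
        .decode("ascii")
        .strip()
    )
```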
Jorik Schellekens
f337d2f0f0 Demo uses deprecated cli option (#5725)
* Remove deprecated 'verbose' cli arg

* Create 5725.bugfix
2019-07-22 11:31:05 +01:00
Jorik Schellekens
0fd171770a Merge branch 'release-v1.2.0' into develop 2019-07-22 11:18:50 +01:00
Jorik Schellekens
f99554b15d Revert "Remove deprecated 'verbose' cli arg"
This reverts commit dc7cf81267.
2019-07-19 18:19:27 +01:00
Jorik Schellekens
dc7cf81267 Remove deprecated 'verbose' cli arg 2019-07-19 18:16:42 +01:00
Richard van der Hoff
f214bff0c0 changelog 2019-07-19 17:58:17 +01:00
Richard van der Hoff
dcca56baba Add a delay to key lookup lock release to fix stack overflow
A tactical call_later here should fix #5723
2019-07-19 17:57:00 +01:00
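The trick: firing a deferred via `callLater(0, ...)` returns control to the reactor first, so a long chain of waiters unwinds one reactor tick at a time instead of recursively on the current stack. Schematically:

```python
from twisted.internet import reactor

def release_lock(deferred, result):
    # Without the delay, deferred.callback(result) would run every waiter's
    # callbacks synchronously, nesting the stack once per waiter.
    reactor.callLater(0, deferred.callback, result)
```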
Richard van der Hoff
c7095be913 Refactor Keyring._start_key_lookups
There's an awful lot of deferreds and dictionaries flying around here. The
whole thing can be made much simpler and achieve the same effect.
2019-07-19 17:49:19 +01:00
Erik Johnston
7704873cb8 Merge pull request #5720 from matrix-org/erikj/transactions_upsert
Use upsert when updating destination retry interval
2019-07-19 16:51:16 +01:00
Erik Johnston
d7bd9651bc Merge pull request #5713 from matrix-org/erikj/use_cache_for_filtered_state
Delegate to cached version when using get_filtered_current_state_ids
2019-07-19 16:30:49 +01:00
Erik Johnston
5c07c97c09 Merge pull request #5706 from matrix-org/erikj/add_memberships_to_current_state
Add membership column to current_state_events table
2019-07-19 16:30:33 +01:00
Jorik Schellekens
7b8bc61834 Don't accept opentracing data from clients. (#5715)
* Don't accept opentracing data from clients.

* newsfile
2019-07-19 16:29:57 +01:00
Erik Johnston
ced4fdaa84 Newsfile 2019-07-19 13:40:26 +01:00
Erik Johnston
2410335507 Use upsert when updating destination retry interval 2019-07-19 13:40:24 +01:00
Erik Johnston
bd2e1a2aa8 LoggingTransaction accepts None for callback lists.
It's a bit disingenuous to give LoggingTransaction lists to append
callbacks to if we're not going to run the callbacks.
2019-07-19 13:36:04 +01:00
Erik Johnston
ebc5ed1296 Update comment for new column 2019-07-19 13:29:02 +01:00
Neil Johnson
5c05ae7ba0 Add 'rel' attribute to default welcome page. (#5695)
add rel attribute as a precaution against reverse tabnabbing in future
2019-07-19 12:03:36 +01:00
Richard van der Hoff
b73ce4ba81 Update the coding style doc (#5719)
A few fixes and removal of duplicated stuff, but mostly a bunch of words about the config file.
2019-07-19 11:55:14 +01:00
Amber Brown
356ed0438e Speed up the PostgreSQL unit tests (#5717) 2019-07-19 19:01:23 +10:00
Amber Brown
6a85cb5ef7 Remove non-dedicated logging options and command line arguments (#5678) 2019-07-19 01:40:08 +10:00
Erik Johnston
dd2851d576 Newsfile 2019-07-18 15:27:18 +01:00
Erik Johnston
10523241d8 Delegate to cached version when using get_filtered_current_state_ids
In the case where it gets called with `StateFilter.all()`
2019-07-18 15:17:39 +01:00
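A toy sketch of the delegation, with a plain dict standing in for Synapse's cached `get_current_state_ids` and `wanted_types=None` standing in for `StateFilter.all()`:

```python
class StateStore:
    def __init__(self):
        self._cache = {}  # room_id -> {(type, state_key): event_id}

    def get_current_state_ids(self, room_id):
        # The cached full-state lookup (the fast path).
        return self._cache[room_id]

    def get_filtered_current_state_ids(self, room_id, wanted_types=None):
        # If the caller wants everything, delegate to the cached version
        # instead of issuing a filtered database query.
        if wanted_types is None:
            return self.get_current_state_ids(room_id)
        return {
            key: event_id
            for key, event_id in self._cache[room_id].items()
            if key[0] in wanted_types
        }
```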
Erik Johnston
89c885909a Newsfile 2019-07-18 14:16:01 +01:00
Erik Johnston
8e1ada9e6f Use the current_state_events.membership column 2019-07-18 14:16:01 +01:00
Erik Johnston
059d8c1a4e Track if current_state_events.membership is up to date 2019-07-18 14:16:01 +01:00
Erik Johnston
c618a5d348 Add background update for current_state_events.membership column 2019-07-18 14:16:01 +01:00
Erik Johnston
6de09e07a6 Add membership column to current_state_events table.
It turns out that doing a join is surprisingly expensive for the DB to
do when the room_memberships table is larger than the disk cache.
2019-07-18 14:15:57 +01:00
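Roughly, the denormalisation trades one write-time column for the removal of a read-time join; a sketch of the two query shapes (simplified from the real schema):

```python
# Before: answering "which rooms is this user joined to?" joins the big
# room_memberships table against current_state_events.
join_sql = """
    SELECT c.room_id
    FROM current_state_events AS c
    INNER JOIN room_memberships AS m ON m.event_id = c.event_id
    WHERE m.user_id = ? AND m.membership = 'join'
"""

# After: with a membership column on current_state_events itself, the same
# question is answered from a single table (and its indexes).
no_join_sql = """
    SELECT room_id
    FROM current_state_events
    WHERE type = 'm.room.member' AND state_key = ? AND membership = 'join'
"""
```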
259 changed files with 3221 additions and 2162 deletions


@@ -49,14 +49,15 @@ steps:
- command:
- "python -m pip install tox"
- "apt-get update && apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev"
- "python3.5 -m pip install tox"
- "tox -e py35-old,codecov"
label: ":python: 3.5 / SQLite / Old Deps"
env:
TRIAL_FLAGS: "-j 2"
plugins:
- docker#v3.0.1:
image: "python:3.5"
image: "ubuntu:xenial" # We use xenail to get an old sqlite and python
propagate-environment: true
retry:
automatic:
@@ -117,8 +118,10 @@ steps:
limit: 2
- label: ":python: 3.5 / :postgres: 9.5"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py35-postgres,codecov'"
plugins:
@@ -134,8 +137,10 @@ steps:
limit: 2
- label: ":python: 3.7 / :postgres: 9.5"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
plugins:
@@ -151,8 +156,10 @@ steps:
limit: 2
- label: ":python: 3.7 / :postgres: 11"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
plugins:
@@ -214,8 +221,10 @@ steps:
env:
POSTGRES: "1"
WORKERS: "1"
BLACKLIST: "synapse-blacklist-with-workers"
command:
- "bash .buildkite/merge_base_branch.sh"
- "bash -c 'cat /src/sytest-blacklist /src/.buildkite/worker-blacklist > /src/synapse-blacklist-with-workers'"
- "bash /synapse_sytest.sh"
plugins:
- docker#v3.0.1:
@@ -223,7 +232,6 @@ steps:
propagate-environment: true
always-pull: true
workdir: "/src"
soft_fail: true
retry:
automatic:
- exit_status: -1


@@ -0,0 +1,34 @@
# This file serves as a blacklist for SyTest tests that we expect will fail in
# Synapse when run under worker mode. For more details, see sytest-blacklist.
Message history can be paginated
m.room.history_visibility == "world_readable" allows/forbids appropriately for Guest users
m.room.history_visibility == "world_readable" allows/forbids appropriately for Real users
Can re-join room if re-invited
/upgrade creates a new room
The only membership state included in an initial sync is for all the senders in the timeline
Local device key changes get to remote servers
If remote user leaves room we no longer receive device updates
Forgotten room messages cannot be paginated
Inbound federation can get public room list
Members from the gap are included in gappy incr LL sync
Leaves are present in non-gapped incremental syncs
Old leaves are present in gapped incremental syncs
User sees updates to presence from other users in the incremental sync.
Gapped incremental syncs include all state changes
Old members are included in gappy incr LL sync if they start speaking


@@ -1,5 +1,4 @@
comment:
layout: "diff"
comment: off
coverage:
status:


@@ -8,9 +8,9 @@ This release includes *four* security fixes:
- Prevent an attack where a federated server could send redactions for arbitrary events in v1 and v2 rooms. ([\#5767](https://github.com/matrix-org/synapse/issues/5767))
- Prevent a denial-of-service attack where cycles of redaction events would make Synapse spin infinitely. Thanks to `@lrizika:matrix.org` for identifying and responsibly disclosing this issue. ([0f2ecb961](https://github.com/matrix-org/synapse/commit/0f2ecb961))
- Prevent an attack where users could be joined or parted from public rooms without their consent. Thanks to @Dylanger for identifying and responsibly disclosing this issue. ([\#5744](https://github.com/matrix-org/synapse/issues/5744))
- Prevent an attack where users could be joined or parted from public rooms without their consent. Thanks to @dylangerdaly for identifying and responsibly disclosing this issue. ([\#5744](https://github.com/matrix-org/synapse/issues/5744))
- Fix a vulnerability where a federated server could spoof read-receipts from
users on other servers. Thanks to @Dylanger for identifying this issue too. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
users on other servers. Thanks to @dylangerdaly for identifying this issue too. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
Additionally, the following fix was in Synapse **1.2.0**, but was not correctly
identified during the original release:


@@ -7,7 +7,6 @@ include demo/README
include demo/demo.tls.dh
include demo/*.py
include demo/*.sh
include sytest-blacklist
recursive-include synapse/storage/schema *.sql
recursive-include synapse/storage/schema *.sql.postgres
@@ -34,6 +33,7 @@ exclude Dockerfile
exclude .dockerignore
exclude test_postgresql.sh
exclude .editorconfig
exclude sytest-blacklist
include pyproject.toml
recursive-include changelog.d *

changelog.d/5678.removal (new file)

@@ -0,0 +1 @@
Synapse now no longer accepts the `-v`/`--verbose`, `-f`/`--log-file`, or `--log-config` command line flags, and removes the deprecated `verbose` and `log_file` configuration file options. Users of these options should migrate their options into the dedicated log configuration.

changelog.d/5693.bugfix (new file)

@@ -0,0 +1 @@
Fix UISIs during homeserver outage.

changelog.d/5694.misc (new file)

@@ -0,0 +1 @@
Make Jaeger fully configurable.

changelog.d/5695.misc (new file)

@@ -0,0 +1 @@
Add precautionary measures to prevent future abuse of `window.opener` in default welcome page.

changelog.d/5706.misc (new file)

@@ -0,0 +1 @@
Reduce database IO usage by optimising queries for current membership.

changelog.d/5713.misc (new file)

@@ -0,0 +1 @@
Improve caching when fetching `get_filtered_current_state_ids`.

changelog.d/5715.misc (new file)

@@ -0,0 +1 @@
Don't accept opentracing data from clients.

changelog.d/5717.misc (new file)

@@ -0,0 +1 @@
Speed up PostgreSQL unit tests in CI.

changelog.d/5719.misc (new file)

@@ -0,0 +1 @@
Update the coding style document.

changelog.d/5720.misc (new file)

@@ -0,0 +1 @@
Improve database query performance when recording retry intervals for remote hosts.

changelog.d/5722.misc (new file)

@@ -0,0 +1 @@
Add a set of opentracing utils.

changelog.d/5724.bugfix (new file)

@@ -0,0 +1 @@
Fix stack overflow in server key lookup code.

changelog.d/5725.bugfix (new file)

@@ -0,0 +1 @@
start.sh no longer uses a deprecated CLI option.

changelog.d/5729.removal (new file)

@@ -0,0 +1 @@
Synapse now no longer accepts the `-v`/`--verbose`, `-f`/`--log-file`, or `--log-config` command line flags, and removes the deprecated `verbose` and `log_file` configuration file options. Users of these options should migrate their options into the dedicated log configuration.

changelog.d/5730.misc (new file)

@@ -0,0 +1 @@
Cache result of get_version_string to reduce overhead of `/version` federation requests.

changelog.d/5731.misc (new file)

@@ -0,0 +1 @@
Return 'user_type' in admin API user endpoints results.

changelog.d/5732.feature (new file)

@@ -0,0 +1 @@
Add sd_notify hooks to ease systemd integration and allow usage of Type=Notify.

changelog.d/5733.misc (new file)

@@ -0,0 +1 @@
Don't package the sytest test blacklist file.

changelog.d/5736.misc (new file)

@@ -0,0 +1 @@
Replace uses of returnValue with plain return, as returnValue is not needed on Python 3.

changelog.d/5738.misc (new file)

@@ -0,0 +1 @@
Reduce database IO usage by optimising queries for current membership.

changelog.d/5740.misc (new file)

@@ -0,0 +1 @@
Blacklist some flakey tests in worker mode.

changelog.d/5743.bugfix (new file)

@@ -0,0 +1 @@
Log when we receive an event receipt from an unexpected origin.

changelog.d/5746.misc (new file)

@@ -0,0 +1 @@
Reduce database IO usage by optimising queries for current membership.

changelog.d/5749.misc (new file)

@@ -0,0 +1 @@
Fix some error cases in the caching layer.

changelog.d/5750.misc (new file)

@@ -0,0 +1 @@
Add a prometheus metric for pending cache lookups.

changelog.d/5752.misc (new file)

@@ -0,0 +1 @@
Reduce database IO usage by optimising queries for current membership.

changelog.d/5753.misc (new file)

@@ -0,0 +1 @@
Stop trying to fetch events with event_id=None.

changelog.d/5768.misc (new file)

@@ -0,0 +1 @@
Convert RedactionTestCase to modern test style.

changelog.d/5770.misc (new file)

@@ -0,0 +1 @@
Reduce database IO usage by optimising queries for current membership.

changelog.d/5774.misc (new file)

@@ -0,0 +1 @@
Reduce database IO usage by optimising queries for current membership.

changelog.d/5775.bugfix (new file)

@@ -0,0 +1 @@
Fix debian packaging scripts to correctly build sid packages.

changelog.d/5780.misc (new file)

@@ -0,0 +1 @@
Allow looping calls to be given arguments.

changelog.d/5782.removal (new file)

@@ -0,0 +1 @@
Remove non-functional 'expire_access_token' setting.

changelog.d/5783.feature (new file)

@@ -0,0 +1 @@
Synapse can now be configured to not join remote rooms of a given "complexity" (currently, state events) over federation. This option can be used to prevent adverse performance on resource-constrained homeservers.

changelog.d/5785.misc (new file)

@@ -0,0 +1 @@
Set the logs emitted when checking typing and presence timeouts to DEBUG level, not INFO.

changelog.d/5787.misc (new file)

@@ -0,0 +1 @@
Remove DelayedCall debugging from the test suite, as it is no longer required in the vast majority of Synapse's tests.

changelog.d/5789.bugfix (new file)

@@ -0,0 +1 @@
Fix UISIs during homeserver outage.

changelog.d/5792.misc (new file)

@@ -0,0 +1 @@
Reduce database IO usage by optimising queries for current membership.

changelog.d/5793.misc (new file)

@@ -0,0 +1 @@
Reduce database IO usage by optimising queries for current membership.

changelog.d/5794.misc (new file)

@@ -0,0 +1 @@
Improve performance when making `.well-known` requests by sharing the SSL options between requests.

changelog.d/5796.misc (new file)

@@ -0,0 +1 @@
Disable codecov GitHub comments on PRs.

changelog.d/5798.bugfix (new file)

@@ -0,0 +1 @@
Return 404 instead of 403 when accessing /rooms/{roomId}/event/{eventId} for an event without the appropriate permissions.


@@ -4,7 +4,8 @@ After=matrix-synapse.service
BindsTo=matrix-synapse.service
[Service]
Type=simple
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse


@@ -2,7 +2,8 @@
Description=Synapse Matrix Homeserver
[Service]
Type=simple
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse


@@ -14,7 +14,9 @@
Description=Synapse Matrix homeserver
[Service]
Type=simple
Type=notify
NotifyAccess=main
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-abort
User=synapse


@@ -29,7 +29,7 @@ for port in 8080 8081 8082; do
if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
echo 'enable_registration: true' >> $DIR/etc/$port.config
# Warning, this heredoc depends on the interaction of tabs and spaces. Please don't
@@ -43,7 +43,7 @@ for port in 8080 8081 8082; do
tls: true
resources:
- names: [client, federation]
- port: $port
tls: false
bind_addresses: ['::1', '127.0.0.1']
@@ -68,7 +68,7 @@ for port in 8080 8081 8082; do
# Generate tls keys
openssl req -x509 -newkey rsa:4096 -keyout $DIR/etc/localhost\:$https_port.tls.key -out $DIR/etc/localhost\:$https_port.tls.crt -days 365 -nodes -subj "/O=matrix"
# Ignore keys from the trusted keys server
echo '# Ignore keys from the trusted keys server' >> $DIR/etc/$port.config
echo 'trusted_key_servers:' >> $DIR/etc/$port.config
@@ -120,7 +120,6 @@ for port in 8080 8081 8082; do
python3 -m synapse.app.homeserver \
--config-path "$DIR/etc/$port.config" \
-D \
-vv \
popd
done


@@ -42,6 +42,11 @@ RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
###
FROM ${distro}
# Get the distro we want to pull from as a dynamic build variable
# (We need to define it in each build stage)
ARG distro=""
ENV distro ${distro}
# Install the build dependencies
#
# NB: keep this list in sync with the list of build-deps in debian/control


@@ -4,7 +4,8 @@
set -ex
DIST=`lsb_release -c -s`
# Get the codename from distro env
DIST=`cut -d ':' -f2 <<< $distro`
# we get a read-only copy of the source: make a writeable copy
cp -aT /synapse/source /synapse/build


@@ -1,4 +1,8 @@
# Code Style
Code Style
==========
Formatting tools
----------------
The Synapse codebase uses a number of code formatting tools in order to
quickly and automatically check for formatting (and sometimes logical) errors
@@ -6,20 +10,20 @@ in code.
The necessary tools are detailed below.
## Formatting tools
- **black**
The Synapse codebase uses [black](https://pypi.org/project/black/) as an
opinionated code formatter, ensuring all committed code is properly
formatted.
The Synapse codebase uses `black <https://pypi.org/project/black/>`_ as an
opinionated code formatter, ensuring all committed code is properly
formatted.
First install ``black`` with::
First install ``black`` with::
pip install --upgrade black
pip install --upgrade black
Have ``black`` auto-format your code (it shouldn't change any
functionality) with::
Have ``black`` auto-format your code (it shouldn't change any functionality)
with::
black . --exclude="\.tox|build|env"
black . --exclude="\.tox|build|env"
- **flake8**
@@ -54,17 +58,16 @@ functionality is supported in your editor for a more convenient development
workflow. It is not, however, recommended to run ``flake8`` on save as it
takes a while and is very resource intensive.
## General rules
General rules
-------------
- **Naming**:
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
- **Comments**: should follow the `google code style
<http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
- **Docstrings**: should follow the `google code style
<https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
This is so that we can generate documentation with `sphinx
<http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples
@@ -73,6 +76,8 @@ takes a while and is very resource intensive.
- **Imports**:
- Imports should be sorted by ``isort`` as described above.
- Prefer to import classes and functions rather than packages or modules.
Example::
@@ -92,25 +97,84 @@ takes a while and is very resource intensive.
This goes against the advice in the Google style guide, but it means that
errors in the name are caught early (at import time).
- Multiple imports from the same package can be combined onto one line::
from synapse.types import GroupID, RoomID, UserID
An effort should be made to keep the individual imports in alphabetical
order.
If the list becomes long, wrap it with parentheses and split it over
multiple lines.
- As per `PEP-8 <https://www.python.org/dev/peps/pep-0008/#imports>`_,
imports should be grouped in the following order, with a blank line between
each group:
1. standard library imports
2. related third party imports
3. local application/library specific imports
- Imports within each group should be sorted alphabetically by module name.
- Avoid wildcard imports (``from synapse.types import *``) and relative
imports (``from .types import UserID``).
Configuration file format
-------------------------
The `sample configuration file <./sample_config.yaml>`_ acts as a reference to
Synapse's configuration options for server administrators. Remember that many
readers will be unfamiliar with YAML and server administration in general, so
it is important that the file be as easy to understand as possible, which
includes following a consistent format.
Some guidelines follow:
* Sections should be separated with a heading consisting of a single line
prefixed and suffixed with ``##``. There should be **two** blank lines
before the section header, and **one** after.
* Each option should be listed in the file with the following format:
* A comment describing the setting. Each line of this comment should be
prefixed with a hash (``#``) and a space.
The comment should describe the default behaviour (ie, what happens if
the setting is omitted), as well as what the effect will be if the
setting is changed.
Often, the comment ends with something like "uncomment the
following to \<do action>".
* A line consisting of only ``#``.
* A commented-out example setting, prefixed with only ``#``.
For boolean (on/off) options, convention is that this example should be
the *opposite* to the default (so the comment will end with "Uncomment
the following to enable [or disable] \<feature\>."). For other options,
the example should give some non-default value which is likely to be
useful to the reader.
* There should be a blank line between each option.
* Where several settings are grouped into a single dict, *avoid* the
convention where the whole block is commented out, resulting in comment
lines starting ``# #``, as this is hard to read and confusing to
edit. Instead, leave the top-level config option uncommented, and follow
the conventions above for sub-options. Ensure that your code correctly
handles the top-level option being set to ``None`` (as it will be if no
sub-options are enabled).
* Lines should be wrapped at 80 characters.
Example::
## Frobnication ##
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobnicator_frobber: special_frobber
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
Note that the sample configuration is generated from the synapse code and is
maintained by a script, ``scripts-dev/generate_sample_config``. Making sure
that the output from this script matches the desired format is left as an
exercise for the reader!


@@ -148,7 +148,7 @@ call any other functions.
d = more_stuff()
result = yield d # also fine, of course
defer.returnValue(result)
return result
def nonInlineCallbacksFun():
logger.debug("just a wrapper really")


@@ -278,6 +278,23 @@ listeners:
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained Homeserver Settings
#
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
# limit_remote_rooms.complexity, it will disallow joining or
# instantly leave.
#
# limit_remote_rooms.complexity_error can be set to customise the text
# displayed to the user when a room above the complexity threshold has
# its join cancelled.
#
# Uncomment the below lines to enable:
#limit_remote_rooms:
# enabled: True
# complexity: 1.0
# complexity_error: "This room is too complex."
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
#
@@ -925,10 +942,6 @@ uploads_path: "DATADIR/uploads"
#
# macaroon_secret_key: <PRIVATE STRING>
# Used to enable access token expiration.
#
#expire_access_token: False
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values. Must be specified for the User Consent
# forms to work.
@@ -1430,3 +1443,19 @@ opentracing:
#
#homeserver_whitelist:
# - ".*"
# Jaeger can be configured to sample traces at different rates.
# All configuration options provided by Jaeger can be set here.
# Jaeger's configuration is mostly related to trace sampling, which
# is documented here:
# https://www.jaegertracing.io/docs/1.13/sampling/.
#
#jaeger_config:
# sampler:
# type: const
# param: 1
# Logging whether spans were started and reported
#
# logging:
# false


@@ -128,7 +128,7 @@ class Auth(object):
)
self._check_joined_room(member, user_id, room_id)
defer.returnValue(member)
return member
@defer.inlineCallbacks
def check_user_was_in_room(self, room_id, user_id):
@@ -156,13 +156,13 @@ class Auth(object):
if forgot:
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
defer.returnValue(member)
return member
@defer.inlineCallbacks
def check_host_in_room(self, room_id, host):
with Measure(self.clock, "check_host_in_room"):
latest_event_ids = yield self.store.is_host_joined(room_id, host)
defer.returnValue(latest_event_ids)
return latest_event_ids
def _check_joined_room(self, member, user_id, room_id):
if not member or member.membership != Membership.JOIN:
@@ -219,9 +219,7 @@ class Auth(object):
device_id="dummy-device", # stubbed
)
defer.returnValue(
synapse.types.create_requester(user_id, app_service=app_service)
)
return synapse.types.create_requester(user_id, app_service=app_service)
user_info = yield self.get_user_by_access_token(access_token, rights)
user = user_info["user"]
@@ -262,10 +260,8 @@ class Auth(object):
request.authenticated_entity = user.to_string()
defer.returnValue(
synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service
)
return synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service
)
except KeyError:
raise MissingClientTokenError()
@@ -276,25 +272,25 @@ class Auth(object):
self.get_access_token_from_request(request)
)
if app_service is None:
defer.returnValue((None, None))
return (None, None)
if app_service.ip_range_whitelist:
ip_address = IPAddress(self.hs.get_ip_from_request(request))
if ip_address not in app_service.ip_range_whitelist:
defer.returnValue((None, None))
return (None, None)
if b"user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
return (app_service.sender, app_service)
user_id = request.args[b"user_id"][0].decode("utf8")
if app_service.sender == user_id:
defer.returnValue((app_service.sender, app_service))
return (app_service.sender, app_service)
if not app_service.is_interested_in_user(user_id):
raise AuthError(403, "Application service cannot masquerade as this user.")
if not (yield self.store.get_user_by_id(user_id)):
raise AuthError(403, "Application service has not registered this user")
defer.returnValue((user_id, app_service))
return (user_id, app_service)
@defer.inlineCallbacks
def get_user_by_access_token(self, token, rights="access"):
@@ -330,7 +326,7 @@ class Auth(object):
msg="Access token has expired", soft_logout=True
)
defer.returnValue(r)
return r
# otherwise it needs to be a valid macaroon
try:
@@ -378,7 +374,7 @@ class Auth(object):
}
else:
raise RuntimeError("Unknown rights setting %s", rights)
defer.returnValue(ret)
return ret
except (
_InvalidMacaroonException,
pymacaroons.exceptions.MacaroonException,
@@ -414,21 +410,16 @@ class Auth(object):
try:
user_id = self.get_user_id_from_macaroon(macaroon)
has_expiry = False
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith("time "):
has_expiry = True
elif caveat.caveat_id == "guest = true":
if caveat.caveat_id == "guest = true":
guest = True
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token, user_id=user_id
)
self.validate_macaroon(macaroon, rights, user_id=user_id)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise InvalidClientTokenError("Invalid macaroon passed.")
if not has_expiry and rights == "access":
if rights == "access":
self.token_cache[token] = (user_id, guest)
return user_id, guest
@@ -454,7 +445,7 @@ class Auth(object):
return caveat.caveat_id[len(user_prefix) :]
raise InvalidClientTokenError("No user caveat in macaroon")
def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
def validate_macaroon(self, macaroon, type_string, user_id):
"""
validate that a Macaroon is understood by and was signed by this server.
@@ -462,7 +453,6 @@ class Auth(object):
macaroon(pymacaroons.Macaroon): The macaroon to validate
type_string(str): The kind of token required (e.g. "access",
"delete_pusher")
verify_expiry(bool): Whether to verify whether the macaroon has expired.
user_id (str): The user_id required
"""
v = pymacaroons.Verifier()
@@ -475,19 +465,7 @@ class Auth(object):
v.satisfy_exact("type = " + type_string)
v.satisfy_exact("user_id = %s" % user_id)
v.satisfy_exact("guest = true")
# verify_expiry should really always be True, but there exist access
# tokens in the wild which expire when they should not, so we can't
# enforce expiry yet (so we have to allow any caveat starting with
# 'time < ' in access tokens).
#
# On the other hand, short-term login tokens (as used by CAS login, for
# example) have an expiry time which we do want to enforce.
if verify_expiry:
v.satisfy_general(self._verify_expiry)
else:
v.satisfy_general(lambda c: c.startswith("time < "))
v.satisfy_general(self._verify_expiry)
# access_tokens include a nonce for uniqueness: any value is acceptable
v.satisfy_general(lambda c: c.startswith("nonce = "))
@@ -506,7 +484,7 @@ class Auth(object):
def _look_up_user_by_access_token(self, token):
ret = yield self.store.get_user_by_access_token(token)
if not ret:
defer.returnValue(None)
return None
# we use ret.get() below because *lots* of unit tests stub out
# get_user_by_access_token in a way where it only returns a couple of
@@ -518,7 +496,7 @@ class Auth(object):
"device_id": ret.get("device_id"),
"valid_until_ms": ret.get("valid_until_ms"),
}
defer.returnValue(user_info)
return user_info
def get_appservice_by_req(self, request):
token = self.get_access_token_from_request(request)
@@ -543,7 +521,7 @@ class Auth(object):
@defer.inlineCallbacks
def compute_auth_events(self, event, current_state_ids, for_verification=False):
if event.type == EventTypes.Create:
defer.returnValue([])
return []
auth_ids = []
@@ -604,7 +582,7 @@ class Auth(object):
if member_event.content["membership"] == Membership.JOIN:
auth_ids.append(member_event.event_id)
defer.returnValue(auth_ids)
return auth_ids
@defer.inlineCallbacks
def check_can_change_room_list(self, room_id, user):
@@ -618,7 +596,7 @@ class Auth(object):
is_admin = yield self.is_server_admin(user)
if is_admin:
defer.returnValue(True)
return True
user_id = user.to_string()
yield self.check_joined_room(room_id, user_id)
@@ -712,7 +690,7 @@ class Auth(object):
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.check_user_was_in_room(room_id, user_id)
defer.returnValue((member_event.membership, member_event.event_id))
return (member_event.membership, member_event.event_id)
except AuthError:
visibility = yield self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
@@ -721,7 +699,7 @@ class Auth(object):
visibility
and visibility.content["history_visibility"] == "world_readable"
):
defer.returnValue((Membership.JOIN, None))
return (Membership.JOIN, None)
return
raise AuthError(
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN


@@ -132,7 +132,7 @@ class Filtering(object):
@defer.inlineCallbacks
def get_user_filter(self, user_localpart, filter_id):
result = yield self.store.get_user_filter(user_localpart, filter_id)
defer.returnValue(FilterCollection(result))
return FilterCollection(result)
def add_user_filter(self, user_localpart, user_filter):
self.check_valid_filter(user_filter)


@@ -15,10 +15,12 @@
import gc
import logging
import os
import signal
import sys
import traceback
import sdnotify
from daemonize import Daemonize
from twisted.internet import defer, error, reactor
@@ -242,9 +244,16 @@ def start(hs, listeners=None):
if hasattr(signal, "SIGHUP"):
def handle_sighup(*args, **kwargs):
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
sd_channel = sdnotify.SystemdNotifier()
sd_channel.notify("RELOADING=1")
for i in _sighup_callbacks:
i(hs)
sd_channel.notify("READY=1")
signal.signal(signal.SIGHUP, handle_sighup)
register_sighup(refresh_certificate)
@@ -260,6 +269,7 @@ def start(hs, listeners=None):
hs.get_datastore().start_profiling()
setup_sentry(hs)
setup_sdnotify(hs)
except Exception:
traceback.print_exc(file=sys.stderr)
reactor = hs.get_reactor()
@@ -292,6 +302,25 @@ def setup_sentry(hs):
scope.set_tag("worker_name", name)
def setup_sdnotify(hs):
"""Adds process state hooks to tell systemd what we are up to.
"""
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
sd_channel = sdnotify.SystemdNotifier()
hs.get_reactor().addSystemEventTrigger(
"after",
"startup",
lambda: sd_channel.notify("READY=1\nMAINPID=%s" % (os.getpid())),
)
hs.get_reactor().addSystemEventTrigger(
"before", "shutdown", lambda: sd_channel.notify("STOPPING=1")
)
def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
"""Replaces the resolver with one that limits the number of in flight DNS
requests.


@@ -168,7 +168,9 @@ def start(config_options):
)
ps.setup()
reactor.callWhenRunning(_base.start, ps, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ps, config.worker_listeners
)
_base.start_worker_reactor("synapse-appservice", config)


@@ -194,7 +194,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-client-reader", config)


@@ -193,7 +193,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-event-creator", config)


@@ -175,7 +175,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-federation-reader", config)


@@ -198,7 +198,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-federation-sender", config)


@@ -70,12 +70,12 @@ class PresenceStatusStubServlet(RestServlet):
except HttpResponseException as e:
raise e.to_synapse_error()
defer.returnValue((200, result))
return (200, result)
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
yield self.auth.get_user_by_req(request)
defer.returnValue((200, {}))
return (200, {})
class KeyUploadServlet(RestServlet):
@@ -126,11 +126,11 @@ class KeyUploadServlet(RestServlet):
self.main_uri + request.uri.decode("ascii"), body, headers=headers
)
defer.returnValue((200, result))
return (200, result)
else:
# Just interested in counts.
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
defer.returnValue((200, {"one_time_key_counts": result}))
return (200, {"one_time_key_counts": result})
class FrontendProxySlavedStore(
@@ -247,7 +247,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-frontend-proxy", config)

synapse/app/homeserver.py (executable file → normal file)

@@ -406,7 +406,7 @@ def setup(config_options):
if provision:
yield acme.provision_certificate()
defer.returnValue(provision)
return provision
@defer.inlineCallbacks
def reprovision_acme():
@@ -447,7 +447,7 @@ def setup(config_options):
reactor.stop()
sys.exit(1)
reactor.callWhenRunning(start)
reactor.addSystemEventTrigger("before", "startup", start)
return hs


@@ -161,7 +161,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-media-repository", config)


@@ -216,7 +216,7 @@ def start(config_options):
_base.start(ps, config.worker_listeners)
ps.get_pusherpool().start()
reactor.callWhenRunning(start)
reactor.addSystemEventTrigger("before", "startup", start)
_base.start_worker_reactor("synapse-pusher", config)

View File

@@ -451,7 +451,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-synchrotron", config)

View File

@@ -224,7 +224,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-user-dir", config)

View File

@@ -175,21 +175,21 @@ class ApplicationService(object):
@defer.inlineCallbacks
def _matches_user(self, event, store):
if not event:
defer.returnValue(False)
return False
if self.is_interested_in_user(event.sender):
defer.returnValue(True)
return True
# also check m.room.member state key
if event.type == EventTypes.Member and self.is_interested_in_user(
event.state_key
):
defer.returnValue(True)
return True
if not store:
defer.returnValue(False)
return False
does_match = yield self._matches_user_in_member_list(event.room_id, store)
defer.returnValue(does_match)
return does_match
@cachedInlineCallbacks(num_args=1, cache_context=True)
def _matches_user_in_member_list(self, room_id, store, cache_context):
@@ -200,8 +200,8 @@ class ApplicationService(object):
# check joined member events
for user_id in member_list:
if self.is_interested_in_user(user_id):
defer.returnValue(True)
defer.returnValue(False)
return True
return False
def _matches_room_id(self, event):
if hasattr(event, "room_id"):
@@ -211,13 +211,13 @@ class ApplicationService(object):
@defer.inlineCallbacks
def _matches_aliases(self, event, store):
if not store or not event:
defer.returnValue(False)
return False
alias_list = yield store.get_aliases_for_room(event.room_id)
for alias in alias_list:
if self.is_interested_in_alias(alias):
defer.returnValue(True)
defer.returnValue(False)
return True
return False
@defer.inlineCallbacks
def is_interested(self, event, store=None):
@@ -231,15 +231,15 @@ class ApplicationService(object):
"""
# Do cheap checks first
if self._matches_room_id(event):
defer.returnValue(True)
return True
if (yield self._matches_aliases(event, store)):
defer.returnValue(True)
return True
if (yield self._matches_user(event, store)):
defer.returnValue(True)
return True
defer.returnValue(False)
return False
def is_interested_in_user(self, user_id):
return (

View File

@@ -97,40 +97,40 @@ class ApplicationServiceApi(SimpleHttpClient):
@defer.inlineCallbacks
def query_user(self, service, user_id):
if service.url is None:
defer.returnValue(False)
return False
uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
response = None
try:
response = yield self.get_json(uri, {"access_token": service.hs_token})
if response is not None: # just an empty json object
defer.returnValue(True)
return True
except CodeMessageException as e:
if e.code == 404:
defer.returnValue(False)
return False
return
logger.warning("query_user to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("query_user to %s threw exception %s", uri, ex)
defer.returnValue(False)
return False
@defer.inlineCallbacks
def query_alias(self, service, alias):
if service.url is None:
defer.returnValue(False)
return False
uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
response = None
try:
response = yield self.get_json(uri, {"access_token": service.hs_token})
if response is not None: # just an empty json object
defer.returnValue(True)
return True
except CodeMessageException as e:
logger.warning("query_alias to %s received %s", uri, e.code)
if e.code == 404:
defer.returnValue(False)
return False
return
except Exception as ex:
logger.warning("query_alias to %s threw exception %s", uri, ex)
defer.returnValue(False)
return False
@defer.inlineCallbacks
def query_3pe(self, service, kind, protocol, fields):
@@ -141,7 +141,7 @@ class ApplicationServiceApi(SimpleHttpClient):
else:
raise ValueError("Unrecognised 'kind' argument %r to query_3pe()", kind)
if service.url is None:
defer.returnValue([])
return []
uri = "%s%s/thirdparty/%s/%s" % (
service.url,
@@ -155,7 +155,7 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning(
"query_3pe to %s returned an invalid response %r", uri, response
)
defer.returnValue([])
return []
ret = []
for r in response:
@@ -166,14 +166,14 @@ class ApplicationServiceApi(SimpleHttpClient):
"query_3pe to %s returned an invalid result %r", uri, r
)
defer.returnValue(ret)
return ret
except Exception as ex:
logger.warning("query_3pe to %s threw exception %s", uri, ex)
defer.returnValue([])
return []
def get_3pe_protocol(self, service, protocol):
if service.url is None:
defer.returnValue({})
return {}
@defer.inlineCallbacks
def _get():
@@ -189,7 +189,7 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning(
"query_3pe_protocol to %s did not return a" " valid result", uri
)
defer.returnValue(None)
return None
for instance in info.get("instances", []):
network_id = instance.get("network_id", None)
@@ -198,10 +198,10 @@ class ApplicationServiceApi(SimpleHttpClient):
service.id, network_id
).to_string()
defer.returnValue(info)
return info
except Exception as ex:
logger.warning("query_3pe_protocol to %s threw exception %s", uri, ex)
defer.returnValue(None)
return None
key = (service.id, protocol)
return self.protocol_meta_cache.wrap(key, _get)
@@ -209,7 +209,7 @@ class ApplicationServiceApi(SimpleHttpClient):
@defer.inlineCallbacks
def push_bulk(self, service, events, txn_id=None):
if service.url is None:
defer.returnValue(True)
return True
events = self._serialize(events)
@@ -229,14 +229,14 @@ class ApplicationServiceApi(SimpleHttpClient):
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
defer.returnValue(True)
return True
return
except CodeMessageException as e:
logger.warning("push_bulk to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("push_bulk to %s threw exception %s", uri, ex)
failed_transactions_counter.labels(service.id).inc()
defer.returnValue(False)
return False
def _serialize(self, events):
time_now = self.clock.time_msec()
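The query_user/query_alias handling above encodes a small contract with the application service: a 200 response with any JSON body (even {}) means the user or alias exists, a 404 means it does not, and anything else is logged and treated as "no". A synchronous illustration of the same contract using the standard library rather than Synapse's SimpleHttpClient (endpoint shape as in the code above):

import json
import urllib.error
import urllib.parse
import urllib.request

def as_user_exists(as_base_url, hs_token, user_id):
    """Hypothetical stand-alone version of the query_user contract."""
    url = "%s/users/%s?access_token=%s" % (
        as_base_url,
        urllib.parse.quote(user_id),
        urllib.parse.quote(hs_token),
    )
    try:
        with urllib.request.urlopen(url) as resp:
            json.load(resp)  # any JSON body, even {}, counts as "known user"
        return True
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False
        raise  # other codes: the caller logs and treats the user as unknown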

View File

@@ -193,7 +193,7 @@ class _TransactionController(object):
@defer.inlineCallbacks
def _is_service_up(self, service):
state = yield self.store.get_appservice_state(service)
defer.returnValue(state == ApplicationServiceState.UP or state is None)
return state == ApplicationServiceState.UP or state is None
class _Recoverer(object):
@@ -208,7 +208,7 @@ class _Recoverer(object):
r.service.id,
)
r.recover()
defer.returnValue(recoverers)
return recoverers
def __init__(self, clock, store, as_api, service, callback):
self.clock = clock

View File

@@ -116,8 +116,6 @@ class KeyConfig(Config):
seed = bytes(self.signing_key[0])
self.macaroon_secret_key = hashlib.sha256(seed).digest()
self.expire_access_token = config.get("expire_access_token", False)
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values
self.form_secret = config.get("form_secret", None)
@@ -144,10 +142,6 @@ class KeyConfig(Config):
#
%(macaroon_secret_key)s
# Used to enable access token expiration.
#
#expire_access_token: False
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values. Must be specified for the User Consent
# forms to work.

View File

@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import logging.config
import os
@@ -75,10 +76,8 @@ root:
class LoggingConfig(Config):
def read_config(self, config, **kwargs):
self.verbosity = config.get("verbose", 0)
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
self.log_config = self.abspath(config.get("log_config"))
self.log_file = self.abspath(config.get("log_file"))
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
log_config = os.path.join(config_dir_path, server_name + ".log.config")
@@ -94,38 +93,12 @@ class LoggingConfig(Config):
)
def read_arguments(self, args):
if args.verbose is not None:
self.verbosity = args.verbose
if args.no_redirect_stdio is not None:
self.no_redirect_stdio = args.no_redirect_stdio
if args.log_config is not None:
self.log_config = args.log_config
if args.log_file is not None:
self.log_file = args.log_file
@staticmethod
def add_arguments(parser):
logging_group = parser.add_argument_group("logging")
logging_group.add_argument(
"-v",
"--verbose",
dest="verbose",
action="count",
help="The verbosity level. Specify multiple times to increase "
"verbosity. (Ignored if --log-config is specified.)",
)
logging_group.add_argument(
"-f",
"--log-file",
dest="log_file",
help="File to log to. (Ignored if --log-config is specified.)",
)
logging_group.add_argument(
"--log-config",
dest="log_config",
default=None,
help="Python logging config file",
)
logging_group.add_argument(
"-n",
"--no-redirect-stdio",
@@ -153,58 +126,29 @@ def setup_logging(config, use_worker_options=False):
config (LoggingConfig | synapse.config.workers.WorkerConfig):
configuration data
use_worker_options (bool): True to use 'worker_log_config' and
'worker_log_file' options instead of 'log_config' and 'log_file'.
use_worker_options (bool): True to use the 'worker_log_config' option
instead of 'log_config'.
register_sighup (func | None): Function to call to register a
sighup handler.
"""
log_config = config.worker_log_config if use_worker_options else config.log_config
log_file = config.worker_log_file if use_worker_options else config.log_file
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if log_config is None:
# We don't have a logfile, so fall back to the 'verbosity' param from
# the config or cmdline. (Note that we generate a log config for new
# installs, so this will be an unusual case)
level = logging.INFO
level_for_storage = logging.INFO
if config.verbosity:
level = logging.DEBUG
if config.verbosity > 1:
level_for_storage = logging.DEBUG
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
logger = logging.getLogger("")
logger.setLevel(level)
logging.getLogger("synapse.storage.SQL").setLevel(level_for_storage)
logger.setLevel(logging.INFO)
logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)
formatter = logging.Formatter(log_format)
if log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3, encoding="utf8"
)
def sighup(signum, stack):
logger.info("Closing log file due to SIGHUP")
handler.doRollover()
logger.info("Opened new log file due to SIGHUP")
else:
handler = logging.StreamHandler()
def sighup(*args):
pass
handler = logging.StreamHandler()
handler.setFormatter(formatter)
handler.addFilter(LoggingContextFilter(request=""))
logger.addHandler(handler)
else:
@@ -218,8 +162,7 @@ def setup_logging(config, use_worker_options=False):
logging.info("Reloaded log config from %s due to SIGHUP", log_config)
load_log_config()
appbase.register_sighup(sighup)
appbase.register_sighup(sighup)
# make sure that the first thing we log is a thing we can grep backwards
# for

View File

@@ -18,6 +18,7 @@
import logging
import os.path
import attr
from netaddr import IPSet
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
@@ -38,6 +39,12 @@ DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
DEFAULT_ROOM_VERSION = "4"
ROOM_COMPLEXITY_TOO_GREAT = (
"Your homeserver is unable to join rooms this large or complex. "
"Please speak to your server administrator, or upgrade your instance "
"to join this room."
)
class ServerConfig(Config):
def read_config(self, config, **kwargs):
@@ -247,6 +254,23 @@ class ServerConfig(Config):
self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
@attr.s
class LimitRemoteRoomsConfig(object):
enabled = attr.ib(
validator=attr.validators.instance_of(bool), default=False
)
complexity = attr.ib(
validator=attr.validators.instance_of((int, float)), default=1.0
)
complexity_error = attr.ib(
validator=attr.validators.instance_of(str),
default=ROOM_COMPLEXITY_TOO_GREAT,
)
self.limit_remote_rooms = LimitRemoteRoomsConfig(
**config.get("limit_remote_rooms", {})
)
bind_port = config.get("bind_port")
if bind_port:
if config.get("no_tls", False):
@@ -617,6 +641,23 @@ class ServerConfig(Config):
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained Homeserver Settings
#
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
# limit_remote_rooms.complexity, the join will be disallowed, or the
# room will be left again immediately.
#
# limit_remote_rooms.complexity_error can be set to customise the text
# displayed to the user when a room above the complexity threshold has
# its join cancelled.
#
# Uncomment the below lines to enable:
#limit_remote_rooms:
# enabled: True
# complexity: 1.0
# complexity_error: "This room is too complex."
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
#
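One note on the LimitRemoteRoomsConfig attrs class above: the instance_of validators make a mistyped limit_remote_rooms block fail loudly when the config is parsed, rather than later at join time. A small standalone sketch of that behaviour, mirroring the definitions in this hunk:

import attr

@attr.s
class LimitRemoteRoomsConfig(object):
    enabled = attr.ib(validator=attr.validators.instance_of(bool), default=False)
    complexity = attr.ib(
        validator=attr.validators.instance_of((int, float)), default=1.0
    )

LimitRemoteRoomsConfig(**{"enabled": True, "complexity": 2.0})  # fine
LimitRemoteRoomsConfig(**{"enabled": "yes"})  # raises TypeError from the validator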

View File

@@ -23,6 +23,12 @@ class TracerConfig(Config):
opentracing_config = {}
self.opentracer_enabled = opentracing_config.get("enabled", False)
self.jaeger_config = opentracing_config.get(
"jaeger_config",
{"sampler": {"type": "const", "param": 1}, "logging": False},
)
if not self.opentracer_enabled:
return
@@ -56,4 +62,20 @@ class TracerConfig(Config):
#
#homeserver_whitelist:
# - ".*"
# Jaeger can be configured to sample traces at different rates.
# All configuration options provided by Jaeger can be set here.
# Jaeger's configuration is mostly related to trace sampling, which
# is documented here:
# https://www.jaegertracing.io/docs/1.13/sampling/.
#
#jaeger_config:
# sampler:
# type: const
# param: 1
# Logging whether spans were started and reported
#
# logging: false
"""

View File

@@ -31,7 +31,6 @@ class WorkerConfig(Config):
self.worker_listeners = config.get("worker_listeners", [])
self.worker_daemonize = config.get("worker_daemonize")
self.worker_pid_file = config.get("worker_pid_file")
self.worker_log_file = config.get("worker_log_file")
self.worker_log_config = config.get("worker_log_config")
# The host used to connect to the main synapse
@@ -78,9 +77,5 @@ class WorkerConfig(Config):
if args.daemonize is not None:
self.worker_daemonize = args.daemonize
if args.log_config is not None:
self.worker_log_config = args.log_config
if args.log_file is not None:
self.worker_log_file = args.log_file
if args.manhole is not None:
self.worker_manhole = args.worker_manhole

View File

@@ -31,6 +31,7 @@ from twisted.internet.ssl import (
platformTrust,
)
from twisted.python.failure import Failure
from twisted.web.iweb import IPolicyForHTTPS
logger = logging.getLogger(__name__)
@@ -74,6 +75,7 @@ class ServerContextFactory(ContextFactory):
return self._context
@implementer(IPolicyForHTTPS)
class ClientTLSOptionsFactory(object):
"""Factory for Twisted SSLClientConnectionCreators that are used to make connections
to remote servers for federation.
@@ -146,6 +148,12 @@ class ClientTLSOptionsFactory(object):
f = Failure()
tls_protocol.failVerification(f)
def creatorForNetloc(self, hostname, port):
"""Implements the IPolicyForHTTPS interace so that this can be passed
directly to agents.
"""
return self.get_options(hostname)
@implementer(IOpenSSLClientConnectionCreator)
class SSLClientConnectionCreator(object):

View File

@@ -238,27 +238,9 @@ class Keyring(object):
"""
try:
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred() for rq in verify_requests
}
ctx = LoggingContext.current_context()
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(server_to_deferred)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
# map from server name to a set of outstanding request ids
server_to_request_ids = {}
for verify_request in verify_requests:
@@ -266,40 +248,61 @@ class Keyring(object):
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
# Wait for any previous lookups to complete before proceeding.
yield self.wait_for_previous_lookups(server_to_request_ids.keys())
# take out a lock on each of the servers by sticking a Deferred in
# key_downloads
for server_name in server_to_request_ids.keys():
self.key_downloads[server_name] = defer.Deferred()
logger.debug("Got key lookup lock on %s", server_name)
# When we've finished fetching all the keys for a given server_name,
# drop the lock by resolving the deferred in key_downloads.
def drop_server_lock(server_name):
d = self.key_downloads.pop(server_name)
d.callback(None)
def lookup_done(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
server_requests = server_to_request_ids[server_name]
server_requests.remove(id(verify_request))
# if there are no more requests for this server, we can drop the lock.
if not server_requests:
with PreserveLoggingContext(ctx):
logger.debug("Releasing key lookup lock on %s", server_name)
# ... but not immediately, as that can cause stack explosions if
# we get a long queue of lookups.
self.clock.call_later(0, drop_server_lock, server_name)
return res
for verify_request in verify_requests:
verify_request.key_ready.addBoth(remove_deferreds, verify_request)
verify_request.key_ready.addBoth(lookup_done, verify_request)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
except Exception:
logger.exception("Error starting key lookups")
@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_to_deferred):
def wait_for_previous_lookups(self, server_names):
"""Waits for any previous key lookups for the given servers to finish.
Args:
server_to_deferred (dict[str, Deferred]): server_name to deferred which gets
resolved once we've finished looking up keys for that server.
The Deferreds should be regular twisted ones which call their
callbacks with no logcontext.
server_names (Iterable[str]): list of servers which we want to look up
Returns: a Deferred which resolves once all key lookups for the given
servers have completed. Follows the synapse rules of logcontext
preservation.
Returns:
Deferred[None]: resolves once all key lookups for the given servers have
completed. Follows the synapse rules of logcontext preservation.
"""
loop_count = 1
while True:
wait_on = [
(server_name, self.key_downloads[server_name])
for server_name in server_to_deferred.keys()
for server_name in server_names
if server_name in self.key_downloads
]
if not wait_on:
@@ -314,19 +317,6 @@ class Keyring(object):
loop_count += 1
ctx = LoggingContext.current_context()
def rm(r, server_name_):
with PreserveLoggingContext(ctx):
logger.debug("Releasing key lookup lock on %s", server_name_)
self.key_downloads.pop(server_name_, None)
return r
for server_name, deferred in server_to_deferred.items():
logger.debug("Got key lookup lock on %s", server_name)
self.key_downloads[server_name] = deferred
deferred.addBoth(rm, server_name)
def _get_server_verify_keys(self, verify_requests):
"""Tries to find at least one key for each verify request
@@ -472,7 +462,7 @@ class StoreKeyFetcher(KeyFetcher):
keys = {}
for (server_name, key_id), key in res.items():
keys.setdefault(server_name, {})[key_id] = key
defer.returnValue(keys)
return keys
class BaseV2KeyFetcher(object):
@@ -576,7 +566,7 @@ class BaseV2KeyFetcher(object):
).addErrback(unwrapFirstError)
)
defer.returnValue(verify_keys)
return verify_keys
class PerspectivesKeyFetcher(BaseV2KeyFetcher):
@@ -598,7 +588,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
result = yield self.get_server_verify_key_v2_indirect(
keys_to_fetch, key_server
)
defer.returnValue(result)
return result
except KeyLookupError as e:
logger.warning(
"Key lookup failed from %r: %s", key_server.server_name, e
@@ -611,7 +601,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
str(e),
)
defer.returnValue({})
return {}
results = yield make_deferred_yieldable(
defer.gatherResults(
@@ -625,7 +615,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
for server_name, keys in result.items():
union_of_keys.setdefault(server_name, {}).update(keys)
defer.returnValue(union_of_keys)
return union_of_keys
@defer.inlineCallbacks
def get_server_verify_key_v2_indirect(self, keys_to_fetch, key_server):
@@ -711,7 +701,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
perspective_name, time_now_ms, added_keys
)
defer.returnValue(keys)
return keys
def _validate_perspectives_response(self, key_server, response):
"""Optionally check the signature on the result of a /key/query request
@@ -853,7 +843,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
)
keys.update(response_keys)
defer.returnValue(keys)
return keys
@defer.inlineCallbacks

View File

@@ -144,15 +144,13 @@ class EventBuilder(object):
if self._origin_server_ts is not None:
event_dict["origin_server_ts"] = self._origin_server_ts
defer.returnValue(
create_local_event_from_event_dict(
clock=self._clock,
hostname=self._hostname,
signing_key=self._signing_key,
format_version=self.format_version,
event_dict=event_dict,
internal_metadata_dict=self.internal_metadata.get_dict(),
)
return create_local_event_from_event_dict(
clock=self._clock,
hostname=self._hostname,
signing_key=self._signing_key,
format_version=self.format_version,
event_dict=event_dict,
internal_metadata_dict=self.internal_metadata.get_dict(),
)

View File

@@ -133,19 +133,17 @@ class EventContext(object):
else:
prev_state_id = None
defer.returnValue(
{
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
"state_group": self.state_group,
"rejected": self.rejected,
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None,
}
)
return {
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
"state_group": self.state_group,
"rejected": self.rejected,
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None,
}
@staticmethod
def deserialize(store, input):
@@ -202,7 +200,7 @@ class EventContext(object):
yield make_deferred_yieldable(self._fetching_state_deferred)
defer.returnValue(self._current_state_ids)
return self._current_state_ids
@defer.inlineCallbacks
def get_prev_state_ids(self, store):
@@ -222,7 +220,7 @@ class EventContext(object):
yield make_deferred_yieldable(self._fetching_state_deferred)
defer.returnValue(self._prev_state_ids)
return self._prev_state_ids
def get_cached_current_state_ids(self):
"""Gets the current state IDs if we have them already cached.

View File

@@ -51,7 +51,7 @@ class ThirdPartyEventRules(object):
defer.Deferred[bool]: True if the event should be allowed, False if not.
"""
if self.third_party_rules is None:
defer.returnValue(True)
return True
prev_state_ids = yield context.get_prev_state_ids(self.store)
@@ -61,7 +61,7 @@ class ThirdPartyEventRules(object):
state_events[key] = yield self.store.get_event(event_id, allow_none=True)
ret = yield self.third_party_rules.check_event_allowed(event, state_events)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def on_create_room(self, requester, config, is_requester_admin):
@@ -98,7 +98,7 @@ class ThirdPartyEventRules(object):
"""
if self.third_party_rules is None:
defer.returnValue(True)
return True
state_ids = yield self.store.get_filtered_current_state_ids(room_id)
room_state_events = yield self.store.get_events(state_ids.values())
@@ -110,4 +110,4 @@ class ThirdPartyEventRules(object):
ret = yield self.third_party_rules.check_threepid_can_be_invited(
medium, address, state_events
)
defer.returnValue(ret)
return ret

View File

@@ -360,7 +360,7 @@ class EventClientSerializer(object):
"""
# To handle the case of presence events and the like
if not isinstance(event, EventBase):
defer.returnValue(event)
return event
event_id = event.event_id
serialized_event = serialize_event(event, time_now, **kwargs)
@@ -406,7 +406,7 @@ class EventClientSerializer(object):
"sender": edit.sender,
}
defer.returnValue(serialized_event)
return serialized_event
def serialize_events(self, events, time_now, **kwargs):
"""Serializes multiple events.

View File

@@ -106,7 +106,7 @@ class FederationBase(object):
"Failed to find copy of %s with valid signature", pdu.event_id
)
defer.returnValue(res)
return res
handle = preserve_fn(handle_check_result)
deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)]
@@ -116,9 +116,9 @@ class FederationBase(object):
).addErrback(unwrapFirstError)
if include_none:
defer.returnValue(valid_pdus)
return valid_pdus
else:
defer.returnValue([p for p in valid_pdus if p])
return [p for p in valid_pdus if p]
def _check_sigs_and_hash(self, room_version, pdu):
return make_deferred_yieldable(

View File

@@ -213,7 +213,7 @@ class FederationClient(FederationBase):
).addErrback(unwrapFirstError)
)
defer.returnValue(pdus)
return pdus
@defer.inlineCallbacks
@log_function
@@ -245,7 +245,7 @@ class FederationClient(FederationBase):
ev = self._get_pdu_cache.get(event_id)
if ev:
defer.returnValue(ev)
return ev
pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
@@ -307,7 +307,7 @@ class FederationClient(FederationBase):
if signed_pdu:
self._get_pdu_cache[event_id] = signed_pdu
defer.returnValue(signed_pdu)
return signed_pdu
@defer.inlineCallbacks
@log_function
@@ -355,7 +355,7 @@ class FederationClient(FederationBase):
auth_chain.sort(key=lambda e: e.depth)
defer.returnValue((pdus, auth_chain))
return (pdus, auth_chain)
except HttpResponseException as e:
if e.code == 400 or e.code == 404:
logger.info("Failed to use get_room_state_ids API, falling back")
@@ -404,7 +404,7 @@ class FederationClient(FederationBase):
signed_auth.sort(key=lambda e: e.depth)
defer.returnValue((signed_pdus, signed_auth))
return (signed_pdus, signed_auth)
@defer.inlineCallbacks
def get_events_from_store_or_dest(self, destination, room_id, event_ids):
@@ -429,7 +429,7 @@ class FederationClient(FederationBase):
missing_events.discard(k)
if not missing_events:
defer.returnValue((signed_events, failed_to_fetch))
return (signed_events, failed_to_fetch)
logger.debug(
"Fetching unknown state/auth events %s for room %s",
@@ -465,7 +465,7 @@ class FederationClient(FederationBase):
# We removed all events we successfully fetched from `batch`
failed_to_fetch.update(batch)
defer.returnValue((signed_events, failed_to_fetch))
return (signed_events, failed_to_fetch)
@defer.inlineCallbacks
@log_function
@@ -485,7 +485,7 @@ class FederationClient(FederationBase):
signed_auth.sort(key=lambda e: e.depth)
defer.returnValue(signed_auth)
return signed_auth
@defer.inlineCallbacks
def _try_destination_list(self, description, destinations, callback):
@@ -521,7 +521,7 @@ class FederationClient(FederationBase):
try:
res = yield callback(destination)
defer.returnValue(res)
return res
except InvalidResponseError as e:
logger.warn("Failed to %s via %s: %s", description, destination, e)
except HttpResponseException as e:
@@ -615,7 +615,7 @@ class FederationClient(FederationBase):
event_dict=pdu_dict,
)
defer.returnValue((destination, ev, event_format))
return (destination, ev, event_format)
return self._try_destination_list(
"make_" + membership, destinations, send_request
@@ -728,13 +728,11 @@ class FederationClient(FederationBase):
check_authchain_validity(signed_auth)
defer.returnValue(
{
"state": signed_state,
"auth_chain": signed_auth,
"origin": destination,
}
)
return {
"state": signed_state,
"auth_chain": signed_auth,
"origin": destination,
}
return self._try_destination_list("send_join", destinations, send_request)
@@ -758,7 +756,7 @@ class FederationClient(FederationBase):
# FIXME: We should handle signature failures more gracefully.
defer.returnValue(pdu)
return pdu
@defer.inlineCallbacks
def _do_send_invite(self, destination, pdu, room_version):
@@ -786,7 +784,7 @@ class FederationClient(FederationBase):
"invite_room_state": pdu.unsigned.get("invite_room_state", []),
},
)
defer.returnValue(content)
return content
except HttpResponseException as e:
if e.code in [400, 404]:
err = e.to_synapse_error()
@@ -821,7 +819,7 @@ class FederationClient(FederationBase):
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
defer.returnValue(content)
return content
def send_leave(self, destinations, pdu):
"""Sends a leave event to one of a list of homeservers.
@@ -856,7 +854,7 @@ class FederationClient(FederationBase):
)
logger.debug("Got content: %s", content)
defer.returnValue(None)
return None
return self._try_destination_list("send_leave", destinations, send_request)
@@ -917,7 +915,7 @@ class FederationClient(FederationBase):
"missing": content.get("missing", []),
}
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def get_missing_events(
@@ -974,7 +972,7 @@ class FederationClient(FederationBase):
# get_missing_events
signed_events = []
defer.returnValue(signed_events)
return signed_events
@defer.inlineCallbacks
def forward_third_party_invite(self, destinations, room_id, event_dict):
@@ -986,7 +984,7 @@ class FederationClient(FederationBase):
yield self.transport_layer.exchange_third_party_invite(
destination=destination, room_id=room_id, event_dict=event_dict
)
defer.returnValue(None)
return None
except CodeMessageException:
raise
except Exception as e:
@@ -995,3 +993,39 @@ class FederationClient(FederationBase):
)
raise RuntimeError("Failed to send to any server.")
@defer.inlineCallbacks
def get_room_complexity(self, destination, room_id):
"""
Fetch the complexity of a remote room from another server.
Args:
destination (str): The remote server
room_id (str): The room ID to ask about.
Returns:
Deferred[dict] or Deferred[None]: Dict contains the complexity
metric versions, while None means we could not fetch the complexity.
"""
try:
complexity = yield self.transport_layer.get_room_complexity(
destination=destination, room_id=room_id
)
defer.returnValue(complexity)
except CodeMessageException as e:
# We didn't manage to get it -- probably a 404. We are okay if other
# servers don't give it to us.
logger.debug(
"Failed to fetch room complexity via %s for %s, got a %d",
destination,
room_id,
e.code,
)
except Exception:
logger.exception(
"Failed to fetch room complexity via %s for %s", destination, room_id
)
# If we don't manage to find it, return None. It's not an error if a
# server doesn't give it to us.
defer.returnValue(None)
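Tying this to the limit_remote_rooms config earlier in the comparison: a hedged sketch of how a caller could gate a remote join on the returned metrics, assuming the unstable endpoint reports a numeric "v1" value (the enforcement site itself is not part of this diff):

from twisted.internet import defer

@defer.inlineCallbacks
def room_too_complex(federation_client, config, destination, room_id):
    """Hypothetical helper: True if the room exceeds the configured limit."""
    if not config.limit_remote_rooms.enabled:
        return False
    complexity = yield federation_client.get_room_complexity(destination, room_id)
    if complexity is None:
        # we could not fetch it, and that is not an error; allow the join
        return False
    return complexity["v1"] > config.limit_remote_rooms.complexity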

View File

@@ -99,7 +99,7 @@ class FederationServer(FederationBase):
res = self._transaction_from_pdus(pdus).get_dict()
defer.returnValue((200, res))
return (200, res)
@defer.inlineCallbacks
@log_function
@@ -126,7 +126,7 @@ class FederationServer(FederationBase):
origin, transaction, request_time
)
defer.returnValue(result)
return result
@defer.inlineCallbacks
def _handle_incoming_transaction(self, origin, transaction, request_time):
@@ -147,8 +147,7 @@ class FederationServer(FederationBase):
"[%s] We've already responded to this request",
transaction.transaction_id,
)
defer.returnValue(response)
return
return response
logger.debug("[%s] Transaction is new", transaction.transaction_id)
@@ -163,7 +162,7 @@ class FederationServer(FederationBase):
yield self.transaction_actions.set_response(
origin, transaction, 400, response
)
defer.returnValue((400, response))
return (400, response)
received_pdus_counter.inc(len(transaction.pdus))
@@ -265,7 +264,7 @@ class FederationServer(FederationBase):
logger.debug("Returning: %s", str(response))
yield self.transaction_actions.set_response(origin, transaction, 200, response)
defer.returnValue((200, response))
return (200, response)
@defer.inlineCallbacks
def received_edu(self, origin, edu_type, content):
@@ -298,7 +297,7 @@ class FederationServer(FederationBase):
event_id,
)
defer.returnValue((200, resp))
return (200, resp)
@defer.inlineCallbacks
def on_state_ids_request(self, origin, room_id, event_id):
@@ -315,9 +314,7 @@ class FederationServer(FederationBase):
state_ids = yield self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = yield self.store.get_auth_chain_ids(state_ids)
defer.returnValue(
(200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids})
)
return (200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids})
@defer.inlineCallbacks
def _on_context_state_request_compute(self, room_id, event_id):
@@ -336,12 +333,10 @@ class FederationServer(FederationBase):
)
)
defer.returnValue(
{
"pdus": [pdu.get_pdu_json() for pdu in pdus],
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
}
)
return {
"pdus": [pdu.get_pdu_json() for pdu in pdus],
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
}
@defer.inlineCallbacks
@log_function
@@ -349,15 +344,15 @@ class FederationServer(FederationBase):
pdu = yield self.handler.get_persisted_pdu(origin, event_id)
if pdu:
defer.returnValue((200, self._transaction_from_pdus([pdu]).get_dict()))
return (200, self._transaction_from_pdus([pdu]).get_dict())
else:
defer.returnValue((404, ""))
return (404, "")
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
received_queries_counter.labels(query_type).inc()
resp = yield self.registry.on_query(query_type, args)
defer.returnValue((200, resp))
return (200, resp)
@defer.inlineCallbacks
def on_make_join_request(self, origin, room_id, user_id, supported_versions):
@@ -371,9 +366,7 @@ class FederationServer(FederationBase):
pdu = yield self.handler.on_make_join_request(origin, room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue(
{"event": pdu.get_pdu_json(time_now), "room_version": room_version}
)
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@defer.inlineCallbacks
def on_invite_request(self, origin, content, room_version):
@@ -391,7 +384,7 @@ class FederationServer(FederationBase):
yield self.check_server_matches_acl(origin_host, pdu.room_id)
ret_pdu = yield self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec()
defer.returnValue({"event": ret_pdu.get_pdu_json(time_now)})
return {"event": ret_pdu.get_pdu_json(time_now)}
@defer.inlineCallbacks
def on_send_join_request(self, origin, content, room_id):
@@ -407,16 +400,14 @@ class FederationServer(FederationBase):
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
res_pdus = yield self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
defer.returnValue(
(
200,
{
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
"auth_chain": [
p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]
],
},
)
return (
200,
{
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
"auth_chain": [
p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]
],
},
)
@defer.inlineCallbacks
@@ -428,9 +419,7 @@ class FederationServer(FederationBase):
room_version = yield self.store.get_room_version(room_id)
time_now = self._clock.time_msec()
defer.returnValue(
{"event": pdu.get_pdu_json(time_now), "room_version": room_version}
)
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@defer.inlineCallbacks
def on_send_leave_request(self, origin, content, room_id):
@@ -445,7 +434,7 @@ class FederationServer(FederationBase):
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
yield self.handler.on_send_leave_request(origin, pdu)
defer.returnValue((200, {}))
return (200, {})
@defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id):
@@ -456,7 +445,7 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
defer.returnValue((200, res))
return (200, res)
@defer.inlineCallbacks
def on_query_auth_request(self, origin, content, room_id, event_id):
@@ -509,7 +498,7 @@ class FederationServer(FederationBase):
"missing": ret.get("missing", []),
}
defer.returnValue((200, send_content))
return (200, send_content)
@log_function
def on_query_client_keys(self, origin, content):
@@ -548,7 +537,7 @@ class FederationServer(FederationBase):
),
)
defer.returnValue({"one_time_keys": json_result})
return {"one_time_keys": json_result}
@defer.inlineCallbacks
@log_function
@@ -580,9 +569,7 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
defer.returnValue(
{"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
)
return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
@log_function
def on_openid_userinfo(self, token):
@@ -676,14 +663,14 @@ class FederationServer(FederationBase):
ret = yield self.handler.exchange_third_party_invite(
sender_user_id, target_user_id, room_id, signed
)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
ret = yield self.handler.on_exchange_third_party_invite_request(
origin, room_id, event_dict
)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def check_server_matches_acl(self, server_name, room_id):

View File

@@ -374,7 +374,7 @@ class PerDestinationQueue(object):
assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
defer.returnValue((edus, now_stream_id))
return (edus, now_stream_id)
@defer.inlineCallbacks
def _get_to_device_message_edus(self, limit):
@@ -393,4 +393,4 @@ class PerDestinationQueue(object):
for content in contents
]
defer.returnValue((edus, stream_id))
return (edus, stream_id)

View File

@@ -133,4 +133,4 @@ class TransactionManager(object):
)
success = False
defer.returnValue(success)
return success

View File

@@ -21,7 +21,11 @@ from six.moves import urllib
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.urls import FEDERATION_V1_PREFIX, FEDERATION_V2_PREFIX
from synapse.api.urls import (
FEDERATION_UNSTABLE_PREFIX,
FEDERATION_V1_PREFIX,
FEDERATION_V2_PREFIX,
)
from synapse.logging.utils import log_function
logger = logging.getLogger(__name__)
@@ -183,7 +187,7 @@ class TransportLayerClient(object):
try_trailing_slash_on_400=True,
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -201,7 +205,7 @@ class TransportLayerClient(object):
ignore_backoff=ignore_backoff,
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -259,7 +263,7 @@ class TransportLayerClient(object):
ignore_backoff=ignore_backoff,
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -270,7 +274,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -288,7 +292,7 @@ class TransportLayerClient(object):
ignore_backoff=True,
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -299,7 +303,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content, ignore_backoff=True
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -310,7 +314,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content, ignore_backoff=True
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -339,7 +343,7 @@ class TransportLayerClient(object):
destination=remote_server, path=path, args=args, ignore_backoff=True
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -350,7 +354,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=event_dict
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -359,7 +363,7 @@ class TransportLayerClient(object):
content = yield self.client.get_json(destination=destination, path=path)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -370,7 +374,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -402,7 +406,7 @@ class TransportLayerClient(object):
content = yield self.client.post_json(
destination=destination, path=path, data=query_content, timeout=timeout
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -426,7 +430,7 @@ class TransportLayerClient(object):
content = yield self.client.get_json(
destination=destination, path=path, timeout=timeout
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -460,7 +464,7 @@ class TransportLayerClient(object):
content = yield self.client.post_json(
destination=destination, path=path, data=query_content, timeout=timeout
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -488,7 +492,7 @@ class TransportLayerClient(object):
timeout=timeout,
)
defer.returnValue(content)
return content
@log_function
def get_group_profile(self, destination, group_id, requester_user_id):
@@ -935,6 +939,23 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content, ignore_backoff=True
)
def get_room_complexity(self, destination, room_id):
"""
Args:
destination (str): The remote server
room_id (str): The room ID to ask about.
"""
path = _create_path(FEDERATION_UNSTABLE_PREFIX, "/rooms/%s/complexity", room_id)
return self.client.get_json(destination=destination, path=path)
def _create_path(federation_prefix, path, *args):
"""
Ensures that all args are url encoded.
"""
return federation_prefix + path % tuple(urllib.parse.quote(arg, "") for arg in args)
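A quick worked example of the refactored helper: quote(arg, "") passes an empty safe set, so even "/" inside an argument is percent-encoded, keeping each argument a single path segment. FEDERATION_V1_PREFIX is assumed to be "/_matrix/federation/v1" here.

from six.moves import urllib

FEDERATION_V1_PREFIX = "/_matrix/federation/v1"  # assumed value

def _create_path(federation_prefix, path, *args):
    # mirrors the helper above
    return federation_prefix + path % tuple(
        urllib.parse.quote(arg, "") for arg in args
    )

print(_create_path(FEDERATION_V1_PREFIX, "/query/%s", "a/b c"))
# -> /_matrix/federation/v1/query/a%2Fb%20c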
def _create_v1_path(path, *args):
"""Creates a path against V1 federation API from the path template and
@@ -951,9 +972,7 @@ def _create_v1_path(path, *args):
Returns:
str
"""
return FEDERATION_V1_PREFIX + path % tuple(
urllib.parse.quote(arg, "") for arg in args
)
return _create_path(FEDERATION_V1_PREFIX, path, *args)
def _create_v2_path(path, *args):
@@ -971,6 +990,4 @@ def _create_v2_path(path, *args):
Returns:
str
"""
return FEDERATION_V2_PREFIX + path % tuple(
urllib.parse.quote(arg, "") for arg in args
)
return _create_path(FEDERATION_V2_PREFIX, path, *args)

View File

@@ -157,7 +157,7 @@ class GroupAttestionRenewer(object):
yield self.store.update_remote_attestion(group_id, user_id, attestation)
defer.returnValue({})
return {}
def _start_renew_attestations(self):
return run_as_background_process("renew_attestations", self._renew_attestations)

View File

@@ -85,7 +85,7 @@ class GroupsServerHandler(object):
if not is_admin:
raise SynapseError(403, "User is not admin in group")
defer.returnValue(group)
return group
@defer.inlineCallbacks
def get_group_summary(self, group_id, requester_user_id):
@@ -151,22 +151,20 @@ class GroupsServerHandler(object):
group_id, requester_user_id
)
defer.returnValue(
{
"profile": profile,
"users_section": {
"users": users,
"roles": roles,
"total_user_count_estimate": 0, # TODO
},
"rooms_section": {
"rooms": rooms,
"categories": categories,
"total_room_count_estimate": 0, # TODO
},
"user": membership_info,
}
)
return {
"profile": profile,
"users_section": {
"users": users,
"roles": roles,
"total_user_count_estimate": 0, # TODO
},
"rooms_section": {
"rooms": rooms,
"categories": categories,
"total_room_count_estimate": 0, # TODO
},
"user": membership_info,
}
@defer.inlineCallbacks
def update_group_summary_room(
@@ -192,7 +190,7 @@ class GroupsServerHandler(object):
is_public=is_public,
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def delete_group_summary_room(
@@ -208,7 +206,7 @@ class GroupsServerHandler(object):
group_id=group_id, room_id=room_id, category_id=category_id
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def set_group_join_policy(self, group_id, requester_user_id, content):
@@ -228,7 +226,7 @@ class GroupsServerHandler(object):
yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def get_group_categories(self, group_id, requester_user_id):
@@ -237,7 +235,7 @@ class GroupsServerHandler(object):
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
categories = yield self.store.get_group_categories(group_id=group_id)
defer.returnValue({"categories": categories})
return {"categories": categories}
@defer.inlineCallbacks
def get_group_category(self, group_id, requester_user_id, category_id):
@@ -249,7 +247,7 @@ class GroupsServerHandler(object):
group_id=group_id, category_id=category_id
)
defer.returnValue(res)
return res
@defer.inlineCallbacks
def update_group_category(self, group_id, requester_user_id, category_id, content):
@@ -269,7 +267,7 @@ class GroupsServerHandler(object):
profile=profile,
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def delete_group_category(self, group_id, requester_user_id, category_id):
@@ -283,7 +281,7 @@ class GroupsServerHandler(object):
group_id=group_id, category_id=category_id
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def get_group_roles(self, group_id, requester_user_id):
@@ -292,7 +290,7 @@ class GroupsServerHandler(object):
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
roles = yield self.store.get_group_roles(group_id=group_id)
defer.returnValue({"roles": roles})
return {"roles": roles}
@defer.inlineCallbacks
def get_group_role(self, group_id, requester_user_id, role_id):
@@ -301,7 +299,7 @@ class GroupsServerHandler(object):
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
res = yield self.store.get_group_role(group_id=group_id, role_id=role_id)
defer.returnValue(res)
return res
@defer.inlineCallbacks
def update_group_role(self, group_id, requester_user_id, role_id, content):
@@ -319,7 +317,7 @@ class GroupsServerHandler(object):
group_id=group_id, role_id=role_id, is_public=is_public, profile=profile
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def delete_group_role(self, group_id, requester_user_id, role_id):
@@ -331,7 +329,7 @@ class GroupsServerHandler(object):
yield self.store.remove_group_role(group_id=group_id, role_id=role_id)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def update_group_summary_user(
@@ -355,7 +353,7 @@ class GroupsServerHandler(object):
is_public=is_public,
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id):
@@ -369,7 +367,7 @@ class GroupsServerHandler(object):
group_id=group_id, user_id=user_id, role_id=role_id
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def get_group_profile(self, group_id, requester_user_id):
@@ -391,7 +389,7 @@ class GroupsServerHandler(object):
group_description = {key: group[key] for key in cols}
group_description["is_openly_joinable"] = group["join_policy"] == "open"
defer.returnValue(group_description)
return group_description
else:
raise SynapseError(404, "Unknown group")
@@ -461,9 +459,7 @@ class GroupsServerHandler(object):
# TODO: If admin add lists of users whose attestations have timed out
defer.returnValue(
{"chunk": chunk, "total_user_count_estimate": len(user_results)}
)
return {"chunk": chunk, "total_user_count_estimate": len(user_results)}
@defer.inlineCallbacks
def get_invited_users_in_group(self, group_id, requester_user_id):
@@ -494,9 +490,7 @@ class GroupsServerHandler(object):
logger.warn("Error getting profile for %s: %s", user_id, e)
user_profiles.append(user_profile)
defer.returnValue(
{"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}
)
return {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}
@defer.inlineCallbacks
def get_rooms_in_group(self, group_id, requester_user_id):
@@ -533,9 +527,7 @@ class GroupsServerHandler(object):
chunk.sort(key=lambda e: -e["num_joined_members"])
defer.returnValue(
{"chunk": chunk, "total_room_count_estimate": len(room_results)}
)
return {"chunk": chunk, "total_room_count_estimate": len(room_results)}
@defer.inlineCallbacks
def add_room_to_group(self, group_id, requester_user_id, room_id, content):
@@ -551,7 +543,7 @@ class GroupsServerHandler(object):
yield self.store.add_room_to_group(group_id, room_id, is_public=is_public)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def update_room_in_group(
@@ -574,7 +566,7 @@ class GroupsServerHandler(object):
else:
raise SynapseError(400, "Uknown config option")
defer.returnValue({})
return {}
@defer.inlineCallbacks
def remove_room_from_group(self, group_id, requester_user_id, room_id):
@@ -586,7 +578,7 @@ class GroupsServerHandler(object):
yield self.store.remove_room_from_group(group_id, room_id)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def invite_to_group(self, group_id, user_id, requester_user_id, content):
@@ -644,9 +636,9 @@ class GroupsServerHandler(object):
)
elif res["state"] == "invite":
yield self.store.add_group_invite(group_id, user_id)
defer.returnValue({"state": "invite"})
return {"state": "invite"}
elif res["state"] == "reject":
defer.returnValue({"state": "reject"})
return {"state": "reject"}
else:
raise SynapseError(502, "Unknown state returned by HS")
@@ -679,7 +671,7 @@ class GroupsServerHandler(object):
remote_attestation=remote_attestation,
)
defer.returnValue(local_attestation)
return local_attestation
@defer.inlineCallbacks
def accept_invite(self, group_id, requester_user_id, content):
@@ -699,7 +691,7 @@ class GroupsServerHandler(object):
local_attestation = yield self._add_user(group_id, requester_user_id, content)
defer.returnValue({"state": "join", "attestation": local_attestation})
return {"state": "join", "attestation": local_attestation}
@defer.inlineCallbacks
def join_group(self, group_id, requester_user_id, content):
@@ -716,7 +708,7 @@ class GroupsServerHandler(object):
local_attestation = yield self._add_user(group_id, requester_user_id, content)
defer.returnValue({"state": "join", "attestation": local_attestation})
return {"state": "join", "attestation": local_attestation}
@defer.inlineCallbacks
def knock(self, group_id, requester_user_id, content):
@@ -769,7 +761,7 @@ class GroupsServerHandler(object):
if not self.hs.is_mine_id(user_id):
yield self.store.maybe_delete_remote_profile_cache(user_id)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def create_group(self, group_id, requester_user_id, content):
@@ -845,7 +837,7 @@ class GroupsServerHandler(object):
avatar_url=user_profile.get("avatar_url"),
)
defer.returnValue({"group_id": group_id})
return {"group_id": group_id}
@defer.inlineCallbacks
def delete_group(self, group_id, requester_user_id):

View File

@@ -51,8 +51,8 @@ class AccountDataEventSource(object):
{"type": account_data_type, "content": content, "room_id": room_id}
)
defer.returnValue((results, current_stream_id))
return (results, current_stream_id)
@defer.inlineCallbacks
def get_pagination_rows(self, user, config, key):
defer.returnValue(([], config.to_id))
return ([], config.to_id)

View File

@@ -193,7 +193,7 @@ class AccountValidityHandler(object):
if threepid["medium"] == "email":
addresses.append(threepid["address"])
defer.returnValue(addresses)
return addresses
@defer.inlineCallbacks
def _get_renewal_token(self, user_id):
@@ -214,7 +214,7 @@ class AccountValidityHandler(object):
try:
renewal_token = stringutils.random_string(32)
yield self.store.set_renewal_token_for_user(user_id, renewal_token)
defer.returnValue(renewal_token)
return renewal_token
except StoreError:
attempts += 1
raise StoreError(500, "Couldn't generate a unique string as refresh string.")
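For context, the fragment above sits inside a bounded retry loop: mint a random token, try to persist it, and retry on a collision. A hedged reconstruction of the overall shape (the attempt limit and the surrounding loop are assumptions; stringutils and StoreError are Synapse's own):

from twisted.internet import defer

from synapse.api.errors import StoreError
from synapse.util import stringutils

@defer.inlineCallbacks
def _get_renewal_token_sketch(store, user_id, max_attempts=5):
    attempts = 0
    while attempts < max_attempts:
        try:
            renewal_token = stringutils.random_string(32)
            yield store.set_renewal_token_for_user(user_id, renewal_token)
            return renewal_token
        except StoreError:
            # most likely a uniqueness collision on the token; retry
            attempts += 1
    raise StoreError(500, "Couldn't generate a unique renewal token.")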
@@ -254,4 +254,4 @@ class AccountValidityHandler(object):
user_id=user_id, expiration_ts=expiration_ts, email_sent=email_sent
)
defer.returnValue(expiration_ts)
return expiration_ts

View File

@@ -100,4 +100,4 @@ class AcmeHandler(object):
logger.exception("Failed saving!")
raise
defer.returnValue(True)
return True

View File

@@ -49,7 +49,7 @@ class AdminHandler(BaseHandler):
"devices": {"": {"sessions": [{"connections": connections}]}},
}
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def get_users(self):
@@ -61,7 +61,7 @@ class AdminHandler(BaseHandler):
"""
ret = yield self.store.get_users()
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def get_users_paginate(self, order, start, limit):
@@ -78,7 +78,7 @@ class AdminHandler(BaseHandler):
"""
ret = yield self.store.get_users_paginate(order, start, limit)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def search_users(self, term):
@@ -92,7 +92,7 @@ class AdminHandler(BaseHandler):
"""
ret = yield self.store.search_users(term)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def export_user_data(self, user_id, writer):
@@ -225,7 +225,7 @@ class AdminHandler(BaseHandler):
state = yield self.store.get_state_for_event(event_id)
writer.write_state(room_id, event_id, state)
defer.returnValue(writer.finished())
return writer.finished()
class ExfiltrationWriter(object):

View File

@@ -167,8 +167,8 @@ class ApplicationServicesHandler(object):
for user_service in user_query_services:
is_known_user = yield self.appservice_api.query_user(user_service, user_id)
if is_known_user:
defer.returnValue(True)
defer.returnValue(False)
return True
return False
@defer.inlineCallbacks
def query_room_alias_exists(self, room_alias):
@@ -192,7 +192,7 @@ class ApplicationServicesHandler(object):
if is_known_alias:
# the alias exists now so don't query more ASes.
result = yield self.store.get_association_from_room_alias(room_alias)
defer.returnValue(result)
return result
@defer.inlineCallbacks
def query_3pe(self, kind, protocol, fields):
@@ -215,7 +215,7 @@ class ApplicationServicesHandler(object):
if success:
ret.extend(result)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def get_3pe_protocols(self, only_protocol=None):
@@ -254,7 +254,7 @@ class ApplicationServicesHandler(object):
for p in protocols.keys():
protocols[p] = _merge_instances(protocols[p])
defer.returnValue(protocols)
return protocols
@defer.inlineCallbacks
def _get_services_for_event(self, event):
@@ -276,7 +276,7 @@ class ApplicationServicesHandler(object):
if (yield s.is_interested(event, self.store)):
interested_list.append(s)
defer.returnValue(interested_list)
return interested_list
def _get_services_for_user(self, user_id):
services = self.store.get_app_services()
@@ -293,23 +293,23 @@ class ApplicationServicesHandler(object):
if not self.is_mine_id(user_id):
# we don't know if they are unknown or not since it isn't one of our
# users. We can't poke ASes.
defer.returnValue(False)
return False
return
user_info = yield self.store.get_user_by_id(user_id)
if user_info:
defer.returnValue(False)
return False
return
# user not found; could be the AS though, so check.
services = self.store.get_app_services()
service_list = [s for s in services if s.sender == user_id]
defer.returnValue(len(service_list) == 0)
return len(service_list) == 0
@defer.inlineCallbacks
def _check_user_exists(self, user_id):
unknown_user = yield self._is_unknown_user(user_id)
if unknown_user:
exists = yield self.query_user_exists(user_id)
defer.returnValue(exists)
defer.returnValue(True)
return exists
return True

View File

@@ -155,7 +155,7 @@ class AuthHandler(BaseHandler):
         if user_id != requester.user.to_string():
             raise AuthError(403, "Invalid auth")
-        defer.returnValue(params)
+        return params

     @defer.inlineCallbacks
     def check_auth(self, flows, clientdict, clientip, password_servlet=False):
@@ -280,7 +280,7 @@ class AuthHandler(BaseHandler):
                     creds,
                     list(clientdict),
                 )
-                defer.returnValue((creds, clientdict, session["id"]))
+                return (creds, clientdict, session["id"])

         ret = self._auth_dict_for_flows(flows, session)
         ret["completed"] = list(creds)
@@ -307,8 +307,8 @@ class AuthHandler(BaseHandler):
         if result:
             creds[stagetype] = result
             self._save_session(sess)
-            defer.returnValue(True)
-        defer.returnValue(False)
+            return True
+        return False

     def get_session_id(self, clientdict):
         """
@@ -379,7 +379,7 @@ class AuthHandler(BaseHandler):
             res = yield checker(
                 authdict, clientip=clientip, password_servlet=password_servlet
             )
-            defer.returnValue(res)
+            return res

         # build a v1-login-style dict out of the authdict and fall back to the
         # v1 code
@@ -389,7 +389,7 @@ class AuthHandler(BaseHandler):
             raise SynapseError(400, "", Codes.MISSING_PARAM)

         (canonical_id, callback) = yield self.validate_login(user_id, authdict)
-        defer.returnValue(canonical_id)
+        return canonical_id

     @defer.inlineCallbacks
     def _check_recaptcha(self, authdict, clientip, **kwargs):
@@ -433,7 +433,7 @@ class AuthHandler(BaseHandler):
             resp_body.get("hostname"),
         )
         if resp_body["success"]:
-            defer.returnValue(True)
+            return True
         raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)

     def _check_email_identity(self, authdict, **kwargs):
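The _check_recaptcha hunk feeds off Google's documented siteverify endpoint: the homeserver POSTs its secret and the client's response token, then treats the "success" field of the JSON body as authoritative (logging the "hostname" for debugging). A standalone sketch of that server-side check; the function name and error handling are illustrative, and Synapse uses its own HTTP client rather than requests:

    import requests

    def check_recaptcha(secret, client_response, client_ip):
        resp = requests.post(
            "https://www.google.com/recaptcha/api/siteverify",
            data={
                "secret": secret,             # server-side reCAPTCHA key
                "response": client_response,  # token from the client widget
                "remoteip": client_ip,
            },
        )
        body = resp.json()
        if not body.get("success"):
            raise ValueError("captcha failed: %s" % body.get("error-codes"))
        return True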
@@ -502,7 +502,7 @@ class AuthHandler(BaseHandler):
         threepid["threepid_creds"] = authdict["threepid_creds"]

-        defer.returnValue(threepid)
+        return threepid

     def _get_params_recaptcha(self):
         return {"public_key": self.hs.config.recaptcha_public_key}
@@ -606,7 +606,7 @@ class AuthHandler(BaseHandler):
                 yield self.store.delete_access_token(access_token)
                 raise StoreError(400, "Login raced against device deletion")

-        defer.returnValue(access_token)
+        return access_token

     @defer.inlineCallbacks
     def check_user_exists(self, user_id):
@@ -629,8 +629,8 @@ class AuthHandler(BaseHandler):
         self.ratelimit_login_per_account(user_id)
         res = yield self._find_user_id_and_pwd_hash(user_id)
         if res is not None:
-            defer.returnValue(res[0])
-        defer.returnValue(None)
+            return res[0]
+        return None

     @defer.inlineCallbacks
     def _find_user_id_and_pwd_hash(self, user_id):
@@ -661,7 +661,7 @@ class AuthHandler(BaseHandler):
                     user_id,
                     user_infos.keys(),
                 )
-        defer.returnValue(result)
+        return result

     def get_supported_login_types(self):
         """Get the login types supported for the /login API
@@ -722,7 +722,7 @@ class AuthHandler(BaseHandler):
                 known_login_type = True
                 is_valid = yield provider.check_password(qualified_user_id, password)
                 if is_valid:
-                    defer.returnValue((qualified_user_id, None))
+                    return (qualified_user_id, None)

             if not hasattr(provider, "get_supported_login_types") or not hasattr(
                 provider, "check_auth"
@@ -756,7 +756,7 @@ class AuthHandler(BaseHandler):
                 if result:
                     if isinstance(result, str):
                         result = (result, None)
-                    defer.returnValue(result)
+                    return result

         if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
             known_login_type = True
@@ -766,7 +766,7 @@ class AuthHandler(BaseHandler):
            )

            if canonical_user_id:
-                defer.returnValue((canonical_user_id, None))
+                return (canonical_user_id, None)

         if not known_login_type:
             raise SynapseError(400, "Unknown login type %s" % login_type)
@@ -814,9 +814,9 @@ class AuthHandler(BaseHandler):
            if isinstance(result, str):
                # If it's a str, set callback function to None
                result = (result, None)
-            defer.returnValue(result)
+            return result

-        defer.returnValue((None, None))
+        return (None, None)
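Taken together, the validate_login hunks preserve a fallback chain: each configured password provider gets first refusal via check_password, then via check_auth for other login types it advertises; only afterwards does Synapse consult its local password database, and an unmatched login type is rejected with a 400. A condensed sketch of that ordering, with the provider interface simplified to a single check() method and a plain ValueError standing in for SynapseError:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def validate_login_sketch(providers, check_local, login_type, user_id, secret):
        # 1) password providers take precedence over the local database
        for provider in providers:
            result = yield provider.check(login_type, user_id, secret)
            if result:
                return result  # (canonical_user_id, callback)
        # 2) fall back to the homeserver's own password table
        if login_type == "m.login.password":
            canonical_id = yield check_local(user_id, secret)
            if canonical_id:
                return (canonical_id, None)
        # 3) nothing recognised this login type
        raise ValueError("Unknown login type %s" % login_type)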
     @defer.inlineCallbacks
     def _check_local_password(self, user_id, password):
@@ -838,7 +838,7 @@ class AuthHandler(BaseHandler):
         """
         lookupres = yield self._find_user_id_and_pwd_hash(user_id)
         if not lookupres:
-            defer.returnValue(None)
+            return None

         (user_id, password_hash) = lookupres

         # If the password hash is None, the account has likely been deactivated
@@ -850,8 +850,8 @@ class AuthHandler(BaseHandler):
             result = yield self.validate_hash(password, password_hash)
             if not result:
                 logger.warn("Failed password login for user %s", user_id)
-                defer.returnValue(None)
-            defer.returnValue(user_id)
+                return None
+            return user_id
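_check_local_password above delegates the actual comparison to validate_hash. Synapse stores bcrypt hashes; a minimal sketch of what such a check looks like, with the optional pepper shown as an assumption for illustration:

    import bcrypt

    def validate_hash(password, stored_hash, pepper=""):
        # bcrypt.checkpw re-hashes the candidate with the salt embedded in
        # stored_hash and compares the results in constant time.
        return bcrypt.checkpw(
            (password + pepper).encode("utf8"), stored_hash.encode("utf8")
        )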
     @defer.inlineCallbacks
     def validate_short_term_login_token_and_get_user_id(self, login_token):
@@ -860,12 +860,12 @@ class AuthHandler(BaseHandler):
         try:
             macaroon = pymacaroons.Macaroon.deserialize(login_token)
             user_id = auth_api.get_user_id_from_macaroon(macaroon)
-            auth_api.validate_macaroon(macaroon, "login", True, user_id)
+            auth_api.validate_macaroon(macaroon, "login", user_id)
         except Exception:
             raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)

         self.ratelimit_login_per_account(user_id)
         yield self.auth.check_auth_blocking(user_id)
-        defer.returnValue(user_id)
+        return user_id
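The interesting change in this hunk is validate_macaroon losing its third argument: login tokens are macaroons, and the removed boolean was apparently an expiry-verification flag. With pymacaroons (which this code already imports), verification means satisfying every caveat baked into the token. A rough sketch of a verifier for a "login" macaroon; the caveat strings and key handling are assumptions for illustration, not Synapse's exact code:

    import pymacaroons

    def validate_login_macaroon(macaroon, user_id, secret_key):
        v = pymacaroons.Verifier()
        # every first-party caveat on the macaroon must be satisfied
        v.satisfy_exact("gen = 1")
        v.satisfy_exact("type = login")
        v.satisfy_exact("user_id = %s" % (user_id,))
        # a time-based caveat would need a satisfy_general() predicate here
        v.verify(macaroon, secret_key)  # raises if any caveat is unmet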
     @defer.inlineCallbacks
     def delete_access_token(self, access_token):
@@ -976,7 +976,7 @@ class AuthHandler(BaseHandler):
         )
         yield self.store.user_delete_threepid(user_id, medium, address)
-        defer.returnValue(result)
+        return result

     def _save_session(self, session):
         # TODO: Persistent storage

synapse/handlers/deactivate_account.py

@@ -125,7 +125,7 @@ class DeactivateAccountHandler(BaseHandler):
         # Mark the user as deactivated.
         yield self.store.set_user_deactivated_status(user_id, True)

-        defer.returnValue(identity_server_supports_unbinding)
+        return identity_server_supports_unbinding

     def _start_user_parting(self):
         """

synapse/handlers/device.py

@@ -64,7 +64,7 @@ class DeviceWorkerHandler(BaseHandler):
         for device in devices:
             _update_device_from_client_ips(device, ips)
-        defer.returnValue(devices)
+        return devices

     @defer.inlineCallbacks
     def get_device(self, user_id, device_id):
@@ -85,7 +85,7 @@ class DeviceWorkerHandler(BaseHandler):
             raise errors.NotFoundError
         ips = yield self.store.get_last_client_ip_by_device(user_id, device_id)
         _update_device_from_client_ips(device, ips)
-        defer.returnValue(device)
+        return device

     @measure_func("device.get_user_ids_changed")
     @defer.inlineCallbacks
@@ -200,9 +200,7 @@ class DeviceWorkerHandler(BaseHandler):
             possibly_joined = []
             possibly_left = []

-        defer.returnValue(
-            {"changed": list(possibly_joined), "left": list(possibly_left)}
-        )
+        return {"changed": list(possibly_joined), "left": list(possibly_left)}

 class DeviceHandler(DeviceWorkerHandler):
@@ -211,12 +209,12 @@ class DeviceHandler(DeviceWorkerHandler):
         self.federation_sender = hs.get_federation_sender()

-        self._edu_updater = DeviceListEduUpdater(hs, self)
+        self.device_list_updater = DeviceListUpdater(hs, self)

         federation_registry = hs.get_federation_registry()

         federation_registry.register_edu_handler(
-            "m.device_list_update", self._edu_updater.incoming_device_list_update
+            "m.device_list_update", self.device_list_updater.incoming_device_list_update
         )
         federation_registry.register_query_handler(
             "user_devices", self.on_federation_query_user_devices
@@ -250,7 +248,7 @@ class DeviceHandler(DeviceWorkerHandler):
             )
             if new_device:
                 yield self.notify_device_update(user_id, [device_id])
-            defer.returnValue(device_id)
+            return device_id

         # if the device id is not specified, we'll autogen one, but loop a few
         # times in case of a clash.
@@ -264,7 +262,7 @@ class DeviceHandler(DeviceWorkerHandler):
             )
             if new_device:
                 yield self.notify_device_update(user_id, [device_id])
-                defer.returnValue(device_id)
+                return device_id
             attempts += 1

         raise errors.StoreError(500, "Couldn't generate a device ID.")
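When the client supplies no device ID, the hunks above generate a random one and retry a handful of times in case it collides with an existing device. A self-contained sketch of the same retry idea; the alphabet, length, and store API are assumptions, not Synapse's exact code:

    import secrets
    import string

    from twisted.internet import defer

    ALPHANUM = string.ascii_uppercase + string.digits

    @defer.inlineCallbacks
    def ensure_device_id(store, user_id, max_attempts=5):
        for _ in range(max_attempts):
            device_id = "".join(secrets.choice(ALPHANUM) for _ in range(10))
            # store_device is assumed to return True only if the row was
            # newly created, i.e. the generated ID did not clash.
            new_device = yield store.store_device(user_id, device_id, None)
            if new_device:
                return device_id
        raise RuntimeError("Couldn't generate a device ID.")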
@@ -411,9 +409,7 @@ class DeviceHandler(DeviceWorkerHandler):
     @defer.inlineCallbacks
     def on_federation_query_user_devices(self, user_id):
         stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
-        defer.returnValue(
-            {"user_id": user_id, "stream_id": stream_id, "devices": devices}
-        )
+        return {"user_id": user_id, "stream_id": stream_id, "devices": devices}

     @defer.inlineCallbacks
     def user_left_room(self, user, room_id):
@@ -430,7 +426,7 @@ def _update_device_from_client_ips(device, client_ips):
     device.update({"last_seen_ts": ip.get("last_seen"), "last_seen_ip": ip.get("ip")})


-class DeviceListEduUpdater(object):
+class DeviceListUpdater(object):
     "Handles incoming device list updates from federation and updates the DB"

     def __init__(self, hs, device_handler):
@@ -523,75 +519,7 @@ class DeviceListEduUpdater(object):
         logger.debug("Need to re-sync devices for %r? %r", user_id, resync)

         if resync:
-            # Fetch all devices for the user.
-            origin = get_domain_from_id(user_id)
-            try:
-                result = yield self.federation.query_user_devices(origin, user_id)
-            except (
-                NotRetryingDestination,
-                RequestSendFailed,
-                HttpResponseException,
-            ):
-                # TODO: Remember that we are now out of sync and try again
-                # later
-                logger.warn("Failed to handle device list update for %s", user_id)
-                # We abort on exceptions rather than accepting the update
-                # as otherwise synapse will 'forget' that its device list
-                # is out of date. If we bail then we will retry the resync
-                # next time we get a device list update for this user_id.
-                # This makes it more likely that the device lists will
-                # eventually become consistent.
-                return
-            except FederationDeniedError as e:
-                logger.info(e)
-                return
-            except Exception:
-                # TODO: Remember that we are now out of sync and try again
-                # later
-                logger.exception(
-                    "Failed to handle device list update for %s", user_id
-                )
-                return
-
-            stream_id = result["stream_id"]
-            devices = result["devices"]
-
-            # If the remote server has more than ~1000 devices for this user
-            # we assume that something is going horribly wrong (e.g. a bot
-            # that logs in and creates a new device every time it tries to
-            # send a message). Maintaining lots of devices per user in the
-            # cache can cause serious performance issues as if this request
-            # takes more than 60s to complete, internal replication from the
-            # inbound federation worker to the synapse master may time out
-            # causing the inbound federation to fail and causing the remote
-            # server to retry, causing a DoS. So in this scenario we give
-            # up on storing the total list of devices and only handle the
-            # delta instead.
-            if len(devices) > 1000:
-                logger.warn(
-                    "Ignoring device list snapshot for %s as it has >1K devs (%d)",
-                    user_id,
-                    len(devices),
-                )
-                devices = []
-
-            for device in devices:
-                logger.debug(
-                    "Handling resync update %r/%r, ID: %r",
-                    user_id,
-                    device["device_id"],
-                    stream_id,
-                )
-
-            yield self.store.update_remote_device_list_cache(
-                user_id, devices, stream_id
-            )
-            device_ids = [device["device_id"] for device in devices]
-            yield self.device_handler.notify_device_update(user_id, device_ids)
-
-            # We clobber the seen updates since we've re-synced from a given
-            # point.
-            self._seen_updates[user_id] = set([stream_id])
+            yield self.user_device_resync(user_id)
         else:
             # Simply update the single device, since we know that is the only
             # change (because of the single prev_id matching the current cache)
@@ -623,7 +551,7 @@ class DeviceListEduUpdater(object):
         for _, stream_id, prev_ids, _ in updates:
             if not prev_ids:
                 # We always do a resync if there are no previous IDs
-                defer.returnValue(True)
+                return True

             for prev_id in prev_ids:
                 if prev_id == extremity:
@@ -633,8 +561,82 @@ class DeviceListEduUpdater(object):
                 elif prev_id in stream_id_in_updates:
                     continue
                 else:
-                    defer.returnValue(True)
+                    return True

             stream_id_in_updates.add(stream_id)

-        defer.returnValue(False)
+        return False

+    @defer.inlineCallbacks
+    def user_device_resync(self, user_id):
+        """Fetches all devices for a user and updates the device cache with them.
+
+        Args:
+            user_id (str): The user's id whose device_list will be updated.
+        Returns:
+            Deferred[dict]: a dict with device info as under the "devices" in the result of this
+                request:
+                https://matrix.org/docs/spec/server_server/r0.1.2#get-matrix-federation-v1-user-devices-userid
+        """
+        # Fetch all devices for the user.
+        origin = get_domain_from_id(user_id)
+        try:
+            result = yield self.federation.query_user_devices(origin, user_id)
+        except (NotRetryingDestination, RequestSendFailed, HttpResponseException):
+            # TODO: Remember that we are now out of sync and try again
+            # later
+            logger.warn("Failed to handle device list update for %s", user_id)
+            # We abort on exceptions rather than accepting the update
+            # as otherwise synapse will 'forget' that its device list
+            # is out of date. If we bail then we will retry the resync
+            # next time we get a device list update for this user_id.
+            # This makes it more likely that the device lists will
+            # eventually become consistent.
+            return
+        except FederationDeniedError as e:
+            logger.info(e)
+            return
+        except Exception:
+            # TODO: Remember that we are now out of sync and try again
+            # later
+            logger.exception("Failed to handle device list update for %s", user_id)
+            return
+
+        stream_id = result["stream_id"]
+        devices = result["devices"]
+
+        # If the remote server has more than ~1000 devices for this user
+        # we assume that something is going horribly wrong (e.g. a bot
+        # that logs in and creates a new device every time it tries to
+        # send a message). Maintaining lots of devices per user in the
+        # cache can cause serious performance issues as if this request
+        # takes more than 60s to complete, internal replication from the
+        # inbound federation worker to the synapse master may time out
+        # causing the inbound federation to fail and causing the remote
+        # server to retry, causing a DoS. So in this scenario we give
+        # up on storing the total list of devices and only handle the
+        # delta instead.
+        if len(devices) > 1000:
+            logger.warn(
+                "Ignoring device list snapshot for %s as it has >1K devs (%d)",
+                user_id,
+                len(devices),
+            )
+            devices = []
+
+        for device in devices:
+            logger.debug(
+                "Handling resync update %r/%r, ID: %r",
+                user_id,
+                device["device_id"],
+                stream_id,
+            )
+
+        yield self.store.update_remote_device_list_cache(user_id, devices, stream_id)
+        device_ids = [device["device_id"] for device in devices]
+        yield self.device_handler.notify_device_update(user_id, device_ids)
+
+        # We clobber the seen updates since we've re-synced from a given
+        # point.
+        self._seen_updates[user_id] = set([stream_id])
+
+        defer.returnValue(result)
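The _need_to_do_resync hunks above decide when the new user_device_resync method gets invoked: a full resync is needed whenever an update arrives with no prev_ids at all, or referencing a prev_id that is neither our current stream extremity nor an update already seen in this batch. A condensed, dependency-free sketch of that decision:

    def need_resync(extremity, updates):
        # updates: iterable of (device_id, stream_id, prev_ids, content)
        seen_stream_ids = set()
        for _, stream_id, prev_ids, _ in updates:
            if not prev_ids:
                return True  # no previous IDs: always resync
            for prev_id in prev_ids:
                if prev_id != extremity and prev_id not in seen_stream_ids:
                    return True  # refers to an update we never saw
            seen_stream_ids.add(stream_id)
        return False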

Some files were not shown because too many files have changed in this diff.