Compare commits


1176 Commits

Author SHA1 Message Date
Erik Johnston
790328a93c Require SQLite3 version 3.15 or above
This is primarily to allow tuple comparisons in queries, though a better
query optimiser and other improvements mean that using newer versions of
sqlite is highly recommended anyway.
2018-05-08 13:48:45 +01:00
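A quick illustration of the row-value (tuple) comparisons this requirement enables — a sketch with illustrative column names, not code from the commit:

```python
import sqlite3

# Row-value comparisons need the underlying SQLite to be >= 3.15;
# on older versions the query below raises sqlite3.OperationalError.
print(sqlite3.sqlite_version)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (topological_ordering INT, stream_ordering INT)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [(1, 5), (2, 3)])

# compare the two columns as a single tuple, lexicographically
rows = conn.execute(
    "SELECT * FROM events WHERE (topological_ordering, stream_ordering) > (?, ?)",
    (1, 5),
).fetchall()
print(rows)  # [(2, 3)]
```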
Richard van der Hoff
966686c845 Merge pull request #3007 from matrix-org/rav/warn_on_logcontext_fail
Make 'unexpected logging context' into warnings
2018-05-03 15:10:04 +01:00
Richard van der Hoff
093d8c415a Merge remote-tracking branch 'origin/develop' into rav/warn_on_logcontext_fail 2018-05-03 14:59:29 +01:00
Richard van der Hoff
0ba609dc6f Merge pull request #3183 from matrix-org/rav/moar_logcontext_leaks
Fix logcontext leaks in rate limiter
2018-05-03 14:58:13 +01:00
Richard van der Hoff
2117f84323 Merge pull request #3182 from Half-Shot/hs/fix-twisted-shutdown
Fix 'Unhandled Error' logs with Twisted 18.4
2018-05-03 12:40:11 +01:00
Richard van der Hoff
a7fe62f0cb Fix logcontext leaks in rate limiter 2018-05-03 12:31:59 +01:00
Will Hunt
2e7a94c36b Don't abortConnection() if the transport connection has already closed. 2018-05-03 12:31:47 +01:00
Richard van der Hoff
a2aaa9cb3c Merge pull request #3178 from matrix-org/rav/fix_request_timeouts
fix http request timeout code
2018-05-03 11:33:26 +01:00
Richard van der Hoff
d72faf2fad Fix changes warning 2018-05-03 10:56:42 +01:00
Richard van der Hoff
a0501ac57e Warn of potential client incompatibility from #3161 2018-05-03 10:51:39 +01:00
Erik Johnston
0a3b51c420 Merge pull request #3141 from matrix-org/erikj/fixup_state
Refactor event storage to prepare for changes in state calculations
2018-05-03 10:39:20 +01:00
Erik Johnston
31c7c29d43 Fix up grammar 2018-05-03 10:38:58 +01:00
Richard van der Hoff
902673e356 Merge pull request #3161 from NotAFile/remove-v1auth
Make Client-Server API return 403 for invalid token
2018-05-03 10:10:57 +01:00
Erik Johnston
53a5fdf312 Merge pull request #3175 from matrix-org/erikj/escape_metric_values
Escape label values in prometheus metrics
2018-05-03 10:01:04 +01:00
Richard van der Hoff
1dfd650348 add missing param to cancelled_to_request_timed_out_error
This gets two arguments, not one.
2018-05-02 22:42:36 +01:00
Erik Johnston
a41117c63b Make _escape_character take MatchObject 2018-05-02 17:27:27 +01:00
Erik Johnston
32015e1109 Escape label values in prometheus metrics 2018-05-02 16:52:42 +01:00
Richard van der Hoff
3a42aed9a1 Merge pull request #3170 from matrix-org/rav/more_logcontext_leaks
Fix a class of logcontext leaks
2018-05-02 16:45:51 +01:00
Richard van der Hoff
5a0be97ab2 Merge pull request #3174 from matrix-org/rav/media_repo_logcontext_leaks
Fix logcontext leak in media repo
2018-05-02 16:43:04 +01:00
Richard van der Hoff
415c6b672e Merge branch 'develop' into rav/more_logcontext_leaks 2018-05-02 16:16:01 +01:00
Richard van der Hoff
4e9bdeba57 Merge pull request #3172 from matrix-org/rav/fix_test_logcontext_leaks
Fix a couple of logcontext leaks in unit tests
2018-05-02 16:15:22 +01:00
Richard van der Hoff
be31adb036 Fix logcontext leak in media repo
Make FileResponder.write_to_consumer uphold the logcontext contract
2018-05-02 16:14:50 +01:00
Richard van der Hoff
11607006d9 Remove spurious unittest.DEBUG 2018-05-02 15:48:47 +01:00
Richard van der Hoff
46beeb9a30 Fix a couple of logcontext leaks in unit tests
... which were making other, innocent tests fail.

Plus remove a spurious unittest.DEBUG which was making the output noisy.
2018-05-02 15:46:22 +01:00
Richard van der Hoff
f22e7cda2c Fix a class of logcontext leaks
So, it turns out that if you have a first `Deferred` `D1`, you can add a
callback which returns another `Deferred` `D2`, and `D2` must then complete
before any further callbacks on `D1` will execute (and later callbacks on `D1`
get the *result* of `D2` rather than `D2` itself).

So, `D1` might have `called=True` (as in, it has started running its
callbacks), but any new callbacks added to `D1` won't get run until `D2`
completes - so if you `yield D1` in an `inlineCallbacks` function, your `yield`
will 'block'.

In conclusion: some of our assumptions in `logcontext` were invalid. We need to
make sure that we don't optimise out the logcontext juggling when this
situation happens. Fortunately, it is easy to detect by checking `D1.paused`.
2018-05-02 11:58:00 +01:00
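A minimal demonstration of the behaviour described above, using plain Twisted (Python 3 for the prints; nothing here is synapse code):

```python
from twisted.internet import defer

d1 = defer.Deferred()
d2 = defer.Deferred()

# a callback that returns a Deferred pauses d1's chain until d2 completes
d1.addCallback(lambda _: d2)
d1.callback(None)

print(d1.called)  # True: d1 has started running its callbacks
print(d1.paused)  # truthy: the chain is paused, waiting for d2

d1.addCallback(lambda res: print("got %r" % (res,)))  # does not run yet...
d2.callback("result of d2")  # ...until now, and it receives d2's *result*
```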
Richard van der Hoff
a8d8bf92e0 Merge pull request #3168 from matrix-org/rav/fix_logformatter
Fix incorrect reference to StringIO
2018-05-02 10:03:36 +01:00
Richard van der Hoff
e482f8cd85 Fix incorrect reference to StringIO
This was introduced in 4f2f5171
2018-05-02 09:12:26 +01:00
Matthew Hodgson
9f21de6a01 missing word :| 2018-05-01 19:19:46 +01:00
Matthew Hodgson
8ae7096958 Merge branch 'release-v0.28.1' into develop 2018-05-01 19:05:03 +01:00
Matthew Hodgson
5c2214f4c7 fix markdown 2018-05-01 19:03:35 +01:00
Neil Johnson
2414178ed6 Merge branch 'master' into develop 2018-05-01 18:53:56 +01:00
Neil Johnson
40d1bbd257 fix conflict in changelog from previous release 2018-05-01 18:52:44 +01:00
Matthew Hodgson
8e6bd0e324 changelog for 0.28.1 2018-05-01 18:28:23 +01:00
Neil Johnson
8570bb84cc Update __init__.py
bump version
2018-05-01 18:22:53 +01:00
Richard van der Hoff
ca7211104e Merge branch 'release-v0.28.1' into develop 2018-05-01 18:16:57 +01:00
Richard van der Hoff
d5eee5d601 Merge commit '33f469b' into release-v0.28.1 2018-05-01 18:14:18 +01:00
Richard van der Hoff
d858f3bd4e Miscellaneous fixes to python_dependencies
* add some doc about wtf this thing does
* pin Twisted to < 18.4
* add explicit dep on six (fixes #3089)
2018-05-01 18:13:54 +01:00
Richard van der Hoff
33f469ba19 Apply some limits to depth to counter abuse
* When creating a new event, cap its depth to 2^63 - 1
* When receiving events, reject any without a sensible depth

As per https://docs.google.com/document/d/1I3fi2S-XnpO45qrpCsowZv8P8dHcNZ4fsBsbOW7KABI
2018-05-01 17:54:19 +01:00
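A sketch of the two checks the commit describes; the helper names and the exact rejection rule are assumptions, not the actual synapse code:

```python
MAX_DEPTH = 2 ** 63 - 1  # largest value that fits a signed 64-bit column

def new_event_depth(prev_event_depths):
    # when creating an event: one more than the deepest prev event, capped
    return min(max(prev_event_depths, default=0) + 1, MAX_DEPTH)

def check_depth(depth):
    # when receiving an event: reject depths that make no sense
    if not isinstance(depth, int) or not (0 <= depth <= MAX_DEPTH):
        raise ValueError("Depth too large or not a sane integer")
```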
Adrian Tschira
6495dbb326 Burminate v1auth
This closes #2602

v1auth was created to account for the differences in status code between
the v1 and v2_alpha revisions of the protocol (401 vs 403 for invalid
tokens). However since those protocols were merged, this makes the r0
version/endpoint internally inconsistent, and violates the
specification for the r0 endpoint.

This might break clients that rely on this inconsistency with the
specification. This is said to affect the legacy angular reference
client. However, I feel that restoring parity with the spec is more
important. Either way, it is critical to inform developers about this
change, in case they rely on the illegal behaviour.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-30 22:20:43 +02:00
Will Hunt
2ad3fc36e6 Fixes #3135 - Replace _OpenSSLECCurve with crypto.get_elliptic_curve (#3157)
fixes #3135

Signed-off-by: Will Hunt will@half-shot.uk
2018-04-30 16:21:11 +01:00
Richard van der Hoff
cead75fae3 Merge pull request #3160 from krombel/fix_3076
add guard for None on purge_history api
2018-04-30 15:03:59 +01:00
Krombel
576b71dd3d add guard for None on purge_history api 2018-04-30 14:29:48 +02:00
Matthew Hodgson
99a54bf2af Merge pull request #3129 from matrix-org/matthew/fix_group_dups
remove duplicates from groups tables
2018-04-30 11:47:25 +01:00
Richard van der Hoff
63ae5cbf34 Merge pull request #3143 from matrix-org/rav/remove_redundant_preserve_fn
Remove redundant call to preserve_fn
2018-04-30 10:23:59 +01:00
Richard van der Hoff
fdb6849b81 Merge pull request #3144 from matrix-org/rav/run_in_background_exception_handling
Trap exceptions thrown within run_in_background
2018-04-30 10:23:02 +01:00
Richard van der Hoff
66aa32ede2 Merge pull request #3159 from NotAFile/py3-tests-config
run config tests on py3
2018-04-30 10:22:45 +01:00
Adrian Tschira
6e005d1382 run config tests on py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-30 10:39:45 +02:00
Richard van der Hoff
01e8a52825 Merge pull request #3102 from NotAFile/py3-attributeerror
Make event properties raise AttributeError instead
2018-04-30 09:22:09 +01:00
Adrian Tschira
0c9db26260 add comment explaining attributeerror 2018-04-30 09:49:10 +02:00
Richard van der Hoff
950a32eb47 Merge pull request #3152 from NotAFile/py3-local-imports
make imports local
2018-04-30 01:28:13 +01:00
Richard van der Hoff
bc2017a594 Merge pull request #3153 from NotAFile/py3-httplib
move httplib import to six
2018-04-30 01:26:42 +01:00
Richard van der Hoff
683149c1f9 Merge pull request #3151 from NotAFile/py3-xrange-1
Move more xrange to six
2018-04-30 01:20:06 +01:00
Richard van der Hoff
7b908aeec4 Merge branch 'rav/test_36' into develop 2018-04-30 01:12:58 +01:00
Richard van der Hoff
3b0e431c82 Merge pull request #3150 from NotAFile/py3-listcomp-yield
Don't yield in list comprehensions
2018-04-30 01:11:41 +01:00
Richard van der Hoff
db75c86e84 Merge branch 'develop' into py3-xrange-1 2018-04-30 01:02:25 +01:00
Richard van der Hoff
2fd96727b1 Merge pull request #3085 from NotAFile/py3-config-text-mode
Open config file in non-bytes mode
2018-04-30 01:00:23 +01:00
Richard van der Hoff
b8ee12b978 Merge pull request #3084 from NotAFile/py3-certs-byte-mode
Open certificate files as bytes
2018-04-30 01:00:05 +01:00
Richard van der Hoff
049b0b5af2 Merge pull request #3154 from NotAFile/py3-stringio
Replace stringIO imports with six
2018-04-30 00:59:04 +01:00
Richard van der Hoff
d1d54d6088 add py36 to build matrix 2018-04-30 00:58:31 +01:00
Richard van der Hoff
ac5f2f4d86 Merge pull request #3145 from NotAFile/py3-tests
Add py3 tests to tox with folders that work
2018-04-30 00:53:05 +01:00
Richard van der Hoff
af3cc50511 Remove redundant call to preserve_fn
submit_event_for_as doesn't return a deferred anyway, so this is pointless.
2018-04-30 00:48:36 +01:00
Richard van der Hoff
dbf6f28d64 Merge pull request #3155 from NotAFile/py3-bytes-1
more bytes strings
2018-04-30 00:38:21 +01:00
Richard van der Hoff
7767a9fc0e Update tox.ini
add missing comma
2018-04-30 00:37:32 +01:00
Richard van der Hoff
aab2e4da60 Merge pull request #3140 from matrix-org/rav/use_run_in_background
Use run_in_background in preference to preserve_fn
2018-04-30 00:34:28 +01:00
Richard van der Hoff
1315d374cc Merge pull request #3156 from NotAFile/py3-hmac-bytes
Construct HMAC as bytes on py3
2018-04-30 00:33:20 +01:00
Richard van der Hoff
9e2601f830 Merge pull request #3108 from NotAFile/py3-six-urlparse
Use six.moves.urlparse
2018-04-30 00:33:05 +01:00
Adrian Tschira
122593265b Construct HMAC as bytes on py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-29 00:19:41 +02:00
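The gist of the fix, as a standalone sketch (hypothetical key and message; the point is that both must be bytes on python 3):

```python
import hashlib
import hmac

mac = hmac.new(key=b"registration_shared_secret", digestmod=hashlib.sha1)
mac.update(b"some_user_id")  # bytes, not str, on python 3
print(mac.hexdigest())
```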
Adrian Tschira
e9143b6593 more bytes strings
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-29 00:13:57 +02:00
Matthew Hodgson
adaf3ec87f fix missing import 2018-04-28 22:39:15 +01:00
Matthew Hodgson
006e18b6bb pep8 2018-04-28 22:32:24 +01:00
Matthew Hodgson
42c89c8215 make it work with sqlite 2018-04-28 22:27:30 +01:00
Adrian Tschira
d82b6ea9e6 Move more xrange to six
plus a bonus next()

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:57:00 +02:00
Adrian Tschira
4f2f5171b7 replace stringIO imports 2018-04-28 13:46:23 +02:00
Adrian Tschira
94f4d7f49e move httplib import to six 2018-04-28 13:43:34 +02:00
Adrian Tschira
57b58e2174 make imports local
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:41:41 +02:00
Adrian Tschira
cdb4647a80 Don't yield in list comprehensions
I've tried to grep for more of this with no success.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:36:30 +02:00
Adrian Tschira
a376d8f761 open log_config in text mode too
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:34:13 +02:00
Adrian Tschira
4f5694e2ce Add py3 tests to tox with folders that work
It's just a few tests, but it will at least prevent a few files from
regressing. Also, it makes it easier to check your code against py36
while writing it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-27 16:29:41 +02:00
Richard van der Hoff
9558236728 Merge pull request #3127 from matrix-org/rav/deferred_timeout
Use deferred.addTimeout instead of time_bound_deferred
2018-04-27 14:32:54 +01:00
Richard van der Hoff
453adf00b6 pep8; remove spurious import 2018-04-27 14:32:08 +01:00
Richard van der Hoff
fc149b4eeb Merge remote-tracking branch 'origin/develop' into rav/use_run_in_background 2018-04-27 14:31:23 +01:00
Richard van der Hoff
6146332387 Merge remote-tracking branch 'origin/develop' into rav/deferred_timeout 2018-04-27 14:18:00 +01:00
Erik Johnston
d2737c1fae Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-04-27 13:28:51 +01:00
Richard van der Hoff
2a13af23bc Use run_in_background in preference to preserve_fn
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
2018-04-27 12:55:51 +01:00
Richard van der Hoff
3d1ae61399 Merge branch 'develop' into rav/deferred_timeout 2018-04-27 12:54:43 +01:00
Richard van der Hoff
9d2c1b8429 Backport deferred.addTimeout
Twisted 16.0 doesn't have addTimeout, so let's backport it.
2018-04-27 12:52:30 +01:00
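A rough sketch of how such a backport can look (the function name is illustrative; the real implementation may differ):

```python
from twisted.internet import defer

def add_timeout_to_deferred(deferred, timeout, reactor):
    """Cancel `deferred` if it has not fired within `timeout` seconds."""
    delayed_call = reactor.callLater(timeout, deferred.cancel)

    def cancel_timeout(result):
        # the deferred fired first: stop the pending timeout
        if delayed_call.active():
            delayed_call.cancel()
        return result

    deferred.addBoth(cancel_timeout)
```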
Richard van der Hoff
13843f771e Trap exceptions thrown within run_in_background
Turn any exceptions that get thrown synchronously within run_in_background into
Failures instead.
2018-04-27 12:17:13 +01:00
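A stripped-down sketch of the trapping logic (the real run_in_background also does logcontext juggling, omitted here):

```python
from twisted.internet import defer

def run_in_background(f, *args, **kwargs):
    try:
        res = f(*args, **kwargs)
    except Exception:
        # turn a synchronous exception into a Failure
        return defer.fail()
    if not isinstance(res, defer.Deferred):
        return defer.succeed(res)
    return res
```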
Richard van der Hoff
41d4b07a53 Merge pull request #3142 from matrix-org/rav/reraise
reraise exceptions more carefully
2018-04-27 12:16:19 +01:00
Neil Johnson
05ba7e3a44 Update CHANGES.rst 2018-04-27 12:13:12 +01:00
Neil Johnson
53849ea9d3 Update CHANGES.rst 2018-04-27 12:11:39 +01:00
Richard van der Hoff
268e40341b Merge pull request #3136 from matrix-org/rav/fix_dependencies
Miscellaneous fixes to python_dependencies
2018-04-27 11:48:28 +01:00
Richard van der Hoff
9c3da24561 Merge pull request #3138 from matrix-org/rav/catch_unhandled_exceptions
Improve exception handling for background processes
2018-04-27 11:47:49 +01:00
Richard van der Hoff
53494c34df Merge pull request #3139 from matrix-org/rav/consume_errors
Add missing consumeErrors to improve exception handling
2018-04-27 11:47:21 +01:00
Richard van der Hoff
6493b22b42 reraise exceptions more carefully
We need to be careful (under python 2, at least) that when we reraise an
exception after doing some error handling, we actually reraise the original
exception rather than anything that might have been raised (and handled) during
the error handling.
2018-04-27 11:40:06 +01:00
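The pattern being described, in a self-contained example (function names invented for illustration):

```python
import sys

import six

def do_work():
    raise RuntimeError("the original error")

def clean_up():
    raise IOError("raised (and handled) during error handling")

try:
    do_work()
except Exception:
    exc_info = sys.exc_info()  # capture *before* the error handling runs
    try:
        clean_up()
    except Exception:
        pass  # must not clobber the original exception
    six.reraise(*exc_info)  # reraises the RuntimeError, not the IOError
```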
Erik Johnston
6e10eed28e Refactor event storage to not require state
This is in preparation for using contexts that may or may not have the
current_state_ids set. This will allow us to avoid unnecessarily pulling
out state for an event on the master process when using workers.

We also add a check to see if the state groups of the old extremities
are the same as the new ones.
2018-04-27 11:38:02 +01:00
Richard van der Hoff
605defb9e4 Add missing consumeErrors
In general we want defer.gatherResults to consumeErrors, rather than having
exceptions hanging around and getting logged as CRITICAL unhandled errors.
2018-04-27 11:16:28 +01:00
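What this looks like at a call site (a generic Twisted example, not a synapse snippet; Python 3 for the print):

```python
from twisted.internet import defer

d1, d2 = defer.Deferred(), defer.Deferred()

# consumeErrors=True delivers failures once, via the gathered Deferred,
# instead of also logging them later as CRITICAL unhandled errors
gathered = defer.gatherResults([d1, d2], consumeErrors=True)
gathered.addErrback(lambda f: print("failed: %s" % f.value))

d1.errback(RuntimeError("boom"))
d2.callback("ok")
```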
Richard van der Hoff
9255a6cb17 Improve exception handling for background processes
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.

This is unsatisfactory for a number of reasons:
 - logging on garbage collection is best-effort and may happen some time after
   the error, if at all
 - it can be hard to figure out where the error actually happened.
 - it is logged as a scary CRITICAL error which (a) I always forget to grep for
   and (b) it's not really CRITICAL if a background process we don't care about
   fails.

So this is an attempt to add exception handling to everything we fire off into
the background.
2018-04-27 11:07:40 +01:00
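The shape of the fix, sketched (helper name invented; the real change touches many call sites):

```python
import logging

logger = logging.getLogger(__name__)

def log_failure(deferred, name):
    """Attach an errback so failures are logged promptly, rather than as
    a CRITICAL message when the Deferred is eventually garbage-collected."""
    deferred.addErrback(
        lambda failure: logger.error("Background process %s: %s", name, failure)
    )
    return deferred
```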
Neil Johnson
d842ed14f4 Merge tag 'v0.28.0'
Changes in synapse v0.28.0 (2018-04-26)
===========================================

Bug Fixes:

* Fix quarantine media admin API and search reindex (PR #3130)
* Fix media admin APIs (PR #3134)

Changes in synapse v0.28.0-rc1 (2018-04-24)
===========================================

Minor performance improvement to federation sending and bug fixes.

(Note: This release does not include the state resolution changes discussed in Matrix Live)

Features:

* Add metrics for event processing lag (PR #3090)
* Add metrics for ResponseCache (PR #3092)

Changes:

* Synapse on PyPy (PR #2760) Thanks to @Valodim!
* move handling of auto_join_rooms to RegisterHandler (PR #2996) Thanks to @krombel!
* Improve handling of SRV records for federation connections (PR #3016) Thanks to @silkeh!
* Document the behaviour of ResponseCache (PR #3059)
* Preparation for py3 (PR #3061, #3073, #3074, #3075, #3103, #3104, #3106, #3107, #3109, #3110) Thanks to @NotAFile!
* update prometheus dashboard to use new metric names (PR #3069) Thanks to @krombel!
* use python3-compatible prints (PR #3074) Thanks to @NotAFile!
* Send federation events concurrently (PR #3078)
* Limit concurrent event sends for a room (PR #3079)
* Improve R30 stat definition (PR #3086)
* Send events to ASes concurrently (PR #3088)
* Refactor ResponseCache usage (PR #3093)
* Clarify that SRV may not point to a CNAME (PR #3100) Thanks to @silkeh!
* Use str(e) instead of e.message (PR #3103) Thanks to @NotAFile!
* Use six.itervalues in some places (PR #3106) Thanks to @NotAFile!
* Refactor store.have_events (PR #3117)

Bug Fixes:

* Return 401 for invalid access_token on logout (PR #2938) Thanks to @dklug!
* Return a 404 rather than a 500 on rejoining empty rooms (PR #3080)
* fix federation_domain_whitelist (PR #3099)
* Avoid creating events with huge numbers of prev_events (PR #3113)
* Reject events which have lots of prev_events (PR #3118)
2018-04-27 10:40:27 +01:00
Richard van der Hoff
31c8be956f also upgrade pip when installing 2018-04-27 01:56:58 +01:00
Neil Johnson
28dd536e80 update changelog and bump version to 0.28.0 2018-04-26 15:51:39 +01:00
Neil Johnson
8721580303 Merge branch 'develop' of https://github.com/matrix-org/synapse into release-v0.28.0-rc1 2018-04-26 15:44:54 +01:00
Richard van der Hoff
dbf76fd4b9 jenkins build: make sure we have a recent setuptools 2018-04-26 13:11:03 +01:00
Richard van der Hoff
d78ada3166 Miscellaneous fixes to python_dependencies
* add some doc about wtf this thing does
* pin Twisted to < 18.4
* add explicit dep on six (fixes #3089)
2018-04-26 13:11:03 +01:00
Erik Johnston
0ced8b5b47 Merge pull request #3134 from matrix-org/erikj/fix_admin_media_api
Fix media admin APIs
2018-04-26 12:02:40 +01:00
Erik Johnston
7ec8e798b4 Fix media admin APIs 2018-04-26 11:31:22 +01:00
Erik Johnston
a5ad88913c Merge pull request #3130 from matrix-org/erikj/fix_quarantine_room
Fix quarantine media admin API
2018-04-25 17:54:12 +01:00
Erik Johnston
22881b3d69 Also fix reindexing of search 2018-04-25 15:32:04 +01:00
Erik Johnston
ba3166743c Fix quarantine media admin API 2018-04-25 15:11:18 +01:00
Matthew Hodgson
e3a373f002 remove duplicates from groups tables
and rename inconsistently named indexes.
Based on https://github.com/matrix-org/synapse/pull/3128 - thanks @vurpo!
2018-04-25 14:58:43 +01:00
Neil Johnson
6ab3b9c743 Update CHANGES.rst
Rephrase v0.28.0-rc1 summary
2018-04-24 16:39:20 +01:00
Neil Johnson
1bb83d5d41 Merge branch 'master' into develop 2018-04-24 15:52:43 +01:00
Neil Johnson
13a2beabca Update CHANGES.rst
fix formatting on line break
2018-04-24 15:43:30 +01:00
Neil Johnson
2c3e995f38 Bump version and update changelog 2018-04-24 15:33:22 +01:00
Neil Johnson
8e8b06715f Revert "Bump version and update changelog"
This reverts commit 08b29d4574.
2018-04-24 13:58:45 +01:00
Neil Johnson
08b29d4574 Bump version and update changelog 2018-04-24 13:56:12 +01:00
Richard van der Hoff
77ebef9d43 Merge pull request #3118 from matrix-org/rav/reject_prev_events
Reject events which have lots of prev_events
2018-04-23 17:51:38 +01:00
Richard van der Hoff
9b9c38373c Remove spurious param 2018-04-23 12:00:06 +01:00
Richard van der Hoff
286e20f2bc Merge pull request #3109 from NotAFile/py3-tests-fix
Make tests py3 compatible
2018-04-23 11:59:03 +01:00
Richard van der Hoff
1ea904b9f0 Use deferred.addTimeout instead of time_bound_deferred
This doesn't feel like a wheel we need to reinvent.
2018-04-23 00:53:18 +01:00
Richard van der Hoff
dc875d2712 Merge pull request #3106 from NotAFile/py3-six-itervalues-1
Use six.itervalues in some places
2018-04-20 15:43:52 +01:00
Richard van der Hoff
8dc4a6144b Merge pull request #3107 from NotAFile/py3-bool-nonzero
add __bool__ alias to __nonzero__ methods
2018-04-20 15:43:39 +01:00
Richard van der Hoff
d06a9ea5f7 Merge pull request #3104 from NotAFile/py3-unittest-config
Add some more variables to the unittest config
2018-04-20 15:35:58 +01:00
Richard van der Hoff
c09a6daf09 Merge pull request #3110 from NotAFile/py3-six-queue
Replace Queue with six.moves.queue
2018-04-20 15:35:00 +01:00
Richard van der Hoff
692a3cc806 Merge pull request #3103 from NotAFile/py3-baseexcepton-message
Use str(e) instead of e.message
2018-04-20 15:34:49 +01:00
Erik Johnston
366dd893fc Merge pull request #3100 from silkeh/readme-srv-cname
Clarify that SRV may not point to a CNAME
2018-04-20 15:18:44 +01:00
Erik Johnston
bdb7714d13 Merge pull request #3125 from matrix-org/erikj/add_contrib_docs
Document contrib directory
2018-04-20 13:02:24 +01:00
Erik Johnston
67dabe143d Document contrib directory 2018-04-20 11:47:38 +01:00
Richard van der Hoff
3de7d9fe99 accept stupid events over backfill 2018-04-20 11:41:03 +01:00
Richard van der Hoff
11a67b7c9d Merge pull request #3093 from matrix-org/rav/response_cache_wrap
Refactor ResponseCache usage
2018-04-20 11:31:17 +01:00
Richard van der Hoff
0c280d4d99 Reinstate linearizer for federation_server.on_context_state_request 2018-04-20 11:10:04 +01:00
Richard van der Hoff
bc381d5798 Merge pull request #3117 from matrix-org/rav/refactor_have_events
Refactor store.have_events
2018-04-20 10:26:12 +01:00
Richard van der Hoff
b1dfbc3c40 Refactor store.have_events
It turns out that most of the time we were calling have_events, we were only
using half of the result. Replace have_events with have_seen_events and
get_rejection_reasons, so that we can see what's going on a bit more clearly.
2018-04-20 10:25:56 +01:00
Richard van der Hoff
dacf3a50ac Merge pull request #3113 from matrix-org/rav/fix_huge_prev_events
Avoid creating events with huge numbers of prev_events
2018-04-18 11:27:56 +01:00
Richard van der Hoff
1f4b498b73 Add some comments 2018-04-18 00:15:36 +01:00
Richard van der Hoff
e585228860 Check events on backfill too 2018-04-18 00:06:42 +01:00
Richard van der Hoff
9b7794262f Reject events which have too many auth_events or prev_events
... this should protect us from being DoSed by people making silly events
(deliberately or otherwise)
2018-04-18 00:06:42 +01:00
Richard van der Hoff
639480e14a Avoid creating events with huge numbers of prev_events
In most cases, we limit the number of prev_events for a given event to 10
events. This fixes a particular code path which created events with huge
numbers of prev_events.
2018-04-16 18:41:37 +01:00
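A sketch of the capping logic; the preference for the most recent extremities is an assumption for illustration, not the commit's actual selection rule:

```python
MAX_PREV_EVENTS = 10  # the limit mentioned above

def choose_prev_events(forward_extremities):
    """`forward_extremities`: assumed list of (event_id, depth) pairs."""
    ordered = sorted(forward_extremities, key=lambda e: e[1], reverse=True)
    return [event_id for event_id, _ in ordered[:MAX_PREV_EVENTS]]
```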
Adrian Tschira
878995e660 Replace Queue with six.moves.queue
and a six.range change which I missed the last time

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-16 00:46:21 +02:00
Adrian Tschira
a1a3c9660f Make tests py3 compatible
This is a mixed commit that fixes various small issues

 * print parentheses
 * 01 is invalid syntax (it was octal in py2)
 * [x for i in 1, 2] is invalid syntax
 * six moves

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-16 00:39:32 +02:00
Matthew Hodgson
512633ef44 fix spurious changelog dup 2018-04-15 22:45:06 +01:00
Adrian Tschira
2a3c33ff03 Use six.moves.urlparse
The imports were shuffled around a bunch in py3

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 21:22:43 +02:00
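The portable import, for reference (six's alias works on both interpreters):

```python
from six.moves.urllib import parse as urlparse

# resolves to the py2 urlparse module or the py3 urllib.parse module
print(urlparse.urlparse("https://example.com/path?q=1").path)  # "/path"
```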
Adrian Tschira
f63ff73c7f add __bool__ alias to __nonzero__ methods
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:40:47 +02:00
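The idiom in miniature (class name invented for illustration):

```python
class EventStreamResult(object):
    def __init__(self, events):
        self.events = events

    def __nonzero__(self):  # python 2 truthiness hook
        return bool(self.events)

    __bool__ = __nonzero__  # python 3 looks for __bool__ instead

print(bool(EventStreamResult([])))       # False
print(bool(EventStreamResult(["ev1"])))  # True
```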
Adrian Tschira
36c59ce669 Use six.itervalues in some places
There's more where that came from

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:39:43 +02:00
Adrian Tschira
cb9cdfecd0 Add some more variables to the unittest config
These worked accidentally before (python2 doesn't complain if you
compare incompatible types) but under py3 this blows up spectacularly

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:36:39 +02:00
Adrian Tschira
1515560f5c Use str(e) instead of e.message
Doing this I learned e.message was pretty short-lived: added in 2.6,
they realized it was a bad idea and deprecated it in 2.7

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:32:42 +02:00
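For reference:

```python
try:
    raise ValueError("something broke")
except ValueError as e:
    print(str(e))  # portable on py2 and py3: "something broke"
    # e.message only ever existed on python 2.6/2.7, deprecated in 2.7
```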
Adrian Tschira
bfc2ade9b3 Make event properties raise AttributeError instead
They raised KeyError before. I'm changing this because the code uses
hasattr() to check for the presence of a key. This worked accidentally
before, because hasattr() silences all exceptions in python 2. However,
in python3, this isn't the case anymore.

I had a look around to see if anything depended on this raising a
KeyError and I couldn't find anything. Of course, I could have simply
missed it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:16:59 +02:00
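A cut-down illustration of the change (EventBase here is a stand-in, not the real class):

```python
class EventBase(object):
    def __init__(self, event_dict):
        self._event_dict = event_dict

    @property
    def sender(self):
        try:
            return self._event_dict["sender"]
        except KeyError:
            # hasattr() only swallows AttributeError on python 3
            raise AttributeError("sender")

print(hasattr(EventBase({}), "sender"))                    # False
print(hasattr(EventBase({"sender": "@a:b.c"}), "sender"))  # True
```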
Silke
c4bdbc2bd2 Clarify that SRV may not point to a CNAME
Signed-off-by: Silke Hofstra <silke@slxh.eu>
2018-04-14 14:55:25 +02:00
Neil Johnson
154b44c249 Merge branch 'master' of https://github.com/matrix-org/synapse into develop 2018-04-13 17:07:54 +01:00
Matthew Hodgson
0d8c50df44 Merge pull request #3099 from matrix-org/matthew/fix-federation-domain-whitelist
fix federation_domain_whitelist
2018-04-13 15:51:13 +01:00
Matthew Hodgson
78a9698650 fix federation_domain_whitelist
we were checking the wrong server_name on inbound requests
2018-04-13 15:47:43 +01:00
Matthew Hodgson
25b0ba30b1 revert last to PR properly 2018-04-13 15:46:37 +01:00
Matthew Hodgson
f8d46cad3c correctly auth inbound federation_domain_whitelist reqs 2018-04-13 15:41:52 +01:00
Neil Johnson
d4b2e05852 Merge branch 'master' of https://github.com/matrix-org/synapse into develop 2018-04-13 12:16:27 +01:00
Neil Johnson
eb53439c4a Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-04-13 12:14:57 +01:00
Neil Johnson
51d628d28d Bump version and Change log 2018-04-13 12:08:19 +01:00
Neil Johnson
df77837a33 Merge pull request #3095 from matrix-org/rav/bump_canonical_json
Update canonicaljson dependency
2018-04-13 12:04:46 +01:00
Richard van der Hoff
d3347ad485 Revert "Use sortedcontainers instead of blist"
This reverts commit 9fbe70a7dc.

It turns out that sortedcontainers.SortedDict is not an exact match for
blist.sorteddict; in particular, `popitem()` removes things from the opposite
end of the dict.

This is trivial to fix, but I want to add some unit tests, and potentially some
more thought about it, before we do so.
2018-04-13 11:16:43 +01:00
Richard van der Hoff
fac3f9e678 Bump canonicaljson to 1.1.3
1.1.2 was a bit broken too :/
2018-04-13 10:21:38 +01:00
Richard van der Hoff
60f6014bb7 ResponseCache: fix handling of completed results
Turns out that ObservableDeferred.observe doesn't return a deferred if the
result is already completed. Fix handling and improve documentation.
2018-04-13 07:32:29 +01:00
Richard van der Hoff
119596ab8f Update canonicaljson dependency
1.1.0 and 1.1.1 were broken, so we're updating this to help people make sure
they don't end up on a broken version.

Also, 1.1.2 is speedier...
2018-04-12 17:31:44 +01:00
Richard van der Hoff
b78395b7fe Refactor ResponseCache usage
Adds a `.wrap` method to ResponseCache which wraps up the boilerplate of a
(get, set) pair, and then use it throughout the codebase.

This is largely a non-functional change, but does include the following functional
changes:

* federation_server.on_context_state_request: drops use of _server_linearizer
  which looked redundant and could cause incorrect cache misses by yielding
  between the get and the set.
* RoomListHandler.get_remote_public_room_list(): fixes logcontext leaks
* the wrap function includes some logging. I'm hoping this won't be too noisy
  on production.
2018-04-12 13:02:15 +01:00
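The boilerplate being factored out, as a toy sketch (the real ResponseCache also handles logcontexts, logging, and entry expiry):

```python
from twisted.internet import defer

class ResponseCacheSketch(object):
    """Illustration only: dedupes concurrent requests by key."""

    def __init__(self):
        self._cache = {}

    def wrap(self, key, callback, *args, **kwargs):
        # the (get, set) pair the commit wraps up
        if key not in self._cache:
            self._cache[key] = defer.maybeDeferred(callback, *args, **kwargs)
        return self._cache[key]
```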
Richard van der Hoff
d5c74b9f6c Merge pull request #3092 from matrix-org/rav/response_cache_metrics
Add metrics for ResponseCache
2018-04-12 12:59:36 +01:00
Erik Johnston
0f13f30fca Merge pull request #3090 from matrix-org/erikj/processed_event_lag
Add metrics for event processing lag
2018-04-12 12:18:57 +01:00
Erik Johnston
415aeefd89 Format docstring 2018-04-12 12:07:09 +01:00
Erik Johnston
19ceb4851f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/processed_event_lag 2018-04-12 11:36:07 +01:00
Richard van der Hoff
261124396e Merge pull request #3059 from matrix-org/rav/doc_response_cache
Document the behaviour of ResponseCache
2018-04-12 11:22:30 +01:00
Erik Johnston
23a7f9d7f4 Doc we raise on unknown event 2018-04-12 11:20:51 +01:00
Erik Johnston
d7bf3a68f0 s/list/tuple 2018-04-12 11:19:04 +01:00
Erik Johnston
f67e906e18 Set all metrics at the same time 2018-04-12 11:18:19 +01:00
Erik Johnston
971059a733 Merge pull request #3088 from matrix-org/erikj/as_parallel
Send events to ASes concurrently
2018-04-12 10:42:36 +01:00
Erik Johnston
e939f3bca6 Fix tests 2018-04-11 14:37:11 +01:00
Erik Johnston
4dae4a97ed Track last processed event received_ts 2018-04-11 14:27:09 +01:00
Erik Johnston
92e34615c5 Track where event stream processing have gotten up to 2018-04-11 12:13:40 +01:00
Erik Johnston
ab825aa328 Add GaugeMetric 2018-04-11 12:13:40 +01:00
Richard van der Hoff
233699c42e Merge pull request #2760 from Valodim/pypy
Synapse on PyPy
2018-04-11 11:20:01 +01:00
Neil Johnson
427e6c4059 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-04-11 10:59:00 +01:00
Neil Johnson
781cd8c54f bump version/changelog 2018-04-11 10:54:43 +01:00
Neil Johnson
9ef0b179e0 Merge commit '11d2609da70af797405241cdf7d9df19db5628f2' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-11 10:51:59 +01:00
Erik Johnston
121591568b Send events to ASes concurrently 2018-04-11 09:56:00 +01:00
Richard van der Hoff
b3384232a0 Add metrics for ResponseCache 2018-04-10 23:14:47 +01:00
Matthew Hodgson
360d899a64 Merge pull request #3086 from matrix-org/r30_stats
fix typo
2018-04-10 17:46:37 +01:00
Neil Johnson
d54cfbb7a8 fix typo 2018-04-10 17:38:16 +01:00
Erik Johnston
eaa2ebf20b Merge pull request #3079 from matrix-org/erikj/limit_concurrent_sends
Limit concurrent event sends for a room
2018-04-10 16:43:58 +01:00
Erik Johnston
9daf82278f Merge pull request #3078 from matrix-org/erikj/federation_sender
Send federation events concurrently
2018-04-10 16:43:48 +01:00
Adrian Tschira
a3f9ddbede Open certificate files as bytes
That's what pyOpenSSL expects on python3

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-10 17:36:29 +02:00
Adrian Tschira
7f8eebc8ee Open config file in non-bytes mode
Nothing written into it is encoded, so opening it in bytes mode makes
little sense, and it breaks on python3 the way it was before.

The variable names were adjusted to be less misleading.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-10 17:32:40 +02:00
Neil Johnson
dd723267b2 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into develop 2018-04-10 15:03:29 +01:00
Erik Johnston
a060dfa132 Use run_in_background instead 2018-04-10 14:25:11 +01:00
Erik Johnston
f8e8ec013b Note why we're limiting concurrent event sends 2018-04-10 14:00:46 +01:00
Erik Johnston
1246d23710 Preserve log contexts correctly 2018-04-10 12:04:32 +01:00
Erik Johnston
d49cbf712f Log event ID on exception 2018-04-10 12:03:41 +01:00
Erik Johnston
ce72d590ed Merge pull request #3082 from matrix-org/erikj/urlencode_paths
URL quote path segments over federation
2018-04-10 11:31:16 +01:00
Erik Johnston
11d2609da7 Ensure slashes are escaped 2018-04-10 11:24:40 +01:00
Erik Johnston
dab87b84a3 URL quote path segments over federation 2018-04-10 11:16:08 +01:00
Vincent Breitmoser
6d7f0f8dd3 Don't disable GC when running on PyPy
PyPy's incminimark GC can't be triggered manually. From what I observed
there are no obvious issues with just letting it run normally. And
unlike CPython, it actually returns unused RAM to the system.

Signed-off-by: Vincent Breitmoser <look@my.amazin.horse>
2018-04-10 11:35:34 +02:00
Vincent Breitmoser
f4284d943a In DomainSpecificString, override __repr__ in addition to __str__
For some reason, string interpolation on a DomainSpecificString object
like "%r" % (domainSpecificStringObj) fails under PyPy, because the
default __repr__ implementation wants to iterate over the object. I'm
not sure why that happens, but overriding __repr__ instead of __str__
fixes this problem, and is arguably the more appropriate thing to do
anyways.
2018-04-10 11:35:29 +02:00
Richard van der Hoff
d1e56cfcd1 Fix pep8 error on psycopg2cffi hack 2018-04-10 11:35:29 +02:00
Vincent Breitmoser
89de934981 Use psycopg2cffi module instead of psycopg2 if running on pypy
The psycopg2 package isn't available for PyPy.  This commit adds a check
if the runtime is PyPy, and if it is uses psycopg2cffi module in favor
of psycopg2. This is almost a drop-in replacement, except for one place
where an additional cast to string is required.
2018-04-10 11:29:52 +02:00
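The usual shape of that check, sketched (assuming psycopg2cffi's documented compat shim):

```python
import platform

if platform.python_implementation() == "PyPy":
    from psycopg2cffi import compat
    compat.register()  # installs psycopg2cffi under the name `psycopg2`

import psycopg2  # on PyPy this now resolves to psycopg2cffi
```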
Vincent Breitmoser
9fbe70a7dc Use sortedcontainers instead of blist
This commit drop-in replaces blist with SortedContainers. They are
written in pure python so they work with pypy, but perform as well as
native implementations, at least in a couple of benchmarks:

http://www.grantjenks.com/docs/sortedcontainers/performance.html
2018-04-10 11:29:51 +02:00
Richard van der Hoff
a3599dda97 Merge pull request #2996 from krombel/allow_auto_join_rooms
move handling of auto_join_rooms to RegisterHandler
2018-04-10 01:11:00 +01:00
Richard van der Hoff
87478c5a60 Merge pull request #3061 from NotAFile/add-some-byte-strings
Add b prefixes to some strings that are bytes in py3
2018-04-09 23:54:05 +01:00
Richard van der Hoff
c508b2f2f0 Merge pull request #3073 from NotAFile/use-six-reraise
Replace old-style raise with six.reraise
2018-04-09 23:53:40 +01:00
Richard van der Hoff
37354b55c9 Merge pull request #2938 from dklug/develop
Return 401 for invalid access_token on logout
2018-04-09 23:52:56 +01:00
Richard van der Hoff
0e9aa1d091 Merge pull request #3074 from NotAFile/fix-py3-prints
use python3-compatible prints
2018-04-09 23:44:41 +01:00
Richard van der Hoff
8eaa141d8f Merge pull request #3075 from NotAFile/six-type-checks
Replace some type checks with six type checks
2018-04-09 23:40:44 +01:00
Richard van der Hoff
664adb4236 Merge pull request #3016 from silkeh/improve-service-lookups
Improve handling of SRV records for federation connections
2018-04-09 23:40:06 +01:00
Richard van der Hoff
aea3a93611 Merge pull request #3069 from krombel/update_prometheus_config
update prometheus dashboard to use new metric names
2018-04-09 23:37:18 +01:00
Neil Johnson
41e0611895 remove errant print 2018-04-09 18:44:20 +01:00
Neil Johnson
61b439c904 Fix msec to sec, again 2018-04-09 18:43:48 +01:00
Neil Johnson
87770300d5 Fix msec to sec 2018-04-09 18:38:59 +01:00
Neil Johnson
9a311adfea v0.27.3-rc2 2018-04-09 17:52:08 +01:00
Neil Johnson
64bc2162ef Fix psycopg2 interpolation 2018-04-09 17:50:36 +01:00
Richard van der Hoff
d2c6f4d626 Merge pull request #3080 from matrix-org/rav/fix_500_on_rejoin
Return a 404 rather than a 500 on rejoining empty rooms
2018-04-09 17:32:36 +01:00
Neil Johnson
5232d3bfb1 version bump v0.27.3-rc2 2018-04-09 17:25:57 +01:00
Neil Johnson
5e785d4d5b Merge branch 'develop' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 17:21:34 +01:00
Erik Johnston
6e025a97b4 Handle all events in a room correctly 2018-04-09 16:02:48 +01:00
Neil Johnson
414b2b3bd1 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 16:02:08 +01:00
Neil Johnson
b151eb14a2 Update CHANGES.rst 2018-04-09 16:01:59 +01:00
Neil Johnson
64cebbc730 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 16:00:51 +01:00
Neil Johnson
d9ae2bc826 bump version to release candidate 2018-04-09 16:00:31 +01:00
Neil Johnson
21d5a2a08e Update CHANGES.rst 2018-04-09 15:55:41 +01:00
Neil Johnson
c115deed12 Update CHANGES.rst 2018-04-09 15:54:32 +01:00
Neil Johnson
072fb59446 bump version 2018-04-09 13:49:25 +01:00
Neil Johnson
89dda61315 Merge branch 'develop' into release-v0.27.0 2018-04-09 13:48:15 +01:00
Neil Johnson
687f3451bd 0.27.3 2018-04-09 13:44:05 +01:00
Richard van der Hoff
13decdbf96 Revert "Merge pull request #3066 from matrix-org/rav/remove_redundant_metrics"
We aren't ready to release this yet, so I'm reverting it for now.

This reverts commit d1679a4ed7, reversing
changes made to e089100c62.
2018-04-09 12:59:12 +01:00
Richard van der Hoff
f3ef60662f Return a 404 rather than a 500 on rejoining empty rooms
Filter ourselves out of the server list before checking for an empty remote
host list, to fix 500 error

Fixes #2141
2018-04-09 12:56:22 +01:00
Erik Johnston
e5082494eb Limit concurrent event sends for a room 2018-04-09 12:07:39 +01:00
Erik Johnston
56b0589865 Use create_and_send_nonmember_event everywhere 2018-04-09 12:04:18 +01:00
Erik Johnston
11974f3787 Send federation events concurrently 2018-04-09 11:47:10 +01:00
Erik Johnston
145d14656b Handle exceptions in get_hosts_for_room when sending events over federation 2018-04-09 11:47:01 +01:00
Adrian Tschira
e54c202b81 Replace some type checks with six type checks
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-07 01:02:32 +02:00
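For reference, the six helpers involved:

```python
import six

print(isinstance("abc", six.string_types))  # str on py3; str/unicode on py2
print(isinstance(42, six.integer_types))    # int on py3; int/long on py2
print(isinstance(u"xyz", six.text_type))    # the text type on either
```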
Adrian Tschira
b0500d3774 use python3-compatible prints 2018-04-06 23:35:27 +02:00
Adrian Tschira
4f40d058cc Replace old-style raise with six.reraise
The old style raise is invalid syntax in python3. As noted in the docs,
this adds one more frame in the traceback, but I think this is
acceptable:

    <ipython-input-7-bcc5cba3de3f> in <module>()
         16     except:
         17         pass
    ---> 18     six.reraise(*x)

    /usr/lib/python3.6/site-packages/six.py in reraise(tp, value, tb)
        691             if value.__traceback__ is not tb:
        692                 raise value.with_traceback(tb)
    --> 693             raise value
        694         finally:
        695             value = None

    <ipython-input-7-bcc5cba3de3f> in <module>()
          9
         10 try:
    ---> 11     x()
         12 except:
         13     x = sys.exc_info()

Also note that this uses six, which is not formally a dependency yet,
but is included indirectly since most packages depend on it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-06 23:06:24 +02:00
Luke Barnard
135fc5b9cd Merge pull request #3046 from matrix-org/dbkr/join_group
Implement group join API
2018-04-06 16:24:32 +01:00
Luke Barnard
020a501354 de-lint, quote consistency 2018-04-06 16:02:06 +01:00
Luke Barnard
db2fd801f7 Explicitly grab individual columns from group object 2018-04-06 15:57:25 +01:00
Erik Johnston
e8b03cab1b Merge pull request #3071 from matrix-org/erikj/resp_size_metrics
Add response size metrics
2018-04-06 15:49:12 +01:00
Richard van der Hoff
8844f95c32 Merge pull request #3072 from matrix-org/rav/fix_port_script
postgres port script: fix state_groups_pkey error
2018-04-06 15:47:40 +01:00
Luke Barnard
7945435587 When exposing group state, return is_openly_joinable
as opposed to join_policy, which is really only pertinent to the
synapse implementation of the group server.

By doing this we keep the group server concept extensible by
allowing arbitrarily complex rules for deciding whether a group
is openly joinable.
2018-04-06 15:43:27 +01:00
Luke Barnard
6bd1b7053e By default, join policy is "invite" 2018-04-06 15:43:27 +01:00
Luke Barnard
b4478e586f add_user -> _add_user 2018-04-06 15:43:27 +01:00
Luke Barnard
112c2253e2 pep8 2018-04-06 15:43:27 +01:00
Luke Barnard
6850f8aea3 Get group_info from existing call to check_group_is_ours 2018-04-06 15:43:27 +01:00
Luke Barnard
cd087a265d Don't use redundant inlineCallbacks 2018-04-06 15:43:27 +01:00
Luke Barnard
87c864b698 join_rule -> join_policy 2018-04-06 15:43:27 +01:00
Luke Barnard
ae85c7804e is_joinable -> join_rule 2018-04-06 15:43:27 +01:00
Luke Barnard
f8d1917fce Fix federation client set_group_joinable typo 2018-04-06 15:43:27 +01:00
Luke Barnard
6eb3aa94b6 Factor out add_user from accept_invite and join_group 2018-04-06 15:43:27 +01:00
David Baker
edb45aae38 pep8 2018-04-06 15:43:27 +01:00
David Baker
b370fe61c0 Implement group join API 2018-04-06 15:43:27 +01:00
Richard van der Hoff
6a9777ba02 Port script: Set up state_group_id_seq
Fixes https://github.com/matrix-org/synapse/issues/3050.
2018-04-06 15:33:30 +01:00
Richard van der Hoff
01579384cc Port script: clean up a bit
Improve logging and comments. Group all the stuff to do with inspecting tables
together rather than creating the port tables in the middle.
2018-04-06 15:33:30 +01:00
Richard van der Hoff
e01ba5bda3 Port script: avoid nasty errors when setting up
We really shouldn't spit out "Failed to create port table"; it looks scary.
2018-04-06 15:33:30 +01:00
Erik Johnston
7b824f1475 Add response size metrics 2018-04-06 13:20:11 +01:00
Erik Johnston
35ff941172 Merge pull request #3070 from krombel/group_join_put_instead_post
use PUT instead of POST for federating groups/m.join_policy
2018-04-06 12:11:16 +01:00
Krombel
1d71f484d4 use PUT instead of POST for federating groups/m.join_policy 2018-04-06 12:54:09 +02:00
Richard van der Hoff
15e8ed874f more verbosity in synctl 2018-04-06 09:28:36 +01:00
Krombel
c7ede92d0b make prometheus config compliant with v0.28 2018-04-05 23:34:01 +02:00
Richard van der Hoff
551422051b Merge pull request #2886 from turt2live/travis/new-worker-docs
Add a blurb explaining the main synapse worker
2018-04-05 17:33:09 +01:00
Richard van der Hoff
c7f0969731 Merge pull request #2986 from jplatte/join_reponse_room_id
Add room_id to the response of `rooms/{roomId}/join`
2018-04-05 17:29:06 +01:00
Richard van der Hoff
3449da3bc7 Merge pull request #3068 from matrix-org/rav/fix_cache_invalidation
Improve database cache performance
2018-04-05 17:21:44 +01:00
Richard van der Hoff
d1679a4ed7 Merge pull request #3066 from matrix-org/rav/remove_redundant_metrics
Remove redundant metrics which were deprecated in 0.27.0.
2018-04-05 17:21:18 +01:00
Richard van der Hoff
01afc563c3 Fix overzealous cache invalidation
Fixes an issue where a cache invalidation would invalidate *all* pending
entries, rather than just the entry that we intended to invalidate.
2018-04-05 16:24:04 +01:00
Luke Barnard
e089100c62 Merge pull request #3045 from matrix-org/dbkr/group_joinable
Add joinability for groups
2018-04-05 15:57:49 +01:00
Neil Johnson
68b0ee4e8d Merge pull request #3041 from matrix-org/r30_stats
R30 stats
2018-04-05 15:37:37 +01:00
Richard van der Hoff
22284a6f65 Merge pull request #3060 from matrix-org/rav/kill_event_content
Remove uses of events.content
2018-04-05 15:02:17 +01:00
Luke Barnard
917380e89d NON NULL -> NOT NULL 2018-04-05 14:32:12 +01:00
Luke Barnard
104c0bc1d5 Use "/settings/" (plural) 2018-04-05 14:07:16 +01:00
Luke Barnard
700e5e7198 Use DEFAULT join_policy of "invite" in db 2018-04-05 14:01:17 +01:00
Luke Barnard
b214a04ffc Document set_group_join_policy 2018-04-05 13:29:16 +01:00
Neil Johnson
0e5f479fc0 Review comments
Use iteritems over items to loop over dict
formatting
2018-04-05 12:16:46 +01:00
Richard van der Hoff
518f6de088 Remove redundant metrics which were deprecated in 0.27.0. 2018-04-04 19:46:28 +01:00
Jan Christian Grünhage
7d0f712348 Merge pull request #3063 from matrix-org/jcgruenhage/cache_settings_stats
phone home cache size configurations
2018-04-04 17:24:40 +01:00
Jan Christian Grünhage
e4570c53dd phone home cache size configurations 2018-04-04 16:46:58 +01:00
Travis Ralston
88964b987e Merge remote-tracking branch 'matrix-org/develop' into travis/new-worker-docs 2018-04-04 08:46:56 -06:00
Travis Ralston
204fc98520 Document the additional routes for the event_creator worker
Fixes https://github.com/matrix-org/synapse/issues/3018

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-04-04 08:46:17 -06:00
Travis Ralston
301b339494 Move the mention of the main synapse worker higher up
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-04-04 08:45:51 -06:00
Adrian Tschira
6168351877 Add b prefixes to some strings that are bytes in py3
This has no effect on python2

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-04 13:48:51 +02:00
Richard van der Hoff
9cd3f06ab7 Merge pull request #3062 from matrix-org/revert-3053-speedup-mxid-check
Revert "improve mxid check performance"
2018-04-04 12:09:17 +01:00
Richard van der Hoff
f92963f5db Revert "improve mxid check performance" 2018-04-04 12:08:29 +01:00
Silke
72251d1b97 Remove address resolution of hosts in SRV records
Signed-off-by: Silke Hofstra <silke@slxh.eu>
2018-04-04 12:26:50 +02:00
Richard van der Hoff
725a72ec5a Merge pull request #3000 from NotAFile/change-except-style
Replace old style error catching with 'as' keyword
2018-04-04 10:45:22 +01:00
Richard van der Hoff
a89f9f830c Merge pull request #3044 from matrix-org/michaelk/performance_stats
Add basic performance statistics to phone home
2018-04-04 10:37:25 +01:00
Richard van der Hoff
39ce38b024 Merge pull request #3053 from NotAFile/speedup-mxid-check
improve mxid check performance
2018-04-04 10:19:52 +01:00
Richard van der Hoff
a9a74101a4 Document the behaviour of ResponseCache
it looks like everything that uses ResponseCache expects to have to
`make_deferred_yieldable` its results. It's debatable whether that is the best
approach, but let's document it for now to avoid further confusion.
2018-04-04 09:06:22 +01:00
Luke Barnard
eb8d8d6f57 Use join_policy API instead of joinable
The API is now under
 /groups/$group_id/setting/m.join_policy

and expects a JSON blob of the shape

```json
{
  "m.join_policy": {
    "type": "invite"
  }
}
```

where "invite" could alternatively be "open".
2018-04-03 16:16:40 +01:00
Richard van der Hoff
8da39ad98f Merge pull request #3049 from matrix-org/rav/use_staticjson
Use static JSONEncoders
2018-04-03 15:18:32 +01:00
Richard van der Hoff
3ee4ad09eb Fix json encoding bug in replication
json encoders have an encode method, not a dumps method.
2018-04-03 15:09:48 +01:00
Richard van der Hoff
0ca5c4d2af Merge pull request #3048 from matrix-org/rav/use_simplejson
Use simplejson throughout
2018-04-03 14:39:57 +01:00
Adrian Tschira
11597ddea5 improve mxid check performance ~4x
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-03-31 02:00:22 +02:00
Richard van der Hoff
2fe3f848b9 Remove uses of events.content 2018-03-29 23:17:12 +01:00
Richard van der Hoff
05630758f2 Use static JSONEncoders
using json.dumps with custom options requires us to create a new JSONEncoder on
each call. It's more efficient to create one upfront and reuse it.
2018-03-29 23:13:33 +01:00
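The pattern, in miniature (encoder options are illustrative):

```python
import json

# created once at module load, reused for every call
_canonical_encoder = json.JSONEncoder(sort_keys=True, separators=(",", ":"))

def encode_canonical(obj):
    return _canonical_encoder.encode(obj)

print(encode_canonical({"b": 2, "a": 1}))  # {"a":1,"b":2}
```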
Richard van der Hoff
fcfe7f6ad3 Use simplejson throughout
Let's use simplejson rather than json, for consistency.
2018-03-29 22:45:52 +01:00
Neil Johnson
b4e37c6f50 pep8 2018-03-29 17:27:39 +01:00
Neil Johnson
9ee44a372d Remove need for sqlite specific query 2018-03-29 16:45:34 +01:00
Erik Johnston
88cc9cc69e Merge pull request #3043 from matrix-org/erikj/measure_state_group_creation
Measure time it takes to calculate state group ID
2018-03-28 17:40:14 +01:00
Neil Johnson
dc7c020b33 fix pep8 errors 2018-03-28 17:25:15 +01:00
Neil Johnson
16aeb41547 Update README.rst
update docker hub url
2018-03-28 16:47:56 +01:00
David Baker
c5de6987c2 This should probably be a PUT 2018-03-28 16:44:11 +01:00
Neil Johnson
241e4e8687 remove twisted deferral cruft 2018-03-28 16:25:53 +01:00
David Baker
929b34963d OK, smallint it is then 2018-03-28 14:53:55 +01:00
Richard van der Hoff
9a0db062af Merge pull request #3034 from matrix-org/rav/fix_key_claim_errors
Fix error when claiming e2e keys from offline servers
2018-03-28 14:50:38 +01:00
David Baker
a838444a70 Grr. Copy the definition from is_admin 2018-03-28 14:50:30 +01:00
Neil Johnson
4262aba17b bump schema version 2018-03-28 14:40:03 +01:00
Neil Johnson
86932be2cb Support multi client R30 for psql 2018-03-28 14:36:53 +01:00
David Baker
32260baa41 pep8 2018-03-28 14:29:42 +01:00
Michael Kaye
33f6195d9a Handle review comments 2018-03-28 14:25:25 +01:00
David Baker
a164270833 Make column definition that works on both dbs 2018-03-28 14:23:00 +01:00
David Baker
352e1ff9ed Add schema delta file 2018-03-28 14:07:57 +01:00
David Baker
79452edeee Add joinability for groups
Adds API to set the 'joinable' flag, and corresponding flag in the
table.
2018-03-28 14:03:37 +01:00
Krombel
6152e253d8 Merge branch 'develop' of into allow_auto_join_rooms 2018-03-28 14:45:28 +02:00
Erik Johnston
e9e4cb25fc Merge pull request #3042 from matrix-org/fix_locally_failing_tests
fix tests/storage/test_user_directory.py
2018-03-28 13:10:37 +01:00
Neil Johnson
792d340572 rename stat to future proof 2018-03-28 12:25:02 +01:00
Michael Kaye
4ceaa7433a As daemonizing will make a new process, defer call to init. 2018-03-28 12:19:01 +01:00
Neil Johnson
788e69098c Add user_ips last seen index 2018-03-28 12:03:13 +01:00
Neil Johnson
0f890f477e No need to cast in count_daily_users 2018-03-28 11:49:57 +01:00
Neil Johnson
545001b9e4 Fix search_user_dir multiple sqlite versions do different things 2018-03-28 11:19:45 +01:00
Erik Johnston
01ccc9e6f2 Measure time it takes to calculate state group ID 2018-03-28 11:03:52 +01:00
Neil Johnson
a9cb1a35c8 fix tests/storage/test_user_directory.py 2018-03-28 10:57:27 +01:00
Neil Johnson
a32d2548d9 query and call for r30 stats 2018-03-28 10:39:13 +01:00
Neil Johnson
9187e0762f count_daily_users failed if db was sqlite due to type failure - presumably this prevented all sqlite homeservers from reporting home 2018-03-28 10:02:32 +01:00
Erik Johnston
f879127aaa Merge pull request #3029 from matrix-org/erikj/linearize_generate_user_id
Linearize calls to _generate_user_id
2018-03-28 10:00:31 +01:00
Erik Johnston
e6d87c93f3 Merge pull request #3030 from matrix-org/erikj/no_ujson
Remove last usage of ujson
2018-03-28 10:00:06 +01:00
Erik Johnston
004cc8a328 Merge pull request #3033 from matrix-org/erikj/calculate_state_metrics
Add counter metrics for calculating state delta
2018-03-28 09:59:42 +01:00
Michael Kaye
ef520d8d0e Include coarse CPU and Memory use in stats callbacks.
This requires the psutil module, and is still opt-in based on the report_stats
config option.
2018-03-27 17:56:03 +01:00
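Roughly what such a callback can report, sketched with psutil (the stat names are invented for illustration):

```python
import psutil

process = psutil.Process()  # the current process
stats = {
    # first call returns 0.0; later calls report usage since the last call
    "cpu_average": int(process.cpu_percent(interval=None)),
    "memory_rss": process.memory_info().rss,  # resident set size, in bytes
}
print(stats)
```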
Richard van der Hoff
a134c572a6 Stringify exceptions for keys/{query,claim}
Make sure we stringify any exceptions we return from keys/query and keys/claim,
to avoid a 'not JSON serializable' error later

Fixes #3010
2018-03-27 17:15:06 +01:00
Richard van der Hoff
c2a5cf2fe3 factor out exception handling for keys/claim and keys/query
this stuff is badly c&p'ed
2018-03-27 17:11:23 +01:00
Erik Johnston
800cfd5774 Comment 2018-03-27 13:30:39 +01:00
Erik Johnston
152c2ac19e Fix indent 2018-03-27 13:13:46 +01:00
Erik Johnston
e70287cff3 Comment 2018-03-27 13:13:38 +01:00
Erik Johnston
03a26e28d9 Merge pull request #3017 from matrix-org/erikj/add_cache_control_headers
Add Cache-Control headers to all JSON APIs
2018-03-27 13:10:38 +01:00
Erik Johnston
3e0c0660b3 Also do check inside linearizer 2018-03-27 13:01:34 +01:00
Erik Johnston
3f49e131d9 Add counter metrics for calculating state delta
This will allow us to measure how often we calculate state deltas in
event persistence that we would have been able to calculate at the same
time we calculated the state for the event.
2018-03-27 10:57:35 +01:00
Erik Johnston
9b8c0fb162 Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-03-26 21:41:55 +01:00
Erik Johnston
691f8492fb Merge branch 'release-v0.27.0' of github.com:matrix-org/synapse 2018-03-26 16:38:23 +01:00
Erik Johnston
a9d7d98d3f Bump version and changelog 2018-03-26 16:36:53 +01:00
Erik Johnston
bdbb1eec65 Merge branch 'erikj/simplejson_replication' of github.com:matrix-org/synapse into release-v0.27.0 2018-03-26 16:35:55 +01:00
Erik Johnston
01f72e2fc7 Fix date 2018-03-26 16:21:26 +01:00
Erik Johnston
9187862002 Bump version and changelog 2018-03-26 16:20:24 +01:00
Neil Johnson
aa3587fdd1 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-03-26 14:51:11 +01:00
Neil Johnson
51406dab96 version bump 2018-03-26 14:48:19 +01:00
Erik Johnston
fecb45e0c3 Remove last usage of ujson 2018-03-26 13:32:29 +01:00
Erik Johnston
44cd6e1358 PEP8 2018-03-26 12:06:48 +01:00
Erik Johnston
8d6dc106d1 Don't use _cursor_to_dict in find_next_generated_user_id_localpart 2018-03-26 12:02:44 +01:00
Erik Johnston
a052aa42e7 Linearize calls to _generate_user_id 2018-03-26 12:02:20 +01:00
Matthew Hodgson
8efe773ef1 fix typo 2018-03-23 21:38:42 +00:00
Matthew Hodgson
b7e7b52452 Merge pull request #3022 from matrix-org/matthew/noresource
404 correctly on missing paths via NoResource
2018-03-23 11:10:32 +00:00
Matthew Hodgson
8cbbfaefc1 404 correctly on missing paths via NoResource
fixes https://github.com/matrix-org/synapse/issues/2043 and https://github.com/matrix-org/synapse/issues/2029
2018-03-23 10:32:50 +00:00
Erik Johnston
84b5cc69f5 Merge pull request #3006 from matrix-org/erikj/state_iter
Use .iter* to avoid copies in StateHandler
2018-03-22 11:51:13 +00:00
Erik Johnston
fde8e8f09f Fix s/iteriterms/itervalues 2018-03-22 11:42:16 +00:00
Erik Johnston
eb9fc021e3 Merge branch 'release-v0.27.0' of github.com:matrix-org/synapse into develop 2018-03-22 10:19:53 +00:00
Erik Johnston
1c41b05c8c Add Cache-Control headers to all JSON APIs
It is especially important that sync requests don't get cached, as if a
sync returns the same token given then the client will call sync with
the same parameters again. If the previous response was cached it will
get reused, resulting in the client tight looping making the same
request and never making any progress.

In general, clients will expect to get up to date data when requesting
APIs, and so its safer to do a blanket no cache policy than only
whitelisting APIs that we know will break things if they get cached.
2018-03-21 17:46:26 +00:00
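The essence of the change, as a sketch against a twisted.web request (the exact header directives are an assumption):

```python
def set_no_cache_headers(request):
    # request: twisted.web.server.Request
    request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")
```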
Erik Johnston
5bdb57cb66 Merge pull request #3015 from matrix-org/erikj/simplejson_replication
Fix replication after switch to simplejson
2018-03-20 15:06:33 +00:00
Neil Johnson
f5aa027c2f Update CHANGES.rst
rearrange ordering of releases to match chronology
2018-03-20 15:06:22 +00:00
Neil Johnson
e66fbcbb02 fix merge conflicts 2018-03-20 14:25:31 +00:00
Erik Johnston
9aa5a0af51 Explicitly use simplejson 2018-03-20 09:58:13 +00:00
Erik Johnston
610accbb7f Fix replication after switch to simplejson
Turns out that simplejson serialises namedtuples as dictionaries rather
than tuples by default.
2018-03-19 16:12:48 +00:00
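The difference, demonstrated with simplejson's namedtuple_as_object flag (Row is an invented example type):

```python
from collections import namedtuple

import simplejson

Row = namedtuple("Row", ["stream_id", "data"])

print(simplejson.dumps(Row(1, "x")))
# '{"stream_id": 1, "data": "x"}' -- an object, simplejson's default

print(simplejson.dumps(Row(1, "x"), namedtuple_as_object=False))
# '[1, "x"]' -- the plain-tuple form expected here
```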
Neil Johnson
c384705ee8 Update __init__.py
bump version
2018-03-19 15:11:58 +00:00
Neil Johnson
1a3aa957ca Update CHANGES.rst 2018-03-19 15:11:00 +00:00
Erik Johnston
3f961e638a Merge pull request #3005 from matrix-org/erikj/fix_cache_size
Fix bug where state cache used lots of memory
2018-03-19 11:44:26 +00:00
Erik Johnston
fa72803490 Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-03-19 11:41:01 +00:00
Erik Johnston
9a0d783c11 Add comments 2018-03-19 11:35:53 +00:00
Matthew Hodgson
38f952b9bc spell out not to massively increase bcrypt rounds 2018-03-19 09:27:36 +00:00
Erik Johnston
a8ce159be4 Replace some ujson with simplejson to make it work 2018-03-16 00:27:09 +00:00
Erik Johnston
f609acc109 Merge branch 'hotfixes-v0.26.1' of github.com:matrix-org/synapse 2018-03-16 00:13:51 +00:00
Erik Johnston
0092cf38ae Newline 2018-03-16 00:11:58 +00:00
Erik Johnston
5b631ff41a Remove wrong comment 2018-03-16 00:07:08 +00:00
Erik Johnston
ba48755d56 Bump version and changelog 2018-03-15 23:57:26 +00:00
Erik Johnston
926ba76e23 Replace ujson with simplejson 2018-03-15 23:43:31 +00:00
Richard van der Hoff
5a6e54264d Make 'unexpected logging context' into warnings
I think we've now fixed enough of these that the rest can be logged at
warning.
2018-03-15 18:40:38 +00:00
Erik Johnston
9cf519769b Use .iter* to avoid copies in StateHandler 2018-03-15 17:50:26 +00:00
Erik Johnston
7c7706f42b Fix bug where state cache used lots of memory
The state cache bases its size on the sum of the size of entries. The
size of the entry is calculated once on insertion, so it is important
that the size of entries does not change.

The DictionaryCache modified the entries size, which caused the state
cache to incorrectly think it was smaller than it actually was.
2018-03-15 15:46:54 +00:00
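
An illustrative sketch (not the real DictionaryCache/state cache code) of
the invariant the fix restores: a size-bounded cache measures each entry
exactly once, at insert time, so mutating an entry afterwards silently
corrupts the cache's idea of its own size:

    class SizedCache(object):
        def __init__(self, max_size):
            self.max_size = max_size
            self.total_size = 0
            self.entries = {}  # key -> (value, size_at_insertion)

        def set(self, key, value):
            size = len(value)  # measured exactly once
            self.entries[key] = (value, size)
            self.total_size += size
            while self.total_size > self.max_size and self.entries:
                # eviction order is glossed over in this sketch
                _, (_, evicted_size) = self.entries.popitem()
                self.total_size -= evicted_size
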
NotAFile
2cc9f76bc3 replace old style error catching with 'as' keyword
This is both easier to read and compatible with python3 (not that that
matters)

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-03-15 16:11:17 +01:00
Erik Johnston
ddb00efc1d Bump version number 2018-03-15 14:41:30 +00:00
Erik Johnston
2a376579f3 Merge pull request #3003 from matrix-org/rav/fix_contributing
CONTRIBUTING.rst: fix CI info
2018-03-15 13:31:59 +00:00
Erik Johnston
873aea7168 Merge pull request #3002 from matrix-org/rav/purge_doc
Update purge_history_api.rst
2018-03-15 13:31:54 +00:00
Erik Johnston
bf7ee93cb6 Merge pull request #2997 from turt2live/patch-2
Doc: Make the event_creator routes regex a code block
2018-03-15 13:12:44 +00:00
Richard van der Hoff
5ea624b0f5 CONTRIBUTING.rst: fix CI info 2018-03-15 11:48:35 +00:00
Richard van der Hoff
0ad5125814 Update purge_history_api.rst
clarify that `purge_history` will not purge state
2018-03-15 11:05:42 +00:00
Richard van der Hoff
068c21ab10 CHANGES.rst: reword metric deprecation 2018-03-15 10:36:31 +00:00
Neil Johnson
b29d1abab6 Update CHANGES.rst 2018-03-15 10:34:15 +00:00
Neil Johnson
7367a4a823 Update CHANGES.rst 2018-03-15 10:33:52 +00:00
Neil Johnson
7d26591048 Update CHANGES.rst 2018-03-15 10:33:24 +00:00
Erik Johnston
2059b8573f Update CHANGES.rst 2018-03-14 18:11:21 +00:00
Erik Johnston
10fdcf561d Fix up rst formatting 2018-03-14 17:30:17 +00:00
Neil Johnson
5ccb57d3ff Update CHANGES.rst 2018-03-14 17:12:58 +00:00
Travis Ralston
c33c1ceddd OCD: Make the event_creator routes regex a code block
All the others are code blocks, so this one should be too (currently it is a blockquote).

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-03-14 11:09:08 -06:00
Neil Johnson
fb647164f2 Update CHANGES.rst 2018-03-14 16:20:36 +00:00
Neil Johnson
a492b17fe2 Update CHANGES.rst
clean formatting
2018-03-14 16:18:09 +00:00
Neil Johnson
cb2c7c0669 Update CHANGES.rst
WIP, need to add most recent PRs
2018-03-14 16:09:20 +00:00
Krombel
91ea0202e6 move handling of auto_join_rooms to RegisterHandler
Currently the handling of auto_join_rooms only works when a user
registers themselves via the public register API. Registrations via
registration_shared_secret and ModuleApi do not work.

This auto-joins the user in the registration handler, which enables the
auto-join feature for all 3 registration paths.

This is related to issue #2725

Signed-Off-by: Matthias Kesler <krombel@krombel.de>
2018-03-14 16:45:37 +01:00
Erik Johnston
3959754de3 Merge pull request #2995 from matrix-org/erikj/enable_membership_worker
Register membership/state servlets in event_creator
2018-03-14 14:37:33 +00:00
Erik Johnston
4f28018c83 Register membership/state servlets in event_creator 2018-03-14 14:30:06 +00:00
Erik Johnston
57db62e554 Merge pull request #2992 from matrix-org/erikj/implement_member_workre
Implement RoomMemberWorkerHandler
2018-03-14 14:29:33 +00:00
Erik Johnston
0011ede3b0 Fix imports 2018-03-14 14:19:23 +00:00
Erik Johnston
62ad701326 s/join/joined/ in notify_user_membership_change 2018-03-14 14:17:43 +00:00
Erik Johnston
3f0f06cb31 Split RoomMemberWorkerHandler to separate file 2018-03-14 11:41:45 +00:00
Erik Johnston
3e839e0548 Merge pull request #2989 from matrix-org/erikj/profile_cache_master
Only update remote profile cache on master
2018-03-14 09:42:27 +00:00
Erik Johnston
ebd0127999 Merge pull request #2988 from matrix-org/erikj/split_profile_store
Split up ProfileStore
2018-03-14 09:41:06 +00:00
Erik Johnston
cfe75a9fb6 Merge pull request #2991 from matrix-org/erikj/fixup_rm
_remote_join and co take a requester
2018-03-14 09:40:57 +00:00
Erik Johnston
f51565e023 Merge pull request #2993 from matrix-org/erikj/is_blocked
Add is_blocked to worker store
2018-03-14 09:39:18 +00:00
Matthew Hodgson
d144ed6ffb fix bug #2926 (loading all state for a given type from the DB if the state_key is None) (#2990)
Fixes a regression that had crept in: the caching layer supports requests for state filtered by type (but not by state_key), but the DB layer itself would interpret a missing state_key as a request to filter by a null state_key, rather than returning all state_keys.
2018-03-13 22:36:04 +00:00
Erik Johnston
a08726fc42 Add is_blocked to worker store 2018-03-13 18:28:44 +00:00
Erik Johnston
b27320b550 Implement RoomMemberWorkerHandler 2018-03-13 18:26:00 +00:00
Erik Johnston
350331d466 _remote_join and co take a requester 2018-03-13 17:50:39 +00:00
Erik Johnston
1a69c6d590 Merge pull request #2987 from matrix-org/erikj/split_room_member_handler
Split RoomMemberHandler into base and master class
2018-03-13 17:40:00 +00:00
Erik Johnston
df8ff682a7 Only update remote profile cache on master 2018-03-13 17:38:21 +00:00
Erik Johnston
3518d0ea8f Split up ProfileStore 2018-03-13 17:36:50 +00:00
Erik Johnston
d45a114824 Raise, don't return, exception 2018-03-13 17:24:34 +00:00
Erik Johnston
6dbebef141 Add missing param to docstrings 2018-03-13 17:15:32 +00:00
Erik Johnston
16adb11cc0 Correct import order 2018-03-13 16:57:07 +00:00
Erik Johnston
82f16faa78 Move user_*_room distributor stuff to master class
I added yields when calling user_left_room, but they shouldn't matter on
the master process as they always return None anyway.
2018-03-13 16:38:15 +00:00
Erik Johnston
b78717b87b Split RoomMemberHandler into base and master class
The intention here is to split the class into the bits that can be done
on workers and the bits that have to be done on the master.

In future there will also be a class that can be run on the worker,
which will delegate work to the master when necessary.
2018-03-13 16:37:41 +00:00
Erik Johnston
95cb401ae0 Merge pull request #2978 from matrix-org/erikj/refactor_replication_layer
Remove ReplicationLayer and use Client/Server directly
2018-03-13 15:45:08 +00:00
Erik Johnston
5d8476d8ff Merge pull request #2981 from matrix-org/erikj/factor_remote_leave
Factor out _remote_reject_invite in RoomMember
2018-03-13 15:44:56 +00:00
Jonas Platte
47ce527f45 Add room_id to the response of rooms/{roomId}/join
Fixes #2349
2018-03-13 14:48:12 +01:00
Erik Johnston
56e709857c Merge pull request #2979 from matrix-org/erikj/no_handlers
Don't build handlers on workers unnecessarily
2018-03-13 13:46:38 +00:00
Erik Johnston
cb9f8e527c s/replication_client/federation_client/ 2018-03-13 13:26:52 +00:00
Erik Johnston
cea462e285 s/replication_server/federation_server/ 2018-03-13 13:22:21 +00:00
Erik Johnston
bf8e97bd3c Merge branch 'develop' of github.com:matrix-org/synapse into erikj/factor_remote_leave 2018-03-13 13:17:08 +00:00
Erik Johnston
ea3442c15c Add docstring 2018-03-13 13:16:21 +00:00
Erik Johnston
16469a4f15 Merge pull request #2980 from matrix-org/erikj/rm_priv
Make RoomMemberHandler functions private that can be
2018-03-13 13:11:11 +00:00
Erik Johnston
c82111a55f Merge pull request #2982 from matrix-org/erikj/fix_extra_users
extra_users is actually a list of UserIDs
2018-03-13 13:11:04 +00:00
Erik Johnston
da87791975 Merge pull request #2983 from matrix-org/erikj/rename_register_3pid
Refactor get_or_register_3pid_guest
2018-03-13 13:10:52 +00:00
Erik Johnston
99e9b4f26c Merge pull request #2984 from matrix-org/erikj/fix_rest_regeix
RoomMembershipRestServlet doesn't handle /forget
2018-03-13 13:10:44 +00:00
Erik Johnston
f5160d4a3e RoomMembershipRestServlet doesn't handle /forget
Due to the order in which we register the REST handlers, `/forget` was
in fact handled by the correct handler anyway.
2018-03-13 12:12:55 +00:00
Erik Johnston
8b3573a8b2 Refactor get_or_register_3pid_guest 2018-03-13 12:08:58 +00:00
Richard van der Hoff
299fd740c7 Merge pull request #2975 from matrix-org/rav/measure_persist_events
Add Measure block for persist_events
2018-03-13 12:06:25 +00:00
Erik Johnston
9a2d9b4789 Merge pull request #2977 from matrix-org/erikj/replication_move_props
Move property setting from ReplicationLayer to base classes
2018-03-13 11:45:25 +00:00
Erik Johnston
141c343e03 Merge pull request #2976 from matrix-org/erikj/replication_registry
Split out edu/query registration to a separate class
2018-03-13 11:45:08 +00:00
Erik Johnston
f43b6d6d9b Fix docstring types 2018-03-13 11:29:35 +00:00
Erik Johnston
0f942f68c1 Factor out _remote_reject_invite in RoomMember 2018-03-13 11:22:45 +00:00
Erik Johnston
d0fcc48f9d extra_users is actually a list of UserIDs 2018-03-13 11:20:06 +00:00
Erik Johnston
31becf4ac3 Make functions private that can be 2018-03-13 11:15:16 +00:00
Erik Johnston
d023ecb810 Don't build handlers on workers unnecessarily 2018-03-13 11:08:10 +00:00
Erik Johnston
ea7b3c4b1b Remove unused ReplicationLayer 2018-03-13 11:00:04 +00:00
Erik Johnston
6ea27fafad Fix tests 2018-03-13 10:55:47 +00:00
Erik Johnston
265b993b8a Split replication layer into two 2018-03-13 10:55:47 +00:00
Erik Johnston
e05bf34117 Move property setting from ReplicationLayer to FederationBase 2018-03-13 10:51:30 +00:00
Erik Johnston
631a73f7ef Fix tests 2018-03-13 10:39:19 +00:00
Erik Johnston
c3f79c9da5 Split out edu/query registration to a separate class 2018-03-13 10:24:27 +00:00
Richard van der Hoff
889a2a853a Add Measure block for persist_events
This seems like a useful thing to measure.
2018-03-13 10:01:42 +00:00
Richard van der Hoff
d65ceb4b48 Merge pull request #2962 from matrix-org/rav/purge_history_txns
Add transactional API to history purge
2018-03-12 16:32:18 +00:00
Richard van der Hoff
e48c7aac4d Add transactional API to history purge
Make the purge request return quickly, and allow scripts to poll for updates.
2018-03-12 16:22:55 +00:00
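
A hedged sketch of how a script might drive such a transactional purge
API (the endpoint paths and field names here are illustrative; consult
the purge_history_api docs for the real ones):

    import time
    import requests

    BASE = "https://hs.example.com/_matrix/client/r0/admin"
    HEADERS = {"Authorization": "Bearer <admin_access_token>"}

    # Kick off the purge; the request returns quickly with an id ...
    resp = requests.post(
        BASE + "/purge_history/!room:example.com",
        headers=HEADERS,
        json={"purge_up_to_ts": 1514764800000},
    )
    purge_id = resp.json()["purge_id"]

    # ... which can then be polled for completion.
    while True:
        status = requests.get(
            BASE + "/purge_history_status/" + purge_id, headers=HEADERS
        ).json()["status"]
        if status != "active":
            break
        time.sleep(10)
    print("purge finished with status:", status)
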
Richard van der Hoff
1708412f56 Return an error when doing two purges on a room
Queuing up purges doesn't sound like a good thing.
2018-03-12 16:22:54 +00:00
Richard van der Hoff
b984dd0b73 Merge pull request #2961 from matrix-org/rav/run_in_background
Factor run_in_background out from preserve_fn
2018-03-12 16:19:13 +00:00
Richard van der Hoff
ba1d08bc4b Merge pull request #2965 from matrix-org/rav/request_logging
Add a metric which increments when a request is received
2018-03-12 09:45:33 +00:00
Richard van der Hoff
58dd148c4f Add some docstrings to help figure this out 2018-03-09 18:05:41 +00:00
Richard van der Hoff
88541f9009 Add a metric which increments when a request is received
It's useful to know when there are peaks in incoming requests - which isn't
quite the same as there being peaks in outgoing responses, due to the time
taken to handle requests.
2018-03-09 16:30:26 +00:00
Richard van der Hoff
dbe80a286b refactor JsonResource
rephrase the OPTIONS and unrecognised request handling so that they look
similar to the common flow.
2018-03-09 16:22:16 +00:00
Richard van der Hoff
20f40348d4 Factor run_in_background out from preserve_fn
It annoys me that we create temporary function objects when there's really no
need for it. Let's factor the gubbins out of preserve_fn and start using it.
2018-03-08 11:50:11 +00:00
Erik Johnston
735fd8719a Merge pull request #2944 from matrix-org/erikj/fix_sync_race
Fix race in sync when joining room
2018-03-07 17:42:26 +00:00
Erik Johnston
a56d54dcb7 Fix up log message 2018-03-07 11:55:31 +00:00
Erik Johnston
02a1296ad6 Fix typo 2018-03-07 11:55:31 +00:00
Erik Johnston
8cb44da4aa Fix race in sync when joining room
The race happens when the user joins a room at the same time as doing a
sync. We fetch the current token and then get the rooms the user is in.
If the join happens after the current token, but before we get the
rooms, we end up sending down a partial room entry in the sync.

This is fixed by looking at the stream ordering of the membership
returned by get_rooms_for_user, and handling the case when that stream
ordering is after the current token.
2018-03-07 11:55:31 +00:00
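
A sketch of the fix in miniature (names are illustrative, not Synapse's
actual API):

    def rooms_for_sync(joined_rooms, current_token):
        for room in joined_rooms:
            if room.membership_stream_ordering > current_token.stream:
                # The join raced with this sync: skip it now, and let it
                # be included when the client syncs with the new token.
                continue
            yield room
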
Richard van der Hoff
8ffaacbee3 Merge pull request #2949 from krombel/use_bcrypt_checkpw
use bcrypt.checkpw
2018-03-06 11:56:06 +00:00
Richard van der Hoff
b2932107bb Merge pull request #2946 from matrix-org/rav/timestamp_to_purge
Implement purge_history by timestamp
2018-03-06 11:20:23 +00:00
Erik Johnston
7aed50a038 Merge pull request #2948 from matrix-org/erikj/kill_as_sync
Remove ability for AS users to call /events and /sync
2018-03-06 11:10:09 +00:00
Erik Johnston
b6c4b851f1 Merge pull request #2947 from matrix-org/erikj/split_directory_store
Split Directory store
2018-03-05 18:17:32 +00:00
Krombel
ed9b5eced4 use bcrypt.checkpw
checkpw was introduced in bcrypt 3.1.0 (already 2 years ago). This
makes use of it, along with any enhancements it brings.

Signed-Off-by: Matthias Kesler <krombel@krombel.de>
2018-03-05 18:02:59 +01:00
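
What the switch looks like in practice (bcrypt.hashpw and bcrypt.checkpw
are the real library API):

    import bcrypt

    hashed = bcrypt.hashpw(b"s3cret", bcrypt.gensalt())

    # Old style: re-hash with the stored hash as salt and compare.
    assert bcrypt.hashpw(b"s3cret", hashed) == hashed

    # New style, available since bcrypt 3.1.0:
    assert bcrypt.checkpw(b"s3cret", hashed)
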
Erik Johnston
d4ffe61d4f Remove ability for AS users to call /events and /sync
This functionality has been deprecated for a while, as well as being
broken for a while. Instead of fixing it, let's just remove it entirely.

See: https://github.com/matrix-org/matrix-doc/issues/1144
2018-03-05 15:44:46 +00:00
Erik Johnston
69ce365b79 Fix cache invalidation on deletion 2018-03-05 15:29:03 +00:00
Erik Johnston
2e223163ff Split Directory store 2018-03-05 15:11:30 +00:00
Richard van der Hoff
f8bfcd7e0d Provide a means to pass a timestamp to purge_history 2018-03-05 14:37:23 +00:00
Richard van der Hoff
d032785aa7 Merge pull request #2943 from matrix-org/rav/fix_find_first_stream_ordering_after_ts
Test and fix find_first_stream_ordering_after_ts
2018-03-05 12:26:14 +00:00
Richard van der Hoff
2c911d75e8 Fix comment typo 2018-03-05 12:24:49 +00:00
Richard van der Hoff
c818fcab11 Test and fix find_first_stream_ordering_after_ts
It seemed to suffer from a bunch of off-by-one errors.
2018-03-05 12:04:02 +00:00
Richard van der Hoff
06a14876e5 Add find_first_stream_ordering_after_ts
Expose this as a public function which can be called outside a txn
2018-03-05 11:53:39 +00:00
Erik Johnston
42174946f8 Merge pull request #2934 from matrix-org/erikj/cache_fix
Fix bug with delayed cache invalidation stream
2018-03-05 11:33:17 +00:00
Erik Johnston
f394f5574d Merge pull request #2929 from matrix-org/erikj/split_regististration_store
Split registration store
2018-03-05 11:33:07 +00:00
dklug
af7ed8e1ef Return 401 for invalid access_token on logout
Signed-off-by: Duncan Klug <dklug@ucmerced.edu>
2018-03-02 22:01:27 -08:00
Erik Johnston
efb79820b4 Fix bug with delayed cache invalidation stream
We poked the notifier before updating the current token for the cache
invalidation stream. This meant that sometimes the update wouldn't be
sent until the next time a cache was invalidated.
2018-03-02 14:45:15 +00:00
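
The ordering matters; a minimal illustration (names invented for the
example):

    def on_cache_invalidated(stream, notifier, new_token):
        stream.current_token = new_token  # advance the token first ...
        notifier.notify()                 # ... then wake the listeners

    # Done the other way round, a listener can wake, read the old token,
    # conclude there is nothing new, and sleep until the *next*
    # invalidation.
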
Erik Johnston
fafa3e7114 Split registration store 2018-03-02 13:48:27 +00:00
Erik Johnston
6619f047ad Merge pull request #2933 from matrix-org/erikj/3pid_yield
Add missing yield during 3pid signature checks
2018-03-02 11:31:50 +00:00
Erik Johnston
d960d23830 Add missing yield during 3pid signature checks 2018-03-02 11:03:18 +00:00
Erik Johnston
1a6c7cdf54 Merge pull request #2928 from matrix-org/erikj/read_marker_caches
Fix typo in getting replication account data processing
2018-03-01 17:56:14 +00:00
Erik Johnston
89b7232ff8 Fix typo in getting replication account data processing 2018-03-01 17:50:30 +00:00
Erik Johnston
1773df0632 Merge pull request #2925 from matrix-org/erikj/split_sig_fed
Split out SignatureStore and EventFederationStore
2018-03-01 17:32:58 +00:00
Erik Johnston
65cf454fd1 Remove unused DataStore 2018-03-01 17:27:53 +00:00
Erik Johnston
9e08a93a7b Merge pull request #2927 from matrix-org/erikj/read_marker_caches
Improve caching for read_marker API
2018-03-01 17:12:34 +00:00
Erik Johnston
4b44f05f19 Fewer lies are better 2018-03-01 17:08:17 +00:00
Erik Johnston
a83c514d1f Improve caching for read_marker API
We add a new storage function to get a particular type of room account
data. This allows us to prefill the cache when updating that account
data.
2018-03-01 17:08:17 +00:00
Erik Johnston
33bebb63f3 Add some caches to help read marker API 2018-03-01 17:08:17 +00:00
Erik Johnston
483e8104db Merge pull request #2926 from matrix-org/erikj/member_handler_move
Move RoomMemberHandler out of Handlers
2018-03-01 17:01:25 +00:00
Erik Johnston
2ad4d5b5bb Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_sig_fed 2018-03-01 16:59:39 +00:00
Erik Johnston
92789199a9 Merge pull request #2924 from matrix-org/erikj/split_stream_store
Split out stream store
2018-03-01 16:56:37 +00:00
Erik Johnston
529c026ac1 Move back to hs.is_mine 2018-03-01 16:49:12 +00:00
Erik Johnston
7c371834cc Stub out broken function only used for cache 2018-03-01 16:44:13 +00:00
Erik Johnston
64346be26d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_stream_store 2018-03-01 16:26:42 +00:00
Erik Johnston
22518e2833 Merge pull request #2923 from matrix-org/erikj/stream_ago_worker
Calculate stream_ordering_month_ago correctly on workers
2018-03-01 16:23:54 +00:00
Erik Johnston
884b26ae41 Remove unused variables 2018-03-01 16:23:48 +00:00
Erik Johnston
1b2af11650 Document abstract class and method better 2018-03-01 16:20:57 +00:00
Erik Johnston
872ff95ed4 Default stream_ordering_*_ago to None 2018-03-01 16:00:05 +00:00
Erik Johnston
22004b524e Fix comment typo 2018-03-01 15:59:40 +00:00
Erik Johnston
4bc4236faf Merge pull request #2922 from matrix-org/erikj/split_room_store
Split up RoomStore
2018-03-01 15:55:01 +00:00
Richard van der Hoff
2324124a72 Merge pull request #2921 from matrix-org/rav/unyielding_make_deferred_yieldable
Rewrite make_deferred_yieldable avoiding inlineCallbacks
2018-03-01 15:39:10 +00:00
Erik Johnston
f793bc3877 Split out stream store 2018-03-01 15:13:08 +00:00
Erik Johnston
784f036306 Move RoomMemberHandler out of Handlers 2018-03-01 14:36:50 +00:00
Erik Johnston
6411f725be Calculate stream_ordering_month_ago correctly on workers 2018-03-01 14:20:53 +00:00
Erik Johnston
a9a2d66cdd Split out SignatureStore and EventFederationStore 2018-03-01 14:17:53 +00:00
Erik Johnston
0c8ba5dd1c Split up RoomStore 2018-03-01 14:01:19 +00:00
Richard van der Hoff
3a75de923b Rewrite make_deferred_yieldable avoiding inlineCallbacks
... because (a) it's actually simpler (b) it might be marginally more
performant?
2018-03-01 12:40:05 +00:00
Erik Johnston
17445e6701 Merge pull request #2920 from matrix-org/erikj/retry_send_event
Make repl send_event idempotent and retry on timeouts
2018-03-01 12:14:21 +00:00
Erik Johnston
126b9bf96f Log in the correct places 2018-03-01 12:05:33 +00:00
Erik Johnston
157298f986 Don't do preserve_fn for every request 2018-03-01 11:59:45 +00:00
Erik Johnston
89f90d808a Add some logging 2018-03-01 11:59:16 +00:00
Erik Johnston
8ded8ba2c7 Make repl send_event idempotent and retry on timeouts
If we treated timeouts as failures on the worker we would attempt to
clean up e.g. push actions while the master might still process the
event.
2018-03-01 11:20:34 +00:00
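
A hedged sketch of the retry behaviour (illustrative names; the real
code rides on Synapse's replication HTTP machinery):

    import time

    def send_event_with_retries(repl_client, event, max_attempts=5):
        for attempt in range(max_attempts):
            try:
                # Safe to repeat: the master deduplicates on event id.
                return repl_client.send_event(event)
            except TimeoutError:
                # The master may still be processing the event, so do no
                # clean-up here; just back off and resend the same request.
                time.sleep(2 ** attempt)
        raise RuntimeError("giving up on send_event after repeated timeouts")
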
Erik Johnston
182ff17c83 Merge pull request #2875 from matrix-org/erikj/push_actions_worker
Calculate push actions on worker
2018-03-01 11:20:07 +00:00
Erik Johnston
f381d63813 Check event auth on the worker 2018-03-01 10:18:37 +00:00
Erik Johnston
6b8604239f Correctly send ratelimit and extra_users params 2018-03-01 10:08:39 +00:00
Erik Johnston
f756f961ea Fixup comments 2018-03-01 10:05:27 +00:00
Erik Johnston
28e973ac11 Calculate push actions on worker 2018-02-28 18:02:30 +00:00
Erik Johnston
9cb3a190bc Merge pull request #2913 from matrix-org/erikj/store_push
Move storage functions for push calculations
2018-02-28 18:02:03 +00:00
Erik Johnston
493e25d554 Move storage functions for push calculations
This will allow push actions for an event to be calculated on workers.
2018-02-27 13:58:16 +00:00
Erik Johnston
3594dbc6dc Merge pull request #2904 from matrix-org/erikj/receipt_cache_invalidation
Fix missing invalidations for receipt storage
2018-02-27 11:34:26 +00:00
Erik Johnston
2311189ee4 Merge pull request #2903 from matrix-org/erikj/split_roommember_store
Split out RoomMemberStore
2018-02-27 11:32:10 +00:00
Erik Johnston
c57607874c Merge pull request #2901 from matrix-org/erikj/split_as_stores
Split AS stores
2018-02-27 10:07:07 +00:00
Erik Johnston
8956f0147a Add comment 2018-02-27 10:06:51 +00:00
Erik Johnston
e5b4a208ce Merge pull request #2892 from matrix-org/erikj/batch_inserts_push_actions
Batch inserts into event_push_actions_staging
2018-02-26 14:45:40 +00:00
Erik Johnston
73fe866847 Merge pull request #2894 from matrix-org/erikj/handle_unpersisted_events_push
Ensure all push actions are deleted from staging
2018-02-26 14:28:35 +00:00
Erik Johnston
45b5fe9122 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/handle_unpersisted_events_push 2018-02-26 13:49:24 +00:00
Erik Johnston
d62ce972f8 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_roommember_store 2018-02-23 11:46:24 +00:00
Erik Johnston
6ae9a3d2a6 Update copyright 2018-02-23 11:44:49 +00:00
Erik Johnston
2ec49826e8 Merge pull request #2900 from matrix-org/erikj/split_event_push_actions
Split out EventPushActionWorkerStore
2018-02-23 11:44:30 +00:00
Erik Johnston
a90c60912f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_event_push_actions 2018-02-23 11:26:31 +00:00
Erik Johnston
50e8657867 Merge pull request #2902 from matrix-org/erikj/split_events_store
Split out get_events and co into a worker store
2018-02-23 11:23:52 +00:00
Erik Johnston
1cf9e071dd Merge pull request #2899 from matrix-org/erikj/split_pushers
Split PusherStore
2018-02-23 11:23:35 +00:00
Erik Johnston
d0957753bf Merge pull request #2898 from matrix-org/erikj/split_push_rules_store
Split PushRulesStore
2018-02-23 11:23:23 +00:00
Erik Johnston
199dba6c15 Merge pull request #2897 from matrix-org/erikj/split_account_data
Split AccountDataStore and TagStore
2018-02-23 11:23:11 +00:00
Erik Johnston
70349872c2 Update copyright 2018-02-23 11:14:35 +00:00
Erik Johnston
eba93b05bf Split EventsWorkerStore into separate file 2018-02-23 11:01:21 +00:00
Erik Johnston
bf8a36e080 Update copyright 2018-02-23 10:52:10 +00:00
Erik Johnston
5d0f665848 Remove redundant clock 2018-02-23 10:49:58 +00:00
Erik Johnston
3bd760628b _event_persist_queue shouldn't be in worker store 2018-02-23 10:49:18 +00:00
Erik Johnston
eb9b5eec81 Update copyright 2018-02-23 10:42:39 +00:00
Erik Johnston
c2ecfcc3a4 Update copyright 2018-02-23 10:41:34 +00:00
Erik Johnston
7e6cf89dc2 Update copyright 2018-02-23 10:39:19 +00:00
Erik Johnston
26d37f7a63 Update copyright 2018-02-23 10:33:55 +00:00
Erik Johnston
bb73f55fc6 Use absolute imports 2018-02-23 10:31:16 +00:00
Erik Johnston
faeb369f15 Fix missing invalidations for receipt storage 2018-02-21 15:19:54 +00:00
Erik Johnston
3dec9c66b3 Split out RoomMemberStore 2018-02-21 12:07:26 +00:00
Erik Johnston
46244b2759 Split AS stores 2018-02-21 11:49:34 +00:00
Erik Johnston
27b094f382 Split out get_events and co into a worker store 2018-02-21 11:41:48 +00:00
Erik Johnston
573712da6b Update comments 2018-02-21 11:29:49 +00:00
Erik Johnston
c96d547f4d Actually use new param 2018-02-21 11:03:42 +00:00
Erik Johnston
d15d237b0d Split out EventPushActionWorkerStore 2018-02-21 11:01:13 +00:00
Erik Johnston
27939cbb0e Merge pull request #2893 from matrix-org/erikj/delete_from_staging_fed
Delete from push_actions_staging in federation too
2018-02-21 11:00:06 +00:00
Erik Johnston
6f72765371 Split PusherStore 2018-02-21 10:54:21 +00:00
Erik Johnston
cbaad969f9 Split PushRulesStore 2018-02-21 10:43:31 +00:00
Erik Johnston
ca9b9d9703 Split AccountDataStore and TagStore 2018-02-21 10:15:04 +00:00
Erik Johnston
a2b25de68d Merge pull request #2896 from matrix-org/erikj/split_receipts_store
Split ReceiptsStore
2018-02-20 18:08:20 +00:00
Erik Johnston
8fbb4d0d19 Raise exception in abstract method 2018-02-20 17:59:23 +00:00
Erik Johnston
95e4cffd85 Fix comment 2018-02-20 17:58:40 +00:00
Erik Johnston
e316bbb4c0 Use abstract base class to access stream IDs 2018-02-20 17:43:57 +00:00
Erik Johnston
f5ac4dc2d4 Split ReceiptsStore 2018-02-20 16:28:28 +00:00
Erik Johnston
25634ed152 Fix test 2018-02-20 12:40:44 +00:00
Erik Johnston
24087bffa9 Ensure all push actions are deleted from staging 2018-02-20 12:34:31 +00:00
Erik Johnston
ad0ccf15ea Refactor _set_push_actions_for_event_and_users_txn to use events_and_contexts 2018-02-20 12:34:28 +00:00
Erik Johnston
e440e28456 Fix unit tests 2018-02-20 11:41:40 +00:00
Erik Johnston
d874d4f2d7 Delete from push_actions_staging in federation too 2018-02-20 11:37:52 +00:00
Erik Johnston
6ff8c87484 Batch inserts into event_push_actions_staging 2018-02-20 11:33:07 +00:00
Erik Johnston
324c3e9399 Merge pull request #2868 from matrix-org/erikj/refactor_media_storage
Make store_file use store_into_file
2018-02-20 11:31:24 +00:00
Richard van der Hoff
3fc33bae8b Merge pull request #2888 from bachp/pynacl-1.2.1
Update pynacl dependency to 1.2.1 or higher
2018-02-19 14:37:51 +00:00
Pascal Bach
3acd616979 Update pynacl dependency to 1.2.1 or higher
Signed-off-by: Pascal Bach <pascal.bach@nextrem.ch>
2018-02-19 10:45:22 +01:00
Travis Ralston
923d9300ed Add a blurb explaining the main synapse worker
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-02-17 21:53:46 -07:00
Richard van der Hoff
a71a080cd2 Merge pull request #2882 from matrix-org/rav/fix_purge_tablescan
(Really) fix tablescan of event_push_actions on purge
2018-02-16 15:18:47 +00:00
Richard van der Hoff
d1a3325f99 (Really) fix tablescan of event_push_actions on purge
commit 278d21b5 added new code to avoid the tablescan, but didn't remove the
old :/
2018-02-16 14:02:31 +00:00
Erik Johnston
bf5ef10a93 Merge pull request #2874 from matrix-org/erikj/creator_push_actions
Store push actions in DB staging area instead of context
2018-02-16 11:52:51 +00:00
Erik Johnston
6af025d3c4 Fix typo of double is_highlight 2018-02-16 11:35:31 +00:00
Erik Johnston
012e8e142a Comments 2018-02-16 11:35:01 +00:00
Erik Johnston
3a061cae26 Fix unit test 2018-02-15 16:31:59 +00:00
Erik Johnston
b96278d6fe Ensure that we delete staging push actions on errors 2018-02-15 15:47:06 +00:00
Erik Johnston
4810f7effd Remove context.push_actions 2018-02-15 15:47:06 +00:00
Erik Johnston
c714c61853 Update event_push_actions table from staging table 2018-02-15 15:47:06 +00:00
Erik Johnston
acac21248c Store push actions in staging area 2018-02-15 15:47:04 +00:00
Erik Johnston
6ed9ff69c2 Merge pull request #2873 from matrix-org/erikj/event_creator_no_state
Don't serialize current state over replication
2018-02-15 14:09:42 +00:00
Erik Johnston
106906a65e Don't serialize current state over replication 2018-02-15 13:53:18 +00:00
Erik Johnston
5fb347fc41 Merge pull request #2872 from matrix-org/erikj/event_worker_dont_log
Don't log errors propagated from send_event
2018-02-15 12:31:49 +00:00
Erik Johnston
cd94728e93 Merge pull request #2871 from matrix-org/erikj/event_creator_state
Fix state group storage bug in workers
2018-02-15 11:34:01 +00:00
Erik Johnston
fd1601c596 Fix state group storage bug in workers
We needed to move `_count_state_group_hops_txn` to the
StateGroupWorkerStore.
2018-02-15 11:04:32 +00:00
Erik Johnston
ef344b10e5 Don't log errors propagated from send_event 2018-02-15 11:03:49 +00:00
Richard van der Hoff
b8d821aa68 Merge pull request #2867 from matrix-org/rav/rework_purge
purge_history cleanups
2018-02-15 09:49:07 +00:00
Erik Johnston
92c52df702 Make store_file use store_into_file 2018-02-14 17:55:18 +00:00
Richard van der Hoff
d28ec43e15 Merge pull request #2769 from matrix-org/matthew/hit_the_gin
switch back from GIST to GIN indexes
2018-02-14 16:59:03 +00:00
Richard van der Hoff
39bf47319f purge_history: fix sqlite syntax error
apparently sqlite insists on indexes being named
2018-02-14 16:42:19 +00:00
Richard van der Hoff
ac27f6a35e purge_history: handle sqlite asshattery
apparently creating a temporary table commits the transaction. because that's a
useful thing.
2018-02-14 16:41:12 +00:00
Richard van der Hoff
5978dccff0 remove overzealous exception handling 2018-02-14 15:54:09 +00:00
Richard van der Hoff
278d21b5e4 purge_history: fix index use
event_push_actions doesn't have an index on event_id, so we need to specify
room_id.
2018-02-14 15:44:51 +00:00
Richard van der Hoff
5fcbf1e07c Rework event purge to use a temporary table
... which should speed things up by reducing the amount of data being shuffled
across the connection
2018-02-14 11:02:22 +00:00
Erik Johnston
c0c9327fe0 Merge pull request #2854 from matrix-org/erikj/event_create_worker
Create a worker for event creation
2018-02-13 18:07:10 +00:00
Erik Johnston
059d3a6c8e Update docs 2018-02-13 17:53:56 +00:00
Richard van der Hoff
d627174da2 Fix log message in purge_history
(we don't just remove remote events)
2018-02-13 16:51:21 +00:00
Richard van der Hoff
ddb6a79b68 Merge branch 'matthew/gin_work_mem' into matthew/hit_the_gin 2018-02-13 16:45:36 +00:00
Richard van der Hoff
0b27ae8dc3 move search reindex to schema 47
We're up to schema v47 on develop now, so this will have to go in there to have
an effect.

This might cause an error if somebody has already run it in the v46 guise, and
runs it again in the v47 guise, because it will cause a duplicate entry in the
background_updates table. On the other hand, the entry is removed once it is
complete, and it is unlikely that anyone other than matrix.org has run it on
v46. The update itself is harmless to re-run because it deliberately copes with
the index already existing.
2018-02-13 16:44:46 +00:00
Richard van der Hoff
4a6d551704 GIN reindex: Fix syntax errors, improve exception handling 2018-02-13 16:44:46 +00:00
Richard van der Hoff
bfdf7b9237 Merge pull request #2864 from matrix-org/rav/persist_event_caching
Use StateResolutionHandler to resolve state in persist_events
2018-02-13 14:45:57 +00:00
Richard van der Hoff
630caf8a70 style nit 2018-02-13 14:29:22 +00:00
Richard van der Hoff
8fd1a32456 Fix typos in purge api & doc
* It's supposed to be purge_local_events, not ..._history
* Fix the doc to have valid json
2018-02-13 13:09:39 +00:00
Richard van der Hoff
4d09366656 Merge pull request #2695 from okurz/feature/allow_recent_pysaml
Allow use of higher versions of saml2
2018-02-13 12:24:08 +00:00
Richard van der Hoff
a9b712e9dc Merge branch 'develop' into matthew/gin_work_mem 2018-02-13 12:16:01 +00:00
Erik Johnston
32c7b8e48b Update workers docs to include http port 2018-02-12 17:21:23 +00:00
Erik Johnston
1026690cd2 Merge pull request #2857 from matrix-org/erikj/upload_store
Tell storage providers about new file so they can upload
2018-02-12 13:52:58 +00:00
Richard van der Hoff
10b34dbb9a Merge pull request #2858 from matrix-org/rav/purge_updates
delete_local_events for purge_room_history
2018-02-09 14:11:00 +00:00
Richard van der Hoff
39a6b35496 purge: move room_depth update to end
... to avoid locking the table for too long
2018-02-09 13:07:41 +00:00
Richard van der Hoff
74fcbf741b delete_local_events for purge_history
Add a flag which makes the purger delete local events
2018-02-09 13:07:41 +00:00
Richard van der Hoff
e571aef06d purge: Move cache invalidation to more appropriate place
it was a bit of a non-sequitur there
2018-02-09 13:07:41 +00:00
Richard van der Hoff
61ffaa8137 bump purge logging to info
this thing takes ages and the only sign of any progress is the logs, so having
some logs is useful.
2018-02-09 13:07:41 +00:00
Richard van der Hoff
671540dccf rename delete_old_state -> purge_history
(because it deletes more than state)
2018-02-09 13:07:41 +00:00
Erik Johnston
5fa571a91b Tell storage providers about new file so they can upload 2018-02-07 13:35:08 +00:00
Erik Johnston
053255f36c Merge pull request #2856 from matrix-org/erikj/remove_ratelimit
Remove pointless ratelimit check
2018-02-07 11:12:14 +00:00
Erik Johnston
f133228cb3 Add note in docs/workers.rst 2018-02-07 10:34:31 +00:00
Erik Johnston
50fe92cd26 Move presence handling into handle_new_client_event
As we want to have it run on the main synapse instance
2018-02-07 10:34:09 +00:00
Erik Johnston
8ec2e638be Add event_creator worker 2018-02-07 10:32:32 +00:00
Erik Johnston
24dd73028a Add replication http endpoint for event sending 2018-02-07 10:32:32 +00:00
Erik Johnston
e3624fad5f Remove pointless ratelimit check
The intention was for the check to be called as early as possible in the
request, but it was actually called just before the main ratelimit
check, so it was fairly pointless.
2018-02-07 10:30:25 +00:00
Erik Johnston
617199d73d Merge pull request #2847 from matrix-org/erikj/separate_event_creation
Split event creation into a separate handler
2018-02-06 17:01:17 +00:00
Erik Johnston
3e1e69ccaf Update copyright 2018-02-06 16:40:38 +00:00
Erik Johnston
770b2252ca s/_create_new_client_event/create_new_client_event/ 2018-02-06 16:40:30 +00:00
Erik Johnston
3d33eef6fc Store state groups separately from events (#2784)
* Split state group persist into separate storage func

* Add per database engine code for state group id gen

* Move store_state_group to StateReadStore

This allows other workers to use it, and so resolve state.

* Hook up store_state_group

* Fix tests

* Rename _store_mult_state_groups_txn

* Rename StateGroupReadStore

* Remove redundant _have_persisted_state_group_txn

* Update comments

* Comment compute_event_context

* Set start val for state_group_id_seq

... otherwise we try to recreate old state groups

* Update comments

* Don't store state for outliers

* Update comment

* Update docstring as state groups are ints
2018-02-06 14:31:24 +00:00
Richard van der Hoff
b31bf0bb51 Merge pull request #2849 from matrix-org/rav/clean_up_state_delta
Remove redundant return value from _calculate_state_delta
2018-02-05 17:42:20 +01:00
Richard van der Hoff
9a304ef2b0 Merge pull request #2848 from matrix-org/rav/refactor_search_insert
Factor out common code for search insert
2018-02-05 17:42:09 +01:00
Richard van der Hoff
ebfe64e3d6 Use StateResolutionHandler to resolve state in persist events
... and thus benefit (hopefully) from its cache.
2018-02-05 16:23:26 +00:00
Richard van der Hoff
225dc3b4cb Flatten _get_new_state_after_events
rejig the if statements to simplify the logic and reduce indentation
2018-02-05 16:23:25 +00:00
Richard van der Hoff
9fcbbe8e7d Check that events being persisted have state_group 2018-02-05 16:23:25 +00:00
Richard van der Hoff
447aed42d2 Add event_map param to resolve_state_groups 2018-02-05 16:23:25 +00:00
Richard van der Hoff
ee6fb4cf85 Remove redundant return value from _calculate_state_delta
we already have the state from _get_new_state_after_events, so returning it
from _calculate_state_delta is just confusing.
2018-02-05 16:23:20 +00:00
Richard van der Hoff
3c7b480ba3 Factor out common code for search insert
we can reuse the same code as is used for event insert, for doing the
background index population.
2018-02-05 16:12:14 +00:00
Erik Johnston
25c0a020f4 Updates tests 2018-02-05 16:01:48 +00:00
Erik Johnston
3fa362502c Update places where we create events 2018-02-05 16:01:48 +00:00
Erik Johnston
5ff3d23564 Split event creation into a separate handler 2018-02-05 16:01:48 +00:00
Richard van der Hoff
c46e75d3d8 Move store_event_search_txn to SearchStore
... as a precursor to making event storing and doing the bg update share some
code.
2018-02-05 15:43:22 +00:00
Richard van der Hoff
db91e72ade Merge pull request #2844 from matrix-org/rav/evicted_metrics
monitoring metrics for number of cache evictions
2018-02-05 16:34:36 +01:00
Richard van der Hoff
bc496df192 report metrics on number of cache evictions 2018-02-05 15:34:01 +00:00
Erik Johnston
a1beca0e25 Fix broken unit test for media storage 2018-02-05 12:44:03 +00:00
Erik Johnston
b5049d2e5c Add .vscode to gitignore 2018-02-05 12:06:46 +00:00
Erik Johnston
1f881e0746 Merge pull request #2791 from matrix-org/erikj/media_storage_refactor
Ensure media is in local cache before thumbnailing
2018-02-05 11:28:52 +00:00
Richard van der Hoff
80b8a28100 Factor out common code for search insert
we can reuse the same code as is used for event insert, for doing the
background index population.
2018-02-04 00:23:06 +00:00
Richard van der Hoff
bd25f9cf36 Clean up work_mem handling
Add some comments and improve exception handling when twiddling work_mem for
the search update
2018-02-03 23:05:41 +00:00
Richard van der Hoff
4eeae7ad65 Move store_event_search_txn to SearchStore
... as a precursor to making event storing and doing the bg update share some
code.
2018-02-03 22:59:45 +00:00
Richard van der Hoff
bb9f0f3cdb Merge branch 'develop' into matthew/gin_work_mem 2018-02-03 22:40:28 +00:00
Richard van der Hoff
6b02fc80d1 Reinstate event_search_postgres_gist handler
People may have queued updates for this, so we can't just delete it.
2018-02-02 14:32:51 +00:00
Richard van der Hoff
9c9356512e Merge pull request #2845 from matrix-org/rav/urlcache_error_handling
Handle url_previews with no content-type
2018-02-02 15:27:52 +01:00
Richard van der Hoff
18eae413af Merge pull request #2842 from matrix-org/rav/state_resolution_handler
Factor out resolve_state_groups to a separate handler
2018-02-02 15:27:35 +01:00
Richard van der Hoff
78d6ddba86 Merge pull request #2841 from matrix-org/rav/refactor_calc_state_delta
factor _get_new_state_after_events out of _calculate_state_delta
2018-02-02 15:27:15 +01:00
Richard van der Hoff
9dcd667ac2 Merge pull request #2836 from matrix-org/rav/resolve_state_events_docstring
Docstring fixes
2018-02-02 15:26:49 +01:00
Richard van der Hoff
33cac3dc29 Merge pull request #2818 from turt2live/travis/admin-list-media
Add an admin route to get all the media in a room
2018-02-02 11:41:03 +01:00
Travis Ralston
6e87b34f7b Merge branch 'develop' into travis/admin-list-media 2018-02-01 18:05:47 -07:00
Richard van der Hoff
d5352cbba8 Handle url_previews with no content-type
avoid failing with an exception if the remote server doesn't give us a
Content-Type header.

Also, clean up the exception handling a bit.
2018-02-02 00:53:46 +00:00
Richard van der Hoff
14737ba495 doc arg types for _seperate 2018-02-01 12:41:34 +00:00
Richard van der Hoff
e15d4ea248 More docstring fixes
Fix a couple of errors in docstrings
2018-02-01 12:41:34 +00:00
Richard van der Hoff
a18828c129 Fix docstring for StateHandler.resolve_state_groups
The return type was a complete lie, so fix it
2018-02-01 12:41:34 +00:00
Richard van der Hoff
6da4c4d3bd Factor out resolve_state_groups to a separate handler
We extract the storage-independent bits of the state group resolution out to a
separate function, and stick it in a new handler, in preparation for its use
from the storage layer.
2018-02-01 12:40:04 +00:00
Richard van der Hoff
0cbda53819 Rename resolve_state_groups -> resolve_state_groups_for_events
(to make way for a method that actually just does the state group resolution)
2018-02-01 12:40:00 +00:00
Richard van der Hoff
77c0629ebc Merge pull request #2837 from matrix-org/rav/fix_quarantine_media
Fix sql error in quarantine_media
2018-02-01 11:45:58 +01:00
Travis Ralston
e16e45b1b4 pep8
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-31 15:30:38 -07:00
Richard van der Hoff
e1e4ec9f9d factor _get_new_state_after_events out of _calculate_state_delta
This reduces the scope of a bunch of variables
2018-01-31 21:32:09 +00:00
Richard van der Hoff
78e7e05188 Merge pull request #2838 from matrix-org/rav/fix_logging_on_dns_fail
Remove spurious log argument
2018-01-31 22:18:46 +01:00
Richard van der Hoff
ad48dfe73d Merge pull request #2835 from matrix-org/rav/remove_event_type_param
Remove unused "event_type" param on state.get_current_state_ids
2018-01-31 22:17:57 +01:00
Richard van der Hoff
518a74586c Merge pull request #2834 from matrix-org/rav/better_persist_event_exception_handling
Improve exception handling in persist_event
2018-01-31 22:17:38 +01:00
Richard van der Hoff
d1fe4db882 Merge pull request #2833 from matrix-org/rav/pusher_hacks
Logging and metrics for the http pusher
2018-01-31 22:16:59 +01:00
Richard van der Hoff
421d68ca8c Merge pull request #2817 from matrix-org/rav/http_conn_pool
Use a connection pool for the SimpleHttpClient
2018-01-31 22:14:22 +01:00
Richard van der Hoff
326189c25a Script to move remote media to another media store 2018-01-31 18:45:10 +00:00
Travis Ralston
3af53c183a Add admin api documentation for list media endpoint
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-31 08:15:59 -07:00
Travis Ralston
63c4383927 Documentation and naming
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-31 08:07:52 -07:00
Richard van der Hoff
af19f5e9aa Remove spurious log argument
... which would cause scary-looking and unhelpful errors in the log on DNS failure
2018-01-30 17:52:03 +00:00
Richard van der Hoff
773f0eed1e Fix sql error in quarantine_media 2018-01-30 15:02:51 +00:00
Richard van der Hoff
adfc0c9539 docstring for get_current_state_ids 2018-01-29 17:39:55 +00:00
Richard van der Hoff
d413a2ba98 Remove unused "event_type" param on state.get_current_state_ids
this param doesn't seem to be used, and is a bit pointless anyway because it
can easily be replicated by the caller. It is also horrible, because it changes
the return type of the method.
2018-01-29 17:06:57 +00:00
Richard van der Hoff
b387ee17b6 Improve exception handling in persist_event
1. use `deferred.errback()` instead of `deferred.errback(e)`, which means that
a Failure object will be constructed using the current exception state,
*including* its stack trace - so the stack trace is saved in the Failure,
leading to better exception reports.

2. Set `consumeErrors=True` on the ObservableDeferred, because we know that
there will always be at least one observer - which avoids a spurious "CRITICAL:
unhandled exception in Deferred" error in the logs
2018-01-29 17:05:33 +00:00
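
A small demo of point (1) (plain Twisted API, nothing Synapse-specific):

    from twisted.internet import defer

    d = defer.Deferred()
    d.addErrback(lambda f: f.printTraceback())

    try:
        raise ValueError("boom")
    except ValueError:
        # No argument: the Failure is built from the active exception,
        # stack trace included.
        d.errback()
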
Richard van der Hoff
03dd745fe2 Better logging when pushes fail 2018-01-29 15:49:06 +00:00
Richard van der Hoff
e051abd20b add appid/device_display_name to pusher logging 2018-01-29 15:04:16 +00:00
Richard van der Hoff
02ba118f81 Increase http conn pool size 2018-01-29 14:30:15 +00:00
Richard van der Hoff
4c65b98e4a Merge pull request #2831 from matrix-org/rav/fix-userdir-search-again
Fix SQL for user search
2018-01-27 22:58:24 +01:00
Richard van der Hoff
d1f3490e75 Add tests for user directory search 2018-01-27 17:21:57 +00:00
Richard van der Hoff
46022025ea Fix SQL for user search
fix some syntax errors for user search when search_all_users is enabled

fixes #2801, hopefully
2018-01-27 17:21:57 +00:00
Richard van der Hoff
2186d7c06e Merge pull request #2829 from matrix-org/rav/postgres_in_tests
Make it possible to run tests against postgres
2018-01-27 18:20:55 +01:00
Richard van der Hoff
88b9c5cbf0 Make it possible to run tests against postgres 2018-01-27 17:15:24 +00:00
Richard van der Hoff
d7eacc4f87 Create dbpool as normal in tests
... instead of creating our own special SQLiteMemoryDbPool, whose purpose was a
bit of a mystery.

For some reason this makes one of the tests run slightly slower, so bump the
sleep(). Sorry.
2018-01-27 17:15:15 +00:00
Richard van der Hoff
b178eca261 Run on_new_connection for unit tests
Configure the connectionpool used for unit tests to run the `on_new_connection`
function.
2018-01-27 17:06:04 +00:00
Richard van der Hoff
d8f90c4208 Merge pull request #2828 from matrix-org/rav/kill_memory_datastore
Remove unused/bitrotted MemoryDataStore
2018-01-27 17:57:02 +01:00
Richard van der Hoff
4b0f06e99c Merge pull request #2830 from matrix-org/rav/factor_out_get_conn
Factor out get_db_conn to HomeServer base class
2018-01-27 17:56:25 +01:00
Neil Johnson
e98f0f9112 Merge pull request #2827 from matrix-org/fix_server_500_on_public_rooms_call_when_no_rooms_exist
Fix server 500 on public rooms call when no rooms exist
2018-01-26 20:22:40 +00:00
Richard van der Hoff
25adde9a04 Factor out get_db_conn to HomeServer base class
This function is identical to all subclasses, so we may as well push it up to
the base class to reduce duplication (and make use of it in the tests)
2018-01-26 00:56:49 +00:00
Richard van der Hoff
6e9bf67f18 Remove unused/bitrotted MemoryDataStore
This isn't used, and looks thoroughly bitrotted.
2018-01-26 00:35:15 +00:00
Richard van der Hoff
2b91846497 Remove spurious unittest.DEBUG 2018-01-26 00:34:27 +00:00
Neil Johnson
73560237d6 add white space line 2018-01-26 00:15:10 +00:00
Neil Johnson
86c4f49a31 rather than try to reconstruct the results object, it is better to guard against the xrange step argument being 0 2018-01-26 00:12:02 +00:00
Neil Johnson
f632083576 fix return type, should be a dict 2018-01-25 23:52:17 +00:00
Neil Johnson
6c6e197b0a fix PEP8 violation 2018-01-25 23:47:46 +00:00
Neil Johnson
d02e43b15f remove white space 2018-01-25 23:29:46 +00:00
Neil Johnson
349c739966 synapse 500s on a call to publicRooms when the number of public rooms is zero. The specific cause is xrange trying to use a step value of zero; if the total room count really is zero, it makes sense to just bail early and save the extra processing 2018-01-25 23:28:44 +00:00
Matthew Hodgson
9a72b70630 fix thinko on 3pid whitelisting 2018-01-24 11:07:47 +01:00
Richard van der Hoff
25e2456ee7 Merge pull request #2816 from matrix-org/rav/metrics_comments
Add some comments about the reactor tick time metric
2018-01-23 19:39:26 +00:00
Matthew Hodgson
d32385336f add ?ts massaging for ASes (#2754)
blindly implement ?ts for AS. untested
2018-01-23 09:59:06 +01:00
Richard van der Hoff
b2da272b77 Merge pull request #2821 from matrix-org/rav/matthew_test_fixes
Matthew's fixes to the unit tests
2018-01-22 20:43:56 +00:00
Richard van der Hoff
4528dd2443 Fix logging and add user_id 2018-01-22 20:15:42 +00:00
Richard van der Hoff
93efd7eb04 logging and debug for http pusher 2018-01-22 18:14:10 +00:00
Matthew Hodgson
ab9f844aaf Add federation_domain_whitelist option (#2820)
Add federation_domain_whitelist

gives a way to restrict which domains your HS is allowed to federate with.
useful mainly for gracefully preventing a private but internet-connected HS from trying to federate to the wider public Matrix network
2018-01-22 19:11:18 +01:00
Richard van der Hoff
5c431f421c Matthew's fixes to the unit tests
Extracted from https://github.com/matrix-org/synapse/pull/2820
2018-01-22 16:46:16 +00:00
Matthew Hodgson
d84f65255e Merge pull request #2813 from matrix-org/matthew/registrations_require_3pid
add registrations_require_3pid and allow_local_3pids
2018-01-22 13:57:22 +00:00
Travis Ralston
a94d9b6b82 Appease the linter
These are ids anyways, not mxc uris.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-20 22:49:46 -07:00
Travis Ralston
5552ed9a7f Add an admin route to get all the media in a room
This is intended to be used by administrators to monitor the media that is passing through their server, if they wish.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-20 22:37:53 -07:00
Richard van der Hoff
2c8526cac7 Use a connection pool for the SimpleHttpClient
In particular I hope this will help the pusher, which makes many requests to
sygnal, and is currently negotiating SSL for each one.
2018-01-20 00:55:44 +00:00
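
Roughly what enabling connection reuse looks like with Twisted's client
machinery (a sketch of the idea, not Synapse's exact wiring):

    from twisted.internet import reactor
    from twisted.web.client import Agent, HTTPConnectionPool

    pool = HTTPConnectionPool(reactor, persistent=True)
    pool.maxPersistentPerHost = 5  # tune for chatty endpoints like sygnal
    agent = Agent(reactor, pool=pool)

    # Repeated requests to the same host now reuse pooled TCP/SSL
    # connections instead of renegotiating each time.
    d = agent.request(b"GET", b"https://sygnal.example.com/")
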
Richard van der Hoff
87b7d72760 Add some comments about the reactor tick time metric 2018-01-19 23:51:04 +00:00
Matthew Hodgson
49fce04624 fix typo (thanks sytest) 2018-01-19 19:55:38 +00:00
Richard van der Hoff
b0d9e633ee Merge pull request #2814 from matrix-org/rav/fix_urlcache_thumbs
Use the right path for url_preview thumbnails
2018-01-19 18:57:15 +00:00
Richard van der Hoff
ad7ec63d08 Use the right path for url_preview thumbnails
This was introduced by #2627: we were overwriting the original media for url
previews with the thumbnails :/

(fixes https://github.com/vector-im/riot-web/issues/6012, hopefully)
2018-01-19 18:29:39 +00:00
Matthew Hodgson
62d7d66ae5 oops, check all login types 2018-01-19 18:23:56 +00:00
Matthew Hodgson
8fe253f19b fix PR nitpicking 2018-01-19 18:23:45 +00:00
Matthew Hodgson
293380bef7 trailing commas 2018-01-19 15:38:53 +00:00
Matthew Hodgson
447f4f0d5f rewrite based on PR feedback:
* [ ] split config options into allowed_local_3pids and registrations_require_3pid
* [ ] simplify and comment logic for picking registration flows
* [ ] fix docstring and move check_3pid_allowed into a new util module
* [ ] use check_3pid_allowed everywhere

@erikjohnston PTAL
2018-01-19 15:33:55 +00:00
Matthew Hodgson
9d332e0f79 fix up v1, and improve errors 2018-01-19 00:53:58 +00:00
Matthew Hodgson
0af58f14ee fix pep8 2018-01-19 00:33:51 +00:00
Matthew Hodgson
81d037dbd8 mock registrations_require_3pid 2018-01-19 00:28:08 +00:00
Matthew Hodgson
28a6ccb49c add registrations_require_3pid
lets homeservers specify a whitelist for 3PIDs that users are allowed to associate with.
Typically useful for stopping people from registering with non-work emails
2018-01-19 00:19:58 +00:00
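
A hedged sketch of what such a whitelist check might look like (the
config shape here is illustrative; see the check_3pid_allowed util
mentioned in the follow-up commit for the real logic):

    import fnmatch

    ALLOWED_LOCAL_3PIDS = [
        {"medium": "email", "pattern": "*@example.com"},
    ]

    def check_3pid_allowed(medium, address):
        if not ALLOWED_LOCAL_3PIDS:
            return True  # no whitelist configured: allow everything
        return any(
            entry["medium"] == medium
            and fnmatch.fnmatch(address, entry["pattern"])
            for entry in ALLOWED_LOCAL_3PIDS
        )
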
Erik Johnston
cd871a3057 Fix storage provider bug introduced when renamed to store_local 2018-01-18 18:37:59 +00:00
Erik Johnston
8ff6726c0d Merge pull request #2812 from matrix-org/erikj/media_storage_provider_config
Make storage providers configurable
2018-01-18 18:33:57 +00:00
Erik Johnston
d69768348f Fix passing wrong config to provider constructor 2018-01-18 17:14:05 +00:00
Erik Johnston
8e85220373 Remove duplicate directory test 2018-01-18 17:12:35 +00:00
Erik Johnston
3fe2bae857 Missing staticmethod 2018-01-18 17:11:45 +00:00
Erik Johnston
aae77da73f Fixup comments 2018-01-18 17:11:29 +00:00
Erik Johnston
ce4f66133e Add unit tests 2018-01-18 16:32:11 +00:00
Erik Johnston
b6dc7044a9 Merge pull request #2804 from matrix-org/erikj/file_consumer
Add decent impl of a FileConsumer
2018-01-18 16:31:33 +00:00
Erik Johnston
9a89dae8c5 Fix typo in thumbnail resource causing access times to be incorrect 2018-01-18 15:06:24 +00:00
Erik Johnston
0af5dc63a8 Make storage providers more configurable 2018-01-18 14:07:21 +00:00
Richard van der Hoff
5a4da21d58 Merge pull request #2810 from matrix-org/rav/metrics_fixes
Fix bugs in block metrics
2018-01-18 13:35:28 +00:00
Richard van der Hoff
d57765fc8a Fix bugs in block metrics
... which I introduced in #2785
2018-01-18 12:24:42 +00:00
Erik Johnston
2cf6a7bc20 Use better file consumer 2018-01-18 12:00:46 +00:00
Erik Johnston
4a53f3a3e8 Ensure media is in local cache before thumbnailing 2018-01-18 12:00:46 +00:00
Erik Johnston
be0dfcd4a2 Do logcontexts correctly 2018-01-18 11:57:57 +00:00
Erik Johnston
1432f7ccd5 Move test stuff to tests 2018-01-18 11:57:57 +00:00
Erik Johnston
2f18a2647b Make all fields private 2018-01-18 11:57:54 +00:00
Richard van der Hoff
d6af5512bb Merge pull request #2809 from matrix-org/rav/metrics_errors
better exception logging in callbackmetrics
2018-01-18 11:46:37 +00:00
Richard van der Hoff
ce236f8ac8 better exception logging in callbackmetrics
when we fail to render a metric, give a clue as to which metric it was
2018-01-18 11:30:49 +00:00
Erik Johnston
dc519602ac Ensure we registerProducer isn't called twice 2018-01-18 11:07:17 +00:00
Erik Johnston
17b54389fe Fix _notify_empty typo 2018-01-18 11:05:34 +00:00
Erik Johnston
28b338ed9b Move definition of paused_producer to __init__ 2018-01-18 11:04:41 +00:00
Erik Johnston
a177325b49 Fix comments 2018-01-18 11:02:43 +00:00
Richard van der Hoff
36da256cc6 Merge pull request #2805 from matrix-org/rav/log_state_res
Log room when doing state resolution
2018-01-17 18:05:04 +00:00
Richard van der Hoff
1224612a79 Log room when doing state resolution
Mostly because it helps figure out what is prompting the resolution
2018-01-17 17:11:59 +00:00
Erik Johnston
bc67e7d260 Add decent impl of a FileConsumer
Twisted core doesn't have a general purpose one, so we need to write one
ourselves.

Features:
- All writing happens in background thread
- Supports both push and pull producers
- Push producers get paused if the consumer falls behind
2018-01-17 16:43:03 +00:00
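
A very condensed sketch of the idea (far simpler than a production
implementation, and it glosses over the fact that producer methods must
really be bounced back to the reactor thread):

    import queue
    import threading

    class FileConsumer(object):
        PAUSE_THRESHOLD = 16

        def __init__(self, file_obj):
            self._file = file_obj
            self._queue = queue.Queue()
            self._producer = None
            self._streaming = False  # True for push producers
            writer = threading.Thread(target=self._writer)
            writer.daemon = True
            writer.start()

        def registerProducer(self, producer, streaming):
            self._producer = producer
            self._streaming = streaming

        def unregisterProducer(self):
            self._producer = None
            self._queue.put(None)  # sentinel: flush and close

        def write(self, data):
            self._queue.put(data)
            if self._streaming and self._queue.qsize() > self.PAUSE_THRESHOLD:
                self._producer.pauseProducing()  # consumer fell behind

        def _writer(self):
            # all actual file I/O happens on this background thread
            while True:
                data = self._queue.get()
                if data is None:
                    self._file.close()
                    return
                self._file.write(data)
                producer = self._producer
                if (
                    self._streaming
                    and producer is not None
                    and self._queue.qsize() <= self.PAUSE_THRESHOLD
                ):
                    producer.resumeProducing()
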
Erik Johnston
a87006f9c7 Merge pull request #2783 from matrix-org/erikj/media_last_accessed
Keep track of last access time for local media
2018-01-17 16:39:02 +00:00
Matthew Hodgson
06db5c4b76 Merge pull request #2803 from matrix-org/matthew/fix-userdir-sql
fix SQL when searching all users
2018-01-17 16:27:54 +00:00
Richard van der Hoff
8716eb4920 Merge pull request #2802 from matrix-org/rav/clean_up_resolve_events
Split resolve_events into two functions
2018-01-17 16:01:33 +00:00
Matthew Hodgson
2d9ab533f9 fix SQL when searching all users 2018-01-17 15:58:52 +00:00
Richard van der Hoff
390093d45e Split resolve_events into two functions
... so that the return type doesn't depend on the arg types
2018-01-17 15:44:31 +00:00
Erik Johnston
2fb3a28c98 Remove lost comment 2018-01-17 14:59:44 +00:00
Richard van der Hoff
a7e4ff9cca Merge pull request #2795 from matrix-org/rav/track_db_scheduling
Track DB scheduling delay per-request
2018-01-17 14:29:37 +00:00
Richard van der Hoff
f884cfffb9 Merge pull request #2797 from matrix-org/rav/user_id_checking
Sanity checking for user ids
2018-01-17 14:29:12 +00:00
Richard van der Hoff
a5213df1f7 Sanity checking for user ids
Check the user_id passed to a couple of APIs for validity, to avoid an
"IndexError: list index out of range" exception, which looks scary and
results in a 500 rather than a more useful error.

Fixes #1432, among other things
2018-01-17 14:28:54 +00:00
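
An illustrative version of the kind of up-front check described (not the
actual Synapse parser):

    def parse_user_id(user_id):
        # Reject malformed ids early with a clear error, instead of
        # letting an IndexError surface later as a 500.
        if not user_id.startswith("@") or ":" not in user_id:
            raise ValueError("%r is not a valid user id" % (user_id,))
        localpart, domain = user_id[1:].split(":", 1)
        return localpart, domain
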
Erik Johnston
3d5a25407c Merge pull request #2798 from jeremycline/fedora-repo
Note that Synapse is available in Fedora
2018-01-17 14:28:17 +00:00
Richard van der Hoff
e8f7541d3f Merge remote-tracking branch 'origin/develop' into rav/track_db_scheduling 2018-01-17 14:01:57 +00:00
Richard van der Hoff
fb6563b4be Merge pull request #2793 from matrix-org/rav/db_txn_time_in_millis
Track db txn time in millisecs
2018-01-17 13:52:42 +00:00
Richard van der Hoff
1954e867b4 Merge pull request #2796 from matrix-org/rav/fix_closed_connection_errors
Fix 'NoneType' object has no attribute 'writeHeaders'
2018-01-17 13:52:24 +00:00
Richard van der Hoff
f23b4078c0 Merge pull request #2794 from matrix-org/rav/rework_run_interaction
rework runInteraction in terms of runConnection
2018-01-17 13:50:42 +00:00
Richard van der Hoff
11ab2f56f5 Merge branch 'develop' into rav/fix_closed_connection_errors 2018-01-17 11:25:11 +00:00
Richard van der Hoff
0486a7814a Merge branch 'develop' into rav/rework_run_interaction 2018-01-17 11:24:38 +00:00
Richard van der Hoff
90c14da992 Merge branch 'develop' into rav/db_txn_time_in_millis 2018-01-17 11:19:55 +00:00
Richard van der Hoff
1067b96364 Merge pull request #2792 from matrix-org/rav/optimise_logging_context
Optimise LoggingContext creation and copying
2018-01-17 11:19:28 +00:00
Richard van der Hoff
38506773eb Merge remote-tracking branch 'origin/develop' into rav/optimise_logging_context 2018-01-17 10:57:13 +00:00
Erik Johnston
300edc2348 Update last access time when thumbnails are viewed 2018-01-17 10:24:43 +00:00
Erik Johnston
05f98a2224 Keep track of last access time for local media 2018-01-17 10:24:43 +00:00
Erik Johnston
3cb2dabaad Merge pull request #2789 from matrix-org/erikj/fix_thumbnails
Fix thumbnailing remote files
2018-01-17 10:22:20 +00:00
Erik Johnston
d728c47142 Add docstring 2018-01-17 10:06:14 +00:00
Jeremy Cline
4102468da9 Note that Synapse is available in Fedora
Signed-off-by: Jeremy Cline <jeremy@jcline.org>
2018-01-16 16:14:02 -05:00
Richard van der Hoff
936482d507 Fix 'NoneType' object has no attribute 'writeHeaders'
Avoid throwing a (harmless) exception when we try to write an error response to
an http request where the client has disconnected.

This comes up as a CRITICAL error in the logs which tends to mislead people
into thinking there's an actual problem
2018-01-16 17:58:16 +00:00
Richard van der Hoff
3d12d97415 Track DB scheduling delay per-request
For each request, track the amount of time spent waiting for a db
connection. This entails adding it to the LoggingContext, and we may as
well add metrics for it while we're at it.
2018-01-16 17:23:32 +00:00
Richard van der Hoff
0f5d2cc37c Merge branch 'rav/db_txn_time_in_millis' into rav/track_db_scheduling 2018-01-16 17:09:15 +00:00
Richard van der Hoff
8615f19d20 rework runInteraction in terms of runConnection
... so that we can share the code
2018-01-16 17:08:29 +00:00
Matthew Hodgson
5e97ca7ee6 fix typo 2018-01-16 16:52:35 +00:00
Erik Johnston
d863f68cab Use local vars 2018-01-16 16:24:15 +00:00
Erik Johnston
6368e5c0ab Change _generate_thumbnails to take media_type 2018-01-16 16:17:38 +00:00
Erik Johnston
0a90d9ede4 Move setting of file_id up to caller 2018-01-16 16:03:05 +00:00
Richard van der Hoff
6324b65f08 Track db txn time in millisecs
... to reduce the amount of floating-point foo we do.
2018-01-16 15:53:18 +00:00
Richard van der Hoff
44a498418c Optimise LoggingContext creation and copying
It turns out that the only thing we use the __dict__ of LoggingContext for is
`request`, and given we create lots of LoggingContexts and then copy them every
time we do a db transaction or log line, using the __dict__ seems a bit
redundant. Let's try to optimise things by making the request attribute
explicit.
2018-01-16 15:49:42 +00:00
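
A sketch of the optimisation (simplified; the real class tracks more
than this):

    class LoggingContext(object):
        # explicit slots: no per-instance __dict__ to allocate or copy
        __slots__ = ["request", "parent_context"]

        def __init__(self, request=None, parent_context=None):
            self.request = request
            self.parent_context = parent_context

        def copy_to(self, other):
            # copying is now a fixed attribute assignment rather than a
            # __dict__.update()
            other.request = self.request
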
Erik Johnston
5dfc83704b Fix typo 2018-01-16 14:32:56 +00:00
Richard van der Hoff
febdca4b37 Merge pull request #2785 from matrix-org/rav/reorganise_metrics_again
Reorganise request and block metrics
2018-01-16 14:10:21 +00:00
Richard van der Hoff
f5f89fda21 Merge pull request #2790 from matrix-org/rav/preserve_event_logcontext_leak
Fix a logcontext leak in persist_events
2018-01-16 14:09:13 +00:00
Erik Johnston
307f88dfb6 Fix up log lines 2018-01-16 13:53:52 +00:00
Richard van der Hoff
5b527d7ee1 Merge pull request #2674 from her001/readme-sytest
Mention SyTest in the README, after Development
2018-01-16 13:18:17 +00:00
Richard van der Hoff
807e848f0f Merge pull request #2787 from matrix-org/rav/worker_event_counts
Metrics for events processed in appservice and fed sender
2018-01-16 13:14:09 +00:00
Richard van der Hoff
4a31a61ef9 Merge pull request #2786 from matrix-org/rav/rdata_metrics
Metrics for number of RDATA commands received
2018-01-16 13:13:53 +00:00
Richard van der Hoff
ee7a1cabd8 document metrics changes 2018-01-16 13:04:01 +00:00
Erik Johnston
9795b9ebb1 Correctly use server_name/file_id when generating/fetching remote thumbnails 2018-01-16 12:02:06 +00:00
Erik Johnston
c5b589f2e8 Log when we respond with 404 2018-01-16 12:01:40 +00:00
Richard van der Hoff
64ddec1bc0 Fix a logcontext leak in persist_events
ObservableDeferred expects its callbacks to be called without any
logcontexts, whereas it turns out we were calling them with the logcontext of
the request which initiated the persistence loop.

It seems wrong that we are attributing work done in the persistence loop to the
request that happened to initiate it, so let's solve this by dropping the
logcontext for it.

(I'm not sure this actually causes any real problems other than messages in the
debug log, but let's clean it up anyway)
2018-01-16 11:47:36 +00:00
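A self-contained sketch of the fix's shape: swap in an empty ("sentinel") logcontext while the observers' callbacks run, so the work is not attributed to the request. The module-level `_current` variable here is a stand-in for Synapse's actual thread-local logcontext machinery:

```python
from contextlib import contextmanager

_current = None  # stand-in for the thread-local current logcontext

@contextmanager
def sentinel_logcontext():
    # temporarily drop to the sentinel context (None here), restoring the
    # previous context afterwards
    global _current
    saved, _current = _current, None
    try:
        yield
    finally:
        _current = saved

def fire_persistence_observers(observers, result):
    # ObservableDeferred expects its callbacks to run with no logcontext
    with sentinel_logcontext():
        for observer in observers:
            observer.callback(result)
```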
Erik Johnston
a4c5e4a645 Fix thumbnailing remote files 2018-01-16 11:37:50 +00:00
Erik Johnston
1159abbdd2 Merge pull request #2767 from matrix-org/erikj/media_storage_refactor
Refactor MediaRepository to separate out storage
2018-01-16 10:23:50 +00:00
Richard van der Hoff
a027c2af8d Metrics for events processed in appservice and fed sender
More metrics I wished I'd had
2018-01-15 18:23:24 +00:00
Richard van der Hoff
5c3c32f16f Metrics for number of RDATA commands received
I found myself wishing we had this.
2018-01-15 17:45:55 +00:00
Richard van der Hoff
39f4e29d01 Reorganise request and block metrics
To stop the number of duplicate foo:count metrics increasing without bound,
it's time for a rearrangement.

The following are all deprecated, and replaced with synapse_util_metrics_block_count:
  synapse_util_metrics_block_timer:count
  synapse_util_metrics_block_ru_utime:count
  synapse_util_metrics_block_ru_stime:count
  synapse_util_metrics_block_db_txn_count:count
  synapse_util_metrics_block_db_txn_duration:count

The following are all deprecated, and replaced with synapse_http_server_response_count:
   synapse_http_server_requests
   synapse_http_server_response_time:count
   synapse_http_server_response_ru_utime:count
   synapse_http_server_response_ru_stime:count
   synapse_http_server_response_db_txn_count:count
   synapse_http_server_response_db_txn_duration:count

The following are renamed (the old metrics are kept for now, but deprecated):

  synapse_util_metrics_block_timer:total ->
     synapse_util_metrics_block_time_seconds

  synapse_util_metrics_block_ru_utime:total ->
     synapse_util_metrics_block_ru_utime_seconds

  synapse_util_metrics_block_ru_stime:total ->
     synapse_util_metrics_block_ru_stime_seconds

  synapse_util_metrics_block_db_txn_count:total ->
     synapse_util_metrics_block_db_txn_count

  synapse_util_metrics_block_db_txn_duration:total ->
     synapse_util_metrics_block_db_txn_duration_seconds

  synapse_http_server_response_time:total ->
     synapse_http_server_response_time_seconds

  synapse_http_server_response_ru_utime:total ->
     synapse_http_server_response_ru_utime_seconds

  synapse_http_server_response_ru_stime:total ->
     synapse_http_server_response_ru_stime_seconds

  synapse_http_server_response_db_txn_count:total ->
     synapse_http_server_response_db_txn_count

  synapse_http_server_response_db_txn_duration:total ->
     synapse_http_server_response_db_txn_duration_seconds
2018-01-15 17:09:44 +00:00
Richard van der Hoff
992018d1c0 mechanism to render metrics with alternative names 2018-01-15 17:04:39 +00:00
Richard van der Hoff
80fa610f9c Add some comments to metrics classes 2018-01-15 16:52:52 +00:00
Richard van der Hoff
5e16c1dc8c Merge pull request #2778 from matrix-org/rav/counters_should_be_floats
Make Counter render floats
2018-01-15 14:09:53 +00:00
Richard van der Hoff
19d274085f Make Counter render floats
Prometheus handles all metrics as floats, and sometimes we store non-integer
values in them (notably, durations in seconds), so let's render them as floats
too.

(Note that the standard client libraries also treat Counters as floats.)
2018-01-12 23:49:44 +00:00
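In the Prometheus text exposition format that boils down to rendering the sample value as a float even when it is integral. A minimal sketch:

```python
def render_counter(name, value):
    # Prometheus stores every sample as a float64, so render integral
    # values as floats too rather than special-casing ints
    return "%s %s\n" % (name, repr(float(value)))

# render_counter("synapse_http_server_response_time_seconds", 12)
# -> "synapse_http_server_response_time_seconds 12.0\n"
```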
Richard van der Hoff
0fc2362d37 Merge pull request #2777 from matrix-org/rav/fix_remote_thumbnails
Reinstate media download on thumbnail request
2018-01-12 19:12:31 +00:00
Richard van der Hoff
21bf87a146 Reinstate media download on thumbnail request
We need to actually download the remote media when we get a request for a
thumbnail.
2018-01-12 15:38:06 +00:00
Erik Johnston
694f1c1b18 Fix up comments 2018-01-12 15:02:46 +00:00
Erik Johnston
e21370ba54 Correctly reraise exception 2018-01-12 14:44:02 +00:00
Erik Johnston
85a4d78213 Make Responder a context manager 2018-01-12 13:32:03 +00:00
Erik Johnston
dcc8eded41 Add missing class var 2018-01-12 13:16:27 +00:00
Erik Johnston
fefeb0ab0e Merge pull request #2774 from matrix-org/erikj/synctl
When using synctl with workers, don't start the main synapse automatically
2018-01-12 12:00:07 +00:00
Erik Johnston
81391fa162 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/media_storage_refactor 2018-01-12 11:28:49 +00:00
Erik Johnston
1e4edd1717 Remove unnecessary condition 2018-01-12 11:28:32 +00:00
Erik Johnston
c6c009603c Remove unused variables 2018-01-12 11:24:05 +00:00
Erik Johnston
4d88958cf6 Make class var local 2018-01-12 11:23:54 +00:00
Erik Johnston
227c491510 Comments 2018-01-12 11:22:41 +00:00
Erik Johnston
f4d93ae424 Actually make it work 2018-01-12 10:39:27 +00:00
Erik Johnston
f68e4cf690 Refactor 2018-01-12 10:11:12 +00:00
Richard van der Hoff
5f23b6d5ea Merge pull request #2766 from matrix-org/rav/room_event
Add /room/{id}/event/{id} to synapse
2018-01-11 14:47:18 +00:00
Erik Johnston
7cd34512d8 When using synctl with workers, don't start the main synapse automatically 2018-01-11 11:37:39 +00:00
Erik Johnston
07ab948c38 Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-01-11 11:23:40 +00:00
Erik Johnston
825a07a974 Merge pull request #2773 from matrix-org/erikj/hash_bg
Do bcrypt hashing in a background thread
2018-01-10 18:11:41 +00:00
Erik Johnston
f8e1ab5fee Do bcrypt hashing in a background thread 2018-01-10 18:01:28 +00:00
Erik Johnston
b9e4a97922 Merge pull request #2772 from matrix-org/t3chguy/patch-1
Fix publicised groups GET API (singular) over federation
2018-01-10 15:15:48 +00:00
Michael Telatynski
5f07f5694c fix order of operations derp and also use .get to default to {}
Signed-off-by: Michael Telatynski <7t3chguy@gmail.com>
2018-01-10 15:11:35 +00:00
Michael Telatynski
8c9d5b4873 Fix publicised groups API (singular) over federation
which was missing its fed client API; since there is no other API,
it might as well reuse the bulk one and unwrap the result

Signed-off-by: Michael Telatynski <7t3chguy@gmail.com>
2018-01-10 15:04:51 +00:00
Richard van der Hoff
c175a5f0f2 Merge pull request #2770 from matrix-org/rav/fix_request_metrics
Update http request metrics before calling servlet
2018-01-10 14:53:40 +00:00
Richard van der Hoff
d90e8ea444 Update http request metrics before calling servlet
Make sure that we set the servlet name in the metrics object *before* calling
the servlet, in case the servlet throws an exception.
2018-01-09 18:27:35 +00:00
hera
174eacc8ba oops 2018-01-09 18:14:32 +00:00
Matthew Hodgson
a66f489678 fix GIST->GIN switch 2018-01-09 16:55:51 +00:00
Matthew Hodgson
e79db0a673 switch back from GIST to GIN indexes 2018-01-09 16:37:48 +00:00
Matthew Hodgson
e365ad329f oops, tweak work_mem when actually storing 2018-01-09 16:30:30 +00:00
Matthew Hodgson
19f9227643 avoid 80s GIN inserts by tweaking work_mem
see https://github.com/matrix-org/synapse/issues/2753 for details
2018-01-09 16:25:04 +00:00
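The trick is to clamp `work_mem` for the duration of the insert and restore it afterwards. A sketch against a DB-API cursor (table columns abbreviated, and '1MB' is a tuning choice rather than gospel):

```python
def store_search_entries(txn, entries):
    # A large work_mem makes PostgreSQL's GIN fast-update pending list
    # enormous, which can stall individual inserts for tens of seconds;
    # shrink it while we write, then put it back.
    txn.execute("SET work_mem = '1MB'")
    txn.executemany(
        "INSERT INTO event_search (event_id, vector) VALUES (%s, %s)",
        entries,
    )
    txn.execute("RESET work_mem")
```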
Erik Johnston
8f03aa9f61 Add StorageProvider concept 2018-01-09 16:16:12 +00:00
Erik Johnston
2442e9876c Make PreviewUrlResource use MediaStorage 2018-01-09 16:15:07 +00:00
Erik Johnston
9d30a7691c Make ThumbnailResource use MediaStorage 2018-01-09 16:15:07 +00:00
Erik Johnston
9e20840e02 Use MediaStorage for remote media 2018-01-09 16:15:07 +00:00
Erik Johnston
dd3092c3a3 Use MediaStorage for local files 2018-01-09 16:15:07 +00:00
Erik Johnston
ada470bccb Add MediaStorage class 2018-01-09 16:15:07 +00:00
Erik Johnston
1ee787912b Add some helper classes 2018-01-09 16:15:07 +00:00
Erik Johnston
47ca5eb882 Split out add_file_headers 2018-01-09 16:15:07 +00:00
Erik Johnston
ce3a726fc0 Merge pull request #2764 from matrix-org/erikj/remove_dead_thumbnail_code
Remove dead code related to default thumbnails
2018-01-09 16:01:02 +00:00
Erik Johnston
b6c9deffda Remove dead TODO 2018-01-09 15:53:23 +00:00
Richard van der Hoff
51c9d9ed65 Add /room/{id}/event/{id} to synapse
Turns out that there is a valid use case for retrieving an event by id (notably
having received a push), but event ids should be scoped to room, so /event/{id}
is wrong.
2018-01-09 14:39:12 +00:00
Erik Johnston
b30cd5b107 Remove dead code related to default thumbnails 2018-01-09 14:38:33 +00:00
Richard van der Hoff
a767f06e3f Merge pull request #2765 from matrix-org/rav/fix_room_uts
Fix flaky test_rooms UTs
2018-01-09 14:21:03 +00:00
Richard van der Hoff
cb66a2d387 Merge pull request #2763 from matrix-org/rav/fix_config_uts
Fix broken config UTs
2018-01-09 12:08:08 +00:00
Richard van der Hoff
aed4e4ecdd Merge pull request #2762 from matrix-org/rav/fix_logconfig_indenting
Make indentation of generated log config consistent
2018-01-09 12:07:56 +00:00
Richard van der Hoff
f8fa5ae4af enable twisted delayedcall debugging in UTs 2018-01-09 12:06:45 +00:00
Richard van der Hoff
374c4d4ced Remove dead code
pointless function is pointless
2018-01-09 12:06:45 +00:00
Richard van der Hoff
142fb0a7d4 Disable user_directory updates for UTs
Fix flakiness in the UTs caused by the user_directory being updated in the
background
2018-01-09 12:06:45 +00:00
Richard van der Hoff
0211464ba2 Fix broken config UTs
https://github.com/matrix-org/synapse/pull/2755 broke log-config generation,
which in turn broke the unit tests.
2018-01-09 11:28:33 +00:00
Richard van der Hoff
3a556f1ea0 Make indentation of generated log config consistent
(we had a mix of 2- and 4-space indents)
2018-01-09 11:27:19 +00:00
Erik Johnston
e9f7677170 Merge pull request #2761 from turt2live/patch-1
Fix templating error with unban permission message
2018-01-08 10:10:49 +00:00
Travis Ralston
eccfc8e928 Fix templating error with unban permission message
Fixes https://github.com/matrix-org/synapse/issues/2759

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-07 19:52:58 -07:00
Richard van der Hoff
e6b24663e4 Merge pull request #2755 from matrix-org/richvdh-patch-1
Remove 'verbosity'/'log_file' from generated cfg
2018-01-05 14:17:46 +00:00
Richard van der Hoff
840f72356e Remove 'verbosity'/'log_file' from generated cfg
... because these only really exist to confuse people nowadays.

Also bring log config more into line with the generated log config, by making `level_for_storage`
apply to the `synapse.storage.SQL` logger rather than `synapse.storage`.
2018-01-05 12:30:28 +00:00
Erik Johnston
18e3a16e8b Merge branch 'release-v0.26.0' of github.com:matrix-org/synapse 2018-01-05 10:54:22 +00:00
Erik Johnston
864a6d2977 Bump version and changelog 2018-01-05 10:54:01 +00:00
Richard van der Hoff
6e375f4597 Merge pull request #2744 from matrix-org/rav/login_logging
Better logging when login can't find a 3pid
2018-01-05 09:54:39 +00:00
Richard van der Hoff
efdfd5c835 Merge pull request #2745 from matrix-org/rav/assert_params
Check missing fields in event_from_pdu_json
2018-01-05 09:52:39 +00:00
Richard van der Hoff
bd91857028 Check missing fields in event_from_pdu_json
Return a 400 rather than a 500 when somebody messes up their send_join
2017-12-30 18:40:19 +00:00
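The shape of the check, sketched with `SynapseError` and `FrozenEvent` as they appear elsewhere in the codebase (the field list here is illustrative only):

```python
from synapse.api.errors import SynapseError
from synapse.events import FrozenEvent

def event_from_pdu_json(pdu_json):
    # reject PDUs missing required fields with a 400 up-front, rather than
    # letting a KeyError deep in event handling surface as a 500
    for field in ("type", "depth"):
        if field not in pdu_json:
            raise SynapseError(400, "Event is missing the %r field" % (field,))
    return FrozenEvent(pdu_json)
```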
Richard van der Hoff
3079f80d4a Factor out event_from_pdu_json
turns out we have two copies of this, and neither needs to be an instance
method
2017-12-30 18:40:19 +00:00
Richard van der Hoff
65abc90fb6 federation_server: clean up imports 2017-12-30 18:40:19 +00:00
Richard van der Hoff
a7b726ad18 federation_client: clean up imports 2017-12-30 18:40:19 +00:00
Richard van der Hoff
75c1b8df01 Better logging when login can't find a 3pid 2017-12-20 19:31:00 +00:00
Richard van der Hoff
3f9f1c50f3 Merge pull request #2683 from seckrv/fix_pwd_auth_prov_typo
synapse/config/password_auth_providers: Fixed bracket typo
2017-12-18 22:37:21 +00:00
Richard van der Hoff
48fa4e1e5b Merge pull request #2435 from silkeh/listen-ipv6-default
Adapt the default config to bind on both IPv4 and IPv6 on all platforms
2017-12-18 22:34:50 +00:00
Silke
df0f602796 Implement listen_tcp method in remaining workers
Signed-off-by: Silke <silke@slxh.eu>
2017-12-18 20:00:42 +01:00
Silke
26cd3f5690 Remove logger argument and do not catch replication listener
Signed-off-by: Silke <silke@slxh.eu>
2017-12-18 20:00:42 +01:00
Matthew Hodgson
3355ce650d Merge pull request #2737 from Valodim/master
mention federation tester more prominently in the readme
2017-12-17 23:05:17 +00:00
Silke Hofstra
ed48ecc58c Add methods for listening on multiple addresses
Add listen_tcp and listen_ssl which implement Twisted's reactor.listenTCP
and reactor.listenSSL for multiple addresses.

Signed-off-by: Silke Hofstra <silke@slxh.eu>
2017-12-17 13:15:48 +01:00
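A minimal sketch of what such a `listen_tcp` does, assuming a list of bind addresses taken from the config:

```python
def listen_tcp(bind_addresses, port, factory, reactor):
    # bind the same protocol factory on every configured address,
    # mirroring reactor.listenTCP for each one
    return [
        reactor.listenTCP(port, factory, interface=address)
        for address in bind_addresses
    ]

# e.g. listen_tcp(["::", "0.0.0.0"], 8008, site, reactor)
```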
Silke Hofstra
37d1a90025 Allow binds to both :: and 0.0.0.0
Binding on 0.0.0.0 when :: is specified in the bind_addresses is now allowed.
This causes a warning explaining the behaviour.
Configuration changed to match.

See #2232

Signed-off-by: Silke Hofstra <silke@slxh.eu>
2017-12-17 13:10:31 +01:00
Willem Mulder
3e59143ba8 Adapt the default config to bind on IPv6.
Most deployments are on Linux (or Mac OS), so this would actually bind
on both IPv4 and IPv6.

Resolves #1886.

Signed-off-by: Willem Mulder <willemmaster@hotmail.com>
2017-12-17 13:07:37 +01:00
Vincent Breitmoser
9419bb5776 mention federation tester more prominently in the readme 2017-12-16 22:09:53 +02:00
Erik Johnston
80573e3900 Fix rc version number 2017-12-13 15:15:33 +00:00
Erik Johnston
069ae2a5d6 Bump changelog and version 2017-12-13 15:11:22 +00:00
Erik Johnston
ba24576f2f Merge pull request #2717 from matrix-org/erikj/createroom_content
Fix wrong avatars when inviting multiple users when creating room
2017-12-07 20:00:34 +00:00
Erik Johnston
d8a6c734fa Merge branch 'develop' of github.com:matrix-org/synapse into erikj/createroom_content 2017-12-07 14:24:01 +00:00
Erik Johnston
ef045dcd71 Copy dict in update_membership too 2017-12-07 14:17:15 +00:00
Matthew Hodgson
33cb7ef0b7 Merge pull request #2723 from matrix-org/matthew/search-all-local-users
Add all local users to the user_directory and optionally search them
2017-12-05 11:09:47 +00:00
Matthew Hodgson
cdc2cb5d11 fix StoreError syntax 2017-12-05 11:09:31 +00:00
Richard van der Hoff
16ec3805e5 Fix error when deleting devices
This was introduced in d7ea8c4 / PR #2728
2017-12-05 09:49:22 +00:00
Richard van der Hoff
8529874368 Merge pull request #2729 from matrix-org/rav/custom_ui_auth
support custom login types for validating users
2017-12-05 09:43:43 +00:00
Richard van der Hoff
da1010c83a support custom login types for validating users
Wire the custom login type support from password providers into the UI-auth
user-validation flows.
2017-12-05 09:43:30 +00:00
Richard van der Hoff
cc58e177f3 Merge pull request #2728 from matrix-org/rav/validate_user_via_ui_auth
Factor out a validate_user_via_ui_auth method
2017-12-05 09:42:46 +00:00
Richard van der Hoff
d7ea8c4800 Factor out a validate_user_via_ui_auth method
Collect together all the places that validate a logged-in user via UI auth.
2017-12-05 09:42:30 +00:00
Richard van der Hoff
aa6ecf0984 Merge pull request #2727 from matrix-org/rav/refactor_ui_auth_return
Refactor UI auth implementation
2017-12-05 09:40:38 +00:00
Richard van der Hoff
d5f9fb06b0 Refactor UI auth implementation
Instead of returning False when auth is incomplete, throw an exception which
can be caught with a wrapper.
2017-12-05 09:40:05 +00:00
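The refactor is roughly this pattern (names follow the commit's description; the real wrapper also has to cope with deferreds):

```python
class InteractiveAuthIncompleteError(Exception):
    """Raised when user-interactive auth is not yet complete; carries the
    401 body (remaining flows, session, params) to return to the client."""
    def __init__(self, result):
        super(InteractiveAuthIncompleteError, self).__init__(
            "Interactive auth not yet complete"
        )
        self.result = result

def interactive_auth_handler(f):
    # wrapper that converts the exception into the 401 response, so each
    # servlet no longer has to check for a False return value
    def wrapped(self, request, *args, **kwargs):
        try:
            return f(self, request, *args, **kwargs)
        except InteractiveAuthIncompleteError as e:
            return 401, e.result
    return wrapped
```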
Matthew Hodgson
c22e73293a speed up the rate of initial spam for users 2017-12-04 18:05:28 +00:00
Matthew Hodgson
b11dca2025 better doc 2017-12-04 17:51:33 +00:00
Matthew Hodgson
7b86c1fdcd try make tests work a bit more... 2017-12-04 17:10:03 +00:00
Richard van der Hoff
58ebdb037c Merge pull request #2716 from matrix-org/rav/federation_client_post
federation_client script: Support for posting content
2017-12-04 17:04:08 +00:00
Matthew Hodgson
95f8a713dc erik told me to 2017-12-04 16:56:25 +00:00
Matthew Hodgson
74e0cc74ce fix pep8 and tests 2017-12-04 15:11:38 +00:00
Matthew Hodgson
1bd40ca73e switch to a simpler 'search_all_users' button as per review feedback 2017-12-04 14:58:39 +00:00
Matthew Hodgson
f397153dfc Merge branch 'develop' into matthew/search-all-local-users 2017-11-30 01:51:38 +00:00
Matthew Hodgson
5406392f8b specify default user_directory_include_pattern 2017-11-30 01:45:34 +00:00
Matthew Hodgson
f61e107f63 remove null constraint on user_dir.room_id 2017-11-30 01:43:50 +00:00
Matthew Hodgson
4b1fceb913 fix alternation operator for FTS4 - how did this ever work!? 2017-11-30 01:34:03 +00:00
Matthew Hodgson
a4bb133b68 fix thinkos galore 2017-11-30 01:17:15 +00:00
Matthew Hodgson
cd3697e8b7 kick the user_directory index when new users register 2017-11-29 18:33:34 +00:00
Matthew Hodgson
3241c7aac3 untested WIP but might actually work 2017-11-29 18:27:05 +00:00
Richard van der Hoff
624c46eb06 Merge pull request #2721 from matrix-org/rav/get_user_by_access_token_comments
Improve comments on get_user_by_access_token
2017-11-29 17:57:06 +00:00
Richard van der Hoff
7a48a6b63e Merge pull request #2722 from matrix-org/rav/delete_device_on_logout
Delete devices and pushers on logouts etc
2017-11-29 17:56:46 +00:00
Matthew Hodgson
47d99a20d5 Add user_directory_include_pattern config param to expand search results to additional users
Initial commit; this doesn't work yet - the LIKE filtering seems too aggressive.
It also needs _do_initial_spam to be aware of prepopulating the whole user_directory_search table with all users...
...and it needs a handle_user_signup() or something to be added so that new signups get incrementally added to the table too.

Committing it here as a WIP
2017-11-29 16:46:45 +00:00
Richard van der Hoff
ad7e570d07 Delete devices in various logout situations
Make sure that we delete devices whenever a user is logged out due to any of
the following situations:

 * /logout
 * /logout_all
 * change password
 * deactivate account (by the user or by an admin)
 * invalidate access token from a dynamic module

Fixes #2672.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
ae31f8ce45 Move set_password into its own handler
Non-functional refactoring to move set_password. This means that we'll be able
to properly deactivate devices and access tokens without introducing a
dependency loop.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
7ca5c68233 Move deactivate_account into its own handler
Non-functional refactoring to move deactivate_account. This means that we'll be
able to properly deactivate devices and access tokens without introducing a
dependency loop.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
2c6d63922a Remove pushers when deleting access tokens
Whenever an access token is invalidated, we should remove the associated
pushers.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
97d1a1dc01 Merge pull request #2718 from matrix-org/rav/notify_logcontexts
Clear logcontext before starting fed txn queue runner
2017-11-29 16:01:46 +00:00
Richard van der Hoff
8b45de90a4 Merge pull request #2719 from matrix-org/rav/handle_missing_hashes
Fix 500 when joining matrix-dev
2017-11-29 16:01:33 +00:00
Richard van der Hoff
7303ed65e1 Fix 500 when joining matrix-dev
matrix-dev has an event (`$/6ANj/9QWQyd71N6DpRQPf+SDUu11+HVMeKSpMzBCwM:zemos.net`)
which has no `hashes` member.

Check for missing `hashes` element in events.
2017-11-29 16:00:46 +00:00
Richard van der Hoff
da562bd6a1 Improve comments on get_user_by_access_token
because I have to reverse-engineer this every time.
2017-11-29 15:52:41 +00:00
Richard van der Hoff
d4fb4f7c52 Clear logcontext before starting fed txn queue runner
These processes take a long time compared to the request, so there is lots of
"Entering|Restoring dead context" in the logs. Let's try to shut it up a bit.
2017-11-28 15:26:14 +00:00
Erik Johnston
dfbc45302e PEP8 2017-11-28 15:23:26 +00:00
Erik Johnston
c4c1d170af Fix wrong avatars when inviting multiple users when creating room
We reused the `content` dictionary between invite requests, which meant they could end up reusing the profile info for a previous user
2017-11-28 15:19:15 +00:00
Richard van der Hoff
fd04968f32 federation_client script: Support for posting content 2017-11-28 11:59:24 +00:00
Luke Barnard
c2a1194424 Merge pull request #2715 from matrix-org/luke/group-guest-access
Allow guest access to group APIs for reading
2017-11-28 11:58:54 +00:00
Luke Barnard
ab1b2d0ff2 Allow guest access to group APIs for reading 2017-11-28 11:23:00 +00:00
Richard van der Hoff
5a4da5bf78 Merge pull request #2697 from matrix-org/rav/fix_urlcache_index_error
Fix error on sqlite 3.7
2017-11-27 12:25:48 +00:00
Richard van der Hoff
84b31a3e7a Merge pull request #2713 from matrix-org/rav/no_upsert_forever
Avoid retrying forever on IntegrityError
2017-11-27 12:19:35 +00:00
Richard van der Hoff
df6c72ede3 Merge pull request #2711 from matrix-org/rav/fix_dns_errhandler
Fix error handling on dns lookup
2017-11-27 12:19:18 +00:00
Richard van der Hoff
04bb79f139 Merge pull request #2710 from matrix-org/rav/remove_dead_code
Tiny code cleanups
2017-11-27 12:15:44 +00:00
Richard van der Hoff
e828a7380a Merge pull request #2708 from matrix-org/rav/replication_logcontext_leaks
Fix some logcontext leaks in replication resource
2017-11-27 12:15:33 +00:00
Richard van der Hoff
7ef22a41a3 Merge pull request #2707 from matrix-org/rav/fix_urlpreview
Fix OPTIONS on preview_url
2017-11-27 12:15:14 +00:00
Richard van der Hoff
96387bd26f Merge pull request #2705 from matrix-org/rav/improve_tracebacks
Improve tracebacks on exceptions
2017-11-27 12:07:52 +00:00
Richard van der Hoff
6be01f599b Improve tracebacks on exceptions
Use failure.Failure to recover our failure, which will give us a useful
stacktrace, unlike the rethrown exception.
2017-11-27 12:05:58 +00:00
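The idiom in question: constructing a `Failure` inside an `except` block snapshots the active exception together with its traceback, which a bare re-raise elsewhere would lose. A sketch (`log_failure` is a hypothetical hook):

```python
from twisted.python.failure import Failure

def run_step(step):
    try:
        return step()
    except Exception:
        # Failure() with no arguments captures the exception currently
        # being handled, traceback included
        f = Failure()
        # log_failure(f)    # hypothetical: record it before re-raising
        f.raiseException()  # re-raises with the original stack trace
```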
Richard van der Hoff
63ccaa5873 Avoid retrying forever on IntegrityError 2017-11-27 12:00:07 +00:00
Richard van der Hoff
8b38096a89 Fix error handling on dns lookup
pass the right arguments to the errback handler

Fixes "TypeError('eb() takes exactly 2 arguments (1 given)',)"
2017-11-24 16:47:48 +00:00
Richard van der Hoff
795b0849f3 Add a comment which might save some confusion 2017-11-24 00:34:56 +00:00
Richard van der Hoff
7f14f0ae38 Remove dead sync_callback
This is never used; let's remove it to stop confusing things.
2017-11-24 00:32:04 +00:00
Richard van der Hoff
0edf085b68 Fix some logcontext leaks in replication resource
The @measure_func annotations rely on the wrapped function respecting the
logcontext rules. Add the necessary yields to make this work.
2017-11-23 23:19:43 +00:00
Richard van der Hoff
8132a6b7ac Fix OPTIONS on preview_url
Fixes #2706
2017-11-23 17:52:31 +00:00
Richard van der Hoff
6b48b3e277 fix sql fails 2017-11-22 18:06:24 +00:00
Richard van der Hoff
2908f955d1 Check database in has_completed_background_updates
so that the right thing happens on workers.
2017-11-22 18:02:15 +00:00
Richard van der Hoff
79eba878a7 Merge pull request #2701 from matrix-org/rav/one_mediarepo_to_rule_them_all
Try to avoid having multiple PreviewUrlResource instances
2017-11-22 16:46:09 +00:00
Richard van der Hoff
68ca864141 Add config option to disable media_repo on main synapse
... to stop us doing the cache cleanup jobs on the master.
2017-11-22 16:20:27 +00:00
Richard van der Hoff
e1fd4751de Build MediaRepositoryResource as a homeserver dependency
This avoids the scenario where we have four different PreviewUrlResources
configured on a single app, each of which have their own caches and cache
clearing jobs.
2017-11-22 16:19:49 +00:00
Richard van der Hoff
148c113fbe Merge pull request #2700 from matrix-org/rav/worker_docs
* Improve documentation of workers

Fixes https://github.com/matrix-org/synapse/issues/2554
2017-11-21 18:28:40 +00:00
Richard van der Hoff
a0c6688976 Improve documentation of workers
Fixes https://github.com/matrix-org/synapse/issues/2554
2017-11-21 18:28:13 +00:00
Richard van der Hoff
d5a7c56ef9 Merge pull request #2698 from matrix-org/rav/remove_dead_dependencies
Clean up dependency list
2017-11-21 17:38:42 +00:00
Richard van der Hoff
0b4aa2dc21 Merge pull request #2689 from matrix-org/rav/unlock_account_data_upsert
Avoid locking account_data tables for upserts
2017-11-21 13:39:14 +00:00
Matthew Hodgson
3ab2cfec47 sanity checks 2017-11-21 12:10:20 +00:00
Richard van der Hoff
7298ed7c51 Clean up dependency list
remove those that aren't used at all, and replace the ones that don't have
builders with simple getters rather than dynamically-generated methods.
2017-11-21 11:15:41 +00:00
Richard van der Hoff
7098b65cb8 Fix error on sqlite 3.7
Create the url_cache index on local_media_repository as a background update, so
that we can detect whether we are on sqlite or not and create a partial or
complete index accordingly.

To avoid running the cleanup job before we have built the index, add a bailout
which will defer the cleanup if the bg updates are still running.

Fixes https://github.com/matrix-org/synapse/issues/2572.
2017-11-21 11:14:17 +00:00
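A sketch of the per-engine index creation (the background-update registration machinery is omitted; `PostgresEngine` is the engine class used elsewhere in the codebase, and the SQL is illustrative):

```python
from synapse.storage.engines import PostgresEngine

def url_cache_index_sql(database_engine):
    # Partial indexes need PostgreSQL or a newer sqlite than 3.7, so fall
    # back to a complete index on old sqlite; running this as a background
    # update is what gives us the chance to inspect the engine first.
    if isinstance(database_engine, PostgresEngine):
        return (
            "CREATE INDEX local_media_repository_url_idx"
            " ON local_media_repository(created_ts)"
            " WHERE url_cache IS NOT NULL"
        )
    return (
        "CREATE INDEX local_media_repository_url_idx"
        " ON local_media_repository(created_ts)"
    )
```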
Oliver Kurz
83d8d4d8cd Allow use of higher versions of saml2
The package was pinned to <4.0 with 07cf96eb because "from saml2 import
config" did not work. This seems to have been fixed in the meantime in the
saml2 package, so it should no longer stop us from using a more recent version.

Signed-off-by: Oliver Kurz <okurz@suse.de>
2017-11-20 11:14:39 +01:00
Matthew Hodgson
2145ee1976 don't double-invite in sync_room_to_group.pl 2017-11-19 00:48:47 +00:00
Richard van der Hoff
59a7275258 Merge pull request #2688 from matrix-org/rav/unlock_more_upsert
Avoid locking for upsert on pushers tables
2017-11-17 13:09:31 +00:00
Richard van der Hoff
d8a05418f9 Merge branch 'master' into develop 2017-11-17 12:13:14 +00:00
Richard van der Hoff
b102e93571 Merge branch 'release-v0.25.1' 2017-11-17 12:03:07 +00:00
Luke Barnard
cdf6fc15b0 Merge pull request #2686 from matrix-org/luke/as-flair
Add automagical AS Publicised Group(s)
2017-11-17 10:13:46 +00:00
Richard van der Hoff
74bbeb4373 Bump version in __init__.py 2017-11-17 10:10:53 +00:00
Richard van der Hoff
2187724ad2 Prep changelog for v0.25.1 2017-11-17 10:09:16 +00:00
Jurek
eded7084d2 Fix auth handler #2678 2017-11-17 10:07:27 +00:00
Matthew Hodgson
34c3d0a386 typo 2017-11-17 01:54:02 +00:00
Matthew Hodgson
9d50b6f0ea quick and dirty room membership<->group membership sync script 2017-11-17 01:54:02 +00:00
Luke Barnard
ab1dc84779 Add extra space before inline comment 2017-11-16 18:22:40 +00:00
Luke Barnard
7fb0e98b03 Extract group_id from the dict for multiple use 2017-11-16 18:18:30 +00:00
Luke Barnard
e836bdf734 Fix tests 2017-11-16 18:14:39 +00:00
Richard van der Hoff
c46139a17e Avoid locking account_data tables for upserts 2017-11-16 18:08:01 +00:00
Luke Barnard
d8391f0541 Remove unused GROUP_ID_REGEX 2017-11-16 18:05:57 +00:00
Luke Barnard
4e8374856d Document get_groups_for_user 2017-11-16 18:03:46 +00:00
Luke Barnard
270f9cd23a Flake8 2017-11-16 18:03:31 +00:00
Luke Barnard
9d83d52027 Use a generator instead of a list 2017-11-16 17:57:34 +00:00
Luke Barnard
5b48eec4a1 Make sure we check AS groups for lookup on bulk 2017-11-16 17:55:15 +00:00
Luke Barnard
b1edf26051 Check group_id belongs to this domain 2017-11-16 17:54:27 +00:00
Richard van der Hoff
06e5bcfc83 Avoid locking for upsert on pushers tables
* replace the upsert into deleted_pushers with an insert
* no need to lock for upsert on pusher_throttle
2017-11-16 17:52:23 +00:00
Jurek
624a8bbd67 Fix auth handler #2678 2017-11-16 17:19:02 +00:00
Richard van der Hoff
b26cbbb60e Revert "Merge pull request #2679 from jkolo/fix_auth_handler"
This PR was against master, not develop :(

This reverts commit 203058a027, reversing
changes made to 552f123bea.
2017-11-16 17:18:11 +00:00
Richard van der Hoff
203058a027 Merge pull request #2679 from jkolo/fix_auth_handler
Fix auth handler
2017-11-16 17:09:00 +00:00
Luke Barnard
97bd18af4e Add automagical AS Publicised Group(s)
via registration file "users" namespace:

```YAML
...
namespaces:
  users:
    - exclusive: true
      regex: '.*luke.*'
      group_id: '+all_the_lukes:hsdomain'
...
```

This is part of giving App Services their own groups for matching users. With this, ghost users will be given the appearance that they are in a group and that they have publicised the fact, but _only_ from the perspective of the `get_publicised_groups_for_user` API.
2017-11-16 16:44:55 +00:00
Richard van der Hoff
ba05f28ae7 Merge pull request #2684 from matrix-org/rav/unlock_upsert
Start work on avoiding table locks for upserts
2017-11-16 16:27:29 +00:00
Richard van der Hoff
77a1227870 Fix broken ref to IntegrityError 2017-11-16 16:03:38 +00:00
Richard van der Hoff
7ab2b69e18 Avoid locking pushers table on upsert
Now that _simple_upsert will retry on IntegrityError, we don't need to lock the
table.
2017-11-16 15:32:01 +00:00
Richard van der Hoff
10aaa1bc15 _simple_upsert: retry on IntegrityError
wrap the call to _simple_upsert_txn in a loop so that we retry on an
IntegrityError: this means we can avoid locking the table provided there is a
unique index.
2017-11-16 15:30:15 +00:00
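The retry loop looks roughly like this, with `simple_upsert_txn` standing in for the existing single-shot helper and `IntegrityError` coming from whichever DB-API driver is in use (the retry cap mirrors the "Avoid retrying forever on IntegrityError" commit above):

```python
def simple_upsert(txn, database_engine, table, keyvalues, values):
    # Optimistically UPDATE-then-INSERT without a table lock; if a
    # concurrent INSERT races us, the unique index makes our INSERT fail
    # with IntegrityError, in which case we just go round again.
    attempts = 0
    while True:
        try:
            return simple_upsert_txn(txn, table, keyvalues, values)
        except database_engine.module.IntegrityError:
            attempts += 1
            if attempts >= 5:
                raise  # give up rather than spin forever
```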
Richard van der Hoff
cdc9e50a5d Cleanup in _simple_upsert_txn
Bail out early to reduce indentation
2017-11-16 15:29:10 +00:00
Richard von Seck
6f05de0e5e synapse/config/password_auth_providers: Fixed bracket typo
Signed-off-by: Richard von Seck <richard.von-seck@gmx.net>
2017-11-16 15:59:38 +01:00
Jurek
56e2a4333e Fix auth handler #2678 2017-11-15 22:49:43 +01:00
Richard van der Hoff
f959c01600 Merge pull request #2661 from matrix-org/rav/statereadstore
Pull out bits of StateStore to a mixin
2017-11-15 17:23:01 +00:00
Richard van der Hoff
117a8c0d35 Merge pull request #2677 from matrix-org/rav/spec_r0.3.0
Declare support for r0.3.0
2017-11-15 17:10:45 +00:00
Richard van der Hoff
30d2730ee2 Declare support for r0.3.0 2017-11-15 16:24:22 +00:00
Erik Johnston
aa812feb41 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-11-15 11:35:34 +00:00
Erik Johnston
552f123bea Merge branch 'release-v0.25.0' of github.com:matrix-org/synapse 2017-11-15 11:32:24 +00:00
Erik Johnston
5d0cbf763f Bump changelog 2017-11-15 11:29:32 +00:00
Richard van der Hoff
1b83c09c03 Merge pull request #2675 from matrix-org/rav/remove_broken_logcontext_funcs
Remove preserve_context_over_{fn, deferred}
2017-11-15 11:13:53 +00:00
David Baker
7190a550dc Merge pull request #2650 from matrix-org/dbkr/push_include_content_option
Rename redact_content option to include_content
2017-11-15 10:47:38 +00:00
Richard van der Hoff
b2cd6accf5 Remove __PreservingContextDeferred too 2017-11-14 23:00:10 +00:00
Andrew Conrad
053ecae4db Mention SyTest in the README, after Development
Signed-off-by: Andrew Conrad <aconrad103@gmail.com>
2017-11-14 15:11:35 -06:00
Richard van der Hoff
038c994724 Merge pull request #2648 from krombel/update_prometheus
update prometheus-config to new format
2017-11-14 19:14:32 +00:00
Krombel
c161472575 Make clear that the config has changed since prometheus v2
This restores the config that is usable for Prometheus before v2.0.0.
The new config only works for Prometheus v2+.
2017-11-14 19:59:26 +01:00
Richard van der Hoff
008aa2fc6d Merge pull request #2671 from matrix-org/rav/room_list_fixes
Reshuffle room list request code
2017-11-14 18:11:02 +00:00
Richard van der Hoff
6f30fd9235 Merge pull request #2673 from matrix-org/erikj/fix_port_script_groups
Add new boolean columns to port script
2017-11-14 16:03:41 +00:00
Erik Johnston
9ecf621404 Less s's 2017-11-14 15:55:15 +00:00
Erik Johnston
22db751d1e Add new boolean columns to port script 2017-11-14 15:48:50 +00:00
Erik Johnston
03feb7a34d Bump version and changelog 2017-11-14 14:51:25 +00:00
Richard van der Hoff
35a4b63240 Pull out bits of StateStore to a mixin
... so that we don't need to secretly gut-wrench it for use in the slaved
stores. I haven't done the other stores yet, but we should. I'm tired of the
workers breaking every time we tweak the stores because I forgot to gut-wrench
the right method.

fixes https://github.com/matrix-org/synapse/issues/2655.
2017-11-14 11:43:58 +00:00
Richard van der Hoff
4dd1bfa8c1 Revert "Revert "move _state_group_cache to statestore""
We're going to fix this properly on this branch, so that the _state_group_cache
can end up in StateGroupReadStore.

This reverts commit ab335edb02.
2017-11-14 11:43:58 +00:00
Richard van der Hoff
6caa379ba1 Merge pull request #2658 from matrix-org/rav/store_heirarchy_init
Make __init__ consistent across Store hierarchy
2017-11-14 11:43:12 +00:00
Richard van der Hoff
7e6fa29cb5 Remove preserve_context_over_{fn, deferred}
Both of these functions are known to leak logcontexts. Replace the remaining
calls to them and kill them off.
2017-11-14 11:22:42 +00:00
Richard van der Hoff
44a1bfd6a6 Reshuffle room list request code
I'm not entirely sure if this will actually help anything, but it simplifies
the code and might give further clues about why room list search requests are
blowing out the get_current_state_ids caches.
2017-11-14 10:29:58 +00:00
Richard van der Hoff
1fc66c7460 Add a load of logging to the room_list handler
So we can see what it gets up to.
2017-11-14 10:23:47 +00:00
Richard van der Hoff
7bd6c87eca Merge pull request #2668 from turt2live/travis/whoami
Add a route for determining who you are
2017-11-14 09:54:21 +00:00
Travis Ralston
812c191939 Remove redundent call
Signed-off-by: Travis Ralston <travpc@gmail.com>
2017-11-13 12:44:21 -07:00
Richard van der Hoff
c741ba59c9 Merge pull request #2669 from matrix-org/rav/cache_urlpreview_failure
Cache failures in url_preview handler
2017-11-13 18:36:24 +00:00
Richard van der Hoff
781c15a6a3 Merge pull request #2663 from matrix-org/rav/invalid_request_utf8
Fix 500 on invalid utf-8 in request
2017-11-13 18:35:24 +00:00
David Baker
45ab288e07 Print instead of logging
because we had to wait until the logger was set up
2017-11-13 18:32:08 +00:00
Richard van der Hoff
8b33ac8f6c Merge branch 'develop' into rav/invalid_request_utf8 2017-11-13 11:56:22 +00:00
Richard van der Hoff
63ef607f1f Fix tests for Store.__init__ update
Fix the test to pass the right number of args to the Store constructors
2017-11-13 10:46:08 +00:00
Richard van der Hoff
6cfee09be9 Make __init__ consistent across Store hierarchy
Add db_conn parameters to the `__init__` methods of the *Store classes, so that
they are all consistent, which makes the multiple inheritance work correctly
(and so that we can later extract mixins which can be used in the slavedstores)
2017-11-13 10:46:07 +00:00
Erik Johnston
ab335edb02 Revert "move _state_group_cache to statestore"
This reverts commit f5cf3638e9.
2017-11-13 10:05:33 +00:00
Erik Johnston
bfbf1e1f1a Up cache size of get_global_account_data_by_type_for_user 2017-11-13 09:52:11 +00:00
Travis Ralston
2d314b771f Add a route for determining who you are
Useful for applications which may have an access token, but no idea as to who owns it.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2017-11-12 23:39:38 -07:00
Richard van der Hoff
5d15abb120 Bit more logging 2017-11-10 16:58:04 +00:00
Richard van der Hoff
46790f50cf Cache failures in url_preview handler
Reshuffle the caching logic in the url_preview handler so that failures are
cached (and to generally simplify things and fix the logcontext leaks).
2017-11-10 16:50:50 +00:00
Richard van der Hoff
4d0414c714 Merge pull request #2662 from matrix-org/rav/fix_mxids_again
Downcase userid on registration
2017-11-10 13:57:34 +00:00
Richard van der Hoff
e508145c9b Add some more comments appservice user registration
Explain why we don't validate userids registered via app services
2017-11-10 12:39:45 +00:00
Richard van der Hoff
e0ebd1e4bd Downcase userids for shared-secret registration 2017-11-10 12:39:05 +00:00
Richard van der Hoff
f90649eb2b Fix 500 on invalid utf-8 in request
If somebody sends us a request where the body is invalid utf-8, we should
return a 400 rather than a 500. (json.loads throws a UnicodeError in this
situation)

We might as well catch all Exceptions here: it seems very unlikely that we
would get a request that *isn't* caused by invalid json.
2017-11-10 09:15:39 +00:00
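The shape of the fix (using `SynapseError` as elsewhere in the codebase; the parsing helper shown is simplified):

```python
import json

from synapse.api.errors import SynapseError

def parse_json_from_request(request):
    # json.loads raises UnicodeDecodeError, not ValueError, on an invalid
    # utf-8 body, so catch Exception broadly and map anything to a 400
    try:
        return json.loads(request.content.read())
    except Exception as e:
        raise SynapseError(400, "Content not JSON: %s" % (e,))
```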
Richard van der Hoff
9b599bc18d Downcase userid on registration
Force username to lowercase before attempting to register

https://github.com/matrix-org/synapse/issues/2660
2017-11-09 22:20:01 +00:00
Richard van der Hoff
9b803ccc98 Revert "Allow upper-case characters in mxids"
This reverts commit b70b646903.
2017-11-09 21:57:24 +00:00
Richard van der Hoff
1282086f58 Merge pull request #2659 from matrix-org/rav/apparently_we_dont_follow_our_own_spec_now
Allow upper-case characters in mxids
2017-11-09 20:06:47 +00:00
Richard van der Hoff
b70b646903 Allow upper-case characters in mxids
Because we're never going to be able to fix this :'(
2017-11-09 19:36:13 +00:00
Erik Johnston
2dce6b15c3 Fix typo 2017-11-09 15:56:16 +00:00
Erik Johnston
4e2b2508af Register group servlet 2017-11-09 15:49:42 +00:00
Luke Barnard
0ea5310290 Merge pull request #2657 from matrix-org/erikj/group_visibility_namespace
Namespace visibility options for groups
2017-11-09 15:41:51 +00:00
Erik Johnston
13735843c7 Namespace visibility options for groups 2017-11-09 15:27:18 +00:00
Richard van der Hoff
618c7b816a Merge pull request #2656 from matrix-org/rav/fix_deactivate
Fix 'NoneType' not iterable in /deactivate
2017-11-09 15:20:35 +00:00
Erik Johnston
0fcb5a8ce5 Merge pull request #2651 from matrix-org/erikj/update_group_room_settings
Change so that update group room visibility isn't an upsert
2017-11-09 15:20:24 +00:00
Richard van der Hoff
889102315e Fix 'NoneType' not iterable in /deactivate
make sure we actually return a value from user_delete_access_tokens
2017-11-09 15:15:33 +00:00
David Baker
b2a788e902 Make the commented config have the default 2017-11-09 10:11:42 +00:00
Erik Johnston
82e4bfb53d Add brackets 2017-11-09 10:06:42 +00:00
Erik Johnston
e8814410ef Have an explicit API to update room config 2017-11-08 16:13:27 +00:00
Erik Johnston
94ff2cda73 Revert "Modify group room association API to allow modification of is_public" 2017-11-08 15:43:34 +00:00
Erik Johnston
d305987b40 Merge pull request #2631 from xyzz/fix_appservice_event_backlog
Fix appservices being backlogged and not receiving new events due to a bug in notify_interested_services
2017-11-08 11:54:10 +00:00
Erik Johnston
167eb01d83 Merge pull request #2637 from spantaleev/avoid-noop-media-deletes
Avoid no-op media deletes
2017-11-08 11:53:27 +00:00
David Baker
ad408beb66 better comments 2017-11-08 11:50:08 +00:00
David Baker
1b870937ae Log if any of the old config flags are set 2017-11-08 11:46:24 +00:00
David Baker
2a98ba0ed3 Rename redact_content option to include_content
The redact_content option never worked because it read the wrong config
section. The PR introducing it
(https://github.com/matrix-org/synapse/pull/2301) had feedback suggesting the
name be changed to not re-use the term 'redact' but this wasn't
incorporated.

This renames the option to give it a less confusing name, and also
means that people who've set the redact_content option won't suddenly
see a behaviour change when upgrading synapse, but instead can set
include_content if they want to.

This PR also updates the wording of the config comment to clarify
that this has no effect on event_id_only push.

Includes https://github.com/matrix-org/synapse/pull/2422
2017-11-08 10:35:30 +00:00
Richard van der Hoff
02a9a93bde Merge pull request #2649 from matrix-org/rav/fix_delta_on_state_res
Fix bug in state group storage
2017-11-08 09:22:13 +00:00
Richard van der Hoff
e148438e97 s/items/iteritems/ 2017-11-08 09:21:41 +00:00
Ilya Zhuravlev
d46386d57e Remove useless assignment in notify_interested_services 2017-11-07 22:23:22 +03:00
Matthew Hodgson
228ccf1fe3 Merge pull request #2643 from matrix-org/matthew/user_dir_typos
Fix various embarrassing typos around user_directory and add some doc.
2017-11-07 17:31:11 +00:00
Richard van der Hoff
780dbb378f Update deltas when doing auth resolution
Fixes a bug where the persisted state groups were different to those actually
being used after auth resolution.
2017-11-07 16:43:00 +00:00
Richard van der Hoff
1ca4288135 factor out _update_context_for_auth_events
This is duplicated, so let's factor it out before fixing it
2017-11-07 16:43:00 +00:00
Richard van der Hoff
f5cf3638e9 move _state_group_cache to statestore
this is internal to statestore, so let's keep it there.
2017-11-07 16:43:00 +00:00
Erik Johnston
5ef5e14ecc Merge pull request #2636 from farialima/me-master
Fix for #2635: correctly update rooms avatar/display name when modified by admin
2017-11-07 13:49:27 +00:00
Erik Johnston
76c9af193c Revert "Merge branch 'master' of github.com:matrix-org/synapse into develop"
This reverts commit f9b255cd62, reversing
changes made to 1bd654dabd.
2017-11-07 13:32:35 +00:00
Erik Johnston
f9b255cd62 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-11-07 13:31:03 +00:00
Krombel
44ad6dd4bf update prometheus-config to new format 2017-11-07 13:35:35 +01:00
Luke Barnard
1bd654dabd Merge pull request #2647 from matrix-org/luke/get-group-users-is-privileged
Return whether a user is an admin within a group
2017-11-07 12:05:11 +00:00
Luke Barnard
38b265cb51 Remember to pick is_admin out of the db 2017-11-07 11:24:04 +00:00
Luke Barnard
5561c09091 Return whether a user is an admin within a group 2017-11-07 11:18:45 +00:00
Matthew Hodgson
3db5ff69b2 Merge pull request #2576 from maximevaillancourt/exclude-noscript-url-preview
Ignore <noscript> tags when generating URL preview descriptions
2017-11-07 11:09:22 +00:00
Richard van der Hoff
ec12e7eada Merge pull request #2646 from matrix-org/rav/logging_for_limiter
Logging and logcontext fixes for Limiter
2017-11-07 10:47:00 +00:00
Matthew Hodgson
631fa4a1b7 create new indexes before dropping old ones to keep safetynet in place 2017-11-07 10:41:55 +00:00
Richard van der Hoff
bf993db11c Logging and logcontext fixes for Limiter
Add some logging to the Limiter in a similar spirit to the Linearizer, to help
debug issues.

Also fix a logcontext leak.

Also refactor slightly to avoid throwing exceptions.
2017-11-07 00:48:57 +00:00
Matthew Hodgson
4ad883398f s/users_in_pubic_room/users_in_public_rooms/g 2017-11-04 19:39:40 +00:00
Matthew Hodgson
d802e8ca6a s/users_in_pubic_room/users_in_public_rooms/g 2017-11-04 19:38:13 +00:00
Matthew Hodgson
a100700630 fix copyright.... 2017-11-04 19:35:49 +00:00
Matthew Hodgson
b6b075fd49 s/popualte/populate/ 2017-11-04 19:35:33 +00:00
Matthew Hodgson
d1622e080f s/intial/initial/ 2017-11-04 19:35:14 +00:00
Matthew Hodgson
2ac6deafb7 simplify instructions for regenerating user_dir 2017-11-04 19:34:59 +00:00
Slavi Pantaleev
805196fbeb Avoid no-op media deletes
If there are no media entries to delete,
avoid creating transactions, prepared statements
and unnecessary log entries.

Signed-off-by: Slavi Pantaleev <slavi@devture.com>
2017-11-04 09:50:15 +02:00
Francois Granade
f103b91ffa removed unused import flagged by flake8 2017-11-03 18:45:49 +01:00
Francois Granade
fa4f337b49 Fix for issue 2635: correctly update rooms avatar/display name when modified by admin 2017-11-03 18:25:04 +01:00
Ilya Zhuravlev
8a4a0ddea6 Fix appservice tests to account for new behavior of notify_interested_services 2017-11-02 23:19:57 +03:00
Ilya Zhuravlev
45fbe4ff67 Fix appservices being backlogged and not receiving new events due to a bug in notify_interested_services 2017-11-02 22:49:43 +03:00
David Baker
f851bc8182 Merge pull request #2630 from matrix-org/luke/fix-rooms-in-group
Make the get_rooms_in_group API more sane
2017-11-02 17:23:17 +00:00
David Baker
9e09a1800b Merge pull request #2629 from matrix-org/rav/register_inhibit_login
support inhibit_login in /register
2017-11-02 16:51:35 +00:00
Luke Barnard
a34c586a89 Make the get_rooms_in_group API more sane
Return entries with is_public = True when they're public and is_public = False otherwise.
2017-11-02 16:42:30 +00:00
Richard van der Hoff
6c3a02072b support inhibit_login in /register
Allow things to pass inhibit_login when registering to ... inhibit logins.
2017-11-02 16:31:07 +00:00
David Baker
4a6754baf2 Merge pull request #2628 from matrix-org/rav/module_api_hooks
Add more hooks to ModuleApi
2017-11-02 15:37:14 +00:00
David Baker
4b36897cd9 Merge remote-tracking branch 'origin/develop' into rav/module_api_hooks 2017-11-02 15:19:17 +00:00
David Baker
d4553818a0 Merge pull request #2627 from matrix-org/rav/custom_rest_endpoints
Add a hook for custom rest endpoints
2017-11-02 15:18:37 +00:00
David Baker
6b6f03ae05 Merge pull request #2626 from matrix-org/rav/refactor_module_api
Factor _AccountHandler proxy out to ModuleApi
2017-11-02 15:15:30 +00:00
David Baker
77e3757fa9 Merge pull request #2625 from matrix-org/rav/named_resource_refactor
Factor out _configure_named_resource
2017-11-02 15:11:30 +00:00
Richard van der Hoff
6b60f7dca0 Add more hooks to ModuleApi
add `get_user_by_req` and `invalidate_access_token`
2017-11-02 14:37:39 +00:00
Richard van der Hoff
fcdfc911ee Add a hook for custom rest endpoints
Let the user specify custom modules which can be used for implementing extra
endpoints.
2017-11-02 14:36:55 +00:00
Richard van der Hoff
1189be43a2 Factor _AccountHandler proxy out to ModuleApi
We're going to need to use this from places that aren't password auth, so let's
move it to a proper class.
2017-11-02 14:36:11 +00:00
Richard van der Hoff
6650a07ede Factor out _configure_named_resource
This was a bit of a code vomit, so let's factor it out to preserve some sanity
2017-11-02 14:33:37 +00:00
David Baker
b19d9e2174 Merge pull request #2624 from matrix-org/rav/password_provider_notify_logout
Notify auth providers on logout
2017-11-02 10:55:17 +00:00
David Baker
1f080a6c97 Merge pull request #2623 from matrix-org/rav/callbacks_for_auth_providers
Allow password_auth_providers to return a callback
2017-11-02 10:49:03 +00:00
David Baker
04897c9dc1 Merge pull request #2622 from matrix-org/rav/db_access_for_auth_providers
Let auth providers get to the database
2017-11-02 10:41:25 +00:00
Richard van der Hoff
979eed4362 Fix user-interactive password auth
this got broken in the previous commit
2017-11-01 17:03:20 +00:00
Richard van der Hoff
bc8a5c0330 Notify auth providers on logout
Provide a hook by which auth providers can be notified of logouts.
2017-11-01 16:51:51 +00:00
Richard van der Hoff
4c8f94ac94 Allow password_auth_providers to return a callback
... so that they have a way to record access tokens.
2017-11-01 16:51:03 +00:00
Richard van der Hoff
846a94fbc9 Merge pull request #2620 from matrix-org/rav/auth_non_password
Let password auth providers handle arbitrary login types
2017-11-01 16:45:33 +00:00
Richard van der Hoff
3cd6b22c7b Let password auth providers handle arbitrary login types
Provide a hook where password auth providers can say they know about other
login types, and get passed the relevant parameters
2017-11-01 16:43:57 +00:00
David Baker
c9b9ef575b Merge pull request #2621 from matrix-org/rav/refactor_accesstoken_delete
Move access token deletion into auth handler
2017-11-01 16:26:06 +00:00
Matthew Hodgson
275826f234 Merge pull request #2617 from matrix-org/matthew/auto-displayname
automatically set default displayname on register
2017-11-01 16:21:16 +00:00
David Baker
4f0488b307 Merge remote-tracking branch 'origin/develop' into rav/refactor_accesstoken_delete 2017-11-01 16:20:19 +00:00
David Baker
e5e930aec3 Merge pull request #2615 from matrix-org/rav/break_auth_device_dep
Break dependency of auth_handler on device_handler
2017-11-01 16:06:31 +00:00
David Baker
fbbacb284e Merge pull request #2613 from matrix-org/rav/kill_refresh_tokens
Remove the last vestiges of refresh_tokens
2017-11-01 15:57:35 +00:00
Matthew Hodgson
9f7a555b4e switch to setting default displayname in the storage layer
to avoid clobbering guest user displaynames on registration
2017-11-01 15:51:30 +00:00
Richard van der Hoff
dd13310fb8 Move access token deletion into auth handler
Also move duplicated deactivation code into the auth handler.

I want to add some hooks when we deactivate an access token, so let's bring it
all in here so that there's somewhere to put it.
2017-11-01 15:46:22 +00:00
David Baker
691cc4e036 Merge pull request #2618 from matrix-org/dbkr/log_login_requests
Log login requests
2017-11-01 14:11:28 +00:00
David Baker
0bb253f37b Apparently this is python 2017-11-01 14:02:52 +00:00
David Baker
59e7e62c4b Log login requests
Carefully though, to avoid logging passwords
2017-11-01 13:58:01 +00:00
Matthew Hodgson
f8420d6279 automatically set default displayname on register
to avoid leaking ugly MXIDs and cluttering up the timeline with
displayname changes as well as membership joins for autojoin rooms
(e.g. the status autojoin rooms), automatically set the displayname
to match the localpart of the mxid upon registration.
2017-11-01 13:15:41 +00:00
Luke Barnard
99354b430e Merge pull request #2612 from matrix-org/luke/groups-room-relationship-is-public
Modify group room association API to allow modification of is_public
2017-11-01 11:08:36 +00:00
Richard van der Hoff
74c56f794c Break dependency of auth_handler on device_handler
I'm going to need to make the device_handler depend on the auth_handler, so I
need to break this dependency to avoid a cycle.

It turns out that the auth_handler was only using the device_handler in one
place, an edge case which we can handle more elegantly by throwing an error
rather than fixing it up.
2017-11-01 10:27:06 +00:00
Richard van der Hoff
02237ce725 Fix tests for refresh_token removal 2017-11-01 10:19:42 +00:00
Luke Barnard
318a249c8b Leave is_public as required argument of update_room_group_association 2017-11-01 09:36:01 +00:00
Luke Barnard
207fabbc6a Update docs for updating room group association 2017-11-01 09:35:15 +00:00
Richard van der Hoff
356bcafc44 Remove the last vestiges of refresh_tokens 2017-10-31 20:35:58 +00:00
Richard van der Hoff
3e0aaad190 Let auth providers get to the database
Somewhat open to abuse, but also somewhat unavoidable :/
2017-10-31 17:22:29 +00:00
Richard van der Hoff
a72e4e3e28 Merge pull request #2610 from matrix-org/rav/schema_for_pw_providers
DB schema interface for password auth providers
2017-10-31 17:20:47 +00:00
Luke Barnard
13b3d7b4a0 Flake8 2017-10-31 17:20:11 +00:00
Richard van der Hoff
e025aec028 Merge pull request #2611 from matrix-org/dbkr/port_script_drop_nuls
Make the port script drop NUL values in all tables
2017-10-31 17:16:08 +00:00
Luke Barnard
20fe347906 Modify group room association API to allow modification of is_public
also includes renamings to make things more consistent.
2017-10-31 17:04:28 +00:00
David Baker
9d419f48e6 Make the port script drop NUL values in all tables
Postgres doesn't support NULs in strings, which makes the script throw an
exception and stop if any values contain \0. Drop them, with an appropriate
warning.
2017-10-31 16:58:49 +00:00
Richard van der Hoff
9ded00f221 fix tests 2017-10-31 14:21:13 +00:00
Richard van der Hoff
1650eb5847 DB schema interface for password auth providers
Provide an interface by which password auth providers can register db schema
files to be run at startup
2017-10-31 14:01:53 +00:00
David Baker
c31a7c3ff6 Merge pull request #2609 from matrix-org/rav/refactor_login
Refactor some logic from LoginRestServlet into AuthHandler
2017-10-31 13:51:36 +00:00
Richard van der Hoff
b8e54fbc08 Merge pull request #2607 from matrix-org/rav/cleanup_ldap_hacks
Clean up backwards-compat hacks for ldap
2017-10-31 13:41:12 +00:00
David Baker
a1f8b0fd64 Merge pull request #2608 from matrix-org/rav/password_provider_doc
Start some documentation on password providers
2017-10-31 13:40:10 +00:00
Richard van der Hoff
1b65ae00ac Refactor some logic from LoginRestServlet into AuthHandler
I'm going to need some more flexibility in handling login types in password
auth providers, so as a first step, move some stuff from LoginRestServlet into
AuthHandler.

In particular, we pass everything other than SAML, JWT and token logins down to
the AuthHandler, which now has responsibility for checking the login type and
fishing the password out of the login dictionary, as well as qualifying the
user_id if need be. Ideally SAML, JWT and token would go that way too, but
there's no real need for it right now and I'm trying to minimise impact.

This commit *should* be non-functional.
2017-10-31 10:48:41 +00:00
Richard van der Hoff
ebda45de4c Start some documentation on password providers
Document the existing interface, before I start adding new stuff.
2017-10-31 10:47:52 +00:00
Richard van der Hoff
ffc574a6f9 Clean up backwards-compat hacks for ldap
try to make the backwards-compat flows follow the same code paths as the modern
impl.

This commit should be non-functional.
2017-10-31 10:47:02 +00:00
Richard van der Hoff
e2f4190209 Merge pull request #2605 from matrix-org/luke/fix-group-creation-error-wording
Fix wording on group creation error
2017-10-30 16:06:42 +00:00
Luke Barnard
9bc17fc5fb Fix wording on group creation error 2017-10-30 15:17:23 +00:00
Matthew Hodgson
208a6647f1 fix typo 2017-10-29 20:54:20 +00:00
Matthew Hodgson
e51c2bcaef move url_previews to MD as RST does my head in 2017-10-29 20:47:17 +00:00
Erik Johnston
71a1bd53b2 Merge pull request #2599 from matrix-org/erikj/groups_invite
Fix typo when checking if user is invited to group
2017-10-27 17:34:54 +01:00
Erik Johnston
d0abb4e8e6 Fix typo when checking if user is invited to group 2017-10-27 16:57:19 +01:00
Erik Johnston
977078f06d Fix bad merge 2017-10-27 15:10:50 +01:00
Erik Johnston
6980c4557e Merge branch 'erikj/attestation_jitter' of github.com:matrix-org/synapse into develop 2017-10-27 15:09:05 +01:00
Erik Johnston
632baf799e Merge pull request #2598 from matrix-org/revert-2596-erikj/attestation_jitter
Revert "Add jitter to validity period of attestations"
2017-10-27 15:08:29 +01:00
Erik Johnston
af92f5b00f Revert "Add jitter to validity period of attestations" 2017-10-27 15:07:21 +01:00
Erik Johnston
4ab8abbc2b Merge branch 'erikj/attestation_local_fix' of github.com:matrix-org/synapse into develop 2017-10-27 15:07:08 +01:00
Erik Johnston
b1e62d4a57 Merge pull request #2596 from matrix-org/erikj/attestation_jitter
Add jitter to validity period of attestations
2017-10-27 14:20:25 +01:00
Erik Johnston
6af3656deb Merge pull request #2595 from matrix-org/erikj/attestation_commnet
Add comment about attestations
2017-10-27 14:20:19 +01:00
Richard van der Hoff
4d83632009 Merge pull request #2591 from matrix-org/rav/device_delete_auth
Device deletion: check UI auth matches access token
2017-10-27 12:30:10 +01:00
Richard van der Hoff
110b373e9c Merge pull request #2589 from matrix-org/rav/as_deactivate_account
Allow ASes to deactivate their own users
2017-10-27 12:29:32 +01:00
Erik Johnston
ca571b0ec3 Add jitter to validity period of attestations
This helps ensure that the renewals of attestations are spread out more
evenly.
2017-10-27 11:57:27 +01:00
Luke Barnard
d8c26162a1 Merge pull request #2582 from matrix-org/luke/group-is-public
Add is_public to groups table to allow for private groups
2017-10-27 11:41:13 +01:00
Erik Johnston
c067088747 Add comment about attestations 2017-10-27 11:35:41 +01:00
Luke Barnard
5451cc7792 Request is_public from database 2017-10-27 11:27:43 +01:00
Luke Barnard
124314672f group is dict 2017-10-27 11:08:19 +01:00
Luke Barnard
6362298fa5 Create groups with is_public = True 2017-10-27 11:04:20 +01:00
Richard van der Hoff
8b56977b6f Merge pull request #2586 from matrix-org/rav/frontend_proxy_auth_header
Front-end proxy: pass through auth header
2017-10-27 11:01:50 +01:00
Richard van der Hoff
173567a7f2 Docstring for post_urlencoded_get_json 2017-10-27 10:59:50 +01:00
Luke Barnard
c7d9f25d22 Fix create_group to pass requester_user_id 2017-10-27 10:57:20 +01:00
Erik Johnston
e27b76d117 Import logger 2017-10-27 10:54:02 +01:00
Richard van der Hoff
8854c039f2 Merge pull request #2585 from matrix-org/rav/unstable_to_r0
Support /keys/upload on /r0 as well as /unstable
2017-10-27 10:53:48 +01:00
Richard van der Hoff
14f581abc2 Merge pull request #2584 from matrix-org/rav/fix_httpclient_logcontexts
Fix logcontext leaks in httpclient
2017-10-27 10:53:29 +01:00
Luke Barnard
2ca46c7afc Correct logic for checking private group membership 2017-10-27 10:48:01 +01:00
Erik Johnston
82d8c1bacb Fixup 2017-10-27 10:30:21 +01:00
Erik Johnston
2fd9831f7c Merge pull request #2574 from matrix-org/erikj/room_list_fixes
Add logging and fix log contexts for publicRooms
2017-10-27 10:01:23 +01:00
Erik Johnston
195abfe7a5 Remove incorrect attestations 2017-10-27 09:58:13 +01:00
Erik Johnston
d8dde19f04 Log if we try to do attestations for our own user and group 2017-10-27 09:55:01 +01:00
Erik Johnston
585972b51a Don't generate group attestations for local users 2017-10-27 09:46:56 +01:00
Richard van der Hoff
7a6546228b Device deletion: check UI auth matches access token
(otherwise there's no point in the UI auth)
2017-10-27 00:04:31 +01:00
Matthew Hodgson
92f680889d spell out need for libxml2 for lxml to work 2017-10-27 00:02:29 +01:00
Richard van der Hoff
785bd7fd75 Allow ASes to deactivate their own users 2017-10-27 00:01:00 +01:00
Richard van der Hoff
c89e6aadff Merge pull request #2581 from matrix-org/rav/fix_init_with_no_logfile
Fix error when running synapse with no logfile
2017-10-26 22:16:57 +01:00
Richard van der Hoff
54a2525133 Front-end proxy: pass through auth header
So that access-token-in-an-auth-header works.
2017-10-26 18:19:01 +01:00
Richard van der Hoff
0a5866bec9 Support /keys/upload on /r0 as well as /unstable
(So that we can stop riot relying on it in /unstable)
2017-10-26 18:18:23 +01:00
Richard van der Hoff
0d8e3ad48b Fix logcontext leaks in httpclient
`preserve_context_over_fn` is borked
2017-10-26 18:17:10 +01:00
Richard van der Hoff
12ef02dc3d SimpleHTTPClient: add support for headers
Sometimes we need to pass headers into these methods
2017-10-26 17:59:50 +01:00
Luke Barnard
69e8a05f35 Make it work 2017-10-26 17:55:58 +01:00
Luke Barnard
007cd48af6 Recreate groups table instead of adding column
Adding a column with a non-constant default is not possible in sqlite3
2017-10-26 17:55:22 +01:00
Luke Barnard
713e60b9b6 Awful hack to get default true 2017-10-26 17:38:14 +01:00
Luke Barnard
e86cefcb6f Add groups table to BOOLEAN_COLUMNS in synapse_port_db 2017-10-26 17:24:54 +01:00
Luke Barnard
cfa4e658e0 Bump schema version to 46 2017-10-26 17:23:49 +01:00
Luke Barnard
595fe67f01 delint 2017-10-26 17:20:24 +01:00
Luke Barnard
9b2feef9eb Add is_public to groups table to allow for private groups
Prevent group API access to non-members for private groups

Also make all the group code paths consistent with `requester_user_id` always being the User ID of the requesting user.
2017-10-26 16:51:32 +01:00
Richard van der Hoff
f7f90e0c8d Fix error when running synapse with no logfile
Fixes 'UnboundLocalError: local variable 'sighup' referenced before assignment'
2017-10-26 16:45:20 +01:00
Richard van der Hoff
1dd0f53b21 Merge pull request #2579 from krombel/move_unstable_to_r0
register some /unstable endpoints in /r0 as well
2017-10-26 16:10:43 +01:00
Krombel
8299b323ee add release endpoints for /thirdparty 2017-10-26 16:58:20 +02:00
Krombel
9b436c8b4c register some /unstable endpoints in /r0 as well 2017-10-26 15:22:50 +02:00
Richard van der Hoff
5b38fdab31 Merge pull request #2578 from matrix-org/rav/code_style_imports
Code_style updates
2017-10-26 11:55:59 +01:00
Richard van der Hoff
1eb300e1fc Document import rules 2017-10-26 11:55:41 +01:00
Richard van der Hoff
f7f6bfaae4 code_style: more formatting 2017-10-26 11:55:41 +01:00
Erik Johnston
4ea882ede4 Merge pull request #2577 from matrix-org/erikj/fix_port
Fix port script
2017-10-26 11:54:12 +01:00
Erik Johnston
566e21eac8 Update room_list.py 2017-10-26 11:39:54 +01:00
Richard van der Hoff
351cc35342 code_style.rst: a couple of tidyups 2017-10-26 10:29:26 +01:00
Erik Johnston
37d766aedd Fix port script
We changed _simple_update_one_txn to use _simple_update_txn but didn't
yank it out in the port script.

Fixes #2565
2017-10-26 10:01:03 +01:00
Maxime Vaillancourt
5287e57c86 Ignore noscript tags when generating URL previews 2017-10-25 20:44:34 -04:00
Erik Johnston
2a7e9faeec Do logcontexts outside ResponseCache 2017-10-25 15:21:08 +01:00
Erik Johnston
1ad1ba9e6a Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-10-25 10:27:23 +01:00
Erik Johnston
33a9026cdf Add logging and fix log contexts for publicRooms 2017-10-25 10:26:06 +01:00
Matthew Hodgson
efd0f5a3c5 tip for generating tls_fingerprints 2017-10-24 18:49:49 +01:00
292 changed files with 15035 additions and 6838 deletions

.gitignore vendored
View File

@@ -46,3 +46,5 @@ static/client/register/register_config.js
env/
*.config
.vscode/

View File

@@ -1,14 +1,22 @@
sudo: false
language: python
python: 2.7
# tell travis to cache ~/.cache/pip
cache: pip
env:
- TOX_ENV=packaging
- TOX_ENV=pep8
- TOX_ENV=py27
matrix:
include:
- python: 2.7
env: TOX_ENV=packaging
- python: 2.7
env: TOX_ENV=pep8
- python: 2.7
env: TOX_ENV=py27
- python: 3.6
env: TOX_ENV=py36
install:
- pip install tox

View File

@@ -1,3 +1,356 @@
Changes in synapse <unreleased>
===============================
Potentially breaking change:
* Make Client-Server API return 401 for invalid token (PR #3161).
This changes the Client-Server API to return a 401 error code instead of 403
when the access token is unrecognised. This is the behaviour required by the
specification, but some clients may be relying on the old, incorrect
behaviour.
Thanks to @NotAFile for fixing this.
Changes in synapse v0.28.1 (2018-05-01)
=======================================
SECURITY UPDATE
* Clamp the allowed values of event depth received over federation to be
[0, 2^63 - 1]. This mitigates an attack where malicious events
injected with depth = 2^63 - 1 render rooms unusable. Depth is used to
determine the cosmetic ordering of events within a room, and so the ordering
of events in such a room will default to using stream_ordering rather than depth
(topological_ordering).
This is a temporary solution to mitigate abuse in the wild, whilst a long-term solution
is being implemented to improve how the depth parameter is used (a sketch of the
clamp follows this list).
Full details at
https://docs.google.com/document/d/1I3fi2S-XnpO45qrpCsowZv8P8dHcNZ4fsBsbOW7KABI
* Pin Twisted to <18.4 until we stop using the private _OpenSSLECCurve API.
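Purely as an illustration (hypothetical code, not Synapse's actual implementation), the depth clamp described above amounts to::

    MAX_DEPTH = 2**63 - 1  # largest value a signed 64-bit integer can hold

    def clamp_depth(depth):
        # Clamp a federation-supplied event depth into [0, 2**63 - 1].
        return max(0, min(int(depth), MAX_DEPTH))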
Changes in synapse v0.28.0 (2018-04-26)
=======================================
Bug Fixes:
* Fix quarantine media admin API and search reindex (PR #3130)
* Fix media admin APIs (PR #3134)
Changes in synapse v0.28.0-rc1 (2018-04-24)
===========================================
Minor performance improvement to federation sending and bug fixes.
(Note: This release does not include the delta state resolution implementation discussed in Matrix Live.)
Features:
* Add metrics for event processing lag (PR #3090)
* Add metrics for ResponseCache (PR #3092)
Changes:
* Synapse on PyPy (PR #2760) Thanks to @Valodim!
* move handling of auto_join_rooms to RegisterHandler (PR #2996) Thanks to @krombel!
* Improve handling of SRV records for federation connections (PR #3016) Thanks to @silkeh!
* Document the behaviour of ResponseCache (PR #3059)
* Preparation for py3 (PR #3061, #3073, #3074, #3075, #3103, #3104, #3106, #3107, #3109, #3110) Thanks to @NotAFile!
* update prometheus dashboard to use new metric names (PR #3069) Thanks to @krombel!
* use python3-compatible prints (PR #3074) Thanks to @NotAFile!
* Send federation events concurrently (PR #3078)
* Limit concurrent event sends for a room (PR #3079)
* Improve R30 stat definition (PR #3086)
* Send events to ASes concurrently (PR #3088)
* Refactor ResponseCache usage (PR #3093)
* Clarify that SRV may not point to a CNAME (PR #3100) Thanks to @silkeh!
* Use str(e) instead of e.message (PR #3103) Thanks to @NotAFile!
* Use six.itervalues in some places (PR #3106) Thanks to @NotAFile!
* Refactor store.have_events (PR #3117)
Bug Fixes:
* Return 401 for invalid access_token on logout (PR #2938) Thanks to @dklug!
* Return a 404 rather than a 500 on rejoining empty rooms (PR #3080)
* fix federation_domain_whitelist (PR #3099)
* Avoid creating events with huge numbers of prev_events (PR #3113)
* Reject events which have lots of prev_events (PR #3118)
Changes in synapse v0.27.4 (2018-04-13)
======================================
Changes:
* Update canonicaljson dependency (#3095)
Changes in synapse v0.27.3 (2018-04-11)
======================================
Bug fixes:
* URL quote path segments over federation (#3082)
Changes in synapse v0.27.3-rc2 (2018-04-09)
==========================================
v0.27.3-rc1 used a stale version of the develop branch, so the changelog overstates
the functionality. v0.27.3-rc2 is up to date; rc1 should be ignored.
Changes in synapse v0.27.3-rc1 (2018-04-09)
===========================================
Notable changes include API support for joinability of groups, as well as new
metrics and phone-home stats. The phone-home stats give better visibility of
system usage, so we can tweak synapse to work better for all users rather than
relying only on our own experience with matrix.org. We also now record the
'r30' stat, which is the measure we use to track overall growth of the Matrix
ecosystem. It is defined as:
Counts the number of native 30-day retained users, defined as:
* Users who created their account more than 30 days ago
* Who were last seen at most 30 days ago
* Where the gap between account creation and last_seen is more than 30 days
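Purely as an illustration (hypothetical code, not how synapse actually computes the stat), that definition corresponds to something like::

    import time

    THIRTY_DAYS = 30 * 24 * 60 * 60  # in seconds

    def count_r30_users(users, now=None):
        # `users` is an iterable of (created_ts, last_seen_ts) pairs, in seconds.
        now = time.time() if now is None else now
        return sum(
            1
            for created, last_seen in users
            if now - created > THIRTY_DAYS         # account is over 30 days old
            and now - last_seen <= THIRTY_DAYS     # seen in the last 30 days
            and last_seen - created > THIRTY_DAYS  # retained past the first 30 days
        )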
Features:
* Add joinability for groups (PR #3045)
* Implement group join API (PR #3046)
* Add counter metrics for calculating state delta (PR #3033)
* R30 stats (PR #3041)
* Measure time it takes to calculate state group ID (PR #3043)
* Add basic performance statistics to phone home (PR #3044)
* Add response size metrics (PR #3071)
* phone home cache size configurations (PR #3063)
Changes:
* Add a blurb explaining the main synapse worker (PR #2886) Thanks to @turt2live!
* Replace old style error catching with 'as' keyword (PR #3000) Thanks to @NotAFile!
* Use .iter* to avoid copies in StateHandler (PR #3006)
* Linearize calls to _generate_user_id (PR #3029)
* Remove last usage of ujson (PR #3030)
* Use simplejson throughout (PR #3048)
* Use static JSONEncoders (PR #3049)
* Remove uses of events.content (PR #3060)
* Improve database cache performance (PR #3068)
Bug fixes:
* Add room_id to the response of `rooms/{roomId}/join` (PR #2986) Thanks to @jplatte!
* Fix replication after switch to simplejson (PR #3015)
* 404 correctly on missing paths via NoResource (PR #3022)
* Fix error when claiming e2e keys from offline servers (PR #3034)
* fix tests/storage/test_user_directory.py (PR #3042)
* use PUT instead of POST for federating groups/m.join_policy (PR #3070) Thanks to @krombel!
* postgres port script: fix state_groups_pkey error (PR #3072)
Changes in synapse v0.27.2 (2018-03-26)
=======================================
Bug fixes:
* Fix bug which broke TCP replication between workers (PR #3015)
Changes in synapse v0.27.1 (2018-03-26)
=======================================
Meta release as v0.27.0 temporarily pointed to the wrong commit
Changes in synapse v0.27.0 (2018-03-26)
=======================================
No changes since v0.27.0-rc2
Changes in synapse v0.27.0-rc2 (2018-03-19)
===========================================
Pulls in v0.26.1
Bug fixes:
* Fix bug introduced in v0.27.0-rc1 that causes much increased memory usage in state cache (PR #3005)
Changes in synapse v0.26.1 (2018-03-15)
=======================================
Bug fixes:
* Fix bug where an invalid event caused server to stop functioning correctly,
due to parsing and serializing bugs in ujson library (PR #3008)
Changes in synapse v0.27.0-rc1 (2018-03-14)
===========================================
The common case for running Synapse is not to run separate workers, but for those that do, be aware that synctl no longer starts the main synapse when using the ``-a`` option with workers. A new worker file should be added with ``worker_app: synapse.app.homeserver``.
This release also begins the process of renaming a number of the metrics
reported to prometheus. See `docs/metrics-howto.rst <docs/metrics-howto.rst#block-and-response-metrics-renamed-for-0-27-0>`_.
Note that the v0.28.0 release will remove the deprecated metric names.
Features:
* Add ability for ASes to override message send time (PR #2754)
* Add support for custom storage providers for media repository (PR #2867, #2777, #2783, #2789, #2791, #2804, #2812, #2814, #2857, #2868, #2767)
* Add purge API features, see `docs/admin_api/purge_history_api.rst <docs/admin_api/purge_history_api.rst>`_ for full details (PR #2858, #2867, #2882, #2946, #2962, #2943)
* Add support for whitelisting 3PIDs that users can register. (PR #2813)
* Add ``/room/{id}/event/{id}`` API (PR #2766)
* Add an admin API to get all the media in a room (PR #2818) Thanks to @turt2live!
* Add ``federation_domain_whitelist`` option (PR #2820, #2821)
Changes:
* Continue to factor out processing from main process and into worker processes. See updated `docs/workers.rst <docs/workers.rst>`_ (PR #2892 - #2904, #2913, #2920 - #2926, #2947, #2847, #2854, #2872, #2873, #2874, #2928, #2929, #2934, #2856, #2976 - #2984, #2987 - #2989, #2991 - #2993, #2995, #2784)
* Ensure state cache is used when persisting events (PR #2864, #2871, #2802, #2835, #2836, #2841, #2842, #2849)
* Change the default config to bind on both IPv4 and IPv6 on all platforms (PR #2435) Thanks to @silkeh!
* No longer require a specific version of saml2 (PR #2695) Thanks to @okurz!
* Remove ``verbosity``/``log_file`` from generated config (PR #2755)
* Add and improve metrics and logging (PR #2770, #2778, #2785, #2786, #2787, #2793, #2794, #2795, #2809, #2810, #2833, #2834, #2844, #2965, #2927, #2975, #2790, #2796, #2838)
* When using synctl with workers, don't start the main synapse automatically (PR #2774)
* Minor performance improvements (PR #2773, #2792)
* Use a connection pool for non-federation outbound connections (PR #2817)
* Make it possible to run unit tests against postgres (PR #2829)
* Update pynacl dependency to 1.2.1 or higher (PR #2888) Thanks to @bachp!
* Remove ability for AS users to call /events and /sync (PR #2948)
* Use bcrypt.checkpw (PR #2949) Thanks to @krombel!
Bug fixes:
* Fix broken ``ldap_config`` config option (PR #2683) Thanks to @seckrv!
* Fix error message when user is not allowed to unban (PR #2761) Thanks to @turt2live!
* Fix publicised groups GET API (singular) over federation (PR #2772)
* Fix user directory when using ``user_directory_search_all_users`` config option (PR #2803, #2831)
* Fix error on ``/publicRooms`` when no rooms exist (PR #2827)
* Fix bug in quarantine_media (PR #2837)
* Fix url_previews when no Content-Type is returned from URL (PR #2845)
* Fix rare race in sync API when joining room (PR #2944)
* Fix slow event search, switch back from GIST to GIN indexes (PR #2769, #2848)
Changes in synapse v0.26.0 (2018-01-05)
=======================================
No changes since v0.26.0-rc1
Changes in synapse v0.26.0-rc1 (2017-12-13)
===========================================
Features:
* Add ability for ASes to publicise groups for their users (PR #2686)
* Add all local users to the user_directory and optionally search them (PR
#2723)
* Add support for custom login types for validating users (PR #2729)
Changes:
* Update example Prometheus config to new format (PR #2648) Thanks to
@krombel!
* Rename redact_content option to include_content in Push API (PR #2650)
* Declare support for r0.3.0 (PR #2677)
* Improve upserts (PR #2684, #2688, #2689, #2713)
* Improve documentation of workers (PR #2700)
* Improve tracebacks on exceptions (PR #2705)
* Allow guest access to group APIs for reading (PR #2715)
* Support for posting content in federation_client script (PR #2716)
* Delete devices and pushers on logouts etc (PR #2722)
Bug fixes:
* Fix database port script (PR #2673)
* Fix internal server error on login with ldap_auth_provider (PR #2678) Thanks
to @jkolo!
* Fix error on sqlite 3.7 (PR #2697)
* Fix OPTIONS on preview_url (PR #2707)
* Fix error handling on dns lookup (PR #2711)
* Fix wrong avatars when inviting multiple users when creating room (PR #2717)
* Fix 500 when joining matrix-dev (PR #2719)
Changes in synapse v0.25.1 (2017-11-17)
=======================================
Bug fixes:
* Fix login with LDAP and other password provider modules (PR #2678). Thanks to
@jkolo!
Changes in synapse v0.25.0 (2017-11-15)
=======================================
Bug fixes:
* Fix port script (PR #2673)
Changes in synapse v0.25.0-rc1 (2017-11-14)
===========================================
Features:
* Add is_public to groups table to allow for private groups (PR #2582)
* Add a route for determining who you are (PR #2668) Thanks to @turt2live!
* Add more features to the password providers (PR #2608, #2610, #2620, #2622,
#2623, #2624, #2626, #2628, #2629)
* Add a hook for custom rest endpoints (PR #2627)
* Add API to update group room visibility (PR #2651)
Changes:
* Ignore <noscript> tags when generating URL preview descriptions (PR #2576)
Thanks to @maximevaillancourt!
* Register some /unstable endpoints in /r0 as well (PR #2579) Thanks to
@krombel!
* Support /keys/upload on /r0 as well as /unstable (PR #2585)
* Front-end proxy: pass through auth header (PR #2586)
* Allow ASes to deactivate their own users (PR #2589)
* Remove refresh tokens (PR #2613)
* Automatically set default displayname on register (PR #2617)
* Log login requests (PR #2618)
* Always return `is_public` in the `/groups/:group_id/rooms` API (PR #2630)
* Avoid no-op media deletes (PR #2637) Thanks to @spantaleev!
* Fix various embarrassing typos around user_directory and add some doc. (PR
#2643)
* Return whether a user is an admin within a group (PR #2647)
* Namespace visibility options for groups (PR #2657)
* Downcase UserIDs on registration (PR #2662)
* Cache failures when fetching URL previews (PR #2669)
Bug fixes:
* Fix port script (PR #2577)
* Fix error when running synapse with no logfile (PR #2581)
* Fix UI auth when deleting devices (PR #2591)
* Fix typo when checking if user is invited to group (PR #2599)
* Fix the port script to drop NUL values in all tables (PR #2611)
* Fix appservices being backlogged and not receiving new events due to a bug in
notify_interested_services (PR #2631) Thanks to @xyzz!
* Fix updating rooms avatar/display name when modified by admin (PR #2636)
Thanks to @farialima!
* Fix bug in state group storage (PR #2649)
* Fix 500 on invalid utf-8 in request (PR #2663)
Changes in synapse v0.24.1 (2017-10-24)
=======================================

View File

@@ -30,8 +30,12 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use Jenkins for continuous integration (http://matrix.org/jenkins), and
typically all pull requests get automatically tested by Jenkins: if your change breaks the build, Jenkins will yell about it in #matrix-dev:matrix.org, so please lurk there and keep an eye open.
We use `Jenkins <http://matrix.org/jenkins>`_ and
`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an administrator to start them. If your change
breaks the build, this will be shown in github, so please keep an eye on the
pull request for feedback.
Code style
~~~~~~~~~~
@@ -115,4 +119,4 @@ can't be accepted. Git makes this trivial - just use the -s flag when you do
Conclusion
~~~~~~~~~~
That's it! Matrix is a very open and collaborative project as you might expect given our obsession with open communication. If we're going to successfully matrix together all the fragmented communication technologies out there we are reliant on contributions and collaboration from the community to do so. So please get involved - and we hope you have as much fun hacking on Matrix as we do!

View File

@@ -157,8 +157,8 @@ if you prefer.
In case of problems, please see the _`Troubleshooting` section below.
Alternatively, Silvio Fricke has contributed a Dockerfile to automate the
above in Docker at https://registry.hub.docker.com/u/silviof/docker-matrix/.
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate the
above in Docker at https://hub.docker.com/r/avhost/docker-matrix/tags/
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
@@ -354,6 +354,10 @@ https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one
Fedora
------
Synapse is in the Fedora repositories as ``matrix-synapse``::
sudo dnf install matrix-synapse
Oleg Girko provides Fedora RPMs at
https://obs.infoserver.lv/project/monitor/matrix-synapse
@@ -610,6 +614,9 @@ should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
$ dig -t srv _matrix._tcp.example.com
_matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com.
Note that the server hostname cannot be an alias (CNAME record): it has to point
directly to the server hosting the synapse instance.
You can then configure your homeserver to use ``<yourdomain.com>`` as the domain in
its user-ids, by setting ``server_name``::
@@ -632,6 +639,11 @@ largest boxes pause for thought.)
Troubleshooting
---------------
You can use the federation tester to check if your homeserver is all set:
``https://matrix.org/federationtester/api/report?server_name=<your_server_name>``
If any of the attributes under "checks" is false, federation won't work.
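As a minimal sketch (a hypothetical helper using the Python 3 standard library, and assuming the attributes under "checks" are simple booleans), that check could be automated like so::

    import json
    from urllib.request import urlopen

    def federation_ok(server_name):
        # Query the federation tester described above.
        url = ("https://matrix.org/federationtester/api/report?server_name=%s"
               % server_name)
        report = json.load(urlopen(url))
        # Federation should work only if every check passed.
        return all(report.get("checks", {}).values())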
The typical failure mode with federation is that when you try to join a room,
it is rejected with "401: Unauthorized". Generally this means that other
servers in the room couldn't access yours. (Joining a room over federation is a
@@ -823,7 +835,9 @@ spidering 'internal' URLs on your network. At the very least we recommend that
your loopback and RFC1918 IP addresses are blacklisted.
This also requires the optional lxml and netaddr python dependencies to be
installed.
installed. This in turn requires the libxml2 library to be available - on
Debian/Ubuntu this means ``apt-get install libxml2-dev``, or equivalent for
your OS.
Password reset
@@ -883,6 +897,17 @@ This should end with a 'PASSED' result::
PASSED (successes=143)
Running the Integration Tests
=============================
Synapse is accompanied by `SyTest <https://github.com/matrix-org/sytest>`_,
a Matrix homeserver integration testing suite, which uses HTTP requests to
access the API as a Matrix client would. It is able to run Synapse directly from
the source tree, so installation of the server is not required.
Testing with SyTest is recommended for verifying that changes related to the
Client-Server API are functioning correctly. See the `installation instructions
<https://github.com/matrix-org/sytest#installing>`_ for details.
Building Internal API Documentation
===================================

View File

@@ -48,6 +48,18 @@ returned by the Client-Server API:
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to $NEXT_VERSION
====================
This release expands the anonymous usage stats sent if the opt-in
``report_stats`` configuration is set to ``true``. We now capture RSS memory
and cpu use at a very coarse level. This requires administrators to install
the optional ``psutil`` python module.
We would appreciate it if you could assist by ensuring this module is available
and ``report_stats`` is enabled. This will let us see if performance changes to
synapse are having an impact on the general community.
Upgrading to v0.15.0
====================

contrib/README.rst Normal file
View File

@@ -0,0 +1,10 @@
Community Contributions
=======================
Everything in this directory is a project submitted by the community that may be useful
to others. As such, the project maintainers cannot guarantee support, stability
or backwards compatibility of these projects.
Files in this directory should *not* be relied on directly, as they may not
continue to work or exist in the future. If you wish to use any of these files then
they should be copied to avoid them breaking from underneath you.

View File

@@ -22,6 +22,8 @@ import argparse
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
from six import string_types
def make_graph(file_name, room_id, file_prefix, limit):
print "Reading lines"
@@ -58,7 +60,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for key, value in unfreeze(event.get_dict()["content"]).items():
if value is None:
value = "<null>"
elif isinstance(value, basestring):
elif isinstance(value, string_types):
pass
else:
value = json.dumps(value)

View File

@@ -5,7 +5,8 @@ To use it, first install prometheus by following the instructions at
http://prometheus.io/
Then add a new job to the main prometheus.conf file:
### for Prometheus v1
Add a new job to the main prometheus.conf file:
job: {
name: "synapse"
@@ -15,6 +16,22 @@ Then add a new job to the main prometheus.conf file:
}
}
### for Prometheus v2
Add a new job to the main prometheus.yml file:
- job_name: "synapse"
metrics_path: "/_synapse/metrics"
# when endpoint uses https:
scheme: "https"
static_configs:
- targets: ['SERVER.LOCATION:PORT']
To use `synapse.rules` add
rule_files:
- "/PATH/TO/synapse-v2.rules"
Metrics are disabled by default when running synapse; they must be enabled
with the 'enable-metrics' option, either in the synapse config file or as a
command-line option.

View File

@@ -202,11 +202,11 @@ new PromConsole.Graph({
<h1>Requests</h1>
<h3>Requests by Servlet</h3>
<div id="synapse_http_server_requests_servlet"></div>
<div id="synapse_http_server_request_count_servlet"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet"),
expr: "rate(synapse_http_server_requests:servlet[2m])",
node: document.querySelector("#synapse_http_server_request_count_servlet"),
expr: "rate(synapse_http_server_request_count:servlet[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -215,11 +215,11 @@ new PromConsole.Graph({
})
</script>
<h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
<div id="synapse_http_server_requests_servlet_minus_events"></div>
<div id="synapse_http_server_request_count_servlet_minus_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet_minus_events"),
expr: "rate(synapse_http_server_requests:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
node: document.querySelector("#synapse_http_server_request_count_servlet_minus_events"),
expr: "rate(synapse_http_server_request_count:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -233,7 +233,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_time_avg"),
expr: "rate(synapse_http_server_response_time:total[2m]) / rate(synapse_http_server_response_time:count[2m]) / 1000",
expr: "rate(synapse_http_server_response_time_seconds[2m]) / rate(synapse_http_server_response_count[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -276,7 +276,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_ru_utime"),
expr: "rate(synapse_http_server_response_ru_utime:total[2m])",
expr: "rate(synapse_http_server_response_ru_utime_seconds[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -291,7 +291,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
expr: "rate(synapse_http_server_response_db_txn_duration:total[2m])",
expr: "rate(synapse_http_server_response_db_txn_duration_seconds[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -306,7 +306,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_send_time_avg"),
expr: "rate(synapse_http_server_response_time:total{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_time:count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
expr: "rate(synapse_http_server_response_time_second{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,

View File

@@ -1,10 +1,10 @@
synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method)
synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet)
synapse_http_server_request_count:method{servlet=""} = sum(synapse_http_server_request_count) by (method)
synapse_http_server_request_count:servlet{method=""} = sum(synapse_http_server_request_count) by (servlet)
synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet)
synapse_http_server_request_count:total{servlet=""} = sum(synapse_http_server_request_count:by_method) by (servlet)
synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])

View File

@@ -0,0 +1,60 @@
groups:
- name: synapse
rules:
- record: "synapse_federation_transaction_queue_pendingEdus:total"
expr: "sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)"
- record: "synapse_federation_transaction_queue_pendingPdus:total"
expr: "sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)"
- record: 'synapse_http_server_request_count:method'
labels:
servlet: ""
expr: "sum(synapse_http_server_request_count) by (method)"
- record: 'synapse_http_server_request_count:servlet'
labels:
method: ""
expr: 'sum(synapse_http_server_request_count) by (servlet)'
- record: 'synapse_http_server_request_count:total'
labels:
servlet: ""
expr: 'sum(synapse_http_server_request_count:by_method) by (servlet)'
- record: 'synapse_cache:hit_ratio_5m'
expr: 'rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])'
- record: 'synapse_cache:hit_ratio_30s'
expr: 'rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])'
- record: 'synapse_federation_client_sent'
labels:
type: "EDU"
expr: 'synapse_federation_client_sent_edus + 0'
- record: 'synapse_federation_client_sent'
labels:
type: "PDU"
expr: 'synapse_federation_client_sent_pdu_destinations:count + 0'
- record: 'synapse_federation_client_sent'
labels:
type: "Query"
expr: 'sum(synapse_federation_client_sent_queries) by (job)'
- record: 'synapse_federation_server_received'
labels:
type: "EDU"
expr: 'synapse_federation_server_received_edus + 0'
- record: 'synapse_federation_server_received'
labels:
type: "PDU"
expr: 'synapse_federation_server_received_pdus + 0'
- record: 'synapse_federation_server_received'
labels:
type: "Query"
expr: 'sum(synapse_federation_server_received_queries) by (job)'
- record: 'synapse_federation_transaction_queue_pending'
labels:
type: "EDU"
expr: 'synapse_federation_transaction_queue_pending_edus + 0'
- record: 'synapse_federation_transaction_queue_pending'
labels:
type: "PDU"
expr: 'synapse_federation_transaction_queue_pending_pdus + 0'

View File

@@ -2,6 +2,9 @@
# (e.g. https://www.archlinux.org/packages/community/any/matrix-synapse/ for ArchLinux)
# rather than in a user home directory or similar under virtualenv.
# **NOTE:** This is an example service file that may change in the future. If you
# wish to use this please copy rather than symlink it.
[Unit]
Description=Synapse Matrix homeserver
@@ -12,6 +15,7 @@ Group=synapse
WorkingDirectory=/var/lib/synapse
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml
ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml
# EnvironmentFile=-/etc/sysconfig/synapse # Can be used to e.g. set SYNAPSE_CACHE_FACTOR
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,23 @@
# List all media in a room
This API gets a list of known media in a room.
The API is:
```
GET /_matrix/client/r0/admin/room/<room_id>/media
```
including an `access_token` of a server admin.
It returns a JSON body like the following:
```
{
"local": [
"mxc://localhost/xwvutsrqponmlkjihgfedcba",
"mxc://localhost/abcdefghijklmnopqrstuvwx"
],
"remote": [
"mxc://matrix.org/xwvutsrqponmlkjihgfedcba",
"mxc://matrix.org/abcdefghijklmnopqrstuvwx"
]
}
```
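As a hedged sketch (a hypothetical client using only the Python 3 standard library; the server name, room ID and token are placeholders), the call can be made like this:
```
# Hypothetical client for the room-media admin API described above.
import json
from urllib.request import urlopen

def list_room_media(server, room_id, access_token):
    url = ("https://%s/_matrix/client/r0/admin/room/%s/media?access_token=%s"
           % (server, room_id, access_token))
    with urlopen(url) as resp:
        return json.load(resp)  # {"local": [...], "remote": [...]}
```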

View File

@@ -4,14 +4,60 @@ Purge History API
The purge history API allows server admins to purge historic events from their
database, reclaiming disk space.
**NB!** This will not delete local events (locally sent message content, etc.) from the database, but it will remove lots of the metadata about them and does dramatically reduce the on-disk space usage.
Depending on the amount of history being purged a call to the API may take
several minutes or longer. During this period users will not be able to
paginate further back in the room from the point being purged from.
The API is simply:
The API is:
``POST /_matrix/client/r0/admin/purge_history/<room_id>/<event_id>``
``POST /_matrix/client/r0/admin/purge_history/<room_id>[/<event_id>]``
including an ``access_token`` of a server admin.
By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
deleted.)
Room state data (such as joins, leaves, topic) is always preserved.
To delete local message events as well, set ``delete_local_events`` in the body:
.. code:: json
{
"delete_local_events": true
}
The caller must specify the point in the room to purge up to. This can be
specified by including an event_id in the URI, or by setting a
``purge_up_to_event_id`` or ``purge_up_to_ts`` in the request body. If an event
id is given, that event (and others at the same graph depth) will be retained.
If ``purge_up_to_ts`` is given, it should be a timestamp since the unix epoch,
in milliseconds.
The API starts the purge running, and returns immediately with a JSON body with
a purge id:
.. code:: json
{
"purge_id": "<opaque id>"
}
Purge status query
------------------
It is possible to poll for updates on recent purges with a second API;
``GET /_matrix/client/r0/admin/purge_history_status/<purge_id>``
(again, with a suitable ``access_token``). This API returns a JSON body like
the following:
.. code:: json
{
"status": "active"
}
The status will be one of ``active``, ``complete``, or ``failed``.
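Putting the two calls together, a minimal sketch of a client driving this API might look like the following (a hypothetical helper using the Python 3 standard library; the names are placeholders, not part of synapse):

.. code:: python

    import json
    import time
    from urllib.request import Request, urlopen

    def purge_and_wait(server, room_id, event_id, access_token):
        base = "https://%s/_matrix/client/r0/admin" % server
        # Start the purge; supplying data makes this a POST request.
        req = Request(
            "%s/purge_history/%s/%s?access_token=%s"
            % (base, room_id, event_id, access_token),
            data=json.dumps({"delete_local_events": False}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        purge_id = json.load(urlopen(req))["purge_id"]

        # Poll the status endpoint until the purge completes.
        while True:
            status_url = ("%s/purge_history_status/%s?access_token=%s"
                          % (base, purge_id, access_token))
            status = json.load(urlopen(status_url))["status"]
            if status in ("complete", "failed"):
                return status
            time.sleep(5)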

View File

@@ -1,52 +1,119 @@
Basically, PEP8
- Everything should comply with PEP8. Code should pass
``pep8 --max-line-length=100`` without any warnings.
- NEVER tabs. 4 spaces to indent.
- Max line width: 79 chars (with flexibility to overflow by a "few chars" if
- **Indenting**:
- NEVER tabs. 4 spaces to indent.
- follow PEP8; either hanging indent or multiline-visual indent depending
on the size and shape of the arguments and what makes more sense to the
author. In other words, both this::
print("I am a fish %s" % "moo")
and this::
print("I am a fish %s" %
"moo")
and this::
print(
"I am a fish %s" %
"moo",
)
...are valid, although given each one takes up 2x more vertical space than
the previous, it's up to the author's discretion as to which layout makes
most sense for their function invocation. (e.g. if they want to add
comments per-argument, or put expressions in the arguments, or group
related arguments together, or want to deliberately extend or preserve
vertical/horizontal space)
- **Line length**:
Max line length is 79 chars (with flexibility to overflow by a "few chars" if
the overflowing content is not semantically significant and avoids an
explosion of vertical whitespace).
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes.
- Use parentheses instead of '\\' for line continuation wherever possible
(which is pretty much everywhere)
- There should be max a single new line between:
Use parentheses instead of ``\`` for line continuation wherever possible
(which is pretty much everywhere).
- **Naming**:
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
- **Blank lines**:
- There should be max a single new line between:
- statements
- functions in a class
- There should be two new lines between:
- There should be two new lines between:
- definitions in a module (e.g., between different classes)
- There should be spaces where spaces should be and not where there shouldn't be:
- a single space after a comma
- a single space before and after for '=' when used as assignment
- no spaces before and after for '=' for default values and keyword arguments.
- Indenting must follow PEP8; either hanging indent or multiline-visual indent
depending on the size and shape of the arguments and what makes more sense to
the author. In other words, both this::
print("I am a fish %s" % "moo")
- **Whitespace**:
and this::
There should be spaces where spaces should be and not where there shouldn't
be:
print("I am a fish %s" %
"moo")
- a single space after a comma
- a single space before and after for '=' when used as assignment
- no spaces before and after for '=' for default values and keyword arguments.
and this::
- **Comments**: should follow the `google code style
<http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with `sphinx
<http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples
<http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
in the sphinx documentation.
print(
"I am a fish %s" %
"moo"
)
- **Imports**:
...are valid, although given each one takes up 2x more vertical space than
the previous, it's up to the author's discretion as to which layout makes most
sense for their function invocation. (e.g. if they want to add comments
per-argument, or put expressions in the arguments, or group related arguments
together, or want to deliberately extend or preserve vertical/horizontal
space)
- Prefer to import classes and functions than packages or modules.
Comments should follow the `google code style <http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with
`sphinx <http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
in the sphinx documentation.
Example::
Code should pass pep8 --max-line-length=100 without any warnings.
from synapse.types import UserID
...
user_id = UserID(local, server)
is preferred over::
from synapse import types
...
user_id = types.UserID(local, server)
(or any other variant).
This goes against the advice in the Google style guide, but it means that
errors in the name are caught early (at import time).
- Multiple imports from the same package can be combined onto one line::
from synapse.types import GroupID, RoomID, UserID
An effort should be made to keep the individual imports in alphabetical
order.
If the list becomes long, wrap it with parentheses and split it over
multiple lines.
- As per `PEP-8 <https://www.python.org/dev/peps/pep-0008/#imports>`_,
imports should be grouped in the following order, with a blank line between
each group:
1. standard library imports
2. related third party imports
3. local application/library specific imports
- Imports within each group should be sorted alphabetically by module name.
- Avoid wildcard imports (``from synapse.types import *``) and relative
imports (``from .types import UserID``).
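For example, the top of a module following the grouping rules above might look like this (module names chosen purely for illustration)::

    # 1. standard library imports
    import logging
    import os

    # 2. related third party imports
    from twisted.internet import defer

    # 3. local application/library specific imports
    from synapse.types import GroupID, RoomID, UserID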

View File

@@ -279,9 +279,9 @@ Obviously that option means that the operations done in
that might be fixed by setting a different logcontext via a ``with
LoggingContext(...)`` in ``background_operation``).
The second option is to use ``logcontext.preserve_fn``, which wraps a function
so that it doesn't reset the logcontext even when it returns an incomplete
deferred, and adds a callback to the returned deferred to reset the
The second option is to use ``logcontext.run_in_background``, which wraps a
function so that it doesn't reset the logcontext even when it returns an
incomplete deferred, and adds a callback to the returned deferred to reset the
logcontext. In other words, it turns a function that follows the Synapse rules
about logcontexts and Deferreds into one which behaves more like an external
function — the opposite operation to that described in the previous section.
@@ -293,15 +293,11 @@ It can be used like this:
def do_request_handling():
yield foreground_operation()
logcontext.preserve_fn(background_operation)()
logcontext.run_in_background(background_operation)
# this will now be logged against the request context
logger.debug("Request handling complete")
XXX: I think ``preserve_context_over_fn`` is supposed to do the first option,
but the fact that it does ``preserve_context_over_deferred`` on its results
means that its use is fraught with difficulty.
Passing synapse deferreds into third-party functions
----------------------------------------------------

View File

@@ -16,7 +16,7 @@ How to monitor Synapse metrics using Prometheus
metrics_port: 9092
Also ensure that ``enable_metrics`` is set to ``True``.
Restart synapse.
3. Add a prometheus target for synapse.
@@ -28,11 +28,58 @@ How to monitor Synapse metrics using Prometheus
static_configs:
- targets: ["my.server.here:9092"]
If your prometheus is older than 1.5.2, you will need to replace
``static_configs`` in the above with ``target_groups``.
Restart prometheus.
Block and response metrics renamed for 0.27.0
---------------------------------------------
Synapse 0.27.0 begins the process of rationalising the duplicate ``*:count``
metrics reported for the resource tracking for code blocks and HTTP requests.
At the same time, the corresponding ``*:total`` metrics are being renamed, as
the ``:total`` suffix no longer makes sense in the absence of a corresponding
``:count`` metric.
To enable a graceful migration path, this release just adds new names for the
metrics being renamed. A future release will remove the old ones.
The following table shows the new metrics, and the old metrics which they are
replacing.
==================================================== ===================================================
New name Old name
==================================================== ===================================================
synapse_util_metrics_block_count synapse_util_metrics_block_timer:count
synapse_util_metrics_block_count synapse_util_metrics_block_ru_utime:count
synapse_util_metrics_block_count synapse_util_metrics_block_ru_stime:count
synapse_util_metrics_block_count synapse_util_metrics_block_db_txn_count:count
synapse_util_metrics_block_count synapse_util_metrics_block_db_txn_duration:count
synapse_util_metrics_block_time_seconds synapse_util_metrics_block_timer:total
synapse_util_metrics_block_ru_utime_seconds synapse_util_metrics_block_ru_utime:total
synapse_util_metrics_block_ru_stime_seconds synapse_util_metrics_block_ru_stime:total
synapse_util_metrics_block_db_txn_count synapse_util_metrics_block_db_txn_count:total
synapse_util_metrics_block_db_txn_duration_seconds synapse_util_metrics_block_db_txn_duration:total
synapse_http_server_response_count synapse_http_server_requests
synapse_http_server_response_count synapse_http_server_response_time:count
synapse_http_server_response_count synapse_http_server_response_ru_utime:count
synapse_http_server_response_count synapse_http_server_response_ru_stime:count
synapse_http_server_response_count synapse_http_server_response_db_txn_count:count
synapse_http_server_response_count synapse_http_server_response_db_txn_duration:count
synapse_http_server_response_time_seconds synapse_http_server_response_time:total
synapse_http_server_response_ru_utime_seconds synapse_http_server_response_ru_utime:total
synapse_http_server_response_ru_stime_seconds synapse_http_server_response_ru_stime:total
synapse_http_server_response_db_txn_count synapse_http_server_response_db_txn_count:total
synapse_http_server_response_db_txn_duration_seconds synapse_http_server_response_db_txn_duration:total
==================================================== ===================================================
Standard Metric Names
---------------------
@@ -42,7 +89,7 @@ have been changed to seconds, from milliseconds.
================================== =============================
New name Old name
---------------------------------- -----------------------------
================================== =============================
process_cpu_user_seconds_total process_resource_utime / 1000
process_cpu_system_seconds_total process_resource_stime / 1000
process_open_fds (no 'type' label) process_fds
@@ -52,8 +99,8 @@ The python-specific counts of garbage collector performance have been renamed.
=========================== ======================
New name Old name
--------------------------- ----------------------
python_gc_time reactor_gc_time
=========================== ======================
python_gc_time reactor_gc_time
python_gc_unreachable_total reactor_gc_unreachable
python_gc_counts reactor_gc_counts
=========================== ======================
@@ -62,7 +109,7 @@ The twisted-specific reactor metrics have been renamed.
==================================== =====================
New name Old name
------------------------------------ ---------------------
==================================== =====================
python_twisted_reactor_pending_calls reactor_pending_calls
python_twisted_reactor_tick_time reactor_tick_time
==================================== =====================

View File

@@ -0,0 +1,99 @@
Password auth provider modules
==============================
Password auth providers offer a way for server administrators to integrate
their Synapse installation with an existing authentication system.
A password auth provider is a Python class which is dynamically loaded into
Synapse, and provides a number of methods by which it can integrate with the
authentication system.
This document serves as a reference for those looking to implement their own
password auth providers.
Required methods
----------------
Password auth provider classes must provide the following methods:
*class* ``SomeProvider.parse_config``\(*config*)
This method is passed the ``config`` object for this module from the
homeserver configuration file.
It should perform any appropriate sanity checks on the provided
configuration, and return an object which is then passed into ``__init__``.
*class* ``SomeProvider``\(*config*, *account_handler*)
The constructor is passed the config object returned by ``parse_config``,
and a ``synapse.module_api.ModuleApi`` object which allows the
password provider to check if accounts exist and/or create new ones.
Optional methods
----------------
Password auth provider classes may optionally provide the following methods.
*class* ``SomeProvider.get_db_schema_files``\()
This method, if implemented, should return an Iterable of ``(name,
stream)`` pairs of database schema files. Each file is applied in turn at
initialisation, and a record is then made in the database so that it is
not re-applied on the next start.
``someprovider.get_supported_login_types``\()
This method, if implemented, should return a ``dict`` mapping from a login
type identifier (such as ``m.login.password``) to an iterable giving the
fields which must be provided by the user in the submission to the
``/login`` api. These fields are passed in the ``login_dict`` dictionary
to ``check_auth``.
For example, if a password auth provider wants to implement a custom login
type of ``com.example.custom_login``, where the client is expected to pass
the fields ``secret1`` and ``secret2``, the provider should implement this
method and return the following dict::
{"com.example.custom_login": ("secret1", "secret2")}
``someprovider.check_auth``\(*username*, *login_type*, *login_dict*)
This method is the one that does the real work. If implemented, it will be
called for each login attempt where the login type matches one of the keys
returned by ``get_supported_login_types``.
It is passed the (possibly unqualified) ``user`` provided by the client,
the login type, and a dictionary of login secrets passed by the client.
The method should return a Twisted ``Deferred`` object, which resolves to
the canonical ``@localpart:domain`` user id if authentication is successful,
and ``None`` if not.
Alternatively, the ``Deferred`` can resolve to a ``(str, func)`` tuple, in
which case the second field is a callback which will be called with the
result from the ``/login`` call (including ``access_token``, ``device_id``,
etc.)
``someprovider.check_password``\(*user_id*, *password*)
This method provides a simpler interface than ``get_supported_login_types``
and ``check_auth`` for password auth providers that just want to provide a
mechanism for validating ``m.login.password`` logins.
If implemented, it will be called to check logins with an
``m.login.password`` login type. It is passed a qualified
``@localpart:domain`` user id, and the password provided by the user.
The method should return a Twisted ``Deferred`` object, which resolves to
``True`` if authentication is successful, and ``False`` if not.
``someprovider.on_logged_out``\(*user_id*, *device_id*, *access_token*)
This method, if implemented, is called when a user logs out. It is passed
the qualified user ID, the ID of the deactivated device (if any: access
tokens are occasionally created without an associated device ID), and the
(now deactivated) access token.
It may return a Twisted ``Deferred`` object; the logout request will wait
for the deferred to complete but the result is ignored.
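To tie the pieces above together, here is a minimal, hypothetical provider skeleton (an illustration of the interface only, with a hard-coded password and a placeholder domain; not a real-world implementation)::

    from twisted.internet import defer

    class ExamplePasswordProvider(object):
        # Hypothetical provider: accepts a single password from the config.

        @staticmethod
        def parse_config(config):
            # Sanity-check the config stanza and return it for __init__.
            if "password" not in config:
                raise Exception("example provider requires a 'password' key")
            return config

        def __init__(self, config, account_handler):
            self.account_handler = account_handler
            self.password = config["password"]

        def get_supported_login_types(self):
            # Clients must supply a "password" field for m.login.password.
            return {"m.login.password": ("password",)}

        def check_auth(self, username, login_type, login_dict):
            # `username` may be unqualified; the domain below is a placeholder.
            if login_dict.get("password") == self.password:
                return defer.succeed("@%s:example.com" % username)
            return defer.succeed(None)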

View File

@@ -56,6 +56,7 @@ As a first cut, let's do #2 and have the receiver hit the API to calculate its o
API
---
```
GET /_matrix/media/r0/preview_url?url=http://wherever.com
200 OK
{
@@ -66,6 +67,7 @@ GET /_matrix/media/r0/preview_url?url=http://wherever.com
"og:description" : "“Synapse 0.12 is out! Lots of polishing, performance &amp;amp; bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”"
"og:site_name" : "Twitter"
}
```
* Downloads the URL
* If HTML, just stores it in RAM and parses it for OG meta tags

docs/user_directory.md Normal file
View File

@@ -0,0 +1,17 @@
User Directory API Implementation
=================================
The user directory is currently maintained based on the 'visible' users
on this particular server - i.e. ones which your account shares a room with, or
who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
quickest solution to fix it is:
```
UPDATE user_directory_stream_pos SET stream_id = NULL;
```
and restart synapse, which should then start a background task to
flush the current tables and regenerate the directory.

View File

@@ -1,11 +1,15 @@
Scaling synapse via workers
---------------------------
===========================
Synapse has experimental support for splitting out functionality into
multiple separate python processes, helping greatly with scalability. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.
All of the below is highly experimental and subject to change as Synapse evolves,
but we document it here to help folks needing highly scalable Synapses similar
to the one running matrix.org!
All processes continue to share the same database instance, and as such, workers
only work with postgres based synapse deployments (sharing a single sqlite
across multiple processes is a recipe for disaster, plus you should be using
@@ -16,37 +20,62 @@ TCP protocol called 'replication' - analogous to MySQL or Postgres style
database replication; feeding a stream of relevant data to the workers so they
can be kept in sync with the main synapse process and database state.
To enable workers, you need to add a replication listener to the master synapse, e.g.::
Configuration
-------------
To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. Note that this includes
requests made to the federation port. The caveats regarding running a
reverse-proxy on the federation port still apply (see
https://github.com/matrix-org/synapse/blob/master/README.rst#reverse-proxying-the-federation-port).
To enable workers, you need to add two replication listeners to the master
synapse, e.g.::
listeners:
# The TCP replication port
- port: 9092
bind_address: '127.0.0.1'
type: replication
# The HTTP replication port
- port: 9093
bind_address: '127.0.0.1'
type: http
resources:
- names: [replication]
Under **no circumstances** should this replication API listener be exposed to the
public internet; it currently implements no authentication whatsoever and is
Under **no circumstances** should these replication API listeners be exposed to
the public internet; it currently implements no authentication whatsoever and is
unencrypted.
You then create a set of configs for the various worker processes. These should be
worker configuration files, and should be stored in a dedicated subdirectory, to allow
synctl to manipulate them.
(Roughly, the TCP port is used for streaming data from the master to the
workers, and the HTTP port for the workers to send data to the main
synapse process.)
The current available worker applications are:
* synapse.app.pusher - handles sending push notifications to sygnal and email
* synapse.app.synchrotron - handles /sync endpoints. Can scale horizontally through multiple instances.
* synapse.app.appservice - handles output traffic to Application Services
* synapse.app.federation_reader - handles receiving federation traffic (including public_rooms API)
* synapse.app.media_repository - handles the media repository.
* synapse.app.client_reader - handles client API endpoints like /publicRooms
You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
subdirectory, to allow synctl to manipulate them. An additional configuration
for the master synapse process will need to be created because the process will
not be started automatically. That configuration should look like this::
    worker_app: synapse.app.homeserver
    daemonize: true
Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
e.g. the HTTP listener that it provides (if any); logging configuration; etc.
You should minimise the number of overrides though to maintain a usable config.
You must specify the type of worker application (``worker_app``). The currently
available worker applications are listed below. You must also specify the
replication endpoints that it's talking to on the main synapse process.
``worker_replication_host`` should specify the host of the main synapse,
``worker_replication_port`` should point to the TCP replication listener port and
``worker_replication_http_port`` should point to the HTTP replication port.
Currently, only the ``event_creator`` worker requires specifying
``worker_replication_http_port``.
For instance::
@@ -55,6 +84,7 @@ For instance::
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:
- type: http
@@ -68,11 +98,11 @@ For instance::
worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
...is a full configuration for a synchrotron worker instance, which will expose a
plain HTTP ``/sync`` endpoint on port 8083 separately from the ``/sync`` endpoint provided
by the main synapse.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (``localhost:8083`` in the above example).
Finally, to actually run your worker-based synapse, you must pass synctl the -a
commandline option to tell it to operate on all the worker configurations found
@@ -89,6 +119,127 @@ To manipulate a specific worker, you pass the -w option to synctl::
synctl -w $CONFIG/workers/synchrotron.yaml restart
All of the above is highly experimental and subject to change as Synapse evolves,
but documenting it here to help folks needing highly scalable Synapses similar
to the one running matrix.org!
Available worker applications
-----------------------------
``synapse.app.pusher``
~~~~~~~~~~~~~~~~~~~~~~
Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set ``start_pushers: False`` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.synchrotron``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The synchrotron handles ``sync`` requests from clients. In particular, it can
handle REST endpoints matching the following regular expressions::
^/_matrix/client/(v2_alpha|r0)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0)/events$
^/_matrix/client/(api/v1|r0)/initialSync$
^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
The above endpoints should all be routed to the synchrotron worker by the
reverse-proxy configuration.
It is possible to run multiple instances of the synchrotron to scale
horizontally. In this case the reverse-proxy should be configured to
load-balance across the instances, though it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting
a userid from the access token is currently left as an exercise for the reader.
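For illustration only, here is a minimal sketch (not part of synapse; the
instance list and helper name are made up) of how a routing layer could
derive a stable backend from the access token::

    import hashlib

    SYNCHROTRONS = ["http://127.0.0.1:8083", "http://127.0.0.1:8084"]

    def pick_synchrotron(access_token):
        # hash the token so a given user's /sync requests always land on
        # the same synchrotron instance
        digest = hashlib.sha256(access_token.encode("utf-8")).hexdigest()
        return SYNCHROTRONS[int(digest, 16) % len(SYNCHROTRONS)]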
``synapse.app.appservice``
~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set ``notify_appservices: False`` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.federation_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles a subset of federation endpoints. In particular, it can handle REST
endpoints matching the following regular expressions::
^/_matrix/federation/v1/event/
^/_matrix/federation/v1/state/
^/_matrix/federation/v1/state_ids/
^/_matrix/federation/v1/backfill/
^/_matrix/federation/v1/get_missing_events/
^/_matrix/federation/v1/publicRooms
The above endpoints should all be routed to the federation_reader worker by the
reverse-proxy configuration.
``synapse.app.federation_sender``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set ``send_federation: False`` in the
shared configuration file to stop the main synapse sending this traffic.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.media_repository``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles the media repository. It can handle all endpoints starting with::
/_matrix/media/
You should also set ``enable_media_repo: False`` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.client_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles client API endpoints. It can handle REST endpoints matching the
following regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
``synapse.app.user_dir``
~~~~~~~~~~~~~~~~~~~~~~~~
Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
``synapse.app.frontend_proxy``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/keys/upload
It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
file. For example::
worker_main_http_uri: http://127.0.0.1:8008
``synapse.app.event_creator``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles some event creation. It can handle REST endpoints matching::
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
It will create events locally and then send them on to the main synapse
instance to be persisted and handled.

View File

@@ -1,5 +1,7 @@
#! /bin/bash
set -eux
cd "`dirname $0`/.."
TOX_DIR=$WORKSPACE/.tox
@@ -14,7 +16,20 @@ fi
tox -e py27 --notest -v
TOX_BIN=$TOX_DIR/py27/bin
# cryptography 2.2 requires setuptools >= 18.5.
#
# older versions of virtualenv (?) give us a virtualenv with the same version
# of setuptools as is installed on the system python (and tox runs virtualenv
# under python3, so we get the version of setuptools that is installed on that).
#
# anyway, make sure that we have a recent enough setuptools.
$TOX_BIN/pip install 'setuptools>=18.5'
# we also need a semi-recent version of pip, because old ones fail to install
# the "enum34" dependency of cryptography.
$TOX_BIN/pip install 'pip>=10'
{ python synapse/python_dependencies.py
echo lxml psycopg2
} | xargs $TOX_BIN/pip install

View File

@@ -123,15 +123,25 @@ def lookup(destination, path):
except:
return "https://%s:%d%s" % (destination, 8448, path)
def request_json(method, origin_name, origin_key, destination, path, content):
if method is None:
if content is None:
method = "GET"
else:
method = "POST"
json_to_sign = {
"method": method,
"uri": path,
"origin": origin_name,
"destination": destination,
}
if content is not None:
json_to_sign["content"] = json.loads(content)
signed_json = sign_json(json_to_sign, origin_key, origin_name)
authorization_headers = []
@@ -145,10 +155,12 @@ def get_json(origin_name, origin_key, destination, path):
dest = lookup(destination, path)
print ("Requesting %s" % dest, file=sys.stderr)
result = requests.request(
method=method,
url=dest,
headers={"Authorization": authorization_headers[0]},
verify=False,
data=content,
)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
return result.json()
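The Authorization header sent above follows the federation X-Matrix scheme. As a rough sketch (names are illustrative; the real values come from the sign_json output), such a header is assembled like this:

```python
def format_x_matrix_auth(origin, key_id, sig_b64):
    # e.g. X-Matrix origin=example.com,key="ed25519:key1",sig="base64..."
    return 'X-Matrix origin=%s,key="%s",sig="%s"' % (origin, key_id, sig_b64)
```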
@@ -186,6 +198,17 @@ def main():
"connect appropriately.",
)
parser.add_argument(
"-X", "--method",
help="HTTP method to use for the request. Defaults to GET if --data is"
"unspecified, POST if it is."
)
parser.add_argument(
"--body",
help="Data to send as the body of the HTTP request"
)
parser.add_argument(
"path",
help="request path. We will add '/_matrix/federation/v1/' to this."
@@ -199,8 +222,11 @@ def main():
with open(args.signing_key_path) as f:
key = read_signing_keys(f)[0]
result = request_json(
args.method,
args.server_name, key, args.destination,
"/_matrix/federation/v1/" + args.path,
content=args.body,
)
json.dump(result, sys.stdout)

View File

@@ -0,0 +1,133 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Moves a list of remote media from one media store to another.
The input should be a list of media files to be moved, one per line. Each line
should be formatted::
<origin server>|<file id>
This can be extracted from postgres with::
psql --tuples-only -A -c "select media_origin, filesystem_id from
matrix.remote_media_cache where ..."
To use, pipe the above into::
PYTHON_PATH=. ./scripts/move_remote_media_to_new_store.py <source repo> <dest repo>
"""
from __future__ import print_function
import argparse
import logging
import sys
import os
import shutil
from synapse.rest.media.v1.filepath import MediaFilePaths
logger = logging.getLogger()
def main(src_repo, dest_repo):
src_paths = MediaFilePaths(src_repo)
dest_paths = MediaFilePaths(dest_repo)
for line in sys.stdin:
line = line.strip()
parts = line.split('|')
if len(parts) != 2:
print("Unable to parse input line %s" % line, file=sys.stderr)
exit(1)
move_media(parts[0], parts[1], src_paths, dest_paths)
def move_media(origin_server, file_id, src_paths, dest_paths):
"""Move the given file, and any thumbnails, to the dest repo
Args:
origin_server (str):
file_id (str):
src_paths (MediaFilePaths):
dest_paths (MediaFilePaths):
"""
logger.info("%s/%s", origin_server, file_id)
# check that the original exists
original_file = src_paths.remote_media_filepath(origin_server, file_id)
if not os.path.exists(original_file):
logger.warn(
"Original for %s/%s (%s) does not exist",
origin_server, file_id, original_file,
)
else:
mkdir_and_move(
original_file,
dest_paths.remote_media_filepath(origin_server, file_id),
)
# now look for thumbnails
original_thumb_dir = src_paths.remote_media_thumbnail_dir(
origin_server, file_id,
)
if not os.path.exists(original_thumb_dir):
return
mkdir_and_move(
original_thumb_dir,
dest_paths.remote_media_thumbnail_dir(origin_server, file_id)
)
def mkdir_and_move(original_file, dest_file):
dirname = os.path.dirname(dest_file)
if not os.path.exists(dirname):
logger.debug("mkdir %s", dirname)
os.makedirs(dirname)
logger.debug("mv %s %s", original_file, dest_file)
shutil.move(original_file, dest_file)
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"-v", action='store_true', help='enable debug logging')
parser.add_argument(
"src_repo",
help="Path to source content repo",
)
parser.add_argument(
"dest_repo",
help="Path to source content repo",
)
args = parser.parse_args()
logging_config = {
"level": logging.DEBUG if args.v else logging.INFO,
"format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s"
}
logging.basicConfig(**logging_config)
main(args.src_repo, args.dest_repo)

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -29,6 +30,8 @@ import time
import traceback
import yaml
from six import string_types
logger = logging.getLogger("synapse_port_db")
@@ -42,6 +45,14 @@ BOOLEAN_COLUMNS = {
"public_room_list_stream": ["visibility"],
"device_lists_outbound_pokes": ["sent"],
"users_who_share_rooms": ["share_private"],
"groups": ["is_public"],
"group_rooms": ["is_public"],
"group_users": ["is_public", "is_admin"],
"group_summary_rooms": ["is_public"],
"group_room_categories": ["is_public"],
"group_summary_users": ["is_public"],
"group_roles": ["is_public"],
"local_group_membership": ["is_publicised", "is_admin"],
}
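SQLite stores the columns above as 0/1 integers while the PostgreSQL schema declares them as boolean, so the porter coerces values as it copies. A minimal sketch of that coercion (the helper name is made up; the real logic lives in _convert_rows below):

```python
def coerce_bools(table, headers, row):
    # turn 0/1 integers into real booleans for the columns listed in the
    # BOOLEAN_COLUMNS dict defined above
    bool_cols = set(BOOLEAN_COLUMNS.get(table, ()))
    return tuple(
        bool(value) if header in bool_cols else value
        for header, value in zip(headers, row)
    )
```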
@@ -112,6 +123,7 @@ class Store(object):
_simple_update_one = SQLBaseStore.__dict__["_simple_update_one"]
_simple_update_one_txn = SQLBaseStore.__dict__["_simple_update_one_txn"]
_simple_update_txn = SQLBaseStore.__dict__["_simple_update_txn"]
def runInteraction(self, desc, func, *args, **kwargs):
def r(conn):
@@ -241,6 +253,12 @@ class Porter(object):
@defer.inlineCallbacks
def handle_table(self, table, postgres_size, table_size, forward_chunk,
backward_chunk):
logger.info(
"Table %s: %i/%i (rows %i-%i) already ported",
table, postgres_size, table_size,
backward_chunk+1, forward_chunk-1,
)
if not table_size:
return
@@ -318,7 +336,7 @@ class Porter(object):
backward_chunk = min(row[0] for row in brows) - 1
rows = frows + brows
rows = self._convert_rows(table, headers, rows)
def insert(txn):
self.postgres_store.insert_many_txn(
@@ -458,31 +476,10 @@ class Porter(object):
self.progress.set_state("Preparing PostgreSQL")
self.setup_db(postgres_config, postgres_engine)
self.progress.set_state("Creating port tables")
def create_port_table(txn):
txn.execute(
"CREATE TABLE port_from_sqlite3 ("
"CREATE TABLE IF NOT EXISTS port_from_sqlite3 ("
" table_name varchar(100) NOT NULL UNIQUE,"
" forward_rowid bigint NOT NULL,"
" backward_rowid bigint NOT NULL"
@@ -508,18 +505,33 @@ class Porter(object):
"alter_table", alter_table
)
except Exception as e:
logger.info("Failed to create port table: %s", e)
yield self.postgres_store.runInteraction(
"create_port_table", create_port_table
)
self.progress.set_state("Setting up")
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store._simple_select_onecol(
table="sqlite_master",
keyvalues={
"type": "table",
},
retcol="name",
)
postgres_tables = yield self.postgres_store._simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
)
tables = set(sqlite_tables) & set(postgres_tables)
logger.info("Found %d tables", len(tables))
# Step 3. Figure out what still needs copying
self.progress.set_state("Checking on port progress")
setup_res = yield defer.gatherResults(
[
self.setup_table(table)
@@ -530,7 +542,8 @@ class Porter(object):
consumeErrors=True,
)
# Step 4. Do the copying.
self.progress.set_state("Copying to postgres")
yield defer.gatherResults(
[
self.handle_table(*res)
@@ -539,6 +552,9 @@ class Porter(object):
consumeErrors=True,
)
# Step 5. Do final post-processing
yield self._setup_state_group_id_seq()
self.progress.done()
except:
global end_error_exec_info
@@ -554,17 +570,29 @@ class Porter(object):
i for i, h in enumerate(headers) if h in bool_col_names
]
class BadValueException(Exception):
pass
def conv(j, col):
if j in bool_cols:
return bool(col)
elif isinstance(col, string_types) and "\0" in col:
logger.warn("DROPPING ROW: NUL value in table %s col %s: %r", table, headers[j], col)
raise BadValueException()
return col
outrows = []
for i, row in enumerate(rows):
try:
outrows.append(tuple(
conv(j, col)
for j, col in enumerate(row)
if j > 0
))
except BadValueException:
pass
return outrows
@defer.inlineCallbacks
def _setup_sent_transactions(self):
@@ -592,7 +620,7 @@ class Porter(object):
"select", r,
)
self._convert_rows("sent_transactions", headers, rows)
rows = self._convert_rows("sent_transactions", headers, rows)
inserted_rows = len(rows)
if inserted_rows:
@@ -686,6 +714,16 @@ class Porter(object):
defer.returnValue((done, remaining + done))
def _setup_state_group_id_seq(self):
def r(txn):
txn.execute("SELECT MAX(id) FROM state_groups")
next_id = txn.fetchone()[0]+1
txn.execute(
"ALTER SEQUENCE state_group_id_seq RESTART WITH %s",
(next_id,),
)
return self.postgres_store.runInteraction("setup_state_group_id_seq", r)
##############################################
###### The following is simply UI stuff ######

scripts/sync_room_to_group.pl Executable file
View File

@@ -0,0 +1,45 @@
#!/usr/bin/env perl
use strict;
use warnings;
use JSON::XS;
use LWP::UserAgent;
use URI::Escape;
if (@ARGV < 4) {
die "usage: $0 <homeserver url> <access_token> <room_id|room_alias> <group_id>\n";
}
my ($hs, $access_token, $room_id, $group_id) = @ARGV;
my $ua = LWP::UserAgent->new();
$ua->timeout(10);
if ($room_id =~ /^#/) {
$room_id = uri_escape($room_id);
$room_id = decode_json($ua->get("${hs}/_matrix/client/r0/directory/room/${room_id}?access_token=${access_token}")->decoded_content)->{room_id};
}
my $room_users = [ keys %{decode_json($ua->get("${hs}/_matrix/client/r0/rooms/${room_id}/joined_members?access_token=${access_token}")->decoded_content)->{joined}} ];
my $group_users = [
(map { $_->{user_id} } @{decode_json($ua->get("${hs}/_matrix/client/unstable/groups/${group_id}/users?access_token=${access_token}" )->decoded_content)->{chunk}}),
(map { $_->{user_id} } @{decode_json($ua->get("${hs}/_matrix/client/unstable/groups/${group_id}/invited_users?access_token=${access_token}" )->decoded_content)->{chunk}}),
];
die "refusing to sync from empty room" unless (@$room_users);
die "refusing to sync to empty group" unless (@$group_users);
my $diff = {};
foreach my $user (@$room_users) { $diff->{$user}++ }
foreach my $user (@$group_users) { $diff->{$user}-- }
foreach my $user (keys %$diff) {
if ($diff->{$user} == 1) {
warn "inviting $user";
print STDERR $ua->put("${hs}/_matrix/client/unstable/groups/${group_id}/admin/users/invite/${user}?access_token=${access_token}", Content=>'{}')->status_line."\n";
}
elsif ($diff->{$user} == -1) {
warn "removing $user";
print STDERR $ua->put("${hs}/_matrix/client/unstable/groups/${group_id}/admin/users/remove/${user}?access_token=${access_token}", Content=>'{}')->status_line."\n";
}
}

View File

@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.24.1"
__version__ = "0.28.1"

View File

@@ -204,8 +204,8 @@ class Auth(object):
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
"User-Agent",
default=[""]
b"User-Agent",
default=[b""]
)[0]
if user and access_token and ip_addr:
self.store.insert_client_ip(
@@ -270,7 +270,11 @@ class Auth(object):
rights (str): The operation being performed; the access token must
allow this.
Returns:
Deferred[dict]: dict that includes:
`user` (UserID)
`is_guest` (bool)
`token_id` (int|None): access token id. May be None if guest
`device_id` (str|None): device corresponding to access token
Raises:
AuthError if no user by that token exists or the token is invalid.
"""
@@ -668,7 +672,7 @@ def has_access_token(request):
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
@@ -688,8 +692,8 @@ def get_access_token_from_request(request, token_not_found_http_status=401):
AuthError: If there isn't an access_token in the request.
"""
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
query_params = request.args.get(b"access_token")
if auth_headers:
# Try to get the access_token from an "Authorization: Bearer"
# header
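For reference, a hedged sketch of the parsing this branch goes on to perform (the helper is illustrative, not the exact synapse code):

```python
def parse_bearer_token(header_value):
    # b"Bearer <token>" -> b"<token>"; anything else is rejected
    parts = header_value.split(b" ")
    if len(parts) == 2 and parts[0] == b"Bearer":
        return parts[1]
    return None
```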

View File

@@ -16,6 +16,9 @@
"""Contains constants from the specification."""
# the "depth" field on events is limited to 2**63 - 1
MAX_DEPTH = 2**63 - 1
class Membership(object):

View File

@@ -15,9 +15,11 @@
"""Contains exceptions and error codes."""
import logging
import simplejson as json
from six import iteritems
logger = logging.getLogger(__name__)
@@ -46,6 +48,7 @@ class Codes(object):
THREEPID_AUTH_FAILED = "M_THREEPID_AUTH_FAILED"
THREEPID_IN_USE = "M_THREEPID_IN_USE"
THREEPID_NOT_FOUND = "M_THREEPID_NOT_FOUND"
THREEPID_DENIED = "M_THREEPID_DENIED"
INVALID_USERNAME = "M_INVALID_USERNAME"
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
@@ -140,6 +143,48 @@ class RegistrationError(SynapseError):
pass
class FederationDeniedError(SynapseError):
"""An error raised when the server tries to federate with a server which
is not on its federation whitelist.
Attributes:
destination (str): The destination which has been denied
"""
def __init__(self, destination):
"""Raised by federation client or server to indicate that we are
are deliberately not attempting to contact a given server because it is
not on our federation whitelist.
Args:
destination (str): the domain in question
"""
self.destination = destination
super(FederationDeniedError, self).__init__(
code=403,
msg="Federation denied with %s." % (self.destination,),
errcode=Codes.FORBIDDEN,
)
class InteractiveAuthIncompleteError(Exception):
"""An error raised when UI auth is not yet complete
(This indicates we should return a 401 with 'result' as the body)
Attributes:
result (dict): the server response to the request, which should be
passed back to the client
"""
def __init__(self, result):
super(InteractiveAuthIncompleteError, self).__init__(
"Interactive auth not yet complete",
)
self.result = result
class UnrecognizedRequestError(SynapseError):
"""An error indicating we don't understand the request you're trying to make"""
def __init__(self, *args, **kwargs):
@@ -253,7 +298,7 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
A dict representing the error response JSON.
"""
err = {"error": msg, "errcode": code}
for key, value in iteritems(kwargs):
err[key] = value
return err
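A hedged usage example of the helper above (the extra keyword argument is illustrative):

```python
body = cs_error("Too many requests", code=Codes.LIMIT_EXCEEDED, retry_after_ms=2000)
# body == {"error": "Too many requests", "errcode": "M_LIMIT_EXCEEDED",
#          "retry_after_ms": 2000}
```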

View File

@@ -17,7 +17,7 @@ from synapse.storage.presence import UserPresenceState
from synapse.types import UserID, RoomID
from twisted.internet import defer
import simplejson as json
import jsonschema
from jsonschema import FormatChecker

View File

@@ -25,7 +25,9 @@ except Exception:
from daemonize import Daemonize
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
from twisted.internet import error, reactor
logger = logging.getLogger(__name__)
def start_worker_reactor(appname, config):
@@ -120,3 +122,57 @@ def quit_with_error(error_string):
sys.stderr.write(" %s\n" % (line.rstrip(),))
sys.stderr.write("*" * line_length + '\n')
sys.exit(1)
def listen_tcp(bind_addresses, port, factory, backlog=50):
"""
Create a TCP socket for a port and several addresses
"""
for address in bind_addresses:
try:
reactor.listenTCP(
port,
factory,
backlog,
address
)
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
def listen_ssl(bind_addresses, port, factory, context_factory, backlog=50):
"""
Create an SSL socket for a port and several addresses
"""
for address in bind_addresses:
try:
reactor.listenSSL(
port,
factory,
context_factory,
backlog,
address
)
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
def check_bind_error(e, address, bind_addresses):
"""
This method checks whether an exception occurred while binding on 0.0.0.0.
If :: is also specified in the bind addresses, only a warning is shown;
otherwise the exception is re-raised.
Binding on both 0.0.0.0 and :: causes an exception on Linux and macOS
because :: binds on both IPv4 and IPv6 (as per RFC 3493).
When binding on 0.0.0.0 after :: this can safely be ignored.
Args:
e (Exception): Exception that was caught.
address (str): Address on which binding was attempted.
bind_addresses (list): Addresses on which the service listens.
"""
if address == '0.0.0.0' and '::' in bind_addresses:
logger.warn('Failed to listen on 0.0.0.0, continuing because listening on [::]')
else:
raise e
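A minimal sketch of the dual-stack case this function tolerates, assuming the listen_tcp above is importable from synapse.app._base and using a stand-in factory:

```python
from twisted.internet import protocol

from synapse.app._base import listen_tcp

factory = protocol.ServerFactory()
factory.protocol = protocol.Protocol

# On Linux and macOS, binding on '::' also grabs IPv4, so the subsequent
# bind on '0.0.0.0' raises CannotListenError; listen_tcp routes that
# through check_bind_error, which downgrades it to a warning.
listen_tcp(['::', '0.0.0.0'], 8008, factory)
```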

View File

@@ -32,11 +32,11 @@ from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.appservice")
@@ -49,19 +49,6 @@ class AppserviceSlaveStore(
class AppserviceServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = AppserviceSlaveStore(self.get_db_conn(), self)
@@ -77,19 +64,18 @@ class AppserviceServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse appservice now listening on port %d", port)
@@ -98,18 +84,15 @@ class AppserviceServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -129,9 +112,14 @@ class ASReplicationHandler(ReplicationClientHandler):
if stream_name == "events":
max_stream_id = self.store.get_room_max_stream_ordering()
run_in_background(self._notify_app_services, max_stream_id)
@defer.inlineCallbacks
def _notify_app_services(self, room_stream_id):
try:
yield self.appservice_handler.notify_interested_services(room_stream_id)
except Exception:
logger.exception("Error notifying application services of event")
def start(config_options):

View File

@@ -44,7 +44,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.client_reader")
@@ -64,19 +64,6 @@ class ClientReaderSlavedStore(
class ClientReaderServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = ClientReaderSlavedStore(self.get_db_conn(), self)
@@ -101,19 +88,18 @@ class ClientReaderServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -122,18 +108,16 @@ class ClientReaderServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -172,7 +156,6 @@ def start(config_options):
)
ss.setup()
ss.start_listening(config.worker_listeners)
def start():

View File

@@ -0,0 +1,189 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import (
RoomSendEventRestServlet, RoomMembershipRestServlet, RoomStateEventRestServlet,
JoinRoomAliasServlet,
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.event_creator")
class EventCreatorSlavedStore(
DirectoryStore,
TransactionStore,
SlavedProfileStore,
SlavedAccountDataStore,
SlavedPusherStore,
SlavedReceiptsStore,
SlavedPushRuleStore,
SlavedDeviceStore,
SlavedClientIpStore,
SlavedApplicationServiceStore,
SlavedEventStore,
SlavedRegistrationStore,
RoomStore,
BaseSlavedStore,
):
pass
class EventCreatorServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = EventCreatorSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
RoomSendEventRestServlet(self).register(resource)
RoomMembershipRestServlet(self).register(resource)
RoomStateEventRestServlet(self).register(resource)
JoinRoomAliasServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse event creator now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse event creator", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.event_creator"
assert config.worker_replication_http_port is not None
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
ss = EventCreatorServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-event-creator", config)
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -41,7 +41,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.federation_reader")
@@ -58,19 +58,6 @@ class FederationReaderSlavedStore(
class FederationReaderServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = FederationReaderSlavedStore(self.get_db_conn(), self)
@@ -90,19 +77,18 @@ class FederationReaderServer(HomeServer):
FEDERATION_PREFIX: TransportLayerServer(self),
})
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse federation reader now listening on port %d", port)
@@ -111,18 +97,15 @@ class FederationReaderServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -161,7 +144,6 @@ def start(config_options):
)
ss.setup()
ss.start_listening(config.worker_listeners)
def start():

View File

@@ -38,11 +38,11 @@ from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.federation_sender")
@@ -76,19 +76,6 @@ class FederationSenderSlaveStore(
class FederationSenderServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = FederationSenderSlaveStore(self.get_db_conn(), self)
@@ -104,19 +91,18 @@ class FederationSenderServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse federation_sender now listening on port %d", port)
@@ -125,18 +111,15 @@ class FederationSenderServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -246,7 +229,7 @@ class FederationSenderHandler(object):
# presence, typing, etc.
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
run_in_background(self.update_token, token)
# We also need to poke the federation sender when new events happen
elif stream_name == "events":
@@ -254,19 +237,22 @@ class FederationSenderHandler(object):
@defer.inlineCallbacks
def update_token(self, token):
try:
self.federation_position = token
# We linearize here to ensure we don't have races updating the token
with (yield self._fed_position_linearizer.queue(None)):
if self._last_ack < self.federation_position:
yield self.store.update_federation_out_pos(
"federation", self.federation_position
)
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(self.federation_position)
self._last_ack = self.federation_position
except Exception:
logger.exception("Error updating federation stream position")
if __name__ == '__main__':

View File

@@ -44,14 +44,13 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.frontend_proxy")
class KeyUploadServlet(RestServlet):
PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$")
def __init__(self, hs):
"""
@@ -89,9 +88,16 @@ class KeyUploadServlet(RestServlet):
if body:
# They're actually trying to upload something, proxy to main synapse.
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
headers = {
"Authorization": auth_headers,
}
result = yield self.http_client.post_json_get_json(
self.main_uri + request.uri,
body,
headers=headers,
)
defer.returnValue((200, result))
@@ -112,19 +118,6 @@ class FrontendProxySlavedStore(
class FrontendProxyServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self)
@@ -149,19 +142,18 @@ class FrontendProxyServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -170,18 +162,15 @@ class FrontendProxyServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -222,7 +211,6 @@ def start(config_options):
)
ss.setup()
ss.start_listening(config.worker_listeners)
def start():

View File

@@ -25,35 +25,39 @@ from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \
LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \
STATIC_PREFIX, WEB_CLIENT_PREFIX
from synapse.app import _base
from synapse.app._base import quit_with_error, listen_ssl, listen_tcp
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.module_api import ModuleApi
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import register_memory_metrics
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
check_requirements
from synapse.replication.http import ReplicationRestResource, REPLICATION_PREFIX
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage import are_all_users_on_domain
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
@@ -107,87 +111,121 @@ class SynapseHomeServer(HomeServer):
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "client":
client_resource = ClientRestResource(self)
if res["compress"]:
client_resource = gz_wrap(client_resource)
resources.update(self._configure_named_resource(
name, res.get("compress", False),
))
additional_resources = listener_config.get("additional_resources", {})
logger.debug("Configuring additional resources: %r",
additional_resources)
module_api = ModuleApi(self, self.get_auth_handler())
for path, resmodule in additional_resources.items():
handler_cls, config = load_module(resmodule)
handler = handler_cls(config, module_api)
resources[path] = AdditionalResource(self, handler.handle_request)
if WEB_CLIENT_PREFIX in resources:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
root_resource = NoResource()
root_resource = create_resource_tree(resources, root_resource)
if tls:
listen_ssl(
bind_addresses,
port,
SynapseSite(
"synapse.access.https.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
self.tls_server_context_factory,
)
else:
listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse now listening on port %d", port)
def _configure_named_resource(self, name, compress=False):
"""Build a resource map for a named resource
Args:
name (str): named resource: one of "client", "federation", etc
compress (bool): whether to enable gzip compression for this
resource
Returns:
dict[str, Resource]: map from path to HTTP resource
"""
resources = {}
if name == "client":
client_resource = ClientRestResource(self)
if compress:
client_resource = gz_wrap(client_resource)
resources.update({
"/_matrix/client/api/v1": client_resource,
"/_matrix/client/r0": client_resource,
"/_matrix/client/unstable": client_resource,
"/_matrix/client/v2_alpha": client_resource,
"/_matrix/client/versions": client_resource,
})
if name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
})
if name in ["static", "client"]:
resources.update({
STATIC_PREFIX: File(
os.path.join(os.path.dirname(synapse.__file__), "static")
),
})
if name in ["media", "federation", "client"]:
if self.get_config().enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update({
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
})
elif name == "media":
raise ConfigError(
"'media' resource conflicts with enable_media_repo=False",
)
if name in ["keys", "federation"]:
resources.update({
SERVER_KEY_PREFIX: LocalKey(self),
SERVER_KEY_V2_PREFIX: KeyApiV2Resource(self),
})
if name == "webclient":
resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
if name == "metrics" and self.get_config().enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(self)
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
return resources
def start_listening(self):
config = self.get_config()
@@ -195,18 +233,15 @@ class SynapseHomeServer(HomeServer):
if listener["type"] == "http":
self._listener_http(config, listener)
elif listener["type"] == "manhole":
listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
elif listener["type"] == "replication":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
@@ -236,19 +271,6 @@ class SynapseHomeServer(HomeServer):
except IncorrectDatabaseSetup as e:
quit_with_error(e.message)
def setup(config_options):
"""
@@ -327,7 +349,7 @@ def setup(config_options):
hs.get_state_handler().start_caching()
hs.get_datastore().start_profiling()
hs.get_datastore().start_doing_background_updates()
hs.get_federation_client().start_get_pdu_cache()
register_memory_metrics(hs)
@@ -381,6 +403,10 @@ def run(hs):
stats = {}
# Contains the list of processes we will be monitoring
# currently either 0 or 1
stats_process = []
@defer.inlineCallbacks
def phone_stats_home():
logger.info("Gathering stats for reporting")
@@ -404,8 +430,21 @@ def run(hs):
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in r30_results.iteritems():
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
stats["cache_factor"] = CACHE_SIZE_FACTOR
stats["event_cache_size"] = hs.config.event_cache_size
if len(stats_process) > 0:
stats["memory_rss"] = 0
stats["cpu_average"] = 0
for process in stats_process:
stats["memory_rss"] += process.memory_info().rss
stats["cpu_average"] += int(process.cpu_percent(interval=None))
logger.info("Reporting stats to matrix.org: %s" % (stats,))
try:
@@ -416,10 +455,32 @@ def run(hs):
except Exception as e:
logger.warn("Error reporting stats: %s", e)
def performance_stats_init():
try:
import psutil
process = psutil.Process()
# Ensure we can fetch both, and make the initial request for cpu_percent
# so the next request will use this as the initial point.
process.memory_info().rss
process.cpu_percent(interval=None)
logger.info("report_stats can use psutil")
stats_process.append(process)
except (ImportError, AttributeError):
logger.warn(
"report_stats enabled but psutil is not installed or incorrect version."
" Disabling reporting of memory/cpu stats."
" Ensuring psutil is available will help matrix.org track performance"
" changes across releases."
)
if hs.config.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)
# We need to defer this init for the cases that we daemonize
# otherwise the process ID we get is that of the non-daemon process
clock.call_later(0, performance_stats_init)
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(5 * 60, phone_stats_home)
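A note on units in the scheduling above: Clock.looping_call takes its interval in milliseconds, while Clock.call_later takes seconds, which is why the two literals look inconsistent. Expressed directly against Twisted, the same wiring is roughly the following sketch (assuming phone_stats_home and performance_stats_init are in scope):

from twisted.internet import task, reactor

# Report stats every 3 hours; LoopingCall.start takes its interval in seconds.
loop = task.LoopingCall(phone_stats_home)
loop.start(3 * 60 * 60, now=False)

# Initialise psutil only after any daemonization has happened, so that the
# monitored PID is the daemon's, then send the first report after 5 minutes.
reactor.callLater(0, performance_stats_init)
reactor.callLater(5 * 60, phone_stats_home)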

View File

@@ -35,7 +35,6 @@ from synapse.replication.slave.storage.registration import SlavedRegistrationSto
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
@@ -44,7 +43,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.media_repository")
@@ -61,19 +60,6 @@ class MediaRepositorySlavedStore(
class MediaRepositoryServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = MediaRepositorySlavedStore(self.get_db_conn(), self)
@@ -89,7 +75,7 @@ class MediaRepositoryServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "media":
media_repo = MediaRepositoryResource(self)
media_repo = self.get_media_repository_resource()
resources.update({
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
@@ -98,19 +84,18 @@ class MediaRepositoryServer(HomeServer):
),
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse media repository now listening on port %d", port)
@@ -119,18 +104,15 @@ class MediaRepositoryServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -151,6 +133,13 @@ def start(config_options):
assert config.worker_app == "synapse.app.media_repository"
if config.enable_media_repo:
_base.quit_with_error(
"enable_media_repo must be disabled in the main synapse process\n"
"before the media repo can be run in a separate worker.\n"
"Please add ``enable_media_repo: false`` to the main config\n"
)
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
@@ -169,7 +158,6 @@ def start(config_options):
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def start():
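The reactor.listenTCP loops that these hunks keep deleting are collapsed into a shared helper in synapse.app._base. A plausible sketch of that helper (the real implementation may differ, e.g. in error handling):

from twisted.internet import reactor

def listen_tcp(bind_addresses, port, factory, backlog=50):
    # Bind one TCP listener per configured address, all serving the same
    # factory on the same port.
    for address in bind_addresses:
        reactor.listenTCP(port, factory, backlog, interface=address)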

View File

@@ -32,13 +32,12 @@ from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.storage.roommember import RoomMemberStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.pusher")
@@ -75,25 +74,8 @@ class PusherSlaveStore(
DataStore.get_profile_displayname.__func__
)
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
class PusherServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = PusherSlaveStore(self.get_db_conn(), self)
@@ -112,19 +94,18 @@ class PusherServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse pusher now listening on port %d", port)
@@ -133,18 +114,15 @@ class PusherServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -162,24 +140,27 @@ class PusherReplicationHandler(ReplicationClientHandler):
def on_rdata(self, stream_name, token, rows):
super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
preserve_fn(self.poke_pushers)(stream_name, token, rows)
run_in_background(self.poke_pushers, stream_name, token, rows)
@defer.inlineCallbacks
def poke_pushers(self, stream_name, token, rows):
if stream_name == "pushers":
for row in rows:
if row.deleted:
yield self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
yield self.pusher_pool.on_new_notifications(
token, token,
)
elif stream_name == "receipts":
yield self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
try:
if stream_name == "pushers":
for row in rows:
if row.deleted:
yield self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
yield self.pusher_pool.on_new_notifications(
token, token,
)
elif stream_name == "receipts":
yield self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
except Exception:
logger.exception("Error poking pushers")
def stop_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)

View File

@@ -51,19 +51,19 @@ from synapse.storage.engines import create_engine
from synapse.storage.presence import UserPresenceState
from synapse.storage.roommember import RoomMemberStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
from six import iteritems
logger = logging.getLogger("synapse.app.synchrotron")
class SynchrotronSlavedStore(
SlavedPushRuleStore,
SlavedEventStore,
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
@@ -73,14 +73,12 @@ class SynchrotronSlavedStore(
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
):
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
did_forget = (
RoomMemberStore.__dict__["did_forget"]
)
@@ -215,7 +213,7 @@ class SynchrotronPresence(object):
def get_currently_syncing_users(self):
return [
user_id for user_id, count in self.user_to_num_current_syncs.iteritems()
user_id for user_id, count in iteritems(self.user_to_num_current_syncs)
if count > 0
]
@@ -246,19 +244,6 @@ class SynchrotronApplicationService(object):
class SynchrotronServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = SynchrotronSlavedStore(self.get_db_conn(), self)
@@ -286,19 +271,18 @@ class SynchrotronServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse synchrotron now listening on port %d", port)
@@ -307,18 +291,15 @@ class SynchrotronServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -340,15 +321,13 @@ class SyncReplicationHandler(ReplicationClientHandler):
self.store = hs.get_datastore()
self.typing_handler = hs.get_typing_handler()
# NB this is a SynchrotronPresence, not a normal PresenceHandler
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
self.presence_handler.sync_callback = self.send_user_sync
def on_rdata(self, stream_name, token, rows):
super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
preserve_fn(self.process_and_notify)(stream_name, token, rows)
run_in_background(self.process_and_notify, stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(SyncReplicationHandler, self).get_streams_to_replicate()
@@ -360,55 +339,58 @@ class SyncReplicationHandler(ReplicationClientHandler):
@defer.inlineCallbacks
def process_and_notify(self, stream_name, token, rows):
if stream_name == "events":
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
event = yield self.store.get_event(row.event_id)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
max_token = self.store.get_room_max_stream_ordering()
self.notifier.on_new_room_event(
event, token, max_token, extra_users
)
elif stream_name == "push_rules":
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows],
)
elif stream_name in ("account_data", "tag_account_data",):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows],
)
elif stream_name == "receipts":
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "typing":
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "to_device":
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
try:
if stream_name == "events":
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
event = yield self.store.get_event(row.event_id)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
max_token = self.store.get_room_max_stream_ordering()
self.notifier.on_new_room_event(
event, token, max_token, extra_users
)
elif stream_name == "push_rules":
self.notifier.on_new_event(
"to_device_key", token, users=entities,
"push_rules_key", token, users=[row.user_id for row in rows],
)
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = yield self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event(
"device_list_key", token, rooms=all_room_ids,
)
elif stream_name == "presence":
yield self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows],
)
elif stream_name in ("account_data", "tag_account_data",):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows],
)
elif stream_name == "receipts":
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "typing":
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "to_device":
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event(
"to_device_key", token, users=entities,
)
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = yield self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event(
"device_list_key", token, rooms=all_room_ids,
)
elif stream_name == "presence":
yield self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows],
)
except Exception:
logger.exception("Error processing replication")
def start(config_options):

View File

@@ -38,7 +38,7 @@ def pid_running(pid):
try:
os.kill(pid, 0)
return True
except OSError, err:
except OSError as err:
if err.errno == errno.EPERM:
return True
return False
@@ -98,7 +98,7 @@ def stop(pidfile, app):
try:
os.kill(pid, signal.SIGTERM)
write("stopped %s" % (app,), colour=GREEN)
except OSError, err:
except OSError as err:
if err.errno == errno.ESRCH:
write("%s not running" % (app,), colour=YELLOW)
elif err.errno == errno.EPERM:
@@ -184,6 +184,9 @@ def main():
worker_configfiles.append(worker_configfile)
if options.all_processes:
# To start the main synapse with -a you need to add a worker file
# with worker_app == "synapse.app.homeserver"
start_stop_synapse = False
worker_configdir = options.all_processes
if not os.path.isdir(worker_configdir):
write(
@@ -200,11 +203,29 @@ def main():
with open(worker_configfile) as stream:
worker_config = yaml.load(stream)
worker_app = worker_config["worker_app"]
worker_pidfile = worker_config["worker_pid_file"]
worker_daemonize = worker_config["worker_daemonize"]
assert worker_daemonize, "In config %r: expected '%s' to be True" % (
worker_configfile, "worker_daemonize")
worker_cache_factor = worker_config.get("synctl_cache_factor")
if worker_app == "synapse.app.homeserver":
# We need to special case all of this to pick up options that may
# be set in the main config file or in this worker config file.
worker_pidfile = (
worker_config.get("pid_file")
or pidfile
)
worker_cache_factor = worker_config.get("synctl_cache_factor") or cache_factor
daemonize = worker_config.get("daemonize") or config.get("daemonize")
assert daemonize, "Main process must have daemonize set to true"
# The master process doesn't support using worker_* config.
for key in worker_config:
if key == "worker_app": # But we allow worker_app
continue
assert not key.startswith("worker_"), \
"Main process cannot use worker_* config"
else:
worker_pidfile = worker_config["worker_pid_file"]
worker_daemonize = worker_config["worker_daemonize"]
assert worker_daemonize, "In config %r: expected '%s' to be True" % (
worker_configfile, "worker_daemonize")
worker_cache_factor = worker_config.get("synctl_cache_factor")
workers.append(Worker(
worker_app, worker_configfile, worker_pidfile, worker_cache_factor,
))
@@ -231,6 +252,7 @@ def main():
for running_pid in running_pids:
while pid_running(running_pid):
time.sleep(0.2)
write("All processes exited; now restarting...")
if action == "start" or action == "restart":
if start_stop_synapse:

View File

@@ -39,11 +39,11 @@ from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.internet import reactor, defer
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.user_dir")
@@ -92,19 +92,6 @@ class UserDirectorySlaveStore(
class UserDirectoryServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = UserDirectorySlaveStore(self.get_db_conn(), self)
@@ -129,19 +116,18 @@ class UserDirectoryServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse user_dir now listening on port %d", port)
@@ -150,18 +136,15 @@ class UserDirectoryServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -181,7 +164,14 @@ class UserDirectoryReplicationHandler(ReplicationClientHandler):
stream_name, token, rows
)
if stream_name == "current_state_deltas":
preserve_fn(self.user_directory.notify_new_event)()
run_in_background(self._notify_directory)
@defer.inlineCallbacks
def _notify_directory(self):
try:
yield self.user_directory.notify_new_event()
except Exception:
logger.exception("Error notifiying user directory of state update")
def start(config_options):

View File

@@ -14,12 +14,15 @@
# limitations under the License.
from synapse.api.constants import EventTypes
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.types import GroupID, get_domain_from_id
from twisted.internet import defer
import logging
import re
from six import string_types
logger = logging.getLogger(__name__)
@@ -81,12 +84,13 @@ class ApplicationService(object):
# values.
NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
def __init__(self, token, url=None, namespaces=None, hs_token=None,
def __init__(self, token, hostname, url=None, namespaces=None, hs_token=None,
sender=None, id=None, protocols=None, rate_limited=True):
self.token = token
self.url = url
self.hs_token = hs_token
self.sender = sender
self.server_name = hostname
self.namespaces = self._check_namespaces(namespaces)
self.id = id
@@ -125,8 +129,26 @@ class ApplicationService(object):
raise ValueError(
"Expected bool for 'exclusive' in ns '%s'" % ns
)
group_id = regex_obj.get("group_id")
if group_id:
if not isinstance(group_id, str):
raise ValueError(
"Expected string for 'group_id' in ns '%s'" % ns
)
try:
GroupID.from_string(group_id)
except Exception:
raise ValueError(
"Expected valid group ID for 'group_id' in ns '%s'" % ns
)
if get_domain_from_id(group_id) != self.server_name:
raise ValueError(
"Expected 'group_id' to be this host in ns '%s'" % ns
)
regex = regex_obj.get("regex")
if isinstance(regex, basestring):
if isinstance(regex, string_types):
regex_obj["regex"] = re.compile(regex) # Pre-compile regex
else:
raise ValueError(
@@ -251,6 +273,21 @@ class ApplicationService(object):
if regex_obj["exclusive"]
]
def get_groups_for_user(self, user_id):
"""Get the groups that this user is associated with by this AS
Args:
user_id (str): The ID of the user.
Returns:
iterable[str]: an iterable that yields group_id strings.
"""
return (
regex_obj["group_id"]
for regex_obj in self.namespaces[ApplicationService.NS_USERS]
if "group_id" in regex_obj and regex_obj["regex"].match(user_id)
)
def is_rate_limited(self):
return self.rate_limited
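An illustrative use of the new group namespacing (all values here are invented; note that group_id must parse as a valid group ID and must belong to this homeserver's own domain, as enforced by _check_namespaces above):

service = ApplicationService(
    token="some_as_token",
    hostname="example.com",
    namespaces={
        "users": [{
            "regex": r"@irc_.*:example\.com",
            "exclusive": True,
            "group_id": "+irc:example.com",
        }],
    },
)
list(service.get_groups_for_user("@irc_alice:example.com"))
# -> ["+irc:example.com"]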

View File

@@ -72,7 +72,8 @@ class ApplicationServiceApi(SimpleHttpClient):
super(ApplicationServiceApi, self).__init__(hs)
self.clock = hs.get_clock()
self.protocol_meta_cache = ResponseCache(hs, timeout_ms=HOUR_IN_MS)
self.protocol_meta_cache = ResponseCache(hs, "as_protocol_meta",
timeout_ms=HOUR_IN_MS)
@defer.inlineCallbacks
def query_user(self, service, user_id):
@@ -192,9 +193,7 @@ class ApplicationServiceApi(SimpleHttpClient):
defer.returnValue(None)
key = (service.id, protocol)
return self.protocol_meta_cache.get(key) or (
self.protocol_meta_cache.set(key, _get())
)
return self.protocol_meta_cache.wrap(key, _get)
@defer.inlineCallbacks
def push_bulk(self, service, events, txn_id=None):
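The get/set pair is folded into a single wrap() call, which dedups concurrent requests for the same key. A minimal sketch of such a method, assuming get and set keep their earlier semantics of caching the in-flight deferred (the name argument added to the constructor presumably just labels the cache, e.g. for metrics):

class ResponseCacheSketch(object):
    def wrap(self, key, callback, *args, **kwargs):
        result = self.get(key)  # in-flight or recent result, else None
        if result is None:
            # First caller for this key: start the request and cache the
            # in-flight deferred so concurrent callers share it.
            result = self.set(key, callback(*args, **kwargs))
        return result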

View File

@@ -51,7 +51,7 @@ components.
from twisted.internet import defer
from synapse.appservice import ApplicationServiceState
from synapse.util.logcontext import preserve_fn
from synapse.util.logcontext import run_in_background
from synapse.util.metrics import Measure
import logging
@@ -106,7 +106,7 @@ class _ServiceQueuer(object):
def enqueue(self, service, event):
# if this service isn't being sent something
self.queued_events.setdefault(service.id, []).append(event)
preserve_fn(self._send_request)(service)
run_in_background(self._send_request, service)
@defer.inlineCallbacks
def _send_request(self, service):
@@ -152,10 +152,10 @@ class _TransactionController(object):
if sent:
yield txn.complete(self.store)
else:
preserve_fn(self._start_recoverer)(service)
except Exception as e:
logger.exception(e)
preserve_fn(self._start_recoverer)(service)
run_in_background(self._start_recoverer, service)
except Exception:
logger.exception("Error creating appservice transaction")
run_in_background(self._start_recoverer, service)
@defer.inlineCallbacks
def on_recovered(self, recoverer):
@@ -176,17 +176,20 @@ class _TransactionController(object):
@defer.inlineCallbacks
def _start_recoverer(self, service):
yield self.store.set_appservice_state(
service,
ApplicationServiceState.DOWN
)
logger.info(
"Application service falling behind. Starting recoverer. AS ID %s",
service.id
)
recoverer = self.recoverer_fn(service, self.on_recovered)
self.add_recoverers([recoverer])
recoverer.recover()
try:
yield self.store.set_appservice_state(
service,
ApplicationServiceState.DOWN
)
logger.info(
"Application service falling behind. Starting recoverer. AS ID %s",
service.id
)
recoverer = self.recoverer_fn(service, self.on_recovered)
self.add_recoverers([recoverer])
recoverer.recover()
except Exception:
logger.exception("Error starting AS recoverer")
@defer.inlineCallbacks
def _is_service_up(self, service):

View File

@@ -19,6 +19,8 @@ import os
import yaml
from textwrap import dedent
from six import integer_types
class ConfigError(Exception):
pass
@@ -49,7 +51,7 @@ Missing mandatory `server_name` config option.
class Config(object):
@staticmethod
def parse_size(value):
if isinstance(value, int) or isinstance(value, long):
if isinstance(value, integer_types):
return value
sizes = {"K": 1024, "M": 1024 * 1024}
size = 1
@@ -61,7 +63,7 @@ class Config(object):
@staticmethod
def parse_duration(value):
if isinstance(value, int) or isinstance(value, long):
if isinstance(value, integer_types):
return value
second = 1000
minute = 60 * second
@@ -279,31 +281,31 @@ class Config(object):
)
if not cls.path_exists(config_dir_path):
os.makedirs(config_dir_path)
with open(config_path, "wb") as config_file:
config_bytes, config = obj.generate_config(
with open(config_path, "w") as config_file:
config_str, config = obj.generate_config(
config_dir_path=config_dir_path,
server_name=server_name,
report_stats=(config_args.report_stats == "yes"),
is_generating_file=True
)
obj.invoke_all("generate_files", config)
config_file.write(config_bytes)
print (
config_file.write(config_str)
print((
"A config file has been generated in %r for server name"
" %r with corresponding SSL keys and self-signed"
" certificates. Please review this file and customise it"
" to your needs."
) % (config_path, server_name)
print (
) % (config_path, server_name))
print(
"If this server name is incorrect, you will need to"
" regenerate the SSL certificates"
)
return
else:
print (
print((
"Config file %r already exists. Generating any missing key"
" files."
) % (config_path,)
) % (config_path,))
generate_keys = True
parser = argparse.ArgumentParser(

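Since the parse_size/parse_duration hunks only swap the integer check, here is parse_size in full for context (reconstructed; six.integer_types is (int, long) on Python 2 and (int,) on Python 3):

from six import integer_types

def parse_size(value):
    # Ints (and longs on Python 2) are taken as byte counts directly.
    if isinstance(value, integer_types):
        return value
    # Strings may carry a K or M suffix: "600K" -> 614400, "10M" -> 10485760.
    sizes = {"K": 1024, "M": 1024 * 1024}
    size = 1
    suffix = value[-1]
    if suffix in sizes:
        value = value[:-1]
        size = sizes[suffix]
    return int(value) * size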
View File

@@ -17,10 +17,12 @@ from ._base import Config, ConfigError
from synapse.appservice import ApplicationService
from synapse.types import UserID
import urllib
import yaml
import logging
from six import string_types
from six.moves.urllib import parse as urlparse
logger = logging.getLogger(__name__)
@@ -89,21 +91,21 @@ def _load_appservice(hostname, as_info, config_filename):
"id", "as_token", "hs_token", "sender_localpart"
]
for field in required_string_fields:
if not isinstance(as_info.get(field), basestring):
if not isinstance(as_info.get(field), string_types):
raise KeyError("Required string field: '%s' (%s)" % (
field, config_filename,
))
# 'url' must either be a string or explicitly null, not missing
# to avoid accidentally turning off push for ASes.
if (not isinstance(as_info.get("url"), basestring) and
if (not isinstance(as_info.get("url"), string_types) and
as_info.get("url", "") is not None):
raise KeyError(
"Required string field or explicit null: 'url' (%s)" % (config_filename,)
)
localpart = as_info["sender_localpart"]
if urllib.quote(localpart) != localpart:
if urlparse.quote(localpart) != localpart:
raise ValueError(
"sender_localpart needs characters which are not URL encoded."
)
@@ -128,7 +130,7 @@ def _load_appservice(hostname, as_info, config_filename):
"Expected namespace entry in %s to be an object,"
" but got %s", ns, regex_obj
)
if not isinstance(regex_obj.get("regex"), basestring):
if not isinstance(regex_obj.get("regex"), string_types):
raise ValueError(
"Missing/bad type 'regex' key in %s", regex_obj
)
@@ -154,6 +156,7 @@ def _load_appservice(hostname, as_info, config_filename):
)
return ApplicationService(
token=as_info["as_token"],
hostname=hostname,
url=as_info["url"],
namespaces=as_info["namespaces"],
hs_token=as_info["hs_token"],

View File

@@ -41,7 +41,7 @@ class CasConfig(Config):
#cas_config:
# enabled: true
# server_url: "https://cas-server.com"
# service_url: "https://homesever.domain.com:8448"
# service_url: "https://homeserver.domain.com:8448"
# #required_attributes:
# # name: value
"""

View File

@@ -36,6 +36,7 @@ from .workers import WorkerConfig
from .push import PushConfig
from .spam_checker import SpamCheckerConfig
from .groups import GroupsConfig
from .user_directory import UserDirectoryConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
@@ -44,7 +45,7 @@ class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
JWTConfig, PasswordConfig, EmailConfig,
WorkerConfig, PasswordAuthProviderConfig, PushConfig,
SpamCheckerConfig, GroupsConfig,):
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,):
pass

View File

@@ -28,27 +28,27 @@ DEFAULT_LOG_CONFIG = Template("""
version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s\
- %(message)s'
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - \
%(request)s - %(message)s'
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
request: ""
context:
(): synapse.util.logcontext.LoggingContextFilter
request: ""
handlers:
file:
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: ${log_file}
maxBytes: 104857600
backupCount: 10
filters: [context]
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
file:
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: ${log_file}
maxBytes: 104857600
backupCount: 10
filters: [context]
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse:
@@ -74,17 +74,10 @@ class LoggingConfig(Config):
self.log_file = self.abspath(config.get("log_file"))
def default_config(self, config_dir_path, server_name, **kwargs):
log_file = self.abspath("homeserver.log")
log_config = self.abspath(
os.path.join(config_dir_path, server_name + ".log.config")
)
return """
# Logging verbosity level. Ignored if log_config is specified.
verbose: 0
# File to write logging to. Ignored if log_config is specified.
log_file: "%(log_file)s"
# A yaml python logging config file
log_config: "%(log_config)s"
""" % locals()
@@ -123,9 +116,10 @@ class LoggingConfig(Config):
def generate_files(self, config):
log_config = config.get("log_config")
if log_config and not os.path.exists(log_config):
with open(log_config, "wb") as log_config_file:
log_file = self.abspath("homeserver.log")
with open(log_config, "w") as log_config_file:
log_config_file.write(
DEFAULT_LOG_CONFIG.substitute(log_file=config["log_file"])
DEFAULT_LOG_CONFIG.substitute(log_file=log_file)
)
@@ -148,8 +142,11 @@ def setup_logging(config, use_worker_options=False):
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if log_config is None:
# We don't have a logfile, so fall back to the 'verbosity' param from
# the config or cmdline. (Note that we generate a log config for new
# installs, so this will be an unusual case)
level = logging.INFO
level_for_storage = logging.INFO
if config.verbosity:
@@ -157,11 +154,10 @@ def setup_logging(config, use_worker_options=False):
if config.verbosity > 1:
level_for_storage = logging.DEBUG
# FIXME: we need a logging.WARN for a -q quiet option
logger = logging.getLogger('')
logger.setLevel(level)
logging.getLogger('synapse.storage').setLevel(level_for_storage)
logging.getLogger('synapse.storage.SQL').setLevel(level_for_storage)
formatter = logging.Formatter(log_format)
if log_file:
@@ -176,6 +172,10 @@ def setup_logging(config, use_worker_options=False):
logger.info("Opened new log file due to SIGHUP")
else:
handler = logging.StreamHandler()
def sighup(signum, stack):
pass
handler.setFormatter(formatter)
handler.addFilter(LoggingContextFilter(request=""))

View File

@@ -13,41 +13,40 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from ._base import Config
from synapse.util.module_loader import load_module
LDAP_PROVIDER = 'ldap_auth_provider.LdapAuthProvider'
class PasswordAuthProviderConfig(Config):
def read_config(self, config):
self.password_providers = []
provider_config = None
providers = []
# We want to be backwards compatible with the old `ldap_config`
# param.
ldap_config = config.get("ldap_config", {})
self.ldap_enabled = ldap_config.get("enabled", False)
if self.ldap_enabled:
from ldap_auth_provider import LdapAuthProvider
parsed_config = LdapAuthProvider.parse_config(ldap_config)
self.password_providers.append((LdapAuthProvider, parsed_config))
if ldap_config.get("enabled", False):
providers.append({
'module': LDAP_PROVIDER,
'config': ldap_config,
})
providers = config.get("password_providers", [])
providers.extend(config.get("password_providers", []))
for provider in providers:
mod_name = provider['module']
# This is for backwards compat when the ldap auth provider resided
# in this package.
if provider['module'] == "synapse.util.ldap_auth_provider.LdapAuthProvider":
from ldap_auth_provider import LdapAuthProvider
provider_class = LdapAuthProvider
try:
provider_config = provider_class.parse_config(provider["config"])
except Exception as e:
raise ConfigError(
"Failed to parse config for %r: %r" % (provider['module'], e)
)
else:
(provider_class, provider_config) = load_module(provider)
if mod_name == "synapse.util.ldap_auth_provider.LdapAuthProvider":
mod_name = LDAP_PROVIDER
(provider_class, provider_config) = load_module({
"module": mod_name,
"config": provider['config'],
})
self.password_providers.append((provider_class, provider_config))
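The net effect of this rewrite is that the legacy ldap_config block is folded into the generic provider path. Illustratively (the uri key here is invented):

# a legacy config such as:
#   ldap_config:
#     enabled: true
#     uri: "ldap://ldap.example.com"
# is now read exactly as if the admin had written:
#   password_providers:
#     - module: "ldap_auth_provider.LdapAuthProvider"
#       config:
#         enabled: true
#         uri: "ldap://ldap.example.com"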

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -18,28 +19,43 @@ from ._base import Config
class PushConfig(Config):
def read_config(self, config):
self.push_redact_content = False
push_config = config.get("push", {})
self.push_include_content = push_config.get("include_content", True)
# There was a 'redact_content' setting, but it was mistakenly read from the
# 'email' section. Check for the flag in the 'push' section, and log,
# but do not honour it to avoid nasty surprises when people upgrade.
if push_config.get("redact_content") is not None:
print(
"The push.redact_content content option has never worked. "
"Please set push.include_content if you want this behaviour"
)
# Now check for the one in the 'email' section and honour it,
# with a warning.
push_config = config.get("email", {})
self.push_redact_content = push_config.get("redact_content", False)
redact_content = push_config.get("redact_content")
if redact_content is not None:
print(
"The 'email.redact_content' option is deprecated: "
"please set push.include_content instead"
)
self.push_include_content = not redact_content
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Control how push messages are sent to google/apple to notifications.
# Normally every message said in a room with one or more people using
# mobile devices will be posted to a push server hosted by matrix.org
# which is registered with google and apple in order to allow push
# notifications to be sent to these mobile devices.
#
# Setting redact_content to true will make the push messages contain no
# message content which will provide increased privacy. This is a
# temporary solution pending improvements to Android and iPhone apps
# to get content from the app rather than the notification.
#
# Clients requesting push notifications can either have the body of
# the message sent in the notification poke along with other details
# like the sender, or just the event ID and room ID (`event_id_only`).
# If clients choose the former, this option controls whether the
# notification request includes the content of the event (other details
# like the sender are still included). For `event_id_only` push, it
# has no effect.
# For modern Android devices the notification content will still appear
# because it is loaded by the app. iPhone, however, will send a
# notification saying only that a message arrived and who it came from.
#
#push:
# redact_content: false
# include_content: true
"""

View File

@@ -31,6 +31,8 @@ class RegistrationConfig(Config):
strtobool(str(config["disable_registration"]))
)
self.registrations_require_3pid = config.get("registrations_require_3pid", [])
self.allowed_local_3pids = config.get("allowed_local_3pids", [])
self.registration_shared_secret = config.get("registration_shared_secret")
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
@@ -52,13 +54,32 @@ class RegistrationConfig(Config):
# Enable registration for new users.
enable_registration: False
# The user must provide all of the below types of 3PID when registering.
#
# registrations_require_3pid:
# - email
# - msisdn
# Mandate that users are only allowed to associate certain formats of
# 3PIDs with accounts on this server.
#
# allowed_local_3pids:
# - medium: email
# pattern: ".*@matrix\\.org"
# - medium: email
# pattern: ".*@vector\\.im"
# - medium: msisdn
# pattern: "\\+44"
# If set, allows registration by anyone who also has the shared
# secret, even if registration is otherwise disabled.
registration_shared_secret: "%(registration_shared_secret)s"
# Set the number of bcrypt rounds used to generate password hash.
# Larger numbers increase the work factor needed to generate the hash.
# The default number of rounds is 12.
# The default number is 12 (which equates to 2^12 rounds).
# N.B. that increasing this will exponentially increase the time required
# to register or login - e.g. 24 => 2^24 rounds which will take >20 mins.
bcrypt_rounds: 12
# Allows users to register as guests without a password/email/etc, and

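allowed_local_3pids is enforced at registration time by matching both the medium and the pattern; a hypothetical checker (the helper name and placement are assumptions) could look like:

import re

def check_3pid_allowed(config, medium, address):
    # An absent or empty whitelist means no restriction at all.
    if not config.allowed_local_3pids:
        return True
    return any(
        constraint["medium"] == medium
        and re.match(constraint["pattern"], address)
        for constraint in config.allowed_local_3pids
    )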
View File

@@ -16,6 +16,8 @@
from ._base import Config, ConfigError
from collections import namedtuple
from synapse.util.module_loader import load_module
MISSING_NETADDR = (
"Missing netaddr library. This is required for URL preview API."
@@ -36,6 +38,14 @@ ThumbnailRequirement = namedtuple(
"ThumbnailRequirement", ["width", "height", "method", "media_type"]
)
MediaStorageProviderConfig = namedtuple(
"MediaStorageProviderConfig", (
"store_local", # Whether to store newly uploaded local files
"store_remote", # Whether to store newly downloaded remote files
"store_synchronous", # Whether to wait for successful storage for local uploads
),
)
def parse_thumbnail_requirements(thumbnail_sizes):
""" Takes a list of dictionaries with "width", "height", and "method" keys
@@ -73,16 +83,61 @@ class ContentRepositoryConfig(Config):
self.media_store_path = self.ensure_directory(config["media_store_path"])
self.backup_media_store_path = config.get("backup_media_store_path")
if self.backup_media_store_path:
self.backup_media_store_path = self.ensure_directory(
self.backup_media_store_path
)
backup_media_store_path = config.get("backup_media_store_path")
self.synchronous_backup_media_store = config.get(
synchronous_backup_media_store = config.get(
"synchronous_backup_media_store", False
)
storage_providers = config.get("media_storage_providers", [])
if backup_media_store_path:
if storage_providers:
raise ConfigError(
"Cannot use both 'backup_media_store_path' and 'storage_providers'"
)
storage_providers = [{
"module": "file_system",
"store_local": True,
"store_synchronous": synchronous_backup_media_store,
"store_remote": True,
"config": {
"directory": backup_media_store_path,
}
}]
# This is a list of config that can be used to create the storage
# providers. The entries are tuples of (Class, class_config,
# MediaStorageProviderConfig), where Class is the class of the provider,
# the class_config the config to pass to it, and
# MediaStorageProviderConfig are options for StorageProviderWrapper.
#
# We don't create the storage providers here as not all workers need
# them to be started.
self.media_storage_providers = []
for provider_config in storage_providers:
# We special case the module "file_system" so as not to need to
# expose FileStorageProviderBackend
if provider_config["module"] == "file_system":
provider_config["module"] = (
"synapse.rest.media.v1.storage_provider"
".FileStorageProviderBackend"
)
provider_class, parsed_config = load_module(provider_config)
wrapper_config = MediaStorageProviderConfig(
provider_config.get("store_local", False),
provider_config.get("store_remote", False),
provider_config.get("store_synchronous", False),
)
self.media_storage_providers.append(
(provider_class, parsed_config, wrapper_config,)
)
self.uploads_path = self.ensure_directory(config["uploads_path"])
self.dynamic_thumbnails = config["dynamic_thumbnails"]
self.thumbnail_requirements = parse_thumbnail_requirements(
@@ -127,13 +182,19 @@ class ContentRepositoryConfig(Config):
# Directory where uploaded images and attachments are stored.
media_store_path: "%(media_store)s"
# A secondary directory where uploaded images and attachments are
# stored as a backup.
# backup_media_store_path: "%(media_store)s"
# Whether to wait for successful write to backup media store before
# returning successfully.
# synchronous_backup_media_store: false
# Media storage providers allow media to be stored in different
# locations.
# media_storage_providers:
# - module: file_system
# # Whether to write new local files.
# store_local: false
# # Whether to write new remote media
# store_remote: false
# # Whether to block upload requests waiting for write to this
# # provider to complete
# store_synchronous: false
# config:
# directory: /mnt/some/other/directory
# Directory where in-progress uploads are stored.
uploads_path: "%(uploads_path)s"
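The (provider class, provider config, wrapper config) tuples collected above are consumed when the media repository itself starts; roughly (a sketch, with the StorageProviderWrapper keyword names inferred from the MediaStorageProviderConfig comments):

storage_providers = []
for provider_class, provider_config, wrapper_config in hs.config.media_storage_providers:
    backend = provider_class(hs, provider_config)  # instantiated lazily, per worker
    storage_providers.append(StorageProviderWrapper(
        backend,
        store_local=wrapper_config.store_local,
        store_remote=wrapper_config.store_remote,
        store_synchronous=wrapper_config.store_synchronous,
    ))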

View File

@@ -41,6 +41,12 @@ class ServerConfig(Config):
# false only if we are updating the user directory in a worker
self.update_user_directory = config.get("update_user_directory", True)
# whether to enable the media repository endpoints. This should be set
# to false if the media repository is running as a separate endpoint;
# doing so ensures that we will not run cache cleanup jobs on the
# master, potentially causing inconsistency.
self.enable_media_repo = config.get("enable_media_repo", True)
self.filter_timeline_limit = config.get("filter_timeline_limit", -1)
# Whether we should block invites sent to users on this server
@@ -49,6 +55,17 @@ class ServerConfig(Config):
"block_non_admin_invites", False,
)
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist = None
federation_domain_whitelist = config.get(
"federation_domain_whitelist", None
)
# turn the whitelist into a hash for speed of lookup
if federation_domain_whitelist is not None:
self.federation_domain_whitelist = {}
for domain in federation_domain_whitelist:
self.federation_domain_whitelist[domain] = True
if self.public_baseurl is not None:
if self.public_baseurl[-1] != '/':
self.public_baseurl += '/'
@@ -204,6 +221,17 @@ class ServerConfig(Config):
# (except those sent by local server admins). The default is False.
# block_non_admin_invites: True
# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.
#
# federation_domain_whitelist:
# - lon.example.com
# - nyc.example.com
# - syd.example.com
# List of ports that Synapse should listen on, their purpose and their
# configuration.
listeners:
@@ -214,13 +242,12 @@ class ServerConfig(Config):
port: %(bind_port)s
# Local addresses to listen on.
# This will listen on all IPv4 addresses by default.
# On Linux and Mac OS, `::` will listen on all IPv4 and IPv6
# addresses by default. For most other OSes, this will only listen
# on IPv6.
bind_addresses:
- '::'
- '0.0.0.0'
# Uncomment to listen on all IPv6 interfaces
# N.B: On at least Linux this will also listen on all IPv4
# addresses, so you will need to comment out the line above.
# - '::'
# This is a 'http' listener, allows us to specify 'resources'.
type: http
@@ -247,11 +274,18 @@ class ServerConfig(Config):
- names: [federation] # Federation APIs
compress: false
# optional list of additional endpoints which can be loaded via
# dynamic modules
# additional_resources:
# "/_matrix/my/custom/endpoint":
# module: my_module.CustomRequestHandler
# config: {}
# Unsecure HTTP listener,
# For when matrix traffic passes through loadbalancer that unwraps TLS.
- port: %(unsecure_port)s
tls: false
bind_addresses: ['0.0.0.0']
bind_addresses: ['::', '0.0.0.0']
type: http
x_forwarded: false
@@ -265,7 +299,7 @@ class ServerConfig(Config):
# Turn on the twisted ssh manhole service on localhost on the given
# port.
# - port: 9000
# bind_address: 127.0.0.1
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
""" % locals()

View File

@@ -96,7 +96,7 @@ class TlsConfig(Config):
# certificates returned by this server match one of the fingerprints.
#
# Synapse automatically adds the fingerprint of its own certificate
# to the list. So if federation traffic is handle directly by synapse
# to the list. So if federation traffic is handled directly by synapse
# then no modification to the list is required.
#
# If synapse is run behind a load balancer that handles the TLS then it
@@ -109,6 +109,12 @@ class TlsConfig(Config):
# key. It may be necessary to publish the fingerprints of a new
# certificate and wait until the "valid_until_ts" of the previous key
# responses have passed before deploying it.
#
# You can calculate a fingerprint from a given TLS listener via:
# openssl s_client -connect $host:$port < /dev/null 2> /dev/null |
# openssl x509 -outform DER | openssl sha256 -binary | base64 | tr -d '='
# or by checking matrix.org/federationtester/api/report?server_name=$host
#
tls_fingerprints: []
# tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]
""" % locals()
@@ -127,7 +133,7 @@ class TlsConfig(Config):
tls_dh_params_path = config["tls_dh_params_path"]
if not self.path_exists(tls_private_key_path):
with open(tls_private_key_path, "w") as private_key_file:
with open(tls_private_key_path, "wb") as private_key_file:
tls_private_key = crypto.PKey()
tls_private_key.generate_key(crypto.TYPE_RSA, 2048)
private_key_pem = crypto.dump_privatekey(
@@ -142,7 +148,7 @@ class TlsConfig(Config):
)
if not self.path_exists(tls_certificate_path):
with open(tls_certificate_path, "w") as certificate_file:
with open(tls_certificate_path, "wb") as certificate_file:
cert = crypto.X509()
subject = cert.get_subject()
subject.CN = config["server_name"]

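The openssl pipeline quoted in the comment has a direct pyOpenSSL equivalent: DER-encode the certificate, hash it with SHA-256, base64-encode, and strip the padding. A sketch:

import base64
import hashlib
from OpenSSL import crypto

def cert_fingerprint(cert):
    # cert is an OpenSSL.crypto.X509; FILETYPE_ASN1 yields DER bytes.
    der = crypto.dump_certificate(crypto.FILETYPE_ASN1, cert)
    digest = hashlib.sha256(der).digest()
    return base64.b64encode(digest).decode("ascii").rstrip("=")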
View File

@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class UserDirectoryConfig(Config):
"""User Directory Configuration
Configuration for the behaviour of the /user_directory API
"""
def read_config(self, config):
self.user_directory_search_all_users = False
user_directory_config = config.get("user_directory", None)
if user_directory_config:
self.user_directory_search_all_users = (
user_directory_config.get("search_all_users", False)
)
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# User Directory configuration
#
# 'search_all_users' defines whether to search all users visible to your HS
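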
# when searching the user directory, rather than limiting to users visible
# in public rooms. Defaults to false. If you set it to true, you'll have to run
# UPDATE user_directory_stream_pos SET stream_id = NULL;
# on your database to tell it to rebuild the user_directory search indexes.
#
#user_directory:
# search_all_users: false
"""

View File

@@ -23,13 +23,26 @@ class WorkerConfig(Config):
def read_config(self, config):
self.worker_app = config.get("worker_app")
# Canonicalise worker_app so that master always has None
if self.worker_app == "synapse.app.homeserver":
self.worker_app = None
self.worker_listeners = config.get("worker_listeners")
self.worker_daemonize = config.get("worker_daemonize")
self.worker_pid_file = config.get("worker_pid_file")
self.worker_log_file = config.get("worker_log_file")
self.worker_log_config = config.get("worker_log_config")
# The host used to connect to the main synapse
self.worker_replication_host = config.get("worker_replication_host", None)
# The port on the main synapse for TCP replication
self.worker_replication_port = config.get("worker_replication_port", None)
# The port on the main synapse for HTTP replication endpoint
self.worker_replication_http_port = config.get("worker_replication_http_port")
self.worker_name = config.get("worker_name", self.worker_app)
self.worker_main_http_uri = config.get("worker_main_http_uri", None)

View File

@@ -13,8 +13,8 @@
# limitations under the License.
from twisted.internet import ssl
from OpenSSL import SSL
from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName
from OpenSSL import SSL, crypto
from twisted.internet._sslverify import _defaultCurveName
import logging
@@ -32,8 +32,9 @@ class ServerContextFactory(ssl.ContextFactory):
@staticmethod
def configure_context(context, config):
try:
_ecCurve = _OpenSSLECCurve(_defaultCurveName)
_ecCurve.addECKeyToContext(context)
_ecCurve = crypto.get_elliptic_curve(_defaultCurveName)
context.set_tmp_ecdh(_ecCurve)
except Exception:
logger.exception("Failed to enable elliptic curve for TLS")
context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)

View File

@@ -32,15 +32,22 @@ def check_event_content_hash(event, hash_algorithm=hashlib.sha256):
"""Check whether the hash for this PDU matches the contents"""
name, expected_hash = compute_content_hash(event, hash_algorithm)
logger.debug("Expecting hash: %s", encode_base64(expected_hash))
if name not in event.hashes:
# some malformed events lack a 'hashes'. Protect against it being missing
# or a weird type by basically treating it the same as an unhashed event.
hashes = event.get("hashes")
if not isinstance(hashes, dict):
raise SynapseError(400, "Malformed 'hashes'", Codes.UNAUTHORIZED)
if name not in hashes:
raise SynapseError(
400,
"Algorithm %s not in hashes %s" % (
name, list(event.hashes),
name, list(hashes),
),
Codes.UNAUTHORIZED,
)
message_hash_base64 = event.hashes[name]
message_hash_base64 = hashes[name]
try:
message_hash_bytes = decode_base64(message_hash_base64)
except Exception:

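The hunk stops just short of the function's tail; the remainder, unchanged by this diff, plausibly maps a decode failure to a 400 and then compares the digests:

        raise SynapseError(
            400,
            "Invalid base64: %s" % (message_hash_base64,),
            Codes.UNAUTHORIZED,
        )
    return message_hash_bytes == expected_hash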
View File

@@ -19,7 +19,8 @@ from synapse.api.errors import SynapseError, Codes
from synapse.util import unwrapFirstError, logcontext
from synapse.util.logcontext import (
PreserveLoggingContext,
preserve_fn
preserve_fn,
run_in_background,
)
from synapse.util.metrics import Measure
@@ -127,7 +128,7 @@ class Keyring(object):
verify_requests.append(verify_request)
preserve_fn(self._start_key_lookups)(verify_requests)
run_in_background(self._start_key_lookups, verify_requests)
# Pass those keys to handle_key_deferred so that the json object
# signatures can be verified
@@ -146,53 +147,56 @@ class Keyring(object):
verify_requests (List[VerifyKeyRequest]):
"""
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred()
for rq in verify_requests
}
try:
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred()
for rq in verify_requests
}
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(
[rq.server_name for rq in verify_requests],
server_to_deferred,
)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
server_to_request_ids = {}
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for verify_request in verify_requests:
verify_request.deferred.addBoth(
remove_deferreds, verify_request,
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(
[rq.server_name for rq in verify_requests],
server_to_deferred,
)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
server_to_request_ids = {}
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for verify_request in verify_requests:
verify_request.deferred.addBoth(
remove_deferreds, verify_request,
)
except Exception:
logger.exception("Error starting key lookups")
@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_names, server_to_deferred):
"""Waits for any previous key lookups for the given servers to finish.
@@ -313,7 +317,7 @@ class Keyring(object):
if not verify_request.deferred.called:
verify_request.deferred.errback(err)
preserve_fn(do_iterations)().addErrback(on_err)
run_in_background(do_iterations).addErrback(on_err)
@defer.inlineCallbacks
def get_keys_from_store(self, server_name_and_key_ids):
@@ -329,8 +333,9 @@ class Keyring(object):
"""
res = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.get_server_verify_keys)(
server_name, key_ids
run_in_background(
self.store.get_server_verify_keys,
server_name, key_ids,
).addCallback(lambda ks, server: (server, ks), server_name)
for server_name, key_ids in server_name_and_key_ids
],
@@ -352,13 +357,13 @@ class Keyring(object):
logger.exception(
"Unable to get key from %r: %s %s",
perspective_name,
type(e).__name__, str(e.message),
type(e).__name__, str(e),
)
defer.returnValue({})
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(p_name, p_keys)
run_in_background(get_key, p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
@@ -384,7 +389,7 @@ class Keyring(object):
logger.info(
"Unable to get key %r for %r directly: %s %s",
key_ids, server_name,
type(e).__name__, str(e.message),
type(e).__name__, str(e),
)
if not keys:
@@ -398,7 +403,7 @@ class Keyring(object):
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(server_name, key_ids)
run_in_background(get_key, server_name, key_ids)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
@@ -481,7 +486,8 @@ class Keyring(object):
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
run_in_background(
self.store_keys,
server_name=server_name,
from_server=perspective_name,
verify_keys=response_keys,
@@ -539,7 +545,8 @@ class Keyring(object):
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
run_in_background(
self.store_keys,
server_name=key_server_name,
from_server=server_name,
verify_keys=verify_keys,
@@ -615,7 +622,8 @@ class Keyring(object):
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_keys_json)(
run_in_background(
self.store.store_server_keys_json,
server_name=server_name,
key_id=key_id,
from_server=server_name,
@@ -716,7 +724,8 @@ class Keyring(object):
# TODO(markjh): Store whether the keys have expired.
return logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_verify_key)(
run_in_background(
self.store.store_server_verify_key,
server_name, server_name, key.time_added, key
)
for key_id, key in verify_keys.items()
@@ -734,7 +743,7 @@ def _handle_key_deferred(verify_request):
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
server_name, type(e).__name__, str(e),
)
raise SynapseError(
502,
@@ -744,7 +753,7 @@ def _handle_key_deferred(verify_request):
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
server_name, type(e).__name__, str(e),
)
raise SynapseError(
401,

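The repeated change through this file swaps `preserve_fn(f)(...)` for `run_in_background(f, ...)`: both kick off `f` in the calling logcontext, but the new helper makes the fire-and-forget intent explicit. A minimal sketch of the consuming pattern (the function and variable names here are illustrative, not from the diff):

    from twisted.internet import defer
    from synapse.util.logcontext import make_deferred_yieldable, run_in_background

    @defer.inlineCallbacks
    def fetch_all(fetcher, items):
        # run_in_background starts `fetcher` immediately in the calling
        # logcontext; the deferred it returns must not be yielded on directly.
        deferreds = [run_in_background(fetcher, item) for item in items]

        # make_deferred_yieldable restores the calling logcontext when the
        # gathered deferred fires, so it is safe to yield on.
        results = yield make_deferred_yieldable(
            defer.gatherResults(deferreds, consumeErrors=True),
        )
        defer.returnValue(results)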

@@ -319,7 +319,7 @@ def _is_membership_change_allowed(event, auth_events):
# TODO (erikj): Implement kicks.
if target_banned and user_level < ban_level:
raise AuthError(
403, "You cannot unban user &s." % (target_user_id,)
403, "You cannot unban user %s." % (target_user_id,)
)
elif target_user_id != event.user_id:
kick_level = _get_named_level(auth_events, "kick", 50)


@@ -47,14 +47,26 @@ class _EventInternalMetadata(object):
def _event_dict_property(key):
# We want to be able to use hasattr with the event dict properties.
# However, (on python3) hasattr expects AttributeError to be raised. Hence,
# we need to transform the KeyError into an AttributeError
def getter(self):
return self._event_dict[key]
try:
return self._event_dict[key]
except KeyError:
raise AttributeError(key)
def setter(self, v):
self._event_dict[key] = v
try:
self._event_dict[key] = v
except KeyError:
raise AttributeError(key)
def delete(self):
del self._event_dict[key]
try:
del self._event_dict[key]
except KeyError:
raise AttributeError(key)
return property(
getter,

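The point of wrapping each accessor in try/except is that `hasattr` on Python 3 only treats `AttributeError` as "attribute missing"; a bare `KeyError` would propagate out of it. A small self-contained sketch (the class and property names are hypothetical):

    def _demo_dict_property(key):
        # Mirror of the pattern above: translate KeyError into AttributeError
        # so that hasattr() behaves correctly on Python 3.
        def getter(self):
            try:
                return self._event_dict[key]
            except KeyError:
                raise AttributeError(key)
        return property(getter)

    class _DemoMetadata(object):
        outlier = _demo_dict_property("outlier")

        def __init__(self):
            self._event_dict = {}

    # Without the translation, hasattr() would raise KeyError here rather
    # than returning False.
    assert not hasattr(_DemoMetadata(), "outlier")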

@@ -13,6 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from frozendict import frozendict
class EventContext(object):
"""
@@ -25,7 +29,9 @@ class EventContext(object):
The current state map excluding the current event.
(type, state_key) -> event_id
state_group (int): state group id
state_group (int|None): state group id, if the state has been stored
as a state group. This is usually only None if e.g. the event is
an outlier.
rejected (bool|str): A rejection reason if the event was rejected, else
False
@@ -46,7 +52,6 @@ class EventContext(object):
"prev_state_ids",
"state_group",
"rejected",
"push_actions",
"prev_group",
"delta_ids",
"prev_state_events",
@@ -61,7 +66,6 @@ class EventContext(object):
self.state_group = None
self.rejected = False
self.push_actions = []
# A previously persisted state group and a delta between that
# and this state.
@@ -71,3 +75,98 @@ class EventContext(object):
self.prev_state_events = None
self.app_service = None
def serialize(self, event):
"""Converts self to a type that can be serialized as JSON, and then
deserialized by `deserialize`
Args:
event (FrozenEvent): The event that this context relates to
Returns:
dict
"""
# We don't serialize the full state dicts, instead they get pulled out
# of the DB on the other side. However, the other side can't figure out
# the prev_state_ids, so if we're a state event we include the event
# id that we replaced in the state.
if event.is_state():
prev_state_id = self.prev_state_ids.get((event.type, event.state_key))
else:
prev_state_id = None
return {
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
"state_group": self.state_group,
"rejected": self.rejected,
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None
}
@staticmethod
@defer.inlineCallbacks
def deserialize(store, input):
"""Converts a dict that was produced by `serialize` back into a
EventContext.
Args:
store (DataStore): Used to convert AS ID to AS object
input (dict): A dict produced by `serialize`
Returns:
EventContext
"""
context = EventContext()
context.state_group = input["state_group"]
context.rejected = input["rejected"]
context.prev_group = input["prev_group"]
context.delta_ids = _decode_state_dict(input["delta_ids"])
context.prev_state_events = input["prev_state_events"]
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
prev_state_id = input["prev_state_id"]
event_type = input["event_type"]
event_state_key = input["event_state_key"]
context.current_state_ids = yield store.get_state_ids_for_group(
context.state_group,
)
if prev_state_id and event_state_key:
context.prev_state_ids = dict(context.current_state_ids)
context.prev_state_ids[(event_type, event_state_key)] = prev_state_id
else:
context.prev_state_ids = context.current_state_ids
app_service_id = input["app_service_id"]
if app_service_id:
context.app_service = store.get_app_service_by_id(app_service_id)
defer.returnValue(context)
def _encode_state_dict(state_dict):
"""Since dicts of (type, state_key) -> event_id cannot be serialized in
JSON we need to convert them to a form that can.
"""
if state_dict is None:
return None
return [
(etype, state_key, v)
for (etype, state_key), v in state_dict.iteritems()
]
def _decode_state_dict(input):
"""Decodes a state dict encoded using `_encode_state_dict` above
"""
if input is None:
return None
return frozendict({(etype, state_key,): v for etype, state_key, v in input})

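The encode/decode pair exists because JSON objects can only have string keys, so a `(type, state_key) -> event_id` map is flattened into a list of triples on the wire. A round-trip sketch (the state values are illustrative):

    import json

    from frozendict import frozendict

    state = {("m.room.member", "@alice:example.com"): "$event1:example.com"}

    # Flatten the tuple-keyed dict into JSON-serializable triples.
    encoded = [(etype, skey, v) for (etype, skey), v in state.items()]
    wire = json.dumps(encoded)

    # Rebuild the (immutable) map on the receiving side.
    decoded = frozendict({
        (etype, skey): v for etype, skey, v in json.loads(wire)
    })
    assert dict(decoded) == state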

@@ -15,11 +15,3 @@
""" This package includes all the federation specific logic.
"""
from .replication import ReplicationLayer
def initialize_http_replication(hs):
transport = hs.get_federation_transport_client()
return ReplicationLayer(hs, transport)


@@ -14,9 +14,14 @@
# limitations under the License.
import logging
from synapse.api.errors import SynapseError
import six
from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import SynapseError, Codes
from synapse.crypto.event_signing import check_event_content_hash
from synapse.events import FrozenEvent
from synapse.events.utils import prune_event
from synapse.http.servlet import assert_params_in_request
from synapse.util import unwrapFirstError, logcontext
from twisted.internet import defer
@@ -25,7 +30,13 @@ logger = logging.getLogger(__name__)
class FederationBase(object):
def __init__(self, hs):
self.hs = hs
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.spam_checker = hs.get_spam_checker()
self.store = hs.get_datastore()
self._clock = hs.get_clock()
@defer.inlineCallbacks
def _check_sigs_and_hash_and_fetch(self, origin, pdus, outlier=False,
@@ -169,3 +180,40 @@ class FederationBase(object):
)
return deferreds
def event_from_pdu_json(pdu_json, outlier=False):
"""Construct a FrozenEvent from an event json received over federation
Args:
pdu_json (object): pdu as received over federation
outlier (bool): True to mark this event as an outlier
Returns:
FrozenEvent
Raises:
SynapseError: if the pdu is missing required fields or is otherwise
not a valid matrix event
"""
# we could probably enforce a bunch of other fields here (room_id, sender,
# origin, etc etc)
assert_params_in_request(pdu_json, ('event_id', 'type', 'depth'))
depth = pdu_json['depth']
if not isinstance(depth, six.integer_types):
raise SynapseError(400, "Depth %r not an intger" % (depth, ),
Codes.BAD_JSON)
if depth < 0:
raise SynapseError(400, "Depth too small", Codes.BAD_JSON)
elif depth > MAX_DEPTH:
raise SynapseError(400, "Depth too large", Codes.BAD_JSON)
event = FrozenEvent(
pdu_json
)
event.internal_metadata.outlier = outlier
return event

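Hoisting `event_from_pdu_json` to module level also centralizes the new depth validation. A sketch of how a caller sees it (the PDUs here are cut down for illustration; real ones carry room_id, sender, hashes and so on):

    from synapse.api.errors import SynapseError
    from synapse.federation.federation_base import event_from_pdu_json

    pdu = {"event_id": "$abc:example.com", "type": "m.room.message", "depth": 5}
    event = event_from_pdu_json(pdu, outlier=True)
    assert event.internal_metadata.outlier

    try:
        # A non-integer depth (or one outside [0, MAX_DEPTH]) is rejected.
        event_from_pdu_json({"event_id": "$x:a", "type": "m.x", "depth": "5"})
    except SynapseError as e:
        assert e.code == 400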

@@ -14,28 +14,30 @@
# limitations under the License.
from twisted.internet import defer
from .federation_base import FederationBase
from synapse.api.constants import Membership
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError,
)
from synapse.util import unwrapFirstError, logcontext
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logutils import log_function
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
from synapse.events import FrozenEvent, builder
import synapse.metrics
from synapse.util.retryutils import NotRetryingDestination
import copy
import itertools
import logging
import random
from six.moves import range
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError, FederationDeniedError
)
from synapse.events import builder
from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
import synapse.metrics
from synapse.util import logcontext, unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.logutils import log_function
from synapse.util.retryutils import NotRetryingDestination
logger = logging.getLogger(__name__)
@@ -58,6 +60,7 @@ class FederationClient(FederationBase):
self._clear_tried_cache, 60 * 1000,
)
self.state = hs.get_state_handler()
self.transport_layer = hs.get_federation_transport_client()
def _clear_tried_cache(self):
"""Clear pdu_destination_tried cache"""
@@ -184,7 +187,7 @@ class FederationClient(FederationBase):
logger.debug("backfill transaction_data=%s", repr(transaction_data))
pdus = [
self.event_from_pdu_json(p, outlier=False)
event_from_pdu_json(p, outlier=False)
for p in transaction_data["pdus"]
]
@@ -244,7 +247,7 @@ class FederationClient(FederationBase):
logger.debug("transaction_data %r", transaction_data)
pdu_list = [
self.event_from_pdu_json(p, outlier=outlier)
event_from_pdu_json(p, outlier=outlier)
for p in transaction_data["pdus"]
]
@@ -266,6 +269,9 @@ class FederationClient(FederationBase):
except NotRetryingDestination as e:
logger.info(e.message)
continue
except FederationDeniedError as e:
logger.info(e.message)
continue
except Exception as e:
pdu_attempts[destination] = now
@@ -336,11 +342,11 @@ class FederationClient(FederationBase):
)
pdus = [
self.event_from_pdu_json(p, outlier=True) for p in result["pdus"]
event_from_pdu_json(p, outlier=True) for p in result["pdus"]
]
auth_chain = [
self.event_from_pdu_json(p, outlier=True)
event_from_pdu_json(p, outlier=True)
for p in result.get("auth_chain", [])
]
@@ -390,7 +396,7 @@ class FederationClient(FederationBase):
seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
signed_events = seen_events.values()
else:
seen_events = yield self.store.have_events(event_ids)
seen_events = yield self.store.have_seen_events(event_ids)
signed_events = []
failed_to_fetch = set()
@@ -409,18 +415,19 @@ class FederationClient(FederationBase):
batch_size = 20
missing_events = list(missing_events)
for i in xrange(0, len(missing_events), batch_size):
for i in range(0, len(missing_events), batch_size):
batch = set(missing_events[i:i + batch_size])
deferreds = [
preserve_fn(self.get_pdu)(
run_in_background(
self.get_pdu,
destinations=random_server_list(),
event_id=e_id,
)
for e_id in batch
]
res = yield preserve_context_over_deferred(
res = yield make_deferred_yieldable(
defer.DeferredList(deferreds, consumeErrors=True)
)
for success, result in res:
@@ -441,7 +448,7 @@ class FederationClient(FederationBase):
)
auth_chain = [
self.event_from_pdu_json(p, outlier=True)
event_from_pdu_json(p, outlier=True)
for p in res["auth_chain"]
]
@@ -570,12 +577,12 @@ class FederationClient(FederationBase):
logger.debug("Got content: %s", content)
state = [
self.event_from_pdu_json(p, outlier=True)
event_from_pdu_json(p, outlier=True)
for p in content.get("state", [])
]
auth_chain = [
self.event_from_pdu_json(p, outlier=True)
event_from_pdu_json(p, outlier=True)
for p in content.get("auth_chain", [])
]
@@ -650,7 +657,7 @@ class FederationClient(FederationBase):
logger.debug("Got response to send_invite: %s", pdu_dict)
pdu = self.event_from_pdu_json(pdu_dict)
pdu = event_from_pdu_json(pdu_dict)
# Check signatures are correct.
pdu = yield self._check_sigs_and_hash(pdu)
@@ -740,7 +747,7 @@ class FederationClient(FederationBase):
)
auth_chain = [
self.event_from_pdu_json(e)
event_from_pdu_json(e)
for e in content["auth_chain"]
]
@@ -788,7 +795,7 @@ class FederationClient(FederationBase):
)
events = [
self.event_from_pdu_json(e)
event_from_pdu_json(e)
for e in content.get("events", [])
]
@@ -805,15 +812,6 @@ class FederationClient(FederationBase):
defer.returnValue(signed_events)
def event_from_pdu_json(self, pdu_json, outlier=False):
event = FrozenEvent(
pdu_json
)
event.internal_metadata.outlier = outlier
return event
@defer.inlineCallbacks
def forward_third_party_invite(self, destinations, room_id, event_dict):
for destination in destinations:


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,24 +13,27 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from .federation_base import FederationBase
from .units import Transaction, Edu
from synapse.util import async
from synapse.util.logutils import log_function
from synapse.util.caches.response_cache import ResponseCache
from synapse.events import FrozenEvent
from synapse.types import get_domain_from_id
import synapse.metrics
from synapse.api.errors import AuthError, FederationError, SynapseError
from synapse.crypto.event_signing import compute_event_signature
import logging
import simplejson as json
import logging
from twisted.internet import defer
from synapse.api.errors import AuthError, FederationError, SynapseError, NotFoundError
from synapse.crypto.event_signing import compute_event_signature
from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
import synapse.metrics
from synapse.types import get_domain_from_id
from synapse.util import async
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logutils import log_function
from six import iteritems
# when processing incoming transactions, we try to handle multiple rooms in
# parallel, up to this limit.
@@ -52,49 +56,18 @@ class FederationServer(FederationBase):
super(FederationServer, self).__init__(hs)
self.auth = hs.get_auth()
self.handler = hs.get_handlers().federation_handler
self._server_linearizer = async.Linearizer("fed_server")
self._transaction_linearizer = async.Linearizer("fed_txn_handler")
self.transaction_actions = TransactionActions(self.store)
self.registry = hs.get_federation_registry()
# We cache responses to state queries, as they take a while and often
# come in waves.
self._state_resp_cache = ResponseCache(hs, timeout_ms=30000)
def set_handler(self, handler):
"""Sets the handler that the replication layer will use to communicate
receipt of new PDUs from other home servers. The required methods are
documented on :py:class:`.ReplicationHandler`.
"""
self.handler = handler
def register_edu_handler(self, edu_type, handler):
if edu_type in self.edu_handlers:
raise KeyError("Already have an EDU handler for %s" % (edu_type,))
self.edu_handlers[edu_type] = handler
def register_query_handler(self, query_type, handler):
"""Sets the handler callable that will be used to handle an incoming
federation Query of the given type.
Args:
query_type (str): Category name of the query, which should match
the string used by make_query.
handler (callable): Invoked to handle incoming queries of this type
handler is invoked as:
result = handler(args)
where 'args' is a dict mapping strings to strings of the query
arguments. It should return a Deferred that will eventually yield an
object to encode as JSON.
"""
if query_type in self.query_handlers:
raise KeyError(
"Already have a Query handler for %s" % (query_type,)
)
self.query_handlers[query_type] = handler
self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)
@defer.inlineCallbacks
@log_function
@@ -171,7 +144,7 @@ class FederationServer(FederationBase):
p["age_ts"] = request_time - int(p["age"])
del p["age"]
event = self.event_from_pdu_json(p)
event = event_from_pdu_json(p)
room_id = event.room_id
pdus_by_room.setdefault(room_id, []).append(event)
@@ -229,16 +202,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def received_edu(self, origin, edu_type, content):
received_edus_counter.inc()
if edu_type in self.edu_handlers:
try:
yield self.edu_handlers[edu_type](origin, content)
except SynapseError as e:
logger.info("Failed to handle edu %r: %r", edu_type, e)
except Exception as e:
logger.exception("Failed to handle edu %r", edu_type)
else:
logger.warn("Received EDU of type %s with no handler", edu_type)
yield self.registry.on_edu(edu_type, origin, content)
@defer.inlineCallbacks
@log_function
@@ -250,15 +214,17 @@ class FederationServer(FederationBase):
if not in_room:
raise AuthError(403, "Host not in room.")
result = self._state_resp_cache.get((room_id, event_id))
if not result:
with (yield self._server_linearizer.queue((origin, room_id))):
resp = yield self._state_resp_cache.set(
(room_id, event_id),
self._on_context_state_request_compute(room_id, event_id)
)
else:
resp = yield result
# we grab the linearizer to protect ourselves from servers which hammer
# us. In theory we might already have the response to this query
# in the cache so we could return it without waiting for the linearizer
# - but that's non-trivial to get right, and anyway somewhat defeats
# the point of the linearizer.
with (yield self._server_linearizer.queue((origin, room_id))):
resp = yield self._state_resp_cache.wrap(
(room_id, event_id),
self._on_context_state_request_compute,
room_id, event_id,
)
defer.returnValue((200, resp))
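The new `ResponseCache.wrap` call both deduplicates concurrent requests for the same `(room_id, event_id)` and caches the result briefly. A minimal sketch of its semantics, assuming a homeserver instance `hs`:

    from twisted.internet import defer
    from synapse.util.caches.response_cache import ResponseCache

    cache = ResponseCache(hs, "state_resp", timeout_ms=30000)
    calls = []

    def compute(room_id, event_id):
        # Runs at most once per key while an entry is in flight or cached.
        calls.append((room_id, event_id))
        return defer.succeed({"pdus": []})

    d1 = cache.wrap(("!room:a", "$ev:a"), compute, "!room:a", "$ev:a")
    d2 = cache.wrap(("!room:a", "$ev:a"), compute, "!room:a", "$ev:a")
    assert len(calls) == 1  # the second caller attaches to the first result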
@@ -327,14 +293,8 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
received_queries_counter.inc(query_type)
if query_type in self.query_handlers:
response = yield self.query_handlers[query_type](args)
defer.returnValue((200, response))
else:
defer.returnValue(
(404, "No handler for Query type '%s'" % (query_type,))
)
resp = yield self.registry.on_query(query_type, args)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_make_join_request(self, room_id, user_id):
@@ -344,7 +304,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_invite_request(self, origin, content):
pdu = self.event_from_pdu_json(content)
pdu = event_from_pdu_json(content)
ret_pdu = yield self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec()
defer.returnValue((200, {"event": ret_pdu.get_pdu_json(time_now)}))
@@ -352,7 +312,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_send_join_request(self, origin, content):
logger.debug("on_send_join_request: content: %s", content)
pdu = self.event_from_pdu_json(content)
pdu = event_from_pdu_json(content)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
res_pdus = yield self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
@@ -372,7 +332,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_send_leave_request(self, origin, content):
logger.debug("on_send_leave_request: content: %s", content)
pdu = self.event_from_pdu_json(content)
pdu = event_from_pdu_json(content)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
yield self.handler.on_send_leave_request(origin, pdu)
defer.returnValue((200, {}))
@@ -409,7 +369,7 @@ class FederationServer(FederationBase):
"""
with (yield self._server_linearizer.queue((origin, room_id))):
auth_chain = [
self.event_from_pdu_json(e)
event_from_pdu_json(e)
for e in content["auth_chain"]
]
@@ -468,9 +428,9 @@ class FederationServer(FederationBase):
"Claimed one-time-keys: %s",
",".join((
"%s for %s:%s" % (key_id, user_id, device_id)
for user_id, user_keys in json_result.iteritems()
for device_id, device_keys in user_keys.iteritems()
for key_id, _ in device_keys.iteritems()
for user_id, user_keys in iteritems(json_result)
for device_id, device_keys in iteritems(user_keys)
for key_id, _ in iteritems(device_keys)
)),
)
@@ -537,13 +497,33 @@ class FederationServer(FederationBase):
def _handle_received_pdu(self, origin, pdu):
""" Process a PDU received in a federation /send/ transaction.
If the event is invalid, then this method throws a FederationError.
(The error will then be logged and sent back to the sender (which
probably won't do anything with it), and other events in the
transaction will be processed as normal).
It is likely that we'll then receive other events which refer to
this rejected_event in their prev_events, etc. When that happens,
we'll attempt to fetch the rejected event again, which will presumably
fail, so those second-generation events will also get rejected.
Eventually, we get to the point where there are more than 10 events
between any new events and the original rejected event. Since we
only try to backfill 10 events deep on received pdu, we then accept the
new event, possibly introducing a discontinuity in the DAG, with new
forward extremities, so normal service is approximately returned,
until we try to backfill across the discontinuity.
Args:
origin (str): server which sent the pdu
pdu (FrozenEvent): received pdu
Returns (Deferred): completes with None
Raises: FederationError if the signatures / hash do not match
"""
Raises: FederationError if the signatures / hash do not match, or
if the event was unacceptable for any other reason (eg, too large,
too many prev_events, couldn't find the prev_events)
"""
# check that it's actually being sent from a valid destination to
# workaround bug #1753 in 0.18.5 and 0.18.6
if origin != get_domain_from_id(pdu.event_id):
@@ -584,15 +564,6 @@ class FederationServer(FederationBase):
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name
def event_from_pdu_json(self, pdu_json, outlier=False):
event = FrozenEvent(
pdu_json
)
event.internal_metadata.outlier = outlier
return event
@defer.inlineCallbacks
def exchange_third_party_invite(
self,
@@ -615,3 +586,66 @@ class FederationServer(FederationBase):
origin, room_id, event_dict
)
defer.returnValue(ret)
class FederationHandlerRegistry(object):
"""Allows classes to register themselves as handlers for a given EDU or
query type for incoming federation traffic.
"""
def __init__(self):
self.edu_handlers = {}
self.query_handlers = {}
def register_edu_handler(self, edu_type, handler):
"""Sets the handler callable that will be used to handle an incoming
federation EDU of the given type.
Args:
edu_type (str): The type of the incoming EDU to register handler for
handler (Callable[[str, dict]]): A callable invoked on incoming EDU
of the given type. The arguments are the origin server name and
the EDU contents.
"""
if edu_type in self.edu_handlers:
raise KeyError("Already have an EDU handler for %s" % (edu_type,))
self.edu_handlers[edu_type] = handler
def register_query_handler(self, query_type, handler):
"""Sets the handler callable that will be used to handle an incoming
federation query of the given type.
Args:
query_type (str): Category name of the query, which should match
the string used by make_query.
handler (Callable[[dict], Deferred[dict]]): Invoked to handle
incoming queries of this type. The return will be yielded
on and the result used as the response to the query request.
"""
if query_type in self.query_handlers:
raise KeyError(
"Already have a Query handler for %s" % (query_type,)
)
self.query_handlers[query_type] = handler
@defer.inlineCallbacks
def on_edu(self, edu_type, origin, content):
handler = self.edu_handlers.get(edu_type)
if not handler:
logger.warn("No handler registered for EDU type %s", edu_type)
try:
yield handler(origin, content)
except SynapseError as e:
logger.info("Failed to handle edu %r: %r", edu_type, e)
except Exception as e:
logger.exception("Failed to handle edu %r", edu_type)
def on_query(self, query_type, args):
handler = self.query_handlers.get(query_type)
if not handler:
logger.warn("No handler registered for query type %s", query_type)
raise NotFoundError("No handler for Query type '%s'" % (query_type,))
return handler(args)

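Call sites obtain the registry via `hs.get_federation_registry()`. A sketch of registering handlers (the handler bodies are placeholders; `m.receipt` and `profile` are real EDU/query types):

    from twisted.internet import defer

    registry = hs.get_federation_registry()

    def on_receipt_edu(origin, content):
        # EDU handlers receive the origin server name and the EDU content.
        return defer.succeed(None)

    registry.register_edu_handler("m.receipt", on_receipt_edu)

    def on_profile_query(args):
        # Query handlers return (a deferred yielding) the response dict.
        return defer.succeed({"displayname": "Alice"})

    registry.register_query_handler("profile", on_profile_query)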

@@ -1,73 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This layer is responsible for replicating with remote home servers using
a given transport.
"""
from .federation_client import FederationClient
from .federation_server import FederationServer
from .persistence import TransactionActions
import logging
logger = logging.getLogger(__name__)
class ReplicationLayer(FederationClient, FederationServer):
"""This layer is responsible for replicating with remote home servers over
the given transport. I.e., does the sending and receiving of PDUs to
remote home servers.
The layer communicates with the rest of the server via a registered
ReplicationHandler.
In more detail, the layer:
* Receives incoming data and processes it into transactions and pdus.
* Fetches any PDUs it thinks it might have missed.
* Keeps the current state for contexts up to date by applying the
suitable conflict resolution.
* Sends outgoing pdus wrapped in transactions.
* Fills out the references to previous pdus/transactions appropriately
for outgoing data.
"""
def __init__(self, hs, transport_layer):
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.transport_layer = transport_layer
self.federation_client = self
self.store = hs.get_datastore()
self.handler = None
self.edu_handlers = {}
self.query_handlers = {}
self._clock = hs.get_clock()
self.transaction_actions = TransactionActions(self.store)
self.hs = hs
super(ReplicationLayer, self).__init__(hs)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name


@@ -40,6 +40,8 @@ from collections import namedtuple
import logging
from six import itervalues, iteritems
logger = logging.getLogger(__name__)
@@ -122,7 +124,7 @@ class FederationRemoteSendQueue(object):
user_ids = set(
user_id
for uids in self.presence_changed.itervalues()
for uids in itervalues(self.presence_changed)
for user_id in uids
)
@@ -276,7 +278,7 @@ class FederationRemoteSendQueue(object):
# stream position.
keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]}
for ((destination, edu_key), pos) in keyed_edus.iteritems():
for ((destination, edu_key), pos) in iteritems(keyed_edus):
rows.append((pos, KeyedEduRow(
key=edu_key,
edu=self.keyed_edu[(destination, edu_key)],
@@ -309,7 +311,7 @@ class FederationRemoteSendQueue(object):
j = keys.bisect_right(to_token) + 1
device_messages = {self.device_messages[k]: k for k in keys[i:j]}
for (destination, pos) in device_messages.iteritems():
for (destination, pos) in iteritems(device_messages):
rows.append((pos, DeviceRow(
destination=destination,
)))
@@ -528,19 +530,19 @@ def process_rows_for_federation(transaction_queue, rows):
if buff.presence:
transaction_queue.send_presence(buff.presence)
for destination, edu_map in buff.keyed_edus.iteritems():
for destination, edu_map in iteritems(buff.keyed_edus):
for key, edu in edu_map.items():
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=key,
)
for destination, edu_list in buff.edus.iteritems():
for destination, edu_list in iteritems(buff.edus):
for edu in edu_list:
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=None,
)
for destination, failure_list in buff.failures.iteritems():
for destination, failure_list in iteritems(buff.failures):
for failure in failure_list:
transaction_queue.send_failure(destination, failure)

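These changes swap `dict.iteritems()`/`.itervalues()` method calls for the `six` helpers, which dispatch to `iteritems`/`itervalues` on Python 2 and `items`/`values` on Python 3. For example:

    from six import itervalues

    presence_changed = {1: ["@a:x"], 2: ["@b:x", "@c:x"]}

    # Identical source runs on both Python 2 and Python 3.
    user_ids = set(
        user_id
        for uids in itervalues(presence_changed)
        for user_id in uids
    )
    assert user_ids == {"@a:x", "@b:x", "@c:x"}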

@@ -19,8 +19,8 @@ from twisted.internet import defer
from .persistence import TransactionActions
from .units import Transaction, Edu
from synapse.api.errors import HttpResponseException
from synapse.util import logcontext
from synapse.api.errors import HttpResponseException, FederationDeniedError
from synapse.util import logcontext, PreserveLoggingContext
from synapse.util.async import run_on_reactor
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
from synapse.util.metrics import measure_func
@@ -42,6 +42,8 @@ sent_edus_counter = client_metrics.register_counter("sent_edus")
sent_transactions_counter = client_metrics.register_counter("sent_transactions")
events_processed_counter = client_metrics.register_counter("events_processed")
class TransactionQueue(object):
"""This class makes sure we only have one transaction in flight at
@@ -146,7 +148,6 @@ class TransactionQueue(object):
else:
return not destination.startswith("localhost")
@defer.inlineCallbacks
def notify_new_events(self, current_id):
"""This gets called when we have some new events we might want to
send out to other servers.
@@ -156,12 +157,19 @@ class TransactionQueue(object):
if self._is_processing:
return
# fire off a processing loop in the background. It's likely it will
# outlast the current request, so run it in the sentinel logcontext.
with PreserveLoggingContext():
self._process_event_queue_loop()
@defer.inlineCallbacks
def _process_event_queue_loop(self):
try:
self._is_processing = True
while True:
last_token = yield self.store.get_federation_out_pos("events")
next_token, events = yield self.store.get_all_new_events_stream(
last_token, self._last_poked_id, limit=20,
last_token, self._last_poked_id, limit=100,
)
logger.debug("Handling %s -> %s", last_token, next_token)
@@ -169,24 +177,33 @@ class TransactionQueue(object):
if not events and next_token >= self._last_poked_id:
break
for event in events:
@defer.inlineCallbacks
def handle_event(event):
# Only send events for this server.
send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of()
is_mine = self.is_mine_id(event.event_id)
if not is_mine and send_on_behalf_of is None:
continue
return
try:
# Get the state from before the event.
# We need to make sure that this is the state from before
# the event and not from after it.
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
destinations = yield self.state.get_current_hosts_in_room(
event.room_id, latest_event_ids=[
prev_id for prev_id, _ in event.prev_events
],
)
except Exception:
logger.exception(
"Failed to calculate hosts in room for event: %s",
event.event_id,
)
return
# Get the state from before the event.
# We need to make sure that this is the state from before
# the event and not from after it.
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
destinations = yield self.state.get_current_hosts_in_room(
event.room_id, latest_event_ids=[
prev_id for prev_id, _ in event.prev_events
],
)
destinations = set(destinations)
if send_on_behalf_of is not None:
@@ -199,10 +216,44 @@ class TransactionQueue(object):
self._send_pdu(event, destinations)
@defer.inlineCallbacks
def handle_room_events(events):
for event in events:
yield handle_event(event)
events_by_room = {}
for event in events:
events_by_room.setdefault(event.room_id, []).append(event)
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
logcontext.run_in_background(handle_room_events, evs)
for evs in events_by_room.itervalues()
],
consumeErrors=True
))
yield self.store.update_federation_out_pos(
"events", next_token
)
if events:
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_lag.set(
now - ts, "federation_sender",
)
synapse.metrics.event_processing_last_ts.set(
ts, "federation_sender",
)
events_processed_counter.inc_by(len(events))
synapse.metrics.event_processing_positions.set(
next_token, "federation_sender",
)
finally:
self._is_processing = False
@@ -272,6 +323,8 @@ class TransactionQueue(object):
break
yield self._process_presence_inner(states_map.values())
except Exception:
logger.exception("Error sending presence states to servers")
finally:
self._processing_pending_presence = False
@@ -480,6 +533,8 @@ class TransactionQueue(object):
(e.retry_last_ts + e.retry_interval) / 1000.0
),
)
except FederationDeniedError as e:
logger.info(e)
except Exception as e:
logger.warn(
"TX [%s] Failed to send transaction: %s",

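The reworked loop batches events by room so that per-room ordering is preserved while distinct rooms are processed concurrently. A condensed sketch of that shape (with the per-event work abstracted as `handle_event`):

    from twisted.internet import defer
    from synapse.util import logcontext

    @defer.inlineCallbacks
    def process_events(events, handle_event):
        events_by_room = {}
        for event in events:
            events_by_room.setdefault(event.room_id, []).append(event)

        @defer.inlineCallbacks
        def handle_room_events(evs):
            # Events within one room are handled strictly in order.
            for event in evs:
                yield handle_event(event)

        # ...but different rooms proceed in parallel.
        yield logcontext.make_deferred_yieldable(defer.gatherResults(
            [
                logcontext.run_in_background(handle_room_events, evs)
                for evs in events_by_room.values()
            ],
            consumeErrors=True,
        ))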

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -20,6 +21,7 @@ from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.util.logutils import log_function
import logging
import urllib
logger = logging.getLogger(__name__)
@@ -49,7 +51,7 @@ class TransportLayerClient(object):
logger.debug("get_room_state dest=%s, room=%s",
destination, room_id)
path = PREFIX + "/state/%s/" % room_id
path = _create_path(PREFIX, "/state/%s/", room_id)
return self.client.get_json(
destination, path=path, args={"event_id": event_id},
)
@@ -71,7 +73,7 @@ class TransportLayerClient(object):
logger.debug("get_room_state_ids dest=%s, room=%s",
destination, room_id)
path = PREFIX + "/state_ids/%s/" % room_id
path = _create_path(PREFIX, "/state_ids/%s/", room_id)
return self.client.get_json(
destination, path=path, args={"event_id": event_id},
)
@@ -93,7 +95,7 @@ class TransportLayerClient(object):
logger.debug("get_pdu dest=%s, event_id=%s",
destination, event_id)
path = PREFIX + "/event/%s/" % (event_id, )
path = _create_path(PREFIX, "/event/%s/", event_id)
return self.client.get_json(destination, path=path, timeout=timeout)
@log_function
@@ -119,7 +121,7 @@ class TransportLayerClient(object):
# TODO: raise?
return
path = PREFIX + "/backfill/%s/" % (room_id,)
path = _create_path(PREFIX, "/backfill/%s/", room_id)
args = {
"v": event_tuples,
@@ -157,9 +159,11 @@ class TransportLayerClient(object):
# generated by the json_data_callback.
json_data = transaction.get_dict()
path = _create_path(PREFIX, "/send/%s/", transaction.transaction_id)
response = yield self.client.put_json(
transaction.destination,
path=PREFIX + "/send/%s/" % transaction.transaction_id,
path=path,
data=json_data,
json_data_callback=json_data_callback,
long_retries=True,
@@ -177,7 +181,7 @@ class TransportLayerClient(object):
@log_function
def make_query(self, destination, query_type, args, retry_on_dns_fail,
ignore_backoff=False):
path = PREFIX + "/query/%s" % query_type
path = _create_path(PREFIX, "/query/%s", query_type)
content = yield self.client.get_json(
destination=destination,
@@ -212,6 +216,9 @@ class TransportLayerClient(object):
Fails with ``NotRetryingDestination`` if we are not yet ready
to retry this server.
Fails with ``FederationDeniedError`` if the remote destination
is not in our federation whitelist
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
if membership not in valid_memberships:
@@ -219,7 +226,7 @@ class TransportLayerClient(object):
"make_membership_event called with membership='%s', must be one of %s" %
(membership, ",".join(valid_memberships))
)
path = PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id)
path = _create_path(PREFIX, "/make_%s/%s/%s", membership, room_id, user_id)
ignore_backoff = False
retry_on_dns_fail = False
@@ -245,7 +252,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_join(self, destination, room_id, event_id, content):
path = PREFIX + "/send_join/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/send_join/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
@@ -258,7 +265,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_leave(self, destination, room_id, event_id, content):
path = PREFIX + "/send_leave/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/send_leave/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
@@ -277,7 +284,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_invite(self, destination, room_id, event_id, content):
path = PREFIX + "/invite/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/invite/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
@@ -319,7 +326,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def exchange_third_party_invite(self, destination, room_id, event_dict):
path = PREFIX + "/exchange_third_party_invite/%s" % (room_id,)
path = _create_path(PREFIX, "/exchange_third_party_invite/%s", room_id,)
response = yield self.client.put_json(
destination=destination,
@@ -332,7 +339,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def get_event_auth(self, destination, room_id, event_id):
path = PREFIX + "/event_auth/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/event_auth/%s/%s", room_id, event_id)
content = yield self.client.get_json(
destination=destination,
@@ -344,7 +351,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_query_auth(self, destination, room_id, event_id, content):
path = PREFIX + "/query_auth/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/query_auth/%s/%s", room_id, event_id)
content = yield self.client.post_json(
destination=destination,
@@ -406,7 +413,7 @@ class TransportLayerClient(object):
Returns:
A dict containing the device keys.
"""
path = PREFIX + "/user/devices/" + user_id
path = _create_path(PREFIX, "/user/devices/%s", user_id)
content = yield self.client.get_json(
destination=destination,
@@ -456,7 +463,7 @@ class TransportLayerClient(object):
@log_function
def get_missing_events(self, destination, room_id, earliest_events,
latest_events, limit, min_depth, timeout):
path = PREFIX + "/get_missing_events/%s" % (room_id,)
path = _create_path(PREFIX, "/get_missing_events/%s", room_id,)
content = yield self.client.post_json(
destination=destination,
@@ -476,7 +483,7 @@ class TransportLayerClient(object):
def get_group_profile(self, destination, group_id, requester_user_id):
"""Get a group profile
"""
path = PREFIX + "/groups/%s/profile" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/profile", group_id,)
return self.client.get_json(
destination=destination,
@@ -495,7 +502,7 @@ class TransportLayerClient(object):
requester_user_id (str)
content (dict): The new profile of the group
"""
path = PREFIX + "/groups/%s/profile" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/profile", group_id,)
return self.client.post_json(
destination=destination,
@@ -509,7 +516,7 @@ class TransportLayerClient(object):
def get_group_summary(self, destination, group_id, requester_user_id):
"""Get a group summary
"""
path = PREFIX + "/groups/%s/summary" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/summary", group_id,)
return self.client.get_json(
destination=destination,
@@ -522,7 +529,7 @@ class TransportLayerClient(object):
def get_rooms_in_group(self, destination, group_id, requester_user_id):
"""Get all rooms in a group
"""
path = PREFIX + "/groups/%s/rooms" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/rooms", group_id,)
return self.client.get_json(
destination=destination,
@@ -535,7 +542,24 @@ class TransportLayerClient(object):
content):
"""Add a room to a group
"""
path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,)
path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
def update_room_in_group(self, destination, group_id, requester_user_id, room_id,
config_key, content):
"""Update room in group
"""
path = _create_path(
PREFIX, "/groups/%s/room/%s/config/%s",
group_id, room_id, config_key,
)
return self.client.post_json(
destination=destination,
@@ -548,7 +572,7 @@ class TransportLayerClient(object):
def remove_room_from_group(self, destination, group_id, requester_user_id, room_id):
"""Remove a room from a group
"""
path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,)
path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)
return self.client.delete_json(
destination=destination,
@@ -561,7 +585,7 @@ class TransportLayerClient(object):
def get_users_in_group(self, destination, group_id, requester_user_id):
"""Get users in a group
"""
path = PREFIX + "/groups/%s/users" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/users", group_id,)
return self.client.get_json(
destination=destination,
@@ -574,7 +598,7 @@ class TransportLayerClient(object):
def get_invited_users_in_group(self, destination, group_id, requester_user_id):
"""Get users that have been invited to a group
"""
path = PREFIX + "/groups/%s/invited_users" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/invited_users", group_id,)
return self.client.get_json(
destination=destination,
@@ -587,7 +611,23 @@ class TransportLayerClient(object):
def accept_group_invite(self, destination, group_id, user_id, content):
"""Accept a group invite
"""
path = PREFIX + "/groups/%s/users/%s/accept_invite" % (group_id, user_id)
path = _create_path(
PREFIX, "/groups/%s/users/%s/accept_invite",
group_id, user_id,
)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def join_group(self, destination, group_id, user_id, content):
"""Attempts to join a group
"""
path = _create_path(PREFIX, "/groups/%s/users/%s/join", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -600,7 +640,7 @@ class TransportLayerClient(object):
def invite_to_group(self, destination, group_id, user_id, requester_user_id, content):
"""Invite a user to a group
"""
path = PREFIX + "/groups/%s/users/%s/invite" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/%s/users/%s/invite", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -616,7 +656,7 @@ class TransportLayerClient(object):
invited.
"""
path = PREFIX + "/groups/local/%s/users/%s/invite" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/local/%s/users/%s/invite", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -630,7 +670,7 @@ class TransportLayerClient(object):
user_id, content):
"""Remove a user fron a group
"""
path = PREFIX + "/groups/%s/users/%s/remove" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/%s/users/%s/remove", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -647,7 +687,7 @@ class TransportLayerClient(object):
kicked from the group.
"""
path = PREFIX + "/groups/local/%s/users/%s/remove" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/local/%s/users/%s/remove", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -662,7 +702,7 @@ class TransportLayerClient(object):
the attestations
"""
path = PREFIX + "/groups/%s/renew_attestation/%s" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/%s/renew_attestation/%s", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -677,11 +717,12 @@ class TransportLayerClient(object):
"""Update a room entry in a group summary
"""
if category_id:
path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % (
path = _create_path(
PREFIX, "/groups/%s/summary/categories/%s/rooms/%s",
group_id, category_id, room_id,
)
else:
path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,)
path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)
return self.client.post_json(
destination=destination,
@@ -697,11 +738,12 @@ class TransportLayerClient(object):
"""Delete a room entry in a group summary
"""
if category_id:
path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % (
path = _create_path(
PREFIX + "/groups/%s/summary/categories/%s/rooms/%s",
group_id, category_id, room_id,
)
else:
path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,)
path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)
return self.client.delete_json(
destination=destination,
@@ -714,7 +756,7 @@ class TransportLayerClient(object):
def get_group_categories(self, destination, group_id, requester_user_id):
"""Get all categories in a group
"""
path = PREFIX + "/groups/%s/categories" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/categories", group_id,)
return self.client.get_json(
destination=destination,
@@ -727,7 +769,7 @@ class TransportLayerClient(object):
def get_group_category(self, destination, group_id, requester_user_id, category_id):
"""Get category info in a group
"""
path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)
return self.client.get_json(
destination=destination,
@@ -741,7 +783,7 @@ class TransportLayerClient(object):
content):
"""Update a category in a group
"""
path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)
return self.client.post_json(
destination=destination,
@@ -756,7 +798,7 @@ class TransportLayerClient(object):
category_id):
"""Delete a category in a group
"""
path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)
return self.client.delete_json(
destination=destination,
@@ -769,7 +811,7 @@ class TransportLayerClient(object):
def get_group_roles(self, destination, group_id, requester_user_id):
"""Get all roles in a group
"""
path = PREFIX + "/groups/%s/roles" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/roles", group_id,)
return self.client.get_json(
destination=destination,
@@ -782,7 +824,7 @@ class TransportLayerClient(object):
def get_group_role(self, destination, group_id, requester_user_id, role_id):
"""Get a roles info
"""
path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)
return self.client.get_json(
destination=destination,
@@ -796,7 +838,7 @@ class TransportLayerClient(object):
content):
"""Update a role in a group
"""
path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)
return self.client.post_json(
destination=destination,
@@ -810,7 +852,7 @@ class TransportLayerClient(object):
def delete_group_role(self, destination, group_id, requester_user_id, role_id):
"""Delete a role in a group
"""
path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)
return self.client.delete_json(
destination=destination,
@@ -825,11 +867,12 @@ class TransportLayerClient(object):
"""Update a users entry in a group
"""
if role_id:
path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
path = _create_path(
PREFIX, "/groups/%s/summary/roles/%s/users/%s",
group_id, role_id, user_id,
)
else:
path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)
return self.client.post_json(
destination=destination,
@@ -839,17 +882,33 @@ class TransportLayerClient(object):
ignore_backoff=True,
)
@log_function
def set_group_join_policy(self, destination, group_id, requester_user_id,
content):
"""Sets the join policy for a group
"""
path = _create_path(PREFIX, "/groups/%s/settings/m.join_policy", group_id,)
return self.client.put_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def delete_group_summary_user(self, destination, group_id, requester_user_id,
user_id, role_id):
"""Delete a users entry in a group
"""
if role_id:
path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
path = _create_path(
PREFIX, "/groups/%s/summary/roles/%s/users/%s",
group_id, role_id, user_id,
)
else:
path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)
return self.client.delete_json(
destination=destination,
@@ -872,3 +931,22 @@ class TransportLayerClient(object):
data=content,
ignore_backoff=True,
)
def _create_path(prefix, path, *args):
"""Creates a path from the prefix, path template and args. Ensures that
all args are url encoded.
Example:
_create_path(PREFIX, "/event/%s/", event_id)
Args:
prefix (str)
path (str): String template for the path
args: ([str]): Args to insert into path. Each arg will be url encoded
Returns:
str
"""
return prefix + path % tuple(urllib.quote(arg, "") for arg in args)

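For illustration, the helper URL-encodes every argument before interpolating it, so identifiers containing `/`, `?` or `:` cannot mangle the request path (Python 2 `urllib`, matching this codebase):

    import urllib

    PREFIX = "/_matrix/federation/v1"

    def _create_path(prefix, path, *args):
        # safe="" means even "/" is percent-encoded inside each argument.
        return prefix + path % tuple(urllib.quote(arg, "") for arg in args)

    print(_create_path(PREFIX, "/event/%s/", "$ev/with?odd:chars"))
    # -> /_matrix/federation/v1/event/%24ev%2Fwith%3Fodd%3Achars/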

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,7 +17,7 @@
from twisted.internet import defer
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.api.errors import Codes, SynapseError
from synapse.api.errors import Codes, SynapseError, FederationDeniedError
from synapse.http.server import JsonResource
from synapse.http.servlet import (
parse_json_object_from_request, parse_integer_from_args, parse_string_from_args,
@@ -24,7 +25,7 @@ from synapse.http.servlet import (
)
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.versionstring import get_version_string
from synapse.util.logcontext import preserve_fn
from synapse.util.logcontext import run_in_background
from synapse.types import ThirdPartyInstanceID, get_domain_from_id
import functools
@@ -81,6 +82,7 @@ class Authenticator(object):
self.keyring = hs.get_keyring()
self.server_name = hs.hostname
self.store = hs.get_datastore()
self.federation_domain_whitelist = hs.config.federation_domain_whitelist
# A method just so we can pass 'self' as the authenticator to the Servlets
@defer.inlineCallbacks
@@ -130,6 +132,12 @@ class Authenticator(object):
json_request["origin"] = origin
json_request["signatures"].setdefault(origin, {})[key] = sig
if (
self.federation_domain_whitelist is not None and
origin not in self.federation_domain_whitelist
):
raise FederationDeniedError(origin)
if not json_request["signatures"]:
raise NoAuthenticationError(
401, "Missing Authorization headers", Codes.UNAUTHORIZED,
@@ -144,11 +152,18 @@ class Authenticator(object):
# alive
retry_timings = yield self.store.get_destination_retry_timings(origin)
if retry_timings and retry_timings["retry_last_ts"]:
logger.info("Marking origin %r as up", origin)
preserve_fn(self.store.set_destination_retry_timings)(origin, 0, 0)
run_in_background(self._reset_retry_timings, origin)
defer.returnValue(origin)
@defer.inlineCallbacks
def _reset_retry_timings(self, origin):
try:
logger.info("Marking origin %r as up", origin)
yield self.store.set_destination_retry_timings(origin, 0, 0)
except Exception:
logger.exception("Error resetting retry timings on %s", origin)
class BaseFederationServlet(object):
REQUIRE_AUTH = True
@@ -676,7 +691,7 @@ class FederationGroupsRoomsServlet(BaseFederationServlet):
class FederationGroupsAddRoomsServlet(BaseFederationServlet):
"""Add/remove room from group
"""
PATH = "/groups/(?P<group_id>[^/]*)/room/(?<room_id>)$"
PATH = "/groups/(?P<group_id>[^/]*)/room/(?P<room_id>[^/]*)$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, room_id):
@@ -703,6 +718,27 @@ class FederationGroupsAddRoomsServlet(BaseFederationServlet):
defer.returnValue((200, new_content))
class FederationGroupsAddRoomsConfigServlet(BaseFederationServlet):
"""Update room config in group
"""
PATH = (
"/groups/(?P<group_id>[^/]*)/room/(?P<room_id>[^/]*)"
"/config/(?P<config_key>[^/]*)$"
)
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, room_id, config_key):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
result = yield self.groups_handler.update_room_in_group(
group_id, requester_user_id, room_id, config_key, content,
)
defer.returnValue((200, result))
class FederationGroupsUsersServlet(BaseFederationServlet):
"""Get the users in a group on behalf of a user
"""
@@ -774,6 +810,23 @@ class FederationGroupsAcceptInviteServlet(BaseFederationServlet):
defer.returnValue((200, new_content))
class FederationGroupsJoinServlet(BaseFederationServlet):
"""Attempt to join a group
"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/join$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, user_id):
if get_domain_from_id(user_id) != origin:
raise SynapseError(403, "user_id doesn't match origin")
new_content = yield self.handler.join_group(
group_id, user_id, content,
)
defer.returnValue((200, new_content))
class FederationGroupsRemoveUserServlet(BaseFederationServlet):
"""Leave or kick a user from the group
"""
@@ -1096,6 +1149,24 @@ class FederationGroupsBulkPublicisedServlet(BaseFederationServlet):
defer.returnValue((200, resp))
class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
"""Sets whether a group is joinable without an invite or knock
"""
PATH = "/groups/(?P<group_id>[^/]*)/settings/m.join_policy$"
@defer.inlineCallbacks
def on_PUT(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.set_group_join_policy(
group_id, requester_user_id, content
)
defer.returnValue((200, new_content))
FEDERATION_SERVLET_CLASSES = (
FederationSendServlet,
FederationPullServlet,
@@ -1135,6 +1206,7 @@ GROUP_SERVER_SERVLET_CLASSES = (
FederationGroupsInvitedUsersServlet,
FederationGroupsInviteServlet,
FederationGroupsAcceptInviteServlet,
FederationGroupsJoinServlet,
FederationGroupsRemoveUserServlet,
FederationGroupsSummaryRoomsServlet,
FederationGroupsCategoriesServlet,
@@ -1142,6 +1214,9 @@ GROUP_SERVER_SERVLET_CLASSES = (
FederationGroupsRolesServlet,
FederationGroupsRoleServlet,
FederationGroupsSummaryUsersServlet,
FederationGroupsAddRoomsServlet,
FederationGroupsAddRoomsConfigServlet,
FederationGroupsSettingJoinPolicyServlet,
)
@@ -1160,7 +1235,7 @@ GROUP_ATTESTATION_SERVLET_CLASSES = (
def register_servlets(hs, resource, authenticator, ratelimiter):
for servletclass in FEDERATION_SERVLET_CLASSES:
servletclass(
handler=hs.get_replication_layer(),
handler=hs.get_federation_server(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,

View File

@@ -13,18 +13,51 @@
# See the License for the specific language governing permissions and
# limitations under the License.
"""Attestations ensure that users and groups can't lie about their memberships.
When a user joins a group the HS and GS swap attestations, which allow them
both to independently prove their membership to third parties. These
attestations have a validity period and so need to be periodically renewed.
If a user leaves (or gets kicked out of) a group, either side can still use
their attestation to "prove" their membership, until the attestation expires.
Therefore attestations shouldn't be relied on to prove membership in important
cases, but can be used for less important situations, e.g. showing a user's
membership of groups on their profile, showing flairs, etc.
An attestation is a signed blob of JSON that looks like:
{
"user_id": "@foo:a.example.com",
"group_id": "+bar:b.example.com",
"valid_until_ms": 1507994728530,
"signatures":{"matrix.org":{"ed25519:auto":"..."}}
}
"""
import logging
import random
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.types import get_domain_from_id
from synapse.util.logcontext import preserve_fn
from synapse.util.logcontext import run_in_background
from signedjson.sign import sign_json
logger = logging.getLogger(__name__)
# Default validity duration for new attestations we create
DEFAULT_ATTESTATION_LENGTH_MS = 3 * 24 * 60 * 60 * 1000
# We add some jitter to the validity duration of attestations so that if we
# add lots of users at once we don't need to renew them all at once.
# The jitter is a multiplier picked randomly between the first and second number
DEFAULT_ATTESTATION_JITTER = (0.9, 1.3)
# Start trying to update our attestations when they come this close to expiring
UPDATE_ATTESTATION_TIME_MS = 1 * 24 * 60 * 60 * 1000
@@ -73,10 +106,14 @@ class GroupAttestationSigning(object):
"""Create an attestation for the group_id and user_id with default
validity length.
"""
validity_period = DEFAULT_ATTESTATION_LENGTH_MS
validity_period *= random.uniform(*DEFAULT_ATTESTATION_JITTER)
valid_until_ms = int(self.clock.time_msec() + validity_period)
return sign_json({
"group_id": group_id,
"user_id": user_id,
"valid_until_ms": self.clock.time_msec() + DEFAULT_ATTESTATION_LENGTH_MS,
"valid_until_ms": valid_until_ms,
}, self.server_name, self.signing_key)
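With the defaults above, the signed valid_until_ms therefore lands between roughly 2.7 and 3.9 days in the future (3 days scaled by a uniform draw from 0.9-1.3), and renewal is attempted from 1 day before expiry.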
@@ -128,24 +165,35 @@ class GroupAttestionRenewer(object):
@defer.inlineCallbacks
def _renew_attestation(group_id, user_id):
attestation = self.attestations.create_attestation(group_id, user_id)
try:
if not self.is_mine_id(group_id):
destination = get_domain_from_id(group_id)
elif not self.is_mine_id(user_id):
destination = get_domain_from_id(user_id)
else:
logger.warn(
"Incorrectly trying to do attestations for user: %r in %r",
user_id, group_id,
)
yield self.store.remove_attestation_renewal(group_id, user_id)
return
if self.is_mine_id(group_id):
destination = get_domain_from_id(user_id)
else:
destination = get_domain_from_id(group_id)
attestation = self.attestations.create_attestation(group_id, user_id)
yield self.transport_client.renew_group_attestation(
destination, group_id, user_id,
content={"attestation": attestation},
)
yield self.transport_client.renew_group_attestation(
destination, group_id, user_id,
content={"attestation": attestation},
)
yield self.store.update_attestation_renewal(
group_id, user_id, attestation
)
yield self.store.update_attestation_renewal(
group_id, user_id, attestation
)
except Exception:
logger.exception("Error renewing attestation of %r in %r",
user_id, group_id)
for row in rows:
group_id = row["group_id"]
user_id = row["user_id"]
preserve_fn(_renew_attestation)(group_id, user_id)
run_in_background(_renew_attestation, group_id, user_id)

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -49,7 +50,8 @@ class GroupsServerHandler(object):
hs.get_groups_attestation_renewer()
@defer.inlineCallbacks
def check_group_is_ours(self, group_id, and_exists=False, and_is_admin=None):
def check_group_is_ours(self, group_id, requester_user_id,
and_exists=False, and_is_admin=None):
"""Check that the group is ours, and optionally if it exists.
If group does exist then return group.
@@ -67,6 +69,10 @@ class GroupsServerHandler(object):
if and_exists and not group:
raise SynapseError(404, "Unknown group")
is_user_in_group = yield self.store.is_user_in_group(requester_user_id, group_id)
if group and not is_user_in_group and not group["is_public"]:
raise SynapseError(404, "Unknown group")
if and_is_admin:
is_admin = yield self.store.is_user_admin_in_group(group_id, and_is_admin)
if not is_admin:
@@ -84,7 +90,7 @@ class GroupsServerHandler(object):
A user/room may appear in multiple roles/categories.
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_user_in_group = yield self.store.is_user_in_group(requester_user_id, group_id)
@@ -153,10 +159,16 @@ class GroupsServerHandler(object):
})
@defer.inlineCallbacks
def update_group_summary_room(self, group_id, user_id, room_id, category_id, content):
def update_group_summary_room(self, group_id, requester_user_id,
room_id, category_id, content):
"""Add/update a room to the group summary
"""
yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id)
yield self.check_group_is_ours(
group_id,
requester_user_id,
and_exists=True,
and_is_admin=requester_user_id,
)
RoomID.from_string(room_id) # Ensure valid room id
@@ -175,10 +187,16 @@ class GroupsServerHandler(object):
defer.returnValue({})
@defer.inlineCallbacks
def delete_group_summary_room(self, group_id, user_id, room_id, category_id):
def delete_group_summary_room(self, group_id, requester_user_id,
room_id, category_id):
"""Remove a room from the summary
"""
yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id)
yield self.check_group_is_ours(
group_id,
requester_user_id,
and_exists=True,
and_is_admin=requester_user_id,
)
yield self.store.remove_room_from_summary(
group_id=group_id,
@@ -189,10 +207,32 @@ class GroupsServerHandler(object):
defer.returnValue({})
@defer.inlineCallbacks
def get_group_categories(self, group_id, user_id):
def set_group_join_policy(self, group_id, requester_user_id, content):
"""Sets the group join policy.
Currently supported policies are:
- "invite": an invite must be received and accepted in order to join.
- "open": anyone can join.
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
join_policy = _parse_join_policy_from_contents(content)
if join_policy is None:
raise SynapseError(
400, "No value specified for 'm.join_policy'"
)
yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
defer.returnValue({})
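For reference, the content dict consumed here has the shape below (illustrative value; see _parse_join_policy_from_contents later in this file):

content = {
    "m.join_policy": {
        "type": "open",  # or "invite", which is also the default
    },
}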
@defer.inlineCallbacks
def get_group_categories(self, group_id, requester_user_id):
"""Get all categories in a group (as seen by user)
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
categories = yield self.store.get_group_categories(
group_id=group_id,
@@ -200,10 +240,10 @@ class GroupsServerHandler(object):
defer.returnValue({"categories": categories})
@defer.inlineCallbacks
def get_group_category(self, group_id, user_id, category_id):
def get_group_category(self, group_id, requester_user_id, category_id):
"""Get a specific category in a group (as seen by user)
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
res = yield self.store.get_group_category(
group_id=group_id,
@@ -213,10 +253,15 @@ class GroupsServerHandler(object):
defer.returnValue(res)
@defer.inlineCallbacks
def update_group_category(self, group_id, user_id, category_id, content):
def update_group_category(self, group_id, requester_user_id, category_id, content):
"""Add/Update a group category
"""
yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id)
yield self.check_group_is_ours(
group_id,
requester_user_id,
and_exists=True,
and_is_admin=requester_user_id,
)
is_public = _parse_visibility_from_contents(content)
profile = content.get("profile")
@@ -231,10 +276,15 @@ class GroupsServerHandler(object):
defer.returnValue({})
@defer.inlineCallbacks
def delete_group_category(self, group_id, user_id, category_id):
def delete_group_category(self, group_id, requester_user_id, category_id):
"""Delete a group category
"""
yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id)
yield self.check_group_is_ours(
group_id,
requester_user_id,
and_exists=True,
and_is_admin=requester_user_id
)
yield self.store.remove_group_category(
group_id=group_id,
@@ -244,10 +294,10 @@ class GroupsServerHandler(object):
defer.returnValue({})
@defer.inlineCallbacks
def get_group_roles(self, group_id, user_id):
def get_group_roles(self, group_id, requester_user_id):
"""Get all roles in a group (as seen by user)
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
roles = yield self.store.get_group_roles(
group_id=group_id,
@@ -255,10 +305,10 @@ class GroupsServerHandler(object):
defer.returnValue({"roles": roles})
@defer.inlineCallbacks
def get_group_role(self, group_id, user_id, role_id):
def get_group_role(self, group_id, requester_user_id, role_id):
"""Get a specific role in a group (as seen by user)
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
res = yield self.store.get_group_role(
group_id=group_id,
@@ -267,10 +317,15 @@ class GroupsServerHandler(object):
defer.returnValue(res)
@defer.inlineCallbacks
def update_group_role(self, group_id, user_id, role_id, content):
def update_group_role(self, group_id, requester_user_id, role_id, content):
"""Add/update a role in a group
"""
yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id)
yield self.check_group_is_ours(
group_id,
requester_user_id,
and_exists=True,
and_is_admin=requester_user_id,
)
is_public = _parse_visibility_from_contents(content)
@@ -286,10 +341,15 @@ class GroupsServerHandler(object):
defer.returnValue({})
@defer.inlineCallbacks
def delete_group_role(self, group_id, user_id, role_id):
def delete_group_role(self, group_id, requester_user_id, role_id):
"""Remove role from group
"""
yield self.check_group_is_ours(group_id, and_exists=True, and_is_admin=user_id)
yield self.check_group_is_ours(
group_id,
requester_user_id,
and_exists=True,
and_is_admin=requester_user_id,
)
yield self.store.remove_group_role(
group_id=group_id,
@@ -304,7 +364,7 @@ class GroupsServerHandler(object):
"""Add/update a users entry in the group summary
"""
yield self.check_group_is_ours(
group_id, and_exists=True, and_is_admin=requester_user_id,
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id,
)
order = content.get("order", None)
@@ -326,7 +386,7 @@ class GroupsServerHandler(object):
"""Remove a user from the group summary
"""
yield self.check_group_is_ours(
group_id, and_exists=True, and_is_admin=requester_user_id,
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id,
)
yield self.store.remove_user_from_summary(
@@ -342,11 +402,18 @@ class GroupsServerHandler(object):
"""Get the group profile as seen by requester_user_id
"""
yield self.check_group_is_ours(group_id)
yield self.check_group_is_ours(group_id, requester_user_id)
group_description = yield self.store.get_group(group_id)
group = yield self.store.get_group(group_id)
if group:
cols = [
"name", "short_description", "long_description",
"avatar_url", "is_public",
]
group_description = {key: group[key] for key in cols}
group_description["is_openly_joinable"] = group["join_policy"] == "open"
if group_description:
defer.returnValue(group_description)
else:
raise SynapseError(404, "Unknown group")
@@ -356,7 +423,7 @@ class GroupsServerHandler(object):
"""Update the group profile
"""
yield self.check_group_is_ours(
group_id, and_exists=True, and_is_admin=requester_user_id,
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id,
)
profile = {}
@@ -377,7 +444,7 @@ class GroupsServerHandler(object):
The ordering is arbitrary at the moment
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_user_in_group = yield self.store.is_user_in_group(requester_user_id, group_id)
@@ -389,14 +456,15 @@ class GroupsServerHandler(object):
for user_result in user_results:
g_user_id = user_result["user_id"]
is_public = user_result["is_public"]
is_privileged = user_result["is_admin"]
entry = {"user_id": g_user_id}
profile = yield self.profile_handler.get_profile_from_cache(g_user_id)
entry.update(profile)
if not is_public:
entry["is_public"] = False
entry["is_public"] = bool(is_public)
entry["is_privileged"] = bool(is_privileged)
if not self.is_mine_id(g_user_id):
attestation = yield self.store.get_remote_attestation(group_id, g_user_id)
@@ -425,7 +493,7 @@ class GroupsServerHandler(object):
The ordering is arbitrary at the moment
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_user_in_group = yield self.store.is_user_in_group(requester_user_id, group_id)
@@ -459,7 +527,7 @@ class GroupsServerHandler(object):
This returns rooms in order of decreasing number of joined users
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_user_in_group = yield self.store.is_user_in_group(requester_user_id, group_id)
@@ -470,7 +538,6 @@ class GroupsServerHandler(object):
chunk = []
for room_result in room_results:
room_id = room_result["room_id"]
is_public = room_result["is_public"]
joined_users = yield self.store.get_users_in_room(room_id)
entry = yield self.room_list_handler.generate_room_entry(
@@ -481,8 +548,7 @@ class GroupsServerHandler(object):
if not entry:
continue
if not is_public:
entry["is_public"] = False
entry["is_public"] = bool(room_result["is_public"])
chunk.append(entry)
@@ -500,7 +566,7 @@ class GroupsServerHandler(object):
RoomID.from_string(room_id) # Ensure valid room id
yield self.check_group_is_ours(
group_id, and_exists=True, and_is_admin=requester_user_id
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
is_public = _parse_visibility_from_contents(content)
@@ -509,12 +575,35 @@ class GroupsServerHandler(object):
defer.returnValue({})
@defer.inlineCallbacks
def update_room_in_group(self, group_id, requester_user_id, room_id, config_key,
content):
"""Update room in group
"""
RoomID.from_string(room_id) # Ensure valid room id
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
if config_key == "m.visibility":
is_public = _parse_visibility_dict(content)
yield self.store.update_room_in_group_visibility(
group_id, room_id,
is_public=is_public,
)
else:
raise SynapseError(400, "Uknown config option")
defer.returnValue({})
@defer.inlineCallbacks
def remove_room_from_group(self, group_id, requester_user_id, room_id):
"""Remove room from group
"""
yield self.check_group_is_ours(
group_id, and_exists=True, and_is_admin=requester_user_id
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_room_from_group(group_id, room_id)
@@ -527,7 +616,7 @@ class GroupsServerHandler(object):
"""
group = yield self.check_group_is_ours(
group_id, and_exists=True, and_is_admin=requester_user_id
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
# TODO: Check if user knocked
@@ -596,19 +685,16 @@ class GroupsServerHandler(object):
raise SynapseError(502, "Unknown state returned by HS")
@defer.inlineCallbacks
def accept_invite(self, group_id, user_id, content):
"""User tries to accept an invite to the group.
def _add_user(self, group_id, user_id, content):
"""Add a user to a group based on a content dict.
This is different from them asking to join, and so should error if no
invite exists (and they're not a member of the group)
See accept_invite, join_group.
"""
yield self.check_group_is_ours(group_id, and_exists=True)
if not self.store.is_user_invited_to_local_group(group_id, user_id):
raise SynapseError(403, "User not invited to group")
if not self.hs.is_mine_id(user_id):
local_attestation = self.attestations.create_attestation(
group_id, user_id,
)
remote_attestation = content["attestation"]
yield self.attestations.verify_attestation(
@@ -617,10 +703,9 @@ class GroupsServerHandler(object):
group_id=group_id,
)
else:
local_attestation = None
remote_attestation = None
local_attestation = self.attestations.create_attestation(group_id, user_id)
is_public = _parse_visibility_from_contents(content)
yield self.store.add_user_to_group(
@@ -631,37 +716,77 @@ class GroupsServerHandler(object):
remote_attestation=remote_attestation,
)
defer.returnValue(local_attestation)
@defer.inlineCallbacks
def accept_invite(self, group_id, requester_user_id, content):
"""User tries to accept an invite to the group.
This is different from them asking to join, and so should error if no
invite exists (and they're not a member of the group)
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_invited = yield self.store.is_user_invited_to_local_group(
group_id, requester_user_id,
)
if not is_invited:
raise SynapseError(403, "User not invited to group")
local_attestation = yield self._add_user(group_id, requester_user_id, content)
defer.returnValue({
"state": "join",
"attestation": local_attestation,
})
@defer.inlineCallbacks
def knock(self, group_id, user_id, content):
def join_group(self, group_id, requester_user_id, content):
"""User tries to join the group.
This will error if the group requires an invite/knock to join
"""
group_info = yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True
)
if group_info['join_policy'] != "open":
raise SynapseError(403, "Group is not publicly joinable")
local_attestation = yield self._add_user(group_id, requester_user_id, content)
defer.returnValue({
"state": "join",
"attestation": local_attestation,
})
@defer.inlineCallbacks
def knock(self, group_id, requester_user_id, content):
"""A user requests becoming a member of the group
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
raise NotImplementedError()
@defer.inlineCallbacks
def accept_knock(self, group_id, user_id, content):
def accept_knock(self, group_id, requester_user_id, content):
"""Accept a users knock to the room.
Errors if the user hasn't knocked, rather than inviting them.
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
raise NotImplementedError()
@defer.inlineCallbacks
def remove_user_from_group(self, group_id, user_id, requester_user_id, content):
"""Remove a user from the group; either a user is leaving or and admin
kicked htem.
"""Remove a user from the group; either a user is leaving or an admin
kicked them.
"""
yield self.check_group_is_ours(group_id, and_exists=True)
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_kick = False
if requester_user_id != user_id:
@@ -692,8 +817,8 @@ class GroupsServerHandler(object):
defer.returnValue({})
@defer.inlineCallbacks
def create_group(self, group_id, user_id, content):
group = yield self.check_group_is_ours(group_id)
def create_group(self, group_id, requester_user_id, content):
group = yield self.check_group_is_ours(group_id, requester_user_id)
logger.info("Attempting to create group with ID: %r", group_id)
@@ -703,11 +828,11 @@ class GroupsServerHandler(object):
if group:
raise SynapseError(400, "Group already exists")
is_admin = yield self.auth.is_server_admin(UserID.from_string(user_id))
is_admin = yield self.auth.is_server_admin(UserID.from_string(requester_user_id))
if not is_admin:
if not self.hs.config.enable_group_creation:
raise SynapseError(
403, "Only server admin can create group on this server",
403, "Only a server admin can create groups on this server",
)
localpart = group_id_obj.localpart
if not localpart.startswith(self.hs.config.group_creation_prefix):
@@ -727,38 +852,41 @@ class GroupsServerHandler(object):
yield self.store.create_group(
group_id,
user_id,
requester_user_id,
name=name,
avatar_url=avatar_url,
short_description=short_description,
long_description=long_description,
)
if not self.hs.is_mine_id(user_id):
if not self.hs.is_mine_id(requester_user_id):
remote_attestation = content["attestation"]
yield self.attestations.verify_attestation(
remote_attestation,
user_id=user_id,
user_id=requester_user_id,
group_id=group_id,
)
local_attestation = self.attestations.create_attestation(group_id, user_id)
local_attestation = self.attestations.create_attestation(
group_id,
requester_user_id,
)
else:
local_attestation = None
remote_attestation = None
yield self.store.add_user_to_group(
group_id, user_id,
group_id, requester_user_id,
is_admin=True,
is_public=True, # TODO
local_attestation=local_attestation,
remote_attestation=remote_attestation,
)
if not self.hs.is_mine_id(user_id):
if not self.hs.is_mine_id(requester_user_id):
yield self.store.add_remote_profile_cache(
user_id,
requester_user_id,
displayname=user_profile.get("displayname"),
avatar_url=user_profile.get("avatar_url"),
)
@@ -768,20 +896,55 @@ class GroupsServerHandler(object):
})
def _parse_join_policy_from_contents(content):
"""Given a content for a request, return the specified join policy or None
"""
join_policy_dict = content.get("m.join_policy")
if join_policy_dict:
return _parse_join_policy_dict(join_policy_dict)
else:
return None
def _parse_join_policy_dict(join_policy_dict):
"""Given a dict for the "m.join_policy" config return the join policy specified
"""
join_policy_type = join_policy_dict.get("type")
if not join_policy_type:
return "invite"
if join_policy_type not in ("invite", "open"):
raise SynapseError(
400, "Synapse only supports 'invite'/'open' join rule"
)
return join_policy_type
def _parse_visibility_from_contents(content):
"""Given a content for a request parse out whether the entity should be
public or not
"""
visibility = content.get("visibility")
visibility = content.get("m.visibility")
if visibility:
vis_type = visibility["type"]
if vis_type not in ("public", "private"):
raise SynapseError(
400, "Synapse only supports 'public'/'private' visibility"
)
is_public = vis_type == "public"
return _parse_visibility_dict(visibility)
else:
is_public = True
return is_public
def _parse_visibility_dict(visibility):
"""Given a dict for the "m.visibility" config return if the entity should
be public or not
"""
vis_type = visibility.get("type")
if not vis_type:
return True
if vis_type not in ("public", "private"):
raise SynapseError(
400, "Synapse only supports 'public'/'private' visibility"
)
return vis_type == "public"
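The visibility parsing is analogous; a content fragment such as the following (illustrative) makes _parse_visibility_dict return False, i.e. not public:

content = {
    "m.visibility": {"type": "private"},
}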

View File

@@ -17,7 +17,6 @@ from .register import RegistrationHandler
from .room import (
RoomCreationHandler, RoomContextHandler,
)
from .room_member import RoomMemberHandler
from .message import MessageHandler
from .federation import FederationHandler
from .directory import DirectoryHandler
@@ -49,7 +48,6 @@ class Handlers(object):
self.registration_handler = RegistrationHandler(hs)
self.message_handler = MessageHandler(hs)
self.room_creation_handler = RoomCreationHandler(hs)
self.room_member_handler = RoomMemberHandler(hs)
self.federation_handler = FederationHandler(hs)
self.directory_handler = DirectoryHandler(hs)
self.admin_handler = AdminHandler(hs)

View File

@@ -158,7 +158,7 @@ class BaseHandler(object):
# homeserver.
requester = synapse.types.create_requester(
target_user, is_guest=True)
handler = self.hs.get_handlers().room_member_handler
handler = self.hs.get_room_member_handler()
yield handler.update_membership(
requester,
target_user,

View File

@@ -15,14 +15,21 @@
from twisted.internet import defer
import synapse
from synapse.api.constants import EventTypes
from synapse.util.metrics import Measure
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
from synapse.util.logcontext import (
make_deferred_yieldable, run_in_background,
)
import logging
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
events_processed_counter = metrics.register_counter("events_processed")
def log_failure(failure):
logger.error(
@@ -70,21 +77,25 @@ class ApplicationServicesHandler(object):
with Measure(self.clock, "notify_interested_services"):
self.is_processing = True
try:
upper_bound = self.current_max
limit = 100
while True:
upper_bound, events = yield self.store.get_new_events_for_appservice(
upper_bound, limit
self.current_max, limit
)
if not events:
break
events_by_room = {}
for event in events:
events_by_room.setdefault(event.room_id, []).append(event)
@defer.inlineCallbacks
def handle_event(event):
# Gather interested services
services = yield self._get_services_for_event(event)
if len(services) == 0:
continue # no services need notifying
return # no services need notifying
# Do we know this user exists? If not, poke the user
# query API for all services which match that user regex.
@@ -100,14 +111,35 @@ class ApplicationServicesHandler(object):
# Fork off pushes to these services
for service in services:
preserve_fn(self.scheduler.submit_event_for_as)(
service, event
)
self.scheduler.submit_event_for_as(service, event)
@defer.inlineCallbacks
def handle_room_events(events):
for event in events:
yield handle_event(event)
yield make_deferred_yieldable(defer.gatherResults([
run_in_background(handle_room_events, evs)
for evs in events_by_room.itervalues()
], consumeErrors=True))
yield self.store.set_appservice_last_pos(upper_bound)
if len(events) < limit:
break
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_positions.set(
upper_bound, "appservice_sender",
)
events_processed_counter.inc_by(len(events))
synapse.metrics.event_processing_lag.set(
now - ts, "appservice_sender",
)
synapse.metrics.event_processing_last_ts.set(
ts, "appservice_sender",
)
finally:
self.is_processing = False
@@ -163,8 +195,11 @@ class ApplicationServicesHandler(object):
def query_3pe(self, kind, protocol, fields):
services = yield self._get_services_for_3pn(protocol)
results = yield preserve_context_over_deferred(defer.DeferredList([
preserve_fn(self.appservice_api.query_3pe)(service, kind, protocol, fields)
results = yield make_deferred_yieldable(defer.DeferredList([
run_in_background(
self.appservice_api.query_3pe,
service, kind, protocol, fields,
)
for service in services
], consumeErrors=True))
@@ -225,11 +260,15 @@ class ApplicationServicesHandler(object):
event based on the service regex.
"""
services = self.store.get_app_services()
interested_list = [
s for s in services if (
yield s.is_interested(event, self.store)
)
]
# we can't use a list comprehension here. Since python 3, list
# comprehensions use a generator internally. This means you can't yield
# inside of a list comprehension anymore.
interested_list = []
for s in services:
if (yield s.is_interested(event, self.store)):
interested_list.append(s)
defer.returnValue(interested_list)
def _get_services_for_user(self, user_id):

View File

@@ -13,15 +13,19 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from twisted.internet import defer, threads
from ._base import BaseHandler
from synapse.api.constants import LoginType
from synapse.api.errors import (
AuthError, Codes, InteractiveAuthIncompleteError, LoginError, StoreError,
SynapseError,
)
from synapse.module_api import ModuleApi
from synapse.types import UserID
from synapse.api.errors import AuthError, LoginError, Codes, StoreError, SynapseError
from synapse.util.async import run_on_reactor
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable
from twisted.web.client import PartialDownloadError
@@ -46,7 +50,6 @@ class AuthHandler(BaseHandler):
"""
super(AuthHandler, self).__init__(hs)
self.checkers = {
LoginType.PASSWORD: self._check_password_auth,
LoginType.RECAPTCHA: self._check_recaptcha,
LoginType.EMAIL_IDENTITY: self._check_email_identity,
LoginType.MSISDN: self._check_msisdn,
@@ -63,10 +66,7 @@ class AuthHandler(BaseHandler):
reset_expiry_on_get=True,
)
account_handler = _AccountHandler(
hs, check_user_exists=self.check_user_exists
)
account_handler = ModuleApi(hs, self)
self.password_providers = [
module(config=config, account_handler=account_handler)
for module, config in hs.config.password_providers
@@ -75,39 +75,120 @@ class AuthHandler(BaseHandler):
logger.info("Extra password_providers: %r", self.password_providers)
self.hs = hs # FIXME better possibility to access registrationHandler later?
self.device_handler = hs.get_device_handler()
self.macaroon_gen = hs.get_macaroon_generator()
self._password_enabled = hs.config.password_enabled
# we keep this as a list despite the O(N^2) implication so that we can
# keep PASSWORD first and avoid confusing clients which pick the first
# type in the list. (NB that the spec doesn't require us to do so and
# clients which favour types that they don't understand over those that
# they do are technically broken)
login_types = []
if self._password_enabled:
login_types.append(LoginType.PASSWORD)
for provider in self.password_providers:
if hasattr(provider, "get_supported_login_types"):
for t in provider.get_supported_login_types().keys():
if t not in login_types:
login_types.append(t)
self._supported_login_types = login_types
@defer.inlineCallbacks
def validate_user_via_ui_auth(self, requester, request_body, clientip):
"""
Checks that the user is who they claim to be, via a UI auth.
This is used for things like device deletion and password reset where
the user already has a valid access token, but we want to double-check
that it isn't stolen by re-authenticating them.
Args:
requester (Requester): The user, as given by the access token
request_body (dict): The body of the request sent by the client
clientip (str): The IP address of the client.
Returns:
defer.Deferred[dict]: the parameters for this request (which may
have been given only in a previous call).
Raises:
InteractiveAuthIncompleteError if the client has not yet completed
any of the permitted login flows
AuthError if the client has completed a login flow, and it gives
a different user to `requester`
"""
# build a list of supported flows
flows = [
[login_type] for login_type in self._supported_login_types
]
result, params, _ = yield self.check_auth(
flows, request_body, clientip,
)
# find the completed login type
for login_type in self._supported_login_types:
if login_type not in result:
continue
user_id = result[login_type]
break
else:
# this can't happen
raise Exception(
"check_auth returned True but no successful login type",
)
# check that the UI auth matched the access token
if user_id != requester.user.to_string():
raise AuthError(403, "Invalid auth")
defer.returnValue(params)
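A hedged sketch of a caller, e.g. a REST handler guarding a sensitive operation (the servlet shape and helper names are illustrative, not part of this change):

@defer.inlineCallbacks
def on_POST(self, request):
    requester = yield self.auth.get_user_by_req(request)
    body = parse_json_object_from_request(request)
    yield self.auth_handler.validate_user_via_ui_auth(
        requester, body, self.hs.get_ip_from_request(request),
    )
    # only reached once UI auth has confirmed the requester
    yield self._do_sensitive_operation(requester)  # illustrative helper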
@defer.inlineCallbacks
def check_auth(self, flows, clientdict, clientip):
"""
Takes a dictionary sent by the client in the login / registration
protocol and handles the login flow.
protocol and handles the User-Interactive Auth flow.
As a side effect, this function fills in the 'creds' key on the user's
session with a map, which maps each auth-type (str) to the relevant
identity authenticated by that auth-type (mostly str, but for captcha, bool).
If no auth flows have been completed successfully, raises an
InteractiveAuthIncompleteError. To handle this, you can use
synapse.rest.client.v2_alpha._base.interactive_auth_handler as a
decorator.
Args:
flows (list): A list of login flows. Each flow is an ordered list of
strings representing auth-types. At least one full
flow must be completed in order for auth to be successful.
clientdict: The dictionary from the client root level, not the
'auth' key: this method prompts for auth if none is sent.
clientip (str): The IP address of the client.
Returns:
A tuple of (authed, dict, dict, session_id) where authed is true if
the client has successfully completed an auth flow. If it is true
the first dict contains the authenticated credentials of each stage.
defer.Deferred[dict, dict, str]: a deferred tuple of
(creds, params, session_id).
If authed is false, the first dictionary is the server response to
the login request and should be passed back to the client.
'creds' contains the authenticated credentials of each stage.
In either case, the second dict contains the parameters for this
request (which may have been given only in a previous call).
'params' contains the parameters for this request (which may
have been given only in a previous call).
session_id is the ID of this session, either passed in by the client
or assigned by the call to check_auth
'session_id' is the ID of this session, either passed in by the
client or assigned by this call
Raises:
InteractiveAuthIncompleteError if the client has not yet completed
all the stages in any of the permitted flows.
"""
authdict = None
@@ -135,11 +216,8 @@ class AuthHandler(BaseHandler):
clientdict = session['clientdict']
if not authdict:
defer.returnValue(
(
False, self._auth_dict_for_flows(flows, session),
clientdict, session['id']
)
raise InteractiveAuthIncompleteError(
self._auth_dict_for_flows(flows, session),
)
if 'creds' not in session:
@@ -150,14 +228,12 @@ class AuthHandler(BaseHandler):
errordict = {}
if 'type' in authdict:
login_type = authdict['type']
if login_type not in self.checkers:
raise LoginError(400, "", Codes.UNRECOGNIZED)
try:
result = yield self.checkers[login_type](authdict, clientip)
result = yield self._check_auth_dict(authdict, clientip)
if result:
creds[login_type] = result
self._save_session(session)
except LoginError, e:
except LoginError as e:
if login_type == LoginType.EMAIL_IDENTITY:
# riot used to have a bug where it would request a new
# validation token (thus sending a new email) each time it
@@ -166,7 +242,7 @@ class AuthHandler(BaseHandler):
#
# Grandfather in the old behaviour for now to avoid
# breaking old riot deployments.
raise e
raise
# this step failed. Merge the error dict into the response
# so that the client can have another go.
@@ -183,12 +259,14 @@ class AuthHandler(BaseHandler):
"Auth completed with creds: %r. Client dict has keys: %r",
creds, clientdict.keys()
)
defer.returnValue((True, creds, clientdict, session['id']))
defer.returnValue((creds, clientdict, session['id']))
ret = self._auth_dict_for_flows(flows, session)
ret['completed'] = creds.keys()
ret.update(errordict)
defer.returnValue((False, ret, clientdict, session['id']))
raise InteractiveAuthIncompleteError(
ret,
)
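The docstring above points at interactive_auth_handler; a minimal sketch of wrapping an endpoint with it (the servlet is illustrative):

from synapse.http.servlet import RestServlet
from synapse.rest.client.v2_alpha._base import interactive_auth_handler

class ExampleRestServlet(RestServlet):
    @interactive_auth_handler
    @defer.inlineCallbacks
    def on_POST(self, request):
        requester = yield self.auth.get_user_by_req(request)
        # the decorator turns InteractiveAuthIncompleteError raised during
        # UI auth into the 401 response carrying the remaining flows
        defer.returnValue((200, {}))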
@defer.inlineCallbacks
def add_oob_auth(self, stagetype, authdict, clientip):
@@ -260,16 +338,37 @@ class AuthHandler(BaseHandler):
sess = self._get_session_info(session_id)
return sess.setdefault('serverdict', {}).get(key, default)
def _check_password_auth(self, authdict, _):
if "user" not in authdict or "password" not in authdict:
raise LoginError(400, "", Codes.MISSING_PARAM)
@defer.inlineCallbacks
def _check_auth_dict(self, authdict, clientip):
"""Attempt to validate the auth dict provided by a client
user_id = authdict["user"]
password = authdict["password"]
if not user_id.startswith('@'):
user_id = UserID(user_id, self.hs.hostname).to_string()
Args:
authdict (object): auth dict provided by the client
clientip (str): IP address of the client
return self._check_password(user_id, password)
Returns:
Deferred: result of the stage verification.
Raises:
StoreError if there was a problem accessing the database
SynapseError if there was a problem with the request
LoginError if there was an authentication problem.
"""
login_type = authdict['type']
checker = self.checkers.get(login_type)
if checker is not None:
res = yield checker(authdict, clientip)
defer.returnValue(res)
# build a v1-login-style dict out of the authdict and fall back to the
# v1 code
user_id = authdict.get("user")
if user_id is None:
raise SynapseError(400, "", Codes.MISSING_PARAM)
(canonical_id, callback) = yield self.validate_login(user_id, authdict)
defer.returnValue(canonical_id)
@defer.inlineCallbacks
def _check_recaptcha(self, authdict, clientip):
@@ -398,26 +497,8 @@ class AuthHandler(BaseHandler):
return self.sessions[session_id]
def validate_password_login(self, user_id, password):
"""
Authenticates the user with their username and password.
Used only by the v1 login API.
Args:
user_id (str): complete @user:id
password (str): Password
Returns:
defer.Deferred: (str) canonical user id
Raises:
StoreError if there was a problem accessing the database
LoginError if there was an authentication problem.
"""
return self._check_password(user_id, password)
@defer.inlineCallbacks
def get_access_token_for_user_id(self, user_id, device_id=None,
initial_display_name=None):
def get_access_token_for_user_id(self, user_id, device_id=None):
"""
Creates a new access token for the user with the given user ID.
@@ -431,13 +512,10 @@ class AuthHandler(BaseHandler):
device_id (str|None): the device ID to associate with the tokens.
None to leave the tokens unassociated with a device (deprecated:
we should always have a device ID)
initial_display_name (str): display name to associate with the
device if it needs re-registering
Returns:
The access token for the user's session.
Raises:
StoreError if there was a problem storing the token.
LoginError if there was an authentication problem.
"""
logger.info("Logging in user %s on device %s", user_id, device_id)
access_token = yield self.issue_access_token(user_id, device_id)
@@ -447,9 +525,11 @@ class AuthHandler(BaseHandler):
# really don't want is active access_tokens without a record of the
# device, so we double-check it here.
if device_id is not None:
yield self.device_handler.check_device_registered(
user_id, device_id, initial_display_name
)
try:
yield self.store.get_device(user_id, device_id)
except StoreError:
yield self.store.delete_access_token(access_token)
raise StoreError(400, "Login raced against device deletion")
defer.returnValue(access_token)
@@ -501,29 +581,115 @@ class AuthHandler(BaseHandler):
)
defer.returnValue(result)
@defer.inlineCallbacks
def _check_password(self, user_id, password):
"""Authenticate a user against the LDAP and local databases.
def get_supported_login_types(self):
"""Get a the login types supported for the /login API
user_id is checked case insensitively against the local database, but
will throw if there are multiple inexact matches.
By default this is just 'm.login.password' (unless password_enabled is
False in the config file), but password auth providers can provide
other login types.
Returns:
Iterable[str]: login types
"""
return self._supported_login_types
@defer.inlineCallbacks
def validate_login(self, username, login_submission):
"""Authenticates the user for the /login API
Also used by the user-interactive auth flow to validate
m.login.password auth types.
Args:
user_id (str): complete @user:id
username (str): username supplied by the user
login_submission (dict): the whole of the login submission
(including 'type' and other relevant fields)
Returns:
(str) the canonical_user_id
Deferred[str, func]: canonical user id, and optional callback
to be called once the access token and device id are issued
Raises:
LoginError if login fails
StoreError if there was a problem accessing the database
SynapseError if there was a problem with the request
LoginError if there was an authentication problem.
"""
if username.startswith('@'):
qualified_user_id = username
else:
qualified_user_id = UserID(
username, self.hs.hostname
).to_string()
login_type = login_submission.get("type")
known_login_type = False
# special case to check for "password" for the check_password interface
# for the auth providers
password = login_submission.get("password")
if login_type == LoginType.PASSWORD:
if not self._password_enabled:
raise SynapseError(400, "Password login has been disabled.")
if not password:
raise SynapseError(400, "Missing parameter: password")
for provider in self.password_providers:
is_valid = yield provider.check_password(user_id, password)
if is_valid:
defer.returnValue(user_id)
if (hasattr(provider, "check_password")
and login_type == LoginType.PASSWORD):
known_login_type = True
is_valid = yield provider.check_password(
qualified_user_id, password,
)
if is_valid:
defer.returnValue((qualified_user_id, None))
canonical_user_id = yield self._check_local_password(user_id, password)
if (not hasattr(provider, "get_supported_login_types")
or not hasattr(provider, "check_auth")):
# this password provider doesn't understand custom login types
continue
if canonical_user_id:
defer.returnValue(canonical_user_id)
supported_login_types = provider.get_supported_login_types()
if login_type not in supported_login_types:
# this password provider doesn't understand this login type
continue
known_login_type = True
login_fields = supported_login_types[login_type]
missing_fields = []
login_dict = {}
for f in login_fields:
if f not in login_submission:
missing_fields.append(f)
else:
login_dict[f] = login_submission[f]
if missing_fields:
raise SynapseError(
400, "Missing parameters for login type %s: %s" % (
login_type,
missing_fields,
),
)
result = yield provider.check_auth(
username, login_type, login_dict,
)
if result:
if isinstance(result, str):
result = (result, None)
defer.returnValue(result)
if login_type == LoginType.PASSWORD:
known_login_type = True
canonical_user_id = yield self._check_local_password(
qualified_user_id, password,
)
if canonical_user_id:
defer.returnValue((canonical_user_id, None))
if not known_login_type:
raise SynapseError(400, "Unknown login type %s" % login_type)
# unknown username or invalid password. We raise a 403 here, but note
# that if we're doing user-interactive login, it turns all LoginErrors
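A sketch of a password provider that plugs into the flow above (the module, login type, and field names are illustrative, not from this change):

class ExamplePasswordProvider(object):
    def __init__(self, config, account_handler):
        self.account_handler = account_handler

    def get_supported_login_types(self):
        # map each supported login type to the fields it expects to find
        # in the login submission
        return {"com.example.token_login": ("secret_token",)}

    def check_auth(self, username, login_type, login_dict):
        # return a canonical user id (or a (user_id, callback) tuple) on
        # success; None falls through to the remaining providers
        if login_dict["secret_token"] == "let-me-in":  # illustrative check
            return "@%s:example.com" % (username,)
        return None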
@@ -549,7 +715,7 @@ class AuthHandler(BaseHandler):
if not lookupres:
defer.returnValue(None)
(user_id, password_hash) = lookupres
result = self.validate_hash(password, password_hash)
result = yield self.validate_hash(password, password_hash)
if not result:
logger.warn("Failed password login for user %s", user_id)
defer.returnValue(None)
@@ -573,22 +739,65 @@ class AuthHandler(BaseHandler):
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
@defer.inlineCallbacks
def set_password(self, user_id, newpassword, requester=None):
password_hash = self.hash(newpassword)
def delete_access_token(self, access_token):
"""Invalidate a single access token
except_access_token_id = requester.access_token_id if requester else None
Args:
access_token (str): access token to be deleted
try:
yield self.store.user_set_password_hash(user_id, password_hash)
except StoreError as e:
if e.code == 404:
raise SynapseError(404, "Unknown user", Codes.NOT_FOUND)
raise e
yield self.store.user_delete_access_tokens(
user_id, except_access_token_id
Returns:
Deferred
"""
user_info = yield self.auth.get_user_by_access_token(access_token)
yield self.store.delete_access_token(access_token)
# see if any of our auth providers want to know about this
for provider in self.password_providers:
if hasattr(provider, "on_logged_out"):
yield provider.on_logged_out(
user_id=str(user_info["user"]),
device_id=user_info["device_id"],
access_token=access_token,
)
# delete pushers associated with this access token
if user_info["token_id"] is not None:
yield self.hs.get_pusherpool().remove_pushers_by_access_token(
str(user_info["user"]), (user_info["token_id"], )
)
@defer.inlineCallbacks
def delete_access_tokens_for_user(self, user_id, except_token_id=None,
device_id=None):
"""Invalidate access tokens belonging to a user
Args:
user_id (str): ID of user the tokens belong to
except_token_id (str|None): access_token ID which should *not* be
deleted
device_id (str|None): ID of device the tokens are associated with.
If None, tokens associated with any device (or no device) will
be deleted
Returns:
Deferred
"""
tokens_and_devices = yield self.store.user_delete_access_tokens(
user_id, except_token_id=except_token_id, device_id=device_id,
)
yield self.hs.get_pusherpool().remove_pushers_by_user(
user_id, except_access_token_id
# see if any of our auth providers want to know about this
for provider in self.password_providers:
if hasattr(provider, "on_logged_out"):
for token, token_id, device_id in tokens_and_devices:
yield provider.on_logged_out(
user_id=user_id,
device_id=device_id,
access_token=token,
)
# delete pushers associated with the access tokens
yield self.hs.get_pusherpool().remove_pushers_by_access_token(
user_id, (token_id for _, token_id, _ in tokens_and_devices),
)
@defer.inlineCallbacks
@@ -634,10 +843,13 @@ class AuthHandler(BaseHandler):
password (str): Password to hash.
Returns:
Hashed password (str).
Deferred(str): Hashed password.
"""
return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
bcrypt.gensalt(self.bcrypt_rounds))
def _do_hash():
return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
bcrypt.gensalt(self.bcrypt_rounds))
return make_deferred_yieldable(threads.deferToThread(_do_hash))
def validate_hash(self, password, stored_hash):
"""Validates that self.hash(password) == stored_hash.
@@ -647,13 +859,19 @@ class AuthHandler(BaseHandler):
stored_hash (str): Expected hash value.
Returns:
Whether self.hash(password) == stored_hash (bool).
Deferred(bool): Whether self.hash(password) == stored_hash.
"""
def _do_validate_hash():
return bcrypt.checkpw(
password.encode('utf8') + self.hs.config.password_pepper,
stored_hash.encode('utf8')
)
if stored_hash:
return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
stored_hash.encode('utf8')) == stored_hash
return make_deferred_yieldable(threads.deferToThread(_do_validate_hash))
else:
return False
return defer.succeed(False)
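Since hash and validate_hash now both return Deferreds (the bcrypt work is pushed onto a thread via deferToThread), callers must yield them, e.g. (illustrative):

pwd_hash = yield self.hash(password)
is_valid = yield self.validate_hash(password, pwd_hash)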
class MacaroonGeneartor(object):
@@ -696,30 +914,3 @@ class MacaroonGeneartor(object):
macaroon.add_first_party_caveat("gen = 1")
macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
return macaroon
class _AccountHandler(object):
"""A proxy object that gets passed to password auth providers so they
can register new users etc if necessary.
"""
def __init__(self, hs, check_user_exists):
self.hs = hs
self._check_user_exists = check_user_exists
def check_user_exists(self, user_id):
"""Check if user exissts.
Returns:
Deferred(bool)
"""
return self._check_user_exists(user_id)
def register(self, localpart):
"""Registers a new user with given localpart
Returns:
Deferred: a 2-tuple of (user_id, access_token)
"""
reg = self.hs.get_handlers().registration_handler
return reg.register(localpart=localpart)

View File

@@ -0,0 +1,52 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from ._base import BaseHandler
import logging
logger = logging.getLogger(__name__)
class DeactivateAccountHandler(BaseHandler):
"""Handler which deals with deactivating user accounts."""
def __init__(self, hs):
super(DeactivateAccountHandler, self).__init__(hs)
self._auth_handler = hs.get_auth_handler()
self._device_handler = hs.get_device_handler()
@defer.inlineCallbacks
def deactivate_account(self, user_id):
"""Deactivate a user's account
Args:
user_id (str): ID of user to be deactivated
Returns:
Deferred
"""
# FIXME: Theoretically there is a race here wherein user resets
# password using threepid.
# first delete any devices belonging to the user, which will also
# delete corresponding access tokens.
yield self._device_handler.delete_all_devices_for_user(user_id)
# then delete any remaining access tokens which weren't associated with
# a device.
yield self._auth_handler.delete_access_tokens_for_user(user_id)
yield self.store.user_delete_threepids(user_id)
yield self.store.user_set_password_hash(user_id, None)

View File

@@ -14,6 +14,7 @@
# limitations under the License.
from synapse.api import errors
from synapse.api.constants import EventTypes
from synapse.api.errors import FederationDeniedError
from synapse.util import stringutils
from synapse.util.async import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
@@ -34,15 +35,17 @@ class DeviceHandler(BaseHandler):
self.hs = hs
self.state = hs.get_state_handler()
self._auth_handler = hs.get_auth_handler()
self.federation_sender = hs.get_federation_sender()
self.federation = hs.get_replication_layer()
self._edu_updater = DeviceListEduUpdater(hs, self)
self.federation.register_edu_handler(
federation_registry = hs.get_federation_registry()
federation_registry.register_edu_handler(
"m.device_list_update", self._edu_updater.incoming_device_list_update,
)
self.federation.register_query_handler(
federation_registry.register_query_handler(
"user_devices", self.on_federation_query_user_devices,
)
@@ -152,16 +155,15 @@ class DeviceHandler(BaseHandler):
try:
yield self.store.delete_device(user_id, device_id)
except errors.StoreError, e:
except errors.StoreError as e:
if e.code == 404:
# no match
pass
else:
raise
yield self.store.user_delete_access_tokens(
yield self._auth_handler.delete_access_tokens_for_user(
user_id, device_id=device_id,
delete_refresh_tokens=True,
)
yield self.store.delete_e2e_keys_by_device(
@@ -170,13 +172,31 @@ class DeviceHandler(BaseHandler):
yield self.notify_device_update(user_id, [device_id])
@defer.inlineCallbacks
def delete_all_devices_for_user(self, user_id, except_device_id=None):
"""Delete all of the user's devices
Args:
user_id (str):
except_device_id (str|None): optional device id which should not
be deleted
Returns:
defer.Deferred:
"""
device_map = yield self.store.get_devices_by_user(user_id)
device_ids = device_map.keys()
if except_device_id is not None:
device_ids = [d for d in device_ids if d != except_device_id]
yield self.delete_devices(user_id, device_ids)
@defer.inlineCallbacks
def delete_devices(self, user_id, device_ids):
""" Delete several devices
Args:
user_id (str):
device_ids (str): The list of device IDs to delete
device_ids (List[str]): The list of device IDs to delete
Returns:
defer.Deferred:
@@ -184,7 +204,7 @@ class DeviceHandler(BaseHandler):
try:
yield self.store.delete_devices(user_id, device_ids)
except errors.StoreError, e:
except errors.StoreError as e:
if e.code == 404:
# no match
pass
@@ -194,9 +214,8 @@ class DeviceHandler(BaseHandler):
# Delete access tokens and e2e keys for each device. Not optimised as it is not
# considered as part of a critical path.
for device_id in device_ids:
yield self.store.user_delete_access_tokens(
yield self._auth_handler.delete_access_tokens_for_user(
user_id, device_id=device_id,
delete_refresh_tokens=True,
)
yield self.store.delete_e2e_keys_by_device(
user_id=user_id, device_id=device_id
@@ -224,7 +243,7 @@ class DeviceHandler(BaseHandler):
new_display_name=content.get("display_name")
)
yield self.notify_device_update(user_id, [device_id])
except errors.StoreError, e:
except errors.StoreError as e:
if e.code == 404:
raise errors.NotFoundError()
else:
@@ -412,7 +431,7 @@ class DeviceListEduUpdater(object):
def __init__(self, hs, device_handler):
self.store = hs.get_datastore()
self.federation = hs.get_replication_layer()
self.federation = hs.get_federation_client()
self.clock = hs.get_clock()
self.device_handler = device_handler
@@ -496,6 +515,9 @@ class DeviceListEduUpdater(object):
# This makes it more likely that the device lists will
# eventually become consistent.
return
except FederationDeniedError as e:
logger.info(e)
return
except Exception:
# TODO: Remember that we are now out of sync and try again
# later

View File

@@ -17,7 +17,8 @@ import logging
from twisted.internet import defer
from synapse.types import get_domain_from_id
from synapse.api.errors import SynapseError
from synapse.types import get_domain_from_id, UserID
from synapse.util.stringutils import random_string
@@ -33,10 +34,10 @@ class DeviceMessageHandler(object):
"""
self.store = hs.get_datastore()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id
self.is_mine = hs.is_mine
self.federation = hs.get_federation_sender()
hs.get_replication_layer().register_edu_handler(
hs.get_federation_registry().register_edu_handler(
"m.direct_to_device", self.on_direct_to_device_edu
)
@@ -52,6 +53,12 @@ class DeviceMessageHandler(object):
message_type = content["type"]
message_id = content["message_id"]
for user_id, by_device in content["messages"].items():
# we use UserID.from_string to catch invalid user ids
if not self.is_mine(UserID.from_string(user_id)):
logger.warning("Request for keys for non-local user %s",
user_id)
raise SynapseError(400, "Not a user here")
messages_by_device = {
device_id: {
"content": message_content,
@@ -77,7 +84,8 @@ class DeviceMessageHandler(object):
local_messages = {}
remote_messages = {}
for user_id, by_device in messages.items():
if self.is_mine_id(user_id):
# we use UserID.from_string to catch invalid user ids
if self.is_mine(UserID.from_string(user_id)):
messages_by_device = {
device_id: {
"content": message_content,

View File

@@ -34,9 +34,10 @@ class DirectoryHandler(BaseHandler):
self.state = hs.get_state_handler()
self.appservice_handler = hs.get_application_service_handler()
self.event_creation_handler = hs.get_event_creation_handler()
self.federation = hs.get_replication_layer()
self.federation.register_query_handler(
self.federation = hs.get_federation_client()
hs.get_federation_registry().register_query_handler(
"directory", self.on_directory_query
)
@@ -249,8 +250,7 @@ class DirectoryHandler(BaseHandler):
def send_room_alias_update_event(self, requester, user_id, room_id):
aliases = yield self.store.get_aliases_for_room(room_id)
msg_handler = self.hs.get_handlers().message_handler
yield msg_handler.create_and_send_nonmember_event(
yield self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.Aliases,
@@ -272,8 +272,7 @@ class DirectoryHandler(BaseHandler):
if not alias_event or alias_event.content.get("alias", "") != alias_str:
return
msg_handler = self.hs.get_handlers().message_handler
yield msg_handler.create_and_send_nonmember_event(
yield self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.CanonicalAlias,

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,15 +14,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import ujson as json
import simplejson as json
import logging
from canonicaljson import encode_canonical_json
from twisted.internet import defer
from synapse.api.errors import SynapseError, CodeMessageException
from synapse.types import get_domain_from_id
from synapse.util.logcontext import preserve_fn, make_deferred_yieldable
from synapse.api.errors import (
SynapseError, CodeMessageException, FederationDeniedError,
)
from synapse.types import get_domain_from_id, UserID
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.retryutils import NotRetryingDestination
logger = logging.getLogger(__name__)
@@ -30,15 +33,15 @@ logger = logging.getLogger(__name__)
class E2eKeysHandler(object):
def __init__(self, hs):
self.store = hs.get_datastore()
self.federation = hs.get_replication_layer()
self.federation = hs.get_federation_client()
self.device_handler = hs.get_device_handler()
self.is_mine_id = hs.is_mine_id
self.is_mine = hs.is_mine
self.clock = hs.get_clock()
# doesn't really work as part of the generic query API, because the
# query request requires an object POST, but we abuse the
# "query handler" interface.
self.federation.register_query_handler(
hs.get_federation_registry().register_query_handler(
"client_keys", self.on_federation_query_client_keys
)
@@ -70,7 +73,8 @@ class E2eKeysHandler(object):
remote_queries = {}
for user_id, device_ids in device_keys_query.items():
if self.is_mine_id(user_id):
# we use UserID.from_string to catch invalid user ids
if self.is_mine(UserID.from_string(user_id)):
local_query[user_id] = device_ids
else:
remote_queries[user_id] = device_ids
@@ -131,24 +135,13 @@ class E2eKeysHandler(object):
if user_id in destination_query:
results[user_id] = keys
except CodeMessageException as e:
failures[destination] = {
"status": e.code, "message": e.message
}
except NotRetryingDestination as e:
failures[destination] = {
"status": 503, "message": "Not ready for retry",
}
except Exception as e:
# include ConnectionRefused and other errors
failures[destination] = {
"status": 503, "message": e.message
}
failures[destination] = _exception_to_failure(e)
yield make_deferred_yieldable(defer.gatherResults([
preserve_fn(do_remote_query)(destination)
run_in_background(do_remote_query, destination)
for destination in remote_queries_not_in_cache
]))
], consumeErrors=True))
defer.returnValue({
"device_keys": results, "failures": failures,
@@ -170,7 +163,8 @@ class E2eKeysHandler(object):
result_dict = {}
for user_id, device_ids in query.items():
if not self.is_mine_id(user_id):
# we use UserID.from_string to catch invalid user ids
if not self.is_mine(UserID.from_string(user_id)):
logger.warning("Request for keys for non-local user %s",
user_id)
raise SynapseError(400, "Not a user here")
@@ -213,7 +207,8 @@ class E2eKeysHandler(object):
remote_queries = {}
for user_id, device_keys in query.get("one_time_keys", {}).items():
if self.is_mine_id(user_id):
# we use UserID.from_string to catch invalid user ids
if self.is_mine(UserID.from_string(user_id)):
for device_id, algorithm in device_keys.items():
local_query.append((user_id, device_id, algorithm))
else:
@@ -243,24 +238,13 @@ class E2eKeysHandler(object):
for user_id, keys in remote_result["one_time_keys"].items():
if user_id in device_keys:
json_result[user_id] = keys
except CodeMessageException as e:
failures[destination] = {
"status": e.code, "message": e.message
}
except NotRetryingDestination as e:
failures[destination] = {
"status": 503, "message": "Not ready for retry",
}
except Exception as e:
# include ConnectionRefused and other errors
failures[destination] = {
"status": 503, "message": e.message
}
failures[destination] = _exception_to_failure(e)
yield make_deferred_yieldable(defer.gatherResults([
preserve_fn(claim_client_keys)(destination)
run_in_background(claim_client_keys, destination)
for destination in remote_queries
]))
], consumeErrors=True))
logger.info(
"Claimed one-time-keys: %s",
@@ -353,6 +337,31 @@ class E2eKeysHandler(object):
)
def _exception_to_failure(e):
if isinstance(e, CodeMessageException):
return {
"status": e.code, "message": e.message,
}
if isinstance(e, NotRetryingDestination):
return {
"status": 503, "message": "Not ready for retry",
}
if isinstance(e, FederationDeniedError):
return {
"status": 403, "message": "Federation Denied",
}
# include ConnectionRefused and other errors
#
# Note that some Exceptions (notably twisted's ResponseFailed etc) don't
# give a string for e.message, which simplejson then fails to serialize.
return {
"status": 503, "message": str(e.message),
}
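The _exception_to_failure helper above replaces the three near-identical except blocks, so each per-destination worker collapses to one generic handler. A sketch of the call site; the query_client_keys call is illustrative:

failures = {}
try:
    remote_result = yield self.federation.query_client_keys(
        destination, {"device_keys": destination_query}, timeout=timeout,
    )
except Exception as e:
    # CodeMessageException, NotRetryingDestination, FederationDeniedError
    # and anything else all become a JSON-safe {"status", "message"} dict
    failures[destination] = _exception_to_failure(e)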
def _one_time_keys_match(old_key_json, new_key):
old_key = json.loads(old_key_json)

View File
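Two idioms recur in the e2e hunks above. First, run_in_background(f, x) is the direct-call form of preserve_fn(f)(x): both start f immediately under the current logcontext and hand back its Deferred. Second, consumeErrors=True on gatherResults marks each branch's failure as observed, avoiding Twisted's "Unhandled error in Deferred" noise when a destination fails; the failure has already been recorded in the failures dict by the worker itself. Roughly:

# equivalent ways to kick off a background call:
preserve_fn(do_remote_query)(destination)        # old style
run_in_background(do_remote_query, destination)  # new style

# wait for all branches; errors were handled per-destination already
yield make_deferred_yieldable(defer.gatherResults([
    run_in_background(do_remote_query, destination)
    for destination in remote_queries_not_in_cache
], consumeErrors=True))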

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,14 +15,23 @@
# limitations under the License.
"""Contains handlers for federation events."""
import itertools
import logging
import sys
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
import six
from six.moves import http_client
from twisted.internet import defer
from unpaddedbase64 import decode_base64
from ._base import BaseHandler
from synapse.api.errors import (
AuthError, FederationError, StoreError, CodeMessageException, SynapseError,
FederationDeniedError,
)
from synapse.api.constants import EventTypes, Membership, RejectedReason
from synapse.events.validator import EventValidator
@@ -41,10 +51,6 @@ from synapse.util.retryutils import NotRetryingDestination
from synapse.util.distributor import user_joined_room
from twisted.internet import defer
import itertools
import logging
logger = logging.getLogger(__name__)
@@ -66,7 +72,7 @@ class FederationHandler(BaseHandler):
self.hs = hs
self.store = hs.get_datastore()
self.replication_layer = hs.get_replication_layer()
self.replication_layer = hs.get_federation_client()
self.state_handler = hs.get_state_handler()
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
@@ -74,8 +80,7 @@ class FederationHandler(BaseHandler):
self.is_mine_id = hs.is_mine_id
self.pusher_pool = hs.get_pusherpool()
self.spam_checker = hs.get_spam_checker()
self.replication_layer.set_handler(self)
self.event_creation_handler = hs.get_event_creation_handler()
# When joining a room we need to queue any events for that room up
self.room_queues = {}
@@ -114,6 +119,19 @@ class FederationHandler(BaseHandler):
logger.debug("Already seen pdu %s", pdu.event_id)
return
# do some initial sanity-checking of the event. In particular, make
# sure it doesn't have hundreds of prev_events or auth_events, which
# could cause a huge state resolution or cascade of event fetches.
try:
self._sanity_check_event(pdu)
except SynapseError as err:
raise FederationError(
"ERROR",
err.code,
err.msg,
affected=pdu.event_id,
)
# If we are currently in the process of joining this room, then we
# queue up events for later processing.
if pdu.room_id in self.room_queues:
@@ -148,10 +166,6 @@ class FederationHandler(BaseHandler):
auth_chain = []
have_seen = yield self.store.have_events(
[ev for ev, _ in pdu.prev_events]
)
fetch_state = False
# Get missing pdus if necessary.
@@ -167,7 +181,7 @@ class FederationHandler(BaseHandler):
)
prevs = {e_id for e_id, _ in pdu.prev_events}
seen = set(have_seen.keys())
seen = yield self.store.have_seen_events(prevs)
if min_depth and pdu.depth < min_depth:
# This is so that we don't notify the user about this
@@ -195,8 +209,7 @@ class FederationHandler(BaseHandler):
# Update the set of things we've seen after trying to
# fetch the missing stuff
have_seen = yield self.store.have_events(prevs)
seen = set(have_seen.iterkeys())
seen = yield self.store.have_seen_events(prevs)
if not prevs - seen:
logger.info(
@@ -247,8 +260,7 @@ class FederationHandler(BaseHandler):
min_depth (int): Minimum depth of events to return.
"""
# We recalculate seen, since it may have changed.
have_seen = yield self.store.have_events(prevs)
seen = set(have_seen.keys())
seen = yield self.store.have_seen_events(prevs)
if not prevs - seen:
return
@@ -360,9 +372,7 @@ class FederationHandler(BaseHandler):
if auth_chain:
event_ids |= {e.event_id for e in auth_chain}
seen_ids = set(
(yield self.store.have_events(event_ids)).keys()
)
seen_ids = yield self.store.have_seen_events(event_ids)
if state and auth_chain is not None:
# If we have any state or auth_chain given to us by the replication
@@ -526,9 +536,16 @@ class FederationHandler(BaseHandler):
def backfill(self, dest, room_id, limit, extremities):
""" Trigger a backfill request to `dest` for the given `room_id`
This will attempt to get more events from the remote. This may return
be successfull and still return no events if the other side has no new
events to offer.
This will attempt to get more events from the remote. If the other side
has no new events to offer, this will return an empty list.
As the events are received, we check their signatures, and also do some
sanity-checking on them. If any of the backfilled events are invalid,
this method throws a SynapseError.
TODO: make this more useful to distinguish failures of the remote
server from invalid events (there is probably no point in trying to
re-fetch invalid events from every other HS in the room.)
"""
if dest == self.server_name:
raise SynapseError(400, "Can't backfill from self.")
@@ -540,6 +557,16 @@ class FederationHandler(BaseHandler):
extremities=extremities,
)
# ideally we'd sanity check the events here for excess prev_events etc,
# but it's hard to reject events at this point without completely
# breaking backfill in the same way that it is currently broken by
# events whose signature we cannot verify (#3121).
#
# So for now we accept the events anyway. #3124 tracks this.
#
# for ev in events:
# self._sanity_check_event(ev)
# Don't bother processing events we already have.
seen_events = yield self.store.have_events_in_timeline(
set(e.event_id for e in events)
@@ -612,7 +639,8 @@ class FederationHandler(BaseHandler):
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
logcontext.preserve_fn(self.replication_layer.get_pdu)(
logcontext.run_in_background(
self.replication_layer.get_pdu,
[dest],
event_id,
outlier=True,
@@ -632,7 +660,7 @@ class FederationHandler(BaseHandler):
failed_to_fetch = missing_auth - set(auth_events)
seen_events = yield self.store.have_events(
seen_events = yield self.store.have_seen_events(
set(auth_events.keys()) | set(state_events.keys())
)
@@ -782,6 +810,9 @@ class FederationHandler(BaseHandler):
except NotRetryingDestination as e:
logger.info(e.message)
continue
except FederationDeniedError as e:
logger.info(e)
continue
except Exception as e:
logger.exception(
"Failed to backfill from %s because %s",
@@ -804,13 +835,12 @@ class FederationHandler(BaseHandler):
event_ids = list(extremities.keys())
logger.debug("calling resolve_state_groups in _maybe_backfill")
resolve = logcontext.preserve_fn(
self.state_handler.resolve_state_groups_for_events
)
states = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
logcontext.preserve_fn(self.state_handler.resolve_state_groups)(
room_id, [e]
)
for e in event_ids
], consumeErrors=True,
[resolve(room_id, [e]) for e in event_ids],
consumeErrors=True,
))
states = dict(zip(event_ids, [s.state for s in states]))
@@ -840,6 +870,38 @@ class FederationHandler(BaseHandler):
defer.returnValue(False)
def _sanity_check_event(self, ev):
"""
Do some early sanity checks of a received event
In particular, checks it doesn't have an excessive number of
prev_events or auth_events, which could cause a huge state resolution
or cascade of event fetches.
Args:
ev (synapse.events.EventBase): event to be checked
Returns: None
Raises:
SynapseError if the event does not pass muster
"""
if len(ev.prev_events) > 20:
logger.warn("Rejecting event %s which has %i prev_events",
ev.event_id, len(ev.prev_events))
raise SynapseError(
http_client.BAD_REQUEST,
"Too many prev_events",
)
if len(ev.auth_events) > 10:
logger.warn("Rejecting event %s which has %i auth_events",
ev.event_id, len(ev.auth_events))
raise SynapseError(
http_client.BAD_REQUEST,
"Too many auth_events",
)
@defer.inlineCallbacks
def send_invite(self, target_host, event):
""" Sends the invite to the remote server for signing.
@@ -964,7 +1026,7 @@ class FederationHandler(BaseHandler):
# lots of requests for missing prev_events which we do actually
# have. Hence we fire off the deferred, but don't wait for it.
logcontext.preserve_fn(self._handle_queued_pdus)(room_queue)
logcontext.run_in_background(self._handle_queued_pdus, room_queue)
defer.returnValue(True)
@@ -1004,8 +1066,7 @@ class FederationHandler(BaseHandler):
})
try:
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
event, context = yield self.event_creation_handler.create_new_client_event(
builder=builder,
)
except AuthError as e:
@@ -1245,8 +1306,7 @@ class FederationHandler(BaseHandler):
"state_key": user_id,
})
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
event, context = yield self.event_creation_handler.create_new_client_event(
builder=builder,
)
@@ -1444,22 +1504,33 @@ class FederationHandler(BaseHandler):
auth_events=auth_events,
)
if not event.internal_metadata.is_outlier() and not backfilled:
yield self.action_generator.handle_push_actions_for_event(
event, context
try:
if not event.internal_metadata.is_outlier() and not backfilled:
yield self.action_generator.handle_push_actions_for_event(
event, context
)
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
backfilled=backfilled,
)
except: # noqa: E722, as we reraise the exception this is fine.
tp, value, tb = sys.exc_info()
logcontext.run_in_background(
self.store.remove_push_actions_from_staging,
event.event_id,
)
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
backfilled=backfilled,
)
six.reraise(tp, value, tb)
if not backfilled:
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
logcontext.preserve_fn(self.pusher_pool.on_new_notifications)(
event_stream_id, max_stream_id
logcontext.run_in_background(
self.pusher_pool.on_new_notifications,
event_stream_id, max_stream_id,
)
defer.returnValue((context, event_stream_id, max_stream_id))
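The bare except plus six.reraise above is the standard shape for "run some cleanup, then re-raise with the original traceback", and it works on both Python 2 and 3. A self-contained illustration:

import sys
import six

def cleanup():
    print("removing staged push actions")  # stand-in for the real cleanup

try:
    raise ValueError("boom")
except Exception:
    tp, value, tb = sys.exc_info()   # capture before cleanup can clobber it
    cleanup()
    six.reraise(tp, value, tb)       # original exception, original traceback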
@@ -1473,7 +1544,8 @@ class FederationHandler(BaseHandler):
"""
contexts = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
logcontext.preserve_fn(self._prep_event)(
logcontext.run_in_background(
self._prep_event,
origin,
ev_info["event"],
state=ev_info.get("state"),
@@ -1706,6 +1778,17 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
@log_function
def do_auth(self, origin, event, context, auth_events):
"""
Args:
origin (str):
event (synapse.events.FrozenEvent):
context (synapse.events.snapshot.EventContext):
auth_events (dict[(str, str)->str]):
Returns:
defer.Deferred[None]
"""
# Check if we have all the auth events.
current_state = set(e.event_id for e in auth_events.values())
event_auth_events = set(e_id for e_id, _ in event.auth_events)
@@ -1716,7 +1799,8 @@ class FederationHandler(BaseHandler):
event_key = None
if event_auth_events - current_state:
have_events = yield self.store.have_events(
# TODO: can we use store.have_seen_events here instead?
have_events = yield self.store.get_seen_events_with_rejections(
event_auth_events - current_state
)
else:
@@ -1739,12 +1823,12 @@ class FederationHandler(BaseHandler):
origin, event.room_id, event.event_id
)
seen_remotes = yield self.store.have_events(
seen_remotes = yield self.store.have_seen_events(
[e.event_id for e in remote_auth_chain]
)
for e in remote_auth_chain:
if e.event_id in seen_remotes.keys():
if e.event_id in seen_remotes:
continue
if e.event_id == event.event_id:
@@ -1771,7 +1855,7 @@ class FederationHandler(BaseHandler):
except AuthError:
pass
have_events = yield self.store.have_events(
have_events = yield self.store.get_seen_events_with_rejections(
[e_id for e_id, _ in event.auth_events]
)
seen_events = set(have_events.keys())
@@ -1790,7 +1874,8 @@ class FederationHandler(BaseHandler):
different_events = yield logcontext.make_deferred_yieldable(
defer.gatherResults([
logcontext.preserve_fn(self.store.get_event)(
logcontext.run_in_background(
self.store.get_event,
d,
allow_none=True,
allow_rejected=False,
@@ -1817,16 +1902,9 @@ class FederationHandler(BaseHandler):
current_state = set(e.event_id for e in auth_events.values())
different_auth = event_auth_events - current_state
context.current_state_ids = dict(context.current_state_ids)
context.current_state_ids.update({
k: a.event_id for k, a in auth_events.items()
if k != event_key
})
context.prev_state_ids = dict(context.prev_state_ids)
context.prev_state_ids.update({
k: a.event_id for k, a in auth_events.items()
})
context.state_group = self.store.get_next_state_group()
yield self._update_context_for_auth_events(
event, context, auth_events, event_key,
)
if different_auth and not event.internal_metadata.is_outlier():
logger.info("Different auth after resolution: %s", different_auth)
@@ -1863,13 +1941,13 @@ class FederationHandler(BaseHandler):
local_auth_chain,
)
seen_remotes = yield self.store.have_events(
seen_remotes = yield self.store.have_seen_events(
[e.event_id for e in result["auth_chain"]]
)
# 3. Process any remote auth chain events we haven't seen.
for ev in result["auth_chain"]:
if ev.event_id in seen_remotes.keys():
if ev.event_id in seen_remotes:
continue
if ev.event_id == event.event_id:
@@ -1906,16 +1984,9 @@ class FederationHandler(BaseHandler):
# 4. Look at rejects and their proofs.
# TODO.
context.current_state_ids = dict(context.current_state_ids)
context.current_state_ids.update({
k: a.event_id for k, a in auth_events.items()
if k != event_key
})
context.prev_state_ids = dict(context.prev_state_ids)
context.prev_state_ids.update({
k: a.event_id for k, a in auth_events.items()
})
context.state_group = self.store.get_next_state_group()
yield self._update_context_for_auth_events(
event, context, auth_events, event_key,
)
try:
self.auth.check(event, auth_events=auth_events)
@@ -1923,6 +1994,45 @@ class FederationHandler(BaseHandler):
logger.warn("Failed auth resolution for %r because %s", event, e)
raise e
@defer.inlineCallbacks
def _update_context_for_auth_events(self, event, context, auth_events,
event_key):
"""Update the state_ids in an event context after auth event resolution,
storing the changes as a new state group.
Args:
event (Event): The event we're handling the context for
context (synapse.events.snapshot.EventContext): event context
to be updated
auth_events (dict[(str, str)->str]): Events to update in the event
context.
event_key ((str, str)): (type, state_key) for the current event.
this will not be included in the current_state in the context.
"""
state_updates = {
k: a.event_id for k, a in auth_events.iteritems()
if k != event_key
}
context.current_state_ids = dict(context.current_state_ids)
context.current_state_ids.update(state_updates)
if context.delta_ids is not None:
context.delta_ids = dict(context.delta_ids)
context.delta_ids.update(state_updates)
context.prev_state_ids = dict(context.prev_state_ids)
context.prev_state_ids.update({
k: a.event_id for k, a in auth_events.iteritems()
})
context.state_group = yield self.store.store_state_group(
event.event_id,
event.room_id,
prev_group=context.prev_group,
delta_ids=context.delta_ids,
current_state_ids=context.current_state_ids,
)
@defer.inlineCallbacks
def construct_auth_difference(self, local_auth, remote_auth):
""" Given a local and remote auth chain, find the differences. This
@@ -2091,8 +2201,7 @@ class FederationHandler(BaseHandler):
if (yield self.auth.check_host_in_room(room_id, self.hs.hostname)):
builder = self.event_builder_factory.new(event_dict)
EventValidator().validate_new(builder)
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
event, context = yield self.event_creation_handler.create_new_client_event(
builder=builder
)
@@ -2107,7 +2216,7 @@ class FederationHandler(BaseHandler):
raise e
yield self._check_signature(event, context)
member_handler = self.hs.get_handlers().room_member_handler
member_handler = self.hs.get_room_member_handler()
yield member_handler.send_membership_event(None, event, context)
else:
destinations = set(x.split(":", 1)[-1] for x in (sender_user_id, room_id))
@@ -2130,8 +2239,7 @@ class FederationHandler(BaseHandler):
"""
builder = self.event_builder_factory.new(event_dict)
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
event, context = yield self.event_creation_handler.create_new_client_event(
builder=builder,
)
@@ -2152,7 +2260,7 @@ class FederationHandler(BaseHandler):
# TODO: Make sure the signatures actually are correct.
event.signatures.update(returned_invite.signatures)
member_handler = self.hs.get_handlers().room_member_handler
member_handler = self.hs.get_room_member_handler()
yield member_handler.send_membership_event(None, event, context)
@defer.inlineCallbacks
@@ -2181,8 +2289,9 @@ class FederationHandler(BaseHandler):
builder = self.event_builder_factory.new(event_dict)
EventValidator().validate_new(builder)
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(builder=builder)
event, context = yield self.event_creation_handler.create_new_client_event(
builder=builder,
)
defer.returnValue((event, context))
@defer.inlineCallbacks

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -71,6 +72,7 @@ class GroupsLocalHandler(object):
get_invited_users_in_group = _create_rerouter("get_invited_users_in_group")
add_room_to_group = _create_rerouter("add_room_to_group")
update_room_in_group = _create_rerouter("update_room_in_group")
remove_room_from_group = _create_rerouter("remove_room_from_group")
update_group_summary_room = _create_rerouter("update_group_summary_room")
@@ -89,6 +91,8 @@ class GroupsLocalHandler(object):
get_group_role = _create_rerouter("get_group_role")
get_group_roles = _create_rerouter("get_group_roles")
set_group_join_policy = _create_rerouter("set_group_join_policy")
@defer.inlineCallbacks
def get_group_summary(self, group_id, requester_user_id):
"""Get the group summary for a group.
@@ -225,7 +229,45 @@ class GroupsLocalHandler(object):
def join_group(self, group_id, user_id, content):
"""Request to join a group
"""
raise NotImplementedError() # TODO
if self.is_mine_id(group_id):
yield self.groups_server_handler.join_group(
group_id, user_id, content
)
local_attestation = None
remote_attestation = None
else:
local_attestation = self.attestations.create_attestation(group_id, user_id)
content["attestation"] = local_attestation
res = yield self.transport_client.join_group(
get_domain_from_id(group_id), group_id, user_id, content,
)
remote_attestation = res["attestation"]
yield self.attestations.verify_attestation(
remote_attestation,
group_id=group_id,
user_id=user_id,
server_name=get_domain_from_id(group_id),
)
# TODO: Check that the group is public and we're being added publicly
is_publicised = content.get("publicise", False)
token = yield self.store.register_user_group_membership(
group_id, user_id,
membership="join",
is_admin=False,
local_attestation=local_attestation,
remote_attestation=remote_attestation,
is_publicised=is_publicised,
)
self.notifier.on_new_event(
"groups_key", token, users=[user_id],
)
defer.returnValue({})
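In outline, the remote branch of join_group above is a three-step handshake, and the membership is only persisted once the group server's attestation verifies:

# 1. attest that we vouch for user_id joining group_id
local_attestation = self.attestations.create_attestation(group_id, user_id)

# 2. send the join, carrying our attestation, to the group's server
res = yield self.transport_client.join_group(
    get_domain_from_id(group_id), group_id, user_id,
    {"attestation": local_attestation},
)

# 3. verify the attestation the group server returns before persisting
yield self.attestations.verify_attestation(
    res["attestation"], group_id=group_id, user_id=user_id,
    server_name=get_domain_from_id(group_id),
)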
@defer.inlineCallbacks
def accept_invite(self, group_id, user_id, content):
@@ -374,13 +416,20 @@ class GroupsLocalHandler(object):
def get_publicised_groups_for_user(self, user_id):
if self.hs.is_mine_id(user_id):
result = yield self.store.get_publicised_groups_for_user(user_id)
# Check AS associated groups for this user - this depends on the
# RegExps in the AS registration file (under `users`)
for app_service in self.store.get_app_services():
result.extend(app_service.get_groups_for_user(user_id))
defer.returnValue({"groups": result})
else:
result = yield self.transport_client.get_publicised_groups_for_user(
get_domain_from_id(user_id), user_id
bulk_result = yield self.transport_client.bulk_get_publicised_groups(
get_domain_from_id(user_id), [user_id],
)
result = bulk_result.get("users", {}).get(user_id)
# TODO: Verify attestations
defer.returnValue(result)
defer.returnValue({"groups": result})
@defer.inlineCallbacks
def bulk_get_publicised_groups(self, user_ids, proxy=True):
@@ -414,4 +463,9 @@ class GroupsLocalHandler(object):
uid
)
# Check AS associated groups for this user - this depends on the
# RegExps in the AS registration file (under `users`)
for app_service in self.store.get_app_services():
results[uid].extend(app_service.get_groups_for_user(uid))
defer.returnValue({"users": results})

View File

@@ -15,6 +15,11 @@
# limitations under the License.
"""Utilities for interacting with Identity Servers"""
import logging
import simplejson as json
from twisted.internet import defer
from synapse.api.errors import (
@@ -24,9 +29,6 @@ from ._base import BaseHandler
from synapse.util.async import run_on_reactor
from synapse.api.errors import SynapseError, Codes
import json
import logging
logger = logging.getLogger(__name__)

View File

@@ -27,7 +27,7 @@ from synapse.types import (
from synapse.util import unwrapFirstError
from synapse.util.async import concurrently_execute
from synapse.util.caches.snapshot_cache import SnapshotCache
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
@@ -163,10 +163,11 @@ class InitialSyncHandler(BaseHandler):
lambda states: states[event.event_id]
)
(messages, token), current_state = yield preserve_context_over_deferred(
(messages, token), current_state = yield make_deferred_yieldable(
defer.gatherResults(
[
preserve_fn(self.store.get_recent_events_for_room)(
run_in_background(
self.store.get_recent_events_for_room,
event.room_id,
limit=limit,
end_token=room_end_token,
@@ -391,9 +392,10 @@ class InitialSyncHandler(BaseHandler):
presence, receipts, (messages, token) = yield defer.gatherResults(
[
preserve_fn(get_presence)(),
preserve_fn(get_receipts)(),
preserve_fn(self.store.get_recent_events_for_room)(
run_in_background(get_presence),
run_in_background(get_receipts),
run_in_background(
self.store.get_recent_events_for_room,
room_id,
limit=limit,
end_token=now_token.room_key,

View File

@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
# Copyright 2017 - 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,9 +13,16 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
import logging
import simplejson
import sys
from synapse.api.constants import EventTypes, Membership
from canonicaljson import encode_canonical_json
import six
from twisted.internet import defer, reactor
from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership, MAX_DEPTH
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events.utils import serialize_event
@@ -24,22 +31,48 @@ from synapse.types import (
UserID, RoomAlias, RoomStreamToken,
)
from synapse.util.async import run_on_reactor, ReadWriteLock, Limiter
from synapse.util.logcontext import preserve_fn
from synapse.util.logcontext import run_in_background
from synapse.util.metrics import measure_func
from synapse.util.frozenutils import unfreeze
from synapse.util.frozenutils import frozendict_json_encoder
from synapse.util.stringutils import random_string
from synapse.visibility import filter_events_for_client
from synapse.replication.http.send_event import send_event_to_master
from ._base import BaseHandler
from canonicaljson import encode_canonical_json
import logging
import random
import ujson
logger = logging.getLogger(__name__)
class PurgeStatus(object):
"""Object tracking the status of a purge request
This class contains information on the progress of a purge request, for
return by get_purge_status.
Attributes:
status (int): Tracks whether this request has completed. One of
STATUS_{ACTIVE,COMPLETE,FAILED}
"""
STATUS_ACTIVE = 0
STATUS_COMPLETE = 1
STATUS_FAILED = 2
STATUS_TEXT = {
STATUS_ACTIVE: "active",
STATUS_COMPLETE: "complete",
STATUS_FAILED: "failed",
}
def __init__(self):
self.status = PurgeStatus.STATUS_ACTIVE
def asdict(self):
return {
"status": PurgeStatus.STATUS_TEXT[self.status]
}
class MessageHandler(BaseHandler):
def __init__(self, hs):
@@ -47,32 +80,89 @@ class MessageHandler(BaseHandler):
self.hs = hs
self.state = hs.get_state_handler()
self.clock = hs.get_clock()
self.validator = EventValidator()
self.profile_handler = hs.get_profile_handler()
self.pagination_lock = ReadWriteLock()
self._purges_in_progress_by_room = set()
# map from purge id to PurgeStatus
self._purges_by_id = {}
self.pusher_pool = hs.get_pusherpool()
def start_purge_history(self, room_id, topological_ordering,
delete_local_events=False):
"""Start off a history purge on a room.
# We arbitrarily limit concurrent event creation for a room to 5.
# This is to stop us from diverging history *too* much.
self.limiter = Limiter(max_count=5)
Args:
room_id (str): The room to purge from
self.action_generator = hs.get_action_generator()
topological_ordering (int): minimum topo ordering to preserve
delete_local_events (bool): True to delete local events as well as
remote ones
self.spam_checker = hs.get_spam_checker()
Returns:
str: unique ID for this purge transaction.
"""
if room_id in self._purges_in_progress_by_room:
raise SynapseError(
400,
"History purge already in progress for %s" % (room_id, ),
)
purge_id = random_string(16)
# we log the purge_id here so that it can be tied back to the
# request id in the log lines.
logger.info("[purge] starting purge_id %s", purge_id)
self._purges_by_id[purge_id] = PurgeStatus()
run_in_background(
self._purge_history,
purge_id, room_id, topological_ordering, delete_local_events,
)
return purge_id
@defer.inlineCallbacks
def purge_history(self, room_id, event_id):
event = yield self.store.get_event(event_id)
def _purge_history(self, purge_id, room_id, topological_ordering,
delete_local_events):
"""Carry out a history purge on a room.
if event.room_id != room_id:
raise SynapseError(400, "Event is for wrong room.")
Args:
purge_id (str): The id for this purge
room_id (str): The room to purge from
topological_ordering (int): minimum topo ordering to preserve
delete_local_events (bool): True to delete local events as well as
remote ones
depth = event.depth
Returns:
Deferred
"""
self._purges_in_progress_by_room.add(room_id)
try:
with (yield self.pagination_lock.write(room_id)):
yield self.store.purge_history(
room_id, topological_ordering, delete_local_events,
)
logger.info("[purge] complete")
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
except Exception:
logger.error("[purge] failed: %s", Failure().getTraceback().rstrip())
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
finally:
self._purges_in_progress_by_room.discard(room_id)
with (yield self.pagination_lock.write(room_id)):
yield self.store.delete_old_state(room_id, depth)
# remove the purge from the list 24 hours after it completes
def clear_purge():
del self._purges_by_id[purge_id]
reactor.callLater(24 * 3600, clear_purge)
def get_purge_status(self, purge_id):
"""Get the current status of an active purge
Args:
purge_id (str): purge_id returned by start_purge_history
Returns:
PurgeStatus|None
"""
return self._purges_by_id.get(purge_id)
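Together, start_purge_history, _purge_history and get_purge_status turn the purge into an asynchronous job: kick it off, keep the id, poll. A hypothetical caller (the admin-API plumbing is elided):

purge_id = handler.start_purge_history(
    room_id, topological_ordering, delete_local_events=False,
)

# later, from a status endpoint:
status = handler.get_purge_status(purge_id)
if status is None:
    pass  # unknown id, or purge completed more than 24 hours ago
else:
    status.asdict()  # {"status": "active" | "complete" | "failed"}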
@defer.inlineCallbacks
def get_messages(self, requester, room_id=None, pagin_config=None,
@@ -182,166 +272,6 @@ class MessageHandler(BaseHandler):
defer.returnValue(chunk)
@defer.inlineCallbacks
def create_event(self, requester, event_dict, token_id=None, txn_id=None,
prev_event_ids=None):
"""
Given a dict from a client, create a new event.
Creates an FrozenEvent object, filling out auth_events, prev_events,
etc.
Adds display names to Join membership events.
Args:
requester
event_dict (dict): An entire event
token_id (str)
txn_id (str)
prev_event_ids (list): The prev event ids to use when creating the event
Returns:
Tuple of created event (FrozenEvent), Context
"""
builder = self.event_builder_factory.new(event_dict)
with (yield self.limiter.queue(builder.room_id)):
self.validator.validate_new(builder)
if builder.type == EventTypes.Member:
membership = builder.content.get("membership", None)
target = UserID.from_string(builder.state_key)
if membership in {Membership.JOIN, Membership.INVITE}:
# If event doesn't include a display name, add one.
profile = self.profile_handler
content = builder.content
try:
if "displayname" not in content:
content["displayname"] = yield profile.get_displayname(target)
if "avatar_url" not in content:
content["avatar_url"] = yield profile.get_avatar_url(target)
except Exception as e:
logger.info(
"Failed to get profile information for %r: %s",
target, e
)
if token_id is not None:
builder.internal_metadata.token_id = token_id
if txn_id is not None:
builder.internal_metadata.txn_id = txn_id
event, context = yield self._create_new_client_event(
builder=builder,
requester=requester,
prev_event_ids=prev_event_ids,
)
defer.returnValue((event, context))
@defer.inlineCallbacks
def send_nonmember_event(self, requester, event, context, ratelimit=True):
"""
Persists and notifies local clients and federation of an event.
Args:
event (FrozenEvent) the event to send.
context (Context) the context of the event.
ratelimit (bool): Whether to rate limit this send.
is_guest (bool): Whether the sender is a guest.
"""
if event.type == EventTypes.Member:
raise SynapseError(
500,
"Tried to send member event through non-member codepath"
)
# We check here if we are currently being rate limited, so that we
# don't do unnecessary work. We check again just before we actually
# send the event.
yield self.ratelimit(requester, update=False)
user = UserID.from_string(event.sender)
assert self.hs.is_mine(user), "User must be our own: %s" % (user,)
if event.is_state():
prev_state = yield self.deduplicate_state_event(event, context)
if prev_state is not None:
defer.returnValue(prev_state)
yield self.handle_new_client_event(
requester=requester,
event=event,
context=context,
ratelimit=ratelimit,
)
if event.type == EventTypes.Message:
presence = self.hs.get_presence_handler()
# We don't want to block sending messages on any presence code. This
# matters as sometimes presence code can take a while.
preserve_fn(presence.bump_presence_active_time)(user)
@defer.inlineCallbacks
def deduplicate_state_event(self, event, context):
"""
Checks whether event is in the latest resolved state in context.
If so, returns the version of the event in context.
Otherwise, returns None.
"""
prev_event_id = context.prev_state_ids.get((event.type, event.state_key))
prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
if not prev_event:
return
if prev_event and event.user_id == prev_event.user_id:
prev_content = encode_canonical_json(prev_event.content)
next_content = encode_canonical_json(event.content)
if prev_content == next_content:
defer.returnValue(prev_event)
return
@defer.inlineCallbacks
def create_and_send_nonmember_event(
self,
requester,
event_dict,
ratelimit=True,
txn_id=None
):
"""
Creates an event, then sends it.
See self.create_event and self.send_nonmember_event.
"""
event, context = yield self.create_event(
requester,
event_dict,
token_id=requester.access_token_id,
txn_id=txn_id
)
spam_error = self.spam_checker.check_event_for_spam(event)
if spam_error:
if not isinstance(spam_error, basestring):
spam_error = "Spam is not permitted here"
raise SynapseError(
403, spam_error, Codes.FORBIDDEN
)
yield self.send_nonmember_event(
requester,
event,
context,
ratelimit=ratelimit,
)
defer.returnValue(event)
@defer.inlineCallbacks
def get_room_data(self, user_id=None, room_id=None,
event_type=None, state_key="", is_guest=False):
@@ -470,48 +400,247 @@ class MessageHandler(BaseHandler):
for user_id, profile in users_with_profile.iteritems()
})
@measure_func("_create_new_client_event")
class EventCreationHandler(object):
def __init__(self, hs):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.clock = hs.get_clock()
self.validator = EventValidator()
self.profile_handler = hs.get_profile_handler()
self.event_builder_factory = hs.get_event_builder_factory()
self.server_name = hs.hostname
self.ratelimiter = hs.get_ratelimiter()
self.notifier = hs.get_notifier()
self.config = hs.config
self.http_client = hs.get_simple_http_client()
# This is only used to get at ratelimit function, and maybe_kick_guest_users
self.base_handler = BaseHandler(hs)
self.pusher_pool = hs.get_pusherpool()
# We arbitrarily limit concurrent event creation for a room to 5.
# This is to stop us from diverging history *too* much.
self.limiter = Limiter(max_count=5)
self.action_generator = hs.get_action_generator()
self.spam_checker = hs.get_spam_checker()
@defer.inlineCallbacks
def _create_new_client_event(self, builder, requester=None, prev_event_ids=None):
if prev_event_ids:
prev_events = yield self.store.add_event_hashes(prev_event_ids)
prev_max_depth = yield self.store.get_max_depth_of_events(prev_event_ids)
depth = prev_max_depth + 1
else:
latest_ret = yield self.store.get_latest_event_ids_and_hashes_in_room(
builder.room_id,
def create_event(self, requester, event_dict, token_id=None, txn_id=None,
prev_events_and_hashes=None):
"""
Given a dict from a client, create a new event.
Creates an FrozenEvent object, filling out auth_events, prev_events,
etc.
Adds display names to Join membership events.
Args:
requester
event_dict (dict): An entire event
token_id (str)
txn_id (str)
prev_events_and_hashes (list[(str, dict[str, str], int)]|None):
the forward extremities to use as the prev_events for the
new event. For each event, a tuple of (event_id, hashes, depth)
where *hashes* is a map from algorithm to hash.
If None, they will be requested from the database.
Returns:
Tuple of created event (FrozenEvent), Context
"""
builder = self.event_builder_factory.new(event_dict)
self.validator.validate_new(builder)
if builder.type == EventTypes.Member:
membership = builder.content.get("membership", None)
target = UserID.from_string(builder.state_key)
if membership in {Membership.JOIN, Membership.INVITE}:
# If event doesn't include a display name, add one.
profile = self.profile_handler
content = builder.content
try:
if "displayname" not in content:
content["displayname"] = yield profile.get_displayname(target)
if "avatar_url" not in content:
content["avatar_url"] = yield profile.get_avatar_url(target)
except Exception as e:
logger.info(
"Failed to get profile information for %r: %s",
target, e
)
if token_id is not None:
builder.internal_metadata.token_id = token_id
if txn_id is not None:
builder.internal_metadata.txn_id = txn_id
event, context = yield self.create_new_client_event(
builder=builder,
requester=requester,
prev_events_and_hashes=prev_events_and_hashes,
)
defer.returnValue((event, context))
@defer.inlineCallbacks
def send_nonmember_event(self, requester, event, context, ratelimit=True):
"""
Persists and notifies local clients and federation of an event.
Args:
event (FrozenEvent) the event to send.
context (Context) the context of the event.
ratelimit (bool): Whether to rate limit this send.
is_guest (bool): Whether the sender is a guest.
"""
if event.type == EventTypes.Member:
raise SynapseError(
500,
"Tried to send member event through non-member codepath"
)
# We want to limit the max number of prev events we point to in our
# new event
if len(latest_ret) > 10:
# Sort by reverse depth, so we point to the most recent.
latest_ret.sort(key=lambda a: -a[2])
new_latest_ret = latest_ret[:5]
user = UserID.from_string(event.sender)
# We also randomly point to some of the older events, to make
# sure that we don't completely ignore the older events.
if latest_ret[5:]:
sample_size = min(5, len(latest_ret[5:]))
new_latest_ret.extend(random.sample(latest_ret[5:], sample_size))
latest_ret = new_latest_ret
assert self.hs.is_mine(user), "User must be our own: %s" % (user,)
if latest_ret:
depth = max([d for _, _, d in latest_ret]) + 1
else:
depth = 1
if event.is_state():
prev_state = yield self.deduplicate_state_event(event, context)
if prev_state is not None:
defer.returnValue(prev_state)
prev_events = [
(event_id, prev_hashes)
for event_id, prev_hashes, _ in latest_ret
]
yield self.handle_new_client_event(
requester=requester,
event=event,
context=context,
ratelimit=ratelimit,
)
@defer.inlineCallbacks
def deduplicate_state_event(self, event, context):
"""
Checks whether event is in the latest resolved state in context.
If so, returns the version of the event in context.
Otherwise, returns None.
"""
prev_event_id = context.prev_state_ids.get((event.type, event.state_key))
prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
if not prev_event:
return
if prev_event and event.user_id == prev_event.user_id:
prev_content = encode_canonical_json(prev_event.content)
next_content = encode_canonical_json(event.content)
if prev_content == next_content:
defer.returnValue(prev_event)
return
@defer.inlineCallbacks
def create_and_send_nonmember_event(
self,
requester,
event_dict,
ratelimit=True,
txn_id=None
):
"""
Creates an event, then sends it.
See self.create_event and self.send_nonmember_event.
"""
# We limit the number of concurrent event sends in a room so that we
# don't fork the DAG too much. If we don't limit then we can end up in
# a situation where event persistence can't keep up, causing
# extremities to pile up, which in turn leads to state resolution
# taking longer.
with (yield self.limiter.queue(event_dict["room_id"])):
event, context = yield self.create_event(
requester,
event_dict,
token_id=requester.access_token_id,
txn_id=txn_id
)
spam_error = self.spam_checker.check_event_for_spam(event)
if spam_error:
if not isinstance(spam_error, basestring):
spam_error = "Spam is not permitted here"
raise SynapseError(
403, spam_error, Codes.FORBIDDEN
)
yield self.send_nonmember_event(
requester,
event,
context,
ratelimit=ratelimit,
)
defer.returnValue(event)
@measure_func("create_new_client_event")
@defer.inlineCallbacks
def create_new_client_event(self, builder, requester=None,
prev_events_and_hashes=None):
"""Create a new event for a local client
Args:
builder (EventBuilder):
requester (synapse.types.Requester|None):
prev_events_and_hashes (list[(str, dict[str, str], int)]|None):
the forward extremities to use as the prev_events for the
new event. For each event, a tuple of (event_id, hashes, depth)
where *hashes* is a map from algorithm to hash.
If None, they will be requested from the database.
Returns:
Deferred[(synapse.events.EventBase, synapse.events.snapshot.EventContext)]
"""
if prev_events_and_hashes is not None:
assert len(prev_events_and_hashes) <= 10, \
"Attempting to create an event with %i prev_events" % (
len(prev_events_and_hashes),
)
else:
prev_events_and_hashes = \
yield self.store.get_prev_events_for_room(builder.room_id)
if prev_events_and_hashes:
depth = max([d for _, _, d in prev_events_and_hashes]) + 1
# we cap depth of generated events, to ensure that they are not
# rejected by other servers (and so that they can be persisted in
# the db)
depth = min(depth, MAX_DEPTH)
else:
depth = 1
prev_events = [
(event_id, prev_hashes)
for event_id, prev_hashes, _ in prev_events_and_hashes
]
builder.prev_events = prev_events
builder.depth = depth
state_handler = self.state_handler
context = yield state_handler.compute_event_context(builder)
context = yield self.state.compute_event_context(builder)
if requester:
context.app_service = requester.app_service
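The depth/prev_events computation above, worked through with concrete values (MAX_DEPTH is the cap from synapse.api.constants; the hashes are made up):

prev_events_and_hashes = [
    ("$a:hs.example", {"sha256": "abc..."}, 4),   # (event_id, hashes, depth)
    ("$b:hs.example", {"sha256": "def..."}, 7),
]

depth = max(d for _, _, d in prev_events_and_hashes) + 1   # 8
depth = min(depth, MAX_DEPTH)                              # keep other servers happy

prev_events = [
    (event_id, hashes)
    for event_id, hashes, _ in prev_events_and_hashes
]
# -> [("$a:hs.example", {"sha256": "abc..."}), ("$b:hs.example", {"sha256": "def..."})]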
@@ -546,12 +675,21 @@ class MessageHandler(BaseHandler):
event,
context,
ratelimit=True,
extra_users=[]
extra_users=[],
):
# We now need to go and hit out to wherever we need to hit out to.
"""Processes a new event. This includes checking auth, persisting it,
notifying users, sending to remote servers, etc.
if ratelimit:
yield self.ratelimit(requester)
If called from a worker will hit out to the master process for final
processing.
Args:
requester (Requester)
event (FrozenEvent)
context (EventContext)
ratelimit (bool)
extra_users (list(UserID)): Any extra users to notify about event
"""
try:
yield self.auth.check_from_context(event, context)
@@ -561,13 +699,70 @@ class MessageHandler(BaseHandler):
# Ensure that we can round trip before trying to persist in db
try:
dump = ujson.dumps(unfreeze(event.content))
ujson.loads(dump)
dump = frozendict_json_encoder.encode(event.content)
simplejson.loads(dump)
except Exception:
logger.exception("Failed to encode content: %r", event.content)
raise
yield self.maybe_kick_guest_users(event, context)
yield self.action_generator.handle_push_actions_for_event(
event, context
)
try:
# If we're a worker we need to hit out to the master.
if self.config.worker_app:
yield send_event_to_master(
self.http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
requester=requester,
event=event,
context=context,
ratelimit=ratelimit,
extra_users=extra_users,
)
return
yield self.persist_and_notify_client_event(
requester,
event,
context,
ratelimit=ratelimit,
extra_users=extra_users,
)
except: # noqa: E722, as we reraise the exception this is fine.
# Ensure that we actually remove the entries in the push actions
# staging area, if we calculated them.
tp, value, tb = sys.exc_info()
run_in_background(
self.store.remove_push_actions_from_staging,
event.event_id,
)
six.reraise(tp, value, tb)
@defer.inlineCallbacks
def persist_and_notify_client_event(
self,
requester,
event,
context,
ratelimit=True,
extra_users=[],
):
"""Called when we have fully built the event, have already
calculated the push actions for the event, and checked auth.
This should only be run on master.
"""
assert not self.config.worker_app
if ratelimit:
yield self.base_handler.ratelimit(requester)
yield self.base_handler.maybe_kick_guest_users(event, context)
if event.type == EventTypes.CanonicalAlias:
# Check the alias is actually valid (at this time at least)
@@ -660,26 +855,39 @@ class MessageHandler(BaseHandler):
"Changing the room create event is forbidden",
)
yield self.action_generator.handle_push_actions_for_event(
event, context
)
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event, context=context
)
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
preserve_fn(self.pusher_pool.on_new_notifications)(
run_in_background(
self.pusher_pool.on_new_notifications,
event_stream_id, max_stream_id
)
@defer.inlineCallbacks
def _notify():
yield run_on_reactor()
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
try:
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
except Exception:
logger.exception("Error notifying about new room event")
preserve_fn(_notify)()
run_in_background(_notify)
if event.type == EventTypes.Message:
# We don't want to block sending messages on any presence code. This
# matters as sometimes presence code can take a while.
run_in_background(self._bump_active_time, requester.user)
@defer.inlineCallbacks
def _bump_active_time(self, user):
try:
presence = self.hs.get_presence_handler()
yield presence.bump_presence_active_time(user)
except Exception:
logger.exception("Error bumping presence active time")

View File

@@ -31,7 +31,7 @@ from synapse.storage.presence import UserPresenceState
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.util.async import Linearizer
from synapse.util.logcontext import preserve_fn
from synapse.util.logcontext import run_in_background
from synapse.util.logutils import log_function
from synapse.util.metrics import Measure
from synapse.util.wheel_timer import WheelTimer
@@ -93,29 +93,30 @@ class PresenceHandler(object):
self.store = hs.get_datastore()
self.wheel_timer = WheelTimer()
self.notifier = hs.get_notifier()
self.replication = hs.get_replication_layer()
self.federation = hs.get_federation_sender()
self.state = hs.get_state_handler()
self.replication.register_edu_handler(
federation_registry = hs.get_federation_registry()
federation_registry.register_edu_handler(
"m.presence", self.incoming_presence
)
self.replication.register_edu_handler(
federation_registry.register_edu_handler(
"m.presence_invite",
lambda origin, content: self.invite_presence(
observed_user=UserID.from_string(content["observed_user"]),
observer_user=UserID.from_string(content["observer_user"]),
)
)
self.replication.register_edu_handler(
federation_registry.register_edu_handler(
"m.presence_accept",
lambda origin, content: self.accept_presence(
observed_user=UserID.from_string(content["observed_user"]),
observer_user=UserID.from_string(content["observer_user"]),
)
)
self.replication.register_edu_handler(
federation_registry.register_edu_handler(
"m.presence_deny",
lambda origin, content: self.deny_presence(
observed_user=UserID.from_string(content["observed_user"]),
@@ -253,6 +254,14 @@ class PresenceHandler(object):
logger.info("Finished _persist_unpersisted_changes")
@defer.inlineCallbacks
def _update_states_and_catch_exception(self, new_states):
try:
res = yield self._update_states(new_states)
defer.returnValue(res)
except Exception:
logger.exception("Error updating presence")
@defer.inlineCallbacks
def _update_states(self, new_states):
"""Updates presence of users. Sets the appropriate timeouts. Pokes
@@ -363,7 +372,7 @@ class PresenceHandler(object):
now=now,
)
preserve_fn(self._update_states)(changes)
run_in_background(self._update_states_and_catch_exception, changes)
except Exception:
logger.exception("Exception in _handle_timeouts loop")
@@ -421,20 +430,23 @@ class PresenceHandler(object):
@defer.inlineCallbacks
def _end():
if affect_presence:
try:
self.user_to_num_current_syncs[user_id] -= 1
prev_state = yield self.current_state_for_user(user_id)
yield self._update_states([prev_state.copy_and_replace(
last_user_sync_ts=self.clock.time_msec(),
)])
except Exception:
logger.exception("Error updating presence after sync")
@contextmanager
def _user_syncing():
try:
yield
finally:
preserve_fn(_end)()
if affect_presence:
run_in_background(_end)
defer.returnValue(_user_syncing())
@@ -1199,7 +1211,7 @@ def handle_timeout(state, is_mine, syncing_user_ids, now):
)
changed = True
else:
# We expect to be poked occaisonally by the other side.
# We expect to be poked occasionally by the other side.
# This is to protect against forgetful/buggy servers, so that
# no one gets stuck online forever.
if now - state.last_federation_update_ts > FEDERATION_TIMEOUT:

View File

@@ -17,7 +17,6 @@ import logging
from twisted.internet import defer
import synapse.types
from synapse.api.errors import SynapseError, AuthError, CodeMessageException
from synapse.types import UserID, get_domain_from_id
from ._base import BaseHandler
@@ -32,12 +31,17 @@ class ProfileHandler(BaseHandler):
def __init__(self, hs):
super(ProfileHandler, self).__init__(hs)
self.federation = hs.get_replication_layer()
self.federation.register_query_handler(
self.federation = hs.get_federation_client()
hs.get_federation_registry().register_query_handler(
"profile", self.on_profile_query
)
self.clock.looping_call(self._update_remote_profile_cache, self.PROFILE_UPDATE_MS)
self.user_directory_handler = hs.get_user_directory_handler()
if hs.config.worker_app is None:
self.clock.looping_call(
self._update_remote_profile_cache, self.PROFILE_UPDATE_MS,
)
@defer.inlineCallbacks
def get_profile(self, user_id):
@@ -140,7 +144,13 @@ class ProfileHandler(BaseHandler):
target_user.localpart, new_displayname
)
yield self._update_join_states(requester)
if self.hs.config.user_directory_search_all_users:
profile = yield self.store.get_profileinfo(target_user.localpart)
yield self.user_directory_handler.handle_local_profile_change(
target_user.to_string(), profile
)
yield self._update_join_states(requester, target_user)
@defer.inlineCallbacks
def get_avatar_url(self, target_user):
@@ -184,7 +194,13 @@ class ProfileHandler(BaseHandler):
target_user.localpart, new_avatar_url
)
yield self._update_join_states(requester)
if self.hs.config.user_directory_search_all_users:
profile = yield self.store.get_profileinfo(target_user.localpart)
yield self.user_directory_handler.handle_local_profile_change(
target_user.to_string(), profile
)
yield self._update_join_states(requester, target_user)
@defer.inlineCallbacks
def on_profile_query(self, args):
@@ -209,28 +225,24 @@ class ProfileHandler(BaseHandler):
defer.returnValue(response)
@defer.inlineCallbacks
def _update_join_states(self, requester):
user = requester.user
if not self.hs.is_mine(user):
def _update_join_states(self, requester, target_user):
if not self.hs.is_mine(target_user):
return
yield self.ratelimit(requester)
room_ids = yield self.store.get_rooms_for_user(
user.to_string(),
target_user.to_string(),
)
for room_id in room_ids:
handler = self.hs.get_handlers().room_member_handler
handler = self.hs.get_room_member_handler()
try:
# Assume the user isn't a guest because we don't let guests set
# profile or avatar data.
# XXX why are we recreating `requester` here for each room?
# what was wrong with the `requester` we were passed?
requester = synapse.types.create_requester(user)
# Assume the target_user isn't a guest,
# because we don't let guests set profile or avatar data.
yield handler.update_membership(
requester,
user,
target_user,
room_id,
"join", # We treat a profile update like a join.
ratelimit=False, # Try to hide that these events aren't atomic.

View File

@@ -41,9 +41,9 @@ class ReadMarkerHandler(BaseHandler):
"""
with (yield self.read_marker_linearizer.queue((room_id, user_id))):
account_data = yield self.store.get_account_data_for_room(user_id, room_id)
existing_read_marker = account_data.get("m.fully_read", None)
existing_read_marker = yield self.store.get_account_data_for_room_and_type(
user_id, room_id, "m.fully_read",
)
should_update = True

View File

@@ -35,7 +35,7 @@ class ReceiptsHandler(BaseHandler):
self.store = hs.get_datastore()
self.hs = hs
self.federation = hs.get_federation_sender()
hs.get_replication_layer().register_edu_handler(
hs.get_federation_registry().register_edu_handler(
"m.receipt", self._received_remote_receipt
)
self.clock = self.hs.get_clock()
@@ -135,37 +135,40 @@ class ReceiptsHandler(BaseHandler):
"""Given a list of receipts, works out which remote servers should be
poked and pokes them.
"""
# TODO: Some of this stuff should be coalesced.
for receipt in receipts:
room_id = receipt["room_id"]
receipt_type = receipt["receipt_type"]
user_id = receipt["user_id"]
event_ids = receipt["event_ids"]
data = receipt["data"]
try:
# TODO: Some of this stuff should be coalesced.
for receipt in receipts:
room_id = receipt["room_id"]
receipt_type = receipt["receipt_type"]
user_id = receipt["user_id"]
event_ids = receipt["event_ids"]
data = receipt["data"]
users = yield self.state.get_current_user_in_room(room_id)
remotedomains = set(get_domain_from_id(u) for u in users)
remotedomains = remotedomains.copy()
remotedomains.discard(self.server_name)
users = yield self.state.get_current_user_in_room(room_id)
remotedomains = set(get_domain_from_id(u) for u in users)
remotedomains = remotedomains.copy()
remotedomains.discard(self.server_name)
logger.debug("Sending receipt to: %r", remotedomains)
logger.debug("Sending receipt to: %r", remotedomains)
for domain in remotedomains:
self.federation.send_edu(
destination=domain,
edu_type="m.receipt",
content={
room_id: {
receipt_type: {
user_id: {
"event_ids": event_ids,
"data": data,
for domain in remotedomains:
self.federation.send_edu(
destination=domain,
edu_type="m.receipt",
content={
room_id: {
receipt_type: {
user_id: {
"event_ids": event_ids,
"data": data,
}
}
}
},
},
},
key=(room_id, receipt_type, user_id),
)
key=(room_id, receipt_type, user_id),
)
except Exception:
logger.exception("Error pushing receipts to remote servers")
@defer.inlineCallbacks
def get_receipts_for_room(self, room_id, to_key):

View File

@@ -23,8 +23,9 @@ from synapse.api.errors import (
)
from synapse.http.client import CaptchaServerHttpClient
from synapse import types
from synapse.types import UserID
from synapse.util.async import run_on_reactor
from synapse.types import UserID, create_requester, RoomID, RoomAlias
from synapse.util.async import run_on_reactor, Linearizer
from synapse.util.threepids import check_3pid_allowed
from ._base import BaseHandler
logger = logging.getLogger(__name__)
@@ -36,13 +37,19 @@ class RegistrationHandler(BaseHandler):
super(RegistrationHandler, self).__init__(hs)
self.auth = hs.get_auth()
self._auth_handler = hs.get_auth_handler()
self.profile_handler = hs.get_profile_handler()
self.user_directory_handler = hs.get_user_directory_handler()
self.captcha_client = CaptchaServerHttpClient(hs)
self._next_generated_user_id = None
self.macaroon_gen = hs.get_macaroon_generator()
self._generate_user_id_linearizer = Linearizer(
name="_generate_user_id_linearizer",
)
@defer.inlineCallbacks
def check_username(self, localpart, guest_access_token=None,
assigned_user_id=None):
@@ -129,7 +136,7 @@ class RegistrationHandler(BaseHandler):
yield run_on_reactor()
password_hash = None
if password:
password_hash = self.auth_handler().hash(password)
password_hash = yield self.auth_handler().hash(password)
if localpart:
yield self.check_username(localpart, guest_access_token=guest_access_token)
@@ -164,6 +171,13 @@ class RegistrationHandler(BaseHandler):
),
admin=admin,
)
if self.hs.config.user_directory_search_all_users:
profile = yield self.store.get_profileinfo(localpart)
yield self.user_directory_handler.handle_local_profile_change(
user_id, profile
)
else:
# autogen a sequential user ID
attempts = 0
@@ -191,10 +205,17 @@ class RegistrationHandler(BaseHandler):
token = None
attempts += 1
# auto-join the user to any rooms we're supposed to dump them into
fake_requester = create_requester(user_id)
for r in self.hs.config.auto_join_rooms:
try:
yield self._join_user_to_room(fake_requester, r)
except Exception as e:
logger.error("Failed to join new user to %r: %r", r, e)
# We used to generate default identicons here, but nowadays
# we want clients to generate their own as part of their branding
# rather than there being consistent matrix-wide ones, so we don't.
defer.returnValue((user_id, token))
@defer.inlineCallbacks
@@ -284,7 +305,7 @@ class RegistrationHandler(BaseHandler):
"""
for c in threepidCreds:
logger.info("validating theeepidcred sid %s on id server %s",
logger.info("validating threepidcred sid %s on id server %s",
c['sid'], c['idServer'])
try:
identity_handler = self.hs.get_handlers().identity_handler
@@ -298,6 +319,11 @@ class RegistrationHandler(BaseHandler):
logger.info("got threepid with medium '%s' and address '%s'",
threepid['medium'], threepid['address'])
if not check_3pid_allowed(self.hs, threepid['medium'], threepid['address']):
raise RegistrationError(
403, "Third party identifier is not allowed"
)
@defer.inlineCallbacks
def bind_emails(self, user_id, threepidCreds):
"""Links emails with a user ID and informs an identity server.
@@ -330,9 +356,11 @@ class RegistrationHandler(BaseHandler):
@defer.inlineCallbacks
def _generate_user_id(self, reseed=False):
if reseed or self._next_generated_user_id is None:
self._next_generated_user_id = (
yield self.store.find_next_generated_user_id_localpart()
)
with (yield self._generate_user_id_linearizer.queue(())):
if reseed or self._next_generated_user_id is None:
self._next_generated_user_id = (
yield self.store.find_next_generated_user_id_localpart()
)
id = self._next_generated_user_id
self._next_generated_user_id += 1
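The Linearizer makes the counter initialisation a critical section, and the None check is deliberately repeated inside it: another caller may have populated the counter while we were queued. A standalone sketch of the double-checked pattern (the class and `find_next_id` call are illustrative, not the handler's real state):

from twisted.internet import defer
from synapse.util.async import Linearizer

class IdAllocator(object):
    def __init__(self, store):
        self.store = store
        self._next_id = None
        self._linearizer = Linearizer(name="id_allocator")

    @defer.inlineCallbacks
    def allocate(self):
        if self._next_id is None:
            with (yield self._linearizer.queue(())):
                # re-check: a concurrent caller may have initialised
                # _next_id while we were waiting in the queue
                if self._next_id is None:
                    self._next_id = yield self.store.find_next_id()  # illustrative DB call
        allocated, self._next_id = self._next_id, self._next_id + 1
        defer.returnValue(allocated)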
@@ -416,7 +444,7 @@ class RegistrationHandler(BaseHandler):
create_profile_with_localpart=user.localpart,
)
else:
yield self.store.user_delete_access_tokens(user_id=user_id)
yield self._auth_handler.delete_access_tokens_for_user(user_id)
yield self.store.add_access_token_to_user(user_id=user_id, token=token)
if displayname is not None:
@@ -431,16 +459,59 @@ class RegistrationHandler(BaseHandler):
return self.hs.get_auth_handler()
@defer.inlineCallbacks
def guest_access_token_for(self, medium, address, inviter_user_id):
def get_or_register_3pid_guest(self, medium, address, inviter_user_id):
"""Get a guest access token for a 3PID, creating a guest account if
one doesn't already exist.
Args:
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
Returns:
Deferred[(str, str)]: A 2-tuple of `(user_id, access_token)` of the
3PID guest account.
"""
access_token = yield self.store.get_3pid_guest_access_token(medium, address)
if access_token:
defer.returnValue(access_token)
user_info = yield self.auth.get_user_by_access_token(
access_token
)
_, access_token = yield self.register(
defer.returnValue((user_info["user"].to_string(), access_token))
user_id, access_token = yield self.register(
generate_token=True,
make_guest=True
)
access_token = yield self.store.save_or_get_3pid_guest_access_token(
medium, address, access_token, inviter_user_id
)
defer.returnValue(access_token)
defer.returnValue((user_id, access_token))
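Callers now get back a 2-tuple rather than a bare token; the documented order is (user_id, access_token). A usage sketch with example values:

guest_user_id, guest_access_token = yield self.get_or_register_3pid_guest(
    medium="email",
    address="invitee@example.com",
    inviter_user_id="@inviter:example.com",
)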
@defer.inlineCallbacks
def _join_user_to_room(self, requester, room_identifier):
room_id = None
room_member_handler = self.hs.get_room_member_handler()
if RoomID.is_valid(room_identifier):
room_id = room_identifier
remote_room_hosts = []  # a bare room ID gives us no candidate servers
elif RoomAlias.is_valid(room_identifier):
room_alias = RoomAlias.from_string(room_identifier)
room_id, remote_room_hosts = (
yield room_member_handler.lookup_room_alias(room_alias)
)
room_id = room_id.to_string()
else:
raise SynapseError(400, "%s was not legal room ID or room alias" % (
room_identifier,
))
yield room_member_handler.update_membership(
requester=requester,
target=requester.user,
room_id=room_id,
remote_room_hosts=remote_room_hosts,
action="join",
)
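_join_user_to_room accepts either a room ID or an alias because auto_join_rooms is operator-supplied configuration. Assuming the usual homeserver.yaml list syntax, the option would look something like:

auto_join_rooms:
    - "#welcome:example.com"
    - "!someroom:example.com"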

synapse/handlers/room.py

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -64,6 +65,7 @@ class RoomCreationHandler(BaseHandler):
super(RoomCreationHandler, self).__init__(hs)
self.spam_checker = hs.get_spam_checker()
self.event_creation_handler = hs.get_event_creation_handler()
@defer.inlineCallbacks
def create_room(self, requester, config, ratelimit=True):
@@ -163,13 +165,11 @@ class RoomCreationHandler(BaseHandler):
creation_content = config.get("creation_content", {})
msg_handler = self.hs.get_handlers().message_handler
room_member_handler = self.hs.get_handlers().room_member_handler
room_member_handler = self.hs.get_room_member_handler()
yield self._send_events_for_new_room(
requester,
room_id,
msg_handler,
room_member_handler,
preset_config=preset_config,
invite_list=invite_list,
@@ -181,7 +181,7 @@ class RoomCreationHandler(BaseHandler):
if "name" in config:
name = config["name"]
yield msg_handler.create_and_send_nonmember_event(
yield self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.Name,
@@ -194,7 +194,7 @@ class RoomCreationHandler(BaseHandler):
if "topic" in config:
topic = config["topic"]
yield msg_handler.create_and_send_nonmember_event(
yield self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.Topic,
@@ -205,12 +205,12 @@ class RoomCreationHandler(BaseHandler):
},
ratelimit=False)
content = {}
is_direct = config.get("is_direct", None)
if is_direct:
content["is_direct"] = is_direct
for invitee in invite_list:
content = {}
is_direct = config.get("is_direct", None)
if is_direct:
content["is_direct"] = is_direct
yield room_member_handler.update_membership(
requester,
UserID.from_string(invitee),
@@ -224,7 +224,7 @@ class RoomCreationHandler(BaseHandler):
id_server = invite_3pid["id_server"]
address = invite_3pid["address"]
medium = invite_3pid["medium"]
yield self.hs.get_handlers().room_member_handler.do_3pid_invite(
yield self.hs.get_room_member_handler().do_3pid_invite(
room_id,
requester.user,
medium,
@@ -249,7 +249,6 @@ class RoomCreationHandler(BaseHandler):
self,
creator, # A Requester object.
room_id,
msg_handler,
room_member_handler,
preset_config,
invite_list,
@@ -272,7 +271,7 @@ class RoomCreationHandler(BaseHandler):
@defer.inlineCallbacks
def send(etype, content, **kwargs):
event = create(etype, content, **kwargs)
yield msg_handler.create_and_send_nonmember_event(
yield self.event_creation_handler.create_and_send_nonmember_event(
creator,
event,
ratelimit=False
@@ -476,12 +475,9 @@ class RoomEventSource(object):
user.to_string()
)
if app_service:
events, end_key = yield self.store.get_appservice_room_stream(
service=app_service,
from_key=from_key,
to_key=to_key,
limit=limit,
)
# We no longer support AS users using /sync directly.
# See https://github.com/matrix-org/matrix-doc/issues/1144
raise NotImplementedError()
else:
room_events = yield self.store.get_membership_changes_for_user(
user.to_string(), from_key, to_key

synapse/handlers/room_list.py

@@ -15,6 +15,8 @@
from twisted.internet import defer
from six.moves import range
from ._base import BaseHandler
from synapse.api.constants import (
@@ -43,8 +45,9 @@ EMTPY_THIRD_PARTY_ID = ThirdPartyInstanceID(None, None)
class RoomListHandler(BaseHandler):
def __init__(self, hs):
super(RoomListHandler, self).__init__(hs)
self.response_cache = ResponseCache(hs)
self.remote_response_cache = ResponseCache(hs, timeout_ms=30 * 1000)
self.response_cache = ResponseCache(hs, "room_list")
self.remote_response_cache = ResponseCache(hs, "remote_room_list",
timeout_ms=30 * 1000)
def get_local_public_room_list(self, limit=None, since_token=None,
search_filter=None,
@@ -70,20 +73,17 @@ class RoomListHandler(BaseHandler):
if search_filter:
# We explicitly don't bother caching searches or requests for
# appservice specific lists.
logger.info("Bypassing cache as search request.")
return self._get_public_room_list(
limit, since_token, search_filter, network_tuple=network_tuple,
)
key = (limit, since_token, network_tuple)
result = self.response_cache.get(key)
if not result:
result = self.response_cache.set(
key,
self._get_public_room_list(
limit, since_token, network_tuple=network_tuple
)
)
return result
return self.response_cache.wrap(
key,
self._get_public_room_list,
limit, since_token, network_tuple=network_tuple,
)
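ResponseCache.wrap folds the previous get()/set() dance into a single call. A sketch of what it plausibly does, inferred from the removed lines rather than from ResponseCache's actual source:

def wrap(self, key, callback, *args, **kwargs):
    # an in-flight or recently-completed deferred for this key, or None
    result = self.get(key)
    if not result:
        # nothing cached: start the real work and cache its deferred, so
        # concurrent requests for the same key share one computation
        result = self.set(key, callback(*args, **kwargs))
    return result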
@defer.inlineCallbacks
def _get_public_room_list(self, limit=None, since_token=None,
@@ -149,6 +149,8 @@ class RoomListHandler(BaseHandler):
# We want larger rooms to be first, hence negating num_joined_users
rooms_to_order_value[room_id] = (-num_joined_users, room_id)
logger.info("Getting ordering for %i rooms since %s",
len(room_ids), stream_token)
yield concurrently_execute(get_order_for_room, room_ids, 10)
sorted_entries = sorted(rooms_to_order_value.items(), key=lambda e: e[1])
@@ -176,34 +178,43 @@ class RoomListHandler(BaseHandler):
rooms_to_scan = rooms_to_scan[:since_token.current_limit]
rooms_to_scan.reverse()
# Actually generate the entries. _append_room_entry_to_chunk will append to
# chunk but will stop if len(chunk) > limit
chunk = []
if limit and not search_filter:
logger.info("After sorting and filtering, %i rooms remain",
len(rooms_to_scan))
# _append_room_entry_to_chunk will append to chunk but will stop if
# len(chunk) > limit
#
# Normally we will generate enough results on the first iteration here,
# but if there is a search filter, _append_room_entry_to_chunk may
# filter some results out, in which case we loop again.
#
# We don't want to scan over the entire range either as that
# would potentially waste a lot of work.
#
# XXX if there is no limit, we may end up DoSing the server with
# calls to get_current_state_ids for every single room on the
# server. Surely we should cap this somehow?
#
if limit:
step = limit + 1
for i in xrange(0, len(rooms_to_scan), step):
# We iterate here because the vast majority of cases we'll stop
# at first iteration, but occasionally _append_room_entry_to_chunk
# won't append to the chunk and so we need to loop again.
# We don't want to scan over the entire range either as that
# would potentially waste a lot of work.
yield concurrently_execute(
lambda r: self._append_room_entry_to_chunk(
r, rooms_to_num_joined[r],
chunk, limit, search_filter
),
rooms_to_scan[i:i + step], 10
)
if len(chunk) >= limit + 1:
break
else:
# step cannot be zero
step = len(rooms_to_scan) if len(rooms_to_scan) != 0 else 1
chunk = []
for i in range(0, len(rooms_to_scan), step):
batch = rooms_to_scan[i:i + step]
logger.info("Processing %i rooms for result", len(batch))
yield concurrently_execute(
lambda r: self._append_room_entry_to_chunk(
r, rooms_to_num_joined[r],
chunk, limit, search_filter
),
rooms_to_scan, 5
batch, 5,
)
logger.info("Now %i rooms in result", len(chunk))
if len(chunk) >= limit + 1:
break
chunk.sort(key=lambda e: (-e["num_joined_members"], e["room_id"]))
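A worked example of the stepping logic, under assumed values: with limit=3 the step is 4, so a single batch can both fill a page and prove whether a further page exists.

rooms_to_scan = ["!r%i:example.com" % i for i in range(10)]
limit = 3
step = limit + 1                      # 4: the spare entry signals "more results"
chunk = []
for i in range(0, len(rooms_to_scan), step):
    batch = rooms_to_scan[i:i + step]
    chunk.extend(batch)               # stand-in for _append_room_entry_to_chunk,
                                      # which may filter entries out (hence the loop)
    if len(chunk) >= limit + 1:       # a full page plus the next-page marker
        break
# only the first batch of 4 rooms was scanned; the other 6 were never touched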
@@ -393,7 +404,7 @@ class RoomListHandler(BaseHandler):
def _get_remote_list_cached(self, server_name, limit=None, since_token=None,
search_filter=None, include_all_networks=False,
third_party_instance_id=None,):
repl_layer = self.hs.get_replication_layer()
repl_layer = self.hs.get_federation_client()
if search_filter:
# We can't cache when asking for search
return repl_layer.get_public_rooms(
@@ -406,18 +417,14 @@ class RoomListHandler(BaseHandler):
server_name, limit, since_token, include_all_networks,
third_party_instance_id,
)
result = self.remote_response_cache.get(key)
if not result:
result = self.remote_response_cache.set(
key,
repl_layer.get_public_rooms(
server_name, limit=limit, since_token=since_token,
search_filter=search_filter,
include_all_networks=include_all_networks,
third_party_instance_id=third_party_instance_id,
)
)
return result
return self.remote_response_cache.wrap(
key,
repl_layer.get_public_rooms,
server_name, limit=limit, since_token=since_token,
search_filter=search_filter,
include_all_networks=include_all_networks,
third_party_instance_id=third_party_instance_id,
)
class RoomListNextBatch(namedtuple("RoomListNextBatch", (

synapse/handlers/room_member.py

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import logging
from signedjson.key import decode_verify_key_bytes
@@ -29,50 +30,138 @@ from synapse.api.errors import AuthError, SynapseError, Codes
from synapse.types import UserID, RoomID
from synapse.util.async import Linearizer
from synapse.util.distributor import user_left_room, user_joined_room
from ._base import BaseHandler
logger = logging.getLogger(__name__)
id_server_scheme = "https://"
class RoomMemberHandler(BaseHandler):
class RoomMemberHandler(object):
# TODO(paul): This handler currently contains a messy conflation of
# low-level API that works on UserID objects and so on, and REST-level
# API that takes ID strings and returns pagination chunks. These concerns
# ought to be separated out a lot better.
def __init__(self, hs):
super(RoomMemberHandler, self).__init__(hs)
__metaclass__ = abc.ABCMeta
def __init__(self, hs):
self.hs = hs
self.store = hs.get_datastore()
self.auth = hs.get_auth()
self.state_handler = hs.get_state_handler()
self.config = hs.config
self.simple_http_client = hs.get_simple_http_client()
self.federation_handler = hs.get_handlers().federation_handler
self.directory_handler = hs.get_handlers().directory_handler
self.registration_handler = hs.get_handlers().registration_handler
self.profile_handler = hs.get_profile_handler()
self.event_creation_hander = hs.get_event_creation_handler()
self.member_linearizer = Linearizer(name="member")
self.clock = hs.get_clock()
self.spam_checker = hs.get_spam_checker()
self.distributor = hs.get_distributor()
self.distributor.declare("user_joined_room")
self.distributor.declare("user_left_room")
@abc.abstractmethod
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
"""Try and join a room that this server is not in
Args:
requester (Requester)
remote_room_hosts (list[str]): List of servers that can be used
to join via.
room_id (str): Room that we are trying to join
user (UserID): User who is trying to join
content (dict): A dict that should be used as the content of the
join event.
Returns:
Deferred
"""
raise NotImplementedError()
@abc.abstractmethod
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
"""Attempt to reject an invite for a room this server is not in. If we
fail to do so we locally mark the invite as rejected.
Args:
requester (Requester)
remote_room_hosts (list[str]): List of servers to use to try and
reject invite
room_id (str)
target (UserID): The user rejecting the invite
Returns:
Deferred[dict]: A dictionary to be returned to the client, may
include event_id etc, or nothing if we locally rejected
"""
raise NotImplementedError()
@abc.abstractmethod
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Get a guest access token for a 3PID, creating a guest account if
one doesn't already exist.
Args:
requester (Requester)
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
Returns:
Deferred[(str, str)]: A 2-tuple of `(user_id, access_token)` of the
3PID guest account.
"""
raise NotImplementedError()
@abc.abstractmethod
def _user_joined_room(self, target, room_id):
"""Notifies distributor on master process that the user has joined the
room.
Args:
target (UserID)
room_id (str)
Returns:
Deferred|None
"""
raise NotImplementedError()
@abc.abstractmethod
def _user_left_room(self, target, room_id):
"""Notifies distributor on master process that the user has left the
room.
Args:
target (UserID)
room_id (str)
Returns:
Deferred|None
"""
raise NotImplementedError()
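The ABCMeta metaclass makes these stubs load-bearing: a subclass that forgets to implement one cannot even be instantiated. A standalone illustration, in the Python 2 style this codebase uses:

import abc

class Base(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
        raise NotImplementedError()

class Broken(Base):
    pass  # no _remote_join implementation

Broken()  # TypeError: Can't instantiate abstract class Broken with abstract methods _remote_join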
@defer.inlineCallbacks
def _local_membership_update(
self, requester, target, room_id, membership,
prev_event_ids,
prev_events_and_hashes,
txn_id=None,
ratelimit=True,
content=None,
):
if content is None:
content = {}
msg_handler = self.hs.get_handlers().message_handler
content["membership"] = membership
if requester.is_guest:
content["kind"] = "guest"
event, context = yield msg_handler.create_event(
event, context = yield self.event_creation_hander.create_event(
requester,
{
"type": EventTypes.Member,
@@ -86,16 +175,18 @@ class RoomMemberHandler(BaseHandler):
},
token_id=requester.access_token_id,
txn_id=txn_id,
prev_event_ids=prev_event_ids,
prev_events_and_hashes=prev_events_and_hashes,
)
# Check if this event matches the previous membership event for the user.
duplicate = yield msg_handler.deduplicate_state_event(event, context)
duplicate = yield self.event_creation_hander.deduplicate_state_event(
event, context,
)
if duplicate is not None:
# Discard the new event since this membership change is a no-op.
defer.returnValue(duplicate)
yield msg_handler.handle_new_client_event(
yield self.event_creation_hander.handle_new_client_event(
requester,
event,
context,
@@ -117,32 +208,15 @@ class RoomMemberHandler(BaseHandler):
prev_member_event = yield self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
yield user_joined_room(self.distributor, target, room_id)
yield self._user_joined_room(target, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
if prev_member_event.membership == Membership.JOIN:
user_left_room(self.distributor, target, room_id)
yield self._user_left_room(target, room_id)
defer.returnValue(event)
@defer.inlineCallbacks
def remote_join(self, remote_room_hosts, room_id, user, content):
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
# We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
yield self.hs.get_handlers().federation_handler.do_invite_join(
remote_room_hosts,
room_id,
user.to_string(),
content,
)
yield user_joined_room(self.distributor, user, room_id)
@defer.inlineCallbacks
def update_membership(
self,
@@ -189,6 +263,10 @@ class RoomMemberHandler(BaseHandler):
content_specified = bool(content)
if content is None:
content = {}
else:
# We do a copy here as we potentially change some keys
# later on.
content = dict(content)
effective_membership_state = action
if action in ["kick", "unban"]:
@@ -197,8 +275,7 @@ class RoomMemberHandler(BaseHandler):
# if this is a join with a 3pid signature, we may need to turn a 3pid
# invite into a normal invite before we can handle the join.
if third_party_signed is not None:
replication = self.hs.get_replication_layer()
yield replication.exchange_third_party_invite(
yield self.federation_handler.exchange_third_party_invite(
third_party_signed["sender"],
target.to_string(),
room_id,
@@ -219,7 +296,7 @@ class RoomMemberHandler(BaseHandler):
requester.user,
)
if not is_requester_admin:
if self.hs.config.block_non_admin_invites:
if self.config.block_non_admin_invites:
logger.info(
"Blocking invite: user is not admin and non-admin "
"invites disabled"
@@ -237,7 +314,12 @@ class RoomMemberHandler(BaseHandler):
403, "Invites have been disabled on this server",
)
latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
prev_events_and_hashes = yield self.store.get_prev_events_for_room(
room_id,
)
latest_event_ids = (
event_id for (event_id, _, _) in prev_events_and_hashes
)
current_state_ids = yield self.state_handler.get_current_state_ids(
room_id, latest_event_ids=latest_event_ids,
)
@@ -278,7 +360,7 @@ class RoomMemberHandler(BaseHandler):
raise AuthError(403, "Guest access not allowed")
if not is_host_in_room:
inviter = yield self.get_inviter(target.to_string(), room_id)
inviter = yield self._get_inviter(target.to_string(), room_id)
if inviter and not self.hs.is_mine(inviter):
remote_room_hosts.append(inviter.domain)
@@ -292,15 +374,15 @@ class RoomMemberHandler(BaseHandler):
if requester.is_guest:
content["kind"] = "guest"
ret = yield self.remote_join(
remote_room_hosts, room_id, target, content
ret = yield self._remote_join(
requester, remote_room_hosts, room_id, target, content
)
defer.returnValue(ret)
elif effective_membership_state == Membership.LEAVE:
if not is_host_in_room:
# perhaps we've been invited
inviter = yield self.get_inviter(target.to_string(), room_id)
inviter = yield self._get_inviter(target.to_string(), room_id)
if not inviter:
raise SynapseError(404, "Not a known room")
@@ -314,28 +396,10 @@ class RoomMemberHandler(BaseHandler):
else:
# send the rejection to the inviter's HS.
remote_room_hosts = remote_room_hosts + [inviter.domain]
fed_handler = self.hs.get_handlers().federation_handler
try:
ret = yield fed_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
target.to_string(),
)
defer.returnValue(ret)
except Exception as e:
# if we were unable to reject the invite, just mark
# it as rejected on our end and plough ahead.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
target.to_string(), room_id
)
defer.returnValue({})
res = yield self._remote_reject_invite(
requester, remote_room_hosts, room_id, target,
)
defer.returnValue(res)
res = yield self._local_membership_update(
requester=requester,
@@ -344,7 +408,7 @@ class RoomMemberHandler(BaseHandler):
membership=effective_membership_state,
txn_id=txn_id,
ratelimit=ratelimit,
prev_event_ids=latest_event_ids,
prev_events_and_hashes=prev_events_and_hashes,
content=content,
)
defer.returnValue(res)
@@ -390,8 +454,9 @@ class RoomMemberHandler(BaseHandler):
else:
requester = synapse.types.create_requester(target_user)
message_handler = self.hs.get_handlers().message_handler
prev_event = yield message_handler.deduplicate_state_event(event, context)
prev_event = yield self.event_creation_hander.deduplicate_state_event(
event, context,
)
if prev_event is not None:
return
@@ -408,7 +473,7 @@ class RoomMemberHandler(BaseHandler):
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
yield message_handler.handle_new_client_event(
yield self.event_creation_hander.handle_new_client_event(
requester,
event,
context,
@@ -430,12 +495,12 @@ class RoomMemberHandler(BaseHandler):
prev_member_event = yield self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
yield user_joined_room(self.distributor, target_user, room_id)
yield self._user_joined_room(target_user, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
if prev_member_event.membership == Membership.JOIN:
user_left_room(self.distributor, target_user, room_id)
yield self._user_left_room(target_user, room_id)
@defer.inlineCallbacks
def _can_guest_join(self, current_state_ids):
@@ -469,7 +534,7 @@ class RoomMemberHandler(BaseHandler):
Raises:
SynapseError if room alias could not be found.
"""
directory_handler = self.hs.get_handlers().directory_handler
directory_handler = self.directory_handler
mapping = yield directory_handler.get_association(room_alias)
if not mapping:
@@ -481,7 +546,7 @@ class RoomMemberHandler(BaseHandler):
defer.returnValue((RoomID.from_string(room_id), servers))
@defer.inlineCallbacks
def get_inviter(self, user_id, room_id):
def _get_inviter(self, user_id, room_id):
invite = yield self.store.get_invite_for_user_in_room(
user_id=user_id,
room_id=room_id,
@@ -500,7 +565,7 @@ class RoomMemberHandler(BaseHandler):
requester,
txn_id
):
if self.hs.config.block_non_admin_invites:
if self.config.block_non_admin_invites:
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
@@ -547,7 +612,7 @@ class RoomMemberHandler(BaseHandler):
str: the matrix ID of the 3pid, or None if it is not recognized.
"""
try:
data = yield self.hs.get_simple_http_client().get_json(
data = yield self.simple_http_client.get_json(
"%s%s/_matrix/identity/api/v1/lookup" % (id_server_scheme, id_server,),
{
"medium": medium,
@@ -558,7 +623,7 @@ class RoomMemberHandler(BaseHandler):
if "mxid" in data:
if "signatures" not in data:
raise AuthError(401, "No signatures on 3pid binding")
self.verify_any_signature(data, id_server)
yield self._verify_any_signature(data, id_server)
defer.returnValue(data["mxid"])
except IOError as e:
@@ -566,11 +631,11 @@ class RoomMemberHandler(BaseHandler):
defer.returnValue(None)
@defer.inlineCallbacks
def verify_any_signature(self, data, server_hostname):
def _verify_any_signature(self, data, server_hostname):
if server_hostname not in data["signatures"]:
raise AuthError(401, "No signature from server %s" % (server_hostname,))
for key_name, signature in data["signatures"][server_hostname].items():
key_data = yield self.hs.get_simple_http_client().get_json(
key_data = yield self.simple_http_client.get_json(
"%s%s/_matrix/identity/api/v1/pubkey/%s" %
(id_server_scheme, server_hostname, key_name,),
)
@@ -595,7 +660,7 @@ class RoomMemberHandler(BaseHandler):
user,
txn_id
):
room_state = yield self.hs.get_state_handler().get_current_state(room_id)
room_state = yield self.state_handler.get_current_state(room_id)
inviter_display_name = ""
inviter_avatar_url = ""
@@ -626,6 +691,7 @@ class RoomMemberHandler(BaseHandler):
token, public_keys, fallback_public_key, display_name = (
yield self._ask_id_server_for_third_party_invite(
requester=requester,
id_server=id_server,
medium=medium,
address=address,
@@ -640,8 +706,7 @@ class RoomMemberHandler(BaseHandler):
)
)
msg_handler = self.hs.get_handlers().message_handler
yield msg_handler.create_and_send_nonmember_event(
yield self.event_creation_hander.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.ThirdPartyInvite,
@@ -663,6 +728,7 @@ class RoomMemberHandler(BaseHandler):
@defer.inlineCallbacks
def _ask_id_server_for_third_party_invite(
self,
requester,
id_server,
medium,
address,
@@ -679,6 +745,7 @@ class RoomMemberHandler(BaseHandler):
Asks an identity server for a third party invite.
Args:
requester (Requester)
id_server (str): hostname + optional port for the identity server.
medium (str): The literal string "email".
address (str): The third party address being invited.
@@ -720,24 +787,20 @@ class RoomMemberHandler(BaseHandler):
"sender_avatar_url": inviter_avatar_url,
}
if self.hs.config.invite_3pid_guest:
registration_handler = self.hs.get_handlers().registration_handler
guest_access_token = yield registration_handler.guest_access_token_for(
if self.config.invite_3pid_guest:
guest_user_id, guest_access_token = yield self.get_or_register_3pid_guest(
requester=requester,
medium=medium,
address=address,
inviter_user_id=inviter_user_id,
)
guest_user_info = yield self.hs.get_auth().get_user_by_access_token(
guest_access_token
)
invite_config.update({
"guest_access_token": guest_access_token,
"guest_user_id": guest_user_info["user"].to_string(),
"guest_user_id": guest_user_id,
})
data = yield self.hs.get_simple_http_client().post_urlencoded_get_json(
data = yield self.simple_http_client.post_urlencoded_get_json(
is_url,
invite_config
)
@@ -759,27 +822,6 @@ class RoomMemberHandler(BaseHandler):
display_name = data["display_name"]
defer.returnValue((token, public_keys, fallback_public_key, display_name))
@defer.inlineCallbacks
def forget(self, user, room_id):
user_id = user.to_string()
member = yield self.state_handler.get_current_state(
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
)
membership = member.membership if member else None
if membership is not None and membership not in [
Membership.LEAVE, Membership.BAN
]:
raise SynapseError(400, "User %s in room %s" % (
user_id, room_id
))
if membership:
yield self.store.forget(user_id, room_id)
@defer.inlineCallbacks
def _is_host_in_room(self, current_state_ids):
# Have we just created the room, and is this about to be the very
@@ -801,3 +843,102 @@ class RoomMemberHandler(BaseHandler):
defer.returnValue(True)
defer.returnValue(False)
class RoomMemberMasterHandler(RoomMemberHandler):
def __init__(self, hs):
super(RoomMemberMasterHandler, self).__init__(hs)
self.distributor = hs.get_distributor()
self.distributor.declare("user_joined_room")
self.distributor.declare("user_left_room")
@defer.inlineCallbacks
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
"""Implements RoomMemberHandler._remote_join
"""
# filter ourselves out of remote_room_hosts: do_invite_join ignores it
# and if it is the only entry we'd like to return a 404 rather than a
# 500.
remote_room_hosts = [
host for host in remote_room_hosts if host != self.hs.hostname
]
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
# We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
yield self.federation_handler.do_invite_join(
remote_room_hosts,
room_id,
user.to_string(),
content,
)
yield self._user_joined_room(user, room_id)
@defer.inlineCallbacks
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
"""Implements RoomMemberHandler._remote_reject_invite
"""
fed_handler = self.federation_handler
try:
ret = yield fed_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
target.to_string(),
)
defer.returnValue(ret)
except Exception as e:
# if we were unable to reject the invite, just mark
# it as rejected on our end and plough ahead.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
target.to_string(), room_id
)
defer.returnValue({})
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Implements RoomMemberHandler.get_or_register_3pid_guest
"""
rg = self.registration_handler
return rg.get_or_register_3pid_guest(medium, address, inviter_user_id)
def _user_joined_room(self, target, room_id):
"""Implements RoomMemberHandler._user_joined_room
"""
return user_joined_room(self.distributor, target, room_id)
def _user_left_room(self, target, room_id):
"""Implements RoomMemberHandler._user_left_room
"""
return user_left_room(self.distributor, target, room_id)
@defer.inlineCallbacks
def forget(self, user, room_id):
user_id = user.to_string()
member = yield self.state_handler.get_current_state(
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
)
membership = member.membership if member else None
if membership is not None and membership not in [
Membership.LEAVE, Membership.BAN
]:
raise SynapseError(400, "User %s in room %s" % (
user_id, room_id
))
if membership:
yield self.store.forget(user_id, room_id)

Some files were not shown because too many files have changed in this diff.