Compare commits

...

2156 Commits

Author SHA1 Message Date
Erik Johnston
790328a93c Require SQLite3 version 3.15 or above
This is primarily to allow tuple comparisons in queries, though a better
query optimiser and other improvements mean that using newer versions of
sqlite is highly recommended anyway.
2018-05-08 13:48:45 +01:00
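Below is a minimal sketch of the tuple (row-value) comparison mentioned in the commit above, which is the feature that needs SQLite 3.15+; the table and column names are illustrative, not Synapse's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (stream_ordering INTEGER, topological_ordering INTEGER)"
)
conn.executemany("INSERT INTO events VALUES (?, ?)", [(1, 1), (2, 1), (3, 2)])

# Row-value comparisons like this are only supported by SQLite 3.15 and later;
# older versions raise an OperationalError on the "(a, b) > (?, ?)" syntax.
rows = conn.execute(
    "SELECT * FROM events"
    " WHERE (topological_ordering, stream_ordering) > (?, ?)",
    (1, 1),
).fetchall()
print(rows)  # events strictly after position (1, 1)
```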
Richard van der Hoff
966686c845 Merge pull request #3007 from matrix-org/rav/warn_on_logcontext_fail
Make 'unexpected logging context' into warnings
2018-05-03 15:10:04 +01:00
Richard van der Hoff
093d8c415a Merge remote-tracking branch 'origin/develop' into rav/warn_on_logcontext_fail 2018-05-03 14:59:29 +01:00
Richard van der Hoff
0ba609dc6f Merge pull request #3183 from matrix-org/rav/moar_logcontext_leaks
Fix logcontext leaks in rate limiter
2018-05-03 14:58:13 +01:00
Richard van der Hoff
2117f84323 Merge pull request #3182 from Half-Shot/hs/fix-twisted-shutdown
Fix 'Unhandled Error' logs with Twisted 18.4
2018-05-03 12:40:11 +01:00
Richard van der Hoff
a7fe62f0cb Fix logcontext leaks in rate limiter 2018-05-03 12:31:59 +01:00
Will Hunt
2e7a94c36b Don't abortConnection() if the transport connection has already closed. 2018-05-03 12:31:47 +01:00
Richard van der Hoff
a2aaa9cb3c Merge pull request #3178 from matrix-org/rav/fix_request_timeouts
fix http request timeout code
2018-05-03 11:33:26 +01:00
Richard van der Hoff
d72faf2fad Fix changes warning 2018-05-03 10:56:42 +01:00
Richard van der Hoff
a0501ac57e Warn of potential client incompatibility from #3161 2018-05-03 10:51:39 +01:00
Erik Johnston
0a3b51c420 Merge pull request #3141 from matrix-org/erikj/fixup_state
Refactor event storage to prepare for changes in state calculations
2018-05-03 10:39:20 +01:00
Erik Johnston
31c7c29d43 Fix up grammar 2018-05-03 10:38:58 +01:00
Richard van der Hoff
902673e356 Merge pull request #3161 from NotAFile/remove-v1auth
Make Client-Server API return 403 for invalid token
2018-05-03 10:10:57 +01:00
Erik Johnston
53a5fdf312 Merge pull request #3175 from matrix-org/erikj/escape_metric_values
Escape label values in prometheus metrics
2018-05-03 10:01:04 +01:00
Richard van der Hoff
1dfd650348 add missing param to cancelled_to_request_timed_out_error
This gets two arguments, not one.
2018-05-02 22:42:36 +01:00
Erik Johnston
a41117c63b Make _escape_character take MatchObject 2018-05-02 17:27:27 +01:00
Erik Johnston
32015e1109 Escape label values in prometheus metrics 2018-05-02 16:52:42 +01:00
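The pair of commits above describe escaping Prometheus label values via a regex substitution whose replacement function receives a MatchObject. A hedged sketch of that approach (the escaping rules follow the Prometheus exposition format; the function names echo the commit messages but the bodies are illustrative):

```python
import re

_ESCAPE_RE = re.compile(r'(["\\\n])')

def _escape_character(m):
    """Escape a single character captured as a MatchObject."""
    c = m.group(1)
    if c == "\\":
        return "\\\\"
    elif c == '"':
        return '\\"'
    elif c == "\n":
        return "\\n"
    return c

def escape_label_value(value):
    """Escape backslash, double-quote and newline in a label value."""
    return _ESCAPE_RE.sub(_escape_character, str(value))

print(escape_label_value('say "hi"\n'))  # prints: say \"hi\"\n
```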
Richard van der Hoff
3a42aed9a1 Merge pull request #3170 from matrix-org/rav/more_logcontext_leaks
Fix a class of logcontext leaks
2018-05-02 16:45:51 +01:00
Richard van der Hoff
5a0be97ab2 Merge pull request #3174 from matrix-org/rav/media_repo_logcontext_leaks
Fix logcontext leak in media repo
2018-05-02 16:43:04 +01:00
Richard van der Hoff
415c6b672e Merge branch 'develop' into rav/more_logcontext_leaks 2018-05-02 16:16:01 +01:00
Richard van der Hoff
4e9bdeba57 Merge pull request #3172 from matrix-org/rav/fix_test_logcontext_leaks
Fix a couple of logcontext leaks in unit tests
2018-05-02 16:15:22 +01:00
Richard van der Hoff
be31adb036 Fix logcontext leak in media repo
Make FileResponder.write_to_consumer uphold the logcontext contract
2018-05-02 16:14:50 +01:00
Richard van der Hoff
11607006d9 Remove spurious unittest.DEBUG 2018-05-02 15:48:47 +01:00
Richard van der Hoff
46beeb9a30 Fix a couple of logcontext leaks in unit tests
... which were making other, innocent tests fail.

Plus remove a spurious unittest.DEBUG which was making the output noisy.
2018-05-02 15:46:22 +01:00
Richard van der Hoff
f22e7cda2c Fix a class of logcontext leaks
So, it turns out that if you have a first `Deferred` `D1`, you can add a
callback which returns another `Deferred` `D2`, and `D2` must then complete
before any further callbacks on `D1` will execute (and later callbacks on `D1`
get the *result* of `D2` rather than `D2` itself).

So, `D1` might have `called=True` (as in, it has started running its
callbacks), but any new callbacks added to `D1` won't get run until `D2`
completes - so if you `yield D1` in an `inlineCallbacks` function, your `yield`
will 'block'.

In conclusion: some of our assumptions in `logcontext` were invalid. We need to
make sure that we don't optimise out the logcontext juggling when this
situation happens. Fortunately, it is easy to detect by checking `D1.paused`.
2018-05-02 11:58:00 +01:00
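A standalone Twisted snippet illustrating the behaviour described above: `D1` has started running its callbacks (`called` is true) but is paused on `D2`, and later callbacks receive `D2`'s result. Checking `paused` is how this situation can be detected.

```python
from twisted.internet import defer

d1 = defer.Deferred()
d2 = defer.Deferred()

d1.addCallback(lambda _: d2)   # a callback that returns another Deferred
d1.callback("fired")

# d1 has started its callback chain, but is now waiting for d2.
print(d1.called, bool(d1.paused))   # True True

results = []
d1.addCallback(results.append)      # will not run yet
print(results)                      # []

d2.callback("d2 result")            # unblocks d1
print(results)                      # ['d2 result'] - the *result* of d2, not d2 itself
```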
Richard van der Hoff
a8d8bf92e0 Merge pull request #3168 from matrix-org/rav/fix_logformatter
Fix incorrect reference to StringIO
2018-05-02 10:03:36 +01:00
Richard van der Hoff
e482f8cd85 Fix incorrect reference to StringIO
This was introduced in 4f2f5171
2018-05-02 09:12:26 +01:00
Matthew Hodgson
9f21de6a01 missing word :| 2018-05-01 19:19:46 +01:00
Matthew Hodgson
8ae7096958 Merge branch 'release-v0.28.1' into develop 2018-05-01 19:05:03 +01:00
Matthew Hodgson
5c2214f4c7 fix markdown 2018-05-01 19:03:35 +01:00
Neil Johnson
2414178ed6 Merge branch 'master' into develop 2018-05-01 18:53:56 +01:00
Neil Johnson
40d1bbd257 fix conflict in changelog from previous release 2018-05-01 18:52:44 +01:00
Matthew Hodgson
8e6bd0e324 changelog for 0.28.1 2018-05-01 18:28:23 +01:00
Neil Johnson
8570bb84cc Update __init__.py
bump version
2018-05-01 18:22:53 +01:00
Richard van der Hoff
ca7211104e Merge branch 'release-v0.28.1' into develop 2018-05-01 18:16:57 +01:00
Richard van der Hoff
d5eee5d601 Merge commit '33f469b' into release-v0.28.1 2018-05-01 18:14:18 +01:00
Richard van der Hoff
d858f3bd4e Miscellaneous fixes to python_dependencies
* add some doc about wtf this thing does
* pin Twisted to < 18.4
* add explicit dep on six (fixes #3089)
2018-05-01 18:13:54 +01:00
Richard van der Hoff
33f469ba19 Apply some limits to depth to counter abuse
* When creating a new event, cap its depth to 2^63 - 1
* When receiving events, reject any without a sensible depth

As per https://docs.google.com/document/d/1I3fi2S-XnpO45qrpCsowZv8P8dHcNZ4fsBsbOW7KABI
2018-05-01 17:54:19 +01:00
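A sketch of the two checks described in the commit above, assuming `MAX_DEPTH` is 2^63 - 1 as stated; the surrounding function names are illustrative rather than Synapse's actual ones.

```python
MAX_DEPTH = 2 ** 63 - 1

def next_depth(prev_event_depths):
    """Depth for a newly created event: one more than its deepest parent, capped."""
    depth = max(prev_event_depths or [0]) + 1
    return min(depth, MAX_DEPTH)

def check_received_depth(depth):
    """Reject received events that don't have a sensible depth."""
    if not isinstance(depth, int) or depth < 0 or depth > MAX_DEPTH:
        raise ValueError("Depth %r is not sensible" % (depth,))
```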
Adrian Tschira
6495dbb326 Burminate v1auth
This closes #2602

v1auth was created to account for the differences in status code between
the v1 and v2_alpha revisions of the protocol (401 vs 403 for invalid
tokens). However since those protocols were merged, this makes the r0
version/endpoint internally inconsistent, and violates the
specification for the r0 endpoint.

This might break clients that rely on this inconsistency with the
specification. This is said to affect the legacy angular reference
client. However, I feel that restoring parity with the spec is more
important. Either way, it is critical to inform developers about this
change, in case they rely on the illegal behaviour.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-30 22:20:43 +02:00
Will Hunt
2ad3fc36e6 Fixes #3135 - Replace _OpenSSLECCurve with crypto.get_elliptic_curve (#3157)
fixes #3135

Signed-off-by: Will Hunt will@half-shot.uk
2018-04-30 16:21:11 +01:00
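For reference, a sketch of the replacement API named in the commit title; `prime256v1` is just an example curve name, and the context setup is illustrative.

```python
from OpenSSL import SSL, crypto

# _OpenSSLECCurve was a private Twisted helper that went away in Twisted 18.4;
# pyOpenSSL exposes the equivalent functionality directly.
curve = crypto.get_elliptic_curve(u"prime256v1")

context = SSL.Context(SSL.TLSv1_2_METHOD)
context.set_tmp_ecdh(curve)  # enable ECDHE with the chosen curve
```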
Richard van der Hoff
cead75fae3 Merge pull request #3160 from krombel/fix_3076
add guard for None on purge_history api
2018-04-30 15:03:59 +01:00
Krombel
576b71dd3d add guard for None on purge_history api 2018-04-30 14:29:48 +02:00
Matthew Hodgson
99a54bf2af Merge pull request #3129 from matrix-org/matthew/fix_group_dups
remove duplicates from groups tables
2018-04-30 11:47:25 +01:00
Richard van der Hoff
63ae5cbf34 Merge pull request #3143 from matrix-org/rav/remove_redundant_preserve_fn
Remove redundant call to preserve_fn
2018-04-30 10:23:59 +01:00
Richard van der Hoff
fdb6849b81 Merge pull request #3144 from matrix-org/rav/run_in_background_exception_handling
Trap exceptions thrown within run_in_background
2018-04-30 10:23:02 +01:00
Richard van der Hoff
66aa32ede2 Merge pull request #3159 from NotAFile/py3-tests-config
run config tests on py3
2018-04-30 10:22:45 +01:00
Adrian Tschira
6e005d1382 run config tests on py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-30 10:39:45 +02:00
Richard van der Hoff
01e8a52825 Merge pull request #3102 from NotAFile/py3-attributeerror
Make event properties raise AttributeError instead
2018-04-30 09:22:09 +01:00
Adrian Tschira
0c9db26260 add comment explaining attributeerror 2018-04-30 09:49:10 +02:00
Richard van der Hoff
950a32eb47 Merge pull request #3152 from NotAFile/py3-local-imports
make imports local
2018-04-30 01:28:13 +01:00
Richard van der Hoff
bc2017a594 Merge pull request #3153 from NotAFile/py3-httplib
move httplib import to six
2018-04-30 01:26:42 +01:00
Richard van der Hoff
683149c1f9 Merge pull request #3151 from NotAFile/py3-xrange-1
Move more xrange to six
2018-04-30 01:20:06 +01:00
Richard van der Hoff
7b908aeec4 Merge branch 'rav/test_36' into develop 2018-04-30 01:12:58 +01:00
Richard van der Hoff
3b0e431c82 Merge pull request #3150 from NotAFile/py3-listcomp-yield
Don't yield in list comprehensions
2018-04-30 01:11:41 +01:00
Richard van der Hoff
db75c86e84 Merge branch 'develop' into py3-xrange-1 2018-04-30 01:02:25 +01:00
Richard van der Hoff
2fd96727b1 Merge pull request #3085 from NotAFile/py3-config-text-mode
Open config file in non-bytes mode
2018-04-30 01:00:23 +01:00
Richard van der Hoff
b8ee12b978 Merge pull request #3084 from NotAFile/py3-certs-byte-mode
Open certificate files as bytes
2018-04-30 01:00:05 +01:00
Richard van der Hoff
049b0b5af2 Merge pull request #3154 from NotAFile/py3-stringio
Replace stringIO imports with six
2018-04-30 00:59:04 +01:00
Richard van der Hoff
d1d54d6088 add py36 to build matrix 2018-04-30 00:58:31 +01:00
Richard van der Hoff
ac5f2f4d86 Merge pull request #3145 from NotAFile/py3-tests
Add py3 tests to tox with folders that work
2018-04-30 00:53:05 +01:00
Richard van der Hoff
af3cc50511 Remove redundant call to preserve_fn
submit_event_for_as doesn't return a deferred anyway, so this is pointless.
2018-04-30 00:48:36 +01:00
Richard van der Hoff
dbf6f28d64 Merge pull request #3155 from NotAFile/py3-bytes-1
more bytes strings
2018-04-30 00:38:21 +01:00
Richard van der Hoff
7767a9fc0e Update tox.ini
add missing comma
2018-04-30 00:37:32 +01:00
Richard van der Hoff
aab2e4da60 Merge pull request #3140 from matrix-org/rav/use_run_in_background
Use run_in_background in preference to preserve_fn
2018-04-30 00:34:28 +01:00
Richard van der Hoff
1315d374cc Merge pull request #3156 from NotAFile/py3-hmac-bytes
Construct HMAC as bytes on py3
2018-04-30 00:33:20 +01:00
Richard van der Hoff
9e2601f830 Merge pull request #3108 from NotAFile/py3-six-urlparse
Use six.moves.urlparse
2018-04-30 00:33:05 +01:00
Adrian Tschira
122593265b Construct HMAC as bytes on py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-29 00:19:41 +02:00
Adrian Tschira
e9143b6593 more bytes strings
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-29 00:13:57 +02:00
Matthew Hodgson
adaf3ec87f fix missing import 2018-04-28 22:39:15 +01:00
Matthew Hodgson
006e18b6bb pep8 2018-04-28 22:32:24 +01:00
Matthew Hodgson
42c89c8215 make it work with sqlite 2018-04-28 22:27:30 +01:00
Adrian Tschira
d82b6ea9e6 Move more xrange to six
plus a bonus next()

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:57:00 +02:00
Adrian Tschira
4f2f5171b7 replace stringIO imports 2018-04-28 13:46:23 +02:00
Adrian Tschira
94f4d7f49e move httplib import to six 2018-04-28 13:43:34 +02:00
Adrian Tschira
57b58e2174 make imports local
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:41:41 +02:00
Adrian Tschira
cdb4647a80 Don't yield in list comprehensions
I've tried to grep for more of this with no success.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:36:30 +02:00
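A sketch of the kind of rewrite this implies, assuming the fix is to replace a `yield`-inside-comprehension with an explicit loop (the function names are illustrative):

```python
from twisted.internet import defer

@defer.inlineCallbacks
def fetch_all(fetch, keys):
    # "[(yield fetch(k)) for k in keys]" behaved as intended on Python 2,
    # but comprehensions get their own scope on Python 3, so build the
    # list with an explicit loop instead.
    results = []
    for k in keys:
        result = yield fetch(k)
        results.append(result)
    defer.returnValue(results)
```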
Adrian Tschira
a376d8f761 open log_config in text mode too
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:34:13 +02:00
Adrian Tschira
4f5694e2ce Add py3 tests to tox with folders that work
It's just a few tests, but it will at least prevent a few files from
regressing. Also, it makes it easier to check your code against py36
while writing it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-27 16:29:41 +02:00
Richard van der Hoff
9558236728 Merge pull request #3127 from matrix-org/rav/deferred_timeout
Use deferred.addTimeout instead of time_bound_deferred
2018-04-27 14:32:54 +01:00
Richard van der Hoff
453adf00b6 pep8; remove spurious import 2018-04-27 14:32:08 +01:00
Richard van der Hoff
fc149b4eeb Merge remote-tracking branch 'origin/develop' into rav/use_run_in_background 2018-04-27 14:31:23 +01:00
Richard van der Hoff
6146332387 Merge remote-tracking branch 'origin/develop' into rav/deferred_timeout 2018-04-27 14:18:00 +01:00
Erik Johnston
d2737c1fae Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-04-27 13:28:51 +01:00
Richard van der Hoff
2a13af23bc Use run_in_background in preference to preserve_fn
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
2018-04-27 12:55:51 +01:00
Richard van der Hoff
3d1ae61399 Merge branch 'develop' into rav/deferred_timeout 2018-04-27 12:54:43 +01:00
Richard van der Hoff
9d2c1b8429 Backport deferred.addTimeout
Twisted 16.0 doesn't have addTimeout, so let's backport it.
2018-04-27 12:52:30 +01:00
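For reference, the `addTimeout` API being backported (present in newer Twisted releases): the clock comes from the reactor, and on timeout the deferred is cancelled and errbacks with `defer.TimeoutError`.

```python
from twisted.internet import defer, reactor

d = defer.Deferred()

# Cancel d, and fire its errback with TimeoutError, if it hasn't
# completed within 10 seconds.
d.addTimeout(10, reactor)

d.addErrback(lambda failure: failure.trap(defer.TimeoutError))
```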
Richard van der Hoff
13843f771e Trap exceptions thrown within run_in_background
Turn any exceptions that get thrown synchronously within run_in_background into
Failures instead.
2018-04-27 12:17:13 +01:00
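A minimal sketch of the behaviour described, not Synapse's actual `run_in_background`: a synchronous exception from the wrapped function becomes a Failure on the returned Deferred instead of propagating to the caller.

```python
from twisted.internet import defer
from twisted.python.failure import Failure

def run_in_background(f, *args, **kwargs):
    try:
        res = f(*args, **kwargs)
    except Exception:
        # Turn synchronous exceptions into Failures on the returned Deferred.
        return defer.fail(Failure())
    if isinstance(res, defer.Deferred):
        return res
    return defer.succeed(res)
```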
Richard van der Hoff
41d4b07a53 Merge pull request #3142 from matrix-org/rav/reraise
reraise exceptions more carefully
2018-04-27 12:16:19 +01:00
Neil Johnson
05ba7e3a44 Update CHANGES.rst 2018-04-27 12:13:12 +01:00
Neil Johnson
53849ea9d3 Update CHANGES.rst 2018-04-27 12:11:39 +01:00
Richard van der Hoff
268e40341b Merge pull request #3136 from matrix-org/rav/fix_dependencies
Miscellaneous fixes to python_dependencies
2018-04-27 11:48:28 +01:00
Richard van der Hoff
9c3da24561 Merge pull request #3138 from matrix-org/rav/catch_unhandled_exceptions
Improve exception handling for background processes
2018-04-27 11:47:49 +01:00
Richard van der Hoff
53494c34df Merge pull request #3139 from matrix-org/rav/consume_errors
Add missing consumeErrors to improve exception handling
2018-04-27 11:47:21 +01:00
Richard van der Hoff
6493b22b42 reraise exceptions more carefully
We need to be careful (under python 2, at least) that when we reraise an
exception after doing some error handling, we actually reraise the original
exception rather than anything that might have been raised (and handled) during
the error handling.
2018-04-27 11:40:06 +01:00
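A sketch of the pitfall and the fix: capture `sys.exc_info()` before doing any error handling that might itself raise (and handle) exceptions, then reraise exactly what was captured.

```python
import sys

import six

def call_with_cleanup(operation, cleanup):
    try:
        return operation()
    except Exception:
        exc_info = sys.exc_info()     # grab the original exception first
        try:
            cleanup()                 # may raise and handle other exceptions
        except Exception:
            pass
        six.reraise(*exc_info)        # reraise the *original* exception
```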
Erik Johnston
6e10eed28e Refactor event storage to not require state
This is in preparation for using contexts that may or may not have the
current_state_ids set. This will allow us to avoid unnecessarily pulling
out state for an event on the master process when using workers.

We also add a check to see if the state groups of the old extremities
are the same as the new ones.
2018-04-27 11:38:02 +01:00
Richard van der Hoff
605defb9e4 Add missing consumeErrors
In general we want defer.gatherResults to consumeErrors, rather than having
exceptions hanging around and getting logged as CRITICAL unhandled errors.
2018-04-27 11:16:28 +01:00
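The pattern in question, sketched on a toy task: with `consumeErrors=True` the individual failure is absorbed into the `FirstError` that `gatherResults` fires, instead of also being reported later as a CRITICAL unhandled error.

```python
from twisted.internet import defer

def work(n):
    if n == 2:
        raise ValueError("boom")
    return n * n

deferreds = [defer.maybeDeferred(work, n) for n in range(4)]

d = defer.gatherResults(deferreds, consumeErrors=True)
d.addErrback(lambda failure: failure.trap(defer.FirstError))
```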
Richard van der Hoff
9255a6cb17 Improve exception handling for background processes
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevant deferred gets
garbage-collected.

This is unsatisfactory for a number of reasons:
 - logging on garbage collection is best-effort and may happen some time after
   the error, if at all
 - it can be hard to figure out where the error actually happened.
 - it is logged as a scary CRITICAL error which (a) I always forget to grep for
   and (b) it's not really CRITICAL if a background process we don't care about
   fails.

So this is an attempt to add exception handling to everything we fire off into
the background.
2018-04-27 11:07:40 +01:00
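A hedged sketch of the general pattern: attach an errback to anything fired off into the background so failures are logged immediately, at a level we choose, rather than whenever the deferred happens to be garbage-collected.

```python
import logging

from twisted.internet import defer

logger = logging.getLogger(__name__)

def fire_and_forget(f, *args, **kwargs):
    """Run f in the background, logging (rather than losing) any failure."""
    d = defer.maybeDeferred(f, *args, **kwargs)

    def log_failure(failure):
        logger.error(
            "Background process %s failed: %s",
            getattr(f, "__name__", f),
            failure.getErrorMessage(),
        )

    d.addErrback(log_failure)
    return d
```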
Neil Johnson
d842ed14f4 Merge tag 'v0.28.0'
Changes in synapse v0.28.0-rc1 (2018-04-26)
===========================================

Bug Fixes:

* Fix quarantine media admin API and search reindex (PR #3130)
* Fix media admin APIs (PR #3134)

Changes in synapse v0.28.0-rc1 (2018-04-24)
===========================================

Minor performance improvement to federation sending and bug fixes.

(Note: This release does not include the state resolution changes discussed in Matrix Live)

Features:

* Add metrics for event processing lag (PR #3090)
* Add metrics for ResponseCache (PR #3092)

Changes:

* Synapse on PyPy (PR #2760) Thanks to @Valodim!
* move handling of auto_join_rooms to RegisterHandler (PR #2996) Thanks to @krombel!
* Improve handling of SRV records for federation connections (PR #3016) Thanks to @silkeh!
* Document the behaviour of ResponseCache (PR #3059)
* Preparation for py3 (PR #3061, #3073, #3074, #3075, #3103, #3104, #3106, #3107, #3109, #3110) Thanks to @NotAFile!
* update prometheus dashboard to use new metric names (PR #3069) Thanks to @krombel!
* use python3-compatible prints (PR #3074) Thanks to @NotAFile!
* Send federation events concurrently (PR #3078)
* Limit concurrent event sends for a room (PR #3079)
* Improve R30 stat definition (PR #3086)
* Send events to ASes concurrently (PR #3088)
* Refactor ResponseCache usage (PR #3093)
* Clarify that SRV may not point to a CNAME (PR #3100) Thanks to @silkeh!
* Use str(e) instead of e.message (PR #3103) Thanks to @NotAFile!
* Use six.itervalues in some places (PR #3106) Thanks to @NotAFile!
* Refactor store.have_events (PR #3117)

Bug Fixes:

* Return 401 for invalid access_token on logout (PR #2938) Thanks to @dklug!
* Return a 404 rather than a 500 on rejoining empty rooms (PR #3080)
* fix federation_domain_whitelist (PR #3099)
* Avoid creating events with huge numbers of prev_events (PR #3113)
* Reject events which have lots of prev_events (PR #3118)
2018-04-27 10:40:27 +01:00
Richard van der Hoff
31c8be956f also upgrade pip when installing 2018-04-27 01:56:58 +01:00
Neil Johnson
28dd536e80 update changelog and bump version to 0.28.0 2018-04-26 15:51:39 +01:00
Neil Johnson
8721580303 Merge branch 'develop' of https://github.com/matrix-org/synapse into release-v0.28.0-rc1 2018-04-26 15:44:54 +01:00
Richard van der Hoff
dbf76fd4b9 jenkins build: make sure we have a recent setuptools 2018-04-26 13:11:03 +01:00
Richard van der Hoff
d78ada3166 Miscellaneous fixes to python_dependencies
* add some doc about wtf this thing does
* pin Twisted to < 18.4
* add explicit dep on six (fixes #3089)
2018-04-26 13:11:03 +01:00
Erik Johnston
0ced8b5b47 Merge pull request #3134 from matrix-org/erikj/fix_admin_media_api
Fix media admin APIs
2018-04-26 12:02:40 +01:00
Erik Johnston
7ec8e798b4 Fix media admin APIs 2018-04-26 11:31:22 +01:00
Erik Johnston
a5ad88913c Merge pull request #3130 from matrix-org/erikj/fix_quarantine_room
Fix quarantine media admin API
2018-04-25 17:54:12 +01:00
Erik Johnston
22881b3d69 Also fix reindexing of search 2018-04-25 15:32:04 +01:00
Erik Johnston
ba3166743c Fix quarantine media admin API 2018-04-25 15:11:18 +01:00
Matthew Hodgson
e3a373f002 remove duplicates from groups tables
and rename inconsistently named indexes.
Based on https://github.com/matrix-org/synapse/pull/3128 - thanks @vurpo!
2018-04-25 14:58:43 +01:00
Neil Johnson
6ab3b9c743 Update CHANGES.rst
Rephrase v0.28.0-rc1 summary
2018-04-24 16:39:20 +01:00
Neil Johnson
1bb83d5d41 Merge branch 'master' into develop 2018-04-24 15:52:43 +01:00
Neil Johnson
13a2beabca Update CHANGES.rst
fix formatting on line break
2018-04-24 15:43:30 +01:00
Neil Johnson
2c3e995f38 Bump version and update changelog 2018-04-24 15:33:22 +01:00
Neil Johnson
8e8b06715f Revert "Bump version and update changelog"
This reverts commit 08b29d4574.
2018-04-24 13:58:45 +01:00
Neil Johnson
08b29d4574 Bump version and update changelog 2018-04-24 13:56:12 +01:00
Richard van der Hoff
77ebef9d43 Merge pull request #3118 from matrix-org/rav/reject_prev_events
Reject events which have lots of prev_events
2018-04-23 17:51:38 +01:00
Richard van der Hoff
9b9c38373c Remove spurious param 2018-04-23 12:00:06 +01:00
Richard van der Hoff
286e20f2bc Merge pull request #3109 from NotAFile/py3-tests-fix
Make tests py3 compatible
2018-04-23 11:59:03 +01:00
Richard van der Hoff
1ea904b9f0 Use deferred.addTimeout instead of time_bound_deferred
This doesn't feel like a wheel we need to reinvent.
2018-04-23 00:53:18 +01:00
Richard van der Hoff
dc875d2712 Merge pull request #3106 from NotAFile/py3-six-itervalues-1
Use six.itervalues in some places
2018-04-20 15:43:52 +01:00
Richard van der Hoff
8dc4a6144b Merge pull request #3107 from NotAFile/py3-bool-nonzero
add __bool__ alias to __nonzero__ methods
2018-04-20 15:43:39 +01:00
Richard van der Hoff
d06a9ea5f7 Merge pull request #3104 from NotAFile/py3-unittest-config
Add some more variables to the unittest config
2018-04-20 15:35:58 +01:00
Richard van der Hoff
c09a6daf09 Merge pull request #3110 from NotAFile/py3-six-queue
Replace Queue with six.moves.queue
2018-04-20 15:35:00 +01:00
Richard van der Hoff
692a3cc806 Merge pull request #3103 from NotAFile/py3-baseexcepton-message
Use str(e) instead of e.message
2018-04-20 15:34:49 +01:00
Erik Johnston
366dd893fc Merge pull request #3100 from silkeh/readme-srv-cname
Clarify that SRV may not point to a CNAME
2018-04-20 15:18:44 +01:00
Erik Johnston
bdb7714d13 Merge pull request #3125 from matrix-org/erikj/add_contrib_docs
Document contrib directory
2018-04-20 13:02:24 +01:00
Erik Johnston
67dabe143d Document contrib directory 2018-04-20 11:47:38 +01:00
Richard van der Hoff
3de7d9fe99 accept stupid events over backfill 2018-04-20 11:41:03 +01:00
Richard van der Hoff
11a67b7c9d Merge pull request #3093 from matrix-org/rav/response_cache_wrap
Refactor ResponseCache usage
2018-04-20 11:31:17 +01:00
Richard van der Hoff
0c280d4d99 Reinstate linearizer for federation_server.on_context_state_request 2018-04-20 11:10:04 +01:00
Richard van der Hoff
bc381d5798 Merge pull request #3117 from matrix-org/rav/refactor_have_events
Refactor store.have_events
2018-04-20 10:26:12 +01:00
Richard van der Hoff
b1dfbc3c40 Refactor store.have_events
It turns out that most of the time we were calling have_events, we were only
using half of the result. Replace have_events with have_seen_events and
get_rejection_reasons, so that we can see what's going on a bit more clearly.
2018-04-20 10:25:56 +01:00
Richard van der Hoff
dacf3a50ac Merge pull request #3113 from matrix-org/rav/fix_huge_prev_events
Avoid creating events with huge numbers of prev_events
2018-04-18 11:27:56 +01:00
Richard van der Hoff
1f4b498b73 Add some comments 2018-04-18 00:15:36 +01:00
Richard van der Hoff
e585228860 Check events on backfill too 2018-04-18 00:06:42 +01:00
Richard van der Hoff
9b7794262f Reject events which have too many auth_events or prev_events
... this should protect us from being DoSed by people making silly events
(deliberately or otherwise)
2018-04-18 00:06:42 +01:00
Richard van der Hoff
639480e14a Avoid creating events with huge numbers of prev_events
In most cases, we limit the number of prev_events for a given event to 10
events. This fixes a particular code path which created events with huge
numbers of prev_events.
2018-04-16 18:41:37 +01:00
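An illustrative sketch of the bounding described above; the selection strategy here (keep the deepest extremities) is an assumption, the important property being only that the result is capped at 10.

```python
MAX_PREV_EVENTS = 10

def choose_prev_events(extremities):
    """Pick at most 10 forward extremities, given (event_id, depth) pairs."""
    ordered = sorted(extremities, key=lambda e: e[1], reverse=True)
    return [event_id for event_id, _depth in ordered[:MAX_PREV_EVENTS]]
```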
Adrian Tschira
878995e660 Replace Queue with six.moves.queue
and a six.moves.range change which I missed the last time

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-16 00:46:21 +02:00
Adrian Tschira
a1a3c9660f Make tests py3 compatible
This is a mixed commit that fixes various small issues

 * print parentheses
 * 01 is invalid syntax (it was octal in py2)
 * [x for i in 1, 2] is invalid syntax
 * six moves

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-16 00:39:32 +02:00
Matthew Hodgson
512633ef44 fix spurious changelog dup 2018-04-15 22:45:06 +01:00
Adrian Tschira
2a3c33ff03 Use six.moves.urlparse
The imports were shuffled around a bunch in py3

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 21:22:43 +02:00
Adrian Tschira
f63ff73c7f add __bool__ alias to __nonzero__ methods
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:40:47 +02:00
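The idiom being applied, sketched on a made-up class: Python 2 consults `__nonzero__` for truthiness while Python 3 consults `__bool__`, so aliasing one to the other keeps both interpreters happy.

```python
class SyncResult(object):
    """Hypothetical container whose truthiness means "has anything to send"."""

    def __init__(self, events):
        self.events = events

    def __nonzero__(self):       # Python 2 truthiness hook
        return bool(self.events)

    __bool__ = __nonzero__       # Python 3 looks for __bool__ instead

print(bool(SyncResult([])), bool(SyncResult(["event"])))  # False True
```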
Adrian Tschira
36c59ce669 Use six.itervalues in some places
There's more where that came from

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:39:43 +02:00
Adrian Tschira
cb9cdfecd0 Add some more variables to the unittest config
These worked accidentally before (python2 doesn't complain if you
compare incompatible types) but under py3 this blows up spectacularly

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:36:39 +02:00
Adrian Tschira
1515560f5c Use str(e) instead of e.message
Doing this I learned e.message was pretty short-lived: it was added in 2.6,
then they realized it was a bad idea and deprecated it in 2.7

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:32:42 +02:00
Adrian Tschira
bfc2ade9b3 Make event properties raise AttributeError instead
They raised KeyError before. I'm changing this because the code uses
hasattr() to check for the presence of a key. This worked accidentally
before, because hasattr() silences all exceptions in python 2. However,
in python3, this isn't the case anymore.

I had a look around to see if anything depended on this raising a
KeyError and I couldn't find anything. Of course, I could have simply
missed it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:16:59 +02:00
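A sketch of the change on an illustrative stand-in for the event wrapper: translate the dict lookup's `KeyError` into `AttributeError`, so `hasattr()` checks behave the same on Python 2 and 3.

```python
class EventProperties(object):
    """Illustrative stand-in for an event object backed by a dict."""

    def __init__(self, event_dict):
        self._event_dict = event_dict

    def __getattr__(self, name):
        try:
            return self._event_dict[name]
        except KeyError:
            # hasattr() only swallows AttributeError on Python 3, so a
            # KeyError here would escape from hasattr() checks.
            raise AttributeError(name)

ev = EventProperties({"type": "m.room.message"})
print(hasattr(ev, "type"), hasattr(ev, "state_key"))  # True False
```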
Silke
c4bdbc2bd2 Clarify that SRV may not point to a CNAME
Signed-off-by: Silke Hofstra <silke@slxh.eu>
2018-04-14 14:55:25 +02:00
Neil Johnson
154b44c249 Merge branch 'master' of https://github.com/matrix-org/synapse into develop 2018-04-13 17:07:54 +01:00
Matthew Hodgson
0d8c50df44 Merge pull request #3099 from matrix-org/matthew/fix-federation-domain-whitelist
fix federation_domain_whitelist
2018-04-13 15:51:13 +01:00
Matthew Hodgson
78a9698650 fix federation_domain_whitelist
we were checking the wrong server_name on inbound requests
2018-04-13 15:47:43 +01:00
Matthew Hodgson
25b0ba30b1 revert last to PR properly 2018-04-13 15:46:37 +01:00
Matthew Hodgson
f8d46cad3c correctly auth inbound federation_domain_whitelist reqs 2018-04-13 15:41:52 +01:00
Neil Johnson
d4b2e05852 Merge branch 'master' of https://github.com/matrix-org/synapse into develop 2018-04-13 12:16:27 +01:00
Neil Johnson
eb53439c4a Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-04-13 12:14:57 +01:00
Neil Johnson
51d628d28d Bump version and Change log 2018-04-13 12:08:19 +01:00
Neil Johnson
df77837a33 Merge pull request #3095 from matrix-org/rav/bump_canonical_json
Update canonicaljson dependency
2018-04-13 12:04:46 +01:00
Richard van der Hoff
d3347ad485 Revert "Use sortedcontainers instead of blist"
This reverts commit 9fbe70a7dc.

It turns out that sortedcontainers.SortedDict is not an exact match for
blist.sorteddict; in particular, `popitem()` removes things from the opposite
end of the dict.

This is trivial to fix, but I want to add some unit tests, and potentially some
more thought about it, before we do so.
2018-04-13 11:16:43 +01:00
Richard van der Hoff
fac3f9e678 Bump canonicaljson to 1.1.3
1.1.2 was a bit broken too :/
2018-04-13 10:21:38 +01:00
Richard van der Hoff
60f6014bb7 ResponseCache: fix handling of completed results
Turns out that ObservableDeferred.observe doesn't return a deferred if the
result is already completed. Fix handling and improve documentation.
2018-04-13 07:32:29 +01:00
Richard van der Hoff
119596ab8f Update canonicaljson dependency
1.1.0 and 1.1.1 were broken, so we're updating this to help people make sure
they don't end up on a broken version.

Also, 1.1.0 is speedier...
2018-04-12 17:31:44 +01:00
Richard van der Hoff
b78395b7fe Refactor ResponseCache usage
Adds a `.wrap` method to ResponseCache which wraps up the boilerplate of a
(get, set) pair, and then uses it throughout the codebase.

This will be largely non-functional, but does include the following functional
changes:

* federation_server.on_context_state_request: drops use of _server_linearizer
  which looked redundant and could cause incorrect cache misses by yielding
  between the get and the set.
* RoomListHandler.get_remote_public_room_list(): fixes logcontext leaks
* the wrap function includes some logging. I'm hoping this won't be too noisy
  on production.
2018-04-12 13:02:15 +01:00
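A rough sketch of what a `wrap`-style helper does, under heavily simplified assumptions (no timeouts, no logcontext handling): look the key up, and on a miss run the callback and remember its result so concurrent requests share one computation.

```python
class SimpleResponseCache(object):
    """Simplified illustration of the (get, set) boilerplate being wrapped."""

    def __init__(self):
        self._cache = {}

    def wrap(self, key, callback, *args, **kwargs):
        result = self._cache.get(key)
        if result is None:
            # Cache miss: do the work and cache the deferred itself, so that
            # concurrent requests for the same key share one computation.
            result = callback(*args, **kwargs)
            self._cache[key] = result
        return result
```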
Richard van der Hoff
d5c74b9f6c Merge pull request #3092 from matrix-org/rav/response_cache_metrics
Add metrics for ResponseCache
2018-04-12 12:59:36 +01:00
Erik Johnston
0f13f30fca Merge pull request #3090 from matrix-org/erikj/processed_event_lag
Add metrics for event processing lag
2018-04-12 12:18:57 +01:00
Erik Johnston
415aeefd89 Format docstring 2018-04-12 12:07:09 +01:00
Erik Johnston
19ceb4851f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/processed_event_lag 2018-04-12 11:36:07 +01:00
Richard van der Hoff
261124396e Merge pull request #3059 from matrix-org/rav/doc_response_cache
Document the behaviour of ResponseCache
2018-04-12 11:22:30 +01:00
Erik Johnston
23a7f9d7f4 Doc we raise on unknown event 2018-04-12 11:20:51 +01:00
Erik Johnston
d7bf3a68f0 s/list/tuple 2018-04-12 11:19:04 +01:00
Erik Johnston
f67e906e18 Set all metrics at the same time 2018-04-12 11:18:19 +01:00
Erik Johnston
971059a733 Merge pull request #3088 from matrix-org/erikj/as_parallel
Send events to ASes concurrently
2018-04-12 10:42:36 +01:00
Erik Johnston
e939f3bca6 Fix tests 2018-04-11 14:37:11 +01:00
Erik Johnston
4dae4a97ed Track last processed event received_ts 2018-04-11 14:27:09 +01:00
Erik Johnston
92e34615c5 Track where event stream processing have gotten up to 2018-04-11 12:13:40 +01:00
Erik Johnston
ab825aa328 Add GaugeMetric 2018-04-11 12:13:40 +01:00
Richard van der Hoff
233699c42e Merge pull request #2760 from Valodim/pypy
Synapse on PyPy
2018-04-11 11:20:01 +01:00
Neil Johnson
427e6c4059 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-04-11 10:59:00 +01:00
Neil Johnson
781cd8c54f bump version/changelog 2018-04-11 10:54:43 +01:00
Neil Johnson
9ef0b179e0 Merge commit '11d2609da70af797405241cdf7d9df19db5628f2' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-11 10:51:59 +01:00
Erik Johnston
121591568b Send events to ASes concurrently 2018-04-11 09:56:00 +01:00
Richard van der Hoff
b3384232a0 Add metrics for ResponseCache 2018-04-10 23:14:47 +01:00
Matthew Hodgson
360d899a64 Merge pull request #3086 from matrix-org/r30_stats
fix typo
2018-04-10 17:46:37 +01:00
Neil Johnson
d54cfbb7a8 fix typo 2018-04-10 17:38:16 +01:00
Erik Johnston
eaa2ebf20b Merge pull request #3079 from matrix-org/erikj/limit_concurrent_sends
Limit concurrent event sends for a room
2018-04-10 16:43:58 +01:00
Erik Johnston
9daf82278f Merge pull request #3078 from matrix-org/erikj/federation_sender
Send federation events concurrently
2018-04-10 16:43:48 +01:00
Adrian Tschira
a3f9ddbede Open certificate files as bytes
That's what pyOpenSSL expects on python3

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-10 17:36:29 +02:00
Adrian Tschira
7f8eebc8ee Open config file in non-bytes mode
Nothing written into it is encoded, so opening it in bytes mode makes
little sense, but the old way does break on python3.

The variable names were adjusted to be less misleading.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-10 17:32:40 +02:00
Neil Johnson
dd723267b2 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into develop 2018-04-10 15:03:29 +01:00
Erik Johnston
a060dfa132 Use run_in_background instead 2018-04-10 14:25:11 +01:00
Erik Johnston
f8e8ec013b Note why we're limiting concurrent event sends 2018-04-10 14:00:46 +01:00
Erik Johnston
1246d23710 Preserve log contexts correctly 2018-04-10 12:04:32 +01:00
Erik Johnston
d49cbf712f Log event ID on exception 2018-04-10 12:03:41 +01:00
Erik Johnston
ce72d590ed Merge pull request #3082 from matrix-org/erikj/urlencode_paths
URL quote path segments over federation
2018-04-10 11:31:16 +01:00
Erik Johnston
11d2609da7 Ensure slashes are escaped 2018-04-10 11:24:40 +01:00
Erik Johnston
dab87b84a3 URL quote path segments over federation 2018-04-10 11:16:08 +01:00
Vincent Breitmoser
6d7f0f8dd3 Don't disable GC when running on PyPy
PyPy's incminimark GC can't be triggered manually. From what I observed
there are no obvious issues with just letting it run normally. And
unlike CPython, it actually returns unused RAM to the system.

Signed-off-by: Vincent Breitmoser <look@my.amazin.horse>
2018-04-10 11:35:34 +02:00
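A sketch of the guard this implies: only do manual garbage-collector tuning when running on CPython, since PyPy's collector can't be driven by hand.

```python
import gc
import platform

running_on_pypy = platform.python_implementation() == "PyPy"

if not running_on_pypy:
    # Illustrative only: disable automatic collection and rely on periodic
    # manual gc.collect() calls elsewhere. On PyPy, leave the GC alone.
    gc.disable()
```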
Vincent Breitmoser
f4284d943a In DomainSpecificString, override __repr__ in addition to __str__
For some reason, string interpolation on a DomainSpecificString object
like "%r" % (domainSpecificStringObj) fails under PyPy, because the
default __repr__ implementation wants to iterate over the object. I'm
not sure why that happens, but overriding __repr__ instead of __str__
fixes this problem, and is arguably the more appropriate thing to do
anyways.
2018-04-10 11:35:29 +02:00
Richard van der Hoff
d1e56cfcd1 Fix pep8 error on psycopg2cffi hack 2018-04-10 11:35:29 +02:00
Vincent Breitmoser
89de934981 Use psycopg2cffi module instead of psycopg2 if running on pypy
The psycopg2 package isn't available for PyPy. This commit adds a check
for whether the runtime is PyPy, and if so uses the psycopg2cffi module in
favor of psycopg2. This is almost a drop-in replacement, except for one
place where an additional cast to string is required.
2018-04-10 11:29:52 +02:00
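A sketch of that swap, assuming psycopg2cffi's compatibility shim (which registers the module under the `psycopg2` name):

```python
import platform

if platform.python_implementation() == "PyPy":
    # psycopg2 isn't available on PyPy; psycopg2cffi is a near drop-in
    # replacement and can register itself under the psycopg2 name.
    from psycopg2cffi import compat
    compat.register()

import psycopg2  # resolves on both CPython and PyPy after the shim above
```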
Vincent Breitmoser
9fbe70a7dc Use sortedcontainers instead of blist
This commit drop-in replaces blist with sortedcontainers. It is written
in pure python so it works with pypy, but performs as well as native
implementations, at least in a couple of benchmarks:

http://www.grantjenks.com/docs/sortedcontainers/performance.html
2018-04-10 11:29:51 +02:00
Richard van der Hoff
a3599dda97 Merge pull request #2996 from krombel/allow_auto_join_rooms
move handling of auto_join_rooms to RegisterHandler
2018-04-10 01:11:00 +01:00
Richard van der Hoff
87478c5a60 Merge pull request #3061 from NotAFile/add-some-byte-strings
Add b prefixes to some strings that are bytes in py3
2018-04-09 23:54:05 +01:00
Richard van der Hoff
c508b2f2f0 Merge pull request #3073 from NotAFile/use-six-reraise
Replace old-style raise with six.reraise
2018-04-09 23:53:40 +01:00
Richard van der Hoff
37354b55c9 Merge pull request #2938 from dklug/develop
Return 401 for invalid access_token on logout
2018-04-09 23:52:56 +01:00
Richard van der Hoff
0e9aa1d091 Merge pull request #3074 from NotAFile/fix-py3-prints
use python3-compatible prints
2018-04-09 23:44:41 +01:00
Richard van der Hoff
8eaa141d8f Merge pull request #3075 from NotAFile/six-type-checks
Replace some type checks with six type checks
2018-04-09 23:40:44 +01:00
Richard van der Hoff
664adb4236 Merge pull request #3016 from silkeh/improve-service-lookups
Improve handling of SRV records for federation connections
2018-04-09 23:40:06 +01:00
Richard van der Hoff
aea3a93611 Merge pull request #3069 from krombel/update_prometheus_config
update prometheus dashboard to use new metric names
2018-04-09 23:37:18 +01:00
Neil Johnson
41e0611895 remove errant print 2018-04-09 18:44:20 +01:00
Neil Johnson
61b439c904 Fix msec to sec, again 2018-04-09 18:43:48 +01:00
Neil Johnson
87770300d5 Fix msec to sec 2018-04-09 18:38:59 +01:00
Neil Johnson
9a311adfea v0.27.3-rc2 2018-04-09 17:52:08 +01:00
Neil Johnson
64bc2162ef Fix psycopg2 interpolation 2018-04-09 17:50:36 +01:00
Richard van der Hoff
d2c6f4d626 Merge pull request #3080 from matrix-org/rav/fix_500_on_rejoin
Return a 404 rather than a 500 on rejoining empty rooms
2018-04-09 17:32:36 +01:00
Neil Johnson
5232d3bfb1 version bump v0.27.3-rc2 2018-04-09 17:25:57 +01:00
Neil Johnson
5e785d4d5b Merge branch 'develop' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 17:21:34 +01:00
Erik Johnston
6e025a97b4 Handle all events in a room correctly 2018-04-09 16:02:48 +01:00
Neil Johnson
414b2b3bd1 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 16:02:08 +01:00
Neil Johnson
b151eb14a2 Update CHANGES.rst 2018-04-09 16:01:59 +01:00
Neil Johnson
64cebbc730 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 16:00:51 +01:00
Neil Johnson
d9ae2bc826 bump version to release candidate 2018-04-09 16:00:31 +01:00
Neil Johnson
21d5a2a08e Update CHANGES.rst 2018-04-09 15:55:41 +01:00
Neil Johnson
c115deed12 Update CHANGES.rst 2018-04-09 15:54:32 +01:00
Neil Johnson
072fb59446 bump version 2018-04-09 13:49:25 +01:00
Neil Johnson
89dda61315 Merge branch 'develop' into release-v0.27.0 2018-04-09 13:48:15 +01:00
Neil Johnson
687f3451bd 0.27.3 2018-04-09 13:44:05 +01:00
Richard van der Hoff
13decdbf96 Revert "Merge pull request #3066 from matrix-org/rav/remove_redundant_metrics"
We aren't ready to release this yet, so I'm reverting it for now.

This reverts commit d1679a4ed7, reversing
changes made to e089100c62.
2018-04-09 12:59:12 +01:00
Richard van der Hoff
f3ef60662f Return a 404 rather than a 500 on rejoining empty rooms
Filter ourselves out of the server list before checking for an empty remote
host list, to fix 500 error

Fixes #2141
2018-04-09 12:56:22 +01:00
Erik Johnston
e5082494eb Limit concurrent event sends for a room 2018-04-09 12:07:39 +01:00
Erik Johnston
56b0589865 Use create_and_send_nonmember_event everywhere 2018-04-09 12:04:18 +01:00
Erik Johnston
11974f3787 Send federation events concurrently 2018-04-09 11:47:10 +01:00
Erik Johnston
145d14656b Handle exceptions in get_hosts_for_room when sending events over federation 2018-04-09 11:47:01 +01:00
Adrian Tschira
e54c202b81 Replace some type checks with six type checks
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-07 01:02:32 +02:00
Adrian Tschira
b0500d3774 use python3-compatible prints 2018-04-06 23:35:27 +02:00
Adrian Tschira
4f40d058cc Replace old-style raise with six.reraise
The old style raise is invalid syntax in python3. As noted in the docs,
this adds one more frame in the traceback, but I think this is
acceptable:

    <ipython-input-7-bcc5cba3de3f> in <module>()
         16     except:
         17         pass
    ---> 18     six.reraise(*x)

    /usr/lib/python3.6/site-packages/six.py in reraise(tp, value, tb)
        691             if value.__traceback__ is not tb:
        692                 raise value.with_traceback(tb)
    --> 693             raise value
        694         finally:
        695             value = None

    <ipython-input-7-bcc5cba3de3f> in <module>()
          9
         10 try:
    ---> 11     x()
         12 except:
         13     x = sys.exc_info()

Also note that this uses six, which is not formally a dependency yet,
but is included indirectly since most packages depend on it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-06 23:06:24 +02:00
Luke Barnard
135fc5b9cd Merge pull request #3046 from matrix-org/dbkr/join_group
Implement group join API
2018-04-06 16:24:32 +01:00
Luke Barnard
020a501354 de-lint, quote consistency 2018-04-06 16:02:06 +01:00
Luke Barnard
db2fd801f7 Explicitly grab individual columns from group object 2018-04-06 15:57:25 +01:00
Erik Johnston
e8b03cab1b Merge pull request #3071 from matrix-org/erikj/resp_size_metrics
Add response size metrics
2018-04-06 15:49:12 +01:00
Richard van der Hoff
8844f95c32 Merge pull request #3072 from matrix-org/rav/fix_port_script
postgres port script: fix state_groups_pkey error
2018-04-06 15:47:40 +01:00
Luke Barnard
7945435587 When exposing group state, return is_openly_joinable
as opposed to join_policy, which is really only pertinent to the
synapse implementation of the group server.

By doing this we keep the group server concept extensible by
allowing arbitrarily complex rules for deciding whether a group
is openly joinable.
2018-04-06 15:43:27 +01:00
Luke Barnard
6bd1b7053e By default, join policy is "invite" 2018-04-06 15:43:27 +01:00
Luke Barnard
b4478e586f add_user -> _add_user 2018-04-06 15:43:27 +01:00
Luke Barnard
112c2253e2 pep8 2018-04-06 15:43:27 +01:00
Luke Barnard
6850f8aea3 Get group_info from existing call to check_group_is_ours 2018-04-06 15:43:27 +01:00
Luke Barnard
cd087a265d Don't use redundant inlineCallbacks 2018-04-06 15:43:27 +01:00
Luke Barnard
87c864b698 join_rule -> join_policy 2018-04-06 15:43:27 +01:00
Luke Barnard
ae85c7804e is_joinable -> join_rule 2018-04-06 15:43:27 +01:00
Luke Barnard
f8d1917fce Fix federation client set_group_joinable typo 2018-04-06 15:43:27 +01:00
Luke Barnard
6eb3aa94b6 Factor out add_user from accept_invite and join_group 2018-04-06 15:43:27 +01:00
David Baker
edb45aae38 pep8 2018-04-06 15:43:27 +01:00
David Baker
b370fe61c0 Implement group join API 2018-04-06 15:43:27 +01:00
Richard van der Hoff
6a9777ba02 Port script: Set up state_group_id_seq
Fixes https://github.com/matrix-org/synapse/issues/3050.
2018-04-06 15:33:30 +01:00
Richard van der Hoff
01579384cc Port script: clean up a bit
Improve logging and comments. Group all the stuff to do with inspecting tables
together rather than creating the port tables in the middle.
2018-04-06 15:33:30 +01:00
Richard van der Hoff
e01ba5bda3 Port script: avoid nasty errors when setting up
We really shouldn't spit out "Failed to create port table", it looks scary.
2018-04-06 15:33:30 +01:00
Erik Johnston
7b824f1475 Add response size metrics 2018-04-06 13:20:11 +01:00
Erik Johnston
35ff941172 Merge pull request #3070 from krombel/group_join_put_instead_post
use PUT instead of POST for federating groups/m.join_policy
2018-04-06 12:11:16 +01:00
Krombel
1d71f484d4 use PUT instead of POST for federating groups/m.join_policy 2018-04-06 12:54:09 +02:00
Richard van der Hoff
15e8ed874f more verbosity in synctl 2018-04-06 09:28:36 +01:00
Krombel
c7ede92d0b make prometheus config compliant to v0.28 2018-04-05 23:34:01 +02:00
Richard van der Hoff
551422051b Merge pull request #2886 from turt2live/travis/new-worker-docs
Add a blurb explaining the main synapse worker
2018-04-05 17:33:09 +01:00
Richard van der Hoff
c7f0969731 Merge pull request #2986 from jplatte/join_reponse_room_id
Add room_id to the response of `rooms/{roomId}/join`
2018-04-05 17:29:06 +01:00
Richard van der Hoff
3449da3bc7 Merge pull request #3068 from matrix-org/rav/fix_cache_invalidation
Improve database cache performance
2018-04-05 17:21:44 +01:00
Richard van der Hoff
d1679a4ed7 Merge pull request #3066 from matrix-org/rav/remove_redundant_metrics
Remove redundant metrics which were deprecated in 0.27.0.
2018-04-05 17:21:18 +01:00
Richard van der Hoff
01afc563c3 Fix overzealous cache invalidation
Fixes an issue where a cache invalidation would invalidate *all* pending
entries, rather than just the entry that we intended to invalidate.
2018-04-05 16:24:04 +01:00
Luke Barnard
e089100c62 Merge pull request #3045 from matrix-org/dbkr/group_joinable
Add joinability for groups
2018-04-05 15:57:49 +01:00
Neil Johnson
68b0ee4e8d Merge pull request #3041 from matrix-org/r30_stats
R30 stats
2018-04-05 15:37:37 +01:00
Richard van der Hoff
22284a6f65 Merge pull request #3060 from matrix-org/rav/kill_event_content
Remove uses of events.content
2018-04-05 15:02:17 +01:00
Luke Barnard
917380e89d NON NULL -> NOT NULL 2018-04-05 14:32:12 +01:00
Luke Barnard
104c0bc1d5 Use "/settings/" (plural) 2018-04-05 14:07:16 +01:00
Luke Barnard
700e5e7198 Use DEFAULT join_policy of "invite" in db 2018-04-05 14:01:17 +01:00
Luke Barnard
b214a04ffc Document set_group_join_policy 2018-04-05 13:29:16 +01:00
Neil Johnson
0e5f479fc0 Review comments
Use iteritems over items to loop over dict
formatting
2018-04-05 12:16:46 +01:00
Richard van der Hoff
518f6de088 Remove redundant metrics which were deprecated in 0.27.0. 2018-04-04 19:46:28 +01:00
Jan Christian Grünhage
7d0f712348 Merge pull request #3063 from matrix-org/jcgruenhage/cache_settings_stats
phone home cache size configurations
2018-04-04 17:24:40 +01:00
Jan Christian Grünhage
e4570c53dd phone home cache size configurations 2018-04-04 16:46:58 +01:00
Travis Ralston
88964b987e Merge remote-tracking branch 'matrix-org/develop' into travis/new-worker-docs 2018-04-04 08:46:56 -06:00
Travis Ralston
204fc98520 Document the additional routes for the event_creator worker
Fixes https://github.com/matrix-org/synapse/issues/3018

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-04-04 08:46:17 -06:00
Travis Ralston
301b339494 Move the mention of the main synapse worker higher up
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-04-04 08:45:51 -06:00
Adrian Tschira
6168351877 Add b prefixes to some strings that are bytes in py3
This has no effect on python2

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-04 13:48:51 +02:00
Richard van der Hoff
9cd3f06ab7 Merge pull request #3062 from matrix-org/revert-3053-speedup-mxid-check
Revert "improve mxid check performance"
2018-04-04 12:09:17 +01:00
Richard van der Hoff
f92963f5db Revert "improve mxid check performance" 2018-04-04 12:08:29 +01:00
Silke
72251d1b97 Remove address resolution of hosts in SRV records
Signed-off-by: Silke Hofstra <silke@slxh.eu>
2018-04-04 12:26:50 +02:00
Richard van der Hoff
725a72ec5a Merge pull request #3000 from NotAFile/change-except-style
Replace old style error catching with 'as' keyword
2018-04-04 10:45:22 +01:00
Richard van der Hoff
a89f9f830c Merge pull request #3044 from matrix-org/michaelk/performance_stats
Add basic performance statistics to phone home
2018-04-04 10:37:25 +01:00
Richard van der Hoff
39ce38b024 Merge pull request #3053 from NotAFile/speedup-mxid-check
improve mxid check performance
2018-04-04 10:19:52 +01:00
Richard van der Hoff
a9a74101a4 Document the behaviour of ResponseCache
it looks like everything that uses ResponseCache expects to have to
`make_deferred_yieldable` its results. It's debatable whether that is the best
approach, but let's document it for now to avoid further confusion.
2018-04-04 09:06:22 +01:00
Luke Barnard
eb8d8d6f57 Use join_policy API instead of joinable
The API is now under
 /groups/$group_id/setting/m.join_policy

and expects a JSON blob of the shape

```json
{
  "m.join_policy": {
    "type": "invite"
  }
}
```

where "invite" could alternatively be "open".
2018-04-03 16:16:40 +01:00
Richard van der Hoff
8da39ad98f Merge pull request #3049 from matrix-org/rav/use_staticjson
Use static JSONEncoders
2018-04-03 15:18:32 +01:00
Richard van der Hoff
3ee4ad09eb Fix json encoding bug in replication
json encoders have an encode method, not a dumps method.
2018-04-03 15:09:48 +01:00
Richard van der Hoff
0ca5c4d2af Merge pull request #3048 from matrix-org/rav/use_simplejson
Use simplejson throughout
2018-04-03 14:39:57 +01:00
Adrian Tschira
11597ddea5 improve mxid check performance ~4x
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-03-31 02:00:22 +02:00
Richard van der Hoff
2fe3f848b9 Remove uses of events.content 2018-03-29 23:17:12 +01:00
Richard van der Hoff
05630758f2 Use static JSONEncoders
using json.dumps with custom options requires us to create a new JSONEncoder on
each call. It's more efficient to create one upfront and reuse it.
2018-03-29 23:13:33 +01:00
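The idea, illustrated with the standard-library encoder (the real code may use simplejson and different options): build the encoder once at module load and reuse it for every call.

```python
import json

# Created once, reused for every response, instead of json.dumps(obj, ...)
# constructing a fresh JSONEncoder on each call.
_canonical_encoder = json.JSONEncoder(sort_keys=True, separators=(",", ":"))

def encode_canonical(obj):
    return _canonical_encoder.encode(obj)

print(encode_canonical({"b": 1, "a": 2}))  # {"a":2,"b":1}
```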
Richard van der Hoff
fcfe7f6ad3 Use simplejson throughout
Let's use simplejson rather than json, for consistency.
2018-03-29 22:45:52 +01:00
Neil Johnson
b4e37c6f50 pep8 2018-03-29 17:27:39 +01:00
Neil Johnson
9ee44a372d Remove need for sqlite specific query 2018-03-29 16:45:34 +01:00
Erik Johnston
88cc9cc69e Merge pull request #3043 from matrix-org/erikj/measure_state_group_creation
Measure time it takes to calculate state group ID
2018-03-28 17:40:14 +01:00
Neil Johnson
dc7c020b33 fix pep8 errors 2018-03-28 17:25:15 +01:00
Neil Johnson
16aeb41547 Update README.rst
update docker hub url
2018-03-28 16:47:56 +01:00
David Baker
c5de6987c2 This should probably be a PUT 2018-03-28 16:44:11 +01:00
Neil Johnson
241e4e8687 remove twisted deferral cruft 2018-03-28 16:25:53 +01:00
David Baker
929b34963d OK, smallint it is then 2018-03-28 14:53:55 +01:00
Richard van der Hoff
9a0db062af Merge pull request #3034 from matrix-org/rav/fix_key_claim_errors
Fix error when claiming e2e keys from offline servers
2018-03-28 14:50:38 +01:00
David Baker
a838444a70 Grr. Copy the definition from is_admin 2018-03-28 14:50:30 +01:00
Neil Johnson
4262aba17b bump schema version 2018-03-28 14:40:03 +01:00
Neil Johnson
86932be2cb Support multi client R30 for psql 2018-03-28 14:36:53 +01:00
David Baker
32260baa41 pep8 2018-03-28 14:29:42 +01:00
Michael Kaye
33f6195d9a Handle review comments 2018-03-28 14:25:25 +01:00
David Baker
a164270833 Make column definition that works on both dbs 2018-03-28 14:23:00 +01:00
David Baker
352e1ff9ed Add schema delta file 2018-03-28 14:07:57 +01:00
David Baker
79452edeee Add joinability for groups
Adds API to set the 'joinable' flag, and corresponding flag in the
table.
2018-03-28 14:03:37 +01:00
Krombel
6152e253d8 Merge branch 'develop' of into allow_auto_join_rooms 2018-03-28 14:45:28 +02:00
Erik Johnston
e9e4cb25fc Merge pull request #3042 from matrix-org/fix_locally_failing_tests
fix tests/storage/test_user_directory.py
2018-03-28 13:10:37 +01:00
Neil Johnson
792d340572 rename stat to future proof 2018-03-28 12:25:02 +01:00
Michael Kaye
4ceaa7433a As daemonizing will make a new process, defer call to init. 2018-03-28 12:19:01 +01:00
Neil Johnson
788e69098c Add user_ips last seen index 2018-03-28 12:03:13 +01:00
Neil Johnson
0f890f477e No need to cast in count_daily_users 2018-03-28 11:49:57 +01:00
Neil Johnson
545001b9e4 Fix search_user_dir: multiple sqlite versions do different things 2018-03-28 11:19:45 +01:00
Erik Johnston
01ccc9e6f2 Measure time it takes to calculate state group ID 2018-03-28 11:03:52 +01:00
Neil Johnson
a9cb1a35c8 fix tests/storage/test_user_directory.py 2018-03-28 10:57:27 +01:00
Neil Johnson
a32d2548d9 query and call for r30 stats 2018-03-28 10:39:13 +01:00
Neil Johnson
9187e0762f count_daily_users failed if db was sqlite due to type failure - presumably this prevented all sqlite homeservers reporting home 2018-03-28 10:02:32 +01:00
Erik Johnston
f879127aaa Merge pull request #3029 from matrix-org/erikj/linearize_generate_user_id
Linearize calls to _generate_user_id
2018-03-28 10:00:31 +01:00
Erik Johnston
e6d87c93f3 Merge pull request #3030 from matrix-org/erikj/no_ujson
Remove last usage of ujson
2018-03-28 10:00:06 +01:00
Erik Johnston
004cc8a328 Merge pull request #3033 from matrix-org/erikj/calculate_state_metrics
Add counter metrics for calculating state delta
2018-03-28 09:59:42 +01:00
Michael Kaye
ef520d8d0e Include coarse CPU and Memory use in stats callbacks.
This requires the psutil module, and is still opt-in based on the report_stats
config option.
2018-03-27 17:56:03 +01:00
Richard van der Hoff
a134c572a6 Stringify exceptions for keys/{query,claim}
Make sure we stringify any exceptions we return from keys/query and keys/claim,
to avoid a 'not JSON serializable' error later

Fixes #3010
2018-03-27 17:15:06 +01:00
Richard van der Hoff
c2a5cf2fe3 factor out exception handling for keys/claim and keys/query
this stuff is badly c&p'ed
2018-03-27 17:11:23 +01:00
Erik Johnston
800cfd5774 Comment 2018-03-27 13:30:39 +01:00
Erik Johnston
152c2ac19e Fix indent 2018-03-27 13:13:46 +01:00
Erik Johnston
e70287cff3 Comment 2018-03-27 13:13:38 +01:00
Erik Johnston
03a26e28d9 Merge pull request #3017 from matrix-org/erikj/add_cache_control_headers
Add Cache-Control headers to all JSON APIs
2018-03-27 13:10:38 +01:00
Erik Johnston
3e0c0660b3 Also do check inside linearizer 2018-03-27 13:01:34 +01:00
Erik Johnston
3f49e131d9 Add counter metrics for calculating state delta
This will allow us to measure how often we calculate state deltas in
event persistence that we would have been able to calculate at the same
time we calculated the state for the event.
2018-03-27 10:57:35 +01:00
Erik Johnston
9b8c0fb162 Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-03-26 21:41:55 +01:00
Erik Johnston
691f8492fb Merge branch 'release-v0.27.0' of github.com:matrix-org/synapse 2018-03-26 16:38:23 +01:00
Erik Johnston
a9d7d98d3f Bump version and changelog 2018-03-26 16:36:53 +01:00
Erik Johnston
bdbb1eec65 Merge branch 'erikj/simplejson_replication' of github.com:matrix-org/synapse into release-v0.27.0 2018-03-26 16:35:55 +01:00
Erik Johnston
01f72e2fc7 Fix date 2018-03-26 16:21:26 +01:00
Erik Johnston
9187862002 Bump version and changelog 2018-03-26 16:20:24 +01:00
Neil Johnson
aa3587fdd1 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-03-26 14:51:11 +01:00
Neil Johnson
51406dab96 version bump 2018-03-26 14:48:19 +01:00
Erik Johnston
fecb45e0c3 Remove last usage of ujson 2018-03-26 13:32:29 +01:00
Erik Johnston
44cd6e1358 PEP8 2018-03-26 12:06:48 +01:00
Erik Johnston
8d6dc106d1 Don't use _cursor_to_dict in find_next_generated_user_id_localpart 2018-03-26 12:02:44 +01:00
Erik Johnston
a052aa42e7 Linearize calls to _generate_user_id 2018-03-26 12:02:20 +01:00
Matthew Hodgson
8efe773ef1 fix typo 2018-03-23 21:38:42 +00:00
Matthew Hodgson
b7e7b52452 Merge pull request #3022 from matrix-org/matthew/noresource
404 correctly on missing paths via NoResource
2018-03-23 11:10:32 +00:00
Matthew Hodgson
8cbbfaefc1 404 correctly on missing paths via NoResource
fixes https://github.com/matrix-org/synapse/issues/2043 and https://github.com/matrix-org/synapse/issues/2029
2018-03-23 10:32:50 +00:00
Erik Johnston
84b5cc69f5 Merge pull request #3006 from matrix-org/erikj/state_iter
Use .iter* to avoid copies in StateHandler
2018-03-22 11:51:13 +00:00
Erik Johnston
fde8e8f09f Fix s/iteriterms/itervalues 2018-03-22 11:42:16 +00:00
Erik Johnston
eb9fc021e3 Merge branch 'release-v0.27.0' of github.com:matrix-org/synapse into develop 2018-03-22 10:19:53 +00:00
Erik Johnston
1c41b05c8c Add Cache-Control headers to all JSON APIs
It is especially important that sync requests don't get cached: if a
sync returns the same token it was given, the client will call sync with
the same parameters again. If the previous response was cached it will
get reused, leaving the client tight-looping, making the same request
and never making any progress.

In general, clients expect to get up-to-date data when requesting
APIs, so it's safer to apply a blanket no-cache policy than to only
whitelist APIs that we know will break things if they get cached.
2018-03-21 17:46:26 +00:00
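A sketch of the blanket policy on a Twisted request object; the exact header value here is illustrative.

```python
def set_no_cache_headers(request):
    # Applied to every JSON response so that neither clients nor proxies
    # ever reuse a stale /sync (or other API) response.
    request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")
```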
Erik Johnston
5bdb57cb66 Merge pull request #3015 from matrix-org/erikj/simplejson_replication
Fix replication after switch to simplejson
2018-03-20 15:06:33 +00:00
Neil Johnson
f5aa027c2f Update CHANGES.rst
rearrange ordering of releases to match chronology
2018-03-20 15:06:22 +00:00
Neil Johnson
e66fbcbb02 fix merge conflicts 2018-03-20 14:25:31 +00:00
Erik Johnston
9aa5a0af51 Explicitly use simplejson 2018-03-20 09:58:13 +00:00
Erik Johnston
610accbb7f Fix replication after switch to simplejson
Turns out that simplejson serialises namedtuples as dictionaries rather
than tuples by default.
2018-03-19 16:12:48 +00:00
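The behaviour being worked around, demonstrated: simplejson encodes namedtuples as JSON objects unless told otherwise, whereas consumers of the replication stream expect plain arrays. Passing `namedtuple_as_object=False` is one way to get tuple-style output.

```python
import collections

import simplejson

Row = collections.namedtuple("Row", ("stream_id", "event_id"))
row = Row(5, "$abc:example.com")

print(simplejson.dumps(row))
# {"stream_id": 5, "event_id": "$abc:example.com"}

print(simplejson.dumps(row, namedtuple_as_object=False))
# [5, "$abc:example.com"]
```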
Neil Johnson
c384705ee8 Update __init__.py
bump version
2018-03-19 15:11:58 +00:00
Neil Johnson
1a3aa957ca Update CHANGES.rst 2018-03-19 15:11:00 +00:00
Erik Johnston
3f961e638a Merge pull request #3005 from matrix-org/erikj/fix_cache_size
Fix bug where state cache used lots of memory
2018-03-19 11:44:26 +00:00
Erik Johnston
fa72803490 Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-03-19 11:41:01 +00:00
Erik Johnston
9a0d783c11 Add comments 2018-03-19 11:35:53 +00:00
Matthew Hodgson
38f952b9bc spell out not to massively increase bcrypt rounds 2018-03-19 09:27:36 +00:00
Erik Johnston
a8ce159be4 Replace some ujson with simplejson to make it work 2018-03-16 00:27:09 +00:00
Erik Johnston
f609acc109 Merge branch 'hotfixes-v0.26.1' of github.com:matrix-org/synapse 2018-03-16 00:13:51 +00:00
Erik Johnston
0092cf38ae Newline 2018-03-16 00:11:58 +00:00
Erik Johnston
5b631ff41a Remove wrong comment 2018-03-16 00:07:08 +00:00
Erik Johnston
ba48755d56 Bump version and changelog 2018-03-15 23:57:26 +00:00
Erik Johnston
926ba76e23 Replace ujson with simplejson 2018-03-15 23:43:31 +00:00
Richard van der Hoff
5a6e54264d Make 'unexpected logging context' into warnings
I think we've now fixed enough of these that the rest can be logged at
warning.
2018-03-15 18:40:38 +00:00
Erik Johnston
9cf519769b Use .iter* to avoid copies in StateHandler 2018-03-15 17:50:26 +00:00
Erik Johnston
7c7706f42b Fix bug where state cache used lots of memory
The state cache bases its size on the sum of the size of entries. The
size of the entry is calculated once on insertion, so it is important
that the size of entries does not change.

The DictionaryCache modified the entries size, which caused the state
cache to incorrectly think it was smaller than it actually was.
2018-03-15 15:46:54 +00:00
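A hypothetical sketch of this class of bug (names and structure are illustrative, not Synapse's code): a cache that measures an entry once at insertion is undermined when the entry is later mutated in place:

    class SizedCache(object):
        def __init__(self):
            self.cache = {}
            self.current_size = 0

        def set(self, key, value):
            self.cache[key] = value
            self.current_size += len(value)   # measured once, never updated

    c = SizedCache()
    entry = {"member_a": "join"}
    c.set("state_group_1", entry)
    entry["member_b"] = "join"   # mutating the cached dict after insertion:
                                 # the real footprint grows, but current_size
                                 # still reflects the old, smaller measurement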
NotAFile
2cc9f76bc3 replace old style error catching with 'as' keyword
This is both easier to read and compatible with python3 (not that that
matters)

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-03-15 16:11:17 +01:00
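The syntax change in question, as a generic example:

    # Old style (Python 2 only):    except ValueError, e:
    # New style (Python 2 and 3):
    try:
        int("not a number")
    except ValueError as e:
        print("parse failed:", e)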
Erik Johnston
ddb00efc1d Bump version number 2018-03-15 14:41:30 +00:00
Erik Johnston
2a376579f3 Merge pull request #3003 from matrix-org/rav/fix_contributing
CONTRIBUTING.rst: fix CI info
2018-03-15 13:31:59 +00:00
Erik Johnston
873aea7168 Merge pull request #3002 from matrix-org/rav/purge_doc
Update purge_history_api.rst
2018-03-15 13:31:54 +00:00
Erik Johnston
bf7ee93cb6 Merge pull request #2997 from turt2live/patch-2
Doc: Make the event_creator routes regex a code block
2018-03-15 13:12:44 +00:00
Richard van der Hoff
5ea624b0f5 CONTRIBUTING.rst: fix CI info 2018-03-15 11:48:35 +00:00
Richard van der Hoff
0ad5125814 Update purge_history_api.rst
clarify that `purge_history` will not purge state
2018-03-15 11:05:42 +00:00
Richard van der Hoff
068c21ab10 CHANGES.rst: reword metric deprecation 2018-03-15 10:36:31 +00:00
Neil Johnson
b29d1abab6 Update CHANGES.rst 2018-03-15 10:34:15 +00:00
Neil Johnson
7367a4a823 Update CHANGES.rst 2018-03-15 10:33:52 +00:00
Neil Johnson
7d26591048 Update CHANGES.rst 2018-03-15 10:33:24 +00:00
Erik Johnston
2059b8573f Update CHANGES.rst 2018-03-14 18:11:21 +00:00
Erik Johnston
10fdcf561d Fix up rst formatting 2018-03-14 17:30:17 +00:00
Neil Johnson
5ccb57d3ff Update CHANGES.rst 2018-03-14 17:12:58 +00:00
Travis Ralston
c33c1ceddd OCD: Make the event_creator routes regex a code block
All the others are code blocks, so this one should be too (currently it is a blockquote).

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-03-14 11:09:08 -06:00
Neil Johnson
fb647164f2 Update CHANGES.rst 2018-03-14 16:20:36 +00:00
Neil Johnson
a492b17fe2 Update CHANGES.rst
clean formatting
2018-03-14 16:18:09 +00:00
Neil Johnson
cb2c7c0669 Update CHANGES.rst
WIP, need to add most recent PRs
2018-03-14 16:09:20 +00:00
Krombel
91ea0202e6 move handling of auto_join_rooms to RegisterHandler
Currently the handling of auto_join_rooms only works when a user
registers themselves via the public register API; registrations via
registration_shared_secret and ModuleApi do not trigger it.

This performs the auto-join in the registration handler, which enables
the auto-join feature for all three registration paths.

This is related to issue #2725

Signed-Off-by: Matthias Kesler <krombel@krombel.de>
2018-03-14 16:45:37 +01:00
Erik Johnston
3959754de3 Merge pull request #2995 from matrix-org/erikj/enable_membership_worker
Register membership/state servlets in event_creator
2018-03-14 14:37:33 +00:00
Erik Johnston
4f28018c83 Register membership/state servlets in event_creator 2018-03-14 14:30:06 +00:00
Erik Johnston
57db62e554 Merge pull request #2992 from matrix-org/erikj/implement_member_workre
Implement RoomMemberWorkerHandler
2018-03-14 14:29:33 +00:00
Erik Johnston
0011ede3b0 Fix imports 2018-03-14 14:19:23 +00:00
Erik Johnston
62ad701326 s/join/joined/ in notify_user_membership_change 2018-03-14 14:17:43 +00:00
Erik Johnston
3f0f06cb31 Split RoomMemberWorkerHandler to separate file 2018-03-14 11:41:45 +00:00
Erik Johnston
3e839e0548 Merge pull request #2989 from matrix-org/erikj/profile_cache_master
Only update remote profile cache on master
2018-03-14 09:42:27 +00:00
Erik Johnston
ebd0127999 Merge pull request #2988 from matrix-org/erikj/split_profile_store
Split up ProfileStore
2018-03-14 09:41:06 +00:00
Erik Johnston
cfe75a9fb6 Merge pull request #2991 from matrix-org/erikj/fixup_rm
_remote_join and co take a requester
2018-03-14 09:40:57 +00:00
Erik Johnston
f51565e023 Merge pull request #2993 from matrix-org/erikj/is_blocked
Add is_blocked to worker store
2018-03-14 09:39:18 +00:00
Matthew Hodgson
d144ed6ffb fix bug #2926 (loading all state for a given type from the DB if the state_key is None) (#2990)
Fixes a regression that had crept in: the caching layer correctly handled requests to load state filtered by type (but not by state_key), while the DB layer interpreted a missing state_key as a request to filter on a null state_key rather than returning all state_keys.
2018-03-13 22:36:04 +00:00
Erik Johnston
a08726fc42 Add is_blocked to worker store 2018-03-13 18:28:44 +00:00
Erik Johnston
b27320b550 Implement RoomMemberWorkerHandler 2018-03-13 18:26:00 +00:00
Erik Johnston
350331d466 _remote_join and co take a requester 2018-03-13 17:50:39 +00:00
Erik Johnston
1a69c6d590 Merge pull request #2987 from matrix-org/erikj/split_room_member_handler
Split RoomMemberHandler into base and master class
2018-03-13 17:40:00 +00:00
Erik Johnston
df8ff682a7 Only update remote profile cache on master 2018-03-13 17:38:21 +00:00
Erik Johnston
3518d0ea8f Split up ProfileStore 2018-03-13 17:36:50 +00:00
Erik Johnston
d45a114824 Raise, don't return, exception 2018-03-13 17:24:34 +00:00
Erik Johnston
6dbebef141 Add missing param to docstrings 2018-03-13 17:15:32 +00:00
Erik Johnston
16adb11cc0 Correct import order 2018-03-13 16:57:07 +00:00
Erik Johnston
82f16faa78 Move user_*_room distributor stuff to master class
I added yields when calling user_left_room, but they shouldn't matter on
the master process as they always return None anyway.
2018-03-13 16:38:15 +00:00
Erik Johnston
b78717b87b Split RoomMemberHandler into base and master class
The intention here is to split the class into the bits that can be done
on workers and the bits that have to be done on the master.

In future there will also be a class that can be run on the worker,
which will delegate work to the master when necessary.
2018-03-13 16:37:41 +00:00
Erik Johnston
95cb401ae0 Merge pull request #2978 from matrix-org/erikj/refactor_replication_layer
Remove ReplicationLayer and use Client/Server directly
2018-03-13 15:45:08 +00:00
Erik Johnston
5d8476d8ff Merge pull request #2981 from matrix-org/erikj/factor_remote_leave
Factor out _remote_reject_invite in RoomMember
2018-03-13 15:44:56 +00:00
Jonas Platte
47ce527f45 Add room_id to the response of rooms/{roomId}/join
Fixes #2349
2018-03-13 14:48:12 +01:00
Erik Johnston
56e709857c Merge pull request #2979 from matrix-org/erikj/no_handlers
Don't build handlers on workers unnecessarily
2018-03-13 13:46:38 +00:00
Erik Johnston
cb9f8e527c s/replication_client/federation_client/ 2018-03-13 13:26:52 +00:00
Erik Johnston
cea462e285 s/replication_server/federation_server 2018-03-13 13:22:21 +00:00
Erik Johnston
bf8e97bd3c Merge branch 'develop' of github.com:matrix-org/synapse into erikj/factor_remote_leave 2018-03-13 13:17:08 +00:00
Erik Johnston
ea3442c15c Add docstring 2018-03-13 13:16:21 +00:00
Erik Johnston
16469a4f15 Merge pull request #2980 from matrix-org/erikj/rm_priv
Make RoomMemberHandler functions private that can be
2018-03-13 13:11:11 +00:00
Erik Johnston
c82111a55f Merge pull request #2982 from matrix-org/erikj/fix_extra_users
extra_users is actually a list of UserIDs
2018-03-13 13:11:04 +00:00
Erik Johnston
da87791975 Merge pull request #2983 from matrix-org/erikj/rename_register_3pid
Refactor get_or_register_3pid_guest
2018-03-13 13:10:52 +00:00
Erik Johnston
99e9b4f26c Merge pull request #2984 from matrix-org/erikj/fix_rest_regeix
RoomMembershipRestServlet doesn't handle /forget
2018-03-13 13:10:44 +00:00
Erik Johnston
f5160d4a3e RoomMembershipRestServlet doesn't handle /forget
Due to the order in which we register the REST handlers, `/forget` was
still handled by the correct handler.
2018-03-13 12:12:55 +00:00
Erik Johnston
8b3573a8b2 Refactor get_or_register_3pid_guest 2018-03-13 12:08:58 +00:00
Richard van der Hoff
299fd740c7 Merge pull request #2975 from matrix-org/rav/measure_persist_events
Add Measure block for persist_events
2018-03-13 12:06:25 +00:00
Erik Johnston
9a2d9b4789 Merge pull request #2977 from matrix-org/erikj/replication_move_props
Move property setting from ReplicationLayer to base classes
2018-03-13 11:45:25 +00:00
Erik Johnston
141c343e03 Merge pull request #2976 from matrix-org/erikj/replication_registry
Split out edu/query registration to a separate class
2018-03-13 11:45:08 +00:00
Erik Johnston
f43b6d6d9b Fix docstring types 2018-03-13 11:29:35 +00:00
Erik Johnston
0f942f68c1 Factor out _remote_reject_invite in RoomMember 2018-03-13 11:22:45 +00:00
Erik Johnston
d0fcc48f9d extra_users is actually a list of UserIDs 2018-03-13 11:20:06 +00:00
Erik Johnston
31becf4ac3 Make functions private that can be 2018-03-13 11:15:16 +00:00
Erik Johnston
d023ecb810 Don't build handlers on workers unnecessarily 2018-03-13 11:08:10 +00:00
Erik Johnston
ea7b3c4b1b Remove unused ReplicationLayer 2018-03-13 11:00:04 +00:00
Erik Johnston
6ea27fafad Fix tests 2018-03-13 10:55:47 +00:00
Erik Johnston
265b993b8a Split replication layer into two 2018-03-13 10:55:47 +00:00
Erik Johnston
e05bf34117 Move property setting from ReplicationLayer to FederationBase 2018-03-13 10:51:30 +00:00
Erik Johnston
631a73f7ef Fix tests 2018-03-13 10:39:19 +00:00
Erik Johnston
c3f79c9da5 Split out edu/query registration to a separate class 2018-03-13 10:24:27 +00:00
Richard van der Hoff
889a2a853a Add Measure block for persist_events
This seems like a useful thing to measure.
2018-03-13 10:01:42 +00:00
Richard van der Hoff
d65ceb4b48 Merge pull request #2962 from matrix-org/rav/purge_history_txns
Add transactional API to history purge
2018-03-12 16:32:18 +00:00
Richard van der Hoff
e48c7aac4d Add transactional API to history purge
Make the purge request return quickly, and allow scripts to poll for updates.
2018-03-12 16:22:55 +00:00
Richard van der Hoff
1708412f56 Return an error when doing two purges on a room
Queuing up purges doesn't sound like a good thing.
2018-03-12 16:22:54 +00:00
Richard van der Hoff
b984dd0b73 Merge pull request #2961 from matrix-org/rav/run_in_background
Factor run_in_background out from preserve_fn
2018-03-12 16:19:13 +00:00
Richard van der Hoff
ba1d08bc4b Merge pull request #2965 from matrix-org/rav/request_logging
Add a metric which increments when a request is received
2018-03-12 09:45:33 +00:00
Richard van der Hoff
58dd148c4f Add some docstrings to help figure this out 2018-03-09 18:05:41 +00:00
Richard van der Hoff
88541f9009 Add a metric which increments when a request is received
It's useful to know when there are peaks in incoming requests - which isn't
quite the same as there being peaks in outgoing responses, due to the time
taken to handle requests.
2018-03-09 16:30:26 +00:00
Richard van der Hoff
dbe80a286b refactor JsonResource
rephrase the OPTIONS and unrecognised request handling so that they look
similar to the common flow.
2018-03-09 16:22:16 +00:00
Richard van der Hoff
20f40348d4 Factor run_in_background out from preserve_fn
It annoys me that we create temporary function objects when there's really no
need for it. Let's factor the gubbins out of preserve_fn and start using it.
2018-03-08 11:50:11 +00:00
Erik Johnston
735fd8719a Merge pull request #2944 from matrix-org/erikj/fix_sync_race
Fix race in sync when joining room
2018-03-07 17:42:26 +00:00
Erik Johnston
a56d54dcb7 Fix up log message 2018-03-07 11:55:31 +00:00
Erik Johnston
02a1296ad6 Fix typo 2018-03-07 11:55:31 +00:00
Erik Johnston
8cb44da4aa Fix race in sync when joining room
The race happens when the user joins a room at the same time as doing a
sync. We fetch the current token and then get the rooms the user is in.
If the join happens after the current token but before we get the rooms,
we end up sending down a partial room entry in the sync.

This is fixed by looking at the stream ordering of the membership
returned by get_rooms_for_user, and handling the case when that stream
ordering is after the current token.
2018-03-07 11:55:31 +00:00
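A hypothetical sketch of the guard described above (names are illustrative; this is not the actual change):

    def joined_room_ids_for_sync(memberships, current_token):
        # 'memberships' stands in for what get_rooms_for_user returns, each
        # carrying a stream_ordering; 'current_token' is the sync stream position.
        room_ids = []
        for m in memberships:
            if m.stream_ordering > current_token:
                # The join raced with this sync; leave it for the next sync
                # rather than sending a partial room entry now.
                continue
            room_ids.append(m.room_id)
        return room_ids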
Richard van der Hoff
8ffaacbee3 Merge pull request #2949 from krombel/use_bcrypt_checkpw
use bcrypt.checkpw
2018-03-06 11:56:06 +00:00
Richard van der Hoff
b2932107bb Merge pull request #2946 from matrix-org/rav/timestamp_to_purge
Implement purge_history by timestamp
2018-03-06 11:20:23 +00:00
Erik Johnston
7aed50a038 Merge pull request #2948 from matrix-org/erikj/kill_as_sync
Remove ability for AS users to call /events and /sync
2018-03-06 11:10:09 +00:00
Erik Johnston
b6c4b851f1 Merge pull request #2947 from matrix-org/erikj/split_directory_store
Split Directory store
2018-03-05 18:17:32 +00:00
Krombel
ed9b5eced4 use bcrypt.checkpw
checkpw was introduced in bcrypt 3.1.0 (already 2 years ago).
This makes use of it, along with any enhancements that may come
with it.

Signed-Off-by: Matthias Kesler <krombel@krombel.de>
2018-03-05 18:02:59 +01:00
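Generic usage of the bcrypt >= 3.1.0 API, for reference (not the Synapse diff itself):

    import bcrypt

    stored_hash = bcrypt.hashpw(b"s3cret", bcrypt.gensalt())

    # Old-style comparison:
    #     bcrypt.hashpw(b"s3cret", stored_hash) == stored_hash
    # With checkpw:
    bcrypt.checkpw(b"s3cret", stored_hash)   # -> True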
Erik Johnston
d4ffe61d4f Remove ability for AS users to call /events and /sync
This functionality has been deprecated for a while, as well as being
broken for a while. Instead of fixing it, let's just remove it entirely.

See: https://github.com/matrix-org/matrix-doc/issues/1144
2018-03-05 15:44:46 +00:00
Erik Johnston
69ce365b79 Fix cache invalidation on deletion 2018-03-05 15:29:03 +00:00
Erik Johnston
2e223163ff Split Directory store 2018-03-05 15:11:30 +00:00
Richard van der Hoff
f8bfcd7e0d Provide a means to pass a timestamp to purge_history 2018-03-05 14:37:23 +00:00
Richard van der Hoff
d032785aa7 Merge pull request #2943 from matrix-org/rav/fix_find_first_stream_ordering_after_ts
Test and fix find_first_stream_ordering_after_ts
2018-03-05 12:26:14 +00:00
Richard van der Hoff
2c911d75e8 Fix comment typo 2018-03-05 12:24:49 +00:00
Richard van der Hoff
c818fcab11 Test and fix find_first_stream_ordering_after_ts
It seemed to suffer from a bunch of off-by-one errors.
2018-03-05 12:04:02 +00:00
Richard van der Hoff
06a14876e5 Add find_first_stream_ordering_after_ts
Expose this as a public function which can be called outside a txn
2018-03-05 11:53:39 +00:00
Erik Johnston
42174946f8 Merge pull request #2934 from matrix-org/erikj/cache_fix
Fix bug with delayed cache invalidation stream
2018-03-05 11:33:17 +00:00
Erik Johnston
f394f5574d Merge pull request #2929 from matrix-org/erikj/split_regististration_store
Split registration store
2018-03-05 11:33:07 +00:00
dklug
af7ed8e1ef Return 401 for invalid access_token on logout
Signed-off-by: Duncan Klug <dklug@ucmerced.edu>
2018-03-02 22:01:27 -08:00
Erik Johnston
efb79820b4 Fix bug with delayed cache invalidation stream
We poked the notifier before updating the current token for the cache
invalidation stream. This meant that sometimes the update wouldn't be
sent until the next time a cache was invalidated.
2018-03-02 14:45:15 +00:00
Erik Johnston
fafa3e7114 Split registration store 2018-03-02 13:48:27 +00:00
Erik Johnston
6619f047ad Merge pull request #2933 from matrix-org/erikj/3pid_yield
Add missing yield during 3pid signature checks
2018-03-02 11:31:50 +00:00
Erik Johnston
d960d23830 Add missing yield during 3pid signature checks 2018-03-02 11:03:18 +00:00
Erik Johnston
1a6c7cdf54 Merge pull request #2928 from matrix-org/erikj/read_marker_caches
Fix typo in getting replication account data processing
2018-03-01 17:56:14 +00:00
Erik Johnston
89b7232ff8 Fix typo in getting replication account data processing 2018-03-01 17:50:30 +00:00
Erik Johnston
1773df0632 Merge pull request #2925 from matrix-org/erikj/split_sig_fed
Split out SignatureStore and EventFederationStore
2018-03-01 17:32:58 +00:00
Erik Johnston
65cf454fd1 Remove unused DataStore 2018-03-01 17:27:53 +00:00
Erik Johnston
9e08a93a7b Merge pull request #2927 from matrix-org/erikj/read_marker_caches
Improve caching for read_marker API
2018-03-01 17:12:34 +00:00
Erik Johnston
4b44f05f19 Fewer lies are better 2018-03-01 17:08:17 +00:00
Erik Johnston
a83c514d1f Improve caching for read_marker API
We add a new storage function to get a particular type of room account
data. This allows us to prefill the cache when updating that account
data.
2018-03-01 17:08:17 +00:00
Erik Johnston
33bebb63f3 Add some caches to help read marker API 2018-03-01 17:08:17 +00:00
Erik Johnston
483e8104db Merge pull request #2926 from matrix-org/erikj/member_handler_move
Move RoomMemberHandler out of Handlers
2018-03-01 17:01:25 +00:00
Erik Johnston
2ad4d5b5bb Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_sig_fed 2018-03-01 16:59:39 +00:00
Erik Johnston
92789199a9 Merge pull request #2924 from matrix-org/erikj/split_stream_store
Split out stream store
2018-03-01 16:56:37 +00:00
Erik Johnston
529c026ac1 Move back to hs.is_mine 2018-03-01 16:49:12 +00:00
Erik Johnston
7c371834cc Stub out broken function only used for cache 2018-03-01 16:44:13 +00:00
Erik Johnston
64346be26d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_stream_store 2018-03-01 16:26:42 +00:00
Erik Johnston
22518e2833 Merge pull request #2923 from matrix-org/erikj/stream_ago_worker
Calculate stream_ordering_month_ago correctly on workers
2018-03-01 16:23:54 +00:00
Erik Johnston
884b26ae41 Remove unused variables 2018-03-01 16:23:48 +00:00
Erik Johnston
1b2af11650 Document abstract class and method better 2018-03-01 16:20:57 +00:00
Erik Johnston
872ff95ed4 Default stream_ordering_*_ago to None 2018-03-01 16:00:05 +00:00
Erik Johnston
22004b524e Fix comment typo 2018-03-01 15:59:40 +00:00
Erik Johnston
4bc4236faf Merge pull request #2922 from matrix-org/erikj/split_room_store
Split up RoomStore
2018-03-01 15:55:01 +00:00
Richard van der Hoff
2324124a72 Merge pull request #2921 from matrix-org/rav/unyielding_make_deferred_yieldable
Rewrite make_deferred_yieldable avoiding inlineCallbacks
2018-03-01 15:39:10 +00:00
Erik Johnston
f793bc3877 Split out stream store 2018-03-01 15:13:08 +00:00
Erik Johnston
784f036306 Move RoomMemberHandler out of Handlers 2018-03-01 14:36:50 +00:00
Erik Johnston
6411f725be Calculate stream_ordering_month_ago correctly on workers 2018-03-01 14:20:53 +00:00
Erik Johnston
a9a2d66cdd Split out SignatureStore and EventFederationStore 2018-03-01 14:17:53 +00:00
Erik Johnston
0c8ba5dd1c Split up RoomStore 2018-03-01 14:01:19 +00:00
Richard van der Hoff
3a75de923b Rewrite make_deferred_yieldable avoiding inlineCallbacks
... because (a) it's actually simpler (b) it might be marginally more
performant?
2018-03-01 12:40:05 +00:00
Erik Johnston
17445e6701 Merge pull request #2920 from matrix-org/erikj/retry_send_event
Make repl send_event idempotent and retry on timeouts
2018-03-01 12:14:21 +00:00
Erik Johnston
126b9bf96f Log in the correct places 2018-03-01 12:05:33 +00:00
Erik Johnston
157298f986 Don't do preserve_fn for every request 2018-03-01 11:59:45 +00:00
Erik Johnston
89f90d808a Add some logging 2018-03-01 11:59:16 +00:00
Erik Johnston
8ded8ba2c7 Make repl send_event idempotent and retry on timeouts
If we treated timeouts as failures on the worker we would attempt to
clean up e.g. push actions while the master might still process the
event.
2018-03-01 11:20:34 +00:00
Erik Johnston
182ff17c83 Merge pull request #2875 from matrix-org/erikj/push_actions_worker
Calculate push actions on worker
2018-03-01 11:20:07 +00:00
Erik Johnston
f381d63813 Check event auth on the worker 2018-03-01 10:18:37 +00:00
Erik Johnston
6b8604239f Correctly send ratelimit and extra_users params 2018-03-01 10:08:39 +00:00
Erik Johnston
f756f961ea Fixup comments 2018-03-01 10:05:27 +00:00
Erik Johnston
28e973ac11 Calculate push actions on worker 2018-02-28 18:02:30 +00:00
Erik Johnston
9cb3a190bc Merge pull request #2913 from matrix-org/erikj/store_push
Move storage functions for push calculations
2018-02-28 18:02:03 +00:00
Erik Johnston
493e25d554 Move storage functions for push calculations
This will allow push actions for an event to be calculated on workers.
2018-02-27 13:58:16 +00:00
Erik Johnston
3594dbc6dc Merge pull request #2904 from matrix-org/erikj/receipt_cache_invalidation
Fix missing invalidations for receipt storage
2018-02-27 11:34:26 +00:00
Erik Johnston
2311189ee4 Merge pull request #2903 from matrix-org/erikj/split_roommember_store
Split out RoomMemberStore
2018-02-27 11:32:10 +00:00
Erik Johnston
c57607874c Merge pull request #2901 from matrix-org/erikj/split_as_stores
Split AS stores
2018-02-27 10:07:07 +00:00
Erik Johnston
8956f0147a Add comment 2018-02-27 10:06:51 +00:00
Erik Johnston
e5b4a208ce Merge pull request #2892 from matrix-org/erikj/batch_inserts_push_actions
Batch inserts into event_push_actions_staging
2018-02-26 14:45:40 +00:00
Erik Johnston
73fe866847 Merge pull request #2894 from matrix-org/erikj/handle_unpersisted_events_push
Ensure all push actions are deleted from staging
2018-02-26 14:28:35 +00:00
Erik Johnston
45b5fe9122 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/handle_unpersisted_events_push 2018-02-26 13:49:24 +00:00
Erik Johnston
d62ce972f8 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_roommember_store 2018-02-23 11:46:24 +00:00
Erik Johnston
6ae9a3d2a6 Update copyright 2018-02-23 11:44:49 +00:00
Erik Johnston
2ec49826e8 Merge pull request #2900 from matrix-org/erikj/split_event_push_actions
Split out EventPushActionWorkerStore
2018-02-23 11:44:30 +00:00
Erik Johnston
a90c60912f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_event_push_actions 2018-02-23 11:26:31 +00:00
Erik Johnston
50e8657867 Merge pull request #2902 from matrix-org/erikj/split_events_store
Split out get_events and co into a worker store
2018-02-23 11:23:52 +00:00
Erik Johnston
1cf9e071dd Merge pull request #2899 from matrix-org/erikj/split_pushers
Split PusherStore
2018-02-23 11:23:35 +00:00
Erik Johnston
d0957753bf Merge pull request #2898 from matrix-org/erikj/split_push_rules_store
Split PushRulesStore
2018-02-23 11:23:23 +00:00
Erik Johnston
199dba6c15 Merge pull request #2897 from matrix-org/erikj/split_account_data
Split AccountDataStore and TagStore
2018-02-23 11:23:11 +00:00
Erik Johnston
70349872c2 Update copyright 2018-02-23 11:14:35 +00:00
Erik Johnston
eba93b05bf Split EventsWorkerStore into separate file 2018-02-23 11:01:21 +00:00
Erik Johnston
bf8a36e080 Update copyright 2018-02-23 10:52:10 +00:00
Erik Johnston
5d0f665848 Remove redundant clock 2018-02-23 10:49:58 +00:00
Erik Johnston
3bd760628b _event_persist_queue shouldn't be in worker store 2018-02-23 10:49:18 +00:00
Erik Johnston
eb9b5eec81 Update copyright 2018-02-23 10:42:39 +00:00
Erik Johnston
c2ecfcc3a4 Update copyright 2018-02-23 10:41:34 +00:00
Erik Johnston
7e6cf89dc2 Update copyright 2018-02-23 10:39:19 +00:00
Erik Johnston
26d37f7a63 Update copyright 2018-02-23 10:33:55 +00:00
Erik Johnston
bb73f55fc6 Use absolute imports 2018-02-23 10:31:16 +00:00
Erik Johnston
faeb369f15 Fix missing invalidations for receipt storage 2018-02-21 15:19:54 +00:00
Erik Johnston
3dec9c66b3 Split out RoomMemberStore 2018-02-21 12:07:26 +00:00
Erik Johnston
46244b2759 Split AS stores 2018-02-21 11:49:34 +00:00
Erik Johnston
27b094f382 Split out get_events and co into a worker store 2018-02-21 11:41:48 +00:00
Erik Johnston
573712da6b Update comments 2018-02-21 11:29:49 +00:00
Erik Johnston
c96d547f4d Actually use new param 2018-02-21 11:03:42 +00:00
Erik Johnston
d15d237b0d Split out EventPushActionWorkerStore 2018-02-21 11:01:13 +00:00
Erik Johnston
27939cbb0e Merge pull request #2893 from matrix-org/erikj/delete_from_staging_fed
Delete from push_actions_staging in federation too
2018-02-21 11:00:06 +00:00
Erik Johnston
6f72765371 Split PusherStore 2018-02-21 10:54:21 +00:00
Erik Johnston
cbaad969f9 Split PushRulesStore 2018-02-21 10:43:31 +00:00
Erik Johnston
ca9b9d9703 Split AccountDataStore and TagStore 2018-02-21 10:15:04 +00:00
Erik Johnston
a2b25de68d Merge pull request #2896 from matrix-org/erikj/split_receipts_store
Split ReceiptsStore
2018-02-20 18:08:20 +00:00
Erik Johnston
8fbb4d0d19 Raise exception in abstract method 2018-02-20 17:59:23 +00:00
Erik Johnston
95e4cffd85 Fix comment 2018-02-20 17:58:40 +00:00
Erik Johnston
e316bbb4c0 Use abstract base class to access stream IDs 2018-02-20 17:43:57 +00:00
Erik Johnston
f5ac4dc2d4 Split ReceiptsStore 2018-02-20 16:28:28 +00:00
Erik Johnston
25634ed152 Fix test 2018-02-20 12:40:44 +00:00
Erik Johnston
24087bffa9 Ensure all push actions are deleted from staging 2018-02-20 12:34:31 +00:00
Erik Johnston
ad0ccf15ea Refactor _set_push_actions_for_event_and_users_txn to use events_and_contexts 2018-02-20 12:34:28 +00:00
Erik Johnston
e440e28456 Fix unit tests 2018-02-20 11:41:40 +00:00
Erik Johnston
d874d4f2d7 Delete from push_actions_staging in federation too 2018-02-20 11:37:52 +00:00
Erik Johnston
6ff8c87484 Batch inserts into event_push_actions_staging 2018-02-20 11:33:07 +00:00
Erik Johnston
324c3e9399 Merge pull request #2868 from matrix-org/erikj/refactor_media_storage
Make store_file use store_into_file
2018-02-20 11:31:24 +00:00
Richard van der Hoff
3fc33bae8b Merge pull request #2888 from bachp/pynacl-1.2.1
Update pynacl dependency to 1.2.1 or higher
2018-02-19 14:37:51 +00:00
Pascal Bach
3acd616979 Update pynacl dependency to 1.2.1 or higher
Signed-off-by: Pascal Bach <pascal.bach@nextrem.ch>
2018-02-19 10:45:22 +01:00
Travis Ralston
923d9300ed Add a blurb explaining the main synapse worker
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-02-17 21:53:46 -07:00
Richard van der Hoff
a71a080cd2 Merge pull request #2882 from matrix-org/rav/fix_purge_tablescan
(Really) fix tablescan of event_push_actions on purge
2018-02-16 15:18:47 +00:00
Richard van der Hoff
d1a3325f99 (Really) fix tablescan of event_push_actions on purge
commit 278d21b5 added new code to avoid the tablescan, but didn't remove the
old :/
2018-02-16 14:02:31 +00:00
Erik Johnston
bf5ef10a93 Merge pull request #2874 from matrix-org/erikj/creator_push_actions
Store push actions in DB staging area instead of context
2018-02-16 11:52:51 +00:00
Erik Johnston
6af025d3c4 Fix typo of double is_highlight 2018-02-16 11:35:31 +00:00
Erik Johnston
012e8e142a Comments 2018-02-16 11:35:01 +00:00
Erik Johnston
3a061cae26 Fix unit test 2018-02-15 16:31:59 +00:00
Erik Johnston
b96278d6fe Ensure that we delete staging push actions on errors 2018-02-15 15:47:06 +00:00
Erik Johnston
4810f7effd Remove context.push_actions 2018-02-15 15:47:06 +00:00
Erik Johnston
c714c61853 Update event_push_actions table from staging table 2018-02-15 15:47:06 +00:00
Erik Johnston
acac21248c Store push actions in staging area 2018-02-15 15:47:04 +00:00
Erik Johnston
6ed9ff69c2 Merge pull request #2873 from matrix-org/erikj/event_creator_no_state
Don't serialize current state over replication
2018-02-15 14:09:42 +00:00
Erik Johnston
106906a65e Don't serialize current state over replication 2018-02-15 13:53:18 +00:00
Erik Johnston
5fb347fc41 Merge pull request #2872 from matrix-org/erikj/event_worker_dont_log
Don't log errors propagated from send_event
2018-02-15 12:31:49 +00:00
Erik Johnston
cd94728e93 Merge pull request #2871 from matrix-org/erikj/event_creator_state
Fix state group storage bug in workers
2018-02-15 11:34:01 +00:00
Erik Johnston
fd1601c596 Fix state group storage bug in workers
We needed to move `_count_state_group_hops_txn` to the
StateGroupWorkerStore.
2018-02-15 11:04:32 +00:00
Erik Johnston
ef344b10e5 Don't log errors propagated from send_event 2018-02-15 11:03:49 +00:00
Richard van der Hoff
b8d821aa68 Merge pull request #2867 from matrix-org/rav/rework_purge
purge_history cleanups
2018-02-15 09:49:07 +00:00
Erik Johnston
92c52df702 Make store_file use store_into_file 2018-02-14 17:55:18 +00:00
Richard van der Hoff
d28ec43e15 Merge pull request #2769 from matrix-org/matthew/hit_the_gin
switch back from GIST to GIN indexes
2018-02-14 16:59:03 +00:00
Richard van der Hoff
39bf47319f purge_history: fix sqlite syntax error
apparently sqlite insists on indexes being named
2018-02-14 16:42:19 +00:00
Richard van der Hoff
ac27f6a35e purge_history: handle sqlite asshattery
apparently creating a temporary table commits the transaction. because that's a
useful thing.
2018-02-14 16:41:12 +00:00
Richard van der Hoff
5978dccff0 remove overzealous exception handling 2018-02-14 15:54:09 +00:00
Richard van der Hoff
278d21b5e4 purge_history: fix index use
event_push_actions doesn't have an index on event_id, so we need to specify
room_id.
2018-02-14 15:44:51 +00:00
Richard van der Hoff
5fcbf1e07c Rework event purge to use a temporary table
... which should speed things up by reducing the amount of data being shuffled
across the connection
2018-02-14 11:02:22 +00:00
Erik Johnston
c0c9327fe0 Merge pull request #2854 from matrix-org/erikj/event_create_worker
Create a worker for event creation
2018-02-13 18:07:10 +00:00
Erik Johnston
059d3a6c8e Update docs 2018-02-13 17:53:56 +00:00
Richard van der Hoff
d627174da2 Fix log message in purge_history
(we don't just remove remote events)
2018-02-13 16:51:21 +00:00
Richard van der Hoff
ddb6a79b68 Merge branch 'matthew/gin_work_mem' into matthew/hit_the_gin 2018-02-13 16:45:36 +00:00
Richard van der Hoff
0b27ae8dc3 move search reindex to schema 47
We're up to schema v47 on develop now, so this will have to go in there to have
an effect.

This might cause an error if somebody has already run it in the v46 guise, and
runs it again in the v47 guise, because it will cause a duplicate entry in the
background_updates table. On the other hand, the entry is removed once it is
complete, and it is unlikely that anyone other than matrix.org has run it on
v46. The update itself is harmless to re-run because it deliberately copes with
the index already existing.
2018-02-13 16:44:46 +00:00
Richard van der Hoff
4a6d551704 GIN reindex: Fix syntax errors, improve exception handling 2018-02-13 16:44:46 +00:00
Richard van der Hoff
bfdf7b9237 Merge pull request #2864 from matrix-org/rav/persist_event_caching
Use StateResolutionHandler to resolve state in persist_events
2018-02-13 14:45:57 +00:00
Richard van der Hoff
630caf8a70 style nit 2018-02-13 14:29:22 +00:00
Richard van der Hoff
8fd1a32456 Fix typos in purge api & doc
* It's supposed to be purge_local_events, not ..._history
* Fix the doc to have valid json
2018-02-13 13:09:39 +00:00
Richard van der Hoff
4d09366656 Merge pull request #2695 from okurz/feature/allow_recent_pysaml
Allow use of higher versions of saml2
2018-02-13 12:24:08 +00:00
Richard van der Hoff
a9b712e9dc Merge branch 'develop' into matthew/gin_work_mem 2018-02-13 12:16:01 +00:00
Erik Johnston
32c7b8e48b Update workers docs to include http port 2018-02-12 17:21:23 +00:00
Erik Johnston
1026690cd2 Merge pull request #2857 from matrix-org/erikj/upload_store
Tell storage providers about new file so they can upload
2018-02-12 13:52:58 +00:00
Richard van der Hoff
10b34dbb9a Merge pull request #2858 from matrix-org/rav/purge_updates
delete_local_events for purge_room_history
2018-02-09 14:11:00 +00:00
Richard van der Hoff
39a6b35496 purge: move room_depth update to end
... to avoid locking the table for too long
2018-02-09 13:07:41 +00:00
Richard van der Hoff
74fcbf741b delete_local_events for purge_history
Add a flag which makes the purger delete local events
2018-02-09 13:07:41 +00:00
Richard van der Hoff
e571aef06d purge: Move cache invalidation to more appropriate place
it was a bit of a non-sequitur there
2018-02-09 13:07:41 +00:00
Richard van der Hoff
61ffaa8137 bump purge logging to info
this thing takes ages and the only sign of any progress is the logs, so having
some logs is useful.
2018-02-09 13:07:41 +00:00
Richard van der Hoff
671540dccf rename delete_old_state -> purge_history
(because it deletes more than state)
2018-02-09 13:07:41 +00:00
Erik Johnston
5fa571a91b Tell storage providers about new file so they can upload 2018-02-07 13:35:08 +00:00
Erik Johnston
053255f36c Merge pull request #2856 from matrix-org/erikj/remove_ratelimit
Remove pointless ratelimit check
2018-02-07 11:12:14 +00:00
Erik Johnston
f133228cb3 Add note in docs/workers.rst 2018-02-07 10:34:31 +00:00
Erik Johnston
50fe92cd26 Move presence handling into handle_new_client_event
As we want to have it run on the main synapse instance
2018-02-07 10:34:09 +00:00
Erik Johnston
8ec2e638be Add event_creator worker 2018-02-07 10:32:32 +00:00
Erik Johnston
24dd73028a Add replication http endpoint for event sending 2018-02-07 10:32:32 +00:00
Erik Johnston
e3624fad5f Remove pointless ratelimit check
The intention was for the check to be called as early as possible in the
request, but it was actually called just before the main ratelimit check,
so it was fairly pointless.
2018-02-07 10:30:25 +00:00
Erik Johnston
617199d73d Merge pull request #2847 from matrix-org/erikj/separate_event_creation
Split event creation into a separate handler
2018-02-06 17:01:17 +00:00
Erik Johnston
3e1e69ccaf Update copyright 2018-02-06 16:40:38 +00:00
Erik Johnston
770b2252ca s/_create_new_client_event/create_new_client_event/ 2018-02-06 16:40:30 +00:00
Erik Johnston
3d33eef6fc Store state groups separately from events (#2784)
* Split state group persist into separate storage func

* Add per database engine code for state group id gen

* Move store_state_group to StateReadStore

This allows other workers to use it, and so resolve state.

* Hook up store_state_group

* Fix tests

* Rename _store_mult_state_groups_txn

* Rename StateGroupReadStore

* Remove redundant _have_persisted_state_group_txn

* Update comments

* Comment compute_event_context

* Set start val for state_group_id_seq

... otherwise we try to recreate old state groups

* Update comments

* Don't store state for outliers

* Update comment

* Update docstring as state groups are ints
2018-02-06 14:31:24 +00:00
Richard van der Hoff
b31bf0bb51 Merge pull request #2849 from matrix-org/rav/clean_up_state_delta
Remove redundant return value from _calculate_state_delta
2018-02-05 17:42:20 +01:00
Richard van der Hoff
9a304ef2b0 Merge pull request #2848 from matrix-org/rav/refactor_search_insert
Factor out common code for search insert
2018-02-05 17:42:09 +01:00
Richard van der Hoff
ebfe64e3d6 Use StateResolutionHandler to resolve state in persist events
... and thus benefit (hopefully) from its cache.
2018-02-05 16:23:26 +00:00
Richard van der Hoff
225dc3b4cb Flatten _get_new_state_after_events
rejig the if statements to simplify the logic and reduce indentation
2018-02-05 16:23:25 +00:00
Richard van der Hoff
9fcbbe8e7d Check that events being persisted have state_group 2018-02-05 16:23:25 +00:00
Richard van der Hoff
447aed42d2 Add event_map param to resolve_state_groups 2018-02-05 16:23:25 +00:00
Richard van der Hoff
ee6fb4cf85 Remove redundant return value from _calculate_state_delta
we already have the state from _get_new_state_after_events, so returning it
from _calculate_state_delta is just confusing.
2018-02-05 16:23:20 +00:00
Richard van der Hoff
3c7b480ba3 Factor out common code for search insert
we can reuse the same code as is used for event insert, for doing the
background index population.
2018-02-05 16:12:14 +00:00
Erik Johnston
25c0a020f4 Updates tests 2018-02-05 16:01:48 +00:00
Erik Johnston
3fa362502c Update places where we create events 2018-02-05 16:01:48 +00:00
Erik Johnston
5ff3d23564 Split event creation into a separate handler 2018-02-05 16:01:48 +00:00
Richard van der Hoff
c46e75d3d8 Move store_event_search_txn to SearchStore
... as a precursor to making event storing and doing the bg update share some
code.
2018-02-05 15:43:22 +00:00
Richard van der Hoff
db91e72ade Merge pull request #2844 from matrix-org/rav/evicted_metrics
monitoring metrics for number of cache evictions
2018-02-05 16:34:36 +01:00
Richard van der Hoff
bc496df192 report metrics on number of cache evictions 2018-02-05 15:34:01 +00:00
Erik Johnston
a1beca0e25 Fix broken unit test for media storage 2018-02-05 12:44:03 +00:00
Erik Johnston
b5049d2e5c Add .vscode to gitignore 2018-02-05 12:06:46 +00:00
Erik Johnston
1f881e0746 Merge pull request #2791 from matrix-org/erikj/media_storage_refactor
Ensure media is in local cache before thumbnailing
2018-02-05 11:28:52 +00:00
Richard van der Hoff
80b8a28100 Factor out common code for search insert
we can reuse the same code as is used for event insert, for doing the
background index population.
2018-02-04 00:23:06 +00:00
Richard van der Hoff
bd25f9cf36 Clean up work_mem handling
Add some comments and improve exception handling when twiddling work_mem for
the search update
2018-02-03 23:05:41 +00:00
Richard van der Hoff
4eeae7ad65 Move store_event_search_txn to SearchStore
... as a precursor to making event storing and doing the bg update share some
code.
2018-02-03 22:59:45 +00:00
Richard van der Hoff
bb9f0f3cdb Merge branch 'develop' into matthew/gin_work_mem 2018-02-03 22:40:28 +00:00
Richard van der Hoff
6b02fc80d1 Reinstate event_search_postgres_gist handler
People may have queued updates for this, so we can't just delete it.
2018-02-02 14:32:51 +00:00
Richard van der Hoff
9c9356512e Merge pull request #2845 from matrix-org/rav/urlcache_error_handling
Handle url_previews with no content-type
2018-02-02 15:27:52 +01:00
Richard van der Hoff
18eae413af Merge pull request #2842 from matrix-org/rav/state_resolution_handler
Factor out resolve_state_groups to a separate handler
2018-02-02 15:27:35 +01:00
Richard van der Hoff
78d6ddba86 Merge pull request #2841 from matrix-org/rav/refactor_calc_state_delta
factor _get_new_state_after_events out of _calculate_state_delta
2018-02-02 15:27:15 +01:00
Richard van der Hoff
9dcd667ac2 Merge pull request #2836 from matrix-org/rav/resolve_state_events_docstring
Docstring fixes
2018-02-02 15:26:49 +01:00
Richard van der Hoff
33cac3dc29 Merge pull request #2818 from turt2live/travis/admin-list-media
Add an admin route to get all the media in a room
2018-02-02 11:41:03 +01:00
Travis Ralston
6e87b34f7b Merge branch 'develop' into travis/admin-list-media 2018-02-01 18:05:47 -07:00
Richard van der Hoff
d5352cbba8 Handle url_previews with no content-type
avoid failing with an exception if the remote server doesn't give us a
Content-Type header.

Also, clean up the exception handling a bit.
2018-02-02 00:53:46 +00:00
Richard van der Hoff
14737ba495 doc arg types for _seperate 2018-02-01 12:41:34 +00:00
Richard van der Hoff
e15d4ea248 More docstring fixes
Fix a couple of errors in docstrings
2018-02-01 12:41:34 +00:00
Richard van der Hoff
a18828c129 Fix docstring for StateHandler.resolve_state_groups
The return type was a complete lie, so fix it
2018-02-01 12:41:34 +00:00
Richard van der Hoff
6da4c4d3bd Factor out resolve_state_groups to a separate handler
We extract the storage-independent bits of the state group resolution out to a
separate function, and stick it in a new handler, in preparation for its use
from the storage layer.
2018-02-01 12:40:04 +00:00
Richard van der Hoff
0cbda53819 Rename resolve_state_groups -> resolve_state_groups_for_events
(to make way for a method that actually just does the state group resolution)
2018-02-01 12:40:00 +00:00
Richard van der Hoff
77c0629ebc Merge pull request #2837 from matrix-org/rav/fix_quarantine_media
Fix sql error in quarantine_media
2018-02-01 11:45:58 +01:00
Travis Ralston
e16e45b1b4 pep8
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-31 15:30:38 -07:00
Richard van der Hoff
e1e4ec9f9d factor _get_new_state_after_events out of _calculate_state_delta
This reduces the scope of a bunch of variables
2018-01-31 21:32:09 +00:00
Richard van der Hoff
78e7e05188 Merge pull request #2838 from matrix-org/rav/fix_logging_on_dns_fail
Remove spurious log argument
2018-01-31 22:18:46 +01:00
Richard van der Hoff
ad48dfe73d Merge pull request #2835 from matrix-org/rav/remove_event_type_param
Remove unused "event_type" param on state.get_current_state_ids
2018-01-31 22:17:57 +01:00
Richard van der Hoff
518a74586c Merge pull request #2834 from matrix-org/rav/better_persist_event_exception_handling
Improve exception handling in persist_event
2018-01-31 22:17:38 +01:00
Richard van der Hoff
d1fe4db882 Merge pull request #2833 from matrix-org/rav/pusher_hacks
Logging and metrics for the http pusher
2018-01-31 22:16:59 +01:00
Richard van der Hoff
421d68ca8c Merge pull request #2817 from matrix-org/rav/http_conn_pool
Use a connection pool for the SimpleHttpClient
2018-01-31 22:14:22 +01:00
Richard van der Hoff
326189c25a Script to move remote media to another media store 2018-01-31 18:45:10 +00:00
Travis Ralston
3af53c183a Add admin api documentation for list media endpoint
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-31 08:15:59 -07:00
Travis Ralston
63c4383927 Documentation and naming
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-31 08:07:52 -07:00
Richard van der Hoff
af19f5e9aa Remove spurious log argument
... which would cause scary-looking and unhelpful errors in the log on dns fail
2018-01-30 17:52:03 +00:00
Richard van der Hoff
773f0eed1e Fix sql error in quarantine_media 2018-01-30 15:02:51 +00:00
Richard van der Hoff
adfc0c9539 docstring for get_current_state_ids 2018-01-29 17:39:55 +00:00
Richard van der Hoff
d413a2ba98 Remove unused "event_type" param on state.get_current_state_ids
this param doesn't seem to be used, and is a bit pointless anyway because it
can easily be replicated by the caller. It is also horrible, because it changes
the return type of the method.
2018-01-29 17:06:57 +00:00
Richard van der Hoff
b387ee17b6 Improve exception handling in persist_event
1. use `deferred.errback()` instead of `deferred.errback(e)`, which means that
a Failure object will be constructed using the current exception state,
*including* its stack trace - so the stack trace is saved in the Failure,
leading to better exception reports.

2. Set `consumeErrors=True` on the ObservableDeferred, because we know that
there will always be at least one observer - which avoids a spurious "CRITICAL:
unhandled exception in Deferred" error in the logs
2018-01-29 17:05:33 +00:00
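A minimal standalone example of point 1 (plain Twisted, not the Synapse patch): inside an except block, errback() with no argument builds the Failure from the current exception state, so the traceback is preserved:

    from twisted.internet import defer

    d = defer.Deferred()
    d.addErrback(lambda failure: print(failure.getTraceback()))

    try:
        raise ValueError("boom")
    except ValueError:
        d.errback()          # Failure captures the active exception + traceback
        # rather than d.errback(exc), which would lose the traceback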
Richard van der Hoff
03dd745fe2 Better logging when pushes fail 2018-01-29 15:49:06 +00:00
Richard van der Hoff
e051abd20b add appid/device_display_name to pusher logging 2018-01-29 15:04:16 +00:00
Richard van der Hoff
02ba118f81 Increase http conn pool size 2018-01-29 14:30:15 +00:00
Richard van der Hoff
4c65b98e4a Merge pull request #2831 from matrix-org/rav/fix-userdir-search-again
Fix SQL for user search
2018-01-27 22:58:24 +01:00
Richard van der Hoff
d1f3490e75 Add tests for user directory search 2018-01-27 17:21:57 +00:00
Richard van der Hoff
46022025ea Fix SQL for user search
fix some syntax errors for user search when search_all_users is enabled

fixes #2801, hopefully
2018-01-27 17:21:57 +00:00
Richard van der Hoff
2186d7c06e Merge pull request #2829 from matrix-org/rav/postgres_in_tests
Make it possible to run tests against postgres
2018-01-27 18:20:55 +01:00
Richard van der Hoff
88b9c5cbf0 Make it possible to run tests against postgres 2018-01-27 17:15:24 +00:00
Richard van der Hoff
d7eacc4f87 Create dbpool as normal in tests
... instead of creating our own special SQLiteMemoryDbPool, whose purpose was a
bit of a mystery.

For some reason this makes one of the tests run slightly slower, so bump the
sleep(). Sorry.
2018-01-27 17:15:15 +00:00
Richard van der Hoff
b178eca261 Run on_new_connection for unit tests
Configure the connection pool used for unit tests to run the `on_new_connection`
function.
2018-01-27 17:06:04 +00:00
Richard van der Hoff
d8f90c4208 Merge pull request #2828 from matrix-org/rav/kill_memory_datastore
Remove unused/bitrotted MemoryDataStore
2018-01-27 17:57:02 +01:00
Richard van der Hoff
4b0f06e99c Merge pull request #2830 from matrix-org/rav/factor_out_get_conn
Factor out get_db_conn to HomeServer base class
2018-01-27 17:56:25 +01:00
Neil Johnson
e98f0f9112 Merge pull request #2827 from matrix-org/fix_server_500_on_public_rooms_call_when_no_rooms_exist
Fix server 500 on public rooms call when no rooms exist
2018-01-26 20:22:40 +00:00
Richard van der Hoff
25adde9a04 Factor out get_db_conn to HomeServer base class
This function is identical in all subclasses, so we may as well push it up to
the base class to reduce duplication (and make use of it in the tests)
2018-01-26 00:56:49 +00:00
Richard van der Hoff
6e9bf67f18 Remove unused/bitrotted MemoryDataStore
This isn't used, and looks thoroughly bitrotted.
2018-01-26 00:35:15 +00:00
Richard van der Hoff
2b91846497 Remove spurious unittest.DEBUG 2018-01-26 00:34:27 +00:00
Neil Johnson
73560237d6 add white space line 2018-01-26 00:15:10 +00:00
Neil Johnson
86c4f49a31 rather than try to reconstruct the results object, better to guard against the xrange step argument being 0 2018-01-26 00:12:02 +00:00
Neil Johnson
f632083576 fix return type, should be a dict 2018-01-25 23:52:17 +00:00
Neil Johnson
6c6e197b0a fix PEP8 violation 2018-01-25 23:47:46 +00:00
Neil Johnson
d02e43b15f remove white space 2018-01-25 23:29:46 +00:00
Neil Johnson
349c739966 synapse 500s on a call to publicRooms when the number of public rooms is zero; the specific cause is xrange trying to use a step value of zero. If the total room count really is zero, it makes sense to just bail early and save the extra processing 2018-01-25 23:28:44 +00:00
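The failure mode is easy to reproduce in isolation; Python 2's xrange and Python 3's range both reject a zero step:

    try:
        list(range(0, 10, 0))
    except ValueError as e:
        print(e)   # "range() arg 3 must not be zero"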
Matthew Hodgson
9a72b70630 fix thinko on 3pid whitelisting 2018-01-24 11:07:47 +01:00
Richard van der Hoff
25e2456ee7 Merge pull request #2816 from matrix-org/rav/metrics_comments
Add some comments about the reactor tick time metric
2018-01-23 19:39:26 +00:00
Matthew Hodgson
d32385336f add ?ts massaging for ASes (#2754)
blindly implement ?ts for AS. untested
2018-01-23 09:59:06 +01:00
Richard van der Hoff
b2da272b77 Merge pull request #2821 from matrix-org/rav/matthew_test_fixes
Matthew's fixes to the unit tests
2018-01-22 20:43:56 +00:00
Richard van der Hoff
4528dd2443 Fix logging and add user_id 2018-01-22 20:15:42 +00:00
Richard van der Hoff
93efd7eb04 logging and debug for http pusher 2018-01-22 18:14:10 +00:00
Matthew Hodgson
ab9f844aaf Add federation_domain_whitelist option (#2820)
Add federation_domain_whitelist

gives a way to restrict which domains your HS is allowed to federate with.
useful mainly for gracefully preventing a private but internet-connected HS from trying to federate to the wider public Matrix network
2018-01-22 19:11:18 +01:00
Richard van der Hoff
5c431f421c Matthew's fixes to the unit tests
Extracted from https://github.com/matrix-org/synapse/pull/2820
2018-01-22 16:46:16 +00:00
Matthew Hodgson
d84f65255e Merge pull request #2813 from matrix-org/matthew/registrations_require_3pid
add registrations_require_3pid and allow_local_3pids
2018-01-22 13:57:22 +00:00
Travis Ralston
a94d9b6b82 Appease the linter
These are ids anyways, not mxc uris.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-20 22:49:46 -07:00
Travis Ralston
5552ed9a7f Add an admin route to get all the media in a room
This is intended to be used by administrators to monitor the media that is passing through their server, if they wish.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-20 22:37:53 -07:00
Richard van der Hoff
2c8526cac7 Use a connection pool for the SimpleHttpClient
In particular I hope this will help the pusher, which makes many requests to
sygnal, and is currently negotiating SSL for each one.
2018-01-20 00:55:44 +00:00
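For reference, a hedged sketch of connection pooling with Twisted's Agent, which SimpleHttpClient wraps (the numbers here are illustrative):

    from twisted.internet import reactor
    from twisted.web.client import Agent, HTTPConnectionPool

    pool = HTTPConnectionPool(reactor, persistent=True)
    pool.maxPersistentPerHost = 5        # keep a few connections per host
    agent = Agent(reactor, pool=pool)    # connections (and their TLS sessions)
                                         # are reused across requests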
Richard van der Hoff
87b7d72760 Add some comments about the reactor tick time metric 2018-01-19 23:51:04 +00:00
Matthew Hodgson
49fce04624 fix typo (thanks sytest) 2018-01-19 19:55:38 +00:00
Richard van der Hoff
b0d9e633ee Merge pull request #2814 from matrix-org/rav/fix_urlcache_thumbs
Use the right path for url_preview thumbnails
2018-01-19 18:57:15 +00:00
Richard van der Hoff
ad7ec63d08 Use the right path for url_preview thumbnails
This was introduced by #2627: we were overwriting the original media for url
previews with the thumbnails :/

(fixes https://github.com/vector-im/riot-web/issues/6012, hopefully)
2018-01-19 18:29:39 +00:00
Matthew Hodgson
62d7d66ae5 oops, check all login types 2018-01-19 18:23:56 +00:00
Matthew Hodgson
8fe253f19b fix PR nitpicking 2018-01-19 18:23:45 +00:00
Matthew Hodgson
293380bef7 trailing commas 2018-01-19 15:38:53 +00:00
Matthew Hodgson
447f4f0d5f rewrite based on PR feedback:
* [ ] split config options into allowed_local_3pids and registrations_require_3pid
 * [ ] simplify and comment logic for picking registration flows
 * [ ] fix docstring and move check_3pid_allowed into a new util module
 * [ ] use check_3pid_allowed everywhere

@erikjohnston PTAL
2018-01-19 15:33:55 +00:00
Matthew Hodgson
9d332e0f79 fix up v1, and improve errors 2018-01-19 00:53:58 +00:00
Matthew Hodgson
0af58f14ee fix pep8 2018-01-19 00:33:51 +00:00
Matthew Hodgson
81d037dbd8 mock registrations_require_3pid 2018-01-19 00:28:08 +00:00
Matthew Hodgson
28a6ccb49c add registrations_require_3pid
lets homeservers specify a whitelist for 3PIDs that users are allowed to associate with.
Typically useful for stopping people from registering with non-work emails
2018-01-19 00:19:58 +00:00
Erik Johnston
cd871a3057 Fix storage provider bug introduced when renamed to store_local 2018-01-18 18:37:59 +00:00
Erik Johnston
8ff6726c0d Merge pull request #2812 from matrix-org/erikj/media_storage_provider_config
Make storage providers configurable
2018-01-18 18:33:57 +00:00
Erik Johnston
d69768348f Fix passing wrong config to provider constructor 2018-01-18 17:14:05 +00:00
Erik Johnston
8e85220373 Remove duplicate directory test 2018-01-18 17:12:35 +00:00
Erik Johnston
3fe2bae857 Missing staticmethod 2018-01-18 17:11:45 +00:00
Erik Johnston
aae77da73f Fixup comments 2018-01-18 17:11:29 +00:00
Erik Johnston
ce4f66133e Add unit tests 2018-01-18 16:32:11 +00:00
Erik Johnston
b6dc7044a9 Merge pull request #2804 from matrix-org/erikj/file_consumer
Add decent impl of a FileConsumer
2018-01-18 16:31:33 +00:00
Erik Johnston
9a89dae8c5 Fix typo in thumbnail resource causing access times to be incorrect 2018-01-18 15:06:24 +00:00
Erik Johnston
0af5dc63a8 Make storage providers more configurable 2018-01-18 14:07:21 +00:00
Richard van der Hoff
5a4da21d58 Merge pull request #2810 from matrix-org/rav/metrics_fixes
Fix bugs in block metrics
2018-01-18 13:35:28 +00:00
Richard van der Hoff
d57765fc8a Fix bugs in block metrics
... which I introduced in #2785
2018-01-18 12:24:42 +00:00
Erik Johnston
2cf6a7bc20 Use better file consumer 2018-01-18 12:00:46 +00:00
Erik Johnston
4a53f3a3e8 Ensure media is in local cache before thumbnailing 2018-01-18 12:00:46 +00:00
Erik Johnston
be0dfcd4a2 Do logcontexts correctly 2018-01-18 11:57:57 +00:00
Erik Johnston
1432f7ccd5 Move test stuff to tests 2018-01-18 11:57:57 +00:00
Erik Johnston
2f18a2647b Make all fields private 2018-01-18 11:57:54 +00:00
Richard van der Hoff
d6af5512bb Merge pull request #2809 from matrix-org/rav/metrics_errors
better exception logging in callbackmetrics
2018-01-18 11:46:37 +00:00
Richard van der Hoff
ce236f8ac8 better exception logging in callbackmetrics
when we fail to render a metric, give a clue as to which metric it was
2018-01-18 11:30:49 +00:00
Erik Johnston
dc519602ac Ensure registerProducer isn't called twice 2018-01-18 11:07:17 +00:00
Erik Johnston
17b54389fe Fix _notify_empty typo 2018-01-18 11:05:34 +00:00
Erik Johnston
28b338ed9b Move definition of paused_producer to __init__ 2018-01-18 11:04:41 +00:00
Erik Johnston
a177325b49 Fix comments 2018-01-18 11:02:43 +00:00
Richard van der Hoff
36da256cc6 Merge pull request #2805 from matrix-org/rav/log_state_res
Log room when doing state resolution
2018-01-17 18:05:04 +00:00
Richard van der Hoff
1224612a79 Log room when doing state resolution
Mostly because it helps figure out what is prompting the resolution
2018-01-17 17:11:59 +00:00
Erik Johnston
bc67e7d260 Add decent impl of a FileConsumer
Twisted core doesn't have a general purpose one, so we need to write one
ourselves.

Features:
- All writing happens in background thread
- Supports both push and pull producers
- Push producers get paused if the consumer falls behind
2018-01-17 16:43:03 +00:00
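A much-simplified, hypothetical sketch of the idea (push producers only; a real implementation must also handle pull producers and errors, and must call producer methods from the reactor thread):

    import queue
    import threading

    class SimpleFileConsumer(object):
        def __init__(self, file_obj, max_queued=16):
            self._file = file_obj
            self._queue = queue.Queue()
            self._max_queued = max_queued
            self._producer = None
            self._paused = False
            threading.Thread(target=self._writer, daemon=True).start()

        def registerProducer(self, producer, streaming):
            assert streaming, "this sketch only handles push producers"
            self._producer = producer

        def unregisterProducer(self):
            self._queue.put(None)          # tell the writer thread to finish

        def write(self, data):
            self._queue.put(data)
            if not self._paused and self._queue.qsize() > self._max_queued:
                self._paused = True
                self._producer.pauseProducing()   # consumer fell behind

        def _writer(self):
            # All file I/O happens here, off the main thread.
            while True:
                data = self._queue.get()
                if data is None:
                    self._file.close()
                    return
                self._file.write(data)
                if self._paused and self._queue.qsize() == 0:
                    self._paused = False
                    self._producer.resumeProducing()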
Erik Johnston
a87006f9c7 Merge pull request #2783 from matrix-org/erikj/media_last_accessed
Keep track of last access time for local media
2018-01-17 16:39:02 +00:00
Matthew Hodgson
06db5c4b76 Merge pull request #2803 from matrix-org/matthew/fix-userdir-sql
fix SQL when searching all users
2018-01-17 16:27:54 +00:00
Richard van der Hoff
8716eb4920 Merge pull request #2802 from matrix-org/rav/clean_up_resolve_events
Split resolve_events into two functions
2018-01-17 16:01:33 +00:00
Matthew Hodgson
2d9ab533f9 fix SQL when searching all users 2018-01-17 15:58:52 +00:00
Richard van der Hoff
390093d45e Split resolve_events into two functions
... so that the return type doesn't depend on the arg types
2018-01-17 15:44:31 +00:00
Erik Johnston
2fb3a28c98 Remove lost comment 2018-01-17 14:59:44 +00:00
Richard van der Hoff
a7e4ff9cca Merge pull request #2795 from matrix-org/rav/track_db_scheduling
Track DB scheduling delay per-request
2018-01-17 14:29:37 +00:00
Richard van der Hoff
f884cfffb9 Merge pull request #2797 from matrix-org/rav/user_id_checking
Sanity checking for user ids
2018-01-17 14:29:12 +00:00
Richard van der Hoff
a5213df1f7 Sanity checking for user ids
Check the user_id passed to a couple of APIs for validity, to avoid
"IndexError: list index out of range" exception which looks scary and results
in a 500 rather than a more useful error.

Fixes #1432, among other things
2018-01-17 14:28:54 +00:00
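An illustrative example of the failure (the helper name and check here are hypothetical, not the actual patch):

    def get_domain_from_id(user_id):
        # Without the check, an id with no colon makes the [1] index below
        # raise "IndexError: list index out of range", surfacing as a 500.
        parts = user_id.split(":", 1)
        if not user_id.startswith("@") or len(parts) != 2:
            raise ValueError("%r is not a valid user id" % (user_id,))
        return parts[1]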
Erik Johnston
3d5a25407c Merge pull request #2798 from jeremycline/fedora-repo
Note that Synapse is available in Fedora
2018-01-17 14:28:17 +00:00
Richard van der Hoff
e8f7541d3f Merge remote-tracking branch 'origin/develop' into rav/track_db_scheduling 2018-01-17 14:01:57 +00:00
Richard van der Hoff
fb6563b4be Merge pull request #2793 from matrix-org/rav/db_txn_time_in_millis
Track db txn time in millisecs
2018-01-17 13:52:42 +00:00
Richard van der Hoff
1954e867b4 Merge pull request #2796 from matrix-org/rav/fix_closed_connection_errors
Fix 'NoneType' object has no attribute 'writeHeaders'
2018-01-17 13:52:24 +00:00
Richard van der Hoff
f23b4078c0 Merge pull request #2794 from matrix-org/rav/rework_run_interaction
rework runInteraction in terms of runConnection
2018-01-17 13:50:42 +00:00
Richard van der Hoff
11ab2f56f5 Merge branch 'develop' into rav/fix_closed_connection_errors 2018-01-17 11:25:11 +00:00
Richard van der Hoff
0486a7814a Merge branch 'develop' into rav/rework_run_interaction 2018-01-17 11:24:38 +00:00
Richard van der Hoff
90c14da992 Merge branch 'develop' into rav/db_txn_time_in_millis 2018-01-17 11:19:55 +00:00
Richard van der Hoff
1067b96364 Merge pull request #2792 from matrix-org/rav/optimise_logging_context
Optimise LoggingContext creation and copying
2018-01-17 11:19:28 +00:00
Richard van der Hoff
38506773eb Merge remote-tracking branch 'origin/develop' into rav/optimise_logging_context 2018-01-17 10:57:13 +00:00
Erik Johnston
300edc2348 Update last access time when thumbnails are viewed 2018-01-17 10:24:43 +00:00
Erik Johnston
05f98a2224 Keep track of last access time for local media 2018-01-17 10:24:43 +00:00
Erik Johnston
3cb2dabaad Merge pull request #2789 from matrix-org/erikj/fix_thumbnails
Fix thumbnailing remote files
2018-01-17 10:22:20 +00:00
Erik Johnston
d728c47142 Add docstring 2018-01-17 10:06:14 +00:00
Jeremy Cline
4102468da9 Note that Synapse is available in Fedora
Signed-off-by: Jeremy Cline <jeremy@jcline.org>
2018-01-16 16:14:02 -05:00
Richard van der Hoff
936482d507 Fix 'NoneType' object has no attribute 'writeHeaders'
Avoid throwing a (harmless) exception when we try to write an error response to
an http request where the client has disconnected.

This comes up as a CRITICAL error in the logs which tends to mislead people
into thinking there's an actual problem
2018-01-16 17:58:16 +00:00
Richard van der Hoff
3d12d97415 Track DB scheduling delay per-request
For each request, track the amount of time spent waiting for a db
connection. This entails adding it to the LoggingContext and we may as well add
metrics for it while we are passing.
2018-01-16 17:23:32 +00:00
Richard van der Hoff
0f5d2cc37c Merge branch 'rav/db_txn_time_in_millis' into rav/track_db_scheduling 2018-01-16 17:09:15 +00:00
Richard van der Hoff
8615f19d20 rework runInteraction in terms of runConnection
... so that we can share the code
2018-01-16 17:08:29 +00:00
Matthew Hodgson
5e97ca7ee6 fix typo 2018-01-16 16:52:35 +00:00
Erik Johnston
d863f68cab Use local vars 2018-01-16 16:24:15 +00:00
Erik Johnston
6368e5c0ab Change _generate_thumbnails to take media_type 2018-01-16 16:17:38 +00:00
Erik Johnston
0a90d9ede4 Move setting of file_id up to caller 2018-01-16 16:03:05 +00:00
Richard van der Hoff
6324b65f08 Track db txn time in millisecs
... to reduce the amount of floating-point foo we do.
2018-01-16 15:53:18 +00:00
Richard van der Hoff
44a498418c Optimise LoggingContext creation and copying
It turns out that the only thing we use the __dict__ of LoggingContext for is
`request`, and given we create lots of LoggingContexts and then copy them every
time we do a db transaction or log line, using the __dict__ seems a bit
redundant. Let's try to optimise things by making the request attribute
explicit.
2018-01-16 15:49:42 +00:00
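A sketch of the shape of such a change, assuming __slots__ is the mechanism (the attribute names are illustrative, not the real set):

    class LoggingContext(object):
        __slots__ = ["previous_context", "name", "request"]

        def __init__(self, name=None):
            self.previous_context = None
            self.name = name
            self.request = None

    # With __slots__ there is no per-instance __dict__ to allocate or copy,
    # which adds up when contexts are copied for every log line and db txn.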
Erik Johnston
5dfc83704b Fix typo 2018-01-16 14:32:56 +00:00
Richard van der Hoff
febdca4b37 Merge pull request #2785 from matrix-org/rav/reorganise_metrics_again
Reorganise request and block metrics
2018-01-16 14:10:21 +00:00
Richard van der Hoff
f5f89fda21 Merge pull request #2790 from matrix-org/rav/preserve_event_logcontext_leak
Fix a logcontext leak in persist_events
2018-01-16 14:09:13 +00:00
Erik Johnston
307f88dfb6 Fix up log lines 2018-01-16 13:53:52 +00:00
Richard van der Hoff
5b527d7ee1 Merge pull request #2674 from her001/readme-sytest
Mention SyTest in the README, after Development
2018-01-16 13:18:17 +00:00
Richard van der Hoff
807e848f0f Merge pull request #2787 from matrix-org/rav/worker_event_counts
Metrics for events processed in appservice and fed sender
2018-01-16 13:14:09 +00:00
Richard van der Hoff
4a31a61ef9 Merge pull request #2786 from matrix-org/rav/rdata_metrics
Metrics for number of RDATA commands received
2018-01-16 13:13:53 +00:00
Richard van der Hoff
ee7a1cabd8 document metrics changes 2018-01-16 13:04:01 +00:00
Erik Johnston
9795b9ebb1 Correctly use server_name/file_id when generating/fetching remote thumbnails 2018-01-16 12:02:06 +00:00
Erik Johnston
c5b589f2e8 Log when we respond with 404 2018-01-16 12:01:40 +00:00
Richard van der Hoff
64ddec1bc0 Fix a logcontext leak in persist_events
ObservableDeferred expects its callbacks to be called without any
logcontexts, whereas it turns out we were calling them with the logcontext of
the request which initiated the persistence loop.

It seems wrong that we are attributing work done in the persistence loop to the
request that happened to initiate it, so let's solve this by dropping the
logcontext for it.

(I'm not sure this actually causes any real problems other than messages in the
debug log, but let's clean it up anyway)
2018-01-16 11:47:36 +00:00
Erik Johnston
a4c5e4a645 Fix thumbnailing remote files 2018-01-16 11:37:50 +00:00
Erik Johnston
1159abbdd2 Merge pull request #2767 from matrix-org/erikj/media_storage_refactor
Refactor MediaRepository to separate out storage
2018-01-16 10:23:50 +00:00
Richard van der Hoff
a027c2af8d Metrics for events processed in appservice and fed sender
More metrics I wished I'd had
2018-01-15 18:23:24 +00:00
Richard van der Hoff
5c3c32f16f Metrics for number of RDATA commands received
I found myself wishing we had this.
2018-01-15 17:45:55 +00:00
Richard van der Hoff
39f4e29d01 Reorganise request and block metrics
In order to stop the number of duplicate foo:count metrics increasing
without bounds, it's time for a rearrangement.

The following are all deprecated, and replaced with synapse_util_metrics_block_count:
  synapse_util_metrics_block_timer:count
  synapse_util_metrics_block_ru_utime:count
  synapse_util_metrics_block_ru_stime:count
  synapse_util_metrics_block_db_txn_count:count
  synapse_util_metrics_block_db_txn_duration:count

The following are all deprecated, and replaced with synapse_http_server_response_count:
   synapse_http_server_requests
   synapse_http_server_response_time:count
   synapse_http_server_response_ru_utime:count
   synapse_http_server_response_ru_stime:count
   synapse_http_server_response_db_txn_count:count
   synapse_http_server_response_db_txn_duration:count

The following are renamed (the old metrics are kept for now, but deprecated):

  synapse_util_metrics_block_timer:total ->
     synapse_util_metrics_block_time_seconds

  synapse_util_metrics_block_ru_utime:total ->
     synapse_util_metrics_block_ru_utime_seconds

  synapse_util_metrics_block_ru_stime:total ->
     synapse_util_metrics_block_ru_stime_seconds

  synapse_util_metrics_block_db_txn_count:total ->
     synapse_util_metrics_block_db_txn_count

  synapse_util_metrics_block_db_txn_duration:total ->
     synapse_util_metrics_block_db_txn_duration_seconds

  synapse_http_server_response_time:total ->
     synapse_http_server_response_time_seconds

  synapse_http_server_response_ru_utime:total ->
     synapse_http_server_response_ru_utime_seconds

  synapse_http_server_response_ru_stime:total ->
     synapse_http_server_response_ru_stime_seconds

   synapse_http_server_response_db_txn_count:total ->
      synapse_http_server_response_db_txn_count

   synapse_http_server_response_db_txn_duration:total ->
      synapse_http_server_response_db_txn_duration_seconds
2018-01-15 17:09:44 +00:00
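As a rough illustration of the "alternative names" mechanism used to keep the deprecated names alive while dashboards migrate: render a value under its canonical name and any old names. The metric names come from the list above; the renderer itself is a sketch, not Synapse's actual collector code.

```python
def render_metric(name, value, alternative_names=()):
    """Render a metric in Prometheus text exposition format under its
    canonical name and, for the deprecation window, any old names too."""
    names = (name,) + tuple(alternative_names)
    return "\n".join("%s %g" % (n, value) for n in names)


print(render_metric(
    "synapse_util_metrics_block_time_seconds",
    12.5,
    alternative_names=("synapse_util_metrics_block_timer:total",),
))
# synapse_util_metrics_block_time_seconds 12.5
# synapse_util_metrics_block_timer:total 12.5
```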
Richard van der Hoff
992018d1c0 mechanism to render metrics with alternative names 2018-01-15 17:04:39 +00:00
Richard van der Hoff
80fa610f9c Add some comments to metrics classes 2018-01-15 16:52:52 +00:00
Richard van der Hoff
5e16c1dc8c Merge pull request #2778 from matrix-org/rav/counters_should_be_floats
Make Counter render floats
2018-01-15 14:09:53 +00:00
Richard van der Hoff
19d274085f Make Counter render floats
Prometheus handles all metrics as floats, and sometimes we store non-integer
values in them (notably, durations in seconds), so let's render them as floats
too.

(Note that the standard client libraries also treat Counters as floats.)
2018-01-12 23:49:44 +00:00
Richard van der Hoff
0fc2362d37 Merge pull request #2777 from matrix-org/rav/fix_remote_thumbnails
Reinstate media download on thumbnail request
2018-01-12 19:12:31 +00:00
Richard van der Hoff
21bf87a146 Reinstate media download on thumbnail request
We need to actually download the remote media when we get a request for a
thumbnail.
2018-01-12 15:38:06 +00:00
Erik Johnston
694f1c1b18 Fix up comments 2018-01-12 15:02:46 +00:00
Erik Johnston
e21370ba54 Correctly reraise exception 2018-01-12 14:44:02 +00:00
Erik Johnston
85a4d78213 Make Responder a context manager 2018-01-12 13:32:03 +00:00
Erik Johnston
dcc8eded41 Add missing class var 2018-01-12 13:16:27 +00:00
Erik Johnston
fefeb0ab0e Merge pull request #2774 from matrix-org/erikj/synctl
When using synctl with workers, don't start the main synapse automatically
2018-01-12 12:00:07 +00:00
Erik Johnston
81391fa162 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/media_storage_refactor 2018-01-12 11:28:49 +00:00
Erik Johnston
1e4edd1717 Remove unnecessary condition 2018-01-12 11:28:32 +00:00
Erik Johnston
c6c009603c Remove unused variables 2018-01-12 11:24:05 +00:00
Erik Johnston
4d88958cf6 Make class var local 2018-01-12 11:23:54 +00:00
Erik Johnston
227c491510 Comments 2018-01-12 11:22:41 +00:00
Erik Johnston
f4d93ae424 Actually make it work 2018-01-12 10:39:27 +00:00
Erik Johnston
f68e4cf690 Refactor 2018-01-12 10:11:12 +00:00
Richard van der Hoff
5f23b6d5ea Merge pull request #2766 from matrix-org/rav/room_event
Add /room/{id}/event/{id} to synapse
2018-01-11 14:47:18 +00:00
Erik Johnston
7cd34512d8 When using synctl with workers, don't start the main synapse automatically 2018-01-11 11:37:39 +00:00
Erik Johnston
07ab948c38 Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-01-11 11:23:40 +00:00
Erik Johnston
825a07a974 Merge pull request #2773 from matrix-org/erikj/hash_bg
Do bcrypt hashing in a background thread
2018-01-10 18:11:41 +00:00
Erik Johnston
f8e1ab5fee Do bcrypt hashing in a background thread 2018-01-10 18:01:28 +00:00
Erik Johnston
b9e4a97922 Merge pull request #2772 from matrix-org/t3chguy/patch-1
Fix publicised groups GET API (singular) over federation
2018-01-10 15:15:48 +00:00
Michael Telatynski
5f07f5694c fix order of operations derp and also use .get to default to {}
Signed-off-by: Michael Telatynski <7t3chguy@gmail.com>
2018-01-10 15:11:35 +00:00
Michael Telatynski
8c9d5b4873 Fix publicised groups API (singular) over federation
which was missing its fed client API; since there is no other API,
it might as well reuse the bulk one and unwrap the result.

Signed-off-by: Michael Telatynski <7t3chguy@gmail.com>
2018-01-10 15:04:51 +00:00
Richard van der Hoff
c175a5f0f2 Merge pull request #2770 from matrix-org/rav/fix_request_metrics
Update http request metrics before calling servlet
2018-01-10 14:53:40 +00:00
Richard van der Hoff
d90e8ea444 Update http request metrics before calling servlet
Make sure that we set the servlet name in the metrics object *before* calling
the servlet, in case the servlet throws an exception.
2018-01-09 18:27:35 +00:00
hera
174eacc8ba oops 2018-01-09 18:14:32 +00:00
Matthew Hodgson
a66f489678 fix GIST->GIN switch 2018-01-09 16:55:51 +00:00
Matthew Hodgson
e79db0a673 switch back from GIST to GIN indexes 2018-01-09 16:37:48 +00:00
Matthew Hodgson
e365ad329f oops, tweak work_mem when actually storing 2018-01-09 16:30:30 +00:00
Matthew Hodgson
19f9227643 avoid 80s GIN inserts by tweaking work_mem
see https://github.com/matrix-org/synapse/issues/2753 for details
2018-01-09 16:25:04 +00:00
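Roughly the kind of tweak being described: raise `work_mem` for the statement doing the GIN insert so flushing the pending list stays cheap, then reset it afterwards. The 32MB figure and the table shape below are illustrative, not the exact values used.

```python
def store_search_entries(conn, rows):
    """Insert into a GIN-indexed search table with a temporarily raised work_mem."""
    cur = conn.cursor()
    cur.execute("SET work_mem = '32MB'")  # cheaper GIN pending-list flushes
    try:
        cur.executemany(
            "INSERT INTO event_search (event_id, vector)"
            " VALUES (%s, to_tsvector('english', %s))",
            rows,
        )
    finally:
        cur.execute("RESET work_mem")  # don't leak the setting to later queries
    conn.commit()
```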
Erik Johnston
8f03aa9f61 Add StorageProvider concept 2018-01-09 16:16:12 +00:00
Erik Johnston
2442e9876c Make PreviewUrlResource use MediaStorage 2018-01-09 16:15:07 +00:00
Erik Johnston
9d30a7691c Make ThumbnailResource use MediaStorage 2018-01-09 16:15:07 +00:00
Erik Johnston
9e20840e02 Use MediaStorage for remote media 2018-01-09 16:15:07 +00:00
Erik Johnston
dd3092c3a3 Use MediaStorage for local files 2018-01-09 16:15:07 +00:00
Erik Johnston
ada470bccb Add MediaStorage class 2018-01-09 16:15:07 +00:00
Erik Johnston
1ee787912b Add some helper classes 2018-01-09 16:15:07 +00:00
Erik Johnston
47ca5eb882 Split out add_file_headers 2018-01-09 16:15:07 +00:00
Erik Johnston
ce3a726fc0 Merge pull request #2764 from matrix-org/erikj/remove_dead_thumbnail_code
Remove dead code related to default thumbnails
2018-01-09 16:01:02 +00:00
Erik Johnston
b6c9deffda Remove dead TODO 2018-01-09 15:53:23 +00:00
Richard van der Hoff
51c9d9ed65 Add /room/{id}/event/{id} to synapse
Turns out that there is a valid use case for retrieving an event by id (notably
having received a push), but event ids should be scoped to the room, so /event/{id}
is wrong.
2018-01-09 14:39:12 +00:00
Erik Johnston
b30cd5b107 Remove dead code related to default thumbnails 2018-01-09 14:38:33 +00:00
Richard van der Hoff
a767f06e3f Merge pull request #2765 from matrix-org/rav/fix_room_uts
Fix flaky test_rooms UTs
2018-01-09 14:21:03 +00:00
Richard van der Hoff
cb66a2d387 Merge pull request #2763 from matrix-org/rav/fix_config_uts
Fix broken config UTs
2018-01-09 12:08:08 +00:00
Richard van der Hoff
aed4e4ecdd Merge pull request #2762 from matrix-org/rav/fix_logconfig_indenting
Make indentation of generated log config consistent
2018-01-09 12:07:56 +00:00
Richard van der Hoff
f8fa5ae4af enable twisted delayedcall debugging in UTs 2018-01-09 12:06:45 +00:00
Richard van der Hoff
374c4d4ced Remove dead code
pointless function is pointless
2018-01-09 12:06:45 +00:00
Richard van der Hoff
142fb0a7d4 Disable user_directory updates for UTs
Fix flakiness in the UTs caused by the user_directory being updated in the
background
2018-01-09 12:06:45 +00:00
Richard van der Hoff
0211464ba2 Fix broken config UTs
https://github.com/matrix-org/synapse/pull/2755 broke log-config generation,
which in turn broke the unit tests.
2018-01-09 11:28:33 +00:00
Richard van der Hoff
3a556f1ea0 Make indentation of generated log config consistent
(we had a mix of 2- and 4-space indents)
2018-01-09 11:27:19 +00:00
Erik Johnston
e9f7677170 Merge pull request #2761 from turt2live/patch-1
Fix templating error with unban permission message
2018-01-08 10:10:49 +00:00
Travis Ralston
eccfc8e928 Fix templating error with unban permission message
Fixes https://github.com/matrix-org/synapse/issues/2759

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-01-07 19:52:58 -07:00
Richard van der Hoff
e6b24663e4 Merge pull request #2755 from matrix-org/richvdh-patch-1
Remove 'verbosity'/'log_file' from generated cfg
2018-01-05 14:17:46 +00:00
Richard van der Hoff
840f72356e Remove 'verbosity'/'log_file' from generated cfg
... because these only really exist to confuse people nowadays.

Also bring log config more into line with the generated log config, by making `level_for_storage`
apply to the `synapse.storage.SQL` logger rather than `synapse.storage`.
2018-01-05 12:30:28 +00:00
Erik Johnston
18e3a16e8b Merge branch 'release-v0.26.0' of github.com:matrix-org/synapse 2018-01-05 10:54:22 +00:00
Erik Johnston
864a6d2977 Bump version and changelog 2018-01-05 10:54:01 +00:00
Richard van der Hoff
6e375f4597 Merge pull request #2744 from matrix-org/rav/login_logging
Better logging when login can't find a 3pid
2018-01-05 09:54:39 +00:00
Richard van der Hoff
efdfd5c835 Merge pull request #2745 from matrix-org/rav/assert_params
Check missing fields in event_from_pdu_json
2018-01-05 09:52:39 +00:00
Richard van der Hoff
bd91857028 Check missing fields in event_from_pdu_json
Return a 400 rather than a 500 when somebody messes up their send_join
2017-12-30 18:40:19 +00:00
Richard van der Hoff
3079f80d4a Factor out event_from_pdu_json
turns out we have two copies of this, and neither needs to be an instance
method
2017-12-30 18:40:19 +00:00
Richard van der Hoff
65abc90fb6 federation_server: clean up imports 2017-12-30 18:40:19 +00:00
Richard van der Hoff
a7b726ad18 federation_client: clean up imports 2017-12-30 18:40:19 +00:00
Richard van der Hoff
75c1b8df01 Better logging when login can't find a 3pid 2017-12-20 19:31:00 +00:00
Richard van der Hoff
3f9f1c50f3 Merge pull request #2683 from seckrv/fix_pwd_auth_prov_typo
synapse/config/password_auth_providers: Fixed bracket typo
2017-12-18 22:37:21 +00:00
Richard van der Hoff
48fa4e1e5b Merge pull request #2435 from silkeh/listen-ipv6-default
Adapt the default config to bind on both IPv4 and IPv6 on all platforms
2017-12-18 22:34:50 +00:00
Silke
df0f602796 Implement listen_tcp method in remaining workers
Signed-off-by: Silke <silke@slxh.eu>
2017-12-18 20:00:42 +01:00
Silke
26cd3f5690 Remove logger argument and do not catch replication listener
Signed-off-by: Silke <silke@slxh.eu>
2017-12-18 20:00:42 +01:00
Matthew Hodgson
3355ce650d Merge pull request #2737 from Valodim/master
mention federation tester more prominently in the readme
2017-12-17 23:05:17 +00:00
Silke Hofstra
ed48ecc58c Add methods for listening on multiple addresses
Add listen_tcp and listen_ssl which implement Twisted's reactor.listenTCP
and reactor.listenSSL for multiple addresses.

Signed-off-by: Silke Hofstra <silke@slxh.eu>
2017-12-17 13:15:48 +01:00
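A simplified sketch of the approach: loop over the configured bind addresses and call Twisted's `reactor.listenTCP`/`reactor.listenSSL` once per address (error handling omitted).

```python
from twisted.internet import reactor


def listen_tcp(bind_addresses, port, factory, backlog=50):
    """Bind the given protocol factory to `port` on every configured address."""
    for address in bind_addresses:
        reactor.listenTCP(port, factory, backlog=backlog, interface=address)


def listen_ssl(bind_addresses, port, factory, context_factory, backlog=50):
    """As listen_tcp, but wrapping connections in TLS."""
    for address in bind_addresses:
        reactor.listenSSL(
            port, factory, context_factory, backlog=backlog, interface=address
        )
```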
Silke Hofstra
37d1a90025 Allow binds to both :: and 0.0.0.0
Binding on 0.0.0.0 when :: is specified in the bind_addresses is now allowed.
This causes a warning explaining the behaviour.
Configuration changed to match.

See #2232

Signed-off-by: Silke Hofstra <silke@slxh.eu>
2017-12-17 13:10:31 +01:00
Willem Mulder
3e59143ba8 Adapt the default config to bind on IPv6.
Most deployments are on Linux (or Mac OS), so this would actually bind
on both IPv4 and IPv6.

Resolves #1886.

Signed-off-by: Willem Mulder <willemmaster@hotmail.com>
2017-12-17 13:07:37 +01:00
Vincent Breitmoser
9419bb5776 mention federation tester more prominently in the readme 2017-12-16 22:09:53 +02:00
Erik Johnston
80573e3900 Fix rc version number 2017-12-13 15:15:33 +00:00
Erik Johnston
069ae2a5d6 Bump changelog and version 2017-12-13 15:11:22 +00:00
Erik Johnston
ba24576f2f Merge pull request #2717 from matrix-org/erikj/createroom_content
Fix wrong avatars when inviting multiple users when creating room
2017-12-07 20:00:34 +00:00
Erik Johnston
d8a6c734fa Merge branch 'develop' of github.com:matrix-org/synapse into erikj/createroom_content 2017-12-07 14:24:01 +00:00
Erik Johnston
ef045dcd71 Copy dict in update_membership too 2017-12-07 14:17:15 +00:00
Matthew Hodgson
33cb7ef0b7 Merge pull request #2723 from matrix-org/matthew/search-all-local-users
Add all local users to the user_directory and optionally search them
2017-12-05 11:09:47 +00:00
Matthew Hodgson
cdc2cb5d11 fix StoreError syntax 2017-12-05 11:09:31 +00:00
Richard van der Hoff
16ec3805e5 Fix error when deleting devices
This was introduced in d7ea8c4 / PR #2728
2017-12-05 09:49:22 +00:00
Richard van der Hoff
8529874368 Merge pull request #2729 from matrix-org/rav/custom_ui_auth
support custom login types for validating users
2017-12-05 09:43:43 +00:00
Richard van der Hoff
da1010c83a support custom login types for validating users
Wire the custom login type support from password providers into the UI-auth
user-validation flows.
2017-12-05 09:43:30 +00:00
Richard van der Hoff
cc58e177f3 Merge pull request #2728 from matrix-org/rav/validate_user_via_ui_auth
Factor out a validate_user_via_ui_auth method
2017-12-05 09:42:46 +00:00
Richard van der Hoff
d7ea8c4800 Factor out a validate_user_via_ui_auth method
Collect together all the places that validate a logged-in user via UI auth.
2017-12-05 09:42:30 +00:00
Richard van der Hoff
aa6ecf0984 Merge pull request #2727 from matrix-org/rav/refactor_ui_auth_return
Refactor UI auth implementation
2017-12-05 09:40:38 +00:00
Richard van der Hoff
d5f9fb06b0 Refactor UI auth implementation
Instead of returning False when auth is incomplete, throw an exception which
can be caught with a wrapper.
2017-12-05 09:40:05 +00:00
Matthew Hodgson
c22e73293a speed up the rate of initial spam for users 2017-12-04 18:05:28 +00:00
Matthew Hodgson
b11dca2025 better doc 2017-12-04 17:51:33 +00:00
Matthew Hodgson
7b86c1fdcd try to make tests work a bit more... 2017-12-04 17:10:03 +00:00
Richard van der Hoff
58ebdb037c Merge pull request #2716 from matrix-org/rav/federation_client_post
federation_client script: Support for posting content
2017-12-04 17:04:08 +00:00
Matthew Hodgson
95f8a713dc erik told me to 2017-12-04 16:56:25 +00:00
Matthew Hodgson
74e0cc74ce fix pep8 and tests 2017-12-04 15:11:38 +00:00
Matthew Hodgson
1bd40ca73e switch to a simpler 'search_all_users' button as per review feedback 2017-12-04 14:58:39 +00:00
Matthew Hodgson
f397153dfc Merge branch 'develop' into matthew/search-all-local-users 2017-11-30 01:51:38 +00:00
Matthew Hodgson
5406392f8b specify default user_directory_include_pattern 2017-11-30 01:45:34 +00:00
Matthew Hodgson
f61e107f63 remove null constraint on user_dir.room_id 2017-11-30 01:43:50 +00:00
Matthew Hodgson
4b1fceb913 fix alternation operator for FTS4 - how did this ever work!? 2017-11-30 01:34:03 +00:00
Matthew Hodgson
a4bb133b68 fix thinkos galore 2017-11-30 01:17:15 +00:00
Matthew Hodgson
cd3697e8b7 kick the user_directory index when new users register 2017-11-29 18:33:34 +00:00
Matthew Hodgson
3241c7aac3 untested WIP but might actually work 2017-11-29 18:27:05 +00:00
Richard van der Hoff
624c46eb06 Merge pull request #2721 from matrix-org/rav/get_user_by_access_token_comments
Improve comments on get_user_by_access_token
2017-11-29 17:57:06 +00:00
Richard van der Hoff
7a48a6b63e Merge pull request #2722 from matrix-org/rav/delete_device_on_logout
Delete devices and pushers on logouts etc
2017-11-29 17:56:46 +00:00
Matthew Hodgson
47d99a20d5 Add user_directory_include_pattern config param to expand search results to additional users
Initial commit; this doesn't work yet - the LIKE filtering seems too aggressive.
It also needs _do_initial_spam to be aware of prepopulating the whole user_directory_search table with all users...
...and it needs a handle_user_signup() or something to be added so that new signups get incrementally added to the table too.

Committing it here as a WIP
2017-11-29 16:46:45 +00:00
Richard van der Hoff
ad7e570d07 Delete devices in various logout situations
Make sure that we delete devices whenever a user is logged out due to any of
the following situations:

 * /logout
 * /logout_all
 * change password
 * deactivate account (by the user or by an admin)
 * invalidate access token from a dynamic module

Fixes #2672.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
ae31f8ce45 Move set_password into its own handler
Non-functional refactoring to move set_password. This means that we'll be able
to properly deactivate devices and access tokens without introducing a
dependency loop.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
7ca5c68233 Move deactivate_account into its own handler
Non-functional refactoring to move deactivate_account. This means that we'll be
able to properly deactivate devices and access tokens without introducing a
dependency loop.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
2c6d63922a Remove pushers when deleting access tokens
Whenever an access token is invalidated, we should remove the associated
pushers.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
97d1a1dc01 Merge pull request #2718 from matrix-org/rav/notify_logcontexts
Clear logcontext before starting fed txn queue runner
2017-11-29 16:01:46 +00:00
Richard van der Hoff
8b45de90a4 Merge pull request #2719 from matrix-org/rav/handle_missing_hashes
Fix 500 when joining matrix-dev
2017-11-29 16:01:33 +00:00
Richard van der Hoff
7303ed65e1 Fix 500 when joining matrix-dev
matrix-dev has an event (`$/6ANj/9QWQyd71N6DpRQPf+SDUu11+HVMeKSpMzBCwM:zemos.net`)
which has no `hashes` member.

Check for missing `hashes` element in events.
2017-11-29 16:00:46 +00:00
Richard van der Hoff
da562bd6a1 Improve comments on get_user_by_access_token
because I have to reverse-engineer this every time.
2017-11-29 15:52:41 +00:00
Richard van der Hoff
d4fb4f7c52 Clear logcontext before starting fed txn queue runner
These processes take a long time compared to the request, so there is lots of
"Entering|Restoring dead context" in the logs. Let's try to shut it up a bit.
2017-11-28 15:26:14 +00:00
Erik Johnston
dfbc45302e PEP8 2017-11-28 15:23:26 +00:00
Erik Johnston
c4c1d170af Fix wrong avatars when inviting multiple users when creating room
We reused the `content` dictionary between invite requests, which meant they could end up reusing the profile info for a previous user
2017-11-28 15:19:15 +00:00
Richard van der Hoff
fd04968f32 federation_client script: Support for posting content 2017-11-28 11:59:24 +00:00
Luke Barnard
c2a1194424 Merge pull request #2715 from matrix-org/luke/group-guest-access
Allow guest access to group APIs for reading
2017-11-28 11:58:54 +00:00
Luke Barnard
ab1b2d0ff2 Allow guest access to group APIs for reading 2017-11-28 11:23:00 +00:00
Richard van der Hoff
5a4da5bf78 Merge pull request #2697 from matrix-org/rav/fix_urlcache_index_error
Fix error on sqlite 3.7
2017-11-27 12:25:48 +00:00
Richard van der Hoff
84b31a3e7a Merge pull request #2713 from matrix-org/rav/no_upsert_forever
Avoid retrying forever on IntegrityError
2017-11-27 12:19:35 +00:00
Richard van der Hoff
df6c72ede3 Merge pull request #2711 from matrix-org/rav/fix_dns_errhandler
Fix error handling on dns lookup
2017-11-27 12:19:18 +00:00
Richard van der Hoff
04bb79f139 Merge pull request #2710 from matrix-org/rav/remove_dead_code
Tiny code cleanups
2017-11-27 12:15:44 +00:00
Richard van der Hoff
e828a7380a Merge pull request #2708 from matrix-org/rav/replication_logcontext_leaks
Fix some logcontext leaks in replication resource
2017-11-27 12:15:33 +00:00
Richard van der Hoff
7ef22a41a3 Merge pull request #2707 from matrix-org/rav/fix_urlpreview
Fix OPTIONS on preview_url
2017-11-27 12:15:14 +00:00
Richard van der Hoff
96387bd26f Merge pull request #2705 from matrix-org/rav/improve_tracebacks
Improve tracebacks on exceptions
2017-11-27 12:07:52 +00:00
Richard van der Hoff
6be01f599b Improve tracebacks on exceptions
Use failure.Failure to recover our failure, which will give us a useful
stacktrace, unlike the rethrown exception.
2017-11-27 12:05:58 +00:00
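The pattern being referred to, sketched with Twisted's `Failure`: capturing the failure inside the `except` block preserves the original traceback, which re-raising a stored exception object later would lose.

```python
from twisted.python.failure import Failure


def call_and_report(func, *args):
    try:
        return func(*args)
    except Exception:
        # Failure() with no arguments snapshots the current exception and its
        # traceback, so the stack trace stays useful when it surfaces later.
        f = Failure()
        print(f.getTraceback())
        f.raiseException()  # re-raise with the original traceback attached
```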
Richard van der Hoff
63ccaa5873 Avoid retrying forever on IntegrityError 2017-11-27 12:00:07 +00:00
Richard van der Hoff
8b38096a89 Fix error handling on dns lookup
pass the right arguments to the errback handler

Fixes "TypeError('eb() takes exactly 2 arguments (1 given)',)"
2017-11-24 16:47:48 +00:00
Richard van der Hoff
795b0849f3 Add a comment which might save some confusion 2017-11-24 00:34:56 +00:00
Richard van der Hoff
7f14f0ae38 Remove dead sync_callback
This is never used; let's remove it to stop confusing things.
2017-11-24 00:32:04 +00:00
Richard van der Hoff
0edf085b68 Fix some logcontext leaks in replication resource
The @measure_func annotations rely on the wrapped function respecting the
logcontext rules. Add the necessary yields to make this work.
2017-11-23 23:19:43 +00:00
Richard van der Hoff
8132a6b7ac Fix OPTIONS on preview_url
Fixes #2706
2017-11-23 17:52:31 +00:00
Richard van der Hoff
6b48b3e277 fix sql fails 2017-11-22 18:06:24 +00:00
Richard van der Hoff
2908f955d1 Check database in has_completed_background_updates
so that the right thing happens on workers.
2017-11-22 18:02:15 +00:00
Richard van der Hoff
79eba878a7 Merge pull request #2701 from matrix-org/rav/one_mediarepo_to_rule_them_all
Try to avoid having multiple PreviewUrlResource instances
2017-11-22 16:46:09 +00:00
Richard van der Hoff
68ca864141 Add config option to disable media_repo on main synapse
... to stop us doing the cache cleanup jobs on the master.
2017-11-22 16:20:27 +00:00
Richard van der Hoff
e1fd4751de Build MediaRepositoryResource as a homeserver dependency
This avoids the scenario where we have four different PreviewUrlResources
configured on a single app, each of which have their own caches and cache
clearing jobs.
2017-11-22 16:19:49 +00:00
Richard van der Hoff
148c113fbe Merge pull request #2700 from matrix-org/rav/worker_docs
* Improve documentation of workers

Fixes https://github.com/matrix-org/synapse/issues/2554
2017-11-21 18:28:40 +00:00
Richard van der Hoff
a0c6688976 Improve documentation of workers
Fixes https://github.com/matrix-org/synapse/issues/2554
2017-11-21 18:28:13 +00:00
Richard van der Hoff
d5a7c56ef9 Merge pull request #2698 from matrix-org/rav/remove_dead_dependencies
Clean up dependency list
2017-11-21 17:38:42 +00:00
Richard van der Hoff
0b4aa2dc21 Merge pull request #2689 from matrix-org/rav/unlock_account_data_upsert
Avoid locking account_data tables for upserts
2017-11-21 13:39:14 +00:00
Matthew Hodgson
3ab2cfec47 sanity checks 2017-11-21 12:10:20 +00:00
Richard van der Hoff
7298ed7c51 Clean up dependency list
remove those that aren't used at all, and replace the ones that don't have
builders with simple getters rather than dynamically-generated methods.
2017-11-21 11:15:41 +00:00
Richard van der Hoff
7098b65cb8 Fix error on sqlite 3.7
Create the url_cache index on local_media_repository as a background update, so
that we can detect whether we are on sqlite or not and create a partial or
complete index accordingly.

To avoid running the cleanup job before we have built the index, add a bailout
which will defer the cleanup if the bg updates are still running.

Fixes https://github.com/matrix-org/synapse/issues/2572.
2017-11-21 11:14:17 +00:00
Oliver Kurz
83d8d4d8cd Allow use of higher versions of saml2
The package was pinned to <4.0 with 07cf96eb because "from saml2 import
config" did not work. This seems to have been fixed in the mean time in the
saml2 package and therefore should not stop to use a more recent version.

Signed-off-by: Oliver Kurz <okurz@suse.de>
2017-11-20 11:14:39 +01:00
Matthew Hodgson
2145ee1976 don't double-invite in sync_room_to_group.pl 2017-11-19 00:48:47 +00:00
Richard van der Hoff
59a7275258 Merge pull request #2688 from matrix-org/rav/unlock_more_upsert
Avoid locking for upsert on pushers tables
2017-11-17 13:09:31 +00:00
Richard van der Hoff
d8a05418f9 Merge branch 'master' into develop 2017-11-17 12:13:14 +00:00
Richard van der Hoff
b102e93571 Merge branch 'release-v0.25.1' 2017-11-17 12:03:07 +00:00
Luke Barnard
cdf6fc15b0 Merge pull request #2686 from matrix-org/luke/as-flair
Add automagical AS Publicised Group(s)
2017-11-17 10:13:46 +00:00
Richard van der Hoff
74bbeb4373 Bump version in __init__.py 2017-11-17 10:10:53 +00:00
Richard van der Hoff
2187724ad2 Prep changelog for v0.25.1 2017-11-17 10:09:16 +00:00
Jurek
eded7084d2 Fix auth handler #2678 2017-11-17 10:07:27 +00:00
Matthew Hodgson
34c3d0a386 typo 2017-11-17 01:54:02 +00:00
Matthew Hodgson
9d50b6f0ea quick and dirty room membership<->group membership sync script 2017-11-17 01:54:02 +00:00
Luke Barnard
ab1dc84779 Add extra space before inline comment 2017-11-16 18:22:40 +00:00
Luke Barnard
7fb0e98b03 Extract group_id from the dict for multiple use 2017-11-16 18:18:30 +00:00
Luke Barnard
e836bdf734 Fix tests 2017-11-16 18:14:39 +00:00
Richard van der Hoff
c46139a17e Avoid locking account_data tables for upserts 2017-11-16 18:08:01 +00:00
Luke Barnard
d8391f0541 Remove unused GROUP_ID_REGEX 2017-11-16 18:05:57 +00:00
Luke Barnard
4e8374856d Document get_groups_for_user 2017-11-16 18:03:46 +00:00
Luke Barnard
270f9cd23a Flake8 2017-11-16 18:03:31 +00:00
Luke Barnard
9d83d52027 Use a generator instead of a list 2017-11-16 17:57:34 +00:00
Luke Barnard
5b48eec4a1 Make sure we check AS groups for lookup on bulk 2017-11-16 17:55:15 +00:00
Luke Barnard
b1edf26051 Check group_id belongs to this domain 2017-11-16 17:54:27 +00:00
Richard van der Hoff
06e5bcfc83 Avoid locking for upsert on pushers tables
* replace the upsert into deleted_pushers with an insert
* no need to lock for upsert on pusher_throttle
2017-11-16 17:52:23 +00:00
Jurek
624a8bbd67 Fix auth handler #2678 2017-11-16 17:19:02 +00:00
Richard van der Hoff
b26cbbb60e Revert "Merge pull request #2679 from jkolo/fix_auth_handler"
This PR was against master, not develop :(

This reverts commit 203058a027, reversing
changes made to 552f123bea.
2017-11-16 17:18:11 +00:00
Richard van der Hoff
203058a027 Merge pull request #2679 from jkolo/fix_auth_handler
Fix auth handler
2017-11-16 17:09:00 +00:00
Luke Barnard
97bd18af4e Add automagical AS Publicised Group(s)
via registration file "users" namespace:

```YAML
...
namespaces:
  users:
    - exclusive: true
      regex: '.*luke.*'
      group_id: '+all_the_lukes:hsdomain'
...
```

This is part of giving App Services their own groups for matching users. With this, ghost users will be given the appearance that they are in a group and that they have publicised the fact, but _only_ from the perspective of the `get_publicised_groups_for_user` API.
2017-11-16 16:44:55 +00:00
Richard van der Hoff
ba05f28ae7 Merge pull request #2684 from matrix-org/rav/unlock_upsert
Start work on avoiding table locks for upserts
2017-11-16 16:27:29 +00:00
Richard van der Hoff
77a1227870 Fix broken ref to IntegrityError 2017-11-16 16:03:38 +00:00
Richard van der Hoff
7ab2b69e18 Avoid locking pushers table on upsert
Now that _simple_upsert will retry on IntegrityError, we don't need to lock the
table.
2017-11-16 15:32:01 +00:00
Richard van der Hoff
10aaa1bc15 _simple_upsert: retry on IntegrityError
wrap the call to _simple_upsert_txn in a loop so that we retry on an
IntegrityError: this means we can avoid locking the table provided there is a
unique index.
2017-11-16 15:30:15 +00:00
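A minimal sketch of that retry loop, against an illustrative `pusher_throttle`-style table (column names invented for the example). Each attempt runs in its own transaction, so a collision with a concurrent insert just means going round again.

```python
import sqlite3


def upsert_pusher_throttle(conn, pusher_id, last_sent, max_attempts=5):
    """Upsert without a table lock: UPDATE first, INSERT if nothing matched,
    retrying the whole transaction if a concurrent INSERT beat us to it."""
    for attempt in range(max_attempts):
        try:
            with conn:  # one transaction per attempt
                cur = conn.execute(
                    "UPDATE pusher_throttle SET last_sent = ? WHERE pusher_id = ?",
                    (last_sent, pusher_id),
                )
                if cur.rowcount == 0:
                    conn.execute(
                        "INSERT INTO pusher_throttle (pusher_id, last_sent)"
                        " VALUES (?, ?)",
                        (pusher_id, last_sent),
                    )
            return
        except sqlite3.IntegrityError:
            if attempt == max_attempts - 1:
                raise
            # another writer inserted the row between our UPDATE and our
            # INSERT; on the retry the UPDATE will find it


conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE pusher_throttle (pusher_id TEXT PRIMARY KEY, last_sent INTEGER)"
)
upsert_pusher_throttle(conn, "p1", 100)
upsert_pusher_throttle(conn, "p1", 200)
```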
Richard van der Hoff
cdc9e50a5d Cleanup in _simple_upsert_txn
Bail out early to reduce indentation
2017-11-16 15:29:10 +00:00
Richard von Seck
6f05de0e5e synapse/config/password_auth_providers: Fixed bracket typo
Signed-off-by: Richard von Seck <richard.von-seck@gmx.net>
2017-11-16 15:59:38 +01:00
Jurek
56e2a4333e Fix auth handler #2678 2017-11-15 22:49:43 +01:00
Richard van der Hoff
f959c01600 Merge pull request #2661 from matrix-org/rav/statereadstore
Pull out bits of StateStore to a mixin
2017-11-15 17:23:01 +00:00
Richard van der Hoff
117a8c0d35 Merge pull request #2677 from matrix-org/rav/spec_r0.3.0
Declare support for r0.3.0
2017-11-15 17:10:45 +00:00
Richard van der Hoff
30d2730ee2 Declare support for r0.3.0 2017-11-15 16:24:22 +00:00
Erik Johnston
aa812feb41 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-11-15 11:35:34 +00:00
Erik Johnston
552f123bea Merge branch 'release-v0.25.0' of github.com:matrix-org/synapse 2017-11-15 11:32:24 +00:00
Erik Johnston
5d0cbf763f Bump changelog 2017-11-15 11:29:32 +00:00
Richard van der Hoff
1b83c09c03 Merge pull request #2675 from matrix-org/rav/remove_broken_logcontext_funcs
Remove preserve_context_over_{fn, deferred}
2017-11-15 11:13:53 +00:00
David Baker
7190a550dc Merge pull request #2650 from matrix-org/dbkr/push_include_content_option
Rename redact_content option to include_content
2017-11-15 10:47:38 +00:00
Richard van der Hoff
b2cd6accf5 Remove __PreservingContextDeferred too 2017-11-14 23:00:10 +00:00
Andrew Conrad
053ecae4db Mention SyTest in the README, after Development
Signed-off-by: Andrew Conrad <aconrad103@gmail.com>
2017-11-14 15:11:35 -06:00
Richard van der Hoff
038c994724 Merge pull request #2648 from krombel/update_prometheus
update prometheus-config to new format
2017-11-14 19:14:32 +00:00
Krombel
c161472575 Make clear that the config has changed since prometheus v2
This restores a config that is usable for Prometheus pre-v2.0.0.
The new config only works for Prometheus v2+.
2017-11-14 19:59:26 +01:00
Richard van der Hoff
008aa2fc6d Merge pull request #2671 from matrix-org/rav/room_list_fixes
Reshuffle room list request code
2017-11-14 18:11:02 +00:00
Richard van der Hoff
6f30fd9235 Merge pull request #2673 from matrix-org/erikj/fix_port_script_groups
Add new boolean columns to port script
2017-11-14 16:03:41 +00:00
Erik Johnston
9ecf621404 Less s's 2017-11-14 15:55:15 +00:00
Erik Johnston
22db751d1e Add new boolean columns to port script 2017-11-14 15:48:50 +00:00
Erik Johnston
03feb7a34d Bump version and changelog 2017-11-14 14:51:25 +00:00
Richard van der Hoff
35a4b63240 Pull out bits of StateStore to a mixin
... so that we don't need to secretly gut-wrench it for use in the slaved
stores. I haven't done the other stores yet, but we should. I'm tired of the
workers breaking every time we tweak the stores because I forgot to gut-wrench
the right method.

fixes https://github.com/matrix-org/synapse/issues/2655.
2017-11-14 11:43:58 +00:00
Richard van der Hoff
4dd1bfa8c1 Revert "Revert "move _state_group_cache to statestore""
We're going to fix this properly on this branch, so that the _state_group_cache
can end up in StateGroupReadStore.

This reverts commit ab335edb02.
2017-11-14 11:43:58 +00:00
Richard van der Hoff
6caa379ba1 Merge pull request #2658 from matrix-org/rav/store_heirarchy_init
Make __init__ consistent across Store hierarchy
2017-11-14 11:43:12 +00:00
Richard van der Hoff
7e6fa29cb5 Remove preserve_context_over_{fn, deferred}
Both of these functions are known to leak logcontexts. Replace the remaining
calls to them and kill them off.
2017-11-14 11:22:42 +00:00
Richard van der Hoff
44a1bfd6a6 Reshuffle room list request code
I'm not entirely sure if this will actually help anything, but it simplifies
the code and might give further clues about why room list search requests are
blowing out the get_current_state_ids caches.
2017-11-14 10:29:58 +00:00
Richard van der Hoff
1fc66c7460 Add a load of logging to the room_list handler
So we can see what it gets up to.
2017-11-14 10:23:47 +00:00
Richard van der Hoff
7bd6c87eca Merge pull request #2668 from turt2live/travis/whoami
Add a route for determining who you are
2017-11-14 09:54:21 +00:00
Travis Ralston
812c191939 Remove redundent call
Signed-off-by: Travis Ralston <travpc@gmail.com>
2017-11-13 12:44:21 -07:00
Richard van der Hoff
c741ba59c9 Merge pull request #2669 from matrix-org/rav/cache_urlpreview_failure
Cache failures in url_preview handler
2017-11-13 18:36:24 +00:00
Richard van der Hoff
781c15a6a3 Merge pull request #2663 from matrix-org/rav/invalid_request_utf8
Fix 500 on invalid utf-8 in request
2017-11-13 18:35:24 +00:00
David Baker
45ab288e07 Print instead of logging
because we had to wait until the logger was set up
2017-11-13 18:32:08 +00:00
Richard van der Hoff
8b33ac8f6c Merge branch 'develop' into rav/invalid_request_utf8 2017-11-13 11:56:22 +00:00
Richard van der Hoff
63ef607f1f Fix tests for Store.__init__ update
Fix the test to pass the right number of args to the Store constructors
2017-11-13 10:46:08 +00:00
Richard van der Hoff
6cfee09be9 Make __init__ consistent across Store hierarchy
Add db_conn parameters to the `__init__` methods of the *Store classes, so that
they are all consistent, which makes the multiple inheritance work correctly
(and so that we can later extract mixins which can be used in the slavedstores)
2017-11-13 10:46:07 +00:00
Erik Johnston
ab335edb02 Revert "move _state_group_cache to statestore"
This reverts commit f5cf3638e9.
2017-11-13 10:05:33 +00:00
Erik Johnston
bfbf1e1f1a Up cache size of get_global_account_data_by_type_for_user 2017-11-13 09:52:11 +00:00
Travis Ralston
2d314b771f Add a route for determining who you are
Useful for applications which may have an access token, but no idea as to who owns it.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2017-11-12 23:39:38 -07:00
Richard van der Hoff
5d15abb120 Bit more logging 2017-11-10 16:58:04 +00:00
Richard van der Hoff
46790f50cf Cache failures in url_preview handler
Reshuffle the caching logic in the url_preview handler so that failures are
cached (and to generally simplify things and fix the logcontext leaks).
2017-11-10 16:50:50 +00:00
Richard van der Hoff
4d0414c714 Merge pull request #2662 from matrix-org/rav/fix_mxids_again
Downcase userid on registration
2017-11-10 13:57:34 +00:00
Richard van der Hoff
e508145c9b Add some more comments on appservice user registration
Explain why we don't validate userids registered via app services
2017-11-10 12:39:45 +00:00
Richard van der Hoff
e0ebd1e4bd Downcase userids for shared-secret registration 2017-11-10 12:39:05 +00:00
Richard van der Hoff
f90649eb2b Fix 500 on invalid utf-8 in request
If somebody sends us a request where the body is invalid utf-8, we should
return a 400 rather than a 500. (json.loads throws a UnicodeError in this
situation)

We might as well catch all Exceptions here: it seems very unlikely that we
would get a request that *isn't* caused by invalid json.
2017-11-10 09:15:39 +00:00
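Roughly the shape of the fix: treat any failure to decode or parse the body as the client's fault and surface it as a 400. The error class below is an illustrative stand-in.

```python
import json


class BadRequestError(Exception):
    """Illustrative stand-in for a 400-level API error."""


def parse_json_object_from_request(body_bytes):
    try:
        content = json.loads(body_bytes.decode("utf-8"))
    except Exception:
        # invalid utf-8, malformed json, etc. -- all the client's problem,
        # so map it to a 400 rather than letting it bubble up as a 500
        raise BadRequestError("Content not JSON.")
    if not isinstance(content, dict):
        raise BadRequestError("Content must be a JSON object.")
    return content
```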
Richard van der Hoff
9b599bc18d Downcase userid on registration
Force username to lowercase before attempting to register

https://github.com/matrix-org/synapse/issues/2660
2017-11-09 22:20:01 +00:00
Richard van der Hoff
9b803ccc98 Revert "Allow upper-case characters in mxids"
This reverts commit b70b646903.
2017-11-09 21:57:24 +00:00
Richard van der Hoff
1282086f58 Merge pull request #2659 from matrix-org/rav/apparently_we_dont_follow_our_own_spec_now
Allow upper-case characters in mxids
2017-11-09 20:06:47 +00:00
Richard van der Hoff
b70b646903 Allow upper-case characters in mxids
Because we're never going to be able to fix this :'(
2017-11-09 19:36:13 +00:00
Erik Johnston
2dce6b15c3 Fix typo 2017-11-09 15:56:16 +00:00
Erik Johnston
4e2b2508af Register group servlet 2017-11-09 15:49:42 +00:00
Luke Barnard
0ea5310290 Merge pull request #2657 from matrix-org/erikj/group_visibility_namespace
Namespace visibility options for groups
2017-11-09 15:41:51 +00:00
Erik Johnston
13735843c7 Namespace visibility options for groups 2017-11-09 15:27:18 +00:00
Richard van der Hoff
618c7b816a Merge pull request #2656 from matrix-org/rav/fix_deactivate
Fix 'NoneType' not iterable in /deactivate
2017-11-09 15:20:35 +00:00
Erik Johnston
0fcb5a8ce5 Merge pull request #2651 from matrix-org/erikj/update_group_room_settings
Change so that update group room visibility isn't an upsert
2017-11-09 15:20:24 +00:00
Richard van der Hoff
889102315e Fix 'NoneType' not iterable in /deactivate
make sure we actually return a value from user_delete_access_tokens
2017-11-09 15:15:33 +00:00
David Baker
b2a788e902 Make the commented config have the default 2017-11-09 10:11:42 +00:00
Erik Johnston
82e4bfb53d Add brackets 2017-11-09 10:06:42 +00:00
Erik Johnston
e8814410ef Have an explicit API to update room config 2017-11-08 16:13:27 +00:00
Erik Johnston
94ff2cda73 Revert "Modify group room association API to allow modification of is_public" 2017-11-08 15:43:34 +00:00
Erik Johnston
d305987b40 Merge pull request #2631 from xyzz/fix_appservice_event_backlog
Fix appservices being backlogged and not receiving new events due to a bug in notify_interested_services
2017-11-08 11:54:10 +00:00
Erik Johnston
167eb01d83 Merge pull request #2637 from spantaleev/avoid-noop-media-deletes
Avoid no-op media deletes
2017-11-08 11:53:27 +00:00
David Baker
ad408beb66 better comments 2017-11-08 11:50:08 +00:00
David Baker
1b870937ae Log if any of the old config flags are set 2017-11-08 11:46:24 +00:00
David Baker
2a98ba0ed3 Rename redact_content option to include_content
The redact_content option never worked because it read the wrong config
section. The PR introducing it
(https://github.com/matrix-org/synapse/pull/2301) had feedback suggesting the
name be changed to not re-use the term 'redact' but this wasn't
incorporated.

This renames the option to give it a less confusing name, and also
means that people who've set the redact_content option won't suddenly
see a behaviour change when upgrading synapse, but instead can set
include_content if they want to.

This PR also updates the wording of the config comment to clarify
that this has no effect on event_id_only push.

Includes https://github.com/matrix-org/synapse/pull/2422
2017-11-08 10:35:30 +00:00
Richard van der Hoff
02a9a93bde Merge pull request #2649 from matrix-org/rav/fix_delta_on_state_res
Fix bug in state group storage
2017-11-08 09:22:13 +00:00
Richard van der Hoff
e148438e97 s/items/iteritems/ 2017-11-08 09:21:41 +00:00
Ilya Zhuravlev
d46386d57e Remove useless assignment in notify_interested_services 2017-11-07 22:23:22 +03:00
Matthew Hodgson
228ccf1fe3 Merge pull request #2643 from matrix-org/matthew/user_dir_typos
Fix various embarrassing typos around user_directory and add some doc.
2017-11-07 17:31:11 +00:00
Richard van der Hoff
780dbb378f Update deltas when doing auth resolution
Fixes a bug where the persisted state groups were different to those actually
being used after auth resolution.
2017-11-07 16:43:00 +00:00
Richard van der Hoff
1ca4288135 factor out _update_context_for_auth_events
This is duplicated, so let's factor it out before fixing it
2017-11-07 16:43:00 +00:00
Richard van der Hoff
f5cf3638e9 move _state_group_cache to statestore
this is internal to statestore, so let's keep it there.
2017-11-07 16:43:00 +00:00
Erik Johnston
5ef5e14ecc Merge pull request #2636 from farialima/me-master
Fix for #2635: correctly update rooms avatar/display name when modified by admin
2017-11-07 13:49:27 +00:00
Erik Johnston
76c9af193c Revert "Merge branch 'master' of github.com:matrix-org/synapse into develop"
This reverts commit f9b255cd62, reversing
changes made to 1bd654dabd.
2017-11-07 13:32:35 +00:00
Erik Johnston
f9b255cd62 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-11-07 13:31:03 +00:00
Krombel
44ad6dd4bf update prometheus-config to new format 2017-11-07 13:35:35 +01:00
Luke Barnard
1bd654dabd Merge pull request #2647 from matrix-org/luke/get-group-users-is-privileged
Return whether a user is an admin within a group
2017-11-07 12:05:11 +00:00
Luke Barnard
38b265cb51 Remember to pick is_admin out of the db 2017-11-07 11:24:04 +00:00
Luke Barnard
5561c09091 Return whether a user is an admin within a group 2017-11-07 11:18:45 +00:00
Matthew Hodgson
3db5ff69b2 Merge pull request #2576 from maximevaillancourt/exclude-noscript-url-preview
Ignore <noscript> tags when generating URL preview descriptions
2017-11-07 11:09:22 +00:00
Richard van der Hoff
ec12e7eada Merge pull request #2646 from matrix-org/rav/logging_for_limiter
Logging and logcontext fixes for Limiter
2017-11-07 10:47:00 +00:00
Matthew Hodgson
631fa4a1b7 create new indexes before dropping old ones to keep safetynet in place 2017-11-07 10:41:55 +00:00
Richard van der Hoff
bf993db11c Logging and logcontext fixes for Limiter
Add some logging to the Limiter in a similar spirit to the Linearizer, to help
debug issues.

Also fix a logcontext leak.

Also refactor slightly to avoid throwing exceptions.
2017-11-07 00:48:57 +00:00
Matthew Hodgson
4ad883398f s/users_in_pubic_room/users_in_public_rooms/g 2017-11-04 19:39:40 +00:00
Matthew Hodgson
d802e8ca6a s/users_in_pubic_room/users_in_public_rooms/g 2017-11-04 19:38:13 +00:00
Matthew Hodgson
a100700630 fix copyright.... 2017-11-04 19:35:49 +00:00
Matthew Hodgson
b6b075fd49 s/popualte/populate/ 2017-11-04 19:35:33 +00:00
Matthew Hodgson
d1622e080f s/intial/initial/ 2017-11-04 19:35:14 +00:00
Matthew Hodgson
2ac6deafb7 simplify instructions for regenerating user_dir 2017-11-04 19:34:59 +00:00
Slavi Pantaleev
805196fbeb Avoid no-op media deletes
If there are no media entries to delete,
avoid creating transactions, prepared statements
and unnecessary log entries.

Signed-off-by: Slavi Pantaleev <slavi@devture.com>
2017-11-04 09:50:15 +02:00
Francois Granade
f103b91ffa removed unused import flagged by flake8 2017-11-03 18:45:49 +01:00
Francois Granade
fa4f337b49 Fix for issue 2635: correctly update rooms avatar/display name when modified by admin 2017-11-03 18:25:04 +01:00
Ilya Zhuravlev
8a4a0ddea6 Fix appservice tests to account for new behavior of notify_interested_services 2017-11-02 23:19:57 +03:00
Ilya Zhuravlev
45fbe4ff67 Fix appservices being backlogged and not receiving new events due to a bug in notify_interested_services 2017-11-02 22:49:43 +03:00
David Baker
f851bc8182 Merge pull request #2630 from matrix-org/luke/fix-rooms-in-group
Make the get_rooms_in_group API more sane
2017-11-02 17:23:17 +00:00
David Baker
9e09a1800b Merge pull request #2629 from matrix-org/rav/register_inhibit_login
support inhibit_login in /register
2017-11-02 16:51:35 +00:00
Luke Barnard
a34c586a89 Make the get_rooms_in_group API more sane
Return entries with is_public = True when they're public and is_public = False otherwise.
2017-11-02 16:42:30 +00:00
Richard van der Hoff
6c3a02072b support inhibit_login in /register
Allow things to pass inhibit_login when registering to ... inhibit logins.
2017-11-02 16:31:07 +00:00
David Baker
4a6754baf2 Merge pull request #2628 from matrix-org/rav/module_api_hooks
Add more hooks to ModuleApi
2017-11-02 15:37:14 +00:00
David Baker
4b36897cd9 Merge remote-tracking branch 'origin/develop' into rav/module_api_hooks 2017-11-02 15:19:17 +00:00
David Baker
d4553818a0 Merge pull request #2627 from matrix-org/rav/custom_rest_endpoints
Add a hook for custom rest endpoints
2017-11-02 15:18:37 +00:00
David Baker
6b6f03ae05 Merge pull request #2626 from matrix-org/rav/refactor_module_api
Factor _AccountHandler proxy out to ModuleApi
2017-11-02 15:15:30 +00:00
David Baker
77e3757fa9 Merge pull request #2625 from matrix-org/rav/named_resource_refactor
Factor out _configure_named_resource
2017-11-02 15:11:30 +00:00
Richard van der Hoff
6b60f7dca0 Add more hooks to ModuleApi
add `get_user_by_req` and `invalidate_access_token`
2017-11-02 14:37:39 +00:00
Richard van der Hoff
fcdfc911ee Add a hook for custom rest endpoints
Let the user specify custom modules which can be used for implementing extra
endpoints.
2017-11-02 14:36:55 +00:00
Richard van der Hoff
1189be43a2 Factor _AccountHandler proxy out to ModuleApi
We're going to need to use this from places that aren't password auth, so let's
move it to a proper class.
2017-11-02 14:36:11 +00:00
Richard van der Hoff
6650a07ede Factor out _configure_named_resource
This was a bit of a code vomit, so let's factor it out to preserve some sanity
2017-11-02 14:33:37 +00:00
David Baker
b19d9e2174 Merge pull request #2624 from matrix-org/rav/password_provider_notify_logout
Notify auth providers on logout
2017-11-02 10:55:17 +00:00
David Baker
1f080a6c97 Merge pull request #2623 from matrix-org/rav/callbacks_for_auth_providers
Allow password_auth_providers to return a callback
2017-11-02 10:49:03 +00:00
David Baker
04897c9dc1 Merge pull request #2622 from matrix-org/rav/db_access_for_auth_providers
Let auth providers get to the database
2017-11-02 10:41:25 +00:00
Richard van der Hoff
979eed4362 Fix user-interactive password auth
this got broken in the previous commit
2017-11-01 17:03:20 +00:00
Richard van der Hoff
bc8a5c0330 Notify auth providers on logout
Provide a hook by which auth providers can be notified of logouts.
2017-11-01 16:51:51 +00:00
Richard van der Hoff
4c8f94ac94 Allow password_auth_providers to return a callback
... so that they have a way to record access tokens.
2017-11-01 16:51:03 +00:00
Richard van der Hoff
846a94fbc9 Merge pull request #2620 from matrix-org/rav/auth_non_password
Let password auth providers handle arbitrary login types
2017-11-01 16:45:33 +00:00
Richard van der Hoff
3cd6b22c7b Let password auth providers handle arbitrary login types
Provide a hook where password auth providers can say they know about other
login types, and get passed the relevant parameters
2017-11-01 16:43:57 +00:00
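A hedged sketch of what a provider using such a hook might look like: it advertises the login types it knows about and the fields it expects, and is handed the login dictionary to check. Method names and return conventions here are illustrative, not a documented interface.

```python
class ExamplePasswordProvider(object):
    """Illustrative auth provider that handles a custom login type."""

    def __init__(self, config, account_handler):
        self.account_handler = account_handler

    def get_supported_login_types(self):
        # login type -> fields the provider expects in the login dictionary
        return {"com.example.totp": ("token",)}

    def check_auth(self, username, login_type, login_dict):
        """Return the qualified user id on success, or None to reject."""
        if login_type != "com.example.totp":
            return None
        if verify_totp(username, login_dict["token"]):  # hypothetical check
            return "@%s:example.com" % (username,)
        return None


def verify_totp(username, token):
    # stand-in for a real TOTP check
    return False
```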
David Baker
c9b9ef575b Merge pull request #2621 from matrix-org/rav/refactor_accesstoken_delete
Move access token deletion into auth handler
2017-11-01 16:26:06 +00:00
Matthew Hodgson
275826f234 Merge pull request #2617 from matrix-org/matthew/auto-displayname
automatically set default displayname on register
2017-11-01 16:21:16 +00:00
David Baker
4f0488b307 Merge remote-tracking branch 'origin/develop' into rav/refactor_accesstoken_delete 2017-11-01 16:20:19 +00:00
David Baker
e5e930aec3 Merge pull request #2615 from matrix-org/rav/break_auth_device_dep
Break dependency of auth_handler on device_handler
2017-11-01 16:06:31 +00:00
David Baker
fbbacb284e Merge pull request #2613 from matrix-org/rav/kill_refresh_tokens
Remove the last vestiges of refresh_tokens
2017-11-01 15:57:35 +00:00
Matthew Hodgson
9f7a555b4e switch to setting default displayname in the storage layer
to avoid clobbering guest user displaynames on registration
2017-11-01 15:51:30 +00:00
Richard van der Hoff
dd13310fb8 Move access token deletion into auth handler
Also move duplicated deactivation code into the auth handler.

I want to add some hooks when we deactivate an access token, so let's bring it
all in here so that there's somewhere to put it.
2017-11-01 15:46:22 +00:00
David Baker
691cc4e036 Merge pull request #2618 from matrix-org/dbkr/log_login_requests
Log login requests
2017-11-01 14:11:28 +00:00
David Baker
0bb253f37b Apparently this is python 2017-11-01 14:02:52 +00:00
David Baker
59e7e62c4b Log login requests
Carefully though, to avoid logging passwords
2017-11-01 13:58:01 +00:00
Matthew Hodgson
f8420d6279 automatically set default displayname on register
to avoid leaking ugly MXIDs and cluttering up the timeline with
displayname changes as well as membership joins for autojoin rooms
(e.g. the status autojoin rooms), automatically set the displayname
to match the localpart of the mxid upon registration.
2017-11-01 13:15:41 +00:00
Luke Barnard
99354b430e Merge pull request #2612 from matrix-org/luke/groups-room-relationship-is-public
Modify group room association API to allow modification of is_public
2017-11-01 11:08:36 +00:00
Richard van der Hoff
74c56f794c Break dependency of auth_handler on device_handler
I'm going to need to make the device_handler depend on the auth_handler, so I
need to break this dependency to avoid a cycle.

It turns out that the auth_handler was only using the device_handler in one
place which was an edge case which we can more elegantly handle by throwing an
error rather than fixing it up.
2017-11-01 10:27:06 +00:00
Richard van der Hoff
02237ce725 Fix tests for refresh_token removal 2017-11-01 10:19:42 +00:00
Luke Barnard
318a249c8b Leave is_public as required argument of update_room_group_association 2017-11-01 09:36:01 +00:00
Luke Barnard
207fabbc6a Update docs for updating room group association 2017-11-01 09:35:15 +00:00
Richard van der Hoff
356bcafc44 Remove the last vestiges of refresh_tokens 2017-10-31 20:35:58 +00:00
Richard van der Hoff
3e0aaad190 Let auth providers get to the database
Somewhat open to abuse, but also somewhat unavoidable :/
2017-10-31 17:22:29 +00:00
Richard van der Hoff
a72e4e3e28 Merge pull request #2610 from matrix-org/rav/schema_for_pw_providers
DB schema interface for password auth providers
2017-10-31 17:20:47 +00:00
Luke Barnard
13b3d7b4a0 Flake8 2017-10-31 17:20:11 +00:00
Richard van der Hoff
e025aec028 Merge pull request #2611 from matrix-org/dbkr/port_script_drop_nuls
Make the port script drop NUL values in all tables
2017-10-31 17:16:08 +00:00
Luke Barnard
20fe347906 Modify group room association API to allow modification of is_public
also includes renamings to make things more consistent.
2017-10-31 17:04:28 +00:00
David Baker
9d419f48e6 Make the port script drop NUL values in all tables
Postgres doesn't support NULs in strings, so it makes the script
throw an exception and stop if any values contain \0. Drop them
with an appropriate warning.
2017-10-31 16:58:49 +00:00
Richard van der Hoff
9ded00f221 fix tests 2017-10-31 14:21:13 +00:00
Richard van der Hoff
1650eb5847 DB schema interface for password auth providers
Provide an interface by which password auth providers can register db schema
files to be run at startup
2017-10-31 14:01:53 +00:00
David Baker
c31a7c3ff6 Merge pull request #2609 from matrix-org/rav/refactor_login
Refactor some logic from LoginRestServlet into AuthHandler
2017-10-31 13:51:36 +00:00
Richard van der Hoff
b8e54fbc08 Merge pull request #2607 from matrix-org/rav/cleanup_ldap_hacks
Clean up backwards-compat hacks for ldap
2017-10-31 13:41:12 +00:00
David Baker
a1f8b0fd64 Merge pull request #2608 from matrix-org/rav/password_provider_doc
Start some documentation on password providers
2017-10-31 13:40:10 +00:00
Richard van der Hoff
1b65ae00ac Refactor some logic from LoginRestServlet into AuthHandler
I'm going to need some more flexibility in handling login types in password
auth providers, so as a first step, move some stuff from LoginRestServlet into
AuthHandler.

In particular, we pass everything other than SAML, JWT and token logins down to
the AuthHandler, which now has responsibility for checking the login type and
fishing the password out of the login dictionary, as well as qualifying the
user_id if need be. Ideally SAML, JWT and token would go that way too, but
there's no real need for it right now and I'm trying to minimise impact.

This commit *should* be non-functional.
2017-10-31 10:48:41 +00:00
Richard van der Hoff
ebda45de4c Start some documentation on password providers
Document the existing interface, before I start adding new stuff.
2017-10-31 10:47:52 +00:00
Richard van der Hoff
ffc574a6f9 Clean up backwards-compat hacks for ldap
try to make the backwards-compat flows follow the same code paths as the modern
impl.

This commit should be non-functional.
2017-10-31 10:47:02 +00:00
Richard van der Hoff
e2f4190209 Merge pull request #2605 from matrix-org/luke/fix-group-creation-error-wording
Fix wording on group creation error
2017-10-30 16:06:42 +00:00
Luke Barnard
9bc17fc5fb Fix wording on group creation error 2017-10-30 15:17:23 +00:00
Matthew Hodgson
208a6647f1 fix typo 2017-10-29 20:54:20 +00:00
Matthew Hodgson
e51c2bcaef move url_previews to MD as RST does my head in 2017-10-29 20:47:17 +00:00
Erik Johnston
71a1bd53b2 Merge pull request #2599 from matrix-org/erikj/groups_invite
Fix typo when checking if user is invited to group
2017-10-27 17:34:54 +01:00
Erik Johnston
d0abb4e8e6 Fix typo when checking if user is invited to group 2017-10-27 16:57:19 +01:00
Erik Johnston
977078f06d Fix bad merge 2017-10-27 15:10:50 +01:00
Erik Johnston
6980c4557e Merge branch 'erikj/attestation_jitter' of github.com:matrix-org/synapse into develop 2017-10-27 15:09:05 +01:00
Erik Johnston
632baf799e Merge pull request #2598 from matrix-org/revert-2596-erikj/attestation_jitter
Revert "Add jitter to validity period of attestations"
2017-10-27 15:08:29 +01:00
Erik Johnston
af92f5b00f Revert "Add jitter to validity period of attestations" 2017-10-27 15:07:21 +01:00
Erik Johnston
4ab8abbc2b Merge branch 'erikj/attestation_local_fix' of github.com:matrix-org/synapse into develop 2017-10-27 15:07:08 +01:00
Erik Johnston
b1e62d4a57 Merge pull request #2596 from matrix-org/erikj/attestation_jitter
Add jitter to validity period of attestations
2017-10-27 14:20:25 +01:00
Erik Johnston
6af3656deb Merge pull request #2595 from matrix-org/erikj/attestation_commnet
Add comment about attestations
2017-10-27 14:20:19 +01:00
Richard van der Hoff
4d83632009 Merge pull request #2591 from matrix-org/rav/device_delete_auth
Device deletion: check UI auth matches access token
2017-10-27 12:30:10 +01:00
Richard van der Hoff
110b373e9c Merge pull request #2589 from matrix-org/rav/as_deactivate_account
Allow ASes to deactivate their own users
2017-10-27 12:29:32 +01:00
Erik Johnston
ca571b0ec3 Add jitter to validity period of attestations
This helps ensure that the renewals of attestations are spread out more
evenly.
2017-10-27 11:57:27 +01:00
Luke Barnard
d8c26162a1 Merge pull request #2582 from matrix-org/luke/group-is-public
Add is_public to groups table to allow for private groups
2017-10-27 11:41:13 +01:00
Erik Johnston
c067088747 Add comment about attestations 2017-10-27 11:35:41 +01:00
Luke Barnard
5451cc7792 Request is_public from database 2017-10-27 11:27:43 +01:00
Luke Barnard
124314672f group is a dict 2017-10-27 11:08:19 +01:00
Luke Barnard
6362298fa5 Create groups with is_public = True 2017-10-27 11:04:20 +01:00
Richard van der Hoff
8b56977b6f Merge pull request #2586 from matrix-org/rav/frontend_proxy_auth_header
Front-end proxy: pass through auth header
2017-10-27 11:01:50 +01:00
Richard van der Hoff
173567a7f2 Docstring for post_urlencoded_get_json 2017-10-27 10:59:50 +01:00
Luke Barnard
c7d9f25d22 Fix create_group to pass requester_user_id 2017-10-27 10:57:20 +01:00
Erik Johnston
e27b76d117 Import logger 2017-10-27 10:54:02 +01:00
Richard van der Hoff
8854c039f2 Merge pull request #2585 from matrix-org/rav/unstable_to_r0
Support /keys/upload on /r0 as well as /unstable
2017-10-27 10:53:48 +01:00
Richard van der Hoff
14f581abc2 Merge pull request #2584 from matrix-org/rav/fix_httpclient_logcontexts
Fix logcontext leaks in httpclient
2017-10-27 10:53:29 +01:00
Luke Barnard
2ca46c7afc Correct logic for checking private group membership 2017-10-27 10:48:01 +01:00
Erik Johnston
82d8c1bacb Fixup 2017-10-27 10:30:21 +01:00
Erik Johnston
2fd9831f7c Merge pull request #2574 from matrix-org/erikj/room_list_fixes
Add logging and fix log contexts for publicRooms
2017-10-27 10:01:23 +01:00
Erik Johnston
195abfe7a5 Remove incorrect attestations 2017-10-27 09:58:13 +01:00
Erik Johnston
d8dde19f04 Log if we try to do attestations for our own user and group 2017-10-27 09:55:01 +01:00
Erik Johnston
585972b51a Don't generate group attestations for local users 2017-10-27 09:46:56 +01:00
Richard van der Hoff
7a6546228b Device deletion: check UI auth matches access token
(otherwise there's no point in the UI auth)
2017-10-27 00:04:31 +01:00
Matthew Hodgson
92f680889d spell out need for libxml2 for lxml to work 2017-10-27 00:02:29 +01:00
Richard van der Hoff
785bd7fd75 Allow ASes to deactivate their own users 2017-10-27 00:01:00 +01:00
Richard van der Hoff
c89e6aadff Merge pull request #2581 from matrix-org/rav/fix_init_with_no_logfile
Fix error when running synapse with no logfile
2017-10-26 22:16:57 +01:00
Richard van der Hoff
54a2525133 Front-end proxy: pass through auth header
So that access-token-in-an-auth-header works.
2017-10-26 18:19:01 +01:00
Richard van der Hoff
0a5866bec9 Support /keys/upload on /r0 as well as /unstable
(So that we can stop riot relying on it in /unstable)
2017-10-26 18:18:23 +01:00
Richard van der Hoff
0d8e3ad48b Fix logcontext leaks in httpclient
`preserve_context_over_fn` is borked
2017-10-26 18:17:10 +01:00
Richard van der Hoff
12ef02dc3d SimpleHTTPClient: add support for headers
Sometimes we need to pass headers into these methods
2017-10-26 17:59:50 +01:00
Luke Barnard
69e8a05f35 Make it work 2017-10-26 17:55:58 +01:00
Luke Barnard
007cd48af6 Recreate groups table instead of adding column
Adding a column with a non-constant default is not possible in sqlite3
2017-10-26 17:55:22 +01:00
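Because SQLite's ALTER TABLE cannot add a column with a non-constant default, the usual workaround is to rebuild the table in a schema delta. A rough sketch of that shape (table and column list illustrative, not the exact delta):

```python
def run_upgrade(txn):
    # SQLite can't ALTER TABLE ... ADD COLUMN with a non-constant default,
    # so rename the old table, create the replacement, and copy data across.
    txn.execute("ALTER TABLE groups RENAME TO groups_old")
    txn.execute(
        """
        CREATE TABLE groups (
            group_id TEXT NOT NULL,
            name TEXT,
            avatar_url TEXT,
            short_description TEXT,
            long_description TEXT,
            is_public BOOLEAN NOT NULL
        )
        """
    )
    txn.execute(
        "INSERT INTO groups"
        " SELECT group_id, name, avatar_url, short_description,"
        " long_description, 1 FROM groups_old"
    )
    txn.execute("DROP TABLE groups_old")
```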
Luke Barnard
713e60b9b6 Awful hack to get default true 2017-10-26 17:38:14 +01:00
Luke Barnard
e86cefcb6f Add groups table to BOOLEAN_COLUMNS in synapse_port_db 2017-10-26 17:24:54 +01:00
Luke Barnard
cfa4e658e0 Bump schema version to 46 2017-10-26 17:23:49 +01:00
Luke Barnard
595fe67f01 delint 2017-10-26 17:20:24 +01:00
Luke Barnard
9b2feef9eb Add is_public to groups table to allow for private groups
Prevent group API access to non-members for private groups

Also make all the group code paths consistent with `requester_user_id` always being the User ID of the requesting user.
2017-10-26 16:51:32 +01:00
Richard van der Hoff
f7f90e0c8d Fix error when running synapse with no logfile
Fixes 'UnboundLocalError: local variable 'sighup' referenced before assignment'
2017-10-26 16:45:20 +01:00
Richard van der Hoff
1dd0f53b21 Merge pull request #2579 from krombel/move_unstable_to_r0
register some /unstable endpoints in /r0 as well
2017-10-26 16:10:43 +01:00
Krombel
8299b323ee add release endpoints for /thirdparty 2017-10-26 16:58:20 +02:00
Krombel
9b436c8b4c register some /unstable endpoints in /r0 as well 2017-10-26 15:22:50 +02:00
Richard van der Hoff
5b38fdab31 Merge pull request #2578 from matrix-org/rav/code_style_imports
Code_style updates
2017-10-26 11:55:59 +01:00
Richard van der Hoff
1eb300e1fc Document import rules 2017-10-26 11:55:41 +01:00
Richard van der Hoff
f7f6bfaae4 code_style: more formatting 2017-10-26 11:55:41 +01:00
Erik Johnston
4ea882ede4 Merge pull request #2577 from matrix-org/erikj/fix_port
Fix port script
2017-10-26 11:54:12 +01:00
Erik Johnston
566e21eac8 Update room_list.py 2017-10-26 11:39:54 +01:00
Richard van der Hoff
351cc35342 code_style.rst: a couple of tidyups 2017-10-26 10:29:26 +01:00
Erik Johnston
37d766aedd Fix port script
We changed _simple_update_one_txn to use _simple_update_txn but didn't
yank it out in the port script.

Fixes #2565
2017-10-26 10:01:03 +01:00
Maxime Vaillancourt
5287e57c86 Ignore noscript tags when generating URL previews 2017-10-25 20:44:34 -04:00
Erik Johnston
2a7e9faeec Do logcontexts outside ResponseCache 2017-10-25 15:21:08 +01:00
Erik Johnston
1ad1ba9e6a Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-10-25 10:27:23 +01:00
Erik Johnston
33a9026cdf Add logging and fix log contexts for publicRooms 2017-10-25 10:26:06 +01:00
Matthew Hodgson
efd0f5a3c5 tip for generating tls_fingerprints 2017-10-24 18:49:49 +01:00
Erik Johnston
f009df23ec Merge branch 'release-v0.24.1' of github.com:matrix-org/synapse 2017-10-24 15:02:25 +01:00
Erik Johnston
6ba4fabdb9 Bump version and changelog 2017-10-24 14:15:27 +01:00
Erik Johnston
9e2c22c97f Merge pull request #2567 from matrix-org/erikj/group_fed_update_profile
Correctly wire in update group profile over federation
2017-10-24 09:21:47 +01:00
Erik Johnston
39dc52157d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/group_fed_update_profile 2017-10-24 09:16:20 +01:00
Richard van der Hoff
0d437698b2 Merge pull request #2568 from matrix-org/rav/pep8
PEP8 fixes
2017-10-23 17:39:31 +01:00
Richard van der Hoff
0be99858f3 fix vars named l
E741 says "do not use variables named ‘l’, ‘O’, or ‘I’".
2017-10-23 15:56:38 +01:00
Richard van der Hoff
eaaabc6c4f replace 'except:' with 'except Exception:'
what could possibly go wrong
2017-10-23 15:52:32 +01:00
Erik Johnston
ce6d4914f4 Correctly wire in update group profile over federation 2017-10-23 15:21:24 +01:00
Richard van der Hoff
ecf198aab8 Merge pull request #2566 from matrix-org/rav/media_logcontext_leak
Fix a logcontext leak in the media repo
2017-10-23 14:47:49 +01:00
Richard van der Hoff
3267b81b81 Merge pull request #2561 from matrix-org/rav/id_checking
Updates to ID checking
2017-10-23 14:39:20 +01:00
Richard van der Hoff
d03cfc4258 Fix a logcontext leak in the media repo 2017-10-23 14:34:27 +01:00
Erik Johnston
1de557975f Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-10-23 13:18:12 +01:00
Erik Johnston
ffba978077 Merge branch 'release-v0.24.0' of github.com:matrix-org/synapse 2017-10-23 13:13:53 +01:00
Erik Johnston
13e16cf302 Bump version and changelog 2017-10-23 13:13:31 +01:00
Richard van der Hoff
bd0d84bf92 Merge pull request #2560 from matrix-org/rav/kill_pointless_method
Remove pointless create() method
2017-10-23 11:06:57 +01:00
Richard van der Hoff
1135193dfd Validate group ids when parsing
May as well do it whenever we parse a Group ID. We check the sigil and basic
structure here so it makes sense to check the grammar in the same place.
2017-10-21 00:30:39 +01:00
Richard van der Hoff
29812c628b Allow = in mxids and groupids
... because the spec says we should.
2017-10-20 23:42:53 +01:00
Richard van der Hoff
58fbbe0f1d Disallow capital letters in userids
Factor out a common function for checking user ids and group ids, which forbids
capitals.
2017-10-20 23:37:22 +01:00
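A sketch of the kind of shared check these two commits describe: one localpart validator used for both user IDs and group IDs, allowing '=' but rejecting capitals (the exact character set is defined by the Matrix grammar; this is an approximation):

```python
import re

# Roughly the Matrix localpart grammar: lowercase letters, digits and a
# small set of punctuation.  Note '=' is allowed and capitals are not.
_LOCALPART_RE = re.compile(r"^[a-z0-9._=\-/]+$")

def check_localpart_is_valid(localpart):
    """Shared validity check for user-id and group-id localparts."""
    return bool(_LOCALPART_RE.match(localpart))

assert check_localpart_is_valid("some=user")
assert not check_localpart_is_valid("BadUser")
```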
Richard van der Hoff
631d7b87b5 Remove pointless create() method
It just calls the constructor, so we may as well kill it rather than having
random codepaths.
2017-10-20 22:14:55 +01:00
Erik Johnston
6070647774 Correctly bump version 2017-10-19 16:40:20 +01:00
Erik Johnston
d6237859f6 Update changelog 2017-10-19 14:19:52 +01:00
Erik Johnston
0ef0aeceac Bump version and changelog 2017-10-19 13:48:33 +01:00
Erik Johnston
b4a6b7f720 Merge pull request #2559 from matrix-org/erikj/group_id_validation
Add config to enable group creation
2017-10-19 13:45:09 +01:00
Erik Johnston
c7d46510d7 Flake8 2017-10-19 13:36:06 +01:00
Erik Johnston
ffd3f1a783 Add missing file... 2017-10-19 12:17:30 +01:00
Erik Johnston
29bafe2f7e Add config to enable group creation 2017-10-19 12:13:44 +01:00
Erik Johnston
287dd1ee2c Merge pull request #2558 from matrix-org/erikj/group_id_validation
Enforce sensible group IDs
2017-10-19 12:13:31 +01:00
Erik Johnston
513c23bfd9 Enforce sensible group IDs 2017-10-19 12:01:01 +01:00
Erik Johnston
011d03a0f6 Fix typo 2017-10-19 11:22:48 +01:00
Erik Johnston
9ab859f27b Fix typo in group attestation handling 2017-10-19 10:55:52 +01:00
Erik Johnston
f4f65ef93e Merge pull request #2557 from matrix-org/erikj/media_tumbnails
Fix typo in thumbnail generation
2017-10-19 10:38:52 +01:00
Erik Johnston
bd5718d0ad Fix typo in thumbnail generation 2017-10-19 10:27:18 +01:00
Erik Johnston
161a862ffb Fix typo 2017-10-19 10:17:43 +01:00
Richard van der Hoff
69994c385a Merge pull request #2553 from matrix-org/rav/fix_500_on_event_send
Fix 500 error when we get an error handling a PDU
2017-10-18 11:25:44 +01:00
Richard van der Hoff
b5dbbac308 Merge pull request #2552 from matrix-org/rav/fix_500_on_dodgy_powerlevels
Fix 500 error when fields missing from power_levels event
2017-10-17 20:53:30 +01:00
Richard van der Hoff
582bd19ee9 Fix 500 error when we get an error handling a PDU
FederationServer doesn't have a send_failure (and nor does its subclass,
ReplicationLayer), so this was failing.

I'm not really sure what the idea behind send_failure is, given (a) we don't do
anything at the other end with it except log it, and (b) we also send back the
failure via the transaction response. I suspect there's a whole lot of dead
code around it, but for now I'm just removing the broken bit.
2017-10-17 20:52:40 +01:00
Richard van der Hoff
74f99f227c Doc some more dynamic Homeserver methods 2017-10-17 20:51:29 +01:00
Richard van der Hoff
c2bd177ea0 Fix 500 error when fields missing from power_levels event
If the users or events keys were missing from a power_levels event, then
we would throw 500s when trying to auth them.
2017-10-17 17:05:42 +01:00
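The fix described here amounts to defaulting the missing keys instead of assuming they exist. Something along these lines (a sketch, not the actual auth code):

```python
def get_named_level(power_levels_content, name, default):
    """Read a level out of an m.room.power_levels event, tolerating missing
    keys instead of raising (and then returning a 500 to the client)."""
    value = power_levels_content.get(name)
    if value is None:
        return default
    return int(value)

def get_user_level(power_levels_content, user_id):
    users = power_levels_content.get("users", {})  # 'users' may be absent
    if user_id in users:
        return int(users[user_id])
    return get_named_level(power_levels_content, "users_default", 0)
```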
Erik Johnston
fe6e9f580b Merge pull request #2550 from krombel/fix_thumbnail_2548
fix thumbnailing (#2548)
2017-10-17 15:35:18 +01:00
Richard van der Hoff
7216c76654 Improve error handling for missing files (#2551)
`os.path.exists` doesn't allow us to distinguish between permissions errors and
the path actually not existing, which repeatedly confuses people. It also means
that we try to overwrite existing key files, which is super-confusing. (cf
issues #2455, #2379). Use os.stat instead.

Also, don't recommend the use of --generate-config, which screws everything
up if you're using Debian (cf #2455).
2017-10-17 14:46:17 +01:00
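The os.stat approach lets the error message distinguish "missing" from "unreadable"; roughly like this (a sketch, not the actual key-file handling):

```python
import errno
import os

def key_file_exists(path):
    """Return True if the key file exists, raising a helpful error for
    anything other than a clean 'no such file' result."""
    try:
        os.stat(path)
        return True
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False
        # e.g. EACCES: don't silently treat the file as missing (and then
        # try to overwrite it) just because we couldn't read it.
        raise Exception("Error checking key file %s: %s" % (path, e))
```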
Richard van der Hoff
dbdfd8967d Merge pull request #2546 from matrix-org/rav/remove_dead_event_injector
Remove dead class
2017-10-17 13:21:22 +01:00
Richard van der Hoff
b8e40d146f Merge pull request #2547 from matrix-org/rav/test_make_deferred_yieldable
Add some tests for make_deferred_yieldable
2017-10-17 13:21:09 +01:00
Richard van der Hoff
4cc8bb0767 Merge pull request #2549 from matrix-org/rav/event_persist_logcontexts
Fix logcontext handling for persist_events
2017-10-17 13:20:35 +01:00
David Baker
4e242b3e20 Merge pull request #2545 from matrix-org/dbkr/auto_join_rooms
Add config option to auto-join new users to rooms
2017-10-17 11:45:49 +01:00
Krombel
a6245478c8 fix thumbnailing (#2548)
in commit 0e28281a the code for thumbnailing got refactored and the
renaming of these variables was not done correctly.

Signed-Off-by: Matthias Kesler <krombel@krombel.de>
2017-10-17 12:45:33 +02:00
Richard van der Hoff
2e9f5ea31a Fix logcontext handling for persist_events
* don't use preserve_context_over_deferred, which is known broken.

* remove a redundant preserve_fn.

* add/improve some comments
2017-10-17 10:59:30 +01:00
Richard van der Hoff
a6ad8148b9 Fix name of test_logcontext
The file under test is logcontext.py, not log_context.py
2017-10-17 10:53:34 +01:00
Richard van der Hoff
5b5f35ccc0 Add some tests for make_deferred_yieldable 2017-10-17 10:52:31 +01:00
Richard van der Hoff
9b714abf35 Remove dead class
This isn't used anywhere.
2017-10-17 10:43:36 +01:00
David Baker
33122c5a1b Fix test 2017-10-17 10:39:50 +01:00
David Baker
a9c2e930ac pep8 2017-10-17 10:13:13 +01:00
David Baker
c05e6015cc Add config option to auto-join new users to rooms
New users who register on the server will be dumped into all rooms in
auto_join_rooms in the config.
2017-10-16 17:57:27 +01:00
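In homeserver.yaml this looks something like the following (room alias purely illustrative):

```yaml
# Rooms that newly registered local users are automatically joined to.
auto_join_rooms:
  - "#welcome:example.com"
```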
Luke Barnard
e0a75e0c25 Merge pull request #2544 from matrix-org/luke/groups-invited-users
Implement GET /groups/$groupId/invited_users
2017-10-16 17:33:33 +02:00
Luke Barnard
85f5674e44 Delint 2017-10-16 15:52:17 +01:00
Luke Barnard
c43e8a9736 Make it work. Warn about lack of user profile 2017-10-16 15:50:39 +01:00
Luke Barnard
a3ac4f6b0a _create_rererouter for get_invited_users_in_group 2017-10-16 15:41:03 +01:00
Luke Barnard
5dfd0350c7 Merge branch 'develop' into luke/groups-invited-users 2017-10-16 15:34:36 +01:00
Luke Barnard
ca96d609e4 Merge pull request #2543 from matrix-org/luke/fix-on-group-invite-no-profile
Log a warning when no profile for invited member
2017-10-16 16:33:53 +02:00
Luke Barnard
2c5972f87f Implement GET /groups/$groupId/invited_users 2017-10-16 15:31:11 +01:00
Luke Barnard
6079d0027a Log a warning when no profile for invited member
And return empty profile
2017-10-16 14:20:45 +01:00
David Baker
99a6c9dbf2 Merge pull request #2542 from matrix-org/dbkr/room_notif_no_glob
Omit the *s for @room notifications
2017-10-16 13:46:17 +01:00
David Baker
9342bcfce0 Omit the *s for @room notifications
They're just redundant
2017-10-16 13:38:10 +01:00
Richard van der Hoff
e504816977 Merge pull request #2540 from 4nd3r/patch-1
make it absolutely clear that purge doesn't remove everything
2017-10-16 10:43:22 +01:00
Ander Punnar
b2e02084b8 make it absolutely clear that Purge History API does not remove all traces of events and message contents
because this topic pops up too often

#890 #1621 #1730 #2260 #2315 and so on
2017-10-14 13:25:42 +03:00
Erik Johnston
db3d84f46c Merge pull request #2538 from matrix-org/erikj/media_backup
Basic implementation of backup media store
2017-10-13 16:11:31 +01:00
Erik Johnston
1b6b0b1e66 Add try/finally block to close t_byte_source 2017-10-13 15:34:08 +01:00
Erik Johnston
6b725cf56a Remove old comment 2017-10-13 15:23:41 +01:00
Matthew Hodgson
64665b57d0 oops 2017-10-13 14:26:07 +01:00
Erik Johnston
2b24416e90 Don't reuse source but instead copy from primary media store to backup 2017-10-13 14:11:34 +01:00
Erik Johnston
b92a8e6e4a PEP8 2017-10-13 13:58:57 +01:00
Matthew Hodgson
931fc43cc8 fix copyright to companies which actually exist(ed) 2017-10-13 13:54:31 +01:00
Erik Johnston
31aa7bd8d1 Move type into key 2017-10-13 13:47:38 +01:00
Erik Johnston
ad1911bbf4 Comment 2017-10-13 13:47:05 +01:00
Erik Johnston
c021c39cbd Remove spurious addition 2017-10-13 13:46:53 +01:00
Erik Johnston
1f43d22397 Don't needlessly rename variable 2017-10-13 11:42:07 +01:00
Erik Johnston
a675bd08bd Add paths back in... 2017-10-13 11:41:06 +01:00
Erik Johnston
4d7e1dde70 Remove unnecessary diff 2017-10-13 11:36:32 +01:00
Erik Johnston
ae5d18617a Make things be absolute paths again 2017-10-13 11:35:44 +01:00
Erik Johnston
9732ec6797 s/write_to_file/write_to_file_and_backup/ 2017-10-13 11:34:41 +01:00
Erik Johnston
0e28281a02 Fix up 2017-10-13 11:33:49 +01:00
Erik Johnston
505371414f Fix up thumbnailing function 2017-10-13 11:23:53 +01:00
Erik Johnston
e3428d26ca Fix typo 2017-10-13 10:39:59 +01:00
Erik Johnston
35332298ef Fix up comments 2017-10-13 10:39:32 +01:00
Erik Johnston
64db043a71 Move makedirs to thread 2017-10-13 10:25:01 +01:00
Erik Johnston
b60859d6cc Use make_deferred_yieldable 2017-10-13 10:24:19 +01:00
Erik Johnston
d76621a47b Fix comments 2017-10-12 18:16:25 +01:00
Erik Johnston
4ae85ae121 Don't close prematurely.. 2017-10-12 17:57:31 +01:00
Erik Johnston
cc505b4b5e getvalue closes buffer 2017-10-12 17:52:30 +01:00
Erik Johnston
1259a76047 Get len before close 2017-10-12 17:39:23 +01:00
Erik Johnston
802ca12d05 Don't close file prematurely 2017-10-12 17:37:21 +01:00
Erik Johnston
e283b555b1 Copy everything to backup 2017-10-12 17:31:24 +01:00
Erik Johnston
b77a13812c Typo 2017-10-12 15:32:32 +01:00
Erik Johnston
6dfde6d485 Remove dead code 2017-10-12 15:30:26 +01:00
Erik Johnston
c8eeef6947 Fix typos 2017-10-12 15:28:24 +01:00
Erik Johnston
67cb89fbdf Fix typo 2017-10-12 15:23:41 +01:00
Erik Johnston
bf4fb1fb40 Basic implementation of backup media store 2017-10-12 15:20:59 +01:00
hera
f807f7f804 log when we get an exception handling replication updates 2017-10-12 11:51:24 +01:00
David Baker
b8d8ed1ba9 Merge pull request #2531 from matrix-org/dbkr/spamcheck_error_messages
Allow error strings from spam checker
2017-10-12 10:31:03 +01:00
Richard van der Hoff
cc794d60e7 Merge pull request #2532 from matrix-org/rav/fix_linearizer
Fix stackoverflow and logcontexts from linearizer
2017-10-11 17:29:32 +01:00
Richard van der Hoff
8dd0c85ac5 Merge pull request #2529 from matrix-org/rav/fix_transaction_failure_handling
log pdu_failures from incoming transactions
2017-10-11 17:29:14 +01:00
Richard van der Hoff
76fa695241 Merge pull request #2515 from matrix-org/rav/fix_receipt_logcontext
A logformatter which includes the stack from where the exception was caught when
logging exceptions.
2017-10-11 17:28:01 +01:00
Richard van der Hoff
f30c4ed2bc logformatter: fix AttributeError
make sure we have the relevant fields before we try to log them.
2017-10-11 17:26:17 +01:00
Erik Johnston
b752507b48 Fix fetching remote summaries 2017-10-11 16:59:18 +01:00
Erik Johnston
af94ba9d02 Merge pull request #2533 from matrix-org/erikj/fix_group_repl
Fix group stream replication
2017-10-11 16:01:57 +01:00
Erik Johnston
818b08d0e4 peeeeeeeeep8888888888888888888888888888 2017-10-11 15:54:00 +01:00
Erik Johnston
ea18996f54 Fix group stream replication
The stream update functions expect the storage function to return a list
of tuples.
2017-10-11 15:44:39 +01:00
Richard van der Hoff
68fd82e840 Merge pull request #2530 from matrix-org/rav/fix_receipt_logcontext
fix a logcontext leak in read receipt handling
2017-10-11 15:08:53 +01:00
Richard van der Hoff
4fad8efbfb Fix stackoverflow and logcontexts from linearizer
1. make it not blow out the stack when there are more than 50 things waiting
   for a lock. Fixes https://github.com/matrix-org/synapse/issues/2505.

2. Make it not mess up the log contexts.
2017-10-11 15:05:05 +01:00
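The stack-overflow half of this fix comes down to not resolving a long queue of waiting deferreds synchronously: once the queue is deep, bounce each wake-up through the reactor so it starts on a fresh stack. A simplified illustration of that reactor-bounce idea, not the real Linearizer code:

```python
from twisted.internet import reactor

def fire_waiters_without_growing_the_stack(deferreds):
    """Resolve a long queue of waiting deferreds without recursing: each
    callback is scheduled as a fresh reactor tick rather than being invoked
    from inside the previous callback's frame, which is what blows the
    stack once ~50+ waiters are queued up."""
    for d in deferreds:
        reactor.callLater(0, d.callback, None)
```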
David Baker
b78bae2d51 fix isinstance 2017-10-11 14:49:09 +01:00
Erik Johnston
271f5601f3 Fix typo in invite to group 2017-10-11 14:45:33 +01:00
David Baker
c3b7a45e84 Allow error strings from spam checker 2017-10-11 14:39:22 +01:00
Richard van der Hoff
c3e190ce67 fix a logcontext leak in read receipt handling 2017-10-11 14:37:20 +01:00
Richard van der Hoff
b75d443caf log pdu_failures from incoming transactions
... even if we have no EDUs.

This appears to have been introduced in
476899295f.
2017-10-11 14:36:13 +01:00
Erik Johnston
27e727a146 Fix typo 2017-10-11 14:32:40 +01:00
Erik Johnston
4ce4379235 Fix attestations to check correct server name 2017-10-11 14:11:43 +01:00
Erik Johnston
c2c47550f9 Fix schema delta versions 2017-10-11 13:23:15 +01:00
Erik Johnston
535cc49f27 Merge pull request #2466 from matrix-org/erikj/groups_merged
Initial Group Implementation
2017-10-11 13:20:07 +01:00
Erik Johnston
dfbf73408c Merge pull request #2501 from matrix-org/dbkr/channel_notifications
Support for channel notifications
2017-10-11 13:19:29 +01:00
Erik Johnston
bc7f3eb32f Merge pull request #2483 from jeremycline/unfreeze-ujson-dump
Unfreeze event before serializing with ujson
2017-10-11 13:18:52 +01:00
Erik Johnston
ec954f47fb Validate room ids 2017-10-11 13:15:44 +01:00
David Baker
81a5e0073c pep8 2017-10-10 15:53:34 +01:00
David Baker
ab1bc9bf5f Don't KeyError if no power_levels event 2017-10-10 15:34:05 +01:00
David Baker
0f1eb3e914 Use notification levels in power_levels
Rather than making the condition directly require a specific power
level. This way the level require to notify a room can be configured
per room.
2017-10-10 15:23:00 +01:00
Erik Johnston
84e27a592d Merge pull request #2490 from matrix-org/erikj/drop_left_room_events
Ignore incoming events for rooms that we have left
2017-10-10 11:58:32 +01:00
David Baker
c9f034b4ac There was already a constant for this
also update copyright
2017-10-10 11:47:10 +01:00
David Baker
a9f9d68631 More optimisation 2017-10-10 11:38:31 +01:00
David Baker
707374d5dc What year is it!? Who's the president!? 2017-10-10 11:21:41 +01:00
David Baker
89fa00ddff Merge branch 'develop' into dbkr/channel_notifications 2017-10-10 11:20:17 +01:00
Richard van der Hoff
79bea15830 Merge pull request #2520 from matrix-org/rav/process_incoming_rooms_in_parallel
fed server: process PDUs for different rooms in parallel
2017-10-10 10:26:21 +01:00
Richard van der Hoff
426f8b0f66 Merge pull request #2518 from matrix-org/rav/linearize_incoming_transactions
Fed server: use a linearizer for ongoing transactions
2017-10-10 10:20:51 +01:00
Richard van der Hoff
6a6cc27aee fed server: process PDUs for different rooms in parallel
With luck, this will give a real-time improvement when there are many rooms and
the server ends up calling out to fetch missing events.
2017-10-09 18:30:31 +01:00
Richard van der Hoff
4c7c4d4061 Fed server: use a linearizer for ongoing transactions
We don't want to process the same transaction multiple times concurrently, so
use a linearizer.
2017-10-09 18:30:10 +01:00
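A rough sketch of what "use a linearizer" means here: serialise work per (origin, transaction_id) key so a retransmitted transaction waits for the first copy rather than being processed twice. The real code uses Synapse's Linearizer utility; this is a generic stand-in built on Twisted primitives:

```python
from collections import defaultdict
from twisted.internet import defer

class PerKeyLock(object):
    """Tiny stand-in for a Linearizer: one DeferredLock per key."""

    def __init__(self):
        self._locks = defaultdict(defer.DeferredLock)

    @defer.inlineCallbacks
    def run(self, key, func, *args):
        lock = self._locks[key]
        yield lock.acquire()
        try:
            result = yield defer.maybeDeferred(func, *args)
        finally:
            lock.release()
        defer.returnValue(result)

# Usage sketch: on_incoming_transaction would do something like
#   yield locks.run((origin, transaction.transaction_id),
#                   self._handle_incoming_transaction, origin, transaction)
```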
Richard van der Hoff
4d24becf7f Merge pull request #2517 from matrix-org/rav/fed_server_refactor
fed server: refactor on_incoming_transaction
2017-10-09 18:19:23 +01:00
Richard van der Hoff
ba5b9b80a5 fed server: refactor on_incoming_transaction
Move as much as possible to after the have_responded check, and reduce the
number of times we iterate over the pdu list.
2017-10-09 18:10:53 +01:00
Richard van der Hoff
c7b0678356 Merge pull request #2516 from matrix-org/rav/fix_fed_server_origin_check
Fed server: Move origin-check code to _handle_received_pdu
2017-10-09 18:09:43 +01:00
Richard van der Hoff
a6e3222fe5 Fed server: Move origin-check code to _handle_received_pdu
The response-building code expects there to be an entry in the `results` list
for each entry in the pdu_list, so the early `continue` was messing this
up. That doesn't really matter, because all that the federation client does is
log any errors, but it's pretty poor form.
2017-10-09 17:53:32 +01:00
Richard van der Hoff
3cc852d339 Fancy logformatter to format exceptions better
This is a bit of an experimental change at this point; the idea is to see if it
helps us track down where our stack overflows are coming from by logging the
stack when the exception was caught and turned into a Failure. (We'll also need
edf2704420).

If we deploy this, we'll be able to enable it via the log config yaml.
2017-10-09 17:44:42 +01:00
Richard van der Hoff
0eeaa25694 Merge pull request #2508 from matrix-org/rav/federation_queue_logcontexts
Fix up logcontext handling in (federation) TransactionQueue
2017-10-09 17:43:48 +01:00
Richard van der Hoff
aa3fac8057 Merge pull request #2507 from matrix-org/rav/execute_concurrently_log_contexts
Fix logcontext handling for concurrently_execute
2017-10-09 17:43:32 +01:00
Richard van der Hoff
c1c81ee2a4 Merge pull request #2506 from matrix-org/rav/unhandled_failure
Fix up deferred handling in federation.py
2017-10-09 17:43:10 +01:00
Erik Johnston
e8496efe84 Fix up comment 2017-10-09 15:17:34 +01:00
Richard van der Hoff
01bbacf3c4 Fix up logcontext handling in (federation) TransactionQueue
Avoid using preserve_context_over_function, which has problems with respect to
logcontexts.
2017-10-06 22:39:25 +01:00
Richard van der Hoff
148428ce76 Fix logcontext handling for concurrently_execute
Avoid preserve_context_over_deferred, which is broken.
2017-10-06 22:24:28 +01:00
Richard van der Hoff
c8f568ddf9 Fix up deferred handling in federation.py
* Avoid preserve_context_over_deferred, which is broken

* set consumeErrors=True on defer.gatherResults, to avoid spurious "unhandled
  failure" erros
2017-10-06 22:14:24 +01:00
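The common pattern in these three logcontext fixes: kick the work off with preserve_fn, wrap the gathered Deferred in make_deferred_yieldable before yielding on it, and pass consumeErrors=True so a failure that is already reported elsewhere doesn't also surface as "unhandled failure". A rough sketch using the synapse.util.logcontext helpers of this era (function and argument names illustrative):

```python
from twisted.internet import defer

from synapse.util.logcontext import make_deferred_yieldable, preserve_fn

@defer.inlineCallbacks
def send_to_all(destinations, send_one):
    # preserve_fn keeps each call running in our logcontext; the gathered
    # deferred is wrapped so yielding on it doesn't leak the context into
    # the reactor.  consumeErrors=True avoids spurious "unhandled failure"
    # noise for errors we already handle via the gathered result.
    deferreds = [preserve_fn(send_one)(d) for d in destinations]
    results = yield make_deferred_yieldable(
        defer.gatherResults(deferreds, consumeErrors=True)
    )
    defer.returnValue(results)
```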
Richard van der Hoff
3ddda939d3 some comments in the state res code 2017-10-05 14:58:17 +01:00
David Baker
5de926d66f Merge pull request #2502 from matrix-org/dbkr/allow_dms_to_admins
Spam checking: add the invitee to user_may_invite
2017-10-05 14:14:13 +01:00
David Baker
f878e6f8af Spam checking: add the invitee to user_may_invite 2017-10-05 14:02:28 +01:00
David Baker
269af961e9 Make be faster 2017-10-05 13:27:12 +01:00
David Baker
ed80c6b6cc Add fastpath optimisation 2017-10-05 13:20:22 +01:00
David Baker
e433393c4f pep8 2017-10-05 13:08:02 +01:00
David Baker
985ce80375 They're called rooms 2017-10-05 13:03:44 +01:00
David Baker
b9b9714fd5 Get rule type right 2017-10-05 13:02:19 +01:00
David Baker
fa969cfdde Support for channel notifications
Add condition type to check the sender's power level and add a base
rule using it for @channel notifications.
2017-10-05 12:39:18 +01:00
David Baker
44f8e383f3 Merge pull request #2500 from matrix-org/dbkr/fix_word_boundary_mentions
Fix notif kws that start/end with non-word chars
2017-10-05 12:27:59 +01:00
David Baker
0c8da8b519 Use better method for word boundary searching
From ebc95667b8
2017-10-05 11:57:43 +01:00
Erik Johnston
eaaa837e00 Don't corrupt cache 2017-10-05 11:43:22 +01:00
David Baker
cbe3c3fdd4 pep8 2017-10-05 11:43:10 +01:00
David Baker
6748f0a579 Fix notif kws that start/end with non-word chars
Only prepend / append word boundary characters if the search
expression starts or ends with a word character, otherwise they
don't work because there's no word boundary between whitespace and
a non-word char.
2017-10-05 11:33:30 +01:00
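Concretely, the fix is to only glue a word-boundary assertion onto the ends of the keyword that are themselves word characters, since \b never matches between whitespace and a non-word character. A sketch:

```python
import re

def keyword_to_regex(keyword):
    """Build a 'whole word' pattern, but only anchor an end of the keyword
    with a word-boundary assertion if that end is a word character;
    otherwise the boundary would never match (there is no word boundary
    between a space and, say, '!')."""
    pattern = re.escape(keyword)
    if re.match(r"\w", keyword):
        pattern = r"\b" + pattern
    if re.search(r"\w$", keyword):
        pattern = pattern + r"\b"
    return re.compile(pattern, re.IGNORECASE)

assert keyword_to_regex(":+1:").search("nice one :+1: thanks")
assert keyword_to_regex("lunch").search("Lunch time!")
```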
David Baker
93b0cf7a99 Merge pull request #2495 from matrix-org/dbkr/spam_check_room_creation
Add room creation checks to spam checker
2017-10-04 14:49:20 +01:00
David Baker
d8ce68b09b spam check room publishing 2017-10-04 14:29:33 +01:00
David Baker
78d4ced829 un-double indent 2017-10-04 12:44:27 +01:00
David Baker
197c14dbcf Add room creation checks to spam checker
Lets the spam checker deny attempts to create rooms and add aliases
to them.
2017-10-04 10:47:54 +01:00
David Baker
5f20a91fa1 Merge pull request #2492 from matrix-org/dbkr/spam_check_invites
Allow spam checker to reject invites too
2017-10-03 18:10:23 +01:00
David Baker
1e2ac54351 s/roomid/room_id/ 2017-10-03 17:41:38 +01:00
David Baker
1e375468de pass room id too 2017-10-03 17:13:14 +01:00
David Baker
c2c188b699 Federation was passing strings anyway
so pass string everywhere
2017-10-03 15:46:19 +01:00
David Baker
c46a0d7eb4 this shouldn't be debug 2017-10-03 15:20:14 +01:00
David Baker
bd769a81e1 better logging 2017-10-03 15:16:40 +01:00
David Baker
537088e7dc Actually write wrapper function 2017-10-03 14:28:12 +01:00
David Baker
41fd9989a2 Skip spam check for admin users 2017-10-03 14:17:44 +01:00
Erik Johnston
11d62f43c9 Invalidate cache 2017-10-03 14:12:28 +01:00
Erik Johnston
e4ab96021e Update comments 2017-10-03 14:10:41 +01:00
David Baker
2a7ed700d5 Fix param name & lint 2017-10-03 14:04:10 +01:00
David Baker
84716d267c Allow spam checker to reject invites too 2017-10-03 13:56:43 +01:00
Richard van der Hoff
e4779be97a Merge pull request #2491 from matrix-org/rav/port_db_fixes
Drop search values with nul characters
2017-10-03 13:49:05 +01:00
Erik Johnston
f2da6df568 Remove spurious line feed 2017-10-03 11:31:06 +01:00
Erik Johnston
30848c0fcd Ignore incoming events for rooms that we have left
When synapse receives an event over federation for a room it's not in, it
double checks with the remote server to see if it is in fact in the
room. This is done so that if the server has forgotten about the room
(usually as a result of the database being dropped) it can recover from
it.

However, in the presence of state resets in large rooms, this can cause
a lot of work for servers that have legitimately left. As a hacky
solution that supports both cases we drop incoming events for rooms that
we have explicitly left.

This means that we no longer support the case of servers having
forgotten that they've rejoined a room, but that is sufficiently rare
that we're not going to support it for now.
2017-10-03 11:18:21 +01:00
Erik Johnston
e585c83209 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-10-02 18:11:24 +01:00
Erik Johnston
6c1bb1601e Bump version and changelog 2017-10-02 18:05:17 +01:00
Erik Johnston
ea87cb1ba5 Make 'affinity' package optional 2017-10-02 18:03:59 +01:00
Erik Johnston
3fed5bb25f Move quit_with_error 2017-10-02 17:59:34 +01:00
David Baker
27955056e0 Merge branch 'develop' into erikj/groups_merged 2017-10-02 16:20:41 +01:00
Erik Johnston
90d70af269 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-10-02 16:20:23 +01:00
Erik Johnston
b23cb8fba8 Merge branch 'release-v0.23.0' of github.com:matrix-org/synapse 2017-10-02 13:52:03 +01:00
Erik Johnston
e4a709eda3 Bump version and change log 2017-10-02 13:51:38 +01:00
Richard van der Hoff
7fc1aad195 Drop search values with nul characters
https://github.com/matrix-org/synapse/issues/2187 contains a report of a port
failing due to nul characters somewhere in the search table. Let's try dropping
the offending rows.
2017-10-02 00:53:32 +01:00
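In the port script this boils down to filtering out rows whose search value contains a nul byte before inserting them into postgres; a sketch of that filter (row layout illustrative):

```python
def drop_rows_with_nul(rows, value_index):
    """Skip rows whose search value contains a nul character: postgres TEXT
    columns reject \x00, which is what made the port of the search table
    fall over."""
    return [row for row in rows if "\x00" not in (row[value_index] or "")]
```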
Jeremy Cline
cafb8de132 Unfreeze event before serializing with ujson
In newer versions of https://github.com/esnme/ultrajson, ujson does not
serialize frozendicts (introduced in esnme/ultrajson@53f85b1). Although
the PyPI version is still 1.35, Fedora ships with a build from commit
esnme/ultrajson@2f1d487. This causes the serialization to fail if the
distribution-provided package is used.

This runs the event through the unfreeze utility before serializing it.

Thanks to @ignatenkobrain for tracking down the root cause.

fixes #2351

Signed-off-by: Jeremy Cline <jeremy@jcline.org>
2017-09-30 11:22:37 -04:00
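The one-line shape of the fix, shown as a sketch (unfreeze is Synapse's frozendict-to-plain-dict helper in synapse.util.frozenutils):

```python
import ujson

from synapse.util.frozenutils import unfreeze

def encode_event(event_dict):
    # Newer ujson builds refuse to serialize frozendicts, so convert the
    # (possibly nested) frozendicts back into plain dicts before dumping.
    return ujson.dumps(unfreeze(event_dict))
```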
Richard van der Hoff
d5325d7ef1 Merge pull request #2480 from matrix-org/rav/federation_client_logging
Improve logging of failures in matrixfederationclient
2017-09-29 17:32:53 +01:00
Erik Johnston
d5694ac5fa Only log if we've removed media 2017-09-28 16:08:08 +01:00
Richard van der Hoff
e43de3ae4b Improve logging of failures in matrixfederationclient
* don't log exception types twice
* not all exceptions have a meaningful 'message'. Use the repr rather than
  attempting to build a string ourselves.
2017-09-28 15:38:09 +01:00
Richard van der Hoff
75e67b9ee4 Handle SERVFAILs when doing AAAA lookups for federation (#2477)
... to cope with people with broken dnssec setups, mostly
2017-09-28 15:24:00 +01:00
Erik Johnston
768f00dedb Up the limits on number of url cache entries to delete at one time 2017-09-28 14:27:27 +01:00
Erik Johnston
4dc07e93a8 Add old indices 2017-09-28 14:10:33 +01:00
Erik Johnston
7cc483aa0e Clear up expired url cache every 10s 2017-09-28 13:56:53 +01:00
Erik Johnston
e1e7d76cf1 Actually assign result to variable 2017-09-28 13:55:29 +01:00
Erik Johnston
93247a424a Only pull out local media that were for url cache 2017-09-28 13:48:14 +01:00
Erik Johnston
5f501ec7e2 Fix typo in url cache expiry timer 2017-09-28 12:59:01 +01:00
Erik Johnston
761d255fdf Merge pull request #2479 from matrix-org/erikj/expire_url_cache_thumbnails
Support new and old style media id formats
2017-09-28 12:58:13 +01:00
Erik Johnston
ace8079086 Support new and old style media id formats 2017-09-28 12:52:51 +01:00
Erik Johnston
7a44c01d89 Fix typo 2017-09-28 12:46:04 +01:00
Erik Johnston
c9bc4b7031 Merge pull request #2478 from matrix-org/erikj/expire_url_cache_thumbnails
Delete expired url cache data
2017-09-28 12:42:33 +01:00
Erik Johnston
ae79764fe5 Change expires column to expires_ts 2017-09-28 12:37:53 +01:00
Erik Johnston
77f1d24de3 More brackets 2017-09-28 12:23:15 +01:00
Erik Johnston
9ccb4226ba Delete expired url cache data 2017-09-28 12:18:06 +01:00
Erik Johnston
bf86a41ef1 Merge pull request #2476 from matrix-org/erikj/joined_members_auth
Fix /joined_members to work with AS users
2017-09-28 10:44:44 +01:00
Erik Johnston
8090fd4664 Fix /joined_members to work with AS users 2017-09-28 10:09:32 +01:00
Erik Johnston
3a743f649c Merge pull request #2475 from matrix-org/erikj/joined_members_auth
Fix bug where /joined_members didn't check user was in room
2017-09-27 16:42:23 +01:00
Erik Johnston
adec03395d Fix bug where /joined_members didn't check user was in room 2017-09-27 15:14:39 +01:00
David Baker
74e494b010 Merge pull request #2474 from matrix-org/dbkr/spam_check_module
Make the spam checker a module
2017-09-27 11:31:00 +01:00
David Baker
ef3a5ae787 Don't test whether spam_checker is not None
Sometimes it's a Mock object, which is not None but is still not
what we're after
2017-09-27 11:24:19 +01:00
David Baker
8c06dd6071 Remove unintentional debugging 2017-09-27 10:31:14 +01:00
David Baker
60c78666ab pep8 2017-09-27 10:26:13 +01:00
David Baker
1786b0e768 Forgot the new file again :( 2017-09-27 10:22:54 +01:00
David Baker
8ad5f34908 pep8 2017-09-26 19:21:41 +01:00
David Baker
6cd5fcd536 Make the spam checker a module 2017-09-26 19:20:23 +01:00
David Baker
ccc67d445b Merge pull request #2473 from matrix-org/dbkr/factor_out_module_loading
Factor out module loading to a separate place
2017-09-26 18:12:17 +01:00
David Baker
9fd086e506 unnecessary parens 2017-09-26 17:59:46 +01:00
David Baker
0b03a97708 Add module_loader.py 2017-09-26 17:56:41 +01:00
David Baker
4824a33c31 Factor out module loading to a separate place
So it can be reused
2017-09-26 17:51:26 +01:00
Erik Johnston
1e5fcfd14a Merge pull request #2472 from matrix-org/erikj/groups_rooms
Add remove room from group API
2017-09-26 16:05:46 +01:00
Erik Johnston
17b8e2bd02 Add remove room API 2017-09-26 15:52:41 +01:00
Erik Johnston
a8e2a3df32 Add unique index to group_rooms table 2017-09-26 15:39:21 +01:00
Erik Johnston
0d7c7fd907 Merge pull request #2471 from matrix-org/erikj/group_summary_publicised
Add is_publicised to group summary
2017-09-26 11:33:21 +01:00
Erik Johnston
95298783bb Add is_publicised to group summary 2017-09-26 11:04:37 +01:00
Erik Johnston
1a398b19fd Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.23.0 2017-09-26 10:08:59 +01:00
Erik Johnston
f4c8cd5e85 Bump changelog and version 2017-09-26 10:02:48 +01:00
Erik Johnston
b8d832a08c Merge pull request #2470 from matrix-org/erikj/sync_speed_fix
Refactor to speed up incremental syncs
2017-09-25 17:43:14 +01:00
Erik Johnston
e3edca3b5d Refactor to speed up incremental syncs 2017-09-25 17:35:39 +01:00
Richard van der Hoff
cacfa04cb6 Merge pull request #2468 from maxidor/develop
Clarify recommended network setup
2017-09-25 16:37:33 +01:00
Max Dor
e591f7b3f0 Include review feedback 2017-09-25 16:42:26 +02:00
Max Dor
7141f1a5cc Clarify recommended network setup 2017-09-25 16:20:23 +02:00
Erik Johnston
44edac0497 Merge branch 'release-v0.23.0' of github.com:matrix-org/synapse into develop 2017-09-25 14:52:46 +01:00
Richard van der Hoff
29e1c717c3 Merge pull request #2390 from r3dey3/develop
Fix iteration of requests_missing_keys; list doesn't have .values()
2017-09-25 11:56:01 +01:00
Richard van der Hoff
94133d7ce8 Merge branch 'develop' into develop 2017-09-25 11:50:11 +01:00
Erik Johnston
b15c2b7971 Update CHANGES 2017-09-25 11:34:12 +01:00
Erik Johnston
ba8fdc925c Bump version and changes 2017-09-25 11:01:31 +01:00
Richard van der Hoff
79b3cf3e02 Fix logcontext leak in keyclient (#2465)
preserve_context_over_function doesn't do what you want it to do.
2017-09-25 09:51:39 +01:00
Richard van der Hoff
b4fd710e1a Merge pull request #2464 from rnbdsh/patch-4
Remove non-existing files, add stop, use synctl
2017-09-25 09:33:22 +01:00
rnbdsh
b68b0ede7a Start traditionally, stop synctl
Starting with synctl led to "no config file found"
Stopping also led to some failures (code=exited, status=1/FAILURE), but at least now we can stop the service.
2017-09-24 04:55:19 +02:00
rnbdsh
68f737702b Remove non-existing files, add stop, use synctl
Non-existing files, when running as suggested by https://github.com/matrix-org/synapse#configuring-synapse:
/etc/synapse/log_config.yaml, so the --log-config option leads to an error
/etc/sysconfig/synapse: the environment file (and even /etc/sysconfig) does not exist on Arch Linux

Also, instead of calling python2 we use synctl, as this seems to be the proper way to start it, and it gives us a more useful error in the systemctl status. We now also allow stop (and therefore restart).
2017-09-24 04:26:23 +02:00
Richard van der Hoff
f65e31d22f Do an AAAA lookup on SRV record targets (#2462)
Support SRV records which point at AAAA records, as well as A records.

Fixes https://github.com/matrix-org/synapse/issues/2405
2017-09-22 20:26:47 +01:00
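With twisted.names this is roughly "resolve both record types for the SRV target and take whatever comes back"; a sketch only, since the real resolver also deals with caching, ports and SERVFAIL fallbacks:

```python
from twisted.internet import defer
from twisted.names import client

@defer.inlineCallbacks
def resolve_srv_target(host):
    """Look up both A and AAAA records for an SRV target, tolerating a
    failure (e.g. a SERVFAIL from a broken DNSSEC setup) on either lookup."""
    payloads = []
    for lookup in (client.lookupAddress, client.lookupIPV6Address):
        try:
            answers, _authority, _additional = yield lookup(host)
            payloads.extend(a.payload for a in answers)
        except Exception:
            pass  # keep whatever the other record type gave us
    defer.returnValue(payloads)
```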
Matthew Hodgson
f496399ac4 fix thinko'd docstring 2017-09-22 15:34:14 +01:00
Erik Johnston
3166ed55b2 Fix device list when rejoining room (#2461) 2017-09-22 14:44:17 +01:00
Erik Johnston
e1dec2f1a7 Remove user from group summary when they leave the group 2017-09-21 16:09:57 +01:00
Erik Johnston
bb746a9de1 Revert: Keep room_id's in group summary 2017-09-21 15:57:22 +01:00
Erik Johnston
ae8d4bb0f0 Keep room_id's in group summary 2017-09-21 15:55:18 +01:00
Richard van der Hoff
c94ab5976a Merge pull request #2459 from matrix-org/rav/keyring_cleanups
Clean up Keyring code
2017-09-20 11:29:39 +01:00
Erik Johnston
197d82dc07 Correctly return next token 2017-09-20 11:12:11 +01:00
Erik Johnston
069ae2df12 Fix initial sync 2017-09-20 10:52:12 +01:00
Richard van der Hoff
6de74ea6d7 Fix logcontexts in _check_sigs_and_hashes 2017-09-20 01:32:42 +01:00
Richard van der Hoff
72472456d8 Add some more tests for Keyring 2017-09-20 01:32:42 +01:00
Richard van der Hoff
c5c24c239b Fix logcontext handling in verify_json_objects_for_server
preserve_context_over_fn is essentially broken, because (a) it pointlessly
drops the current logcontext before calling its wrapped function, which means
we don't get any useful logcontexts for _handle_key_deferred; (b) it wraps the
resulting deferred in a _PreservingContextDeferred, which is very dangerous
because you then can't yield on it without leaking context back into the
reactor.

Instead, let's specify that the resultant deferreds call their callbacks with
no logcontext.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
c5b0e9f485 Turn _start_key_lookups into an inlineCallbacks function
... which means that logcontexts can be correctly preserved for the stuff it
does.

get_server_verify_keys is now called with the logcontext, so needs to
preserve_fn when it fires off its nested inlineCallbacks function.

Also renames get_server_verify_keys to reflect the fact it's meant to be
private.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
abdefb8a01 Fix potential race in _start_key_lookups
If the verify_request.deferred has already completed, then `remove_deferreds`
will be called immediately. It therefore might resolve the server_to_deferred
deferred while there are still other requests for that server in flight.

To avoid that, we should build the complete list of requests, and *then* add the
callbacks.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
afbd773dc6 Add some comments to _start_key_lookups 2017-09-20 01:32:42 +01:00
Richard van der Hoff
2a4b9ea233 Consistency for how verify_request.deferred is called
Define that it is run with no log context, and make sure that happens.

If we aren't careful to reset the logcontext, we can't bung the deferreds into
defer.gatherResults etc. We don't actually do that directly, but we *do*
resolve other deferreds from affected callbacks (notably the server_to_deferred
map in _start_key_lookups), and those *do* get passed into
defer.gatherResults. It turns out that this way ends up being least confusing.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
3b98439eca Factor out _start_key_lookups
... to make it easier to see what's going on.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
fde63b880d Replace server_and_json with verify_requests
This is a precursor to factoring some of this code out.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
2d511defd9 pull out handle_key_deferred to top level
There's no need for this to be a nested definition; pulling it out not only
makes it more efficient, but makes it easier to check that it's not accessing
any local variables it shouldn't be.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
dd1ea9763a Fix incorrect key_ids in error message 2017-09-20 01:32:42 +01:00
Richard van der Hoff
e76d1135dd Invalidate signing key cache when we get an update
This might make the cache slightly more efficient.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
fcf2c0fd1a Remove redundant preserve_fn
preserve_fn is a no-op unless the wrapped function returns a
Deferred. verify_json_objects_for_server returns a list, so this is doing
nothing.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
9864efa532 Fix concurrent server_key requests (#2458)
Fix a bug where we could end up firing off multiple requests for server_keys
for the same server at the same time.
2017-09-19 23:25:44 +01:00
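The usual shape of such a fix is to keep a map of in-flight fetches per server and let concurrent callers share the result; a generic sketch, not the actual Keyring code:

```python
from twisted.internet import defer
from twisted.python import failure

class KeyFetcher(object):
    """Coalesce concurrent requests for a server's keys onto one fetch."""

    def __init__(self, fetch_keys_from_server):
        self._fetch = fetch_keys_from_server
        self._observers = {}  # server_name -> list of waiting Deferreds

    def get_keys(self, server_name):
        d = defer.Deferred()
        if server_name in self._observers:
            # A fetch for this server is already in flight: just wait for it.
            self._observers[server_name].append(d)
            return d

        self._observers[server_name] = [d]

        def _finished(result):
            # Hand the shared result (or Failure) to every waiter.
            for observer in self._observers.pop(server_name):
                if isinstance(result, failure.Failure):
                    observer.errback(result)
                else:
                    observer.callback(result)

        defer.maybeDeferred(self._fetch, server_name).addBoth(_finished)
        return d
```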
Richard van der Hoff
aa620d09a0 Add a config option to block all room invites (#2457)
- allows sysadmins the ability to lock down their servers so that people can't
send their users room invites.
2017-09-19 16:08:14 +01:00
Richard van der Hoff
2eabdf3f98 add some comments to on_exchange_third_party_invite_request 2017-09-19 12:20:36 +01:00
Richard van der Hoff
5ed109d59f PoC for filtering spammy events (#2456)
Demonstration of how you might add some hooks to filter out spammy events.
2017-09-19 12:20:11 +01:00
Erik Johnston
47d9848dc4 Merge pull request #2454 from matrix-org/erikj/groups_sync_creator
Ensure that creator of group sees group down /sync
2017-09-19 11:21:26 +01:00
Erik Johnston
93e504d04e Ensure that creator of group sees group down /sync 2017-09-19 11:08:16 +01:00
Erik Johnston
b5feaa5a49 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/groups_merged 2017-09-19 11:07:45 +01:00
Richard van der Hoff
3f405b34e9 Fix overzealous kicking of guest users (#2453)
We should only kick guest users if the guest access event is authorised.
2017-09-19 08:52:52 +01:00
Richard van der Hoff
290777b3d9 Clean up and document handling of logcontexts in Keyring (#2452)
I'm still unclear on what the intended behaviour for
`verify_json_objects_for_server` is, but at least I now understand the
behaviour of most of the things it calls...
2017-09-18 18:31:01 +01:00
Erik Johnston
77c81ca6ea Merge pull request #2451 from matrix-org/erikj/add_state_to_timeline
Don't filter out current state events from timeline
2017-09-18 17:22:33 +01:00
Erik Johnston
2d1b7955ae Don't filter out current state events from timeline 2017-09-18 17:13:03 +01:00
David Baker
862c8da560 Merge pull request #2450 from matrix-org/dbkr/push_event_id_only
Add support for event_id_only push format
2017-09-18 16:41:29 +01:00
Erik Johnston
2d9f341c3e Merge pull request #2449 from matrix-org/erikj/rejoin_device_lists
Correctly handle leaving room in /key/changes
2017-09-18 15:59:13 +01:00
David Baker
436ee0a2ea Also include the room_id
as really it's part of the event ID
2017-09-18 15:58:38 +01:00
David Baker
b393f5db51 Use .get - it's much shorter 2017-09-18 15:50:26 +01:00
David Baker
a2562f9d74 Add support for event_id_only push format
Param in the data dict of a pusher that tells an HTTP pusher to
send just the event_id of the event it's notifying about and the
notification counts. For clients that want to go & fetch the body
of the event themselves anyway.
2017-09-18 15:39:39 +01:00
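As a sketch, the pusher data and the resulting trimmed notification look something like this (URLs, IDs and counts are illustrative; the exact payload shape is defined by the push gateway API):

```python
# What a client puts in the pusher's "data" dict when registering it:
pusher_data = {
    "url": "https://push-gateway.example.com/_matrix/push/v1/notify",
    "format": "event_id_only",
}

# Roughly what the HTTP pusher then sends per notification: just enough for
# the client to fetch the event itself, plus the unread counts.
notification = {
    "notification": {
        "event_id": "$event:example.com",
        "room_id": "!room:example.com",
        "counts": {"unread": 2},
        "devices": [{"app_id": "org.example.app", "pushkey": "abc123"}],
    }
}
```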
Erik Johnston
d6dadd95ac Correctly handle leaving room in /key/changes 2017-09-18 15:38:22 +01:00
Erik Johnston
993d3f710b Merge pull request #2443 from matrix-org/erikj/rejoin_device_lists
Send down device list change notif when member leaves/rejoins room
2017-09-18 13:17:53 +01:00
Erik Johnston
4a94eb3ea4 Fix typo 2017-09-15 09:56:54 +01:00
Erik Johnston
3a0cee28d6 Actually hook leave notifs up 2017-09-14 11:49:37 +01:00
Erik Johnston
4f845a0713 Handle joining/leaving rooms in /keys/changes 2017-09-13 16:28:08 +01:00
Erik Johnston
473700f016 Get left rooms 2017-09-13 15:13:41 +01:00
Erik Johnston
9ce866ed4f In sync handle device lists for newly joined/left rooms 2017-09-12 16:44:26 +01:00
Erik Johnston
69ef4987a6 Add left section to /keys/changes 2017-09-08 14:44:36 +01:00
Erik Johnston
53cc8ad35a Send down device list change notif when member leaves/rejoins room 2017-09-07 15:08:39 +01:00
Richard van der Hoff
e2fcba038c Merge pull request #2439 from matrix-org/rav/tox_tweaks
do tox install with pip -e
2017-09-06 17:08:19 +01:00
Richard van der Hoff
5f59f20636 Merge remote-tracking branch 'origin/master' into develop 2017-09-05 21:58:19 +01:00
Richard van der Hoff
59de2c7afa Exclude the github issue template from our sdist (#2440)
PR #2413 added an issue template, but just adding files to the project
directory upsets the packaging scripts: we need to explicitly include or
exclude them.

Move the template into a .github directory to make that easy, and to de-clutter
the root a bit.
2017-09-05 21:57:19 +01:00
Richard van der Hoff
4b616c8cf2 Merge branch 'master' into develop 2017-09-05 17:51:13 +01:00
Richard van der Hoff
4dd61df6f8 do tox install with pip -e
- this ensures we end up with a working virtualenv which we can use for other
things.
2017-09-05 16:35:23 +01:00
Erik Johnston
c0c31656ff Merge pull request #2433 from ptman/patch-1
Document known to work postgres version
2017-09-01 15:28:42 +01:00
Paul Tötterman
8b16b43b7f Document known to work postgres version 2017-09-01 16:52:45 +03:00
Richard van der Hoff
dff396de0f Set --python when running sytest
.. because I want to make the 'install_and_run' script useful for non-synapse
jobs, which do not accept --python. In any case we set up the path here, so
sytest shouldn't be guessing it.
2017-09-01 11:20:37 +01:00
Richard van der Hoff
f06ffdb6fa fix python path in jenkins scripts 2017-09-01 10:31:45 +01:00
Richard van der Hoff
6e67aaa7f2 Set --python when running sytest
.. because I want to make the 'install_and_run' script useful for non-synapse
jobs, which do not accept --python. In any case we set up the path here, so
sytest shouldn't be guessing it.
2017-09-01 10:06:21 +01:00
Erik Johnston
7f0d0ba3bc Merge pull request #2430 from matrix-org/erikj/groups_profile_cache
Add user profiles to summary from group server
2017-08-25 16:48:58 +01:00
Erik Johnston
4a9b1cf253 Add user profiles to summary from group server 2017-08-25 16:23:58 +01:00
Erik Johnston
6d8799af1a Merge pull request #2429 from matrix-org/erikj/groups_profile_cache
Add a remote user profile cache
2017-08-25 15:52:42 +01:00
Erik Johnston
258409ef61 Fix typos and reinherit 2017-08-25 14:45:20 +01:00
Erik Johnston
bf81f3cf2c Split out profile handler to fix tests 2017-08-25 14:34:56 +01:00
Erik Johnston
27ebc5c8f2 Add remote profile cache 2017-08-25 11:25:47 +01:00
Erik Johnston
97c544f91f Add _simple_update 2017-08-25 11:11:37 +01:00
Richard van der Hoff
934ab76835 Merge pull request #2428 from matrix-org/rav/update_upgrade
Tweaks to the upgrade instructions
2017-08-24 11:42:02 +01:00
Richard van der Hoff
fc9878f6a4 Tweaks to the upgrade instructions 2017-08-23 15:27:02 +01:00
Richard van der Hoff
a4d3bfe3d6 Merge pull request #2417 from matrix-org/rav/federation_client
Improvements to the federation test client
2017-08-23 14:50:26 +01:00
Richard van der Hoff
a7effa8400 Merge pull request #2288 from kyrias/bcrypt
python_dependencies: Use bcrypt module instead of py-bcrypt
2017-08-23 14:14:56 +01:00
Richard van der Hoff
a04c6bbf8f test federation client: Allow server-name and key-file as options
so that you don't necessarily need a config file.
2017-08-22 11:19:30 +01:00
Richard van der Hoff
77ea8cbdd7 Merge pull request #2416 from matrix-org/rav/prometheus_config
Add prometheus config
2017-08-22 10:34:40 +01:00
Erik Johnston
2800983f3e Merge pull request #2410 from matrix-org/erikj/groups_publicise
Add ability to publicise group membership
2017-08-21 16:51:56 +01:00
Erik Johnston
8b50fe5330 Use BOOLEAN rather than TEXT type 2017-08-21 16:37:29 +01:00
Erik Johnston
73b4e18c62 Merge pull request #2426 from matrix-org/erikj/groups_fix_sync
Groups: Fix missing json.load in initial sync
2017-08-21 16:37:14 +01:00
Tom Lant
20b3660495 Merge pull request #2413 from matrix-org/toml-issue-template
Issue template for Synapse
2017-08-21 16:07:35 +01:00
Erik Johnston
175a01f56c Groups: Fix missing json.load in initial sync 2017-08-21 14:45:56 +01:00
Richard van der Hoff
046b659ce2 Improvements to the federation test client
Make it read the config file, primarily.
2017-08-17 16:59:11 +01:00
Tom Lant
413c270723 Update ISSUE_TEMPLATE.md
Added instructions for checking server version.
2017-08-17 11:14:35 +01:00
Tom Lant
ec3a2dc773 Update ISSUE_TEMPLATE.md
Responding to review comments.
2017-08-17 11:00:51 +01:00
Richard van der Hoff
012875258c Add prometheus config
... from https://github.com/matrix-org/synapse-prometheus-config.
2017-08-16 15:31:44 +01:00
Richard van der Hoff
692250c6be Fix user_dir startup
Add missing parameter to _base.start_worker_reactor
2017-08-16 15:11:29 +01:00
Richard van der Hoff
d2352347cf Fix process startup
escape the % that got added in 92168cb so that the process starts up ok.
2017-08-16 14:57:35 +01:00
Matthew Hodgson
92168cbbc5 explain why CPU affinity is a good idea 2017-08-15 18:27:42 +01:00
Richard van der Hoff
963015005e Merge pull request #2415 from matrix-org/rav/synctl_cpu_affinity
Allow configuration of CPU affinity
2017-08-15 17:42:05 +01:00
Richard van der Hoff
10d8b701a1 Allow configuration of CPU affinity
Make it possible to set the CPU affinity in the config file, so that we don't
need to remember to do it manually every time.
2017-08-15 17:08:28 +01:00
Richard van der Hoff
543c794a76 Factor out common application start
We have 10 copies of this code, and I don't really want to update each one
separately.
2017-08-15 17:04:40 +01:00
Tom Lant
57cd0c3dea Update ISSUE_TEMPLATE.md
Removed the sentence encouraging people not to file a bug - if people are in doubt we'd rather they filed a bug than gave up entirely.
2017-08-14 14:40:32 +01:00
Tom Lant
b524dd4c35 Update ISSUE_TEMPLATE.md
Oops capital L.
2017-08-14 14:36:49 +01:00
Tom Lant
09703609fc Create ISSUE_TEMPLATE.md
A new issue template proposed to try and steer people towards #matrix:matrix.org for support queries relating to running their own homeserver.
2017-08-14 14:35:25 +01:00
Erik Johnston
ba3ff7918b Fixup 2017-08-11 13:42:42 +01:00
Erik Johnston
ef8e578677 Add bulk group publicised lookup API 2017-08-09 13:36:22 +01:00
Erik Johnston
b880ff190a Allow update group publicity 2017-08-08 14:19:41 +01:00
Erik Johnston
05e21285aa Store whether the user wants to publicise their membership of a group 2017-08-08 13:01:46 +01:00
hera
eae04f1952 fix english 2017-08-04 23:56:42 +01:00
hera
5699b05072 typo 2017-08-04 23:44:37 +01:00
Erik Johnston
a1e67bcb97 Remove stale TODO comments 2017-08-04 10:07:10 +01:00
Erik Johnston
09552f9d9c Reduce spammy log line in synchrotrons 2017-08-02 17:29:51 +01:00
Kenny Keslar
f18373dc5d Fix iteration of requests_missing_keys; list doesn't have .values()
Signed-off-by: Kenny Keslar <r3dey3@r3dey3.com>
2017-07-26 22:44:19 -05:00
Erik Johnston
ebbaae5526 Merge pull request #2382 from matrix-org/erikj/group_privilege
Include users membership in group in summary API
2017-07-24 18:09:12 +01:00
Erik Johnston
966a70f1fa Update comment 2017-07-24 17:49:39 +01:00
Erik Johnston
629cdfb124 Use join rather than joined, etc. 2017-07-24 14:54:05 +01:00
Erik Johnston
ed666d3969 Fix all the typos 2017-07-24 14:05:09 +01:00
Erik Johnston
b76ef6ccb8 Include users membership in group in summary API 2017-07-24 13:55:39 +01:00
Erik Johnston
851aeae7c7 Check users/rooms are in group before adding to summary 2017-07-24 13:40:56 +01:00
Erik Johnston
d5e32c843f Correctly add joins to correct segment 2017-07-24 13:31:26 +01:00
Erik Johnston
96917d5552 Merge pull request #2378 from matrix-org/erikj/group_sync_support
Add groups to sync stream
2017-07-21 11:05:39 +01:00
Erik Johnston
0401604222 Merge pull request #2377 from matrix-org/erikj/group_profile_update
Add update group profile API
2017-07-20 17:53:39 +01:00
Erik Johnston
b238cf7f6b Remove spurious content param 2017-07-20 17:49:55 +01:00
Erik Johnston
960dae3340 Add notifier 2017-07-20 17:14:44 +01:00
Erik Johnston
2cc998fed8 Fix replication. And notify 2017-07-20 17:13:18 +01:00
Erik Johnston
139fe30f47 Remember to cast to bool 2017-07-20 16:47:35 +01:00
Erik Johnston
4d793626ff Fix bug in generating current token 2017-07-20 16:42:44 +01:00
Erik Johnston
c544188ee3 Add groups to sync stream 2017-07-20 16:36:42 +01:00
Erik Johnston
0ab153d201 Check values are strings 2017-07-20 16:24:18 +01:00
Erik Johnston
8209b5f033 Fix a storage desc 2017-07-20 16:22:22 +01:00
Erik Johnston
b27429729d Merge pull request #2375 from matrix-org/erikj/port_script
Fix port script for user directory tables
2017-07-20 16:16:43 +01:00
Erik Johnston
60a9a49f83 Extend comment 2017-07-20 16:16:29 +01:00
Erik Johnston
b3bf6a1218 Merge pull request #2374 from matrix-org/erikj/group_server_local
Add local group server support
2017-07-20 13:15:42 +01:00
Erik Johnston
57826d645b Fix typo 2017-07-20 13:15:22 +01:00
Erik Johnston
d7d24750be Fix port script for user directory tables 2017-07-20 10:47:01 +01:00
Erik Johnston
6f443a74cf Add update group profile API 2017-07-20 09:46:33 +01:00
Erik Johnston
14a34f12d7 Comments 2017-07-18 17:28:42 +01:00
Erik Johnston
3431ec55dc Comments 2017-07-18 17:23:50 +01:00
Erik Johnston
6027b1992f Fix permissions 2017-07-18 16:51:25 +01:00
Erik Johnston
e884ff31d8 Add DELETE 2017-07-18 16:41:44 +01:00
Erik Johnston
05c13f6c22 Add 'args' param to post_json 2017-07-18 16:40:21 +01:00
Erik Johnston
94ecd871a0 Fix typos 2017-07-18 16:38:54 +01:00
Erik Johnston
12ed4ee48e Correctly parse query params 2017-07-18 15:33:09 +01:00
Erik Johnston
332839f6ea Update federation client pokes 2017-07-18 14:45:37 +01:00
Erik Johnston
e5ea6dd021 Add client apis 2017-07-18 14:37:06 +01:00
Erik Johnston
cccfcfa7b9 Comments 2017-07-18 10:35:18 +01:00
Erik Johnston
68f34e85ce Use transport client directly 2017-07-18 10:29:57 +01:00
Erik Johnston
3e703eb04e Comment 2017-07-18 10:17:25 +01:00
Erik Johnston
508460f240 Remove sync stuff 2017-07-18 09:55:46 +01:00
Erik Johnston
6e9f147faa Add GroupID type 2017-07-18 09:47:25 +01:00
Erik Johnston
4540730111 Remove unused tables 2017-07-18 09:38:15 +01:00
Erik Johnston
e96ee95a7e Remove sync stuff 2017-07-18 09:38:08 +01:00
Erik Johnston
2f9eafdd36 Add local group server support 2017-07-17 12:03:49 +01:00
Erik Johnston
b3de67234e Merge pull request #2363 from matrix-org/erikj/group_server_summary
Add group summary APIs
2017-07-17 11:54:14 +01:00
Erik Johnston
514c2d3c4d Merge pull request #2371 from matrix-org/erikj/push_cache_hit
Increase cache hit ratio for push
2017-07-17 09:42:27 +01:00
Erik Johnston
bfde076022 Increase cache hit ratio for push
We don't update the cache in all code paths, which causes subsequent
calls to miss the cache
2017-07-14 16:11:26 +01:00
Erik Johnston
cb3aee8219 Ensure category and role ids are non-null 2017-07-14 14:06:55 +01:00
Erik Johnston
85fda57208 Add DEFAULT_ROLE_ID 2017-07-14 14:03:54 +01:00
Erik Johnston
4b203bdba5 Correctly increment orders 2017-07-14 14:02:31 +01:00
Erik Johnston
d3862812ff Merge pull request #2366 from matrix-org/erikj/push_metrics
Add more metrics to push rule evaluation
2017-07-14 11:04:03 +01:00
Erik Johnston
8d26385d76 Add more metrics to push rule evaluation 2017-07-13 14:37:30 +01:00
Erik Johnston
3b0470dba5 Remove unused functions 2017-07-13 13:54:01 +01:00
Erik Johnston
8575e3160f Comments 2017-07-13 13:52:41 +01:00
Erik Johnston
67b7b904ba Merge pull request #2365 from matrix-org/erikj/push_skip_lock
Push: Don't acquire lock unless necessary
2017-07-13 11:44:48 +01:00
Erik Johnston
f60218ec41 Push: Don't acquire lock unless necessary 2017-07-13 11:23:53 +01:00
Erik Johnston
a78cda4baf Remove TODO 2017-07-13 11:17:07 +01:00
Erik Johnston
7a39da8cc6 Add summary APIs to federation 2017-07-13 11:13:19 +01:00
Erik Johnston
5bbb53580a raise NotImplementedError 2017-07-13 10:25:29 +01:00
Erik Johnston
26451a09eb Comments 2017-07-12 14:47:18 +01:00
Erik Johnston
8d55877c9e Simplify checking if admin 2017-07-12 11:43:39 +01:00
Erik Johnston
a62406aaa5 Add group summary APIs 2017-07-12 11:36:15 +01:00
Erik Johnston
91818723a1 Merge pull request #2362 from matrix-org/erikj/sync_user_users_who_share
Use less DB for device list handling in sync
2017-07-12 10:45:30 +01:00
Erik Johnston
e9aec001f4 Use less DB for device list handling in sync 2017-07-12 10:30:10 +01:00
Erik Johnston
28e8c46f29 Merge pull request #2352 from matrix-org/erikj/group_server_split
Initial Group Server
2017-07-12 10:26:29 +01:00
Erik Johnston
6d586dc05c Comment 2017-07-12 09:58:37 +01:00
Erik Johnston
410b4e14a1 Move comment 2017-07-11 15:44:18 +01:00
Erik Johnston
fe4e885f54 Add federation API for adding room to group 2017-07-11 14:35:07 +01:00
Erik Johnston
bbb739d24a Comment 2017-07-11 14:31:36 +01:00
Erik Johnston
26752df503 Typo 2017-07-11 14:29:03 +01:00
Erik Johnston
e52c391cd4 Rename column to attestation_json 2017-07-11 14:25:46 +01:00
Erik Johnston
0aac30d53b Comments 2017-07-11 14:23:50 +01:00
Erik Johnston
0184a97dbd Merge pull request #2354 from krombel/reduce_static_sync_reply
encode sync-response statically
2017-07-11 14:19:56 +01:00
Krombel
85b9f76f1d split out reducing stuff; just make encode_* static 2017-07-11 13:14:35 +02:00
Erik Johnston
6322fbbd41 Comment 2017-07-11 11:52:03 +01:00
Erik Johnston
8ba89f1050 Remove u/ requirement 2017-07-11 11:45:32 +01:00
Erik Johnston
429925a5e9 Lift out visibility parsing 2017-07-11 11:44:08 +01:00
Erik Johnston
83936293eb Comments 2017-07-11 11:42:25 +01:00
Erik Johnston
e2cb760dcc Merge pull request #2357 from matrix-org/erikj/push
Don't compute push actions for backfilled events
2017-07-11 10:53:22 +01:00
Erik Johnston
925b3638ff Reduce log levels in tcp replication 2017-07-11 10:04:21 +01:00
Erik Johnston
9a6fd3ef29 Don't compute push actions for backfilled events 2017-07-11 10:02:21 +01:00
Krombel
2f82de18ee fix test 2017-07-10 17:34:58 +02:00
Erik Johnston
b8ca494ee9 Initial group server implementation 2017-07-10 15:44:15 +01:00
Krombel
6e16aca8b0 encode sync-response statically; omit empty objects from sync-response 2017-07-10 16:42:17 +02:00
Erik Johnston
d4d12daed9 Include registration and as stores in frontend proxy 2017-07-07 18:36:45 +01:00
Erik Johnston
f467a8f66d Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-07-07 18:26:28 +01:00
Erik Johnston
c9184ed87e Merge pull request #2344 from matrix-org/erikj/frontend_proxy
Add a frontend proxy
2017-07-07 18:25:46 +01:00
Erik Johnston
1fc4a962e4 Add a frontend proxy 2017-07-07 18:19:46 +01:00
Erik Johnston
08284c86ed Merge pull request #2343 from matrix-org/erikj/fastpush
Perf: Don't filter events for push
2017-07-07 14:25:42 +01:00
Erik Johnston
f502b0dea1 Perf: Don't filter events for push
We know the users are joined and we can explicitly check for if they are
ignoring the user, so lets do that.
2017-07-07 14:04:40 +01:00
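A minimal sketch of the check described in that commit, assuming an event dict with a `sender` field and per-user account data in the standard `m.ignored_user_list` shape (the helper name is illustrative, not Synapse's actual code):

```python
def should_notify(event, member_account_data):
    # Push candidates are already known to be joined members, so instead of
    # running the full event-visibility filter we only check the member's
    # ignore list (m.ignored_user_list account data).
    ignore_list = member_account_data.get("m.ignored_user_list", {})
    ignored_users = ignore_list.get("ignored_users", {})
    return event["sender"] not in ignored_users


event = {"sender": "@spammer:example.com", "type": "m.room.message"}
account_data = {"m.ignored_user_list": {"ignored_users": {"@spammer:example.com": {}}}}
print(should_notify(event, account_data))  # False
```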
Erik Johnston
1200f28d66 Merge branch 'hotfixes-v0.22.1' of github.com:matrix-org/synapse 2017-07-06 18:11:49 +01:00
Erik Johnston
76ed3476d3 Bump version and changelog 2017-07-06 18:11:22 +01:00
Erik Johnston
58dc1f2c78 Merge pull request #2342 from matrix-org/erikj/pusher_pool_instantiate
Fix bug where pusherpool didn't start and broke some rooms
2017-07-06 18:08:43 +01:00
Erik Johnston
5a7f561a9b Fix bug where pusherpool didn't start and broke some rooms
Since we didn't instantiate the PusherPool at start time it could fail
at run time, which it did for some users.

This may or may not fix things for those users, but any such failure should
now happen at start time and stop the server from starting.
2017-07-06 17:55:51 +01:00
Erik Johnston
ed9a7f5436 Merge pull request #2309 from matrix-org/erikj/user_ip_repl
Fix up user_ip replication commands
2017-07-06 14:33:14 +01:00
Erik Johnston
1f64207f26 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-07-06 13:57:45 +01:00
Erik Johnston
42b50483be Merge branch 'release-v0.22.0' of github.com:matrix-org/synapse 2017-07-06 10:36:25 +01:00
Erik Johnston
6264cf9666 Bump version and changelog 2017-07-06 10:35:56 +01:00
Erik Johnston
f386632800 Merge pull request #2334 from matrix-org/erikj/refactor_transport_server
Separate federation servlet into different lists
2017-07-05 17:09:07 +01:00
Erik Johnston
5e49a57ecc Separate federation servlet into different lists 2017-07-05 14:32:24 +01:00
Richard van der Hoff
3d31b39297 Merge pull request #2332 from matrix-org/rav/fix_pushes
Fix caching error in the push evaluator
2017-07-05 11:10:53 +01:00
Richard van der Hoff
73cfe48031 Fix caching error in the push evaluator
Initialising `result` to `{}` in the parameters meant that every call to
_flatten_dict used the *same* target dictionary.

I'm hopeful this will fix https://github.com/matrix-org/synapse/issues/2270,
but I suspect it won't. (This code seems to have been here since forever,
unlike the bug, and I don't really think it explains the observed
behaviour). Still, it makes it hard to investigate the problem.
2017-07-05 00:28:43 +01:00
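The underlying issue is Python's shared mutable default argument. A standalone sketch of the pitfall and the fix, not Synapse's actual `_flatten_dict`:

```python
def flatten_dict_buggy(d, prefix=[], result={}):  # BUG: one shared default dict
    for key, value in d.items():
        if isinstance(value, dict):
            flatten_dict_buggy(value, prefix + [key], result)
        else:
            result[".".join(prefix + [key])] = value
    return result


def flatten_dict_fixed(d, prefix=None, result=None):
    # Create fresh containers on every call instead of sharing one default.
    if prefix is None:
        prefix = []
    if result is None:
        result = {}
    for key, value in d.items():
        if isinstance(value, dict):
            flatten_dict_fixed(value, prefix + [key], result)
        else:
            result[".".join(prefix + [key])] = value
    return result


print(flatten_dict_buggy({"a": 1}))  # {'a': 1}
print(flatten_dict_buggy({"b": 2}))  # {'a': 1, 'b': 2}  <- stale entry leaks in
print(flatten_dict_fixed({"b": 2}))  # {'b': 2}
```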
Erik Johnston
05538587ef Bump version and changelog 2017-07-04 14:02:21 +01:00
Erik Johnston
f92d7416d7 Merge pull request #2330 from matrix-org/erikj/cache_size_factor
Increase default cache size
2017-07-04 10:51:21 +01:00
Mark Haines
1f12d808e7 Merge pull request #2323 from matrix-org/markjh/invite_checks
Improve the error handling for bad invites received over federation
2017-07-04 10:50:43 +01:00
Erik Johnston
29a4066a4d Update test 2017-07-04 10:21:25 +01:00
Erik Johnston
7afb4e3f54 Update README 2017-07-04 10:00:52 +01:00
Erik Johnston
495f075b41 Increase default cache factor size. 2017-07-04 09:58:32 +01:00
Erik Johnston
b5e8d529e6 Define CACHE_SIZE_FACTOR once 2017-07-04 09:56:44 +01:00
Mark Haines
3e279411fe Improve the error handling for bad invites received over federation 2017-06-30 16:20:30 +01:00
Erik Johnston
47574c9cba Merge pull request #2321 from matrix-org/erikj/prefill_forward
Prefill forward extrems and event to state groups
2017-06-30 11:03:04 +01:00
Erik Johnston
6ff14ddd2e Make into list 2017-06-29 15:47:37 +01:00
Erik Johnston
5946aa0877 Prefill forward extrems and event to state groups 2017-06-29 15:38:48 +01:00
Erik Johnston
d800ab2847 Merge pull request #2320 from matrix-org/erikj/cache_macaroon_parse
Cache macaroon parse and validation
2017-06-29 15:06:43 +01:00
Erik Johnston
2c365f4723 Cache macaroon parse and validation
Turns out this can be quite expensive for requests, and is easily
cachable. We don't cache the lookup to the DB so invalidation still
works.
2017-06-29 14:50:18 +01:00
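A hedged sketch of the pattern: memoise the pure parse step (pymacaroons is the macaroon library Synapse uses, though the caveat layout and helper names here are only illustrative) while leaving the DB lookup uncached so invalidation keeps working.

```python
from functools import lru_cache

from pymacaroons import Macaroon


@lru_cache(maxsize=1000)
def parse_access_token(token):
    # Deserialising the macaroon is pure CPU work on the token string, so the
    # parsed result can safely be memoised per token.
    return Macaroon.deserialize(token)


def user_id_from_token(token):
    # Illustrative caveat layout: a single "user_id = @alice:example.com" caveat.
    macaroon = parse_access_token(token)
    for caveat in macaroon.caveats:
        if caveat.caveat_id.startswith("user_id = "):
            return caveat.caveat_id[len("user_id = "):]
    return None
```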
Erik Johnston
a1a253ea50 Merge pull request #2319 from matrix-org/erikj/prune_sessions
Use an ExpiringCache for storing registration sessions
2017-06-29 14:20:24 +01:00
Erik Johnston
c72058bcc6 Use an ExpiringCache for storing registration sessions
This is because pruning them was a significant performance drain on
matrix.org
2017-06-29 14:08:37 +01:00
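A rough sketch of the general shape, not Synapse's actual ExpiringCache: each entry carries its insertion time and is dropped once it is older than the TTL, so stale sessions disappear without an expensive pruning pass.

```python
import time


class SimpleExpiringCache:
    def __init__(self, expiry_seconds):
        self._expiry = expiry_seconds
        self._entries = {}  # key -> (inserted_at, value)

    def __setitem__(self, key, value):
        self._entries[key] = (time.time(), value)

    def get(self, key, default=None):
        item = self._entries.get(key)
        if item is None:
            return default
        inserted_at, value = item
        if time.time() - inserted_at > self._expiry:
            del self._entries[key]  # expired: evict lazily on access
            return default
        return value


# e.g. registration sessions that quietly disappear after an hour
sessions = SimpleExpiringCache(expiry_seconds=3600)
sessions["session-id"] = {"stage": "m.login.dummy"}
```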
Erik Johnston
27f26e48b7 Serialize user ip command as json 2017-06-27 16:25:38 +01:00
Erik Johnston
8c23221666 Fix up 2017-06-27 15:53:45 +01:00
Erik Johnston
731f3c37a0 Merge branch 'release-v0.22.0' of github.com:matrix-org/synapse into develop 2017-06-27 15:41:34 +01:00
Erik Johnston
4b444723f0 Merge pull request #2308 from matrix-org/erikj/user_ip_repl
Make workers report to master for user ip updates
2017-06-27 15:36:47 +01:00
Erik Johnston
816605a137 Merge pull request #2307 from matrix-org/erikj/user_ip_batch
Batch upsert user ips
2017-06-27 15:08:32 +01:00
Erik Johnston
78cefd78d6 Make workers report to master for user ip updates 2017-06-27 14:58:10 +01:00
Erik Johnston
a0a561ae85 Fix up client ips to read from pending data 2017-06-27 14:46:12 +01:00
Erik Johnston
ed3d0170d9 Batch upsert user ips 2017-06-27 13:37:04 +01:00
Erik Johnston
976128f368 Update version and changelog 2017-06-26 16:14:56 +01:00
Erik Johnston
d04d672a80 Merge pull request #2290 from matrix-org/erikj/ensure_round_trip
Reject local events that don't round trip the DB
2017-06-26 15:12:02 +01:00
Erik Johnston
036f439f53 Merge pull request #2304 from matrix-org/erikj/users_share_fix
Fix up indices for users_who_share_rooms
2017-06-26 15:11:39 +01:00
Erik Johnston
1bce3e6b35 Remove unused variables 2017-06-26 14:03:27 +01:00
Erik Johnston
e3cbec10c1 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ensure_round_trip 2017-06-26 14:02:44 +01:00
Erik Johnston
8abdd7b553 Fix up indices for users_who_share_rooms 2017-06-26 14:01:30 +01:00
Erik Johnston
ff13c5e7af Merge pull request #2301 from xwiki-labs/push-redact-content
Add configuration parameter to allow redaction of content from push m…
2017-06-24 13:13:51 +01:00
Caleb James DeLisle
27bd0b9a91 Change the config file generator to more descriptive explanation of push.redact_content 2017-06-24 10:32:12 +02:00
Caleb James DeLisle
bce144595c Fix TravisCI tests for PR #2301 - Fat finger mistake 2017-06-23 15:26:09 +02:00
Caleb James DeLisle
75eba3b07d Fix TravisCI tests for PR #2301 2017-06-23 15:15:18 +02:00
Caleb James DeLisle
1591eddaea Add configuration parameter to allow redaction of content from push messages for google/apple devices 2017-06-23 13:01:04 +02:00
Erik Johnston
4fec80ba6f Merge pull request #2299 from matrix-org/erikj/segregate_url_cache_downloads
Store URL cache preview downloads separately
2017-06-23 12:00:45 +01:00
Erik Johnston
7fe8ed1787 Store URL cache preview downloads separately
This makes it easier to clear old media out at a later date
2017-06-23 11:14:11 +01:00
Erik Johnston
e204062310 Merge pull request #2297 from matrix-org/erikj/user_dir_fix
Fix thinko in initial public room user spam
2017-06-22 15:14:47 +01:00
Erik Johnston
44c722931b Make some more params configurable 2017-06-22 14:59:52 +01:00
Erik Johnston
2d520a9826 Typo. ARGH. 2017-06-22 14:42:39 +01:00
Erik Johnston
24d894e2e2 Fix thinko in unhandled user spam 2017-06-22 14:39:05 +01:00
Matthew Hodgson
ccfcef6b59 Merge branch 'master' into develop 2017-06-22 13:03:44 +01:00
Erik Johnston
e0004aa28a Add desc 2017-06-22 10:03:48 +01:00
Erik Johnston
b668112320 Merge pull request #2296 from matrix-org/erikj/dont_appserver_shar
Don't work out users who share room with appservice users
2017-06-21 14:50:24 +01:00
Erik Johnston
dae9a00a28 Initialise exclusive_user_regex 2017-06-21 14:19:33 +01:00
Erik Johnston
71995e1397 Merge pull request #2219 from krombel/avoid_duplicate_filters
only add new filter when not existent previously
2017-06-21 14:11:26 +01:00
Erik Johnston
8177563ebe Fix for workers 2017-06-21 13:57:49 +01:00
Krombel
4202fba82a Merge branch 'develop' into avoid_duplicate_filters 2017-06-21 14:48:21 +02:00
Krombel
812c030e87 replaced json.dumps with encode_canonical_json 2017-06-21 14:48:12 +02:00
Erik Johnston
1217c7da91 Don't work out users who share room with appservice users 2017-06-21 12:00:41 +01:00
Erik Johnston
7d69f2d956 Merge pull request #2292 from matrix-org/erikj/quarantine_media
Add API to quarantine media
2017-06-19 18:15:00 +01:00
Erik Johnston
385dcb7c60 Handle thumbnail urls 2017-06-19 17:48:28 +01:00
Erik Johnston
b8b936a6ea Add API to quarantine media 2017-06-19 17:39:21 +01:00
Erik Johnston
b5f665de32 Merge pull request #2291 from matrix-org/erikj/shutdown_room
Add shutdown room API
2017-06-19 16:20:55 +01:00
Erik Johnston
e5ae386ea4 Handle all cases of sending membership events 2017-06-19 16:07:54 +01:00
Erik Johnston
36e51aad3c Remove unused import 2017-06-19 14:42:21 +01:00
Erik Johnston
b490299a3b Change to create new room and join other users 2017-06-19 14:10:13 +01:00
Erik Johnston
5db7070dd1 Forget room 2017-06-19 12:40:29 +01:00
Erik Johnston
d7fe6b356c Add shutdown room API 2017-06-19 12:37:27 +01:00
Erik Johnston
fcf01dd88e Reject local events that don't round trip the DB 2017-06-19 11:33:40 +01:00
Johannes Löthberg
4f66312df8 python_dependencies: Use bcrypt module instead of py-bcrypt
py-bcrypt has been unmaintained for a long while, while bcrypt is
actively maintained. And since ff8b87118d
we're compatible with bcrypt anyway.

Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2017-06-17 17:39:35 +02:00
Matthew
3fafb7b189 add missing boolean to synapse_port_db 2017-06-16 20:51:19 +01:00
Matthew
776a070421 fix synapse_port script 2017-06-16 20:24:14 +01:00
Erik Johnston
dfeca6cf40 Merge pull request #2286 from matrix-org/erikj/split_out_user_dir
Split out user directory to a separate process
2017-06-16 13:01:19 +01:00
Erik Johnston
6aa5bc8635 Initial worker impl 2017-06-16 11:47:11 +01:00
Erik Johnston
d8f47d2efa Merge pull request #2280 from matrix-org/erikj/share_room_user_dir
Include users who you share a room with in user directory
2017-06-16 11:06:00 +01:00
Erik Johnston
0a9315bbc7 Merge pull request #2285 from krombel/allow_authorization_header
allow Authorization header
2017-06-16 10:41:58 +01:00
Krombel
1ff419d343 allow Authorization header, the handling of which was implemented in #1098
Signed-off-by: Matthias Kesler <krombel@krombel.de>
2017-06-16 11:21:14 +02:00
Erik Johnston
24df576795 Merge pull request #2282 from matrix-org/release-v0.21.1
Release v0.21.1
2017-06-15 13:24:16 +01:00
Erik Johnston
fdf1ca30f0 Bump version and changelog 2017-06-15 12:59:06 +01:00
Erik Johnston
052c5d19d5 Merge pull request #2281 from matrix-org/erikj/phone_home_stats
Fix phone home stats
2017-06-15 12:46:23 +01:00
Erik Johnston
5ddd199870 Typo 2017-06-15 10:49:10 +01:00
Erik Johnston
a9d6fa8b2b Include users who share room with requester in user directory 2017-06-15 10:17:21 +01:00
Erik Johnston
4564b05483 Implement updating users who share rooms on the fly 2017-06-15 10:17:17 +01:00
Erik Johnston
72613bc379 Implement initial population of users who share rooms table 2017-06-15 09:59:04 +01:00
Erik Johnston
ebcd55d641 Add DB schema for tracking users who share rooms 2017-06-15 09:45:48 +01:00
Erik Johnston
4b461a6931 Add some more stats 2017-06-15 09:39:39 +01:00
Erik Johnston
93e7a38370 Remove unhelpful test 2017-06-15 09:30:54 +01:00
Erik Johnston
617304b2cf Fix phone home stats 2017-06-14 19:47:15 +01:00
Matthew Hodgson
ba502fb89a add notes on running out of FDs 2017-06-14 02:23:14 +01:00
Erik Johnston
6c6b9689bb Merge pull request #2279 from matrix-org/erikj/fix_user_dir
Fix user directory insertion due to missing room_id
2017-06-13 12:48:50 +01:00
Erik Johnston
d9fd937e39 Fix user directory insertion due to missing room_id 2017-06-13 11:50:24 +01:00
Erik Johnston
fe9dc522d4 Merge pull request #2278 from matrix-org/erikj/fix_user_dir
Fix user dir to not assume existence of user
2017-06-13 11:38:44 +01:00
Erik Johnston
505e7e8b9d Fix up sql 2017-06-13 11:19:18 +01:00
Erik Johnston
6fd7e6db3d Fix user dir to not assume existence of user 2017-06-13 11:11:26 +01:00
Erik Johnston
fdca6e36ee Merge pull request #2274 from matrix-org/erikj/cache_is_host_joined
Add cache for is_host_joined
2017-06-13 10:58:53 +01:00
Erik Johnston
90ae0cffec Merge pull request #2275 from matrix-org/erikj/tweark_user_directory_search
Tweak the ranking of PG user dir search
2017-06-13 10:58:43 +01:00
Erik Johnston
de4cb50ca6 Merge pull request #2276 from matrix-org/erikj/fix_user_di
Don't assume existence of events when updating user directory
2017-06-13 10:55:41 +01:00
Erik Johnston
a09e09ce76 Merge pull request #2277 from matrix-org/erikj/media
Throw exception when not retrying when downloading media
2017-06-13 10:52:45 +01:00
Erik Johnston
48d2949416 Throw exception when not retrying when downloading media 2017-06-13 10:23:14 +01:00
Erik Johnston
6ae8373d40 Don't assume existance of events when updating user directory 2017-06-13 10:19:26 +01:00
Erik Johnston
b58e24cc3c Tweak the ranking of PG user dir search 2017-06-13 10:16:31 +01:00
Erik Johnston
d53fe399eb Add cache for is_host_joined 2017-06-13 09:56:18 +01:00
Erik Johnston
a837765e8c Merge pull request #2266 from matrix-org/erikj/host_in_room
Change is_host_joined to use current_state table
2017-06-12 09:49:51 +01:00
Erik Johnston
f540b494a4 Merge pull request #2269 from matrix-org/erikj/cache_state_delta
Cache state deltas
2017-06-09 18:32:04 +01:00
Erik Johnston
8060974344 Fix replication 2017-06-09 16:40:52 +01:00
Erik Johnston
b0d975e216 Comments 2017-06-09 16:25:42 +01:00
Erik Johnston
e54d7d536e Cache state deltas 2017-06-09 16:24:00 +01:00
Erik Johnston
1e9b4d5a95 Merge pull request #2268 from matrix-org/erikj/entity_has_changed
Fix has_any_entity_changed
2017-06-09 15:30:55 +01:00
Erik Johnston
efc2b7db95 Rewrite conditional 2017-06-09 13:35:15 +01:00
Erik Johnston
bfd68019c2 Merge pull request #2267 from matrix-org/erikj/missing_notifier
Fix removing of pushers when using workers
2017-06-09 13:07:29 +01:00
Erik Johnston
1946867bc2 Merge pull request #2265 from matrix-org/erikj/remote_leave_outlier
Mark remote invite rejections as outliers
2017-06-09 13:05:15 +01:00
Erik Johnston
1664948e41 Comment 2017-06-09 13:05:05 +01:00
Erik Johnston
935e588799 Tweak SQL 2017-06-09 13:01:23 +01:00
Erik Johnston
eed59dcc1e Fix has_any_entity_changed
Occasionally has_any_entity_changed would throw the error: "Set changed
size during iteration" when taking the max of the `sorteddict`. While
it's uncertain how that happens, it's quite inefficient to iterate over
the entire dict anyway, so we change to using the more traditional
`bisect_*` functions.
2017-06-09 11:44:01 +01:00
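A sketch of the bisect-based approach (illustrative, not the actual stream-change cache): keep the change positions in a sorted list and locate the cut-off with `bisect` rather than iterating the whole structure.

```python
import bisect


class StreamChangeSketch:
    def __init__(self):
        self._positions = []     # ascending stream positions with changes
        self._entities_at = {}   # stream position -> set of changed entities

    def record_change(self, entity, stream_pos):
        if stream_pos not in self._entities_at:
            bisect.insort(self._positions, stream_pos)
            self._entities_at[stream_pos] = set()
        self._entities_at[stream_pos].add(entity)

    def has_any_entity_changed(self, since_pos):
        # Index of the first position strictly greater than since_pos; if any
        # position lies beyond it, something has changed since then.
        i = bisect.bisect_right(self._positions, since_pos)
        return i < len(self._positions)
```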
Erik Johnston
2cac7623a5 Add missing notifier 2017-06-09 11:24:41 +01:00
Erik Johnston
298d83b340 Fix replication 2017-06-09 11:01:28 +01:00
Erik Johnston
0185b75381 Change is_host_joined to use current_state table
This bypasses a bug where using the state groups to figure out if a host
is in a room sometimes errors if the server isn't in the room. (For
example when the server rejected an invite to a remote room.)
2017-06-09 10:52:26 +01:00
Erik Johnston
7132e5cdff Mark remote invite rejections as outliers 2017-06-09 10:08:18 +01:00
Erik Johnston
98bdb4468b Merge pull request #2263 from matrix-org/erikj/fix_state_woes
Ensure we don't use unpersisted state group as prev group
2017-06-08 12:56:18 +01:00
Erik Johnston
ea11ee09f3 Ensure we don't use unpersisted state group as prev group 2017-06-08 11:59:57 +01:00
Erik Johnston
c62c480dc6 Merge pull request #2259 from matrix-org/erikj/fix_state_woes
Fix bug where state_group tables got corrupted
2017-06-07 17:51:25 +01:00
Erik Johnston
197bd126f0 Fix bug where state_group tables got corrupted
This is due to the fact that we prefilled caches using txn.call_after,
which always gets called including on error.

We fix this by making txn.call_after only fire when a transaction
completes successfully, which is what we want most of the time anyway.
2017-06-07 17:39:36 +01:00
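A minimal sketch of the fixed behaviour (illustrative only, using plain DB-API calls): after-callbacks such as cache prefills are queued during the transaction and only run once the commit succeeds, so a rolled-back transaction can no longer poison the caches.

```python
class TxnWrapper:
    def __init__(self, conn):
        self._conn = conn            # any DB-API connection, e.g. sqlite3
        self._after_callbacks = []

    def call_after(self, callback, *args):
        self._after_callbacks.append((callback, args))

    def run(self, func):
        cursor = self._conn.cursor()
        try:
            result = func(self, cursor)
            self._conn.commit()
        except Exception:
            self._conn.rollback()
            raise                    # callbacks deliberately never fire
        for callback, args in self._after_callbacks:
            callback(*args)          # only reached after a successful commit
        return result
```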
Erik Johnston
f45f07ab86 Merge pull request #2258 from matrix-org/erikj/user_dir
Don't start user_directory handling on workers
2017-06-07 14:04:50 +01:00
Erik Johnston
a053ff3979 Merge pull request #2248 from matrix-org/erikj/state_fixup
Faster cache for get_joined_hosts
2017-06-07 14:01:06 +01:00
Erik Johnston
ecdd2a3658 Don't start user_directory handling on workers 2017-06-07 12:02:53 +01:00
Erik Johnston
2f34ad31ac Add some logging to user directory 2017-06-07 11:50:44 +01:00
Erik Johnston
671f0afa1d Merge pull request #2256 from matrix-org/erikj/faster_device_updates
Split up device_lists_outbound_pokes table for faster updates.
2017-06-07 11:48:00 +01:00
Erik Johnston
64ed74c01e When pruning, delete from device_lists_outbound_last_success 2017-06-07 11:20:47 +01:00
Erik Johnston
1a81a1898e Keep pruning background task 2017-06-07 11:16:56 +01:00
Erik Johnston
6ba21bf2b8 Comments 2017-06-07 11:08:36 +01:00
Erik Johnston
09e4bc0501 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/state_fixup 2017-06-07 11:05:23 +01:00
Erik Johnston
6e2a7ee1bc Remove spurious log lines 2017-06-07 11:05:17 +01:00
Erik Johnston
65f0513a33 Split up device_lists_outbound_pokes table for faster updates. 2017-06-07 11:02:38 +01:00
Erik Johnston
6f83c4537c Increase size of IP cache 2017-06-07 10:18:44 +01:00
Erik Johnston
cca94272fa Fix typo when getting app name 2017-06-06 11:50:07 +01:00
Erik Johnston
66b121b2fc Fix wrong number of arguments 2017-06-06 11:46:38 +01:00
Erik Johnston
8d34120a53 Merge pull request #2253 from matrix-org/erikj/user_dir
Handle profile updates in user directory
2017-06-01 17:33:20 +01:00
Erik Johnston
1a01af079e Handle profile updates in user directory 2017-06-01 15:39:51 +01:00
Erik Johnston
87e5e05aea Merge pull request #2252 from matrix-org/erikj/user_dir
Add a user directory
2017-06-01 15:39:32 +01:00
Erik Johnston
4d039aa2ca Fix sqlite 2017-06-01 14:58:48 +01:00
Erik Johnston
21e255a8f1 Split the table in two 2017-06-01 14:50:46 +01:00
Erik Johnston
d5477c7afd Tweak search query 2017-06-01 13:28:01 +01:00
Erik Johnston
02a6108235 Tweak search query 2017-06-01 13:16:40 +01:00
Erik Johnston
7233341eac Comments 2017-06-01 13:11:38 +01:00
Erik Johnston
8be6fd95a3 Check if host is still in room 2017-06-01 13:05:39 +01:00
Erik Johnston
59dbb47065 Remove spurious inlineCallbacks 2017-06-01 11:41:29 +01:00
Erik Johnston
9c7db2491b Fix removing users 2017-06-01 11:36:50 +01:00
Erik Johnston
0fe6f3c521 Bug fixes and logging
- Check if room is public when a user joins before adding to user dir
- Fix typo of field name "content.join_rules" -> "content.join_rule"
2017-06-01 11:09:49 +01:00
Erik Johnston
036362ede6 Order by if they have profile info 2017-06-01 09:41:08 +01:00
Erik Johnston
a757dd4863 Use prefix matching 2017-06-01 09:40:37 +01:00
Erik Johnston
f5cc22bdc6 Comment on why arbitrary comments 2017-05-31 17:30:26 +01:00
Erik Johnston
5dd1b2c525 Use unique indices 2017-05-31 17:29:12 +01:00
Erik Johnston
cc7609aa9f Comment briefly on how we keep user_directory up to date 2017-05-31 17:11:18 +01:00
Erik Johnston
f1378aef91 Convert to int 2017-05-31 17:03:08 +01:00
Erik Johnston
b2d8d07109 Lifts things into separate function 2017-05-31 17:00:24 +01:00
Erik Johnston
f9791498ae Typos 2017-05-31 16:50:57 +01:00
Erik Johnston
f091061711 Fix tests 2017-05-31 16:34:40 +01:00
Erik Johnston
4abcff0177 Fix typo 2017-05-31 16:22:36 +01:00
Erik Johnston
63c58c2a3f Limit number of things we fetch out of the db 2017-05-31 16:17:58 +01:00
Erik Johnston
304880d185 Add stream change cache 2017-05-31 15:46:36 +01:00
Erik Johnston
5d79d728f5 Split out directory and search tables 2017-05-31 15:23:49 +01:00
Erik Johnston
dc51af3d03 Pull max id from correct table 2017-05-31 15:13:49 +01:00
Erik Johnston
350622a107 Handle the server leaving a public room 2017-05-31 15:11:36 +01:00
Erik Johnston
63fda37e20 Add comments 2017-05-31 15:00:29 +01:00
Erik Johnston
293ef29655 Weight differently 2017-05-31 14:29:32 +01:00
Erik Johnston
535c99f157 Use POST 2017-05-31 14:15:45 +01:00
Erik Johnston
45a5df5914 Add REST API 2017-05-31 14:11:55 +01:00
Erik Johnston
3b5f22ca40 Add search 2017-05-31 14:00:01 +01:00
Erik Johnston
b5db4ed5f6 Update room column when room becomes unpublic 2017-05-31 13:40:28 +01:00
Erik Johnston
168524543f Add call later 2017-05-31 11:59:36 +01:00
Erik Johnston
3e123b8497 Start later 2017-05-31 11:56:27 +01:00
Erik Johnston
42137efde7 Don't go round in circles 2017-05-31 11:55:13 +01:00
Erik Johnston
eeb2f9e546 Add user_directory to database 2017-05-31 11:51:01 +01:00
Erik Johnston
5dbaa520a5 Merge pull request #2251 from matrix-org/erikj/current_state_delta_stream
Add current_state_delta_stream table
2017-05-30 15:06:17 +01:00
Erik Johnston
dd48f7204c Add comment 2017-05-30 15:01:22 +01:00
Erik Johnston
04095f7581 Add clobbered event_id 2017-05-30 14:53:01 +01:00
Erik Johnston
a584a81b3e Add current_state_delta_stream table 2017-05-30 14:44:09 +01:00
Erik Johnston
619e8ecd0c Handle None state group correctly 2017-05-26 10:46:03 +01:00
Erik Johnston
23da638360 Fix typing tests 2017-05-26 10:02:04 +01:00
Erik Johnston
dfbda5e025 Faster cache for get_joined_hosts 2017-05-25 17:24:44 +01:00
Erik Johnston
2b03751c3c Don't return weird prev_group 2017-05-25 14:47:39 +01:00
Erik Johnston
dbc0dfd2d5 Remove unused options 2017-05-25 14:28:34 +01:00
Erik Johnston
11f139a647 Merge pull request #2247 from matrix-org/erikj/auth_event
Only store event_auth for state events
2017-05-24 16:46:34 +01:00
Erik Johnston
6e614e9e10 Add background task to clear out old event_auth 2017-05-24 15:23:34 +01:00
Erik Johnston
c049472b8a Only store event_auth for state events 2017-05-24 15:23:31 +01:00
Erik Johnston
9a804b2812 Merge pull request #2243 from matrix-org/matthew/fix-url-preview-length-again
actually trim oversize og:description meta
2017-05-23 13:26:28 +01:00
Erik Johnston
fbbc40f385 Merge pull request #2237 from matrix-org/erikj/sync_key_count
Add count of one time keys to sync stream
2017-05-23 11:18:13 +01:00
Erik Johnston
8cf9f0a3e7 Remove redundant invalidation 2017-05-23 09:46:59 +01:00
Erik Johnston
e6618ece2d Missed an invalidation 2017-05-23 09:36:52 +01:00
Erik Johnston
58c4720293 Merge pull request #2242 from matrix-org/erikj/email_refactor
Only load jinja2 templates once
2017-05-23 09:34:08 +01:00
Matthew Hodgson
836d5c44b6 actually trim oversize og:description meta 2017-05-22 21:14:20 +01:00
Erik Johnston
11c2a3655f Only load jinja2 templates once
Instead of every time a new email pusher is created, as loading jinja2
templates is slow.
2017-05-22 17:48:58 +01:00
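For context, this is the standard jinja2 pattern (directory and template names below are illustrative): build the Environment and compile the templates once at startup, then only render per notification.

```python
from jinja2 import Environment, FileSystemLoader

# Done once at startup: building the Environment and compiling the template
# is the slow part.
_env = Environment(loader=FileSystemLoader("res/templates"))
_notif_template = _env.get_template("notif_mail.html")


def render_notification_email(context):
    # Done per email pusher / per notification: rendering is cheap.
    return _notif_template.render(**context)
```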
Erik Johnston
539aa4d333 Merge pull request #2241 from matrix-org/erikj/fix_notifs
Correctly calculate push rules for member events
2017-05-22 16:46:58 +01:00
Erik Johnston
f85a415279 Add missing storage function to slave store 2017-05-22 16:31:24 +01:00
Erik Johnston
6489455bed Comment 2017-05-22 16:22:04 +01:00
Erik Johnston
d668caa79c Remove spurious log level guards 2017-05-22 16:21:06 +01:00
Erik Johnston
74bf4ee7bf Stream count_e2e_one_time_keys cache invalidation 2017-05-22 16:19:22 +01:00
Erik Johnston
33ba90c6e9 Merge pull request #2240 from matrix-org/erikj/cache_list_fix
Update list cache to handle one arg case
2017-05-22 16:10:46 +01:00
Erik Johnston
ccd62415ac Merge pull request #2238 from matrix-org/erikj/faster_push_rules
Speed up calculating push rules
2017-05-22 15:12:34 +01:00
Erik Johnston
bd7bb5df71 Pull out if statement from for loop 2017-05-22 15:12:19 +01:00
Erik Johnston
e3417a06e2 Update list cache to handle one arg case
We update the normal cache descriptors to handle caches with a single
argument specially, so that the key isn't a 1-tuple. We need to update
the cache list to be aware of this.
2017-05-22 15:04:42 +01:00
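In other words (an illustrative sketch, not the actual descriptor code): single-argument caches key on the bare argument rather than on a 1-tuple, and the list cache must build its lookup keys the same way.

```python
def make_cache_key(args, num_args):
    # Single-argument caches use the bare value as the key; everything else
    # keys on the full tuple of arguments.
    if num_args == 1:
        return args[0]
    return tuple(args)


assert make_cache_key(("@alice:example.com",), 1) == "@alice:example.com"
assert make_cache_key(("!room:example.com", "@alice:example.com"), 2) == (
    "!room:example.com",
    "@alice:example.com",
)
```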
Erik Johnston
7fb80b5eae Check if current event is a membership event 2017-05-22 15:02:12 +01:00
Erik Johnston
2d17b09a6d Add debug logging 2017-05-22 15:01:36 +01:00
Erik Johnston
24c8f38784 Comment 2017-05-22 14:59:27 +01:00
Erik Johnston
25f03cf8e9 Use tuple unpacking 2017-05-22 14:58:22 +01:00
Erik Johnston
270e1c904a Speed up calculating push rules 2017-05-19 16:51:05 +01:00
Erik Johnston
b4f59c7e27 Add count of one time keys to sync stream 2017-05-19 15:47:55 +01:00
Erik Johnston
ab4ee2e524 Merge pull request #2236 from matrix-org/erikj/invalidation
Fix invalidation of get_users_with_read_receipts_in_room
2017-05-19 14:50:23 +01:00
Erik Johnston
58ebb96cce Fix invalidation of get_users_with_read_receipts_in_room 2017-05-19 14:38:50 +01:00
Erik Johnston
99713dc7d3 Merge pull request #2234 from matrix-org/erikj/fix_push
Store ActionGenerator in HomeServer
2017-05-19 13:42:49 +01:00
Erik Johnston
1c1c0257f4 Move invalidation cb to its own structure 2017-05-19 11:44:11 +01:00
Erik Johnston
cafe659f72 Store ActionGenerator in HomeServer 2017-05-19 10:09:56 +01:00
Erik Johnston
72ed8196b3 Don't push users who have left 2017-05-18 17:48:36 +01:00
Erik Johnston
107ac7ac96 Increase size of push rule caches 2017-05-18 17:17:53 +01:00
Erik Johnston
234772db6d Merge pull request #2233 from matrix-org/erikj/faster_as_check
Make get_if_app_services_interested_in_user faster
2017-05-18 16:51:18 +01:00
Erik Johnston
760625acba Make get_if_app_services_interested_in_user faster 2017-05-18 16:34:44 +01:00
Erik Johnston
c57789d138 Remove size of push get_rules cache 2017-05-18 16:17:23 +01:00
Erik Johnston
f33df30732 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-05-18 13:56:37 +01:00
Erik Johnston
3accee1a8c Merge branch 'release-v0.21.0' of github.com:matrix-org/synapse 2017-05-18 13:54:27 +01:00
Erik Johnston
a5425b2e5b Bump changelog and version 2017-05-18 13:53:48 +01:00
Erik Johnston
6e381180ae Merge pull request #2177 from matrix-org/erikj/faster_push_rules
Make calculating push actions faster
2017-05-18 11:46:18 +01:00
Erik Johnston
056ba9b795 Add comment 2017-05-18 11:45:56 +01:00
Erik Johnston
88664afe14 Merge pull request #2231 from aaronraimist/patch-1
Correct a typo in UPGRADE.rst
2017-05-18 10:00:35 +01:00
Aaron Raimist
f98efea9b1 Correct a typo in UPGRADE.rst 2017-05-17 21:41:48 -05:00
Erik Johnston
d9e3a4b5db Merge pull request #2230 from matrix-org/erikj/speed_up_get_state
Make get_state_groups_from_groups faster.
2017-05-17 17:23:04 +01:00
Erik Johnston
66d8ffabbd Faster push rule calculation via push specific cache
We add a push rule specific cache that ensures that we can reuse
calculated push rules appropriately when a user joins/leaves.
2017-05-17 16:55:40 +01:00
Erik Johnston
ace23463c5 Merge pull request #2216 from slipeer/app_services_interested_in_user
Fix users claimed non-exclusively by an app service don't get notific…
2017-05-17 16:28:50 +01:00
Erik Johnston
bbfe4e996c Make get_state_groups_from_groups faster.
Most of the time was spent copying a dict to filter out sentinel values
that indicated that keys did not exist in the dict. The sentinel values
were added to ensure that we cached the non-existence of keys.

By updating DictionaryCache to keep track of which keys were known to
not exist itself we can remove a dictionary copy.
2017-05-17 15:12:15 +01:00
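A simplified sketch of the change (class and field names are illustrative): rather than caching misses as sentinel values and copying the dict to strip them out on every read, track the known-absent keys alongside the real values.

```python
SENTINEL = object()


# Before: misses were cached as sentinels, so every read copied the dict.
def known_values_before(cached):
    return {k: v for k, v in cached.items() if v is not SENTINEL}


# After: absent keys are tracked separately, so reads need no copy at all.
class DictionaryEntry:
    def __init__(self, value=None, known_absent=None):
        self.value = value if value is not None else {}
        self.known_absent = known_absent if known_absent is not None else set()

    def lookup(self, key):
        if key in self.value:
            return self.value[key]
        if key in self.known_absent:
            return None        # cached miss: no DB hit and no dict copy
        raise KeyError(key)    # genuinely unknown: caller falls back to the DB
```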
Erik Johnston
9f430fa07f Merge branch 'release-v0.21.0' of github.com:matrix-org/synapse into develop 2017-05-17 13:28:46 +01:00
Erik Johnston
7c53a27801 Update changelog 2017-05-17 13:13:45 +01:00
Erik Johnston
a8bc7cae56 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.21.0 2017-05-17 13:11:43 +01:00
Erik Johnston
bf1050f7cf Merge pull request #2229 from matrix-org/erikj/faster_get_joined
Make get_joined_users faster when we have delta state
2017-05-17 13:00:05 +01:00
Erik Johnston
c6f4ff1475 Spelling 2017-05-17 11:29:14 +01:00
Erik Johnston
3a431a126d Bump changelog and version 2017-05-17 11:26:57 +01:00
Erik Johnston
ac08316548 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.21.0 2017-05-17 11:25:23 +01:00
Erik Johnston
85e8092cca Comment 2017-05-17 10:03:09 +01:00
Erik Johnston
ad53fc3cf4 Short circuit when we have delta ids 2017-05-17 09:57:34 +01:00
Erik Johnston
6fa8148ccb Merge pull request #2228 from matrix-org/erikj/speed_up_get_hosts
Speed up get_joined_hosts
2017-05-16 17:40:55 +01:00
Erik Johnston
7c69849a0d Merge pull request #2227 from matrix-org/erikj/presence_caches
Make presence use cached users/hosts in room
2017-05-16 17:40:47 +01:00
Erik Johnston
11bc21b6d9 Merge pull request #2226 from matrix-org/erikj/domain_from_id
Speed up get_domain_from_id
2017-05-16 17:18:16 +01:00
Erik Johnston
13f540ef1b Speed up get_joined_hosts 2017-05-16 16:05:22 +01:00
Erik Johnston
ec5c4499f4 Make presence use cached users/hosts in room 2017-05-16 16:01:43 +01:00
Erik Johnston
f2a5b6dbfd Speed up get_domain_from_id 2017-05-16 15:59:37 +01:00
Erik Johnston
b8492b6c2f Merge pull request #2224 from matrix-org/erikj/prefill_state
Prefill state caches
2017-05-16 15:50:11 +01:00
Erik Johnston
331570ea6f Remove spurious merge artifacts 2017-05-16 15:33:07 +01:00
Krombel
55af207321 Merge branch 'develop' into avoid_duplicate_filters 2017-05-16 15:29:59 +02:00
Richard van der Hoff
d648f65aaf Merge pull request #2218 from matrix-org/rav/event_search_index
Add an index to event_search
2017-05-16 13:26:07 +01:00
Erik Johnston
608b5a6317 Take a copy before prefilling, as it may be a frozendict 2017-05-16 12:55:29 +01:00
Krombel
64953c8ed2 avoid access-error if no filter_id matches 2017-05-15 18:36:37 +02:00
Erik Johnston
f451b64c8f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/prefill_state 2017-05-15 16:09:32 +01:00
Erik Johnston
2c9475b58e Merge pull request #2221 from psaavedra/sync_timeline_limit_filter_by_name
Configurable maximum number of events requested by /sync and /messages
2017-05-15 16:08:46 +01:00
Erik Johnston
6d17573c23 Merge pull request #2223 from matrix-org/erikj/ignore_not_retrying
Don't log exceptions for NotRetryingDestination
2017-05-15 15:52:28 +01:00
Erik Johnston
d12ae7fd1c Don't log exceptions for NotRetryingDestination 2017-05-15 15:42:18 +01:00
Pablo Saavedra
224137fcf9 Fixed syntax nits 2017-05-15 16:21:02 +02:00
Erik Johnston
e4435b014e Update comment 2017-05-15 15:11:30 +01:00
Erik Johnston
871605f4e2 Comments 2017-05-15 15:11:30 +01:00
Erik Johnston
e0d2f6d5b0 Add more granular event send metrics 2017-05-15 15:11:30 +01:00
Erik Johnston
bfbc907cec Prefill state caches 2017-05-15 15:11:13 +01:00
Erik Johnston
5e9d75b4a5 Merge pull request #2215 from hamber-dick/develop
Documentation to check synapse version
2017-05-15 14:42:52 +01:00
Pablo Saavedra
627e6ea2b0 Fixed implementation errors
* Added HS as property in SyncRestServlet
* Fixed set_timeline_upper_limit function implementation
2017-05-15 14:51:43 +02:00
Pablo Saavedra
9da4316ca5 Configurable maximum number of events requested by /sync and /messages (#2220)
Set the limit on the returned events in the timeline in the get and sync
operations. The default value is -1, which means no upper limit.

For example, using `filter_timeline_limit: 5000`:

POST /_matrix/client/r0/user/user:id/filter
{
room: {
    timeline: {
      limit: 1000000000000000000
    }
}
}

GET /_matrix/client/r0/user/user:id/filter/filter:id

{
room: {
    timeline: {
      limit: 5000
    }
}
}

The server cuts down the room.timeline.limit.
2017-05-13 18:17:54 +02:00
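A sketch of the clamping this describes; `set_timeline_upper_limit` matches the function name mentioned in the follow-up commit above, but the signature and body here are assumptions:

```python
def set_timeline_upper_limit(filter_json, filter_timeline_limit):
    # Clamp the client-requested room.timeline.limit to the configured cap;
    # a cap of -1 (the default) means "no upper limit".
    if filter_timeline_limit < 0:
        return filter_json
    timeline = filter_json.setdefault("room", {}).setdefault("timeline", {})
    requested = timeline.get("limit", filter_timeline_limit)
    timeline["limit"] = min(requested, filter_timeline_limit)
    return filter_json


f = {"room": {"timeline": {"limit": 1000000000000000000}}}
print(set_timeline_upper_limit(f, 5000))  # {'room': {'timeline': {'limit': 5000}}}
```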
Krombel
eb7cbf27bc insert whitespace to fix travis build 2017-05-12 12:09:42 +02:00
Krombel
6b95e35e96 add check to only add a new filter if the same filter does not exist previously
Signed-off-by: Matthias Kesler <krombel@krombel.de>
2017-05-11 16:05:30 +02:00
Richard van der Hoff
ff3d810ea8 Add a comment to old delta 2017-05-11 12:48:50 +01:00
Richard van der Hoff
34194aaff7 Don't create event_search index on sqlite
... because the table is virtual
2017-05-11 12:46:55 +01:00
Richard van der Hoff
114f290947 Add more logging for purging
Log the number of events we will be deleting at info.
2017-05-11 12:08:47 +01:00
Richard van der Hoff
baafb85ba4 Add an index to event_search
- to make the purge API quicker
2017-05-11 12:05:22 +01:00
Richard van der Hoff
29ded770b1 Merge pull request #2214 from matrix-org/rav/hurry_up_purge
When purging, don't de-delta state groups we're about to delete
2017-05-11 12:04:25 +01:00
Richard van der Hoff
dc026bb16f Tidy purge code and add some comments
Try to make this clearer with more comments and some variable renames
2017-05-11 10:56:12 +01:00
Slipeer
328378f9cb Fix users claimed non-exclusively by an app service don't get notifications #2211 2017-05-11 11:42:08 +03:00
Luke Barnard
c1935f0a41 Merge pull request #2213 from matrix-org/luke/username-availability-qp
Modify register/available to be GET with query param
2017-05-11 09:36:39 +01:00
hamber-dick
43cd86ba8a Merge pull request #1 from hamber-dick/dev-branch-hamber-dick
Documentation to check synapse version
2017-05-10 23:14:12 +02:00
Richard van der Hoff
8e345ce465 Don't de-delta state groups we're about to delete 2017-05-10 18:44:22 +01:00
Richard van der Hoff
b64d312421 add some logging to purge_history 2017-05-10 18:44:22 +01:00
Luke Barnard
ccad2ed824 Modify condition on empty localpart 2017-05-10 17:34:30 +01:00
Luke Barnard
369195caa5 Modify register/available to be GET with query param
- GET is now the method for register/available
- a query parameter "username" is now used

Also, empty usernames are now handled with an error message on registration or via register/available: `User ID cannot be empty`
2017-05-10 17:23:55 +01:00
hamber-dick
57ed7f6772 Documentation to check synapse version
I've added some documentation on how to get the running version of a
Synapse homeserver. This should help homeserver owners to check whether the
upgrade was successful.
2017-05-10 18:01:39 +02:00
Erik Johnston
a3648f84b2 Merge pull request #2208 from matrix-org/erikj/ratelimit_overrid
Add per user ratelimiting overrides
2017-05-10 15:54:48 +01:00
Richard van der Hoff
5331cd150a Merge pull request #2206 from matrix-org/rav/one_time_key_upload_change_sig
Allow clients to upload one-time-keys with new sigs
2017-05-10 14:18:40 +01:00
Luke Barnard
7313a23dba Merge pull request #2209 from matrix-org/luke/username-availability-post
Change register/available to POST (from GET)
2017-05-10 13:05:45 +01:00
Luke Barnard
f7278e612e Change register/available to POST (from GET) 2017-05-10 11:40:18 +01:00
Erik Johnston
b990b2fce5 Add per user ratelimiting overrides 2017-05-10 11:05:43 +01:00
Richard van der Hoff
aedaba018f Replace some instances of preserve_context_over_deferred 2017-05-09 19:04:56 +01:00
Richard van der Hoff
de042b3b88 Do some logging when one-time-keys get claimed
might help us figure out if https://github.com/vector-im/riot-web/issues/3868
has happened.
2017-05-09 19:04:56 +01:00
Richard van der Hoff
a7e9d8762d Allow clients to upload one-time-keys with new sigs
When a client retries a key upload, don't give an error if the signature has
changed (but the key is the same).

Fixes https://github.com/vector-im/riot-android/issues/1208, hopefully.
2017-05-09 19:04:56 +01:00
Erik Johnston
ca238bc023 Merge branch 'release-v0.21.0' of github.com:matrix-org/synapse into develop 2017-05-08 17:35:11 +01:00
Erik Johnston
40dcf0d856 Merge pull request #2203 from matrix-org/erikj/event_cache_hit_ratio
Don't update event cache hit ratio from get_joined_users
2017-05-08 16:52:39 +01:00
Erik Johnston
d3c3026496 Merge pull request #2201 from matrix-org/erikj/store_device_cache
Cache check to see if device exists
2017-05-08 16:23:04 +01:00
Erik Johnston
093f7e47cc Expand docstring a bit 2017-05-08 16:14:46 +01:00
Erik Johnston
6a12998a83 Add missing yields 2017-05-08 16:10:51 +01:00
Erik Johnston
b9c84f3f3a Merge pull request #2202 from matrix-org/erikj/cache_count_device
Cache one time key counts
2017-05-08 16:09:12 +01:00
Erik Johnston
ffad4fe35b Don't update event cache hit ratio from get_joined_users
Otherwise the hit ratio of plain get_events gets completely skewed by
calls to get_joined_users* functions.
2017-05-08 16:06:17 +01:00
Erik Johnston
94e6ad71f5 Invalidate cache on device deletion 2017-05-08 15:55:59 +01:00
Erik Johnston
8571f864d2 Cache one time key counts 2017-05-08 15:34:27 +01:00
Erik Johnston
fc6d4974a6 Comment 2017-05-08 15:33:57 +01:00
Erik Johnston
738ccf61c0 Cache check to see if device exists 2017-05-08 15:32:18 +01:00
Erik Johnston
dcabef952c Increase client_ip cache size 2017-05-08 15:09:19 +01:00
Erik Johnston
771c8a83c7 Bump version and changelog 2017-05-08 13:23:46 +01:00
Erik Johnston
6631985990 Merge pull request #2200 from matrix-org/erikj/revert_push
Revert speed up push
2017-05-08 13:20:52 +01:00
Erik Johnston
e0f20e9425 Revert "Remove unused import"
This reverts commit ab37bef83b.
2017-05-08 13:07:43 +01:00
Erik Johnston
fe7c1b969c Revert "We don't care about forgotten rooms"
This reverts commit ad8b316939.
2017-05-08 13:07:43 +01:00
Erik Johnston
78f306a6f7 Revert "Speed up filtering of a single event in push"
This reverts commit 421fdf7460.
2017-05-08 13:07:41 +01:00
Erik Johnston
9ac98197bb Bump version and changelog 2017-05-08 11:07:54 +01:00
Erik Johnston
27c28eaa27 Merge pull request #2190 from matrix-org/erikj/mark_remote_as_back_more
Always mark remotes as up if we receive a signed request from them
2017-05-05 14:08:12 +01:00
Erik Johnston
be2672716d Merge pull request #2189 from matrix-org/erikj/handle_remote_device_list
Handle exceptions thrown in handling remote device list updates
2017-05-05 14:01:27 +01:00
Erik Johnston
653d90c1a5 Comment 2017-05-05 14:01:17 +01:00
Erik Johnston
310b1ccdc1 Use preserve_fn and add logs 2017-05-05 13:41:19 +01:00
Kegsay
a59b0ad1a1 Merge pull request #2192 from matrix-org/kegan/simple-http-client-timeouts
Rewrite SimpleHttpClient.request to include timeouts
2017-05-05 11:52:43 +01:00
Erik Johnston
7b222fc56e Remove redundant reset of destination timers 2017-05-05 11:14:09 +01:00
Kegan Dougal
d0debb2116 Remember how twisted works 2017-05-05 11:00:21 +01:00
Erik Johnston
66f371e8b8 Merge pull request #2176 from matrix-org/erikj/faster_get_joined
Make get_joined_users faster
2017-05-05 10:59:55 +01:00
Erik Johnston
b843631d71 Add comment and TODO 2017-05-05 10:59:32 +01:00
Kegan Dougal
c2ddd773bc Include the clock 2017-05-05 10:52:46 +01:00
Kegan Dougal
7dd3bf5e24 Rewrite SimpleHttpClient.request to include timeouts
Fixes #2191
2017-05-05 10:49:19 +01:00
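A hedged sketch of one way to bolt a timeout onto a Twisted request Deferred (not necessarily how SimpleHttpClient does it): schedule a cancellation with the reactor, and call it off if the request finishes first.

```python
from twisted.internet import reactor


def add_timeout(deferred, seconds):
    # Cancel the underlying request if it hasn't completed within `seconds`.
    delayed_call = reactor.callLater(seconds, deferred.cancel)

    def _done(result):
        if delayed_call.active():
            delayed_call.cancel()  # request finished first: drop the timer
        return result

    deferred.addBoth(_done)
    return deferred
```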
Erik Johnston
db7d0c3127 Always mark remotes as up if we receive a signed request from them 2017-05-05 10:34:53 +01:00
Erik Johnston
f346048a6e Handle exceptions thrown in handling remote device list updates 2017-05-05 10:34:10 +01:00
Erik Johnston
e3aa8a7aa8 Merge pull request #2185 from matrix-org/erikj/smaller_caches
Optimise caches for single key
2017-05-05 10:19:05 +01:00
Erik Johnston
cf589f2c1e Fixes 2017-05-05 10:17:56 +01:00
Erik Johnston
8af4569583 Merge pull request #2174 from matrix-org/erikj/current_cache_hosts
Add cache for get_current_hosts_in_room
2017-05-05 10:15:24 +01:00
Erik Johnston
b25db11d08 Merge pull request #2186 from matrix-org/revert-2175-erikj/prefill_state
Revert "Prefill state caches"
2017-05-04 15:09:25 +01:00
Erik Johnston
587f07543f Revert "Prefill state caches" 2017-05-04 15:07:27 +01:00
Erik Johnston
aa93cb9f44 Add comment 2017-05-04 14:59:28 +01:00
Erik Johnston
537dbadea0 Intern host strings 2017-05-04 14:55:28 +01:00
Erik Johnston
07a07588a0 Make caches bigger 2017-05-04 14:52:28 +01:00
Erik Johnston
dfaa58f72d Fix comment and num args 2017-05-04 14:50:24 +01:00
Erik Johnston
9ac263ed1b Add new storage functions to slave store 2017-05-04 14:29:03 +01:00
Erik Johnston
d2d8ed4884 Optimise caches with single key 2017-05-04 14:18:46 +01:00
Erik Johnston
5d8290429c Reduce size of get_users_in_room 2017-05-04 13:43:19 +01:00
Luke Barnard
6aa423a1a8 Merge pull request #2183 from matrix-org/luke/username-availability
Implement username availability checker
2017-05-04 09:58:40 +01:00
Luke Barnard
3669065466 Appease the flake8 gods 2017-05-03 18:05:49 +01:00
Erik Johnston
7ebf518c02 Make get_joined_users faster 2017-05-03 15:55:54 +01:00
Luke Barnard
34ed4f4206 Implement username availability checker
Outlined here: https://github.com/vector-im/riot-web/issues/3605#issuecomment-298679388

```HTTP
GET /_matrix/.../register/available
{
    "username": "desiredlocalpart123"
}
```

If available, the response looks like
```HTTP
HTTP/1.1 200 OK
{
    "available": true
}
```

Otherwise,
```HTTP
HTTP/1.1 429
{
    "errcode": "M_LIMIT_EXCEEDED",
    "error": "Too Many Requests",
    "retry_after_ms": 2000
}
```
or
```HTTP
HTTP/1.1 400
{
    "errcode": "M_USER_IN_USE",
    "error": "User ID already taken."
}

```
or
```HTTP
HTTP/1.1 400
{
    "errcode": "M_INVALID_USERNAME",
    "error": "Some reason for username being invalid"
}
```
2017-05-03 12:04:12 +01:00
David Baker
60833c8978 Merge pull request #2147 from matrix-org/dbkr/http_request_propagate_error
Propagate errors sensibly from proxied IS requests
2017-05-03 11:23:25 +01:00
David Baker
482a2ad122 No need for the exception variable 2017-05-03 11:02:59 +01:00
David Baker
c0380402bc List caught exception types 2017-05-03 10:56:22 +01:00
Erik Johnston
cdbf38728d Merge pull request #2175 from matrix-org/erikj/prefill_state
Prefill state caches
2017-05-03 10:54:11 +01:00
Erik Johnston
0c27383dd7 Merge pull request #2170 from matrix-org/erikj/fed_hole_state
Don't fetch state for missing events that we fetched
2017-05-03 10:49:37 +01:00
Erik Johnston
ef862186dd Merge together redundant calculations/logging 2017-05-03 10:06:43 +01:00
Erik Johnston
2c2dcf81d0 Update comment 2017-05-03 10:00:29 +01:00
Erik Johnston
1827057acc Comments 2017-05-03 09:56:05 +01:00
Erik Johnston
8346e6e696 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/prefill_state 2017-05-03 09:46:40 +01:00
Erik Johnston
e4c15fcb5c Merge pull request #2178 from matrix-org/erikj/message_metrics
Add more granular event send metrics
2017-05-02 17:57:34 +01:00
Erik Johnston
3e5a62ecd8 Add more granular event send metrics 2017-05-02 14:23:26 +01:00
Richard van der Hoff
82475a18d9 Merge pull request #2180 from matrix-org/rav/fix_timeout_on_timeout
Instantiate DeferredTimedOutError correctly
2017-05-02 13:32:58 +01:00
Richard van der Hoff
2e996271fe Instantiate DeferredTimedOutError correctly
Call `super` correctly, so that we correctly initialise the `errcode` field.

Fixes https://github.com/matrix-org/synapse/issues/2179.
2017-05-02 13:26:17 +01:00
Erik Johnston
a2c89a225c Prefill state caches 2017-05-02 10:40:31 +01:00
Erik Johnston
7166854f41 Add cache for get_current_hosts_in_room 2017-05-02 10:36:35 +01:00
Erik Johnston
3033261891 Merge pull request #2080 from matrix-org/erikj/filter_speed
Speed up filtering of a single event in push
2017-04-28 14:17:13 +01:00
Erik Johnston
2347efc065 Fixup 2017-04-28 12:46:53 +01:00
Erik Johnston
9b147cd730 Remove unncessary call in _get_missing_events_for_pdu 2017-04-28 11:55:25 +01:00
Erik Johnston
3a9f5bf6dd Don't fetch state for missing events that we fetched 2017-04-28 11:26:46 +01:00
Erik Johnston
ab37bef83b Remove unused import 2017-04-28 09:57:23 +01:00
Erik Johnston
ad8b316939 We don't care about forgotten rooms 2017-04-28 09:52:36 +01:00
Erik Johnston
421fdf7460 Speed up filtering of a single event in push 2017-04-28 09:52:36 +01:00
Erik Johnston
25a96e0c63 Merge pull request #2163 from matrix-org/erikj/fix_invite_state
Fix invite state to always include all events
2017-04-27 17:36:30 +01:00
Erik Johnston
46826bb078 Comment and remove spurious logging 2017-04-27 17:25:44 +01:00
Erik Johnston
f87b287291 Merge pull request #2127 from APwhitehat/alreadystarted
print something legible if synapse already running
2017-04-27 15:46:53 +01:00
Erik Johnston
bb9246e525 Merge pull request #2131 from matthewjwolff/develop
web_client_location documentation fix
2017-04-27 15:46:40 +01:00
Richard van der Hoff
c84770b877 Fix bgupdate error if index already exists (#2167)
When creating a new table index in the background, guard against it existing already. Fixes
https://github.com/matrix-org/synapse/issues/2135.

Also, make sure we restore the autocommit flag when we're done, otherwise we
get more failures from other operations later on. Fixes
https://github.com/matrix-org/synapse/issues/1890 (hopefully).
2017-04-27 15:27:48 +01:00
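One possible shape of the guard on PostgreSQL (illustrative only, not Synapse's background-update helper): check the catalogue before issuing the CREATE INDEX.

```python
def create_index_if_missing(txn, index_name, table, columns):
    # Skip creation if a previous (restarted) background update already
    # built the index. Assumes a PostgreSQL cursor.
    txn.execute("SELECT 1 FROM pg_indexes WHERE indexname = %s", (index_name,))
    if txn.fetchone():
        return
    txn.execute(
        "CREATE INDEX %s ON %s (%s)" % (index_name, table, ", ".join(columns))
    )
```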
Erik Johnston
380fb87ecc Merge pull request #2168 from matrix-org/erikj/federation_logging
Add some extra logging for edge cases of federation
2017-04-27 15:19:12 +01:00
Erik Johnston
87ae59f5e9 Typo 2017-04-27 15:16:21 +01:00
Erik Johnston
e42b4ebf0f Add some extra logging for edge cases of federation 2017-04-27 14:38:21 +01:00
Erik Johnston
d3c150411c Merge pull request #2130 from APwhitehat/roomexists
Check that requested room_id exists
2017-04-27 09:20:26 +01:00
Erik Johnston
1e166470ab Fix tests 2017-04-26 16:23:30 +01:00
Erik Johnston
34e682d385 Fix invite state to always include all events 2017-04-26 16:18:08 +01:00
Erik Johnston
7239258ae6 Merge pull request #2160 from matrix-org/erikj/reduce_join_cache_size
Make state caches cache in ascii
2017-04-26 14:02:06 +01:00
David Baker
5fd12dce01 Remove debugging 2017-04-26 12:36:26 +01:00
David Baker
82ae0238f9 Revert accidental commit 2017-04-26 11:43:16 +01:00
David Baker
81804909d3 Merge remote-tracking branch 'origin/develop' into dbkr/http_request_propagate_error 2017-04-26 11:31:55 +01:00
David Baker
c366276056 Fix get_json 2017-04-26 10:07:01 +01:00
David Baker
1a9255c12e Use CodeMessageException subclass instead
Parse json errors from get_json client methods and throw special
errors.
2017-04-25 19:30:55 +01:00
Matthew Hodgson
94f36b0273 document how to make IPv6 work (#2088)
* document how to make IPv6 work

* spell out that pip will install 17.1 by default
2017-04-25 18:37:12 +01:00
Erik Johnston
c45dc6c62a Merge pull request #2136 from bbigras/patch-1
Fix the system requirements list in README.rst
2017-04-25 17:25:52 +01:00
Erik Johnston
f053a1409e Make state caches cache in ascii 2017-04-25 17:22:55 +01:00
Erik Johnston
22f935ab7c Merge pull request #2159 from matrix-org/erikj/reduce_join_cache_size
Reduce size of joined_user cache
2017-04-25 17:22:02 +01:00
Erik Johnston
9388eece2b Merge pull request #2149 from enckse/develop
setting up metrics, just adding/clarifying 2 very minor items
2017-04-25 16:37:46 +01:00
Erik Johnston
acb58bfb6a fix up 2017-04-25 15:39:19 +01:00
Erik Johnston
f7181615f2 Don't specify default as dict 2017-04-25 15:22:59 +01:00
Erik Johnston
f144365281 Comment 2017-04-25 15:18:26 +01:00
Erik Johnston
d9aa645f86 Reduce size of joined_user cache
The _get_joined_users_from_context cache stores a mapping from user_id
to avatar_url and display_name. Instead of storing those in a dict,
store them in a namedtuple as that uses much less memory.

We also try converting the strings to ASCII to further reduce the size.
2017-04-25 14:38:51 +01:00
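The saving comes from swapping a per-user dict for a fixed-shape namedtuple; a quick illustration (the ProfileInfo name and values are used here only for illustration):

```python
from collections import namedtuple
import sys

ProfileInfo = namedtuple("ProfileInfo", ("avatar_url", "display_name"))

as_dict = {"avatar_url": "mxc://example/abc", "display_name": "Alice"}
as_tuple = ProfileInfo(avatar_url="mxc://example/abc", display_name="Alice")

# The namedtuple carries no per-entry hash table, so the container overhead
# of each cached profile is much smaller (exact sizes vary by Python version).
print(sys.getsizeof(as_dict), sys.getsizeof(as_tuple))
```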
Erik Johnston
22f3d3ae76 Reduce _get_state_group_for_event cache size 2017-04-25 11:43:03 +01:00
Erik Johnston
b4da08cad8 Merge pull request #2158 from matrix-org/erikj/reduce_cache_size
Reduce cache size by not storing deferreds
2017-04-25 11:18:07 +01:00
Erik Johnston
efab1dadde Remove DEBUG_CACHES 2017-04-25 10:54:09 +01:00
Mark Haines
33d5134b59 Merge pull request #2156 from matrix-org/markjh/old_verify_keys
Fix code for reporting old verify keys in synapse
2017-04-25 10:44:25 +01:00
Erik Johnston
119cb9bbcf Reduce cache size by not storing deferreds
Currently the cache descriptors store deferreds rather than raw values,
this is a simple way of triggering only one database hit and sharing the
result if two callers attempt to get the same value.

However, there are a few caches that simply store a mapping from string
to string (or int). These caches can have a large number of entries,
under the assumption that each entry is small. However, the size of a
deferred (specifically the size of ObservableDeferred) is significantly
larger than that of the raw value, 2kb vs 32b.

This PR therefore changes the cache descriptors to store the raw values
rather than the deferreds.

As a side effect, cached storage functions now return either a deferred or
the actual value, as the cached list descriptor already does. This is
fine as we always end up just yield'ing on the returned value
eventually, which handles that case correctly.
2017-04-25 10:23:11 +01:00
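The calling convention this relies on can be sketched as follows (illustrative; the store method is hypothetical). Under `inlineCallbacks`, yielding a plain value resumes the generator with that value just as yielding a Deferred does, which is why handing back the raw cached value is safe.

```python
from twisted.internet import defer


@defer.inlineCallbacks
def get_display_name(user_id, cache, store):
    value = cache.get(user_id)
    if value is None:
        # Cache miss: wait for the DB (a Deferred) and store the *resolved*
        # value rather than the Deferred, keeping each cache entry small.
        value = yield store.get_display_name_from_db(user_id)
        cache[user_id] = value
    # On a hit this is just the plain value; callers that yield on the result
    # work either way.
    defer.returnValue(value)
```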
Mark Haines
e6e2627636 Fix code for reporting old verify keys in synapse 2017-04-24 18:51:25 +01:00
Richard van der Hoff
30f7bfa121 Merge pull request #2145 from matrix-org/rav/reject_invite_to_unreachable_server
Fix rejection of invites to unreachable servers
2017-04-24 15:20:52 +01:00
Erik Johnston
7af825bae4 Merge pull request #2155 from matrix-org/erikj/string_intern
Only intern ascii strings
2017-04-24 14:28:41 +01:00
Erik Johnston
26bcda31b8 Merge pull request #2154 from matrix-org/erikj/remove_unused_cache
Remove unused cache
2017-04-24 14:24:46 +01:00
Erik Johnston
d134d0935e Only intern ascii strings 2017-04-24 14:07:48 +01:00
Erik Johnston
e4f3431116 Remove unused cache 2017-04-24 13:27:38 +01:00
David Baker
a46982cee9 Need the HTTP status code 2017-04-21 16:20:12 +01:00
David Baker
70caf49914 Do the same for get_json 2017-04-21 16:09:03 +01:00
Sean Enck
719aec4064 clarify metric setup to use 'scrape_configs' section of yaml and use an array for target 2017-04-21 11:03:32 -04:00
Richard van der Hoff
cea7839911 Document some of the admin APIs (#2143)
I haven't (yet) documented all of the user-list APIs introduced in
https://github.com/matrix-org/synapse/pull/1784 because the API shape seems
very odd, given the functionality.
2017-04-21 11:55:07 +01:00
David Baker
a1595cec78 Don't error for 3xx responses 2017-04-21 11:51:17 +01:00
David Baker
2e165295b7 Merge remote-tracking branch 'origin/develop' into dbkr/http_request_propagate_error 2017-04-21 11:35:52 +01:00
David Baker
a90a0f5c8a Propagate errors sensibly from proxied IS requests
When we're proxying Matrix endpoints, parse out Matrix error
responses and turn them into SynapseErrors so they can be
propagated sensibly upstream.
2017-04-21 11:32:48 +01:00
Richard van der Hoff
91b3981800 Try harder when sending leave events
When we're rejecting invites, ignore the backoff data, so that we have a better
chance of not getting the room out of sync.
2017-04-21 01:50:36 +01:00
Richard van der Hoff
0cdb32fc43 Remove redundant try/except clauses
The `except SynapseError` clauses were pointless because the wrapped functions
would never throw a `SynapseError` (they either throw a `CodeMessageException`
or a `RuntimeError`).

The `except CodeMessageException` is now also pointless because the caller
treats all exceptions equally, so we may as well just throw the
`CodeMessageException`.
2017-04-21 01:32:01 +01:00
Richard van der Hoff
838810b76a Broaden the conditions for locally_rejecting invites
The logic for marking invites as locally rejected was all well and good, but
didn't happen when the remote server returned a 500, or wasn't reachable, or
had no DNS, or whatever.

Just expand the except clause to catch everything.

Fixes https://github.com/matrix-org/synapse/issues/761.
2017-04-21 01:31:37 +01:00
Richard van der Hoff
736b9a4784 Remove redundant function
inline `reject_remote_invite`, which only existed to make tracing the callflow
more difficult.
2017-04-21 01:31:09 +01:00
Richard van der Hoff
4903ccf159 Fix some lies, and other clarifications, in docstrings
The documentation on get_json has been wrong ever since the very first commit
to synapse...
2017-04-21 01:31:09 +01:00
Bruno Bigras
51fb884c52 Fix the system requirements list in README.rst 2017-04-19 17:32:00 -04:00
Matthew Wolff
d4040e9e28 Queried CONDITIONAL_REQUIREMENTS 2017-04-18 16:19:48 -05:00
Luke Barnard
3fb8784c92 m.read_marker -> m.fully_read (#2128)
Also:
 - change the REST endpoint to have a "S" on the end (so it's now /read_markers)
 - change the content of the m.read_up_to event to have the key "event_id" instead of "marker".
2017-04-18 17:46:15 +01:00
Matthew Hodgson
c02b6a37d6 Merge pull request #2132 from feld/patch-1
Update README.rst
2017-04-17 16:18:34 +01:00
Mark Felder
814fb032eb Update README.rst
The FreeBSD port has been moved to the net-im category
2017-04-17 08:26:50 -05:00
Matthew Wolff
54f9a4cb59 Fixed travis build failure
Signed-off-by: Matthew Wolff <matthewjwolff@gmail.com>
2017-04-17 01:38:27 -05:00
Matthew Wolff
8e780b113d web_server_root documentation fix
Signed-off-by: Matthew Wolff <matthewjwolff@gmail.com>
2017-04-17 00:49:11 -05:00
Anant Prakash
574d573ac2 Check that requested room_id exists 2017-04-14 23:50:59 +05:30
Luke Barnard
78f0ddbfad Merge pull request #2120 from matrix-org/luke/read-markers
Implement Read Marker API
2017-04-13 14:21:31 +01:00
Luke Barnard
6a70647d45 Correct logic in is_event_after 2017-04-13 13:46:17 +01:00
Anant Prakash
c1f52a321d synctl.py: Check if synapse is already running 2017-04-13 18:00:02 +05:30
Luke Barnard
b9557064bf Simplify is_event_after logic 2017-04-12 14:36:20 +01:00
Luke Barnard
cf6121e3da More null-guard changes 2017-04-12 14:02:03 +01:00
Erik Johnston
247c736b9b Merge pull request #2115 from matrix-org/erikj/dedupe_federation_repl
Reduce federation replication traffic
2017-04-12 11:07:13 +01:00
Paul Evans
8fbc0d29ee Merge pull request #2121 from matrix-org/paul/sent-transactions-metric
Add a counter metric for successfully-sent transactions
2017-04-12 11:04:31 +01:00
Erik Johnston
c06c00190f Merge pull request #2116 from matrix-org/erikj/dedupe_federation_repl2
Dedupe KeyedEdu and Devices federation repl traffic
2017-04-12 10:57:24 +01:00
Luke Barnard
c0aba0a23e Remove Unused ref to hs 2017-04-12 10:52:11 +01:00
Luke Barnard
b9676a75f6 Move a space 2017-04-12 10:51:17 +01:00
Luke Barnard
69a18514e9 Only notify user, not entire room 2017-04-12 10:50:37 +01:00
Luke Barnard
122cd52ce4 Remove comment, simplify null-guard 2017-04-12 10:48:32 +01:00
Erik Johnston
26ae5178a4 Add some comments 2017-04-12 10:36:29 +01:00
Erik Johnston
bf9060156a Merge pull request #2117 from matrix-org/erikj/remove_http_replication
Remove HTTP replication APIs
2017-04-12 10:21:42 +01:00
Erik Johnston
82301b6c29 Remove last reference to worker_replication_url 2017-04-12 10:21:02 +01:00
Erik Johnston
1745069543 Comment 2017-04-12 10:17:10 +01:00
Erik Johnston
c7ddb5ef7a Reuse get_interested_parties 2017-04-12 10:16:26 +01:00
Erik Johnston
7b41013102 Merge pull request #2118 from matrix-org/erikj/no_devices
Fix getting latest device IP for user with no devices
2017-04-12 10:14:32 +01:00
Luke Barnard
77fb2b72ae Handle no previous RM 2017-04-12 09:47:29 +01:00
Luke Barnard
7f94709066 travis flake8.. 2017-04-11 18:35:45 +01:00
Luke Barnard
867822fa1e flake8 2017-04-11 17:36:04 +01:00
Luke Barnard
73880268ef Refactor event ordering check to events store 2017-04-11 17:34:09 +01:00
Luke Barnard
131485ef66 Copyright 2017-04-11 17:33:51 +01:00
Paul "LeoNerd" Evans
11dbceb761 Add a counter metric for successfully-sent transactions 2017-04-11 17:16:12 +01:00
Luke Barnard
0127423027 flake8 2017-04-11 17:07:07 +01:00
Erik Johnston
85657eedf8 Bail on where clause instead 2017-04-11 16:24:31 +01:00
Erik Johnston
b48045a8f5 Don't bother with outer check for now 2017-04-11 16:23:24 +01:00
Erik Johnston
6f65e2f90c Update replication docs 2017-04-11 16:21:12 +01:00
Erik Johnston
323634bf8b Update workers docs 2017-04-11 16:19:52 +01:00
Erik Johnston
9c712a366f Move get_presence_list_* to SlaveStore 2017-04-11 16:07:33 +01:00
Erik Johnston
a8c8e4efd4 Comment 2017-04-11 15:35:49 +01:00
Erik Johnston
414522aed5 Move get_interested_parties 2017-04-11 15:33:26 +01:00
Erik Johnston
2be8a281d2 Comments 2017-04-11 15:28:24 +01:00
Erik Johnston
6308ac45b0 Move get_interested_remotes back to presence handler 2017-04-11 15:19:26 +01:00
Erik Johnston
b9b72bc6e2 Comments 2017-04-11 15:15:34 +01:00
Luke Barnard
d892079844 Finish implementing RM endpoint
- This change causes a 405 to be sent if "m.read_marker" is set via /account_data
 - This also fixes up the RM endpoint so that it actually works.
2017-04-11 15:01:39 +01:00
lukebarnard
e263c26690 Initial commit of RM server-side impl
(See https://docs.google.com/document/d/1UWqdS-e1sdwkLDUY0wA4gZyIkRp-ekjsLZ8k6g_Zvso/edit#heading=h.lndohpg8at5u)
2017-04-11 11:55:30 +01:00
Erik Johnston
f3cf3ff8b6 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-04-11 11:13:32 +01:00
Erik Johnston
4902db1fc9 Merge branch 'release-v0.20.0' of github.com:matrix-org/synapse 2017-04-11 11:12:37 +01:00
Erik Johnston
d563b8d944 Bump changelog 2017-04-11 11:12:13 +01:00
Erik Johnston
85a0d6c7ab Remove test of replication resource 2017-04-11 10:59:27 +01:00
Erik Johnston
34840cdcef Fix getting latest device IP for user with no devices 2017-04-11 09:56:54 +01:00
Erik Johnston
28a4649785 Remove HTTP replication APIs 2017-04-11 09:52:11 +01:00
Matthew Hodgson
7c551ec445 trust a hypothetical future riot.im IS 2017-04-10 17:58:36 +01:00
Erik Johnston
84fbb80c8f Use generators 2017-04-10 16:55:56 +01:00
Erik Johnston
40453b3f84 Dedupe KeyedEdu and Devices federation repl traffic 2017-04-10 16:49:51 +01:00
Erik Johnston
29574fd5b3 Reduce federation presence replication traffic
This is mainly done by moving the calculation of where to send presence
updates from the presence handler to the transaction queue, so we only
need to send the presence event (and not the destinations) across the
replication connection. Before we were duplicating by sending the full
state across once per destination.
2017-04-10 16:48:30 +01:00
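A rough sketch of the shape of this change (the helper names here are illustrative, not Synapse's actual API): the federation sender receives only the presence state over the replication connection and computes the destinations itself, rather than receiving one copy of the state per destination::

    # Illustrative sketch; helper names are hypothetical, not Synapse's real API.
    def send_presence_to_destinations(transaction_queue, store, states):
        # We receive just the presence states over the replication connection.
        for state in states:
            # Hypothetical helper: work out which remote servers share a room
            # with this user, instead of having that list sent to us once per
            # destination alongside the state.
            destinations = store.get_remote_servers_for_user(state.user_id)
            for destination in destinations:
                transaction_queue.send_edu(destination, "m.presence", state)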
Erik Johnston
2e6f5a4910 Typo 2017-04-10 16:17:40 +01:00
David Baker
405ba4178a Merge pull request #2102 from DanielDent/add-auth-email
Support authenticated SMTP
2017-04-10 15:42:16 +01:00
Erik Johnston
efcb6db688 Merge pull request #2109 from matrix-org/erikj/send_queue_fix
Fix up federation SendQueue and document types
2017-04-10 13:09:25 +01:00
Erik Johnston
0018491af2 Rename variable 2017-04-10 12:44:43 +01:00
Erik Johnston
0364d23210 Up replication ping timeout 2017-04-10 11:32:05 +01:00
Erik Johnston
8c5f03cec7 Revert to sending the same data type as before 2017-04-10 10:07:18 +01:00
Erik Johnston
f8434db549 Change name 2017-04-10 10:03:07 +01:00
Erik Johnston
ab904caf33 Comments 2017-04-10 10:02:17 +01:00
Richard van der Hoff
54a59adc7c Merge pull request #2110 from matrix-org/rav/fix_reject_persistence
When we do an invite rejection, save the signed leave event to the db
2017-04-07 16:14:41 +01:00
Richard van der Hoff
64765e5199 When we do an invite rejection, save the signed leave event to the db
During a rejection of an invite received over federation, we ask a remote
server to make us a `leave` event, then sign it, then send that with
`send_leave`.

We were saving the *unsigned* version of the event (which has a different event
id to the signed version) to our db (and sending it to the clients), whereas
other servers in the room will have seen the *signed* version. We're not aware
of any actual problems that this caused, except that it makes the database confusing
to look at and generally leaves the room in a weird state.
2017-04-07 14:39:32 +01:00
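Roughly, the rejection flow described above looks like the following sketch (function names are illustrative, not the actual handler code); the fix is that the event persisted locally is the signed copy, whose event id matches what the other servers in the room saw::

    # Illustrative sketch; function and helper names are hypothetical.
    def reject_invite(federation_client, keyring, store, remote_server, room_id, user_id):
        # Ask a remote server to template a leave event for us.
        leave_event = federation_client.make_leave(remote_server, room_id, user_id)

        # Sign it with our own key; signing gives the event a different event id.
        signed_leave = keyring.sign_event(leave_event)

        # Send the signed event out over federation...
        federation_client.send_leave(remote_server, signed_leave)

        # ...and persist the *same signed copy* locally, so our database and our
        # clients agree with the rest of the room.
        store.persist_event(signed_leave)
        return signed_leave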
Erik Johnston
0cd01f5c9c Merge pull request #2108 from matrix-org/erikj/current_state_ids
Speed up get_current_state_ids
2017-04-07 14:20:16 +01:00
Erik Johnston
2a3e822f44 Comment 2017-04-07 13:47:04 +01:00
Erik Johnston
a828a64b75 Comment 2017-04-07 11:54:03 +01:00
Erik Johnston
d4d176e5d0 Add logging 2017-04-07 11:51:28 +01:00
Erik Johnston
449d1297ca Fix up federation SendQueue and document types 2017-04-07 11:48:33 +01:00
Erik Johnston
d72667fcce Speed up get_current_state_ids
Using _simple_select_list is fairly expensive for functions that return
a lot of rows and/or get called a lot. (This is because it carefully
constructs a list of dicts).

get_current_state_ids gets called a lot on startup and e.g. when the IRC
bridge decides to send tonnes of joins/leaves (as it invalidates the
cache). We therefore replace it with a custom txn function that builds
up the final result dict without building up an intermediate
representation.
2017-04-07 10:10:49 +01:00
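An illustrative sketch of the optimisation (not the exact Synapse code): run the query inside a transaction function and build the final ``{(type, state_key): event_id}`` map in one pass, with no intermediate list of dicts::

    # Illustrative sketch; assumes the current_state_events table with
    # (room_id, type, state_key, event_id) columns.
    def _get_current_state_ids_txn(txn, room_id):
        txn.execute(
            "SELECT type, state_key, event_id FROM current_state_events"
            " WHERE room_id = ?",
            (room_id,),
        )
        # Build the result dict directly from the rows.
        return {(typ, state_key): event_id for typ, state_key, event_id in txn}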
Erik Johnston
a41fe500d6 Bump version and changelog 2017-04-07 10:03:48 +01:00
Erik Johnston
54f59bd7d4 Merge pull request #2107 from HarHarLinks/patch-1
fix typo in synctl help
2017-04-07 09:54:37 +01:00
Erik Johnston
98ce212093 Merge pull request #2103 from matrix-org/erikj/no-double-encode
Don't double encode replication data
2017-04-07 09:39:52 +01:00
Kim Brose
8a1137ceab fix typo in synctl help 2017-04-06 17:10:20 +02:00
Erik Johnston
877c029c16 Use iteritems 2017-04-06 15:51:22 +01:00
Erik Johnston
944692ef69 Merge pull request #2106 from matrix-org/erikj/reduce_user_sync
Reduce rate of USER_SYNC repl commands
2017-04-06 13:35:31 +01:00
Erik Johnston
391712a4f9 Comment 2017-04-06 13:35:00 +01:00
Erik Johnston
ad544c803a Document types of the replication streams 2017-04-06 13:28:52 +01:00
Erik Johnston
dbf87282d3 Docs 2017-04-06 13:11:21 +01:00
Erik Johnston
69b3fd485d Fix incorrect type when using InvalidateCacheCommand 2017-04-06 09:36:38 +01:00
Daniel Dent
5058292537 Support authenticated SMTP
Closes (SYN-714) #1385

Signed-off-by: Daniel Dent <matrixcontrib@contactdaniel.net>
2017-04-05 21:01:08 -07:00
Erik Johnston
fcc803b2bf Add log lines 2017-04-05 17:13:44 +01:00
Erik Johnston
ea0152b132 Merge pull request #2104 from matrix-org/erikj/metrics_tcp
Rearrange TCP replication metrics
2017-04-05 14:24:06 +01:00
Erik Johnston
3f213d908d Rearrange metrics 2017-04-05 14:15:09 +01:00
Erik Johnston
1ca0e78ca1 Fix typo 2017-04-05 13:43:39 +01:00
Erik Johnston
b43d3267e2 Fixup some metrics for tcp repl 2017-04-05 13:34:54 +01:00
Erik Johnston
b5cb6347a4 Don't immediately notify the master about users whose syncs have gone away 2017-04-05 13:25:40 +01:00
Erik Johnston
96b9b6c127 Don't double json encode typing replication data 2017-04-05 11:34:20 +01:00
Erik Johnston
f10ce8944b Don't double json encode federation replication data 2017-04-05 11:10:28 +01:00
Erik Johnston
a5c401bd12 Merge pull request #2097 from matrix-org/erikj/repl_tcp_client
Move to using TCP replication
2017-04-05 09:36:21 +01:00
Erik Johnston
b9caf4f726 Merge pull request #2099 from matrix-org/erikj/deviceinbox_reduce
Deduplicate new deviceinbox rows for replication
2017-04-05 09:35:59 +01:00
Erik Johnston
d1d5362267 Add comment 2017-04-04 16:41:03 +01:00
Erik Johnston
9f26d3b75b Deduplicate new deviceinbox rows for replication 2017-04-04 16:21:21 +01:00
Erik Johnston
a76886726b Merge pull request #2098 from matrix-org/erikj/repl_tcp_fix
Advance replication streams even if nothing is listening
2017-04-04 15:40:51 +01:00
Erik Johnston
ac66e11f2b Add the appropriate amount of preserve_fn 2017-04-04 15:22:54 +01:00
Erik Johnston
4264ceb31c Fiddle tcp replication logging 2017-04-04 14:14:03 +01:00
Erik Johnston
023ee197be Advance replication streams even if nothing is listening
Otherwise the streams don't advance and steadily fall behind, so when a
worker does connect either a) they'll be streamed lots of old updates or
b) the connection will fail as the streams are too far behind.
2017-04-04 13:19:26 +01:00
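A simplified sketch of the idea (names are illustrative): advance every stream's current token on each pass, whether or not any worker connections exist, so a late-connecting worker only has to catch up from a recent position::

    # Illustrative sketch; method names are hypothetical.
    def on_notifier_poke(streamer):
        # Always move the streams forward, even with no connected workers.
        for stream in streamer.streams:
            stream.advance_current_token()

        # Only do the work of serialising and sending updates if someone is
        # actually listening.
        for connection in streamer.connections:
            connection.stream_updates()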
Erik Johnston
d1605794ad Remove unused worker config option 2017-04-04 11:17:00 +01:00
Erik Johnston
3376f16012 Shuffle and comment synchrotron presence 2017-04-04 11:14:16 +01:00
Erik Johnston
6ce6bbedcb Move where we ack federation 2017-04-04 11:02:44 +01:00
Erik Johnston
27cc627e42 Merge pull request #2082 from matrix-org/erikj/repl_tcp_server
Replace HTTP replication with TCP replication (Server side part)
2017-04-04 10:07:57 +01:00
Erik Johnston
62b89daac6 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/repl_tcp_server 2017-04-04 09:46:16 +01:00
Richard van der Hoff
773e64cc1a Merge pull request #2095 from matrix-org/rav/cull_log_preserves
Cull spurious PreserveLoggingContexts
2017-04-03 17:02:25 +01:00
Richard van der Hoff
2d05eb3cf5 Merge remote-tracking branch 'origin/release-v0.20.0' into develop 2017-04-03 16:23:23 +01:00
Richard van der Hoff
ac63b92b64 Merge pull request #2094 from matrix-org/rav/fix_federation_join
Accept join events from all servers
2017-04-03 16:17:17 +01:00
Richard van der Hoff
30bcbf775a Accept join events from all servers
Make sure that we accept join events from any server, rather than just the
origin server, to make the federation join dance work correctly.

(Fixes #1893).
2017-04-03 15:58:07 +01:00
Richard van der Hoff
7eb9f34cc3 Remove spurious yield
In `MessageHandler`, remove `yield` on call to `Notifier.on_new_room_event`:
it doesn't return anything anyway.
2017-04-03 15:44:19 +01:00
Richard van der Hoff
0b08c48fc5 Remove more spurious PreserveLoggingContexts
Remove `PreserveLoggingContext` around calls to `Notifier.on_new_room_event`;
there is no problem if the logcontext is set when calling it.
2017-04-03 15:43:37 +01:00
Richard van der Hoff
65e1683680 Remove spurious PreserveLoggingContext
In `on_new_room_event`, remove `PreserveLoggingContext` - we can call its
subroutines with the logcontext set.
2017-04-03 15:42:38 +01:00
Richard van der Hoff
feb496056e preserve_fn some deferred-returning things
In `Notifier._on_new_room_event`, `preserve_fn` around its subroutines which
return deferreds, so that it is safe to call it with an active logcontext.
2017-04-03 15:41:17 +01:00
Richard van der Hoff
e2eebf1696 Fix fixme in preserve_fn
`preserve_fn` is no longer used as a decorator anywhere, so we can safely fix a
fixme therein.
2017-04-03 15:38:02 +01:00
Erik Johnston
36c28bc467 Update all the workers and master to use TCP replication 2017-04-03 15:35:52 +01:00
Erik Johnston
3a1f3f8388 Change slave storage to use new replication interface
As the TCP replication uses a slightly different API and streams than
the HTTP replication.

This breaks HTTP replication.
2017-04-03 15:34:19 +01:00
Erik Johnston
52bfa604e1 Add basic replication client handler and factory 2017-04-03 15:34:13 +01:00
Erik Johnston
0a6a966e2b Always advance stream tokens 2017-04-03 15:22:56 +01:00
Richard van der Hoff
773e1c6d68 Remove spurious @preserve_fn decorators
Remove `@preserve_fn` decorators on `on_new_room_event`,
`_notify_pending_new_room_events`, `_on_new_room_event`, `on_new_event`, and
`on_new_replication_data` - none of these functions return a deferred, and the
decorator does nothing unless the wrapped function returns a deferred, so the
decorator was a no-op.
2017-04-03 15:14:11 +01:00
Erik Johnston
0d1c85e643 Merge branch 'release-v0.20.0' of github.com:matrix-org/synapse into develop 2017-04-03 14:58:14 +01:00
Erik Johnston
1df7c28661 Use callbacks to notify tcp replication rather than deferreds 2017-03-31 15:42:51 +01:00
Erik Johnston
36d2b66f90 Add a timestamp to USER_SYNC command
This timestamp is used to indicate when the user last sync'd
2017-03-31 15:42:22 +01:00
Erik Johnston
8a240e4f9c Merge pull request #2078 from APwhitehat/assertuserfriendly
add user friendly report of assertion error in synctl.py
2017-03-31 14:41:49 +01:00
Erik Johnston
ec039e6790 Merge pull request #1984 from RyanBreaker/patch-1
Add missing package to CentOS section
2017-03-31 14:39:32 +01:00
Erik Johnston
142b6b4abf Merge pull request #2011 from matrix-org/matthew/turn_allow_guests
add setting (on by default) to support TURN for guests
2017-03-31 14:37:09 +01:00
Erik Johnston
2a06b44be2 Merge pull request #1986 from matrix-org/matthew/enable_guest_3p
enable guest access for the 3pl/3pid APIs
2017-03-31 14:36:03 +01:00
Erik Johnston
2dc57e7413 Merge pull request #2024 from jerrykan/db_port_schema
Don't assume postgres tables are in the public schema during db port
2017-03-31 14:14:30 +01:00
Erik Johnston
07a32d192c Merge pull request #1961 from benhylau/patch-1
Clarify doc for SQLite to PostgreSQL port
2017-03-31 14:13:26 +01:00
Erik Johnston
9a27448b1b Merge pull request #1927 from zuckschwerdt/fix-nuke-script
bring nuke-room script to current schema
2017-03-31 14:06:29 +01:00
Matthew Hodgson
9ee397b440 switch to allow_guest=True for authing 3Ps as per PR feedback 2017-03-31 13:54:26 +01:00
Erik Johnston
9d0170ac6c Fix up presence 2017-03-31 11:36:32 +01:00
Erik Johnston
b4276a3896 Add a brief list of commands to docs 2017-03-31 11:34:45 +01:00
Erik Johnston
bfcf016714 Fix up docs 2017-03-31 11:19:24 +01:00
Erik Johnston
9ff4e0e91b Bump version and changelog 2017-03-30 16:37:40 +01:00
Erik Johnston
63fcc42990 Remove user from process_presence when stops syncing 2017-03-30 14:26:08 +01:00
Erik Johnston
31e0fe9031 Fix indentation in docs/ 2017-03-30 13:54:15 +01:00
Erik Johnston
3ba2859e0c Add tcp replication listener type and hook it up 2017-03-30 13:31:10 +01:00
Erik Johnston
e9dd8370b0 Add functions to presence to support remote syncs
The TCP replication protocol streams deltas of who has started or
stopped syncing. This is different from the HTTP API which periodically
sends the full list of users who are syncing. This commit adds support
for the new TCP style of sending deltas.
2017-03-30 13:25:14 +01:00
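A hedged sketch of the delta-based approach (the command format and names here are illustrative): the worker keeps a local set of syncing users and only tells the master when someone starts or stops syncing, instead of periodically sending the whole list::

    # Illustrative sketch; the real USER_SYNC command format may differ.
    class SyncingUserTracker(object):
        def __init__(self, send_user_sync):
            self.syncing_users = set()
            self.send_user_sync = send_user_sync  # sends a USER_SYNC command upstream

        def user_started_syncing(self, user_id, last_sync_ms):
            if user_id not in self.syncing_users:
                self.syncing_users.add(user_id)
                self.send_user_sync(user_id, True, last_sync_ms)

        def user_stopped_syncing(self, user_id, last_sync_ms):
            if user_id in self.syncing_users:
                self.syncing_users.discard(user_id)
                self.send_user_sync(user_id, False, last_sync_ms)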
Erik Johnston
4d7fc7f977 Add server side resource for tcp replication 2017-03-30 13:24:45 +01:00
Erik Johnston
7450693435 Initial TCP protocol implementation
This defines the low level TCP replication protocol
2017-03-30 12:54:46 +01:00
Erik Johnston
8da6f0be48 Define the various streams we will replicate 2017-03-30 12:54:46 +01:00
Erik Johnston
11880103b1 Make federation send queue take the current position 2017-03-30 12:54:36 +01:00
Erik Johnston
7984708a55 Add a simple hook to wait for replication traffic 2017-03-30 11:57:52 +01:00
Erik Johnston
24d35ab47b Add new storage functions for new replication
The new replication protocol will keep all the streams separate, rather
than muxing multiple streams into one.
2017-03-30 11:48:35 +01:00
Anant Prakash
305d16d612 add user friendly report of assertion error in synctl.py
Signed-off-by: Anant Prakash <anantprakashjsr@gmail.com>
2017-03-29 20:41:39 +05:30
John Kristensen
be44558886 Don't assume postgres tables are in the public schema during db port
When fetching the list of tables from the postgres database during the
db port, it is assumed that the tables are in the public schema. This is
not always the case, so let's just rely on postgres to determine the
default schema to use.
2017-03-17 10:53:32 +11:00
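One way to do this, sketched for illustration (the actual port script may differ): ask postgres for the tables in the connection's default schema via ``current_schema()`` rather than hard-coding ``public``::

    # Illustrative sketch; information_schema and current_schema() are standard
    # postgres features, so this respects whatever search_path is configured.
    def get_postgres_tables(cur):
        cur.execute(
            "SELECT table_name FROM information_schema.tables"
            " WHERE table_schema = current_schema()"
        )
        return [row[0] for row in cur.fetchall()]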
Matthew Hodgson
0970e0307e typo 2017-03-15 12:40:42 +00:00
Matthew Hodgson
5aa42d4292 set default for turn_allow_guests correctly 2017-03-15 12:40:13 +00:00
Matthew Hodgson
e0ff66251f add setting (on by default) to support TURN for guests 2017-03-15 12:22:18 +00:00
Ryan Breaker
a175963ba5 Add --upgrade pip
Needed before `pip install --upgrade setuptools` for CentOS 7 and also doesn't hurt for any other distro.
2017-03-13 14:05:31 -05:00
Matthew Hodgson
a61dd408ed enable guest access for the 3pl/3pid APIs 2017-03-12 19:30:45 +00:00
Ryan Breaker
53254551f0 Add missing package to CentOS section
Also added Fedora 25 to header as the same packages work for it as well.
2017-03-10 22:09:22 -06:00
Benedict Lau
92312aa3e6 Clarify doc for SQLite to PostgreSQL port 2017-03-01 01:30:11 -05:00
Christian W. Zuckschwerdt
20746d8150 bring nuke-room script to current schema
Signed-off-by: Christian W. Zuckschwerdt <christian@zuckschwerdt.org>
2017-02-19 05:27:45 +01:00
373 changed files with 31954 additions and 9947 deletions

.github/ISSUE_TEMPLATE.md

@@ -0,0 +1,47 @@
<!--
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
You will likely get better support more quickly if you ask in ** #matrix:matrix.org ** ;)
This is a bug report template. By following the instructions below and
filling out the sections with your information, you will help us to get all
the necessary data to fix your issue.
You can also preview your report before submitting it. You may remove sections
that aren't relevant to your particular case.
Text between <!-- and --> marks will be invisible in the report.
-->
### Description
Describe here the problem that you are experiencing, or the feature you are requesting.
### Steps to reproduce
- For bugs, list the steps
- that reproduce the bug
- using hyphens as bullet points
Describe how what happens differs from what you expected.
If you can identify any relevant log snippets from _homeserver.log_, please include
those here (please be careful to remove any personal or private data):
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Homeserver**: Was this issue identified on matrix.org or another homeserver?
If not matrix.org:
- **Version**: What version of Synapse is running? <!--
You can find the Synapse version by inspecting the server headers (replace matrix.org with
your own homeserver domain):
$ curl -v https://matrix.org/_matrix/client/versions 2>&1 | grep "Server:"
-->
- **Install method**: package manager/git clone/pip
- **Platform**: Tell us about the environment in which your homeserver is operating
- distro, hardware, if it's running in a vm/container, etc.

.gitignore

@@ -46,3 +46,5 @@ static/client/register/register_config.js
env/
*.config
.vscode/

.travis.yml

@@ -1,14 +1,22 @@
sudo: false
language: python
python: 2.7
# tell travis to cache ~/.cache/pip
cache: pip
env:
- TOX_ENV=packaging
- TOX_ENV=pep8
- TOX_ENV=py27
matrix:
include:
- python: 2.7
env: TOX_ENV=packaging
- python: 2.7
env: TOX_ENV=pep8
- python: 2.7
env: TOX_ENV=py27
- python: 3.6
env: TOX_ENV=py36
install:
- pip install tox

CHANGES.rst

@@ -1,3 +1,679 @@
Changes in synapse <unreleased>
===============================
Potentially breaking change:
* Make Client-Server API return 401 for invalid token (PR #3161).
This changes the Client-Server API to return a 401 error code instead of 403
when the access token is unrecognised. This is the behaviour required by the
specification, but some clients may be relying on the old, incorrect
behaviour.
Thanks to @NotAFile for fixing this.
Changes in synapse v0.28.1 (2018-05-01)
=======================================
SECURITY UPDATE
* Clamp the allowed values of event depth received over federation to be
[0, 2^63 - 1]. This mitigates an attack where malicious events
injected with depth = 2^63 - 1 render rooms unusable. Depth is used to
determine the cosmetic ordering of events within a room, and so the ordering
of events in such a room will default to using stream_ordering rather than depth
(topological_ordering).
This is a temporary solution to mitigate abuse in the wild, whilst a long term solution
is being implemented to improve how the depth parameter is used.
Full details at
https://docs.google.com/document/d/1I3fi2S-XnpO45qrpCsowZv8P8dHcNZ4fsBsbOW7KABI
* Pin Twisted to <18.4 until we stop using the private _OpenSSLECCurve API.
Changes in synapse v0.28.0 (2018-04-26)
=======================================
Bug Fixes:
* Fix quarantine media admin API and search reindex (PR #3130)
* Fix media admin APIs (PR #3134)
Changes in synapse v0.28.0-rc1 (2018-04-24)
===========================================
Minor performance improvement to federation sending and bug fixes.
(Note: This release does not include the delta state resolution implementation discussed in matrix live)
Features:
* Add metrics for event processing lag (PR #3090)
* Add metrics for ResponseCache (PR #3092)
Changes:
* Synapse on PyPy (PR #2760) Thanks to @Valodim!
* move handling of auto_join_rooms to RegisterHandler (PR #2996) Thanks to @krombel!
* Improve handling of SRV records for federation connections (PR #3016) Thanks to @silkeh!
* Document the behaviour of ResponseCache (PR #3059)
* Preparation for py3 (PR #3061, #3073, #3074, #3075, #3103, #3104, #3106, #3107, #3109, #3110) Thanks to @NotAFile!
* update prometheus dashboard to use new metric names (PR #3069) Thanks to @krombel!
* use python3-compatible prints (PR #3074) Thanks to @NotAFile!
* Send federation events concurrently (PR #3078)
* Limit concurrent event sends for a room (PR #3079)
* Improve R30 stat definition (PR #3086)
* Send events to ASes concurrently (PR #3088)
* Refactor ResponseCache usage (PR #3093)
* Clarify that SRV may not point to a CNAME (PR #3100) Thanks to @silkeh!
* Use str(e) instead of e.message (PR #3103) Thanks to @NotAFile!
* Use six.itervalues in some places (PR #3106) Thanks to @NotAFile!
* Refactor store.have_events (PR #3117)
Bug Fixes:
* Return 401 for invalid access_token on logout (PR #2938) Thanks to @dklug!
* Return a 404 rather than a 500 on rejoining empty rooms (PR #3080)
* fix federation_domain_whitelist (PR #3099)
* Avoid creating events with huge numbers of prev_events (PR #3113)
* Reject events which have lots of prev_events (PR #3118)
Changes in synapse v0.27.4 (2018-04-13)
======================================
Changes:
* Update canonicaljson dependency (#3095)
Changes in synapse v0.27.3 (2018-04-11)
======================================
Bug fixes:
* URL quote path segments over federation (#3082)
Changes in synapse v0.27.3-rc2 (2018-04-09)
==========================================
v0.27.3-rc1 used a stale version of the develop branch so the changelog overstates
the functionality. v0.27.3-rc2 is up to date, rc1 should be ignored.
Changes in synapse v0.27.3-rc1 (2018-04-09)
=======================================
Notable changes include API support for joinability of groups. Also new metrics
and phone home stats. Phone home stats include better visibility of system usage
so we can tweak Synapse to work better for all users rather than our own experience
with matrix.org. Also, recording the 'r30' stat, which is the measure we use to track
overall growth of the Matrix ecosystem. It is defined as:
Counts the number of native 30 day retained users, defined as:
* Users who have created their accounts more than 30 days ago
* Where last seen at most 30 days ago
* Where account creation and last_seen are > 30 days apart
Features:
* Add joinability for groups (PR #3045)
* Implement group join API (PR #3046)
* Add counter metrics for calculating state delta (PR #3033)
* R30 stats (PR #3041)
* Measure time it takes to calculate state group ID (PR #3043)
* Add basic performance statistics to phone home (PR #3044)
* Add response size metrics (PR #3071)
* phone home cache size configurations (PR #3063)
Changes:
* Add a blurb explaining the main synapse worker (PR #2886) Thanks to @turt2live!
* Replace old style error catching with 'as' keyword (PR #3000) Thanks to @NotAFile!
* Use .iter* to avoid copies in StateHandler (PR #3006)
* Linearize calls to _generate_user_id (PR #3029)
* Remove last usage of ujson (PR #3030)
* Use simplejson throughout (PR #3048)
* Use static JSONEncoders (PR #3049)
* Remove uses of events.content (PR #3060)
* Improve database cache performance (PR #3068)
Bug fixes:
* Add room_id to the response of `rooms/{roomId}/join` (PR #2986) Thanks to @jplatte!
* Fix replication after switch to simplejson (PR #3015)
* 404 correctly on missing paths via NoResource (PR #3022)
* Fix error when claiming e2e keys from offline servers (PR #3034)
* fix tests/storage/test_user_directory.py (PR #3042)
* use PUT instead of POST for federating groups/m.join_policy (PR #3070) Thanks to @krombel!
* postgres port script: fix state_groups_pkey error (PR #3072)
Changes in synapse v0.27.2 (2018-03-26)
=======================================
Bug fixes:
* Fix bug which broke TCP replication between workers (PR #3015)
Changes in synapse v0.27.1 (2018-03-26)
=======================================
Meta release as v0.27.0 temporarily pointed to the wrong commit
Changes in synapse v0.27.0 (2018-03-26)
=======================================
No changes since v0.27.0-rc2
Changes in synapse v0.27.0-rc2 (2018-03-19)
===========================================
Pulls in v0.26.1
Bug fixes:
* Fix bug introduced in v0.27.0-rc1 that causes much increased memory usage in state cache (PR #3005)
Changes in synapse v0.26.1 (2018-03-15)
=======================================
Bug fixes:
* Fix bug where an invalid event caused server to stop functioning correctly,
due to parsing and serializing bugs in ujson library (PR #3008)
Changes in synapse v0.27.0-rc1 (2018-03-14)
===========================================
The common case for running Synapse is not to run separate workers, but for those that do, be aware that synctl no longer starts the main synapse when using ``-a`` option with workers. A new worker file should be added with ``worker_app: synapse.app.homeserver``.
This release also begins the process of renaming a number of the metrics
reported to prometheus. See `docs/metrics-howto.rst <docs/metrics-howto.rst#block-and-response-metrics-renamed-for-0-27-0>`_.
Note that the v0.28.0 release will remove the deprecated metric names.
Features:
* Add ability for ASes to override message send time (PR #2754)
* Add support for custom storage providers for media repository (PR #2867, #2777, #2783, #2789, #2791, #2804, #2812, #2814, #2857, #2868, #2767)
* Add purge API features, see `docs/admin_api/purge_history_api.rst <docs/admin_api/purge_history_api.rst>`_ for full details (PR #2858, #2867, #2882, #2946, #2962, #2943)
* Add support for whitelisting 3PIDs that users can register. (PR #2813)
* Add ``/room/{id}/event/{id}`` API (PR #2766)
* Add an admin API to get all the media in a room (PR #2818) Thanks to @turt2live!
* Add ``federation_domain_whitelist`` option (PR #2820, #2821)
Changes:
* Continue to factor out processing from main process and into worker processes. See updated `docs/workers.rst <docs/workers.rst>`_ (PR #2892 - #2904, #2913, #2920 - #2926, #2947, #2847, #2854, #2872, #2873, #2874, #2928, #2929, #2934, #2856, #2976 - #2984, #2987 - #2989, #2991 - #2993, #2995, #2784)
* Ensure state cache is used when persisting events (PR #2864, #2871, #2802, #2835, #2836, #2841, #2842, #2849)
* Change the default config to bind on both IPv4 and IPv6 on all platforms (PR #2435) Thanks to @silkeh!
* No longer require a specific version of saml2 (PR #2695) Thanks to @okurz!
* Remove ``verbosity``/``log_file`` from generated config (PR #2755)
* Add and improve metrics and logging (PR #2770, #2778, #2785, #2786, #2787, #2793, #2794, #2795, #2809, #2810, #2833, #2834, #2844, #2965, #2927, #2975, #2790, #2796, #2838)
* When using synctl with workers, don't start the main synapse automatically (PR #2774)
* Minor performance improvements (PR #2773, #2792)
* Use a connection pool for non-federation outbound connections (PR #2817)
* Make it possible to run unit tests against postgres (PR #2829)
* Update pynacl dependency to 1.2.1 or higher (PR #2888) Thanks to @bachp!
* Remove ability for AS users to call /events and /sync (PR #2948)
* Use bcrypt.checkpw (PR #2949) Thanks to @krombel!
Bug fixes:
* Fix broken ``ldap_config`` config option (PR #2683) Thanks to @seckrv!
* Fix error message when user is not allowed to unban (PR #2761) Thanks to @turt2live!
* Fix publicised groups GET API (singular) over federation (PR #2772)
* Fix user directory when using ``user_directory_search_all_users`` config option (PR #2803, #2831)
* Fix error on ``/publicRooms`` when no rooms exist (PR #2827)
* Fix bug in quarantine_media (PR #2837)
* Fix url_previews when no Content-Type is returned from URL (PR #2845)
* Fix rare race in sync API when joining room (PR #2944)
* Fix slow event search, switch back from GIST to GIN indexes (PR #2769, #2848)
Changes in synapse v0.26.0 (2018-01-05)
=======================================
No changes since v0.26.0-rc1
Changes in synapse v0.26.0-rc1 (2017-12-13)
===========================================
Features:
* Add ability for ASes to publicise groups for their users (PR #2686)
* Add all local users to the user_directory and optionally search them (PR
#2723)
* Add support for custom login types for validating users (PR #2729)
Changes:
* Update example Prometheus config to new format (PR #2648) Thanks to
@krombel!
* Rename redact_content option to include_content in Push API (PR #2650)
* Declare support for r0.3.0 (PR #2677)
* Improve upserts (PR #2684, #2688, #2689, #2713)
* Improve documentation of workers (PR #2700)
* Improve tracebacks on exceptions (PR #2705)
* Allow guest access to group APIs for reading (PR #2715)
* Support for posting content in federation_client script (PR #2716)
* Delete devices and pushers on logouts etc (PR #2722)
Bug fixes:
* Fix database port script (PR #2673)
* Fix internal server error on login with ldap_auth_provider (PR #2678) Thanks
to @jkolo!
* Fix error on sqlite 3.7 (PR #2697)
* Fix OPTIONS on preview_url (PR #2707)
* Fix error handling on dns lookup (PR #2711)
* Fix wrong avatars when inviting multiple users when creating room (PR #2717)
* Fix 500 when joining matrix-dev (PR #2719)
Changes in synapse v0.25.1 (2017-11-17)
=======================================
Bug fixes:
* Fix login with LDAP and other password provider modules (PR #2678). Thanks to
@jkolo!
Changes in synapse v0.25.0 (2017-11-15)
=======================================
Bug fixes:
* Fix port script (PR #2673)
Changes in synapse v0.25.0-rc1 (2017-11-14)
===========================================
Features:
* Add is_public to groups table to allow for private groups (PR #2582)
* Add a route for determining who you are (PR #2668) Thanks to @turt2live!
* Add more features to the password providers (PR #2608, #2610, #2620, #2622,
#2623, #2624, #2626, #2628, #2629)
* Add a hook for custom rest endpoints (PR #2627)
* Add API to update group room visibility (PR #2651)
Changes:
* Ignore <noscript> tags when generating URL preview descriptions (PR #2576)
Thanks to @maximevaillancourt!
* Register some /unstable endpoints in /r0 as well (PR #2579) Thanks to
@krombel!
* Support /keys/upload on /r0 as well as /unstable (PR #2585)
* Front-end proxy: pass through auth header (PR #2586)
* Allow ASes to deactivate their own users (PR #2589)
* Remove refresh tokens (PR #2613)
* Automatically set default displayname on register (PR #2617)
* Log login requests (PR #2618)
* Always return `is_public` in the `/groups/:group_id/rooms` API (PR #2630)
* Avoid no-op media deletes (PR #2637) Thanks to @spantaleev!
* Fix various embarrassing typos around user_directory and add some doc. (PR
#2643)
* Return whether a user is an admin within a group (PR #2647)
* Namespace visibility options for groups (PR #2657)
* Downcase UserIDs on registration (PR #2662)
* Cache failures when fetching URL previews (PR #2669)
Bug fixes:
* Fix port script (PR #2577)
* Fix error when running synapse with no logfile (PR #2581)
* Fix UI auth when deleting devices (PR #2591)
* Fix typo when checking if user is invited to group (PR #2599)
* Fix the port script to drop NUL values in all tables (PR #2611)
* Fix appservices being backlogged and not receiving new events due to a bug in
notify_interested_services (PR #2631) Thanks to @xyzz!
* Fix updating rooms avatar/display name when modified by admin (PR #2636)
Thanks to @farialima!
* Fix bug in state group storage (PR #2649)
* Fix 500 on invalid utf-8 in request (PR #2663)
Changes in synapse v0.24.1 (2017-10-24)
=======================================
Bug fixes:
* Fix updating group profiles over federation (PR #2567)
Changes in synapse v0.24.0 (2017-10-23)
=======================================
No changes since v0.24.0-rc1
Changes in synapse v0.24.0-rc1 (2017-10-19)
===========================================
Features:
* Add Group Server (PR #2352, #2363, #2374, #2377, #2378, #2382, #2410, #2426,
#2430, #2454, #2471, #2472, #2544)
* Add support for channel notifications (PR #2501)
* Add basic implementation of backup media store (PR #2538)
* Add config option to auto-join new users to rooms (PR #2545)
Changes:
* Make the spam checker a module (PR #2474)
* Delete expired url cache data (PR #2478)
* Ignore incoming events for rooms that we have left (PR #2490)
* Allow spam checker to reject invites too (PR #2492)
* Add room creation checks to spam checker (PR #2495)
* Spam checking: add the invitee to user_may_invite (PR #2502)
* Process events from federation for different rooms in parallel (PR #2520)
* Allow error strings from spam checker (PR #2531)
* Improve error handling for missing files in config (PR #2551)
Bug fixes:
* Fix handling SERVFAILs when doing AAAA lookups for federation (PR #2477)
* Fix incompatibility with newer versions of ujson (PR #2483) Thanks to
@jeremycline!
* Fix notification keywords that start/end with non-word chars (PR #2500)
* Fix stack overflow and logcontexts from linearizer (PR #2532)
* Fix 500 error when fields missing from power_levels event (PR #2552)
* Fix 500 error when we get an error handling a PDU (PR #2553)
Changes in synapse v0.23.1 (2017-10-02)
=======================================
Changes:
* Make 'affinity' package optional, as it is not supported on some platforms
Changes in synapse v0.23.0 (2017-10-02)
=======================================
No changes since v0.23.0-rc2
Changes in synapse v0.23.0-rc2 (2017-09-26)
===========================================
Bug fixes:
* Fix regression in performance of syncs (PR #2470)
Changes in synapse v0.23.0-rc1 (2017-09-25)
===========================================
Features:
* Add a frontend proxy worker (PR #2344)
* Add support for event_id_only push format (PR #2450)
* Add a PoC for filtering spammy events (PR #2456)
* Add a config option to block all room invites (PR #2457)
Changes:
* Use bcrypt module instead of py-bcrypt (PR #2288) Thanks to @kyrias!
* Improve performance of generating push notifications (PR #2343, #2357, #2365,
#2366, #2371)
* Improve DB performance for device list handling in sync (PR #2362)
* Include a sample prometheus config (PR #2416)
* Document known to work postgres version (PR #2433) Thanks to @ptman!
Bug fixes:
* Fix caching error in the push evaluator (PR #2332)
* Fix bug where pusherpool didn't start and broke some rooms (PR #2342)
* Fix port script for user directory tables (PR #2375)
* Fix device lists notifications when user rejoins a room (PR #2443, #2449)
* Fix sync to always send down current state events in timeline (PR #2451)
* Fix bug where guest users were incorrectly kicked (PR #2453)
* Fix bug talking to IPv6 only servers using SRV records (PR #2462)
Changes in synapse v0.22.1 (2017-07-06)
=======================================
Bug fixes:
* Fix bug where pusher pool didn't start and caused issues when
interacting with some rooms (PR #2342)
Changes in synapse v0.22.0 (2017-07-06)
=======================================
No changes since v0.22.0-rc2
Changes in synapse v0.22.0-rc2 (2017-07-04)
===========================================
Changes:
* Improve performance of storing user IPs (PR #2307, #2308)
* Slightly improve performance of verifying access tokens (PR #2320)
* Slightly improve performance of event persistence (PR #2321)
* Increase default cache factor size from 0.1 to 0.5 (PR #2330)
Bug fixes:
* Fix bug with storing registration sessions that caused frequent CPU churn
(PR #2319)
Changes in synapse v0.22.0-rc1 (2017-06-26)
===========================================
Features:
* Add a user directory API (PR #2252, and many more)
* Add shutdown room API to remove room from local server (PR #2291)
* Add API to quarantine media (PR #2292)
* Add new config option to not send event contents to push servers (PR #2301)
Thanks to @cjdelisle!
Changes:
* Various performance fixes (PR #2177, #2233, #2230, #2238, #2248, #2256,
#2274)
* Deduplicate sync filters (PR #2219) Thanks to @krombel!
* Correct a typo in UPGRADE.rst (PR #2231) Thanks to @aaronraimist!
* Add count of one time keys to sync stream (PR #2237)
* Only store event_auth for state events (PR #2247)
* Store URL cache preview downloads separately (PR #2299)
Bug fixes:
* Fix users not getting notifications when AS listened to that user_id (PR
#2216) Thanks to @slipeer!
* Fix users without push set up not getting notifications after joining rooms
(PR #2236)
* Fix preview url API to trim long descriptions (PR #2243)
* Fix bug where we used cached but unpersisted state group as prev group,
resulting in broken state of restart (PR #2263)
* Fix removing of pushers when using workers (PR #2267)
* Fix CORS headers to allow Authorization header (PR #2285) Thanks to @krombel!
Changes in synapse v0.21.1 (2017-06-15)
=======================================
Bug fixes:
* Fix bug in anonymous usage statistic reporting (PR #2281)
Changes in synapse v0.21.0 (2017-05-18)
=======================================
No changes since v0.21.0-rc3
Changes in synapse v0.21.0-rc3 (2017-05-17)
===========================================
Features:
* Add per user rate-limiting overrides (PR #2208)
* Add config option to limit maximum number of events requested by ``/sync``
and ``/messages`` (PR #2221) Thanks to @psaavedra!
Changes:
* Various small performance fixes (PR #2201, #2202, #2224, #2226, #2227, #2228,
#2229)
* Update username availability checker API (PR #2209, #2213)
* When purging, don't de-delta state groups we're about to delete (PR #2214)
* Documentation to check synapse version (PR #2215) Thanks to @hamber-dick!
* Add an index to event_search to speed up purge history API (PR #2218)
Bug fixes:
* Fix API to allow clients to upload one-time-keys with new sigs (PR #2206)
Changes in synapse v0.21.0-rc2 (2017-05-08)
===========================================
Changes:
* Always mark remotes as up if we receive a signed request from them (PR #2190)
Bug fixes:
* Fix bug where users got pushed for rooms they had muted (PR #2200)
Changes in synapse v0.21.0-rc1 (2017-05-08)
===========================================
Features:
* Add username availability checker API (PR #2183)
* Add read marker API (PR #2120)
Changes:
* Enable guest access for the 3pl/3pid APIs (PR #1986)
* Add setting to support TURN for guests (PR #2011)
* Various performance improvements (PR #2075, #2076, #2080, #2083, #2108,
#2158, #2176, #2185)
* Make synctl a bit more user friendly (PR #2078, #2127) Thanks @APwhitehat!
* Replace HTTP replication with TCP replication (PR #2082, #2097, #2098,
#2099, #2103, #2014, #2016, #2115, #2116, #2117)
* Support authenticated SMTP (PR #2102) Thanks @DanielDent!
* Add a counter metric for successfully-sent transactions (PR #2121)
* Propagate errors sensibly from proxied IS requests (PR #2147)
* Add more granular event send metrics (PR #2178)
Bug fixes:
* Fix nuke-room script to work with current schema (PR #1927) Thanks
@zuckschwerdt!
* Fix db port script to not assume postgres tables are in the public schema
(PR #2024) Thanks @jerrykan!
* Fix getting latest device IP for user with no devices (PR #2118)
* Fix rejection of invites to unreachable servers (PR #2145)
* Fix code for reporting old verify keys in synapse (PR #2156)
* Fix invite state to always include all events (PR #2163)
* Fix bug where synapse would always fetch state for any missing event (PR #2170)
* Fix a leak with timed out HTTP connections (PR #2180)
* Fix bug where we didn't time out HTTP requests to ASes (PR #2192)
Docs:
* Clarify doc for SQLite to PostgreSQL port (PR #1961) Thanks @benhylau!
* Fix typo in synctl help (PR #2107) Thanks @HarHarLinks!
* ``web_client_location`` documentation fix (PR #2131) Thanks @matthewjwolff!
* Update README.rst with FreeBSD changes (PR #2132) Thanks @feld!
* Clarify setting up metrics (PR #2149) Thanks @encks!
Changes in synapse v0.20.0 (2017-04-11)
=======================================
Bug fixes:
* Fix joining rooms over federation where not all servers in the room saw the
new server had joined (PR #2094)
Changes in synapse v0.20.0-rc1 (2017-03-30)
===========================================
Features:
* Add delete_devices API (PR #1993)
* Add phone number registration/login support (PR #1994, #2055)
Changes:
* Use JSONSchema for validation of filters. Thanks @pik! (PR #1783)
* Reread log config on SIGHUP (PR #1982)
* Speed up public room list (PR #1989)
* Add helpful texts to logger config options (PR #1990)
* Minor ``/sync`` performance improvements. (PR #2002, #2013, #2022)
* Add some debug to help diagnose weird federation issue (PR #2035)
* Correctly limit retries for all federation requests (PR #2050, #2061)
* Don't lock table when persisting new one time keys (PR #2053)
* Reduce some CPU work on DB threads (PR #2054)
* Cache hosts in room (PR #2060)
* Batch sending of device list pokes (PR #2063)
* Speed up persist event path in certain edge cases (PR #2070)
Bug fixes:
* Fix bug where current_state_events renamed to current_state_ids (PR #1849)
* Fix routing loop when fetching remote media (PR #1992)
* Fix current_state_events table to not lie (PR #1996)
* Fix CAS login to handle PartialDownloadError (PR #1997)
* Fix assertion to stop transaction queue getting wedged (PR #2010)
* Fix presence to fallback to last_active_ts if it beats the last sync time.
Thanks @Half-Shot! (PR #2014)
* Fix bug when federation received a PDU while a room join is in progress (PR
#2016)
* Fix resetting state on rejected events (PR #2025)
* Fix installation issues in readme. Thanks @ricco386 (PR #2037)
* Fix caching of remote servers' signature keys (PR #2042)
* Fix some leaking log context (PR #2048, #2049, #2057, #2058)
* Fix rejection of invites not reaching sync (PR #2056)
Changes in synapse v0.19.3 (2017-03-20)
=======================================

CONTRIBUTING.rst

@@ -30,8 +30,12 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use Jenkins for continuous integration (http://matrix.org/jenkins), and
typically all pull requests get automatically tested Jenkins: if your change breaks the build, Jenkins will yell about it in #matrix-dev:matrix.org so please lurk there and keep an eye open.
We use `Jenkins <http://matrix.org/jenkins>`_ and
`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an administrator to start them. If your change
breaks the build, this will be shown in github, so please keep an eye on the
pull request for feedback.
Code style
~~~~~~~~~~

MANIFEST.in

@@ -27,4 +27,5 @@ exclude jenkins*.sh
exclude jenkins*
recursive-exclude jenkins *.sh
prune .github
prune demo/etc

README.rst

@@ -84,6 +84,7 @@ Synapse Installation
Synapse is the reference python/twisted Matrix homeserver implementation.
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 2.7
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
@@ -108,10 +109,10 @@ Installing prerequisites on ArchLinux::
sudo pacman -S base-devel python2 python-pip \
python-setuptools python-virtualenv sqlite3
Installing prerequisites on CentOS 7::
Installing prerequisites on CentOS 7 or Fedora 25::
sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \
python-virtualenv libffi-devel openssl-devel
sudo yum groupinstall "Development Tools"
@@ -156,8 +157,8 @@ if you prefer.
In case of problems, please see the _`Troubleshooting` section below.
Alternatively, Silvio Fricke has contributed a Dockerfile to automate the
above in Docker at https://registry.hub.docker.com/u/silviof/docker-matrix/.
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate the
above in Docker at https://hub.docker.com/r/avhost/docker-matrix/tags/
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
@@ -199,11 +200,11 @@ different. See `the spec`__ for more information on key management.)
.. __: `key_management`_
The default configuration exposes two HTTP ports: 8008 and 8448. Port 8008 is
configured without TLS; it is not recommended this be exposed outside your
local network. Port 8448 is configured to use TLS with a self-signed
certificate. This is fine for testing with but, to avoid your clients
complaining about the certificate, you will almost certainly want to use
another certificate for production purposes. (Note that a self-signed
configured without TLS; it should be behind a reverse proxy for TLS/SSL
termination on port 443 which in turn should be used for clients. Port 8448
is configured to use TLS with a self-signed certificate. If you would like
to do an initial test with a client without having to set up a reverse proxy,
you can temporarily use another certificate. (Note that a self-signed
certificate is fine for `Federation`_). You can do so by changing
``tls_certificate_path``, ``tls_private_key_path`` and ``tls_dh_params_path``
in ``homeserver.yaml``; alternatively, you can use a reverse-proxy, but be sure
@@ -245,6 +246,25 @@ Setting up a TURN server
For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See `<docs/turn-howto.rst>`_ for details.
IPv6
----
As of Synapse 0.19 we finally support IPv6, many thanks to @kyrias and @glyph
for providing PR #1696.
However, for federation to work on hosts with IPv6 DNS servers you **must**
be running Twisted 17.1.0 or later - see https://github.com/matrix-org/synapse/issues/1002
for details. We can't make Synapse depend on Twisted 17.1 by default
yet as it will break most older distributions (see https://github.com/matrix-org/synapse/pull/1909)
so if you are using operating system dependencies you'll have to install your
own Twisted 17.1 package via pip or backports etc.
If you're running in a virtualenv then pip should have installed the newest
Twisted automatically, but if your virtualenv is old you will need to manually
upgrade to a newer Twisted dependency via:
pip install Twisted>=17.1.0
Running Synapse
===============
@@ -263,10 +283,16 @@ Connecting to Synapse from a client
The easiest way to try out your new Synapse installation is by connecting to it
from a web client. The easiest option is probably the one at
http://riot.im/app. You will need to specify a "Custom server" when you log on
or register: set this to ``https://localhost:8448`` - remember to specify the
port (``:8448``) unless you changed the configuration. (Leave the identity
or register: set this to ``https://domain.tld`` if you set up a reverse proxy
following the recommended setup, or ``https://localhost:8448`` - remember to specify the
port (``:8448``) if not ``:443`` unless you changed the configuration. (Leave the identity
server as the default - see `Identity servers`_.)
If using port 8448 you will run into errors until you accept the self-signed
certificate. You can easily do this by going to ``https://localhost:8448``
directly with your browser and accepting the presented certificate. You can then
go back in your web client and proceed further.
If all goes well you should at least be able to log in, create a room, and
start sending messages.
@@ -328,6 +354,10 @@ https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one
Fedora
------
Synapse is in the Fedora repositories as ``matrix-synapse``::
sudo dnf install matrix-synapse
Oleg Girko provides Fedora RPMs at
https://obs.infoserver.lv/project/monitor/matrix-synapse
@@ -335,8 +365,11 @@ ArchLinux
---------
The quickest way to get up and running with ArchLinux is probably with the community package
https://www.archlinux.org/packages/community/any/matrix-synapse/, which should pull in all
the necessary dependencies.
https://www.archlinux.org/packages/community/any/matrix-synapse/, which should pull in most of
the necessary dependencies. If the default web client is to be served (enabled by default in
the generated config),
https://www.archlinux.org/packages/community/any/python2-matrix-angular-sdk/ will also need to
be installed.
Alternatively, to install using pip a few changes may be needed as ArchLinux
defaults to python 3, but synapse currently assumes python 2.7 by default:
@@ -373,7 +406,7 @@ FreeBSD
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
- Ports: ``cd /usr/ports/net/py-matrix-synapse && make install clean``
- Ports: ``cd /usr/ports/net-im/py-matrix-synapse && make install clean``
- Packages: ``pkg install py27-matrix-synapse``
@@ -505,6 +538,30 @@ fix try re-installing from PyPI or directly from
# Install from github
pip install --user https://github.com/pyca/pynacl/tarball/master
Running out of File Handles
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If synapse runs out of filehandles, it typically fails badly - live-locking
at 100% CPU, and/or failing to accept new TCP connections (blocking the
connecting client). Matrix currently can legitimately use a lot of file handles,
thanks to busy rooms like #matrix:matrix.org containing hundreds of participating
servers. The first time a server talks in a room it will try to connect
simultaneously to all participating servers, which could exhaust the available
file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow
to respond. (We need to improve the routing algorithm used to be better than
full mesh, but as of June 2017 this hasn't happened yet).
If you hit this failure mode, we recommend increasing the maximum number of
open file handles to be at least 4096 (assuming a default of 1024 or 256).
This is typically done by editing ``/etc/security/limits.conf``
Separately, Synapse may leak file handles if inbound HTTP requests get stuck
during processing - e.g. blocked behind a lock or talking to a remote server etc.
This is best diagnosed by matching up the 'Received request' and 'Processed request'
log lines and looking for any 'Processed request' lines which take more than
a few seconds to execute. Please let us know at #matrix-dev:matrix.org if
you see this failure mode so we can help debug it, however.
ArchLinux
~~~~~~~~~
@@ -546,8 +603,9 @@ you to run your server on a machine that might not have the same name as your
domain name. For example, you might want to run your server at
``synapse.example.com``, but have your Matrix user-ids look like
``@user:example.com``. (A SRV record also allows you to change the port from
the default 8448. However, if you are thinking of using a reverse-proxy, be
sure to read `Reverse-proxying the federation port`_ first.)
the default 8448. However, if you are thinking of using a reverse-proxy on the
federation port, which is not recommended, be sure to read
`Reverse-proxying the federation port`_ first.)
To use a SRV record, first create your SRV record and publish it in DNS. This
should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
@@ -556,6 +614,9 @@ should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
$ dig -t srv _matrix._tcp.example.com
_matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com.
Note that the server hostname cannot be an alias (CNAME record): it has to point
directly to the server hosting the synapse instance.
You can then configure your homeserver to use ``<yourdomain.com>`` as the domain in
its user-ids, by setting ``server_name``::
@@ -578,6 +639,11 @@ largest boxes pause for thought.)
Troubleshooting
---------------
You can use the federation tester to check if your homeserver is all set:
``https://matrix.org/federationtester/api/report?server_name=<your_server_name>``
If any of the attributes under "checks" is false, federation won't work.
The typical failure mode with federation is that when you try to join a room,
it is rejected with "401: Unauthorized". Generally this means that other
servers in the room couldn't access yours. (Joining a room over federation is a
@@ -627,7 +693,7 @@ For information on how to install and use PostgreSQL, please see
Using a reverse proxy with Synapse
==================================
It is possible to put a reverse proxy such as
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
`HAProxy <http://www.haproxy.org/>`_ in front of Synapse. One advantage of
@@ -645,9 +711,9 @@ federation port has a number of pitfalls. It is possible, but be sure to read
`Reverse-proxying the federation port`_.
The recommended setup is therefore to configure your reverse-proxy on port 443
for client connections, but to also expose port 8448 for server-server
connections. All the Matrix endpoints begin ``/_matrix``, so an example nginx
configuration might look like::
to port 8008 of synapse for client connections, but to also directly expose port
8448 for server-server connections. All the Matrix endpoints begin ``/_matrix``,
so an example nginx configuration might look like::
server {
listen 443 ssl;
@@ -769,7 +835,9 @@ spidering 'internal' URLs on your network. At the very least we recommend that
your loopback and RFC1918 IP addresses are blacklisted.
This also requires the optional lxml and netaddr python dependencies to be
installed.
installed. This in turn requires the libxml2 library to be available - on
Debian/Ubuntu this means ``apt-get install libxml2-dev``, or equivalent for
your OS.
Password reset
@@ -829,6 +897,17 @@ This should end with a 'PASSED' result::
PASSED (successes=143)
Running the Integration Tests
=============================
Synapse is accompanied by `SyTest <https://github.com/matrix-org/sytest>`_,
a Matrix homeserver integration testing suite, which uses HTTP requests to
access the API as a Matrix client would. It is able to run Synapse directly from
the source tree, so installation of the server is not required.
Testing with SyTest is recommended for verifying that changes related to the
Client-Server API are functioning correctly. See the `installation instructions
<https://github.com/matrix-org/sytest#installing>`_ for details.
Building Internal API Documentation
===================================
@@ -852,12 +931,9 @@ cache a lot of recent room data and metadata in RAM in order to speed up
common requests. We'll improve this in future, but for now the easiest
way to either reduce the RAM usage (at the risk of slowing things down)
is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
variable. Roughly speaking, a SYNAPSE_CACHE_FACTOR of 1.0 will max out
at around 3-4GB of resident memory - this is what we currently run the
matrix.org on. The default setting is currently 0.1, which is probably
around a ~700MB footprint. You can dial it down further to 0.02 if
desired, which targets roughly ~512MB. Conversely you can dial it up if
you need performance for lots of users and have a box with a lot of RAM.
variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
.. _`key_management`: https://matrix.org/docs/spec/server_server/unstable.html#retrieving-server-keys

UPGRADE.rst

@@ -5,30 +5,60 @@ Before upgrading check if any special steps are required to upgrade from the
what you currently have installed to the current version of synapse. The extra
instructions that may be required are listed later in this document.
If synapse was installed in a virtualenv then activate that virtualenv before
upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then run:
1. If synapse was installed in a virtualenv then activate that virtualenv before
upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then
run:
.. code:: bash
.. code:: bash
source ~/.synapse/bin/activate
If synapse was installed using pip then upgrade to the latest version by
running:
2. If synapse was installed using pip then upgrade to the latest version by
running:
.. code:: bash
.. code:: bash
pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
If synapse was installed using git then upgrade to the latest version by
running:
# restart synapse
synctl restart
.. code:: bash
If synapse was installed using git then upgrade to the latest version by
running:
.. code:: bash
# Pull the latest version of the master branch.
git pull
# Update the versions of synapse's python dependencies.
python synapse/python_dependencies.py | xargs -n1 pip install --upgrade
python synapse/python_dependencies.py | xargs pip install --upgrade
# restart synapse
./synctl restart
To check whether your update was successful, you can check the Server header
returned by the Client-Server API:
.. code:: bash
# replace <host.name> with the hostname of your synapse homeserver.
# You may need to specify a port (eg, :8448) if your server is not
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to $NEXT_VERSION
==========================
This release expands the anonymous usage stats sent if the opt-in
``report_stats`` configuration is set to ``true``. We now capture RSS memory
and cpu use at a very coarse level. This requires administrators to install
the optional ``psutil`` python module.
We would appreciate it if you could assist by ensuring this module is available
and ``report_stats`` is enabled. This will let us see if performance changes to
synapse are having an impact on the general community.
Upgrading to v0.15.0
====================

contrib/README.rst

@@ -0,0 +1,10 @@
Community Contributions
=======================
Everything in this directory is a project submitted by the community that may be useful
to others. As such, the project maintainers cannot guarantee support, stability
or backwards compatibility of these projects.
Files in this directory should *not* be relied on directly, as they may not
continue to work or exist in future. If you wish to use any of these files then
they should be copied to avoid them breaking from underneath you.


@@ -36,15 +36,13 @@ class HttpClient(object):
the request body. This will be encoded as JSON.
Returns:
Deferred: Succeeds when we get *any* HTTP response.
The result of the deferred is a tuple of `(code, response)`,
where `response` is a dict representing the decoded JSON body.
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.
"""
pass
def get_json(self, url, args=None):
""" Get's some json from the given host homeserver and path
""" Gets some json from the given host homeserver and path
Args:
url (str): The URL to GET data from.
@@ -54,10 +52,8 @@ class HttpClient(object):
and *not* a string.
Returns:
Deferred: Succeeds when we get *any* HTTP response.
The result of the deferred is a tuple of `(code, response)`,
where `response` is a dict representing the decoded JSON body.
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.
"""
pass


@@ -22,6 +22,8 @@ import argparse
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
from six import string_types
def make_graph(file_name, room_id, file_prefix, limit):
print "Reading lines"
@@ -58,7 +60,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for key, value in unfreeze(event.get_dict()["content"]).items():
if value is None:
value = "<null>"
elif isinstance(value, basestring):
elif isinstance(value, string_types):
pass
else:
value = json.dumps(value)

contrib/prometheus/README

@@ -0,0 +1,37 @@
This directory contains some sample monitoring config for using the
'Prometheus' monitoring server against synapse.
To use it, first install prometheus by following the instructions at
http://prometheus.io/
### for Prometheus v1
Add a new job to the main prometheus.conf file:
job: {
name: "synapse"
target_group: {
target: "http://SERVER.LOCATION.HERE:PORT/_synapse/metrics"
}
}
### for Prometheus v2
Add a new job to the main prometheus.yml file:
- job_name: "synapse"
metrics_path: "/_synapse/metrics"
# when endpoint uses https:
scheme: "https"
static_configs:
- targets: ['SERVER.LOCATION:PORT']
To use `synapse.rules` add
rule_files:
- "/PATH/TO/synapse-v2.rules"
Metrics are disabled by default when running synapse; they must be enabled
with the 'enable-metrics' option, either in the synapse config file or as a
command-line option.


@@ -0,0 +1,395 @@
{{ template "head" . }}
{{ template "prom_content_head" . }}
<h1>System Resources</h1>
<h3>CPU</h3>
<div id="process_resource_utime"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_resource_utime"),
expr: "rate(process_cpu_seconds_total[2m]) * 100",
name: "[[job]]",
min: 0,
max: 100,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "%",
yTitle: "CPU Usage"
})
</script>
<h3>Memory</h3>
<div id="process_resource_maxrss"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_resource_maxrss"),
expr: "process_psutil_rss:max",
name: "Maxrss",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "bytes",
yTitle: "Usage"
})
</script>
<h3>File descriptors</h3>
<div id="process_fds"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_fds"),
expr: "process_open_fds{job='synapse'}",
name: "FDs",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "",
yTitle: "Descriptors"
})
</script>
<h1>Reactor</h1>
<h3>Total reactor time</h3>
<div id="reactor_total_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_total_time"),
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / 1000",
name: "time",
max: 1,
min: 0,
renderer: "area",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Average reactor tick time</h3>
<div id="reactor_average_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_average_time"),
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / rate(python_twisted_reactor_tick_time:count[2m]) / 1000",
name: "time",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s",
yTitle: "Time"
})
</script>
<h3>Pending calls per tick</h3>
<div id="reactor_pending_calls"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_pending_calls"),
expr: "rate(python_twisted_reactor_pending_calls:total[30s])/rate(python_twisted_reactor_pending_calls:count[30s])",
name: "calls",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yTitle: "Pending Cals"
})
</script>
<h1>Storage</h1>
<h3>Queries</h3>
<div id="synapse_storage_query_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_query_time"),
expr: "rate(synapse_storage_query_time:count[2m])",
name: "[[verb]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "queries/s",
yTitle: "Queries"
})
</script>
<h3>Transactions</h3>
<div id="synapse_storage_transaction_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transaction_time"),
expr: "rate(synapse_storage_transaction_time:count[2m])",
name: "[[desc]]",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "txn/s",
yTitle: "Transactions"
})
</script>
<h3>Transaction execution time</h3>
<div id="synapse_storage_transactions_time_msec"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transactions_time_msec"),
expr: "rate(synapse_storage_transaction_time:total[2m]) / 1000",
name: "[[desc]]",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Database scheduling latency</h3>
<div id="synapse_storage_schedule_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_schedule_time"),
expr: "rate(synapse_storage_schedule_time:total[2m]) / 1000",
name: "Total latency",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Cache hit ratio</h3>
<div id="synapse_cache_ratio"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_cache_ratio"),
expr: "rate(synapse_util_caches_cache:total[2m]) * 100",
name: "[[name]]",
min: 0,
max: 100,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "%",
yTitle: "Percentage"
})
</script>
<h3>Cache size</h3>
<div id="synapse_cache_size"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_cache_size"),
expr: "synapse_util_caches_cache:size",
name: "[[name]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Items"
})
</script>
<h1>Requests</h1>
<h3>Requests by Servlet</h3>
<div id="synapse_http_server_request_count_servlet"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_request_count_servlet"),
expr: "rate(synapse_http_server_request_count:servlet[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
<div id="synapse_http_server_request_count_servlet_minus_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_request_count_servlet_minus_events"),
expr: "rate(synapse_http_server_request_count:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Average response times</h3>
<div id="synapse_http_server_response_time_avg"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_time_avg"),
expr: "rate(synapse_http_server_response_time_seconds[2m]) / rate(synapse_http_server_response_count[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req",
yTitle: "Response time"
})
</script>
<h3>All responses by code</h3>
<div id="synapse_http_server_responses"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_responses"),
expr: "rate(synapse_http_server_responses[2m])",
name: "[[method]] / [[code]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Error responses by code</h3>
<div id="synapse_http_server_responses_err"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_responses_err"),
expr: "rate(synapse_http_server_responses{code=~\"[45]..\"}[2m])",
name: "[[method]] / [[code]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>CPU Usage</h3>
<div id="synapse_http_server_response_ru_utime"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_ru_utime"),
expr: "rate(synapse_http_server_response_ru_utime_seconds[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "CPU Usage"
})
</script>
<h3>DB Usage</h3>
<div id="synapse_http_server_response_db_txn_duration"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
expr: "rate(synapse_http_server_response_db_txn_duration_seconds[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "DB Usage"
})
</script>
<h3>Average event send times</h3>
<div id="synapse_http_server_send_time_avg"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_send_time_avg"),
expr: "rate(synapse_http_server_response_time_second{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req",
yTitle: "Response time"
})
</script>
<h1>Federation</h1>
<h3>Sent Messages</h3>
<div id="synapse_federation_client_sent"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_client_sent"),
expr: "rate(synapse_federation_client_sent[2m])",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Received Messages</h3>
<div id="synapse_federation_server_received"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_server_received"),
expr: "rate(synapse_federation_server_received[2m])",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Pending</h3>
<div id="synapse_federation_transaction_queue_pending"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_transaction_queue_pending"),
expr: "synapse_federation_transaction_queue_pending",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Units"
})
</script>
<h1>Clients</h1>
<h3>Notifiers</h3>
<div id="synapse_notifier_listeners"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_listeners"),
expr: "synapse_notifier_listeners",
name: "listeners",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Listeners"
})
</script>
<h3>Notified Events</h3>
<div id="synapse_notifier_notified_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_notified_events"),
expr: "rate(synapse_notifier_notified_events[2m])",
name: "events",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "events/s",
yTitle: "Event rate"
})
</script>
{{ template "prom_content_tail" . }}
{{ template "tail" }}


@@ -0,0 +1,21 @@
synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
synapse_http_server_request_count:method{servlet=""} = sum(synapse_http_server_request_count) by (method)
synapse_http_server_request_count:servlet{method=""} = sum(synapse_http_server_request_count) by (servlet)
synapse_http_server_request_count:total{servlet=""} = sum(synapse_http_server_request_count:by_method) by (servlet)
synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])
synapse_federation_client_sent{type="EDU"} = synapse_federation_client_sent_edus + 0
synapse_federation_client_sent{type="PDU"} = synapse_federation_client_sent_pdu_destinations:count + 0
synapse_federation_client_sent{type="Query"} = sum(synapse_federation_client_sent_queries) by (job)
synapse_federation_server_received{type="EDU"} = synapse_federation_server_received_edus + 0
synapse_federation_server_received{type="PDU"} = synapse_federation_server_received_pdus + 0
synapse_federation_server_received{type="Query"} = sum(synapse_federation_server_received_queries) by (job)
synapse_federation_transaction_queue_pending{type="EDU"} = synapse_federation_transaction_queue_pending_edus + 0
synapse_federation_transaction_queue_pending{type="PDU"} = synapse_federation_transaction_queue_pending_pdus + 0


@@ -0,0 +1,60 @@
groups:
- name: synapse
rules:
- record: "synapse_federation_transaction_queue_pendingEdus:total"
expr: "sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)"
- record: "synapse_federation_transaction_queue_pendingPdus:total"
expr: "sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)"
- record: 'synapse_http_server_request_count:method'
labels:
servlet: ""
expr: "sum(synapse_http_server_request_count) by (method)"
- record: 'synapse_http_server_request_count:servlet'
labels:
method: ""
expr: 'sum(synapse_http_server_request_count) by (servlet)'
- record: 'synapse_http_server_request_count:total'
labels:
servlet: ""
expr: 'sum(synapse_http_server_request_count:by_method) by (servlet)'
- record: 'synapse_cache:hit_ratio_5m'
expr: 'rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])'
- record: 'synapse_cache:hit_ratio_30s'
expr: 'rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])'
- record: 'synapse_federation_client_sent'
labels:
type: "EDU"
expr: 'synapse_federation_client_sent_edus + 0'
- record: 'synapse_federation_client_sent'
labels:
type: "PDU"
expr: 'synapse_federation_client_sent_pdu_destinations:count + 0'
- record: 'synapse_federation_client_sent'
labels:
type: "Query"
expr: 'sum(synapse_federation_client_sent_queries) by (job)'
- record: 'synapse_federation_server_received'
labels:
type: "EDU"
expr: 'synapse_federation_server_received_edus + 0'
- record: 'synapse_federation_server_received'
labels:
type: "PDU"
expr: 'synapse_federation_server_received_pdus + 0'
- record: 'synapse_federation_server_received'
labels:
type: "Query"
expr: 'sum(synapse_federation_server_received_queries) by (job)'
- record: 'synapse_federation_transaction_queue_pending'
labels:
type: "EDU"
expr: 'synapse_federation_transaction_queue_pending_edus + 0'
- record: 'synapse_federation_transaction_queue_pending'
labels:
type: "PDU"
expr: 'synapse_federation_transaction_queue_pending_pdus + 0'


@@ -2,6 +2,9 @@
# (e.g. https://www.archlinux.org/packages/community/any/matrix-synapse/ for ArchLinux)
# rather than in a user home directory or similar under virtualenv.
# **NOTE:** This is an example service file that may change in the future. If you
# wish to use this please copy rather than symlink it.
[Unit]
Description=Synapse Matrix homeserver
@@ -9,9 +12,11 @@ Description=Synapse Matrix homeserver
Type=simple
User=synapse
Group=synapse
EnvironmentFile=-/etc/sysconfig/synapse
WorkingDirectory=/var/lib/synapse
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml --log-config=/etc/synapse/log_config.yaml
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml
ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml
# EnvironmentFile=-/etc/sysconfig/synapse # Can be used to e.g. set SYNAPSE_CACHE_FACTOR
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,23 @@
# List all media in a room
This API gets a list of known media in a room.
The API is:
```
GET /_matrix/client/r0/admin/room/<room_id>/media
```
including an `access_token` of a server admin.
It returns a JSON body like the following:
```
{
"local": [
"mxc://localhost/xwvutsrqponmlkjihgfedcba",
"mxc://localhost/abcdefghijklmnopqrstuvwx"
],
"remote": [
"mxc://matrix.org/xwvutsrqponmlkjihgfedcba",
"mxc://matrix.org/abcdefghijklmnopqrstuvwx"
]
}
```
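For illustration only (not part of the API documentation itself), the endpoint could be
called from Python with `requests`; the homeserver URL, room ID and admin token below
are placeholders:

```python
import requests

HOMESERVER = "https://matrix.example.com"   # placeholder
ROOM_ID = "!someroom:example.com"           # placeholder
ADMIN_TOKEN = "MDAx...admin_access_token"   # placeholder

resp = requests.get(
    HOMESERVER + "/_matrix/client/r0/admin/room/" + ROOM_ID + "/media",
    params={"access_token": ADMIN_TOKEN},
)
resp.raise_for_status()
media = resp.json()
print("%d local, %d remote media items" % (len(media["local"]), len(media["remote"])))
```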


@@ -8,8 +8,56 @@ Depending on the amount of history being purged a call to the API may take
several minutes or longer. During this period users will not be able to
paginate further back in the room from the point being purged from.
The API is simply:
The API is:
``POST /_matrix/client/r0/admin/purge_history/<room_id>/<event_id>``
``POST /_matrix/client/r0/admin/purge_history/<room_id>[/<event_id>]``
including an ``access_token`` of a server admin.
By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
deleted.)
Room state data (such as joins, leaves, topic) is always preserved.
To delete local message events as well, set ``delete_local_events`` in the body:
.. code:: json
{
"delete_local_events": true
}
The caller must specify the point in the room to purge up to. This can be
specified by including an event_id in the URI, or by setting a
``purge_up_to_event_id`` or ``purge_up_to_ts`` in the request body. If an event
id is given, that event (and others at the same graph depth) will be retained.
If ``purge_up_to_ts`` is given, it should be a timestamp since the unix epoch,
in milliseconds.
The API starts the purge running, and returns immediately with a JSON body with
a purge id:
.. code:: json
{
"purge_id": "<opaque id>"
}
Purge status query
------------------
It is possible to poll for updates on recent purges with a second API;
``GET /_matrix/client/r0/admin/purge_history_status/<purge_id>``
(again, with a suitable ``access_token``). This API returns a JSON body like
the following:
.. code:: json
{
"status": "active"
}
The status will be one of ``active``, ``complete``, or ``failed``.
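As an illustrative sketch only (the homeserver URL, room ID, timestamp and admin
token are placeholders), a purge can be started and then polled from Python with
``requests``:

.. code:: python

    import time
    import requests

    HOMESERVER = "https://matrix.example.com"   # placeholder
    ROOM_ID = "!someroom:example.com"           # placeholder
    ADMIN_TOKEN = "MDAx...admin_access_token"   # placeholder

    # Start a purge of everything before the given unix-epoch timestamp (ms).
    # Events sent by local users are kept unless delete_local_events is set.
    resp = requests.post(
        HOMESERVER + "/_matrix/client/r0/admin/purge_history/" + ROOM_ID,
        params={"access_token": ADMIN_TOKEN},
        json={"purge_up_to_ts": 1514764800000},
    )
    resp.raise_for_status()
    purge_id = resp.json()["purge_id"]

    # Poll the status endpoint until the purge is no longer active.
    while True:
        status = requests.get(
            HOMESERVER + "/_matrix/client/r0/admin/purge_history_status/" + purge_id,
            params={"access_token": ADMIN_TOKEN},
        ).json()["status"]
        if status != "active":
            break
        time.sleep(5)

    print("purge finished with status: " + status)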


@@ -0,0 +1,73 @@
Query Account
=============
This API returns information about a specific user account.
The api is::
GET /_matrix/client/r0/admin/whois/<user_id>
including an ``access_token`` of a server admin.
It returns a JSON body like the following:
.. code:: json
{
"user_id": "<user_id>",
"devices": {
"": {
"sessions": [
{
"connections": [
{
"ip": "1.2.3.4",
"last_seen": 1417222374433,
"user_agent": "Mozilla/5.0 ..."
},
{
"ip": "1.2.3.10",
"last_seen": 1417222374500,
"user_agent": "Dalvik/2.1.0 ..."
}
]
}
]
}
}
}
``last_seen`` is measured in milliseconds since the Unix epoch.
Deactivate Account
==================
This API deactivates an account. It removes active access tokens, resets the
password, and deletes third-party IDs (to prevent the user requesting a
password reset).
The api is::
POST /_matrix/client/r0/admin/deactivate/<user_id>
including an ``access_token`` of a server admin, and an empty request body.
Reset password
==============
Changes the password of another user.
The api is::
POST /_matrix/client/r0/admin/reset_password/<user_id>
with a body of:
.. code:: json
{
"new_password": "<secret>"
}
including an ``access_token`` of a server admin.
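As an illustrative sketch only (the homeserver URL, user ID, password and admin
token are placeholders), the password reset can be driven from Python with
``requests``:

.. code:: python

    import requests

    HOMESERVER = "https://matrix.example.com"   # placeholder
    USER_ID = "@alice:example.com"              # placeholder
    ADMIN_TOKEN = "MDAx...admin_access_token"   # placeholder

    resp = requests.post(
        HOMESERVER + "/_matrix/client/r0/admin/reset_password/" + USER_ID,
        params={"access_token": ADMIN_TOKEN},
        json={"new_password": "correct horse battery staple"},
    )
    resp.raise_for_status()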


@@ -1,26 +1,13 @@
Basically, PEP8
- Everything should comply with PEP8. Code should pass
``pep8 --max-line-length=100`` without any warnings.
- NEVER tabs. 4 spaces to indent.
- Max line width: 79 chars (with flexibility to overflow by a "few chars" if
the overflowing content is not semantically significant and avoids an
explosion of vertical whitespace).
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes.
- Use parentheses instead of '\\' for line continuation where ever possible
(which is pretty much everywhere)
- There should be max a single new line between:
- statements
- functions in a class
- There should be two new lines between:
- definitions in a module (e.g., between different classes)
- There should be spaces where spaces should be and not where there shouldn't be:
- a single space after a comma
- a single space before and after for '=' when used as assignment
- no spaces before and after for '=' for default values and keyword arguments.
- Indenting must follow PEP8; either hanging indent or multiline-visual indent
depending on the size and shape of the arguments and what makes more sense to
the author. In other words, both this::
- **Indenting**:
- NEVER tabs. 4 spaces to indent.
- follow PEP8; either hanging indent or multiline-visual indent depending
on the size and shape of the arguments and what makes more sense to the
author. In other words, both this::
print("I am a fish %s" % "moo")
@@ -33,20 +20,100 @@ Basically, PEP8
print(
"I am a fish %s" %
"moo"
"moo",
)
...are valid, although given each one takes up 2x more vertical space than
the previous, it's up to the author's discretion as to which layout makes most
sense for their function invocation. (e.g. if they want to add comments
per-argument, or put expressions in the arguments, or group related arguments
together, or want to deliberately extend or preserve vertical/horizontal
space)
the previous, it's up to the author's discretion as to which layout makes
most sense for their function invocation. (e.g. if they want to add
comments per-argument, or put expressions in the arguments, or group
related arguments together, or want to deliberately extend or preserve
vertical/horizontal space)
Comments should follow the `google code style <http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with
`sphinx <http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
in the sphinx documentation.
- **Line length**:
Code should pass ``pep8 --max-line-length=100`` without any warnings.
Max line length is 79 chars (with flexibility to overflow by a "few chars" if
the overflowing content is not semantically significant and avoids an
explosion of vertical whitespace).
Use parentheses instead of ``\`` for line continuation where ever possible
(which is pretty much everywhere).
- **Naming**:
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
- **Blank lines**:
- There should be max a single new line between:
- statements
- functions in a class
- There should be two new lines between:
- definitions in a module (e.g., between different classes)
- **Whitespace**:
There should be spaces where spaces should be and not where there shouldn't
be:
- a single space after a comma
- a single space before and after for '=' when used as assignment
- no spaces before and after for '=' for default values and keyword arguments.
- **Comments**: should follow the `google code style
<http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with `sphinx
<http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples
<http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
in the sphinx documentation.
- **Imports**:
- Prefer to import classes and functions than packages or modules.
Example::
from synapse.types import UserID
...
user_id = UserID(local, server)
is preferred over::
from synapse import types
...
user_id = types.UserID(local, server)
(or any other variant).
This goes against the advice in the Google style guide, but it means that
errors in the name are caught early (at import time).
- Multiple imports from the same package can be combined onto one line::
from synapse.types import GroupID, RoomID, UserID
An effort should be made to keep the individual imports in alphabetical
order.
If the list becomes long, wrap it with parentheses and split it over
multiple lines.
- As per `PEP-8 <https://www.python.org/dev/peps/pep-0008/#imports>`_,
imports should be grouped in the following order, with a blank line between
each group:
1. standard library imports
2. related third party imports
3. local application/library specific imports
- Imports within each group should be sorted alphabetically by module name.
- Avoid wildcard imports (``from synapse.types import *``) and relative
imports (``from .types import UserID``).


@@ -279,9 +279,9 @@ Obviously that option means that the operations done in
that might be fixed by setting a different logcontext via a ``with
LoggingContext(...)`` in ``background_operation``).
The second option is to use ``logcontext.preserve_fn``, which wraps a function
so that it doesn't reset the logcontext even when it returns an incomplete
deferred, and adds a callback to the returned deferred to reset the
The second option is to use ``logcontext.run_in_background``, which wraps a
function so that it doesn't reset the logcontext even when it returns an
incomplete deferred, and adds a callback to the returned deferred to reset the
logcontext. In other words, it turns a function that follows the Synapse rules
about logcontexts and Deferreds into one which behaves more like an external
function — the opposite operation to that described in the previous section.
@@ -293,15 +293,11 @@ It can be used like this:
def do_request_handling():
yield foreground_operation()
logcontext.preserve_fn(background_operation)()
logcontext.run_in_background(background_operation)
# this will now be logged against the request context
logger.debug("Request handling complete")
XXX: I think ``preserve_context_over_fn`` is supposed to do the first option,
but the fact that it does ``preserve_context_over_deferred`` on its results
means that its use is fraught with difficulty.
Passing synapse deferreds into third-party functions
----------------------------------------------------


@@ -21,19 +21,65 @@ How to monitor Synapse metrics using Prometheus
3. Add a prometheus target for synapse.
It needs to set the ``metrics_path`` to a non-default value::
It needs to set the ``metrics_path`` to a non-default value (under ``scrape_configs``)::
- job_name: "synapse"
metrics_path: "/_synapse/metrics"
static_configs:
- targets:
"my.server.here:9092"
- targets: ["my.server.here:9092"]
If your prometheus is older than 1.5.2, you will need to replace
``static_configs`` in the above with ``target_groups``.
Restart prometheus.
Block and response metrics renamed for 0.27.0
---------------------------------------------
Synapse 0.27.0 begins the process of rationalising the duplicate ``*:count``
metrics reported for the resource tracking for code blocks and HTTP requests.
At the same time, the corresponding ``*:total`` metrics are being renamed, as
the ``:total`` suffix no longer makes sense in the absence of a corresponding
``:count`` metric.
To enable a graceful migration path, this release just adds new names for the
metrics being renamed. A future release will remove the old ones.
The following table shows the new metrics, and the old metrics which they are
replacing.
==================================================== ===================================================
New name Old name
==================================================== ===================================================
synapse_util_metrics_block_count synapse_util_metrics_block_timer:count
synapse_util_metrics_block_count synapse_util_metrics_block_ru_utime:count
synapse_util_metrics_block_count synapse_util_metrics_block_ru_stime:count
synapse_util_metrics_block_count synapse_util_metrics_block_db_txn_count:count
synapse_util_metrics_block_count synapse_util_metrics_block_db_txn_duration:count
synapse_util_metrics_block_time_seconds synapse_util_metrics_block_timer:total
synapse_util_metrics_block_ru_utime_seconds synapse_util_metrics_block_ru_utime:total
synapse_util_metrics_block_ru_stime_seconds synapse_util_metrics_block_ru_stime:total
synapse_util_metrics_block_db_txn_count synapse_util_metrics_block_db_txn_count:total
synapse_util_metrics_block_db_txn_duration_seconds synapse_util_metrics_block_db_txn_duration:total
synapse_http_server_response_count synapse_http_server_requests
synapse_http_server_response_count synapse_http_server_response_time:count
synapse_http_server_response_count synapse_http_server_response_ru_utime:count
synapse_http_server_response_count synapse_http_server_response_ru_stime:count
synapse_http_server_response_count synapse_http_server_response_db_txn_count:count
synapse_http_server_response_count synapse_http_server_response_db_txn_duration:count
synapse_http_server_response_time_seconds synapse_http_server_response_time:total
synapse_http_server_response_ru_utime_seconds synapse_http_server_response_ru_utime:total
synapse_http_server_response_ru_stime_seconds synapse_http_server_response_ru_stime:total
synapse_http_server_response_db_txn_count synapse_http_server_response_db_txn_count:total
synapse_http_server_response_db_txn_duration_seconds synapse_http_server_response_db_txn_duration:total
==================================================== ===================================================
Standard Metric Names
---------------------
@@ -43,7 +89,7 @@ have been changed to seconds, from milliseconds.
================================== =============================
New name Old name
---------------------------------- -----------------------------
================================== =============================
process_cpu_user_seconds_total process_resource_utime / 1000
process_cpu_system_seconds_total process_resource_stime / 1000
process_open_fds (no 'type' label) process_fds
@@ -53,7 +99,7 @@ The python-specific counts of garbage collector performance have been renamed.
=========================== ======================
New name Old name
--------------------------- ----------------------
=========================== ======================
python_gc_time reactor_gc_time
python_gc_unreachable_total reactor_gc_unreachable
python_gc_counts reactor_gc_counts
@@ -63,7 +109,7 @@ The twisted-specific reactor metrics have been renamed.
==================================== =====================
New name Old name
------------------------------------ ---------------------
==================================== =====================
python_twisted_reactor_pending_calls reactor_pending_calls
python_twisted_reactor_tick_time reactor_tick_time
==================================== =====================


@@ -0,0 +1,99 @@
Password auth provider modules
==============================
Password auth providers offer a way for server administrators to integrate
their Synapse installation with an existing authentication system.
A password auth provider is a Python class which is dynamically loaded into
Synapse, and provides a number of methods by which it can integrate with the
authentication system.
This document serves as a reference for those looking to implement their own
password auth providers.
Required methods
----------------
Password auth provider classes must provide the following methods:
*class* ``SomeProvider.parse_config``\(*config*)
This method is passed the ``config`` object for this module from the
homeserver configuration file.
It should perform any appropriate sanity checks on the provided
configuration, and return an object which is then passed into ``__init__``.
*class* ``SomeProvider``\(*config*, *account_handler*)
The constructor is passed the config object returned by ``parse_config``,
and a ``synapse.module_api.ModuleApi`` object which allows the
password provider to check if accounts exist and/or create new ones.
Optional methods
----------------
Password auth provider classes may optionally provide the following methods.
*class* ``SomeProvider.get_db_schema_files``\()
This method, if implemented, should return an Iterable of ``(name,
stream)`` pairs of database schema files. Each file is applied in turn at
initialisation, and a record is then made in the database so that it is
not re-applied on the next start.
``someprovider.get_supported_login_types``\()
This method, if implemented, should return a ``dict`` mapping from a login
type identifier (such as ``m.login.password``) to an iterable giving the
fields which must be provided by the user in the submission to the
``/login`` api. These fields are passed in the ``login_dict`` dictionary
to ``check_auth``.
For example, if a password auth provider wants to implement a custom login
type of ``com.example.custom_login``, where the client is expected to pass
the fields ``secret1`` and ``secret2``, the provider should implement this
method and return the following dict::
{"com.example.custom_login": ("secret1", "secret2")}
``someprovider.check_auth``\(*username*, *login_type*, *login_dict*)
This method is the one that does the real work. If implemented, it will be
called for each login attempt where the login type matches one of the keys
returned by ``get_supported_login_types``.
It is passed the (possibly UNqualified) ``user`` provided by the client,
the login type, and a dictionary of login secrets passed by the client.
The method should return a Twisted ``Deferred`` object, which resolves to
the canonical ``@localpart:domain`` user id if authentication is successful,
and ``None`` if not.
Alternatively, the ``Deferred`` can resolve to a ``(str, func)`` tuple, in
which case the second field is a callback which will be called with the
result from the ``/login`` call (including ``access_token``, ``device_id``,
etc.)
``someprovider.check_password``\(*user_id*, *password*)
This method provides a simpler interface than ``get_supported_login_types``
and ``check_auth`` for password auth providers that just want to provide a
mechanism for validating ``m.login.password`` logins.
If implemented, it will be called to check logins with an
``m.login.password`` login type. It is passed a qualified
``@localpart:domain`` user id, and the password provided by the user.
The method should return a Twisted ``Deferred`` object, which resolves to
``True`` if authentication is successful, and ``False`` if not.
``someprovider.on_logged_out``\(*user_id*, *device_id*, *access_token*)
This method, if implemented, is called when a user logs out. It is passed
the qualified user ID, the ID of the deactivated device (if any: access
tokens are occasionally created without an associated device ID), and the
(now deactivated) access token.
It may return a Twisted ``Deferred`` object; the logout request will wait
for the deferred to complete but the result is ignored.
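To make the shape of the interface concrete, here is a minimal illustrative
provider (not a module shipped with synapse; the class name and config key are
invented for this example) which checks every login against a single shared
password from its config::

    from twisted.internet import defer


    class SharedSecretAuthProvider(object):
        """Illustrative only: accepts one shared password for every user."""

        def __init__(self, config, account_handler):
            # account_handler is a synapse.module_api.ModuleApi instance.
            self._account_handler = account_handler
            self._password = config["shared_password"]

        @staticmethod
        def parse_config(config):
            # Sanity-check the config from homeserver.yaml; the returned object
            # is passed to __init__ unchanged.
            if "shared_password" not in config:
                raise Exception("shared_password is required")
            return config

        def check_password(self, user_id, password):
            # user_id is a fully-qualified @localpart:domain id. Return a
            # Deferred resolving to True on success, False otherwise.
            return defer.succeed(password == self._password)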


@@ -1,6 +1,8 @@
Using Postgres
--------------
Postgres version 9.4 or later is known to work.
Set up database
===============
@@ -112,9 +114,9 @@ script one last time, e.g. if the SQLite database is at ``homeserver.db``
run::
synapse_port_db --sqlite-database homeserver.db \
--postgres-config database_config.yaml
--postgres-config homeserver-postgres.yaml
Once that has completed, change the synapse config to point at the PostgreSQL
database configuration file using the ``database_config`` parameter (see
`Synapse Config`_) and restart synapse. Synapse should now be running against
database configuration file ``homeserver-postgres.yaml`` (i.e. rename it to
``homeserver.yaml``) and restart synapse. Synapse should now be running against
PostgreSQL.


@@ -26,28 +26,10 @@ expose the append-only log to the readers should be fairly minimal.
Architecture
------------
The Replication API
~~~~~~~~~~~~~~~~~~~
The Replication Protocol
~~~~~~~~~~~~~~~~~~~~~~~~
Synapse will optionally expose a long poll HTTP API for extracting updates. The
API will have a similar shape to /sync in that clients provide tokens
indicating where in the log they have reached and a timeout. The synapse server
then either responds with updates immediately if it already has updates or it
waits until the timeout for more updates. If the timeout expires and nothing
happened then the server returns an empty response.
However unlike the /sync API this replication API is returning synapse specific
data rather than trying to implement a matrix specification. The replication
results are returned as arrays of rows where the rows are mostly lifted
directly from the database. This avoids unnecessary JSON parsing on the server
and hopefully avoids an impedance mismatch between the data returned and the
required updates to the datastore.
This does not replicate all the database tables as many of the database tables
are indexes that can be recovered from the contents of other tables.
The format and parameters for the api are documented in
``synapse/replication/resource.py``.
See ``tcp_replication.rst``
The Slaved DataStore


@@ -50,7 +50,7 @@ master_doc = 'index'
# General information about the project.
project = u'Synapse'
copyright = u'2014, TNG'
copyright = u'Copyright 2014-2017 OpenMarket Ltd, 2017 Vector Creations Ltd, 2017 New Vector Ltd'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the

docs/tcp_replication.rst

@@ -0,0 +1,223 @@
TCP Replication
===============
Motivation
----------
Previously the workers used an HTTP long poll mechanism to get updates from the
master, which had the problem of causing a lot of duplicate work on the server.
This TCP protocol replaces those APIs with the aim of increased efficiency.
Overview
--------
The protocol is based on fire and forget, line based commands. An example flow
would be (where '>' indicates master to worker and '<' worker to master flows)::
> SERVER example.com
< REPLICATE events 53
> RDATA events 54 ["$foo1:bar.com", ...]
> RDATA events 55 ["$foo4:bar.com", ...]
The example shows the server accepting a new connection and sending its identity
with the ``SERVER`` command, followed by the client asking to subscribe to the
``events`` stream from the token ``53``. The server then periodically sends ``RDATA``
commands which have the format ``RDATA <stream_name> <token> <row>``, where the
format of ``<row>`` is defined by the individual streams.
Error reporting happens by either the client or server sending an `ERROR`
command, and usually the connection will be closed.
Since the protocol is a simple line-based one, it's possible to connect to the
server manually using a tool like netcat (see the sketch after the list below).
A few things should be noted when using the protocol manually:
* When subscribing to a stream using ``REPLICATE``, the special token ``NOW`` can
be used to get all future updates. The special stream name ``ALL`` can be used
with ``NOW`` to subscribe to all available streams.
* The federation stream is only available if federation sending has been
disabled on the main process.
* The server will only time connections out that have sent a ``PING`` command.
If a ping is sent then the connection will be closed if no further commands
are received within 15s. Both the client and server protocol implementations
will send an initial PING on connection and ensure at least one command every
5s is sent (not necessarily ``PING``).
* ``RDATA`` commands *usually* include a numeric token, however if the stream
has multiple rows to replicate per token the server will send multiple
``RDATA`` commands, with all but the last having a token of ``batch``. See
the documentation on ``commands.RdataCommand`` for further details.
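For illustration only, here is a rough Python sketch of such a manual client
(the host and port are placeholders for the master's replication listener); it
subscribes to every stream from the current position and prints whatever the
server sends back::

    import socket
    import time

    HOST, PORT = "127.0.0.1", 9092   # placeholder: the replication listener

    sock = socket.create_connection((HOST, PORT))

    # Send an initial PING (which enables the server's timeouts), a friendly
    # NAME, and a subscription to all streams from now on. A real client would
    # keep sending commands at least every 5s to avoid being timed out.
    for cmd in (
        "PING %d" % int(time.time() * 1000),
        "NAME manual-test-client",
        "REPLICATE ALL NOW",
    ):
        sock.sendall((cmd + "\n").encode("utf-8"))

    # Print the line-based responses (SERVER, POSITION, RDATA, ...).
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            print(line.decode("utf-8"))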
Architecture
------------
The basic structure of the protocol is line based, where the initial word of
each line specifies the command. The rest of the line is parsed based on the
command. For example, the `RDATA` command is defined as::
RDATA <stream_name> <token> <row_json>
(Note that `<row_json>` may contain spaces, but cannot contain newlines.)
Blank lines are ignored.
Keep alives
~~~~~~~~~~~
Both sides are expected to send at least one command every 5s or so, and
should send a ``PING`` command if necessary. If either side does not receive a
command within e.g. 15s then the connection should be closed.
Because the server may be connected to manually using e.g. netcat, the timeouts
aren't enabled until an initial ``PING`` command is seen. Both the client and
server implementations below send a ``PING`` command immediately on connection to
ensure the timeouts are enabled.
This ensures that both sides can quickly realize if the tcp connection has gone
and handle the situation appropriately.
Start up
~~~~~~~~
When a new connection is made, the server:
* Sends a ``SERVER`` command, which includes the identity of the server, allowing
the client to detect if it's connected to the expected server
* Sends a ``PING`` command as above, to enable the client to time out connections
promptly.
The client:
* Sends a ``NAME`` command, allowing the server to associate a human friendly
name with the connection. This is optional.
* Sends a ``PING`` as above
* For each stream the client wishes to subscribe to it sends a ``REPLICATE``
with the stream_name and token it wants to subscribe from.
* On receipt of a ``SERVER`` command, checks that the server name matches the
expected server name.
Error handling
~~~~~~~~~~~~~~
If either side detects an error it can send an ``ERROR`` command and close the
connection.
If the client side loses the connection to the server it should reconnect,
following the steps above.
Congestion
~~~~~~~~~~
If the server sends messages faster than the client can consume them the server
will first buffer a (fairly large) number of commands and then disconnect the
client. This ensures that we don't queue up an unbounded number of commands in
memory and gives us a potential opportunity to squawk loudly. When/if the client
recovers it can reconnect to the server and ask for missed messages.
Reliability
~~~~~~~~~~~
In general the replication stream should be considered an unreliable transport
since e.g. commands are not resent if the connection disappears.
The exception to that are the replication streams, i.e. RDATA commands, since
these include tokens which can be used to restart the stream on connection
errors.
The client should keep track of the token in the last RDATA command received
for each stream so that on reconnection it can start streaming from the correct
place. Note: not all RDATA have valid tokens due to batching. See
``RdataCommand`` for more details.
Example
~~~~~~~
An example interaction is shown below. Each line is prefixed with '>' or '<' to
indicate which side is sending, these are *not* included on the wire::
* connection established *
> SERVER localhost:8823
> PING 1490197665618
< NAME synapse.app.appservice
< PING 1490197665618
< REPLICATE events 1
< REPLICATE backfill 1
< REPLICATE caches 1
> POSITION events 1
> POSITION backfill 1
> POSITION caches 1
> RDATA caches 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
> RDATA events 14 ["$149019767112vOHxz:localhost:8823",
"!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null]
< PING 1490197675618
> ERROR server stopping
* connection closed by server *
The ``POSITION`` command sent by the server is used to set the client's position
without needing to send data with the ``RDATA`` command.
An example of a batched set of ``RDATA`` is::
> RDATA caches batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
> RDATA caches batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
> RDATA caches batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
> RDATA caches 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
In this case the client shouldn't advance its caches token until it sees
the last ``RDATA``.
List of commands
~~~~~~~~~~~~~~~~
The list of valid commands, with which side can send it: server (S) or client (C):
SERVER (S)
Sent at the start to identify which server the client is talking to
RDATA (S)
A single update in a stream
POSITION (S)
The position of the stream has been updated
ERROR (S, C)
There was an error
PING (S, C)
Sent periodically to ensure the connection is still alive
NAME (C)
Sent at the start by client to inform the server who they are
REPLICATE (C)
Asks the server to replicate a given stream
USER_SYNC (C)
A user has started or stopped syncing
FEDERATION_ACK (C)
Acknowledge receipt of some federation data
REMOVE_PUSHER (C)
Inform the server a pusher should be removed
INVALIDATE_CACHE (C)
Inform the server a cache should be invalidated
SYNC (S, C)
Used exclusively in tests
See ``synapse/replication/tcp/commands.py`` for a detailed description and the
format of each command.


@@ -50,14 +50,37 @@ You may be able to setup coturn via your package manager, or set it up manually
pwgen -s 64 1
5. Ensure your firewall allows traffic into the TURN server on
the ports you've configured it to listen on (remember to allow
both TCP and UDP if you've enabled both).
5. Consider your security settings. TURN lets users request a relay
which will connect to arbitrary IP addresses and ports. At the least
we recommend:
6. If you've configured coturn to support TLS/DTLS, generate or
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
no-tcp-relay
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
# special case the turn server itself so that client->TURN->TURN->client flows work
allowed-peer-ip=10.0.0.1
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
total-quota=1200
Ideally coturn should refuse to relay traffic which isn't SRTP;
see https://github.com/matrix-org/synapse/issues/2009
6. Ensure your firewall allows traffic into the TURN server on
the ports you've configured it to listen on (remember to allow
both TCP and UDP TURN traffic)
7. If you've configured coturn to support TLS/DTLS, generate or
import your private key and certificate.
7. Start the turn server::
8. Start the turn server::
bin/turnserver -o
@@ -83,12 +106,19 @@ Your home server configuration file needs the following extra keys:
to refresh credentials. The TURN REST API specification recommends
one day (86400000).
4. "turn_allow_guests": Whether to allow guest users to use the TURN
server. This is enabled by default, as otherwise VoIP will not
work reliably for guests. However, it does introduce a security risk
as it lets guests connect to arbitrary endpoints without having gone
through a CAPTCHA or similar to register a real account.
As an example, here is the relevant section of the config file for
matrix.org::
turn_uris: [ "turn:turn.matrix.org:3478?transport=udp", "turn:turn.matrix.org:3478?transport=tcp" ]
turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons
turn_user_lifetime: 86400000
turn_allow_guests: True
Now, restart synapse::


@@ -56,6 +56,7 @@ As a first cut, let's do #2 and have the receiver hit the API to calculate its o
API
---
```
GET /_matrix/media/r0/preview_url?url=http://wherever.com
200 OK
{
@@ -66,6 +67,7 @@ GET /_matrix/media/r0/preview_url?url=http://wherever.com
"og:description" : "“Synapse 0.12 is out! Lots of polishing, performance &amp;amp; bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”"
"og:site_name" : "Twitter"
}
```
* Downloads the URL
* If HTML, just stores it in RAM and parses it for OG meta tags

docs/user_directory.md

@@ -0,0 +1,17 @@
User Directory API Implementation
=================================
The user directory is currently maintained based on the 'visible' users
on this particular server - i.e. ones which your account shares a room with, or
who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
quickest solution to fix it is:
```
UPDATE user_directory_stream_pos SET stream_id = NULL;
```
and restart synapse, which should then start a background task to
flush the current tables and regenerate the directory.
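For illustration only (assuming a PostgreSQL-backed deployment and the
`psycopg2` driver; the connection details are placeholders), the reset can be
scripted as:

```python
import psycopg2

# Placeholders: use the same connection settings as your homeserver's
# database configuration.
conn = psycopg2.connect(dbname="synapse", user="synapse_user",
                        password="secret", host="localhost")
with conn, conn.cursor() as cur:
    # Clearing the stream position makes synapse rebuild the directory
    # in the background after the next restart.
    cur.execute("UPDATE user_directory_stream_pos SET stream_id = NULL;")
conn.close()
```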


@@ -1,63 +1,90 @@
Scaling synapse via workers
---------------------------
===========================
Synapse has experimental support for splitting out functionality into
multiple separate python processes, helping greatly with scalability. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.
All of the below is highly experimental and subject to change as Synapse evolves,
but documenting it here to help folks needing highly scalable Synapses similar
to the one running matrix.org!
All processes continue to share the same database instance, and as such, workers
only work with postgres based synapse deployments (sharing a single sqlite
across multiple processes is a recipe for disaster, plus you should be using
postgres anyway if you care about scalability).
The workers communicate with the master synapse process via a synapse-specific
HTTP protocol called 'replication' - analogous to MySQL or Postgres style
TCP protocol called 'replication' - analogous to MySQL or Postgres style
database replication; feeding a stream of relevant data to the workers so they
can be kept in sync with the main synapse process and database state.
To enable workers, you need to add a replication listener to the master synapse, e.g.::
Configuration
-------------
To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. Note that this includes
requests made to the federation port. The caveats regarding running a
reverse-proxy on the federation port still apply (see
https://github.com/matrix-org/synapse/blob/master/README.rst#reverse-proxying-the-federation-port).
To enable workers, you need to add two replication listeners to the master
synapse, e.g.::
listeners:
# The TCP replication port
- port: 9092
bind_address: '127.0.0.1'
type: replication
# The HTTP replication port
- port: 9093
bind_address: '127.0.0.1'
type: http
tls: false
x_forwarded: false
resources:
- names: [replication]
compress: false
Under **no circumstances** should this replication API listener be exposed to the
public internet; it currently implements no authentication whatsoever and is
unencrypted HTTP.
Under **no circumstances** should these replication API listeners be exposed to
the public internet; it currently implements no authentication whatsoever and is
unencrypted.
You then create a set of configs for the various worker processes. These should
be worker configuration files, stored in a dedicated subdirectory, to allow
synctl to manipulate them.
(Roughly, the TCP port is used for streaming data from the master to the
workers, and the HTTP port for the workers to send data to the main
synapse process.)
The current available worker applications are:
* synapse.app.pusher - handles sending push notifications to sygnal and email
* synapse.app.synchrotron - handles /sync endpoints. Can scale horizontally across multiple instances.
* synapse.app.appservice - handles output traffic to Application Services
* synapse.app.federation_reader - handles receiving federation traffic (including public_rooms API)
* synapse.app.media_repository - handles the media repository.
* synapse.app.client_reader - handles client API endpoints like /publicRooms
You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
subdirectory, to allow synctl to manipulate them. An additional configuration
for the master synapse process will need to be created because the process will
not be started automatically. That configuration should look like this::
    worker_app: synapse.app.homeserver
    daemonize: true
Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
e.g. the HTTP listener that it provides (if any); logging configuration; etc.
You should minimise the number of overrides though to maintain a usable config.
You must specify the type of worker application (``worker_app``). The currently
available worker applications are listed below. You must also specify the
replication endpoints that it's talking to on the main synapse process.
``worker_replication_host`` should specify the host of the main synapse,
``worker_replication_port`` should point to the TCP replication listener port and
``worker_replication_http_port`` should point to the HTTP replication port.
Currently, only the ``event_creator`` worker requires specifying
``worker_replication_http_port``.
For instance::
    worker_app: synapse.app.synchrotron
    # The replication listener on the synapse to talk to.
    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092
    worker_replication_http_port: 9093
    worker_listeners:
     - type: http
@@ -71,11 +98,11 @@ For instance::
    worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
...is a full configuration for a synchrotron worker instance, which will expose a
plain HTTP ``/sync`` endpoint on port 8083 separately from the ``/sync`` endpoint provided
by the main synapse.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (``localhost:8083`` in the above example).
Finally, to actually run your worker-based synapse, you must pass synctl the -a
commandline option to tell it to operate on all the worker configurations found
@@ -92,7 +119,127 @@ To manipulate a specific worker, you pass the -w option to synctl::
    synctl -w $CONFIG/workers/synchrotron.yaml restart
All of the above is highly experimental and subject to change as Synapse evolves,
but it is documented here to help folks who need highly scalable Synapses similar
to the one running matrix.org!
Available worker applications
-----------------------------
``synapse.app.pusher``
~~~~~~~~~~~~~~~~~~~~~~
Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set ``start_pushers: False`` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.synchrotron``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The synchrotron handles ``sync`` requests from clients. In particular, it can
handle REST endpoints matching the following regular expressions::
    ^/_matrix/client/(v2_alpha|r0)/sync$
    ^/_matrix/client/(api/v1|v2_alpha|r0)/events$
    ^/_matrix/client/(api/v1|r0)/initialSync$
    ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
The above endpoints should all be routed to the synchrotron worker by the
reverse-proxy configuration.
It is possible to run multiple instances of the synchrotron to scale
horizontally. In this case the reverse-proxy should be configured to
load-balance across the instances, though it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting
a userid from the access token is currently left as an exercise for the reader.
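Purely for illustration (Synapse does not provide this), one simple way to get
that stickiness without decoding the token at all is to hash the access token
and use the hash to pick an instance; the addresses and the function below are
made up::

    import hashlib

    # Hypothetical synchrotron instances; use your real worker addresses.
    SYNCHROTRONS = ["127.0.0.1:8083", "127.0.0.1:8093"]

    def pick_synchrotron(access_token):
        # The same token always hashes to the same instance, so a given user's
        # /sync requests stay on one synchrotron without decoding the token.
        digest = hashlib.sha256(access_token.encode("utf-8")).digest()
        return SYNCHROTRONS[digest[0] % len(SYNCHROTRONS)]
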
``synapse.app.appservice``
~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set ``notify_appservices: False`` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.federation_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles a subset of federation endpoints. In particular, it can handle REST
endpoints matching the following regular expressions::
    ^/_matrix/federation/v1/event/
    ^/_matrix/federation/v1/state/
    ^/_matrix/federation/v1/state_ids/
    ^/_matrix/federation/v1/backfill/
    ^/_matrix/federation/v1/get_missing_events/
    ^/_matrix/federation/v1/publicRooms
The above endpoints should all be routed to the federation_reader worker by the
reverse-proxy configuration.
``synapse.app.federation_sender``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set ``send_federation: False`` in the
shared configuration file to stop the main synapse sending this traffic.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.media_repository``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles the media repository. It can handle all endpoints starting with::
    /_matrix/media/
You should also set ``enable_media_repo: False`` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.client_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles client API endpoints. It can handle REST endpoints matching the
following regular expressions::
    ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
``synapse.app.user_dir``
~~~~~~~~~~~~~~~~~~~~~~~~
Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions::
    ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
``synapse.app.frontend_proxy``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions::
    ^/_matrix/client/(api/v1|r0|unstable)/keys/upload
It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
file. For example::
    worker_main_http_uri: http://127.0.0.1:8008
``synapse.app.event_creator``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles some event creation. It can handle REST endpoints matching::
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
    ^/_matrix/client/(api/v1|r0|unstable)/join/
It will create events locally and then send them on to the main synapse
instance to be persisted and handled.

@@ -17,6 +17,7 @@ export HAPROXY_BIN=/home/haproxy/haproxy-1.6.11/haproxy
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--haproxy \

@@ -15,5 +15,6 @@ export SYNAPSE_CACHE_FACTOR=1
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \

@@ -14,4 +14,5 @@ export SYNAPSE_CACHE_FACTOR=1
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

@@ -12,4 +12,5 @@ export SYNAPSE_CACHE_FACTOR=1
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

@@ -1,5 +1,7 @@
#! /bin/bash
set -eux
cd "`dirname $0`/.."
TOX_DIR=$WORKSPACE/.tox
@@ -14,7 +16,20 @@ fi
tox -e py27 --notest -v
TOX_BIN=$TOX_DIR/py27/bin
$TOX_BIN/pip install setuptools
# cryptography 2.2 requires setuptools >= 18.5.
#
# older versions of virtualenv (?) give us a virtualenv with the same version
# of setuptools as is installed on the system python (and tox runs virtualenv
# under python3, so we get the version of setuptools that is installed on that).
#
# anyway, make sure that we have a recent enough setuptools.
$TOX_BIN/pip install 'setuptools>=18.5'
# we also need a semi-recent version of pip, because old ones fail to install
# the "enum34" dependency of cryptography.
$TOX_BIN/pip install 'pip>=10'
{ python synapse/python_dependencies.py
echo lxml psycopg2
} | xargs $TOX_BIN/pip install

scripts-dev/federation_client.py (Normal file → Executable file)
@@ -1,10 +1,30 @@
#!/usr/bin/env python
#
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import argparse
import nacl.signing
import json
import base64
import requests
import sys
import srvlookup
import yaml
def encode_base64(input_bytes):
"""Encode bytes as a base64 string without any padding."""
@@ -103,15 +123,25 @@ def lookup(destination, path):
except:
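# no usable SRV record (or the lookup failed): fall back to the default federation port 8448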
return "https://%s:%d%s" % (destination, 8448, path)
def get_json(origin_name, origin_key, destination, path):
request_json = {
"method": "GET",
def request_json(method, origin_name, origin_key, destination, path, content):
if method is None:
if content is None:
method = "GET"
else:
method = "POST"
json_to_sign = {
"method": method,
"uri": path,
"origin": origin_name,
"destination": destination,
}
signed_json = sign_json(request_json, origin_key, origin_name)
if content is not None:
json_to_sign["content"] = json.loads(content)
signed_json = sign_json(json_to_sign, origin_key, origin_name)
authorization_headers = []
@@ -120,30 +150,97 @@ def get_json(origin_name, origin_key, destination, path):
origin_name, key, sig,
)
authorization_headers.append(bytes(header))
sys.stderr.write(header)
sys.stderr.write("\n")
print ("Authorization: %s" % header, file=sys.stderr)
result = requests.get(
lookup(destination, path),
dest = lookup(destination, path)
print ("Requesting %s" % dest, file=sys.stderr)
result = requests.request(
method=method,
url=dest,
headers={"Authorization": authorization_headers[0]},
verify=False,
data=content,
)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
return result.json()
def main():
origin_name, keyfile, destination, path = sys.argv[1:]
parser = argparse.ArgumentParser(
description=
"Signs and sends a federation request to a matrix homeserver",
)
with open(keyfile) as f:
parser.add_argument(
"-N", "--server-name",
help="Name to give as the local homeserver. If unspecified, will be "
"read from the config file.",
)
parser.add_argument(
"-k", "--signing-key-path",
help="Path to the file containing the private ed25519 key to sign the "
"request with.",
)
parser.add_argument(
"-c", "--config",
default="homeserver.yaml",
help="Path to server config file. Ignored if --server-name and "
"--signing-key-path are both given.",
)
parser.add_argument(
"-d", "--destination",
default="matrix.org",
help="name of the remote homeserver. We will do SRV lookups and "
"connect appropriately.",
)
parser.add_argument(
"-X", "--method",
help="HTTP method to use for the request. Defaults to GET if --data is"
"unspecified, POST if it is."
)
parser.add_argument(
"--body",
help="Data to send as the body of the HTTP request"
)
parser.add_argument(
"path",
help="request path. We will add '/_matrix/federation/v1/' to this."
)
args = parser.parse_args()
if not args.server_name or not args.signing_key_path:
read_args_from_config(args)
with open(args.signing_key_path) as f:
key = read_signing_keys(f)[0]
result = get_json(
origin_name, key, destination, "/_matrix/federation/v1/" + path
result = request_json(
args.method,
args.server_name, key, args.destination,
"/_matrix/federation/v1/" + args.path,
content=args.body,
)
json.dump(result, sys.stdout)
print ""
print ("")
def read_args_from_config(args):
with open(args.config, 'r') as fh:
config = yaml.safe_load(fh)
if not args.server_name:
args.server_name = config['server_name']
if not args.signing_key_path:
args.signing_key_path = config['signing_key_path']
if __name__ == "__main__":
main()

@@ -9,16 +9,39 @@
ROOMID="$1"
sqlite3 homeserver.db <<EOF
DELETE FROM context_depth WHERE context = '$ROOMID';
DELETE FROM current_state WHERE context = '$ROOMID';
DELETE FROM feedback WHERE room_id = '$ROOMID';
DELETE FROM messages WHERE room_id = '$ROOMID';
DELETE FROM pdu_backward_extremities WHERE context = '$ROOMID';
DELETE FROM pdu_edges WHERE context = '$ROOMID';
DELETE FROM pdu_forward_extremities WHERE context = '$ROOMID';
DELETE FROM pdus WHERE context = '$ROOMID';
DELETE FROM room_data WHERE room_id = '$ROOMID';
DELETE FROM event_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_backward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_edges WHERE room_id = '$ROOMID';
DELETE FROM room_depth WHERE room_id = '$ROOMID';
DELETE FROM state_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM events WHERE room_id = '$ROOMID';
DELETE FROM event_json WHERE room_id = '$ROOMID';
DELETE FROM state_events WHERE room_id = '$ROOMID';
DELETE FROM current_state_events WHERE room_id = '$ROOMID';
DELETE FROM room_memberships WHERE room_id = '$ROOMID';
DELETE FROM feedback WHERE room_id = '$ROOMID';
DELETE FROM topics WHERE room_id = '$ROOMID';
DELETE FROM room_names WHERE room_id = '$ROOMID';
DELETE FROM rooms WHERE room_id = '$ROOMID';
DELETE FROM state_pdus WHERE context = '$ROOMID';
DELETE FROM room_hosts WHERE room_id = '$ROOMID';
DELETE FROM room_aliases WHERE room_id = '$ROOMID';
DELETE FROM state_groups WHERE room_id = '$ROOMID';
DELETE FROM state_groups_state WHERE room_id = '$ROOMID';
DELETE FROM receipts_graph WHERE room_id = '$ROOMID';
DELETE FROM receipts_linearized WHERE room_id = '$ROOMID';
DELETE FROM event_search_content WHERE c1room_id = '$ROOMID';
DELETE FROM guest_access WHERE room_id = '$ROOMID';
DELETE FROM history_visibility WHERE room_id = '$ROOMID';
DELETE FROM room_tags WHERE room_id = '$ROOMID';
DELETE FROM room_tags_revisions WHERE room_id = '$ROOMID';
DELETE FROM room_account_data WHERE room_id = '$ROOMID';
DELETE FROM event_push_actions WHERE room_id = '$ROOMID';
DELETE FROM local_invites WHERE room_id = '$ROOMID';
DELETE FROM pusher_throttle WHERE room_id = '$ROOMID';
DELETE FROM event_reports WHERE room_id = '$ROOMID';
DELETE FROM public_room_list_stream WHERE room_id = '$ROOMID';
DELETE FROM stream_ordering_to_exterm WHERE room_id = '$ROOMID';
DELETE FROM event_auth WHERE room_id = '$ROOMID';
DELETE FROM appservice_room_list WHERE room_id = '$ROOMID';
VACUUM;
EOF

@@ -0,0 +1,133 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Moves a list of remote media from one media store to another.
The input should be a list of media files to be moved, one per line. Each line
should be formatted::
<origin server>|<file id>
This can be extracted from postgres with::
psql --tuples-only -A -c "select media_origin, filesystem_id from
matrix.remote_media_cache where ..."
To use, pipe the above into::
PYTHON_PATH=. ./scripts/move_remote_media_to_new_store.py <source repo> <dest repo>
"""
from __future__ import print_function
import argparse
import logging
import sys
import os
import shutil
from synapse.rest.media.v1.filepath import MediaFilePaths
logger = logging.getLogger()
def main(src_repo, dest_repo):
src_paths = MediaFilePaths(src_repo)
dest_paths = MediaFilePaths(dest_repo)
for line in sys.stdin:
line = line.strip()
parts = line.split('|')
if len(parts) != 2:
print("Unable to parse input line %s" % line, file=sys.stderr)
exit(1)
move_media(parts[0], parts[1], src_paths, dest_paths)
def move_media(origin_server, file_id, src_paths, dest_paths):
"""Move the given file, and any thumbnails, to the dest repo
Args:
origin_server (str):
file_id (str):
src_paths (MediaFilePaths):
dest_paths (MediaFilePaths):
"""
logger.info("%s/%s", origin_server, file_id)
# check that the original exists
original_file = src_paths.remote_media_filepath(origin_server, file_id)
if not os.path.exists(original_file):
logger.warn(
"Original for %s/%s (%s) does not exist",
origin_server, file_id, original_file,
)
else:
mkdir_and_move(
original_file,
dest_paths.remote_media_filepath(origin_server, file_id),
)
# now look for thumbnails
original_thumb_dir = src_paths.remote_media_thumbnail_dir(
origin_server, file_id,
)
if not os.path.exists(original_thumb_dir):
return
mkdir_and_move(
original_thumb_dir,
dest_paths.remote_media_thumbnail_dir(origin_server, file_id)
)
def mkdir_and_move(original_file, dest_file):
dirname = os.path.dirname(dest_file)
if not os.path.exists(dirname):
logger.debug("mkdir %s", dirname)
os.makedirs(dirname)
logger.debug("mv %s %s", original_file, dest_file)
shutil.move(original_file, dest_file)
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class = argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"-v", action='store_true', help='enable debug logging')
parser.add_argument(
"src_repo",
help="Path to source content repo",
)
parser.add_argument(
"dest_repo",
help="Path to source content repo",
)
args = parser.parse_args()
logging_config = {
"level": logging.DEBUG if args.v else logging.INFO,
"format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s"
}
logging.basicConfig(**logging_config)
main(args.src_repo, args.dest_repo)

@@ -1,6 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -29,6 +30,8 @@ import time
import traceback
import yaml
from six import string_types
logger = logging.getLogger("synapse_port_db")
@@ -41,6 +44,15 @@ BOOLEAN_COLUMNS = {
"presence_stream": ["currently_active"],
"public_room_list_stream": ["visibility"],
"device_lists_outbound_pokes": ["sent"],
"users_who_share_rooms": ["share_private"],
"groups": ["is_public"],
"group_rooms": ["is_public"],
"group_users": ["is_public", "is_admin"],
"group_summary_rooms": ["is_public"],
"group_room_categories": ["is_public"],
"group_summary_users": ["is_public"],
"group_roles": ["is_public"],
"local_group_membership": ["is_publicised", "is_admin"],
}
@@ -111,6 +123,7 @@ class Store(object):
_simple_update_one = SQLBaseStore.__dict__["_simple_update_one"]
_simple_update_one_txn = SQLBaseStore.__dict__["_simple_update_one_txn"]
_simple_update_txn = SQLBaseStore.__dict__["_simple_update_txn"]
def runInteraction(self, desc, func, *args, **kwargs):
def r(conn):
@@ -121,7 +134,7 @@ class Store(object):
try:
txn = conn.cursor()
return func(
LoggingTransaction(txn, desc, self.database_engine, []),
LoggingTransaction(txn, desc, self.database_engine, [], []),
*args, **kwargs
)
except self.database_engine.module.DatabaseError as e:
@@ -240,6 +253,12 @@ class Porter(object):
@defer.inlineCallbacks
def handle_table(self, table, postgres_size, table_size, forward_chunk,
backward_chunk):
logger.info(
"Table %s: %i/%i (rows %i-%i) already ported",
table, postgres_size, table_size,
backward_chunk+1, forward_chunk-1,
)
if not table_size:
return
@@ -251,6 +270,25 @@ class Porter(object):
)
return
if table in (
"user_directory", "user_directory_search", "users_who_share_rooms",
"users_in_pubic_room",
):
# We don't port these tables, as they're a faff and we can regenerate
# them anyway.
self.progress.update(table, table_size) # Mark table as done
return
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null)`, as that is
# what synapse expects to be there.
yield self.postgres_store._simple_insert(
table=table,
values={"stream_id": None},
)
self.progress.update(table, table_size) # Mark table as done
return
forward_select = (
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?"
% (table,)
@@ -298,7 +336,7 @@ class Porter(object):
backward_chunk = min(row[0] for row in brows) - 1
rows = frows + brows
self._convert_rows(table, headers, rows)
rows = self._convert_rows(table, headers, rows)
def insert(txn):
self.postgres_store.insert_many_txn(
@@ -356,10 +394,13 @@ class Porter(object):
" VALUES (?,?,?,?,to_tsvector('english', ?),?,?)"
)
rows_dict = [
dict(zip(headers, row))
for row in rows
]
rows_dict = []
for row in rows:
d = dict(zip(headers, row))
if "\0" in d['value']:
logger.warn('dropping search row %s', d)
else:
rows_dict.append(d)
txn.executemany(sql, [
(
@@ -435,33 +476,10 @@ class Porter(object):
self.progress.set_state("Preparing PostgreSQL")
self.setup_db(postgres_config, postgres_engine)
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store._simple_select_onecol(
table="sqlite_master",
keyvalues={
"type": "table",
},
retcol="name",
)
postgres_tables = yield self.postgres_store._simple_select_onecol(
table="information_schema.tables",
keyvalues={
"table_schema": "public",
},
retcol="distinct table_name",
)
tables = set(sqlite_tables) & set(postgres_tables)
self.progress.set_state("Creating tables")
logger.info("Found %d tables", len(tables))
self.progress.set_state("Creating port tables")
def create_port_table(txn):
txn.execute(
"CREATE TABLE port_from_sqlite3 ("
"CREATE TABLE IF NOT EXISTS port_from_sqlite3 ("
" table_name varchar(100) NOT NULL UNIQUE,"
" forward_rowid bigint NOT NULL,"
" backward_rowid bigint NOT NULL"
@@ -487,18 +505,33 @@ class Porter(object):
"alter_table", alter_table
)
except Exception as e:
logger.info("Failed to create port table: %s", e)
pass
try:
yield self.postgres_store.runInteraction(
"create_port_table", create_port_table
)
except Exception as e:
logger.info("Failed to create port table: %s", e)
self.progress.set_state("Setting up")
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store._simple_select_onecol(
table="sqlite_master",
keyvalues={
"type": "table",
},
retcol="name",
)
# Set up tables.
postgres_tables = yield self.postgres_store._simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
)
tables = set(sqlite_tables) & set(postgres_tables)
logger.info("Found %d tables", len(tables))
# Step 3. Figure out what still needs copying
self.progress.set_state("Checking on port progress")
setup_res = yield defer.gatherResults(
[
self.setup_table(table)
@@ -509,7 +542,8 @@ class Porter(object):
consumeErrors=True,
)
# Process tables.
# Step 4. Do the copying.
self.progress.set_state("Copying to postgres")
yield defer.gatherResults(
[
self.handle_table(*res)
@@ -518,6 +552,9 @@ class Porter(object):
consumeErrors=True,
)
# Step 5. Do final post-processing
yield self._setup_state_group_id_seq()
self.progress.done()
except:
global end_error_exec_info
@@ -533,17 +570,29 @@ class Porter(object):
i for i, h in enumerate(headers) if h in bool_col_names
]
class BadValueException(Exception):
pass
def conv(j, col):
if j in bool_cols:
return bool(col)
elif isinstance(col, string_types) and "\0" in col:
logger.warn("DROPPING ROW: NUL value in table %s col %s: %r", table, headers[j], col)
raise BadValueException();
return col
outrows = []
for i, row in enumerate(rows):
rows[i] = tuple(
try:
outrows.append(tuple(
conv(j, col)
for j, col in enumerate(row)
if j > 0
)
))
except BadValueException:
pass
return outrows
@defer.inlineCallbacks
def _setup_sent_transactions(self):
@@ -571,7 +620,7 @@ class Porter(object):
"select", r,
)
self._convert_rows("sent_transactions", headers, rows)
rows = self._convert_rows("sent_transactions", headers, rows)
inserted_rows = len(rows)
if inserted_rows:
@@ -665,6 +714,16 @@ class Porter(object):
defer.returnValue((done, remaining + done))
def _setup_state_group_id_seq(self):
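# Bump the postgres state_group_id_seq to just past the highest state group id
# copied from sqlite, so that state groups created after the port don't collide
# with the ported rows.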
def r(txn):
txn.execute("SELECT MAX(id) FROM state_groups")
next_id = txn.fetchone()[0]+1
txn.execute(
"ALTER SEQUENCE state_group_id_seq RESTART WITH %s",
(next_id,),
)
return self.postgres_store.runInteraction("setup_state_group_id_seq", r)
##############################################
###### The following is simply UI stuff ######

scripts/sync_room_to_group.pl (Executable file)
@@ -0,0 +1,45 @@
#!/usr/bin/env perl
use strict;
use warnings;
use JSON::XS;
use LWP::UserAgent;
use URI::Escape;
if (@ARGV < 4) {
die "usage: $0 <homeserver url> <access_token> <room_id|room_alias> <group_id>\n";
}
my ($hs, $access_token, $room_id, $group_id) = @ARGV;
my $ua = LWP::UserAgent->new();
$ua->timeout(10);
if ($room_id =~ /^#/) {
$room_id = uri_escape($room_id);
$room_id = decode_json($ua->get("${hs}/_matrix/client/r0/directory/room/${room_id}?access_token=${access_token}")->decoded_content)->{room_id};
}
my $room_users = [ keys %{decode_json($ua->get("${hs}/_matrix/client/r0/rooms/${room_id}/joined_members?access_token=${access_token}")->decoded_content)->{joined}} ];
my $group_users = [
(map { $_->{user_id} } @{decode_json($ua->get("${hs}/_matrix/client/unstable/groups/${group_id}/users?access_token=${access_token}" )->decoded_content)->{chunk}}),
(map { $_->{user_id} } @{decode_json($ua->get("${hs}/_matrix/client/unstable/groups/${group_id}/invited_users?access_token=${access_token}" )->decoded_content)->{chunk}}),
];
die "refusing to sync from empty room" unless (@$room_users);
die "refusing to sync to empty group" unless (@$group_users);
my $diff = {};
foreach my $user (@$room_users) { $diff->{$user}++ }
foreach my $user (@$group_users) { $diff->{$user}-- }
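# After the two loops, $diff->{$user} is +1 for room members missing from the
# group (invite them) and -1 for group members no longer in the room (remove them).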
foreach my $user (keys %$diff) {
if ($diff->{$user} == 1) {
warn "inviting $user";
print STDERR $ua->put("${hs}/_matrix/client/unstable/groups/${group_id}/admin/users/invite/${user}?access_token=${access_token}", Content=>'{}')->status_line."\n";
}
elsif ($diff->{$user} == -1) {
warn "removing $user";
print STDERR $ua->put("${hs}/_matrix/client/unstable/groups/${group_id}/admin/users/remove/${user}?access_token=${access_token}", Content=>'{}')->status_line."\n";
}
}

@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.19.3"
__version__ = "0.28.1"

@@ -23,7 +23,8 @@ from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, Codes
from synapse.types import UserID
from synapse.util import logcontext
from synapse.util.caches import register_cache, CACHE_SIZE_FACTOR
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
@@ -39,6 +40,10 @@ AuthEventTypes = (
GUEST_DEVICE_ID = "guest_device"
class _InvalidMacaroonException(Exception):
pass
class Auth(object):
"""
FIXME: This class contains a mix of functions for authenticating users
@@ -51,6 +56,9 @@ class Auth(object):
self.state = hs.get_state_handler()
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("token_cache", self.token_cache)
@defer.inlineCallbacks
def check_from_context(self, event, context, do_sig_check=True):
auth_events_ids = yield self.compute_auth_events(
@@ -144,17 +152,8 @@ class Auth(object):
@defer.inlineCallbacks
def check_host_in_room(self, room_id, host):
with Measure(self.clock, "check_host_in_room"):
latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
logger.debug("calling resolve_state_groups from check_host_in_room")
entry = yield self.state.resolve_state_groups(
room_id, latest_event_ids
)
ret = yield self.store.is_host_joined(
room_id, host, entry.state_group, entry.state
)
defer.returnValue(ret)
latest_event_ids = yield self.store.is_host_joined(room_id, host)
defer.returnValue(latest_event_ids)
def _check_joined_room(self, member, user_id, room_id):
if not member or member.membership != Membership.JOIN:
@@ -205,12 +204,12 @@ class Auth(object):
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
"User-Agent",
default=[""]
b"User-Agent",
default=[b""]
)[0]
if user and access_token and ip_addr:
logcontext.preserve_fn(self.store.insert_client_ip)(
user=user,
self.store.insert_client_ip(
user_id=user.to_string(),
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
@@ -271,13 +270,17 @@ class Auth(object):
rights (str): The operation being performed; the access token must
allow this.
Returns:
dict : dict that includes the user and the ID of their access token.
Deferred[dict]: dict that includes:
`user` (UserID)
`is_guest` (bool)
`token_id` (int|None): access token id. May be None if guest
`device_id` (str|None): device corresponding to access token
Raises:
AuthError if no user by that token exists or the token is invalid.
"""
try:
macaroon = pymacaroons.Macaroon.deserialize(token)
except Exception: # deserialize can throw more-or-less anything
user_id, guest = self._parse_and_validate_macaroon(token, rights)
except _InvalidMacaroonException:
# doesn't look like a macaroon: treat it as an opaque token which
# must be in the database.
# TODO: it would be nice to get rid of this, but apparently some
@@ -286,19 +289,8 @@ class Auth(object):
defer.returnValue(r)
try:
user_id = self.get_user_id_from_macaroon(macaroon)
user = UserID.from_string(user_id)
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
)
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id == "guest = true":
guest = True
if guest:
# Guest access tokens are not stored in the database (there can
# only be one access token per guest, anyway).
@@ -370,6 +362,55 @@ class Auth(object):
errcode=Codes.UNKNOWN_TOKEN
)
def _parse_and_validate_macaroon(self, token, rights="access"):
"""Takes a macaroon and tries to parse and validate it. This is cached
if and only if rights == access and there isn't an expiry.
On invalid macaroon raises _InvalidMacaroonException
Returns:
(user_id, is_guest)
"""
if rights == "access":
cached = self.token_cache.get(token, None)
if cached:
return cached
try:
macaroon = pymacaroons.Macaroon.deserialize(token)
except Exception: # deserialize can throw more-or-less anything
# doesn't look like a macaroon: treat it as an opaque token which
# must be in the database.
# TODO: it would be nice to get rid of this, but apparently some
# people use access tokens which aren't macaroons
raise _InvalidMacaroonException()
try:
user_id = self.get_user_id_from_macaroon(macaroon)
has_expiry = False
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith("time "):
has_expiry = True
elif caveat.caveat_id == "guest = true":
guest = True
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN
)
if not has_expiry and rights == "access":
self.token_cache[token] = (user_id, guest)
return user_id, guest
def get_user_id_from_macaroon(self, macaroon):
"""Retrieve the user_id given by the caveats on the macaroon.
@@ -482,6 +523,14 @@ class Auth(object):
)
def is_server_admin(self, user):
""" Check if the given user is a local server admin.
Args:
user (str): mxid of user to check
Returns:
bool: True if the user is an admin
"""
return self.store.is_server_admin(user)
@defer.inlineCallbacks
@@ -623,7 +672,7 @@ def has_access_token(request):
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
@@ -643,8 +692,8 @@ def get_access_token_from_request(request, token_not_found_http_status=401):
AuthError: If there isn't an access_token in the request.
"""
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
query_params = request.args.get(b"access_token")
if auth_headers:
# Try the get the access_token from a "Authorization: Bearer"
# header

@@ -16,6 +16,9 @@
"""Contains constants from the specification."""
# the "depth" field on events is limited to 2**63 - 1
MAX_DEPTH = 2**63 - 1
class Membership(object):

@@ -15,9 +15,11 @@
"""Contains exceptions and error codes."""
import json
import logging
import simplejson as json
from six import iteritems
logger = logging.getLogger(__name__)
@@ -46,6 +48,7 @@ class Codes(object):
THREEPID_AUTH_FAILED = "M_THREEPID_AUTH_FAILED"
THREEPID_IN_USE = "M_THREEPID_IN_USE"
THREEPID_NOT_FOUND = "M_THREEPID_NOT_FOUND"
THREEPID_DENIED = "M_THREEPID_DENIED"
INVALID_USERNAME = "M_INVALID_USERNAME"
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
@@ -66,6 +69,17 @@ class CodeMessageException(RuntimeError):
return cs_error(self.msg)
class MatrixCodeMessageException(CodeMessageException):
"""An error from a general matrix endpoint, eg. from a proxied Matrix API call.
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN):
super(MatrixCodeMessageException, self).__init__(code, msg)
self.errcode = errcode
class SynapseError(CodeMessageException):
"""A base exception type for matrix errors which have an errcode and error
message (as well as an HTTP status code).
@@ -129,6 +143,48 @@ class RegistrationError(SynapseError):
pass
class FederationDeniedError(SynapseError):
"""An error raised when the server tries to federate with a server which
is not on its federation whitelist.
Attributes:
destination (str): The destination which has been denied
"""
def __init__(self, destination):
"""Raised by federation client or server to indicate that we are
deliberately not attempting to contact a given server because it is
not on our federation whitelist.
Args:
destination (str): the domain in question
"""
self.destination = destination
super(FederationDeniedError, self).__init__(
code=403,
msg="Federation denied with %s." % (self.destination,),
errcode=Codes.FORBIDDEN,
)
class InteractiveAuthIncompleteError(Exception):
"""An error raised when UI auth is not yet complete
(This indicates we should return a 401 with 'result' as the body)
Attributes:
result (dict): the server response to the request, which should be
passed back to the client
"""
def __init__(self, result):
super(InteractiveAuthIncompleteError, self).__init__(
"Interactive auth not yet complete",
)
self.result = result
class UnrecognizedRequestError(SynapseError):
"""An error indicating we don't understand the request you're trying to make"""
def __init__(self, *args, **kwargs):
@@ -242,7 +298,7 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
A dict representing the error response JSON.
"""
err = {"error": msg, "errcode": code}
for key, value in kwargs.iteritems():
for key, value in iteritems(kwargs):
err[key] = value
return err

@@ -17,7 +17,7 @@ from synapse.storage.presence import UserPresenceState
from synapse.types import UserID, RoomID
from twisted.internet import defer
import ujson as json
import simplejson as json
import jsonschema
from jsonschema import FormatChecker

synapse/app/_base.py (Normal file)
@@ -0,0 +1,178 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import logging
import sys
try:
import affinity
except Exception:
affinity = None
from daemonize import Daemonize
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
from twisted.internet import error, reactor
logger = logging.getLogger(__name__)
def start_worker_reactor(appname, config):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor. Pulls configuration from the 'worker' settings in 'config'.
Args:
appname (str): application name which will be sent to syslog
config (synapse.config.Config): config object
"""
logger = logging.getLogger(config.worker_app)
start_reactor(
appname,
config.soft_file_limit,
config.gc_thresholds,
config.worker_pid_file,
config.worker_daemonize,
config.worker_cpu_affinity,
logger,
)
def start_reactor(
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
cpu_affinity,
logger,
):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor
Args:
appname (str): application name which will be sent to syslog
soft_file_limit (int):
gc_thresholds:
pid_file (str): name of pid file to write to if daemonize is True
daemonize (bool): true to run the reactor in a background process
cpu_affinity (int|None): cpu affinity mask
logger (logging.Logger): logger instance to pass to Daemonize
"""
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
if cpu_affinity is not None:
if not affinity:
quit_with_error(
"Missing package 'affinity' required for cpu_affinity\n"
"option\n\n"
"Install by running:\n\n"
" pip install affinity\n\n"
)
logger.info("Setting CPU affinity to %s" % cpu_affinity)
affinity.set_process_affinity_mask(0, cpu_affinity)
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
reactor.run()
if daemonize:
daemon = Daemonize(
app=appname,
pid=pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
def quit_with_error(error_string):
message_lines = error_string.split("\n")
line_length = max([len(l) for l in message_lines if len(l) < 80]) + 2
sys.stderr.write("*" * line_length + '\n')
for line in message_lines:
sys.stderr.write(" %s\n" % (line.rstrip(),))
sys.stderr.write("*" * line_length + '\n')
sys.exit(1)
def listen_tcp(bind_addresses, port, factory, backlog=50):
"""
Create a TCP socket for a port and several addresses
"""
for address in bind_addresses:
try:
reactor.listenTCP(
port,
factory,
backlog,
address
)
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
def listen_ssl(bind_addresses, port, factory, context_factory, backlog=50):
"""
Create an SSL socket for a port and several addresses
"""
for address in bind_addresses:
try:
reactor.listenSSL(
port,
factory,
context_factory,
backlog,
address
)
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
def check_bind_error(e, address, bind_addresses):
"""
This method checks an exception that occurred while binding on 0.0.0.0.
If :: is specified in the bind addresses a warning is shown.
The exception is still raised otherwise.
Binding on both 0.0.0.0 and :: causes an exception on Linux and macOS
because :: binds on both IPv4 and IPv6 (as per RFC 3493).
When binding on 0.0.0.0 after :: this can safely be ignored.
Args:
e (Exception): Exception that was caught.
address (str): Address on which binding was attempted.
bind_addresses (list): Addresses on which the service listens.
"""
if address == '0.0.0.0' and '::' in bind_addresses:
logger.warn('Failed to listen on 0.0.0.0, continuing because listening on [::]')
else:
raise e

@@ -13,37 +13,30 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.appservice")
@@ -56,19 +49,6 @@ class AppserviceSlaveStore(
class AppserviceServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = AppserviceSlaveStore(self.get_db_conn(), self)
@@ -84,18 +64,17 @@ class AppserviceServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
)
logger.info("Synapse appservice now listening on port %d", port)
@@ -105,45 +84,42 @@ class AppserviceServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
appservice_handler = self.get_application_service_handler()
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ASReplicationHandler(self)
class ASReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(ASReplicationHandler, self).__init__(hs.get_datastore())
self.appservice_handler = hs.get_application_service_handler()
def on_rdata(self, stream_name, token, rows):
super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
if stream_name == "events":
max_stream_id = self.store.get_room_max_stream_ordering()
run_in_background(self._notify_app_services, max_stream_id)
@defer.inlineCallbacks
def replicate(results):
stream = results.get("events")
if stream:
max_stream_id = stream["position"]
yield appservice_handler.notify_interested_services(max_stream_id)
while True:
def _notify_app_services(self, room_stream_id):
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
replicate(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
yield self.appservice_handler.notify_interested_services(room_stream_id)
except Exception:
logger.exception("Error notifying application services of event")
def start(config_options):
@@ -186,37 +162,13 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-appservice",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-appservice", config)
if __name__ == '__main__':

@@ -13,46 +13,38 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import PublicRoomListRestServlet
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.client_reader")
@@ -65,26 +57,13 @@ class ClientReaderSlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
TransactionStore,
SlavedClientIpStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
pass
class ClientReaderServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = ClientReaderSlavedStore(self.get_db_conn(), self)
@@ -109,18 +88,17 @@ class ClientReaderServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -130,36 +108,23 @@ class ClientReaderServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
self.get_tcp_replication().start_replication(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
@@ -191,40 +156,15 @@ def start(config_options):
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-client-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-client-reader", config)
if __name__ == '__main__':

@@ -0,0 +1,189 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import (
RoomSendEventRestServlet, RoomMembershipRestServlet, RoomStateEventRestServlet,
JoinRoomAliasServlet,
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.event_creator")
class EventCreatorSlavedStore(
DirectoryStore,
TransactionStore,
SlavedProfileStore,
SlavedAccountDataStore,
SlavedPusherStore,
SlavedReceiptsStore,
SlavedPushRuleStore,
SlavedDeviceStore,
SlavedClientIpStore,
SlavedApplicationServiceStore,
SlavedEventStore,
SlavedRegistrationStore,
RoomStore,
BaseSlavedStore,
):
pass
class EventCreatorServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = EventCreatorSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
RoomSendEventRestServlet(self).register(resource)
RoomMembershipRestServlet(self).register(resource)
RoomStateEventRestServlet(self).register(resource)
JoinRoomAliasServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse event creator now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse event creator", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.event_creator"
assert config.worker_replication_http_port is not None
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
ss = EventCreatorServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-event-creator", config)
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])
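Note: _listen_http above only reads a handful of keys from each entry in worker_listeners. A minimal sketch of the shape it expects follows; the port, address and tag values are purely illustrative and not taken from any real config.

# Hypothetical listener entry; only the keys read by _listen_http /
# start_listening above are shown.
listener = {
    "type": "http",                   # dispatched on in start_listening
    "port": 8085,                     # listener_config["port"]
    "bind_addresses": ["127.0.0.1"],  # listener_config["bind_addresses"]
    "tag": "client",                  # optional, defaults to the port
    "resources": [
        {"names": ["client"]},        # mounts the four room servlets above
        {"names": ["metrics"]},       # mounts MetricsResource at METRICS_PREFIX
    ],
}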


@@ -13,43 +13,35 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.api.urls import FEDERATION_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.api.urls import FEDERATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.federation_reader")
@@ -66,19 +58,6 @@ class FederationReaderSlavedStore(
class FederationReaderServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = FederationReaderSlavedStore(self.get_db_conn(), self)
@@ -98,18 +77,17 @@ class FederationReaderServer(HomeServer):
FEDERATION_PREFIX: TransportLayerServer(self),
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
)
logger.info("Synapse federation reader now listening on port %d", port)
@@ -119,36 +97,22 @@ class FederationReaderServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
self.get_tcp_replication().start_replication(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
@@ -180,40 +144,15 @@ def start(config_options):
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-federation-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-federation-reader", config)
if __name__ == '__main__':
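Note: the get_db_conn method deleted here (and from the other workers further down) filtered out connection-pool options before connecting, since any key starting with cp_ is meant for adbapi rather than the database driver. A small worked example of that filtering, with illustrative argument values:

# Illustrative database "args"; cp_min/cp_max are adbapi pool options.
args = {
    "user": "synapse",
    "database": "synapse",
    "cp_min": 5,
    "cp_max": 10,
}

db_params = {k: v for k, v in args.items() if not k.startswith("cp_")}
# db_params == {"user": "synapse", "database": "synapse"}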


@@ -13,69 +13,69 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.site import SynapseSite
from synapse.federation import send_queue
from synapse.federation.units import Edu
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.presence import UserPresenceState
from synapse.util.async import sleep
from synapse.util.async import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
import ujson as json
logger = logging.getLogger("synapse.app.appservice")
logger = logging.getLogger("synapse.app.federation_sender")
class FederationSenderSlaveStore(
SlavedDeviceInboxStore, TransactionStore, SlavedReceiptsStore, SlavedEventStore,
SlavedRegistrationStore, SlavedDeviceStore,
SlavedRegistrationStore, SlavedDeviceStore, SlavedPresenceStore,
):
pass
def __init__(self, db_conn, hs):
super(FederationSenderSlaveStore, self).__init__(db_conn, hs)
# We pull out the current federation stream position now so that we
# always have a known value for the federation position in memory so
# that we don't have to bounce via a deferred once when we start the
# replication streams.
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = (
"SELECT stream_id FROM federation_stream_position"
" WHERE type = ?"
)
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()
txn.execute(sql, ("federation",))
rows = txn.fetchall()
txn.close()
return rows[0][0] if rows else -1
class FederationSenderServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = FederationSenderSlaveStore(self.get_db_conn(), self)
@@ -91,18 +91,17 @@ class FederationSenderServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
)
logger.info("Synapse federation_sender now listening on port %d", port)
@@ -112,41 +111,39 @@ class FederationSenderServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
send_handler = FederationSenderHandler(self)
self.get_tcp_replication().start_replication(self)
send_handler.on_start()
def build_tcp_replication(self):
return FederationSenderReplicationHandler(self)
while True:
try:
args = store.stream_positions()
args.update((yield send_handler.stream_positions()))
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
yield send_handler.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
class FederationSenderReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(FederationSenderReplicationHandler, self).__init__(hs.get_datastore())
self.send_handler = FederationSenderHandler(hs, self)
def on_rdata(self, stream_name, token, rows):
super(FederationSenderReplicationHandler, self).on_rdata(
stream_name, token, rows
)
self.send_handler.process_replication_rows(stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(FederationSenderReplicationHandler, self).get_streams_to_replicate()
args.update(self.send_handler.stream_positions())
return args
def start(config_options):
@@ -192,46 +189,27 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-federation-sender",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-federation-sender", config)
class FederationSenderHandler(object):
"""Processes the replication stream and forwards the appropriate entries
to the federation sender.
"""
def __init__(self, hs):
def __init__(self, hs, replication_client):
self.store = hs.get_datastore()
self.federation_sender = hs.get_federation_sender()
self.replication_client = replication_client
self.federation_position = self.store.federation_out_pos_startup
self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
self._last_ack = self.federation_position
self._room_serials = {}
self._room_typing = {}
@@ -243,98 +221,38 @@ class FederationSenderHandler(object):
self.store.get_room_max_stream_ordering()
)
@defer.inlineCallbacks
def stream_positions(self):
stream_id = yield self.store.get_federation_out_pos("federation")
defer.returnValue({
"federation": stream_id,
# Ack stuff we've "processed", this should only be called from
# one process.
"federation_ack": stream_id,
})
return {"federation": self.federation_position}
@defer.inlineCallbacks
def process_replication(self, result):
def process_replication_rows(self, stream_name, token, rows):
# The federation stream contains things that we want to send out, e.g.
# presence, typing, etc.
fed_stream = result.get("federation")
if fed_stream:
latest_id = int(fed_stream["position"])
# The federation stream contains a bunch of different types of
# rows that need to be handled differently. We parse the rows, put
# them into the appropriate collection and then send them off.
presence_to_send = {}
keyed_edus = {}
edus = {}
failures = {}
device_destinations = set()
# Parse the rows in the stream
for row in fed_stream["rows"]:
position, typ, content_js = row
content = json.loads(content_js)
if typ == send_queue.PRESENCE_TYPE:
destination = content["destination"]
state = UserPresenceState.from_dict(content["state"])
presence_to_send.setdefault(destination, []).append(state)
elif typ == send_queue.KEYED_EDU_TYPE:
key = content["key"]
edu = Edu(**content["edu"])
keyed_edus.setdefault(
edu.destination, {}
)[(edu.destination, tuple(key))] = edu
elif typ == send_queue.EDU_TYPE:
edu = Edu(**content)
edus.setdefault(edu.destination, []).append(edu)
elif typ == send_queue.FAILURE_TYPE:
destination = content["destination"]
failure = content["failure"]
failures.setdefault(destination, []).append(failure)
elif typ == send_queue.DEVICE_MESSAGE_TYPE:
device_destinations.add(content["destination"])
else:
raise Exception("Unrecognised federation type: %r", typ)
# We've finished collecting, send everything off
for destination, states in presence_to_send.items():
self.federation_sender.send_presence(destination, states)
for destination, edu_map in keyed_edus.items():
for key, edu in edu_map.items():
self.federation_sender.send_edu(
edu.destination, edu.edu_type, edu.content, key=key,
)
for destination, edu_list in edus.items():
for edu in edu_list:
self.federation_sender.send_edu(
edu.destination, edu.edu_type, edu.content, key=None,
)
for destination, failure_list in failures.items():
for failure in failure_list:
self.federation_sender.send_failure(destination, failure)
for destination in device_destinations:
self.federation_sender.send_device_messages(destination)
# Record where we are in the stream.
yield self.store.update_federation_out_pos(
"federation", latest_id
)
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
run_in_background(self.update_token, token)
# We also need to poke the federation sender when new events happen
event_stream = result.get("events")
if event_stream:
latest_pos = event_stream["position"]
self.federation_sender.notify_new_events(latest_pos)
elif stream_name == "events":
self.federation_sender.notify_new_events(token)
@defer.inlineCallbacks
def update_token(self, token):
try:
self.federation_position = token
# We linearize here to ensure we don't have races updating the token
with (yield self._fed_position_linearizer.queue(None)):
if self._last_ack < self.federation_position:
yield self.store.update_federation_out_pos(
"federation", self.federation_position
)
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(self.federation_position)
self._last_ack = self.federation_position
except Exception:
logger.exception("Error updating federation stream position")
if __name__ == '__main__':
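Note: the new TCP-replication wiring follows the same pattern here and in the pusher below: subclass ReplicationClientHandler, let the base class do its bookkeeping in on_rdata, then hand the rows to worker-specific code via run_in_background so replication traffic is never blocked. A minimal sketch of that pattern, assuming the synapse modules imported in the file above; the body of _process is purely illustrative:

import logging

from twisted.internet import defer

from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.util.logcontext import run_in_background

logger = logging.getLogger(__name__)


class ExampleReplicationHandler(ReplicationClientHandler):
    def __init__(self, hs):
        super(ExampleReplicationHandler, self).__init__(hs.get_datastore())
        self.hs = hs

    def on_rdata(self, stream_name, token, rows):
        # Let the base class record the advanced stream position first.
        super(ExampleReplicationHandler, self).on_rdata(stream_name, token, rows)
        # Then process the rows off the critical path.
        run_in_background(self._process, stream_name, token, rows)

    @defer.inlineCallbacks
    def _process(self, stream_name, token, rows):
        try:
            if stream_name == "events":
                # Illustrative only: consult the slaved store.
                yield self.hs.get_datastore().get_room_max_stream_ordering()
        except Exception:
            logger.exception("Error processing %s rows", stream_name)

The worker's build_tcp_replication then returns an instance of such a handler, as FederationSenderServer does above.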


@@ -0,0 +1,227 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.api.errors import SynapseError
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.servlet import (
RestServlet, parse_json_object_from_request,
)
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha._base import client_v2_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.frontend_proxy")
class KeyUploadServlet(RestServlet):
PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$")
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): server
"""
super(KeyUploadServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker_main_http_uri
@defer.inlineCallbacks
def on_POST(self, request, device_id):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
if device_id is not None:
# passing the device_id here is deprecated; however, we allow it
# for now for compatibility with older clients.
if (requester.device_id is not None and
device_id != requester.device_id):
logger.warning("Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id, device_id)
else:
device_id = requester.device_id
if device_id is None:
raise SynapseError(
400,
"To upload keys, you must pass device_id when authenticating"
)
if body:
# They're actually trying to upload something, proxy to main synapse.
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
headers = {
"Authorization": auth_headers,
}
result = yield self.http_client.post_json_get_json(
self.main_uri + request.uri,
body,
headers=headers,
)
defer.returnValue((200, result))
else:
# Just interested in counts.
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
defer.returnValue((200, {"one_time_key_counts": result}))
class FrontendProxySlavedStore(
SlavedDeviceStore,
SlavedClientIpStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
BaseSlavedStore,
):
pass
class FrontendProxyServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse client reader now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse frontend proxy", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.frontend_proxy"
assert config.worker_main_http_uri is not None
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
ss = FrontendProxyServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-frontend-proxy", config)
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])
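Note: when the upload body is non-empty, KeyUploadServlet above simply forwards the request to the main process at worker_main_http_uri, passing any Authorization header through; with an empty body it answers locally from the slaved store with the one_time_key_counts. A tiny worked example of how the proxied URL is composed, with illustrative values:

# Hypothetical values showing main_uri + request.uri from on_POST above.
main_uri = "http://127.0.0.1:8008"              # config.worker_main_http_uri
request_uri = "/_matrix/client/r0/keys/upload"  # request.uri as received
proxied_url = main_uri + request_uri
# -> "http://127.0.0.1:8008/_matrix/client/r0/keys/upload"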


@@ -13,61 +13,53 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
import gc
import logging
import os
import sys
import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \
LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \
STATIC_PREFIX, WEB_CLIENT_PREFIX
from synapse.app import _base
from synapse.app._base import quit_with_error, listen_ssl, listen_tcp
from synapse.config._base import ConfigError
from synapse.python_dependencies import (
check_requirements, DEPENDENCY_LINKS
)
from synapse.rest import ClientRestResource
from synapse.storage.engines import create_engine, IncorrectDatabaseSetup
from synapse.storage import are_all_users_on_domain
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.server import HomeServer
from twisted.internet import reactor, task, defer
from twisted.application import service
from twisted.web.resource import Resource, EncodingResourceWrapper
from twisted.web.static import File
from twisted.web.server import GzipEncoderFactory
from synapse.http.server import RootRedirect
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.api.urls import (
FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX,
SERVER_KEY_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, STATIC_PREFIX,
SERVER_KEY_V2_PREFIX,
)
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.metrics import register_memory_metrics, get_metrics_for
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.resource import ReplicationResource, REPLICATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.module_api import ModuleApi
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import register_memory_metrics
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
check_requirements
from synapse.replication.http import ReplicationRestResource, REPLICATION_PREFIX
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage import are_all_users_on_domain
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.http.site import SynapseSite
from synapse import events
from daemonize import Daemonize
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
logger = logging.getLogger("synapse.app.homeserver")
@@ -92,7 +84,7 @@ def build_resource_for_web_client(hs):
"\n"
"You can also disable hosting of the webclient via the\n"
"configuration option `web_client`\n"
% {"dep": DEPENDENCY_LINKS["matrix-angular-sdk"]}
% {"dep": CONDITIONAL_REQUIREMENTS["web_client"].keys()[0]}
)
syweb_path = os.path.dirname(syweb.__file__)
webclient_path = os.path.join(syweb_path, "webclient")
@@ -119,9 +111,67 @@ class SynapseHomeServer(HomeServer):
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
resources.update(self._configure_named_resource(
name, res.get("compress", False),
))
additional_resources = listener_config.get("additional_resources", {})
logger.debug("Configuring additional resources: %r",
additional_resources)
module_api = ModuleApi(self, self.get_auth_handler())
for path, resmodule in additional_resources.items():
handler_cls, config = load_module(resmodule)
handler = handler_cls(config, module_api)
resources[path] = AdditionalResource(self, handler.handle_request)
if WEB_CLIENT_PREFIX in resources:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
root_resource = NoResource()
root_resource = create_resource_tree(resources, root_resource)
if tls:
listen_ssl(
bind_addresses,
port,
SynapseSite(
"synapse.access.https.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
self.tls_server_context_factory,
)
else:
listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse now listening on port %d", port)
def _configure_named_resource(self, name, compress=False):
"""Build a resource map for a named resource
Args:
name (str): named resource: one of "client", "federation", etc
compress (bool): whether to enable gzip compression for this
resource
Returns:
dict[str, Resource]: map from path to HTTP resource
"""
resources = {}
if name == "client":
client_resource = ClientRestResource(self)
if res["compress"]:
if compress:
client_resource = gz_wrap(client_resource)
resources.update({
@@ -145,7 +195,8 @@ class SynapseHomeServer(HomeServer):
})
if name in ["media", "federation", "client"]:
media_repo = MediaRepositoryResource(self)
if self.get_config().enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update({
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
@@ -153,6 +204,10 @@ class SynapseHomeServer(HomeServer):
self, self.config.uploads_path
),
})
elif name == "media":
raise ConfigError(
"'media' resource conflicts with enable_media_repo=False",
)
if name in ["keys", "federation"]:
resources.update({
@@ -167,41 +222,9 @@ class SynapseHomeServer(HomeServer):
resources[METRICS_PREFIX] = MetricsResource(self)
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationResource(self)
resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
if WEB_CLIENT_PREFIX in resources:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
root_resource = Resource()
root_resource = create_resource_tree(resources, root_resource)
if tls:
for address in bind_addresses:
reactor.listenSSL(
port,
SynapseSite(
"synapse.access.https.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
self.tls_server_context_factory,
interface=address
)
else:
for address in bind_addresses:
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
logger.info("Synapse now listening on port %d", port)
return resources
def start_listening(self):
config = self.get_config()
@@ -210,17 +233,24 @@ class SynapseHomeServer(HomeServer):
if listener["type"] == "http":
self._listener_http(config, listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
)
elif listener["type"] == "replication":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
factory = ReplicationStreamProtocolFactory(self)
server_listener = reactor.listenTCP(
listener["port"], factory, interface=address
)
reactor.addSystemEventTrigger(
"before", "shutdown", server_listener.stopListening,
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -241,29 +271,6 @@ class SynapseHomeServer(HomeServer):
except IncorrectDatabaseSetup as e:
quit_with_error(e.message)
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def quit_with_error(error_string):
message_lines = error_string.split("\n")
line_length = max([len(l) for l in message_lines if len(l) < 80]) + 2
sys.stderr.write("*" * line_length + '\n')
for line in message_lines:
sys.stderr.write(" %s\n" % (line.rstrip(),))
sys.stderr.write("*" * line_length + '\n')
sys.exit(1)
def setup(config_options):
"""
@@ -342,7 +349,7 @@ def setup(config_options):
hs.get_state_handler().start_caching()
hs.get_datastore().start_profiling()
hs.get_datastore().start_doing_background_updates()
hs.get_replication_layer().start_get_pdu_cache()
hs.get_federation_client().start_get_pdu_cache()
register_memory_metrics(hs)
@@ -391,10 +398,15 @@ def run(hs):
ThreadPool._worker = profile(ThreadPool._worker)
reactor.run = profile(reactor.run)
start_time = hs.get_clock().time()
clock = hs.get_clock()
start_time = clock.time()
stats = {}
# Contains the list of processes we will be monitoring
# currently either 0 or 1
stats_process = []
@defer.inlineCallbacks
def phone_stats_home():
logger.info("Gathering stats for reporting")
@@ -403,41 +415,36 @@ def run(hs):
if uptime < 0:
uptime = 0
# If the stats dictionary is empty then this is the first time we've
# reported stats.
first_time = not stats
stats["homeserver"] = hs.config.server_name
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
stats["total_users"] = yield hs.get_datastore().count_all_users()
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
daily_messages = yield hs.get_datastore().count_daily_messages()
if daily_messages is not None:
stats["daily_messages"] = daily_messages
else:
stats.pop("daily_messages", None)
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
if first_time:
# Add callbacks to report the synapse stats as metrics whenever
# prometheus requests them, typically every 30s.
# As some of the stats are expensive to calculate we only update
# them when synapse phones home to matrix.org every 24 hours.
metrics = get_metrics_for("synapse.usage")
metrics.add_callback("timestamp", lambda: stats["timestamp"])
metrics.add_callback("uptime_seconds", lambda: stats["uptime_seconds"])
metrics.add_callback("total_users", lambda: stats["total_users"])
metrics.add_callback("total_room_count", lambda: stats["total_room_count"])
metrics.add_callback(
"daily_active_users", lambda: stats["daily_active_users"]
)
metrics.add_callback(
"daily_messages", lambda: stats.get("daily_messages", 0)
)
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in r30_results.iteritems():
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
stats["cache_factor"] = CACHE_SIZE_FACTOR
stats["event_cache_size"] = hs.config.event_cache_size
if len(stats_process) > 0:
stats["memory_rss"] = 0
stats["cpu_average"] = 0
for process in stats_process:
stats["memory_rss"] += process.memory_info().rss
stats["cpu_average"] += int(process.cpu_percent(interval=None))
logger.info("Reporting stats to matrix.org: %s" % (stats,))
try:
@@ -448,42 +455,48 @@ def run(hs):
except Exception as e:
logger.warn("Error reporting stats: %s", e)
if hs.config.report_stats:
phone_home_task = task.LoopingCall(phone_stats_home)
logger.info("Scheduling stats reporting for 24 hour intervals")
phone_home_task.start(60 * 60 * 24, now=False)
def in_thread():
# Uncomment to enable tracing of log context changes.
# sys.settrace(logcontext_tracer)
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
change_resource_limit(hs.config.soft_file_limit)
if hs.config.gc_thresholds:
gc.set_threshold(*hs.config.gc_thresholds)
reactor.run()
if hs.config.daemonize:
if hs.config.print_pidfile:
print (hs.config.pid_file)
daemon = Daemonize(
app="synapse-homeserver",
pid=hs.config.pid_file,
action=lambda: in_thread(),
auto_close_fds=False,
verbose=True,
logger=logger,
def performance_stats_init():
try:
import psutil
process = psutil.Process()
# Ensure we can fetch both, and make the initial request for cpu_percent
# so the next request will use this as the initial point.
process.memory_info().rss
process.cpu_percent(interval=None)
logger.info("report_stats can use psutil")
stats_process.append(process)
except (ImportError, AttributeError):
logger.warn(
"report_stats enabled but psutil is not installed or incorrect version."
" Disabling reporting of memory/cpu stats."
" Ensuring psutil is available will help matrix.org track performance"
" changes across releases."
)
daemon.start()
else:
in_thread()
if hs.config.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)
# We need to defer this init for the cases that we daemonize
# otherwise the process ID we get is that of the non-daemon process
clock.call_later(0, performance_stats_init)
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(5 * 60, phone_stats_home)
if hs.config.daemonize and hs.config.print_pidfile:
print (hs.config.pid_file)
_base.start_reactor(
"synapse-homeserver",
hs.config.soft_file_limit,
hs.config.gc_thresholds,
hs.config.pid_file,
hs.config.daemonize,
hs.config.cpu_affinity,
logger,
)
def main():
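Note: performance_stats_init above relies on psutil's convention that the first cpu_percent(interval=None) call only establishes a baseline and its return value should be ignored; later calls report usage since the previous call. A standalone sketch of that pattern (psutil is the real library; the 30-second pause is just for illustration):

import time

import psutil

process = psutil.Process()

# Prime the measurement; the first return value is meaningless.
process.cpu_percent(interval=None)

time.sleep(30)  # ... later, when stats are next gathered ...

rss_bytes = process.memory_info().rss             # resident set size in bytes
cpu_percent = process.cpu_percent(interval=None)  # usage since the call above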


@@ -13,46 +13,37 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from synapse.api.urls import (
CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
)
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.media_repository")
@@ -60,28 +51,15 @@ logger = logging.getLogger("synapse.app.media_repository")
class MediaRepositorySlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
TransactionStore,
BaseSlavedStore,
MediaRepositoryStore,
ClientIpStore,
):
pass
class MediaRepositoryServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = MediaRepositorySlavedStore(self.get_db_conn(), self)
@@ -97,7 +75,7 @@ class MediaRepositoryServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "media":
media_repo = MediaRepositoryResource(self)
media_repo = self.get_media_repository_resource()
resources.update({
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
@@ -106,18 +84,17 @@ class MediaRepositoryServer(HomeServer):
),
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
)
logger.info("Synapse media repository now listening on port %d", port)
@@ -127,36 +104,22 @@ class MediaRepositoryServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
self.get_tcp_replication().start_replication(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
@@ -170,6 +133,13 @@ def start(config_options):
assert config.worker_app == "synapse.app.media_repository"
if config.enable_media_repo:
_base.quit_with_error(
"enable_media_repo must be disabled in the main synapse process\n"
"before the media repo can be run in a separate worker.\n"
"Please add ``enable_media_repo: false`` to the main config\n"
)
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
@@ -188,40 +158,15 @@ def start(config_options):
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-media-repository",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-media-repository", config)
if __name__ == '__main__':


@@ -13,40 +13,31 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.storage.roommember import RoomMemberStore
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.storage.engines import create_engine
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.async import sleep
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn, \
PreserveLoggingContext
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.pusher")
@@ -83,42 +74,15 @@ class PusherSlaveStore(
DataStore.get_profile_displayname.__func__
)
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
class PusherServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = PusherSlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def remove_pusher(self, app_id, push_key, user_id):
http_client = self.get_simple_http_client()
replication_url = self.config.worker_replication_url
url = replication_url + "/remove_pushers"
return http_client.post_json_get_json(url, {
"remove": [{
"app_id": app_id,
"push_key": push_key,
"user_id": user_id,
}]
})
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -130,18 +94,17 @@ class PusherServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
)
logger.info("Synapse pusher now listening on port %d", port)
@@ -151,88 +114,67 @@ class PusherServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
pusher_pool = self.get_pusherpool()
self.get_tcp_replication().start_replication(self)
def stop_pusher(user_id, app_id, pushkey):
def build_tcp_replication(self):
return PusherReplicationHandler(self)
class PusherReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(PusherReplicationHandler, self).__init__(hs.get_datastore())
self.pusher_pool = hs.get_pusherpool()
def on_rdata(self, stream_name, token, rows):
super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.poke_pushers, stream_name, token, rows)
@defer.inlineCallbacks
def poke_pushers(self, stream_name, token, rows):
try:
if stream_name == "pushers":
for row in rows:
if row.deleted:
yield self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
yield self.pusher_pool.on_new_notifications(
token, token,
)
elif stream_name == "receipts":
yield self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
except Exception:
logger.exception("Error poking pushers")
def stop_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = pusher_pool.pushers.get(user_id, {})
pushers_for_user = self.pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
def start_pusher(user_id, app_id, pushkey):
def start_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return pusher_pool._refresh_pusher(app_id, pushkey, user_id)
@defer.inlineCallbacks
def poke_pushers(results):
pushers_rows = set(
map(tuple, results.get("pushers", {}).get("rows", []))
)
deleted_pushers_rows = set(
map(tuple, results.get("deleted_pushers", {}).get("rows", []))
)
for row in sorted(pushers_rows | deleted_pushers_rows):
if row in deleted_pushers_rows:
user_id, app_id, pushkey = row[1:4]
stop_pusher(user_id, app_id, pushkey)
elif row in pushers_rows:
user_id = row[1]
app_id = row[5]
pushkey = row[8]
yield start_pusher(user_id, app_id, pushkey)
stream = results.get("events")
if stream and stream["rows"]:
min_stream_id = stream["rows"][0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_notifications)(
min_stream_id, max_stream_id
)
stream = results.get("receipts")
if stream and stream["rows"]:
rows = stream["rows"]
affected_room_ids = set(row[1] for row in rows)
min_stream_id = rows[0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_receipts)(
min_stream_id, max_stream_id, affected_room_ids
)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
poke_pushers(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
return self.pusher_pool._refresh_pusher(app_id, pushkey, user_id)
def start(config_options):
@@ -275,38 +217,14 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_pusherpool().start()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-pusher",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-pusher", config)
if __name__ == '__main__':
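Note: stop_pusher and start_pusher above key the per-user pusher map by app_id and pushkey joined with a colon. A small illustrative example of that lookup; the app_id and pushkey values are hypothetical:

# Hypothetical pusher identifiers.
app_id = "com.example.app.ios"
pushkey = "https://push.example.com/_matrix/push/v1/notify"

key = "%s:%s" % (app_id, pushkey)
pushers_for_user = {key: "pusher-instance-stand-in"}

pusher = pushers_for_user.pop(key, None)
# pusher is now the stand-in object and the entry has been removed,
# mirroring what stop_pusher does before calling pusher.on_stop().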


@@ -13,105 +13,87 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import logging
import sys
import synapse
from synapse.api.constants import EventTypes, PresenceState
from synapse.api.constants import EventTypes
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.handlers.presence import PresenceHandler
from synapse.http.site import SynapseSite
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.rest.client.v2_alpha import sync
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.presence import PresenceStore, UserPresenceState
from synapse.storage.presence import UserPresenceState
from synapse.storage.roommember import RoomMemberStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn, \
PreserveLoggingContext
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import contextlib
import gc
import ujson as json
from six import iteritems
logger = logging.getLogger("synapse.app.synchrotron")
class SynchrotronSlavedStore(
SlavedPushRuleStore,
SlavedEventStore,
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
did_forget = (
RoomMemberStore.__dict__["did_forget"]
)
# XXX: This is a bit broken because we don't persist the accepted list in a
# way that can be replicated. This means that we don't have a way to
# invalidate the cache correctly.
get_presence_list_accepted = PresenceStore.__dict__[
"get_presence_list_accepted"
]
get_presence_list_observers_accepted = PresenceStore.__dict__[
"get_presence_list_observers_accepted"
]
UPDATE_SYNCING_USERS_MS = 10 * 1000
class SynchrotronPresence(object):
def __init__(self, hs):
self.hs = hs
self.is_mine_id = hs.is_mine_id
self.http_client = hs.get_simple_http_client()
self.store = hs.get_datastore()
self.user_to_num_current_syncs = {}
self.syncing_users_url = hs.config.worker_replication_url + "/syncing_users"
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
@@ -121,17 +103,52 @@ class SynchrotronPresence(object):
for state in active_presence
}
# user_id -> last_sync_ms. Lists the users that have stopped syncing
# but we haven't notified the master of that yet
self.users_going_offline = {}
self._send_stop_syncing_loop = self.clock.looping_call(
self.send_stop_syncing, 10 * 1000
)
self.process_id = random_string(16)
logger.info("Presence process_id is %r", self.process_id)
self._sending_sync = False
self._need_to_send_sync = False
self.clock.looping_call(
self._send_syncing_users_regularly,
UPDATE_SYNCING_USERS_MS,
)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
self.hs.get_tcp_replication().send_user_sync(user_id, is_syncing, last_sync_ms)
reactor.addSystemEventTrigger("before", "shutdown", self._on_shutdown)
def mark_as_coming_online(self, user_id):
"""A user has started syncing. Send a UserSync to the master, unless they
had recently stopped syncing.
Args:
user_id (str)
"""
going_offline = self.users_going_offline.pop(user_id, None)
if not going_offline:
# Safe to skip because we haven't yet told the master they were offline
self.send_user_sync(user_id, True, self.clock.time_msec())
def mark_as_going_offline(self, user_id):
"""A user has stopped syncing. We wait before notifying the master as
its likely they'll come back soon. This allows us to avoid sending
a stopped syncing immediately followed by a started syncing notification
to the master
Args:
user_id (str)
"""
self.users_going_offline[user_id] = self.clock.time_msec()
def send_stop_syncing(self):
"""Check if there are any users who have stopped syncing a while ago
and haven't come back yet. If there are, poke the master about them.
"""
now = self.clock.time_msec()
for user_id, last_sync_ms in self.users_going_offline.items():
if now - last_sync_ms > 10 * 1000:
self.users_going_offline.pop(user_id, None)
self.send_user_sync(user_id, False, last_sync_ms)
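Taken together, mark_as_coming_online, mark_as_going_offline and send_stop_syncing implement a small debounce: a user who stops syncing but resumes within the ten second grace period never produces a stop/start pair on the wire. A minimal standalone sketch of the same idea, with a hypothetical send_user_sync callback and clock rather than Synapse's own APIs:

import time

GRACE_MS = 10 * 1000

class SyncDebouncer(object):
    """Debounce 'user stopped syncing' notifications to the master."""

    def __init__(self, send_user_sync, clock=lambda: int(time.time() * 1000)):
        self._send_user_sync = send_user_sync  # (user_id, is_syncing, ts) -> None
        self._clock = clock
        self._going_offline = {}               # user_id -> last_sync_ms

    def coming_online(self, user_id):
        # Only tell the master if we never told it the user went offline.
        if self._going_offline.pop(user_id, None) is None:
            self._send_user_sync(user_id, True, self._clock())

    def going_offline(self, user_id):
        # Don't notify yet; the user may well reconnect within the grace period.
        self._going_offline[user_id] = self._clock()

    def flush(self):
        # Run periodically, like send_stop_syncing above.
        now = self._clock()
        for user_id, last_sync_ms in list(self._going_offline.items()):
            if now - last_sync_ms > GRACE_MS:
                del self._going_offline[user_id]
                self._send_user_sync(user_id, False, last_sync_ms)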
def set_state(self, user, state, ignore_status_msg=False):
# TODO How's this supposed to work?
@@ -139,18 +156,16 @@ class SynchrotronPresence(object):
get_states = PresenceHandler.get_states.__func__
get_state = PresenceHandler.get_state.__func__
_get_interested_parties = PresenceHandler._get_interested_parties.__func__
current_state_for_users = PresenceHandler.current_state_for_users.__func__
@defer.inlineCallbacks
def user_syncing(self, user_id, affect_presence):
if affect_presence:
curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
self.user_to_num_current_syncs[user_id] = curr_sync + 1
prev_states = yield self.current_state_for_users([user_id])
if prev_states[user_id].state == PresenceState.OFFLINE:
# TODO: Don't block the sync request on this HTTP hit.
yield self._send_syncing_users_now()
# If we went from no in flight sync to some, notify replication
if self.user_to_num_current_syncs[user_id] == 1:
self.mark_as_coming_online(user_id)
def _end():
# We check that the user_id is in user_to_num_current_syncs because
@@ -159,6 +174,10 @@ class SynchrotronPresence(object):
if affect_presence and user_id in self.user_to_num_current_syncs:
self.user_to_num_current_syncs[user_id] -= 1
# If we went from one in-flight sync to none, notify replication
if self.user_to_num_current_syncs[user_id] == 0:
self.mark_as_going_offline(user_id)
@contextlib.contextmanager
def _user_syncing():
try:
@@ -166,56 +185,12 @@ class SynchrotronPresence(object):
finally:
_end()
defer.returnValue(_user_syncing())
@defer.inlineCallbacks
def _on_shutdown(self):
# When the synchrotron is shutdown tell the master to clear the in
# progress syncs for this process
self.user_to_num_current_syncs.clear()
yield self._send_syncing_users_now()
def _send_syncing_users_regularly(self):
# Only send an update if we aren't in the middle of sending one.
if not self._sending_sync:
preserve_fn(self._send_syncing_users_now)()
@defer.inlineCallbacks
def _send_syncing_users_now(self):
if self._sending_sync:
# We don't want to race with sending another update.
# Instead we wait for that update to finish and send another
# update afterwards.
self._need_to_send_sync = True
return
# Flag that we are sending an update.
self._sending_sync = True
yield self.http_client.post_json_get_json(self.syncing_users_url, {
"process_id": self.process_id,
"syncing_users": [
user_id for user_id, count in self.user_to_num_current_syncs.items()
if count > 0
],
})
# Unset the flag as we are no longer sending an update.
self._sending_sync = False
if self._need_to_send_sync:
# If something happened while we were sending the update then
# we might need to send another update.
# TODO: Check if the update that was sent matches the current state
# as we only need to send an update if they are different.
self._need_to_send_sync = False
yield self._send_syncing_users_now()
return defer.succeed(_user_syncing())
@defer.inlineCallbacks
def notify_from_replication(self, states, stream_id):
parties = yield self._get_interested_parties(
states, calculate_remote_hosts=False
)
room_ids_to_states, users_to_states, _ = parties
parties = yield get_interested_parties(self.store, states)
room_ids_to_states, users_to_states = parties
self.notifier.on_new_event(
"presence_key", stream_id, rooms=room_ids_to_states.keys(),
@@ -223,27 +198,25 @@ class SynchrotronPresence(object):
)
@defer.inlineCallbacks
def process_replication(self, result):
stream = result.get("presence", {"rows": []})
states = []
for row in stream["rows"]:
(
position, user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
) = row
state = UserPresenceState(
user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
)
self.user_to_current_state[user_id] = state
states.append(state)
def process_replication_rows(self, token, rows):
states = [UserPresenceState(
row.user_id, row.state, row.last_active_ts,
row.last_federation_update_ts, row.last_user_sync_ts, row.status_msg,
row.currently_active
) for row in rows]
if states and "position" in stream:
stream_id = int(stream["position"])
for state in states:
self.user_to_current_state[state.user_id] = state
stream_id = token
yield self.notify_from_replication(states, stream_id)
def get_currently_syncing_users(self):
return [
user_id for user_id, count in iteritems(self.user_to_num_current_syncs)
if count > 0
]
class SynchrotronTyping(object):
def __init__(self, hs):
@@ -257,16 +230,12 @@ class SynchrotronTyping(object):
# value which we *must* use for the next replication request.
return {"typing": self._latest_room_serial}
def process_replication(self, result):
stream = result.get("typing")
if stream:
self._latest_room_serial = int(stream["position"])
def process_replication_rows(self, token, rows):
self._latest_room_serial = token
for row in stream["rows"]:
position, room_id, typing_json = row
typing = json.loads(typing_json)
self._room_serials[room_id] = position
self._room_typing[room_id] = typing
for row in rows:
self._room_serials[row.room_id] = token
self._room_typing[row.room_id] = row.user_ids
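The rows handled here arrive over TCP replication as simple structs exposing room_id and user_ids attributes. A purely illustrative sketch of the shape this handler expects (the namedtuple name is hypothetical, not necessarily the one the replication streams define):

from collections import namedtuple

TypingRow = namedtuple("TypingRow", ("room_id", "user_ids"))

rows = [TypingRow(room_id="!room:example.com", user_ids=["@alice:example.com"])]
# process_replication_rows(42, rows) would then record that @alice is currently
# typing in !room:example.com as of stream token 42.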
class SynchrotronApplicationService(object):
@@ -275,19 +244,6 @@ class SynchrotronApplicationService(object):
class SynchrotronServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = SynchrotronSlavedStore(self.get_db_conn(), self)
@@ -315,18 +271,17 @@ class SynchrotronServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
)
logger.info("Synapse synchrotron now listening on port %d", port)
@@ -336,133 +291,22 @@ class SynchrotronServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
notifier = self.get_notifier()
presence_handler = self.get_presence_handler()
typing_handler = self.get_typing_handler()
self.get_tcp_replication().start_replication(self)
def notify_from_stream(
result, stream_name, stream_key, room=None, user=None
):
stream = result.get(stream_name)
if stream:
position_index = stream["field_names"].index("position")
if room:
room_index = stream["field_names"].index(room)
if user:
user_index = stream["field_names"].index(user)
users = ()
rooms = ()
for row in stream["rows"]:
position = row[position_index]
if user:
users = (row[user_index],)
if room:
rooms = (row[room_index],)
notifier.on_new_event(
stream_key, position, users=users, rooms=rooms
)
@defer.inlineCallbacks
def notify_device_list_update(result):
stream = result.get("device_lists")
if not stream:
return
position_index = stream["field_names"].index("position")
user_index = stream["field_names"].index("user_id")
for row in stream["rows"]:
position = row[position_index]
user_id = row[user_index]
room_ids = yield store.get_rooms_for_user(user_id)
notifier.on_new_event(
"device_list_key", position, rooms=room_ids,
)
@defer.inlineCallbacks
def notify(result):
stream = result.get("events")
if stream:
max_position = stream["position"]
event_map = yield store.get_events([row[1] for row in stream["rows"]])
for row in stream["rows"]:
position = row[0]
event_id = row[1]
event = event_map.get(event_id, None)
if not event:
continue
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
notifier.on_new_room_event(
event, position, max_position, extra_users
)
notify_from_stream(
result, "push_rules", "push_rules_key", user="user_id"
)
notify_from_stream(
result, "user_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "room_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "tag_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "receipts", "receipt_key", room="room_id"
)
notify_from_stream(
result, "typing", "typing_key", room="room_id"
)
notify_from_stream(
result, "to_device", "to_device_key", user="user_id"
)
yield notify_device_list_update(result)
while True:
try:
args = store.stream_positions()
args.update(typing_handler.stream_positions())
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
typing_handler.process_replication(result)
yield presence_handler.process_replication(result)
yield notify(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return SyncReplicationHandler(self)
def build_presence_handler(self):
return SynchrotronPresence(self)
@@ -471,6 +315,84 @@ class SynchrotronServer(HomeServer):
return SynchrotronTyping(self)
class SyncReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(SyncReplicationHandler, self).__init__(hs.get_datastore())
self.store = hs.get_datastore()
self.typing_handler = hs.get_typing_handler()
# NB this is a SynchrotronPresence, not a normal PresenceHandler
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
def on_rdata(self, stream_name, token, rows):
super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.process_and_notify, stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(SyncReplicationHandler, self).get_streams_to_replicate()
args.update(self.typing_handler.stream_positions())
return args
def get_currently_syncing_users(self):
return self.presence_handler.get_currently_syncing_users()
@defer.inlineCallbacks
def process_and_notify(self, stream_name, token, rows):
try:
if stream_name == "events":
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
event = yield self.store.get_event(row.event_id)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
max_token = self.store.get_room_max_stream_ordering()
self.notifier.on_new_room_event(
event, token, max_token, extra_users
)
elif stream_name == "push_rules":
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows],
)
elif stream_name in ("account_data", "tag_account_data",):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows],
)
elif stream_name == "receipts":
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "typing":
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "to_device":
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event(
"to_device_key", token, users=entities,
)
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = yield self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event(
"device_list_key", token, rooms=all_room_ids,
)
elif stream_name == "presence":
yield self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows],
)
except Exception:
logger.exception("Error processing replication")
def start(config_options):
try:
config = HomeServerConfig.load_config(
@@ -500,37 +422,13 @@ def start(config_options):
ss.setup()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_datastore().start_profiling()
ss.replicate()
ss.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-synchrotron",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-synchrotron", config)
if __name__ == '__main__':

View File

@@ -38,7 +38,7 @@ def pid_running(pid):
try:
os.kill(pid, 0)
return True
except OSError, err:
except OSError as err:
if err.errno == errno.EPERM:
return True
return False
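os.kill(pid, 0) delivers no signal at all; it only asks the kernel whether the pid exists and whether we are allowed to signal it, so EPERM means "exists, but owned by another user". A self-contained, runnable copy of the same check for illustration:

import errno
import os

def pid_running(pid):
    try:
        os.kill(pid, 0)          # signal 0: existence/permission check only
        return True
    except OSError as err:
        if err.errno == errno.EPERM:
            return True          # the process exists but we may not signal it
        return False             # typically ESRCH: no such process

print(pid_running(os.getpid()))  # True: our own process certainly exists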
@@ -98,7 +98,7 @@ def stop(pidfile, app):
try:
os.kill(pid, signal.SIGTERM)
write("stopped %s" % (app,), colour=GREEN)
except OSError, err:
except OSError as err:
if err.errno == errno.ESRCH:
write("%s not running" % (app,), colour=YELLOW)
elif err.errno == errno.EPERM:
@@ -125,7 +125,7 @@ def main():
"configfile",
nargs="?",
default="homeserver.yaml",
help="the homeserver config file, defaults to homserver.yaml",
help="the homeserver config file, defaults to homeserver.yaml",
)
parser.add_argument(
"-w", "--worker",
@@ -184,6 +184,9 @@ def main():
worker_configfiles.append(worker_configfile)
if options.all_processes:
# To start the main synapse with -a you need to add a worker file
# with worker_app == "synapse.app.homeserver"
start_stop_synapse = False
worker_configdir = options.all_processes
if not os.path.isdir(worker_configdir):
write(
@@ -200,9 +203,28 @@ def main():
with open(worker_configfile) as stream:
worker_config = yaml.load(stream)
worker_app = worker_config["worker_app"]
if worker_app == "synapse.app.homeserver":
# We need to special case all of this to pick up options that may
# be set in the main config file or in this worker config file.
worker_pidfile = (
worker_config.get("pid_file")
or pidfile
)
worker_cache_factor = worker_config.get("synctl_cache_factor") or cache_factor
daemonize = worker_config.get("daemonize") or config.get("daemonize")
assert daemonize, "Main process must have daemonize set to true"
# The master process doesn't support using worker_* config.
for key in worker_config:
if key == "worker_app": # But we allow worker_app
continue
assert not key.startswith("worker_"), \
"Main process cannot use worker_* config"
else:
worker_pidfile = worker_config["worker_pid_file"]
worker_daemonize = worker_config["worker_daemonize"]
assert worker_daemonize # TODO print something more user friendly
assert worker_daemonize, "In config %r: expected '%s' to be True" % (
worker_configfile, "worker_daemonize")
worker_cache_factor = worker_config.get("synctl_cache_factor")
workers.append(Worker(
worker_app, worker_configfile, worker_pidfile, worker_cache_factor,
@@ -230,9 +252,13 @@ def main():
for running_pid in running_pids:
while pid_running(running_pid):
time.sleep(0.2)
write("All processes exited; now restarting...")
if action == "start" or action == "restart":
if start_stop_synapse:
# Check if synapse is already running
if os.path.exists(pidfile) and pid_running(int(open(pidfile).read())):
abort("synapse.app.homeserver already running")
start(configfile)
for worker in workers:

231
synapse/app/user_dir.py Normal file
View File

@@ -0,0 +1,231 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.user_dir")
class UserDirectorySlaveStore(
SlavedEventStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
UserDirectoryStore,
BaseSlavedStore,
):
def __init__(self, db_conn, hs):
super(UserDirectorySlaveStore, self).__init__(db_conn, hs)
events_max = self._stream_id_gen.get_current_token()
curr_state_delta_prefill, min_curr_state_delta_id = self._get_cache_dict(
db_conn, "current_state_delta_stream",
entity_column="room_id",
stream_column="stream_id",
max_value=events_max, # As we share the stream id with events token
limit=1000,
)
self._curr_state_delta_stream_cache = StreamChangeCache(
"_curr_state_delta_stream_cache", min_curr_state_delta_id,
prefilled_cache=curr_state_delta_prefill,
)
self._current_state_delta_pos = events_max
def stream_positions(self):
result = super(UserDirectorySlaveStore, self).stream_positions()
result["current_state_deltas"] = self._current_state_delta_pos
return result
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "current_state_deltas":
self._current_state_delta_pos = token
for row in rows:
self._curr_state_delta_stream_cache.entity_has_changed(
row.room_id, token
)
return super(UserDirectorySlaveStore, self).process_replication_rows(
stream_name, token, rows
)
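The StreamChangeCache fed by entity_has_changed() above answers one question cheaply: has this entity changed since stream position X, as far as we know? A rough standalone illustration of that contract (not Synapse's actual class, just the behaviour the prefill and update code relies on):

class MiniStreamChangeCache(object):
    """Remember the latest stream position at which each entity changed."""

    def __init__(self, earliest_known_pos, prefilled_cache=None):
        self._earliest_known_pos = earliest_known_pos
        self._entity_to_pos = dict(prefilled_cache or {})

    def entity_has_changed(self, entity, pos):
        # Called as replication rows arrive, as in process_replication_rows above.
        self._entity_to_pos[entity] = max(self._entity_to_pos.get(entity, 0), pos)

    def has_entity_changed(self, entity, since_pos):
        if since_pos < self._earliest_known_pos:
            return True  # we have no data that far back, so assume it changed
        return self._entity_to_pos.get(entity, 0) > since_pos

cache = MiniStreamChangeCache(earliest_known_pos=100)
cache.entity_has_changed("!room:example.com", 105)
print(cache.has_entity_changed("!room:example.com", 103))   # True
print(cache.has_entity_changed("!other:example.com", 103))  # False
print(cache.has_entity_changed("!other:example.com", 50))   # True: too far back to know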
class UserDirectoryServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = UserDirectorySlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
user_directory.register_servlets(self, resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse user_dir now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return UserDirectoryReplicationHandler(self)
class UserDirectoryReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(UserDirectoryReplicationHandler, self).__init__(hs.get_datastore())
self.user_directory = hs.get_user_directory_handler()
def on_rdata(self, stream_name, token, rows):
super(UserDirectoryReplicationHandler, self).on_rdata(
stream_name, token, rows
)
if stream_name == "current_state_deltas":
run_in_background(self._notify_directory)
@defer.inlineCallbacks
def _notify_directory(self):
try:
yield self.user_directory.notify_new_event()
except Exception:
logger.exception("Error notifiying user directory of state update")
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse user directory", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.user_dir"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
if config.update_user_directory:
sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``update_user_directory: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the user directory updater to start since it will be disabled in the main config
config.update_user_directory = True
tls_server_context_factory = context_factory.ServerContextFactory(config)
ps = UserDirectoryServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ps.setup()
ps.start_listening(config.worker_listeners)
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-user-dir", config)
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -14,12 +14,15 @@
# limitations under the License.
from synapse.api.constants import EventTypes
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.types import GroupID, get_domain_from_id
from twisted.internet import defer
import logging
import re
from six import string_types
logger = logging.getLogger(__name__)
@@ -81,12 +84,13 @@ class ApplicationService(object):
# values.
NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
def __init__(self, token, url=None, namespaces=None, hs_token=None,
def __init__(self, token, hostname, url=None, namespaces=None, hs_token=None,
sender=None, id=None, protocols=None, rate_limited=True):
self.token = token
self.url = url
self.hs_token = hs_token
self.sender = sender
self.server_name = hostname
self.namespaces = self._check_namespaces(namespaces)
self.id = id
@@ -125,8 +129,26 @@ class ApplicationService(object):
raise ValueError(
"Expected bool for 'exclusive' in ns '%s'" % ns
)
group_id = regex_obj.get("group_id")
if group_id:
if not isinstance(group_id, str):
raise ValueError(
"Expected string for 'group_id' in ns '%s'" % ns
)
try:
GroupID.from_string(group_id)
except Exception:
raise ValueError(
"Expected valid group ID for 'group_id' in ns '%s'" % ns
)
if get_domain_from_id(group_id) != self.server_name:
raise ValueError(
"Expected 'group_id' to be this host in ns '%s'" % ns
)
regex = regex_obj.get("regex")
if isinstance(regex, basestring):
if isinstance(regex, string_types):
regex_obj["regex"] = re.compile(regex) # Pre-compile regex
else:
raise ValueError(
@@ -241,6 +263,31 @@ class ApplicationService(object):
def is_exclusive_room(self, room_id):
return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)
def get_exlusive_user_regexes(self):
"""Get the list of regexes used to determine if a user is exclusively
registered by the AS
"""
return [
regex_obj["regex"]
for regex_obj in self.namespaces[ApplicationService.NS_USERS]
if regex_obj["exclusive"]
]
def get_groups_for_user(self, user_id):
"""Get the groups that this user is associated with by this AS
Args:
user_id (str): The ID of the user.
Returns:
iterable[str]: an iterable that yields group_id strings.
"""
return (
regex_obj["group_id"]
for regex_obj in self.namespaces[ApplicationService.NS_USERS]
if "group_id" in regex_obj and regex_obj["regex"].match(user_id)
)
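get_groups_for_user simply walks the user-namespace entries and yields the group_id of every entry whose regex matches the user. A hypothetical, self-contained illustration of that lookup, with namespace dicts mirroring the regex_obj structure validated in _check_namespaces above:

import re

user_namespaces = [
    {"regex": re.compile(r"@irc_.*:example\.org"), "exclusive": True,
     "group_id": "+irc:example.org"},
    {"regex": re.compile(r"@bot_.*:example\.org"), "exclusive": False},
]

def groups_for_user(user_id):
    return (
        obj["group_id"]
        for obj in user_namespaces
        if "group_id" in obj and obj["regex"].match(user_id)
    )

print(list(groups_for_user("@irc_alice:example.org")))  # ['+irc:example.org']
print(list(groups_for_user("@bob:example.org")))        # []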
def is_rate_limited(self):
return self.rate_limited

View File

@@ -72,7 +72,8 @@ class ApplicationServiceApi(SimpleHttpClient):
super(ApplicationServiceApi, self).__init__(hs)
self.clock = hs.get_clock()
self.protocol_meta_cache = ResponseCache(hs, timeout_ms=HOUR_IN_MS)
self.protocol_meta_cache = ResponseCache(hs, "as_protocol_meta",
timeout_ms=HOUR_IN_MS)
@defer.inlineCallbacks
def query_user(self, service, user_id):
@@ -192,9 +193,7 @@ class ApplicationServiceApi(SimpleHttpClient):
defer.returnValue(None)
key = (service.id, protocol)
return self.protocol_meta_cache.get(key) or (
self.protocol_meta_cache.set(key, _get())
)
return self.protocol_meta_cache.wrap(key, _get)
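Both the old get()/set() pair and the new wrap() call aim at the same thing: if a lookup for this (service, protocol) key is already cached or in flight, reuse it rather than issuing another request to the application service. A rough synchronous sketch of the wrap pattern; the real ResponseCache deals in Deferreds and entry expiry, which are omitted here:

class MiniResponseCache(object):
    """De-duplicate lookups for the same key."""

    def __init__(self):
        self._results = {}

    def wrap(self, key, callback, *args, **kwargs):
        # The first caller for a key does the work; later callers reuse the result.
        if key not in self._results:
            self._results[key] = callback(*args, **kwargs)
        return self._results[key]

cache = MiniResponseCache()
calls = []

def fetch_protocol_meta(protocol):
    calls.append(protocol)
    return {"protocol": protocol}

cache.wrap(("as1", "irc"), fetch_protocol_meta, "irc")
cache.wrap(("as1", "irc"), fetch_protocol_meta, "irc")
print(calls)  # ['irc']: the second call was answered from the cache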
@defer.inlineCallbacks
def push_bulk(self, service, events, txn_id=None):

View File

@@ -51,7 +51,7 @@ components.
from twisted.internet import defer
from synapse.appservice import ApplicationServiceState
from synapse.util.logcontext import preserve_fn
from synapse.util.logcontext import run_in_background
from synapse.util.metrics import Measure
import logging
@@ -106,7 +106,7 @@ class _ServiceQueuer(object):
def enqueue(self, service, event):
# if this service isn't being sent something
self.queued_events.setdefault(service.id, []).append(event)
preserve_fn(self._send_request)(service)
run_in_background(self._send_request, service)
@defer.inlineCallbacks
def _send_request(self, service):
@@ -123,7 +123,7 @@ class _ServiceQueuer(object):
with Measure(self.clock, "servicequeuer.send"):
try:
yield self.txn_ctrl.send(service, events)
except:
except Exception:
logger.exception("AS request failed")
finally:
self.requests_in_flight.discard(service.id)
@@ -152,10 +152,10 @@ class _TransactionController(object):
if sent:
yield txn.complete(self.store)
else:
preserve_fn(self._start_recoverer)(service)
except Exception as e:
logger.exception(e)
preserve_fn(self._start_recoverer)(service)
run_in_background(self._start_recoverer, service)
except Exception:
logger.exception("Error creating appservice transaction")
run_in_background(self._start_recoverer, service)
@defer.inlineCallbacks
def on_recovered(self, recoverer):
@@ -176,6 +176,7 @@ class _TransactionController(object):
@defer.inlineCallbacks
def _start_recoverer(self, service):
try:
yield self.store.set_appservice_state(
service,
ApplicationServiceState.DOWN
@@ -187,6 +188,8 @@ class _TransactionController(object):
recoverer = self.recoverer_fn(service, self.on_recovered)
self.add_recoverers([recoverer])
recoverer.recover()
except Exception:
logger.exception("Error starting AS recoverer")
@defer.inlineCallbacks
def _is_service_up(self, service):

View File

@@ -19,6 +19,8 @@ import os
import yaml
from textwrap import dedent
from six import integer_types
class ConfigError(Exception):
pass
@@ -49,7 +51,7 @@ Missing mandatory `server_name` config option.
class Config(object):
@staticmethod
def parse_size(value):
if isinstance(value, int) or isinstance(value, long):
if isinstance(value, integer_types):
return value
sizes = {"K": 1024, "M": 1024 * 1024}
size = 1
@@ -61,7 +63,7 @@ class Config(object):
@staticmethod
def parse_duration(value):
if isinstance(value, int) or isinstance(value, long):
if isinstance(value, integer_types):
return value
second = 1000
minute = 60 * second
@@ -81,22 +83,38 @@ class Config(object):
def abspath(file_path):
return os.path.abspath(file_path) if file_path else file_path
@classmethod
def path_exists(cls, file_path):
"""Check if a file exists
Unlike os.path.exists, this throws an exception if there is an error
checking if the file exists (for example, if there is a perms error on
the parent dir).
Returns:
bool: True if the file exists; False if not.
"""
try:
os.stat(file_path)
return True
except OSError as e:
if e.errno != errno.ENOENT:
raise e
return False
@classmethod
def check_file(cls, file_path, config_name):
if file_path is None:
raise ConfigError(
"Missing config for %s."
" You must specify a path for the config file. You can "
"do this with the -c or --config-path option. "
"Adding --generate-config along with --server-name "
"<server name> will generate a config file at the given path."
% (config_name,)
)
if not os.path.exists(file_path):
try:
os.stat(file_path)
except OSError as e:
raise ConfigError(
"File %s config for %s doesn't exist."
" Try running again with --generate-config"
% (file_path, config_name,)
"Error accessing file '%s' (config for %s): %s"
% (file_path, config_name, e.strerror)
)
return cls.abspath(file_path)
@@ -248,7 +266,7 @@ class Config(object):
" -c CONFIG-FILE\""
)
(config_path,) = config_files
if not os.path.exists(config_path):
if not cls.path_exists(config_path):
if config_args.keys_directory:
config_dir_path = config_args.keys_directory
else:
@@ -261,33 +279,33 @@ class Config(object):
"Must specify a server_name to a generate config for."
" Pass -H server.name."
)
if not os.path.exists(config_dir_path):
if not cls.path_exists(config_dir_path):
os.makedirs(config_dir_path)
with open(config_path, "wb") as config_file:
config_bytes, config = obj.generate_config(
with open(config_path, "w") as config_file:
config_str, config = obj.generate_config(
config_dir_path=config_dir_path,
server_name=server_name,
report_stats=(config_args.report_stats == "yes"),
is_generating_file=True
)
obj.invoke_all("generate_files", config)
config_file.write(config_bytes)
print (
config_file.write(config_str)
print((
"A config file has been generated in %r for server name"
" %r with corresponding SSL keys and self-signed"
" certificates. Please review this file and customise it"
" to your needs."
) % (config_path, server_name)
print (
) % (config_path, server_name))
print(
"If this server name is incorrect, you will need to"
" regenerate the SSL certificates"
)
return
else:
print (
print((
"Config file %r already exists. Generating any missing key"
" files."
) % (config_path,)
) % (config_path,))
generate_keys = True
parser = argparse.ArgumentParser(

View File

@@ -17,10 +17,12 @@ from ._base import Config, ConfigError
from synapse.appservice import ApplicationService
from synapse.types import UserID
import urllib
import yaml
import logging
from six import string_types
from six.moves.urllib import parse as urlparse
logger = logging.getLogger(__name__)
@@ -89,21 +91,21 @@ def _load_appservice(hostname, as_info, config_filename):
"id", "as_token", "hs_token", "sender_localpart"
]
for field in required_string_fields:
if not isinstance(as_info.get(field), basestring):
if not isinstance(as_info.get(field), string_types):
raise KeyError("Required string field: '%s' (%s)" % (
field, config_filename,
))
# 'url' must either be a string or explicitly null, not missing
# to avoid accidentally turning off push for ASes.
if (not isinstance(as_info.get("url"), basestring) and
if (not isinstance(as_info.get("url"), string_types) and
as_info.get("url", "") is not None):
raise KeyError(
"Required string field or explicit null: 'url' (%s)" % (config_filename,)
)
localpart = as_info["sender_localpart"]
if urllib.quote(localpart) != localpart:
if urlparse.quote(localpart) != localpart:
raise ValueError(
"sender_localpart needs characters which are not URL encoded."
)
@@ -128,7 +130,7 @@ def _load_appservice(hostname, as_info, config_filename):
"Expected namespace entry in %s to be an object,"
" but got %s", ns, regex_obj
)
if not isinstance(regex_obj.get("regex"), basestring):
if not isinstance(regex_obj.get("regex"), string_types):
raise ValueError(
"Missing/bad type 'regex' key in %s", regex_obj
)
@@ -154,6 +156,7 @@ def _load_appservice(hostname, as_info, config_filename):
)
return ApplicationService(
token=as_info["as_token"],
hostname=hostname,
url=as_info["url"],
namespaces=as_info["namespaces"],
hs_token=as_info["hs_token"],

View File

@@ -41,7 +41,7 @@ class CasConfig(Config):
#cas_config:
# enabled: true
# server_url: "https://cas-server.com"
# service_url: "https://homesever.domain.com:8448"
# service_url: "https://homeserver.domain.com:8448"
# #required_attributes:
# # name: value
"""

View File

@@ -71,6 +71,15 @@ class EmailConfig(Config):
self.email_riot_base_url = email_config.get(
"riot_base_url", None
)
self.email_smtp_user = email_config.get(
"smtp_user", None
)
self.email_smtp_pass = email_config.get(
"smtp_pass", None
)
self.require_transport_security = email_config.get(
"require_transport_security", False
)
if "app_name" in email_config:
self.email_app_name = email_config["app_name"]
else:
@@ -91,10 +100,17 @@ class EmailConfig(Config):
# Defining a custom URL for Riot is only needed if email notifications
# should contain links to a self-hosted installation of Riot; when set
# the "app_name" setting is ignored.
#
# If your SMTP server requires authentication, the optional smtp_user &
# smtp_pass variables should be used
#
#email:
# enable_notifs: false
# smtp_host: "localhost"
# smtp_port: 25
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: False
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
# template_dir: res/templates

32
synapse/config/groups.py Normal file
View File

@@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class GroupsConfig(Config):
def read_config(self, config):
self.enable_group_creation = config.get("enable_group_creation", False)
self.group_creation_prefix = config.get("group_creation_prefix", "")
def default_config(self, **kwargs):
return """\
# Whether to allow non server admins to create groups on this server
enable_group_creation: false
# If enabled, non server admins can only create groups with local parts
# starting with this prefix
# group_creation_prefix: "unofficial/"
"""

View File

@@ -33,6 +33,10 @@ from .jwt import JWTConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .emailconfig import EmailConfig
from .workers import WorkerConfig
from .push import PushConfig
from .spam_checker import SpamCheckerConfig
from .groups import GroupsConfig
from .user_directory import UserDirectoryConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
@@ -40,7 +44,8 @@ class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
VoipConfig, RegistrationConfig, MetricsConfig, ApiConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
JWTConfig, PasswordConfig, EmailConfig,
WorkerConfig, PasswordAuthProviderConfig,):
WorkerConfig, PasswordAuthProviderConfig, PushConfig,
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,):
pass

View File

@@ -118,10 +118,9 @@ class KeyConfig(Config):
signing_keys = self.read_file(signing_key_path, "signing_key")
try:
return read_signing_keys(signing_keys.splitlines(True))
except Exception:
except Exception as e:
raise ConfigError(
"Error reading signing_key."
" Try running again with --generate-config"
"Error reading signing_key: %s" % (str(e))
)
def read_old_signing_keys(self, old_signing_keys):
@@ -141,7 +140,8 @@ class KeyConfig(Config):
def generate_files(self, config):
signing_key_path = config["signing_key_path"]
if not os.path.exists(signing_key_path):
if not self.path_exists(signing_key_path):
with open(signing_key_path, "w") as signing_key_file:
key_id = "a_" + random_string(4)
write_signing_keys(

View File

@@ -29,8 +29,8 @@ version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s\
- %(message)s'
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - \
%(request)s - %(message)s'
filters:
context:
@@ -74,17 +74,10 @@ class LoggingConfig(Config):
self.log_file = self.abspath(config.get("log_file"))
def default_config(self, config_dir_path, server_name, **kwargs):
log_file = self.abspath("homeserver.log")
log_config = self.abspath(
os.path.join(config_dir_path, server_name + ".log.config")
)
return """
# Logging verbosity level. Ignored if log_config is specified.
verbose: 0
# File to write logging to. Ignored if log_config is specified.
log_file: "%(log_file)s"
# A yaml python logging config file
log_config: "%(log_config)s"
""" % locals()
@@ -123,9 +116,10 @@ class LoggingConfig(Config):
def generate_files(self, config):
log_config = config.get("log_config")
if log_config and not os.path.exists(log_config):
with open(log_config, "wb") as log_config_file:
log_file = self.abspath("homeserver.log")
with open(log_config, "w") as log_config_file:
log_config_file.write(
DEFAULT_LOG_CONFIG.substitute(log_file=config["log_file"])
DEFAULT_LOG_CONFIG.substitute(log_file=log_file)
)
@@ -148,8 +142,11 @@ def setup_logging(config, use_worker_options=False):
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if log_config is None:
if log_config is None:
# We don't have a logfile, so fall back to the 'verbosity' param from
# the config or cmdline. (Note that we generate a log config for new
# installs, so this will be an unusual case)
level = logging.INFO
level_for_storage = logging.INFO
if config.verbosity:
@@ -157,11 +154,10 @@ def setup_logging(config, use_worker_options=False):
if config.verbosity > 1:
level_for_storage = logging.DEBUG
# FIXME: we need a logging.WARN for a -q quiet option
logger = logging.getLogger('')
logger.setLevel(level)
logging.getLogger('synapse.storage').setLevel(level_for_storage)
logging.getLogger('synapse.storage.SQL').setLevel(level_for_storage)
formatter = logging.Formatter(log_format)
if log_file:
@@ -176,6 +172,10 @@ def setup_logging(config, use_worker_options=False):
logger.info("Opened new log file due to SIGHUP")
else:
handler = logging.StreamHandler()
def sighup(signum, stack):
pass
handler.setFormatter(formatter)
handler.addFilter(LoggingContextFilter(request=""))

View File

@@ -13,44 +13,41 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from ._base import Config
import importlib
from synapse.util.module_loader import load_module
LDAP_PROVIDER = 'ldap_auth_provider.LdapAuthProvider'
class PasswordAuthProviderConfig(Config):
def read_config(self, config):
self.password_providers = []
providers = []
# We want to be backwards compatible with the old `ldap_config`
# param.
ldap_config = config.get("ldap_config", {})
self.ldap_enabled = ldap_config.get("enabled", False)
if self.ldap_enabled:
from ldap_auth_provider import LdapAuthProvider
parsed_config = LdapAuthProvider.parse_config(ldap_config)
self.password_providers.append((LdapAuthProvider, parsed_config))
if ldap_config.get("enabled", False):
providers.append({
'module': LDAP_PROVIDER,
'config': ldap_config,
})
providers = config.get("password_providers", [])
providers.extend(config.get("password_providers", []))
for provider in providers:
mod_name = provider['module']
# This is for backwards compat when the ldap auth provider resided
# in this package.
if provider['module'] == "synapse.util.ldap_auth_provider.LdapAuthProvider":
from ldap_auth_provider import LdapAuthProvider
provider_class = LdapAuthProvider
else:
# We need to import the module, and then pick the class out of
# that, so we split based on the last dot.
module, clz = provider['module'].rsplit(".", 1)
module = importlib.import_module(module)
provider_class = getattr(module, clz)
if mod_name == "synapse.util.ldap_auth_provider.LdapAuthProvider":
mod_name = LDAP_PROVIDER
(provider_class, provider_config) = load_module({
"module": mod_name,
"config": provider['config'],
})
try:
provider_config = provider_class.parse_config(provider["config"])
except Exception as e:
raise ConfigError(
"Failed to parse config for %r: %r" % (provider['module'], e)
)
self.password_providers.append((provider_class, provider_config))
def default_config(self, **kwargs):

61
synapse/config/push.py Normal file
View File

@@ -0,0 +1,61 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class PushConfig(Config):
def read_config(self, config):
push_config = config.get("push", {})
self.push_include_content = push_config.get("include_content", True)
# There was a 'redact_content' setting, but it was mistakenly read from the
# 'email' section. Check for the flag in the 'push' section, and log,
# but do not honour it, to avoid nasty surprises when people upgrade.
if push_config.get("redact_content") is not None:
print(
"The push.redact_content content option has never worked. "
"Please set push.include_content if you want this behaviour"
)
# Now check for the one in the 'email' section and honour it,
# with a warning.
push_config = config.get("email", {})
redact_content = push_config.get("redact_content")
if redact_content is not None:
print(
"The 'email.redact_content' option is deprecated: "
"please set push.include_content instead"
)
self.push_include_content = not redact_content
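The precedence implemented above is: push.include_content (defaulting to True) applies unless the deprecated email.redact_content is set, in which case that value wins, inverted. A small hypothetical distillation of the same logic, useful for checking what a given config resolves to:

def resolve_include_content(config):
    # Hypothetical helper mirroring read_config() above; not a Synapse API.
    include_content = config.get("push", {}).get("include_content", True)
    redact_content = config.get("email", {}).get("redact_content")
    if redact_content is not None:
        include_content = not redact_content  # deprecated option wins, inverted
    return include_content

print(resolve_include_content({}))                                    # True
print(resolve_include_content({"push": {"include_content": False}}))  # False
print(resolve_include_content({"email": {"redact_content": True}}))   # False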
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Clients requesting push notifications can either have the body of
# the message sent in the notification poke along with other details
# like the sender, or just the event ID and room ID (`event_id_only`).
# If clients choose the former, this option controls whether the
# notification request includes the content of the event (other details
# like the sender are still included). For `event_id_only` push, it
# has no effect.
# For modern Android devices the notification content will still appear
# because it is loaded by the app. iPhone, however, will send a
# notification saying only that a message arrived and who it came from.
#
#push:
# include_content: true
"""

View File

@@ -31,6 +31,8 @@ class RegistrationConfig(Config):
strtobool(str(config["disable_registration"]))
)
self.registrations_require_3pid = config.get("registrations_require_3pid", [])
self.allowed_local_3pids = config.get("allowed_local_3pids", [])
self.registration_shared_secret = config.get("registration_shared_secret")
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
@@ -41,6 +43,8 @@ class RegistrationConfig(Config):
self.allow_guest_access and config.get("invite_3pid_guest", False)
)
self.auto_join_rooms = config.get("auto_join_rooms", [])
def default_config(self, **kwargs):
registration_shared_secret = random_string_with_symbols(50)
@@ -50,13 +54,32 @@ class RegistrationConfig(Config):
# Enable registration for new users.
enable_registration: False
# The user must provide all of the below types of 3PID when registering.
#
# registrations_require_3pid:
# - email
# - msisdn
# Mandate that users are only allowed to associate certain formats of
# 3PIDs with accounts on this server.
#
# allowed_local_3pids:
# - medium: email
# pattern: ".*@matrix\\.org"
# - medium: email
# pattern: ".*@vector\\.im"
# - medium: msisdn
# pattern: "\\+44"
# If set, allows registration by anyone who also has the shared
# secret, even if registration is otherwise disabled.
registration_shared_secret: "%(registration_shared_secret)s"
# Set the number of bcrypt rounds used to generate password hash.
# Larger numbers increase the work factor needed to generate the hash.
# The default number of rounds is 12.
# The default number is 12 (which equates to 2^12 rounds).
# N.B. that increasing this will exponentially increase the time required
# to register or login - e.g. 24 => 2^24 rounds which will take >20 mins.
bcrypt_rounds: 12
# Allows users to register as guests without a password/email/etc, and
@@ -69,6 +92,12 @@ class RegistrationConfig(Config):
trusted_third_party_id_servers:
- matrix.org
- vector.im
- riot.im
# Users who register on this homeserver will automatically be joined
# to these rooms
#auto_join_rooms:
# - "#example:example.com"
""" % locals()
def add_arguments(self, parser):

View File

@@ -16,6 +16,8 @@
from ._base import Config, ConfigError
from collections import namedtuple
from synapse.util.module_loader import load_module
MISSING_NETADDR = (
"Missing netaddr library. This is required for URL preview API."
@@ -36,6 +38,14 @@ ThumbnailRequirement = namedtuple(
"ThumbnailRequirement", ["width", "height", "method", "media_type"]
)
MediaStorageProviderConfig = namedtuple(
"MediaStorageProviderConfig", (
"store_local", # Whether to store newly uploaded local files
"store_remote", # Whether to store newly downloaded remote files
"store_synchronous", # Whether to wait for successful storage for local uploads
),
)
def parse_thumbnail_requirements(thumbnail_sizes):
""" Takes a list of dictionaries with "width", "height", and "method" keys
@@ -70,7 +80,64 @@ class ContentRepositoryConfig(Config):
self.max_upload_size = self.parse_size(config["max_upload_size"])
self.max_image_pixels = self.parse_size(config["max_image_pixels"])
self.max_spider_size = self.parse_size(config["max_spider_size"])
self.media_store_path = self.ensure_directory(config["media_store_path"])
backup_media_store_path = config.get("backup_media_store_path")
synchronous_backup_media_store = config.get(
"synchronous_backup_media_store", False
)
storage_providers = config.get("media_storage_providers", [])
if backup_media_store_path:
if storage_providers:
raise ConfigError(
"Cannot use both 'backup_media_store_path' and 'storage_providers'"
)
storage_providers = [{
"module": "file_system",
"store_local": True,
"store_synchronous": synchronous_backup_media_store,
"store_remote": True,
"config": {
"directory": backup_media_store_path,
}
}]
# This is a list of configs that can be used to create the storage
# providers. The entries are tuples of (Class, class_config,
# MediaStorageProviderConfig), where Class is the class of the provider,
# class_config is the config to pass to it, and
# MediaStorageProviderConfig holds the options for StorageProviderWrapper.
#
# We don't create the storage providers here as not all workers need
# them to be started.
self.media_storage_providers = []
for provider_config in storage_providers:
# We special case the module "file_system" so as not to need to
# expose FileStorageProviderBackend
if provider_config["module"] == "file_system":
provider_config["module"] = (
"synapse.rest.media.v1.storage_provider"
".FileStorageProviderBackend"
)
provider_class, parsed_config = load_module(provider_config)
wrapper_config = MediaStorageProviderConfig(
provider_config.get("store_local", False),
provider_config.get("store_remote", False),
provider_config.get("store_synchronous", False),
)
self.media_storage_providers.append(
(provider_class, parsed_config, wrapper_config,)
)
self.uploads_path = self.ensure_directory(config["uploads_path"])
self.dynamic_thumbnails = config["dynamic_thumbnails"]
self.thumbnail_requirements = parse_thumbnail_requirements(
@@ -115,6 +182,20 @@ class ContentRepositoryConfig(Config):
# Directory where uploaded images and attachments are stored.
media_store_path: "%(media_store)s"
# Media storage providers allow media to be stored in different
# locations.
# media_storage_providers:
# - module: file_system
# # Whether to write new local files.
# store_local: false
# # Whether to write new remote media
# store_remote: false
# # Whether to block upload requests waiting for write to this
# # provider to complete
# store_synchronous: false
# config:
# directory: /mnt/some/other/directory
# Directory where in-progress uploads are stored.
uploads_path: "%(uploads_path)s"

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -29,12 +30,42 @@ class ServerConfig(Config):
self.user_agent_suffix = config.get("user_agent_suffix")
self.use_frozen_dicts = config.get("use_frozen_dicts", False)
self.public_baseurl = config.get("public_baseurl")
self.cpu_affinity = config.get("cpu_affinity")
# Whether to send federation traffic out in this process. This only
# applies to some federation traffic, and so shouldn't be used to
# "disable" federation
self.send_federation = config.get("send_federation", True)
# Whether to update the user directory or not. This should be set to
# false only if we are updating the user directory in a worker
self.update_user_directory = config.get("update_user_directory", True)
# whether to enable the media repository endpoints. This should be set
# to false if the media repository is running as a separate endpoint;
# doing so ensures that we will not run cache cleanup jobs on the
# master, potentially causing inconsistency.
self.enable_media_repo = config.get("enable_media_repo", True)
self.filter_timeline_limit = config.get("filter_timeline_limit", -1)
# Whether we should block invites sent to users on this server
# (other than those sent by local server admins)
self.block_non_admin_invites = config.get(
"block_non_admin_invites", False,
)
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist = None
federation_domain_whitelist = config.get(
"federation_domain_whitelist", None
)
# turn the whitelist into a hash for speed of lookup
if federation_domain_whitelist is not None:
self.federation_domain_whitelist = {}
for domain in federation_domain_whitelist:
self.federation_domain_whitelist[domain] = True
if self.public_baseurl is not None:
if self.public_baseurl[-1] != '/':
self.public_baseurl += '/'
@@ -141,9 +172,36 @@ class ServerConfig(Config):
# When running as a daemon, the file to store the pid in
pid_file: %(pid_file)s
# CPU affinity mask. Setting this restricts the CPUs on which the
# process will be scheduled. It is represented as a bitmask, with the
# lowest order bit corresponding to the first logical CPU and the
# highest order bit corresponding to the last logical CPU. Not all CPUs
# may exist on a given system but a mask may specify more CPUs than are
# present.
#
# For example:
# 0x00000001 is processor #0,
# 0x00000003 is processors #0 and #1,
# 0xFFFFFFFF is all processors (#0 through #31).
#
# Pinning a Python process to a single CPU is desirable, because Python
# is inherently single-threaded due to the GIL, and can suffer a
# 30-40%% slowdown due to cache blow-out and thread context switching
# if the scheduler happens to schedule the underlying threads across
# different cores. See
# https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
#
# cpu_affinity: 0xFFFFFFFF
# Whether to serve a web client from the HTTP/HTTPS root resource.
web_client: True
# The root directory to serve for the above web client.
# If left undefined, synapse will serve the matrix-angular-sdk web client.
# Make sure matrix-angular-sdk is installed with pip if web_client is True
# and web_client_location is undefined
# web_client_location: "/path/to/web/root"
# The public-facing base URL for the client API (not including _matrix/...)
# public_baseurl: https://example.com:8448/
@@ -155,6 +213,25 @@ class ServerConfig(Config):
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
# gc_thresholds: [700, 10, 10]
# Set the limit on the returned events in the timeline in the get
# and sync operations. The default value is -1, means no upper limit.
# filter_timeline_limit: 5000
# Whether room invites to users on this server should be blocked
# (except those sent by local server admins). The default is False.
# block_non_admin_invites: True
# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.
#
# federation_domain_whitelist:
# - lon.example.com
# - nyc.example.com
# - syd.example.com
# List of ports that Synapse should listen on, their purpose and their
# configuration.
listeners:
@@ -165,13 +242,12 @@ class ServerConfig(Config):
port: %(bind_port)s
# Local addresses to listen on.
# This will listen on all IPv4 addresses by default.
# On Linux and Mac OS, `::` will listen on all IPv4 and IPv6
# addresses by default. For most other OSes, this will only listen
# on IPv6.
bind_addresses:
- '::'
- '0.0.0.0'
# Uncomment to listen on all IPv6 interfaces
# N.B: On at least Linux this will also listen on all IPv4
# addresses, so you will need to comment out the line above.
# - '::'
# This is a 'http' listener, allows us to specify 'resources'.
type: http
@@ -198,11 +274,18 @@ class ServerConfig(Config):
- names: [federation] # Federation APIs
compress: false
# optional list of additional endpoints which can be loaded via
# dynamic modules
# additional_resources:
# "/_matrix/my/custom/endpoint":
# module: my_module.CustomRequestHandler
# config: {}
# Unsecure HTTP listener,
# For when matrix traffic passes through a load balancer that unwraps TLS.
- port: %(unsecure_port)s
tls: false
bind_addresses: ['0.0.0.0']
bind_addresses: ['::', '0.0.0.0']
type: http
x_forwarded: false
@@ -216,7 +299,7 @@ class ServerConfig(Config):
# Turn on the twisted ssh manhole service on localhost on the given
# port.
# - port: 9000
# bind_address: 127.0.0.1
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
""" % locals()
@@ -254,7 +337,7 @@ def read_gc_thresholds(thresholds):
return (
int(thresholds[0]), int(thresholds[1]), int(thresholds[2]),
)
except:
except Exception:
raise ConfigError(
"Value of `gc_threshold` must be a list of three integers if set"
)
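As a worked illustration of the cpu_affinity bitmask described in the comment above, this standalone sketch decodes a hex mask into the set of CPU indices it selects; applying it with os.sched_setaffinity is mentioned only as one Linux-specific possibility and is an assumption, not necessarily how Synapse applies the setting:

def mask_to_cpus(mask):
    """Decode a cpu_affinity bitmask into the set of CPU indices it selects."""
    cpus = set()
    index = 0
    while mask:
        if mask & 1:
            cpus.add(index)
        mask >>= 1
        index += 1
    return cpus

print(mask_to_cpus(0x00000001))  # {0}
print(mask_to_cpus(0x00000003))  # {0, 1}
# On Linux, os.sched_setaffinity(0, mask_to_cpus(0x00000001)) would pin the
# current process to CPU 0 (illustrative only).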

View File

@@ -0,0 +1,35 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util.module_loader import load_module
from ._base import Config
class SpamCheckerConfig(Config):
def read_config(self, config):
self.spam_checker = None
provider = config.get("spam_checker", None)
if provider is not None:
self.spam_checker = load_module(provider)
def default_config(self, **kwargs):
return """\
# spam_checker:
# module: "my_custom_project.SuperSpamChecker"
# config:
# example_option: 'things'
"""

View File

@@ -96,7 +96,7 @@ class TlsConfig(Config):
# certificates returned by this server match one of the fingerprints.
#
# Synapse automatically adds the fingerprint of its own certificate
# to the list. So if federation traffic is handle directly by synapse
# to the list. So if federation traffic is handled directly by synapse
# then no modification to the list is required.
#
# If synapse is run behind a load balancer that handles the TLS then it
@@ -109,6 +109,12 @@ class TlsConfig(Config):
# key. It may be necessary to publish the fingerprints of a new
# certificate and wait until the "valid_until_ts" of the previous key
# responses have passed before deploying it.
#
# You can calculate a fingerprint from a given TLS listener via:
# openssl s_client -connect $host:$port < /dev/null 2> /dev/null |
# openssl x509 -outform DER | openssl sha256 -binary | base64 | tr -d '='
# or by checking matrix.org/federationtester/api/report?server_name=$host
#
tls_fingerprints: []
# tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]
""" % locals()
@@ -126,8 +132,8 @@ class TlsConfig(Config):
tls_private_key_path = config["tls_private_key_path"]
tls_dh_params_path = config["tls_dh_params_path"]
if not os.path.exists(tls_private_key_path):
with open(tls_private_key_path, "w") as private_key_file:
if not self.path_exists(tls_private_key_path):
with open(tls_private_key_path, "wb") as private_key_file:
tls_private_key = crypto.PKey()
tls_private_key.generate_key(crypto.TYPE_RSA, 2048)
private_key_pem = crypto.dump_privatekey(
@@ -141,8 +147,8 @@ class TlsConfig(Config):
crypto.FILETYPE_PEM, private_key_pem
)
if not os.path.exists(tls_certificate_path):
with open(tls_certificate_path, "w") as certificate_file:
if not self.path_exists(tls_certificate_path):
with open(tls_certificate_path, "wb") as certificate_file:
cert = crypto.X509()
subject = cert.get_subject()
subject.CN = config["server_name"]
@@ -159,7 +165,7 @@ class TlsConfig(Config):
certificate_file.write(cert_pem)
if not os.path.exists(tls_dh_params_path):
if not self.path_exists(tls_dh_params_path):
if GENERATE_DH_PARAMS:
subprocess.check_call([
"openssl", "dhparam",

View File

@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class UserDirectoryConfig(Config):
"""User Directory Configuration
Configuration for the behaviour of the /user_directory API
"""
def read_config(self, config):
self.user_directory_search_all_users = False
user_directory_config = config.get("user_directory", None)
if user_directory_config:
self.user_directory_search_all_users = (
user_directory_config.get("search_all_users", False)
)
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# User Directory configuration
#
# 'search_all_users' defines whether to search all users visible to your HS
# when searching the user directory, rather than limiting to users visible
# in public rooms. Defaults to false. If you set it True, you'll have to run
# UPDATE user_directory_stream_pos SET stream_id = NULL;
# on your database to tell it to rebuild the user_directory search indexes.
#
#user_directory:
# search_all_users: false
"""

View File

@@ -23,6 +23,7 @@ class VoipConfig(Config):
self.turn_username = config.get("turn_username")
self.turn_password = config.get("turn_password")
self.turn_user_lifetime = self.parse_duration(config["turn_user_lifetime"])
self.turn_allow_guests = config.get("turn_allow_guests", True)
def default_config(self, **kwargs):
return """\
@@ -41,4 +42,11 @@ class VoipConfig(Config):
# How long generated TURN credentials last
turn_user_lifetime: "1h"
# Whether guests should be allowed to use the TURN server.
# This defaults to True, otherwise VoIP will be unreliable for guests.
# However, it does introduce a slight security risk as it allows users to
# connect to arbitrary endpoints without having first signed up for a
# valid account (e.g. by passing a CAPTCHA).
turn_allow_guests: True
"""

View File

@@ -23,12 +23,30 @@ class WorkerConfig(Config):
def read_config(self, config):
self.worker_app = config.get("worker_app")
# Canonicalise worker_app so that master always has None
if self.worker_app == "synapse.app.homeserver":
self.worker_app = None
self.worker_listeners = config.get("worker_listeners")
self.worker_daemonize = config.get("worker_daemonize")
self.worker_pid_file = config.get("worker_pid_file")
self.worker_log_file = config.get("worker_log_file")
self.worker_log_config = config.get("worker_log_config")
self.worker_replication_url = config.get("worker_replication_url")
# The host used to connect to the main synapse
self.worker_replication_host = config.get("worker_replication_host", None)
# The port on the main synapse for TCP replication
self.worker_replication_port = config.get("worker_replication_port", None)
# The port on the main synapse for HTTP replication endpoint
self.worker_replication_http_port = config.get("worker_replication_http_port")
self.worker_name = config.get("worker_name", self.worker_app)
self.worker_main_http_uri = config.get("worker_main_http_uri", None)
self.worker_cpu_affinity = config.get("worker_cpu_affinity")
if self.worker_listeners:
for listener in self.worker_listeners:

View File

@@ -13,8 +13,8 @@
# limitations under the License.
from twisted.internet import ssl
from OpenSSL import SSL
from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName
from OpenSSL import SSL, crypto
from twisted.internet._sslverify import _defaultCurveName
import logging
@@ -32,9 +32,10 @@ class ServerContextFactory(ssl.ContextFactory):
@staticmethod
def configure_context(context, config):
try:
_ecCurve = _OpenSSLECCurve(_defaultCurveName)
_ecCurve.addECKeyToContext(context)
except:
_ecCurve = crypto.get_elliptic_curve(_defaultCurveName)
context.set_tmp_ecdh(_ecCurve)
except Exception:
logger.exception("Failed to enable elliptic curve for TLS")
context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
context.use_certificate_chain_file(config.tls_certificate_file)
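For reference, the replacement pattern above (pyOpenSSL's get_elliptic_curve plus set_tmp_ecdh instead of Twisted's private _OpenSSLECCurve helper) looks like this as a standalone sketch; the 'prime256v1' literal is an assumption standing in for twisted's _defaultCurveName:

from OpenSSL import SSL, crypto

context = SSL.Context(SSL.SSLv23_METHOD)
try:
    # Enable ephemeral ECDH key exchange for forward secrecy.
    ec_curve = crypto.get_elliptic_curve("prime256v1")  # assumed curve name
    context.set_tmp_ecdh(ec_curve)
except Exception:
    # The real code logs and carries on; failure here is not fatal.
    pass
context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)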

View File

@@ -32,18 +32,25 @@ def check_event_content_hash(event, hash_algorithm=hashlib.sha256):
"""Check whether the hash for this PDU matches the contents"""
name, expected_hash = compute_content_hash(event, hash_algorithm)
logger.debug("Expecting hash: %s", encode_base64(expected_hash))
if name not in event.hashes:
# some malformed events lack a 'hashes'. Protect against it being missing
# or a weird type by basically treating it the same as an unhashed event.
hashes = event.get("hashes")
if not isinstance(hashes, dict):
raise SynapseError(400, "Malformed 'hashes'", Codes.UNAUTHORIZED)
if name not in hashes:
raise SynapseError(
400,
"Algorithm %s not in hashes %s" % (
name, list(event.hashes),
name, list(hashes),
),
Codes.UNAUTHORIZED,
)
message_hash_base64 = event.hashes[name]
message_hash_base64 = hashes[name]
try:
message_hash_bytes = decode_base64(message_hash_base64)
except:
except Exception:
raise SynapseError(
400,
"Invalid base64: %s" % (message_hash_base64,),

View File

@@ -13,14 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util import logcontext
from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util.logcontext import (
preserve_context_over_fn, preserve_context_over_deferred
)
import simplejson as json
import logging
@@ -43,14 +40,10 @@ def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1):
for i in range(5):
try:
protocol = yield preserve_context_over_fn(
endpoint.connect, factory
)
server_response, server_certificate = yield preserve_context_over_deferred(
protocol.remote_key
)
with logcontext.PreserveLoggingContext():
protocol = yield endpoint.connect(factory)
server_response, server_certificate = yield protocol.remote_key
defer.returnValue((server_response, server_certificate))
return
except SynapseKeyClientError as e:
logger.exception("Error getting key for %r" % (server_name,))
if e.status.startswith("4"):

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,11 +16,11 @@
from synapse.crypto.keyclient import fetch_server_key
from synapse.api.errors import SynapseError, Codes
from synapse.util import unwrapFirstError
from synapse.util.async import ObservableDeferred
from synapse.util import unwrapFirstError, logcontext
from synapse.util.logcontext import (
preserve_context_over_deferred, preserve_context_over_fn, PreserveLoggingContext,
preserve_fn
PreserveLoggingContext,
preserve_fn,
run_in_background,
)
from synapse.util.metrics import Measure
@@ -57,7 +58,8 @@ Attributes:
json_object(dict): The JSON object to verify.
deferred(twisted.internet.defer.Deferred):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched
a verify key has been fetched. The deferreds' callbacks are run with no
logcontext.
"""
@@ -74,23 +76,32 @@ class Keyring(object):
self.perspective_servers = self.config.perspectives
self.hs = hs
# map from server name to Deferred. Has an entry for each server with
# an ongoing key download; the Deferred completes once the download
# completes.
#
# These are regular, logcontext-agnostic Deferreds.
self.key_downloads = {}
def verify_json_for_server(self, server_name, json_object):
return self.verify_json_objects_for_server(
return logcontext.make_deferred_yieldable(
self.verify_json_objects_for_server(
[(server_name, json_object)]
)[0]
)
def verify_json_objects_for_server(self, server_and_json):
"""Bulk verfies signatures of json objects, bulk fetching keys as
"""Bulk verifies signatures of json objects, bulk fetching keys as
necessary.
Args:
server_and_json (list): List of pairs of (server_name, json_object)
Returns:
list of deferreds indicating success or failure to verify each
json object's signature for the given server_name.
List<Deferred>: for each input pair, a deferred indicating success
or failure to verify each json object's signature for the given
server_name. The deferreds run their callbacks in the sentinel
logcontext.
"""
verify_requests = []
@@ -117,73 +128,60 @@ class Keyring(object):
verify_requests.append(verify_request)
run_in_background(self._start_key_lookups, verify_requests)
# Pass those keys to handle_key_deferred so that the json object
# signatures can be verified
handle = preserve_fn(_handle_key_deferred)
return [
handle(rq) for rq in verify_requests
]
@defer.inlineCallbacks
def handle_key_deferred(verify_request):
server_name = verify_request.server_name
def _start_key_lookups(self, verify_requests):
"""Sets off the key fetches for each verify request
Once each fetch completes, verify_request.deferred will be resolved.
Args:
verify_requests (List[VerifyKeyRequest]):
"""
try:
_, key_id, verify_key = yield verify_request.deferred
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
502,
"Error downloading keys for %s" % (server_name,),
Codes.UNAUTHORIZED,
)
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
401,
"No key for %s with id %s" % (server_name, key_ids),
Codes.UNAUTHORIZED,
)
json_object = verify_request.json_object
logger.debug("Got key %s %s:%s for server %s, verifying" % (
key_id, verify_key.alg, verify_key.version, server_name,
))
try:
verify_signed_json(json_object, server_name, verify_key)
except:
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
),
Codes.UNAUTHORIZED,
)
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
server_name: defer.Deferred()
for server_name, _ in server_and_json
rq.server_name: defer.Deferred()
for rq in verify_requests
}
with PreserveLoggingContext():
# We want to wait for any previous lookups to complete before
# proceeding.
wait_on_deferred = self.wait_for_previous_lookups(
[server_name for server_name, _ in server_and_json],
yield self.wait_for_previous_lookups(
[rq.server_name for rq in verify_requests],
server_to_deferred,
)
# Actually start fetching keys.
wait_on_deferred.addBoth(
lambda _: self.get_server_verify_keys(verify_requests)
)
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
server_to_request_ids = {}
def remove_deferreds(res, server_name, verify_request):
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
@@ -193,17 +191,11 @@ class Keyring(object):
return res
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
deferred.addBoth(remove_deferreds, server_name, verify_request)
# Pass those keys to handle_key_deferred so that the json object
# signatures can be verified
return [
preserve_context_over_fn(handle_key_deferred, verify_request)
for verify_request in verify_requests
]
verify_request.deferred.addBoth(
remove_deferreds, verify_request,
)
except Exception:
logger.exception("Error starting key lookups")
@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_names, server_to_deferred):
@@ -212,7 +204,13 @@ class Keyring(object):
Args:
server_names (list): list of server_names we want to lookup
server_to_deferred (dict): server_name to deferred which gets
resolved once we've finished looking up keys for that server
resolved once we've finished looking up keys for that server.
The Deferreds should be regular twisted ones which call their
callbacks with no logcontext.
Returns: a Deferred which resolves once all key lookups for the given
servers have completed. Follows the synapse rules of logcontext
preservation.
"""
while True:
wait_on = [
@@ -226,17 +224,15 @@ class Keyring(object):
else:
break
for server_name, deferred in server_to_deferred.items():
d = ObservableDeferred(preserve_context_over_deferred(deferred))
self.key_downloads[server_name] = d
def rm(r, server_name):
self.key_downloads.pop(server_name, None)
def rm(r, server_name_):
self.key_downloads.pop(server_name_, None)
return r
d.addBoth(rm, server_name)
for server_name, deferred in server_to_deferred.items():
self.key_downloads[server_name] = deferred
deferred.addBoth(rm, server_name)
def get_server_verify_keys(self, verify_requests):
def _get_server_verify_keys(self, verify_requests):
"""Tries to find at least one key for each verify request
For each verify_request, verify_request.deferred is called back with
@@ -305,7 +301,8 @@ class Keyring(object):
if not missing_keys:
break
for verify_request in requests_missing_keys.values():
with PreserveLoggingContext():
for verify_request in requests_missing_keys:
verify_request.deferred.errback(SynapseError(
401,
"No key for %s with id %s" % (
@@ -315,11 +312,12 @@ class Keyring(object):
))
def on_err(err):
with PreserveLoggingContext():
for verify_request in verify_requests:
if not verify_request.deferred.called:
verify_request.deferred.errback(err)
do_iterations().addErrback(on_err)
run_in_background(do_iterations).addErrback(on_err)
@defer.inlineCallbacks
def get_keys_from_store(self, server_name_and_key_ids):
@@ -333,15 +331,16 @@ class Keyring(object):
Deferred: resolves to dict[str, dict[str, VerifyKey]]: map from
server_name -> key_id -> VerifyKey
"""
res = yield preserve_context_over_deferred(defer.gatherResults(
res = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.get_server_verify_keys)(
server_name, key_ids
run_in_background(
self.store.get_server_verify_keys,
server_name, key_ids,
).addCallback(lambda ks, server: (server, ks), server_name)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(dict(res))
@@ -358,17 +357,17 @@ class Keyring(object):
logger.exception(
"Unable to get key from %r: %s %s",
perspective_name,
type(e).__name__, str(e.message),
type(e).__name__, str(e),
)
defer.returnValue({})
results = yield preserve_context_over_deferred(defer.gatherResults(
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(p_name, p_keys)
run_in_background(get_key, p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
union_of_keys = {}
for result in results:
@@ -390,7 +389,7 @@ class Keyring(object):
logger.info(
"Unable to get key %r for %r directly: %s %s",
key_ids, server_name,
type(e).__name__, str(e.message),
type(e).__name__, str(e),
)
if not keys:
@@ -402,13 +401,13 @@ class Keyring(object):
defer.returnValue(keys)
results = yield preserve_context_over_deferred(defer.gatherResults(
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(server_name, key_ids)
run_in_background(get_key, server_name, key_ids)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
merged = {}
for result in results:
@@ -485,9 +484,10 @@ class Keyring(object):
for server_name, response_keys in processed_response.items():
keys.setdefault(server_name, {}).update(response_keys)
yield preserve_context_over_deferred(defer.gatherResults(
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
run_in_background(
self.store_keys,
server_name=server_name,
from_server=perspective_name,
verify_keys=response_keys,
@@ -495,7 +495,7 @@ class Keyring(object):
for server_name, response_keys in keys.items()
],
consumeErrors=True
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(keys)
@@ -543,9 +543,10 @@ class Keyring(object):
keys.update(response_keys)
yield preserve_context_over_deferred(defer.gatherResults(
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
run_in_background(
self.store_keys,
server_name=key_server_name,
from_server=server_name,
verify_keys=verify_keys,
@@ -553,7 +554,7 @@ class Keyring(object):
for key_server_name, verify_keys in keys.items()
],
consumeErrors=True
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(keys)
@@ -619,9 +620,10 @@ class Keyring(object):
response_keys.update(verify_keys)
response_keys.update(old_verify_keys)
yield preserve_context_over_deferred(defer.gatherResults(
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_keys_json)(
run_in_background(
self.store.store_server_keys_json,
server_name=server_name,
key_id=key_id,
from_server=server_name,
@@ -632,7 +634,7 @@ class Keyring(object):
for key_id in updated_key_ids
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
results[server_name] = response_keys
@@ -710,7 +712,6 @@ class Keyring(object):
defer.returnValue(verify_keys)
@defer.inlineCallbacks
def store_keys(self, server_name, from_server, verify_keys):
"""Store a collection of verify keys for a given server
Args:
@@ -721,12 +722,57 @@ class Keyring(object):
A deferred that completes when the keys are stored.
"""
# TODO(markjh): Store whether the keys have expired.
yield preserve_context_over_deferred(defer.gatherResults(
return logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_verify_key)(
run_in_background(
self.store.store_server_verify_key,
server_name, server_name, key.time_added, key
)
for key_id, key in verify_keys.items()
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
@defer.inlineCallbacks
def _handle_key_deferred(verify_request):
server_name = verify_request.server_name
try:
with PreserveLoggingContext():
_, key_id, verify_key = yield verify_request.deferred
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e),
)
raise SynapseError(
502,
"Error downloading keys for %s" % (server_name,),
Codes.UNAUTHORIZED,
)
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e),
)
raise SynapseError(
401,
"No key for %s with id %s" % (server_name, verify_request.key_ids),
Codes.UNAUTHORIZED,
)
json_object = verify_request.json_object
logger.debug("Got key %s %s:%s for server %s, verifying" % (
key_id, verify_key.alg, verify_key.version, server_name,
))
try:
verify_signed_json(json_object, server_name, verify_key)
except Exception:
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
),
Codes.UNAUTHORIZED,
)
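Most of the changes in this file follow one pattern: start each piece of work with run_in_background and wait for the batch with make_deferred_yieldable, instead of preserve_fn and preserve_context_over_deferred. A minimal sketch of that shape, where fetch_one is a hypothetical callable returning a Deferred:

from twisted.internet import defer
from synapse.util import unwrapFirstError
from synapse.util.logcontext import make_deferred_yieldable, run_in_background

@defer.inlineCallbacks
def fetch_all(items, fetch_one):
    # fetch_one is a hypothetical callable returning a Deferred. Each call is
    # started in the background so our logcontext does not leak into it...
    deferreds = [run_in_background(fetch_one, item) for item in items]
    # ...and we restore our own logcontext while waiting for the whole batch.
    results = yield make_deferred_yieldable(defer.gatherResults(
        deferreds,
        consumeErrors=True,
    ).addErrback(unwrapFirstError))
    defer.returnValue(results)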

View File

@@ -319,7 +319,7 @@ def _is_membership_change_allowed(event, auth_events):
# TODO (erikj): Implement kicks.
if target_banned and user_level < ban_level:
raise AuthError(
403, "You cannot unban user &s." % (target_user_id,)
403, "You cannot unban user %s." % (target_user_id,)
)
elif target_user_id != event.user_id:
kick_level = _get_named_level(auth_events, "kick", 50)
@@ -443,12 +443,12 @@ def _check_power_levels(event, auth_events):
for k, v in user_list.items():
try:
UserID.from_string(k)
except:
except Exception:
raise SynapseError(400, "Not a valid user_id: %s" % (k,))
try:
int(v)
except:
except Exception:
raise SynapseError(400, "Not a valid power level: %s" % (v,))
key = (event.type, event.state_key, )
@@ -470,14 +470,14 @@ def _check_power_levels(event, auth_events):
("invite", None),
]
old_list = current_state.content.get("users")
old_list = current_state.content.get("users", {})
for user in set(old_list.keys() + user_list.keys()):
levels_to_check.append(
(user, "users")
)
old_list = current_state.content.get("events")
new_list = event.content.get("events")
old_list = current_state.content.get("events", {})
new_list = event.content.get("events", {})
for ev_id in set(old_list.keys() + new_list.keys()):
levels_to_check.append(
(ev_id, "events")

View File

@@ -47,14 +47,26 @@ class _EventInternalMetadata(object):
def _event_dict_property(key):
# We want to be able to use hasattr with the event dict properties.
# However, (on python3) hasattr expects AttributeError to be raised. Hence,
# we need to transform the KeyError into an AttributeError
def getter(self):
try:
return self._event_dict[key]
except KeyError:
raise AttributeError(key)
def setter(self, v):
try:
self._event_dict[key] = v
except KeyError:
raise AttributeError(key)
def delete(self):
try:
del self._event_dict[key]
except KeyError:
raise AttributeError(key)
return property(
getter,
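A short standalone illustration of why the KeyError-to-AttributeError translation above matters: on Python 3, hasattr only treats AttributeError as "attribute missing", so without the translation a missing dict key would escape from hasattr as a KeyError. The Demo class below is hypothetical and only mimics the property factory:

def _dict_property(key):
    def getter(self):
        try:
            return self._event_dict[key]
        except KeyError:
            raise AttributeError(key)
    return property(getter)

class Demo(object):
    origin = _dict_property("origin")

    def __init__(self, event_dict):
        self._event_dict = event_dict

print(hasattr(Demo({"origin": "example.com"}), "origin"))  # True
print(hasattr(Demo({}), "origin"))  # False, instead of a KeyError escaping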

View File

@@ -55,7 +55,7 @@ class EventBuilderFactory(object):
local_part = str(int(self.clock.time())) + i + random_string(5)
e_id = EventID.create(local_part, self.hostname)
e_id = EventID(local_part, self.hostname)
return e_id.to_string()

View File

@@ -13,6 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from frozendict import frozendict
class EventContext(object):
"""
@@ -25,7 +29,9 @@ class EventContext(object):
The current state map excluding the current event.
(type, state_key) -> event_id
state_group (int): state group id
state_group (int|None): state group id, if the state has been stored
as a state group. This is usually only None if e.g. the event is
an outlier.
rejected (bool|str): A rejection reason if the event was rejected, else
False
@@ -46,10 +52,10 @@ class EventContext(object):
"prev_state_ids",
"state_group",
"rejected",
"push_actions",
"prev_group",
"delta_ids",
"prev_state_events",
"app_service",
]
def __init__(self):
@@ -60,7 +66,6 @@ class EventContext(object):
self.state_group = None
self.rejected = False
self.push_actions = []
# A previously persisted state group and a delta between that
# and this state.
@@ -68,3 +73,100 @@ class EventContext(object):
self.delta_ids = None
self.prev_state_events = None
self.app_service = None
def serialize(self, event):
"""Converts self to a type that can be serialized as JSON, and then
deserialized by `deserialize`
Args:
event (FrozenEvent): The event that this context relates to
Returns:
dict
"""
# We don't serialize the full state dicts, instead they get pulled out
# of the DB on the other side. However, the other side can't figure out
# the prev_state_ids, so if we're a state event we include the event
# id that we replaced in the state.
if event.is_state():
prev_state_id = self.prev_state_ids.get((event.type, event.state_key))
else:
prev_state_id = None
return {
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
"state_group": self.state_group,
"rejected": self.rejected,
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None
}
@staticmethod
@defer.inlineCallbacks
def deserialize(store, input):
"""Converts a dict that was produced by `serialize` back into a
EventContext.
Args:
store (DataStore): Used to convert AS ID to AS object
input (dict): A dict produced by `serialize`
Returns:
EventContext
"""
context = EventContext()
context.state_group = input["state_group"]
context.rejected = input["rejected"]
context.prev_group = input["prev_group"]
context.delta_ids = _decode_state_dict(input["delta_ids"])
context.prev_state_events = input["prev_state_events"]
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
prev_state_id = input["prev_state_id"]
event_type = input["event_type"]
event_state_key = input["event_state_key"]
context.current_state_ids = yield store.get_state_ids_for_group(
context.state_group,
)
if prev_state_id and event_state_key:
context.prev_state_ids = dict(context.current_state_ids)
context.prev_state_ids[(event_type, event_state_key)] = prev_state_id
else:
context.prev_state_ids = context.current_state_ids
app_service_id = input["app_service_id"]
if app_service_id:
context.app_service = store.get_app_service_by_id(app_service_id)
defer.returnValue(context)
def _encode_state_dict(state_dict):
"""Since dicts of (type, state_key) -> event_id cannot be serialized in
JSON we need to convert them to a form that can.
"""
if state_dict is None:
return None
return [
(etype, state_key, v)
for (etype, state_key), v in state_dict.iteritems()
]
def _decode_state_dict(input):
"""Decodes a state dict encoded using `_encode_state_dict` above
"""
if input is None:
return None
return frozendict({(etype, state_key,): v for etype, state_key, v in input})
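A small round-trip example of the state-dict encoding above, using a simplified standalone copy of _encode_state_dict/_decode_state_dict (items() and a plain dict instead of iteritems() and frozendict):

def encode_state_dict(state_dict):
    """Simplified copy of _encode_state_dict: flatten the tuple keys."""
    if state_dict is None:
        return None
    return [
        (etype, state_key, v)
        for (etype, state_key), v in state_dict.items()
    ]

def decode_state_dict(encoded):
    """Simplified copy of _decode_state_dict: rebuild the tuple-keyed dict."""
    if encoded is None:
        return None
    return {(etype, state_key): v for etype, state_key, v in encoded}

state = {("m.room.member", "@alice:example.com"): "$event_a"}
assert decode_state_dict(encode_state_dict(state)) == state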

synapse/events/spamcheck.py (new file, 113 lines)
View File

@@ -0,0 +1,113 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class SpamChecker(object):
def __init__(self, hs):
self.spam_checker = None
module = None
config = None
try:
module, config = hs.config.spam_checker
except Exception:
pass
if module is not None:
self.spam_checker = module(config=config)
def check_event_for_spam(self, event):
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
sent by a local user. If it is sent by a user on another server, then
users receive a blank event.
Args:
event (synapse.events.EventBase): the event to be checked
Returns:
bool: True if the event is spammy.
"""
if self.spam_checker is None:
return False
return self.spam_checker.check_event_for_spam(event)
def user_may_invite(self, inviter_userid, invitee_userid, room_id):
"""Checks if a given user may send an invite
If this method returns false, the invite will be rejected.
Args:
inviter_userid (str): The user ID of the user sending the invitation
invitee_userid (str): The user ID of the invited user
room_id (str): The room ID of the invitation
Returns:
bool: True if the user may send an invite, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_invite(inviter_userid, invitee_userid, room_id)
def user_may_create_room(self, userid):
"""Checks if a given user may create a room
If this method returns false, the creation request will be rejected.
Args:
userid (string): The sender's user ID
Returns:
bool: True if the user may create a room, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_create_room(userid)
def user_may_create_room_alias(self, userid, room_alias):
"""Checks if a given user may create a room alias
If this method returns false, the association request will be rejected.
Args:
userid (string): The sender's user ID
room_alias (string): The alias to be created
Returns:
bool: True if the user may create a room alias, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_create_room_alias(userid, room_alias)
def user_may_publish_room(self, userid, room_id):
"""Checks if a given user may publish a room to the directory
If this method returns false, the publish request will be rejected.
Args:
userid (string): The sender's user ID
room_id (string): The ID of the room that would be published
Returns:
bool: True if the user may publish the room, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_publish_room(userid, room_id)
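Tying this back to the spam_checker config block earlier in this set of changes: the configured module is instantiated as module(config=config) and is expected to provide the five hooks above. A minimal, purely hypothetical sketch of such a module (not part of Synapse):

class ExampleSpamChecker(object):
    # Would be referenced from the config as, e.g.:
    #   spam_checker:
    #     module: "my_custom_project.ExampleSpamChecker"
    #     config:
    #       blocked_words: ["buy now"]
    def __init__(self, config):
        self.blocked_words = config.get("blocked_words", [])

    def check_event_for_spam(self, event):
        # `event` is a synapse.events.EventBase; message bodies live in content["body"].
        body = event.content.get("body", "")
        return any(word in body for word in self.blocked_words)

    def user_may_invite(self, inviter_userid, invitee_userid, room_id):
        return True

    def user_may_create_room(self, userid):
        return True

    def user_may_create_room_alias(self, userid, room_alias):
        return True

    def user_may_publish_room(self, userid, room_id):
        return True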

View File

@@ -225,7 +225,22 @@ def format_event_for_client_v2_without_room_id(d):
def serialize_event(e, time_now_ms, as_client_event=True,
event_format=format_event_for_client_v1,
token_id=None, only_event_fields=None):
token_id=None, only_event_fields=None, is_invite=False):
"""Serialize event for clients
Args:
e (EventBase)
time_now_ms (int)
as_client_event (bool)
event_format
token_id
only_event_fields
is_invite (bool): Whether this is an invite that is being sent to the
invitee
Returns:
dict
"""
# FIXME(erikj): To handle the case of presence events and the like
if not isinstance(e, EventBase):
return e
@@ -251,6 +266,12 @@ def serialize_event(e, time_now_ms, as_client_event=True,
if txn_id is not None:
d["unsigned"]["transaction_id"] = txn_id
# If this is an invite for somebody else, then we don't care about the
# invite_room_state as that's meant solely for the invitee. Other clients
# will already have the state since they're in the room.
if not is_invite:
d["unsigned"].pop("invite_room_state", None)
if as_client_event:
d = event_format(d)

View File

@@ -15,11 +15,3 @@
""" This package includes all the federation specific logic.
"""
from .replication import ReplicationLayer
def initialize_http_replication(hs):
transport = hs.get_federation_transport_client()
return ReplicationLayer(hs, transport)

View File

@@ -12,28 +12,31 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.events.utils import prune_event
from synapse.crypto.event_signing import check_event_content_hash
from synapse.api.errors import SynapseError
from synapse.util import unwrapFirstError
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
import logging
import six
from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import SynapseError, Codes
from synapse.crypto.event_signing import check_event_content_hash
from synapse.events import FrozenEvent
from synapse.events.utils import prune_event
from synapse.http.servlet import assert_params_in_request
from synapse.util import unwrapFirstError, logcontext
from twisted.internet import defer
logger = logging.getLogger(__name__)
class FederationBase(object):
def __init__(self, hs):
pass
self.hs = hs
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.spam_checker = hs.get_spam_checker()
self.store = hs.get_datastore()
self._clock = hs.get_clock()
@defer.inlineCallbacks
def _check_sigs_and_hash_and_fetch(self, origin, pdus, outlier=False,
@@ -57,56 +60,52 @@ class FederationBase(object):
"""
deferreds = self._check_sigs_and_hashes(pdus)
def callback(pdu):
return pdu
@defer.inlineCallbacks
def handle_check_result(pdu, deferred):
try:
res = yield logcontext.make_deferred_yieldable(deferred)
except SynapseError:
res = None
def errback(failure, pdu):
failure.trap(SynapseError)
return None
def try_local_db(res, pdu):
if not res:
# Check local db.
return self.store.get_event(
res = yield self.store.get_event(
pdu.event_id,
allow_rejected=True,
allow_none=True,
)
return res
def try_remote(res, pdu):
if not res and pdu.origin != origin:
return self.get_pdu(
try:
res = yield self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
outlier=outlier,
timeout=10000,
).addErrback(lambda e: None)
return res
)
except SynapseError:
pass
def warn(res, pdu):
if not res:
logger.warn(
"Failed to find copy of %s with valid signature",
pdu.event_id,
)
return res
for pdu, deferred in zip(pdus, deferreds):
deferred.addCallbacks(
callback, errback, errbackArgs=[pdu]
).addCallback(
try_local_db, pdu
).addCallback(
try_remote, pdu
).addCallback(
warn, pdu
defer.returnValue(res)
handle = logcontext.preserve_fn(handle_check_result)
deferreds2 = [
handle(pdu, deferred)
for pdu, deferred in zip(pdus, deferreds)
]
valid_pdus = yield logcontext.make_deferred_yieldable(
defer.gatherResults(
deferreds2,
consumeErrors=True,
)
valid_pdus = yield preserve_context_over_deferred(defer.gatherResults(
deferreds,
consumeErrors=True
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError)
if include_none:
defer.returnValue(valid_pdus)
@@ -114,15 +113,24 @@ class FederationBase(object):
defer.returnValue([p for p in valid_pdus if p])
def _check_sigs_and_hash(self, pdu):
return self._check_sigs_and_hashes([pdu])[0]
return logcontext.make_deferred_yieldable(
self._check_sigs_and_hashes([pdu])[0],
)
def _check_sigs_and_hashes(self, pdus):
"""Throws a SynapseError if a PDU does not have the correct
signatures.
"""Checks that each of the received events is correctly signed by the
sending server.
Args:
pdus (list[FrozenEvent]): the events to be checked
Returns:
FrozenEvent: Either the given event or it redacted if it failed the
content hash check.
list[Deferred]: for each input event, a deferred which:
* returns the original event if the checks pass
* returns a redacted version of the event (if the signature
matched but the hash did not)
* throws a SynapseError if the signature check failed.
The deferreds run their callbacks in the sentinel logcontext.
"""
redacted_pdus = [
@@ -130,22 +138,34 @@ class FederationBase(object):
for pdu in pdus
]
deferreds = preserve_fn(self.keyring.verify_json_objects_for_server)([
deferreds = self.keyring.verify_json_objects_for_server([
(p.origin, p.get_pdu_json())
for p in redacted_pdus
])
ctx = logcontext.LoggingContext.current_context()
def callback(_, pdu, redacted):
with logcontext.PreserveLoggingContext(ctx):
if not check_event_content_hash(pdu):
logger.warn(
"Event content has been tampered, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
if self.spam_checker.check_event_for_spam(pdu):
logger.warn(
"Event contains spam, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
return pdu
def errback(failure, pdu):
failure.trap(SynapseError)
with logcontext.PreserveLoggingContext(ctx):
logger.warn(
"Signature check failed for %s",
pdu.event_id,
@@ -160,3 +180,40 @@ class FederationBase(object):
)
return deferreds
def event_from_pdu_json(pdu_json, outlier=False):
"""Construct a FrozenEvent from an event json received over federation
Args:
pdu_json (object): pdu as received over federation
outlier (bool): True to mark this event as an outlier
Returns:
FrozenEvent
Raises:
SynapseError: if the pdu is missing required fields or is otherwise
not a valid matrix event
"""
# we could probably enforce a bunch of other fields here (room_id, sender,
# origin, etc etc)
assert_params_in_request(pdu_json, ('event_id', 'type', 'depth'))
depth = pdu_json['depth']
if not isinstance(depth, six.integer_types):
raise SynapseError(400, "Depth %r not an integer" % (depth, ),
Codes.BAD_JSON)
if depth < 0:
raise SynapseError(400, "Depth too small", Codes.BAD_JSON)
elif depth > MAX_DEPTH:
raise SynapseError(400, "Depth too large", Codes.BAD_JSON)
event = FrozenEvent(
pdu_json
)
event.internal_metadata.outlier = outlier
return event
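The depth checks in event_from_pdu_json above are easy to mirror in isolation; this standalone sketch repeats the same validation (the MAX_DEPTH value here is quoted from synapse.api.constants and should be treated as illustrative):

import six

MAX_DEPTH = 2 ** 63 - 1  # quoted from synapse.api.constants; illustrative here

def validate_depth(depth):
    """Mirror of the depth checks performed by event_from_pdu_json."""
    if not isinstance(depth, six.integer_types):
        raise ValueError("Depth %r not an integer" % (depth,))
    if depth < 0:
        raise ValueError("Depth too small")
    elif depth > MAX_DEPTH:
        raise ValueError("Depth too large")

validate_depth(5)          # fine
# validate_depth("5")      # rejected: depth must be an integer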

View File

@@ -14,28 +14,30 @@
# limitations under the License.
from twisted.internet import defer
from .federation_base import FederationBase
from synapse.api.constants import Membership
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError,
)
from synapse.util import unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logutils import log_function
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
from synapse.events import FrozenEvent, builder
import synapse.metrics
from synapse.util.retryutils import NotRetryingDestination
import copy
import itertools
import logging
import random
from six.moves import range
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError, FederationDeniedError
)
from synapse.events import builder
from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
import synapse.metrics
from synapse.util import logcontext, unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.logutils import log_function
from synapse.util.retryutils import NotRetryingDestination
logger = logging.getLogger(__name__)
@@ -58,6 +60,7 @@ class FederationClient(FederationBase):
self._clear_tried_cache, 60 * 1000,
)
self.state = hs.get_state_handler()
self.transport_layer = hs.get_federation_transport_client()
def _clear_tried_cache(self):
"""Clear pdu_destination_tried cache"""
@@ -184,15 +187,15 @@ class FederationClient(FederationBase):
logger.debug("backfill transaction_data=%s", repr(transaction_data))
pdus = [
self.event_from_pdu_json(p, outlier=False)
event_from_pdu_json(p, outlier=False)
for p in transaction_data["pdus"]
]
# FIXME: We should handle signature failures more gracefully.
pdus[:] = yield preserve_context_over_deferred(defer.gatherResults(
pdus[:] = yield logcontext.make_deferred_yieldable(defer.gatherResults(
self._check_sigs_and_hashes(pdus),
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(pdus)
@@ -244,7 +247,7 @@ class FederationClient(FederationBase):
logger.debug("transaction_data %r", transaction_data)
pdu_list = [
self.event_from_pdu_json(p, outlier=outlier)
event_from_pdu_json(p, outlier=outlier)
for p in transaction_data["pdus"]
]
@@ -252,7 +255,7 @@ class FederationClient(FederationBase):
pdu = pdu_list[0]
# Check signatures are correct.
signed_pdu = yield self._check_sigs_and_hashes([pdu])[0]
signed_pdu = yield self._check_sigs_and_hash(pdu)
break
@@ -266,6 +269,9 @@ class FederationClient(FederationBase):
except NotRetryingDestination as e:
logger.info(e.message)
continue
except FederationDeniedError as e:
logger.info(e.message)
continue
except Exception as e:
pdu_attempts[destination] = now
@@ -336,11 +342,11 @@ class FederationClient(FederationBase):
)
pdus = [
self.event_from_pdu_json(p, outlier=True) for p in result["pdus"]
event_from_pdu_json(p, outlier=True) for p in result["pdus"]
]
auth_chain = [
self.event_from_pdu_json(p, outlier=True)
event_from_pdu_json(p, outlier=True)
for p in result.get("auth_chain", [])
]
@@ -390,7 +396,7 @@ class FederationClient(FederationBase):
seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
signed_events = seen_events.values()
else:
seen_events = yield self.store.have_events(event_ids)
seen_events = yield self.store.have_seen_events(event_ids)
signed_events = []
failed_to_fetch = set()
@@ -409,18 +415,19 @@ class FederationClient(FederationBase):
batch_size = 20
missing_events = list(missing_events)
for i in xrange(0, len(missing_events), batch_size):
for i in range(0, len(missing_events), batch_size):
batch = set(missing_events[i:i + batch_size])
deferreds = [
preserve_fn(self.get_pdu)(
run_in_background(
self.get_pdu,
destinations=random_server_list(),
event_id=e_id,
)
for e_id in batch
]
res = yield preserve_context_over_deferred(
res = yield make_deferred_yieldable(
defer.DeferredList(deferreds, consumeErrors=True)
)
for success, result in res:
@@ -441,7 +448,7 @@ class FederationClient(FederationBase):
)
auth_chain = [
self.event_from_pdu_json(p, outlier=True)
event_from_pdu_json(p, outlier=True)
for p in res["auth_chain"]
]
@@ -474,8 +481,13 @@ class FederationClient(FederationBase):
content (object): Any additional data to put into the content field
of the event.
Return:
A tuple of (origin (str), event (object)) where origin is the remote
homeserver which generated the event.
Deferred: resolves to a tuple of (origin (str), event (object))
where origin is the remote homeserver which generated the event.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
if membership not in valid_memberships:
@@ -528,6 +540,27 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
def send_join(self, destinations, pdu):
"""Sends a join event to one of a list of homeservers.
Doing so will cause the remote server to add the event to the graph,
and send the event out to the rest of the federation.
Args:
destinations (list[str]): Candidate homeservers which are probably
participating in the room.
pdu (BaseEvent): event to be sent
Return:
Deferred: resolves to a dict with members ``origin`` (a string
giving the server the event was sent to), ``state`` (?) and
``auth_chain``.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
@@ -544,12 +577,12 @@ class FederationClient(FederationBase):
logger.debug("Got content: %s", content)
state = [
self.event_from_pdu_json(p, outlier=True)
event_from_pdu_json(p, outlier=True)
for p in content.get("state", [])
]
auth_chain = [
self.event_from_pdu_json(p, outlier=True)
event_from_pdu_json(p, outlier=True)
for p in content.get("auth_chain", [])
]
@@ -624,7 +657,7 @@ class FederationClient(FederationBase):
logger.debug("Got response to send_invite: %s", pdu_dict)
pdu = self.event_from_pdu_json(pdu_dict)
pdu = event_from_pdu_json(pdu_dict)
# Check signatures are correct.
pdu = yield self._check_sigs_and_hash(pdu)
@@ -635,6 +668,26 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
def send_leave(self, destinations, pdu):
"""Sends a leave event to one of a list of homeservers.
Doing so will cause the remote server to add the event to the graph,
and send the event out to the rest of the federation.
This is mostly useful to reject received invites.
Args:
destinations (list[str]): Candidate homeservers which are probably
participating in the room.
pdu (BaseEvent): event to be sent
Return:
Deferred: resolves to None.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a non-200 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
@@ -694,7 +747,7 @@ class FederationClient(FederationBase):
)
auth_chain = [
self.event_from_pdu_json(e)
event_from_pdu_json(e)
for e in content["auth_chain"]
]
@@ -742,7 +795,7 @@ class FederationClient(FederationBase):
)
events = [
self.event_from_pdu_json(e)
event_from_pdu_json(e)
for e in content.get("events", [])
]
@@ -759,15 +812,6 @@ class FederationClient(FederationBase):
defer.returnValue(signed_events)
def event_from_pdu_json(self, pdu_json, outlier=False):
event = FrozenEvent(
pdu_json
)
event.internal_metadata.outlier = outlier
return event
@defer.inlineCallbacks
def forward_third_party_invite(self, destinations, room_id, event_dict):
for destination in destinations:
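The send_join and send_leave docstrings above describe the same destination-iteration contract; a generic standalone sketch of that shape (names are illustrative, and the 300/400-code short-circuit via CodeMessageException is omitted for brevity):

def try_destinations(destinations, local_server_name, attempt):
    """Try attempt(destination) against each candidate homeserver in turn,
    skipping ourselves, returning the first success, and raising RuntimeError
    if no server could be reached."""
    last_error = None
    for destination in destinations:
        if destination == local_server_name:
            continue
        try:
            return attempt(destination)
        except Exception as e:
            last_error = e
            continue
    raise RuntimeError("No servers were reachable; last error: %r" % (last_error,))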

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,27 +13,31 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from .federation_base import FederationBase
from .units import Transaction, Edu
from synapse.util.async import Linearizer
from synapse.util.logutils import log_function
from synapse.util.caches.response_cache import ResponseCache
from synapse.events import FrozenEvent
from synapse.types import get_domain_from_id
import synapse.metrics
from synapse.api.errors import AuthError, FederationError, SynapseError
from synapse.crypto.event_signing import compute_event_signature
import simplejson as json
import logging
import simplejson as json
from twisted.internet import defer
from synapse.api.errors import AuthError, FederationError, SynapseError, NotFoundError
from synapse.crypto.event_signing import compute_event_signature
from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
import synapse.metrics
from synapse.types import get_domain_from_id
from synapse.util import async
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logutils import log_function
from six import iteritems
# when processing incoming transactions, we try to handle multiple rooms in
# parallel, up to this limit.
TRANSACTION_CONCURRENCY_LIMIT = 10
logger = logging.getLogger(__name__)
@@ -51,48 +56,18 @@ class FederationServer(FederationBase):
super(FederationServer, self).__init__(hs)
self.auth = hs.get_auth()
self.handler = hs.get_handlers().federation_handler
self._server_linearizer = Linearizer("fed_server")
self._server_linearizer = async.Linearizer("fed_server")
self._transaction_linearizer = async.Linearizer("fed_txn_handler")
self.transaction_actions = TransactionActions(self.store)
self.registry = hs.get_federation_registry()
# We cache responses to state queries, as they take a while and often
# come in waves.
self._state_resp_cache = ResponseCache(hs, timeout_ms=30000)
def set_handler(self, handler):
"""Sets the handler that the replication layer will use to communicate
receipt of new PDUs from other home servers. The required methods are
documented on :py:class:`.ReplicationHandler`.
"""
self.handler = handler
def register_edu_handler(self, edu_type, handler):
if edu_type in self.edu_handlers:
raise KeyError("Already have an EDU handler for %s" % (edu_type,))
self.edu_handlers[edu_type] = handler
def register_query_handler(self, query_type, handler):
"""Sets the handler callable that will be used to handle an incoming
federation Query of the given type.
Args:
query_type (str): Category name of the query, which should match
the string used by make_query.
handler (callable): Invoked to handle incoming queries of this type
handler is invoked as:
result = handler(args)
where 'args' is a dict mapping strings to strings of the query
arguments. It should return a Deferred that will eventually yield an
object to encode as JSON.
"""
if query_type in self.query_handlers:
raise KeyError(
"Already have a Query handler for %s" % (query_type,)
)
self.query_handlers[query_type] = handler
self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)
@defer.inlineCallbacks
@log_function
@@ -109,25 +84,41 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
@log_function
def on_incoming_transaction(self, transaction_data):
# keep this as early as possible to make the calculated origin ts as
# accurate as possible.
request_time = self._clock.time_msec()
transaction = Transaction(**transaction_data)
received_pdus_counter.inc_by(len(transaction.pdus))
for p in transaction.pdus:
if "unsigned" in p:
unsigned = p["unsigned"]
if "age" in unsigned:
p["age"] = unsigned["age"]
if "age" in p:
p["age_ts"] = int(self._clock.time_msec()) - int(p["age"])
del p["age"]
pdu_list = [
self.event_from_pdu_json(p) for p in transaction.pdus
]
if not transaction.transaction_id:
raise Exception("Transaction missing transaction_id")
if not transaction.origin:
raise Exception("Transaction missing origin")
logger.debug("[%s] Got transaction", transaction.transaction_id)
# use a linearizer to ensure that we don't process the same transaction
# multiple times in parallel.
with (yield self._transaction_linearizer.queue(
(transaction.origin, transaction.transaction_id),
)):
result = yield self._handle_incoming_transaction(
transaction, request_time,
)
defer.returnValue(result)
@defer.inlineCallbacks
def _handle_incoming_transaction(self, transaction, request_time):
""" Process an incoming transaction and return the HTTP response
Args:
transaction (Transaction): incoming transaction
request_time (int): timestamp that the HTTP request arrived at
Returns:
Deferred[(int, object)]: http response code and body
"""
response = yield self.transaction_actions.have_responded(transaction)
if response:
@@ -140,38 +131,49 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id)
results = []
received_pdus_counter.inc_by(len(transaction.pdus))
for pdu in pdu_list:
# check that it's actually being sent from a valid destination to
# workaround bug #1753 in 0.18.5 and 0.18.6
if transaction.origin != get_domain_from_id(pdu.event_id):
if not (
pdu.type == 'm.room.member' and
pdu.content and
pdu.content.get("membership", None) == 'join' and
self.hs.is_mine_id(pdu.state_key)
):
logger.info(
"Discarding PDU %s from invalid origin %s",
pdu.event_id, transaction.origin
)
continue
else:
logger.info(
"Accepting join PDU %s from %s",
pdu.event_id, transaction.origin
)
pdus_by_room = {}
for p in transaction.pdus:
if "unsigned" in p:
unsigned = p["unsigned"]
if "age" in unsigned:
p["age"] = unsigned["age"]
if "age" in p:
p["age_ts"] = request_time - int(p["age"])
del p["age"]
event = event_from_pdu_json(p)
room_id = event.room_id
pdus_by_room.setdefault(room_id, []).append(event)
pdu_results = {}
# we can process different rooms in parallel (which is useful if they
# require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu.
@defer.inlineCallbacks
def process_pdus_for_room(room_id):
logger.debug("Processing PDUs for %s", room_id)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
try:
yield self._handle_received_pdu(transaction.origin, pdu)
results.append({})
yield self._handle_received_pdu(
transaction.origin, pdu
)
pdu_results[event_id] = {}
except FederationError as e:
self.send_failure(e, transaction.origin)
results.append({"error": str(e)})
logger.warn("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
results.append({"error": str(e)})
logger.exception("Failed to handle PDU")
pdu_results[event_id] = {"error": str(e)}
logger.exception("Failed to handle PDU %s", event_id)
yield async.concurrently_execute(
process_pdus_for_room, pdus_by_room.keys(),
TRANSACTION_CONCURRENCY_LIMIT,
)
if hasattr(transaction, "edus"):
for edu in (Edu(**x) for x in transaction.edus):
@@ -181,17 +183,16 @@ class FederationServer(FederationBase):
edu.content
)
for failure in getattr(transaction, "pdu_failures", []):
pdu_failures = getattr(transaction, "pdu_failures", [])
for failure in pdu_failures:
logger.info("Got failure %r", failure)
logger.debug("Returning: %s", str(results))
response = {
"pdus": dict(zip(
(p.event_id for p in pdu_list), results
)),
"pdus": pdu_results,
}
logger.debug("Returning: %s", str(response))
yield self.transaction_actions.set_response(
transaction,
200, response
@@ -201,16 +202,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def received_edu(self, origin, edu_type, content):
received_edus_counter.inc()
if edu_type in self.edu_handlers:
try:
yield self.edu_handlers[edu_type](origin, content)
except SynapseError as e:
logger.info("Failed to handle edu %r: %r", edu_type, e)
except Exception as e:
logger.exception("Failed to handle edu %r", edu_type)
else:
logger.warn("Received EDU of type %s with no handler", edu_type)
yield self.registry.on_edu(edu_type, origin, content)
@defer.inlineCallbacks
@log_function
@@ -222,15 +214,17 @@ class FederationServer(FederationBase):
if not in_room:
raise AuthError(403, "Host not in room.")
result = self._state_resp_cache.get((room_id, event_id))
if not result:
# we grab the linearizer to protect ourselves from servers which hammer
# us. In theory we might already have the response to this query
# in the cache so we could return it without waiting for the linearizer
# - but that's non-trivial to get right, and anyway somewhat defeats
# the point of the linearizer.
with (yield self._server_linearizer.queue((origin, room_id))):
resp = yield self._state_resp_cache.set(
resp = yield self._state_resp_cache.wrap(
(room_id, event_id),
self._on_context_state_request_compute(room_id, event_id)
self._on_context_state_request_compute,
room_id, event_id,
)
else:
resp = yield result
defer.returnValue((200, resp))
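ResponseCache.wrap keys an in-flight computation, so concurrent requests for the same (room_id, event_id) share a single result instead of recomputing it. A rough illustration of the pattern, using hypothetical names:

cache = ResponseCache(hs, "example", timeout_ms=30000)

def compute_state(room_id, event_id):
    # Expensive work; with wrap() it runs at most once per key while the
    # result is in flight or cached.
    return expensive_state_lookup(room_id, event_id)  # hypothetical

# Both callers receive the result of a single compute_state call.
d1 = cache.wrap(("!room:a", "$evt"), compute_state, "!room:a", "$evt")
d2 = cache.wrap(("!room:a", "$evt"), compute_state, "!room:a", "$evt")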
@@ -299,14 +293,8 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
received_queries_counter.inc(query_type)
if query_type in self.query_handlers:
response = yield self.query_handlers[query_type](args)
defer.returnValue((200, response))
else:
defer.returnValue(
(404, "No handler for Query type '%s'" % (query_type,))
)
resp = yield self.registry.on_query(query_type, args)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_make_join_request(self, room_id, user_id):
@@ -316,7 +304,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_invite_request(self, origin, content):
pdu = self.event_from_pdu_json(content)
pdu = event_from_pdu_json(content)
ret_pdu = yield self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec()
defer.returnValue((200, {"event": ret_pdu.get_pdu_json(time_now)}))
@@ -324,7 +312,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_send_join_request(self, origin, content):
logger.debug("on_send_join_request: content: %s", content)
pdu = self.event_from_pdu_json(content)
pdu = event_from_pdu_json(content)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
res_pdus = yield self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
@@ -344,7 +332,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_send_leave_request(self, origin, content):
logger.debug("on_send_leave_request: content: %s", content)
pdu = self.event_from_pdu_json(content)
pdu = event_from_pdu_json(content)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
yield self.handler.on_send_leave_request(origin, pdu)
defer.returnValue((200, {}))
@@ -381,7 +369,7 @@ class FederationServer(FederationBase):
"""
with (yield self._server_linearizer.queue((origin, room_id))):
auth_chain = [
self.event_from_pdu_json(e)
event_from_pdu_json(e)
for e in content["auth_chain"]
]
@@ -436,6 +424,16 @@ class FederationServer(FederationBase):
key_id: json.loads(json_bytes)
}
logger.info(
"Claimed one-time-keys: %s",
",".join((
"%s for %s:%s" % (key_id, user_id, device_id)
for user_id, user_keys in iteritems(json_result)
for device_id, device_keys in iteritems(user_keys)
for key_id, _ in iteritems(device_keys)
)),
)
defer.returnValue({"one_time_keys": json_result})
@defer.inlineCallbacks
@@ -499,13 +497,57 @@ class FederationServer(FederationBase):
def _handle_received_pdu(self, origin, pdu):
""" Process a PDU received in a federation /send/ transaction.
If the event is invalid, then this method throws a FederationError.
(The error will then be logged and sent back to the sender (which
probably won't do anything with it), and other events in the
transaction will be processed as normal).
It is likely that we'll then receive other events which refer to
this rejected_event in their prev_events, etc. When that happens,
we'll attempt to fetch the rejected event again, which will presumably
fail, so those second-generation events will also get rejected.
Eventually, we get to the point where there are more than 10 events
between any new events and the original rejected event. Since we
only try to backfill 10 events deep on received pdu, we then accept the
new event, possibly introducing a discontinuity in the DAG, with new
forward extremities, so normal service is approximately returned,
until we try to backfill across the discontinuity.
Args:
origin (str): server which sent the pdu
pdu (FrozenEvent): received pdu
Returns (Deferred): completes with None
Raises: FederationError if the signatures / hash do not match
Raises: FederationError if the signatures / hash do not match, or
if the event was unacceptable for any other reason (eg, too large,
too many prev_events, couldn't find the prev_events)
"""
# check that it's actually being sent from a valid destination to
# workaround bug #1753 in 0.18.5 and 0.18.6
if origin != get_domain_from_id(pdu.event_id):
# We continue to accept join events from any server; this is
# necessary for the federation join dance to work correctly.
# (When we join over federation, the "helper" server is
# responsible for sending out the join event, rather than the
# origin. See bug #1893).
if not (
pdu.type == 'm.room.member' and
pdu.content and
pdu.content.get("membership", None) == 'join'
):
logger.info(
"Discarding PDU %s from invalid origin %s",
pdu.event_id, origin
)
return
else:
logger.info(
"Accepting join PDU %s from %s",
pdu.event_id, origin
)
# Check signature.
try:
pdu = yield self._check_sigs_and_hash(pdu)
@@ -522,15 +564,6 @@ class FederationServer(FederationBase):
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name
def event_from_pdu_json(self, pdu_json, outlier=False):
event = FrozenEvent(
pdu_json
)
event.internal_metadata.outlier = outlier
return event
@defer.inlineCallbacks
def exchange_third_party_invite(
self,
@@ -553,3 +586,66 @@ class FederationServer(FederationBase):
origin, room_id, event_dict
)
defer.returnValue(ret)
class FederationHandlerRegistry(object):
"""Allows classes to register themselves as handlers for a given EDU or
query type for incoming federation traffic.
"""
def __init__(self):
self.edu_handlers = {}
self.query_handlers = {}
def register_edu_handler(self, edu_type, handler):
"""Sets the handler callable that will be used to handle an incoming
federation EDU of the given type.
Args:
edu_type (str): The type of the incoming EDU to register handler for
handler (Callable[[str, dict]]): A callable invoked on incoming EDU
of the given type. The arguments are the origin server name and
the EDU contents.
"""
if edu_type in self.edu_handlers:
raise KeyError("Already have an EDU handler for %s" % (edu_type,))
self.edu_handlers[edu_type] = handler
def register_query_handler(self, query_type, handler):
"""Sets the handler callable that will be used to handle an incoming
federation query of the given type.
Args:
query_type (str): Category name of the query, which should match
the string used by make_query.
handler (Callable[[dict], Deferred[dict]]): Invoked to handle
incoming queries of this type. The return will be yielded
on and the result used as the response to the query request.
"""
if query_type in self.query_handlers:
raise KeyError(
"Already have a Query handler for %s" % (query_type,)
)
self.query_handlers[query_type] = handler
@defer.inlineCallbacks
def on_edu(self, edu_type, origin, content):
handler = self.edu_handlers.get(edu_type)
if not handler:
    logger.warn("No handler registered for EDU type %s", edu_type)
    return
try:
yield handler(origin, content)
except SynapseError as e:
logger.info("Failed to handle edu %r: %r", edu_type, e)
except Exception as e:
logger.exception("Failed to handle edu %r", edu_type)
def on_query(self, query_type, args):
handler = self.query_handlers.get(query_type)
if not handler:
logger.warn("No handler registered for query type %s", query_type)
raise NotFoundError("No handler for Query type '%s'" % (query_type,))
return handler(args)
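For illustration, code elsewhere in the homeserver would obtain the registry via hs.get_federation_registry() and register its callbacks; the handler bodies below are hypothetical:

from twisted.internet import defer

registry = hs.get_federation_registry()

@defer.inlineCallbacks
def on_receipt_edu(origin, content):
    # Invoked for each incoming m.receipt EDU from `origin`.
    yield receipts_handler.process(origin, content)  # hypothetical handler

def on_profile_query(args):
    # Must return a Deferred yielding a JSON-serialisable response body.
    return defer.succeed({"displayname": args.get("user_id", "")})

registry.register_edu_handler("m.receipt", on_receipt_edu)
registry.register_query_handler("profile", on_profile_query)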

View File

@@ -1,73 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This layer is responsible for replicating with remote home servers using
a given transport.
"""
from .federation_client import FederationClient
from .federation_server import FederationServer
from .persistence import TransactionActions
import logging
logger = logging.getLogger(__name__)
class ReplicationLayer(FederationClient, FederationServer):
"""This layer is responsible for replicating with remote home servers over
the given transport. I.e., does the sending and receiving of PDUs to
remote home servers.
The layer communicates with the rest of the server via a registered
ReplicationHandler.
In more detail, the layer:
* Receives incoming data and processes it into transactions and pdus.
* Fetches any PDUs it thinks it might have missed.
* Keeps the current state for contexts up to date by applying the
suitable conflict resolution.
* Sends outgoing pdus wrapped in transactions.
* Fills out the references to previous pdus/transactions appropriately
for outgoing data.
"""
def __init__(self, hs, transport_layer):
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.transport_layer = transport_layer
self.federation_client = self
self.store = hs.get_datastore()
self.handler = None
self.edu_handlers = {}
self.query_handlers = {}
self._clock = hs.get_clock()
self.transaction_actions = TransactionActions(self.store)
self.hs = hs
super(ReplicationLayer, self).__init__(hs)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name

View File

@@ -31,23 +31,23 @@ Events are replicated via a separate events stream.
from .units import Edu
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
import synapse.metrics
from blist import sorteddict
import ujson
from collections import namedtuple
import logging
from six import itervalues, iteritems
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
PRESENCE_TYPE = "p"
KEYED_EDU_TYPE = "k"
EDU_TYPE = "e"
FAILURE_TYPE = "f"
DEVICE_MESSAGE_TYPE = "d"
class FederationRemoteSendQueue(object):
"""A drop in replacement for TransactionQueue"""
@@ -55,18 +55,19 @@ class FederationRemoteSendQueue(object):
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id
self.presence_map = {}
self.presence_changed = sorteddict()
self.presence_map = {} # Pending presence map user_id -> UserPresenceState
self.presence_changed = sorteddict() # Stream position -> user_id
self.keyed_edu = {}
self.keyed_edu_changed = sorteddict()
self.keyed_edu = {} # (destination, key) -> EDU
self.keyed_edu_changed = sorteddict() # stream position -> (destination, key)
self.edus = sorteddict()
self.edus = sorteddict() # stream position -> Edu
self.failures = sorteddict()
self.failures = sorteddict() # stream position -> (destination, Failure)
self.device_messages = sorteddict()
self.device_messages = sorteddict() # stream position -> destination
self.pos = 1
self.pos_time = sorteddict()
@@ -122,7 +123,9 @@ class FederationRemoteSendQueue(object):
del self.presence_changed[key]
user_ids = set(
user_id for uids in self.presence_changed.values() for _, user_id in uids
user_id
for uids in itervalues(self.presence_changed)
for user_id in uids
)
to_del = [
@@ -189,18 +192,20 @@ class FederationRemoteSendQueue(object):
self.notifier.on_new_replication_data()
def send_presence(self, destination, states):
"""As per TransactionQueue"""
def send_presence(self, states):
"""As per TransactionQueue
Args:
states (list(UserPresenceState))
"""
pos = self._next_pos()
self.presence_map.update({
state.user_id: state
for state in states
})
# We only want to send presence for our own users, so let's always just
# filter here just in case.
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
self.presence_changed[pos] = [
(destination, state.user_id) for state in states
]
self.presence_map.update({state.user_id: state for state in local_states})
self.presence_changed[pos] = [state.user_id for state in local_states]
self.notifier.on_new_replication_data()
@@ -220,10 +225,15 @@ class FederationRemoteSendQueue(object):
def get_current_token(self):
return self.pos - 1
def get_replication_rows(self, token, limit, federation_ack=None):
"""
def federation_ack(self, token):
self._clear_queue_before_pos(token)
def get_replication_rows(self, from_token, to_token, limit, federation_ack=None):
"""Get rows to be sent over federation between the two tokens
Args:
token (int)
from_token (int)
to_token (int)
limit (int)
federation_ack (int): Optional. The position the worker has
explicitly acknowledged it has handled. Allows us to drop
@@ -232,9 +242,11 @@ class FederationRemoteSendQueue(object):
# TODO: Handle limit.
# To handle restarts where we wrap around
if token > self.pos:
token = -1
if from_token > self.pos:
from_token = -1
# list of tuple(int, BaseFederationRow), where the first is the position
# of the federation stream.
rows = []
# There should be only one reader, so let's delete everything it's
@@ -244,62 +256,295 @@ class FederationRemoteSendQueue(object):
# Fetch changed presence
keys = self.presence_changed.keys()
i = keys.bisect_right(token)
dest_user_ids = set(
(pos, dest_user_id)
for pos in keys[i:]
for dest_user_id in self.presence_changed[pos]
)
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
dest_user_ids = [
(pos, user_id)
for pos in keys[i:j]
for user_id in self.presence_changed[pos]
]
for (key, (dest, user_id)) in dest_user_ids:
rows.append((key, PRESENCE_TYPE, ujson.dumps({
"destination": dest,
"state": self.presence_map[user_id].as_dict(),
})))
for (key, user_id) in dest_user_ids:
rows.append((key, PresenceRow(
state=self.presence_map[user_id],
)))
# Fetch changes keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_right(token)
keyed_edus = set((k, self.keyed_edu_changed[k]) for k in keys[i:])
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
# We purposefully clobber based on the key here, python dict comprehensions
# always use the last value, so this will correctly point to the last
# stream position.
keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]}
for (pos, (destination, edu_key)) in keyed_edus:
rows.append(
(pos, KEYED_EDU_TYPE, ujson.dumps({
"key": edu_key,
"edu": self.keyed_edu[(destination, edu_key)].get_internal_dict(),
}))
)
for ((destination, edu_key), pos) in iteritems(keyed_edus):
rows.append((pos, KeyedEduRow(
key=edu_key,
edu=self.keyed_edu[(destination, edu_key)],
)))
# Fetch changed edus
keys = self.edus.keys()
i = keys.bisect_right(token)
edus = set((k, self.edus[k]) for k in keys[i:])
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
edus = ((k, self.edus[k]) for k in keys[i:j])
for (pos, edu) in edus:
rows.append((pos, EDU_TYPE, ujson.dumps(edu.get_internal_dict())))
rows.append((pos, EduRow(edu)))
# Fetch changed failures
keys = self.failures.keys()
i = keys.bisect_right(token)
failures = set((k, self.failures[k]) for k in keys[i:])
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
failures = ((k, self.failures[k]) for k in keys[i:j])
for (pos, (destination, failure)) in failures:
rows.append((pos, FAILURE_TYPE, ujson.dumps({
"destination": destination,
"failure": failure,
})))
rows.append((pos, FailureRow(
destination=destination,
failure=failure,
)))
# Fetch changed device messages
keys = self.device_messages.keys()
i = keys.bisect_right(token)
device_messages = set((k, self.device_messages[k]) for k in keys[i:])
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
device_messages = {self.device_messages[k]: k for k in keys[i:j]}
for (pos, destination) in device_messages:
rows.append((pos, DEVICE_MESSAGE_TYPE, ujson.dumps({
"destination": destination,
})))
for (destination, pos) in iteritems(device_messages):
rows.append((pos, DeviceRow(
destination=destination,
)))
# Sort rows based on pos
rows.sort()
return rows
return [(pos, row.TypeId, row.to_data()) for pos, row in rows]
class BaseFederationRow(object):
"""Base class for rows to be sent in the federation stream.
Specifies how to identify, serialize and deserialize the different types.
"""
TypeId = None # Unique string that ids the type. Must be overridden in subclasses.
@staticmethod
def from_data(data):
"""Parse the data from the federation stream into a row.
Args:
data: The value of ``data`` from FederationStreamRow.data, type
depends on the type of stream
"""
raise NotImplementedError()
def to_data(self):
"""Serialize this row to be sent over the federation stream.
Returns:
The value to be sent in FederationStreamRow.data. The type depends
on the type of stream.
"""
raise NotImplementedError()
def add_to_buffer(self, buff):
"""Add this row to the appropriate field in the buffer ready for this
to be sent over federation.
We use a buffer so that we can batch up events that have come in at
the same time and send them all at once.
Args:
buff (ParsedFederationStreamData)
"""
raise NotImplementedError()
class PresenceRow(BaseFederationRow, namedtuple("PresenceRow", (
"state", # UserPresenceState
))):
TypeId = "p"
@staticmethod
def from_data(data):
return PresenceRow(
state=UserPresenceState.from_dict(data)
)
def to_data(self):
return self.state.as_dict()
def add_to_buffer(self, buff):
buff.presence.append(self.state)
class KeyedEduRow(BaseFederationRow, namedtuple("KeyedEduRow", (
"key", # tuple(str) - the edu key passed to send_edu
"edu", # Edu
))):
"""Streams EDUs that have an associated key that is ued to clobber. For example,
typing EDUs clobber based on room_id.
"""
TypeId = "k"
@staticmethod
def from_data(data):
return KeyedEduRow(
key=tuple(data["key"]),
edu=Edu(**data["edu"]),
)
def to_data(self):
return {
"key": self.key,
"edu": self.edu.get_internal_dict(),
}
def add_to_buffer(self, buff):
buff.keyed_edus.setdefault(
self.edu.destination, {}
)[self.key] = self.edu
class EduRow(BaseFederationRow, namedtuple("EduRow", (
"edu", # Edu
))):
"""Streams EDUs that don't have keys. See KeyedEduRow
"""
TypeId = "e"
@staticmethod
def from_data(data):
return EduRow(Edu(**data))
def to_data(self):
return self.edu.get_internal_dict()
def add_to_buffer(self, buff):
buff.edus.setdefault(self.edu.destination, []).append(self.edu)
class FailureRow(BaseFederationRow, namedtuple("FailureRow", (
"destination", # str
"failure",
))):
"""Streams failures to a remote server. Failures are issued when there was
something wrong with a transaction the remote sent us, e.g. it included
an event that was invalid.
"""
TypeId = "f"
@staticmethod
def from_data(data):
return FailureRow(
destination=data["destination"],
failure=data["failure"],
)
def to_data(self):
return {
"destination": self.destination,
"failure": self.failure,
}
def add_to_buffer(self, buff):
buff.failures.setdefault(self.destination, []).append(self.failure)
class DeviceRow(BaseFederationRow, namedtuple("DeviceRow", (
"destination", # str
))):
"""Streams the fact that either a) there is pending to device messages for
users on the remote, or b) a local users device has changed and needs to
be sent to the remote.
"""
TypeId = "d"
@staticmethod
def from_data(data):
return DeviceRow(destination=data["destination"])
def to_data(self):
return {"destination": self.destination}
def add_to_buffer(self, buff):
buff.device_destinations.add(self.destination)
TypeToRow = {
Row.TypeId: Row
for Row in (
PresenceRow,
KeyedEduRow,
EduRow,
FailureRow,
DeviceRow,
)
}
ParsedFederationStreamData = namedtuple("ParsedFederationStreamData", (
"presence", # list(UserPresenceState)
"keyed_edus", # dict of destination -> { key -> Edu }
"edus", # dict of destination -> [Edu]
"failures", # dict of destination -> [failures]
"device_destinations", # set of destinations
))
def process_rows_for_federation(transaction_queue, rows):
"""Parse a list of rows from the federation stream and put them in the
transaction queue ready for sending to the relevant homeservers.
Args:
transaction_queue (TransactionQueue)
rows (list(synapse.replication.tcp.streams.FederationStreamRow))
"""
# The federation stream contains a bunch of different types of
# rows that need to be handled differently. We parse the rows, put
# them into the appropriate collection and then send them off.
buff = ParsedFederationStreamData(
presence=[],
keyed_edus={},
edus={},
failures={},
device_destinations=set(),
)
# Parse the rows in the stream and add to the buffer
for row in rows:
if row.type not in TypeToRow:
logger.error("Unrecognized federation row type %r", row.type)
continue
RowType = TypeToRow[row.type]
parsed_row = RowType.from_data(row.data)
parsed_row.add_to_buffer(buff)
if buff.presence:
transaction_queue.send_presence(buff.presence)
for destination, edu_map in iteritems(buff.keyed_edus):
for key, edu in edu_map.items():
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=key,
)
for destination, edu_list in iteritems(buff.edus):
for edu in edu_list:
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=None,
)
for destination, failure_list in iteritems(buff.failures):
for failure in failure_list:
transaction_queue.send_failure(destination, failure)
for destination in buff.device_destinations:
transaction_queue.send_device_messages(destination)
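A rough sketch of how these row types round-trip across the federation replication stream, with illustrative values:

# Sending side (get_replication_rows): each row becomes (pos, TypeId, data).
row = EduRow(edu=some_edu)  # some_edu: an Edu instance, illustrative
wire = (pos, row.TypeId, row.to_data())

# Receiving side (process_rows_for_federation): rebuild the row and buffer it.
RowType = TypeToRow[wire[1]]
parsed = RowType.from_data(wire[2])
buff = ParsedFederationStreamData(
    presence=[], keyed_edus={}, edus={}, failures={}, device_destinations=set(),
)
parsed.add_to_buffer(buff)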

View File

@@ -19,13 +19,12 @@ from twisted.internet import defer
from .persistence import TransactionActions
from .units import Transaction, Edu
from synapse.api.errors import HttpResponseException
from synapse.api.errors import HttpResponseException, FederationDeniedError
from synapse.util import logcontext, PreserveLoggingContext
from synapse.util.async import run_on_reactor
from synapse.util.logcontext import preserve_context_over_fn
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
from synapse.util.metrics import measure_func
from synapse.types import get_domain_from_id
from synapse.handlers.presence import format_user_presence_state
from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
import synapse.metrics
import logging
@@ -41,6 +40,10 @@ sent_pdus_destination_dist = client_metrics.register_distribution(
)
sent_edus_counter = client_metrics.register_counter("sent_edus")
sent_transactions_counter = client_metrics.register_counter("sent_transactions")
events_processed_counter = client_metrics.register_counter("events_processed")
class TransactionQueue(object):
"""This class makes sure we only have one transaction in flight at
@@ -77,8 +80,18 @@ class TransactionQueue(object):
# destination -> list of tuple(edu, deferred)
self.pending_edus_by_dest = edus = {}
# Presence needs to be separate as we send single aggregate EDUs
# Map of user_id -> UserPresenceState for all the pending presence
# to be sent out by user_id. Entries here get processed and put in
# pending_presence_by_dest
self.pending_presence = {}
# Map of destination -> user_id -> UserPresenceState of pending presence
# to be sent to each destination
self.pending_presence_by_dest = presence = {}
# Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
# based on their key (e.g. typing events by room_id)
# Map of destination -> (edu_type, key) -> Edu
self.pending_edus_keyed_by_dest = edus_keyed = {}
metrics.register_callback(
@@ -113,6 +126,8 @@ class TransactionQueue(object):
self._is_processing = False
self._last_poked_id = -1
self._processing_pending_presence = False
def can_send_to(self, destination):
"""Can we send messages to the given server?
@@ -133,7 +148,6 @@ class TransactionQueue(object):
else:
return not destination.startswith("localhost")
@defer.inlineCallbacks
def notify_new_events(self, current_id):
"""This gets called when we have some new events we might want to
send out to other servers.
@@ -143,12 +157,19 @@ class TransactionQueue(object):
if self._is_processing:
return
# fire off a processing loop in the background. It's likely it will
# outlast the current request, so run it in the sentinel logcontext.
with PreserveLoggingContext():
self._process_event_queue_loop()
@defer.inlineCallbacks
def _process_event_queue_loop(self):
try:
self._is_processing = True
while True:
last_token = yield self.store.get_federation_out_pos("events")
next_token, events = yield self.store.get_all_new_events_stream(
last_token, self._last_poked_id, limit=20,
last_token, self._last_poked_id, limit=100,
)
logger.debug("Handling %s -> %s", last_token, next_token)
@@ -156,28 +177,35 @@ class TransactionQueue(object):
if not events and next_token >= self._last_poked_id:
break
for event in events:
@defer.inlineCallbacks
def handle_event(event):
# Only send events for this server.
send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of()
is_mine = self.is_mine_id(event.event_id)
if not is_mine and send_on_behalf_of is None:
continue
return
try:
# Get the state from before the event.
# We need to make sure that this is the state from before
# the event and not from after it.
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
users_in_room = yield self.state.get_current_user_in_room(
destinations = yield self.state.get_current_hosts_in_room(
event.room_id, latest_event_ids=[
prev_id for prev_id, _ in event.prev_events
],
)
destinations = set(
get_domain_from_id(user_id) for user_id in users_in_room
except Exception:
logger.exception(
"Failed to calculate hosts in room for event: %s",
event.event_id,
)
return
destinations = set(destinations)
if send_on_behalf_of is not None:
# If we are sending the event on behalf of another server
# then it already has the event and there is no reason to
@@ -188,10 +216,44 @@ class TransactionQueue(object):
self._send_pdu(event, destinations)
@defer.inlineCallbacks
def handle_room_events(events):
for event in events:
yield handle_event(event)
events_by_room = {}
for event in events:
events_by_room.setdefault(event.room_id, []).append(event)
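# Handle each room's events in the background and wait for them all; events
# within a room are processed in order, but different rooms run in parallel.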
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
logcontext.run_in_background(handle_room_events, evs)
for evs in events_by_room.itervalues()
],
consumeErrors=True
))
yield self.store.update_federation_out_pos(
"events", next_token
)
if events:
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_lag.set(
now - ts, "federation_sender",
)
synapse.metrics.event_processing_last_ts.set(
ts, "federation_sender",
)
events_processed_counter.inc_by(len(events))
synapse.metrics.event_processing_positions.set(
next_token, "federation_sender",
)
finally:
self._is_processing = False
@@ -220,21 +282,75 @@ class TransactionQueue(object):
(pdu, order)
)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
self._attempt_new_transaction(destination)
def send_presence(self, destination, states):
if not self.can_send_to(destination):
@logcontext.preserve_fn # the caller should not yield on this
@defer.inlineCallbacks
def send_presence(self, states):
"""Send the new presence states to the appropriate destinations.
This actually queues up the presence states ready for sending and
triggers a background task to process them and send out the transactions.
Args:
states (list(UserPresenceState))
"""
# First we queue up the new presence by user ID, so multiple presence
# updates in quick succession are correctly handled
# We only want to send presence for our own users, so let's always just
# filter here just in case.
self.pending_presence.update({
state.user_id: state for state in states
if self.is_mine_id(state.user_id)
})
# We then handle the new pending presence in batches, first figuring
# out the destinations we need to send each state to and then poking it
# to attempt a new transaction. We linearize this so that we don't
# accidentally mess up the ordering and send multiple presence updates
# in the wrong order
if self._processing_pending_presence:
return
self.pending_presence_by_dest.setdefault(destination, {}).update({
self._processing_pending_presence = True
try:
while True:
states_map = self.pending_presence
self.pending_presence = {}
if not states_map:
break
yield self._process_presence_inner(states_map.values())
except Exception:
logger.exception("Error sending presence states to servers")
finally:
self._processing_pending_presence = False
@measure_func("txnqueue._process_presence")
@defer.inlineCallbacks
def _process_presence_inner(self, states):
"""Given a list of states populate self.pending_presence_by_dest and
poke to send a new transaction to each destination
Args:
states (list(UserPresenceState))
"""
hosts_and_states = yield get_interested_remotes(self.store, states, self.state)
for destinations, states in hosts_and_states:
for destination in destinations:
if not self.can_send_to(destination):
continue
self.pending_presence_by_dest.setdefault(
destination, {}
).update({
state.user_id: state for state in states
})
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
self._attempt_new_transaction(destination)
def send_edu(self, destination, edu_type, content, key=None):
edu = Edu(
@@ -256,9 +372,7 @@ class TransactionQueue(object):
else:
self.pending_edus_by_dest.setdefault(destination, []).append(edu)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
self._attempt_new_transaction(destination)
def send_failure(self, failure, destination):
if destination == self.server_name or destination == "localhost":
@@ -271,9 +385,7 @@ class TransactionQueue(object):
destination, []
).append(failure)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
self._attempt_new_transaction(destination)
def send_device_messages(self, destination):
if destination == self.server_name or destination == "localhost":
@@ -282,15 +394,24 @@ class TransactionQueue(object):
if not self.can_send_to(destination):
return
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
self._attempt_new_transaction(destination)
def get_current_token(self):
return 0
@defer.inlineCallbacks
def _attempt_new_transaction(self, destination):
"""Try to start a new transaction to this destination
If there is already a transaction in progress to this destination,
returns immediately. Otherwise kicks off the process of sending a
transaction in the background.
Args:
destination (str):
Returns:
None
"""
# list of (pending_pdu, deferred, order)
if destination in self.pending_transactions:
# XXX: pending_transactions can get stuck on by a never-ending
@@ -303,6 +424,19 @@ class TransactionQueue(object):
)
return
logger.debug("TX [%s] Starting transaction loop", destination)
# Drop the logcontext before starting the transaction. It doesn't
# really make sense to log all the outbound transactions against
# whatever path led us to this point: that's pretty arbitrary really.
#
# (this also means we can fire off _perform_transaction without
# yielding)
with logcontext.PreserveLoggingContext():
self._transaction_transmission_loop(destination)
@defer.inlineCallbacks
def _transaction_transmission_loop(self, destination):
pending_pdus = []
try:
self.pending_transactions[destination] = 1
@@ -374,6 +508,7 @@ class TransactionQueue(object):
destination, pending_pdus, pending_edus, pending_failures,
)
if success:
sent_transactions_counter.inc()
# Remove the acknowledged device messages from the database
# Only bother if we actually sent some device messages
if device_message_edus:
@@ -398,6 +533,8 @@ class TransactionQueue(object):
(e.retry_last_ts + e.retry_interval) / 1000.0
),
)
except FederationDeniedError as e:
logger.info(e)
except Exception as e:
logger.warn(
"TX [%s] Failed to send transaction: %s",

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -20,6 +21,7 @@ from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.util.logutils import log_function
import logging
import urllib
logger = logging.getLogger(__name__)
@@ -49,7 +51,7 @@ class TransportLayerClient(object):
logger.debug("get_room_state dest=%s, room=%s",
destination, room_id)
path = PREFIX + "/state/%s/" % room_id
path = _create_path(PREFIX, "/state/%s/", room_id)
return self.client.get_json(
destination, path=path, args={"event_id": event_id},
)
@@ -71,7 +73,7 @@ class TransportLayerClient(object):
logger.debug("get_room_state_ids dest=%s, room=%s",
destination, room_id)
path = PREFIX + "/state_ids/%s/" % room_id
path = _create_path(PREFIX, "/state_ids/%s/", room_id)
return self.client.get_json(
destination, path=path, args={"event_id": event_id},
)
@@ -93,7 +95,7 @@ class TransportLayerClient(object):
logger.debug("get_pdu dest=%s, event_id=%s",
destination, event_id)
path = PREFIX + "/event/%s/" % (event_id, )
path = _create_path(PREFIX, "/event/%s/", event_id)
return self.client.get_json(destination, path=path, timeout=timeout)
@log_function
@@ -119,7 +121,7 @@ class TransportLayerClient(object):
# TODO: raise?
return
path = PREFIX + "/backfill/%s/" % (room_id,)
path = _create_path(PREFIX, "/backfill/%s/", room_id)
args = {
"v": event_tuples,
@@ -157,9 +159,11 @@ class TransportLayerClient(object):
# generated by the json_data_callback.
json_data = transaction.get_dict()
path = _create_path(PREFIX, "/send/%s/", transaction.transaction_id)
response = yield self.client.put_json(
transaction.destination,
path=PREFIX + "/send/%s/" % transaction.transaction_id,
path=path,
data=json_data,
json_data_callback=json_data_callback,
long_retries=True,
@@ -177,7 +181,7 @@ class TransportLayerClient(object):
@log_function
def make_query(self, destination, query_type, args, retry_on_dns_fail,
ignore_backoff=False):
path = PREFIX + "/query/%s" % query_type
path = _create_path(PREFIX, "/query/%s", query_type)
content = yield self.client.get_json(
destination=destination,
@@ -193,19 +197,54 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def make_membership_event(self, destination, room_id, user_id, membership):
"""Asks a remote server to build and sign us a membership event
Note that this does not append any events to any graphs.
Args:
destination (str): address of remote homeserver
room_id (str): room to join/leave
user_id (str): user to be joined/left
membership (str): one of join/leave
Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body (ie, the new event).
Fails with ``HTTPRequestException`` if we get an HTTP response
code >= 300.
Fails with ``NotRetryingDestination`` if we are not yet ready
to retry this server.
Fails with ``FederationDeniedError`` if the remote destination
is not in our federation whitelist
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
if membership not in valid_memberships:
raise RuntimeError(
"make_membership_event called with membership='%s', must be one of %s" %
(membership, ",".join(valid_memberships))
)
path = PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id)
path = _create_path(PREFIX, "/make_%s/%s/%s", membership, room_id, user_id)
ignore_backoff = False
retry_on_dns_fail = False
if membership == Membership.LEAVE:
# we particularly want to do our best to send leave events. The
# problem is that if it fails, we won't retry it later, so if the
# remote server was just having a momentary blip, the room will be
# out of sync.
ignore_backoff = True
retry_on_dns_fail = True
content = yield self.client.get_json(
destination=destination,
path=path,
retry_on_dns_fail=False,
retry_on_dns_fail=retry_on_dns_fail,
timeout=20000,
ignore_backoff=ignore_backoff,
)
defer.returnValue(content)
@@ -213,7 +252,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_join(self, destination, room_id, event_id, content):
path = PREFIX + "/send_join/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/send_join/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
@@ -226,12 +265,18 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_leave(self, destination, room_id, event_id, content):
path = PREFIX + "/send_leave/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/send_leave/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
path=path,
data=content,
# we want to do our best to send this through. The problem is
# that if it fails, we won't retry it later, so if the remote
# server was just having a momentary blip, the room will be out of
# sync.
ignore_backoff=True,
)
defer.returnValue(response)
@@ -239,7 +284,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_invite(self, destination, room_id, event_id, content):
path = PREFIX + "/invite/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/invite/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
@@ -281,7 +326,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def exchange_third_party_invite(self, destination, room_id, event_dict):
path = PREFIX + "/exchange_third_party_invite/%s" % (room_id,)
path = _create_path(PREFIX, "/exchange_third_party_invite/%s", room_id,)
response = yield self.client.put_json(
destination=destination,
@@ -294,7 +339,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def get_event_auth(self, destination, room_id, event_id):
path = PREFIX + "/event_auth/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/event_auth/%s/%s", room_id, event_id)
content = yield self.client.get_json(
destination=destination,
@@ -306,7 +351,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_query_auth(self, destination, room_id, event_id, content):
path = PREFIX + "/query_auth/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/query_auth/%s/%s", room_id, event_id)
content = yield self.client.post_json(
destination=destination,
@@ -368,7 +413,7 @@ class TransportLayerClient(object):
Returns:
A dict containing the device keys.
"""
path = PREFIX + "/user/devices/" + user_id
path = _create_path(PREFIX, "/user/devices/%s", user_id)
content = yield self.client.get_json(
destination=destination,
@@ -418,7 +463,7 @@ class TransportLayerClient(object):
@log_function
def get_missing_events(self, destination, room_id, earliest_events,
latest_events, limit, min_depth, timeout):
path = PREFIX + "/get_missing_events/%s" % (room_id,)
path = _create_path(PREFIX, "/get_missing_events/%s", room_id,)
content = yield self.client.post_json(
destination=destination,
@@ -433,3 +478,475 @@ class TransportLayerClient(object):
)
defer.returnValue(content)
@log_function
def get_group_profile(self, destination, group_id, requester_user_id):
"""Get a group profile
"""
path = _create_path(PREFIX, "/groups/%s/profile", group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def update_group_profile(self, destination, group_id, requester_user_id, content):
"""Update a remote group profile
Args:
destination (str)
group_id (str)
requester_user_id (str)
content (dict): The new profile of the group
"""
path = _create_path(PREFIX, "/groups/%s/profile", group_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def get_group_summary(self, destination, group_id, requester_user_id):
"""Get a group summary
"""
path = _create_path(PREFIX, "/groups/%s/summary", group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_rooms_in_group(self, destination, group_id, requester_user_id):
"""Get all rooms in a group
"""
path = _create_path(PREFIX, "/groups/%s/rooms", group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
def add_room_to_group(self, destination, group_id, requester_user_id, room_id,
content):
"""Add a room to a group
"""
path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
def update_room_in_group(self, destination, group_id, requester_user_id, room_id,
config_key, content):
"""Update room in group
"""
path = _create_path(
PREFIX, "/groups/%s/room/%s/config/%s",
group_id, room_id, config_key,
)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
def remove_room_from_group(self, destination, group_id, requester_user_id, room_id):
"""Remove a room from a group
"""
path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_users_in_group(self, destination, group_id, requester_user_id):
"""Get users in a group
"""
path = _create_path(PREFIX, "/groups/%s/users", group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_invited_users_in_group(self, destination, group_id, requester_user_id):
"""Get users that have been invited to a group
"""
path = _create_path(PREFIX, "/groups/%s/invited_users", group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def accept_group_invite(self, destination, group_id, user_id, content):
"""Accept a group invite
"""
path = _create_path(
PREFIX, "/groups/%s/users/%s/accept_invite",
group_id, user_id,
)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def join_group(self, destination, group_id, user_id, content):
"""Attempts to join a group
"""
path = _create_path(PREFIX, "/groups/%s/users/%s/join", group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def invite_to_group(self, destination, group_id, user_id, requester_user_id, content):
"""Invite a user to a group
"""
path = _create_path(PREFIX, "/groups/%s/users/%s/invite", group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def invite_to_group_notification(self, destination, group_id, user_id, content):
"""Sent by group server to inform a user's server that they have been
invited.
"""
path = _create_path(PREFIX, "/groups/local/%s/users/%s/invite", group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def remove_user_from_group(self, destination, group_id, requester_user_id,
user_id, content):
"""Remove a user fron a group
"""
path = _create_path(PREFIX, "/groups/%s/users/%s/remove", group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def remove_user_from_group_notification(self, destination, group_id, user_id,
content):
"""Sent by group server to inform a user's server that they have been
kicked from the group.
"""
path = _create_path(PREFIX, "/groups/local/%s/users/%s/remove", group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def renew_group_attestation(self, destination, group_id, user_id, content):
"""Sent by either a group server or a user's server to periodically update
the attestations
"""
path = _create_path(PREFIX, "/groups/%s/renew_attestation/%s", group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def update_group_summary_room(self, destination, group_id, user_id, room_id,
category_id, content):
"""Update a room entry in a group summary
"""
if category_id:
path = _create_path(
PREFIX, "/groups/%s/summary/categories/%s/rooms/%s",
group_id, category_id, room_id,
)
else:
path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": user_id},
data=content,
ignore_backoff=True,
)
@log_function
def delete_group_summary_room(self, destination, group_id, user_id, room_id,
category_id):
"""Delete a room entry in a group summary
"""
if category_id:
path = _create_path(
PREFIX + "/groups/%s/summary/categories/%s/rooms/%s",
group_id, category_id, room_id,
)
else:
path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": user_id},
ignore_backoff=True,
)
@log_function
def get_group_categories(self, destination, group_id, requester_user_id):
"""Get all categories in a group
"""
path = _create_path(PREFIX, "/groups/%s/categories", group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_group_category(self, destination, group_id, requester_user_id, category_id):
"""Get category info in a group
"""
path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def update_group_category(self, destination, group_id, requester_user_id, category_id,
content):
"""Update a category in a group
"""
path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def delete_group_category(self, destination, group_id, requester_user_id,
category_id):
"""Delete a category in a group
"""
path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_group_roles(self, destination, group_id, requester_user_id):
"""Get all roles in a group
"""
path = _create_path(PREFIX, "/groups/%s/roles", group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_group_role(self, destination, group_id, requester_user_id, role_id):
"""Get a roles info
"""
path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def update_group_role(self, destination, group_id, requester_user_id, role_id,
content):
"""Update a role in a group
"""
path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def delete_group_role(self, destination, group_id, requester_user_id, role_id):
"""Delete a role in a group
"""
path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def update_group_summary_user(self, destination, group_id, requester_user_id,
user_id, role_id, content):
"""Update a users entry in a group
"""
if role_id:
path = _create_path(
PREFIX, "/groups/%s/summary/roles/%s/users/%s",
group_id, role_id, user_id,
)
else:
path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def set_group_join_policy(self, destination, group_id, requester_user_id,
content):
"""Sets the join policy for a group
"""
path = _create_path(PREFIX, "/groups/%s/settings/m.join_policy", group_id,)
return self.client.put_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def delete_group_summary_user(self, destination, group_id, requester_user_id,
user_id, role_id):
"""Delete a users entry in a group
"""
if role_id:
path = _create_path(
PREFIX, "/groups/%s/summary/roles/%s/users/%s",
group_id, role_id, user_id,
)
else:
path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
def bulk_get_publicised_groups(self, destination, user_ids):
"""Get the groups a list of users are publicising
"""
path = PREFIX + "/get_groups_publicised"
content = {"user_ids": user_ids}
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
def _create_path(prefix, path, *args):
"""Creates a path from the prefix, path template and args. Ensures that
all args are url encoded.
Example:
_create_path(PREFIX, "/event/%s/", event_id)
Args:
prefix (str)
path (str): String template for the path
args: ([str]): Args to insert into path. Each arg will be url encoded
Returns:
str
"""
return prefix + path % tuple(urllib.quote(arg, "") for arg in args)
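For instance, an argument containing reserved characters is URL-encoded before being interpolated (illustrative values; the federation prefix is shown literally here):

_create_path("/_matrix/federation/v1", "/state/%s/", "!room:example.org")
# -> "/_matrix/federation/v1/state/%21room%3Aexample.org/"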

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,7 +17,7 @@
from twisted.internet import defer
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.api.errors import Codes, SynapseError
from synapse.api.errors import Codes, SynapseError, FederationDeniedError
from synapse.http.server import JsonResource
from synapse.http.servlet import (
parse_json_object_from_request, parse_integer_from_args, parse_string_from_args,
@@ -24,7 +25,8 @@ from synapse.http.servlet import (
)
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.versionstring import get_version_string
from synapse.types import ThirdPartyInstanceID
from synapse.util.logcontext import run_in_background
from synapse.types import ThirdPartyInstanceID, get_domain_from_id
import functools
import logging
@@ -79,6 +81,8 @@ class Authenticator(object):
def __init__(self, hs):
self.keyring = hs.get_keyring()
self.server_name = hs.hostname
self.store = hs.get_datastore()
self.federation_domain_whitelist = hs.config.federation_domain_whitelist
# A method just so we can pass 'self' as the authenticator to the Servlets
@defer.inlineCallbacks
@@ -110,7 +114,7 @@ class Authenticator(object):
key = strip_quotes(param_dict["key"])
sig = strip_quotes(param_dict["sig"])
return (origin, key, sig)
except:
except Exception:
raise AuthenticationError(
400, "Malformed Authorization header", Codes.UNAUTHORIZED
)
@@ -128,6 +132,12 @@ class Authenticator(object):
json_request["origin"] = origin
json_request["signatures"].setdefault(origin, {})[key] = sig
if (
self.federation_domain_whitelist is not None and
origin not in self.federation_domain_whitelist
):
raise FederationDeniedError(origin)
if not json_request["signatures"]:
raise NoAuthenticationError(
401, "Missing Authorization headers", Codes.UNAUTHORIZED,
@@ -138,18 +148,30 @@ class Authenticator(object):
logger.info("Request from %s", origin)
request.authenticated_entity = origin
# If we get a valid signed request from the other side, it's probably
# alive
retry_timings = yield self.store.get_destination_retry_timings(origin)
if retry_timings and retry_timings["retry_last_ts"]:
run_in_background(self._reset_retry_timings, origin)
defer.returnValue(origin)
@defer.inlineCallbacks
def _reset_retry_timings(self, origin):
try:
logger.info("Marking origin %r as up", origin)
yield self.store.set_destination_retry_timings(origin, 0, 0)
except Exception:
logger.exception("Error resetting retry timings on %s", origin)
class BaseFederationServlet(object):
REQUIRE_AUTH = True
def __init__(self, handler, authenticator, ratelimiter, server_name,
room_list_handler):
def __init__(self, handler, authenticator, ratelimiter, server_name):
self.handler = handler
self.authenticator = authenticator
self.ratelimiter = ratelimiter
self.room_list_handler = room_list_handler
def _wrap(self, func):
authenticator = self.authenticator
@@ -170,7 +192,7 @@ class BaseFederationServlet(object):
if self.REQUIRE_AUTH:
logger.exception("authenticate_request failed")
raise
except:
except Exception:
logger.exception("authenticate_request failed")
raise
@@ -263,7 +285,7 @@ class FederationSendServlet(BaseFederationServlet):
code, response = yield self.handler.on_incoming_transaction(
transaction_data
)
except:
except Exception:
logger.exception("on_incoming_transaction failed")
raise
@@ -581,7 +603,7 @@ class PublicRoomList(BaseFederationServlet):
else:
network_tuple = ThirdPartyInstanceID(None, None)
data = yield self.room_list_handler.get_local_public_room_list(
data = yield self.handler.get_local_public_room_list(
limit, since_token,
network_tuple=network_tuple
)
@@ -602,7 +624,550 @@ class FederationVersionServlet(BaseFederationServlet):
}))
SERVLET_CLASSES = (
class FederationGroupsProfileServlet(BaseFederationServlet):
"""Get/set the basic profile of a group on behalf of a user
"""
PATH = "/groups/(?P<group_id>[^/]*)/profile$"
@defer.inlineCallbacks
def on_GET(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.get_group_profile(
group_id, requester_user_id
)
defer.returnValue((200, new_content))
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.update_group_profile(
group_id, requester_user_id, content
)
defer.returnValue((200, new_content))
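# --- Hypothetical helper (illustration only, not part of this diff) ----------
# Every group servlet below repeats the same check: the requester_user_id
# supplied in the query string must belong to the origin server that signed
# the request. A helper like this is one way the pattern could be factored.
from synapse.api.errors import SynapseError
from synapse.http.servlet import parse_string_from_args
from synapse.types import get_domain_from_id

def _require_requester_from_origin(query, origin):
    requester_user_id = parse_string_from_args(query, "requester_user_id")
    if get_domain_from_id(requester_user_id) != origin:
        raise SynapseError(403, "requester_user_id doesn't match origin")
    return requester_user_id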
class FederationGroupsSummaryServlet(BaseFederationServlet):
PATH = "/groups/(?P<group_id>[^/]*)/summary$"
@defer.inlineCallbacks
def on_GET(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.get_group_summary(
group_id, requester_user_id
)
defer.returnValue((200, new_content))
class FederationGroupsRoomsServlet(BaseFederationServlet):
"""Get the rooms in a group on behalf of a user
"""
PATH = "/groups/(?P<group_id>[^/]*)/rooms$"
@defer.inlineCallbacks
def on_GET(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.get_rooms_in_group(
group_id, requester_user_id
)
defer.returnValue((200, new_content))
class FederationGroupsAddRoomsServlet(BaseFederationServlet):
"""Add/remove room from group
"""
PATH = "/groups/(?P<group_id>[^/]*)/room/(?P<room_id>[^/]*)$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, room_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.add_room_to_group(
group_id, requester_user_id, room_id, content
)
defer.returnValue((200, new_content))
@defer.inlineCallbacks
def on_DELETE(self, origin, content, query, group_id, room_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.remove_room_from_group(
group_id, requester_user_id, room_id,
)
defer.returnValue((200, new_content))
class FederationGroupsAddRoomsConfigServlet(BaseFederationServlet):
"""Update room config in group
"""
PATH = (
"/groups/(?P<group_id>[^/]*)/room/(?P<room_id>[^/]*)"
"/config/(?P<config_key>[^/]*)$"
)
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, room_id, config_key):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
result = yield self.groups_handler.update_room_in_group(
group_id, requester_user_id, room_id, config_key, content,
)
defer.returnValue((200, result))
class FederationGroupsUsersServlet(BaseFederationServlet):
"""Get the users in a group on behalf of a user
"""
PATH = "/groups/(?P<group_id>[^/]*)/users$"
@defer.inlineCallbacks
def on_GET(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.get_users_in_group(
group_id, requester_user_id
)
defer.returnValue((200, new_content))
class FederationGroupsInvitedUsersServlet(BaseFederationServlet):
"""Get the users that have been invited to a group
"""
PATH = "/groups/(?P<group_id>[^/]*)/invited_users$"
@defer.inlineCallbacks
def on_GET(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.get_invited_users_in_group(
group_id, requester_user_id
)
defer.returnValue((200, new_content))
class FederationGroupsInviteServlet(BaseFederationServlet):
"""Ask a group server to invite someone to the group
"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/invite$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, user_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.invite_to_group(
group_id, user_id, requester_user_id, content,
)
defer.returnValue((200, new_content))
class FederationGroupsAcceptInviteServlet(BaseFederationServlet):
"""Accept an invitation from the group server
"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/accept_invite$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, user_id):
if get_domain_from_id(user_id) != origin:
raise SynapseError(403, "user_id doesn't match origin")
new_content = yield self.handler.accept_invite(
group_id, user_id, content,
)
defer.returnValue((200, new_content))
class FederationGroupsJoinServlet(BaseFederationServlet):
"""Attempt to join a group
"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/join$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, user_id):
if get_domain_from_id(user_id) != origin:
raise SynapseError(403, "user_id doesn't match origin")
new_content = yield self.handler.join_group(
group_id, user_id, content,
)
defer.returnValue((200, new_content))
class FederationGroupsRemoveUserServlet(BaseFederationServlet):
"""Leave or kick a user from the group
"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/remove$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, user_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.remove_user_from_group(
group_id, user_id, requester_user_id, content,
)
defer.returnValue((200, new_content))
class FederationGroupsLocalInviteServlet(BaseFederationServlet):
"""A group server has invited a local user
"""
PATH = "/groups/local/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/invite$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, user_id):
if get_domain_from_id(group_id) != origin:
raise SynapseError(403, "group_id doesn't match origin")
new_content = yield self.handler.on_invite(
group_id, user_id, content,
)
defer.returnValue((200, new_content))
class FederationGroupsRemoveLocalUserServlet(BaseFederationServlet):
"""A group server has removed a local user
"""
PATH = "/groups/local/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/remove$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, user_id):
if get_domain_from_id(group_id) != origin:
raise SynapseError(403, "user_id doesn't match origin")
new_content = yield self.handler.user_removed_from_group(
group_id, user_id, content,
)
defer.returnValue((200, new_content))
class FederationGroupsRenewAttestaionServlet(BaseFederationServlet):
"""A group or user's server renews their attestation
"""
PATH = "/groups/(?P<group_id>[^/]*)/renew_attestation/(?P<user_id>[^/]*)$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, user_id):
# We don't need to check auth here as we check the attestation signatures
new_content = yield self.handler.on_renew_attestation(
group_id, user_id, content
)
defer.returnValue((200, new_content))
class FederationGroupsSummaryRoomsServlet(BaseFederationServlet):
"""Add/remove a room from the group summary, with optional category.
Matches both:
- /groups/:group/summary/rooms/:room_id
- /groups/:group/summary/categories/:category/rooms/:room_id
"""
PATH = (
"/groups/(?P<group_id>[^/]*)/summary"
"(/categories/(?P<category_id>[^/]+))?"
"/rooms/(?P<room_id>[^/]*)$"
)
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, category_id, room_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
if category_id == "":
raise SynapseError(400, "category_id cannot be empty string")
resp = yield self.handler.update_group_summary_room(
group_id, requester_user_id,
room_id=room_id,
category_id=category_id,
content=content,
)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_DELETE(self, origin, content, query, group_id, category_id, room_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
if category_id == "":
raise SynapseError(400, "category_id cannot be empty string")
resp = yield self.handler.delete_group_summary_room(
group_id, requester_user_id,
room_id=room_id,
category_id=category_id,
)
defer.returnValue((200, resp))
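# --- Illustrative check of the optional category segment ---------------------
# The PATH above makes the /categories/<category_id> part optional, so both
# URL forms listed in the docstring match. The ids below are made up.
import re

_summary_rooms_pattern = re.compile(
    "/groups/(?P<group_id>[^/]*)/summary"
    "(/categories/(?P<category_id>[^/]+))?"
    "/rooms/(?P<room_id>[^/]*)$"
)

m_plain = _summary_rooms_pattern.match(
    "/groups/+g:example.com/summary/rooms/!room:example.com"
)
m_categorised = _summary_rooms_pattern.match(
    "/groups/+g:example.com/summary/categories/featured/rooms/!room:example.com"
)
assert m_plain.group("category_id") is None
assert m_categorised.group("category_id") == "featured"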
class FederationGroupsCategoriesServlet(BaseFederationServlet):
"""Get all categories for a group
"""
PATH = (
"/groups/(?P<group_id>[^/]*)/categories/$"
)
@defer.inlineCallbacks
def on_GET(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
resp = yield self.handler.get_group_categories(
group_id, requester_user_id,
)
defer.returnValue((200, resp))
class FederationGroupsCategoryServlet(BaseFederationServlet):
"""Add/remove/get a category in a group
"""
PATH = (
"/groups/(?P<group_id>[^/]*)/categories/(?P<category_id>[^/]+)$"
)
@defer.inlineCallbacks
def on_GET(self, origin, content, query, group_id, category_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
resp = yield self.handler.get_group_category(
group_id, requester_user_id, category_id
)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, category_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
if category_id == "":
raise SynapseError(400, "category_id cannot be empty string")
resp = yield self.handler.upsert_group_category(
group_id, requester_user_id, category_id, content,
)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_DELETE(self, origin, content, query, group_id, category_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
if category_id == "":
raise SynapseError(400, "category_id cannot be empty string")
resp = yield self.handler.delete_group_category(
group_id, requester_user_id, category_id,
)
defer.returnValue((200, resp))
class FederationGroupsRolesServlet(BaseFederationServlet):
"""Get roles in a group
"""
PATH = (
"/groups/(?P<group_id>[^/]*)/roles/$"
)
@defer.inlineCallbacks
def on_GET(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
resp = yield self.handler.get_group_roles(
group_id, requester_user_id,
)
defer.returnValue((200, resp))
class FederationGroupsRoleServlet(BaseFederationServlet):
"""Add/remove/get a role in a group
"""
PATH = (
"/groups/(?P<group_id>[^/]*)/roles/(?P<role_id>[^/]+)$"
)
@defer.inlineCallbacks
def on_GET(self, origin, content, query, group_id, role_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
resp = yield self.handler.get_group_role(
group_id, requester_user_id, role_id
)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, role_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
if role_id == "":
raise SynapseError(400, "role_id cannot be empty string")
resp = yield self.handler.update_group_role(
group_id, requester_user_id, role_id, content,
)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_DELETE(self, origin, content, query, group_id, role_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
if role_id == "":
raise SynapseError(400, "role_id cannot be empty string")
resp = yield self.handler.delete_group_role(
group_id, requester_user_id, role_id,
)
defer.returnValue((200, resp))
class FederationGroupsSummaryUsersServlet(BaseFederationServlet):
"""Add/remove a user from the group summary, with optional role.
Matches both:
- /groups/:group/summary/users/:user_id
- /groups/:group/summary/roles/:role/users/:user_id
"""
PATH = (
"/groups/(?P<group_id>[^/]*)/summary"
"(/roles/(?P<role_id>[^/]+))?"
"/users/(?P<user_id>[^/]*)$"
)
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, role_id, user_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
if role_id == "":
raise SynapseError(400, "role_id cannot be empty string")
resp = yield self.handler.update_group_summary_user(
group_id, requester_user_id,
user_id=user_id,
role_id=role_id,
content=content,
)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_DELETE(self, origin, content, query, group_id, role_id, user_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
if role_id == "":
raise SynapseError(400, "role_id cannot be empty string")
resp = yield self.handler.delete_group_summary_user(
group_id, requester_user_id,
user_id=user_id,
role_id=role_id,
)
defer.returnValue((200, resp))
class FederationGroupsBulkPublicisedServlet(BaseFederationServlet):
"""Get roles in a group
"""
PATH = (
"/get_groups_publicised$"
)
@defer.inlineCallbacks
def on_POST(self, origin, content, query):
resp = yield self.handler.bulk_get_publicised_groups(
content["user_ids"], proxy=False,
)
defer.returnValue((200, resp))
class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
"""Sets whether a group is joinable without an invite or knock
"""
PATH = "/groups/(?P<group_id>[^/]*)/settings/m.join_policy$"
@defer.inlineCallbacks
def on_PUT(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.set_group_join_policy(
group_id, requester_user_id, content
)
defer.returnValue((200, new_content))
FEDERATION_SERVLET_CLASSES = (
FederationSendServlet,
FederationPullServlet,
FederationEventServlet,
@@ -625,17 +1190,85 @@ SERVLET_CLASSES = (
FederationThirdPartyInviteExchangeServlet,
On3pidBindServlet,
OpenIdUserInfo,
PublicRoomList,
FederationVersionServlet,
)
ROOM_LIST_CLASSES = (
PublicRoomList,
)
GROUP_SERVER_SERVLET_CLASSES = (
FederationGroupsProfileServlet,
FederationGroupsSummaryServlet,
FederationGroupsRoomsServlet,
FederationGroupsUsersServlet,
FederationGroupsInvitedUsersServlet,
FederationGroupsInviteServlet,
FederationGroupsAcceptInviteServlet,
FederationGroupsJoinServlet,
FederationGroupsRemoveUserServlet,
FederationGroupsSummaryRoomsServlet,
FederationGroupsCategoriesServlet,
FederationGroupsCategoryServlet,
FederationGroupsRolesServlet,
FederationGroupsRoleServlet,
FederationGroupsSummaryUsersServlet,
FederationGroupsAddRoomsServlet,
FederationGroupsAddRoomsConfigServlet,
FederationGroupsSettingJoinPolicyServlet,
)
GROUP_LOCAL_SERVLET_CLASSES = (
FederationGroupsLocalInviteServlet,
FederationGroupsRemoveLocalUserServlet,
FederationGroupsBulkPublicisedServlet,
)
GROUP_ATTESTATION_SERVLET_CLASSES = (
FederationGroupsRenewAttestaionServlet,
)
def register_servlets(hs, resource, authenticator, ratelimiter):
for servletclass in SERVLET_CLASSES:
for servletclass in FEDERATION_SERVLET_CLASSES:
servletclass(
handler=hs.get_replication_layer(),
handler=hs.get_federation_server(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
).register(resource)
for servletclass in ROOM_LIST_CLASSES:
servletclass(
handler=hs.get_room_list_handler(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
).register(resource)
for servletclass in GROUP_SERVER_SERVLET_CLASSES:
servletclass(
handler=hs.get_groups_server_handler(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
).register(resource)
for servletclass in GROUP_LOCAL_SERVLET_CLASSES:
servletclass(
handler=hs.get_groups_local_handler(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
).register(resource)
for servletclass in GROUP_ATTESTATION_SERVLET_CLASSES:
servletclass(
handler=hs.get_groups_attestation_renewer(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
room_list_handler=hs.get_room_list_handler(),
).register(resource)
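# --- Sketch of how the registration above might be wired up ------------------
# register_servlets binds each servlet group to a different handler from the
# HomeServer and attaches them all to a single JsonResource. The helper below
# is an assumption for illustration; canonical_json=False mirrors how the
# federation transport resource is typically constructed, but is not asserted
# by this change.
from synapse.http.server import JsonResource

def _example_build_federation_resource(hs, authenticator, ratelimiter):
    resource = JsonResource(hs, canonical_json=False)
    register_servlets(hs, resource, authenticator, ratelimiter)
    return resource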

