Compare commits

...

478 Commits

Author SHA1 Message Date
Neil Johnson
28dd536e80 update changelog and bump version to 0.28.0 2018-04-26 15:51:39 +01:00
Neil Johnson
8721580303 Merge branch 'develop' of https://github.com/matrix-org/synapse into release-v0.28.0-rc1 2018-04-26 15:44:54 +01:00
Erik Johnston
0ced8b5b47 Merge pull request #3134 from matrix-org/erikj/fix_admin_media_api
Fix media admin APIs
2018-04-26 12:02:40 +01:00
Erik Johnston
7ec8e798b4 Fix media admin APIs 2018-04-26 11:31:22 +01:00
Erik Johnston
a5ad88913c Merge pull request #3130 from matrix-org/erikj/fix_quarantine_room
Fix quarantine media admin API
2018-04-25 17:54:12 +01:00
Erik Johnston
22881b3d69 Also fix reindexing of search 2018-04-25 15:32:04 +01:00
Erik Johnston
ba3166743c Fix quarantine media admin API 2018-04-25 15:11:18 +01:00
Neil Johnson
1bb83d5d41 Merge branch 'master' into develop 2018-04-24 15:52:43 +01:00
Neil Johnson
13a2beabca Update CHANGES.rst
fix formatting on line break
2018-04-24 15:43:30 +01:00
Neil Johnson
2c3e995f38 Bump version and update changelog 2018-04-24 15:33:22 +01:00
Neil Johnson
8e8b06715f Revert "Bump version and update changelog"
This reverts commit 08b29d4574.
2018-04-24 13:58:45 +01:00
Neil Johnson
08b29d4574 Bump version and update changelog 2018-04-24 13:56:12 +01:00
Richard van der Hoff
77ebef9d43 Merge pull request #3118 from matrix-org/rav/reject_prev_events
Reject events which have lots of prev_events
2018-04-23 17:51:38 +01:00
Richard van der Hoff
9b9c38373c Remove spurious param 2018-04-23 12:00:06 +01:00
Richard van der Hoff
286e20f2bc Merge pull request #3109 from NotAFile/py3-tests-fix
Make tests py3 compatible
2018-04-23 11:59:03 +01:00
Richard van der Hoff
dc875d2712 Merge pull request #3106 from NotAFile/py3-six-itervalues-1
Use six.itervalues in some places
2018-04-20 15:43:52 +01:00
Richard van der Hoff
8dc4a6144b Merge pull request #3107 from NotAFile/py3-bool-nonzero
add __bool__ alias to __nonzero__ methods
2018-04-20 15:43:39 +01:00
Richard van der Hoff
d06a9ea5f7 Merge pull request #3104 from NotAFile/py3-unittest-config
Add some more variables to the unittest config
2018-04-20 15:35:58 +01:00
Richard van der Hoff
c09a6daf09 Merge pull request #3110 from NotAFile/py3-six-queue
Replace Queue with six.moves.queue
2018-04-20 15:35:00 +01:00
Richard van der Hoff
692a3cc806 Merge pull request #3103 from NotAFile/py3-baseexcepton-message
Use str(e) instead of e.message
2018-04-20 15:34:49 +01:00
Erik Johnston
366dd893fc Merge pull request #3100 from silkeh/readme-srv-cname
Clarify that SRV may not point to a CNAME
2018-04-20 15:18:44 +01:00
Erik Johnston
bdb7714d13 Merge pull request #3125 from matrix-org/erikj/add_contrib_docs
Document contrib directory
2018-04-20 13:02:24 +01:00
Erik Johnston
67dabe143d Document contrib directory 2018-04-20 11:47:38 +01:00
Richard van der Hoff
3de7d9fe99 accept stupid events over backfill 2018-04-20 11:41:03 +01:00
Richard van der Hoff
11a67b7c9d Merge pull request #3093 from matrix-org/rav/response_cache_wrap
Refactor ResponseCache usage
2018-04-20 11:31:17 +01:00
Richard van der Hoff
0c280d4d99 Reinstate linearizer for federation_server.on_context_state_request 2018-04-20 11:10:04 +01:00
Richard van der Hoff
bc381d5798 Merge pull request #3117 from matrix-org/rav/refactor_have_events
Refactor store.have_events
2018-04-20 10:26:12 +01:00
Richard van der Hoff
b1dfbc3c40 Refactor store.have_events
It turns out that most of the time we were calling have_events, we were only
using half of the result. Replace have_events with have_seen_events and
get_rejection_reasons, so that we can see what's going on a bit more clearly.
2018-04-20 10:25:56 +01:00
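A minimal sketch of the split described above, assuming Twisted-style storage methods; the two method names come from the commit message, but their exact signatures are assumptions:

```python
from twisted.internet import defer

@defer.inlineCallbacks
def check_events(store, event_ids):
    # Two narrow questions instead of one have_events call whose
    # result was only ever half-used (sketch; signatures assumed).
    seen = yield store.have_seen_events(event_ids)             # set of event IDs
    rejections = yield store.get_rejection_reasons(event_ids)  # event ID -> reason
    defer.returnValue((seen, rejections))
```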
Richard van der Hoff
dacf3a50ac Merge pull request #3113 from matrix-org/rav/fix_huge_prev_events
Avoid creating events with huge numbers of prev_events
2018-04-18 11:27:56 +01:00
Richard van der Hoff
1f4b498b73 Add some comments 2018-04-18 00:15:36 +01:00
Richard van der Hoff
e585228860 Check events on backfill too 2018-04-18 00:06:42 +01:00
Richard van der Hoff
9b7794262f Reject events which have too many auth_events or prev_events
... this should protect us from being DoSed by people making silly events
(deliberately or otherwise)
2018-04-18 00:06:42 +01:00
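A hedged sketch of what such a guard can look like; the constants are illustrative assumptions (this log doesn't show the real bounds), and SynapseError stands in for whatever error the actual code raises:

```python
from synapse.api.errors import SynapseError

MAX_PREV_EVENTS = 20   # illustrative, not the real limit
MAX_AUTH_EVENTS = 10   # illustrative, not the real limit

def sanity_check_event(event):
    # Reject obviously silly events before doing any expensive work.
    if len(event.prev_events) > MAX_PREV_EVENTS:
        raise SynapseError(400, "Too many prev_events")
    if len(event.auth_events) > MAX_AUTH_EVENTS:
        raise SynapseError(400, "Too many auth_events")
```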
Richard van der Hoff
639480e14a Avoid creating events with huge numbers of prev_events
In most cases, we limit the number of prev_events for a given event to 10
events. This fixes a particular code path which created events with huge
numbers of prev_events.
2018-04-16 18:41:37 +01:00
Adrian Tschira
878995e660 Replace Queue with six.moves.queue
and a six.range change which I missed the last time

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-16 00:46:21 +02:00
Adrian Tschira
a1a3c9660f Make tests py3 compatible
This is a mixed commit that fixes various small issues

 * print parentheses
 * 01 is invalid syntax (it was octal in py2)
 * [x for i in 1, 2] is invalid syntax
 * six moves

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-16 00:39:32 +02:00
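For reference, the four classes of fix named above look roughly like this (a py2/py3-compatible sketch, not the actual diff):

```python
from __future__ import print_function
from six.moves import range

# print parentheses:  `print "x"`  ->  `print("x")`
print("x")

# `01` was an octal literal in py2 and is a syntax error in py3.
flags = 0o1

# `[x for i in 1, 2]` is invalid syntax in py3; make the iterable explicit.
doubled = [i * 2 for i in (1, 2)]

# six moves: import renamed stdlib names via six.moves.
for i in range(3):
    print(i)
```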
Matthew Hodgson
512633ef44 fix spurious changelog dup 2018-04-15 22:45:06 +01:00
Adrian Tschira
f63ff73c7f add __bool__ alias to __nonzero__ methods
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:40:47 +02:00
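The pattern is a one-liner per class (sketch; the class below is made up for illustration):

```python
class HasContent(object):
    def __init__(self, items):
        self.items = items

    def __nonzero__(self):      # py2 truthiness hook
        return bool(self.items)

    __bool__ = __nonzero__      # py3 looks for __bool__ instead
```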
Adrian Tschira
36c59ce669 Use six.itervalues in some places
There's more where that came from

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:39:43 +02:00
Adrian Tschira
cb9cdfecd0 Add some more variables to the unittest config
These worked accidentally before (python2 doesn't complain if you
compare incompatible types) but under py3 this blows up spectacularly

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:36:39 +02:00
Adrian Tschira
1515560f5c Use str(e) instead of e.message
Doing this I learned e.message was pretty short-lived: it was added in 2.6,
then deemed a bad idea and deprecated in 2.7

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:32:42 +02:00
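A small sketch of the substitution:

```python
try:
    raise ValueError("something went wrong")
except ValueError as e:
    # e.message was py2-only (added in 2.6, deprecated in 2.7);
    # str(e) works on both py2 and py3.
    message = str(e)
```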
Silke
c4bdbc2bd2 Clarify that SRV may not point to a CNAME
Signed-off-by: Silke Hofstra <silke@slxh.eu>
2018-04-14 14:55:25 +02:00
Neil Johnson
154b44c249 Merge branch 'master' of https://github.com/matrix-org/synapse into develop 2018-04-13 17:07:54 +01:00
Matthew Hodgson
0d8c50df44 Merge pull request #3099 from matrix-org/matthew/fix-federation-domain-whitelist
fix federation_domain_whitelist
2018-04-13 15:51:13 +01:00
Matthew Hodgson
78a9698650 fix federation_domain_whitelist
we were checking the wrong server_name on inbound requests
2018-04-13 15:47:43 +01:00
Matthew Hodgson
25b0ba30b1 revert last to PR properly 2018-04-13 15:46:37 +01:00
Matthew Hodgson
f8d46cad3c correctly auth inbound federation_domain_whitelist reqs 2018-04-13 15:41:52 +01:00
Neil Johnson
d4b2e05852 Merge branch 'master' of https://github.com/matrix-org/synapse into develop 2018-04-13 12:16:27 +01:00
Neil Johnson
eb53439c4a Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-04-13 12:14:57 +01:00
Neil Johnson
51d628d28d Bump version and Change log 2018-04-13 12:08:19 +01:00
Neil Johnson
df77837a33 Merge pull request #3095 from matrix-org/rav/bump_canonical_json
Update canonicaljson dependency
2018-04-13 12:04:46 +01:00
Richard van der Hoff
d3347ad485 Revert "Use sortedcontainers instead of blist"
This reverts commit 9fbe70a7dc.

It turns out that sortedcontainers.SortedDict is not an exact match for
blist.sorteddict; in particular, `popitem()` removes things from the opposite
end of the dict.

This is trivial to fix, but I want to add some unit tests, and potentially some
more thought about it, before we do so.
2018-04-13 11:16:43 +01:00
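The incompatibility can be sidestepped by pinning the end explicitly, as in this sketch (the `index=` keyword is the sortedcontainers 2.x spelling; 1.x used `last=`):

```python
from sortedcontainers import SortedDict

d = SortedDict({1: "a", 2: "b", 3: "c"})

# blist.sorteddict and SortedDict pop from opposite ends by default,
# which is the mismatch described above; naming the end avoids it.
smallest = d.popitem(index=0)    # (1, 'a')
largest = d.popitem(index=-1)    # (3, 'c')
```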
Richard van der Hoff
fac3f9e678 Bump canonicaljson to 1.1.3
1.1.2 was a bit broken too :/
2018-04-13 10:21:38 +01:00
Richard van der Hoff
60f6014bb7 ResponseCache: fix handling of completed results
Turns out that ObservableDeferred.observe doesn't return a deferred if the
result is already completed. Fix handling and improve documentation.
2018-04-13 07:32:29 +01:00
Richard van der Hoff
119596ab8f Update canonicaljson dependency
1.1.0 and 1.1.1 were broken, so we're updating this to help people make sure
they don't end up on a broken version.

Also, 1.1.2 is speedier...
2018-04-12 17:31:44 +01:00
Richard van der Hoff
b78395b7fe Refactor ResponseCache usage
Adds a `.wrap` method to ResponseCache which wraps up the boilerplate of a
(get, set) pair, and then use it throughout the codebase.

This will be largely non-functional, but does include the following functional
changes:

* federation_server.on_context_state_request: drops use of _server_linearizer
  which looked redundant and could cause incorrect cache misses by yielding
  between the get and the set.
* RoomListHandler.get_remote_public_room_list(): fixes logcontext leaks
* the wrap function includes some logging. I'm hoping this won't be too noisy
  on production.
2018-04-12 13:02:15 +01:00
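A sketch of what such a `.wrap` helper might look like; `run_in_background` and `make_deferred_yieldable` are Synapse logcontext utilities mentioned elsewhere in this log, but the body here is an assumption, not the verbatim implementation:

```python
def wrap(self, key, callback, *args, **kwargs):
    """Return the cached response for `key`, or compute and cache it."""
    result = self.get(key)
    if result is None:
        # the commit mentions the wrap function includes some logging
        logger.debug("No cached result for %r; calculating", key)
        result = self.set(key, run_in_background(callback, *args, **kwargs))
    return make_deferred_yieldable(result)
```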
Richard van der Hoff
d5c74b9f6c Merge pull request #3092 from matrix-org/rav/response_cache_metrics
Add metrics for ResponseCache
2018-04-12 12:59:36 +01:00
Erik Johnston
0f13f30fca Merge pull request #3090 from matrix-org/erikj/processed_event_lag
Add metrics for event processing lag
2018-04-12 12:18:57 +01:00
Erik Johnston
415aeefd89 Format docstring 2018-04-12 12:07:09 +01:00
Erik Johnston
19ceb4851f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/processed_event_lag 2018-04-12 11:36:07 +01:00
Richard van der Hoff
261124396e Merge pull request #3059 from matrix-org/rav/doc_response_cache
Document the behaviour of ResponseCache
2018-04-12 11:22:30 +01:00
Erik Johnston
23a7f9d7f4 Doc we raise on unknown event 2018-04-12 11:20:51 +01:00
Erik Johnston
d7bf3a68f0 s/list/tuple 2018-04-12 11:19:04 +01:00
Erik Johnston
f67e906e18 Set all metrics at the same time 2018-04-12 11:18:19 +01:00
Erik Johnston
971059a733 Merge pull request #3088 from matrix-org/erikj/as_parallel
Send events to ASes concurrently
2018-04-12 10:42:36 +01:00
Erik Johnston
e939f3bca6 Fix tests 2018-04-11 14:37:11 +01:00
Erik Johnston
4dae4a97ed Track last processed event received_ts 2018-04-11 14:27:09 +01:00
Erik Johnston
92e34615c5 Track where event stream processing have gotten up to 2018-04-11 12:13:40 +01:00
Erik Johnston
ab825aa328 Add GaugeMetric 2018-04-11 12:13:40 +01:00
Richard van der Hoff
233699c42e Merge pull request #2760 from Valodim/pypy
Synapse on PyPy
2018-04-11 11:20:01 +01:00
Neil Johnson
427e6c4059 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-04-11 10:59:00 +01:00
Neil Johnson
781cd8c54f bump version/changelog 2018-04-11 10:54:43 +01:00
Neil Johnson
9ef0b179e0 Merge commit '11d2609da70af797405241cdf7d9df19db5628f2' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-11 10:51:59 +01:00
Erik Johnston
121591568b Send events to ASes concurrently 2018-04-11 09:56:00 +01:00
Richard van der Hoff
b3384232a0 Add metrics for ResponseCache 2018-04-10 23:14:47 +01:00
Matthew Hodgson
360d899a64 Merge pull request #3086 from matrix-org/r30_stats
fix typo
2018-04-10 17:46:37 +01:00
Neil Johnson
d54cfbb7a8 fix typo 2018-04-10 17:38:16 +01:00
Erik Johnston
eaa2ebf20b Merge pull request #3079 from matrix-org/erikj/limit_concurrent_sends
Limit concurrent event sends for a room
2018-04-10 16:43:58 +01:00
Erik Johnston
9daf82278f Merge pull request #3078 from matrix-org/erikj/federation_sender
Send federation events concurrently
2018-04-10 16:43:48 +01:00
Neil Johnson
dd723267b2 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into develop 2018-04-10 15:03:29 +01:00
Erik Johnston
a060dfa132 Use run_in_background instead 2018-04-10 14:25:11 +01:00
Erik Johnston
f8e8ec013b Note why we're limiting concurrent event sends 2018-04-10 14:00:46 +01:00
Erik Johnston
1246d23710 Preserve log contexts correctly 2018-04-10 12:04:32 +01:00
Erik Johnston
d49cbf712f Log event ID on exception 2018-04-10 12:03:41 +01:00
Erik Johnston
ce72d590ed Merge pull request #3082 from matrix-org/erikj/urlencode_paths
URL quote path segments over federation
2018-04-10 11:31:16 +01:00
Erik Johnston
11d2609da7 Ensure slashes are escaped 2018-04-10 11:24:40 +01:00
Erik Johnston
dab87b84a3 URL quote path segments over federation 2018-04-10 11:16:08 +01:00
Vincent Breitmoser
6d7f0f8dd3 Don't disable GC when running on PyPy
PyPy's incminimark GC can't be triggered manually. From what I observed
there are no obvious issues with just letting it run normally. And
unlike CPython, it actually returns unused RAM to the system.

Signed-off-by: Vincent Breitmoser <look@my.amazin.horse>
2018-04-10 11:35:34 +02:00
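The guard amounts to a runtime check (a sketch; the threshold numbers are illustrative and the surrounding manual-GC machinery is elided):

```python
import gc
import platform

running_on_pypy = platform.python_implementation() == "PyPy"

# PyPy's incminimark GC can't be triggered manually, so leave it to run
# normally there; only tune the collector on CPython.
if not running_on_pypy:
    gc.set_threshold(700, 10, 10)
```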
Vincent Breitmoser
f4284d943a In DomainSpecificString, override __repr__ in addition to __str__
For some reason, string interpolation on a DomainSpecificString object
like "%r" % (domainSpecificStringObj) fails under PyPy, because the
default __repr__ implementation wants to iterate over the object. I'm
not sure why that happens, but overriding __repr__ instead of __str__
fixes this problem, and is arguably the more appropriate thing to do
anyways.
2018-04-10 11:35:29 +02:00
Richard van der Hoff
d1e56cfcd1 Fix pep8 error on psycopg2cffi hack 2018-04-10 11:35:29 +02:00
Vincent Breitmoser
89de934981 Use psycopg2cffi module instead of psycopg2 if running on pypy
The psycopg2 package isn't available for PyPy. This commit adds a check
for whether the runtime is PyPy and, if it is, uses the psycopg2cffi module
in favor of psycopg2. This is almost a drop-in replacement, except for one
place where an additional cast to string is required.
2018-04-10 11:29:52 +02:00
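The check can be done once at startup; `compat.register()` is psycopg2cffi's shim that installs it under the `psycopg2` name (sketch):

```python
import platform

if platform.python_implementation() == "PyPy":
    from psycopg2cffi import compat
    compat.register()   # makes "import psycopg2" resolve to psycopg2cffi

import psycopg2  # the real module on CPython, psycopg2cffi on PyPy
```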
Vincent Breitmoser
9fbe70a7dc Use sortedcontainers instead of blist
This commit drop-in replaces blist with sortedcontainers. They are
written in pure Python, so they work with PyPy, but perform as well as
native implementations, at least in a couple of benchmarks:

http://www.grantjenks.com/docs/sortedcontainers/performance.html
2018-04-10 11:29:51 +02:00
Richard van der Hoff
a3599dda97 Merge pull request #2996 from krombel/allow_auto_join_rooms
move handling of auto_join_rooms to RegisterHandler
2018-04-10 01:11:00 +01:00
Richard van der Hoff
87478c5a60 Merge pull request #3061 from NotAFile/add-some-byte-strings
Add b prefixes to some strings that are bytes in py3
2018-04-09 23:54:05 +01:00
Richard van der Hoff
c508b2f2f0 Merge pull request #3073 from NotAFile/use-six-reraise
Replace old-style raise with six.reraise
2018-04-09 23:53:40 +01:00
Richard van der Hoff
37354b55c9 Merge pull request #2938 from dklug/develop
Return 401 for invalid access_token on logout
2018-04-09 23:52:56 +01:00
Richard van der Hoff
0e9aa1d091 Merge pull request #3074 from NotAFile/fix-py3-prints
use python3-compatible prints
2018-04-09 23:44:41 +01:00
Richard van der Hoff
8eaa141d8f Merge pull request #3075 from NotAFile/six-type-checks
Replace some type checks with six type checks
2018-04-09 23:40:44 +01:00
Richard van der Hoff
664adb4236 Merge pull request #3016 from silkeh/improve-service-lookups
Improve handling of SRV records for federation connections
2018-04-09 23:40:06 +01:00
Richard van der Hoff
aea3a93611 Merge pull request #3069 from krombel/update_prometheus_config
update prometheus dashboard to use new metric names
2018-04-09 23:37:18 +01:00
Neil Johnson
41e0611895 remove errant print 2018-04-09 18:44:20 +01:00
Neil Johnson
61b439c904 Fix msec to sec, again 2018-04-09 18:43:48 +01:00
Neil Johnson
87770300d5 Fix msec to sec 2018-04-09 18:38:59 +01:00
Neil Johnson
9a311adfea v0.27.3-rc2 2018-04-09 17:52:08 +01:00
Neil Johnson
64bc2162ef Fix psycopg2 interpolation 2018-04-09 17:50:36 +01:00
Richard van der Hoff
d2c6f4d626 Merge pull request #3080 from matrix-org/rav/fix_500_on_rejoin
Return a 404 rather than a 500 on rejoining empty rooms
2018-04-09 17:32:36 +01:00
Neil Johnson
5232d3bfb1 version bump v0.27.3-rc2 2018-04-09 17:25:57 +01:00
Neil Johnson
5e785d4d5b Merge branch 'develop' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 17:21:34 +01:00
Erik Johnston
6e025a97b4 Handle all events in a room correctly 2018-04-09 16:02:48 +01:00
Neil Johnson
414b2b3bd1 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 16:02:08 +01:00
Neil Johnson
b151eb14a2 Update CHANGES.rst 2018-04-09 16:01:59 +01:00
Neil Johnson
64cebbc730 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 16:00:51 +01:00
Neil Johnson
d9ae2bc826 bump version to release candidate 2018-04-09 16:00:31 +01:00
Neil Johnson
21d5a2a08e Update CHANGES.rst 2018-04-09 15:55:41 +01:00
Neil Johnson
c115deed12 Update CHANGES.rst 2018-04-09 15:54:32 +01:00
Neil Johnson
072fb59446 bump version 2018-04-09 13:49:25 +01:00
Neil Johnson
89dda61315 Merge branch 'develop' into release-v0.27.0 2018-04-09 13:48:15 +01:00
Neil Johnson
687f3451bd 0.27.3 2018-04-09 13:44:05 +01:00
Richard van der Hoff
13decdbf96 Revert "Merge pull request #3066 from matrix-org/rav/remove_redundant_metrics"
We aren't ready to release this yet, so I'm reverting it for now.

This reverts commit d1679a4ed7, reversing
changes made to e089100c62.
2018-04-09 12:59:12 +01:00
Richard van der Hoff
f3ef60662f Return a 404 rather than a 500 on rejoining empty rooms
Filter ourselves out of the server list before checking for an empty remote
host list, to fix 500 error

Fixes #2141
2018-04-09 12:56:22 +01:00
Erik Johnston
e5082494eb Limit concurrent event sends for a room 2018-04-09 12:07:39 +01:00
Erik Johnston
56b0589865 Use create_and_send_nonmember_event everywhere 2018-04-09 12:04:18 +01:00
Erik Johnston
11974f3787 Send federation events concurrently 2018-04-09 11:47:10 +01:00
Erik Johnston
145d14656b Handle exceptions in get_hosts_for_room when sending events over federation 2018-04-09 11:47:01 +01:00
Adrian Tschira
e54c202b81 Replace some type checks with six type checks
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-07 01:02:32 +02:00
Adrian Tschira
b0500d3774 use python3-compatible prints 2018-04-06 23:35:27 +02:00
Adrian Tschira
4f40d058cc Replace old-style raise with six.reraise
The old style raise is invalid syntax in python3. As noted in the docs,
this adds one more frame in the traceback, but I think this is
acceptable:

    <ipython-input-7-bcc5cba3de3f> in <module>()
         16     except:
         17         pass
    ---> 18     six.reraise(*x)

    /usr/lib/python3.6/site-packages/six.py in reraise(tp, value, tb)
        691             if value.__traceback__ is not tb:
        692                 raise value.with_traceback(tb)
    --> 693             raise value
        694         finally:
        695             value = None

    <ipython-input-7-bcc5cba3de3f> in <module>()
          9
         10 try:
    ---> 11     x()
         12 except:
         13     x = sys.exc_info()

Also note that this uses six, which is not formally a dependency yet,
but is included indirectly since most packages depend on it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-06 23:06:24 +02:00
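Condensed, the substitution looks like this sketch:

```python
import sys
import six

def stash_exc_info():
    try:
        raise RuntimeError("boom")
    except Exception:
        return sys.exc_info()

exc_info = stash_exc_info()
# py2-only spelling (a syntax error on py3):
#     raise exc_info[0], exc_info[1], exc_info[2]
six.reraise(*exc_info)
```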
Luke Barnard
135fc5b9cd Merge pull request #3046 from matrix-org/dbkr/join_group
Implement group join API
2018-04-06 16:24:32 +01:00
Luke Barnard
020a501354 de-lint, quote consistency 2018-04-06 16:02:06 +01:00
Luke Barnard
db2fd801f7 Explicitly grab individual columns from group object 2018-04-06 15:57:25 +01:00
Erik Johnston
e8b03cab1b Merge pull request #3071 from matrix-org/erikj/resp_size_metrics
Add response size metrics
2018-04-06 15:49:12 +01:00
Richard van der Hoff
8844f95c32 Merge pull request #3072 from matrix-org/rav/fix_port_script
postgres port script: fix state_groups_pkey error
2018-04-06 15:47:40 +01:00
Luke Barnard
7945435587 When exposing group state, return is_openly_joinable
as opposed to join_policy, which is really only pertinent to the
synapse implementation of the group server.

By doing this we keep the group server concept extensible by
allowing arbitrarily complex rules for deciding whether a group
is openly joinable.
2018-04-06 15:43:27 +01:00
Luke Barnard
6bd1b7053e By default, join policy is "invite" 2018-04-06 15:43:27 +01:00
Luke Barnard
b4478e586f add_user -> _add_user 2018-04-06 15:43:27 +01:00
Luke Barnard
112c2253e2 pep8 2018-04-06 15:43:27 +01:00
Luke Barnard
6850f8aea3 Get group_info from existing call to check_group_is_ours 2018-04-06 15:43:27 +01:00
Luke Barnard
cd087a265d Don't use redundant inlineCallbacks 2018-04-06 15:43:27 +01:00
Luke Barnard
87c864b698 join_rule -> join_policy 2018-04-06 15:43:27 +01:00
Luke Barnard
ae85c7804e is_joinable -> join_rule 2018-04-06 15:43:27 +01:00
Luke Barnard
f8d1917fce Fix federation client set_group_joinable typo 2018-04-06 15:43:27 +01:00
Luke Barnard
6eb3aa94b6 Factor out add_user from accept_invite and join_group 2018-04-06 15:43:27 +01:00
David Baker
edb45aae38 pep8 2018-04-06 15:43:27 +01:00
David Baker
b370fe61c0 Implement group join API 2018-04-06 15:43:27 +01:00
Richard van der Hoff
6a9777ba02 Port script: Set up state_group_id_seq
Fixes https://github.com/matrix-org/synapse/issues/3050.
2018-04-06 15:33:30 +01:00
Richard van der Hoff
01579384cc Port script: clean up a bit
Improve logging and comments. Group all the stuff to do with inspecting tables
together rather than creating the port tables in the middle.
2018-04-06 15:33:30 +01:00
Richard van der Hoff
e01ba5bda3 Port script: avoid nasty errors when setting up
We really shouldn't spit out "Failed to create port table", it looks scary.
2018-04-06 15:33:30 +01:00
Erik Johnston
7b824f1475 Add response size metrics 2018-04-06 13:20:11 +01:00
Erik Johnston
35ff941172 Merge pull request #3070 from krombel/group_join_put_instead_post
use PUT instead of POST for federating groups/m.join_policy
2018-04-06 12:11:16 +01:00
Krombel
1d71f484d4 use PUT instead of POST for federating groups/m.join_policy 2018-04-06 12:54:09 +02:00
Richard van der Hoff
15e8ed874f more verbosity in synctl 2018-04-06 09:28:36 +01:00
Krombel
c7ede92d0b make prometheus config compliant to v0.28 2018-04-05 23:34:01 +02:00
Richard van der Hoff
551422051b Merge pull request #2886 from turt2live/travis/new-worker-docs
Add a blurb explaining the main synapse worker
2018-04-05 17:33:09 +01:00
Richard van der Hoff
c7f0969731 Merge pull request #2986 from jplatte/join_reponse_room_id
Add room_id to the response of `rooms/{roomId}/join`
2018-04-05 17:29:06 +01:00
Richard van der Hoff
3449da3bc7 Merge pull request #3068 from matrix-org/rav/fix_cache_invalidation
Improve database cache performance
2018-04-05 17:21:44 +01:00
Richard van der Hoff
d1679a4ed7 Merge pull request #3066 from matrix-org/rav/remove_redundant_metrics
Remove redundant metrics which were deprecated in 0.27.0.
2018-04-05 17:21:18 +01:00
Richard van der Hoff
01afc563c3 Fix overzealous cache invalidation
Fixes an issue where a cache invalidation would invalidate *all* pending
entries, rather than just the entry that we intended to invalidate.
2018-04-05 16:24:04 +01:00
Luke Barnard
e089100c62 Merge pull request #3045 from matrix-org/dbkr/group_joinable
Add joinability for groups
2018-04-05 15:57:49 +01:00
Neil Johnson
68b0ee4e8d Merge pull request #3041 from matrix-org/r30_stats
R30 stats
2018-04-05 15:37:37 +01:00
Richard van der Hoff
22284a6f65 Merge pull request #3060 from matrix-org/rav/kill_event_content
Remove uses of events.content
2018-04-05 15:02:17 +01:00
Luke Barnard
917380e89d NON NULL -> NOT NULL 2018-04-05 14:32:12 +01:00
Luke Barnard
104c0bc1d5 Use "/settings/" (plural) 2018-04-05 14:07:16 +01:00
Luke Barnard
700e5e7198 Use DEFAULT join_policy of "invite" in db 2018-04-05 14:01:17 +01:00
Luke Barnard
b214a04ffc Document set_group_join_policy 2018-04-05 13:29:16 +01:00
Neil Johnson
0e5f479fc0 Review comments
Use iteritems over items to loop over dict
formatting
2018-04-05 12:16:46 +01:00
Richard van der Hoff
518f6de088 Remove redundant metrics which were deprecated in 0.27.0. 2018-04-04 19:46:28 +01:00
Jan Christian Grünhage
7d0f712348 Merge pull request #3063 from matrix-org/jcgruenhage/cache_settings_stats
phone home cache size configurations
2018-04-04 17:24:40 +01:00
Jan Christian Grünhage
e4570c53dd phone home cache size configurations 2018-04-04 16:46:58 +01:00
Travis Ralston
88964b987e Merge remote-tracking branch 'matrix-org/develop' into travis/new-worker-docs 2018-04-04 08:46:56 -06:00
Travis Ralston
204fc98520 Document the additional routes for the event_creator worker
Fixes https://github.com/matrix-org/synapse/issues/3018

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-04-04 08:46:17 -06:00
Travis Ralston
301b339494 Move the mention of the main synapse worker higher up
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-04-04 08:45:51 -06:00
Adrian Tschira
6168351877 Add b prefixes to some strings that are bytes in py3
This has no effect on python2

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-04 13:48:51 +02:00
Richard van der Hoff
9cd3f06ab7 Merge pull request #3062 from matrix-org/revert-3053-speedup-mxid-check
Revert "improve mxid check performance"
2018-04-04 12:09:17 +01:00
Richard van der Hoff
f92963f5db Revert "improve mxid check performance" 2018-04-04 12:08:29 +01:00
Silke
72251d1b97 Remove address resolution of hosts in SRV records
Signed-off-by: Silke Hofstra <silke@slxh.eu>
2018-04-04 12:26:50 +02:00
Richard van der Hoff
725a72ec5a Merge pull request #3000 from NotAFile/change-except-style
Replace old style error catching with 'as' keyword
2018-04-04 10:45:22 +01:00
Richard van der Hoff
a89f9f830c Merge pull request #3044 from matrix-org/michaelk/performance_stats
Add basic performance statistics to phone home
2018-04-04 10:37:25 +01:00
Richard van der Hoff
39ce38b024 Merge pull request #3053 from NotAFile/speedup-mxid-check
improve mxid check performance
2018-04-04 10:19:52 +01:00
Richard van der Hoff
a9a74101a4 Document the behaviour of ResponseCache
it looks like everything that uses ResponseCache expects to have to
`make_deferred_yieldable` its results. It's debatable whether that is the best
approach, but let's document it for now to avoid further confusion.
2018-04-04 09:06:22 +01:00
Luke Barnard
eb8d8d6f57 Use join_policy API instead of joinable
The API is now under
 /groups/$group_id/setting/m.join_policy

and expects a JSON blob of the shape

```json
{
  "m.join_policy": {
    "type": "invite"
  }
}
```

where "invite" could alternatively be "open".
2018-04-03 16:16:40 +01:00
Richard van der Hoff
8da39ad98f Merge pull request #3049 from matrix-org/rav/use_staticjson
Use static JSONEncoders
2018-04-03 15:18:32 +01:00
Richard van der Hoff
3ee4ad09eb Fix json encoding bug in replication
json encoders have an encode method, not a dumps method.
2018-04-03 15:09:48 +01:00
Richard van der Hoff
0ca5c4d2af Merge pull request #3048 from matrix-org/rav/use_simplejson
Use simplejson throughout
2018-04-03 14:39:57 +01:00
Adrian Tschira
11597ddea5 improve mxid check performance ~4x
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-03-31 02:00:22 +02:00
Richard van der Hoff
2fe3f848b9 Remove uses of events.content 2018-03-29 23:17:12 +01:00
Richard van der Hoff
05630758f2 Use static JSONEncoders
using json.dumps with custom options requires us to create a new JSONEncoder on
each call. It's more efficient to create one upfront and reuse it.
2018-03-29 23:13:33 +01:00
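A minimal sketch of the pattern (encoder options are illustrative); note that encoders expose `.encode`, not `.dumps`, which is the bug fixed in the replication change above:

```python
import simplejson as json

# Built once at import time instead of on every call to json.dumps
# with custom options.
_PRETTY_ENCODER = json.JSONEncoder(indent=4, sort_keys=True)

def encode_pretty(obj):
    return _PRETTY_ENCODER.encode(obj)
```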
Richard van der Hoff
fcfe7f6ad3 Use simplejson throughout
Let's use simplejson rather than json, for consistency.
2018-03-29 22:45:52 +01:00
Neil Johnson
b4e37c6f50 pep8 2018-03-29 17:27:39 +01:00
Neil Johnson
9ee44a372d Remove need for sqlite specific query 2018-03-29 16:45:34 +01:00
Erik Johnston
88cc9cc69e Merge pull request #3043 from matrix-org/erikj/measure_state_group_creation
Measure time it takes to calculate state group ID
2018-03-28 17:40:14 +01:00
Neil Johnson
dc7c020b33 fix pep8 errors 2018-03-28 17:25:15 +01:00
Neil Johnson
16aeb41547 Update README.rst
update docker hub url
2018-03-28 16:47:56 +01:00
David Baker
c5de6987c2 This should probably be a PUT 2018-03-28 16:44:11 +01:00
Neil Johnson
241e4e8687 remove twisted deferral cruft 2018-03-28 16:25:53 +01:00
David Baker
929b34963d OK, smallint it is then 2018-03-28 14:53:55 +01:00
Richard van der Hoff
9a0db062af Merge pull request #3034 from matrix-org/rav/fix_key_claim_errors
Fix error when claiming e2e keys from offline servers
2018-03-28 14:50:38 +01:00
David Baker
a838444a70 Grr. Copy the definition from is_admin 2018-03-28 14:50:30 +01:00
Neil Johnson
4262aba17b bump schema version 2018-03-28 14:40:03 +01:00
Neil Johnson
86932be2cb Support multi client R30 for psql 2018-03-28 14:36:53 +01:00
David Baker
32260baa41 pep8 2018-03-28 14:29:42 +01:00
Michael Kaye
33f6195d9a Handle review comments 2018-03-28 14:25:25 +01:00
David Baker
a164270833 Make column definition that works on both dbs 2018-03-28 14:23:00 +01:00
David Baker
352e1ff9ed Add schema delta file 2018-03-28 14:07:57 +01:00
David Baker
79452edeee Add joinability for groups
Adds API to set the 'joinable' flag, and corresponding flag in the
table.
2018-03-28 14:03:37 +01:00
Krombel
6152e253d8 Merge branch 'develop' of into allow_auto_join_rooms 2018-03-28 14:45:28 +02:00
Erik Johnston
e9e4cb25fc Merge pull request #3042 from matrix-org/fix_locally_failing_tests
fix tests/storage/test_user_directory.py
2018-03-28 13:10:37 +01:00
Neil Johnson
792d340572 rename stat to future proof 2018-03-28 12:25:02 +01:00
Michael Kaye
4ceaa7433a As daemonizing will make a new process, defer call to init. 2018-03-28 12:19:01 +01:00
Neil Johnson
788e69098c Add user_ips last seen index 2018-03-28 12:03:13 +01:00
Neil Johnson
0f890f477e No need to cast in count_daily_users 2018-03-28 11:49:57 +01:00
Neil Johnson
545001b9e4 Fix search_user_dir multiple sqlite versions do different things 2018-03-28 11:19:45 +01:00
Erik Johnston
01ccc9e6f2 Measure time it takes to calculate state group ID 2018-03-28 11:03:52 +01:00
Neil Johnson
a9cb1a35c8 fix tests/storage/test_user_directory.py 2018-03-28 10:57:27 +01:00
Neil Johnson
a32d2548d9 query and call for r30 stats 2018-03-28 10:39:13 +01:00
Neil Johnson
9187e0762f count_daily_users failed if db was sqlite due to type failure - presumably this prevented all sqlite homeservers reporting home 2018-03-28 10:02:32 +01:00
Erik Johnston
f879127aaa Merge pull request #3029 from matrix-org/erikj/linearize_generate_user_id
Linearize calls to _generate_user_id
2018-03-28 10:00:31 +01:00
Erik Johnston
e6d87c93f3 Merge pull request #3030 from matrix-org/erikj/no_ujson
Remove last usage of ujson
2018-03-28 10:00:06 +01:00
Erik Johnston
004cc8a328 Merge pull request #3033 from matrix-org/erikj/calculate_state_metrics
Add counter metrics for calculating state delta
2018-03-28 09:59:42 +01:00
Michael Kaye
ef520d8d0e Include coarse CPU and Memory use in stats callbacks.
This requires the psutil module, and is still opt-in based on the report_stats
config option.
2018-03-27 17:56:03 +01:00
Richard van der Hoff
a134c572a6 Stringify exceptions for keys/{query,claim}
Make sure we stringify any exceptions we return from keys/query and keys/claim,
to avoid a 'not JSON serializable' error later

Fixes #3010
2018-03-27 17:15:06 +01:00
Richard van der Hoff
c2a5cf2fe3 factor out exception handling for keys/claim and keys/query
this stuff is badly c&p'ed
2018-03-27 17:11:23 +01:00
Erik Johnston
800cfd5774 Comment 2018-03-27 13:30:39 +01:00
Erik Johnston
152c2ac19e Fix indent 2018-03-27 13:13:46 +01:00
Erik Johnston
e70287cff3 Comment 2018-03-27 13:13:38 +01:00
Erik Johnston
03a26e28d9 Merge pull request #3017 from matrix-org/erikj/add_cache_control_headers
Add Cache-Control headers to all JSON APIs
2018-03-27 13:10:38 +01:00
Erik Johnston
3e0c0660b3 Also do check inside linearizer 2018-03-27 13:01:34 +01:00
Erik Johnston
3f49e131d9 Add counter metrics for calculating state delta
This will allow us to measure how often we calculate state deltas in
event persistence that we would have been able to calculate at the same
time we calculated the state for the event.
2018-03-27 10:57:35 +01:00
Erik Johnston
9b8c0fb162 Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-03-26 21:41:55 +01:00
Erik Johnston
691f8492fb Merge branch 'release-v0.27.0' of github.com:matrix-org/synapse 2018-03-26 16:38:23 +01:00
Erik Johnston
a9d7d98d3f Bump version and changelog 2018-03-26 16:36:53 +01:00
Erik Johnston
bdbb1eec65 Merge branch 'erikj/simplejson_replication' of github.com:matrix-org/synapse into release-v0.27.0 2018-03-26 16:35:55 +01:00
Erik Johnston
01f72e2fc7 Fix date 2018-03-26 16:21:26 +01:00
Erik Johnston
9187862002 Bump version and changelog 2018-03-26 16:20:24 +01:00
Neil Johnson
aa3587fdd1 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-03-26 14:51:11 +01:00
Neil Johnson
51406dab96 version bump 2018-03-26 14:48:19 +01:00
Erik Johnston
fecb45e0c3 Remove last usage of ujson 2018-03-26 13:32:29 +01:00
Erik Johnston
44cd6e1358 PEP8 2018-03-26 12:06:48 +01:00
Erik Johnston
8d6dc106d1 Don't use _cursor_to_dict in find_next_generated_user_id_localpart 2018-03-26 12:02:44 +01:00
Erik Johnston
a052aa42e7 Linearize calls to _generate_user_id 2018-03-26 12:02:20 +01:00
Matthew Hodgson
8efe773ef1 fix typo 2018-03-23 21:38:42 +00:00
Matthew Hodgson
b7e7b52452 Merge pull request #3022 from matrix-org/matthew/noresource
404 correctly on missing paths via NoResource
2018-03-23 11:10:32 +00:00
Matthew Hodgson
8cbbfaefc1 404 correctly on missing paths via NoResource
fixes https://github.com/matrix-org/synapse/issues/2043 and https://github.com/matrix-org/synapse/issues/2029
2018-03-23 10:32:50 +00:00
Erik Johnston
84b5cc69f5 Merge pull request #3006 from matrix-org/erikj/state_iter
Use .iter* to avoid copies in StateHandler
2018-03-22 11:51:13 +00:00
Erik Johnston
fde8e8f09f Fix s/iteriterms/itervalues 2018-03-22 11:42:16 +00:00
Erik Johnston
eb9fc021e3 Merge branch 'release-v0.27.0' of github.com:matrix-org/synapse into develop 2018-03-22 10:19:53 +00:00
Erik Johnston
1c41b05c8c Add Cache-Control headers to all JSON APIs
It is especially important that sync requests don't get cached: if a
sync returns the same token it was given, the client will call sync with
the same parameters again. If the previous response was cached it will
get reused, leaving the client tight-looping, making the same request
and never making any progress.

In general, clients expect to get up-to-date data when requesting APIs,
so it's safer to apply a blanket no-cache policy than to only whitelist
the APIs that we know will break things if they get cached.
2018-03-21 17:46:26 +00:00
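On a twisted.web request the blanket policy is a single header (a sketch; the exact header value is an assumed typical choice, not necessarily Synapse's verbatim one):

```python
def set_no_cache_headers(request):
    # Applied to every JSON response so that sync et al. are never
    # reused from an intermediary or browser cache.
    request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")
```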
Erik Johnston
5bdb57cb66 Merge pull request #3015 from matrix-org/erikj/simplejson_replication
Fix replication after switch to simplejson
2018-03-20 15:06:33 +00:00
Neil Johnson
f5aa027c2f Update CHANGES.rst
rearrange ordering of releases to match chronology
2018-03-20 15:06:22 +00:00
Neil Johnson
e66fbcbb02 fix merge conflicts 2018-03-20 14:25:31 +00:00
Erik Johnston
9aa5a0af51 Explicitly use simplejson 2018-03-20 09:58:13 +00:00
Erik Johnston
610accbb7f Fix replication after switch to simplejson
Turns out that simplejson serialises namedtuples as dictionaries rather
than tuples by default.
2018-03-19 16:12:48 +00:00
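The behaviour is easy to demonstrate (sketch; the `Edu` namedtuple is made up for illustration):

```python
import simplejson
from collections import namedtuple

Edu = namedtuple("Edu", ["edu_type", "content"])

simplejson.dumps(Edu("m.typing", {}))
# '{"edu_type": "m.typing", "content": {}}'  -- a dict, not a tuple

simplejson.dumps(Edu("m.typing", {}), namedtuple_as_object=False)
# '["m.typing", {}]'  -- the tuple shape the replication stream expected
```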
Neil Johnson
c384705ee8 Update __init__.py
bump version
2018-03-19 15:11:58 +00:00
Neil Johnson
1a3aa957ca Update CHANGES.rst 2018-03-19 15:11:00 +00:00
Erik Johnston
3f961e638a Merge pull request #3005 from matrix-org/erikj/fix_cache_size
Fix bug where state cache used lots of memory
2018-03-19 11:44:26 +00:00
Erik Johnston
fa72803490 Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-03-19 11:41:01 +00:00
Erik Johnston
9a0d783c11 Add comments 2018-03-19 11:35:53 +00:00
Matthew Hodgson
38f952b9bc spell out not to massively increase bcrypt rounds 2018-03-19 09:27:36 +00:00
Erik Johnston
a8ce159be4 Replace some ujson with simplejson to make it work 2018-03-16 00:27:09 +00:00
Erik Johnston
f609acc109 Merge branch 'hotfixes-v0.26.1' of github.com:matrix-org/synapse 2018-03-16 00:13:51 +00:00
Erik Johnston
0092cf38ae Newline 2018-03-16 00:11:58 +00:00
Erik Johnston
5b631ff41a Remove wrong comment 2018-03-16 00:07:08 +00:00
Erik Johnston
ba48755d56 Bump version and changelog 2018-03-15 23:57:26 +00:00
Erik Johnston
926ba76e23 Replace ujson with simplejson 2018-03-15 23:43:31 +00:00
Erik Johnston
9cf519769b Use .iter* to avoid copies in StateHandler 2018-03-15 17:50:26 +00:00
Erik Johnston
7c7706f42b Fix bug where state cache used lots of memory
The state cache bases its size on the sum of the size of entries. The
size of the entry is calculated once on insertion, so it is important
that the size of entries does not change.

The DictionaryCache modified the entries' size, which caused the state
cache to incorrectly think it was smaller than it actually was.
2018-03-15 15:46:54 +00:00
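A toy illustration of the invariant the fix restores: entry sizes are measured once, at insert time, so mutating an entry afterwards silently corrupts the running total (sketch, not Synapse's cache):

```python
class SizedCache(object):
    def __init__(self, max_size):
        self.max_size = max_size
        self.total = 0
        self.entries = {}   # key -> (value, size_at_insert)

    def insert(self, key, value):
        size = len(value)
        self.entries[key] = (value, size)
        self.total += size  # if `value` is mutated later, total drifts

    def evict(self, key):
        value, size = self.entries.pop(key)
        self.total -= size  # must match what was added at insert time
```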
NotAFile
2cc9f76bc3 replace old style error catching with 'as' keyword
This is both easier to read and compatible with python3 (not that that
matters)

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-03-15 16:11:17 +01:00
Erik Johnston
ddb00efc1d Bump version number 2018-03-15 14:41:30 +00:00
Erik Johnston
2a376579f3 Merge pull request #3003 from matrix-org/rav/fix_contributing
CONTRIBUTING.rst: fix CI info
2018-03-15 13:31:59 +00:00
Erik Johnston
873aea7168 Merge pull request #3002 from matrix-org/rav/purge_doc
Update purge_history_api.rst
2018-03-15 13:31:54 +00:00
Erik Johnston
bf7ee93cb6 Merge pull request #2997 from turt2live/patch-2
Doc: Make the event_creator routes regex a code block
2018-03-15 13:12:44 +00:00
Richard van der Hoff
5ea624b0f5 CONTRIBUTING.rst: fix CI info 2018-03-15 11:48:35 +00:00
Richard van der Hoff
0ad5125814 Update purge_history_api.rst
clarify that `purge_history` will not purge state
2018-03-15 11:05:42 +00:00
Richard van der Hoff
068c21ab10 CHANGES.rst: reword metric deprecation 2018-03-15 10:36:31 +00:00
Neil Johnson
b29d1abab6 Update CHANGES.rst 2018-03-15 10:34:15 +00:00
Neil Johnson
7367a4a823 Update CHANGES.rst 2018-03-15 10:33:52 +00:00
Neil Johnson
7d26591048 Update CHANGES.rst 2018-03-15 10:33:24 +00:00
Erik Johnston
2059b8573f Update CHANGES.rst 2018-03-14 18:11:21 +00:00
Erik Johnston
10fdcf561d Fix up rst formatting 2018-03-14 17:30:17 +00:00
Neil Johnson
5ccb57d3ff Update CHANGES.rst 2018-03-14 17:12:58 +00:00
Travis Ralston
c33c1ceddd OCD: Make the event_creator routes regex a code block
All the others are code blocks, so this one should be too (currently it is a blockquote).

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-03-14 11:09:08 -06:00
Neil Johnson
fb647164f2 Update CHANGES.rst 2018-03-14 16:20:36 +00:00
Neil Johnson
a492b17fe2 Update CHANGES.rst
clean formatting
2018-03-14 16:18:09 +00:00
Neil Johnson
cb2c7c0669 Update CHANGES.rst
WIP, need to add most recent PRs
2018-03-14 16:09:20 +00:00
Krombel
91ea0202e6 move handling of auto_join_rooms to RegisterHandler
Currently the handling of auto_join_rooms only works when a user
registers themselves via the public register API. Registrations via
registration_shared_secret and ModuleApi do not work.

This auto-joins the users in the registration handler, which enables
the auto-join feature for all 3 registration paths.

This is related to issue #2725

Signed-Off-by: Matthias Kesler <krombel@krombel.de>
2018-03-14 16:45:37 +01:00
Erik Johnston
3959754de3 Merge pull request #2995 from matrix-org/erikj/enable_membership_worker
Register membership/state servlets in event_creator
2018-03-14 14:37:33 +00:00
Erik Johnston
4f28018c83 Register membership/state servlets in event_creator 2018-03-14 14:30:06 +00:00
Erik Johnston
57db62e554 Merge pull request #2992 from matrix-org/erikj/implement_member_workre
Implement RoomMemberWorkerHandler
2018-03-14 14:29:33 +00:00
Erik Johnston
0011ede3b0 Fix imports 2018-03-14 14:19:23 +00:00
Erik Johnston
62ad701326 s/join/joined/ in notify_user_membership_change 2018-03-14 14:17:43 +00:00
Erik Johnston
3f0f06cb31 Split RoomMemberWorkerHandler to separate file 2018-03-14 11:41:45 +00:00
Erik Johnston
3e839e0548 Merge pull request #2989 from matrix-org/erikj/profile_cache_master
Only update remote profile cache on master
2018-03-14 09:42:27 +00:00
Erik Johnston
ebd0127999 Merge pull request #2988 from matrix-org/erikj/split_profile_store
Split up ProfileStore
2018-03-14 09:41:06 +00:00
Erik Johnston
cfe75a9fb6 Merge pull request #2991 from matrix-org/erikj/fixup_rm
_remote_join and co take a requester
2018-03-14 09:40:57 +00:00
Erik Johnston
f51565e023 Merge pull request #2993 from matrix-org/erikj/is_blocked
Add is_blocked to worker store
2018-03-14 09:39:18 +00:00
Matthew Hodgson
d144ed6ffb fix bug #2926 (loading all state for a given type from the DB if the state_key is None) (#2990)
Fixes a regression that had crept in where the caching layer upheld requests for loading state which is filtered by type (but not by state_key), but the DB layer itself would interpret a missing state_key as a request to filter by null state_key rather than returning all state_keys.
2018-03-13 22:36:04 +00:00
Erik Johnston
a08726fc42 Add is_blocked to worker store 2018-03-13 18:28:44 +00:00
Erik Johnston
b27320b550 Implement RoomMemberWorkerHandler 2018-03-13 18:26:00 +00:00
Erik Johnston
350331d466 _remote_join and co take a requester 2018-03-13 17:50:39 +00:00
Erik Johnston
1a69c6d590 Merge pull request #2987 from matrix-org/erikj/split_room_member_handler
Split RoomMemberHandler into base and master class
2018-03-13 17:40:00 +00:00
Erik Johnston
df8ff682a7 Only update remote profile cache on master 2018-03-13 17:38:21 +00:00
Erik Johnston
3518d0ea8f Split up ProfileStore 2018-03-13 17:36:50 +00:00
Erik Johnston
d45a114824 Raise, don't return, exception 2018-03-13 17:24:34 +00:00
Erik Johnston
6dbebef141 Add missing param to docstrings 2018-03-13 17:15:32 +00:00
Erik Johnston
16adb11cc0 Correct import order 2018-03-13 16:57:07 +00:00
Erik Johnston
82f16faa78 Move user_*_room distributor stuff to master class
I added yields when calling user_left_room, but they shouldn't matter on
the master process as they always return None anyway.
2018-03-13 16:38:15 +00:00
Erik Johnston
b78717b87b Split RoomMemberHandler into base and master class
The intention here is to split the class into the bits that can be done
on workers and the bits that have to be done on the master.

In future there will also be a class that can be run on the worker,
which will delegate work to the master when necessary.
2018-03-13 16:37:41 +00:00
Erik Johnston
95cb401ae0 Merge pull request #2978 from matrix-org/erikj/refactor_replication_layer
Remove ReplicationLayer and user Client/Server directly
2018-03-13 15:45:08 +00:00
Erik Johnston
5d8476d8ff Merge pull request #2981 from matrix-org/erikj/factor_remote_leave
Factor out _remote_reject_invite in RoomMember
2018-03-13 15:44:56 +00:00
Jonas Platte
47ce527f45 Add room_id to the response of rooms/{roomId}/join
Fixes #2349
2018-03-13 14:48:12 +01:00
Erik Johnston
56e709857c Merge pull request #2979 from matrix-org/erikj/no_handlers
Don't build handlers on workers unnecessarily
2018-03-13 13:46:38 +00:00
Erik Johnston
cb9f8e527c s/replication_client/federation_client/ 2018-03-13 13:26:52 +00:00
Erik Johnston
cea462e285 s/replication_server/federation_server 2018-03-13 13:22:21 +00:00
Erik Johnston
bf8e97bd3c Merge branch 'develop' of github.com:matrix-org/synapse into erikj/factor_remote_leave 2018-03-13 13:17:08 +00:00
Erik Johnston
ea3442c15c Add docstring 2018-03-13 13:16:21 +00:00
Erik Johnston
16469a4f15 Merge pull request #2980 from matrix-org/erikj/rm_priv
Make RoomMemberHandler functions private that can be
2018-03-13 13:11:11 +00:00
Erik Johnston
c82111a55f Merge pull request #2982 from matrix-org/erikj/fix_extra_users
extra_users is actually a list of UserIDs
2018-03-13 13:11:04 +00:00
Erik Johnston
da87791975 Merge pull request #2983 from matrix-org/erikj/rename_register_3pid
Refactor get_or_register_3pid_guest
2018-03-13 13:10:52 +00:00
Erik Johnston
99e9b4f26c Merge pull request #2984 from matrix-org/erikj/fix_rest_regeix
RoomMembershipRestServlet doesn't handle /forget
2018-03-13 13:10:44 +00:00
Erik Johnston
f5160d4a3e RoomMembershipRestServlet doesn't handle /forget
Due to the order we register the REST handlers, `/forget` was handled by
the correct handler.
2018-03-13 12:12:55 +00:00
Erik Johnston
8b3573a8b2 Refactor get_or_register_3pid_guest 2018-03-13 12:08:58 +00:00
Richard van der Hoff
299fd740c7 Merge pull request #2975 from matrix-org/rav/measure_persist_events
Add Measure block for persist_events
2018-03-13 12:06:25 +00:00
Erik Johnston
9a2d9b4789 Merge pull request #2977 from matrix-org/erikj/replication_move_props
Move property setting from ReplicationLayer to base classes
2018-03-13 11:45:25 +00:00
Erik Johnston
141c343e03 Merge pull request #2976 from matrix-org/erikj/replication_registry
Split out edu/query registration to a separate class
2018-03-13 11:45:08 +00:00
Erik Johnston
f43b6d6d9b Fix docstring types 2018-03-13 11:29:35 +00:00
Erik Johnston
0f942f68c1 Factor out _remote_reject_invite in RoomMember 2018-03-13 11:22:45 +00:00
Erik Johnston
d0fcc48f9d extra_users is actually a list of UserIDs 2018-03-13 11:20:06 +00:00
Erik Johnston
31becf4ac3 Make functions private that can be 2018-03-13 11:15:16 +00:00
Erik Johnston
d023ecb810 Don't build handlers on workers unnecessarily 2018-03-13 11:08:10 +00:00
Erik Johnston
ea7b3c4b1b Remove unused ReplicationLayer 2018-03-13 11:00:04 +00:00
Erik Johnston
6ea27fafad Fix tests 2018-03-13 10:55:47 +00:00
Erik Johnston
265b993b8a Split replication layer into two 2018-03-13 10:55:47 +00:00
Erik Johnston
e05bf34117 Move property setting from ReplicationLayer to FederationBase 2018-03-13 10:51:30 +00:00
Erik Johnston
631a73f7ef Fix tests 2018-03-13 10:39:19 +00:00
Erik Johnston
c3f79c9da5 Split out edu/query registration to a separate class 2018-03-13 10:24:27 +00:00
Richard van der Hoff
889a2a853a Add Measure block for persist_events
This seems like a useful thing to measure.
2018-03-13 10:01:42 +00:00
Richard van der Hoff
d65ceb4b48 Merge pull request #2962 from matrix-org/rav/purge_history_txns
Add transactional API to history purge
2018-03-12 16:32:18 +00:00
Richard van der Hoff
e48c7aac4d Add transactional API to history purge
Make the purge request return quickly, and allow scripts to poll for updates.
2018-03-12 16:22:55 +00:00
Richard van der Hoff
1708412f56 Return an error when doing two purges on a room
Queuing up purges doesn't sound like a good thing.
2018-03-12 16:22:54 +00:00
Richard van der Hoff
b984dd0b73 Merge pull request #2961 from matrix-org/rav/run_in_background
Factor run_in_background out from preserve_fn
2018-03-12 16:19:13 +00:00
Richard van der Hoff
ba1d08bc4b Merge pull request #2965 from matrix-org/rav/request_logging
Add a metric which increments when a request is received
2018-03-12 09:45:33 +00:00
Richard van der Hoff
58dd148c4f Add some docstrings to help figure this out 2018-03-09 18:05:41 +00:00
Richard van der Hoff
88541f9009 Add a metric which increments when a request is received
It's useful to know when there are peaks in incoming requests - which isn't
quite the same as there being peaks in outgoing responses, due to the time
taken to handle requests.
2018-03-09 16:30:26 +00:00
Richard van der Hoff
dbe80a286b refactor JsonResource
rephrase the OPTIONS and unrecognised request handling so that they look
similar to the common flow.
2018-03-09 16:22:16 +00:00
Richard van der Hoff
20f40348d4 Factor run_in_background out from preserve_fn
It annoys me that we create temporary function objects when there's really no
need for it. Let's factor the gubbins out of preserve_fn and start using it.
2018-03-08 11:50:11 +00:00
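The shape of the factoring, as a sketch (the logcontext bookkeeping that is the real point of these helpers is elided here):

```python
from twisted.internet import defer

def run_in_background(f, *args, **kwargs):
    # Call f immediately and return a Deferred; the logcontext
    # handling lives here, in one place.
    return defer.maybeDeferred(f, *args, **kwargs)

def preserve_fn(f):
    # Now a thin wrapper: the temporary function object is only built
    # when a callable is actually required.
    def g(*args, **kwargs):
        return run_in_background(f, *args, **kwargs)
    return g
```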
Erik Johnston
735fd8719a Merge pull request #2944 from matrix-org/erikj/fix_sync_race
Fix race in sync when joining room
2018-03-07 17:42:26 +00:00
Erik Johnston
a56d54dcb7 Fix up log message 2018-03-07 11:55:31 +00:00
Erik Johnston
02a1296ad6 Fix typo 2018-03-07 11:55:31 +00:00
Erik Johnston
8cb44da4aa Fix race in sync when joining room
The race happens when the user joins a room at the same time as doing a
sync. We fetch the current token and then get the rooms the user is in.
If the join happens after the current token, but before we get the rooms,
we end up sending down a partial room entry in the sync.

This is fixed by looking at the stream ordering of the membership
returned by get_rooms_for_user, and handling the case when that stream
ordering is after the current token.
2018-03-07 11:55:31 +00:00
Richard van der Hoff
8ffaacbee3 Merge pull request #2949 from krombel/use_bcrypt_checkpw
use bcrypt.checkpw
2018-03-06 11:56:06 +00:00
Richard van der Hoff
b2932107bb Merge pull request #2946 from matrix-org/rav/timestamp_to_purge
Implement purge_history by timestamp
2018-03-06 11:20:23 +00:00
Erik Johnston
7aed50a038 Merge pull request #2948 from matrix-org/erikj/kill_as_sync
Remove ability for AS users to call /events and /sync
2018-03-06 11:10:09 +00:00
Erik Johnston
b6c4b851f1 Merge pull request #2947 from matrix-org/erikj/split_directory_store
Split Directory store
2018-03-05 18:17:32 +00:00
Krombel
ed9b5eced4 use bcrypt.checkpw
bcrypt 3.1.0 introduced checkpw (already 2 years ago).
This makes use of it, along with any enhancements that may come
with it.

Signed-Off-by: Matthias Kesler <krombel@krombel.de>
2018-03-05 18:02:59 +01:00
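The substitution in miniature (both spellings shown):

```python
import bcrypt

stored = bcrypt.hashpw(b"s3cret", bcrypt.gensalt())

# pre-3.1.0 spelling: hash again and compare
ok_old = bcrypt.hashpw(b"s3cret", stored) == stored

# 3.1.0+: one call
ok_new = bcrypt.checkpw(b"s3cret", stored)
```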
Erik Johnston
d4ffe61d4f Remove ability for AS users to call /events and /sync
This functionality has been deprecated for a while, as well as being
broken. Instead of fixing it, let's just remove it entirely.

See: https://github.com/matrix-org/matrix-doc/issues/1144
2018-03-05 15:44:46 +00:00
Erik Johnston
69ce365b79 Fix cache invalidation on deletion 2018-03-05 15:29:03 +00:00
Erik Johnston
2e223163ff Split Directory store 2018-03-05 15:11:30 +00:00
Richard van der Hoff
f8bfcd7e0d Provide a means to pass a timestamp to purge_history 2018-03-05 14:37:23 +00:00
Richard van der Hoff
d032785aa7 Merge pull request #2943 from matrix-org/rav/fix_find_first_stream_ordering_after_ts
Test and fix find_first_stream_ordering_after_ts
2018-03-05 12:26:14 +00:00
Richard van der Hoff
2c911d75e8 Fix comment typo 2018-03-05 12:24:49 +00:00
Richard van der Hoff
c818fcab11 Test and fix find_first_stream_ordering_after_ts
It seemed to suffer from a bunch of off-by-one errors.
2018-03-05 12:04:02 +00:00
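The function amounts to a lower-bound binary search over stream orderings; this sketch (names and bounds are assumptions) shows where the off-by-one traps live:

```python
def find_first_stream_ordering_after_ts(get_received_ts, lo, hi, ts):
    # Smallest ordering in [lo, hi) whose timestamp is >= ts.  The
    # classic traps are the loop condition and the lo/hi updates.
    while lo < hi:
        mid = (lo + hi) // 2
        if get_received_ts(mid) < ts:
            lo = mid + 1
        else:
            hi = mid
    return lo
```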
Richard van der Hoff
06a14876e5 Add find_first_stream_ordering_after_ts
Expose this as a public function which can be called outside a txn
2018-03-05 11:53:39 +00:00
Erik Johnston
42174946f8 Merge pull request #2934 from matrix-org/erikj/cache_fix
Fix bug with delayed cache invalidation stream
2018-03-05 11:33:17 +00:00
Erik Johnston
f394f5574d Merge pull request #2929 from matrix-org/erikj/split_regististration_store
Split registration store
2018-03-05 11:33:07 +00:00
dklug
af7ed8e1ef Return 401 for invalid access_token on logout
Signed-off-by: Duncan Klug <dklug@ucmerced.edu>
2018-03-02 22:01:27 -08:00
Erik Johnston
efb79820b4 Fix bug with delayed cache invalidation stream
We poked the notifier before updating the current token for the cache
invalidation stream. This meant that sometimes the update wouldn't be
sent until the next time a cache was invalidated.
2018-03-02 14:45:15 +00:00
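The fix is purely an ordering one; in outline (the names here are hypothetical stand-ins, not Synapse's):

```python
def on_cache_invalidation(cache_stream, notifier, new_token):
    # Buggy order was: poke the notifier, then advance the token -- a
    # woken listener could read the stale token and conclude there was
    # nothing new to send.  Fixed order: advance first, then poke.
    cache_stream.advance_token(new_token)
    notifier.on_new_replication_data()
```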
Erik Johnston
fafa3e7114 Split registration store 2018-03-02 13:48:27 +00:00
Erik Johnston
6619f047ad Merge pull request #2933 from matrix-org/erikj/3pid_yield
Add missing yield during 3pid signature checks
2018-03-02 11:31:50 +00:00
Erik Johnston
d960d23830 Add missing yield during 3pid signature checks 2018-03-02 11:03:18 +00:00
Erik Johnston
1a6c7cdf54 Merge pull request #2928 from matrix-org/erikj/read_marker_caches
Fix typo in getting replication account data processing
2018-03-01 17:56:14 +00:00
Erik Johnston
89b7232ff8 Fix typo in getting replication account data processing 2018-03-01 17:50:30 +00:00
Erik Johnston
1773df0632 Merge pull request #2925 from matrix-org/erikj/split_sig_fed
Split out SignatureStore and EventFederationStore
2018-03-01 17:32:58 +00:00
Erik Johnston
65cf454fd1 Remove unused DataStore 2018-03-01 17:27:53 +00:00
Erik Johnston
9e08a93a7b Merge pull request #2927 from matrix-org/erikj/read_marker_caches
Improve caching for read_marker API
2018-03-01 17:12:34 +00:00
Erik Johnston
4b44f05f19 Fewer lies are better 2018-03-01 17:08:17 +00:00
Erik Johnston
a83c514d1f Improve caching for read_marker API
We add a new storage function to get a particular type of room account
data. This allows us to prefill the cache when updating that account
data.
2018-03-01 17:08:17 +00:00
Erik Johnston
33bebb63f3 Add some caches to help read marker API 2018-03-01 17:08:17 +00:00
Erik Johnston
483e8104db Merge pull request #2926 from matrix-org/erikj/member_handler_move
Move RoomMemberHandler out of Handlers
2018-03-01 17:01:25 +00:00
Erik Johnston
2ad4d5b5bb Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_sig_fed 2018-03-01 16:59:39 +00:00
Erik Johnston
92789199a9 Merge pull request #2924 from matrix-org/erikj/split_stream_store
Split out stream store
2018-03-01 16:56:37 +00:00
Erik Johnston
529c026ac1 Move back to hs.is_mine 2018-03-01 16:49:12 +00:00
Erik Johnston
7c371834cc Stub out broken function only used for cache 2018-03-01 16:44:13 +00:00
Erik Johnston
64346be26d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_stream_store 2018-03-01 16:26:42 +00:00
Erik Johnston
22518e2833 Merge pull request #2923 from matrix-org/erikj/stream_ago_worker
Calculate stream_ordering_month_ago correctly on workers
2018-03-01 16:23:54 +00:00
Erik Johnston
884b26ae41 Remove unused variables 2018-03-01 16:23:48 +00:00
Erik Johnston
1b2af11650 Document abstract class and method better 2018-03-01 16:20:57 +00:00
Erik Johnston
872ff95ed4 Default stream_ordering_*_ago to None 2018-03-01 16:00:05 +00:00
Erik Johnston
22004b524e Fix comment typo 2018-03-01 15:59:40 +00:00
Erik Johnston
4bc4236faf Merge pull request #2922 from matrix-org/erikj/split_room_store
Split up RoomStore
2018-03-01 15:55:01 +00:00
Richard van der Hoff
2324124a72 Merge pull request #2921 from matrix-org/rav/unyielding_make_deferred_yieldable
Rewrite make_deferred_yieldable avoiding inlineCallbacks
2018-03-01 15:39:10 +00:00
Erik Johnston
f793bc3877 Split out stream store 2018-03-01 15:13:08 +00:00
Erik Johnston
784f036306 Move RoomMemberHandler out of Handlers 2018-03-01 14:36:50 +00:00
Erik Johnston
6411f725be Calculate stream_ordering_month_ago correctly on workers 2018-03-01 14:20:53 +00:00
Erik Johnston
a9a2d66cdd Split out SignatureStore and EventFederationStore 2018-03-01 14:17:53 +00:00
Erik Johnston
0c8ba5dd1c Split up RoomStore 2018-03-01 14:01:19 +00:00
Richard van der Hoff
3a75de923b Rewrite make_deferred_yieldable avoiding inlineCallbacks
... because (a) it's actually simpler (b) it might be marginally more
performant?
2018-03-01 12:40:05 +00:00
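A sketch of the non-inlineCallbacks shape; `set_current_context` below is a trivial stand-in for Synapse's logcontext API, an assumption rather than the real thing:

```python
_current_context = None

def set_current_context(ctx):
    # Trivial stand-in for Synapse's logcontext API (assumption).
    global _current_context
    prev, _current_context = _current_context, ctx
    return prev

def make_deferred_yieldable(deferred):
    # Already fired: nothing to wait for, hand it straight back.
    if deferred.called and not deferred.paused:
        return deferred

    # Otherwise reset to the sentinel context while we wait, restoring
    # the calling context when the deferred fires.
    prev_context = set_current_context(None)

    def restore(result):
        set_current_context(prev_context)
        return result

    return deferred.addBoth(restore)
```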
Erik Johnston
17445e6701 Merge pull request #2920 from matrix-org/erikj/retry_send_event
Make repl send_event idempotent and retry on timeouts
2018-03-01 12:14:21 +00:00
Erik Johnston
126b9bf96f Log in the correct places 2018-03-01 12:05:33 +00:00
Erik Johnston
157298f986 Don't do preserve_fn for every request 2018-03-01 11:59:45 +00:00
Erik Johnston
89f90d808a Add some logging 2018-03-01 11:59:16 +00:00
Erik Johnston
8ded8ba2c7 Make repl send_event idempotent and retry on timeouts
If we treated timeouts as failures on the worker we would attempt to
clean up e.g. push actions while the master might still process the
event.
2018-03-01 11:20:34 +00:00
Erik Johnston
182ff17c83 Merge pull request #2875 from matrix-org/erikj/push_actions_worker
Calculate push actions on worker
2018-03-01 11:20:07 +00:00
Erik Johnston
f381d63813 Check event auth on the worker 2018-03-01 10:18:37 +00:00
Erik Johnston
6b8604239f Correctly send ratelimit and extra_users params 2018-03-01 10:08:39 +00:00
Erik Johnston
f756f961ea Fixup comments 2018-03-01 10:05:27 +00:00
Erik Johnston
28e973ac11 Calculate push actions on worker 2018-02-28 18:02:30 +00:00
Erik Johnston
9cb3a190bc Merge pull request #2913 from matrix-org/erikj/store_push
Move storage functions for push calculations
2018-02-28 18:02:03 +00:00
Erik Johnston
493e25d554 Move storage functions for push calculations
This will allow push actions for an event to be calculated on workers.
2018-02-27 13:58:16 +00:00
Erik Johnston
3594dbc6dc Merge pull request #2904 from matrix-org/erikj/receipt_cache_invalidation
Fix missing invalidations for receipt storage
2018-02-27 11:34:26 +00:00
Erik Johnston
2311189ee4 Merge pull request #2903 from matrix-org/erikj/split_roommember_store
Split out RoomMemberStore
2018-02-27 11:32:10 +00:00
Erik Johnston
c57607874c Merge pull request #2901 from matrix-org/erikj/split_as_stores
Split AS stores
2018-02-27 10:07:07 +00:00
Erik Johnston
8956f0147a Add comment 2018-02-27 10:06:51 +00:00
Erik Johnston
e5b4a208ce Merge pull request #2892 from matrix-org/erikj/batch_inserts_push_actions
Batch inserts into event_push_actions_staging
2018-02-26 14:45:40 +00:00
Erik Johnston
73fe866847 Merge pull request #2894 from matrix-org/erikj/handle_unpersisted_events_push
Ensure all push actions are deleted from staging
2018-02-26 14:28:35 +00:00
Erik Johnston
45b5fe9122 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/handle_unpersisted_events_push 2018-02-26 13:49:24 +00:00
Erik Johnston
d62ce972f8 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_roommember_store 2018-02-23 11:46:24 +00:00
Erik Johnston
6ae9a3d2a6 Update copyright 2018-02-23 11:44:49 +00:00
Erik Johnston
2ec49826e8 Merge pull request #2900 from matrix-org/erikj/split_event_push_actions
Split out EventPushActionWorkerStore
2018-02-23 11:44:30 +00:00
Erik Johnston
a90c60912f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_event_push_actions 2018-02-23 11:26:31 +00:00
Erik Johnston
50e8657867 Merge pull request #2902 from matrix-org/erikj/split_events_store
Split out get_events and co into a worker store
2018-02-23 11:23:52 +00:00
Erik Johnston
1cf9e071dd Merge pull request #2899 from matrix-org/erikj/split_pushers
Split PusherStore
2018-02-23 11:23:35 +00:00
Erik Johnston
d0957753bf Merge pull request #2898 from matrix-org/erikj/split_push_rules_store
Split PushRulesStore
2018-02-23 11:23:23 +00:00
Erik Johnston
199dba6c15 Merge pull request #2897 from matrix-org/erikj/split_account_data
Split AccountDataStore and TagStore
2018-02-23 11:23:11 +00:00
Erik Johnston
70349872c2 Update copyright 2018-02-23 11:14:35 +00:00
Erik Johnston
eba93b05bf Split EventsWorkerStore into separate file 2018-02-23 11:01:21 +00:00
Erik Johnston
bf8a36e080 Update copyright 2018-02-23 10:52:10 +00:00
Erik Johnston
5d0f665848 Remove redundant clock 2018-02-23 10:49:58 +00:00
Erik Johnston
3bd760628b _event_persist_queue shouldn't be in worker store 2018-02-23 10:49:18 +00:00
Erik Johnston
eb9b5eec81 Update copyright 2018-02-23 10:42:39 +00:00
Erik Johnston
c2ecfcc3a4 Update copyright 2018-02-23 10:41:34 +00:00
Erik Johnston
7e6cf89dc2 Update copyright 2018-02-23 10:39:19 +00:00
Erik Johnston
26d37f7a63 Update copyright 2018-02-23 10:33:55 +00:00
Erik Johnston
bb73f55fc6 Use absolute imports 2018-02-23 10:31:16 +00:00
Erik Johnston
faeb369f15 Fix missing invalidations for receipt storage 2018-02-21 15:19:54 +00:00
Erik Johnston
3dec9c66b3 Split out RoomMemberStore 2018-02-21 12:07:26 +00:00
Erik Johnston
46244b2759 Split AS stores 2018-02-21 11:49:34 +00:00
Erik Johnston
27b094f382 Split out get_events and co into a worker store 2018-02-21 11:41:48 +00:00
Erik Johnston
573712da6b Update comments 2018-02-21 11:29:49 +00:00
Erik Johnston
c96d547f4d Actually use new param 2018-02-21 11:03:42 +00:00
Erik Johnston
d15d237b0d Split out EventPushActionWorkerStore 2018-02-21 11:01:13 +00:00
Erik Johnston
27939cbb0e Merge pull request #2893 from matrix-org/erikj/delete_from_staging_fed
Delete from push_actions_staging in federation too
2018-02-21 11:00:06 +00:00
Erik Johnston
6f72765371 Split PusherStore 2018-02-21 10:54:21 +00:00
Erik Johnston
cbaad969f9 Split PushRulesStore 2018-02-21 10:43:31 +00:00
Erik Johnston
ca9b9d9703 Split AccountDataStore and TagStore 2018-02-21 10:15:04 +00:00
Erik Johnston
a2b25de68d Merge pull request #2896 from matrix-org/erikj/split_receipts_store
Split ReceiptsStore
2018-02-20 18:08:20 +00:00
Erik Johnston
8fbb4d0d19 Raise exception in abstract method 2018-02-20 17:59:23 +00:00
Erik Johnston
95e4cffd85 Fix comment 2018-02-20 17:58:40 +00:00
Erik Johnston
e316bbb4c0 Use abstract base class to access stream IDs 2018-02-20 17:43:57 +00:00
Erik Johnston
f5ac4dc2d4 Split ReceiptsStore 2018-02-20 16:28:28 +00:00
Erik Johnston
25634ed152 Fix test 2018-02-20 12:40:44 +00:00
Erik Johnston
24087bffa9 Ensure all push actions are deleted from staging 2018-02-20 12:34:31 +00:00
Erik Johnston
ad0ccf15ea Refactor _set_push_actions_for_event_and_users_txn to use events_and_contexts 2018-02-20 12:34:28 +00:00
Erik Johnston
e440e28456 Fix unit tests 2018-02-20 11:41:40 +00:00
Erik Johnston
d874d4f2d7 Delete from push_actions_staging in federation too 2018-02-20 11:37:52 +00:00
Erik Johnston
6ff8c87484 Batch inserts into event_push_actions_staging 2018-02-20 11:33:07 +00:00
Erik Johnston
324c3e9399 Merge pull request #2868 from matrix-org/erikj/refactor_media_storage
Make store_file use store_into_file
2018-02-20 11:31:24 +00:00
Richard van der Hoff
3fc33bae8b Merge pull request #2888 from bachp/pynacl-1.2.1
Update pynacl dependency to 1.2.1 or higher
2018-02-19 14:37:51 +00:00
Pascal Bach
3acd616979 Update pynacl dependency to 1.2.1 or higher
Signed-off-by: Pascal Bach <pascal.bach@nextrem.ch>
2018-02-19 10:45:22 +01:00
Travis Ralston
923d9300ed Add a blurb explaining the main synapse worker
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-02-17 21:53:46 -07:00
Richard van der Hoff
a71a080cd2 Merge pull request #2882 from matrix-org/rav/fix_purge_tablescan
(Really) fix tablescan of event_push_actions on purge
2018-02-16 15:18:47 +00:00
Richard van der Hoff
d1a3325f99 (Really) fix tablescan of event_push_actions on purge
commit 278d21b5 added new code to avoid the tablescan, but didn't remove the
old :/
2018-02-16 14:02:31 +00:00
Erik Johnston
bf5ef10a93 Merge pull request #2874 from matrix-org/erikj/creator_push_actions
Store push actions in DB staging area instead of context
2018-02-16 11:52:51 +00:00
Erik Johnston
6af025d3c4 Fix typo of double is_highlight 2018-02-16 11:35:31 +00:00
Erik Johnston
012e8e142a Comments 2018-02-16 11:35:01 +00:00
Erik Johnston
3a061cae26 Fix unit test 2018-02-15 16:31:59 +00:00
Erik Johnston
b96278d6fe Ensure that we delete staging push actions on errors 2018-02-15 15:47:06 +00:00
Erik Johnston
4810f7effd Remove context.push_actions 2018-02-15 15:47:06 +00:00
Erik Johnston
c714c61853 Update event_push_actions table from staging table 2018-02-15 15:47:06 +00:00
Erik Johnston
acac21248c Store push actions in staging area 2018-02-15 15:47:04 +00:00
Erik Johnston
6ed9ff69c2 Merge pull request #2873 from matrix-org/erikj/event_creator_no_state
Don't serialize current state over replication
2018-02-15 14:09:42 +00:00
Erik Johnston
106906a65e Don't serialize current state over replication 2018-02-15 13:53:18 +00:00
Erik Johnston
5fb347fc41 Merge pull request #2872 from matrix-org/erikj/event_worker_dont_log
Don't log errors propogated from send_event
2018-02-15 12:31:49 +00:00
Erik Johnston
cd94728e93 Merge pull request #2871 from matrix-org/erikj/event_creator_state
Fix state group storage bug in workers
2018-02-15 11:34:01 +00:00
Erik Johnston
fd1601c596 Fix state group storage bug in workers
We needed to move `_count_state_group_hops_txn` to the
StateGroupWorkerStore.
2018-02-15 11:04:32 +00:00
Erik Johnston
ef344b10e5 Don't log errors propogated from send_event 2018-02-15 11:03:49 +00:00
Erik Johnston
92c52df702 Make store_file use store_into_file 2018-02-14 17:55:18 +00:00
Erik Johnston
3d5a25407c Merge pull request #2798 from jeremycline/fedora-repo
Note that Synapse is available in Fedora
2018-01-17 14:28:17 +00:00
Jeremy Cline
4102468da9 Note that Synapse is available in Fedora
Signed-off-by: Jeremy Cline <jeremy@jcline.org>
2018-01-16 16:14:02 -05:00
Richard van der Hoff
5b527d7ee1 Merge pull request #2674 from her001/readme-sytest
Mention SyTest in the README, after Development
2018-01-16 13:18:17 +00:00
Andrew Conrad
053ecae4db Mention SyTest in the README, after Development
Signed-off-by: Andrew Conrad <aconrad103@gmail.com>
2017-11-14 15:11:35 -06:00
175 changed files with 5665 additions and 3130 deletions

View File

@@ -1,11 +1,208 @@
Unreleased
==========
Changes in synapse v0.28.0 (2018-04-26)
=======================================
synctl no longer starts the main synapse when using the ``-a`` option with workers.
A new worker file should be added with ``worker_app: synapse.app.homeserver``.
Bug Fixes:
* Fix quarantine media admin API and search reindex (PR #3130)
* Fix media admin APIs (PR #3134)
Changes in synapse v0.28.0-rc1 (2018-04-24)
===========================================
Minor performance improvement to federation sending and bug fixes.
(Note: This release does not include the state resolution changes discussed in Matrix Live)
Features:
* Add metrics for event processing lag (PR #3090)
* Add metrics for ResponseCache (PR #3092)
Changes:
* Synapse on PyPy (PR #2760) Thanks to @Valodim!
* move handling of auto_join_rooms to RegisterHandler (PR #2996) Thanks to @krombel!
* Improve handling of SRV records for federation connections (PR #3016) Thanks to @silkeh!
* Document the behaviour of ResponseCache (PR #3059)
* Preparation for py3 (PR #3061, #3073, #3074, #3075, #3103, #3104, #3106, #3107, #3109, #3110) Thanks to @NotAFile!
* update prometheus dashboard to use new metric names (PR #3069) Thanks to @krombel!
* use python3-compatible prints (PR #3074) Thanks to @NotAFile!
* Send federation events concurrently (PR #3078)
* Limit concurrent event sends for a room (PR #3079)
* Improve R30 stat definition (PR #3086)
* Send events to ASes concurrently (PR #3088)
* Refactor ResponseCache usage (PR #3093)
* Clarify that SRV may not point to a CNAME (PR #3100) Thanks to @silkeh!
* Use str(e) instead of e.message (PR #3103) Thanks to @NotAFile!
* Use six.itervalues in some places (PR #3106) Thanks to @NotAFile!
* Refactor store.have_events (PR #3117)
Bug Fixes:
* Return 401 for invalid access_token on logout (PR #2938) Thanks to @dklug!
* Return a 404 rather than a 500 on rejoining empty rooms (PR #3080)
* fix federation_domain_whitelist (PR #3099)
* Avoid creating events with huge numbers of prev_events (PR #3113)
* Reject events which have lots of prev_events (PR #3118)
Changes in synapse v0.27.4 (2018-04-13)
======================================
Changes:
* Update canonicaljson dependency (#3095)
Changes in synapse v0.27.3 (2018-04-11)
======================================
Bug fixes:
* URL quote path segments over federation (#3082)
Changes in synapse v0.27.3-rc2 (2018-04-09)
==========================================
v0.27.3-rc1 used a stale version of the develop branch, so the changelog overstates
the functionality. v0.27.3-rc2 is up to date; rc1 should be ignored.
Changes in synapse v0.27.3-rc1 (2018-04-09)
===========================================
Notable changes include API support for joinability of groups. Also new metrics
and phone home stats. Phone home stats include better visibility of system usage
so we can tweak synapse to work better for all users rather than our own experience
with matrix.org. Also, recording the 'r30' stat, which is the measure we use to track
overall growth of the Matrix ecosystem. It is defined as:
Counts the number of native 30 day retained users, defined as:
* Users who have created their accounts more than 30 days ago
* Where last seen at most 30 days ago
* Where account creation and last_seen are > 30 days apart
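Read literally, that definition is three timestamp comparisons. Below is a minimal
sketch of the arithmetic (hypothetical ``users`` records carrying creation and
last-seen timestamps in seconds; the real counter added in PR #3041 is implemented
in SQL, not like this):

.. code:: python

    import time

    THIRTY_DAYS = 30 * 24 * 60 * 60  # in seconds

    def count_r30(users, now=None):
        """Count (creation_ts, last_seen_ts) pairs meeting the three criteria above."""
        now = now if now is not None else time.time()
        return sum(
            1
            for creation_ts, last_seen_ts in users
            # account created more than 30 days ago
            if now - creation_ts > THIRTY_DAYS
            # last seen at most 30 days ago
            and now - last_seen_ts <= THIRTY_DAYS
            # creation and last_seen more than 30 days apart
            and last_seen_ts - creation_ts > THIRTY_DAYS
        )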
Features:
* Add joinability for groups (PR #3045)
* Implement group join API (PR #3046)
* Add counter metrics for calculating state delta (PR #3033)
* R30 stats (PR #3041)
* Measure time it takes to calculate state group ID (PR #3043)
* Add basic performance statistics to phone home (PR #3044)
* Add response size metrics (PR #3071)
* phone home cache size configurations (PR #3063)
Changes:
* Add a blurb explaining the main synapse worker (PR #2886) Thanks to @turt2live!
* Replace old style error catching with 'as' keyword (PR #3000) Thanks to @NotAFile!
* Use .iter* to avoid copies in StateHandler (PR #3006)
* Linearize calls to _generate_user_id (PR #3029)
* Remove last usage of ujson (PR #3030)
* Use simplejson throughout (PR #3048)
* Use static JSONEncoders (PR #3049)
* Remove uses of events.content (PR #3060)
* Improve database cache performance (PR #3068)
Bug fixes:
* Add room_id to the response of `rooms/{roomId}/join` (PR #2986) Thanks to @jplatte!
* Fix replication after switch to simplejson (PR #3015)
* 404 correctly on missing paths via NoResource (PR #3022)
* Fix error when claiming e2e keys from offline servers (PR #3034)
* fix tests/storage/test_user_directory.py (PR #3042)
* use PUT instead of POST for federating groups/m.join_policy (PR #3070) Thanks to @krombel!
* postgres port script: fix state_groups_pkey error (PR #3072)
Changes in synapse v0.27.2 (2018-03-26)
=======================================
Bug fixes:
* Fix bug which broke TCP replication between workers (PR #3015)
Changes in synapse v0.27.1 (2018-03-26)
=======================================
Meta release as v0.27.0 temporarily pointed to the wrong commit
Changes in synapse v0.27.0 (2018-03-26)
=======================================
No changes since v0.27.0-rc2
Changes in synapse v0.27.0-rc2 (2018-03-19)
===========================================
Pulls in v0.26.1
Bug fixes:
* Fix bug introduced in v0.27.0-rc1 that causes much increased memory usage in state cache (PR #3005)
Changes in synapse v0.26.1 (2018-03-15)
=======================================
Bug fixes:
* Fix bug where an invalid event caused server to stop functioning correctly,
due to parsing and serializing bugs in ujson library (PR #3008)
Changes in synapse v0.27.0-rc1 (2018-03-14)
===========================================
The common case for running Synapse is not to run separate workers, but for those that do, be aware that synctl no longer starts the main synapse when using the ``-a`` option with workers. A new worker file should be added with ``worker_app: synapse.app.homeserver``.
This release also begins the process of renaming a number of the metrics
reported to prometheus. See `docs/metrics-howto.rst <docs/metrics-howto.rst#block-and-response-metrics-renamed-for-0-27-0>`_.
Note that the v0.28.0 release will remove the deprecated metric names.
Features:
* Add ability for ASes to override message send time (PR #2754)
* Add support for custom storage providers for media repository (PR #2867, #2777, #2783, #2789, #2791, #2804, #2812, #2814, #2857, #2868, #2767)
* Add purge API features, see `docs/admin_api/purge_history_api.rst <docs/admin_api/purge_history_api.rst>`_ for full details (PR #2858, #2867, #2882, #2946, #2962, #2943)
* Add support for whitelisting 3PIDs that users can register. (PR #2813)
* Add ``/room/{id}/event/{id}`` API (PR #2766)
* Add an admin API to get all the media in a room (PR #2818) Thanks to @turt2live!
* Add ``federation_domain_whitelist`` option (PR #2820, #2821)
Changes:
* Continue to factor out processing from main process and into worker processes. See updated `docs/workers.rst <docs/workers.rst>`_ (PR #2892 - #2904, #2913, #2920 - #2926, #2947, #2847, #2854, #2872, #2873, #2874, #2928, #2929, #2934, #2856, #2976 - #2984, #2987 - #2989, #2991 - #2993, #2995, #2784)
* Ensure state cache is used when persisting events (PR #2864, #2871, #2802, #2835, #2836, #2841, #2842, #2849)
* Change the default config to bind on both IPv4 and IPv6 on all platforms (PR #2435) Thanks to @silkeh!
* No longer require a specific version of saml2 (PR #2695) Thanks to @okurz!
* Remove ``verbosity``/``log_file`` from generated config (PR #2755)
* Add and improve metrics and logging (PR #2770, #2778, #2785, #2786, #2787, #2793, #2794, #2795, #2809, #2810, #2833, #2834, #2844, #2965, #2927, #2975, #2790, #2796, #2838)
* When using synctl with workers, don't start the main synapse automatically (PR #2774)
* Minor performance improvements (PR #2773, #2792)
* Use a connection pool for non-federation outbound connections (PR #2817)
* Make it possible to run unit tests against postgres (PR #2829)
* Update pynacl dependency to 1.2.1 or higher (PR #2888) Thanks to @bachp!
* Remove ability for AS users to call /events and /sync (PR #2948)
* Use bcrypt.checkpw (PR #2949) Thanks to @krombel!
Bug fixes:
* Fix broken ``ldap_config`` config option (PR #2683) Thanks to @seckrv!
* Fix error message when user is not allowed to unban (PR #2761) Thanks to @turt2live!
* Fix publicised groups GET API (singular) over federation (PR #2772)
* Fix user directory when using ``user_directory_search_all_users`` config option (PR #2803, #2831)
* Fix error on ``/publicRooms`` when no rooms exist (PR #2827)
* Fix bug in quarantine_media (PR #2837)
* Fix url_previews when no Content-Type is returned from URL (PR #2845)
* Fix rare race in sync API when joining room (PR #2944)
* Fix slow event search, switch back from GIST to GIN indexes (PR #2769, #2848)
Changes in synapse v0.26.0 (2018-01-05)

View File

@@ -30,8 +30,12 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use Jenkins for continuous integration (http://matrix.org/jenkins), and
typically all pull requests get automatically tested by Jenkins: if your change breaks the build, Jenkins will yell about it in #matrix-dev:matrix.org so please lurk there and keep an eye open.
We use `Jenkins <http://matrix.org/jenkins>`_ and
`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an administrator to start them. If your change
breaks the build, this will be shown in github, so please keep an eye on the
pull request for feedback.
Code style
~~~~~~~~~~
@@ -115,4 +119,4 @@ can't be accepted. Git makes this trivial - just use the -s flag when you do
Conclusion
~~~~~~~~~~
That's it! Matrix is a very open and collaborative project as you might expect given our obsession with open communication. If we're going to successfully matrix together all the fragmented communication technologies out there we are reliant on contributions and collaboration from the community to do so. So please get involved - and we hope you have as much fun hacking on Matrix as we do!

View File

@@ -157,8 +157,8 @@ if you prefer.
In case of problems, please see the _`Troubleshooting` section below.
Alternatively, Silvio Fricke has contributed a Dockerfile to automate the
above in Docker at https://registry.hub.docker.com/u/silviof/docker-matrix/.
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate the
above in Docker at https://hub.docker.com/r/avhost/docker-matrix/tags/
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
@@ -354,6 +354,10 @@ https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one
Fedora
------
Synapse is in the Fedora repositories as ``matrix-synapse``::
sudo dnf install matrix-synapse
Oleg Girko provides Fedora RPMs at
https://obs.infoserver.lv/project/monitor/matrix-synapse
@@ -610,6 +614,9 @@ should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
$ dig -t srv _matrix._tcp.example.com
_matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com.
Note that the server hostname cannot be an alias (CNAME record): it has to point
directly to the server hosting the synapse instance.
You can then configure your homeserver to use ``<yourdomain.com>`` as the domain in
its user-ids, by setting ``server_name``::
@@ -890,6 +897,17 @@ This should end with a 'PASSED' result::
PASSED (successes=143)
Running the Integration Tests
=============================
Synapse is accompanied by `SyTest <https://github.com/matrix-org/sytest>`_,
a Matrix homeserver integration testing suite, which uses HTTP requests to
access the API as a Matrix client would. It is able to run Synapse directly from
the source tree, so installation of the server is not required.
Testing with SyTest is recommended for verifying that changes related to the
Client-Server API are functioning correctly. See the `installation instructions
<https://github.com/matrix-org/sytest#installing>`_ for details.
Building Internal API Documentation
===================================

View File

@@ -48,6 +48,18 @@ returned by the Client-Server API:
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to $NEXT_VERSION
====================
This release expands the anonymous usage stats sent if the opt-in
``report_stats`` configuration is set to ``true``. We now capture RSS memory
and cpu use at a very coarse level. This requires administrators to install
the optional ``psutil`` python module.
We would appreciate it if you could assist by ensuring this module is available
and ``report_stats`` is enabled. This will let us see if performance changes to
synapse are having an impact to the general community.
Upgrading to v0.15.0
====================

contrib/README.rst Normal file
View File

@@ -0,0 +1,10 @@
Community Contributions
=======================
Everything in this directory is a project submitted by the community that may be useful
to others. As such, the project maintainers cannot guarantee support, stability
or backwards compatibility of these projects.
Files in this directory should *not* be relied on directly, as they may not
continue to work or exist in the future. If you wish to use any of these files then
they should be copied to avoid them breaking from underneath you.

View File

@@ -22,6 +22,8 @@ import argparse
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
from six import string_types
def make_graph(file_name, room_id, file_prefix, limit):
print "Reading lines"
@@ -58,7 +60,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for key, value in unfreeze(event.get_dict()["content"]).items():
if value is None:
value = "<null>"
elif isinstance(value, basestring):
elif isinstance(value, string_types):
pass
else:
value = json.dumps(value)

View File

@@ -202,11 +202,11 @@ new PromConsole.Graph({
<h1>Requests</h1>
<h3>Requests by Servlet</h3>
<div id="synapse_http_server_requests_servlet"></div>
<div id="synapse_http_server_request_count_servlet"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet"),
expr: "rate(synapse_http_server_requests:servlet[2m])",
node: document.querySelector("#synapse_http_server_request_count_servlet"),
expr: "rate(synapse_http_server_request_count:servlet[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -215,11 +215,11 @@ new PromConsole.Graph({
})
</script>
<h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
<div id="synapse_http_server_requests_servlet_minus_events"></div>
<div id="synapse_http_server_request_count_servlet_minus_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet_minus_events"),
expr: "rate(synapse_http_server_requests:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
node: document.querySelector("#synapse_http_server_request_count_servlet_minus_events"),
expr: "rate(synapse_http_server_request_count:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -233,7 +233,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_time_avg"),
expr: "rate(synapse_http_server_response_time:total[2m]) / rate(synapse_http_server_response_time:count[2m]) / 1000",
expr: "rate(synapse_http_server_response_time_seconds[2m]) / rate(synapse_http_server_response_count[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -276,7 +276,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_ru_utime"),
expr: "rate(synapse_http_server_response_ru_utime:total[2m])",
expr: "rate(synapse_http_server_response_ru_utime_seconds[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -291,7 +291,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
expr: "rate(synapse_http_server_response_db_txn_duration:total[2m])",
expr: "rate(synapse_http_server_response_db_txn_duration_seconds[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -306,7 +306,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_send_time_avg"),
expr: "rate(synapse_http_server_response_time:total{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_time:count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
expr: "rate(synapse_http_server_response_time_second{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,

View File

@@ -1,10 +1,10 @@
synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method)
synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet)
synapse_http_server_request_count:method{servlet=""} = sum(synapse_http_server_request_count) by (method)
synapse_http_server_request_count:servlet{method=""} = sum(synapse_http_server_request_count) by (servlet)
synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet)
synapse_http_server_request_count:total{servlet=""} = sum(synapse_http_server_request_count:by_method) by (servlet)
synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])

View File

@@ -5,19 +5,19 @@ groups:
expr: "sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)"
- record: "synapse_federation_transaction_queue_pendingPdus:total"
expr: "sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)"
- record: 'synapse_http_server_requests:method'
- record: 'synapse_http_server_request_count:method'
labels:
servlet: ""
expr: "sum(synapse_http_server_requests) by (method)"
- record: 'synapse_http_server_requests:servlet'
expr: "sum(synapse_http_server_request_count) by (method)"
- record: 'synapse_http_server_request_count:servlet'
labels:
method: ""
expr: 'sum(synapse_http_server_requests) by (servlet)'
expr: 'sum(synapse_http_server_request_count) by (servlet)'
- record: 'synapse_http_server_requests:total'
- record: 'synapse_http_server_request_count:total'
labels:
servlet: ""
expr: 'sum(synapse_http_server_requests:by_method) by (servlet)'
expr: 'sum(synapse_http_server_request_count:by_method) by (servlet)'
- record: 'synapse_cache:hit_ratio_5m'
expr: 'rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])'

View File

@@ -2,6 +2,9 @@
# (e.g. https://www.archlinux.org/packages/community/any/matrix-synapse/ for ArchLinux)
# rather than in a user home directory or similar under virtualenv.
# **NOTE:** This is an example service file that may change in the future. If you
# wish to use this please copy rather than symlink it.
[Unit]
Description=Synapse Matrix homeserver
@@ -12,6 +15,7 @@ Group=synapse
WorkingDirectory=/var/lib/synapse
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml
ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml
# EnvironmentFile=-/etc/sysconfig/synapse # Can be used to e.g. set SYNAPSE_CACHE_FACTOR
[Install]
WantedBy=multi-user.target

View File

Depending on the amount of history being purged, a call to the API may take
several minutes or longer. During this period users will not be able to
paginate further back in the room from the point being purged from.
The API is simply:
The API is:
``POST /_matrix/client/r0/admin/purge_history/<room_id>/<event_id>``
``POST /_matrix/client/r0/admin/purge_history/<room_id>[/<event_id>]``
including an ``access_token`` of a server admin.
By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
deleted, and room state data before the cutoff is always removed).
deleted.)
To delete local events as well, set ``delete_local_events`` in the body:
Room state data (such as joins, leaves, topic) is always preserved.
To delete local message events as well, set ``delete_local_events`` in the body:
.. code:: json
{
"delete_local_events": true
}
The caller must specify the point in the room to purge up to. This can be
specified by including an event_id in the URI, or by setting a
``purge_up_to_event_id`` or ``purge_up_to_ts`` in the request body. If an event
id is given, that event (and others at the same graph depth) will be retained.
If ``purge_up_to_ts`` is given, it should be a timestamp since the unix epoch,
in milliseconds.
The API starts the purge running, and returns immediately with a JSON body with
a purge id:
.. code:: json
{
"purge_id": "<opaque id>"
}
Purge status query
------------------
It is possible to poll for updates on recent purges with a second API:
``GET /_matrix/client/r0/admin/purge_history_status/<purge_id>``
(again, with a suitable ``access_token``). This API returns a JSON body like
the following:
.. code:: json
{
"status": "active"
}
The status will be one of ``active``, ``complete``, or ``failed``.
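Putting the two endpoints together, a purge-and-poll loop might look like the
following sketch (the ``requests`` library, homeserver URL, room id and token
are placeholders of ours, not part of the API):

.. code:: python

    import time

    import requests

    HS = "https://matrix.example.com"
    ROOM_ID = "!roomid:example.com"
    PARAMS = {"access_token": "<admin access token>"}

    # Start the purge, keeping local users' events and purging everything
    # older than the given timestamp (milliseconds since the unix epoch).
    resp = requests.post(
        HS + "/_matrix/client/r0/admin/purge_history/" + ROOM_ID,
        params=PARAMS,
        json={"purge_up_to_ts": 1514764800000},
    )
    purge_id = resp.json()["purge_id"]

    # Poll the status endpoint until the purge completes or fails.
    while True:
        status = requests.get(
            HS + "/_matrix/client/r0/admin/purge_history_status/" + purge_id,
            params=PARAMS,
        ).json()["status"]
        if status in ("complete", "failed"):
            break
        time.sleep(5)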

View File

@@ -279,9 +279,9 @@ Obviously that option means that the operations done in
that might be fixed by setting a different logcontext via a ``with
LoggingContext(...)`` in ``background_operation``).
The second option is to use ``logcontext.preserve_fn``, which wraps a function
so that it doesn't reset the logcontext even when it returns an incomplete
deferred, and adds a callback to the returned deferred to reset the
The second option is to use ``logcontext.run_in_background``, which wraps a
function so that it doesn't reset the logcontext even when it returns an
incomplete deferred, and adds a callback to the returned deferred to reset the
logcontext. In other words, it turns a function that follows the Synapse rules
about logcontexts and Deferreds into one which behaves more like an external
function — the opposite operation to that described in the previous section.
@@ -293,7 +293,7 @@ It can be used like this:
def do_request_handling():
yield foreground_operation()
logcontext.preserve_fn(background_operation)()
logcontext.run_in_background(background_operation)
# this will now be logged against the request context
logger.debug("Request handling complete")

View File

@@ -55,7 +55,12 @@ synapse process.)
You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
subdirectory, to allow synctl to manipulate them.
subdirectory, to allow synctl to manipulate them. An additional configuration
for the master synapse process will need to be created because the process will
not be started automatically. That configuration should look like this::
worker_app: synapse.app.homeserver
daemonize: true
Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
@@ -230,9 +235,11 @@ file. For example::
``synapse.app.event_creator``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles non-state event creation. It can handle REST endpoints matching:
Handles some event creation. It can handle REST endpoints matching::
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
It will create events locally and then send them on to the main synapse
instance to be persisted and handled.
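For illustration, here is a standalone sketch (not synapse code) that checks
request paths against the three patterns listed above:

.. code:: python

    import re

    EVENT_CREATOR_PATTERNS = [
        r"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
        r"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
        r"^/_matrix/client/(api/v1|r0|unstable)/join/",
    ]

    def handled_by_event_creator(path):
        return any(re.match(p, path) for p in EVENT_CREATOR_PATTERNS)

    assert handled_by_event_creator(
        "/_matrix/client/r0/rooms/!abc:example.com/send/m.room.message/1")
    assert handled_by_event_creator("/_matrix/client/r0/rooms/!abc:example.com/invite")
    assert not handled_by_event_creator("/_matrix/client/r0/sync")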

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -29,6 +30,8 @@ import time
import traceback
import yaml
from six import string_types
logger = logging.getLogger("synapse_port_db")
@@ -250,6 +253,12 @@ class Porter(object):
@defer.inlineCallbacks
def handle_table(self, table, postgres_size, table_size, forward_chunk,
backward_chunk):
logger.info(
"Table %s: %i/%i (rows %i-%i) already ported",
table, postgres_size, table_size,
backward_chunk+1, forward_chunk-1,
)
if not table_size:
return
@@ -467,31 +476,10 @@ class Porter(object):
self.progress.set_state("Preparing PostgreSQL")
self.setup_db(postgres_config, postgres_engine)
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store._simple_select_onecol(
table="sqlite_master",
keyvalues={
"type": "table",
},
retcol="name",
)
postgres_tables = yield self.postgres_store._simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
)
tables = set(sqlite_tables) & set(postgres_tables)
self.progress.set_state("Creating tables")
logger.info("Found %d tables", len(tables))
self.progress.set_state("Creating port tables")
def create_port_table(txn):
txn.execute(
"CREATE TABLE port_from_sqlite3 ("
"CREATE TABLE IF NOT EXISTS port_from_sqlite3 ("
" table_name varchar(100) NOT NULL UNIQUE,"
" forward_rowid bigint NOT NULL,"
" backward_rowid bigint NOT NULL"
@@ -517,18 +505,33 @@ class Porter(object):
"alter_table", alter_table
)
except Exception as e:
logger.info("Failed to create port table: %s", e)
pass
try:
yield self.postgres_store.runInteraction(
"create_port_table", create_port_table
)
except Exception as e:
logger.info("Failed to create port table: %s", e)
yield self.postgres_store.runInteraction(
"create_port_table", create_port_table
)
self.progress.set_state("Setting up")
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store._simple_select_onecol(
table="sqlite_master",
keyvalues={
"type": "table",
},
retcol="name",
)
# Set up tables.
postgres_tables = yield self.postgres_store._simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
)
tables = set(sqlite_tables) & set(postgres_tables)
logger.info("Found %d tables", len(tables))
# Step 3. Figure out what still needs copying
self.progress.set_state("Checking on port progress")
setup_res = yield defer.gatherResults(
[
self.setup_table(table)
@@ -539,7 +542,8 @@ class Porter(object):
consumeErrors=True,
)
# Process tables.
# Step 4. Do the copying.
self.progress.set_state("Copying to postgres")
yield defer.gatherResults(
[
self.handle_table(*res)
@@ -548,6 +552,9 @@ class Porter(object):
consumeErrors=True,
)
# Step 5. Do final post-processing
yield self._setup_state_group_id_seq()
self.progress.done()
except:
global end_error_exec_info
@@ -569,7 +576,7 @@ class Porter(object):
def conv(j, col):
if j in bool_cols:
return bool(col)
elif isinstance(col, basestring) and "\0" in col:
elif isinstance(col, string_types) and "\0" in col:
logger.warn("DROPPING ROW: NUL value in table %s col %s: %r", table, headers[j], col)
raise BadValueException();
return col
@@ -707,6 +714,16 @@ class Porter(object):
defer.returnValue((done, remaining + done))
def _setup_state_group_id_seq(self):
def r(txn):
txn.execute("SELECT MAX(id) FROM state_groups")
next_id = txn.fetchone()[0]+1
txn.execute(
"ALTER SEQUENCE state_group_id_seq RESTART WITH %s",
(next_id,),
)
return self.postgres_store.runInteraction("setup_state_group_id_seq", r)
##############################################
###### The following is simply UI stuff ######

View File

@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.26.0"
__version__ = "0.28.0"

View File

@@ -204,8 +204,8 @@ class Auth(object):
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
"User-Agent",
default=[""]
b"User-Agent",
default=[b""]
)[0]
if user and access_token and ip_addr:
self.store.insert_client_ip(
@@ -672,7 +672,7 @@ def has_access_token(request):
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
@@ -692,8 +692,8 @@ def get_access_token_from_request(request, token_not_found_http_status=401):
AuthError: If there isn't an access_token in the request.
"""
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
query_params = request.args.get(b"access_token")
if auth_headers:
# Try the get the access_token from a "Authorization: Bearer"
# header

View File

@@ -15,9 +15,11 @@
"""Contains exceptions and error codes."""
import json
import logging
import simplejson as json
from six import iteritems
logger = logging.getLogger(__name__)
@@ -296,7 +298,7 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
A dict representing the error response JSON.
"""
err = {"error": msg, "errcode": code}
for key, value in kwargs.iteritems():
for key, value in iteritems(kwargs):
err[key] = value
return err

View File

@@ -17,7 +17,7 @@ from synapse.storage.presence import UserPresenceState
from synapse.types import UserID, RoomID
from twisted.internet import defer
import ujson as json
import simplejson as json
import jsonschema
from jsonschema import FormatChecker

View File

@@ -36,7 +36,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.appservice")
@@ -64,7 +64,7 @@ class AppserviceServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,

View File

@@ -44,7 +44,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.client_reader")
@@ -88,7 +88,7 @@ class ClientReaderServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
@@ -156,7 +156,6 @@ def start(config_options):
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def start():

View File

@@ -27,14 +27,24 @@ from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import RoomSendEventRestServlet
from synapse.rest.client.v1.room import (
RoomSendEventRestServlet, RoomMembershipRestServlet, RoomStateEventRestServlet,
JoinRoomAliasServlet,
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
@@ -42,12 +52,19 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.event_creator")
class EventCreatorSlavedStore(
DirectoryStore,
TransactionStore,
SlavedProfileStore,
SlavedAccountDataStore,
SlavedPusherStore,
SlavedReceiptsStore,
SlavedPushRuleStore,
SlavedDeviceStore,
SlavedClientIpStore,
SlavedApplicationServiceStore,
@@ -77,6 +94,9 @@ class EventCreatorServer(HomeServer):
elif name == "client":
resource = JsonResource(self, canonical_json=False)
RoomSendEventRestServlet(self).register(resource)
RoomMembershipRestServlet(self).register(resource)
RoomStateEventRestServlet(self).register(resource)
JoinRoomAliasServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
@@ -84,7 +104,7 @@ class EventCreatorServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
@@ -153,7 +173,6 @@ def start(config_options):
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def start():

View File

@@ -41,7 +41,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.federation_reader")
@@ -77,7 +77,7 @@ class FederationReaderServer(HomeServer):
FEDERATION_PREFIX: TransportLayerServer(self),
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
@@ -144,7 +144,6 @@ def start(config_options):
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def start():

View File

@@ -42,7 +42,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.federation_sender")
@@ -91,7 +91,7 @@ class FederationSenderServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,

View File

@@ -44,7 +44,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.frontend_proxy")
@@ -90,7 +90,7 @@ class KeyUploadServlet(RestServlet):
# They're actually trying to upload something, proxy to main synapse.
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
headers = {
"Authorization": auth_headers,
}
@@ -142,7 +142,7 @@ class FrontendProxyServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
@@ -211,7 +211,6 @@ def start(config_options):
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def start():

View File

@@ -48,6 +48,7 @@ from synapse.server import HomeServer
from synapse.storage import are_all_users_on_domain
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
@@ -56,7 +57,7 @@ from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, Resource
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
@@ -126,7 +127,7 @@ class SynapseHomeServer(HomeServer):
if WEB_CLIENT_PREFIX in resources:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
root_resource = Resource()
root_resource = NoResource()
root_resource = create_resource_tree(resources, root_resource)
@@ -348,7 +349,7 @@ def setup(config_options):
hs.get_state_handler().start_caching()
hs.get_datastore().start_profiling()
hs.get_datastore().start_doing_background_updates()
hs.get_replication_layer().start_get_pdu_cache()
hs.get_federation_client().start_get_pdu_cache()
register_memory_metrics(hs)
@@ -402,6 +403,10 @@ def run(hs):
stats = {}
# Contains the list of processes we will be monitoring
# currently either 0 or 1
stats_process = []
@defer.inlineCallbacks
def phone_stats_home():
logger.info("Gathering stats for reporting")
@@ -425,8 +430,21 @@ def run(hs):
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in r30_results.iteritems():
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
stats["cache_factor"] = CACHE_SIZE_FACTOR
stats["event_cache_size"] = hs.config.event_cache_size
if len(stats_process) > 0:
stats["memory_rss"] = 0
stats["cpu_average"] = 0
for process in stats_process:
stats["memory_rss"] += process.memory_info().rss
stats["cpu_average"] += int(process.cpu_percent(interval=None))
logger.info("Reporting stats to matrix.org: %s" % (stats,))
try:
@@ -437,10 +455,32 @@ def run(hs):
except Exception as e:
logger.warn("Error reporting stats: %s", e)
def performance_stats_init():
try:
import psutil
process = psutil.Process()
# Ensure we can fetch both, and make the initial request for cpu_percent
# so the next request will use this as the initial point.
process.memory_info().rss
process.cpu_percent(interval=None)
logger.info("report_stats can use psutil")
stats_process.append(process)
except (ImportError, AttributeError):
logger.warn(
"report_stats enabled but psutil is not installed or incorrect version."
" Disabling reporting of memory/cpu stats."
" Ensuring psutil is available will help matrix.org track performance"
" changes across releases."
)
if hs.config.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)
# We need to defer this init for the cases that we daemonize
# otherwise the process ID we get is that of the non-daemon process
clock.call_later(0, performance_stats_init)
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(5 * 60, phone_stats_home)

View File

@@ -43,7 +43,7 @@ from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.media_repository")
@@ -84,7 +84,7 @@ class MediaRepositoryServer(HomeServer):
),
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
@@ -158,7 +158,6 @@ def start(config_options):
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def start():

View File

@@ -32,13 +32,12 @@ from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.storage.roommember import RoomMemberStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.pusher")
@@ -75,10 +74,6 @@ class PusherSlaveStore(
DataStore.get_profile_displayname.__func__
)
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
class PusherServer(HomeServer):
def setup(self):
@@ -99,7 +94,7 @@ class PusherServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,

View File

@@ -56,14 +56,14 @@ from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
from six import iteritems
logger = logging.getLogger("synapse.app.synchrotron")
class SynchrotronSlavedStore(
SlavedPushRuleStore,
SlavedEventStore,
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
@@ -73,14 +73,12 @@ class SynchrotronSlavedStore(
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
):
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
did_forget = (
RoomMemberStore.__dict__["did_forget"]
)
@@ -215,7 +213,7 @@ class SynchrotronPresence(object):
def get_currently_syncing_users(self):
return [
user_id for user_id, count in self.user_to_num_current_syncs.iteritems()
user_id for user_id, count in iteritems(self.user_to_num_current_syncs)
if count > 0
]
@@ -273,7 +271,7 @@ class SynchrotronServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,

View File

@@ -38,7 +38,7 @@ def pid_running(pid):
try:
os.kill(pid, 0)
return True
except OSError, err:
except OSError as err:
if err.errno == errno.EPERM:
return True
return False
@@ -98,7 +98,7 @@ def stop(pidfile, app):
try:
os.kill(pid, signal.SIGTERM)
write("stopped %s" % (app,), colour=GREEN)
except OSError, err:
except OSError as err:
if err.errno == errno.ESRCH:
write("%s not running" % (app,), colour=YELLOW)
elif err.errno == errno.EPERM:
@@ -252,6 +252,7 @@ def main():
for running_pid in running_pids:
while pid_running(running_pid):
time.sleep(0.2)
write("All processes exited; now restarting...")
if action == "start" or action == "restart":
if start_stop_synapse:

View File

@@ -43,7 +43,7 @@ from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.user_dir")
@@ -116,7 +116,7 @@ class UserDirectoryServer(HomeServer):
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,

View File

@@ -21,6 +21,8 @@ from twisted.internet import defer
import logging
import re
from six import string_types
logger = logging.getLogger(__name__)
@@ -146,7 +148,7 @@ class ApplicationService(object):
)
regex = regex_obj.get("regex")
if isinstance(regex, basestring):
if isinstance(regex, string_types):
regex_obj["regex"] = re.compile(regex) # Pre-compile regex
else:
raise ValueError(

View File

@@ -18,7 +18,6 @@ from synapse.api.constants import ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
from synapse.http.client import SimpleHttpClient
from synapse.events.utils import serialize_event
from synapse.util.logcontext import preserve_fn, make_deferred_yieldable
from synapse.util.caches.response_cache import ResponseCache
from synapse.types import ThirdPartyInstanceID
@@ -73,7 +72,8 @@ class ApplicationServiceApi(SimpleHttpClient):
super(ApplicationServiceApi, self).__init__(hs)
self.clock = hs.get_clock()
self.protocol_meta_cache = ResponseCache(hs, timeout_ms=HOUR_IN_MS)
self.protocol_meta_cache = ResponseCache(hs, "as_protocol_meta",
timeout_ms=HOUR_IN_MS)
@defer.inlineCallbacks
def query_user(self, service, user_id):
@@ -193,12 +193,7 @@ class ApplicationServiceApi(SimpleHttpClient):
defer.returnValue(None)
key = (service.id, protocol)
result = self.protocol_meta_cache.get(key)
if not result:
result = self.protocol_meta_cache.set(
key, preserve_fn(_get)()
)
return make_deferred_yieldable(result)
return self.protocol_meta_cache.wrap(key, _get)
@defer.inlineCallbacks
def push_bulk(self, service, events, txn_id=None):

View File

@@ -19,6 +19,8 @@ import os
import yaml
from textwrap import dedent
from six import integer_types
class ConfigError(Exception):
pass
@@ -49,7 +51,7 @@ Missing mandatory `server_name` config option.
class Config(object):
@staticmethod
def parse_size(value):
if isinstance(value, int) or isinstance(value, long):
if isinstance(value, integer_types):
return value
sizes = {"K": 1024, "M": 1024 * 1024}
size = 1
@@ -61,7 +63,7 @@ class Config(object):
@staticmethod
def parse_duration(value):
if isinstance(value, int) or isinstance(value, long):
if isinstance(value, integer_types):
return value
second = 1000
minute = 60 * second
@@ -288,22 +290,22 @@ class Config(object):
)
obj.invoke_all("generate_files", config)
config_file.write(config_bytes)
print (
print((
"A config file has been generated in %r for server name"
" %r with corresponding SSL keys and self-signed"
" certificates. Please review this file and customise it"
" to your needs."
) % (config_path, server_name)
print (
) % (config_path, server_name))
print(
"If this server name is incorrect, you will need to"
" regenerate the SSL certificates"
)
return
else:
print (
print((
"Config file %r already exists. Generating any missing key"
" files."
) % (config_path,)
) % (config_path,))
generate_keys = True
parser = argparse.ArgumentParser(

View File

@@ -21,6 +21,8 @@ import urllib
import yaml
import logging
from six import string_types
logger = logging.getLogger(__name__)
@@ -89,14 +91,14 @@ def _load_appservice(hostname, as_info, config_filename):
"id", "as_token", "hs_token", "sender_localpart"
]
for field in required_string_fields:
if not isinstance(as_info.get(field), basestring):
if not isinstance(as_info.get(field), string_types):
raise KeyError("Required string field: '%s' (%s)" % (
field, config_filename,
))
# 'url' must either be a string or explicitly null, not missing
# to avoid accidentally turning off push for ASes.
if (not isinstance(as_info.get("url"), basestring) and
if (not isinstance(as_info.get("url"), string_types) and
as_info.get("url", "") is not None):
raise KeyError(
"Required string field or explicit null: 'url' (%s)" % (config_filename,)
@@ -128,7 +130,7 @@ def _load_appservice(hostname, as_info, config_filename):
"Expected namespace entry in %s to be an object,"
" but got %s", ns, regex_obj
)
if not isinstance(regex_obj.get("regex"), basestring):
if not isinstance(regex_obj.get("regex"), string_types):
raise ValueError(
"Missing/bad type 'regex' key in %s", regex_obj
)

View File

@@ -77,7 +77,9 @@ class RegistrationConfig(Config):
# Set the number of bcrypt rounds used to generate password hash.
# Larger numbers increase the work factor needed to generate the hash.
# The default number of rounds is 12.
# The default number is 12 (which equates to 2^12 rounds).
# N.B. that increasing this will exponentially increase the time required
# to register or login - e.g. 24 => 2^24 rounds which will take >20 mins.
bcrypt_rounds: 12
# Allows users to register as guests without a password/email/etc, and
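
The cost really is exponential in the rounds setting, which a quick timing sketch makes concrete (timings are machine-dependent; bcrypt here is the PyPI package):

import timeit
import bcrypt

t12 = timeit.timeit(lambda: bcrypt.hashpw(b"pw", bcrypt.gensalt(rounds=12)), number=3)
t13 = timeit.timeit(lambda: bcrypt.hashpw(b"pw", bcrypt.gensalt(rounds=13)), number=3)
print(t13 / t12)   # expect roughly 2: each extra round doubles the work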

View File

@@ -352,7 +352,7 @@ class Keyring(object):
logger.exception(
"Unable to get key from %r: %s %s",
perspective_name,
type(e).__name__, str(e.message),
type(e).__name__, str(e),
)
defer.returnValue({})
@@ -384,7 +384,7 @@ class Keyring(object):
logger.info(
"Unable to get key %r for %r directly: %s %s",
key_ids, server_name,
type(e).__name__, str(e.message),
type(e).__name__, str(e),
)
if not keys:
@@ -734,7 +734,7 @@ def _handle_key_deferred(verify_request):
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
server_name, type(e).__name__, str(e),
)
raise SynapseError(
502,
@@ -744,7 +744,7 @@ def _handle_key_deferred(verify_request):
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
server_name, type(e).__name__, str(e),
)
raise SynapseError(
401,
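
The underlying py2/py3 issue, in two lines: BaseException.message was deprecated in Python 2.6 and removed in Python 3, while str(e) is safe on both.

try:
    raise IOError("connection reset by peer")
except IOError as e:
    # e.message raises AttributeError on Python 3; str(e) always works
    print("%s %s" % (type(e).__name__, str(e)))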

View File

@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from frozendict import frozendict
@@ -51,7 +52,6 @@ class EventContext(object):
"prev_state_ids",
"state_group",
"rejected",
"push_actions",
"prev_group",
"delta_ids",
"prev_state_events",
@@ -66,7 +66,6 @@ class EventContext(object):
self.state_group = None
self.rejected = False
self.push_actions = []
# A previously persisted state group and a delta between that
# and this state.
@@ -77,19 +76,32 @@ class EventContext(object):
self.app_service = None
def serialize(self):
def serialize(self, event):
"""Converts self to a type that can be serialized as JSON, and then
deserialized by `deserialize`
Args:
event (FrozenEvent): The event that this context relates to
Returns:
dict
"""
# We don't serialize the full state dicts, instead they get pulled out
# of the DB on the other side. However, the other side can't figure out
# the prev_state_ids, so if we're a state event we include the event
# id that we replaced in the state.
if event.is_state():
prev_state_id = self.prev_state_ids.get((event.type, event.state_key))
else:
prev_state_id = None
return {
"current_state_ids": _encode_state_dict(self.current_state_ids),
"prev_state_ids": _encode_state_dict(self.prev_state_ids),
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
"state_group": self.state_group,
"rejected": self.rejected,
"push_actions": self.push_actions,
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
@@ -97,6 +109,7 @@ class EventContext(object):
}
@staticmethod
@defer.inlineCallbacks
def deserialize(store, input):
"""Converts a dict that was produced by `serialize` back into a
EventContext.
@@ -109,20 +122,32 @@ class EventContext(object):
EventContext
"""
context = EventContext()
context.current_state_ids = _decode_state_dict(input["current_state_ids"])
context.prev_state_ids = _decode_state_dict(input["prev_state_ids"])
context.state_group = input["state_group"]
context.rejected = input["rejected"]
context.push_actions = input["push_actions"]
context.prev_group = input["prev_group"]
context.delta_ids = _decode_state_dict(input["delta_ids"])
context.prev_state_events = input["prev_state_events"]
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
prev_state_id = input["prev_state_id"]
event_type = input["event_type"]
event_state_key = input["event_state_key"]
context.current_state_ids = yield store.get_state_ids_for_group(
context.state_group,
)
if prev_state_id and event_state_key:
context.prev_state_ids = dict(context.current_state_ids)
context.prev_state_ids[(event_type, event_state_key)] = prev_state_id
else:
context.prev_state_ids = context.current_state_ids
app_service_id = input["app_service_id"]
if app_service_id:
context.app_service = store.get_app_service_by_id(app_service_id)
return context
defer.returnValue(context)
def _encode_state_dict(state_dict):
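
The prev_state_ids reconstruction above, stripped down to plain dicts (illustrative values, no DB or Deferreds):

# what serialize() records for a state event ...
current_state_ids = {("m.room.name", ""): "$new_name_event"}
prev_state_id = "$old_name_event"            # the entry this event replaced
event_type, event_state_key = "m.room.name", ""

# ... and how deserialize() rebuilds prev_state_ids from it:
prev_state_ids = dict(current_state_ids)
prev_state_ids[(event_type, event_state_key)] = prev_state_id
assert prev_state_ids[("m.room.name", "")] == "$old_name_event"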

View File

@@ -15,11 +15,3 @@
""" This package includes all the federation specific logic.
"""
from .replication import ReplicationLayer
def initialize_http_replication(hs):
transport = hs.get_federation_transport_client()
return ReplicationLayer(hs, transport)

View File

@@ -27,7 +27,13 @@ logger = logging.getLogger(__name__)
class FederationBase(object):
def __init__(self, hs):
self.hs = hs
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.spam_checker = hs.get_spam_checker()
self.store = hs.get_datastore()
self._clock = hs.get_clock()
@defer.inlineCallbacks
def _check_sigs_and_hash_and_fetch(self, origin, pdus, outlier=False,

View File

@@ -58,6 +58,7 @@ class FederationClient(FederationBase):
self._clear_tried_cache, 60 * 1000,
)
self.state = hs.get_state_handler()
self.transport_layer = hs.get_federation_transport_client()
def _clear_tried_cache(self):
"""Clear pdu_destination_tried cache"""
@@ -393,7 +394,7 @@ class FederationClient(FederationBase):
seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
signed_events = seen_events.values()
else:
seen_events = yield self.store.have_events(event_ids)
seen_events = yield self.store.have_seen_events(event_ids)
signed_events = []
failed_to_fetch = set()

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -17,20 +18,23 @@ import logging
import simplejson as json
from twisted.internet import defer
from synapse.api.errors import AuthError, FederationError, SynapseError
from synapse.api.errors import AuthError, FederationError, SynapseError, NotFoundError
from synapse.crypto.event_signing import compute_event_signature
from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
import synapse.metrics
from synapse.types import get_domain_from_id
from synapse.util import async
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
from synapse.util.logutils import log_function
from six import iteritems
# when processing incoming transactions, we try to handle multiple rooms in
# parallel, up to this limit.
TRANSACTION_CONCURRENCY_LIMIT = 10
@@ -52,49 +56,18 @@ class FederationServer(FederationBase):
super(FederationServer, self).__init__(hs)
self.auth = hs.get_auth()
self.handler = hs.get_handlers().federation_handler
self._server_linearizer = async.Linearizer("fed_server")
self._transaction_linearizer = async.Linearizer("fed_txn_handler")
self.transaction_actions = TransactionActions(self.store)
self.registry = hs.get_federation_registry()
# We cache responses to state queries, as they take a while and often
# come in waves.
self._state_resp_cache = ResponseCache(hs, timeout_ms=30000)
def set_handler(self, handler):
"""Sets the handler that the replication layer will use to communicate
receipt of new PDUs from other home servers. The required methods are
documented on :py:class:`.ReplicationHandler`.
"""
self.handler = handler
def register_edu_handler(self, edu_type, handler):
if edu_type in self.edu_handlers:
raise KeyError("Already have an EDU handler for %s" % (edu_type,))
self.edu_handlers[edu_type] = handler
def register_query_handler(self, query_type, handler):
"""Sets the handler callable that will be used to handle an incoming
federation Query of the given type.
Args:
query_type (str): Category name of the query, which should match
the string used by make_query.
handler (callable): Invoked to handle incoming queries of this type
handler is invoked as:
result = handler(args)
where 'args' is a dict mapping strings to strings of the query
arguments. It should return a Deferred that will eventually yield an
object to encode as JSON.
"""
if query_type in self.query_handlers:
raise KeyError(
"Already have a Query handler for %s" % (query_type,)
)
self.query_handlers[query_type] = handler
self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)
@defer.inlineCallbacks
@log_function
@@ -229,16 +202,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def received_edu(self, origin, edu_type, content):
received_edus_counter.inc()
if edu_type in self.edu_handlers:
try:
yield self.edu_handlers[edu_type](origin, content)
except SynapseError as e:
logger.info("Failed to handle edu %r: %r", edu_type, e)
except Exception as e:
logger.exception("Failed to handle edu %r", edu_type)
else:
logger.warn("Received EDU of type %s with no handler", edu_type)
yield self.registry.on_edu(edu_type, origin, content)
@defer.inlineCallbacks
@log_function
@@ -250,16 +214,17 @@ class FederationServer(FederationBase):
if not in_room:
raise AuthError(403, "Host not in room.")
result = self._state_resp_cache.get((room_id, event_id))
if not result:
with (yield self._server_linearizer.queue((origin, room_id))):
d = self._state_resp_cache.set(
(room_id, event_id),
preserve_fn(self._on_context_state_request_compute)(room_id, event_id)
)
resp = yield make_deferred_yieldable(d)
else:
resp = yield make_deferred_yieldable(result)
# we grab the linearizer to protect ourselves from servers which hammer
# us. In theory we might already have the response to this query
# in the cache so we could return it without waiting for the linearizer
# - but that's non-trivial to get right, and anyway somewhat defeats
# the point of the linearizer.
with (yield self._server_linearizer.queue((origin, room_id))):
resp = yield self._state_resp_cache.wrap(
(room_id, event_id),
self._on_context_state_request_compute,
room_id, event_id,
)
defer.returnValue((200, resp))
@@ -328,14 +293,8 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
received_queries_counter.inc(query_type)
if query_type in self.query_handlers:
response = yield self.query_handlers[query_type](args)
defer.returnValue((200, response))
else:
defer.returnValue(
(404, "No handler for Query type '%s'" % (query_type,))
)
resp = yield self.registry.on_query(query_type, args)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_make_join_request(self, room_id, user_id):
@@ -469,9 +428,9 @@ class FederationServer(FederationBase):
"Claimed one-time-keys: %s",
",".join((
"%s for %s:%s" % (key_id, user_id, device_id)
for user_id, user_keys in json_result.iteritems()
for device_id, device_keys in user_keys.iteritems()
for key_id, _ in device_keys.iteritems()
for user_id, user_keys in iteritems(json_result)
for device_id, device_keys in iteritems(user_keys)
for key_id, _ in iteritems(device_keys)
)),
)
@@ -538,13 +497,33 @@ class FederationServer(FederationBase):
def _handle_received_pdu(self, origin, pdu):
""" Process a PDU received in a federation /send/ transaction.
If the event is invalid, then this method throws a FederationError.
(The error will then be logged and sent back to the sender (which
probably won't do anything with it), and other events in the
transaction will be processed as normal).
It is likely that we'll then receive other events which refer to
this rejected_event in their prev_events, etc. When that happens,
we'll attempt to fetch the rejected event again, which will presumably
fail, so those second-generation events will also get rejected.
Eventually, we get to the point where there are more than 10 events
between any new events and the original rejected event. Since we
only try to backfill 10 events deep on received pdu, we then accept the
new event, possibly introducing a discontinuity in the DAG, with new
forward extremities, so normal service is approximately returned,
until we try to backfill across the discontinuity.
Args:
origin (str): server which sent the pdu
pdu (FrozenEvent): received pdu
Returns (Deferred): completes with None
Raises: FederationError if the signatures / hash do not match
"""
Raises: FederationError if the signatures / hash do not match, or
if the event was unacceptable for any other reason (eg, too large,
too many prev_events, couldn't find the prev_events)
"""
# check that it's actually being sent from a valid destination to
# workaround bug #1753 in 0.18.5 and 0.18.6
if origin != get_domain_from_id(pdu.event_id):
@@ -607,3 +586,66 @@ class FederationServer(FederationBase):
origin, room_id, event_dict
)
defer.returnValue(ret)
class FederationHandlerRegistry(object):
"""Allows classes to register themselves as handlers for a given EDU or
query type for incoming federation traffic.
"""
def __init__(self):
self.edu_handlers = {}
self.query_handlers = {}
def register_edu_handler(self, edu_type, handler):
"""Sets the handler callable that will be used to handle an incoming
federation EDU of the given type.
Args:
edu_type (str): The type of the incoming EDU to register handler for
handler (Callable[[str, dict]]): A callable invoked on incoming EDU
of the given type. The arguments are the origin server name and
the EDU contents.
"""
if edu_type in self.edu_handlers:
raise KeyError("Already have an EDU handler for %s" % (edu_type,))
self.edu_handlers[edu_type] = handler
def register_query_handler(self, query_type, handler):
"""Sets the handler callable that will be used to handle an incoming
federation query of the given type.
Args:
query_type (str): Category name of the query, which should match
the string used by make_query.
handler (Callable[[dict], Deferred[dict]]): Invoked to handle
incoming queries of this type. The return will be yielded
on and the result used as the response to the query request.
"""
if query_type in self.query_handlers:
raise KeyError(
"Already have a Query handler for %s" % (query_type,)
)
self.query_handlers[query_type] = handler
@defer.inlineCallbacks
def on_edu(self, edu_type, origin, content):
handler = self.edu_handlers.get(edu_type)
if not handler:
logger.warn("No handler registered for EDU type %s", edu_type)
try:
yield handler(origin, content)
except SynapseError as e:
logger.info("Failed to handle edu %r: %r", edu_type, e)
except Exception as e:
logger.exception("Failed to handle edu %r", edu_type)
def on_query(self, query_type, args):
handler = self.query_handlers.get(query_type)
if not handler:
logger.warn("No handler registered for query type %s", query_type)
raise NotFoundError("No handler for Query type '%s'" % (query_type,))
return handler(args)
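
A hedged usage sketch of the registry (the handler body is invented):

from twisted.internet import defer

registry = FederationHandlerRegistry()

def on_profile_query(args):
    # per the docstring above, must return a Deferred yielding a dict
    return defer.succeed({"displayname": args.get("user_id")})

registry.register_query_handler("profile", on_profile_query)
# registering the same type twice raises KeyError; an unregistered
# query type makes on_query raise NotFoundError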

View File

@@ -1,73 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This layer is responsible for replicating with remote home servers using
a given transport.
"""
from .federation_client import FederationClient
from .federation_server import FederationServer
from .persistence import TransactionActions
import logging
logger = logging.getLogger(__name__)
class ReplicationLayer(FederationClient, FederationServer):
"""This layer is responsible for replicating with remote home servers over
the given transport. I.e., does the sending and receiving of PDUs to
remote home servers.
The layer communicates with the rest of the server via a registered
ReplicationHandler.
In more detail, the layer:
* Receives incoming data and processes it into transactions and pdus.
* Fetches any PDUs it thinks it might have missed.
* Keeps the current state for contexts up to date by applying the
suitable conflict resolution.
* Sends outgoing pdus wrapped in transactions.
* Fills out the references to previous pdus/transactions appropriately
for outgoing data.
"""
def __init__(self, hs, transport_layer):
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.transport_layer = transport_layer
self.federation_client = self
self.store = hs.get_datastore()
self.handler = None
self.edu_handlers = {}
self.query_handlers = {}
self._clock = hs.get_clock()
self.transaction_actions = TransactionActions(self.store)
self.hs = hs
super(ReplicationLayer, self).__init__(hs)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name

View File

@@ -40,6 +40,8 @@ from collections import namedtuple
import logging
from six import itervalues, iteritems
logger = logging.getLogger(__name__)
@@ -122,7 +124,7 @@ class FederationRemoteSendQueue(object):
user_ids = set(
user_id
for uids in self.presence_changed.itervalues()
for uids in itervalues(self.presence_changed)
for user_id in uids
)
@@ -276,7 +278,7 @@ class FederationRemoteSendQueue(object):
# stream position.
keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]}
for ((destination, edu_key), pos) in keyed_edus.iteritems():
for ((destination, edu_key), pos) in iteritems(keyed_edus):
rows.append((pos, KeyedEduRow(
key=edu_key,
edu=self.keyed_edu[(destination, edu_key)],
@@ -309,7 +311,7 @@ class FederationRemoteSendQueue(object):
j = keys.bisect_right(to_token) + 1
device_messages = {self.device_messages[k]: k for k in keys[i:j]}
for (destination, pos) in device_messages.iteritems():
for (destination, pos) in iteritems(device_messages):
rows.append((pos, DeviceRow(
destination=destination,
)))
@@ -528,19 +530,19 @@ def process_rows_for_federation(transaction_queue, rows):
if buff.presence:
transaction_queue.send_presence(buff.presence)
for destination, edu_map in buff.keyed_edus.iteritems():
for destination, edu_map in iteritems(buff.keyed_edus):
for key, edu in edu_map.items():
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=key,
)
for destination, edu_list in buff.edus.iteritems():
for destination, edu_list in iteritems(buff.edus):
for edu in edu_list:
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=None,
)
for destination, failure_list in buff.failures.iteritems():
for destination, failure_list in iteritems(buff.failures):
for failure in failure_list:
transaction_queue.send_failure(destination, failure)

View File

@@ -169,7 +169,7 @@ class TransactionQueue(object):
while True:
last_token = yield self.store.get_federation_out_pos("events")
next_token, events = yield self.store.get_all_new_events_stream(
last_token, self._last_poked_id, limit=20,
last_token, self._last_poked_id, limit=100,
)
logger.debug("Handling %s -> %s", last_token, next_token)
@@ -177,24 +177,33 @@ class TransactionQueue(object):
if not events and next_token >= self._last_poked_id:
break
for event in events:
@defer.inlineCallbacks
def handle_event(event):
# Only send events for this server.
send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of()
is_mine = self.is_mine_id(event.event_id)
if not is_mine and send_on_behalf_of is None:
continue
return
try:
# Get the state from before the event.
# We need to make sure that this is the state from before
# the event and not from after it.
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
destinations = yield self.state.get_current_hosts_in_room(
event.room_id, latest_event_ids=[
prev_id for prev_id, _ in event.prev_events
],
)
except Exception:
logger.exception(
"Failed to calculate hosts in room for event: %s",
event.event_id,
)
return
# Get the state from before the event.
# We need to make sure that this is the state from before
# the event and not from after it.
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
destinations = yield self.state.get_current_hosts_in_room(
event.room_id, latest_event_ids=[
prev_id for prev_id, _ in event.prev_events
],
)
destinations = set(destinations)
if send_on_behalf_of is not None:
@@ -207,12 +216,44 @@ class TransactionQueue(object):
self._send_pdu(event, destinations)
events_processed_counter.inc_by(len(events))
@defer.inlineCallbacks
def handle_room_events(events):
for event in events:
yield handle_event(event)
events_by_room = {}
for event in events:
events_by_room.setdefault(event.room_id, []).append(event)
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
logcontext.run_in_background(handle_room_events, evs)
for evs in events_by_room.itervalues()
],
consumeErrors=True
))
yield self.store.update_federation_out_pos(
"events", next_token
)
if events:
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_lag.set(
now - ts, "federation_sender",
)
synapse.metrics.event_processing_last_ts.set(
ts, "federation_sender",
)
events_processed_counter.inc_by(len(events))
synapse.metrics.event_processing_positions.set(
next_token, "federation_sender",
)
finally:
self._is_processing = False
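
The concurrency pattern introduced above, distilled (Synapse's logcontext bookkeeping omitted): rooms run in parallel, while events within a room stay strictly ordered.

from twisted.internet import defer

@defer.inlineCallbacks
def process_grouped(events, handle_event):
    by_room = {}
    for event in events:
        by_room.setdefault(event.room_id, []).append(event)

    @defer.inlineCallbacks
    def run(evs):
        for ev in evs:               # sequential: preserves per-room order
            yield handle_event(ev)

    # rooms are handled concurrently; consumeErrors stops one failure
    # from leaving the others' errors unhandled
    yield defer.gatherResults([run(evs) for evs in by_room.values()],
                              consumeErrors=True)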

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -20,6 +21,7 @@ from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.util.logutils import log_function
import logging
import urllib
logger = logging.getLogger(__name__)
@@ -49,7 +51,7 @@ class TransportLayerClient(object):
logger.debug("get_room_state dest=%s, room=%s",
destination, room_id)
path = PREFIX + "/state/%s/" % room_id
path = _create_path(PREFIX, "/state/%s/", room_id)
return self.client.get_json(
destination, path=path, args={"event_id": event_id},
)
@@ -71,7 +73,7 @@ class TransportLayerClient(object):
logger.debug("get_room_state_ids dest=%s, room=%s",
destination, room_id)
path = PREFIX + "/state_ids/%s/" % room_id
path = _create_path(PREFIX, "/state_ids/%s/", room_id)
return self.client.get_json(
destination, path=path, args={"event_id": event_id},
)
@@ -93,7 +95,7 @@ class TransportLayerClient(object):
logger.debug("get_pdu dest=%s, event_id=%s",
destination, event_id)
path = PREFIX + "/event/%s/" % (event_id, )
path = _create_path(PREFIX, "/event/%s/", event_id)
return self.client.get_json(destination, path=path, timeout=timeout)
@log_function
@@ -119,7 +121,7 @@ class TransportLayerClient(object):
# TODO: raise?
return
path = PREFIX + "/backfill/%s/" % (room_id,)
path = _create_path(PREFIX, "/backfill/%s/", room_id)
args = {
"v": event_tuples,
@@ -157,9 +159,11 @@ class TransportLayerClient(object):
# generated by the json_data_callback.
json_data = transaction.get_dict()
path = _create_path(PREFIX, "/send/%s/", transaction.transaction_id)
response = yield self.client.put_json(
transaction.destination,
path=PREFIX + "/send/%s/" % transaction.transaction_id,
path=path,
data=json_data,
json_data_callback=json_data_callback,
long_retries=True,
@@ -177,7 +181,7 @@ class TransportLayerClient(object):
@log_function
def make_query(self, destination, query_type, args, retry_on_dns_fail,
ignore_backoff=False):
path = PREFIX + "/query/%s" % query_type
path = _create_path(PREFIX, "/query/%s", query_type)
content = yield self.client.get_json(
destination=destination,
@@ -222,7 +226,7 @@ class TransportLayerClient(object):
"make_membership_event called with membership='%s', must be one of %s" %
(membership, ",".join(valid_memberships))
)
path = PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id)
path = _create_path(PREFIX, "/make_%s/%s/%s", membership, room_id, user_id)
ignore_backoff = False
retry_on_dns_fail = False
@@ -248,7 +252,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_join(self, destination, room_id, event_id, content):
path = PREFIX + "/send_join/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/send_join/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
@@ -261,7 +265,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_leave(self, destination, room_id, event_id, content):
path = PREFIX + "/send_leave/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/send_leave/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
@@ -280,7 +284,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_invite(self, destination, room_id, event_id, content):
path = PREFIX + "/invite/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/invite/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
@@ -322,7 +326,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def exchange_third_party_invite(self, destination, room_id, event_dict):
path = PREFIX + "/exchange_third_party_invite/%s" % (room_id,)
path = _create_path(PREFIX, "/exchange_third_party_invite/%s", room_id,)
response = yield self.client.put_json(
destination=destination,
@@ -335,7 +339,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def get_event_auth(self, destination, room_id, event_id):
path = PREFIX + "/event_auth/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/event_auth/%s/%s", room_id, event_id)
content = yield self.client.get_json(
destination=destination,
@@ -347,7 +351,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_query_auth(self, destination, room_id, event_id, content):
path = PREFIX + "/query_auth/%s/%s" % (room_id, event_id)
path = _create_path(PREFIX, "/query_auth/%s/%s", room_id, event_id)
content = yield self.client.post_json(
destination=destination,
@@ -409,7 +413,7 @@ class TransportLayerClient(object):
Returns:
A dict containing the device keys.
"""
path = PREFIX + "/user/devices/" + user_id
path = _create_path(PREFIX, "/user/devices/%s", user_id)
content = yield self.client.get_json(
destination=destination,
@@ -459,7 +463,7 @@ class TransportLayerClient(object):
@log_function
def get_missing_events(self, destination, room_id, earliest_events,
latest_events, limit, min_depth, timeout):
path = PREFIX + "/get_missing_events/%s" % (room_id,)
path = _create_path(PREFIX, "/get_missing_events/%s", room_id,)
content = yield self.client.post_json(
destination=destination,
@@ -479,7 +483,7 @@ class TransportLayerClient(object):
def get_group_profile(self, destination, group_id, requester_user_id):
"""Get a group profile
"""
path = PREFIX + "/groups/%s/profile" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/profile", group_id,)
return self.client.get_json(
destination=destination,
@@ -498,7 +502,7 @@ class TransportLayerClient(object):
requester_user_id (str)
content (dict): The new profile of the group
"""
path = PREFIX + "/groups/%s/profile" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/profile", group_id,)
return self.client.post_json(
destination=destination,
@@ -512,7 +516,7 @@ class TransportLayerClient(object):
def get_group_summary(self, destination, group_id, requester_user_id):
"""Get a group summary
"""
path = PREFIX + "/groups/%s/summary" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/summary", group_id,)
return self.client.get_json(
destination=destination,
@@ -525,7 +529,7 @@ class TransportLayerClient(object):
def get_rooms_in_group(self, destination, group_id, requester_user_id):
"""Get all rooms in a group
"""
path = PREFIX + "/groups/%s/rooms" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/rooms", group_id,)
return self.client.get_json(
destination=destination,
@@ -538,7 +542,7 @@ class TransportLayerClient(object):
content):
"""Add a room to a group
"""
path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,)
path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)
return self.client.post_json(
destination=destination,
@@ -552,7 +556,10 @@ class TransportLayerClient(object):
config_key, content):
"""Update room in group
"""
path = PREFIX + "/groups/%s/room/%s/config/%s" % (group_id, room_id, config_key,)
path = _create_path(
PREFIX, "/groups/%s/room/%s/config/%s",
group_id, room_id, config_key,
)
return self.client.post_json(
destination=destination,
@@ -565,7 +572,7 @@ class TransportLayerClient(object):
def remove_room_from_group(self, destination, group_id, requester_user_id, room_id):
"""Remove a room from a group
"""
path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,)
path = _create_path(PREFIX, "/groups/%s/room/%s", group_id, room_id,)
return self.client.delete_json(
destination=destination,
@@ -578,7 +585,7 @@ class TransportLayerClient(object):
def get_users_in_group(self, destination, group_id, requester_user_id):
"""Get users in a group
"""
path = PREFIX + "/groups/%s/users" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/users", group_id,)
return self.client.get_json(
destination=destination,
@@ -591,7 +598,7 @@ class TransportLayerClient(object):
def get_invited_users_in_group(self, destination, group_id, requester_user_id):
"""Get users that have been invited to a group
"""
path = PREFIX + "/groups/%s/invited_users" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/invited_users", group_id,)
return self.client.get_json(
destination=destination,
@@ -604,7 +611,23 @@ class TransportLayerClient(object):
def accept_group_invite(self, destination, group_id, user_id, content):
"""Accept a group invite
"""
path = PREFIX + "/groups/%s/users/%s/accept_invite" % (group_id, user_id)
path = _create_path(
PREFIX, "/groups/%s/users/%s/accept_invite",
group_id, user_id,
)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def join_group(self, destination, group_id, user_id, content):
"""Attempts to join a group
"""
path = _create_path(PREFIX, "/groups/%s/users/%s/join", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -617,7 +640,7 @@ class TransportLayerClient(object):
def invite_to_group(self, destination, group_id, user_id, requester_user_id, content):
"""Invite a user to a group
"""
path = PREFIX + "/groups/%s/users/%s/invite" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/%s/users/%s/invite", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -633,7 +656,7 @@ class TransportLayerClient(object):
invited.
"""
path = PREFIX + "/groups/local/%s/users/%s/invite" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/local/%s/users/%s/invite", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -647,7 +670,7 @@ class TransportLayerClient(object):
user_id, content):
"""Remove a user fron a group
"""
path = PREFIX + "/groups/%s/users/%s/remove" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/%s/users/%s/remove", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -664,7 +687,7 @@ class TransportLayerClient(object):
kicked from the group.
"""
path = PREFIX + "/groups/local/%s/users/%s/remove" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/local/%s/users/%s/remove", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -679,7 +702,7 @@ class TransportLayerClient(object):
the attestations
"""
path = PREFIX + "/groups/%s/renew_attestation/%s" % (group_id, user_id)
path = _create_path(PREFIX, "/groups/%s/renew_attestation/%s", group_id, user_id)
return self.client.post_json(
destination=destination,
@@ -694,11 +717,12 @@ class TransportLayerClient(object):
"""Update a room entry in a group summary
"""
if category_id:
path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % (
path = _create_path(
PREFIX, "/groups/%s/summary/categories/%s/rooms/%s",
group_id, category_id, room_id,
)
else:
path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,)
path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)
return self.client.post_json(
destination=destination,
@@ -714,11 +738,12 @@ class TransportLayerClient(object):
"""Delete a room entry in a group summary
"""
if category_id:
path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % (
path = _create_path(
PREFIX + "/groups/%s/summary/categories/%s/rooms/%s",
group_id, category_id, room_id,
)
else:
path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,)
path = _create_path(PREFIX, "/groups/%s/summary/rooms/%s", group_id, room_id,)
return self.client.delete_json(
destination=destination,
@@ -731,7 +756,7 @@ class TransportLayerClient(object):
def get_group_categories(self, destination, group_id, requester_user_id):
"""Get all categories in a group
"""
path = PREFIX + "/groups/%s/categories" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/categories", group_id,)
return self.client.get_json(
destination=destination,
@@ -744,7 +769,7 @@ class TransportLayerClient(object):
def get_group_category(self, destination, group_id, requester_user_id, category_id):
"""Get category info in a group
"""
path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)
return self.client.get_json(
destination=destination,
@@ -758,7 +783,7 @@ class TransportLayerClient(object):
content):
"""Update a category in a group
"""
path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)
return self.client.post_json(
destination=destination,
@@ -773,7 +798,7 @@ class TransportLayerClient(object):
category_id):
"""Delete a category in a group
"""
path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
path = _create_path(PREFIX, "/groups/%s/categories/%s", group_id, category_id,)
return self.client.delete_json(
destination=destination,
@@ -786,7 +811,7 @@ class TransportLayerClient(object):
def get_group_roles(self, destination, group_id, requester_user_id):
"""Get all roles in a group
"""
path = PREFIX + "/groups/%s/roles" % (group_id,)
path = _create_path(PREFIX, "/groups/%s/roles", group_id,)
return self.client.get_json(
destination=destination,
@@ -799,7 +824,7 @@ class TransportLayerClient(object):
def get_group_role(self, destination, group_id, requester_user_id, role_id):
"""Get a roles info
"""
path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)
return self.client.get_json(
destination=destination,
@@ -813,7 +838,7 @@ class TransportLayerClient(object):
content):
"""Update a role in a group
"""
path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)
return self.client.post_json(
destination=destination,
@@ -827,7 +852,7 @@ class TransportLayerClient(object):
def delete_group_role(self, destination, group_id, requester_user_id, role_id):
"""Delete a role in a group
"""
path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
path = _create_path(PREFIX, "/groups/%s/roles/%s", group_id, role_id,)
return self.client.delete_json(
destination=destination,
@@ -842,11 +867,12 @@ class TransportLayerClient(object):
"""Update a users entry in a group
"""
if role_id:
path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
path = _create_path(
PREFIX, "/groups/%s/summary/roles/%s/users/%s",
group_id, role_id, user_id,
)
else:
path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)
return self.client.post_json(
destination=destination,
@@ -856,17 +882,33 @@ class TransportLayerClient(object):
ignore_backoff=True,
)
@log_function
def set_group_join_policy(self, destination, group_id, requester_user_id,
content):
"""Sets the join policy for a group
"""
path = _create_path(PREFIX, "/groups/%s/settings/m.join_policy", group_id,)
return self.client.put_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def delete_group_summary_user(self, destination, group_id, requester_user_id,
user_id, role_id):
"""Delete a users entry in a group
"""
if role_id:
path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
path = _create_path(
PREFIX, "/groups/%s/summary/roles/%s/users/%s",
group_id, role_id, user_id,
)
else:
path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
path = _create_path(PREFIX, "/groups/%s/summary/users/%s", group_id, user_id,)
return self.client.delete_json(
destination=destination,
@@ -889,3 +931,22 @@ class TransportLayerClient(object):
data=content,
ignore_backoff=True,
)
def _create_path(prefix, path, *args):
"""Creates a path from the prefix, path template and args. Ensures that
all args are url encoded.
Example:
_create_path(PREFIX, "/event/%s/", event_id)
Args:
prefix (str)
path (str): String template for the path
args ([str]): Args to insert into path. Each arg will be url encoded
Returns:
str
"""
return prefix + path % tuple(urllib.quote(arg, "") for arg in args)
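
A quick illustration of what the quoting buys (urllib.quote on Python 2; urllib.parse.quote is the Python 3 spelling):

# an event_id containing '/' can no longer escape its path segment:
_create_path(PREFIX, "/event/%s/", "$ev/../../admin")
# -> PREFIX + "/event/%24ev%2F..%2F..%2Fadmin/"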

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -93,12 +94,6 @@ class Authenticator(object):
"signatures": {},
}
if (
self.federation_domain_whitelist is not None and
self.server_name not in self.federation_domain_whitelist
):
raise FederationDeniedError(self.server_name)
if content is not None:
json_request["content"] = content
@@ -137,6 +132,12 @@ class Authenticator(object):
json_request["origin"] = origin
json_request["signatures"].setdefault(origin, {})[key] = sig
if (
self.federation_domain_whitelist is not None and
origin not in self.federation_domain_whitelist
):
raise FederationDeniedError(origin)
if not json_request["signatures"]:
raise NoAuthenticationError(
401, "Missing Authorization headers", Codes.UNAUTHORIZED,
@@ -802,6 +803,23 @@ class FederationGroupsAcceptInviteServlet(BaseFederationServlet):
defer.returnValue((200, new_content))
class FederationGroupsJoinServlet(BaseFederationServlet):
"""Attempt to join a group
"""
PATH = "/groups/(?P<group_id>[^/]*)/users/(?P<user_id>[^/]*)/join$"
@defer.inlineCallbacks
def on_POST(self, origin, content, query, group_id, user_id):
if get_domain_from_id(user_id) != origin:
raise SynapseError(403, "user_id doesn't match origin")
new_content = yield self.handler.join_group(
group_id, user_id, content,
)
defer.returnValue((200, new_content))
class FederationGroupsRemoveUserServlet(BaseFederationServlet):
"""Leave or kick a user from the group
"""
@@ -1124,6 +1142,24 @@ class FederationGroupsBulkPublicisedServlet(BaseFederationServlet):
defer.returnValue((200, resp))
class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
"""Sets whether a group is joinable without an invite or knock
"""
PATH = "/groups/(?P<group_id>[^/]*)/settings/m.join_policy$"
@defer.inlineCallbacks
def on_PUT(self, origin, content, query, group_id):
requester_user_id = parse_string_from_args(query, "requester_user_id")
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
new_content = yield self.handler.set_group_join_policy(
group_id, requester_user_id, content
)
defer.returnValue((200, new_content))
FEDERATION_SERVLET_CLASSES = (
FederationSendServlet,
FederationPullServlet,
@@ -1163,6 +1199,7 @@ GROUP_SERVER_SERVLET_CLASSES = (
FederationGroupsInvitedUsersServlet,
FederationGroupsInviteServlet,
FederationGroupsAcceptInviteServlet,
FederationGroupsJoinServlet,
FederationGroupsRemoveUserServlet,
FederationGroupsSummaryRoomsServlet,
FederationGroupsCategoriesServlet,
@@ -1172,6 +1209,7 @@ GROUP_SERVER_SERVLET_CLASSES = (
FederationGroupsSummaryUsersServlet,
FederationGroupsAddRoomsServlet,
FederationGroupsAddRoomsConfigServlet,
FederationGroupsSettingJoinPolicyServlet,
)
@@ -1190,7 +1228,7 @@ GROUP_ATTESTATION_SERVLET_CLASSES = (
def register_servlets(hs, resource, authenticator, ratelimiter):
for servletclass in FEDERATION_SERVLET_CLASSES:
servletclass(
handler=hs.get_replication_layer(),
handler=hs.get_federation_server(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -205,6 +206,28 @@ class GroupsServerHandler(object):
defer.returnValue({})
@defer.inlineCallbacks
def set_group_join_policy(self, group_id, requester_user_id, content):
"""Sets the group join policy.
Currently supported policies are:
- "invite": an invite must be received and accepted in order to join.
- "open": anyone can join.
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
join_policy = _parse_join_policy_from_contents(content)
if join_policy is None:
raise SynapseError(
400, "No value specified for 'm.join_policy'"
)
yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
defer.returnValue({})
@defer.inlineCallbacks
def get_group_categories(self, group_id, requester_user_id):
"""Get all categories in a group (as seen by user)
@@ -381,9 +404,16 @@ class GroupsServerHandler(object):
yield self.check_group_is_ours(group_id, requester_user_id)
group_description = yield self.store.get_group(group_id)
group = yield self.store.get_group(group_id)
if group:
cols = [
"name", "short_description", "long_description",
"avatar_url", "is_public",
]
group_description = {key: group[key] for key in cols}
group_description["is_openly_joinable"] = group["join_policy"] == "open"
if group_description:
defer.returnValue(group_description)
else:
raise SynapseError(404, "Unknown group")
@@ -654,6 +684,40 @@ class GroupsServerHandler(object):
else:
raise SynapseError(502, "Unknown state returned by HS")
@defer.inlineCallbacks
def _add_user(self, group_id, user_id, content):
"""Add a user to a group based on a content dict.
See accept_invite, join_group.
"""
if not self.hs.is_mine_id(user_id):
local_attestation = self.attestations.create_attestation(
group_id, user_id,
)
remote_attestation = content["attestation"]
yield self.attestations.verify_attestation(
remote_attestation,
user_id=user_id,
group_id=group_id,
)
else:
local_attestation = None
remote_attestation = None
is_public = _parse_visibility_from_contents(content)
yield self.store.add_user_to_group(
group_id, user_id,
is_admin=False,
is_public=is_public,
local_attestation=local_attestation,
remote_attestation=remote_attestation,
)
defer.returnValue(local_attestation)
@defer.inlineCallbacks
def accept_invite(self, group_id, requester_user_id, content):
"""User tries to accept an invite to the group.
@@ -670,30 +734,27 @@ class GroupsServerHandler(object):
if not is_invited:
raise SynapseError(403, "User not invited to group")
if not self.hs.is_mine_id(requester_user_id):
local_attestation = self.attestations.create_attestation(
group_id, requester_user_id,
)
remote_attestation = content["attestation"]
local_attestation = yield self._add_user(group_id, requester_user_id, content)
yield self.attestations.verify_attestation(
remote_attestation,
user_id=requester_user_id,
group_id=group_id,
)
else:
local_attestation = None
remote_attestation = None
defer.returnValue({
"state": "join",
"attestation": local_attestation,
})
is_public = _parse_visibility_from_contents(content)
@defer.inlineCallbacks
def join_group(self, group_id, requester_user_id, content):
"""User tries to join the group.
yield self.store.add_user_to_group(
group_id, requester_user_id,
is_admin=False,
is_public=is_public,
local_attestation=local_attestation,
remote_attestation=remote_attestation,
This will error if the group requires an invite/knock to join
"""
group_info = yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True
)
if group_info['join_policy'] != "open":
raise SynapseError(403, "Group is not publicly joinable")
local_attestation = yield self._add_user(group_id, requester_user_id, content)
defer.returnValue({
"state": "join",
@@ -835,6 +896,31 @@ class GroupsServerHandler(object):
})
def _parse_join_policy_from_contents(content):
"""Given a content for a request, return the specified join policy or None
"""
join_policy_dict = content.get("m.join_policy")
if join_policy_dict:
return _parse_join_policy_dict(join_policy_dict)
else:
return None
def _parse_join_policy_dict(join_policy_dict):
"""Given a dict for the "m.join_policy" config return the join policy specified
"""
join_policy_type = join_policy_dict.get("type")
if not join_policy_type:
return "invite"
if join_policy_type not in ("invite", "open"):
raise SynapseError(
400, "Synapse only supports 'invite'/'open' join rule"
)
return join_policy_type
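
Quick check of the parsing rules just defined:

assert _parse_join_policy_dict({}) == "invite"               # default
assert _parse_join_policy_dict({"type": "open"}) == "open"
# _parse_join_policy_dict({"type": "knock"}) raises a 400 SynapseError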
def _parse_visibility_from_contents(content):
"""Given a content for a request parse out whether the entity should be
public or not

View File

@@ -17,7 +17,6 @@ from .register import RegistrationHandler
from .room import (
RoomCreationHandler, RoomContextHandler,
)
from .room_member import RoomMemberHandler
from .message import MessageHandler
from .federation import FederationHandler
from .directory import DirectoryHandler
@@ -49,7 +48,6 @@ class Handlers(object):
self.registration_handler = RegistrationHandler(hs)
self.message_handler = MessageHandler(hs)
self.room_creation_handler = RoomCreationHandler(hs)
self.room_member_handler = RoomMemberHandler(hs)
self.federation_handler = FederationHandler(hs)
self.directory_handler = DirectoryHandler(hs)
self.admin_handler = AdminHandler(hs)

View File

@@ -158,7 +158,7 @@ class BaseHandler(object):
# homeserver.
requester = synapse.types.create_requester(
target_user, is_guest=True)
handler = self.hs.get_handlers().room_member_handler
handler = self.hs.get_room_member_handler()
yield handler.update_membership(
requester,
target_user,

View File

@@ -18,7 +18,9 @@ from twisted.internet import defer
import synapse
from synapse.api.constants import EventTypes
from synapse.util.metrics import Measure
from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
from synapse.util.logcontext import (
make_deferred_yieldable, preserve_fn, run_in_background,
)
import logging
@@ -84,11 +86,16 @@ class ApplicationServicesHandler(object):
if not events:
break
events_by_room = {}
for event in events:
events_by_room.setdefault(event.room_id, []).append(event)
@defer.inlineCallbacks
def handle_event(event):
# Gather interested services
services = yield self._get_services_for_event(event)
if len(services) == 0:
continue # no services need notifying
return # no services need notifying
# Do we know this user exists? If not, poke the user
# query API for all services which match that user regex.
@@ -108,9 +115,33 @@ class ApplicationServicesHandler(object):
service, event
)
events_processed_counter.inc_by(len(events))
@defer.inlineCallbacks
def handle_room_events(events):
for event in events:
yield handle_event(event)
yield make_deferred_yieldable(defer.gatherResults([
run_in_background(handle_room_events, evs)
for evs in events_by_room.itervalues()
], consumeErrors=True))
yield self.store.set_appservice_last_pos(upper_bound)
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_positions.set(
upper_bound, "appservice_sender",
)
events_processed_counter.inc_by(len(events))
synapse.metrics.event_processing_lag.set(
now - ts, "appservice_sender",
)
synapse.metrics.event_processing_last_ts.set(
ts, "appservice_sender",
)
finally:
self.is_processing = False

View File

@@ -863,8 +863,10 @@ class AuthHandler(BaseHandler):
"""
def _do_validate_hash():
return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
stored_hash.encode('utf8')) == stored_hash
return bcrypt.checkpw(
password.encode('utf8') + self.hs.config.password_pepper,
stored_hash.encode('utf8')
)
if stored_hash:
return make_deferred_yieldable(threads.deferToThread(_do_validate_hash))
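
The difference in miniature (bcrypt is the PyPI package; the pepper value is invented):

import bcrypt

pepper = b"example-pepper"                  # hypothetical config value
stored = bcrypt.hashpw(b"hunter2" + pepper, bcrypt.gensalt(rounds=12))

# old approach: re-hash and compare with ==; not constant-time, and it
# relies on hashpw() echoing the stored hash's salt/cost parameters
assert bcrypt.hashpw(b"hunter2" + pepper, stored) == stored

# new approach: checkpw parses salt and cost out of the stored hash and
# compares in constant time
assert bcrypt.checkpw(b"hunter2" + pepper, stored)
assert not bcrypt.checkpw(b"wrong" + pepper, stored)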

View File

@@ -37,14 +37,15 @@ class DeviceHandler(BaseHandler):
self.state = hs.get_state_handler()
self._auth_handler = hs.get_auth_handler()
self.federation_sender = hs.get_federation_sender()
self.federation = hs.get_replication_layer()
self._edu_updater = DeviceListEduUpdater(hs, self)
self.federation.register_edu_handler(
federation_registry = hs.get_federation_registry()
federation_registry.register_edu_handler(
"m.device_list_update", self._edu_updater.incoming_device_list_update,
)
self.federation.register_query_handler(
federation_registry.register_query_handler(
"user_devices", self.on_federation_query_user_devices,
)
@@ -154,7 +155,7 @@ class DeviceHandler(BaseHandler):
try:
yield self.store.delete_device(user_id, device_id)
except errors.StoreError, e:
except errors.StoreError as e:
if e.code == 404:
# no match
pass
@@ -203,7 +204,7 @@ class DeviceHandler(BaseHandler):
try:
yield self.store.delete_devices(user_id, device_ids)
except errors.StoreError, e:
except errors.StoreError as e:
if e.code == 404:
# no match
pass
@@ -242,7 +243,7 @@ class DeviceHandler(BaseHandler):
new_display_name=content.get("display_name")
)
yield self.notify_device_update(user_id, [device_id])
except errors.StoreError, e:
except errors.StoreError as e:
if e.code == 404:
raise errors.NotFoundError()
else:
@@ -430,7 +431,7 @@ class DeviceListEduUpdater(object):
def __init__(self, hs, device_handler):
self.store = hs.get_datastore()
self.federation = hs.get_replication_layer()
self.federation = hs.get_federation_client()
self.clock = hs.get_clock()
self.device_handler = device_handler

View File

@@ -37,7 +37,7 @@ class DeviceMessageHandler(object):
self.is_mine = hs.is_mine
self.federation = hs.get_federation_sender()
hs.get_replication_layer().register_edu_handler(
hs.get_federation_registry().register_edu_handler(
"m.direct_to_device", self.on_direct_to_device_edu
)

View File

@@ -36,8 +36,8 @@ class DirectoryHandler(BaseHandler):
self.appservice_handler = hs.get_application_service_handler()
self.event_creation_handler = hs.get_event_creation_handler()
self.federation = hs.get_replication_layer()
self.federation.register_query_handler(
self.federation = hs.get_federation_client()
hs.get_federation_registry().register_query_handler(
"directory", self.on_directory_query
)

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import ujson as json
import simplejson as json
import logging
from canonicaljson import encode_canonical_json
@@ -32,7 +33,7 @@ logger = logging.getLogger(__name__)
class E2eKeysHandler(object):
def __init__(self, hs):
self.store = hs.get_datastore()
self.federation = hs.get_replication_layer()
self.federation = hs.get_federation_client()
self.device_handler = hs.get_device_handler()
self.is_mine = hs.is_mine
self.clock = hs.get_clock()
@@ -40,7 +41,7 @@ class E2eKeysHandler(object):
# doesn't really work as part of the generic query API, because the
# query request requires an object POST, but we abuse the
# "query handler" interface.
self.federation.register_query_handler(
hs.get_federation_registry().register_query_handler(
"client_keys", self.on_federation_query_client_keys
)
@@ -134,23 +135,8 @@ class E2eKeysHandler(object):
if user_id in destination_query:
results[user_id] = keys
except CodeMessageException as e:
failures[destination] = {
"status": e.code, "message": e.message
}
except NotRetryingDestination as e:
failures[destination] = {
"status": 503, "message": "Not ready for retry",
}
except FederationDeniedError as e:
failures[destination] = {
"status": 403, "message": "Federation Denied",
}
except Exception as e:
# include ConnectionRefused and other errors
failures[destination] = {
"status": 503, "message": e.message
}
failures[destination] = _exception_to_failure(e)
yield make_deferred_yieldable(defer.gatherResults([
preserve_fn(do_remote_query)(destination)
@@ -252,19 +238,8 @@ class E2eKeysHandler(object):
for user_id, keys in remote_result["one_time_keys"].items():
if user_id in device_keys:
json_result[user_id] = keys
except CodeMessageException as e:
failures[destination] = {
"status": e.code, "message": e.message
}
except NotRetryingDestination as e:
failures[destination] = {
"status": 503, "message": "Not ready for retry",
}
except Exception as e:
# include ConnectionRefused and other errors
failures[destination] = {
"status": 503, "message": e.message
}
failures[destination] = _exception_to_failure(e)
yield make_deferred_yieldable(defer.gatherResults([
preserve_fn(claim_client_keys)(destination)
@@ -362,6 +337,31 @@ class E2eKeysHandler(object):
)
def _exception_to_failure(e):
if isinstance(e, CodeMessageException):
return {
"status": e.code, "message": e.message,
}
if isinstance(e, NotRetryingDestination):
return {
"status": 503, "message": "Not ready for retry",
}
if isinstance(e, FederationDeniedError):
return {
"status": 403, "message": "Federation Denied",
}
# include ConnectionRefused and other errors
#
# Note that some Exceptions (notably twisted's ResponseFailed etc) don't
# give a string for e.message, which simplejson then fails to serialize.
return {
"status": 503, "message": str(e.message),
}
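
For instance, the generic fallback branch in use (message value invented):

try:
    raise RuntimeError("connection refused")
except Exception as e:
    failure = _exception_to_failure(e)
# failure == {"status": 503, "message": "connection refused"}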
def _one_time_keys_match(old_key_json, new_key):
old_key = json.loads(old_key_json)

View File

@@ -15,8 +15,14 @@
# limitations under the License.
"""Contains handlers for federation events."""
import httplib
import itertools
import logging
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
from twisted.internet import defer
from unpaddedbase64 import decode_base64
from ._base import BaseHandler
@@ -43,10 +49,6 @@ from synapse.util.retryutils import NotRetryingDestination
from synapse.util.distributor import user_joined_room
from twisted.internet import defer
import itertools
import logging
logger = logging.getLogger(__name__)
@@ -68,7 +70,7 @@ class FederationHandler(BaseHandler):
self.hs = hs
self.store = hs.get_datastore()
self.replication_layer = hs.get_replication_layer()
self.replication_layer = hs.get_federation_client()
self.state_handler = hs.get_state_handler()
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
@@ -78,8 +80,6 @@ class FederationHandler(BaseHandler):
self.spam_checker = hs.get_spam_checker()
self.event_creation_handler = hs.get_event_creation_handler()
self.replication_layer.set_handler(self)
# When joining a room we need to queue any events for that room up
self.room_queues = {}
self._room_pdu_linearizer = Linearizer("fed_room_pdu")
@@ -117,6 +117,19 @@ class FederationHandler(BaseHandler):
logger.debug("Already seen pdu %s", pdu.event_id)
return
# do some initial sanity-checking of the event. In particular, make
# sure it doesn't have hundreds of prev_events or auth_events, which
# could cause a huge state resolution or cascade of event fetches.
try:
self._sanity_check_event(pdu)
except SynapseError as err:
raise FederationError(
"ERROR",
err.code,
err.msg,
affected=pdu.event_id,
)
# If we are currently in the process of joining this room, then we
# queue up events for later processing.
if pdu.room_id in self.room_queues:
@@ -151,10 +164,6 @@ class FederationHandler(BaseHandler):
auth_chain = []
have_seen = yield self.store.have_events(
[ev for ev, _ in pdu.prev_events]
)
fetch_state = False
# Get missing pdus if necessary.
@@ -170,7 +179,7 @@ class FederationHandler(BaseHandler):
)
prevs = {e_id for e_id, _ in pdu.prev_events}
seen = set(have_seen.keys())
seen = yield self.store.have_seen_events(prevs)
if min_depth and pdu.depth < min_depth:
# This is so that we don't notify the user about this
@@ -198,8 +207,7 @@ class FederationHandler(BaseHandler):
# Update the set of things we've seen after trying to
# fetch the missing stuff
have_seen = yield self.store.have_events(prevs)
seen = set(have_seen.iterkeys())
seen = yield self.store.have_seen_events(prevs)
if not prevs - seen:
logger.info(
@@ -250,8 +258,7 @@ class FederationHandler(BaseHandler):
min_depth (int): Minimum depth of events to return.
"""
# We recalculate seen, since it may have changed.
have_seen = yield self.store.have_events(prevs)
seen = set(have_seen.keys())
seen = yield self.store.have_seen_events(prevs)
if not prevs - seen:
return
@@ -363,9 +370,7 @@ class FederationHandler(BaseHandler):
if auth_chain:
event_ids |= {e.event_id for e in auth_chain}
seen_ids = set(
(yield self.store.have_events(event_ids)).keys()
)
seen_ids = yield self.store.have_seen_events(event_ids)
if state and auth_chain is not None:
# If we have any state or auth_chain given to us by the replication
@@ -529,9 +534,16 @@ class FederationHandler(BaseHandler):
def backfill(self, dest, room_id, limit, extremities):
""" Trigger a backfill request to `dest` for the given `room_id`
This will attempt to get more events from the remote. This may be successful and still return no events, if the other side has no new events to offer.
This will attempt to get more events from the remote. If the other side
has no new events to offer, this will return an empty list.
As the events are received, we check their signatures, and also do some
sanity-checking on them. If any of the backfilled events are invalid,
this method throws a SynapseError.
TODO: make this more useful to distinguish failures of the remote
server from invalid events (there is probably no point in trying to
re-fetch invalid events from every other HS in the room.)
"""
if dest == self.server_name:
raise SynapseError(400, "Can't backfill from self.")
@@ -543,6 +555,16 @@ class FederationHandler(BaseHandler):
extremities=extremities,
)
# ideally we'd sanity check the events here for excess prev_events etc,
# but it's hard to reject events at this point without completely
# breaking backfill in the same way that it is currently broken by
# events whose signature we cannot verify (#3121).
#
# So for now we accept the events anyway. #3124 tracks this.
#
# for ev in events:
# self._sanity_check_event(ev)
# Don't bother processing events we already have.
seen_events = yield self.store.have_events_in_timeline(
set(e.event_id for e in events)
@@ -635,7 +657,7 @@ class FederationHandler(BaseHandler):
failed_to_fetch = missing_auth - set(auth_events)
seen_events = yield self.store.have_events(
seen_events = yield self.store.have_seen_events(
set(auth_events.keys()) | set(state_events.keys())
)
@@ -845,6 +867,38 @@ class FederationHandler(BaseHandler):
defer.returnValue(False)
def _sanity_check_event(self, ev):
"""
Do some early sanity checks of a received event
In particular, checks it doesn't have an excessive number of
prev_events or auth_events, which could cause a huge state resolution
or cascade of event fetches.
Args:
ev (synapse.events.EventBase): event to be checked
Returns: None
Raises:
SynapseError if the event does not pass muster
"""
if len(ev.prev_events) > 20:
logger.warn("Rejecting event %s which has %i prev_events",
ev.event_id, len(ev.prev_events))
raise SynapseError(
httplib.BAD_REQUEST,
"Too many prev_events",
)
if len(ev.auth_events) > 10:
logger.warn("Rejecting event %s which has %i auth_events",
ev.event_id, len(ev.auth_events))
raise SynapseError(
httplib.BAD_REQUEST,
"Too many auth_events",
)
@defer.inlineCallbacks
def send_invite(self, target_host, event):
""" Sends the invite to the remote server for signing.
@@ -1447,16 +1501,24 @@ class FederationHandler(BaseHandler):
auth_events=auth_events,
)
if not event.internal_metadata.is_outlier() and not backfilled:
yield self.action_generator.handle_push_actions_for_event(
event, context
)
try:
if not event.internal_metadata.is_outlier() and not backfilled:
yield self.action_generator.handle_push_actions_for_event(
event, context
)
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
backfilled=backfilled,
)
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
backfilled=backfilled,
)
except: # noqa: E722, as we reraise the exception this is fine.
# Ensure that we actually remove the entries in the push actions
# staging area
logcontext.preserve_fn(
self.store.remove_push_actions_from_staging
)(event.event_id)
raise
if not backfilled:
# this intentionally does not yield: we don't care about the result
@@ -1730,7 +1792,8 @@ class FederationHandler(BaseHandler):
event_key = None
if event_auth_events - current_state:
have_events = yield self.store.have_events(
# TODO: can we use store.have_seen_events here instead?
have_events = yield self.store.get_seen_events_with_rejections(
event_auth_events - current_state
)
else:
@@ -1753,12 +1816,12 @@ class FederationHandler(BaseHandler):
origin, event.room_id, event.event_id
)
seen_remotes = yield self.store.have_events(
seen_remotes = yield self.store.have_seen_events(
[e.event_id for e in remote_auth_chain]
)
for e in remote_auth_chain:
if e.event_id in seen_remotes.keys():
if e.event_id in seen_remotes:
continue
if e.event_id == event.event_id:
@@ -1785,7 +1848,7 @@ class FederationHandler(BaseHandler):
except AuthError:
pass
have_events = yield self.store.have_events(
have_events = yield self.store.get_seen_events_with_rejections(
[e_id for e_id, _ in event.auth_events]
)
seen_events = set(have_events.keys())
@@ -1870,13 +1933,13 @@ class FederationHandler(BaseHandler):
local_auth_chain,
)
seen_remotes = yield self.store.have_events(
seen_remotes = yield self.store.have_seen_events(
[e.event_id for e in result["auth_chain"]]
)
# 3. Process any remote auth chain events we haven't seen.
for ev in result["auth_chain"]:
if ev.event_id in seen_remotes.keys():
if ev.event_id in seen_remotes:
continue
if ev.event_id == event.event_id:
@@ -2145,7 +2208,7 @@ class FederationHandler(BaseHandler):
raise e
yield self._check_signature(event, context)
member_handler = self.hs.get_handlers().room_member_handler
member_handler = self.hs.get_room_member_handler()
yield member_handler.send_membership_event(None, event, context)
else:
destinations = set(x.split(":", 1)[-1] for x in (sender_user_id, room_id))
@@ -2189,7 +2252,7 @@ class FederationHandler(BaseHandler):
# TODO: Make sure the signatures actually are correct.
event.signatures.update(returned_invite.signatures)
member_handler = self.hs.get_handlers().room_member_handler
member_handler = self.hs.get_room_member_handler()
yield member_handler.send_membership_event(None, event, context)
@defer.inlineCallbacks


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -90,6 +91,8 @@ class GroupsLocalHandler(object):
get_group_role = _create_rerouter("get_group_role")
get_group_roles = _create_rerouter("get_group_roles")
set_group_join_policy = _create_rerouter("set_group_join_policy")
@defer.inlineCallbacks
def get_group_summary(self, group_id, requester_user_id):
"""Get the group summary for a group.
@@ -226,7 +229,45 @@ class GroupsLocalHandler(object):
def join_group(self, group_id, user_id, content):
"""Request to join a group
"""
raise NotImplementedError() # TODO
if self.is_mine_id(group_id):
yield self.groups_server_handler.join_group(
group_id, user_id, content
)
local_attestation = None
remote_attestation = None
else:
local_attestation = self.attestations.create_attestation(group_id, user_id)
content["attestation"] = local_attestation
res = yield self.transport_client.join_group(
get_domain_from_id(group_id), group_id, user_id, content,
)
remote_attestation = res["attestation"]
yield self.attestations.verify_attestation(
remote_attestation,
group_id=group_id,
user_id=user_id,
server_name=get_domain_from_id(group_id),
)
# TODO: Check that the group is public and we're being added publicly
is_publicised = content.get("publicise", False)
token = yield self.store.register_user_group_membership(
group_id, user_id,
membership="join",
is_admin=False,
local_attestation=local_attestation,
remote_attestation=remote_attestation,
is_publicised=is_publicised,
)
self.notifier.on_new_event(
"groups_key", token, users=[user_id],
)
defer.returnValue({})
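
The join is routed on whether the group is hosted locally; a much-simplified sketch of that decision (get_domain_from_id here is a stand-in for the synapse.types helper, and the attestation values are illustrative placeholders):

def get_domain_from_id(string_id):
    # Matrix ids are "<sigil><localpart>:<domain>"; the domain is
    # everything after the first colon.
    return string_id.split(":", 1)[1]

def route_join(my_domain, group_id, user_id, content):
    if get_domain_from_id(group_id) == my_domain:
        # Local group: handled in-process, no attestations needed.
        return "local", None
    # Remote group: attach our attestation and expect one back, which
    # must be verified before recording the membership.
    content["attestation"] = {"user_id": user_id, "group_id": group_id}
    remote_attestation = "...returned and verified via federation..."
    return "remote", remote_attestation

print(route_join("example.com", "+cats:other.org", "@u:example.com", {}))
# ('remote', '...returned and verified via federation...')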
@defer.inlineCallbacks
def accept_invite(self, group_id, user_id, content):


@@ -15,6 +15,11 @@
# limitations under the License.
"""Utilities for interacting with Identity Servers"""
import logging
import simplejson as json
from twisted.internet import defer
from synapse.api.errors import (
@@ -24,9 +29,6 @@ from ._base import BaseHandler
from synapse.util.async import run_on_reactor
from synapse.api.errors import SynapseError, Codes
import json
import logging
logger = logging.getLogger(__name__)


@@ -13,7 +13,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from twisted.internet import defer, reactor
from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError
@@ -24,9 +25,10 @@ from synapse.types import (
UserID, RoomAlias, RoomStreamToken,
)
from synapse.util.async import run_on_reactor, ReadWriteLock, Limiter
from synapse.util.logcontext import preserve_fn
from synapse.util.logcontext import preserve_fn, run_in_background
from synapse.util.metrics import measure_func
from synapse.util.frozenutils import unfreeze
from synapse.util.frozenutils import frozendict_json_encoder
from synapse.util.stringutils import random_string
from synapse.visibility import filter_events_for_client
from synapse.replication.http.send_event import send_event_to_master
@@ -35,12 +37,41 @@ from ._base import BaseHandler
from canonicaljson import encode_canonical_json
import logging
import random
import ujson
import simplejson
logger = logging.getLogger(__name__)
class PurgeStatus(object):
"""Object tracking the status of a purge request
This class contains information on the progress of a purge request, for
return by get_purge_status.
Attributes:
status (int): Tracks whether this request has completed. One of
STATUS_{ACTIVE,COMPLETE,FAILED}
"""
STATUS_ACTIVE = 0
STATUS_COMPLETE = 1
STATUS_FAILED = 2
STATUS_TEXT = {
STATUS_ACTIVE: "active",
STATUS_COMPLETE: "complete",
STATUS_FAILED: "failed",
}
def __init__(self):
self.status = PurgeStatus.STATUS_ACTIVE
def asdict(self):
return {
"status": PurgeStatus.STATUS_TEXT[self.status]
}
class MessageHandler(BaseHandler):
def __init__(self, hs):
@@ -50,18 +81,87 @@ class MessageHandler(BaseHandler):
self.clock = hs.get_clock()
self.pagination_lock = ReadWriteLock()
self._purges_in_progress_by_room = set()
# map from purge id to PurgeStatus
self._purges_by_id = {}
def start_purge_history(self, room_id, topological_ordering,
delete_local_events=False):
"""Start off a history purge on a room.
Args:
room_id (str): The room to purge from
topological_ordering (int): minimum topo ordering to preserve
delete_local_events (bool): True to delete local events as well as
remote ones
Returns:
str: unique ID for this purge transaction.
"""
if room_id in self._purges_in_progress_by_room:
raise SynapseError(
400,
"History purge already in progress for %s" % (room_id, ),
)
purge_id = random_string(16)
# we log the purge_id here so that it can be tied back to the
# request id in the log lines.
logger.info("[purge] starting purge_id %s", purge_id)
self._purges_by_id[purge_id] = PurgeStatus()
run_in_background(
self._purge_history,
purge_id, room_id, topological_ordering, delete_local_events,
)
return purge_id
@defer.inlineCallbacks
def purge_history(self, room_id, event_id, delete_local_events=False):
event = yield self.store.get_event(event_id)
def _purge_history(self, purge_id, room_id, topological_ordering,
delete_local_events):
"""Carry out a history purge on a room.
if event.room_id != room_id:
raise SynapseError(400, "Event is for wrong room.")
Args:
purge_id (str): The id for this purge
room_id (str): The room to purge from
topological_ordering (int): minimum topo ordering to preserve
delete_local_events (bool): True to delete local events as well as
remote ones
depth = event.depth
Returns:
Deferred
"""
self._purges_in_progress_by_room.add(room_id)
try:
with (yield self.pagination_lock.write(room_id)):
yield self.store.purge_history(
room_id, topological_ordering, delete_local_events,
)
logger.info("[purge] complete")
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
except Exception:
logger.error("[purge] failed: %s", Failure().getTraceback().rstrip())
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
finally:
self._purges_in_progress_by_room.discard(room_id)
with (yield self.pagination_lock.write(room_id)):
yield self.store.purge_history(room_id, depth, delete_local_events)
# remove the purge from the list 24 hours after it completes
def clear_purge():
del self._purges_by_id[purge_id]
reactor.callLater(24 * 3600, clear_purge)
def get_purge_status(self, purge_id):
"""Get the current status of an active purge
Args:
purge_id (str): purge_id returned by start_purge_history
Returns:
PurgeStatus|None
"""
return self._purges_by_id.get(purge_id)
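
Framework aside, the bookkeeping is just a table of job ids to statuses, updated by a background task; a minimal threading-based analogue (the real code uses twisted's run_in_background and reactor.callLater rather than threads):

import random
import string
import threading

STATUS_ACTIVE, STATUS_COMPLETE, STATUS_FAILED = 0, 1, 2
purges_by_id = {}

def start_purge(do_purge):
    purge_id = "".join(random.choice(string.ascii_letters) for _ in range(16))
    purges_by_id[purge_id] = STATUS_ACTIVE

    def run():
        try:
            do_purge()
            purges_by_id[purge_id] = STATUS_COMPLETE
        except Exception:
            purges_by_id[purge_id] = STATUS_FAILED

    threading.Thread(target=run).start()
    return purge_id  # the caller polls get_purge_status(purge_id)

def get_purge_status(purge_id):
    return purges_by_id.get(purge_id)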
@defer.inlineCallbacks
def get_messages(self, requester, room_id=None, pagin_config=None,
@@ -332,7 +432,7 @@ class EventCreationHandler(object):
@defer.inlineCallbacks
def create_event(self, requester, event_dict, token_id=None, txn_id=None,
prev_event_ids=None):
prev_events_and_hashes=None):
"""
Given a dict from a client, create a new event.
@@ -346,47 +446,52 @@ class EventCreationHandler(object):
event_dict (dict): An entire event
token_id (str)
txn_id (str)
prev_event_ids (list): The prev event ids to use when creating the event
prev_events_and_hashes (list[(str, dict[str, str], int)]|None):
the forward extremities to use as the prev_events for the
new event. For each event, a tuple of (event_id, hashes, depth)
where *hashes* is a map from algorithm to hash.
If None, they will be requested from the database.
Returns:
Tuple of created event (FrozenEvent), Context
"""
builder = self.event_builder_factory.new(event_dict)
with (yield self.limiter.queue(builder.room_id)):
self.validator.validate_new(builder)
self.validator.validate_new(builder)
if builder.type == EventTypes.Member:
membership = builder.content.get("membership", None)
target = UserID.from_string(builder.state_key)
if builder.type == EventTypes.Member:
membership = builder.content.get("membership", None)
target = UserID.from_string(builder.state_key)
if membership in {Membership.JOIN, Membership.INVITE}:
# If event doesn't include a display name, add one.
profile = self.profile_handler
content = builder.content
if membership in {Membership.JOIN, Membership.INVITE}:
# If event doesn't include a display name, add one.
profile = self.profile_handler
content = builder.content
try:
if "displayname" not in content:
content["displayname"] = yield profile.get_displayname(target)
if "avatar_url" not in content:
content["avatar_url"] = yield profile.get_avatar_url(target)
except Exception as e:
logger.info(
"Failed to get profile information for %r: %s",
target, e
)
try:
if "displayname" not in content:
content["displayname"] = yield profile.get_displayname(target)
if "avatar_url" not in content:
content["avatar_url"] = yield profile.get_avatar_url(target)
except Exception as e:
logger.info(
"Failed to get profile information for %r: %s",
target, e
)
if token_id is not None:
builder.internal_metadata.token_id = token_id
if token_id is not None:
builder.internal_metadata.token_id = token_id
if txn_id is not None:
builder.internal_metadata.txn_id = txn_id
if txn_id is not None:
builder.internal_metadata.txn_id = txn_id
event, context = yield self.create_new_client_event(
builder=builder,
requester=requester,
prev_event_ids=prev_event_ids,
)
event, context = yield self.create_new_client_event(
builder=builder,
requester=requester,
prev_events_and_hashes=prev_events_and_hashes,
)
defer.returnValue((event, context))
@@ -456,64 +561,76 @@ class EventCreationHandler(object):
See self.create_event and self.send_nonmember_event.
"""
event, context = yield self.create_event(
requester,
event_dict,
token_id=requester.access_token_id,
txn_id=txn_id
)
spam_error = self.spam_checker.check_event_for_spam(event)
if spam_error:
if not isinstance(spam_error, basestring):
spam_error = "Spam is not permitted here"
raise SynapseError(
403, spam_error, Codes.FORBIDDEN
# We limit the number of concurrent event sends in a room so that we
# don't fork the DAG too much. If we don't limit then we can end up in
# a situation where event persistence can't keep up, causing
# extremities to pile up, which in turn leads to state resolution
# taking longer.
with (yield self.limiter.queue(event_dict["room_id"])):
event, context = yield self.create_event(
requester,
event_dict,
token_id=requester.access_token_id,
txn_id=txn_id
)
yield self.send_nonmember_event(
requester,
event,
context,
ratelimit=ratelimit,
)
spam_error = self.spam_checker.check_event_for_spam(event)
if spam_error:
if not isinstance(spam_error, basestring):
spam_error = "Spam is not permitted here"
raise SynapseError(
403, spam_error, Codes.FORBIDDEN
)
yield self.send_nonmember_event(
requester,
event,
context,
ratelimit=ratelimit,
)
defer.returnValue(event)
@measure_func("create_new_client_event")
@defer.inlineCallbacks
def create_new_client_event(self, builder, requester=None, prev_event_ids=None):
if prev_event_ids:
prev_events = yield self.store.add_event_hashes(prev_event_ids)
prev_max_depth = yield self.store.get_max_depth_of_events(prev_event_ids)
depth = prev_max_depth + 1
else:
latest_ret = yield self.store.get_latest_event_ids_and_hashes_in_room(
builder.room_id,
def create_new_client_event(self, builder, requester=None,
prev_events_and_hashes=None):
"""Create a new event for a local client
Args:
builder (EventBuilder):
requester (synapse.types.Requester|None):
prev_events_and_hashes (list[(str, dict[str, str], int)]|None):
the forward extremities to use as the prev_events for the
new event. For each event, a tuple of (event_id, hashes, depth)
where *hashes* is a map from algorithm to hash.
If None, they will be requested from the database.
Returns:
Deferred[(synapse.events.EventBase, synapse.events.snapshot.EventContext)]
"""
if prev_events_and_hashes is not None:
assert len(prev_events_and_hashes) <= 10, \
"Attempting to create an event with %i prev_events" % (
len(prev_events_and_hashes),
)
else:
prev_events_and_hashes = \
yield self.store.get_prev_events_for_room(builder.room_id)
# We want to limit the max number of prev events we point to in our
# new event
if len(latest_ret) > 10:
# Sort by reverse depth, so we point to the most recent.
latest_ret.sort(key=lambda a: -a[2])
new_latest_ret = latest_ret[:5]
if prev_events_and_hashes:
depth = max([d for _, _, d in prev_events_and_hashes]) + 1
else:
depth = 1
# We also randomly point to some of the older events, to make
# sure that we don't completely ignore the older events.
if latest_ret[5:]:
sample_size = min(5, len(latest_ret[5:]))
new_latest_ret.extend(random.sample(latest_ret[5:], sample_size))
latest_ret = new_latest_ret
if latest_ret:
depth = max([d for _, _, d in latest_ret]) + 1
else:
depth = 1
prev_events = [
(event_id, prev_hashes)
for event_id, prev_hashes, _ in latest_ret
]
prev_events = [
(event_id, prev_hashes)
for event_id, prev_hashes, _ in prev_events_and_hashes
]
builder.prev_events = prev_events
builder.depth = depth
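
For reference, the extremity-capping strategy that previously lived inline here (and now sits behind store.get_prev_events_for_room) can be sketched as follows; this illustrates the selection logic, not the store's actual implementation:

import random

def choose_prev_events(extremities):
    """Pick at most 10 forward extremities as prev_events.

    `extremities` is a list of (event_id, hashes, depth) tuples.
    """
    if len(extremities) <= 10:
        chosen = list(extremities)
    else:
        # Prefer the most recent extremities (highest depth)...
        by_depth = sorted(extremities, key=lambda e: -e[2])
        chosen = by_depth[:5]
        # ...but mix in a random sample of older ones so that stale
        # extremities are not ignored forever.
        chosen.extend(random.sample(by_depth[5:], 5))
    depth = max(d for _, _, d in chosen) + 1 if chosen else 1
    prev_events = [(event_id, hashes) for event_id, hashes, _ in chosen]
    return prev_events, depth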
@@ -553,24 +670,21 @@ class EventCreationHandler(object):
event,
context,
ratelimit=True,
extra_users=[]
extra_users=[],
):
# We now need to go and hit out to wherever we need to hit out to.
"""Processes a new event. This includes checking auth, persisting it,
notifying users, sending to remote servers, etc.
# If we're a worker we need to hit out to the master.
if self.config.worker_app:
yield send_event_to_master(
self.http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
requester=requester,
event=event,
context=context,
)
return
If called from a worker, this will hit out to the master process for final
processing.
if ratelimit:
yield self.base_handler.ratelimit(requester)
Args:
requester (Requester)
event (FrozenEvent)
context (EventContext)
ratelimit (bool)
extra_users (list(UserID)): Any extra users to notify about event
"""
try:
yield self.auth.check_from_context(event, context)
@@ -580,12 +694,63 @@ class EventCreationHandler(object):
# Ensure that we can round trip before trying to persist in db
try:
dump = ujson.dumps(unfreeze(event.content))
ujson.loads(dump)
dump = frozendict_json_encoder.encode(event.content)
simplejson.loads(dump)
except Exception:
logger.exception("Failed to encode content: %r", event.content)
raise
yield self.action_generator.handle_push_actions_for_event(
event, context
)
try:
# If we're a worker we need to hit out to the master.
if self.config.worker_app:
yield send_event_to_master(
self.http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
requester=requester,
event=event,
context=context,
ratelimit=ratelimit,
extra_users=extra_users,
)
return
yield self.persist_and_notify_client_event(
requester,
event,
context,
ratelimit=ratelimit,
extra_users=extra_users,
)
except: # noqa: E722, as we reraise the exception this is fine.
# Ensure that we actually remove the entries in the push actions
# staging area, if we calculated them.
preserve_fn(self.store.remove_push_actions_from_staging)(event.event_id)
raise
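
The surrounding try/except implements a simple compensate-and-reraise pattern: push actions are staged before the event is persisted, so any failure must clean up the staging area. Stripped of the twisted plumbing, the shape is (function names here are illustrative stand-ins):

def persist_with_staging_cleanup(persist, remove_from_staging, event_id):
    # `persist` stands in for the worker/master dispatch above;
    # `remove_from_staging` for store.remove_push_actions_from_staging.
    try:
        persist()
    except Exception:
        # Best-effort cleanup (fire-and-forget in the real code), then
        # re-raise the original exception for the caller to handle.
        remove_from_staging(event_id)
        raise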
@defer.inlineCallbacks
def persist_and_notify_client_event(
self,
requester,
event,
context,
ratelimit=True,
extra_users=[],
):
"""Called when we have fully built the event, have already
calculated the push actions for the event, and checked auth.
This should only be run on master.
"""
assert not self.config.worker_app
if ratelimit:
yield self.base_handler.ratelimit(requester)
yield self.base_handler.maybe_kick_guest_users(event, context)
if event.type == EventTypes.CanonicalAlias:
@@ -679,10 +844,6 @@ class EventCreationHandler(object):
"Changing the room create event is forbidden",
)
yield self.action_generator.handle_push_actions_for_event(
event, context
)
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event, context=context
)


@@ -93,29 +93,30 @@ class PresenceHandler(object):
self.store = hs.get_datastore()
self.wheel_timer = WheelTimer()
self.notifier = hs.get_notifier()
self.replication = hs.get_replication_layer()
self.federation = hs.get_federation_sender()
self.state = hs.get_state_handler()
self.replication.register_edu_handler(
federation_registry = hs.get_federation_registry()
federation_registry.register_edu_handler(
"m.presence", self.incoming_presence
)
self.replication.register_edu_handler(
federation_registry.register_edu_handler(
"m.presence_invite",
lambda origin, content: self.invite_presence(
observed_user=UserID.from_string(content["observed_user"]),
observer_user=UserID.from_string(content["observer_user"]),
)
)
self.replication.register_edu_handler(
federation_registry.register_edu_handler(
"m.presence_accept",
lambda origin, content: self.accept_presence(
observed_user=UserID.from_string(content["observed_user"]),
observer_user=UserID.from_string(content["observer_user"]),
)
)
self.replication.register_edu_handler(
federation_registry.register_edu_handler(
"m.presence_deny",
lambda origin, content: self.deny_presence(
observed_user=UserID.from_string(content["observed_user"]),


@@ -31,14 +31,17 @@ class ProfileHandler(BaseHandler):
def __init__(self, hs):
super(ProfileHandler, self).__init__(hs)
self.federation = hs.get_replication_layer()
self.federation.register_query_handler(
self.federation = hs.get_federation_client()
hs.get_federation_registry().register_query_handler(
"profile", self.on_profile_query
)
self.user_directory_handler = hs.get_user_directory_handler()
self.clock.looping_call(self._update_remote_profile_cache, self.PROFILE_UPDATE_MS)
if hs.config.worker_app is None:
self.clock.looping_call(
self._update_remote_profile_cache, self.PROFILE_UPDATE_MS,
)
@defer.inlineCallbacks
def get_profile(self, user_id):
@@ -233,7 +236,7 @@ class ProfileHandler(BaseHandler):
)
for room_id in room_ids:
handler = self.hs.get_handlers().room_member_handler
handler = self.hs.get_room_member_handler()
try:
# Assume the target_user isn't a guest,
# because we don't let guests set profile or avatar data.


@@ -41,9 +41,9 @@ class ReadMarkerHandler(BaseHandler):
"""
with (yield self.read_marker_linearizer.queue((room_id, user_id))):
account_data = yield self.store.get_account_data_for_room(user_id, room_id)
existing_read_marker = account_data.get("m.fully_read", None)
existing_read_marker = yield self.store.get_account_data_for_room_and_type(
user_id, room_id, "m.fully_read",
)
should_update = True


@@ -35,7 +35,7 @@ class ReceiptsHandler(BaseHandler):
self.store = hs.get_datastore()
self.hs = hs
self.federation = hs.get_federation_sender()
hs.get_replication_layer().register_edu_handler(
hs.get_federation_registry().register_edu_handler(
"m.receipt", self._received_remote_receipt
)
self.clock = self.hs.get_clock()


@@ -23,8 +23,8 @@ from synapse.api.errors import (
)
from synapse.http.client import CaptchaServerHttpClient
from synapse import types
from synapse.types import UserID
from synapse.util.async import run_on_reactor
from synapse.types import UserID, create_requester, RoomID, RoomAlias
from synapse.util.async import run_on_reactor, Linearizer
from synapse.util.threepids import check_3pid_allowed
from ._base import BaseHandler
@@ -46,6 +46,10 @@ class RegistrationHandler(BaseHandler):
self.macaroon_gen = hs.get_macaroon_generator()
self._generate_user_id_linearizer = Linearizer(
name="_generate_user_id_linearizer",
)
@defer.inlineCallbacks
def check_username(self, localpart, guest_access_token=None,
assigned_user_id=None):
@@ -201,10 +205,17 @@ class RegistrationHandler(BaseHandler):
token = None
attempts += 1
# auto-join the user to any rooms we're supposed to dump them into
fake_requester = create_requester(user_id)
for r in self.hs.config.auto_join_rooms:
try:
yield self._join_user_to_room(fake_requester, r)
except Exception as e:
logger.error("Failed to join new user to %r: %r", r, e)
# We used to generate default identicons here, but nowadays
# we want clients to generate their own as part of their branding
# rather than there being consistent matrix-wide ones, so we don't.
defer.returnValue((user_id, token))
@defer.inlineCallbacks
@@ -345,9 +356,11 @@ class RegistrationHandler(BaseHandler):
@defer.inlineCallbacks
def _generate_user_id(self, reseed=False):
if reseed or self._next_generated_user_id is None:
self._next_generated_user_id = (
yield self.store.find_next_generated_user_id_localpart()
)
with (yield self._generate_user_id_linearizer.queue(())):
if reseed or self._next_generated_user_id is None:
self._next_generated_user_id = (
yield self.store.find_next_generated_user_id_localpart()
)
id = self._next_generated_user_id
self._next_generated_user_id += 1
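
The Linearizer serialises access to the cached counter so that two concurrent registrations cannot both reseed it and hand out the same localpart; a threading analogue of the same idea:

import threading

class UserIdAllocator(object):
    """Lock-guarded counter, analogous to the Linearizer-guarded one above."""
    def __init__(self, first_free):
        self._lock = threading.Lock()  # stands in for the Linearizer
        self._next = None
        self._first_free = first_free

    def allocate(self):
        with self._lock:
            if self._next is None:
                # "reseed" from storage exactly once, under the lock
                self._next = self._first_free
            user_id = self._next
            self._next += 1
        return user_id

alloc = UserIdAllocator(1000)
assert alloc.allocate() == 1000 and alloc.allocate() == 1001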
@@ -446,16 +459,59 @@ class RegistrationHandler(BaseHandler):
return self.hs.get_auth_handler()
@defer.inlineCallbacks
def guest_access_token_for(self, medium, address, inviter_user_id):
def get_or_register_3pid_guest(self, medium, address, inviter_user_id):
"""Get a guest access token for a 3PID, creating a guest account if
one doesn't already exist.
Args:
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
Returns:
Deferred[(str, str)]: A 2-tuple of `(user_id, access_token)` of the
3PID guest account.
"""
access_token = yield self.store.get_3pid_guest_access_token(medium, address)
if access_token:
defer.returnValue(access_token)
user_info = yield self.auth.get_user_by_access_token(
access_token
)
_, access_token = yield self.register(
defer.returnValue((user_info["user"].to_string(), access_token))
user_id, access_token = yield self.register(
generate_token=True,
make_guest=True
)
access_token = yield self.store.save_or_get_3pid_guest_access_token(
medium, address, access_token, inviter_user_id
)
defer.returnValue(access_token)
defer.returnValue((user_id, access_token))
@defer.inlineCallbacks
def _join_user_to_room(self, requester, room_identifier):
room_id = None
room_member_handler = self.hs.get_room_member_handler()
if RoomID.is_valid(room_identifier):
room_id = room_identifier
elif RoomAlias.is_valid(room_identifier):
room_alias = RoomAlias.from_string(room_identifier)
room_id, remote_room_hosts = (
yield room_member_handler.lookup_room_alias(room_alias)
)
room_id = room_id.to_string()
else:
raise SynapseError(400, "%s was not legal room ID or room alias" % (
room_identifier,
))
yield room_member_handler.update_membership(
requester=requester,
target=requester.user,
room_id=room_id,
remote_room_hosts=remote_room_hosts,
action="join",
)


@@ -165,7 +165,7 @@ class RoomCreationHandler(BaseHandler):
creation_content = config.get("creation_content", {})
room_member_handler = self.hs.get_handlers().room_member_handler
room_member_handler = self.hs.get_room_member_handler()
yield self._send_events_for_new_room(
requester,
@@ -224,7 +224,7 @@ class RoomCreationHandler(BaseHandler):
id_server = invite_3pid["id_server"]
address = invite_3pid["address"]
medium = invite_3pid["medium"]
yield self.hs.get_handlers().room_member_handler.do_3pid_invite(
yield self.hs.get_room_member_handler().do_3pid_invite(
room_id,
requester.user,
medium,
@@ -475,12 +475,9 @@ class RoomEventSource(object):
user.to_string()
)
if app_service:
events, end_key = yield self.store.get_appservice_room_stream(
service=app_service,
from_key=from_key,
to_key=to_key,
limit=limit,
)
# We no longer support AS users using /sync directly.
# See https://github.com/matrix-org/matrix-doc/issues/1144
raise NotImplementedError()
else:
room_events = yield self.store.get_membership_changes_for_user(
user.to_string(), from_key, to_key


@@ -20,7 +20,6 @@ from ._base import BaseHandler
from synapse.api.constants import (
EventTypes, JoinRules,
)
from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
from synapse.util.async import concurrently_execute
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.util.caches.response_cache import ResponseCache
@@ -44,8 +43,9 @@ EMTPY_THIRD_PARTY_ID = ThirdPartyInstanceID(None, None)
class RoomListHandler(BaseHandler):
def __init__(self, hs):
super(RoomListHandler, self).__init__(hs)
self.response_cache = ResponseCache(hs)
self.remote_response_cache = ResponseCache(hs, timeout_ms=30 * 1000)
self.response_cache = ResponseCache(hs, "room_list")
self.remote_response_cache = ResponseCache(hs, "remote_room_list",
timeout_ms=30 * 1000)
def get_local_public_room_list(self, limit=None, since_token=None,
search_filter=None,
@@ -77,18 +77,11 @@ class RoomListHandler(BaseHandler):
)
key = (limit, since_token, network_tuple)
result = self.response_cache.get(key)
if not result:
logger.info("No cached result, calculating one.")
result = self.response_cache.set(
key,
preserve_fn(self._get_public_room_list)(
limit, since_token, network_tuple=network_tuple
)
)
else:
logger.info("Using cached deferred result.")
return make_deferred_yieldable(result)
return self.response_cache.wrap(
key,
self._get_public_room_list,
limit, since_token, network_tuple=network_tuple,
)
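
ResponseCache.wrap folds the previous get-or-set dance into one call: if an identical request is already in flight (or still cached), its result is reused. A toy, non-twisted model of that behaviour:

class MiniResponseCache(object):
    """Toy model of ResponseCache.wrap; no timeouts or deferreds."""
    def __init__(self):
        self._results = {}

    def wrap(self, key, callback, *args, **kwargs):
        # Compute at most once per key; later callers share the result.
        if key not in self._results:
            self._results[key] = callback(*args, **kwargs)
        return self._results[key]

cache = MiniResponseCache()
def compute(limit):
    return ["!room%i:example.com" % i for i in range(limit)]

first = cache.wrap(("list", 3), compute, 3)
second = cache.wrap(("list", 3), compute, 3)
assert first is second  # the second call hit the cache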
@defer.inlineCallbacks
def _get_public_room_list(self, limit=None, since_token=None,
@@ -409,7 +402,7 @@ class RoomListHandler(BaseHandler):
def _get_remote_list_cached(self, server_name, limit=None, since_token=None,
search_filter=None, include_all_networks=False,
third_party_instance_id=None,):
repl_layer = self.hs.get_replication_layer()
repl_layer = self.hs.get_federation_client()
if search_filter:
# We can't cache when asking for search
return repl_layer.get_public_rooms(
@@ -422,18 +415,14 @@ class RoomListHandler(BaseHandler):
server_name, limit, since_token, include_all_networks,
third_party_instance_id,
)
result = self.remote_response_cache.get(key)
if not result:
result = self.remote_response_cache.set(
key,
repl_layer.get_public_rooms(
server_name, limit=limit, since_token=since_token,
search_filter=search_filter,
include_all_networks=include_all_networks,
third_party_instance_id=third_party_instance_id,
)
)
return result
return self.remote_response_cache.wrap(
key,
repl_layer.get_public_rooms,
server_name, limit=limit, since_token=since_token,
search_filter=search_filter,
include_all_networks=include_all_networks,
third_party_instance_id=third_party_instance_id,
)
class RoomListNextBatch(namedtuple("RoomListNextBatch", (


@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import logging
from signedjson.key import decode_verify_key_bytes
@@ -30,22 +30,32 @@ from synapse.api.errors import AuthError, SynapseError, Codes
from synapse.types import UserID, RoomID
from synapse.util.async import Linearizer
from synapse.util.distributor import user_left_room, user_joined_room
from ._base import BaseHandler
logger = logging.getLogger(__name__)
id_server_scheme = "https://"
class RoomMemberHandler(BaseHandler):
class RoomMemberHandler(object):
# TODO(paul): This handler currently contains a messy conflation of
# low-level API that works on UserID objects and so on, and REST-level
# API that takes ID strings and returns pagination chunks. These concerns
# ought to be separated out a lot better.
def __init__(self, hs):
super(RoomMemberHandler, self).__init__(hs)
__metaclass__ = abc.ABCMeta
def __init__(self, hs):
self.hs = hs
self.store = hs.get_datastore()
self.auth = hs.get_auth()
self.state_handler = hs.get_state_handler()
self.config = hs.config
self.simple_http_client = hs.get_simple_http_client()
self.federation_handler = hs.get_handlers().federation_handler
self.directory_handler = hs.get_handlers().directory_handler
self.registration_handler = hs.get_handlers().registration_handler
self.profile_handler = hs.get_profile_handler()
self.event_creation_hander = hs.get_event_creation_handler()
@@ -54,14 +64,92 @@ class RoomMemberHandler(BaseHandler):
self.clock = hs.get_clock()
self.spam_checker = hs.get_spam_checker()
self.distributor = hs.get_distributor()
self.distributor.declare("user_joined_room")
self.distributor.declare("user_left_room")
@abc.abstractmethod
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
"""Try and join a room that this server is not in
Args:
requester (Requester)
remote_room_hosts (list[str]): List of servers that can be used
to join via.
room_id (str): Room that we are trying to join
user (UserID): User who is trying to join
content (dict): A dict that should be used as the content of the
join event.
Returns:
Deferred
"""
raise NotImplementedError()
@abc.abstractmethod
def _remote_reject_invite(self, remote_room_hosts, room_id, target):
"""Attempt to reject an invite for a room this server is not in. If we
fail to do so we locally mark the invite as rejected.
Args:
requester (Requester)
remote_room_hosts (list[str]): List of servers to use to try and
reject invite
room_id (str)
target (UserID): The user rejecting the invite
Returns:
Deferred[dict]: A dictionary to be returned to the client, may
include event_id etc, or nothing if we locally rejected
"""
raise NotImplementedError()
@abc.abstractmethod
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Get a guest access token for a 3PID, creating a guest account if
one doesn't already exist.
Args:
requester (Requester)
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
Returns:
Deferred[(str, str)]: A 2-tuple of `(user_id, access_token)` of the
3PID guest account.
"""
raise NotImplementedError()
@abc.abstractmethod
def _user_joined_room(self, target, room_id):
"""Notifies distributor on master process that the user has joined the
room.
Args:
target (UserID)
room_id (str)
Returns:
Deferred|None
"""
raise NotImplementedError()
@abc.abstractmethod
def _user_left_room(self, target, room_id):
"""Notifies distributor on master process that the user has left the
room.
Args:
target (UserID)
room_id (str)
Returns:
Deferred|None
"""
raise NotImplementedError()
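
The effect of the abc split is that everything outside these abstract methods is shared, while each deployment mode supplies its own side-effecting primitives; in miniature (illustrative classes and return values, not the real handlers):

import abc

class MemberHandlerBase(object):
    __metaclass__ = abc.ABCMeta  # python 2 spelling, as in the code above

    @abc.abstractmethod
    def _user_joined_room(self, target, room_id):
        raise NotImplementedError()

    def join(self, target, room_id):
        # shared logic lives in the base class...
        return self._user_joined_room(target, room_id)

class MasterHandler(MemberHandlerBase):
    def _user_joined_room(self, target, room_id):
        return "fire user_joined_room on the distributor"

class WorkerHandler(MemberHandlerBase):
    def _user_joined_room(self, target, room_id):
        return "POST the change to the master over replication HTTP"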
@defer.inlineCallbacks
def _local_membership_update(
self, requester, target, room_id, membership,
prev_event_ids,
prev_events_and_hashes,
txn_id=None,
ratelimit=True,
content=None,
@@ -87,7 +175,7 @@ class RoomMemberHandler(BaseHandler):
},
token_id=requester.access_token_id,
txn_id=txn_id,
prev_event_ids=prev_event_ids,
prev_events_and_hashes=prev_events_and_hashes,
)
# Check if this event matches the previous membership event for the user.
@@ -120,32 +208,15 @@ class RoomMemberHandler(BaseHandler):
prev_member_event = yield self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
yield user_joined_room(self.distributor, target, room_id)
yield self._user_joined_room(target, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
if prev_member_event.membership == Membership.JOIN:
user_left_room(self.distributor, target, room_id)
yield self._user_left_room(target, room_id)
defer.returnValue(event)
@defer.inlineCallbacks
def remote_join(self, remote_room_hosts, room_id, user, content):
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
# We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
yield self.hs.get_handlers().federation_handler.do_invite_join(
remote_room_hosts,
room_id,
user.to_string(),
content,
)
yield user_joined_room(self.distributor, user, room_id)
@defer.inlineCallbacks
def update_membership(
self,
@@ -204,8 +275,7 @@ class RoomMemberHandler(BaseHandler):
# if this is a join with a 3pid signature, we may need to turn a 3pid
# invite into a normal invite before we can handle the join.
if third_party_signed is not None:
replication = self.hs.get_replication_layer()
yield replication.exchange_third_party_invite(
yield self.federation_handler.exchange_third_party_invite(
third_party_signed["sender"],
target.to_string(),
room_id,
@@ -226,7 +296,7 @@ class RoomMemberHandler(BaseHandler):
requester.user,
)
if not is_requester_admin:
if self.hs.config.block_non_admin_invites:
if self.config.block_non_admin_invites:
logger.info(
"Blocking invite: user is not admin and non-admin "
"invites disabled"
@@ -244,7 +314,12 @@ class RoomMemberHandler(BaseHandler):
403, "Invites have been disabled on this server",
)
latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
prev_events_and_hashes = yield self.store.get_prev_events_for_room(
room_id,
)
latest_event_ids = (
event_id for (event_id, _, _) in prev_events_and_hashes
)
current_state_ids = yield self.state_handler.get_current_state_ids(
room_id, latest_event_ids=latest_event_ids,
)
@@ -285,7 +360,7 @@ class RoomMemberHandler(BaseHandler):
raise AuthError(403, "Guest access not allowed")
if not is_host_in_room:
inviter = yield self.get_inviter(target.to_string(), room_id)
inviter = yield self._get_inviter(target.to_string(), room_id)
if inviter and not self.hs.is_mine(inviter):
remote_room_hosts.append(inviter.domain)
@@ -299,15 +374,15 @@ class RoomMemberHandler(BaseHandler):
if requester.is_guest:
content["kind"] = "guest"
ret = yield self.remote_join(
remote_room_hosts, room_id, target, content
ret = yield self._remote_join(
requester, remote_room_hosts, room_id, target, content
)
defer.returnValue(ret)
elif effective_membership_state == Membership.LEAVE:
if not is_host_in_room:
# perhaps we've been invited
inviter = yield self.get_inviter(target.to_string(), room_id)
inviter = yield self._get_inviter(target.to_string(), room_id)
if not inviter:
raise SynapseError(404, "Not a known room")
@@ -321,28 +396,10 @@ class RoomMemberHandler(BaseHandler):
else:
# send the rejection to the inviter's HS.
remote_room_hosts = remote_room_hosts + [inviter.domain]
fed_handler = self.hs.get_handlers().federation_handler
try:
ret = yield fed_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
target.to_string(),
)
defer.returnValue(ret)
except Exception as e:
# if we were unable to reject the invite, just mark
# it as rejected on our end and plough ahead.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
target.to_string(), room_id
)
defer.returnValue({})
res = yield self._remote_reject_invite(
requester, remote_room_hosts, room_id, target,
)
defer.returnValue(res)
res = yield self._local_membership_update(
requester=requester,
@@ -351,7 +408,7 @@ class RoomMemberHandler(BaseHandler):
membership=effective_membership_state,
txn_id=txn_id,
ratelimit=ratelimit,
prev_event_ids=latest_event_ids,
prev_events_and_hashes=prev_events_and_hashes,
content=content,
)
defer.returnValue(res)
@@ -438,12 +495,12 @@ class RoomMemberHandler(BaseHandler):
prev_member_event = yield self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
yield user_joined_room(self.distributor, target_user, room_id)
yield self._user_joined_room(target_user, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
if prev_member_event.membership == Membership.JOIN:
user_left_room(self.distributor, target_user, room_id)
yield self._user_left_room(target_user, room_id)
@defer.inlineCallbacks
def _can_guest_join(self, current_state_ids):
@@ -477,7 +534,7 @@ class RoomMemberHandler(BaseHandler):
Raises:
SynapseError if room alias could not be found.
"""
directory_handler = self.hs.get_handlers().directory_handler
directory_handler = self.directory_handler
mapping = yield directory_handler.get_association(room_alias)
if not mapping:
@@ -489,7 +546,7 @@ class RoomMemberHandler(BaseHandler):
defer.returnValue((RoomID.from_string(room_id), servers))
@defer.inlineCallbacks
def get_inviter(self, user_id, room_id):
def _get_inviter(self, user_id, room_id):
invite = yield self.store.get_invite_for_user_in_room(
user_id=user_id,
room_id=room_id,
@@ -508,7 +565,7 @@ class RoomMemberHandler(BaseHandler):
requester,
txn_id
):
if self.hs.config.block_non_admin_invites:
if self.config.block_non_admin_invites:
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
@@ -555,7 +612,7 @@ class RoomMemberHandler(BaseHandler):
str: the matrix ID of the 3pid, or None if it is not recognized.
"""
try:
data = yield self.hs.get_simple_http_client().get_json(
data = yield self.simple_http_client.get_json(
"%s%s/_matrix/identity/api/v1/lookup" % (id_server_scheme, id_server,),
{
"medium": medium,
@@ -566,7 +623,7 @@ class RoomMemberHandler(BaseHandler):
if "mxid" in data:
if "signatures" not in data:
raise AuthError(401, "No signatures on 3pid binding")
self.verify_any_signature(data, id_server)
yield self._verify_any_signature(data, id_server)
defer.returnValue(data["mxid"])
except IOError as e:
@@ -574,11 +631,11 @@ class RoomMemberHandler(BaseHandler):
defer.returnValue(None)
@defer.inlineCallbacks
def verify_any_signature(self, data, server_hostname):
def _verify_any_signature(self, data, server_hostname):
if server_hostname not in data["signatures"]:
raise AuthError(401, "No signature from server %s" % (server_hostname,))
for key_name, signature in data["signatures"][server_hostname].items():
key_data = yield self.hs.get_simple_http_client().get_json(
key_data = yield self.simple_http_client.get_json(
"%s%s/_matrix/identity/api/v1/pubkey/%s" %
(id_server_scheme, server_hostname, key_name,),
)
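
_verify_any_signature checks the identity server's response against the public key it serves; a condensed sketch using the signedjson API imported elsewhere in this changeset (the key fetching is elided, and SignatureVerifyException is assumed to be signedjson's failure exception):

from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json, SignatureVerifyException
from unpaddedbase64 import decode_base64

def check_signature(data, server_hostname, key_name, b64_pubkey):
    # b64_pubkey is the unpadded-base64 public key fetched from
    # /_matrix/identity/api/v1/pubkey/<key_name>, as above.
    verify_key = decode_verify_key_bytes(key_name, decode_base64(b64_pubkey))
    try:
        verify_signed_json(data, server_hostname, verify_key)
        return True
    except SignatureVerifyException:
        return False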
@@ -603,7 +660,7 @@ class RoomMemberHandler(BaseHandler):
user,
txn_id
):
room_state = yield self.hs.get_state_handler().get_current_state(room_id)
room_state = yield self.state_handler.get_current_state(room_id)
inviter_display_name = ""
inviter_avatar_url = ""
@@ -634,6 +691,7 @@ class RoomMemberHandler(BaseHandler):
token, public_keys, fallback_public_key, display_name = (
yield self._ask_id_server_for_third_party_invite(
requester=requester,
id_server=id_server,
medium=medium,
address=address,
@@ -670,6 +728,7 @@ class RoomMemberHandler(BaseHandler):
@defer.inlineCallbacks
def _ask_id_server_for_third_party_invite(
self,
requester,
id_server,
medium,
address,
@@ -686,6 +745,7 @@ class RoomMemberHandler(BaseHandler):
Asks an identity server for a third party invite.
Args:
requester (Requester)
id_server (str): hostname + optional port for the identity server.
medium (str): The literal string "email".
address (str): The third party address being invited.
@@ -727,24 +787,20 @@ class RoomMemberHandler(BaseHandler):
"sender_avatar_url": inviter_avatar_url,
}
if self.hs.config.invite_3pid_guest:
registration_handler = self.hs.get_handlers().registration_handler
guest_access_token = yield registration_handler.guest_access_token_for(
if self.config.invite_3pid_guest:
guest_access_token, guest_user_id = yield self.get_or_register_3pid_guest(
requester=requester,
medium=medium,
address=address,
inviter_user_id=inviter_user_id,
)
guest_user_info = yield self.hs.get_auth().get_user_by_access_token(
guest_access_token
)
invite_config.update({
"guest_access_token": guest_access_token,
"guest_user_id": guest_user_info["user"].to_string(),
"guest_user_id": guest_user_id,
})
data = yield self.hs.get_simple_http_client().post_urlencoded_get_json(
data = yield self.simple_http_client.post_urlencoded_get_json(
is_url,
invite_config
)
@@ -766,27 +822,6 @@ class RoomMemberHandler(BaseHandler):
display_name = data["display_name"]
defer.returnValue((token, public_keys, fallback_public_key, display_name))
@defer.inlineCallbacks
def forget(self, user, room_id):
user_id = user.to_string()
member = yield self.state_handler.get_current_state(
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
)
membership = member.membership if member else None
if membership is not None and membership not in [
Membership.LEAVE, Membership.BAN
]:
raise SynapseError(400, "User %s in room %s" % (
user_id, room_id
))
if membership:
yield self.store.forget(user_id, room_id)
@defer.inlineCallbacks
def _is_host_in_room(self, current_state_ids):
# Have we just created the room, and is this about to be the very
@@ -808,3 +843,102 @@ class RoomMemberHandler(BaseHandler):
defer.returnValue(True)
defer.returnValue(False)
class RoomMemberMasterHandler(RoomMemberHandler):
def __init__(self, hs):
super(RoomMemberMasterHandler, self).__init__(hs)
self.distributor = hs.get_distributor()
self.distributor.declare("user_joined_room")
self.distributor.declare("user_left_room")
@defer.inlineCallbacks
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
"""Implements RoomMemberHandler._remote_join
"""
# filter ourselves out of remote_room_hosts: do_invite_join ignores it
# and if it is the only entry we'd like to return a 404 rather than a
# 500.
remote_room_hosts = [
host for host in remote_room_hosts if host != self.hs.hostname
]
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
# We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
yield self.federation_handler.do_invite_join(
remote_room_hosts,
room_id,
user.to_string(),
content,
)
yield self._user_joined_room(user, room_id)
@defer.inlineCallbacks
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
"""Implements RoomMemberHandler._remote_reject_invite
"""
fed_handler = self.federation_handler
try:
ret = yield fed_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
target.to_string(),
)
defer.returnValue(ret)
except Exception as e:
# if we were unable to reject the invite, just mark
# it as rejected on our end and plough ahead.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
target.to_string(), room_id
)
defer.returnValue({})
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Implements RoomMemberHandler.get_or_register_3pid_guest
"""
rg = self.registration_handler
return rg.get_or_register_3pid_guest(medium, address, inviter_user_id)
def _user_joined_room(self, target, room_id):
"""Implements RoomMemberHandler._user_joined_room
"""
return user_joined_room(self.distributor, target, room_id)
def _user_left_room(self, target, room_id):
"""Implements RoomMemberHandler._user_left_room
"""
return user_left_room(self.distributor, target, room_id)
@defer.inlineCallbacks
def forget(self, user, room_id):
user_id = user.to_string()
member = yield self.state_handler.get_current_state(
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
)
membership = member.membership if member else None
if membership is not None and membership not in [
Membership.LEAVE, Membership.BAN
]:
raise SynapseError(400, "User %s in room %s" % (
user_id, room_id
))
if membership:
yield self.store.forget(user_id, room_id)


@@ -0,0 +1,102 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.handlers.room_member import RoomMemberHandler
from synapse.replication.http.membership import (
remote_join, remote_reject_invite, get_or_register_3pid_guest,
notify_user_membership_change,
)
logger = logging.getLogger(__name__)
class RoomMemberWorkerHandler(RoomMemberHandler):
@defer.inlineCallbacks
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
"""Implements RoomMemberHandler._remote_join
"""
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
ret = yield remote_join(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
requester=requester,
remote_room_hosts=remote_room_hosts,
room_id=room_id,
user_id=user.to_string(),
content=content,
)
yield self._user_joined_room(user, room_id)
defer.returnValue(ret)
def _remote_reject_invite(self, requester, remote_room_hosts, room_id, target):
"""Implements RoomMemberHandler._remote_reject_invite
"""
return remote_reject_invite(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
requester=requester,
remote_room_hosts=remote_room_hosts,
room_id=room_id,
user_id=target.to_string(),
)
def _user_joined_room(self, target, room_id):
"""Implements RoomMemberHandler._user_joined_room
"""
return notify_user_membership_change(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
user_id=target.to_string(),
room_id=room_id,
change="joined",
)
def _user_left_room(self, target, room_id):
"""Implements RoomMemberHandler._user_left_room
"""
return notify_user_membership_change(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
user_id=target.to_string(),
room_id=room_id,
change="left",
)
def get_or_register_3pid_guest(self, requester, medium, address, inviter_user_id):
"""Implements RoomMemberHandler.get_or_register_3pid_guest
"""
return get_or_register_3pid_guest(
self.simple_http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
requester=requester,
medium=medium,
address=address,
inviter_user_id=inviter_user_id,
)


@@ -15,7 +15,7 @@
from synapse.api.constants import Membership, EventTypes
from synapse.util.async import concurrently_execute
from synapse.util.logcontext import LoggingContext, make_deferred_yieldable, preserve_fn
from synapse.util.logcontext import LoggingContext
from synapse.util.metrics import Measure, measure_func
from synapse.util.caches.response_cache import ResponseCache
from synapse.push.clientformat import format_push_rules_for_user
@@ -52,6 +52,7 @@ class TimelineBatch(collections.namedtuple("TimelineBatch", [
to tell if room needs to be part of the sync result.
"""
return bool(self.events)
__bool__ = __nonzero__ # python3
class JoinedSyncResult(collections.namedtuple("JoinedSyncResult", [
@@ -76,6 +77,7 @@ class JoinedSyncResult(collections.namedtuple("JoinedSyncResult", [
# nb the notification count does not, er, count: if there's nothing
# else in the result, we don't need to send it.
)
__bool__ = __nonzero__ # python3
class ArchivedSyncResult(collections.namedtuple("ArchivedSyncResult", [
@@ -95,6 +97,7 @@ class ArchivedSyncResult(collections.namedtuple("ArchivedSyncResult", [
or self.state
or self.account_data
)
__bool__ = __nonzero__ # python3
class InvitedSyncResult(collections.namedtuple("InvitedSyncResult", [
@@ -106,6 +109,7 @@ class InvitedSyncResult(collections.namedtuple("InvitedSyncResult", [
def __nonzero__(self):
"""Invited rooms should always be reported to the client"""
return True
__bool__ = __nonzero__ # python3
class GroupsSyncResult(collections.namedtuple("GroupsSyncResult", [
@@ -117,6 +121,7 @@ class GroupsSyncResult(collections.namedtuple("GroupsSyncResult", [
def __nonzero__(self):
return bool(self.join or self.invite or self.leave)
__bool__ = __nonzero__ # python3
class DeviceLists(collections.namedtuple("DeviceLists", [
@@ -127,6 +132,7 @@ class DeviceLists(collections.namedtuple("DeviceLists", [
def __nonzero__(self):
return bool(self.changed or self.left)
__bool__ = __nonzero__ # python3
class SyncResult(collections.namedtuple("SyncResult", [
@@ -159,6 +165,7 @@ class SyncResult(collections.namedtuple("SyncResult", [
self.device_lists or
self.groups
)
__bool__ = __nonzero__ # python3
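
Each of these namedtuples defines truthiness via __nonzero__, which Python 3 renamed to __bool__; aliasing one to the other keeps the behaviour identical on both interpreters. A self-contained example of the idiom:

import collections

class DeviceLists(collections.namedtuple("DeviceLists", ["changed", "left"])):
    def __nonzero__(self):
        # Python 2 consults __nonzero__ for truthiness...
        return bool(self.changed or self.left)
    __bool__ = __nonzero__  # ...Python 3 consults __bool__ instead

assert not DeviceLists(changed=[], left=[])
assert DeviceLists(changed=["@user:example.com"], left=[])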
class SyncHandler(object):
@@ -169,7 +176,7 @@ class SyncHandler(object):
self.presence_handler = hs.get_presence_handler()
self.event_sources = hs.get_event_sources()
self.clock = hs.get_clock()
self.response_cache = ResponseCache(hs)
self.response_cache = ResponseCache(hs, "sync")
self.state = hs.get_state_handler()
def wait_for_sync_for_user(self, sync_config, since_token=None, timeout=0,
@@ -180,15 +187,11 @@ class SyncHandler(object):
Returns:
A Deferred SyncResult.
"""
result = self.response_cache.get(sync_config.request_key)
if not result:
result = self.response_cache.set(
sync_config.request_key,
preserve_fn(self._wait_for_sync_for_user)(
sync_config, since_token, timeout, full_state
)
)
return make_deferred_yieldable(result)
return self.response_cache.wrap(
sync_config.request_key,
self._wait_for_sync_for_user,
sync_config, since_token, timeout, full_state,
)
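The wrap() helper folds the old get/set/preserve_fn sequence into a single call that deduplicates concurrent requests sharing a key. A simplified synchronous sketch of the idea (the real ResponseCache works with Deferreds and logcontexts):

class SimpleResponseCache(object):
    def __init__(self):
        self._pending = {}   # key -> in-flight/cached result

    def wrap(self, key, callback, *args, **kwargs):
        # Only the first caller for a given key runs the callback;
        # everyone else gets the same result back.
        if key not in self._pending:
            self._pending[key] = callback(*args, **kwargs)
        return self._pending[key]

cache = SimpleResponseCache()
assert cache.wrap("k", object) is cache.wrap("k", object)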
@defer.inlineCallbacks
def _wait_for_sync_for_user(self, sync_config, since_token, timeout,
@@ -235,10 +238,10 @@ class SyncHandler(object):
defer.returnValue(rules)
@defer.inlineCallbacks
def ephemeral_by_room(self, sync_config, now_token, since_token=None):
def ephemeral_by_room(self, sync_result_builder, now_token, since_token=None):
"""Get the ephemeral events for each room the user is in
Args:
sync_config (SyncConfig): The flags, filters and user for the sync.
sync_result_builder(SyncResultBuilder)
now_token (StreamToken): Where the server is currently up to.
since_token (StreamToken): Where the server was when the client
last synced.
@@ -248,10 +251,12 @@ class SyncHandler(object):
typing events for that room.
"""
sync_config = sync_result_builder.sync_config
with Measure(self.clock, "ephemeral_by_room"):
typing_key = since_token.typing_key if since_token else "0"
room_ids = yield self.store.get_rooms_for_user(sync_config.user.to_string())
room_ids = sync_result_builder.joined_room_ids
typing_source = self.event_sources.sources["typing"]
typing, typing_key = yield typing_source.get_new_events(
@@ -565,10 +570,22 @@ class SyncHandler(object):
# Always use the `now_token` in `SyncResultBuilder`
now_token = yield self.event_sources.get_current_token()
user_id = sync_config.user.to_string()
app_service = self.store.get_app_service_by_user_id(user_id)
if app_service:
# We no longer support AS users using /sync directly.
# See https://github.com/matrix-org/matrix-doc/issues/1144
raise NotImplementedError()
else:
joined_room_ids = yield self.get_rooms_for_user_at(
user_id, now_token.room_stream_id,
)
sync_result_builder = SyncResultBuilder(
sync_config, full_state,
since_token=since_token,
now_token=now_token,
joined_room_ids=joined_room_ids,
)
account_data_by_room = yield self._generate_sync_entry_for_account_data(
@@ -603,7 +620,6 @@ class SyncHandler(object):
device_id = sync_config.device_id
one_time_key_counts = {}
if device_id:
user_id = sync_config.user.to_string()
one_time_key_counts = yield self.store.count_e2e_one_time_keys(
user_id, device_id
)
@@ -891,7 +907,7 @@ class SyncHandler(object):
ephemeral_by_room = {}
else:
now_token, ephemeral_by_room = yield self.ephemeral_by_room(
sync_result_builder.sync_config,
sync_result_builder,
now_token=sync_result_builder.now_token,
since_token=sync_result_builder.since_token,
)
@@ -996,15 +1012,8 @@ class SyncHandler(object):
if rooms_changed:
defer.returnValue(True)
app_service = self.store.get_app_service_by_user_id(user_id)
if app_service:
rooms = yield self.store.get_app_service_rooms(app_service)
joined_room_ids = set(r.room_id for r in rooms)
else:
joined_room_ids = yield self.store.get_rooms_for_user(user_id)
stream_id = RoomStreamToken.parse_stream_token(since_token.room_key).stream
for room_id in joined_room_ids:
for room_id in sync_result_builder.joined_room_ids:
if self.store.has_room_changed_since(room_id, stream_id):
defer.returnValue(True)
defer.returnValue(False)
@@ -1028,13 +1037,6 @@ class SyncHandler(object):
assert since_token
app_service = self.store.get_app_service_by_user_id(user_id)
if app_service:
rooms = yield self.store.get_app_service_rooms(app_service)
joined_room_ids = set(r.room_id for r in rooms)
else:
joined_room_ids = yield self.store.get_rooms_for_user(user_id)
# Get a list of membership change events that have happened.
rooms_changed = yield self.store.get_membership_changes_for_user(
user_id, since_token.room_key, now_token.room_key
@@ -1057,7 +1059,7 @@ class SyncHandler(object):
# we do send down the room, and with full state, where necessary
old_state_ids = None
if room_id in joined_room_ids and non_joins:
if room_id in sync_result_builder.joined_room_ids and non_joins:
# Always include if the user (re)joined the room, especially
# important so that device list changes are calculated correctly.
# If there are non join member events, but we are still in the room,
@@ -1067,7 +1069,7 @@ class SyncHandler(object):
# User is in the room so we don't need to do the invite/leave checks
continue
if room_id in joined_room_ids or has_join:
if room_id in sync_result_builder.joined_room_ids or has_join:
old_state_ids = yield self.get_state_at(room_id, since_token)
old_mem_ev_id = old_state_ids.get((EventTypes.Member, user_id), None)
old_mem_ev = None
@@ -1079,7 +1081,7 @@ class SyncHandler(object):
newly_joined_rooms.append(room_id)
# If user is in the room then we don't need to do the invite/leave checks
if room_id in joined_room_ids:
if room_id in sync_result_builder.joined_room_ids:
continue
if not non_joins:
@@ -1146,7 +1148,7 @@ class SyncHandler(object):
# Get all events for rooms we're currently joined to.
room_to_events = yield self.store.get_room_events_stream_for_rooms(
room_ids=joined_room_ids,
room_ids=sync_result_builder.joined_room_ids,
from_key=since_token.room_key,
to_key=now_token.room_key,
limit=timeline_limit + 1,
@@ -1154,7 +1156,7 @@ class SyncHandler(object):
# We loop through all room ids, even if there are no new events, in case
# there are non-room events that we need to notify about.
for room_id in joined_room_ids:
for room_id in sync_result_builder.joined_room_ids:
room_entry = room_to_events.get(room_id, None)
if room_entry:
@@ -1362,6 +1364,54 @@ class SyncHandler(object):
else:
raise Exception("Unrecognized rtype: %r", room_builder.rtype)
@defer.inlineCallbacks
def get_rooms_for_user_at(self, user_id, stream_ordering):
"""Get set of joined rooms for a user at the given stream ordering.
The stream ordering *must* be recent, otherwise this may throw an
exception if it is more than a month old. (This function is called with the
current token, which should be perfectly fine).
Args:
user_id (str)
stream_ordering (int)
Returns:
Deferred[frozenset[str]]: Set of room_ids the user is in at given
stream_ordering.
"""
joined_rooms = yield self.store.get_rooms_for_user_with_stream_ordering(
user_id,
)
joined_room_ids = set()
# We need to check that the stream ordering of the join for each room
# is before the stream_ordering asked for. This might not be the case
# if the user joins a room between us getting the current token and
# calling `get_rooms_for_user_with_stream_ordering`.
# If the membership's stream ordering is after the given stream
# ordering, we need to go and work out if the user was in the room
# before.
for room_id, membership_stream_ordering in joined_rooms:
if membership_stream_ordering <= stream_ordering:
joined_room_ids.add(room_id)
continue
logger.info("User joined room after current token: %s", room_id)
extrems = yield self.store.get_forward_extremeties_for_room(
room_id, stream_ordering,
)
users_in_room = yield self.state.get_current_user_in_room(
room_id, extrems,
)
if user_id in users_in_room:
joined_room_ids.add(room_id)
joined_room_ids = frozenset(joined_room_ids)
defer.returnValue(joined_room_ids)
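The common case of get_rooms_for_user_at reduces to a plain filter on stream orderings; a self-contained sketch (room IDs illustrative):

def rooms_at(joined_rooms, stream_ordering):
    # Keep memberships whose join is at or before the requested point;
    # later joins need the extra extremities/state check above.
    return frozenset(
        room_id
        for room_id, membership_stream_ordering in joined_rooms
        if membership_stream_ordering <= stream_ordering
    )

assert rooms_at([("!a:hs", 5), ("!b:hs", 9)], 7) == frozenset(["!a:hs"])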
def _action_has_highlight(actions):
for action in actions:
@@ -1411,7 +1461,8 @@ def _calculate_state(timeline_contains, timeline_start, previous, current):
class SyncResultBuilder(object):
"Used to help build up a new SyncResult for a user"
def __init__(self, sync_config, full_state, since_token, now_token):
def __init__(self, sync_config, full_state, since_token, now_token,
joined_room_ids):
"""
Args:
sync_config(SyncConfig)
@@ -1423,6 +1474,7 @@ class SyncResultBuilder(object):
self.full_state = full_state
self.since_token = since_token
self.now_token = now_token
self.joined_room_ids = joined_room_ids
self.presence = []
self.account_data = []

View File

@@ -56,7 +56,7 @@ class TypingHandler(object):
self.federation = hs.get_federation_sender()
hs.get_replication_layer().register_edu_handler("m.typing", self._recv_edu)
hs.get_federation_registry().register_edu_handler("m.typing", self._recv_edu)
hs.get_distributor().observe("user_left_room", self.user_left_room)

View File

@@ -12,8 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import socket
from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
from twisted.internet import defer, reactor
from twisted.internet.error import ConnectError
@@ -33,7 +31,7 @@ SERVER_CACHE = {}
# our record of an individual server which can be tried to reach a destination.
#
# "host" is actually a dotted-quad or ipv6 address string. Except when there's
# "host" is the hostname acquired from the SRV record. Except when there's
# no SRV record, in which case it is the original hostname.
_Server = collections.namedtuple(
"_Server", "priority weight host port expires"
@@ -297,20 +295,13 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t
payload = answer.payload
hosts = yield _get_hosts_for_srv_record(
dns_client, str(payload.target)
)
for (ip, ttl) in hosts:
host_ttl = min(answer.ttl, ttl)
servers.append(_Server(
host=ip,
port=int(payload.port),
priority=int(payload.priority),
weight=int(payload.weight),
expires=int(clock.time()) + host_ttl,
))
servers.append(_Server(
host=str(payload.target),
port=int(payload.port),
priority=int(payload.priority),
weight=int(payload.weight),
expires=int(clock.time()) + answer.ttl,
))
servers.sort()
cache[service_name] = list(servers)
@@ -328,81 +319,3 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t
raise e
defer.returnValue(servers)
@defer.inlineCallbacks
def _get_hosts_for_srv_record(dns_client, host):
"""Look up each of the hosts in a SRV record
Args:
dns_client (twisted.names.dns.IResolver):
host (basestring): host to look up
Returns:
Deferred[list[(str, int)]]: a list of (host, ttl) pairs
"""
ip4_servers = []
ip6_servers = []
def cb(res):
# lookupAddress and lookupIP6Address return a three-tuple
# giving the answer, authority, and additional sections of the
# response.
#
# we only care about the answers.
return res[0]
def eb(res, record_type):
if res.check(DNSNameError):
return []
logger.warn("Error looking up %s for %s: %s", record_type, host, res)
return res
# no logcontexts here, so we can safely fire these off and gatherResults
d1 = dns_client.lookupAddress(host).addCallbacks(
cb, eb, errbackArgs=("A", ))
d2 = dns_client.lookupIPV6Address(host).addCallbacks(
cb, eb, errbackArgs=("AAAA", ))
results = yield defer.DeferredList(
[d1, d2], consumeErrors=True)
# if all of the lookups failed, raise an exception rather than blowing out
# the cache with an empty result.
if results and all(s == defer.FAILURE for (s, _) in results):
defer.returnValue(results[0][1])
for (success, result) in results:
if success == defer.FAILURE:
continue
for answer in result:
if not answer.payload:
continue
try:
if answer.type == dns.A:
ip = answer.payload.dottedQuad()
ip4_servers.append((ip, answer.ttl))
elif answer.type == dns.AAAA:
ip = socket.inet_ntop(
socket.AF_INET6, answer.payload.address,
)
ip6_servers.append((ip, answer.ttl))
else:
# the most likely candidate here is a CNAME record.
# rfc2782 says srvs may not point to aliases.
logger.warn(
"Ignoring unexpected DNS record type %s for %s",
answer.type, host,
)
continue
except Exception as e:
logger.warn("Ignoring invalid DNS response for %s: %s",
host, e)
continue
# keep the ipv4 results before the ipv6 results, mostly to match historical
# behaviour.
defer.returnValue(ip4_servers + ip6_servers)
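The net effect of this change: resolve_service now records the SRV target hostname directly instead of pre-resolving it to A/AAAA addresses, and the cache entry expires with the SRV answer's own TTL. A small sketch of building such an entry (field values illustrative):

import collections

_Server = collections.namedtuple(
    "_Server", "priority weight host port expires"
)

def server_from_srv(target, port, priority, weight, ttl, now):
    return _Server(
        host=str(target),        # hostname from the SRV record, not an IP
        port=int(port),
        priority=int(priority),
        weight=int(weight),
        expires=int(now) + ttl,  # entry lives as long as the SRV TTL
    )

print(server_from_srv("hs.example.com", 8448, 10, 5, 300, 1000))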

View File

@@ -286,7 +286,8 @@ class MatrixFederationHttpClient(object):
headers_dict[b"Authorization"] = auth_headers
@defer.inlineCallbacks
def put_json(self, destination, path, data={}, json_data_callback=None,
def put_json(self, destination, path, args={}, data={},
json_data_callback=None,
long_retries=False, timeout=None,
ignore_backoff=False,
backoff_on_404=False):
@@ -296,6 +297,7 @@ class MatrixFederationHttpClient(object):
destination (str): The remote server to send the HTTP request
to.
path (str): The HTTP path.
args (dict): query params
data (dict): A dict containing the data that will be used as
the request body. This will be encoded as JSON.
json_data_callback (callable): A callable returning the dict to
@@ -342,6 +344,7 @@ class MatrixFederationHttpClient(object):
path,
body_callback=body_callback,
headers_dict={"Content-Type": ["application/json"]},
query_bytes=encode_query_args(args),
long_retries=long_retries,
timeout=timeout,
ignore_backoff=ignore_backoff,
@@ -373,6 +376,7 @@ class MatrixFederationHttpClient(object):
giving up. None indicates no timeout.
ignore_backoff (bool): true to ignore the historical backoff data and
try the request anyway.
args (dict): query params
Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.
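For illustration, the new args parameter amounts to URL-encoding a query dict onto the request path; a stand-alone approximation (the helper below is hypothetical, not Synapse's encode_query_args):

try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

def build_uri(destination, path, args):
    return "https://%s%s?%s" % (
        destination, path, urlencode(sorted(args.items())),
    )

assert (build_uri("hs.example.com", "/_matrix/x", {"limit": "10"})
        == "https://hs.example.com/_matrix/x?limit=10")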

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -36,7 +37,7 @@ from twisted.web.util import redirectTo
import collections
import logging
import urllib
import ujson
import simplejson
logger = logging.getLogger(__name__)
@@ -59,6 +60,11 @@ response_count = metrics.register_counter(
)
)
requests_counter = metrics.register_counter(
"requests_received",
labels=["method", "servlet", ],
)
outgoing_responses_counter = metrics.register_counter(
"responses",
labels=["method", "code"],
@@ -107,6 +113,11 @@ response_db_sched_duration = metrics.register_counter(
"response_db_sched_duration_seconds", labels=["method", "servlet", "tag"]
)
# size in bytes of the response written
response_size = metrics.register_counter(
"response_size", labels=["method", "servlet", "tag"]
)
_next_request_id = 0
@@ -145,7 +156,8 @@ def wrap_request_handler(request_handler, include_metrics=False):
# at the servlet name. For most requests that name will be
# JsonResource (or a subclass), and JsonResource._async_render
# will update it once it picks a servlet.
request_metrics.start(self.clock, name=self.__class__.__name__)
servlet_name = self.__class__.__name__
request_metrics.start(self.clock, name=servlet_name)
request_context.request = request_id
with request.processing():
@@ -154,6 +166,7 @@ def wrap_request_handler(request_handler, include_metrics=False):
if include_metrics:
yield request_handler(self, request, request_metrics)
else:
requests_counter.inc(request.method, servlet_name)
yield request_handler(self, request)
except CodeMessageException as e:
code = e.code
@@ -229,7 +242,7 @@ class JsonResource(HttpServer, resource.Resource):
""" This implements the HttpServer interface and provides JSON support for
Resources.
Register callbacks via register_path()
Register callbacks via register_paths()
Callbacks can return a tuple of status code and a dict in which case the
dict will automatically be sent to the client as a JSON object.
@@ -276,49 +289,59 @@ class JsonResource(HttpServer, resource.Resource):
This checks if anyone has registered a callback for that method and
path.
"""
if request.method == "OPTIONS":
self._send_response(request, 200, {})
return
callback, group_dict = self._get_handler_for_request(request)
servlet_instance = getattr(callback, "__self__", None)
if servlet_instance is not None:
servlet_classname = servlet_instance.__class__.__name__
else:
servlet_classname = "%r" % callback
request_metrics.name = servlet_classname
requests_counter.inc(request.method, servlet_classname)
# Now trigger the callback. If it returns a response, we send it
# here. If it throws an exception, that is handled by the wrapper
# installed by @request_handler.
kwargs = intern_dict({
name: urllib.unquote(value).decode("UTF-8") if value else value
for name, value in group_dict.items()
})
callback_return = yield callback(request, **kwargs)
if callback_return is not None:
code, response = callback_return
self._send_response(request, code, response)
def _get_handler_for_request(self, request):
"""Finds a callback method to handle the given request
Args:
request (twisted.web.http.Request):
Returns:
Tuple[Callable, dict[str, str]]: callback method, and the dict
mapping keys to path components as specified in the handler's
path match regexp.
The callback will normally be a method registered via
register_paths, so will return (possibly via Deferred) either
None, or a tuple of (http code, response body).
"""
if request.method == b"OPTIONS":
return _options_handler, {}
# Loop through all the registered callbacks to check if the method
# and path regex match
for path_entry in self.path_regexs.get(request.method, []):
m = path_entry.pattern.match(request.path)
if not m:
continue
# We found a match! First update the metrics object to indicate
# which servlet is handling the request.
callback = path_entry.callback
servlet_instance = getattr(callback, "__self__", None)
if servlet_instance is not None:
servlet_classname = servlet_instance.__class__.__name__
else:
servlet_classname = "%r" % callback
request_metrics.name = servlet_classname
# Now trigger the callback. If it returns a response, we send it
# here. If it throws an exception, that is handled by the wrapper
# installed by @request_handler.
kwargs = intern_dict({
name: urllib.unquote(value).decode("UTF-8") if value else value
for name, value in m.groupdict().items()
})
callback_return = yield callback(request, **kwargs)
if callback_return is not None:
code, response = callback_return
self._send_response(request, code, response)
return
if m:
# We found a match!
return path_entry.callback, m.groupdict()
# Huh. No one wanted to handle that? Fiiiiiine. Send 400.
request_metrics.name = self.__class__.__name__ + ".UnrecognizedRequest"
raise UnrecognizedRequestError()
return _unrecognised_request_handler, {}
def _send_response(self, request, code, response_json_object,
response_code_message=None):
@@ -335,6 +358,34 @@ class JsonResource(HttpServer, resource.Resource):
)
def _options_handler(request):
"""Request handler for OPTIONS requests
This is a request handler suitable for return from
_get_handler_for_request. It returns a 200 and an empty body.
Args:
request (twisted.web.http.Request):
Returns:
Tuple[int, dict]: http code, response body.
"""
return 200, {}
def _unrecognised_request_handler(request):
"""Request handler for unrecognised requests
This is a request handler suitable for return from
_get_handler_for_request. It actually just raises an
UnrecognizedRequestError.
Args:
request (twisted.web.http.Request):
"""
raise UnrecognizedRequestError()
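The refactor separates finding a handler from calling it, so metrics can be labelled before the callback runs and OPTIONS/unrecognised requests become ordinary handlers. A toy version of the dispatch:

import re

def unrecognised(**kwargs):
    return 400, {"error": "Unrecognized request"}

def get_handler_for_request(path_regexs, method, path):
    for pattern, callback in path_regexs.get(method, []):
        m = re.match(pattern, path)
        if m:
            # The caller updates metrics here, then invokes the callback.
            return callback, m.groupdict()
    return unrecognised, {}

routes = {
    "GET": [(r"^/rooms/(?P<room_id>[^/]+)$",
             lambda room_id: (200, {"room_id": room_id}))],
}
callback, kwargs = get_handler_for_request(routes, "GET", "/rooms/abc")
assert callback(**kwargs) == (200, {"room_id": "abc"})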
class RequestMetrics(object):
def start(self, clock, name):
self.start = clock.time_msec()
@@ -380,6 +431,8 @@ class RequestMetrics(object):
context.db_sched_duration_ms / 1000., request.method, self.name, tag
)
response_size.inc_by(request.sentLength, request.method, self.name, tag)
class RootRedirect(resource.Resource):
"""Redirects the root '/' path to another path."""
@@ -415,8 +468,7 @@ def respond_with_json(request, code, json_object, send_cors=False,
if canonical_json or synapse.events.USE_FROZEN_DICTS:
json_bytes = encode_canonical_json(json_object)
else:
# ujson doesn't like frozen_dicts.
json_bytes = ujson.dumps(json_object, ensure_ascii=False)
json_bytes = simplejson.dumps(json_object)
return respond_with_json_bytes(
request, code, json_bytes,
@@ -443,6 +495,7 @@ def respond_with_json_bytes(request, code, json_bytes, send_cors=False,
request.setHeader(b"Content-Type", b"application/json")
request.setHeader(b"Server", version_string)
request.setHeader(b"Content-Length", b"%d" % (len(json_bytes),))
request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")
if send_cors:
set_cors_headers(request)
@@ -490,7 +543,7 @@ def finish_request(request):
def _request_user_agent_is_curl(request):
user_agents = request.requestHeaders.getRawHeaders(
"User-Agent", default=[]
b"User-Agent", default=[]
)
for user_agent in user_agents:
if "curl" in user_agent:

View File

@@ -20,7 +20,7 @@ import logging
import re
import time
ACCESS_TOKEN_RE = re.compile(r'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
ACCESS_TOKEN_RE = re.compile(br'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
class SynapseRequest(Request):
@@ -43,12 +43,12 @@ class SynapseRequest(Request):
def get_redacted_uri(self):
return ACCESS_TOKEN_RE.sub(
r'\1<redacted>\3',
br'\1<redacted>\3',
self.uri
)
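A quick self-contained check that the bytes pattern redacts as intended:

import re

ACCESS_TOKEN_RE = re.compile(br'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
uri = b"/sync?access_token=secret&since=s1"
redacted = ACCESS_TOKEN_RE.sub(br'\1<redacted>\3', uri)
assert redacted == b"/sync?access_token=<redacted>&since=s1"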
def get_user_agent(self):
return self.requestHeaders.getRawHeaders("User-Agent", [None])[-1]
return self.requestHeaders.getRawHeaders(b"User-Agent", [None])[-1]
def started_processing(self):
self.site.access_logger.info(

View File

@@ -17,12 +17,13 @@ import logging
import functools
import time
import gc
import platform
from twisted.internet import reactor
from .metric import (
CounterMetric, CallbackMetric, DistributionMetric, CacheMetric,
MemoryUsageMetric,
MemoryUsageMetric, GaugeMetric,
)
from .process_collector import register_process_collector
@@ -30,6 +31,7 @@ from .process_collector import register_process_collector
logger = logging.getLogger(__name__)
running_on_pypy = platform.python_implementation() == 'PyPy'
all_metrics = []
all_collectors = []
@@ -57,15 +59,38 @@ class Metrics(object):
return metric
def register_counter(self, *args, **kwargs):
"""
Returns:
CounterMetric
"""
return self._register(CounterMetric, *args, **kwargs)
def register_gauge(self, *args, **kwargs):
"""
Returns:
GaugeMetric
"""
return self._register(GaugeMetric, *args, **kwargs)
def register_callback(self, *args, **kwargs):
"""
Returns:
CallbackMetric
"""
return self._register(CallbackMetric, *args, **kwargs)
def register_distribution(self, *args, **kwargs):
"""
Returns:
DistributionMetric
"""
return self._register(DistributionMetric, *args, **kwargs)
def register_cache(self, *args, **kwargs):
"""
Returns:
CacheMetric
"""
return self._register(CacheMetric, *args, **kwargs)
@@ -126,6 +151,32 @@ reactor_metrics = get_metrics_for("python.twisted.reactor")
tick_time = reactor_metrics.register_distribution("tick_time")
pending_calls_metric = reactor_metrics.register_distribution("pending_calls")
synapse_metrics = get_metrics_for("synapse")
# Used to track where various components have processed in the event stream,
# e.g. federation sending, appservice sending, etc.
event_processing_positions = synapse_metrics.register_gauge(
"event_processing_positions", labels=["name"],
)
# Used to track the current max events stream position
event_persisted_position = synapse_metrics.register_gauge(
"event_persisted_position",
)
# Used to track the received_ts of the last event processed by various
# components
event_processing_last_ts = synapse_metrics.register_gauge(
"event_processing_last_ts", labels=["name"],
)
# Used to track the lag in processing events. This is the time difference
# between the last processed event's received_ts and the time it was
# finished being processed.
event_processing_lag = synapse_metrics.register_gauge(
"event_processing_lag", labels=["name"],
)
def runUntilCurrentTimer(func):
@@ -158,6 +209,9 @@ def runUntilCurrentTimer(func):
tick_time.inc_by(end - start)
pending_calls_metric.inc_by(num_pending)
if running_on_pypy:
return ret
# Check if we need to do a manual GC (since its been disabled), and do
# one if necessary.
threshold = gc.get_threshold()
@@ -190,6 +244,7 @@ try:
# We manually run the GC each reactor tick so that we can get some metrics
# about time spent doing GC,
gc.disable()
if not running_on_pypy:
gc.disable()
except AttributeError:
pass

View File

@@ -115,7 +115,7 @@ class CounterMetric(BaseMetric):
# dict[list[str]]: value for each set of label values. the keys are the
# label values, in the same order as the labels in self.labels.
#
# (if the metric is a scalar, the (single) key is the empty list).
# (if the metric is a scalar, the (single) key is the empty tuple).
self.counts = {}
# Scalar metrics are never empty
@@ -145,6 +145,36 @@ class CounterMetric(BaseMetric):
)
class GaugeMetric(BaseMetric):
"""A metric that can go up or down
"""
def __init__(self, *args, **kwargs):
super(GaugeMetric, self).__init__(*args, **kwargs)
# dict[tuple[str]]: value for each set of label values. the keys are the
# label values, in the same order as the labels in self.labels.
#
# (if the metric is a scalar, the (single) key is the empty tuple).
self.gauges = {}
def set(self, v, *values):
if len(values) != self.dimension():
raise ValueError(
"Expected as many values to set() as labels (%d)" % (self.dimension())
)
# TODO: should assert that the tag values are all strings
self.gauges[values] = v
def render(self):
return flatten(
self._render_for_labels(k, self.gauges[k])
for k in sorted(self.gauges.keys())
)
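A self-contained toy version of the gauge idea, for readers who want to see set() and render() in action (rendering format illustrative):

class ToyGauge(object):
    def __init__(self, name, labels=()):
        self.name = name
        self.labels = tuple(labels)
        self.values = {}   # label-value tuple -> current value

    def set(self, v, *label_values):
        if len(label_values) != len(self.labels):
            raise ValueError(
                "Expected as many values to set() as labels (%d)"
                % len(self.labels)
            )
        self.values[label_values] = v

    def render(self):
        lines = []
        for k in sorted(self.values):
            labels = ",".join('%s="%s"' % kv for kv in zip(self.labels, k))
            lines.append("%s{%s} %s" % (self.name, labels, self.values[k]))
        return lines

g = ToyGauge("event_processing_positions", labels=["name"])
g.set(1234, "federation_sender")
print("\n".join(g.render()))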
class CallbackMetric(BaseMetric):
"""A metric that returns the numeric value returned by a callback whenever
it is rendered. Typically this is used to implement gauges that yield the

View File

@@ -144,6 +144,7 @@ class _NotifierUserStream(object):
class EventStreamResult(namedtuple("EventStreamResult", ("events", "tokens"))):
def __nonzero__(self):
return bool(self.events)
__bool__ = __nonzero__ # python3
class Notifier(object):

View File

@@ -40,10 +40,6 @@ class ActionGenerator(object):
@defer.inlineCallbacks
def handle_push_actions_for_event(self, event, context):
with Measure(self.clock, "action_for_event_by_user"):
actions_by_user = yield self.bulk_evaluator.action_for_event_by_user(
yield self.bulk_evaluator.action_for_event_by_user(
event, context
)
context.push_actions = [
(uid, actions) for uid, actions in actions_by_user.iteritems()
]

View File

@@ -137,11 +137,11 @@ class BulkPushRuleEvaluator(object):
@defer.inlineCallbacks
def action_for_event_by_user(self, event, context):
"""Given an event and context, evaluate the push rules and return
the results
"""Given an event and context, evaluate the push rules and insert the
results into the event_push_actions_staging table.
Returns:
dict of user_id -> action
Deferred
"""
rules_by_user = yield self._get_rules_for_event(event, context)
actions_by_user = {}
@@ -190,9 +190,16 @@ class BulkPushRuleEvaluator(object):
if matches:
actions = [x for x in rule['actions'] if x != 'dont_notify']
if actions and 'notify' in actions:
# Push rules say we should notify the user of this event
actions_by_user[uid] = actions
break
defer.returnValue(actions_by_user)
# Mark in the DB staging area the push actions for users who should be
# notified for this event. (This will then get handled when we persist
# the event)
yield self.store.add_push_actions_to_staging(
event.event_id, actions_by_user,
)
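The staging step above can be pictured as parking actions keyed by event ID until the event is persisted; an in-memory stand-in for the event_push_actions_staging table:

staging = {}

def add_push_actions_to_staging(event_id, actions_by_user):
    staging.setdefault(event_id, {}).update(actions_by_user)

def on_event_persisted(event_id):
    # Persisting the event consumes whatever was staged for it.
    return staging.pop(event_id, {})

add_push_actions_to_staging("$evt:hs", {"@alice:hs": ["notify"]})
assert on_event_persisted("$evt:hs") == {"@alice:hs": ["notify"]}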
def _condition_checker(evaluator, conditions, uid, display_name, cache):

View File

@@ -22,19 +22,18 @@ REQUIREMENTS = {
"jsonschema>=2.5.1": ["jsonschema>=2.5.1"],
"frozendict>=0.4": ["frozendict"],
"unpaddedbase64>=1.1.0": ["unpaddedbase64>=1.1.0"],
"canonicaljson>=1.0.0": ["canonicaljson>=1.0.0"],
"canonicaljson>=1.1.3": ["canonicaljson>=1.1.3"],
"signedjson>=1.0.0": ["signedjson>=1.0.0"],
"pynacl==0.3.0": ["nacl==0.3.0", "nacl.bindings"],
"pynacl>=1.2.1": ["nacl>=1.2.1", "nacl.bindings"],
"service_identity>=1.0.0": ["service_identity>=1.0.0"],
"Twisted>=16.0.0": ["twisted>=16.0.0"],
"pyopenssl>=0.14": ["OpenSSL>=0.14"],
"pyyaml": ["yaml"],
"pyasn1": ["pyasn1"],
"daemonize": ["daemonize"],
"bcrypt": ["bcrypt"],
"bcrypt": ["bcrypt>=3.1.0"],
"pillow": ["PIL"],
"pydenticon": ["pydenticon"],
"ujson": ["ujson"],
"blist": ["blist"],
"pysaml2>=3.0.0": ["saml2>=3.0.0"],
"pymacaroons-pynacl": ["pymacaroons"],

View File

@@ -13,10 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import send_event
from synapse.http.server import JsonResource
from synapse.replication.http import membership, send_event
REPLICATION_PREFIX = "/_synapse/replication"
@@ -29,3 +27,4 @@ class ReplicationRestResource(JsonResource):
def register_servlets(self, hs):
send_event.register_servlets(hs, self)
membership.register_servlets(hs, self)

View File

@@ -0,0 +1,334 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import re
from twisted.internet import defer
from synapse.api.errors import SynapseError, MatrixCodeMessageException
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.types import Requester, UserID
from synapse.util.distributor import user_left_room, user_joined_room
logger = logging.getLogger(__name__)
@defer.inlineCallbacks
def remote_join(client, host, port, requester, remote_room_hosts,
room_id, user_id, content):
"""Ask the master to do a remote join for the given user to the given room
Args:
client (SimpleHttpClient)
host (str): host of master
port (int): port on master listening for HTTP replication
requester (Requester)
remote_room_hosts (list[str]): Servers to try and join via
room_id (str)
user_id (str)
content (dict): The event content to use for the join event
Returns:
Deferred
"""
uri = "http://%s:%s/_synapse/replication/remote_join" % (host, port)
payload = {
"requester": requester.serialize(),
"remote_room_hosts": remote_room_hosts,
"room_id": room_id,
"user_id": user_id,
"content": content,
}
try:
result = yield client.post_json_get_json(uri, payload)
except MatrixCodeMessageException as e:
# We convert to SynapseError as we know that it was a SynapseError
# on the master process that we should send to the client. (And
# importantly, not stack traces everywhere)
raise SynapseError(e.code, e.msg, e.errcode)
defer.returnValue(result)
@defer.inlineCallbacks
def remote_reject_invite(client, host, port, requester, remote_room_hosts,
room_id, user_id):
"""Ask master to reject the invite for the user and room.
Args:
client (SimpleHttpClient)
host (str): host of master
port (int): port on master listening for HTTP replication
requester (Requester)
remote_room_hosts (list[str]): Servers to try and reject via
room_id (str)
user_id (str)
Returns:
Deferred
"""
uri = "http://%s:%s/_synapse/replication/remote_reject_invite" % (host, port)
payload = {
"requester": requester.serialize(),
"remote_room_hosts": remote_room_hosts,
"room_id": room_id,
"user_id": user_id,
}
try:
result = yield client.post_json_get_json(uri, payload)
except MatrixCodeMessageException as e:
# We convert to SynapseError as we know that it was a SynapseError
# on the master process that we should send to the client. (And
# importantly, not stack traces everywhere)
raise SynapseError(e.code, e.msg, e.errcode)
defer.returnValue(result)
@defer.inlineCallbacks
def get_or_register_3pid_guest(client, host, port, requester,
medium, address, inviter_user_id):
"""Ask the master to get/create a guest account for given 3PID.
Args:
client (SimpleHttpClient)
host (str): host of master
port (int): port on master listening for HTTP replication
requester (Requester)
medium (str)
address (str)
inviter_user_id (str): The user ID who is trying to invite the
3PID
Returns:
Deferred[(str, str)]: A 2-tuple of `(user_id, access_token)` of the
3PID guest account.
"""
uri = "http://%s:%s/_synapse/replication/get_or_register_3pid_guest" % (host, port)
payload = {
"requester": requester.serialize(),
"medium": medium,
"address": address,
"inviter_user_id": inviter_user_id,
}
try:
result = yield client.post_json_get_json(uri, payload)
except MatrixCodeMessageException as e:
# We convert to SynapseError as we know that it was a SynapseError
# on the master process that we should send to the client. (And
# importantly, not stack traces everywhere)
raise SynapseError(e.code, e.msg, e.errcode)
defer.returnValue(result)
@defer.inlineCallbacks
def notify_user_membership_change(client, host, port, user_id, room_id, change):
"""Notify master that a user has joined or left the room
Args:
client (SimpleHttpClient)
host (str): host of master
port (int): port on master listening for HTTP replication.
user_id (str)
room_id (str)
change (str): Either "joined" or "left"
Returns:
Deferred
"""
assert change in ("joined", "left")
uri = "http://%s:%s/_synapse/replication/user_%s_room" % (host, port, change)
payload = {
"user_id": user_id,
"room_id": room_id,
}
try:
result = yield client.post_json_get_json(uri, payload)
except MatrixCodeMessageException as e:
# We convert to SynapseError as we know that it was a SynapseError
# on the master process that we should send to the client. (And
# importantly, not stack traces everywhere)
raise SynapseError(e.code, e.msg, e.errcode)
defer.returnValue(result)
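All four helpers in this file share one call-and-convert shape; distilled below, with stand-in exception classes (the real ones live in synapse.api.errors):

class MatrixCodeMessageException(Exception):
    def __init__(self, code, msg, errcode):
        Exception.__init__(self, msg)
        self.code, self.msg, self.errcode = code, msg, errcode

class SynapseError(Exception):
    def __init__(self, code, msg, errcode):
        Exception.__init__(self, msg)
        self.code, self.errcode = code, errcode

def call_master(post_json, uri, payload):
    try:
        return post_json(uri, payload)
    except MatrixCodeMessageException as e:
        # Surface the master's error to the client as a SynapseError
        # rather than as a stack trace.
        raise SynapseError(e.code, e.msg, e.errcode)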
class ReplicationRemoteJoinRestServlet(RestServlet):
PATTERNS = [re.compile("^/_synapse/replication/remote_join$")]
def __init__(self, hs):
super(ReplicationRemoteJoinRestServlet, self).__init__()
self.federation_handler = hs.get_handlers().federation_handler
self.store = hs.get_datastore()
self.clock = hs.get_clock()
@defer.inlineCallbacks
def on_POST(self, request):
content = parse_json_object_from_request(request)
remote_room_hosts = content["remote_room_hosts"]
room_id = content["room_id"]
user_id = content["user_id"]
event_content = content["content"]
requester = Requester.deserialize(self.store, content["requester"])
if requester.user:
request.authenticated_entity = requester.user.to_string()
logger.info(
"remote_join: %s into room: %s",
user_id, room_id,
)
yield self.federation_handler.do_invite_join(
remote_room_hosts,
room_id,
user_id,
event_content,
)
defer.returnValue((200, {}))
class ReplicationRemoteRejectInviteRestServlet(RestServlet):
PATTERNS = [re.compile("^/_synapse/replication/remote_reject_invite$")]
def __init__(self, hs):
super(ReplicationRemoteRejectInviteRestServlet, self).__init__()
self.federation_handler = hs.get_handlers().federation_handler
self.store = hs.get_datastore()
self.clock = hs.get_clock()
@defer.inlineCallbacks
def on_POST(self, request):
content = parse_json_object_from_request(request)
remote_room_hosts = content["remote_room_hosts"]
room_id = content["room_id"]
user_id = content["user_id"]
requester = Requester.deserialize(self.store, content["requester"])
if requester.user:
request.authenticated_entity = requester.user.to_string()
logger.info(
"remote_reject_invite: %s out of room: %s",
user_id, room_id,
)
try:
event = yield self.federation_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
user_id,
)
ret = event.get_pdu_json()
except Exception as e:
# if we were unable to reject the invite, just mark
# it as rejected on our end and plough ahead.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
user_id, room_id
)
ret = {}
defer.returnValue((200, ret))
class ReplicationRegister3PIDGuestRestServlet(RestServlet):
PATTERNS = [re.compile("^/_synapse/replication/get_or_register_3pid_guest$")]
def __init__(self, hs):
super(ReplicationRegister3PIDGuestRestServlet, self).__init__()
self.registration_handler = hs.get_handlers().registration_handler
self.store = hs.get_datastore()
self.clock = hs.get_clock()
@defer.inlineCallbacks
def on_POST(self, request):
content = parse_json_object_from_request(request)
medium = content["medium"]
address = content["address"]
inviter_user_id = content["inviter_user_id"]
requester = Requester.deserialize(self.store, content["requester"])
if requester.user:
request.authenticated_entity = requester.user.to_string()
logger.info("get_or_register_3pid_guest: %r", content)
ret = yield self.registration_handler.get_or_register_3pid_guest(
medium, address, inviter_user_id,
)
defer.returnValue((200, ret))
class ReplicationUserJoinedLeftRoomRestServlet(RestServlet):
PATTERNS = [re.compile("^/_synapse/replication/user_(?P<change>joined|left)_room$")]
def __init__(self, hs):
super(ReplicationUserJoinedLeftRoomRestServlet, self).__init__()
self.registration_handler = hs.get_handlers().registration_handler
self.store = hs.get_datastore()
self.clock = hs.get_clock()
self.distributor = hs.get_distributor()
def on_POST(self, request, change):
content = parse_json_object_from_request(request)
user_id = content["user_id"]
room_id = content["room_id"]
logger.info("user membership change: %s in %s", user_id, room_id)
user = UserID.from_string(user_id)
if change == "joined":
user_joined_room(self.distributor, user, room_id)
elif change == "left":
user_left_room(self.distributor, user, room_id)
else:
raise Exception("Unrecognized change: %r", change)
return (200, {})
def register_servlets(hs, http_server):
ReplicationRemoteJoinRestServlet(hs).register(http_server)
ReplicationRemoteRejectInviteRestServlet(hs).register(http_server)
ReplicationRegister3PIDGuestRestServlet(hs).register(http_server)
ReplicationUserJoinedLeftRoomRestServlet(hs).register(http_server)

View File

@@ -15,11 +15,16 @@
from twisted.internet import defer
from synapse.api.errors import (
SynapseError, MatrixCodeMessageException, CodeMessageException,
)
from synapse.events import FrozenEvent
from synapse.events.snapshot import EventContext
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.util.async import sleep
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.metrics import Measure
from synapse.types import Requester
from synapse.types import Requester, UserID
import logging
import re
@@ -27,7 +32,9 @@ import re
logger = logging.getLogger(__name__)
def send_event_to_master(client, host, port, requester, event, context):
@defer.inlineCallbacks
def send_event_to_master(client, host, port, requester, event, context,
ratelimit, extra_users):
"""Send event to be handled on the master
Args:
@@ -37,18 +44,46 @@ def send_event_to_master(client, host, port, requester, event, context):
requester (Requester)
event (FrozenEvent)
context (EventContext)
ratelimit (bool)
extra_users (list(UserID)): Any extra users to notify about event
"""
uri = "http://%s:%s/_synapse/replication/send_event" % (host, port,)
uri = "http://%s:%s/_synapse/replication/send_event/%s" % (
host, port, event.event_id,
)
payload = {
"event": event.get_pdu_json(),
"internal_metadata": event.internal_metadata.get_dict(),
"rejected_reason": event.rejected_reason,
"context": context.serialize(),
"context": context.serialize(event),
"requester": requester.serialize(),
"ratelimit": ratelimit,
"extra_users": [u.to_string() for u in extra_users],
}
return client.post_json_get_json(uri, payload)
try:
# We keep retrying the same request for timeouts. This is so that we
# have a good idea that the request has either succeeded or failed on
# the master, and so whether we should clean up or not.
while True:
try:
result = yield client.put_json(uri, payload)
break
except CodeMessageException as e:
if e.code != 504:
raise
logger.warn("send_event request timed out")
# If we timed out we probably don't need to worry about backing
# off too much, but let's just wait a little anyway.
yield sleep(1)
except MatrixCodeMessageException as e:
# We convert to SynapseError as we know that it was a SynapseError
# on the master process that we should send to the client. (And
# importantly, not stack traces everywhere)
raise SynapseError(e.code, e.msg, e.errcode)
defer.returnValue(result)
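The retry loop reduces to: re-issue the same PUT on gateway timeouts, relying on the event_id-keyed URI (and the response cache below) to make the request idempotent. A synchronous sketch, helper names illustrative:

import time

def put_with_retry(do_put, is_gateway_timeout, pause_seconds=1):
    while True:
        try:
            return do_put()
        except Exception as e:
            if not is_gateway_timeout(e):
                raise
            # Timed out: wait briefly, then repeat the identical request.
            time.sleep(pause_seconds)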
class ReplicationSendEventRestServlet(RestServlet):
@@ -57,7 +92,7 @@ class ReplicationSendEventRestServlet(RestServlet):
The API looks like:
POST /_synapse/replication/send_event
POST /_synapse/replication/send_event/:event_id
{
"event": { .. serialized event .. },
@@ -65,9 +100,11 @@ class ReplicationSendEventRestServlet(RestServlet):
"rejected_reason": .., // The event.rejected_reason field
"context": { .. serialized event context .. },
"requester": { .. serialized requester .. },
"ratelimit": true,
"extra_users": [],
}
"""
PATTERNS = [re.compile("^/_synapse/replication/send_event$")]
PATTERNS = [re.compile("^/_synapse/replication/send_event/(?P<event_id>[^/]+)$")]
def __init__(self, hs):
super(ReplicationSendEventRestServlet, self).__init__()
@@ -76,8 +113,18 @@ class ReplicationSendEventRestServlet(RestServlet):
self.store = hs.get_datastore()
self.clock = hs.get_clock()
# The responses are tiny, so we may as well cache them for a while
self.response_cache = ResponseCache(hs, "send_event", timeout_ms=30 * 60 * 1000)
def on_PUT(self, request, event_id):
return self.response_cache.wrap(
event_id,
self._handle_request,
request
)
@defer.inlineCallbacks
def on_POST(self, request):
def _handle_request(self, request):
with Measure(self.clock, "repl_send_event_parse"):
content = parse_json_object_from_request(request)
@@ -87,7 +134,10 @@ class ReplicationSendEventRestServlet(RestServlet):
event = FrozenEvent(event_dict, internal_metadata, rejected_reason)
requester = Requester.deserialize(self.store, content["requester"])
context = EventContext.deserialize(self.store, content["context"])
context = yield EventContext.deserialize(self.store, content["context"])
ratelimit = content["ratelimit"]
extra_users = [UserID.from_string(u) for u in content["extra_users"]]
if requester.user:
request.authenticated_entity = requester.user.to_string()
@@ -97,8 +147,10 @@ class ReplicationSendEventRestServlet(RestServlet):
event.event_id, event.room_id,
)
yield self.event_creation_handler.handle_new_client_event(
yield self.event_creation_handler.persist_and_notify_client_event(
requester, event, context,
ratelimit=ratelimit,
extra_users=extra_users,
)
defer.returnValue((200, {}))

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,50 +14,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
from synapse.storage import DataStore
from synapse.storage.account_data import AccountDataStore
from synapse.storage.tags import TagsStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.storage.account_data import AccountDataWorkerStore
from synapse.storage.tags import TagsWorkerStore
class SlavedAccountDataStore(BaseSlavedStore):
class SlavedAccountDataStore(TagsWorkerStore, AccountDataWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedAccountDataStore, self).__init__(db_conn, hs)
self._account_data_id_gen = SlavedIdTracker(
db_conn, "account_data_max_stream_id", "stream_id",
)
self._account_data_stream_cache = StreamChangeCache(
"AccountDataAndTagsChangeCache",
self._account_data_id_gen.get_current_token(),
)
get_account_data_for_user = (
AccountDataStore.__dict__["get_account_data_for_user"]
)
get_global_account_data_by_type_for_users = (
AccountDataStore.__dict__["get_global_account_data_by_type_for_users"]
)
get_global_account_data_by_type_for_user = (
AccountDataStore.__dict__["get_global_account_data_by_type_for_user"]
)
get_tags_for_user = TagsStore.__dict__["get_tags_for_user"]
get_tags_for_room = (
DataStore.get_tags_for_room.__func__
)
get_account_data_for_room = (
DataStore.get_account_data_for_room.__func__
)
get_updated_tags = DataStore.get_updated_tags.__func__
get_updated_account_data_for_user = (
DataStore.get_updated_account_data_for_user.__func__
)
super(SlavedAccountDataStore, self).__init__(db_conn, hs)
def get_max_account_data_stream_id(self):
return self._account_data_id_gen.get_current_token()
@@ -85,6 +56,10 @@ class SlavedAccountDataStore(BaseSlavedStore):
(row.data_type, row.user_id,)
)
self.get_account_data_for_user.invalidate((row.user_id,))
self.get_account_data_for_room.invalidate((row.user_id, row.room_id,))
self.get_account_data_for_room_and_type.invalidate(
(row.user_id, row.room_id, row.data_type,),
)
self._account_data_stream_cache.entity_has_changed(
row.user_id, token
)

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,33 +14,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import BaseSlavedStore
from synapse.storage import DataStore
from synapse.config.appservice import load_appservices
from synapse.storage.appservice import _make_exclusive_regex
from synapse.storage.appservice import (
ApplicationServiceWorkerStore, ApplicationServiceTransactionWorkerStore,
)
class SlavedApplicationServiceStore(BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedApplicationServiceStore, self).__init__(db_conn, hs)
self.services_cache = load_appservices(
hs.config.server_name,
hs.config.app_service_config_files
)
self.exclusive_user_regex = _make_exclusive_regex(self.services_cache)
get_app_service_by_token = DataStore.get_app_service_by_token.__func__
get_app_service_by_user_id = DataStore.get_app_service_by_user_id.__func__
get_app_services = DataStore.get_app_services.__func__
get_new_events_for_appservice = DataStore.get_new_events_for_appservice.__func__
create_appservice_txn = DataStore.create_appservice_txn.__func__
get_appservices_by_state = DataStore.get_appservices_by_state.__func__
get_oldest_unsent_txn = DataStore.get_oldest_unsent_txn.__func__
_get_last_txn = DataStore._get_last_txn.__func__
complete_appservice_txn = DataStore.complete_appservice_txn.__func__
get_appservice_state = DataStore.get_appservice_state.__func__
set_appservice_last_pos = DataStore.set_appservice_last_pos.__func__
set_appservice_state = DataStore.set_appservice_state.__func__
get_if_app_services_interested_in_user = (
DataStore.get_if_app_services_interested_in_user.__func__
)
class SlavedApplicationServiceStore(ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore):
pass
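In miniature, the refactor applied to this and the following slave stores (toy class names): worker-safe reads move onto *WorkerStore mixins, so the slaved store composes them instead of copying method descriptors off DataStore:

class WorkerStoreToy(object):
    def get_app_services(self):
        return []

class BaseSlavedStoreToy(object):
    pass

class SlavedStoreToy(WorkerStoreToy, BaseSlavedStoreToy):
    pass   # no __dict__ / __func__ borrowing needed

assert SlavedStoreToy().get_app_services() == []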

View File

@@ -14,10 +14,8 @@
# limitations under the License.
from ._base import BaseSlavedStore
from synapse.storage.directory import DirectoryStore
from synapse.storage.directory import DirectoryWorkerStore
class DirectoryStore(BaseSlavedStore):
get_aliases_for_room = DirectoryStore.__dict__[
"get_aliases_for_room"
]
class DirectoryStore(DirectoryWorkerStore, BaseSlavedStore):
pass

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,14 +16,13 @@
import logging
from synapse.api.constants import EventTypes
from synapse.storage import DataStore
from synapse.storage.event_federation import EventFederationStore
from synapse.storage.event_push_actions import EventPushActionsStore
from synapse.storage.roommember import RoomMemberStore
from synapse.storage.event_federation import EventFederationWorkerStore
from synapse.storage.event_push_actions import EventPushActionsWorkerStore
from synapse.storage.events_worker import EventsWorkerStore
from synapse.storage.roommember import RoomMemberWorkerStore
from synapse.storage.state import StateGroupWorkerStore
from synapse.storage.stream import StreamStore
from synapse.storage.signatures import SignatureStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.storage.stream import StreamWorkerStore
from synapse.storage.signatures import SignatureWorkerStore
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
@@ -38,157 +38,33 @@ logger = logging.getLogger(__name__)
# the method descriptor on the DataStore and chuck them into our class.
class SlavedEventStore(StateGroupWorkerStore, BaseSlavedStore):
class SlavedEventStore(EventFederationWorkerStore,
RoomMemberWorkerStore,
EventPushActionsWorkerStore,
StreamWorkerStore,
EventsWorkerStore,
StateGroupWorkerStore,
SignatureWorkerStore,
BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedEventStore, self).__init__(db_conn, hs)
self._stream_id_gen = SlavedIdTracker(
db_conn, "events", "stream_ordering",
)
self._backfill_id_gen = SlavedIdTracker(
db_conn, "events", "stream_ordering", step=-1
)
events_max = self._stream_id_gen.get_current_token()
event_cache_prefill, min_event_val = self._get_cache_dict(
db_conn, "events",
entity_column="room_id",
stream_column="stream_ordering",
max_value=events_max,
)
self._events_stream_cache = StreamChangeCache(
"EventsRoomStreamChangeCache", min_event_val,
prefilled_cache=event_cache_prefill,
)
self._membership_stream_cache = StreamChangeCache(
"MembershipStreamChangeCache", events_max,
)
self.stream_ordering_month_ago = 0
self._stream_order_on_start = self.get_room_max_stream_ordering()
super(SlavedEventStore, self).__init__(db_conn, hs)
# Cached functions can't be accessed through a class instance so we need
# to reach inside the __dict__ to extract them.
get_rooms_for_user = RoomMemberStore.__dict__["get_rooms_for_user"]
get_users_in_room = RoomMemberStore.__dict__["get_users_in_room"]
get_hosts_in_room = RoomMemberStore.__dict__["get_hosts_in_room"]
get_users_who_share_room_with_user = (
RoomMemberStore.__dict__["get_users_who_share_room_with_user"]
)
get_latest_event_ids_in_room = EventFederationStore.__dict__[
"get_latest_event_ids_in_room"
]
get_invited_rooms_for_user = RoomMemberStore.__dict__[
"get_invited_rooms_for_user"
]
get_unread_event_push_actions_by_room_for_user = (
EventPushActionsStore.__dict__["get_unread_event_push_actions_by_room_for_user"]
)
_get_unread_counts_by_receipt_txn = (
DataStore._get_unread_counts_by_receipt_txn.__func__
)
_get_unread_counts_by_pos_txn = (
DataStore._get_unread_counts_by_pos_txn.__func__
)
get_recent_event_ids_for_room = (
StreamStore.__dict__["get_recent_event_ids_for_room"]
)
_get_joined_hosts_cache = RoomMemberStore.__dict__["_get_joined_hosts_cache"]
has_room_changed_since = DataStore.has_room_changed_since.__func__
get_unread_push_actions_for_user_in_range_for_http = (
DataStore.get_unread_push_actions_for_user_in_range_for_http.__func__
)
get_unread_push_actions_for_user_in_range_for_email = (
DataStore.get_unread_push_actions_for_user_in_range_for_email.__func__
)
get_push_action_users_in_range = (
DataStore.get_push_action_users_in_range.__func__
)
get_event = DataStore.get_event.__func__
get_events = DataStore.get_events.__func__
get_rooms_for_user_where_membership_is = (
DataStore.get_rooms_for_user_where_membership_is.__func__
)
get_membership_changes_for_user = (
DataStore.get_membership_changes_for_user.__func__
)
get_room_events_max_id = DataStore.get_room_events_max_id.__func__
get_room_events_stream_for_room = (
DataStore.get_room_events_stream_for_room.__func__
)
get_events_around = DataStore.get_events_around.__func__
get_joined_users_from_state = DataStore.get_joined_users_from_state.__func__
get_joined_users_from_context = DataStore.get_joined_users_from_context.__func__
_get_joined_users_from_context = (
RoomMemberStore.__dict__["_get_joined_users_from_context"]
)
def get_room_max_stream_ordering(self):
return self._stream_id_gen.get_current_token()
get_joined_hosts = DataStore.get_joined_hosts.__func__
_get_joined_hosts = RoomMemberStore.__dict__["_get_joined_hosts"]
get_recent_events_for_room = DataStore.get_recent_events_for_room.__func__
get_room_events_stream_for_rooms = (
DataStore.get_room_events_stream_for_rooms.__func__
)
is_host_joined = RoomMemberStore.__dict__["is_host_joined"]
get_stream_token_for_event = DataStore.get_stream_token_for_event.__func__
_set_before_and_after = staticmethod(DataStore._set_before_and_after)
_get_events = DataStore._get_events.__func__
_get_events_from_cache = DataStore._get_events_from_cache.__func__
_invalidate_get_event_cache = DataStore._invalidate_get_event_cache.__func__
_enqueue_events = DataStore._enqueue_events.__func__
_do_fetch = DataStore._do_fetch.__func__
_fetch_event_rows = DataStore._fetch_event_rows.__func__
_get_event_from_row = DataStore._get_event_from_row.__func__
_get_rooms_for_user_where_membership_is_txn = (
DataStore._get_rooms_for_user_where_membership_is_txn.__func__
)
_get_events_around_txn = DataStore._get_events_around_txn.__func__
get_backfill_events = DataStore.get_backfill_events.__func__
_get_backfill_events = DataStore._get_backfill_events.__func__
get_missing_events = DataStore.get_missing_events.__func__
_get_missing_events = DataStore._get_missing_events.__func__
get_auth_chain = DataStore.get_auth_chain.__func__
get_auth_chain_ids = DataStore.get_auth_chain_ids.__func__
_get_auth_chain_ids_txn = DataStore._get_auth_chain_ids_txn.__func__
get_room_max_stream_ordering = DataStore.get_room_max_stream_ordering.__func__
get_forward_extremeties_for_room = (
DataStore.get_forward_extremeties_for_room.__func__
)
_get_forward_extremeties_for_room = (
EventFederationStore.__dict__["_get_forward_extremeties_for_room"]
)
get_all_new_events_stream = DataStore.get_all_new_events_stream.__func__
get_federation_out_pos = DataStore.get_federation_out_pos.__func__
update_federation_out_pos = DataStore.update_federation_out_pos.__func__
get_latest_event_ids_and_hashes_in_room = (
DataStore.get_latest_event_ids_and_hashes_in_room.__func__
)
_get_latest_event_ids_and_hashes_in_room = (
DataStore._get_latest_event_ids_and_hashes_in_room.__func__
)
_get_event_reference_hashes_txn = (
DataStore._get_event_reference_hashes_txn.__func__
)
add_event_hashes = (
DataStore.add_event_hashes.__func__
)
get_event_reference_hashes = (
SignatureStore.__dict__["get_event_reference_hashes"]
)
get_event_reference_hash = (
SignatureStore.__dict__["get_event_reference_hash"]
)
def get_room_min_stream_ordering(self):
return self._backfill_id_gen.get_current_token()
def stream_positions(self):
result = super(SlavedEventStore, self).stream_positions()

View File

@@ -0,0 +1,21 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.storage.profile import ProfileWorkerStore
class SlavedProfileStore(ProfileWorkerStore, BaseSlavedStore):
pass

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,29 +16,15 @@
from .events import SlavedEventStore
from ._slaved_id_tracker import SlavedIdTracker
from synapse.storage import DataStore
from synapse.storage.push_rule import PushRuleStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.storage.push_rule import PushRulesWorkerStore
class SlavedPushRuleStore(SlavedEventStore):
class SlavedPushRuleStore(PushRulesWorkerStore, SlavedEventStore):
def __init__(self, db_conn, hs):
super(SlavedPushRuleStore, self).__init__(db_conn, hs)
self._push_rules_stream_id_gen = SlavedIdTracker(
db_conn, "push_rules_stream", "stream_id",
)
self.push_rules_stream_cache = StreamChangeCache(
"PushRulesStreamChangeCache",
self._push_rules_stream_id_gen.get_current_token(),
)
get_push_rules_for_user = PushRuleStore.__dict__["get_push_rules_for_user"]
get_push_rules_enabled_for_user = (
PushRuleStore.__dict__["get_push_rules_enabled_for_user"]
)
have_push_rules_changed_for_user = (
DataStore.have_push_rules_changed_for_user.__func__
)
super(SlavedPushRuleStore, self).__init__(db_conn, hs)
def get_push_rules_stream_token(self):
return (
@@ -45,6 +32,9 @@ class SlavedPushRuleStore(SlavedEventStore):
self._stream_id_gen.get_current_token(),
)
def get_max_push_rules_stream_id(self):
return self._push_rules_stream_id_gen.get_current_token()
def stream_positions(self):
result = super(SlavedPushRuleStore, self).stream_positions()
result["push_rules"] = self._push_rules_stream_id_gen.get_current_token()


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,10 +17,10 @@
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
from synapse.storage import DataStore
from synapse.storage.pusher import PusherWorkerStore
class SlavedPusherStore(BaseSlavedStore):
class SlavedPusherStore(PusherWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedPusherStore, self).__init__(db_conn, hs)
@@ -28,13 +29,6 @@ class SlavedPusherStore(BaseSlavedStore):
extra_tables=[("deleted_pushers", "stream_id")],
)
get_all_pushers = DataStore.get_all_pushers.__func__
get_pushers_by = DataStore.get_pushers_by.__func__
get_pushers_by_app_id_and_pushkey = (
DataStore.get_pushers_by_app_id_and_pushkey.__func__
)
_decode_pushers_rows = DataStore._decode_pushers_rows.__func__
def stream_positions(self):
result = super(SlavedPusherStore, self).stream_positions()
result["pushers"] = self._pushers_id_gen.get_current_token()


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,9 +17,7 @@
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
from synapse.storage import DataStore
from synapse.storage.receipts import ReceiptsStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.storage.receipts import ReceiptsWorkerStore
# So, um, we want to borrow a load of functions intended for reading from
# a DataStore, but we don't want to take functions that either write to the
@@ -29,36 +28,19 @@ from synapse.util.caches.stream_change_cache import StreamChangeCache
# the method descriptor on the DataStore and chuck them into our class.
class SlavedReceiptsStore(BaseSlavedStore):
class SlavedReceiptsStore(ReceiptsWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedReceiptsStore, self).__init__(db_conn, hs)
# We instantiate this first as the ReceiptsWorkerStore constructor
# needs to be able to call get_max_receipt_stream_id
self._receipts_id_gen = SlavedIdTracker(
db_conn, "receipts_linearized", "stream_id"
)
self._receipts_stream_cache = StreamChangeCache(
"ReceiptsRoomChangeCache", self._receipts_id_gen.get_current_token()
)
super(SlavedReceiptsStore, self).__init__(db_conn, hs)
get_receipts_for_user = ReceiptsStore.__dict__["get_receipts_for_user"]
get_linearized_receipts_for_room = (
ReceiptsStore.__dict__["get_linearized_receipts_for_room"]
)
_get_linearized_receipts_for_rooms = (
ReceiptsStore.__dict__["_get_linearized_receipts_for_rooms"]
)
get_last_receipt_event_id_for_user = (
ReceiptsStore.__dict__["get_last_receipt_event_id_for_user"]
)
get_max_receipt_stream_id = DataStore.get_max_receipt_stream_id.__func__
get_all_updated_receipts = DataStore.get_all_updated_receipts.__func__
get_linearized_receipts_for_rooms = (
DataStore.get_linearized_receipts_for_rooms.__func__
)
def get_max_receipt_stream_id(self):
return self._receipts_id_gen.get_current_token()
def stream_positions(self):
result = super(SlavedReceiptsStore, self).stream_positions()
@@ -71,6 +53,8 @@ class SlavedReceiptsStore(BaseSlavedStore):
self.get_last_receipt_event_id_for_user.invalidate(
(user_id, room_id, receipt_type)
)
self._invalidate_get_users_with_receipts_in_room(room_id, receipt_type, user_id)
self.get_receipts_for_room.invalidate((room_id, receipt_type))
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "receipts":

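Note the comment in the diff above: the super().__init__ call is deferred until the SlavedIdTracker exists, because the ReceiptsWorkerStore constructor immediately calls the overridden accessor. A toy sketch of the constraint (illustrative classes, not the real ones):

# Illustrative sketch of the initialisation-order constraint called out in
# the comment above: the worker-store base class calls an overridable method
# from its constructor, so any state that method relies on must be set up
# *before* super().__init__() runs.


class WorkerStore(object):
    def __init__(self):
        # The base constructor immediately uses the (overridable) accessor...
        self.initial_token = self.get_max_stream_id()

    def get_max_stream_id(self):
        raise NotImplementedError


class SlavedStore(WorkerStore):
    def __init__(self):
        # ...so the id tracker must exist before we chain up, or
        # get_max_stream_id() below would hit an AttributeError.
        self._id_gen_token = 42
        super(SlavedStore, self).__init__()

    def get_max_stream_id(self):
        return self._id_gen_token


assert SlavedStore().initial_token == 42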

@@ -14,20 +14,8 @@
# limitations under the License.
from ._base import BaseSlavedStore
from synapse.storage import DataStore
from synapse.storage.registration import RegistrationStore
from synapse.storage.registration import RegistrationWorkerStore
class SlavedRegistrationStore(BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedRegistrationStore, self).__init__(db_conn, hs)
# TODO: use the cached version and invalidate deleted tokens
get_user_by_access_token = RegistrationStore.__dict__[
"get_user_by_access_token"
]
_query_for_auth = DataStore._query_for_auth.__func__
get_user_by_id = RegistrationStore.__dict__[
"get_user_by_id"
]
class SlavedRegistrationStore(RegistrationWorkerStore, BaseSlavedStore):
pass

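SlavedRegistrationStore collapses to a bare mixin composition. A toy sketch of why the worker store is listed first: Python's MRO searches bases left to right, so reads resolve to the worker-store implementations while the replication plumbing still comes from the slaved base:

# Illustrative sketch (toy classes) of the mixin ordering used above.
class BaseSlavedStore(object):
    def stream_positions(self):
        return {}


class RegistrationWorkerStore(object):
    def get_user_by_id(self, user_id):
        return {"name": user_id}


class SlavedRegistrationStore(RegistrationWorkerStore, BaseSlavedStore):
    pass


store = SlavedRegistrationStore()
print(store.get_user_by_id("@alice:example.com"))  # from the worker store
print(store.stream_positions())                    # from the slaved base
print([c.__name__ for c in SlavedRegistrationStore.__mro__])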

@@ -14,32 +14,19 @@
# limitations under the License.
from ._base import BaseSlavedStore
from synapse.storage import DataStore
from synapse.storage.room import RoomStore
from synapse.storage.room import RoomWorkerStore
from ._slaved_id_tracker import SlavedIdTracker
class RoomStore(BaseSlavedStore):
class RoomStore(RoomWorkerStore, BaseSlavedStore):
def __init__(self, db_conn, hs):
super(RoomStore, self).__init__(db_conn, hs)
self._public_room_id_gen = SlavedIdTracker(
db_conn, "public_room_list_stream", "stream_id"
)
get_public_room_ids = DataStore.get_public_room_ids.__func__
get_current_public_room_stream_id = (
DataStore.get_current_public_room_stream_id.__func__
)
get_public_room_ids_at_stream_id = (
RoomStore.__dict__["get_public_room_ids_at_stream_id"]
)
get_public_room_ids_at_stream_id_txn = (
DataStore.get_public_room_ids_at_stream_id_txn.__func__
)
get_published_at_stream_id_txn = (
DataStore.get_published_at_stream_id_txn.__func__
)
get_public_room_changes = DataStore.get_public_room_changes.__func__
def get_current_public_room_stream_id(self):
return self._public_room_id_gen.get_current_token()
def stream_positions(self):
result = super(RoomStore, self).stream_positions()


@@ -19,11 +19,13 @@ allowed to be sent by which side.
"""
import logging
import ujson as json
import simplejson
logger = logging.getLogger(__name__)
_json_encoder = simplejson.JSONEncoder(namedtuple_as_object=False)
class Command(object):
"""The base command class.
@@ -100,14 +102,14 @@ class RdataCommand(Command):
return cls(
stream_name,
None if token == "batch" else int(token),
json.loads(row_json)
simplejson.loads(row_json)
)
def to_line(self):
return " ".join((
self.stream_name,
str(self.token) if self.token is not None else "batch",
json.dumps(self.row),
_json_encoder.encode(self.row),
))
@@ -298,10 +300,12 @@ class InvalidateCacheCommand(Command):
def from_line(cls, line):
cache_func, keys_json = line.split(" ", 1)
return cls(cache_func, json.loads(keys_json))
return cls(cache_func, simplejson.loads(keys_json))
def to_line(self):
return " ".join((self.cache_func, json.dumps(self.keys)))
return " ".join((
self.cache_func, _json_encoder.encode(self.keys),
))
class UserIpCommand(Command):
@@ -325,14 +329,14 @@ class UserIpCommand(Command):
def from_line(cls, line):
user_id, jsn = line.split(" ", 1)
access_token, ip, user_agent, device_id, last_seen = json.loads(jsn)
access_token, ip, user_agent, device_id, last_seen = simplejson.loads(jsn)
return cls(
user_id, access_token, ip, user_agent, device_id, last_seen
)
def to_line(self):
return self.user_id + " " + json.dumps((
return self.user_id + " " + _json_encoder.encode((
self.access_token, self.ip, self.user_agent, self.device_id,
self.last_seen,
))

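The encoder swap above is not just ujson-to-simplejson: replication rows are namedtuples, and simplejson serialises namedtuples as JSON objects by default, so the shared encoder pins namedtuple_as_object=False to keep rows encoded as plain arrays. A quick illustration of what the flag changes:

# Illustration of the encoder flag used above (values are made up).
import collections

import simplejson

Row = collections.namedtuple("Row", ("token", "user_id"))
row = Row(5, "@alice:example.com")

print(simplejson.dumps(row))
# => {"token": 5, "user_id": "@alice:example.com"}

encoder = simplejson.JSONEncoder(namedtuple_as_object=False)
print(encoder.encode(row))
# => [5, "@alice:example.com"]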

@@ -17,7 +17,7 @@
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.errors import AuthError, SynapseError
from synapse.api.errors import AuthError, SynapseError, Codes, NotFoundError
from synapse.types import UserID, create_requester
from synapse.http.servlet import parse_json_object_from_request
@@ -114,12 +114,18 @@ class PurgeMediaCacheRestServlet(ClientV1RestServlet):
class PurgeHistoryRestServlet(ClientV1RestServlet):
PATTERNS = client_path_patterns(
"/admin/purge_history/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
"/admin/purge_history/(?P<room_id>[^/]*)(/(?P<event_id>[^/]+))?"
)
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer)
"""
super(PurgeHistoryRestServlet, self).__init__(hs)
self.handlers = hs.get_handlers()
self.store = hs.get_datastore()
@defer.inlineCallbacks
def on_POST(self, request, room_id, event_id):
@@ -133,12 +139,89 @@ class PurgeHistoryRestServlet(ClientV1RestServlet):
delete_local_events = bool(body.get("delete_local_events", False))
yield self.handlers.message_handler.purge_history(
room_id, event_id,
# establish the topological ordering we should keep events from. The
# user can provide an event_id in the URL or the request body, or can
# provide a timestamp in the request body.
if event_id is None:
event_id = body.get('purge_up_to_event_id')
if event_id is not None:
event = yield self.store.get_event(event_id)
if event.room_id != room_id:
raise SynapseError(400, "Event is for wrong room.")
depth = event.depth
logger.info(
"[purge] purging up to depth %i (event_id %s)",
depth, event_id,
)
elif 'purge_up_to_ts' in body:
ts = body['purge_up_to_ts']
if not isinstance(ts, int):
raise SynapseError(
400, "purge_up_to_ts must be an int",
errcode=Codes.BAD_JSON,
)
stream_ordering = (
yield self.store.find_first_stream_ordering_after_ts(ts)
)
(_, depth, _) = (
yield self.store.get_room_event_after_stream_ordering(
room_id, stream_ordering,
)
)
logger.info(
"[purge] purging up to depth %i (received_ts %i => "
"stream_ordering %i)",
depth, ts, stream_ordering,
)
else:
raise SynapseError(
400,
"must specify purge_up_to_event_id or purge_up_to_ts",
errcode=Codes.BAD_JSON,
)
purge_id = yield self.handlers.message_handler.start_purge_history(
room_id, depth,
delete_local_events=delete_local_events,
)
defer.returnValue((200, {}))
defer.returnValue((200, {
"purge_id": purge_id,
}))
class PurgeHistoryStatusRestServlet(ClientV1RestServlet):
PATTERNS = client_path_patterns(
"/admin/purge_history_status/(?P<purge_id>[^/]+)"
)
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer)
"""
super(PurgeHistoryStatusRestServlet, self).__init__(hs)
self.handlers = hs.get_handlers()
@defer.inlineCallbacks
def on_GET(self, request, purge_id):
requester = yield self.auth.get_user_by_req(request)
is_admin = yield self.auth.is_server_admin(requester.user)
if not is_admin:
raise AuthError(403, "You are not a server admin")
purge_status = self.handlers.message_handler.get_purge_status(purge_id)
if purge_status is None:
raise NotFoundError("purge id '%s' not found" % purge_id)
defer.returnValue((200, purge_status.asdict()))
class DeactivateAccountRestServlet(ClientV1RestServlet):
@@ -180,6 +263,7 @@ class ShutdownRoomRestServlet(ClientV1RestServlet):
self.handlers = hs.get_handlers()
self.state = hs.get_state_handler()
self.event_creation_handler = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()
@defer.inlineCallbacks
def on_POST(self, request, room_id):
@@ -238,7 +322,7 @@ class ShutdownRoomRestServlet(ClientV1RestServlet):
logger.info("Kicking %r from %r...", user_id, room_id)
target_requester = create_requester(user_id)
yield self.handlers.room_member_handler.update_membership(
yield self.room_member_handler.update_membership(
requester=target_requester,
target=target_requester.user,
room_id=room_id,
@@ -247,9 +331,9 @@ class ShutdownRoomRestServlet(ClientV1RestServlet):
ratelimit=False
)
yield self.handlers.room_member_handler.forget(target_requester.user, room_id)
yield self.room_member_handler.forget(target_requester.user, room_id)
yield self.handlers.room_member_handler.update_membership(
yield self.room_member_handler.update_membership(
requester=target_requester,
target=target_requester.user,
room_id=new_room_id,
@@ -508,6 +592,7 @@ class SearchUsersRestServlet(ClientV1RestServlet):
def register_servlets(hs, http_server):
WhoisRestServlet(hs).register(http_server)
PurgeMediaCacheRestServlet(hs).register(http_server)
PurgeHistoryStatusRestServlet(hs).register(http_server)
DeactivateAccountRestServlet(hs).register(http_server)
PurgeHistoryRestServlet(hs).register(http_server)
UsersRestServlet(hs).register(http_server)

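Taken together, the admin changes above turn history purging into an asynchronous job: the servlet now accepts purge_up_to_event_id or purge_up_to_ts, returns a purge_id, and the new status servlet reports progress. A hedged usage sketch of the flow; the paths and field names come from the diff, but the /_matrix/client/r0 prefix, the admin token, and the exact shape of the status object are assumptions:

# Hedged usage sketch for the asynchronous purge flow introduced above.
import time

import requests

BASE = "https://homeserver.example.com/_matrix/client/r0"
HEADERS = {"Authorization": "Bearer <admin_access_token>"}

# Kick off a purge of everything received before a cutoff timestamp (ms).
resp = requests.post(
    BASE + "/admin/purge_history/!room:example.com",
    headers=HEADERS,
    json={"purge_up_to_ts": 1514764800000, "delete_local_events": False},
)
purge_id = resp.json()["purge_id"]

# Poll the new status endpoint until the purge completes (the "status"
# field and its "active" value are assumptions about PurgeStatus.asdict()).
while True:
    status = requests.get(
        BASE + "/admin/purge_history_status/" + purge_id,
        headers=HEADERS,
    ).json()
    if status.get("status") != "active":
        break
    time.sleep(5)
print(status)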

@@ -44,7 +44,10 @@ class LogoutRestServlet(ClientV1RestServlet):
requester = yield self.auth.get_user_by_req(request)
except AuthError:
# this implies the access token has already been deleted.
pass
defer.returnValue((401, {
"errcode": "M_UNKNOWN_TOKEN",
"error": "Access Token unknown or expired"
}))
else:
if requester.device_id is None:
# the access token wasn't associated with a device.


@@ -348,9 +348,9 @@ class RegisterRestServlet(ClientV1RestServlet):
admin = register_json.get("admin", None)
# It's important to check as we use null bytes as HMAC field separators
if "\x00" in user:
if b"\x00" in user:
raise SynapseError(400, "Invalid user")
if "\x00" in password:
if b"\x00" in password:
raise SynapseError(400, "Invalid password")
# str() because otherwise hmac complains that 'unicode' does not

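The switch to bytes literals above is a Python 3 fix: a str needle cannot be searched for in a bytes haystack, so the checks must match the type of the raw (bytes) request fields. Illustration with a toy value:

# Why the literals change type: under Python 3 a str cannot be searched for
# in bytes, so the null-byte checks must use bytes literals.
user = b"alice\x00admin"

print(b"\x00" in user)   # True on both Python 2 and 3
# "\x00" in user         # works on Python 2, raises TypeError on Python 3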

@@ -30,7 +30,7 @@ from synapse.http.servlet import (
import logging
import urllib
import ujson as json
import simplejson as json
logger = logging.getLogger(__name__)
@@ -84,6 +84,7 @@ class RoomStateEventRestServlet(ClientV1RestServlet):
super(RoomStateEventRestServlet, self).__init__(hs)
self.handlers = hs.get_handlers()
self.event_creation_hander = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()
def register(self, http_server):
# /room/$roomid/state/$eventtype
@@ -156,7 +157,7 @@ class RoomStateEventRestServlet(ClientV1RestServlet):
if event_type == EventTypes.Member:
membership = content.get("membership", None)
event = yield self.handlers.room_member_handler.update_membership(
event = yield self.room_member_handler.update_membership(
requester,
target=UserID.from_string(state_key),
room_id=room_id,
@@ -164,17 +165,12 @@ class RoomStateEventRestServlet(ClientV1RestServlet):
content=content,
)
else:
event, context = yield self.event_creation_hander.create_event(
event = yield self.event_creation_hander.create_and_send_nonmember_event(
requester,
event_dict,
token_id=requester.access_token_id,
txn_id=txn_id,
)
yield self.event_creation_hander.send_nonmember_event(
requester, event, context,
)
ret = {}
if event:
ret = {"event_id": event.event_id}
@@ -229,7 +225,7 @@ class RoomSendEventRestServlet(ClientV1RestServlet):
class JoinRoomAliasServlet(ClientV1RestServlet):
def __init__(self, hs):
super(JoinRoomAliasServlet, self).__init__(hs)
self.handlers = hs.get_handlers()
self.room_member_handler = hs.get_room_member_handler()
def register(self, http_server):
# /join/$room_identifier[/$txn_id]
@@ -257,7 +253,7 @@ class JoinRoomAliasServlet(ClientV1RestServlet):
except Exception:
remote_room_hosts = None
elif RoomAlias.is_valid(room_identifier):
handler = self.handlers.room_member_handler
handler = self.room_member_handler
room_alias = RoomAlias.from_string(room_identifier)
room_id, remote_room_hosts = yield handler.lookup_room_alias(room_alias)
room_id = room_id.to_string()
@@ -266,7 +262,7 @@ class JoinRoomAliasServlet(ClientV1RestServlet):
room_identifier,
))
yield self.handlers.room_member_handler.update_membership(
yield self.room_member_handler.update_membership(
requester=requester,
target=requester.user,
room_id=room_id,
@@ -562,7 +558,7 @@ class RoomEventContextServlet(ClientV1RestServlet):
class RoomForgetRestServlet(ClientV1RestServlet):
def __init__(self, hs):
super(RoomForgetRestServlet, self).__init__(hs)
self.handlers = hs.get_handlers()
self.room_member_handler = hs.get_room_member_handler()
def register(self, http_server):
PATTERNS = ("/rooms/(?P<room_id>[^/]*)/forget")
@@ -575,7 +571,7 @@ class RoomForgetRestServlet(ClientV1RestServlet):
allow_guest=False,
)
yield self.handlers.room_member_handler.forget(
yield self.room_member_handler.forget(
user=requester.user,
room_id=room_id,
)
@@ -593,12 +589,12 @@ class RoomMembershipRestServlet(ClientV1RestServlet):
def __init__(self, hs):
super(RoomMembershipRestServlet, self).__init__(hs)
self.handlers = hs.get_handlers()
self.room_member_handler = hs.get_room_member_handler()
def register(self, http_server):
# /rooms/$roomid/[invite|join|leave]
PATTERNS = ("/rooms/(?P<room_id>[^/]*)/"
"(?P<membership_action>join|invite|leave|ban|unban|kick|forget)")
"(?P<membership_action>join|invite|leave|ban|unban|kick)")
register_txn_path(self, PATTERNS, http_server)
@defer.inlineCallbacks
@@ -622,7 +618,7 @@ class RoomMembershipRestServlet(ClientV1RestServlet):
content = {}
if membership_action == "invite" and self._has_3pid_invite_keys(content):
yield self.handlers.room_member_handler.do_3pid_invite(
yield self.room_member_handler.do_3pid_invite(
room_id,
requester.user,
content["medium"],
@@ -644,7 +640,7 @@ class RoomMembershipRestServlet(ClientV1RestServlet):
if 'reason' in content and membership_action in ['kick', 'ban']:
event_content = {'reason': content['reason']}
yield self.handlers.room_member_handler.update_membership(
yield self.room_member_handler.update_membership(
requester=requester,
target=target,
room_id=room_id,
@@ -654,7 +650,12 @@ class RoomMembershipRestServlet(ClientV1RestServlet):
content=event_content,
)
defer.returnValue((200, {}))
return_value = {}
if membership_action == "join":
return_value["room_id"] = room_id
defer.returnValue((200, return_value))
def _has_3pid_invite_keys(self, content):
for key in {"id_server", "medium", "address"}:

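One behavioural change above: a join via /rooms/{room_id}/{membership_action} now echoes the room ID in the response body, so clients joining by alias need no second lookup. A hedged client-side sketch (URL prefix and token are assumptions):

# Hedged sketch of the new join response shape.
import requests

resp = requests.post(
    "https://homeserver.example.com/_matrix/client/r0"
    "/rooms/!abc123:example.com/join",
    headers={"Authorization": "Bearer <access_token>"},
    json={},
)
print(resp.json())  # => {"room_id": "!abc123:example.com"}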

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -401,6 +402,32 @@ class GroupInvitedUsersServlet(RestServlet):
defer.returnValue((200, result))
class GroupSettingJoinPolicyServlet(RestServlet):
"""Set group join policy
"""
PATTERNS = client_v2_patterns("/groups/(?P<group_id>[^/]*)/settings/m.join_policy$")
def __init__(self, hs):
super(GroupSettingJoinPolicyServlet, self).__init__()
self.auth = hs.get_auth()
self.groups_handler = hs.get_groups_local_handler()
@defer.inlineCallbacks
def on_PUT(self, request, group_id):
requester = yield self.auth.get_user_by_req(request)
requester_user_id = requester.user.to_string()
content = parse_json_object_from_request(request)
result = yield self.groups_handler.set_group_join_policy(
group_id,
requester_user_id,
content,
)
defer.returnValue((200, result))
class GroupCreateServlet(RestServlet):
"""Create a group
"""
@@ -738,6 +765,7 @@ def register_servlets(hs, http_server):
GroupInvitedUsersServlet(hs).register(http_server)
GroupUsersServlet(hs).register(http_server)
GroupRoomServlet(hs).register(http_server)
GroupSettingJoinPolicyServlet(hs).register(http_server)
GroupCreateServlet(hs).register(http_server)
GroupAdminRoomsServlet(hs).register(http_server)
GroupAdminRoomsConfigServlet(hs).register(http_server)

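The new servlet above accepts a PUT to /groups/{group_id}/settings/m.join_policy and hands the JSON body straight to set_group_join_policy. A hedged usage sketch; the URL prefix and the body shape (per the group join-policy proposal) are assumptions, since the diff only shows the body being passed through:

# Hedged usage sketch for the new group join-policy endpoint.
import requests

resp = requests.put(
    "https://homeserver.example.com/_matrix/client/r0"
    "/groups/+group:example.com/settings/m.join_policy",
    headers={"Authorization": "Bearer <access_token>"},
    json={"m.join_policy": {"type": "invite"}},
)
print(resp.status_code, resp.json())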

@@ -20,7 +20,6 @@ import synapse
import synapse.types
from synapse.api.auth import get_access_token_from_request, has_access_token
from synapse.api.constants import LoginType
from synapse.types import RoomID, RoomAlias
from synapse.api.errors import SynapseError, Codes, UnrecognizedRequestError
from synapse.http.servlet import (
RestServlet, parse_json_object_from_request, assert_params_in_request, parse_string
@@ -183,7 +182,7 @@ class RegisterRestServlet(RestServlet):
self.auth_handler = hs.get_auth_handler()
self.registration_handler = hs.get_handlers().registration_handler
self.identity_handler = hs.get_handlers().identity_handler
self.room_member_handler = hs.get_handlers().room_member_handler
self.room_member_handler = hs.get_room_member_handler()
self.device_handler = hs.get_device_handler()
self.macaroon_gen = hs.get_macaroon_generator()
@@ -405,14 +404,6 @@ class RegisterRestServlet(RestServlet):
generate_token=False,
)
# auto-join the user to any rooms we're supposed to dump them into
fake_requester = synapse.types.create_requester(registered_user_id)
for r in self.hs.config.auto_join_rooms:
try:
yield self._join_user_to_room(fake_requester, r)
except Exception as e:
logger.error("Failed to join new user to %r: %r", r, e)
# remember that we've now registered that user account, and with
# what user ID (since the user may not have specified)
self.auth_handler.set_session_data(
@@ -445,29 +436,6 @@ class RegisterRestServlet(RestServlet):
def on_OPTIONS(self, _):
return 200, {}
@defer.inlineCallbacks
def _join_user_to_room(self, requester, room_identifier):
room_id = None
if RoomID.is_valid(room_identifier):
room_id = room_identifier
elif RoomAlias.is_valid(room_identifier):
room_alias = RoomAlias.from_string(room_identifier)
room_id, remote_room_hosts = (
yield self.room_member_handler.lookup_room_alias(room_alias)
)
room_id = room_id.to_string()
else:
raise SynapseError(400, "%s was not legal room ID or room alias" % (
room_identifier,
))
yield self.room_member_handler.update_membership(
requester=requester,
target=requester.user,
room_id=room_id,
action="join",
)
@defer.inlineCallbacks
def _do_appservice_registration(self, username, as_token, body):
user_id = yield self.registration_handler.appservice_register(

Some files were not shown because too many files have changed in this diff.