Compare commits

...

668 Commits

Author SHA1 Message Date
Erik Johnston
e4a709eda3 Bump version and change log 2017-10-02 13:51:38 +01:00
Erik Johnston
1a398b19fd Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.23.0 2017-09-26 10:08:59 +01:00
Erik Johnston
f4c8cd5e85 Bump changelog and version 2017-09-26 10:02:48 +01:00
Erik Johnston
b8d832a08c Merge pull request #2470 from matrix-org/erikj/sync_speed_fix
Refactor to speed up incremental syncs
2017-09-25 17:43:14 +01:00
Erik Johnston
e3edca3b5d Refactor to speed up incremental syncs 2017-09-25 17:35:39 +01:00
Richard van der Hoff
cacfa04cb6 Merge pull request #2468 from maxidor/develop
Clarify recommended network setup
2017-09-25 16:37:33 +01:00
Max Dor
e591f7b3f0 Include review feedback 2017-09-25 16:42:26 +02:00
Max Dor
7141f1a5cc Clarify recommended network setup 2017-09-25 16:20:23 +02:00
Erik Johnston
44edac0497 Merge branch 'release-v0.23.0' of github.com:matrix-org/synapse into develop 2017-09-25 14:52:46 +01:00
Richard van der Hoff
29e1c717c3 Merge pull request #2390 from r3dey3/develop
Fix iteration of requests_missing_keys; list doesn't have .values()
2017-09-25 11:56:01 +01:00
Richard van der Hoff
94133d7ce8 Merge branch 'develop' into develop 2017-09-25 11:50:11 +01:00
Erik Johnston
b15c2b7971 Update CHANGES 2017-09-25 11:34:12 +01:00
Erik Johnston
ba8fdc925c Bump version and changes 2017-09-25 11:01:31 +01:00
Richard van der Hoff
79b3cf3e02 Fix logcontext leak in keyclient (#2465)
preserve_context_over_function doesn't do what you want it to do.
2017-09-25 09:51:39 +01:00
Richard van der Hoff
f65e31d22f Do an AAAA lookup on SRV record targets (#2462)
Support SRV records which point at AAAA records, as well as A records.

Fixes https://github.com/matrix-org/synapse/issues/2405
2017-09-22 20:26:47 +01:00
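The dual lookup can be sketched with twisted.names directly; the function below is illustrative, not Synapse's actual resolver code:

```python
from twisted.internet import defer
from twisted.names import client


@defer.inlineCallbacks
def resolve_srv_target(host):
    # Query both A and AAAA records for the SRV target and collect
    # whichever answers come back, instead of assuming an A record exists.
    answers = []
    for lookup in (client.lookupAddress, client.lookupIPV6Address):
        try:
            ans, _authority, _additional = yield lookup(host)
            answers.extend(ans)
        except Exception:
            pass  # one of the record types may simply not exist
    defer.returnValue(answers)
```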
Matthew Hodgson
f496399ac4 fix thinko'd docstring 2017-09-22 15:34:14 +01:00
Erik Johnston
3166ed55b2 Fix device list when rejoining room (#2461) 2017-09-22 14:44:17 +01:00
Richard van der Hoff
c94ab5976a Merge pull request #2459 from matrix-org/rav/keyring_cleanups
Clean up Keyring code
2017-09-20 11:29:39 +01:00
Richard van der Hoff
6de74ea6d7 Fix logcontexts in _check_sigs_and_hashes 2017-09-20 01:32:42 +01:00
Richard van der Hoff
72472456d8 Add some more tests for Keyring 2017-09-20 01:32:42 +01:00
Richard van der Hoff
c5c24c239b Fix logcontext handling in verify_json_objects_for_server
preserve_context_over_fn is essentially broken, because (a) it pointlessly
drops the current logcontext before calling its wrapped function, which means
we don't get any useful logcontexts for _handle_key_deferred; (b) it wraps the
resulting deferred in a _PreservingContextDeferred, which is very dangerous
because you then can't yield on it without leaking context back into the
reactor.

Instead, let's specify that the resultant deferreds call their callbacks with
no logcontext.
2017-09-20 01:32:42 +01:00
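The convention settled on here — resolving the deferreds with no logcontext — can be sketched as follows; `PreserveLoggingContext` is Synapse's context manager for temporarily switching logcontext, and the wrapper itself is illustrative:

```python
from synapse.util.logcontext import PreserveLoggingContext


def resolve_with_no_context(deferred, result):
    # Switch to the sentinel logcontext before firing the callbacks,
    # so the deferred is safe to pass into defer.gatherResults without
    # leaking our logcontext back into the reactor.
    with PreserveLoggingContext():
        deferred.callback(result)
```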
Richard van der Hoff
c5b0e9f485 Turn _start_key_lookups into an inlineCallbacks function
... which means that logcontexts can be correctly preserved for the stuff it
does.

get_server_verify_keys is now called with the logcontext, so needs to
preserve_fn when it fires off its nested inlineCallbacks function.

Also renames get_server_verify_keys to reflect the fact it's meant to be
private.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
abdefb8a01 Fix potential race in _start_key_lookups
If the verify_request.deferred has already completed, then `remove_deferreds`
will be called immediately. It therefore might resolve the server_to_deferred
deferred while there are still other requests for that server in flight.

To avoid that, we should build the complete list of requests, and *then* add the
callbacks.
2017-09-20 01:32:42 +01:00
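A minimal sketch of that reordering, with stand-in names for the request objects and the callback:

```python
def start_lookups(verify_requests, remove_deferreds):
    # First build the complete list of (server, deferred) pairs...
    deferreds = [(rq.server_name, rq.deferred) for rq in verify_requests]

    # ...and only then attach the callbacks. Attaching them inside the
    # first loop would let an already-completed deferred fire
    # remove_deferreds immediately, resolving a server's deferred while
    # other requests for that server were still in flight.
    for server_name, d in deferreds:
        d.addBoth(remove_deferreds, server_name)
```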
Richard van der Hoff
afbd773dc6 Add some comments to _start_key_lookups 2017-09-20 01:32:42 +01:00
Richard van der Hoff
2a4b9ea233 Consistency for how verify_request.deferred is called
Define that it is run with no log context, and make sure that happens.

If we aren't careful to reset the logcontext, we can't bung the deferreds into
defer.gatherResults etc. We don't actually do that directly, but we *do*
resolve other deferreds from affected callbacks (notably the server_to_deferred
map in _start_key_lookups), and those *do* get passed into
defer.gatherResults. It turns out that this way ends up being least confusing.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
3b98439eca Factor out _start_key_lookups
... to make it easier to see what's going on.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
fde63b880d Replace server_and_json with verify_requests
This is a precursor to factoring some of this code out.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
2d511defd9 pull out handle_key_deferred to top level
There's no need for this to be a nested definition; pulling it out not only
makes it more efficient, but makes it easier to check that it's not accessing
any local variables it shouldn't be.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
dd1ea9763a Fix incorrect key_ids in error message 2017-09-20 01:32:42 +01:00
Richard van der Hoff
e76d1135dd Invalidate signing key cache when we get an update
This might make the cache slightly more efficient.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
fcf2c0fd1a Remove redundant preserve_fn
preserve_fn is a no-op unless the wrapped function returns a
Deferred. verify_json_objects_for_server returns a list, so this is doing
nothing.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
9864efa532 Fix concurrent server_key requests (#2458)
Fix a bug where we could end up firing off multiple requests for server_keys
for the same server at the same time.
2017-09-19 23:25:44 +01:00
Richard van der Hoff
aa620d09a0 Add a config option to block all room invites (#2457)
- gives sysadmins the ability to lock down their servers so that people can't
send their users room invites.
2017-09-19 16:08:14 +01:00
Richard van der Hoff
2eabdf3f98 add some comments to on_exchange_third_party_invite_request 2017-09-19 12:20:36 +01:00
Richard van der Hoff
5ed109d59f PoC for filtering spammy events (#2456)
Demonstration of how you might add some hooks to filter out spammy events.
2017-09-19 12:20:11 +01:00
Richard van der Hoff
3f405b34e9 Fix overzealous kicking of guest users (#2453)
We should only kick guest users if the guest access event is authorised.
2017-09-19 08:52:52 +01:00
Richard van der Hoff
290777b3d9 Clean up and document handling of logcontexts in Keyring (#2452)
I'm still unclear on what the intended behaviour for
`verify_json_objects_for_server` is, but at least I now understand the
behaviour of most of the things it calls...
2017-09-18 18:31:01 +01:00
Erik Johnston
77c81ca6ea Merge pull request #2451 from matrix-org/erikj/add_state_to_timeline
Don't filter out current state events from timeline
2017-09-18 17:22:33 +01:00
Erik Johnston
2d1b7955ae Don't filter out current state events from timeline 2017-09-18 17:13:03 +01:00
David Baker
862c8da560 Merge pull request #2450 from matrix-org/dbkr/push_event_id_only
Add support for event_id_only push format
2017-09-18 16:41:29 +01:00
Erik Johnston
2d9f341c3e Merge pull request #2449 from matrix-org/erikj/rejoin_device_lists
Correctly handle leaving room in /key/changes
2017-09-18 15:59:13 +01:00
David Baker
436ee0a2ea Also include the room_id
as really it's part of the event ID
2017-09-18 15:58:38 +01:00
David Baker
b393f5db51 Use .get - it's much shorter 2017-09-18 15:50:26 +01:00
David Baker
a2562f9d74 Add support for event_id_only push format
Param in the data dict of a pusher that tells an HTTP pusher to
send just the event_id of the event it's notifying about and the
notification counts. For clients that want to go & fetch the body
of the event themselves anyway.
2017-09-18 15:39:39 +01:00
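For reference, the format is selected via the pusher's `data` dict; a sketch of what a client might supply when registering such a pusher (the URL is illustrative):

```python
pusher_data = {
    # Where the homeserver should POST notifications.
    "url": "https://push-gateway.example.com/_matrix/push/v1/notify",
    # Ask for just the event_id (plus, per the commit above, the
    # room_id) and the notification counts, not the full event body.
    "format": "event_id_only",
}
```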
Erik Johnston
d6dadd95ac Correctly handle leaving room in /key/changes 2017-09-18 15:38:22 +01:00
Erik Johnston
993d3f710b Merge pull request #2443 from matrix-org/erikj/rejoin_device_lists
Send down device list change notif when member leaves/rejoins room
2017-09-18 13:17:53 +01:00
Erik Johnston
4a94eb3ea4 Fix typo 2017-09-15 09:56:54 +01:00
Erik Johnston
3a0cee28d6 Actually hook leave notifs up 2017-09-14 11:49:37 +01:00
Erik Johnston
4f845a0713 Handle joining/leaving rooms in /keys/changes 2017-09-13 16:28:08 +01:00
Erik Johnston
473700f016 Get left rooms 2017-09-13 15:13:41 +01:00
Erik Johnston
9ce866ed4f In sync handle device lists for newly joined/left rooms 2017-09-12 16:44:26 +01:00
Erik Johnston
69ef4987a6 Add left section to /keys/changes 2017-09-08 14:44:36 +01:00
Erik Johnston
53cc8ad35a Send down device list change notif when member leaves/rejoins room 2017-09-07 15:08:39 +01:00
Richard van der Hoff
e2fcba038c Merge pull request #2439 from matrix-org/rav/tox_tweaks
do tox install with pip -e
2017-09-06 17:08:19 +01:00
Richard van der Hoff
5f59f20636 Merge remote-tracking branch 'origin/master' into develop 2017-09-05 21:58:19 +01:00
Richard van der Hoff
59de2c7afa Exclude the github issue template from our sdist (#2440)
PR #2413 added an issue template, but just adding files to the project
directory upsets the packaging scripts: we need to explicitly include or
exclude them.

Move the template into a .github directory to make that easy, and to de-clutter
the root a bit.
2017-09-05 21:57:19 +01:00
Richard van der Hoff
4b616c8cf2 Merge branch 'master' into develop 2017-09-05 17:51:13 +01:00
Richard van der Hoff
4dd61df6f8 do tox install with pip -e
- this ensures we end up with a working virtualenv which we can use for other
things.
2017-09-05 16:35:23 +01:00
Erik Johnston
c0c31656ff Merge pull request #2433 from ptman/patch-1
Document known to work postgres version
2017-09-01 15:28:42 +01:00
Paul Tötterman
8b16b43b7f Document known to work postgres version 2017-09-01 16:52:45 +03:00
Richard van der Hoff
dff396de0f Set --python when running sytest
.. because I want to make the 'install_and_run' script useful for non-synapse
jobs, which do not accept --python. In any case we set up the path here, so
sytest shouldn't be guessing it.
2017-09-01 11:20:37 +01:00
Richard van der Hoff
f06ffdb6fa fix python path in jenkins scripts 2017-09-01 10:31:45 +01:00
Richard van der Hoff
6e67aaa7f2 Set --python when running sytest
.. because I want to make the 'install_and_run' script useful for non-synapse
jobs, which do not accept --python. In any case we set up the path here, so
sytest shouldn't be guessing it.
2017-09-01 10:06:21 +01:00
Richard van der Hoff
934ab76835 Merge pull request #2428 from matrix-org/rav/update_upgrade
Tweaks to the upgrade instructions
2017-08-24 11:42:02 +01:00
Richard van der Hoff
fc9878f6a4 Tweaks to the upgrade instructions 2017-08-23 15:27:02 +01:00
Richard van der Hoff
a4d3bfe3d6 Merge pull request #2417 from matrix-org/rav/federation_client
Improvements to the federation test client
2017-08-23 14:50:26 +01:00
Richard van der Hoff
a7effa8400 Merge pull request #2288 from kyrias/bcrypt
python_dependencies: Use bcrypt module instead of py-bcrypt
2017-08-23 14:14:56 +01:00
Richard van der Hoff
a04c6bbf8f test federation client: Allow server-name and key-file as options
so that you don't necessarily need a config file.
2017-08-22 11:19:30 +01:00
Richard van der Hoff
77ea8cbdd7 Merge pull request #2416 from matrix-org/rav/prometheus_config
Add prometheus config
2017-08-22 10:34:40 +01:00
Tom Lant
20b3660495 Merge pull request #2413 from matrix-org/toml-issue-template
Issue template for Synapse
2017-08-21 16:07:35 +01:00
Richard van der Hoff
046b659ce2 Improvements to the federation test client
Make it read the config file, primarily.
2017-08-17 16:59:11 +01:00
Tom Lant
413c270723 Update ISSUE_TEMPLATE.md
Added instructions for checking server version.
2017-08-17 11:14:35 +01:00
Tom Lant
ec3a2dc773 Update ISSUE_TEMPLATE.md
Responding to review comments.
2017-08-17 11:00:51 +01:00
Richard van der Hoff
012875258c Add prometheus config
... from https://github.com/matrix-org/synapse-prometheus-config.
2017-08-16 15:31:44 +01:00
Richard van der Hoff
692250c6be Fix user_dir startup
Add missing parameter to _base.start_worker_reactor
2017-08-16 15:11:29 +01:00
Richard van der Hoff
d2352347cf Fix process startup
escape the % that got added in 92168cb so that the process starts up ok.
2017-08-16 14:57:35 +01:00
Matthew Hodgson
92168cbbc5 explain why CPU affinity is a good idea 2017-08-15 18:27:42 +01:00
Richard van der Hoff
963015005e Merge pull request #2415 from matrix-org/rav/synctl_cpu_affinity
Allow configuration of CPU affinity
2017-08-15 17:42:05 +01:00
Richard van der Hoff
10d8b701a1 Allow configuration of CPU affinity
Make it possible to set the CPU affinity in the config file, so that we don't
need to remember to do it manually every time.
2017-08-15 17:08:28 +01:00
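On Linux the effect of such an option boils down to one call at startup; a sketch using only the standard library (the wiring to the config file is assumed):

```python
import os


def apply_cpu_affinity(cpu_affinity):
    # Pin this process to the configured CPUs once, at startup, rather
    # than remembering to do it by hand after every restart.
    if cpu_affinity is not None:
        os.sched_setaffinity(0, set(cpu_affinity))
```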
Richard van der Hoff
543c794a76 Factor out common application start
We have 10 copies of this code, and I don't really want to update each one
separately.
2017-08-15 17:04:40 +01:00
Tom Lant
57cd0c3dea Update ISSUE_TEMPLATE.md
Removed the sentence encouraging people not to file a bug - if people are in doubt we'd rather they filed a bug than gave up entirely.
2017-08-14 14:40:32 +01:00
Tom Lant
b524dd4c35 Update ISSUE_TEMPLATE.md
Oops capital L.
2017-08-14 14:36:49 +01:00
Tom Lant
09703609fc Create ISSUE_TEMPLATE.md
A new issue template proposed to try and steer people towards #matrix:matrix.org for support queries relating to running their own homeserver.
2017-08-14 14:35:25 +01:00
hera
eae04f1952 fix english 2017-08-04 23:56:42 +01:00
hera
5699b05072 typo 2017-08-04 23:44:37 +01:00
Erik Johnston
09552f9d9c Reduce spammy log line in synchrotrons 2017-08-02 17:29:51 +01:00
Kenny Keslar
f18373dc5d Fix iteration of requests_missing_keys; list doesn't have .values()
Signed-off-by: Kenny Keslar <r3dey3@r3dey3.com>
2017-07-26 22:44:19 -05:00
Erik Johnston
b27429729d Merge pull request #2375 from matrix-org/erikj/port_script
Fix port script for user directory tables
2017-07-20 16:16:43 +01:00
Erik Johnston
60a9a49f83 Extend comment 2017-07-20 16:16:29 +01:00
Erik Johnston
d7d24750be Fix port script for user directory tables 2017-07-20 10:47:01 +01:00
Erik Johnston
514c2d3c4d Merge pull request #2371 from matrix-org/erikj/push_cache_hit
Increase cache hit ratio for push
2017-07-17 09:42:27 +01:00
Erik Johnston
bfde076022 Increase cache hit ratio for push
We don't update the cache in all code paths, which causes subsequent
calls to miss the cache
2017-07-14 16:11:26 +01:00
Erik Johnston
d3862812ff Merge pull request #2366 from matrix-org/erikj/push_metrics
Add more metrics to push rule evaluation
2017-07-14 11:04:03 +01:00
Erik Johnston
8d26385d76 Add more metrics to push rule evaluation 2017-07-13 14:37:30 +01:00
Erik Johnston
67b7b904ba Merge pull request #2365 from matrix-org/erikj/push_skip_lock
Push: Don't acquire lock unless necessary
2017-07-13 11:44:48 +01:00
Erik Johnston
f60218ec41 Push: Don't acquire lock unless necessary 2017-07-13 11:23:53 +01:00
Erik Johnston
91818723a1 Merge pull request #2362 from matrix-org/erikj/sync_user_users_who_share
Use less DB for device list handling in sync
2017-07-12 10:45:30 +01:00
Erik Johnston
e9aec001f4 Use less DB for device list handling in sync 2017-07-12 10:30:10 +01:00
Erik Johnston
0184a97dbd Merge pull request #2354 from krombel/reduce_static_sync_reply
encode sync-response statically
2017-07-11 14:19:56 +01:00
Krombel
85b9f76f1d split out reducing stuff; just make encode_* static 2017-07-11 13:14:35 +02:00
Erik Johnston
e2cb760dcc Merge pull request #2357 from matrix-org/erikj/push
Don't compute push actions for backfilled events
2017-07-11 10:53:22 +01:00
Erik Johnston
925b3638ff Reduce log levels in tcp replication 2017-07-11 10:04:21 +01:00
Erik Johnston
9a6fd3ef29 Don't compute push actions for backfilled events 2017-07-11 10:02:21 +01:00
Krombel
2f82de18ee fix test 2017-07-10 17:34:58 +02:00
Krombel
6e16aca8b0 encode sync-response statically; omit empty objects from sync-response 2017-07-10 16:42:17 +02:00
Erik Johnston
d4d12daed9 Include registration and as stores in frontend proxy 2017-07-07 18:36:45 +01:00
Erik Johnston
f467a8f66d Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-07-07 18:26:28 +01:00
Erik Johnston
c9184ed87e Merge pull request #2344 from matrix-org/erikj/frontend_proxy
Add a frontend proxy
2017-07-07 18:25:46 +01:00
Erik Johnston
1fc4a962e4 Add a frontend proxy 2017-07-07 18:19:46 +01:00
Erik Johnston
08284c86ed Merge pull request #2343 from matrix-org/erikj/fastpush
Perf: Don't filter events for push
2017-07-07 14:25:42 +01:00
Erik Johnston
f502b0dea1 Perf: Don't filter events for push
We know the users are joined, and we can explicitly check whether they are
ignoring the user, so let's do that.
2017-07-07 14:04:40 +01:00
Erik Johnston
1200f28d66 Merge branch 'hotfixes-v0.22.1' of github.com:matrix-org/synapse 2017-07-06 18:11:49 +01:00
Erik Johnston
76ed3476d3 Bump version and changelog 2017-07-06 18:11:22 +01:00
Erik Johnston
58dc1f2c78 Merge pull request #2342 from matrix-org/erikj/pusher_pool_instantiate
Fix bug where pusherpool didn't start and broke some rooms
2017-07-06 18:08:43 +01:00
Erik Johnston
5a7f561a9b Fix bug where pusherpool didn't start and broke some rooms
Since we didn't instantiate the PusherPool at start time, it could fail
at run time, which it did for some users.

This may or may not fix things for those users, but any failure should now
happen at start time and stop the server from starting.
2017-07-06 17:55:51 +01:00
Erik Johnston
ed9a7f5436 Merge pull request #2309 from matrix-org/erikj/user_ip_repl
Fix up user_ip replication commands
2017-07-06 14:33:14 +01:00
Erik Johnston
1f64207f26 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-07-06 13:57:45 +01:00
Erik Johnston
42b50483be Merge branch 'release-v0.22.0' of github.com:matrix-org/synapse 2017-07-06 10:36:25 +01:00
Erik Johnston
6264cf9666 Bump version and changelog 2017-07-06 10:35:56 +01:00
Erik Johnston
f386632800 Merge pull request #2334 from matrix-org/erikj/refactor_transport_server
Separate federation servlet into different lists
2017-07-05 17:09:07 +01:00
Erik Johnston
5e49a57ecc Separate federation servlet into different lists 2017-07-05 14:32:24 +01:00
Richard van der Hoff
3d31b39297 Merge pull request #2332 from matrix-org/rav/fix_pushes
Fix caching error in the push evaluator
2017-07-05 11:10:53 +01:00
Richard van der Hoff
73cfe48031 Fix caching error in the push evaluator
Initialising `result` to `{}` in the parameters meant that every call to
_flatten_dict used the *same* target dictionary.

I'm hopeful this will fix https://github.com/matrix-org/synapse/issues/2270,
but I suspect it won't. (This code seems to have been here since forever,
unlike the bug, and I don't really think it explains the observed
behaviour). Still, it makes it hard to investigate the problem.
2017-07-05 00:28:43 +01:00
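The underlying bug is Python's mutable-default-argument trap; a reconstruction of its shape and of the fix (the body is a sketch, not the exact Synapse code):

```python
# Buggy: the {} default is created once, at function definition time,
# so every call that omits `result` accumulates into the *same* dict.
def _flatten_dict_buggy(d, prefix=(), result={}):
    for key, value in d.items():
        if isinstance(value, dict):
            _flatten_dict_buggy(value, prefix + (key,), result)
        else:
            result[".".join(prefix + (key,))] = value
    return result


# Fixed: use None as the sentinel and allocate a fresh dict per call.
def _flatten_dict(d, prefix=(), result=None):
    if result is None:
        result = {}
    for key, value in d.items():
        if isinstance(value, dict):
            _flatten_dict(value, prefix + (key,), result)
        else:
            result[".".join(prefix + (key,))] = value
    return result
```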
Erik Johnston
05538587ef Bump version and changelog 2017-07-04 14:02:21 +01:00
Erik Johnston
f92d7416d7 Merge pull request #2330 from matrix-org/erikj/cache_size_factor
Increase default cache size
2017-07-04 10:51:21 +01:00
Mark Haines
1f12d808e7 Merge pull request #2323 from matrix-org/markjh/invite_checks
Improve the error handling for bad invites received over federation
2017-07-04 10:50:43 +01:00
Erik Johnston
29a4066a4d Update test 2017-07-04 10:21:25 +01:00
Erik Johnston
7afb4e3f54 Update README 2017-07-04 10:00:52 +01:00
Erik Johnston
495f075b41 Increase default cache factor size. 2017-07-04 09:58:32 +01:00
Erik Johnston
b5e8d529e6 Define CACHE_SIZE_FACTOR once 2017-07-04 09:56:44 +01:00
Mark Haines
3e279411fe Improve the error handling for bad invites received over federation 2017-06-30 16:20:30 +01:00
Erik Johnston
47574c9cba Merge pull request #2321 from matrix-org/erikj/prefill_forward
Prefill forward extrems and event to state groups
2017-06-30 11:03:04 +01:00
Erik Johnston
6ff14ddd2e Make into list 2017-06-29 15:47:37 +01:00
Erik Johnston
5946aa0877 Prefill forward extrems and event to state groups 2017-06-29 15:38:48 +01:00
Erik Johnston
d800ab2847 Merge pull request #2320 from matrix-org/erikj/cache_macaroon_parse
Cache macaroon parse and validation
2017-06-29 15:06:43 +01:00
Erik Johnston
2c365f4723 Cache macaroon parse and validation
Turns out this can be quite expensive for requests, and is easily
cachable. We don't cache the lookup to the DB so invalidation still
works.
2017-06-29 14:50:18 +01:00
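Parsing a macaroon is pure string work, so the parse step can be memoised on the raw token while the DB validation stays uncached; a sketch using pymacaroons (the cache size is arbitrary):

```python
from functools import lru_cache

import pymacaroons


@lru_cache(maxsize=1000)
def parse_macaroon(token):
    # Deserialisation is deterministic, so caching on the token string
    # is safe. The database lookup that validates the macaroon happens
    # outside this function, so invalidation still works.
    return pymacaroons.Macaroon.deserialize(token)
```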
Erik Johnston
a1a253ea50 Merge pull request #2319 from matrix-org/erikj/prune_sessions
Use an ExpiringCache for storing registration sessions
2017-06-29 14:20:24 +01:00
Erik Johnston
c72058bcc6 Use an ExpiringCache for storing registration sessions
This is because pruning them was a significant performance drain on
matrix.org
2017-06-29 14:08:37 +01:00
Erik Johnston
27f26e48b7 Serialize user ip command as json 2017-06-27 16:25:38 +01:00
Erik Johnston
8c23221666 Fix up 2017-06-27 15:53:45 +01:00
Erik Johnston
731f3c37a0 Merge branch 'release-v0.22.0' of github.com:matrix-org/synapse into develop 2017-06-27 15:41:34 +01:00
Erik Johnston
4b444723f0 Merge pull request #2308 from matrix-org/erikj/user_ip_repl
Make workers report to master for user ip updates
2017-06-27 15:36:47 +01:00
Erik Johnston
816605a137 Merge pull request #2307 from matrix-org/erikj/user_ip_batch
Batch upsert user ips
2017-06-27 15:08:32 +01:00
Erik Johnston
78cefd78d6 Make workers report to master for user ip updates 2017-06-27 14:58:10 +01:00
Erik Johnston
a0a561ae85 Fix up client ips to read from pending data 2017-06-27 14:46:12 +01:00
Erik Johnston
ed3d0170d9 Batch upsert user ips 2017-06-27 13:37:04 +01:00
Erik Johnston
976128f368 Update version and changelog 2017-06-26 16:14:56 +01:00
Erik Johnston
d04d672a80 Merge pull request #2290 from matrix-org/erikj/ensure_round_trip
Reject local events that don't round trip the DB
2017-06-26 15:12:02 +01:00
Erik Johnston
036f439f53 Merge pull request #2304 from matrix-org/erikj/users_share_fix
Fix up indices for users_who_share_rooms
2017-06-26 15:11:39 +01:00
Erik Johnston
1bce3e6b35 Remove unused variables 2017-06-26 14:03:27 +01:00
Erik Johnston
e3cbec10c1 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ensure_round_trip 2017-06-26 14:02:44 +01:00
Erik Johnston
8abdd7b553 Fix up indices for users_who_share_rooms 2017-06-26 14:01:30 +01:00
Erik Johnston
ff13c5e7af Merge pull request #2301 from xwiki-labs/push-redact-content
Add configuration parameter to allow redaction of content from push m…
2017-06-24 13:13:51 +01:00
Caleb James DeLisle
27bd0b9a91 Change the config file generator to more descriptive explanation of push.redact_content 2017-06-24 10:32:12 +02:00
Caleb James DeLisle
bce144595c Fix TravisCI tests for PR #2301 - Fat finger mistake 2017-06-23 15:26:09 +02:00
Caleb James DeLisle
75eba3b07d Fix TravisCI tests for PR #2301 2017-06-23 15:15:18 +02:00
Caleb James DeLisle
1591eddaea Add configuration parameter to allow redaction of content from push messages for google/apple devices 2017-06-23 13:01:04 +02:00
Erik Johnston
4fec80ba6f Merge pull request #2299 from matrix-org/erikj/segregate_url_cache_downloads
Store URL cache preview downloads separately
2017-06-23 12:00:45 +01:00
Erik Johnston
7fe8ed1787 Store URL cache preview downloads separately
This makes it easier to clear old media out at a later date
2017-06-23 11:14:11 +01:00
Erik Johnston
e204062310 Merge pull request #2297 from matrix-org/erikj/user_dir_fix
Fix thinko in initial public room user spam
2017-06-22 15:14:47 +01:00
Erik Johnston
44c722931b Make some more params configurable 2017-06-22 14:59:52 +01:00
Erik Johnston
2d520a9826 Typo. ARGH. 2017-06-22 14:42:39 +01:00
Erik Johnston
24d894e2e2 Fix thinko in unhandled user spam 2017-06-22 14:39:05 +01:00
Matthew Hodgson
ccfcef6b59 Merge branch 'master' into develop 2017-06-22 13:03:44 +01:00
Erik Johnston
e0004aa28a Add desc 2017-06-22 10:03:48 +01:00
Erik Johnston
b668112320 Merge pull request #2296 from matrix-org/erikj/dont_appserver_shar
Don't work out users who share room with appservice users
2017-06-21 14:50:24 +01:00
Erik Johnston
dae9a00a28 Initialise exclusive_user_regex 2017-06-21 14:19:33 +01:00
Erik Johnston
71995e1397 Merge pull request #2219 from krombel/avoid_duplicate_filters
only add new filter when not existent previously
2017-06-21 14:11:26 +01:00
Erik Johnston
8177563ebe Fix for workers 2017-06-21 13:57:49 +01:00
Krombel
4202fba82a Merge branch 'develop' into avoid_duplicate_filters 2017-06-21 14:48:21 +02:00
Krombel
812c030e87 replaced json.dumps with encode_canonical_json 2017-06-21 14:48:12 +02:00
Erik Johnston
1217c7da91 Don't work out users who share room with appservice users 2017-06-21 12:00:41 +01:00
Erik Johnston
7d69f2d956 Merge pull request #2292 from matrix-org/erikj/quarantine_media
Add API to quarantine media
2017-06-19 18:15:00 +01:00
Erik Johnston
385dcb7c60 Handle thumbnail urls 2017-06-19 17:48:28 +01:00
Erik Johnston
b8b936a6ea Add API to quarantine media 2017-06-19 17:39:21 +01:00
Erik Johnston
b5f665de32 Merge pull request #2291 from matrix-org/erikj/shutdown_room
Add shutdown room API
2017-06-19 16:20:55 +01:00
Erik Johnston
e5ae386ea4 Handle all cases of sending membership events 2017-06-19 16:07:54 +01:00
Erik Johnston
36e51aad3c Remove unused import 2017-06-19 14:42:21 +01:00
Erik Johnston
b490299a3b Change to create new room and join other users 2017-06-19 14:10:13 +01:00
Erik Johnston
5db7070dd1 Forget room 2017-06-19 12:40:29 +01:00
Erik Johnston
d7fe6b356c Add shutdown room API 2017-06-19 12:37:27 +01:00
Erik Johnston
fcf01dd88e Reject local events that don't round trip the DB 2017-06-19 11:33:40 +01:00
Johannes Löthberg
4f66312df8 python_dependencies: Use bcrypt module instead of py-bcrypt
py-bcrypt has been unmaintained for a long while, while bcrypt is
actively maintained. And since ff8b87118d
we're compatible with bcrypt anyway.

Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2017-06-17 17:39:35 +02:00
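The swap is painless because bcrypt exposes the same `hashpw`/`gensalt` interface that py-bcrypt did; the usual pattern, for illustration:

```python
import bcrypt


def hash_password(password):
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())


def check_password(password, stored_hash):
    # checkpw re-computes the hash and compares in constant time.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)
```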
Matthew
3fafb7b189 add missing boolean to synapse_port_db 2017-06-16 20:51:19 +01:00
Matthew
776a070421 fix synapse_port script 2017-06-16 20:24:14 +01:00
Erik Johnston
dfeca6cf40 Merge pull request #2286 from matrix-org/erikj/split_out_user_dir
Split out user directory to a separate process
2017-06-16 13:01:19 +01:00
Erik Johnston
6aa5bc8635 Initial worker impl 2017-06-16 11:47:11 +01:00
Erik Johnston
d8f47d2efa Merge pull request #2280 from matrix-org/erikj/share_room_user_dir
Include users who you share a room with in user directory
2017-06-16 11:06:00 +01:00
Erik Johnston
0a9315bbc7 Merge pull request #2285 from krombel/allow_authorization_header
allow Authorization header
2017-06-16 10:41:58 +01:00
Krombel
1ff419d343 allow the Authorization header, handling for which was implemented in #1098
Signed-off-by: Matthias Kesler <krombel@krombel.de>
2017-06-16 11:21:14 +02:00
Erik Johnston
24df576795 Merge pull request #2282 from matrix-org/release-v0.21.1
Release v0.21.1
2017-06-15 13:24:16 +01:00
Erik Johnston
fdf1ca30f0 Bump version and changelog 2017-06-15 12:59:06 +01:00
Erik Johnston
052c5d19d5 Merge pull request #2281 from matrix-org/erikj/phone_home_stats
Fix phone home stats
2017-06-15 12:46:23 +01:00
Erik Johnston
5ddd199870 Typo 2017-06-15 10:49:10 +01:00
Erik Johnston
a9d6fa8b2b Include users who share room with requester in user directory 2017-06-15 10:17:21 +01:00
Erik Johnston
4564b05483 Implement updating users who share rooms on the fly 2017-06-15 10:17:17 +01:00
Erik Johnston
72613bc379 Implement initial population of users who share rooms table 2017-06-15 09:59:04 +01:00
Erik Johnston
ebcd55d641 Add DB schema for tracking users who share rooms 2017-06-15 09:45:48 +01:00
Erik Johnston
4b461a6931 Add some more stats 2017-06-15 09:39:39 +01:00
Erik Johnston
93e7a38370 Remove unhelpful test 2017-06-15 09:30:54 +01:00
Erik Johnston
617304b2cf Fix phone home stats 2017-06-14 19:47:15 +01:00
Matthew Hodgson
ba502fb89a add notes on running out of FDs 2017-06-14 02:23:14 +01:00
Erik Johnston
6c6b9689bb Merge pull request #2279 from matrix-org/erikj/fix_user_dir
Fix user directory insertion due to missing room_id
2017-06-13 12:48:50 +01:00
Erik Johnston
d9fd937e39 Fix user directory insertion due to missing room_id 2017-06-13 11:50:24 +01:00
Erik Johnston
fe9dc522d4 Merge pull request #2278 from matrix-org/erikj/fix_user_dir
Fix user dir to not assume existence of user
2017-06-13 11:38:44 +01:00
Erik Johnston
505e7e8b9d Fix up sql 2017-06-13 11:19:18 +01:00
Erik Johnston
6fd7e6db3d Fix user dir to not assume existence of user 2017-06-13 11:11:26 +01:00
Erik Johnston
fdca6e36ee Merge pull request #2274 from matrix-org/erikj/cache_is_host_joined
Add cache for is_host_joined
2017-06-13 10:58:53 +01:00
Erik Johnston
90ae0cffec Merge pull request #2275 from matrix-org/erikj/tweark_user_directory_search
Tweak the ranking of PG user dir search
2017-06-13 10:58:43 +01:00
Erik Johnston
de4cb50ca6 Merge pull request #2276 from matrix-org/erikj/fix_user_di
Don't assume existence of events when updating user directory
2017-06-13 10:55:41 +01:00
Erik Johnston
a09e09ce76 Merge pull request #2277 from matrix-org/erikj/media
Throw exception when not retrying when downloading media
2017-06-13 10:52:45 +01:00
Erik Johnston
48d2949416 Throw exception when not retrying when downloading media 2017-06-13 10:23:14 +01:00
Erik Johnston
6ae8373d40 Don't assume existence of events when updating user directory
Erik Johnston
b58e24cc3c Tweak the ranking of PG user dir search 2017-06-13 10:16:31 +01:00
Erik Johnston
d53fe399eb Add cache for is_host_joined 2017-06-13 09:56:18 +01:00
Erik Johnston
a837765e8c Merge pull request #2266 from matrix-org/erikj/host_in_room
Change is_host_joined to use current_state table
2017-06-12 09:49:51 +01:00
Erik Johnston
f540b494a4 Merge pull request #2269 from matrix-org/erikj/cache_state_delta
Cache state deltas
2017-06-09 18:32:04 +01:00
Erik Johnston
8060974344 Fix replication 2017-06-09 16:40:52 +01:00
Erik Johnston
b0d975e216 Comments 2017-06-09 16:25:42 +01:00
Erik Johnston
e54d7d536e Cache state deltas 2017-06-09 16:24:00 +01:00
Erik Johnston
1e9b4d5a95 Merge pull request #2268 from matrix-org/erikj/entity_has_changed
Fix has_any_entity_changed
2017-06-09 15:30:55 +01:00
Erik Johnston
efc2b7db95 Rewrite conditional 2017-06-09 13:35:15 +01:00
Erik Johnston
bfd68019c2 Merge pull request #2267 from matrix-org/erikj/missing_notifier
Fix removing of pushers when using workers
2017-06-09 13:07:29 +01:00
Erik Johnston
1946867bc2 Merge pull request #2265 from matrix-org/erikj/remote_leave_outlier
Mark remote invite rejections as outliers
2017-06-09 13:05:15 +01:00
Erik Johnston
1664948e41 Comment 2017-06-09 13:05:05 +01:00
Erik Johnston
935e588799 Tweak SQL 2017-06-09 13:01:23 +01:00
Erik Johnston
eed59dcc1e Fix has_any_entity_changed
Occasionally has_any_entity_changed would throw the error: "Set changed
size during iteration" when taking the max of the `sorteddict`. While
it's uncertain how that happens, it's quite inefficient to iterate over
the entire dict anyway, so we switch to using the more traditional
`bisect_*` functions.
2017-06-09 11:44:01 +01:00
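With a sortedcontainers `SortedDict` keyed on stream position, the membership test becomes a bisect rather than a full iteration; a simplified sketch of the cache:

```python
from sortedcontainers import SortedDict


class StreamChangeCache(object):
    def __init__(self):
        # stream position -> entity that changed at that position
        self._cache = SortedDict()

    def has_any_entity_changed(self, stream_pos):
        # bisect_right finds where stream_pos would sit among the
        # sorted keys; anything to its right is a later change. No
        # iteration over the whole dict (so no "Set changed size
        # during iteration"), and O(log n) rather than O(n).
        return self._cache.bisect_right(stream_pos) < len(self._cache)
```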
Erik Johnston
2cac7623a5 Add missing notifier 2017-06-09 11:24:41 +01:00
Erik Johnston
298d83b340 Fix replication 2017-06-09 11:01:28 +01:00
Erik Johnston
0185b75381 Change is_host_joined to use current_state table
This bypasses a bug where using the state groups to figure out if a host
is in a room sometimes errors if the server isn't in the room. (For
example when the server rejected an invite to a remote room.)
2017-06-09 10:52:26 +01:00
Erik Johnston
7132e5cdff Mark remote invite rejections as outliers 2017-06-09 10:08:18 +01:00
Erik Johnston
98bdb4468b Merge pull request #2263 from matrix-org/erikj/fix_state_woes
Ensure we don't use unpersisted state group as prev group
2017-06-08 12:56:18 +01:00
Erik Johnston
ea11ee09f3 Ensure we don't use unpersisted state group as prev group 2017-06-08 11:59:57 +01:00
Erik Johnston
c62c480dc6 Merge pull request #2259 from matrix-org/erikj/fix_state_woes
Fix bug where state_group tables got corrupted
2017-06-07 17:51:25 +01:00
Erik Johnston
197bd126f0 Fix bug where state_group tables got corrupted
This is due to the fact that we prefilled caches using txn.call_after,
which always gets called, including on error.

We fix this by making txn.call_after only fire when a transaction
completes successfully, which is what we want most of the time anyway.
2017-06-07 17:39:36 +01:00
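The shape of the fix: queue the `call_after` callbacks on the transaction and only run them once it has committed. A minimal sketch, not the actual storage layer:

```python
class LoggingTransaction(object):
    def __init__(self):
        self.after_callbacks = []

    def call_after(self, callback, *args, **kwargs):
        # Defer the callback: it must only run if the transaction
        # commits, otherwise a failed transaction could prefill caches
        # with data that was never persisted.
        self.after_callbacks.append((callback, args, kwargs))


def run_interaction(conn, func, *args, **kwargs):
    txn = LoggingTransaction()
    try:
        result = func(txn, *args, **kwargs)
        conn.commit()
    except Exception:
        conn.rollback()
        raise  # after-callbacks are dropped on error
    for callback, cb_args, cb_kwargs in txn.after_callbacks:
        callback(*cb_args, **cb_kwargs)
    return result
```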
Erik Johnston
f45f07ab86 Merge pull request #2258 from matrix-org/erikj/user_dir
Don't start user_directory handling on workers
2017-06-07 14:04:50 +01:00
Erik Johnston
a053ff3979 Merge pull request #2248 from matrix-org/erikj/state_fixup
Faster cache for get_joined_hosts
2017-06-07 14:01:06 +01:00
Erik Johnston
ecdd2a3658 Don't start user_directory handling on workers 2017-06-07 12:02:53 +01:00
Erik Johnston
2f34ad31ac Add some logging to user directory 2017-06-07 11:50:44 +01:00
Erik Johnston
671f0afa1d Merge pull request #2256 from matrix-org/erikj/faster_device_updates
Split up device_lists_outbound_pokes table for faster updates.
2017-06-07 11:48:00 +01:00
Erik Johnston
64ed74c01e When pruning, delete from device_lists_outbound_last_success 2017-06-07 11:20:47 +01:00
Erik Johnston
1a81a1898e Keep pruning background task 2017-06-07 11:16:56 +01:00
Erik Johnston
6ba21bf2b8 Comments 2017-06-07 11:08:36 +01:00
Erik Johnston
09e4bc0501 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/state_fixup 2017-06-07 11:05:23 +01:00
Erik Johnston
6e2a7ee1bc Remove spurious log lines 2017-06-07 11:05:17 +01:00
Erik Johnston
65f0513a33 Split up device_lists_outbound_pokes table for faster updates. 2017-06-07 11:02:38 +01:00
Erik Johnston
6f83c4537c Increase size of IP cache 2017-06-07 10:18:44 +01:00
Erik Johnston
cca94272fa Fix typo when getting app name 2017-06-06 11:50:07 +01:00
Erik Johnston
66b121b2fc Fix wrong number of arguments 2017-06-06 11:46:38 +01:00
Erik Johnston
8d34120a53 Merge pull request #2253 from matrix-org/erikj/user_dir
Handle profile updates in user directory
2017-06-01 17:33:20 +01:00
Erik Johnston
1a01af079e Handle profile updates in user directory 2017-06-01 15:39:51 +01:00
Erik Johnston
87e5e05aea Merge pull request #2252 from matrix-org/erikj/user_dir
Add a user directory
2017-06-01 15:39:32 +01:00
Erik Johnston
4d039aa2ca Fix sqlite 2017-06-01 14:58:48 +01:00
Erik Johnston
21e255a8f1 Split the table in two 2017-06-01 14:50:46 +01:00
Erik Johnston
d5477c7afd Tweak search query 2017-06-01 13:28:01 +01:00
Erik Johnston
02a6108235 Tweak search query 2017-06-01 13:16:40 +01:00
Erik Johnston
7233341eac Comments 2017-06-01 13:11:38 +01:00
Erik Johnston
8be6fd95a3 Check if host is still in room 2017-06-01 13:05:39 +01:00
Erik Johnston
59dbb47065 Remove spurious inlineCallbacks 2017-06-01 11:41:29 +01:00
Erik Johnston
9c7db2491b Fix removing users 2017-06-01 11:36:50 +01:00
Erik Johnston
0fe6f3c521 Bug fixes and logging
- Check if room is public when a user joins before adding to user dir
- Fix typo of field name "content.join_rules" -> "content.join_rule"
2017-06-01 11:09:49 +01:00
Erik Johnston
036362ede6 Order by if they have profile info 2017-06-01 09:41:08 +01:00
Erik Johnston
a757dd4863 Use prefix matching 2017-06-01 09:40:37 +01:00
Erik Johnston
f5cc22bdc6 Comment on why arbitrary comments 2017-05-31 17:30:26 +01:00
Erik Johnston
5dd1b2c525 Use unique indices 2017-05-31 17:29:12 +01:00
Erik Johnston
cc7609aa9f Comment briefly on how we keep user_directory up to date 2017-05-31 17:11:18 +01:00
Erik Johnston
f1378aef91 Convert to int 2017-05-31 17:03:08 +01:00
Erik Johnston
b2d8d07109 Lifts things into separate function 2017-05-31 17:00:24 +01:00
Erik Johnston
f9791498ae Typos 2017-05-31 16:50:57 +01:00
Erik Johnston
f091061711 Fix tests 2017-05-31 16:34:40 +01:00
Erik Johnston
4abcff0177 Fix typo 2017-05-31 16:22:36 +01:00
Erik Johnston
63c58c2a3f Limit number of things we fetch out of the db 2017-05-31 16:17:58 +01:00
Erik Johnston
304880d185 Add stream change cache 2017-05-31 15:46:36 +01:00
Erik Johnston
5d79d728f5 Split out directory and search tables 2017-05-31 15:23:49 +01:00
Erik Johnston
dc51af3d03 Pull max id from correct table 2017-05-31 15:13:49 +01:00
Erik Johnston
350622a107 Handle the server leaving a public room 2017-05-31 15:11:36 +01:00
Erik Johnston
63fda37e20 Add comments 2017-05-31 15:00:29 +01:00
Erik Johnston
293ef29655 Weight differently 2017-05-31 14:29:32 +01:00
Erik Johnston
535c99f157 Use POST 2017-05-31 14:15:45 +01:00
Erik Johnston
45a5df5914 Add REST API 2017-05-31 14:11:55 +01:00
Erik Johnston
3b5f22ca40 Add search 2017-05-31 14:00:01 +01:00
Erik Johnston
b5db4ed5f6 Update room column when room becomes unpublic 2017-05-31 13:40:28 +01:00
Erik Johnston
168524543f Add call later 2017-05-31 11:59:36 +01:00
Erik Johnston
3e123b8497 Start later 2017-05-31 11:56:27 +01:00
Erik Johnston
42137efde7 Don't go round in circles 2017-05-31 11:55:13 +01:00
Erik Johnston
eeb2f9e546 Add user_directory to database 2017-05-31 11:51:01 +01:00
Erik Johnston
5dbaa520a5 Merge pull request #2251 from matrix-org/erikj/current_state_delta_stream
Add current_state_delta_stream table
2017-05-30 15:06:17 +01:00
Erik Johnston
dd48f7204c Add comment 2017-05-30 15:01:22 +01:00
Erik Johnston
04095f7581 Add clobbered event_id 2017-05-30 14:53:01 +01:00
Erik Johnston
a584a81b3e Add current_state_delta_stream table 2017-05-30 14:44:09 +01:00
Erik Johnston
619e8ecd0c Handle None state group correctly 2017-05-26 10:46:03 +01:00
Erik Johnston
23da638360 Fix typing tests 2017-05-26 10:02:04 +01:00
Erik Johnston
dfbda5e025 Faster cache for get_joined_hosts 2017-05-25 17:24:44 +01:00
Erik Johnston
2b03751c3c Don't return weird prev_group 2017-05-25 14:47:39 +01:00
Erik Johnston
dbc0dfd2d5 Remove unused options 2017-05-25 14:28:34 +01:00
Erik Johnston
11f139a647 Merge pull request #2247 from matrix-org/erikj/auth_event
Only store event_auth for state events
2017-05-24 16:46:34 +01:00
Erik Johnston
6e614e9e10 Add background task to clear out old event_auth 2017-05-24 15:23:34 +01:00
Erik Johnston
c049472b8a Only store event_auth for state events 2017-05-24 15:23:31 +01:00
Erik Johnston
9a804b2812 Merge pull request #2243 from matrix-org/matthew/fix-url-preview-length-again
actually trim oversize og:description meta
2017-05-23 13:26:28 +01:00
Erik Johnston
fbbc40f385 Merge pull request #2237 from matrix-org/erikj/sync_key_count
Add count of one time keys to sync stream
2017-05-23 11:18:13 +01:00
Erik Johnston
8cf9f0a3e7 Remove redundant invalidation 2017-05-23 09:46:59 +01:00
Erik Johnston
e6618ece2d Missed an invalidation 2017-05-23 09:36:52 +01:00
Erik Johnston
58c4720293 Merge pull request #2242 from matrix-org/erikj/email_refactor
Only load jinja2 templates once
2017-05-23 09:34:08 +01:00
Matthew Hodgson
836d5c44b6 actually trim oversize og:description meta 2017-05-22 21:14:20 +01:00
Erik Johnston
11c2a3655f Only load jinja2 templates once
Instead of every time a new email pusher is created, as loading jinja2
templates is slow.
2017-05-22 17:48:58 +01:00
Erik Johnston
539aa4d333 Merge pull request #2241 from matrix-org/erikj/fix_notifs
Correctly calculate push rules for member events
2017-05-22 16:46:58 +01:00
Erik Johnston
f85a415279 Add missing storage function to slave store 2017-05-22 16:31:24 +01:00
Erik Johnston
6489455bed Comment 2017-05-22 16:22:04 +01:00
Erik Johnston
d668caa79c Remove spurious log level guards 2017-05-22 16:21:06 +01:00
Erik Johnston
74bf4ee7bf Stream count_e2e_one_time_keys cache invalidation 2017-05-22 16:19:22 +01:00
Erik Johnston
33ba90c6e9 Merge pull request #2240 from matrix-org/erikj/cache_list_fix
Update list cache to handle one arg case
2017-05-22 16:10:46 +01:00
Erik Johnston
ccd62415ac Merge pull request #2238 from matrix-org/erikj/faster_push_rules
Speed up calculating push rules
2017-05-22 15:12:34 +01:00
Erik Johnston
bd7bb5df71 Pull out if statement from for loop 2017-05-22 15:12:19 +01:00
Erik Johnston
e3417a06e2 Update list cache to handle one arg case
We updated the normal cache descriptors to handle caches with a single
argument specially, so that the key isn't a 1-tuple. We need to update
the cache list to be aware of this.
2017-05-22 15:04:42 +01:00
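The wrinkle: single-argument caches key on the bare value rather than a 1-tuple, so the batched list variant has to build its keys the same way. An illustrative sketch:

```python
def make_cache_key(num_args, arg_values):
    # Descriptor caches with exactly one argument store the bare value
    # as the key; everything else uses a tuple. The list cache must
    # mirror this, or its lookups will miss entries written by the
    # plain descriptor.
    if num_args == 1:
        return arg_values[0]
    return tuple(arg_values)
```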
Erik Johnston
7fb80b5eae Check if current event is a membership event 2017-05-22 15:02:12 +01:00
Erik Johnston
2d17b09a6d Add debug logging 2017-05-22 15:01:36 +01:00
Erik Johnston
24c8f38784 Comment 2017-05-22 14:59:27 +01:00
Erik Johnston
25f03cf8e9 Use tuple unpacking 2017-05-22 14:58:22 +01:00
Erik Johnston
270e1c904a Speed up calculating push rules 2017-05-19 16:51:05 +01:00
Erik Johnston
b4f59c7e27 Add count of one time keys to sync stream 2017-05-19 15:47:55 +01:00
Erik Johnston
ab4ee2e524 Merge pull request #2236 from matrix-org/erikj/invalidation
Fix invalidation of get_users_with_read_receipts_in_room
2017-05-19 14:50:23 +01:00
Erik Johnston
58ebb96cce Fix invalidation of get_users_with_read_receipts_in_room 2017-05-19 14:38:50 +01:00
Erik Johnston
99713dc7d3 Merge pull request #2234 from matrix-org/erikj/fix_push
Store ActionGenerator in HomeServer
2017-05-19 13:42:49 +01:00
Erik Johnston
1c1c0257f4 Move invalidation cb to its own structure 2017-05-19 11:44:11 +01:00
Erik Johnston
cafe659f72 Store ActionGenerator in HomeServer 2017-05-19 10:09:56 +01:00
Erik Johnston
72ed8196b3 Don't push users who have left 2017-05-18 17:48:36 +01:00
Erik Johnston
107ac7ac96 Increase size of push rule caches 2017-05-18 17:17:53 +01:00
Erik Johnston
234772db6d Merge pull request #2233 from matrix-org/erikj/faster_as_check
Make get_if_app_services_interested_in_user faster
2017-05-18 16:51:18 +01:00
Erik Johnston
760625acba Make get_if_app_services_interested_in_user faster 2017-05-18 16:34:44 +01:00
Erik Johnston
c57789d138 Remove size of push get_rules cache 2017-05-18 16:17:23 +01:00
Erik Johnston
f33df30732 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-05-18 13:56:37 +01:00
Erik Johnston
3accee1a8c Merge branch 'release-v0.21.0' of github.com:matrix-org/synapse 2017-05-18 13:54:27 +01:00
Erik Johnston
a5425b2e5b Bump changelog and version 2017-05-18 13:53:48 +01:00
Erik Johnston
6e381180ae Merge pull request #2177 from matrix-org/erikj/faster_push_rules
Make calculating push actions faster
2017-05-18 11:46:18 +01:00
Erik Johnston
056ba9b795 Add comment 2017-05-18 11:45:56 +01:00
Erik Johnston
88664afe14 Merge pull request #2231 from aaronraimist/patch-1
Correct a typo in UPGRADE.rst
2017-05-18 10:00:35 +01:00
Aaron Raimist
f98efea9b1 Correct a typo in UPGRADE.rst 2017-05-17 21:41:48 -05:00
Erik Johnston
d9e3a4b5db Merge pull request #2230 from matrix-org/erikj/speed_up_get_state
Make get_state_groups_from_groups faster.
2017-05-17 17:23:04 +01:00
Erik Johnston
66d8ffabbd Faster push rule calculation via push specific cache
We add a push rule specific cache that ensures that we can reuse
calculated push rules appropriately when a user join/leaves.
2017-05-17 16:55:40 +01:00
Erik Johnston
ace23463c5 Merge pull request #2216 from slipeer/app_services_interested_in_user
Fix users claimed non-exclusively by an app service don't get notific…
2017-05-17 16:28:50 +01:00
Erik Johnston
bbfe4e996c Make get_state_groups_from_groups faster.
Most of the time was spent copying a dict to filter out sentinel values
that indicated that keys did not exist in the dict. The sentinel values
were added to ensure that we cached the non-existence of keys.

By updating DictionaryCache to keep track of which keys were known to
not exist itself we can remove a dictionary copy.
2017-05-17 15:12:15 +01:00
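A sketch of the DictionaryCache change described above; the entry layout follows the commit's description, with the details illustrative:

```python
from collections import namedtuple

# `value` holds the keys that exist; `known_absent` records keys that
# were looked up and found not to exist. Tracking absence separately
# means callers no longer need sentinel values mixed into `value`, and
# so no longer need to copy the dict to filter those sentinels out.
DictionaryEntry = namedtuple("DictionaryEntry", ("full", "known_absent", "value"))


def lookup(entry, key):
    if key in entry.value:
        return entry.value[key]
    if key in entry.known_absent:
        return None  # cached non-existence, no sentinel required
    raise KeyError(key)  # genuinely unknown: fall back to the database
```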
Erik Johnston
9f430fa07f Merge branch 'release-v0.21.0' of github.com:matrix-org/synapse into develop 2017-05-17 13:28:46 +01:00
Erik Johnston
7c53a27801 Update changelog 2017-05-17 13:13:45 +01:00
Erik Johnston
a8bc7cae56 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.21.0 2017-05-17 13:11:43 +01:00
Erik Johnston
bf1050f7cf Merge pull request #2229 from matrix-org/erikj/faster_get_joined
Make get_joined_users faster when we have delta state
2017-05-17 13:00:05 +01:00
Erik Johnston
c6f4ff1475 Spelling 2017-05-17 11:29:14 +01:00
Erik Johnston
3a431a126d Bump changelog and version 2017-05-17 11:26:57 +01:00
Erik Johnston
ac08316548 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.21.0 2017-05-17 11:25:23 +01:00
Erik Johnston
85e8092cca Comment 2017-05-17 10:03:09 +01:00
Erik Johnston
ad53fc3cf4 Short circuit when we have delta ids 2017-05-17 09:57:34 +01:00
Erik Johnston
6fa8148ccb Merge pull request #2228 from matrix-org/erikj/speed_up_get_hosts
Speed up get_joined_hosts
2017-05-16 17:40:55 +01:00
Erik Johnston
7c69849a0d Merge pull request #2227 from matrix-org/erikj/presence_caches
Make presence use cached users/hosts in room
2017-05-16 17:40:47 +01:00
Erik Johnston
11bc21b6d9 Merge pull request #2226 from matrix-org/erikj/domain_from_id
Speed up get_domain_from_id
2017-05-16 17:18:16 +01:00
Erik Johnston
13f540ef1b Speed up get_joined_hosts 2017-05-16 16:05:22 +01:00
Erik Johnston
ec5c4499f4 Make presence use cached users/hosts in room 2017-05-16 16:01:43 +01:00
Erik Johnston
f2a5b6dbfd Speed up get_domain_from_id 2017-05-16 15:59:37 +01:00
Erik Johnston
b8492b6c2f Merge pull request #2224 from matrix-org/erikj/prefill_state
Prefill state caches
2017-05-16 15:50:11 +01:00
Erik Johnston
331570ea6f Remove spurious merge artifacts 2017-05-16 15:33:07 +01:00
Krombel
55af207321 Merge branch 'develop' into avoid_duplicate_filters 2017-05-16 15:29:59 +02:00
Richard van der Hoff
d648f65aaf Merge pull request #2218 from matrix-org/rav/event_search_index
Add an index to event_search
2017-05-16 13:26:07 +01:00
Erik Johnston
608b5a6317 Take a copy before prefilling, as it may be a frozendict 2017-05-16 12:55:29 +01:00
Krombel
64953c8ed2 avoid access-error if no filter_id matches 2017-05-15 18:36:37 +02:00
Erik Johnston
f451b64c8f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/prefill_state 2017-05-15 16:09:32 +01:00
Erik Johnston
2c9475b58e Merge pull request #2221 from psaavedra/sync_timeline_limit_filter_by_name
Configurable maximum number of events requested by /sync and /messages
2017-05-15 16:08:46 +01:00
Erik Johnston
6d17573c23 Merge pull request #2223 from matrix-org/erikj/ignore_not_retrying
Don't log exceptions for NotRetryingDestination
2017-05-15 15:52:28 +01:00
Erik Johnston
d12ae7fd1c Don't log exceptions for NotRetryingDestination 2017-05-15 15:42:18 +01:00
Pablo Saavedra
224137fcf9 Fixed syntax nits 2017-05-15 16:21:02 +02:00
Erik Johnston
e4435b014e Update comment 2017-05-15 15:11:30 +01:00
Erik Johnston
871605f4e2 Comments 2017-05-15 15:11:30 +01:00
Erik Johnston
e0d2f6d5b0 Add more granular event send metrics 2017-05-15 15:11:30 +01:00
Erik Johnston
bfbc907cec Prefill state caches 2017-05-15 15:11:13 +01:00
Erik Johnston
5e9d75b4a5 Merge pull request #2215 from hamber-dick/develop
Documentation to check synapse version
2017-05-15 14:42:52 +01:00
Pablo Saavedra
627e6ea2b0 Fixed implementation errors
* Added HS as property in SyncRestServlet
* Fixed set_timeline_upper_limit function implementation
2017-05-15 14:51:43 +02:00
Pablo Saavedra
9da4316ca5 Configurable maximum number of events requested by /sync and /messages (#2220)
Set the limit on the returned events in the timeline in the get and sync
operations. The default value is -1, meaning no upper limit.

For example, using `filter_timeline_limit: 5000`:

```HTTP
POST /_matrix/client/r0/user/user:id/filter
{
  "room": {
    "timeline": {
      "limit": 1000000000000000000
    }
  }
}
```

```HTTP
GET /_matrix/client/r0/user/user:id/filter/filter:id

{
  "room": {
    "timeline": {
      "limit": 5000
    }
  }
}
```

The server cuts down the room.timeline.limit.
2017-05-13 18:17:54 +02:00
Krombel
eb7cbf27bc insert whitespace to fix travis build 2017-05-12 12:09:42 +02:00
Krombel
6b95e35e96 add check to only add a new filter if the same filter does not exist previously
Signed-off-by: Matthias Kesler <krombel@krombel.de>
2017-05-11 16:05:30 +02:00
Richard van der Hoff
ff3d810ea8 Add a comment to old delta 2017-05-11 12:48:50 +01:00
Richard van der Hoff
34194aaff7 Don't create event_search index on sqlite
... because the table is virtual
2017-05-11 12:46:55 +01:00
Richard van der Hoff
114f290947 Add more logging for purging
Log the number of events we will be deleting at info.
2017-05-11 12:08:47 +01:00
Richard van der Hoff
baafb85ba4 Add an index to event_search
- to make the purge API quicker
2017-05-11 12:05:22 +01:00
Richard van der Hoff
29ded770b1 Merge pull request #2214 from matrix-org/rav/hurry_up_purge
When purging, don't de-delta state groups we're about to delete
2017-05-11 12:04:25 +01:00
Richard van der Hoff
dc026bb16f Tidy purge code and add some comments
Try to make this clearer with more comments and some variable renames
2017-05-11 10:56:12 +01:00
Slipeer
328378f9cb Fix users claimed non-exclusively by an app service don't get notifications #2211 2017-05-11 11:42:08 +03:00
Luke Barnard
c1935f0a41 Merge pull request #2213 from matrix-org/luke/username-availability-qp
Modify register/available to be GET with query param
2017-05-11 09:36:39 +01:00
hamber-dick
43cd86ba8a Merge pull request #1 from hamber-dick/dev-branch-hamber-dick
Documentation to check synapse version
2017-05-10 23:14:12 +02:00
Richard van der Hoff
8e345ce465 Don't de-delta state groups we're about to delete 2017-05-10 18:44:22 +01:00
Richard van der Hoff
b64d312421 add some logging to purge_history 2017-05-10 18:44:22 +01:00
Luke Barnard
ccad2ed824 Modify condition on empty localpart 2017-05-10 17:34:30 +01:00
Luke Barnard
369195caa5 Modify register/available to be GET with query param
- GET is now the method for register/available
- a query parameter "username" is now used

Also, empty usernames are now handled with an error message on registration or via register/available: `User ID cannot be empty`
2017-05-10 17:23:55 +01:00
hamber-dick
57ed7f6772 Documentation to check synapse version
I've added some documentation on how to get the running version of a
Synapse homeserver. This should help homeserver owners check whether an
upgrade was successful.
2017-05-10 18:01:39 +02:00
Erik Johnston
a3648f84b2 Merge pull request #2208 from matrix-org/erikj/ratelimit_overrid
Add per user ratelimiting overrides
2017-05-10 15:54:48 +01:00
Richard van der Hoff
5331cd150a Merge pull request #2206 from matrix-org/rav/one_time_key_upload_change_sig
Allow clients to upload one-time-keys with new sigs
2017-05-10 14:18:40 +01:00
Luke Barnard
7313a23dba Merge pull request #2209 from matrix-org/luke/username-availability-post
Change register/available to POST (from GET)
2017-05-10 13:05:45 +01:00
Luke Barnard
f7278e612e Change register/available to POST (from GET) 2017-05-10 11:40:18 +01:00
Erik Johnston
b990b2fce5 Add per user ratelimiting overrides 2017-05-10 11:05:43 +01:00
Richard van der Hoff
aedaba018f Replace some instances of preserve_context_over_deferred 2017-05-09 19:04:56 +01:00
Richard van der Hoff
de042b3b88 Do some logging when one-time-keys get claimed
might help us figure out if https://github.com/vector-im/riot-web/issues/3868
has happened.
2017-05-09 19:04:56 +01:00
Richard van der Hoff
a7e9d8762d Allow clients to upload one-time-keys with new sigs
When a client retries a key upload, don't give an error if the signature has
changed (but the key is the same).

Fixes https://github.com/vector-im/riot-android/issues/1208, hopefully.
2017-05-09 19:04:56 +01:00
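The retry check amounts to comparing the uploaded key against the stored one while ignoring signatures; a sketch close to the behaviour described:

```python
import json


def one_time_keys_match(old_key_json, new_key_json):
    old = json.loads(old_key_json)
    new = json.loads(new_key_json)

    # Signatures may legitimately change between retries of the same
    # upload, so strip them before comparing.
    if isinstance(old, dict):
        old.pop("signatures", None)
    if isinstance(new, dict):
        new.pop("signatures", None)
    return old == new
```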
Erik Johnston
ca238bc023 Merge branch 'release-v0.21.0' of github.com:matrix-org/synapse into develop 2017-05-08 17:35:11 +01:00
Erik Johnston
40dcf0d856 Merge pull request #2203 from matrix-org/erikj/event_cache_hit_ratio
Don't update event cache hit ratio from get_joined_users
2017-05-08 16:52:39 +01:00
Erik Johnston
d3c3026496 Merge pull request #2201 from matrix-org/erikj/store_device_cache
Cache check to see if device exists
2017-05-08 16:23:04 +01:00
Erik Johnston
093f7e47cc Expand docstring a bit 2017-05-08 16:14:46 +01:00
Erik Johnston
6a12998a83 Add missing yields 2017-05-08 16:10:51 +01:00
Erik Johnston
b9c84f3f3a Merge pull request #2202 from matrix-org/erikj/cache_count_device
Cache one time key counts
2017-05-08 16:09:12 +01:00
Erik Johnston
ffad4fe35b Don't update event cache hit ratio from get_joined_users
Otherwise the hit ratio of plain get_events gets completely skewed by
calls to get_joined_users* functions.
2017-05-08 16:06:17 +01:00
Erik Johnston
94e6ad71f5 Invalidate cache on device deletion 2017-05-08 15:55:59 +01:00
Erik Johnston
8571f864d2 Cache one time key counts 2017-05-08 15:34:27 +01:00
Erik Johnston
fc6d4974a6 Comment 2017-05-08 15:33:57 +01:00
Erik Johnston
738ccf61c0 Cache check to see if device exists 2017-05-08 15:32:18 +01:00
Erik Johnston
dcabef952c Increase client_ip cache size 2017-05-08 15:09:19 +01:00
Erik Johnston
771c8a83c7 Bump version and changelog 2017-05-08 13:23:46 +01:00
Erik Johnston
6631985990 Merge pull request #2200 from matrix-org/erikj/revert_push
Revert speed up push
2017-05-08 13:20:52 +01:00
Erik Johnston
e0f20e9425 Revert "Remove unused import"
This reverts commit ab37bef83b.
2017-05-08 13:07:43 +01:00
Erik Johnston
fe7c1b969c Revert "We don't care about forgotten rooms"
This reverts commit ad8b316939.
2017-05-08 13:07:43 +01:00
Erik Johnston
78f306a6f7 Revert "Speed up filtering of a single event in push"
This reverts commit 421fdf7460.
2017-05-08 13:07:41 +01:00
Erik Johnston
9ac98197bb Bump version and changelog 2017-05-08 11:07:54 +01:00
Erik Johnston
27c28eaa27 Merge pull request #2190 from matrix-org/erikj/mark_remote_as_back_more
Always mark remotes as up if we receive a signed request from them
2017-05-05 14:08:12 +01:00
Erik Johnston
be2672716d Merge pull request #2189 from matrix-org/erikj/handle_remote_device_list
Handle exceptions thrown in handling remote device list updates
2017-05-05 14:01:27 +01:00
Erik Johnston
653d90c1a5 Comment 2017-05-05 14:01:17 +01:00
Erik Johnston
310b1ccdc1 Use preserve_fn and add logs 2017-05-05 13:41:19 +01:00
Kegsay
a59b0ad1a1 Merge pull request #2192 from matrix-org/kegan/simple-http-client-timeouts
Rewrite SimpleHttpClient.request to include timeouts
2017-05-05 11:52:43 +01:00
Erik Johnston
7b222fc56e Remove redundant reset of destination timers 2017-05-05 11:14:09 +01:00
Kegan Dougal
d0debb2116 Remember how twisted works 2017-05-05 11:00:21 +01:00
Erik Johnston
66f371e8b8 Merge pull request #2176 from matrix-org/erikj/faster_get_joined
Make get_joined_users faster
2017-05-05 10:59:55 +01:00
Erik Johnston
b843631d71 Add comment and TODO 2017-05-05 10:59:32 +01:00
Kegan Dougal
c2ddd773bc Include the clock 2017-05-05 10:52:46 +01:00
Kegan Dougal
7dd3bf5e24 Rewrite SimpleHttpClient.request to include timeouts
Fixes #2191
2017-05-05 10:49:19 +01:00
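Bolting a timeout onto a request deferred follows a standard Twisted pattern: schedule a cancel, and unschedule it if the request completes first. A generic sketch, not the exact client code:

```python
from twisted.internet import reactor


def add_timeout(deferred, seconds):
    # Cancel the request if it has not completed within `seconds`.
    delayed_call = reactor.callLater(seconds, deferred.cancel)

    def cancel_timeout(result):
        # The request finished (or failed) first: abandon the timeout.
        if delayed_call.active():
            delayed_call.cancel()
        return result

    deferred.addBoth(cancel_timeout)
    return deferred
```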
Erik Johnston
db7d0c3127 Always mark remotes as up if we receive a signed request from them 2017-05-05 10:34:53 +01:00
Erik Johnston
f346048a6e Handle exceptions thrown in handling remote device list updates 2017-05-05 10:34:10 +01:00
Erik Johnston
e3aa8a7aa8 Merge pull request #2185 from matrix-org/erikj/smaller_caches
Optimise caches for single key
2017-05-05 10:19:05 +01:00
Erik Johnston
cf589f2c1e Fixes 2017-05-05 10:17:56 +01:00
Erik Johnston
8af4569583 Merge pull request #2174 from matrix-org/erikj/current_cache_hosts
Add cache for get_current_hosts_in_room
2017-05-05 10:15:24 +01:00
Erik Johnston
b25db11d08 Merge pull request #2186 from matrix-org/revert-2175-erikj/prefill_state
Revert "Prefill state caches"
2017-05-04 15:09:25 +01:00
Erik Johnston
587f07543f Revert "Prefill state caches" 2017-05-04 15:07:27 +01:00
Erik Johnston
aa93cb9f44 Add comment 2017-05-04 14:59:28 +01:00
Erik Johnston
537dbadea0 Intern host strings 2017-05-04 14:55:28 +01:00
Erik Johnston
07a07588a0 Make caches bigger 2017-05-04 14:52:28 +01:00
Erik Johnston
dfaa58f72d Fix comment and num args 2017-05-04 14:50:24 +01:00
Erik Johnston
9ac263ed1b Add new storage functions to slave store 2017-05-04 14:29:03 +01:00
Erik Johnston
d2d8ed4884 Optimise caches with single key 2017-05-04 14:18:46 +01:00
Erik Johnston
5d8290429c Reduce size of get_users_in_room 2017-05-04 13:43:19 +01:00
Luke Barnard
6aa423a1a8 Merge pull request #2183 from matrix-org/luke/username-availability
Implement username availability checker
2017-05-04 09:58:40 +01:00
Luke Barnard
3669065466 Appease the flake8 gods 2017-05-03 18:05:49 +01:00
Erik Johnston
7ebf518c02 Make get_joined_users faster 2017-05-03 15:55:54 +01:00
Luke Barnard
34ed4f4206 Implement username availability checker
Outlined here: https://github.com/vector-im/riot-web/issues/3605#issuecomment-298679388

```HTTP
GET /_matrix/.../register/available
{
    "username": "desiredlocalpart123"
}
```

If available, the response looks like
```HTTP
HTTP/1.1 200 OK
{
    "available": true
}
```

Otherwise,
```HTTP
HTTP/1.1 429
{
    "errcode": "M_LIMIT_EXCEEDED",
    "error": "Too Many Requests",
    "retry_after_ms": 2000
}
```
or
```HTTP
HTTP/1.1 400
{
    "errcode": "M_USER_IN_USE",
    "error": "User ID already taken."
}
```
or
```HTTP
HTTP/1.1 400
{
    "errcode": "M_INVALID_USERNAME",
    "error": "Some reason for username being invalid"
}
```
2017-05-03 12:04:12 +01:00
David Baker
60833c8978 Merge pull request #2147 from matrix-org/dbkr/http_request_propagate_error
Propagate errors sensibly from proxied IS requests
2017-05-03 11:23:25 +01:00
David Baker
482a2ad122 No need for the exception variable 2017-05-03 11:02:59 +01:00
David Baker
c0380402bc List caught exception types 2017-05-03 10:56:22 +01:00
Erik Johnston
cdbf38728d Merge pull request #2175 from matrix-org/erikj/prefill_state
Prefill state caches
2017-05-03 10:54:11 +01:00
Erik Johnston
0c27383dd7 Merge pull request #2170 from matrix-org/erikj/fed_hole_state
Don't fetch state for missing events that we fetched
2017-05-03 10:49:37 +01:00
Erik Johnston
ef862186dd Merge together redundant calculations/logging 2017-05-03 10:06:43 +01:00
Erik Johnston
2c2dcf81d0 Update comment 2017-05-03 10:00:29 +01:00
Erik Johnston
1827057acc Comments 2017-05-03 09:56:05 +01:00
Erik Johnston
8346e6e696 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/prefill_state 2017-05-03 09:46:40 +01:00
Erik Johnston
e4c15fcb5c Merge pull request #2178 from matrix-org/erikj/message_metrics
Add more granular event send metrics
2017-05-02 17:57:34 +01:00
Erik Johnston
3e5a62ecd8 Add more granular event send metrics 2017-05-02 14:23:26 +01:00
Richard van der Hoff
82475a18d9 Merge pull request #2180 from matrix-org/rav/fix_timeout_on_timeout
Instantiate DeferredTimedOutError correctly
2017-05-02 13:32:58 +01:00
Richard van der Hoff
2e996271fe Instantiate DeferredTimedOutError correctly
Call `super` correctly, so that we correctly initialise the `errcode` field.

Fixes https://github.com/matrix-org/synapse/issues/2179.
2017-05-02 13:26:17 +01:00
Erik Johnston
a2c89a225c Prefill state caches 2017-05-02 10:40:31 +01:00
Erik Johnston
7166854f41 Add cache for get_current_hosts_in_room 2017-05-02 10:36:35 +01:00
Erik Johnston
3033261891 Merge pull request #2080 from matrix-org/erikj/filter_speed
Speed up filtering of a single event in push
2017-04-28 14:17:13 +01:00
Erik Johnston
2347efc065 Fixup 2017-04-28 12:46:53 +01:00
Erik Johnston
9b147cd730 Remove unnecessary call in _get_missing_events_for_pdu 2017-04-28 11:55:25 +01:00
Erik Johnston
3a9f5bf6dd Don't fetch state for missing events that we fetched 2017-04-28 11:26:46 +01:00
Erik Johnston
ab37bef83b Remove unused import 2017-04-28 09:57:23 +01:00
Erik Johnston
ad8b316939 We don't care about forgotten rooms 2017-04-28 09:52:36 +01:00
Erik Johnston
421fdf7460 Speed up filtering of a single event in push 2017-04-28 09:52:36 +01:00
Erik Johnston
25a96e0c63 Merge pull request #2163 from matrix-org/erikj/fix_invite_state
Fix invite state to always include all events
2017-04-27 17:36:30 +01:00
Erik Johnston
46826bb078 Comment and remove spurious logging 2017-04-27 17:25:44 +01:00
Erik Johnston
f87b287291 Merge pull request #2127 from APwhitehat/alreadystarted
print something legible if synapse is already running
2017-04-27 15:46:53 +01:00
Erik Johnston
bb9246e525 Merge pull request #2131 from matthewjwolff/develop
web_client_location documentation fix
2017-04-27 15:46:40 +01:00
Richard van der Hoff
c84770b877 Fix bgupdate error if index already exists (#2167)
When creating a new table index in the background, guard against it existing already. Fixes
https://github.com/matrix-org/synapse/issues/2135.

Also, make sure we restore the autocommit flag when we're done, otherwise we
get more failures from other operations later on. Fixes
https://github.com/matrix-org/synapse/issues/1890 (hopefully).
2017-04-27 15:27:48 +01:00
Erik Johnston
380fb87ecc Merge pull request #2168 from matrix-org/erikj/federation_logging
Add some extra logging for edge cases of federation
2017-04-27 15:19:12 +01:00
Erik Johnston
87ae59f5e9 Typo 2017-04-27 15:16:21 +01:00
Erik Johnston
e42b4ebf0f Add some extra logging for edge cases of federation 2017-04-27 14:38:21 +01:00
Erik Johnston
d3c150411c Merge pull request #2130 from APwhitehat/roomexists
Check that requested room_id exists
2017-04-27 09:20:26 +01:00
Erik Johnston
1e166470ab Fix tests 2017-04-26 16:23:30 +01:00
Erik Johnston
34e682d385 Fix invite state to always include all events 2017-04-26 16:18:08 +01:00
Erik Johnston
7239258ae6 Merge pull request #2160 from matrix-org/erikj/reduce_join_cache_size
Make state caches cache in ascii
2017-04-26 14:02:06 +01:00
David Baker
5fd12dce01 Remove debugging 2017-04-26 12:36:26 +01:00
David Baker
82ae0238f9 Revert accidental commit 2017-04-26 11:43:16 +01:00
David Baker
81804909d3 Merge remote-tracking branch 'origin/develop' into dbkr/http_request_propagate_error 2017-04-26 11:31:55 +01:00
David Baker
c366276056 Fix get_json 2017-04-26 10:07:01 +01:00
David Baker
1a9255c12e Use CodeMessageException subclass instead
Parse json errors from get_json client methods and throw special
errors.
2017-04-25 19:30:55 +01:00
Matthew Hodgson
94f36b0273 document how to make IPv6 work (#2088)
* document how to make IPv6 work

* spell out that pip will install 17.1 by default
2017-04-25 18:37:12 +01:00
Erik Johnston
c45dc6c62a Merge pull request #2136 from bbigras/patch-1
Fix the system requirements list in README.rst
2017-04-25 17:25:52 +01:00
Erik Johnston
f053a1409e Make state caches cache in ascii 2017-04-25 17:22:55 +01:00
Erik Johnston
22f935ab7c Merge pull request #2159 from matrix-org/erikj/reduce_join_cache_size
Reduce size of joined_user cache
2017-04-25 17:22:02 +01:00
Erik Johnston
9388eece2b Merge pull request #2149 from enckse/develop
setting up metrics, just adding/clarifying 2 very minor items
2017-04-25 16:37:46 +01:00
Erik Johnston
acb58bfb6a fix up 2017-04-25 15:39:19 +01:00
Erik Johnston
f7181615f2 Don't specify default as dict 2017-04-25 15:22:59 +01:00
Erik Johnston
f144365281 Comment 2017-04-25 15:18:26 +01:00
Erik Johnston
d9aa645f86 Reduce size of joined_user cache
The _get_joined_users_from_context cache stores a mapping from user_id
to avatar_url and display_name. Instead of storing those in a dict,
store them in a namedtuple as that uses much less memory.

We also try converting the string to ascii to further reduce the size.
2017-04-25 14:38:51 +01:00
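A sketch of the trade-off described above; `ProfileInfo` and `make_profile` here are illustrative:
```python
import sys
from collections import namedtuple

# A namedtuple has no per-instance __dict__, so each cache entry costs far
# less memory than an equivalent two-key dict.
ProfileInfo = namedtuple("ProfileInfo", ("avatar_url", "display_name"))

def make_profile(avatar_url, display_name):
    # Interning lets identical strings share a single object in memory.
    return ProfileInfo(
        avatar_url=sys.intern(avatar_url) if avatar_url else avatar_url,
        display_name=sys.intern(display_name) if display_name else display_name,
    )
```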
Erik Johnston
22f3d3ae76 Reduce _get_state_group_for_event cache size 2017-04-25 11:43:03 +01:00
Erik Johnston
b4da08cad8 Merge pull request #2158 from matrix-org/erikj/reduce_cache_size
Reduce cache size by not storing deferreds
2017-04-25 11:18:07 +01:00
Erik Johnston
efab1dadde Remove DEBUG_CACHES 2017-04-25 10:54:09 +01:00
Mark Haines
33d5134b59 Merge pull request #2156 from matrix-org/markjh/old_verify_keys
Fix code for reporting old verify keys in synapse
2017-04-25 10:44:25 +01:00
Erik Johnston
119cb9bbcf Reduce cache size by not storing deferreds
Currently the cache descriptors store deferreds rather than raw values,
this is a simple way of triggering only one database hit and sharing the
result if two callers attempt to get the same value.

However, there are a few caches that simply store a mapping from string
to string (or int). These caches can have a large number of entries,
under the assumption that each entry is small. However, the size of a
deferred (specifically the size of ObservableDeferred) is significantly
larger than that of the raw value, 2kb vs 32b.

This PR therefore changes the cache descriptors to store the raw values
rather than the deferreds.

As a side effect, cached storage functions now return either a deferred or
the actual value, as the cached list descriptor already does. This is
fine as we always end up just yield'ing on the returned value
eventually, which handles that case correctly.
2017-04-25 10:23:11 +01:00
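A toy sketch of the idea (the real code also has to share in-flight lookups between callers, which ObservableDeferred handles and this glosses over):
```python
from twisted.internet import defer

class ValueCache(object):
    def __init__(self):
        self._cache = {}

    def get(self, key, lookup_fn):
        if key in self._cache:
            # Either the raw value, or a Deferred while a lookup is in flight.
            return self._cache[key]

        d = defer.maybeDeferred(lookup_fn, key)
        self._cache[key] = d

        def on_done(value):
            # Swap the (large) Deferred for the (small) raw value.
            self._cache[key] = value
            return value

        d.addCallback(on_done)
        return d
```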
Mark Haines
e6e2627636 Fix code for reporting old verify keys in synapse 2017-04-24 18:51:25 +01:00
Richard van der Hoff
30f7bfa121 Merge pull request #2145 from matrix-org/rav/reject_invite_to_unreachable_server
Fix rejection of invites to unreachable servers
2017-04-24 15:20:52 +01:00
Erik Johnston
7af825bae4 Merge pull request #2155 from matrix-org/erikj/string_intern
Only intern ascii strings
2017-04-24 14:28:41 +01:00
Erik Johnston
26bcda31b8 Merge pull request #2154 from matrix-org/erikj/remove_unused_cache
Remove unused cache
2017-04-24 14:24:46 +01:00
Erik Johnston
d134d0935e Only intern ascii strings 2017-04-24 14:07:48 +01:00
Erik Johnston
e4f3431116 Remove unused cache 2017-04-24 13:27:38 +01:00
David Baker
a46982cee9 Need the HTTP status code 2017-04-21 16:20:12 +01:00
David Baker
70caf49914 Do the same for get_json 2017-04-21 16:09:03 +01:00
Sean Enck
719aec4064 clarify metric setup to use 'scrape_configs' section of yaml and use an array for target 2017-04-21 11:03:32 -04:00
Richard van der Hoff
cea7839911 Document some of the admin APIs (#2143)
I haven't (yet) documented all of the user-list APIs introduced in
https://github.com/matrix-org/synapse/pull/1784 because the API shape seems
very odd, given the functionality.
2017-04-21 11:55:07 +01:00
David Baker
a1595cec78 Don't error for 3xx responses 2017-04-21 11:51:17 +01:00
David Baker
2e165295b7 Merge remote-tracking branch 'origin/develop' into dbkr/http_request_propagate_error 2017-04-21 11:35:52 +01:00
David Baker
a90a0f5c8a Propagate errors sensibly from proxied IS requests
When we're proxying Matrix endpoints, parse out Matrix error
responses and turn them into SynapseErrors so they can be
propagated sensibly upstream.
2017-04-21 11:32:48 +01:00
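Roughly the shape of the change, sketched with illustrative names rather than the actual SynapseError plumbing:
```python
import json

class ProxiedRequestError(Exception):
    # Illustrative stand-in for a SynapseError carrying the original errcode.
    def __init__(self, code, errcode, error):
        super(ProxiedRequestError, self).__init__(error)
        self.code = code
        self.errcode = errcode

def raise_matrix_error(http_code, body):
    # Parse a Matrix-style error body and re-raise it upstream, rather
    # than collapsing everything into a generic 500.
    try:
        parsed = json.loads(body)
        errcode = parsed["errcode"]
        error = parsed.get("error", "")
    except (ValueError, KeyError, TypeError):
        errcode, error = "M_UNKNOWN", body
    raise ProxiedRequestError(http_code, errcode, error)
```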
Richard van der Hoff
91b3981800 Try harder when sending leave events
When we're rejecting invites, ignore the backoff data, so that we have a better
chance of not getting the room out of sync.
2017-04-21 01:50:36 +01:00
Richard van der Hoff
0cdb32fc43 Remove redundant try/except clauses
The `except SynapseError` clauses were pointless because the wrapped functions
would never throw a `SynapseError` (they either throw a `CodeMessageException`
or a `RuntimeError`).

The `except CodeMessageException` is now also pointless because the caller
treats all exceptions equally, so we may as well just throw the
`CodeMessageException`.
2017-04-21 01:32:01 +01:00
Richard van der Hoff
838810b76a Broaden the conditions for locally_rejecting invites
The logic for marking invites as locally rejected was all well and good, but
didn't happen when the remote server returned a 500, or wasn't reachable, or
had no DNS, or whatever.

Just expand the except clause to catch everything.

Fixes https://github.com/matrix-org/synapse/issues/761.
2017-04-21 01:31:37 +01:00
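The shape of the fix, sketched with illustrative names:
```python
def reject_invite(remote_reject, locally_reject):
    # Catch everything, not just specific error types: a 500, an
    # unreachable host or a DNS failure must also fall through to the
    # local rejection path.
    try:
        return remote_reject()
    except Exception:
        return locally_reject()
```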
Richard van der Hoff
736b9a4784 Remove redundant function
inline `reject_remote_invite`, which only existed to make tracing the callflow
more difficult.
2017-04-21 01:31:09 +01:00
Richard van der Hoff
4903ccf159 Fix some lies, and other clarifications, in docstrings
The documentation on get_json has been wrong ever since the very first commit
to synapse...
2017-04-21 01:31:09 +01:00
Bruno Bigras
51fb884c52 Fix the system requirements list in README.rst 2017-04-19 17:32:00 -04:00
Matthew Wolff
d4040e9e28 Queried CONDITIONAL_REQUIREMENTS 2017-04-18 16:19:48 -05:00
Luke Barnard
3fb8784c92 m.read_marker -> m.fully_read (#2128)
Also:
 - change the REST endpoint to have a "S" on the end (so it's now /read_markers)
 - change the content of the m.read_up_to event to have the key "event_id" instead of "marker".
2017-04-18 17:46:15 +01:00
Matthew Hodgson
c02b6a37d6 Merge pull request #2132 from feld/patch-1
Update README.rst
2017-04-17 16:18:34 +01:00
Mark Felder
814fb032eb Update README.rst
The FreeBSD port has been moved to the net-im category
2017-04-17 08:26:50 -05:00
Matthew Wolff
54f9a4cb59 Fixed travis build failure
Signed-off-by: Matthew Wolff <matthewjwolff@gmail.com>
2017-04-17 01:38:27 -05:00
Matthew Wolff
8e780b113d web_server_root documentation fix
Signed-off-by: Matthew Wolff <matthewjwolff@gmail.com>
2017-04-17 00:49:11 -05:00
Anant Prakash
574d573ac2 Check that requested room_id exists 2017-04-14 23:50:59 +05:30
Luke Barnard
78f0ddbfad Merge pull request #2120 from matrix-org/luke/read-markers
Implement Read Marker API
2017-04-13 14:21:31 +01:00
Luke Barnard
6a70647d45 Correct logic in is_event_after 2017-04-13 13:46:17 +01:00
Anant Prakash
c1f52a321d synctl.py: Check if synapse is already running 2017-04-13 18:00:02 +05:30
Luke Barnard
b9557064bf Simplify is_event_after logic 2017-04-12 14:36:20 +01:00
Luke Barnard
cf6121e3da More null-guard changes 2017-04-12 14:02:03 +01:00
Erik Johnston
247c736b9b Merge pull request #2115 from matrix-org/erikj/dedupe_federation_repl
Reduce federation replication traffic
2017-04-12 11:07:13 +01:00
Paul Evans
8fbc0d29ee Merge pull request #2121 from matrix-org/paul/sent-transactions-metric
Add a counter metric for successfully-sent transactions
2017-04-12 11:04:31 +01:00
Erik Johnston
c06c00190f Merge pull request #2116 from matrix-org/erikj/dedupe_federation_repl2
Dedupe KeyedEdu and Devices federation repl traffic
2017-04-12 10:57:24 +01:00
Luke Barnard
c0aba0a23e Remove Unused ref to hs 2017-04-12 10:52:11 +01:00
Luke Barnard
b9676a75f6 Move a space 2017-04-12 10:51:17 +01:00
Luke Barnard
69a18514e9 Only notify user, not entire room 2017-04-12 10:50:37 +01:00
Luke Barnard
122cd52ce4 Remove comment, simplify null-guard 2017-04-12 10:48:32 +01:00
Erik Johnston
26ae5178a4 Add some comments 2017-04-12 10:36:29 +01:00
Erik Johnston
bf9060156a Merge pull request #2117 from matrix-org/erikj/remove_http_replication
Remove HTTP replication APIs
2017-04-12 10:21:42 +01:00
Erik Johnston
82301b6c29 Remove last reference to worker_replication_url 2017-04-12 10:21:02 +01:00
Erik Johnston
1745069543 Comment 2017-04-12 10:17:10 +01:00
Erik Johnston
c7ddb5ef7a Reuse get_interested_parties 2017-04-12 10:16:26 +01:00
Erik Johnston
7b41013102 Merge pull request #2118 from matrix-org/erikj/no_devices
Fix getting latest device IP for user with no devices
2017-04-12 10:14:32 +01:00
Luke Barnard
77fb2b72ae Handle no previous RM 2017-04-12 09:47:29 +01:00
Luke Barnard
7f94709066 travis flake8.. 2017-04-11 18:35:45 +01:00
Luke Barnard
867822fa1e flake8 2017-04-11 17:36:04 +01:00
Luke Barnard
73880268ef Refactor event ordering check to events store 2017-04-11 17:34:09 +01:00
Luke Barnard
131485ef66 Copyright 2017-04-11 17:33:51 +01:00
Paul "LeoNerd" Evans
11dbceb761 Add a counter metric for successfully-sent transactions 2017-04-11 17:16:12 +01:00
Luke Barnard
0127423027 flake8 2017-04-11 17:07:07 +01:00
Erik Johnston
85657eedf8 Bail on where clause instead 2017-04-11 16:24:31 +01:00
Erik Johnston
b48045a8f5 Don't bother with outer check for now 2017-04-11 16:23:24 +01:00
Erik Johnston
6f65e2f90c Update replication docs 2017-04-11 16:21:12 +01:00
Erik Johnston
323634bf8b Update workers docs 2017-04-11 16:19:52 +01:00
Erik Johnston
9c712a366f Move get_presence_list_* to SlaveStore 2017-04-11 16:07:33 +01:00
Erik Johnston
a8c8e4efd4 Comment 2017-04-11 15:35:49 +01:00
Erik Johnston
414522aed5 Move get_interested_parties 2017-04-11 15:33:26 +01:00
Erik Johnston
2be8a281d2 Comments 2017-04-11 15:28:24 +01:00
Erik Johnston
6308ac45b0 Move get_interested_remotes back to presence handler 2017-04-11 15:19:26 +01:00
Erik Johnston
b9b72bc6e2 Comments 2017-04-11 15:15:34 +01:00
Luke Barnard
d892079844 Finish implementing RM endpoint
- This change causes a 405 to be sent if "m.read_marker" is set via /account_data
 - This also fixes up the RM endpoint so that it actually works.
2017-04-11 15:01:39 +01:00
lukebarnard
e263c26690 Initial commit of RM server-side impl
(See https://docs.google.com/document/d/1UWqdS-e1sdwkLDUY0wA4gZyIkRp-ekjsLZ8k6g_Zvso/edit#heading=h.lndohpg8at5u)
2017-04-11 11:55:30 +01:00
Erik Johnston
f3cf3ff8b6 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-04-11 11:13:32 +01:00
Erik Johnston
4902db1fc9 Merge branch 'release-v0.20.0' of github.com:matrix-org/synapse 2017-04-11 11:12:37 +01:00
Erik Johnston
d563b8d944 Bump changelog 2017-04-11 11:12:13 +01:00
Erik Johnston
85a0d6c7ab Remove test of replication resource 2017-04-11 10:59:27 +01:00
Erik Johnston
34840cdcef Fix getting latest device IP for user with no devices 2017-04-11 09:56:54 +01:00
Erik Johnston
28a4649785 Remove HTTP replication APIs 2017-04-11 09:52:11 +01:00
Matthew Hodgson
7c551ec445 trust a hypothetical future riot.im IS 2017-04-10 17:58:36 +01:00
Erik Johnston
84fbb80c8f Use generators 2017-04-10 16:55:56 +01:00
Erik Johnston
40453b3f84 Dedupe KeyedEdu and Devices federation repl traffic 2017-04-10 16:49:51 +01:00
Erik Johnston
29574fd5b3 Reduce federation presence replication traffic
This is mainly done by moving the calculation of where to send presence
updates from the presence handler to the transaction queue, so we only
need to send the presence event (and not the destinations) across the
replication connection. Before we were duplicating by sending the full
state across once per destination.
2017-04-10 16:48:30 +01:00
Erik Johnston
2e6f5a4910 Typo 2017-04-10 16:17:40 +01:00
David Baker
405ba4178a Merge pull request #2102 from DanielDent/add-auth-email
Support authenticated SMTP
2017-04-10 15:42:16 +01:00
Erik Johnston
efcb6db688 Merge pull request #2109 from matrix-org/erikj/send_queue_fix
Fix up federation SendQueue and document types
2017-04-10 13:09:25 +01:00
Erik Johnston
0018491af2 Rename variable 2017-04-10 12:44:43 +01:00
Erik Johnston
0364d23210 Up replication ping timeout 2017-04-10 11:32:05 +01:00
Erik Johnston
8c5f03cec7 Revert to sending the same data type as before 2017-04-10 10:07:18 +01:00
Erik Johnston
f8434db549 Change name 2017-04-10 10:03:07 +01:00
Erik Johnston
ab904caf33 Comments 2017-04-10 10:02:17 +01:00
Richard van der Hoff
54a59adc7c Merge pull request #2110 from matrix-org/rav/fix_reject_persistence
When we do an invite rejection, save the signed leave event to the db
2017-04-07 16:14:41 +01:00
Richard van der Hoff
64765e5199 When we do an invite rejection, save the signed leave event to the db
During a rejection of an invite received over federation, we ask a remote
server to make us a `leave` event, then sign it, then send that with
`send_leave`.

We were saving the *unsigned* version of the event (which has a different event
id to the signed version) to our db (and sending it to the clients), whereas
other servers in the room will have seen the *signed* version. We're not aware
of any actual problems that this caused, except that it makes the database confusing
to look at and generally leaves the room in a weird state.
2017-04-07 14:39:32 +01:00
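The rejection dance described above, as a hedged sketch (`make_leave` and `send_leave` are the federation operations involved; the other helpers are illustrative and passed in to keep the sketch self-contained):
```python
def reject_invite(federation, sign_event, persist_event, room_id, user_id):
    # Ask a remote server in the room to template a leave event for us.
    pdu = federation.make_leave(room_id, user_id)

    # Signing the event changes its event id.
    signed_pdu = sign_event(pdu)

    # Send the signed event over federation...
    federation.send_leave(signed_pdu)

    # ...and persist the *same* signed event locally, so our database and
    # clients agree with what the rest of the room saw.
    persist_event(signed_pdu)
    return signed_pdu
```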
Erik Johnston
0cd01f5c9c Merge pull request #2108 from matrix-org/erikj/current_state_ids
Speed up get_current_state_ids
2017-04-07 14:20:16 +01:00
Erik Johnston
2a3e822f44 Comment 2017-04-07 13:47:04 +01:00
Erik Johnston
a828a64b75 Comment 2017-04-07 11:54:03 +01:00
Erik Johnston
d4d176e5d0 Add logging 2017-04-07 11:51:28 +01:00
Erik Johnston
449d1297ca Fix up federation SendQueue and document types 2017-04-07 11:48:33 +01:00
Erik Johnston
d72667fcce Speed up get_current_state_ids
Using _simple_select_list is fairly expensive for functions that return
a lot of rows and/or get called a lot. (This is because it carefully
constructs a list of dicts).

get_current_state_ids gets called a lot on startup and e.g. when the IRC
bridge decided to send tonnes of joins/leaves (as it invalidates the
cache). We therefore replace it with a custom txn function that builds
up the final result dict without building up an intermediate
representation.
2017-04-07 10:10:49 +01:00
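A sketch of what such a txn function can look like (placeholder style and column names assumed from context):
```python
def get_current_state_ids_txn(txn, room_id):
    # Build the {(type, state_key): event_id} dict straight off the
    # cursor, skipping the intermediate list of dicts that
    # _simple_select_list would have built. ('?' placeholder as for
    # sqlite.)
    txn.execute(
        "SELECT type, state_key, event_id FROM current_state_events"
        " WHERE room_id = ?",
        (room_id,),
    )
    return {(typ, key): ev_id for typ, key, ev_id in txn}
```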
Erik Johnston
a41fe500d6 Bump version and changelog 2017-04-07 10:03:48 +01:00
Erik Johnston
54f59bd7d4 Merge pull request #2107 from HarHarLinks/patch-1
fix typo in synctl help
2017-04-07 09:54:37 +01:00
Erik Johnston
98ce212093 Merge pull request #2103 from matrix-org/erikj/no-double-encode
Don't double encode replication data
2017-04-07 09:39:52 +01:00
Kim Brose
8a1137ceab fix typo in synctl help 2017-04-06 17:10:20 +02:00
Erik Johnston
877c029c16 Use iteritems 2017-04-06 15:51:22 +01:00
Erik Johnston
944692ef69 Merge pull request #2106 from matrix-org/erikj/reduce_user_sync
Reduce rate of USER_SYNC repl commands
2017-04-06 13:35:31 +01:00
Erik Johnston
391712a4f9 Comment 2017-04-06 13:35:00 +01:00
Erik Johnston
ad544c803a Document types of the replication streams 2017-04-06 13:28:52 +01:00
Erik Johnston
dbf87282d3 Docs 2017-04-06 13:11:21 +01:00
Erik Johnston
69b3fd485d Fix incorrect type when using InvalidateCacheCommand 2017-04-06 09:36:38 +01:00
Daniel Dent
5058292537 Support authenticated SMTP
Closes (SYN-714) #1385

Signed-off-by: Daniel Dent <matrixcontrib@contactdaniel.net>
2017-04-05 21:01:08 -07:00
Erik Johnston
fcc803b2bf Add log lines 2017-04-05 17:13:44 +01:00
Erik Johnston
ea0152b132 Merge pull request #2104 from matrix-org/erikj/metrics_tcp
Rearrange TCP replication metrics
2017-04-05 14:24:06 +01:00
Erik Johnston
3f213d908d Rearrange metrics 2017-04-05 14:15:09 +01:00
Erik Johnston
1ca0e78ca1 Fix typo 2017-04-05 13:43:39 +01:00
Erik Johnston
b43d3267e2 Fixup some metrics for tcp repl 2017-04-05 13:34:54 +01:00
Erik Johnston
b5cb6347a4 Don't immediately notify the master about users whose syncs have gone away 2017-04-05 13:25:40 +01:00
Erik Johnston
96b9b6c127 Don't double json encode typing replication data 2017-04-05 11:34:20 +01:00
Erik Johnston
f10ce8944b Don't double json encode federation replication data 2017-04-05 11:10:28 +01:00
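The bug class, in miniature:
```python
import json

row = {"user_id": "@alice:example.com"}

# Bug: serialising once when the row is queued and again when the stream
# is written produces a JSON *string* containing escaped JSON.
double_encoded = json.dumps(json.dumps(row))  # '"{\\"user_id\\": ...}"'

# Fix: serialise exactly once, at the point the data hits the wire.
encoded = json.dumps(row)
```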
Erik Johnston
a5c401bd12 Merge pull request #2097 from matrix-org/erikj/repl_tcp_client
Move to using TCP replication
2017-04-05 09:36:21 +01:00
Erik Johnston
b9caf4f726 Merge pull request #2099 from matrix-org/erikj/deviceinbox_reduce
Deduplicate new deviceinbox rows for replication
2017-04-05 09:35:59 +01:00
Erik Johnston
d1d5362267 Add comment 2017-04-04 16:41:03 +01:00
Erik Johnston
9f26d3b75b Deduplicate new deviceinbox rows for replication 2017-04-04 16:21:21 +01:00
Erik Johnston
a76886726b Merge pull request #2098 from matrix-org/erikj/repl_tcp_fix
Advance replication streams even if nothing is listening
2017-04-04 15:40:51 +01:00
Erik Johnston
ac66e11f2b Add the appropriate amount of preserve_fn 2017-04-04 15:22:54 +01:00
Erik Johnston
4264ceb31c Fiddle tcp replication logging 2017-04-04 14:14:03 +01:00
Erik Johnston
023ee197be Advance replication streams even if nothing is listening
Otherwise the streams don't advance and steadily fall behind, so when a
worker does connect either a) they'll be streamed lots of old updates or
b) the connection will fail as the streams are too far behind.
2017-04-04 13:19:26 +01:00
Erik Johnston
d1605794ad Remove unused worker config option 2017-04-04 11:17:00 +01:00
Erik Johnston
3376f16012 Shuffle and comment synchrotron presence 2017-04-04 11:14:16 +01:00
Erik Johnston
6ce6bbedcb Move where we ack federation 2017-04-04 11:02:44 +01:00
Erik Johnston
27cc627e42 Merge pull request #2082 from matrix-org/erikj/repl_tcp_server
Replace HTTP replication with TCP replication (Server side part)
2017-04-04 10:07:57 +01:00
Erik Johnston
62b89daac6 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/repl_tcp_server 2017-04-04 09:46:16 +01:00
Richard van der Hoff
773e64cc1a Merge pull request #2095 from matrix-org/rav/cull_log_preserves
Cull spurious PreserveLoggingContexts
2017-04-03 17:02:25 +01:00
Richard van der Hoff
2d05eb3cf5 Merge remote-tracking branch 'origin/release-v0.20.0' into develop 2017-04-03 16:23:23 +01:00
Richard van der Hoff
ac63b92b64 Merge pull request #2094 from matrix-org/rav/fix_federation_join
Accept join events from all servers
2017-04-03 16:17:17 +01:00
Richard van der Hoff
30bcbf775a Accept join events from all servers
Make sure that we accept join events from any server, rather than just the
origin server, to make the federation join dance work correctly.

(Fixes #1893).
2017-04-03 15:58:07 +01:00
Richard van der Hoff
7eb9f34cc3 Remove spurious yield
In `MessageHandler`, remove `yield` on call to `Notifier.on_new_room_event`:
it doesn't return anything anyway.
2017-04-03 15:44:19 +01:00
Richard van der Hoff
0b08c48fc5 Remove more spurious PreserveLoggingContexts
Remove `PreserveLoggingContext` around calls to `Notifier.on_new_room_event`;
there is no problem if the logcontext is set when calling it.
2017-04-03 15:43:37 +01:00
Richard van der Hoff
65e1683680 Remove spurious PreserveLoggingContext
In `on_new_room_event`, remove `PreserveLoggingContext` - we can call its
subroutines with the logcontext set.
2017-04-03 15:42:38 +01:00
Richard van der Hoff
feb496056e preserve_fn some deferred-returning things
In `Notifier._on_new_room_event`, `preserve_fn` around its subroutines which
return deferreds, so that it is safe to call it with an active logcontext.
2017-04-03 15:41:17 +01:00
Richard van der Hoff
e2eebf1696 Fix fixme in preserve_fn
`preserve_fn` is no longer used as a decorator anywhere, so we can safely fix a
fixme therein.
2017-04-03 15:38:02 +01:00
Erik Johnston
36c28bc467 Update all the workers and master to use TCP replication 2017-04-03 15:35:52 +01:00
Erik Johnston
3a1f3f8388 Change slave storage to use new replication interface
The TCP replication uses a slightly different API and set of streams than
the HTTP replication did.

This breaks HTTP replication.
2017-04-03 15:34:19 +01:00
Erik Johnston
52bfa604e1 Add basic replication client handler and factory 2017-04-03 15:34:13 +01:00
Erik Johnston
0a6a966e2b Always advance stream tokens 2017-04-03 15:22:56 +01:00
Richard van der Hoff
773e1c6d68 Remove spurious @preserve_fn decorators
Remove `@preserve_fn` decorators on `on_new_room_event`,
`_notify_pending_new_room_events`, `_on_new_room_event`, `on_new_event`, and
`on_new_replication_data` - none of these functions return a deferred, and the
decorator does nothing unless the wrapped function returns a deferred, so the
decorator was a no-op.
2017-04-03 15:14:11 +01:00
Erik Johnston
0d1c85e643 Merge branch 'release-v0.20.0' of github.com:matrix-org/synapse into develop 2017-04-03 14:58:14 +01:00
Erik Johnston
1df7c28661 Use callbacks to notify tcp replication rather than deferreds 2017-03-31 15:42:51 +01:00
Erik Johnston
36d2b66f90 Add a timestamp to USER_SYNC command
This timestamp is used to indicate when the user last sync'd
2017-03-31 15:42:22 +01:00
Erik Johnston
8a240e4f9c Merge pull request #2078 from APwhitehat/assertuserfriendly
add user friendly report of assertion error in synctl.py
2017-03-31 14:41:49 +01:00
Erik Johnston
ec039e6790 Merge pull request #1984 from RyanBreaker/patch-1
Add missing package to CentOS section
2017-03-31 14:39:32 +01:00
Erik Johnston
142b6b4abf Merge pull request #2011 from matrix-org/matthew/turn_allow_guests
add setting (on by default) to support TURN for guests
2017-03-31 14:37:09 +01:00
Erik Johnston
2a06b44be2 Merge pull request #1986 from matrix-org/matthew/enable_guest_3p
enable guest access for the 3pl/3pid APIs
2017-03-31 14:36:03 +01:00
Erik Johnston
2dc57e7413 Merge pull request #2024 from jerrykan/db_port_schema
Don't assume postgres tables are in the public schema during db port
2017-03-31 14:14:30 +01:00
Erik Johnston
07a32d192c Merge pull request #1961 from benhylau/patch-1
Clarify doc for SQLite to PostgreSQL port
2017-03-31 14:13:26 +01:00
Erik Johnston
9a27448b1b Merge pull request #1927 from zuckschwerdt/fix-nuke-script
bring nuke-room script to current schema
2017-03-31 14:06:29 +01:00
Matthew Hodgson
9ee397b440 switch to allow_guest=True for authing 3Ps as per PR feedback 2017-03-31 13:54:26 +01:00
Erik Johnston
9d0170ac6c Fix up presence 2017-03-31 11:36:32 +01:00
Erik Johnston
b4276a3896 Add a brief list of commands to docs 2017-03-31 11:34:45 +01:00
Erik Johnston
bfcf016714 Fix up docs 2017-03-31 11:19:24 +01:00
Erik Johnston
9ff4e0e91b Bump version and changelog 2017-03-30 16:37:40 +01:00
Erik Johnston
63fcc42990 Remove user from process_presence when stops syncing 2017-03-30 14:26:08 +01:00
Erik Johnston
31e0fe9031 Fix indentation in docs/ 2017-03-30 13:54:15 +01:00
Erik Johnston
3ba2859e0c Add tcp replication listener type and hook it up 2017-03-30 13:31:10 +01:00
Erik Johnston
e9dd8370b0 Add functions to presence to support remote syncs
The TCP replication protocol streams deltas of who has started or
stopped syncing. This is different from the HTTP API which periodically
sends the full list of users who are syncing. This commit adds support
for the new TCP style of sending deltas.
2017-03-30 13:25:14 +01:00
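The delta computation itself is tiny; a sketch:
```python
def syncing_deltas(previously_syncing, currently_syncing):
    # Old HTTP behaviour: periodically send currently_syncing in full.
    # New TCP behaviour: send only who started and who stopped.
    started = currently_syncing - previously_syncing
    stopped = previously_syncing - currently_syncing
    return started, stopped
```
For example, `syncing_deltas({"@a:hs"}, {"@a:hs", "@b:hs"})` returns `({"@b:hs"}, set())`.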
Erik Johnston
4d7fc7f977 Add server side resource for tcp replication 2017-03-30 13:24:45 +01:00
Erik Johnston
7450693435 Initial TCP protocol implementation
This defines the low level TCP replication protocol
2017-03-30 12:54:46 +01:00
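Assuming a newline-delimited `<NAME> <data>` framing for commands (an assumption for illustration, not a spec), parsing a line is a sketch like:
```python
def parse_command(line):
    # Assumed framing: one "<NAME> <data>" command per newline-delimited
    # line; <data> may itself contain spaces.
    if " " in line:
        name, data = line.split(" ", 1)
        return name, data
    return line, ""
```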
Erik Johnston
8da6f0be48 Define the various streams we will replicate 2017-03-30 12:54:46 +01:00
Erik Johnston
11880103b1 Make federation send queue take the current position 2017-03-30 12:54:36 +01:00
Erik Johnston
7984708a55 Add a simple hook to wait for replication traffic 2017-03-30 11:57:52 +01:00
Erik Johnston
24d35ab47b Add new storage functions for new replication
The new replication protocol will keep all the streams separate, rather
than muxing multiple streams into one.
2017-03-30 11:48:35 +01:00
Anant Prakash
305d16d612 add user friendly report of assertion error in synctl.py
Signed-off-by: Anant Prakash <anantprakashjsr@gmail.com>
2017-03-29 20:41:39 +05:30
John Kristensen
be44558886 Don't assume postgres tables are in the public schema during db port
When fetching the list of tables from the postgres database during the
db port, it is assumed that the tables are in the public schema. This is
not always the case, so let's just rely on postgres to determine the
default schema to use.
2017-03-17 10:53:32 +11:00
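A sketch of the idea, querying via the connection's search path instead of a hard-coded schema (the real port script's query may differ):
```python
def list_tables(cur):
    # current_schemas(false) resolves the connection's search path, so we
    # no longer assume everything lives in 'public'.
    cur.execute(
        "SELECT tablename FROM pg_tables"
        " WHERE schemaname = ANY (current_schemas(false))"
    )
    return [table_name for (table_name,) in cur.fetchall()]
```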
Matthew Hodgson
0970e0307e typo 2017-03-15 12:40:42 +00:00
Matthew Hodgson
5aa42d4292 set default for turn_allow_guests correctly 2017-03-15 12:40:13 +00:00
Matthew Hodgson
e0ff66251f add setting (on by default) to support TURN for guests 2017-03-15 12:22:18 +00:00
Ryan Breaker
a175963ba5 Add --upgrade pip
Needed before `pip install --upgrade setuptools` for CentOS 7 and also doesn't hurt for any other distro.
2017-03-13 14:05:31 -05:00
Matthew Hodgson
a61dd408ed enable guest access for the 3pl/3pid APIs 2017-03-12 19:30:45 +00:00
Ryan Breaker
53254551f0 Add missing package to CentOS section
Also added Fedora 25 to header as the same packages work for it as well.
2017-03-10 22:09:22 -06:00
Benedict Lau
92312aa3e6 Clarify doc for SQLite to PostgreSQL port 2017-03-01 01:30:11 -05:00
Christian W. Zuckschwerdt
20746d8150 bring nuke-room script to current schema
Signed-off-by: Christian W. Zuckschwerdt <christian@zuckschwerdt.org>
2017-02-19 05:27:45 +01:00
192 changed files with 11480 additions and 3724 deletions

.github/ISSUE_TEMPLATE.md

@@ -0,0 +1,47 @@
<!--
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
You will likely get better support more quickly if you ask in ** #matrix:matrix.org ** ;)
This is a bug report template. By following the instructions below and
filling out the sections with your information, you will help us to get all
the necessary data to fix your issue.
You can also preview your report before submitting it. You may remove sections
that aren't relevant to your particular case.
Text between <!-- and --> marks will be invisible in the report.
-->
### Description
Describe here the problem that you are experiencing, or the feature you are requesting.
### Steps to reproduce
- For bugs, list the steps
- that reproduce the bug
- using hyphens as bullet points
Describe how what happens differs from what you expected.
If you can identify any relevant log snippets from _homeserver.log_, please include
those here (please be careful to remove any personal or private data):
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Homeserver**: Was this issue identified on matrix.org or another homeserver?
If not matrix.org:
- **Version**: What version of Synapse is running? <!--
You can find the Synapse version by inspecting the server headers (replace matrix.org with
your own homeserver domain):
$ curl -v https://matrix.org/_matrix/client/versions 2>&1 | grep "Server:"
-->
- **Install method**: package manager/git clone/pip
- **Platform**: Tell us about the environment in which your homeserver is operating
- distro, hardware, if it's running in a vm/container, etc.


@@ -1,3 +1,268 @@
Changes in synapse v0.23.0 (2017-10-02)
=======================================
No changes since v0.23.0-rc2
Changes in synapse v0.23.0-rc2 (2017-09-26)
===========================================
Bug fixes:
* Fix regression in performance of syncs (PR #2470)
Changes in synapse v0.23.0-rc1 (2017-09-25)
===========================================
Features:
* Add a frontend proxy worker (PR #2344)
* Add support for event_id_only push format (PR #2450)
* Add a PoC for filtering spammy events (PR #2456)
* Add a config option to block all room invites (PR #2457)
Changes:
* Use bcrypt module instead of py-bcrypt (PR #2288) Thanks to @kyrias!
* Improve performance of generating push notifications (PR #2343, #2357, #2365,
#2366, #2371)
* Improve DB performance for device list handling in sync (PR #2362)
* Include a sample prometheus config (PR #2416)
* Document known to work postgres version (PR #2433) Thanks to @ptman!
Bug fixes:
* Fix caching error in the push evaluator (PR #2332)
* Fix bug where pusherpool didn't start and broke some rooms (PR #2342)
* Fix port script for user directory tables (PR #2375)
* Fix device lists notifications when user rejoins a room (PR #2443, #2449)
* Fix sync to always send down current state events in timeline (PR #2451)
* Fix bug where guest users were incorrectly kicked (PR #2453)
* Fix bug talking to IPv6 only servers using SRV records (PR #2462)
Changes in synapse v0.22.1 (2017-07-06)
=======================================
Bug fixes:
* Fix bug where pusher pool didn't start and caused issues when
interacting with some rooms (PR #2342)
Changes in synapse v0.22.0 (2017-07-06)
=======================================
No changes since v0.22.0-rc2
Changes in synapse v0.22.0-rc2 (2017-07-04)
===========================================
Changes:
* Improve performance of storing user IPs (PR #2307, #2308)
* Slightly improve performance of verifying access tokens (PR #2320)
* Slightly improve performance of event persistence (PR #2321)
* Increase default cache factor size from 0.1 to 0.5 (PR #2330)
Bug fixes:
* Fix bug with storing registration sessions that caused frequent CPU churn
(PR #2319)
Changes in synapse v0.22.0-rc1 (2017-06-26)
===========================================
Features:
* Add a user directory API (PR #2252, and many more)
* Add shutdown room API to remove room from local server (PR #2291)
* Add API to quarantine media (PR #2292)
* Add new config option to not send event contents to push servers (PR #2301)
Thanks to @cjdelisle!
Changes:
* Various performance fixes (PR #2177, #2233, #2230, #2238, #2248, #2256,
#2274)
* Deduplicate sync filters (PR #2219) Thanks to @krombel!
* Correct a typo in UPGRADE.rst (PR #2231) Thanks to @aaronraimist!
* Add count of one time keys to sync stream (PR #2237)
* Only store event_auth for state events (PR #2247)
* Store URL cache preview downloads separately (PR #2299)
Bug fixes:
* Fix users not getting notifications when AS listened to that user_id (PR
#2216) Thanks to @slipeer!
* Fix users without push set up not getting notifications after joining rooms
(PR #2236)
* Fix preview url API to trim long descriptions (PR #2243)
* Fix bug where we used cached but unpersisted state group as prev group,
resulting in broken state of restart (PR #2263)
* Fix removing of pushers when using workers (PR #2267)
* Fix CORS headers to allow Authorization header (PR #2285) Thanks to @krombel!
Changes in synapse v0.21.1 (2017-06-15)
=======================================
Bug fixes:
* Fix bug in anonymous usage statistic reporting (PR #2281)
Changes in synapse v0.21.0 (2017-05-18)
=======================================
No changes since v0.21.0-rc3
Changes in synapse v0.21.0-rc3 (2017-05-17)
===========================================
Features:
* Add per user rate-limiting overrides (PR #2208)
* Add config option to limit maximum number of events requested by ``/sync``
and ``/messages`` (PR #2221) Thanks to @psaavedra!
Changes:
* Various small performance fixes (PR #2201, #2202, #2224, #2226, #2227, #2228,
#2229)
* Update username availability checker API (PR #2209, #2213)
* When purging, don't de-delta state groups we're about to delete (PR #2214)
* Documentation to check synapse version (PR #2215) Thanks to @hamber-dick!
* Add an index to event_search to speed up purge history API (PR #2218)
Bug fixes:
* Fix API to allow clients to upload one-time-keys with new sigs (PR #2206)
Changes in synapse v0.21.0-rc2 (2017-05-08)
===========================================
Changes:
* Always mark remotes as up if we receive a signed request from them (PR #2190)
Bug fixes:
* Fix bug where users got pushed for rooms they had muted (PR #2200)
Changes in synapse v0.21.0-rc1 (2017-05-08)
===========================================
Features:
* Add username availability checker API (PR #2183)
* Add read marker API (PR #2120)
Changes:
* Enable guest access for the 3pl/3pid APIs (PR #1986)
* Add setting to support TURN for guests (PR #2011)
* Various performance improvements (PR #2075, #2076, #2080, #2083, #2108,
#2158, #2176, #2185)
* Make synctl a bit more user friendly (PR #2078, #2127) Thanks @APwhitehat!
* Replace HTTP replication with TCP replication (PR #2082, #2097, #2098,
#2099, #2103, #2014, #2016, #2115, #2116, #2117)
* Support authenticated SMTP (PR #2102) Thanks @DanielDent!
* Add a counter metric for successfully-sent transactions (PR #2121)
* Propagate errors sensibly from proxied IS requests (PR #2147)
* Add more granular event send metrics (PR #2178)
Bug fixes:
* Fix nuke-room script to work with current schema (PR #1927) Thanks
@zuckschwerdt!
* Fix db port script to not assume postgres tables are in the public schema
(PR #2024) Thanks @jerrykan!
* Fix getting latest device IP for user with no devices (PR #2118)
* Fix rejection of invites to unreachable servers (PR #2145)
* Fix code for reporting old verify keys in synapse (PR #2156)
* Fix invite state to always include all events (PR #2163)
* Fix bug where synapse would always fetch state for any missing event (PR #2170)
* Fix a leak with timed out HTTP connections (PR #2180)
* Fix bug where we didn't time out HTTP requests to ASes (PR #2192)
Docs:
* Clarify doc for SQLite to PostgreSQL port (PR #1961) Thanks @benhylau!
* Fix typo in synctl help (PR #2107) Thanks @HarHarLinks!
* ``web_client_location`` documentation fix (PR #2131) Thanks @matthewjwolff!
* Update README.rst with FreeBSD changes (PR #2132) Thanks @feld!
* Clarify setting up metrics (PR #2149) Thanks @enckse!
Changes in synapse v0.20.0 (2017-04-11)
=======================================
Bug fixes:
* Fix joining rooms over federation where not all servers in the room saw the
new server had joined (PR #2094)
Changes in synapse v0.20.0-rc1 (2017-03-30)
===========================================
Features:
* Add delete_devices API (PR #1993)
* Add phone number registration/login support (PR #1994, #2055)
Changes:
* Use JSONSchema for validation of filters. Thanks @pik! (PR #1783)
* Reread log config on SIGHUP (PR #1982)
* Speed up public room list (PR #1989)
* Add helpful texts to logger config options (PR #1990)
* Minor ``/sync`` performance improvements. (PR #2002, #2013, #2022)
* Add some debug to help diagnose weird federation issue (PR #2035)
* Correctly limit retries for all federation requests (PR #2050, #2061)
* Don't lock table when persisting new one time keys (PR #2053)
* Reduce some CPU work on DB threads (PR #2054)
* Cache hosts in room (PR #2060)
* Batch sending of device list pokes (PR #2063)
* Speed up persist event path in certain edge cases (PR #2070)
Bug fixes:
* Fix bug where current_state_events renamed to current_state_ids (PR #1849)
* Fix routing loop when fetching remote media (PR #1992)
* Fix current_state_events table to not lie (PR #1996)
* Fix CAS login to handle PartialDownloadError (PR #1997)
* Fix assertion to stop transaction queue getting wedged (PR #2010)
* Fix presence to fallback to last_active_ts if it beats the last sync time.
Thanks @Half-Shot! (PR #2014)
* Fix bug when federation received a PDU while a room join is in progress (PR
#2016)
* Fix resetting state on rejected events (PR #2025)
* Fix installation issues in readme. Thanks @ricco386 (PR #2037)
* Fix caching of remote servers' signature keys (PR #2042)
* Fix some leaking log context (PR #2048, #2049, #2057, #2058)
* Fix rejection of invites not reaching sync (PR #2056)
Changes in synapse v0.19.3 (2017-03-20)
=======================================


@@ -27,4 +27,5 @@ exclude jenkins*.sh
exclude jenkins*
recursive-exclude jenkins *.sh
prune .github
prune demo/etc


@@ -84,6 +84,7 @@ Synapse Installation
Synapse is the reference python/twisted Matrix homeserver implementation.
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 2.7
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
@@ -108,10 +109,10 @@ Installing prerequisites on ArchLinux::
sudo pacman -S base-devel python2 python-pip \
python-setuptools python-virtualenv sqlite3
Installing prerequisites on CentOS 7::
Installing prerequisites on CentOS 7 or Fedora 25::
sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \
python-virtualenv libffi-devel openssl-devel
sudo yum groupinstall "Development Tools"
@@ -199,11 +200,11 @@ different. See `the spec`__ for more information on key management.)
.. __: `key_management`_
The default configuration exposes two HTTP ports: 8008 and 8448. Port 8008 is
configured without TLS; it is not recommended this be exposed outside your
local network. Port 8448 is configured to use TLS with a self-signed
certificate. This is fine for testing with but, to avoid your clients
complaining about the certificate, you will almost certainly want to use
another certificate for production purposes. (Note that a self-signed
configured without TLS; it should be behind a reverse proxy for TLS/SSL
termination on port 443 which in turn should be used for clients. Port 8448
is configured to use TLS with a self-signed certificate. If you would like
to do an initial test with a client without having to set up a reverse proxy,
you can temporarily use another certificate. (Note that a self-signed
certificate is fine for `Federation`_). You can do so by changing
``tls_certificate_path``, ``tls_private_key_path`` and ``tls_dh_params_path``
in ``homeserver.yaml``; alternatively, you can use a reverse-proxy, but be sure
@@ -245,6 +246,25 @@ Setting up a TURN server
For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See `<docs/turn-howto.rst>`_ for details.
IPv6
----
As of Synapse 0.19 we finally support IPv6, many thanks to @kyrias and @glyph
for providing PR #1696.
However, for federation to work on hosts with IPv6 DNS servers you **must**
be running Twisted 17.1.0 or later - see https://github.com/matrix-org/synapse/issues/1002
for details. We can't make Synapse depend on Twisted 17.1 by default
yet as it will break most older distributions (see https://github.com/matrix-org/synapse/pull/1909)
so if you are using operating system dependencies you'll have to install your
own Twisted 17.1 package via pip or backports etc.
If you're running in a virtualenv then pip should have installed the newest
Twisted automatically, but if your virtualenv is old you will need to manually
upgrade to a newer Twisted dependency via:
pip install Twisted>=17.1.0
Running Synapse
===============
@@ -263,10 +283,16 @@ Connecting to Synapse from a client
The easiest way to try out your new Synapse installation is by connecting to it
from a web client. The easiest option is probably the one at
http://riot.im/app. You will need to specify a "Custom server" when you log on
or register: set this to ``https://localhost:8448`` - remember to specify the
port (``:8448``) unless you changed the configuration. (Leave the identity
or register: set this to ``https://domain.tld`` if you set up a reverse proxy
following the recommended setup, or ``https://localhost:8448`` - remember to specify the
port (``:8448``) if not ``:443`` unless you changed the configuration. (Leave the identity
server as the default - see `Identity servers`_.)
If using port 8448 you will run into errors until you accept the self-signed
certificate. You can easily do this by going to ``https://localhost:8448``
directly with your browser and accepting the presented certificate. You can then
go back to your web client and proceed further.
If all goes well you should at least be able to log in, create a room, and
start sending messages.
@@ -335,8 +361,11 @@ ArchLinux
---------
The quickest way to get up and running with ArchLinux is probably with the community package
https://www.archlinux.org/packages/community/any/matrix-synapse/, which should pull in all
the necessary dependencies.
https://www.archlinux.org/packages/community/any/matrix-synapse/, which should pull in most of
the necessary dependencies. If the default web client is to be served (enabled by default in
the generated config),
https://www.archlinux.org/packages/community/any/python2-matrix-angular-sdk/ will also need to
be installed.
Alternatively, to install using pip a few changes may be needed as ArchLinux
defaults to python 3, but synapse currently assumes python 2.7 by default:
@@ -373,7 +402,7 @@ FreeBSD
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
- Ports: ``cd /usr/ports/net/py-matrix-synapse && make install clean``
- Ports: ``cd /usr/ports/net-im/py-matrix-synapse && make install clean``
- Packages: ``pkg install py27-matrix-synapse``
@@ -505,6 +534,30 @@ fix try re-installing from PyPI or directly from
# Install from github
pip install --user https://github.com/pyca/pynacl/tarball/master
Running out of File Handles
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If synapse runs out of filehandles, it typically fails badly - live-locking
at 100% CPU, and/or failing to accept new TCP connections (blocking the
connecting client). Matrix currently can legitimately use a lot of file handles,
thanks to busy rooms like #matrix:matrix.org containing hundreds of participating
servers. The first time a server talks in a room it will try to connect
simultaneously to all participating servers, which could exhaust the available
file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow
to respond. (We need to improve the routing algorithm used to be better than
full mesh, but as of June 2017 this hasn't happened yet).
If you hit this failure mode, we recommend increasing the maximum number of
open file handles to be at least 4096 (assuming a default of 1024 or 256).
This is typically done by editing ``/etc/security/limits.conf``
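For example, a sketch of the relevant ``limits.conf`` entries (adjust the domain and numbers for your deployment)::

    *       soft    nofile  4096
    *       hard    nofile  4096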
Separately, Synapse may leak file handles if inbound HTTP requests get stuck
during processing - e.g. blocked behind a lock or talking to a remote server etc.
This is best diagnosed by matching up the 'Received request' and 'Processed request'
log lines and looking for any 'Processed request' lines which take more than
a few seconds to execute. Please let us know at #matrix-dev:matrix.org if
you see this failure mode so we can help debug it, however.
ArchLinux
~~~~~~~~~
@@ -546,8 +599,9 @@ you to run your server on a machine that might not have the same name as your
domain name. For example, you might want to run your server at
``synapse.example.com``, but have your Matrix user-ids look like
``@user:example.com``. (A SRV record also allows you to change the port from
the default 8448. However, if you are thinking of using a reverse-proxy, be
sure to read `Reverse-proxying the federation port`_ first.)
the default 8448. However, if you are thinking of using a reverse-proxy on the
federation port, which is not recommended, be sure to read
`Reverse-proxying the federation port`_ first.)
To use a SRV record, first create your SRV record and publish it in DNS. This
should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
@@ -627,7 +681,7 @@ For information on how to install and use PostgreSQL, please see
Using a reverse proxy with Synapse
==================================
It is possible to put a reverse proxy such as
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
`HAProxy <http://www.haproxy.org/>`_ in front of Synapse. One advantage of
@@ -645,9 +699,9 @@ federation port has a number of pitfalls. It is possible, but be sure to read
`Reverse-proxying the federation port`_.
The recommended setup is therefore to configure your reverse-proxy on port 443
for client connections, but to also expose port 8448 for server-server
connections. All the Matrix endpoints begin ``/_matrix``, so an example nginx
configuration might look like::
to port 8008 of synapse for client connections, but to also directly expose port
8448 for server-server connections. All the Matrix endpoints begin ``/_matrix``,
so an example nginx configuration might look like::
server {
listen 443 ssl;
@@ -852,12 +906,9 @@ cache a lot of recent room data and metadata in RAM in order to speed up
common requests. We'll improve this in future, but for now the easiest
way to either reduce the RAM usage (at the risk of slowing things down)
is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
variable. Roughly speaking, a SYNAPSE_CACHE_FACTOR of 1.0 will max out
at around 3-4GB of resident memory - this is what we currently run the
matrix.org on. The default setting is currently 0.1, which is probably
around a ~700MB footprint. You can dial it down further to 0.02 if
desired, which targets roughly ~512MB. Conversely you can dial it up if
you need performance for lots of users and have a box with a lot of RAM.
variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
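For example, to run with a larger cache factor (illustrative value only)::

    SYNAPSE_CACHE_FACTOR=2.0 synctl start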
.. _`key_management`: https://matrix.org/docs/spec/server_server/unstable.html#retrieving-server-keys


@@ -5,30 +5,48 @@ Before upgrading check if any special steps are required to upgrade from the
what you currently have installed to current version of synapse. The extra
instructions that may be required are listed later in this document.
If synapse was installed in a virtualenv then activate that virtualenv before
upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then run:
1. If synapse was installed in a virtualenv then activate that virtualenv before
upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then
run:
.. code:: bash
source ~/.synapse/bin/activate
2. If synapse was installed using pip then upgrade to the latest version by
running:
.. code:: bash
pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
# restart synapse
synctl restart
If synapse was installed using git then upgrade to the latest version by
running:
.. code:: bash
# Pull the latest version of the master branch.
git pull
# Update the versions of synapse's python dependencies.
python synapse/python_dependencies.py | xargs pip install --upgrade
# restart synapse
./synctl restart
To check whether your update was successful, you can check the Server header
returned by the Client-Server API:
.. code:: bash
source ~/.synapse/bin/activate
If synapse was installed using pip then upgrade to the latest version by
running:
.. code:: bash
pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
If synapse was installed using git then upgrade to the latest version by
running:
.. code:: bash
# Pull the latest version of the master branch.
git pull
# Update the versions of synapse's python dependencies.
python synapse/python_dependencies.py | xargs -n1 pip install --upgrade
# replace <host.name> with the hostname of your synapse homeserver.
# You may need to specify a port (eg, :8448) if your server is not
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to v0.15.0
====================
@@ -68,7 +86,7 @@ It has been replaced by specifying a list of application service registrations i
``homeserver.yaml``::
app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
Where ``registration-01.yaml`` looks like::
url: <String> # e.g. "https://my.application.service.com"
@@ -157,7 +175,7 @@ This release completely changes the database schema and so requires upgrading
it before starting the new version of the homeserver.
The script "database-prepare-for-0.5.0.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, which
rooms the home server was a member of and room alias mappings.
@@ -166,18 +184,18 @@ file and ask for help in #matrix:matrix.org. The upgrade process is,
unfortunately, non trivial and requires human intervention to resolve any
resulting conflicts during the upgrade process.
Before running the command the homeserver should be first completely
shutdown. To run it, simply specify the location of the database, e.g.:
./scripts/database-prepare-for-0.5.0.sh "homeserver.db"
Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.
On startup of the new version, users can either rejoin remote rooms using room
aliases or by being reinvited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in the local HS will
automatically rejoin the room.
Upgrading to v0.4.0
@@ -236,7 +254,7 @@ automatically generate default config use::
--config-path homeserver.config \
--generate-config
This config can be edited if desired, for example to specify a different SSL
certificate to use. Once done you can run the home server using::
$ python synapse/app/homeserver.py --config-path homeserver.config
@@ -257,20 +275,20 @@ This release completely changes the database schema and so requires upgrading
it before starting the new version of the homeserver.
The script "database-prepare-for-0.0.1.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, which
rooms the home server was a member of and room alias mappings.
Before running the command the homeserver should be first completely
shutdown. To run it, simply specify the location of the database, e.g.:
./scripts/database-prepare-for-0.0.1.sh "homeserver.db"
Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.
On startup of the new version, users can either rejoin remote rooms using room
aliases or by being reinvited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in the local HS will
automatically rejoin the room.


@@ -36,15 +36,13 @@ class HttpClient(object):
the request body. This will be encoded as JSON.
Returns:
Deferred: Succeeds when we get *any* HTTP response.
The result of the deferred is a tuple of `(code, response)`,
where `response` is a dict representing the decoded JSON body.
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.
"""
pass
def get_json(self, url, args=None):
""" Get's some json from the given host homeserver and path
""" Gets some json from the given host homeserver and path
Args:
url (str): The URL to GET data from.
@@ -54,10 +52,8 @@ class HttpClient(object):
and *not* a string.
Returns:
Deferred: Succeeds when we get *any* HTTP response.
The result of the deferred is a tuple of `(code, response)`,
where `response` is a dict representing the decoded JSON body.
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.
"""
pass
@@ -214,4 +210,4 @@ class _JsonProducer(object):
pass
def stopProducing(self):
pass

contrib/prometheus/README Normal file

@@ -0,0 +1,20 @@
This directory contains some sample monitoring config for using the
'Prometheus' monitoring server against synapse.
To use it, first install prometheus by following the instructions at
http://prometheus.io/
Then add a new job to the main prometheus.conf file:
job: {
name: "synapse"
target_group: {
target: "http://SERVER.LOCATION.HERE:PORT/_synapse/metrics"
}
}
Metrics are disabled by default when running synapse; they must be enabled
with the 'enable-metrics' option, either in the synapse config file or as a
command-line option.
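Before wiring prometheus up, it can be worth checking that the endpoint
actually serves data. A minimal sketch, assuming a local synapse exposing
metrics on port 9092 (both values are placeholders):

    import requests

    # Fetch the metrics page and show the first few exported samples.
    resp = requests.get("http://localhost:9092/_synapse/metrics")
    resp.raise_for_status()
    for line in resp.text.splitlines()[:10]:
        print(line)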


@@ -0,0 +1,395 @@
{{ template "head" . }}
{{ template "prom_content_head" . }}
<h1>System Resources</h1>
<h3>CPU</h3>
<div id="process_resource_utime"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_resource_utime"),
expr: "rate(process_cpu_seconds_total[2m]) * 100",
name: "[[job]]",
min: 0,
max: 100,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "%",
yTitle: "CPU Usage"
})
</script>
<h3>Memory</h3>
<div id="process_resource_maxrss"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_resource_maxrss"),
expr: "process_psutil_rss:max",
name: "Maxrss",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "bytes",
yTitle: "Usage"
})
</script>
<h3>File descriptors</h3>
<div id="process_fds"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_fds"),
expr: "process_open_fds{job='synapse'}",
name: "FDs",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "",
yTitle: "Descriptors"
})
</script>
<h1>Reactor</h1>
<h3>Total reactor time</h3>
<div id="reactor_total_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_total_time"),
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / 1000",
name: "time",
max: 1,
min: 0,
renderer: "area",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Average reactor tick time</h3>
<div id="reactor_average_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_average_time"),
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / rate(python_twisted_reactor_tick_time:count[2m]) / 1000",
name: "time",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s",
yTitle: "Time"
})
</script>
<h3>Pending calls per tick</h3>
<div id="reactor_pending_calls"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_pending_calls"),
expr: "rate(python_twisted_reactor_pending_calls:total[30s])/rate(python_twisted_reactor_pending_calls:count[30s])",
name: "calls",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yTitle: "Pending Cals"
})
</script>
<h1>Storage</h1>
<h3>Queries</h3>
<div id="synapse_storage_query_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_query_time"),
expr: "rate(synapse_storage_query_time:count[2m])",
name: "[[verb]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "queries/s",
yTitle: "Queries"
})
</script>
<h3>Transactions</h3>
<div id="synapse_storage_transaction_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transaction_time"),
expr: "rate(synapse_storage_transaction_time:count[2m])",
name: "[[desc]]",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "txn/s",
yTitle: "Transactions"
})
</script>
<h3>Transaction execution time</h3>
<div id="synapse_storage_transactions_time_msec"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transactions_time_msec"),
expr: "rate(synapse_storage_transaction_time:total[2m]) / 1000",
name: "[[desc]]",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Database scheduling latency</h3>
<div id="synapse_storage_schedule_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_schedule_time"),
expr: "rate(synapse_storage_schedule_time:total[2m]) / 1000",
name: "Total latency",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Cache hit ratio</h3>
<div id="synapse_cache_ratio"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_cache_ratio"),
expr: "rate(synapse_util_caches_cache:total[2m]) * 100",
name: "[[name]]",
min: 0,
max: 100,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "%",
yTitle: "Percentage"
})
</script>
<h3>Cache size</h3>
<div id="synapse_cache_size"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_cache_size"),
expr: "synapse_util_caches_cache:size",
name: "[[name]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Items"
})
</script>
<h1>Requests</h1>
<h3>Requests by Servlet</h3>
<div id="synapse_http_server_requests_servlet"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet"),
expr: "rate(synapse_http_server_requests:servlet[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
<div id="synapse_http_server_requests_servlet_minus_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet_minus_events"),
expr: "rate(synapse_http_server_requests:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Average response times</h3>
<div id="synapse_http_server_response_time_avg"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_time_avg"),
expr: "rate(synapse_http_server_response_time:total[2m]) / rate(synapse_http_server_response_time:count[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req",
yTitle: "Response time"
})
</script>
<h3>All responses by code</h3>
<div id="synapse_http_server_responses"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_responses"),
expr: "rate(synapse_http_server_responses[2m])",
name: "[[method]] / [[code]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Error responses by code</h3>
<div id="synapse_http_server_responses_err"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_responses_err"),
expr: "rate(synapse_http_server_responses{code=~\"[45]..\"}[2m])",
name: "[[method]] / [[code]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>CPU Usage</h3>
<div id="synapse_http_server_response_ru_utime"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_ru_utime"),
expr: "rate(synapse_http_server_response_ru_utime:total[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "CPU Usage"
})
</script>
<h3>DB Usage</h3>
<div id="synapse_http_server_response_db_txn_duration"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
expr: "rate(synapse_http_server_response_db_txn_duration:total[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "DB Usage"
})
</script>
<h3>Average event send times</h3>
<div id="synapse_http_server_send_time_avg"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_send_time_avg"),
expr: "rate(synapse_http_server_response_time:total{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_time:count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req",
yTitle: "Response time"
})
</script>
<h1>Federation</h1>
<h3>Sent Messages</h3>
<div id="synapse_federation_client_sent"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_client_sent"),
expr: "rate(synapse_federation_client_sent[2m])",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Received Messages</h3>
<div id="synapse_federation_server_received"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_server_received"),
expr: "rate(synapse_federation_server_received[2m])",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Pending</h3>
<div id="synapse_federation_transaction_queue_pending"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_transaction_queue_pending"),
expr: "synapse_federation_transaction_queue_pending",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Units"
})
</script>
<h1>Clients</h1>
<h3>Notifiers</h3>
<div id="synapse_notifier_listeners"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_listeners"),
expr: "synapse_notifier_listeners",
name: "listeners",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Listeners"
})
</script>
<h3>Notified Events</h3>
<div id="synapse_notifier_notified_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_notified_events"),
expr: "rate(synapse_notifier_notified_events[2m])",
name: "events",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "events/s",
yTitle: "Event rate"
})
</script>
{{ template "prom_content_tail" . }}
{{ template "tail" }}


@@ -0,0 +1,21 @@
synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method)
synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet)
synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet)
synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])
synapse_federation_client_sent{type="EDU"} = synapse_federation_client_sent_edus + 0
synapse_federation_client_sent{type="PDU"} = synapse_federation_client_sent_pdu_destinations:count + 0
synapse_federation_client_sent{type="Query"} = sum(synapse_federation_client_sent_queries) by (job)
synapse_federation_server_received{type="EDU"} = synapse_federation_server_received_edus + 0
synapse_federation_server_received{type="PDU"} = synapse_federation_server_received_pdus + 0
synapse_federation_server_received{type="Query"} = sum(synapse_federation_server_received_queries) by (job)
synapse_federation_transaction_queue_pending{type="EDU"} = synapse_federation_transaction_queue_pending_edus + 0
synapse_federation_transaction_queue_pending{type="PDU"} = synapse_federation_transaction_queue_pending_pdus + 0


@@ -0,0 +1,73 @@
Query Account
=============
This API returns information about a specific user account.
The API is::
GET /_matrix/client/r0/admin/whois/<user_id>
including an ``access_token`` of a server admin.
It returns a JSON body like the following:
.. code:: json
{
"user_id": "<user_id>",
"devices": {
"": {
"sessions": [
{
"connections": [
{
"ip": "1.2.3.4",
"last_seen": 1417222374433,
"user_agent": "Mozilla/5.0 ..."
},
{
"ip": "1.2.3.10",
"last_seen": 1417222374500,
"user_agent": "Dalvik/2.1.0 ..."
}
]
}
]
}
}
}
``last_seen`` is measured in milliseconds since the Unix epoch.
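As an illustration, the endpoint can be driven with any HTTP client. A
minimal sketch using the ``requests`` library (the server URL, user id and
token are placeholders):

.. code:: python

    import requests

    SERVER = "https://matrix.example.com"   # hypothetical homeserver
    ADMIN_TOKEN = "MDAx..."                 # a server admin's access_token
    USER_ID = "@someone:example.com"

    resp = requests.get(
        "%s/_matrix/client/r0/admin/whois/%s" % (SERVER, USER_ID),
        params={"access_token": ADMIN_TOKEN},
    )
    resp.raise_for_status()
    for device_id, device in resp.json()["devices"].items():
        for session in device["sessions"]:
            for conn in session["connections"]:
                print(conn["ip"], conn["last_seen"], conn["user_agent"])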
Deactivate Account
==================
This API deactivates an account. It removes active access tokens, resets the
password, and deletes third-party IDs (to prevent the user requesting a
password reset).
The API is::
POST /_matrix/client/r0/admin/deactivate/<user_id>
including an ``access_token`` of a server admin, and an empty request body.
Reset password
==============
Changes the password of another user.
The API is::
POST /_matrix/client/r0/admin/reset_password/<user_id>
with a body of:
.. code:: json
{
"new_password": "<secret>"
}
including an ``access_token`` of a server admin.
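As with the other admin endpoints, this is a plain authenticated HTTP call.
A sketch, with placeholder values again:

.. code:: python

    import requests

    SERVER = "https://matrix.example.com"   # hypothetical homeserver
    ADMIN_TOKEN = "MDAx..."                 # a server admin's access_token

    resp = requests.post(
        "%s/_matrix/client/r0/admin/reset_password/@someone:example.com" % SERVER,
        params={"access_token": ADMIN_TOKEN},
        json={"new_password": "correct horse battery staple"},
    )
    resp.raise_for_status()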


@@ -21,13 +21,12 @@ How to monitor Synapse metrics using Prometheus
3. Add a prometheus target for synapse.
It needs to set the ``metrics_path`` to a non-default value::
It needs to set the ``metrics_path`` to a non-default value (under ``scrape_configs``)::
- job_name: "synapse"
metrics_path: "/_synapse/metrics"
static_configs:
- targets:
"my.server.here:9092"
- targets: ["my.server.here:9092"]
If your prometheus is older than 1.5.2, you will need to replace
``static_configs`` in the above with ``target_groups``.


@@ -1,6 +1,8 @@
Using Postgres
--------------
Postgres version 9.4 or later is known to work.
Set up database
===============
@@ -112,9 +114,9 @@ script one last time, e.g. if the SQLite database is at ``homeserver.db``
run::
synapse_port_db --sqlite-database homeserver.db \
--postgres-config database_config.yaml
--postgres-config homeserver-postgres.yaml
Once that has completed, change the synapse config to point at the PostgreSQL
database configuration file using the ``database_config`` parameter (see
`Synapse Config`_) and restart synapse. Synapse should now be running against
database configuration file ``homeserver-postgres.yaml`` (i.e. rename it to
``homeserver.yaml``) and restart synapse. Synapse should now be running against
PostgreSQL.


@@ -26,28 +26,10 @@ expose the append-only log to the readers should be fairly minimal.
Architecture
------------
The Replication API
~~~~~~~~~~~~~~~~~~~
The Replication Protocol
~~~~~~~~~~~~~~~~~~~~~~~~
Synapse will optionally expose a long poll HTTP API for extracting updates. The
API will have a similar shape to /sync in that clients provide tokens
indicating where in the log they have reached and a timeout. The synapse server
then either responds with updates immediately if it already has updates or it
waits until the timeout for more updates. If the timeout expires and nothing
happened then the server returns an empty response.
However unlike the /sync API this replication API is returning synapse specific
data rather than trying to implement a matrix specification. The replication
results are returned as arrays of rows where the rows are mostly lifted
directly from the database. This avoids unnecessary JSON parsing on the server
and hopefully avoids an impedance mismatch between the data returned and the
required updates to the datastore.
This does not replicate all the database tables as many of the database tables
are indexes that can be recovered from the contents of other tables.
The format and parameters for the api are documented in
``synapse/replication/resource.py``.
See ``tcp_replication.rst``
The Slaved DataStore

docs/tcp_replication.rst Normal file

@@ -0,0 +1,223 @@
TCP Replication
===============
Motivation
----------
Previously the workers used an HTTP long poll mechanism to get updates from the
master, which had the problem of causing a lot of duplicate work on the server.
This TCP protocol replaces those APIs with the aim of increased efficiency.
Overview
--------
The protocol is based on fire-and-forget, line-based commands. An example flow
would be (where '>' indicates master to worker and '<' worker to master flows)::
> SERVER example.com
< REPLICATE events 53
> RDATA events 54 ["$foo1:bar.com", ...]
> RDATA events 55 ["$foo4:bar.com", ...]
The example shows the server accepting a new connection and sending its identity
with the ``SERVER`` command, followed by the client asking to subscribe to the
``events`` stream from the token ``53``. The server then periodically sends ``RDATA``
commands which have the format ``RDATA <stream_name> <token> <row>``, where the
format of ``<row>`` is defined by the individual streams.
Error reporting happens by either the client or server sending an `ERROR`
command, and usually the connection will be closed.
Since the protocol is simple and line based, it's possible to connect to the
server manually using a tool like netcat. A few things should be noted when
using the protocol by hand:
* When subscribing to a stream using ``REPLICATE``, the special token ``NOW`` can
be used to get all future updates. The special stream name ``ALL`` can be used
with ``NOW`` to subscribe to all available streams.
* The federation stream is only available if federation sending has been
disabled on the main process.
* The server will only time connections out that have sent a ``PING`` command.
If a ping is sent then the connection will be closed if no further commands
are received within 15s. Both the client and server protocol implementations
will send an initial PING on connection and ensure at least one command every
5s is sent (not necessarily ``PING``).
* ``RDATA`` commands *usually* include a numeric token, however if the stream
has multiple rows to replicate per token the server will send multiple
``RDATA`` commands, with all but the last having a token of ``batch``. See
the documentation on ``commands.RdataCommand`` for further details.
Architecture
------------
The basic structure of the protocol is line based, where the initial word of
each line specifies the command. The rest of the line is parsed based on the
command. For example, the `RDATA` command is defined as::
RDATA <stream_name> <token> <row_json>
(Note that `<row_json>` may contain spaces, but cannot contain newlines.)
Blank lines are ignored.
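Because each line is just a command word followed by command-specific data, a
reader reduces to a dispatch on the first word. A rough sketch of such a
parser (not the actual synapse implementation)::

    import json

    def parse_line(line):
        # Split a replication line into (command, parsed arguments).
        line = line.strip()
        if not line:
            return None  # blank lines are ignored
        cmd, _, rest = line.partition(" ")
        if cmd == "RDATA":
            stream_name, token, row_json = rest.split(" ", 2)
            return cmd, (stream_name, token, json.loads(row_json))
        return cmd, (rest,)

    print(parse_line('RDATA events 54 ["$foo1:bar.com", null]'))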
Keep alives
~~~~~~~~~~~
Both sides are expected to send at least one command every 5s or so, and
should send a ``PING`` command if necessary. If either side does not receive a
command within e.g. 15s then the connection should be closed.
Because the server may be connected to manually using e.g. netcat, the timeouts
aren't enabled until an initial ``PING`` command is seen. Both the client and
server implementations below send a ``PING`` command immediately on connection to
ensure the timeouts are enabled.
This ensures that both sides can quickly realize if the TCP connection has gone away
and handle the situation appropriately.
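A minimal sketch of the bookkeeping each side needs for this (intervals as
described above)::

    import time

    SEND_EVERY = 5   # send something at least this often (seconds)
    DEAD_AFTER = 15  # close if nothing has been received for this long

    class KeepAlive(object):
        def __init__(self):
            self.last_sent = self.last_received = time.time()

        def on_command_sent(self):
            self.last_sent = time.time()

        def on_command_received(self):
            self.last_received = time.time()

        def tick(self, send_ping, close):
            now = time.time()
            if now - self.last_received > DEAD_AFTER:
                close()      # the peer has gone away
            elif now - self.last_sent > SEND_EVERY:
                send_ping()  # keep the connection warm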
Start up
~~~~~~~~
When a new connection is made, the server:
* Sends a ``SERVER`` command, which includes the identity of the server, allowing
the client to detect if it's connected to the expected server
* Sends a ``PING`` command as above, to enable the client to time out connections
promptly.
The client:
* Sends a ``NAME`` command, allowing the server to associate a human friendly
name with the connection. This is optional.
* Sends a ``PING`` as above
* For each stream the client wishes to subscribe to, it sends a ``REPLICATE``
with the stream_name and token it wants to subscribe from.
* On receipt of a ``SERVER`` command, checks that the server name matches the
expected server name.
Error handling
~~~~~~~~~~~~~~
If either side detects an error it can send an ``ERROR`` command and close the
connection.
If the client side loses the connection to the server it should reconnect,
following the steps above.
Congestion
~~~~~~~~~~
If the server sends messages faster than the client can consume them the server
will first buffer a (fairly large) number of commands and then disconnect the
client. This ensures that we don't queue up an unbounded number of commands in
memory and gives us a potential opportunity to squawk loudly. When/if the client
recovers it can reconnect to the server and ask for missed messages.
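A sketch of that policy; the buffer limit here is made up::

    MAX_PENDING = 50000  # "fairly large" is unspecified; this is a guess

    class SendQueue(object):
        def __init__(self, close_connection):
            self.close_connection = close_connection
            self.pending = []

        def send_command(self, cmd):
            if len(self.pending) >= MAX_PENDING:
                # Rather than queue unbounded commands, drop the client;
                # it can reconnect and catch up from its last token.
                self.close_connection()
            else:
                self.pending.append(cmd)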
Reliability
~~~~~~~~~~~
In general the replication stream should be considered an unreliable transport
since e.g. commands are not resent if the connection disappears.
The exceptions to that are the replication streams, i.e. RDATA commands, since
these include tokens which can be used to restart the stream on connection
errors.
The client should keep track of the token in the last RDATA command received
for each stream so that on reconnection it can start streaming from the correct
place. Note: not all RDATA have valid tokens due to batching. See
``RdataCommand`` for more details.
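So the client-side state needed for recovery is just a map from stream name
to the last real token seen, skipping the ``batch`` placeholder. A sketch::

    class StreamPositions(object):
        # Track the last real token seen on each stream.

        def __init__(self):
            self.positions = {}  # stream_name -> last token

        def on_rdata(self, stream_name, token):
            if token == "batch":
                return  # part of a multi-row update; the token comes later
            self.positions[stream_name] = int(token)

        def tokens_for_reconnect(self):
            # Tokens to resubscribe from after a reconnection.
            return dict(self.positions)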
Example
~~~~~~~
An example interaction is shown below. Each line is prefixed with '>' or '<'
to indicate which side is sending; these prefixes are *not* included on the
wire::
* connection established *
> SERVER localhost:8823
> PING 1490197665618
< NAME synapse.app.appservice
< PING 1490197665618
< REPLICATE events 1
< REPLICATE backfill 1
< REPLICATE caches 1
> POSITION events 1
> POSITION backfill 1
> POSITION caches 1
> RDATA caches 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
> RDATA events 14 ["$149019767112vOHxz:localhost:8823",
"!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null]
< PING 1490197675618
> ERROR server stopping
* connection closed by server *
The ``POSITION`` command sent by the server is used to set the client's position
without needing to send data with the ``RDATA`` command.
An example of a batched set of ``RDATA`` is::
> RDATA caches batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
> RDATA caches batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
> RDATA caches batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
> RDATA caches 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
In this case the client shouldn't advance its ``caches`` token until it sees
the last ``RDATA``.
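Feeding those four lines through the sketch in the Reliability section shows
the intended behaviour: the ``caches`` position only advances when the final
tokened ``RDATA`` arrives::

    pos = StreamPositions()
    for token in ["batch", "batch", "batch", "54"]:
        pos.on_rdata("caches", token)
    print(pos.tokens_for_reconnect())  # {'caches': 54}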
List of commands
~~~~~~~~~~~~~~~~
The list of valid commands, and which side can send them: server (S) or client (C):
SERVER (S)
Sent at the start to identify which server the client is talking to
RDATA (S)
A single update in a stream
POSITION (S)
The position of the stream has been updated
ERROR (S, C)
There was an error
PING (S, C)
Sent periodically to ensure the connection is still alive
NAME (C)
Sent at the start by client to inform the server who they are
REPLICATE (C)
Asks the server to replicate a given stream
USER_SYNC (C)
A user has started or stopped syncing
FEDERATION_ACK (C)
Acknowledge receipt of some federation data
REMOVE_PUSHER (C)
Inform the server a pusher should be removed
INVALIDATE_CACHE (C)
Inform the server a cache should be invalidated
SYNC (S, C)
Used exclusively in tests
See ``synapse/replication/tcp/commands.py`` for a detailed description and the
format of each command.


@@ -50,14 +50,37 @@ You may be able to setup coturn via your package manager, or set it up manually
pwgen -s 64 1
5. Ensure youe firewall allows traffic into the TURN server on
the ports you've configured it to listen on (remember to allow
both TCP and UDP if you've enabled both).
5. Consider your security settings. TURN lets users request a relay
which will connect to arbitrary IP addresses and ports. At the least
we recommend:
6. If you've configured coturn to support TLS/DTLS, generate or
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
no-tcp-relay
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
# special case the turn server itself so that client->TURN->TURN->client flows work
allowed-peer-ip=10.0.0.1
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
total-quota=1200
Ideally coturn should refuse to relay traffic which isn't SRTP;
see https://github.com/matrix-org/synapse/issues/2009
6. Ensure your firewall allows traffic into the TURN server on
the ports you've configured it to listen on (remember to allow
both TCP and UDP TURN traffic)
7. If you've configured coturn to support TLS/DTLS, generate or
import your private key and certificate.
7. Start the turn server::
8. Start the turn server::
bin/turnserver -o
@@ -83,12 +106,19 @@ Your home server configuration file needs the following extra keys:
to refresh credentials. The TURN REST API specification recommends
one day (86400000).
4. "turn_allow_guests": Whether to allow guest users to use the TURN
server. This is enabled by default, as otherwise VoIP will not
work reliably for guests. However, it does introduce a security risk
as it lets guests connect to arbitrary endpoints without having gone
through a CAPTCHA or similar to register a real account.
As an example, here is the relevant section of the config file for
matrix.org::
turn_uris: [ "turn:turn.matrix.org:3478?transport=udp", "turn:turn.matrix.org:3478?transport=tcp" ]
turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons
turn_user_lifetime: 86400000
turn_allow_guests: True
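For reference, the shared secret is used to mint short-lived TURN credentials
along the lines of the TURN REST API draft: the username encodes an expiry
time and the password is an HMAC of the username. A sketch of that general
scheme (the exact username format synapse hands to clients may differ)::

    import base64
    import hashlib
    import hmac
    import time

    def turn_credentials(shared_secret, user_id, lifetime_ms=86400000):
        # The username encodes when the credential stops working.
        expiry = int(time.time()) + lifetime_ms // 1000
        username = "%d:%s" % (expiry, user_id)
        mac = hmac.new(
            shared_secret.encode("utf8"), username.encode("utf8"), hashlib.sha1
        )
        password = base64.b64encode(mac.digest()).decode("ascii")
        return username, password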
Now, restart synapse::


@@ -12,7 +12,7 @@ across multiple processes is a recipe for disaster, plus you should be using
postgres anyway if you care about scalability).
The workers communicate with the master synapse process via a synapse-specific
HTTP protocol called 'replication' - analogous to MySQL or Postgres style
TCP protocol called 'replication' - analogous to MySQL or Postgres style
database replication; feeding a stream of relevant data to the workers so they
can be kept in sync with the main synapse process and database state.
@@ -21,16 +21,11 @@ To enable workers, you need to add a replication listener to the master synapse,
listeners:
- port: 9092
bind_address: '127.0.0.1'
type: http
tls: false
x_forwarded: false
resources:
- names: [replication]
compress: false
type: replication
Under **no circumstances** should this replication API listener be exposed to the
public internet; it currently implements no authentication whatsoever and is
unencrypted HTTP.
unencrypted.
You then create a set of configs for the various worker processes. These should be
worker configuration files should be stored in a dedicated subdirectory, to allow
@@ -50,14 +45,16 @@ e.g. the HTTP listener that it provides (if any); logging configuration; etc.
You should minimise the number of overrides though to maintain a usable config.
You must specify the type of worker application (worker_app) and the replication
endpoint that it's talking to on the main synapse process (worker_replication_url).
endpoint that it's talking to on the main synapse process (worker_replication_host
and worker_replication_port).
For instance::
worker_app: synapse.app.synchrotron
# The replication listener on the synapse to talk to.
worker_replication_url: http://127.0.0.1:9092/_synapse/replication
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_listeners:
- type: http
@@ -95,4 +92,3 @@ To manipulate a specific worker, you pass the -w option to synctl::
All of the above is highly experimental and subject to change as Synapse evolves,
but documenting it here to help folks needing highly scalable Synapses similar
to the one running matrix.org!


@@ -17,6 +17,7 @@ export HAPROXY_BIN=/home/haproxy/haproxy-1.6.11/haproxy
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--haproxy \


@@ -15,5 +15,6 @@ export SYNAPSE_CACHE_FACTOR=1
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \


@@ -14,4 +14,5 @@ export SYNAPSE_CACHE_FACTOR=1
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \


@@ -12,4 +12,5 @@ export SYNAPSE_CACHE_FACTOR=1
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

scripts-dev/federation_client.py Normal file → Executable file

@@ -1,10 +1,30 @@
#!/usr/bin/env python
#
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import argparse
import nacl.signing
import json
import base64
import requests
import sys
import srvlookup
import yaml
def encode_base64(input_bytes):
"""Encode bytes as a base64 string without any padding."""
@@ -120,11 +140,13 @@ def get_json(origin_name, origin_key, destination, path):
origin_name, key, sig,
)
authorization_headers.append(bytes(header))
sys.stderr.write(header)
sys.stderr.write("\n")
print ("Authorization: %s" % header, file=sys.stderr)
dest = lookup(destination, path)
print ("Requesting %s" % dest, file=sys.stderr)
result = requests.get(
lookup(destination, path),
dest,
headers={"Authorization": authorization_headers[0]},
verify=False,
)
@@ -133,17 +155,66 @@ def get_json(origin_name, origin_key, destination, path):
def main():
origin_name, keyfile, destination, path = sys.argv[1:]
parser = argparse.ArgumentParser(
description=
"Signs and sends a federation request to a matrix homeserver",
)
with open(keyfile) as f:
parser.add_argument(
"-N", "--server-name",
help="Name to give as the local homeserver. If unspecified, will be "
"read from the config file.",
)
parser.add_argument(
"-k", "--signing-key-path",
help="Path to the file containing the private ed25519 key to sign the "
"request with.",
)
parser.add_argument(
"-c", "--config",
default="homeserver.yaml",
help="Path to server config file. Ignored if --server-name and "
"--signing-key-path are both given.",
)
parser.add_argument(
"-d", "--destination",
default="matrix.org",
help="name of the remote homeserver. We will do SRV lookups and "
"connect appropriately.",
)
parser.add_argument(
"path",
help="request path. We will add '/_matrix/federation/v1/' to this."
)
args = parser.parse_args()
if not args.server_name or not args.signing_key_path:
read_args_from_config(args)
with open(args.signing_key_path) as f:
key = read_signing_keys(f)[0]
result = get_json(
origin_name, key, destination, "/_matrix/federation/v1/" + path
args.server_name, key, args.destination, "/_matrix/federation/v1/" + args.path
)
json.dump(result, sys.stdout)
print ""
print ("")
def read_args_from_config(args):
with open(args.config, 'r') as fh:
config = yaml.safe_load(fh)
if not args.server_name:
args.server_name = config['server_name']
if not args.signing_key_path:
args.signing_key_path = config['signing_key_path']
if __name__ == "__main__":
main()


@@ -9,16 +9,39 @@
ROOMID="$1"
sqlite3 homeserver.db <<EOF
DELETE FROM context_depth WHERE context = '$ROOMID';
DELETE FROM current_state WHERE context = '$ROOMID';
DELETE FROM feedback WHERE room_id = '$ROOMID';
DELETE FROM messages WHERE room_id = '$ROOMID';
DELETE FROM pdu_backward_extremities WHERE context = '$ROOMID';
DELETE FROM pdu_edges WHERE context = '$ROOMID';
DELETE FROM pdu_forward_extremities WHERE context = '$ROOMID';
DELETE FROM pdus WHERE context = '$ROOMID';
DELETE FROM room_data WHERE room_id = '$ROOMID';
DELETE FROM event_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_backward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_edges WHERE room_id = '$ROOMID';
DELETE FROM room_depth WHERE room_id = '$ROOMID';
DELETE FROM state_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM events WHERE room_id = '$ROOMID';
DELETE FROM event_json WHERE room_id = '$ROOMID';
DELETE FROM state_events WHERE room_id = '$ROOMID';
DELETE FROM current_state_events WHERE room_id = '$ROOMID';
DELETE FROM room_memberships WHERE room_id = '$ROOMID';
DELETE FROM feedback WHERE room_id = '$ROOMID';
DELETE FROM topics WHERE room_id = '$ROOMID';
DELETE FROM room_names WHERE room_id = '$ROOMID';
DELETE FROM rooms WHERE room_id = '$ROOMID';
DELETE FROM state_pdus WHERE context = '$ROOMID';
DELETE FROM room_hosts WHERE room_id = '$ROOMID';
DELETE FROM room_aliases WHERE room_id = '$ROOMID';
DELETE FROM state_groups WHERE room_id = '$ROOMID';
DELETE FROM state_groups_state WHERE room_id = '$ROOMID';
DELETE FROM receipts_graph WHERE room_id = '$ROOMID';
DELETE FROM receipts_linearized WHERE room_id = '$ROOMID';
DELETE FROM event_search_content WHERE c1room_id = '$ROOMID';
DELETE FROM guest_access WHERE room_id = '$ROOMID';
DELETE FROM history_visibility WHERE room_id = '$ROOMID';
DELETE FROM room_tags WHERE room_id = '$ROOMID';
DELETE FROM room_tags_revisions WHERE room_id = '$ROOMID';
DELETE FROM room_account_data WHERE room_id = '$ROOMID';
DELETE FROM event_push_actions WHERE room_id = '$ROOMID';
DELETE FROM local_invites WHERE room_id = '$ROOMID';
DELETE FROM pusher_throttle WHERE room_id = '$ROOMID';
DELETE FROM event_reports WHERE room_id = '$ROOMID';
DELETE FROM public_room_list_stream WHERE room_id = '$ROOMID';
DELETE FROM stream_ordering_to_exterm WHERE room_id = '$ROOMID';
DELETE FROM event_auth WHERE room_id = '$ROOMID';
DELETE FROM appservice_room_list WHERE room_id = '$ROOMID';
VACUUM;
EOF


@@ -41,6 +41,7 @@ BOOLEAN_COLUMNS = {
"presence_stream": ["currently_active"],
"public_room_list_stream": ["visibility"],
"device_lists_outbound_pokes": ["sent"],
"users_who_share_rooms": ["share_private"],
}
@@ -121,7 +122,7 @@ class Store(object):
try:
txn = conn.cursor()
return func(
LoggingTransaction(txn, desc, self.database_engine, []),
LoggingTransaction(txn, desc, self.database_engine, [], []),
*args, **kwargs
)
except self.database_engine.module.DatabaseError as e:
@@ -251,6 +252,25 @@ class Porter(object):
)
return
if table in (
"user_directory", "user_directory_search", "users_who_share_rooms",
"users_in_pubic_room",
):
# We don't port these tables, as they're a faff and we can regenerate
# them anyway.
self.progress.update(table, table_size) # Mark table as done
return
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null)`, as that is
# what synapse expects to be there.
yield self.postgres_store._simple_insert(
table=table,
values={"stream_id": None},
)
self.progress.update(table, table_size) # Mark table as done
return
forward_select = (
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?"
% (table,)
@@ -447,9 +467,7 @@ class Porter(object):
postgres_tables = yield self.postgres_store._simple_select_onecol(
table="information_schema.tables",
keyvalues={
"table_schema": "public",
},
keyvalues={},
retcol="distinct table_name",
)


@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.19.3"
__version__ = "0.23.0"


@@ -23,7 +23,8 @@ from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, Codes
from synapse.types import UserID
from synapse.util import logcontext
from synapse.util.caches import register_cache, CACHE_SIZE_FACTOR
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
@@ -39,6 +40,10 @@ AuthEventTypes = (
GUEST_DEVICE_ID = "guest_device"
class _InvalidMacaroonException(Exception):
pass
class Auth(object):
"""
FIXME: This class contains a mix of functions for authenticating users
@@ -51,6 +56,9 @@ class Auth(object):
self.state = hs.get_state_handler()
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("token_cache", self.token_cache)
@defer.inlineCallbacks
def check_from_context(self, event, context, do_sig_check=True):
auth_events_ids = yield self.compute_auth_events(
@@ -144,17 +152,8 @@ class Auth(object):
@defer.inlineCallbacks
def check_host_in_room(self, room_id, host):
with Measure(self.clock, "check_host_in_room"):
latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
logger.debug("calling resolve_state_groups from check_host_in_room")
entry = yield self.state.resolve_state_groups(
room_id, latest_event_ids
)
ret = yield self.store.is_host_joined(
room_id, host, entry.state_group, entry.state
)
defer.returnValue(ret)
latest_event_ids = yield self.store.is_host_joined(room_id, host)
defer.returnValue(latest_event_ids)
def _check_joined_room(self, member, user_id, room_id):
if not member or member.membership != Membership.JOIN:
@@ -209,8 +208,8 @@ class Auth(object):
default=[""]
)[0]
if user and access_token and ip_addr:
logcontext.preserve_fn(self.store.insert_client_ip)(
user=user,
self.store.insert_client_ip(
user_id=user.to_string(),
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
@@ -276,8 +275,8 @@ class Auth(object):
AuthError if no user by that token exists or the token is invalid.
"""
try:
macaroon = pymacaroons.Macaroon.deserialize(token)
except Exception: # deserialize can throw more-or-less anything
user_id, guest = self._parse_and_validate_macaroon(token, rights)
except _InvalidMacaroonException:
# doesn't look like a macaroon: treat it as an opaque token which
# must be in the database.
# TODO: it would be nice to get rid of this, but apparently some
@@ -286,19 +285,8 @@ class Auth(object):
defer.returnValue(r)
try:
user_id = self.get_user_id_from_macaroon(macaroon)
user = UserID.from_string(user_id)
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
)
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id == "guest = true":
guest = True
if guest:
# Guest access tokens are not stored in the database (there can
# only be one access token per guest, anyway).
@@ -370,6 +358,55 @@ class Auth(object):
errcode=Codes.UNKNOWN_TOKEN
)
def _parse_and_validate_macaroon(self, token, rights="access"):
"""Takes a macaroon and tries to parse and validate it. This is cached
if and only if rights == access and there isn't an expiry.
On invalid macaroon raises _InvalidMacaroonException
Returns:
(user_id, is_guest)
"""
if rights == "access":
cached = self.token_cache.get(token, None)
if cached:
return cached
try:
macaroon = pymacaroons.Macaroon.deserialize(token)
except Exception: # deserialize can throw more-or-less anything
# doesn't look like a macaroon: treat it as an opaque token which
# must be in the database.
# TODO: it would be nice to get rid of this, but apparently some
# people use access tokens which aren't macaroons
raise _InvalidMacaroonException()
try:
user_id = self.get_user_id_from_macaroon(macaroon)
has_expiry = False
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith("time "):
has_expiry = True
elif caveat.caveat_id == "guest = true":
guest = True
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN
)
if not has_expiry and rights == "access":
self.token_cache[token] = (user_id, guest)
return user_id, guest
def get_user_id_from_macaroon(self, macaroon):
"""Retrieve the user_id given by the caveats on the macaroon.
@@ -482,6 +519,14 @@ class Auth(object):
)
def is_server_admin(self, user):
""" Check if the given user is a local server admin.
Args:
user (str): mxid of user to check
Returns:
bool: True if the user is an admin
"""
return self.store.is_server_admin(user)
@defer.inlineCallbacks


@@ -66,6 +66,17 @@ class CodeMessageException(RuntimeError):
return cs_error(self.msg)
class MatrixCodeMessageException(CodeMessageException):
"""An error from a general matrix endpoint, eg. from a proxied Matrix API call.
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN):
super(MatrixCodeMessageException, self).__init__(code, msg)
self.errcode = errcode
class SynapseError(CodeMessageException):
"""A base exception type for matrix errors which have an errcode and error
message (as well as an HTTP status code).

synapse/app/_base.py Normal file

@@ -0,0 +1,99 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import logging
import affinity
from daemonize import Daemonize
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
from twisted.internet import reactor
def start_worker_reactor(appname, config):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor. Pulls configuration from the 'worker' settings in 'config'.
Args:
appname (str): application name which will be sent to syslog
config (synapse.config.Config): config object
"""
logger = logging.getLogger(config.worker_app)
start_reactor(
appname,
config.soft_file_limit,
config.gc_thresholds,
config.worker_pid_file,
config.worker_daemonize,
config.worker_cpu_affinity,
logger,
)
def start_reactor(
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
cpu_affinity,
logger,
):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor
Args:
appname (str): application name which will be sent to syslog
soft_file_limit (int):
gc_thresholds:
pid_file (str): name of pid file to write to if daemonize is True
daemonize (bool): true to run the reactor in a background process
cpu_affinity (int|None): cpu affinity mask
logger (logging.Logger): logger instance to pass to Daemonize
"""
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
if cpu_affinity is not None:
logger.info("Setting CPU affinity to %s" % cpu_affinity)
affinity.set_process_affinity_mask(0, cpu_affinity)
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
reactor.run()
if daemonize:
daemon = Daemonize(
app=appname,
pid=pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()


@@ -13,38 +13,31 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor, defer
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.appservice")
@@ -120,30 +113,25 @@ class AppserviceServer(HomeServer):
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
appservice_handler = self.get_application_service_handler()
self.get_tcp_replication().start_replication(self)
@defer.inlineCallbacks
def replicate(results):
stream = results.get("events")
if stream:
max_stream_id = stream["position"]
yield appservice_handler.notify_interested_services(max_stream_id)
def build_tcp_replication(self):
return ASReplicationHandler(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
replicate(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
class ASReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(ASReplicationHandler, self).__init__(hs.get_datastore())
self.appservice_handler = hs.get_application_service_handler()
def on_rdata(self, stream_name, token, rows):
super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
if stream_name == "events":
max_stream_id = self.store.get_room_max_stream_ordering()
preserve_fn(
self.appservice_handler.notify_interested_services
)(max_stream_id)
def start(config_options):
@@ -186,37 +174,13 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-appservice",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-appservice", config)
if __name__ == '__main__':


@@ -13,47 +13,39 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import PublicRoomListRestServlet
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor, defer
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.client_reader")
@@ -65,8 +57,8 @@ class ClientReaderSlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
TransactionStore,
SlavedClientIpStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
pass
@@ -145,21 +137,10 @@ class ClientReaderServer(HomeServer):
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
self.get_tcp_replication().start_replication(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
@@ -194,37 +175,13 @@ def start(config_options):
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-client-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-client-reader", config)
if __name__ == '__main__':


@@ -13,44 +13,36 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.api.urls import FEDERATION_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.api.urls import FEDERATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor, defer
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.federation_reader")
@@ -134,21 +126,10 @@ class FederationReaderServer(HomeServer):
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
self.get_tcp_replication().start_replication(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
@@ -183,37 +164,13 @@ def start(config_options):
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-federation-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-federation-reader", config)
if __name__ == '__main__':


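Each worker now builds its replication client via `build_tcp_replication()` instead of running the old HTTP long-poll `replicate()` loop. The subclassing contract, as the handlers later in this diff use it, boils down to two hooks; the following hypothetical minimal handler (the class itself is invented for illustration, the hook names come from the diff) shows the shape:

    import logging

    from synapse.replication.tcp.client import ReplicationClientHandler

    logger = logging.getLogger(__name__)


    class LoggingReplicationHandler(ReplicationClientHandler):
        """Hypothetical handler showing the subclass hooks used in this diff."""

        def __init__(self, hs):
            super(LoggingReplicationHandler, self).__init__(hs.get_datastore())

        def on_rdata(self, stream_name, token, rows):
            # Let the base class feed the rows to the slaved datastore first,
            # then react to them (here we only log).
            super(LoggingReplicationHandler, self).on_rdata(stream_name, token, rows)
            logger.info("Got %d rows on %r up to token %r", len(rows), stream_name, token)

        def get_streams_to_replicate(self):
            # Report which streams this worker follows and from what position.
            return super(LoggingReplicationHandler, self).get_streams_to_replicate()

The server then wires the handler in with `self.get_tcp_replication().start_replication(self)`, as the hunks above show.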
@@ -13,53 +13,66 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.site import SynapseSite
from synapse.federation import send_queue
from synapse.federation.units import Edu
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.presence import UserPresenceState
from synapse.util.async import sleep
from synapse.util.async import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor, defer
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
import ujson as json
logger = logging.getLogger("synapse.app.appservice")
logger = logging.getLogger("synapse.app.federation_sender")
class FederationSenderSlaveStore(
SlavedDeviceInboxStore, TransactionStore, SlavedReceiptsStore, SlavedEventStore,
SlavedRegistrationStore, SlavedDeviceStore,
SlavedRegistrationStore, SlavedDeviceStore, SlavedPresenceStore,
):
pass
def __init__(self, db_conn, hs):
super(FederationSenderSlaveStore, self).__init__(db_conn, hs)
# We pull out the current federation stream position now so that we
# always have a known value for the federation position in memory so
# that we don't have to bounce via a deferred once when we start the
# replication streams.
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = (
"SELECT stream_id FROM federation_stream_position"
" WHERE type = ?"
)
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()
txn.execute(sql, ("federation",))
rows = txn.fetchall()
txn.close()
return rows[0][0] if rows else -1
class FederationSenderServer(HomeServer):
@@ -127,26 +140,27 @@ class FederationSenderServer(HomeServer):
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
send_handler = FederationSenderHandler(self)
self.get_tcp_replication().start_replication(self)
send_handler.on_start()
def build_tcp_replication(self):
return FederationSenderReplicationHandler(self)
while True:
try:
args = store.stream_positions()
args.update((yield send_handler.stream_positions()))
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
yield send_handler.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
class FederationSenderReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(FederationSenderReplicationHandler, self).__init__(hs.get_datastore())
self.send_handler = FederationSenderHandler(hs, self)
def on_rdata(self, stream_name, token, rows):
super(FederationSenderReplicationHandler, self).on_rdata(
stream_name, token, rows
)
self.send_handler.process_replication_rows(stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(FederationSenderReplicationHandler, self).get_streams_to_replicate()
args.update(self.send_handler.stream_positions())
return args
def start(config_options):
@@ -192,46 +206,27 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-federation-sender",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-federation-sender", config)
class FederationSenderHandler(object):
"""Processes the replication stream and forwards the appropriate entries
to the federation sender.
"""
def __init__(self, hs):
def __init__(self, hs, replication_client):
self.store = hs.get_datastore()
self.federation_sender = hs.get_federation_sender()
self.replication_client = replication_client
self.federation_position = self.store.federation_out_pos_startup
self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
self._last_ack = self.federation_position
self._room_serials = {}
self._room_typing = {}
@@ -243,98 +238,35 @@ class FederationSenderHandler(object):
self.store.get_room_max_stream_ordering()
)
@defer.inlineCallbacks
def stream_positions(self):
stream_id = yield self.store.get_federation_out_pos("federation")
defer.returnValue({
"federation": stream_id,
# Ack stuff we've "processed", this should only be called from
# one process.
"federation_ack": stream_id,
})
return {"federation": self.federation_position}
@defer.inlineCallbacks
def process_replication(self, result):
def process_replication_rows(self, stream_name, token, rows):
# The federation stream contains things that we want to send out, e.g.
# presence, typing, etc.
fed_stream = result.get("federation")
if fed_stream:
latest_id = int(fed_stream["position"])
# The federation stream contains a bunch of different types of
# rows that need to be handled differently. We parse the rows, put
# them into the appropriate collection and then send them off.
presence_to_send = {}
keyed_edus = {}
edus = {}
failures = {}
device_destinations = set()
# Parse the rows in the stream
for row in fed_stream["rows"]:
position, typ, content_js = row
content = json.loads(content_js)
if typ == send_queue.PRESENCE_TYPE:
destination = content["destination"]
state = UserPresenceState.from_dict(content["state"])
presence_to_send.setdefault(destination, []).append(state)
elif typ == send_queue.KEYED_EDU_TYPE:
key = content["key"]
edu = Edu(**content["edu"])
keyed_edus.setdefault(
edu.destination, {}
)[(edu.destination, tuple(key))] = edu
elif typ == send_queue.EDU_TYPE:
edu = Edu(**content)
edus.setdefault(edu.destination, []).append(edu)
elif typ == send_queue.FAILURE_TYPE:
destination = content["destination"]
failure = content["failure"]
failures.setdefault(destination, []).append(failure)
elif typ == send_queue.DEVICE_MESSAGE_TYPE:
device_destinations.add(content["destination"])
else:
raise Exception("Unrecognised federation type: %r", typ)
# We've finished collecting, send everything off
for destination, states in presence_to_send.items():
self.federation_sender.send_presence(destination, states)
for destination, edu_map in keyed_edus.items():
for key, edu in edu_map.items():
self.federation_sender.send_edu(
edu.destination, edu.edu_type, edu.content, key=key,
)
for destination, edu_list in edus.items():
for edu in edu_list:
self.federation_sender.send_edu(
edu.destination, edu.edu_type, edu.content, key=None,
)
for destination, failure_list in failures.items():
for failure in failure_list:
self.federation_sender.send_failure(destination, failure)
for destination in device_destinations:
self.federation_sender.send_device_messages(destination)
# Record where we are in the stream.
yield self.store.update_federation_out_pos(
"federation", latest_id
)
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
preserve_fn(self.update_token)(token)
# We also need to poke the federation sender when new events happen
event_stream = result.get("events")
if event_stream:
latest_pos = event_stream["position"]
self.federation_sender.notify_new_events(latest_pos)
elif stream_name == "events":
self.federation_sender.notify_new_events(token)
@defer.inlineCallbacks
def update_token(self, token):
self.federation_position = token
# We linearize here to ensure we don't have races updating the token
with (yield self._fed_position_linearizer.queue(None)):
if self._last_ack < self.federation_position:
yield self.store.update_federation_out_pos(
"federation", self.federation_position
)
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(self.federation_position)
self._last_ack = self.federation_position
if __name__ == '__main__':


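The `update_token` method above guards its read-modify-write of the federation position with a Linearizer. A small sketch of that pattern in isolation; the `persist_position` function is illustrative, while the `queue()`/context-manager usage mirrors the diff:

    import logging

    from twisted.internet import defer

    from synapse.util.async import Linearizer

    logger = logging.getLogger(__name__)

    _position_linearizer = Linearizer(name="example_fed_position")


    @defer.inlineCallbacks
    def persist_position(store, token):
        # queue() yields a context manager; concurrent callers enter the
        # guarded block strictly one at a time, so the compare-and-persist
        # cannot interleave with another caller's.
        with (yield _position_linearizer.queue(None)):
            yield store.update_federation_out_pos("federation", token)
            logger.debug("persisted federation position %r", token)

This is why `_last_ack` can be compared and updated without further locking: only one caller is ever inside the linearized block.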
@@ -0,0 +1,239 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.api.errors import SynapseError
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.servlet import (
RestServlet, parse_json_object_from_request,
)
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha._base import client_v2_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
logger = logging.getLogger("synapse.app.frontend_proxy")
class KeyUploadServlet(RestServlet):
PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$",
releases=())
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): server
"""
super(KeyUploadServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker_main_http_uri
@defer.inlineCallbacks
def on_POST(self, request, device_id):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
if device_id is not None:
# passing the device_id here is deprecated; however, we allow it
# for now for compatibility with older clients.
if (requester.device_id is not None and
device_id != requester.device_id):
logger.warning("Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id, device_id)
else:
device_id = requester.device_id
if device_id is None:
raise SynapseError(
400,
"To upload keys, you must pass device_id when authenticating"
)
if body:
# They're actually trying to upload something, proxy to main synapse.
result = yield self.http_client.post_json_get_json(
self.main_uri + request.uri,
body,
)
defer.returnValue((200, result))
else:
# Just interested in counts.
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
defer.returnValue((200, {"one_time_key_counts": result}))
class FrontendProxySlavedStore(
SlavedDeviceStore,
SlavedClientIpStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
BaseSlavedStore,
):
pass
class FrontendProxyServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
for address in bind_addresses:
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
logger.info("Synapse client reader now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse frontend proxy", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.frontend_proxy"
assert config.worker_main_http_uri is not None
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
ss = FrontendProxyServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-frontend-proxy", config)
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])


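The new frontend proxy worker serves a single decision: count-only key-upload requests are answered from the worker's replicated datastore, while real uploads are forwarded to the main process at `worker_main_http_uri`. A condensed restatement of that decision, stripped of auth and error handling (names are taken from the servlet above; this is a sketch, not a drop-in replacement):

    from twisted.internet import defer


    @defer.inlineCallbacks
    def handle_key_upload(http_client, store, main_uri, request, user_id, device_id, body):
        if body:
            # A real upload: forward it verbatim to the main synapse process.
            result = yield http_client.post_json_get_json(main_uri + request.uri, body)
            defer.returnValue((200, result))
        else:
            # An empty body only asks for key counts, which the worker can
            # answer from its replicated datastore without touching the master.
            result = yield store.count_e2e_one_time_keys(user_id, device_id)
            defer.returnValue((200, {"one_time_key_counts": result}))

Since `/keys/upload` is polled frequently by clients just to read counts, keeping the read path off the master is the point of this worker.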
@@ -13,61 +13,48 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
import gc
import logging
import os
import sys
import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \
LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \
STATIC_PREFIX, WEB_CLIENT_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.python_dependencies import (
check_requirements, DEPENDENCY_LINKS
)
from synapse.rest import ClientRestResource
from synapse.storage.engines import create_engine, IncorrectDatabaseSetup
from synapse.storage import are_all_users_on_domain
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.server import HomeServer
from twisted.internet import reactor, task, defer
from twisted.application import service
from twisted.web.resource import Resource, EncodingResourceWrapper
from twisted.web.static import File
from twisted.web.server import GzipEncoderFactory
from synapse.http.server import RootRedirect
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.api.urls import (
FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX,
SERVER_KEY_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, STATIC_PREFIX,
SERVER_KEY_V2_PREFIX,
)
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.metrics import register_memory_metrics, get_metrics_for
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.resource import ReplicationResource, REPLICATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import register_memory_metrics
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
check_requirements
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage import are_all_users_on_domain
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.http.site import SynapseSite
from synapse import events
from daemonize import Daemonize
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, Resource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
logger = logging.getLogger("synapse.app.homeserver")
@@ -92,7 +79,7 @@ def build_resource_for_web_client(hs):
"\n"
"You can also disable hosting of the webclient via the\n"
"configuration option `web_client`\n"
% {"dep": DEPENDENCY_LINKS["matrix-angular-sdk"]}
% {"dep": CONDITIONAL_REQUIREMENTS["web_client"].keys()[0]}
)
syweb_path = os.path.dirname(syweb.__file__)
webclient_path = os.path.join(syweb_path, "webclient")
@@ -166,9 +153,6 @@ class SynapseHomeServer(HomeServer):
if name == "metrics" and self.get_config().enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(self)
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationResource(self)
if WEB_CLIENT_PREFIX in resources:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
@@ -222,6 +206,16 @@ class SynapseHomeServer(HomeServer):
),
interface=address
)
elif listener["type"] == "replication":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
factory = ReplicationStreamProtocolFactory(self)
server_listener = reactor.listenTCP(
listener["port"], factory, interface=address
)
reactor.addSystemEventTrigger(
"before", "shutdown", server_listener.stopListening,
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -391,7 +385,8 @@ def run(hs):
ThreadPool._worker = profile(ThreadPool._worker)
reactor.run = profile(reactor.run)
start_time = hs.get_clock().time()
clock = hs.get_clock()
start_time = clock.time()
stats = {}
@@ -403,41 +398,23 @@ def run(hs):
if uptime < 0:
uptime = 0
# If the stats directory is empty then this is the first time we've
# reported stats.
first_time = not stats
stats["homeserver"] = hs.config.server_name
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
stats["total_users"] = yield hs.get_datastore().count_all_users()
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
daily_messages = yield hs.get_datastore().count_daily_messages()
if daily_messages is not None:
stats["daily_messages"] = daily_messages
else:
stats.pop("daily_messages", None)
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
if first_time:
# Add callbacks to report the synapse stats as metrics whenever
# prometheus requests them, typically every 30s.
# As some of the stats are expensive to calculate we only update
# them when synapse phones home to matrix.org every 24 hours.
metrics = get_metrics_for("synapse.usage")
metrics.add_callback("timestamp", lambda: stats["timestamp"])
metrics.add_callback("uptime_seconds", lambda: stats["uptime_seconds"])
metrics.add_callback("total_users", lambda: stats["total_users"])
metrics.add_callback("total_room_count", lambda: stats["total_room_count"])
metrics.add_callback(
"daily_active_users", lambda: stats["daily_active_users"]
)
metrics.add_callback(
"daily_messages", lambda: stats.get("daily_messages", 0)
)
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
logger.info("Reporting stats to matrix.org: %s" % (stats,))
try:
@@ -449,41 +426,25 @@ def run(hs):
logger.warn("Error reporting stats: %s", e)
if hs.config.report_stats:
phone_home_task = task.LoopingCall(phone_stats_home)
logger.info("Scheduling stats reporting for 24 hour intervals")
phone_home_task.start(60 * 60 * 24, now=False)
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)
def in_thread():
# Uncomment to enable tracing of log context changes.
# sys.settrace(logcontext_tracer)
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(5 * 60, phone_stats_home)
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
change_resource_limit(hs.config.soft_file_limit)
if hs.config.gc_thresholds:
gc.set_threshold(*hs.config.gc_thresholds)
reactor.run()
if hs.config.daemonize and hs.config.print_pidfile:
print (hs.config.pid_file)
if hs.config.daemonize:
if hs.config.print_pidfile:
print (hs.config.pid_file)
daemon = Daemonize(
app="synapse-homeserver",
pid=hs.config.pid_file,
action=lambda: in_thread(),
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
in_thread()
_base.start_reactor(
"synapse-homeserver",
hs.config.soft_file_limit,
hs.config.gc_thresholds,
hs.config.pid_file,
hs.config.daemonize,
hs.config.cpu_affinity,
logger,
)
def main():


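The stats-reporting hunk above swaps `task.LoopingCall` (interval in seconds) for the homeserver clock's helpers, and the mixed units are easy to misread: `Clock.looping_call` takes milliseconds while `call_later` takes seconds, hence the two different-looking constants. Restated as a sketch (assumes `hs` and `phone_stats_home` as defined in the hunk):

    clock = hs.get_clock()

    # Clock.looping_call takes its interval in *milliseconds*...
    clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)  # every 3 hours

    # ...while call_later takes a delay in *seconds*.
    clock.call_later(5 * 60, phone_stats_home)  # first report after 5 minutes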
@@ -13,57 +13,49 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from synapse.api.urls import (
CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
)
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor, defer
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.media_repository")
class MediaRepositorySlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
TransactionStore,
BaseSlavedStore,
MediaRepositoryStore,
ClientIpStore,
):
pass
@@ -142,21 +134,10 @@ class MediaRepositoryServer(HomeServer):
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
self.get_tcp_replication().start_replication(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
@@ -191,37 +172,13 @@ def start(config_options):
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-media-repository",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-media-repository", config)
if __name__ == '__main__':


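Several hunks in this changeset reorder the slaved-store mixin lists, and the removed comment "# After BaseSlavedStore because the constructor is different" hints at why order matters: Python's MRO decides whose `__init__` a combined store inherits. A toy illustration with stand-in classes (not Synapse's):

    class BaseSlavedStoreStandIn(object):
        def __init__(self, db_conn, hs):
            pass  # the signature the worker setup code calls with


    class DifferentConstructorStandIn(object):
        def __init__(self, hs):  # incompatible signature
            pass


    class CombinedStore(BaseSlavedStoreStandIn, DifferentConstructorStandIn):
        pass


    # MRO: CombinedStore -> BaseSlavedStoreStandIn -> DifferentConstructorStandIn,
    # so CombinedStore(db_conn, hs) dispatches to the first __init__ and the
    # incompatible one is never invoked directly.
    store = CombinedStore("db_conn", "hs")

Replacing the raw `ClientIpStore` with a `SlavedClientIpStore` wrapper, as these hunks do, removes the need for that ordering trick altogether.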
@@ -13,41 +13,33 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.storage.roommember import RoomMemberStore
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.storage.engines import create_engine
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.async import sleep
from synapse.storage.engines import create_engine
from synapse.storage.roommember import RoomMemberStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn, \
PreserveLoggingContext
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor, defer
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.pusher")
@@ -89,7 +81,6 @@ class PusherSlaveStore(
class PusherServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
@@ -109,16 +100,7 @@ class PusherServer(HomeServer):
logger.info("Finished setting up.")
def remove_pusher(self, app_id, push_key, user_id):
http_client = self.get_simple_http_client()
replication_url = self.config.worker_replication_url
url = replication_url + "/remove_pushers"
return http_client.post_json_get_json(url, {
"remove": [{
"app_id": app_id,
"push_key": push_key,
"user_id": user_id,
}]
})
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -166,73 +148,52 @@ class PusherServer(HomeServer):
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return PusherReplicationHandler(self)
class PusherReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(PusherReplicationHandler, self).__init__(hs.get_datastore())
self.pusher_pool = hs.get_pusherpool()
def on_rdata(self, stream_name, token, rows):
super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
preserve_fn(self.poke_pushers)(stream_name, token, rows)
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
pusher_pool = self.get_pusherpool()
def stop_pusher(user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
def start_pusher(user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return pusher_pool._refresh_pusher(app_id, pushkey, user_id)
@defer.inlineCallbacks
def poke_pushers(results):
pushers_rows = set(
map(tuple, results.get("pushers", {}).get("rows", []))
)
deleted_pushers_rows = set(
map(tuple, results.get("deleted_pushers", {}).get("rows", []))
)
for row in sorted(pushers_rows | deleted_pushers_rows):
if row in deleted_pushers_rows:
user_id, app_id, pushkey = row[1:4]
stop_pusher(user_id, app_id, pushkey)
elif row in pushers_rows:
user_id = row[1]
app_id = row[5]
pushkey = row[8]
yield start_pusher(user_id, app_id, pushkey)
stream = results.get("events")
if stream and stream["rows"]:
min_stream_id = stream["rows"][0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_notifications)(
min_stream_id, max_stream_id
)
stream = results.get("receipts")
if stream and stream["rows"]:
rows = stream["rows"]
affected_room_ids = set(row[1] for row in rows)
min_stream_id = rows[0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_receipts)(
min_stream_id, max_stream_id, affected_room_ids
)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
poke_pushers(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
@defer.inlineCallbacks
def poke_pushers(self, stream_name, token, rows):
if stream_name == "pushers":
for row in rows:
if row.deleted:
yield self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
yield self.pusher_pool.on_new_notifications(
token, token,
)
elif stream_name == "receipts":
yield self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
def stop_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = self.pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
def start_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return self.pusher_pool._refresh_pusher(app_id, pushkey, user_id)
def start(config_options):
@@ -275,38 +236,14 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_pusherpool().start()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-pusher",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-pusher", config)
if __name__ == '__main__':


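`PusherReplicationHandler.on_rdata` above kicks off `poke_pushers` through `preserve_fn` rather than yielding on it. Going by its use in this diff, `preserve_fn` wraps a deferred-returning function so it can be fired off from the current logcontext without the caller waiting on the result. A toy example of the pattern (both functions are invented for illustration):

    import logging

    from twisted.internet import defer

    from synapse.util.logcontext import preserve_fn

    logger = logging.getLogger(__name__)


    @defer.inlineCallbacks
    def poke_pushers_example(stream_name, token, rows):
        # Stand-in for the real asynchronous push processing.
        yield defer.succeed(None)
        logger.info("poked pushers for %r at %r", stream_name, token)


    def on_rdata_example(stream_name, token, rows):
        # Start the asynchronous work but don't wait for it, so replication
        # traffic is not blocked behind push processing.
        preserve_fn(poke_pushers_example)(stream_name, token, rows)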
@@ -13,58 +13,50 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import logging
import sys
import synapse
from synapse.api.constants import EventTypes, PresenceState
from synapse.api.constants import EventTypes
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.handlers.presence import PresenceHandler
from synapse.http.site import SynapseSite
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.rest.client.v2_alpha import sync
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.presence import PresenceStore, UserPresenceState
from synapse.storage.presence import UserPresenceState
from synapse.storage.roommember import RoomMemberStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn, \
PreserveLoggingContext
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import contextlib
import gc
import ujson as json
logger = logging.getLogger("synapse.app.synchrotron")
@@ -79,9 +71,9 @@ class SynchrotronSlavedStore(
SlavedPresenceStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
@@ -91,27 +83,17 @@ class SynchrotronSlavedStore(
RoomMemberStore.__dict__["did_forget"]
)
# XXX: This is a bit broken because we don't persist the accepted list in a
# way that can be replicated. This means that we don't have a way to
# invalidate the cache correctly.
get_presence_list_accepted = PresenceStore.__dict__[
"get_presence_list_accepted"
]
get_presence_list_observers_accepted = PresenceStore.__dict__[
"get_presence_list_observers_accepted"
]
UPDATE_SYNCING_USERS_MS = 10 * 1000
class SynchrotronPresence(object):
def __init__(self, hs):
self.hs = hs
self.is_mine_id = hs.is_mine_id
self.http_client = hs.get_simple_http_client()
self.store = hs.get_datastore()
self.user_to_num_current_syncs = {}
self.syncing_users_url = hs.config.worker_replication_url + "/syncing_users"
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
@@ -121,17 +103,52 @@ class SynchrotronPresence(object):
for state in active_presence
}
# user_id -> last_sync_ms. Lists the users that have stopped syncing
# but we haven't notified the master of that yet
self.users_going_offline = {}
self._send_stop_syncing_loop = self.clock.looping_call(
self.send_stop_syncing, 10 * 1000
)
self.process_id = random_string(16)
logger.info("Presence process_id is %r", self.process_id)
self._sending_sync = False
self._need_to_send_sync = False
self.clock.looping_call(
self._send_syncing_users_regularly,
UPDATE_SYNCING_USERS_MS,
)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
self.hs.get_tcp_replication().send_user_sync(user_id, is_syncing, last_sync_ms)
reactor.addSystemEventTrigger("before", "shutdown", self._on_shutdown)
def mark_as_coming_online(self, user_id):
"""A user has started syncing. Send a UserSync to the master, unless they
had recently stopped syncing.
Args:
user_id (str)
"""
going_offline = self.users_going_offline.pop(user_id, None)
if not going_offline:
# Safe to skip because we haven't yet told the master they were offline
self.send_user_sync(user_id, True, self.clock.time_msec())
def mark_as_going_offline(self, user_id):
"""A user has stopped syncing. We wait before notifying the master as
its likely they'll come back soon. This allows us to avoid sending
a stopped syncing immediately followed by a started syncing notification
to the master
Args:
user_id (str)
"""
self.users_going_offline[user_id] = self.clock.time_msec()
def send_stop_syncing(self):
"""Check if there are any users who have stopped syncing a while ago
and haven't come back yet. If there are, poke the master about them.
"""
now = self.clock.time_msec()
for user_id, last_sync_ms in self.users_going_offline.items():
if now - last_sync_ms > 10 * 1000:
self.users_going_offline.pop(user_id, None)
self.send_user_sync(user_id, False, last_sync_ms)
def set_state(self, user, state, ignore_status_msg=False):
# TODO How's this supposed to work?
@@ -139,18 +156,16 @@ class SynchrotronPresence(object):
get_states = PresenceHandler.get_states.__func__
get_state = PresenceHandler.get_state.__func__
_get_interested_parties = PresenceHandler._get_interested_parties.__func__
current_state_for_users = PresenceHandler.current_state_for_users.__func__
@defer.inlineCallbacks
def user_syncing(self, user_id, affect_presence):
if affect_presence:
curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
self.user_to_num_current_syncs[user_id] = curr_sync + 1
prev_states = yield self.current_state_for_users([user_id])
if prev_states[user_id].state == PresenceState.OFFLINE:
# TODO: Don't block the sync request on this HTTP hit.
yield self._send_syncing_users_now()
# If we went from no in flight sync to some, notify replication
if self.user_to_num_current_syncs[user_id] == 1:
self.mark_as_coming_online(user_id)
def _end():
# We check that the user_id is in user_to_num_current_syncs because
@@ -159,6 +174,10 @@ class SynchrotronPresence(object):
if affect_presence and user_id in self.user_to_num_current_syncs:
self.user_to_num_current_syncs[user_id] -= 1
# If we went from one in flight sync to non, notify replication
if self.user_to_num_current_syncs[user_id] == 0:
self.mark_as_going_offline(user_id)
@contextlib.contextmanager
def _user_syncing():
try:
@@ -166,56 +185,12 @@ class SynchrotronPresence(object):
finally:
_end()
defer.returnValue(_user_syncing())
@defer.inlineCallbacks
def _on_shutdown(self):
# When the synchrotron is shutdown tell the master to clear the in
# progress syncs for this process
self.user_to_num_current_syncs.clear()
yield self._send_syncing_users_now()
def _send_syncing_users_regularly(self):
# Only send an update if we aren't in the middle of sending one.
if not self._sending_sync:
preserve_fn(self._send_syncing_users_now)()
@defer.inlineCallbacks
def _send_syncing_users_now(self):
if self._sending_sync:
# We don't want to race with sending another update.
# Instead we wait for that update to finish and send another
# update afterwards.
self._need_to_send_sync = True
return
# Flag that we are sending an update.
self._sending_sync = True
yield self.http_client.post_json_get_json(self.syncing_users_url, {
"process_id": self.process_id,
"syncing_users": [
user_id for user_id, count in self.user_to_num_current_syncs.items()
if count > 0
],
})
# Unset the flag as we are no longer sending an update.
self._sending_sync = False
if self._need_to_send_sync:
# If something happened while we were sending the update then
# we might need to send another update.
# TODO: Check if the update that was sent matches the current state
# as we only need to send an update if they are different.
self._need_to_send_sync = False
yield self._send_syncing_users_now()
return defer.succeed(_user_syncing())
@defer.inlineCallbacks
def notify_from_replication(self, states, stream_id):
parties = yield self._get_interested_parties(
states, calculate_remote_hosts=False
)
room_ids_to_states, users_to_states, _ = parties
parties = yield get_interested_parties(self.store, states)
room_ids_to_states, users_to_states = parties
self.notifier.on_new_event(
"presence_key", stream_id, rooms=room_ids_to_states.keys(),
@@ -223,26 +198,24 @@ class SynchrotronPresence(object):
)
@defer.inlineCallbacks
def process_replication(self, result):
stream = result.get("presence", {"rows": []})
states = []
for row in stream["rows"]:
(
position, user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
) = row
state = UserPresenceState(
user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
)
self.user_to_current_state[user_id] = state
states.append(state)
def process_replication_rows(self, token, rows):
states = [UserPresenceState(
row.user_id, row.state, row.last_active_ts,
row.last_federation_update_ts, row.last_user_sync_ts, row.status_msg,
row.currently_active
) for row in rows]
if states and "position" in stream:
stream_id = int(stream["position"])
yield self.notify_from_replication(states, stream_id)
for state in states:
self.user_to_current_state[state.user_id] = state
stream_id = token
yield self.notify_from_replication(states, stream_id)
def get_currently_syncing_users(self):
return [
user_id for user_id, count in self.user_to_num_current_syncs.iteritems()
if count > 0
]
class SynchrotronTyping(object):
@@ -257,16 +230,12 @@ class SynchrotronTyping(object):
# value which we *must* use for the next replication request.
return {"typing": self._latest_room_serial}
def process_replication(self, result):
stream = result.get("typing")
if stream:
self._latest_room_serial = int(stream["position"])
def process_replication_rows(self, token, rows):
self._latest_room_serial = token
for row in stream["rows"]:
position, room_id, typing_json = row
typing = json.loads(typing_json)
self._room_serials[room_id] = position
self._room_typing[room_id] = typing
for row in rows:
self._room_serials[row.room_id] = token
self._room_typing[row.room_id] = row.user_ids
class SynchrotronApplicationService(object):
@@ -351,118 +320,10 @@ class SynchrotronServer(HomeServer):
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
notifier = self.get_notifier()
presence_handler = self.get_presence_handler()
typing_handler = self.get_typing_handler()
self.get_tcp_replication().start_replication(self)
def notify_from_stream(
result, stream_name, stream_key, room=None, user=None
):
stream = result.get(stream_name)
if stream:
position_index = stream["field_names"].index("position")
if room:
room_index = stream["field_names"].index(room)
if user:
user_index = stream["field_names"].index(user)
users = ()
rooms = ()
for row in stream["rows"]:
position = row[position_index]
if user:
users = (row[user_index],)
if room:
rooms = (row[room_index],)
notifier.on_new_event(
stream_key, position, users=users, rooms=rooms
)
@defer.inlineCallbacks
def notify_device_list_update(result):
stream = result.get("device_lists")
if not stream:
return
position_index = stream["field_names"].index("position")
user_index = stream["field_names"].index("user_id")
for row in stream["rows"]:
position = row[position_index]
user_id = row[user_index]
room_ids = yield store.get_rooms_for_user(user_id)
notifier.on_new_event(
"device_list_key", position, rooms=room_ids,
)
@defer.inlineCallbacks
def notify(result):
stream = result.get("events")
if stream:
max_position = stream["position"]
event_map = yield store.get_events([row[1] for row in stream["rows"]])
for row in stream["rows"]:
position = row[0]
event_id = row[1]
event = event_map.get(event_id, None)
if not event:
continue
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
notifier.on_new_room_event(
event, position, max_position, extra_users
)
notify_from_stream(
result, "push_rules", "push_rules_key", user="user_id"
)
notify_from_stream(
result, "user_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "room_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "tag_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "receipts", "receipt_key", room="room_id"
)
notify_from_stream(
result, "typing", "typing_key", room="room_id"
)
notify_from_stream(
result, "to_device", "to_device_key", user="user_id"
)
yield notify_device_list_update(result)
while True:
try:
args = store.stream_positions()
args.update(typing_handler.stream_positions())
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
typing_handler.process_replication(result)
yield presence_handler.process_replication(result)
yield notify(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return SyncReplicationHandler(self)
def build_presence_handler(self):
return SynchrotronPresence(self)
@@ -471,6 +332,79 @@ class SynchrotronServer(HomeServer):
return SynchrotronTyping(self)
class SyncReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(SyncReplicationHandler, self).__init__(hs.get_datastore())
self.store = hs.get_datastore()
self.typing_handler = hs.get_typing_handler()
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
self.presence_handler.sync_callback = self.send_user_sync
def on_rdata(self, stream_name, token, rows):
super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
preserve_fn(self.process_and_notify)(stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(SyncReplicationHandler, self).get_streams_to_replicate()
args.update(self.typing_handler.stream_positions())
return args
def get_currently_syncing_users(self):
return self.presence_handler.get_currently_syncing_users()
@defer.inlineCallbacks
def process_and_notify(self, stream_name, token, rows):
if stream_name == "events":
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
event = yield self.store.get_event(row.event_id)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
max_token = self.store.get_room_max_stream_ordering()
self.notifier.on_new_room_event(
event, token, max_token, extra_users
)
elif stream_name == "push_rules":
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows],
)
elif stream_name in ("account_data", "tag_account_data",):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows],
)
elif stream_name == "receipts":
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "typing":
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "to_device":
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event(
"to_device_key", token, users=entities,
)
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = yield self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event(
"device_list_key", token, rooms=all_room_ids,
)
elif stream_name == "presence":
yield self.presence_handler.process_replication_rows(token, rows)
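The dispatch above is the template any TCP-replication worker follows: subclass ReplicationClientHandler, let the superclass record the new stream position, then poke the notifier for the streams you care about. A minimal sketch (names taken from this diff; the handler itself is hypothetical and not part of this changeset):

from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.util.logcontext import preserve_fn

class MinimalReplicationHandler(ReplicationClientHandler):
    """Hypothetical worker handler that only cares about receipts."""
    def __init__(self, hs):
        super(MinimalReplicationHandler, self).__init__(hs.get_datastore())
        self.notifier = hs.get_notifier()

    def on_rdata(self, stream_name, token, rows):
        # let the superclass advance its stored stream positions first
        super(MinimalReplicationHandler, self).on_rdata(stream_name, token, rows)
        if stream_name == "receipts":
            # wake up sleeping /sync requests, preserving the logcontext
            preserve_fn(self.notifier.on_new_event)(
                "receipt_key", token, rooms=[row.room_id for row in rows],
            )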
def start(config_options):
try:
config = HomeServerConfig.load_config(
@@ -500,37 +434,13 @@ def start(config_options):
ss.setup()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_datastore().start_profiling()
ss.replicate()
ss.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-synchrotron",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-synchrotron", config)
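For comparison, a sketch of the boilerplate that _base.start_worker_reactor folds away, reconstructed from the code removed above (the real helper lives in synapse.app._base and may differ in detail):

def start_worker_reactor(appname, config):
    def run():
        # run the reactor with the sentinel log context, as explained above
        with PreserveLoggingContext():
            logger.info("Running")
            change_resource_limit(config.soft_file_limit)
            if config.gc_thresholds:
                gc.set_threshold(*config.gc_thresholds)
            reactor.run()

    if config.worker_daemonize:
        Daemonize(
            app=appname,
            pid=config.worker_pid_file,
            action=run,
            auto_close_fds=False,
            verbose=True,
            logger=logger,
        ).start()
    else:
        run()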
if __name__ == '__main__':


@@ -125,7 +125,7 @@ def main():
"configfile",
nargs="?",
default="homeserver.yaml",
help="the homeserver config file, defaults to homserver.yaml",
help="the homeserver config file, defaults to homeserver.yaml",
)
parser.add_argument(
"-w", "--worker",
@@ -202,7 +202,8 @@ def main():
worker_app = worker_config["worker_app"]
worker_pidfile = worker_config["worker_pid_file"]
worker_daemonize = worker_config["worker_daemonize"]
assert worker_daemonize # TODO print something more user friendly
assert worker_daemonize, "In config %r: expected '%s' to be True" % (
worker_configfile, "worker_daemonize")
worker_cache_factor = worker_config.get("synctl_cache_factor")
workers.append(Worker(
worker_app, worker_configfile, worker_pidfile, worker_cache_factor,
@@ -233,6 +234,9 @@ def main():
if action == "start" or action == "restart":
if start_stop_synapse:
# Check if synapse is already running
if os.path.exists(pidfile) and pid_running(int(open(pidfile).read())):
abort("synapse.app.homeserver already running")
start(configfile)
for worker in workers:

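The running check above leans on a pid_running helper defined elsewhere in synctl; a plausible sketch of such a check (assumed implementation, probing with signal 0):

import errno
import os

def pid_running(pid):
    try:
        os.kill(pid, 0)  # signal 0 checks existence without killing
        return True
    except OSError as err:
        # EPERM means the process exists but is owned by someone else
        return err.errno == errno.EPERM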
synapse/app/user_dir.py Normal file
View File

@@ -0,0 +1,241 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
logger = logging.getLogger("synapse.app.user_dir")
class UserDirectorySlaveStore(
SlavedEventStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
UserDirectoryStore,
BaseSlavedStore,
):
def __init__(self, db_conn, hs):
super(UserDirectorySlaveStore, self).__init__(db_conn, hs)
events_max = self._stream_id_gen.get_current_token()
curr_state_delta_prefill, min_curr_state_delta_id = self._get_cache_dict(
db_conn, "current_state_delta_stream",
entity_column="room_id",
stream_column="stream_id",
max_value=events_max, # As we share the stream id with events token
limit=1000,
)
self._curr_state_delta_stream_cache = StreamChangeCache(
"_curr_state_delta_stream_cache", min_curr_state_delta_id,
prefilled_cache=curr_state_delta_prefill,
)
self._current_state_delta_pos = events_max
def stream_positions(self):
result = super(UserDirectorySlaveStore, self).stream_positions()
result["current_state_deltas"] = self._current_state_delta_pos
return result
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "current_state_deltas":
self._current_state_delta_pos = token
for row in rows:
self._curr_state_delta_stream_cache.entity_has_changed(
row.room_id, token
)
return super(UserDirectorySlaveStore, self).process_replication_rows(
stream_name, token, rows
)
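The _get_cache_dict prefill in __init__ exists so that the StreamChangeCache can answer "has this room changed since position X?" immediately after startup, instead of conservatively answering "maybe" for everything. A small sketch of the behaviour being relied on (constructor and entity_has_changed as used above; has_entity_changed is assumed from the same class):

from synapse.util.caches.stream_change_cache import StreamChangeCache

# prefilled as in __init__ above: entity -> last position it changed at
cache = StreamChangeCache(
    "example_cache", 100, prefilled_cache={"!room:example.org": 105},
)
cache.entity_has_changed("!other:example.org", 110)

# a reader caught up to position 106 only needs to refetch the second room
assert cache.has_entity_changed("!other:example.org", 106)
assert not cache.has_entity_changed("!room:example.org", 106)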
class UserDirectoryServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = UserDirectorySlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
user_directory.register_servlets(self, resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
for address in bind_addresses:
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
logger.info("Synapse user_dir now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return UserDirectoryReplicationHandler(self)
class UserDirectoryReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(UserDirectoryReplicationHandler, self).__init__(hs.get_datastore())
self.user_directory = hs.get_user_directory_handler()
def on_rdata(self, stream_name, token, rows):
super(UserDirectoryReplicationHandler, self).on_rdata(
stream_name, token, rows
)
if stream_name == "current_state_deltas":
preserve_fn(self.user_directory.notify_new_event)()
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse user directory", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.user_dir"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
if config.update_user_directory:
sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``update_user_directory: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the user directory updates to start in this worker, since they are disabled in the main config
config.update_user_directory = True
tls_server_context_factory = context_factory.ServerContextFactory(config)
ps = UserDirectoryServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ps.setup()
ps.start_listening(config.worker_listeners)
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-user-dir", config)
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])


@@ -241,6 +241,16 @@ class ApplicationService(object):
def is_exclusive_room(self, room_id):
return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)
def get_exlusive_user_regexes(self):
"""Get the list of regexes used to determine if a user is exclusively
registered by the AS
"""
return [
regex_obj["regex"]
for regex_obj in self.namespaces[ApplicationService.NS_USERS]
if regex_obj["exclusive"]
]
def is_rate_limited(self):
return self.rate_limited
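To illustrate what get_exlusive_user_regexes returns, here is the namespace shape the comprehension expects, with hypothetical data (a list of {"regex", "exclusive"} dicts under the users namespace):

namespaces = {
    "users": [
        {"regex": "@irc_.*:example.org", "exclusive": True},
        {"regex": "@gitter_.*:example.org", "exclusive": False},
    ],
}
exclusive = [
    regex_obj["regex"]
    for regex_obj in namespaces["users"]
    if regex_obj["exclusive"]
]
assert exclusive == ["@irc_.*:example.org"]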


@@ -71,6 +71,15 @@ class EmailConfig(Config):
self.email_riot_base_url = email_config.get(
"riot_base_url", None
)
self.email_smtp_user = email_config.get(
"smtp_user", None
)
self.email_smtp_pass = email_config.get(
"smtp_pass", None
)
self.require_transport_security = email_config.get(
"require_transport_security", False
)
if "app_name" in email_config:
self.email_app_name = email_config["app_name"]
else:
@@ -91,10 +100,17 @@ class EmailConfig(Config):
# Defining a custom URL for Riot is only needed if email notifications
# should contain links to a self-hosted installation of Riot; when set
# the "app_name" setting is ignored.
#
# If your SMTP server requires authentication, the optional smtp_user &
# smtp_pass variables should be used
#
#email:
# enable_notifs: false
# smtp_host: "localhost"
# smtp_port: 25
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: False
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
# template_dir: res/templates
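The new smtp_user/smtp_pass/require_transport_security settings map onto a standard authenticated-SMTP handshake. A rough sketch of how a mailer might consume them (a generic smtplib illustration, not synapse's actual mailer, which is not part of this hunk; the email_smtp_host and email_smtp_port attributes are assumed from the same config class):

import smtplib

def send_notif(config, from_addr, to_addr, msg):
    conn = smtplib.SMTP(config.email_smtp_host, config.email_smtp_port)
    if config.require_transport_security:
        conn.starttls()  # upgrade to TLS before authenticating
    if config.email_smtp_user:
        conn.login(config.email_smtp_user, config.email_smtp_pass)
    conn.sendmail(from_addr, [to_addr], msg)
    conn.quit()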


@@ -33,6 +33,7 @@ from .jwt import JWTConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .emailconfig import EmailConfig
from .workers import WorkerConfig
from .push import PushConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
@@ -40,7 +41,7 @@ class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
VoipConfig, RegistrationConfig, MetricsConfig, ApiConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
JWTConfig, PasswordConfig, EmailConfig,
WorkerConfig, PasswordAuthProviderConfig,):
WorkerConfig, PasswordAuthProviderConfig, PushConfig,):
pass

synapse/config/push.py Normal file

@@ -0,0 +1,45 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class PushConfig(Config):
def read_config(self, config):
self.push_redact_content = False
push_config = config.get("email", {})
self.push_redact_content = push_config.get("redact_content", False)
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Control how push messages are sent to google/apple for notifications.
# Normally every message sent in a room with one or more people using
# mobile devices will be posted to a push server hosted by matrix.org
# which is registered with google and apple in order to allow push
# notifications to be sent to these mobile devices.
#
# Setting redact_content to true will make the push messages contain no
# message content, which will provide increased privacy. This is a
# temporary solution pending improvements to Android and iPhone apps
# to get content from the app rather than the notification.
#
# For modern android devices the notification content will still appear
# because it is loaded by the app. iPhone, however, will show a
# notification saying only that a message arrived and who it came from.
#
#push:
# redact_content: false
"""


@@ -69,6 +69,7 @@ class RegistrationConfig(Config):
trusted_third_party_id_servers:
- matrix.org
- vector.im
- riot.im
""" % locals()
def add_arguments(self, parser):


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -29,12 +30,25 @@ class ServerConfig(Config):
self.user_agent_suffix = config.get("user_agent_suffix")
self.use_frozen_dicts = config.get("use_frozen_dicts", False)
self.public_baseurl = config.get("public_baseurl")
self.cpu_affinity = config.get("cpu_affinity")
# Whether to send federation traffic out in this process. This only
# applies to some federation traffic, and so shouldn't be used to
# "disable" federation
self.send_federation = config.get("send_federation", True)
# Whether to update the user directory or not. This should be set to
# false only if we are updating the user directory in a worker
self.update_user_directory = config.get("update_user_directory", True)
self.filter_timeline_limit = config.get("filter_timeline_limit", -1)
# Whether we should block invites sent to users on this server
# (other than those sent by local server admins)
self.block_non_admin_invites = config.get(
"block_non_admin_invites", False,
)
if self.public_baseurl is not None:
if self.public_baseurl[-1] != '/':
self.public_baseurl += '/'
@@ -141,9 +155,36 @@ class ServerConfig(Config):
# When running as a daemon, the file to store the pid in
pid_file: %(pid_file)s
# CPU affinity mask. Setting this restricts the CPUs on which the
# process will be scheduled. It is represented as a bitmask, with the
# lowest order bit corresponding to the first logical CPU and the
# highest order bit corresponding to the last logical CPU. Not all CPUs
# may exist on a given system but a mask may specify more CPUs than are
# present.
#
# For example:
# 0x00000001 is processor #0,
# 0x00000003 is processors #0 and #1,
# 0xFFFFFFFF is all processors (#0 through #31).
#
# Pinning a Python process to a single CPU is desirable, because Python
# is inherently single-threaded due to the GIL, and can suffer a
# 30-40%% slowdown due to cache blow-out and thread context switching
# if the scheduler happens to schedule the underlying threads across
# different cores. See
# https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
#
# cpu_affinity: 0xFFFFFFFF
# Whether to serve a web client from the HTTP/HTTPS root resource.
web_client: True
# The root directory to serve the above web client from.
# If left undefined, synapse will serve the matrix-angular-sdk web client.
# Make sure matrix-angular-sdk is installed with pip if web_client is True
# and web_client_location is undefined
# web_client_location: "/path/to/web/root"
# The public-facing base URL for the client API (not including _matrix/...)
# public_baseurl: https://example.com:8448/
@@ -155,6 +196,14 @@ class ServerConfig(Config):
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
# gc_thresholds: [700, 10, 10]
# Set the limit on the returned events in the timeline in the get
# and sync operations. The default value is -1, meaning no upper limit.
# filter_timeline_limit: 5000
# Whether room invites to users on this server should be blocked
# (except those sent by local server admins). The default is False.
# block_non_admin_invites: True
# List of ports that Synapse should listen on, their purpose and their
# configuration.
listeners:


@@ -23,6 +23,7 @@ class VoipConfig(Config):
self.turn_username = config.get("turn_username")
self.turn_password = config.get("turn_password")
self.turn_user_lifetime = self.parse_duration(config["turn_user_lifetime"])
self.turn_allow_guests = config.get("turn_allow_guests", True)
def default_config(self, **kwargs):
return """\
@@ -41,4 +42,11 @@ class VoipConfig(Config):
# How long generated TURN credentials last
turn_user_lifetime: "1h"
# Whether guests should be allowed to use the TURN server.
# This defaults to True, otherwise VoIP will be unreliable for guests.
# However, it does introduce a slight security risk as it allows users to
# connect to arbitrary endpoints without having first signed up for a
# valid account (e.g. by passing a CAPTCHA).
turn_allow_guests: True
"""


@@ -28,7 +28,12 @@ class WorkerConfig(Config):
self.worker_pid_file = config.get("worker_pid_file")
self.worker_log_file = config.get("worker_log_file")
self.worker_log_config = config.get("worker_log_config")
self.worker_replication_url = config.get("worker_replication_url")
self.worker_replication_host = config.get("worker_replication_host", None)
self.worker_replication_port = config.get("worker_replication_port", None)
self.worker_name = config.get("worker_name", self.worker_app)
self.worker_main_http_uri = config.get("worker_main_http_uri", None)
self.worker_cpu_affinity = config.get("worker_cpu_affinity")
if self.worker_listeners:
for listener in self.worker_listeners:


@@ -13,14 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util import logcontext
from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util.logcontext import (
preserve_context_over_fn, preserve_context_over_deferred
)
import simplejson as json
import logging
@@ -43,14 +40,10 @@ def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1):
for i in range(5):
try:
protocol = yield preserve_context_over_fn(
endpoint.connect, factory
)
server_response, server_certificate = yield preserve_context_over_deferred(
protocol.remote_key
)
defer.returnValue((server_response, server_certificate))
return
with logcontext.PreserveLoggingContext():
protocol = yield endpoint.connect(factory)
server_response, server_certificate = yield protocol.remote_key
defer.returnValue((server_response, server_certificate))
except SynapseKeyClientError as e:
logger.exception("Error getting key for %r" % (server_name,))
if e.status.startswith("4"):


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,10 +16,9 @@
from synapse.crypto.keyclient import fetch_server_key
from synapse.api.errors import SynapseError, Codes
from synapse.util import unwrapFirstError
from synapse.util.async import ObservableDeferred
from synapse.util import unwrapFirstError, logcontext
from synapse.util.logcontext import (
preserve_context_over_deferred, preserve_context_over_fn, PreserveLoggingContext,
PreserveLoggingContext,
preserve_fn
)
from synapse.util.metrics import Measure
@@ -57,7 +57,8 @@ Attributes:
json_object(dict): The JSON object to verify.
deferred(twisted.internet.defer.Deferred):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched
a verify key has been fetched. The deferreds' callbacks are run with no
logcontext.
"""
@@ -74,23 +75,32 @@ class Keyring(object):
self.perspective_servers = self.config.perspectives
self.hs = hs
# map from server name to Deferred. Has an entry for each server with
# an ongoing key download; the Deferred completes once the download
# completes.
#
# These are regular, logcontext-agnostic Deferreds.
self.key_downloads = {}
def verify_json_for_server(self, server_name, json_object):
return self.verify_json_objects_for_server(
[(server_name, json_object)]
)[0]
return logcontext.make_deferred_yieldable(
self.verify_json_objects_for_server(
[(server_name, json_object)]
)[0]
)
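The wrapping above is the general contract this refactor establishes: the deferreds handed out by verify_json_objects_for_server run their callbacks in the sentinel logcontext, so any caller that wants to yield on one must first make it "yieldable". A minimal sketch of the calling pattern (helpers as imported in this file):

from twisted.internet import defer
from synapse.util import logcontext

@defer.inlineCallbacks
def verify_one(keyring, server_name, json_object):
    # the keyring's deferreds call back with no logcontext; wrap them so
    # our own logcontext is restored when the result arrives
    d = keyring.verify_json_objects_for_server(
        [(server_name, json_object)]
    )[0]
    yield logcontext.make_deferred_yieldable(d)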
def verify_json_objects_for_server(self, server_and_json):
"""Bulk verfies signatures of json objects, bulk fetching keys as
"""Bulk verifies signatures of json objects, bulk fetching keys as
necessary.
Args:
server_and_json (list): List of pairs of (server_name, json_object)
Returns:
list of deferreds indicating success or failure to verify each
json object's signature for the given server_name.
List<Deferred>: for each input pair, a deferred indicating success
or failure to verify each json object's signature for the given
server_name. The deferreds run their callbacks in the sentinel
logcontext.
"""
verify_requests = []
@@ -117,94 +127,72 @@ class Keyring(object):
verify_requests.append(verify_request)
@defer.inlineCallbacks
def handle_key_deferred(verify_request):
server_name = verify_request.server_name
try:
_, key_id, verify_key = yield verify_request.deferred
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
502,
"Error downloading keys for %s" % (server_name,),
Codes.UNAUTHORIZED,
)
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
401,
"No key for %s with id %s" % (server_name, key_ids),
Codes.UNAUTHORIZED,
)
json_object = verify_request.json_object
logger.debug("Got key %s %s:%s for server %s, verifying" % (
key_id, verify_key.alg, verify_key.version, server_name,
))
try:
verify_signed_json(json_object, server_name, verify_key)
except:
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
),
Codes.UNAUTHORIZED,
)
server_to_deferred = {
server_name: defer.Deferred()
for server_name, _ in server_and_json
}
with PreserveLoggingContext():
# We want to wait for any previous lookups to complete before
# proceeding.
wait_on_deferred = self.wait_for_previous_lookups(
[server_name for server_name, _ in server_and_json],
server_to_deferred,
)
# Actually start fetching keys.
wait_on_deferred.addBoth(
lambda _: self.get_server_verify_keys(verify_requests)
)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
server_to_request_ids = {}
def remove_deferreds(res, server_name, verify_request):
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
deferred.addBoth(remove_deferreds, server_name, verify_request)
preserve_fn(self._start_key_lookups)(verify_requests)
# Pass those keys to handle_key_deferred so that the json object
# signatures can be verified
handle = preserve_fn(_handle_key_deferred)
return [
preserve_context_over_fn(handle_key_deferred, verify_request)
for verify_request in verify_requests
handle(rq) for rq in verify_requests
]
@defer.inlineCallbacks
def _start_key_lookups(self, verify_requests):
"""Sets off the key fetches for each verify request
Once each fetch completes, verify_request.deferred will be resolved.
Args:
verify_requests (List[VerifyKeyRequest]):
"""
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred()
for rq in verify_requests
}
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(
[rq.server_name for rq in verify_requests],
server_to_deferred,
)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
server_to_request_ids = {}
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for verify_request in verify_requests:
verify_request.deferred.addBoth(
remove_deferreds, verify_request,
)
@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_names, server_to_deferred):
"""Waits for any previous key lookups for the given servers to finish.
@@ -212,7 +200,13 @@ class Keyring(object):
Args:
server_names (list): list of server_names we want to lookup
server_to_deferred (dict): server_name to deferred which gets
resolved once we've finished looking up keys for that server
resolved once we've finished looking up keys for that server.
The Deferreds should be regular twisted ones which call their
callbacks with no logcontext.
Returns: a Deferred which resolves once all key lookups for the given
servers have completed. Follows the synapse rules of logcontext
preservation.
"""
while True:
wait_on = [
@@ -226,17 +220,15 @@ class Keyring(object):
else:
break
def rm(r, server_name_):
self.key_downloads.pop(server_name_, None)
return r
for server_name, deferred in server_to_deferred.items():
d = ObservableDeferred(preserve_context_over_deferred(deferred))
self.key_downloads[server_name] = d
self.key_downloads[server_name] = deferred
deferred.addBoth(rm, server_name)
def rm(r, server_name):
self.key_downloads.pop(server_name, None)
return r
d.addBoth(rm, server_name)
def get_server_verify_keys(self, verify_requests):
def _get_server_verify_keys(self, verify_requests):
"""Tries to find at least one key for each verify request
For each verify_request, verify_request.deferred is called back with
@@ -305,21 +297,23 @@ class Keyring(object):
if not missing_keys:
break
for verify_request in requests_missing_keys.values():
verify_request.deferred.errback(SynapseError(
401,
"No key for %s with id %s" % (
verify_request.server_name, verify_request.key_ids,
),
Codes.UNAUTHORIZED,
))
with PreserveLoggingContext():
for verify_request in requests_missing_keys:
verify_request.deferred.errback(SynapseError(
401,
"No key for %s with id %s" % (
verify_request.server_name, verify_request.key_ids,
),
Codes.UNAUTHORIZED,
))
def on_err(err):
for verify_request in verify_requests:
if not verify_request.deferred.called:
verify_request.deferred.errback(err)
with PreserveLoggingContext():
for verify_request in verify_requests:
if not verify_request.deferred.called:
verify_request.deferred.errback(err)
do_iterations().addErrback(on_err)
preserve_fn(do_iterations)().addErrback(on_err)
@defer.inlineCallbacks
def get_keys_from_store(self, server_name_and_key_ids):
@@ -333,7 +327,7 @@ class Keyring(object):
Deferred: resolves to dict[str, dict[str, VerifyKey]]: map from
server_name -> key_id -> VerifyKey
"""
res = yield preserve_context_over_deferred(defer.gatherResults(
res = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.get_server_verify_keys)(
server_name, key_ids
@@ -341,7 +335,7 @@ class Keyring(object):
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(dict(res))
@@ -362,13 +356,13 @@ class Keyring(object):
)
defer.returnValue({})
results = yield preserve_context_over_deferred(defer.gatherResults(
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
union_of_keys = {}
for result in results:
@@ -402,13 +396,13 @@ class Keyring(object):
defer.returnValue(keys)
results = yield preserve_context_over_deferred(defer.gatherResults(
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(server_name, key_ids)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
merged = {}
for result in results:
@@ -485,7 +479,7 @@ class Keyring(object):
for server_name, response_keys in processed_response.items():
keys.setdefault(server_name, {}).update(response_keys)
yield preserve_context_over_deferred(defer.gatherResults(
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
server_name=server_name,
@@ -495,7 +489,7 @@ class Keyring(object):
for server_name, response_keys in keys.items()
],
consumeErrors=True
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(keys)
@@ -543,7 +537,7 @@ class Keyring(object):
keys.update(response_keys)
yield preserve_context_over_deferred(defer.gatherResults(
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
server_name=key_server_name,
@@ -553,7 +547,7 @@ class Keyring(object):
for key_server_name, verify_keys in keys.items()
],
consumeErrors=True
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(keys)
@@ -619,7 +613,7 @@ class Keyring(object):
response_keys.update(verify_keys)
response_keys.update(old_verify_keys)
yield preserve_context_over_deferred(defer.gatherResults(
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_keys_json)(
server_name=server_name,
@@ -632,7 +626,7 @@ class Keyring(object):
for key_id in updated_key_ids
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
results[server_name] = response_keys
@@ -710,7 +704,6 @@ class Keyring(object):
defer.returnValue(verify_keys)
@defer.inlineCallbacks
def store_keys(self, server_name, from_server, verify_keys):
"""Store a collection of verify keys for a given server
Args:
@@ -721,7 +714,7 @@ class Keyring(object):
A deferred that completes when the keys are stored.
"""
# TODO(markjh): Store whether the keys have expired.
yield preserve_context_over_deferred(defer.gatherResults(
return logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_verify_key)(
server_name, server_name, key.time_added, key
@@ -729,4 +722,48 @@ class Keyring(object):
for key_id, key in verify_keys.items()
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
@defer.inlineCallbacks
def _handle_key_deferred(verify_request):
server_name = verify_request.server_name
try:
with PreserveLoggingContext():
_, key_id, verify_key = yield verify_request.deferred
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
502,
"Error downloading keys for %s" % (server_name,),
Codes.UNAUTHORIZED,
)
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
401,
"No key for %s with id %s" % (server_name, verify_request.key_ids),
Codes.UNAUTHORIZED,
)
json_object = verify_request.json_object
logger.debug("Got key %s %s:%s for server %s, verifying" % (
key_id, verify_key.alg, verify_key.version, server_name,
))
try:
verify_signed_json(json_object, server_name, verify_key)
except:
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
),
Codes.UNAUTHORIZED,
)


@@ -50,6 +50,7 @@ class EventContext(object):
"prev_group",
"delta_ids",
"prev_state_events",
"app_service",
]
def __init__(self):
@@ -68,3 +69,5 @@ class EventContext(object):
self.delta_ids = None
self.prev_state_events = None
self.app_service = None


@@ -0,0 +1,38 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def check_event_for_spam(event):
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
sent by a local user. If it is sent by a user on another server, then
users receive a blank event.
Args:
event (synapse.events.EventBase): the event to be checked
Returns:
bool: True if the event is spammy.
"""
if not hasattr(event, "content") or "body" not in event.content:
return False
# for example:
#
# if "the third flower is green" in event.content["body"]:
# return True
return False
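The hook is consumed by the federation code later in this changeset; a condensed sketch of that call site (mirroring the callback added to federation_base.py below, where the redacted copy is built with prune_event):

from synapse.events import spamcheck
from synapse.events.utils import prune_event

def check_pdu(pdu):
    # as in the federation callback: spammy events are served redacted
    if spamcheck.check_event_for_spam(pdu):
        return prune_event(pdu)
    return pdu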


@@ -225,7 +225,22 @@ def format_event_for_client_v2_without_room_id(d):
def serialize_event(e, time_now_ms, as_client_event=True,
event_format=format_event_for_client_v1,
token_id=None, only_event_fields=None):
token_id=None, only_event_fields=None, is_invite=False):
"""Serialize event for clients
Args:
e (EventBase)
time_now_ms (int)
as_client_event (bool)
event_format
token_id
only_event_fields
is_invite (bool): Whether this is an invite that is being sent to the
invitee
Returns:
dict
"""
# FIXME(erikj): To handle the case of presence events and the like
if not isinstance(e, EventBase):
return e
@@ -251,6 +266,12 @@ def serialize_event(e, time_now_ms, as_client_event=True,
if txn_id is not None:
d["unsigned"]["transaction_id"] = txn_id
# If this is an invite for somebody else, then we don't care about the
# invite_room_state as that's meant solely for the invitee. Other clients
# will already have the state since they're in the room.
if not is_invite:
d["unsigned"].pop("invite_room_state", None)
if as_client_event:
d = event_format(d)
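In practice the new flag only matters when serializing an invite for the invitee; a usage sketch (event and time_now_ms are placeholders):

# the invitee's copy keeps unsigned.invite_room_state ...
for_invitee = serialize_event(event, time_now_ms, is_invite=True)
# ... everyone else gets it stripped, since they already have the state
for_others = serialize_event(event, time_now_ms)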


@@ -12,21 +12,14 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.events.utils import prune_event
from synapse.crypto.event_signing import check_event_content_hash
from synapse.api.errors import SynapseError
from synapse.util import unwrapFirstError
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
import logging
from synapse.api.errors import SynapseError
from synapse.crypto.event_signing import check_event_content_hash
from synapse.events import spamcheck
from synapse.events.utils import prune_event
from synapse.util import unwrapFirstError, logcontext
from twisted.internet import defer
logger = logging.getLogger(__name__)
@@ -57,56 +50,52 @@ class FederationBase(object):
"""
deferreds = self._check_sigs_and_hashes(pdus)
def callback(pdu):
return pdu
@defer.inlineCallbacks
def handle_check_result(pdu, deferred):
try:
res = yield logcontext.make_deferred_yieldable(deferred)
except SynapseError:
res = None
def errback(failure, pdu):
failure.trap(SynapseError)
return None
def try_local_db(res, pdu):
if not res:
# Check local db.
return self.store.get_event(
res = yield self.store.get_event(
pdu.event_id,
allow_rejected=True,
allow_none=True,
)
return res
def try_remote(res, pdu):
if not res and pdu.origin != origin:
return self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
outlier=outlier,
timeout=10000,
).addErrback(lambda e: None)
return res
try:
res = yield self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
outlier=outlier,
timeout=10000,
)
except SynapseError:
pass
def warn(res, pdu):
if not res:
logger.warn(
"Failed to find copy of %s with valid signature",
pdu.event_id,
)
return res
for pdu, deferred in zip(pdus, deferreds):
deferred.addCallbacks(
callback, errback, errbackArgs=[pdu]
).addCallback(
try_local_db, pdu
).addCallback(
try_remote, pdu
).addCallback(
warn, pdu
defer.returnValue(res)
handle = logcontext.preserve_fn(handle_check_result)
deferreds2 = [
handle(pdu, deferred)
for pdu, deferred in zip(pdus, deferreds)
]
valid_pdus = yield logcontext.make_deferred_yieldable(
defer.gatherResults(
deferreds2,
consumeErrors=True,
)
valid_pdus = yield preserve_context_over_deferred(defer.gatherResults(
deferreds,
consumeErrors=True
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError)
if include_none:
defer.returnValue(valid_pdus)
@@ -114,15 +103,24 @@ class FederationBase(object):
defer.returnValue([p for p in valid_pdus if p])
def _check_sigs_and_hash(self, pdu):
return self._check_sigs_and_hashes([pdu])[0]
return logcontext.make_deferred_yieldable(
self._check_sigs_and_hashes([pdu])[0],
)
def _check_sigs_and_hashes(self, pdus):
"""Throws a SynapseError if a PDU does not have the correct
signatures.
"""Checks that each of the received events is correctly signed by the
sending server.
Args:
pdus (list[FrozenEvent]): the events to be checked
Returns:
FrozenEvent: Either the given event or it redacted if it failed the
content hash check.
list[Deferred]: for each input event, a deferred which:
* returns the original event if the checks pass
* returns a redacted version of the event (if the signature
matched but the hash did not)
* throws a SynapseError if the signature check failed.
The deferreds run their callbacks in the sentinel logcontext.
"""
redacted_pdus = [
@@ -130,26 +128,38 @@ class FederationBase(object):
for pdu in pdus
]
deferreds = preserve_fn(self.keyring.verify_json_objects_for_server)([
deferreds = self.keyring.verify_json_objects_for_server([
(p.origin, p.get_pdu_json())
for p in redacted_pdus
])
ctx = logcontext.LoggingContext.current_context()
def callback(_, pdu, redacted):
if not check_event_content_hash(pdu):
logger.warn(
"Event content has been tampered, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
return pdu
with logcontext.PreserveLoggingContext(ctx):
if not check_event_content_hash(pdu):
logger.warn(
"Event content has been tampered, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
if spamcheck.check_event_for_spam(pdu):
logger.warn(
"Event contains spam, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
return pdu
def errback(failure, pdu):
failure.trap(SynapseError)
logger.warn(
"Signature check failed for %s",
pdu.event_id,
)
with logcontext.PreserveLoggingContext(ctx):
logger.warn(
"Signature check failed for %s",
pdu.event_id,
)
return failure
for deferred, pdu, redacted in zip(deferreds, pdus, redacted_pdus):


@@ -22,7 +22,7 @@ from synapse.api.constants import Membership
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError,
)
from synapse.util import unwrapFirstError
from synapse.util import unwrapFirstError, logcontext
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logutils import log_function
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
@@ -189,10 +189,10 @@ class FederationClient(FederationBase):
]
# FIXME: We should handle signature failures more gracefully.
pdus[:] = yield preserve_context_over_deferred(defer.gatherResults(
pdus[:] = yield logcontext.make_deferred_yieldable(defer.gatherResults(
self._check_sigs_and_hashes(pdus),
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(pdus)
@@ -252,7 +252,7 @@ class FederationClient(FederationBase):
pdu = pdu_list[0]
# Check signatures are correct.
signed_pdu = yield self._check_sigs_and_hashes([pdu])[0]
signed_pdu = yield self._check_sigs_and_hash(pdu)
break
@@ -474,8 +474,13 @@ class FederationClient(FederationBase):
content (object): Any additional data to put into the content field
of the event.
Return:
A tuple of (origin (str), event (object)) where origin is the remote
homeserver which generated the event.
Deferred: resolves to a tuple of (origin (str), event (object))
where origin is the remote homeserver which generated the event.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
if membership not in valid_memberships:
@@ -528,6 +533,27 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
def send_join(self, destinations, pdu):
"""Sends a join event to one of a list of homeservers.
Doing so will cause the remote server to add the event to the graph,
and send the event out to the rest of the federation.
Args:
destinations (str): Candidate homeservers which are probably
participating in the room.
pdu (BaseEvent): event to be sent
Return:
Deferred: resolves to a dict with members ``origin`` (a string
giving the server the event was sent to), ``state`` (?) and
``auth_chain``.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
@@ -635,6 +661,26 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
def send_leave(self, destinations, pdu):
"""Sends a leave event to one of a list of homeservers.
Doing so will cause the remote server to add the event to the graph,
and send the event out to the rest of the federation.
This is mostly useful to reject received invites.
Args:
destinations (str): Candidate homeservers which are probably
participating in the room.
pdu (BaseEvent): event to be sent
Return:
Deferred: resolves to None.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a non-200 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue


@@ -146,11 +146,15 @@ class FederationServer(FederationBase):
# check that it's actually being sent from a valid destination to
# workaround bug #1753 in 0.18.5 and 0.18.6
if transaction.origin != get_domain_from_id(pdu.event_id):
# We continue to accept join events from any server; this is
# necessary for the federation join dance to work correctly.
# (When we join over federation, the "helper" server is
# responsible for sending out the join event, rather than the
# origin. See bug #1893).
if not (
pdu.type == 'm.room.member' and
pdu.content and
pdu.content.get("membership", None) == 'join' and
self.hs.is_mine_id(pdu.state_key)
pdu.content.get("membership", None) == 'join'
):
logger.info(
"Discarding PDU %s from invalid origin %s",
@@ -436,6 +440,16 @@ class FederationServer(FederationBase):
key_id: json.loads(json_bytes)
}
logger.info(
"Claimed one-time-keys: %s",
",".join((
"%s for %s:%s" % (key_id, user_id, device_id)
for user_id, user_keys in json_result.iteritems()
for device_id, device_keys in user_keys.iteritems()
for key_id, _ in device_keys.iteritems()
)),
)
defer.returnValue({"one_time_keys": json_result})
@defer.inlineCallbacks


@@ -31,23 +31,21 @@ Events are replicated via a separate events stream.
from .units import Edu
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
import synapse.metrics
from blist import sorteddict
import ujson
from collections import namedtuple
import logging
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
PRESENCE_TYPE = "p"
KEYED_EDU_TYPE = "k"
EDU_TYPE = "e"
FAILURE_TYPE = "f"
DEVICE_MESSAGE_TYPE = "d"
class FederationRemoteSendQueue(object):
"""A drop in replacement for TransactionQueue"""
@@ -55,18 +53,19 @@ class FederationRemoteSendQueue(object):
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id
self.presence_map = {}
self.presence_changed = sorteddict()
self.presence_map = {} # Pending presence map user_id -> UserPresenceState
self.presence_changed = sorteddict() # Stream position -> user_id
self.keyed_edu = {}
self.keyed_edu_changed = sorteddict()
self.keyed_edu = {} # (destination, key) -> EDU
self.keyed_edu_changed = sorteddict() # stream position -> (destination, key)
self.edus = sorteddict()
self.edus = sorteddict() # stream position -> Edu
self.failures = sorteddict()
self.failures = sorteddict() # stream position -> (destination, Failure)
self.device_messages = sorteddict()
self.device_messages = sorteddict() # stream position -> destination
self.pos = 1
self.pos_time = sorteddict()
@@ -122,7 +121,9 @@ class FederationRemoteSendQueue(object):
del self.presence_changed[key]
user_ids = set(
user_id for uids in self.presence_changed.values() for _, user_id in uids
user_id
for uids in self.presence_changed.itervalues()
for user_id in uids
)
to_del = [
@@ -189,18 +190,20 @@ class FederationRemoteSendQueue(object):
self.notifier.on_new_replication_data()
def send_presence(self, destination, states):
"""As per TransactionQueue"""
def send_presence(self, states):
"""As per TransactionQueue
Args:
states (list(UserPresenceState))
"""
pos = self._next_pos()
self.presence_map.update({
state.user_id: state
for state in states
})
# We only want to send presence for our own users, so let's always just
# filter here, just in case.
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
self.presence_changed[pos] = [
(destination, state.user_id) for state in states
]
self.presence_map.update({state.user_id: state for state in local_states})
self.presence_changed[pos] = [state.user_id for state in local_states]
self.notifier.on_new_replication_data()
@@ -220,10 +223,15 @@ class FederationRemoteSendQueue(object):
def get_current_token(self):
return self.pos - 1
def get_replication_rows(self, token, limit, federation_ack=None):
"""
def federation_ack(self, token):
self._clear_queue_before_pos(token)
def get_replication_rows(self, from_token, to_token, limit, federation_ack=None):
"""Get rows to be sent over federation between the two tokens
Args:
token (int)
from_token (int)
to_token (int)
limit (int)
federation_ack (int): Optional. The position up to which the worker has
explicitly acknowledged it has handled. Allows us to drop
@@ -232,9 +240,11 @@ class FederationRemoteSendQueue(object):
# TODO: Handle limit.
# To handle restarts where we wrap around
if token > self.pos:
token = -1
if from_token > self.pos:
from_token = -1
# list of tuple(int, BaseFederationRow), where the first is the position
# of the federation stream.
rows = []
# There should be only one reader, so lets delete everything its
@@ -244,62 +254,295 @@ class FederationRemoteSendQueue(object):
# Fetch changed presence
keys = self.presence_changed.keys()
i = keys.bisect_right(token)
dest_user_ids = set(
(pos, dest_user_id)
for pos in keys[i:]
for dest_user_id in self.presence_changed[pos]
)
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
dest_user_ids = [
(pos, user_id)
for pos in keys[i:j]
for user_id in self.presence_changed[pos]
]
for (key, (dest, user_id)) in dest_user_ids:
rows.append((key, PRESENCE_TYPE, ujson.dumps({
"destination": dest,
"state": self.presence_map[user_id].as_dict(),
})))
for (key, user_id) in dest_user_ids:
rows.append((key, PresenceRow(
state=self.presence_map[user_id],
)))
# Fetch changes keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_right(token)
keyed_edus = set((k, self.keyed_edu_changed[k]) for k in keys[i:])
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
# We purposefully clobber based on the key here: python dict comprehensions
# keep the last value for a repeated key, so this correctly points to the
# latest stream position.
keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]}
for (pos, (destination, edu_key)) in keyed_edus:
rows.append(
(pos, KEYED_EDU_TYPE, ujson.dumps({
"key": edu_key,
"edu": self.keyed_edu[(destination, edu_key)].get_internal_dict(),
}))
)
for ((destination, edu_key), pos) in keyed_edus.iteritems():
rows.append((pos, KeyedEduRow(
key=edu_key,
edu=self.keyed_edu[(destination, edu_key)],
)))
# Fetch changed edus
keys = self.edus.keys()
i = keys.bisect_right(token)
edus = set((k, self.edus[k]) for k in keys[i:])
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
edus = ((k, self.edus[k]) for k in keys[i:j])
for (pos, edu) in edus:
rows.append((pos, EDU_TYPE, ujson.dumps(edu.get_internal_dict())))
rows.append((pos, EduRow(edu)))
# Fetch changed failures
keys = self.failures.keys()
i = keys.bisect_right(token)
failures = set((k, self.failures[k]) for k in keys[i:])
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
failures = ((k, self.failures[k]) for k in keys[i:j])
for (pos, (destination, failure)) in failures:
rows.append((pos, FAILURE_TYPE, ujson.dumps({
"destination": destination,
"failure": failure,
})))
rows.append((pos, FailureRow(
destination=destination,
failure=failure,
)))
# Fetch changed device messages
keys = self.device_messages.keys()
i = keys.bisect_right(token)
device_messages = set((k, self.device_messages[k]) for k in keys[i:])
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
device_messages = {self.device_messages[k]: k for k in keys[i:j]}
for (pos, destination) in device_messages:
rows.append((pos, DEVICE_MESSAGE_TYPE, ujson.dumps({
"destination": destination,
})))
for (destination, pos) in device_messages.iteritems():
rows.append((pos, DeviceRow(
destination=destination,
)))
# Sort rows based on pos
rows.sort()
return rows
return [(pos, row.TypeId, row.to_data()) for pos, row in rows]
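The (from_token, to_token] windowing above relies on bisect_right over the sorteddict's key view; a standalone sketch of the pattern (blist as imported in this file):

from blist import sorteddict

sd = sorteddict({1: "a", 3: "b", 7: "c"})
keys = sd.keys()
i = keys.bisect_right(1)      # first key strictly after from_token=1
j = keys.bisect_right(7) + 1  # include keys up to to_token=7
assert [(k, sd[k]) for k in keys[i:j]] == [(3, "b"), (7, "c")]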
class BaseFederationRow(object):
"""Base class for rows to be sent in the federation stream.
Specifies how to identify, serialize and deserialize the different types.
"""
TypeId = None # Unique string that ids the type. Must be overridden in sub classes.
@staticmethod
def from_data(data):
"""Parse the data from the federation stream into a row.
Args:
data: The value of ``data`` from FederationStreamRow.data, type
depends on the type of stream
"""
raise NotImplementedError()
def to_data(self):
"""Serialize this row to be sent over the federation stream.
Returns:
The value to be sent in FederationStreamRow.data. The type depends
on the type of stream.
"""
raise NotImplementedError()
def add_to_buffer(self, buff):
"""Add this row to the appropriate field in the buffer ready for this
to be sent over federation.
We use a buffer so that we can batch up events that have come in at
the same time and send them all at once.
Args:
buff (ParsedFederationStreamData)
"""
raise NotImplementedError()
class PresenceRow(BaseFederationRow, namedtuple("PresenceRow", (
"state", # UserPresenceState
))):
TypeId = "p"
@staticmethod
def from_data(data):
return PresenceRow(
state=UserPresenceState.from_dict(data)
)
def to_data(self):
return self.state.as_dict()
def add_to_buffer(self, buff):
buff.presence.append(self.state)
class KeyedEduRow(BaseFederationRow, namedtuple("KeyedEduRow", (
"key", # tuple(str) - the edu key passed to send_edu
"edu", # Edu
))):
"""Streams EDUs that have an associated key that is ued to clobber. For example,
typing EDUs clobber based on room_id.
"""
TypeId = "k"
@staticmethod
def from_data(data):
return KeyedEduRow(
key=tuple(data["key"]),
edu=Edu(**data["edu"]),
)
def to_data(self):
return {
"key": self.key,
"edu": self.edu.get_internal_dict(),
}
def add_to_buffer(self, buff):
buff.keyed_edus.setdefault(
self.edu.destination, {}
)[self.key] = self.edu
class EduRow(BaseFederationRow, namedtuple("EduRow", (
"edu", # Edu
))):
"""Streams EDUs that don't have keys. See KeyedEduRow
"""
TypeId = "e"
@staticmethod
def from_data(data):
return EduRow(Edu(**data))
def to_data(self):
return self.edu.get_internal_dict()
def add_to_buffer(self, buff):
buff.edus.setdefault(self.edu.destination, []).append(self.edu)
class FailureRow(BaseFederationRow, namedtuple("FailureRow", (
"destination", # str
"failure",
))):
"""Streams failures to a remote server. Failures are issued when there was
something wrong with a transaction the remote sent us, e.g. it included
an event that was invalid.
"""
TypeId = "f"
@staticmethod
def from_data(data):
return FailureRow(
destination=data["destination"],
failure=data["failure"],
)
def to_data(self):
return {
"destination": self.destination,
"failure": self.failure,
}
def add_to_buffer(self, buff):
buff.failures.setdefault(self.destination, []).append(self.failure)
class DeviceRow(BaseFederationRow, namedtuple("DeviceRow", (
"destination", # str
))):
"""Streams the fact that either a) there is pending to device messages for
users on the remote, or b) a local users device has changed and needs to
be sent to the remote.
"""
TypeId = "d"
@staticmethod
def from_data(data):
return DeviceRow(destination=data["destination"])
def to_data(self):
return {"destination": self.destination}
def add_to_buffer(self, buff):
buff.device_destinations.add(self.destination)
TypeToRow = {
Row.TypeId: Row
for Row in (
PresenceRow,
KeyedEduRow,
EduRow,
FailureRow,
DeviceRow,
)
}
ParsedFederationStreamData = namedtuple("ParsedFederationStreamData", (
"presence", # list(UserPresenceState)
"keyed_edus", # dict of destination -> { key -> Edu }
"edus", # dict of destination -> [Edu]
"failures", # dict of destination -> [failures]
"device_destinations", # set of destinations
))
def process_rows_for_federation(transaction_queue, rows):
"""Parse a list of rows from the federation stream and put them in the
transaction queue ready for sending to the relevant homeservers.
Args:
transaction_queue (TransactionQueue)
rows (list(synapse.replication.tcp.streams.FederationStreamRow))
"""
# The federation stream contains a bunch of different types of
# rows that need to be handled differently. We parse the rows, put
# them into the appropriate collection and then send them off.
buff = ParsedFederationStreamData(
presence=[],
keyed_edus={},
edus={},
failures={},
device_destinations=set(),
)
# Parse the rows in the stream and add to the buffer
for row in rows:
if row.type not in TypeToRow:
logger.error("Unrecognized federation row type %r", row.type)
continue
RowType = TypeToRow[row.type]
parsed_row = RowType.from_data(row.data)
parsed_row.add_to_buffer(buff)
if buff.presence:
transaction_queue.send_presence(buff.presence)
for destination, edu_map in buff.keyed_edus.iteritems():
for key, edu in edu_map.items():
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=key,
)
for destination, edu_list in buff.edus.iteritems():
for edu in edu_list:
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=None,
)
for destination, failure_list in buff.failures.iteritems():
for failure in failure_list:
transaction_queue.send_failure(destination, failure)
for destination in buff.device_destinations:
transaction_queue.send_device_messages(destination)
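# Illustrative sketch (not part of the diff): how a row round-trips through the
# TypeToRow dispatch above. The Edu field values are invented example data;
# from_data()/to_data() above imply Edu accepts these as keyword arguments.
def _example_round_trip():
    row = KeyedEduRow(
        key=("!room:example.com",),
        edu=Edu(
            origin="us.example.com",
            destination="them.example.com",
            edu_type="m.typing",
            content={"room_id": "!room:example.com", "typing": True},
        ),
    )
    # Serialize as the sender would for the replication stream...
    type_id, data = row.TypeId, row.to_data()
    # ...then parse and buffer as process_rows_for_federation does.
    parsed = TypeToRow[type_id].from_data(data)
    buff = ParsedFederationStreamData(
        presence=[], keyed_edus={}, edus={}, failures={},
        device_destinations=set(),
    )
    parsed.add_to_buffer(buff)
    # Keyed EDUs clobber by (destination, key), so exactly one entry survives.
    assert ("!room:example.com",) in buff.keyed_edus["them.example.com"]

_example_round_trip()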


@@ -21,11 +21,10 @@ from .units import Transaction, Edu
from synapse.api.errors import HttpResponseException
from synapse.util.async import run_on_reactor
from synapse.util.logcontext import preserve_context_over_fn
from synapse.util.logcontext import preserve_context_over_fn, preserve_fn
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
from synapse.util.metrics import measure_func
from synapse.types import get_domain_from_id
from synapse.handlers.presence import format_user_presence_state
from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
import synapse.metrics
import logging
@@ -41,6 +40,8 @@ sent_pdus_destination_dist = client_metrics.register_distribution(
)
sent_edus_counter = client_metrics.register_counter("sent_edus")
sent_transactions_counter = client_metrics.register_counter("sent_transactions")
class TransactionQueue(object):
"""This class makes sure we only have one transaction in flight at
@@ -77,8 +78,18 @@ class TransactionQueue(object):
# destination -> list of tuple(edu, deferred)
self.pending_edus_by_dest = edus = {}
# Presence needs to be separate as we send single aggregate EDUs
# Map of user_id -> UserPresenceState for all the pending presence
# to be sent out by user_id. Entries here get processed and put in
# pending_presence_by_dest
self.pending_presence = {}
# Map of destination -> user_id -> UserPresenceState of pending presence
# to be sent to each destination
self.pending_presence_by_dest = presence = {}
# Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
# based on their key (e.g. typing events by room_id)
# Map of destination -> (edu_type, key) -> Edu
self.pending_edus_keyed_by_dest = edus_keyed = {}
metrics.register_callback(
@@ -113,6 +124,8 @@ class TransactionQueue(object):
self._is_processing = False
self._last_poked_id = -1
self._processing_pending_presence = False
def can_send_to(self, destination):
"""Can we send messages to the given server?
@@ -169,15 +182,13 @@ class TransactionQueue(object):
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
users_in_room = yield self.state.get_current_user_in_room(
destinations = yield self.state.get_current_hosts_in_room(
event.room_id, latest_event_ids=[
prev_id for prev_id, _ in event.prev_events
],
)
destinations = set(destinations)
destinations = set(
get_domain_from_id(user_id) for user_id in users_in_room
)
if send_on_behalf_of is not None:
# If we are sending the event on behalf of another server
# then it already has the event and there is no reason to
@@ -224,17 +235,71 @@ class TransactionQueue(object):
self._attempt_new_transaction, destination
)
def send_presence(self, destination, states):
if not self.can_send_to(destination):
return
@preserve_fn # the caller should not yield on this
@defer.inlineCallbacks
def send_presence(self, states):
"""Send the new presence states to the appropriate destinations.
self.pending_presence_by_dest.setdefault(destination, {}).update({
This actually queues up the presence states ready for sending and
triggers a background task to process them and send out the transactions.
Args:
states (list(UserPresenceState))
"""
# First we queue up the new presence by user ID, so multiple presence
# updates in quick succession are correctly handled
# We only want to send presence for our own users, so let's always just
# filter here just in case.
self.pending_presence.update({
state.user_id: state for state in states
if self.is_mine_id(state.user_id)
})
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
# We then handle the new pending presence in batches, first figuring
# out the destinations we need to send each state to and then poking it
# to attempt a new transaction. We linearize this so that we don't
# accidentally mess up the ordering and send multiple presence updates
# in the wrong order
if self._processing_pending_presence:
return
self._processing_pending_presence = True
try:
while True:
states_map = self.pending_presence
self.pending_presence = {}
if not states_map:
break
yield self._process_presence_inner(states_map.values())
finally:
self._processing_pending_presence = False
@measure_func("txnqueue._process_presence")
@defer.inlineCallbacks
def _process_presence_inner(self, states):
"""Given a list of states populate self.pending_presence_by_dest and
poke to send a new transaction to each destination
Args:
states (list(UserPresenceState))
"""
hosts_and_states = yield get_interested_remotes(self.store, states, self.state)
for destinations, states in hosts_and_states:
for destination in destinations:
if not self.can_send_to(destination):
continue
self.pending_presence_by_dest.setdefault(
destination, {}
).update({
state.user_id: state for state in states
})
preserve_fn(self._attempt_new_transaction)(destination)
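# Illustrative sketch (not part of the diff): the batching shape used by
# send_presence above, reduced to synchronous Python. Swapping out the pending
# dict while a single guarded loop drains it keeps one processor running at a
# time, and per-user clobbering means only the newest state per user is sent.
class _PresenceBatcherSketch(object):
    def __init__(self, process_batch):
        self.pending = {}            # user_id -> latest state (invented shape)
        self.processing = False
        self.process_batch = process_batch

    def submit(self, states):
        # Newer states clobber older ones for the same user.
        self.pending.update((user_id, state) for user_id, state in states)
        if self.processing:
            return                   # the running loop will pick these up
        self.processing = True
        try:
            while self.pending:
                batch, self.pending = self.pending, {}
                self.process_batch(list(batch.values()))
        finally:
            self.processing = False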
def send_edu(self, destination, edu_type, content, key=None):
edu = Edu(
@@ -374,6 +439,7 @@ class TransactionQueue(object):
destination, pending_pdus, pending_edus, pending_failures,
)
if success:
sent_transactions_counter.inc()
# Remove the acknowledged device messages from the database
# Only bother if we actually sent some device messages
if device_message_edus:


@@ -193,6 +193,26 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def make_membership_event(self, destination, room_id, user_id, membership):
"""Asks a remote server to build and sign us a membership event
Note that this does not append any events to any graphs.
Args:
destination (str): address of remote homeserver
room_id (str): room to join/leave
user_id (str): user to be joined/left
membership (str): one of join/leave
Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body (ie, the new event).
Fails with ``HttpResponseException`` if we get an HTTP response
code >= 300.
Fails with ``NotRetryingDestination`` if we are not yet ready
to retry this server.
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
if membership not in valid_memberships:
raise RuntimeError(
@@ -201,11 +221,23 @@ class TransportLayerClient(object):
)
path = PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id)
ignore_backoff = False
retry_on_dns_fail = False
if membership == Membership.LEAVE:
# we particularly want to do our best to send leave events. The
# problem is that if it fails, we won't retry it later, so if the
# remote server was just having a momentary blip, the room will be
# out of sync.
ignore_backoff = True
retry_on_dns_fail = True
content = yield self.client.get_json(
destination=destination,
path=path,
retry_on_dns_fail=False,
retry_on_dns_fail=retry_on_dns_fail,
timeout=20000,
ignore_backoff=ignore_backoff,
)
defer.returnValue(content)
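# Illustrative sketch (not part of the diff): the retry policy chosen above,
# written out as a pure function. Leaves are effectively best-effort-once, so
# they bypass backoff and the DNS-failure short-circuit; joins keep defaults.
def _membership_retry_policy(membership):
    if membership == "leave":        # i.e. Membership.LEAVE
        return {"ignore_backoff": True, "retry_on_dns_fail": True}
    return {"ignore_backoff": False, "retry_on_dns_fail": False}

assert _membership_retry_policy("leave") == {
    "ignore_backoff": True, "retry_on_dns_fail": True,
}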
@@ -232,6 +264,12 @@ class TransportLayerClient(object):
destination=destination,
path=path,
data=content,
# we want to do our best to send this through. The problem is
# that if it fails, we won't retry it later, so if the remote
# server was just having a momentary blip, the room will be out of
# sync.
ignore_backoff=True,
)
defer.returnValue(response)


@@ -24,6 +24,7 @@ from synapse.http.servlet import (
)
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.versionstring import get_version_string
from synapse.util.logcontext import preserve_fn
from synapse.types import ThirdPartyInstanceID
import functools
@@ -79,6 +80,7 @@ class Authenticator(object):
def __init__(self, hs):
self.keyring = hs.get_keyring()
self.server_name = hs.hostname
self.store = hs.get_datastore()
# A method just so we can pass 'self' as the authenticator to the Servlets
@defer.inlineCallbacks
@@ -138,18 +140,23 @@ class Authenticator(object):
logger.info("Request from %s", origin)
request.authenticated_entity = origin
# If we get a valid signed request from the other side, it's probably
# alive
retry_timings = yield self.store.get_destination_retry_timings(origin)
if retry_timings and retry_timings["retry_last_ts"]:
logger.info("Marking origin %r as up", origin)
preserve_fn(self.store.set_destination_retry_timings)(origin, 0, 0)
defer.returnValue(origin)
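# Illustrative note (not part of the diff): the retry-timings reset above is
# deliberately fired via preserve_fn(...) rather than yielded on. Marking the
# origin "up" is best-effort bookkeeping, so the request returns immediately
# and is not delayed or failed by the background database write.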
class BaseFederationServlet(object):
REQUIRE_AUTH = True
def __init__(self, handler, authenticator, ratelimiter, server_name,
room_list_handler):
def __init__(self, handler, authenticator, ratelimiter, server_name):
self.handler = handler
self.authenticator = authenticator
self.ratelimiter = ratelimiter
self.room_list_handler = room_list_handler
def _wrap(self, func):
authenticator = self.authenticator
@@ -581,7 +588,7 @@ class PublicRoomList(BaseFederationServlet):
else:
network_tuple = ThirdPartyInstanceID(None, None)
data = yield self.room_list_handler.get_local_public_room_list(
data = yield self.handler.get_local_public_room_list(
limit, since_token,
network_tuple=network_tuple
)
@@ -602,7 +609,7 @@ class FederationVersionServlet(BaseFederationServlet):
}))
SERVLET_CLASSES = (
FEDERATION_SERVLET_CLASSES = (
FederationSendServlet,
FederationPullServlet,
FederationEventServlet,
@@ -625,17 +632,27 @@ SERVLET_CLASSES = (
FederationThirdPartyInviteExchangeServlet,
On3pidBindServlet,
OpenIdUserInfo,
PublicRoomList,
FederationVersionServlet,
)
ROOM_LIST_CLASSES = (
PublicRoomList,
)
def register_servlets(hs, resource, authenticator, ratelimiter):
for servletclass in SERVLET_CLASSES:
for servletclass in FEDERATION_SERVLET_CLASSES:
servletclass(
handler=hs.get_replication_layer(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
room_list_handler=hs.get_room_list_handler(),
).register(resource)
for servletclass in ROOM_LIST_CLASSES:
servletclass(
handler=hs.get_room_list_handler(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
).register(resource)


@@ -53,7 +53,20 @@ class BaseHandler(object):
self.event_builder_factory = hs.get_event_builder_factory()
def ratelimit(self, requester):
@defer.inlineCallbacks
def ratelimit(self, requester, update=True):
"""Ratelimits requests.
Args:
requester (Requester)
update (bool): Whether to record that a request is being processed.
Set to False when doing multiple checks for one request (e.g.
to check up front if we would reject the request), and set to
True for the last call for a given request.
Raises:
LimitExceededError if the request should be ratelimited
"""
time_now = self.clock.time()
user_id = requester.user.to_string()
@@ -67,10 +80,25 @@ class BaseHandler(object):
if requester.app_service and not requester.app_service.is_rate_limited():
return
# Check if there is a per user override in the DB.
override = yield self.store.get_ratelimit_for_user(user_id)
if override:
# If overridden with a null Hz then ratelimiting has been entirely
# disabled for the user
if not override.messages_per_second:
return
messages_per_second = override.messages_per_second
burst_count = override.burst_count
else:
messages_per_second = self.hs.config.rc_messages_per_second
burst_count = self.hs.config.rc_message_burst_count
allowed, time_allowed = self.ratelimiter.send_message(
user_id, time_now,
msg_rate_hz=self.hs.config.rc_messages_per_second,
burst_count=self.hs.config.rc_message_burst_count,
msg_rate_hz=messages_per_second,
burst_count=burst_count,
update=update,
)
if not allowed:
raise LimitExceededError(
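
# Illustrative sketch (not part of the diff): the update=False/True contract,
# shown with an invented toy limiter (the real Ratelimiter is rate+burst based).
class _ToyLimiterSketch(object):
    def __init__(self, burst_count):
        self.burst_count = burst_count
        self.counts = {}

    def send_message(self, user_id, update=True):
        sent = self.counts.get(user_id, 0)
        allowed = sent < self.burst_count
        if update and allowed:
            self.counts[user_id] = sent + 1
        return allowed

limiter = _ToyLimiterSketch(burst_count=2)
assert limiter.send_message("@u:hs", update=False)  # probe only, not recorded
assert limiter.send_message("@u:hs")                # recorded (1 of 2)
assert limiter.send_message("@u:hs")                # recorded (2 of 2)
assert not limiter.send_message("@u:hs")            # burst exhausted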


@@ -21,6 +21,7 @@ from synapse.api.constants import LoginType
from synapse.types import UserID
from synapse.api.errors import AuthError, LoginError, Codes, StoreError, SynapseError
from synapse.util.async import run_on_reactor
from synapse.util.caches.expiringcache import ExpiringCache
from twisted.web.client import PartialDownloadError
@@ -52,7 +53,15 @@ class AuthHandler(BaseHandler):
LoginType.DUMMY: self._check_dummy_auth,
}
self.bcrypt_rounds = hs.config.bcrypt_rounds
self.sessions = {}
# This is not a cache per se, but a store of all current sessions that
# expire after N hours
self.sessions = ExpiringCache(
cache_name="register_sessions",
clock=hs.get_clock(),
expiry_ms=self.SESSION_EXPIRE_MS,
reset_expiry_on_get=True,
)
account_handler = _AccountHandler(
hs, check_user_exists=self.check_user_exists
@@ -617,16 +626,6 @@ class AuthHandler(BaseHandler):
logger.debug("Saving session %s", session)
session["last_used"] = self.hs.get_clock().time_msec()
self.sessions[session["id"]] = session
self._prune_sessions()
def _prune_sessions(self):
for sid, sess in self.sessions.items():
last_used = 0
if 'last_used' in sess:
last_used = sess['last_used']
now = self.hs.get_clock().time_msec()
if last_used < now - AuthHandler.SESSION_EXPIRE_MS:
del self.sessions[sid]
def hash(self, password):
"""Computes a secure hash of password.


@@ -17,6 +17,7 @@ from synapse.api.constants import EventTypes
from synapse.util import stringutils
from synapse.util.async import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.metrics import measure_func
from synapse.types import get_domain_from_id, RoomStreamToken
from twisted.internet import defer
@@ -105,7 +106,7 @@ class DeviceHandler(BaseHandler):
device_map = yield self.store.get_devices_by_user(user_id)
ips = yield self.store.get_last_client_ip_by_device(
devices=((user_id, device_id) for device_id in device_map.keys())
user_id, device_id=None
)
devices = device_map.values()
@@ -132,7 +133,7 @@ class DeviceHandler(BaseHandler):
except errors.StoreError:
raise errors.NotFoundError
ips = yield self.store.get_last_client_ip_by_device(
devices=((user_id, device_id),)
user_id, device_id,
)
_update_device_from_client_ips(device, ips)
defer.returnValue(device)
@@ -269,6 +270,8 @@ class DeviceHandler(BaseHandler):
user_id (str)
from_token (StreamToken)
"""
now_token = yield self.hs.get_event_sources().get_current_token()
room_ids = yield self.store.get_rooms_for_user(user_id)
# First we check if any devices have changed
@@ -279,11 +282,30 @@ class DeviceHandler(BaseHandler):
# Then work out if any users have since joined
rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key)
member_events = yield self.store.get_membership_changes_for_user(
user_id, from_token.room_key, now_token.room_key
)
rooms_changed.update(event.room_id for event in member_events)
stream_ordering = RoomStreamToken.parse_stream_token(
from_token.room_key).stream
from_token.room_key
).stream
possibly_changed = set(changed)
possibly_left = set()
for room_id in rooms_changed:
current_state_ids = yield self.store.get_current_state_ids(room_id)
# The user may have left the room
# TODO: Check if they actually did or if we were just invited.
if room_id not in room_ids:
for key, event_id in current_state_ids.iteritems():
etype, state_key = key
if etype != EventTypes.Member:
continue
possibly_left.add(state_key)
continue
# Fetch the current state at the time.
try:
event_ids = yield self.store.get_forward_extremeties_for_room(
@@ -294,8 +316,6 @@ class DeviceHandler(BaseHandler):
# ordering: treat it the same as a new room
event_ids = []
current_state_ids = yield self.store.get_current_state_ids(room_id)
# special-case for an empty prev state: include all members
# in the changed list
if not event_ids:
@@ -306,9 +326,25 @@ class DeviceHandler(BaseHandler):
possibly_changed.add(state_key)
continue
current_member_id = current_state_ids.get((EventTypes.Member, user_id))
if not current_member_id:
continue
# mapping from event_id -> state_dict
prev_state_ids = yield self.store.get_state_ids_for_events(event_ids)
# Check if we've joined the room. If so, we just blindly add all the users to
# the "possibly changed" users.
for state_dict in prev_state_ids.itervalues():
member_event = state_dict.get((EventTypes.Member, user_id), None)
if not member_event or member_event != current_member_id:
for key, event_id in current_state_ids.iteritems():
etype, state_key = key
if etype != EventTypes.Member:
continue
possibly_changed.add(state_key)
break
# If there has been any change in membership, include them in the
# possibly changed list. We'll check if they are joined below,
# and we're not too worried about spuriously adding users.
@@ -319,19 +355,30 @@ class DeviceHandler(BaseHandler):
# check if this member has changed since any of the extremities
# at the stream_ordering, and add them to the list if so.
for state_dict in prev_state_ids.values():
for state_dict in prev_state_ids.itervalues():
prev_event_id = state_dict.get(key, None)
if not prev_event_id or prev_event_id != event_id:
possibly_changed.add(state_key)
if state_key != user_id:
possibly_changed.add(state_key)
break
users_who_share_room = yield self.store.get_users_who_share_room_with_user(
user_id
)
if possibly_changed or possibly_left:
users_who_share_room = yield self.store.get_users_who_share_room_with_user(
user_id
)
# Take the intersection of the users whose devices may have changed
# and those that actually still share a room with the user
defer.returnValue(users_who_share_room & possibly_changed)
# Take the intersection of the users whose devices may have changed
# and those that actually still share a room with the user
possibly_joined = possibly_changed & users_who_share_room
possibly_left = (possibly_changed | possibly_left) - users_who_share_room
else:
possibly_joined = []
possibly_left = []
defer.returnValue({
"changed": list(possibly_joined),
"left": list(possibly_left),
})
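# Illustrative worked example (invented user IDs) of the set algebra above:
_possibly_changed = {"@a:one", "@b:two", "@c:three"}
_possibly_left = {"@d:four"}
_share_room = {"@a:one", "@c:three"}
# "changed": users whose devices may have changed and who still share a room
assert _possibly_changed & _share_room == {"@a:one", "@c:three"}
# "left": users we may no longer share any room with
assert (_possibly_changed | _possibly_left) - _share_room == {"@b:two", "@d:four"}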
@defer.inlineCallbacks
def on_federation_query_user_devices(self, user_id):
@@ -425,12 +472,38 @@ class DeviceListEduUpdater(object):
# This can happen since we batch updates
return
# Given a list of updates we check if we need to resync. This
# happens if we've missed updates.
resync = yield self._need_to_do_resync(user_id, pending_updates)
if resync:
# Fetch all devices for the user.
origin = get_domain_from_id(user_id)
result = yield self.federation.query_user_devices(origin, user_id)
try:
result = yield self.federation.query_user_devices(origin, user_id)
except NotRetryingDestination:
# TODO: Remember that we are now out of sync and try again
# later
logger.warn(
"Failed to handle device list update for %s,"
" we're not retrying the remote",
user_id,
)
# We abort on exceptions rather than accepting the update
# as otherwise synapse will 'forget' that its device list
# is out of date. If we bail then we will retry the resync
# next time we get a device list update for this user_id.
# This makes it more likely that the device lists will
# eventually become consistent.
return
except Exception:
# TODO: Remember that we are now out of sync and try again
# later
logger.exception(
"Failed to handle device list update for %s", user_id
)
return
stream_id = result["stream_id"]
devices = result["devices"]
yield self.store.update_remote_device_list_cache(


@@ -21,7 +21,7 @@ from twisted.internet import defer
from synapse.api.errors import SynapseError, CodeMessageException
from synapse.types import get_domain_from_id
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
from synapse.util.logcontext import preserve_fn, make_deferred_yieldable
from synapse.util.retryutils import NotRetryingDestination
logger = logging.getLogger(__name__)
@@ -145,7 +145,7 @@ class E2eKeysHandler(object):
"status": 503, "message": e.message
}
yield preserve_context_over_deferred(defer.gatherResults([
yield make_deferred_yieldable(defer.gatherResults([
preserve_fn(do_remote_query)(destination)
for destination in remote_queries_not_in_cache
]))
@@ -257,11 +257,21 @@ class E2eKeysHandler(object):
"status": 503, "message": e.message
}
yield preserve_context_over_deferred(defer.gatherResults([
yield make_deferred_yieldable(defer.gatherResults([
preserve_fn(claim_client_keys)(destination)
for destination in remote_queries
]))
logger.info(
"Claimed one-time-keys: %s",
",".join((
"%s for %s:%s" % (key_id, user_id, device_id)
for user_id, user_keys in json_result.iteritems()
for device_id, device_keys in user_keys.iteritems()
for key_id, _ in device_keys.iteritems()
)),
)
defer.returnValue({
"one_time_keys": json_result,
"failures": failures
@@ -288,19 +298,8 @@ class E2eKeysHandler(object):
one_time_keys = keys.get("one_time_keys", None)
if one_time_keys:
logger.info(
"Adding %d one_time_keys for device %r for user %r at %d",
len(one_time_keys), device_id, user_id, time_now
)
key_list = []
for key_id, key_json in one_time_keys.items():
algorithm, key_id = key_id.split(":")
key_list.append((
algorithm, key_id, encode_canonical_json(key_json)
))
yield self.store.add_e2e_one_time_keys(
user_id, device_id, time_now, key_list
yield self._upload_one_time_keys_for_user(
user_id, device_id, time_now, one_time_keys,
)
# the device should have been registered already, but it may have been
@@ -313,3 +312,58 @@ class E2eKeysHandler(object):
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
defer.returnValue({"one_time_key_counts": result})
@defer.inlineCallbacks
def _upload_one_time_keys_for_user(self, user_id, device_id, time_now,
one_time_keys):
logger.info(
"Adding one_time_keys %r for device %r for user %r at %d",
one_time_keys.keys(), device_id, user_id, time_now,
)
# make a list of (alg, id, key) tuples
key_list = []
for key_id, key_obj in one_time_keys.items():
algorithm, key_id = key_id.split(":")
key_list.append((
algorithm, key_id, key_obj
))
# First we check if we have already persisted any of the keys.
existing_key_map = yield self.store.get_e2e_one_time_keys(
user_id, device_id, [k_id for _, k_id, _ in key_list]
)
new_keys = [] # Keys that we need to insert. (alg, id, json) tuples.
for algorithm, key_id, key in key_list:
ex_json = existing_key_map.get((algorithm, key_id), None)
if ex_json:
if not _one_time_keys_match(ex_json, key):
raise SynapseError(
400,
("One time key %s:%s already exists. "
"Old key: %s; new key: %r") %
(algorithm, key_id, ex_json, key)
)
else:
new_keys.append((algorithm, key_id, encode_canonical_json(key)))
yield self.store.add_e2e_one_time_keys(
user_id, device_id, time_now, new_keys
)
def _one_time_keys_match(old_key_json, new_key):
old_key = json.loads(old_key_json)
# if either is a string rather than an object, they must match exactly
if not isinstance(old_key, dict) or not isinstance(new_key, dict):
return old_key == new_key
# otherwise, we strip off the 'signatures' if any, because it's legitimate
# for different upload attempts to have different signatures.
old_key.pop("signatures", None)
new_key_copy = dict(new_key)
new_key_copy.pop("signatures", None)
return old_key == new_key_copy
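# Illustrative check (invented key data): a re-upload that differs only in its
# "signatures" block matches, while different key material does not. (json is
# already imported by this module, as json.loads above shows.)
_existing = json.dumps({"key": "abc", "signatures": {"hs": {"ed25519:0": "sigA"}}})
assert _one_time_keys_match(
    _existing, {"key": "abc", "signatures": {"hs": {"ed25519:0": "sigB"}}}
)
assert not _one_time_keys_match(_existing, {"key": "xyz"})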


@@ -28,7 +28,7 @@ from synapse.api.constants import EventTypes, Membership, RejectedReason
from synapse.events.validator import EventValidator
from synapse.util import unwrapFirstError
from synapse.util.logcontext import (
PreserveLoggingContext, preserve_fn, preserve_context_over_deferred
preserve_fn, preserve_context_over_deferred
)
from synapse.util.metrics import measure_func
from synapse.util.logutils import log_function
@@ -43,7 +43,6 @@ from synapse.events.utils import prune_event
from synapse.util.retryutils import NotRetryingDestination
from synapse.push.action_generator import ActionGenerator
from synapse.util.distributor import user_joined_room
from twisted.internet import defer
@@ -75,6 +74,9 @@ class FederationHandler(BaseHandler):
self.state_handler = hs.get_state_handler()
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.action_generator = hs.get_action_generator()
self.is_mine_id = hs.is_mine_id
self.pusher_pool = hs.get_pusherpool()
self.replication_layer.set_handler(self)
@@ -172,8 +174,22 @@ class FederationHandler(BaseHandler):
origin, pdu, prevs, min_depth
)
prevs = {e_id for e_id, _ in pdu.prev_events}
seen = set(have_seen.keys())
# Update the set of things we've seen after trying to
# fetch the missing stuff
have_seen = yield self.store.have_events(prevs)
seen = set(have_seen.iterkeys())
if not prevs - seen:
logger.info(
"Found all missing prev events for %s", pdu.event_id
)
elif prevs - seen:
logger.info(
"Not fetching %d missing events for room %r,event %s: %r...",
len(prevs - seen), pdu.room_id, pdu.event_id,
list(prevs - seen)[:5],
)
if prevs - seen:
logger.info(
"Still missing %d events for room %r: %r...",
@@ -208,19 +224,15 @@ class FederationHandler(BaseHandler):
Args:
origin (str): Origin of the pdu. Will be called to get the missing events
pdu: received pdu
prevs (str[]): List of event ids which we are missing
prevs (set(str)): List of event ids which we are missing
min_depth (int): Minimum depth of events to return.
Returns:
Deferred<dict(str, str?)>: updated have_seen dictionary
"""
# We recalculate seen, since it may have changed.
have_seen = yield self.store.have_events(prevs)
seen = set(have_seen.keys())
if not prevs - seen:
# nothing left to do
defer.returnValue(have_seen)
return
latest = yield self.store.get_latest_event_ids_in_room(
pdu.room_id
@@ -232,8 +244,8 @@ class FederationHandler(BaseHandler):
latest |= seen
logger.info(
"Missing %d events for room %r: %r...",
len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
"Missing %d events for room %r pdu %s: %r...",
len(prevs - seen), pdu.room_id, pdu.event_id, list(prevs - seen)[:5]
)
# XXX: we set timeout to 10s to help workaround
@@ -265,22 +277,23 @@ class FederationHandler(BaseHandler):
timeout=10000,
)
logger.info(
"Got %d events: %r...",
len(missing_events), [e.event_id for e in missing_events[:5]]
)
# We want to sort these by depth so we process them and
# tell clients about them in order.
missing_events.sort(key=lambda x: x.depth)
for e in missing_events:
logger.info("Handling found event %s", e.event_id)
yield self.on_receive_pdu(
origin,
e,
get_missing=False
)
have_seen = yield self.store.have_events(
[ev for ev, _ in pdu.prev_events]
)
defer.returnValue(have_seen)
@log_function
@defer.inlineCallbacks
def _process_received_pdu(self, origin, pdu, state, auth_chain):
@@ -369,13 +382,6 @@ class FederationHandler(BaseHandler):
affected=event.event_id,
)
# if we're receiving valid events from an origin,
# it's probably a good idea to mark it as not in retry-state
# for sending (although this is a bit of a leap)
retry_timings = yield self.store.get_destination_retry_timings(origin)
if retry_timings and retry_timings["retry_last_ts"]:
self.store.set_destination_retry_timings(origin, 0, 0)
room = yield self.store.get_room(event.room_id)
if not room:
@@ -394,11 +400,10 @@ class FederationHandler(BaseHandler):
target_user = UserID.from_string(target_user_id)
extra_users.append(target_user)
with PreserveLoggingContext():
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
if event.type == EventTypes.Member:
if event.membership == Membership.JOIN:
@@ -829,7 +834,11 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
def on_event_auth(self, event_id):
auth = yield self.store.get_auth_chain([event_id])
event = yield self.store.get_event(event_id)
auth = yield self.store.get_auth_chain(
[auth_id for auth_id, _ in event.auth_events],
include_given=True
)
for event in auth:
event.signatures.update(
@@ -916,11 +925,10 @@ class FederationHandler(BaseHandler):
origin, auth_chain, state, event
)
with PreserveLoggingContext():
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=[joinee]
)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=[joinee]
)
logger.debug("Finished joining %s to %s", joinee, room_id)
finally:
@@ -1004,9 +1012,19 @@ class FederationHandler(BaseHandler):
)
event.internal_metadata.outlier = False
# Send this event on behalf of the origin server since they may not
# have an up-to-date view of the state of the room at this event so
# will not know which servers to send the event to.
# Send this event on behalf of the origin server.
#
# The reasons we have the destination server rather than the origin
# server send it are slightly mysterious: the origin server should have
# all the necessary state once it gets the response to the send_join,
# so it could send the event itself if it wanted to. It may be that
# doing it this way reduces failure modes, or avoids certain attacks
# where a new server selectively tells a subset of the federation that
# it has joined.
#
# The fact is that, as of the current writing, Synapse doesn't send out
# the join event over federation after joining, and changing it now
# would introduce the danger of backwards-compatibility problems.
event.internal_metadata.send_on_behalf_of = origin
context, event_stream_id, max_stream_id = yield self._handle_new_event(
@@ -1025,10 +1043,9 @@ class FederationHandler(BaseHandler):
target_user = UserID.from_string(target_user_id)
extra_users.append(target_user)
with PreserveLoggingContext():
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id, extra_users=extra_users
)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id, extra_users=extra_users
)
if event.type == EventTypes.Member:
if event.content["membership"] == Membership.JOIN:
@@ -1036,9 +1053,7 @@ class FederationHandler(BaseHandler):
yield user_joined_room(self.distributor, user, event.room_id)
state_ids = context.prev_state_ids.values()
auth_chain = yield self.store.get_auth_chain(set(
[event.event_id] + state_ids
))
auth_chain = yield self.store.get_auth_chain(state_ids)
state = yield self.store.get_events(context.prev_state_ids.values())
@@ -1055,6 +1070,27 @@ class FederationHandler(BaseHandler):
"""
event = pdu
is_blocked = yield self.store.is_room_blocked(event.room_id)
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
if self.hs.config.block_non_admin_invites:
raise SynapseError(403, "This server does not accept room invites")
membership = event.content.get("membership")
if event.type != EventTypes.Member or membership != Membership.INVITE:
raise SynapseError(400, "The event was not an m.room.member invite event")
sender_domain = get_domain_from_id(event.sender)
if sender_domain != origin:
raise SynapseError(400, "The invite event was not from the server sending it")
if event.state_key is None:
raise SynapseError(400, "The invite event did not have a state key")
if not self.is_mine_id(event.state_key):
raise SynapseError(400, "The invite event must be for this server")
event.internal_metadata.outlier = True
event.internal_metadata.invite_from_remote = True
@@ -1074,48 +1110,38 @@ class FederationHandler(BaseHandler):
)
target_user = UserID.from_string(event.state_key)
with PreserveLoggingContext():
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=[target_user],
)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=[target_user],
)
defer.returnValue(event)
@defer.inlineCallbacks
def do_remotely_reject_invite(self, target_hosts, room_id, user_id):
try:
origin, event = yield self._make_and_verify_event(
target_hosts,
room_id,
user_id,
"leave"
)
signed_event = self._sign_event(event)
except SynapseError:
raise
except CodeMessageException as e:
logger.warn("Failed to reject invite: %s", e)
raise SynapseError(500, "Failed to reject invite")
origin, event = yield self._make_and_verify_event(
target_hosts,
room_id,
user_id,
"leave"
)
# Mark as outlier as we don't have any state for this event; we're not
# even in the room.
event.internal_metadata.outlier = True
event = self._sign_event(event)
# Try the host we successfully got a response to /make_join/
# request first.
# Try the host that we successfully called /make_leave/ on first for
# the /send_leave/ request.
try:
target_hosts.remove(origin)
target_hosts.insert(0, origin)
except ValueError:
pass
try:
yield self.replication_layer.send_leave(
target_hosts,
signed_event
)
except SynapseError:
raise
except CodeMessageException as e:
logger.warn("Failed to reject invite: %s", e)
raise SynapseError(500, "Failed to reject invite")
yield self.replication_layer.send_leave(
target_hosts,
event
)
context = yield self.state_handler.compute_event_context(event)
@@ -1236,10 +1262,9 @@ class FederationHandler(BaseHandler):
target_user = UserID.from_string(target_user_id)
extra_users.append(target_user)
with PreserveLoggingContext():
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id, extra_users=extra_users
)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id, extra_users=extra_users
)
defer.returnValue(None)
@@ -1274,7 +1299,7 @@ class FederationHandler(BaseHandler):
for event in res:
# We sign these again because there was a bug where we
# incorrectly signed things the first time round
if self.hs.is_mine_id(event.event_id):
if self.is_mine_id(event.event_id):
event.signatures.update(
compute_event_signature(
event,
@@ -1347,7 +1372,7 @@ class FederationHandler(BaseHandler):
)
if event:
if self.hs.is_mine_id(event.event_id):
if self.is_mine_id(event.event_id):
# FIXME: This is a temporary work around where we occasionally
# return events slightly differently than when they were
# originally signed
@@ -1391,9 +1416,8 @@ class FederationHandler(BaseHandler):
auth_events=auth_events,
)
if not event.internal_metadata.is_outlier():
action_generator = ActionGenerator(self.hs)
yield action_generator.handle_push_actions_for_event(
if not event.internal_metadata.is_outlier() and not backfilled:
yield self.action_generator.handle_push_actions_for_event(
event, context
)
@@ -1406,7 +1430,7 @@ class FederationHandler(BaseHandler):
if not backfilled:
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
preserve_fn(self.hs.get_pusherpool().on_new_notifications)(
preserve_fn(self.pusher_pool.on_new_notifications)(
event_stream_id, max_stream_id
)
@@ -1585,7 +1609,7 @@ class FederationHandler(BaseHandler):
context.rejected = RejectedReason.AUTH_ERROR
if event.type == EventTypes.GuestAccess:
if event.type == EventTypes.GuestAccess and not context.rejected:
yield self.maybe_kick_guest_users(event)
defer.returnValue(context)
@@ -1602,7 +1626,11 @@ class FederationHandler(BaseHandler):
pass
# Now get the current auth_chain for the event.
local_auth_chain = yield self.store.get_auth_chain([event_id])
event = yield self.store.get_event(event_id)
local_auth_chain = yield self.store.get_auth_chain(
[auth_id for auth_id, _ in event.auth_events],
include_given=True
)
# TODO: Check if we would now reject event_id. If so we need to tell
# everyone.
@@ -1795,7 +1823,9 @@ class FederationHandler(BaseHandler):
auth_ids = yield self.auth.compute_auth_events(
event, context.prev_state_ids
)
local_auth_chain = yield self.store.get_auth_chain(auth_ids)
local_auth_chain = yield self.store.get_auth_chain(
auth_ids, include_given=True
)
try:
# 2. Get remote difference.
@@ -2063,6 +2093,14 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
@log_function
def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
"""Handle an exchange_third_party_invite request from a remote server
The remote server will call this when it wants to turn a 3pid invite
into a normal m.room.member invite.
Returns:
Deferred: resolves (to None)
"""
builder = self.event_builder_factory.new(event_dict)
message_handler = self.hs.get_handlers().message_handler
@@ -2081,9 +2119,12 @@ class FederationHandler(BaseHandler):
raise e
yield self._check_signature(event, context)
# XXX we send the invite here, but send_membership_event also sends it,
# so we end up making two requests. I think this is redundant.
returned_invite = yield self.send_invite(origin, event)
# TODO: Make sure the signatures actually are correct.
event.signatures.update(returned_invite.signatures)
member_handler = self.hs.get_handlers().room_member_handler
yield member_handler.send_membership_event(None, event, context)


@@ -18,7 +18,7 @@
from twisted.internet import defer
from synapse.api.errors import (
CodeMessageException
MatrixCodeMessageException, CodeMessageException
)
from ._base import BaseHandler
from synapse.util.async import run_on_reactor
@@ -90,6 +90,9 @@ class IdentityHandler(BaseHandler):
),
{'sid': creds['sid'], 'client_secret': client_secret}
)
except MatrixCodeMessageException as e:
logger.info("getValidated3pid failed with Matrix error: %r", e)
raise SynapseError(e.code, e.msg, e.errcode)
except CodeMessageException as e:
data = json.loads(e.msg)
@@ -159,6 +162,9 @@ class IdentityHandler(BaseHandler):
params
)
defer.returnValue(data)
except MatrixCodeMessageException as e:
logger.info("Proxied requestToken failed with Matrix error: %r", e)
raise SynapseError(e.code, e.msg, e.errcode)
except CodeMessageException as e:
logger.info("Proxied requestToken failed: %r", e)
raise e
@@ -193,6 +199,9 @@ class IdentityHandler(BaseHandler):
params
)
defer.returnValue(data)
except MatrixCodeMessageException as e:
logger.info("Proxied requestToken failed with Matrix error: %r", e)
raise SynapseError(e.code, e.msg, e.errcode)
except CodeMessageException as e:
logger.info("Proxied requestToken failed: %r", e)
raise e


@@ -12,15 +12,14 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.events import spamcheck
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError, LimitExceededError
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.push.action_generator import ActionGenerator
from synapse.types import (
UserID, RoomAlias, RoomStreamToken,
)
@@ -35,6 +34,7 @@ from canonicaljson import encode_canonical_json
import logging
import random
import ujson
logger = logging.getLogger(__name__)
@@ -50,10 +50,14 @@ class MessageHandler(BaseHandler):
self.pagination_lock = ReadWriteLock()
self.pusher_pool = hs.get_pusherpool()
# We arbitrarily limit concurrent event creation for a room to 5.
# This is to stop us from diverging history *too* much.
self.limiter = Limiter(max_count=5)
self.action_generator = hs.get_action_generator()
@defer.inlineCallbacks
def purge_history(self, room_id, event_id):
event = yield self.store.get_event(event_id)
@@ -175,7 +179,8 @@ class MessageHandler(BaseHandler):
defer.returnValue(chunk)
@defer.inlineCallbacks
def create_event(self, event_dict, token_id=None, txn_id=None, prev_event_ids=None):
def create_event(self, requester, event_dict, token_id=None, txn_id=None,
prev_event_ids=None):
"""
Given a dict from a client, create a new event.
@@ -185,6 +190,7 @@ class MessageHandler(BaseHandler):
Adds display names to Join membership events.
Args:
requester
event_dict (dict): An entire event
token_id (str)
txn_id (str)
@@ -226,6 +232,7 @@ class MessageHandler(BaseHandler):
event, context = yield self._create_new_client_event(
builder=builder,
requester=requester,
prev_event_ids=prev_event_ids,
)
@@ -251,17 +258,7 @@ class MessageHandler(BaseHandler):
# We check here if we are currently being rate limited, so that we
# don't do unnecessary work. We check again just before we actually
# send the event.
time_now = self.clock.time()
allowed, time_allowed = self.ratelimiter.send_message(
event.sender, time_now,
msg_rate_hz=self.hs.config.rc_messages_per_second,
burst_count=self.hs.config.rc_message_burst_count,
update=False,
)
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now)),
)
yield self.ratelimit(requester, update=False)
user = UserID.from_string(event.sender)
@@ -319,10 +316,17 @@ class MessageHandler(BaseHandler):
See self.create_event and self.send_nonmember_event.
"""
event, context = yield self.create_event(
requester,
event_dict,
token_id=requester.access_token_id,
txn_id=txn_id
)
if spamcheck.check_event_for_spam(event):
raise SynapseError(
403, "Spam is not permitted here", Codes.FORBIDDEN
)
yield self.send_nonmember_event(
requester,
event,
@@ -416,7 +420,7 @@ class MessageHandler(BaseHandler):
@measure_func("_create_new_client_event")
@defer.inlineCallbacks
def _create_new_client_event(self, builder, prev_event_ids=None):
def _create_new_client_event(self, builder, requester=None, prev_event_ids=None):
if prev_event_ids:
prev_events = yield self.store.add_event_hashes(prev_event_ids)
prev_max_depth = yield self.store.get_max_depth_of_events(prev_event_ids)
@@ -456,6 +460,8 @@ class MessageHandler(BaseHandler):
state_handler = self.state_handler
context = yield state_handler.compute_event_context(builder)
if requester:
context.app_service = requester.app_service
if builder.is_state():
builder.prev_state = yield self.store.add_event_hashes(
@@ -493,7 +499,7 @@ class MessageHandler(BaseHandler):
# We now need to go and hit out to wherever we need to hit out to.
if ratelimit:
self.ratelimit(requester)
yield self.ratelimit(requester)
try:
yield self.auth.check_from_context(event, context)
@@ -501,6 +507,14 @@ class MessageHandler(BaseHandler):
logger.warn("Denying new event %r because %s", event, err)
raise err
# Ensure that we can round trip before trying to persist in db
try:
dump = ujson.dumps(event.content)
ujson.loads(dump)
except:
logger.exception("Failed to encode content: %r", event.content)
raise
yield self.maybe_kick_guest_users(event, context)
if event.type == EventTypes.CanonicalAlias:
@@ -531,9 +545,9 @@ class MessageHandler(BaseHandler):
state_to_include_ids = [
e_id
for k, e_id in context.current_state_ids.items()
for k, e_id in context.current_state_ids.iteritems()
if k[0] in self.hs.config.room_invite_state_types
or k[0] == EventTypes.Member and k[1] == event.sender
or k == (EventTypes.Member, event.sender)
]
state_to_include = yield self.store.get_events(state_to_include_ids)
@@ -545,7 +559,7 @@ class MessageHandler(BaseHandler):
"content": e.content,
"sender": e.sender,
}
for e in state_to_include.values()
for e in state_to_include.itervalues()
]
invitee = UserID.from_string(event.state_key)
@@ -594,8 +608,7 @@ class MessageHandler(BaseHandler):
"Changing the room create event is forbidden",
)
action_generator = ActionGenerator(self.hs)
yield action_generator.handle_push_actions_for_event(
yield self.action_generator.handle_push_actions_for_event(
event, context
)
@@ -605,19 +618,16 @@ class MessageHandler(BaseHandler):
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
preserve_fn(self.hs.get_pusherpool().on_new_notifications)(
preserve_fn(self.pusher_pool.on_new_notifications)(
event_stream_id, max_stream_id
)
@defer.inlineCallbacks
def _notify():
yield run_on_reactor()
yield self.notifier.on_new_room_event(
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
preserve_fn(_notify)()
# If invite, remove room_state from unsigned before sending.
event.unsigned.pop("invite_room_state", None)


@@ -30,6 +30,7 @@ from synapse.api.constants import PresenceState
from synapse.storage.presence import UserPresenceState
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.util.async import Linearizer
from synapse.util.logcontext import preserve_fn
from synapse.util.logutils import log_function
from synapse.util.metrics import Measure
@@ -187,6 +188,7 @@ class PresenceHandler(object):
# process_id to millisecond timestamp last updated.
self.external_process_to_current_syncs = {}
self.external_process_last_updated_ms = {}
self.external_sync_linearizer = Linearizer(name="external_sync_linearizer")
# Start a LoopingCall in 30s that fires every 5s.
# The initial delay is to allow disconnected clients a chance to
@@ -316,11 +318,7 @@ class PresenceHandler(object):
if to_federation_ping:
federation_presence_out_counter.inc_by(len(to_federation_ping))
_, _, hosts_to_states = yield self._get_interested_parties(
to_federation_ping.values()
)
self._push_to_remotes(hosts_to_states)
self._push_to_remotes(to_federation_ping.values())
def _handle_timeouts(self):
"""Checks the presence of users that have timed out and updates as
@@ -508,6 +506,73 @@ class PresenceHandler(object):
self.external_process_last_updated_ms[process_id] = self.clock.time_msec()
self.external_process_to_current_syncs[process_id] = syncing_user_ids
@defer.inlineCallbacks
def update_external_syncs_row(self, process_id, user_id, is_syncing, sync_time_msec):
"""Update the syncing users for an external process as a delta.
Args:
process_id (str): An identifier for the process the users are
syncing against. This allows synapse to process updates
as users start and stop syncing against a given process.
user_id (str): The user who has started or stopped syncing
is_syncing (bool): Whether or not the user is now syncing
sync_time_msec (int): Time in ms when the user was last syncing
"""
with (yield self.external_sync_linearizer.queue(process_id)):
prev_state = yield self.current_state_for_user(user_id)
process_presence = self.external_process_to_current_syncs.setdefault(
process_id, set()
)
updates = []
if is_syncing and user_id not in process_presence:
if prev_state.state == PresenceState.OFFLINE:
updates.append(prev_state.copy_and_replace(
state=PresenceState.ONLINE,
last_active_ts=sync_time_msec,
last_user_sync_ts=sync_time_msec,
))
else:
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=sync_time_msec,
))
process_presence.add(user_id)
elif user_id in process_presence:
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=sync_time_msec,
))
if not is_syncing:
process_presence.discard(user_id)
if updates:
yield self._update_states(updates)
self.external_process_last_updated_ms[process_id] = self.clock.time_msec()
@defer.inlineCallbacks
def update_external_syncs_clear(self, process_id):
"""Marks all users that had been marked as syncing by a given process
as offline.
Used when the process has stopped/disappeared.
"""
with (yield self.external_sync_linearizer.queue(process_id)):
process_presence = self.external_process_to_current_syncs.pop(
process_id, set()
)
prev_states = yield self.current_state_for_users(process_presence)
time_now_ms = self.clock.time_msec()
yield self._update_states([
prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
)
for prev_state in prev_states.itervalues()
])
self.external_process_last_updated_ms.pop(process_id, None)
@defer.inlineCallbacks
def current_state_for_user(self, user_id):
"""Get the current presence state for a user.
@@ -527,14 +592,14 @@ class PresenceHandler(object):
for user_id in user_ids
}
missing = [user_id for user_id, state in states.items() if not state]
missing = [user_id for user_id, state in states.iteritems() if not state]
if missing:
# There are things not in our in memory cache. Lets pull them out of
# the database.
res = yield self.store.get_presence_for_users(missing)
states.update(res)
missing = [user_id for user_id, state in states.items() if not state]
missing = [user_id for user_id, state in states.iteritems() if not state]
if missing:
new = {
user_id: UserPresenceState.default(user_id)
@@ -545,54 +610,6 @@ class PresenceHandler(object):
defer.returnValue(states)
@defer.inlineCallbacks
def _get_interested_parties(self, states, calculate_remote_hosts=True):
"""Given a list of states return which entities (rooms, users, servers)
are interested in the given states.
Returns:
3-tuple: `(room_ids_to_states, users_to_states, hosts_to_states)`,
with each item being a dict of `entity_name` -> `[UserPresenceState]`
"""
room_ids_to_states = {}
users_to_states = {}
for state in states:
room_ids = yield self.store.get_rooms_for_user(state.user_id)
for room_id in room_ids:
room_ids_to_states.setdefault(room_id, []).append(state)
plist = yield self.store.get_presence_list_observers_accepted(state.user_id)
for u in plist:
users_to_states.setdefault(u, []).append(state)
# Always notify self
users_to_states.setdefault(state.user_id, []).append(state)
hosts_to_states = {}
if calculate_remote_hosts:
for room_id, states in room_ids_to_states.items():
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
if not local_states:
continue
hosts = yield self.store.get_hosts_in_room(room_id)
for host in hosts:
hosts_to_states.setdefault(host, []).extend(local_states)
for user_id, states in users_to_states.items():
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
if not local_states:
continue
host = get_domain_from_id(user_id)
hosts_to_states.setdefault(host, []).extend(local_states)
# TODO: de-dup hosts_to_states, as a single host might have multiple
# of same presence
defer.returnValue((room_ids_to_states, users_to_states, hosts_to_states))
@defer.inlineCallbacks
def _persist_and_notify(self, states):
"""Persist states in the database, poke the notifier and send to
@@ -600,34 +617,33 @@ class PresenceHandler(object):
"""
stream_id, max_token = yield self.store.update_presence(states)
parties = yield self._get_interested_parties(states)
room_ids_to_states, users_to_states, hosts_to_states = parties
parties = yield get_interested_parties(self.store, states)
room_ids_to_states, users_to_states = parties
self.notifier.on_new_event(
"presence_key", stream_id, rooms=room_ids_to_states.keys(),
users=[UserID.from_string(u) for u in users_to_states.keys()]
users=[UserID.from_string(u) for u in users_to_states]
)
self._push_to_remotes(hosts_to_states)
self._push_to_remotes(states)
@defer.inlineCallbacks
def notify_for_states(self, state, stream_id):
parties = yield self._get_interested_parties([state])
room_ids_to_states, users_to_states, hosts_to_states = parties
parties = yield get_interested_parties(self.store, [state])
room_ids_to_states, users_to_states = parties
self.notifier.on_new_event(
"presence_key", stream_id, rooms=room_ids_to_states.keys(),
users=[UserID.from_string(u) for u in users_to_states.keys()]
users=[UserID.from_string(u) for u in users_to_states]
)
def _push_to_remotes(self, hosts_to_states):
def _push_to_remotes(self, states):
"""Sends state updates to remote servers.
Args:
hosts_to_states (dict): Mapping `server_name` -> `[UserPresenceState]`
states (list(UserPresenceState))
"""
for host, states in hosts_to_states.items():
self.federation.send_presence(host, states)
self.federation.send_presence(states)
@defer.inlineCallbacks
def incoming_presence(self, origin, content):
@@ -764,18 +780,17 @@ class PresenceHandler(object):
# don't need to send to local clients here, as that is done as part
# of the event stream/sync.
# TODO: Only send to servers not already in the room.
user_ids = yield self.store.get_users_in_room(room_id)
if self.is_mine(user):
state = yield self.current_state_for_user(user.to_string())
hosts = set(get_domain_from_id(u) for u in user_ids)
self._push_to_remotes({host: (state,) for host in hosts})
self._push_to_remotes([state])
else:
user_ids = yield self.store.get_users_in_room(room_id)
user_ids = filter(self.is_mine_id, user_ids)
states = yield self.current_state_for_users(user_ids)
self._push_to_remotes({user.domain: states.values()})
self._push_to_remotes(states.values())
@defer.inlineCallbacks
def get_presence_list(self, observer_user, accepted=None):
@@ -1275,3 +1290,66 @@ def handle_update(prev_state, new_state, is_mine, wheel_timer, now):
persist_and_notify = True
return new_state, persist_and_notify, federation_ping
@defer.inlineCallbacks
def get_interested_parties(store, states):
"""Given a list of states return which entities (rooms, users)
are interested in the given states.
Args:
states (list(UserPresenceState))
Returns:
2-tuple: `(room_ids_to_states, users_to_states)`,
with each item being a dict of `entity_name` -> `[UserPresenceState]`
"""
room_ids_to_states = {}
users_to_states = {}
for state in states:
room_ids = yield store.get_rooms_for_user(state.user_id)
for room_id in room_ids:
room_ids_to_states.setdefault(room_id, []).append(state)
plist = yield store.get_presence_list_observers_accepted(state.user_id)
for u in plist:
users_to_states.setdefault(u, []).append(state)
# Always notify self
users_to_states.setdefault(state.user_id, []).append(state)
defer.returnValue((room_ids_to_states, users_to_states))
@defer.inlineCallbacks
def get_interested_remotes(store, states, state_handler):
"""Given a list of presence states figure out which remote servers
should be sent which states.
All the presence states should be for local users only.
Args:
store (DataStore)
states (list(UserPresenceState))
Returns:
Deferred list of ([destinations], [UserPresenceState]), where for
each row the list of UserPresenceState should be sent to each
destination
"""
hosts_and_states = []
# First we look up the rooms each user is in (as well as any explicit
# subscriptions), then for each distinct room we look up the remote
# hosts in those rooms.
room_ids_to_states, users_to_states = yield get_interested_parties(store, states)
for room_id, states in room_ids_to_states.iteritems():
hosts = yield state_handler.get_current_hosts_in_room(room_id)
hosts_and_states.append((hosts, states))
for user_id, states in users_to_states.iteritems():
host = get_domain_from_id(user_id)
hosts_and_states.append(([host], states))
defer.returnValue(hosts_and_states)
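# Illustrative sketch (invented data): the shape returned above is exactly what
# TransactionQueue._process_presence_inner iterates over when fanning out.
def _fan_out_sketch(hosts_and_states, queue_presence):
    for destinations, states in hosts_and_states:
        for destination in destinations:
            queue_presence(destination, states)

_sent = []
_fan_out_sketch(
    [(["remote1.example", "remote2.example"], ["alice_state"]),
     (["observer.example"], ["alice_state"])],   # presence-list subscriber
    lambda dest, states: _sent.append(dest),
)
assert _sent == ["remote1.example", "remote2.example", "observer.example"]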


@@ -156,7 +156,7 @@ class ProfileHandler(BaseHandler):
if not self.hs.is_mine(user):
return
self.ratelimit(requester)
yield self.ratelimit(requester)
room_ids = yield self.store.get_rooms_for_user(
user.to_string(),


@@ -0,0 +1,64 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import BaseHandler
from twisted.internet import defer
from synapse.util.async import Linearizer
import logging
logger = logging.getLogger(__name__)
class ReadMarkerHandler(BaseHandler):
def __init__(self, hs):
super(ReadMarkerHandler, self).__init__(hs)
self.server_name = hs.config.server_name
self.store = hs.get_datastore()
self.read_marker_linearizer = Linearizer(name="read_marker")
self.notifier = hs.get_notifier()
@defer.inlineCallbacks
def received_client_read_marker(self, room_id, user_id, event_id):
"""Updates the read marker for a given user in a given room if the event ID given
is ahead in the stream relative to the current read marker.
This uses a notifier to indicate that account data should be sent down /sync if
the read marker has changed.
"""
with (yield self.read_marker_linearizer.queue((room_id, user_id))):
account_data = yield self.store.get_account_data_for_room(user_id, room_id)
existing_read_marker = account_data.get("m.fully_read", None)
should_update = True
if existing_read_marker:
# Only update if the new marker is ahead in the stream
should_update = yield self.store.is_event_after(
event_id,
existing_read_marker['event_id']
)
if should_update:
content = {
"event_id": event_id
}
max_id = yield self.store.add_account_data_to_room(
user_id, room_id, "m.fully_read", content
)
self.notifier.on_new_event("account_data_key", max_id, users=[user_id])
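# --- Editor's note: an illustrative sketch of how a REST servlet might
# invoke this handler when a client advances its read marker. The
# `get_read_marker_handler` accessor and all identifiers are assumptions
# for illustration only.
@defer.inlineCallbacks
def _on_read_markers_request_sketch(hs):
    handler = hs.get_read_marker_handler()  # assumed accessor name
    yield handler.received_client_read_marker(
        room_id="!room:example.org",
        user_id="@alice:example.org",
        event_id="$someevent:example.org",  # from the m.fully_read body
    )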

View File

@@ -54,6 +54,13 @@ class RegistrationHandler(BaseHandler):
Codes.INVALID_USERNAME
)
if not localpart:
raise SynapseError(
400,
"User ID cannot be empty",
Codes.INVALID_USERNAME
)
if localpart[0] == '_':
raise SynapseError(
400,

View File

@@ -61,7 +61,7 @@ class RoomCreationHandler(BaseHandler):
}
@defer.inlineCallbacks
def create_room(self, requester, config):
def create_room(self, requester, config, ratelimit=True):
""" Creates a new room.
Args:
@@ -75,7 +75,8 @@ class RoomCreationHandler(BaseHandler):
"""
user_id = requester.user.to_string()
self.ratelimit(requester)
if ratelimit:
yield self.ratelimit(requester)
if "room_alias_name" in config:
for wchar in string.whitespace:
@@ -167,6 +168,7 @@ class RoomCreationHandler(BaseHandler):
initial_state=initial_state,
creation_content=creation_content,
room_alias=room_alias,
power_level_content_override=config.get("power_level_content_override", {})
)
if "name" in config:
@@ -245,7 +247,8 @@ class RoomCreationHandler(BaseHandler):
invite_list,
initial_state,
creation_content,
room_alias
room_alias,
power_level_content_override,
):
def create(etype, content, **kwargs):
e = {
@@ -291,7 +294,15 @@ class RoomCreationHandler(BaseHandler):
ratelimit=False,
)
if (EventTypes.PowerLevels, '') not in initial_state:
# We treat the power levels override specially as this needs to be one
# of the first events that get sent into a room.
pl_content = initial_state.pop((EventTypes.PowerLevels, ''), None)
if pl_content is not None:
yield send(
etype=EventTypes.PowerLevels,
content=pl_content,
)
else:
power_level_content = {
"users": {
creator_id: 100,
@@ -316,6 +327,8 @@ class RoomCreationHandler(BaseHandler):
for invitee in invite_list:
power_level_content["users"][invitee] = 100
power_level_content.update(power_level_content_override)
yield send(
etype=EventTypes.PowerLevels,
content=power_level_content,
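# --- Editor's note: an illustrative sketch of the merge performed above.
# The override from the room-creation config is applied last, so it wins
# over the generated defaults. All values here are made up.
power_level_content_sketch = {
    "users": {"@creator:example.org": 100},
    "users_default": 0,
}
config_sketch = {"power_level_content_override": {"invite": 100}}
power_level_content_sketch.update(
    config_sketch.get("power_level_content_override", {})
)
# power_level_content_sketch is now:
# {"users": {"@creator:example.org": 100}, "users_default": 0, "invite": 100}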

View File

@@ -70,6 +70,7 @@ class RoomMemberHandler(BaseHandler):
content["kind"] = "guest"
event, context = yield msg_handler.create_event(
requester,
{
"type": EventTypes.Member,
"content": content,
@@ -139,13 +140,6 @@ class RoomMemberHandler(BaseHandler):
)
yield user_joined_room(self.distributor, user, room_id)
def reject_remote_invite(self, user_id, room_id, remote_room_hosts):
return self.hs.get_handlers().federation_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
user_id
)
@defer.inlineCallbacks
def update_membership(
self,
@@ -197,6 +191,8 @@ class RoomMemberHandler(BaseHandler):
if action in ["kick", "unban"]:
effective_membership_state = "leave"
# if this is a join with a 3pid signature, we may need to turn a 3pid
# invite into a normal invite before we can handle the join.
if third_party_signed is not None:
replication = self.hs.get_replication_layer()
yield replication.exchange_third_party_invite(
@@ -209,6 +205,21 @@ class RoomMemberHandler(BaseHandler):
if not remote_room_hosts:
remote_room_hosts = []
if effective_membership_state not in ("leave", "ban",):
is_blocked = yield self.store.is_room_blocked(room_id)
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
if (effective_membership_state == "invite" and
self.hs.config.block_non_admin_invites):
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
if not is_requester_admin:
raise SynapseError(
403, "Invites have been disabled on this server",
)
latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
current_state_ids = yield self.state_handler.get_current_state_ids(
room_id, latest_event_ids=latest_event_ids,
@@ -286,13 +297,21 @@ class RoomMemberHandler(BaseHandler):
else:
# send the rejection to the inviter's HS.
remote_room_hosts = remote_room_hosts + [inviter.domain]
fed_handler = self.hs.get_handlers().federation_handler
try:
ret = yield self.reject_remote_invite(
target.to_string(), room_id, remote_room_hosts
ret = yield fed_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
target.to_string(),
)
defer.returnValue(ret)
except SynapseError as e:
except Exception as e:
# if we were unable to reject the invite, just mark
# it as rejected on our end and plough ahead.
#
# The 'except' clause is very broad, but we need to
# capture everything from DNS failures upwards
#
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
@@ -367,6 +386,11 @@ class RoomMemberHandler(BaseHandler):
# so don't really fit into the general auth process.
raise AuthError(403, "Guest access not allowed")
if event.membership not in (Membership.LEAVE, Membership.BAN):
is_blocked = yield self.store.is_room_blocked(room_id)
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
yield message_handler.handle_new_client_event(
requester,
event,
@@ -459,6 +483,16 @@ class RoomMemberHandler(BaseHandler):
requester,
txn_id
):
if self.hs.config.block_non_admin_invites:
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
if not is_requester_admin:
raise SynapseError(
403, "Invites have been disabled on this server",
Codes.FORBIDDEN,
)
invitee = yield self._lookup_3pid(
id_server, medium, address
)
@@ -737,10 +771,11 @@ class RoomMemberHandler(BaseHandler):
if len(current_state_ids) == 1 and create_event_id:
defer.returnValue(self.hs.is_mine_id(create_event_id))
for (etype, state_key), event_id in current_state_ids.items():
for etype, state_key in current_state_ids:
if etype != EventTypes.Member or not self.hs.is_mine_id(state_key):
continue
event_id = current_state_ids[(etype, state_key)]
event = yield self.store.get_event(event_id, allow_none=True)
if not event:
continue

View File

@@ -108,6 +108,16 @@ class InvitedSyncResult(collections.namedtuple("InvitedSyncResult", [
return True
class DeviceLists(collections.namedtuple("DeviceLists", [
"changed", # list of user_ids whose devices may have changed
"left", # list of user_ids whose devices we no longer track
])):
__slots__ = []
def __nonzero__(self):
return bool(self.changed or self.left)
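# --- Editor's note: because __nonzero__ delegates to bool(changed or left),
# an empty DeviceLists is falsy, so callers can cheaply skip serialising an
# empty device_lists section. A quick illustration:
assert not DeviceLists(changed=[], left=[])
assert DeviceLists(changed=["@bob:example.org"], left=[])
assert DeviceLists(changed=[], left=["@bob:example.org"])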
class SyncResult(collections.namedtuple("SyncResult", [
"next_batch", # Token for the next sync
"presence", # List of presence events for the user.
@@ -117,6 +127,8 @@ class SyncResult(collections.namedtuple("SyncResult", [
"archived", # ArchivedSyncResult for each archived room.
"to_device", # List of direct messages for the device.
"device_lists", # List of user_ids whose devices have chanegd
"device_one_time_keys_count", # Dict of algorithm to count for one time keys
# for this device
])):
__slots__ = []
@@ -288,10 +300,20 @@ class SyncHandler(object):
if recents:
recents = sync_config.filter_collection.filter_room_timeline(recents)
# We check if there are any state events; if there are, we pass
# all current state events to the filter_events function, to
# ensure that we always include the current state in the timeline.
current_state_ids = frozenset()
if any(e.is_state() for e in recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(current_state_ids.itervalues())
recents = yield filter_events_for_client(
self.store,
sync_config.user.to_string(),
recents,
always_include_ids=current_state_ids,
)
else:
recents = []
@@ -323,10 +345,20 @@ class SyncHandler(object):
loaded_recents = sync_config.filter_collection.filter_room_timeline(
events
)
# We check if there are any state events; if there are, we pass
# all current state events to the filter_events function, to
# ensure that we always include the current state in the timeline.
current_state_ids = frozenset()
if any(e.is_state() for e in loaded_recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(current_state_ids.itervalues())
loaded_recents = yield filter_events_for_client(
self.store,
sync_config.user.to_string(),
loaded_recents,
always_include_ids=current_state_ids,
)
loaded_recents.extend(recents)
recents = loaded_recents
@@ -533,7 +565,8 @@ class SyncHandler(object):
res = yield self._generate_sync_entry_for_rooms(
sync_result_builder, account_data_by_room
)
newly_joined_rooms, newly_joined_users = res
newly_joined_rooms, newly_joined_users, _, _ = res
_, _, newly_left_rooms, newly_left_users = res
block_all_presence_data = (
since_token is None and
@@ -547,9 +580,21 @@ class SyncHandler(object):
yield self._generate_sync_entry_for_to_device(sync_result_builder)
device_lists = yield self._generate_sync_entry_for_device_list(
sync_result_builder
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_users=newly_joined_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
device_id = sync_config.device_id
one_time_key_counts = {}
if device_id:
user_id = sync_config.user.to_string()
one_time_key_counts = yield self.store.count_e2e_one_time_keys(
user_id, device_id
)
defer.returnValue(SyncResult(
presence=sync_result_builder.presence,
account_data=sync_result_builder.account_data,
@@ -558,30 +603,56 @@ class SyncHandler(object):
archived=sync_result_builder.archived,
to_device=sync_result_builder.to_device,
device_lists=device_lists,
device_one_time_keys_count=one_time_key_counts,
next_batch=sync_result_builder.now_token,
))
@measure_func("_generate_sync_entry_for_device_list")
@defer.inlineCallbacks
def _generate_sync_entry_for_device_list(self, sync_result_builder):
def _generate_sync_entry_for_device_list(self, sync_result_builder,
newly_joined_rooms, newly_joined_users,
newly_left_rooms, newly_left_users):
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
if since_token and since_token.device_list_key:
room_ids = yield self.store.get_rooms_for_user(user_id)
user_ids_changed = set()
changed = yield self.store.get_user_whose_devices_changed(
since_token.device_list_key
)
for other_user_id in changed:
other_room_ids = yield self.store.get_rooms_for_user(other_user_id)
if room_ids.intersection(other_room_ids):
user_ids_changed.add(other_user_id)
defer.returnValue(user_ids_changed)
# TODO: Be more clever than this, i.e. remove users we already
# share a room with?
for room_id in newly_joined_rooms:
joined_users = yield self.state.get_current_user_in_room(room_id)
newly_joined_users.update(joined_users)
for room_id in newly_left_rooms:
left_users = yield self.state.get_current_user_in_room(room_id)
newly_left_users.update(left_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
changed.update(newly_joined_users)
if not changed and not newly_left_users:
defer.returnValue(DeviceLists(
changed=[],
left=newly_left_users,
))
users_who_share_room = yield self.store.get_users_who_share_room_with_user(
user_id
)
defer.returnValue(DeviceLists(
changed=users_who_share_room & changed,
left=set(newly_left_users) - users_who_share_room,
))
else:
defer.returnValue([])
defer.returnValue(DeviceLists(
changed=[],
left=[],
))
@defer.inlineCallbacks
def _generate_sync_entry_for_to_device(self, sync_result_builder):
@@ -745,8 +816,8 @@ class SyncHandler(object):
account_data_by_room(dict): Dictionary of per room account data
Returns:
Deferred(tuple): Returns a 2-tuple of
`(newly_joined_rooms, newly_joined_users)`
Deferred(tuple): Returns a 4-tuple of
`(newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users)`
"""
user_id = sync_result_builder.sync_config.user.to_string()
block_all_room_ephemeral = (
@@ -777,7 +848,7 @@ class SyncHandler(object):
)
if not tags_by_room:
logger.debug("no-oping sync")
defer.returnValue(([], []))
defer.returnValue(([], [], [], []))
ignored_account_data = yield self.store.get_global_account_data_by_type_for_user(
"m.ignored_user_list", user_id=user_id,
@@ -790,7 +861,7 @@ class SyncHandler(object):
if since_token:
res = yield self._get_rooms_changed(sync_result_builder, ignored_users)
room_entries, invited, newly_joined_rooms = res
room_entries, invited, newly_joined_rooms, newly_left_rooms = res
tags_by_room = yield self.store.get_updated_tags(
user_id, since_token.account_data_key,
@@ -798,6 +869,7 @@ class SyncHandler(object):
else:
res = yield self._get_all_rooms(sync_result_builder, ignored_users)
room_entries, invited, newly_joined_rooms = res
newly_left_rooms = []
tags_by_room = yield self.store.get_tags_for_user(user_id)
@@ -818,17 +890,30 @@ class SyncHandler(object):
# Now we want to get any newly joined users
newly_joined_users = set()
newly_left_users = set()
if since_token:
for joined_sync in sync_result_builder.joined:
it = itertools.chain(
joined_sync.timeline.events, joined_sync.state.values()
joined_sync.timeline.events, joined_sync.state.itervalues()
)
for event in it:
if event.type == EventTypes.Member:
if event.membership == Membership.JOIN:
newly_joined_users.add(event.state_key)
else:
prev_content = event.unsigned.get("prev_content", {})
prev_membership = prev_content.get("membership", None)
if prev_membership == Membership.JOIN:
newly_left_users.add(event.state_key)
defer.returnValue((newly_joined_rooms, newly_joined_users))
newly_left_users -= newly_joined_users
defer.returnValue((
newly_joined_rooms,
newly_joined_users,
newly_left_rooms,
newly_left_users,
))
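# --- Editor's note: an illustrative member event of the shape the loop
# above detects as "newly left": the membership is no longer JOIN, but
# unsigned.prev_content shows the user was previously joined. Values are
# made up; the event shape follows the Matrix spec.
_newly_left_example = {
    "type": "m.room.member",
    "state_key": "@bob:example.org",
    "content": {"membership": "leave"},
    "unsigned": {"prev_content": {"membership": "join"}},
}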
@defer.inlineCallbacks
def _have_rooms_changed(self, sync_result_builder):
@@ -898,15 +983,28 @@ class SyncHandler(object):
mem_change_events_by_room_id.setdefault(event.room_id, []).append(event)
newly_joined_rooms = []
newly_left_rooms = []
room_entries = []
invited = []
for room_id, events in mem_change_events_by_room_id.items():
for room_id, events in mem_change_events_by_room_id.iteritems():
non_joins = [e for e in events if e.membership != Membership.JOIN]
has_join = len(non_joins) != len(events)
# We want to figure out if we joined the room at some point since
# the last sync (even if we have since left). This is to make sure
# we do send down the room, and with full state, where necessary
old_state_ids = None
if room_id in joined_room_ids and non_joins:
# Always include if the user (re)joined the room, especially
# important so that device list changes are calculated correctly.
# If there are non-join member events, but we are still in the room,
# then the user must have left and rejoined.
newly_joined_rooms.append(room_id)
# User is in the room so we don't need to do the invite/leave checks
continue
if room_id in joined_room_ids or has_join:
old_state_ids = yield self.get_state_at(room_id, since_token)
old_mem_ev_id = old_state_ids.get((EventTypes.Member, user_id), None)
@@ -918,12 +1016,33 @@ class SyncHandler(object):
if not old_mem_ev or old_mem_ev.membership != Membership.JOIN:
newly_joined_rooms.append(room_id)
if room_id in joined_room_ids:
continue
# If user is in the room then we don't need to do the invite/leave checks
if room_id in joined_room_ids:
continue
if not non_joins:
continue
# Check if we have left the room. This can be either because we were
# joined before, *or* because we joined and then left since the last sync.
if events[-1].membership != Membership.JOIN:
if has_join:
newly_left_rooms.append(room_id)
else:
if not old_state_ids:
old_state_ids = yield self.get_state_at(room_id, since_token)
old_mem_ev_id = old_state_ids.get(
(EventTypes.Member, user_id),
None,
)
old_mem_ev = None
if old_mem_ev_id:
old_mem_ev = yield self.store.get_event(
old_mem_ev_id, allow_none=True
)
if old_mem_ev and old_mem_ev.membership == Membership.JOIN:
newly_left_rooms.append(room_id)
# Only bother if we're still currently invited
should_invite = non_joins[-1].membership == Membership.INVITE
if should_invite:
@@ -1001,7 +1120,7 @@ class SyncHandler(object):
upto_token=since_token,
))
defer.returnValue((room_entries, invited, newly_joined_rooms))
defer.returnValue((room_entries, invited, newly_joined_rooms, newly_left_rooms))
@defer.inlineCallbacks
def _get_all_rooms(self, sync_result_builder, ignored_users):
@@ -1249,6 +1368,7 @@ class SyncResultBuilder(object):
self.invited = []
self.archived = []
self.device = []
self.to_device = []
class RoomSyncResultBuilder(object):

View File

@@ -24,7 +24,6 @@ from synapse.types import UserID, get_domain_from_id
import logging
from collections import namedtuple
import ujson as json
logger = logging.getLogger(__name__)
@@ -90,7 +89,7 @@ class TypingHandler(object):
until = self._member_typing_until.get(member, None)
if not until or until <= now:
logger.info("Timing out typing for: %s", member.user_id)
preserve_fn(self._stopped_typing)(member)
self._stopped_typing(member)
continue
# Check if we need to resend a keep alive over federation for this
@@ -148,7 +147,7 @@ class TypingHandler(object):
# No point sending another notification
defer.returnValue(None)
yield self._push_update(
self._push_update(
member=member,
typing=True,
)
@@ -172,7 +171,7 @@ class TypingHandler(object):
member = RoomMember(room_id=room_id, user_id=target_user_id)
yield self._stopped_typing(member)
self._stopped_typing(member)
@defer.inlineCallbacks
def user_left_room(self, user, room_id):
@@ -181,7 +180,6 @@ class TypingHandler(object):
member = RoomMember(room_id=room_id, user_id=user_id)
yield self._stopped_typing(member)
@defer.inlineCallbacks
def _stopped_typing(self, member):
if member.user_id not in self._room_typing.get(member.room_id, set()):
# No point
@@ -190,16 +188,15 @@ class TypingHandler(object):
self._member_typing_until.pop(member, None)
self._member_last_federation_poke.pop(member, None)
yield self._push_update(
self._push_update(
member=member,
typing=False,
)
@defer.inlineCallbacks
def _push_update(self, member, typing):
if self.hs.is_mine_id(member.user_id):
# Only send updates for changes to our own users.
yield self._push_remote(member, typing)
preserve_fn(self._push_remote)(member, typing)
self._push_update_local(
member=member,
@@ -288,11 +285,13 @@ class TypingHandler(object):
for room_id, serial in self._room_serials.items():
if last_id < serial and serial <= current_id:
typing = self._room_typing[room_id]
typing_bytes = json.dumps(list(typing), ensure_ascii=False)
rows.append((serial, room_id, typing_bytes))
rows.append((serial, room_id, list(typing)))
rows.sort()
return rows
def get_current_token(self):
return self._latest_room_serial
class TypingNotificationEventSource(object):
def __init__(self, hs):

View File

@@ -0,0 +1,641 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.storage.roommember import ProfileInfo
from synapse.util.metrics import Measure
from synapse.util.async import sleep
logger = logging.getLogger(__name__)
class UserDirectoyHandler(object):
"""Handles querying of and keeping updated the user_directory.
N.B.: ASSUMES IT IS THE ONLY THING THAT MODIFIES THE USER DIRECTORY
The user directory is filled with users who this server can see are joined to a
world_readable or publicly joinable room. We keep a database table up to date
by streaming changes to the current state and recalculating whether users should
be in the directory or not when necessary.
For each user in the directory we also store a room_id which is public and that the
user is joined to. This allows us to ignore history_visibility and join_rules changes
for that user in all other public rooms, as we know they'll still be in at least
one public room.
"""
INITIAL_SLEEP_MS = 50
INITIAL_SLEEP_COUNT = 100
INITIAL_BATCH_SIZE = 100
def __init__(self, hs):
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id
self.update_user_directory = hs.config.update_user_directory
# When starting up for the first time we need to populate the user_directory.
# This is a set of user_ids we've inserted already
self.initially_handled_users = set()
self.initially_handled_users_in_public = set()
self.initially_handled_users_share = set()
self.initially_handled_users_share_private_room = set()
# The current position in the current_state_delta stream
self.pos = None
# Guard to ensure we only process deltas one at a time
self._is_processing = False
if self.update_user_directory:
self.notifier.add_replication_callback(self.notify_new_event)
# We kick this off so that we don't have to wait for a change before
# we start populating the user directory
self.clock.call_later(0, self.notify_new_event)
def search_users(self, user_id, search_term, limit):
"""Searches for users in directory
Returns:
dict of the form::
{
"limited": <bool>, # whether there were more results or not
"results": [ # Ordered by best match first
{
"user_id": <user_id>,
"display_name": <display_name>,
"avatar_url": <avatar_url>
}
]
}
"""
return self.store.search_user_dir(user_id, search_term, limit)
@defer.inlineCallbacks
def notify_new_event(self):
"""Called when there may be more deltas to process
"""
if not self.update_user_directory:
return
if self._is_processing:
return
self._is_processing = True
try:
yield self._unsafe_process()
finally:
self._is_processing = False
@defer.inlineCallbacks
def _unsafe_process(self):
# If self.pos is None it means we haven't fetched it from the DB yet
if self.pos is None:
self.pos = yield self.store.get_user_directory_stream_pos()
# If it is still None then we need to do the initial fill of the directory
if self.pos is None:
yield self._do_initial_spam()
self.pos = yield self.store.get_user_directory_stream_pos()
# Loop round handling deltas until we're up to date
while True:
with Measure(self.clock, "user_dir_delta"):
deltas = yield self.store.get_current_state_deltas(self.pos)
if not deltas:
return
logger.info("Handling %d state deltas", len(deltas))
yield self._handle_deltas(deltas)
self.pos = deltas[-1]["stream_id"]
yield self.store.update_user_directory_stream_pos(self.pos)
@defer.inlineCallbacks
def _do_initial_spam(self):
"""Populates the user_directory from the current state of the DB, used
when synapse first starts with user_directory support
"""
new_pos = yield self.store.get_max_stream_id_in_current_state_deltas()
# Delete any existing entries just in case there are any
yield self.store.delete_all_from_user_dir()
# We process by going through each existing room at a time.
room_ids = yield self.store.get_all_rooms()
logger.info("Doing initial update of user directory. %d rooms", len(room_ids))
num_processed_rooms = 1
for room_id in room_ids:
logger.info("Handling room %d/%d", num_processed_rooms, len(room_ids))
yield self._handle_intial_room(room_id)
num_processed_rooms += 1
yield sleep(self.INITIAL_SLEEP_MS / 1000.)
logger.info("Processed all rooms.")
self.initially_handled_users = None
self.initially_handled_users_in_public = None
self.initially_handled_users_share = None
self.initially_handled_users_share_private_room = None
yield self.store.update_user_directory_stream_pos(new_pos)
@defer.inlineCallbacks
def _handle_intial_room(self, room_id):
"""Called when we initially fill out user_directory one room at a time
"""
is_in_room = yield self.store.is_host_joined(room_id, self.server_name)
if not is_in_room:
return
is_public = yield self.store.is_room_world_readable_or_publicly_joinable(room_id)
users_with_profile = yield self.state.get_current_user_in_room(room_id)
user_ids = set(users_with_profile)
unhandled_users = user_ids - self.initially_handled_users
yield self.store.add_profiles_to_user_dir(
room_id, {
user_id: users_with_profile[user_id] for user_id in unhandled_users
}
)
self.initially_handled_users |= unhandled_users
if is_public:
yield self.store.add_users_to_public_room(
room_id,
user_ids=user_ids - self.initially_handled_users_in_public
)
self.initially_handled_users_in_public |= user_ids
# We now go and figure out the new users who share rooms with existing
# user directory entries.
# We sleep aggressively here as otherwise it can starve resources.
# We also batch up inserts/updates, but try to avoid too many at once.
to_insert = set()
to_update = set()
count = 0
for user_id in user_ids:
if count % self.INITIAL_SLEEP_COUNT == 0:
yield sleep(self.INITIAL_SLEEP_MS / 1000.)
if not self.is_mine_id(user_id):
count += 1
continue
if self.store.get_if_app_services_interested_in_user(user_id):
count += 1
continue
for other_user_id in user_ids:
if user_id == other_user_id:
continue
if count % self.INITIAL_SLEEP_COUNT == 0:
yield sleep(self.INITIAL_SLEEP_MS / 1000.)
count += 1
user_set = (user_id, other_user_id)
if user_set in self.initially_handled_users_share_private_room:
continue
if user_set in self.initially_handled_users_share:
if is_public:
continue
to_update.add(user_set)
else:
to_insert.add(user_set)
if is_public:
self.initially_handled_users_share.add(user_set)
else:
self.initially_handled_users_share_private_room.add(user_set)
if len(to_insert) > self.INITIAL_BATCH_SIZE:
yield self.store.add_users_who_share_room(
room_id, not is_public, to_insert,
)
to_insert.clear()
if len(to_update) > self.INITIAL_BATCH_SIZE:
yield self.store.update_users_who_share_room(
room_id, not is_public, to_update,
)
to_update.clear()
if to_insert:
yield self.store.add_users_who_share_room(
room_id, not is_public, to_insert,
)
to_insert.clear()
if to_update:
yield self.store.update_users_who_share_room(
room_id, not is_public, to_update,
)
to_update.clear()
@defer.inlineCallbacks
def _handle_deltas(self, deltas):
"""Called with the state deltas to process
"""
for delta in deltas:
typ = delta["type"]
state_key = delta["state_key"]
room_id = delta["room_id"]
event_id = delta["event_id"]
prev_event_id = delta["prev_event_id"]
logger.debug("Handling: %r %r, %s", typ, state_key, event_id)
# For join rule and visibility changes we need to check if the room
# may have become public or not and add/remove the users in said room
if typ in (EventTypes.RoomHistoryVisibility, EventTypes.JoinRules):
yield self._handle_room_publicity_change(
room_id, prev_event_id, event_id, typ,
)
elif typ == EventTypes.Member:
change = yield self._get_key_change(
prev_event_id, event_id,
key_name="membership",
public_value=Membership.JOIN,
)
if change is None:
# Handle any profile changes
yield self._handle_profile_change(
state_key, room_id, prev_event_id, event_id,
)
continue
if not change:
# Need to check if the server left the room entirely, if so
# we might need to remove all the users in that room
is_in_room = yield self.store.is_host_joined(
room_id, self.server_name,
)
if not is_in_room:
logger.info("Server left room: %r", room_id)
# Fetch all the users that we marked as being in the user
# directory due to being in the room, and then check whether
# we need to remove those users or not
user_ids = yield self.store.get_users_in_dir_due_to_room(room_id)
for user_id in user_ids:
yield self._handle_remove_user(room_id, user_id)
return
else:
logger.debug("Server is still in room: %r", room_id)
if change: # The user joined
event = yield self.store.get_event(event_id, allow_none=True)
profile = ProfileInfo(
avatar_url=event.content.get("avatar_url"),
display_name=event.content.get("displayname"),
)
yield self._handle_new_user(room_id, state_key, profile)
else: # The user left
yield self._handle_remove_user(room_id, state_key)
else:
logger.debug("Ignoring irrelevant type: %r", typ)
@defer.inlineCallbacks
def _handle_room_publicity_change(self, room_id, prev_event_id, event_id, typ):
"""Handle a room having potentially changed from/to world_readable/publically
joinable.
Args:
room_id (str)
prev_event_id (str|None): The previous event before the state change
event_id (str|None): The new event after the state change
typ (str): Type of the event
"""
logger.debug("Handling change for %s: %s", typ, room_id)
if typ == EventTypes.RoomHistoryVisibility:
change = yield self._get_key_change(
prev_event_id, event_id,
key_name="history_visibility",
public_value="world_readable",
)
elif typ == EventTypes.JoinRules:
change = yield self._get_key_change(
prev_event_id, event_id,
key_name="join_rule",
public_value=JoinRules.PUBLIC,
)
else:
raise Exception("Invalid event type")
# If change is None, nothing changed. True => became world_readable/public;
# False => was world_readable/public but no longer is.
if change is None:
logger.debug("No change")
return
# There's been a change to or from being world readable.
is_public = yield self.store.is_room_world_readable_or_publicly_joinable(
room_id
)
logger.debug("Change: %r, is_public: %r", change, is_public)
if change and not is_public:
# If we became world readable but the room isn't currently public then
# we ignore the change
return
elif not change and is_public:
# If we stopped being world readable but are still public,
# ignore the change
return
if change:
users_with_profile = yield self.state.get_current_user_in_room(room_id)
for user_id, profile in users_with_profile.iteritems():
yield self._handle_new_user(room_id, user_id, profile)
else:
users = yield self.store.get_users_in_public_due_to_room(room_id)
for user_id in users:
yield self._handle_remove_user(room_id, user_id)
@defer.inlineCallbacks
def _handle_new_user(self, room_id, user_id, profile):
"""Called when we might need to add user to directory
Args:
room_id (str): room_id that user joined or started being public that
user_id (str)
"""
logger.debug("Adding user to dir, %r", user_id)
row = yield self.store.get_user_in_directory(user_id)
if not row:
yield self.store.add_profiles_to_user_dir(room_id, {user_id: profile})
is_public = yield self.store.is_room_world_readable_or_publicly_joinable(
room_id
)
if is_public:
row = yield self.store.get_user_in_public_room(user_id)
if not row:
yield self.store.add_users_to_public_room(room_id, [user_id])
else:
logger.debug("Not adding user to public dir, %r", user_id)
# Now we update which users share rooms with this user. We do this by getting
# all the current users in the room and seeing which aren't already
# marked in the database as sharing with `user_id`
users_with_profile = yield self.state.get_current_user_in_room(room_id)
to_insert = set()
to_update = set()
is_appservice = self.store.get_if_app_services_interested_in_user(user_id)
# First, if they're our user then we need to update for every user
if self.is_mine_id(user_id) and not is_appservice:
# Returns a map of other_user_id -> shared_private. We only need
# to update mappings for users that either don't already share a
# room (aren't in the map) or, if this room is private, those that
# only share a public room.
user_ids_shared = yield self.store.get_users_who_share_room_from_dir(
user_id
)
for other_user_id in users_with_profile:
if user_id == other_user_id:
continue
shared_is_private = user_ids_shared.get(other_user_id)
if shared_is_private is True:
# We've already marked in the database they share a private room
continue
elif shared_is_private is False:
# They already share a public room, so only update if this is
# a private room
if not is_public:
to_update.add((user_id, other_user_id))
elif shared_is_private is None:
# This is the first time they both share a room
to_insert.add((user_id, other_user_id))
# Next we need to update for every local user in the room
for other_user_id in users_with_profile:
if user_id == other_user_id:
continue
is_appservice = self.store.get_if_app_services_interested_in_user(
other_user_id
)
if self.is_mine_id(other_user_id) and not is_appservice:
shared_is_private = yield self.store.get_if_users_share_a_room(
other_user_id, user_id,
)
if shared_is_private is True:
# We've already marked in the database they share a private room
continue
elif shared_is_private is False:
# They already share a public room, so only update if this is
# a private room
if not is_public:
to_update.add((other_user_id, user_id))
elif shared_is_private is None:
# This is the first time they both share a room
to_insert.add((other_user_id, user_id))
if to_insert:
yield self.store.add_users_who_share_room(
room_id, not is_public, to_insert,
)
if to_update:
yield self.store.update_users_who_share_room(
room_id, not is_public, to_update,
)
@defer.inlineCallbacks
def _handle_remove_user(self, room_id, user_id):
"""Called when we might need to remove user to directory
Args:
room_id (str): room_id that user left or stopped being public that
user_id (str)
"""
logger.debug("Maybe removing user %r", user_id)
row = yield self.store.get_user_in_directory(user_id)
update_user_dir = row and row["room_id"] == room_id
row = yield self.store.get_user_in_public_room(user_id)
update_user_in_public = row and row["room_id"] == room_id
if (update_user_in_public or update_user_dir):
# XXX: Make this faster?
rooms = yield self.store.get_rooms_for_user(user_id)
for j_room_id in rooms:
if (not update_user_in_public and not update_user_dir):
break
is_in_room = yield self.store.is_host_joined(
j_room_id, self.server_name,
)
if not is_in_room:
continue
if update_user_dir:
update_user_dir = False
yield self.store.update_user_in_user_dir(user_id, j_room_id)
is_public = yield self.store.is_room_world_readable_or_publicly_joinable(
j_room_id
)
if update_user_in_public and is_public:
yield self.store.update_user_in_public_user_list(user_id, j_room_id)
update_user_in_public = False
if update_user_dir:
yield self.store.remove_from_user_dir(user_id)
elif update_user_in_public:
yield self.store.remove_from_user_in_public_room(user_id)
# Now handle users_who_share_rooms.
# Get a list of user tuples that were in the DB due to this room and
# this user (this includes tuples where the other user matches `user_id`)
user_tuples = yield self.store.get_users_in_share_dir_with_room_id(
user_id, room_id,
)
for user_id, other_user_id in user_tuples:
# For each user tuple get a list of rooms that they still share,
# trying to find a private room, and update the entry in the DB
rooms = yield self.store.get_rooms_in_common_for_users(user_id, other_user_id)
# If they don't share a room anymore, remove the mapping
if not rooms:
yield self.store.remove_user_who_share_room(
user_id, other_user_id,
)
continue
found_public_share = None
for j_room_id in rooms:
is_public = yield self.store.is_room_world_readable_or_publicly_joinable(
j_room_id
)
if is_public:
found_public_share = j_room_id
else:
found_public_share = None
yield self.store.update_users_who_share_room(
room_id, not is_public, [(user_id, other_user_id)],
)
break
if found_public_share:
yield self.store.update_users_who_share_room(
room_id, not is_public, [(user_id, other_user_id)],
)
@defer.inlineCallbacks
def _handle_profile_change(self, user_id, room_id, prev_event_id, event_id):
"""Check member event changes for any profile changes and update the
database if there are any.
"""
if not prev_event_id or not event_id:
return
prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
event = yield self.store.get_event(event_id, allow_none=True)
if not prev_event or not event:
return
if event.membership != Membership.JOIN:
return
prev_name = prev_event.content.get("displayname")
new_name = event.content.get("displayname")
prev_avatar = prev_event.content.get("avatar_url")
new_avatar = event.content.get("avatar_url")
if prev_name != new_name or prev_avatar != new_avatar:
yield self.store.update_profile_in_user_dir(
user_id, new_name, new_avatar, room_id,
)
@defer.inlineCallbacks
def _get_key_change(self, prev_event_id, event_id, key_name, public_value):
"""Given two events check if the `key_name` field in content changed
from not matching `public_value` to doing so.
For example, check if `history_visibility` (`key_name`) changed from
`shared` to `world_readable` (`public_value`).
Returns:
None if the field in the events either both match `public_value`
or if neither do, i.e. there has been no change.
True if it didn't match `public_value` but now does
False if it did match `public_value` but now doesn't
"""
prev_event = None
event = None
if prev_event_id:
prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
if event_id:
event = yield self.store.get_event(event_id, allow_none=True)
if not event and not prev_event:
logger.debug("Neither event exists: %r %r", prev_event_id, event_id)
defer.returnValue(None)
prev_value = None
value = None
if prev_event:
prev_value = prev_event.content.get(key_name)
if event:
value = event.content.get(key_name)
logger.debug("prev_value: %r -> value: %r", prev_value, value)
if value == public_value and prev_value != public_value:
defer.returnValue(True)
elif value != public_value and prev_value == public_value:
defer.returnValue(False)
else:
defer.returnValue(None)
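# --- Editor's note: a minimal pure-Python sketch of the tri-state
# comparison above, minus the event fetching, using the
# history_visibility example from the docstring:
def _key_change_sketch(prev_value, value, public_value):
    if value == public_value and prev_value != public_value:
        return True   # became public
    elif value != public_value and prev_value == public_value:
        return False  # stopped being public
    return None       # no change either way

assert _key_change_sketch("shared", "world_readable", "world_readable") is True
assert _key_change_sketch("world_readable", "shared", "world_readable") is False
assert _key_change_sketch("shared", "shared", "world_readable") is None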

View File

@@ -16,9 +16,10 @@ from OpenSSL import SSL
from OpenSSL.SSL import VERIFY_NONE
from synapse.api.errors import (
CodeMessageException, SynapseError, Codes,
CodeMessageException, MatrixCodeMessageException, SynapseError, Codes,
)
from synapse.util.logcontext import preserve_context_over_fn
from synapse.util import logcontext
import synapse.metrics
from synapse.http.endpoint import SpiderEndpoint
@@ -72,39 +73,45 @@ class SimpleHttpClient(object):
contextFactory=hs.get_http_client_context_factory()
)
self.user_agent = hs.version_string
self.clock = hs.get_clock()
if hs.config.user_agent_suffix:
self.user_agent = "%s %s" % (self.user_agent, hs.config.user_agent_suffix,)
@defer.inlineCallbacks
def request(self, method, uri, *args, **kwargs):
# A small wrapper around self.agent.request() so we can easily attach
# counters to it
outgoing_requests_counter.inc(method)
d = preserve_context_over_fn(
self.agent.request,
method, uri, *args, **kwargs
)
def send_request():
request_deferred = self.agent.request(
method, uri, *args, **kwargs
)
return self.clock.time_bound_deferred(
request_deferred,
time_out=60,
)
logger.info("Sending request %s %s", method, uri)
def _cb(response):
try:
with logcontext.PreserveLoggingContext():
response = yield send_request()
incoming_responses_counter.inc(method, response.code)
logger.info(
"Received response to %s %s: %s",
method, uri, response.code
)
return response
def _eb(failure):
defer.returnValue(response)
except Exception as e:
incoming_responses_counter.inc(method, "ERR")
logger.info(
"Error sending request to %s %s: %s %s",
method, uri, failure.type, failure.getErrorMessage()
method, uri, type(e).__name__, e.message
)
return failure
d.addCallbacks(_cb, _eb)
return d
raise e
@defer.inlineCallbacks
def post_urlencoded_get_json(self, uri, args={}):
@@ -145,6 +152,11 @@ class SimpleHttpClient(object):
body = yield preserve_context_over_fn(readBody, response)
if 200 <= response.code < 300:
defer.returnValue(json.loads(body))
else:
raise self._exceptionFromFailedRequest(response, body)
defer.returnValue(json.loads(body))
@defer.inlineCallbacks
@@ -164,8 +176,11 @@ class SimpleHttpClient(object):
On a non-2xx HTTP response. The response body will be used as the
error message.
"""
body = yield self.get_raw(uri, args)
defer.returnValue(json.loads(body))
try:
body = yield self.get_raw(uri, args)
defer.returnValue(json.loads(body))
except CodeMessageException as e:
raise self._exceptionFromFailedRequest(e.code, e.msg)
@defer.inlineCallbacks
def put_json(self, uri, json_body, args={}):
@@ -246,6 +261,15 @@ class SimpleHttpClient(object):
else:
raise CodeMessageException(response.code, body)
def _exceptionFromFailedRequest(self, response, body):
try:
jsonBody = json.loads(body)
errcode = jsonBody['errcode']
error = jsonBody['error']
return MatrixCodeMessageException(response.code, error, errcode)
except (ValueError, KeyError):
return CodeMessageException(response.code, body)
# XXX: FIXME: This is horribly copy-pasted from matrixfederationclient.
# The two should be factored out.
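# --- Editor's note: for reference, the standard Matrix error body that
# _exceptionFromFailedRequest parses looks like the following (values are
# illustrative). A body that is not JSON, or lacks these keys, falls back
# to a plain CodeMessageException carrying the raw body.
_matrix_error_body_example = {
    "errcode": "M_FORBIDDEN",
    "error": "You are not invited to this room.",
}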

View File

@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import socket
from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
from twisted.internet import defer, reactor
@@ -30,7 +31,10 @@ logger = logging.getLogger(__name__)
SERVER_CACHE = {}
# our record of an individual server which can be tried to reach a destination.
#
# "host" is actually a dotted-quad or ipv6 address string. Except when there's
# no SRV record, in which case it is the original hostname.
_Server = collections.namedtuple(
"_Server", "priority weight host port expires"
)
@@ -219,9 +223,10 @@ class SRVClientEndpoint(object):
return self.default_server
else:
raise ConnectError(
"Not server available for %s" % self.service_name
"No server available for %s" % self.service_name
)
# look for all servers with the same priority
min_priority = self.servers[0].priority
weight_indexes = list(
(index, server.weight + 1)
@@ -231,11 +236,22 @@ class SRVClientEndpoint(object):
total_weight = sum(weight for index, weight in weight_indexes)
target_weight = random.randint(0, total_weight)
for index, weight in weight_indexes:
target_weight -= weight
if target_weight <= 0:
server = self.servers[index]
# XXX: this looks totally dubious:
#
# (a) we never reuse a server until we have been through
# all of the servers at the same priority, so if the
# weights are A: 100, B:1, we always do ABABAB instead of
# AAAA...AAAB (approximately).
#
# (b) After using all the servers at the lowest priority,
# we move onto the next priority. We should only use the
# second priority if servers at the top priority are
# unreachable.
#
del self.servers[index]
self.used_servers.append(server)
return server
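# --- Editor's note: a hedged sketch of the RFC 2782 behaviour the XXX
# comment above alludes to: select repeatedly (with replacement) among the
# lowest-priority servers, weighted by their SRV weights, and only fall
# back to higher priority values when every lower one is unreachable.
# This is an illustration, not the code this diff ships.
def _pick_server_sketch(servers):
    # servers: _Server tuples, pre-sorted so the lowest priority comes first.
    lowest_priority = servers[0].priority
    candidates = [s for s in servers if s.priority == lowest_priority]
    total_weight = sum(s.weight + 1 for s in candidates)
    target = random.randint(0, total_weight)
    for server in candidates:
        target -= server.weight + 1
        if target <= 0:
            return server
    return candidates[-1]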
@@ -280,26 +296,21 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t
continue
payload = answer.payload
host = str(payload.target)
srv_ttl = answer.ttl
try:
answers, _, _ = yield dns_client.lookupAddress(host)
except DNSNameError:
continue
hosts = yield _get_hosts_for_srv_record(
dns_client, str(payload.target)
)
for answer in answers:
if answer.type == dns.A and answer.payload:
ip = answer.payload.dottedQuad()
host_ttl = min(srv_ttl, answer.ttl)
for (ip, ttl) in hosts:
host_ttl = min(answer.ttl, ttl)
servers.append(_Server(
host=ip,
port=int(payload.port),
priority=int(payload.priority),
weight=int(payload.weight),
expires=int(clock.time()) + host_ttl,
))
servers.append(_Server(
host=ip,
port=int(payload.port),
priority=int(payload.priority),
weight=int(payload.weight),
expires=int(clock.time()) + host_ttl,
))
servers.sort()
cache[service_name] = list(servers)
@@ -317,3 +328,68 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t
raise e
defer.returnValue(servers)
@defer.inlineCallbacks
def _get_hosts_for_srv_record(dns_client, host):
"""Look up each of the hosts in a SRV record
Args:
dns_client (twisted.names.dns.IResolver):
host (basestring): host to look up
Returns:
Deferred[list[(str, int)]]: a list of (host, ttl) pairs
"""
ip4_servers = []
ip6_servers = []
def cb(res):
# lookupAddress and lookupIP6Address return a three-tuple
# giving the answer, authority, and additional sections of the
# response.
#
# we only care about the answers.
return res[0]
def eb(res):
res.trap(DNSNameError)
return []
# no logcontexts here, so we can safely fire these off and gatherResults
d1 = dns_client.lookupAddress(host).addCallbacks(cb, eb)
d2 = dns_client.lookupIPV6Address(host).addCallbacks(cb, eb)
results = yield defer.gatherResults([d1, d2], consumeErrors=True)
for result in results:
for answer in result:
if not answer.payload:
continue
try:
if answer.type == dns.A:
ip = answer.payload.dottedQuad()
ip4_servers.append((ip, answer.ttl))
elif answer.type == dns.AAAA:
ip = socket.inet_ntop(
socket.AF_INET6, answer.payload.address,
)
ip6_servers.append((ip, answer.ttl))
else:
# The most likely candidate here is a CNAME record;
# RFC 2782 says SRV records may not point to aliases.
logger.warn(
"Ignoring unexpected DNS record type %s for %s",
answer.type, host,
)
continue
except Exception as e:
logger.warn("Ignoring invalid DNS response for %s: %s",
host, e)
continue
# keep the ipv4 results before the ipv6 results, mostly to match historical
# behaviour.
defer.returnValue(ip4_servers + ip6_servers)
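# --- Editor's note: a hedged usage sketch of the resolver above. The
# service name and the connect() call are placeholders for illustration.
@defer.inlineCallbacks
def _resolve_and_dial_sketch(connect):
    servers = yield resolve_service("_matrix._tcp.example.org")
    for server in servers:
        # server.host is now an IPv4 or IPv6 address literal (or, with no
        # SRV record, the original hostname), ready to dial.
        connect(server.host, server.port)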

View File

@@ -125,6 +125,8 @@ class MatrixFederationHttpClient(object):
code >= 300.
Fails with ``NotRetryingDestination`` if we are not yet ready
to retry this server.
(May also fail with plenty of other Exceptions for things like DNS
failures, connection failures, SSL failures.)
"""
limiter = yield synapse.util.retryutils.get_retry_limiter(
destination,
@@ -302,8 +304,10 @@ class MatrixFederationHttpClient(object):
Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body. On a 4xx or 5xx error response a
CodeMessageException is raised.
will be the decoded JSON body.
Fails with ``HTTPRequestException`` if we get an HTTP response
code >= 300.
Fails with ``NotRetryingDestination`` if we are not yet ready
to retry this server.
@@ -360,8 +364,10 @@ class MatrixFederationHttpClient(object):
try the request anyway.
Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body. On a 4xx or 5xx error response a
CodeMessageException is raised.
will be the decoded JSON body.
Fails with ``HTTPRequestException`` if we get an HTTP response
code >= 300.
Fails with ``NotRetryingDestination`` if we are not yet ready
to retry this server.
@@ -410,10 +416,11 @@ class MatrixFederationHttpClient(object):
ignore_backoff (bool): true to ignore the historical backoff data
and try the request anyway.
Returns:
Deferred: Succeeds when we get *any* HTTP response.
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.
The result of the deferred is a tuple of `(code, response)`,
where `response` is a dict representing the decoded JSON body.
Fails with ``HTTPRequestException`` if we get an HTTP response
code >= 300.
Fails with ``NotRetryingDestination`` if we are not yet ready
to retry this server.

View File

@@ -412,7 +412,7 @@ def set_cors_headers(request):
)
request.setHeader(
"Access-Control-Allow-Headers",
"Origin, X-Requested-With, Content-Type, Accept"
"Origin, X-Requested-With, Content-Type, Accept, Authorization"
)

View File

@@ -163,6 +163,8 @@ class Notifier(object):
self.store = hs.get_datastore()
self.pending_new_room_events = []
self.replication_callbacks = []
self.clock = hs.get_clock()
self.appservice_handler = hs.get_application_service_handler()
@@ -202,7 +204,12 @@ class Notifier(object):
lambda: len(self.user_to_user_stream),
)
@preserve_fn
def add_replication_callback(self, cb):
"""Add a callback that will be called when some new data is available.
Callback is not given any arguments.
"""
self.replication_callbacks.append(cb)
def on_new_room_event(self, event, room_stream_id, max_room_stream_id,
extra_users=[]):
""" Used by handlers to inform the notifier something has happened
@@ -216,15 +223,13 @@ class Notifier(object):
until all previous events have been persisted before notifying
the client streams.
"""
with PreserveLoggingContext():
self.pending_new_room_events.append((
room_stream_id, event, extra_users
))
self._notify_pending_new_room_events(max_room_stream_id)
self.pending_new_room_events.append((
room_stream_id, event, extra_users
))
self._notify_pending_new_room_events(max_room_stream_id)
self.notify_replication()
self.notify_replication()
@preserve_fn
def _notify_pending_new_room_events(self, max_room_stream_id):
"""Notify for the room events that were queued waiting for a previous
event to be persisted.
@@ -242,14 +247,17 @@ class Notifier(object):
else:
self._on_new_room_event(event, room_stream_id, extra_users)
@preserve_fn
def _on_new_room_event(self, event, room_stream_id, extra_users=[]):
"""Notify any user streams that are interested in this room event"""
# poke any interested application service.
self.appservice_handler.notify_interested_services(room_stream_id)
preserve_fn(self.appservice_handler.notify_interested_services)(
room_stream_id
)
if self.federation_sender:
self.federation_sender.notify_new_events(room_stream_id)
preserve_fn(self.federation_sender.notify_new_events)(
room_stream_id
)
if event.type == EventTypes.Member and event.membership == Membership.JOIN:
self._user_joined_room(event.state_key, event.room_id)
@@ -260,7 +268,6 @@ class Notifier(object):
rooms=[event.room_id],
)
@preserve_fn
def on_new_event(self, stream_key, new_token, users=[], rooms=[]):
""" Used to inform listeners that something has happend event wise.
@@ -287,7 +294,6 @@ class Notifier(object):
self.notify_replication()
@preserve_fn
def on_new_replication_data(self):
"""Used to inform replication listeners that something has happend
without waking up any of the normal user event streams"""
@@ -510,6 +516,9 @@ class Notifier(object):
self.replication_deferred = ObservableDeferred(defer.Deferred())
deferred.callback(None)
for cb in self.replication_callbacks:
preserve_fn(cb)()
@defer.inlineCallbacks
def wait_for_replication(self, callback, timeout):
"""Wait for an event to happen.

View File

@@ -15,7 +15,7 @@
from twisted.internet import defer
from .bulk_push_rule_evaluator import evaluator_for_event
from .bulk_push_rule_evaluator import BulkPushRuleEvaluator
from synapse.util.metrics import Measure
@@ -24,11 +24,12 @@ import logging
logger = logging.getLogger(__name__)
class ActionGenerator:
class ActionGenerator(object):
def __init__(self, hs):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.bulk_evaluator = BulkPushRuleEvaluator(hs)
# really we want to get all user ids and all profile tags too,
# since we want the actions for each profile tag for every user and
# also actions for a client with no profile tag for each user.
@@ -38,16 +39,11 @@ class ActionGenerator:
@defer.inlineCallbacks
def handle_push_actions_for_event(self, event, context):
with Measure(self.clock, "evaluator_for_event"):
bulk_evaluator = yield evaluator_for_event(
event, self.hs, self.store, context
)
with Measure(self.clock, "action_for_event_by_user"):
actions_by_user = yield bulk_evaluator.action_for_event_by_user(
actions_by_user = yield self.bulk_evaluator.action_for_event_by_user(
event, context
)
context.push_actions = [
(uid, actions) for uid, actions in actions_by_user.items()
(uid, actions) for uid, actions in actions_by_user.iteritems()
]

View File

@@ -19,65 +19,106 @@ from twisted.internet import defer
from .push_rule_evaluator import PushRuleEvaluatorForEvent
from synapse.api.constants import EventTypes
from synapse.visibility import filter_events_for_clients_context
from synapse.api.constants import EventTypes, Membership
from synapse.metrics import get_metrics_for
from synapse.util.caches import metrics as cache_metrics
from synapse.util.caches.descriptors import cached
from synapse.util.async import Linearizer
from collections import namedtuple
logger = logging.getLogger(__name__)
@defer.inlineCallbacks
def evaluator_for_event(event, hs, store, context):
rules_by_user = yield store.bulk_get_push_rules_for_room(
event, context
)
rules_by_room = {}
# if this event is an invite event, we may need to run rules for the user
# who's been invited, otherwise they won't get told they've been invited
if event.type == 'm.room.member' and event.content['membership'] == 'invite':
invited_user = event.state_key
if invited_user and hs.is_mine_id(invited_user):
has_pusher = yield store.user_has_pusher(invited_user)
if has_pusher:
rules_by_user = dict(rules_by_user)
rules_by_user[invited_user] = yield store.get_push_rules_for_user(
invited_user
)
push_metrics = get_metrics_for(__name__)
defer.returnValue(BulkPushRuleEvaluator(
event.room_id, rules_by_user, store
))
push_rules_invalidation_counter = push_metrics.register_counter(
"push_rules_invalidation_counter"
)
push_rules_state_size_counter = push_metrics.register_counter(
"push_rules_state_size_counter"
)
# Measures whether we use the fast path of using state deltas, or if we have to
# recalculate from scratch
push_rules_delta_state_cache_metric = cache_metrics.register_cache(
"cache",
size_callback=lambda: 0, # Meaningless size, as this isn't a cache that stores values
cache_name="push_rules_delta_state_cache_metric",
)
class BulkPushRuleEvaluator:
class BulkPushRuleEvaluator(object):
"""Calculates the outcome of push rules for an event for all users in the
room at once.
"""
Runs push rules for all users in a room.
This is faster than running PushRuleEvaluator for each user because it
fetches all the rules for all the users in one (batched) db query
rather than doing multiple queries per-user. It currently uses
the same logic to run the actual rules, but could be optimised further
(see https://matrix.org/jira/browse/SYN-562)
"""
def __init__(self, room_id, rules_by_user, store):
self.room_id = room_id
self.rules_by_user = rules_by_user
self.store = store
def __init__(self, hs):
self.hs = hs
self.store = hs.get_datastore()
self.room_push_rule_cache_metrics = cache_metrics.register_cache(
"cache",
size_callback=lambda: 0, # There's no good value for this
cache_name="room_push_rule_cache",
)
@defer.inlineCallbacks
def _get_rules_for_event(self, event, context):
"""This gets the rules for all users in the room at the time of the event,
as well as the push rules for the invitee if the event is an invite.
Returns:
dict of user_id -> push_rules
"""
room_id = event.room_id
rules_for_room = self._get_rules_for_room(room_id)
rules_by_user = yield rules_for_room.get_rules(event, context)
# if this event is an invite event, we may need to run rules for the user
# who's been invited, otherwise they won't get told they've been invited
if event.type == 'm.room.member' and event.content['membership'] == 'invite':
invited = event.state_key
if invited and self.hs.is_mine_id(invited):
has_pusher = yield self.store.user_has_pusher(invited)
if has_pusher:
rules_by_user = dict(rules_by_user)
rules_by_user[invited] = yield self.store.get_push_rules_for_user(
invited
)
defer.returnValue(rules_by_user)
@cached()
def _get_rules_for_room(self, room_id):
"""Get the current RulesForRoom object for the given room id
Returns:
RulesForRoom
"""
# It's important that RulesForRoom gets added to self._get_rules_for_room.cache
# before any lookup methods get called on it as otherwise there may be
# a race if invalidate_all gets called (which assumes it's in the cache)
return RulesForRoom(
self.hs, room_id, self._get_rules_for_room.cache,
self.room_push_rule_cache_metrics,
)
@defer.inlineCallbacks
def action_for_event_by_user(self, event, context):
"""Given an event and context, evaluate the push rules and return
the results
Returns:
dict of user_id -> action
"""
rules_by_user = yield self._get_rules_for_event(event, context)
actions_by_user = {}
# None of these users can be peeking since this list of users comes
# from the set of users in the room, so we know for sure they're all
# actually in the room.
user_tuples = [
(u, False) for u in self.rules_by_user.keys()
]
filtered_by_user = yield filter_events_for_clients_context(
self.store, user_tuples, [event], {event.event_id: context}
)
room_members = yield self.store.get_joined_users_from_context(
event, context
)
@@ -86,21 +127,26 @@ class BulkPushRuleEvaluator:
condition_cache = {}
for uid, rules in self.rules_by_user.items():
display_name = room_members.get(uid, {}).get("display_name", None)
for uid, rules in rules_by_user.iteritems():
if event.sender == uid:
continue
if not event.is_state():
is_ignored = yield self.store.is_ignored_by(event.sender, uid)
if is_ignored:
continue
display_name = None
profile_info = room_members.get(uid)
if profile_info:
display_name = profile_info.display_name
if not display_name:
# Handle the case where we are pushing a membership event to
# that user, as they might not be already joined.
if event.type == EventTypes.Member and event.state_key == uid:
display_name = event.content.get("displayname", None)
filtered = filtered_by_user[uid]
if len(filtered) == 0:
continue
if filtered[0].sender == uid:
continue
for rule in rules:
if 'enabled' in rule and not rule['enabled']:
continue
@@ -134,3 +180,264 @@ def _condition_checker(evaluator, conditions, uid, display_name, cache):
return False
return True
class RulesForRoom(object):
"""Caches push rules for users in a room.
This efficiently handles users joining/leaving the room by not invalidating
the entire cache for the room.
"""
def __init__(self, hs, room_id, rules_for_room_cache, room_push_rule_cache_metrics):
"""
Args:
hs (HomeServer)
room_id (str)
rules_for_room_cache (Cache): The cache object that caches these
RulesForRoom objects.
room_push_rule_cache_metrics (CacheMetric)
"""
self.room_id = room_id
self.is_mine_id = hs.is_mine_id
self.store = hs.get_datastore()
self.room_push_rule_cache_metrics = room_push_rule_cache_metrics
self.linearizer = Linearizer(name="rules_for_room")
self.member_map = {} # event_id -> (user_id, state)
self.rules_by_user = {} # user_id -> rules
# The last state group we updated the caches for. If the state_group of
# a new event comes along, we know that we can just return the cached
# result.
# On invalidation of the rules themselves (if the user changes them),
# we invalidate everything and set state_group to `object()`
self.state_group = object()
# A sequence number to keep track of when we're allowed to update the
# cache. We bump the sequence number when we invalidate the cache. If
# the sequence number changes while we're calculating stuff we should
# not update the cache with it.
self.sequence = 0
# A cache of user_ids that we *know* aren't interesting, e.g. user_ids
# owned by AS's, or remote users, etc. (I.e. users we will never need to
# calculate push for)
# These never need to be invalidated as we will never set up push for
# them.
self.uninteresting_user_set = set()
# We need to be careful with the cache invalidation callbacks, as
# otherwise the invalidation callback holds a reference to the object,
# potentially causing it to leak.
# To get around this we pass a function that on invalidation looks up
# the RulesForRoom entry in the cache, rather than keeping a reference
# to self around in the callback.
self.invalidate_all_cb = _Invalidation(rules_for_room_cache, room_id)
@defer.inlineCallbacks
def get_rules(self, event, context):
"""Given an event context return the rules for all users who are
currently in the room.
"""
state_group = context.state_group
if state_group and self.state_group == state_group:
logger.debug("Using cached rules for %r", self.room_id)
self.room_push_rule_cache_metrics.inc_hits()
defer.returnValue(self.rules_by_user)
with (yield self.linearizer.queue(())):
if state_group and self.state_group == state_group:
logger.debug("Using cached rules for %r", self.room_id)
self.room_push_rule_cache_metrics.inc_hits()
defer.returnValue(self.rules_by_user)
self.room_push_rule_cache_metrics.inc_misses()
ret_rules_by_user = {}
missing_member_event_ids = {}
if state_group and self.state_group == context.prev_group:
# If we have a simple delta then we can reuse most of the previous
# results.
ret_rules_by_user = self.rules_by_user
current_state_ids = context.delta_ids
push_rules_delta_state_cache_metric.inc_hits()
else:
current_state_ids = context.current_state_ids
push_rules_delta_state_cache_metric.inc_misses()
push_rules_state_size_counter.inc_by(len(current_state_ids))
logger.debug(
"Looking for member changes in %r %r", state_group, current_state_ids
)
# Loop through to see which member events we've seen and have rules
# for and which we need to fetch
for key in current_state_ids:
typ, user_id = key
if typ != EventTypes.Member:
continue
if user_id in self.uninteresting_user_set:
continue
if not self.is_mine_id(user_id):
self.uninteresting_user_set.add(user_id)
continue
if self.store.get_if_app_services_interested_in_user(user_id):
self.uninteresting_user_set.add(user_id)
continue
event_id = current_state_ids[key]
res = self.member_map.get(event_id, None)
if res:
user_id, state = res
if state == Membership.JOIN:
rules = self.rules_by_user.get(user_id, None)
if rules:
ret_rules_by_user[user_id] = rules
continue
# If a user has left a room we remove their push rule. If they
# joined then we re-add it later in _update_rules_with_member_event_ids
ret_rules_by_user.pop(user_id, None)
missing_member_event_ids[user_id] = event_id
if missing_member_event_ids:
# If we have some member events we haven't seen, look them up
# and fetch push rules for them if appropriate.
logger.debug("Found new member events %r", missing_member_event_ids)
yield self._update_rules_with_member_event_ids(
ret_rules_by_user, missing_member_event_ids, state_group, event
)
else:
# The push rules didn't change but let's update the cache anyway
self.update_cache(
self.sequence,
members={}, # There were no membership changes
rules_by_user=ret_rules_by_user,
state_group=state_group
)
if logger.isEnabledFor(logging.DEBUG):
logger.debug(
"Returning push rules for %r %r",
self.room_id, ret_rules_by_user.keys(),
)
defer.returnValue(ret_rules_by_user)
@defer.inlineCallbacks
def _update_rules_with_member_event_ids(self, ret_rules_by_user, member_event_ids,
state_group, event):
"""Update the partially filled rules_by_user dict by fetching rules for
any newly joined users in the `member_event_ids` list.
Args:
ret_rules_by_user (dict): Partially filled dict of push rules. Gets
updated with any new rules.
member_event_ids (list): List of event ids for membership events that
have happened since the last time we filled rules_by_user
state_group: The state group we are currently computing push rules
for. Used when updating the cache.
"""
sequence = self.sequence
rows = yield self.store._simple_select_many_batch(
table="room_memberships",
column="event_id",
iterable=member_event_ids.values(),
retcols=('user_id', 'membership', 'event_id'),
keyvalues={},
batch_size=500,
desc="_get_rules_for_member_event_ids",
)
members = {
row["event_id"]: (row["user_id"], row["membership"])
for row in rows
}
# If the event is a join event then it will be in the current state events
# map but not in the DB, so we have to explicitly insert it.
if event.type == EventTypes.Member:
for event_id in member_event_ids.itervalues():
if event_id == event.event_id:
members[event_id] = (event.state_key, event.membership)
if logger.isEnabledFor(logging.DEBUG):
logger.debug("Found members %r: %r", self.room_id, members.values())
interested_in_user_ids = set(
user_id for user_id, membership in members.itervalues()
if membership == Membership.JOIN
)
logger.debug("Joined: %r", interested_in_user_ids)
if_users_with_pushers = yield self.store.get_if_users_have_pushers(
interested_in_user_ids,
on_invalidate=self.invalidate_all_cb,
)
user_ids = set(
uid for uid, have_pusher in if_users_with_pushers.iteritems() if have_pusher
)
logger.debug("With pushers: %r", user_ids)
users_with_receipts = yield self.store.get_users_with_read_receipts_in_room(
self.room_id, on_invalidate=self.invalidate_all_cb,
)
logger.debug("With receipts: %r", users_with_receipts)
# any users with pushers must be ours: they have pushers
for uid in users_with_receipts:
if uid in interested_in_user_ids:
user_ids.add(uid)
rules_by_user = yield self.store.bulk_get_push_rules(
user_ids, on_invalidate=self.invalidate_all_cb,
)
ret_rules_by_user.update(
item for item in rules_by_user.iteritems() if item[0] is not None
)
self.update_cache(sequence, members, ret_rules_by_user, state_group)
def invalidate_all(self):
# Note: Don't hand this function directly to an invalidation callback
# as it keeps a reference to self and will stop this instance from being
# GC'd if it gets dropped from the rules_for_room cache. Instead use
# `self.invalidate_all_cb`
logger.debug("Invalidating RulesForRoom for %r", self.room_id)
self.sequence += 1
self.state_group = object()
self.member_map = {}
self.rules_by_user = {}
push_rules_invalidation_counter.inc()
def update_cache(self, sequence, members, rules_by_user, state_group):
if sequence == self.sequence:
self.member_map.update(members)
self.rules_by_user = rules_by_user
self.state_group = state_group
class _Invalidation(namedtuple("_Invalidation", ("cache", "room_id"))):
# We rely on _CacheContext implementing __eq__ and __hash__ sensibly,
# which namedtuple does for us (i.e. two _CacheContext are the same if
# their caches and keys match). This is important in particular to
# dedupe when we add callbacks to lru cache nodes, otherwise the number
# of callbacks would grow.
def __call__(self):
rules = self.cache.get(self.room_id, None, update_metrics=False)
if rules:
rules.invalidate_all()
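
The pattern above generalises: handing a bound method such as self.invalidate_all to another cache as its invalidation callback would pin the RulesForRoom instance in memory for as long as the callback stays registered. Below is a minimal, self-contained sketch of the same trick; SimpleCache, Tracker and _Invalidate are hypothetical stand-ins, not Synapse classes.

from collections import namedtuple

class SimpleCache(object):
    # Toy dict-backed cache standing in for Synapse's Cache class.
    def __init__(self):
        self._entries = {}

    def get(self, key, default=None):
        return self._entries.get(key, default)

    def set(self, key, value):
        self._entries[key] = value

class _Invalidate(namedtuple("_Invalidate", ("cache", "key"))):
    # Holds only (cache, key) -- no reference to the Tracker itself -- so a
    # Tracker dropped from the cache can be garbage collected.
    def __call__(self):
        tracker = self.cache.get(self.key)
        if tracker:
            tracker.invalidate_all()

class Tracker(object):
    # Plays the role of RulesForRoom: per-room state plus invalidate_all().
    def __init__(self, cache, key):
        self.state = {"some": "state"}
        self.invalidate_all_cb = _Invalidate(cache, key)

    def invalidate_all(self):
        self.state = {}

cache = SimpleCache()
# As the comment on _get_rules_for_room notes, the entry must be in the
# cache *before* the callback can fire, since the callback looks it up there.
cache.set("!room:example.org", Tracker(cache, "!room:example.org"))
cache.get("!room:example.org").invalidate_all_cb()
assert cache.get("!room:example.org").state == {}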


@@ -21,7 +21,6 @@ import logging
from synapse.util.metrics import Measure
from synapse.util.logcontext import LoggingContext
from mailer import Mailer
logger = logging.getLogger(__name__)
@@ -56,8 +55,10 @@ class EmailPusher(object):
This shares quite a bit of code with httpusher: it would be good to
factor out the common parts
"""
def __init__(self, hs, pusherdict):
def __init__(self, hs, pusherdict, mailer):
self.hs = hs
self.mailer = mailer
self.store = self.hs.get_datastore()
self.clock = self.hs.get_clock()
self.pusher_id = pusherdict['id']
@@ -73,16 +74,6 @@ class EmailPusher(object):
self.processing = False
if self.hs.config.email_enable_notifs:
if 'data' in pusherdict and 'brand' in pusherdict['data']:
app_name = pusherdict['data']['brand']
else:
app_name = self.hs.config.email_app_name
self.mailer = Mailer(self.hs, app_name)
else:
self.mailer = None
@defer.inlineCallbacks
def on_started(self):
if self.mailer is not None:


@@ -244,6 +244,26 @@ class HttpPusher(object):
@defer.inlineCallbacks
def _build_notification_dict(self, event, tweaks, badge):
if self.data.get('format') == 'event_id_only':
d = {
'notification': {
'event_id': event.event_id,
'room_id': event.room_id,
'counts': {
'unread': badge,
},
'devices': [
{
'app_id': self.app_id,
'pushkey': self.pushkey,
'pushkey_ts': long(self.pushkey_ts / 1000),
'data': self.data_minus_url,
}
]
}
}
defer.returnValue(d)
ctx = yield push_tools.get_context_for_event(
self.store, self.state_handler, event, self.user_id
)
@@ -275,7 +295,7 @@ class HttpPusher(object):
if event.type == 'm.room.member':
d['notification']['membership'] = event.content['membership']
d['notification']['user_is_target'] = event.state_key == self.user_id
if 'content' in event:
if not self.hs.config.push_redact_content and 'content' in event:
d['notification']['content'] = event.content
# We no longer send aliases separately, instead, we send the human


@@ -78,23 +78,17 @@ ALLOWED_ATTRS = {
class Mailer(object):
def __init__(self, hs, app_name):
def __init__(self, hs, app_name, notif_template_html, notif_template_text):
self.hs = hs
self.notif_template_html = notif_template_html
self.notif_template_text = notif_template_text
self.store = self.hs.get_datastore()
self.macaroon_gen = self.hs.get_macaroon_generator()
self.state_handler = self.hs.get_state_handler()
loader = jinja2.FileSystemLoader(self.hs.config.email_template_dir)
self.app_name = app_name
logger.info("Created Mailer for app_name %s" % app_name)
env = jinja2.Environment(loader=loader)
env.filters["format_ts"] = format_ts_filter
env.filters["mxc_to_http"] = self.mxc_to_http_filter
self.notif_template_html = env.get_template(
self.hs.config.email_notif_template_html
)
self.notif_template_text = env.get_template(
self.hs.config.email_notif_template_text
)
@defer.inlineCallbacks
def send_notification_mail(self, app_id, user_id, email_address,
@@ -200,7 +194,11 @@ class Mailer(object):
yield sendmail(
self.hs.config.email_smtp_host,
raw_from, raw_to, multipart_msg.as_string(),
port=self.hs.config.email_smtp_port
port=self.hs.config.email_smtp_port,
requireAuthentication=self.hs.config.email_smtp_user is not None,
username=self.hs.config.email_smtp_user,
password=self.hs.config.email_smtp_pass,
requireTransportSecurity=self.hs.config.require_transport_security
)
@defer.inlineCallbacks
@@ -477,28 +475,6 @@ class Mailer(object):
urllib.urlencode(params),
)
def mxc_to_http_filter(self, value, width, height, resize_method="crop"):
if value[0:6] != "mxc://":
return ""
serverAndMediaId = value[6:]
fragment = None
if '#' in serverAndMediaId:
(serverAndMediaId, fragment) = serverAndMediaId.split('#', 1)
fragment = "#" + fragment
params = {
"width": width,
"height": height,
"method": resize_method,
}
return "%s_matrix/media/v1/thumbnail/%s?%s%s" % (
self.hs.config.public_baseurl,
serverAndMediaId,
urllib.urlencode(params),
fragment or "",
)
def safe_markup(raw_html):
return jinja2.Markup(bleach.linkify(bleach.clean(
@@ -539,3 +515,52 @@ def string_ordinal_total(s):
def format_ts_filter(value, format):
return time.strftime(format, time.localtime(value / 1000))
def load_jinja2_templates(config):
"""Load the jinja2 email templates from disk
Returns:
(notif_template_html, notif_template_text)
"""
logger.info("loading jinja2")
loader = jinja2.FileSystemLoader(config.email_template_dir)
env = jinja2.Environment(loader=loader)
env.filters["format_ts"] = format_ts_filter
env.filters["mxc_to_http"] = _create_mxc_to_http_filter(config)
notif_template_html = env.get_template(
config.email_notif_template_html
)
notif_template_text = env.get_template(
config.email_notif_template_text
)
return notif_template_html, notif_template_text
def _create_mxc_to_http_filter(config):
def mxc_to_http_filter(value, width, height, resize_method="crop"):
if value[0:6] != "mxc://":
return ""
serverAndMediaId = value[6:]
fragment = None
if '#' in serverAndMediaId:
(serverAndMediaId, fragment) = serverAndMediaId.split('#', 1)
fragment = "#" + fragment
params = {
"width": width,
"height": height,
"method": resize_method,
}
return "%s_matrix/media/v1/thumbnail/%s?%s%s" % (
config.public_baseurl,
serverAndMediaId,
urllib.urlencode(params),
fragment or "",
)
return mxc_to_http_filter
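
A usage sketch for the extracted filter (the config object and its values are hypothetical, and urllib.urlencode() does not guarantee query-parameter order):

class _FakeConfig(object):
    public_baseurl = "https://matrix.example.org/"

mxc_to_http = _create_mxc_to_http_filter(_FakeConfig())
print(mxc_to_http("mxc://example.org/SomeMediaId", 32, 32))
# e.g. https://matrix.example.org/_matrix/media/v1/thumbnail/example.org/SomeMediaId?width=32&height=32&method=crop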


@@ -200,7 +200,9 @@ def _glob_to_re(glob, word_boundary):
return re.compile(r, flags=re.IGNORECASE)
def _flatten_dict(d, prefix=[], result={}):
def _flatten_dict(d, prefix=[], result=None):
if result is None:
result = {}
for key, value in d.items():
if isinstance(value, basestring):
result[".".join(prefix + [key])] = value.lower()


@@ -26,22 +26,54 @@ logger = logging.getLogger(__name__)
# process works fine)
try:
from synapse.push.emailpusher import EmailPusher
from synapse.push.mailer import Mailer, load_jinja2_templates
except:
pass
def create_pusher(hs, pusherdict):
logger.info("trying to create_pusher for %r", pusherdict)
class PusherFactory(object):
def __init__(self, hs):
self.hs = hs
PUSHER_TYPES = {
"http": HttpPusher,
}
self.pusher_types = {
"http": HttpPusher,
}
logger.info("email enable notifs: %r", hs.config.email_enable_notifs)
if hs.config.email_enable_notifs:
PUSHER_TYPES["email"] = EmailPusher
logger.info("defined email pusher type")
logger.info("email enable notifs: %r", hs.config.email_enable_notifs)
if hs.config.email_enable_notifs:
self.mailers = {} # app_name -> Mailer
if pusherdict['kind'] in PUSHER_TYPES:
logger.info("found pusher")
return PUSHER_TYPES[pusherdict['kind']](hs, pusherdict)
templates = load_jinja2_templates(hs.config)
self.notif_template_html, self.notif_template_text = templates
self.pusher_types["email"] = self._create_email_pusher
logger.info("defined email pusher type")
def create_pusher(self, pusherdict):
logger.info("trying to create_pusher for %r", pusherdict)
if pusherdict['kind'] in self.pusher_types:
logger.info("found pusher")
return self.pusher_types[pusherdict['kind']](self.hs, pusherdict)
def _create_email_pusher(self, _hs, pusherdict):
app_name = self._app_name_from_pusherdict(pusherdict)
mailer = self.mailers.get(app_name)
if not mailer:
mailer = Mailer(
hs=self.hs,
app_name=app_name,
notif_template_html=self.notif_template_html,
notif_template_text=self.notif_template_text,
)
self.mailers[app_name] = mailer
return EmailPusher(self.hs, pusherdict, mailer)
def _app_name_from_pusherdict(self, pusherdict):
if 'data' in pusherdict and 'brand' in pusherdict['data']:
app_name = pusherdict['data']['brand']
else:
app_name = self.hs.config.email_app_name
return app_name
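
The factory memoises one Mailer per app_name, so every email pusher with the same brand shares a single instance (and its loaded templates). A toy illustration of that memoisation; the names below are hypothetical, not Synapse APIs:

class ToyFactory(object):
    def __init__(self):
        self.mailers = {}  # app_name -> mailer instance

    def get_mailer(self, app_name):
        mailer = self.mailers.get(app_name)
        if mailer is None:
            mailer = object()  # stands in for an expensive-to-build Mailer
            self.mailers[app_name] = mailer
        return mailer

factory = ToyFactory()
assert factory.get_mailer("Example") is factory.get_mailer("Example")
assert factory.get_mailer("Example") is not factory.get_mailer("Other")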


@@ -16,7 +16,7 @@
from twisted.internet import defer
import pusher
from .pusher import PusherFactory
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
from synapse.util.async import run_on_reactor
@@ -28,6 +28,7 @@ logger = logging.getLogger(__name__)
class PusherPool:
def __init__(self, _hs):
self.hs = _hs
self.pusher_factory = PusherFactory(_hs)
self.start_pushers = _hs.config.start_pushers
self.store = self.hs.get_datastore()
self.clock = self.hs.get_clock()
@@ -48,7 +49,7 @@ class PusherPool:
# will then get pulled out of the database,
# recreated, added and started: this means we have only one
# code path adding pushers.
pusher.create_pusher(self.hs, {
self.pusher_factory.create_pusher({
"id": None,
"user_name": user_id,
"kind": kind,
@@ -186,7 +187,7 @@ class PusherPool:
logger.info("Starting %d pushers", len(pushers))
for pusherdict in pushers:
try:
p = pusher.create_pusher(self.hs, pusherdict)
p = self.pusher_factory.create_pusher(pusherdict)
except:
logger.exception("Couldn't start a pusher: caught Exception")
continue


@@ -31,7 +31,7 @@ REQUIREMENTS = {
"pyyaml": ["yaml"],
"pyasn1": ["pyasn1"],
"daemonize": ["daemonize"],
"py-bcrypt": ["bcrypt"],
"bcrypt": ["bcrypt"],
"pillow": ["PIL"],
"pydenticon": ["pydenticon"],
"ujson": ["ujson"],
@@ -40,6 +40,7 @@ REQUIREMENTS = {
"pymacaroons-pynacl": ["pymacaroons"],
"msgpack-python>=0.3.0": ["msgpack"],
"phonenumbers>=8.2.0": ["phonenumbers"],
"affinity": ["affinity"],
}
CONDITIONAL_REQUIREMENTS = {
"web_client": {


@@ -1,60 +0,0 @@
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.http.server import respond_with_json_bytes, request_handler
from synapse.http.servlet import parse_json_object_from_request
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
class ExpireCacheResource(Resource):
"""
HTTP endpoint for expiring storage caches.
POST /_synapse/replication/expire_cache HTTP/1.1
Content-Type: application/json
{
"invalidate": [
{
"name": "func_name",
"keys": ["key1", "key2"]
}
]
}
"""
def __init__(self, hs):
Resource.__init__(self) # Resource is old-style, so no super()
self.store = hs.get_datastore()
self.version_string = hs.version_string
self.clock = hs.get_clock()
def render_POST(self, request):
self._async_render_POST(request)
return NOT_DONE_YET
@request_handler()
def _async_render_POST(self, request):
content = parse_json_object_from_request(request)
for row in content["invalidate"]:
name = row["name"]
keys = tuple(row["keys"])
getattr(self.store, name).invalidate(keys)
respond_with_json_bytes(request, 200, "{}")


@@ -1,59 +0,0 @@
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.http.server import respond_with_json_bytes, request_handler
from synapse.http.servlet import parse_json_object_from_request
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
from twisted.internet import defer
class PresenceResource(Resource):
"""
HTTP endpoint for marking users as syncing.
POST /_synapse/replication/presence HTTP/1.1
Content-Type: application/json
{
"process_id": "<process_id>",
"syncing_users": ["<user_id>"]
}
"""
def __init__(self, hs):
Resource.__init__(self) # Resource is old-style, so no super()
self.version_string = hs.version_string
self.clock = hs.get_clock()
self.presence_handler = hs.get_presence_handler()
def render_POST(self, request):
self._async_render_POST(request)
return NOT_DONE_YET
@request_handler()
@defer.inlineCallbacks
def _async_render_POST(self, request):
content = parse_json_object_from_request(request)
process_id = content["process_id"]
syncing_user_ids = content["syncing_users"]
yield self.presence_handler.update_external_syncs(
process_id, set(syncing_user_ids)
)
respond_with_json_bytes(request, 200, "{}")


@@ -1,54 +0,0 @@
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.http.server import respond_with_json_bytes, request_handler
from synapse.http.servlet import parse_json_object_from_request
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
from twisted.internet import defer
class PusherResource(Resource):
"""
HTTP endpoint for deleting rejected pushers
"""
def __init__(self, hs):
Resource.__init__(self) # Resource is old-style, so no super()
self.version_string = hs.version_string
self.store = hs.get_datastore()
self.notifier = hs.get_notifier()
self.clock = hs.get_clock()
def render_POST(self, request):
self._async_render_POST(request)
return NOT_DONE_YET
@request_handler()
@defer.inlineCallbacks
def _async_render_POST(self, request):
content = parse_json_object_from_request(request)
for remove in content["remove"]:
yield self.store.delete_pusher_by_app_id_pushkey_user_id(
remove["app_id"],
remove["push_key"],
remove["user_id"],
)
self.notifier.on_new_replication_data()
respond_with_json_bytes(request, 200, "{}")


@@ -1,576 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.http.servlet import parse_integer, parse_string
from synapse.http.server import request_handler, finish_request
from synapse.replication.pusher_resource import PusherResource
from synapse.replication.presence_resource import PresenceResource
from synapse.replication.expire_cache import ExpireCacheResource
from synapse.api.errors import SynapseError
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
from twisted.internet import defer
import ujson as json
import collections
import logging
logger = logging.getLogger(__name__)
REPLICATION_PREFIX = "/_synapse/replication"
STREAM_NAMES = (
("events",),
("presence",),
("typing",),
("receipts",),
("user_account_data", "room_account_data", "tag_account_data",),
("backfill",),
("push_rules",),
("pushers",),
("caches",),
("to_device",),
("public_rooms",),
("federation",),
("device_lists",),
)
class ReplicationResource(Resource):
"""
HTTP endpoint for extracting data from synapse.
The streams of data returned by the endpoint are controlled by the
parameters given to the API. To return a given stream pass a query
parameter with a position in the stream to return data from or the
special value "-1" to return data from the start of the stream.
If there is no data for any of the supplied streams after the given
position then the request will block until there is data for one
of the streams. This allows clients to long-poll this API.
The possible streams are:
* "streams": A special stream returing the positions of other streams.
* "events": The new events seen on the server.
* "presence": Presence updates.
* "typing": Typing updates.
* "receipts": Receipt updates.
* "user_account_data": Top-level per user account data.
* "room_account_data: Per room per user account data.
* "tag_account_data": Per room per user tags.
* "backfill": Old events that have been backfilled from other servers.
* "push_rules": Per user changes to push rules.
* "pushers": Per user changes to their pushers.
* "caches": Cache invalidations.
The API takes two additional query parameters:
* "timeout": How long to wait before returning an empty response.
* "limit": The maximum number of rows to return for the selected streams.
The response is a JSON object with keys for each stream with updates. Under
each key is a JSON object with:
* "position": The current position of the stream.
* "field_names": The names of the fields in each row.
* "rows": The updates as an array of arrays.
There are a number of ways this API could be used:
1) To replicate the contents of the backing database to another database.
2) To be notified when the contents of a shared backing database changes.
3) To "tail" the activity happening on a server for debugging.
In the first case the client would track all of the streams and store its
own copy of the data.
In the second case the client might theoretically just be able to follow
the "streams" stream to track where the other streams are. However in
practise it will probably need to get the contents of the streams in
order to expire the any in-memory caches. Whether it gets the contents
of the streams from this replication API or directly from the backing
store is a matter of taste.
In the third case the client would use the "streams" stream to find what
streams are available and their current positions. Then it can start
long-polling this replication API for new data on those streams.
"""
def __init__(self, hs):
Resource.__init__(self) # Resource is old-style, so no super()
self.version_string = hs.version_string
self.store = hs.get_datastore()
self.sources = hs.get_event_sources()
self.presence_handler = hs.get_presence_handler()
self.typing_handler = hs.get_typing_handler()
self.federation_sender = hs.get_federation_sender()
self.notifier = hs.notifier
self.clock = hs.get_clock()
self.config = hs.get_config()
self.putChild("remove_pushers", PusherResource(hs))
self.putChild("syncing_users", PresenceResource(hs))
self.putChild("expire_cache", ExpireCacheResource(hs))
def render_GET(self, request):
self._async_render_GET(request)
return NOT_DONE_YET
@defer.inlineCallbacks
def current_replication_token(self):
stream_token = yield self.sources.get_current_token()
backfill_token = yield self.store.get_current_backfill_token()
push_rules_token, room_stream_token = self.store.get_push_rules_stream_token()
pushers_token = self.store.get_pushers_stream_token()
caches_token = self.store.get_cache_stream_token()
public_rooms_token = self.store.get_current_public_room_stream_id()
federation_token = self.federation_sender.get_current_token()
device_list_token = self.store.get_device_stream_token()
defer.returnValue(_ReplicationToken(
room_stream_token,
int(stream_token.presence_key),
int(stream_token.typing_key),
int(stream_token.receipt_key),
int(stream_token.account_data_key),
backfill_token,
push_rules_token,
pushers_token,
0, # State stream is no longer a thing
caches_token,
int(stream_token.to_device_key),
int(public_rooms_token),
int(federation_token),
int(device_list_token),
))
@request_handler()
@defer.inlineCallbacks
def _async_render_GET(self, request):
limit = parse_integer(request, "limit", 100)
timeout = parse_integer(request, "timeout", 10 * 1000)
request.setHeader(b"Content-Type", b"application/json")
request_streams = {
name: parse_integer(request, name)
for names in STREAM_NAMES for name in names
}
request_streams["streams"] = parse_string(request, "streams")
federation_ack = parse_integer(request, "federation_ack", None)
def replicate():
return self.replicate(
request_streams, limit,
federation_ack=federation_ack
)
writer = yield self.notifier.wait_for_replication(replicate, timeout)
result = writer.finish()
for stream_name, stream_content in result.items():
logger.info(
"Replicating %d rows of %s from %s -> %s",
len(stream_content["rows"]),
stream_name,
request_streams.get(stream_name),
stream_content["position"],
)
request.write(json.dumps(result, ensure_ascii=False))
finish_request(request)
@defer.inlineCallbacks
def replicate(self, request_streams, limit, federation_ack=None):
writer = _Writer()
current_token = yield self.current_replication_token()
logger.debug("Replicating up to %r", current_token)
if limit == 0:
raise SynapseError(400, "Limit cannot be 0")
yield self.account_data(writer, current_token, limit, request_streams)
yield self.events(writer, current_token, limit, request_streams)
# TODO: implement limit
yield self.presence(writer, current_token, request_streams)
yield self.typing(writer, current_token, request_streams)
yield self.receipts(writer, current_token, limit, request_streams)
yield self.push_rules(writer, current_token, limit, request_streams)
yield self.pushers(writer, current_token, limit, request_streams)
yield self.caches(writer, current_token, limit, request_streams)
yield self.to_device(writer, current_token, limit, request_streams)
yield self.public_rooms(writer, current_token, limit, request_streams)
yield self.device_lists(writer, current_token, limit, request_streams)
self.federation(writer, current_token, limit, request_streams, federation_ack)
self.streams(writer, current_token, request_streams)
logger.debug("Replicated %d rows", writer.total)
defer.returnValue(writer)
def streams(self, writer, current_token, request_streams):
request_token = request_streams.get("streams")
streams = []
if request_token is not None:
if request_token == "-1":
for names, position in zip(STREAM_NAMES, current_token):
streams.extend((name, position) for name in names)
else:
items = zip(
STREAM_NAMES,
current_token,
_ReplicationToken(request_token)
)
for names, current_id, last_id in items:
if last_id < current_id:
streams.extend((name, current_id) for name in names)
if streams:
writer.write_header_and_rows(
"streams", streams, ("name", "position"),
position=str(current_token)
)
@defer.inlineCallbacks
def events(self, writer, current_token, limit, request_streams):
request_events = request_streams.get("events")
request_backfill = request_streams.get("backfill")
if request_events is not None or request_backfill is not None:
if request_events is None:
request_events = current_token.events
if request_backfill is None:
request_backfill = current_token.backfill
no_new_tokens = (
request_events == current_token.events
and request_backfill == current_token.backfill
)
if no_new_tokens:
return
res = yield self.store.get_all_new_events(
request_backfill, request_events,
current_token.backfill, current_token.events,
limit
)
upto_events_token = _position_from_rows(
res.new_forward_events, current_token.events
)
upto_backfill_token = _position_from_rows(
res.new_backfill_events, current_token.backfill
)
if request_events != upto_events_token:
writer.write_header_and_rows("events", res.new_forward_events, (
"position", "event_id", "room_id", "type", "state_key",
), position=upto_events_token)
if request_backfill != upto_backfill_token:
writer.write_header_and_rows("backfill", res.new_backfill_events, (
"position", "event_id", "room_id", "type", "state_key", "redacts",
), position=upto_backfill_token)
writer.write_header_and_rows(
"forward_ex_outliers", res.forward_ex_outliers,
("position", "event_id", "state_group"),
)
writer.write_header_and_rows(
"backward_ex_outliers", res.backward_ex_outliers,
("position", "event_id", "state_group"),
)
@defer.inlineCallbacks
def presence(self, writer, current_token, request_streams):
current_position = current_token.presence
request_presence = request_streams.get("presence")
if request_presence is not None and request_presence != current_position:
presence_rows = yield self.presence_handler.get_all_presence_updates(
request_presence, current_position
)
upto_token = _position_from_rows(presence_rows, current_position)
writer.write_header_and_rows("presence", presence_rows, (
"position", "user_id", "state", "last_active_ts",
"last_federation_update_ts", "last_user_sync_ts",
"status_msg", "currently_active",
), position=upto_token)
@defer.inlineCallbacks
def typing(self, writer, current_token, request_streams):
current_position = current_token.typing
request_typing = request_streams.get("typing")
if request_typing is not None and request_typing != current_position:
# If they have a higher token than current max, we can assume that
# they had been talking to a previous instance of the master. Since
# we reset the token on restart, the best (but hacky) thing we can
# do is to simply resend down all the typing notifications.
if request_typing > current_position:
request_typing = 0
typing_rows = yield self.typing_handler.get_all_typing_updates(
request_typing, current_position
)
upto_token = _position_from_rows(typing_rows, current_position)
writer.write_header_and_rows("typing", typing_rows, (
"position", "room_id", "typing"
), position=upto_token)
@defer.inlineCallbacks
def receipts(self, writer, current_token, limit, request_streams):
current_position = current_token.receipts
request_receipts = request_streams.get("receipts")
if request_receipts is not None and request_receipts != current_position:
receipts_rows = yield self.store.get_all_updated_receipts(
request_receipts, current_position, limit
)
upto_token = _position_from_rows(receipts_rows, current_position)
writer.write_header_and_rows("receipts", receipts_rows, (
"position", "room_id", "receipt_type", "user_id", "event_id", "data"
), position=upto_token)
@defer.inlineCallbacks
def account_data(self, writer, current_token, limit, request_streams):
current_position = current_token.account_data
user_account_data = request_streams.get("user_account_data")
room_account_data = request_streams.get("room_account_data")
tag_account_data = request_streams.get("tag_account_data")
if user_account_data is not None or room_account_data is not None:
if user_account_data is None:
user_account_data = current_position
if room_account_data is None:
room_account_data = current_position
no_new_tokens = (
user_account_data == current_position
and room_account_data == current_position
)
if no_new_tokens:
return
user_rows, room_rows = yield self.store.get_all_updated_account_data(
user_account_data, room_account_data, current_position, limit
)
upto_users_token = _position_from_rows(user_rows, current_position)
upto_rooms_token = _position_from_rows(room_rows, current_position)
writer.write_header_and_rows("user_account_data", user_rows, (
"position", "user_id", "type", "content"
), position=upto_users_token)
writer.write_header_and_rows("room_account_data", room_rows, (
"position", "user_id", "room_id", "type", "content"
), position=upto_rooms_token)
if tag_account_data is not None:
tag_rows = yield self.store.get_all_updated_tags(
tag_account_data, current_position, limit
)
upto_tag_token = _position_from_rows(tag_rows, current_position)
writer.write_header_and_rows("tag_account_data", tag_rows, (
"position", "user_id", "room_id", "tags"
), position=upto_tag_token)
@defer.inlineCallbacks
def push_rules(self, writer, current_token, limit, request_streams):
current_position = current_token.push_rules
push_rules = request_streams.get("push_rules")
if push_rules is not None and push_rules != current_position:
rows = yield self.store.get_all_push_rule_updates(
push_rules, current_position, limit
)
upto_token = _position_from_rows(rows, current_position)
writer.write_header_and_rows("push_rules", rows, (
"position", "event_stream_ordering", "user_id", "rule_id", "op",
"priority_class", "priority", "conditions", "actions"
), position=upto_token)
@defer.inlineCallbacks
def pushers(self, writer, current_token, limit, request_streams):
current_position = current_token.pushers
pushers = request_streams.get("pushers")
if pushers is not None and pushers != current_position:
updated, deleted = yield self.store.get_all_updated_pushers(
pushers, current_position, limit
)
upto_token = _position_from_rows(updated, current_position)
writer.write_header_and_rows("pushers", updated, (
"position", "user_id", "access_token", "profile_tag", "kind",
"app_id", "app_display_name", "device_display_name", "pushkey",
"ts", "lang", "data"
), position=upto_token)
writer.write_header_and_rows("deleted_pushers", deleted, (
"position", "user_id", "app_id", "pushkey"
), position=upto_token)
@defer.inlineCallbacks
def caches(self, writer, current_token, limit, request_streams):
current_position = current_token.caches
caches = request_streams.get("caches")
if caches is not None and caches != current_position:
updated_caches = yield self.store.get_all_updated_caches(
caches, current_position, limit
)
upto_token = _position_from_rows(updated_caches, current_position)
writer.write_header_and_rows("caches", updated_caches, (
"position", "cache_func", "keys", "invalidation_ts"
), position=upto_token)
@defer.inlineCallbacks
def to_device(self, writer, current_token, limit, request_streams):
current_position = current_token.to_device
to_device = request_streams.get("to_device")
if to_device is not None and to_device != current_position:
to_device_rows = yield self.store.get_all_new_device_messages(
to_device, current_position, limit
)
upto_token = _position_from_rows(to_device_rows, current_position)
writer.write_header_and_rows("to_device", to_device_rows, (
"position", "user_id", "device_id", "message_json"
), position=upto_token)
@defer.inlineCallbacks
def public_rooms(self, writer, current_token, limit, request_streams):
current_position = current_token.public_rooms
public_rooms = request_streams.get("public_rooms")
if public_rooms is not None and public_rooms != current_position:
public_rooms_rows = yield self.store.get_all_new_public_rooms(
public_rooms, current_position, limit
)
upto_token = _position_from_rows(public_rooms_rows, current_position)
writer.write_header_and_rows("public_rooms", public_rooms_rows, (
"position", "room_id", "visibility", "appservice_id", "network_id",
), position=upto_token)
def federation(self, writer, current_token, limit, request_streams, federation_ack):
if self.config.send_federation:
return
current_position = current_token.federation
federation = request_streams.get("federation")
if federation is not None and federation != current_position:
federation_rows = self.federation_sender.get_replication_rows(
federation, limit, federation_ack=federation_ack,
)
upto_token = _position_from_rows(federation_rows, current_position)
writer.write_header_and_rows("federation", federation_rows, (
"position", "type", "content",
), position=upto_token)
@defer.inlineCallbacks
def device_lists(self, writer, current_token, limit, request_streams):
current_position = current_token.device_lists
device_lists = request_streams.get("device_lists")
if device_lists is not None and device_lists != current_position:
changes = yield self.store.get_all_device_list_changes_for_remotes(
device_lists,
)
writer.write_header_and_rows("device_lists", changes, (
"position", "user_id", "destination",
), position=current_position)
class _Writer(object):
"""Writes the streams as a JSON object as the response to the request"""
def __init__(self):
self.streams = {}
self.total = 0
def write_header_and_rows(self, name, rows, fields, position=None):
if position is None:
if rows:
position = rows[-1][0]
else:
return
self.streams[name] = {
"position": position if type(position) is int else str(position),
"field_names": fields,
"rows": rows,
}
self.total += len(rows)
def __nonzero__(self):
return bool(self.total)
def finish(self):
return self.streams
class _ReplicationToken(collections.namedtuple("_ReplicationToken", (
"events", "presence", "typing", "receipts", "account_data", "backfill",
"push_rules", "pushers", "state", "caches", "to_device", "public_rooms",
"federation", "device_lists",
))):
__slots__ = []
def __new__(cls, *args):
if len(args) == 1:
streams = [int(value) for value in args[0].split("_")]
if len(streams) < len(cls._fields):
streams.extend([0] * (len(cls._fields) - len(streams)))
return cls(*streams)
else:
return super(_ReplicationToken, cls).__new__(cls, *args)
def __str__(self):
return "_".join(str(value) for value in self)
def _position_from_rows(rows, current_position):
"""Calculates a position to return for a stream. Ideally we want to return the
position of the last row, as that will be the most correct. However, if there
are no rows we fall back to using the current position to stop us from
repeatedly hitting the storage layer unnecessarily, thinking there are updates.
(Not all advances of the token correspond to an actual update)
We can't just always return the current position, as we often limit the
number of rows we replicate, and so the stream may lag. The assumption is
that if the storage layer returns no new rows then we are not lagging and
we are at the `current_position`.
"""
if rows:
return rows[-1][0]
return current_position
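
For context, this whole file is the long-polling HTTP replication API being removed in favour of TCP replication. A sketch of how a worker might have consumed it, using only the query parameters documented in the class docstring above; the base URL is an assumption, and the snippet is Python 2 to match the codebase:

import json
import urllib
import urllib2

BASE_URL = "http://localhost:8008/_synapse/replication"  # assumed address

def fetch(params):
    # Build e.g. ...?events=1234&timeout=30000&limit=100 and decode the JSON.
    return json.loads(urllib2.urlopen(
        "%s?%s" % (BASE_URL, urllib.urlencode(params))
    ).read())

# Learn current positions via the special "streams" stream ("-1" = from start);
# its rows are (name, position) pairs.
positions = dict(fetch({"streams": "-1"})["streams"]["rows"])

# Long-poll for new events; the request blocks (up to `timeout` ms) until
# one of the requested streams has data.
while True:
    result = fetch({"events": positions["events"],
                    "timeout": 30000, "limit": 100})
    if "events" in result:
        for row in result["events"]["rows"]:
            print(row)  # [position, event_id, room_id, type, state_key]
        positions["events"] = result["events"]["position"]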


@@ -15,7 +15,6 @@
from synapse.storage._base import SQLBaseStore
from synapse.storage.engines import PostgresEngine
from twisted.internet import defer
from ._slaved_id_tracker import SlavedIdTracker
@@ -34,8 +33,7 @@ class BaseSlavedStore(SQLBaseStore):
else:
self._cache_id_gen = None
self.expire_cache_url = hs.config.worker_replication_url + "/expire_cache"
self.http_client = hs.get_simple_http_client()
self.hs = hs
def stream_positions(self):
pos = {}
@@ -43,35 +41,20 @@ class BaseSlavedStore(SQLBaseStore):
pos["caches"] = self._cache_id_gen.get_current_token()
return pos
def process_replication(self, result):
stream = result.get("caches")
if stream:
for row in stream["rows"]:
(
position, cache_func, keys, invalidation_ts,
) = row
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "caches":
self._cache_id_gen.advance(token)
for row in rows:
try:
getattr(self, cache_func).invalidate(tuple(keys))
getattr(self, row.cache_func).invalidate(tuple(row.keys))
except AttributeError:
# We probably haven't pulled in the cache in this worker,
# which is fine.
pass
self._cache_id_gen.advance(int(stream["position"]))
return defer.succeed(None)
def _invalidate_cache_and_stream(self, txn, cache_func, keys):
txn.call_after(cache_func.invalidate, keys)
txn.call_after(self._send_invalidation_poke, cache_func, keys)
@defer.inlineCallbacks
def _send_invalidation_poke(self, cache_func, keys):
try:
yield self.http_client.post_json_get_json(self.expire_cache_url, {
"invalidate": [{
"name": cache_func.__name__,
"keys": list(keys),
}]
})
except:
logger.exception("Failed to poke on expire_cache")
self.hs.get_tcp_replication().send_invalidate_cache(cache_func, keys)
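
The slaved stores below all implement this new per-stream hook. Its general shape, as a sketch — everything here other than the process_replication_rows(stream_name, token, rows) signature is hypothetical:

class MySlavedStore(BaseSlavedStore):
    def process_replication_rows(self, stream_name, token, rows):
        if stream_name == "my_stream":
            # Advance the id tracker first, then invalidate per-row caches.
            self._my_id_gen.advance(token)
            for row in rows:  # rows are namedtuple-like, with named fields
                self.get_something_cached.invalidate((row.user_id,))
        return super(MySlavedStore, self).process_replication_rows(
            stream_name, token, rows
        )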


@@ -69,38 +69,25 @@ class SlavedAccountDataStore(BaseSlavedStore):
result["tag_account_data"] = position
return result
def process_replication(self, result):
stream = result.get("user_account_data")
if stream:
self._account_data_id_gen.advance(int(stream["position"]))
for row in stream["rows"]:
position, user_id, data_type = row[:3]
self.get_global_account_data_by_type_for_user.invalidate(
(data_type, user_id,)
)
self.get_account_data_for_user.invalidate((user_id,))
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "tag_account_data":
self._account_data_id_gen.advance(token)
for row in rows:
self.get_tags_for_user.invalidate((row.user_id,))
self._account_data_stream_cache.entity_has_changed(
user_id, position
row.user_id, token
)
stream = result.get("room_account_data")
if stream:
self._account_data_id_gen.advance(int(stream["position"]))
for row in stream["rows"]:
position, user_id = row[:2]
self.get_account_data_for_user.invalidate((user_id,))
elif stream_name == "account_data":
self._account_data_id_gen.advance(token)
for row in rows:
if not row.room_id:
self.get_global_account_data_by_type_for_user.invalidate(
(row.data_type, row.user_id,)
)
self.get_account_data_for_user.invalidate((row.user_id,))
self._account_data_stream_cache.entity_has_changed(
user_id, position
row.user_id, token
)
stream = result.get("tag_account_data")
if stream:
self._account_data_id_gen.advance(int(stream["position"]))
for row in stream["rows"]:
position, user_id = row[:2]
self.get_tags_for_user.invalidate((user_id,))
self._account_data_stream_cache.entity_has_changed(
user_id, position
)
return super(SlavedAccountDataStore, self).process_replication(result)
return super(SlavedAccountDataStore, self).process_replication_rows(
stream_name, token, rows
)


@@ -16,6 +16,7 @@
from ._base import BaseSlavedStore
from synapse.storage import DataStore
from synapse.config.appservice import load_appservices
from synapse.storage.appservice import _make_exclusive_regex
class SlavedApplicationServiceStore(BaseSlavedStore):
@@ -25,6 +26,7 @@ class SlavedApplicationServiceStore(BaseSlavedStore):
hs.config.server_name,
hs.config.app_service_config_files
)
self.exclusive_user_regex = _make_exclusive_regex(self.services_cache)
get_app_service_by_token = DataStore.get_app_service_by_token.__func__
get_app_service_by_user_id = DataStore.get_app_service_by_user_id.__func__
@@ -38,3 +40,6 @@ class SlavedApplicationServiceStore(BaseSlavedStore):
get_appservice_state = DataStore.get_appservice_state.__func__
set_appservice_last_pos = DataStore.set_appservice_last_pos.__func__
set_appservice_state = DataStore.set_appservice_state.__func__
get_if_app_services_interested_in_user = (
DataStore.get_if_app_services_interested_in_user.__func__
)


@@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import BaseSlavedStore
from synapse.storage.client_ips import LAST_SEEN_GRANULARITY
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.caches.descriptors import Cache
class SlavedClientIpStore(BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedClientIpStore, self).__init__(db_conn, hs)
self.client_ip_last_seen = Cache(
name="client_ip_last_seen",
keylen=4,
max_entries=50000 * CACHE_SIZE_FACTOR,
)
def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id):
now = int(self._clock.time_msec())
key = (user_id, access_token, ip)
try:
last_seen = self.client_ip_last_seen.get(key)
except KeyError:
last_seen = None
# Rate-limited inserts
if last_seen is not None and (now - last_seen) < LAST_SEEN_GRANULARITY:
return
self.hs.get_tcp_replication().send_user_ip(
user_id, access_token, ip, user_agent, device_id, now
)
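
The effect of the granularity check: a given (user, token, ip) tuple is forwarded to the master at most once per window. A standalone illustration — the window length below is an assumption for the sake of the example; the real value comes from synapse.storage.client_ips.LAST_SEEN_GRANULARITY:

LAST_SEEN_GRANULARITY = 120 * 1000  # ms; assumed value for illustration

def should_forward(last_seen, now):
    # Mirrors the check in insert_client_ip above: forward only fresh sightings.
    return last_seen is None or (now - last_seen) >= LAST_SEEN_GRANULARITY

print(should_forward(None, 0))           # True  -- first sighting
print(should_forward(0, 60 * 1000))      # False -- still inside the window
print(should_forward(0, 120 * 1000))     # True  -- window has elapsed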


@@ -53,21 +53,18 @@ class SlavedDeviceInboxStore(BaseSlavedStore):
result["to_device"] = self._device_inbox_id_gen.get_current_token()
return result
def process_replication(self, result):
stream = result.get("to_device")
if stream:
self._device_inbox_id_gen.advance(int(stream["position"]))
for row in stream["rows"]:
stream_id = row[0]
entity = row[1]
if entity.startswith("@"):
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "to_device":
self._device_inbox_id_gen.advance(token)
for row in rows:
if row.entity.startswith("@"):
self._device_inbox_stream_cache.entity_has_changed(
entity, stream_id
row.entity, token
)
else:
self._device_federation_outbox_stream_cache.entity_has_changed(
entity, stream_id
row.entity, token
)
return super(SlavedDeviceInboxStore, self).process_replication(result)
return super(SlavedDeviceInboxStore, self).process_replication_rows(
stream_name, token, rows
)


@@ -16,6 +16,7 @@
from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
from synapse.storage import DataStore
from synapse.storage.end_to_end_keys import EndToEndKeyStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
@@ -45,28 +46,25 @@ class SlavedDeviceStore(BaseSlavedStore):
_mark_as_sent_devices_by_remote_txn = (
DataStore._mark_as_sent_devices_by_remote_txn.__func__
)
count_e2e_one_time_keys = EndToEndKeyStore.__dict__["count_e2e_one_time_keys"]
def stream_positions(self):
result = super(SlavedDeviceStore, self).stream_positions()
result["device_lists"] = self._device_list_id_gen.get_current_token()
return result
def process_replication(self, result):
stream = result.get("device_lists")
if stream:
self._device_list_id_gen.advance(int(stream["position"]))
for row in stream["rows"]:
stream_id = row[0]
user_id = row[1]
destination = row[2]
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "device_lists":
self._device_list_id_gen.advance(token)
for row in rows:
self._device_list_stream_cache.entity_has_changed(
user_id, stream_id
row.user_id, token
)
if destination:
if row.destination:
self._device_list_federation_stream_cache.entity_has_changed(
destination, stream_id
row.destination, token
)
return super(SlavedDeviceStore, self).process_replication(result)
return super(SlavedDeviceStore, self).process_replication_rows(
stream_name, token, rows
)


@@ -71,6 +71,7 @@ class SlavedEventStore(BaseSlavedStore):
# to reach inside the __dict__ to extract them.
get_rooms_for_user = RoomMemberStore.__dict__["get_rooms_for_user"]
get_users_in_room = RoomMemberStore.__dict__["get_users_in_room"]
get_hosts_in_room = RoomMemberStore.__dict__["get_hosts_in_room"]
get_users_who_share_room_with_user = (
RoomMemberStore.__dict__["get_users_who_share_room_with_user"]
)
@@ -101,15 +102,14 @@ class SlavedEventStore(BaseSlavedStore):
_get_state_groups_from_groups_txn = (
DataStore._get_state_groups_from_groups_txn.__func__
)
_get_state_group_from_group = (
StateStore.__dict__["_get_state_group_from_group"]
)
get_recent_event_ids_for_room = (
StreamStore.__dict__["get_recent_event_ids_for_room"]
)
get_current_state_ids = (
StateStore.__dict__["get_current_state_ids"]
)
get_state_group_delta = StateStore.__dict__["get_state_group_delta"]
_get_joined_hosts_cache = RoomMemberStore.__dict__["_get_joined_hosts_cache"]
has_room_changed_since = DataStore.has_room_changed_since.__func__
get_unread_push_actions_for_user_in_range_for_http = (
@@ -146,12 +146,14 @@ class SlavedEventStore(BaseSlavedStore):
RoomMemberStore.__dict__["_get_joined_users_from_context"]
)
get_joined_hosts = DataStore.get_joined_hosts.__func__
_get_joined_hosts = RoomMemberStore.__dict__["_get_joined_hosts"]
get_recent_events_for_room = DataStore.get_recent_events_for_room.__func__
get_room_events_stream_for_rooms = (
DataStore.get_room_events_stream_for_rooms.__func__
)
is_host_joined = DataStore.is_host_joined.__func__
_is_host_joined = RoomMemberStore.__dict__["_is_host_joined"]
is_host_joined = RoomMemberStore.__dict__["is_host_joined"]
get_stream_token_for_event = DataStore.get_stream_token_for_event.__func__
_set_before_and_after = staticmethod(DataStore._set_before_and_after)
@@ -201,48 +203,25 @@ class SlavedEventStore(BaseSlavedStore):
result["backfill"] = -self._backfill_id_gen.get_current_token()
return result
def process_replication(self, result):
stream = result.get("events")
if stream:
self._stream_id_gen.advance(int(stream["position"]))
if stream["rows"]:
logger.info("Got %d event rows", len(stream["rows"]))
for row in stream["rows"]:
self._process_replication_row(
row, backfilled=False,
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "events":
self._stream_id_gen.advance(token)
for row in rows:
self.invalidate_caches_for_event(
token, row.event_id, row.room_id, row.type, row.state_key,
row.redacts,
backfilled=False,
)
stream = result.get("backfill")
if stream:
self._backfill_id_gen.advance(-int(stream["position"]))
for row in stream["rows"]:
self._process_replication_row(
row, backfilled=True,
elif stream_name == "backfill":
self._backfill_id_gen.advance(-token)
for row in rows:
self.invalidate_caches_for_event(
-token, row.event_id, row.room_id, row.type, row.state_key,
row.redacts,
backfilled=True,
)
stream = result.get("forward_ex_outliers")
if stream:
self._stream_id_gen.advance(int(stream["position"]))
for row in stream["rows"]:
event_id = row[1]
self._invalidate_get_event_cache(event_id)
stream = result.get("backward_ex_outliers")
if stream:
self._backfill_id_gen.advance(-int(stream["position"]))
for row in stream["rows"]:
event_id = row[1]
self._invalidate_get_event_cache(event_id)
return super(SlavedEventStore, self).process_replication(result)
def _process_replication_row(self, row, backfilled):
stream_ordering = row[0] if not backfilled else -row[0]
self.invalidate_caches_for_event(
stream_ordering, row[1], row[2], row[3], row[4], row[5],
backfilled=backfilled,
return super(SlavedEventStore, self).process_replication_rows(
stream_name, token, rows
)
def invalidate_caches_for_event(self, stream_ordering, event_id, room_id,

View File

@@ -39,6 +39,16 @@ class SlavedPresenceStore(BaseSlavedStore):
_get_presence_for_user = PresenceStore.__dict__["_get_presence_for_user"]
get_presence_for_users = PresenceStore.__dict__["get_presence_for_users"]
# XXX: This is a bit broken because we don't persist the accepted list in a
# way that can be replicated. This means that we don't have a way to
# invalidate the cache correctly.
get_presence_list_accepted = PresenceStore.__dict__[
"get_presence_list_accepted"
]
get_presence_list_observers_accepted = PresenceStore.__dict__[
"get_presence_list_observers_accepted"
]
def get_current_presence_token(self):
return self._presence_id_gen.get_current_token()
@@ -48,15 +58,14 @@ class SlavedPresenceStore(BaseSlavedStore):
result["presence"] = position
return result
def process_replication(self, result):
stream = result.get("presence")
if stream:
self._presence_id_gen.advance(int(stream["position"]))
for row in stream["rows"]:
position, user_id = row[:2]
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "presence":
self._presence_id_gen.advance(token)
for row in rows:
self.presence_stream_cache.entity_has_changed(
user_id, position
row.user_id, token
)
self._get_presence_for_user.invalidate((user_id,))
return super(SlavedPresenceStore, self).process_replication(result)
self._get_presence_for_user.invalidate((row.user_id,))
return super(SlavedPresenceStore, self).process_replication_rows(
stream_name, token, rows
)

Some files were not shown because too many files have changed in this diff.