Compare commits

...

1716 Commits

Author SHA1 Message Date
Erik Johnston
f9834a3d1a Merge branch 'release-v0.18.4' of github.com:matrix-org/synapse 2016-11-22 10:25:23 +00:00
Erik Johnston
aac06e8f74 Bump changelog 2016-11-22 10:24:04 +00:00
Erik Johnston
2bbc4cab60 Merge branch 'dbkr/work_around_devicename_bug' of github.com:matrix-org/synapse into release-v0.18.4 2016-11-22 10:18:30 +00:00
Mark Haines
a289150943 Fix flake8 2016-11-18 17:15:02 +00:00
David Baker
544722bad2 Work around client replacing reg params
Works around https://github.com/vector-im/vector-android/issues/715
and equivalent for iOS
2016-11-18 17:07:35 +00:00
Erik Johnston
9d58ccc547 Bump changelog and version 2016-11-14 15:05:04 +00:00
Kegsay
9355a5c42b Merge pull request #1624 from matrix-org/kegan/idempotent-requests
Store Promise<Response> instead of Response for HTTP API transactions
2016-11-14 12:45:30 +00:00
Kegan Dougal
3991b4cbdb Clean transactions based on time. Add HttpTransactionCache tests. 2016-11-14 11:19:24 +00:00
Kegan Dougal
af4a1bac50 Move .observe() up to the cache to make things neater 2016-11-14 09:52:41 +00:00
Erik Johnston
0964005d84 Merge pull request #1625 from DanielDent/patch-1
Add support for durations in minutes
2016-11-12 11:20:46 +00:00
Daniel Dent
1c93cd9f9f Add support for durations in minutes 2016-11-12 00:10:23 -08:00
Kegan Dougal
8ecaff51a1 Review comments 2016-11-11 17:47:03 +00:00
Kegan Dougal
f6c48802f5 More flake8 2016-11-11 15:08:24 +00:00
Kegan Dougal
a88bc67f88 Flake8 and fix whoopsie 2016-11-11 15:02:29 +00:00
Kegan Dougal
42c43cfafd Use ObservableDeferreds instead of Deferreds as they behave as intended 2016-11-11 14:54:10 +00:00
Kegan Dougal
c7daf3136c Use observable deferreds because they are sane 2016-11-11 14:13:32 +00:00
Kegan Dougal
8a8ad46f48 Flake8 2016-11-10 15:22:11 +00:00
Kegan Dougal
2771447c29 Store Promise<Response> instead of Response for HTTP API transactions
This fixes a race whereby:
 - User hits an endpoint.
 - No cached transaction so executes main code.
 - User hits same endpoint.
 - No cached transaction so executes main code.
 - Main code finishes executing and caches response and returns.
 - Main code finishes executing and caches response and returns.

 This race is common in the wild when Synapse is struggling under load.
 This commit fixes the race by:
  - User hits an endpoint.
  - Caches the promise to execute the main code and executes main code.
  - User hits same endpoint.
  - Yields on the same promise as the first request.
  - Main code finishes executing and returns, unblocking both requests.
2016-11-10 14:49:26 +00:00
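The approach described in this commit is, in outline, a cache keyed by transaction ID whose values are in-flight deferreds rather than finished responses. A minimal sketch, assuming Synapse's `ObservableDeferred` helper (2016-era module path; class and method names here are illustrative, not the actual implementation):

```python
from twisted.internet import defer

from synapse.util.async import ObservableDeferred  # 2016-era module path


class HttpTransactionCache(object):
    def __init__(self):
        self.transactions = {}  # txn_key -> ObservableDeferred

    def fetch_or_execute(self, txn_key, fn, *args, **kwargs):
        if txn_key not in self.transactions:
            # Cache the promise of a response, not the response itself,
            # so a concurrent retry cannot re-execute the main code.
            deferred = defer.maybeDeferred(fn, *args, **kwargs)
            self.transactions[txn_key] = ObservableDeferred(deferred)
        # The first request and any retries all yield on the same result.
        return self.transactions[txn_key].observe()
```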
Erik Johnston
6cc4fcf25c Merge pull request #1619 from matrix-org/erikj/pwd_provider_error
Don't assume providers raise ConfigErrors
2016-11-09 11:11:06 +00:00
Erik Johnston
ac507e7ab8 Don't assume providers raise ConfigErrors 2016-11-08 17:23:28 +00:00
Erik Johnston
e6651e8046 Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-11-08 14:43:49 +00:00
Erik Johnston
291628d42a Merge branch 'erikj/ldap3_auth' 2016-11-08 14:40:54 +00:00
Erik Johnston
3c09818d91 Bump version and changelog 2016-11-08 14:39:55 +00:00
Erik Johnston
27d3f2e7ab Explicitly set authentication mode in ldap3
This only makes a difference for versions of ldap3 before 1.0, but a)
it's best to be explicit and b) there are distributions that package
ancient versions of ldap3 (e.g. Debian).
2016-11-08 14:35:25 +00:00
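For illustration, the explicit form looks roughly like this with the ldap3 API (host and credentials are placeholders):

```python
from ldap3 import Server, Connection, SIMPLE

server = Server("ldap.example.com")  # placeholder host
conn = Connection(
    server,
    user="uid=alice,ou=users,dc=example,dc=com",  # placeholder bind DN
    password="secret",
    authentication=SIMPLE,  # explicit, so pre-1.0 ldap3 behaves the same
)
conn.bind()
```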
Erik Johnston
17e0a58020 Merge pull request #1615 from matrix-org/erikj/limit_prev_events
Limit the number of prev_events of new events
2016-11-08 12:06:15 +00:00
Erik Johnston
34449cfc6c Merge pull request #1616 from matrix-org/erikj/worker_frozen_dict
Respect use_frozen_dicts option in workers
2016-11-08 11:32:22 +00:00
Erik Johnston
a4632783fb Sample correctly 2016-11-08 11:20:26 +00:00
Erik Johnston
24772ba56e Respect use_frozen_dicts option in workers 2016-11-08 11:07:18 +00:00
Erik Johnston
eeda4e618c Limit the number of prev_events of new events 2016-11-08 11:02:29 +00:00
Erik Johnston
d24197bead Merge pull request #1198 from euank/more-ip-blacklist
default config: blacklist more internal ips
2016-11-07 09:41:34 +00:00
Euan Kemp
c6bbad109b default config: blacklist more internal ips 2016-11-06 17:02:25 -08:00
Erik Johnston
16dc9064d4 Merge pull request #1195 from matrix-org/erikj/incorrect_func
Remove unused but buggy function
2016-11-04 11:09:38 +00:00
Erik Johnston
63772443e6 Comment 2016-11-04 10:53:42 +00:00
Erik Johnston
a3f6576084 Remove unused but buggy function 2016-11-04 10:48:20 +00:00
Paul Evans
7fc2b5c063 Merge pull request #1193 from matrix-org/paul/metrics
More updates to Prometheus metrics exposition
2016-11-03 17:22:15 +00:00
Paul "LeoNerd" Evans
89e3e39d52 Fix copypasto error in metric rename table in docs 2016-11-03 17:04:13 +00:00
Paul "LeoNerd" Evans
2938a00825 Rename the python-specific metrics, now that the docs claim we have done so 2016-11-03 17:03:52 +00:00
Paul "LeoNerd" Evans
5219f7e060 Since we don't export per-filetype fd counts any more, delete all the code related to that too 2016-11-03 16:41:32 +00:00
Paul "LeoNerd" Evans
93ebeb2aa8 Remove now-unused 'resource' import 2016-11-03 16:37:09 +00:00
Paul "LeoNerd" Evans
c1b077cd19 Now we have new-style metrics don't bother exporting legacy-named process ones 2016-11-03 16:34:16 +00:00
Erik Johnston
06cc0bb762 Merge pull request #1192 from matrix-org/erikj/postgres_gist
Replace postgres GIN with GIST
2016-11-03 15:15:43 +00:00
Erik Johnston
64c6566980 Remove spurious comment 2016-11-03 15:04:32 +00:00
Erik Johnston
8fd4d9129f Replace postgres GIN with GIST
This is because GIN can be slow to write to, especially when the table
gets large.
2016-11-03 15:00:03 +00:00
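As a rough sketch, the swap amounts to a schema delta along these lines (using the delta-file `run_create` convention; the index and column names follow Synapse's `event_search` full-text table but are illustrative):

```python
def run_create(cur, database_engine, *args, **kwargs):
    # GIN indexes get expensive to update as event_search grows, so
    # rebuild the full-text index as GIST instead.
    cur.execute("DROP INDEX IF EXISTS event_search_fts_idx")
    cur.execute(
        "CREATE INDEX event_search_fts_idx_gist"
        " ON event_search USING GIST (vector)"
    )
```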
David Baker
9164bfa1c3 Merge pull request #1191 from matrix-org/dbkr/non_ascii_passwords
Don't error on non-ascii passwords
2016-11-03 11:21:07 +00:00
David Baker
9084720993 Don't error on non-ascii passwords 2016-11-03 10:42:14 +00:00
Mark Haines
80d5d3baa1 Merge pull request #1190 from matrix-org/markjh/media_cors
Set CORS headers on responses from the media repo
2016-11-02 14:49:28 +00:00
Mark Haines
b1c27975d0 Set CORS headers on responses from the media repo 2016-11-02 11:29:25 +00:00
Erik Johnston
dc155f4c2c Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-11-01 16:48:53 +00:00
Erik Johnston
2746e805fe Merge pull request #1188 from matrix-org/erikj/sent_transactions
Remove sent_transactions table.
2016-11-01 14:52:55 +00:00
Erik Johnston
0aeb1324b7 Merge branch 'release-v0.18.2' of github.com:matrix-org/synapse into develop 2016-11-01 14:18:07 +00:00
Erik Johnston
4a9055d446 Merge branch 'release-v0.18.2' of github.com:matrix-org/synapse 2016-11-01 13:14:04 +00:00
Erik Johnston
3c91c5b216 Bump version and changelog 2016-11-01 13:06:36 +00:00
Erik Johnston
f6e8019b9c Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.18.2 2016-11-01 11:57:28 +00:00
Erik Johnston
760469c812 Continue to clean up received_transactions 2016-11-01 11:42:08 +00:00
Paul Evans
47ed4d84bb Merge pull request #1187 from matrix-org/paul/metrics-howto
Update documentation about exported prometheus metrics
2016-10-31 18:33:32 +00:00
Erik Johnston
f09d2b692f Removed unused stuff 2016-10-31 17:10:56 +00:00
Erik Johnston
4c3eb14d68 Increase batching of sent transaction inserts
This should further reduce the number of individual inserts,
transactions and updates that are required for keeping sent_transactions
up to date.
2016-10-31 16:07:45 +00:00
Paul "LeoNerd" Evans
1d4d518b50 Add details of renamed metrics 2016-10-31 15:06:52 +00:00
Paul "LeoNerd" Evans
159434a133 Remove long-deprecated instructions about Prometheus console; also fix for modern config file format 2016-10-28 13:58:27 +01:00
Erik Johnston
264f6c2a39 Changelog formatting 2016-10-28 11:19:21 +01:00
Mark Haines
82e71a259c Bump changelog and version 2016-10-28 11:16:05 +01:00
Mark Haines
490b97d3e7 Merge branch 'develop' into release-v0.18.2 2016-10-28 11:12:31 +01:00
Paul Evans
f9d5b60a24 Merge pull request #1184 from matrix-org/paul/metrics
Bugfix for process-wide metric export on split processes
2016-10-27 18:27:36 +01:00
Paul "LeoNerd" Evans
1cc22da600 Set up the process collector during metrics __init__; that way all split-process workers have it 2016-10-27 18:09:34 +01:00
Paul "LeoNerd" Evans
aac13b1f9a Pass the Metrics group into the process collector instead of having it find its own one; this avoids it needing to import from synapse.metrics 2016-10-27 18:08:15 +01:00
Paul "LeoNerd" Evans
ccc1a3d54d Allow creation of a 'subspace' within a Metrics object, returning another one 2016-10-27 18:07:34 +01:00
Erik Johnston
665e53524e Bump changelog and version 2016-10-27 14:52:47 +01:00
Erik Johnston
e438699c59 Merge pull request #1183 from matrix-org/erikj/fix_email_update
Fix user_threepids schema delta
2016-10-27 14:48:30 +01:00
Erik Johnston
a9111786f9 Use most recently added binding, not most recently seen user. 2016-10-27 14:32:45 +01:00
Erik Johnston
1fc1bc2a51 Fix user_threepids schema delta
The delta `37/user_threepids.sql` aimed to update all the email
addresses to be lower case; however, duplicate emails may already exist
in the table.

This commit adds a step where the delta moves the duplicate emails to a
new `medium`, `email_old`. Only the most recently used account keeps the
binding intact. We move rather than delete so that we retain some record
of which emails were associated with which account.
2016-10-27 14:14:44 +01:00
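A hedged sketch of that de-duplication step (the `added_at` column and the exact SQL are assumptions, not the actual delta):

```python
def run_upgrade(cur, database_engine, *args, **kwargs):
    # Demote every duplicate of a lower-cased email except the most
    # recently added binding; move rather than delete to keep a record.
    cur.execute(
        "UPDATE user_threepids SET medium = 'email_old'"
        " WHERE medium = 'email' AND added_at < ("
        "   SELECT MAX(t.added_at) FROM user_threepids AS t"
        "   WHERE t.medium = 'email'"
        "   AND LOWER(t.address) = LOWER(user_threepids.address)"
        " )"
    )
    # Lower-casing can no longer collide within medium 'email'.
    cur.execute(
        "UPDATE user_threepids SET address = LOWER(address)"
        " WHERE medium = 'email'"
    )
```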
Erik Johnston
db0609f1ec Update changelog 2016-10-27 11:13:07 +01:00
Erik Johnston
ab731d8f8e Bump changelog and version 2016-10-27 11:07:02 +01:00
Erik Johnston
45bdacd9a7 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.18.2 2016-10-27 11:02:30 +01:00
Mark Haines
177f104432 Merge pull request #1098 from matrix-org/markjh/bearer_token
Allow clients to supply access_tokens as headers
2016-10-25 17:33:15 +01:00
Erik Johnston
22fbf86e4f Merge branch 'release-v0.18.2' of github.com:matrix-org/synapse into develop 2016-10-25 15:09:46 +01:00
Erik Johnston
f138bb40e2 Fixup change log 2016-10-25 11:23:40 +01:00
Erik Johnston
855645c719 Bump version and changelog 2016-10-25 11:16:16 +01:00
Erik Johnston
25423f50aa Merge pull request #1179 from matrix-org/erikj/typing_timer_paranoia
Fix infinite typing bug
2016-10-25 10:59:59 +01:00
Erik Johnston
2ef617bc06 Fix infinite typing bug
There's a bug somewhere that causes typing notifications to not be timed
out properly. By adding a paranoia timer and using correct inequalities,
notifications should stop being stuck, even if the root cause hasn't
been fixed.
2016-10-24 15:51:22 +01:00
Erik Johnston
e83a08d795 Merge pull request #1178 from matrix-org/erikj/current_room_token
Fix incredibly slow back pagination query
2016-10-24 13:58:28 +01:00
Erik Johnston
b6800a8ecd Actually use the new function 2016-10-24 13:39:49 +01:00
Erik Johnston
d04e2ff3a4 Fix incredibly slow back pagination query
If a client didn't specify a from token when paginating backwards
synapse would attempt to query the (global) maximum topological token.
This a) doesn't make much sense, since tokens are room specific, and b)
there are no indices that let postgres do this efficiently.
2016-10-24 13:35:51 +01:00
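The fix therefore scopes the lookup to the room, something like this (names illustrative):

```python
def get_max_topological_token_for_room(txn, room_id):
    # A per-room maximum lets an index on (room_id, topological_ordering)
    # satisfy the query, instead of scanning the whole events table.
    txn.execute(
        "SELECT MAX(topological_ordering) FROM events WHERE room_id = ?",
        (room_id,),
    )
    row = txn.fetchone()
    return row[0] if row and row[0] is not None else 0
```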
Paul Evans
a842fed418 Merge pull request #1177 from matrix-org/paul/standard-metric-names
Standardise prometheus metrics
2016-10-21 13:06:19 +01:00
Luke Barnard
e01a1bc92d Merge pull request #1175 from matrix-org/luke/feature-configurable-as-rate-limiting
Allow Configurable Rate Limiting Per AS
2016-10-20 16:21:10 +01:00
Luke Barnard
6fdd31915b Style 2016-10-20 13:53:15 +01:00
Luke Barnard
07caa749bf Closing brace on following line 2016-10-20 12:07:16 +01:00
Luke Barnard
f09db236b1 as_user->app_service, less redundant comments, better positioned comments 2016-10-20 12:04:54 +01:00
Luke Barnard
8bfd01f619 flake8 2016-10-20 11:52:46 +01:00
Luke Barnard
1b17d1a106 Use real AS object by passing it through the requester
This means synapse does not have to check if the AS is interested, but instead it effectively re-uses what it already knew about the requesting user
2016-10-20 11:43:05 +01:00
Paul "LeoNerd" Evans
b01aaadd48 Split callback metric lambda functions down onto their own lines to keep line lengths under 90 2016-10-19 18:26:13 +01:00
Paul "LeoNerd" Evans
1071c7d963 Adjust code for <100 char line limit 2016-10-19 18:23:25 +01:00
Paul "LeoNerd" Evans
6453d03edd Cut the raw /proc/self/stat line up into named fields at collection time 2016-10-19 18:21:40 +01:00
Paul "LeoNerd" Evans
3ae48a1f99 Move the process metrics collector code into its own file 2016-10-19 18:10:24 +01:00
Paul "LeoNerd" Evans
4cedd53224 A slightly neater way to manage metric collector functions 2016-10-19 17:54:09 +01:00
Paul "LeoNerd" Evans
5663137e03 appease pep8 2016-10-19 16:09:42 +01:00
Paul "LeoNerd" Evans
b202531be6 Also guard /proc/self/fds-related code with a suitable pseudoconstant 2016-10-19 15:37:41 +01:00
Paul "LeoNerd" Evans
1b179455fc Guard registration of process-wide metrics by existence of the requisite /proc entries 2016-10-19 15:34:38 +01:00
Paul "LeoNerd" Evans
981f852d54 Add standard process_start_time_seconds metric 2016-10-19 15:05:22 +01:00
Paul "LeoNerd" Evans
def63649df Add standard process_max_fds metric 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
06f1ad1625 Add standard process_open_fds metric 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
95fc70216d Add standard process_*_memory_bytes metrics 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
9b0316c75a Use /proc/self/stat to generate the new process_cpu_*_seconds_total metrics 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
03c2720940 Export CPU usage metrics also under prometheus-standard metric name 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
b21b9dbc37 Callback metric values might not just be integers - allow floats 2016-10-19 15:05:15 +01:00
Erik Johnston
78c083f159 Merge pull request #1164 from pik/error-codes
Clarify Error codes for GET /filter/
2016-10-19 14:26:17 +01:00
Erik Johnston
3aa8925091 Merge pull request #1176 from matrix-org/erikj/eager_ratelimit_check
Check whether to ratelimit sooner to avoid work
2016-10-19 14:25:52 +01:00
Erik Johnston
f2f74ffce6 Comment 2016-10-19 14:21:28 +01:00
David Baker
7d2cf7e960 Merge pull request #1170 from matrix-org/dbkr/password_reset_case_insensitive
Make password reset email field case insensitive
2016-10-19 13:29:44 +01:00
David Baker
0108ed8ae6 Latest delta is now 37 2016-10-19 11:40:35 +01:00
David Baker
a7f48320b1 Merge remote-tracking branch 'origin/develop' into dbkr/password_reset_case_insensitive 2016-10-19 11:28:56 +01:00
David Baker
df2a616c7b Convert emails to lowercase when storing
And db migration sql to convert existing addresses.
2016-10-19 11:13:55 +01:00
Erik Johnston
550308c7a1 Check whether to ratelimit sooner to avoid work 2016-10-19 10:45:24 +01:00
pik
e8b1d2a452 Refactor test_filter to use real DataStore
* add tests for filter api errors
2016-10-18 12:17:38 -05:00
Luke Barnard
5b54d51d1e Allow Configurable Rate Limiting Per AS
This adds a flag loaded from the registration file of an AS that will determine whether or not its users are rate limited (by ratelimit in _base.py). Needed for IRC bridge reasons - see https://github.com/matrix-org/matrix-appservice-irc/issues/240.
2016-10-18 17:04:09 +01:00
Erik Johnston
f6955db970 Merge pull request #1174 from matrix-org/erikj/email_push_noop
Reduce redundant database work in email pusher
2016-10-18 15:27:10 +01:00
Erik Johnston
8ca05b5755 Fix push notifications for a single unread message 2016-10-18 10:57:33 +01:00
Erik Johnston
f0ca088280 Reduce redundant database work in email pusher
Update the last stream ordering if
`get_unread_push_actions_for_user_in_range_for_email` returns no new
push actions. This reduces the range that it needs to check on the next
iteration.
2016-10-18 10:52:47 +01:00
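In outline the change is just bookkeeping: advance the pusher's bookmark even when there is nothing to send. A sketch, where everything except the getter named above is hypothetical:

```python
def process_email_pusher(store, pusher, max_stream_ordering, send_email):
    actions = store.get_unread_push_actions_for_user_in_range_for_email(
        pusher.user_id, pusher.last_stream_ordering, max_stream_ordering
    )
    if not actions:
        # Nothing new: still record progress, so the next iteration
        # scans a smaller range instead of re-checking this one.
        pusher.last_stream_ordering = max_stream_ordering
        store.update_pusher_last_stream_ordering(pusher)  # hypothetical
        return
    send_email(pusher, actions)  # caller-supplied delivery function
```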
Erik Johnston
50ac1d843d Merge branch 'release-v0.18.2' of github.com:matrix-org/synapse into develop 2016-10-18 09:41:53 +01:00
Erik Johnston
513e600f63 Update changelog 2016-10-17 17:05:59 +01:00
Erik Johnston
b95dbdcba4 Bump version and changelog 2016-10-17 15:59:12 +01:00
Erik Johnston
927a67ee1a Merge pull request #1113 from matrix-org/erikj/remove_auth
Remove redundant event_auth index
2016-10-17 14:28:10 +01:00
Erik Johnston
6942d68247 Bump schema version 2016-10-17 11:17:45 +01:00
Erik Johnston
b59994b454 Remove TODO 2016-10-17 11:17:02 +01:00
Erik Johnston
816988baaa Merge branch 'develop' of github.com:matrix-org/synapse into erikj/remove_auth 2016-10-17 11:10:37 +01:00
Erik Johnston
2869a29fd7 Drop some unused indices 2016-10-17 11:08:19 +01:00
pik
d43b63818c Fix MockHttpRequest always returning M_UNKNOWN errcode in testing 2016-10-14 15:46:54 -05:00
Erik Johnston
a68ade6ed3 Merge pull request #1162 from larroy/master
Use sys.executable instead of hardcoded python. fixes #1161
2016-10-14 21:42:55 +01:00
David Baker
29c5922021 Revert part of 6207399
older sqlite doesn't support indexes on expressions, let's just
store things lowercase in the db
2016-10-14 16:20:24 +01:00
Alexander Maznev
d9350b0db8 Error codes for filters
* add tests

Signed-off-by: Alexander Maznev <alexander.maznev@gmail.com>
2016-10-14 10:18:28 -05:00
David Baker
bcb1245a2d Merge remote-tracking branch 'origin/develop' into dbkr/password_reset_case_insensitive 2016-10-14 15:10:38 +01:00
David Baker
62073992c5 Make password reset email field case insensitive 2016-10-14 13:56:53 +01:00
Erik Johnston
0393c4203c Merge pull request #1169 from matrix-org/erikj/fix_email_notifs
Fix email push notifs being dropped
2016-10-14 10:38:29 +01:00
Erik Johnston
6f7540ada4 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fix_email_notifs 2016-10-14 10:22:43 +01:00
Erik Johnston
1d107d8484 Fix email push notifs being dropped
A lot of email push notifications were failing to be sent due to an
exception being thrown along one of the (many) paths. This was due to a
change where we moved from pulling out the full state for each room to
pulling out only the event ids for the state and separately loading the
full events when needed.
2016-10-13 13:40:38 +01:00
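The pattern moved to looks roughly like this (a sketch assuming `get_state_ids_for_event`-style storage helpers; names illustrative):

```python
from twisted.internet import defer


@defer.inlineCallbacks
def get_member_event(store, event, user_id):
    # Pull out only the event IDs of the room state...
    state_ids = yield store.get_state_ids_for_event(event.event_id)
    member_id = state_ids.get(("m.room.member", user_id))
    # ...and load the full event lazily, only where it is actually needed.
    member_event = None
    if member_id is not None:
        member_event = yield store.get_event(member_id)
    defer.returnValue(member_event)
```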
Richard van der Hoff
f7aed3d7a2 Merge pull request #1168 from matrix-org/rav/ui_auth_on_device_delete
User-interactive auth on delete device
2016-10-13 09:38:41 +01:00
Richard van der Hoff
9009143fb9 Handle delete device requests with no body
We should probably return a 401 rather than a 400 for existing clients that
don't know they have to do the UIA dance to delete a device.
2016-10-12 18:47:28 +01:00
Richard van der Hoff
fbd3866bc6 User-interactive auth on delete device 2016-10-12 16:16:31 +01:00
Mark Haines
9e18e0b1cb Merge pull request #1167 from matrix-org/markjh/fingerprints
Add config option for adding additional TLS fingerprints
2016-10-12 15:27:44 +01:00
Mark Haines
c61ddeedac Explain how long the servers can cache the TLS fingerprints for 2016-10-12 14:48:24 +01:00
Mark Haines
0af6213019 Improve comment formatting 2016-10-12 14:45:13 +01:00
Erik Johnston
35e2cc8b52 Merge pull request #1155 from matrix-org/erikj/pluggable_pwd_auth
Implement pluggable password auth
2016-10-12 11:41:20 +01:00
Mark Haines
6e9f3ab415 Add config option for adding additional TLS fingerprints 2016-10-11 19:14:46 +01:00
Erik Johnston
e641115421 Merge pull request #1141 from matrix-org/erikj/replication_noop
Reduce DB hits for replication
2016-10-11 15:41:55 +01:00
Erik Johnston
3061dac53e Merge branch 'develop' of github.com:matrix-org/synapse into erikj/replication_noop 2016-10-11 14:08:29 +01:00
Erik Johnston
668f91d707 Fix check of wrong variable 2016-10-11 13:57:22 +01:00
Richard van der Hoff
0061e8744f Merge pull request #1166 from matrix-org/rav/grandfather_broken_riot_signup
Work around email-spamming Riot bug
2016-10-11 11:58:58 +01:00
Richard van der Hoff
fa74fcf512 Work around email-spamming Riot bug
5d9546f9 introduced a change to synapse behaviour, in that failures in the
interactive-auth process would return the flows and params data as well as an
error code (as specced in https://github.com/matrix-org/matrix-doc/pull/397).

That change exposed a bug in Riot which would make it request a new validation
token (and send a new email) each time it got a 401 with a `flows` parameter
(see https://github.com/vector-im/vector-web/issues/2447 and the fix at
https://github.com/matrix-org/matrix-react-sdk/pull/510).

To preserve compatibility with broken versions of Riot, grandfather in the old
behaviour for the email validation stage.
2016-10-11 11:34:40 +01:00
Erik Johnston
a2f2516199 Merge pull request #1157 from Rugvip/nolimit
Remove rate limiting from app service senders and fix get_or_create_user requester
2016-10-11 11:20:54 +01:00
Erik Johnston
a940618c94 Merge pull request #1150 from Rugvip/state_key
api/auth: fix for not being allowed to set your own state_key
2016-10-11 11:19:55 +01:00
Pedro Larroy
c57f871184 Use sys.executable instead of hardcoded python. fixes #1161 2016-10-08 23:55:20 +02:00
Richard van der Hoff
8681aff4f1 Merge pull request #1160 from matrix-org/rav/401_on_password_fail
Interactive Auth: Return 401 for incorrect password
2016-10-07 10:57:43 +01:00
Richard van der Hoff
5d9546f9f4 Interactive Auth: Return 401 for incorrect password
This requires a bit of fettling, because I want to return a helpful error
message too but we don't want to distinguish between unknown user and invalid
password. To avoid hardcoding the error message into 15 places in the code,
I've had to refactor a few methods to return None instead of throwing.

Fixes https://matrix.org/jira/browse/SYN-744
2016-10-07 00:00:00 +01:00
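The shape of the refactor, as a sketch (the storage helper is hypothetical): validation returns None on failure, and a single caller maps None to the one user-facing 401, so clients cannot distinguish unknown-user from bad-password:

```python
import bcrypt


def check_password(store, user_id, password):
    user = store.get_user_by_id(user_id)  # hypothetical lookup helper
    if user is None:
        return None  # unknown user: same outcome as a bad password
    if not bcrypt.checkpw(password.encode("utf8"), user.password_hash):
        return None
    return user  # caller turns None into a single generic 401 message
```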
Patrik Oldsberg
7b5546d077 rest/client/v1/register: use the correct requester in createUser
Signed-off-by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
2016-10-06 22:12:32 +02:00
Richard van der Hoff
5d34e32d42 Merge pull request #1159 from matrix-org/rav/uia_fallback_postmessage
window.postmessage for Interactive Auth fallback
2016-10-06 19:56:43 +01:00
Richard van der Hoff
f382117852 window.postmessage for Interactive Auth fallback
If you're a webapp running the fallback in an iframe, you can't set a
window.onAuthDone function. Let's post a message back to window.opener instead.
2016-10-06 18:16:59 +01:00
Patrik Oldsberg
3de7c8a4d0 handlers/profile: added admin override for set_displayname and set_avatar_url
Signed-off-by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
2016-10-06 15:24:59 +02:00
Patrik Oldsberg
2ff2d36b80 handlers: do not ratelimit app service senders
Signed-off-by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
2016-10-06 15:24:59 +02:00
Patrik Oldsberg
9bfc617791 storage/appservice: make appservice methods only relying on the cache synchronous 2016-10-06 15:24:59 +02:00
Erik Johnston
503c0ab78b Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-10-05 17:06:55 +01:00
Erik Johnston
e779ee0ee2 Merge branch 'release-v0.18.1' of github.com:matrix-org/synapse 2016-10-05 14:41:07 +01:00
Erik Johnston
4285be791d Bump changelog and version 2016-10-05 14:40:38 +01:00
David Baker
b5665f7516 Merge pull request #1156 from matrix-org/dbkr/email_notifs_add_riot_brand
Add Riot brand to email notifs
2016-10-04 12:47:05 +01:00
David Baker
6d3513740d Add Riot brand to email notifs 2016-10-04 10:20:21 +01:00
Erik Johnston
850b103b36 Implement pluggable password auth
Allows delegating the password auth to an external module. This also
moves the LDAP auth to using this system, allowing it to be removed from
the synapse tree entirely in the future.
2016-10-03 10:36:40 +01:00
Erik Johnston
21185e3e8a Update changelog 2016-09-30 14:11:40 +01:00
Patrik Oldsberg
24a70e19c7 api/auth: fix for not being allowed to set your own state_key
Signed-off-by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
2016-09-30 13:08:25 +02:00
Erik Johnston
04aa2f2863 Bump version and changelog 2016-09-30 10:34:57 +01:00
Erik Johnston
f7bcdbe56c Merge pull request #1153 from matrix-org/erikj/ldap_restructure
Restructure ldap authentication
2016-09-30 09:17:25 +01:00
Martin Weinelt
3027ea22b0 Restructure ldap authentication
- properly parse return values of ldap bind() calls
- externalize authentication methods
- change control flow to be more error-resilient
- unbind ldap connections in many places
- improve log messages and loglevels
2016-09-29 15:30:08 +01:00
Erik Johnston
5875a65253 Merge pull request #1145 from matrix-org/erikj/fix_reindex
Fix background reindex of origin_server_ts
2016-09-29 13:53:48 +01:00
Erik Johnston
36d621201b Merge pull request #1146 from matrix-org/erikj/port_script_fix
Update port script with recently added tables
2016-09-27 11:54:51 +01:00
Erik Johnston
9040c9ffa1 Fix background reindex of origin_server_ts
The storage function `_get_events_txn` was removed everywhere except
from this background reindex. The function was removed due to it being
(almost) completely unused while also being large and complex.
Therefore, instead of resurrecting `_get_events_txn` we manually
reimplement the bits that are needed directly.
2016-09-27 11:23:49 +01:00
Erik Johnston
4a18127917 Merge pull request #1144 from matrix-org/erikj/sqlite_state_perf
Fix perf of fetching state in SQLite
2016-09-27 11:22:45 +01:00
Erik Johnston
adae348fdf Update port script with recently added tables
This also fixes a bug where the port script would explode when it
encountered the newly added boolean column
`public_room_list_stream.visibility`
2016-09-27 11:19:48 +01:00
Erik Johnston
4974147aa3 Remove duplication 2016-09-27 09:27:54 +01:00
Erik Johnston
13122e5e24 Remove unused variable 2016-09-27 09:21:51 +01:00
Erik Johnston
cf3e1cc200 Fix perf of fetching state in SQLite 2016-09-26 17:16:24 +01:00
Erik Johnston
a38d46249e Merge pull request #1140 from matrix-org/erikj/typing_fed_timeout
Time out typing over federation
2016-09-26 11:24:14 +01:00
Matthew Hodgson
aab6a31c96 typo 2016-09-25 14:08:58 +01:00
Erik Johnston
748d8fdc7b Reduce DB hits for replication
Some streams will occasionally advance their positions without actually
having any new rows to send over federation. Currently this means that
the token will not advance on the workers, leading to them repeatedly
sending a slightly out of date token. This in turn requires the master
to hit the DB to check if there are any new rows, rather than hitting
the no op logic where we check if the given token matches the current
token.

This commit changes the API to always return an entry if the position
for a stream has changed, allowing workers to advance their tokens
correctly.
2016-09-23 16:49:21 +01:00
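In outline (a sketch of the behaviour change, not the actual replication API):

```python
def stream_updates(current_token, last_token_seen, fetch_rows):
    if current_token == last_token_seen:
        return None  # genuinely nothing changed: the cheap no-op path
    rows = fetch_rows(last_token_seen, current_token)
    # Return the advanced position even when rows is empty, so workers
    # update their token and hit the no-op path on the next poll.
    return {"position": current_token, "rows": rows}
```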
Erik Johnston
655891d179 Move FEDERATION_PING_INTERVAL timer. Update log line 2016-09-23 15:43:34 +01:00
Erik Johnston
4225a97f4e Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-09-23 15:36:59 +01:00
Erik Johnston
22578545a0 Time out typing over federation 2016-09-23 14:00:52 +01:00
Erik Johnston
667fcd54e8 Merge pull request #1136 from matrix-org/erikj/fix_signed_3pid
Allow invites via 3pid to bypass sender sig check
2016-09-22 13:41:49 +01:00
Erik Johnston
f96020550f Update comments 2016-09-22 12:54:22 +01:00
Erik Johnston
81964aeb90 Merge pull request #1132 from matrix-org/erikj/initial_sync_split
Support /initialSync in synchrotron worker
2016-09-22 12:45:02 +01:00
Erik Johnston
2e9ee30969 Add comments 2016-09-22 11:59:46 +01:00
Erik Johnston
a61e4522b5 Shuffle things around to make unit tests work 2016-09-22 11:08:12 +01:00
Erik Johnston
1168cbd54d Allow invites via 3pid to bypass sender sig check
When a server sends a third party invite another server may be the one
that the invited user registers with. In this case it is that remote
server that will issue the actual invitation, and it wants to do so "in
the name of" the original inviter. However, the new proper invite will
not be signed by the original server, and thus other servers would
reject the invite if it was seen as coming from the original user.

To fix this, a special case has been added to the auth rules whereby
another server can send an invite "in the name of" another server's
user, so long as that user had previously issued a third party invite
that is now being accepted.
2016-09-22 10:56:53 +01:00
Erik Johnston
bbc0d9617f Merge pull request #1134 from matrix-org/erikj/fix_stream_public_deletion
Fix _delete_old_forward_extrem_cache query
2016-09-21 17:04:04 +01:00
Erik Johnston
8009d84364 Match against event_id, rather than room_id 2016-09-21 16:46:59 +01:00
Erik Johnston
dc692556d6 Remove spurious AS clause 2016-09-21 16:28:47 +01:00
Erik Johnston
dc78db8c56 Update correct table 2016-09-21 15:52:44 +01:00
Erik Johnston
4f78108d8c Readd entries to public_room_list_stream that were deleted 2016-09-21 15:24:22 +01:00
Erik Johnston
0b78d8adf2 Fix _delete_old_forward_extrem_cache query 2016-09-21 15:20:56 +01:00
Erik Johnston
85827eef2d Merge pull request #1133 from matrix-org/erikj/public_room_count
Add total_room_count_estimate to /publicRooms
2016-09-21 13:51:52 +01:00
Erik Johnston
90c070c850 Add total_room_count_estimate to /publicRooms 2016-09-21 13:30:05 +01:00
Erik Johnston
87528f0756 Support /initialSync in synchrotron worker 2016-09-21 11:46:28 +01:00
Erik Johnston
88acb99747 Merge branch 'release-v0.18.0' of github.com:matrix-org/synapse 2016-09-19 17:20:25 +01:00
Erik Johnston
2b8ff4659f Bump version and changelog 2016-09-19 17:16:56 +01:00
Erik Johnston
ddfcdd4778 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.18.0 2016-09-19 17:15:24 +01:00
Erik Johnston
6f0c5e5d9b Merge pull request #1131 from matrix-org/matthew/e2e-notifs
Notify on e2e events
2016-09-19 10:50:26 +01:00
Erik Johnston
49cf205dc7 _id field must uniquely identify different conditions 2016-09-19 10:34:01 +01:00
Erik Johnston
39af634dd2 Merge pull request #1130 from matrix-org/erikj/fix_pubroom_pag
Handle fact that _generate_room_entry may not return a room entry
2016-09-19 10:13:59 +01:00
Matthew Hodgson
3f6ec271ba proposal for notifying on e2e events 2016-09-17 22:05:06 +01:00
Erik Johnston
4d49e0bdfd PEP8 2016-09-17 18:09:22 +01:00
Erik Johnston
81570abfb2 Handle fact that _generate_room_entry may not return a room entry 2016-09-17 18:01:54 +01:00
Erik Johnston
ddc89df89d Enable guest access to POST /publicRooms 2016-09-17 15:55:24 +01:00
Erik Johnston
eb24aecf8c Merge pull request #1129 from matrix-org/erikj/fix_pubroom_pag
Fix and clean up publicRooms pagination
2016-09-17 15:30:34 +01:00
Erik Johnston
e1ba98d724 Merge pull request #1127 from matrix-org/dbkr/publicroom_search_case_insensitive
Make public room search case insensitive
2016-09-17 15:01:17 +01:00
Erik Johnston
a298331de4 Spelling 2016-09-17 14:59:40 +01:00
Erik Johnston
71edaae981 Fix and clean up publicRooms pagination 2016-09-17 14:46:19 +01:00
Matthew Hodgson
64527f94cc mention client_reader worker 2016-09-17 14:15:10 +01:00
Matthew Hodgson
883df2e983 fix logger for client_reader worker 2016-09-17 14:12:04 +01:00
David Baker
5336acd46f Make public room search case insensitive 2016-09-16 19:02:42 +01:00
Erik Johnston
fa9d2c7295 Update changelog 2016-09-16 17:44:29 +01:00
Erik Johnston
19fe990476 Update changelog and bump version 2016-09-16 17:30:59 +01:00
Erik Johnston
995f2f032f Fix public room pagination for client_reader app 2016-09-16 14:48:21 +01:00
Erik Johnston
9e1283c824 Merge pull request #1126 from matrix-org/erikj/public_room_cache
Add very basic filter API to /publicRooms
2016-09-16 13:17:29 +01:00
Erik Johnston
a68807d426 Comment 2016-09-16 11:36:20 +01:00
Erik Johnston
2e67cabd7f Make POST /publicRooms require auth 2016-09-16 11:32:51 +01:00
Erik Johnston
b7b62bf9ea Comment 2016-09-16 11:00:29 +01:00
Erik Johnston
d84319ae10 Add remote room cache 2016-09-16 10:31:59 +01:00
Erik Johnston
23b6701a28 Support filtering remote room lists 2016-09-16 10:24:15 +01:00
Erik Johnston
e58a9d781c Filter remote rooms lists locally 2016-09-16 10:19:32 +01:00
Erik Johnston
74d4cdee25 Don't cache searches in /publicRooms 2016-09-16 09:05:11 +01:00
Erik Johnston
418bcd4309 Add new storage function to slave store 2016-09-16 08:37:39 +01:00
Erik Johnston
098db4aa52 Add very basic filter API to /publicRooms 2016-09-15 17:50:16 +01:00
Erik Johnston
c33b25fd8d Change the way we calculate new_limit in /publicRooms and add POST API 2016-09-15 17:35:20 +01:00
Erik Johnston
de4f798f01 Handling expiring stream extrems correctly. 2016-09-15 17:34:59 +01:00
Erik Johnston
ea6dc356b0 Merge pull request #1125 from matrix-org/erikj/public_room_cache
Change get_pos_of_last_change to return upper bound
2016-09-15 15:48:53 +01:00
Erik Johnston
955f34d23e Change get_pos_of_last_change to return upper bound 2016-09-15 15:12:07 +01:00
Erik Johnston
241d7d2d62 Merge pull request #1124 from matrix-org/erikj/enable_state_caching_workers
Enable state caches on workers
2016-09-15 15:01:47 +01:00
Erik Johnston
1535f21eb5 Merge pull request #1123 from matrix-org/erikj/public_room_cache
Use stream_change cache to make get_forward_extremeties_for_room cache more effective
2016-09-15 14:50:09 +01:00
Erik Johnston
4be85281f9 Enable state caches on workers 2016-09-15 14:31:22 +01:00
Erik Johnston
cb3edec6af Use stream_change cache to make get_forward_extremeties_for_room cache more effective 2016-09-15 14:28:13 +01:00
Erik Johnston
923f77cff3 Merge pull request #1122 from matrix-org/erikj/public_room_cache
Add cache to get_forward_extremeties_for_room
2016-09-15 14:06:55 +01:00
Erik Johnston
55e6fc917c Add cache to get_forward_extremeties_for_room 2016-09-15 14:04:28 +01:00
Erik Johnston
68c1ed4d1a Remove default public rooms limit 2016-09-15 13:56:20 +01:00
Erik Johnston
b82fa849c8 Merge pull request #1120 from matrix-org/erikj/push_invite_cache
Ensure we don't mutate the cache of push rules
2016-09-15 13:27:18 +01:00
Erik Johnston
e457034e99 Merge pull request #1121 from matrix-org/erikj/public_room_paginate
Add pagination support to publicRooms
2016-09-15 13:27:09 +01:00
Erik Johnston
1d98cf26be By default limit /publicRooms to 100 entries 2016-09-15 13:18:35 +01:00
Erik Johnston
211786ecd6 Stream public room changes down replication 2016-09-15 11:47:23 +01:00
Erik Johnston
4fb65a1091 Base public room list off of public_rooms stream 2016-09-15 11:27:04 +01:00
Erik Johnston
5810cffd33 Pass since/from parameters over federation 2016-09-15 10:36:19 +01:00
Erik Johnston
f3eead0660 Allow paginating both forwards and backwards 2016-09-15 10:15:37 +01:00
Erik Johnston
4131381123 Remove support for aggregate room lists 2016-09-15 09:28:15 +01:00
Erik Johnston
6a5ded5988 Ensure we don't mutate the cache of push rules 2016-09-15 09:16:13 +01:00
Erik Johnston
4f181f361d Accept optional token to public room list 2016-09-15 09:08:57 +01:00
Erik Johnston
c566f0ee17 Calculate the public room list from a stream_ordering 2016-09-14 17:42:47 +01:00
Erik Johnston
772c6067a3 Refactor public rooms to not pull out the full state for each room 2016-09-14 17:29:25 +01:00
Erik Johnston
baffe96d95 Add a room visibility stream 2016-09-14 17:29:19 +01:00
Erik Johnston
264a48aedf Merge pull request #1117 from matrix-org/erikj/fix_state
Ensure we don't mutate state cache entries
2016-09-14 16:50:37 +01:00
Erik Johnston
21c88016bd Merge pull request #1118 from matrix-org/erikj/public_rooms_splitout
Split out public room list into a worker process
2016-09-14 16:50:27 +01:00
Erik Johnston
ed992ae6ba Add a DB index to figure out past state at a stream ordering in a room 2016-09-14 16:20:27 +01:00
Erik Johnston
3e6e8a1c03 Enable testing of client_reader 2016-09-14 14:51:48 +01:00
Erik Johnston
e0b6db29ed Split out public room list into a worker process 2016-09-14 14:42:51 +01:00
Erik Johnston
a70a43bc51 Move RoomListHandler into a separate file 2016-09-14 14:07:37 +01:00
Erik Johnston
f2b2cd8eb4 Amalgamate two identical consecutive if statements 2016-09-14 11:16:22 +01:00
Erik Johnston
00f51493f5 Fix reindex 2016-09-14 10:18:30 +01:00
Erik Johnston
d5ae1f1291 Ensure we don't mutate state cache entries 2016-09-14 10:03:48 +01:00
Matthew Hodgson
1b01488d27 Merge pull request #1111 from matrix-org/matthew/device-ids
make device IDs more useful for human disambiguation
2016-09-14 01:18:02 +01:00
Paul Evans
0f73f0e70e Merge pull request #1116 from matrix-org/paul/tiny-fixes
Fix typo "persiting"
2016-09-13 13:27:42 +01:00
Paul "LeoNerd" Evans
ca35e54d6b Fix typo "persiting" 2016-09-13 13:26:33 +01:00
Mark Haines
497f053344 Merge pull request #1114 from matrix-org/markjh/limit_key_retries
Limit how often we ask for keys from dead servers
2016-09-13 13:26:23 +01:00
Mark Haines
ad816b0add Limit how often we ask for keys from dead servers 2016-09-13 11:53:50 +01:00
Mark Haines
0c057736ac Merge pull request #1112 from matrix-org/markjh/e2e_key_handler
Move the E2E key handling into the e2e handler
2016-09-13 11:51:52 +01:00
Erik Johnston
43253c10b8 Remove redundant event_auth index 2016-09-13 11:47:48 +01:00
Mark Haines
18ab019a4a Move the E2E key handling into the e2e handler 2016-09-13 11:35:35 +01:00
Mark Haines
76b09c29b0 Merge pull request #1110 from matrix-org/markjh/e2e_timeout
Add a timeout parameter for end2end key queries.
2016-09-13 10:50:45 +01:00
Erik Johnston
ba6bc2faa0 Merge pull request #1109 from matrix-org/erikj/partial_indices
Add WHERE clause support to index creation
2016-09-13 09:06:16 +01:00
Matthew Hodgson
edbcb4152b make device IDs more useful for human disambiguation 2016-09-13 00:02:39 +01:00
Mark Haines
949c2c5435 Add a timeout parameter for end2end key queries.
Add a timeout parameter for controlling how long synapse will wait
for responses from remote servers. For servers that fail, include how
they failed to make it easier to debug.

Fetch keys from different servers in parallel rather than in series.

Set the default timeout to 10s.
2016-09-12 18:17:09 +01:00
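A minimal sketch of the parallel-with-timeout pattern, assuming Twisted's `Deferred.addTimeout` (names illustrative):

```python
from twisted.internet import defer, reactor


def query_servers_for_keys(servers, query_one_server, timeout=10):
    deferreds = []
    for server in servers:
        d = query_one_server(server)  # returns a Deferred, per server
        d.addTimeout(timeout, reactor)  # give up on slow servers (10s)
        deferreds.append(d)
    # consumeErrors=True records each server's failure (and how it
    # failed) in the results instead of aborting the whole batch.
    return defer.DeferredList(deferreds, consumeErrors=True)
```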
Erik Johnston
b17af156c7 Remove where clause 2016-09-12 17:05:54 +01:00
Erik Johnston
1c9da43a95 Merge pull request #1108 from matrix-org/erikj/create_dm
Add is_direct param to /createRoom
2016-09-12 16:57:16 +01:00
Erik Johnston
0b32bb20bb Index contains_url for file search queries 2016-09-12 16:57:05 +01:00
Erik Johnston
c94de0ab60 Add WHERE clause support to index creation 2016-09-12 16:55:01 +01:00
Erik Johnston
502c901e11 Merge pull request #1107 from matrix-org/erikj/backfill_none
Fix backfill when cannot find an event.
2016-09-12 16:48:01 +01:00
Erik Johnston
48a5a7552d Add is_direct param to /createRoom 2016-09-12 16:34:20 +01:00
Erik Johnston
706b5d76ed Fix backfill when cannot find an event.
`get_pdu` can succeed but return None.
2016-09-12 14:59:51 +01:00
Erik Johnston
7c679b1118 Merge pull request #1106 from matrix-org/erikj/state_reindex_concurrent
Create new index concurrently
2016-09-12 14:38:40 +01:00
Erik Johnston
d080b3425c Merge pull request #1105 from matrix-org/erikj/make_notif_highlight_query_fast
Optimise /notifications query
2016-09-12 14:34:12 +01:00
Erik Johnston
03a98aff3c Create new index concurrently 2016-09-12 14:27:01 +01:00
Erik Johnston
fa20c9ce94 Change the index to be stream_ordering, highlight 2016-09-12 14:04:08 +01:00
Erik Johnston
5ef5435529 Remove unused import 2016-09-12 13:32:58 +01:00
Mark Haines
aa7b890cfe Merge pull request #1104 from matrix-org/markjh/direct_to_device_federation_sync
Fix direct to device messages received over federation to notify sync
2016-09-12 13:25:23 +01:00
Erik Johnston
7cd6edb947 Use register_background_index_update 2016-09-12 12:54:48 +01:00
Erik Johnston
0294c14ec4 Add back in query change 2016-09-12 12:43:56 +01:00
Erik Johnston
7fe42cf949 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/make_notif_highlight_query_fast 2016-09-12 12:37:09 +01:00
Erik Johnston
15ca0c6a4d Make reindex happen in bg 2016-09-12 12:36:36 +01:00
Mark Haines
0baf498bd1 Merge pull request #1103 from matrix-org/markjh/comment_on_create_index
Add comments to existing schema deltas that used "CREATE INDEX" directly
2016-09-12 12:33:18 +01:00
Mark Haines
a232e06100 Fix direct to device messages received over federation to notify sync 2016-09-12 12:30:46 +01:00
Mark Haines
4a32d25d4c Merge branch 'develop' into markjh/bearer_token 2016-09-12 11:14:56 +01:00
Mark Haines
31f85f9db9 Add comments to existing schema deltas that used "CREATE INDEX" directly 2016-09-12 11:00:26 +01:00
Mark Haines
ec609f8094 Fix unit tests 2016-09-12 10:46:02 +01:00
Erik Johnston
caef86f428 Merge pull request #1102 from matrix-org/revert-1099-dbkr/make_notif_highlight_query_fast
Revert "Add index to event_push_actions"
2016-09-12 10:41:21 +01:00
Erik Johnston
54417999b6 Revert "Add index to event_push_actions" 2016-09-12 10:39:55 +01:00
Erik Johnston
45dc260060 Merge pull request #1101 from matrix-org/erikj/state_types_idx
Change state fetch query for postgres to be faster
2016-09-12 10:20:38 +01:00
Erik Johnston
d1c217c823 Merge pull request #1097 from matrix-org/erikj/replication_typing_rest
Correctly handle typing stream id resetting
2016-09-12 10:10:15 +01:00
Erik Johnston
897d57bc58 Change state fetch query for postgres to be faster
It turns out that postgres doesn't like doing a list of ORs and is
about 1000x slower, so we just issue a query for each specific type
separately.
2016-09-12 10:05:07 +01:00
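Illustrative shape of the per-type strategy (the table and columns follow `state_groups_state`; the real logic is more involved):

```python
def fetch_state_ids(txn, state_group, types):
    results = {}
    for etype, state_key in types:
        # One simple, index-friendly query per (type, state_key) pair,
        # instead of a single query with a long chain of ORs.
        txn.execute(
            "SELECT event_id FROM state_groups_state"
            " WHERE state_group = ? AND type = ? AND state_key = ?",
            (state_group, etype, state_key),
        )
        for (event_id,) in txn.fetchall():
            results[(etype, state_key)] = event_id
    return results
```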
Erik Johnston
555460ae1b Merge pull request #1095 from matrix-org/erikj/batch_edus
Clobber EDUs in send queue
2016-09-12 08:04:15 +01:00
Richard van der Hoff
4162f820ff Merge pull request #1100 from VShell/fix-cas
Conform better to the CAS protocol specification
2016-09-09 21:57:42 +01:00
Shell Turner
29205e9596 Conform better to the CAS protocol specification
Redirect to CAS's /login endpoint properly, and
don't require an <attributes> element.

Signed-off-by: Shell Turner <cam.turn@gmail.com>
2016-09-09 21:20:14 +01:00
David Baker
d213884c41 Merge pull request #1099 from matrix-org/dbkr/make_notif_highlight_query_fast
Add index to event_push_actions
2016-09-09 19:39:09 +01:00
David Baker
b91e2833b3 Merge remote-tracking branch 'origin/develop' into dbkr/make_notif_highlight_query_fast 2016-09-09 19:11:34 +01:00
David Baker
f2acc3dcf9 Add index to event_push_actions
and remove room_id clause so it uses it

Mostly from @negativemjark
2016-09-09 18:54:54 +01:00
Mark Haines
3ddec016ff Merge branch 'develop' into markjh/bearer_token 2016-09-09 18:51:22 +01:00
Mark Haines
8e01263587 Allow clients to supply access_tokens as headers
Clients can continue to supply access tokens as query parameters
or can supply the token as a header:

   Authorization: Bearer <access_token_goes_here>

This matches the OAuth2 format of
https://tools.ietf.org/html/rfc6750#section-2.1
2016-09-09 18:17:42 +01:00
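A hedged sketch of accepting both forms on a Twisted request object (the header takes precedence, falling back to the legacy query parameter):

```python
def get_access_token_from_request(request):
    auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
    if auth_headers:
        parts = auth_headers[0].split(b" ")
        if len(parts) == 2 and parts[0] == b"Bearer":
            return parts[1].decode("ascii")
    # Fall back to ?access_token=... for older clients.
    args = request.args.get(b"access_token", [])
    return args[0].decode("ascii") if args else None
```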
Erik Johnston
3265def8c7 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/batch_edus 2016-09-09 18:06:01 +01:00
Erik Johnston
af4701b311 Fix incorrect attribute name 2016-09-09 17:36:56 +01:00
Erik Johnston
44330a21e9 Comment 2016-09-09 17:22:07 +01:00
Erik Johnston
464ffd1b5e Comment 2016-09-09 17:17:23 +01:00
Erik Johnston
327425764e Add edu.type as part of key. Remove debug logging 2016-09-09 17:13:30 +01:00
Mark Haines
dbff7e9436 Merge pull request #1096 from matrix-org/markjh/get_access_token
Add helper function for getting access_tokens from requests
2016-09-09 17:09:27 +01:00
Erik Johnston
a4339de9de Correctly handle typing stream id resetting 2016-09-09 16:44:26 +01:00
Mark Haines
8aee5aa068 Add helper function for getting access_tokens from requests
Rather than reimplementing the token parsing in the various places.
This will make it easier to change the token parsing to allow access_tokens
in HTTP headers.
2016-09-09 16:33:15 +01:00
Erik Johnston
52b2318777 Clobber EDUs in send queue 2016-09-09 15:59:08 +01:00
Paul Evans
56f38d1776 Merge pull request #1091 from matrix-org/paul/third-party-lookup
Improvements to 3PE lookup API
2016-09-09 15:43:11 +01:00
Paul Evans
8cb252d00c Merge pull request #1094 from matrix-org/paul/get-state-whole-event
Allow clients to ask for the whole of a single state event
2016-09-09 15:26:17 +01:00
Paul "LeoNerd" Evans
776594f99d Log if rejecting 3PE query metadata result due to type check 2016-09-09 15:11:31 +01:00
Paul "LeoNerd" Evans
ed44c475d8 Reject malformed 3PE query metadata results earlier in AS API handling code 2016-09-09 15:07:04 +01:00
Erik Johnston
ab80d5e0a9 Drop replication log levels 2016-09-09 14:56:50 +01:00
Paul "LeoNerd" Evans
f25d74f69c Minor fixes from PR comments 2016-09-09 14:54:16 +01:00
Erik Johnston
ea05155a8c Merge pull request #1093 from matrix-org/erikj/dedupe_presence
Deduplicate presence in _update_states
2016-09-09 14:45:21 +01:00
Paul "LeoNerd" Evans
d271383e63 Filter returned events for client-facing format 2016-09-09 14:40:15 +01:00
Paul "LeoNerd" Evans
0fc0a3bdff Allow clients to specify the format a room state event is returned in 2016-09-09 14:34:29 +01:00
Erik Johnston
6c4d582144 Deduplicate presence in _update_states 2016-09-09 14:28:22 +01:00
Erik Johnston
685da5a3b0 Merge pull request #1092 from matrix-org/erikj/transaction_queue_check
Check if destination is ready for retry earlier
2016-09-09 13:49:56 +01:00
Erik Johnston
a6c6750166 Check if destination is ready for retry earlier 2016-09-09 13:46:05 +01:00
Paul "LeoNerd" Evans
bdbcfc2a80 appease pep8 2016-09-09 13:31:39 +01:00
Paul "LeoNerd" Evans
6eb0c8a2e4 Python isn't JavaScript; have to quote dict keys 2016-09-09 13:31:17 +01:00
Mark Haines
6b54fa81de Merge pull request #1089 from matrix-org/markjh/direct_to_device_stream
Track the max device stream_id in a separate table,
2016-09-09 13:31:07 +01:00
Paul "LeoNerd" Evans
25eb769b26 Efficiency fix for lookups of a single protocol 2016-09-09 13:25:02 +01:00
Erik Johnston
0b6b999e7b Merge pull request #1090 from matrix-org/erikj/transaction_queue_check
Fix tightloop on sending transaction
2016-09-09 13:19:44 +01:00
Paul "LeoNerd" Evans
3328428d05 Allow lookup of a single 3PE protocol query metadata 2016-09-09 13:19:04 +01:00
Erik Johnston
4598682b43 Fix tightloop on sending transaction 2016-09-09 13:12:53 +01:00
Paul "LeoNerd" Evans
033d43e419 Don't corrupt shared cache on subsequent protocol requests 2016-09-09 13:10:36 +01:00
Mark Haines
647c724573 Use the previous MAX value if any to set the stream_id 2016-09-09 11:52:44 +01:00
Erik Johnston
a15ba15e64 Merge pull request #1088 from matrix-org/erikj/transaction_queue_check
Correctly guard against multiple concurrent transactions
2016-09-09 11:49:06 +01:00
Mark Haines
6a6cbfcf1e Track the max_stream_device_id in a separate table, since we delete from the inbox table 2016-09-09 11:48:23 +01:00
Erik Johnston
d2688d7f03 Correctly guard against multiple concurrent transactions 2016-09-09 11:44:36 +01:00
Mark Haines
303b6f29f0 Merge pull request #1087 from matrix-org/markjh/reapply_delta
Reapply 34/device_outbox in 35/device_outbox_again.py since the schem…
2016-09-09 11:24:27 +01:00
Erik Johnston
1fe7ca1362 Merge branch 'release-v0.17.3' of github.com:matrix-org/synapse into develop 2016-09-09 11:15:40 +01:00
Erik Johnston
9bba6ebaa9 Merge branch 'release-v0.17.3' of github.com:matrix-org/synapse 2016-09-09 11:12:31 +01:00
Erik Johnston
66efcbbff1 Bump changelog and version 2016-09-09 11:07:39 +01:00
Mark Haines
0877157353 Just move the schema and add some DROPs 2016-09-09 11:04:47 +01:00
Erik Johnston
2ffec928e2 Reduce batch size to be under SQL limit 2016-09-09 11:03:31 +01:00
Erik Johnston
b390756150 Update last_device_stream_id_by_dest if there is nothing to send 2016-09-09 11:00:15 +01:00
Matthew Hodgson
b8f84f99ff Merge pull request #1081 from matrix-org/dbkr/notifications_only_highlight
Implement `only=highlight` on `/notifications`
2016-09-09 00:09:51 +01:00
Mark Haines
43b77c5d97 Only catch database errors 2016-09-08 17:44:21 +01:00
Paul "LeoNerd" Evans
2f267ee160 Collect up all the "instances" lists of individual AS protocol results into one combined answer to the client 2016-09-08 17:43:53 +01:00
Mark Haines
7d5b142547 Add a stub run_upgrade 2016-09-08 17:39:11 +01:00
David Baker
c3276aef25 Merge pull request #1080 from matrix-org/dbkr/fix_notifications_api_with_from
Fix /notifications API when used with `from` param
2016-09-08 17:35:35 +01:00
Mark Haines
fa722a699c Reapply 34/device_outbox in 35/device_outbox_again.py since the schema was bumped before it landed on develop 2016-09-08 17:35:16 +01:00
Erik Johnston
023143f9ae Merge pull request #1085 from matrix-org/erikj/reindex_state_groups
Reindex state_groups_state after pruning
2016-09-08 17:16:38 +01:00
Erik Johnston
5c688739d6 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/reindex_state_groups 2016-09-08 16:52:09 +01:00
Erik Johnston
ebb46497ba Add delta file 2016-09-08 16:38:54 +01:00
Mark Haines
91ec972277 Merge pull request #1084 from matrix-org/markjh/direct_to_device_wildcard
Support wildcard device_ids for direct to device messages
2016-09-08 16:28:08 +01:00
Erik Johnston
5beda10bbd Reindex state_groups_state after pruning 2016-09-08 16:18:01 +01:00
Erik Johnston
257025ac89 Merge pull request #1082 from matrix-org/erikj/remote_public_rooms
Add server param to /publicRooms
2016-09-08 16:04:22 +01:00
Erik Johnston
3f9889bfd6 Use parse_string 2016-09-08 15:51:10 +01:00
Erik Johnston
caa22334b3 Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-09-08 15:33:00 +01:00
Erik Johnston
5834c6178c Merge branch 'release-v0.17.2' of github.com:matrix-org/synapse 2016-09-08 15:26:26 +01:00
Erik Johnston
b152ee71fe Bump version and changelog 2016-09-08 15:23:44 +01:00
Erik Johnston
d987353840 Merge pull request #1083 from matrix-org/erikj/check_origin
Check the user_id for presence/typing matches origin
2016-09-08 15:17:24 +01:00
Mark Haines
a1c8f268e5 Support wildcard device_ids for direct to device messages 2016-09-08 15:13:05 +01:00
Erik Johnston
8b93af662d Check the user_id for presence/typing matches origin 2016-09-08 15:07:38 +01:00
Mark Haines
2117c409a0 Merge pull request #1074 from matrix-org/markjh/direct_to_device_federation
Send device messages over federation
2016-09-08 14:26:47 +01:00
Mark Haines
fa9d36e050 Merge branch 'develop' into markjh/direct_to_device_federation 2016-09-08 13:43:43 +01:00
David Baker
4ef222ab61 Implement only=highlight on /notifications 2016-09-08 13:43:35 +01:00
Erik Johnston
61cd9af09b Log delta files we're applying 2016-09-08 13:40:46 +01:00
Erik Johnston
791658b576 Add server param to /publicRooms 2016-09-08 11:53:05 +01:00
Erik Johnston
2982d16e07 Merge pull request #1079 from matrix-org/erikj/state_seqscan
Temporarily disable sequential scans for state fetching
2016-09-08 09:55:16 +01:00
David Baker
c5b49eb7ca Fix /notifications API when used with from param 2016-09-08 09:40:10 +01:00
Erik Johnston
b568ca309c Temporarily disable sequential scans for state fetching 2016-09-08 09:38:54 +01:00
Mark Haines
3c320c006c Merge pull request #1077 from matrix-org/markjh/device_logging
Log the types and values when failing to store devices
2016-09-07 18:24:24 +01:00
Mark Haines
85b51fdd6b Log the types and values when failing to store devices 2016-09-07 17:19:18 +01:00
Mark Haines
43954d000e Add a new method to enqueue the device messages rather than sending a dummy EDU 2016-09-07 16:10:51 +01:00
Mark Haines
2a0159b8ae Fix the stream change cache to work over replication 2016-09-07 15:58:00 +01:00
Mark Haines
cb98ac261b Move the check for federated device_messages.
Move the check into _attempt_new_transaction.
Only delete messages if there were messages to delete.
2016-09-07 15:39:13 +01:00
Mark Haines
31a07d2335 Add stream change caches for device messages 2016-09-07 15:27:07 +01:00
Erik Johnston
91279fd218 Merge pull request #1076 from matrix-org/erikj/state_storage
Use windowing function to make use of index
2016-09-07 14:57:25 +01:00
Erik Johnston
513188aa56 Comment 2016-09-07 14:53:23 +01:00
Erik Johnston
fadb01551a Add appropriate framing clause 2016-09-07 14:39:01 +01:00
Erik Johnston
d25c20ccbe Use windowing function to make use of index 2016-09-07 14:22:22 +01:00
Mark Haines
7d893beebe Comment the add_messages storage functions 2016-09-07 12:03:37 +01:00
Erik Johnston
94a83b534f Merge pull request #1065 from matrix-org/erikj/state_storage
Move to storing state_groups_state as deltas
2016-09-07 09:39:58 +01:00
Mark Haines
74cbfdc7de Fix unit tests 2016-09-06 18:30:03 +01:00
Mark Haines
d4a35ada28 Send device messages over federation 2016-09-06 18:16:20 +01:00
Mark Haines
e020834e4f Add storage methods for federated device messages 2016-09-06 15:12:13 +01:00
Mark Haines
2ad72da931 Add tables for federated device messages
Adds tables for storing the messages that need to be sent to a
remote device and for deduplicating messages received.
2016-09-06 15:10:40 +01:00
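An illustrative shape for the two tables (the column set is an assumption, not the actual schema): an outbox of messages queued per remote destination, plus an inbox record of already-seen (origin, message_id) pairs for deduplication:

```python
CREATE_TABLES_SQL = """
CREATE TABLE device_federation_outbox (
    destination TEXT NOT NULL,   -- remote server the messages are for
    stream_id BIGINT NOT NULL,   -- position in the to-device stream
    queued_ts BIGINT NOT NULL,
    messages_json TEXT NOT NULL  -- EDU content awaiting delivery
);

CREATE TABLE device_federation_inbox (
    origin TEXT NOT NULL,        -- server the message came from
    message_id TEXT NOT NULL,    -- used to drop duplicate redeliveries
    received_ts BIGINT NOT NULL
);
"""
```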
Erik Johnston
8da7d0e4f9 Merge pull request #1073 from matrix-org/erikj/presence_fiddle
Record counts of state changes
2016-09-06 11:37:20 +01:00
Erik Johnston
3c4208a057 Record counts of state changes 2016-09-06 11:31:01 +01:00
Mark Haines
f4164edb70 Move _add_messages_to_device_inbox_txn into a separate method 2016-09-06 11:26:37 +01:00
Erik Johnston
2eed4d7af4 Merge pull request #1072 from matrix-org/erikj/presence_fiddle
Fiddle should_notify to better report stats
2016-09-06 10:57:48 +01:00
Erik Johnston
438ef47637 Short circuit if presence is the same 2016-09-06 10:28:35 +01:00
Erik Johnston
74a3b4a650 Fiddle should_notify to better report stats 2016-09-06 10:23:38 +01:00
Erik Johnston
9b69c85f7c Merge pull request #1071 from matrix-org/erikj/pdf_fix
Allow PDF to be rendered from media repo
2016-09-05 17:55:43 +01:00
Erik Johnston
d51b8a1674 Add quotes and be explicit about script-src 2016-09-05 17:35:01 +01:00
Erik Johnston
662b031a30 Allow PDF to be rendered from media repo 2016-09-05 17:25:26 +01:00
Erik Johnston
4ec67a3d21 Mention get_pdu bug 2016-09-05 15:52:48 +01:00
Erik Johnston
0595413c0f Scale the batch size so that we're not bitten by the minimum 2016-09-05 15:49:57 +01:00
Erik Johnston
a7032abb2e Correctly handle reindexing state groups that already have an edge 2016-09-05 15:07:23 +01:00
Erik Johnston
9e6d88f4e2 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/state_storage 2016-09-05 15:01:33 +01:00
Erik Johnston
8c93e0bae7 Merge pull request #1070 from matrix-org/erikj/presence_stats
Record why we have chosen to notify
2016-09-05 15:01:30 +01:00
Erik Johnston
70332a12dd Take value in a better way 2016-09-05 14:57:14 +01:00
Erik Johnston
373654c635 Comment about sqlite and WITH RECURSIVE 2016-09-05 14:50:36 +01:00
Erik Johnston
485d999c8a Correctly delete old state groups in purge history API 2016-09-05 14:49:08 +01:00
Erik Johnston
69054e3d4c Record why we have chosen to notify 2016-09-05 14:12:11 +01:00
Erik Johnston
0237a0d1a5 Merge pull request #1069 from matrix-org/erikj/host_find
Use get_joined_users_from_context instead of manually looking up hosts
2016-09-05 13:57:30 +01:00
Erik Johnston
69a2d4e38c Use get_joined_users_from_context instead of manually looking up hosts 2016-09-05 13:44:40 +01:00
Erik Johnston
19275b3030 Typo 2016-09-05 13:08:26 +01:00
Erik Johnston
f12993ec16 Bump changelog and version 2016-09-05 13:01:14 +01:00
Erik Johnston
940d4fad24 Merge pull request #1068 from matrix-org/erikj/bulk_push
Make bulk_get_push_rules_for_room use get_joined_users_from_context cache
2016-09-05 12:44:44 +01:00
Erik Johnston
bb36b93f71 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/state_storage 2016-09-05 11:53:11 +01:00
Erik Johnston
d87b87adf7 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/bulk_push 2016-09-05 11:52:49 +01:00
Erik Johnston
caed150363 Remove unused imports 2016-09-05 10:52:01 +01:00
Erik Johnston
80a6a445fa Only fetch local pushers 2016-09-05 10:43:32 +01:00
Erik Johnston
628e65721b Add comments 2016-09-05 10:41:27 +01:00
Mark Haines
274c2f50a5 Merge pull request #1067 from matrix-org/markjh/idempotent
Fix membership changes to be idempotent
2016-09-05 10:21:25 +01:00
Erik Johnston
a99e933550 Add upgrade script that will slowly prune state_groups_state entries 2016-09-05 10:05:36 +01:00
Erik Johnston
3847fa38c4 Make bulk_get_push_rules_for_room use get_joined_users_from_context cache 2016-09-05 10:02:38 +01:00
Mark Haines
f2690c6423 Fix membership changes to be idempotent 2016-09-02 19:23:22 +01:00
Mark Haines
81b94c5750 Merge pull request #1066 from matrix-org/markjh/direct_to_device_lowerbound
Only return new device messages in /sync
2016-09-02 16:18:34 +01:00
Mark Haines
65fa37ac5e Only return new device messages in /sync 2016-09-02 15:50:37 +01:00
Erik Johnston
3baf641a48 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/state_storage 2016-09-02 14:54:07 +01:00
Erik Johnston
c0238ecbed Explicitly specify state_key for history_visibility fetching 2016-09-02 14:53:46 +01:00
Erik Johnston
273b6bcf22 Merge pull request #1064 from matrix-org/erikj/on_receive_check
Only check if host is in room if we have state and auth_chain
2016-09-02 14:43:35 +01:00
Erik Johnston
f7f1027d3d Comment on when auth chain and state are None 2016-09-02 14:42:38 +01:00
Erik Johnston
34e5e17f91 Comment 2016-09-02 14:26:07 +01:00
Erik Johnston
b96c6c3185 Docstrings 2016-09-02 14:19:22 +01:00
Erik Johnston
1ffe9578d1 Merge pull request #1063 from matrix-org/erikj/pull_out_ids_only
Only pull out IDs from DB for /state_ids/ request
2016-09-02 14:16:28 +01:00
Erik Johnston
cce957e254 Bump max_entries on get_destination_retry_timings 2016-09-02 14:08:33 +01:00
Erik Johnston
bd9b8d87ae Only check if host is in room if we have state and auth_chain 2016-09-02 13:40:28 +01:00
Mark Haines
2aa39db681 Merge pull request #1062 from matrix-org/markjh/direct_to_device_synchrotron
Fix up the calls to the notifier for device messages
2016-09-02 11:24:30 +01:00
Erik Johnston
657847e4c6 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/state_storage 2016-09-02 11:04:48 +01:00
Mark Haines
965168a842 Merge branch 'develop' into markjh/direct_to_device_synchrotron 2016-09-02 10:59:24 +01:00
Erik Johnston
2854ee2a52 Only pull out IDs from DB for /state_ids/ request 2016-09-02 10:53:36 +01:00
Erik Johnston
598317927c Limit the length of state chains 2016-09-02 10:41:38 +01:00
Mark Haines
7ed5acacf4 Fix up the calls to the notifier for device messages 2016-09-01 18:08:40 +01:00
Erik Johnston
c1c38da586 Merge pull request #1061 from matrix-org/erikj/linearize_resolution
Linearize state resolution to help caches
2016-09-01 15:58:19 +01:00
Erik Johnston
051a9ea921 Linearize state resolution to help caches 2016-09-01 14:55:03 +01:00
Erik Johnston
265d847ffd Fix typo in log line 2016-09-01 14:50:06 +01:00
Erik Johnston
f4778d4cd9 Merge branch 'erikj/pdu_check' of github.com:matrix-org/synapse into develop 2016-09-01 14:40:36 +01:00
Erik Johnston
9e25443db8 Move to storing state_groups_state as deltas 2016-09-01 14:31:26 +01:00
Erik Johnston
44982606ee Merge pull request #1060 from matrix-org/erikj/state_ids
Assign state groups in state handler.
2016-09-01 14:20:42 +01:00
Erik Johnston
516a272aca Ensure we only return a validated pdu in get_pdu 2016-09-01 10:55:02 +01:00
Erik Johnston
0cfd6c3161 Use state_groups table to test existence 2016-08-31 16:25:57 +01:00
Erik Johnston
5405351b14 Lower get_linearized_receipts_for_room cache size 2016-08-31 16:19:44 +01:00
Erik Johnston
1b91ff685f Merge pull request #1053 from Takios/develop
Add prerequisites to install on openSUSE to README
2016-08-31 16:00:02 +01:00
Erik Johnston
ed7a703d4c Handle the fact that workers can't generate state groups 2016-08-31 15:53:19 +01:00
Erik Johnston
1671913287 Merge pull request #1059 from matrix-org/erikj/sent_transaction_delete
Clean up old sent transactions
2016-08-31 15:07:33 +01:00
Erik Johnston
f51888530d Always specify state_group so that its in the cache 2016-08-31 14:58:11 +01:00
Erik Johnston
826ca61745 Add storage function to SlaveStore 2016-08-31 14:45:04 +01:00
Erik Johnston
fbd2615de4 Merge pull request #1057 from matrix-org/erikj/fix_email_name
Fix email notifs by adding missing param
2016-08-31 14:33:45 +01:00
Erik Johnston
c10cb581c6 Correctly handle the difference between prev and current state 2016-08-31 14:26:22 +01:00
Erik Johnston
ef0cc648cf Clean up old sent transactions 2016-08-31 11:12:02 +01:00
Mark Haines
761f9fccff Merge pull request #1058 from matrix-org/markjh/direct_to_device_synchrotron
Add a replication stream for direct to device messages
2016-08-31 11:08:18 +01:00
Mark Haines
a662252758 Return the current stream position from add_messages_to_device_inbox 2016-08-31 10:42:52 +01:00
Mark Haines
1aa3e1d287 Add a replication stream for direct to device messages 2016-08-31 10:38:58 +01:00
Erik Johnston
1bb8ec296d Generate state group ids in state layer 2016-08-31 10:09:46 +01:00
Erik Johnston
d80f64d370 Fix email notifs by adding missing param 2016-08-30 21:46:39 +01:00
Kegsay
998666be64 Merge pull request #1056 from matrix-org/kegan/appservice-url-is-optional
Allow application services to have an optional 'url'
2016-08-30 17:56:05 +01:00
Kegan Dougal
c882783535 flake8 2016-08-30 17:20:31 +01:00
Kegan Dougal
572acde483 Use None instead of the empty string
Change how we validate the 'url' field as a result.
2016-08-30 17:16:00 +01:00
Erik Johnston
5dc2a702cf Make _state_groups_id_gen a normal IdGenerator 2016-08-30 16:55:11 +01:00
Erik Johnston
3e784eff74 Remove state replication stream 2016-08-30 16:51:36 +01:00
Kegan Dougal
16b652f0a3 Flake8 2016-08-30 16:30:12 +01:00
Kegan Dougal
e82247f990 Allow application services to have an optional 'url'
If 'url' is not specified, they will not be pushed for events or queries. This
is useful for bots who simply wish to reserve large chunks of user/alias
namespace, and don't care about being pushed for events.
2016-08-30 16:21:16 +01:00
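The behaviour described above, as a minimal Python sketch (the `service` and `http_client` objects and the transaction URL shape are illustrative stand-ins, not Synapse's actual internals):

```python
# Sketch only: an appservice registered without a url keeps its reserved
# user/alias namespaces but is skipped entirely when pushing events.
def maybe_push_to_appservice(http_client, service, txn_id, events):
    if service.url is None:
        return  # namespace-only registration: nothing to push to
    http_client.put_json(
        "%s/transactions/%s" % (service.url, txn_id), {"events": events}
    )
```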
Erik Johnston
c7f665d700 Merge pull request #1055 from matrix-org/erikj/occaisonally_persist
Occasionally persist unpersisted presence updates
2016-08-30 15:59:00 +01:00
Erik Johnston
d3f108b6bb Merge pull request #1054 from matrix-org/erikj/presence_noop_online
Don't notify for online -> online transitions.
2016-08-30 15:51:28 +01:00
Erik Johnston
097330bae8 Check correct variable 2016-08-30 15:50:20 +01:00
Erik Johnston
21b977ccfe Occasionally persist unpersisted presence updates 2016-08-30 15:39:50 +01:00
Matthew Hodgson
928d337c16 Remove FUD over psql 2016-08-30 15:33:13 +01:00
Erik Johnston
bc1a8b1f7a Don't notify for online -> online transitions.
Specifically, if currently_active remains true then we should not notify
if only the last active time changes.
2016-08-30 15:05:32 +01:00
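A minimal sketch of the rule this commit describes, with a simplified, hypothetical presence state (the real handler tracks more fields):

```python
from collections import namedtuple

# Simplified stand-in for the real presence state object.
PresenceState = namedtuple(
    "PresenceState", ["state", "status_msg", "currently_active", "last_active_ts"]
)

def should_notify(prev, new):
    """Return True if a presence change is worth pushing to clients."""
    if prev.state != new.state or prev.status_msg != new.status_msg:
        return True
    if new.state == "online" and prev.currently_active != new.currently_active:
        return True
    # online -> online with currently_active unchanged: only last_active_ts
    # moved, which is not worth waking every interested client for.
    return False
```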
Fabian Niepelt
b3be9e4376 Add prerequisites to install on openSUSE to README
Signed-off-by: Fabian Niepelt <fniepelt@takios.de>
2016-08-30 15:03:03 +02:00
Erik Johnston
67f0c990f8 Merge pull request #1051 from matrix-org/erikj/fix_push_names
Fix push room names for rooms with only an alias
2016-08-30 11:50:34 +01:00
Erik Johnston
fba1111dd6 Merge pull request #1052 from matrix-org/erikj/noop_new_device_message
Noop get_new_messages_for_device if token hasn't changed
2016-08-30 11:36:27 +01:00
Erik Johnston
c8cd87b21b Comment about message deletion 2016-08-30 11:23:26 +01:00
Erik Johnston
55e17d3697 Fix push room names for rooms with only an alias 2016-08-30 11:19:59 +01:00
Erik Johnston
1ee6285905 Fix check 2016-08-30 11:17:46 +01:00
Erik Johnston
68e1a872fd Noop get_new_messages_for_device if token hasn't changed 2016-08-30 10:58:46 +01:00
Erik Johnston
55fc17cf4b Merge pull request #1049 from matrix-org/erikj/presence_users_in_room
Use state handler instead of get_users_in_room/get_joined_hosts
2016-08-30 10:50:37 +01:00
Erik Johnston
ffc807af50 Merge pull request #1050 from matrix-org/erikj/fix_device_sync
Add new direct message storage functions to slave store
2016-08-30 10:29:41 +01:00
Erik Johnston
41788bba50 Add to slave store 2016-08-30 09:55:17 +01:00
Erik Johnston
873f870e5a Add new direct message storage functions to slave store 2016-08-30 09:40:32 +01:00
Matthew Hodgson
5acbe09b67 warn people to avoid running an HS media repository on the same domain as another webapp 2016-08-27 00:20:39 +01:00
Mark Haines
8c1e746f54 Merge pull request #1046 from matrix-org/markjh/direct_to_device
Start adding store-and-forward direct-to-device messaging
2016-08-26 15:43:35 +01:00
Erik Johnston
93b32d4515 Fix unit tests 2016-08-26 15:40:27 +01:00
Erik Johnston
bed10f9880 Use state handler instead of get_users_in_room/get_joined_hosts 2016-08-26 14:54:30 +01:00
Mark Haines
4bbef62124 Merge remote-tracking branch 'origin/develop' into markjh/direct_to_device 2016-08-26 14:35:31 +01:00
Erik Johnston
3cf15edef7 Merge pull request #1048 from matrix-org/erikj/fix_mail_name
Fix room name in email notifs
2016-08-26 14:22:16 +01:00
Erik Johnston
a234e895cf Fix room name in email notifs 2016-08-26 14:10:21 +01:00
Erik Johnston
c943d8d2e8 Merge pull request #1047 from matrix-org/erikj/state_ids
Avoid pulling the full state of a room out so often.
2016-08-26 13:41:57 +01:00
Erik Johnston
4daa397a00 Add is_host_joined to slave storage 2016-08-26 13:10:56 +01:00
Erik Johnston
c7cd35d682 Typo 2016-08-26 11:23:58 +01:00
Erik Johnston
54cc69154e Make None optional 2016-08-26 11:20:59 +01:00
Erik Johnston
11faa4296d Measure _filter_events_for_server 2016-08-26 11:15:40 +01:00
Erik Johnston
f6338d6a3e Don't pull out full state for _filter_events_for_server 2016-08-26 11:13:16 +01:00
Erik Johnston
1ccdc1e93a Cache check_host_in_room 2016-08-26 10:59:40 +01:00
Erik Johnston
25414b44a2 Add measure on check_host_in_room 2016-08-26 10:47:00 +01:00
Erik Johnston
3f11953fcb Fix tests 2016-08-26 10:15:52 +01:00
Erik Johnston
50943ab942 Add new state storage funcs to replication 2016-08-26 09:57:32 +01:00
Erik Johnston
30961182f2 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/state_ids 2016-08-26 09:48:13 +01:00
Erik Johnston
c1a133a6b6 Merge pull request #1043 from matrix-org/erikj/backfill_fix
Fix None check in backfill
2016-08-26 09:07:05 +01:00
Erik Johnston
778fa85f47 Make sync not pull out full state 2016-08-25 18:59:44 +01:00
Paul Evans
1a1e198f72 Merge pull request #1044 from matrix-org/paul/third-party-lookup
Root the 3PE lookup API within /_matrix/app/unstable
2016-08-25 18:49:18 +01:00
Mark Haines
3b8d0ceb22 More 0_0 in tests 2016-08-25 18:42:46 +01:00
Erik Johnston
7356d52e73 Fix up push to use get_current_state_ids 2016-08-25 18:35:49 +01:00
Paul "LeoNerd" Evans
9459137f1e Just sprintf the 'kind' argument into uri directly 2016-08-25 18:35:38 +01:00
Paul "LeoNerd" Evans
1294d4a329 Move ThirdPartyEntityKind into api.constants so the expectation becomes that the value is significant 2016-08-25 18:34:47 +01:00
Mark Haines
ab34fdecb7 Merge branch 'develop' into markjh/direct_to_device 2016-08-25 18:34:46 +01:00
Mark Haines
b162cb2e41 Add some TODOs 2016-08-25 18:18:53 +01:00
Erik Johnston
0e1900d819 Pull out full state less 2016-08-25 18:15:51 +01:00
Mark Haines
641efb6a39 Fix the deduplication of incoming direct-to-device messages 2016-08-25 18:14:02 +01:00
Paul "LeoNerd" Evans
e7af8be5ae Root the 3PE lookup API within /_matrix/app/unstable instead of at toplevel 2016-08-25 18:06:29 +01:00
Paul "LeoNerd" Evans
142983b4ea APP_SERVICE_PREFIX is never used; don't bother 2016-08-25 18:06:05 +01:00
Erik Johnston
721414d98a Add desc 2016-08-25 17:49:05 +01:00
Mark Haines
e993925279 Add store-and-forward direct-to-device messaging 2016-08-25 17:35:37 +01:00
Erik Johnston
a3dc1e9cbe Replace context.current_state with context.current_state_ids 2016-08-25 17:32:22 +01:00
Paul Evans
d9dcb2ba3a Merge pull request #1041 from matrix-org/paul/third-party-lookup
Extend 3PE lookup APIs for metadata query
2016-08-25 17:06:53 +01:00
Paul "LeoNerd" Evans
adf53f04ce appease pep8 2016-08-25 16:00:31 +01:00
Paul "LeoNerd" Evans
c435bfee9c Don't need toplevel cache on 3PE lookup metadata any more 2016-08-25 15:57:07 +01:00
Paul "LeoNerd" Evans
db7283cc6b Implement a ResponseCache around 3PE lookup metadata lookups 2016-08-25 15:56:27 +01:00
Paul "LeoNerd" Evans
d0b8d49f71 Kill PROTOCOL_META since I'm not using it any more 2016-08-25 15:45:28 +01:00
Paul "LeoNerd" Evans
5474824975 Actually query over AS API for 3PE lookup metadata 2016-08-25 15:29:36 +01:00
Erik Johnston
17f4f14df7 Pull out event ids rather than full events for state 2016-08-25 13:42:44 +01:00
Erik Johnston
cd5b264b03 Fix None check in backfill 2016-08-25 10:39:19 +01:00
Erik Johnston
eb6a7cf3f4 Merge branch 'release-v0.17.1' of github.com:matrix-org/synapse into develop 2016-08-24 14:50:42 +01:00
Erik Johnston
37638c06c5 Merge branch 'release-v0.17.1' of github.com:matrix-org/synapse 2016-08-24 14:39:35 +01:00
Erik Johnston
60a015550a Bump changelog and version 2016-08-24 14:31:13 +01:00
Erik Johnston
90d5983d7a Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.17.1 2016-08-24 14:28:57 +01:00
Erik Johnston
d89f8683dc Merge pull request #1042 from matrix-org/erikj/preserve_log_contexts
Preserve some logcontexts
2016-08-24 13:52:41 +01:00
Erik Johnston
c20cb5160d Remove tracer 2016-08-24 13:22:47 +01:00
Erik Johnston
fda97dd58a Merge branch 'develop' of github.com:matrix-org/synapse into erikj/preserve_log_contexts 2016-08-24 13:22:02 +01:00
Paul "LeoNerd" Evans
8e1ed09dff Move static knowledge of protocol metadata into AS handler; cache the result 2016-08-24 13:01:53 +01:00
Paul "LeoNerd" Evans
965f33c901 Declare 'gitter' known protocol, with user lookup 2016-08-24 12:34:03 +01:00
Paul "LeoNerd" Evans
9899824b85 Initial hack at the 3PN protocols metadata lookup API 2016-08-24 12:33:01 +01:00
Erik Johnston
9219139351 Preserve some logcontexts 2016-08-24 11:58:40 +01:00
Paul "LeoNerd" Evans
63c19e1df9 Move 3PU/3PL lookup APIs into /thirdparty containing entity 2016-08-24 11:55:57 +01:00
Erik Johnston
3e86dcf1c0 Merge pull request #1040 from matrix-org/erikj/pagination
Add None checks to backfill
2016-08-24 10:59:55 +01:00
Erik Johnston
86bcf4d6a7 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/pagination 2016-08-24 10:38:21 +01:00
Erik Johnston
ba07d4a70e Add None checks to backfill 2016-08-24 10:31:05 +01:00
Kegsay
928b2187ea Merge pull request #1039 from matrix-org/kegan/join-with-custom-content
Pass through user-supplied content in /join/$room_id
2016-08-23 17:22:38 +01:00
Kegan Dougal
4b31426a02 Pass through user-supplied content in /join/$room_id
It was always intended to allow custom keys on the join event, but this has
at some point been lost. Restore it.

If the user specifies keys like "avatar_url" then they will be clobbered.
2016-08-23 16:32:04 +01:00
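A sketch of the restored behaviour (hypothetical helper, not the actual handler code): user-supplied content forms the base of the join event, and server-controlled keys are then clobbered over it:

```python
def build_join_content(user_content, profile):
    content = dict(user_content or {})    # custom keys survive...
    content["membership"] = "join"        # ...but these are always overwritten
    content["displayname"] = profile.get("displayname")
    content["avatar_url"] = profile.get("avatar_url")
    return content

# The custom "reason" key is kept; the user's "avatar_url" is clobbered.
print(build_join_content(
    {"reason": "invited by a friend", "avatar_url": "x"},
    {"displayname": "Alice", "avatar_url": "mxc://example.org/abc"},
))
```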
Erik Johnston
122c7a43c9 Merge pull request #1038 from matrix-org/erikj/receved_txn_purge
Delete old received_transactions rows
2016-08-23 11:02:40 +01:00
Erik Johnston
14047126d8 Bump changelog and version 2016-08-22 18:26:07 +01:00
Erik Johnston
d143f211c8 Merge pull request #1028 from matrix-org/dbkr/notifications_api
Add the Notifications API
2016-08-22 18:23:24 +01:00
Mark Haines
e9fe9af068 Merge pull request #1037 from matrix-org/markjh/add_stats_to_prometheus
Add usage stats to prometheus monitoring
2016-08-22 16:37:32 +01:00
Erik Johnston
aad8a1a825 Delete old received_transactions 2016-08-22 16:29:46 +01:00
Mark Haines
689f4cb914 Update comment 2016-08-22 16:17:31 +01:00
Mark Haines
c8f9b45bc2 Add usage stats to prometheus monitoring 2016-08-22 15:34:38 +01:00
Erik Johnston
e65bc7d315 Merge pull request #1031 from matrix-org/erikj/measure_notifier
Add more Measure blocks
2016-08-22 12:13:07 +01:00
Erik Johnston
33f3624ff7 Add exception logging. Fix typo 2016-08-22 10:49:31 +01:00
Erik Johnston
8c52160b07 Allow request handlers to override metric name 2016-08-22 10:44:45 +01:00
Erik Johnston
a093fab253 Use top level measure 2016-08-22 10:18:12 +01:00
Matthew Hodgson
6e80c03d45 Merge branch 'develop' into dbkr/notifications_api 2016-08-20 00:16:18 +01:00
Matthew Hodgson
6372efbdc3 Merge pull request #1032 from matrix-org/matthew/workerdoc
Matthew/workerdoc
2016-08-19 19:17:09 +01:00
Matthew Hodgson
58d6c93103 PR feedback 2016-08-19 19:16:55 +01:00
Matthew Hodgson
b7ffa0e2cd quick guide to synapse scalability via workers 2016-08-19 18:55:57 +01:00
Matthew Hodgson
d77ef276fa increase RAM reqs 2016-08-19 18:55:25 +01:00
Erik Johnston
27e0178da9 Add a top level measure 2016-08-19 18:49:37 +01:00
Erik Johnston
6d1a94d218 Remove redundant measure 2016-08-19 18:40:31 +01:00
Erik Johnston
8731197e54 Only abort Measure on Exceptions 2016-08-19 18:23:45 +01:00
Erik Johnston
afbf6b33fc defer.returnValue must not be called within Measure 2016-08-19 18:23:44 +01:00
Erik Johnston
37adde32dc Move defer.returnValue out of Measure 2016-08-19 18:23:44 +01:00
Erik Johnston
04fc8bbcb0 Update keyring Measure 2016-08-19 18:23:44 +01:00
Erik Johnston
39b900b316 Measure http.server render 2016-08-19 18:23:44 +01:00
Erik Johnston
47dd8f02a1 Measure _get_event_from_row 2016-08-19 18:23:44 +01:00
Erik Johnston
2426c2f21a Measure keyrings 2016-08-19 18:23:44 +01:00
Erik Johnston
39242090e3 Add measure blocks to notifier 2016-08-19 18:23:44 +01:00
Erik Johnston
e6784daf07 Merge pull request #1030 from matrix-org/erikj/cache_contexts
Add concept of cache contexts
2016-08-19 16:29:58 +01:00
Erik Johnston
45fd2c8942 Ensure invalidation list does not grow unboundedly 2016-08-19 16:09:16 +01:00
Erik Johnston
c0d7d9d642 Rename to on_invalidate 2016-08-19 15:13:58 +01:00
Erik Johnston
dc76a3e909 Make cache_context an explicit option 2016-08-19 15:02:38 +01:00
Erik Johnston
f164fd9220 Move _bulk_get_push_rules_for_room to storage layer 2016-08-19 14:29:20 +01:00
Erik Johnston
ba214a5e32 Remove lru option 2016-08-19 14:17:11 +01:00
Erik Johnston
4161ff2fc4 Add concept of cache contexts 2016-08-19 14:17:07 +01:00
Erik Johnston
290763f559 Merge pull request #1029 from matrix-org/erikj/appservice_stream
Make get_new_events_for_appservice use indices
2016-08-19 10:54:32 +01:00
Erik Johnston
b770435389 Make get_new_events_for_appservice use indices 2016-08-19 10:28:42 +01:00
Paul Evans
5674ea3e6c Merge pull request #1026 from matrix-org/paul/thirdpartylookup
3rd party entity lookup
2016-08-18 20:52:50 +01:00
David Baker
1e4217c90c Explicit join 2016-08-18 17:53:44 +01:00
David Baker
0acdd0f1ea Use tuple comparison
Hopefully easier to read
2016-08-18 17:51:08 +01:00
Paul "LeoNerd" Evans
65201631a4 Move validation logic for AS 3PE query response into ApplicationServiceApi class, to keep the handler logic neater 2016-08-18 17:33:56 +01:00
Paul "LeoNerd" Evans
697872cf08 More warnings about invalid results from AS 3PE query 2016-08-18 17:24:39 +01:00
Paul "LeoNerd" Evans
b515f844ee Avoid so much copypasta between 3PU and 3PL query by unifying around a ThirdPartyEntityKind enumeration 2016-08-18 17:19:55 +01:00
David Baker
602c84cd9c Merge remote-tracking branch 'origin/develop' into dbkr/notifications_api 2016-08-18 17:15:26 +01:00
Paul "LeoNerd" Evans
2a91799fcc Minor syntax neatenings 2016-08-18 16:58:25 +01:00
Erik Johnston
be088b32d8 Merge pull request #1027 from matrix-org/erikj/appservice_stream
Add appservice worker
2016-08-18 16:49:43 +01:00
Paul "LeoNerd" Evans
fcf1dec809 Appease pep8 2016-08-18 16:26:19 +01:00
Paul "LeoNerd" Evans
105ff162d4 Authenticate 3PE lookup requests 2016-08-18 16:19:23 +01:00
Paul "LeoNerd" Evans
06964c4a0a Copypasta the 3PU support code to also do 3PL 2016-08-18 16:09:50 +01:00
Paul "LeoNerd" Evans
f3afd6ef1a Remove TODO note about request fields being strings - they're always strings 2016-08-18 15:53:01 +01:00
Erik Johnston
bcbd74dc5b Remove log lines 2016-08-18 15:52:10 +01:00
Paul "LeoNerd" Evans
d7b42afc74 Log a warning if an AS yields an invalid 3PU lookup result 2016-08-18 15:49:55 +01:00
Paul "LeoNerd" Evans
80f4740c8f Scattergather the call out to ASes; validate received results 2016-08-18 15:40:41 +01:00
Erik Johnston
522c804f6b Empty commit 2016-08-18 15:31:49 +01:00
Erik Johnston
19a625362b Merge branch 'develop' of github.com:matrix-org/synapse into erikj/appservice_stream 2016-08-18 15:06:31 +01:00
Erik Johnston
90b8b7706f Jenkins: tox install setuptools 2016-08-18 15:06:16 +01:00
Erik Johnston
07229bbdae Add appservice worker 2016-08-18 14:59:55 +01:00
Paul "LeoNerd" Evans
434bbf2cb5 Filter 3PU lookups by only ASes that declare knowledge of that protocol 2016-08-18 14:56:02 +01:00
Paul "LeoNerd" Evans
d5bf7a4a99 Merge remote-tracking branch 'origin/develop' into paul/thirdpartylookup 2016-08-18 14:21:01 +01:00
Paul "LeoNerd" Evans
718ffcf8bb Since empty lookups now return 200/empty list not 404, we can safely log failures as exceptions 2016-08-18 14:18:37 +01:00
Paul "LeoNerd" Evans
3856582741 Ensure that 3PU lookup request fields actually get passed in 2016-08-18 14:06:02 +01:00
Paul "LeoNerd" Evans
f0c73a1e7a Extend individual list results into the main return list, don't append 2016-08-18 13:53:54 +01:00
Paul "LeoNerd" Evans
b3511adb92 Don't catch the return-value-as-exception that @defer.inlineCallbacks will use 2016-08-18 13:45:18 +01:00
Erik Johnston
6762a7268c Merge pull request #1025 from matrix-org/erikj/appservice_stream
Make AppserviceHandler stream events from database
2016-08-18 12:59:56 +01:00
Erik Johnston
9da84a9a1e Make AppserviceHandler stream events from database
This is for two reasons:

1. Suppresses duplicates correctly, as the notifier doesn't do any
   duplicate suppression.
2. Makes it easier to connect the AppserviceHandler to the replication
   stream.
2016-08-18 11:54:41 +01:00
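The shape of the change, as an illustrative sketch (the store and handler methods here are hypothetical, not Synapse's actual API):

```python
# Poll the events table by stream token: each event is seen exactly once,
# so duplicates are naturally suppressed, and the same loop could later be
# driven from the replication stream instead of the notifier.
def appservice_poll_once(store, handler, last_token):
    current_token = store.get_current_events_token()
    if current_token == last_token:
        return last_token                       # nothing new to deliver
    for event in store.get_new_events(last_token, current_token):
        handler.notify_interested_services(event)
    return current_token  # caller persists this so restarts don't re-deliver
```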
Mark Haines
403ecd8a2c Missed a s/federation reader/media repository/ in a log message 2016-08-18 10:26:15 +01:00
Mark Haines
47fbff7a05 Merge pull request #1024 from matrix-org/markjh/media_repository
Add a media repository worker
2016-08-18 10:12:17 +01:00
Mark Haines
396624864a Add a media repository worker 2016-08-18 09:38:42 +01:00
Erik Johnston
e73dcb66da Merge pull request #1022 from matrix-org/erikj/as_notify_perf
Make notify_interested_services faster
2016-08-18 09:35:19 +01:00
Erik Johnston
ea166f2ac9 Merge pull request #1023 from matrix-org/erikj/update_fix
Fix push_display_name_rename schema update
2016-08-18 09:16:53 +01:00
Paul "LeoNerd" Evans
3ec10dffd6 Actually make 3PU lookup calls out to ASes 2016-08-18 00:39:09 +01:00
Erik Johnston
732cf72b86 Fix push_display_name_rename schema update 2016-08-17 18:12:21 +01:00
Erik Johnston
abcb9aee5b Make push Measure finer grained 2016-08-17 18:00:18 +01:00
Erik Johnston
2a0d8f8219 Merge pull request #1021 from matrix-org/erikj/mediasecurity_policy
Set `Content-Security-Policy` on media repo
2016-08-17 17:22:43 +01:00
Erik Johnston
320dfe523c Make notify_interested_services faster 2016-08-17 17:20:50 +01:00
Erik Johnston
0af9e1a637 Set Content-Security-Policy on media repo
This is to inform browsers that they should sandbox the returned
media. This is particularly crucial for JavaScript/HTML files.
2016-08-17 16:27:39 +01:00
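A sketch of the idea using Twisted's request API; the policy string below is a typical sandboxing CSP for served media, not necessarily Synapse's verbatim value:

```python
def set_media_csp(request):
    # Tell the browser not to execute scripts or load active content from
    # anything served by the media repository.
    request.setHeader(
        b"Content-Security-Policy",
        b"default-src 'none'; script-src 'none'; "
        b"plugin-types application/pdf; style-src 'unsafe-inline'; "
        b"object-src 'self';",
    )
```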
Paul "LeoNerd" Evans
fa87c981e1 Thread 3PU lookup through as far as the AS API object, which currently noöps it 2016-08-17 16:17:28 +01:00
David Baker
0d7cef0943 Merge pull request #1011 from matrix-org/dbkr/contains_display_name_override
Move display name rule
2016-08-17 16:15:45 +01:00
Erik Johnston
f90b3d83a3 Add None check to _iterate_over_text 2016-08-17 15:17:17 +01:00
Matrix
f743471380 Change name of metric 2016-08-17 15:05:50 +01:00
Erik Johnston
b9e888858c Move Measure block inside loop 2016-08-17 14:52:26 +01:00
Erik Johnston
973d67a033 Merge pull request #1019 from matrix-org/erikj/appservice_clean
Clean up _ServiceQueuer
2016-08-17 14:37:21 +01:00
Paul "LeoNerd" Evans
e3e3fbc23a Initial empty implementation that just registers an API endpoint handler 2016-08-17 12:46:49 +01:00
Erik Johnston
e885024523 Merge pull request #1018 from matrix-org/erikj/dead_appservice
Remove dead appservice code
2016-08-17 12:04:58 +01:00
Erik Johnston
7321f45457 Clean up _ServiceQueuer 2016-08-17 12:03:04 +01:00
David Baker
d87c9092c9 Empty commit to trigger re-test 2016-08-17 11:51:47 +01:00
Erik Johnston
b9abf3e4e3 Remove dead appservice code 2016-08-17 11:48:23 +01:00
Erik Johnston
92d39126d7 Merge pull request #1017 from matrix-org/erikj/appservice_measure
Measure notify_interested_services
2016-08-17 11:30:16 +01:00
Erik Johnston
b835ebcc79 Update unit tests 2016-08-17 11:22:11 +01:00
Erik Johnston
62c5245c87 Measure notify_interested_services 2016-08-17 11:12:29 +01:00
Erik Johnston
49043f5ff3 Merge pull request #1016 from matrix-org/erikj/short_circuit_cache
Don't update caches replication stream if tokens haven't advanced
2016-08-17 10:30:52 +01:00
Erik Johnston
949629291c Do it in storage function 2016-08-16 17:05:34 +01:00
David Baker
ad42322257 Add migration script
To port existing rule actions & enable entries to the new name
2016-08-16 16:56:30 +01:00
David Baker
0bba2799b6 Merge remote-tracking branch 'origin/develop' into dbkr/contains_display_name_override 2016-08-16 16:46:37 +01:00
Erik Johnston
64a2acb161 Don't update caches replication stream if tokens haven't advanced 2016-08-16 16:44:19 +01:00
David Baker
1594eba29e s/underride/override/ in the rule_id too 2016-08-16 16:44:07 +01:00
Erik Johnston
16284039c6 Merge pull request #1015 from matrix-org/erikj/preview_url_fixes
Fix up preview URL API. Add tests.
2016-08-16 15:22:04 +01:00
Erik Johnston
1119b4cebe Add lxml to jenkins-unittests.sh 2016-08-16 15:06:33 +01:00
Erik Johnston
109a560905 Flake8 2016-08-16 14:57:21 +01:00
Erik Johnston
48b5829aea Fix up preview URL API. Add tests.
This includes:

- Splitting out methods of a class into stand-alone functions, to make
  them easier to test.
- Adding unit tests to the split-out functions, testing HTML -> preview.
- Handle the fact that elements in lxml may have tail text.
2016-08-16 14:53:24 +01:00
Erik Johnston
7c6f4f9427 Merge pull request #1012 from matrix-org/erikj/limit_backfill_uri
Limit number of extremities in backfill request
2016-08-16 12:55:42 +01:00
Erik Johnston
25c2332071 Merge pull request #1010 from matrix-org/erikj/refactor_deletions
Refactor user_delete_access_tokens. Invalidate get_user_by_access_token to slaves.
2016-08-16 11:37:53 +01:00
Erik Johnston
2ee1bd124c Limit number of extremities in backfill request
This works around a bug where a backfill request with too many
extremities causes the request URI to be too long.
2016-08-16 11:34:36 +01:00
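A sketch of why the cap helps: each extremity becomes its own v= query parameter on the /backfill URI, so bounding the count bounds the URI length (the limit of 5 here is illustrative):

```python
from urllib.parse import urlencode

def backfill_uri(room_id, extremities, limit, max_extremities=5):
    # One ("v", event_id) pair per extremity; too many of these is what
    # pushed the request URI past server length limits.
    pairs = [("v", event_id) for event_id in extremities[:max_extremities]]
    pairs.append(("limit", str(limit)))
    return "/_matrix/federation/v1/backfill/%s?%s" % (room_id, urlencode(pairs))
```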
Erik Johnston
a2427981b7 Use cached get_user_by_access_token in slaves 2016-08-16 11:24:32 +01:00
Erik Johnston
6cbd1b495e Merge branch 'fix_integrity_retry' of https://github.com/Ralith/synapse into Ralith-fix_integrity_retry 2016-08-16 10:50:24 +01:00
David Baker
1c7c317df1 Move display name rule
As per https://github.com/matrix-org/matrix-doc/pull/373 and comment
2016-08-15 18:34:53 +01:00
Erik Johnston
dc3a00f24f Refactor user_delete_access_tokens. Invalidate get_user_by_access_token to slaves. 2016-08-15 17:04:39 +01:00
Erik Johnston
75299af4fc Merge pull request #1009 from matrix-org/erikj/event_split
Split out /events to synchrotron
2016-08-15 15:39:05 +01:00
Erik Johnston
89e786bd85 Doc get_next() context manager usage 2016-08-15 13:45:26 +01:00
Erik Johnston
d9664344ec Rename table. Add docs. 2016-08-15 11:45:57 +01:00
Erik Johnston
784a2d4f2c Remove broken cache stuff 2016-08-15 11:25:48 +01:00
Erik Johnston
0be963472b Use cached version of get_aliases_for_room 2016-08-15 11:24:12 +01:00
Erik Johnston
64e7e11853 Implement cache replication stream 2016-08-15 11:16:45 +01:00
Erik Johnston
4d70d1f80e Add some invalidations to a cache_stream 2016-08-15 11:15:17 +01:00
Erik Johnston
99bbd90b0d Always run txn.after_callbacks 2016-08-15 09:45:44 +01:00
Benjamin Saunders
8a57cc3123 Add missing database corruption recovery case
Signed-off-by: Benjamin Saunders <ben.e.saunders@gmail.com>
2016-08-14 11:50:22 -07:00
Erik Johnston
2edb7c7676 Merge pull request #1007 from sargon/develop
Fix some bugs in the auth/ldap handler
2016-08-14 16:07:31 +01:00
Daniel Ehlers
dfaf0fee31 Log the value which is observed in the first place.
The name 'result' is of bool type and has no len property,
resulting in a TypeError. Furthermore, in the flow control
conn.response is observed and hence should be reported.

Signed-off-by: Daniel Ehlers <sargon@toppoint.de>
2016-08-14 16:49:05 +02:00
Daniel Ehlers
e380538b59 Fix AttributeError when bind_dn is not defined.
If bind_dn is not defined in the LDAP configuration, the filter
attribute is not declared. Since the auth code only uses the ldap_filter
attribute when the corresponding LDAP mode is selected, it is safe to only
declare the attribute in that case.

Signed-off-by: Daniel Ehlers <sargon@toppoint.de>
2016-08-14 16:48:33 +02:00
Erik Johnston
4e1cebd56f Make synchrotron accept /events 2016-08-12 15:31:44 +01:00
Erik Johnston
5006c4b30d Merge pull request #1005 from matrix-org/erikj/linearize_joins
Only process one local membership event per room at a time
2016-08-12 10:14:12 +01:00
Erik Johnston
866a5320de Don't invoke get_handlers from ClientV1RestServlet
hs.get_handlers() cannot be invoked from split-out processes. Moving
the invocations down a level means that we can slowly split out
individual servlets.
2016-08-12 10:03:19 +01:00
Erik Johnston
448ac6cf0d Only process one local membership event per room at a time 2016-08-12 09:32:19 +01:00
Erik Johnston
832799dbff Merge pull request #997 from Half-Shot/develop
Don't change status_msg on /sync
2016-08-11 14:10:55 +01:00
David Baker
b4ecf0b886 Merge remote-tracking branch 'origin/develop' into dbkr/notifications_api 2016-08-11 14:09:13 +01:00
Will Hunt
5b5148b7ec Synced up synchrotron set_state with PresenceHandler set_state 2016-08-11 11:48:30 +01:00
Erik Johnston
c9c1541fd0 Merge pull request #1003 from matrix-org/erikj/redaction_prev_content
Include prev_content in redacted state events
2016-08-11 10:41:18 +01:00
Erik Johnston
1d40373c9d Include prev_content in redacted state events 2016-08-11 10:24:41 +01:00
Erik Johnston
5202679edc Merge pull request #1000 from matrix-org/erikj/contexts
Clean up TransactionQueue
2016-08-11 09:34:12 +01:00
Erik Johnston
c315922b5f PEP8 2016-08-10 16:34:10 +01:00
Erik Johnston
ca8abfbf30 Clean up TransactionQueue 2016-08-10 16:24:16 +01:00
Erik Johnston
5aeadb7414 Merge pull request #999 from matrix-org/erikj/measure_more
Measure federation send transaction resources
2016-08-10 14:16:14 +01:00
Erik Johnston
c9f724caa4 Merge pull request #998 from matrix-org/erikj/pdu_fail_cache
Various federation /event/ improvements
2016-08-10 14:09:14 +01:00
Erik Johnston
487bc49bf8 Don't stop on 4xx series errors 2016-08-10 13:39:12 +01:00
Erik Johnston
739ea29d1e Also check if server is in the room 2016-08-10 13:32:23 +01:00
Erik Johnston
ea8c4094db Also pull out rejected events 2016-08-10 13:26:13 +01:00
Erik Johnston
7f41bcbeec Correctly auth /event/ requests 2016-08-10 13:22:20 +01:00
Erik Johnston
11fdfaf03b Only resign our own events 2016-08-10 13:16:58 +01:00
Will Hunt
2510db3e76 Don't change status_msg on /sync 2016-08-10 12:59:59 +01:00
Erik Johnston
f91df1f761 Store if we fail to fetch an event from a destination 2016-08-10 11:31:46 +01:00
Erik Johnston
3bc9629be5 Measure federation send transaction resources 2016-08-10 10:56:38 +01:00
Erik Johnston
d45489474d Merge pull request #996 from matrix-org/erikj/tls_error
Don't print stack traces when failing to get remote keys
2016-08-10 10:56:08 +01:00
Erik Johnston
fa1ce4d8ad Don't print stack traces when failing to get remote keys 2016-08-10 10:44:37 +01:00
Richard van der Hoff
79ebfbe7c6 /login: Respond with a 403 when we get an invalid m.login.token 2016-08-09 16:29:28 +01:00
David Baker
cd41c6ece2 Merge pull request #995 from matrix-org/rav/clean_up_cas_login
Clean up CAS login code
2016-08-09 10:21:56 +01:00
David Baker
27771b2495 Merge pull request #994 from matrix-org/rav/fix_cas_login
Fix CAS login
2016-08-08 17:47:45 +01:00
Richard van der Hoff
d3250499c1 Merge pull request #993 from matrix-org/rav/fix_token_login
Fix token login
2016-08-08 17:43:02 +01:00
Richard van der Hoff
a8bcc7274d PEP8 2016-08-08 17:20:38 +01:00
Richard van der Hoff
65666fedd5 Clean up CAS login code
Remove some apparently unused code.

Clean up parse_cas_response, mostly to catch the exception if the CAS response
isn't valid XML.
2016-08-08 17:17:25 +01:00
Richard van der Hoff
0682ca04b3 Fix CAS login
Attempting to log in with CAS was giving a 500 error.
2016-08-08 17:01:30 +01:00
Richard van der Hoff
6fe6a6f029 Fix login with m.login.token
login with token (as used by CAS auth) was broken by 067596d, such that it
always returned a 401.
2016-08-08 16:40:39 +01:00
Erik Johnston
d330d45e2d Merge branch 'release-v0.17.0' of github.com:matrix-org/synapse 2016-08-08 13:38:13 +01:00
Erik Johnston
ccd4290777 Capitalize HTML 2016-08-08 13:18:58 +01:00
Erik Johnston
9cb42507f8 Be bolder 2016-08-08 13:17:28 +01:00
Erik Johnston
d960910c72 Update changelog 2016-08-08 13:06:22 +01:00
Erik Johnston
b46b8a5efb Update changelog 2016-08-08 11:34:33 +01:00
Erik Johnston
27fe3e2d4f Bump changelog and version 2016-08-08 11:31:12 +01:00
Erik Johnston
3410142741 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.17.0 2016-08-08 11:24:53 +01:00
Erik Johnston
e7674eb759 Merge pull request #992 from matrix-org/erikj/psutil_conditional
Make psutil optional
2016-08-08 11:24:11 +01:00
Erik Johnston
7c1a92274c Make psutil optional 2016-08-08 11:12:21 +01:00
Erik Johnston
f5deaff424 Merge pull request #991 from matrix-org/erikj/retry_make
Retry joining via other servers if first one failed. Fix some other bugs.
2016-08-05 18:21:27 +01:00
Erik Johnston
5f360182c6 Fix a couple of python bugs 2016-08-05 18:08:32 +01:00
Erik Johnston
46453bfc2f Retry joining via other servers if first one failed 2016-08-05 18:02:03 +01:00
Erik Johnston
6bf6bc1d1d Merge pull request #990 from matrix-org/erikj/fed_vers
Add federation /version API
2016-08-05 16:59:16 +01:00
Erik Johnston
f45be05305 Print newline after result in federation_client script 2016-08-05 16:46:14 +01:00
Erik Johnston
24f36469bc Add federation /version API 2016-08-05 16:36:07 +01:00
Erik Johnston
597c79be10 Change the way we specify if we require auth or not 2016-08-05 16:17:04 +01:00
Erik Johnston
e6021c370e Merge pull request #989 from matrix-org/erikj/raise_404
Raise 404 when couldn't find event
2016-08-05 15:54:16 +01:00
Erik Johnston
a8b946decb Raise 404 when couldn't find event 2016-08-05 15:31:02 +01:00
Erik Johnston
3f5ac150b2 Merge pull request #988 from matrix-org/erikj/ignore_comments_preview
Don't include html comments in description
2016-08-05 15:17:14 +01:00
Erik Johnston
5bcccfde6c Don't include html comments in description 2016-08-05 14:45:11 +01:00
Erik Johnston
c95dd7a426 Update changelog 2016-08-05 13:20:25 +01:00
Erik Johnston
4d87d3659a Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.17.0 2016-08-05 13:19:22 +01:00
Erik Johnston
32fc39fd4c Merge pull request #987 from matrix-org/erikj/fix_backfill_auth
Fix backfill auth events
2016-08-05 13:13:55 +01:00
Erik Johnston
93acf49e9b Fix backfill auth events 2016-08-05 12:59:04 +01:00
Erik Johnston
b2c290a6e5 Bump version and changelog 2016-08-05 11:16:54 +01:00
Erik Johnston
499dc1b349 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.17.0 2016-08-05 11:15:48 +01:00
Erik Johnston
9377509dcd Merge pull request #986 from matrix-org/erikj/state
Check if we already have the events returned by /state/
2016-08-05 11:12:32 +01:00
Erik Johnston
2d4de61fb7 Fix typo 2016-08-05 10:48:56 +01:00
Erik Johnston
87ef315ad5 Merge pull request #985 from matrix-org/erikj/fix_integrity_retry
Tweak integrity error recovery to work as intended
2016-08-05 10:44:01 +01:00
Erik Johnston
fccadb7719 Check if we already have the events returned by /state/ 2016-08-05 10:43:47 +01:00
Erik Johnston
f0fa66f495 Delete more tables 2016-08-05 10:40:08 +01:00
Erik Johnston
1515d1b581 Fallback to /state/ on both 400 and 404 2016-08-05 10:24:23 +01:00
Benjamin Saunders
a2b7102eea Tweak integrity error recovery to work as intended 2016-08-04 20:38:08 -07:00
Erik Johnston
a5d7968b3e Merge pull request #973 from matrix-org/erikj/xpath_fix
Change the way we summarize URLs
2016-08-04 16:37:25 +01:00
Erik Johnston
b5525c76d1 Typo 2016-08-04 16:10:08 +01:00
Erik Johnston
e97648c4e2 Test summarization 2016-08-04 16:09:09 +01:00
Erik Johnston
835ceeee76 Merge pull request #983 from matrix-org/erikj/retry_on_integrity_error
Retry event persistence on IntegrityError
2016-08-04 15:40:03 +01:00
Erik Johnston
b3682df2ca Merge branch 'develop' of github.com:matrix-org/synapse into erikj/xpath_fix 2016-08-04 15:29:45 +01:00
Erik Johnston
8ad8490cff Fix typo 2016-08-04 15:21:29 +01:00
Erik Johnston
59fa91fe88 Retry event persistence on IntegrityError
Due to a bug in the porting script some backfilled events were not
correctly persisted, causing irrecoverable IntegrityErrors on future
attempts to persist those events.

This commit adds a retry mechanism invoked upon IntegrityError,
where when retried the tables are purged for all references to the
events being persisted.
2016-08-04 15:02:15 +01:00
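The retry described above, in outline (the store methods are hypothetical stand-ins; IntegrityError is borrowed from sqlite3 as a representative driver error):

```python
from sqlite3 import IntegrityError

def persist_with_retry(store, events):
    try:
        store.persist_events(events)
    except IntegrityError:
        # Purge any half-written rows referencing these events, then retry.
        store.delete_existing_rows_for_events([e.event_id for e in events])
        store.persist_events(events)   # a second failure propagates normally
```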
Erik Johnston
1b5436ad78 Merge pull request #979 from matrix-org/erikj/state_ids_api
Add /state_ids federation API
2016-08-04 14:38:59 +01:00
Erik Johnston
257c41cc2e Fix typos. 2016-08-04 14:05:45 +01:00
Erik Johnston
b4e2290d89 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/state_ids_api 2016-08-04 14:04:35 +01:00
Erik Johnston
e3ee63578f Tidy up get_events 2016-08-04 14:01:18 +01:00
Erik Johnston
ea0b767114 Merge pull request #982 from matrix-org/erikj/fix_port_script
Port script: Handle the fact that some tables have negative rowid rows
2016-08-04 13:09:43 +01:00
Richard van der Hoff
ab03912e94 Merge pull request #981 from matrix-org/rav/omit_device_displayname_if_null
keys/query: Omit device displayname if null
2016-08-04 12:51:43 +01:00
Mark Haines
1fc50712d6 Factor out more common code from the jenkins scripts (#980)
* Factor out more common code from the jenkins scripts

* Fix install_and_run path

* Poke jenkins

* Poke jenkins
2016-08-04 12:31:55 +01:00
Erik Johnston
7c7786d4e1 Allow upgrading from old port_from_sqlite3 format 2016-08-04 11:38:08 +01:00
Erik Johnston
b0a14bf53e Handle the fact that some tables have negative rowid rows 2016-08-04 11:28:02 +01:00
Richard van der Hoff
f131cd9e53 keys/query: Omit device displayname if null
... which makes it more consistent with user displaynames.
2016-08-04 10:59:51 +01:00
Erik Johnston
edb33eb163 Rename fields to _ids 2016-08-03 17:19:15 +01:00
Erik Johnston
bcc9cda8ca Fix copy + paste fails 2016-08-03 17:17:26 +01:00
Richard van der Hoff
05e3354047 Merge pull request #978 from matrix-org/rav/device_name_in_e2e_devices
Include device name in /keys/query response
2016-08-03 17:11:27 +01:00
Richard van der Hoff
3364d96c6b Merge pull request #977 from matrix-org/rav/return_all_devices
keys/query: return all users which were asked for
2016-08-03 17:10:41 +01:00
Richard van der Hoff
98385888b8 PEP8 2016-08-03 15:42:08 +01:00
Richard van der Hoff
68264d7404 Include device name in /keys/query response
Add an 'unsigned' section which includes the device display name.
2016-08-03 15:42:08 +01:00
Richard van der Hoff
91fa69e029 keys/query: return all users which were asked for
In the situation where all of a user's devices get deleted, we want to
indicate this to a client, so we want to return an empty dictionary, rather
than nothing at all.
2016-08-03 15:41:44 +01:00
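A sketch of the fix (`get_e2e_device_keys` here is a hypothetical helper): pre-seed the response with every queried user, so a user whose devices were all deleted appears as an explicit empty dict rather than being silently absent:

```python
def query_local_device_keys(store, user_ids):
    result = {user_id: {} for user_id in user_ids}   # everyone asked for
    for user_id, device_id, key_json in store.get_e2e_device_keys(user_ids):
        result[user_id][device_id] = key_json
    return result
```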
Erik Johnston
4c56bedee3 Actually call get_room_state 2016-08-03 15:04:29 +01:00
Erik Johnston
520ee9bd2c Fix syntax error 2016-08-03 15:03:15 +01:00
Erik Johnston
a60a2eaa02 Comment 2016-08-03 14:52:43 +01:00
Erik Johnston
e3a720217a Add /state_ids federation API
The new API only returns the event_ids for the state, as most
requesters will already have the vast majority of the events already.
2016-08-03 14:47:37 +01:00
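A sketch of the bandwidth saving on the requesting side (field names follow the /state_ids response shape; `have_events` is a hypothetical set of locally known event IDs):

```python
def events_to_fetch(state_ids_response, have_events):
    wanted = state_ids_response["pdu_ids"] + state_ids_response["auth_chain_ids"]
    # Only fetch the events we don't already have, which for a long-joined
    # room is usually a small fraction of the full state.
    return [event_id for event_id in wanted if event_id not in have_events]
```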
Richard van der Hoff
530bc862dc Merge branch 'rav/null_default_device_displayname' into develop 2016-08-03 14:30:32 +01:00
Richard van der Hoff
a6f5cc65d9 PEP8 2016-08-03 14:30:06 +01:00
Richard van der Hoff
e555bc6551 Merge branch 'rav/refactor_device_query' into develop 2016-08-03 14:26:01 +01:00
Richard van der Hoff
a843868fe9 E2eKeysHandler: minor tweaks
PR feedback
2016-08-03 14:24:33 +01:00
Erik Johnston
f5da3bacb2 Merge pull request #975 from matrix-org/erikj/multi_event_persist
Ensure we only persist an event once at a time
2016-08-03 14:10:36 +01:00
Erik Johnston
97f072db74 Print status code in federation_client.py 2016-08-03 13:46:56 +01:00
Erik Johnston
80ad710217 Remove other bit of deduplication 2016-08-03 13:25:59 +01:00
Richard van der Hoff
4fec5e57be Default device_display_name to null
It turns out that it's more useful to return a null device display name (and
let clients decide how to handle it: e.g., falling back to device_id) than using
a constant string like "unknown device".
2016-08-03 11:53:00 +01:00
Erik Johnston
a8a32d2714 Ensure we only persist an event once at a time 2016-08-03 11:23:39 +01:00
Mark Haines
921f17f938 Merge branch 'develop' into rav/refactor_device_query 2016-08-03 11:12:47 +01:00
Mark Haines
9a2f296fa2 Factor out some of the code shared between the sytest scripts (#974)
* Factor out some of the code shared between the different sytest jenkins scripts

* Exclude jenkins from the MANIFEST

* Fix dendron build

* Missing new line

* Poke jenkins

* Export the PORT_BASE and PORT_COUNT

* Poke jenkins
2016-08-02 20:42:30 +01:00
Erik Johnston
58c9653c6b Don't infer paragraphs from newlines 2016-08-02 18:50:24 +01:00
Erik Johnston
6b58ade2f0 Comment on why we clone 2016-08-02 18:41:22 +01:00
Erik Johnston
9e66c58ceb Spelling. 2016-08-02 18:37:31 +01:00
Erik Johnston
f83f5fbce8 Make it actually compile 2016-08-02 18:32:42 +01:00
Erik Johnston
aecaec3e10 Change the way we summarize URLs
Using XPath is slow on some machines (for unknown reasons), so use a
different approach to get a list of text nodes.

Try to generate a summary that respects paragraph and then word
boundaries, adding ellipses when appropriate.
2016-08-02 18:25:53 +01:00
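A rough sketch of the summarisation strategy (simplified; the real code also handles extracting the text nodes from lxml):

```python
def summarize(paragraphs, max_len=500):
    """Prefer whole paragraphs; fall back to a word boundary plus ellipsis."""
    summary = ""
    for para in paragraphs:
        if not summary:
            summary = para
        elif len(summary) + len(para) + 1 <= max_len:
            summary += "\n" + para
        else:
            break   # next paragraph won't fit; stop at the boundary
    if len(summary) > max_len:
        summary = summary[:max_len].rsplit(" ", 1)[0] + "…"
    return summary
```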
Richard van der Hoff
1efee2f52b E2E keys: Make federation query share code with client query
Refactor the e2e query handler to separate out the local query, and then make
the federation handler use it.
2016-08-02 18:12:00 +01:00
Erik Johnston
49e047c55e Bump version and changelog 2016-08-02 17:10:26 +01:00
Erik Johnston
59a2c6d60e Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.17.0 2016-08-02 17:07:54 +01:00
Erik Johnston
06f812b95c Merge pull request #971 from matrix-org/erikj/fed_state
Fix response cache
2016-08-02 17:07:32 +01:00
Erik Johnston
c9154b970c Don't double wrap 200 2016-08-02 16:45:53 +01:00
Erik Johnston
b3d5c4ad9d Fix response cache 2016-08-02 16:42:21 +01:00
Erik Johnston
456544b621 Typo 2016-08-02 16:12:53 +01:00
Erik Johnston
d199f2248f Change wording 2016-08-02 15:48:43 +01:00
Erik Johnston
8f650bd338 Bump changelog and version 2016-08-02 15:43:52 +01:00
Erik Johnston
7b0f6293f2 Merge pull request #940 from matrix-org/erikj/fed_state_cache
Cache federation state responses
2016-08-02 15:21:37 +01:00
Erik Johnston
fcde5b2a97 Print authorization header for federation_client.py 2016-08-02 15:06:17 +01:00
Erik Johnston
342e072024 Merge pull request #967 from matrix-org/erikj/fed_reader
Split out the federation reading portions into a separate.
2016-08-02 13:59:47 +01:00
Erik Johnston
55e8a87888 Change default jenkins port base and count 2016-08-02 13:41:17 +01:00
David Baker
26a8c1d7ab Merge pull request #968 from matrix-org/dbkr/fix_add_email_on_register
Fix adding emails on registration
2016-08-02 10:32:17 +01:00
Mark Haines
54de6a812a Merge branch 'develop' into dbkr/fix_add_email_on_register 2016-08-02 10:18:15 +01:00
Erik Johnston
733bf44290 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fed_reader 2016-08-02 09:56:31 +01:00
Erik Johnston
f14cd95342 Merge pull request #970 from matrix-org/erikj/clock
Ignore AlreadyCalled errors on timer cancel
2016-08-02 09:50:37 +01:00
Richard van der Hoff
986615b0b2 Move e2e query logic into a handler 2016-08-01 18:02:07 +01:00
Matthew Hodgson
bfeaab6dfc missing --upgrade 2016-08-01 17:12:02 +01:00
Erik Johnston
b260f92936 Ignore AlreadyCalled errors on timer cancel 2016-07-31 16:00:12 +01:00
David Baker
271d3e7865 Fix adding emails on registration
Synapse was not adding email addresses to accounts registered with an email address, due to too many different variables called 'result'. Rename both of them. Also remove the defer.returnValue() with no params because that's not a thing.
2016-07-29 15:25:24 +01:00
Paul Evans
18b7eb830b Merge pull request #958 from matrix-org/paul/SYN-738
Forbid non-ASes from registering users whose names begin with '_'
2016-07-29 14:10:45 +01:00
Erik Johnston
74106ba171 Make jenkins dendron test federation read apis 2016-07-29 11:45:03 +01:00
Erik Johnston
2a9ce8c422 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fed_reader 2016-07-29 11:33:46 +01:00
Erik Johnston
cbea0c7044 Merge pull request #964 from matrix-org/erikj/fed_join_fix
Handle the case of missing auth events when joining a room
2016-07-29 11:33:16 +01:00
Erik Johnston
5aa024e501 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fed_reader 2016-07-29 11:24:56 +01:00
Erik Johnston
c51a52f300 Mention that func will fetch auth events 2016-07-29 11:17:04 +01:00
Erik Johnston
3d13c3a295 Update docstring 2016-07-29 10:45:05 +01:00
Mark Haines
a679a01dbe Merge pull request #966 from matrix-org/markjh/fix_push
Create separate methods for getting messages to push
2016-07-29 10:13:01 +01:00
Mark Haines
8dad08a950 Fix SQL to supply arguments in the same order 2016-07-29 09:57:13 +01:00
Mark Haines
0a7d3cd00f Create separate methods for getting messages to push
for the email and http pushers rather than trying to make a single
method that will work with their conflicting requirements.

The http pusher needs to get the messages in ascending stream order, and
doesn't want to miss a message.

The email pusher needs to get the messages in descending timestamp order,
and doesn't mind if it misses messages.
2016-07-28 20:24:24 +01:00
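The two access patterns, sketched as simplified SQL (table and column names are illustrative of the push-actions storage, not the exact schema):

```python
# HTTP pusher: forwards by stream ordering, bounded on both sides, no gaps.
HTTP_PUSHER_SQL = """
    SELECT event_id, actions FROM event_push_actions
    WHERE user_id = ? AND stream_ordering > ? AND stream_ordering <= ?
    ORDER BY stream_ordering ASC LIMIT ?
"""

# Email pusher: backwards by received timestamp; missing a row is tolerable.
EMAIL_PUSHER_SQL = """
    SELECT event_id, actions, received_ts FROM event_push_actions
    WHERE user_id = ? AND received_ts <= ?
    ORDER BY received_ts DESC LIMIT ?
"""
```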
Erik Johnston
ec8b217722 Add destination retry to slave store 2016-07-28 17:35:53 +01:00
Kegsay
328ad6901d Merge pull request #965 from matrix-org/kegan/comment-push-actions-fn
Comment get_unread_push_actions_for_user_in_range function
2016-07-28 17:27:28 +01:00
Erik Johnston
76b89d0edb Add slave storage functions for public room list 2016-07-28 17:03:40 +01:00
Kegan Dougal
370135ad0b Comment get_unread_push_actions_for_user_in_range function 2016-07-28 16:47:37 +01:00
Erik Johnston
0fcbca531f Add get_auth_chain to slave store 2016-07-28 16:36:28 +01:00
Erik Johnston
1e2740caab Handle the case of missing auth events when joining a room 2016-07-28 16:08:33 +01:00
Erik Johnston
6ede23ff1b Add more key storage funcs into slave store 2016-07-28 15:41:26 +01:00
Erik Johnston
591ad2268c Merge pull request #963 from matrix-org/erikj/admin_docs
Add some basic admin API docs
2016-07-28 15:19:30 +01:00
Erik Johnston
3c3246c078 Use correct path 2016-07-28 15:08:37 +01:00
Erik Johnston
5fb41a955c Merge branch 'release-v0.17.0' of github.com:matrix-org/synapse into develop 2016-07-28 14:56:32 +01:00
Erik Johnston
367b594183 Add some basic admin API docs 2016-07-28 14:56:09 +01:00
Erik Johnston
7861cfec0a Add authors to changelog 2016-07-28 14:35:05 +01:00
Erik Johnston
019cf013d6 Update changelog 2016-07-28 11:30:08 +01:00
Mark Haines
4329f20e3f Merge pull request #962 from matrix-org/markjh/retry
Fix retry utils to check if the exception is a subclass of CME
2016-07-28 11:12:19 +01:00
Erik Johnston
a285194021 Merge branch 'erikj/key_client_fix' of github.com:matrix-org/synapse into release-v0.17.0 2016-07-28 10:47:06 +01:00
Mark Haines
bf81e38d36 Fix retry utils to check if the exception is a subclass of CME 2016-07-28 10:47:02 +01:00
Erik Johnston
78cac3e594 Merge pull request #941 from matrix-org/erikj/key_client_fix
Send the correct host header when fetching keys
2016-07-28 10:46:36 +01:00
Erik Johnston
7871790db1 Bump version and changelog 2016-07-28 10:38:56 +01:00
Erik Johnston
18e044628e Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.17.0 2016-07-28 10:37:43 +01:00
Erik Johnston
b557b682d9 Merge pull request #961 from matrix-org/dbkr/fix_push_invite_name
Don't include name of room for invites in push
2016-07-28 10:36:41 +01:00
David Baker
389c890f14 Don't include name of room for invites in push
Avoids insane pushes like "Bob invited you to invite from Bob"
2016-07-28 10:20:47 +01:00
Richard van der Hoff
cd8738ab63 Merge pull request #960 from matrix-org/rav/support_r0.2
Add r0.2.0 to the "supported versions" list
2016-07-28 10:14:33 +01:00
Richard van der Hoff
f6f8f81a48 Add r0.1.0 to the "supported versions" list 2016-07-28 10:14:07 +01:00
David Baker
ecd5e6bfa4 Typo 2016-07-28 10:04:52 +01:00
Richard van der Hoff
fda078f995 Add r0.2.0 to the "supported versions" list 2016-07-28 09:14:21 +01:00
Matthew Hodgson
ab5580e152 Merge pull request #959 from evelynmitchell/patch-1
3PID defined on first mention
2016-07-28 01:17:36 +02:00
evelynmitchell
e8d212d92e 3PID defined on first mention 2016-07-27 12:20:46 -06:00
Richard van der Hoff
40e539683c Merge pull request #956 from matrix-org/rav/check_device_id_on_key_upload
Make the device id on e2e key upload optional
2016-07-27 18:16:57 +01:00
Paul "LeoNerd" Evans
05f6447301 Forbid non-ASes from registering users whose names begin with '_' (SYN-738) 2016-07-27 17:54:26 +01:00
Erik Johnston
5238960850 Bump CHANGES and version 2016-07-27 17:33:09 +01:00
Richard van der Hoff
ccec25e2c6 key upload tweaks
1. Add v2_alpha URL back in, since things seem to be using it.

2. Don't reject the request if the device_id in the upload request fails to
   match that in the access_token.
2016-07-27 16:41:06 +01:00
Mark Haines
c38b7c4104 Merge pull request #957 from matrix-org/markjh/verify
Clean up verify_json_objects_for_server
2016-07-27 15:33:32 +01:00
Mark Haines
29b25d59c6 Merge branch 'develop' into markjh/verify
Conflicts:
	synapse/crypto/keyring.py
2016-07-27 15:11:02 +01:00
Mark Haines
884b800899 Merge pull request #955 from matrix-org/markjh/only_from2
Add a couple more checks to the keyring
2016-07-27 15:08:22 +01:00
Mark Haines
09d31815b4 Merge pull request #954 from matrix-org/markjh/even_more_fixes
Fix a couple of bugs in the transaction and keyring code
2016-07-27 15:08:14 +01:00
Mark Haines
fe1b369946 Clean up verify_json_objects_for_server 2016-07-27 14:10:43 +01:00
Richard van der Hoff
26cb0efa88 SQL syntax fix 2016-07-27 12:30:22 +01:00
Richard van der Hoff
d47115ff8b Delete e2e keys on device delete 2016-07-27 12:24:52 +01:00
Richard van der Hoff
2e3d90d67c Make the device id on e2e key upload optional
We should now be able to get our device_id from the access_token, so the
device_id on the upload request is optional. Where it is supplied, we should
check that it matches.

For active access_tokens without an associated device_id, we ought to register
the device in the devices table.

Also update the table on upgrade so that all of the existing e2e keys are
associated with real devices.
2016-07-26 23:38:12 +01:00
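The rule in outline (hypothetical helper; note that the follow-up ccec25e2c6 listed above later relaxed the mismatch case so the request is not rejected):

```python
def resolve_upload_device_id(body_device_id, token_device_id):
    """Prefer the device_id bound to the access token; the body is optional."""
    if body_device_id is None:
        return token_device_id          # may still be None for old tokens
    if token_device_id is not None and body_device_id != token_device_id:
        raise ValueError("device_id in body does not match access token")
    return body_device_id
```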
Mark Haines
a4b06b619c Add a couple more checks to the keyring 2016-07-26 19:50:11 +01:00
Mark Haines
c63b1697f4 Merge pull request #952 from matrix-org/markjh/more_fixes
Check if the user is banned when handling 3pid invites
2016-07-26 19:20:56 +01:00
Mark Haines
87ffd21b29 Fix a couple of bugs in the transaction and keyring code 2016-07-26 19:19:08 +01:00
Richard van der Hoff
2452611d0f Merge pull request #953 from matrix-org/rav/requester
Add `create_requester` function
2016-07-26 16:57:53 +01:00
Richard van der Hoff
eb359eced4 Add create_requester function
Wrap the `Requester` constructor with a function which provides sensible
defaults, and use it throughout
2016-07-26 16:46:53 +01:00
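The idea, sketched with a simplified stand-in type (the real Requester carries more context than shown here):

```python
from collections import namedtuple

Requester = namedtuple(
    "Requester", ["user_id", "access_token_id", "is_guest", "device_id"]
)

def create_requester(user_id, access_token_id=None, is_guest=False,
                     device_id=None):
    """Build a Requester, defaulting the fields most callers don't set."""
    return Requester(user_id, access_token_id, is_guest, device_id)
```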
Mark Haines
c824b29e77 Check if the user is banned when handling 3pid invites 2016-07-26 16:39:14 +01:00
Richard van der Hoff
33d7776473 Fix typo 2016-07-26 13:32:15 +01:00
Richard van der Hoff
9ad8d9b17c Merge branch 'develop' into rav/delete_refreshtoken_on_delete_device 2016-07-26 13:29:46 +01:00
Richard van der Hoff
5b1825ba5b Merge pull request #951 from matrix-org/rav/flake8
Fix flake8 noise
2016-07-26 13:27:54 +01:00
Mark Haines
9c4cf83259 Merge pull request #948 from matrix-org/markjh/auth_fixes
Don't add rejections to the state_group, persist all rejections
2016-07-26 13:22:57 +01:00
Richard van der Hoff
05e7e5e972 Fix flake8 violation
Apparently flake8 v3 puts the error on a different line to v2. Easiest way to
make sure that happens is by putting the whole statement on one line :)
2016-07-26 11:59:08 +01:00
Richard van der Hoff
db4f823d34 Fix flake8 configuration
Apparently flake8 v3 doesn't like trailing comments on config settings.

Also remove the pep8 config, which didn't work (because it was missing W503)
and duplicated the flake8 config. We don't use pep8 on its own, so the config
was duplicative.
2016-07-26 11:49:40 +01:00
Richard van der Hoff
8e02494166 Delete refresh tokens when deleting devices 2016-07-26 11:10:37 +01:00
Mark Haines
a6f06ce3e2 Fix how push_actions are redacted. 2016-07-26 11:05:39 +01:00
David Baker
d34e9f93b7 Merge pull request #949 from matrix-org/rav/update_devices
Implement updates and deletes for devices
2016-07-26 10:49:55 +01:00
Mark Haines
efeb6176c1 Don't add rejected events if we've seen them before. Add some comments to explain what the code is doing mechanically 2016-07-26 10:49:52 +01:00
Matthew Hodgson
1a54513cf1 federation doesn't work over ipv6 yet thanks to twisted 2016-07-26 10:09:37 +02:00
Matthew Hodgson
242c52d607 typo 2016-07-26 10:09:25 +02:00
Richard van der Hoff
012b4c1913 Implement updating devices
You can update the displayname of devices now.
2016-07-26 07:35:48 +01:00
Richard van der Hoff
436bffd15f Implement deleting devices 2016-07-26 07:35:48 +01:00
Mark Haines
1b3c3e6d68 Only update the events and event_json tables for rejected events 2016-07-25 18:44:30 +01:00
Richard van der Hoff
33d08e8433 Log when adding listeners 2016-07-25 17:22:15 +01:00
Mark Haines
8f7f4cb92b Don't add the events to forward extremities if the event is rejected 2016-07-25 17:13:37 +01:00
Mark Haines
2623cec874 Don't add rejections to the state_group, persist all rejections 2016-07-25 16:12:16 +01:00
David Baker
4fcdf7b4b2 Merge pull request #946 from matrix-org/dbkr/log_recaptcha_hostname
Log the hostname the reCAPTCHA was completed on
2016-07-25 16:10:39 +01:00
Mark Haines
955ef1f06c fix: defer.returnValue takes one argument 2016-07-25 16:04:45 +01:00
Richard van der Hoff
2ee4c9ee02 background updates: fix assert again 2016-07-25 16:01:46 +01:00
Richard van der Hoff
9dbd903f41 background updates: Fix assertion to do something 2016-07-25 14:05:23 +01:00
Richard van der Hoff
bf3de7b90b Merge pull request #945 from matrix-org/rav/background_reindex
Create index on user_ips in the background
2016-07-25 14:04:05 +01:00
Richard van der Hoff
e73ad8de3b Merge pull request #947 from matrix-org/rav/unittest_logging
Slightly saner logging for unittests
2016-07-25 13:44:46 +01:00
Richard van der Hoff
42f4feb2b7 PEP8 2016-07-25 12:25:06 +01:00
Richard van der Hoff
f16f0e169d Slightly saner logging for unittests
1. Give the handler used for logging in unit tests a formatter, so that the
output is slightly more meaningful

2. Log some synapse.storage stuff, because it's useful.
2016-07-25 12:12:47 +01:00
Richard van der Hoff
465117d7ca Fix background_update tests
A bit of a cleanup for background_updates, and make sure that the real
background updates have run before we start the unit tests, so that they don't
interfere with the tests.
2016-07-25 12:10:42 +01:00
David Baker
7ed58bb347 Use get to avoid KeyErrors 2016-07-22 17:18:50 +01:00
David Baker
dad2da7e54 Log the hostname the reCAPTCHA was completed on
This could be useful information to have in the logs. Also comment about how & why we don't verify the hostname.
2016-07-22 17:00:56 +01:00
Richard van der Hoff
363786845b PEP8 2016-07-22 13:21:07 +01:00
Richard van der Hoff
ec5717caf5 Create index on user_ips in the background
user_ips is kinda big, so really we want to add the index in the background
once we're running. Replace the schema delta with one which will do that.

I've done this in a way that's reasonably easy to reuse as there are a few other
indexes I need, and I don't suppose they will be the last.
2016-07-22 13:16:39 +01:00
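The pattern in miniature (index name and columns are illustrative; on PostgreSQL one would use CREATE INDEX CONCURRENTLY, run outside a transaction):

```python
import sqlite3

def background_create_user_ips_index(conn: sqlite3.Connection):
    # Built after startup by the background updater, instead of blocking
    # startup with a schema delta on a very large table.
    conn.execute(
        "CREATE INDEX IF NOT EXISTS user_ips_user_ip "
        "ON user_ips (user_id, access_token, ip)"
    )
    conn.commit()
```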
Erik Johnston
d26b660aa6 Cache getPeer 2016-07-21 17:38:51 +01:00
Erik Johnston
aede7248ab Split out a FederationReader process 2016-07-21 17:37:44 +01:00
David Baker
68a92afcff Merge pull request #944 from matrix-org/rav/devices_returns_list
make /devices return a list
2016-07-21 17:00:26 +01:00
Richard van der Hoff
55abbe1850 make /devices return a list
Turns out I specced this to return a list of devices rather than a dict of them
2016-07-21 15:57:28 +01:00
David Baker
2c28e25bda Merge pull request #943 from matrix-org/rav/get_device_api
Implement GET /device/{deviceId}
2016-07-21 13:41:42 +01:00
David Baker
1e6e370b76 Merge pull request #942 from matrix-org/rav/fix_register_deviceid
Preserve device_id from first call to /register
2016-07-21 13:16:31 +01:00
Richard van der Hoff
1c3c202b96 Fix PEP8 errors 2016-07-21 13:15:15 +01:00
Richard van der Hoff
406f7aa0f6 Implement GET /device/{deviceId} 2016-07-21 12:00:29 +01:00
Richard van der Hoff
34f56b40fd Merge branch 'rav/get_devices_api' into develop 2016-07-21 11:59:30 +01:00
Richard van der Hoff
c445f5fec7 storage/client_ips: remove some dead code 2016-07-21 11:58:47 +01:00
David Baker
44adde498e Merge pull request #939 from matrix-org/rav/get_devices_api
GET /devices endpoint
2016-07-21 11:54:47 +01:00
Erik Johnston
cf94a78872 Set host not path 2016-07-21 11:45:53 +01:00
Richard van der Hoff
1a64dffb00 Preserve device_id from first call to /register
device_id may only be passed in the first call to /register, so make sure we
fish it out of the register `params` rather than the body of the final call.
2016-07-21 11:34:16 +01:00
Erik Johnston
081e5d55e6 Send the correct host header when fetching keys 2016-07-21 11:14:54 +01:00
Erik Johnston
248e6770ca Cache federation state responses 2016-07-21 10:30:12 +01:00
Richard van der Hoff
40a1c96617 Fix PEP8 errors 2016-07-20 18:06:28 +01:00
Richard van der Hoff
7314bf4682 Merge branch 'develop' into rav/get_devices_api
(pick up PR #938 in the hope of fixing the UTs)
2016-07-20 17:40:00 +01:00
Richard van der Hoff
e9e3eaa67d Merge pull request #938 from matrix-org/rav/add_device_id_to_client_ips
Record device_id in client_ips
2016-07-20 17:38:45 +01:00
Erik Johnston
d36b1d849d Don't explode if we have no snapshots yet 2016-07-20 16:59:52 +01:00
David Baker
742056be0d Merge pull request #937 from matrix-org/rav/register_device_on_register
Register a device_id in the /v2/register flow.
2016-07-20 16:51:27 +01:00
Richard van der Hoff
bc8f265f0a GET /devices endpoint
implement a GET /devices endpoint which lists all of the user's devices.

It also returns the last IP where we saw that device, so there is some dancing
to fish that out of the user_ips table.
2016-07-20 16:42:32 +01:00
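A sketch of that dancing, as a self-contained sqlite3 example; the schema is trimmed to just the columns the query needs and is illustrative, not Synapse's real one. Note it returns a list of devices, not a dict, matching 55abbe1850 above:

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE devices (user_id TEXT, device_id TEXT, display_name TEXT);
    CREATE TABLE user_ips (user_id TEXT, device_id TEXT, ip TEXT, last_seen BIGINT);
    INSERT INTO devices VALUES ('@alice:hs', 'DEV1', 'laptop');
    INSERT INTO user_ips VALUES ('@alice:hs', 'DEV1', '10.0.0.1', 1000);
    INSERT INTO user_ips VALUES ('@alice:hs', 'DEV1', '10.0.0.2', 2000);
""")

def list_devices(user_id):
    # sqlite quirk used deliberately: with MAX(last_seen) in the select,
    # the bare u.ip column is taken from that same most-recent row.
    rows = conn.execute(
        "SELECT d.device_id, d.display_name, MAX(u.last_seen), u.ip "
        "FROM devices d LEFT JOIN user_ips u "
        "ON u.user_id = d.user_id AND u.device_id = d.device_id "
        "WHERE d.user_id = ? GROUP BY d.device_id",
        (user_id,),
    ).fetchall()
    return [
        {"device_id": r[0], "display_name": r[1],
         "last_seen_ts": r[2], "last_seen_ip": r[3]}
        for r in rows
    ]

print(list_devices("@alice:hs"))  # last_seen_ip is '10.0.0.2'
```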
Richard van der Hoff
ec041b335e Record device_id in client_ips
Record the device_id when we add a client ip; it's somewhat redundant as we
could get it via the access_token, but it will make querying rather easier.
2016-07-20 16:41:03 +01:00
Richard van der Hoff
053e83dafb More doc-comments
Fix some more comments on some things
2016-07-20 16:40:28 +01:00
Richard van der Hoff
b97a1356b1 Register a device_id in the /v2/register flow.
This doesn't cover *all* of the registration flows, but it does cover the most
common ones: in particular: shared_secret registration, appservice
registration, and normal user/pass registration.

Pull device_id from the registration parameters. Register the device in the
devices table. Associate the device with the returned access and refresh
tokens. Profit.
2016-07-20 16:38:27 +01:00
Erik Johnston
b73dc0ef4d Merge pull request #936 from matrix-org/erikj/log_rss
Add metrics for psutil derived memory usage
2016-07-20 16:32:38 +01:00
Erik Johnston
499e3281e6 Make jenkins install deps on unit tests 2016-07-20 16:09:59 +01:00
Erik Johnston
66868119dc Add metrics for psutil derived memory usage 2016-07-20 16:00:21 +01:00
Erik Johnston
aba0b2a39b Merge pull request #935 from matrix-org/erikj/backfill_notifs
Don't notify pusher pool for backfilled events
2016-07-20 13:39:16 +01:00
Erik Johnston
57dca35692 Don't notify pusher pool for backfilled events 2016-07-20 13:25:06 +01:00
Richard van der Hoff
c68518dfbb Merge pull request #933 from matrix-org/rav/type_annotations
Type annotations
2016-07-20 12:26:32 +01:00
David Baker
e967bc86e7 Merge pull request #932 from matrix-org/rav/register_refactor
Further registration refactoring
2016-07-20 11:03:33 +01:00
Erik Johnston
1e2a7f18a1 Merge pull request #922 from matrix-org/erikj/file_api2
Feature: Add filter to /messages. Add 'contains_url' to filter.
2016-07-20 10:40:48 +01:00
Erik Johnston
f91faf09b3 Comment 2016-07-20 10:18:09 +01:00
Richard van der Hoff
4430b1ceb3 MANIFEST.in: Add *.pyi 2016-07-19 19:01:20 +01:00
Richard van der Hoff
3413f1e284 Type annotations
Add some type annotations to help PyCharm (in particular) to figure out the
types of a bunch of things.
2016-07-19 18:56:16 +01:00
Richard van der Hoff
40cbffb2d2 Further registration refactoring
* `RegistrationHandler.appservice_register` no longer issues an access token:
  instead it is left for the caller to do it. (There are two of these, one in
  `synapse/rest/client/v1/register.py`, which now simply calls
  `AuthHandler.issue_access_token`, and the other in
  `synapse/rest/client/v2_alpha/register.py`, which is covered below).

* In `synapse/rest/client/v2_alpha/register.py`, move the generation of
  access_tokens into `_create_registration_details`. This means that the normal
  flow no longer needs to call `AuthHandler.issue_access_token`; the
  shared-secret flow can tell `RegistrationHandler.register` not to generate a
  token; and the appservice flow continues to work despite the above change.
2016-07-19 18:46:19 +01:00
David Baker
b9e997f561 Merge pull request #931 from matrix-org/rav/refactor_register
rest/client/v2_alpha/register.py: Refactor flow somewhat.
2016-07-19 16:13:45 +01:00
Richard van der Hoff
9a7a77a22a Merge pull request #929 from matrix-org/rav/support_deviceid_in_login
Add device_id support to /login
2016-07-19 15:53:04 +01:00
Richard van der Hoff
8f6281ab0c Don't bind email unless threepid contains expected fields 2016-07-19 15:50:01 +01:00
Richard van der Hoff
0da0d0a29d rest/client/v2_alpha/register.py: Refactor flow somewhat.
This is meant to be an *almost* non-functional change, with the exception that
it fixes what looks a lot like a bug in that it only calls
`auth_handler.add_threepid` and `add_pusher` once instead of three times.

The idea is to move the generation of the `access_token` out of
`registration_handler.register`, because `access_token`s now require a
device_id, and we only want to generate a device_id once registration has been
successful.
2016-07-19 13:12:22 +01:00
Richard van der Hoff
022b9176fe schema fix
device_id should be text, not bigint.
2016-07-19 11:44:05 +01:00
Mark Haines
0c62c958fd Merge pull request #930 from matrix-org/markjh/handlers
Update docstring on Handlers.
2016-07-19 10:48:58 +01:00
Mark Haines
c41d52a042 Summary line 2016-07-19 10:28:27 +01:00
Mark Haines
7e554aac86 Update docstring on Handlers.
To indicate it is deprecated.
2016-07-19 10:20:58 +01:00
Richard van der Hoff
f863a52cea Add device_id support to /login
Add a 'devices' table to the storage, as well as a 'device_id' column to
refresh_tokens.

Allow the client to pass a device_id, and initial_device_display_name, to
/login. If login is successful, then register the device in the devices table
if it wasn't known already. If no device_id was supplied, make one up.

Associate the device_id with the access token and refresh token, so that we can
get at it again later. Ensure that the device_id is copied from the refresh
token to the access_token when the token is refreshed.
2016-07-18 16:39:44 +01:00
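A rough sketch of that flow, with an in-memory dict standing in for the devices table; the helper name and the made-up device_id format are illustrative:

```
import secrets

devices = {}  # (user_id, device_id) -> device row; stands in for the table

def register_device(user_id, device_id=None, initial_display_name=None):
    # If the client didn't supply a device_id, make one up.
    if device_id is None:
        device_id = secrets.token_hex(5).upper()
    # Insert only if the device wasn't known already, so a re-login with
    # the same device_id keeps the existing record.
    devices.setdefault((user_id, device_id), {
        "display_name": initial_display_name,
    })
    # The caller would now associate device_id with the access and
    # refresh tokens it hands out.
    return device_id
```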
Richard van der Hoff
93efcb8526 Merge pull request #928 from matrix-org/rav/refactor_login
Refactor login flow
2016-07-18 16:12:35 +01:00
Richard van der Hoff
dcfd71aa4c Refactor login flow
Make sure that we have the canonical user_id *before* calling
get_login_tuple_for_user_id.

Replace login_with_password with a method which just validates the password,
and have the caller call get_login_tuple_for_user_id. This brings the password
flow into line with the other flows, and will give us a place to register the
device_id if necessary.
2016-07-18 15:23:54 +01:00
Erik Johnston
fca90b3445 Merge pull request #924 from matrix-org/erikj/purge_history
Fix /purge_history bug
2016-07-18 15:11:11 +01:00
Mark Haines
a292454aa1 Merge pull request #925 from matrix-org/markjh/auth_fix
Fix 500 ISE when sending alias event without a state_key
2016-07-18 15:04:47 +01:00
Erik Johnston
4f81edbd4f Merge pull request #927 from Half-Shot/develop
Fall back to 'username' if 'user' is not given for appservice registration.
2016-07-18 10:44:56 +01:00
Richard van der Hoff
6344db659f Fix a doc-comment
The `store` in a handler is a generic DataStore, not just an events.StateStore.
2016-07-18 09:48:10 +01:00
Will Hunt
511a52afc8 Use body.get to check for 'user' 2016-07-16 18:44:08 +01:00
Will Hunt
e885e2a623 Fall back to 'username' if 'user' is not given for appservice reg. 2016-07-16 18:33:48 +01:00
Mark Haines
d137e03231 Fix 500 ISE when sending alias event without a state_key 2016-07-15 18:58:25 +01:00
Erik Johnston
f52565de50 Fix /purge_history bug
This was caused by trying to insert duplicate backward extremities
2016-07-15 14:23:15 +01:00
Erik Johnston
a2d288c6a9 Merge pull request #923 from matrix-org/erikj/purge_history
Various purge_history fixes
2016-07-15 13:23:29 +01:00
Erik Johnston
bd7c51921d Merge pull request #919 from matrix-org/erikj/auth_fix
Various auth.py fixes.
2016-07-15 11:38:33 +01:00
Erik Johnston
978fa53cc2 Pull out min stream_ordering from ex_outlier_stream 2016-07-15 10:22:30 +01:00
Erik Johnston
eec9609e96 event_backwards_extremities may not be empty 2016-07-15 10:22:09 +01:00
Erik Johnston
9e1b43bcbf Comment 2016-07-15 09:29:54 +01:00
Erik Johnston
a3036ac37e Merge pull request #921 from matrix-org/erikj/account_deactivate
Feature: Add an /account/deactivate endpoint
2016-07-14 17:25:15 +01:00
Erik Johnston
ebdafd8114 Check sender signed event 2016-07-14 17:03:24 +01:00
Erik Johnston
a98d215204 Add filter param to /messages API 2016-07-14 16:30:56 +01:00
Erik Johnston
d554ca5e1d Add support for filters in paginate_room_events 2016-07-14 15:59:04 +01:00
Erik Johnston
209e04fa11 Merge pull request #918 from negzi/bugfix_for_token_expiry
Bug fix: expire invalid access tokens
2016-07-14 15:51:52 +01:00
Erik Johnston
e5142f65a6 Add 'contains_url' to filter 2016-07-14 15:35:48 +01:00
Erik Johnston
b64aa6d687 Add sender and contains_url field to events table 2016-07-14 15:35:43 +01:00
Erik Johnston
848d3bf2e1 Add hs object 2016-07-14 10:25:52 +01:00
Erik Johnston
b55c770271 Only accept password auth 2016-07-14 10:00:38 +01:00
Erik Johnston
d543b72562 Add an /account/deactivate endpoint 2016-07-14 09:56:53 +01:00
Negar Fazeli
0136a522b1 Bug fix: expire invalid access tokens 2016-07-13 15:00:37 +02:00
Erik Johnston
2cb758ac75 Check if alias event's state_key matches sender's domain 2016-07-13 13:12:25 +01:00
Erik Johnston
560c71c735 Check creation event's room_id domain matches sender's 2016-07-13 13:07:19 +01:00
David Baker
a37ee2293c Merge pull request #915 from matrix-org/dbkr/more_requesttokens
Add requestToken endpoints
2016-07-13 11:51:46 +01:00
David Baker
c55ad2e375 be more pythonic 2016-07-12 14:15:10 +01:00
David Baker
aaa9d9f0e1 on_OPTIONS isn't necessary 2016-07-12 14:13:14 +01:00
David Baker
75fa7f6b3c Remove other debug logging 2016-07-12 14:08:57 +01:00
David Baker
a5db0026ed Separate out requestTokens to separate handlers 2016-07-11 09:57:07 +01:00
David Baker
9c491366c5 Oops, remove debug logging 2016-07-11 09:07:40 +01:00
David Baker
385aec4010 Implement https://github.com/matrix-org/matrix-doc/pull/346/files 2016-07-08 17:42:48 +01:00
Mark Haines
10f4856b0c Merge pull request #914 from matrix-org/markjh/upgrade
Ensure that the guest user is in the database when upgrading accounts
2016-07-08 17:02:06 +01:00
Mark Haines
dfde67a6fe Add a comment explaining allow_none 2016-07-08 15:57:06 +01:00
Mark Haines
10c843fcfb Ensure that the guest user is in the database when upgrading accounts 2016-07-08 15:15:55 +01:00
Erik Johnston
58930da52b Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-07-08 14:11:37 +01:00
Erik Johnston
0870588c20 Merge branch 'hotfixes-v0.16.1' 2016-07-08 13:22:32 +01:00
Erik Johnston
f90cf150e2 Bump version and changelog 2016-07-07 16:33:00 +01:00
Erik Johnston
067596d341 Fix bug where we did not correctly explode when multiple user_ids were set in macaroon 2016-07-07 16:22:24 +01:00
Erik Johnston
70d650be2b Merge pull request #911 from matrix-org/erikj/purge_history
Feature: Purge local room history.
2016-07-07 13:34:28 +01:00
Erik Johnston
b92e7955be Comment 2016-07-07 11:42:15 +01:00
Erik Johnston
c98e1479bd Return 400 rather than 500 2016-07-07 11:41:07 +01:00
Erik Johnston
67f2c901ea Add rest servlet. Fix SQL. 2016-07-06 15:56:59 +01:00
Erik Johnston
eef7778af9 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/test2 2016-07-06 14:50:22 +01:00
Erik Johnston
a17e7caeb7 Merge branch 'erikj/shared_secret' into erikj/test2 2016-07-06 14:46:31 +01:00
Erik Johnston
f0c06ac65c Merge pull request #909 from matrix-org/erikj/shared_secret
Add an admin option to shared secret registration (breaks backwards compat)
2016-07-06 14:08:51 +01:00
Erik Johnston
76b18df3d9 Check that there are no null bytes in user and password 2016-07-06 11:17:53 +01:00
Erik Johnston
0da24cac8b Add null separator to hmac 2016-07-06 11:05:16 +01:00
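Taken together, the two commits above amount to something like this stdlib sketch; the exact field order inside the MAC is an assumption:

```
# Reject NUL bytes in the inputs, then MAC the fields with NUL
# separators so ("ab", "c") can never collide with ("a", "bc").
import hashlib
import hmac

def registration_mac(shared_secret, user, password, admin):
    if b"\x00" in user or b"\x00" in password:
        raise ValueError("null bytes not allowed in user or password")
    mac = hmac.new(shared_secret, digestmod=hashlib.sha1)
    mac.update(user)
    mac.update(b"\x00")
    mac.update(password)
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()

print(registration_mac(b"secret", b"alice", b"wonderland", admin=False))
```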
Erik Johnston
2e3c8acc68 Merge pull request #910 from KentShikama/hash_password_followup
Follow up to adding password pepper
2016-07-06 09:59:59 +01:00
Kent Shikama
8d9a884cee Update password config comment
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-06 12:18:19 +09:00
Kent Shikama
896bc6cd46 Update hash_password script
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-06 12:17:54 +09:00
Erik Johnston
be3548f7e1 Remove spurious txn 2016-07-05 17:46:51 +01:00
Erik Johnston
4adf93e0f7 Fix for postgres 2016-07-05 17:34:25 +01:00
Erik Johnston
651faee698 Add an admin option to shared secret registration 2016-07-05 17:30:22 +01:00
Erik Johnston
caf33b2d9b Protect password when registering using shared secret 2016-07-05 17:18:19 +01:00
Erik Johnston
8f8798bc0d Add ReadWriteLock for pagination and history prune 2016-07-05 15:30:25 +01:00
Erik Johnston
7335f0adda Add ReadWriteLock 2016-07-05 15:23:17 +01:00
David Baker
ef535178ff Merge pull request #904 from matrix-org/dbkr/register_email_no_untrusted_id_server
requestToken update
2016-07-05 15:13:34 +01:00
Mark Haines
04dee11e97 Merge pull request #906 from matrix-org/markjh/faster_events_around
Use a query that postgresql optimises better for get_events_around
2016-07-05 14:48:34 +01:00
Mark Haines
dd2ccee27d Fix typo 2016-07-05 14:06:07 +01:00
Mark Haines
b6b0132ac7 Make get_events_around more efficient on sqlite3 2016-07-05 13:55:18 +01:00
Erik Johnston
e34cb5e7dc Merge pull request #907 from KentShikama/pepper
Add pepper to password hashing
2016-07-05 11:26:22 +01:00
Kent Shikama
252ee2d979 Remove default password pepper string 2016-07-05 19:15:51 +09:00
Kent Shikama
14362bf359 Fix password config 2016-07-05 19:12:53 +09:00
Kent Shikama
1ee2584307 Fix pep8 2016-07-05 19:01:00 +09:00
Kent Shikama
507b8bb091 Add comment to prompt changing of pepper 2016-07-05 18:42:35 +09:00
Mark Haines
d44d11d864 Use true/false for the boolean parameter `inclusive`, to avoid potential for SQL injection and possibly make the code clearer 2016-07-05 10:39:13 +01:00
Erik Johnston
2d21d43c34 Add purge_history API 2016-07-05 10:28:51 +01:00
Mark Haines
0fb76c71ac Use different SQL for postgres and sqlite3 for when using multicolumn indexes 2016-07-04 19:44:55 +01:00
Kent Shikama
8bdaf5f7af Add pepper to password hashing
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-05 02:13:52 +09:00
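A sketch of the scheme, assuming the third-party bcrypt package; the pepper value is a placeholder (and, per 252ee2d979 above, a real deployment must set its own rather than rely on a shipped default):

```
import bcrypt

PEPPER = "change-me-per-deployment"  # placeholder only

def hash_password(password):
    # Append the server-wide pepper before hashing, so a dumped database
    # alone is not enough to attack the hashes. Encoding to UTF-8 also
    # avoids the bcrypt.hashpw TypeError fixed elsewhere in this log.
    return bcrypt.hashpw((password + PEPPER).encode("utf-8"), bcrypt.gensalt())

def check_password(password, stored_hash):
    return bcrypt.checkpw((password + PEPPER).encode("utf-8"), stored_hash)
```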
Erik Johnston
a67bf0b074 Add storage function to purge history for a room 2016-07-04 16:02:50 +01:00
Mark Haines
f18d7546c6 Use a query that postgresql optimises better for get_events_around 2016-07-04 15:48:25 +01:00
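Together with 0fb76c71ac above, the shape of the change is roughly this sketch (a predicate builder, not the actual query):

```
# The same "events strictly before (topological_ordering, stream_ordering)"
# predicate, written two ways. PostgreSQL plans the row-comparison form
# efficiently against a multicolumn index; sqlite3, which lacked row
# values at the time, gets the expanded boolean form.
def before_clause(engine):
    if engine == "postgres":
        return "(topological_ordering, stream_ordering) < (?, ?)", 2
    return (
        "(topological_ordering < ?"
        " OR (topological_ordering = ? AND stream_ordering < ?))",
        3,  # number of bind parameters the clause consumes
    )
```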
Erik Johnston
3de8168343 Merge pull request #905 from KentShikama/add-password-hash
Optionally include password hash in createUser endpoint
2016-07-04 14:23:04 +01:00
Kent Shikama
bb069079bb Fix style violations
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-04 22:07:11 +09:00
Kent Shikama
2e5a31f197 Use .get() instead of [] to access password_hash 2016-07-04 22:00:13 +09:00
Kent Shikama
fc8007dbec Optionally include password hash in createUser endpoint
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-03 15:08:15 +09:00
Richard van der Hoff
1238203bc4 code_style.rst: add link to sphinx examples 2016-07-01 09:36:51 +01:00
Richard van der Hoff
41f072fd0e code_style.rst: *fix* link to google style 2016-07-01 09:09:40 +01:00
Richard van der Hoff
5a6ef20ef6 code_style.rst: add link to google style 2016-07-01 09:08:35 +01:00
David Baker
be8be535f7 requestToken update
Don't send requestToken request to untrusted ID servers

Also correct the THREEPID_IN_USE error to add the M_ prefix. This is a backwards-incompatible change, but the only thing using this is the angular client, which is now unmaintained, so it's probably better to just do this now.
2016-06-30 17:51:28 +01:00
Erik Johnston
ab71589c0b Merge pull request #903 from matrix-org/erikj/deactivate_user
Feature: Add deactivate account admin API
2016-06-30 15:57:29 +01:00
Erik Johnston
f328d95cef Feature: Add deactivate account admin API
Allows server admins to "deactivate" accounts, which:

- Revokes all access tokens
- Removes all threepids
- Removes password

The API is a POST to `/admin/deactivate/<user_id>`
2016-06-30 15:40:58 +01:00
Erik Johnston
aac546c978 Merge pull request #902 from matrix-org/erikj/expire_media
Feature: Implement purge_media_cache admin API
2016-06-29 15:50:57 +01:00
Erik Johnston
f52cb4cd78 Remove race 2016-06-29 15:24:50 +01:00
Mark Haines
6783534a0f Merge pull request #886 from matrix-org/markjh/async_commit
Optionally make committing to postgres asynchronous.
2016-06-29 15:21:58 +01:00
Erik Johnston
a70688445d Implement purge_media_cache admin API 2016-06-29 14:57:59 +01:00
Erik Johnston
314b146b2e Track approximate last access time for remote media 2016-06-29 11:41:20 +01:00
David Baker
7846ac3125 Merge pull request #900 from RickCogley/RickCogley-coturn-readme-2
Rick cogley coturn readme 2
2016-06-28 10:49:02 +01:00
Rick Cogley
56ec5869c9 Update turn-howto.rst to use git clone (2)
Not logical to use svn checkout against a github repo, so changed to git clone. 

Signed-off-by: Rick Cogley <rick.cogley@esolia.co.jp>
2016-06-28 18:34:38 +09:00
Rick Cogley
1ea358b28b Update turn-howto.rst to use git clone
svn checkout is not logical for a checkout from github, so changed the checkout to "git clone". 
thanks @dbkr

Signed-off-by: Rick Cogley <rick.cogley@esolia.co.jp>
2016-06-28 18:27:54 +09:00
David Baker
db74dcda5b Merge pull request #894 from matrix-org/dbkr/push_room_naming
Use naming similar to what we use in email notifs for push
2016-06-28 10:12:24 +01:00
Rick Cogley
551fe80bed Remove double spaces
Reading the RST spec, I was trying to get breaks to appear by entering double spaces after the lines in the code blocks. It does not work anyway and, as pointed out, I've removed them.
2016-06-28 12:47:55 +09:00
Matthew Hodgson
63bb8f0df9 remove vector.im from default secondary DS list 2016-06-27 13:13:33 +04:00
Rick Cogley
70d820c875 Update to reflect new location at github.
Additionally, it does not appear there is a turnserver.conf.default, but rather just /etc/turnserver.conf.
2016-06-26 19:07:07 +09:00
David Baker
3f7652c56f Merge remote-tracking branch 'origin/develop' into dbkr/push_room_naming 2016-06-24 14:03:39 +01:00
Mark Haines
535b6bfacc Merge pull request #895 from matrix-org/markjh/jenkins_port_range
Fix the sytests to use a port-range rather than a port base
2016-06-24 14:03:01 +01:00
Mark Haines
f7fe0e5f67 Fix the sytests to use a port-range rather than a port base 2016-06-24 13:53:03 +01:00
David Baker
2455ad8468 Remove room name & alias test
as get_room_name_and_alias is now gone
2016-06-24 13:34:20 +01:00
David Baker
0b640aa56b even more pep8 2016-06-24 11:47:11 +01:00
David Baker
aa3a4944d5 more pep8 2016-06-24 11:45:23 +01:00
David Baker
46b7362304 pep8 2016-06-24 11:44:57 +01:00
David Baker
870c45913e Use naming similar to what we use in email notifs for push
Fixes https://github.com/vector-im/vector-web/issues/1654
2016-06-24 11:41:11 +01:00
Mark Haines
05f1a4596a Merge branch 'master' into develop 2016-06-23 11:17:48 +01:00
David Baker
13517e2914 Merge pull request #892 from matrix-org/dbkr/email_notif_most_recent
Put most recent 20 messages in notif
2016-06-23 09:25:45 +01:00
David Baker
b5fb7458d5 Actually we need to order these properly
otherwise we'll end up returning the wrong 20
2016-06-22 18:07:14 +01:00
David Baker
f73fdb04a6 Style 2016-06-22 17:51:40 +01:00
David Baker
3a4120e49a Put most recent 20 messages in notif
Fixes https://github.com/vector-im/vector-web/issues/1648
2016-06-22 17:47:18 +01:00
Erik Johnston
9fe894402f Merge pull request #843 from mweinelt/ldap3-rewrite
Rewrite LDAP Authentication against ldap3
2016-06-22 17:23:07 +01:00
Martin Weinelt
0a32208e5d Rework ldap integration with ldap3
Use the pure-python ldap3 library, which eliminates the need for a
system dependency.

Offer both a `search` and `simple_bind` mode, for more sophisticated
ldap scenarios.
- `search` tries to find a matching DN within the `user_base` while
  employing the `user_filter`, then tries the bind when a single
  matching DN was found.
- `simple_bind` tries the bind against a specific DN by combining the
  localpart and `user_base`

Offer support for STARTTLS on a plain connection.

The configuration was changed to reflect these new possibilities.

Signed-off-by: Martin Weinelt <hexa@darmstadt.ccc.de>
2016-06-22 17:51:59 +02:00
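A hedged sketch of the two modes, assuming a reasonably recent ldap3; the host, base DN, and filter are illustrative, and a real deployment would add STARTTLS and error handling:

```
from ldap3 import SIMPLE, Connection, Server

USER_BASE = "ou=users,dc=example,dc=com"  # illustrative user_base

def ldap_simple_bind(localpart, password):
    # simple_bind: build the DN directly from localpart and user_base.
    dn = "uid=%s,%s" % (localpart, USER_BASE)
    conn = Connection(Server("ldap.example.com"), user=dn,
                      password=password, authentication=SIMPLE)
    return conn.bind()

def ldap_search_bind(localpart, password):
    # search: find exactly one DN matching the user_filter, then try the
    # bind against that DN.
    conn = Connection(Server("ldap.example.com"))
    conn.bind()  # anonymous search; some servers need credentials here
    conn.search(USER_BASE, "(uid=%s)" % localpart)
    dns = [r["dn"] for r in (conn.response or []) if "dn" in r]
    if len(dns) != 1:
        return False
    return Connection(Server("ldap.example.com"), user=dns[0],
                      password=password, authentication=SIMPLE).bind()
```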
Mark Haines
774f3a692c Merge pull request #889 from matrix-org/markjh/synctl_workers
Optionally start or stop workers in synctl.
2016-06-21 17:58:19 +01:00
Mark Haines
5cc7564c5c Optionally start or stop workers in synctl.
Optionally start or stop an individual worker by passing -w with
the path to the worker config.
Optionally start or stop every worker and the main synapse by
passing -a with a path to a directory containing worker configs.

The "-w" is intended to be used to bounce individual workers proceses.
THe "-a" is intended for when you want to restart all the workers
simultaneuously, for example when performing database upgrades.
2016-06-21 16:38:05 +01:00
Mark Haines
0fe0b0eeb6 Merge pull request #888 from matrix-org/markjh/content_repo
Remove the legacy v0 content upload API.
2016-06-21 14:01:01 +01:00
David Baker
0126a5d60a Merge pull request #887 from matrix-org/dbkr/notif_template_subs_fail
Fix substitution failure in mail template
2016-06-21 13:48:54 +01:00
Mark Haines
13e334506c Remove the legacy v0 content upload API.
The existing content can still be downloaded. The last upload to the
matrix.org server was in January 2015, so it is probably safe to remove
the upload API.
2016-06-21 11:47:39 +01:00
David Baker
6b40e4f52a Fix substitution failure in mail template 2016-06-21 11:37:56 +01:00
Mark Haines
d5fb561709 Optionally make committing to postgres asynchronous.
Useful when running tests, when you don't care whether the server
will lose data that it claims to have committed.
2016-06-20 17:53:38 +01:00
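For PostgreSQL this boils down to a session setting; a sketch assuming psycopg2 and a reachable throwaway database:

```
# COMMIT will return before the WAL is flushed to disk, so a crash can
# lose transactions the server already acknowledged. Fine for test
# databases, dangerous for real data.
import psycopg2

conn = psycopg2.connect("dbname=synapse_test user=synapse")  # illustrative DSN
with conn.cursor() as cur:
    cur.execute("SET synchronous_commit TO OFF")
conn.commit()
```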
Erik Johnston
d8ec81cc31 Merge pull request #879 from matrix-org/erikj/linearize_fed_server
Linearize some federation endpoints based on (origin, room_id)
2016-06-20 17:34:29 +01:00
Erik Johnston
bc72d381b2 Merge branch 'release-v0.16.1' of github.com:matrix-org/synapse 2016-06-20 14:18:04 +01:00
Erik Johnston
4d362a61ea Bump version and changelog 2016-06-20 14:17:42 +01:00
Erik Johnston
00c281f6a4 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.16.1 2016-06-20 14:13:54 +01:00
Mark Haines
c8cd41cdd8 Merge pull request #880 from matrix-org/markjh/registered_user
Remove registered_users from the distributor.
2016-06-17 19:45:06 +01:00
Mark Haines
41e4b2efea Add the create_profile method back since the tests use it 2016-06-17 19:20:47 +01:00
Mark Haines
0c13d45522 Add a comment on why we don't create a profile for upgrading users 2016-06-17 19:18:53 +01:00
Mark Haines
9f1800fba8 Remove registered_users from the distributor.
The only thing observing it was the code that set the profile. I've made it
so that the profile is set within store.register, in the same transaction
that creates the user.

This required some slight changes to the registration code for upgrading
guest users, since it previously relied on the distributor swallowing errors
if the profile already existed.
2016-06-17 19:14:16 +01:00
Erik Johnston
8f4a9bbc16 Linearize some federation endpoints based on (origin, room_id) 2016-06-17 16:43:45 +01:00
Erik Johnston
9ba2bf1570 Merge pull request #878 from matrix-org/erikj/ujson
Disable responding with canonical json for federation
2016-06-17 16:22:12 +01:00
Erik Johnston
120c238705 Disable responding with canonical json for federation 2016-06-17 16:10:37 +01:00
Erik Johnston
1c1f633b13 Merge pull request #877 from matrix-org/erikj/frozen_default
Turn use_frozen_events off by default
2016-06-17 15:33:55 +01:00
Erik Johnston
6660f37558 Merge pull request #876 from matrix-org/erikj/sign_own
Only re-sign our own events
2016-06-17 15:23:20 +01:00
Mark Haines
20e5b46b20 Merge pull request #875 from matrix-org/markjh/email_formatting
Fix ``KeyError: 'msgtype'``. Use ``.get``
2016-06-17 15:14:58 +01:00
Erik Johnston
0113ad36ee Enable use_frozen_events in tests 2016-06-17 15:13:13 +01:00
Erik Johnston
3e41de05cc Turn use_frozen_events off by default 2016-06-17 15:11:22 +01:00
Erik Johnston
2884712ca7 Only re-sign our own events 2016-06-17 14:47:33 +01:00
Mark Haines
ded01c3bf6 Fix ``KeyError: 'msgtype'``. Use ``.get``
Fixes a key error where the mailer tried to get the ``msgtype`` of an
event that was missing a ``msgtype``.

```
 File "synapse/push/mailer.py", line 264, in get_notif_vars
 File "synapse/push/mailer.py", line 285, in get_message_vars
 File ".../frozendict/__init__.py", line 10, in __getitem__
    return self.__dict[key]
    KeyError: 'msgtype'
```
2016-06-17 13:49:16 +01:00
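The shape of the fix, sketched:

```
# Treat msgtype as optional instead of indexing a key that events such
# as m.room.member simply don't have.
def msgtype_of(event_content):
    return event_content.get("msgtype")  # None when absent, no KeyError
```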
Mark Haines
8c75040c25 Fix setting gc thresholds in the workers 2016-06-17 11:48:12 +01:00
Mark Haines
f1073ad43d Merge pull request #874 from matrix-org/markjh/worker_config
Inline the synchrotron and pusher configs into the main config
2016-06-17 10:58:52 +01:00
Mark Haines
a352b68acf Use worker_ prefixes for worker config, use existing support for multiple config files 2016-06-16 17:29:50 +01:00
Mark Haines
364d616792 Access the event_cache_size directly from the server object.
This means that the workers can override the event_cache_size
directly without clobbering the value in the main synapse config.
2016-06-16 12:53:15 +01:00
Mark Haines
bde13833cb Access replication_url from the worker config directly 2016-06-16 12:44:40 +01:00
Mark Haines
80a1bc7db5 Comment on what's going on in clobber_with_worker_config 2016-06-16 11:29:45 +01:00
Mark Haines
f1f70bf4b5 Merge remote-tracking branch 'origin/develop' into markjh/worker_config 2016-06-16 11:20:17 +01:00
Mark Haines
dbb5a39b64 Add worker config module 2016-06-16 11:09:15 +01:00
Mark Haines
885ee861f7 Inline the synchrotron and pusher configs into the main config 2016-06-16 11:06:12 +01:00
Erik Johnston
486b9a6a2d Merge pull request #873 from vt0r/bugfix/bcrypt-utf8-encode
Fix TypeError in call to bcrypt.hashpw
2016-06-16 10:12:17 +01:00
Erik Johnston
1f31381611 Merge pull request #872 from matrix-org/erikj/preview_url_fixes
Fix some `/preview_url` explosions
2016-06-16 10:08:29 +01:00
Salvatore LaMendola
ed5f43a55a Fix TypeError in call to bcrypt.hashpw
- At the very least, this TypeError caused logins to fail on my own
  running instance of Synapse, and the simple (explicit) UTF-8
  conversion resolved login errors for me.

Signed-off-by: Salvatore LaMendola <salvatore.lamendola@gmail.com>
2016-06-16 00:43:42 -04:00
Erik Johnston
09a17f965c Line lengths 2016-06-15 16:58:12 +01:00
Erik Johnston
1e9026e484 Handle floats as img widths 2016-06-15 16:58:05 +01:00
Erik Johnston
a60169ea09 Handle og props with no content 2016-06-15 16:57:48 +01:00
Mark Haines
78a16d395c Merge pull request #867 from matrix-org/markjh/enable_jenkins_synchrotron
Enable testing the synchrotron on jenkins
2016-06-15 16:41:58 +01:00
Erik Johnston
5436848955 Merge branch 'release-v0.16.1' of github.com:matrix-org/synapse into develop 2016-06-15 16:24:07 +01:00
Erik Johnston
0477368e9a Update change log 2016-06-15 16:06:26 +01:00
Erik Johnston
0ef0655b83 Bump version and changelog 2016-06-15 15:50:48 +01:00
Erik Johnston
a64dbae90b Merge pull request #871 from matrix-org/erikj/linearize_state_fetch_on_pdu
Linearize fetching of gaps on incoming events
2016-06-15 15:31:57 +01:00
Erik Johnston
d41a1a91d3 Linearize fetching of gaps on incoming events
This potentially stops the server from doing multiple requests for the
same data.
2016-06-15 15:16:14 +01:00
Richard van der Hoff
15bf3e3376 Merge pull request #870 from matrix-org/rav/work_around_tls_bug
Work around TLS bug in twisted
2016-06-15 13:28:13 +01:00
Erik Johnston
d9f7fa2e57 Merge pull request #869 from matrix-org/erikj/backfill_fix
Correctly mark backfilled events as backfilled
2016-06-15 11:21:31 +01:00
Erik Johnston
b31c49d676 Correctly mark backfilled events as backfilled 2016-06-15 10:59:08 +01:00
Richard van der Hoff
255c229f23 Work around TLS bug in twisted
Wrap up twisted's FileBodyProducer to work around
https://twistedmatrix.com/trac/ticket/8473. Hopefully this fixes
https://matrix.org/jira/browse/SYN-700.
2016-06-15 10:39:08 +01:00
Erik Johnston
d12134ce37 Merge pull request #868 from matrix-org/erikj/invalid_id
Make get_domain_from_id throw SynapseError on invalid ID
2016-06-14 15:09:07 +01:00
Erik Johnston
36e2aade87 Make get_domain_from_id throw SynapseError on invalid ID 2016-06-14 13:25:29 +01:00
Matthew Hodgson
33546b58aa point to the CAPTCHA docs 2016-06-12 23:11:29 +01:00
Mark Haines
fdc015c6e9 Enable testing the synchrotron on jenkins 2016-06-10 16:30:26 +01:00
Erik Johnston
e9892b4b26 Merge pull request #866 from bartekrutkowski/develop
Change /bin/bash to /bin/sh in tox.ini
2016-06-10 13:02:48 +01:00
Bartek Rutkowski
50f69e2ef2 Change /bin/bash to /bin/sh in tox.ini
No features of Bash are used here, so using /bin/sh makes it more portable to systems that don't have Bash natively (like BSD systems).
2016-06-10 11:33:43 +01:00
Mark Haines
41b35412bf Merge pull request #863 from matrix-org/markjh/load_config
Add function to load config without generating it
2016-06-10 11:12:32 +01:00
Mark Haines
7dbb473339 Add function to load config without generating it
Renames ``load_config`` to ``load_or_generate_config``
Adds a method called ``load_config`` that just loads the
config.

The main synapse.app.homeserver will continue to use
``load_or_generate_config`` to retain backwards compat.
However new worker processes can use ``load_config`` to
load the config avoiding some of the cruft needed to generate
the config.

As the new ``load_config`` method is expected to be used by new
configs, it removes support for the legacy command-line overrides
that ``load_or_generate_config`` supports.
2016-06-09 18:50:38 +01:00
Erik Johnston
16a8884233 Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-06-09 16:57:33 +01:00
Erik Johnston
ba0406d10d Merge branch 'release-v0.16.0' of github.com:matrix-org/synapse 2016-06-09 14:21:23 +01:00
Erik Johnston
8327d5df70 Change CHANGELOG 2016-06-09 14:16:26 +01:00
Erik Johnston
a31befbcd0 Bump version and changelog 2016-06-09 13:23:41 +01:00
Erik Johnston
5ee5b655b2 Merge pull request #862 from matrix-org/erikj/media_remote_error
502 on /thumbnail when can't contact remote server
2016-06-09 13:11:02 +01:00
Erik Johnston
0a43219a27 Merge pull request #860 from negzi/bug_fix_get_or_create_user
Fix a bug caused by a change in auth_handler function
2016-06-09 11:40:12 +01:00
Erik Johnston
eba4ff1bcb 502 on /thumbnail when can't contact remote server 2016-06-09 11:29:43 +01:00
Erik Johnston
2eab219a70 Merge pull request #861 from matrix-org/erikj/events_log
Remove redundant exception log in /events
2016-06-09 11:22:20 +01:00
Erik Johnston
95f305c35a Remove redundant exception log in /events 2016-06-09 11:15:04 +01:00
Negar Fazeli
6e7dc7c7dd Fix a bug caused by a change in auth_handler function
Fix the relevant unit test cases
2016-06-08 23:22:39 +02:00
Erik Johnston
b7fbc9bd95 Merge pull request #859 from matrix-org/erikj/public_room_performance
Pull full state for each room all at once
2016-06-08 16:36:06 +01:00
Erik Johnston
919a2c74f6 Merge branch 'release-v0.16.0' of github.com:matrix-org/synapse into develop 2016-06-08 16:26:26 +01:00
Erik Johnston
81c07a32fd Pull full state for each room all at once 2016-06-08 15:51:49 +01:00
Erik Johnston
b063784b78 Merge pull request #857 from matrix-org/erikj/default_visibility
Don't make rooms visible by default
2016-06-08 15:18:58 +01:00
Mark Haines
defa28efa1 Disable the synchrotron on jenkins until the sytest support lands (#855)
* Disable the synchrotron on jenkins until the sytest support lands

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins
2016-06-08 15:11:31 +01:00
Erik Johnston
690029d1a3 Don't make rooms visible by default 2016-06-08 14:47:42 +01:00
Erik Johnston
d88faf92d1 Fix up federation PublicRoomList 2016-06-08 14:39:31 +01:00
Erik Johnston
958c968d02 Merge pull request #856 from matrix-org/erikj/fed_pub_rooms
Enable auth on /publicRoom endpoints
2016-06-08 14:36:09 +01:00
Erik Johnston
746b2f5657 Merge pull request #854 from matrix-org/erikj/federation_logging
Add some logging for when servers ask for missing events
2016-06-08 14:31:41 +01:00
Erik Johnston
efeabd3180 Log user that is making /publicRooms calls 2016-06-08 14:23:15 +01:00
Erik Johnston
1fd6eb695d Enable auth on federation PublicRoomList 2016-06-08 14:15:18 +01:00
Erik Johnston
128360d4f0 Merge pull request #853 from matrix-org/erikj/replication_noop
Don't hit DB for noop replications queries
2016-06-08 13:26:26 +01:00
Erik Johnston
17aab5827a Add some logging for when servers ask for missing events 2016-06-08 11:55:31 +01:00
Erik Johnston
1a815fb04f Don't hit DB for noop replications queries 2016-06-08 11:33:30 +01:00
Erik Johnston
66503a69c9 Update commit hash in changelog 2016-06-08 11:13:56 +01:00
Erik Johnston
bab916bccc Bump version and changelog to v0.16.0-rc2 2016-06-08 11:05:45 +01:00
Erik Johnston
5c73115155 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.16.0 2016-06-08 10:50:31 +01:00
Erik Johnston
e0fda29f94 Merge pull request #850 from matrix-org/erikj/gc_threshold
Add gc_threshold to pusher and synchrotron
2016-06-08 10:05:56 +01:00
Erik Johnston
02270b4b3d Merge pull request #852 from matrix-org/erikj/gc_metrics
Add GC counts to metrics
2016-06-08 10:05:49 +01:00
Mark Haines
6babdb671b Merge pull request #851 from matrix-org/markjh/jenkins_synchrotron
Add script for running sytest with dendron
2016-06-07 17:20:41 +01:00
Erik Johnston
0f2165ccf4 Don't track total objects as it's too expensive to calculate 2016-06-07 17:00:45 +01:00
Erik Johnston
18f0cc7d99 Record some more GC metrics 2016-06-07 16:55:49 +01:00
Mark Haines
64935d11f7 Add script for running sytest with dendron 2016-06-07 16:45:40 +01:00
Erik Johnston
2d1d1025fa Add gc_threshold to pusher and synchrotron 2016-06-07 16:26:25 +01:00
Erik Johnston
b01e71e719 Merge pull request #849 from matrix-org/erikj/gc_threshold
Allow setting of gc.set_thresholds
2016-06-07 16:07:54 +01:00
Erik Johnston
dded389ac1 Allow setting of gc.set_thresholds 2016-06-07 15:45:56 +01:00
Mark Haines
c54fcd9ee4 Merge pull request #848 from matrix-org/markjh/unusedIV
Remove dead code.
2016-06-07 15:27:12 +01:00
Mark Haines
0b2158719c Remove dead code.
Loading push rules now happens in the datastore, so we can remove
the methods that loaded them outside the datastore.

The ``waiting_for_join_list`` in the federation handler isn't populated by
anything, so it can be removed.

The ``_get_members_events_txn`` method isn't called from anywhere,
so it can also be removed.
2016-06-07 15:07:11 +01:00
Erik Johnston
188f8d63e2 Merge pull request #847 from matrix-org/erikj/gc_tick
Change the way we do stats for GC
2016-06-07 13:46:26 +01:00
Erik Johnston
48e65099b5 Also record number of unreachable objects 2016-06-07 13:40:22 +01:00
Erik Johnston
75331c5fca Change the way we do stats 2016-06-07 13:33:13 +01:00
Erik Johnston
8c966fbd51 Merge pull request #771 from matrix-org/erikj/gc_tick
Manually run GC on reactor tick.
2016-06-07 13:18:36 +01:00
Mark Haines
efe8126290 Merge pull request #846 from matrix-org/markjh/user_joined_notifier
Notify users for events in rooms they join.
2016-06-07 11:43:49 +01:00
Mark Haines
88625db05f Notify users for events in rooms they join.
Change how the notifier updates the map from room_id to user streams on
receiving a join event. Make it update the map when it notifies for the
join event, rather than using the "user_joined_room" distributor signal
2016-06-07 11:33:36 +01:00
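An illustrative sketch of the bookkeeping change, with the data structures simplified:

```
# Maintain the room_id -> user-streams map at the moment we notify for
# the join event, rather than via a separate distributor signal.
from collections import defaultdict

room_to_user_streams = defaultdict(set)

def on_new_room_event(event):
    content = event.get("content", {})
    if event.get("type") == "m.room.member" and \
            content.get("membership") == "join":
        # Updating here, in the same place the notification happens,
        # keeps the map and the notifications from drifting apart.
        room_to_user_streams[event["room_id"]].add(event["state_key"])
```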
Erik Johnston
84379062f9 Fix AS retries, but with correct ordering 2016-06-07 10:24:50 +01:00
Erik Johnston
310197bab5 Fix AS retries 2016-06-07 09:34:50 +01:00
Mark Haines
b0932b34cb Merge pull request #845 from matrix-org/markjh/synchrotron_presence
Fix a KeyError in the synchrotron presence
2016-06-06 16:52:27 +01:00
Mark Haines
4a5bbb1941 Fix a KeyError in the synchrotron presence 2016-06-06 16:37:12 +01:00
Mark Haines
87f60e7053 Merge pull request #844 from matrix-org/markjh/yield_on_sleep
Yield on the sleeps intended to backoff replication
2016-06-06 16:20:54 +01:00
Mark Haines
5ef84da4f1 Yield on the sleeps intended to backoff replication 2016-06-06 16:05:28 +01:00
Erik Johnston
216a05b3e3 .values() returns list of sets 2016-06-06 16:00:09 +01:00
Erik Johnston
9f715573aa Merge pull request #842 from matrix-org/erikj/presence_timer
Fire after 30s not 8h
2016-06-06 15:51:23 +01:00
Erik Johnston
96dc600579 Fix typos 2016-06-06 15:44:41 +01:00
Erik Johnston
377eb480ca Fire after 30s not 8h 2016-06-06 15:14:21 +01:00
Erik Johnston
e4134c5e13 Merge pull request #841 from matrix-org/erikj/event_counter
Add metric counter for number of persisted events
2016-06-06 14:17:40 +01:00
Erik Johnston
778c1fea8b Merge pull request #840 from matrix-org/erikj/event_write_through
Add events to cache when we persist them
2016-06-06 14:17:35 +01:00
Erik Johnston
7aa778fba9 Add metric counter for number of persisted events 2016-06-06 11:58:09 +01:00
Erik Johnston
70aee0717c Add events to cache when we persist them 2016-06-06 11:34:53 +01:00
Erik Johnston
3210f4c385 Merge pull request #836 from matrix-org/erikj/change_event_cache
Change the way we cache events
2016-06-03 18:47:46 +01:00
Mark Haines
040a560a48 Merge pull request #837 from matrix-org/markjh/synchrotron_presence_list
Add get_presence_list_accepted to the broken caches in synchrotron
2016-06-03 18:28:08 +01:00
Erik Johnston
cffe46408f Don't rely on options when inserting event into cache 2016-06-03 18:25:21 +01:00
Mark Haines
ac9716f154 Fix spelling 2016-06-03 18:10:00 +01:00
Mark Haines
8f79084bd4 Add get_presence_list_accepted to the broken caches in synchrotron 2016-06-03 18:03:40 +01:00
Erik Johnston
10ea3f46ba Change the way we cache events 2016-06-03 17:57:50 +01:00
Erik Johnston
f6be734be9 Merge pull request #835 from matrix-org/erikj/get_event_txn
Remove event fetching from DB threads
2016-06-03 17:30:00 +01:00
Erik Johnston
05e01f21d7 Remove event fetching from DB threads 2016-06-03 17:22:13 +01:00
David Baker
81d226888f Merge pull request #834 from matrix-org/dbkr/fix_email_from
Fix email notif From
2016-06-03 16:54:29 +01:00
David Baker
72c4d482e9 3rd time lucky: we'd already calculated it above 2016-06-03 16:39:50 +01:00
David Baker
fbf608decb Oops, we're using the dict form 2016-06-03 16:38:39 +01:00
David Baker
06d40c8b98 Add substitutions to email notif From 2016-06-03 16:31:23 +01:00
Erik Johnston
5f88549f4a Merge branch 'release-v0.16.0' of github.com:matrix-org/synapse into develop 2016-06-03 16:27:24 +01:00
Mark Haines
ca457f594e Merge pull request #831 from matrix-org/markjh/synchrotronII
Add a separate process that can handle /sync requests
2016-06-03 16:06:25 +01:00
Erik Johnston
c11614bcdc Note that v0.15.x was never released 2016-06-03 15:50:15 +01:00
Erik Johnston
21961c93c7 Bump changelog and version 2016-06-03 15:33:14 +01:00
Mark Haines
48340e4f13 Clear the list of ongoing syncs on shutdown 2016-06-03 15:02:27 +01:00
Erik Johnston
fcbc282f56 Merge branch 'release-v0.15.0' of github.com:matrix-org/synapse into release-v0.16.0 2016-06-03 15:02:04 +01:00
Erik Johnston
51773bcbaf Merge pull request #832 from matrix-org/erikj/presence_coount
Change def of small delta in presence stream. Add metrics.
2016-06-03 14:57:00 +01:00
Mark Haines
da491e75b2 Appease flake8 2016-06-03 14:56:36 +01:00
Mark Haines
0b3c80a234 Use ClientIpStore to record client ips 2016-06-03 14:55:01 +01:00
Mark Haines
dd6f62ed99 Merge branch 'develop' into markjh/synchrotronII 2016-06-03 14:51:33 +01:00
Mark Haines
c367ba5dd2 Merge pull request #833 from matrix-org/markjh/client_ips
Move insert_client_ip to a separate class
2016-06-03 14:51:03 +01:00
Mark Haines
eef541a291 Move insert_client_ip to a separate class 2016-06-03 14:42:35 +01:00
Mark Haines
80aade3805 Send updates to the syncing users every ten seconds or immediately if they've just come online 2016-06-03 14:24:19 +01:00
Erik Johnston
ab116bdb0c Fix typo 2016-06-03 14:03:42 +01:00
Erik Johnston
4ce84a1acd Change metric style 2016-06-03 13:49:16 +01:00
Erik Johnston
a7ff5a1770 Presence metrics. Change def of small delta 2016-06-03 13:40:55 +01:00
Matthew Hodgson
aa6fab0cc2 Merge pull request #822 from matrix-org/matthew/brand-from-header
brand the email from header
2016-06-03 12:15:01 +01:00
Matthew Hodgson
8d740132f4 Merge branch 'develop' into matthew/brand-from-header 2016-06-03 12:14:18 +01:00
Erik Johnston
3b096c5f5c Merge branch 'erikj/cache_perf' of github.com:matrix-org/synapse into develop 2016-06-03 12:00:33 +01:00
Erik Johnston
43b7f371f5 Merge pull request #830 from matrix-org/erikj/metrics_perf
Change CacheMetrics to be quicker
2016-06-03 11:57:29 +01:00
Mark Haines
abb151f3c9 Add a separate process that can handle /sync requests 2016-06-03 11:57:26 +01:00
Erik Johnston
4982b28868 Merge pull request #829 from matrix-org/erikj/poke_notifier
Poke notifier on next reactor tick
2016-06-03 11:52:10 +01:00
Erik Johnston
d06f2a229e Merge pull request #828 from matrix-org/erikj/joined_hosts_for_room
Make get_joined_hosts_for_room use get_users_in_room
2016-06-03 11:50:30 +01:00
Erik Johnston
58a224a651 Pull out update_results_dict 2016-06-03 11:47:07 +01:00
Mark Haines
20eccd84d4 Merge pull request #827 from matrix-org/markjh/more_slaved_methods
Add methods to events, account data and receipt slaves
2016-06-03 11:46:21 +01:00
Erik Johnston
722472b48c Merge pull request #825 from matrix-org/erikj/cache_push_rules
Load push rules in storage layer so that they get cached
2016-06-03 11:44:32 +01:00
Erik Johnston
73c7112433 Change CacheMetrics to be quicker
We change it so that each cache has an individual CacheMetric, instead
of having one global CacheMetric. This means that when a cache tries to
increment a counter it does not need to go through so many indirections.
2016-06-03 11:26:52 +01:00
Mark Haines
b09f348530 Merge pull request #824 from matrix-org/markjh/slaved_presence_store
Add a slaved store for presence
2016-06-03 11:26:33 +01:00
Erik Johnston
4c04222fa5 Poke notifier on next reactor tick 2016-06-03 11:24:16 +01:00
Erik Johnston
ccb56fc24b Make get_joined_hosts_for_room use get_users_in_room 2016-06-03 11:20:23 +01:00
Mark Haines
81cf449daa Add methods to events, account data and receipt slaves
Adds the methods needed by /sync to the slaved events,
account data and receipt stores.
2016-06-03 11:19:27 +01:00
Erik Johnston
e043ede4a2 Small optimisation to CacheListDescriptor 2016-06-03 11:19:22 +01:00
Mark Haines
b821c839dc Merge pull request #823 from matrix-org/markjh/more_slaved_stores
Add slaved stores for filters, tokens, and push rules
2016-06-03 11:17:43 +01:00
Erik Johnston
597013caa5 Make cachedList go a bit faster 2016-06-03 11:13:29 +01:00
Erik Johnston
6a0afa582a Load push rules in storage layer, so that they get cached 2016-06-03 11:10:00 +01:00
Mark Haines
3ae915b27e Add a slaved store for presence 2016-06-03 11:05:53 +01:00
Erik Johnston
59f2d73522 Remove unnecessary sets 2016-06-03 11:05:45 +01:00
Erik Johnston
9c26b390a2 Only get local users 2016-06-03 11:04:31 +01:00
Mark Haines
f88d747f79 Add a comment explaining why the filter cache doesn't need expiring 2016-06-03 11:03:10 +01:00
Erik Johnston
065e739d6e Merge pull request #811 from matrix-org/erikj/state_users_in_room
Use state to calculate get_users_in_room
2016-06-03 10:58:27 +01:00
Erik Johnston
696d7c5937 Merge pull request #809 from matrix-org/erikj/cache_receipts_in_room
Add get_users_with_read_receipts_in_room cache
2016-06-03 10:58:24 +01:00
Mark Haines
0eae075723 Add slaved stores for filters, tokens, and push rules 2016-06-03 10:58:03 +01:00
Matthew Hodgson
79d1f072f4 brand the email from header 2016-06-02 21:34:40 +01:00
David Baker
6bb9aacf9d Merge pull request #821 from matrix-org/dbkr/email_unsubscribe
Email unsubscribe links that don't require logging in
2016-06-02 17:44:55 +01:00
David Baker
745ddb4dd0 peppate 2016-06-02 17:38:41 +01:00
David Baker
7a5a5f2df2 Merge pull request #820 from matrix-org/dbkr/email_notif_string_fmt_error
Fix error in email notification string formatting
2016-06-02 17:26:06 +01:00
David Baker
1f31cc37f8 Working unsubscribe links going straight to the HS
and authed by macaroons that let you delete pushers and nothing else
2016-06-02 17:21:31 +01:00
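A sketch of such a narrowly-scoped token, assuming the pymacaroons package; the caveat strings are illustrative:

```
from pymacaroons import Macaroon

def make_unsubscribe_macaroon(user_id, secret_key):
    # Caveats pin the token to one user and one capability, so a leaked
    # unsubscribe link cannot be replayed as a general access token.
    m = Macaroon(location="example.com", identifier="key0", key=secret_key)
    m.add_first_party_caveat("user_id = %s" % user_id)
    m.add_first_party_caveat("type = delete_pusher")  # hypothetical caveat
    return m.serialize()

print(make_unsubscribe_macaroon("@alice:example.com", "server-secret"))
```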
Matthew Hodgson
2675c1e40e add some branding debugging 2016-06-02 17:21:12 +01:00
David Baker
c71177f285 Merge remote-tracking branch 'origin/dbkr/email_notif_string_fmt_error' into dbkr/email_unsubscribe 2016-06-02 17:20:56 +01:00
David Baker
07a5559916 Fix error in email notification string formatting 2016-06-02 17:17:16 +01:00
Mark Haines
56d15a0530 Store the typing users as user_id strings. (#819)
Rather than storing them as UserID objects.
2016-06-02 16:28:54 +01:00
David Baker
812b5de0fe Merge remote-tracking branch 'origin/develop' into dbkr/email_unsubscribe 2016-06-02 15:33:28 +01:00
Mark Haines
80f34d7b57 Fix setting the _clock in SQLBaseStore 2016-06-02 15:23:56 +01:00
Mark Haines
661a540dd1 Deduplicate presence entries in sync (#818) 2016-06-02 15:20:28 +01:00
Mark Haines
70599ce925 Allow external processes to mark a user as syncing. (#812)
* Add infrastructure to the presence handler to track sync requests in external processes

* Expire stale entries for dead external processes

* Add an http endpoint for marking users as syncing

Add some docstrings and comments.

* Fixes
2016-06-02 15:20:15 +01:00
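A minimal sketch of the tracking-and-expiry part described in the bullets above (timeout and structures illustrative):

```
import time

EXPIRY_SECS = 30  # illustrative; how long before a silent process is "dead"
process_last_seen = {}  # external process id -> last check-in time

def mark_process_syncing(process_id):
    # Called whenever an external process reports its syncing users.
    process_last_seen[process_id] = time.time()

def expire_stale_processes():
    # Drop entries for processes that have stopped checking in, so their
    # users no longer count as syncing.
    now = time.time()
    stale = [p for p, t in process_last_seen.items() if now - t > EXPIRY_SECS]
    for p in stale:
        del process_last_seen[p]
    return stale
```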
David Baker
fb2193cc63 Merge pull request #817 from matrix-org/dbkr/split_out_auth_handler
Split out the auth handler
2016-06-02 14:31:35 +01:00
Erik Johnston
356f13c069 Disable INCLUDE_ALL_UNREAD_NOTIFS 2016-06-02 14:07:38 +01:00
Erik Johnston
02ac463dbf Merge pull request #800 from matrix-org/erikj/sync_refactor
Refactor SyncHandler
2016-06-02 14:02:13 +01:00
Matthew Hodgson
c5af1b6b00 Merge pull request #814 from matrix-org/matthew/3pid_invite_auth
special case m.room.third_party_invite event auth to match invites,
2016-06-02 13:47:40 +01:00
David Baker
3a3fb2f6f9 Merge branch 'dbkr/split_out_auth_handler' into dbkr/email_unsubscribe 2016-06-02 13:35:25 +01:00
David Baker
4a10510cd5 Split out the auth handler 2016-06-02 13:31:45 +01:00
Matthew Hodgson
f84b89f0c6 if an email pusher specifies a brand param, use it 2016-06-02 13:29:48 +01:00
David Baker
a15ad60849 Email unsubscribing that may, in theory, work
Were it not for the fact that you can't use the base handler in the pusher because it pulls in the world. Committing while I fix that on a different branch.
2016-06-02 11:44:15 +01:00
David Baker
07233a1ec8 Merge pull request #815 from matrix-org/dbkr/email_greeting_not_none
Use user_id in email greeting if display name is null
2016-06-02 09:52:19 +01:00
David Baker
e793866398 Use user_id in email greeting if display name is null 2016-06-02 09:41:13 +01:00
Matthew Hodgson
aaa70e26a2 special case m.room.third_party_invite event auth to match invites, otherwise they get out of sync and you get https://github.com/vector-im/vector-web/issues/1208 2016-06-01 22:13:47 +01:00
Erik Johnston
a04a2d043c Merge pull request #807 from matrix-org/erikj/push_rules_cache
Ensure we always return boolean in push rules
2016-06-01 18:07:48 +01:00
Erik Johnston
0f06b496d1 Merge pull request #806 from matrix-org/erikj/hash_cache
Cache get_event_reference_hashes
2016-06-01 18:07:42 +01:00
Erik Johnston
83b70c9f63 Merge pull request #813 from matrix-org/dbkr/fix_room_list_spidering
Fix room list spidering
2016-06-01 18:02:27 +01:00
David Baker
e0deeff23e Fix room list spidering 2016-06-01 17:58:58 +01:00
David Baker
991af8b0d6 WIP on unsubscribing email notifs without logging in 2016-06-01 17:40:52 +01:00
David Baker
00c487a8db Merge pull request #808 from matrix-org/dbkr/room_list_spider
Add secondary_directory_servers option to fetch room list from other servers
2016-06-01 15:32:52 +01:00
Erik Johnston
c8285564a3 Use state to calculate get_users_in_room 2016-06-01 15:25:25 +01:00
David Baker
1db79d6192 Merge pull request #810 from matrix-org/dbkr/limit_email_notifs
Limit number of notifications in an email notification
2016-06-01 12:49:22 +01:00
David Baker
d60eed0710 Limit number of notifications in an email notification 2016-06-01 11:45:43 +01:00
David Baker
195254cae8 Inject fake room list handler in tests
Otherwise it tries to start the remote public room list updating looping call, which breaks.
2016-06-01 11:14:16 +01:00
Erik Johnston
43db0d9f6a Add get_users_with_read_receipts_in_room cache 2016-06-01 10:54:32 +01:00
David Baker
8e539f13c0 Merge remote-tracking branch 'origin/develop' into dbkr/room_list_spider 2016-06-01 09:54:36 +01:00
David Baker
6ecb2ca4ec pep8 2016-06-01 09:48:55 +01:00
Matthew Hodgson
58ee43d020 handle emotes & notices correctly in email notifs 2016-05-31 20:28:42 +01:00
David Baker
2a449fec4d Add cache to remote room lists
Poll for updates from remote servers, waiting for the poll if there's no cache entry.
2016-05-31 18:27:23 +01:00
David Baker
6ca4d3ae9a Add vector.im to default secondary_directory_servers and add comment explaining it's not a permanent solution 2016-05-31 17:24:50 +01:00
Erik Johnston
dea9f20f8c Force boolean 2016-05-31 17:24:30 +01:00
David Baker
963e3ed282 Apparently I am not permitted to have two blank lines here 2016-05-31 17:22:53 +01:00
David Baker
d240796ded Basic, un-cached support for secondary_directory_servers 2016-05-31 17:20:07 +01:00
Mark Haines
c8c5bf950a Fix synapse/storage/schema/delta/30/as_users.py 2016-05-31 17:10:40 +01:00
Erik Johnston
c9ca285d33 Merge pull request #805 from matrix-org/erikj/push_rules_cache
Fix GET /push_rules
2016-05-31 16:42:21 +01:00
Erik Johnston
1d4ee854e2 Fix typo 2016-05-31 15:45:53 +01:00
Erik Johnston
cca0093fa9 Change fix 2016-05-31 15:44:08 +01:00
Erik Johnston
4efa389299 Fix GET /push_rules 2016-05-31 15:37:53 +01:00
Erik Johnston
aefd2d1cbc Cache get_event_reference_hashes 2016-05-31 15:32:32 +01:00
Erik Johnston
10de8c2631 Merge pull request #804 from matrix-org/erikj/push_rules_cache
Add caches to bulk_get_push_rules*
2016-05-31 15:04:40 +01:00
Mark Haines
014e0799f9 Merge pull request #803 from matrix-org/markjh/liberate_appservice_handler
Move the AS handler out of the Handlers object.
2016-05-31 14:31:19 +01:00
Mark Haines
c626fc576a Move the AS handler out of the Handlers object.
Access it directly from the homeserver itself. It already wasn't
inheriting from BaseHandler storing it on the Handlers object was
already somewhat dubious.
2016-05-31 13:53:48 +01:00
Erik Johnston
e5b0bbcd33 Add caches to bulk_get_push_rules* 2016-05-31 13:46:58 +01:00
David Baker
70ecb415f5 Fix c+p fail 2016-05-31 12:00:54 +01:00
David Baker
e1625d62a8 Add federation room list servlet 2016-05-31 11:55:57 +01:00
David Baker
163e48c0e3 Merge pull request #802 from matrix-org/dbkr/split_room_list_handler
Split out the room list handler
2016-05-31 11:32:44 +01:00
David Baker
887c6e6f05 Split out the room list handler
So I can use it from federation bits without pulling in all the handlers.
2016-05-31 11:05:16 +01:00
Matthew Hodgson
42a9ea37e4 Merge pull request #801 from ruma/readme-history-storage
Alter phrasing to clarify where info is stored.
2016-05-29 10:37:04 +01:00
Jimmy Cuadra
8b5dbee47e Alter phrasing to clarify where info is stored.
A user on #matrix:matrix.org was confused by the phrasing of the first
sentence in the paragraph and couldn't tell whether it was saying that
the homeserver stored the data or the clients did. This change splits it
into two sentences to make the subject of each sentence clear.
2016-05-29 02:31:56 -07:00
Erik Johnston
85b992f621 Fix to allow start with postgres 2016-05-27 10:44:44 +01:00
Erik Johnston
cc84f7cb8e Send down correct error response if user not found 2016-05-27 10:35:15 +01:00
David Baker
209ba4d024 Merge pull request #795 from matrix-org/dbkr/delete_push_actions_after_a_month
Only delete push actions after 30 days
2016-05-24 16:22:13 +01:00
David Baker
b007ee4606 Check for presence of 'avatar_url' key 2016-05-24 15:12:05 +01:00
Matrix
95b86e6dad tweak mail notifs 2016-05-24 14:52:29 +01:00
David Baker
cbf8d146ac Merge pull request #799 from matrix-org/matthew/quieter-email-notifs
Tune email notifs to make them quieter:
2016-05-24 14:30:51 +01:00
Erik Johnston
faad233ea6 Change short circuit path 2016-05-24 14:27:19 +01:00
Erik Johnston
6900303997 Don't send down all ephemeral events 2016-05-24 11:44:55 +01:00
David Baker
37b7e84620 Include the ts the notif was received at 2016-05-24 11:33:32 +01:00
Erik Johnston
1c5ed2a19b Only work out newly_joined_users for incremental sync 2016-05-24 11:21:34 +01:00
Erik Johnston
b08ad0389e Only include non-offline presence in initial sync 2016-05-24 11:15:05 +01:00
Erik Johnston
be2c677386 Spell builder correctly 2016-05-24 10:53:03 +01:00
Erik Johnston
79bea8ab9a Inline function. Make load_filtered_recents private 2016-05-24 10:22:24 +01:00
Erik Johnston
84f94e4cbb Add comments 2016-05-24 10:14:53 +01:00
Erik Johnston
d16cc52b5d Merge pull request #798 from negzi/bugfix_create_user_feature
Fix set profile error with Requester.
2016-05-24 10:13:53 +01:00
Erik Johnston
137e6a4557 Shuffle things around 2016-05-24 09:50:55 +01:00
Matthew Hodgson
680f1d9387 catch thinko in presentable names 2016-05-23 22:55:11 +01:00
Matthew Hodgson
cb8a321bdd fix NPE in room ordering 2016-05-23 22:54:56 +01:00
Matthew Hodgson
d437882581 fix debug text 2016-05-23 22:54:32 +01:00
Matthew Hodgson
88ea5ab2c3 consistency is the better part of valour 2016-05-23 19:33:45 +01:00
Matthew Hodgson
989bdc9e56 Tune email notifs to make them quieter:
* After initial 10 minute window, only alert every 24h for room notifs
* Reset room state after 6h of idleness
* Synchronise throttles for messages sent in the same notif, so the 24 hourly notifs 'line up'
* Fix the email subjects to say what triggered the notification
* Order the rooms in reverse activity order in the email, so the 'reason' room should always come first
2016-05-23 19:24:11 +01:00
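As a deliberately rough model of the first three rules (the constants come from the commit message; everything else is illustrative):

```
TEN_MINUTES = 10 * 60
SIX_HOURS = 6 * 60 * 60
ONE_DAY = 24 * 60 * 60

def should_email(now, window_started, last_email, last_activity):
    # Rule 2: after long idleness, reset and start a fresh initial window.
    if now - last_activity > SIX_HOURS:
        window_started = now
    # Rule 1 (first half): inside the initial window, mail freely.
    if now - window_started < TEN_MINUTES:
        return True
    # Rule 1 (second half): afterwards, at most one email per day.
    return now - last_email >= ONE_DAY
```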
Negi Fazeli
6fe04ffef2 Fix set profile error with Requester.
Replace flush_user with deleting the access token, due to the function's removal.
Add a new test case for when the user is already registered.
2016-05-23 19:50:28 +02:00
David Baker
b791a530da Actually make the 'read' flag correct 2016-05-23 18:48:02 +01:00
David Baker
a24bc5b2dc Add GET /notifications API 2016-05-23 18:33:51 +01:00
Erik Johnston
c0c79ef444 Add back concurrently_execute 2016-05-23 18:21:27 +01:00
Erik Johnston
b5605dfecc Refactor SyncHandler 2016-05-23 18:08:18 +01:00
David Baker
31b5395ab6 Remove debug logging 2016-05-23 16:32:01 +01:00
Richard van der Hoff
09804c9862 Fix link to A-S spec 2016-05-23 16:29:38 +01:00
David Baker
c2da3406fc Oops, missing comma 2016-05-20 18:03:31 +01:00
David Baker
ccffb0965d Remove stale line 2016-05-20 17:59:10 +01:00
David Baker
18d68bfee4 Handle empty events table 2016-05-20 17:58:09 +01:00
David Baker
d4503e25ed Make deleting push actions more efficient
There's no index on received_ts, so we manually binary search using the stream_ordering index, and only update it once an hour.
2016-05-20 17:56:10 +01:00
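A self-contained sketch of the binary search; it leans on the assumption that received_ts grows roughly monotonically with stream_ordering, which is what makes the trick valid:

```
import sqlite3

def max_stream_ordering_before(conn, cutoff_ts):
    # Find the largest stream_ordering whose newest event is older than
    # cutoff_ts, probing only the indexed stream_ordering column.
    hi = conn.execute(
        "SELECT COALESCE(MAX(stream_ordering), 0) FROM events"
    ).fetchone()[0]
    lo, best = 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        row = conn.execute(
            "SELECT received_ts FROM events WHERE stream_ordering <= ? "
            "ORDER BY stream_ordering DESC LIMIT 1",
            (mid,),
        ).fetchone()
        if row is not None and row[0] < cutoff_ts:
            best, lo = mid, mid + 1  # everything at or below mid is old enough
        else:
            hi = mid - 1
    return best

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (stream_ordering INTEGER, received_ts BIGINT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i, i * 1000) for i in range(1, 11)])
print(max_stream_ordering_before(conn, 5500))  # -> 5
```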
David Baker
149fa411e2 Only delete push actions after 30 days 2016-05-20 15:25:12 +01:00
Kegsay
60ff2e7984 Merge pull request #794 from matrix-org/kegan/join-with-server-name
Allow clients to specify a server_name to avoid 'No known servers'
2016-05-19 14:13:11 +01:00
Kegan Dougal
332d7e9b97 Allow clients to specify a server_name to avoid 'No known servers'
Multiple server_names are supported via ?server_name=foo&server_name=bar
2016-05-19 13:50:52 +01:00
Matthew Hodgson
6fb51eaf7b Merge pull request #793 from matrix-org/matthew/one-push-badge-per-convo
increment badge count per missed convo, not per msg
2016-05-18 13:56:38 +01:00
Matthew Hodgson
e837df6adb increment badge count per missed convo, not per msg 2016-05-18 11:53:25 +01:00
Erik Johnston
42368ea8db Add desc to get_presence_for_users 2016-05-18 11:38:10 +01:00
Mark Haines
ee660c6b9b Merge pull request #792 from matrix-org/markjh/liberate_typing_handler
Move typing handler out of the Handlers object
2016-05-17 16:06:17 +01:00
Mark Haines
0cb441fedd Move typing handler out of the Handlers object 2016-05-17 15:58:46 +01:00
Mark Haines
5adf627551 Merge pull request #791 from matrix-org/markjh/app_service_config
Move the functions for parsing app service config
2016-05-17 11:40:49 +01:00
Mark Haines
6a30a0bfd3 Move the functions for parsing app service config 2016-05-17 11:28:58 +01:00
Mark Haines
c4c98ce8da Merge pull request #790 from matrix-org/markjh/liberate_sync_handler
Move SyncHandler out of the Handlers object
2016-05-17 10:54:39 +01:00
Mark Haines
523d5bcd0b Merge remote-tracking branch 'origin/develop' into markjh/liberate_sync_handler 2016-05-17 10:43:58 +01:00
Matthew Hodgson
43e1e0489c Merge pull request #786 from matrix-org/matthew/email_notifs_tuning
tune email notifs, fix CSS a bit, and add debugging details
2016-05-17 10:43:24 +01:00
Mark Haines
1ed33784a6 Merge pull request #789 from matrix-org/markjh/member_cleanup
Cleanup room member handler
2016-05-17 10:43:19 +01:00
Mark Haines
526bf8126f Remove unused get_joined_rooms_for_user 2016-05-17 10:20:51 +01:00
Mark Haines
425e6b4983 Merge branch 'develop' into markjh/member_cleanup 2016-05-17 10:13:16 +01:00
Mark Haines
b153f5b150 Merge pull request #787 from matrix-org/markjh/liberate_presence_handler
Move the presence handler out of the Handlers object
2016-05-17 10:09:43 +01:00
Mark Haines
3fd8a07fca Merge pull request #788 from matrix-org/markjh/domian
Spell "domain" correctly
2016-05-17 10:09:34 +01:00
Mark Haines
f68eea808a Move SyncHandler out of the Handlers object 2016-05-16 20:19:26 +01:00
Mark Haines
53e171f345 Merge branch 'markjh/liberate_presence_handler' into markjh/liberate_sync_handler 2016-05-16 20:08:32 +01:00
Mark Haines
80cb9becd8 Remove get_joined_rooms_for_user from RoomMemberHandler 2016-05-16 20:06:55 +01:00
Mark Haines
816df9f267 get_room_members is unused now 2016-05-16 19:51:43 +01:00
Mark Haines
821306120a Replaces calls to fetch_room_distributions_into with get_joined_hosts_for_room 2016-05-16 19:48:07 +01:00
Mark Haines
1a3a2002ff Spell "domain" correctly
s/domian/domain/g
2016-05-16 19:17:23 +01:00
Mark Haines
e168abbcff Don't inherit PresenceHandler from BaseHandler, remove references to self.hs from presence handler 2016-05-16 19:08:40 +01:00
Matthew Hodgson
e501e9ecb2 tweak text 2016-05-16 19:02:22 +01:00
Matthew Hodgson
cbd2adc95e tune email notifs, fix CSS a bit, and add debugging details 2016-05-16 18:58:38 +01:00
Mark Haines
3b86ecfa79 Move the presence handler out of the Handlers object 2016-05-16 18:56:37 +01:00
David Baker
647781ca56 Fix emailpusher import
Try importing at the root level rather than conditionally importing, as per comment
2016-05-16 18:41:32 +01:00
Erik Johnston
c39f305067 os.environ requires a string 2016-05-16 17:21:30 +01:00
Erik Johnston
678c8a7f1e Merge pull request #785 from matrix-org/erikj/cache_factor_synctly
Make synctl read a cache factor from config file
2016-05-16 17:07:35 +01:00
Erik Johnston
c5c5a7403b Make synctl read a cache factor from config file 2016-05-16 17:01:57 +01:00
Matthew Hodgson
2d98c960ec Merge pull request #760 from matrix-org/matthew/preview_url_ip_whitelist
add a url_preview_ip_range_whitelist config param
2016-05-16 13:13:26 +01:00
Mark Haines
eb79110beb Clean up the blacklist/whitelist handling.
Always set the config key with an empty list, even if a list isn't specified.
This means that the codepaths are the same for both the empty list and
for a missing key. Since the behaviour is the same for both cases this
makes the code somewhat easier to reason about.
2016-05-16 13:03:59 +01:00
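In other words, normalise at parse time so every consumer sees one shape. A one-line illustration of the idea, using the URL-preview whitelist key from this PR and a plain dict standing in for the parsed config:

```python
config = {}  # parsed homeserver config, as a plain dict for illustration

# A missing key and an explicitly-empty key both yield [], so downstream
# code never needs a "was it configured at all?" branch.
ip_range_whitelist = config.get("url_preview_ip_range_whitelist", [])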
Mark Haines
dd95eb4cb5 Merge branch 'develop' into matthew/preview_url_ip_whitelist 2016-05-16 12:59:41 +01:00
Erik Johnston
60d53f9e95 Count number of GC collects 2016-05-16 09:34:42 +01:00
Matrix
c0112fabc2 fix logo 2016-05-13 18:07:21 +01:00
Matthew Hodgson
0e8364c7f5 rogue img 2016-05-13 17:53:31 +01:00
Matthew Hodgson
782471b7e1 fix matrix.to URLs 2016-05-13 17:50:16 +01:00
Mark Haines
0466454b00 Assert that stream replicated stream positions are ints 2016-05-13 17:33:44 +01:00
Mark Haines
077468f6a9 Merge pull request #780 from matrix-org/dbkr/email_notifs_on_pusher
Make email notifs work on the pusher synapse
2016-05-13 17:27:46 +01:00
Mark Haines
b3f29dc1e5 Manually expire broken caches like the who_forgot_in_room 2016-05-13 17:16:27 +01:00
Mark Haines
f03ddc98ec Use the SlavedAccountDataStore 2016-05-13 17:01:28 +01:00
Mark Haines
1f71f386f6 Merge branch 'develop' into dbkr/email_notifs_on_pusher 2016-05-13 16:59:56 +01:00
Mark Haines
206eb9fd94 Shift some of the state_group methods into the SlavedEventStore 2016-05-13 16:58:14 +01:00
Erik Johnston
7d6e89ed22 Add a comment 2016-05-13 16:31:08 +01:00
Mark Haines
21018c2c13 Merge pull request #783 from matrix-org/markjh/slave_account_data
Add a slaved datastore for account data
2016-05-13 15:56:04 +01:00
Mark Haines
a8affd606e Merge pull request #784 from matrix-org/markjh/receipts_fix
Allow receipts for events we haven't seen in the db
2016-05-13 15:55:26 +01:00
Mark Haines
b7381d5338 Allow receipts for events we haven't seen in the db 2016-05-13 15:46:41 +01:00
Mark Haines
3abab26458 Add a slaved datastore for account data 2016-05-13 15:34:06 +01:00
Erik Johnston
99b5a2e560 Merge pull request #741 from negzi/create_user_with_expiry
Create user with expiry
2016-05-13 14:46:53 +01:00
Erik Johnston
ba5c616ff4 Merge pull request #778 from matrix-org/erikj/add_pusher
Fixup add_pusher
2016-05-13 14:43:23 +01:00
Erik Johnston
0c11c1be88 Spelling 2016-05-13 14:42:25 +01:00
Erik Johnston
e00e8f2166 Merge pull request #769 from matrix-org/erikj/push_actions_delete
Delete old pushers
2016-05-13 14:41:36 +01:00
Erik Johnston
fd8e921b6e Merge pull request #779 from matrix-org/erikj/receipts
Use tree cache for get_linearized_receipts_for_room
2016-05-13 14:41:21 +01:00
Erik Johnston
024cda9a97 Merge pull request #782 from matrix-org/erikj/remove_indices
Remove unused indices
2016-05-13 14:40:45 +01:00
Erik Johnston
c9aff0736c Remove topics table 2016-05-13 14:40:38 +01:00
Negi Fazeli
40aa6e8349 Create user with expiry
- Add unittests for client, api and handler

Signed-off-by: Negar Fazeli <negar.fazeli@ericsson.com>
2016-05-13 15:34:15 +02:00
Mark Haines
9295fa30a8 Annotate the removed indicies with why they were removed. 2016-05-13 14:16:57 +01:00
Erik Johnston
5e50058473 Remove unused indices
This includes removing both unused indices and indices that are subsets
of other indices.
2016-05-13 13:28:07 +01:00
Mark Haines
cdda850ce1 Merge pull request #781 from matrix-org/markjh/replication_problems
Fix a bug in replication that was causing the pusher to tight loop
2016-05-13 12:05:19 +01:00
Mark Haines
0e792e7903 Log the stream IDs in an order that makes sense 2016-05-13 11:54:44 +01:00
Mark Haines
3547e66bc6 Make sure we advance our stream position 2016-05-13 11:53:00 +01:00
Erik Johnston
6da7f39d95 Use tree cache for get_linearized_receipts_for_room 2016-05-13 11:41:23 +01:00
David Baker
b5e646a18c Make email notifs work on the pusher synapse
Plus general bugfix to email notif code
2016-05-13 11:36:50 +01:00
Mark Haines
048b3ece36 Merge pull request #777 from matrix-org/markjh/move_filter_for_client
move filter_events_for_client out of base handler
2016-05-13 11:30:22 +01:00
Erik Johnston
13d37c3c56 Fixup add_pusher 2016-05-13 11:25:02 +01:00
Mark Haines
a458a40337 missed a spot 2016-05-12 18:19:58 +01:00
Mark Haines
7e23476814 move filter_events_for_client out of base handler 2016-05-11 13:42:37 +01:00
Mark Haines
260b498ee5 Merge pull request #776 from matrix-org/markjh/lazy_signing_key
Shuffle when we get the signing_key attribute.
2016-05-11 13:31:29 +02:00
Mark Haines
1620578b13 Shuffle when we get the signing_key attribute.
Wait until we sign a message to get the signing key from the homeserver
config. This means that the message handler can be created without
having a signing key in the config which means that separate processes
like the pusher that don't send messages and don't need to sign them can
still access the handlers.
2016-05-11 12:20:57 +01:00
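The pattern is an ordinary lazy property; a sketch with hypothetical names (Synapse's handler and config objects differ):

```python
class MessageHandler:
    def __init__(self, hs):
        self.hs = hs
        # Deliberately NOT reading hs.config.signing_key here: worker
        # processes (e.g. the pusher) construct handlers but never sign.

    @property
    def signing_key(self):
        # Only consulted when a message is actually signed, so a missing
        # signing key only fails processes that genuinely need one.
        return self.hs.config.signing_key[0]
```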
Erik Johnston
108434e53d Merge pull request #775 from matrix-org/erikj/password_hash
Correctly handle NULL password hashes from the database
2016-05-11 12:18:13 +01:00
Erik Johnston
1400bb1663 Correctly handle NULL password hashes from the database 2016-05-11 12:06:02 +01:00
Mark Haines
a284a32d78 Merge pull request #774 from matrix-org/markjh/shuffle_base_handler
Shuffle some methods out of base handler
2016-05-11 13:00:28 +02:00
Mark Haines
458a435114 Fix typo 2016-05-11 10:35:33 +01:00
Mark Haines
30057b1e15 Move _create_new_client_event and handle_new_client_event out of base handler 2016-05-11 09:09:20 +01:00
David Baker
ae1af262f6 Pass through _get_state_group_for_events 2016-05-10 19:18:03 +02:00
David Baker
90afc07f39 StateStore, not EventsStore 2016-05-10 19:10:46 +02:00
David Baker
89b5ef7c4b Cached functions must be accessed through the dict 2016-05-10 19:05:22 +02:00
David Baker
35b6e6d2a8 Pass though _get_state_group_for_events 2016-05-10 18:56:40 +02:00
David Baker
3367e65476 Pass through get_state_groups 2016-05-10 18:53:15 +02:00
David Baker
0c4ccdcb83 Also pass through get_profile_displayname 2016-05-10 18:51:14 +02:00
David Baker
f28643cea9 Uncommit accidentally commited edit to cipher list 2016-05-10 18:44:32 +02:00
David Baker
5f46be19a7 Pass through get_events to pusher too 2016-05-10 18:43:40 +02:00
David Baker
d46b18a00f Pass through _get_event_txn 2016-05-10 18:27:06 +02:00
Matrix
3b1930e8ec unbreak schema 2016-05-10 16:42:37 +01:00
Matthew Hodgson
fe97b81c09 Merge pull request #759 from matrix-org/dbkr/email_notifs
Send email notifications for missed messages
2016-05-10 16:30:05 +02:00
David Baker
997db04648 Merge remote-tracking branch 'origin/develop' into dbkr/email_notifs 2016-05-10 14:40:19 +02:00
David Baker
c00b484eff More consistent config naming 2016-05-10 14:39:16 +02:00
David Baker
94040b0798 Add config option to not send email notifs for new users 2016-05-10 14:34:53 +02:00
Erik Johnston
e581754f09 Merge pull request #763 from matrix-org/erikj/ignore_user
Implement basic ignore user API
2016-05-10 13:27:41 +01:00
David Baker
e04b1d6b0a Make pep8 happy 2016-05-10 14:23:16 +02:00
Matthew Hodgson
5599608887 Switch from CSS to Table layout for HTML mails so they work in Outlook aka Word
Remove templates-vector and theme templates with variables instead
Switch to matrix.to URLs by default for links
2016-05-10 00:14:48 +02:00
Matthew Hodgson
6b45ffd2d1 Switch from CSS to Table layout for HTML mails so they work in Outlook aka Word
Remove templates-vector and theme templates with variables instead
Switch to matrix.to URLs by default for links
2016-05-09 20:16:56 +02:00
Erik Johnston
c9eb6dfc1b Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ignore_user 2016-05-09 13:21:06 +01:00
Erik Johnston
3f84da139c Merge pull request #773 from matrix-org/erikj/get_domian_from_id
Add and use get_domain_from_id
2016-05-09 13:21:00 +01:00
Erik Johnston
def64d6ef3 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ignore_user 2016-05-09 13:05:09 +01:00
Erik Johnston
100e2c42f6 Merge pull request #764 from matrix-org/erikj/replication_logging
Add some log information at returned replication streams
2016-05-09 11:18:00 +01:00
Erik Johnston
8715731559 Merge pull request #772 from matrix-org/erikj/get_user_cache
Add cache to get_user_by_id
2016-05-09 11:12:11 +01:00
Erik Johnston
34b3af3363 Merge pull request #770 from matrix-org/erikj/transaction_reactor
Run transaction queue on reactor
2016-05-09 11:12:06 +01:00
Erik Johnston
7354667bd3 Merge pull request #768 from matrix-org/erikj/queue_evens_persist
Queue events by room for persistence
2016-05-09 11:11:53 +01:00
Erik Johnston
08dfa8eee2 Add and use get_domian_from_id 2016-05-09 10:36:03 +01:00
Erik Johnston
f904b1c60c Merge pull request #766 from sbts/patch-1
Fix Typo in README.rst s/Halp/Help/
2016-05-09 10:21:14 +01:00
Erik Johnston
1f1dee94f6 Manually run GC on reactor tick.
This also adds a metric for amount of time spent in GC.
2016-05-09 10:13:25 +01:00
Erik Johnston
f6ebaf4a32 Run transaction queue on reactor
This ensures that any CPU work that happens doesn't block message
sending.
2016-05-09 10:10:06 +01:00
Erik Johnston
4ea762c1a2 Add cache to get_user_by_id 2016-05-09 10:08:21 +01:00
Erik Johnston
012cb5416c Merge branch 'develop' of github.com:matrix-org/synapse into erikj/push_actions_delete 2016-05-06 15:59:20 +01:00
Erik Johnston
fcb2c3f0db Remove unused import 2016-05-06 15:47:40 +01:00
Erik Johnston
fd85b167ec Pull loop one level up 2016-05-06 15:38:42 +01:00
Erik Johnston
b6e0be701e Queue events for persistence 2016-05-06 14:31:44 +01:00
Erik Johnston
96d9d5d388 Merge pull request #767 from matrix-org/erikj/transaction_txn
Reduce database inserts when sending transactions
2016-05-06 13:20:43 +01:00
Erik Johnston
d13459636f Pull prev txn from in memory 2016-05-06 11:30:55 +01:00
Erik Johnston
1d275dba69 Don't needlessly enter transaction 2016-05-06 11:25:58 +01:00
Erik Johnston
56b5e83e36 Reduce database inserts when sending transactions 2016-05-06 11:20:18 +01:00
Mark Haines
1f590f3e9a Merge pull request #765 from matrix-org/markjh/open_id
Add an openidish mechanism for proving that you own a given user_id
2016-05-05 19:26:30 +01:00
David
1b45e6a9bc Fix Typo in README.rst s/Halp/Help/ 2016-05-06 02:07:59 +08:00
Matthew Hodgson
53ca739f1f better mail subject lines 2016-05-05 15:55:44 +01:00
Matthew Hodgson
81c2176cba fix layout; handle app naming in synapse, not jinja 2016-05-05 15:54:29 +01:00
Mark Haines
573ef3f1c9 Rename openid/token to openid/request_token 2016-05-05 15:15:00 +01:00
Erik Johnston
8940281d1b Don't warn 2016-05-05 15:10:03 +01:00
Mark Haines
9c272da05f Add an openidish mechanism for proving to third parties that you own a given user_id 2016-05-05 13:42:44 +01:00
Matthew Hodgson
a5974f89d6 fix app branding 2016-05-05 11:25:56 +01:00
Erik Johnston
5d8a93a10e Add some log information at returned replication streams 2016-05-05 10:29:21 +01:00
Erik Johnston
1f0f5ffa1e Add bulk fetch storage API 2016-05-05 10:03:15 +01:00
Matthew Hodgson
634efb65f1 pep8 2016-05-05 02:10:57 +01:00
Matthew Hodgson
c64d5fc66c fix room.txt 2016-05-05 02:00:33 +01:00
Matthew Hodgson
3c39fa8902 First cut at Vector-branded mail templates 2016-05-05 02:00:33 +01:00
Matthew Hodgson
ce81ccb063 handle fragments correctly on mxc URLs.
switch to vector.im permalinks as matrix.to isn't ready yet.
merge overlapping notifications together.
give one message of context after a notification (in the unlikely event it exists, but it's possible thanks to throttling).
include name of app in mail templates
2016-05-05 02:00:33 +01:00
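Of these, "merge overlapping notifications together" is a classic interval merge; a generic sketch over (start, end) event spans, not the template code itself:

```python
def merge_notif_spans(spans):
    """Collapse overlapping (start, end) spans so adjacent notifications
    render as one block in the email."""
    merged = []
    for start, end in sorted(spans):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous span: extend it instead of appending.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

assert merge_notif_spans([(0, 5), (3, 9), (12, 14)]) == [(0, 9), (12, 14)]
```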
Matthew Hodgson
1cf5c379cb spell out emailpusher full path 2016-05-05 01:59:39 +01:00
Erik Johnston
fee1118a20 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ignore_user 2016-05-04 19:08:27 +01:00
Erik Johnston
97b9141245 Merge pull request #762 from matrix-org/erikj/report_event
Add /report endpoint
2016-05-04 19:05:49 +01:00
Erik Johnston
fcd1eb642d Add primary key 2016-05-04 16:51:51 +01:00
Erik Johnston
8e6a163f27 Add timestamp and auto incrementing ID 2016-05-04 15:19:12 +01:00
David Baker
39d0a99972 Include no context
until we can de-dup between the context and other notifs
2016-05-04 14:52:49 +01:00
David Baker
9ef05a12c3 Add date header & message id 2016-05-04 14:52:10 +01:00
David Baker
80be396464 Correct SQL statement for postgres
In standard SQL, JOIN binds tighter than the comma, so we were joining on the wrong table. Postgres follows the standard (apparently).
2016-05-04 13:19:59 +01:00
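The precedence pitfall, schematically (table and column names here are invented for illustration):

```python
# In "FROM a, b JOIN c ON ..." the JOIN binds to b alone, not to the whole
# comma list, so the ON clause may resolve against the wrong table.
AMBIGUOUS = "SELECT ... FROM events, receipts JOIN users ON users.name = receipts.user_id"

# Explicit joins leave no room for precedence surprises.
EXPLICIT = (
    "SELECT ... "
    "FROM events "
    "JOIN receipts ON receipts.event_id = events.event_id "
    "JOIN users ON users.name = receipts.user_id"
)
```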
Erik Johnston
5650e38e7d Move event_id to path 2016-05-04 13:19:39 +01:00
Matthew Hodgson
8a04412fa1 starting point for doc on how log contexts are supposed to work 2016-05-04 12:19:04 +01:00
David Baker
8cc82aad87 Add db functions used for email to the pusher app 2016-05-04 11:47:59 +01:00
David Baker
de22001ab5 pep8 2016-05-04 11:41:35 +01:00
Matthew Hodgson
f1026418ea copyright 2016-05-04 11:38:01 +01:00
Matthew Hodgson
17cbf773b9 fix assorted typos in default config 2016-05-04 11:38:01 +01:00
Erik Johnston
984d4a2c0f Add /report endpoint 2016-05-04 11:28:10 +01:00
David Baker
e6bffa4475 Unused import 2016-05-04 11:26:58 +01:00
David Baker
92f0f3d21d Catch all exceptions when creating a pusher 2016-05-04 11:24:07 +01:00
Erik Johnston
a438a6d2bc Implement basic ignore user 2016-05-04 10:16:46 +01:00
Erik Johnston
7ea3b4118d Merge pull request #757 from matrix-org/erikj/event_auth_typo
Fix typo in event_auth servlet path
2016-05-03 14:23:15 +01:00
Erik Johnston
183f23f10d Delete old pushers 2016-05-03 14:22:33 +01:00
Matthew Hodgson
792def4928 add a url_preview_ip_range_whitelist config param so we can whitelist the matrix.org IP space 2016-05-01 12:44:24 +01:00
David Baker
2df75de505 Merge remote-tracking branch 'origin/develop' into dbkr/email_notifs 2016-04-29 20:28:47 +01:00
David Baker
b084e4d963 Add constant for throttle multiplier 2016-04-29 20:14:55 +01:00
David Baker
35b7b8e4bc Remove unused function 2016-04-29 20:10:34 +01:00
David Baker
6b9b6a9169 Remove unused arg 2016-04-29 20:02:52 +01:00
David Baker
c7c75e87dc Docstring 2016-04-29 19:47:35 +01:00
David Baker
b0a1036d93 Use explicit join 2016-04-29 19:28:56 +01:00
David Baker
8f99cd5996 Oops, actually specify the user id 2016-04-29 19:27:03 +01:00
David Baker
60f44c098d Remove unnecessary if 2016-04-29 19:17:10 +01:00
David Baker
50ad8005e4 Put spaces at start of line 2016-04-29 19:16:15 +01:00
David Baker
83618d719a Try imports in config 2016-04-29 19:13:52 +01:00
David Baker
e7a76b5123 Use the constant 2016-04-29 19:10:45 +01:00
David Baker
29c8cf8db8 Avoid vars builtin 2016-04-29 19:09:28 +01:00
David Baker
d3da5294e8 Use named parameter format 2016-04-29 19:04:40 +01:00
David Baker
765f2b8446 Default enable email notifs to False 2016-04-29 14:46:18 +01:00
David Baker
ff5e5423e5 Include res in the manifest 2016-04-29 14:43:48 +01:00
David Baker
311b5ce051 pep8 2016-04-29 14:37:30 +01:00
David Baker
3facde2536 Remove rather pointless get function 2016-04-29 14:36:45 +01:00
David Baker
4364ea1272 Stop processing notifs once we've sent a mail 2016-04-29 14:31:27 +01:00
David Baker
4b0c3a3270 Correct public_baseurl default 2016-04-29 14:30:15 +01:00
David Baker
5048455965 Nicer get() shorthand 2016-04-29 14:27:40 +01:00
David Baker
6c8957be7f Remove redundant docstring 2016-04-29 14:25:28 +01:00
David Baker
18ce88bd2d Correct default template and add text template 2016-04-29 14:24:25 +01:00
David Baker
40d40e470d Send mail notifs with a plaintext part too 2016-04-29 13:56:21 +01:00
David Baker
56aae0eaf5 Merge pull request #758 from matrix-org/dbkr/fix_password_reset
Fix password reset
2016-04-29 12:45:39 +01:00
David Baker
dc2c527ce9 Fix password reset
Default requester to None, otherwise it isn't defined when resetting using email auth
2016-04-29 12:07:54 +01:00
Erik Johnston
62b51b8452 Fix typo in event_auth servlet path 2016-04-29 12:00:51 +01:00
David Baker
b2c04da8dc Add an email pusher for new users
If they registered with an email address and email notifs are enabled on the HS
2016-04-29 11:43:57 +01:00
David Baker
ec9cbe847d pep8 newline 2016-04-29 10:07:30 +01:00
David Baker
acded821c4 Merge remote-tracking branch 'origin/develop' into dbkr/email_notifs 2016-04-29 10:05:20 +01:00
David Baker
69e519052b Remove vector specific style 2016-04-28 17:34:32 +01:00
David Baker
46b5547a42 Some basic css to make it halfway legible 2016-04-28 17:29:39 +01:00
David Baker
36bb5c2383 Fix notification link 2016-04-28 17:28:48 +01:00
David Baker
e800ee2f63 May as well always include room link 2016-04-28 17:28:27 +01:00
David Baker
cc0874cf71 Put back real delay before mailing 2016-04-28 17:00:40 +01:00
David Baker
68f8fc2f14 Support file messages & fix plain text 2016-04-28 16:59:57 +01:00
David Baker
4845c7359d Support image notifs 2016-04-28 15:55:53 +01:00
Mark Haines
351b50a887 Fix more typos in per-request metrics 2016-04-28 15:29:46 +01:00
David Baker
60f86fc876 pep8 2016-04-28 15:16:30 +01:00
David Baker
937c407eef Only import email pusher if email notifs are on 2016-04-28 15:12:14 +01:00
Mark Haines
dcfc10b129 Fix typo in request metrics 2016-04-28 15:11:06 +01:00
Matthew Hodgson
aebd0c9717 fix typo 2016-04-28 15:09:25 +01:00
Mark Haines
1c188cf73c Merge pull request #756 from matrix-org/markjh/more_metrics
Report per request metrics for all of the things using request_handler
2016-04-28 14:59:45 +01:00
Mark Haines
1a12766e3b Add a comment explaining why automatic metric reporting is disabled for JsonResource 2016-04-28 12:31:26 +01:00
Mark Haines
6037349512 Check if report_metrics is True 2016-04-28 12:26:07 +01:00
David Baker
ebbabc4986 Handle room invites in email notifs 2016-04-28 11:49:36 +01:00
Mark Haines
8d7ad44331 Report per request metrics for all of the things using request_handler 2016-04-28 10:57:49 +01:00
David Baker
5367708236 Add the jinja template for individual notifs 2016-04-28 10:55:32 +01:00
David Baker
9dba1b668c Linkify plain text messages too 2016-04-28 10:55:08 +01:00
David Baker
424a7f48f8 Run filter_events_for_client
so we don't accidentally mail out events people shouldn't see
2016-04-27 17:50:49 +01:00
David Baker
4ed1e45869 Make html messages work 2016-04-27 17:18:51 +01:00
Mark Haines
21d188bf95 Merge pull request #755 from matrix-org/markjh/right_direction
Fix backfill replication to advance the stream correctly
2016-04-27 15:54:48 +01:00
Mark Haines
8a65666454 Fix backfill replication to advance the stream correctly 2016-04-27 15:38:43 +01:00
David Baker
8781083960 Better grammar for multiple messages in a room
Say who the messages are from if there's no room name, otherwise it's a bit nonsensical
2016-04-27 15:30:41 +01:00
David Baker
fa12209c1b Hopefully all remaining bits for email notifs
Add public-facing base URL to the server so synapse knows what URL to use when converting mxc to HTTP URLs for use in emails
2016-04-27 15:09:55 +01:00
Mark Haines
c7c03bf303 Merge pull request #754 from matrix-org/markjh/check_for_nop
Check that something has happened before running the selects
2016-04-27 13:36:47 +01:00
Mark Haines
871357d539 Check that something has happened before running the selects 2016-04-27 11:54:13 +01:00
Mark Haines
71df327190 Actually start the pusher daemon 2016-04-26 17:07:09 +01:00
Mark Haines
c9eab73f2a Fix typo in default pusher config 2016-04-26 17:06:18 +01:00
Mark Haines
47571d11db Merge pull request #753 from matrix-org/markjh/daemon_pusher
Optionally daemonize the pusher
2016-04-26 16:11:44 +01:00
Mark Haines
b80b93ea0f Add a log context to the daemonized pusher 2016-04-26 15:57:28 +01:00
Mark Haines
6df5a6a833 Optionally daemonize the pusher 2016-04-26 15:37:41 +01:00
Erik Johnston
5164ccc3e5 Bump changelog and version 2016-04-26 11:26:32 +01:00
Erik Johnston
3306cf45ca Merge pull request #750 from matrix-org/erikj/jwt_optional
Make pyjwt dependency optional
2016-04-26 11:07:22 +01:00
Mark Haines
c487c42492 Merge pull request #752 from matrix-org/markjh/more_updates
Add a couple of update methods to the PusherSlaveStore
2016-04-26 11:00:40 +01:00
Mark Haines
9c417c54d4 Add a couple of update methods to the PusherSlaveStore 2016-04-26 10:45:02 +01:00
Mark Haines
9843f2a657 Merge pull request #751 from matrix-org/markjh/pusher_metrics_manhole
Add a metrics listener and a ssh listener to the pusher
2016-04-26 10:35:51 +01:00
David Baker
7b4715bad7 More variable calculation for email notifs
Include name of the person we're sending to and add summary text at the top giving an overview of what's happened.
2016-04-25 18:27:04 +01:00
Mark Haines
f15e9e8de4 Remove the uncomments from the comments 2016-04-25 17:56:24 +01:00
Mark Haines
72e2fafa20 Add a metrics listener and a ssh listener to the pusher 2016-04-25 17:34:25 +01:00
Mark Haines
233bf78ab4 Merge pull request #749 from matrix-org/markjh/split_manhole
Split out setting up the manhole to a separate file
2016-04-25 15:23:53 +01:00
Mark Haines
f22f46f4f9 Move the listenTCP call outside the manhole function 2016-04-25 14:59:21 +01:00
David Baker
290f125a13 Typo 2016-04-25 14:42:59 +01:00
Erik Johnston
52ecbc2843 Make pyjwt dependency optional 2016-04-25 14:30:15 +01:00
David Baker
05e49ffbdf No we don't: it's just the display name 2016-04-22 18:44:17 +01:00
David Baker
bd0f9c2065 Actually do UTF8 correctly 2016-04-22 18:42:00 +01:00
David Baker
c5b3c6e101 Sort member events
So names of people in a room are given in order
2016-04-22 18:33:36 +01:00
David Baker
83bf65297a Mime part is binary so encode it first.
Doesn't get character encoding right yet, but makes it not error.
2016-04-22 18:31:47 +01:00
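With the stdlib, handing the charset to MIMEText lets it do the transfer encoding itself; a minimal sketch of that approach:

```python
from email.mime.text import MIMEText

# MIMEText performs the charset-aware encoding of the binary payload for
# us: pass a unicode body plus a charset rather than hand-encoding it.
part = MIMEText(u"Ещё одно сообщение", _subtype="html", _charset="utf-8")
print(part["Content-Type"])  # text/html; charset="utf-8"
```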
David Baker
e8701e64b9 Implement group-of-people names 2016-04-22 17:28:42 +01:00
David Baker
c553797c4f No inlineCallbacks necessary on this 2016-04-22 17:27:54 +01:00
Mark Haines
5905f36f05 Split out setting up the manhole to a separate file 2016-04-22 17:09:15 +01:00
Mark Haines
c3f8dbf6b5 Merge pull request #748 from matrix-org/markjh/split_out_site.py
Move SynapseSite to its own file
2016-04-22 17:08:21 +01:00
Mark Haines
62607d5452 Merge branch 'develop' into markjh/split_out_site.py
Conflicts:
	synapse/app/homeserver.py
2016-04-22 16:26:57 +01:00
Mark Haines
e57df8fa86 Merge pull request #747 from matrix-org/markjh/http_resource_tree
Split out create_resource_tree to a separate file
2016-04-22 16:24:55 +01:00
Mark Haines
e856036f4c Move SynapseSite to its own file 2016-04-22 16:09:55 +01:00
Mark Haines
9e7aa98c22 Split out create_resource_tree to a separate file 2016-04-22 15:40:51 +01:00
Mark Haines
2022ae0fb9 Merge pull request #746 from matrix-org/markjh/split_out_pusher
Optionally split out the pushers into a separate process
2016-04-22 11:34:08 +01:00
Erik Johnston
64ec3493c1 Merge pull request #745 from matrix-org/erikj/search-index
Optimise event_search in postgres
2016-04-22 11:23:49 +01:00
Erik Johnston
4063fe0283 Update port script 2016-04-22 10:35:53 +01:00
Erik Johnston
183cacac90 Simplify query and handle finishing correctly 2016-04-22 10:01:57 +01:00
David Baker
19d6b6cd7a Add WIP email template files 2016-04-21 19:22:53 +01:00
David Baker
c10ed26c30 Flesh out email templating
Mostly WIP porting the room name calculation logic from the web client so our room names in the email mirror the clients'.
2016-04-21 19:19:07 +01:00
Erik Johnston
ae571810f2 Order NULLs first 2016-04-21 18:14:18 +01:00
Erik Johnston
3ddbb1687c Fix query 2016-04-21 18:02:36 +01:00
Erik Johnston
8fae3d7b1e Use special UPDATE syntax 2016-04-21 18:01:49 +01:00
Erik Johnston
b57dcb4b51 Typo 2016-04-21 17:49:00 +01:00
Erik Johnston
26db18bc90 Need to do _background_update_progress_txn in actual transaction 2016-04-21 17:45:56 +01:00
Erik Johnston
b9675ef6e6 Merge pull request #687 from nikriek/jwt-fix
Fix issues with JWT login
2016-04-21 17:42:25 +01:00
Erik Johnston
e395eb1108 Update progress when creating index 2016-04-21 17:39:24 +01:00
Erik Johnston
3b0fa77f50 Fix SQL statement 2016-04-21 17:37:42 +01:00
Erik Johnston
129e403487 Create index must be on a conn 2016-04-21 17:35:51 +01:00
Mark Haines
a3ac837599 Optionally split out the pushers into a separate process 2016-04-21 17:22:37 +01:00
Mark Haines
78741cf025 Merge pull request #743 from matrix-org/markjh/slave_pushers
Replicate the pushers
2016-04-21 17:21:29 +01:00
Erik Johnston
51bb339ab2 Create index concurrently 2016-04-21 17:16:11 +01:00
Erik Johnston
b743c1237e Add missing run_upgrade 2016-04-21 17:12:04 +01:00
Mark Haines
31719ad124 Merge pull request #744 from matrix-org/markjh/replication_remove_pusher
Add a replication endpoint for deleting pushers
2016-04-21 17:10:49 +01:00
Niklas Riekenbrauck
565c2edb0a Fix issues with JWT login 2016-04-21 18:10:48 +02:00
Erik Johnston
c877f0f034 Optimise event_search in postgres 2016-04-21 16:56:14 +01:00
Mark Haines
b15266fe06 Merge pull request #742 from matrix-org/markjh/slave_event_push_actions
Replicate push actions
2016-04-21 16:39:04 +01:00
Mark Haines
cfe1ff4bdb Add a replication endpoint for deleting pushers 2016-04-21 16:33:05 +01:00
Mark Haines
d4823efad9 Replicate the pushers 2016-04-21 16:18:00 +01:00
Mark Haines
9f53491cab Merge branch 'develop' into markjh/slave_event_push_actions 2016-04-21 16:00:43 +01:00
Mark Haines
02a27a6c4f pip install new python dependencies in jenkins.sh 2016-04-21 16:00:28 +01:00
Mark Haines
a611c968cc Merge branch 'develop' into markjh/slave_event_push_actions 2016-04-21 15:37:01 +01:00
Erik Johnston
59698906eb Make jenkins install lxml 2016-04-21 15:36:13 +01:00
Mark Haines
c0d8e0eb63 Replicate push actions 2016-04-21 15:25:58 +01:00
David Baker
2ed0adb075 Generate mails from a template 2016-04-20 18:35:29 +01:00
Erik Johnston
68ebb81e86 Merge pull request #740 from matrix-org/erikj/state_cache
Always use state cache entry if it exists
2016-04-20 13:52:59 +01:00
David Baker
05adc6c2de more pep8 2016-04-20 13:02:45 +01:00
David Baker
f63bd4ff47 Send a rather basic email notif
Also pep8 fixes
2016-04-20 13:02:01 +01:00
Erik Johnston
5bbc321588 Always use state cache entry if it exists
Also check if the resolved state matches an existing state group.
2016-04-20 11:49:10 +01:00
Erik Johnston
4cf4320593 Add some logging to state resolve_events 2016-04-20 11:06:02 +01:00
Erik Johnston
eab47ea1e5 Merge pull request #739 from matrix-org/erikj/cache_get_state_groups_for_groups
Add cache to _get_state_groups_from_groups
2016-04-19 17:37:19 +01:00
Mark Haines
f52dd35ac3 Merge pull request #738 from matrix-org/markjh/slaved_receipts
Add a slaved receipts store
2016-04-19 17:31:59 +01:00
Erik Johnston
61c7edfd34 Add cache to _get_state_groups_from_groups 2016-04-19 17:22:03 +01:00
Mark Haines
5bbd424ee0 Add a slaved receipts store 2016-04-19 17:14:08 +01:00
Erik Johnston
6ac40f7b65 Merge pull request #737 from matrix-org/erikj/spider_ssl_factory
Use tls_server_context_factory for SpiderEndpoint
2016-04-19 16:22:05 +01:00
Erik Johnston
f505575f69 Make InsecureInterceptableContextFactory work with SpiderEndpoint 2016-04-19 16:08:14 +01:00
Mark Haines
4084c58aa1 Merge pull request #736 from matrix-org/markjh/slave_invited_rooms_for_user
Replicate get_invited_rooms_for_user
2016-04-19 15:46:00 +01:00
Mark Haines
e99365f601 Replicate get_invited_rooms_for_user 2016-04-19 15:22:14 +01:00
David Baker
e2a01455af Add single instance & logging stuff
Copy the stuff over from the HTTP pusher that prevents multiple instances of the process running at once, and sets up logging and measure blocks.
2016-04-19 14:52:58 +01:00
Erik Johnston
e8884e5e9c Add self.media_repo to PreviewUrlResource 2016-04-19 14:51:34 +01:00
Erik Johnston
a7001c311b _make_dirs was moved to MediaRepository 2016-04-19 14:49:31 +01:00
Erik Johnston
9181e2f4c7 Add store to PreviewUrlResource 2016-04-19 14:48:24 +01:00
Erik Johnston
fb76a81ff7 Reorder imports 2016-04-19 14:45:05 +01:00
David Baker
07d765209d First bits of emailpusher
Mostly logic of when to send an email
2016-04-19 14:24:36 +01:00
Erik Johnston
48af68ba8e Merge pull request #735 from matrix-org/erikj/media_resource_cleanup
Split out BaseMediaResource into MediaRepository
2016-04-19 13:56:59 +01:00
Erik Johnston
0c93df89b6 Move MediaRepository to media_repository module 2016-04-19 11:31:43 +01:00
Erik Johnston
43f0941e8f Split out BaseMediaResource into MediaRepository
This is so that a single MediaRepository can be shared across all
resources, rather than having a "copy" per resource.

In particular this allows us to guard against both the thumbnail and
download resource triggering a download of remote content at the same
time.
2016-04-19 11:24:59 +01:00
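A sketch of that guard: one shared repository keeps a map of in-flight fetches, so concurrent thumbnail and download requests for the same remote file ride on a single download. Written asyncio-style for brevity; Synapse of this era used Twisted Deferreds, and the names are illustrative:

```python
import asyncio

class MediaRepository:
    def __init__(self):
        self._in_flight = {}  # (server_name, media_id) -> Task

    async def _download(self, server_name, media_id):
        await asyncio.sleep(0.1)  # stand-in for the real remote fetch
        return b"media bytes"

    async def get_remote_media(self, server_name, media_id):
        key = (server_name, media_id)
        task = self._in_flight.get(key)
        if task is None:
            # First request for this file starts the fetch; later requests
            # below simply await the same task.
            task = asyncio.ensure_future(self._download(server_name, media_id))
            self._in_flight[key] = task
            task.add_done_callback(lambda _: self._in_flight.pop(key, None))
        return await task
```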
Erik Johnston
481119f7d6 Merge pull request #734 from matrix-org/erikj/measure
Create log context in Measure if one doesn't exist
2016-04-18 19:03:57 +01:00
Erik Johnston
9f56645038 Merge pull request #728 from OlegGirko/systemd_env_file
Add environment file to systemd unit configuration.
2016-04-18 16:22:17 +01:00
Erik Johnston
eb8619e256 Create log context in Measure if one doesn't exist 2016-04-18 16:08:32 +01:00
Erik Johnston
4ef7a25c10 Merge pull request #733 from matrix-org/erikj/make_member_timeout
Lower timeout for make_membership_event
2016-04-18 15:08:05 +01:00
Erik Johnston
3727a15764 Merge pull request #732 from matrix-org/erikj/login
Simplify _check_password
2016-04-18 15:07:57 +01:00
Matthew Hodgson
aaabbd3e9e explicitly pass in the charset from Content-Type to lxml to fix cyrillic woes better 2016-04-15 14:32:25 +01:00
Matthew Hodgson
84f9cac4d0 fix cyrillic URL previews by hardcoding all page decoding to UTF-8 for now, rather than relying on lxml's heuristics which seem to get it wrong 2016-04-15 13:20:08 +01:00
Erik Johnston
914f1eafac Lower timeout for make_membership_event
Calls to make_membership_event are done in response to client requests,
and so should not be retried over long timeframes.
2016-04-15 11:22:23 +01:00
Erik Johnston
6fd2f685fe Simplify _check_password 2016-04-15 11:17:18 +01:00
Erik Johnston
737aee9295 Merge pull request #731 from matrix-org/erikj/timed_otu
Use SynapseError 504 for Timeout errors
2016-04-15 10:31:17 +01:00
Erik Johnston
cb9c465707 Use SynapseError 504 for Timeout errors 2016-04-15 10:21:32 +01:00
Mark Haines
3c79bdd7a0 Fix check_password rather than inverting the meaning of _check_local_password (#730) 2016-04-14 19:00:21 +01:00
David Baker
a4c56bf67b Merge pull request #729 from matrix-org/dbkr/fix_login_nonexistent_user
Fix login to error for nonexistent users
2016-04-14 18:46:45 +01:00
David Baker
4c1b32d7e2 Fix login to error for nonexistent users
Fixes SYN-680
2016-04-14 18:28:42 +01:00
Matthew Hodgson
f78b479118 fix urlparse import thinko breaking tiny URLs 2016-04-14 15:23:55 +01:00
Kegsay
4802f9cdb6 Merge pull request #727 from matrix-org/kegan/fix-asapi-reg
Make v2_alpha reg follow the AS API specification
2016-04-14 15:08:54 +01:00
Kegan Dougal
83776d6219 Make v2_alpha reg follow the AS API specification
The spec is clear the key should be 'user' not 'username' and this is indeed
the case for v1. This is not true for v2_alpha though, which is what this
commit is fixing.
2016-04-14 14:52:26 +01:00
Oleg Girko
e83f8c0aa5 Add environment file to systemd unit configuration.
Now there is at least one environment variable that controls the
synapse server's behaviour: SYNAPSE_CACHE_FACTOR.
So it makes sense for the systemd unit file to use an environment
configuration file that can set this variable's value.

Signed-off-by: Oleg Girko <ol@infoserver.lv>
2016-04-14 14:46:18 +01:00
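SYNAPSE_CACHE_FACTOR is read from the process environment; note that environment values must be strings (the "os.environ requires a string" fix earlier in this log). For example:

```python
import os

cache_factor = 0.5  # example value; the variable scales Synapse's cache sizes

# os.environ only accepts strings, hence the str() conversion.
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
```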
Matthew Hodgson
bd77216d06 comment out 2c838f6459 due to risk of https://en.wikipedia.org/wiki/Billion_laughs attacks - thanks @torhve 2016-04-14 14:39:24 +01:00
Erik Johnston
5a578ea4c7 Merge pull request #726 from matrix-org/erikj/push_metric
Measure push action generator
2016-04-14 13:49:09 +01:00
Erik Johnston
9ae64c9910 Measure push action generator 2016-04-14 13:42:22 +01:00
Erik Johnston
b42ad359e9 Merge pull request #725 from matrix-org/dbkr/push_only_joined
Don't push for everyone who ever sent an RR to the room
2016-04-14 12:05:13 +01:00
David Baker
757e2c79b4 Don't push for everyone who ever sent an RR to the room 2016-04-14 12:02:50 +01:00
Erik Johnston
86e9bbc74e Add missing yield 2016-04-14 11:56:52 +01:00
Erik Johnston
e40f25ebe1 Rename log context 2016-04-14 11:54:14 +01:00
Erik Johnston
ff1d333a02 Merge pull request #724 from matrix-org/erikj/push_measure
Add push index. Add extra Measure
2016-04-14 11:46:46 +01:00
Erik Johnston
2ae91a9e2f Make send_badge private 2016-04-14 11:37:50 +01:00
Erik Johnston
d213d69fe3 Add desc arg 2016-04-14 11:36:23 +01:00
Erik Johnston
56da835eaf Add necessary logging contexts 2016-04-14 11:33:50 +01:00
Erik Johnston
96bcfb29c7 Add index 2016-04-14 11:26:33 +01:00
Erik Johnston
7be1065b8f Add extra Measure 2016-04-14 11:26:15 +01:00
Erik Johnston
a2546b9082 Fix query for get_unread_push_actions_for_user_in_range 2016-04-14 11:08:31 +01:00
Erik Johnston
ceeb5b909f Merge pull request #721 from matrix-org/erikj/spider
Sanitize the optional dependencies for spider API
2016-04-14 09:59:29 +01:00
David Baker
43a89cca8e Merge pull request #722 from matrix-org/dbkr/only_unread_event_actions
Only return unread notifications
2016-04-13 14:54:26 +01:00
Erik Johnston
f338bf9257 Give install requirements 2016-04-13 14:33:48 +01:00
David Baker
767fc0b739 pep8 2016-04-13 14:23:27 +01:00
David Baker
54d08c8868 Only return unread notifications
Make get_unread_push_actions_for_user_in_range only return unread event actions, being more true to its name. Done in two separate SQL queries: one to get actions after a read receipt, and one for those in a room with no receipt at all. SQL queries by Erik.
2016-04-13 14:16:45 +01:00
Erik Johnston
5880bc5417 Merge pull request #718 from matrix-org/erikj/public_room_list
Don't return empty public rooms
2016-04-13 14:07:26 +01:00
Erik Johnston
f613a3e332 Merge pull request #720 from matrix-org/erikj/auth_chec
Don't auto log failed auth checks
2016-04-13 14:07:23 +01:00
Erik Johnston
bfe586843f Add back in helpful description for missing url_preview_ip_range_blacklist 2016-04-13 13:52:57 +01:00
Erik Johnston
d0633e6dbe Sanitize the optional dependencies for spider API 2016-04-13 13:38:09 +01:00
Erik Johnston
0f2ca8cde1 Measure Auth.check 2016-04-13 11:15:59 +01:00
Erik Johnston
c53f9d561e Don't auto log failed auth checks 2016-04-13 11:11:46 +01:00
David Baker
65141161f6 Unused member variable 2016-04-12 16:25:26 +01:00
Erik Johnston
72f454b752 Don't return empty public rooms 2016-04-12 16:06:18 +01:00
Mark Haines
10ebbaea2e Update replication.rst 2016-04-12 15:53:45 +01:00
Mark Haines
aa5ce4d450 Add some design documentation for replication 2016-04-12 15:14:10 +01:00
David Baker
d33d623f0d Merge pull request #716 from matrix-org/dbkr/get_pushers
Add get endpoint for pushers
2016-04-12 14:40:37 +01:00
David Baker
7984ffdc6a Unneccessarywhitespaceisunnecessary 2016-04-12 13:55:57 +01:00
David Baker
c1267d04c5 Oops, forgot the desc. 2016-04-12 13:55:32 +01:00
David Baker
a04c076b7f Make the /set part mandatory 2016-04-12 13:54:41 +01:00
David Baker
44891b4a0a Tidy up get_pusher functions
Decodes pushers rows on the main thread rather than the db thread and uses _simple_select_list. Also do the same to the function I copied and factor out the duplication into a helper function.
2016-04-12 13:47:17 +01:00
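The shape of that split, with hypothetical helper names (_simple_select_list here stands in for Synapse's helper that runs the SELECT on the database thread and hands back plain row dicts):

```python
import json

def get_pushers(db_select, keyvalues):
    # db_select stands in for _simple_select_list: only the SELECT runs
    # on the database thread.
    rows = db_select(table="pushers", keyvalues=keyvalues)

    # The JSON 'data' column is decoded here, on the calling thread,
    # keeping CPU work off the database thread.
    for row in rows:
        row["data"] = json.loads(row["data"])
    return rows
```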
David Baker
7b39bcdaae Mis-named function 2016-04-12 13:35:08 +01:00
David Baker
d937f342bb Split into separate servlet classes 2016-04-12 13:33:30 +01:00
Erik Johnston
318cb1f207 Merge pull request #717 from matrix-org/erikj/backfill_state
Check if we've already backfilled events
2016-04-12 13:30:30 +01:00
Erik Johnston
c48465dbaa More comments 2016-04-12 12:48:30 +01:00
Erik Johnston
8be1a37909 More comments 2016-04-12 12:04:19 +01:00
Erik Johnston
d3d0be4167 Don't append to unused list 2016-04-12 11:59:00 +01:00
Erik Johnston
762ada1e07 Add back backfilled parameter that was removed 2016-04-12 11:58:04 +01:00
Erik Johnston
0d3da210f0 Add comment 2016-04-12 11:54:41 +01:00
Erik Johnston
cccf86dd05 Check if we've already backfilled events 2016-04-12 11:19:32 +01:00
David Baker
8a76094965 Add get endpoint for pushers
As per https://github.com/matrix-org/matrix-doc/pull/308
2016-04-11 18:00:03 +01:00
Mark Haines
790f5848b2 Fix the rule_id for .m.rule.invite_for_me (#715) 2016-04-11 16:10:39 +01:00
Mark Haines
82d7eea7e3 Move the versionstring code out of app.homeserver into util 2016-04-11 14:57:09 +01:00
David Baker
2547dffccc Merge pull request #705 from matrix-org/dbkr/pushers_use_event_actions
Change pushers to use the event_actions table
2016-04-11 12:58:55 +01:00
David Baker
9bb041791c Run unsafe process in a loop until we've caught up
and wrap unsafe process in a try block
2016-04-11 12:48:30 +01:00
Erik Johnston
17515bae14 PEP8 2016-04-11 11:02:50 +01:00
Matthew Hodgson
4bd3d25218 Merge pull request #688 from matrix-org/matthew/preview_urls
URL previewing support
2016-04-11 10:40:29 +01:00
Matthew Hodgson
5ffacc5e84 fix typos and needless try/except from PR review 2016-04-11 10:39:16 +01:00
Matthew Hodgson
83b2f83da0 actually throw meaningful errors 2016-04-08 21:36:59 +01:00
Mark Haines
b36270b5e1 Fix pep8 warning 2016-04-08 19:52:23 +01:00
Matthew Hodgson
6ff7a79308 move local_media_repository_url_cache.sql to schema v31 2016-04-08 19:09:02 +01:00
Matthew Hodgson
af582b66bb fix typo 2016-04-08 19:08:47 +01:00
Matthew Hodgson
2460d904bd fix error checking for new SQL 2016-04-08 19:04:29 +01:00
Matthew Hodgson
1ccabe2965 more PR feedback 2016-04-08 18:58:08 +01:00
Matthew Hodgson
fb83f6a1fc fix SQL based on PR feedback 2016-04-08 18:55:38 +01:00
Matthew Hodgson
b04f81284a Add more doc 2016-04-08 18:55:27 +01:00
Matthew Hodgson
ec9331f851 Add doc 2016-04-08 18:54:18 +01:00
Matthew Hodgson
dafef5a688 Add url_preview_enabled config option to turn on/off preview_url endpoint. defaults to off.
Add url_preview_ip_range_blacklist to let admins specify internal IP ranges that must not be spidered.
Add url_preview_url_blacklist to let admins specify URL patterns that must not be spidered.
Implement a custom SpiderEndpoint and associated support classes to implement url_preview_ip_range_blacklist
Add commentary and generally address PR feedback
2016-04-08 18:37:15 +01:00
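A sketch of the IP-range check those options imply, using netaddr (a Synapse dependency); the ranges are examples, and the real check lives in the SpiderEndpoint plumbing rather than a free function:

```python
from netaddr import IPAddress, IPSet

ip_range_blacklist = IPSet(["10.0.0.0/8", "127.0.0.0/8", "192.168.0.0/16"])
ip_range_whitelist = IPSet(["192.168.1.1"])  # e.g. one trusted internal host

def may_spider(resolved_ip):
    """Run against the *resolved* address, so DNS can't smuggle a URL
    past a hostname-based check."""
    ip = IPAddress(resolved_ip)
    if ip in ip_range_whitelist:   # whitelist punches holes in the blacklist
        return True
    return ip not in ip_range_blacklist

assert not may_spider("10.1.2.3")
assert may_spider("93.184.216.34")
```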
David Baker
d96a070a3a Actually check if we're processing 2016-04-08 16:49:39 +01:00
David Baker
ed3979df5f Fix invite pushes
* If the event is an invite event, add the invitee to the list of users we run push rules for (if they have a pusher etc)
 * Move invite_for_me to be higher prio than member events, otherwise member events match them
 * Spell override right
2016-04-08 15:29:59 +01:00
David Baker
7b6d519482 Make sure max stream ordering only increases 2016-04-08 14:08:16 +01:00
David Baker
52d1008661 Unsafe process should call itself if the max has changed 2016-04-08 14:06:54 +01:00
David Baker
ce3fe52498 Comment why unsafe process is unsafe 2016-04-08 14:02:38 +01:00
David Baker
d9f38561c8 Literally a dictionary 2016-04-07 17:45:01 +01:00
David Baker
4836864f56 generate id in the main thread 2016-04-07 17:38:48 +01:00
David Baker
a4a31fa8dc Only pass in what we need 2016-04-07 17:37:19 +01:00
David Baker
3fb35cbd6f Oops, inequality fail 2016-04-07 17:33:37 +01:00
David Baker
15e0f1696f Wrap process in a flag so we don't process whilst already processing. 2016-04-07 17:31:08 +01:00
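These pusher commits describe one pattern: a re-entrancy flag around a worker loop that re-reads the high-water mark until it has caught up. A condensed sketch of that pattern (illustrative, not the pusher's actual code):

```python
class Pusher:
    def __init__(self):
        self.processing = False
        self.max_stream_ordering = 0   # only ever increases
        self.seen_stream_ordering = 0

    async def on_new_notifications(self, stream_ordering):
        self.max_stream_ordering = max(self.max_stream_ordering, stream_ordering)
        if self.processing:
            return  # the running loop will observe the new max
        self.processing = True
        try:
            # Loop: more work may have arrived while we were processing.
            while self.seen_stream_ordering < self.max_stream_ordering:
                target = self.max_stream_ordering
                await self._unsafe_process_up_to(target)
                self.seen_stream_ordering = target
        finally:
            self.processing = False

    async def _unsafe_process_up_to(self, stream_ordering):
        """'Unsafe' because it assumes it is never run concurrently;
        the flag above is what guarantees that."""
```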
Matthew Hodgson
d6e7333ae4 Merge branch 'develop' into matthew/preview_urls 2016-04-07 17:26:44 +01:00
David Baker
6ec02e9ecf indenting 2016-04-07 17:24:05 +01:00
David Baker
25cd5bb697 defer.gatherResults rather than doing all the pokes in series 2016-04-07 17:22:14 +01:00
David Baker
fa129ce5b5 Add measure blocks 2016-04-07 17:12:29 +01:00
David Baker
e1e042f2a1 Add comments on min_stream_id
saying that the min stream id won't be completely accurate all the time
2016-04-07 17:09:36 +01:00
David Baker
05d044aac3 pep8 2016-04-07 16:45:38 +01:00
David Baker
2d5c693fd3 Fix port script for changes merged from develop 2016-04-07 16:43:54 +01:00
David Baker
9c99ab4572 Merge remote-tracking branch 'origin/develop' into dbkr/pushers_use_event_actions 2016-04-07 16:35:22 +01:00
David Baker
d549fdfa22 Remove code that's now been obsoleted or moved elsewhere 2016-04-07 16:31:38 +01:00
David Baker
92e3071623 Send badge count pushes.
Also fix bugs with retrying.
2016-04-07 15:39:53 +01:00
David Baker
0fd1cd2400 pep8 2016-04-06 16:50:47 +01:00
David Baker
7e2c89a37f Make pushers use the event_push_actions table instead of listening on an event stream & running the rules again. Sytest passes, but remaining to do:
* Make badges work again
 * Remove old, unused code
2016-04-06 15:42:15 +01:00
Matthew Hodgson
9f7dc2bef7 Merge branch 'develop' into matthew/preview_urls 2016-04-04 00:38:21 +01:00
Matthew Hodgson
cf51c4120e report image size (bytewise) in OG meta 2016-04-03 23:57:05 +01:00
Matthew Hodgson
0834b152fb char encoding 2016-04-03 12:59:27 +01:00
Matthew Hodgson
8b98a7e8c3 pep8 2016-04-03 12:56:29 +01:00
Matthew Hodgson
eab4d462f8 fix etag typing error. fix timestamp typing error 2016-04-03 02:02:46 +01:00
Matthew Hodgson
c3916462f6 rebase all image URLs 2016-04-03 01:33:12 +01:00
Matthew Hodgson
110780b18b remove stale todo 2016-04-03 00:48:31 +01:00
Matthew Hodgson
b09e29a03c Ensure only one download for a given URL is active at a time 2016-04-03 00:47:40 +01:00
Matthew Hodgson
7426c86eb8 add a persistent cache of URL lookups, and fix up the in-memory one to work 2016-04-03 00:31:57 +01:00
Matthew Hodgson
d1b154a10f support gzip compression, and don't pass through error msgs 2016-04-02 03:06:39 +01:00
Matthew Hodgson
9377157961 how was _respond_default_thumbnail ever meant to work? 2016-04-02 02:31:45 +01:00
Matthew Hodgson
2c838f6459 pass back SVGs as their own thumbnails 2016-04-02 02:30:07 +01:00
Matthew Hodgson
5037ee0d37 handle missing dimensions without crashing 2016-04-02 02:29:57 +01:00
Matthew Hodgson
b26e8604f1 make meta comparisons case insensitive 2016-04-02 01:35:44 +01:00
Matthew Hodgson
5fd07da764 refactor calc_og; spider image URLs; fix xpath; add a (broken) expiringcache; loads of other fixes 2016-04-02 00:35:49 +01:00
Matthew Hodgson
c60b751694 fix assorted redirect, unicode and screenscraping bugs 2016-04-01 02:17:48 +01:00
Matthew Hodgson
683e564815 handle spidered relative images correctly 2016-03-31 23:52:58 +01:00
Matthew Hodgson
72550c3803 prevent choking on invalid utf-8, and handle image thumbnailing smarter 2016-03-31 15:14:14 +01:00
Matthew Hodgson
bb9a2ca87c synthesise basic OG metadata from pages lacking it 2016-03-31 14:15:09 +01:00
Matthew Hodgson
0d3d7de6fc sync in changes from matrixfederationclient 2016-03-31 12:42:27 +01:00
Matthew Hodgson
a8a5dd3b44 handle requests with missing content-length headers (e.g. YouTube) 2016-03-31 01:55:21 +01:00
Matthew Hodgson
7178ab7da0 spell out more packages 2016-03-30 17:29:22 +01:00
Matthew Hodgson
ae5831d303 fix bugs 2016-03-29 03:32:55 +01:00
Matthew Hodgson
721b2bfa85 implement redirects 2016-03-29 03:32:52 +01:00
Matthew Hodgson
19038582d3 debug 2016-03-29 03:14:16 +01:00
Matthew Hodgson
64b4aead15 make it work 2016-03-29 03:13:25 +01:00
Matthew Hodgson
dd4287ca5d make it build 2016-03-29 02:07:57 +01:00
Matthew Hodgson
e0c2490a14 Merge branch 'develop' into matthew/preview_urls 2016-03-29 01:20:25 +01:00
Matthew Hodgson
ec0cf996c9 typo 2016-03-29 01:20:14 +01:00
Matthew Hodgson
d9d48aad2d Merge branch 'develop' into matthew/preview_urls 2016-03-27 22:54:42 +01:00
Matthew Hodgson
adafa24b0a typo 2016-03-25 23:38:19 +00:00
Matthew Hodgson
7dd0c1730a initial WIP of a tentative preview_url endpoint - incomplete, untested, experimental, etc. just putting it here for safekeeping for now 2016-01-24 18:47:27 -05:00
338 changed files with 25681 additions and 8055 deletions

.gitignore (8 changed lines)

@@ -24,10 +24,10 @@ homeserver*.yaml
 .coverage
 htmlcov
-demo/*.db
-demo/*.log
-demo/*.log.*
-demo/*.pid
+demo/*/*.db
+demo/*/*.log
+demo/*/*.log.*
+demo/*/*.pid
 demo/media_store.*
 demo/etc

CHANGES.rst
@@ -1,3 +1,588 @@
Changes in synapse v0.18.4 (2016-11-22)
=======================================
Bug fixes:
* Add workaround for buggy clients that fail to register (PR #1632)
Changes in synapse v0.18.4-rc1 (2016-11-14)
===========================================
Changes:
* Various database efficiency improvements (PR #1188, #1192)
* Update default config to blacklist more internal IPs, thanks to Euan Kemp (PR
#1198)
* Allow specifying duration in minutes in config, thanks to Daniel Dent (PR
#1625)
Bug fixes:
* Fix media repo to set CORS headers on responses (PR #1190)
* Fix registration to not error on non-ascii passwords (PR #1191)
* Fix create event code to limit the number of prev_events (PR #1615)
* Fix bug in transaction ID deduplication (PR #1624)
Changes in synapse v0.18.3 (2016-11-08)
=======================================
SECURITY UPDATE
Explicitly require authentication when using LDAP3. This is the default on
versions of ``ldap3`` above 1.0, but some distributions will package an older
version.
If you are using LDAP3 login and have a version of ``ldap3`` older than 1.0 it
is **CRITICAL to upgrade**.
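Concretely, the fix amounts to always passing the authentication mode when binding; with the ldap3 API that looks roughly like this (server address and bind credentials are placeholders):

```python
import ldap3

server = ldap3.Server("ldap.example.com")
conn = ldap3.Connection(
    server,
    user="uid=alice,ou=users,dc=example,dc=com",
    password="correct horse",
    # Explicit: ldap3 releases before 1.0 may not default to this.
    authentication=ldap3.SIMPLE,
)
conn.bind()
```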
Changes in synapse v0.18.2 (2016-11-01)
=======================================
No changes since v0.18.2-rc5
Changes in synapse v0.18.2-rc5 (2016-10-28)
===========================================
Bug fixes:
* Fix prometheus process metrics in worker processes (PR #1184)
Changes in synapse v0.18.2-rc4 (2016-10-27)
===========================================
Bug fixes:
* Fix ``user_threepids`` schema delta, which in some instances prevented
startup after upgrade (PR #1183)
Changes in synapse v0.18.2-rc3 (2016-10-27)
===========================================
Changes:
* Allow clients to supply access tokens as headers (PR #1098)
* Clarify error codes for GET /filter/, thanks to Alexander Maznev (PR #1164)
* Make password reset email field case insensitive (PR #1170)
* Reduce redundant database work in email pusher (PR #1174)
* Allow configurable rate limiting per AS (PR #1175)
* Check whether to ratelimit sooner to avoid work (PR #1176)
* Standardise prometheus metrics (PR #1177)
Bug fixes:
* Fix incredibly slow back pagination query (PR #1178)
* Fix infinite typing bug (PR #1179)
Changes in synapse v0.18.2-rc2 (2016-10-25)
===========================================
(This release did not include the changes advertised and was identical to RC1)
Changes in synapse v0.18.2-rc1 (2016-10-17)
===========================================
Changes:
* Remove redundant event_auth index (PR #1113)
* Reduce DB hits for replication (PR #1141)
* Implement pluggable password auth (PR #1155)
* Remove rate limiting from app service senders and fix get_or_create_user
requester, thanks to Patrik Oldsberg (PR #1157)
* window.postmessage for Interactive Auth fallback (PR #1159)
* Use sys.executable instead of hardcoded python, thanks to Pedro Larroy
(PR #1162)
* Add config option for adding additional TLS fingerprints (PR #1167)
* User-interactive auth on delete device (PR #1168)
Bug fixes:
* Fix not being allowed to set your own state_key, thanks to Patrik Oldsberg
(PR #1150)
* Fix interactive auth to return 401 for incorrect password (PR #1160,
#1166)
* Fix email push notifs being dropped (PR #1169)
Changes in synapse v0.18.1 (2016-10-05)
======================================
No changes since v0.18.1-rc1
Changes in synapse v0.18.1-rc1 (2016-09-30)
===========================================
Features:
* Add total_room_count_estimate to ``/publicRooms`` (PR #1133)
Changes:
* Time out typing over federation (PR #1140)
* Restructure LDAP authentication (PR #1153)
Bug fixes:
* Fix 3pid invites when server is already in the room (PR #1136)
* Fix upgrading with SQLite taking lots of CPU for a few days
after upgrade (PR #1144)
* Fix upgrading from very old database versions (PR #1145)
* Fix port script to work with recently added tables (PR #1146)
Changes in synapse v0.18.0 (2016-09-19)
=======================================
The release includes major changes to the state storage database schemas, which
significantly reduce database size. Synapse will attempt to upgrade the current
data in the background. Servers with large SQLite databases may experience
degradation of performance while this upgrade is in progress; therefore you may
want to consider migrating to Postgres before upgrading very large SQLite
databases.
Changes:
* Make public room search case insensitive (PR #1127)
Bug fixes:
* Fix and clean up publicRooms pagination (PR #1129)
Changes in synapse v0.18.0-rc1 (2016-09-16)
===========================================
Features:
* Add ``only=highlight`` on ``/notifications`` (PR #1081)
* Add server param to /publicRooms (PR #1082)
* Allow clients to ask for the whole of a single state event (PR #1094)
* Add is_direct param to /createRoom (PR #1108)
* Add pagination support to publicRooms (PR #1121)
* Add very basic filter API to /publicRooms (PR #1126)
* Add basic direct to device messaging support for E2E (PR #1074, #1084, #1104,
#1111)
Changes:
* Move to storing state_groups_state as deltas, greatly reducing DB size (PR
#1065)
* Reduce amount of state pulled out of the DB during common requests (PR #1069)
* Allow PDF to be rendered from media repo (PR #1071)
* Reindex state_groups_state after pruning (PR #1085)
* Clobber EDUs in send queue (PR #1095)
* Conform better to the CAS protocol specification (PR #1100)
* Limit how often we ask for keys from dead servers (PR #1114)
Bug fixes:
* Fix /notifications API when used with ``from`` param (PR #1080)
* Fix backfill when cannot find an event. (PR #1107)
Changes in synapse v0.17.3 (2016-09-09)
=======================================
This release fixes a major bug that stopped servers from handling rooms with
over 1000 members.
Changes in synapse v0.17.2 (2016-09-08)
=======================================
This release contains security bug fixes. Please upgrade.
No changes since v0.17.2-rc1
Changes in synapse v0.17.2-rc1 (2016-09-05)
===========================================
Features:
* Start adding store-and-forward direct-to-device messaging (PR #1046, #1050,
#1062, #1066)
Changes:
* Avoid pulling the full state of a room out so often (PR #1047, #1049, #1063,
#1068)
* Don't notify for online to online presence transitions. (PR #1054)
* Occasionally persist unpersisted presence updates (PR #1055)
* Allow application services to have an optional 'url' (PR #1056)
* Clean up old sent transactions from DB (PR #1059)
Bug fixes:
* Fix None check in backfill (PR #1043)
* Fix membership changes to be idempotent (PR #1067)
* Fix bug in get_pdu where it would sometimes return events with incorrect
signature
Changes in synapse v0.17.1 (2016-08-24)
=======================================
Changes:
* Delete old received_transactions rows (PR #1038)
* Pass through user-supplied content in /join/$room_id (PR #1039)
Bug fixes:
* Fix bug with backfill (PR #1040)
Changes in synapse v0.17.1-rc1 (2016-08-22)
===========================================
Features:
* Add notification API (PR #1028)
Changes:
* Don't print stack traces when failing to get remote keys (PR #996)
* Various federation /event/ perf improvements (PR #998)
* Only process one local membership event per room at a time (PR #1005)
* Move default display name push rule (PR #1011, #1023)
* Fix up preview URL API. Add tests. (PR #1015)
* Set ``Content-Security-Policy`` on media repo (PR #1021)
* Make notify_interested_services faster (PR #1022)
* Add usage stats to prometheus monitoring (PR #1037)
Bug fixes:
* Fix token login (PR #993)
* Fix CAS login (PR #994, #995)
* Fix /sync to not clobber status_msg (PR #997)
* Fix redacted state events to include prev_content (PR #1003)
* Fix some bugs in the auth/ldap handler (PR #1007)
* Fix backfill request to limit URI length, so that remotes don't reject the
requests due to path length limits (PR #1012)
* Fix AS push code to not send duplicate events (PR #1025)
Changes in synapse v0.17.0 (2016-08-08)
=======================================
This release contains significant security bug fixes regarding authenticating
events received over federation. PLEASE UPGRADE.
This release changes the LDAP configuration format in a backwards incompatible
way, see PR #843 for details.
Changes:
* Add federation /version API (PR #990)
* Make psutil dependency optional (PR #992)
Bug fixes:
* Fix URL preview API to exclude HTML comments in description (PR #988)
* Fix error handling of remote joins (PR #991)
Changes in synapse v0.17.0-rc4 (2016-08-05)
===========================================
Changes:
* Change the way we summarize URLs when previewing (PR #973)
* Add new ``/state_ids/`` federation API (PR #979)
* Speed up processing of ``/state/`` response (PR #986)
Bug fixes:
* Fix event persistence when event has already been partially persisted
(PR #975, #983, #985)
* Fix port script to also copy across backfilled events (PR #982)
Changes in synapse v0.17.0-rc3 (2016-08-02)
===========================================
Changes:
* Forbid non-ASes from registering users whose names begin with '_' (PR #958)
* Add some basic admin API docs (PR #963)
Bug fixes:
* Send the correct host header when fetching keys (PR #941)
* Fix joining a room that has missing auth events (PR #964)
* Fix various push bugs (PR #966, #970)
* Fix adding emails on registration (PR #968)
Changes in synapse v0.17.0-rc2 (2016-08-02)
===========================================
(This release did not include the changes advertised and was identical to RC1)
Changes in synapse v0.17.0-rc1 (2016-07-28)
===========================================
This release changes the LDAP configuration format in a backwards incompatible
way, see PR #843 for details.
Features:
* Add purge_media_cache admin API (PR #902)
* Add deactivate account admin API (PR #903)
* Add optional pepper to password hashing (PR #907, #910 by KentShikama)
* Add an admin option to shared secret registration (breaks backwards compat)
(PR #909)
* Add purge local room history API (PR #911, #923, #924)
* Add requestToken endpoints (PR #915)
* Add an /account/deactivate endpoint (PR #921)
* Add filter param to /messages. Add 'contains_url' to filter. (PR #922)
* Add device_id support to /login (PR #929)
* Add device_id support to /v2/register flow. (PR #937, #942)
* Add GET /devices endpoint (PR #939, #944)
* Add GET /device/{deviceId} (PR #943)
* Add update and delete APIs for devices (PR #949)
Changes:
* Rewrite LDAP Authentication against ldap3 (PR #843 by mweinelt)
* Linearize some federation endpoints based on (origin, room_id) (PR #879)
* Remove the legacy v0 content upload API. (PR #888)
* Use similar naming we use in email notifs for push (PR #894)
* Optionally include password hash in createUser endpoint (PR #905 by
KentShikama)
* Use a query that postgresql optimises better for get_events_around (PR #906)
* Fall back to 'username' if 'user' is not given for appservice registration.
(PR #927 by Half-Shot)
* Add metrics for psutil derived memory usage (PR #936)
* Record device_id in client_ips (PR #938)
* Send the correct host header when fetching keys (PR #941)
* Log the hostname the reCAPTCHA was completed on (PR #946)
* Make the device id on e2e key upload optional (PR #956)
* Add r0.2.0 to the "supported versions" list (PR #960)
* Don't include name of room for invites in push (PR #961)
Bug fixes:
* Fix substitution failure in mail template (PR #887)
* Put most recent 20 messages in email notif (PR #892)
* Ensure that the guest user is in the database when upgrading accounts
(PR #914)
* Fix various edge cases in auth handling (PR #919)
* Fix 500 ISE when sending alias event without a state_key (PR #925)
* Fix bug where we stored rejections in the state_group, persist all
rejections (PR #948)
* Fix lack of check of if the user is banned when handling 3pid invites
(PR #952)
* Fix a couple of bugs in the transaction and keyring code (PR #954, #955)
Changes in synapse v0.16.1-r1 (2016-07-08)
==========================================
THIS IS A CRITICAL SECURITY UPDATE.
This fixes a bug which allowed users' accounts to be accessed by unauthorised
users.
Changes in synapse v0.16.1 (2016-06-20)
=======================================
Bug fixes:
* Fix assorted bugs in ``/preview_url`` (PR #872)
* Fix TypeError when setting unicode passwords (PR #873)
Performance improvements:
* Turn ``use_frozen_events`` off by default (PR #877)
* Disable responding with canonical json for federation (PR #878)
Changes in synapse v0.16.1-rc1 (2016-06-15)
===========================================
Features: None
Changes:
* Log requester for ``/publicRoom`` endpoints when possible (PR #856)
* 502 on ``/thumbnail`` when can't connect to remote server (PR #862)
* Linearize fetching of gaps on incoming events (PR #871)
Bug fixes:
* Fix bug where rooms were marked as published by default (PR #857)
* Fix bug when joining a room that has an event with an invalid sender (PR #868)
* Fix bug where backfilled events were sent down sync streams (PR #869)
* Fix bug where outgoing connections could wedge indefinitely, causing push
notifications to be unreliable (PR #870)
Performance improvements:
* Improve ``/publicRooms`` performance (PR #859)
Changes in synapse v0.16.0 (2016-06-09)
=======================================
NB: As of v0.14 all AS config files must have an ID field.
Bug fixes:
* Don't make rooms published by default (PR #857)
Changes in synapse v0.16.0-rc2 (2016-06-08)
===========================================
Features:
* Add configuration option for tuning GC via ``gc.set_threshold`` (PR #849)
Changes:
* Record metrics about GC (PR #771, #847, #852)
* Add metric counter for number of persisted events (PR #841)
Bug fixes:
* Fix 'From' header in email notifications (PR #843)
* Fix presence where timeouts were not being fired for the first 8h after
restarts (PR #842)
* Fix bug where synapse sent malformed transactions to AS's when retrying
transactions (Commits 310197b, 8437906)
Performance improvements:
* Remove event fetching from DB threads (PR #835)
* Change the way we cache events (PR #836)
* Add events to cache when we persist them (PR #840)
Changes in synapse v0.16.0-rc1 (2016-06-03)
===========================================
Version 0.15 was not released. See v0.15.0-rc1 below for additional changes.
Features:
* Add email notifications for missed messages (PR #759, #786, #799, #810, #815,
#821)
* Add a ``url_preview_ip_range_whitelist`` config param (PR #760)
* Add /report endpoint (PR #762)
* Add basic ignore user API (PR #763)
* Add an openidish mechanism for proving that you own a given user_id (PR #765)
* Allow clients to specify a server_name to avoid 'No known servers' (PR #794)
* Add secondary_directory_servers option to fetch room list from other servers
(PR #808, #813)
Changes:
* Report per request metrics for all of the things using request_handler (PR
#756)
* Correctly handle ``NULL`` password hashes from the database (PR #775)
* Allow receipts for events we haven't seen in the db (PR #784)
* Make synctl read a cache factor from config file (PR #785)
* Increment badge count per missed convo, not per msg (PR #793)
* Special case m.room.third_party_invite event auth to match invites (PR #814)
Bug fixes:
* Fix typo in event_auth servlet path (PR #757)
* Fix password reset (PR #758)
Performance improvements:
* Reduce database inserts when sending transactions (PR #767)
* Queue events by room for persistence (PR #768)
* Add cache to ``get_user_by_id`` (PR #772)
* Add and use ``get_domain_from_id`` (PR #773)
* Use tree cache for ``get_linearized_receipts_for_room`` (PR #779)
* Remove unused indices (PR #782)
* Add caches to ``bulk_get_push_rules*`` (PR #804)
* Cache ``get_event_reference_hashes`` (PR #806)
* Add ``get_users_with_read_receipts_in_room`` cache (PR #809)
* Use state to calculate ``get_users_in_room`` (PR #811)
* Load push rules in storage layer so that they get cached (PR #825)
* Make ``get_joined_hosts_for_room`` use get_users_in_room (PR #828)
* Poke notifier on next reactor tick (PR #829)
* Change CacheMetrics to be quicker (PR #830)
Changes in synapse v0.15.0-rc1 (2016-04-26)
===========================================
Features:
* Add login support for JSON Web Tokens (JWT), thanks to Niklas Riekenbrauck
(PR #671, #687)
* Add URL previewing support (PR #688)
* Add login support for LDAP, thanks to Christoph Witzany (PR #701)
* Add GET endpoint for pushers (PR #716)
Changes:
* Never notify for member events (PR #667)
* Deduplicate identical ``/sync`` requests (PR #668)
* Require user to have left room to forget room (PR #673)
* Use DNS cache if within TTL (PR #677)
* Let users see their own leave events (PR #699)
* Deduplicate membership changes (PR #700)
* Increase performance of pusher code (PR #705)
* Respond with error status 504 if failed to talk to remote server (PR #731)
* Increase search performance on postgres (PR #745)
Bug fixes:
* Fix bug where disabling all notifications still resulted in push (PR #678)
* Fix bug where users couldn't reject remote invites if remote refused (PR #691)
* Fix bug where synapse attempted to backfill from itself (PR #693)
* Fix bug where profile information was not correctly added when joining remote
rooms (PR #703)
* Fix bug where register API required incorrect key name for AS registration
(PR #727)
Changes in synapse v0.14.0 (2016-03-30)
=======================================
@@ -511,7 +1096,7 @@ Configuration:
* Add support for changing the bind host of the metrics listener via the
``metrics_bind_host`` option.
Changes in synapse v0.9.0-r5 (2015-05-21)
=========================================
@@ -853,7 +1438,7 @@ See UPGRADE for information about changes to the client server API, including
breaking backwards compatibility with VoIP calls and registration API.
Homeserver:
* When a user changes their displayname or avatar the server will now update
all their join states to reflect this.
* The server now adds "age" key to events to indicate how old they are. This
is clock independent, so at no point does any server or webclient have to
@@ -911,7 +1496,7 @@ Changes in synapse 0.2.2 (2014-09-06)
=====================================
Homeserver:
* When the server returns state events it now also includes the previous
content.
* Add support for inviting people when creating a new room.
* Make the homeserver inform the room via `m.room.aliases` when a new alias
@@ -923,7 +1508,7 @@ Webclient:
* Handle `m.room.aliases` events.
* Asynchronously send messages and show a local echo.
* Inform the UI when a message failed to send.
* Only autoscroll on receiving a new message if the user was already at the
bottom of the screen.
* Add support for ban/kick reasons.


@@ -11,8 +11,10 @@ recursive-include synapse/storage/schema *.sql
recursive-include synapse/storage/schema *.py
recursive-include docs *
recursive-include res *
recursive-include scripts *
recursive-include scripts-dev *
recursive-include synapse *.pyi
recursive-include tests *.py
recursive-include synapse/static *.css
@@ -22,5 +24,7 @@ recursive-include synapse/static *.js
exclude jenkins.sh
exclude jenkins*.sh
exclude jenkins*
recursive-exclude jenkins *.sh
prune demo/etc


@@ -11,8 +11,8 @@ VoIP. The basics you need to know to get up and running are:
like ``#matrix:matrix.org`` or ``#test:localhost:8448``.
- Matrix user IDs look like ``@matthew:matrix.org`` (although in the future
you will normally refer to yourself and others using a third party identifier
(3PID): email address, phone number, etc rather than manipulating Matrix user IDs)
The overall architecture is::
@@ -58,12 +58,13 @@ the spec in the context of a codebase and let you run your own homeserver and
generally help bootstrap the ecosystem.
In Matrix, every user runs one or more Matrix clients, which connect through to
a Matrix homeserver. The homeserver stores all their personal chat history and
user account information - much as a mail client connects through to an
IMAP/SMTP server. Just like email, you can either run your own Matrix
homeserver and control and own your own communications and history or use one
hosted by someone else (e.g. matrix.org) - there is no single point of control
or mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts,
etc.
Synapse ships with two basic demo Matrix clients: webclient (a basic group chat
web client demo implemented in AngularJS) and cmdclient (a basic Python
@@ -94,7 +95,7 @@ Synapse is the reference python/twisted Matrix homeserver implementation.
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 2.7
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
Synapse is written in python but some of the libraries it uses are written in
C. So before we can install synapse itself we need a working C compiler and the
@@ -104,7 +105,7 @@ Installing prerequisites on Ubuntu or Debian::
sudo apt-get install build-essential python2.7-dev libffi-dev \
python-pip python-setuptools sqlite3 \
libssl-dev python-virtualenv libjpeg-dev libxslt1-dev
Installing prerequisites on ArchLinux::
@@ -133,6 +134,12 @@ Installing prerequisites on Raspbian::
sudo pip install --upgrade ndg-httpsclient
sudo pip install --upgrade virtualenv
Installing prerequisites on openSUSE::
sudo zypper in -t pattern devel_basis
sudo zypper in python-pip python-setuptools sqlite3 python-virtualenv \
python-devel libffi-devel libopenssl-devel libjpeg62-devel
To install the synapse homeserver run::
virtualenv -p python2.7 ~/.synapse
@@ -198,6 +205,21 @@ run (e.g. ``~/.synapse``), and::
source ./bin/activate
synctl start
Security Note
=============
Matrix serves raw user generated data in some APIs - specifically the content
repository endpoints: http://matrix.org/docs/spec/client_server/r0.2.0.html#get-matrix-media-r0-download-servername-mediaid
Whilst we have tried to mitigate against possible XSS attacks (e.g.
https://github.com/matrix-org/synapse/pull/1021), we recommend running
matrix homeservers on a dedicated domain name, to prevent malicious user
generated content served from a matrix API from being able to attack webapps
hosted on the same domain. This is particularly true when sharing a matrix
webclient and server on the same domain.
See https://github.com/vector-im/vector-web/issues/1977 and
https://developer.github.com/changes/2014-04-25-user-content-security for more details.
Using PostgreSQL
================
@@ -214,9 +236,6 @@ The advantages of Postgres include:
pointing at the same DB master, as well as enabling DB replication in
synapse itself.
The only disadvantage is that the code is relatively new as of April 2015 and
may have a few regressions relative to SQLite.
For information on how to install and use PostgreSQL, please see
`docs/postgres.rst <docs/postgres.rst>`_.
@@ -444,7 +463,7 @@ You have two choices here, which will influence the form of your Matrix user
IDs:
1) Use the machine's own hostname as available on public DNS in the form of
its A records. This is easier to set up initially, perhaps for
testing, but lacks the flexibility of SRV.
2) Set up a SRV record for your domain name. This requires you create a SRV
@@ -557,6 +576,23 @@ as the primary means of identity and E2E encryption is not complete. As such,
we are running a single identity server (https://matrix.org) at the current
time.
URL Previews
============
Synapse 0.15.0 introduces an experimental new API for previewing URLs at
/_matrix/media/r0/preview_url. This is disabled by default. To turn it on
you must enable the `url_preview_enabled: True` config parameter and explicitly
specify the IP ranges that Synapse is not allowed to spider for previewing in
the `url_preview_ip_range_blacklist` configuration parameter. This is critical
from a security perspective to stop arbitrary Matrix users spidering 'internal'
URLs on your network. At the very least we recommend that your loopback and
RFC1918 IP addresses are blacklisted.
This also requires the optional lxml and netaddr python dependencies to be
installed.
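For example, a minimal ``homeserver.yaml`` fragment might look like the
following (the blacklist ranges shown are illustrative; tailor them to your
own network)::

    url_preview_enabled: True
    url_preview_ip_range_blacklist:
      - '127.0.0.0/8'
      - '10.0.0.0/8'
      - '172.16.0.0/12'
      - '192.168.0.0/16'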
Password reset
==============
@@ -600,7 +636,7 @@ Building internal API documentation::
Help!! Synapse eats all my RAM!
===============================
Synapse's architecture is quite RAM hungry currently - we deliberately


@@ -27,7 +27,15 @@ running:
# Pull the latest version of the master branch.
git pull
# Update the versions of synapse's python dependencies.
python synapse/python_dependencies.py | xargs -n1 pip install --upgrade
Upgrading to v0.15.0
====================
If you want to use the new URL previewing API (/_matrix/media/r0/preview_url)
then you have to explicitly enable it in the config and update your
dependencies. See README.rst for details.
Upgrading to v0.11.0


@@ -9,6 +9,7 @@ Description=Synapse Matrix homeserver
Type=simple
User=synapse
Group=synapse
EnvironmentFile=-/etc/sysconfig/synapse
WorkingDirectory=/var/lib/synapse
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml --log-config=/etc/synapse/log_config.yaml

docs/admin_api/README.rst Normal file

@@ -0,0 +1,12 @@
Admin APIs
==========
This directory includes documentation for the various synapse specific admin
APIs available.
Only users that are server admins can use these APIs. A user can be marked as a
server admin by updating the database directly, e.g.:
``UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'``
Restarting may be required for the changes to register.


@@ -0,0 +1,15 @@
Purge History API
=================
The purge history API allows server admins to purge historic events from their
database, reclaiming disk space.
Depending on the amount of history being purged a call to the API may take
several minutes or longer. During this period users will not be able to
paginate further back in the room from the point being purged from.
The API is simply:
``POST /_matrix/client/r0/admin/purge_history/<room_id>/<event_id>``
including an ``access_token`` of a server admin.
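As a sketch, the call can be made from Python with the ``requests`` library
(the server URL, access token, room and event IDs below are all placeholders)::

    import requests

    server = "http://localhost:8008"  # placeholder homeserver URL
    room_id = "!someroom:example.com"  # placeholder
    event_id = "$someevent:example.com"  # events before this one are purged
    resp = requests.post(
        "%s/_matrix/client/r0/admin/purge_history/%s/%s"
        % (server, room_id, event_id),
        params={"access_token": "<admin_access_token>"},
        json={},
    )
    print(resp.status_code)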


@@ -0,0 +1,19 @@
Purge Remote Media API
======================
The purge remote media API allows server admins to purge old cached remote
media.
The API is::
POST /_matrix/client/r0/admin/purge_media_cache
{
"before_ts": <unix_timestamp_in_ms>
}
This will remove all cached media that was last accessed before
``<unix_timestamp_in_ms>``.
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.
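The same pattern works from Python; a sketch of calling the API above, with
placeholder server URL and token::

    import time
    import requests

    week_ago_ms = int((time.time() - 7 * 24 * 3600) * 1000)  # one week ago, in ms
    resp = requests.post(
        "http://localhost:8008/_matrix/client/r0/admin/purge_media_cache",
        params={"access_token": "<admin_access_token>"},  # placeholder
        json={"before_ts": week_ago_ms},
    )
    print(resp.status_code)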


@@ -32,5 +32,4 @@ The format of the AS configuration file is as follows:
See the spec_ for further details on how application services work.
.. _spec: https://matrix.org/docs/spec/application_service/unstable.html


@@ -43,7 +43,10 @@ Basically, PEP8
together, or want to deliberately extend or preserve vertical/horizontal
space)
Comments should follow the `google code style <http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with
`sphinx <http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
in the sphinx documentation.
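For instance, a minimal Google-style docstring that napoleon can render (an
illustrative example, not code from the synapse tree)::

    def get_display_name(user_id):
        """Look up the display name for a user.

        Args:
            user_id (str): The Matrix ID of the user, e.g. "@foo:bar.com".

        Returns:
            str: The display name, or None if the user has not set one.
        """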
Code should pass pep8 --max-line-length=100 without any warnings.

docs/log_contexts.rst Normal file

@@ -0,0 +1,10 @@
What do I do about "Unexpected logging context" debug log-lines everywhere?
<Mjark> The logging context lives in thread local storage
<Mjark> Sometimes it gets out of sync with what it should actually be, usually because something scheduled something to run on the reactor without preserving the logging context.
<Matthew> what is the impact of it getting out of sync? and how and when should we preserve log context?
<Mjark> The impact is that some of the CPU and database metrics will be under-reported, and some log lines will be mis-attributed.
<Mjark> It should happen auto-magically in all the APIs that do IO or otherwise defer to the reactor.
<Erik> Mjark: the other place is if we branch, e.g. using defer.gatherResults
Unanswered: how and when should we preserve log context?
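A toy sketch of the failure mode described above, using a bare
``threading.local`` rather than synapse's actual logging context machinery::

    import threading

    _context = threading.local()
    _reactor_queue = []  # stand-in for the twisted reactor's callback queue

    def handle_request(request_id):
        _context.request_id = request_id      # set the "logging context"
        _reactor_queue.append(log_something)  # scheduled without preserving it

    def log_something():
        # Runs later, under whatever context happens to be current.
        print("request_id =", getattr(_context, "request_id", None))

    handle_request("req-1")
    handle_request("req-2")
    _context.request_id = None  # the context has moved on before callbacks run
    for callback in _reactor_queue:
        callback()  # both log lines are now mis-attributed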


@@ -15,36 +15,45 @@ How to monitor Synapse metrics using Prometheus
Restart synapse
3: Add a prometheus target for synapse. It needs to set the ``metrics_path``
   to a non-default value::

    - job_name: "synapse"
      metrics_path: "/_synapse/metrics"
      static_configs:
        - targets:
           "my.server.here:9092"

Standard Metric Names
---------------------

As of synapse version 0.18.2, the format of the process-wide metrics has been
changed to fit prometheus standard naming conventions. Additionally the units
have been changed to seconds, from milliseconds.

================================== =============================
New name                           Old name
---------------------------------- -----------------------------
process_cpu_user_seconds_total     process_resource_utime / 1000
process_cpu_system_seconds_total   process_resource_stime / 1000
process_open_fds (no 'type' label) process_fds
================================== =============================

The python-specific counts of garbage collector performance have been renamed.

=========================== ======================
New name                    Old name
--------------------------- ----------------------
python_gc_time              reactor_gc_time
python_gc_unreachable_total reactor_gc_unreachable
python_gc_counts            reactor_gc_counts
=========================== ======================

The twisted-specific reactor metrics have been renamed.

==================================== =====================
New name                             Old name
------------------------------------ ---------------------
python_twisted_reactor_pending_calls reactor_pending_calls
python_twisted_reactor_tick_time     reactor_tick_time
==================================== =====================

docs/replication.rst Normal file

@@ -0,0 +1,58 @@
Replication Architecture
========================
Motivation
----------
We'd like to be able to split some of the work that synapse does into multiple
python processes. In theory multiple synapse processes could share a single
postgresql database and we'd scale up by running more synapse processes.
However much of synapse assumes that only one process is interacting with the
database, whether for assigning unique identifiers when inserting into tables,
notifying components about new updates, or invalidating its caches.
So running multiple copies of the current code isn't an option. One way to
run multiple processes would be to have a single writer process and multiple
reader processes connected to the same database. In order to do this we'd need
a way for the reader process to invalidate its in-memory caches when an update
happens on the writer. One way to do this is for the writer to present an
append-only log of updates which the readers can consume to invalidate their
caches and to push updates to listening clients or pushers.
Synapse already stores much of its data as an append-only log so that it can
correctly respond to /sync requests so the amount of code changes needed to
expose the append-only log to the readers should be fairly minimal.
Architecture
------------
The Replication API
~~~~~~~~~~~~~~~~~~~
Synapse will optionally expose a long poll HTTP API for extracting updates. The
API will have a similar shape to /sync in that clients provide tokens
indicating where in the log they have reached and a timeout. The synapse server
then either responds with updates immediately if it already has updates or it
waits until the timeout for more updates. If the timeout expires and nothing
happened then the server returns an empty response.
However, unlike the /sync API, this replication API returns synapse-specific
data rather than trying to implement a matrix specification. The replication
results are returned as arrays of rows where the rows are mostly lifted
directly from the database. This avoids unnecessary JSON parsing on the server
and hopefully avoids an impedance mismatch between the data returned and the
required updates to the datastore.
This does not replicate all the database tables as many of the database tables
are indexes that can be recovered from the contents of other tables.
The format and parameters for the api are documented in
``synapse/replication/resource.py``.
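As a rough illustration of the long-poll pattern described above (the stream
names, ``timeout`` parameter, and response shape in this sketch are
assumptions; ``synapse/replication/resource.py`` is authoritative)::

    import requests

    REPLICATION_URL = "http://127.0.0.1:9092/_synapse/replication"  # illustrative
    tokens = {}  # stream name -> last position we have consumed

    while True:
        params = dict(tokens)
        params["timeout"] = 30000  # long-poll for up to 30 seconds
        updates = requests.get(REPLICATION_URL, params=params).json()
        for stream, data in updates.items():
            # Rows are lifted more or less directly from the database; use
            # them to invalidate caches, then remember how far we got.
            tokens[stream] = data["position"]  # field name assumed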
The Slaved DataStore
~~~~~~~~~~~~~~~~~~~~
There are read-only versions of the synapse storage layer in
``synapse/replication/slave/storage`` that use the response of the replication
API to invalidate their caches.


@@ -9,31 +9,35 @@ the Home Server to generate credentials that are valid for use on the TURN
server through the use of a secret shared between the Home Server and the
TURN server.
This document describes how to install coturn
(https://github.com/coturn/coturn) which also supports the TURN REST API,
and integrate it with synapse.
coturn Setup
============
You may be able to setup coturn via your package manager, or set it up manually using the usual ``configure, make, make install`` process.
1. Check out coturn::
git clone https://github.com/coturn/coturn.git coturn
cd coturn
2. Configure it::
./configure
You may need to install ``libevent2``: if so, you should do so
in the way recommended by your operating system.
You can ignore warnings about lack of database support: a
database is unnecessary for this purpose.
3. Build and install it::
make
make install
4. Create or edit the config file in ``/etc/turnserver.conf``. The relevant
lines, with example values, are::
lt-cred-mech
@@ -41,7 +45,7 @@ coturn Setup
static-auth-secret=[your secret key here]
realm=turn.myserver.org
See turnserver.conf for explanations of the options.
One way to generate the static-auth-secret is with pwgen::
pwgen -s 64 1
@@ -54,6 +58,7 @@ coturn Setup
import your private key and certificate.
7. Start the turn server::
bin/turnserver -o

docs/url_previews.rst Normal file

@@ -0,0 +1,74 @@
URL Previews
============
Design notes on a URL previewing service for Matrix:
Options are:
1. Have an AS which listens for URLs, downloads them, and inserts an event that describes their metadata.
* Pros:
* Decouples the implementation entirely from Synapse.
* Uses existing Matrix events & content repo to store the metadata.
* Cons:
* Which AS should provide this service for a room, and why should you trust it?
* Doesn't work well with E2E; you'd have to cut the AS into every room
* the AS would end up subscribing to every room anyway.
2. Have a generic preview API (nothing to do with Matrix) that provides a previewing service:
* Pros:
* Simple and flexible; can be used by any clients at any point
* Cons:
* If each HS provides one of these independently, all the HSes in a room may needlessly DoS the target URI
* We need somewhere to store the URL metadata rather than just using Matrix itself
* We can't piggyback on matrix to distribute the metadata between HSes.
3. Make the synapse of the sending user responsible for spidering the URL and inserting an event asynchronously which describes the metadata.
* Pros:
* Works transparently for all clients
* Piggy-backs nicely on using Matrix for distributing the metadata.
* No confusion as to which AS
* Cons:
* Doesn't work with E2E
* We might want to decouple the implementation of the spider from the HS, given spider behaviour can be quite complicated and evolve much more rapidly than the HS. It's more like a bot than a core part of the server.
4. Make the sending client use the preview API and insert the event itself when successful.
* Pros:
* Works well with E2E
* No custom server functionality
* Lets the client customise the preview that they send (like on FB)
* Cons:
* Entirely specific to the sending client, whereas it'd be nice if /any/ URL was correctly previewed if clients support it.
5. Have the option of specifying a shared (centralised) previewing service used by a room, to avoid all the different HSes in the room DoSing the target.
Best solution is probably a combination of both 2 and 4.
* Sending clients do their best to create and send a preview at the point of sending the message, perhaps delaying the message until the preview is computed? (This also lets the user validate the preview before sending)
* Receiving clients have the option of going and creating their own preview if one doesn't arrive soon enough (or if the original sender didn't create one)
This is a bit magical though in that the preview could come from two entirely different sources - the sending HS or your local one. However, this can always be exposed to users: "Generate your own URL previews if none are available?"
This is tantamount also to senders calculating their own thumbnails for sending in advance of the main content - we are trusting the sender not to lie about the content in the thumbnail. Whereas currently thumbnails are calculated by the receiving homeserver to avoid this attack.
However, this kind of phishing attack does exist whether we let senders pick their thumbnails or not, in that a malicious sender can send normal text messages around the attachment claiming it to be legitimate. We could rely on (future) reputation/abuse management to punish users who phish (be it with bogus metadata or bogus descriptions). Bogus metadata is particularly bad though, especially if it's avoidable.
As a first cut, let's do #2 and have the receiver hit the API to calculate its own previews (as it does currently for image thumbnails). We can then extend/optimise this to option 4 as a special extra if needed.
API
---
GET /_matrix/media/r0/preview_url?url=http://wherever.com
200 OK
{
    "og:type" : "article",
    "og:url" : "https://twitter.com/matrixdotorg/status/684074366691356672",
    "og:title" : "Matrix on Twitter",
    "og:image" : "https://pbs.twimg.com/profile_images/500400952029888512/yI0qtFi7_400x400.png",
    "og:description" : "“Synapse 0.12 is out! Lots of polishing, performance &amp; bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”",
    "og:site_name" : "Twitter"
}
* Downloads the URL
* If HTML, just stores it in RAM and parses it for OG meta tags
* Download any media OG meta tags to the media repo, and refer to them in the OG via mxc:// URIs.
* If a media filetype we know we can thumbnail: store it on disk, and hand it to the thumbnailer. Generate OG meta tags from the thumbnailer contents.
* Otherwise, don't bother downloading further.
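For illustration, fetching a preview from Python (the homeserver URL and
access token are placeholders; whether a token is required depends on server
configuration)::

    import requests

    resp = requests.get(
        "https://my.homeserver.example/_matrix/media/r0/preview_url",
        params={
            "url": "https://twitter.com/matrixdotorg/status/684074366691356672",
            "access_token": "<client_access_token>",  # placeholder
        },
    )
    og = resp.json()
    print(og.get("og:title"), og.get("og:description"))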

docs/workers.rst Normal file

@@ -0,0 +1,98 @@
Scaling synapse via workers
---------------------------
Synapse has experimental support for splitting out functionality into
multiple separate python processes, helping greatly with scalability. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.
All processes continue to share the same database instance, and as such, workers
only work with postgres based synapse deployments (sharing a single sqlite
across multiple processes is a recipe for disaster, plus you should be using
postgres anyway if you care about scalability).
The workers communicate with the master synapse process via a synapse-specific
HTTP protocol called 'replication' - analogous to MySQL or Postgres style
database replication; feeding a stream of relevant data to the workers so they
can be kept in sync with the main synapse process and database state.
To enable workers, you need to add a replication listener to the master synapse, e.g.::
listeners:
- port: 9092
bind_address: '127.0.0.1'
type: http
tls: false
x_forwarded: false
resources:
- names: [replication]
compress: false
Under **no circumstances** should this replication API listener be exposed to the
public internet; it currently implements no authentication whatsoever and is
unencrypted HTTP.
You then create a set of configs for the various worker processes. These
worker configuration files should be stored in a dedicated subdirectory, to
allow synctl to manipulate them.
The current available worker applications are:
* synapse.app.pusher - handles sending push notifications to sygnal and email
* synapse.app.synchrotron - handles /sync endpoints. Can scale horizontally through multiple instances.
* synapse.app.appservice - handles output traffic to Application Services
* synapse.app.federation_reader - handles receiving federation traffic (including public_rooms API)
* synapse.app.media_repository - handles the media repository.
* synapse.app.client_reader - handles client API endpoints like /publicRooms
Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
e.g. the HTTP listener that it provides (if any); logging configuration; etc.
You should minimise the number of overrides though to maintain a usable config.
You must specify the type of worker application (worker_app) and the replication
endpoint that it's talking to on the main synapse process (worker_replication_url).
For instance::
worker_app: synapse.app.synchrotron
# The replication listener on the synapse to talk to.
worker_replication_url: http://127.0.0.1:9092/_synapse/replication
worker_listeners:
- type: http
port: 8083
resources:
- names:
- client
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/synchrotron.pid
worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
...is a full configuration for a synchrotron worker instance, which will expose a
plain HTTP /sync endpoint on port 8083 separately from the /sync endpoint provided
by the main synapse.
Obviously you should configure your loadbalancer to route the /sync endpoint to
the synchrotron instance(s) in this setup.
Finally, to actually run your worker-based synapse, you must pass synctl the -a
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.::
synctl -a $CONFIG/workers start
Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
synapse without restarting all the synchrotrons may result in broken typing
notifications.
To manipulate a specific worker, you pass the -w option to synctl::
synctl -w $CONFIG/workers/synchrotron.yaml restart
All of the above is highly experimental and subject to change as Synapse evolves,
but we document it here to help folks who need highly scalable Synapses similar
to the one running matrix.org!

jenkins-dendron-postgres.sh Executable file

@@ -0,0 +1,24 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git
./dendron/jenkins/build_dendron.sh
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--pusher \
--synchrotron \
--federation-reader \
--client-reader \
--appservice \


@@ -4,58 +4,14 @@ set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
# Output flake8 violations to violations.flake8.log
# Don't exit with non-0 status code on Jenkins,
# so that the build steps continue and a later step can decide whether to
# UNSTABLE or FAILURE this build.
export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?"
./sytest/jenkins/prep_sytest_for_postgres.sh
rm .coverage* || echo "No coverage files to remove"
tox --notest -e py27
TOX_BIN=$WORKSPACE/.tox/py27/bin
$TOX_BIN/pip install psycopg2
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
if [[ ! -e .sytest-base ]]; then
git clone https://github.com/matrix-org/sytest.git .sytest-base --mirror
else
(cd .sytest-base; git fetch -p)
fi
rm -rf sytest
git clone .sytest-base sytest --shared
cd sytest
git checkout "${GIT_BRANCH}" || (echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" ; git checkout develop)
: ${PORT_BASE:=8000}
./jenkins/prep_sytest_for_postgres.sh
echo >&2 "Running sytest with PostgreSQL";
./jenkins/install_and_run.sh --coverage \
--python $TOX_BIN/python \
--synapse-directory $WORKSPACE \
--port-base $PORT_BASE
cd ..
cp sytest/.coverage.* .
# Combine the coverage reports
echo "Combining:" .coverage.*
$TOX_BIN/python -m coverage combine
# Output coverage to coverage.xml
$TOX_BIN/coverage xml -o coverage.xml
./sytest/jenkins/install_and_run.sh \
--synapse-directory $WORKSPACE \


@@ -4,52 +4,12 @@ set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
# Output flake8 violations to violations.flake8.log
# Don't exit with non-0 status code on Jenkins,
# so that the build steps continue and a later step can decide whether to
# UNSTABLE or FAILURE this build.
export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?"
rm .coverage* || echo "No coverage files to remove"
tox --notest -e py27
TOX_BIN=$WORKSPACE/.tox/py27/bin
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
if [[ ! -e .sytest-base ]]; then
git clone https://github.com/matrix-org/sytest.git .sytest-base --mirror
else
(cd .sytest-base; git fetch -p)
fi
rm -rf sytest
git clone .sytest-base sytest --shared
cd sytest
git checkout "${GIT_BRANCH}" || (echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" ; git checkout develop)
: ${PORT_BASE:=8500}
./jenkins/install_and_run.sh --coverage \
--python $TOX_BIN/python \
--synapse-directory $WORKSPACE \
--port-base $PORT_BASE
cd ..
cp sytest/.coverage.* .
# Combine the coverage reports
echo "Combining:" .coverage.*
$TOX_BIN/python -m coverage combine
# Output coverage to coverage.xml
$TOX_BIN/coverage xml -o coverage.xml
./sytest/jenkins/install_and_run.sh \
--synapse-directory $WORKSPACE \


@@ -22,4 +22,9 @@ export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished w
rm .coverage* || echo "No coverage files to remove"
tox --notest -e py27
TOX_BIN=$WORKSPACE/.tox/py27/bin
python synapse/python_dependencies.py | xargs -n1 $TOX_BIN/pip install
$TOX_BIN/pip install lxml
tox -e py27


@@ -1,86 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
# Output flake8 violations to violations.flake8.log
# Don't exit with non-0 status code on Jenkins,
# so that the build steps continue and a later step can decide whether to
# UNSTABLE or FAILURE this build.
export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?"
rm .coverage* || echo "No coverage files to remove"
tox
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
TOX_BIN=$WORKSPACE/.tox/py27/bin
if [[ ! -e .sytest-base ]]; then
git clone https://github.com/matrix-org/sytest.git .sytest-base --mirror
else
(cd .sytest-base; git fetch -p)
fi
rm -rf sytest
git clone .sytest-base sytest --shared
cd sytest
git checkout "${GIT_BRANCH}" || (echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" ; git checkout develop)
: ${PERL5LIB:=$WORKSPACE/perl5/lib/perl5}
: ${PERL_MB_OPT:=--install_base=$WORKSPACE/perl5}
: ${PERL_MM_OPT:=INSTALL_BASE=$WORKSPACE/perl5}
export PERL5LIB PERL_MB_OPT PERL_MM_OPT
./install-deps.pl
: ${PORT_BASE:=8000}
echo >&2 "Running sytest with SQLite3";
./run-tests.pl --coverage -O tap --synapse-directory $WORKSPACE \
--python $TOX_BIN/python --all --port-base $PORT_BASE > results-sqlite3.tap
RUN_POSTGRES=""
for port in $(($PORT_BASE + 1)) $(($PORT_BASE + 2)); do
if psql synapse_jenkins_$port <<< ""; then
RUN_POSTGRES="$RUN_POSTGRES:$port"
cat > localhost-$port/database.yaml << EOF
name: psycopg2
args:
database: synapse_jenkins_$port
EOF
fi
done
# Run if both postgresql databases exist
if test "$RUN_POSTGRES" = ":$(($PORT_BASE + 1)):$(($PORT_BASE + 2))"; then
echo >&2 "Running sytest with PostgreSQL";
$TOX_BIN/pip install psycopg2
./run-tests.pl --coverage -O tap --synapse-directory $WORKSPACE \
--python $TOX_BIN/python --all --port-base $PORT_BASE > results-postgresql.tap
else
echo >&2 "Skipping running sytest with PostgreSQL, $RUN_POSTGRES"
fi
cd ..
cp sytest/.coverage.* .
# Combine the coverage reports
echo "Combining:" .coverage.*
$TOX_BIN/python -m coverage combine
# Output coverage to coverage.xml
$TOX_BIN/coverage xml -o coverage.xml

jenkins/clone.sh Executable file

@@ -0,0 +1,44 @@
#! /bin/bash
# This clones a project from github into a named subdirectory
# If the project has a branch with the same name as this branch
# then it will checkout that branch after cloning.
# Otherwise it will checkout "origin/develop."
# The first argument is the name of the directory to checkout
# the branch into.
# The second argument is the URL of the remote repository to checkout.
# Usually something like https://github.com/matrix-org/sytest.git
set -eux
NAME=$1
PROJECT=$2
BASE=".$NAME-base"
# Update our mirror.
if [ ! -d ".$NAME-base" ]; then
# Create a local mirror of the source repository.
# This saves us from having to download the entire repository
# when this script is next run.
git clone "$PROJECT" "$BASE" --mirror
else
# Fetch any updates from the source repository.
(cd "$BASE"; git fetch -p)
fi
# Remove the existing repository so that we have a clean copy
rm -rf "$NAME"
# Cloning with --shared means that we will share portions of the
# .git directory with our local mirror.
git clone "$BASE" "$NAME" --shared
# Jenkins may have supplied us with the name of the branch in the
# environment. Otherwise we will have to guess based on the current
# commit.
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
cd "$NAME"
# check out the relevant branch
git checkout "${GIT_BRANCH}" || (
echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop"
git checkout "origin/develop"
)

jenkins/prepare_synapse.sh Executable file

@@ -0,0 +1,20 @@
#! /bin/bash
cd "`dirname $0`/.."
TOX_DIR=$WORKSPACE/.tox
mkdir -p $TOX_DIR
if ! [ $TOX_DIR -ef .tox ]; then
ln -s "$TOX_DIR" .tox
fi
# set up the virtualenv
tox -e py27 --notest -v
TOX_BIN=$TOX_DIR/py27/bin
$TOX_BIN/pip install setuptools
python synapse/python_dependencies.py | xargs -n1 $TOX_BIN/pip install
$TOX_BIN/pip install lxml
$TOX_BIN/pip install psycopg2


@@ -0,0 +1,7 @@
.header {
border-bottom: 4px solid #e4f7ed ! important;
}
.notif_link a, .footer a {
color: #76CFA6 ! important;
}

res/templates/mail.css Normal file

@@ -0,0 +1,156 @@
body {
margin: 0px;
}
pre, code {
word-break: break-word;
white-space: pre-wrap;
}
#page {
font-family: 'Open Sans', Helvetica, Arial, Sans-Serif;
font-color: #454545;
font-size: 12pt;
width: 100%;
padding: 20px;
}
#inner {
width: 640px;
}
.header {
width: 100%;
height: 87px;
color: #454545;
border-bottom: 4px solid #e5e5e5;
}
.logo {
text-align: right;
margin-left: 20px;
}
.salutation {
padding-top: 10px;
font-weight: bold;
}
.summarytext {
}
.room {
width: 100%;
color: #454545;
border-bottom: 1px solid #e5e5e5;
}
.room_header td {
padding-top: 38px;
padding-bottom: 10px;
border-bottom: 1px solid #e5e5e5;
}
.room_name {
vertical-align: middle;
font-size: 18px;
font-weight: bold;
}
.room_header h2 {
margin-top: 0px;
margin-left: 75px;
font-size: 20px;
}
.room_avatar {
width: 56px;
line-height: 0px;
text-align: center;
vertical-align: middle;
}
.room_avatar img {
width: 48px;
height: 48px;
object-fit: cover;
border-radius: 24px;
}
.notif {
border-bottom: 1px solid #e5e5e5;
margin-top: 16px;
padding-bottom: 16px;
}
.historical_message .sender_avatar {
opacity: 0.3;
}
/* spell out opacity and historical_message class names for Outlook aka Word */
.historical_message .sender_name {
color: #e3e3e3;
}
.historical_message .message_time {
color: #e3e3e3;
}
.historical_message .message_body {
color: #c7c7c7;
}
.historical_message td,
.message td {
padding-top: 10px;
}
.sender_avatar {
width: 56px;
text-align: center;
vertical-align: top;
}
.sender_avatar img {
margin-top: -2px;
width: 32px;
height: 32px;
border-radius: 16px;
}
.sender_name {
display: inline;
font-size: 13px;
color: #a2a2a2;
}
.message_time {
text-align: right;
width: 100px;
font-size: 11px;
color: #a2a2a2;
}
.message_body {
}
.notif_link td {
padding-top: 10px;
padding-bottom: 10px;
font-weight: bold;
}
.notif_link a, .footer a {
color: #454545;
text-decoration: none;
}
.debug {
font-size: 10px;
color: #888;
}
.footer {
margin-top: 20px;
text-align: center;
}

res/templates/notif.html Normal file

@@ -0,0 +1,45 @@
{% for message in notif.messages %}
<tr class="{{ "historical_message" if message.is_historical else "message" }}">
<td class="sender_avatar">
{% if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %}
{% if message.sender_avatar_url %}
<img alt="" class="sender_avatar" src="{{ message.sender_avatar_url|mxc_to_http(32,32) }}" />
{% else %}
{% if message.sender_hash % 3 == 0 %}
<img class="sender_avatar" src="https://vector.im/beta/img/76cfa6.png" />
{% elif message.sender_hash % 3 == 1 %}
<img class="sender_avatar" src="https://vector.im/beta/img/50e2c2.png" />
{% else %}
<img class="sender_avatar" src="https://vector.im/beta/img/f4c371.png" />
{% endif %}
{% endif %}
{% endif %}
</td>
<td class="message_contents">
{% if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %}
<div class="sender_name">{% if message.msgtype == "m.emote" %}*{% endif %} {{ message.sender_name }}</div>
{% endif %}
<div class="message_body">
{% if message.msgtype == "m.text" %}
{{ message.body_text_html }}
{% elif message.msgtype == "m.emote" %}
{{ message.body_text_html }}
{% elif message.msgtype == "m.notice" %}
{{ message.body_text_html }}
{% elif message.msgtype == "m.image" %}
<img src="{{ message.image_url|mxc_to_http(640, 480, scale) }}" />
{% elif message.msgtype == "m.file" %}
<span class="filename">{{ message.body_text_plain }}</span>
{% endif %}
</div>
</td>
<td class="message_time">{{ message.ts|format_ts("%H:%M") }}</td>
</tr>
{% endfor %}
<tr class="notif_link">
<td></td>
<td>
<a href="{{ notif.link }}">View {{ room.title }}</a>
</td>
<td></td>
</tr>

res/templates/notif.txt Normal file

@@ -0,0 +1,16 @@
{% for message in notif.messages %}
{% if message.msgtype == "m.emote" %}* {% endif %}{{ message.sender_name }} ({{ message.ts|format_ts("%H:%M") }})
{% if message.msgtype == "m.text" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.emote" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.notice" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.image" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.file" %}
{{ message.body_text_plain }}
{% endif %}
{% endfor %}
View {{ room.title }} at {{ notif.link }}


@@ -0,0 +1,55 @@
<!doctype html>
<html lang="en">
<head>
<style type="text/css">
{% include 'mail.css' without context %}
{% include "mail-%s.css" % app_name ignore missing without context %}
</style>
</head>
<body>
<table id="page">
<tr>
<td> </td>
<td id="inner">
<table class="header">
<tr>
<td>
<div class="salutation">Hi {{ user_display_name }},</div>
<div class="summarytext">{{ summary_text }}</div>
</td>
<td class="logo">
{% if app_name == "Riot" %}
<img src="http://matrix.org/img/riot-logo-email.png" width="83" height="83" alt="[Riot]"/>
{% elif app_name == "Vector" %}
<img src="http://matrix.org/img/vector-logo-email.png" width="64" height="83" alt="[Vector]"/>
{% else %}
<img src="http://matrix.org/img/matrix-120x51.png" width="120" height="51" alt="[matrix]"/>
{% endif %}
</td>
</tr>
</table>
{% for room in rooms %}
{% include 'room.html' with context %}
{% endfor %}
<div class="footer">
<a href="{{ unsubscribe_link }}">Unsubscribe</a>
<br/>
<br/>
<div class="debug">
Sending email at {{ reason.now|format_ts("%c") }} due to activity in room {{ reason.room_name }} because
an event was received at {{ reason.received_at|format_ts("%c") }}
which is more than {{ "%.1f"|format(reason.delay_before_mail_ms / (60*1000)) }} ({{ reason.delay_before_mail_ms }}) mins ago,
{% if reason.last_sent_ts %}
and the last time we sent a mail for this room was {{ reason.last_sent_ts|format_ts("%c") }},
which is more than {{ "%.1f"|format(reason.throttle_ms / (60*1000)) }} (current throttle_ms) mins ago.
{% else %}
and we don't have a last time we sent a mail for this room.
{% endif %}
</div>
</div>
</td>
<td> </td>
</tr>
</table>
</body>
</html>


@@ -0,0 +1,10 @@
Hi {{ user_display_name }},
{{ summary_text }}
{% for room in rooms %}
{% include 'room.txt' with context %}
{% endfor %}
You can disable these notifications at {{ unsubscribe_link }}

res/templates/room.html Normal file

@@ -0,0 +1,33 @@
<table class="room">
<tr class="room_header">
<td class="room_avatar">
{% if room.avatar_url %}
<img alt="" src="{{ room.avatar_url|mxc_to_http(48,48) }}" />
{% else %}
{% if room.hash % 3 == 0 %}
<img alt="" src="https://vector.im/beta/img/76cfa6.png" />
{% elif room.hash % 3 == 1 %}
<img alt="" src="https://vector.im/beta/img/50e2c2.png" />
{% else %}
<img alt="" src="https://vector.im/beta/img/f4c371.png" />
{% endif %}
{% endif %}
</td>
<td class="room_name" colspan="2">
{{ room.title }}
</td>
</tr>
{% if room.invite %}
<tr>
<td></td>
<td>
<a href="{{ room.link }}">Join the conversation.</a>
</td>
<td></td>
</tr>
{% else %}
{% for notif in room.notifs %}
{% include 'notif.html' with context %}
{% endfor %}
{% endif %}
</table>

res/templates/room.txt Normal file

@@ -0,0 +1,9 @@
{{ room.title }}
{% if room.invite %}
You've been invited, join at {{ room.link }}
{% else %}
{% for notif in room.notifs %}
{% include 'notif.txt' with context %}
{% endfor %}
{% endif %}


@@ -116,17 +116,19 @@ def get_json(origin_name, origin_key, destination, path):
authorization_headers = []
for key, sig in signed_json["signatures"][origin_name].items():
authorization_headers.append(bytes(
"X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (
origin_name, key, sig,
)
))
header = "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (
origin_name, key, sig,
)
authorization_headers.append(bytes(header))
sys.stderr.write(header)
sys.stderr.write("\n")
result = requests.get(
lookup(destination, path),
headers={"Authorization": authorization_headers[0]},
verify=False,
)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
return result.json()
@@ -141,6 +143,7 @@ def main():
)
json.dump(result, sys.stdout)
print ""
if __name__ == "__main__":
main()


@@ -1,10 +1,16 @@
#!/usr/bin/env python
import argparse
import sys
import bcrypt
import getpass
import yaml
bcrypt_rounds=12
password_pepper = ""
def prompt_for_pass():
password = getpass.getpass("Password: ")
@@ -28,12 +34,22 @@ if __name__ == "__main__":
default=None,
help="New password for user. Will prompt if omitted.",
)
parser.add_argument(
"-c", "--config",
type=argparse.FileType('r'),
help="Path to server config file. Used to read in bcrypt_rounds and password_pepper.",
)
args = parser.parse_args()
if "config" in args and args.config:
config = yaml.safe_load(args.config)
bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
password_config = config.get("password_config", {})
password_pepper = password_config.get("pepper", password_pepper)
password = args.password
if not password:
password = prompt_for_pass()
print bcrypt.hashpw(password + password_pepper, bcrypt.gensalt(bcrypt_rounds))


@@ -25,18 +25,26 @@ import urllib2
import yaml
def request_registration(user, password, server_location, shared_secret, admin=False):
mac = hmac.new(
key=shared_secret,
digestmod=hashlib.sha1,
)
mac.update(user)
mac.update("\x00")
mac.update(password)
mac.update("\x00")
mac.update("admin" if admin else "notadmin")
mac = mac.hexdigest()
data = {
"user": user,
"password": password,
"mac": mac,
"type": "org.matrix.login.shared_secret",
"admin": admin,
}
server_location = server_location.rstrip("/")
@@ -68,7 +76,7 @@ def request_registration(user, password, server_location, shared_secret):
sys.exit(1)
def register_new_user(user, password, server_location, shared_secret, admin):
if not user:
try:
default_user = getpass.getuser()
@@ -99,7 +107,14 @@ def register_new_user(user, password, server_location, shared_secret):
print "Passwords do not match"
sys.exit(1)
if not admin:
admin = raw_input("Make admin [no]: ")
if admin in ("y", "yes", "true"):
admin = True
else:
admin = False
request_registration(user, password, server_location, shared_secret, bool(admin))
if __name__ == "__main__":
@@ -119,6 +134,11 @@ if __name__ == "__main__":
default=None,
help="New password for user. Will prompt if omitted.",
)
parser.add_argument(
"-a", "--admin",
action="store_true",
help="Register new user as an admin. Will prompt if omitted.",
)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument(
@@ -151,4 +171,4 @@ if __name__ == "__main__":
else:
secret = args.shared_secret
register_new_user(args.user, args.password, args.server_url, secret, args.admin)
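For reference, the MAC the script now sends can be reproduced standalone; a
sketch with placeholder inputs (Python 2, like the script itself)::

    import hmac
    import hashlib

    shared_secret = "<registration_shared_secret>"  # placeholder
    mac = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
    mac.update("newuser")       # username
    mac.update("\x00")
    mac.update("correcthorse")  # password
    mac.update("\x00")
    mac.update("admin")         # or "notadmin"
    print(mac.hexdigest())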


@@ -34,11 +34,12 @@ logger = logging.getLogger("synapse_port_db")
BOOLEAN_COLUMNS = {
"events": ["processed", "outlier"],
"events": ["processed", "outlier", "contains_url"],
"rooms": ["is_public"],
"event_edges": ["is_state"],
"presence_list": ["accepted"],
"presence_stream": ["currently_active"],
"public_room_list_stream": ["visibility"],
}
@@ -71,6 +72,14 @@ APPEND_ONLY_TABLES = [
"event_to_state_groups",
"rejections",
"event_search",
"presence_stream",
"push_rules_stream",
"current_state_resets",
"ex_outlier_stream",
"cache_invalidation_stream",
"public_room_list_stream",
"state_group_edges",
"stream_ordering_to_exterm",
]
@@ -92,8 +101,12 @@ class Store(object):
_simple_select_onecol_txn = SQLBaseStore.__dict__["_simple_select_onecol_txn"]
_simple_select_onecol = SQLBaseStore.__dict__["_simple_select_onecol"]
_simple_select_one = SQLBaseStore.__dict__["_simple_select_one"]
_simple_select_one_txn = SQLBaseStore.__dict__["_simple_select_one_txn"]
_simple_select_one_onecol = SQLBaseStore.__dict__["_simple_select_one_onecol"]
_simple_select_one_onecol_txn = SQLBaseStore.__dict__["_simple_select_one_onecol_txn"]
_simple_select_one_onecol_txn = SQLBaseStore.__dict__[
"_simple_select_one_onecol_txn"
]
_simple_update_one = SQLBaseStore.__dict__["_simple_update_one"]
_simple_update_one_txn = SQLBaseStore.__dict__["_simple_update_one_txn"]
@@ -158,31 +171,40 @@ class Porter(object):
def setup_table(self, table):
if table in APPEND_ONLY_TABLES:
# It's safe to just carry on inserting.
row = yield self.postgres_store._simple_select_one(
table="port_from_sqlite3",
keyvalues={"table_name": table},
retcol="rowid",
retcols=("forward_rowid", "backward_rowid"),
allow_none=True,
)
total_to_port = None
if row is None:
if table == "sent_transactions":
next_chunk, already_ported, total_to_port = (
forward_chunk, already_ported, total_to_port = (
yield self._setup_sent_transactions()
)
backward_chunk = 0
else:
yield self.postgres_store._simple_insert(
table="port_from_sqlite3",
values={"table_name": table, "rowid": 1}
values={
"table_name": table,
"forward_rowid": 1,
"backward_rowid": 0,
}
)
next_chunk = 1
forward_chunk = 1
backward_chunk = 0
already_ported = 0
else:
forward_chunk = row["forward_rowid"]
backward_chunk = row["backward_rowid"]
if total_to_port is None:
already_ported, total_to_port = yield self._get_total_count_to_port(
table, next_chunk
table, forward_chunk, backward_chunk
)
else:
def delete_all(txn):
@@ -196,32 +218,124 @@ class Porter(object):
yield self.postgres_store._simple_insert(
table="port_from_sqlite3",
values={"table_name": table, "rowid": 0}
values={
"table_name": table,
"forward_rowid": 1,
"backward_rowid": 0,
}
)
next_chunk = 1
forward_chunk = 1
backward_chunk = 0
already_ported, total_to_port = yield self._get_total_count_to_port(
table, next_chunk
table, forward_chunk, backward_chunk
)
defer.returnValue((table, already_ported, total_to_port, next_chunk))
defer.returnValue(
(table, already_ported, total_to_port, forward_chunk, backward_chunk)
)
@defer.inlineCallbacks
def handle_table(self, table, postgres_size, table_size, next_chunk):
def handle_table(self, table, postgres_size, table_size, forward_chunk,
backward_chunk):
if not table_size:
return
self.progress.add_table(table, postgres_size, table_size)
select = (
if table == "event_search":
yield self.handle_search_table(
postgres_size, table_size, forward_chunk, backward_chunk
)
return
forward_select = (
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?"
% (table,)
)
backward_select = (
"SELECT rowid, * FROM %s WHERE rowid <= ? ORDER BY rowid LIMIT ?"
% (table,)
)
do_forward = [True]
do_backward = [True]
while True:
def r(txn):
txn.execute(select, (next_chunk, self.batch_size,))
forward_rows = []
backward_rows = []
if do_forward[0]:
txn.execute(forward_select, (forward_chunk, self.batch_size,))
forward_rows = txn.fetchall()
if not forward_rows:
do_forward[0] = False
if do_backward[0]:
txn.execute(backward_select, (backward_chunk, self.batch_size,))
backward_rows = txn.fetchall()
if not backward_rows:
do_backward[0] = False
if forward_rows or backward_rows:
headers = [column[0] for column in txn.description]
else:
headers = None
return headers, forward_rows, backward_rows
headers, frows, brows = yield self.sqlite_store.runInteraction(
"select", r
)
if frows or brows:
if frows:
forward_chunk = max(row[0] for row in frows) + 1
if brows:
backward_chunk = min(row[0] for row in brows) - 1
rows = frows + brows
self._convert_rows(table, headers, rows)
def insert(txn):
self.postgres_store.insert_many_txn(
txn, table, headers[1:], rows
)
self.postgres_store._simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": table},
updatevalues={
"forward_rowid": forward_chunk,
"backward_rowid": backward_chunk,
},
)
yield self.postgres_store.execute(insert)
postgres_size += len(rows)
self.progress.update(table, postgres_size)
else:
return
@defer.inlineCallbacks
def handle_search_table(self, postgres_size, table_size, forward_chunk,
backward_chunk):
select = (
"SELECT es.rowid, es.*, e.origin_server_ts, e.stream_ordering"
" FROM event_search as es"
" INNER JOIN events AS e USING (event_id, room_id)"
" WHERE es.rowid >= ?"
" ORDER BY es.rowid LIMIT ?"
)
while True:
def r(txn):
txn.execute(select, (forward_chunk, self.batch_size,))
rows = txn.fetchall()
headers = [column[0] for column in txn.description]
@@ -230,59 +344,51 @@ class Porter(object):
headers, rows = yield self.sqlite_store.runInteraction("select", r)
if rows:
next_chunk = rows[-1][0] + 1
forward_chunk = rows[-1][0] + 1
if table == "event_search":
# We have to treat event_search differently since it has a
# different structure in the two different databases.
def insert(txn):
sql = (
"INSERT INTO event_search (event_id, room_id, key, sender, vector)"
" VALUES (?,?,?,?,to_tsvector('english', ?))"
# We have to treat event_search differently since it has a
# different structure in the two different databases.
def insert(txn):
sql = (
"INSERT INTO event_search (event_id, room_id, key,"
" sender, vector, origin_server_ts, stream_ordering)"
" VALUES (?,?,?,?,to_tsvector('english', ?),?,?)"
)
rows_dict = [
dict(zip(headers, row))
for row in rows
]
txn.executemany(sql, [
(
row["event_id"],
row["room_id"],
row["key"],
row["sender"],
row["value"],
row["origin_server_ts"],
row["stream_ordering"],
)
for row in rows_dict
])
rows_dict = [
dict(zip(headers, row))
for row in rows
]
txn.executemany(sql, [
(
row["event_id"],
row["room_id"],
row["key"],
row["sender"],
row["value"],
)
for row in rows_dict
])
self.postgres_store._simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": table},
updatevalues={"rowid": next_chunk},
)
else:
self._convert_rows(table, headers, rows)
def insert(txn):
self.postgres_store.insert_many_txn(
txn, table, headers[1:], rows
)
self.postgres_store._simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": table},
updatevalues={"rowid": next_chunk},
)
self.postgres_store._simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": "event_search"},
updatevalues={
"forward_rowid": forward_chunk,
"backward_rowid": backward_chunk,
},
)
yield self.postgres_store.execute(insert)
postgres_size += len(rows)
self.progress.update(table, postgres_size)
self.progress.update("event_search", postgres_size)
else:
return
@@ -356,10 +462,32 @@ class Porter(object):
txn.execute(
"CREATE TABLE port_from_sqlite3 ("
" table_name varchar(100) NOT NULL UNIQUE,"
" rowid bigint NOT NULL"
" forward_rowid bigint NOT NULL,"
" backward_rowid bigint NOT NULL"
")"
)
# The old port script created a table with just a "rowid" column.
# We want people to be able to rerun this script from an old port
# so that they can pick up any missing events that were not
# ported across.
def alter_table(txn):
txn.execute(
"ALTER TABLE IF EXISTS port_from_sqlite3"
" RENAME rowid TO forward_rowid"
)
txn.execute(
"ALTER TABLE IF EXISTS port_from_sqlite3"
" ADD backward_rowid bigint NOT NULL DEFAULT 0"
)
try:
yield self.postgres_store.runInteraction(
"alter_table", alter_table
)
except Exception as e:
logger.info("Failed to create port table: %s", e)
try:
yield self.postgres_store.runInteraction(
"create_port_table", create_port_table
@@ -419,7 +547,7 @@ class Porter(object):
@defer.inlineCallbacks
def _setup_sent_transactions(self):
# Only save things from the last day
yesterday = int(time.time()*1000) - 86400000
yesterday = int(time.time() * 1000) - 86400000
# And save the max transaction id from each destination
select = (
@@ -475,7 +603,11 @@ class Porter(object):
yield self.postgres_store._simple_insert(
table="port_from_sqlite3",
values={"table_name": "sent_transactions", "rowid": next_chunk}
values={
"table_name": "sent_transactions",
"forward_rowid": next_chunk,
"backward_rowid": 0,
}
)
def get_sent_table_size(txn):
@@ -496,13 +628,18 @@ class Porter(object):
defer.returnValue((next_chunk, inserted_rows, total_count))
@defer.inlineCallbacks
def _get_remaining_count_to_port(self, table, next_chunk):
rows = yield self.sqlite_store.execute_sql(
def _get_remaining_count_to_port(self, table, forward_chunk, backward_chunk):
frows = yield self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid >= ?" % (table,),
next_chunk,
forward_chunk,
)
defer.returnValue(rows[0][0])
brows = yield self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid <= ?" % (table,),
backward_chunk,
)
defer.returnValue(frows[0][0] + brows[0][0])
@defer.inlineCallbacks
def _get_already_ported_count(self, table):
@@ -513,10 +650,10 @@ class Porter(object):
defer.returnValue(rows[0][0])
@defer.inlineCallbacks
def _get_total_count_to_port(self, table, next_chunk):
def _get_total_count_to_port(self, table, forward_chunk, backward_chunk):
remaining, done = yield defer.gatherResults(
[
self._get_remaining_count_to_port(table, next_chunk),
self._get_remaining_count_to_port(table, forward_chunk, backward_chunk),
self._get_already_ported_count(table),
],
consumeErrors=True,
@@ -647,7 +784,7 @@ class CursesProgress(Progress):
color = curses.color_pair(2) if perc == 100 else curses.color_pair(1)
self.stdscr.addstr(
i+2, left_margin + max_len - len(table),
i + 2, left_margin + max_len - len(table),
table,
curses.A_BOLD | color,
)
@@ -655,18 +792,18 @@ class CursesProgress(Progress):
size = 20
progress = "[%s%s]" % (
"#" * int(perc*size/100),
" " * (size - int(perc*size/100)),
"#" * int(perc * size / 100),
" " * (size - int(perc * size / 100)),
)
self.stdscr.addstr(
i+2, left_margin + max_len + middle_space,
i + 2, left_margin + max_len + middle_space,
"%s %3d%% (%d/%d)" % (progress, perc, data["num_done"], data["total"]),
)
if self.finished:
self.stdscr.addstr(
rows-1, 0,
rows - 1, 0,
"Press any key to exit...",
)
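The forward_rowid/backward_rowid pair introduced above lets the script resume from a saved position and sweep outward in both directions, so rows an older one-way port missed are picked up. A condensed sketch of that windowing, assuming fetch(comparison, chunk, limit) stands in for the real SQLite query:

    def port_windows(fetch, forward_chunk, backward_chunk, batch_size=100):
        # Walk forwards from forward_chunk and backwards from backward_chunk
        # until neither direction returns any more rows.
        do_forward, do_backward = True, True
        while True:
            frows = fetch("rowid >= ?", forward_chunk, batch_size) if do_forward else []
            brows = fetch("rowid <= ?", backward_chunk, batch_size) if do_backward else []
            if frows:
                forward_chunk = max(r[0] for r in frows) + 1
            else:
                do_forward = False
            if brows:
                backward_chunk = min(r[0] for r in brows) - 1
            else:
                do_backward = False
            if not (frows or brows):
                return
            yield frows + brows, forward_chunk, backward_chunk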


@@ -16,7 +16,5 @@ ignore =
[flake8]
max-line-length = 90
ignore = W503 ; W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it.
[pep8]
max-line-length = 90
# W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it.
ignore = W503


@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.14.0"
__version__ = "0.18.4"


@@ -13,22 +13,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module contains classes for authenticating the user."""
import logging
import pymacaroons
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json, SignatureVerifyException
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, Codes, SynapseError, EventSizeError
from synapse.types import Requester, RoomID, UserID, EventID
from synapse.util.logutils import log_function
from synapse.util.logcontext import preserve_context_over_fn
from unpaddedbase64 import decode_base64
import logging
import pymacaroons
import synapse.types
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, Codes, SynapseError, EventSizeError
from synapse.types import UserID, get_domain_from_id
from synapse.util.logcontext import preserve_context_over_fn
from synapse.util.logutils import log_function
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
@@ -41,12 +41,20 @@ AuthEventTypes = (
class Auth(object):
"""
FIXME: This class contains a mix of functions for authenticating users
of our client-server API and authenticating events added to room graphs.
"""
def __init__(self, hs):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
# Docs for these currently live at
# github.com/matrix-org/matrix-doc/blob/master/drafts/macaroons_caveats.rst
# In addition, we have type == delete_pusher which grants access only to
# delete pushers.
self._KNOWN_CAVEAT_PREFIXES = set([
"gen = ",
"guest = ",
@@ -55,7 +63,18 @@ class Auth(object):
"user_id = ",
])
def check(self, event, auth_events):
@defer.inlineCallbacks
def check_from_context(self, event, context, do_sig_check=True):
auth_events_ids = yield self.compute_auth_events(
event, context.prev_state_ids, for_verification=True,
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
(e.type, e.state_key): e for e in auth_events.values()
}
self.check(event, auth_events=auth_events, do_sig_check=do_sig_check)
def check(self, event, auth_events, do_sig_check=True):
""" Checks if this event is correctly authed.
Args:
@@ -66,11 +85,35 @@ class Auth(object):
Returns:
True if the auth checks pass.
"""
self.check_size_limits(event)
with Measure(self.clock, "auth.check"):
self.check_size_limits(event)
try:
if not hasattr(event, "room_id"):
raise AuthError(500, "Event has no room_id: %s" % event)
if do_sig_check:
sender_domain = get_domain_from_id(event.sender)
event_id_domain = get_domain_from_id(event.event_id)
is_invite_via_3pid = (
event.type == EventTypes.Member
and event.membership == Membership.INVITE
and "third_party_invite" in event.content
)
# Check the sender's domain has signed the event
if not event.signatures.get(sender_domain):
# We allow invites via 3pid to have a sender from a different
# HS, as the sender must match the sender of the original
# 3pid invite. This is checked further down with the
# other dedicated membership checks.
if not is_invite_via_3pid:
raise AuthError(403, "Event not signed by sender's server")
# Check the event_id's domain has signed the event
if not event.signatures.get(event_id_domain):
raise AuthError(403, "Event not signed by sending server")
if auth_events is None:
# Oh, we don't know what the state of the room was, so we
# are trusting that this is allowed (at least for now)
@@ -78,6 +121,12 @@ class Auth(object):
return True
if event.type == EventTypes.Create:
room_id_domain = get_domain_from_id(event.room_id)
if room_id_domain != sender_domain:
raise AuthError(
403,
"Creation event's room_id domain does not match sender's"
)
# FIXME
return True
@@ -89,8 +138,8 @@ class Auth(object):
"Room %r does not exist" % (event.room_id,)
)
creating_domain = RoomID.from_string(event.room_id).domain
originating_domain = UserID.from_string(event.sender).domain
creating_domain = get_domain_from_id(event.room_id)
originating_domain = get_domain_from_id(event.sender)
if creating_domain != originating_domain:
if not self.can_federate(event, auth_events):
raise AuthError(
@@ -100,6 +149,22 @@ class Auth(object):
# FIXME: Temp hack
if event.type == EventTypes.Aliases:
if not event.is_state():
raise AuthError(
403,
"Alias event must be a state event",
)
if not event.state_key:
raise AuthError(
403,
"Alias event must have non-empty state_key"
)
sender_domain = get_domain_from_id(event.sender)
if event.state_key != sender_domain:
raise AuthError(
403,
"Alias event's state_key does not match sender's domain"
)
return True
logger.debug(
@@ -118,6 +183,24 @@ class Auth(object):
return allowed
self.check_event_sender_in_room(event, auth_events)
# Special case to allow m.room.third_party_invite events wherever
# a user is allowed to issue invites. Fixes
# https://github.com/vector-im/vector-web/issues/1208 hopefully
if event.type == EventTypes.ThirdPartyInvite:
user_level = self._get_user_power_level(event.user_id, auth_events)
invite_level = self._get_named_level(auth_events, "invite", 0)
if user_level < invite_level:
raise AuthError(
403, (
"You cannot issue a third party invite for %s." %
(event.content.display_name,)
)
)
else:
return True
self._can_send_event(event, auth_events)
if event.type == EventTypes.PowerLevels:
@@ -127,13 +210,6 @@ class Auth(object):
self.check_redaction(event, auth_events)
logger.debug("Allowing! %s", event)
except AuthError as e:
logger.info(
"Event auth check failed on event %s with msg: %s",
event, e.msg
)
logger.info("Denying! %s", event)
raise
def check_size_limits(self, event):
def too_big(field):
@@ -219,21 +295,17 @@ class Auth(object):
@defer.inlineCallbacks
def check_host_in_room(self, room_id, host):
curr_state = yield self.state.get_current_state(room_id)
with Measure(self.clock, "check_host_in_room"):
latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
for event in curr_state.values():
if event.type == EventTypes.Member:
try:
if UserID.from_string(event.state_key).domain != host:
continue
except:
logger.warn("state_key not user_id: %s", event.state_key)
continue
entry = yield self.state.resolve_state_groups(
room_id, latest_event_ids
)
if event.content["membership"] == Membership.JOIN:
defer.returnValue(True)
defer.returnValue(False)
ret = yield self.store.is_host_joined(
room_id, host, entry.state_group, entry.state
)
defer.returnValue(ret)
def check_event_sender_in_room(self, event, auth_events):
key = (EventTypes.Member, event.user_id, )
@@ -271,8 +343,8 @@ class Auth(object):
target_user_id = event.state_key
creating_domain = RoomID.from_string(event.room_id).domain
target_domain = UserID.from_string(target_user_id).domain
creating_domain = get_domain_from_id(event.room_id)
target_domain = get_domain_from_id(target_user_id)
if creating_domain != target_domain:
if not self.can_federate(event, auth_events):
raise AuthError(
@@ -328,6 +400,10 @@ class Auth(object):
if Membership.INVITE == membership and "third_party_invite" in event.content:
if not self._verify_third_party_invite(event, auth_events):
raise AuthError(403, "You are not invited to this room.")
if target_banned:
raise AuthError(
403, "%s is banned from the room" % (target_user_id,)
)
return True
if Membership.JOIN != membership:
@@ -432,6 +508,9 @@ class Auth(object):
if not invite_event:
return False
if invite_event.sender != event.sender:
return False
if event.user_id != invite_event.user_id:
return False
@@ -512,33 +591,38 @@ class Auth(object):
return default
@defer.inlineCallbacks
def get_user_by_req(self, request, allow_guest=False):
def get_user_by_req(self, request, allow_guest=False, rights="access"):
""" Get a registered user's ID.
Args:
request - An HTTP request with an access_token query parameter.
Returns:
tuple of:
UserID (str)
Access token ID (str)
defer.Deferred: resolves to a ``synapse.types.Requester`` object
Raises:
AuthError if no user by that token exists or the token is invalid.
"""
# Can optionally look elsewhere in the request (e.g. headers)
try:
user_id = yield self._get_appservice_user_id(request.args)
user_id, app_service = yield self._get_appservice_user_id(request)
if user_id:
request.authenticated_entity = user_id
defer.returnValue(
Requester(UserID.from_string(user_id), "", False)
synapse.types.create_requester(user_id, app_service=app_service)
)
access_token = request.args["access_token"][0]
user_info = yield self.get_user_by_access_token(access_token)
access_token = get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
user_info = yield self.get_user_by_access_token(access_token, rights)
user = user_info["user"]
token_id = user_info["token_id"]
is_guest = user_info["is_guest"]
# device_id may not be present if get_user_by_access_token has been
# stubbed out.
device_id = user_info.get("device_id")
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
"User-Agent",
@@ -550,7 +634,8 @@ class Auth(object):
user=user,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent
user_agent=user_agent,
device_id=device_id,
)
if is_guest and not allow_guest:
@@ -560,7 +645,9 @@ class Auth(object):
request.authenticated_entity = user.to_string()
defer.returnValue(Requester(user, token_id, is_guest))
defer.returnValue(synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service)
)
except KeyError:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.",
@@ -568,19 +655,21 @@ class Auth(object):
)
@defer.inlineCallbacks
def _get_appservice_user_id(self, request_args):
app_service = yield self.store.get_app_service_by_token(
request_args["access_token"][0]
def _get_appservice_user_id(self, request):
app_service = self.store.get_app_service_by_token(
get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
)
if app_service is None:
defer.returnValue(None)
defer.returnValue((None, None))
if "user_id" not in request_args:
defer.returnValue(app_service.sender)
if "user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
user_id = request_args["user_id"][0]
user_id = request.args["user_id"][0]
if app_service.sender == user_id:
defer.returnValue(app_service.sender)
defer.returnValue((app_service.sender, app_service))
if not app_service.is_interested_in_user(user_id):
raise AuthError(
@@ -592,10 +681,10 @@ class Auth(object):
403,
"Application service has not registered this user"
)
defer.returnValue(user_id)
defer.returnValue((user_id, app_service))
@defer.inlineCallbacks
def get_user_by_access_token(self, token):
def get_user_by_access_token(self, token, rights="access"):
""" Get a registered user's ID.
Args:
@@ -606,46 +695,61 @@ class Auth(object):
AuthError if no user by that token exists or the token is invalid.
"""
try:
ret = yield self.get_user_from_macaroon(token)
ret = yield self.get_user_from_macaroon(token, rights)
except AuthError:
# TODO(daniel): Remove this fallback when all existing access tokens
# have been re-issued as macaroons.
if self.hs.config.expire_access_token:
raise
ret = yield self._look_up_user_by_access_token(token)
defer.returnValue(ret)
@defer.inlineCallbacks
def get_user_from_macaroon(self, macaroon_str):
def get_user_from_macaroon(self, macaroon_str, rights="access"):
try:
macaroon = pymacaroons.Macaroon.deserialize(macaroon_str)
self.validate_macaroon(macaroon, "access", False)
user_prefix = "user_id = "
user = None
user_id = self.get_user_id_from_macaroon(macaroon)
user = UserID.from_string(user_id)
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
)
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
user = UserID.from_string(caveat.caveat_id[len(user_prefix):])
elif caveat.caveat_id == "guest = true":
if caveat.caveat_id == "guest = true":
guest = True
if user is None:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "No user caveat in macaroon",
errcode=Codes.UNKNOWN_TOKEN
)
if guest:
ret = {
"user": user,
"is_guest": True,
"token_id": None,
"device_id": None,
}
elif rights == "delete_pusher":
# We don't store these tokens in the database
ret = {
"user": user,
"is_guest": False,
"token_id": None,
"device_id": None,
}
else:
# This codepath exists so that we can actually return a
# token ID, because we use token IDs in place of device
# identifiers throughout the codebase.
# TODO(daniel): Remove this fallback when device IDs are
# properly implemented.
# This codepath exists for several reasons:
# * so that we can actually return a token ID, which is used
# in some parts of the schema (where we probably ought to
# use device IDs instead)
# * the only way we currently have to invalidate an
# access_token is by removing it from the database, so we
# have to check here that it is still in the db
# * some attributes (notably device_id) aren't stored in the
# macaroon. They probably should be.
# TODO: build the dictionary from the macaroon once the
# above are fixed
ret = yield self._look_up_user_by_access_token(macaroon_str)
if ret["user"] != user:
logger.error(
@@ -665,21 +769,46 @@ class Auth(object):
errcode=Codes.UNKNOWN_TOKEN
)
def validate_macaroon(self, macaroon, type_string, verify_expiry):
def get_user_id_from_macaroon(self, macaroon):
"""Retrieve the user_id given by the caveats on the macaroon.
Does *not* validate the macaroon.
Args:
macaroon (pymacaroons.Macaroon): The macaroon to validate
Returns:
(str) user id
Raises:
AuthError if there is no user_id caveat in the macaroon
"""
user_prefix = "user_id = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
return caveat.caveat_id[len(user_prefix):]
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "No user caveat in macaroon",
errcode=Codes.UNKNOWN_TOKEN
)
def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
"""
validate that a Macaroon is understood by and was signed by this server.
Args:
macaroon(pymacaroons.Macaroon): The macaroon to validate
type_string(str): The kind of token this is (e.g. "access", "refresh")
type_string(str): The kind of token required (e.g. "access", "refresh",
"delete_pusher")
verify_expiry(bool): Whether to verify whether the macaroon has expired.
This should really always be True, but no clients currently implement
token refresh, so we can't enforce expiry yet.
user_id (str): The user_id required
"""
v = pymacaroons.Verifier()
v.satisfy_exact("gen = 1")
v.satisfy_exact("type = " + type_string)
v.satisfy_general(lambda c: c.startswith("user_id = "))
v.satisfy_exact("user_id = %s" % user_id)
v.satisfy_exact("guest = true")
if verify_expiry:
v.satisfy_general(self._verify_expiry)
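A self-contained illustration of the tightened verification, using the pymacaroons calls the diff relies on; the key and caveat values here are invented:

    import pymacaroons

    key = "signing-key"  # stands in for the homeserver's macaroon secret

    m = pymacaroons.Macaroon(location="example.com", identifier="key1", key=key)
    m.add_first_party_caveat("gen = 1")
    m.add_first_party_caveat("type = access")
    m.add_first_party_caveat("user_id = @alice:example.com")

    v = pymacaroons.Verifier()
    v.satisfy_exact("gen = 1")
    v.satisfy_exact("type = access")
    # The change above: require the exact user_id, rather than accepting
    # any caveat that merely starts with "user_id = ".
    v.satisfy_exact("user_id = @alice:example.com")
    v.satisfy_exact("guest = true")  # optional caveat; absent on this macaroon
    v.verify(m, key)  # raises if any caveat is unsatisfied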
@@ -718,18 +847,23 @@ class Auth(object):
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN
)
# we use ret.get() below because *lots* of unit tests stub out
# get_user_by_access_token in a way where it only returns a couple of
# the fields.
user_info = {
"user": UserID.from_string(ret.get("name")),
"token_id": ret.get("token_id", None),
"is_guest": False,
"device_id": ret.get("device_id"),
}
defer.returnValue(user_info)
@defer.inlineCallbacks
def get_appservice_by_req(self, request):
try:
token = request.args["access_token"][0]
service = yield self.store.get_app_service_by_token(token)
token = get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token: %s" % (token,))
raise AuthError(
@@ -738,7 +872,7 @@ class Auth(object):
errcode=Codes.UNKNOWN_TOKEN
)
request.authenticated_entity = service.sender
defer.returnValue(service)
return defer.succeed(service)
except KeyError:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token."
@@ -749,7 +883,7 @@ class Auth(object):
@defer.inlineCallbacks
def add_auth_events(self, builder, context):
auth_ids = self.compute_auth_events(builder, context.current_state)
auth_ids = yield self.compute_auth_events(builder, context.prev_state_ids)
auth_events_entries = yield self.store.add_event_hashes(
auth_ids
@@ -757,30 +891,32 @@ class Auth(object):
builder.auth_events = auth_events_entries
def compute_auth_events(self, event, current_state):
@defer.inlineCallbacks
def compute_auth_events(self, event, current_state_ids, for_verification=False):
if event.type == EventTypes.Create:
return []
defer.returnValue([])
auth_ids = []
key = (EventTypes.PowerLevels, "", )
power_level_event = current_state.get(key)
power_level_event_id = current_state_ids.get(key)
if power_level_event:
auth_ids.append(power_level_event.event_id)
if power_level_event_id:
auth_ids.append(power_level_event_id)
key = (EventTypes.JoinRules, "", )
join_rule_event = current_state.get(key)
join_rule_event_id = current_state_ids.get(key)
key = (EventTypes.Member, event.user_id, )
member_event = current_state.get(key)
member_event_id = current_state_ids.get(key)
key = (EventTypes.Create, "", )
create_event = current_state.get(key)
if create_event:
auth_ids.append(create_event.event_id)
create_event_id = current_state_ids.get(key)
if create_event_id:
auth_ids.append(create_event_id)
if join_rule_event:
if join_rule_event_id:
join_rule_event = yield self.store.get_event(join_rule_event_id)
join_rule = join_rule_event.content.get("join_rule")
is_public = join_rule == JoinRules.PUBLIC if join_rule else False
else:
@@ -789,15 +925,21 @@ class Auth(object):
if event.type == EventTypes.Member:
e_type = event.content["membership"]
if e_type in [Membership.JOIN, Membership.INVITE]:
if join_rule_event:
auth_ids.append(join_rule_event.event_id)
if join_rule_event_id:
auth_ids.append(join_rule_event_id)
if e_type == Membership.JOIN:
if member_event and not is_public:
auth_ids.append(member_event.event_id)
if member_event_id and not is_public:
auth_ids.append(member_event_id)
else:
if member_event:
auth_ids.append(member_event.event_id)
if member_event_id:
auth_ids.append(member_event_id)
if for_verification:
key = (EventTypes.Member, event.state_key, )
existing_event_id = current_state_ids.get(key)
if existing_event_id:
auth_ids.append(existing_event_id)
if e_type == Membership.INVITE:
if "third_party_invite" in event.content:
@@ -805,14 +947,15 @@ class Auth(object):
EventTypes.ThirdPartyInvite,
event.content["third_party_invite"]["signed"]["token"]
)
third_party_invite = current_state.get(key)
if third_party_invite:
auth_ids.append(third_party_invite.event_id)
elif member_event:
third_party_invite_id = current_state_ids.get(key)
if third_party_invite_id:
auth_ids.append(third_party_invite_id)
elif member_event_id:
member_event = yield self.store.get_event(member_event_id)
if member_event.content["membership"] == Membership.JOIN:
auth_ids.append(member_event.event_id)
return auth_ids
defer.returnValue(auth_ids)
def _get_send_level(self, etype, state_key, auth_events):
key = (EventTypes.PowerLevels, "", )
@@ -861,16 +1004,6 @@ class Auth(object):
403,
"You are not allowed to set others state"
)
else:
sender_domain = UserID.from_string(
event.user_id
).domain
if sender_domain != event.state_key:
raise AuthError(
403,
"You are not allowed to set others state"
)
return True
@@ -894,8 +1027,8 @@ class Auth(object):
if user_level >= redact_level:
return False
redacter_domain = EventID.from_string(event.event_id).domain
redactee_domain = EventID.from_string(event.redacts).domain
redacter_domain = get_domain_from_id(event.event_id)
redactee_domain = get_domain_from_id(event.redacts)
if redacter_domain == redactee_domain:
return True
@@ -1028,3 +1161,68 @@ class Auth(object):
"This server requires you to be a moderator in the room to"
" edit its room list entry"
)
def has_access_token(request):
"""Checks if the request has an access_token.
Returns:
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
return bool(query_params) or bool(auth_headers)
def get_access_token_from_request(request, token_not_found_http_status=401):
"""Extracts the access_token from the request.
Args:
request: The http request.
token_not_found_http_status(int): The HTTP status code to set in the
AuthError if the token isn't found. This is used in some of the
legacy APIs to change the status code to 403 from the default of
401 since some of the old clients depended on auth errors returning
403.
Returns:
str: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
"""
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
query_params = request.args.get("access_token")
if auth_headers:
# Try to get the access_token from an "Authorization: Bearer"
# header
if query_params is not None:
raise AuthError(
token_not_found_http_status,
"Mixing Authorization headers and access_token query parameters.",
errcode=Codes.MISSING_TOKEN,
)
if len(auth_headers) > 1:
raise AuthError(
token_not_found_http_status,
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
parts = auth_headers[0].split(" ")
if parts[0] == "Bearer" and len(parts) == 2:
return parts[1]
else:
raise AuthError(
token_not_found_http_status,
"Invalid Authorization header.",
errcode=Codes.MISSING_TOKEN,
)
else:
# Try to get the access_token from the query params.
if not query_params:
raise AuthError(
token_not_found_http_status,
"Missing access token.",
errcode=Codes.MISSING_TOKEN
)
return query_params[0]
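The header branch accepts exactly one Authorization header of the form "Bearer <token>". A tiny sketch of just that parsing rule, with a function name of our own:

    def parse_bearer_header(header_value):
        # Accept exactly "Bearer <token>"; anything else is rejected.
        parts = header_value.split(" ")
        if parts[0] == "Bearer" and len(parts) == 2:
            return parts[1]
        raise ValueError("Invalid Authorization header.")

    # parse_bearer_header("Bearer abc123") -> "abc123"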


@@ -85,3 +85,8 @@ class RoomCreationPreset(object):
PRIVATE_CHAT = "private_chat"
PUBLIC_CHAT = "public_chat"
TRUSTED_PRIVATE_CHAT = "trusted_private_chat"
class ThirdPartyEntityKind(object):
USER = "user"
LOCATION = "location"


@@ -42,8 +42,10 @@ class Codes(object):
TOO_LARGE = "M_TOO_LARGE"
EXCLUSIVE = "M_EXCLUSIVE"
THREEPID_AUTH_FAILED = "M_THREEPID_AUTH_FAILED"
THREEPID_IN_USE = "THREEPID_IN_USE"
THREEPID_IN_USE = "M_THREEPID_IN_USE"
THREEPID_NOT_FOUND = "M_THREEPID_NOT_FOUND"
INVALID_USERNAME = "M_INVALID_USERNAME"
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
class CodeMessageException(RuntimeError):


@@ -15,6 +15,8 @@
from synapse.api.errors import SynapseError
from synapse.types import UserID, RoomID
from twisted.internet import defer
import ujson as json
@@ -24,10 +26,10 @@ class Filtering(object):
super(Filtering, self).__init__()
self.store = hs.get_datastore()
@defer.inlineCallbacks
def get_user_filter(self, user_localpart, filter_id):
result = self.store.get_user_filter(user_localpart, filter_id)
result.addCallback(FilterCollection)
return result
result = yield self.store.get_user_filter(user_localpart, filter_id)
defer.returnValue(FilterCollection(result))
def add_user_filter(self, user_localpart, user_filter):
self.check_valid_filter(user_filter)
@@ -189,6 +191,17 @@ class Filter(object):
def __init__(self, filter_json):
self.filter_json = filter_json
self.types = self.filter_json.get("types", None)
self.not_types = self.filter_json.get("not_types", [])
self.rooms = self.filter_json.get("rooms", None)
self.not_rooms = self.filter_json.get("not_rooms", [])
self.senders = self.filter_json.get("senders", None)
self.not_senders = self.filter_json.get("not_senders", [])
self.contains_url = self.filter_json.get("contains_url", None)
def check(self, event):
"""Checks whether the filter matches the given event.
@@ -207,9 +220,10 @@ class Filter(object):
event.get("room_id", None),
sender,
event.get("type", None),
"url" in event.get("content", {})
)
def check_fields(self, room_id, sender, event_type):
def check_fields(self, room_id, sender, event_type, contains_url):
"""Checks whether the filter matches the given event fields.
Returns:
@@ -223,15 +237,20 @@ class Filter(object):
for name, match_func in literal_keys.items():
not_name = "not_%s" % (name,)
disallowed_values = self.filter_json.get(not_name, [])
disallowed_values = getattr(self, not_name)
if any(map(match_func, disallowed_values)):
return False
allowed_values = self.filter_json.get(name, None)
allowed_values = getattr(self, name)
if allowed_values is not None:
if not any(map(match_func, allowed_values)):
return False
contains_url_filter = self.filter_json.get("contains_url")
if contains_url_filter is not None:
if contains_url_filter != contains_url:
return False
return True
def filter_rooms(self, room_ids):
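To make the new contains_url behaviour concrete, this is the shape of filter the code above reads, with an invented event it would match:

    filter_json = {
        "types": ["m.room.message"],
        "not_senders": ["@spam:example.com"],
        "contains_url": True,  # only events whose content carries a "url" key
    }

    event = {
        "room_id": "!abc:example.com",
        "sender": "@alice:example.com",
        "type": "m.room.message",
        "content": {"url": "mxc://example.com/xyz"},
    }

    # Filter(filter_json).check(event) passes: the type is allowed, the
    # sender is not blacklisted, and "url" in content matches contains_url.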


@@ -23,7 +23,7 @@ class Ratelimiter(object):
def __init__(self):
self.message_counts = collections.OrderedDict()
def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count):
def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count, update=True):
"""Can the user send a message?
Args:
user_id: The user sending a message.
@@ -32,12 +32,15 @@ class Ratelimiter(object):
second.
burst_count: How many messages the user can send before being
limited.
update (bool): Whether to update the message rates or not. This is
useful to check if a message would be allowed to be sent before
it's ready to be actually sent.
Returns:
A pair of a bool indicating if they can send a message now and a
time in seconds of when they can next send a message.
"""
self.prune_message_counts(time_now_s)
message_count, time_start, _ignored = self.message_counts.pop(
message_count, time_start, _ignored = self.message_counts.get(
user_id, (0., time_now_s, None),
)
time_delta = time_now_s - time_start
@@ -52,9 +55,10 @@ class Ratelimiter(object):
allowed = True
message_count += 1
self.message_counts[user_id] = (
message_count, time_start, msg_rate_hz
)
if update:
self.message_counts[user_id] = (
message_count, time_start, msg_rate_hz
)
if msg_rate_hz > 0:
time_allowed = (
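The new update flag lets callers probe the limiter without consuming budget. A sketch of the intended two-step use, with arbitrary rates:

    limiter = Ratelimiter()

    # Dry run: would @alice be allowed to send right now?
    allowed, next_allowed_ts = limiter.send_message(
        "@alice:example.com", time_now_s=1000.0,
        msg_rate_hz=0.2, burst_count=10.0, update=False,
    )

    if allowed:
        # Build and persist the event, then record the send for real.
        limiter.send_message(
            "@alice:example.com", time_now_s=1000.0,
            msg_rate_hz=0.2, burst_count=10.0,
        )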


@@ -25,4 +25,3 @@ SERVER_KEY_PREFIX = "/_matrix/key/v1"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_PREFIX = "/_matrix/media/r0"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
APP_SERVICE_PREFIX = "/_matrix/appservice/v1"


@@ -16,13 +16,11 @@
import sys
sys.dont_write_bytecode = True
from synapse.python_dependencies import (
check_requirements, MissingRequirementError
) # NOQA
from synapse import python_dependencies # noqa: E402
try:
check_requirements()
except MissingRequirementError as e:
python_dependencies.check_requirements()
except python_dependencies.MissingRequirementError as e:
message = "\n".join([
"Missing Requirement: %s" % (e.message,),
"To install run:",

synapse/app/appservice.py

@@ -0,0 +1,214 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
from synapse.server import HomeServer
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.appservice")
class AppserviceSlaveStore(
DirectoryStore, SlavedEventStore, SlavedApplicationServiceStore,
SlavedRegistrationStore,
):
pass
class AppserviceServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = AppserviceSlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
logger.info("Synapse appservice now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
appservice_handler = self.get_application_service_handler()
@defer.inlineCallbacks
def replicate(results):
stream = results.get("events")
if stream:
max_stream_id = stream["position"]
yield appservice_handler.notify_interested_services(max_stream_id)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
replicate(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse appservice", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.appservice"
setup_logging(config.worker_log_config, config.worker_log_file)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
if config.notify_appservices:
sys.stderr.write(
"\nThe appservices must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``notify_appservices: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the appservices to start since they will be disabled in the main config
config.notify_appservices = True
ps = AppserviceServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-appservice",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])
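This worker, and the two reader workers that follow, all drive their slaved stores with the same long-poll loop. Stripped to its core it looks roughly like this, where http_client, store, logger and sleep stand in for the real synapse objects:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def replication_loop(http_client, store, replication_url):
        # Ask the master for rows past our current stream positions,
        # apply them to the slaved store, and back off briefly on error.
        while True:
            try:
                args = store.stream_positions()
                args["timeout"] = 30000  # long-poll for up to 30 seconds
                result = yield http_client.get_json(replication_url, args=args)
                yield store.process_replication(result)
            except Exception:
                logger.exception("Error replicating from %r", replication_url)
                yield sleep(5)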


synapse/app/client_reader.py
@@ -0,0 +1,220 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.rest.client.v1.room import PublicRoomListRestServlet
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.client_reader")
class ClientReaderSlavedStore(
SlavedEventStore,
SlavedKeyStore,
RoomStore,
DirectoryStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
pass
class ClientReaderServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = ClientReaderSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
PublicRoomListRestServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
logger.info("Synapse client reader now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse client reader", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.client_reader"
setup_logging(config.worker_log_config, config.worker_log_file)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
ss = ClientReaderServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-client-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])


synapse/app/federation_reader.py
@@ -0,0 +1,211 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.api.urls import FEDERATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.federation_reader")
class FederationReaderSlavedStore(
SlavedEventStore,
SlavedKeyStore,
RoomStore,
DirectoryStore,
TransactionStore,
BaseSlavedStore,
):
pass
class FederationReaderServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = FederationReaderSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
})
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
logger.info("Synapse federation reader now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse federation reader", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.federation_reader"
setup_logging(config.worker_log_config, config.worker_log_file)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
ss = FederationReaderServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-federation-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])


@@ -16,14 +16,10 @@
import synapse
import contextlib
import gc
import logging
import os
import re
import resource
import subprocess
import sys
import time
from synapse.config._base import ConfigError
from synapse.python_dependencies import (
@@ -37,18 +33,11 @@ from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_d
from synapse.server import HomeServer
from twisted.conch.manhole import ColoredManhole
from twisted.conch.insults import insults
from twisted.conch import manhole_ssh
from twisted.cred import checkers, portal
from twisted.internet import reactor, task, defer
from twisted.application import service
from twisted.web.resource import Resource, EncodingResourceWrapper
from twisted.web.static import File
from twisted.web.server import Site, GzipEncoderFactory, Request
from twisted.web.server import GzipEncoderFactory
from synapse.http.server import RootRedirect
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
@@ -62,10 +51,18 @@ from synapse.api.urls import (
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.util.logcontext import LoggingContext
from synapse.metrics import register_memory_metrics, get_metrics_for
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.resource import ReplicationResource, REPLICATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.http.site import SynapseSite
from synapse import events
from daemonize import Daemonize
@@ -73,9 +70,6 @@ from daemonize import Daemonize
logger = logging.getLogger("synapse.app.homeserver")
ACCESS_TOKEN_RE = re.compile(r'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
def gz_wrap(r):
return EncodingResourceWrapper(r, [GzipEncoderFactory()])
@@ -154,7 +148,7 @@ class SynapseHomeServer(HomeServer):
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path, self.auth, self.content_addr
self, self.config.uploads_path
),
})
@@ -173,7 +167,12 @@ class SynapseHomeServer(HomeServer):
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationResource(self)
root_resource = create_resource_tree(resources)
if WEB_CLIENT_PREFIX in resources:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
root_resource = Resource()
root_resource = create_resource_tree(resources, root_resource)
if tls:
reactor.listenSSL(
port,
@@ -206,24 +205,13 @@ class SynapseHomeServer(HomeServer):
if listener["type"] == "http":
self._listener_http(config, listener)
elif listener["type"] == "manhole":
checker = checkers.InMemoryUsernamePasswordDatabaseDontUse(
matrix="rabbithole"
)
rlm = manhole_ssh.TerminalRealm()
rlm.chainedProtocolFactory = lambda: insults.ServerProtocol(
ColoredManhole,
{
"__name__": "__console__",
"hs": self,
}
)
f = manhole_ssh.ConchFactory(portal.Portal(rlm, [checker]))
reactor.listenTCP(
listener["port"],
f,
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
@@ -269,86 +257,6 @@ def quit_with_error(error_string):
sys.exit(1)
def get_version_string():
try:
null = open(os.devnull, 'w')
cwd = os.path.dirname(os.path.abspath(__file__))
try:
git_branch = subprocess.check_output(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
stderr=null,
cwd=cwd,
).strip()
git_branch = "b=" + git_branch
except subprocess.CalledProcessError:
git_branch = ""
try:
git_tag = subprocess.check_output(
['git', 'describe', '--exact-match'],
stderr=null,
cwd=cwd,
).strip()
git_tag = "t=" + git_tag
except subprocess.CalledProcessError:
git_tag = ""
try:
git_commit = subprocess.check_output(
['git', 'rev-parse', '--short', 'HEAD'],
stderr=null,
cwd=cwd,
).strip()
except subprocess.CalledProcessError:
git_commit = ""
try:
dirty_string = "-this_is_a_dirty_checkout"
is_dirty = subprocess.check_output(
['git', 'describe', '--dirty=' + dirty_string],
stderr=null,
cwd=cwd,
).strip().endswith(dirty_string)
git_dirty = "dirty" if is_dirty else ""
except subprocess.CalledProcessError:
git_dirty = ""
if git_branch or git_tag or git_commit or git_dirty:
git_version = ",".join(
s for s in
(git_branch, git_tag, git_commit, git_dirty,)
if s
)
return (
"Synapse/%s (%s)" % (
synapse.__version__, git_version,
)
).encode("ascii")
except Exception as e:
logger.info("Failed to check for git repository: %s", e)
return ("Synapse/%s" % (synapse.__version__,)).encode("ascii")
def change_resource_limit(soft_file_no):
try:
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if not soft_file_no:
soft_file_no = hard
resource.setrlimit(resource.RLIMIT_NOFILE, (soft_file_no, hard))
logger.info("Set file limit to: %d", soft_file_no)
resource.setrlimit(
resource.RLIMIT_CORE, (resource.RLIM_INFINITY, resource.RLIM_INFINITY)
)
except (ValueError, resource.error) as e:
logger.warn("Failed to set file or core limit: %s", e)
def setup(config_options):
"""
Args:
@@ -359,10 +267,9 @@ def setup(config_options):
HomeServer
"""
try:
config = HomeServerConfig.load_config(
config = HomeServerConfig.load_or_generate_config(
"Synapse Homeserver",
config_options,
generate_section="Homeserver"
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
@@ -378,7 +285,7 @@ def setup(config_options):
# check any extra requirements we have now we have a config
check_requirements(config)
version_string = get_version_string()
version_string = "Synapse/" + get_version_string(synapse)
logger.info("Server hostname: %s", config.server_name)
logger.info("Server version: %s", version_string)
@@ -395,7 +302,6 @@ def setup(config_options):
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
content_addr=config.content_addr,
version_string=version_string,
database_engine=database_engine,
)
@@ -430,6 +336,8 @@ def setup(config_options):
hs.get_datastore().start_doing_background_updates()
hs.get_replication_layer().start_get_pdu_cache()
register_memory_metrics(hs)
reactor.callWhenRunning(start)
return hs
@@ -445,215 +353,13 @@ class SynapseService(service.Service):
def startService(self):
hs = setup(self.config)
change_resource_limit(hs.config.soft_file_limit)
if hs.config.gc_thresholds:
gc.set_threshold(*hs.config.gc_thresholds)
def stopService(self):
return self._port.stopListening()
class SynapseRequest(Request):
def __init__(self, site, *args, **kw):
Request.__init__(self, *args, **kw)
self.site = site
self.authenticated_entity = None
self.start_time = 0
def __repr__(self):
# We overwrite this so that we don't log ``access_token``
return '<%s at 0x%x method=%s uri=%s clientproto=%s site=%s>' % (
self.__class__.__name__,
id(self),
self.method,
self.get_redacted_uri(),
self.clientproto,
self.site.site_tag,
)
def get_redacted_uri(self):
return ACCESS_TOKEN_RE.sub(
r'\1<redacted>\3',
self.uri
)
def get_user_agent(self):
return self.requestHeaders.getRawHeaders("User-Agent", [None])[-1]
def started_processing(self):
self.site.access_logger.info(
"%s - %s - Received request: %s %s",
self.getClientIP(),
self.site.site_tag,
self.method,
self.get_redacted_uri()
)
self.start_time = int(time.time() * 1000)
def finished_processing(self):
try:
context = LoggingContext.current_context()
ru_utime, ru_stime = context.get_resource_usage()
db_txn_count = context.db_txn_count
db_txn_duration = context.db_txn_duration
except:
ru_utime, ru_stime = (0, 0)
db_txn_count, db_txn_duration = (0, 0)
self.site.access_logger.info(
"%s - %s - {%s}"
" Processed request: %dms (%dms, %dms) (%dms/%d)"
" %sB %s \"%s %s %s\" \"%s\"",
self.getClientIP(),
self.site.site_tag,
self.authenticated_entity,
int(time.time() * 1000) - self.start_time,
int(ru_utime * 1000),
int(ru_stime * 1000),
int(db_txn_duration * 1000),
int(db_txn_count),
self.sentLength,
self.code,
self.method,
self.get_redacted_uri(),
self.clientproto,
self.get_user_agent(),
)
@contextlib.contextmanager
def processing(self):
self.started_processing()
yield
self.finished_processing()
class XForwardedForRequest(SynapseRequest):
def __init__(self, *args, **kw):
SynapseRequest.__init__(self, *args, **kw)
"""
Add a layer on top of another request that only uses the value of an
X-Forwarded-For header as the result of C{getClientIP}.
"""
def getClientIP(self):
"""
@return: The client address (the first address) in the value of the
I{X-Forwarded-For header}. If the header is not present, return
C{b"-"}.
"""
return self.requestHeaders.getRawHeaders(
b"x-forwarded-for", [b"-"])[0].split(b",")[0].strip()
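# Worked example (illustrative) of the header handling above: a request
# proxied twice arrives with "X-Forwarded-For: 203.0.113.5, 10.0.0.1";
# taking the first comma-separated entry yields the original client
# address, 203.0.113.5.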
class SynapseRequestFactory(object):
def __init__(self, site, x_forwarded_for):
self.site = site
self.x_forwarded_for = x_forwarded_for
def __call__(self, *args, **kwargs):
if self.x_forwarded_for:
return XForwardedForRequest(self.site, *args, **kwargs)
else:
return SynapseRequest(self.site, *args, **kwargs)
class SynapseSite(Site):
"""
Subclass of a twisted http Site that does access logging with python's
standard logging
"""
def __init__(self, logger_name, site_tag, config, resource, *args, **kwargs):
Site.__init__(self, resource, *args, **kwargs)
self.site_tag = site_tag
proxied = config.get("x_forwarded", False)
self.requestFactory = SynapseRequestFactory(self, proxied)
self.access_logger = logging.getLogger(logger_name)
def log(self, request):
pass
def create_resource_tree(desired_tree, redirect_root_to_web_client=True):
"""Create the resource tree for this Home Server.
This is unduly complicated because Twisted does not support putting
child resources more than 1 level deep at a time.
Args:
web_client (bool): True to enable the web client.
redirect_root_to_web_client (bool): True to redirect '/' to the
location of the web client. This does nothing if web_client is not
True.
"""
if redirect_root_to_web_client and WEB_CLIENT_PREFIX in desired_tree:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
root_resource = Resource()
# ideally we'd just use getChild and putChild but getChild doesn't work
# unless you give it a Request object IN ADDITION to the name :/ So
# instead, we'll store a copy of this mapping so we can actually add
# extra resources to existing nodes. See self._resource_id for the key.
resource_mappings = {}
for full_path, res in desired_tree.items():
logger.info("Attaching %s to path %s", res, full_path)
last_resource = root_resource
for path_seg in full_path.split('/')[1:-1]:
if path_seg not in last_resource.listNames():
# resource doesn't exist, so make a "dummy resource"
child_resource = Resource()
last_resource.putChild(path_seg, child_resource)
res_id = _resource_id(last_resource, path_seg)
resource_mappings[res_id] = child_resource
last_resource = child_resource
else:
# we have an existing Resource, use that instead.
res_id = _resource_id(last_resource, path_seg)
last_resource = resource_mappings[res_id]
# ===========================
# now attach the actual desired resource
last_path_seg = full_path.split('/')[-1]
# if there is already a resource here, thieve its children and
# replace it
res_id = _resource_id(last_resource, last_path_seg)
if res_id in resource_mappings:
# there is a dummy resource at this path already, which needs
# to be replaced with the desired resource.
existing_dummy_resource = resource_mappings[res_id]
for child_name in existing_dummy_resource.listNames():
child_res_id = _resource_id(
existing_dummy_resource, child_name
)
child_resource = resource_mappings[child_res_id]
# steal the children
res.putChild(child_name, child_resource)
# finally, insert the desired resource in the right place
last_resource.putChild(last_path_seg, res)
res_id = _resource_id(last_resource, last_path_seg)
resource_mappings[res_id] = res
return root_resource
def _resource_id(resource, path_seg):
"""Construct an arbitrary resource ID so you can retrieve the mapping
later.
If you want to represent resource A putChild resource B with path C,
the mapping should look like _resource_id(A,C) = B.
Args:
resource (Resource): The *parent* Resource.
path_seg (str): The name of the child Resource to be attached.
Returns:
str: A unique string which can be a key to the child Resource.
"""
return "%s-%s" % (resource, path_seg)
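# Illustrative sketch (not part of the diff) of the Twisted limitation the
# comments above work around: putChild attaches exactly one path segment,
# so serving "/a/b" needs an intermediate dummy Resource for "a".
from twisted.web.resource import Resource

root = Resource()
dummy = Resource()                # placeholder node for the "a" segment
root.putChild("a", dummy)         # attaches /a
dummy.putChild("b", Resource())   # attaches /a/b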
def run(hs):
PROFILE_SYNAPSE = False
if PROFILE_SYNAPSE:
@@ -679,6 +385,8 @@ def run(hs):
start_time = hs.get_clock().time()
stats = {}
@defer.inlineCallbacks
def phone_stats_home():
logger.info("Gathering stats for reporting")
@@ -687,7 +395,10 @@ def run(hs):
if uptime < 0:
uptime = 0
stats = {}
# If the stats dictionary is empty then this is the first time we've
# reported stats.
first_time = not stats
stats["homeserver"] = hs.config.server_name
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
@@ -700,6 +411,25 @@ def run(hs):
daily_messages = yield hs.get_datastore().count_daily_messages()
if daily_messages is not None:
stats["daily_messages"] = daily_messages
else:
stats.pop("daily_messages", None)
if first_time:
# Add callbacks to report the synapse stats as metrics whenever
# prometheus requests them, typically every 30s.
# As some of the stats are expensive to calculate we only update
# them when synapse phones home to matrix.org every 24 hours.
metrics = get_metrics_for("synapse.usage")
metrics.add_callback("timestamp", lambda: stats["timestamp"])
metrics.add_callback("uptime_seconds", lambda: stats["uptime_seconds"])
metrics.add_callback("total_users", lambda: stats["total_users"])
metrics.add_callback("total_room_count", lambda: stats["total_room_count"])
metrics.add_callback(
"daily_active_users", lambda: stats["daily_active_users"]
)
metrics.add_callback(
"daily_messages", lambda: stats.get("daily_messages", 0)
)
logger.info("Reporting stats to matrix.org: %s" % (stats,))
try:
@@ -720,6 +450,8 @@ def run(hs):
# sys.settrace(logcontext_tracer)
with LoggingContext("run"):
change_resource_limit(hs.config.soft_file_limit)
if hs.config.gc_thresholds:
gc.set_threshold(*hs.config.gc_thresholds)
reactor.run()
if hs.config.daemonize:

synapse/app/media_repository.py

@@ -0,0 +1,217 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.api.urls import (
CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
)
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.media_repository")
class MediaRepositorySlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
BaseSlavedStore,
MediaRepositoryStore,
ClientIpStore,
):
pass
class MediaRepositoryServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = MediaRepositorySlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "media":
media_repo = MediaRepositoryResource(self)
resources.update({
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
})
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
logger.info("Synapse media repository now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
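# Illustrative shape of the long-poll performed by the loop above (stream
# names and positions are assumptions): a GET to
#   <replication_url>?events=1023&timeout=30000
# blocks on the master for up to 30 seconds, returns any rows past each
# stream position, and the updated positions seed the next request.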
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse media repository", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.media_repository"
setup_logging(config.worker_log_config, config.worker_log_file)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
ss = MediaRepositoryServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-media-repository",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

synapse/app/pusher.py

@@ -0,0 +1,303 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
from synapse.server import HomeServer
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.storage.roommember import RoomMemberStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.storage.engines import create_engine
from synapse.storage import DataStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.pusher")
class PusherSlaveStore(
SlavedEventStore, SlavedPusherStore, SlavedReceiptsStore,
SlavedAccountDataStore
):
update_pusher_last_stream_ordering_and_success = (
DataStore.update_pusher_last_stream_ordering_and_success.__func__
)
update_pusher_failing_since = (
DataStore.update_pusher_failing_since.__func__
)
update_pusher_last_stream_ordering = (
DataStore.update_pusher_last_stream_ordering.__func__
)
get_throttle_params_by_room = (
DataStore.get_throttle_params_by_room.__func__
)
set_throttle_params = (
DataStore.set_throttle_params.__func__
)
get_time_of_last_push_action_before = (
DataStore.get_time_of_last_push_action_before.__func__
)
get_profile_displayname = (
DataStore.get_profile_displayname.__func__
)
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
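# Sketch (assumption, not part of the diff) of why the assignments above
# use DataStore.method.__func__: on Python 2, attribute access through a
# class yields an unbound method tied to that class, and .__func__
# recovers the plain function so it can be re-bound on the slaved store.
class _Base(object):
    def name(self):
        return type(self).__name__

class _Borrower(object):
    name = _Base.name.__func__    # plain function, re-bindable (Python 2)

print(_Borrower().name())         # -> "_Borrower"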
class PusherServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = PusherSlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def remove_pusher(self, app_id, push_key, user_id):
http_client = self.get_simple_http_client()
replication_url = self.config.worker_replication_url
url = replication_url + "/remove_pushers"
return http_client.post_json_get_json(url, {
"remove": [{
"app_id": app_id,
"push_key": push_key,
"user_id": user_id,
}]
})
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
logger.info("Synapse pusher now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
pusher_pool = self.get_pusherpool()
def stop_pusher(user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
def start_pusher(user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return pusher_pool._refresh_pusher(app_id, pushkey, user_id)
@defer.inlineCallbacks
def poke_pushers(results):
pushers_rows = set(
map(tuple, results.get("pushers", {}).get("rows", []))
)
deleted_pushers_rows = set(
map(tuple, results.get("deleted_pushers", {}).get("rows", []))
)
for row in sorted(pushers_rows | deleted_pushers_rows):
if row in deleted_pushers_rows:
user_id, app_id, pushkey = row[1:4]
stop_pusher(user_id, app_id, pushkey)
elif row in pushers_rows:
user_id = row[1]
app_id = row[5]
pushkey = row[8]
yield start_pusher(user_id, app_id, pushkey)
stream = results.get("events")
if stream and stream["rows"]:
min_stream_id = stream["rows"][0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_notifications)(
min_stream_id, max_stream_id
)
stream = results.get("receipts")
if stream and stream["rows"]:
rows = stream["rows"]
affected_room_ids = set(row[1] for row in rows)
min_stream_id = rows[0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_receipts)(
min_stream_id, max_stream_id, affected_room_ids
)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
poke_pushers(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse pusher", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.pusher"
setup_logging(config.worker_log_config, config.worker_log_file)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
if config.start_pushers:
sys.stderr.write(
"\nThe pushers must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``start_pushers: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the pushers to start since they will be disabled in the main config
config.start_pushers = True
database_engine = create_engine(config.database_config)
ps = PusherServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_pusherpool().start()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-pusher",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
if __name__ == '__main__':
with LoggingContext("main"):
ps = start(sys.argv[1:])

synapse/app/synchrotron.py

@@ -0,0 +1,496 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
from synapse.api.constants import EventTypes, PresenceState
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.events import FrozenEvent
from synapse.handlers.presence import PresenceHandler
from synapse.http.site import SynapseSite
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.rest.client.v2_alpha import sync
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.presence import PresenceStore, UserPresenceState
from synapse.storage.roommember import RoomMemberStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import contextlib
import gc
import ujson as json
logger = logging.getLogger("synapse.app.synchrotron")
class SynchrotronSlavedStore(
SlavedPushRuleStore,
SlavedEventStore,
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
SlavedDeviceInboxStore,
RoomStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
# XXX: This is a bit broken because we don't persist the accepted list in a
# way that can be replicated. This means that we don't have a way to
# invalidate the cache correctly.
get_presence_list_accepted = PresenceStore.__dict__[
"get_presence_list_accepted"
]
get_presence_list_observers_accepted = PresenceStore.__dict__[
"get_presence_list_observers_accepted"
]
UPDATE_SYNCING_USERS_MS = 10 * 1000
class SynchrotronPresence(object):
def __init__(self, hs):
self.is_mine_id = hs.is_mine_id
self.http_client = hs.get_simple_http_client()
self.store = hs.get_datastore()
self.user_to_num_current_syncs = {}
self.syncing_users_url = hs.config.worker_replication_url + "/syncing_users"
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
active_presence = self.store.take_presence_startup_info()
self.user_to_current_state = {
state.user_id: state
for state in active_presence
}
self.process_id = random_string(16)
logger.info("Presence process_id is %r", self.process_id)
self._sending_sync = False
self._need_to_send_sync = False
self.clock.looping_call(
self._send_syncing_users_regularly,
UPDATE_SYNCING_USERS_MS,
)
reactor.addSystemEventTrigger("before", "shutdown", self._on_shutdown)
def set_state(self, user, state, ignore_status_msg=False):
# TODO: How's this supposed to work?
pass
get_states = PresenceHandler.get_states.__func__
get_state = PresenceHandler.get_state.__func__
_get_interested_parties = PresenceHandler._get_interested_parties.__func__
current_state_for_users = PresenceHandler.current_state_for_users.__func__
@defer.inlineCallbacks
def user_syncing(self, user_id, affect_presence):
if affect_presence:
curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
self.user_to_num_current_syncs[user_id] = curr_sync + 1
prev_states = yield self.current_state_for_users([user_id])
if prev_states[user_id].state == PresenceState.OFFLINE:
# TODO: Don't block the sync request on this HTTP hit.
yield self._send_syncing_users_now()
def _end():
# We check that the user_id is in user_to_num_current_syncs because
# user_to_num_current_syncs may have been cleared if we are
# shutting down.
if affect_presence and user_id in self.user_to_num_current_syncs:
self.user_to_num_current_syncs[user_id] -= 1
@contextlib.contextmanager
def _user_syncing():
try:
yield
finally:
_end()
defer.returnValue(_user_syncing())
@defer.inlineCallbacks
def _on_shutdown(self):
# When the synchrotron is shutdown tell the master to clear the in
# progress syncs for this process
self.user_to_num_current_syncs.clear()
yield self._send_syncing_users_now()
def _send_syncing_users_regularly(self):
# Only send an update if we aren't in the middle of sending one.
if not self._sending_sync:
preserve_fn(self._send_syncing_users_now)()
@defer.inlineCallbacks
def _send_syncing_users_now(self):
if self._sending_sync:
# We don't want to race with sending another update.
# Instead we wait for that update to finish and send another
# update afterwards.
self._need_to_send_sync = True
return
# Flag that we are sending an update.
self._sending_sync = True
yield self.http_client.post_json_get_json(self.syncing_users_url, {
"process_id": self.process_id,
"syncing_users": [
user_id for user_id, count in self.user_to_num_current_syncs.items()
if count > 0
],
})
# Unset the flag as we are no longer sending an update.
self._sending_sync = False
if self._need_to_send_sync:
# If something happened while we were sending the update then
# we might need to send another update.
# TODO: Check if the update that was sent matches the current state
# as we only need to send an update if they are different.
self._need_to_send_sync = False
yield self._send_syncing_users_now()
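# Generic sketch (illustrative) of the coalescing pattern implemented by
# the two methods above: at most one update is in flight, and triggers
# arriving meanwhile collapse into a single follow-up send of the latest
# state. do_send is an assumed helper returning a Deferred.
@defer.inlineCallbacks
def _coalesced_send(state, do_send):
    if state["sending"]:
        state["pending"] = True
        return
    state["sending"] = True
    yield do_send()
    state["sending"] = False
    if state["pending"]:
        state["pending"] = False
        yield _coalesced_send(state, do_send)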
@defer.inlineCallbacks
def notify_from_replication(self, states, stream_id):
parties = yield self._get_interested_parties(
states, calculate_remote_hosts=False
)
room_ids_to_states, users_to_states, _ = parties
self.notifier.on_new_event(
"presence_key", stream_id, rooms=room_ids_to_states.keys(),
users=users_to_states.keys()
)
@defer.inlineCallbacks
def process_replication(self, result):
stream = result.get("presence", {"rows": []})
states = []
for row in stream["rows"]:
(
position, user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
) = row
state = UserPresenceState(
user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
)
self.user_to_current_state[user_id] = state
states.append(state)
if states and "position" in stream:
stream_id = int(stream["position"])
yield self.notify_from_replication(states, stream_id)
class SynchrotronTyping(object):
def __init__(self, hs):
self._latest_room_serial = 0
self._room_serials = {}
self._room_typing = {}
def stream_positions(self):
# We must update this typing token from the response of the previous
# sync. In particular, the stream id may "reset" back to zero/a low
# value which we *must* use for the next replication request.
return {"typing": self._latest_room_serial}
def process_replication(self, result):
stream = result.get("typing")
if stream:
self._latest_room_serial = int(stream["position"])
for row in stream["rows"]:
position, room_id, typing_json = row
typing = json.loads(typing_json)
self._room_serials[room_id] = position
self._room_typing[room_id] = typing
class SynchrotronApplicationService(object):
def notify_interested_services(self, event):
pass
class SynchrotronServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = SynchrotronSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
sync.register_servlets(self, resource)
events.register_servlets(self, resource)
InitialSyncRestServlet(self).register(resource)
RoomInitialSyncRestServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
logger.info("Synapse synchrotron now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
notifier = self.get_notifier()
presence_handler = self.get_presence_handler()
typing_handler = self.get_typing_handler()
def notify_from_stream(
result, stream_name, stream_key, room=None, user=None
):
stream = result.get(stream_name)
if stream:
position_index = stream["field_names"].index("position")
if room:
room_index = stream["field_names"].index(room)
if user:
user_index = stream["field_names"].index(user)
users = ()
rooms = ()
for row in stream["rows"]:
position = row[position_index]
if user:
users = (row[user_index],)
if room:
rooms = (row[room_index],)
notifier.on_new_event(
stream_key, position, users=users, rooms=rooms
)
def notify(result):
stream = result.get("events")
if stream:
max_position = stream["position"]
for row in stream["rows"]:
position = row[0]
internal = json.loads(row[1])
event_json = json.loads(row[2])
event = FrozenEvent(event_json, internal_metadata_dict=internal)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
notifier.on_new_room_event(
event, position, max_position, extra_users
)
notify_from_stream(
result, "push_rules", "push_rules_key", user="user_id"
)
notify_from_stream(
result, "user_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "room_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "tag_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "receipts", "receipt_key", room="room_id"
)
notify_from_stream(
result, "typing", "typing_key", room="room_id"
)
notify_from_stream(
result, "to_device", "to_device_key", user="user_id"
)
while True:
try:
args = store.stream_positions()
args.update(typing_handler.stream_positions())
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
typing_handler.process_replication(result)
yield presence_handler.process_replication(result)
notify(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_presence_handler(self):
return SynchrotronPresence(self)
def build_typing_handler(self):
return SynchrotronTyping(self)
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse synchrotron", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.synchrotron"
setup_logging(config.worker_log_config, config.worker_log_file)
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = SynchrotronServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
application_service_handler=SynchrotronApplicationService(),
)
ss.setup()
ss.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_datastore().start_profiling()
ss.replicate()
ss.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-synchrotron",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

synctl

@@ -14,71 +14,199 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import argparse
import collections
import glob
import os
import os.path
import subprocess
import signal
import subprocess
import sys
import yaml
SYNAPSE = ["python", "-B", "-m", "synapse.app.homeserver"]
SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
GREEN = "\x1b[1;32m"
RED = "\x1b[1;31m"
NORMAL = "\x1b[m"
def write(message, colour=NORMAL, stream=sys.stdout):
if colour == NORMAL:
stream.write(message + "\n")
else:
stream.write(colour + message + NORMAL + "\n")
def start(configfile):
print ("Starting ...")
write("Starting ...")
args = SYNAPSE
args.extend(["--daemonize", "-c", configfile])
try:
subprocess.check_call(args)
print (GREEN + "started" + NORMAL)
write("started synapse.app.homeserver(%r)" % (configfile,), colour=GREEN)
except subprocess.CalledProcessError as e:
print (
RED +
"error starting (exit code: %d); see above for logs" % e.returncode +
NORMAL
write(
"error starting (exit code: %d); see above for logs" % e.returncode,
colour=RED,
)
def stop(pidfile):
def start_worker(app, configfile, worker_configfile):
args = [
"python", "-B",
"-m", app,
"-c", configfile,
"-c", worker_configfile
]
try:
subprocess.check_call(args)
write("started %s(%r)" % (app, worker_configfile), colour=GREEN)
except subprocess.CalledProcessError as e:
write(
"error starting %s(%r) (exit code: %d); see above for logs" % (
app, worker_configfile, e.returncode,
),
colour=RED,
)
def stop(pidfile, app):
if os.path.exists(pidfile):
pid = int(open(pidfile).read())
os.kill(pid, signal.SIGTERM)
print (GREEN + "stopped" + NORMAL)
write("stopped %s" % (app,), colour=GREEN)
Worker = collections.namedtuple("Worker", [
"app", "configfile", "pidfile", "cache_factor"
])
def main():
configfile = sys.argv[2] if len(sys.argv) == 3 else "homeserver.yaml"
if not os.path.exists(configfile):
sys.stderr.write(
"No config file found\n"
"To generate a config file, run '%s -c %s --generate-config"
" --server-name=<server name>'\n" % (
" ".join(SYNAPSE), configfile
)
parser = argparse.ArgumentParser()
parser.add_argument(
"action",
choices=["start", "stop", "restart"],
help="whether to start, stop or restart the synapse",
)
parser.add_argument(
"configfile",
nargs="?",
default="homeserver.yaml",
help="the homeserver config file, defaults to homeserver.yaml",
)
parser.add_argument(
"-w", "--worker",
metavar="WORKERCONFIG",
help="start or stop a single worker",
)
parser.add_argument(
"-a", "--all-processes",
metavar="WORKERCONFIGDIR",
help="start or stop all the workers in the given directory"
" and the main synapse process",
)
options = parser.parse_args()
if options.worker and options.all_processes:
write(
'Cannot use "--worker" with "--all-processes"',
stream=sys.stderr
)
sys.exit(1)
config = yaml.load(open(configfile))
pidfile = config["pid_file"]
configfile = options.configfile
action = sys.argv[1] if sys.argv[1:] else "usage"
if action == "start":
start(configfile)
elif action == "stop":
stop(pidfile)
elif action == "restart":
stop(pidfile)
start(configfile)
else:
sys.stderr.write("Usage: %s [start|stop|restart] [configfile]\n" % (sys.argv[0],))
if not os.path.exists(configfile):
write(
"No config file found\n"
"To generate a config file, run '%s -c %s --generate-config"
" --server-name=<server name>'\n" % (
" ".join(SYNAPSE), options.configfile
),
stream=sys.stderr,
)
sys.exit(1)
with open(configfile) as stream:
config = yaml.load(stream)
pidfile = config["pid_file"]
cache_factor = config.get("synctl_cache_factor")
start_stop_synapse = True
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
worker_configfiles = []
if options.worker:
start_stop_synapse = False
worker_configfile = options.worker
if not os.path.exists(worker_configfile):
write(
"No worker config found at %r" % (worker_configfile,),
stream=sys.stderr,
)
sys.exit(1)
worker_configfiles.append(worker_configfile)
if options.all_processes:
worker_configdir = options.all_processes
if not os.path.isdir(worker_configdir):
write(
"No worker config directory found at %r" % (worker_configdir,),
stream=sys.stderr,
)
sys.exit(1)
worker_configfiles.extend(sorted(glob.glob(
os.path.join(worker_configdir, "*.yaml")
)))
workers = []
for worker_configfile in worker_configfiles:
with open(worker_configfile) as stream:
worker_config = yaml.load(stream)
worker_app = worker_config["worker_app"]
worker_pidfile = worker_config["worker_pid_file"]
worker_daemonize = worker_config["worker_daemonize"]
assert worker_daemonize  # TODO print something more user-friendly
worker_cache_factor = worker_config.get("synctl_cache_factor")
workers.append(Worker(
worker_app, worker_configfile, worker_pidfile, worker_cache_factor,
))
action = options.action
if action == "stop" or action == "restart":
for worker in workers:
stop(worker.pidfile, worker.app)
if start_stop_synapse:
stop(pidfile, "synapse.app.homeserver")
# TODO: Wait for synapse to actually shutdown before starting it again
if action == "start" or action == "restart":
if start_stop_synapse:
start(configfile)
for worker in workers:
if worker.cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(worker.cache_factor)
start_worker(worker.app, configfile, worker.configfile)
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
else:
os.environ.pop("SYNAPSE_CACHE_FACTOR", None)
if __name__ == "__main__":
main()
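A hypothetical invocation of the reworked synctl (file names illustrative):

    synctl start homeserver.yaml --all-processes workers
    synctl stop homeserver.yaml --worker workers/pusher.yaml

The first command starts the main homeserver plus every worker whose *.yaml config sits in workers/; the second stops a single worker without touching the main process.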

synapse/appservice/__init__.py

@@ -14,6 +14,8 @@
# limitations under the License.
from synapse.api.constants import EventTypes
from twisted.internet import defer
import logging
import re
@@ -79,7 +81,7 @@ class ApplicationService(object):
NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
def __init__(self, token, url=None, namespaces=None, hs_token=None,
sender=None, id=None):
sender=None, id=None, protocols=None, rate_limited=True):
self.token = token
self.url = url
self.hs_token = hs_token
@@ -87,6 +89,14 @@ class ApplicationService(object):
self.namespaces = self._check_namespaces(namespaces)
self.id = id
# .protocols is a publicly visible field
if protocols:
self.protocols = set(protocols)
else:
self.protocols = set()
self.rate_limited = rate_limited
def _check_namespaces(self, namespaces):
# Sanity check that it is of the form:
# {
@@ -138,65 +148,66 @@ class ApplicationService(object):
return regex_obj["exclusive"]
return False
def _matches_user(self, event, member_list):
if (hasattr(event, "sender") and
self.is_interested_in_user(event.sender)):
return True
@defer.inlineCallbacks
def _matches_user(self, event, store):
if not event:
defer.returnValue(False)
if self.is_interested_in_user(event.sender):
defer.returnValue(True)
# also check m.room.member state key
if (hasattr(event, "type") and event.type == EventTypes.Member
and hasattr(event, "state_key")
and self.is_interested_in_user(event.state_key)):
return True
if (event.type == EventTypes.Member and
self.is_interested_in_user(event.state_key)):
defer.returnValue(True)
if not store:
defer.returnValue(False)
member_list = yield store.get_users_in_room(event.room_id)
# check joined member events
for user_id in member_list:
if self.is_interested_in_user(user_id):
return True
return False
defer.returnValue(True)
defer.returnValue(False)
def _matches_room_id(self, event):
if hasattr(event, "room_id"):
return self.is_interested_in_room(event.room_id)
return False
def _matches_aliases(self, event, alias_list):
@defer.inlineCallbacks
def _matches_aliases(self, event, store):
if not store or not event:
defer.returnValue(False)
alias_list = yield store.get_aliases_for_room(event.room_id)
for alias in alias_list:
if self.is_interested_in_alias(alias):
return True
return False
defer.returnValue(True)
defer.returnValue(False)
def is_interested(self, event, restrict_to=None, aliases_for_event=None,
member_list=None):
@defer.inlineCallbacks
def is_interested(self, event, store=None):
"""Check if this service is interested in this event.
Args:
event(Event): The event to check.
restrict_to(str): The namespace to restrict regex tests to.
aliases_for_event(list): A list of all the known room aliases for
this event.
member_list(list): A list of all joined user_ids in this room.
store(DataStore)
Returns:
bool: True if this service would like to know about this event.
"""
if aliases_for_event is None:
aliases_for_event = []
if member_list is None:
member_list = []
# Do cheap checks first
if self._matches_room_id(event):
defer.returnValue(True)
if restrict_to and restrict_to not in ApplicationService.NS_LIST:
# this is a programming error, so fail early and raise a general
# exception
raise Exception("Unexpected restrict_to value: %s" % (restrict_to,))
if (yield self._matches_aliases(event, store)):
defer.returnValue(True)
if not restrict_to:
return (self._matches_user(event, member_list)
or self._matches_aliases(event, aliases_for_event)
or self._matches_room_id(event))
elif restrict_to == ApplicationService.NS_ALIASES:
return self._matches_aliases(event, aliases_for_event)
elif restrict_to == ApplicationService.NS_ROOMS:
return self._matches_room_id(event)
elif restrict_to == ApplicationService.NS_USERS:
return self._matches_user(event, member_list)
if (yield self._matches_user(event, store)):
defer.returnValue(True)
defer.returnValue(False)
def is_interested_in_user(self, user_id):
return (
@@ -216,11 +227,17 @@ class ApplicationService(object):
or user_id == self.sender
)
def is_interested_in_protocol(self, protocol):
return protocol in self.protocols
def is_exclusive_alias(self, alias):
return self._is_exclusive(ApplicationService.NS_ALIASES, alias)
def is_exclusive_room(self, room_id):
return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)
def is_rate_limited(self):
return self.rate_limited
def __str__(self):
return "ApplicationService: %s" % (self.__dict__,)
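A hypothetical call site for the reworked interface (helper name assumed): since is_interested is now an inlineCallbacks method that consults the store itself, callers yield on it rather than pre-fetching member and alias lists:

@defer.inlineCallbacks
def _services_for_event(services, event, store):
    interested = []
    for service in services:
        if (yield service.is_interested(event, store=store)):
            interested.append(service)
    defer.returnValue(interested)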

synapse/appservice/api.py

@@ -14,9 +14,11 @@
# limitations under the License.
from twisted.internet import defer
from synapse.api.constants import ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
from synapse.http.client import SimpleHttpClient
from synapse.events.utils import serialize_event
from synapse.util.caches.response_cache import ResponseCache
import logging
import urllib
@@ -24,6 +26,42 @@ import urllib
logger = logging.getLogger(__name__)
HOUR_IN_MS = 60 * 60 * 1000
APP_SERVICE_PREFIX = "/_matrix/app/unstable"
def _is_valid_3pe_metadata(info):
if "instances" not in info:
return False
if not isinstance(info["instances"], list):
return False
return True
def _is_valid_3pe_result(r, field):
if not isinstance(r, dict):
return False
for k in (field, "protocol"):
if k not in r:
return False
if not isinstance(r[k], str):
return False
if "fields" not in r:
return False
fields = r["fields"]
if not isinstance(fields, dict):
return False
for k in fields.keys():
if not isinstance(fields[k], str):
return False
return True
class ApplicationServiceApi(SimpleHttpClient):
"""This class manages HS -> AS communications, including querying and
pushing.
@@ -33,8 +71,12 @@ class ApplicationServiceApi(SimpleHttpClient):
super(ApplicationServiceApi, self).__init__(hs)
self.clock = hs.get_clock()
self.protocol_meta_cache = ResponseCache(hs, timeout_ms=HOUR_IN_MS)
@defer.inlineCallbacks
def query_user(self, service, user_id):
if service.url is None:
defer.returnValue(False)
uri = service.url + ("/users/%s" % urllib.quote(user_id))
response = None
try:
@@ -54,6 +96,8 @@ class ApplicationServiceApi(SimpleHttpClient):
@defer.inlineCallbacks
def query_alias(self, service, alias):
if service.url is None:
defer.returnValue(False)
uri = service.url + ("/rooms/%s" % urllib.quote(alias))
response = None
try:
@@ -71,8 +115,84 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning("query_alias to %s threw exception %s", uri, ex)
defer.returnValue(False)
@defer.inlineCallbacks
def query_3pe(self, service, kind, protocol, fields):
if kind == ThirdPartyEntityKind.USER:
required_field = "userid"
elif kind == ThirdPartyEntityKind.LOCATION:
required_field = "alias"
else:
raise ValueError(
"Unrecognised 'kind' argument %r to query_3pe()", kind
)
if service.url is None:
defer.returnValue([])
uri = "%s%s/thirdparty/%s/%s" % (
service.url,
APP_SERVICE_PREFIX,
kind,
urllib.quote(protocol)
)
try:
response = yield self.get_json(uri, fields)
if not isinstance(response, list):
logger.warning(
"query_3pe to %s returned an invalid response %r",
uri, response
)
defer.returnValue([])
ret = []
for r in response:
if _is_valid_3pe_result(r, field=required_field):
ret.append(r)
else:
logger.warning(
"query_3pe to %s returned an invalid result %r",
uri, r
)
defer.returnValue(ret)
except Exception as ex:
logger.warning("query_3pe to %s threw exception %s", uri, ex)
defer.returnValue([])
def get_3pe_protocol(self, service, protocol):
if service.url is None:
defer.returnValue({})
@defer.inlineCallbacks
def _get():
uri = "%s%s/thirdparty/protocol/%s" % (
service.url,
APP_SERVICE_PREFIX,
urllib.quote(protocol)
)
try:
info = yield self.get_json(uri, {})
if not _is_valid_3pe_metadata(info):
logger.warning("query_3pe_protocol to %s did not return a"
" valid result", uri)
defer.returnValue(None)
defer.returnValue(info)
except Exception as ex:
logger.warning("query_3pe_protocol to %s threw exception %s",
uri, ex)
defer.returnValue(None)
key = (service.id, protocol)
return self.protocol_meta_cache.get(key) or (
self.protocol_meta_cache.set(key, _get())
)
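# Generic sketch (assumption, not the actual ResponseCache contract) of
# the get-or-set idiom used above: a miss stores the in-flight Deferred
# under the key so concurrent callers share one outbound request.
def _get_or_fetch(cache, key, fetch):
    pending = cache.get(key)          # cached or in-flight result, or None
    if pending:
        return pending
    return cache.set(key, fetch())    # store the Deferred and return it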
@defer.inlineCallbacks
def push_bulk(self, service, events, txn_id=None):
if service.url is None:
defer.returnValue(True)
events = self._serialize(events)
if txn_id is None:

synapse/appservice/scheduler.py

@@ -48,32 +48,35 @@ UP & quit +---------- YES SUCCESS
This is all tied together by the AppServiceScheduler which DIs the required
components.
"""
from twisted.internet import defer
from synapse.appservice import ApplicationServiceState
from twisted.internet import defer
from synapse.util.logcontext import preserve_fn
from synapse.util.metrics import Measure
import logging
logger = logging.getLogger(__name__)
class AppServiceScheduler(object):
class ApplicationServiceScheduler(object):
""" Public facing API for this module. Does the required DI to tie the
components together. This also serves as the "event_pool", which in this
case is a simple array.
"""
def __init__(self, clock, store, as_api):
self.clock = clock
self.store = store
self.as_api = as_api
def __init__(self, hs):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.as_api = hs.get_application_service_api()
def create_recoverer(service, callback):
return _Recoverer(clock, store, as_api, service, callback)
return _Recoverer(self.clock, self.store, self.as_api, service, callback)
self.txn_ctrl = _TransactionController(
clock, store, as_api, create_recoverer
self.clock, self.store, self.as_api, create_recoverer
)
self.queuer = _ServiceQueuer(self.txn_ctrl)
self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock)
@defer.inlineCallbacks
def start(self):
@@ -94,38 +97,36 @@ class _ServiceQueuer(object):
this schedules any other events in the queue to run.
"""
def __init__(self, txn_ctrl):
def __init__(self, txn_ctrl, clock):
self.queued_events = {} # dict of {service_id: [events]}
self.pending_requests = {} # dict of {service_id: Deferred}
self.requests_in_flight = set()
self.txn_ctrl = txn_ctrl
self.clock = clock
def enqueue(self, service, event):
# if this service isn't being sent something
if not self.pending_requests.get(service.id):
self._send_request(service, [event])
else:
# add to queue for this service
if service.id not in self.queued_events:
self.queued_events[service.id] = []
self.queued_events[service.id].append(event)
self.queued_events.setdefault(service.id, []).append(event)
preserve_fn(self._send_request)(service)
def _send_request(self, service, events):
# send request and add callbacks
d = self.txn_ctrl.send(service, events)
d.addBoth(self._on_request_finish)
d.addErrback(self._on_request_fail)
self.pending_requests[service.id] = d
@defer.inlineCallbacks
def _send_request(self, service):
if service.id in self.requests_in_flight:
return
def _on_request_finish(self, service):
self.pending_requests[service.id] = None
# if there are queued events, then send them.
if (service.id in self.queued_events
and len(self.queued_events[service.id]) > 0):
self._send_request(service, self.queued_events[service.id])
self.queued_events[service.id] = []
self.requests_in_flight.add(service.id)
try:
while True:
events = self.queued_events.pop(service.id, [])
if not events:
return
def _on_request_fail(self, err):
logger.error("AS request failed: %s", err)
with Measure(self.clock, "servicequeuer.send"):
try:
yield self.txn_ctrl.send(service, events)
except:
logger.exception("AS request failed")
finally:
self.requests_in_flight.discard(service.id)
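# Minimal sketch (illustrative) of the drain pattern _send_request uses
# above: a single sender per service keeps popping the queue until it is
# empty, so events enqueued mid-send are handled by the same loop rather
# than racing a second sender. send is an assumed helper.
@defer.inlineCallbacks
def _drain(service_id, queues, in_flight, send):
    if service_id in in_flight:
        return
    in_flight.add(service_id)
    try:
        while True:
            events = queues.pop(service_id, [])
            if not events:
                return
            yield send(events)
    finally:
        in_flight.discard(service_id)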
class _TransactionController(object):
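The refactored _ServiceQueuer above drops the per-request callback chain in favour of an in-flight set plus a drain loop: the first enqueue for a service starts a loop that keeps popping and sending queued events until none remain, while later enqueues simply append and return. The shape in isolation (a sketch, not the module's code):

from twisted.internet import defer

class QueueDrainer(object):
    def __init__(self, send_fn):
        self.queued = {}        # service_id -> [events]
        self.in_flight = set()  # services with an active drain loop
        self.send_fn = send_fn  # callable(service_id, events) -> Deferred

    def enqueue(self, service_id, event):
        self.queued.setdefault(service_id, []).append(event)
        self._drain(service_id)

    @defer.inlineCallbacks
    def _drain(self, service_id):
        if service_id in self.in_flight:
            return  # the running loop will pick the new event up
        self.in_flight.add(service_id)
        try:
            while True:
                events = self.queued.pop(service_id, [])
                if not events:
                    return
                try:
                    yield self.send_fn(service_id, events)
                except Exception:
                    pass  # the real code logs the failure and carries on
        finally:
            self.in_flight.discard(service_id)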
@@ -149,14 +150,12 @@ class _TransactionController(object):
if service_is_up:
sent = yield txn.send(self.as_api)
if sent:
txn.complete(self.store)
yield txn.complete(self.store)
else:
self._start_recoverer(service)
preserve_fn(self._start_recoverer)(service)
except Exception as e:
logger.exception(e)
self._start_recoverer(service)
# request has finished
defer.returnValue(service)
preserve_fn(self._start_recoverer)(service)
@defer.inlineCallbacks
def on_recovered(self, recoverer):


@@ -64,11 +64,12 @@ class Config(object):
if isinstance(value, int) or isinstance(value, long):
return value
second = 1000
hour = 60 * 60 * second
minute = 60 * second
hour = 60 * minute
day = 24 * hour
week = 7 * day
year = 365 * day
sizes = {"s": second, "h": hour, "d": day, "w": week, "y": year}
sizes = {"s": second, "m": minute, "h": hour, "d": day, "w": week, "y": year}
size = 1
suffix = value[-1]
if suffix in sizes:
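The hunk above adds an "m" (minute) unit to the duration parser: a trailing suffix selects a multiplier in milliseconds and the remaining digits are scaled by it. A worked sketch of that logic:

def parse_duration_ms(value):
    # Mirrors the suffix table above; returns milliseconds.
    second = 1000
    minute = 60 * second
    hour = 60 * minute
    day = 24 * hour
    week = 7 * day
    year = 365 * day
    sizes = {"s": second, "m": minute, "h": hour,
             "d": day, "w": week, "y": year}
    size = 1
    suffix = value[-1]
    if suffix in sizes:
        value = value[:-1]
        size = sizes[suffix]
    return int(value) * size

# parse_duration_ms("10m") == 600000
# parse_duration_ms("2w")  == 1209600000 (two weeks)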
@@ -157,9 +158,40 @@ class Config(object):
return default_config, config
@classmethod
def load_config(cls, description, argv, generate_section=None):
obj = cls()
def load_config(cls, description, argv):
config_parser = argparse.ArgumentParser(
description=description,
)
config_parser.add_argument(
"-c", "--config-path",
action="append",
metavar="CONFIG_FILE",
help="Specify config file. Can be given multiple times and"
" may specify directories containing *.yaml files."
)
config_parser.add_argument(
"--keys-directory",
metavar="DIRECTORY",
help="Where files such as certs and signing keys are stored when"
" their location is given explicitly in the config."
" Defaults to the directory containing the last config file",
)
config_args = config_parser.parse_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)
obj = cls()
obj.read_config_files(
config_files,
keys_directory=config_args.keys_directory,
generate_keys=False,
)
return obj
@classmethod
def load_or_generate_config(cls, description, argv):
config_parser = argparse.ArgumentParser(add_help=False)
config_parser.add_argument(
"-c", "--config-path",
@@ -176,7 +208,7 @@ class Config(object):
config_parser.add_argument(
"--report-stats",
action="store",
help="Stuff",
help="Whether the generated config reports anonymized usage statistics",
choices=["yes", "no"]
)
config_parser.add_argument(
@@ -197,36 +229,11 @@ class Config(object):
)
config_args, remaining_args = config_parser.parse_known_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)
generate_keys = config_args.generate_keys
config_files = []
if config_args.config_path:
for config_path in config_args.config_path:
if os.path.isdir(config_path):
# We accept specifying directories as config paths, we search
# inside that directory for all files matching *.yaml, and then
# we apply them in *sorted* order.
files = []
for entry in os.listdir(config_path):
entry_path = os.path.join(config_path, entry)
if not os.path.isfile(entry_path):
print (
"Found subdirectory in config directory: %r. IGNORING."
) % (entry_path, )
continue
if not entry.endswith(".yaml"):
print (
"Found file in config directory that does not"
" end in '.yaml': %r. IGNORING."
) % (entry_path, )
continue
files.append(entry_path)
config_files.extend(sorted(files))
else:
config_files.append(config_path)
obj = cls()
if config_args.generate_config:
if config_args.report_stats is None:
@@ -299,28 +306,43 @@ class Config(object):
" -c CONFIG-FILE\""
)
if config_args.keys_directory:
config_dir_path = config_args.keys_directory
else:
config_dir_path = os.path.dirname(config_args.config_path[-1])
config_dir_path = os.path.abspath(config_dir_path)
obj.read_config_files(
config_files,
keys_directory=config_args.keys_directory,
generate_keys=generate_keys,
)
if generate_keys:
return None
obj.invoke_all("read_arguments", args)
return obj
def read_config_files(self, config_files, keys_directory=None,
generate_keys=False):
if not keys_directory:
keys_directory = os.path.dirname(config_files[-1])
config_dir_path = os.path.abspath(keys_directory)
specified_config = {}
for config_file in config_files:
yaml_config = cls.read_config_file(config_file)
yaml_config = self.read_config_file(config_file)
specified_config.update(yaml_config)
if "server_name" not in specified_config:
raise ConfigError(MISSING_SERVER_NAME)
server_name = specified_config["server_name"]
_, config = obj.generate_config(
_, config = self.generate_config(
config_dir_path=config_dir_path,
server_name=server_name,
is_generating_file=False,
)
config.pop("log_config")
config.update(specified_config)
if "report_stats" not in config:
raise ConfigError(
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS + "\n" +
@@ -328,11 +350,51 @@ class Config(object):
)
if generate_keys:
obj.invoke_all("generate_files", config)
self.invoke_all("generate_files", config)
return
obj.invoke_all("read_config", config)
self.invoke_all("read_config", config)
obj.invoke_all("read_arguments", args)
return obj
def find_config_files(search_paths):
"""Finds config files using a list of search paths. If a path is a file
then that file path is added to the list. If a search path is a directory
then all the "*.yaml" files in that directory are added to the list in
sorted order.
Args:
search_paths(list(str)): A list of paths to search.
Returns:
list(str): A list of file paths.
"""
config_files = []
if search_paths:
for config_path in search_paths:
if os.path.isdir(config_path):
# We accept specifying directories as config paths, we search
# inside that directory for all files matching *.yaml, and then
# we apply them in *sorted* order.
files = []
for entry in os.listdir(config_path):
entry_path = os.path.join(config_path, entry)
if not os.path.isfile(entry_path):
print (
"Found subdirectory in config directory: %r. IGNORING."
) % (entry_path, )
continue
if not entry.endswith(".yaml"):
print (
"Found file in config directory that does not"
" end in '.yaml': %r. IGNORING."
) % (entry_path, )
continue
files.append(entry_path)
config_files.extend(sorted(files))
else:
config_files.append(config_path)
return config_files
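A hypothetical invocation of find_config_files, showing the directory expansion and ordering described in its docstring (paths invented for illustration):

# Suppose /etc/synapse contains 10-base.yaml, 20-appservices.yaml and a
# stray subdirectory certs/ (which is ignored with a warning):
#
#   find_config_files(["/etc/synapse", "/tmp/override.yaml"])
#   => ["/etc/synapse/10-base.yaml",
#       "/etc/synapse/20-appservices.yaml",
#       "/tmp/override.yaml"]
#
# Files listed later win when read_config_files merges them, since each
# file's values update() the accumulated dict.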


@@ -12,16 +12,153 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from ._base import Config, ConfigError
from synapse.appservice import ApplicationService
from synapse.types import UserID
import urllib
import yaml
import logging
logger = logging.getLogger(__name__)
class AppServiceConfig(Config):
def read_config(self, config):
self.app_service_config_files = config.get("app_service_config_files", [])
self.notify_appservices = config.get("notify_appservices", True)
def default_config(cls, **kwargs):
return """\
# A list of application service config file to use
app_service_config_files: []
"""
def load_appservices(hostname, config_files):
"""Returns a list of Application Services from the config files."""
if not isinstance(config_files, list):
logger.warning(
"Expected %s to be a list of AS config files.", config_files
)
return []
# Dicts of value -> filename
seen_as_tokens = {}
seen_ids = {}
appservices = []
for config_file in config_files:
try:
with open(config_file, 'r') as f:
appservice = _load_appservice(
hostname, yaml.load(f), config_file
)
if appservice.id in seen_ids:
raise ConfigError(
"Cannot reuse ID across application services: "
"%s (files: %s, %s)" % (
appservice.id, config_file, seen_ids[appservice.id],
)
)
seen_ids[appservice.id] = config_file
if appservice.token in seen_as_tokens:
raise ConfigError(
"Cannot reuse as_token across application services: "
"%s (files: %s, %s)" % (
appservice.token,
config_file,
seen_as_tokens[appservice.token],
)
)
seen_as_tokens[appservice.token] = config_file
logger.info("Loaded application service: %s", appservice)
appservices.append(appservice)
except Exception as e:
logger.error("Failed to load appservice from '%s'", config_file)
logger.exception(e)
raise
return appservices
def _load_appservice(hostname, as_info, config_filename):
required_string_fields = [
"id", "as_token", "hs_token", "sender_localpart"
]
for field in required_string_fields:
if not isinstance(as_info.get(field), basestring):
raise KeyError("Required string field: '%s' (%s)" % (
field, config_filename,
))
# 'url' must either be a string or explicitly null, not missing
# to avoid accidentally turning off push for ASes.
if (not isinstance(as_info.get("url"), basestring) and
as_info.get("url", "") is not None):
raise KeyError(
"Required string field or explicit null: 'url' (%s)" % (config_filename,)
)
localpart = as_info["sender_localpart"]
if urllib.quote(localpart) != localpart:
raise ValueError(
"sender_localpart needs characters which are not URL encoded."
)
user = UserID(localpart, hostname)
user_id = user.to_string()
# Rate limiting for users of this AS is on by default (excludes sender)
rate_limited = True
if isinstance(as_info.get("rate_limited"), bool):
rate_limited = as_info.get("rate_limited")
# namespace checks
if not isinstance(as_info.get("namespaces"), dict):
raise KeyError("Requires 'namespaces' object.")
for ns in ApplicationService.NS_LIST:
# specific namespaces are optional
if ns in as_info["namespaces"]:
# expect a list of dicts with exclusive and regex keys
for regex_obj in as_info["namespaces"][ns]:
if not isinstance(regex_obj, dict):
raise ValueError(
"Expected namespace entry in %s to be an object,"
" but got %s", ns, regex_obj
)
if not isinstance(regex_obj.get("regex"), basestring):
raise ValueError(
"Missing/bad type 'regex' key in %s", regex_obj
)
if not isinstance(regex_obj.get("exclusive"), bool):
raise ValueError(
"Missing/bad type 'exclusive' key in %s", regex_obj
)
# protocols check
protocols = as_info.get("protocols")
if protocols:
# Because strings are lists in python
if isinstance(protocols, str) or not isinstance(protocols, list):
raise KeyError("Optional 'protocols' must be a list if present.")
for p in protocols:
if not isinstance(p, str):
raise KeyError("Bad value for 'protocols' item")
if as_info["url"] is None:
logger.info(
"(%s) Explicitly empty 'url' provided. This application service"
" will not receive events or queries.",
config_filename,
)
return ApplicationService(
token=as_info["as_token"],
url=as_info["url"],
namespaces=as_info["namespaces"],
hs_token=as_info["hs_token"],
sender=user_id,
id=as_info["id"],
protocols=protocols,
rate_limited=rate_limited
)
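For reference, a minimal registration file that passes every check in _load_appservice above; all tokens and names are placeholders:

# registration.yaml (illustrative values only)
id: "irc-bridge"
url: "http://localhost:9999"     # or an explicit null to disable pushes
as_token: "<unique-as-token>"
hs_token: "<unique-hs-token>"
sender_localpart: "ircbot"       # must survive urllib.quote() unchanged
rate_limited: false              # optional; defaults to true
namespaces:                      # required object; each entry needs
  users:                         # both "exclusive" and "regex"
    - exclusive: true
      regex: "@irc_.*"
protocols: ["irc"]               # optional list of strings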


@@ -27,6 +27,7 @@ class CaptchaConfig(Config):
def default_config(self, **kwargs):
return """\
## Captcha ##
# See docs/CAPTCHA_SETUP for full details of configuring this.
# This Home Server's ReCAPTCHA public key.
recaptcha_public_key: "YOUR_PUBLIC_KEY"


@@ -0,0 +1,98 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file can't be called email.py because if it is, we cannot:
import email.utils
from ._base import Config
class EmailConfig(Config):
def read_config(self, config):
self.email_enable_notifs = False
email_config = config.get("email", {})
self.email_enable_notifs = email_config.get("enable_notifs", False)
if self.email_enable_notifs:
# make sure we can import the required deps
import jinja2
import bleach
# prevent unused warnings
jinja2
bleach
required = [
"smtp_host",
"smtp_port",
"notif_from",
"template_dir",
"notif_template_html",
"notif_template_text",
]
missing = []
for k in required:
if k not in email_config:
missing.append(k)
if (len(missing) > 0):
raise RuntimeError(
"email.enable_notifs is True but required keys are missing: %s" %
(", ".join(["email." + k for k in missing]),)
)
if config.get("public_baseurl") is None:
raise RuntimeError(
"email.enable_notifs is True but no public_baseurl is set"
)
self.email_smtp_host = email_config["smtp_host"]
self.email_smtp_port = email_config["smtp_port"]
self.email_notif_from = email_config["notif_from"]
self.email_template_dir = email_config["template_dir"]
self.email_notif_template_html = email_config["notif_template_html"]
self.email_notif_template_text = email_config["notif_template_text"]
self.email_notif_for_new_users = email_config.get(
"notif_for_new_users", True
)
if "app_name" in email_config:
self.email_app_name = email_config["app_name"]
else:
self.email_app_name = "Matrix"
# make sure it's valid
parsed = email.utils.parseaddr(self.email_notif_from)
if parsed[1] == '':
raise RuntimeError("Invalid notif_from address")
else:
self.email_enable_notifs = False
# Not much point setting defaults for the rest: it would be an
# error for them to be used.
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Enable sending emails for notification events
#email:
# enable_notifs: false
# smtp_host: "localhost"
# smtp_port: 25
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
# template_dir: res/templates
# notif_template_html: notif_mail.html
# notif_template_text: notif_mail.txt
# notif_for_new_users: True
"""


@@ -30,14 +30,17 @@ from .saml2 import SAML2Config
from .cas import CasConfig
from .password import PasswordConfig
from .jwt import JWTConfig
from .ldap import LDAPConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .emailconfig import EmailConfig
from .workers import WorkerConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
RatelimitConfig, ContentRepositoryConfig, CaptchaConfig,
VoipConfig, RegistrationConfig, MetricsConfig, ApiConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
JWTConfig, LDAPConfig, PasswordConfig,):
JWTConfig, PasswordConfig, EmailConfig,
WorkerConfig, PasswordAuthProviderConfig,):
pass


@@ -13,7 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from ._base import Config, ConfigError
MISSING_JWT = (
"""Missing jwt library. This is required for jwt login.
Install by running:
pip install pyjwt
"""
)
class JWTConfig(Config):
@@ -23,6 +32,12 @@ class JWTConfig(Config):
self.jwt_enabled = jwt_config.get("enabled", False)
self.jwt_secret = jwt_config["secret"]
self.jwt_algorithm = jwt_config["algorithm"]
try:
import jwt
jwt # To stop unused lint.
except ImportError:
raise ConfigError(MISSING_JWT)
else:
self.jwt_enabled = False
self.jwt_secret = None
@@ -30,6 +45,8 @@ class JWTConfig(Config):
def default_config(self, **kwargs):
return """\
# The JWT needs to contain a globally unique "sub" (subject) claim.
#
# jwt_config:
# enabled: true
# secret: "a secret"


@@ -57,6 +57,8 @@ class KeyConfig(Config):
seed = self.signing_key[0].seed
self.macaroon_secret_key = hashlib.sha256(seed)
self.expire_access_token = config.get("expire_access_token", False)
def default_config(self, config_dir_path, server_name, is_generating_file=False,
**kwargs):
base_key_name = os.path.join(config_dir_path, server_name)
@@ -69,6 +71,9 @@ class KeyConfig(Config):
return """\
macaroon_secret_key: "%(macaroon_secret_key)s"
# Used to enable access token expiration.
expire_access_token: False
## Signing Keys ##
# Path to the signing key to sign messages with


@@ -1,52 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Niklas Riekenbrauck
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class LDAPConfig(Config):
def read_config(self, config):
ldap_config = config.get("ldap_config", None)
if ldap_config:
self.ldap_enabled = ldap_config.get("enabled", False)
self.ldap_server = ldap_config["server"]
self.ldap_port = ldap_config["port"]
self.ldap_tls = ldap_config.get("tls", False)
self.ldap_search_base = ldap_config["search_base"]
self.ldap_search_property = ldap_config["search_property"]
self.ldap_email_property = ldap_config["email_property"]
self.ldap_full_name_property = ldap_config["full_name_property"]
else:
self.ldap_enabled = False
self.ldap_server = None
self.ldap_port = None
self.ldap_tls = False
self.ldap_search_base = None
self.ldap_search_property = None
self.ldap_email_property = None
self.ldap_full_name_property = None
def default_config(self, **kwargs):
return """\
# ldap_config:
# enabled: true
# server: "ldap://localhost"
# port: 389
# tls: false
# search_base: "ou=Users,dc=example,dc=com"
# search_property: "cn"
# email_property: "email"
# full_name_property: "givenName"
"""


@@ -126,54 +126,58 @@ class LoggingConfig(Config):
)
def setup_logging(self):
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if self.log_config is None:
setup_logging(self.log_config, self.log_file, self.verbosity)
level = logging.INFO
level_for_storage = logging.INFO
if self.verbosity:
level = logging.DEBUG
if self.verbosity > 1:
level_for_storage = logging.DEBUG
# FIXME: we need a logging.WARN for a -q quiet option
logger = logging.getLogger('')
logger.setLevel(level)
def setup_logging(log_config=None, log_file=None, verbosity=None):
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if log_config is None:
logging.getLogger('synapse.storage').setLevel(level_for_storage)
level = logging.INFO
level_for_storage = logging.INFO
if verbosity:
level = logging.DEBUG
if verbosity > 1:
level_for_storage = logging.DEBUG
formatter = logging.Formatter(log_format)
if self.log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
self.log_file, maxBytes=(1000 * 1000 * 100), backupCount=3
)
# FIXME: we need a logging.WARN for a -q quiet option
logger = logging.getLogger('')
logger.setLevel(level)
def sighup(signum, stack):
logger.info("Closing log file due to SIGHUP")
handler.doRollover()
logger.info("Opened new log file due to SIGHUP")
logging.getLogger('synapse.storage').setLevel(level_for_storage)
# TODO(paul): obviously this is a terrible mechanism for
# stealing SIGHUP, because it means no other part of synapse
# can use it instead. If we want to catch SIGHUP anywhere
# else as well, I'd suggest we find a nicer way to broadcast
# it around.
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
else:
handler = logging.StreamHandler()
handler.setFormatter(formatter)
formatter = logging.Formatter(log_format)
if log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3
)
handler.addFilter(LoggingContextFilter(request=""))
def sighup(signum, stack):
logger.info("Closing log file due to SIGHUP")
handler.doRollover()
logger.info("Opened new log file due to SIGHUP")
logger.addHandler(handler)
# TODO(paul): obviously this is a terrible mechanism for
# stealing SIGHUP, because it means no other part of synapse
# can use it instead. If we want to catch SIGHUP anywhere
# else as well, I'd suggest we find a nicer way to broadcast
# it around.
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
else:
with open(self.log_config, 'r') as f:
logging.config.dictConfig(yaml.load(f))
handler = logging.StreamHandler()
handler.setFormatter(formatter)
observer = PythonLoggingObserver()
observer.start()
handler.addFilter(LoggingContextFilter(request=""))
logger.addHandler(handler)
else:
with open(log_config, 'r') as f:
logging.config.dictConfig(yaml.load(f))
observer = PythonLoggingObserver()
observer.start()


@@ -23,10 +23,14 @@ class PasswordConfig(Config):
def read_config(self, config):
password_config = config.get("password_config", {})
self.password_enabled = password_config.get("enabled", True)
self.password_pepper = password_config.get("pepper", "")
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Enable password for login.
password_config:
enabled: true
# Uncomment and change to a secret random string for extra security.
# DO NOT CHANGE THIS AFTER INITIAL SETUP!
#pepper: ""
"""


@@ -0,0 +1,66 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Openmarket
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
import importlib
class PasswordAuthProviderConfig(Config):
def read_config(self, config):
self.password_providers = []
# We want to be backwards compatible with the old `ldap_config`
# param.
ldap_config = config.get("ldap_config", {})
self.ldap_enabled = ldap_config.get("enabled", False)
if self.ldap_enabled:
from synapse.util.ldap_auth_provider import LdapAuthProvider
parsed_config = LdapAuthProvider.parse_config(ldap_config)
self.password_providers.append((LdapAuthProvider, parsed_config))
providers = config.get("password_providers", [])
for provider in providers:
# We need to import the module, and then pick the class out of
# that, so we split based on the last dot.
module, clz = provider['module'].rsplit(".", 1)
module = importlib.import_module(module)
provider_class = getattr(module, clz)
try:
provider_config = provider_class.parse_config(provider["config"])
except Exception as e:
raise ConfigError(
"Failed to parse config for %r: %r" % (provider['module'], e)
)
self.password_providers.append((provider_class, provider_config))
def default_config(self, **kwargs):
return """\
# password_providers:
# - module: "synapse.util.ldap_auth_provider.LdapAuthProvider"
# config:
# enabled: true
# uri: "ldap://ldap.example.com:389"
# start_tls: true
# base: "ou=users,dc=example,dc=com"
# attributes:
# uid: "cn"
# mail: "email"
# name: "givenName"
# #bind_dn:
# #bind_password:
# #filter: "(objectClass=posixAccount)"
"""


@@ -32,6 +32,7 @@ class RegistrationConfig(Config):
)
self.registration_shared_secret = config.get("registration_shared_secret")
self.user_creation_max_duration = int(config["user_creation_max_duration"])
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
self.trusted_third_party_id_servers = config["trusted_third_party_id_servers"]
@@ -54,6 +55,11 @@ class RegistrationConfig(Config):
# secret, even if registration is otherwise disabled.
registration_shared_secret: "%(registration_shared_secret)s"
# Sets the expiry for the short term user creation in
# milliseconds. For instance, the below duration is two weeks
# in milliseconds.
user_creation_max_duration: 1209600000
# Set the number of bcrypt rounds used to generate password hash.
# Larger numbers increase the work factor needed to generate the hash.
# The default number of rounds is 12.


@@ -13,9 +13,25 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from ._base import Config, ConfigError
from collections import namedtuple
MISSING_NETADDR = (
"Missing netaddr library. This is required for URL preview API."
)
MISSING_LXML = (
"""Missing lxml library. This is required for URL preview API.
Install by running:
pip install lxml
Requires libxslt1-dev system package.
"""
)
ThumbnailRequirement = namedtuple(
"ThumbnailRequirement", ["width", "height", "method", "media_type"]
)
@@ -23,7 +39,7 @@ ThumbnailRequirement = namedtuple(
def parse_thumbnail_requirements(thumbnail_sizes):
""" Takes a list of dictionaries with "width", "height", and "method" keys
and creates a map from image media types to the thumbnail size, thumnailing
and creates a map from image media types to the thumbnail size, thumbnailing
method, and thumbnail media type to precalculate
Args:
@@ -53,12 +69,44 @@ class ContentRepositoryConfig(Config):
def read_config(self, config):
self.max_upload_size = self.parse_size(config["max_upload_size"])
self.max_image_pixels = self.parse_size(config["max_image_pixels"])
self.max_spider_size = self.parse_size(config["max_spider_size"])
self.media_store_path = self.ensure_directory(config["media_store_path"])
self.uploads_path = self.ensure_directory(config["uploads_path"])
self.dynamic_thumbnails = config["dynamic_thumbnails"]
self.thumbnail_requirements = parse_thumbnail_requirements(
config["thumbnail_sizes"]
)
self.url_preview_enabled = config.get("url_preview_enabled", False)
if self.url_preview_enabled:
try:
import lxml
lxml # To stop unused lint.
except ImportError:
raise ConfigError(MISSING_LXML)
try:
from netaddr import IPSet
except ImportError:
raise ConfigError(MISSING_NETADDR)
if "url_preview_ip_range_blacklist" in config:
self.url_preview_ip_range_blacklist = IPSet(
config["url_preview_ip_range_blacklist"]
)
else:
raise ConfigError(
"For security, you must specify an explicit target IP address "
"blacklist in url_preview_ip_range_blacklist for url previewing "
"to work"
)
self.url_preview_ip_range_whitelist = IPSet(
config.get("url_preview_ip_range_whitelist", ())
)
self.url_preview_url_blacklist = config.get(
"url_preview_url_blacklist", ()
)
def default_config(self, **kwargs):
media_store = self.default_path("media_store")
@@ -80,7 +128,7 @@ class ContentRepositoryConfig(Config):
# the resolution requested by the client. If true then whenever
# a new resolution is requested by the client the server will
# generate a new thumbnail. If false the server will pick a thumbnail
# from a precalcualted list.
# from a precalculated list.
dynamic_thumbnails: false
# List of thumbnail to precalculate when an image is uploaded.
@@ -100,4 +148,73 @@ class ContentRepositoryConfig(Config):
- width: 800
height: 600
method: scale
# Is the preview URL API enabled? If enabled, you *must* specify
# an explicit url_preview_ip_range_blacklist of IPs that the spider is
# denied from accessing.
url_preview_enabled: False
# List of IP address CIDR ranges that the URL preview spider is denied
# from accessing. There are no defaults: you must explicitly
# specify a list for URL previewing to work. You should specify any
# internal services in your network that you do not want synapse to try
# to connect to, otherwise anyone in any Matrix room could cause your
# synapse to issue arbitrary GET requests to your internal services,
# causing serious security issues.
#
# url_preview_ip_range_blacklist:
# - '127.0.0.0/8'
# - '10.0.0.0/8'
# - '172.16.0.0/12'
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '169.254.0.0/16'
#
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.
# This is useful for specifying exceptions to wide-ranging blacklisted
# target IP ranges - e.g. for enabling URL previews for a specific private
# website only visible in your network.
#
# url_preview_ip_range_whitelist:
# - '192.168.1.1'
# Optional list of URL matches that the URL preview spider is
# denied from accessing. You should use url_preview_ip_range_blacklist
# in preference to this, otherwise someone could define a public DNS
# entry that points to a private IP address and circumvent the blacklist.
# This is more useful if you know there is an entire shape of URL that
# you will never want synapse to try to spider.
#
# Each list entry is a dictionary of url component attributes as returned
# by urlparse.urlsplit as applied to the absolute form of the URL. See
# https://docs.python.org/2/library/urlparse.html#urlparse.urlsplit
# The values of the dictionary are treated as a filename match pattern
# applied to that component of URLs, unless they start with a ^ in which
# case they are treated as a regular expression match. If all the
# specified component matches for a given list item succeed, the URL is
# blacklisted.
#
# url_preview_url_blacklist:
# # blacklist any URL with a username in its URI
# - username: '*'
#
# # blacklist all *.google.com URLs
# - netloc: 'google.com'
# - netloc: '*.google.com'
#
# # blacklist all plain HTTP URLs
# - scheme: 'http'
#
# # blacklist http(s)://www.acme.com/foo
# - netloc: 'www.acme.com'
# path: '/foo'
#
# # blacklist any URL with a literal IPv4 address
# - netloc: '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
# The largest allowed URL preview spidering size in bytes
max_spider_size: "10M"
""" % locals()


@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from ._base import Config, ConfigError
class ServerConfig(Config):
@@ -27,10 +27,18 @@ class ServerConfig(Config):
self.daemonize = config.get("daemonize")
self.print_pidfile = config.get("print_pidfile")
self.user_agent_suffix = config.get("user_agent_suffix")
self.use_frozen_dicts = config.get("use_frozen_dicts", True)
self.use_frozen_dicts = config.get("use_frozen_dicts", False)
self.public_baseurl = config.get("public_baseurl")
if self.public_baseurl is not None:
if self.public_baseurl[-1] != '/':
self.public_baseurl += '/'
self.start_pushers = config.get("start_pushers", True)
self.listeners = config.get("listeners", [])
self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
bind_port = config.get("bind_port")
if bind_port:
self.listeners = []
@@ -98,26 +106,6 @@ class ServerConfig(Config):
]
})
# Attempt to guess the content_addr for the v0 content repository
content_addr = config.get("content_addr")
if not content_addr:
for listener in self.listeners:
if listener["type"] == "http" and not listener.get("tls", False):
unsecure_port = listener["port"]
break
else:
raise RuntimeError("Could not determine 'content_addr'")
host = self.server_name
if ':' not in host:
host = "%s:%d" % (host, unsecure_port)
else:
host = host.split(':')[0]
host = "%s:%d" % (host, unsecure_port)
content_addr = "http://%s" % (host,)
self.content_addr = content_addr
def default_config(self, server_name, **kwargs):
if ":" in server_name:
bind_port = int(server_name.split(":")[1])
@@ -142,11 +130,17 @@ class ServerConfig(Config):
# Whether to serve a web client from the HTTP/HTTPS root resource.
web_client: True
# The public-facing base URL for the client API (not including _matrix/...)
# public_baseurl: https://example.com:8448/
# Set the soft limit on the number of file descriptors synapse can use
# Zero is used to indicate synapse should set the soft limit to the
# hard limit.
soft_file_limit: 0
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
# gc_thresholds: [700, 10, 10]
# List of ports that Synapse should listen on, their purpose and their
# configuration.
listeners:
@@ -228,3 +222,20 @@ class ServerConfig(Config):
type=int,
help="Turn on the twisted telnet manhole"
" service on the given port.")
def read_gc_thresholds(thresholds):
"""Reads the three integer thresholds for garbage collection. Ensures that
the thresholds are integers if thresholds are supplied.
"""
if thresholds is None:
return None
try:
assert len(thresholds) == 3
return (
int(thresholds[0]), int(thresholds[1]), int(thresholds[2]),
)
except:
raise ConfigError(
"Value of `gc_threshold` must be a list of three integers if set"
)
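Worked examples of read_gc_thresholds, whose result is later handed to gc.set_threshold (see the gc_thresholds comment in the generated config):

# read_gc_thresholds(None)          -> None (leave the gc defaults alone)
# read_gc_thresholds([700, 10, 10]) -> (700, 10, 10)
# read_gc_thresholds([700, 10])     -> ConfigError: must be three integers
# read_gc_thresholds(["700", 5, 5]) -> (700, 5, 5); int() coerces strings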


@@ -19,6 +19,9 @@ from OpenSSL import crypto
import subprocess
import os
from hashlib import sha256
from unpaddedbase64 import encode_base64
GENERATE_DH_PARAMS = False
@@ -42,6 +45,19 @@ class TlsConfig(Config):
config.get("tls_dh_params_path"), "tls_dh_params"
)
self.tls_fingerprints = config["tls_fingerprints"]
# Check that our own certificate is included in the list of fingerprints
# and include it if it is not.
x509_certificate_bytes = crypto.dump_certificate(
crypto.FILETYPE_ASN1,
self.tls_certificate
)
sha256_fingerprint = encode_base64(sha256(x509_certificate_bytes).digest())
sha256_fingerprints = set(f["sha256"] for f in self.tls_fingerprints)
if sha256_fingerprint not in sha256_fingerprints:
self.tls_fingerprints.append({u"sha256": sha256_fingerprint})
# This config option applies to non-federation HTTP clients
# (e.g. for talking to recaptcha, identity servers, and such)
# It should never be used in production, and is intended for
@@ -73,6 +89,28 @@ class TlsConfig(Config):
# Don't bind to the https port
no_tls: False
# List of allowed TLS fingerprints for this server to publish along
# with the signing keys for this server. Other matrix servers that
# make HTTPS requests to this server will check that the TLS
# certificates returned by this server match one of the fingerprints.
#
# Synapse automatically adds the fingerprint of its own certificate
# to the list. So if federation traffic is handled directly by synapse
# then no modification to the list is required.
#
# If synapse is run behind a load balancer that handles the TLS then it
# will be necessary to add the fingerprints of the certificates used by
# the load balancers to this list if they are different to the one
# synapse is using.
#
# Homeservers are permitted to cache the list of TLS fingerprints
# returned in the key responses up to the "valid_until_ts" returned in
# key. It may be necessary to publish the fingerprints of a new
# certificate and wait until the "valid_until_ts" of the previous key
# responses have passed before deploying it.
tls_fingerprints: []
# tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]
""" % locals()
def read_tls_certificate(self, cert_path):

synapse/config/workers.py (new file)

@@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
# Copyright 2016 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class WorkerConfig(Config):
"""The workers are processes run separately to the main synapse process.
They have their own pid_file and listener configuration. They use the
replication_url to talk to the main synapse process."""
def read_config(self, config):
self.worker_app = config.get("worker_app")
self.worker_listeners = config.get("worker_listeners")
self.worker_daemonize = config.get("worker_daemonize")
self.worker_pid_file = config.get("worker_pid_file")
self.worker_log_file = config.get("worker_log_file")
self.worker_log_config = config.get("worker_log_config")
self.worker_replication_url = config.get("worker_replication_url")
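A hypothetical worker file exercising the keys read above; the app name follows synapse's worker naming, but every value here is a placeholder:

# synchrotron.yaml (illustrative)
worker_app: synapse.app.synchrotron
worker_replication_url: http://127.0.0.1:9092/_synapse/replication
worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client]
worker_daemonize: true
worker_pid_file: /var/run/synapse/synchrotron.pid
worker_log_file: /var/log/synapse/synchrotron.log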


@@ -77,10 +77,12 @@ class SynapseKeyClientProtocol(HTTPClient):
def __init__(self):
self.remote_key = defer.Deferred()
self.host = None
self._peer = None
def connectionMade(self):
self.host = self.transport.getHost()
logger.debug("Connected to %s", self.host)
self._peer = self.transport.getPeer()
logger.debug("Connected to %s", self._peer)
self.sendCommand(b"GET", self.path)
if self.host:
self.sendHeader(b"Host", self.host)
@@ -124,7 +126,10 @@ class SynapseKeyClientProtocol(HTTPClient):
self.timer.cancel()
def on_timeout(self):
logger.debug("Timeout waiting for response from %s", self.host)
logger.debug(
"Timeout waiting for response from %s: %s",
self.host, self._peer,
)
self.errback(IOError("Timeout waiting for response"))
self.transport.abortConnection()
@@ -133,4 +138,5 @@ class SynapseKeyClientFactory(Factory):
def protocol(self):
protocol = SynapseKeyClientProtocol()
protocol.path = self.path
protocol.host = self.host
return protocol


@@ -22,6 +22,7 @@ from synapse.util.logcontext import (
preserve_context_over_deferred, preserve_context_over_fn, PreserveLoggingContext,
preserve_fn
)
from synapse.util.metrics import Measure
from twisted.internet import defer
@@ -44,7 +45,25 @@ import logging
logger = logging.getLogger(__name__)
KeyGroup = namedtuple("KeyGroup", ("server_name", "group_id", "key_ids"))
VerifyKeyRequest = namedtuple("VerifyRequest", (
"server_name", "key_ids", "json_object", "deferred"
))
"""
A request for a verify key to verify a JSON object.
Attributes:
server_name(str): The name of the server to verify against.
key_ids(set(str)): The set of key_ids that could be used to verify the
JSON object.
json_object(dict): The JSON object to verify.
deferred(twisted.internet.defer.Deferred):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched
"""
class KeyLookupError(ValueError):
pass
class Keyring(object):
@@ -74,39 +93,32 @@ class Keyring(object):
list of deferreds indicating success or failure to verify each
json object's signature for the given server_name.
"""
group_id_to_json = {}
group_id_to_group = {}
group_ids = []
next_group_id = 0
deferreds = {}
verify_requests = []
for server_name, json_object in server_and_json:
logger.debug("Verifying for %s", server_name)
group_id = next_group_id
next_group_id += 1
group_ids.append(group_id)
key_ids = signature_ids(json_object, server_name)
if not key_ids:
deferreds[group_id] = defer.fail(SynapseError(
deferred = defer.fail(SynapseError(
400,
"Not signed with a supported algorithm",
Codes.UNAUTHORIZED,
))
else:
deferreds[group_id] = defer.Deferred()
deferred = defer.Deferred()
group = KeyGroup(server_name, group_id, key_ids)
verify_request = VerifyKeyRequest(
server_name, key_ids, json_object, deferred
)
group_id_to_group[group_id] = group
group_id_to_json[group_id] = json_object
verify_requests.append(verify_request)
@defer.inlineCallbacks
def handle_key_deferred(group, deferred):
server_name = group.server_name
def handle_key_deferred(verify_request):
server_name = verify_request.server_name
try:
_, _, key_id, verify_key = yield deferred
_, key_id, verify_key = yield verify_request.deferred
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
@@ -128,7 +140,7 @@ class Keyring(object):
Codes.UNAUTHORIZED,
)
json_object = group_id_to_json[group.group_id]
json_object = verify_request.json_object
try:
verify_signed_json(json_object, server_name, verify_key)
@@ -157,36 +169,34 @@ class Keyring(object):
# Actually start fetching keys.
wait_on_deferred.addBoth(
lambda _: self.get_server_verify_keys(group_id_to_group, deferreds)
lambda _: self.get_server_verify_keys(verify_requests)
)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
server_to_gids = {}
server_to_request_ids = {}
def remove_deferreds(res, server_name, group_id):
server_to_gids[server_name].discard(group_id)
if not server_to_gids[server_name]:
def remove_deferreds(res, server_name, verify_request):
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for g_id, deferred in deferreds.items():
server_name = group_id_to_group[g_id].server_name
server_to_gids.setdefault(server_name, set()).add(g_id)
deferred.addBoth(remove_deferreds, server_name, g_id)
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
deferred.addBoth(remove_deferreds, server_name, verify_request)
# Pass those keys to handle_key_deferred so that the json object
# signatures can be verified
return [
preserve_context_over_fn(
handle_key_deferred,
group_id_to_group[g_id],
deferreds[g_id],
)
for g_id in group_ids
preserve_context_over_fn(handle_key_deferred, verify_request)
for verify_request in verify_requests
]
@defer.inlineCallbacks
@@ -220,7 +230,7 @@ class Keyring(object):
d.addBoth(rm, server_name)
def get_server_verify_keys(self, group_id_to_group, group_id_to_deferred):
def get_server_verify_keys(self, verify_requests):
"""Takes a dict of KeyGroups and tries to find at least one key for
each group.
"""
@@ -234,76 +244,79 @@ class Keyring(object):
@defer.inlineCallbacks
def do_iterations():
merged_results = {}
with Measure(self.clock, "get_server_verify_keys"):
merged_results = {}
missing_keys = {}
for group in group_id_to_group.values():
missing_keys.setdefault(group.server_name, set()).update(
group.key_ids
)
for fn in key_fetch_fns:
results = yield fn(missing_keys.items())
merged_results.update(results)
# We now need to figure out which groups we have keys for
# and which we don't
missing_groups = {}
for group in group_id_to_group.values():
for key_id in group.key_ids:
if key_id in merged_results[group.server_name]:
with PreserveLoggingContext():
group_id_to_deferred[group.group_id].callback((
group.group_id,
group.server_name,
key_id,
merged_results[group.server_name][key_id],
))
break
else:
missing_groups.setdefault(
group.server_name, []
).append(group)
if not missing_groups:
break
missing_keys = {
server_name: set(
key_id for group in groups for key_id in group.key_ids
missing_keys = {}
for verify_request in verify_requests:
missing_keys.setdefault(verify_request.server_name, set()).update(
verify_request.key_ids
)
for server_name, groups in missing_groups.items()
}
for group in missing_groups.values():
group_id_to_deferred[group.group_id].errback(SynapseError(
401,
"No key for %s with id %s" % (
group.server_name, group.key_ids,
),
Codes.UNAUTHORIZED,
))
for fn in key_fetch_fns:
results = yield fn(missing_keys.items())
merged_results.update(results)
# We now need to figure out which verify requests we have keys
# for and which we don't
missing_keys = {}
requests_missing_keys = []
for verify_request in verify_requests:
server_name = verify_request.server_name
result_keys = merged_results[server_name]
if verify_request.deferred.called:
# We've already called this deferred, which probably
# means that we've already found a key for it.
continue
for key_id in verify_request.key_ids:
if key_id in result_keys:
with PreserveLoggingContext():
verify_request.deferred.callback((
server_name,
key_id,
result_keys[key_id],
))
break
else:
# The else block is only reached if the loop above
# doesn't break.
missing_keys.setdefault(server_name, set()).update(
verify_request.key_ids
)
requests_missing_keys.append(verify_request)
if not missing_keys:
break
for verify_request in requests_missing_keys:
verify_request.deferred.errback(SynapseError(
401,
"No key for %s with id %s" % (
verify_request.server_name, verify_request.key_ids,
),
Codes.UNAUTHORIZED,
))
def on_err(err):
for deferred in group_id_to_deferred.values():
if not deferred.called:
deferred.errback(err)
for verify_request in verify_requests:
if not verify_request.deferred.called:
verify_request.deferred.errback(err)
do_iterations().addErrback(on_err)
return group_id_to_deferred
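The loop above layers the fetch functions visible later in this class (the local store, the perspective servers, then the origin server), stopping once nothing is missing. A simplified sketch of that control flow; simplified in that it treats every wanted key as required, whereas the code above only needs one usable key per verify request:

from twisted.internet import defer

@defer.inlineCallbacks
def fetch_until_satisfied(key_fetch_fns, wanted):
    # wanted: {server_name: set(key_ids)}; each fetcher takes a list of
    # (server_name, key_ids) pairs and returns {server_name: {key_id: key}}.
    found = {}
    for fetch in key_fetch_fns:
        results = yield fetch(wanted.items())
        for server, keys in results.items():
            found.setdefault(server, {}).update(keys)
        still_missing = {}
        for server, key_ids in wanted.items():
            missing = key_ids - set(found.get(server, {}))
            if missing:
                still_missing[server] = missing
        wanted = still_missing
        if not wanted:
            break
    defer.returnValue(found)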
@defer.inlineCallbacks
def get_keys_from_store(self, server_name_and_key_ids):
res = yield defer.gatherResults(
res = yield preserve_context_over_deferred(defer.gatherResults(
[
self.store.get_server_verify_keys(
preserve_fn(self.store.get_server_verify_keys)(
server_name, key_ids
).addCallback(lambda ks, server: (server, ks), server_name)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)
defer.returnValue(dict(res))
@@ -324,13 +337,13 @@ class Keyring(object):
)
defer.returnValue({})
results = yield defer.gatherResults(
results = yield preserve_context_over_deferred(defer.gatherResults(
[
get_key(p_name, p_keys)
preserve_fn(get_key)(p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)
union_of_keys = {}
for result in results:
@@ -356,7 +369,7 @@ class Keyring(object):
)
except Exception as e:
logger.info(
"Unable to getting key %r for %r directly: %s %s",
"Unable to get key %r for %r directly: %s %s",
key_ids, server_name,
type(e).__name__, str(e.message),
)
@@ -370,13 +383,13 @@ class Keyring(object):
defer.returnValue(keys)
results = yield defer.gatherResults(
results = yield preserve_context_over_deferred(defer.gatherResults(
[
get_key(server_name, key_ids)
preserve_fn(get_key)(server_name, key_ids)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)
merged = {}
for result in results:
@@ -418,7 +431,7 @@ class Keyring(object):
for response in responses:
if (u"signatures" not in response
or perspective_name not in response[u"signatures"]):
raise ValueError(
raise KeyLookupError(
"Key response not signed by perspective server"
" %r" % (perspective_name,)
)
@@ -441,21 +454,21 @@ class Keyring(object):
list(response[u"signatures"][perspective_name]),
list(perspective_keys)
)
raise ValueError(
raise KeyLookupError(
"Response not signed with a known key for perspective"
" server %r" % (perspective_name,)
)
processed_response = yield self.process_v2_response(
perspective_name, response
perspective_name, response, only_from_server=False
)
for server_name, response_keys in processed_response.items():
keys.setdefault(server_name, {}).update(response_keys)
yield defer.gatherResults(
yield preserve_context_over_deferred(defer.gatherResults(
[
self.store_keys(
preserve_fn(self.store_keys)(
server_name=server_name,
from_server=perspective_name,
verify_keys=response_keys,
@@ -463,7 +476,7 @@ class Keyring(object):
for server_name, response_keys in keys.items()
],
consumeErrors=True
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)
defer.returnValue(keys)
@@ -484,10 +497,10 @@ class Keyring(object):
if (u"signatures" not in response
or server_name not in response[u"signatures"]):
raise ValueError("Key response not signed by remote server")
raise KeyLookupError("Key response not signed by remote server")
if "tls_fingerprints" not in response:
raise ValueError("Key response missing TLS fingerprints")
raise KeyLookupError("Key response missing TLS fingerprints")
certificate_bytes = crypto.dump_certificate(
crypto.FILETYPE_ASN1, tls_certificate
@@ -501,7 +514,7 @@ class Keyring(object):
response_sha256_fingerprints.add(fingerprint[u"sha256"])
if sha256_fingerprint_b64 not in response_sha256_fingerprints:
raise ValueError("TLS certificate not allowed by fingerprints")
raise KeyLookupError("TLS certificate not allowed by fingerprints")
response_keys = yield self.process_v2_response(
from_server=server_name,
@@ -511,7 +524,7 @@ class Keyring(object):
keys.update(response_keys)
yield defer.gatherResults(
yield preserve_context_over_deferred(defer.gatherResults(
[
preserve_fn(self.store_keys)(
server_name=key_server_name,
@@ -521,13 +534,13 @@ class Keyring(object):
for key_server_name, verify_keys in keys.items()
],
consumeErrors=True
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)
defer.returnValue(keys)
@defer.inlineCallbacks
def process_v2_response(self, from_server, response_json,
requested_ids=[]):
requested_ids=[], only_from_server=True):
time_now_ms = self.clock.time_msec()
response_keys = {}
verify_keys = {}
@@ -551,9 +564,16 @@ class Keyring(object):
results = {}
server_name = response_json["server_name"]
if only_from_server:
if server_name != from_server:
raise KeyLookupError(
"Expected a response for server %r not %r" % (
from_server, server_name
)
)
for key_id in response_json["signatures"].get(server_name, {}):
if key_id not in response_json["verify_keys"]:
raise ValueError(
raise KeyLookupError(
"Key response must include verification keys for all"
" signatures"
)
@@ -580,7 +600,7 @@ class Keyring(object):
response_keys.update(verify_keys)
response_keys.update(old_verify_keys)
yield defer.gatherResults(
yield preserve_context_over_deferred(defer.gatherResults(
[
preserve_fn(self.store.store_server_keys_json)(
server_name=server_name,
@@ -593,7 +613,7 @@ class Keyring(object):
for key_id in updated_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)
results[server_name] = response_keys
@@ -621,15 +641,15 @@ class Keyring(object):
if ("signatures" not in response
or server_name not in response["signatures"]):
raise ValueError("Key response not signed by remote server")
raise KeyLookupError("Key response not signed by remote server")
if "tls_certificate" not in response:
raise ValueError("Key response missing TLS certificate")
raise KeyLookupError("Key response missing TLS certificate")
tls_certificate_b64 = response["tls_certificate"]
if encode_base64(x509_certificate_bytes) != tls_certificate_b64:
raise ValueError("TLS certificate doesn't match")
raise KeyLookupError("TLS certificate doesn't match")
# Cache the result in the datastore.
@@ -645,7 +665,7 @@ class Keyring(object):
for key_id in response["signatures"][server_name]:
if key_id not in response["verify_keys"]:
raise ValueError(
raise KeyLookupError(
"Key response must include verification keys for all"
" signatures"
)
@@ -682,7 +702,7 @@ class Keyring(object):
A deferred that completes when the keys are stored.
"""
# TODO(markjh): Store whether the keys have expired.
yield defer.gatherResults(
yield preserve_context_over_deferred(defer.gatherResults(
[
preserve_fn(self.store.store_server_verify_key)(
server_name, server_name, key.time_added, key
@@ -690,4 +710,4 @@ class Keyring(object):
for key_id, key in verify_keys.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)


@@ -99,7 +99,7 @@ class EventBase(object):
return d
def get(self, key, default):
def get(self, key, default=None):
return self._event_dict.get(key, default)
def get_internal_metadata_dict(self):


@@ -15,9 +15,30 @@
class EventContext(object):
__slots__ = [
"current_state_ids",
"prev_state_ids",
"state_group",
"rejected",
"push_actions",
"prev_group",
"delta_ids",
"prev_state_events",
]
def __init__(self, current_state=None):
self.current_state = current_state
def __init__(self):
# The current state including the current event
self.current_state_ids = None
# The current state excluding the current event
self.prev_state_ids = None
self.state_group = None
self.rejected = False
self.push_actions = []
# A previously persisted state group and a delta between that
# and this state.
self.prev_group = None
self.delta_ids = None
self.prev_state_events = None


@@ -88,6 +88,8 @@ def prune_event(event):
if "age_ts" in event.unsigned:
allowed_fields["unsigned"]["age_ts"] = event.unsigned["age_ts"]
if "replaces_state" in event.unsigned:
allowed_fields["unsigned"]["replaces_state"] = event.unsigned["replaces_state"]
return type(event)(
allowed_fields,


@@ -23,6 +23,7 @@ from synapse.crypto.event_signing import check_event_content_hash
from synapse.api.errors import SynapseError
from synapse.util import unwrapFirstError
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
import logging
@@ -31,6 +32,9 @@ logger = logging.getLogger(__name__)
class FederationBase(object):
def __init__(self, hs):
pass
@defer.inlineCallbacks
def _check_sigs_and_hash_and_fetch(self, origin, pdus, outlier=False,
include_none=False):
@@ -99,10 +103,10 @@ class FederationBase(object):
warn, pdu
)
valid_pdus = yield defer.gatherResults(
valid_pdus = yield preserve_context_over_deferred(defer.gatherResults(
deferreds,
consumeErrors=True
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)
if include_none:
defer.returnValue(valid_pdus)
@@ -126,7 +130,7 @@ class FederationBase(object):
for pdu in pdus
]
deferreds = self.keyring.verify_json_objects_for_server([
deferreds = preserve_fn(self.keyring.verify_json_objects_for_server)([
(p.origin, p.get_pdu_json())
for p in redacted_pdus
])


@@ -26,7 +26,9 @@ from synapse.api.errors import (
from synapse.util import unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logutils import log_function
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
from synapse.events import FrozenEvent
from synapse.types import get_domain_from_id
import synapse.metrics
from synapse.util.retryutils import get_retry_limiter, NotRetryingDestination
@@ -50,7 +52,34 @@ sent_edus_counter = metrics.register_counter("sent_edus")
sent_queries_counter = metrics.register_counter("sent_queries", labels=["type"])
PDU_RETRY_TIME_MS = 1 * 60 * 1000
class FederationClient(FederationBase):
def __init__(self, hs):
super(FederationClient, self).__init__(hs)
self.pdu_destination_tried = {}
self._clock.looping_call(
self._clear_tried_cache, 60 * 1000,
)
self.state = hs.get_state_handler()
def _clear_tried_cache(self):
"""Clear pdu_destination_tried cache"""
now = self._clock.time_msec()
old_dict = self.pdu_destination_tried
self.pdu_destination_tried = {}
for event_id, destination_dict in old_dict.items():
destination_dict = {
dest: time
for dest, time in destination_dict.items()
if time + PDU_RETRY_TIME_MS > now
}
if destination_dict:
self.pdu_destination_tried[event_id] = destination_dict
def start_get_pdu_cache(self):
self._get_pdu_cache = ExpiringCache(
@@ -92,8 +121,12 @@ class FederationClient(FederationBase):
pdu.event_id
)
def send_presence(self, destination, states):
if destination != self.server_name:
self._transaction_queue.enqueue_presence(destination, states)
@log_function
def send_edu(self, destination, edu_type, content):
def send_edu(self, destination, edu_type, content, key=None):
edu = Edu(
origin=self.server_name,
destination=destination,
@@ -103,9 +136,13 @@ class FederationClient(FederationBase):
sent_edus_counter.inc()
# TODO, add errback, etc.
self._transaction_queue.enqueue_edu(edu)
return defer.succeed(None)
self._transaction_queue.enqueue_edu(edu, key=key)
@log_function
def send_device_messages(self, destination):
"""Sends the device messages in the local database to the remote
destination"""
self._transaction_queue.enqueue_device_messages(destination)
@log_function
def send_failure(self, failure, destination):
@@ -136,7 +173,7 @@ class FederationClient(FederationBase):
)
@log_function
def query_client_keys(self, destination, content):
def query_client_keys(self, destination, content, timeout):
"""Query device keys for a device hosted on a remote server.
Args:
@@ -148,10 +185,12 @@ class FederationClient(FederationBase):
response
"""
sent_queries_counter.inc("client_device_keys")
return self.transport_layer.query_client_keys(destination, content)
return self.transport_layer.query_client_keys(
destination, content, timeout
)
@log_function
def claim_client_keys(self, destination, content):
def claim_client_keys(self, destination, content, timeout):
"""Claims one-time keys for a device hosted on a remote server.
Args:
@@ -163,7 +202,9 @@ class FederationClient(FederationBase):
response
"""
sent_queries_counter.inc("client_one_time_keys")
return self.transport_layer.claim_client_keys(destination, content)
return self.transport_layer.claim_client_keys(
destination, content, timeout
)
@defer.inlineCallbacks
@log_function
@@ -198,10 +239,10 @@ class FederationClient(FederationBase):
]
# FIXME: We should handle signature failures more gracefully.
pdus[:] = yield defer.gatherResults(
pdus[:] = yield preserve_context_over_deferred(defer.gatherResults(
self._check_sigs_and_hashes(pdus),
consumeErrors=True,
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)
defer.returnValue(pdus)
@@ -233,12 +274,19 @@ class FederationClient(FederationBase):
# TODO: Rate limit the number of times we try and get the same event.
if self._get_pdu_cache:
e = self._get_pdu_cache.get(event_id)
if e:
defer.returnValue(e)
ev = self._get_pdu_cache.get(event_id)
if ev:
defer.returnValue(ev)
pdu = None
pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
signed_pdu = None
for destination in destinations:
now = self._clock.time_msec()
last_attempt = pdu_attempts.get(destination, 0)
if last_attempt + PDU_RETRY_TIME_MS > now:
continue
try:
limiter = yield get_retry_limiter(
destination,
@@ -262,39 +310,33 @@ class FederationClient(FederationBase):
pdu = pdu_list[0]
# Check signatures are correct.
pdu = yield self._check_sigs_and_hashes([pdu])[0]
signed_pdu = yield self._check_sigs_and_hashes([pdu])[0]
break
except SynapseError:
logger.info(
"Failed to get PDU %s from %s because %s",
event_id, destination, e,
)
continue
except CodeMessageException as e:
if 400 <= e.code < 500:
raise
pdu_attempts[destination] = now
except SynapseError as e:
logger.info(
"Failed to get PDU %s from %s because %s",
event_id, destination, e,
)
continue
except NotRetryingDestination as e:
logger.info(e.message)
continue
except Exception as e:
pdu_attempts[destination] = now
logger.info(
"Failed to get PDU %s from %s because %s",
event_id, destination, e,
)
continue
if self._get_pdu_cache is not None and pdu:
self._get_pdu_cache[event_id] = pdu
if self._get_pdu_cache is not None and signed_pdu:
self._get_pdu_cache[event_id] = signed_pdu
defer.returnValue(pdu)
defer.returnValue(signed_pdu)
@defer.inlineCallbacks
@log_function
@@ -311,6 +353,42 @@ class FederationClient(FederationBase):
Deferred: Results in a list of PDUs.
"""
try:
# First we try and ask for just the IDs, as that's far quicker if
# we have most of the state and auth_chain already.
# However, this may 404 if the other side has an old synapse.
result = yield self.transport_layer.get_room_state_ids(
destination, room_id, event_id=event_id,
)
state_event_ids = result["pdu_ids"]
auth_event_ids = result.get("auth_chain_ids", [])
fetched_events, failed_to_fetch = yield self.get_events(
[destination], room_id, set(state_event_ids + auth_event_ids)
)
if failed_to_fetch:
logger.warn("Failed to get %r", failed_to_fetch)
event_map = {
ev.event_id: ev for ev in fetched_events
}
pdus = [event_map[e_id] for e_id in state_event_ids if e_id in event_map]
auth_chain = [
event_map[e_id] for e_id in auth_event_ids if e_id in event_map
]
auth_chain.sort(key=lambda e: e.depth)
defer.returnValue((pdus, auth_chain))
except HttpResponseException as e:
if e.code == 400 or e.code == 404:
logger.info("Failed to use get_room_state_ids API, falling back")
else:
raise e
result = yield self.transport_layer.get_room_state(
destination, room_id, event_id=event_id,
)
@@ -324,18 +402,95 @@ class FederationClient(FederationBase):
for p in result.get("auth_chain", [])
]
seen_events = yield self.store.get_events([
ev.event_id for ev in itertools.chain(pdus, auth_chain)
])
signed_pdus = yield self._check_sigs_and_hash_and_fetch(
destination, pdus, outlier=True
destination,
[p for p in pdus if p.event_id not in seen_events],
outlier=True
)
signed_pdus.extend(
seen_events[p.event_id] for p in pdus if p.event_id in seen_events
)
signed_auth = yield self._check_sigs_and_hash_and_fetch(
destination, auth_chain, outlier=True
destination,
[p for p in auth_chain if p.event_id not in seen_events],
outlier=True
)
signed_auth.extend(
seen_events[p.event_id] for p in auth_chain if p.event_id in seen_events
)
signed_auth.sort(key=lambda e: e.depth)
defer.returnValue((signed_pdus, signed_auth))
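The shape of the fallback above — try the cheap /state_ids API first, drop back to the full /state API when the remote is too old to serve it — in isolation. fetch_ids and fetch_full are hypothetical callables; only the 400/404 handling mirrors the code:

    from twisted.internet import defer
    from synapse.api.errors import HttpResponseException

    @defer.inlineCallbacks
    def state_with_fallback(fetch_ids, fetch_full):
        try:
            result = yield fetch_ids()       # fast path: ids only
        except HttpResponseException as e:
            if e.code not in (400, 404):
                raise                        # a real error; propagate it
            result = yield fetch_full()      # old remote: fetch full events
        defer.returnValue(result)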
@defer.inlineCallbacks
def get_events(self, destinations, room_id, event_ids, return_local=True):
"""Fetch events from some remote destinations, checking if we already
have them.
Args:
destinations (list)
room_id (str)
event_ids (list)
return_local (bool): Whether to include events we already have in
the DB in the returned list of events
Returns:
Deferred: A deferred resolving to a 2-tuple where the first is a list of
events and the second is a list of event ids that we failed to fetch.
"""
if return_local:
seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
signed_events = seen_events.values()
else:
seen_events = yield self.store.have_events(event_ids)
signed_events = []
failed_to_fetch = set()
missing_events = set(event_ids)
for k in seen_events:
missing_events.discard(k)
if not missing_events:
defer.returnValue((signed_events, failed_to_fetch))
def random_server_list():
srvs = list(destinations)
random.shuffle(srvs)
return srvs
batch_size = 20
missing_events = list(missing_events)
for i in xrange(0, len(missing_events), batch_size):
batch = set(missing_events[i:i + batch_size])
deferreds = [
preserve_fn(self.get_pdu)(
destinations=random_server_list(),
event_id=e_id,
)
for e_id in batch
]
res = yield preserve_context_over_deferred(
defer.DeferredList(deferreds, consumeErrors=True)
)
for success, result in res:
if success and result:
signed_events.append(result)
batch.discard(result.event_id)
# We removed all events we successfully fetched from `batch`
failed_to_fetch.update(batch)
defer.returnValue((signed_events, failed_to_fetch))
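A minimal sketch of the batching strategy in get_events (hypothetical fetch_one returning a Deferred; events are assumed to expose event_id):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def fetch_in_batches(fetch_one, event_ids, batch_size=20):
        fetched, failed = [], set()
        for i in range(0, len(event_ids), batch_size):
            batch = set(event_ids[i:i + batch_size])
            results = yield defer.DeferredList(
                [fetch_one(e_id) for e_id in batch],
                consumeErrors=True,
            )
            for success, ev in results:
                if success and ev:
                    fetched.append(ev)
                    batch.discard(ev.event_id)
            # Anything still in the batch was not successfully fetched.
            failed |= batch
        defer.returnValue((fetched, failed))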
@defer.inlineCallbacks
@log_function
def get_event_auth(self, destination, room_id, event_id):
@@ -411,14 +566,19 @@ class FederationClient(FederationBase):
(destination, self.event_from_pdu_json(pdu_dict))
)
break
except CodeMessageException:
raise
except CodeMessageException as e:
if not 500 <= e.code < 600:
raise
else:
logger.warn(
"Failed to make_%s via %s: %s",
membership, destination, e.message
)
except Exception as e:
logger.warn(
"Failed to make_%s via %s: %s",
membership, destination, e.message
)
raise
raise RuntimeError("Failed to send to any server.")
@@ -490,8 +650,14 @@ class FederationClient(FederationBase):
"auth_chain": signed_auth,
"origin": destination,
})
except CodeMessageException:
raise
except CodeMessageException as e:
if not 500 <= e.code < 600:
raise
else:
logger.exception(
"Failed to send_join via %s: %s",
destination, e.message
)
except Exception as e:
logger.exception(
"Failed to send_join via %s: %s",
@@ -550,6 +716,15 @@ class FederationClient(FederationBase):
raise RuntimeError("Failed to send to any server.")
def get_public_rooms(self, destination, limit=None, since_token=None,
search_filter=None):
if destination == self.server_name:
return
return self.transport_layer.get_public_rooms(
destination, limit, since_token, search_filter
)
@defer.inlineCallbacks
def query_auth(self, destination, room_id, event_id, local_auth):
"""
@@ -639,7 +814,8 @@ class FederationClient(FederationBase):
if len(signed_events) >= limit:
defer.returnValue(signed_events)
servers = yield self.store.get_joined_hosts_for_room(room_id)
users = yield self.state.get_current_user_in_room(room_id)
servers = set(get_domain_from_id(u) for u in users)
servers = set(servers)
servers.discard(self.server_name)
@@ -684,14 +860,16 @@ class FederationClient(FederationBase):
return srvs
deferreds = [
self.get_pdu(
preserve_fn(self.get_pdu)(
destinations=random_server_list(),
event_id=e_id,
)
for e_id, depth in ordered_missing[:limit - len(signed_events)]
]
res = yield defer.DeferredList(deferreds, consumeErrors=True)
res = yield preserve_context_over_deferred(
defer.DeferredList(deferreds, consumeErrors=True)
)
for (result, val), (e_id, _) in zip(res, ordered_missing):
if result and val:
signed_events.append(val)

View File

@@ -19,11 +19,13 @@ from twisted.internet import defer
from .federation_base import FederationBase
from .units import Transaction, Edu
from synapse.util.async import Linearizer
from synapse.util.logutils import log_function
from synapse.util.caches.response_cache import ResponseCache
from synapse.events import FrozenEvent
import synapse.metrics
from synapse.api.errors import FederationError, SynapseError
from synapse.api.errors import AuthError, FederationError, SynapseError
from synapse.crypto.event_signing import compute_event_signature
@@ -44,6 +46,18 @@ received_queries_counter = metrics.register_counter("received_queries", labels=[
class FederationServer(FederationBase):
def __init__(self, hs):
super(FederationServer, self).__init__(hs)
self.auth = hs.get_auth()
self._room_pdu_linearizer = Linearizer()
self._server_linearizer = Linearizer()
# We cache responses to state queries, as they take a while and often
# come in waves.
self._state_resp_cache = ResponseCache(hs, timeout_ms=30000)
def set_handler(self, handler):
"""Sets the handler that the replication layer will use to communicate
receipt of new PDUs from other home servers. The required methods are
@@ -83,11 +97,14 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
@log_function
def on_backfill_request(self, origin, room_id, versions, limit):
pdus = yield self.handler.on_backfill_request(
origin, room_id, versions, limit
)
with (yield self._server_linearizer.queue((origin, room_id))):
pdus = yield self.handler.on_backfill_request(
origin, room_id, versions, limit
)
defer.returnValue((200, self._transaction_from_pdus(pdus).get_dict()))
res = self._transaction_from_pdus(pdus).get_dict()
defer.returnValue((200, res))
@defer.inlineCallbacks
@log_function
@@ -171,22 +188,64 @@ class FederationServer(FederationBase):
except SynapseError as e:
logger.info("Failed to handle edu %r: %r", edu_type, e)
except Exception as e:
logger.exception("Failed to handle edu %r", edu_type, e)
logger.exception("Failed to handle edu %r", edu_type)
else:
logger.warn("Received EDU of type %s with no handler", edu_type)
@defer.inlineCallbacks
@log_function
def on_context_state_request(self, origin, room_id, event_id):
if event_id:
pdus = yield self.handler.get_state_for_pdu(
origin, room_id, event_id,
)
auth_chain = yield self.store.get_auth_chain(
[pdu.event_id for pdu in pdus]
)
if not event_id:
raise NotImplementedError("Specify an event")
for event in auth_chain:
in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
result = self._state_resp_cache.get((room_id, event_id))
if not result:
with (yield self._server_linearizer.queue((origin, room_id))):
resp = yield self._state_resp_cache.set(
(room_id, event_id),
self._on_context_state_request_compute(room_id, event_id)
)
else:
resp = yield result
defer.returnValue((200, resp))
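The essence of the ResponseCache usage above is that concurrent identical requests share one in-flight computation. A self-contained sketch of that idea (not Synapse's actual ResponseCache, and with no timeout handling):

    from twisted.internet import defer

    class InFlightCache(object):
        def __init__(self):
            self._waiters = {}  # key -> list of Deferreds still waiting

        def wrap(self, key, compute):
            # First caller starts the work; later callers each get their own
            # Deferred, fired with the same result when the work completes.
            if key in self._waiters:
                d = defer.Deferred()
                self._waiters[key].append(d)
                return d
            self._waiters[key] = []

            def succeed(result):
                for w in self._waiters.pop(key, []):
                    w.callback(result)
                return result

            def fail(failure):
                for w in self._waiters.pop(key, []):
                    w.errback(failure)
                return failure

            work = compute()
            work.addCallbacks(succeed, fail)
            return work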
@defer.inlineCallbacks
def on_state_ids_request(self, origin, room_id, event_id):
if not event_id:
raise NotImplementedError("Specify an event")
in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
state_ids = yield self.handler.get_state_ids_for_pdu(
room_id, event_id,
)
auth_chain_ids = yield self.store.get_auth_chain_ids(state_ids)
defer.returnValue((200, {
"pdu_ids": state_ids,
"auth_chain_ids": auth_chain_ids,
}))
@defer.inlineCallbacks
def _on_context_state_request_compute(self, room_id, event_id):
pdus = yield self.handler.get_state_for_pdu(
room_id, event_id,
)
auth_chain = yield self.store.get_auth_chain(
[pdu.event_id for pdu in pdus]
)
for event in auth_chain:
# We sign these again because there was a bug where we
# incorrectly signed things the first time round
if self.hs.is_mine_id(event.event_id):
event.signatures.update(
compute_event_signature(
event,
@@ -194,13 +253,11 @@ class FederationServer(FederationBase):
self.hs.config.signing_key[0]
)
)
else:
raise NotImplementedError("Specify an event")
defer.returnValue((200, {
defer.returnValue({
"pdus": [pdu.get_pdu_json() for pdu in pdus],
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
}))
})
@defer.inlineCallbacks
@log_function
@@ -274,14 +331,16 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id):
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
defer.returnValue((200, {
"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus],
}))
with (yield self._server_linearizer.queue((origin, room_id))):
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
res = {
"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus],
}
defer.returnValue((200, res))
@defer.inlineCallbacks
def on_query_auth_request(self, origin, content, event_id):
def on_query_auth_request(self, origin, content, room_id, event_id):
"""
Content is a dict with keys::
auth_chain (list): A list of events that give the auth chain.
@@ -300,58 +359,41 @@ class FederationServer(FederationBase):
Returns:
Deferred: Results in `dict` with the same format as `content`
"""
auth_chain = [
self.event_from_pdu_json(e)
for e in content["auth_chain"]
]
with (yield self._server_linearizer.queue((origin, room_id))):
auth_chain = [
self.event_from_pdu_json(e)
for e in content["auth_chain"]
]
signed_auth = yield self._check_sigs_and_hash_and_fetch(
origin, auth_chain, outlier=True
)
signed_auth = yield self._check_sigs_and_hash_and_fetch(
origin, auth_chain, outlier=True
)
ret = yield self.handler.on_query_auth(
origin,
event_id,
signed_auth,
content.get("rejects", []),
content.get("missing", []),
)
ret = yield self.handler.on_query_auth(
origin,
event_id,
signed_auth,
content.get("rejects", []),
content.get("missing", []),
)
time_now = self._clock.time_msec()
send_content = {
"auth_chain": [
e.get_pdu_json(time_now)
for e in ret["auth_chain"]
],
"rejects": ret.get("rejects", []),
"missing": ret.get("missing", []),
}
time_now = self._clock.time_msec()
send_content = {
"auth_chain": [
e.get_pdu_json(time_now)
for e in ret["auth_chain"]
],
"rejects": ret.get("rejects", []),
"missing": ret.get("missing", []),
}
defer.returnValue(
(200, send_content)
)
@defer.inlineCallbacks
@log_function
def on_query_client_keys(self, origin, content):
query = []
for user_id, device_ids in content.get("device_keys", {}).items():
if not device_ids:
query.append((user_id, None))
else:
for device_id in device_ids:
query.append((user_id, device_id))
results = yield self.store.get_e2e_device_keys(query)
json_result = {}
for user_id, device_keys in results.items():
for device_id, json_bytes in device_keys.items():
json_result.setdefault(user_id, {})[device_id] = json.loads(
json_bytes
)
defer.returnValue({"device_keys": json_result})
return self.on_query_request("client_keys", content)
@defer.inlineCallbacks
@log_function
@@ -377,16 +419,34 @@ class FederationServer(FederationBase):
@log_function
def on_get_missing_events(self, origin, room_id, earliest_events,
latest_events, limit, min_depth):
missing_events = yield self.handler.on_get_missing_events(
origin, room_id, earliest_events, latest_events, limit, min_depth
)
with (yield self._server_linearizer.queue((origin, room_id))):
logger.info(
"on_get_missing_events: earliest_events: %r, latest_events: %r,"
" limit: %d, min_depth: %d",
earliest_events, latest_events, limit, min_depth
)
missing_events = yield self.handler.on_get_missing_events(
origin, room_id, earliest_events, latest_events, limit, min_depth
)
time_now = self._clock.time_msec()
if len(missing_events) < 5:
logger.info(
"Returning %d events: %r", len(missing_events), missing_events
)
else:
logger.info("Returning %d events", len(missing_events))
time_now = self._clock.time_msec()
defer.returnValue({
"events": [ev.get_pdu_json(time_now) for ev in missing_events],
})
@log_function
def on_openid_userinfo(self, token):
ts_now_ms = self._clock.time_msec()
return self.store.get_user_id_for_open_id_token(token, ts_now_ms)
@log_function
def _get_persisted_pdu(self, origin, event_id, do_auth=True):
""" Get a PDU from the database with given origin and id.
@@ -476,42 +536,59 @@ class FederationServer(FederationBase):
pdu.internal_metadata.outlier = True
elif min_depth and pdu.depth > min_depth:
if get_missing and prevs - seen:
latest = yield self.store.get_latest_event_ids_in_room(
pdu.room_id
)
# If we're missing events, ensure we only fetch them for this
# room one request at a time.
with (yield self._room_pdu_linearizer.queue(pdu.room_id)):
# We recalculate seen, since it may have changed.
have_seen = yield self.store.have_events(prevs)
seen = set(have_seen.keys())
# We add the prev events that we have seen to the latest
# list to ensure the remote server doesn't give them to us
latest = set(latest)
latest |= seen
if prevs - seen:
latest = yield self.store.get_latest_event_ids_in_room(
pdu.room_id
)
missing_events = yield self.get_missing_events(
origin,
pdu.room_id,
earliest_events_ids=list(latest),
latest_events=[pdu],
limit=10,
min_depth=min_depth,
)
# We add the prev events that we have seen to the latest
# list to ensure the remote server doesn't give them to us
latest = set(latest)
latest |= seen
# We want to sort these by depth so we process them and
# tell clients about them in order.
missing_events.sort(key=lambda x: x.depth)
logger.info(
"Missing %d events for room %r: %r...",
len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
)
for e in missing_events:
yield self._handle_new_pdu(
origin,
e,
get_missing=False
)
missing_events = yield self.get_missing_events(
origin,
pdu.room_id,
earliest_events_ids=list(latest),
latest_events=[pdu],
limit=10,
min_depth=min_depth,
)
have_seen = yield self.store.have_events(
[ev for ev, _ in pdu.prev_events]
)
# We want to sort these by depth so we process them and
# tell clients about them in order.
missing_events.sort(key=lambda x: x.depth)
for e in missing_events:
yield self._handle_new_pdu(
origin,
e,
get_missing=False
)
have_seen = yield self.store.have_events(
[ev for ev, _ in pdu.prev_events]
)
prevs = {e_id for e_id, _ in pdu.prev_events}
seen = set(have_seen.keys())
if prevs - seen:
logger.info(
"Still missing %d events for room %r: %r...",
len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
)
fetch_state = True
if fetch_state:
@@ -526,7 +603,7 @@ class FederationServer(FederationBase):
origin, pdu.room_id, pdu.event_id,
)
except:
logger.warn("Failed to get state for event: %s", pdu.event_id)
logger.exception("Failed to get state for event: %s", pdu.event_id)
yield self.handler.on_receive_pdu(
origin,

View File

@@ -72,5 +72,7 @@ class ReplicationLayer(FederationClient, FederationServer):
self.hs = hs
super(ReplicationLayer, self).__init__(hs)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name

View File

@@ -17,14 +17,16 @@
from twisted.internet import defer
from .persistence import TransactionActions
from .units import Transaction
from .units import Transaction, Edu
from synapse.api.errors import HttpResponseException
from synapse.util.logutils import log_function
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.async import run_on_reactor
from synapse.util.logcontext import preserve_context_over_fn
from synapse.util.retryutils import (
get_retry_limiter, NotRetryingDestination,
)
from synapse.util.metrics import measure_func
from synapse.handlers.presence import format_user_presence_state
import synapse.metrics
import logging
@@ -50,7 +52,7 @@ class TransactionQueue(object):
self.transport_layer = transport_layer
self._clock = hs.get_clock()
self.clock = hs.get_clock()
# Is a mapping from destinations -> deferreds. Used to keep track
# of which destinations have transactions in flight and when they are
@@ -68,20 +70,30 @@ class TransactionQueue(object):
# destination -> list of tuple(edu, deferred)
self.pending_edus_by_dest = edus = {}
# Presence needs to be separate as we send single aggregate EDUs
self.pending_presence_by_dest = presence = {}
self.pending_edus_keyed_by_dest = edus_keyed = {}
metrics.register_callback(
"pending_pdus",
lambda: sum(map(len, pdus.values())),
)
metrics.register_callback(
"pending_edus",
lambda: sum(map(len, edus.values())),
lambda: (
sum(map(len, edus.values()))
+ sum(map(len, presence.values()))
+ sum(map(len, edus_keyed.values()))
),
)
# destination -> list of tuple(failure, deferred)
self.pending_failures_by_dest = {}
self.last_device_stream_id_by_dest = {}
# HACK to get unique tx id
self._next_txn_id = int(self._clock.time_msec())
self._next_txn_id = int(self.clock.time_msec())
def can_send_to(self, destination):
"""Can we send messages to the given server?
@@ -118,86 +130,68 @@ class TransactionQueue(object):
if not destinations:
return
deferreds = []
for destination in destinations:
deferred = defer.Deferred()
self.pending_pdus_by_dest.setdefault(destination, []).append(
(pdu, deferred, order)
(pdu, order)
)
def chain(failure):
if not deferred.called:
deferred.errback(failure)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
def log_failure(f):
logger.warn("Failed to send pdu to %s: %s", destination, f.value)
def enqueue_presence(self, destination, states):
self.pending_presence_by_dest.setdefault(destination, {}).update({
state.user_id: state for state in states
})
deferred.addErrback(log_failure)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
with PreserveLoggingContext():
self._attempt_new_transaction(destination).addErrback(chain)
deferreds.append(deferred)
# NO inlineCallbacks
def enqueue_edu(self, edu):
def enqueue_edu(self, edu, key=None):
destination = edu.destination
if not self.can_send_to(destination):
return
deferred = defer.Deferred()
self.pending_edus_by_dest.setdefault(destination, []).append(
(edu, deferred)
if key:
self.pending_edus_keyed_by_dest.setdefault(
destination, {}
)[(edu.edu_type, key)] = edu
else:
self.pending_edus_by_dest.setdefault(destination, []).append(edu)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
def chain(failure):
if not deferred.called:
deferred.errback(failure)
def log_failure(f):
logger.warn("Failed to send edu to %s: %s", destination, f.value)
deferred.addErrback(log_failure)
with PreserveLoggingContext():
self._attempt_new_transaction(destination).addErrback(chain)
return deferred
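Reduced to its core, the keyed variant of enqueue_edu above deduplicates: a later EDU with the same (type, key) replaces the queued one, so only the freshest update is sent. A toy illustration with plain dicts and a hypothetical edu object:

    pending_keyed = {}  # destination -> {(edu_type, key): edu}
    pending_plain = {}  # destination -> [edu, ...]

    def enqueue(destination, edu, key=None):
        if key is not None:
            # Same (type, key) overwrites: only the latest EDU survives.
            pending_keyed.setdefault(destination, {})[(edu.edu_type, key)] = edu
        else:
            pending_plain.setdefault(destination, []).append(edu)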
@defer.inlineCallbacks
def enqueue_failure(self, failure, destination):
if destination == self.server_name or destination == "localhost":
return
deferred = defer.Deferred()
if not self.can_send_to(destination):
return
self.pending_failures_by_dest.setdefault(
destination, []
).append(
(failure, deferred)
).append(failure)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
def chain(f):
if not deferred.called:
deferred.errback(f)
def enqueue_device_messages(self, destination):
if destination == self.server_name or destination == "localhost":
return
def log_failure(f):
logger.warn("Failed to send failure to %s: %s", destination, f.value)
if not self.can_send_to(destination):
return
deferred.addErrback(log_failure)
with PreserveLoggingContext():
self._attempt_new_transaction(destination).addErrback(chain)
yield deferred
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
@defer.inlineCallbacks
@log_function
def _attempt_new_transaction(self, destination):
# list of (pending_pdu, deferred, order)
if destination in self.pending_transactions:
@@ -211,55 +205,128 @@ class TransactionQueue(object):
)
return
pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
pending_edus = self.pending_edus_by_dest.pop(destination, [])
pending_failures = self.pending_failures_by_dest.pop(destination, [])
if pending_pdus:
logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
destination, len(pending_pdus))
if not pending_pdus and not pending_edus and not pending_failures:
logger.debug("TX [%s] Nothing to send", destination)
return
try:
self.pending_transactions[destination] = 1
yield run_on_reactor()
while True:
pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
pending_edus = self.pending_edus_by_dest.pop(destination, [])
pending_presence = self.pending_presence_by_dest.pop(destination, {})
pending_failures = self.pending_failures_by_dest.pop(destination, [])
pending_edus.extend(
self.pending_edus_keyed_by_dest.pop(destination, {}).values()
)
limiter = yield get_retry_limiter(
destination,
self.clock,
self.store,
)
device_message_edus, device_stream_id = (
yield self._get_new_device_messages(destination)
)
pending_edus.extend(device_message_edus)
if pending_presence:
pending_edus.append(
Edu(
origin=self.server_name,
destination=destination,
edu_type="m.presence",
content={
"push": [
format_user_presence_state(
presence, self.clock.time_msec()
)
for presence in pending_presence.values()
]
},
)
)
if pending_pdus:
logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
destination, len(pending_pdus))
if not pending_pdus and not pending_edus and not pending_failures:
logger.debug("TX [%s] Nothing to send", destination)
self.last_device_stream_id_by_dest[destination] = (
device_stream_id
)
return
success = yield self._send_new_transaction(
destination, pending_pdus, pending_edus, pending_failures,
device_stream_id,
should_delete_from_device_stream=bool(device_message_edus),
limiter=limiter,
)
if not success:
break
except NotRetryingDestination:
logger.info(
"TX [%s] not ready for retry yet - "
"dropping transaction for now",
destination,
)
finally:
# We want to be *very* sure we delete this after we stop processing
self.pending_transactions.pop(destination, None)
@defer.inlineCallbacks
def _get_new_device_messages(self, destination):
last_device_stream_id = self.last_device_stream_id_by_dest.get(destination, 0)
to_device_stream_id = self.store.get_to_device_stream_token()
contents, stream_id = yield self.store.get_new_device_msgs_for_remote(
destination, last_device_stream_id, to_device_stream_id
)
edus = [
Edu(
origin=self.server_name,
destination=destination,
edu_type="m.direct_to_device",
content=content,
)
for content in contents
]
defer.returnValue((edus, stream_id))
@measure_func("_send_new_transaction")
@defer.inlineCallbacks
def _send_new_transaction(self, destination, pending_pdus, pending_edus,
pending_failures, device_stream_id,
should_delete_from_device_stream, limiter):
# Sort based on the order field
pending_pdus.sort(key=lambda t: t[1])
pdus = [x[0] for x in pending_pdus]
edus = pending_edus
failures = [x.get_dict() for x in pending_failures]
success = True
try:
logger.debug("TX [%s] _attempt_new_transaction", destination)
# Sort based on the order field
pending_pdus.sort(key=lambda t: t[2])
pdus = [x[0] for x in pending_pdus]
edus = [x[0] for x in pending_edus]
failures = [x[0].get_dict() for x in pending_failures]
deferreds = [
x[1]
for x in pending_pdus + pending_edus + pending_failures
]
txn_id = str(self._next_txn_id)
limiter = yield get_retry_limiter(
destination,
self._clock,
self.store,
)
logger.debug(
"TX [%s] {%s} Attempting new transaction"
" (pdus: %d, edus: %d, failures: %d)",
destination, txn_id,
len(pending_pdus),
len(pending_edus),
len(pending_failures)
len(pdus),
len(edus),
len(failures)
)
logger.debug("TX [%s] Persisting transaction...", destination)
transaction = Transaction.create_new(
origin_server_ts=int(self._clock.time_msec()),
origin_server_ts=int(self.clock.time_msec()),
transaction_id=txn_id,
origin=self.server_name,
destination=destination,
@@ -278,9 +345,9 @@ class TransactionQueue(object):
" (PDUs: %d, EDUs: %d, failures: %d)",
destination, txn_id,
transaction.transaction_id,
len(pending_pdus),
len(pending_edus),
len(pending_failures),
len(pdus),
len(edus),
len(failures),
)
with limiter:
@@ -290,7 +357,7 @@ class TransactionQueue(object):
# keys work
def json_data_cb():
data = transaction.get_dict()
now = int(self._clock.time_msec())
now = int(self.clock.time_msec())
if "pdus" in data:
for p in data["pdus"]:
if "age_ts" in p:
@@ -330,28 +397,19 @@ class TransactionQueue(object):
logger.debug("TX [%s] Marked as delivered", destination)
logger.debug("TX [%s] Yielding to callbacks...", destination)
for deferred in deferreds:
if code == 200:
deferred.callback(None)
else:
deferred.errback(RuntimeError("Got status %d" % code))
# Ensures we don't continue until all callbacks on that
# deferred have fired
try:
yield deferred
except:
pass
logger.debug("TX [%s] Yielded to callbacks", destination)
except NotRetryingDestination:
logger.info(
"TX [%s] not ready for retry yet - "
"dropping transaction for now",
destination,
)
if code != 200:
for p in pdus:
logger.info(
"Failed to send event %s to %s", p.event_id, destination
)
success = False
else:
# Remove the acknowledged device messages from the database
if should_delete_from_device_stream:
yield self.store.delete_device_msgs_for_remote(
destination, device_stream_id
)
self.last_device_stream_id_by_dest[destination] = device_stream_id
except RuntimeError as e:
# We capture this here as there is nothing actually listening
# for this function's finishing deferred.
@@ -360,6 +418,11 @@ class TransactionQueue(object):
destination,
e,
)
success = False
for p in pdus:
logger.info("Failed to send event %s to %s", p.event_id, destination)
except Exception as e:
# We capture this here as there is nothing actually listening
# for this function's finishing deferred.
@@ -369,13 +432,9 @@ class TransactionQueue(object):
e,
)
for deferred in deferreds:
if not deferred.called:
deferred.errback(e)
success = False
finally:
# We want to be *very* sure we delete this after we stop processing
self.pending_transactions.pop(destination, None)
for p in pdus:
logger.info("Failed to send event %s to %s", p.event_id, destination)
# Check to see if there is anything else to send.
self._attempt_new_transaction(destination)
defer.returnValue(success)
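Stripped of detail, the loop in _attempt_new_transaction above follows one pattern: drain the per-destination queues, send a transaction, and only advance the acknowledged device-message stream position on success. A condensed sketch with hypothetical helpers:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def destination_loop(pop_pending, send_transaction, ack_device_stream):
        while True:
            pdus, edus, failures, device_stream_id = pop_pending()
            if not (pdus or edus or failures):
                return
            ok = yield send_transaction(pdus, edus, failures)
            if not ok:
                return  # leave the stream position alone; retry later
            yield ack_device_stream(device_stream_id)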

View File

@@ -54,6 +54,28 @@ class TransportLayerClient(object):
destination, path=path, args={"event_id": event_id},
)
@log_function
def get_room_state_ids(self, destination, room_id, event_id):
""" Requests all state for a given room from the given server at the
given event. Returns the state event IDs.
Args:
destination (str): The host name of the remote home server we want
to get the state from.
room_id (str): The room we want the state of.
event_id (str): The event we want the context at.
Returns:
Deferred: Results in a dict received from the remote homeserver.
"""
logger.debug("get_room_state_ids dest=%s, room=%s",
destination, room_id)
path = PREFIX + "/state_ids/%s/" % room_id
return self.client.get_json(
destination, path=path, args={"event_id": event_id},
)
@log_function
def get_event(self, destination, event_id, timeout=None):
""" Requests the pdu with give id and origin from the given server.
@@ -179,7 +201,8 @@ class TransportLayerClient(object):
content = yield self.client.get_json(
destination=destination,
path=path,
retry_on_dns_fail=True,
retry_on_dns_fail=False,
timeout=20000,
)
defer.returnValue(content)
@@ -223,6 +246,28 @@ class TransportLayerClient(object):
defer.returnValue(response)
@defer.inlineCallbacks
@log_function
def get_public_rooms(self, remote_server, limit, since_token,
search_filter=None):
path = PREFIX + "/publicRooms"
args = {}
if limit:
args["limit"] = [str(limit)]
if since_token:
args["since"] = [since_token]
# TODO(erikj): Actually send the search_filter across federation.
response = yield self.client.get_json(
destination=remote_server,
path=path,
args=args,
)
defer.returnValue(response)
@defer.inlineCallbacks
@log_function
def exchange_third_party_invite(self, destination, room_id, event_dict):
@@ -263,7 +308,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def query_client_keys(self, destination, query_content):
def query_client_keys(self, destination, query_content, timeout):
"""Query the device keys for a list of user ids hosted on a remote
server.
@@ -292,12 +337,13 @@ class TransportLayerClient(object):
destination=destination,
path=path,
data=query_content,
timeout=timeout,
)
defer.returnValue(content)
@defer.inlineCallbacks
@log_function
def claim_client_keys(self, destination, query_content):
def claim_client_keys(self, destination, query_content, timeout):
"""Claim one-time keys for a list of devices hosted on a remote server.
Request:
@@ -328,6 +374,7 @@ class TransportLayerClient(object):
destination=destination,
path=path,
data=query_content,
timeout=timeout,
)
defer.returnValue(content)

View File

@@ -18,13 +18,16 @@ from twisted.internet import defer
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.api.errors import Codes, SynapseError
from synapse.http.server import JsonResource
from synapse.http.servlet import parse_json_object_from_request
from synapse.http.servlet import (
parse_json_object_from_request, parse_integer_from_args, parse_string_from_args,
)
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.versionstring import get_version_string
import functools
import logging
import simplejson as json
import re
import synapse
logger = logging.getLogger(__name__)
@@ -37,7 +40,7 @@ class TransportLayerServer(JsonResource):
self.hs = hs
self.clock = hs.get_clock()
super(TransportLayerServer, self).__init__(hs)
super(TransportLayerServer, self).__init__(hs, canonical_json=False)
self.authenticator = Authenticator(hs)
self.ratelimiter = FederationRateLimiter(
@@ -60,6 +63,16 @@ class TransportLayerServer(JsonResource):
)
class AuthenticationError(SynapseError):
"""There was a problem authenticating the request"""
pass
class NoAuthenticationError(AuthenticationError):
"""The request had no authentication information"""
pass
class Authenticator(object):
def __init__(self, hs):
self.keyring = hs.get_keyring()
@@ -67,7 +80,7 @@ class Authenticator(object):
# A method just so we can pass 'self' as the authenticator to the Servlets
@defer.inlineCallbacks
def authenticate_request(self, request):
def authenticate_request(self, request, content):
json_request = {
"method": request.method,
"uri": request.uri,
@@ -75,17 +88,10 @@ class Authenticator(object):
"signatures": {},
}
content = None
origin = None
if content is not None:
json_request["content"] = content
if request.method in ["PUT", "POST"]:
# TODO: Handle other method types? other content types?
try:
content_bytes = request.content.read()
content = json.loads(content_bytes)
json_request["content"] = content
except:
raise SynapseError(400, "Unable to parse JSON", Codes.BAD_JSON)
origin = None
def parse_auth_header(header_str):
try:
@@ -103,14 +109,14 @@ class Authenticator(object):
sig = strip_quotes(param_dict["sig"])
return (origin, key, sig)
except:
raise SynapseError(
raise AuthenticationError(
400, "Malformed Authorization header", Codes.UNAUTHORIZED
)
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
if not auth_headers:
raise SynapseError(
raise NoAuthenticationError(
401, "Missing Authorization headers", Codes.UNAUTHORIZED,
)
@@ -121,7 +127,7 @@ class Authenticator(object):
json_request["signatures"].setdefault(origin, {})[key] = sig
if not json_request["signatures"]:
raise SynapseError(
raise NoAuthenticationError(
401, "Missing Authorization headers", Codes.UNAUTHORIZED,
)
@@ -130,38 +136,59 @@ class Authenticator(object):
logger.info("Request from %s", origin)
request.authenticated_entity = origin
defer.returnValue((origin, content))
defer.returnValue(origin)
class BaseFederationServlet(object):
def __init__(self, handler, authenticator, ratelimiter, server_name):
REQUIRE_AUTH = True
def __init__(self, handler, authenticator, ratelimiter, server_name,
room_list_handler):
self.handler = handler
self.authenticator = authenticator
self.ratelimiter = ratelimiter
self.room_list_handler = room_list_handler
def _wrap(self, code):
def _wrap(self, func):
authenticator = self.authenticator
ratelimiter = self.ratelimiter
@defer.inlineCallbacks
@functools.wraps(code)
def new_code(request, *args, **kwargs):
@functools.wraps(func)
def new_func(request, *args, **kwargs):
content = None
if request.method in ["PUT", "POST"]:
# TODO: Handle other method types? other content types?
content = parse_json_object_from_request(request)
try:
(origin, content) = yield authenticator.authenticate_request(request)
with ratelimiter.ratelimit(origin) as d:
yield d
response = yield code(
origin, content, request.args, *args, **kwargs
)
origin = yield authenticator.authenticate_request(request, content)
except NoAuthenticationError:
origin = None
if self.REQUIRE_AUTH:
logger.exception("authenticate_request failed")
raise
except:
logger.exception("authenticate_request failed")
raise
if origin:
with ratelimiter.ratelimit(origin) as d:
yield d
response = yield func(
origin, content, request.args, *args, **kwargs
)
else:
response = yield func(
origin, content, request.args, *args, **kwargs
)
defer.returnValue(response)
# Extra logic that functools.wraps() doesn't finish
new_code.__self__ = code.__self__
new_func.__self__ = func.__self__
return new_code
return new_func
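The _wrap logic above reduces to: authenticate, tolerate missing auth only when the servlet opts out via REQUIRE_AUTH, and hand the origin to the wrapped handler. A sketch of the same shape (assumes the NoAuthenticationError defined earlier in this file; ratelimiting is left out):

    import functools
    from twisted.internet import defer

    def wrap_servlet(func, authenticate, require_auth=True):
        @defer.inlineCallbacks
        @functools.wraps(func)
        def new_func(request, *args, **kwargs):
            try:
                origin = yield authenticate(request)
            except NoAuthenticationError:  # as defined above in this file
                origin = None
                if require_auth:
                    raise
            response = yield func(origin, request, *args, **kwargs)
            defer.returnValue(response)
        return new_func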
def register(self, server):
pattern = re.compile("^" + PREFIX + self.PATH + "$")
@@ -269,6 +296,17 @@ class FederationStateServlet(BaseFederationServlet):
)
class FederationStateIdsServlet(BaseFederationServlet):
PATH = "/state_ids/(?P<room_id>[^/]*)/"
def on_GET(self, origin, content, query, room_id):
return self.handler.on_state_ids_request(
origin,
room_id,
query.get("event_id", [None])[0],
)
class FederationBackfillServlet(BaseFederationServlet):
PATH = "/backfill/(?P<context>[^/]*)/"
@@ -323,7 +361,7 @@ class FederationSendLeaveServlet(BaseFederationServlet):
class FederationEventAuthServlet(BaseFederationServlet):
PATH = "/event_auth(?P<context>[^/]*)/(?P<event_id>[^/]*)"
PATH = "/event_auth/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
def on_GET(self, origin, content, query, context, event_id):
return self.handler.on_event_auth(origin, context, event_id)
@@ -365,10 +403,8 @@ class FederationThirdPartyInviteExchangeServlet(BaseFederationServlet):
class FederationClientKeysQueryServlet(BaseFederationServlet):
PATH = "/user/keys/query"
@defer.inlineCallbacks
def on_POST(self, origin, content, query):
response = yield self.handler.on_query_client_keys(origin, content)
defer.returnValue((200, response))
return self.handler.on_query_client_keys(origin, content)
class FederationClientKeysClaimServlet(BaseFederationServlet):
@@ -386,7 +422,7 @@ class FederationQueryAuthServlet(BaseFederationServlet):
@defer.inlineCallbacks
def on_POST(self, origin, content, query, context, event_id):
new_content = yield self.handler.on_query_auth_request(
origin, content, event_id
origin, content, context, event_id
)
defer.returnValue((200, new_content))
@@ -418,9 +454,10 @@ class FederationGetMissingEventsServlet(BaseFederationServlet):
class On3pidBindServlet(BaseFederationServlet):
PATH = "/3pid/onbind"
REQUIRE_AUTH = False
@defer.inlineCallbacks
def on_POST(self, request):
content = parse_json_object_from_request(request)
def on_POST(self, origin, content, query):
if "invites" in content:
last_exception = None
for invite in content["invites"]:
@@ -442,10 +479,103 @@ class On3pidBindServlet(BaseFederationServlet):
raise last_exception
defer.returnValue((200, {}))
# Avoid doing remote HS authorization checks which are done by default by
# BaseFederationServlet.
def _wrap(self, code):
return code
class OpenIdUserInfo(BaseFederationServlet):
"""
Exchange a bearer token for information about a user.
The response format should be compatible with:
http://openid.net/specs/openid-connect-core-1_0.html#UserInfoResponse
GET /openid/userinfo?access_token=ABDEFGH HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
{
"sub": "@userpart:example.org",
}
"""
PATH = "/openid/userinfo"
REQUIRE_AUTH = False
@defer.inlineCallbacks
def on_GET(self, origin, content, query):
token = query.get("access_token", [None])[0]
if token is None:
defer.returnValue((401, {
"errcode": "M_MISSING_TOKEN", "error": "Access Token required"
}))
return
user_id = yield self.handler.on_openid_userinfo(token)
if user_id is None:
defer.returnValue((401, {
"errcode": "M_UNKNOWN_TOKEN",
"error": "Access Token unknown or expired"
}))
defer.returnValue((200, {"sub": user_id}))
class PublicRoomList(BaseFederationServlet):
"""
Fetch the public room list for this server.
This API returns information in the same format as /publicRooms on the
client API, but will only ever include local public rooms and hence is
intended for consumption by other home servers.
GET /publicRooms HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
{
"chunk": [
{
"aliases": [
"#test:localhost"
],
"guest_can_join": false,
"name": "test room",
"num_joined_members": 3,
"room_id": "!whkydVegtvatLfXmPN:localhost",
"world_readable": false
}
],
"end": "END",
"start": "START"
}
"""
PATH = "/publicRooms"
@defer.inlineCallbacks
def on_GET(self, origin, content, query):
limit = parse_integer_from_args(query, "limit", 0)
since_token = parse_string_from_args(query, "since", None)
data = yield self.room_list_handler.get_local_public_room_list(
limit, since_token
)
defer.returnValue((200, data))
class FederationVersionServlet(BaseFederationServlet):
PATH = "/version"
REQUIRE_AUTH = False
def on_GET(self, origin, content, query):
return defer.succeed((200, {
"server": {
"name": "Synapse",
"version": get_version_string(synapse)
},
}))
SERVLET_CLASSES = (
@@ -453,6 +583,7 @@ SERVLET_CLASSES = (
FederationPullServlet,
FederationEventServlet,
FederationStateServlet,
FederationStateIdsServlet,
FederationBackfillServlet,
FederationQueryServlet,
FederationMakeJoinServlet,
@@ -468,6 +599,9 @@ SERVLET_CLASSES = (
FederationClientKeysClaimServlet,
FederationThirdPartyInviteExchangeServlet,
On3pidBindServlet,
OpenIdUserInfo,
PublicRoomList,
FederationVersionServlet,
)
@@ -478,4 +612,5 @@ def register_servlets(hs, resource, authenticator, ratelimiter):
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
room_list_handler=hs.get_room_list_handler(),
).register(resource)

View File

@@ -13,24 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.appservice.scheduler import AppServiceScheduler
from synapse.appservice.api import ApplicationServiceApi
from .register import RegistrationHandler
from .room import (
RoomCreationHandler, RoomListHandler, RoomContextHandler,
RoomCreationHandler, RoomContextHandler,
)
from .room_member import RoomMemberHandler
from .message import MessageHandler
from .events import EventStreamHandler, EventHandler
from .federation import FederationHandler
from .profile import ProfileHandler
from .presence import PresenceHandler
from .directory import DirectoryHandler
from .typing import TypingNotificationHandler
from .admin import AdminHandler
from .appservice import ApplicationServicesHandler
from .sync import SyncHandler
from .auth import AuthHandler
from .identity import IdentityHandler
from .receipts import ReceiptsHandler
from .search import SearchHandler
@@ -38,10 +30,21 @@ from .search import SearchHandler
class Handlers(object):
""" A collection of all the event handlers.
""" Deprecated. A collection of handlers.
There's no need to lazily create these; we'll just make them all eagerly
at construction time.
At some point most of the classes whose names ended in "Handler" were
accessed through this class.
However, this made it painful to unit test the handlers, and to run cut
down versions of synapse that only use specific handlers, because using a
single handler required creating all of them. So some of the
handlers have been lifted out of the Handlers object and are now accessed
directly through the homeserver object itself.
Any new handlers should follow the new pattern of being accessed through
the homeserver object and should not be added to the Handlers object.
The remaining handlers should be moved out of the handlers object.
"""
def __init__(self, hs):
@@ -49,26 +52,11 @@ class Handlers(object):
self.message_handler = MessageHandler(hs)
self.room_creation_handler = RoomCreationHandler(hs)
self.room_member_handler = RoomMemberHandler(hs)
self.event_stream_handler = EventStreamHandler(hs)
self.event_handler = EventHandler(hs)
self.federation_handler = FederationHandler(hs)
self.profile_handler = ProfileHandler(hs)
self.presence_handler = PresenceHandler(hs)
self.room_list_handler = RoomListHandler(hs)
self.directory_handler = DirectoryHandler(hs)
self.typing_notification_handler = TypingNotificationHandler(hs)
self.admin_handler = AdminHandler(hs)
self.receipts_handler = ReceiptsHandler(hs)
asapi = ApplicationServiceApi(hs)
self.appservice_handler = ApplicationServicesHandler(
hs, asapi, AppServiceScheduler(
clock=hs.get_clock(),
store=hs.get_datastore(),
as_api=asapi
)
)
self.sync_handler = SyncHandler(hs)
self.auth_handler = AuthHandler(hs)
self.identity_handler = IdentityHandler(hs)
self.search_handler = SearchHandler(hs)
self.room_context_handler = RoomContextHandler(hs)

View File

@@ -13,49 +13,33 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.errors import LimitExceededError, SynapseError, AuthError
from synapse.crypto.event_signing import add_hashes_and_signatures
import synapse.types
from synapse.api.constants import Membership, EventTypes
from synapse.types import UserID, RoomAlias, Requester
from synapse.push.action_generator import ActionGenerator
from synapse.util.logcontext import PreserveLoggingContext
import logging
from synapse.api.errors import LimitExceededError
from synapse.types import UserID
logger = logging.getLogger(__name__)
VISIBILITY_PRIORITY = (
"world_readable",
"shared",
"invited",
"joined",
)
MEMBERSHIP_PRIORITY = (
Membership.JOIN,
Membership.INVITE,
Membership.KNOCK,
Membership.LEAVE,
Membership.BAN,
)
class BaseHandler(object):
"""
Common base class for the event handlers.
Attributes:
store (synapse.storage.events.StateStore):
store (synapse.storage.DataStore):
state_handler (synapse.state.StateHandler):
"""
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
self.store = hs.get_datastore()
self.auth = hs.get_auth()
self.notifier = hs.get_notifier()
@@ -65,165 +49,26 @@ class BaseHandler(object):
self.clock = hs.get_clock()
self.hs = hs
self.signing_key = hs.config.signing_key[0]
self.server_name = hs.hostname
self.event_builder_factory = hs.get_event_builder_factory()
@defer.inlineCallbacks
def filter_events_for_clients(self, user_tuples, events, event_id_to_state):
""" Returns dict of user_id -> list of events that user is allowed to
see.
Args:
user_tuples (str, bool): (user id, is_peeking) for each user to be
checked. is_peeking should be true if:
* the user is not currently a member of the room, and:
* the user has not been a member of the room since the
given events
events ([synapse.events.EventBase]): list of events to filter
"""
forgotten = yield defer.gatherResults([
self.store.who_forgot_in_room(
room_id,
)
for room_id in frozenset(e.room_id for e in events)
], consumeErrors=True)
# Set of membership event_ids that have been forgotten
event_id_forgotten = frozenset(
row["event_id"] for rows in forgotten for row in rows
)
def allowed(event, user_id, is_peeking):
"""
Args:
event (synapse.events.EventBase): event to check
user_id (str)
is_peeking (bool)
"""
state = event_id_to_state[event.event_id]
# get the room_visibility at the time of the event.
visibility_event = state.get((EventTypes.RoomHistoryVisibility, ""), None)
if visibility_event:
visibility = visibility_event.content.get("history_visibility", "shared")
else:
visibility = "shared"
if visibility not in VISIBILITY_PRIORITY:
visibility = "shared"
# if it was world_readable, it's easy: everyone can read it
if visibility == "world_readable":
return True
# Always allow history visibility events on boundaries. This is done
# by setting the effective visibility to the least restrictive
# of the old vs new.
if event.type == EventTypes.RoomHistoryVisibility:
prev_content = event.unsigned.get("prev_content", {})
prev_visibility = prev_content.get("history_visibility", None)
if prev_visibility not in VISIBILITY_PRIORITY:
prev_visibility = "shared"
new_priority = VISIBILITY_PRIORITY.index(visibility)
old_priority = VISIBILITY_PRIORITY.index(prev_visibility)
if old_priority < new_priority:
visibility = prev_visibility
# likewise, if the event is the user's own membership event, use
# the 'most joined' membership
membership = None
if event.type == EventTypes.Member and event.state_key == user_id:
membership = event.content.get("membership", None)
if membership not in MEMBERSHIP_PRIORITY:
membership = "leave"
prev_content = event.unsigned.get("prev_content", {})
prev_membership = prev_content.get("membership", None)
if prev_membership not in MEMBERSHIP_PRIORITY:
prev_membership = "leave"
new_priority = MEMBERSHIP_PRIORITY.index(membership)
old_priority = MEMBERSHIP_PRIORITY.index(prev_membership)
if old_priority < new_priority:
membership = prev_membership
# otherwise, get the user's membership at the time of the event.
if membership is None:
membership_event = state.get((EventTypes.Member, user_id), None)
if membership_event:
if membership_event.event_id not in event_id_forgotten:
membership = membership_event.membership
# if the user was a member of the room at the time of the event,
# they can see it.
if membership == Membership.JOIN:
return True
if visibility == "joined":
# we weren't a member at the time of the event, so we can't
# see this event.
return False
elif visibility == "invited":
# user can also see the event if they were *invited* at the time
# of the event.
return membership == Membership.INVITE
else:
# visibility is shared: user can also see the event if they have
# become a member since the event
#
# XXX: if the user has subsequently joined and then left again,
# ideally we would share history up to the point they left. But
# we don't know when they left.
return not is_peeking
defer.returnValue({
user_id: [
event
for event in events
if allowed(event, user_id, is_peeking)
]
for user_id, is_peeking in user_tuples
})
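The boundary handling above picks the least restrictive of the old and new visibility; since VISIBILITY_PRIORITY is ordered from least to most restrictive, that is just the smaller index. A tiny worked example:

    VISIBILITY_PRIORITY = ("world_readable", "shared", "invited", "joined")

    def effective_visibility(new, prev):
        # Unknown values fall back to "shared", as in the code above.
        new = new if new in VISIBILITY_PRIORITY else "shared"
        prev = prev if prev in VISIBILITY_PRIORITY else "shared"
        # Least restrictive (lowest index) of the two wins on a boundary.
        return min(new, prev, key=VISIBILITY_PRIORITY.index)

    assert effective_visibility("joined", "shared") == "shared"
    assert effective_visibility("invited", "world_readable") == "world_readable"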
@defer.inlineCallbacks
def _filter_events_for_client(self, user_id, events, is_peeking=False):
"""
Check which events a user is allowed to see
Args:
user_id(str): user id to be checked
events([synapse.events.EventBase]): list of events to be checked
is_peeking(bool): should be True if:
* the user is not currently a member of the room, and:
* the user has not been a member of the room since the given
events
Returns:
[synapse.events.EventBase]
"""
types = (
(EventTypes.RoomHistoryVisibility, ""),
(EventTypes.Member, user_id),
)
event_id_to_state = yield self.store.get_state_for_events(
frozenset(e.event_id for e in events),
types=types
)
res = yield self.filter_events_for_clients(
[(user_id, is_peeking)], events, event_id_to_state
)
defer.returnValue(res.get(user_id, []))
def ratelimit(self, requester):
time_now = self.clock.time()
user_id = requester.user.to_string()
# The AS user itself is never rate limited.
app_service = self.store.get_app_service_by_user_id(user_id)
if app_service is not None:
return # do not ratelimit app service senders
# Disable rate limiting of users belonging to any AS that is configured
# not to be rate limited in its registration file (rate_limited: true|false).
if requester.app_service and not requester.app_service.is_rate_limited():
return
allowed, time_allowed = self.ratelimiter.send_message(
requester.user.to_string(), time_now,
user_id, time_now,
msg_rate_hz=self.hs.config.rc_messages_per_second,
burst_count=self.hs.config.rc_message_burst_count,
)
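The ratelimit decision itself — raise unless the limiter says the message is allowed — can be sketched as follows. The (allowed, time_allowed) return shape is taken from the call above; LimitExceededError's keyword argument is an assumption of this sketch:

    from synapse.api.errors import LimitExceededError

    def enforce_rate_limit(limiter, user_id, time_now, rate_hz, burst):
        allowed, time_allowed = limiter.send_message(
            user_id, time_now,
            msg_rate_hz=rate_hz,
            burst_count=burst,
        )
        if not allowed:
            # Tell the client how long to back off (assumed keyword).
            raise LimitExceededError(
                retry_after_ms=int(1000 * (time_allowed - time_now)),
            )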
@@ -233,213 +78,20 @@ class BaseHandler(object):
)
@defer.inlineCallbacks
def _create_new_client_event(self, builder, prev_event_ids=None):
if prev_event_ids:
prev_events = yield self.store.add_event_hashes(prev_event_ids)
prev_max_depth = yield self.store.get_max_depth_of_events(prev_event_ids)
depth = prev_max_depth + 1
else:
latest_ret = yield self.store.get_latest_event_ids_and_hashes_in_room(
builder.room_id,
)
if latest_ret:
depth = max([d for _, _, d in latest_ret]) + 1
else:
depth = 1
prev_events = [
(event_id, prev_hashes)
for event_id, prev_hashes, _ in latest_ret
]
builder.prev_events = prev_events
builder.depth = depth
state_handler = self.state_handler
context = yield state_handler.compute_event_context(builder)
if builder.is_state():
builder.prev_state = yield self.store.add_event_hashes(
context.prev_state_events
)
yield self.auth.add_auth_events(builder, context)
add_hashes_and_signatures(
builder, self.server_name, self.signing_key
)
event = builder.build()
logger.debug(
"Created event %s with current state: %s",
event.event_id, context.current_state,
)
defer.returnValue(
(event, context,)
)
def is_host_in_room(self, current_state):
room_members = [
(state_key, event.membership)
for ((event_type, state_key), event) in current_state.items()
if event_type == EventTypes.Member
]
if len(room_members) == 0:
# Have we just created the room, and is this about to be the very
# first member event?
create_event = current_state.get(("m.room.create", ""))
if create_event:
return True
for (state_key, membership) in room_members:
if (
UserID.from_string(state_key).domain == self.hs.hostname
and membership == Membership.JOIN
):
return True
return False
@defer.inlineCallbacks
def handle_new_client_event(
self,
requester,
event,
context,
ratelimit=True,
extra_users=[]
):
# We now need to go and hit out to wherever we need to hit out to.
if ratelimit:
self.ratelimit(requester)
self.auth.check(event, auth_events=context.current_state)
yield self.maybe_kick_guest_users(event, context.current_state.values())
if event.type == EventTypes.CanonicalAlias:
# Check the alias is actually valid (at this time at least)
room_alias_str = event.content.get("alias", None)
if room_alias_str:
room_alias = RoomAlias.from_string(room_alias_str)
directory_handler = self.hs.get_handlers().directory_handler
mapping = yield directory_handler.get_association(room_alias)
if mapping["room_id"] != event.room_id:
raise SynapseError(
400,
"Room alias %s does not point to the room" % (
room_alias_str,
)
)
federation_handler = self.hs.get_handlers().federation_handler
if event.type == EventTypes.Member:
if event.content["membership"] == Membership.INVITE:
def is_inviter_member_event(e):
return (
e.type == EventTypes.Member and
e.sender == event.sender
)
event.unsigned["invite_room_state"] = [
{
"type": e.type,
"state_key": e.state_key,
"content": e.content,
"sender": e.sender,
}
for k, e in context.current_state.items()
if e.type in self.hs.config.room_invite_state_types
or is_inviter_member_event(e)
]
invitee = UserID.from_string(event.state_key)
if not self.hs.is_mine(invitee):
# TODO: Can we add signature from remote server in a nicer
# way? If we have been invited by a remote server, we need
# to get them to sign the event.
returned_invite = yield federation_handler.send_invite(
invitee.domain,
event,
)
event.unsigned.pop("room_state", None)
# TODO: Make sure the signatures actually are correct.
event.signatures.update(
returned_invite.signatures
)
if event.type == EventTypes.Redaction:
if self.auth.check_redaction(event, auth_events=context.current_state):
original_event = yield self.store.get_event(
event.redacts,
check_redacted=False,
get_prev_content=False,
allow_rejected=False,
allow_none=False
)
if event.user_id != original_event.user_id:
raise AuthError(
403,
"You don't have permission to redact events"
)
if event.type == EventTypes.Create and context.current_state:
raise AuthError(
403,
"Changing the room create event is forbidden",
)
action_generator = ActionGenerator(self.hs)
yield action_generator.handle_push_actions_for_event(
event, context, self
)
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event, context=context
)
destinations = set()
for k, s in context.current_state.items():
try:
if k[0] == EventTypes.Member:
if s.content["membership"] == Membership.JOIN:
destinations.add(
UserID.from_string(s.state_key).domain
)
except SynapseError:
logger.warn(
"Failed to get destination from event %s", s.event_id
)
with PreserveLoggingContext():
# Don't block waiting on waking up all the listeners.
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
# If invite, remove room_state from unsigned before sending.
event.unsigned.pop("invite_room_state", None)
federation_handler.handle_new_event(
event, destinations=destinations,
)
@defer.inlineCallbacks
def maybe_kick_guest_users(self, event, current_state):
def maybe_kick_guest_users(self, event, context=None):
# Technically this function invalidates current_state by changing it.
# Hopefully this isn't that important to the caller.
if event.type == EventTypes.GuestAccess:
guest_access = event.content.get("guest_access", "forbidden")
if guest_access != "can_join":
if context:
current_state = yield self.store.get_events(
context.current_state_ids.values()
)
current_state = current_state.values()
else:
current_state = yield self.store.get_current_state(event.room_id)
logger.info("maybe_kick_guest_users %r", current_state)
yield self.kick_guest_users(current_state)
@defer.inlineCallbacks
@@ -472,7 +124,8 @@ class BaseHandler(object):
# and having homeservers have their own users leave keeps more
# of that decision-making and control local to the guest-having
# homeserver.
requester = Requester(target_user, "", True)
requester = synapse.types.create_requester(
target_user, is_guest=True)
handler = self.hs.get_handlers().room_member_handler
yield handler.update_membership(
requester,

View File

@@ -16,8 +16,8 @@
from twisted.internet import defer
from synapse.api.constants import EventTypes
from synapse.appservice import ApplicationService
from synapse.types import UserID
from synapse.util.metrics import Measure
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
import logging
@@ -35,47 +35,81 @@ def log_failure(failure):
)
# NB: Purposefully not inheriting BaseHandler since that contains way too much
# setup code which this handler does not need or use. This makes testing a lot
# easier.
class ApplicationServicesHandler(object):
def __init__(self, hs, appservice_api, appservice_scheduler):
def __init__(self, hs):
self.store = hs.get_datastore()
self.hs = hs
self.appservice_api = appservice_api
self.scheduler = appservice_scheduler
self.is_mine_id = hs.is_mine_id
self.appservice_api = hs.get_application_service_api()
self.scheduler = hs.get_application_service_scheduler()
self.started_scheduler = False
self.clock = hs.get_clock()
self.notify_appservices = hs.config.notify_appservices
self.current_max = 0
self.is_processing = False
@defer.inlineCallbacks
def notify_interested_services(self, event):
def notify_interested_services(self, current_id):
"""Notifies (pushes) all application services interested in this event.
Pushing is done asynchronously, so this method won't block for any
prolonged length of time.
Args:
event(Event): The event to push out to interested services.
current_id(int): The current maximum ID.
"""
# Gather interested services
services = yield self._get_services_for_event(event)
if len(services) == 0:
return # no services need notifying
services = self.store.get_app_services()
if not services or not self.notify_appservices:
return
# Do we know this user exists? If not, poke the user query API for
# all services which match that user regex. This needs to block as these
# user queries need to be made BEFORE pushing the event.
yield self._check_user_exists(event.sender)
if event.type == EventTypes.Member:
yield self._check_user_exists(event.state_key)
self.current_max = max(self.current_max, current_id)
if self.is_processing:
return
if not self.started_scheduler:
self.scheduler.start().addErrback(log_failure)
self.started_scheduler = True
with Measure(self.clock, "notify_interested_services"):
self.is_processing = True
try:
upper_bound = self.current_max
limit = 100
while True:
upper_bound, events = yield self.store.get_new_events_for_appservice(
upper_bound, limit
)
# Fork off pushes to these services
for service in services:
self.scheduler.submit_event_for_as(service, event)
if not events:
break
for event in events:
# Gather interested services
services = yield self._get_services_for_event(event)
if len(services) == 0:
continue # no services need notifying
# Do we know this user exists? If not, poke the user
# query API for all services which match that user regex.
# This needs to block as these user queries need to be
# made BEFORE pushing the event.
yield self._check_user_exists(event.sender)
if event.type == EventTypes.Member:
yield self._check_user_exists(event.state_key)
if not self.started_scheduler:
self.scheduler.start().addErrback(log_failure)
self.started_scheduler = True
# Fork off pushes to these services
for service in services:
preserve_fn(self.scheduler.submit_event_for_as)(
service, event
)
yield self.store.set_appservice_last_pos(upper_bound)
if len(events) < limit:
break
finally:
self.is_processing = False
@defer.inlineCallbacks
def query_user_exists(self, user_id):
@@ -108,11 +142,12 @@ class ApplicationServicesHandler(object):
association can be found.
"""
room_alias_str = room_alias.to_string()
alias_query_services = yield self._get_services_for_event(
event=None,
restrict_to=ApplicationService.NS_ALIASES,
alias_list=[room_alias_str]
)
services = self.store.get_app_services()
alias_query_services = [
s for s in services if (
s.is_interested_in_alias(room_alias_str)
)
]
for alias_service in alias_query_services:
is_known_alias = yield self.appservice_api.query_alias(
alias_service, room_alias_str
@@ -125,52 +160,97 @@ class ApplicationServicesHandler(object):
defer.returnValue(result)
@defer.inlineCallbacks
def _get_services_for_event(self, event, restrict_to="", alias_list=None):
def query_3pe(self, kind, protocol, fields):
services = yield self._get_services_for_3pn(protocol)
results = yield preserve_context_over_deferred(defer.DeferredList([
preserve_fn(self.appservice_api.query_3pe)(service, kind, protocol, fields)
for service in services
], consumeErrors=True))
ret = []
for (success, result) in results:
if success:
ret.extend(result)
defer.returnValue(ret)
@defer.inlineCallbacks
def get_3pe_protocols(self, only_protocol=None):
services = self.store.get_app_services()
protocols = {}
# Collect up all the individual protocol responses out of the ASes
for s in services:
for p in s.protocols:
if only_protocol is not None and p != only_protocol:
continue
if p not in protocols:
protocols[p] = []
info = yield self.appservice_api.get_3pe_protocol(s, p)
if info is not None:
protocols[p].append(info)
def _merge_instances(infos):
if not infos:
return {}
# Merge the 'instances' lists of multiple results, but just take
# the other fields from the first as they ought to be identical
# copy the result so as not to corrupt the cached one
combined = dict(infos[0])
combined["instances"] = list(combined["instances"])
for info in infos[1:]:
combined["instances"].extend(info["instances"])
return combined
for p in protocols.keys():
protocols[p] = _merge_instances(protocols[p])
defer.returnValue(protocols)
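On toy data, _merge_instances above behaves like this (the protocol metadata is invented for illustration): the ordinary fields come from the first response, while every response's "instances" list is concatenated.

infos = [
    {"user_fields": ["network"], "instances": [{"desc": "Freenode"}]},
    {"user_fields": ["network"], "instances": [{"desc": "OFTC"}]},
]
merged = dict(infos[0])                 # copy, so any cached dict is untouched
merged["instances"] = list(merged["instances"])
for info in infos[1:]:
    merged["instances"].extend(info["instances"])
# merged == {"user_fields": ["network"],
#            "instances": [{"desc": "Freenode"}, {"desc": "OFTC"}]}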
@defer.inlineCallbacks
def _get_services_for_event(self, event):
"""Retrieve a list of application services interested in this event.
Args:
event(Event): The event to check. Can be None if alias_list is not.
restrict_to(str): The namespace to restrict regex tests to.
alias_list: A list of aliases to get services for. If None, this
list is obtained from the database.
Returns:
list<ApplicationService>: A list of services interested in this
event based on the service regex.
"""
member_list = None
if hasattr(event, "room_id"):
# We need to know the aliases associated with this event.room_id,
# if any.
if not alias_list:
alias_list = yield self.store.get_aliases_for_room(
event.room_id
)
# We need to know the members associated with this event.room_id,
# if any.
member_list = yield self.store.get_users_in_room(event.room_id)
services = yield self.store.get_app_services()
services = self.store.get_app_services()
interested_list = [
s for s in services if (
s.is_interested(event, restrict_to, alias_list, member_list)
yield s.is_interested(event, self.store)
)
]
defer.returnValue(interested_list)
@defer.inlineCallbacks
def _get_services_for_user(self, user_id):
services = yield self.store.get_app_services()
services = self.store.get_app_services()
interested_list = [
s for s in services if (
s.is_interested_in_user(user_id)
)
]
defer.returnValue(interested_list)
return defer.succeed(interested_list)
def _get_services_for_3pn(self, protocol):
services = self.store.get_app_services()
interested_list = [
s for s in services if s.is_interested_in_protocol(protocol)
]
return defer.succeed(interested_list)
@defer.inlineCallbacks
def _is_unknown_user(self, user_id):
user = UserID.from_string(user_id)
if not self.hs.is_mine(user):
if not self.is_mine_id(user_id):
# we don't know if they are unknown or not since it isn't one of our
# users. We can't poke ASes.
defer.returnValue(False)
@@ -182,7 +262,7 @@ class ApplicationServicesHandler(object):
return
# user not found; could be the AS though, so check.
services = yield self.store.get_app_services()
services = self.store.get_app_services()
service_list = [s for s in services if s.sender == user_id]
defer.returnValue(len(service_list) == 0)
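A hedged, synchronous sketch of the paging loop that notify_interested_services now runs: pull batches of new events up to the current maximum stream position, notify interested services, and persist the last-handled position so a restart resumes cleanly. The store object and its two methods are stand-ins for the real datastore API.

def drain_new_events(store, current_max, notify, limit=100):
    upper_bound = current_max
    while True:
        upper_bound, events = store.get_new_events_for_appservice(
            upper_bound, limit
        )
        if not events:
            break
        for event in events:
            notify(event)
        # Persisting the position after each batch bounds how much work
        # is repeated if the process dies mid-drain.
        store.set_appservice_last_pos(upper_bound)
        if len(events) < limit:
            break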

@@ -18,7 +18,7 @@ from twisted.internet import defer
from ._base import BaseHandler
from synapse.api.constants import LoginType
from synapse.types import UserID
from synapse.api.errors import AuthError, LoginError, Codes
from synapse.api.errors import AuthError, LoginError, Codes, StoreError, SynapseError
from synapse.util.async import run_on_reactor
from twisted.web.client import PartialDownloadError
@@ -38,6 +38,10 @@ class AuthHandler(BaseHandler):
SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
super(AuthHandler, self).__init__(hs)
self.checkers = {
LoginType.PASSWORD: self._check_password_auth,
@@ -47,22 +51,18 @@ class AuthHandler(BaseHandler):
}
self.bcrypt_rounds = hs.config.bcrypt_rounds
self.sessions = {}
self.INVALID_TOKEN_HTTP_STATUS = 401
self.ldap_enabled = hs.config.ldap_enabled
self.ldap_server = hs.config.ldap_server
self.ldap_port = hs.config.ldap_port
self.ldap_tls = hs.config.ldap_tls
self.ldap_search_base = hs.config.ldap_search_base
self.ldap_search_property = hs.config.ldap_search_property
self.ldap_email_property = hs.config.ldap_email_property
self.ldap_full_name_property = hs.config.ldap_full_name_property
account_handler = _AccountHandler(
hs, check_user_exists=self.check_user_exists
)
if self.ldap_enabled is True:
import ldap
logger.info("Import ldap version: %s", ldap.__version__)
self.password_providers = [
module(config=config, account_handler=account_handler)
for module, config in hs.config.password_providers
]
self.hs = hs # FIXME better possibility to access registrationHandler later?
self.device_handler = hs.get_device_handler()
@defer.inlineCallbacks
def check_auth(self, flows, clientdict, clientip):
@@ -133,13 +133,30 @@ class AuthHandler(BaseHandler):
creds = session['creds']
# check auth type currently being presented
errordict = {}
if 'type' in authdict:
if authdict['type'] not in self.checkers:
login_type = authdict['type']
if login_type not in self.checkers:
raise LoginError(400, "", Codes.UNRECOGNIZED)
result = yield self.checkers[authdict['type']](authdict, clientip)
if result:
creds[authdict['type']] = result
self._save_session(session)
try:
result = yield self.checkers[login_type](authdict, clientip)
if result:
creds[login_type] = result
self._save_session(session)
except LoginError as e:
if login_type == LoginType.EMAIL_IDENTITY:
# riot used to have a bug where it would request a new
# validation token (thus sending a new email) each time it
# got a 401 with a 'flows' field.
# (https://github.com/vector-im/vector-web/issues/2447).
#
# Grandfather in the old behaviour for now to avoid
# breaking old riot deployments.
raise e
# this step failed. Merge the error dict into the response
# so that the client can have another go.
errordict = e.error_dict()
for f in flows:
if len(set(f) - set(creds.keys())) == 0:
@@ -148,6 +165,7 @@ class AuthHandler(BaseHandler):
ret = self._auth_dict_for_flows(flows, session)
ret['completed'] = creds.keys()
ret.update(errordict)
defer.returnValue((False, ret, clientdict, session['id']))
@defer.inlineCallbacks
@@ -220,7 +238,6 @@ class AuthHandler(BaseHandler):
sess = self._get_session_info(session_id)
return sess.setdefault('serverdict', {}).get(key, default)
@defer.inlineCallbacks
def _check_password_auth(self, authdict, _):
if "user" not in authdict or "password" not in authdict:
raise LoginError(400, "", Codes.MISSING_PARAM)
@@ -230,11 +247,7 @@ class AuthHandler(BaseHandler):
if not user_id.startswith('@'):
user_id = UserID.create(user_id, self.hs.hostname).to_string()
if not (yield self._check_password(user_id, password)):
logger.warn("Failed password login for user %s", user_id)
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
defer.returnValue(user_id)
return self._check_password(user_id, password)
@defer.inlineCallbacks
def _check_recaptcha(self, authdict, clientip):
@@ -270,8 +283,17 @@ class AuthHandler(BaseHandler):
data = pde.response
resp_body = simplejson.loads(data)
if 'success' in resp_body and resp_body['success']:
defer.returnValue(True)
if 'success' in resp_body:
# Note that we do NOT check the hostname here: we explicitly
# intend the CAPTCHA to be presented by whatever client the
# user is using, we just care that they have completed a CAPTCHA.
logger.info(
"%s reCAPTCHA from hostname %s",
"Successful" if resp_body['success'] else "Failed",
resp_body.get('hostname')
)
if resp_body['success']:
defer.returnValue(True)
raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)
@defer.inlineCallbacks
@@ -338,167 +360,189 @@ class AuthHandler(BaseHandler):
return self.sessions[session_id]
@defer.inlineCallbacks
def login_with_password(self, user_id, password):
def validate_password_login(self, user_id, password):
"""
Authenticates the user with their username and password.
Used only by the v1 login API.
Args:
user_id (str): User ID
user_id (str): complete @user:id
password (str): Password
Returns:
A tuple of:
The user's ID.
The access token for the user's session.
The refresh token for the user's session.
defer.Deferred: (str) canonical user id
Raises:
StoreError if there was a problem storing the token.
StoreError if there was a problem accessing the database
LoginError if there was an authentication problem.
"""
if not (yield self._check_password(user_id, password)):
logger.warn("Failed password login for user %s", user_id)
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
logger.info("Logging in user %s", user_id)
access_token = yield self.issue_access_token(user_id)
refresh_token = yield self.issue_refresh_token(user_id)
defer.returnValue((user_id, access_token, refresh_token))
return self._check_password(user_id, password)
@defer.inlineCallbacks
def get_login_tuple_for_user_id(self, user_id):
def get_login_tuple_for_user_id(self, user_id, device_id=None,
initial_display_name=None):
"""
Gets login tuple for the user with the given user ID.
Creates a new access/refresh token for the user.
The user is assumed to have been authenticated by some other
mechanism (e.g. CAS)
mechanism (e.g. CAS), and the user_id converted to the canonical case.
The device will be recorded in the table if it is not there already.
Args:
user_id (str): User ID
user_id (str): canonical User ID
device_id (str|None): the device ID to associate with the tokens.
None to leave the tokens unassociated with a device (deprecated:
we should always have a device ID)
initial_display_name (str): display name to associate with the
device if it needs re-registering
Returns:
A tuple of:
The user's ID.
The access token for the user's session.
The refresh token for the user's session.
Raises:
StoreError if there was a problem storing the token.
LoginError if there was an authentication problem.
"""
user_id, ignored = yield self._find_user_id_and_pwd_hash(user_id)
logger.info("Logging in user %s on device %s", user_id, device_id)
access_token = yield self.issue_access_token(user_id, device_id)
refresh_token = yield self.issue_refresh_token(user_id, device_id)
logger.info("Logging in user %s", user_id)
access_token = yield self.issue_access_token(user_id)
refresh_token = yield self.issue_refresh_token(user_id)
defer.returnValue((user_id, access_token, refresh_token))
# the device *should* have been registered before we got here; however,
# it's possible we raced against a DELETE operation. The thing we
# really don't want is active access_tokens without a record of the
# device, so we double-check it here.
if device_id is not None:
yield self.device_handler.check_device_registered(
user_id, device_id, initial_display_name
)
defer.returnValue((access_token, refresh_token))
@defer.inlineCallbacks
def does_user_exist(self, user_id):
try:
yield self._find_user_id_and_pwd_hash(user_id)
defer.returnValue(True)
except LoginError:
defer.returnValue(False)
def check_user_exists(self, user_id):
"""
Checks to see if a user with the given id exists. Will check case
insensitively, but return None if there are multiple inexact matches.
Args:
(str) user_id: complete @user:id
Returns:
defer.Deferred: (str) canonical_user_id, or None if zero or
multiple matches
"""
res = yield self._find_user_id_and_pwd_hash(user_id)
if res is not None:
defer.returnValue(res[0])
defer.returnValue(None)
@defer.inlineCallbacks
def _find_user_id_and_pwd_hash(self, user_id):
"""Checks to see if a user with the given id exists. Will check case
insensitively, but will throw if there are multiple inexact matches.
insensitively, but will return None if there are multiple inexact
matches.
Returns:
tuple: A 2-tuple of `(canonical_user_id, password_hash)`
None: if there is not exactly one match
"""
user_infos = yield self.store.get_users_by_id_case_insensitive(user_id)
result = None
if not user_infos:
logger.warn("Attempted to login as %s but they do not exist", user_id)
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
if len(user_infos) > 1:
if user_id not in user_infos:
logger.warn(
"Attempted to login as %s but it matches more than one user "
"inexactly: %r",
user_id, user_infos.keys()
)
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
defer.returnValue((user_id, user_infos[user_id]))
elif len(user_infos) == 1:
# a single match (possibly not exact)
result = user_infos.popitem()
elif user_id in user_infos:
# multiple matches, but one is exact
result = (user_id, user_infos[user_id])
else:
defer.returnValue(user_infos.popitem())
# multiple matches, none of them exact
logger.warn(
"Attempted to login as %s but it matches more than one user "
"inexactly: %r",
user_id, user_infos.keys()
)
defer.returnValue(result)
@defer.inlineCallbacks
def _check_password(self, user_id, password):
defer.returnValue(
not (
(yield self._check_ldap_password(user_id, password))
or
(yield self._check_local_password(user_id, password))
))
"""Authenticate a user against the LDAP and local databases.
user_id is checked case insensitively against the local database, but
will throw if there are multiple inexact matches.
Args:
user_id (str): complete @user:id
Returns:
(str) the canonical_user_id
Raises:
LoginError if login fails
"""
for provider in self.password_providers:
is_valid = yield provider.check_password(user_id, password)
if is_valid:
defer.returnValue(user_id)
canonical_user_id = yield self._check_local_password(user_id, password)
if canonical_user_id:
defer.returnValue(canonical_user_id)
# unknown username or invalid password. We raise a 403 here, but note
# that if we're doing user-interactive login, it turns all LoginErrors
# into a 401 anyway.
raise LoginError(
403, "Invalid password",
errcode=Codes.FORBIDDEN
)
@defer.inlineCallbacks
def _check_local_password(self, user_id, password):
try:
user_id, password_hash = yield self._find_user_id_and_pwd_hash(user_id)
defer.returnValue(not self.validate_hash(password, password_hash))
except LoginError:
defer.returnValue(False)
"""Authenticate a user against the local password database.
user_id is checked case insensitively, but will return None if there are
multiple inexact matches.
Args:
user_id (str): complete @user:id
Returns:
(str) the canonical_user_id, or None if unknown user / bad password
"""
lookupres = yield self._find_user_id_and_pwd_hash(user_id)
if not lookupres:
defer.returnValue(None)
(user_id, password_hash) = lookupres
result = self.validate_hash(password, password_hash)
if not result:
logger.warn("Failed password login for user %s", user_id)
defer.returnValue(None)
defer.returnValue(user_id)
@defer.inlineCallbacks
def _check_ldap_password(self, user_id, password):
if self.ldap_enabled is not True:
logger.debug("LDAP not configured")
defer.returnValue(False)
import ldap
logger.info("Authenticating %s with LDAP" % user_id)
try:
ldap_url = "%s:%s" % (self.ldap_server, self.ldap_port)
logger.debug("Connecting LDAP server at %s" % ldap_url)
l = ldap.initialize(ldap_url)
if self.ldap_tls:
logger.debug("Initiating TLS")
l.start_tls_s()
local_name = UserID.from_string(user_id).localpart
dn = "%s=%s, %s" % (
self.ldap_search_property,
local_name,
self.ldap_search_base)
logger.debug("DN for LDAP authentication: %s" % dn)
l.simple_bind_s(dn.encode('utf-8'), password.encode('utf-8'))
if not (yield self.does_user_exist(user_id)):
handler = self.hs.get_handlers().registration_handler
user_id, access_token = (
yield handler.register(localpart=local_name)
)
defer.returnValue(True)
except ldap.LDAPError as e:
logger.warn("LDAP error: %s", e)
defer.returnValue(False)
@defer.inlineCallbacks
def issue_access_token(self, user_id):
def issue_access_token(self, user_id, device_id=None):
access_token = self.generate_access_token(user_id)
yield self.store.add_access_token_to_user(user_id, access_token)
yield self.store.add_access_token_to_user(user_id, access_token,
device_id)
defer.returnValue(access_token)
@defer.inlineCallbacks
def issue_refresh_token(self, user_id):
def issue_refresh_token(self, user_id, device_id=None):
refresh_token = self.generate_refresh_token(user_id)
yield self.store.add_refresh_token_to_user(user_id, refresh_token)
yield self.store.add_refresh_token_to_user(user_id, refresh_token,
device_id)
defer.returnValue(refresh_token)
def generate_access_token(self, user_id, extra_caveats=None):
def generate_access_token(self, user_id, extra_caveats=None,
duration_in_ms=(60 * 60 * 1000)):
extra_caveats = extra_caveats or []
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = access")
now = self.hs.get_clock().time_msec()
expiry = now + (60 * 60 * 1000)
expiry = now + duration_in_ms
macaroon.add_first_party_caveat("time < %d" % (expiry,))
for caveat in extra_caveats:
macaroon.add_first_party_caveat(caveat)
@@ -514,22 +558,28 @@ class AuthHandler(BaseHandler):
))
return m.serialize()
def generate_short_term_login_token(self, user_id):
def generate_short_term_login_token(self, user_id, duration_in_ms=(2 * 60 * 1000)):
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = login")
now = self.hs.get_clock().time_msec()
expiry = now + (2 * 60 * 1000)
expiry = now + duration_in_ms
macaroon.add_first_party_caveat("time < %d" % (expiry,))
return macaroon.serialize()
def generate_delete_pusher_token(self, user_id):
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = delete_pusher")
return macaroon.serialize()
def validate_short_term_login_token_and_get_user_id(self, login_token):
auth_api = self.hs.get_auth()
try:
macaroon = pymacaroons.Macaroon.deserialize(login_token)
auth_api = self.hs.get_auth()
auth_api.validate_macaroon(macaroon, "login", True)
return self.get_user_from_macaroon(macaroon)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise AuthError(401, "Invalid token", errcode=Codes.UNKNOWN_TOKEN)
user_id = auth_api.get_user_id_from_macaroon(macaroon)
auth_api.validate_macaroon(macaroon, "login", True, user_id)
return user_id
except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
def _generate_base_macaroon(self, user_id):
macaroon = pymacaroons.Macaroon(
@@ -540,32 +590,39 @@ class AuthHandler(BaseHandler):
macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
return macaroon
def get_user_from_macaroon(self, macaroon):
user_prefix = "user_id = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
return caveat.caveat_id[len(user_prefix):]
raise AuthError(
self.INVALID_TOKEN_HTTP_STATUS, "No user_id found in token",
errcode=Codes.UNKNOWN_TOKEN
)
@defer.inlineCallbacks
def set_password(self, user_id, newpassword, requester=None):
password_hash = self.hash(newpassword)
except_access_token_ids = [requester.access_token_id] if requester else []
except_access_token_id = requester.access_token_id if requester else None
yield self.store.user_set_password_hash(user_id, password_hash)
try:
yield self.store.user_set_password_hash(user_id, password_hash)
except StoreError as e:
if e.code == 404:
raise SynapseError(404, "Unknown user", Codes.NOT_FOUND)
raise e
yield self.store.user_delete_access_tokens(
user_id, except_access_token_ids
user_id, except_access_token_id
)
yield self.hs.get_pusherpool().remove_pushers_by_user(
user_id, except_access_token_ids
user_id, except_access_token_id
)
@defer.inlineCallbacks
def add_threepid(self, user_id, medium, address, validated_at):
# 'Canonicalise' email addresses down to lower case.
# We're now moving towards the Home Server being the entity that
# is responsible for validating threepids used for resetting passwords
# on accounts, so in future Synapse will gain knowledge of specific
# types (mediums) of threepid. For now, we still use the existing
# infrastructure, but this is the start of synapse gaining knowledge
# of specific types of threepid (and fixes the fact that checking
# for the presence of an email address during password reset was
# case sensitive).
if medium == 'email':
address = address.lower()
yield self.store.user_add_threepid(
user_id, medium, address, validated_at,
self.hs.get_clock().time_msec()
@@ -596,7 +653,8 @@ class AuthHandler(BaseHandler):
Returns:
Hashed password (str).
"""
return bcrypt.hashpw(password, bcrypt.gensalt(self.bcrypt_rounds))
return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
bcrypt.gensalt(self.bcrypt_rounds))
def validate_hash(self, password, stored_hash):
"""Validates that self.hash(password) == stored_hash.
@@ -608,4 +666,35 @@ class AuthHandler(BaseHandler):
Returns:
Whether self.hash(password) == stored_hash (bool).
"""
return bcrypt.hashpw(password, stored_hash) == stored_hash
if stored_hash:
return bcrypt.hashpw(password + self.hs.config.password_pepper,
stored_hash.encode('utf-8')) == stored_hash
else:
return False
class _AccountHandler(object):
"""A proxy object that gets passed to password auth providers so they
can register new users etc if necessary.
"""
def __init__(self, hs, check_user_exists):
self.hs = hs
self._check_user_exists = check_user_exists
def check_user_exists(self, user_id):
"""Check if user exissts.
Returns:
Deferred(bool)
"""
return self._check_user_exists(user_id)
def register(self, localpart):
"""Registers a new user with given localpart
Returns:
Deferred: a 2-tuple of (user_id, access_token)
"""
reg = self.hs.get_handlers().registration_handler
return reg.register(localpart=localpart)
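For reference, a self-contained sketch of the macaroon scheme the token helpers above rely on, written against the pymacaroons library; the location, identifier, secret key and user are made-up values, not Synapse's real configuration.

import time
import pymacaroons

key = "server-secret"
m = pymacaroons.Macaroon(location="example.org", identifier="key0", key=key)
m.add_first_party_caveat("type = access")
m.add_first_party_caveat("user_id = @alice:example.org")
m.add_first_party_caveat("time < %d" % (int(time.time() * 1000) + 3600000,))
token = m.serialize()  # this string is what the client presents

def check_time(caveat):
    # Satisfies "time < N" caveats against the current clock (in ms).
    if not caveat.startswith("time < "):
        return False
    return int(time.time() * 1000) < int(caveat[len("time < "):])

v = pymacaroons.Verifier()
v.satisfy_exact("type = access")
v.satisfy_exact("user_id = @alice:example.org")
v.satisfy_general(check_time)
assert v.verify(pymacaroons.Macaroon.deserialize(token), key)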

synapse/handlers/device.py (new file, 181 lines)
@@ -0,0 +1,181 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api import errors
from synapse.util import stringutils
from twisted.internet import defer
from ._base import BaseHandler
import logging
logger = logging.getLogger(__name__)
class DeviceHandler(BaseHandler):
def __init__(self, hs):
super(DeviceHandler, self).__init__(hs)
@defer.inlineCallbacks
def check_device_registered(self, user_id, device_id,
initial_device_display_name=None):
"""
If the given device has not been registered, register it with the
supplied display name.
If no device_id is supplied, we make one up.
Args:
user_id (str): @user:id
device_id (str | None): device id supplied by client
initial_device_display_name (str | None): device display name from
client
Returns:
str: device id (generated if none was supplied)
"""
if device_id is not None:
yield self.store.store_device(
user_id=user_id,
device_id=device_id,
initial_device_display_name=initial_device_display_name,
ignore_if_known=True,
)
defer.returnValue(device_id)
# if the device id is not specified, we'll autogen one, but loop a few
# times in case of a clash.
attempts = 0
while attempts < 5:
try:
device_id = stringutils.random_string(10).upper()
yield self.store.store_device(
user_id=user_id,
device_id=device_id,
initial_device_display_name=initial_device_display_name,
ignore_if_known=False,
)
defer.returnValue(device_id)
except errors.StoreError:
attempts += 1
raise errors.StoreError(500, "Couldn't generate a device ID.")
@defer.inlineCallbacks
def get_devices_by_user(self, user_id):
"""
Retrieve the given user's devices
Args:
user_id (str):
Returns:
defer.Deferred: list[dict[str, X]]: info on each device
"""
device_map = yield self.store.get_devices_by_user(user_id)
ips = yield self.store.get_last_client_ip_by_device(
devices=((user_id, device_id) for device_id in device_map.keys())
)
devices = device_map.values()
for device in devices:
_update_device_from_client_ips(device, ips)
defer.returnValue(devices)
@defer.inlineCallbacks
def get_device(self, user_id, device_id):
""" Retrieve the given device
Args:
user_id (str):
device_id (str):
Returns:
defer.Deferred: dict[str, X]: info on the device
Raises:
errors.NotFoundError: if the device was not found
"""
try:
device = yield self.store.get_device(user_id, device_id)
except errors.StoreError:
raise errors.NotFoundError
ips = yield self.store.get_last_client_ip_by_device(
devices=((user_id, device_id),)
)
_update_device_from_client_ips(device, ips)
defer.returnValue(device)
@defer.inlineCallbacks
def delete_device(self, user_id, device_id):
""" Delete the given device
Args:
user_id (str):
device_id (str):
Returns:
defer.Deferred:
"""
try:
yield self.store.delete_device(user_id, device_id)
except errors.StoreError as e:
if e.code == 404:
# no match
pass
else:
raise
yield self.store.user_delete_access_tokens(
user_id, device_id=device_id,
delete_refresh_tokens=True,
)
yield self.store.delete_e2e_keys_by_device(
user_id=user_id, device_id=device_id
)
@defer.inlineCallbacks
def update_device(self, user_id, device_id, content):
""" Update the given device
Args:
user_id (str):
device_id (str):
content (dict): body of update request
Returns:
defer.Deferred:
"""
try:
yield self.store.update_device(
user_id,
device_id,
new_display_name=content.get("display_name")
)
except errors.StoreError as e:
if e.code == 404:
raise errors.NotFoundError()
else:
raise
def _update_device_from_client_ips(device, client_ips):
ip = client_ips.get((device["user_id"], device["device_id"]), {})
device.update({
"last_seen_ts": ip.get("last_seen"),
"last_seen_ip": ip.get("ip"),
})

@@ -0,0 +1,117 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.types import get_domain_from_id
from synapse.util.stringutils import random_string
logger = logging.getLogger(__name__)
class DeviceMessageHandler(object):
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): server
"""
self.store = hs.get_datastore()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id
self.federation = hs.get_replication_layer()
self.federation.register_edu_handler(
"m.direct_to_device", self.on_direct_to_device_edu
)
@defer.inlineCallbacks
def on_direct_to_device_edu(self, origin, content):
local_messages = {}
sender_user_id = content["sender"]
if origin != get_domain_from_id(sender_user_id):
logger.warn(
"Dropping device message from %r with spoofed sender %r",
origin, sender_user_id
)
message_type = content["type"]
message_id = content["message_id"]
for user_id, by_device in content["messages"].items():
messages_by_device = {
device_id: {
"content": message_content,
"type": message_type,
"sender": sender_user_id,
}
for device_id, message_content in by_device.items()
}
if messages_by_device:
local_messages[user_id] = messages_by_device
stream_id = yield self.store.add_messages_from_remote_to_device_inbox(
origin, message_id, local_messages
)
self.notifier.on_new_event(
"to_device_key", stream_id, users=local_messages.keys()
)
@defer.inlineCallbacks
def send_device_message(self, sender_user_id, message_type, messages):
local_messages = {}
remote_messages = {}
for user_id, by_device in messages.items():
if self.is_mine_id(user_id):
messages_by_device = {
device_id: {
"content": message_content,
"type": message_type,
"sender": sender_user_id,
}
for device_id, message_content in by_device.items()
}
if messages_by_device:
local_messages[user_id] = messages_by_device
else:
destination = get_domain_from_id(user_id)
remote_messages.setdefault(destination, {})[user_id] = by_device
message_id = random_string(16)
remote_edu_contents = {}
for destination, messages in remote_messages.items():
remote_edu_contents[destination] = {
"messages": messages,
"sender": sender_user_id,
"type": message_type,
"message_id": message_id,
}
stream_id = yield self.store.add_messages_to_device_inbox(
local_messages, remote_edu_contents
)
self.notifier.on_new_event(
"to_device_key", stream_id, users=local_messages.keys()
)
for destination in remote_messages.keys():
# Enqueue a new federation transaction to send the new
# device messages to each remote destination.
self.federation.send_device_messages(destination)
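The m.direct_to_device EDU content produced by send_device_message and consumed by on_direct_to_device_edu has the shape below; every identifier in the example is invented.

edu_content = {
    "sender": "@alice:example.org",    # dropped if it doesn't match origin
    "type": "m.example.payload",       # message type chosen by the sender
    "message_id": "nM2dee0chie6ohKa",  # random_string(16) in send_device_message
    "messages": {
        "@bob:remote.net": {           # recipient user...
            "BOBDEVICE": {"body": 1},  # ...and per-device message content
        },
    },
}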

@@ -19,7 +19,7 @@ from ._base import BaseHandler
from synapse.api.errors import SynapseError, Codes, CodeMessageException, AuthError
from synapse.api.constants import EventTypes
from synapse.types import RoomAlias, UserID
from synapse.types import RoomAlias, UserID, get_domain_from_id
import logging
import string
@@ -33,6 +33,7 @@ class DirectoryHandler(BaseHandler):
super(DirectoryHandler, self).__init__(hs)
self.state = hs.get_state_handler()
self.appservice_handler = hs.get_application_service_handler()
self.federation = hs.get_replication_layer()
self.federation.register_query_handler(
@@ -54,7 +55,8 @@ class DirectoryHandler(BaseHandler):
# TODO(erikj): Add transactions.
# TODO(erikj): Check if there is a current association.
if not servers:
servers = yield self.store.get_joined_hosts_for_room(room_id)
users = yield self.state.get_current_user_in_room(room_id)
servers = set(get_domain_from_id(u) for u in users)
if not servers:
raise SynapseError(400, "Failed to get server list")
@@ -192,7 +194,8 @@ class DirectoryHandler(BaseHandler):
Codes.NOT_FOUND
)
extra_servers = yield self.store.get_joined_hosts_for_room(room_id)
users = yield self.state.get_current_user_in_room(room_id)
extra_servers = set(get_domain_from_id(u) for u in users)
servers = set(extra_servers) | set(servers)
# If this server is in the list of servers, return it first.
@@ -281,17 +284,16 @@ class DirectoryHandler(BaseHandler):
)
if not result:
# Query AS to see if it exists
as_handler = self.hs.get_handlers().appservice_handler
as_handler = self.appservice_handler
result = yield as_handler.query_room_alias_exists(room_alias)
defer.returnValue(result)
@defer.inlineCallbacks
def can_modify_alias(self, alias, user_id=None):
# Any application service "interested" in an alias they are regexing on
# can modify the alias.
# Users can only modify the alias if ALL the interested services have
# non-exclusive locks on the alias (or there are no interested services)
services = yield self.store.get_app_services()
services = self.store.get_app_services()
interested_services = [
s for s in services if s.is_interested_in_alias(alias.to_string())
]
@@ -299,14 +301,12 @@ class DirectoryHandler(BaseHandler):
for service in interested_services:
if user_id == service.sender:
# this user IS the app service so they can do whatever they like
defer.returnValue(True)
return
return defer.succeed(True)
elif service.is_exclusive_alias(alias.to_string()):
# another service has an exclusive lock on this alias.
defer.returnValue(False)
return
return defer.succeed(False)
# either no interested services, or no service with an exclusive lock
defer.returnValue(True)
return defer.succeed(True)
@defer.inlineCallbacks
def _user_can_delete_alias(self, alias, user_id):
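Restated outside the Twisted machinery, the rule can_modify_alias implements is: the appservice that owns an alias may always modify it; an ordinary user may only modify it if no interested service holds an exclusive claim. A plain-Python sketch, assuming service objects expose the two predicates used above:

def user_can_modify(alias, user_id, services):
    interested = [s for s in services if s.is_interested_in_alias(alias)]
    for service in interested:
        if user_id == service.sender:
            return True   # the appservice manages its own aliases
        if service.is_exclusive_alias(alias):
            return False  # exclusively claimed by another service
    return True           # no interested services, or none exclusive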

@@ -0,0 +1,279 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ujson as json
import logging
from canonicaljson import encode_canonical_json
from twisted.internet import defer
from synapse.api.errors import SynapseError, CodeMessageException
from synapse.types import get_domain_from_id
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
from synapse.util.retryutils import get_retry_limiter, NotRetryingDestination
logger = logging.getLogger(__name__)
class E2eKeysHandler(object):
def __init__(self, hs):
self.store = hs.get_datastore()
self.federation = hs.get_replication_layer()
self.device_handler = hs.get_device_handler()
self.is_mine_id = hs.is_mine_id
self.clock = hs.get_clock()
# doesn't really work as part of the generic query API, because the
# query request requires an object POST, but we abuse the
# "query handler" interface.
self.federation.register_query_handler(
"client_keys", self.on_federation_query_client_keys
)
@defer.inlineCallbacks
def query_devices(self, query_body, timeout):
""" Handle a device key query from a client
{
"device_keys": {
"<user_id>": ["<device_id>"]
}
}
->
{
"device_keys": {
"<user_id>": {
"<device_id>": {
...
}
}
}
}
"""
device_keys_query = query_body.get("device_keys", {})
# separate users by domain.
# make a map from domain to user_id to device_ids
local_query = {}
remote_queries = {}
for user_id, device_ids in device_keys_query.items():
if self.is_mine_id(user_id):
local_query[user_id] = device_ids
else:
domain = get_domain_from_id(user_id)
remote_queries.setdefault(domain, {})[user_id] = device_ids
# do the queries
failures = {}
results = {}
if local_query:
local_result = yield self.query_local_devices(local_query)
for user_id, keys in local_result.items():
if user_id in local_query:
results[user_id] = keys
@defer.inlineCallbacks
def do_remote_query(destination):
destination_query = remote_queries[destination]
try:
limiter = yield get_retry_limiter(
destination, self.clock, self.store
)
with limiter:
remote_result = yield self.federation.query_client_keys(
destination,
{"device_keys": destination_query},
timeout=timeout
)
for user_id, keys in remote_result["device_keys"].items():
if user_id in destination_query:
results[user_id] = keys
except CodeMessageException as e:
failures[destination] = {
"status": e.code, "message": e.message
}
except NotRetryingDestination as e:
failures[destination] = {
"status": 503, "message": "Not ready for retry",
}
yield preserve_context_over_deferred(defer.gatherResults([
preserve_fn(do_remote_query)(destination)
for destination in remote_queries
]))
defer.returnValue({
"device_keys": results, "failures": failures,
})
@defer.inlineCallbacks
def query_local_devices(self, query):
"""Get E2E device keys for local users
Args:
query (dict[string, list[string]|None): map from user_id to a list
of devices to query (None for all devices)
Returns:
defer.Deferred: (resolves to dict[string, dict[string, dict]]):
map from user_id -> device_id -> device details
"""
local_query = []
result_dict = {}
for user_id, device_ids in query.items():
if not self.is_mine_id(user_id):
logger.warning("Request for keys for non-local user %s",
user_id)
raise SynapseError(400, "Not a user here")
if not device_ids:
local_query.append((user_id, None))
else:
for device_id in device_ids:
local_query.append((user_id, device_id))
# make sure that each queried user appears in the result dict
result_dict[user_id] = {}
results = yield self.store.get_e2e_device_keys(local_query)
# Build the result structure, un-jsonify the results, and add the
# "unsigned" section
for user_id, device_keys in results.items():
for device_id, device_info in device_keys.items():
r = json.loads(device_info["key_json"])
r["unsigned"] = {}
display_name = device_info["device_display_name"]
if display_name is not None:
r["unsigned"]["device_display_name"] = display_name
result_dict[user_id][device_id] = r
defer.returnValue(result_dict)
@defer.inlineCallbacks
def on_federation_query_client_keys(self, query_body):
""" Handle a device key query from a federated server
"""
device_keys_query = query_body.get("device_keys", {})
res = yield self.query_local_devices(device_keys_query)
defer.returnValue({"device_keys": res})
@defer.inlineCallbacks
def claim_one_time_keys(self, query, timeout):
local_query = []
remote_queries = {}
for user_id, device_keys in query.get("one_time_keys", {}).items():
if self.is_mine_id(user_id):
for device_id, algorithm in device_keys.items():
local_query.append((user_id, device_id, algorithm))
else:
domain = get_domain_from_id(user_id)
remote_queries.setdefault(domain, {})[user_id] = device_keys
results = yield self.store.claim_e2e_one_time_keys(local_query)
json_result = {}
failures = {}
for user_id, device_keys in results.items():
for device_id, keys in device_keys.items():
for key_id, json_bytes in keys.items():
json_result.setdefault(user_id, {})[device_id] = {
key_id: json.loads(json_bytes)
}
@defer.inlineCallbacks
def claim_client_keys(destination):
device_keys = remote_queries[destination]
try:
limiter = yield get_retry_limiter(
destination, self.clock, self.store
)
with limiter:
remote_result = yield self.federation.claim_client_keys(
destination,
{"one_time_keys": device_keys},
timeout=timeout
)
for user_id, keys in remote_result["one_time_keys"].items():
if user_id in device_keys:
json_result[user_id] = keys
except CodeMessageException as e:
failures[destination] = {
"status": e.code, "message": e.message
}
except NotRetryingDestination as e:
failures[destination] = {
"status": 503, "message": "Not ready for retry",
}
yield preserve_context_over_deferred(defer.gatherResults([
preserve_fn(claim_client_keys)(destination)
for destination in remote_queries
]))
defer.returnValue({
"one_time_keys": json_result,
"failures": failures
})
@defer.inlineCallbacks
def upload_keys_for_user(self, user_id, device_id, keys):
time_now = self.clock.time_msec()
# TODO: Validate the JSON to make sure it has the right keys.
device_keys = keys.get("device_keys", None)
if device_keys:
logger.info(
"Updating device_keys for device %r for user %s at %d",
device_id, user_id, time_now
)
# TODO: Sign the JSON with the server key
yield self.store.set_e2e_device_keys(
user_id, device_id, time_now,
encode_canonical_json(device_keys)
)
one_time_keys = keys.get("one_time_keys", None)
if one_time_keys:
logger.info(
"Adding %d one_time_keys for device %r for user %r at %d",
len(one_time_keys), device_id, user_id, time_now
)
key_list = []
for key_id, key_json in one_time_keys.items():
algorithm, key_id = key_id.split(":")
key_list.append((
algorithm, key_id, encode_canonical_json(key_json)
))
yield self.store.add_e2e_one_time_keys(
user_id, device_id, time_now, key_list
)
# the device should have been registered already, but it may have been
# deleted due to a race with a DELETE request. Or we may be using an
# old access_token without an associated device_id. Either way, we
# need to double-check the device is registered to avoid ending up with
# keys without a corresponding device.
self.device_handler.check_device_registered(user_id, device_id)
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
defer.returnValue({"one_time_key_counts": result})

@@ -47,6 +47,7 @@ class EventStreamHandler(BaseHandler):
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.state = hs.get_state_handler()
@defer.inlineCallbacks
@log_function
@@ -58,7 +59,7 @@ class EventStreamHandler(BaseHandler):
If `only_keys` is not None, events from keys will be sent down.
"""
auth_user = UserID.from_string(auth_user_id)
presence_handler = self.hs.get_handlers().presence_handler
presence_handler = self.hs.get_presence_handler()
context = yield presence_handler.user_syncing(
auth_user_id, affect_presence=affect_presence,
@@ -90,7 +91,7 @@ class EventStreamHandler(BaseHandler):
# Send down presence.
if event.state_key == auth_user_id:
# Send down presence for everyone in the room.
users = yield self.store.get_users_in_room(event.room_id)
users = yield self.state.get_current_user_in_room(event.room_id)
states = yield presence_handler.get_states(
users,
as_event=True,

@@ -26,14 +26,17 @@ from synapse.api.errors import (
from synapse.api.constants import EventTypes, Membership, RejectedReason
from synapse.events.validator import EventValidator
from synapse.util import unwrapFirstError
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.logcontext import (
PreserveLoggingContext, preserve_fn, preserve_context_over_deferred
)
from synapse.util.metrics import measure_func
from synapse.util.logutils import log_function
from synapse.util.async import run_on_reactor
from synapse.util.frozenutils import unfreeze
from synapse.crypto.event_signing import (
compute_event_signature, add_hashes_and_signatures,
)
from synapse.types import UserID
from synapse.types import UserID, get_domain_from_id
from synapse.events.utils import prune_event
@@ -66,10 +69,6 @@ class FederationHandler(BaseHandler):
self.hs = hs
self.distributor.observe("user_joined_room", self.user_joined_room)
self.waiting_for_join_list = {}
self.store = hs.get_datastore()
self.replication_layer = hs.get_replication_layer()
self.state_handler = hs.get_state_handler()
@@ -102,6 +101,9 @@ class FederationHandler(BaseHandler):
def on_receive_pdu(self, origin, pdu, state=None, auth_chain=None):
""" Called by the ReplicationLayer when we have a new pdu. We need to
do auth checks and put it through the StateHandler.
auth_chain and state are None if we already have the necessary state
and prev_events in the db
"""
event = pdu
@@ -119,16 +121,25 @@ class FederationHandler(BaseHandler):
# FIXME (erikj): Awful hack to make the case where we are not currently
# in the room work
is_in_room = yield self.auth.check_host_in_room(
event.room_id,
self.server_name
)
if not is_in_room and not event.internal_metadata.is_outlier():
logger.debug("Got event for room we're not in.")
# If state and auth_chain are None, then we don't need to do this check
# as we already know we have enough state in the DB to handle this
# event.
if state and auth_chain and not event.internal_metadata.is_outlier():
is_in_room = yield self.auth.check_host_in_room(
event.room_id,
self.server_name
)
else:
is_in_room = True
if not is_in_room:
logger.info(
"Got event for room we're not in: %r %r",
event.room_id, event.event_id
)
try:
event_stream_id, max_stream_id = yield self._persist_auth_tree(
auth_chain, state, event
origin, auth_chain, state, event
)
except AuthError as e:
raise FederationError(
@@ -219,17 +230,28 @@ class FederationHandler(BaseHandler):
if event.type == EventTypes.Member:
if event.membership == Membership.JOIN:
prev_state = context.current_state.get((event.type, event.state_key))
if not prev_state or prev_state.membership != Membership.JOIN:
# Only fire user_joined_room if the user has actually
# joined the room. Don't bother if the user is just
# changing their profile info.
# Only fire user_joined_room if the user has actually
# joined the room. Don't bother if the user is just
# changing their profile info.
newly_joined = True
prev_state_id = context.prev_state_ids.get(
(event.type, event.state_key)
)
if prev_state_id:
prev_state = yield self.store.get_event(
prev_state_id, allow_none=True,
)
if prev_state and prev_state.membership == Membership.JOIN:
newly_joined = False
if newly_joined:
user = UserID.from_string(event.state_key)
yield user_joined_room(self.distributor, user, event.room_id)
@measure_func("_filter_events_for_server")
@defer.inlineCallbacks
def _filter_events_for_server(self, server_name, room_id, events):
event_to_state = yield self.store.get_state_for_events(
event_to_state_ids = yield self.store.get_state_ids_for_events(
frozenset(e.event_id for e in events),
types=(
(EventTypes.RoomHistoryVisibility, ""),
@@ -237,6 +259,30 @@ class FederationHandler(BaseHandler):
)
)
# We only want to pull out member events that correspond to the
# server's domain.
def check_match(id):
try:
return server_name == get_domain_from_id(id)
except:
return False
event_map = yield self.store.get_events([
e_id for key_to_eid in event_to_state_ids.values()
for key, e_id in key_to_eid
if key[0] != EventTypes.Member or check_match(key[1])
])
event_to_state = {
e_id: {
key: event_map[inner_e_id]
for key, inner_e_id in key_to_eid.items()
if inner_e_id in event_map
}
for e_id, key_to_eid in event_to_state_ids.items()
}
def redact_disallowed(event, state):
if not state:
return event
@@ -253,7 +299,7 @@ class FederationHandler(BaseHandler):
if ev.type != EventTypes.Member:
continue
try:
domain = UserID.from_string(ev.state_key).domain
domain = get_domain_from_id(ev.state_key)
except:
continue
@@ -278,15 +324,16 @@ class FederationHandler(BaseHandler):
@log_function
@defer.inlineCallbacks
def backfill(self, dest, room_id, limit, extremities=[]):
def backfill(self, dest, room_id, limit, extremities):
""" Trigger a backfill request to `dest` for the given `room_id`
This will attempt to get more events from the remote. This may succeed
and still return no events if the other side has no new events to offer.
"""
if dest == self.server_name:
raise SynapseError(400, "Can't backfill from self.")
if not extremities:
extremities = yield self.store.get_oldest_events_in_room(room_id)
events = yield self.replication_layer.backfill(
dest,
room_id,
@@ -294,6 +341,16 @@ class FederationHandler(BaseHandler):
extremities=extremities,
)
# Don't bother processing events we already have.
seen_events = yield self.store.have_events_in_timeline(
set(e.event_id for e in events)
)
events = [e for e in events if e.event_id not in seen_events]
if not events:
defer.returnValue([])
event_map = {e.event_id: e for e in events}
event_ids = set(e.event_id for e in events)
@@ -325,40 +382,73 @@ class FederationHandler(BaseHandler):
state_events.update({s.event_id: s for s in state})
events_to_state[e_id] = state
required_auth = set(
a_id
for event in events + state_events.values() + auth_events.values()
for a_id, _ in event.auth_events
)
auth_events.update({
e_id: event_map[e_id] for e_id in required_auth if e_id in event_map
})
missing_auth = required_auth - set(auth_events)
failed_to_fetch = set()
# Try and fetch any missing auth events from both DB and remote servers.
# We repeatedly do this until we stop finding new auth events.
while missing_auth - failed_to_fetch:
logger.info("Missing auth for backfill: %r", missing_auth)
ret_events = yield self.store.get_events(missing_auth - failed_to_fetch)
auth_events.update(ret_events)
required_auth.update(
a_id for event in ret_events.values() for a_id, _ in event.auth_events
)
missing_auth = required_auth - set(auth_events)
if missing_auth - failed_to_fetch:
logger.info(
"Fetching missing auth for backfill: %r",
missing_auth - failed_to_fetch
)
results = yield preserve_context_over_deferred(defer.gatherResults(
[
preserve_fn(self.replication_layer.get_pdu)(
[dest],
event_id,
outlier=True,
timeout=10000,
)
for event_id in missing_auth - failed_to_fetch
],
consumeErrors=True
)).addErrback(unwrapFirstError)
auth_events.update({a.event_id: a for a in results if a})
required_auth.update(
a_id
for event in results if event
for a_id, _ in event.auth_events
)
missing_auth = required_auth - set(auth_events)
failed_to_fetch = missing_auth - set(auth_events)
seen_events = yield self.store.have_events(
set(auth_events.keys()) | set(state_events.keys())
)
all_events = events + state_events.values() + auth_events.values()
required_auth = set(
a_id for event in all_events for a_id, _ in event.auth_events
)
missing_auth = required_auth - set(auth_events)
results = yield defer.gatherResults(
[
self.replication_layer.get_pdu(
[dest],
event_id,
outlier=True,
timeout=10000,
)
for event_id in missing_auth
],
consumeErrors=True
).addErrback(unwrapFirstError)
auth_events.update({a.event_id: a for a in results})
ev_infos = []
for a in auth_events.values():
if a.event_id in seen_events:
continue
a.internal_metadata.outlier = True
ev_infos.append({
"event": a,
"auth_events": {
(auth_events[a_id].type, auth_events[a_id].state_key):
auth_events[a_id]
for a_id, _ in a.auth_events
if a_id in auth_events
}
})
@@ -370,23 +460,27 @@ class FederationHandler(BaseHandler):
(auth_events[a_id].type, auth_events[a_id].state_key):
auth_events[a_id]
for a_id, _ in event_map[e_id].auth_events
if a_id in auth_events
}
})
yield self._handle_new_events(
dest, ev_infos,
backfilled=True,
)
events.sort(key=lambda e: e.depth)
for event in events:
if event in events_to_state:
continue
ev_infos.append({
"event": event,
})
yield self._handle_new_events(
dest, ev_infos,
backfilled=True,
)
# We store these one at a time since each event depends on the
# previous to work out the state.
# TODO: We can probably do something more clever here.
yield self._handle_new_event(
dest, event, backfilled=True,
)
defer.returnValue(events)
@@ -410,6 +504,10 @@ class FederationHandler(BaseHandler):
)
max_depth = sorted_extremeties_tuple[0][1]
# We don't want to specify too many extremities as it causes the backfill
# request URI to be too long.
extremities = dict(sorted_extremeties_tuple[:5])
if current_depth > max_depth:
logger.debug(
"Not backfilling as we don't need to. %d < %d",
@@ -435,7 +533,7 @@ class FederationHandler(BaseHandler):
joined_domains = {}
for u, d in joined_users:
try:
dom = UserID.from_string(u).domain
dom = get_domain_from_id(u)
old_d = joined_domains.get(dom)
if old_d:
joined_domains[dom] = min(d, old_d)
@@ -458,11 +556,15 @@ class FederationHandler(BaseHandler):
# TODO: Should we try multiple of these at a time?
for dom in domains:
try:
events = yield self.backfill(
yield self.backfill(
dom, room_id,
limit=100,
extremities=[e for e in extremities.keys()]
)
# If this succeeded then we probably already have the
# appropriate stuff.
# TODO: We can probably do something more intelligent here.
defer.returnValue(True)
except SynapseError as e:
logger.info(
"Failed to backfill from %s because %s",
@@ -488,8 +590,6 @@ class FederationHandler(BaseHandler):
)
continue
if events:
defer.returnValue(True)
defer.returnValue(False)
success = yield try_backfill(likely_domains)
@@ -504,12 +604,24 @@ class FederationHandler(BaseHandler):
event_ids = list(extremities.keys())
states = yield defer.gatherResults([
self.state_handler.resolve_state_groups(room_id, [e])
states = yield preserve_context_over_deferred(defer.gatherResults([
preserve_fn(self.state_handler.resolve_state_groups)(room_id, [e])
for e in event_ids
])
]))
states = dict(zip(event_ids, [s[1] for s in states]))
state_map = yield self.store.get_events(
[e_id for ids in states.values() for e_id in ids],
get_prev_content=False
)
states = {
key: {
k: state_map[e_id]
for k, e_id in state_dict.items()
if e_id in state_map
} for key, state_dict in states.items()
}
for e_id, _ in sorted_extremeties_tuple:
likely_domains = get_domains_from_state(states[e_id])
@@ -619,7 +731,7 @@ class FederationHandler(BaseHandler):
pass
event_stream_id, max_stream_id = yield self._persist_auth_tree(
auth_chain, state, event
origin, auth_chain, state, event
)
with PreserveLoggingContext():
@@ -661,11 +773,18 @@ class FederationHandler(BaseHandler):
"state_key": user_id,
})
event, context = yield self._create_new_client_event(
builder=builder,
)
try:
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
builder=builder,
)
except AuthError as e:
logger.warn("Failed to create join %r because %s", event, e)
raise e
self.auth.check(event, auth_events=context.current_state)
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_join_request`
yield self.auth.check_from_context(event, context, do_sig_check=False)
defer.returnValue(event)
@@ -713,19 +832,12 @@ class FederationHandler(BaseHandler):
new_pdu = event
destinations = set()
users_in_room = yield self.store.get_joined_users_from_context(event, context)
for k, s in context.current_state.items():
try:
if k[0] == EventTypes.Member:
if s.content["membership"] == Membership.JOIN:
destinations.add(
UserID.from_string(s.state_key).domain
)
except:
logger.warn(
"Failed to get destination from event %s", s.event_id
)
destinations = set(
get_domain_from_id(user_id) for user_id in users_in_room
if not self.hs.is_mine_id(user_id)
)
destinations.discard(origin)
@@ -737,13 +849,15 @@ class FederationHandler(BaseHandler):
self.replication_layer.send_pdu(new_pdu, destinations)
state_ids = [e.event_id for e in context.current_state.values()]
state_ids = context.prev_state_ids.values()
auth_chain = yield self.store.get_auth_chain(set(
[event.event_id] + state_ids
))
state = yield self.store.get_events(context.prev_state_ids.values())
defer.returnValue({
"state": context.current_state.values(),
"state": state.values(),
"auth_chain": auth_chain,
})
@@ -891,11 +1005,18 @@ class FederationHandler(BaseHandler):
"state_key": user_id,
})
event, context = yield self._create_new_client_event(
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
builder=builder,
)
self.auth.check(event, auth_events=context.current_state)
try:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_leave_request`
yield self.auth.check_from_context(event, context, do_sig_check=False)
except AuthError as e:
logger.warn("Failed to create new leave %r because %s", event, e)
raise e
defer.returnValue(event)
@@ -936,20 +1057,12 @@ class FederationHandler(BaseHandler):
new_pdu = event
destinations = set()
for k, s in context.current_state.items():
try:
if k[0] == EventTypes.Member:
if s.content["membership"] == Membership.LEAVE:
destinations.add(
UserID.from_string(s.state_key).domain
)
except:
logger.warn(
"Failed to get destination from event %s", s.event_id
)
users_in_room = yield self.store.get_joined_users_from_context(event, context)
destinations = set(
get_domain_from_id(user_id) for user_id in users_in_room
if not self.hs.is_mine_id(user_id)
)
destinations.discard(origin)
logger.debug(
@@ -963,14 +1076,11 @@ class FederationHandler(BaseHandler):
defer.returnValue(None)
@defer.inlineCallbacks
def get_state_for_pdu(self, origin, room_id, event_id, do_auth=True):
def get_state_for_pdu(self, room_id, event_id):
"""Returns the state at the event. i.e. not including said event.
"""
yield run_on_reactor()
if do_auth:
in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
state_groups = yield self.store.get_state_groups(
room_id, [event_id]
)
@@ -994,18 +1104,49 @@ class FederationHandler(BaseHandler):
res = results.values()
for event in res:
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
# We sign these again because there was a bug where we
# incorrectly signed things the first time round
if self.hs.is_mine_id(event.event_id):
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
)
)
)
defer.returnValue(res)
else:
defer.returnValue([])
@defer.inlineCallbacks
def get_state_ids_for_pdu(self, room_id, event_id):
"""Returns the state at the event. i.e. not including said event.
"""
yield run_on_reactor()
state_groups = yield self.store.get_state_groups_ids(
room_id, [event_id]
)
if state_groups:
_, state = state_groups.items().pop()
results = state
event = yield self.store.get_event(event_id)
if event and event.is_state():
# Get previous state
if "replaces_state" in event.unsigned:
prev_id = event.unsigned["replaces_state"]
if prev_id != event.event_id:
results[(event.type, event.state_key)] = prev_id
else:
del results[(event.type, event.state_key)]
defer.returnValue(results.values())
else:
defer.returnValue([])
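get_state_ids_for_pdu returns the state *before* the event: if the event is itself a state event, it is swapped out for the event it replaced via the replaces_state pointer in its unsigned data. The core of that adjustment as a standalone sketch (the helper name is illustrative):

# Sketch: step from the state *at* a state event back to the state
# *before* it. results maps (type, state_key) -> event_id.
def state_ids_before(results, event):
    if "replaces_state" in event.unsigned:
        prev_id = event.unsigned["replaces_state"]
        if prev_id != event.event_id:
            # restore the entry this event replaced
            results[(event.type, event.state_key)] = prev_id
        else:
            # degenerate self-reference: the key had no prior value
            del results[(event.type, event.state_key)]
    return results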
@defer.inlineCallbacks
@log_function
def on_backfill_request(self, origin, room_id, pdu_list, limit):
@@ -1038,16 +1179,17 @@ class FederationHandler(BaseHandler):
)
if event:
# FIXME: This is a temporary work around where we occasionally
# return events slightly differently than when they were
# originally signed
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
if self.hs.is_mine_id(event.event_id):
# FIXME: This is a temporary work around where we occasionally
# return events slightly differently than when they were
# originally signed
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
)
)
)
if do_auth:
in_room = yield self.auth.check_host_in_room(
@@ -1057,6 +1199,12 @@ class FederationHandler(BaseHandler):
if not in_room:
raise AuthError(403, "Host not in room.")
events = yield self._filter_events_for_server(
origin, event.room_id, [event]
)
event = events[0]
defer.returnValue(event)
else:
defer.returnValue(None)
@@ -1065,18 +1213,10 @@ class FederationHandler(BaseHandler):
def get_min_depth_for_context(self, context):
return self.store.get_min_depth(context)
@log_function
def user_joined_room(self, user, room_id):
waiters = self.waiting_for_join_list.get(
(user.to_string(), room_id),
[]
)
while waiters:
waiters.pop().callback(None)
@defer.inlineCallbacks
@log_function
def _handle_new_event(self, origin, event, state=None, auth_events=None):
def _handle_new_event(self, origin, event, state=None, auth_events=None,
backfilled=False):
context = yield self._prep_event(
origin, event,
state=state,
@@ -1086,21 +1226,34 @@ class FederationHandler(BaseHandler):
if not event.internal_metadata.is_outlier():
action_generator = ActionGenerator(self.hs)
yield action_generator.handle_push_actions_for_event(
event, context, self
event, context
)
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
backfilled=backfilled,
)
if not backfilled:
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
preserve_fn(self.hs.get_pusherpool().on_new_notifications)(
event_stream_id, max_stream_id
)
defer.returnValue((context, event_stream_id, max_stream_id))
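The pusher poke above is deliberately not yielded on. preserve_fn (from synapse.util.logcontext) wraps a callable so the caller's logging context survives into the background Deferred; the general pattern, as a short sketch:

# Sketch: kick off background work without waiting for it, keeping
# the current logging context attached to the new Deferred.
def fire_and_forget(f, *args, **kwargs):
    return preserve_fn(f)(*args, **kwargs)   # caller does not yield this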
@defer.inlineCallbacks
def _handle_new_events(self, origin, event_infos, backfilled=False):
contexts = yield defer.gatherResults(
"""Creates the appropriate contexts and persists events. The events
should not depend on one another, e.g. this should be used to persist
a bunch of outliers, but not a chunk of individual events that depend
on each other for state calculations.
"""
contexts = yield preserve_context_over_deferred(defer.gatherResults(
[
self._prep_event(
preserve_fn(self._prep_event)(
origin,
ev_info["event"],
state=ev_info.get("state"),
@@ -1108,7 +1261,7 @@ class FederationHandler(BaseHandler):
)
for ev_info in event_infos
]
)
))
yield self.store.persist_events(
[
@@ -1119,11 +1272,19 @@ class FederationHandler(BaseHandler):
)
@defer.inlineCallbacks
def _persist_auth_tree(self, auth_events, state, event):
def _persist_auth_tree(self, origin, auth_events, state, event):
"""Checks the auth chain is valid (and passes auth checks) for the
state and event. Then persists the auth chain and state atomically.
Persists the event seperately.
Will attempt to fetch missing auth events.
Args:
origin (str): Where the events came from
auth_events (list)
state (list)
event (Event)
Returns:
2-tuple of (event_stream_id, max_stream_id) from the persist_event
call for `event`
@@ -1136,7 +1297,7 @@ class FederationHandler(BaseHandler):
event_map = {
e.event_id: e
for e in auth_events
for e in itertools.chain(auth_events, state, [event])
}
create_event = None
@@ -1145,10 +1306,29 @@ class FederationHandler(BaseHandler):
create_event = e
break
missing_auth_events = set()
for e in itertools.chain(auth_events, state, [event]):
for e_id, _ in e.auth_events:
if e_id not in event_map:
missing_auth_events.add(e_id)
for e_id in missing_auth_events:
m_ev = yield self.replication_layer.get_pdu(
[origin],
e_id,
outlier=True,
timeout=10000,
)
if m_ev and m_ev.event_id == e_id:
event_map[e_id] = m_ev
else:
logger.info("Failed to find auth event %r", e_id)
for e in itertools.chain(auth_events, state, [event]):
auth_for_e = {
(event_map[e_id].type, event_map[e_id].state_key): event_map[e_id]
for e_id, _ in e.auth_events
if e_id in event_map
}
if create_event:
auth_for_e[(EventTypes.Create, "")] = create_event
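_persist_auth_tree now scans the auth references of everything it is about to persist, fetches any it is missing from the origin, and only then builds each event's auth map. The fetch-and-fill logic from the interleaved hunk above, condensed into a sketch (the helper name is illustrative; itertools is already imported by the module):

# Sketch: find auth references the batch is missing, then fetch them
# from the origin as outliers before running auth checks.
@defer.inlineCallbacks
def _fetch_missing_auth(self, origin, auth_events, state, event, event_map):
    missing = set(
        e_id
        for e in itertools.chain(auth_events, state, [event])
        for e_id, _ in e.auth_events
        if e_id not in event_map
    )
    for e_id in missing:
        m_ev = yield self.replication_layer.get_pdu(
            [origin], e_id, outlier=True, timeout=10000,
        )
        if m_ev and m_ev.event_id == e_id:
            event_map[e_id] = m_ev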
@@ -1197,7 +1377,13 @@ class FederationHandler(BaseHandler):
)
if not auth_events:
auth_events = context.current_state
auth_events_ids = yield self.auth.compute_auth_events(
event, context.prev_state_ids, for_verification=True,
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
(e.type, e.state_key): e for e in auth_events.values()
}
# This is a hack to fix some old rooms where the initial join event
# didn't reference the create event in its auth events.
@@ -1223,8 +1409,7 @@ class FederationHandler(BaseHandler):
context.rejected = RejectedReason.AUTH_ERROR
if event.type == EventTypes.GuestAccess:
full_context = yield self.store.get_current_state(room_id=event.room_id)
yield self.maybe_kick_guest_users(event, full_context)
yield self.maybe_kick_guest_users(event)
defer.returnValue(context)
@@ -1292,6 +1477,11 @@ class FederationHandler(BaseHandler):
current_state = set(e.event_id for e in auth_events.values())
event_auth_events = set(e_id for e_id, _ in event.auth_events)
if event.is_state():
event_key = (event.type, event.state_key)
else:
event_key = None
if event_auth_events - current_state:
have_events = yield self.store.have_events(
event_auth_events - current_state
@@ -1365,9 +1555,9 @@ class FederationHandler(BaseHandler):
# Do auth conflict res.
logger.info("Different auth: %s", different_auth)
different_events = yield defer.gatherResults(
different_events = yield preserve_context_over_deferred(defer.gatherResults(
[
self.store.get_event(
preserve_fn(self.store.get_event)(
d,
allow_none=True,
allow_rejected=False,
@@ -1376,13 +1566,13 @@ class FederationHandler(BaseHandler):
if d in have_events and not have_events[d]
],
consumeErrors=True
).addErrback(unwrapFirstError)
)).addErrback(unwrapFirstError)
if different_events:
local_view = dict(auth_events)
remote_view = dict(auth_events)
remote_view.update({
(d.type, d.state_key): d for d in different_events
(d.type, d.state_key): d for d in different_events if d
})
new_state, prev_state = self.state_handler.resolve_events(
@@ -1395,8 +1585,16 @@ class FederationHandler(BaseHandler):
current_state = set(e.event_id for e in auth_events.values())
different_auth = event_auth_events - current_state
context.current_state.update(auth_events)
context.state_group = None
context.current_state_ids = dict(context.current_state_ids)
context.current_state_ids.update({
k: a.event_id for k, a in auth_events.items()
if k != event_key
})
context.prev_state_ids = dict(context.prev_state_ids)
context.prev_state_ids.update({
k: a.event_id for k, a in auth_events.items()
})
context.state_group = self.store.get_next_state_group()
if different_auth and not event.internal_metadata.is_outlier():
logger.info("Different auth after resolution: %s", different_auth)
@@ -1417,8 +1615,8 @@ class FederationHandler(BaseHandler):
if do_resolution:
# 1. Get what we think is the auth chain.
auth_ids = self.auth.compute_auth_events(
event, context.current_state
auth_ids = yield self.auth.compute_auth_events(
event, context.prev_state_ids
)
local_auth_chain = yield self.store.get_auth_chain(auth_ids)
@@ -1474,13 +1672,22 @@ class FederationHandler(BaseHandler):
# 4. Look at rejects and their proofs.
# TODO.
context.current_state.update(auth_events)
context.state_group = None
context.current_state_ids = dict(context.current_state_ids)
context.current_state_ids.update({
k: a.event_id for k, a in auth_events.items()
if k != event_key
})
context.prev_state_ids = dict(context.prev_state_ids)
context.prev_state_ids.update({
k: a.event_id for k, a in auth_events.items()
})
context.state_group = self.store.get_next_state_group()
try:
self.auth.check(event, auth_events=auth_events)
except AuthError:
raise
except AuthError as e:
logger.warn("Failed auth resolution for %r because %s", event, e)
raise e
@defer.inlineCallbacks
def construct_auth_difference(self, local_auth, remote_auth):
@@ -1650,14 +1857,22 @@ class FederationHandler(BaseHandler):
if (yield self.auth.check_host_in_room(room_id, self.hs.hostname)):
builder = self.event_builder_factory.new(event_dict)
EventValidator().validate_new(builder)
event, context = yield self._create_new_client_event(builder=builder)
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
builder=builder
)
event, context = yield self.add_display_name_to_third_party_invite(
event_dict, event, context
)
self.auth.check(event, context.current_state)
yield self._check_signature(event, auth_events=context.current_state)
try:
yield self.auth.check_from_context(event, context)
except AuthError as e:
logger.warn("Denying new third party invite %r because %s", event, e)
raise e
yield self._check_signature(event, context)
member_handler = self.hs.get_handlers().room_member_handler
yield member_handler.send_membership_event(None, event, context)
else:
@@ -1673,7 +1888,8 @@ class FederationHandler(BaseHandler):
def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
builder = self.event_builder_factory.new(event_dict)
event, context = yield self._create_new_client_event(
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
builder=builder,
)
@@ -1681,8 +1897,12 @@ class FederationHandler(BaseHandler):
event_dict, event, context
)
self.auth.check(event, auth_events=context.current_state)
yield self._check_signature(event, auth_events=context.current_state)
try:
self.auth.check_from_context(event, context)
except AuthError as e:
logger.warn("Denying third party invite %r because %s", event, e)
raise e
yield self._check_signature(event, context)
returned_invite = yield self.send_invite(origin, event)
# TODO: Make sure the signatures actually are correct.
@@ -1696,29 +1916,38 @@ class FederationHandler(BaseHandler):
EventTypes.ThirdPartyInvite,
event.content["third_party_invite"]["signed"]["token"]
)
original_invite = context.current_state.get(key)
if not original_invite:
logger.info(
"Could not find invite event for third_party_invite - "
"discarding: %s" % (event_dict,)
original_invite = None
original_invite_id = context.prev_state_ids.get(key)
if original_invite_id:
original_invite = yield self.store.get_event(
original_invite_id, allow_none=True
)
return
if original_invite:
display_name = original_invite.content["display_name"]
event_dict["content"]["third_party_invite"]["display_name"] = display_name
else:
logger.info(
"Could not find invite event for third_party_invite: %r",
event_dict
)
# We don't discard here as this is not the appropriate place to do
# auth checks. If we need the invite and don't have it then the
# auth check code will explode appropriately.
display_name = original_invite.content["display_name"]
event_dict["content"]["third_party_invite"]["display_name"] = display_name
builder = self.event_builder_factory.new(event_dict)
EventValidator().validate_new(builder)
event, context = yield self._create_new_client_event(builder=builder)
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(builder=builder)
defer.returnValue((event, context))
@defer.inlineCallbacks
def _check_signature(self, event, auth_events):
def _check_signature(self, event, context):
"""
Checks that the signature in the event is consistent with its invite.
Args:
event (Event): The m.room.member event to check
auth_events (dict<(event type, state_key), event>):
context (EventContext):
Raises:
AuthError: if signature didn't match any keys, or key has been
@@ -1729,10 +1958,14 @@ class FederationHandler(BaseHandler):
signed = event.content["third_party_invite"]["signed"]
token = signed["token"]
invite_event = auth_events.get(
invite_event_id = context.prev_state_ids.get(
(EventTypes.ThirdPartyInvite, token,)
)
invite_event = None
if invite_event_id:
invite_event = yield self.store.get_event(invite_event_id, allow_none=True)
if not invite_event:
raise AuthError(403, "Could not find invite")

View File

@@ -21,7 +21,7 @@ from synapse.api.errors import (
)
from ._base import BaseHandler
from synapse.util.async import run_on_reactor
from synapse.api.errors import SynapseError
from synapse.api.errors import SynapseError, Codes
import json
import logging
@@ -41,6 +41,20 @@ class IdentityHandler(BaseHandler):
hs.config.use_insecure_ssl_client_just_for_testing_do_not_use
)
def _should_trust_id_server(self, id_server):
if id_server not in self.trusted_id_servers:
if self.trust_any_id_server_just_for_testing_do_not_use:
logger.warn(
"Trusting untrustworthy ID server %r even though it isn't"
" in the trusted id list for testing because"
" 'use_insecure_ssl_client_just_for_testing_do_not_use'"
" is set in the config",
id_server,
)
else:
return False
return True
@defer.inlineCallbacks
def threepid_from_creds(self, creds):
yield run_on_reactor()
@@ -59,19 +73,12 @@ class IdentityHandler(BaseHandler):
else:
raise SynapseError(400, "No client_secret in creds")
if id_server not in self.trusted_id_servers:
if self.trust_any_id_server_just_for_testing_do_not_use:
logger.warn(
"Trusting untrustworthy ID server %r even though it isn't"
" in the trusted id list for testing because"
" 'use_insecure_ssl_client_just_for_testing_do_not_use'"
" is set in the config",
id_server,
)
else:
logger.warn('%s is not a trusted ID server: rejecting 3pid ' +
'credentials', id_server)
defer.returnValue(None)
if not self._should_trust_id_server(id_server):
logger.warn(
'%s is not a trusted ID server: rejecting 3pid ' +
'credentials', id_server
)
defer.returnValue(None)
data = {}
try:
@@ -129,6 +136,12 @@ class IdentityHandler(BaseHandler):
def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs):
yield run_on_reactor()
if not self._should_trust_id_server(id_server):
raise SynapseError(
400, "Untrusted ID server '%s'" % id_server,
Codes.SERVER_NOT_TRUSTED
)
params = {
'email': email,
'client_secret': client_secret,

View File

@@ -0,0 +1,443 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.streams.config import PaginationConfig
from synapse.types import (
UserID, StreamToken,
)
from synapse.util import unwrapFirstError
from synapse.util.async import concurrently_execute
from synapse.util.caches.snapshot_cache import SnapshotCache
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
import logging
logger = logging.getLogger(__name__)
class InitialSyncHandler(BaseHandler):
def __init__(self, hs):
super(InitialSyncHandler, self).__init__(hs)
self.hs = hs
self.state = hs.get_state_handler()
self.clock = hs.get_clock()
self.validator = EventValidator()
self.snapshot_cache = SnapshotCache()
def snapshot_all_rooms(self, user_id=None, pagin_config=None,
as_client_event=True, include_archived=False):
"""Retrieve a snapshot of all rooms the user is invited or has joined.
This snapshot may include messages for all rooms where the user is
joined, depending on the pagination config.
Args:
user_id (str): The ID of the user making the request.
pagin_config (synapse.api.streams.PaginationConfig): The pagination
config used to determine how many messages *PER ROOM* to return.
as_client_event (bool): True to get events in client-server format.
include_archived (bool): True to get rooms that the user has left
Returns:
A list of dicts with "room_id" and "membership" keys for all rooms
the user is currently invited to or joined in. Rooms the user is
joined to may return a "messages" key with messages, depending
on the specified PaginationConfig.
"""
key = (
user_id,
pagin_config.from_token,
pagin_config.to_token,
pagin_config.direction,
pagin_config.limit,
as_client_event,
include_archived,
)
now_ms = self.clock.time_msec()
result = self.snapshot_cache.get(now_ms, key)
if result is not None:
return result
return self.snapshot_cache.set(now_ms, key, self._snapshot_all_rooms(
user_id, pagin_config, as_client_event, include_archived
))
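SnapshotCache.get/set coalesce identical concurrent snapshot requests behind a single Deferred keyed on the full argument tuple. A stripped-down sketch of the idea (the real cache in synapse.util.caches.snapshot_cache also keeps completed results for a short time window):

# Sketch: share one in-flight computation per request key.
pending = {}

def get_or_compute(key, compute):
    if key in pending:
        return pending[key]            # piggy-back on the running call
    d = compute()
    pending[key] = d

    def _done(result):
        pending.pop(key, None)         # a real cache would keep it briefly
        return result

    d.addBoth(_done)
    return d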
@defer.inlineCallbacks
def _snapshot_all_rooms(self, user_id=None, pagin_config=None,
as_client_event=True, include_archived=False):
memberships = [Membership.INVITE, Membership.JOIN]
if include_archived:
memberships.append(Membership.LEAVE)
room_list = yield self.store.get_rooms_for_user_where_membership_is(
user_id=user_id, membership_list=memberships
)
user = UserID.from_string(user_id)
rooms_ret = []
now_token = yield self.hs.get_event_sources().get_current_token()
presence_stream = self.hs.get_event_sources().sources["presence"]
pagination_config = PaginationConfig(from_token=now_token)
presence, _ = yield presence_stream.get_pagination_rows(
user, pagination_config.get_source_config("presence"), None
)
receipt_stream = self.hs.get_event_sources().sources["receipt"]
receipt, _ = yield receipt_stream.get_pagination_rows(
user, pagination_config.get_source_config("receipt"), None
)
tags_by_room = yield self.store.get_tags_for_user(user_id)
account_data, account_data_by_room = (
yield self.store.get_account_data_for_user(user_id)
)
public_room_ids = yield self.store.get_public_room_ids()
limit = pagin_config.limit
if limit is None:
limit = 10
@defer.inlineCallbacks
def handle_room(event):
d = {
"room_id": event.room_id,
"membership": event.membership,
"visibility": (
"public" if event.room_id in public_room_ids
else "private"
),
}
if event.membership == Membership.INVITE:
time_now = self.clock.time_msec()
d["inviter"] = event.sender
invite_event = yield self.store.get_event(event.event_id)
d["invite"] = serialize_event(invite_event, time_now, as_client_event)
rooms_ret.append(d)
if event.membership not in (Membership.JOIN, Membership.LEAVE):
return
try:
if event.membership == Membership.JOIN:
room_end_token = now_token.room_key
deferred_room_state = self.state_handler.get_current_state(
event.room_id
)
elif event.membership == Membership.LEAVE:
room_end_token = "s%d" % (event.stream_ordering,)
deferred_room_state = self.store.get_state_for_events(
[event.event_id], None
)
deferred_room_state.addCallback(
lambda states: states[event.event_id]
)
(messages, token), current_state = yield preserve_context_over_deferred(
defer.gatherResults(
[
preserve_fn(self.store.get_recent_events_for_room)(
event.room_id,
limit=limit,
end_token=room_end_token,
),
deferred_room_state,
]
)
).addErrback(unwrapFirstError)
messages = yield filter_events_for_client(
self.store, user_id, messages
)
start_token = now_token.copy_and_replace("room_key", token[0])
end_token = now_token.copy_and_replace("room_key", token[1])
time_now = self.clock.time_msec()
d["messages"] = {
"chunk": [
serialize_event(m, time_now, as_client_event)
for m in messages
],
"start": start_token.to_string(),
"end": end_token.to_string(),
}
d["state"] = [
serialize_event(c, time_now, as_client_event)
for c in current_state.values()
]
account_data_events = []
tags = tags_by_room.get(event.room_id)
if tags:
account_data_events.append({
"type": "m.tag",
"content": {"tags": tags},
})
account_data = account_data_by_room.get(event.room_id, {})
for account_data_type, content in account_data.items():
account_data_events.append({
"type": account_data_type,
"content": content,
})
d["account_data"] = account_data_events
except:
logger.exception("Failed to get snapshot")
yield concurrently_execute(handle_room, room_list, 10)
account_data_events = []
for account_data_type, content in account_data.items():
account_data_events.append({
"type": account_data_type,
"content": content,
})
ret = {
"rooms": rooms_ret,
"presence": presence,
"account_data": account_data_events,
"receipts": receipt,
"end": now_token.to_string(),
}
defer.returnValue(ret)
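concurrently_execute(handle_room, room_list, 10) bounds the per-request fan-out. A rough sketch of that utility from synapse.util.async (the real one also preserves logging contexts):

# Sketch: run func over all items with at most `limit` in flight,
# sharing a single iterator between the workers.
from twisted.internet import defer

def concurrently_execute(func, args, limit):
    it = iter(args)

    @defer.inlineCallbacks
    def _worker():
        for arg in it:
            yield func(arg)

    return defer.gatherResults(
        [_worker() for _ in range(limit)],
        consumeErrors=True,
    )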
@defer.inlineCallbacks
def room_initial_sync(self, requester, room_id, pagin_config=None):
"""Capture the a snapshot of a room. If user is currently a member of
the room this will be what is currently in the room. If the user left
the room this will be what was in the room when they left.
Args:
requester(Requester): The user to get a snapshot for.
room_id(str): The room to get a snapshot of.
pagin_config(synapse.streams.config.PaginationConfig):
The pagination config used to determine how many messages to
return.
Raises:
AuthError if the user wasn't in the room.
Returns:
A JSON serialisable dict with the snapshot of the room.
"""
user_id = requester.user.to_string()
membership, member_event_id = yield self._check_in_room_or_world_readable(
room_id, user_id,
)
is_peeking = member_event_id is None
if membership == Membership.JOIN:
result = yield self._room_initial_sync_joined(
user_id, room_id, pagin_config, membership, is_peeking
)
elif membership == Membership.LEAVE:
result = yield self._room_initial_sync_parted(
user_id, room_id, pagin_config, membership, member_event_id, is_peeking
)
account_data_events = []
tags = yield self.store.get_tags_for_room(user_id, room_id)
if tags:
account_data_events.append({
"type": "m.tag",
"content": {"tags": tags},
})
account_data = yield self.store.get_account_data_for_room(user_id, room_id)
for account_data_type, content in account_data.items():
account_data_events.append({
"type": account_data_type,
"content": content,
})
result["account_data"] = account_data_events
defer.returnValue(result)
@defer.inlineCallbacks
def _room_initial_sync_parted(self, user_id, room_id, pagin_config,
membership, member_event_id, is_peeking):
room_state = yield self.store.get_state_for_events(
[member_event_id], None
)
room_state = room_state[member_event_id]
limit = pagin_config.limit if pagin_config else None
if limit is None:
limit = 10
stream_token = yield self.store.get_stream_token_for_event(
member_event_id
)
messages, token = yield self.store.get_recent_events_for_room(
room_id,
limit=limit,
end_token=stream_token
)
messages = yield filter_events_for_client(
self.store, user_id, messages, is_peeking=is_peeking
)
start_token = StreamToken.START.copy_and_replace("room_key", token[0])
end_token = StreamToken.START.copy_and_replace("room_key", token[1])
time_now = self.clock.time_msec()
defer.returnValue({
"membership": membership,
"room_id": room_id,
"messages": {
"chunk": [serialize_event(m, time_now) for m in messages],
"start": start_token.to_string(),
"end": end_token.to_string(),
},
"state": [serialize_event(s, time_now) for s in room_state.values()],
"presence": [],
"receipts": [],
})
@defer.inlineCallbacks
def _room_initial_sync_joined(self, user_id, room_id, pagin_config,
membership, is_peeking):
current_state = yield self.state.get_current_state(
room_id=room_id,
)
# TODO: These concurrently
time_now = self.clock.time_msec()
state = [
serialize_event(x, time_now)
for x in current_state.values()
]
now_token = yield self.hs.get_event_sources().get_current_token()
limit = pagin_config.limit if pagin_config else None
if limit is None:
limit = 10
room_members = [
m for m in current_state.values()
if m.type == EventTypes.Member
and m.content["membership"] == Membership.JOIN
]
presence_handler = self.hs.get_presence_handler()
@defer.inlineCallbacks
def get_presence():
states = yield presence_handler.get_states(
[m.user_id for m in room_members],
as_event=True,
)
defer.returnValue(states)
@defer.inlineCallbacks
def get_receipts():
receipts_handler = self.hs.get_handlers().receipts_handler
receipts = yield receipts_handler.get_receipts_for_room(
room_id,
now_token.receipt_key
)
defer.returnValue(receipts)
presence, receipts, (messages, token) = yield defer.gatherResults(
[
preserve_fn(get_presence)(),
preserve_fn(get_receipts)(),
preserve_fn(self.store.get_recent_events_for_room)(
room_id,
limit=limit,
end_token=now_token.room_key,
)
],
consumeErrors=True,
).addErrback(unwrapFirstError)
messages = yield filter_events_for_client(
self.store, user_id, messages, is_peeking=is_peeking,
)
start_token = now_token.copy_and_replace("room_key", token[0])
end_token = now_token.copy_and_replace("room_key", token[1])
time_now = self.clock.time_msec()
ret = {
"room_id": room_id,
"messages": {
"chunk": [serialize_event(m, time_now) for m in messages],
"start": start_token.to_string(),
"end": end_token.to_string(),
},
"state": state,
"presence": presence,
"receipts": receipts,
}
if not is_peeking:
ret["membership"] = membership
defer.returnValue(ret)
@defer.inlineCallbacks
def _check_in_room_or_world_readable(self, room_id, user_id):
try:
# check_user_was_in_room will return the most recent membership
# event for the user if:
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.auth.check_user_was_in_room(room_id, user_id)
defer.returnValue((member_event.membership, member_event.event_id))
return
except AuthError:
visibility = yield self.state_handler.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
)
if (
visibility and
visibility.content["history_visibility"] == "world_readable"
):
defer.returnValue((Membership.JOIN, None))
return
raise AuthError(
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
)

View File

@@ -16,20 +16,25 @@
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.streams.config import PaginationConfig
from synapse.api.errors import AuthError, Codes, SynapseError, LimitExceededError
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.util import unwrapFirstError
from synapse.util.async import concurrently_execute
from synapse.util.caches.snapshot_cache import SnapshotCache
from synapse.types import UserID, RoomStreamToken, StreamToken
from synapse.push.action_generator import ActionGenerator
from synapse.types import (
UserID, RoomAlias, RoomStreamToken, get_domain_from_id
)
from synapse.util.async import run_on_reactor, ReadWriteLock
from synapse.util.logcontext import preserve_fn
from synapse.util.metrics import measure_func
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
from canonicaljson import encode_canonical_json
import logging
import random
logger = logging.getLogger(__name__)
@@ -42,11 +47,24 @@ class MessageHandler(BaseHandler):
self.state = hs.get_state_handler()
self.clock = hs.get_clock()
self.validator = EventValidator()
self.snapshot_cache = SnapshotCache()
self.pagination_lock = ReadWriteLock()
@defer.inlineCallbacks
def purge_history(self, room_id, event_id):
event = yield self.store.get_event(event_id)
if event.room_id != room_id:
raise SynapseError(400, "Event is for wrong room.")
depth = event.depth
with (yield self.pagination_lock.write(room_id)):
yield self.store.delete_old_state(room_id, depth)
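purge_history takes the new pagination_lock for writing while get_messages (below) takes it for reading: a purge waits for in-flight pagination and excludes new reads while old state is deleted. The intended usage, as a toy sketch:

# Sketch: readers share the per-room lock; the writer is exclusive.
from twisted.internet import defer

@defer.inlineCallbacks
def paginate(self, room_id):
    with (yield self.pagination_lock.read(room_id)):
        pass  # many readers may hold this concurrently

@defer.inlineCallbacks
def purge(self, room_id):
    with (yield self.pagination_lock.write(room_id)):
        pass  # waits for readers; blocks everyone until done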
@defer.inlineCallbacks
def get_messages(self, requester, room_id=None, pagin_config=None,
as_client_event=True):
as_client_event=True, event_filter=None):
"""Get messages in a room.
Args:
@@ -55,18 +73,18 @@ class MessageHandler(BaseHandler):
pagin_config (synapse.api.streams.PaginationConfig): The pagination
config rules to apply, if any.
as_client_event (bool): True to get events in client-server format.
event_filter (Filter): Filter to apply to results or None
Returns:
dict: Pagination API results
"""
user_id = requester.user.to_string()
data_source = self.hs.get_event_sources().sources["room"]
if pagin_config.from_token:
room_token = pagin_config.from_token.room_key
else:
pagin_config.from_token = (
yield self.hs.get_event_sources().get_current_token(
direction='b'
yield self.hs.get_event_sources().get_current_token_for_room(
room_id=room_id
)
)
room_token = pagin_config.from_token.room_key
@@ -79,42 +97,48 @@ class MessageHandler(BaseHandler):
source_config = pagin_config.get_source_config("room")
membership, member_event_id = yield self._check_in_room_or_world_readable(
room_id, user_id
)
if source_config.direction == 'b':
# if we're going backwards, we might need to backfill. This
# requires that we have a topo token.
if room_token.topological:
max_topo = room_token.topological
else:
max_topo = yield self.store.get_max_topological_token_for_stream_and_room(
room_id, room_token.stream
)
if membership == Membership.LEAVE:
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
leave_token = yield self.store.get_topological_token_for_event(
member_event_id
)
leave_token = RoomStreamToken.parse(leave_token)
if leave_token.topological < max_topo:
source_config.from_key = str(leave_token)
yield self.hs.get_handlers().federation_handler.maybe_backfill(
room_id, max_topo
with (yield self.pagination_lock.read(room_id)):
membership, member_event_id = yield self._check_in_room_or_world_readable(
room_id, user_id
)
events, next_key = yield data_source.get_pagination_rows(
requester.user, source_config, room_id
)
if source_config.direction == 'b':
# if we're going backwards, we might need to backfill. This
# requires that we have a topo token.
if room_token.topological:
max_topo = room_token.topological
else:
max_topo = yield self.store.get_max_topological_token(
room_id, room_token.stream
)
next_token = pagin_config.from_token.copy_and_replace(
"room_key", next_key
)
if membership == Membership.LEAVE:
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
leave_token = yield self.store.get_topological_token_for_event(
member_event_id
)
leave_token = RoomStreamToken.parse(leave_token)
if leave_token.topological < max_topo:
source_config.from_key = str(leave_token)
yield self.hs.get_handlers().federation_handler.maybe_backfill(
room_id, max_topo
)
events, next_key = yield self.store.paginate_room_events(
room_id=room_id,
from_key=source_config.from_key,
to_key=source_config.to_key,
direction=source_config.direction,
limit=source_config.limit,
event_filter=event_filter,
)
next_token = pagin_config.from_token.copy_and_replace(
"room_key", next_key
)
if not events:
defer.returnValue({
@@ -123,7 +147,11 @@ class MessageHandler(BaseHandler):
"end": next_token.to_string(),
})
events = yield self._filter_events_for_client(
if event_filter:
events = event_filter.filter(events)
events = yield filter_events_for_client(
self.store,
user_id,
events,
is_peeking=(member_event_id is None),
@@ -212,12 +240,27 @@ class MessageHandler(BaseHandler):
"Tried to send member event through non-member codepath"
)
# We check here if we are currently being rate limited, so that we
# don't do unnecessary work. We check again just before we actually
# send the event.
time_now = self.clock.time()
allowed, time_allowed = self.ratelimiter.send_message(
event.sender, time_now,
msg_rate_hz=self.hs.config.rc_messages_per_second,
burst_count=self.hs.config.rc_message_burst_count,
update=False,
)
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now)),
)
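send_message with update=False is a dry run: it reports whether a message would currently be allowed, and when the next one would be, without spending any quota; the real send later re-checks with update=True. A toy decaying-counter version of that check — the names here are illustrative, not the actual Ratelimiter API:

# Sketch: per-sender message counter that leaks at rate_hz per second.
def send_message(state, key, time_now, rate_hz, burst, update=True):
    count, last_seen = state.get(key, (0.0, time_now))
    count = max(0.0, count - (time_now - last_seen) * rate_hz)
    allowed = count + 1.0 <= burst
    if allowed and update:
        state[key] = (count + 1.0, time_now)
    # Earliest time at which another message would fit under `burst`.
    time_allowed = time_now + max(0.0, (count + 1.0 - burst) / rate_hz)
    return allowed, time_allowed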
user = UserID.from_string(event.sender)
assert self.hs.is_mine(user), "User must be our own: %s" % (user,)
if event.is_state():
prev_state = self.deduplicate_state_event(event, context)
prev_state = yield self.deduplicate_state_event(event, context)
if prev_state is not None:
defer.returnValue(prev_state)
@@ -229,9 +272,10 @@ class MessageHandler(BaseHandler):
)
if event.type == EventTypes.Message:
presence = self.hs.get_handlers().presence_handler
presence = self.hs.get_presence_handler()
yield presence.bump_presence_active_time(user)
@defer.inlineCallbacks
def deduplicate_state_event(self, event, context):
"""
Checks whether event is in the latest resolved state in context.
@@ -239,13 +283,17 @@ class MessageHandler(BaseHandler):
If so, returns the version of the event in context.
Otherwise, returns None.
"""
prev_event = context.current_state.get((event.type, event.state_key))
prev_event_id = context.prev_state_ids.get((event.type, event.state_key))
prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
if not prev_event:
return
if prev_event and event.user_id == prev_event.user_id:
prev_content = encode_canonical_json(prev_event.content)
next_content = encode_canonical_json(event.content)
if prev_content == next_content:
return prev_event
return None
defer.returnValue(prev_event)
return
@defer.inlineCallbacks
def create_and_send_nonmember_event(
@@ -356,371 +404,221 @@ class MessageHandler(BaseHandler):
[serialize_event(c, now) for c in room_state.values()]
)
def snapshot_all_rooms(self, user_id=None, pagin_config=None,
as_client_event=True, include_archived=False):
"""Retrieve a snapshot of all rooms the user is invited or has joined.
This snapshot may include messages for all rooms where the user is
joined, depending on the pagination config.
Args:
user_id (str): The ID of the user making the request.
pagin_config (synapse.api.streams.PaginationConfig): The pagination
config used to determine how many messages *PER ROOM* to return.
as_client_event (bool): True to get events in client-server format.
include_archived (bool): True to get rooms that the user has left
Returns:
A list of dicts with "room_id" and "membership" keys for all rooms
the user is currently invited or joined in on. Rooms where the user
is joined on, may return a "messages" key with messages, depending
on the specified PaginationConfig.
"""
key = (
user_id,
pagin_config.from_token,
pagin_config.to_token,
pagin_config.direction,
pagin_config.limit,
as_client_event,
include_archived,
)
now_ms = self.clock.time_msec()
result = self.snapshot_cache.get(now_ms, key)
if result is not None:
return result
return self.snapshot_cache.set(now_ms, key, self._snapshot_all_rooms(
user_id, pagin_config, as_client_event, include_archived
))
@measure_func("_create_new_client_event")
@defer.inlineCallbacks
def _snapshot_all_rooms(self, user_id=None, pagin_config=None,
as_client_event=True, include_archived=False):
def _create_new_client_event(self, builder, prev_event_ids=None):
if prev_event_ids:
prev_events = yield self.store.add_event_hashes(prev_event_ids)
prev_max_depth = yield self.store.get_max_depth_of_events(prev_event_ids)
depth = prev_max_depth + 1
else:
latest_ret = yield self.store.get_latest_event_ids_and_hashes_in_room(
builder.room_id,
)
memberships = [Membership.INVITE, Membership.JOIN]
if include_archived:
memberships.append(Membership.LEAVE)
# We want to limit the max number of prev events we point to in our
# new event
if len(latest_ret) > 10:
# Sort by reverse depth, so we point to the most recent.
latest_ret.sort(key=lambda a: -a[2])
new_latest_ret = latest_ret[:5]
room_list = yield self.store.get_rooms_for_user_where_membership_is(
user_id=user_id, membership_list=memberships
# We also randomly point to some of the older events, to make
# sure that we don't completely ignore the older events.
if latest_ret[5:]:
sample_size = min(5, len(latest_ret[5:]))
new_latest_ret.extend(random.sample(latest_ret[5:], sample_size))
latest_ret = new_latest_ret
if latest_ret:
depth = max([d for _, _, d in latest_ret]) + 1
else:
depth = 1
prev_events = [
(event_id, prev_hashes)
for event_id, prev_hashes, _ in latest_ret
]
builder.prev_events = prev_events
builder.depth = depth
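The new _create_new_client_event logic (interleaved with removed snapshot code above) caps prev_events at ten: the five most recent extremities by depth, plus up to five sampled at random from the rest so older forward extremities still get referenced eventually. Condensed into a sketch:

# Sketch: cap the forward extremities a new event points at.
import random

def choose_prev_events(latest_ret):
    # latest_ret: list of (event_id, prev_hashes, depth) tuples
    if len(latest_ret) > 10:
        latest_ret.sort(key=lambda a: -a[2])        # deepest first
        chosen = latest_ret[:5]
        older = latest_ret[5:]
        chosen.extend(random.sample(older, min(5, len(older))))
        return chosen
    return latest_ret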
state_handler = self.state_handler
context = yield state_handler.compute_event_context(builder)
if builder.is_state():
builder.prev_state = yield self.store.add_event_hashes(
context.prev_state_events
)
yield self.auth.add_auth_events(builder, context)
signing_key = self.hs.config.signing_key[0]
add_hashes_and_signatures(
builder, self.server_name, signing_key
)
user = UserID.from_string(user_id)
event = builder.build()
rooms_ret = []
now_token = yield self.hs.get_event_sources().get_current_token()
presence_stream = self.hs.get_event_sources().sources["presence"]
pagination_config = PaginationConfig(from_token=now_token)
presence, _ = yield presence_stream.get_pagination_rows(
user, pagination_config.get_source_config("presence"), None
logger.debug(
"Created event %s with state: %s",
event.event_id, context.prev_state_ids,
)
receipt_stream = self.hs.get_event_sources().sources["receipt"]
receipt, _ = yield receipt_stream.get_pagination_rows(
user, pagination_config.get_source_config("receipt"), None
defer.returnValue(
(event, context,)
)
tags_by_room = yield self.store.get_tags_for_user(user_id)
@measure_func("handle_new_client_event")
@defer.inlineCallbacks
def handle_new_client_event(
self,
requester,
event,
context,
ratelimit=True,
extra_users=[]
):
# We now need to go and hit out to wherever we need to hit out to.
account_data, account_data_by_room = (
yield self.store.get_account_data_for_user(user_id)
)
if ratelimit:
self.ratelimit(requester)
public_room_ids = yield self.store.get_public_room_ids()
try:
yield self.auth.check_from_context(event, context)
except AuthError as err:
logger.warn("Denying new event %r because %s", event, err)
raise err
limit = pagin_config.limit
if limit is None:
limit = 10
yield self.maybe_kick_guest_users(event, context)
@defer.inlineCallbacks
def handle_room(event):
d = {
"room_id": event.room_id,
"membership": event.membership,
"visibility": (
"public" if event.room_id in public_room_ids
else "private"
),
}
if event.type == EventTypes.CanonicalAlias:
# Check the alias is actually valid (at this time at least)
room_alias_str = event.content.get("alias", None)
if room_alias_str:
room_alias = RoomAlias.from_string(room_alias_str)
directory_handler = self.hs.get_handlers().directory_handler
mapping = yield directory_handler.get_association(room_alias)
if event.membership == Membership.INVITE:
time_now = self.clock.time_msec()
d["inviter"] = event.sender
invite_event = yield self.store.get_event(event.event_id)
d["invite"] = serialize_event(invite_event, time_now, as_client_event)
rooms_ret.append(d)
if event.membership not in (Membership.JOIN, Membership.LEAVE):
return
try:
if event.membership == Membership.JOIN:
room_end_token = now_token.room_key
deferred_room_state = self.state_handler.get_current_state(
event.room_id
)
elif event.membership == Membership.LEAVE:
room_end_token = "s%d" % (event.stream_ordering,)
deferred_room_state = self.store.get_state_for_events(
[event.event_id], None
)
deferred_room_state.addCallback(
lambda states: states[event.event_id]
if mapping["room_id"] != event.room_id:
raise SynapseError(
400,
"Room alias %s does not point to the room" % (
room_alias_str,
)
)
(messages, token), current_state = yield defer.gatherResults(
[
self.store.get_recent_events_for_room(
event.room_id,
limit=limit,
end_token=room_end_token,
),
deferred_room_state,
]
).addErrback(unwrapFirstError)
federation_handler = self.hs.get_handlers().federation_handler
messages = yield self._filter_events_for_client(
user_id, messages
)
if event.type == EventTypes.Member:
if event.content["membership"] == Membership.INVITE:
def is_inviter_member_event(e):
return (
e.type == EventTypes.Member and
e.sender == event.sender
)
start_token = now_token.copy_and_replace("room_key", token[0])
end_token = now_token.copy_and_replace("room_key", token[1])
time_now = self.clock.time_msec()
d["messages"] = {
"chunk": [
serialize_event(m, time_now, as_client_event)
for m in messages
],
"start": start_token.to_string(),
"end": end_token.to_string(),
}
d["state"] = [
serialize_event(c, time_now, as_client_event)
for c in current_state.values()
state_to_include_ids = [
e_id
for k, e_id in context.current_state_ids.items()
if k[0] in self.hs.config.room_invite_state_types
or k[0] == EventTypes.Member and k[1] == event.sender
]
account_data_events = []
tags = tags_by_room.get(event.room_id)
if tags:
account_data_events.append({
"type": "m.tag",
"content": {"tags": tags},
})
state_to_include = yield self.store.get_events(state_to_include_ids)
account_data = account_data_by_room.get(event.room_id, {})
for account_data_type, content in account_data.items():
account_data_events.append({
"type": account_data_type,
"content": content,
})
event.unsigned["invite_room_state"] = [
{
"type": e.type,
"state_key": e.state_key,
"content": e.content,
"sender": e.sender,
}
for e in state_to_include.values()
]
d["account_data"] = account_data_events
except:
logger.exception("Failed to get snapshot")
invitee = UserID.from_string(event.state_key)
if not self.hs.is_mine(invitee):
# TODO: Can we add signature from remote server in a nicer
# way? If we have been invited by a remote server, we need
# to get them to sign the event.
yield concurrently_execute(handle_room, room_list, 10)
returned_invite = yield federation_handler.send_invite(
invitee.domain,
event,
)
account_data_events = []
for account_data_type, content in account_data.items():
account_data_events.append({
"type": account_data_type,
"content": content,
})
event.unsigned.pop("room_state", None)
ret = {
"rooms": rooms_ret,
"presence": presence,
"account_data": account_data_events,
"receipts": receipt,
"end": now_token.to_string(),
}
# TODO: Make sure the signatures actually are correct.
event.signatures.update(
returned_invite.signatures
)
defer.returnValue(ret)
@defer.inlineCallbacks
def room_initial_sync(self, requester, room_id, pagin_config=None):
"""Capture the a snapshot of a room. If user is currently a member of
the room this will be what is currently in the room. If the user left
the room this will be what was in the room when they left.
Args:
requester(Requester): The user to get a snapshot for.
room_id(str): The room to get a snapshot of.
pagin_config(synapse.streams.config.PaginationConfig):
The pagination config used to determine how many messages to
return.
Raises:
AuthError if the user wasn't in the room.
Returns:
A JSON serialisable dict with the snapshot of the room.
"""
user_id = requester.user.to_string()
membership, member_event_id = yield self._check_in_room_or_world_readable(
room_id, user_id,
)
is_peeking = member_event_id is None
if membership == Membership.JOIN:
result = yield self._room_initial_sync_joined(
user_id, room_id, pagin_config, membership, is_peeking
if event.type == EventTypes.Redaction:
auth_events_ids = yield self.auth.compute_auth_events(
event, context.prev_state_ids, for_verification=True,
)
elif membership == Membership.LEAVE:
result = yield self._room_initial_sync_parted(
user_id, room_id, pagin_config, membership, member_event_id, is_peeking
)
account_data_events = []
tags = yield self.store.get_tags_for_room(user_id, room_id)
if tags:
account_data_events.append({
"type": "m.tag",
"content": {"tags": tags},
})
account_data = yield self.store.get_account_data_for_room(user_id, room_id)
for account_data_type, content in account_data.items():
account_data_events.append({
"type": account_data_type,
"content": content,
})
result["account_data"] = account_data_events
defer.returnValue(result)
@defer.inlineCallbacks
def _room_initial_sync_parted(self, user_id, room_id, pagin_config,
membership, member_event_id, is_peeking):
room_state = yield self.store.get_state_for_events(
[member_event_id], None
)
room_state = room_state[member_event_id]
limit = pagin_config.limit if pagin_config else None
if limit is None:
limit = 10
stream_token = yield self.store.get_stream_token_for_event(
member_event_id
)
messages, token = yield self.store.get_recent_events_for_room(
room_id,
limit=limit,
end_token=stream_token
)
messages = yield self._filter_events_for_client(
user_id, messages, is_peeking=is_peeking
)
start_token = StreamToken.START.copy_and_replace("room_key", token[0])
end_token = StreamToken.START.copy_and_replace("room_key", token[1])
time_now = self.clock.time_msec()
defer.returnValue({
"membership": membership,
"room_id": room_id,
"messages": {
"chunk": [serialize_event(m, time_now) for m in messages],
"start": start_token.to_string(),
"end": end_token.to_string(),
},
"state": [serialize_event(s, time_now) for s in room_state.values()],
"presence": [],
"receipts": [],
})
@defer.inlineCallbacks
def _room_initial_sync_joined(self, user_id, room_id, pagin_config,
membership, is_peeking):
current_state = yield self.state.get_current_state(
room_id=room_id,
)
# TODO: These concurrently
time_now = self.clock.time_msec()
state = [
serialize_event(x, time_now)
for x in current_state.values()
]
now_token = yield self.hs.get_event_sources().get_current_token()
limit = pagin_config.limit if pagin_config else None
if limit is None:
limit = 10
room_members = [
m for m in current_state.values()
if m.type == EventTypes.Member
and m.content["membership"] == Membership.JOIN
]
presence_handler = self.hs.get_handlers().presence_handler
@defer.inlineCallbacks
def get_presence():
states = yield presence_handler.get_states(
[m.user_id for m in room_members],
as_event=True,
)
defer.returnValue(states)
@defer.inlineCallbacks
def get_receipts():
receipts_handler = self.hs.get_handlers().receipts_handler
receipts = yield receipts_handler.get_receipts_for_room(
room_id,
now_token.receipt_key
)
defer.returnValue(receipts)
presence, receipts, (messages, token) = yield defer.gatherResults(
[
get_presence(),
get_receipts(),
self.store.get_recent_events_for_room(
room_id,
limit=limit,
end_token=now_token.room_key,
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
(e.type, e.state_key): e for e in auth_events.values()
}
if self.auth.check_redaction(event, auth_events=auth_events):
original_event = yield self.store.get_event(
event.redacts,
check_redacted=False,
get_prev_content=False,
allow_rejected=False,
allow_none=False
)
],
consumeErrors=True,
).addErrback(unwrapFirstError)
if event.user_id != original_event.user_id:
raise AuthError(
403,
"You don't have permission to redact events"
)
messages = yield self._filter_events_for_client(
user_id, messages, is_peeking=is_peeking,
if event.type == EventTypes.Create and context.prev_state_ids:
raise AuthError(
403,
"Changing the room create event is forbidden",
)
action_generator = ActionGenerator(self.hs)
yield action_generator.handle_push_actions_for_event(
event, context
)
start_token = now_token.copy_and_replace("room_key", token[0])
end_token = now_token.copy_and_replace("room_key", token[1])
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event, context=context
)
time_now = self.clock.time_msec()
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
preserve_fn(self.hs.get_pusherpool().on_new_notifications)(
event_stream_id, max_stream_id
)
ret = {
"room_id": room_id,
"messages": {
"chunk": [serialize_event(m, time_now) for m in messages],
"start": start_token.to_string(),
"end": end_token.to_string(),
},
"state": state,
"presence": presence,
"receipts": receipts,
}
if not is_peeking:
ret["membership"] = membership
users_in_room = yield self.store.get_joined_users_from_context(event, context)
defer.returnValue(ret)
destinations = [
get_domain_from_id(user_id) for user_id in users_in_room
if not self.hs.is_mine_id(user_id)
]
@defer.inlineCallbacks
def _notify():
yield run_on_reactor()
yield self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
preserve_fn(_notify)()
# If invite, remove room_state from unsigned before sending.
event.unsigned.pop("invite_room_state", None)
preserve_fn(federation_handler.handle_new_event)(
event, destinations=destinations,
)

View File

@@ -33,11 +33,9 @@ from synapse.util.logcontext import preserve_fn
from synapse.util.logutils import log_function
from synapse.util.metrics import Measure
from synapse.util.wheel_timer import WheelTimer
from synapse.types import UserID
from synapse.types import UserID, get_domain_from_id
import synapse.metrics
from ._base import BaseHandler
import logging
@@ -52,6 +50,13 @@ timers_fired_counter = metrics.register_counter("timers_fired")
federation_presence_counter = metrics.register_counter("federation_presence")
bump_active_time_counter = metrics.register_counter("bump_active_time")
get_updates_counter = metrics.register_counter("get_updates", labels=["type"])
notify_reason_counter = metrics.register_counter("notify_reason", labels=["reason"])
state_transition_counter = metrics.register_counter(
"state_transition", labels=["from", "to"]
)
# If a user was last active in the last LAST_ACTIVE_GRANULARITY, consider them
# "currently_active"
@@ -70,20 +75,26 @@ FEDERATION_TIMEOUT = 30 * 60 * 1000
# How often to resend presence to remote servers
FEDERATION_PING_INTERVAL = 25 * 60 * 1000
# How long we will wait before assuming that the syncs from an external process
# are dead.
EXTERNAL_PROCESS_EXPIRY = 5 * 60 * 1000
assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER
class PresenceHandler(BaseHandler):
class PresenceHandler(object):
def __init__(self, hs):
super(PresenceHandler, self).__init__(hs)
self.hs = hs
self.is_mine = hs.is_mine
self.is_mine_id = hs.is_mine_id
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.wheel_timer = WheelTimer()
self.notifier = hs.get_notifier()
self.federation = hs.get_replication_layer()
self.state = hs.get_state_handler()
self.federation.register_edu_handler(
"m.presence", self.incoming_presence
)
@@ -138,7 +149,7 @@ class PresenceHandler(BaseHandler):
obj=state.user_id,
then=state.last_user_sync_ts + SYNC_ONLINE_TIMEOUT,
)
if self.hs.is_mine_id(state.user_id):
if self.is_mine_id(state.user_id):
self.wheel_timer.insert(
now=now,
obj=state.user_id,
@@ -160,20 +171,38 @@ class PresenceHandler(BaseHandler):
self.serial_to_user = {}
self._next_serial = 1
# Keeps track of the number of *ongoing* syncs. While this is non zero
# a user will never go offline.
# Keeps track of the number of *ongoing* syncs on this process. While
# this is non zero a user will never go offline.
self.user_to_num_current_syncs = {}
# Keeps track of the number of *ongoing* syncs on other processes.
# While any sync is ongoing on another process the user will never
# go offline.
# Each process has a unique identifier and an update frequency. If
# no update is received from that process within the update period then
# we assume that all the sync requests on that process have stopped.
# Stored as a dict from process_id to set of user_id, and a dict of
# process_id to millisecond timestamp last updated.
self.external_process_to_current_syncs = {}
self.external_process_last_updated_ms = {}
# Start a LoopingCall in 30s that fires every 5s.
# The initial delay is to allow disconnected clients a chance to
# reconnect before we treat them as offline.
self.clock.call_later(
0 * 1000,
30,
self.clock.looping_call,
self._handle_timeouts,
5000,
)
self.clock.call_later(
60,
self.clock.looping_call,
self._persist_unpersisted_changes,
60 * 1000,
)
metrics.register_callback("wheel_timer_size", lambda: len(self.wheel_timer))
@defer.inlineCallbacks
@@ -188,7 +217,7 @@ class PresenceHandler(BaseHandler):
is some spurious presence changes that will self-correct.
"""
logger.info(
"Performing _on_shutdown. Persiting %d unpersisted changes",
"Performing _on_shutdown. Persisting %d unpersisted changes",
len(self.user_to_current_state)
)
@@ -199,6 +228,27 @@ class PresenceHandler(BaseHandler):
])
logger.info("Finished _on_shutdown")
@defer.inlineCallbacks
def _persist_unpersisted_changes(self):
"""We periodically persist the unpersisted changes, as otherwise they
may stack up and slow down shutdown times.
"""
logger.info(
"Performing _persist_unpersisted_changes. Persisting %d unpersisted changes",
len(self.unpersisted_users_changes)
)
unpersisted = self.unpersisted_users_changes
self.unpersisted_users_changes = set()
if unpersisted:
yield self.store.update_presence([
self.user_to_current_state[user_id]
for user_id in unpersisted
])
logger.info("Finished _persist_unpersisted_changes")
@defer.inlineCallbacks
def _update_states(self, new_states):
"""Updates presence of users. Sets the appropriate timeouts. Pokes
@@ -215,6 +265,12 @@ class PresenceHandler(BaseHandler):
to_notify = {} # Changes we want to notify everyone about
to_federation_ping = {} # These need sending keep-alives
# Only bother handling the last presence change for each user
new_states_dict = {}
for new_state in new_states:
new_states_dict[new_state.user_id] = new_state
new_states = new_states_dict.values()
for new_state in new_states:
user_id = new_state.user_id
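The dedup above keeps only the newest queued change per user before processing; as a tiny helper:

# Sketch: later entries in new_states win for each user.
def last_change_per_user(new_states):
    by_user = {}
    for state in new_states:
        by_user[state.user_id] = state
    return list(by_user.values())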
@@ -228,7 +284,7 @@ class PresenceHandler(BaseHandler):
new_state, should_notify, should_ping = handle_update(
prev_state, new_state,
is_mine=self.hs.is_mine_id(user_id),
is_mine=self.is_mine_id(user_id),
wheel_timer=self.wheel_timer,
now=now
)
@@ -268,31 +324,48 @@ class PresenceHandler(BaseHandler):
"""Checks the presence of users that have timed out and updates as
appropriate.
"""
logger.info("Handling presence timeouts")
now = self.clock.time_msec()
with Measure(self.clock, "presence_handle_timeouts"):
# Fetch the list of users that *may* have timed out. Things may have
# changed since the timeout was set, so we won't necessarily have to
# take any action.
users_to_check = self.wheel_timer.fetch(now)
try:
with Measure(self.clock, "presence_handle_timeouts"):
# Fetch the list of users that *may* have timed out. Things may have
# changed since the timeout was set, so we won't necessarily have to
# take any action.
users_to_check = set(self.wheel_timer.fetch(now))
states = [
self.user_to_current_state.get(
user_id, UserPresenceState.default(user_id)
# Check whether the lists of syncing processes from an external
# process have expired.
expired_process_ids = [
process_id for process_id, last_update
in self.external_process_last_updated_ms.items()
if now - last_update > EXTERNAL_PROCESS_EXPIRY
]
for process_id in expired_process_ids:
users_to_check.update(
self.external_process_to_current_syncs.pop(process_id, ())
)
self.external_process_last_updated_ms.pop(process_id, None)
states = [
self.user_to_current_state.get(
user_id, UserPresenceState.default(user_id)
)
for user_id in users_to_check
]
timers_fired_counter.inc_by(len(states))
changes = handle_timeouts(
states,
is_mine_fn=self.is_mine_id,
syncing_user_ids=self.get_currently_syncing_users(),
now=now,
)
for user_id in set(users_to_check)
]
timers_fired_counter.inc_by(len(states))
changes = handle_timeouts(
states,
is_mine_fn=self.hs.is_mine_id,
user_to_num_current_syncs=self.user_to_num_current_syncs,
now=now,
)
preserve_fn(self._update_states)(changes)
preserve_fn(self._update_states)(changes)
except:
logger.exception("Exception in _handle_timeouts loop")
@defer.inlineCallbacks
def bump_presence_active_time(self, user):
@@ -365,6 +438,74 @@ class PresenceHandler(BaseHandler):
defer.returnValue(_user_syncing())
def get_currently_syncing_users(self):
"""Get the set of user ids that are currently syncing on this HS.
Returns:
set(str): A set of user_id strings.
"""
syncing_user_ids = {
user_id for user_id, count in self.user_to_num_current_syncs.items()
if count
}
for user_ids in self.external_process_to_current_syncs.values():
syncing_user_ids.update(user_ids)
return syncing_user_ids
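For example (ids and counts illustrative), a user counts as syncing if their local sync count is non-zero or any external process lists them:

    handler.user_to_num_current_syncs = {"@a:hs": 2, "@b:hs": 0}
    handler.external_process_to_current_syncs = {"proc-1": {"@c:hs"}}
    assert handler.get_currently_syncing_users() == {"@a:hs", "@c:hs"}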
@defer.inlineCallbacks
def update_external_syncs(self, process_id, syncing_user_ids):
"""Update the syncing users for an external process
Args:
process_id(str): An identifier for the process the users are
syncing against. This allows synapse to process updates
as users start and stop syncing against a given process.
syncing_user_ids(set(str)): The set of user_ids that are
currently syncing on that process.
"""
# Grab the previous list of user_ids that were syncing on that process
prev_syncing_user_ids = (
self.external_process_to_current_syncs.get(process_id, set())
)
# Grab the current presence state for both the users that are syncing
# now and the users that were syncing before this update.
prev_states = yield self.current_state_for_users(
syncing_user_ids | prev_syncing_user_ids
)
updates = []
time_now_ms = self.clock.time_msec()
# For each new user that is syncing check if we need to mark them as
# being online.
for new_user_id in syncing_user_ids - prev_syncing_user_ids:
prev_state = prev_states[new_user_id]
if prev_state.state == PresenceState.OFFLINE:
updates.append(prev_state.copy_and_replace(
state=PresenceState.ONLINE,
last_active_ts=time_now_ms,
last_user_sync_ts=time_now_ms,
))
else:
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
))
# For each user that is still syncing or stopped syncing update the
# last sync time so that we will correctly apply the grace period when
# they stop syncing.
for old_user_id in prev_syncing_user_ids:
prev_state = prev_states[old_user_id]
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
))
yield self._update_states(updates)
# Update the last updated time for the process. We expire the entries
# if we don't receive an update in the given timeframe.
self.external_process_last_updated_ms[process_id] = self.clock.time_msec()
self.external_process_to_current_syncs[process_id] = syncing_user_ids
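An external process would be expected to call this repeatedly, comfortably within EXTERNAL_PROCESS_EXPIRY, so its users are not timed out. A hedged sketch of such a reporting loop (the loop and its helpers are illustrative, not Synapse API):

    @defer.inlineCallbacks
    def report_external_syncs(presence_handler, process_id):
        while True:
            user_ids = set(get_local_syncing_user_ids())  # hypothetical helper
            yield presence_handler.update_external_syncs(process_id, user_ids)
            # Report at least twice per expiry window so the entry never lapses.
            yield sleep_ms(EXTERNAL_PROCESS_EXPIRY / 2)  # hypothetical ms sleep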
@defer.inlineCallbacks
def current_state_for_user(self, user_id):
"""Get the current presence state for a user.
@@ -403,7 +544,7 @@ class PresenceHandler(BaseHandler):
defer.returnValue(states)
@defer.inlineCallbacks
def _get_interested_parties(self, states):
def _get_interested_parties(self, states, calculate_remote_hosts=True):
"""Given a list of states return which entities (rooms, users, servers)
are interested in the given states.
@@ -426,21 +567,24 @@ class PresenceHandler(BaseHandler):
users_to_states.setdefault(state.user_id, []).append(state)
hosts_to_states = {}
for room_id, states in room_ids_to_states.items():
local_states = filter(lambda s: self.hs.is_mine_id(s.user_id), states)
if not local_states:
continue
if calculate_remote_hosts:
for room_id, states in room_ids_to_states.items():
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
if not local_states:
continue
hosts = yield self.store.get_joined_hosts_for_room(room_id)
for host in hosts:
hosts_to_states.setdefault(host, []).extend(local_states)
users = yield self.state.get_current_user_in_room(room_id)
hosts = set(get_domain_from_id(u) for u in users)
for host in hosts:
hosts_to_states.setdefault(host, []).extend(local_states)
for user_id, states in users_to_states.items():
local_states = filter(lambda s: self.hs.is_mine_id(s.user_id), states)
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
if not local_states:
continue
host = UserID.from_string(user_id).domain
host = get_domain_from_id(user_id)
hosts_to_states.setdefault(host, []).extend(local_states)
# TODO: de-dup hosts_to_states, as a single host might have multiple
@@ -465,24 +609,24 @@ class PresenceHandler(BaseHandler):
self._push_to_remotes(hosts_to_states)
@defer.inlineCallbacks
def notify_for_states(self, state, stream_id):
parties = yield self._get_interested_parties([state])
room_ids_to_states, users_to_states, hosts_to_states = parties
self.notifier.on_new_event(
"presence_key", stream_id, rooms=room_ids_to_states.keys(),
users=[UserID.from_string(u) for u in users_to_states.keys()]
)
def _push_to_remotes(self, hosts_to_states):
"""Sends state updates to remote servers.
Args:
hosts_to_states (dict): Mapping `server_name` -> `[UserPresenceState]`
"""
now = self.clock.time_msec()
for host, states in hosts_to_states.items():
self.federation.send_edu(
destination=host,
edu_type="m.presence",
content={
"push": [
_format_user_presence_state(state, now)
for state in states
]
}
)
self.federation.send_presence(host, states)
@defer.inlineCallbacks
def incoming_presence(self, origin, content):
@@ -503,6 +647,13 @@ class PresenceHandler(BaseHandler):
)
continue
if get_domain_from_id(user_id) != origin:
logger.info(
"Got presence update from %r with bad 'user_id': %r",
origin, user_id,
)
continue
presence_state = push.get("presence", None)
if not presence_state:
logger.info(
@@ -562,17 +713,17 @@ class PresenceHandler(BaseHandler):
defer.returnValue([
{
"type": "m.presence",
"content": _format_user_presence_state(state, now),
"content": format_user_presence_state(state, now),
}
for state in updates
])
else:
defer.returnValue([
_format_user_presence_state(state, now) for state in updates
format_user_presence_state(state, now) for state in updates
])
@defer.inlineCallbacks
def set_state(self, target_user, state):
def set_state(self, target_user, state, ignore_status_msg=False):
"""Set the presence state of the user.
"""
status_msg = state.get("status_msg", None)
@@ -589,10 +740,13 @@ class PresenceHandler(BaseHandler):
prev_state = yield self.current_state_for_user(user_id)
new_fields = {
"state": presence,
"status_msg": status_msg if presence != PresenceState.OFFLINE else None
"state": presence
}
if not ignore_status_msg:
msg = status_msg if presence != PresenceState.OFFLINE else None
new_fields["status_msg"] = msg
if presence == PresenceState.ONLINE:
new_fields["last_active_ts"] = self.clock.time_msec()
@@ -611,14 +765,14 @@ class PresenceHandler(BaseHandler):
# don't need to send to local clients here, as that is done as part
# of the event stream/sync.
# TODO: Only send to servers not already in the room.
if self.hs.is_mine(user):
user_ids = yield self.state.get_current_user_in_room(room_id)
if self.is_mine(user):
state = yield self.current_state_for_user(user.to_string())
hosts = yield self.store.get_joined_hosts_for_room(room_id)
hosts = set(get_domain_from_id(u) for u in user_ids)
self._push_to_remotes({host: (state,) for host in hosts})
else:
user_ids = yield self.store.get_users_in_room(room_id)
user_ids = filter(self.hs.is_mine_id, user_ids)
user_ids = filter(self.is_mine_id, user_ids)
states = yield self.current_state_for_users(user_ids)
@@ -628,7 +782,7 @@ class PresenceHandler(BaseHandler):
def get_presence_list(self, observer_user, accepted=None):
"""Returns the presence for all users in their presence list.
"""
if not self.hs.is_mine(observer_user):
if not self.is_mine(observer_user):
raise SynapseError(400, "User is not hosted on this Home Server")
presence_list = yield self.store.get_presence_list(
@@ -659,7 +813,7 @@ class PresenceHandler(BaseHandler):
observer_user.localpart, observed_user.to_string()
)
if self.hs.is_mine(observed_user):
if self.is_mine(observed_user):
yield self.invite_presence(observed_user, observer_user)
else:
yield self.federation.send_edu(
@@ -675,11 +829,11 @@ class PresenceHandler(BaseHandler):
def invite_presence(self, observed_user, observer_user):
"""Handles new presence invites.
"""
if not self.hs.is_mine(observed_user):
if not self.is_mine(observed_user):
raise SynapseError(400, "User is not hosted on this Home Server")
# TODO: Don't auto accept
if self.hs.is_mine(observer_user):
if self.is_mine(observer_user):
yield self.accept_presence(observed_user, observer_user)
else:
self.federation.send_edu(
@@ -742,7 +896,7 @@ class PresenceHandler(BaseHandler):
Returns:
A Deferred.
"""
if not self.hs.is_mine(observer_user):
if not self.is_mine(observer_user):
raise SynapseError(400, "User is not hosted on this Home Server")
yield self.store.del_presence_list(
@@ -793,28 +947,38 @@ class PresenceHandler(BaseHandler):
def should_notify(old_state, new_state):
"""Decides if a presence state change should be sent to interested parties.
"""
if old_state == new_state:
return False
if old_state.status_msg != new_state.status_msg:
return True
if old_state.state == PresenceState.ONLINE:
if new_state.state != PresenceState.ONLINE:
# Always notify for online -> anything
return True
if new_state.currently_active != old_state.currently_active:
return True
if new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
# Always notify for a transition where last active gets bumped.
notify_reason_counter.inc("status_msg_change")
return True
if old_state.state != new_state.state:
notify_reason_counter.inc("state_change")
state_transition_counter.inc(old_state.state, new_state.state)
return True
if old_state.state == PresenceState.ONLINE:
if new_state.currently_active != old_state.currently_active:
notify_reason_counter.inc("current_active_change")
return True
if new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
# Only notify about last active bumps if we're not currently active
if not new_state.currently_active:
notify_reason_counter.inc("last_active_change_online")
return True
elif new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
# Always notify for a transition where last active gets bumped.
notify_reason_counter.inc("last_active_change_not_online")
return True
return False
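As a concrete illustration of the branches above (user id illustrative; the types and helpers are the ones used throughout this module):

    prev = UserPresenceState.default("@alice:example.com").copy_and_replace(
        state=PresenceState.ONLINE,
    )
    assert not should_notify(prev, prev)  # identical states: never notify
    changed = prev.copy_and_replace(status_msg="brb")
    assert should_notify(prev, changed)   # status_msg change: always notify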
def _format_user_presence_state(state, now):
def format_user_presence_state(state, now):
"""Convert UserPresenceState to a format that can be sent down to clients
and to other servers.
"""
@@ -834,9 +998,14 @@ def _format_user_presence_state(state, now):
class PresenceEventSource(object):
def __init__(self, hs):
self.hs = hs
# We can't call get_presence_handler here because there's a cycle:
#
# Presence -> Notifier -> PresenceEventSource -> Presence
#
self.get_presence_handler = hs.get_presence_handler
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
@defer.inlineCallbacks
@log_function
@@ -860,7 +1029,7 @@ class PresenceEventSource(object):
from_key = int(from_key)
room_ids = room_ids or []
presence = self.hs.get_handlers().presence_handler
presence = self.get_presence_handler()
stream_change_cache = self.store.presence_stream_cache
if not room_ids:
@@ -877,13 +1046,13 @@ class PresenceEventSource(object):
user_ids_changed = set()
changed = None
if from_key and max_token - from_key < 100:
# For small deltas, it's quicker to get all changes and then
# work out if we share a room or they're in our presence list
if from_key:
changed = stream_change_cache.get_all_entities_changed(from_key)
# get_all_entities_changed can return None
if changed is not None:
if changed is not None and len(changed) < 500:
# For small deltas, it's quicker to get all changes and then
# work out if we share a room or they're in our presence list
get_updates_counter.inc("stream")
for other_user_id in changed:
if other_user_id in friends:
user_ids_changed.add(other_user_id)
@@ -895,9 +1064,11 @@ class PresenceEventSource(object):
else:
# Too many possible updates. Find all users we can see and check
# if any of them have changed.
get_updates_counter.inc("full")
user_ids_to_check = set()
for room_id in room_ids:
users = yield self.store.get_users_in_room(room_id)
users = yield self.state.get_current_user_in_room(room_id)
user_ids_to_check.update(users)
user_ids_to_check.update(friends)
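The two code paths above, in isolation (a sketch; visible_user_ids stands in for the users this client may see, and 500 is the cutoff from the diff):

    changed = stream_change_cache.get_all_entities_changed(from_key)
    if changed is not None and len(changed) < 500:
        # Small delta: walk the changed users and keep the visible ones.
        candidates = [u for u in changed if u in visible_user_ids]
    else:
        # Cache miss or large delta: recompute from the full visible set.
        candidates = visible_user_ids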
@@ -920,7 +1091,7 @@ class PresenceEventSource(object):
defer.returnValue(([
{
"type": "m.presence",
"content": _format_user_presence_state(s, now),
"content": format_user_presence_state(s, now),
}
for s in updates.values()
if include_offline or s.state != PresenceState.OFFLINE
@@ -933,15 +1104,14 @@ class PresenceEventSource(object):
return self.get_new_events(user, from_key=None, include_offline=False)
def handle_timeouts(user_states, is_mine_fn, user_to_num_current_syncs, now):
def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now):
"""Checks the presence of users that have timed out and updates as
appropriate.
Args:
user_states(list): List of UserPresenceState's to check.
is_mine_fn (fn): Function that returns if a user_id is ours
user_to_num_current_syncs (dict): Mapping of user_id to number of currently
active syncs.
syncing_user_ids (set): Set of user_ids with active syncs.
now (int): Current time in ms.
Returns:
@@ -952,21 +1122,20 @@ def handle_timeouts(user_states, is_mine_fn, user_to_num_current_syncs, now):
for state in user_states:
is_mine = is_mine_fn(state.user_id)
new_state = handle_timeout(state, is_mine, user_to_num_current_syncs, now)
new_state = handle_timeout(state, is_mine, syncing_user_ids, now)
if new_state:
changes[state.user_id] = new_state
return changes.values()
def handle_timeout(state, is_mine, user_to_num_current_syncs, now):
def handle_timeout(state, is_mine, syncing_user_ids, now):
"""Checks the presence of the user to see if any of the timers have elapsed
Args:
state (UserPresenceState)
is_mine (bool): Whether the user is ours
user_to_num_current_syncs (dict): Mapping of user_id to number of currently
active syncs.
syncing_user_ids (set): Set of user_ids with active syncs.
now (int): Current time in ms.
Returns:
@@ -1000,7 +1169,7 @@ def handle_timeout(state, is_mine, user_to_num_current_syncs, now):
# If there have been no syncs for a while (and none ongoing),
# set presence to offline
if not user_to_num_current_syncs.get(user_id, 0):
if user_id not in syncing_user_ids:
if now - state.last_user_sync_ts > SYNC_ONLINE_TIMEOUT:
state = state.copy_and_replace(
state=PresenceState.OFFLINE,


@@ -13,15 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
import synapse.types
from synapse.api.errors import SynapseError, AuthError, CodeMessageException
from synapse.types import UserID, Requester
from synapse.types import UserID
from ._base import BaseHandler
import logging
logger = logging.getLogger(__name__)
@@ -36,13 +36,6 @@ class ProfileHandler(BaseHandler):
"profile", self.on_profile_query
)
distributor = hs.get_distributor()
distributor.observe("registered_user", self.registered_user)
def registered_user(self, user):
return self.store.create_profile(user.localpart)
@defer.inlineCallbacks
def get_displayname(self, target_user):
if self.hs.is_mine(target_user):
@@ -72,13 +65,13 @@ class ProfileHandler(BaseHandler):
defer.returnValue(result["displayname"])
@defer.inlineCallbacks
def set_displayname(self, target_user, requester, new_displayname):
def set_displayname(self, target_user, requester, new_displayname, by_admin=False):
"""target_user is the user whose displayname is to be changed;
auth_user is the user attempting to make this change."""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this Home Server")
if target_user != requester.user:
if not by_admin and target_user != requester.user:
raise AuthError(400, "Cannot set another user's displayname")
if new_displayname == '':
@@ -118,13 +111,13 @@ class ProfileHandler(BaseHandler):
defer.returnValue(result["avatar_url"])
@defer.inlineCallbacks
def set_avatar_url(self, target_user, requester, new_avatar_url):
def set_avatar_url(self, target_user, requester, new_avatar_url, by_admin=False):
"""target_user is the user whose avatar_url is to be changed;
auth_user is the user attempting to make this change."""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this Home Server")
if target_user != requester.user:
if not by_admin and target_user != requester.user:
raise AuthError(400, "Cannot set another user's avatar_url")
yield self.store.set_profile_avatar_url(
@@ -172,7 +165,9 @@ class ProfileHandler(BaseHandler):
try:
# Assume the user isn't a guest because we don't let guests set
# profile or avatar data.
requester = Requester(user, "", False)
# XXX why are we recreating `requester` here for each room?
# what was wrong with the `requester` we were passed?
requester = synapse.types.create_requester(user)
yield handler.update_membership(
requester,
user,


@@ -18,6 +18,7 @@ from ._base import BaseHandler
from twisted.internet import defer
from synapse.util.logcontext import PreserveLoggingContext
from synapse.types import get_domain_from_id
import logging
@@ -29,12 +30,15 @@ class ReceiptsHandler(BaseHandler):
def __init__(self, hs):
super(ReceiptsHandler, self).__init__(hs)
self.server_name = hs.config.server_name
self.store = hs.get_datastore()
self.hs = hs
self.federation = hs.get_replication_layer()
self.federation.register_edu_handler(
"m.receipt", self._received_remote_receipt
)
self.clock = self.hs.get_clock()
self.state = hs.get_state_handler()
@defer.inlineCallbacks
def received_client_receipt(self, room_id, receipt_type, user_id,
@@ -80,6 +84,9 @@ class ReceiptsHandler(BaseHandler):
def _handle_new_receipts(self, receipts):
"""Takes a list of receipts, stores them and informs the notifier.
"""
min_batch_id = None
max_batch_id = None
for receipt in receipts:
room_id = receipt["room_id"]
receipt_type = receipt["receipt_type"]
@@ -97,10 +104,21 @@ class ReceiptsHandler(BaseHandler):
stream_id, max_persisted_id = res
with PreserveLoggingContext():
self.notifier.on_new_event(
"receipt_key", max_persisted_id, rooms=[room_id]
)
if min_batch_id is None or stream_id < min_batch_id:
min_batch_id = stream_id
if max_batch_id is None or max_persisted_id > max_batch_id:
max_batch_id = max_persisted_id
affected_room_ids = list(set([r["room_id"] for r in receipts]))
with PreserveLoggingContext():
self.notifier.on_new_event(
"receipt_key", max_batch_id, rooms=affected_room_ids
)
# Note that the min here shouldn't be relied upon to be accurate.
self.hs.get_pusherpool().on_new_receipts(
min_batch_id, max_batch_id, affected_room_ids
)
defer.returnValue(True)
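The batching above trades one notifier wake-up per receipt for one per batch; the min/max bookkeeping in isolation (a sketch, not Synapse API):

    min_id = max_id = None
    for low, high in persisted_ranges:  # (stream_id, max_persisted_id) pairs
        min_id = low if min_id is None else min(min_id, low)
        max_id = high if max_id is None else max(max_id, high)
    if max_id is not None:
        notifier.on_new_event("receipt_key", max_id, rooms=affected_room_ids)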
@@ -117,12 +135,10 @@ class ReceiptsHandler(BaseHandler):
event_ids = receipt["event_ids"]
data = receipt["data"]
remotedomains = set()
rm_handler = self.hs.get_handlers().room_member_handler
yield rm_handler.fetch_room_distributions_into(
room_id, localusers=None, remotedomains=remotedomains
)
users = yield self.state.get_current_user_in_room(room_id)
remotedomains = set(get_domain_from_id(u) for u in users)
remotedomains = remotedomains.copy()
remotedomains.discard(self.server_name)
logger.debug("Sending receipt to: %r", remotedomains)
@@ -140,6 +156,7 @@ class ReceiptsHandler(BaseHandler):
}
},
},
key=(room_id, receipt_type, user_id),
)
@defer.inlineCallbacks
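The added key=(room_id, receipt_type, user_id) lets the federation transaction queue coalesce receipts: a newer receipt for the same key replaces one still queued for that destination instead of both being sent. A minimal sketch of replace-on-key queuing (illustrative, not the actual queue):

    pending_edus_keyed = {}  # (destination, key) -> most recent EDU content

    def queue_edu(destination, key, content):
        # A later receipt with the same key overwrites the unsent one.
        pending_edus_keyed[(destination, key)] = content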

Some files were not shown because too many files have changed in this diff.