Compare commits

2218 Commits

Author SHA1 Message Date
Matthew Hodgson
5ea4d5d38a fix NPE 2018-01-07 23:08:54 +00:00
Matthew Hodgson
f512e5c137 entirely blindly implement ?ts for AS. untested 2018-01-04 19:10:35 +00:00
Richard van der Hoff
3f9f1c50f3 Merge pull request #2683 from seckrv/fix_pwd_auth_prov_typo
synapse/config/password_auth_providers: Fixed bracket typo
2017-12-18 22:37:21 +00:00
Richard van der Hoff
48fa4e1e5b Merge pull request #2435 from silkeh/listen-ipv6-default
Adapt the default config to bind on both IPv4 and IPv6 on all platforms
2017-12-18 22:34:50 +00:00
Silke
df0f602796 Implement listen_tcp method in remaining workers
Signed-off-by: Silke <silke@slxh.eu>
2017-12-18 20:00:42 +01:00
Silke
26cd3f5690 Remove logger argument and do not catch replication listener
Signed-off-by: Silke <silke@slxh.eu>
2017-12-18 20:00:42 +01:00
Silke Hofstra
ed48ecc58c Add methods for listening on multiple addresses
Add listen_tcp and listen_ssl which implement Twisted's reactor.listenTCP
and reactor.listenSSL for multiple addresses.

Signed-off-by: Silke Hofstra <silke@slxh.eu>
2017-12-17 13:15:48 +01:00
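A minimal sketch of such a helper, assuming Twisted's reactor and a list of configured bind addresses (names and defaults here are illustrative, not Synapse's actual code):

```python
from twisted.internet import reactor

def listen_tcp(bind_addresses, port, factory, backlog=50):
    # open one listening port per configured address, e.g. both
    # "::" and "0.0.0.0" when dual-stack binding is wanted
    return [
        reactor.listenTCP(port, factory, backlog, interface=address)
        for address in bind_addresses
    ]
```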
Silke Hofstra
37d1a90025 Allow binds to both :: and 0.0.0.0
Binding on 0.0.0.0 when :: is specified in the bind_addresses is now allowed.
This causes a warning explaining the behaviour.
Configuration changed to match.

See #2232

Signed-off-by: Silke Hofstra <silke@slxh.eu>
2017-12-17 13:10:31 +01:00
Willem Mulder
3e59143ba8 Adapt the default config to bind on IPv6.
Most deployments are on Linux (or Mac OS), so this would actually bind
on both IPv4 and IPv6.

Resolves #1886.

Signed-off-by: Willem Mulder <willemmaster@hotmail.com>
2017-12-17 13:07:37 +01:00
Erik Johnston
ba24576f2f Merge pull request #2717 from matrix-org/erikj/createroom_content
Fix wrong avatars when inviting multiple users when creating room
2017-12-07 20:00:34 +00:00
Erik Johnston
d8a6c734fa Merge branch 'develop' of github.com:matrix-org/synapse into erikj/createroom_content 2017-12-07 14:24:01 +00:00
Erik Johnston
ef045dcd71 Copy dict in update_membership too 2017-12-07 14:17:15 +00:00
Matthew Hodgson
33cb7ef0b7 Merge pull request #2723 from matrix-org/matthew/search-all-local-users
Add all local users to the user_directory and optionally search them
2017-12-05 11:09:47 +00:00
Matthew Hodgson
cdc2cb5d11 fix StoreError syntax 2017-12-05 11:09:31 +00:00
Richard van der Hoff
16ec3805e5 Fix error when deleting devices
This was introduced in d7ea8c4 / PR #2728
2017-12-05 09:49:22 +00:00
Richard van der Hoff
8529874368 Merge pull request #2729 from matrix-org/rav/custom_ui_auth
support custom login types for validating users
2017-12-05 09:43:43 +00:00
Richard van der Hoff
da1010c83a support custom login types for validating users
Wire the custom login type support from password providers into the UI-auth
user-validation flows.
2017-12-05 09:43:30 +00:00
Richard van der Hoff
cc58e177f3 Merge pull request #2728 from matrix-org/rav/validate_user_via_ui_auth
Factor out a validate_user_via_ui_auth method
2017-12-05 09:42:46 +00:00
Richard van der Hoff
d7ea8c4800 Factor out a validate_user_via_ui_auth method
Collect together all the places that validate a logged-in user via UI auth.
2017-12-05 09:42:30 +00:00
Richard van der Hoff
aa6ecf0984 Merge pull request #2727 from matrix-org/rav/refactor_ui_auth_return
Refactor UI auth implementation
2017-12-05 09:40:38 +00:00
Richard van der Hoff
d5f9fb06b0 Refactor UI auth implementation
Instead of returning False when auth is incomplete, throw an exception which
can be caught with a wrapper.
2017-12-05 09:40:05 +00:00
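Sketched with hypothetical names (Synapse's real class and wrapper may differ), the pattern is: incomplete auth raises, and a wrapper turns the exception into the 401 flow response.

```python
class InteractiveAuthIncompleteError(Exception):
    """Raised when user-interactive auth is not yet complete."""
    def __init__(self, result):
        super(InteractiveAuthIncompleteError, self).__init__(
            "Interactive auth not yet complete"
        )
        self.result = result  # the 401 body prompting the next auth stage

def interactive_auth_handler(handler):
    # wrapper: catch the exception and reply with a 401 plus flow state
    def wrapped(request, *args, **kwargs):
        try:
            return handler(request, *args, **kwargs)
        except InteractiveAuthIncompleteError as e:
            return 401, e.result
    return wrapped
```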
Matthew Hodgson
c22e73293a speed up the rate of initial spam for users 2017-12-04 18:05:28 +00:00
Matthew Hodgson
b11dca2025 better doc 2017-12-04 17:51:33 +00:00
Matthew Hodgson
7b86c1fdcd try to make tests work a bit more... 2017-12-04 17:10:03 +00:00
Richard van der Hoff
58ebdb037c Merge pull request #2716 from matrix-org/rav/federation_client_post
federation_client script: Support for posting content
2017-12-04 17:04:08 +00:00
Matthew Hodgson
95f8a713dc erik told me to 2017-12-04 16:56:25 +00:00
Matthew Hodgson
74e0cc74ce fix pep8 and tests 2017-12-04 15:11:38 +00:00
Matthew Hodgson
1bd40ca73e switch to a simpler 'search_all_users' button as per review feedback 2017-12-04 14:58:39 +00:00
Matthew Hodgson
f397153dfc Merge branch 'develop' into matthew/search-all-local-users 2017-11-30 01:51:38 +00:00
Matthew Hodgson
5406392f8b specify default user_directory_include_pattern 2017-11-30 01:45:34 +00:00
Matthew Hodgson
f61e107f63 remove null constraint on user_dir.room_id 2017-11-30 01:43:50 +00:00
Matthew Hodgson
4b1fceb913 fix alternation operator for FTS4 - how did this ever work!? 2017-11-30 01:34:03 +00:00
Matthew Hodgson
a4bb133b68 fix thinkos galore 2017-11-30 01:17:15 +00:00
Matthew Hodgson
cd3697e8b7 kick the user_directory index when new users register 2017-11-29 18:33:34 +00:00
Matthew Hodgson
3241c7aac3 untested WIP but might actually work 2017-11-29 18:27:05 +00:00
Richard van der Hoff
624c46eb06 Merge pull request #2721 from matrix-org/rav/get_user_by_access_token_comments
Improve comments on get_user_by_access_token
2017-11-29 17:57:06 +00:00
Richard van der Hoff
7a48a6b63e Merge pull request #2722 from matrix-org/rav/delete_device_on_logout
Delete devices and pushers on logouts etc
2017-11-29 17:56:46 +00:00
Matthew Hodgson
47d99a20d5 Add user_directory_include_pattern config param to expand search results to additional users
Initial commit; this doesn't work yet - the LIKE filtering seems too aggressive.
It also needs _do_initial_spam to be aware of prepopulating the whole user_directory_search table with all users...
...and it needs a handle_user_signup() or something to be added so that new signups get incrementally added to the table too.

Committing it here as a WIP
2017-11-29 16:46:45 +00:00
Richard van der Hoff
ad7e570d07 Delete devices in various logout situations
Make sure that we delete devices whenever a user is logged out due to any of
the following situations:

 * /logout
 * /logout_all
 * change password
 * deactivate account (by the user or by an admin)
 * invalidate access token from a dynamic module

Fixes #2672.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
ae31f8ce45 Move set_password into its own handler
Non-functional refactoring to move set_password. This means that we'll be able
to properly deactivate devices and access tokens without introducing a
dependency loop.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
7ca5c68233 Move deactivate_account into its own handler
Non-functional refactoring to move deactivate_account. This means that we'll be
able to properly deactivate devices and access tokens without introducing a
dependency loop.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
2c6d63922a Remove pushers when deleting access tokens
Whenever an access token is invalidated, we should remove the associated
pushers.
2017-11-29 16:44:35 +00:00
Richard van der Hoff
97d1a1dc01 Merge pull request #2718 from matrix-org/rav/notify_logcontexts
Clear logcontext before starting fed txn queue runner
2017-11-29 16:01:46 +00:00
Richard van der Hoff
8b45de90a4 Merge pull request #2719 from matrix-org/rav/handle_missing_hashes
Fix 500 when joining matrix-dev
2017-11-29 16:01:33 +00:00
Richard van der Hoff
7303ed65e1 Fix 500 when joining matrix-dev
matrix-dev has an event (`$/6ANj/9QWQyd71N6DpRQPf+SDUu11+HVMeKSpMzBCwM:zemos.net`)
which has no `hashes` member.

Check for missing `hashes` element in events.
2017-11-29 16:00:46 +00:00
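Illustratively, the defensive check treats a missing `hashes` member as empty rather than letting a `KeyError` turn into a 500 (a sketch, not the actual federation code):

```python
def get_event_hashes(event_dict):
    # some events, like the one above, carry no `hashes` member at all
    return event_dict.get("hashes", {})
```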
Richard van der Hoff
da562bd6a1 Improve comments on get_user_by_access_token
because I have to reverse-engineer this every time.
2017-11-29 15:52:41 +00:00
Richard van der Hoff
d4fb4f7c52 Clear logcontext before starting fed txn queue runner
These processes take a long time compared to the request, so there is lots of
"Entering|Restoring dead context" in the logs. Let's try to shut it up a bit.
2017-11-28 15:26:14 +00:00
Erik Johnston
dfbc45302e PEP8 2017-11-28 15:23:26 +00:00
Erik Johnston
c4c1d170af Fix wrong avatars when inviting multiple users when creating room
We reused the `content` dictionary between invite requests, which meant they could end up reusing the profile info for a previous user
2017-11-28 15:19:15 +00:00
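The fix amounts to taking a copy per invite instead of mutating one shared dict; roughly (names illustrative):

```python
def build_invite_content(shared_content, profile):
    content = dict(shared_content)  # fresh copy for this invitee
    content.update(profile)         # safe: can't leak into the next request
    return content
```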
Richard van der Hoff
fd04968f32 federation_client script: Support for posting content 2017-11-28 11:59:24 +00:00
Luke Barnard
c2a1194424 Merge pull request #2715 from matrix-org/luke/group-guest-access
Allow guest access to group APIs for reading
2017-11-28 11:58:54 +00:00
Luke Barnard
ab1b2d0ff2 Allow guest access to group APIs for reading 2017-11-28 11:23:00 +00:00
Richard van der Hoff
5a4da5bf78 Merge pull request #2697 from matrix-org/rav/fix_urlcache_index_error
Fix error on sqlite 3.7
2017-11-27 12:25:48 +00:00
Richard van der Hoff
84b31a3e7a Merge pull request #2713 from matrix-org/rav/no_upsert_forever
Avoid retrying forever on IntegrityError
2017-11-27 12:19:35 +00:00
Richard van der Hoff
df6c72ede3 Merge pull request #2711 from matrix-org/rav/fix_dns_errhandler
Fix error handling on dns lookup
2017-11-27 12:19:18 +00:00
Richard van der Hoff
04bb79f139 Merge pull request #2710 from matrix-org/rav/remove_dead_code
Tiny code cleanups
2017-11-27 12:15:44 +00:00
Richard van der Hoff
e828a7380a Merge pull request #2708 from matrix-org/rav/replication_logcontext_leaks
Fix some logcontext leaks in replication resource
2017-11-27 12:15:33 +00:00
Richard van der Hoff
7ef22a41a3 Merge pull request #2707 from matrix-org/rav/fix_urlpreview
Fix OPTIONS on preview_url
2017-11-27 12:15:14 +00:00
Richard van der Hoff
96387bd26f Merge pull request #2705 from matrix-org/rav/improve_tracebacks
Improve tracebacks on exceptions
2017-11-27 12:07:52 +00:00
Richard van der Hoff
6be01f599b Improve tracebacks on exceptions
Use failure.Failure to recover our failure, which will give us a useful
stacktrace, unlike the rethrown exception.
2017-11-27 12:05:58 +00:00
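A sketch of that pattern with Twisted's `failure.Failure`, which snapshots the in-flight exception and its traceback so it can be re-raised intact:

```python
from twisted.python import failure

def run_and_reraise(fn):
    try:
        return fn()
    except Exception:
        f = failure.Failure()  # captures the exception + original traceback
        # ... any cleanup or logging goes here ...
        f.raiseException()     # re-raises with the useful stack trace
```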
Richard van der Hoff
63ccaa5873 Avoid retrying forever on IntegrityError 2017-11-27 12:00:07 +00:00
Richard van der Hoff
8b38096a89 Fix error handling on dns lookup
pass the right arguments to the errback handler

Fixes "TypeError('eb() takes exactly 2 arguments (1 given)',)"
2017-11-24 16:47:48 +00:00
Richard van der Hoff
795b0849f3 Add a comment which might save some confusion 2017-11-24 00:34:56 +00:00
Richard van der Hoff
7f14f0ae38 Remove dead sync_callback
This is never used; let's remove it to stop confusing things.
2017-11-24 00:32:04 +00:00
Richard van der Hoff
0edf085b68 Fix some logcontext leaks in replication resource
The @measure_func annotations rely on the wrapped function respecting the
logcontext rules. Add the necessary yields to make this work.
2017-11-23 23:19:43 +00:00
Richard van der Hoff
8132a6b7ac Fix OPTIONS on preview_url
Fixes #2706
2017-11-23 17:52:31 +00:00
Richard van der Hoff
6b48b3e277 fix sql fails 2017-11-22 18:06:24 +00:00
Richard van der Hoff
2908f955d1 Check database in has_completed_background_updates
so that the right thing happens on workers.
2017-11-22 18:02:15 +00:00
Richard van der Hoff
79eba878a7 Merge pull request #2701 from matrix-org/rav/one_mediarepo_to_rule_them_all
Try to avoid having multiple PreviewUrlResource instances
2017-11-22 16:46:09 +00:00
Richard van der Hoff
68ca864141 Add config option to disable media_repo on main synapse
... to stop us doing the cache cleanup jobs on the master.
2017-11-22 16:20:27 +00:00
Richard van der Hoff
e1fd4751de Build MediaRepositoryResource as a homeserver dependency
This avoids the scenario where we have four different PreviewUrlResources
configured on a single app, each of which have their own caches and cache
clearing jobs.
2017-11-22 16:19:49 +00:00
Richard van der Hoff
148c113fbe Merge pull request #2700 from matrix-org/rav/worker_docs
* Improve documentation of workers

Fixes https://github.com/matrix-org/synapse/issues/2554
2017-11-21 18:28:40 +00:00
Richard van der Hoff
a0c6688976 Improve documentation of workers
Fixes https://github.com/matrix-org/synapse/issues/2554
2017-11-21 18:28:13 +00:00
Richard van der Hoff
d5a7c56ef9 Merge pull request #2698 from matrix-org/rav/remove_dead_dependencies
Clean up dependency list
2017-11-21 17:38:42 +00:00
Richard van der Hoff
0b4aa2dc21 Merge pull request #2689 from matrix-org/rav/unlock_account_data_upsert
Avoid locking account_data tables for upserts
2017-11-21 13:39:14 +00:00
Matthew Hodgson
3ab2cfec47 sanity checks 2017-11-21 12:10:20 +00:00
Richard van der Hoff
7298ed7c51 Clean up dependency list
remove those that aren't used at all, and replace the ones that don't have
builders with simple getters rather than dynamically-generated methods.
2017-11-21 11:15:41 +00:00
Richard van der Hoff
7098b65cb8 Fix error on sqlite 3.7
Create the url_cache index on local_media_repository as a background update, so
that we can detect whether we are on sqlite or not and create a partial or
complete index accordingly.

To avoid running the cleanup job before we have built the index, add a bailout
which will defer the cleanup if the bg updates are still running.

Fixes https://github.com/matrix-org/synapse/issues/2572.
2017-11-21 11:14:17 +00:00
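A rough illustration of the engine-dependent index creation (the SQL here is a sketch following the commit message, not the actual migration):

```python
def create_url_cache_index(txn, is_sqlite):
    if is_sqlite:
        # sqlite 3.7 has no partial indexes, so build a complete one
        txn.execute(
            "CREATE INDEX local_media_repository_url_idx "
            "ON local_media_repository(created_ts)"
        )
    else:
        # postgres (and newer sqlite) can index only the cached rows
        txn.execute(
            "CREATE INDEX local_media_repository_url_idx "
            "ON local_media_repository(created_ts) "
            "WHERE url_cache IS NOT NULL"
        )
```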
Matthew Hodgson
2145ee1976 don't double-invite in sync_room_to_group.pl 2017-11-19 00:48:47 +00:00
Richard van der Hoff
59a7275258 Merge pull request #2688 from matrix-org/rav/unlock_more_upsert
Avoid locking for upsert on pushers tables
2017-11-17 13:09:31 +00:00
Richard van der Hoff
d8a05418f9 Merge branch 'master' into develop 2017-11-17 12:13:14 +00:00
Richard van der Hoff
b102e93571 Merge branch 'release-v0.25.1' 2017-11-17 12:03:07 +00:00
Luke Barnard
cdf6fc15b0 Merge pull request #2686 from matrix-org/luke/as-flair
Add automagical AS Publicised Group(s)
2017-11-17 10:13:46 +00:00
Richard van der Hoff
74bbeb4373 Bump version in __init__.py 2017-11-17 10:10:53 +00:00
Richard van der Hoff
2187724ad2 Prep changelog for v0.25.1 2017-11-17 10:09:16 +00:00
Jurek
eded7084d2 Fix auth handler #2678 2017-11-17 10:07:27 +00:00
Matthew Hodgson
34c3d0a386 typo 2017-11-17 01:54:02 +00:00
Matthew Hodgson
9d50b6f0ea quick and dirty room membership<->group membership sync script 2017-11-17 01:54:02 +00:00
Luke Barnard
ab1dc84779 Add extra space before inline comment 2017-11-16 18:22:40 +00:00
Luke Barnard
7fb0e98b03 Extract group_id from the dict for multiple use 2017-11-16 18:18:30 +00:00
Luke Barnard
e836bdf734 Fix tests 2017-11-16 18:14:39 +00:00
Richard van der Hoff
c46139a17e Avoid locking account_data tables for upserts 2017-11-16 18:08:01 +00:00
Luke Barnard
d8391f0541 Remove unused GROUP_ID_REGEX 2017-11-16 18:05:57 +00:00
Luke Barnard
4e8374856d Document get_groups_for_user 2017-11-16 18:03:46 +00:00
Luke Barnard
270f9cd23a Flake8 2017-11-16 18:03:31 +00:00
Luke Barnard
9d83d52027 Use a generator instead of a list 2017-11-16 17:57:34 +00:00
Luke Barnard
5b48eec4a1 Make sure we check AS groups for lookup on bulk 2017-11-16 17:55:15 +00:00
Luke Barnard
b1edf26051 Check group_id belongs to this domain 2017-11-16 17:54:27 +00:00
Richard van der Hoff
06e5bcfc83 Avoid locking for upsert on pushers tables
* replace the upsert into deleted_pushers with an insert
* no need to lock for upsert on pusher_throttle
2017-11-16 17:52:23 +00:00
Jurek
624a8bbd67 Fix auth handler #2678 2017-11-16 17:19:02 +00:00
Richard van der Hoff
b26cbbb60e Revert "Merge pull request #2679 from jkolo/fix_auth_handler"
This PR was against master, not develop :(

This reverts commit 203058a027, reversing
changes made to 552f123bea.
2017-11-16 17:18:11 +00:00
Richard van der Hoff
203058a027 Merge pull request #2679 from jkolo/fix_auth_handler
Fix auth handler
2017-11-16 17:09:00 +00:00
Luke Barnard
97bd18af4e Add automagical AS Publicised Group(s)
via registration file "users" namespace:

```YAML
...
namespaces:
  users:
    - exclusive: true
      regex: '.*luke.*'
      group_id: '+all_the_lukes:hsdomain'
...
```

This is part of giving App Services their own groups for matching users. With this, ghost users will be given the appearance that they are in a group and that they have publicised the fact, but _only_ from the perspective of the `get_publicised_groups_for_user` API.
2017-11-16 16:44:55 +00:00
Richard van der Hoff
ba05f28ae7 Merge pull request #2684 from matrix-org/rav/unlock_upsert
Start work on avoiding table locks for upserts
2017-11-16 16:27:29 +00:00
Richard van der Hoff
77a1227870 Fix broken ref to IntegrityError 2017-11-16 16:03:38 +00:00
Richard van der Hoff
7ab2b69e18 Avoid locking pushers table on upsert
Now that _simple_upsert will retry on IntegrityError, we don't need to lock the
table.
2017-11-16 15:32:01 +00:00
Richard van der Hoff
10aaa1bc15 _simple_upsert: retry on IntegrityError
wrap the call to _simple_upsert_txn in a loop so that we retry on an
integrityerror: this means we can avoid locking the table provided there is an
unique index.
2017-11-16 15:30:15 +00:00
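Schematically, the retry loop is (assumed names; `sqlite3.IntegrityError` stands in for whichever DB-API error class applies):

```python
from sqlite3 import IntegrityError

def simple_upsert(run_in_txn, upsert_txn):
    while True:
        try:
            return run_in_txn(upsert_txn)
        except IntegrityError:
            # a concurrent writer inserted the same row between our
            # UPDATE and INSERT; retry -- the UPDATE will now match
            continue
```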
Richard van der Hoff
cdc9e50a5d Cleanup in _simple_upsert_txn
Bail out early to reduce indentation
2017-11-16 15:29:10 +00:00
Richard von Seck
6f05de0e5e synapse/config/password_auth_providers: Fixed bracket typo
Signed-off-by: Richard von Seck <richard.von-seck@gmx.net>
2017-11-16 15:59:38 +01:00
Jurek
56e2a4333e Fix auth handler #2678 2017-11-15 22:49:43 +01:00
Richard van der Hoff
f959c01600 Merge pull request #2661 from matrix-org/rav/statereadstore
Pull out bits of StateStore to a mixin
2017-11-15 17:23:01 +00:00
Richard van der Hoff
117a8c0d35 Merge pull request #2677 from matrix-org/rav/spec_r0.3.0
Declare support for r0.3.0
2017-11-15 17:10:45 +00:00
Richard van der Hoff
30d2730ee2 Declare support for r0.3.0 2017-11-15 16:24:22 +00:00
Erik Johnston
aa812feb41 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-11-15 11:35:34 +00:00
Erik Johnston
552f123bea Merge branch 'release-v0.25.0' of github.com:matrix-org/synapse 2017-11-15 11:32:24 +00:00
Erik Johnston
5d0cbf763f Bump changelog 2017-11-15 11:29:32 +00:00
Richard van der Hoff
1b83c09c03 Merge pull request #2675 from matrix-org/rav/remove_broken_logcontext_funcs
Remove preserve_context_over_{fn, deferred}
2017-11-15 11:13:53 +00:00
David Baker
7190a550dc Merge pull request #2650 from matrix-org/dbkr/push_include_content_option
Rename redact_content option to include_content
2017-11-15 10:47:38 +00:00
Richard van der Hoff
b2cd6accf5 Remove __PreservingContextDeferred too 2017-11-14 23:00:10 +00:00
Richard van der Hoff
038c994724 Merge pull request #2648 from krombel/update_prometheus
update prometheus-config to new format
2017-11-14 19:14:32 +00:00
Krombel
c161472575 Make clear that the config has changed since Prometheus v2
This restores the config that is usable for Prometheus pre-v2.0.0
The new config only works for Prometheus v2+
2017-11-14 19:59:26 +01:00
Richard van der Hoff
008aa2fc6d Merge pull request #2671 from matrix-org/rav/room_list_fixes
Reshuffle room list request code
2017-11-14 18:11:02 +00:00
Richard van der Hoff
6f30fd9235 Merge pull request #2673 from matrix-org/erikj/fix_port_script_groups
Add new boolean columns to port script
2017-11-14 16:03:41 +00:00
Erik Johnston
9ecf621404 Less s's 2017-11-14 15:55:15 +00:00
Erik Johnston
22db751d1e Add new boolean columns to port script 2017-11-14 15:48:50 +00:00
Erik Johnston
03feb7a34d Bump version and changelog 2017-11-14 14:51:25 +00:00
Richard van der Hoff
35a4b63240 Pull out bits of StateStore to a mixin
... so that we don't need to secretly gut-wrench it for use in the slaved
stores. I haven't done the other stores yet, but we should. I'm tired of the
workers breaking every time we tweak the stores because I forgot to gut-wrench
the right method.

fixes https://github.com/matrix-org/synapse/issues/2655.
2017-11-14 11:43:58 +00:00
Richard van der Hoff
4dd1bfa8c1 Revert "Revert "move _state_group_cache to statestore""
We're going to fix this properly on this branch, so that the _state_group_cache
can end up in StateGroupReadStore.

This reverts commit ab335edb02.
2017-11-14 11:43:58 +00:00
Richard van der Hoff
6caa379ba1 Merge pull request #2658 from matrix-org/rav/store_heirarchy_init
Make __init__ consistent across Store hierarchy
2017-11-14 11:43:12 +00:00
Richard van der Hoff
7e6fa29cb5 Remove preserve_context_over_{fn, deferred}
Both of these functions are known to leak logcontexts. Replace the remaining
calls to them and kill them off.
2017-11-14 11:22:42 +00:00
Richard van der Hoff
44a1bfd6a6 Reshuffle room list request code
I'm not entirely sure if this will actually help anything, but it simplifies
the code and might give further clues about why room list search requests are
blowing out the get_current_state_ids caches.
2017-11-14 10:29:58 +00:00
Richard van der Hoff
1fc66c7460 Add a load of logging to the room_list handler
So we can see what it gets up to.
2017-11-14 10:23:47 +00:00
Richard van der Hoff
7bd6c87eca Merge pull request #2668 from turt2live/travis/whoami
Add a route for determining who you are
2017-11-14 09:54:21 +00:00
Travis Ralston
812c191939 Remove redundant call
Signed-off-by: Travis Ralston <travpc@gmail.com>
2017-11-13 12:44:21 -07:00
Richard van der Hoff
c741ba59c9 Merge pull request #2669 from matrix-org/rav/cache_urlpreview_failure
Cache failures in url_preview handler
2017-11-13 18:36:24 +00:00
Richard van der Hoff
781c15a6a3 Merge pull request #2663 from matrix-org/rav/invalid_request_utf8
Fix 500 on invalid utf-8 in request
2017-11-13 18:35:24 +00:00
David Baker
45ab288e07 Print instead of logging
because we had to wait until the logger was set up
2017-11-13 18:32:08 +00:00
Richard van der Hoff
8b33ac8f6c Merge branch 'develop' into rav/invalid_request_utf8 2017-11-13 11:56:22 +00:00
Richard van der Hoff
63ef607f1f Fix tests for Store.__init__ update
Fix the test to pass the right number of args to the Store constructors
2017-11-13 10:46:08 +00:00
Richard van der Hoff
6cfee09be9 Make __init__ consistent across Store hierarchy
Add db_conn parameters to the `__init__` methods of the *Store classes, so that
they are all consistent, which makes the multiple inheritance work correctly
(and so that we can later extract mixins which can be used in the slavedstores)
2017-11-13 10:46:07 +00:00
Erik Johnston
ab335edb02 Revert "move _state_group_cache to statestore"
This reverts commit f5cf3638e9.
2017-11-13 10:05:33 +00:00
Erik Johnston
bfbf1e1f1a Up cache size of get_global_account_data_by_type_for_user 2017-11-13 09:52:11 +00:00
Travis Ralston
2d314b771f Add a route for determining who you are
Useful for applications which may have an access token, but no idea as to who owns it.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2017-11-12 23:39:38 -07:00
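A hedged sketch of such an endpoint (class and helper names are assumptions): authenticate the token on the request, then echo back its owner.

```python
class WhoamiRestServlet(object):
    PATTERN = "/account/whoami"

    def on_GET(self, request, auth):
        # resolve the access token to the user who owns it, or raise
        requester = auth.get_user_by_req(request)
        return 200, {"user_id": requester.user.to_string()}
```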
Richard van der Hoff
5d15abb120 Bit more logging 2017-11-10 16:58:04 +00:00
Richard van der Hoff
46790f50cf Cache failures in url_preview handler
Reshuffle the caching logic in the url_preview handler so that failures are
cached (and to generally simplify things and fix the logcontext leaks).
2017-11-10 16:50:50 +00:00
Richard van der Hoff
4d0414c714 Merge pull request #2662 from matrix-org/rav/fix_mxids_again
Downcase userid on registration
2017-11-10 13:57:34 +00:00
Richard van der Hoff
e508145c9b Add some more comments on appservice user registration
Explain why we don't validate userids registered via app services
2017-11-10 12:39:45 +00:00
Richard van der Hoff
e0ebd1e4bd Downcase userids for shared-secret registration 2017-11-10 12:39:05 +00:00
Richard van der Hoff
f90649eb2b Fix 500 on invalid utf-8 in request
If somebody sends us a request where the body is invalid utf-8, we should
return a 400 rather than a 500. (json.loads throws a UnicodeError in this
situation)

We might as well catch all Exceptions here: it seems very unlikely that we
would get a request that *isn't* caused by invalid json.
2017-11-10 09:15:39 +00:00
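Illustratively, with a hypothetical error class standing in for Synapse's 400 response:

```python
import json

class BadJsonError(Exception):
    """Hypothetical stand-in for an HTTP 400 error."""

def parse_json_body(raw_bytes):
    try:
        # invalid utf-8 raises UnicodeDecodeError; bad JSON, ValueError
        return json.loads(raw_bytes.decode("utf-8"))
    except Exception:
        raise BadJsonError("Content not JSON")
```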
Richard van der Hoff
9b599bc18d Downcase userid on registration
Force username to lowercase before attempting to register

https://github.com/matrix-org/synapse/issues/2660
2017-11-09 22:20:01 +00:00
Richard van der Hoff
9b803ccc98 Revert "Allow upper-case characters in mxids"
This reverts commit b70b646903.
2017-11-09 21:57:24 +00:00
Richard van der Hoff
1282086f58 Merge pull request #2659 from matrix-org/rav/apparently_we_dont_follow_our_own_spec_now
Allow upper-case characters in mxids
2017-11-09 20:06:47 +00:00
Richard van der Hoff
b70b646903 Allow upper-case characters in mxids
Because we're never going to be able to fix this :'(
2017-11-09 19:36:13 +00:00
Erik Johnston
2dce6b15c3 Fix typo 2017-11-09 15:56:16 +00:00
Erik Johnston
4e2b2508af Register group servlet 2017-11-09 15:49:42 +00:00
Luke Barnard
0ea5310290 Merge pull request #2657 from matrix-org/erikj/group_visibility_namespace
Namespace visibility options for groups
2017-11-09 15:41:51 +00:00
Erik Johnston
13735843c7 Namespace visibility options for groups 2017-11-09 15:27:18 +00:00
Richard van der Hoff
618c7b816a Merge pull request #2656 from matrix-org/rav/fix_deactivate
Fix 'NoneType' not iterable in /deactivate
2017-11-09 15:20:35 +00:00
Erik Johnston
0fcb5a8ce5 Merge pull request #2651 from matrix-org/erikj/update_group_room_settings
Change so that update group room visibility isn't an upsert
2017-11-09 15:20:24 +00:00
Richard van der Hoff
889102315e Fix 'NoneType' not iterable in /deactivate
make sure we actually return a value from user_delete_access_tokens
2017-11-09 15:15:33 +00:00
David Baker
b2a788e902 Make the commented config have the default 2017-11-09 10:11:42 +00:00
Erik Johnston
82e4bfb53d Add brackets 2017-11-09 10:06:42 +00:00
Erik Johnston
e8814410ef Have an explicit API to update room config 2017-11-08 16:13:27 +00:00
Erik Johnston
94ff2cda73 Revert "Modify group room association API to allow modification of is_public" 2017-11-08 15:43:34 +00:00
Erik Johnston
d305987b40 Merge pull request #2631 from xyzz/fix_appservice_event_backlog
Fix appservices being backlogged and not receiving new events due to a bug in notify_interested_services
2017-11-08 11:54:10 +00:00
Erik Johnston
167eb01d83 Merge pull request #2637 from spantaleev/avoid-noop-media-deletes
Avoid no-op media deletes
2017-11-08 11:53:27 +00:00
David Baker
ad408beb66 better comments 2017-11-08 11:50:08 +00:00
David Baker
1b870937ae Log if any of the old config flags are set 2017-11-08 11:46:24 +00:00
David Baker
2a98ba0ed3 Rename redact_content option to include_content
The redact_content option never worked because it read the wrong config
section. The PR introducing it
(https://github.com/matrix-org/synapse/pull/2301) had feedback suggesting the
name be changed to not re-use the term 'redact' but this wasn't
incorporated.

This renames the option to give it a less confusing name, and also
means that people who've set the redact_content option won't suddenly
see a behaviour change when upgrading synapse, but instead can set
include_content if they want to.

This PR also updates the wording of the config comment to clarify
that this has no effect on event_id_only push.

Includes https://github.com/matrix-org/synapse/pull/2422
2017-11-08 10:35:30 +00:00
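A sketch of reading the renamed option while flagging the old one (the config keys and the default are assumptions for illustration):

```python
def read_include_content(push_config):
    if "redact_content" in push_config:
        # the old flag never worked; warn instead of silently ignoring it
        print("Config: push.redact_content is deprecated and ignored; "
              "set push.include_content instead.")
    return push_config.get("include_content", True)  # assumed default
```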
Richard van der Hoff
02a9a93bde Merge pull request #2649 from matrix-org/rav/fix_delta_on_state_res
Fix bug in state group storage
2017-11-08 09:22:13 +00:00
Richard van der Hoff
e148438e97 s/items/iteritems/ 2017-11-08 09:21:41 +00:00
Ilya Zhuravlev
d46386d57e Remove useless assignment in notify_interested_services 2017-11-07 22:23:22 +03:00
Matthew Hodgson
228ccf1fe3 Merge pull request #2643 from matrix-org/matthew/user_dir_typos
Fix various embarrassing typos around user_directory and add some doc.
2017-11-07 17:31:11 +00:00
Richard van der Hoff
780dbb378f Update deltas when doing auth resolution
Fixes a bug where the persisted state groups were different to those actually
being used after auth resolution.
2017-11-07 16:43:00 +00:00
Richard van der Hoff
1ca4288135 factor out _update_context_for_auth_events
This is duplicated, so let's factor it out before fixing it
2017-11-07 16:43:00 +00:00
Richard van der Hoff
f5cf3638e9 move _state_group_cache to statestore
this is internal to statestore, so let's keep it there.
2017-11-07 16:43:00 +00:00
Erik Johnston
5ef5e14ecc Merge pull request #2636 from farialima/me-master
Fix for #2635: correctly update rooms avatar/display name when modified by admin
2017-11-07 13:49:27 +00:00
Erik Johnston
76c9af193c Revert "Merge branch 'master' of github.com:matrix-org/synapse into develop"
This reverts commit f9b255cd62, reversing
changes made to 1bd654dabd.
2017-11-07 13:32:35 +00:00
Erik Johnston
f9b255cd62 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-11-07 13:31:03 +00:00
Krombel
44ad6dd4bf update prometheus-config to new format 2017-11-07 13:35:35 +01:00
Luke Barnard
1bd654dabd Merge pull request #2647 from matrix-org/luke/get-group-users-is-privileged
Return whether a user is an admin within a group
2017-11-07 12:05:11 +00:00
Luke Barnard
38b265cb51 Remember to pick is_admin out of the db 2017-11-07 11:24:04 +00:00
Luke Barnard
5561c09091 Return whether a user is an admin within a group 2017-11-07 11:18:45 +00:00
Matthew Hodgson
3db5ff69b2 Merge pull request #2576 from maximevaillancourt/exclude-noscript-url-preview
Ignore <noscript> tags when generating URL preview descriptions
2017-11-07 11:09:22 +00:00
Richard van der Hoff
ec12e7eada Merge pull request #2646 from matrix-org/rav/logging_for_limiter
Logging and logcontext fixes for Limiter
2017-11-07 10:47:00 +00:00
Matthew Hodgson
631fa4a1b7 create new indexes before dropping old ones to keep the safety net in place 2017-11-07 10:41:55 +00:00
Richard van der Hoff
bf993db11c Logging and logcontext fixes for Limiter
Add some logging to the Limiter in a similar spirit to the Linearizer, to help
debug issues.

Also fix a logcontext leak.

Also refactor slightly to avoid throwing exceptions.
2017-11-07 00:48:57 +00:00
Matthew Hodgson
4ad883398f s/users_in_pubic_room/users_in_public_rooms/g 2017-11-04 19:39:40 +00:00
Matthew Hodgson
d802e8ca6a s/users_in_pubic_room/users_in_public_rooms/g 2017-11-04 19:38:13 +00:00
Matthew Hodgson
a100700630 fix copyright.... 2017-11-04 19:35:49 +00:00
Matthew Hodgson
b6b075fd49 s/popualte/populate/ 2017-11-04 19:35:33 +00:00
Matthew Hodgson
d1622e080f s/intial/initial/ 2017-11-04 19:35:14 +00:00
Matthew Hodgson
2ac6deafb7 simplify instructions for regenerating user_dir 2017-11-04 19:34:59 +00:00
Slavi Pantaleev
805196fbeb Avoid no-op media deletes
If there are no media entries to delete,
avoid creating transactions, prepared statements
and unnecessary log entries.

Signed-off-by: Slavi Pantaleev <slavi@devture.com>
2017-11-04 09:50:15 +02:00
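The early bail-out is the whole trick; a sketch:

```python
def delete_media(media_ids, run_delete_txn, logger):
    if not media_ids:
        # nothing to delete: skip the transaction, the prepared
        # statement and the log entry entirely
        return
    logger.info("Deleting %d media entries", len(media_ids))
    run_delete_txn(media_ids)
```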
Francois Granade
f103b91ffa removed unused import flagged by flake8 2017-11-03 18:45:49 +01:00
Francois Granade
fa4f337b49 Fix for issue 2635: correctly update rooms avatar/display name when modified by admin 2017-11-03 18:25:04 +01:00
Ilya Zhuravlev
8a4a0ddea6 Fix appservice tests to account for new behavior of notify_interested_services 2017-11-02 23:19:57 +03:00
Ilya Zhuravlev
45fbe4ff67 Fix appservices being backlogged and not receiving new events due to a bug in notify_interested_services 2017-11-02 22:49:43 +03:00
David Baker
f851bc8182 Merge pull request #2630 from matrix-org/luke/fix-rooms-in-group
Make the get_rooms_in_group API more sane
2017-11-02 17:23:17 +00:00
David Baker
9e09a1800b Merge pull request #2629 from matrix-org/rav/register_inhibit_login
support inhibit_login in /register
2017-11-02 16:51:35 +00:00
Luke Barnard
a34c586a89 Make the get_rooms_in_group API more sane
Return entries with is_public = True when they're public and is_public = False otherwise.
2017-11-02 16:42:30 +00:00
Richard van der Hoff
6c3a02072b support inhibit_login in /register
Allow things to pass inhibit_login when registering to ... inhibit logins.
2017-11-02 16:31:07 +00:00
David Baker
4a6754baf2 Merge pull request #2628 from matrix-org/rav/module_api_hooks
Add more hooks to ModuleApi
2017-11-02 15:37:14 +00:00
David Baker
4b36897cd9 Merge remote-tracking branch 'origin/develop' into rav/module_api_hooks 2017-11-02 15:19:17 +00:00
David Baker
d4553818a0 Merge pull request #2627 from matrix-org/rav/custom_rest_endpoints
Add a hook for custom rest endpoints
2017-11-02 15:18:37 +00:00
David Baker
6b6f03ae05 Merge pull request #2626 from matrix-org/rav/refactor_module_api
Factor _AccountHandler proxy out to ModuleApi
2017-11-02 15:15:30 +00:00
David Baker
77e3757fa9 Merge pull request #2625 from matrix-org/rav/named_resource_refactor
Factor out _configure_named_resource
2017-11-02 15:11:30 +00:00
Richard van der Hoff
6b60f7dca0 Add more hooks to ModuleApi
add `get_user_by_req` and `invalidate_access_token`
2017-11-02 14:37:39 +00:00
Richard van der Hoff
fcdfc911ee Add a hook for custom rest endpoints
Let the user specify custom modules which can be used for implementing extra
endpoints.
2017-11-02 14:36:55 +00:00
Richard van der Hoff
1189be43a2 Factor _AccountHandler proxy out to ModuleApi
We're going to need to use this from places that aren't password auth, so let's
move it to a proper class.
2017-11-02 14:36:11 +00:00
Richard van der Hoff
6650a07ede Factor out _configure_named_resource
This was a bit of a code vomit, so let's factor it out to preserve some sanity
2017-11-02 14:33:37 +00:00
David Baker
b19d9e2174 Merge pull request #2624 from matrix-org/rav/password_provider_notify_logout
Notify auth providers on logout
2017-11-02 10:55:17 +00:00
David Baker
1f080a6c97 Merge pull request #2623 from matrix-org/rav/callbacks_for_auth_providers
Allow password_auth_providers to return a callback
2017-11-02 10:49:03 +00:00
David Baker
04897c9dc1 Merge pull request #2622 from matrix-org/rav/db_access_for_auth_providers
Let auth providers get to the database
2017-11-02 10:41:25 +00:00
Richard van der Hoff
979eed4362 Fix user-interactive password auth
this got broken in the previous commit
2017-11-01 17:03:20 +00:00
Richard van der Hoff
bc8a5c0330 Notify auth providers on logout
Provide a hook by which auth providers can be notified of logouts.
2017-11-01 16:51:51 +00:00
Richard van der Hoff
4c8f94ac94 Allow password_auth_providers to return a callback
... so that they have a way to record access tokens.
2017-11-01 16:51:03 +00:00
Richard van der Hoff
846a94fbc9 Merge pull request #2620 from matrix-org/rav/auth_non_password
Let password auth providers handle arbitrary login types
2017-11-01 16:45:33 +00:00
Richard van der Hoff
3cd6b22c7b Let password auth providers handle arbitrary login types
Provide a hook where password auth providers can say they know about other
login types, and get passed the relevant parameters
2017-11-01 16:43:57 +00:00
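A hypothetical provider shape matching that description (method names are illustrative, not the documented interface):

```python
class ExamplePasswordProvider(object):
    def get_supported_login_types(self):
        # maps each extra login type to the fields expected in the login dict
        return {"com.example.totp": ["token"]}

    def check_auth(self, username, login_type, login_dict):
        # return a fully-qualified user id on success, None otherwise
        if login_type == "com.example.totp" and login_dict.get("token"):
            return "@%s:example.com" % username
        return None
```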
David Baker
c9b9ef575b Merge pull request #2621 from matrix-org/rav/refactor_accesstoken_delete
Move access token deletion into auth handler
2017-11-01 16:26:06 +00:00
Matthew Hodgson
275826f234 Merge pull request #2617 from matrix-org/matthew/auto-displayname
automatically set default displayname on register
2017-11-01 16:21:16 +00:00
David Baker
4f0488b307 Merge remote-tracking branch 'origin/develop' into rav/refactor_accesstoken_delete 2017-11-01 16:20:19 +00:00
David Baker
e5e930aec3 Merge pull request #2615 from matrix-org/rav/break_auth_device_dep
Break dependency of auth_handler on device_handler
2017-11-01 16:06:31 +00:00
David Baker
fbbacb284e Merge pull request #2613 from matrix-org/rav/kill_refresh_tokens
Remove the last vestiges of refresh_tokens
2017-11-01 15:57:35 +00:00
Matthew Hodgson
9f7a555b4e switch to setting default displayname in the storage layer
to avoid clobbering guest user displaynames on registration
2017-11-01 15:51:30 +00:00
Richard van der Hoff
dd13310fb8 Move access token deletion into auth handler
Also move duplicated deactivation code into the auth handler.

I want to add some hooks when we deactivate an access token, so let's bring it
all in here so that there's somewhere to put it.
2017-11-01 15:46:22 +00:00
David Baker
691cc4e036 Merge pull request #2618 from matrix-org/dbkr/log_login_requests
Log login requests
2017-11-01 14:11:28 +00:00
David Baker
0bb253f37b Apparently this is python 2017-11-01 14:02:52 +00:00
David Baker
59e7e62c4b Log login requests
Carefully though, to avoid logging passwords
2017-11-01 13:58:01 +00:00
Matthew Hodgson
f8420d6279 automatically set default displayname on register
to avoid leaking ugly MXIDs and cluttering up the timeline with
displayname changes as well as membership joins for autojoin rooms
(e.g. the status autojoin rooms), automatically set the displayname
to match the localpart of the mxid upon registration.
2017-11-01 13:15:41 +00:00
Luke Barnard
99354b430e Merge pull request #2612 from matrix-org/luke/groups-room-relationship-is-public
Modify group room association API to allow modification of is_public
2017-11-01 11:08:36 +00:00
Richard van der Hoff
74c56f794c Break dependency of auth_handler on device_handler
I'm going to need to make the device_handler depend on the auth_handler, so I
need to break this dependency to avoid a cycle.

It turns out that the auth_handler was only using the device_handler in one
place which was an edge case which we can more elegantly handle by throwing an
error rather than fixing it up.
2017-11-01 10:27:06 +00:00
Richard van der Hoff
02237ce725 Fix tests for refresh_token removal 2017-11-01 10:19:42 +00:00
Luke Barnard
318a249c8b Leave is_public as required argument of update_room_group_association 2017-11-01 09:36:01 +00:00
Luke Barnard
207fabbc6a Update docs for updating room group association 2017-11-01 09:35:15 +00:00
Richard van der Hoff
356bcafc44 Remove the last vestiges of refresh_tokens 2017-10-31 20:35:58 +00:00
Richard van der Hoff
3e0aaad190 Let auth providers get to the database
Somewhat open to abuse, but also somewhat unavoidable :/
2017-10-31 17:22:29 +00:00
Richard van der Hoff
a72e4e3e28 Merge pull request #2610 from matrix-org/rav/schema_for_pw_providers
DB schema interface for password auth providers
2017-10-31 17:20:47 +00:00
Luke Barnard
13b3d7b4a0 Flake8 2017-10-31 17:20:11 +00:00
Richard van der Hoff
e025aec028 Merge pull request #2611 from matrix-org/dbkr/port_script_drop_nuls
Make the port script drop NUL values in all tables
2017-10-31 17:16:08 +00:00
Luke Barnard
20fe347906 Modify group room association API to allow modification of is_public
also includes renamings to make things more consistent.
2017-10-31 17:04:28 +00:00
David Baker
9d419f48e6 Make the port script drop NUL values in all tables
Postgres doesn't support NULs in strings so it makes the script
throw an exception and stop if any values contain \0. Drop them
with appropriate warning.
2017-10-31 16:58:49 +00:00
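Schematically, the scrub looks like this (a sketch; the real script works table by table):

```python
def drop_nul_rows(rows, logger):
    clean = []
    for row in rows:
        if any(isinstance(v, str) and "\0" in v for v in row):
            logger.warning("Dropping row with NUL in a string: %r", row)
            continue
        clean.append(row)
    return clean
```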
Richard van der Hoff
9ded00f221 fix tests 2017-10-31 14:21:13 +00:00
Richard van der Hoff
1650eb5847 DB schema interface for password auth providers
Provide an interface by which password auth providers can register db schema
files to be run at startup
2017-10-31 14:01:53 +00:00
David Baker
c31a7c3ff6 Merge pull request #2609 from matrix-org/rav/refactor_login
Refactor some logic from LoginRestServlet into AuthHandler
2017-10-31 13:51:36 +00:00
Richard van der Hoff
b8e54fbc08 Merge pull request #2607 from matrix-org/rav/cleanup_ldap_hacks
Clean up backwards-compat hacks for ldap
2017-10-31 13:41:12 +00:00
David Baker
a1f8b0fd64 Merge pull request #2608 from matrix-org/rav/password_provider_doc
Start some documentation on password providers
2017-10-31 13:40:10 +00:00
Richard van der Hoff
1b65ae00ac Refactor some logic from LoginRestServlet into AuthHandler
I'm going to need some more flexibility in handling login types in password
auth providers, so as a first step, move some stuff from LoginRestServlet into
AuthHandler.

In particular, we pass everything other than SAML, JWT and token logins down to
the AuthHandler, which now has responsibility for checking the login type and
fishing the password out of the login dictionary, as well as qualifying the
user_id if need be. Ideally SAML, JWT and token would go that way too, but
there's no real need for it right now and I'm trying to minimise impact.

This commit *should* be non-functional.
2017-10-31 10:48:41 +00:00
Richard van der Hoff
ebda45de4c Start some documentation on password providers
Document the existing interface, before I start adding new stuff.
2017-10-31 10:47:52 +00:00
Richard van der Hoff
ffc574a6f9 Clean up backwards-compat hacks for ldap
try to make the backwards-compat flows follow the same code paths as the modern
impl.

This commit should be non-functional.
2017-10-31 10:47:02 +00:00
Richard van der Hoff
e2f4190209 Merge pull request #2605 from matrix-org/luke/fix-group-creation-error-wording
Fix wording on group creation error
2017-10-30 16:06:42 +00:00
Luke Barnard
9bc17fc5fb Fix wording on group creation error 2017-10-30 15:17:23 +00:00
Matthew Hodgson
208a6647f1 fix typo 2017-10-29 20:54:20 +00:00
Matthew Hodgson
e51c2bcaef move url_previews to MD as RST does my head in 2017-10-29 20:47:17 +00:00
Erik Johnston
71a1bd53b2 Merge pull request #2599 from matrix-org/erikj/groups_invite
Fix typo when checking if user is invited to group
2017-10-27 17:34:54 +01:00
Erik Johnston
d0abb4e8e6 Fix typo when checking if user is invited to group 2017-10-27 16:57:19 +01:00
Erik Johnston
977078f06d Fix bad merge 2017-10-27 15:10:50 +01:00
Erik Johnston
6980c4557e Merge branch 'erikj/attestation_jitter' of github.com:matrix-org/synapse into develop 2017-10-27 15:09:05 +01:00
Erik Johnston
632baf799e Merge pull request #2598 from matrix-org/revert-2596-erikj/attestation_jitter
Revert "Add jitter to validity period of attestations"
2017-10-27 15:08:29 +01:00
Erik Johnston
af92f5b00f Revert "Add jitter to validity period of attestations" 2017-10-27 15:07:21 +01:00
Erik Johnston
4ab8abbc2b Merge branch 'erikj/attestation_local_fix' of github.com:matrix-org/synapse into develop 2017-10-27 15:07:08 +01:00
Erik Johnston
b1e62d4a57 Merge pull request #2596 from matrix-org/erikj/attestation_jitter
Add jitter to validity period of attestations
2017-10-27 14:20:25 +01:00
Erik Johnston
6af3656deb Merge pull request #2595 from matrix-org/erikj/attestation_commnet
Add comment about attestations
2017-10-27 14:20:19 +01:00
Richard van der Hoff
4d83632009 Merge pull request #2591 from matrix-org/rav/device_delete_auth
Device deletion: check UI auth matches access token
2017-10-27 12:30:10 +01:00
Richard van der Hoff
110b373e9c Merge pull request #2589 from matrix-org/rav/as_deactivate_account
Allow ASes to deactivate their own users
2017-10-27 12:29:32 +01:00
Erik Johnston
ca571b0ec3 Add jitter to validity period of attestations
This helps ensure that the renewals of attestations are spread out more
evenly.
2017-10-27 11:57:27 +01:00
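For example, multiplying the validity period by a random factor spreads the renewals out (the constants here are made up, not Synapse's values):

```python
import random

DEFAULT_VALIDITY_MS = 24 * 60 * 60 * 1000  # illustrative only

def attestation_expiry(now_ms):
    # +/-10% jitter so a burst of attestations doesn't renew in lockstep
    return now_ms + int(DEFAULT_VALIDITY_MS * random.uniform(0.9, 1.1))
```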
Luke Barnard
d8c26162a1 Merge pull request #2582 from matrix-org/luke/group-is-public
Add is_public to groups table to allow for private groups
2017-10-27 11:41:13 +01:00
Erik Johnston
c067088747 Add comment about attestations 2017-10-27 11:35:41 +01:00
Luke Barnard
5451cc7792 Request is_public from database 2017-10-27 11:27:43 +01:00
Luke Barnard
124314672f group is dict 2017-10-27 11:08:19 +01:00
Luke Barnard
6362298fa5 Create groups with is_public = True 2017-10-27 11:04:20 +01:00
Richard van der Hoff
8b56977b6f Merge pull request #2586 from matrix-org/rav/frontend_proxy_auth_header
Front-end proxy: pass through auth header
2017-10-27 11:01:50 +01:00
Richard van der Hoff
173567a7f2 Docstring for post_urlencoded_get_json 2017-10-27 10:59:50 +01:00
Luke Barnard
c7d9f25d22 Fix create_group to pass requester_user_id 2017-10-27 10:57:20 +01:00
Erik Johnston
e27b76d117 Import logger 2017-10-27 10:54:02 +01:00
Richard van der Hoff
8854c039f2 Merge pull request #2585 from matrix-org/rav/unstable_to_r0
Support /keys/upload on /r0 as well as /unstable
2017-10-27 10:53:48 +01:00
Richard van der Hoff
14f581abc2 Merge pull request #2584 from matrix-org/rav/fix_httpclient_logcontexts
Fix logcontext leaks in httpclient
2017-10-27 10:53:29 +01:00
Luke Barnard
2ca46c7afc Correct logic for checking private group membership 2017-10-27 10:48:01 +01:00
Erik Johnston
82d8c1bacb Fixup 2017-10-27 10:30:21 +01:00
Erik Johnston
2fd9831f7c Merge pull request #2574 from matrix-org/erikj/room_list_fixes
Add logging and fix log contexts for publicRooms
2017-10-27 10:01:23 +01:00
Erik Johnston
195abfe7a5 Remove incorrect attestations 2017-10-27 09:58:13 +01:00
Erik Johnston
d8dde19f04 Log if we try to do attestations for our own user and group 2017-10-27 09:55:01 +01:00
Erik Johnston
585972b51a Don't generate group attestations for local users 2017-10-27 09:46:56 +01:00
Richard van der Hoff
7a6546228b Device deletion: check UI auth matches access token
(otherwise there's no point in the UI auth)
2017-10-27 00:04:31 +01:00
Matthew Hodgson
92f680889d spell out need for libxml2 for lxml to work 2017-10-27 00:02:29 +01:00
Richard van der Hoff
785bd7fd75 Allow ASes to deactivate their own users 2017-10-27 00:01:00 +01:00
Richard van der Hoff
c89e6aadff Merge pull request #2581 from matrix-org/rav/fix_init_with_no_logfile
Fix error when running synapse with no logfile
2017-10-26 22:16:57 +01:00
Richard van der Hoff
54a2525133 Front-end proxy: pass through auth header
So that access-token-in-an-auth-header works.
2017-10-26 18:19:01 +01:00
Richard van der Hoff
0a5866bec9 Support /keys/upload on /r0 as well as /unstable
(So that we can stop riot relying on it in /unstable)
2017-10-26 18:18:23 +01:00
Richard van der Hoff
0d8e3ad48b Fix logcontext leaks in httpclient
`preserve_context_over_fn` is borked
2017-10-26 18:17:10 +01:00
Richard van der Hoff
12ef02dc3d SimpleHTTPClient: add support for headers
Sometimes we need to pass headers into these methods
2017-10-26 17:59:50 +01:00
Luke Barnard
69e8a05f35 Make it work 2017-10-26 17:55:58 +01:00
Luke Barnard
007cd48af6 Recreate groups table instead of adding column
Adding a column with a non-constant default is not possible in sqlite3
2017-10-26 17:55:22 +01:00
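The standard sqlite workaround, sketched with illustrative columns: build the new table, copy the data with the desired default baked in, then swap it into place.

```python
RECREATE_GROUPS_TABLE = """
    CREATE TABLE groups_new (
        group_id TEXT NOT NULL,
        is_public BOOLEAN NOT NULL  -- default supplied by the copy below
    );
    INSERT INTO groups_new SELECT group_id, 1 FROM groups;
    DROP TABLE groups;
    ALTER TABLE groups_new RENAME TO groups;
"""
```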
Luke Barnard
713e60b9b6 Awful hack to get default true 2017-10-26 17:38:14 +01:00
Luke Barnard
e86cefcb6f Add groups table to BOOLEAN_COLUMNS in synapse_port_db 2017-10-26 17:24:54 +01:00
Luke Barnard
cfa4e658e0 Bump schema version to 46 2017-10-26 17:23:49 +01:00
Luke Barnard
595fe67f01 delint 2017-10-26 17:20:24 +01:00
Luke Barnard
9b2feef9eb Add is_public to groups table to allow for private groups
Prevent group API access to non-members for private groups

Also make all the group code paths consistent with `requester_user_id` always being the User ID of the requesting user.
2017-10-26 16:51:32 +01:00
Richard van der Hoff
f7f90e0c8d Fix error when running synapse with no logfile
Fixes 'UnboundLocalError: local variable 'sighup' referenced before assignment'
2017-10-26 16:45:20 +01:00
Richard van der Hoff
1dd0f53b21 Merge pull request #2579 from krombel/move_unstable_to_r0
register some /unstable endpoints in /r0 as well
2017-10-26 16:10:43 +01:00
Krombel
8299b323ee add release endpoints for /thirdparty 2017-10-26 16:58:20 +02:00
Krombel
9b436c8b4c register some /unstable endpoints in /r0 as well 2017-10-26 15:22:50 +02:00
Richard van der Hoff
5b38fdab31 Merge pull request #2578 from matrix-org/rav/code_style_imports
Code_style updates
2017-10-26 11:55:59 +01:00
Richard van der Hoff
1eb300e1fc Document import rules 2017-10-26 11:55:41 +01:00
Richard van der Hoff
f7f6bfaae4 code_style: more formatting 2017-10-26 11:55:41 +01:00
Erik Johnston
4ea882ede4 Merge pull request #2577 from matrix-org/erikj/fix_port
Fix port script
2017-10-26 11:54:12 +01:00
Erik Johnston
566e21eac8 Update room_list.py 2017-10-26 11:39:54 +01:00
Richard van der Hoff
351cc35342 code_style.rst: a couple of tidyups 2017-10-26 10:29:26 +01:00
Erik Johnston
37d766aedd Fix port script
We changed _simple_update_one_txn to use _simple_update_txn but didn't
yank it out in the port script.

Fixes #2565
2017-10-26 10:01:03 +01:00
Maxime Vaillancourt
5287e57c86 Ignore noscript tags when generating URL previews 2017-10-25 20:44:34 -04:00
Erik Johnston
2a7e9faeec Do logcontexts outside ResponseCache 2017-10-25 15:21:08 +01:00
Erik Johnston
1ad1ba9e6a Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-10-25 10:27:23 +01:00
Erik Johnston
33a9026cdf Add logging and fix log contexts for publicRooms 2017-10-25 10:26:06 +01:00
Matthew Hodgson
efd0f5a3c5 tip for generating tls_fingerprints 2017-10-24 18:49:49 +01:00
Erik Johnston
f009df23ec Merge branch 'release-v0.24.1' of github.com:matrix-org/synapse 2017-10-24 15:02:25 +01:00
Erik Johnston
6ba4fabdb9 Bump version and changelog 2017-10-24 14:15:27 +01:00
Erik Johnston
9e2c22c97f Merge pull request #2567 from matrix-org/erikj/group_fed_update_profile
Correctly wire in update group profile over federation
2017-10-24 09:21:47 +01:00
Erik Johnston
39dc52157d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/group_fed_update_profile 2017-10-24 09:16:20 +01:00
Richard van der Hoff
0d437698b2 Merge pull request #2568 from matrix-org/rav/pep8
PEP8 fixes
2017-10-23 17:39:31 +01:00
Richard van der Hoff
0be99858f3 fix vars named l
E741 says "do not use variables named ‘l’, ‘O’, or ‘I’".
2017-10-23 15:56:38 +01:00
Richard van der Hoff
eaaabc6c4f replace 'except:' with 'except Exception:'
what could possibly go wrong
2017-10-23 15:52:32 +01:00
Erik Johnston
ce6d4914f4 Correctly wire in update group profile over federation 2017-10-23 15:21:24 +01:00
Richard van der Hoff
ecf198aab8 Merge pull request #2566 from matrix-org/rav/media_logcontext_leak
Fix a logcontext leak in the media repo
2017-10-23 14:47:49 +01:00
Richard van der Hoff
3267b81b81 Merge pull request #2561 from matrix-org/rav/id_checking
Updates to ID checking
2017-10-23 14:39:20 +01:00
Richard van der Hoff
d03cfc4258 Fix a logcontext leak in the media repo 2017-10-23 14:34:27 +01:00
Erik Johnston
1de557975f Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-10-23 13:18:12 +01:00
Erik Johnston
ffba978077 Merge branch 'release-v0.24.0' of github.com:matrix-org/synapse 2017-10-23 13:13:53 +01:00
Erik Johnston
13e16cf302 Bump version and changelog 2017-10-23 13:13:31 +01:00
Richard van der Hoff
bd0d84bf92 Merge pull request #2560 from matrix-org/rav/kill_pointless_method
Remove pointless create() method
2017-10-23 11:06:57 +01:00
Richard van der Hoff
1135193dfd Validate group ids when parsing
May as well do it whenever we parse a Group ID. We check the sigil and basic
structure here so it makes sense to check the grammar in the same place.
2017-10-21 00:30:39 +01:00
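Sketched, the sigil-and-structure check for a group ID of the form `+localpart:domain` (grammar details assumed):

```python
def parse_group_id(group_id):
    if not group_id.startswith("+"):
        raise ValueError("group ids start with '+'")
    localpart, sep, domain = group_id[1:].partition(":")
    if not (sep and localpart and domain):
        raise ValueError("expected +localpart:domain")
    return localpart, domain
```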
Richard van der Hoff
29812c628b Allow = in mxids and groupids
... because the spec says we should.
2017-10-20 23:42:53 +01:00
Richard van der Hoff
58fbbe0f1d Disallow capital letters in userids
Factor out a common function for checking user ids and group ids, which forbids
capitals.
2017-10-20 23:37:22 +01:00
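A hedged sketch of the shared checker; the allowed set is an assumption based on the mxid grammar (note it includes the `=` permitted above, and deliberately omits capitals):

```python
import string

ALLOWED_CHARS = set(string.ascii_lowercase + string.digits + "._=-/")

def contains_invalid_localpart_characters(localpart):
    return any(c not in ALLOWED_CHARS for c in localpart)
```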
Richard van der Hoff
631d7b87b5 Remove pointless create() method
It just calls the constructor, so we may as well kill it rather than having
random codepaths.
2017-10-20 22:14:55 +01:00
Erik Johnston
6070647774 Correctly bump version 2017-10-19 16:40:20 +01:00
Erik Johnston
d6237859f6 Update changelog 2017-10-19 14:19:52 +01:00
Erik Johnston
0ef0aeceac Bump version and changelog 2017-10-19 13:48:33 +01:00
Erik Johnston
b4a6b7f720 Merge pull request #2559 from matrix-org/erikj/group_id_validation
Add config to enable group creation
2017-10-19 13:45:09 +01:00
Erik Johnston
c7d46510d7 Flake8 2017-10-19 13:36:06 +01:00
Erik Johnston
ffd3f1a783 Add missing file... 2017-10-19 12:17:30 +01:00
Erik Johnston
29bafe2f7e Add config to enable group creation 2017-10-19 12:13:44 +01:00
Erik Johnston
287dd1ee2c Merge pull request #2558 from matrix-org/erikj/group_id_validation
Enforce sensible group IDs
2017-10-19 12:13:31 +01:00
Erik Johnston
513c23bfd9 Enforce sensible group IDs 2017-10-19 12:01:01 +01:00
Erik Johnston
011d03a0f6 Fix typo 2017-10-19 11:22:48 +01:00
Erik Johnston
9ab859f27b Fix typo in group attestation handling 2017-10-19 10:55:52 +01:00
Erik Johnston
f4f65ef93e Merge pull request #2557 from matrix-org/erikj/media_tumbnails
Fix typo in thumbnail generation
2017-10-19 10:38:52 +01:00
Erik Johnston
bd5718d0ad Fix typo in thumbnail generation 2017-10-19 10:27:18 +01:00
Erik Johnston
161a862ffb Fix typo 2017-10-19 10:17:43 +01:00
Richard van der Hoff
69994c385a Merge pull request #2553 from matrix-org/rav/fix_500_on_event_send
Fix 500 error when we get an error handling a PDU
2017-10-18 11:25:44 +01:00
Richard van der Hoff
b5dbbac308 Merge pull request #2552 from matrix-org/rav/fix_500_on_dodgy_powerlevels
Fix 500 error when fields missing from power_levels event
2017-10-17 20:53:30 +01:00
Richard van der Hoff
582bd19ee9 Fix 500 error when we get an error handling a PDU
FederationServer doesn't have a send_failure (and nor does its subclass,
ReplicationLayer), so this was failing.

I'm not really sure what the idea behind send_failure is, given (a) we don't do
anything at the other end with it except log it, and (b) we also send back the
failure via the transaction response. I suspect there's a whole lot of dead
code around it, but for now I'm just removing the broken bit.
2017-10-17 20:52:40 +01:00
Richard van der Hoff
74f99f227c Doc some more dynamic Homeserver methods 2017-10-17 20:51:29 +01:00
Richard van der Hoff
c2bd177ea0 Fix 500 error when fields missing from power_levels event
If the users or events keys were missing from a power_levels event, then
we would throw 500s when trying to auth them.
2017-10-17 17:05:42 +01:00
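Defensive access along these lines avoids the 500 (a sketch, not the actual auth code):

```python
def get_user_power_level(power_levels_content, user_id):
    # tolerate events that omit `users` / `users_default`
    users = power_levels_content.get("users", {})
    default = power_levels_content.get("users_default", 0)
    return users.get(user_id, default)
```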
Erik Johnston
fe6e9f580b Merge pull request #2550 from krombel/fix_thumbnail_2548
fix thumbnailing (#2548)
2017-10-17 15:35:18 +01:00
Richard van der Hoff
7216c76654 Improve error handling for missing files (#2551)
`os.path.exists` doesn't allow us to distinguish between permissions errors and
the path actually not existing, which repeatedly confuses people. It also means
that we try to overwrite existing key files, which is super-confusing. (cf
issues #2455, #2379). Use os.stat instead.

Also, don't recommend the use of --generate-config, which screws everything
up if you're using debian (cf #2455).
2017-10-17 14:46:17 +01:00
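The `os.stat` approach, sketched: a missing file and a permissions problem now fail differently, instead of both looking like "does not exist".

```python
import errno
import os

def key_file_exists(path):
    try:
        os.stat(path)
        return True
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False
        raise  # e.g. EACCES: surface it rather than clobber existing keys
```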
Richard van der Hoff
dbdfd8967d Merge pull request #2546 from matrix-org/rav/remove_dead_event_injector
Remove dead class
2017-10-17 13:21:22 +01:00
Richard van der Hoff
b8e40d146f Merge pull request #2547 from matrix-org/rav/test_make_deferred_yieldable
Add some tests for make_deferred_yieldable
2017-10-17 13:21:09 +01:00
Richard van der Hoff
4cc8bb0767 Merge pull request #2549 from matrix-org/rav/event_persist_logcontexts
Fix logcontext handling for persist_events
2017-10-17 13:20:35 +01:00
David Baker
4e242b3e20 Merge pull request #2545 from matrix-org/dbkr/auto_join_rooms
Add config option to auto-join new users to rooms
2017-10-17 11:45:49 +01:00
Krombel
a6245478c8 fix thumbnailing (#2548)
in commit 0e28281a the code for thumbnailing got refactored and the
renaming of these variables was not done correctly.

Signed-Off-by: Matthias Kesler <krombel@krombel.de>
2017-10-17 12:45:33 +02:00
Richard van der Hoff
2e9f5ea31a Fix logcontext handling for persist_events
* don't use preserve_context_over_deferred, which is known broken.

* remove a redundant preserve_fn.

* add/improve some comments
2017-10-17 10:59:30 +01:00
Richard van der Hoff
a6ad8148b9 Fix name of test_logcontext
The file under test is logcontext.py, not log_context.py
2017-10-17 10:53:34 +01:00
Richard van der Hoff
5b5f35ccc0 Add some tests for make_deferred_yieldable 2017-10-17 10:52:31 +01:00
Richard van der Hoff
9b714abf35 Remove dead class
This isn't used anywhere.
2017-10-17 10:43:36 +01:00
David Baker
33122c5a1b Fix test 2017-10-17 10:39:50 +01:00
David Baker
a9c2e930ac pep8 2017-10-17 10:13:13 +01:00
David Baker
c05e6015cc Add config option to auto-join new users to rooms
New users who register on the server will be dumped into all rooms in
auto_join_rooms in the config.
2017-10-16 17:57:27 +01:00
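Sketched behaviour, with `join_room` standing in for whatever the server actually uses to perform the join:

```python
def auto_join_new_user(config, user_id, join_room):
    # e.g. config = {"auto_join_rooms": ["#welcome:example.com"]}
    for room_alias in config.get("auto_join_rooms", []):
        join_room(user_id, room_alias)
```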
Luke Barnard
e0a75e0c25 Merge pull request #2544 from matrix-org/luke/groups-invited-users
Implement GET /groups/$groupId/invited_users
2017-10-16 17:33:33 +02:00
Luke Barnard
85f5674e44 Delint 2017-10-16 15:52:17 +01:00
Luke Barnard
c43e8a9736 Make it work. Warn about lack of user profile 2017-10-16 15:50:39 +01:00
Luke Barnard
a3ac4f6b0a _create_rerouter for get_invited_users_in_group 2017-10-16 15:41:03 +01:00
Luke Barnard
5dfd0350c7 Merge branch 'develop' into luke/groups-invited-users 2017-10-16 15:34:36 +01:00
Luke Barnard
ca96d609e4 Merge pull request #2543 from matrix-org/luke/fix-on-group-invite-no-profile
Log a warning when no profile for invited member
2017-10-16 16:33:53 +02:00
Luke Barnard
2c5972f87f Implement GET /groups/$groupId/invited_users 2017-10-16 15:31:11 +01:00
Luke Barnard
6079d0027a Log a warning when no profile for invited member
And return empty profile
2017-10-16 14:20:45 +01:00
David Baker
99a6c9dbf2 Merge pull request #2542 from matrix-org/dbkr/room_notif_no_glob
Omit the *s for @room notifications
2017-10-16 13:46:17 +01:00
David Baker
9342bcfce0 Omit the *s for @room notifications
They're just redundant
2017-10-16 13:38:10 +01:00
Richard van der Hoff
e504816977 Merge pull request #2540 from 4nd3r/patch-1
make it absolutely clear that purge doesn't remove everything
2017-10-16 10:43:22 +01:00
Ander Punnar
b2e02084b8 make it absolutely clear that Purge History API does not remove all traces of events and message contents
because this topic pops up too often

#890 #1621 #1730 #2260 #2315 and so on
2017-10-14 13:25:42 +03:00
Erik Johnston
db3d84f46c Merge pull request #2538 from matrix-org/erikj/media_backup
Basic implementation of backup media store
2017-10-13 16:11:31 +01:00
Erik Johnston
1b6b0b1e66 Add try/finally block to close t_byte_source 2017-10-13 15:34:08 +01:00
Erik Johnston
6b725cf56a Remove old comment 2017-10-13 15:23:41 +01:00
Matthew Hodgson
64665b57d0 oops 2017-10-13 14:26:07 +01:00
Erik Johnston
2b24416e90 Don't reuse source but instead copy from primary media store to backup 2017-10-13 14:11:34 +01:00
Erik Johnston
b92a8e6e4a PEP8 2017-10-13 13:58:57 +01:00
Matthew Hodgson
931fc43cc8 fix copyright to companies which actually exist(ed) 2017-10-13 13:54:31 +01:00
Erik Johnston
31aa7bd8d1 Move type into key 2017-10-13 13:47:38 +01:00
Erik Johnston
ad1911bbf4 Comment 2017-10-13 13:47:05 +01:00
Erik Johnston
c021c39cbd Remove spurious addition 2017-10-13 13:46:53 +01:00
Erik Johnston
1f43d22397 Don't needlessly rename variable 2017-10-13 11:42:07 +01:00
Erik Johnston
a675bd08bd Add paths back in... 2017-10-13 11:41:06 +01:00
Erik Johnston
4d7e1dde70 Remove unnecessary diff 2017-10-13 11:36:32 +01:00
Erik Johnston
ae5d18617a Make things be absolute paths again 2017-10-13 11:35:44 +01:00
Erik Johnston
9732ec6797 s/write_to_file/write_to_file_and_backup/ 2017-10-13 11:34:41 +01:00
Erik Johnston
0e28281a02 Fix up 2017-10-13 11:33:49 +01:00
Erik Johnston
505371414f Fix up thumbnailing function 2017-10-13 11:23:53 +01:00
Erik Johnston
e3428d26ca Fix typo 2017-10-13 10:39:59 +01:00
Erik Johnston
35332298ef Fix up comments 2017-10-13 10:39:32 +01:00
Erik Johnston
64db043a71 Move makedirs to thread 2017-10-13 10:25:01 +01:00
Erik Johnston
b60859d6cc Use make_deferred_yieldable 2017-10-13 10:24:19 +01:00
Erik Johnston
d76621a47b Fix comments 2017-10-12 18:16:25 +01:00
Erik Johnston
4ae85ae121 Don't close prematurely.. 2017-10-12 17:57:31 +01:00
Erik Johnston
cc505b4b5e getvalue closes buffer 2017-10-12 17:52:30 +01:00
Erik Johnston
1259a76047 Get len before close 2017-10-12 17:39:23 +01:00
Erik Johnston
802ca12d05 Don't close file prematurely 2017-10-12 17:37:21 +01:00
Erik Johnston
e283b555b1 Copy everything to backup 2017-10-12 17:31:24 +01:00
Erik Johnston
b77a13812c Typo 2017-10-12 15:32:32 +01:00
Erik Johnston
6dfde6d485 Remove dead code 2017-10-12 15:30:26 +01:00
Erik Johnston
c8eeef6947 Fix typos 2017-10-12 15:28:24 +01:00
Erik Johnston
67cb89fbdf Fix typo 2017-10-12 15:23:41 +01:00
Erik Johnston
bf4fb1fb40 Basic implementation of backup media store 2017-10-12 15:20:59 +01:00
hera
f807f7f804 log when we get an exception handling replication updates 2017-10-12 11:51:24 +01:00
David Baker
b8d8ed1ba9 Merge pull request #2531 from matrix-org/dbkr/spamcheck_error_messages
Allow error strings from spam checker
2017-10-12 10:31:03 +01:00
Richard van der Hoff
cc794d60e7 Merge pull request #2532 from matrix-org/rav/fix_linearizer
Fix stackoverflow and logcontexts from linearizer
2017-10-11 17:29:32 +01:00
Richard van der Hoff
8dd0c85ac5 Merge pull request #2529 from matrix-org/rav/fix_transaction_failure_handling
log pdu_failures from incoming transactions
2017-10-11 17:29:14 +01:00
Richard van der Hoff
76fa695241 Merge pull request #2515 from matrix-org/rav/fix_receipt_logcontext
A logformatter which, when logging exceptions, includes the stack from
when the exception was caught.
2017-10-11 17:28:01 +01:00
Richard van der Hoff
f30c4ed2bc logformatter: fix AttributeError
make sure we have the relevant fields before we try to log them.
2017-10-11 17:26:17 +01:00
Erik Johnston
b752507b48 Fix fetching remote summaries 2017-10-11 16:59:18 +01:00
Erik Johnston
af94ba9d02 Merge pull request #2533 from matrix-org/erikj/fix_group_repl
Fix group stream replication
2017-10-11 16:01:57 +01:00
Erik Johnston
818b08d0e4 peeeeeeeeep8888888888888888888888888888 2017-10-11 15:54:00 +01:00
Erik Johnston
ea18996f54 Fix group stream replication
The stream update functions expect the storage function to return a list
of tuples.
2017-10-11 15:44:39 +01:00
Richard van der Hoff
68fd82e840 Merge pull request #2530 from matrix-org/rav/fix_receipt_logcontext
fix a logcontext leak in read receipt handling
2017-10-11 15:08:53 +01:00
Richard van der Hoff
4fad8efbfb Fix stackoverflow and logcontexts from linearizer
1. make it not blow out the stack when there are more than 50 things waiting
   for a lock. Fixes https://github.com/matrix-org/synapse/issues/2505.

2. Make it not mess up the log contexts.
2017-10-11 15:05:05 +01:00
David Baker
b78bae2d51 fix isinstance 2017-10-11 14:49:09 +01:00
Erik Johnston
271f5601f3 Fix typo in invite to group 2017-10-11 14:45:33 +01:00
David Baker
c3b7a45e84 Allow error strings from spam checker 2017-10-11 14:39:22 +01:00
Richard van der Hoff
c3e190ce67 fix a logcontext leak in read receipt handling 2017-10-11 14:37:20 +01:00
Richard van der Hoff
b75d443caf log pdu_failures from incoming transactions
... even if we have no EDUs.

This appears to have been introduced in
476899295f.
2017-10-11 14:36:13 +01:00
Erik Johnston
27e727a146 Fix typo 2017-10-11 14:32:40 +01:00
Erik Johnston
4ce4379235 Fix attestations to check correct server name 2017-10-11 14:11:43 +01:00
Erik Johnston
c2c47550f9 Fix schema delta versions 2017-10-11 13:23:15 +01:00
Erik Johnston
535cc49f27 Merge pull request #2466 from matrix-org/erikj/groups_merged
Initial Group Implementation
2017-10-11 13:20:07 +01:00
Erik Johnston
dfbf73408c Merge pull request #2501 from matrix-org/dbkr/channel_notifications
Support for channel notifications
2017-10-11 13:19:29 +01:00
Erik Johnston
bc7f3eb32f Merge pull request #2483 from jeremycline/unfreeze-ujson-dump
Unfreeze event before serializing with ujson
2017-10-11 13:18:52 +01:00
Erik Johnston
ec954f47fb Validate room ids 2017-10-11 13:15:44 +01:00
David Baker
81a5e0073c pep8 2017-10-10 15:53:34 +01:00
David Baker
ab1bc9bf5f Don't KeyError if no power_levels event 2017-10-10 15:34:05 +01:00
David Baker
0f1eb3e914 Use notification levels in power_levels
Rather than making the condition directly require a specific power
level. This way the level required to notify a room can be configured
per room.
2017-10-10 15:23:00 +01:00
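In terms of event content, this moves the threshold into the m.room.power_levels event, roughly as below (a sketch of the shape per the later Matrix spec, not a quote of the implementation):

```python
# Per-room notification power levels in an m.room.power_levels event:
power_levels_content = {
    "users_default": 0,
    "notifications": {
        # minimum power level a sender needs for an @room notification
        "room": 50,
    },
}
```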
Erik Johnston
84e27a592d Merge pull request #2490 from matrix-org/erikj/drop_left_room_events
Ignore incoming events for rooms that we have left
2017-10-10 11:58:32 +01:00
David Baker
c9f034b4ac There was already a constant for this
also update copyright
2017-10-10 11:47:10 +01:00
David Baker
a9f9d68631 More optimisation 2017-10-10 11:38:31 +01:00
David Baker
707374d5dc What year is it!? Who's the president!? 2017-10-10 11:21:41 +01:00
David Baker
89fa00ddff Merge branch 'develop' into dbkr/channel_notifications 2017-10-10 11:20:17 +01:00
Richard van der Hoff
79bea15830 Merge pull request #2520 from matrix-org/rav/process_incoming_rooms_in_parallel
fed server: process PDUs for different rooms in parallel
2017-10-10 10:26:21 +01:00
Richard van der Hoff
426f8b0f66 Merge pull request #2518 from matrix-org/rav/linearize_incoming_transactions
Fed server: use a linearizer for ongoing transactions
2017-10-10 10:20:51 +01:00
Richard van der Hoff
6a6cc27aee fed server: process PDUs for different rooms in parallel
With luck, this will give a real-time improvement when there are many rooms and
the server ends up calling out to fetch missing events.
2017-10-09 18:30:31 +01:00
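Synapse is Twisted-based, but the idea reads roughly like this asyncio sketch (assuming PDUs are dicts carrying a room_id, and handle_pdu is a coroutine): PDUs for a given room stay ordered, while distinct rooms proceed concurrently.

```python
import asyncio
from collections import defaultdict

async def handle_transaction_pdus(pdus, handle_pdu):
    # Group PDUs by room; ordering only matters within a room.
    by_room = defaultdict(list)
    for pdu in pdus:
        by_room[pdu["room_id"]].append(pdu)

    async def handle_room(room_pdus):
        for pdu in room_pdus:  # sequential within one room
            await handle_pdu(pdu)

    # One task per room, all rooms in parallel.
    await asyncio.gather(*(handle_room(ps) for ps in by_room.values()))
```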
Richard van der Hoff
4c7c4d4061 Fed server: use a linearizer for ongoing transactions
We don't want to process the same transaction multiple times concurrently, so
use a linearizer.
2017-10-09 18:30:10 +01:00
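A per-key linearizer can be sketched as one lock per transaction ID (again an asyncio analogue of the idea, not the Twisted implementation):

```python
import asyncio

class Linearizer:
    """At most one coroutine may run under a given key at a time."""
    def __init__(self):
        self._locks = {}

    def queue(self, key):
        # Reuse the lock for in-flight keys so a duplicate transaction
        # waits. (A real implementation would also clean up idle locks.)
        return self._locks.setdefault(key, asyncio.Lock())

# usage:
#   async with linearizer.queue(transaction_id):
#       await process_transaction(origin, transaction)
```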
Richard van der Hoff
4d24becf7f Merge pull request #2517 from matrix-org/rav/fed_server_refactor
fed server: refactor on_incoming_transaction
2017-10-09 18:19:23 +01:00
Richard van der Hoff
ba5b9b80a5 fed server: refactor on_incoming_transaction
Move as much as possible to after the have_responded check, and reduce the
number of times we iterate over the pdu list.
2017-10-09 18:10:53 +01:00
Richard van der Hoff
c7b0678356 Merge pull request #2516 from matrix-org/rav/fix_fed_server_origin_check
Fed server: Move origin-check code to _handle_received_pdu
2017-10-09 18:09:43 +01:00
Richard van der Hoff
a6e3222fe5 Fed server: Move origin-check code to _handle_received_pdu
The response-building code expects there to be an entry in the `results` list
for each entry in the pdu_list, so the early `continue` was messing this
up. That doesn't really matter, because all that the federation client does is
log any errors, but it's pretty poor form.
2017-10-09 17:53:32 +01:00
Richard van der Hoff
3cc852d339 Fancy logformatter to format exceptions better
This is a bit of an experimental change at this point; the idea is to see if it
helps us track down where our stack overflows are coming from by logging the
stack when the exception was caught and turned into a Failure. (We'll also need
edf2704420).

If we deploy this, we'll be able to enable it via the log config yaml.
2017-10-09 17:44:42 +01:00
Richard van der Hoff
0eeaa25694 Merge pull request #2508 from matrix-org/rav/federation_queue_logcontexts
Fix up logcontext handling in (federation) TransactionQueue
2017-10-09 17:43:48 +01:00
Richard van der Hoff
aa3fac8057 Merge pull request #2507 from matrix-org/rav/execute_concurrently_log_contexts
Fix logcontext handling for concurrently_execute
2017-10-09 17:43:32 +01:00
Richard van der Hoff
c1c81ee2a4 Merge pull request #2506 from matrix-org/rav/unhandled_failure
Fix up deferred handling in federation.py
2017-10-09 17:43:10 +01:00
Erik Johnston
e8496efe84 Fix up comment 2017-10-09 15:17:34 +01:00
Richard van der Hoff
01bbacf3c4 Fix up logcontext handling in (federation) TransactionQueue
Avoid using preserve_context_over_function, which has problems with respect to
logcontexts.
2017-10-06 22:39:25 +01:00
Richard van der Hoff
148428ce76 Fix logcontext handling for concurrently_execute
Avoid preserve_context_over_deferred, which is broken.
2017-10-06 22:24:28 +01:00
Richard van der Hoff
c8f568ddf9 Fix up deferred handling in federation.py
* Avoid preserve_context_over_deferred, which is broken

* set consumeErrors=True on defer.gatherResults, to avoid spurious "unhandled
  failure" erros
2017-10-06 22:14:24 +01:00
Richard van der Hoff
3ddda939d3 some comments in the state res code 2017-10-05 14:58:17 +01:00
David Baker
5de926d66f Merge pull request #2502 from matrix-org/dbkr/allow_dms_to_admins
Spam checking: add the invitee to user_may_invite
2017-10-05 14:14:13 +01:00
David Baker
f878e6f8af Spam checking: add the invitee to user_may_invite 2017-10-05 14:02:28 +01:00
David Baker
269af961e9 Make it faster 2017-10-05 13:27:12 +01:00
David Baker
ed80c6b6cc Add fastpath optimisation 2017-10-05 13:20:22 +01:00
David Baker
e433393c4f pep8 2017-10-05 13:08:02 +01:00
David Baker
985ce80375 They're called rooms 2017-10-05 13:03:44 +01:00
David Baker
b9b9714fd5 Get rule type right 2017-10-05 13:02:19 +01:00
David Baker
fa969cfdde Support for channel notifications
Add condition type to check the sender's power level and add a base
rule using it for @channel notifications.
2017-10-05 12:39:18 +01:00
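The rule these commits build up looks roughly like this (identifiers as later standardised for @room; the in-flight naming here still said "channel"):

```python
# Base push rule: highlight "@room" mentions, but only when the sender's
# power level permits room notifications.
ROOM_NOTIF_RULE = {
    "rule_id": ".m.rule.roomnotif",
    "conditions": [
        {"kind": "event_match", "key": "content.body", "pattern": "@room"},
        {"kind": "sender_notification_permission", "key": "room"},
    ],
    "actions": ["notify", {"set_tweak": "highlight", "value": True}],
}
```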
David Baker
44f8e383f3 Merge pull request #2500 from matrix-org/dbkr/fix_word_boundary_mentions
Fix notif kws that start/end with non-word chars
2017-10-05 12:27:59 +01:00
David Baker
0c8da8b519 Use better method for word boundary searching
From ebc95667b8
2017-10-05 11:57:43 +01:00
Erik Johnston
eaaa837e00 Don't corrupt cache 2017-10-05 11:43:22 +01:00
David Baker
cbe3c3fdd4 pep8 2017-10-05 11:43:10 +01:00
David Baker
6748f0a579 Fix notif kws that start/end with non-word chars
Only prepend / append word boundary characters if the search
expression starts or ends with a word character, otherwise they
don't work because there's no word boundary between whitespace and
a non-word char.
2017-10-05 11:33:30 +01:00
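A sketch of the boundary handling the commit above describes (hypothetical helper, using Python's re module):

```python
import re

def keyword_to_regex(word):
    # Only add \b where the pattern actually starts/ends with a word
    # character: there is no word boundary between whitespace and a
    # non-word character, so a leading \b would stop "@alice" matching.
    pattern = re.escape(word)
    if re.match(r"\w", word):
        pattern = r"\b" + pattern
    if re.search(r"\w$", word):
        pattern += r"\b"
    return re.compile(pattern, re.IGNORECASE)

assert keyword_to_regex("@alice").search("hi @alice!")
```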
David Baker
93b0cf7a99 Merge pull request #2495 from matrix-org/dbkr/spam_check_room_creation
Add room creation checks to spam checker
2017-10-04 14:49:20 +01:00
David Baker
d8ce68b09b spam check room publishing 2017-10-04 14:29:33 +01:00
David Baker
78d4ced829 un-double indent 2017-10-04 12:44:27 +01:00
David Baker
197c14dbcf Add room creation checks to spam checker
Lets the spam checker deny attempts to create rooms and add aliases
to them.
2017-10-04 10:47:54 +01:00
David Baker
5f20a91fa1 Merge pull request #2492 from matrix-org/dbkr/spam_check_invites
Allow spam checker to reject invites too
2017-10-03 18:10:23 +01:00
David Baker
1e2ac54351 s/roomid/room_id/ 2017-10-03 17:41:38 +01:00
David Baker
1e375468de pass room id too 2017-10-03 17:13:14 +01:00
David Baker
c2c188b699 Federation was passing strings anyway
so pass strings everywhere
2017-10-03 15:46:19 +01:00
David Baker
c46a0d7eb4 this shouldn't be debug 2017-10-03 15:20:14 +01:00
David Baker
bd769a81e1 better logging 2017-10-03 15:16:40 +01:00
David Baker
537088e7dc Actually write wrapper function 2017-10-03 14:28:12 +01:00
David Baker
41fd9989a2 Skip spam check for admin users 2017-10-03 14:17:44 +01:00
Erik Johnston
11d62f43c9 Invalidate cache 2017-10-03 14:12:28 +01:00
Erik Johnston
e4ab96021e Update comments 2017-10-03 14:10:41 +01:00
David Baker
2a7ed700d5 Fix param name & lint 2017-10-03 14:04:10 +01:00
David Baker
84716d267c Allow spam checker to reject invites too 2017-10-03 13:56:43 +01:00
Richard van der Hoff
e4779be97a Merge pull request #2491 from matrix-org/rav/port_db_fixes
Drop search values with nul characters
2017-10-03 13:49:05 +01:00
Erik Johnston
f2da6df568 Remove spurious line feed 2017-10-03 11:31:06 +01:00
Erik Johnston
30848c0fcd Ignore incoming events for rooms that we have left
When synapse receives an event for a room it's not in over federation, it
double checks with the remote server to see if it is in fact in the
room. This is done so that if the server has forgotten about the room
(usually as a result of the database being dropped) it can recover from
it.

However, in the presence of state resets in large rooms, this can cause
a lot of work for servers that have legitimately left. As a hacky
solution that supports both cases we drop incoming events for rooms that
we have explicitly left.

This means that we no longer support the case of servers having
forgotten that they've rejoined a room, but that is sufficiently rare
that we're not going to support it for now.
2017-10-03 11:18:21 +01:00
Erik Johnston
e585c83209 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-10-02 18:11:24 +01:00
Erik Johnston
6c1bb1601e Bump version and changelog 2017-10-02 18:05:17 +01:00
Erik Johnston
ea87cb1ba5 Make 'affinity' package optional 2017-10-02 18:03:59 +01:00
Erik Johnston
3fed5bb25f Move quit_with_error 2017-10-02 17:59:34 +01:00
David Baker
27955056e0 Merge branch 'develop' into erikj/groups_merged 2017-10-02 16:20:41 +01:00
Erik Johnston
90d70af269 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-10-02 16:20:23 +01:00
Erik Johnston
b23cb8fba8 Merge branch 'release-v0.23.0' of github.com:matrix-org/synapse 2017-10-02 13:52:03 +01:00
Erik Johnston
e4a709eda3 Bump version and change log 2017-10-02 13:51:38 +01:00
Richard van der Hoff
7fc1aad195 Drop search values with nul characters
https://github.com/matrix-org/synapse/issues/2187 contains a report of a port
failing due to nul characters somewhere in the search table. Let's try dropping
the offending rows.
2017-10-02 00:53:32 +01:00
Jeremy Cline
cafb8de132 Unfreeze event before serializing with ujson
In newer versions of https://github.com/esnme/ultrajson, ujson does not
serialize frozendicts (introduced in esnme/ultrajson@53f85b1). Although
the PyPI version is still 1.35, Fedora ships with a build from commit
esnme/ultrajson@2f1d487. This causes the serialization to fail if the
distribution-provided package is used.

This runs the event through the unfreeze utility before serializing it.

Thanks to @ignatenkobrain for tracking down the root cause.

fixes #2351

Signed-off-by: Jeremy Cline <jeremy@jcline.org>
2017-09-30 11:22:37 -04:00
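The shape of such an unfreeze pass, sketched (Synapse has its own utility; this is just the idea):

```python
from collections.abc import Mapping

def unfreeze(o):
    # Recursively copy mapping types (including frozendicts) into plain
    # dicts, and tuples into lists, so a strict JSON encoder can cope.
    if isinstance(o, Mapping):
        return {k: unfreeze(v) for k, v in o.items()}
    if isinstance(o, (list, tuple)):
        return [unfreeze(x) for x in o]
    return o
```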
Richard van der Hoff
d5325d7ef1 Merge pull request #2480 from matrix-org/rav/federation_client_logging
Improve logging of failures in matrixfederationclient
2017-09-29 17:32:53 +01:00
Erik Johnston
d5694ac5fa Only log if we've removed media 2017-09-28 16:08:08 +01:00
Richard van der Hoff
e43de3ae4b Improve logging of failures in matrixfederationclient
* don't log exception types twice
* not all exceptions have a meaningful 'message'. Use the repr rather than
  attempting to build a string ourselves.
2017-09-28 15:38:09 +01:00
Richard van der Hoff
75e67b9ee4 Handle SERVFAILs when doing AAAA lookups for federation (#2477)
... to cope with people with broken dnssec setups, mostly
2017-09-28 15:24:00 +01:00
Erik Johnston
768f00dedb Up the limits on number of url cache entries to delete at one time 2017-09-28 14:27:27 +01:00
Erik Johnston
4dc07e93a8 Add old indices 2017-09-28 14:10:33 +01:00
Erik Johnston
7cc483aa0e Clear up expired url cache every 10s 2017-09-28 13:56:53 +01:00
Erik Johnston
e1e7d76cf1 Actually assign result to variable 2017-09-28 13:55:29 +01:00
Erik Johnston
93247a424a Only pull out local media that were for url cache 2017-09-28 13:48:14 +01:00
Erik Johnston
5f501ec7e2 Fix typo in url cache expiry timer 2017-09-28 12:59:01 +01:00
Erik Johnston
761d255fdf Merge pull request #2479 from matrix-org/erikj/expire_url_cache_thumbnails
Support new and old style media id formats
2017-09-28 12:58:13 +01:00
Erik Johnston
ace8079086 Support new and old style media id formats 2017-09-28 12:52:51 +01:00
Erik Johnston
7a44c01d89 Fix typo 2017-09-28 12:46:04 +01:00
Erik Johnston
c9bc4b7031 Merge pull request #2478 from matrix-org/erikj/expire_url_cache_thumbnails
Delete expired url cache data
2017-09-28 12:42:33 +01:00
Erik Johnston
ae79764fe5 Change expires column to expires_ts 2017-09-28 12:37:53 +01:00
Erik Johnston
77f1d24de3 More brackets 2017-09-28 12:23:15 +01:00
Erik Johnston
9ccb4226ba Delete expired url cache data 2017-09-28 12:18:06 +01:00
Erik Johnston
bf86a41ef1 Merge pull request #2476 from matrix-org/erikj/joined_members_auth
Fix /joined_members to work with AS users
2017-09-28 10:44:44 +01:00
Erik Johnston
8090fd4664 Fix /joined_members to work with AS users 2017-09-28 10:09:32 +01:00
Erik Johnston
3a743f649c Merge pull request #2475 from matrix-org/erikj/joined_members_auth
Fix bug where /joined_members didn't check user was in room
2017-09-27 16:42:23 +01:00
Erik Johnston
adec03395d Fix bug where /joined_members didn't check user was in room 2017-09-27 15:14:39 +01:00
David Baker
74e494b010 Merge pull request #2474 from matrix-org/dbkr/spam_check_module
Make the spam checker a module
2017-09-27 11:31:00 +01:00
David Baker
ef3a5ae787 Don't test if spam_checker is not None
Sometimes it's a Mock object which is not None but is still not
what we're after
2017-09-27 11:24:19 +01:00
David Baker
8c06dd6071 Remove unintentional debugging 2017-09-27 10:31:14 +01:00
David Baker
60c78666ab pep8 2017-09-27 10:26:13 +01:00
David Baker
1786b0e768 Forgot the new file again :( 2017-09-27 10:22:54 +01:00
David Baker
8ad5f34908 pep8 2017-09-26 19:21:41 +01:00
David Baker
6cd5fcd536 Make the spam checker a module 2017-09-26 19:20:23 +01:00
David Baker
ccc67d445b Merge pull request #2473 from matrix-org/dbkr/factor_out_module_loading
Factor out module loading to a separate place
2017-09-26 18:12:17 +01:00
David Baker
9fd086e506 unnecessary parens 2017-09-26 17:59:46 +01:00
David Baker
0b03a97708 Add module_loader.py 2017-09-26 17:56:41 +01:00
David Baker
4824a33c31 Factor out module loading to a separate place
So it can be reused
2017-09-26 17:51:26 +01:00
Erik Johnston
1e5fcfd14a Merge pull request #2472 from matrix-org/erikj/groups_rooms
Add remove room from group API
2017-09-26 16:05:46 +01:00
Erik Johnston
17b8e2bd02 Add remove room API 2017-09-26 15:52:41 +01:00
Erik Johnston
a8e2a3df32 Add unique index to group_rooms table 2017-09-26 15:39:21 +01:00
Erik Johnston
0d7c7fd907 Merge pull request #2471 from matrix-org/erikj/group_summary_publicised
Add is_publicised to group summary
2017-09-26 11:33:21 +01:00
Erik Johnston
95298783bb Add is_publicised to group summary 2017-09-26 11:04:37 +01:00
Erik Johnston
1a398b19fd Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.23.0 2017-09-26 10:08:59 +01:00
Erik Johnston
f4c8cd5e85 Bump changelog and version 2017-09-26 10:02:48 +01:00
Erik Johnston
b8d832a08c Merge pull request #2470 from matrix-org/erikj/sync_speed_fix
Refactor to speed up incremental syncs
2017-09-25 17:43:14 +01:00
Erik Johnston
e3edca3b5d Refactor to speed up incremental syncs 2017-09-25 17:35:39 +01:00
Richard van der Hoff
cacfa04cb6 Merge pull request #2468 from maxidor/develop
Clarify recommended network setup
2017-09-25 16:37:33 +01:00
Max Dor
e591f7b3f0 Include review feedback 2017-09-25 16:42:26 +02:00
Max Dor
7141f1a5cc Clarify recommended network setup 2017-09-25 16:20:23 +02:00
Erik Johnston
44edac0497 Merge branch 'release-v0.23.0' of github.com:matrix-org/synapse into develop 2017-09-25 14:52:46 +01:00
Richard van der Hoff
29e1c717c3 Merge pull request #2390 from r3dey3/develop
Fix iteration of requests_missing_keys; list doesn't have .values()
2017-09-25 11:56:01 +01:00
Richard van der Hoff
94133d7ce8 Merge branch 'develop' into develop 2017-09-25 11:50:11 +01:00
Erik Johnston
b15c2b7971 Update CHANGES 2017-09-25 11:34:12 +01:00
Erik Johnston
ba8fdc925c Bump version and changes 2017-09-25 11:01:31 +01:00
Richard van der Hoff
79b3cf3e02 Fix logcontxt leak in keyclient (#2465)
preserve_context_over_function doesn't do what you want it to do.
2017-09-25 09:51:39 +01:00
Richard van der Hoff
b4fd710e1a Merge pull request #2464 from rnbdsh/patch-4
Remove non-existing files, add stop, use synctl
2017-09-25 09:33:22 +01:00
rnbdsh
b68b0ede7a Start traditionally, stop synctl
Starting with synctl led to "no config file found".
Stopping also leads to a (code=exited, status=1/FAILURE), but at least now we can stop the service.
2017-09-24 04:55:19 +02:00
rnbdsh
68f737702b Remove non-existing files, add stop, use synctl
Non-existing files, when running the unit suggested at https://github.com/matrix-org/synapse#configuring-synapse:
/etc/synapse/log_config.yaml does not exist, so --log-config leads to an error;
/etc/sysconfig/synapse (the environment file), and even /etc/sysconfig itself, do not exist on Arch Linux.

Also, instead of calling python2 we use synctl, as this seems to be the proper way to start it, and it gives us a more useful error in the systemctl status. And we now allow stop (and therefore restart).
2017-09-24 04:26:23 +02:00
Richard van der Hoff
f65e31d22f Do an AAAA lookup on SRV record targets (#2462)
Support SRV records which point at AAAA records, as well as A records.

Fixes https://github.com/matrix-org/synapse/issues/2405
2017-09-22 20:26:47 +01:00
Matthew Hodgson
f496399ac4 fix thinko'd docstring 2017-09-22 15:34:14 +01:00
Erik Johnston
3166ed55b2 Fix device list when rejoining room (#2461) 2017-09-22 14:44:17 +01:00
Erik Johnston
e1dec2f1a7 Remove user from group summary when they leave the group 2017-09-21 16:09:57 +01:00
Erik Johnston
bb746a9de1 Revert: Keep room_id's in group summary 2017-09-21 15:57:22 +01:00
Erik Johnston
ae8d4bb0f0 Keep room_id's in group summary 2017-09-21 15:55:18 +01:00
Richard van der Hoff
c94ab5976a Merge pull request #2459 from matrix-org/rav/keyring_cleanups
Clean up Keyring code
2017-09-20 11:29:39 +01:00
Erik Johnston
197d82dc07 Correctly return next token 2017-09-20 11:12:11 +01:00
Erik Johnston
069ae2df12 Fix initial sync 2017-09-20 10:52:12 +01:00
Richard van der Hoff
6de74ea6d7 Fix logcontexts in _check_sigs_and_hashes 2017-09-20 01:32:42 +01:00
Richard van der Hoff
72472456d8 Add some more tests for Keyring 2017-09-20 01:32:42 +01:00
Richard van der Hoff
c5c24c239b Fix logcontext handling in verify_json_objects_for_server
preserve_context_over_fn is essentially broken, because (a) it pointlessly
drops the current logcontext before calling its wrapped function, which means
we don't get any useful logcontexts for _handle_key_deferred; (b) it wraps the
resulting deferred in a _PreservingContextDeferred, which is very dangerous
because you then can't yield on it without leaking context back into the
reactor.

Instead, let's specify that the resultant deferreds call their callbacks with
no logcontext.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
c5b0e9f485 Turn _start_key_lookups into an inlineCallbacks function
... which means that logcontexts can be correctly preserved for the stuff it
does.

get_server_verify_keys is now called with the logcontext, so needs to
preserve_fn when it fires off its nested inlineCallbacks function.

Also renames get_server_verify_keys to reflect the fact it's meant to be
private.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
abdefb8a01 Fix potential race in _start_key_lookups
If the verify_request.deferred has already completed, then `remove_deferreds`
will be called immediately. It therefore might resolve the server_to_deferred
deferred while there are still other requests for that server in flight.

To avoid that, we should build the complete list of requests, and *then* add the
callbacks.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
afbd773dc6 Add some comments to _start_key_lookups 2017-09-20 01:32:42 +01:00
Richard van der Hoff
2a4b9ea233 Consistency for how verify_request.deferred is called
Define that it is run with no log context, and make sure that happens.

If we aren't careful to reset the logcontext, we can't bung the deferreds into
defer.gatherResults etc. We don't actually do that directly, but we *do*
resolve other deferreds from affected callbacks (notably the server_to_deferred
map in _start_key_lookups), and those *do* get passed into
defer.gatherResults. It turns out that this way ends up being least confusing.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
3b98439eca Factor out _start_key_lookups
... to make it easier to see what's going on.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
fde63b880d Replace server_and_json with verify_requests
This is a precursor to factoring some of this code out.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
2d511defd9 pull out handle_key_deferred to top level
There's no need for this to be a nested definition; pulling it out not only
makes it more efficient, but makes it easier to check that it's not accessing
any local variables it shouldn't be.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
dd1ea9763a Fix incorrect key_ids in error message 2017-09-20 01:32:42 +01:00
Richard van der Hoff
e76d1135dd Invalidate signing key cache when we get an update
This might make the cache slightly more efficient.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
fcf2c0fd1a Remove redundant preserve_fn
preserve_fn is a no-op unless the wrapped function returns a
Deferred. verify_json_objects_for_server returns a list, so this is doing
nothing.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
9864efa532 Fix concurrent server_key requests (#2458)
Fix a bug where we could end up firing off multiple requests for server_keys
for the same server at the same time.
2017-09-19 23:25:44 +01:00
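The usual fix for this class of bug is to coalesce concurrent lookups onto a single in-flight future; an asyncio-flavoured sketch of the idea (not the Twisted code):

```python
import asyncio

_inflight = {}

async def get_server_keys(server_name, fetch_keys):
    # If a fetch for this server is already running, await it rather
    # than firing off a second request for the same keys.
    fut = _inflight.get(server_name)
    if fut is None:
        fut = asyncio.ensure_future(fetch_keys(server_name))
        _inflight[server_name] = fut
        fut.add_done_callback(lambda _: _inflight.pop(server_name, None))
    return await fut
```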
Richard van der Hoff
aa620d09a0 Add a config option to block all room invites (#2457)
- allows sysadmins the ability to lock down their servers so that people can't
send their users room invites.
2017-09-19 16:08:14 +01:00
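From memory, the option this PR adds is spelled as below in homeserver.yaml (treat the key name as an unverified assumption):

```yaml
# Block all room invites to users on this server, except those sent
# by local server admins:
block_non_admin_invites: true
```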
Richard van der Hoff
2eabdf3f98 add some comments to on_exchange_third_party_invite_request 2017-09-19 12:20:36 +01:00
Richard van der Hoff
5ed109d59f PoC for filtering spammy events (#2456)
Demonstration of how you might add some hooks to filter out spammy events.
2017-09-19 12:20:11 +01:00
Erik Johnston
47d9848dc4 Merge pull request #2454 from matrix-org/erikj/groups_sync_creator
Ensure that creator of group sees group down /sync
2017-09-19 11:21:26 +01:00
Erik Johnston
93e504d04e Ensure that creator of group sees group down /sync 2017-09-19 11:08:16 +01:00
Erik Johnston
b5feaa5a49 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/groups_merged 2017-09-19 11:07:45 +01:00
Richard van der Hoff
3f405b34e9 Fix overzealous kicking of guest users (#2453)
We should only kick guest users if the guest access event is authorised.
2017-09-19 08:52:52 +01:00
Richard van der Hoff
290777b3d9 Clean up and document handling of logcontexts in Keyring (#2452)
I'm still unclear on what the intended behaviour for
`verify_json_objects_for_server` is, but at least I now understand the
behaviour of most of the things it calls...
2017-09-18 18:31:01 +01:00
Erik Johnston
77c81ca6ea Merge pull request #2451 from matrix-org/erikj/add_state_to_timeline
Don't filter out current state events from timeline
2017-09-18 17:22:33 +01:00
Erik Johnston
2d1b7955ae Don't filter out current state events from timeline 2017-09-18 17:13:03 +01:00
David Baker
862c8da560 Merge pull request #2450 from matrix-org/dbkr/push_event_id_only
Add support for event_id_only push format
2017-09-18 16:41:29 +01:00
Erik Johnston
2d9f341c3e Merge pull request #2449 from matrix-org/erikj/rejoin_device_lists
Correctly handle leaving room in /key/changes
2017-09-18 15:59:13 +01:00
David Baker
436ee0a2ea Also include the room_id
as really it's part of the event ID
2017-09-18 15:58:38 +01:00
David Baker
b393f5db51 Use .get - it's much shorter 2017-09-18 15:50:26 +01:00
David Baker
a2562f9d74 Add support for event_id_only push format
Param in the data dict of a pusher that tells an HTTP pusher to
send just the event_id of the event it's notifying about and the
notification counts. For clients that want to go & fetch the body
of the event themselves anyway.
2017-09-18 15:39:39 +01:00
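Roughly, a pusher registered with this format, and the slimmed-down payload the push gateway then receives (field values are illustrative):

```python
# In the pusher's "data" dict:
pusher_data = {
    "url": "https://push.example.com/_matrix/push/v1/notify",
    "format": "event_id_only",
}

# The gateway then gets just enough to fetch the event itself:
notification = {
    "event_id": "$1234abcd:example.com",
    "room_id": "!room:example.com",  # added by the follow-up commit above
    "counts": {"unread": 2},
    "devices": [],  # device entries elided here
}
```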
Erik Johnston
d6dadd95ac Correctly handle leaving room in /key/changes 2017-09-18 15:38:22 +01:00
Erik Johnston
993d3f710b Merge pull request #2443 from matrix-org/erikj/rejoin_device_lists
Send down device list change notif when member leaves/rejoins room
2017-09-18 13:17:53 +01:00
Erik Johnston
4a94eb3ea4 Fix typo 2017-09-15 09:56:54 +01:00
Erik Johnston
3a0cee28d6 Actually hook leave notifs up 2017-09-14 11:49:37 +01:00
Erik Johnston
4f845a0713 Handle joining/leaving rooms in /keys/changes 2017-09-13 16:28:08 +01:00
Erik Johnston
473700f016 Get left rooms 2017-09-13 15:13:41 +01:00
Erik Johnston
9ce866ed4f In sync handle device lists for newly joined/left rooms 2017-09-12 16:44:26 +01:00
Erik Johnston
69ef4987a6 Add left section to /keys/changes 2017-09-08 14:44:36 +01:00
Erik Johnston
53cc8ad35a Send down device list change notif when member leaves/rejoins room 2017-09-07 15:08:39 +01:00
Richard van der Hoff
e2fcba038c Merge pull request #2439 from matrix-org/rav/tox_tweaks
do tox install with pip -e
2017-09-06 17:08:19 +01:00
Richard van der Hoff
5f59f20636 Merge remote-tracking branch 'origin/master' into develop 2017-09-05 21:58:19 +01:00
Richard van der Hoff
59de2c7afa Exclude the github issue template from our sdist (#2440)
PR #2413 added an issue template, but just adding files to the project
directory upsets the packaging scripts: we need to explicitly include or
exclude them.

Move the template into a .github directory to make that easy, and to de-clutter
the root a bit.
2017-09-05 21:57:19 +01:00
Richard van der Hoff
4b616c8cf2 Merge branch 'master' into develop 2017-09-05 17:51:13 +01:00
Richard van der Hoff
4dd61df6f8 do tox install with pip -e
- this ensures we end up with a working virtualenv which we can use for other
things.
2017-09-05 16:35:23 +01:00
Erik Johnston
c0c31656ff Merge pull request #2433 from ptman/patch-1
Document known to work postgres version
2017-09-01 15:28:42 +01:00
Paul Tötterman
8b16b43b7f Document known to work postgres version 2017-09-01 16:52:45 +03:00
Richard van der Hoff
dff396de0f Set --python when running sytest
.. because I want to make the 'install_and_run' script useful for non-synapse
jobs, which do not accept --python. In any case we set up the path here, so
sytest shouldn't be guessing it.
2017-09-01 11:20:37 +01:00
Richard van der Hoff
f06ffdb6fa fix python path in jenkins scripts 2017-09-01 10:31:45 +01:00
Richard van der Hoff
6e67aaa7f2 Set --python when running sytest
.. because I want to make the 'install_and_run' script useful for non-synapse
jobs, which do not accept --python. In any case we set up the path here, so
sytest shouldn't be guessing it.
2017-09-01 10:06:21 +01:00
Erik Johnston
7f0d0ba3bc Merge pull request #2430 from matrix-org/erikj/groups_profile_cache
Add user profiles to summary from group server
2017-08-25 16:48:58 +01:00
Erik Johnston
4a9b1cf253 Add user profiles to summary from group server 2017-08-25 16:23:58 +01:00
Erik Johnston
6d8799af1a Merge pull request #2429 from matrix-org/erikj/groups_profile_cache
Add a remote user profile cache
2017-08-25 15:52:42 +01:00
Erik Johnston
258409ef61 Fix typos and reinherit 2017-08-25 14:45:20 +01:00
Erik Johnston
bf81f3cf2c Split out profile handler to fix tests 2017-08-25 14:34:56 +01:00
Erik Johnston
27ebc5c8f2 Add remote profile cache 2017-08-25 11:25:47 +01:00
Erik Johnston
97c544f91f Add _simple_update 2017-08-25 11:11:37 +01:00
Richard van der Hoff
934ab76835 Merge pull request #2428 from matrix-org/rav/update_upgrade
Tweaks to the upgrade instructions
2017-08-24 11:42:02 +01:00
Richard van der Hoff
fc9878f6a4 Tweaks to the upgrade instructions 2017-08-23 15:27:02 +01:00
Richard van der Hoff
a4d3bfe3d6 Merge pull request #2417 from matrix-org/rav/federation_client
Improvements to the federation test client
2017-08-23 14:50:26 +01:00
Richard van der Hoff
a7effa8400 Merge pull request #2288 from kyrias/bcrypt
python_dependencies: Use bcrypt module instead of py-bcrypt
2017-08-23 14:14:56 +01:00
Richard van der Hoff
a04c6bbf8f test federation client: Allow server-name and key-file as options
so that you don't necessarily need a config file.
2017-08-22 11:19:30 +01:00
Richard van der Hoff
77ea8cbdd7 Merge pull request #2416 from matrix-org/rav/prometheus_config
Add prometheus config
2017-08-22 10:34:40 +01:00
Erik Johnston
2800983f3e Merge pull request #2410 from matrix-org/erikj/groups_publicise
Add ability to publicise group membership
2017-08-21 16:51:56 +01:00
Erik Johnston
8b50fe5330 Use BOOLEAN rather than TEXT type 2017-08-21 16:37:29 +01:00
Erik Johnston
73b4e18c62 Merge pull request #2426 from matrix-org/erikj/groups_fix_sync
Groups: Fix missing json.load in initial sync
2017-08-21 16:37:14 +01:00
Tom Lant
20b3660495 Merge pull request #2413 from matrix-org/toml-issue-template
Issue template for Synapse
2017-08-21 16:07:35 +01:00
Erik Johnston
175a01f56c Groups: Fix missing json.load in initial sync 2017-08-21 14:45:56 +01:00
Richard van der Hoff
046b659ce2 Improvements to the federation test client
Make it read the config file, primarily.
2017-08-17 16:59:11 +01:00
Tom Lant
413c270723 Update ISSUE_TEMPLATE.md
Added instructions for checking server version.
2017-08-17 11:14:35 +01:00
Tom Lant
ec3a2dc773 Update ISSUE_TEMPLATE.md
Responding to review comments.
2017-08-17 11:00:51 +01:00
Richard van der Hoff
012875258c Add prometheus config
... from https://github.com/matrix-org/synapse-prometheus-config.
2017-08-16 15:31:44 +01:00
Richard van der Hoff
692250c6be Fix user_dir startup
Add missing parameter to _base.start_worker_reactor
2017-08-16 15:11:29 +01:00
Richard van der Hoff
d2352347cf Fix process startup
escape the % that got added in 92168cb so that the process starts up ok.
2017-08-16 14:57:35 +01:00
Matthew Hodgson
92168cbbc5 explain why CPU affinity is a good idea 2017-08-15 18:27:42 +01:00
Richard van der Hoff
963015005e Merge pull request #2415 from matrix-org/rav/synctl_cpu_affinity
Allow configuration of CPU affinity
2017-08-15 17:42:05 +01:00
Richard van der Hoff
10d8b701a1 Allow configuration of CPU affinity
Make it possible to set the CPU affinity in the config file, so that we don't
need to remember to do it manually every time.
2017-08-15 17:08:28 +01:00
Richard van der Hoff
543c794a76 Factor out common application start
We have 10 copies of this code, and I don't really want to update each one
separately.
2017-08-15 17:04:40 +01:00
Tom Lant
57cd0c3dea Update ISSUE_TEMPLATE.md
Removed the sentence encouraging people not to file a bug - if people are in doubt we'd rather they filed a bug than gave up entirely.
2017-08-14 14:40:32 +01:00
Tom Lant
b524dd4c35 Update ISSUE_TEMPLATE.md
Oops capital L.
2017-08-14 14:36:49 +01:00
Tom Lant
09703609fc Create ISSUE_TEMPLATE.md
A new issue template proposed to try and steer people towards #matrix:matrix.org for support queries relating to running their own homeserver.
2017-08-14 14:35:25 +01:00
Erik Johnston
ba3ff7918b Fixup 2017-08-11 13:42:42 +01:00
Erik Johnston
ef8e578677 Add bulk group publicised lookup API 2017-08-09 13:36:22 +01:00
Erik Johnston
b880ff190a Allow update group publicity 2017-08-08 14:19:41 +01:00
Erik Johnston
05e21285aa Store whether the user wants to publicise their membership of a group 2017-08-08 13:01:46 +01:00
hera
eae04f1952 fix english 2017-08-04 23:56:42 +01:00
hera
5699b05072 typo 2017-08-04 23:44:37 +01:00
Erik Johnston
a1e67bcb97 Remove stale TODO comments 2017-08-04 10:07:10 +01:00
Erik Johnston
09552f9d9c Reduce spammy log line in synchrotrons 2017-08-02 17:29:51 +01:00
Kenny Keslar
f18373dc5d Fix iteration of requests_missing_keys; list doesn't have .values()
Signed-off-by: Kenny Keslar <r3dey3@r3dey3.com>
2017-07-26 22:44:19 -05:00
Erik Johnston
ebbaae5526 Merge pull request #2382 from matrix-org/erikj/group_privilege
Include users membership in group in summary API
2017-07-24 18:09:12 +01:00
Erik Johnston
966a70f1fa Update comment 2017-07-24 17:49:39 +01:00
Erik Johnston
629cdfb124 Use join rather than joined, etc. 2017-07-24 14:54:05 +01:00
Erik Johnston
ed666d3969 Fix all the typos 2017-07-24 14:05:09 +01:00
Erik Johnston
b76ef6ccb8 Include users membership in group in summary API 2017-07-24 13:55:39 +01:00
Erik Johnston
851aeae7c7 Check users/rooms are in group before adding to summary 2017-07-24 13:40:56 +01:00
Erik Johnston
d5e32c843f Correctly add joins to correct segment 2017-07-24 13:31:26 +01:00
Erik Johnston
96917d5552 Merge pull request #2378 from matrix-org/erikj/group_sync_support
Add groups to sync stream
2017-07-21 11:05:39 +01:00
Erik Johnston
0401604222 Merge pull request #2377 from matrix-org/erikj/group_profile_update
Add update group profile API
2017-07-20 17:53:39 +01:00
Erik Johnston
b238cf7f6b Remove spurious content param 2017-07-20 17:49:55 +01:00
Erik Johnston
960dae3340 Add notifier 2017-07-20 17:14:44 +01:00
Erik Johnston
2cc998fed8 Fix replication. And notify 2017-07-20 17:13:18 +01:00
Erik Johnston
139fe30f47 Remember to cast to bool 2017-07-20 16:47:35 +01:00
Erik Johnston
4d793626ff Fix bug in generating current token 2017-07-20 16:42:44 +01:00
Erik Johnston
c544188ee3 Add groups to sync stream 2017-07-20 16:36:42 +01:00
Erik Johnston
0ab153d201 Check values are strings 2017-07-20 16:24:18 +01:00
Erik Johnston
8209b5f033 Fix a storage desc 2017-07-20 16:22:22 +01:00
Erik Johnston
b27429729d Merge pull request #2375 from matrix-org/erikj/port_script
Fix port script for user directory tables
2017-07-20 16:16:43 +01:00
Erik Johnston
60a9a49f83 Extend comment 2017-07-20 16:16:29 +01:00
Erik Johnston
b3bf6a1218 Merge pull request #2374 from matrix-org/erikj/group_server_local
Add local group server support
2017-07-20 13:15:42 +01:00
Erik Johnston
57826d645b Fix typo 2017-07-20 13:15:22 +01:00
Erik Johnston
d7d24750be Fix port script for user directory tables 2017-07-20 10:47:01 +01:00
Erik Johnston
6f443a74cf Add update group profile API 2017-07-20 09:46:33 +01:00
Erik Johnston
14a34f12d7 Comments 2017-07-18 17:28:42 +01:00
Erik Johnston
3431ec55dc Comments 2017-07-18 17:23:50 +01:00
Erik Johnston
6027b1992f Fix permissions 2017-07-18 16:51:25 +01:00
Erik Johnston
e884ff31d8 Add DELETE 2017-07-18 16:41:44 +01:00
Erik Johnston
05c13f6c22 Add 'args' param to post_json 2017-07-18 16:40:21 +01:00
Erik Johnston
94ecd871a0 Fix typos 2017-07-18 16:38:54 +01:00
Erik Johnston
12ed4ee48e Correctly parse query params 2017-07-18 15:33:09 +01:00
Erik Johnston
332839f6ea Update federation client pokes 2017-07-18 14:45:37 +01:00
Erik Johnston
e5ea6dd021 Add client apis 2017-07-18 14:37:06 +01:00
Erik Johnston
cccfcfa7b9 Comments 2017-07-18 10:35:18 +01:00
Erik Johnston
68f34e85ce Use transport client directly 2017-07-18 10:29:57 +01:00
Erik Johnston
3e703eb04e Comment 2017-07-18 10:17:25 +01:00
Erik Johnston
508460f240 Remove sync stuff 2017-07-18 09:55:46 +01:00
Erik Johnston
6e9f147faa Add GroupID type 2017-07-18 09:47:25 +01:00
Erik Johnston
4540730111 Remove unused tables 2017-07-18 09:38:15 +01:00
Erik Johnston
e96ee95a7e Remove sync stuff 2017-07-18 09:38:08 +01:00
Erik Johnston
2f9eafdd36 Add local group server support 2017-07-17 12:03:49 +01:00
Erik Johnston
b3de67234e Merge pull request #2363 from matrix-org/erikj/group_server_summary
Add group summary APIs
2017-07-17 11:54:14 +01:00
Erik Johnston
514c2d3c4d Merge pull request #2371 from matrix-org/erikj/push_cache_hit
Increase cache hit ratio for push
2017-07-17 09:42:27 +01:00
Erik Johnston
bfde076022 Increase cache hit ratio for push
We don't update the cache in all code paths, which causes subsequent
calls to miss the cache
2017-07-14 16:11:26 +01:00
Erik Johnston
cb3aee8219 Ensure category and role ids are non-null 2017-07-14 14:06:55 +01:00
Erik Johnston
85fda57208 Add DEFAULT_ROLE_ID 2017-07-14 14:03:54 +01:00
Erik Johnston
4b203bdba5 Correctly increment orders 2017-07-14 14:02:31 +01:00
Erik Johnston
d3862812ff Merge pull request #2366 from matrix-org/erikj/push_metrics
Add more metrics to push rule evaluation
2017-07-14 11:04:03 +01:00
Erik Johnston
8d26385d76 Add more metrics to push rule evaluation 2017-07-13 14:37:30 +01:00
Erik Johnston
3b0470dba5 Remove unused functions 2017-07-13 13:54:01 +01:00
Erik Johnston
8575e3160f Comments 2017-07-13 13:52:41 +01:00
Erik Johnston
67b7b904ba Merge pull request #2365 from matrix-org/erikj/push_skip_lock
Push: Don't acquire lock unless necessary
2017-07-13 11:44:48 +01:00
Erik Johnston
f60218ec41 Push: Don't acquire lock unless necessary 2017-07-13 11:23:53 +01:00
Erik Johnston
a78cda4baf Remove TODO 2017-07-13 11:17:07 +01:00
Erik Johnston
7a39da8cc6 Add summary APIs to federation 2017-07-13 11:13:19 +01:00
Erik Johnston
5bbb53580a raise NotImplementedError 2017-07-13 10:25:29 +01:00
Erik Johnston
26451a09eb Comments 2017-07-12 14:47:18 +01:00
Erik Johnston
8d55877c9e Simplify checking if admin 2017-07-12 11:43:39 +01:00
Erik Johnston
a62406aaa5 Add group summary APIs 2017-07-12 11:36:15 +01:00
Erik Johnston
91818723a1 Merge pull request #2362 from matrix-org/erikj/sync_user_users_who_share
Use less DB for device list handling in sync
2017-07-12 10:45:30 +01:00
Erik Johnston
e9aec001f4 Use less DB for device list handling in sync 2017-07-12 10:30:10 +01:00
Erik Johnston
28e8c46f29 Merge pull request #2352 from matrix-org/erikj/group_server_split
Initial Group Server
2017-07-12 10:26:29 +01:00
Erik Johnston
6d586dc05c Comment 2017-07-12 09:58:37 +01:00
Erik Johnston
410b4e14a1 Move comment 2017-07-11 15:44:18 +01:00
Erik Johnston
fe4e885f54 Add federation API for adding room to group 2017-07-11 14:35:07 +01:00
Erik Johnston
bbb739d24a Comment 2017-07-11 14:31:36 +01:00
Erik Johnston
26752df503 Typo 2017-07-11 14:29:03 +01:00
Erik Johnston
e52c391cd4 Rename column to attestation_json 2017-07-11 14:25:46 +01:00
Erik Johnston
0aac30d53b Comments 2017-07-11 14:23:50 +01:00
Erik Johnston
0184a97dbd Merge pull request #2354 from krombel/reduce_static_sync_reply
encode sync-response statically
2017-07-11 14:19:56 +01:00
Krombel
85b9f76f1d split out reducing stuff; just make encode_* static 2017-07-11 13:14:35 +02:00
Erik Johnston
6322fbbd41 Comment 2017-07-11 11:52:03 +01:00
Erik Johnston
8ba89f1050 Remove u/ requirement 2017-07-11 11:45:32 +01:00
Erik Johnston
429925a5e9 Lift out visibility parsing 2017-07-11 11:44:08 +01:00
Erik Johnston
83936293eb Comments 2017-07-11 11:42:25 +01:00
Erik Johnston
e2cb760dcc Merge pull request #2357 from matrix-org/erikj/push
Don't compute push actions for backfilled events
2017-07-11 10:53:22 +01:00
Erik Johnston
925b3638ff Reduce log levels in tcp replication 2017-07-11 10:04:21 +01:00
Erik Johnston
9a6fd3ef29 Don't compute push actions for backfilled events 2017-07-11 10:02:21 +01:00
Krombel
2f82de18ee fix test 2017-07-10 17:34:58 +02:00
Erik Johnston
b8ca494ee9 Initial group server implementation 2017-07-10 15:44:15 +01:00
Krombel
6e16aca8b0 encode sync-response statically; omit empty objects from sync-response 2017-07-10 16:42:17 +02:00
Erik Johnston
d4d12daed9 Include registration and as stores in frontend proxy 2017-07-07 18:36:45 +01:00
Erik Johnston
f467a8f66d Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-07-07 18:26:28 +01:00
Erik Johnston
c9184ed87e Merge pull request #2344 from matrix-org/erikj/frontend_proxy
Add a frontend proxy
2017-07-07 18:25:46 +01:00
Erik Johnston
1fc4a962e4 Add a frontend proxy 2017-07-07 18:19:46 +01:00
Erik Johnston
08284c86ed Merge pull request #2343 from matrix-org/erikj/fastpush
Perf: Don't filter events for push
2017-07-07 14:25:42 +01:00
Erik Johnston
f502b0dea1 Perf: Don't filter events for push
We know the users are joined and we can explicitly check for if they are
ignoring the user, so lets do that.
2017-07-07 14:04:40 +01:00
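The explicit ignore check reads roughly like this sketch (the shape of m.ignored_user_list is per the client-server spec; the helper name is hypothetical):

```python
def should_push_to(event, user_account_data):
    # The push path already knows the user is joined; the only relevant
    # part of the old filtering was the ignore list, so check just that.
    ignored = (
        user_account_data.get("m.ignored_user_list", {})
        .get("ignored_users", {})
    )
    return event["sender"] not in ignored
```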
Erik Johnston
1200f28d66 Merge branch 'hotfixes-v0.22.1' of github.com:matrix-org/synapse 2017-07-06 18:11:49 +01:00
Erik Johnston
76ed3476d3 Bump version and changelog 2017-07-06 18:11:22 +01:00
Erik Johnston
58dc1f2c78 Merge pull request #2342 from matrix-org/erikj/pusher_pool_instantiate
Fix bug where pusherpool didn't start and broke some rooms
2017-07-06 18:08:43 +01:00
Erik Johnston
5a7f561a9b Fix bug where pusherpool didn't start and broke some rooms
Since we didn't instantiate the PusherPool at start time it could fail
at run time, which it did for some users.

This may or may not fix things for those users, but any such failure
should now happen at start time and stop the server from starting.
2017-07-06 17:55:51 +01:00
Erik Johnston
ed9a7f5436 Merge pull request #2309 from matrix-org/erikj/user_ip_repl
Fix up user_ip replication commands
2017-07-06 14:33:14 +01:00
Erik Johnston
1f64207f26 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-07-06 13:57:45 +01:00
Erik Johnston
42b50483be Merge branch 'release-v0.22.0' of github.com:matrix-org/synapse 2017-07-06 10:36:25 +01:00
Erik Johnston
6264cf9666 Bump version and changelog 2017-07-06 10:35:56 +01:00
Erik Johnston
f386632800 Merge pull request #2334 from matrix-org/erikj/refactor_transport_server
Separate federation servlet into different lists
2017-07-05 17:09:07 +01:00
Erik Johnston
5e49a57ecc Separate federation servlet into different lists 2017-07-05 14:32:24 +01:00
Richard van der Hoff
3d31b39297 Merge pull request #2332 from matrix-org/rav/fix_pushes
Fix caching error in the push evaluator
2017-07-05 11:10:53 +01:00
Richard van der Hoff
73cfe48031 Fix caching error in the push evaluator
Initialising `result` to `{}` in the parameters meant that every call to
_flatten_dict used the *same* target dictionary.

I'm hopeful this will fix https://github.com/matrix-org/synapse/issues/2270,
but I suspect it won't. (This code seems to have been here since forever,
unlike the bug, and I don't really think it explains the observed
behaviour). Still, it makes it hard to investigate the problem.
2017-07-05 00:28:43 +01:00
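The classic Python trap in question, and the standard fix (a simplified _flatten_dict, not the real one):

```python
def _flatten_dict(d, prefix=(), result=None):
    # The buggy signature was roughly `def _flatten_dict(d, ..., result={})`:
    # Python evaluates default arguments once, so every call shared the
    # same `result` dict. Default to None and allocate per call instead.
    if result is None:
        result = {}
    for key, value in d.items():
        if isinstance(value, dict):
            _flatten_dict(value, prefix + (key,), result)
        else:
            result[".".join(prefix + (key,))] = value
    return result
```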
Erik Johnston
05538587ef Bump version and changelog 2017-07-04 14:02:21 +01:00
Erik Johnston
f92d7416d7 Merge pull request #2330 from matrix-org/erikj/cache_size_factor
Increase default cache size
2017-07-04 10:51:21 +01:00
Mark Haines
1f12d808e7 Merge pull request #2323 from matrix-org/markjh/invite_checks
Improve the error handling for bad invites received over federation
2017-07-04 10:50:43 +01:00
Erik Johnston
29a4066a4d Update test 2017-07-04 10:21:25 +01:00
Erik Johnston
7afb4e3f54 Update README 2017-07-04 10:00:52 +01:00
Erik Johnston
495f075b41 Increase default cache factor size. 2017-07-04 09:58:32 +01:00
Erik Johnston
b5e8d529e6 Define CACHE_SIZE_FACTOR once 2017-07-04 09:56:44 +01:00
Mark Haines
3e279411fe Improve the error handling for bad invites received over federation 2017-06-30 16:20:30 +01:00
Erik Johnston
47574c9cba Merge pull request #2321 from matrix-org/erikj/prefill_forward
Prefill forward extrems and event to state groups
2017-06-30 11:03:04 +01:00
Erik Johnston
6ff14ddd2e Make into list 2017-06-29 15:47:37 +01:00
Erik Johnston
5946aa0877 Prefill forward extrems and event to state groups 2017-06-29 15:38:48 +01:00
Erik Johnston
d800ab2847 Merge pull request #2320 from matrix-org/erikj/cache_macaroon_parse
Cache macaroon parse and validation
2017-06-29 15:06:43 +01:00
Erik Johnston
2c365f4723 Cache macaroon parse and validation
Turns out this can be quite expensive for requests, and is easily
cachable. We don't cache the lookup to the DB so invalidation still
works.
2017-06-29 14:50:18 +01:00
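A sketch of the cacheable part, assuming the pymacaroons library and Synapse's "user_id = ..." caveat convention (a modern lru_cache sketch, not the actual change):

```python
from functools import lru_cache

import pymacaroons  # the library Synapse uses for macaroons

@lru_cache(maxsize=1000)
def parse_macaroon_user_id(token):
    # Parsing is deterministic in the token, so it is safe to memoise.
    # The DB lookup that follows stays *outside* the cache, so token
    # invalidation still works, as the commit message notes.
    macaroon = pymacaroons.Macaroon.deserialize(token)
    for caveat in macaroon.caveats:
        if caveat.caveat_id.startswith("user_id = "):
            return caveat.caveat_id[len("user_id = "):]
    raise ValueError("token has no user_id caveat")
```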
Erik Johnston
a1a253ea50 Merge pull request #2319 from matrix-org/erikj/prune_sessions
Use an ExpiringCache for storing registration sessions
2017-06-29 14:20:24 +01:00
Erik Johnston
c72058bcc6 Use an ExpiringCache for storing registration sessions
This is because pruning them was a significant performance drain on
matrix.org
2017-06-29 14:08:37 +01:00
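One simple shape for such a cache: entries carry a deadline and lapse on access, so nothing has to sweep the whole session table (a sketch, not Synapse's ExpiringCache):

```python
import time

class ExpiringCache:
    def __init__(self, ttl_seconds):
        self._ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self._ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, deadline = entry
        if time.monotonic() > deadline:
            del self._store[key]  # expired: drop lazily on access
            return default
        return value
```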
Erik Johnston
27f26e48b7 Serialize user ip command as json 2017-06-27 16:25:38 +01:00
Erik Johnston
8c23221666 Fix up 2017-06-27 15:53:45 +01:00
Erik Johnston
731f3c37a0 Merge branch 'release-v0.22.0' of github.com:matrix-org/synapse into develop 2017-06-27 15:41:34 +01:00
Erik Johnston
4b444723f0 Merge pull request #2308 from matrix-org/erikj/user_ip_repl
Make workers report to master for user ip updates
2017-06-27 15:36:47 +01:00
Erik Johnston
816605a137 Merge pull request #2307 from matrix-org/erikj/user_ip_batch
Batch upsert user ips
2017-06-27 15:08:32 +01:00
Erik Johnston
78cefd78d6 Make workers report to master for user ip updates 2017-06-27 14:58:10 +01:00
Erik Johnston
a0a561ae85 Fix up client ips to read from pending data 2017-06-27 14:46:12 +01:00
Erik Johnston
ed3d0170d9 Batch upsert user ips 2017-06-27 13:37:04 +01:00
Erik Johnston
976128f368 Update version and changelog 2017-06-26 16:14:56 +01:00
Erik Johnston
d04d672a80 Merge pull request #2290 from matrix-org/erikj/ensure_round_trip
Reject local events that don't round trip the DB
2017-06-26 15:12:02 +01:00
Erik Johnston
036f439f53 Merge pull request #2304 from matrix-org/erikj/users_share_fix
Fix up indices for users_who_share_rooms
2017-06-26 15:11:39 +01:00
Erik Johnston
1bce3e6b35 Remove unused variables 2017-06-26 14:03:27 +01:00
Erik Johnston
e3cbec10c1 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ensure_round_trip 2017-06-26 14:02:44 +01:00
Erik Johnston
8abdd7b553 Fix up indices for users_who_share_rooms 2017-06-26 14:01:30 +01:00
Erik Johnston
ff13c5e7af Merge pull request #2301 from xwiki-labs/push-redact-content
Add configuration parameter to allow redaction of content from push m…
2017-06-24 13:13:51 +01:00
Caleb James DeLisle
27bd0b9a91 Change the config file generator to more descriptive explanation of push.redact_content 2017-06-24 10:32:12 +02:00
Caleb James DeLisle
bce144595c Fix TravisCI tests for PR #2301 - Fat finger mistake 2017-06-23 15:26:09 +02:00
Caleb James DeLisle
75eba3b07d Fix TravisCI tests for PR #2301 2017-06-23 15:15:18 +02:00
Caleb James DeLisle
1591eddaea Add configuration parameter to allow redaction of content from push messages for Google/Apple devices 2017-06-23 13:01:04 +02:00
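Per the config-generator commit above, the setting is push.redact_content, i.e. in homeserver.yaml:

```yaml
push:
  # Strip message content out of push notifications sent to
  # Google/Apple push gateways:
  redact_content: true
```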
Erik Johnston
4fec80ba6f Merge pull request #2299 from matrix-org/erikj/segregate_url_cache_downloads
Store URL cache preview downloads separately
2017-06-23 12:00:45 +01:00
Erik Johnston
7fe8ed1787 Store URL cache preview downloads separately
This makes it easier to clear old media out at a later date
2017-06-23 11:14:11 +01:00
Erik Johnston
e204062310 Merge pull request #2297 from matrix-org/erikj/user_dir_fix
Fix thinko in initial public room user spam
2017-06-22 15:14:47 +01:00
Erik Johnston
44c722931b Make some more params configurable 2017-06-22 14:59:52 +01:00
Erik Johnston
2d520a9826 Typo. ARGH. 2017-06-22 14:42:39 +01:00
Erik Johnston
24d894e2e2 Fix thinko in unhandled user spam 2017-06-22 14:39:05 +01:00
Matthew Hodgson
ccfcef6b59 Merge branch 'master' into develop 2017-06-22 13:03:44 +01:00
Erik Johnston
e0004aa28a Add desc 2017-06-22 10:03:48 +01:00
Erik Johnston
b668112320 Merge pull request #2296 from matrix-org/erikj/dont_appserver_shar
Don't work out users who share room with appservice users
2017-06-21 14:50:24 +01:00
Erik Johnston
dae9a00a28 Initialise exclusive_user_regex 2017-06-21 14:19:33 +01:00
Erik Johnston
71995e1397 Merge pull request #2219 from krombel/avoid_duplicate_filters
only add new filter when not existent previously
2017-06-21 14:11:26 +01:00
Erik Johnston
8177563ebe Fix for workers 2017-06-21 13:57:49 +01:00
Krombel
4202fba82a Merge branch 'develop' into avoid_duplicate_filters 2017-06-21 14:48:21 +02:00
Krombel
812c030e87 replaced json.dumps with encode_canonical_json 2017-06-21 14:48:12 +02:00
Erik Johnston
1217c7da91 Don't work out users who share room with appservice users 2017-06-21 12:00:41 +01:00
Erik Johnston
7d69f2d956 Merge pull request #2292 from matrix-org/erikj/quarantine_media
Add API to quarantine media
2017-06-19 18:15:00 +01:00
Erik Johnston
385dcb7c60 Handle thumbnail urls 2017-06-19 17:48:28 +01:00
Erik Johnston
b8b936a6ea Add API to quarantine media 2017-06-19 17:39:21 +01:00
Erik Johnston
b5f665de32 Merge pull request #2291 from matrix-org/erikj/shutdown_room
Add shutdown room API
2017-06-19 16:20:55 +01:00
Erik Johnston
e5ae386ea4 Handle all cases of sending membership events 2017-06-19 16:07:54 +01:00
Erik Johnston
36e51aad3c Remove unused import 2017-06-19 14:42:21 +01:00
Erik Johnston
b490299a3b Change to create new room and join other users 2017-06-19 14:10:13 +01:00
Erik Johnston
5db7070dd1 Forget room 2017-06-19 12:40:29 +01:00
Erik Johnston
d7fe6b356c Add shutdown room API 2017-06-19 12:37:27 +01:00
Erik Johnston
fcf01dd88e Reject local events that don't round trip the DB 2017-06-19 11:33:40 +01:00
Johannes Löthberg
4f66312df8 python_dependencies: Use bcrypt module instead of py-bcrypt
py-bcrypt has been unmaintained for a long while, while bcrypt is
actively maintained. And since ff8b87118d
we're compatible with bcrypt anyway.

Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2017-06-17 17:39:35 +02:00
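Usage of the two modules is almost identical; with the maintained bcrypt package it looks like:

```python
import bcrypt

# Hash a password with a freshly generated salt, then verify it.
hashed = bcrypt.hashpw(b"s3cret-password", bcrypt.gensalt())
assert bcrypt.checkpw(b"s3cret-password", hashed)
```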
Matthew
3fafb7b189 add missing boolean to synapse_port_db 2017-06-16 20:51:19 +01:00
Matthew
776a070421 fix synapse_port script 2017-06-16 20:24:14 +01:00
Erik Johnston
dfeca6cf40 Merge pull request #2286 from matrix-org/erikj/split_out_user_dir
Split out user directory to a separate process
2017-06-16 13:01:19 +01:00
Erik Johnston
6aa5bc8635 Initial worker impl 2017-06-16 11:47:11 +01:00
Erik Johnston
d8f47d2efa Merge pull request #2280 from matrix-org/erikj/share_room_user_dir
Include users who you share a room with in user directory
2017-06-16 11:06:00 +01:00
Erik Johnston
0a9315bbc7 Merge pull request #2285 from krombel/allow_authorization_header
allow Authorization header
2017-06-16 10:41:58 +01:00
Krombel
1ff419d343 allow Authorization header which handling got implemented in #1098
Signed-off-by: Matthias Kesler <krombel@krombel.de>
2017-06-16 11:21:14 +02:00
Erik Johnston
24df576795 Merge pull request #2282 from matrix-org/release-v0.21.1
Release v0.21.1
2017-06-15 13:24:16 +01:00
Erik Johnston
fdf1ca30f0 Bump version and changelog 2017-06-15 12:59:06 +01:00
Erik Johnston
052c5d19d5 Merge pull request #2281 from matrix-org/erikj/phone_home_stats
Fix phone home stats
2017-06-15 12:46:23 +01:00
Erik Johnston
5ddd199870 Typo 2017-06-15 10:49:10 +01:00
Erik Johnston
a9d6fa8b2b Include users who share room with requester in user directory 2017-06-15 10:17:21 +01:00
Erik Johnston
4564b05483 Implement updating users who share rooms on the fly 2017-06-15 10:17:17 +01:00
Erik Johnston
72613bc379 Implement initial population of users who share rooms table 2017-06-15 09:59:04 +01:00
Erik Johnston
ebcd55d641 Add DB schema for tracking users who share rooms 2017-06-15 09:45:48 +01:00
Erik Johnston
4b461a6931 Add some more stats 2017-06-15 09:39:39 +01:00
Erik Johnston
93e7a38370 Remove unhelpful test 2017-06-15 09:30:54 +01:00
Erik Johnston
617304b2cf Fix phone home stats 2017-06-14 19:47:15 +01:00
Matthew Hodgson
ba502fb89a add notes on running out of FDs 2017-06-14 02:23:14 +01:00
Erik Johnston
6c6b9689bb Merge pull request #2279 from matrix-org/erikj/fix_user_dir
Fix user directory insertion due to missing room_id
2017-06-13 12:48:50 +01:00
Erik Johnston
d9fd937e39 Fix user directory insertion due to missing room_id 2017-06-13 11:50:24 +01:00
Erik Johnston
fe9dc522d4 Merge pull request #2278 from matrix-org/erikj/fix_user_dir
Fix user dir to not assume existence of user
2017-06-13 11:38:44 +01:00
Erik Johnston
505e7e8b9d Fix up sql 2017-06-13 11:19:18 +01:00
Erik Johnston
6fd7e6db3d Fix user dir to not assume existence of user 2017-06-13 11:11:26 +01:00
Erik Johnston
fdca6e36ee Merge pull request #2274 from matrix-org/erikj/cache_is_host_joined
Add cache for is_host_joined
2017-06-13 10:58:53 +01:00
Erik Johnston
90ae0cffec Merge pull request #2275 from matrix-org/erikj/tweark_user_directory_search
Tweak the ranking of PG user dir search
2017-06-13 10:58:43 +01:00
Erik Johnston
de4cb50ca6 Merge pull request #2276 from matrix-org/erikj/fix_user_di
Don't assume existence of events when updating user directory
2017-06-13 10:55:41 +01:00
Erik Johnston
a09e09ce76 Merge pull request #2277 from matrix-org/erikj/media
Throw exception when not retrying when downloading media
2017-06-13 10:52:45 +01:00
Erik Johnston
48d2949416 Throw exception when not retrying when downloading media 2017-06-13 10:23:14 +01:00
Erik Johnston
6ae8373d40 Don't assume existence of events when updating user directory 2017-06-13 10:19:26 +01:00
Erik Johnston
b58e24cc3c Tweak the ranking of PG user dir search 2017-06-13 10:16:31 +01:00
Erik Johnston
d53fe399eb Add cache for is_host_joined 2017-06-13 09:56:18 +01:00
Erik Johnston
a837765e8c Merge pull request #2266 from matrix-org/erikj/host_in_room
Change is_host_joined to use current_state table
2017-06-12 09:49:51 +01:00
Erik Johnston
f540b494a4 Merge pull request #2269 from matrix-org/erikj/cache_state_delta
Cache state deltas
2017-06-09 18:32:04 +01:00
Erik Johnston
8060974344 Fix replication 2017-06-09 16:40:52 +01:00
Erik Johnston
b0d975e216 Comments 2017-06-09 16:25:42 +01:00
Erik Johnston
e54d7d536e Cache state deltas 2017-06-09 16:24:00 +01:00
Erik Johnston
1e9b4d5a95 Merge pull request #2268 from matrix-org/erikj/entity_has_changed
Fix has_any_entity_changed
2017-06-09 15:30:55 +01:00
Erik Johnston
efc2b7db95 Rewrite conditional 2017-06-09 13:35:15 +01:00
Erik Johnston
bfd68019c2 Merge pull request #2267 from matrix-org/erikj/missing_notifier
Fix removing of pushers when using workers
2017-06-09 13:07:29 +01:00
Erik Johnston
1946867bc2 Merge pull request #2265 from matrix-org/erikj/remote_leave_outlier
Mark remote invite rejections as outliers
2017-06-09 13:05:15 +01:00
Erik Johnston
1664948e41 Comment 2017-06-09 13:05:05 +01:00
Erik Johnston
935e588799 Tweak SQL 2017-06-09 13:01:23 +01:00
Erik Johnston
eed59dcc1e Fix has_any_entity_changed
Occasionally has_any_entity_changed would throw the error: "Set changed
size during iteration" when taking the max of the `sorteddict`. While
it's uncertain how that happens, it's quite inefficient to iterate over
the entire dict anyway, so we change to using the more traditional
`bisect_*` functions.
2017-06-09 11:44:01 +01:00
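A minimal sketch of the bisect-based check described above, assuming stream positions are kept in a sorted list alongside the entity map (names here are illustrative, not Synapse's actual internals):

```python
import bisect

class StreamChangeCache:
    """Tracks the stream position at which each entity last changed."""

    def __init__(self):
        self._entity_to_key = {}  # entity -> stream position of last change
        self._sorted_keys = []    # all stream positions, kept sorted

    def entity_changed(self, entity, stream_pos):
        self._entity_to_key[entity] = stream_pos
        bisect.insort(self._sorted_keys, stream_pos)

    def has_any_entity_changed(self, stream_pos):
        # Rather than iterating the whole structure to take a max (which is
        # O(n) and can race with concurrent mutation), bisect into the
        # sorted keys: anything sorting after stream_pos is a change.
        return bisect.bisect_right(self._sorted_keys, stream_pos) < len(self._sorted_keys)
```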
Erik Johnston
2cac7623a5 Add missing notifier 2017-06-09 11:24:41 +01:00
Erik Johnston
298d83b340 Fix replication 2017-06-09 11:01:28 +01:00
Erik Johnston
0185b75381 Change is_host_joined to use current_state table
This bypasses a bug where using the state groups to figure out if a host
is in a room sometimes errors if the server isn't in the room. (For
example, when the server rejected an invite to a remote room.)
2017-06-09 10:52:26 +01:00
Erik Johnston
7132e5cdff Mark remote invite rejections as outliers 2017-06-09 10:08:18 +01:00
Erik Johnston
98bdb4468b Merge pull request #2263 from matrix-org/erikj/fix_state_woes
Ensure we don't use unpersisted state group as prev group
2017-06-08 12:56:18 +01:00
Erik Johnston
ea11ee09f3 Ensure we don't use unpersisted state group as prev group 2017-06-08 11:59:57 +01:00
Erik Johnston
c62c480dc6 Merge pull request #2259 from matrix-org/erikj/fix_state_woes
Fix bug where state_group tables got corrupted
2017-06-07 17:51:25 +01:00
Erik Johnston
197bd126f0 Fix bug where state_group tables got corrupted
This is due to the fact that we prefilled caches using txn.call_after,
which always gets called, even on error.

We fix this by making txn.call_after only fire when a transaction
completes successfully, which is what we want most of the time anyway.
2017-06-07 17:39:36 +01:00
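A rough sketch of the fix, with hypothetical names for the transaction wrapper and runner; the point is that queued callbacks only fire after a successful commit:

```python
class Transaction:
    def __init__(self):
        self._after_callbacks = []

    def call_after(self, callback, *args, **kwargs):
        # Queue the callback rather than running it unconditionally;
        # cache prefills must not happen for writes that roll back.
        self._after_callbacks.append((callback, args, kwargs))

def run_interaction(conn, func):
    txn = Transaction()
    try:
        result = func(txn)
        conn.commit()
    except Exception:
        conn.rollback()
        raise  # the queued callbacks are deliberately NOT run on failure
    for callback, args, kwargs in txn._after_callbacks:
        callback(*args, **kwargs)
    return result
```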
Erik Johnston
f45f07ab86 Merge pull request #2258 from matrix-org/erikj/user_dir
Don't start user_directory handling on workers
2017-06-07 14:04:50 +01:00
Erik Johnston
a053ff3979 Merge pull request #2248 from matrix-org/erikj/state_fixup
Faster cache for get_joined_hosts
2017-06-07 14:01:06 +01:00
Erik Johnston
ecdd2a3658 Don't start user_directory handling on workers 2017-06-07 12:02:53 +01:00
Erik Johnston
2f34ad31ac Add some logging to user directory 2017-06-07 11:50:44 +01:00
Erik Johnston
671f0afa1d Merge pull request #2256 from matrix-org/erikj/faster_device_updates
Split up device_lists_outbound_pokes table for faster updates.
2017-06-07 11:48:00 +01:00
Erik Johnston
64ed74c01e When pruning, delete from device_lists_outbound_last_success 2017-06-07 11:20:47 +01:00
Erik Johnston
1a81a1898e Keep pruning background task 2017-06-07 11:16:56 +01:00
Erik Johnston
6ba21bf2b8 Comments 2017-06-07 11:08:36 +01:00
Erik Johnston
09e4bc0501 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/state_fixup 2017-06-07 11:05:23 +01:00
Erik Johnston
6e2a7ee1bc Remove spurious log lines 2017-06-07 11:05:17 +01:00
Erik Johnston
65f0513a33 Split up device_lists_outbound_pokes table for faster updates. 2017-06-07 11:02:38 +01:00
Erik Johnston
6f83c4537c Increase size of IP cache 2017-06-07 10:18:44 +01:00
Erik Johnston
cca94272fa Fix typo when getting app name 2017-06-06 11:50:07 +01:00
Erik Johnston
66b121b2fc Fix wrong number of arguments 2017-06-06 11:46:38 +01:00
Erik Johnston
8d34120a53 Merge pull request #2253 from matrix-org/erikj/user_dir
Handle profile updates in user directory
2017-06-01 17:33:20 +01:00
Erik Johnston
1a01af079e Handle profile updates in user directory 2017-06-01 15:39:51 +01:00
Erik Johnston
87e5e05aea Merge pull request #2252 from matrix-org/erikj/user_dir
Add a user directory
2017-06-01 15:39:32 +01:00
Erik Johnston
4d039aa2ca Fix sqlite 2017-06-01 14:58:48 +01:00
Erik Johnston
21e255a8f1 Split the table in two 2017-06-01 14:50:46 +01:00
Erik Johnston
d5477c7afd Tweak search query 2017-06-01 13:28:01 +01:00
Erik Johnston
02a6108235 Tweak search query 2017-06-01 13:16:40 +01:00
Erik Johnston
7233341eac Comments 2017-06-01 13:11:38 +01:00
Erik Johnston
8be6fd95a3 Check if host is still in room 2017-06-01 13:05:39 +01:00
Erik Johnston
59dbb47065 Remove spurious inlineCallbacks 2017-06-01 11:41:29 +01:00
Erik Johnston
9c7db2491b Fix removing users 2017-06-01 11:36:50 +01:00
Erik Johnston
0fe6f3c521 Bug fixes and logging
- Check if room is public when a user joins before adding to user dir
- Fix typo of field name "content.join_rules" -> "content.join_rule"
2017-06-01 11:09:49 +01:00
Erik Johnston
036362ede6 Order by if they have profile info 2017-06-01 09:41:08 +01:00
Erik Johnston
a757dd4863 Use prefix matching 2017-06-01 09:40:37 +01:00
Erik Johnston
f5cc22bdc6 Comment on why arbitrary comments 2017-05-31 17:30:26 +01:00
Erik Johnston
5dd1b2c525 Use unique indices 2017-05-31 17:29:12 +01:00
Erik Johnston
cc7609aa9f Comment briefly on how we keep user_directory up to date 2017-05-31 17:11:18 +01:00
Erik Johnston
f1378aef91 Convert to int 2017-05-31 17:03:08 +01:00
Erik Johnston
b2d8d07109 Lifts things into separate function 2017-05-31 17:00:24 +01:00
Erik Johnston
f9791498ae Typos 2017-05-31 16:50:57 +01:00
Erik Johnston
f091061711 Fix tests 2017-05-31 16:34:40 +01:00
Erik Johnston
4abcff0177 Fix typo 2017-05-31 16:22:36 +01:00
Erik Johnston
63c58c2a3f Limit number of things we fetch out of the db 2017-05-31 16:17:58 +01:00
Erik Johnston
304880d185 Add stream change cache 2017-05-31 15:46:36 +01:00
Erik Johnston
5d79d728f5 Split out directory and search tables 2017-05-31 15:23:49 +01:00
Erik Johnston
dc51af3d03 Pull max id from correct table 2017-05-31 15:13:49 +01:00
Erik Johnston
350622a107 Handle the server leaving a public room 2017-05-31 15:11:36 +01:00
Erik Johnston
63fda37e20 Add comments 2017-05-31 15:00:29 +01:00
Erik Johnston
293ef29655 Weight differently 2017-05-31 14:29:32 +01:00
Erik Johnston
535c99f157 Use POST 2017-05-31 14:15:45 +01:00
Erik Johnston
45a5df5914 Add REST API 2017-05-31 14:11:55 +01:00
Erik Johnston
3b5f22ca40 Add search 2017-05-31 14:00:01 +01:00
Erik Johnston
b5db4ed5f6 Update room column when room is no longer public 2017-05-31 13:40:28 +01:00
Erik Johnston
168524543f Add call later 2017-05-31 11:59:36 +01:00
Erik Johnston
3e123b8497 Start later 2017-05-31 11:56:27 +01:00
Erik Johnston
42137efde7 Don't go round in circles 2017-05-31 11:55:13 +01:00
Erik Johnston
eeb2f9e546 Add user_directory to database 2017-05-31 11:51:01 +01:00
Erik Johnston
5dbaa520a5 Merge pull request #2251 from matrix-org/erikj/current_state_delta_stream
Add current_state_delta_stream table
2017-05-30 15:06:17 +01:00
Erik Johnston
dd48f7204c Add comment 2017-05-30 15:01:22 +01:00
Erik Johnston
04095f7581 Add clobbered event_id 2017-05-30 14:53:01 +01:00
Erik Johnston
a584a81b3e Add current_state_delta_stream table 2017-05-30 14:44:09 +01:00
Erik Johnston
619e8ecd0c Handle None state group correctly 2017-05-26 10:46:03 +01:00
Erik Johnston
23da638360 Fix typing tests 2017-05-26 10:02:04 +01:00
Erik Johnston
dfbda5e025 Faster cache for get_joined_hosts 2017-05-25 17:24:44 +01:00
Erik Johnston
2b03751c3c Don't return weird prev_group 2017-05-25 14:47:39 +01:00
Erik Johnston
dbc0dfd2d5 Remove unused options 2017-05-25 14:28:34 +01:00
Erik Johnston
11f139a647 Merge pull request #2247 from matrix-org/erikj/auth_event
Only store event_auth for state events
2017-05-24 16:46:34 +01:00
Erik Johnston
6e614e9e10 Add background task to clear out old event_auth 2017-05-24 15:23:34 +01:00
Erik Johnston
c049472b8a Only store event_auth for state events 2017-05-24 15:23:31 +01:00
Erik Johnston
9a804b2812 Merge pull request #2243 from matrix-org/matthew/fix-url-preview-length-again
actually trim oversize og:description meta
2017-05-23 13:26:28 +01:00
Erik Johnston
fbbc40f385 Merge pull request #2237 from matrix-org/erikj/sync_key_count
Add count of one time keys to sync stream
2017-05-23 11:18:13 +01:00
Erik Johnston
8cf9f0a3e7 Remove redundant invalidation 2017-05-23 09:46:59 +01:00
Erik Johnston
e6618ece2d Missed an invalidation 2017-05-23 09:36:52 +01:00
Erik Johnston
58c4720293 Merge pull request #2242 from matrix-org/erikj/email_refactor
Only load jinja2 templates once
2017-05-23 09:34:08 +01:00
Matthew Hodgson
836d5c44b6 actually trim oversize og:description meta 2017-05-22 21:14:20 +01:00
Erik Johnston
11c2a3655f Only load jinja2 templates once
Instead of every time a new email pusher is created, as loading jinja2
templates is slow.
2017-05-22 17:48:58 +01:00
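A sketch of the change using standard jinja2 APIs; the template path and file name here are illustrative:

```python
from jinja2 import Environment, FileSystemLoader

# Parse and compile the templates once, at startup...
_env = Environment(loader=FileSystemLoader("res/templates"))
_notif_template = _env.get_template("notif_mail.html")

def make_email_pusher(user_id):
    # ...and hand the pre-compiled template to each new pusher, instead
    # of re-loading the template files every time a pusher is created.
    return {"user_id": user_id, "template": _notif_template}
```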
Erik Johnston
539aa4d333 Merge pull request #2241 from matrix-org/erikj/fix_notifs
Correctly calculate push rules for member events
2017-05-22 16:46:58 +01:00
Erik Johnston
f85a415279 Add missing storage function to slave store 2017-05-22 16:31:24 +01:00
Erik Johnston
6489455bed Comment 2017-05-22 16:22:04 +01:00
Erik Johnston
d668caa79c Remove spurious log level guards 2017-05-22 16:21:06 +01:00
Erik Johnston
74bf4ee7bf Stream count_e2e_one_time_keys cache invalidation 2017-05-22 16:19:22 +01:00
Erik Johnston
33ba90c6e9 Merge pull request #2240 from matrix-org/erikj/cache_list_fix
Update list cache to handle one arg case
2017-05-22 16:10:46 +01:00
Erik Johnston
ccd62415ac Merge pull request #2238 from matrix-org/erikj/faster_push_rules
Speed up calculating push rules
2017-05-22 15:12:34 +01:00
Erik Johnston
bd7bb5df71 Pull out if statement from for loop 2017-05-22 15:12:19 +01:00
Erik Johnston
e3417a06e2 Update list cache to handle one arg case
We update the normal cache descriptors to handle caches with a single
argument specially, so that the key isn't a 1-tuple. We need to update
the cache list to be aware of this.
2017-05-22 15:04:42 +01:00
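An illustrative sketch of the key-building rule both code paths now need to share (names are hypothetical):

```python
def make_cache_key(args, num_args):
    # Single-argument caches use the bare value as the key rather than a
    # 1-tuple; batch ("cached list") lookups must build keys the same way,
    # or they will never hit entries populated by the single-key path.
    if num_args == 1:
        return args[0]
    return tuple(args)
```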
Erik Johnston
7fb80b5eae Check if current event is a membership event 2017-05-22 15:02:12 +01:00
Erik Johnston
2d17b09a6d Add debug logging 2017-05-22 15:01:36 +01:00
Erik Johnston
24c8f38784 Comment 2017-05-22 14:59:27 +01:00
Erik Johnston
25f03cf8e9 Use tuple unpacking 2017-05-22 14:58:22 +01:00
Erik Johnston
270e1c904a Speed up calculating push rules 2017-05-19 16:51:05 +01:00
Erik Johnston
b4f59c7e27 Add count of one time keys to sync stream 2017-05-19 15:47:55 +01:00
Erik Johnston
ab4ee2e524 Merge pull request #2236 from matrix-org/erikj/invalidation
Fix invalidation of get_users_with_read_receipts_in_room
2017-05-19 14:50:23 +01:00
Erik Johnston
58ebb96cce Fix invalidation of get_users_with_read_receipts_in_room 2017-05-19 14:38:50 +01:00
Erik Johnston
99713dc7d3 Merge pull request #2234 from matrix-org/erikj/fix_push
Store ActionGenerator in HomeServer
2017-05-19 13:42:49 +01:00
Erik Johnston
1c1c0257f4 Move invalidation cb to its own structure 2017-05-19 11:44:11 +01:00
Erik Johnston
cafe659f72 Store ActionGenerator in HomeServer 2017-05-19 10:09:56 +01:00
Erik Johnston
72ed8196b3 Don't push users who have left 2017-05-18 17:48:36 +01:00
Erik Johnston
107ac7ac96 Increase size of push rule caches 2017-05-18 17:17:53 +01:00
Erik Johnston
234772db6d Merge pull request #2233 from matrix-org/erikj/faster_as_check
Make get_if_app_services_interested_in_user faster
2017-05-18 16:51:18 +01:00
Erik Johnston
760625acba Make get_if_app_services_interested_in_user faster 2017-05-18 16:34:44 +01:00
Erik Johnston
c57789d138 Remove size of push get_rules cache 2017-05-18 16:17:23 +01:00
Erik Johnston
f33df30732 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-05-18 13:56:37 +01:00
Erik Johnston
3accee1a8c Merge branch 'release-v0.21.0' of github.com:matrix-org/synapse 2017-05-18 13:54:27 +01:00
Erik Johnston
a5425b2e5b Bump changelog and version 2017-05-18 13:53:48 +01:00
Erik Johnston
6e381180ae Merge pull request #2177 from matrix-org/erikj/faster_push_rules
Make calculating push actions faster
2017-05-18 11:46:18 +01:00
Erik Johnston
056ba9b795 Add comment 2017-05-18 11:45:56 +01:00
Erik Johnston
88664afe14 Merge pull request #2231 from aaronraimist/patch-1
Correct a typo in UPGRADE.rst
2017-05-18 10:00:35 +01:00
Aaron Raimist
f98efea9b1 Correct a typo in UPGRADE.rst 2017-05-17 21:41:48 -05:00
Erik Johnston
d9e3a4b5db Merge pull request #2230 from matrix-org/erikj/speed_up_get_state
Make get_state_groups_from_groups faster.
2017-05-17 17:23:04 +01:00
Erik Johnston
66d8ffabbd Faster push rule calculation via push specific cache
We add a push rule specific cache that ensures that we can reuse
calculated push rules appropriately when a user joins/leaves.
2017-05-17 16:55:40 +01:00
Erik Johnston
ace23463c5 Merge pull request #2216 from slipeer/app_services_interested_in_user
Fix users claimed non-exclusively by an app service don't get notific…
2017-05-17 16:28:50 +01:00
Erik Johnston
bbfe4e996c Make get_state_groups_from_groups faster.
Most of the time was spent copying a dict to filter out sentinel values
that indicated that keys did not exist in the dict. The sentinel values
were added to ensure that we cached the non-existence of keys.

By updating DictionaryCache to itself keep track of which keys are known
not to exist, we can remove a dictionary copy.
2017-05-17 15:12:15 +01:00
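An illustrative simplification of the described change: absent keys are tracked in a separate set, so no sentinel-filtering copy of the dict is needed on lookup:

```python
class DictionaryCacheEntry:
    def __init__(self, value, known_absent):
        self.value = value                # key -> value, for keys that exist
        self.known_absent = known_absent  # set of keys known NOT to exist

    def get(self, keys):
        # No pass over the dict to strip sentinel values (and hence no
        # copy): non-existence is recorded out-of-band in `known_absent`.
        hits = {k: self.value[k] for k in keys if k in self.value}
        misses = [
            k for k in keys
            if k not in self.value and k not in self.known_absent
        ]
        return hits, misses
```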
Erik Johnston
9f430fa07f Merge branch 'release-v0.21.0' of github.com:matrix-org/synapse into develop 2017-05-17 13:28:46 +01:00
Erik Johnston
7c53a27801 Update changelog 2017-05-17 13:13:45 +01:00
Erik Johnston
a8bc7cae56 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.21.0 2017-05-17 13:11:43 +01:00
Erik Johnston
bf1050f7cf Merge pull request #2229 from matrix-org/erikj/faster_get_joined
Make get_joined_users faster when we have delta state
2017-05-17 13:00:05 +01:00
Erik Johnston
c6f4ff1475 Spelling 2017-05-17 11:29:14 +01:00
Erik Johnston
3a431a126d Bump changelog and version 2017-05-17 11:26:57 +01:00
Erik Johnston
ac08316548 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.21.0 2017-05-17 11:25:23 +01:00
Erik Johnston
85e8092cca Comment 2017-05-17 10:03:09 +01:00
Erik Johnston
ad53fc3cf4 Short circuit when we have delta ids 2017-05-17 09:57:34 +01:00
Erik Johnston
6fa8148ccb Merge pull request #2228 from matrix-org/erikj/speed_up_get_hosts
Speed up get_joined_hosts
2017-05-16 17:40:55 +01:00
Erik Johnston
7c69849a0d Merge pull request #2227 from matrix-org/erikj/presence_caches
Make presence use cached users/hosts in room
2017-05-16 17:40:47 +01:00
Erik Johnston
11bc21b6d9 Merge pull request #2226 from matrix-org/erikj/domain_from_id
Speed up get_domain_from_id
2017-05-16 17:18:16 +01:00
Erik Johnston
13f540ef1b Speed up get_joined_hosts 2017-05-16 16:05:22 +01:00
Erik Johnston
ec5c4499f4 Make presence use cached users/hosts in room 2017-05-16 16:01:43 +01:00
Erik Johnston
f2a5b6dbfd Speed up get_domain_from_id 2017-05-16 15:59:37 +01:00
Erik Johnston
b8492b6c2f Merge pull request #2224 from matrix-org/erikj/prefill_state
Prefill state caches
2017-05-16 15:50:11 +01:00
Erik Johnston
331570ea6f Remove spurious merge artifacts 2017-05-16 15:33:07 +01:00
Krombel
55af207321 Merge branch 'develop' into avoid_duplicate_filters 2017-05-16 15:29:59 +02:00
Richard van der Hoff
d648f65aaf Merge pull request #2218 from matrix-org/rav/event_search_index
Add an index to event_search
2017-05-16 13:26:07 +01:00
Erik Johnston
608b5a6317 Take a copy before prefilling, as it may be a frozendict 2017-05-16 12:55:29 +01:00
Krombel
64953c8ed2 avoid access-error if no filter_id matches 2017-05-15 18:36:37 +02:00
Erik Johnston
f451b64c8f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/prefill_state 2017-05-15 16:09:32 +01:00
Erik Johnston
2c9475b58e Merge pull request #2221 from psaavedra/sync_timeline_limit_filter_by_name
Configurable maximum number of events requested by /sync and /messages
2017-05-15 16:08:46 +01:00
Erik Johnston
6d17573c23 Merge pull request #2223 from matrix-org/erikj/ignore_not_retrying
Don't log exceptions for NotRetryingDestination
2017-05-15 15:52:28 +01:00
Erik Johnston
d12ae7fd1c Don't log exceptions for NotRetryingDestination 2017-05-15 15:42:18 +01:00
Pablo Saavedra
224137fcf9 Fixed syntax nits 2017-05-15 16:21:02 +02:00
Erik Johnston
e4435b014e Update comment 2017-05-15 15:11:30 +01:00
Erik Johnston
871605f4e2 Comments 2017-05-15 15:11:30 +01:00
Erik Johnston
e0d2f6d5b0 Add more granular event send metrics 2017-05-15 15:11:30 +01:00
Erik Johnston
bfbc907cec Prefill state caches 2017-05-15 15:11:13 +01:00
Erik Johnston
5e9d75b4a5 Merge pull request #2215 from hamber-dick/develop
Documentation to check synapse version
2017-05-15 14:42:52 +01:00
Pablo Saavedra
627e6ea2b0 Fixed implementation errors
* Added HS as property in SyncRestServlet
* Fixed set_timeline_upper_limit function implementation
2017-05-15 14:51:43 +02:00
Pablo Saavedra
9da4316ca5 Configurable maximum number of events requested by /sync and /messages (#2220)
Set the limit on the number of events returned in the timeline in the get
and sync operations. The default value is -1, which means no upper limit.

For example, using `filter_timeline_limit: 5000`:

POST /_matrix/client/r0/user/user:id/filter
{
  "room": {
    "timeline": {
      "limit": 1000000000000000000
    }
  }
}

GET /_matrix/client/r0/user/user:id/filter/filter:id

{
  "room": {
    "timeline": {
      "limit": 5000
    }
  }
}

The server cuts the room.timeline.limit down to the configured maximum.
2017-05-13 18:17:54 +02:00
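A minimal sketch of the clamping described above (the function name follows the commit messages; the exact signature is an assumption):

```python
def set_timeline_upper_limit(filter_json, filter_timeline_limit):
    """Clamp room.timeline.limit in a client-supplied filter.

    A filter_timeline_limit of -1 means no upper limit is applied.
    """
    if filter_timeline_limit < 0:
        return filter_json
    timeline = filter_json.setdefault("room", {}).setdefault("timeline", {})
    requested = timeline.get("limit", filter_timeline_limit)
    timeline["limit"] = min(requested, filter_timeline_limit)
    return filter_json

# e.g. with filter_timeline_limit: 5000 the stored limit becomes 5000
assert set_timeline_upper_limit(
    {"room": {"timeline": {"limit": 10**18}}}, 5000
)["room"]["timeline"]["limit"] == 5000
```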
Krombel
eb7cbf27bc insert whitespace to fix travis build 2017-05-12 12:09:42 +02:00
Krombel
6b95e35e96 add check to only add a new filter if the same filter does not exist previously
Signed-off-by: Matthias Kesler <krombel@krombel.de>
2017-05-11 16:05:30 +02:00
Richard van der Hoff
ff3d810ea8 Add a comment to old delta 2017-05-11 12:48:50 +01:00
Richard van der Hoff
34194aaff7 Don't create event_search index on sqlite
... because the table is virtual
2017-05-11 12:46:55 +01:00
Richard van der Hoff
114f290947 Add more logging for purging
Log the number of events we will be deleting at info.
2017-05-11 12:08:47 +01:00
Richard van der Hoff
baafb85ba4 Add an index to event_search
- to make the purge API quicker
2017-05-11 12:05:22 +01:00
Richard van der Hoff
29ded770b1 Merge pull request #2214 from matrix-org/rav/hurry_up_purge
When purging, don't de-delta state groups we're about to delete
2017-05-11 12:04:25 +01:00
Richard van der Hoff
dc026bb16f Tidy purge code and add some comments
Try to make this clearer with more comments and some variable renames
2017-05-11 10:56:12 +01:00
Slipeer
328378f9cb Fix: users claimed non-exclusively by an app service don't get notifications #2211 2017-05-11 11:42:08 +03:00
Luke Barnard
c1935f0a41 Merge pull request #2213 from matrix-org/luke/username-availability-qp
Modify register/available to be GET with query param
2017-05-11 09:36:39 +01:00
hamber-dick
43cd86ba8a Merge pull request #1 from hamber-dick/dev-branch-hamber-dick
Documentation to check synapse version
2017-05-10 23:14:12 +02:00
Richard van der Hoff
8e345ce465 Don't de-delta state groups we're about to delete 2017-05-10 18:44:22 +01:00
Richard van der Hoff
b64d312421 add some logging to purge_history 2017-05-10 18:44:22 +01:00
Luke Barnard
ccad2ed824 Modify condition on empty localpart 2017-05-10 17:34:30 +01:00
Luke Barnard
369195caa5 Modify register/available to be GET with query param
- GET is now the method for register/available
- a query parameter "username" is now used

Also, empty usernames are now handled with an error message on registration or via register/available: `User ID cannot be empty`
2017-05-10 17:23:55 +01:00
hamber-dick
57ed7f6772 Documentation to check synapse version
I've added some documentation on how to get the running version of a
Synapse homeserver. This should help homeserver owners check whether an
upgrade was successful.
2017-05-10 18:01:39 +02:00
Erik Johnston
a3648f84b2 Merge pull request #2208 from matrix-org/erikj/ratelimit_overrid
Add per user ratelimiting overrides
2017-05-10 15:54:48 +01:00
Richard van der Hoff
5331cd150a Merge pull request #2206 from matrix-org/rav/one_time_key_upload_change_sig
Allow clients to upload one-time-keys with new sigs
2017-05-10 14:18:40 +01:00
Luke Barnard
7313a23dba Merge pull request #2209 from matrix-org/luke/username-availability-post
Change register/available to POST (from GET)
2017-05-10 13:05:45 +01:00
Luke Barnard
f7278e612e Change register/available to POST (from GET) 2017-05-10 11:40:18 +01:00
Erik Johnston
b990b2fce5 Add per user ratelimiting overrides 2017-05-10 11:05:43 +01:00
Richard van der Hoff
aedaba018f Replace some instances of preserve_context_over_deferred 2017-05-09 19:04:56 +01:00
Richard van der Hoff
de042b3b88 Do some logging when one-time-keys get claimed
might help us figure out if https://github.com/vector-im/riot-web/issues/3868
has happened.
2017-05-09 19:04:56 +01:00
Richard van der Hoff
a7e9d8762d Allow clients to upload one-time-keys with new sigs
When a client retries a key upload, don't give an error if the signature has
changed (but the key is the same).

Fixes https://github.com/vector-im/riot-android/issues/1208, hopefully.
2017-05-09 19:04:56 +01:00
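A sketch of the relaxed comparison, assuming one-time keys are JSON objects whose "signatures" field may legitimately differ between retries:

```python
def is_same_one_time_key(stored_key, uploaded_key):
    # Compare the key material while ignoring signatures: a retried upload
    # may have been re-signed, but as long as the key itself is unchanged
    # we should accept it instead of returning an error.
    def strip(key):
        key = dict(key)
        key.pop("signatures", None)
        key.pop("unsigned", None)
        return key

    return strip(stored_key) == strip(uploaded_key)
```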
Erik Johnston
ca238bc023 Merge branch 'release-v0.21.0' of github.com:matrix-org/synapse into develop 2017-05-08 17:35:11 +01:00
Erik Johnston
40dcf0d856 Merge pull request #2203 from matrix-org/erikj/event_cache_hit_ratio
Don't update event cache hit ratio from get_joined_users
2017-05-08 16:52:39 +01:00
Erik Johnston
d3c3026496 Merge pull request #2201 from matrix-org/erikj/store_device_cache
Cache check to see if device exists
2017-05-08 16:23:04 +01:00
Erik Johnston
093f7e47cc Expand docstring a bit 2017-05-08 16:14:46 +01:00
Erik Johnston
6a12998a83 Add missing yields 2017-05-08 16:10:51 +01:00
Erik Johnston
b9c84f3f3a Merge pull request #2202 from matrix-org/erikj/cache_count_device
Cache one time key counts
2017-05-08 16:09:12 +01:00
Erik Johnston
ffad4fe35b Don't update event cache hit ratio from get_joined_users
Otherwise the hit ratio of plain get_events gets completely skewed by
calls to get_joined_users* functions.
2017-05-08 16:06:17 +01:00
Erik Johnston
94e6ad71f5 Invalidate cache on device deletion 2017-05-08 15:55:59 +01:00
Erik Johnston
8571f864d2 Cache one time key counts 2017-05-08 15:34:27 +01:00
Erik Johnston
fc6d4974a6 Comment 2017-05-08 15:33:57 +01:00
Erik Johnston
738ccf61c0 Cache check to see if device exists 2017-05-08 15:32:18 +01:00
Erik Johnston
dcabef952c Increase client_ip cache size 2017-05-08 15:09:19 +01:00
Erik Johnston
771c8a83c7 Bump version and changelog 2017-05-08 13:23:46 +01:00
Erik Johnston
6631985990 Merge pull request #2200 from matrix-org/erikj/revert_push
Revert speed up push
2017-05-08 13:20:52 +01:00
Erik Johnston
e0f20e9425 Revert "Remove unused import"
This reverts commit ab37bef83b.
2017-05-08 13:07:43 +01:00
Erik Johnston
fe7c1b969c Revert "We don't care about forgotten rooms"
This reverts commit ad8b316939.
2017-05-08 13:07:43 +01:00
Erik Johnston
78f306a6f7 Revert "Speed up filtering of a single event in push"
This reverts commit 421fdf7460.
2017-05-08 13:07:41 +01:00
Erik Johnston
9ac98197bb Bump version and changelog 2017-05-08 11:07:54 +01:00
Erik Johnston
27c28eaa27 Merge pull request #2190 from matrix-org/erikj/mark_remote_as_back_more
Always mark remotes as up if we receive a signed request from them
2017-05-05 14:08:12 +01:00
Erik Johnston
be2672716d Merge pull request #2189 from matrix-org/erikj/handle_remote_device_list
Handle exceptions thrown in handling remote device list updates
2017-05-05 14:01:27 +01:00
Erik Johnston
653d90c1a5 Comment 2017-05-05 14:01:17 +01:00
Erik Johnston
310b1ccdc1 Use preserve_fn and add logs 2017-05-05 13:41:19 +01:00
Kegsay
a59b0ad1a1 Merge pull request #2192 from matrix-org/kegan/simple-http-client-timeouts
Rewrite SimpleHttpClient.request to include timeouts
2017-05-05 11:52:43 +01:00
Erik Johnston
7b222fc56e Remove redundant reset of destination timers 2017-05-05 11:14:09 +01:00
Kegan Dougal
d0debb2116 Remember how twisted works 2017-05-05 11:00:21 +01:00
Erik Johnston
66f371e8b8 Merge pull request #2176 from matrix-org/erikj/faster_get_joined
Make get_joined_users faster
2017-05-05 10:59:55 +01:00
Erik Johnston
b843631d71 Add comment and TODO 2017-05-05 10:59:32 +01:00
Kegan Dougal
c2ddd773bc Include the clock 2017-05-05 10:52:46 +01:00
Kegan Dougal
7dd3bf5e24 Rewrite SimpleHttpClient.request to include timeouts
Fixes #2191
2017-05-05 10:49:19 +01:00
Erik Johnston
db7d0c3127 Always mark remotes as up if we receive a signed request from them 2017-05-05 10:34:53 +01:00
Erik Johnston
f346048a6e Handle exceptions thrown in handling remote device list updates 2017-05-05 10:34:10 +01:00
Erik Johnston
e3aa8a7aa8 Merge pull request #2185 from matrix-org/erikj/smaller_caches
Optimise caches for single key
2017-05-05 10:19:05 +01:00
Erik Johnston
cf589f2c1e Fixes 2017-05-05 10:17:56 +01:00
Erik Johnston
8af4569583 Merge pull request #2174 from matrix-org/erikj/current_cache_hosts
Add cache for get_current_hosts_in_room
2017-05-05 10:15:24 +01:00
Erik Johnston
b25db11d08 Merge pull request #2186 from matrix-org/revert-2175-erikj/prefill_state
Revert "Prefill state caches"
2017-05-04 15:09:25 +01:00
Erik Johnston
587f07543f Revert "Prefill state caches" 2017-05-04 15:07:27 +01:00
Erik Johnston
aa93cb9f44 Add comment 2017-05-04 14:59:28 +01:00
Erik Johnston
537dbadea0 Intern host strings 2017-05-04 14:55:28 +01:00
Erik Johnston
07a07588a0 Make caches bigger 2017-05-04 14:52:28 +01:00
Erik Johnston
dfaa58f72d Fix comment and num args 2017-05-04 14:50:24 +01:00
Erik Johnston
9ac263ed1b Add new storage functions to slave store 2017-05-04 14:29:03 +01:00
Erik Johnston
d2d8ed4884 Optimise caches with single key 2017-05-04 14:18:46 +01:00
Erik Johnston
5d8290429c Reduce size of get_users_in_room 2017-05-04 13:43:19 +01:00
Luke Barnard
6aa423a1a8 Merge pull request #2183 from matrix-org/luke/username-availability
Implement username availability checker
2017-05-04 09:58:40 +01:00
Luke Barnard
3669065466 Appease the flake8 gods 2017-05-03 18:05:49 +01:00
Erik Johnston
7ebf518c02 Make get_joined_users faster 2017-05-03 15:55:54 +01:00
Luke Barnard
34ed4f4206 Implement username availability checker
Outlined here: https://github.com/vector-im/riot-web/issues/3605#issuecomment-298679388

```HTTP
GET /_matrix/.../register/available
{
    "username": "desiredlocalpart123"
}
```

If available, the response looks like
```HTTP
HTTP/1.1 200 OK
{
    "available": true
}
```

Otherwise,
```HTTP
HTTP/1.1 429
{
    "errcode": "M_LIMIT_EXCEEDED",
    "error": "Too Many Requests",
    "retry_after_ms": 2000
}
```
or
```HTTP
HTTP/1.1 400
{
    "errcode": "M_USER_IN_USE",
    "error": "User ID already taken."
}
```
or
```HTTP
HTTP/1.1 400
{
    "errcode": "M_INVALID_USERNAME",
    "error": "Some reason for username being invalid"
}
```
2017-05-03 12:04:12 +01:00
David Baker
60833c8978 Merge pull request #2147 from matrix-org/dbkr/http_request_propagate_error
Propagate errors sensibly from proxied IS requests
2017-05-03 11:23:25 +01:00
David Baker
482a2ad122 No need for the exception variable 2017-05-03 11:02:59 +01:00
David Baker
c0380402bc List caught exception types 2017-05-03 10:56:22 +01:00
Erik Johnston
cdbf38728d Merge pull request #2175 from matrix-org/erikj/prefill_state
Prefill state caches
2017-05-03 10:54:11 +01:00
Erik Johnston
0c27383dd7 Merge pull request #2170 from matrix-org/erikj/fed_hole_state
Don't fetch state for missing events that we fetched
2017-05-03 10:49:37 +01:00
Erik Johnston
ef862186dd Merge together redundant calculations/logging 2017-05-03 10:06:43 +01:00
Erik Johnston
2c2dcf81d0 Update comment 2017-05-03 10:00:29 +01:00
Erik Johnston
1827057acc Comments 2017-05-03 09:56:05 +01:00
Erik Johnston
8346e6e696 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/prefill_state 2017-05-03 09:46:40 +01:00
Erik Johnston
e4c15fcb5c Merge pull request #2178 from matrix-org/erikj/message_metrics
Add more granular event send metrics
2017-05-02 17:57:34 +01:00
Erik Johnston
3e5a62ecd8 Add more granular event send metrics 2017-05-02 14:23:26 +01:00
Richard van der Hoff
82475a18d9 Merge pull request #2180 from matrix-org/rav/fix_timeout_on_timeout
Instantiate DeferredTimedOutError correctly
2017-05-02 13:32:58 +01:00
Richard van der Hoff
2e996271fe Instantiate DeferredTimedOutError correctly
Call `super` correctly, so that we correctly initialise the `errcode` field.

Fixes https://github.com/matrix-org/synapse/issues/2179.
2017-05-02 13:26:17 +01:00
Erik Johnston
a2c89a225c Prefill state caches 2017-05-02 10:40:31 +01:00
Erik Johnston
7166854f41 Add cache for get_current_hosts_in_room 2017-05-02 10:36:35 +01:00
Erik Johnston
3033261891 Merge pull request #2080 from matrix-org/erikj/filter_speed
Speed up filtering of a single event in push
2017-04-28 14:17:13 +01:00
Erik Johnston
2347efc065 Fixup 2017-04-28 12:46:53 +01:00
Erik Johnston
9b147cd730 Remove unncessary call in _get_missing_events_for_pdu 2017-04-28 11:55:25 +01:00
Erik Johnston
3a9f5bf6dd Don't fetch state for missing events that we fetched 2017-04-28 11:26:46 +01:00
Erik Johnston
ab37bef83b Remove unused import 2017-04-28 09:57:23 +01:00
Erik Johnston
ad8b316939 We don't care about forgotten rooms 2017-04-28 09:52:36 +01:00
Erik Johnston
421fdf7460 Speed up filtering of a single event in push 2017-04-28 09:52:36 +01:00
Erik Johnston
25a96e0c63 Merge pull request #2163 from matrix-org/erikj/fix_invite_state
Fix invite state to always include all events
2017-04-27 17:36:30 +01:00
Erik Johnston
46826bb078 Comment and remove spurious logging 2017-04-27 17:25:44 +01:00
Erik Johnston
f87b287291 Merge pull request #2127 from APwhitehat/alreadystarted
print something legible if synapse already running
2017-04-27 15:46:53 +01:00
Erik Johnston
bb9246e525 Merge pull request #2131 from matthewjwolff/develop
web_client_location documentation fix
2017-04-27 15:46:40 +01:00
Richard van der Hoff
c84770b877 Fix bgupdate error if index already exists (#2167)
When creating a new table index in the background, guard against it existing already. Fixes
https://github.com/matrix-org/synapse/issues/2135.

Also, make sure we restore the autocommit flag when we're done, otherwise we
get more failures from other operations later on. Fixes
https://github.com/matrix-org/synapse/issues/1890 (hopefully).
2017-04-27 15:27:48 +01:00
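A simplified sketch of the two fixes, assuming a psycopg2-style connection; the real background updater is more involved (and must handle sqlite too):

```python
def create_index_in_background(conn, index_name, table, column):
    # Index creation runs outside a transaction, so autocommit is flipped
    # on -- and must be restored afterwards, otherwise later operations
    # on this connection start failing.
    orig_autocommit = conn.autocommit
    try:
        conn.autocommit = True
        cur = conn.cursor()
        # Guard against the index already existing (e.g. a previous run
        # was interrupted after creating it).
        cur.execute(
            "CREATE INDEX IF NOT EXISTS %s ON %s(%s)"
            % (index_name, table, column)
        )
    finally:
        conn.autocommit = orig_autocommit
```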
Erik Johnston
380fb87ecc Merge pull request #2168 from matrix-org/erikj/federation_logging
Add some extra logging for edge cases of federation
2017-04-27 15:19:12 +01:00
Erik Johnston
87ae59f5e9 Typo 2017-04-27 15:16:21 +01:00
Erik Johnston
e42b4ebf0f Add some extra logging for edge cases of federation 2017-04-27 14:38:21 +01:00
Erik Johnston
d3c150411c Merge pull request #2130 from APwhitehat/roomexists
Check that requested room_id exists
2017-04-27 09:20:26 +01:00
Erik Johnston
1e166470ab Fix tests 2017-04-26 16:23:30 +01:00
Erik Johnston
34e682d385 Fix invite state to always include all events 2017-04-26 16:18:08 +01:00
Erik Johnston
7239258ae6 Merge pull request #2160 from matrix-org/erikj/reduce_join_cache_size
Make state caches cache in ascii
2017-04-26 14:02:06 +01:00
David Baker
5fd12dce01 Remove debugging 2017-04-26 12:36:26 +01:00
David Baker
82ae0238f9 Revert accidental commit 2017-04-26 11:43:16 +01:00
David Baker
81804909d3 Merge remote-tracking branch 'origin/develop' into dbkr/http_request_propagate_error 2017-04-26 11:31:55 +01:00
David Baker
c366276056 Fix get_json 2017-04-26 10:07:01 +01:00
David Baker
1a9255c12e Use CodeMessageException subclass instead
Parse json errors from get_json client methods and throw special
errors.
2017-04-25 19:30:55 +01:00
Matthew Hodgson
94f36b0273 document how to make IPv6 work (#2088)
* document how to make IPv6 work

* spell out that pip will install 17.1 by default
2017-04-25 18:37:12 +01:00
Erik Johnston
c45dc6c62a Merge pull request #2136 from bbigras/patch-1
Fix the system requirements list in README.rst
2017-04-25 17:25:52 +01:00
Erik Johnston
f053a1409e Make state caches cache in ascii 2017-04-25 17:22:55 +01:00
Erik Johnston
22f935ab7c Merge pull request #2159 from matrix-org/erikj/reduce_join_cache_size
Reduce size of joined_user cache
2017-04-25 17:22:02 +01:00
Erik Johnston
9388eece2b Merge pull request #2149 from enckse/develop
setting up metrics, just adding/clarifying 2 very minor items
2017-04-25 16:37:46 +01:00
Erik Johnston
acb58bfb6a fix up 2017-04-25 15:39:19 +01:00
Erik Johnston
f7181615f2 Don't specify default as dict 2017-04-25 15:22:59 +01:00
Erik Johnston
f144365281 Comment 2017-04-25 15:18:26 +01:00
Erik Johnston
d9aa645f86 Reduce size of joined_user cache
The _get_joined_users_from_context cache stores a mapping from user_id
to avatar_url and display_name. Instead of storing those in a dict,
store them in a namedtuple as that uses much less memory.

We also try converting the string to ascii to further reduce the size.
2017-04-25 14:38:51 +01:00
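A sketch of the memory trade-off (field names follow the commit message; the ascii conversion reflects the Python 2 era in which this was written):

```python
from collections import namedtuple

# A namedtuple has no per-instance __dict__, so an entry is much smaller
# than a dict holding the same two fields.
ProfileInfo = namedtuple("ProfileInfo", ("avatar_url", "display_name"))

def to_ascii(s):
    # On Python 2, a pure-ascii bytestring is smaller than the equivalent
    # unicode string; fall back to the original value if it isn't ascii.
    try:
        return s.encode("ascii")
    except (AttributeError, UnicodeError):
        return s

entry = ProfileInfo(
    avatar_url=to_ascii("mxc://example.org/abc123"),
    display_name=to_ascii("Alice"),
)
```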
Erik Johnston
22f3d3ae76 Reduce _get_state_group_for_event cache size 2017-04-25 11:43:03 +01:00
Erik Johnston
b4da08cad8 Merge pull request #2158 from matrix-org/erikj/reduce_cache_size
Reduce cache size by not storing deferreds
2017-04-25 11:18:07 +01:00
Erik Johnston
efab1dadde Remove DEBUG_CACHES 2017-04-25 10:54:09 +01:00
Mark Haines
33d5134b59 Merge pull request #2156 from matrix-org/markjh/old_verify_keys
Fix code for reporting old verify keys in synapse
2017-04-25 10:44:25 +01:00
Erik Johnston
119cb9bbcf Reduce cache size by not storing deferreds
Currently the cache descriptors store deferreds rather than raw values;
this is a simple way of triggering only one database hit and sharing the
result if two callers attempt to get the same value.

However, there are a few caches that simply store a mapping from string
to string (or int). These caches can have a large number of entries,
under the assumption that each entry is small. However, the size of a
deferred (specifically the size of ObservableDeferred) is significantly
larger than that of the raw value, 2kb vs 32b.

This PR therefore changes the cache descriptors to store the raw values
rather than the deferreds.

As a side effect, cached storage functions now either return a deferred
or the actual value, as the cached list descriptor already does. This is
fine as we always end up just yield'ing on the returned value
eventually, which handles that case correctly.
2017-04-25 10:23:11 +01:00
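A heavily simplified sketch of the idea, using Twisted deferreds; the real descriptors also handle invalidation, logcontexts and concurrent observers:

```python
from twisted.internet import defer

class RawValueCache:
    """Stores the raw value once a lookup completes, not the deferred."""

    def __init__(self):
        self._cache = {}

    def get(self, key, lookup):
        entry = self._cache.get(key)
        if isinstance(entry, defer.Deferred):
            return entry  # lookup already in flight: share it (simplified)
        if key in self._cache:
            # Completed entries are raw values (~32 bytes) rather than
            # ObservableDeferreds (~2kb) -- this is where the saving is.
            return defer.succeed(entry)
        d = defer.maybeDeferred(lookup, key)
        self._cache[key] = d

        def _store(result):
            self._cache[key] = result  # swap the deferred for the raw value
            return result

        d.addCallback(_store)
        return d
```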
Mark Haines
e6e2627636 Fix code for reporting old verify keys in synapse 2017-04-24 18:51:25 +01:00
Richard van der Hoff
30f7bfa121 Merge pull request #2145 from matrix-org/rav/reject_invite_to_unreachable_server
Fix rejection of invites to unreachable servers
2017-04-24 15:20:52 +01:00
Erik Johnston
7af825bae4 Merge pull request #2155 from matrix-org/erikj/string_intern
Only intern ascii strings
2017-04-24 14:28:41 +01:00
Erik Johnston
26bcda31b8 Merge pull request #2154 from matrix-org/erikj/remove_unused_cache
Remove unused cache
2017-04-24 14:24:46 +01:00
Erik Johnston
d134d0935e Only intern ascii strings 2017-04-24 14:07:48 +01:00
Erik Johnston
e4f3431116 Remove unused cache 2017-04-24 13:27:38 +01:00
David Baker
a46982cee9 Need the HTTP status code 2017-04-21 16:20:12 +01:00
David Baker
70caf49914 Do the same for get_json 2017-04-21 16:09:03 +01:00
Sean Enck
719aec4064 clarify metric setup to use 'scrape_configs' section of yaml and use an array for target 2017-04-21 11:03:32 -04:00
Richard van der Hoff
cea7839911 Document some of the admin APIs (#2143)
I haven't (yet) documented all of the user-list APIs introduced in
https://github.com/matrix-org/synapse/pull/1784 because the API shape seems
very odd, given the functionality.
2017-04-21 11:55:07 +01:00
David Baker
a1595cec78 Don't error for 3xx responses 2017-04-21 11:51:17 +01:00
David Baker
2e165295b7 Merge remote-tracking branch 'origin/develop' into dbkr/http_request_propagate_error 2017-04-21 11:35:52 +01:00
David Baker
a90a0f5c8a Propagate errors sensibly from proxied IS requests
When we're proxying Matrix endpoints, parse out Matrix error
responses and turn them into SynapseErrors so they can be
propagated sensibly upstream.
2017-04-21 11:32:48 +01:00
Richard van der Hoff
91b3981800 Try harder when sending leave events
When we're rejecting invites, ignore the backoff data, so that we have a better
chance of not getting the room out of sync.
2017-04-21 01:50:36 +01:00
Richard van der Hoff
0cdb32fc43 Remove redundant try/except clauses
The `except SynapseError` clauses were pointless because the wrapped functions
would never throw a `SynapseError` (they either throw a `CodeMessageException`
or a `RuntimeError`).

The `except CodeMessageException` is now also pointless because the caller
treats all exceptions equally, so we may as well just throw the
`CodeMessageException`.
2017-04-21 01:32:01 +01:00
Richard van der Hoff
838810b76a Broaden the conditions for locally_rejecting invites
The logic for marking invites as locally rejected was all well and good, but
didn't happen when the remote server returned a 500, or wasn't reachable, or
had no DNS, or whatever.

Just expand the except clause to catch everything.

Fixes https://github.com/matrix-org/synapse/issues/761.
2017-04-21 01:31:37 +01:00
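A self-contained sketch of the broadened handling (helper names are stand-ins, not Synapse's):

```python
def send_leave_via_federation(room_id, user_id):
    # Stand-in for the real federation call, which can fail in many ways:
    # HTTP 500s, DNS failures, timeouts -- not just federation errors.
    raise ConnectionError("remote server unreachable")

def locally_reject_invite(room_id, user_id):
    print("invite to %s locally rejected for %s" % (room_id, user_id))

def reject_invite(room_id, user_id):
    try:
        send_leave_via_federation(room_id, user_id)
    except Exception:
        # Previously only a narrow set of exceptions was caught here, so
        # an unreachable server left the invite dangling. Catch everything
        # and always fall back to local rejection.
        locally_reject_invite(room_id, user_id)

reject_invite("!room:remote.example", "@user:local.example")
```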
Richard van der Hoff
736b9a4784 Remove redundant function
inline `reject_remote_invite`, which only existed to make tracing the callflow
more difficult.
2017-04-21 01:31:09 +01:00
Richard van der Hoff
4903ccf159 Fix some lies, and other clarifications, in docstrings
The documentation on get_json has been wrong ever since the very first commit
to synapse...
2017-04-21 01:31:09 +01:00
Bruno Bigras
51fb884c52 Fix the system requirements list in README.rst 2017-04-19 17:32:00 -04:00
Matthew Wolff
d4040e9e28 Queried CONDITIONAL_REQUIREMENTS 2017-04-18 16:19:48 -05:00
Luke Barnard
3fb8784c92 m.read_marker -> m.fully_read (#2128)
Also:
 - change the REST endpoint to have a "S" on the end (so it's now /read_markers)
 - change the content of the m.read_up_to event to have the key "event_id" instead of "marker".
2017-04-18 17:46:15 +01:00
Matthew Hodgson
c02b6a37d6 Merge pull request #2132 from feld/patch-1
Update README.rst
2017-04-17 16:18:34 +01:00
Mark Felder
814fb032eb Update README.rst
The FreeBSD port has been moved to the net-im category
2017-04-17 08:26:50 -05:00
Matthew Wolff
54f9a4cb59 Fixed travis build failure
Signed-off-by: Matthew Wolff <matthewjwolff@gmail.com>
2017-04-17 01:38:27 -05:00
Matthew Wolff
8e780b113d web_server_root documentation fix
Signed-off-by: Matthew Wolff <matthewjwolff@gmail.com>
2017-04-17 00:49:11 -05:00
Anant Prakash
574d573ac2 Check that requested room_id exists 2017-04-14 23:50:59 +05:30
Luke Barnard
78f0ddbfad Merge pull request #2120 from matrix-org/luke/read-markers
Implement Read Marker API
2017-04-13 14:21:31 +01:00
Luke Barnard
6a70647d45 Correct logic in is_event_after 2017-04-13 13:46:17 +01:00
Anant Prakash
c1f52a321d synctl.py: Check if synapse is already running 2017-04-13 18:00:02 +05:30
Luke Barnard
b9557064bf Simplify is_event_after logic 2017-04-12 14:36:20 +01:00
Luke Barnard
cf6121e3da More null-guard changes 2017-04-12 14:02:03 +01:00
Erik Johnston
247c736b9b Merge pull request #2115 from matrix-org/erikj/dedupe_federation_repl
Reduce federation replication traffic
2017-04-12 11:07:13 +01:00
Paul Evans
8fbc0d29ee Merge pull request #2121 from matrix-org/paul/sent-transactions-metric
Add a counter metric for successfully-sent transactions
2017-04-12 11:04:31 +01:00
Erik Johnston
c06c00190f Merge pull request #2116 from matrix-org/erikj/dedupe_federation_repl2
Dedupe KeyedEdu and Devices federation repl traffic
2017-04-12 10:57:24 +01:00
Luke Barnard
c0aba0a23e Remove Unused ref to hs 2017-04-12 10:52:11 +01:00
Luke Barnard
b9676a75f6 Move a space 2017-04-12 10:51:17 +01:00
Luke Barnard
69a18514e9 Only notify user, not entire room 2017-04-12 10:50:37 +01:00
Luke Barnard
122cd52ce4 Remove comment, simplify null-guard 2017-04-12 10:48:32 +01:00
Erik Johnston
26ae5178a4 Add some comments 2017-04-12 10:36:29 +01:00
Erik Johnston
bf9060156a Merge pull request #2117 from matrix-org/erikj/remove_http_replication
Remove HTTP replication APIs
2017-04-12 10:21:42 +01:00
Erik Johnston
82301b6c29 Remove last reference to worker_replication_url 2017-04-12 10:21:02 +01:00
Erik Johnston
1745069543 Comment 2017-04-12 10:17:10 +01:00
Erik Johnston
c7ddb5ef7a Reuse get_interested_parties 2017-04-12 10:16:26 +01:00
Erik Johnston
7b41013102 Merge pull request #2118 from matrix-org/erikj/no_devices
Fix getting latest device IP for user with no devices
2017-04-12 10:14:32 +01:00
Luke Barnard
77fb2b72ae Handle no previous RM 2017-04-12 09:47:29 +01:00
Luke Barnard
7f94709066 travis flake8.. 2017-04-11 18:35:45 +01:00
Luke Barnard
867822fa1e flake8 2017-04-11 17:36:04 +01:00
Luke Barnard
73880268ef Refactor event ordering check to events store 2017-04-11 17:34:09 +01:00
Luke Barnard
131485ef66 Copyright 2017-04-11 17:33:51 +01:00
Paul "LeoNerd" Evans
11dbceb761 Add a counter metric for successfully-sent transactions 2017-04-11 17:16:12 +01:00
Luke Barnard
0127423027 flake8 2017-04-11 17:07:07 +01:00
Erik Johnston
85657eedf8 Bail on where clause instead 2017-04-11 16:24:31 +01:00
Erik Johnston
b48045a8f5 Don't bother with outer check for now 2017-04-11 16:23:24 +01:00
Erik Johnston
6f65e2f90c Update replication docs 2017-04-11 16:21:12 +01:00
Erik Johnston
323634bf8b Update workers docs 2017-04-11 16:19:52 +01:00
Erik Johnston
9c712a366f Move get_presence_list_* to SlaveStore 2017-04-11 16:07:33 +01:00
Erik Johnston
a8c8e4efd4 Comment 2017-04-11 15:35:49 +01:00
Erik Johnston
414522aed5 Move get_interested_parties 2017-04-11 15:33:26 +01:00
Erik Johnston
2be8a281d2 Comments 2017-04-11 15:28:24 +01:00
Erik Johnston
6308ac45b0 Move get_interested_remotes back to presence handler 2017-04-11 15:19:26 +01:00
Erik Johnston
b9b72bc6e2 Comments 2017-04-11 15:15:34 +01:00
Luke Barnard
d892079844 Finish implementing RM endpoint
- This change causes a 405 to be sent if "m.read_marker" is set via /account_data
- This also fixes up the RM endpoint so that it actually works.
2017-04-11 15:01:39 +01:00
lukebarnard
e263c26690 Initial commit of RM server-side impl
(See https://docs.google.com/document/d/1UWqdS-e1sdwkLDUY0wA4gZyIkRp-ekjsLZ8k6g_Zvso/edit#heading=h.lndohpg8at5u)
2017-04-11 11:55:30 +01:00
Erik Johnston
f3cf3ff8b6 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-04-11 11:13:32 +01:00
Erik Johnston
4902db1fc9 Merge branch 'release-v0.20.0' of github.com:matrix-org/synapse 2017-04-11 11:12:37 +01:00
Erik Johnston
d563b8d944 Bump changelog 2017-04-11 11:12:13 +01:00
Erik Johnston
85a0d6c7ab Remove test of replication resource 2017-04-11 10:59:27 +01:00
Erik Johnston
34840cdcef Fix getting latest device IP for user with no devices 2017-04-11 09:56:54 +01:00
Erik Johnston
28a4649785 Remove HTTP replication APIs 2017-04-11 09:52:11 +01:00
Matthew Hodgson
7c551ec445 trust a hypothetical future riot.im IS 2017-04-10 17:58:36 +01:00
Erik Johnston
84fbb80c8f Use generators 2017-04-10 16:55:56 +01:00
Erik Johnston
40453b3f84 Dedupe KeyedEdu and Devices federation repl traffic 2017-04-10 16:49:51 +01:00
Erik Johnston
29574fd5b3 Reduce federation presence replication traffic
This is mainly done by moving the calculation of where to send presence
updates from the presence handler to the transaction queue, so we only
need to send the presence event (and not the destinations) across the
replication connection. Before we were duplicating by sending the full
state across once per destination.
2017-04-10 16:48:30 +01:00
Erik Johnston
2e6f5a4910 Typo 2017-04-10 16:17:40 +01:00
David Baker
405ba4178a Merge pull request #2102 from DanielDent/add-auth-email
Support authenticated SMTP
2017-04-10 15:42:16 +01:00
Erik Johnston
efcb6db688 Merge pull request #2109 from matrix-org/erikj/send_queue_fix
Fix up federation SendQueue and document types
2017-04-10 13:09:25 +01:00
Erik Johnston
0018491af2 Rename variable 2017-04-10 12:44:43 +01:00
Erik Johnston
0364d23210 Up replication ping timeout 2017-04-10 11:32:05 +01:00
Erik Johnston
8c5f03cec7 Revert to sending the same data type as before 2017-04-10 10:07:18 +01:00
Erik Johnston
f8434db549 Change name 2017-04-10 10:03:07 +01:00
Erik Johnston
ab904caf33 Comments 2017-04-10 10:02:17 +01:00
Richard van der Hoff
54a59adc7c Merge pull request #2110 from matrix-org/rav/fix_reject_persistence
When we do an invite rejection, save the signed leave event to the db
2017-04-07 16:14:41 +01:00
Richard van der Hoff
64765e5199 When we do an invite rejection, save the signed leave event to the db
During a rejection of an invite received over federation, we ask a remote
server to make us a `leave` event, then sign it, then send that with
`send_leave`.

We were saving the *unsigned* version of the event (which has a different event
id to the signed version) to our db (and sending it to the clients), whereas
other servers in the room will have seen the *signed* version. We're not aware
of any actual problems that this caused, except that it makes the database confusing
to look at and generally leaves the room in a weird state.
2017-04-07 14:39:32 +01:00
Erik Johnston
0cd01f5c9c Merge pull request #2108 from matrix-org/erikj/current_state_ids
Speed up get_current_state_ids
2017-04-07 14:20:16 +01:00
Erik Johnston
2a3e822f44 Comment 2017-04-07 13:47:04 +01:00
Erik Johnston
a828a64b75 Comment 2017-04-07 11:54:03 +01:00
Erik Johnston
d4d176e5d0 Add logging 2017-04-07 11:51:28 +01:00
Erik Johnston
449d1297ca Fix up federation SendQueue and document types 2017-04-07 11:48:33 +01:00
Erik Johnston
d72667fcce Speed up get_current_state_ids
Using _simple_select_list is fairly expensive for functions that return
a lot of rows and/or get called a lot. (This is because it carefully
constructs a list of dicts).

get_current_state_ids gets called a lot on startup and e.g. when the IRC
bridge decides to send tonnes of joins/leaves (as it invalidates the
cache). We therefore replace it with a custom txn function that builds
up the final result dict without building up an intermediate
representation.
2017-04-07 10:10:49 +01:00
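A sketch of the custom transaction function, building the result dict directly from the cursor rows instead of going through a generic list-of-dicts helper:

```python
def get_current_state_ids_txn(txn, room_id):
    # Build (type, state_key) -> event_id straight from the rows; no
    # intermediate list of dicts is ever constructed.
    txn.execute(
        "SELECT type, state_key, event_id FROM current_state_events"
        " WHERE room_id = ?",
        (room_id,),
    )
    return {
        (typ, state_key): event_id
        for typ, state_key, event_id in txn.fetchall()
    }
```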
Erik Johnston
a41fe500d6 Bump version and changelog 2017-04-07 10:03:48 +01:00
Erik Johnston
54f59bd7d4 Merge pull request #2107 from HarHarLinks/patch-1
fix typo in synctl help
2017-04-07 09:54:37 +01:00
Erik Johnston
98ce212093 Merge pull request #2103 from matrix-org/erikj/no-double-encode
Don't double encode replication data
2017-04-07 09:39:52 +01:00
Kim Brose
8a1137ceab fix typo in synctl help 2017-04-06 17:10:20 +02:00
Erik Johnston
877c029c16 Use iteritems 2017-04-06 15:51:22 +01:00
Erik Johnston
944692ef69 Merge pull request #2106 from matrix-org/erikj/reduce_user_sync
Reduce rate of USER_SYNC repl commands
2017-04-06 13:35:31 +01:00
Erik Johnston
391712a4f9 Comment 2017-04-06 13:35:00 +01:00
Erik Johnston
ad544c803a Document types of the replication streams 2017-04-06 13:28:52 +01:00
Erik Johnston
dbf87282d3 Docs 2017-04-06 13:11:21 +01:00
Erik Johnston
69b3fd485d Fix incorrect type when using InvalidateCacheCommand 2017-04-06 09:36:38 +01:00
Daniel Dent
5058292537 Support authenticated SMTP
Closes (SYN-714) #1385

Signed-off-by: Daniel Dent <matrixcontrib@contactdaniel.net>
2017-04-05 21:01:08 -07:00
Erik Johnston
fcc803b2bf Add log lines 2017-04-05 17:13:44 +01:00
Erik Johnston
ea0152b132 Merge pull request #2104 from matrix-org/erikj/metrics_tcp
Rearrange TCP replication metrics
2017-04-05 14:24:06 +01:00
Erik Johnston
3f213d908d Rearrange metrics 2017-04-05 14:15:09 +01:00
Erik Johnston
1ca0e78ca1 Fix typo 2017-04-05 13:43:39 +01:00
Erik Johnston
b43d3267e2 Fixup some metrics for tcp repl 2017-04-05 13:34:54 +01:00
Erik Johnston
b5cb6347a4 Don't immediately notify the master about users whose syncs have gone away 2017-04-05 13:25:40 +01:00
Erik Johnston
96b9b6c127 Don't double json encode typing replication data 2017-04-05 11:34:20 +01:00
Erik Johnston
f10ce8944b Don't double json encode federation replication data 2017-04-05 11:10:28 +01:00
Erik Johnston
a5c401bd12 Merge pull request #2097 from matrix-org/erikj/repl_tcp_client
Move to using TCP replication
2017-04-05 09:36:21 +01:00
Erik Johnston
b9caf4f726 Merge pull request #2099 from matrix-org/erikj/deviceinbox_reduce
Deduplicate new deviceinbox rows for replication
2017-04-05 09:35:59 +01:00
Erik Johnston
d1d5362267 Add comment 2017-04-04 16:41:03 +01:00
Erik Johnston
9f26d3b75b Deduplicate new deviceinbox rows for replication 2017-04-04 16:21:21 +01:00
Erik Johnston
a76886726b Merge pull request #2098 from matrix-org/erikj/repl_tcp_fix
Advance replication streams even if nothing is listening
2017-04-04 15:40:51 +01:00
Erik Johnston
ac66e11f2b Add the appropriate amount of preserve_fn 2017-04-04 15:22:54 +01:00
Erik Johnston
4264ceb31c Fiddle tcp replication logging 2017-04-04 14:14:03 +01:00
Erik Johnston
023ee197be Advance replication streams even if nothing is listening
Otherwise the streams don't advance and steadily fall behind, so when a
worker does connect, either a) it'll be streamed lots of old updates or
b) the connection will fail as the streams are too far behind.
2017-04-04 13:19:26 +01:00
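An illustrative sketch of the behaviour change: the stream token advances whether or not anyone is connected:

```python
class Stream:
    def __init__(self):
        self.last_token = 0
        self.listeners = []

    def advance(self, new_token, updates):
        # Advance even with no listeners; otherwise the token lags ever
        # further behind, and a worker that connects later either gets a
        # huge backlog streamed at it or is rejected outright.
        self.last_token = new_token
        for listener in self.listeners:
            listener(new_token, updates)
```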
Erik Johnston
d1605794ad Remove unused worker config option 2017-04-04 11:17:00 +01:00
Erik Johnston
3376f16012 Shuffle and comment synchrotron presence 2017-04-04 11:14:16 +01:00
Erik Johnston
6ce6bbedcb Move where we ack federation 2017-04-04 11:02:44 +01:00
Erik Johnston
27cc627e42 Merge pull request #2082 from matrix-org/erikj/repl_tcp_server
Replace HTTP replication with TCP replication (Server side part)
2017-04-04 10:07:57 +01:00
Erik Johnston
62b89daac6 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/repl_tcp_server 2017-04-04 09:46:16 +01:00
Richard van der Hoff
773e64cc1a Merge pull request #2095 from matrix-org/rav/cull_log_preserves
Cull spurious PreserveLoggingContexts
2017-04-03 17:02:25 +01:00
Richard van der Hoff
2d05eb3cf5 Merge remote-tracking branch 'origin/release-v0.20.0' into develop 2017-04-03 16:23:23 +01:00
Richard van der Hoff
ac63b92b64 Merge pull request #2094 from matrix-org/rav/fix_federation_join
Accept join events from all servers
2017-04-03 16:17:17 +01:00
Richard van der Hoff
30bcbf775a Accept join events from all servers
Make sure that we accept join events from any server, rather than just the
origin server, to make the federation join dance work correctly.

(Fixes #1893).
2017-04-03 15:58:07 +01:00
Richard van der Hoff
7eb9f34cc3 Remove spurious yield
In `MessageHandler`, remove `yield` on call to `Notifier.on_new_room_event`:
it doesn't return anything anyway.
2017-04-03 15:44:19 +01:00
Richard van der Hoff
0b08c48fc5 Remove more spurious PreserveLoggingContexts
Remove `PreserveLoggingContext` around calls to `Notifier.on_new_room_event`;
there is no problem if the logcontext is set when calling it.
2017-04-03 15:43:37 +01:00
Richard van der Hoff
65e1683680 Remove spurious PreserveLoggingContext
In `on_new_room_event`, remove `PreserveLoggingContext` - we can call its
subroutines with the logcontext set.
2017-04-03 15:42:38 +01:00
Richard van der Hoff
feb496056e preserve_fn some deferred-returning things
In `Notifier._on_new_room_event`, `preserve_fn` around its subroutines which
return deferreds, so that it is safe to call it with an active logcontext.
2017-04-03 15:41:17 +01:00
Richard van der Hoff
e2eebf1696 Fix fixme in preserve_fn
`preserve_fn` is no longer used as a decorator anywhere, so we can safely fix a
fixme therein.
2017-04-03 15:38:02 +01:00
Erik Johnston
36c28bc467 Update all the workers and master to use TCP replication 2017-04-03 15:35:52 +01:00
Erik Johnston
3a1f3f8388 Change slave storage to use new replication interface
The TCP replication uses a slightly different API and streams than
the HTTP replication.

This breaks HTTP replication.
2017-04-03 15:34:19 +01:00
Erik Johnston
52bfa604e1 Add basic replication client handler and factory 2017-04-03 15:34:13 +01:00
Erik Johnston
0a6a966e2b Always advance stream tokens 2017-04-03 15:22:56 +01:00
Richard van der Hoff
773e1c6d68 Remove spurious @preserve_fn decorators
Remove `@preserve_fn` decorators on `on_new_room_event`,
`_notify_pending_new_room_events`, `_on_new_room_event`, `on_new_event`, and
`on_new_replication_data` - none of these functions return a deferred, and the
decorator does nothing unless the wrapped function returns a deferred, so the
decorator was a no-op.
2017-04-03 15:14:11 +01:00
Erik Johnston
0d1c85e643 Merge branch 'release-v0.20.0' of github.com:matrix-org/synapse into develop 2017-04-03 14:58:14 +01:00
Erik Johnston
1df7c28661 Use callbacks to notify tcp replication rather than deferreds 2017-03-31 15:42:51 +01:00
Erik Johnston
36d2b66f90 Add a timestamp to USER_SYNC command
This timestamp is used to indicate when the user last sync'd
2017-03-31 15:42:22 +01:00
Erik Johnston
8a240e4f9c Merge pull request #2078 from APwhitehat/assertuserfriendly
add user friendly report of assertion error in synctl.py
2017-03-31 14:41:49 +01:00
Erik Johnston
ec039e6790 Merge pull request #1984 from RyanBreaker/patch-1
Add missing package to CentOS section
2017-03-31 14:39:32 +01:00
Erik Johnston
142b6b4abf Merge pull request #2011 from matrix-org/matthew/turn_allow_guests
add setting (on by default) to support TURN for guests
2017-03-31 14:37:09 +01:00
Erik Johnston
2a06b44be2 Merge pull request #1986 from matrix-org/matthew/enable_guest_3p
enable guest access for the 3pl/3pid APIs
2017-03-31 14:36:03 +01:00
Erik Johnston
2dc57e7413 Merge pull request #2024 from jerrykan/db_port_schema
Don't assume postgres tables are in the public schema during db port
2017-03-31 14:14:30 +01:00
Erik Johnston
07a32d192c Merge pull request #1961 from benhylau/patch-1
Clarify doc for SQLite to PostgreSQL port
2017-03-31 14:13:26 +01:00
Erik Johnston
9a27448b1b Merge pull request #1927 from zuckschwerdt/fix-nuke-script
bring nuke-room script to current schema
2017-03-31 14:06:29 +01:00
Matthew Hodgson
9ee397b440 switch to allow_guest=True for authing 3Ps as per PR feedback 2017-03-31 13:54:26 +01:00
Erik Johnston
9d0170ac6c Fix up presence 2017-03-31 11:36:32 +01:00
Erik Johnston
b4276a3896 Add a brief list of commands to docs 2017-03-31 11:34:45 +01:00
Erik Johnston
bfcf016714 Fix up docs 2017-03-31 11:19:24 +01:00
Erik Johnston
9cee0ce7db Merge pull request #2075 from matrix-org/erikj/cache_speed
Speed up cached function access
2017-03-31 10:10:56 +01:00
Erik Johnston
350333a09a Merge pull request #2076 from matrix-org/erikj/as_perf
Make AS's faster
2017-03-31 09:43:10 +01:00
Erik Johnston
639d9ae9a0 Merge pull request #2083 from matrix-org/erikj/copy_replace_speed
Speed up copy_and_replace
2017-03-31 09:39:54 +01:00
Erik Johnston
4d17add8de Remove unused instance variable 2017-03-31 09:38:27 +01:00
Erik Johnston
27b1b4a2c9 Speed up copy_and_replace 2017-03-30 17:50:31 +01:00
Erik Johnston
5b5b171f3e Docs 2017-03-30 17:05:53 +01:00
Erik Johnston
b282fe7170 Revert log context change 2017-03-30 17:03:59 +01:00
Erik Johnston
9ff4e0e91b Bump version and changelog 2017-03-30 16:37:40 +01:00
Erik Johnston
0834d1a70c Merge pull request #2079 from matrix-org/erikj/push_regex_cache
Cache glob to regex at a higher level for push
2017-03-30 16:28:18 +01:00
Erik Johnston
63fcc42990 Remove user from process_presence when stops syncing 2017-03-30 14:26:08 +01:00
Erik Johnston
6194a64ae9 Doc new instance variables 2017-03-30 14:19:10 +01:00
Erik Johnston
eefd9fee81 Fix up tests 2017-03-30 14:14:46 +01:00
Erik Johnston
014fee93b3 Manually calculate cache key as getcallargs is expensive
This is because getcallargs recomputes the getargspec, amongst other
things, which we don't need to do as it's already been done
2017-03-30 14:14:46 +01:00
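A rough sketch of that optimisation (simplified, names hypothetical): derive the argspec once at decoration time and build the cache key positionally, rather than calling inspect.getcallargs on every lookup.

    import inspect

    def make_cache_key_builder(func):
        # computed once when the function is decorated, not per call
        spec = inspect.getfullargspec(func)
        arg_names = spec.args[1:]  # drop 'self'
        defaults = (
            dict(zip(arg_names[-len(spec.defaults):], spec.defaults))
            if spec.defaults else {}
        )

        def build_key(args, kwargs):
            key = list(args)
            # fill in any trailing arguments from kwargs or their defaults
            for name in arg_names[len(args):]:
                key.append(kwargs.get(name, defaults.get(name)))
            return tuple(key)

        return build_key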
Erik Johnston
86780a8bc3 Don't convert to deferreds when not necessary 2017-03-30 14:14:36 +01:00
Erik Johnston
31e0fe9031 Fix indentation in docs/ 2017-03-30 13:54:15 +01:00
Erik Johnston
3ba2859e0c Add tcp replication listener type and hook it up 2017-03-30 13:31:10 +01:00
Erik Johnston
e9dd8370b0 Add functions to presence to support remote syncs
The TCP replication protocol streams deltas of who has started or
stopped syncing. This is different from the HTTP API which periodically
sends the full list of users who are syncing. This commit adds support
for the new TCP style of sending deltas.
2017-03-30 13:25:14 +01:00
Erik Johnston
4d7fc7f977 Add server side resource for tcp replication 2017-03-30 13:24:45 +01:00
Richard van der Hoff
f9b4bb05e0 Fix the logcontext handling in the cache wrappers (#2077)
The cache wrappers had a habit of leaking the logcontext into the reactor while
the lookup function was running, and then not restoring it correctly when the
lookup function had completed. It's all the fault of
`preserve_context_over_{fn,deferred}` which are basically a bit broken.
2017-03-30 13:22:24 +01:00
Erik Johnston
7450693435 Initial TCP protocol implementation
This defines the low level TCP replication protocol
2017-03-30 12:54:46 +01:00
Erik Johnston
8da6f0be48 Define the various streams we will replicate 2017-03-30 12:54:46 +01:00
Erik Johnston
11880103b1 Make federation send queue take the current position 2017-03-30 12:54:36 +01:00
Erik Johnston
7984708a55 Add a simple hook to wait for replication traffic 2017-03-30 11:57:52 +01:00
Erik Johnston
24d35ab47b Add new storage functions for new replication
The new replication protocol will keep all the streams separate, rather
than muxing multiple streams into one.
2017-03-30 11:48:35 +01:00
Erik Johnston
30348c924c Use txn.fetchall() so we can reuse txn 2017-03-30 10:30:05 +01:00
Anant Prakash
6cdca71079 synctl.py: wait for synapse to stop before restarting (#2020) 2017-03-29 19:20:13 +01:00
Anant Prakash
305d16d612 add user friendly report of assertion error in synctl.py
Signed-off-by: Anant Prakash <anantprakashjsr@gmail.com>
2017-03-29 20:41:39 +05:30
Erik Johnston
a3810136fe Cache glob to regex at a higher level for push 2017-03-29 15:53:14 +01:00
Erik Johnston
3ce8d59176 Increase cache size for _get_state_group_for_event 2017-03-29 14:31:46 +01:00
Erik Johnston
b9c2ae6788 Merge pull request #1849 from matrix-org/erikj/state_typo
Fix bug where current_state_events renamed to current_state_ids
2017-03-29 11:53:42 +01:00
Erik Johnston
85be3dde81 Bail early if remote wouldn't be retried (#2064)
* Bail early if remote wouldn't be retried

* Don't always return true

* Just use get_retry_limiter

* Spelling
2017-03-29 11:48:27 +01:00
Erik Johnston
2f8b580b64 Merge pull request #2053 from matrix-org/erikj/e2e_one_time_upsert
Don't use upsert to persist new one time keys
2017-03-29 11:44:23 +01:00
Erik Johnston
c5b0bdd542 Merge pull request #2067 from matrix-org/erikj/notify_on_fed
Notify on new federation traffic
2017-03-29 11:41:37 +01:00
Erik Johnston
e4df0e189d Decrank last commit 2017-03-29 11:02:35 +01:00
Erik Johnston
4ad613f6be Merge branch 'develop' of github.com:matrix-org/synapse into erikj/e2e_one_time_upsert 2017-03-29 10:57:19 +01:00
Erik Johnston
ac6bc55512 Correctly look up key 2017-03-29 10:56:26 +01:00
Erik Johnston
69efd77749 Add comment 2017-03-29 09:50:05 +01:00
Richard van der Hoff
276af7b59b Merge pull request #2037 from ricco386/fix_readme_centos_issues
Fix installation issues
2017-03-29 08:52:39 +02:00
Erik Johnston
51b156d48a Cache whether an AS is interested based on members 2017-03-28 13:27:21 +01:00
Erik Johnston
30f5ffdca2 Remove param and cast at call site 2017-03-28 13:27:21 +01:00
Erik Johnston
650f0e69f2 Compile the regex's used in ASes 2017-03-28 13:27:21 +01:00
Erik Johnston
d28db583da Merge pull request #2063 from matrix-org/erikj/device_list_batch
Batch sending of device list pokes
2017-03-28 11:35:41 +01:00
Erik Johnston
58a35366be The algorithm is part of the key id 2017-03-28 11:34:37 +01:00
Erik Johnston
dc56a6b8c8 Merge pull request #2070 from matrix-org/erikj/perf_send
Short circuit if all new events have same state group
2017-03-27 18:33:54 +01:00
Erik Johnston
bac9bf1b12 Typo 2017-03-27 18:02:17 +01:00
Erik Johnston
d82c42837f Short circuit if all new events have same state group 2017-03-27 18:00:47 +01:00
Erik Johnston
35b4aa04be Notify on new federation traffic 2017-03-27 14:07:47 +01:00
Erik Johnston
2a28b79e04 Batch sending of device list pokes 2017-03-24 14:44:49 +00:00
Erik Johnston
281553afe6 Merge pull request #2062 from matrix-org/erikj/presence_replication
Use presence replication stream to invalidate cache
2017-03-24 13:57:45 +00:00
Erik Johnston
987f4945b4 Actually call invalidate 2017-03-24 13:28:20 +00:00
Erik Johnston
31d56c3fb5 Merge pull request #2061 from matrix-org/erikj/add_transaction_store
Add slave transaction store to workers who send federation requests
2017-03-24 13:24:27 +00:00
Erik Johnston
09f79aaad0 Use presence replication stream to invalidate cache
Instead of using the cache invalidation replication stream to invalidate
the _get_presence_cache, we can instead rely on the presence replication
stream. This reduces the amount of replication traffic considerably.
2017-03-24 13:21:08 +00:00
Erik Johnston
23e0ff840a Merge pull request #2060 from matrix-org/erikj/cache_hosts_in_room
Cache hosts in room
2017-03-24 13:07:22 +00:00
Erik Johnston
48e7697911 Add slave transaction store 2017-03-24 13:05:30 +00:00
Richard van der Hoff
f136c89d5e Merge pull request #2058 from matrix-org/rav/logcontext_leaks_2
try not to drop context after federation requests
2017-03-24 12:47:26 +00:00
Richard van der Hoff
01fc847f7f Merge pull request #2057 from matrix-org/rav/missing_yield_2
Add another missing yield on check_device_registered
2017-03-24 12:46:43 +00:00
Erik Johnston
7fc1f1e2b6 Cache hosts in room 2017-03-24 11:46:24 +00:00
Erik Johnston
57cfa513f5 Merge pull request #2054 from matrix-org/erikj/user_iter_cursor
Reduce some CPU work on DB threads
2017-03-24 11:40:08 +00:00
Erik Johnston
d58b1ffe94 Replace some calls to cursor_to_dict
cursor_to_dict can be surprisingly expensive for large result sets, so let's
only call it when we need to.
2017-03-24 11:07:02 +00:00
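An illustrative sketch of the change (hypothetical query): when only a couple of columns are needed, working with the raw row tuples avoids building a dict per row.

    def get_device_ids_txn(txn):
        txn.execute("SELECT user_id, device_id FROM devices")
        # rather than: [row["device_id"] for row in cursor_to_dict(txn)]
        return [device_id for user_id, device_id in txn.fetchall()]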
Erik Johnston
e71940aa64 Use iter(items|values) 2017-03-24 10:57:02 +00:00
David Baker
e36950dec5 Merge pull request #2055 from matrix-org/dbkr/fix_add_msisdn_requestToken
Fix token request for addition of phone numbers
2017-03-24 10:32:57 +00:00
David Baker
f902e89d4b Merge pull request #2056 from matrix-org/dbkr/fix_invite_reject
Fix rejection of invites not reaching sync
2017-03-24 10:32:51 +00:00
Richard van der Hoff
a380f041c2 try not to drop context after federation requests
preserve_context_over_fn uses a ContextPreservingDeferred, which only restores
context for the duration of its callbacks, which isn't really correct, and
means that subsequent operations in the same request can end up without their
logcontexts.
2017-03-23 22:36:21 +00:00
Richard van der Hoff
9397edb28b Merge pull request #2050 from matrix-org/rav/federation_backoff
push federation retry limiter down to matrixfederationclient
2017-03-23 22:27:01 +00:00
Richard van der Hoff
06ce7335e9 Merge pull request #2052 from matrix-org/rav/time_bound_deferred
Fix time_bound_deferred to throw the right exception
2017-03-23 22:22:54 +00:00
Richard van der Hoff
13c8749ac9 Add another missing yield on check_device_registered 2017-03-23 22:18:53 +00:00
David Baker
e1f1784f99 Fix rejection of invites not reaching sync
Always allow the user to see their own leave events, otherwise
they won't see the event if they reject an invite for a room whose
history visibility is set such that they cannot see events before
joining.
2017-03-23 18:50:31 +00:00
David Baker
86e865d7d2 Oops, remove unintentional change 2017-03-23 18:48:30 +00:00
David Baker
a2dfab12c5 Fix token request for addition of phone numbers 2017-03-23 18:46:17 +00:00
Erik Johnston
00957d1aa4 Use Cursor.__iter__ instead of fetchall
This prevents unnecessary construction of lists
2017-03-23 17:53:49 +00:00
Erik Johnston
6af0096f4f Merge pull request #1783 from pik/filter-validation
JSONSchema Validation For Filters
2017-03-23 15:56:15 +00:00
pik
250ce11ab9 Add jsonschema to python_dependencies.py
Signed-off-by: pik <alexander.maznev@gmail.com>
2017-03-23 11:42:47 -03:00
pik
566641a0b5 use jsonschema.FormatChecker for RoomID and UserID strings
* use a valid filter in rest/client/v2_alpha test

Signed-off-by: pik <alexander.maznev@gmail.com>
2017-03-23 11:42:41 -03:00
pik
acafcf1c5b Add valid filter tests, flake8, fix typo
Signed-off-by: pik <alexander.maznev@gmail.com>
2017-03-23 11:42:10 -03:00
pik
e56c79c114 check_valid_filter using JSONSchema
* add invalid filter tests

Signed-off-by: pik <alexander.maznev@gmail.com>
2017-03-23 11:42:07 -03:00
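A condensed sketch of the approach (schema abbreviated, error handling simplified, and the format-check predicate is an assumption): validate the filter dict against a JSON Schema, with a custom format checker for Matrix user IDs.

    import jsonschema

    format_checker = jsonschema.FormatChecker()

    @format_checker.checks("matrix_user_id")
    def check_user_id(instance):
        # illustrative check only; real validation is stricter
        return isinstance(instance, str) and instance.startswith("@")

    FILTER_SCHEMA = {
        "type": "object",
        "properties": {
            "limit": {"type": "number"},
            "senders": {
                "type": "array",
                "items": {"type": "string", "format": "matrix_user_id"},
            },
        },
    }

    def check_valid_filter(user_filter):
        try:
            jsonschema.validate(
                user_filter, FILTER_SCHEMA, format_checker=format_checker
            )
        except jsonschema.ValidationError as e:
            raise ValueError("Invalid filter: %s" % (e.message,))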
Erik Johnston
6ebe2d23b1 Raise a more helpful exception 2017-03-23 13:48:30 +00:00
Richard van der Hoff
0bfea9a2be fix tests 2017-03-23 13:20:08 +00:00
Erik Johnston
e64655c25d Don't use upsert to persist new one time keys
Instead we no-op duplicate one time key uploads, and raise an error if the
key_id already exists but encodes a different key.
2017-03-23 13:17:00 +00:00
Richard van der Hoff
5a16cb4bf0 Ignore backoff history for invites, aliases, and roomdirs
Add a param to the federation client which lets us ignore historical backoff
data for federation queries, and set it for a handful of operations.
2017-03-23 12:23:22 +00:00
Richard van der Hoff
b88a323ffb Fix time_bound_deferred to throw the right exception
Due to a failure to instantiate DeferredTimedOutError, time_bound_deferred
would throw a CancelledError when the deferred timed out, which was rather
confusing.
2017-03-23 12:07:11 +00:00
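A plausible reconstruction of the fix (the exact class shape is an assumption): give the timeout error a zero-argument constructor, so that raising it inside the timeout path cannot itself fail.

    from synapse.api.errors import SynapseError  # takes (code, msg)

    class DeferredTimedOutError(SynapseError):
        def __init__(self):
            # previously the class couldn't be instantiated without args, so
            # the timeout path blew up and a CancelledError surfaced instead
            super(DeferredTimedOutError, self).__init__(504, "Timed out")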
Richard van der Hoff
59358cd3e7 Merge pull request #2005 from kfatehi/docs/readme
Update README: specify python2.7 in virtualenv
2017-03-23 09:32:10 +00:00
Richard van der Hoff
55366814a6 Merge pull request #2048 from matrix-org/rav/missing_yield
Add a missing yield in device key upload
2017-03-23 09:29:44 +00:00
Richard van der Hoff
b2d40d7363 Merge pull request #2049 from matrix-org/rav/logcontext_leaks
Fix a couple of logcontext leaks
2017-03-23 09:29:28 +00:00
Richard van der Hoff
4bd597d9fc push federation retry limiter down to matrixfederationclient
rather than having to instrument everywhere we make a federation call,
make the MatrixFederationHttpClient manage the retry limiter.
2017-03-23 09:28:46 +00:00
Richard van der Hoff
ad8a26e361 MatrixFederationHttpClient: clean up
rename _create_request to _request, and push ascii-encoding of `destination`
and `path` down into it
2017-03-23 00:27:04 +00:00
Richard van der Hoff
19b9366d73 Fix a couple of logcontext leaks
Use preserve_fn to correctly manage the logcontexts around things we don't want
to yield on.
2017-03-23 00:17:46 +00:00
Richard van der Hoff
e08f81d96a Add a missing yield in device key upload
(this would only very very rarely actually be a useful thing, so the main
problem was the logcontext leak...)
2017-03-23 00:16:43 +00:00
Richard Kellner
8d1dd7eb30 Removed requirement that is not needed
I have removed libsodium from the CentOS system requirements, as it is
part of PyNaCl.

Signed-off-by: Richard Kellner <richard.kellner@gmail.com>
2017-03-22 22:49:32 +01:00
Richard van der Hoff
35e0cfb54d Merge pull request #2044 from matrix-org/rav/crypto_docs
fix up some key verif docstrings
2017-03-22 17:12:55 +00:00
Richard van der Hoff
7b67848042 Merge pull request #2042 from matrix-org/rav/fix_key_caching
Fix caching of remote servers' signature keys
2017-03-22 17:11:55 +00:00
Richard van der Hoff
95f21c7a66 Fix caching of remote servers' signature keys
The `@cached` decorator on `KeyStore._get_server_verify_key` was missing
its `num_args` parameter, which meant that it was returning the wrong key for
any server which had more than one recorded key.

By way of a fix, change the default for `num_args` to be *all* arguments. To
implement that, factor out a common base class for `CacheDescriptor` and `CacheListDescriptor`.
2017-03-22 15:11:30 +00:00
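Sketched illustration of the bug (assuming the old default keyed on the first argument only): the cache ignored key_id, so whichever key was cached first was returned for every key_id of that server.

    from synapse.util.caches.descriptors import cached

    # before: @cached()            -- cache key was (server_name,) only
    # after:  @cached(num_args=2)  -- cache key is (server_name, key_id)
    @cached(num_args=2)
    def _get_server_verify_key(self, server_name, key_id):
        ...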
Matthew Hodgson
d101488c5f Merge branch 'master' into develop 2017-03-21 22:48:21 +01:00
Richard van der Hoff
64778693be fix up some key verif docstrings 2017-03-21 13:27:50 +00:00
Erik Johnston
37a187bfab Merge pull request #2033 from matrix-org/erikj/repl_speed
Don't send the full event json over replication
2017-03-21 13:11:15 +00:00
Richard van der Hoff
733896e046 Merge pull request #2035 from matrix-org/rav/debug_federation
Add some debug to help diagnose weird federation issue
2017-03-21 09:49:41 +00:00
Richard Kellner
a188056364 Updated user creation section
The register_new_matrix_user command asks one more question; I have
updated the documentation to match reality.
2017-03-20 19:58:24 +01:00
Richard Kellner
961e242aaf Added missing system requirement and pip upgrade before install
When installing on CentOS 7 I wasn't able to follow the README
instructions due to errors: I was missing libsodium, which is needed to
compile the Python dependencies.

The default version of pip is really old, so the setuptools upgrade
ended with an error as well. In order to continue I needed to upgrade
pip first.
2017-03-20 19:57:28 +01:00
Erik Johnston
e0e214556a Merge branch 'release-v0.19.3' of github.com:matrix-org/synapse 2017-03-20 17:25:41 +00:00
Erik Johnston
633dcc316c Bump changelog and version 2017-03-20 16:37:14 +00:00
Richard van der Hoff
c36d15d2de Add some debug to help diagnose weird federation issue 2017-03-20 15:36:14 +00:00
Erik Johnston
737f283a07 Fix unit test 2017-03-20 14:03:58 +00:00
Erik Johnston
aac6d1fc9b PEP8 2017-03-20 13:47:56 +00:00
Richard van der Hoff
bd08ee7a46 Merge pull request #2026 from matrix-org/rav/logcontext_docs
Logcontext docs
2017-03-20 12:05:21 +00:00
Richard van der Hoff
a4cb21659b log_contexts.rst: fix formatting of Note block
Apparently the github RST renderer doesn't like Note blocks.
2017-03-20 12:01:26 +00:00
Richard van der Hoff
eddce9d74a Merge pull request #2027 from matrix-org/rav/logcontext_leaks
A few fixes to logcontext things
2017-03-20 11:53:36 +00:00
Richard van der Hoff
2e05f5d7a4 Merge pull request #2025 from matrix-org/rav/no_reset_state_on_rejections
Avoid resetting state on rejected events
2017-03-20 11:21:01 +00:00
Richard van der Hoff
d78d08981a log_contexts.rst: fix typos 2017-03-18 22:47:37 +00:00
Matthew Hodgson
7b53e9ebfd Merge pull request #2028 from majewsky/readme-fix-1
README.md: fix link to client list on matrix.org/docs
2017-03-18 17:03:03 +00:00
Stefan Majewsky
be20243549 README.md: fix link to client list on matrix.org/docs 2017-03-18 17:16:12 +01:00
Richard van der Hoff
f40c2db05a Stop preserve_fn leaking context into the reactor
Fix a bug in ``logcontext.preserve_fn`` which made it leak context into the
reactor, and add a test for it.

Also, get rid of ``logcontext.reset_context_after_deferred``, which tried to do
the same thing but had its own, different, set of bugs.
2017-03-18 00:07:43 +00:00
Richard van der Hoff
067b00d49d Run the reactor with the sentinel logcontext
This fixes a class of 'Unexpected logcontext' messages, which were happening
because the logcontext was somewhat arbitrarily swapping between the sentinel
and the `run` logcontext.
2017-03-18 00:07:43 +00:00
Richard van der Hoff
994d7ae7c5 Remove broken use of clock.call_later
background_updates was using `call_later` in a way that leaked the logcontext
into the reactor.

We could have rewritten it to do it properly, but given that we weren't using
the fancier facilities provided by `call_later`, we might as well just use
`async.sleep`, which does the logcontext stuff properly.
2017-03-18 00:01:37 +00:00
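A sketch of the resulting loop shape (method names hypothetical; the module path is the 2017-era one): sleep inside an inlineCallbacks loop using synapse's logcontext-aware sleep, instead of rescheduling via clock.call_later.

    from twisted.internet import defer
    from synapse.util.async import sleep

    @defer.inlineCallbacks
    def _run_background_updates(self):
        while True:
            # sleep() does the logcontext dance correctly, unlike a bare
            # call_later callback, which runs without our logcontext
            yield sleep(self.BACKGROUND_UPDATE_INTERVAL_MS / 1000.0)
            yield self.do_next_background_update()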
Richard van der Hoff
d2d146a314 Logcontext docs 2017-03-17 23:59:28 +00:00
Erik Johnston
61f471f779 Don't send the full event json over replication 2017-03-17 15:50:01 +00:00
Richard van der Hoff
0c01f829ae Avoid resetting state on rejected events
When we get a rejected event, give it the same state_group as its prev_event,
rather than no state_group at all.

This should fix https://github.com/matrix-org/synapse/issues/1935.
2017-03-17 15:06:08 +00:00
Richard van der Hoff
5068fb16a5 Refactoring and cleanups
A few non-functional changes:

* A bunch of docstrings to document types
* Split `EventsStore._persist_events_txn` up a bit. Hopefully it's a bit more
  readable.
* Rephrase `EventFederationStore._update_min_depth_for_room_txn` to avoid
  mind-bending conditional.
* Rephrase rejected/outlier conditional in `_update_outliers_txn` to avoid
  mind-bending conditional.
2017-03-17 15:06:07 +00:00
Richard van der Hoff
2abe85d50e Merge pull request #2016 from matrix-org/rav/queue_pdus_during_join
Queue up federation PDUs while a room join is in progress
2017-03-17 11:32:44 +00:00
John Kristensen
be44558886 Don't assume postgres tables are in the public schema during db port
When fetching the list of tables from the postgres database during the
db port, it is assumed that the tables are in the public schema. This is
not always the case, so lets just rely on postgres to determine the
default schema to use.
2017-03-17 10:53:32 +11:00
Keyvan Fatehi
9adf1991ca Update README: specify python2.7 in virtualenv
Signed-off-by: Keyvan Fatehi <keyvanfatehi@gmail.com>
2017-03-16 16:37:14 -07:00
Erik Johnston
248eb4638d Merge pull request #2022 from matrix-org/erikj/no_op_sync
Implement no op for room stream in sync
2017-03-16 13:05:16 +00:00
Erik Johnston
da146657c9 Comments 2017-03-16 13:04:07 +00:00
Erik Johnston
a158c36a8a Comment 2017-03-16 11:57:45 +00:00
Erik Johnston
6957bfdca6 Don't recreate so many sets 2017-03-16 11:54:26 +00:00
Erik Johnston
2ccf3b241c Implement no op for room stream in sync 2017-03-16 11:06:41 +00:00
Richard van der Hoff
9ce53a3861 Queue up federation PDUs while a room join is in progress
This just takes the existing `room_queues` logic and moves it out to
`on_receive_pdu` instead of `_process_received_pdu`, which ensures that we
don't start trying to fetch prev_events and whathaveyou until the join has
completed.
2017-03-15 18:01:11 +00:00
Erik Johnston
54d2b7e596 Merge pull request #2014 from Half-Shot/hs/fix-appservice-presence
Add fallback to last_active_ts if it beats the last sync time on a presence timeout.
2017-03-15 17:37:15 +00:00
Erik Johnston
9d527191bc Merge pull request #2013 from matrix-org/erikj/presence_FASTER
Format presence events on the edges instead of reformatting them multiple times
2017-03-15 16:01:29 +00:00
Erik Johnston
a8f96c63aa Comment 2017-03-15 16:01:01 +00:00
Will Hunt
c144292373 Modify test_user_sync so it doesn't look at last_active_ts over last_user_sync_ts 2017-03-15 15:38:57 +00:00
Erik Johnston
f83ac78201 Cache set of users whose presence the other user should see 2017-03-15 15:29:19 +00:00
Will Hunt
e6032054bf Add a great comment to handle_timeout for active vs sync times. 2017-03-15 15:24:48 +00:00
Will Hunt
ebf5a6b14c Add fallback to last_active_ts if it beats the last sync time. 2017-03-15 15:17:16 +00:00
Erik Johnston
e892457a03 Comment 2017-03-15 15:01:39 +00:00
Erik Johnston
a297155a97 Remove unused import 2017-03-15 14:49:25 +00:00
Erik Johnston
6c82de5100 Format presence events on the edges instead of reformatting them multiple times 2017-03-15 14:27:34 +00:00
David Baker
0ad44acb5a Merge pull request #1997 from matrix-org/dbkr/cas_partialdownload
Handle PartialDownloadError in CAS login
2017-03-15 13:52:34 +00:00
Richard van der Hoff
5f14e7e982 Merge pull request #2010 from matrix-org/rav/fix_txnq_wedge
Fix assertion to stop transaction queue getting wedged
2017-03-15 13:07:50 +00:00
Matthew Hodgson
0970e0307e typo 2017-03-15 12:40:42 +00:00
Matthew Hodgson
5aa42d4292 set default for turn_allow_guests correctly 2017-03-15 12:40:13 +00:00
Matthew Hodgson
e0ff66251f add setting (on by default) to support TURN for guests 2017-03-15 12:22:18 +00:00
Richard van der Hoff
29ed09e80a Fix assertion to stop transaction queue getting wedged
... and update some docstrings to correctly reflect the types being used.

get_new_device_msgs_for_remote can return a long under some circumstances,
which was being stored in last_device_list_stream_id_by_dest, and was then
upsetting things on the next loop.
2017-03-15 12:16:55 +00:00
Erik Johnston
3b2dd1b3c2 Merge pull request #2008 from matrix-org/erikj/notifier_stats
Add some metrics on notifier
2017-03-15 11:25:02 +00:00
Erik Johnston
872e75a3d5 Add some metrics on notifier 2017-03-15 10:56:51 +00:00
Erik Johnston
7827251daf Merge pull request #1994 from matrix-org/dbkr/msisdn_signin_2
Phone number registration / login support v2
2017-03-15 09:59:54 +00:00
Richard van der Hoff
b5d1c68beb Implement reset_context_after_deferred
to correctly reset the context when we fire off a deferred we aren't going to
wait for.
2017-03-15 02:21:07 +00:00
Richard van der Hoff
f2ed64eaaf Merge pull request #1992 from matrix-org/rav/fix_media_loop
Fix routing loop when fetching remote media
2017-03-14 23:40:35 +00:00
Richard van der Hoff
ef328b2fc1 kick jenkins 2017-03-14 20:44:50 +00:00
Erik Johnston
bad72b0b8e Merge pull request #2002 from matrix-org/erikj/dont_sync_by_default
Reduce number of spurious sync result generations.
2017-03-14 16:58:45 +00:00
Erik Johnston
1bf84c4b6b Merge pull request #1989 from matrix-org/erikj/public_list_speed
Speed up public room list
2017-03-14 16:58:26 +00:00
Erik Johnston
fd2eef49c8 Reduce spurious calls to generate sync 2017-03-14 16:33:45 +00:00
Richard van der Hoff
1d09586599 Address review comments
- don't blindly proxy all HTTPRequestExceptions
- log unexpected exceptions at error
- avoid `isinstance`
- improve docs on `from_http_response_exception`
2017-03-14 14:15:37 +00:00
Richard van der Hoff
7f237800e9 re-refactor exception hierarchy
Give CodeMessageException back its `msg` attribute, and use that to hold the
HTTP status message for HttpResponseException.
2017-03-14 14:15:37 +00:00
David Baker
1ece06273e Handle PartialDownloadError in CAS login 2017-03-14 13:37:36 +00:00
Erik Johnston
bb256ac96f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/public_list_speed 2017-03-14 11:35:05 +00:00
Erik Johnston
6a3c5d6891 Merge pull request #1996 from matrix-org/erikj/fix_current_state
Fix current_state_events table to not lie
2017-03-14 11:34:35 +00:00
Erik Johnston
cc7a294e2e Fix current_state_events table to not lie
If we try and persist two state events that have the same ancestor we
calculate the wrong current state when persisting those events.
2017-03-14 10:57:43 +00:00
David Baker
7b6ed9871e Use extend instead of += 2017-03-14 10:49:55 +00:00
David Baker
d79a687d85 Oops, remove print 2017-03-14 10:40:20 +00:00
Luke Barnard
f29d85d9e4 Merge pull request #1993 from matrix-org/luke/delete-devices
Implement delete_devices API
2017-03-14 10:02:56 +00:00
Ryan Breaker
a175963ba5 Add --upgrade pip
Needed before `pip install --upgrade setuptools` for CentOS 7 and also doesn't hurt for any other distro.
2017-03-13 14:05:31 -05:00
Luke Barnard
bbeeb97f75 Implement _simple_delete_many_txn, use it to delete devices
(But this doesn't implement the same for deleting access tokens or e2e keys.)

Also respond to code review.
2017-03-13 17:53:23 +00:00
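A minimal sketch of a batched-delete helper along these lines (simplified; placeholder style is SQLite's):

    def _simple_delete_many_txn(txn, table, column, values):
        # delete every row whose `column` is in `values`, in one statement
        if not values:
            return
        clause = "%s IN (%s)" % (column, ",".join("?" * len(values)))
        txn.execute("DELETE FROM %s WHERE %s" % (table, clause), list(values))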
David Baker
0a9945220e Fix registration for broken clients
Only offer msisdn flows if the x_show_msisdn option is given.
2017-03-13 17:29:38 +00:00
David Baker
73a5f06652 Support registration / login with phone number
Changes from https://github.com/matrix-org/synapse/pull/1971
2017-03-13 17:27:51 +00:00
Luke Barnard
c077c3277b Flake 2017-03-13 16:45:38 +00:00
Richard van der Hoff
31f3ca1b2b Merge pull request #1990 from matrix-org/rav/log_config_comments
Add helpful texts to logger config options
2017-03-13 16:42:12 +00:00
Luke Barnard
c81f33f73d Implement delete_devices API
This implements the proposal here https://docs.google.com/document/d/1C-25Gqz3TXy2jIAoeOKxpNtmme0jI4g3yFGqv5GlAAk for deleting multiple devices at once in a single request.
2017-03-13 16:33:51 +00:00
Richard van der Hoff
170ccc9de5 Fix routing loop when fetching remote media
When we proxy a media request to a remote server, add a query-param, which will
tell the remote server to 404 if it doesn't recognise the server_name.

This should fix a routing loop where the server keeps forwarding back to
itself.

Also improves the error handling on remote media fetches, so that we don't
always return a rather obscure 502.
2017-03-13 16:30:36 +00:00
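Roughly, the query-param approach looks like this (the URL shape and parameter name here are assumptions): the proxying server asks the origin directly and forbids it from proxying onwards, so a misrouted request 404s instead of looping.

    # ask the origin for the media, but tell it not to re-proxy the request
    path = "/_matrix/media/v1/download/%s/%s?allow_remote=false" % (
        server_name, media_id,
    )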
Erik Johnston
45c7f12d2a Add new storage function to slave store 2017-03-13 16:26:44 +00:00
Richard van der Hoff
2cad971ab4 Merge pull request #1978 from matrix-org/rav/refactor_received_pdu
Refactor FederationServer._handle_new_pdu
2017-03-13 12:45:46 +00:00
Richard van der Hoff
8d86d11fdf Bring example log config into line with default 2017-03-13 12:39:09 +00:00
Richard van der Hoff
6037a9804c Add helpful texts to logger config options 2017-03-13 12:33:35 +00:00
Richard van der Hoff
3c69f32402 Merge remote-tracking branch 'origin/develop' into rav/refactor_received_pdu 2017-03-13 12:20:47 +00:00
Richard van der Hoff
6bfe8e32b5 Merge pull request #1983 from matrix-org/rav/no_redirect_stdio
Add an option to disable stdio redirect
2017-03-13 12:20:07 +00:00
Richard van der Hoff
5fc9261929 Merge pull request #1982 from matrix-org/rav/sighup_for_logconfig
Reread log config on SIGHUP
2017-03-13 12:19:54 +00:00
Erik Johnston
0162994983 Comments 2017-03-13 11:53:26 +00:00
Erik Johnston
254b7c5b15 Bump changelog and versions 2017-03-13 10:02:58 +00:00
Erik Johnston
672dcf59d3 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.19.3 2017-03-13 09:59:54 +00:00
Erik Johnston
7eae6eaa2f Revert "Support registration & login with phone number" 2017-03-13 09:59:33 +00:00
Erik Johnston
8b0f2afbaf Merge tag 'v0.19.3-rc1' into release-v0.19.3
Changes in synapse v0.19.3-rc1 (2017-03-08)
===========================================

Features:

* Add some administration functionalities. Thanks to morteza-araby! (PR #1784)

Changes:

* Reduce database table sizes (PR #1873, #1916, #1923, #1963)
* Update contrib/ to not use syutil. Thanks to andrewshadura! (PR #1907)
* Don't fetch current state when sending an event in common case (PR #1955)

Bug fixes:

* Fix synapse_port_db failure. Thanks to Pneumaticat! (PR #1904)
* Fix caching to not cache error responses (PR #1913)
* Fix APIs to make kick & ban reasons work (PR #1917)
* Fix bugs in the /keys/changes api (PR #1921)
* Fix bug where users couldn't forget rooms they were banned from (PR #1922)
* Fix issue with long language values in pushers API (PR #1925)
* Fix a race in transaction queue (PR #1930)
* Fix dynamic thumbnailing to preserve aspect ratio. Thanks to jkolo! (PR
  #1945)
* Fix device list update to not constantly resync (PR #1964)
* Fix potential for huge memory usage when getting devices that have
  changed (PR #1969)
2017-03-13 09:53:15 +00:00
Erik Johnston
79926e016e Assume rooms likely haven't changed 2017-03-13 09:50:10 +00:00
Matthew Hodgson
a61dd408ed enable guest access for the 3pl/3pid APIs 2017-03-12 19:30:45 +00:00
Ryan Breaker
53254551f0 Add missing package to CentOS section
Also added Fedora 25 to header as the same packages work for it as well.
2017-03-10 22:09:22 -06:00
Erik Johnston
8ffbe43ba1 Get current state by using current_state_events table 2017-03-10 17:39:35 +00:00
Richard van der Hoff
bcfa5cd00c Add an option to disable stdio redirect
This makes it tractable to run synapse under pdb.
2017-03-10 15:38:29 +00:00
Richard van der Hoff
d84bd51e95 Refactor logger config for workers
- to make it easier to add more config options.
2017-03-10 15:34:01 +00:00
Richard van der Hoff
9072a8c627 Reread log config on SIGHUP
When we are using a log_config file, reread it on SIGHUP.
2017-03-10 15:29:55 +00:00
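The mechanics are roughly as follows (a self-contained sketch, not synapse's exact code):

    import logging.config
    import signal

    import yaml

    def install_sighup_handler(log_config_path):
        def sighup(signum, frame):
            # re-read the YAML log config and reapply it in place
            with open(log_config_path) as f:
                logging.config.dictConfig(yaml.safe_load(f))

        signal.signal(signal.SIGHUP, sighup)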
Erik Johnston
3872c7a107 Merge pull request #1976 from matrix-org/erikj/device_delete_sync
Noop repeated delete device inbox calls from /sync
2017-03-10 13:24:28 +00:00
Erik Johnston
8f267fa8a8 Fix it for the workers 2017-03-10 11:22:25 +00:00
Erik Johnston
64d62e41b8 Noop repeated delete device inbox calls from /sync 2017-03-10 10:36:43 +00:00
Erik Johnston
3545e17f43 Add setdefault key to ExpiringCache 2017-03-10 10:30:49 +00:00
Richard van der Hoff
29235901b8 Move FederationServer._handle_new_pdu to FederationHandler
Unfortunately this significantly increases the size of the already-rather-big
FederationHandler, but the code fits more naturally here, and it paves the way
for the tighter integration that I need between handling incoming PDUs and
doing the join dance.

Other than renaming the existing `FederationHandler.on_receive_pdu` to
`_process_received_pdu` to make way for it, this just consists of the move, and
replacing `self.handler` with `self` and `self` with `self.replication_layer`.
2017-03-09 16:20:13 +00:00
Richard van der Hoff
e8b1721290 Move sig check out of _handle_new_pdu
When we receive PDUs via `get_missing_events`, we have already checked their
sigs, so there is no need to do it again.
2017-03-09 15:50:44 +00:00
Richard van der Hoff
3406333a58 Factor _get_missing_events_for_pdu out of _handle_new_pdu
This should be functionally identical: it just seeks to improve readability by
reducing indentation.
2017-03-09 15:50:44 +00:00
Richard van der Hoff
45d173a59a Fix docstring 2017-03-09 15:50:29 +00:00
David Baker
663396e45d Merge pull request #1971 from matrix-org/dbkr/msisdn_signin
Support registration & login with phone number
2017-03-09 10:18:51 +00:00
David Baker
ece7e00048 Comment when our 3pids would be incomplete 2017-03-08 19:07:18 +00:00
David Baker
9d0d40fc15 Docs 2017-03-08 19:05:29 +00:00
David Baker
3edc57296d Incorrectly copied copyright
This file post-dates OM
2017-03-08 19:00:51 +00:00
David Baker
727124a762 Not any more, it doesn't 2017-03-08 19:00:23 +00:00
Richard van der Hoff
6ad71cc29d Remove spurious SQL logging (#1972)
looks like the upsert function was accidentally sending sql logging to the
general logger. We already log the sql in `txn.execute`.
2017-03-08 18:00:44 +00:00
David Baker
d4d3629aaf Better error message 2017-03-08 17:01:26 +00:00
Erik Johnston
3170c56e07 Bump changelog and version 2017-03-08 13:00:47 +00:00
Erik Johnston
c1f18892bb Bump changelog and version 2017-03-08 12:58:56 +00:00
David Baker
1c99934b28 pep8 2017-03-08 11:58:20 +00:00
David Baker
a9e2b9ec16 Add msisdn util file 2017-03-08 11:53:36 +00:00
David Baker
85bb322333 Pull out datastore in initialiser 2017-03-08 11:51:25 +00:00
David Baker
65d43f3ca5 Minor fixes from PR feedback 2017-03-08 11:48:43 +00:00
David Baker
0e0aee25c4 Fix log line 2017-03-08 11:46:22 +00:00
David Baker
82c5e7de25 Typos 2017-03-08 11:42:44 +00:00
David Baker
2e27339add Refactor out assert_params_in_request
and replace requestEmailToken where we meant requestMsisdnToken
2017-03-08 11:37:34 +00:00
David Baker
88df6c0c9a Factor out msisdn canonicalisation
Plus a couple of other minor fixes
2017-03-08 11:03:39 +00:00
David Baker
402a7bf63d Fix pep8 2017-03-08 09:33:40 +00:00
David Baker
00466e2feb Support new login format
https://docs.google.com/document/d/1-6ZSSW5YvCGhVFDyD2QExAUAdpCWjccvJT5xiyTTG2Y/edit#
2017-03-07 16:37:23 +00:00
Erik Johnston
c98d91fe94 Merge pull request #1969 from matrix-org/erikj/get_distinct_devices
Select distinct devices from DB
2017-03-06 11:52:46 +00:00
Erik Johnston
ac5491f563 Select distinct devices from DB
Otherwise we might pull out tonnes of duplicate user_ids and this can
make synapse sad.
2017-03-06 11:10:14 +00:00
David Baker
b0effa2160 Add msisdns as 3pids during registration
and support binding them with the bind_msisdn param
2017-03-03 18:34:39 +00:00
Erik Johnston
82f7f1543b Merge pull request #1964 from matrix-org/erikj/device_list_update_fix
Fix device list update to not constantly resync
2017-03-03 16:18:16 +00:00
Erik Johnston
96d79bb532 Merge pull request #1963 from matrix-org/erikj/delete_old_device_streams
Clobber old device list stream entries
2017-03-03 16:18:06 +00:00
Erik Johnston
f2581ee8b8 Don't keep around old stream IDs forever 2017-03-03 16:02:53 +00:00
Erik Johnston
9834367eea Spelling 2017-03-03 15:31:57 +00:00
Erik Johnston
da52d3af31 Fix up 2017-03-03 15:29:13 +00:00
David Baker
ad882cd54d Just return the deferred straight off
defer.returnValue doth not maketh a generator: it would need a
yield to be a generator, and this doesn't need a yield.
2017-03-01 18:08:51 +00:00
David Baker
3557cf34dc Merge remote-tracking branch 'origin/develop' into dbkr/msisdn_signin 2017-03-01 17:20:37 +00:00
Paul "LeoNerd" Evans
856a18f7a8 kick jenkins 2017-03-01 16:24:11 +00:00
Erik Johnston
d766343668 Add index to device_lists_stream 2017-03-01 15:56:30 +00:00
Paul Evans
0bf2c7f3bc Merge pull request #1960 from matrix-org/paul/sytest-integration
Prepare a script for Jenkins to run sytest via dendron+haproxy
2017-03-01 15:08:09 +00:00
Erik Johnston
36be39b8b3 Fix device list update to not constantly resync 2017-03-01 14:12:11 +00:00
Erik Johnston
3365117151 Clobber old device list stream entries 2017-03-01 10:21:30 +00:00
Benedict Lau
92312aa3e6 Clarify doc for SQLite to PostgreSQL port 2017-03-01 01:30:11 -05:00
Paul "LeoNerd" Evans
6b1ffa5f3d Also added a control script to run via the crazy dendron+haproxy hybrid we're temporarily using 2017-02-28 18:14:13 +00:00
Paul "LeoNerd" Evans
f4e7545d88 No longer need to request all the sub-components to be split when running sytest+dendron 2017-02-28 18:11:52 +00:00
Erik Johnston
e933a2712d Don't log unknown cache warnings in workers 2017-02-28 16:22:41 +00:00
Erik Johnston
d638a7484b Merge pull request #1959 from matrix-org/erikj/intern_once
Intern table column names once
2017-02-28 15:15:16 +00:00
Erik Johnston
7eff3afa05 Merge pull request #1957 from matrix-org/device_poke_index
Add stream_id index to device_lists_outbound_pokes
2017-02-28 14:48:32 +00:00
Erik Johnston
b84907bdbb Intern table column names once 2017-02-28 14:38:16 +00:00
Erik Johnston
e4919b9329 Add stream_id index to device_lists_outbound_pokes
As this is used for replication streaming
2017-02-28 11:19:06 +00:00
Erik Johnston
8a12b6f1eb Fix up txn name 2017-02-28 10:15:50 +00:00
Erik Johnston
848cf95ea0 Pop with default value to stop throwing 2017-02-28 10:02:54 +00:00
Matthew Hodgson
9037787f0b merge in right archlinux package, thanks to @saram-kon from https://github.com/matrix-org/synapse/pull/1956 2017-02-28 00:30:25 +00:00
Erik Johnston
eda96586ca Merge pull request #1955 from matrix-org/erikj/current_state_query_bypass
Don't fetch current state in common case
2017-02-27 19:15:59 +00:00
Erik Johnston
64a2cef9bb Pop rather than del from dict 2017-02-27 19:15:36 +00:00
Erik Johnston
a41dce8f8a Remove needless check 2017-02-27 18:54:43 +00:00
Erik Johnston
c0d6045776 It should be all 2017-02-27 18:45:24 +00:00
Erik Johnston
49f4bc4709 Don't fetch current state in common case
Currently we fetch the list of current state events whenever we send
something in a room. This is overkill for the common case of persisting
a simple chain of non-state events, so let's handle that case specially.
2017-02-27 18:33:41 +00:00
Erik Johnston
8eec652de5 Merge pull request #1904 from Pneumaticat/patch-1
Fix synapse_port_db failure (fixes #1902)
2017-02-27 18:11:13 +00:00
Erik Johnston
fc5d876dba Merge pull request #1954 from matrix-org/erikj/cache_device2
Cache get_user_devices_from_cache
2017-02-27 16:36:40 +00:00
Erik Johnston
f58dbb02a6 Cache get_user_devices_from_cache 2017-02-27 16:22:12 +00:00
Matthew Hodgson
ca7ea2a4b5 Merge pull request #1951 from enckse/develop
the AUR package is no longer there; a community package in Arch does exist
2017-02-27 15:11:00 +00:00
Sean Enck
c80439a320 the AUR package is no longer there; a community package in Arch does exist 2017-02-27 10:06:15 -05:00
Erik Johnston
acf6d4d2e3 Merge pull request #1945 from jkolo/fix_dynamic_thumbnails_aspect
Fix #1677 (dynamic thumbnails aspect)
2017-02-27 09:51:52 +00:00
Jurek
aea5461488 Fix dynamic thumbnails aspect 2017-02-24 22:43:27 +01:00
Erik Johnston
bf92b7201f Merge pull request #1939 from matrix-org/erikj/strip_sql_newlines
Strip newlines from SQL queries
2017-02-23 11:30:33 +00:00
Erik Johnston
1a4f8022e6 Strip newlines from SQL queries 2017-02-23 11:15:31 +00:00
Erik Johnston
b2d20e94fa Remove lock from rotate notifs 2017-02-22 14:24:02 +00:00
Erik Johnston
7455ba436a Ensure we pass positive ints to delay function 2017-02-22 12:08:14 +00:00
Erik Johnston
b7442c3e2b Store looping call 2017-02-21 13:59:25 +00:00
Erik Johnston
a3708a1885 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-02-21 13:46:27 +00:00
Erik Johnston
3346a21324 Merge branch 'release-v0.19.2' of github.com:matrix-org/synapse 2017-02-21 13:45:02 +00:00
Erik Johnston
30ecfef5a3 Bump version and changelog 2017-02-21 13:43:36 +00:00
Richard van der Hoff
c927d6de9b Merge pull request #1930 from matrix-org/rav/fix_txnq_race
Fix a race in transaction queue
2017-02-21 08:46:38 +00:00
Richard van der Hoff
0c4cf9372b Fix a race in transaction queue
It was theoretically possible for a PDU to get queued and not sent for ages. On
closer inspection I think there were bigger problems elsewhere, but we might as
well fix this since it's easy.
2017-02-20 16:46:25 +00:00
Erik Johnston
6226a27bf8 Remove unused param 2017-02-20 16:01:54 +00:00
Erik Johnston
efff39c030 Fix /context/ visibility rules 2017-02-20 16:01:49 +00:00
Erik Johnston
b5c268738b Merge pull request #1929 from matrix-org/erikj/context_fix
Fix /context/ visibility rules
2017-02-20 16:19:19 +01:00
Erik Johnston
17673404fb Remove unused param 2017-02-20 15:02:01 +00:00
Erik Johnston
7f026792e1 Fix /context/ visibility rules 2017-02-20 14:54:50 +00:00
Richard van der Hoff
11940d462a Merge remote-tracking branch 'origin/master' into develop 2017-02-20 09:14:43 +00:00
Richard van der Hoff
6184f6fcbc Update metrics-howto.rst 2017-02-19 23:06:45 +00:00
Richard van der Hoff
e556aefe0a Update metrics-howto.rst 2017-02-19 23:06:08 +00:00
Richard van der Hoff
7efb38d1dd Update metrics-howto.rst 2017-02-19 22:55:48 +00:00
Christian W. Zuckschwerdt
20746d8150 bring nuke-room script to current schema
Signed-off-by: Christian W. Zuckschwerdt <christian@zuckschwerdt.org>
2017-02-19 05:27:45 +01:00
Erik Johnston
699be7d1be Fix up notif rotation 2017-02-18 14:42:39 +00:00
Richard van der Hoff
2fa14fd48a Merge pull request #1926 from matrix-org/rav/example_log_config
Add an example log_config file
2017-02-17 14:41:48 +00:00
Richard van der Hoff
66eb0bd548 Update example_log_config.yaml
add trailing NL
2017-02-17 12:55:36 +00:00
Richard van der Hoff
5aae844e60 Add an example log_config file 2017-02-17 12:48:53 +00:00
David Baker
ec8d7603e6 Merge pull request #1925 from matrix-org/dbkr/pushers_lang_lengthen
Make the pushers lang field column longer
2017-02-17 11:29:06 +00:00
David Baker
8c87bb550e Merge pull request #1922 from matrix-org/dbkr/allow_forget_for_ban
Allow forgetting rooms you're banned from
2017-02-17 10:52:30 +00:00
David Baker
4aa29508af Use TEXT rather than VARCHAR
While we're changing anyway
2017-02-17 10:51:49 +00:00
David Baker
b4017539d4 Make the pushers lang field column longer
To accommodate things like zh-Hans-CN

Fixes https://github.com/vector-im/riot-ios/issues/1031
2017-02-17 10:42:57 +00:00
Erik Johnston
b6557f2cfe Merge pull request #1923 from matrix-org/erikj/push_action_compress
Store the default push actions in a more efficient manner
2017-02-16 16:07:50 +01:00
Erik Johnston
138e030cfe Comment 2017-02-16 15:03:36 +00:00
Erik Johnston
502ae6c663 Comment 2017-02-16 14:47:11 +00:00
Erik Johnston
e6acf0c399 Store the default push actions in a more efficient manner 2017-02-16 14:40:24 +00:00
Erik Johnston
04eca2589d Merge pull request #1916 from matrix-org/erikj/push_actions_delete
Aggregate event push actions
2017-02-16 15:28:58 +01:00
David Baker
474c9aadbe Allow forgetting rooms you're banned from 2017-02-15 19:32:20 +00:00
Richard van der Hoff
7dcbcca68c Merge pull request #1921 from matrix-org/rav/fix_key_changes
Fix bugs in the /keys/changes api
2017-02-15 11:25:16 +00:00
David Baker
fa467e62a9 Merge pull request #1917 from matrix-org/dbkr/make_ban_reasons_work
Make kick & ban reasons work
2017-02-14 16:10:06 +00:00
David Baker
355d62c499 Make kick & ban reasons work
We somehow specced APIs with reason strings, preserve the content
in the events, and even have the clients display them, but failed
to actually pass the parameter through to the event content.
2017-02-14 15:10:55 +00:00
David Baker
ce3e583d94 WIP support for msisdn 3pid proxy methods 2017-02-14 15:05:55 +00:00
Richard van der Hoff
fc2f29c1d0 Fix bugs in the /keys/changes api
* `get_forward_extremeties_for_room` takes a numeric `stream_ordering`. We were
  passing a `RoomStreamToken`, which meant that it returned the *current*
  extremities, rather than those corresponding to the `from_token`. However:
* `get_state_ids_for_events` required a second ('types') parameter; this meant
  that a `TypeError` was thrown and we ended up acting as though there was *no*
  prev state.
* `get_state_ids_for_events` actually returns a map from event_id to state
  dictionary - just looking up the state keys in it again meant that we acted
  as though there was no prev state. We now check if each member's state has
  changed since *any* of the extremities.

Also add/fix some comments.
2017-02-14 13:59:50 +00:00
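A sketch of the corrected call for the first bug (the surrounding token handling is illustrative): pass the numeric stream ordering, not the token object. Note that the store function's misspelled name is the real identifier, so it stays as-is.

    from synapse.types import RoomStreamToken

    # (fragment of an inlineCallbacks method) the room part of the sync
    # token parses to e.g. "s12345"; the store wants the integer ordering
    room_token = RoomStreamToken.parse(from_token.room_key)
    extremities = yield self.store.get_forward_extremeties_for_room(
        room_id, room_token.stream,
    )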
Erik Johnston
ce3c8df6df Less aggressive timers 2017-02-14 13:41:24 +00:00
Erik Johnston
095b45c165 Aggregate event push actions 2017-02-14 13:39:41 +00:00
Erik Johnston
795f8e3fe7 Merge pull request #1873 from matrix-org/erikj/delete_push_actions
Be more aggressive about purging old room event_push_actions
2017-02-14 14:29:04 +01:00
Erik Johnston
d7457c7661 Merge pull request #1914 from matrix-org/erikj/cache_presence
Cache get_presence storage
2017-02-13 16:59:19 +01:00
Erik Johnston
359c97f506 Merge pull request #1913 from matrix-org/kegan/dont-cache-errors
http txns: Do not cache error responses
2017-02-13 16:29:19 +01:00
Erik Johnston
9e617cd4c2 Cache get_presence storage 2017-02-13 13:50:03 +00:00
Kegan Dougal
d0497425f8 Ordering is important on errbacks so add the cleanup func before creating an ObservableDeferred 2017-02-13 13:49:44 +00:00
Kegan Dougal
808ddf0ae7 Pop the txn from the map in case it has already been deleted somehow 2017-02-13 13:36:15 +00:00
Kegan Dougal
feb15dc99f Don't cache errors at all 2017-02-13 13:33:12 +00:00
Kegan Dougal
ecd7e36047 http txns: Do not cache error responses
Previously we did. This meant that, amongst other errors, rate-limiting errors
would be cached and prevent messages with that txn ID from being sent.
2017-02-13 13:16:48 +00:00
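A sketch of the resulting behaviour (simplified; method and attribute names are assumptions): a failure evicts the cache entry, so a retry with the same txn ID re-executes the request rather than replaying the cached error.

    from synapse.util.async import ObservableDeferred  # 2017-era module path

    def fetch_or_execute(self, txn_key, fn, *args):
        if txn_key in self.transactions:
            return self.transactions[txn_key].observe()
        deferred = fn(*args)

        def remove(failure):
            # don't cache errors: evict so a retry with this txn ID re-runs
            self.transactions.pop(txn_key, None)
            return failure

        # errback attached before wrapping in ObservableDeferred, so the
        # eviction runs first (see the errback-ordering commit above)
        deferred.addErrback(remove)
        self.transactions[txn_key] = ObservableDeferred(deferred)
        return self.transactions[txn_key].observe()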
Erik Johnston
6bba80241c Merge pull request #1912 from matrix-org/markjh/roominitialsync
Add db functions needed for room initial sync to slave
2017-02-13 12:20:21 +01:00
Mark Haines
3a46280ca3 Add db functions needed for room initial sync to slave 2017-02-13 11:16:53 +00:00
Erik Johnston
e1a12e24d2 Merge pull request #1907 from andrewshadura/develop
Use signedjson.sign instead of syutil.crypto.jsonsign
2017-02-13 11:54:24 +01:00
Andrew Shadura
6a3743b0d4 Use signedjson.sign instead of syutil.crypto.jsonsign
Functions from syutil.crypto.jsonsign are now available in
signedjson, so use that instead of depending on syutil.

Signed-off-by: Andrew Shadura <andrew@shadura.me>
2017-02-13 11:31:22 +01:00
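For reference, a minimal example of the signedjson helpers involved (API as I recall it from the 2017-era library):

    from signedjson.key import generate_signing_key, get_verify_key
    from signedjson.sign import sign_json, verify_signed_json

    signing_key = generate_signing_key("a1")  # ed25519 key, version "a1"
    signed = sign_json({"foo": "bar"}, "example.org", signing_key)
    verify_signed_json(signed, "example.org", get_verify_key(signing_key))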
Erik Johnston
481f6c87e7 Merge pull request #1906 from tyler-smith/TS_fix_config_documentation
Fix typo in config comments.
2017-02-13 11:01:03 +01:00
Tyler Smith
df4407d665 Fix typo in config comments.
Signed-off-by: Tyler Smith <tylersmith.me@gmail.com>
2017-02-11 23:02:57 -08:00
Kevin Liu
70a00eacf9 Fix typo
This is what I get for not proofreading
2017-02-11 20:49:31 -05:00
Kevin Liu
a02d609b1f Fix synapse_port_db failure (fixes #1902)
See https://matrix.to/#/!cURbafjkfsMDVwdRDQ:matrix.org/$148686272020hCgRD:potatofrom.space

Signed-off-by: Kevin Liu <kevin@potatofrom.space>
2017-02-11 20:44:16 -05:00
Erik Johnston
5c3cb8778a Merge pull request #1896 from DanielDent/patch-1
Update CAPTCHA_SETUP.rst X-Forwarded-For docs
2017-02-09 14:20:15 +01:00
Erik Johnston
1beda9c8a7 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-02-09 10:46:58 +00:00
Erik Johnston
27c005ae2c Merge branch 'release-v0.19.1' of github.com:matrix-org/synapse 2017-02-09 10:46:45 +00:00
Erik Johnston
505bfd82bb Update version and changelog 2017-02-09 10:41:02 +00:00
Daniel Dent
fdbd90e25d Update CAPTCHA_SETUP.rst X-Forwarded-For docs
It looks like CAPTCHA_SETUP.rst contains information relevant to an old version of Synapse, but Synapse now has a different approach to configuring use of the X-Forwarded-For header.
2017-02-08 21:21:02 -08:00
Erik Johnston
52cd019a54 Make None check explicit 2017-02-08 16:04:29 +00:00
Erik Johnston
f20cd34858 Merge pull request #1892 from matrix-org/erikj/rejection_fwd_extrem
Ignore new rejected events when working out forward extremities.
2017-02-08 16:59:06 +01:00
Erik Johnston
7723b4caa4 Ignore new rejected events when working out forward extremities. 2017-02-08 14:48:06 +00:00
David Baker
9adcd3a514 Merge pull request #1891 from matrix-org/dbkr/remove_unused_constants
Remove a few aspirational but unused constants
2017-02-08 13:30:40 +00:00
David Baker
063a1251a9 Remove a few aspirational but unused constants
from the Kegan era
2017-02-08 11:36:08 +00:00
Erik Johnston
af6da6db2d Merge pull request #1784 from morteza-araby/user-admin
Administration functionalities
2017-02-06 16:21:10 +01:00
Erik Johnston
131c0134f5 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-02-04 15:16:05 +00:00
Erik Johnston
fad3a84335 Merge branch 'release-v0.19.0' of github.com:matrix-org/synapse 2017-02-04 08:28:56 +00:00
Erik Johnston
38434a7fbb Bump changelog and version 2017-02-04 08:27:51 +00:00
Erik Johnston
84f600b2ee Bump changelog and version 2017-02-02 18:58:33 +00:00
Erik Johnston
aec1708c53 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.19.0 2017-02-02 18:57:05 +00:00
Erik Johnston
f3c8658217 Merge pull request #1879 from matrix-org/erikj/bump_cache_factors
Bump cache sizes for common membership queries
2017-02-02 18:56:04 +00:00
Erik Johnston
a5d9303283 Merge pull request #1878 from matrix-org/erikj/device_measure
Measure new device list stuff
2017-02-02 18:55:57 +00:00
Erik Johnston
38258a0976 Bump cache sizes for common membership queries 2017-02-02 18:45:55 +00:00
Erik Johnston
a597994fb6 Measure new device list stuff 2017-02-02 18:36:17 +00:00
Erik Johnston
82b3e0851c Bump version and changelog 2017-02-02 17:15:17 +00:00
Erik Johnston
f8c407a13b Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.19.0 2017-02-02 16:50:28 +00:00
Erik Johnston
8da976fe00 Merge pull request #1877 from matrix-org/erikj/device_list_fixes
Make /keys/changes a bit more performant
2017-02-02 16:30:41 +00:00
Erik Johnston
1232ae41cf Use new get_users_who_share_room_with_user 2017-02-02 15:25:00 +00:00
Erik Johnston
99fa03e8b5 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/device_list_fixes 2017-02-02 15:23:45 +00:00
Erik Johnston
a8331897aa Merge pull request #1876 from matrix-org/erikj/shared_member_store
Make presence.get_new_events a bit faster
2017-02-02 15:20:14 +00:00
Erik Johnston
0f3e296cb7 Fix replication 2017-02-02 15:02:03 +00:00
Erik Johnston
6826593b81 sets aren't JSON serializable 2017-02-02 14:55:54 +00:00
Erik Johnston
6b61060b51 Comment 2017-02-02 14:47:15 +00:00
Erik Johnston
46ecd9fd6d Use stream_ordering_to_exterm for /keys/changes 2017-02-02 14:27:19 +00:00
Erik Johnston
9efcc3f3be Comment 2017-02-02 13:50:22 +00:00
Erik Johnston
832e9c52ca Comment 2017-02-02 13:09:56 +00:00
Erik Johnston
54a79c1d37 Make presence.get_new_events a bit faster
We do this by caching the set of users a user shares rooms with.
2017-02-02 13:07:18 +00:00
Morteza Araby
2849d3f29d admin,storage: added more administrator functionalities
administrators can now:
 - Set displayname of users
 - Update user avatars
 - Search for users by user_id
 - Browse all users in a paginated API
 - Reset user passwords
 - Deactivate users

Helpers for doing paginated queries have also been added to storage

Signed-off-by: Morteza Araby <morteza.araby@ericsson.com>
2017-02-02 14:02:26 +01:00
Erik Johnston
5ae38b65c1 Merge pull request #1875 from matrix-org/erikj/fix_email_push
Fix email push in pusher worker
2017-02-02 11:24:11 +00:00
Erik Johnston
bfe3f5815f Update changelog 2017-02-02 11:10:41 +00:00
Erik Johnston
cc01eae332 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.19.0 2017-02-02 11:05:57 +00:00
Erik Johnston
85e98fd4e8 Update changelog 2017-02-02 11:02:32 +00:00
Erik Johnston
51adaac953 Fix email push in pusher worker
This was broken when device list updates were implemented, as Mailer
could no longer instantiate an AuthHandler due to a dependency on
federation sending.
2017-02-02 10:53:36 +00:00
Erik Johnston
10e0737569 Bump version and changelog 2017-02-02 10:02:28 +00:00
Erik Johnston
fac3c03087 Be more aggressive about purging old room event_push_actions 2017-02-01 18:27:24 +00:00
Erik Johnston
14d5e22700 Merge pull request #1872 from matrix-org/erikj/key_changes
Include newly joined users in /keys/changes API
2017-02-01 18:10:33 +00:00
Erik Johnston
fbfe44bb4d Doc args 2017-02-01 17:52:57 +00:00
Erik Johnston
d61a04583e Comment 2017-02-01 17:35:23 +00:00
Erik Johnston
7e919bdbd0 Include newly joined users in /keys/changes API 2017-02-01 17:33:16 +00:00
Erik Johnston
96355d2f2f Merge pull request #1871 from matrix-org/erikj/ratelimit_401
Correctly raise exceptions for ratelimiting. Ratelimit on 401
2017-02-01 15:56:16 +00:00
Erik Johnston
df4ecff5a9 Correctly raise exceptions for ratelimiting. Ratelimit on 401 2017-02-01 15:42:19 +00:00
Erik Johnston
6d6591880e Wake sync up for device changes 2017-02-01 15:15:16 +00:00
Erik Johnston
bd84387ac6 Merge pull request #1869 from matrix-org/erikj/device_list_stream
Implement /keys/changes
2017-02-01 13:25:26 +00:00
Erik Johnston
ebfaff84c9 Merge pull request #1870 from matrix-org/erikj/cache_get_all_new_events
Add a small cache get_all_new_events
2017-02-01 13:22:02 +00:00
Erik Johnston
73d676dc8b Comment 2017-02-01 13:17:17 +00:00
Erik Johnston
62f6b86ba7 Merge pull request #1868 from matrix-org/erikj/replication_cache
Only invalidate membership caches based on the cache stream
2017-02-01 13:12:30 +00:00
Erik Johnston
f6124311fd Add m.room.member type to query 2017-02-01 11:59:17 +00:00
Erik Johnston
88a4d54883 Merge pull request #1867 from matrix-org/erikj/member_index
Add an index to make membership queries faster
2017-02-01 11:44:27 +00:00
Erik Johnston
368c88c487 Add a small cache get_all_new_events 2017-02-01 10:50:44 +00:00
Erik Johnston
5deaf9e30b Up get_latest_event_ids_in_room cache 2017-02-01 10:39:41 +00:00
Erik Johnston
acb501c46d Comment 2017-02-01 10:32:49 +00:00
Erik Johnston
97479d0c54 Implement /keys/changes 2017-02-01 10:30:03 +00:00
Erik Johnston
06567ec513 Merge pull request #1866 from matrix-org/erikj/device_list_fixes
Better handle 404 response for federation /send/
2017-02-01 09:44:14 +00:00
Erik Johnston
692daf6f54 Remove membership tests for replication
This is because it now relies on the caches stream, which only works on
postgres. We are trying to test with sqlite.
2017-01-31 16:10:16 +00:00
Erik Johnston
458b6f4733 Only invalidate membership caches based on the cache stream
Before, we completely invalidated get_users_in_room whenever we updated
the current_state_events table. This was way too aggressive.
2017-01-31 16:09:03 +00:00
Erik Johnston
fe08db2713 Remove explicit < 400 check as apparently this is confusing 2017-01-31 15:21:32 +00:00
Erik Johnston
21b7375778 Add an index to make membership queries faster 2017-01-31 15:15:57 +00:00
Erik Johnston
4c0ec15bdc Comment 2017-01-31 13:53:46 +00:00
Erik Johnston
85c590105f Comment 2017-01-31 13:46:38 +00:00
Erik Johnston
ae7a132f38 Better handle 404 response for federation /send/ 2017-01-31 13:40:09 +00:00
Erik Johnston
ac001dabdc Merge pull request #1864 from matrix-org/erikj/device_list_fixes
Fix clearing out old device list outbound pokes
2017-01-31 13:35:35 +00:00
Erik Johnston
bfb3d255b1 Merge pull request #1862 from matrix-org/erikj/presence_update
Use DB cache of joined users for presence
2017-01-31 13:23:24 +00:00
Erik Johnston
ab55794b6f Fix deletion of old sent devices correctly 2017-01-31 13:22:41 +00:00
Erik Johnston
d3169e8d28 Only fetch with row ts and count > 1 2017-01-31 11:20:03 +00:00
Erik Johnston
05b9f48ee5 Fix clearing out old device list outbound pokes 2017-01-31 10:08:55 +00:00
Erik Johnston
4c9812f5da Merge pull request #1861 from matrix-org/erikj/device_list_fixes
Device List fixes
2017-01-30 17:56:19 +00:00
Erik Johnston
4b3403ca9b Stream cache invalidations for room membership storage functions 2017-01-30 17:28:22 +00:00
Erik Johnston
1c13c9f6b6 Don't have such a large cache 2017-01-30 17:12:14 +00:00
Erik Johnston
c7a26b7c32 Fix unit tests 2017-01-30 17:11:24 +00:00
Erik Johnston
fd1c18c088 Use DB cache of joined users for presence 2017-01-30 17:00:24 +00:00
Erik Johnston
c2c9a78db9 Noop device key changes if they're the same 2017-01-30 16:55:04 +00:00
Erik Johnston
e75a779d9e Fix query 2017-01-30 16:38:20 +00:00
Erik Johnston
828db669ec Use get_users_in_room and declare it iterable 2017-01-30 16:37:22 +00:00
Erik Johnston
9636b2407d Merge pull request #1857 from matrix-org/erikj/device_list_stream
Implement device lists updates over federation
2017-01-30 14:35:21 +00:00
Erik Johnston
3670025e64 Rename func 2017-01-30 14:11:31 +00:00
Erik Johnston
4ac363a168 Remove debug logging 2017-01-30 14:10:12 +00:00
Erik Johnston
d360c97ae1 Clear out old destination pokes. 2017-01-30 10:14:37 +00:00
Erik Johnston
76100203ab Always use the latest stream_id, sent or unsent 2017-01-30 10:14:25 +00:00
Erik Johnston
d1e1fd6210 Add ts column to device_lists_outbound_pokes 2017-01-27 15:23:48 +00:00
Erik Johnston
252b503fc8 Hook device list updates to replication 2017-01-27 14:31:35 +00:00
Erik Johnston
84a35f32c7 Comment 2017-01-27 10:35:12 +00:00
Erik Johnston
c517a19c2d Comment 2017-01-27 10:33:26 +00:00
Erik Johnston
738a2867c8 SQL param ordering 2017-01-27 10:31:29 +00:00
Erik Johnston
755adff0e4 Use if rather than for 2017-01-27 10:31:06 +00:00
Erik Johnston
888c59c955 Better name 2017-01-27 10:29:47 +00:00
Erik Johnston
f25a4a4692 Remove unused param 2017-01-27 10:27:39 +00:00
Erik Johnston
b3e1f2aa7a Fix unit tests 2017-01-26 17:16:24 +00:00
Erik Johnston
31aca5589c Fix on sqlite: use left rather than outer join 2017-01-26 16:55:50 +00:00
Erik Johnston
76d40f4904 Handle users leaving rooms 2017-01-26 16:39:33 +00:00
Erik Johnston
fbfad76c03 Add comments 2017-01-26 16:33:21 +00:00
Erik Johnston
c974116f19 Implement device key caching over federation 2017-01-26 16:07:24 +00:00
Paul Evans
e978247fe5 Merge pull request #1852 from matrix-org/paul/issue-1382
Don't clobber a displayname or avatar_url if provided by an m.room.member event
2017-01-25 18:15:19 +00:00
Erik Johnston
51e9fe36e4 Fix up sending of m.device_list_update edus 2017-01-25 16:55:21 +00:00
Erik Johnston
2367c5568c Add basic implementation of local device list changes 2017-01-25 14:27:27 +00:00
Paul "LeoNerd" Evans
10e48d8310 Don't clobber a displayname or avatar_url if provided by an m.room.member event 2017-01-24 18:06:07 +00:00
Erik Johnston
ba8e144554 Merge branch 'erikj/current_state_fix' into develop 2017-01-23 16:15:10 +00:00
Erik Johnston
f5b46482f4 Merge pull request #1840 from matrix-org/erikj/current_state_fix
Insert delta of current_state_events to be more efficient
2017-01-23 16:14:34 +00:00
Erik Johnston
fdf2a31a51 Typo 2017-01-23 16:14:14 +00:00
Erik Johnston
41dab8a222 Fix bug where current_state_events renamed to current_state_ids 2017-01-23 15:22:48 +00:00
Erik Johnston
c77b24c092 Refactor to calculate state delta outside transaction 2017-01-23 14:51:33 +00:00
Erik Johnston
5d2134d485 Comments 2017-01-20 17:13:24 +00:00
Erik Johnston
a55fa2047f Insert delta of current_state_events to be more efficient 2017-01-20 17:10:18 +00:00
Erik Johnston
3d9d48fffb Merge pull request #1836 from matrix-org/erikj/current_state_fix
Derive current_state_events from state groups
2017-01-20 15:14:05 +00:00
Richard van der Hoff
a0d03f2e15 Merge pull request #1837 from matrix-org/rav/fix_purge_media_doc
fix doc for purge_media_cache
2017-01-20 15:12:34 +00:00
Erik Johnston
d0897dead5 Spelling 2017-01-20 15:05:11 +00:00
Erik Johnston
567aa35b67 Update all call sites after rename 2017-01-20 14:40:31 +00:00
Erik Johnston
f2f40e64a9 Comments 2017-01-20 14:38:13 +00:00
Erik Johnston
4c6a31cd6e Calculate the forward extremities once 2017-01-20 14:28:53 +00:00
Richard van der Hoff
83333498a5 fix doc for purge_media_cache
purge_media_cache takes its arg from a query-param, not the POST body, for some
reason.
2017-01-20 12:15:50 +00:00
Erik Johnston
86063d4321 Merge pull request #1835 from matrix-org/erikj/fix_workers
Make worker listener config backwards compat
2017-01-20 11:55:56 +00:00
Erik Johnston
09eb08f910 Derive current_state_events from state groups 2017-01-20 11:52:51 +00:00
Erik Johnston
97efe99ae9 Make worker listener config backwards compat 2017-01-20 11:45:29 +00:00
David Baker
691c8198b7 Merge pull request #1832 from xsteadfastx/xsteadfastx/turn-username-password
Added username and password for turn server
2017-01-19 14:28:31 +00:00
Marvin Steadfast
86e6165687 Added default config for turn username and password 2017-01-19 14:35:55 +01:00
Marvin Steadfast
1e38be3a7a Added username and password for turn server
It makes it possible to use a turn server that needs a username and
password instead of a token.
2017-01-19 14:08:20 +01:00
Erik Johnston
841c228533 Merge pull request #1828 from matrix-org/erikj/iterable_cache_size
Update LruCache size estimate on clear
2017-01-18 14:57:54 +00:00
Erik Johnston
c430111d0e Update LruCache size estimate on clear 2017-01-18 14:55:23 +00:00
David Baker
97d3918377 Merge pull request #1811 from aperezdc/unhardcode-riot-urls
Allow configuring the Riot URL used in notification emails
2017-01-18 14:38:49 +00:00
David Baker
6f6bf2a1eb Merge pull request #1827 from matrix-org/dbkr/email_case_insensitive
Lowercase all email addresses before querying db
2017-01-18 13:39:47 +00:00
David Baker
8c5009b628 Lowercase all email addresses before querying db
Since we store all emails in the DB in lowercase
(https://github.com/matrix-org/synapse/pull/1170)
2017-01-18 13:25:56 +00:00
Erik Johnston
ae7b4da4cc Merge pull request #1823 from matrix-org/erikj/load_events_logs
Remove loading events logs
2017-01-18 11:07:58 +00:00
Erik Johnston
fc7cae8aa3 Merge pull request #1824 from matrix-org/erikj/retry_host_log
Lower the not retrying host log line to debug
2017-01-18 11:07:51 +00:00
Erik Johnston
f9058ca785 Merge pull request #1822 from matrix-org/erikj/statE_logging
Change resolve_state_groups call site logging to DEBUG
2017-01-18 11:02:03 +00:00
Erik Johnston
f648313f98 Merge pull request #1821 from matrix-org/erikj/cache_metrics_string_intern
Measure metrics of string_cache
2017-01-18 10:57:39 +00:00
Erik Johnston
15f012032c Merge pull request #1818 from matrix-org/erikj/state_auth_splitout_split
Optimise state resolution
2017-01-18 10:53:00 +00:00
Erik Johnston
4ec1cf49e2 Lower loading events log to DEBUG 2017-01-17 17:28:32 +00:00
Erik Johnston
f878f64f43 Lower the not retrying host log line to debug 2017-01-17 17:20:39 +00:00
Erik Johnston
5f027d1fc5 Change resolve_state_groups call site logging to DEBUG 2017-01-17 17:07:15 +00:00
Erik Johnston
380dba1020 Measure metrics of string_cache 2017-01-17 17:04:46 +00:00
Erik Johnston
ed4d176152 PEP8 2017-01-17 15:27:28 +00:00
Mark Haines
c6064a7ba6 Only construct sets when necessary 2017-01-17 15:23:07 +00:00
Erik Johnston
a8594fd19f Use better names 2017-01-17 14:59:03 +00:00
Erik Johnston
7fae460402 Merge pull request #1820 from matrix-org/erikj/push_tools
Get state at event rather than for room in push
2017-01-17 14:58:26 +00:00
Erik Johnston
37b4c7d8a9 Fix typo in return type 2017-01-17 14:43:32 +00:00
Erik Johnston
e5d2df9c34 Use better variable name 2017-01-17 14:32:53 +00:00
Erik Johnston
04006bb7f0 Get state at event rather than for room in push 2017-01-17 14:31:21 +00:00
Erik Johnston
ce59a2faad Correctly handle case of rejected events in state res 2017-01-17 14:18:53 +00:00
Erik Johnston
633f97151c Check event is in state_map 2017-01-17 13:33:54 +00:00
Erik Johnston
e6153e1bd1 Fix couple of federation state bugs 2017-01-17 13:22:34 +00:00
Erik Johnston
5d6bad1b3c Optimise state resolution 2017-01-17 13:22:19 +00:00
Erik Johnston
e8ecbb6f20 Merge pull request #1812 from matrix-org/erikj/state_auth_splitout_split
Split out static state methods from StateHandler
2017-01-17 11:55:18 +00:00
Erik Johnston
d11d7cdf87 Merge pull request #1815 from matrix-org/erikj/iter_cache_size
Optionally measure size of cache by sum of length of values
2017-01-17 11:51:09 +00:00
Erik Johnston
9e8e236d98 Tidy up test 2017-01-17 11:50:18 +00:00
Erik Johnston
d6c75cb7c2 Rename and comment tree_to_leaves_iterator 2017-01-17 11:47:03 +00:00
Erik Johnston
1ccd5676e3 Remove needless call to evict() 2017-01-17 11:42:26 +00:00
Erik Johnston
d906206049 Increase state_group_cache_size 2017-01-17 11:31:08 +00:00
Erik Johnston
f85b6ca494 Speed up cache size calculation
Instead of calculating the size of the cache repeatedly, which can take
a long time now that it can use a callback, cache the size and update it
on insertion and deletion.

This requires changing the cache descriptors to have two caches, one for
pending deferreds and the other for the actual values. There's no reason
to evict from the pending deferreds as they won't take up any more
memory.
2017-01-17 11:18:13 +00:00
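A toy sketch of the incremental bookkeeping described above (not Synapse's actual LruCache):

```python
class SizedCache(object):
    """Toy sketch: keep a running total of the cache's size, updated on
    every insert and delete, instead of re-walking the whole cache."""

    def __init__(self, size_of=len):
        self._data = {}
        self._size_of = size_of  # e.g. len() when values are iterable
        self._size = 0           # cached total, maintained incrementally

    def __setitem__(self, key, value):
        if key in self._data:
            self._size -= self._size_of(self._data[key])
        self._data[key] = value
        self._size += self._size_of(value)

    def pop(self, key):
        value = self._data.pop(key)
        self._size -= self._size_of(value)
        return value

    def size(self):
        return self._size  # O(1), however expensive size_of is
```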
Erik Johnston
f2f179dce2 Add ExpiringCache tests 2017-01-16 15:33:34 +00:00
Erik Johnston
6d00213e80 Use OrderedDict in ExpiringCache 2017-01-16 15:33:22 +00:00
Erik Johnston
897f8752da Up cache max entries for state 2017-01-16 15:08:17 +00:00
Erik Johnston
beda469bc6 Put staticmethods at module level 2017-01-16 15:05:24 +00:00
Erik Johnston
46aebbbcbf Add support for 'iterable' to ExpiringCache 2017-01-16 14:57:23 +00:00
Erik Johnston
01521299c7 Increase cache size limit 2017-01-16 11:56:51 +00:00
Erik Johnston
2fae34bd2c Optionally measure size of cache by sum of length of values 2017-01-13 17:46:17 +00:00
Erik Johnston
95a22ae194 Merge pull request #1810 from matrix-org/erikj/state_auth_splitout_split
Split out static auth methods from Auth object
2017-01-13 16:32:27 +00:00
Erik Johnston
ec0a523ac3 Split out static state methods from StateHandler 2017-01-13 15:25:06 +00:00
Erik Johnston
e178feca3f Remove unused function 2017-01-13 15:16:45 +00:00
Erik Johnston
f0325a9ccc Merge pull request #1793 from matrix-org/erikj/change_device_inbox_index
Change device_inbox stream index to include user
2017-01-13 15:14:51 +00:00
Erik Johnston
c050f493dd Add comment 2017-01-13 15:14:41 +00:00
Adrian Perez de Castro
a3e4a198e3 Allow configuring the Riot URL used in notification emails
The URLs used for notification emails were hardcoded to use either matrix.to
or vector.im; but for self-hosted setups where Riot is also self-hosted it
may be desirable to allow configuring an alternative Riot URL.

Fixes #1809.

Signed-off-by: Adrian Perez de Castro <aperez@igalia.com>
2017-01-13 17:12:04 +02:00
Erik Johnston
8b2fa38256 Split event auth code into seperate module 2017-01-13 15:07:32 +00:00
Erik Johnston
641ccdbb14 Merge pull request #1795 from matrix-org/erikj/port_defaults
Restore default bind address
2017-01-13 13:02:59 +00:00
Richard van der Hoff
6f5e41e420 README.rst: fix formatting
Fix formatting blooper introduced in https://github.com/matrix-org/synapse/pull/1672 :/
2017-01-13 12:52:11 +00:00
Erik Johnston
0d37a7bf83 Merge pull request #1803 from matrix-org/erikj/swallow_errors
Fix spurious Unhandled Error log lines
2017-01-13 10:52:41 +00:00
Erik Johnston
ebf94aff8d Fix spurious Unhandled Error log lines 2017-01-12 17:19:47 +00:00
Erik Johnston
7a13fe16f7 Merge pull request #1802 from matrix-org/erikj/remove_debug_deferreds
Remove full_twisted_stacktraces option
2017-01-12 14:25:51 +00:00
Erik Johnston
bf5c9706d9 Remove full_twisted_stacktraces option
The debug 'full_twisted_stacktraces' flag caused synapse to rewrite
twisted deferreds to always fire the callback on the next reactor tick.
This was to force the deferred to always store the stacktraces on
exceptions, and thus be more likely to have a full stacktrace when it
reaches the final error handlers and gets printed to the logs.

Dynamically rewriting things is generally bad, and in particular this
change violates assumptions of various bits of Twisted. This wouldn't
necessarily be so bad, but it turns out this option has been turned on
on some production servers.

Turning the option on can cause e.g. #1778.

For now, let's just entirely nuke this option.
2017-01-12 10:32:52 +00:00
Erik Johnston
7b62d0bc70 Add missing None check 2017-01-11 10:57:03 +00:00
Erik Johnston
7e6c2937c3 Split out static auth methods from Auth object 2017-01-10 18:16:54 +00:00
Erik Johnston
b1dfd20292 Pop bind_address 2017-01-10 17:23:18 +00:00
Erik Johnston
edd6cdfc9a Restore default bind address 2017-01-10 17:21:41 +00:00
Matthew Hodgson
3cb1799347 credit patrik properly 2017-01-10 16:50:35 +00:00
Erik Johnston
8a0fddfd73 Remove spurious for..else.. 2017-01-10 16:30:53 +00:00
Erik Johnston
d524bc9110 Merge pull request #1792 from matrix-org/erikj/limit_cache_prefill_device
Limit number of entries to prefill from cache
2017-01-10 15:42:00 +00:00
Erik Johnston
d2b00d0866 Merge pull request #1790 from matrix-org/erikj/linearizer
Add paranoia exception catch in Linearizer
2017-01-10 15:38:30 +00:00
Erik Johnston
ab655dca33 Explicitly close the cursor 2017-01-10 15:15:25 +00:00
Erik Johnston
5a32e9273e Don't disable autocommit 2017-01-10 15:11:27 +00:00
Erik Johnston
caddadfc5a Change device_inbox stream index to include user
This makes fetching the most recently changed users much trickier, and
brings it in line with e.g. presence_stream indices.
2017-01-10 15:04:57 +00:00
Erik Johnston
dd52d4de4c Limit number of entries to prefill from cache
Some tables, like device_inbox, take a long time to query at startup for
the stream change cache prefills. This is likely because they are slower
growing streams and so are more fragmented on disk. For now, let's pull
fewer entries out to make startup quicker.

In future, we should add a better index to make it even faster.
2017-01-10 14:34:50 +00:00
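A sketch of the kind of prefill query this implies, with table and column names taken from the surrounding commits; the LIMIT is the new part:

```python
def prefill_device_inbox_cache(txn, min_stream_id, limit=10000):
    # Pull only the most recently changed users rather than everything
    # since min_stream_id, trading cache coverage for a faster startup.
    txn.execute(
        "SELECT user_id, max(stream_id) FROM device_inbox"
        " WHERE stream_id > ? GROUP BY user_id"
        " ORDER BY max(stream_id) DESC LIMIT ?",
        (min_stream_id, limit),
    )
    return dict(txn.fetchall())
```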
Mark Haines
024eb98524 Merge pull request #1791 from matrix-org/markjh/file_logging
Log which files we saved attachments to in the media_repository
2017-01-10 14:27:55 +00:00
Mark Haines
32019c9897 Log which files we saved attachments to in the media_repository 2017-01-10 14:19:50 +00:00
Erik Johnston
657488113e Merge pull request #1789 from matrix-org/erikj/decouple_presence
Don't block messages sending on bumping presence
2017-01-10 14:06:05 +00:00
Erik Johnston
3b4de17d2b Comment 2017-01-10 14:05:53 +00:00
Erik Johnston
7d0981b312 Merge pull request #1787 from matrix-org/erikj/linearize_member
Linearize updates to membership via PUT /state/
2017-01-10 14:04:54 +00:00
Erik Johnston
07c3c08fad Merge pull request #1786 from matrix-org/erikj/linearizer_name
Name linearizers for better logs
2017-01-10 14:04:45 +00:00
Erik Johnston
f477370c0c Add paranoia exception catch in Linearizer 2017-01-10 14:04:13 +00:00
Erik Johnston
586f474a44 Don't block messages sending on bumping presence 2017-01-10 12:46:00 +00:00
Erik Johnston
6823fe5241 Linearize updates to membership via PUT /state/ 2017-01-09 18:25:13 +00:00
Erik Johnston
f7085ac84f Name linearizers for better logs 2017-01-09 17:17:10 +00:00
Erik Johnston
9898bbd9dc Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-01-09 14:51:17 +00:00
Erik Johnston
9a8ae6f1bf Bump version and changelog 2017-01-09 14:47:56 +00:00
Matthew Hodgson
2f4b2f4783 gah, fix mangled merge of 0.18.7 into develop 2017-01-07 04:00:42 +00:00
Matthew
6d363cea9d Merge branch 'release-v0.18.7' into develop 2017-01-07 03:46:16 +00:00
Matthew
f0e4bac64e bump changelog & version 2017-01-07 03:45:38 +00:00
Matthew
4304e7e593 do the discard check in the right place to avoid grabbing dependent events 2017-01-07 03:44:18 +00:00
Matthew Hodgson
6515b9c0d4 changelog 2017-01-07 02:52:37 +00:00
Matthew Hodgson
8c48971b51 Merge branch 'release-v0.18.7' into develop 2017-01-07 02:23:37 +00:00
Matthew
e10c527930 Discard PDUs from invalid origins due to #1753 in 0.18.[56] 2017-01-07 02:13:14 +00:00
Matthew Hodgson
2f5be2d8dc oops, this should have been rc1 2017-01-07 01:11:56 +00:00
Matthew Hodgson
4086026524 move logging to right place 2017-01-07 00:41:46 +00:00
Matthew Hodgson
9d914454c8 Merge branch 'release-v0.18.6' into develop 2017-01-07 00:40:30 +00:00
Matthew
19e2fb4386 bump version 2017-01-06 23:38:22 +00:00
Matthew
189fd15564 update changelog 2017-01-06 23:33:28 +00:00
Matthew
8404f132c3 Revert "fix typo breaking the fix to #1753"
This reverts commit b2850e62db.
2017-01-06 23:28:46 +00:00
Matthew
b2850e62db fix typo breaking the fix to #1753 2017-01-06 23:23:37 +00:00
Mark Haines
06c00bd19b Merge branch 'release-v0.18.6' into develop 2017-01-06 14:46:27 +00:00
Mark Haines
b42a972b71 Bump version and changelog 2017-01-06 14:44:28 +00:00
Mark Haines
2c8ac84a26 Merge pull request #1772 from matrix-org/markjh/fix_guest_access_check
handlers/room_member: fix guest access check when joining rooms
2017-01-06 14:41:52 +00:00
Patrik Oldsberg
1ef6084b75 handlers/room_member: fix guest access check when joining rooms
Signed-off-by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
2017-01-06 14:36:56 +00:00
Matthew Hodgson
bd85434cb3 Merge branch 'release-v0.18.6' into develop 2017-01-05 13:58:19 +00:00
Mark Haines
c18f7fc410 Fix flake8 and update changelog 2017-01-05 13:50:22 +00:00
Matthew Hodgson
dafd50d178 Merge pull request #1767 from matrix-org/matthew/resolve_state_group_logging
log call paths for resolve_state_group
2017-01-05 13:47:42 +00:00
Matthew Hodgson
883ff92a7f Fix case 2017-01-05 13:45:02 +00:00
Matthew Hodgson
d79d165761 add logging for all the places we call resolve_state_groups. my kingdom for a backtrace that actually works. 2017-01-05 13:40:39 +00:00
Matthew Hodgson
8cfc0165e9 fix annoying typos 2017-01-05 13:39:43 +00:00
Mark Haines
62451800e7 Bump version and changelog to v0.18.6-rc3 2017-01-05 13:36:10 +00:00
Matthew Hodgson
b31ed22738 Merge branch 'release-v0.18.6' into develop 2017-01-05 13:03:02 +00:00
Matthew Hodgson
7738329672 Merge pull request #1766 from matrix-org/markjh/linear_logging
More logging for the linearizer and for get_events
2017-01-05 13:01:31 +00:00
Mark Haines
dd3df11c55 More logging for the linearizer and for get_events 2017-01-05 12:32:47 +00:00
Mark Haines
e1c5463efc Merge pull request #1765 from matrix-org/markjh/timeout_get_missing_events
cherrypick #1744: limit total timeout for get_missing_events to 10s
2017-01-05 12:02:23 +00:00
Matthew Hodgson
468749c9fc fix comment 2017-01-05 12:00:11 +00:00
Matthew Hodgson
eedf400d05 limit total timeout for get_missing_events to 10s 2017-01-05 11:58:15 +00:00
Mark Haines
5175094707 Merge pull request #1744 from matrix-org/matthew/timeout_get_missing_events
limit total timeout for get_missing_events to 10s
2017-01-05 11:53:15 +00:00
Matthew Hodgson
8e82611f37 fix comment 2017-01-05 11:44:44 +00:00
Mark Haines
6028718b1a Merge pull request #1764 from matrix-org/markjh/fix_send_pdu
Only send events that originate on this server.
2017-01-05 11:41:22 +00:00
Mark Haines
f784980d2b Only send events that originate on this server.
Or events that are sent via the federation "send_join" API.

This should match the behaviour from before v0.18.5 and #1635 landed.
2017-01-05 11:26:30 +00:00
Mark Haines
0d766c8ccf Merge pull request #1758 from matrix-org/markjh/fix_ban_propagation
Fix propagation of bans to remote servers.
2017-01-04 15:39:31 +00:00
Mark Haines
e02bdaf08b Get the destinations from the state from before the event
Rather than the state after the event.
2017-01-04 15:17:15 +00:00
Mark Haines
b6b67715ed Send ALL membership events to the server that was affected.
Send all membership changes to the server that was affected.
This ensures that if the last member of a room on a server
was kicked or banned, that server still gets told about it.
2017-01-04 13:56:20 +00:00
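A sketch of the destination calculation being fixed (helper defined inline; illustrative, not the literal patch):

```python
def get_domain_from_id(user_id):
    # "@alice:example.com" -> "example.com"
    return user_id.split(":", 1)[1]

def calculate_destinations(event, joined_user_ids, my_server_name):
    destinations = set(get_domain_from_id(u) for u in joined_user_ids)
    if event.get("type") == "m.room.member":
        # The kicked/banned user's server may no longer appear in the
        # joined set, but it still needs to be told about the change.
        destinations.add(get_domain_from_id(event["state_key"]))
    destinations.discard(my_server_name)
    return destinations
```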
Matthew Hodgson
555d702e34 limit total timeout for get_missing_events to 10s 2016-12-31 15:21:37 +00:00
Matthew Hodgson
899a3a1268 Merge branch 'release-v0.18.6' into develop 2016-12-31 02:38:26 +00:00
Mark Haines
f3de4f8cb7 Bump version and changelog 2016-12-30 20:21:04 +00:00
Mark Haines
321d5b73d8 Merge pull request #1736 from matrix-org/markjh/linearizer_logging
Add more useful logging when we block fetching events
2016-12-30 20:05:12 +00:00
Mark Haines
62ce3034f3 s/aquire/acquire/g 2016-12-30 20:04:44 +00:00
Mark Haines
0aff09f6c9 Add more useful logging when we block fetching events 2016-12-30 20:00:44 +00:00
Mark Haines
48c3b7dc19 Merge pull request #1734 from matrix-org/markjh/fix_get_missing
Remove fallback from get_missing_events.
2016-12-30 19:42:11 +00:00
Mark Haines
cc50b1ae53 Remove fallback from get_missing_events.
get_missing_events used to fall back to fetching the missing events
individually, requesting from every server in the room, one by one.

This could be unacceptably slow, possibly causing #1732
2016-12-30 18:13:15 +00:00
Mark Haines
f576c34594 Merge remote-tracking branch 'origin/release-v0.18.6' into develop 2016-12-30 15:13:49 +00:00
Mark Haines
0eac4fa525 Merge pull request #1731 from matrix-org/markjh/logging-memleak
Use the new twisted logging framework.
2016-12-30 12:52:50 +00:00
Mark Haines
822cb39dfa Use the new twisted logging framework.
Hopefully adding an observer to the new framework will avoid a memory
leak https://twistedmatrix.com/trac/ticket/8164
2016-12-30 11:09:24 +00:00
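For reference, registering an observer with Twisted's new-style logger looks roughly like this (a minimal sketch that routes events into the stdlib logging module):

```python
import logging

from twisted.logger import STDLibLogObserver, globalLogBeginner

logging.basicConfig(level=logging.INFO)

# Register an observer with the new framework, so Twisted's log events
# are handled by stdlib logging rather than the legacy publisher.
globalLogBeginner.beginLoggingTo(
    [STDLibLogObserver()],
    redirectStandardIO=False,
)
```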
Mark Haines
342fb8dae9 Merge branch 'release-v0.18.6' into develop 2016-12-29 17:33:46 +00:00
Mark Haines
f023be9293 Bump changelog and version 2016-12-29 16:18:04 +00:00
Mark Haines
828c58522e Merge pull request #1725 from matrix-org/erikj/timeout_conn
Wrap connections in an N minute timeout to ensure they get reaped correctly
2016-12-29 16:14:57 +00:00
Mark Haines
97ffc5690b Manually abort the underlying TLS connection.
The abort() method calls loseConnection(), which tries to shut down the
TLS connection cleanly. We now call abortConnection() directly, which
should promptly close both the TLS connection and the underlying TCP
connection.

I also added some TODO markers to consider cancelling the previous
timeout rather than checking time.time(). But given how urgently we want
to get this code released I'd rather leave the existing code with the
duplicate timeouts and the time.time() check.
2016-12-29 15:51:04 +00:00
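A sketch of the reaping described, assuming a transport that provides Twisted's ITCPTransport.abortConnection():

```python
from twisted.internet import reactor

def reap_connection_after(transport, max_lifetime_secs, clock=reactor):
    # loseConnection() attempts a clean TLS shutdown, which can stall on
    # a dead peer; abortConnection() drops the TLS session and the
    # underlying TCP connection immediately.
    return clock.callLater(max_lifetime_secs, transport.abortConnection)
```

Cancelling the returned DelayedCall when the connection closes normally is the TODO the commit alludes to.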
Erik Johnston
b4bc6fef5b Respect long_retries param and default to off 2016-12-29 00:58:34 +00:00
Erik Johnston
68030fd37b Spelling and comments 2016-12-29 00:10:49 +00:00
Erik Johnston
b7336ff32d Clean up 2016-12-29 00:09:33 +00:00
Erik Johnston
5b6672c66d Wrap connections in an N minute timeout to ensure they get reaped correctly 2016-12-29 00:06:53 +00:00
David Baker
84cf00c645 Fix another comment typo 2016-12-21 09:51:43 +00:00
David Baker
bea15fb599 Merge pull request #1714 from matrix-org/dbkr/delete_threepid
Add /account/3pid/delete endpoint
2016-12-21 09:51:04 +00:00
David Baker
0c88ab1844 Add /account/3pid/delete endpoint
Also fix a typo in a comment
2016-12-20 18:27:30 +00:00
Matthew Hodgson
b7f4f902fa Merge pull request #1712 from kyrias/fix-bind-address-none
Fix check for bind_address
2016-12-20 00:41:42 +00:00
Johannes Löthberg
702c020e58 Fix check for bind_address
The empty string is a valid setting for the bind_address option, so
explicitly check for None here instead.

Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-20 01:37:50 +01:00
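The bug class here is Python truthiness: the empty string (which Twisted treats as "all interfaces") is falsy, so a bare `if bind_address:` cannot distinguish it from an unset option. Illustratively:

```python
bind_address = ""   # "" means "listen on all interfaces" to Twisted
bind_addresses = []

# Before: "" is falsy, so a configured empty-string bind_address was
# silently dropped.
if bind_address:
    bind_addresses.append(bind_address)

# After: only skip the option when it is genuinely absent.
if bind_address is not None:
    bind_addresses.append(bind_address)
```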
Matthew Hodgson
09f15918be Merge pull request #1711 from matrix-org/matthew/utf8-password-change
fix ability to change password to a non-ascii one
2016-12-20 00:02:13 +00:00
Matthew Hodgson
da2c8f3c94 Merge pull request #1709 from kyrias/bind_addresses
Add support for specifying multiple bind addresses
2016-12-19 23:49:34 +00:00
Matthew Hodgson
a58e4e0d48 Merge pull request #1696 from kyrias/ipv6
IPv6 support
2016-12-19 23:49:07 +00:00
Matthew Hodgson
f2a5aebf98 fix ability to change password to a non-ascii one
https://github.com/vector-im/riot-web/issues/2658
2016-12-18 22:25:21 +00:00
Johannes Löthberg
a9c1b419a9 Bump twisted dependency
At least 16.0.0 is needed for wrapClientTLS support.

Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-18 23:16:43 +01:00
Johannes Löthberg
f5cd5ebd7b Add IPv6 comment to default config
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-18 23:14:32 +01:00
Johannes Löthberg
1859af9b2a Update README to use bind_addresses
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-18 22:01:34 +01:00
Johannes Löthberg
c95e9fff99 Make default homeserver config use bind_addresses
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-18 21:51:56 +01:00
Johannes Löthberg
7dfd70fc83 Add support for specifying multiple bind addresses
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-18 21:51:56 +01:00
Erik Johnston
b2f8642d3d Cache network room list queries. 2016-12-16 16:11:43 +00:00
Erik Johnston
f5a4001bb1 Merge branch 'release-v0.18.5' of github.com:matrix-org/synapse 2016-12-16 10:40:10 +00:00
Erik Johnston
b9b6d17ab1 Bump version and changelog 2016-12-16 10:18:02 +00:00
Richard van der Hoff
c824dc727a Merge pull request #1704 from matrix-org/rav/log_todevice_syncs
Add some logging for syncing to_device events
2016-12-15 18:33:55 +00:00
Richard van der Hoff
edc6a1e4f9 Add some logging for syncing to_device events
Attempt to track down the loss of to_device events
(https://github.com/vector-im/riot-web/issues/2711 etc).
2016-12-15 18:16:10 +00:00
Erik Johnston
35129ac998 Merge pull request #1698 from matrix-org/erikj/room_list
Fix caching on public room list
2016-12-15 15:40:28 +00:00
Erik Johnston
ed02a0018c Merge pull request #1702 from manuroe/mac_install_update
Update prerequisites instructions on Mac OS
2016-12-15 15:23:55 +00:00
manuroe
8bb8cc993a Update prerequisites instructions on Mac OS 2016-12-15 15:30:59 +01:00
Erik Johnston
aa1336c00a Merge pull request #1700 from matrix-org/erikj/backfill_filter
Fix /backfill returning events it shouldn't
2016-12-15 14:21:30 +00:00
Erik Johnston
4da3fc0ea0 Merge pull request #1701 from mbachry/fix-preview-crash
Fix crash in url preview
2016-12-15 10:01:28 +00:00
Marcin Bachry
24c16fc349 Fix crash in url preview when html tag has no text
Signed-off-by: Marcin Bachry <hegel666@gmail.com>
2016-12-14 22:38:18 +01:00
Erik Johnston
b8255eba26 Comment 2016-12-14 13:49:54 +00:00
Erik Johnston
b2999a7055 Fix /backfill returning events it shouldn't 2016-12-14 13:41:45 +00:00
Erik Johnston
c3208e45c9 Fixup membership query 2016-12-14 10:46:58 +00:00
Erik Johnston
9d95351cad Merge branch 'develop' of github.com:matrix-org/synapse into erikj/room_list 2016-12-13 17:57:08 +00:00
Erik Johnston
1de53a7a1a Fix caching on public room list 2016-12-13 17:33:24 +00:00
Erik Johnston
bae1115e55 Update changelog 2016-12-13 11:23:08 +00:00
Erik Johnston
b3d398343e Bump changelog and version 2016-12-13 11:07:27 +00:00
Johannes Löthberg
0648e76979 Remove spurious newline
Apparently I just removed the spaces instead...

Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-12 18:41:30 +01:00
Erik Johnston
8588d0eb3d Merge pull request #1697 from matrix-org/erikj/fix_bg_member
Fix background update that prematurely stopped
2016-12-12 17:20:17 +00:00
Erik Johnston
1574b839e0 Merge pull request #1676 from matrix-org/erikj/room_list
Add new API appservice specific public room list
2016-12-12 17:00:10 +00:00
Erik Johnston
7ec2bf9b77 Fix background update that prematurely stopped 2016-12-12 16:54:58 +00:00
Richard van der Hoff
d431c0924c Merge pull request #1694 from matrix-org/rav/no_get_e2e_keys
Remove unspecced GET endpoints for e2e keys
2016-12-12 16:06:03 +00:00
Erik Johnston
2bf5a47b3e Rename network_id to instance_id on client side 2016-12-12 16:05:45 +00:00
Johannes Löthberg
d3bd94805f Fixup for #1689 and #1690
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-12 16:32:47 +01:00
Erik Johnston
09cbcb78d3 Add cache to get_public_room_ids_at_stream_id 2016-12-12 14:41:51 +00:00
Erik Johnston
631376e2ac Notify replication. Use correct network_id 2016-12-12 14:28:15 +00:00
Richard van der Hoff
abed247182 Remove unspecced GET endpoints for e2e keys
GET /keys/claim is a terrible idea, since it isn't idempotent; also it throws
500 errors if you call it without all the right params.

GET /keys/query is arguable, but it's unspecced, so let's get rid of it too to
stop people relying on unspecced APIs.
2016-12-12 12:31:40 +00:00
Richard van der Hoff
9240948346 Merge remote-tracking branch 'origin/master' into develop 2016-12-12 11:46:46 +00:00
Richard van der Hoff
62e6d40b39 Merge pull request #1685 from matrix-org/rav/update_readme_for_tests
Update the readme to use trial
2016-12-12 11:17:06 +00:00
Erik Johnston
d45c984653 Docstring 2016-12-12 11:00:27 +00:00
Erik Johnston
d53a80af25 Merge pull request #1620 from matrix-org/erikj/concurrent_room_access
Limit the number of events that can be created on a given room concurrently
2016-12-12 10:30:23 +00:00
Richard van der Hoff
85cd30b1fd Merge pull request #1686 from matrix-org/rav/fix_federation_key_fails
E2E key query: handle federation fails
2016-12-12 09:33:39 +00:00
Richard van der Hoff
deca951241 Remove unused import 2016-12-12 09:24:35 +00:00
Glyph
9f07f4c559 IPv6 support for endpoint.py
Similar to https://github.com/matrix-org/synapse/pull/1689, but for endpoint.py
2016-12-11 11:10:32 +01:00
Glyph
6e18805ac2 IPv6 support for client.py
This is an (untested) general sketch of how to use wrapClientTLS to implement TLS over IPv6, as well as faster connections over IPv4.
2016-12-11 11:10:32 +01:00
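The endpoint shape being sketched is roughly the following (hostname and port illustrative); this is also why the twisted dependency bump further down requires 16.0.0:

```python
from twisted.internet import reactor
from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
from twisted.internet.ssl import optionsForClientTLS

# HostnameEndpoint resolves both A and AAAA records and attempts
# connections to each; wrapClientTLS then layers TLS over whichever
# connection succeeds.
endpoint = wrapClientTLS(
    optionsForClientTLS(u"matrix.example.com"),
    HostnameEndpoint(reactor, b"matrix.example.com", 8448),
)
```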
Richard van der Hoff
77692b52b5 Merge pull request #1684 from matrix-org/rav/no_run_tox_from_setup
Don't try to run tox from setup.py
2016-12-09 18:33:35 +00:00
Richard van der Hoff
efa4ccfaee E2E key query: handle federation fails
Don't fail the whole request if we can't connect to a particular server.
2016-12-09 18:31:01 +00:00
Richard van der Hoff
e721a7f2c1 Implement a null 'test' command 2016-12-09 18:00:58 +00:00
Richard van der Hoff
1233d244ff fix pythonpath 2016-12-09 17:52:51 +00:00
Erik Johnston
b541fac7c3 Merge pull request #1683 from matrix-org/erikj/notifier_sadness
Fix rare notifier bug where listeners don't time out
2016-12-09 17:27:45 +00:00
Richard van der Hoff
af32d3b773 Don't try to run tox from setup.py
Using tox to run the tests is a bad idea, as per the comments.
2016-12-09 17:16:31 +00:00
Richard van der Hoff
2fda8134f1 Update the readme to use trial 2016-12-09 17:15:38 +00:00
Erik Johnston
8b34f71bea Fix unit tests 2016-12-09 16:48:48 +00:00
Erik Johnston
fbaf868f62 Correctly handle timeout errors 2016-12-09 16:30:29 +00:00
Richard van der Hoff
4a9c38bfa3 Fix broken README merge
When 546ec1a was merged into develop, I accidentally overwrote the change
introduced in debbea5 (pr #1657). Reintroduce it.
2016-12-09 16:05:12 +00:00
Erik Johnston
be14c24cea Fix rare notifier bug where listeners don't time out
There was a race condition that caused the notifier to 'miss' the
timeout notification; since there were no other checks for the timeout,
this caused listeners to get stuck in a loop until something happened.
2016-12-09 15:43:18 +00:00
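A sketch of the defensive loop this implies (listener API hypothetical):

```python
def wait_for_events(listener, timeout_ms, clock):
    # Re-check the deadline on every pass: even if a timeout
    # notification is 'missed', the loop can no longer spin forever.
    end_time = clock.time_msec() + timeout_ms
    while True:
        now = clock.time_msec()
        if now >= end_time:
            return None  # timed out
        result = listener.wait(end_time - now)  # hypothetical API
        if result is not None:
            return result
```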
Erik Johnston
1697f6a323 Merge pull request #1680 from matrix-org/erikj/joined_rooms
Add new room membership APIs
2016-12-09 11:35:48 +00:00
Erik Johnston
52d12ca782 Add /room/<room_id>/joined_members API
This returns the currently joined members in the room with their display
names and avatar urls. This is more efficient than /members for large
rooms where you don't need the full events.
2016-12-08 13:32:07 +00:00
Erik Johnston
c45d8e9ba2 Add profile data to the room_membership table for joins 2016-12-08 13:08:41 +00:00
Richard van der Hoff
da13b4aa86 Merge pull request #1678 from matrix-org/rav/fix_receipt_notifications
Read-receipt fixes
2016-12-08 12:34:57 +00:00
Richard van der Hoff
b08f76bd23 Fix ignored read-receipts
Don't ignore read-receipts which arrive in the same EDU as a read-receipt for
an old event.
2016-12-08 12:13:01 +00:00
Richard van der Hoff
bd07a35c29 Fix result of insert_receipt
This should fix the absence of notifications when new receipts arrive.
2016-12-08 12:11:34 +00:00
Erik Johnston
de796f27e6 Add joined_rooms servlet 2016-12-08 11:39:03 +00:00
Erik Johnston
2687af82d4 Comments 2016-12-07 09:58:33 +00:00
Erik Johnston
3727d66a0e Don't include appservice id 2016-12-06 17:04:26 +00:00
Erik Johnston
0d81e26769 Merge pull request #1672 from williamleuschner/develop
Add README instructions for OpenBSD installation
2016-12-06 16:39:15 +00:00
William Leuschner
59bc64328f Fix incorrect numbering on OpenBSD instructions caused by my own incompetence
Signed-off-by: William Leuschner <wel2138@rit.edu>
2016-12-06 11:31:17 -05:00
Erik Johnston
f32fb65552 Add new API appservice specific public room list 2016-12-06 16:12:27 +00:00
William Leuschner
39a76b9cba Update incorrect information in README about ksh and source
Signed-off-by: William Leuschner <wel2138@rit.edu>
2016-12-06 11:09:31 -05:00
Richard van der Hoff
1529c19675 Prevent user tokens being used as guest tokens (#1675)
Make sure that a user cannot pretend to be a guest by adding 'guest = True'
caveats.
2016-12-06 15:31:37 +00:00
Richard van der Hoff
194b6259c5 Travis config (#1674) 2016-12-06 12:51:32 +00:00
Richard van der Hoff
5a2c33c12e Merge pull request #1673 from matrix-org/rav/fix_tox_tests
Fix unittests under tox
2016-12-06 11:45:55 +00:00
Richard van der Hoff
7dae7087d3 Fix unittests under tox
We now need to set PYTHONPATH when running the unit tests; update tox config to
do so.
2016-12-06 11:35:14 +00:00
William Leuschner
12aefb9dfc Add README instructions for OpenBSD installation
Signed-off-by: William Leuschner <wel2138@rit.edu>
2016-12-05 14:16:39 -05:00
Erik Johnston
9609c91e7d Merge pull request #653 from matrix-org/erikj/preset_guest_join
Enable guest access for private rooms by default
2016-12-05 17:47:14 +00:00
Erik Johnston
338df4f409 Merge pull request #1649 from matrix-org/dbkr/log_ui_auth_args
Log the args that we have on UI auth completion
2016-12-05 16:40:58 +00:00
Erik Johnston
3e90250ea3 Merge pull request #1668 from pik/bug-console-filter
Logging: Fix console filter breaking when level is DEBUG
2016-12-05 16:38:26 +00:00
Richard van der Hoff
0b1e287e81 Merge pull request #1671 from kyrias/fix-preview-test
Fix preview test
2016-12-05 15:39:17 +00:00
Johannes Löthberg
6c9a0ba415 test_preview: Fix incorrect wrapping
The old test expected an incorrect wrapping due to the preview function
not using unicode properly, so it got the wrong length.

Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-05 16:33:57 +01:00
Johannes Löthberg
0697bb2247 Make test_preview use unicode strings
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-05 16:33:57 +01:00
Richard van der Hoff
24081224d1 Merge remote-tracking branch 'origin/master' into develop 2016-12-05 11:12:43 +00:00
pik
c46e7a9c9b Bugfix: Console logging handler missing default filter 2016-12-03 20:14:58 -03:00
Richard van der Hoff
a2849a18a5 README: fix link 2016-12-03 13:18:27 +00:00
Richard van der Hoff
59984e9f58 Merge remote-tracking branch 'origin/master' into develop 2016-12-03 13:08:53 +00:00
Richard van der Hoff
546ec1a5cf Merge pull request #1667 from matrix-org/rav/update_readme
Updates to the README
2016-12-03 13:07:43 +00:00
Matthew Hodgson
7a00178832 Merge pull request #1664 from kyrias/preview-url-resource-encoding
preview_url_resource: Ellipsis must be in unicode string
2016-12-03 01:34:59 +00:00
Richard van der Hoff
9df84dd22d README: review comments
Minor updates post-review
2016-12-02 17:17:15 +00:00
Richard van der Hoff
3f23154088 README: rewrite federation section 2016-12-02 12:02:33 +00:00
Richard van der Hoff
f6270a8fe2 README: add reverse-proxying section 2016-12-02 12:02:33 +00:00
Richard van der Hoff
235407a78e README: Rewrite "Identity servers" section 2016-12-02 12:02:33 +00:00
Richard van der Hoff
77bf92e3c6 README: rewrite installation instructions 2016-12-02 12:02:33 +00:00
Richard van der Hoff
bb3d0c270d README: remove refs to demo client
The demo client isn't really fit for purpose, so stop encouraging people to use
it.
2016-12-02 11:54:59 +00:00
Richard van der Hoff
f8c45d428c README: code quotes
Add some syntax highlighting
2016-12-02 11:47:55 +00:00
Richard van der Hoff
153535fc56 README: "About matrix" updates
- remove redundant "where's the spec" section: this would belong in "About
matrix", but it's already there.

- E2E is in beta rather than dev
2016-12-02 11:46:00 +00:00
Richard van der Hoff
a8d8225ead README: Fix links
Fix a couple of broken links
2016-12-02 11:43:10 +00:00
Richard van der Hoff
cc03f4c58b Rearrange the README
Move some bits of the README around. No words were changed in the making of
this commit.
2016-12-02 11:42:59 +00:00
Johannes Löthberg
32c8b5507c preview_url_resource: Ellipsis must be in unicode string
Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2016-12-01 13:12:13 +01:00
Richard van der Hoff
971edd04af rename CAPTCHA_SETUP
this is rst so name it accordingly
2016-12-01 12:08:03 +00:00
Richard van der Hoff
471200074b Merge pull request #1654 from matrix-org/rav/no_more_refresh_tokens
Stop generating refresh_tokens
2016-12-01 11:42:53 +00:00
Richard van der Hoff
6841d8ff55 Fix doc-string
Remove refresh_token reference
2016-12-01 11:42:17 +00:00
Richard van der Hoff
12f3b9000c fix imports 2016-11-30 17:45:49 +00:00
Richard van der Hoff
aa09d6b8f0 Rip out more refresh_token code
We might as well treat all refresh_tokens as invalid. Just return a 403 from
/tokenrefresh, so that we don't have a load of dead, untestable code hanging
around.

Still TODO: removing the table from the schema.
2016-11-30 17:40:18 +00:00
Richard van der Hoff
dc4b23e1a1 Merge branch 'develop' into rav/no_more_refresh_tokens 2016-11-30 17:10:04 +00:00
Richard van der Hoff
8379a741cc Merge pull request #1660 from matrix-org/rav/better_content_type_validation
More intelligent Content-Type parsing
2016-11-30 16:54:03 +00:00
Richard van der Hoff
321fe5c44c Merge pull request #1656 from matrix-org/rav/remove_time_caveat
Stop putting a time caveat on access tokens
2016-11-30 16:53:20 +00:00
Richard van der Hoff
b5b3a7e867 More intelligent Content-Type parsing
Content-Type is allowed to contain options (`; charset=utf-8`, for
instance). We should allow that.
2016-11-30 15:07:32 +00:00
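What "allowing options" means in practice; cgi.parse_header was the stdlib way to split them off in the Python 2 era this code targets:

```python
import cgi

# "application/json; charset=utf-8" must still be recognised as JSON:
# compare the media type, not the raw header value.
media_type, options = cgi.parse_header("application/json; charset=utf-8")
assert media_type == "application/json"
assert options == {"charset": "utf-8"}
```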
Richard van der Hoff
4febfe47f0 Comments
Update comments in verify_macaroon
2016-11-30 07:36:32 +00:00
Richard van der Hoff
77eca2487c Merge pull request #1653 from matrix-org/rav/guest_e2e
Implement E2E for guests
2016-11-29 17:41:35 +00:00
Richard van der Hoff
1c4f05db41 Stop putting a time caveat on access tokens
The 'time' caveat on the access tokens was something of a lie, since we weren't
enforcing it; more pertinently its presence stops us ever adding useful time
caveats.

Let's move in the right direction by not lying in our caveats.
2016-11-29 16:49:41 +00:00
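Concretely, with pymacaroons (caveat strings modelled on Synapse's access tokens; the point is the absence of an unenforced `time < ...` caveat):

```python
import pymacaroons

macaroon = pymacaroons.Macaroon(
    location="example.com",
    identifier="key",
    key="macaroon-secret-key",
)
macaroon.add_first_party_caveat("gen = 1")
macaroon.add_first_party_caveat("type = access")
macaroon.add_first_party_caveat("user_id = @alice:example.com")
# No "time < ..." caveat: a caveat we never verify is a lie, and its
# presence would block adding a real, enforced expiry later.
token = macaroon.serialize()
```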
Richard van der Hoff
7d855447ef Merge pull request #1657 from matrix-org/rav/hurry_up_pip
Let pip install multiple packages at once
2016-11-29 16:28:29 +00:00
Richard van der Hoff
debbea5b29 Let pip install multiple packages at once
Pip can install multiple dependencies at the same time, so there is no need to
use xargs -n1. It's significantly slower with -n1, so let's not do it
for no reason.
2016-11-29 12:59:19 +00:00
Richard van der Hoff
5c4edc83b5 Stop generating refresh tokens
Since we're not doing refresh tokens any more, we should start killing off the
dead code paths. /tokenrefresh itself is a bit of a thornier subject, since
there might be apps out there using it, but we can at least not generate
refresh tokens on new logins.
2016-11-28 10:13:01 +00:00
Richard van der Hoff
b6146537d2 Merge pull request #1655 from matrix-org/rav/remove_redundant_macaroon_checks
Remove redundant list of known caveat prefixes
2016-11-25 16:57:19 +00:00
Richard van der Hoff
f62b69e32a Allow guest access to endpoints for E2E
Expose /devices, /keys, and /sendToDevice to guest users, so that they can use
E2E.
2016-11-25 15:26:34 +00:00
Richard van der Hoff
7f02e4d008 Give guest users a device_id
We need to create devices for guests so that they can use e2e, but we don't
have anywhere to store it, so just use a fixed one.
2016-11-25 15:25:30 +00:00
Erik Johnston
9192e593ec Merge pull request #1650 from matrix-org/erikj/respect_ratelimited
Correctly handle 500's and 429 on federation
2016-11-24 15:27:19 +00:00
Erik Johnston
11bfe438a2 Use correct var 2016-11-24 15:26:53 +00:00
Erik Johnston
aaecffba3a Correctly handle 500's and 429 on federation 2016-11-24 15:04:49 +00:00
Richard van der Hoff
e1d7c96814 Remove redundant list of known caveat prefixes
Also add some comments.
2016-11-24 12:38:17 +00:00
Erik Johnston
7e03f9a484 Bump version and changelog 2016-11-24 12:29:58 +00:00
Erik Johnston
46ca345b06 Don't send old events as federation 2016-11-24 12:29:02 +00:00
Erik Johnston
f36ea03741 Bump changelog and version 2016-11-24 11:08:01 +00:00
David Baker
c9d4e7b716 Clarify that creds does not contain passwords. 2016-11-24 10:54:59 +00:00
David Baker
f681aab895 Log the args that we have on UI auth completion
This will be super helpful for debugging if we have more
registration woes.
2016-11-24 10:11:45 +00:00
Erik Johnston
11254bdf6d Merge pull request #1644 from matrix-org/erikj/efficient_notif_counts
More efficient notif count queries
2016-11-23 16:13:05 +00:00
Erik Johnston
1985860c6e Comment 2016-11-23 15:59:59 +00:00
Erik Johnston
2ac516850b More efficient notif count queries 2016-11-23 15:57:04 +00:00
Erik Johnston
302fbd218d Merge pull request #1635 from matrix-org/erikj/split_out_fed_txn
Split out federation transaction sending to a worker
2016-11-23 15:39:12 +00:00
Erik Johnston
b2d6e63b79 Merge pull request #1641 from matrix-org/erikj/as_pushers
Ignore AS users when fetching push rules
2016-11-23 15:21:52 +00:00
Erik Johnston
feec718265 Shuffle receipt handler around so that worker apps don't need to load it 2016-11-23 15:14:24 +00:00
Erik Johnston
ee5e8d71ac Fix tests 2016-11-23 14:57:07 +00:00
Erik Johnston
26072df6af Ensure only main or federation_sender process can send federation traffic 2016-11-23 14:09:47 +00:00
Erik Johnston
b69f76c106 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_out_fed_txn 2016-11-23 11:31:53 +00:00
Erik Johnston
4d9b5c60f9 Comment 2016-11-23 11:11:41 +00:00
Erik Johnston
0163466d72 Ignore AS users when fetching push rules
By ignoring AS users early on when fetching push rules for a room we can
avoid needlessly hitting the DB and filling up the caches.
2016-11-23 11:01:01 +00:00
Erik Johnston
4c79a63fd7 Explicit federation ack 2016-11-23 10:40:44 +00:00
Erik Johnston
54fed21c04 Fix tests and flake8 2016-11-22 18:18:31 +00:00
Erik Johnston
90565d015e Invalidate retry cache in both directions 2016-11-22 17:45:44 +00:00
Kegsay
0cf2a64974 Merge pull request #1640 from matrix-org/kegan/sync-perf
Return early on /sync code paths if a '*' filter is used
2016-11-22 17:41:47 +00:00
Kegan Dougal
83bcdcee61 Return early on /sync code paths if a '*' filter is used
This is currently very conservative in that it only does this if there is no
`since` token. This limits the risk to clients likely to be doing one-off
syncs (like bridges), but does mean that normal human clients won't benefit
from the time savings here. If the savings are large enough, I would consider
generalising this to just check the filter.
2016-11-22 16:38:35 +00:00
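A sketch of the early return (filter helpers hypothetical):

```python
def filter_timeline(sync_filter, events, since_token):
    # Fast path: an initial sync (no `since` token) with a '*' wildcard
    # filter keeps every event, so skip the per-event filtering work.
    if since_token is None and sync_filter.matches_all():  # hypothetical
        return events
    return [ev for ev in events if sync_filter.check(ev)]
```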
Kegsay
d4a459f7cb Merge pull request #1638 from matrix-org/kegan/sync-event-fields
Implement "event_fields" in filters
2016-11-22 14:02:38 +00:00
Kegan Dougal
c3d963ac24 Review comments 2016-11-22 13:42:11 +00:00
Kegan Dougal
6d4e6d4cba Also check for dict since sometimes they aren't frozen 2016-11-22 10:39:41 +00:00
Erik Johnston
baf9e74a73 Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-11-22 10:31:48 +00:00
Erik Johnston
f9834a3d1a Merge branch 'release-v0.18.4' of github.com:matrix-org/synapse 2016-11-22 10:25:23 +00:00
Erik Johnston
aac06e8f74 Bump changelog 2016-11-22 10:24:04 +00:00
Erik Johnston
2bbc4cab60 Merge branch 'dbkr/work_around_devicename_bug' of github.com:matrix-org/synapse into release-v0.18.4 2016-11-22 10:18:30 +00:00
Kegan Dougal
cea4e4e7b2 Glue only_event_fields into the sync rest servlet 2016-11-22 10:14:05 +00:00
Kegan Dougal
0a8b0eeca1 More tests 2016-11-22 09:59:27 +00:00
Erik Johnston
51e89709aa Comments 2016-11-21 17:59:39 +00:00
Kegan Dougal
53b27bbf06 Add remaining tests 2016-11-21 17:58:22 +00:00
Kegan Dougal
70a2157b64 Start adding some tests 2016-11-21 17:52:45 +00:00
Kegan Dougal
f97511a1f3 Move event_fields filtering to serialize_event
Also make it an inclusive not exclusive filter, as the spec demands.
2016-11-21 17:42:16 +00:00
Erik Johnston
73dc099645 Add federation-sender to sytest 2016-11-21 17:37:59 +00:00
Erik Johnston
88d85ebae1 Add some metrics 2016-11-21 17:36:05 +00:00
Erik Johnston
50934ce460 Comments 2016-11-21 16:55:23 +00:00
Kegan Dougal
e90fcd9edd Add filter_event_fields and filter_field to FilterCollection 2016-11-21 15:18:18 +00:00
Erik Johnston
9687e039e7 Remove explicit calls to send_pdu 2016-11-21 14:48:51 +00:00
Kegsay
a28ec23273 Merge pull request #1636 from matrix-org/kegan/filter-error-msg
Fail with a coherent error message if `/sync?filter=` is invalid
2016-11-21 13:36:33 +00:00
Kegan Dougal
a2a6c1c22f Fail with a coherent error message if /sync?filter= is invalid 2016-11-21 13:15:25 +00:00
Erik Johnston
524d61bf7e Fix tests 2016-11-21 11:53:02 +00:00
Erik Johnston
7c9cdb2245 Store federation stream positions in the database 2016-11-21 11:33:08 +00:00
Mark Haines
a289150943 Fix flake8 2016-11-18 17:15:02 +00:00
David Baker
544722bad2 Work around client replacing reg params
Works around https://github.com/vector-im/vector-android/issues/715
and equivalent for iOS
2016-11-18 17:07:35 +00:00
Erik Johnston
f8ee66250a Handle sending events and device messages over federation 2016-11-17 15:48:04 +00:00
Erik Johnston
ed787cf09e Hook up the send queue and create a federation sender worker 2016-11-16 17:34:44 +00:00
Erik Johnston
1587b5a033 Add initial cut of federation send queue 2016-11-16 14:47:52 +00:00
Erik Johnston
59ef517e6b Use new federation_sender DI 2016-11-16 14:47:52 +00:00
Erik Johnston
847d5db1d1 Add transaction queue and transport layer to DI 2016-11-16 14:47:52 +00:00
Erik Johnston
daec6fc355 Move logic into transaction_queue 2016-11-16 14:47:52 +00:00
Erik Johnston
0e830d3770 Rename transaction queue functions to send_* 2016-11-16 14:47:52 +00:00
Erik Johnston
dc6cede78e Merge pull request #1628 from matrix-org/erikj/ldap_split_out
Use external ldap auth package
2016-11-15 16:53:34 +00:00
Erik Johnston
c7546b3cdb Merge pull request #1617 from matrix-org/erikj/intern_state_dict
Correctly intern keys in state cache
2016-11-15 16:45:55 +00:00
Erik Johnston
d56c39cf24 Use external ldap auth package 2016-11-15 13:03:19 +00:00
Erik Johnston
f9d156d270 New Flake8 fixes 2016-11-15 11:22:29 +00:00
Erik Johnston
9d58ccc547 Bump changelog and version 2016-11-14 15:05:04 +00:00
Kegsay
9355a5c42b Merge pull request #1624 from matrix-org/kegan/idempotent-requests
Store Promise<Response> instead of Response for HTTP API transactions
2016-11-14 12:45:30 +00:00
Kegan Dougal
3991b4cbdb Clean transactions based on time. Add HttpTransactionCache tests. 2016-11-14 11:19:24 +00:00
Kegan Dougal
af4a1bac50 Move .observe() up to the cache to make things neater 2016-11-14 09:52:41 +00:00
Erik Johnston
0964005d84 Merge pull request #1625 from DanielDent/patch-1
Add support for durations in minutes
2016-11-12 11:20:46 +00:00
Daniel Dent
1c93cd9f9f Add support for durations in minutes 2016-11-12 00:10:23 -08:00
Kegan Dougal
8ecaff51a1 Review comments 2016-11-11 17:47:03 +00:00
Kegan Dougal
f6c48802f5 More flake8 2016-11-11 15:08:24 +00:00
Kegan Dougal
a88bc67f88 Flake8 and fix whoopsie 2016-11-11 15:02:29 +00:00
Kegan Dougal
42c43cfafd Use ObservableDeferreds instead of Deferreds as they behave as intended 2016-11-11 14:54:10 +00:00
Kegan Dougal
c7daf3136c Use observable deferreds because they are sane 2016-11-11 14:13:32 +00:00
Erik Johnston
64038b806c Comments 2016-11-11 10:42:08 +00:00
Erik Johnston
2bd4513a4d Limit the number of events that can be created on a given room concurrently 2016-11-10 16:44:35 +00:00
Erik Johnston
d073cb7ead Add Limiter: limit concurrent access to resource 2016-11-10 16:29:51 +00:00
Kegan Dougal
8a8ad46f48 Flake8 2016-11-10 15:22:11 +00:00
Kegan Dougal
2771447c29 Store Promise<Response> instead of Response for HTTP API transactions
This fixes a race whereby:
 - User hits an endpoint.
 - No cached transaction so executes main code.
 - User hits same endpoint.
 - No cache transaction so executes main code.
 - Main code finishes executing and caches response and returns.
 - Main code finishes executing and caches response and returns.

 This race is common in the wild when Synapse is struggling under load.
 This commit fixes the race by:
  - User hits an endpoint.
  - Caches the promise to execute the main code and executes main code.
  - User hits same endpoint.
  - Yields on the same promise as the first request.
  - Main code finishes executing and returns, unblocking both requests.
2016-11-10 14:49:26 +00:00
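A minimal sketch of the promise-caching fix; the class and method names approximate Synapse's HttpTransactionCache but this is not the literal code:

```python
from twisted.internet import defer

class HttpTransactionCache(object):
    def __init__(self):
        self._transactions = {}  # txn_key -> Deferred of (code, json_body)

    def fetch_or_execute(self, txn_key, fn, *args, **kwargs):
        if txn_key not in self._transactions:
            # Cache the Deferred *before* it resolves: a retry that
            # races the first request now waits on the same execution
            # instead of running the handler a second time.
            self._transactions[txn_key] = defer.maybeDeferred(
                fn, *args, **kwargs
            )
        return self._transactions[txn_key]
```

Handing the same raw Deferred to several callers lets one caller's callbacks consume another's result, which is what the ObservableDeferred commits above go on to address.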
Erik Johnston
6cc4fcf25c Merge pull request #1619 from matrix-org/erikj/pwd_provider_error
Don't assume providers raise ConfigError's
2016-11-09 11:11:06 +00:00
Erik Johnston
ac507e7ab8 Don't assume providers raise ConfigError's 2016-11-08 17:23:28 +00:00
Erik Johnston
e6651e8046 Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-11-08 14:43:49 +00:00
Erik Johnston
291628d42a Merge branch 'erikj/ldap3_auth' 2016-11-08 14:40:54 +00:00
Erik Johnston
3c09818d91 Bump version and changelog 2016-11-08 14:39:55 +00:00
Erik Johnston
27d3f2e7ab Explicitly set authentication mode in ldap3
This only makes a difference for versions of ldap3 before 1.0, but a)
it's best to be explicit and b) there are distributions that package
ancient versions of ldap3 (e.g. Debian).
2016-11-08 14:35:25 +00:00
Erik Johnston
17e0a58020 Merge pull request #1615 from matrix-org/erikj/limit_prev_events
Limit the number of prev_events of new events
2016-11-08 12:06:15 +00:00
Erik Johnston
587d8ac60f Correctly intern keys in state cache 2016-11-08 11:53:25 +00:00
Erik Johnston
34449cfc6c Merge pull request #1616 from matrix-org/erikj/worker_frozen_dict
Respect use_frozen_dicts option in workers
2016-11-08 11:32:22 +00:00
Erik Johnston
a4632783fb Sample correctly 2016-11-08 11:20:26 +00:00
Erik Johnston
24772ba56e Respect use_frozen_dicts option in workers 2016-11-08 11:07:18 +00:00
Erik Johnston
eeda4e618c Limit the number of prev_events of new events 2016-11-08 11:02:29 +00:00
Erik Johnston
d24197bead Merge pull request #1198 from euank/more-ip-blacklist
default config: blacklist more internal ips
2016-11-07 09:41:34 +00:00
Euan Kemp
c6bbad109b default config: blacklist more internal ips 2016-11-06 17:02:25 -08:00
Erik Johnston
16dc9064d4 Merge pull request #1195 from matrix-org/erikj/incorrect_func
Remove unused but buggy function
2016-11-04 11:09:38 +00:00
Erik Johnston
63772443e6 Comment 2016-11-04 10:53:42 +00:00
Erik Johnston
a3f6576084 Remove unused but buggy function 2016-11-04 10:48:20 +00:00
Paul Evans
7fc2b5c063 Merge pull request #1193 from matrix-org/paul/metrics
More updates to Prometheus metrics exposition
2016-11-03 17:22:15 +00:00
Paul "LeoNerd" Evans
89e3e39d52 Fix copypasto error in metric rename table in docs 2016-11-03 17:04:13 +00:00
Paul "LeoNerd" Evans
2938a00825 Rename the python-specific metrics now the docs claim that we have done 2016-11-03 17:03:52 +00:00
Paul "LeoNerd" Evans
5219f7e060 Since we don't export per-filetype fd counts any more, delete all the code related to that too 2016-11-03 16:41:32 +00:00
Paul "LeoNerd" Evans
93ebeb2aa8 Remove now-unused 'resource' import 2016-11-03 16:37:09 +00:00
Paul "LeoNerd" Evans
c1b077cd19 Now we have new-style metrics don't bother exporting legacy-named process ones 2016-11-03 16:34:16 +00:00
Erik Johnston
06cc0bb762 Merge pull request #1192 from matrix-org/erikj/postgres_gist
Replace postgres GIN with GIST
2016-11-03 15:15:43 +00:00
Erik Johnston
64c6566980 Remove spurious comment 2016-11-03 15:04:32 +00:00
Erik Johnston
8fd4d9129f Replace postgres GIN with GIST
This is because GIN indexes can be slow to write to, especially when the
table gets large.
2016-11-03 15:00:03 +00:00
David Baker
9164bfa1c3 Merge pull request #1191 from matrix-org/dbkr/non_ascii_passwords
Don't error on non-ascii passwords
2016-11-03 11:21:07 +00:00
David Baker
9084720993 Don't error on non-ascii passwords 2016-11-03 10:42:14 +00:00
Mark Haines
80d5d3baa1 Merge pull request #1190 from matrix-org/markjh/media_cors
Set CORS headers on responses from the media repo
2016-11-02 14:49:28 +00:00
Mark Haines
b1c27975d0 Set CORS headers on responses from the media repo 2016-11-02 11:29:25 +00:00
Erik Johnston
dc155f4c2c Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-11-01 16:48:53 +00:00
Erik Johnston
2746e805fe Merge pull request #1188 from matrix-org/erikj/sent_transactions
Remove sent_transactions table.
2016-11-01 14:52:55 +00:00
Erik Johnston
0aeb1324b7 Merge branch 'release-v0.18.2' of github.com:matrix-org/synapse into develop 2016-11-01 14:18:07 +00:00
Erik Johnston
4a9055d446 Merge branch 'release-v0.18.2' of github.com:matrix-org/synapse 2016-11-01 13:14:04 +00:00
Erik Johnston
3c91c5b216 Bump version and changelog 2016-11-01 13:06:36 +00:00
Erik Johnston
f6e8019b9c Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.18.2 2016-11-01 11:57:28 +00:00
Erik Johnston
760469c812 Continue to clean up received_transactions 2016-11-01 11:42:08 +00:00
Paul Evans
47ed4d84bb Merge pull request #1187 from matrix-org/paul/metrics-howto
Update documentation about exported prometheus metrics
2016-10-31 18:33:32 +00:00
Erik Johnston
f09d2b692f Removed unused stuff 2016-10-31 17:10:56 +00:00
Erik Johnston
4c3eb14d68 Increase batching of sent transaction inserts
This should further reduce the number of individual inserts,
transactions and updates that are required for keeping sent_transactions
up to date.
2016-10-31 16:07:45 +00:00
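The batching pattern in question, sketched generically (columns simplified):

```python
def record_sent_transactions(txn, rows):
    # One executemany() round trip instead of one INSERT per row.
    txn.executemany(
        "INSERT INTO sent_transactions"
        " (transaction_id, destination, ts) VALUES (?, ?, ?)",
        rows,  # iterable of (transaction_id, destination, ts) tuples
    )
```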
Paul "LeoNerd" Evans
1d4d518b50 Add details of renamed metrics 2016-10-31 15:06:52 +00:00
Paul "LeoNerd" Evans
159434a133 Remove long-deprecated instructions about prometheus console; also fix for modern config file format 2016-10-28 13:58:27 +01:00
Erik Johnston
264f6c2a39 Changelog formatting 2016-10-28 11:19:21 +01:00
Mark Haines
82e71a259c Bump changelog and version 2016-10-28 11:16:05 +01:00
Mark Haines
490b97d3e7 Merge branch 'develop' into release-v0.18.2 2016-10-28 11:12:31 +01:00
Paul Evans
f9d5b60a24 Merge pull request #1184 from matrix-org/paul/metrics
Bugfix for process-wide metric export on split processes
2016-10-27 18:27:36 +01:00
Paul "LeoNerd" Evans
1cc22da600 Set up the process collector during metrics __init__; that way all split-process workers have it 2016-10-27 18:09:34 +01:00
Paul "LeoNerd" Evans
aac13b1f9a Pass the Metrics group into the process collector instead of having it find its own one; this avoids it needing to import from synapse.metrics 2016-10-27 18:08:15 +01:00
Paul "LeoNerd" Evans
ccc1a3d54d Allow creation of a 'subspace' within a Metrics object, returning another one 2016-10-27 18:07:34 +01:00
Erik Johnston
665e53524e Bump changelog and version 2016-10-27 14:52:47 +01:00
Erik Johnston
e438699c59 Merge pull request #1183 from matrix-org/erikj/fix_email_update
Fix user_threepids schema delta
2016-10-27 14:48:30 +01:00
Erik Johnston
a9111786f9 Use most recently added binding, not most recently seen user. 2016-10-27 14:32:45 +01:00
Erik Johnston
1fc1bc2a51 Fix user_threepids schema delta
The delta `37/user_threepids.sql` aimed to update all the email
addresses to be lower case, however duplicate emails may exist in the
table already.

This commit adds a step where the delta moves the duplicate emails to a
new `medium` `email_old`. Only the most recently used account keeps the
binding intact. We move rather than delete so that we retain some record
of which emails were associated with which account.
2016-10-27 14:14:44 +01:00
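A sketch of the dedupe step this describes; the ordering column is assumed, and per the 'Use most recently added binding' commit above, the newest binding is the one kept:

```python
def demote_duplicate_emails(txn):
    # Addresses that, once lower-cased, are bound to more than one user.
    txn.execute(
        "SELECT lower(address) FROM user_threepids WHERE medium = 'email'"
        " GROUP BY lower(address) HAVING count(*) > 1"
    )
    for (address,) in txn.fetchall():
        # Keep the most recently added binding (added_at is assumed)...
        txn.execute(
            "SELECT user_id FROM user_threepids"
            " WHERE medium = 'email' AND lower(address) = ?"
            " ORDER BY added_at DESC LIMIT 1",
            (address,),
        )
        (keep_user,) = txn.fetchone()
        # ...and move the rest to 'email_old' rather than deleting them.
        txn.execute(
            "UPDATE user_threepids SET medium = 'email_old'"
            " WHERE medium = 'email' AND lower(address) = ?"
            " AND user_id != ?",
            (address, keep_user),
        )
```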
Erik Johnston
db0609f1ec Update changelog 2016-10-27 11:13:07 +01:00
Erik Johnston
ab731d8f8e Bump changelog and version 2016-10-27 11:07:02 +01:00
Erik Johnston
45bdacd9a7 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.18.2 2016-10-27 11:02:30 +01:00
Mark Haines
177f104432 Merge pull request #1098 from matrix-org/markjh/bearer_token
Allow clients to supply access_tokens as headers
2016-10-25 17:33:15 +01:00
Erik Johnston
22fbf86e4f Merge branch 'release-v0.18.2' of github.com:matrix-org/synapse into develop 2016-10-25 15:09:46 +01:00
Erik Johnston
f138bb40e2 Fixup change log 2016-10-25 11:23:40 +01:00
Erik Johnston
855645c719 Bump version and changelog 2016-10-25 11:16:16 +01:00
Erik Johnston
25423f50aa Merge pull request #1179 from matrix-org/erikj/typing_timer_paranoia
Fix infinite typing bug
2016-10-25 10:59:59 +01:00
Erik Johnston
2ef617bc06 Fix infinite typing bug
There's a bug somewhere that causes typing notifications to not be timed
out properly. By adding a paranoia timer and using correct inequalities,
notifications should stop being stuck, even if the root cause hasn't
been fixed.
2016-10-24 15:51:22 +01:00
Erik Johnston
e83a08d795 Merge pull request #1178 from matrix-org/erikj/current_room_token
Fix incredibly slow back pagination query
2016-10-24 13:58:28 +01:00
Erik Johnston
b6800a8ecd Actually use the new function 2016-10-24 13:39:49 +01:00
Erik Johnston
d04e2ff3a4 Fix incredibly slow back pagination query
If a client didn't specify a from token when paginating backwards,
synapse would attempt to query the (global) maximum topological token.
This a) doesn't make much sense, since topological tokens are room
specific, and b) there are no indices that let postgres do this
efficiently.
2016-10-24 13:35:51 +01:00
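Illustratively (events table columns as in Synapse's schema), the fix is to scope the query to the room:

```python
def get_max_topological_token(txn, room_id):
    # Scoped to a room, this can be answered via an index on
    # (room_id, topological_ordering) rather than a global scan.
    txn.execute(
        "SELECT coalesce(max(topological_ordering), 0) FROM events"
        " WHERE room_id = ?",
        (room_id,),
    )
    return txn.fetchone()[0]
```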
Paul Evans
a842fed418 Merge pull request #1177 from matrix-org/paul/standard-metric-names
Standardise prometheus metrics
2016-10-21 13:06:19 +01:00
Luke Barnard
e01a1bc92d Merge pull request #1175 from matrix-org/luke/feature-configurable-as-rate-limiting
Allow Configurable Rate Limiting Per AS
2016-10-20 16:21:10 +01:00
Luke Barnard
6fdd31915b Style 2016-10-20 13:53:15 +01:00
Luke Barnard
07caa749bf Closing brace on following line 2016-10-20 12:07:16 +01:00
Luke Barnard
f09db236b1 as_user->app_service, less redundant comments, better positioned comments 2016-10-20 12:04:54 +01:00
Luke Barnard
8bfd01f619 flake8 2016-10-20 11:52:46 +01:00
Luke Barnard
1b17d1a106 Use real AS object by passing it through the requester
This means synapse does not have to check if the AS is interested; instead it effectively re-uses what it already knew about the requesting user.
2016-10-20 11:43:05 +01:00
Paul "LeoNerd" Evans
b01aaadd48 Split callback metric lambda functions down onto their own lines to keep line lengths under 90 2016-10-19 18:26:13 +01:00
Paul "LeoNerd" Evans
1071c7d963 Adjust code for <100 char line limit 2016-10-19 18:23:25 +01:00
Paul "LeoNerd" Evans
6453d03edd Cut the raw /proc/self/stat line up into named fields at collection time 2016-10-19 18:21:40 +01:00
Paul "LeoNerd" Evans
3ae48a1f99 Move the process metrics collector code into its own file 2016-10-19 18:10:24 +01:00
Paul "LeoNerd" Evans
4cedd53224 A slightly neater way to manage metric collector functions 2016-10-19 17:54:09 +01:00
Paul "LeoNerd" Evans
5663137e03 appease pep8 2016-10-19 16:09:42 +01:00
Paul "LeoNerd" Evans
b202531be6 Also guard /proc/self/fds-related code with a suitable pseudoconstant 2016-10-19 15:37:41 +01:00
Paul "LeoNerd" Evans
1b179455fc Guard registration of process-wide metrics by existence of the requisite /proc entries 2016-10-19 15:34:38 +01:00
Paul "LeoNerd" Evans
981f852d54 Add standard process_start_time_seconds metric 2016-10-19 15:05:22 +01:00
Paul "LeoNerd" Evans
def63649df Add standard process_max_fds metric 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
06f1ad1625 Add standard process_open_fds metric 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
95fc70216d Add standard process_*_memory_bytes metrics 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
9b0316c75a Use /proc/self/stat to generate the new process_cpu_*_seconds_total metrics 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
03c2720940 Export CPU usage metrics also under prometheus-standard metric name 2016-10-19 15:05:21 +01:00
Paul "LeoNerd" Evans
b21b9dbc37 Callback metric values might not just be integers - allow floats 2016-10-19 15:05:15 +01:00
Erik Johnston
78c083f159 Merge pull request #1164 from pik/error-codes
Clarify Error codes for GET /filter/
2016-10-19 14:26:17 +01:00
Erik Johnston
3aa8925091 Merge pull request #1176 from matrix-org/erikj/eager_ratelimit_check
Check whether to ratelimit sooner to avoid work
2016-10-19 14:25:52 +01:00
Erik Johnston
f2f74ffce6 Comment 2016-10-19 14:21:28 +01:00
David Baker
7d2cf7e960 Merge pull request #1170 from matrix-org/dbkr/password_reset_case_insensitive
Make password reset email field case insensitive
2016-10-19 13:29:44 +01:00
David Baker
0108ed8ae6 Latest delta is now 37 2016-10-19 11:40:35 +01:00
David Baker
a7f48320b1 Merge remote-tracking branch 'origin/develop' into dbkr/password_reset_case_insensitive 2016-10-19 11:28:56 +01:00
David Baker
df2a616c7b Convert emails to lowercase when storing
And db migration sql to convert existing addresses.
2016-10-19 11:13:55 +01:00
Erik Johnston
550308c7a1 Check whether to ratelimit sooner to avoid work 2016-10-19 10:45:24 +01:00
pik
e8b1d2a452 Refactor test_filter to use real DataStore
* add tests for filter api errors
2016-10-18 12:17:38 -05:00
Luke Barnard
5b54d51d1e Allow Configurable Rate Limiting Per AS
This adds a flag loaded from the registration file of an AS that will determine whether or not its users are rate limited (by ratelimit in _base.py). Needed for IRC bridge reasons - see https://github.com/matrix-org/matrix-appservice-irc/issues/240.
2016-10-18 17:04:09 +01:00
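
For illustration, the flag sits alongside the usual fields in an AS registration file; all values below are placeholders::

    id: irc-bridge
    url: "http://localhost:9999"
    as_token: "<as_token>"
    hs_token: "<hs_token>"
    sender_localpart: irc_bot
    rate_limited: false  # exempt this AS's users from the homeserver ratelimit
    namespaces:
      users:
        - exclusive: true
          regex: "@irc_.*"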
Erik Johnston
f6955db970 Merge pull request #1174 from matrix-org/erikj/email_push_noop
Reduce redundant database work in email pusher
2016-10-18 15:27:10 +01:00
Erik Johnston
8ca05b5755 Fix push notifications for a single unread message 2016-10-18 10:57:33 +01:00
Erik Johnston
f0ca088280 Reduce redundant database work in email pusher
Update the last stream ordering if the
`get_unread_push_actions_for_user_in_range_for_email` returns no new
push actions. This reduces the range that it needs to check next
iteration.
2016-10-18 10:52:47 +01:00
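
In outline the optimisation looks like this (a sketch; the real pusher code and the storage method's exact signature differ)::

    def process_once(store, pusher, up_to):
        actions = store.get_unread_push_actions_for_user_in_range_for_email(
            pusher.user_id, pusher.last_stream_ordering, up_to
        )
        if not actions:
            # Nothing to email, but still advance the cursor: the next
            # iteration then scans only rows after `up_to` instead of
            # re-checking the same empty range.
            pusher.last_stream_ordering = up_to
            return
        # ... otherwise build and send the digest, then advance as usual ...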
Erik Johnston
50ac1d843d Merge branch 'release-v0.18.2' of github.com:matrix-org/synapse into develop 2016-10-18 09:41:53 +01:00
Erik Johnston
513e600f63 Update changelog 2016-10-17 17:05:59 +01:00
Erik Johnston
b95dbdcba4 Bump version and changelog 2016-10-17 15:59:12 +01:00
Erik Johnston
927a67ee1a Merge pull request #1113 from matrix-org/erikj/remove_auth
Remove redundant event_auth index
2016-10-17 14:28:10 +01:00
Erik Johnston
6942d68247 Bump schema version 2016-10-17 11:17:45 +01:00
Erik Johnston
b59994b454 Remove TODO 2016-10-17 11:17:02 +01:00
Erik Johnston
816988baaa Merge branch 'develop' of github.com:matrix-org/synapse into erikj/remove_auth 2016-10-17 11:10:37 +01:00
Erik Johnston
2869a29fd7 Drop some unused indices 2016-10-17 11:08:19 +01:00
pik
d43b63818c Fix MockHttpRequest always returning M_UNKNOWN errcode in testing 2016-10-14 15:46:54 -05:00
Erik Johnston
a68ade6ed3 Merge pull request #1162 from larroy/master
Use sys.executable instead of hardcoded python. fixes #1161
2016-10-14 21:42:55 +01:00
David Baker
29c5922021 Revert part of 6207399
older sqlite doesn't support indexes on expressions; let's just
store things lowercase in the db
2016-10-14 16:20:24 +01:00
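
The resulting approach, sketched as delta SQL (illustrative, not the literal schema delta)::

    -- Old sqlite cannot index an expression such as lower(address), so
    -- normalise on write and fold existing rows to lowercase in place.
    UPDATE user_threepids SET address = LOWER(address) WHERE medium = 'email';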
Alexander Maznev
d9350b0db8 Error codes for filters
* add tests

Signed-off-by: Alexander Maznev <alexander.maznev@gmail.com>
2016-10-14 10:18:28 -05:00
David Baker
bcb1245a2d Merge remote-tracking branch 'origin/develop' into dbkr/password_reset_case_insensitive 2016-10-14 15:10:38 +01:00
David Baker
62073992c5 Make password reset email field case insensitive 2016-10-14 13:56:53 +01:00
Erik Johnston
0393c4203c Merge pull request #1169 from matrix-org/erikj/fix_email_notifs
Fix email push notifs being dropped
2016-10-14 10:38:29 +01:00
Erik Johnston
6f7540ada4 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fix_email_notifs 2016-10-14 10:22:43 +01:00
Erik Johnston
1d107d8484 Fix email push notifs being dropped
A lot of email push notifications were failing to be sent due to an
exception being thrown along one of the (many) paths. This was due to a
change where we moved from pulling out the full state for each room, but
rather pulled out the event ids for the state and separately loaded the
full events when needed.
2016-10-13 13:40:38 +01:00
Richard van der Hoff
f7aed3d7a2 Merge pull request #1168 from matrix-org/rav/ui_auth_on_device_delete
User-interactive auth on delete device
2016-10-13 09:38:41 +01:00
Richard van der Hoff
9009143fb9 Handle delete device requests with no body
We should probably return a 401 rather than a 400 for existing clients that
don't know they have to do the UIA dance to delete a device.
2016-10-12 18:47:28 +01:00
Richard van der Hoff
fbd3866bc6 User-interactive auth on delete device 2016-10-12 16:16:31 +01:00
Mark Haines
9e18e0b1cb Merge pull request #1167 from matrix-org/markjh/fingerprints
Add config option for adding additional TLS fingerprints
2016-10-12 15:27:44 +01:00
Mark Haines
c61ddeedac Explain how long the servers can cache the TLS fingerprints for 2016-10-12 14:48:24 +01:00
Mark Haines
0af6213019 Improve comment formatting 2016-10-12 14:45:13 +01:00
Erik Johnston
35e2cc8b52 Merge pull request #1155 from matrix-org/erikj/pluggable_pwd_auth
Implement pluggable password auth
2016-10-12 11:41:20 +01:00
Mark Haines
6e9f3ab415 Add config option for adding additional TLS fingerprints 2016-10-11 19:14:46 +01:00
Erik Johnston
e641115421 Merge pull request #1141 from matrix-org/erikj/replication_noop
Reduce DB hits for replication
2016-10-11 15:41:55 +01:00
Erik Johnston
3061dac53e Merge branch 'develop' of github.com:matrix-org/synapse into erikj/replication_noop 2016-10-11 14:08:29 +01:00
Erik Johnston
668f91d707 Fix check of wrong variable 2016-10-11 13:57:22 +01:00
Richard van der Hoff
0061e8744f Merge pull request #1166 from matrix-org/rav/grandfather_broken_riot_signup
Work around email-spamming Riot bug
2016-10-11 11:58:58 +01:00
Richard van der Hoff
fa74fcf512 Work around email-spamming Riot bug
5d9546f9 introduced a change to synapse behaviour, in that failures in the
interactive-auth process would return the flows and params data as well as an
error code (as specced in https://github.com/matrix-org/matrix-doc/pull/397).

That change exposed a bug in Riot which would make it request a new validation
token (and send a new email) each time it got a 401 with a `flows` parameter
(see https://github.com/vector-im/vector-web/issues/2447 and the fix at
https://github.com/matrix-org/matrix-react-sdk/pull/510).

To preserve compatibility with broken versions of Riot, grandfather in the old
behaviour for the email validation stage.
2016-10-11 11:34:40 +01:00
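
For context, an interactive-auth failure body has roughly this shape (per the spec change referenced above; values are placeholders), and it was the presence of ``flows`` in a 401 that sent old Riots into a loop::

    {
        "session": "xxxxxx",
        "flows": [
            {"stages": ["m.login.email.identity"]}
        ],
        "params": {},
        "completed": [],
        "errcode": "M_UNAUTHORIZED",
        "error": "Invalid password"
    }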
Erik Johnston
a2f2516199 Merge pull request #1157 from Rugvip/nolimit
Remove rate limiting from app service senders and fix get_or_create_user requester
2016-10-11 11:20:54 +01:00
Erik Johnston
a940618c94 Merge pull request #1150 from Rugvip/state_key
api/auth: fix for not being allowed to set your own state_key
2016-10-11 11:19:55 +01:00
Pedro Larroy
c57f871184 Use sys.executable instead of hardcoded python. fixes #1161 2016-10-08 23:55:20 +02:00
Richard van der Hoff
8681aff4f1 Merge pull request #1160 from matrix-org/rav/401_on_password_fail
Interactive Auth: Return 401 for incorrect password
2016-10-07 10:57:43 +01:00
Richard van der Hoff
5d9546f9f4 Interactive Auth: Return 401 for incorrect password
This requires a bit of fettling, because I want to return a helpful error
message too but we don't want to distinguish between unknown user and invalid
password. To avoid hardcoding the error message into 15 places in the code,
I've had to refactor a few methods to return None instead of throwing.

Fixes https://matrix.org/jira/browse/SYN-744
2016-10-07 00:00:00 +01:00
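
The refactoring pattern described, in miniature (all names illustrative)::

    class LoginError(Exception):
        """Illustrative stand-in for synapse's login error type."""
        def __init__(self, code, msg):
            super(LoginError, self).__init__(msg)
            self.code = code

    def check_password(user_id, password):
        """Return the canonical user_id on success, or None on failure.

        Returning None instead of raising lets every caller share one
        place that produces the generic error below.
        """
        known = {"@alice:example.com": "s3cret"}  # stand-in user database
        return user_id if known.get(user_id) == password else None

    def login(user_id, password):
        canonical = check_password(user_id, password)
        if canonical is None:
            # The same message for "no such user" and "wrong password",
            # so the two cases cannot be told apart by an attacker.
            raise LoginError(401, "Invalid password")
        return canonical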
Patrik Oldsberg
7b5546d077 rest/client/v1/register: use the correct requester in createUser
Signed-off-by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
2016-10-06 22:12:32 +02:00
Richard van der Hoff
5d34e32d42 Merge pull request #1159 from matrix-org/rav/uia_fallback_postmessage
window.postmessage for Interactive Auth fallback
2016-10-06 19:56:43 +01:00
Richard van der Hoff
f382117852 window.postmessage for Interactive Auth fallback
If you're a webapp running the fallback in an iframe, you can't set a
window.onAuthDone function. Let's post a message back to window.opener instead.
2016-10-06 18:16:59 +01:00
Patrik Oldsberg
3de7c8a4d0 handlers/profile: added admin override for set_displayname and set_avatar_url
Signed-off-by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
2016-10-06 15:24:59 +02:00
Patrik Oldsberg
2ff2d36b80 handlers: do not ratelimit app service senders
Signed-off-by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
2016-10-06 15:24:59 +02:00
Patrik Oldsberg
9bfc617791 storage/appservice: make appservice methods only relying on the cache synchronous 2016-10-06 15:24:59 +02:00
Erik Johnston
503c0ab78b Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-10-05 17:06:55 +01:00
Erik Johnston
e779ee0ee2 Merge branch 'release-v0.18.1' of github.com:matrix-org/synapse 2016-10-05 14:41:07 +01:00
Erik Johnston
4285be791d Bump changelog and version 2016-10-05 14:40:38 +01:00
David Baker
b5665f7516 Merge pull request #1156 from matrix-org/dbkr/email_notifs_add_riot_brand
Add Riot brand to email notifs
2016-10-04 12:47:05 +01:00
David Baker
6d3513740d Add Riot brand to email notifs 2016-10-04 10:20:21 +01:00
Erik Johnston
850b103b36 Implement pluggable password auth
Allows delegating the password auth to an external module. This also
moves the LDAP auth to using this system, allowing it to be removed from
the synapse tree entirely in the future.
2016-10-03 10:36:40 +01:00
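
A minimal sketch of what such an external module could look like (the hook names here follow synapse's later password-provider interface and should be treated as illustrative)::

    class ExamplePasswordProvider(object):
        """Sketch of an external password-auth module."""

        def __init__(self, config, account_handler):
            self.account_handler = account_handler
            self.users = {"@alice:example.com": "s3cret"}  # stand-in backend

        @staticmethod
        def parse_config(config):
            return config

        def check_password(self, user_id, password):
            # True accepts the login; False falls through to the next
            # provider (or synapse's own password database).
            return self.users.get(user_id) == password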
Erik Johnston
21185e3e8a Update changelog 2016-09-30 14:11:40 +01:00
Patrik Oldsberg
24a70e19c7 api/auth: fix for not being allowed to set your own state_key
Signed-off-by: Patrik Oldsberg <patrik.oldsberg@ericsson.com>
2016-09-30 13:08:25 +02:00
Erik Johnston
04aa2f2863 Bump version and changelog 2016-09-30 10:34:57 +01:00
Erik Johnston
f7bcdbe56c Merge pull request #1153 from matrix-org/erikj/ldap_restructure
Restructure ldap authentication
2016-09-30 09:17:25 +01:00
Martin Weinelt
3027ea22b0 Restructure ldap authentication
- properly parse return values of ldap bind() calls
- externalize authentication methods
- change control flow to be more error-resilient
- unbind ldap connections in many places
- improve log messages and loglevels
2016-09-29 15:30:08 +01:00
Erik Johnston
5875a65253 Merge pull request #1145 from matrix-org/erikj/fix_reindex
Fix background reindex of origin_server_ts
2016-09-29 13:53:48 +01:00
Erik Johnston
36d621201b Merge pull request #1146 from matrix-org/erikj/port_script_fix
Update port script with recently added tables
2016-09-27 11:54:51 +01:00
Erik Johnston
9040c9ffa1 Fix background reindex of origin_server_ts
The storage function `_get_events_txn` was removed everywhere except
from this background reindex. The function was removed due to it being
(almost) completely unused while also being large and complex.
Therefore, instead of resurrecting `_get_events_txn` we manually
reimplement the bits that are needed directly.
2016-09-27 11:23:49 +01:00
Erik Johnston
4a18127917 Merge pull request #1144 from matrix-org/erikj/sqlite_state_perf
Fix perf of fetching state in SQLite
2016-09-27 11:22:45 +01:00
Erik Johnston
adae348fdf Update port script with recently added tables
This also fixes a bug where the port script would explode when it
encountered the newly added boolean column
`public_room_list_stream.visibility`
2016-09-27 11:19:48 +01:00
Erik Johnston
4974147aa3 Remove duplication 2016-09-27 09:27:54 +01:00
Erik Johnston
13122e5e24 Remove unused variable 2016-09-27 09:21:51 +01:00
Erik Johnston
cf3e1cc200 Fix perf of fetching state in SQLite 2016-09-26 17:16:24 +01:00
Erik Johnston
a38d46249e Merge pull request #1140 from matrix-org/erikj/typing_fed_timeout
Time out typing over federation
2016-09-26 11:24:14 +01:00
Matthew Hodgson
aab6a31c96 typo 2016-09-25 14:08:58 +01:00
Erik Johnston
748d8fdc7b Reduce DB hits for replication
Some streams will occasionally advance their positions without actually
having any new rows to send over federation. Currently this means that
the token will not advance on the workers, leading to them repeatedly
sending a slightly out of date token. This in turn requires the master
to hit the DB to check if there are any new rows, rather than hitting
the no op logic where we check if the given token matches the current
token.

This commit changes the API to always return an entry if the position
for a stream has changed, allowing workers to advance their tokens
correctly.
2016-09-23 16:49:21 +01:00
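
The API change in outline (a sketch against an assumed stream interface)::

    def get_updates(stream, from_token):
        rows = stream.fetch_rows_since(from_token)  # assumed interface
        current = stream.current_token()
        if not rows and current == from_token:
            return None  # genuinely nothing to report
        # Return an entry whenever the position has moved, even with no
        # rows, so workers advance their token instead of repeatedly
        # sending a stale one and forcing the master to re-check the DB.
        return {"position": current, "rows": rows}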
Erik Johnston
655891d179 Move FEDERATION_PING_INTERVAL timer. Update log line 2016-09-23 15:43:34 +01:00
Erik Johnston
4225a97f4e Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-09-23 15:36:59 +01:00
Erik Johnston
22578545a0 Time out typing over federation 2016-09-23 14:00:52 +01:00
Erik Johnston
667fcd54e8 Merge pull request #1136 from matrix-org/erikj/fix_signed_3pid
Allow invites via 3pid to bypass sender sig check
2016-09-22 13:41:49 +01:00
Erik Johnston
f96020550f Update comments 2016-09-22 12:54:22 +01:00
Erik Johnston
81964aeb90 Merge pull request #1132 from matrix-org/erikj/initial_sync_split
Support /initialSync in synchrotron worker
2016-09-22 12:45:02 +01:00
Erik Johnston
2e9ee30969 Add comments 2016-09-22 11:59:46 +01:00
Erik Johnston
a61e4522b5 Shuffle things around to make unit tests work 2016-09-22 11:08:12 +01:00
Erik Johnston
1168cbd54d Allow invites via 3pid to bypass sender sig check
When a server sends a third party invite another server may be the one
that the invited user registers with. In this case it is that remote
server that will issue an actual invitation, and wants to do it "in the
name of" the original inviter. However, the new proper invite will not
be signed by the original server, and thus other servers would reject
the invite if it was seen as coming from the original user.

To fix this, a special case has been added to the auth rules whereby
another server can send an invite "in the name of" another server's
user, so long as that user had previously issued a third party invite
that is now being accepted.
2016-09-22 10:56:53 +01:00
Erik Johnston
bbc0d9617f Merge pull request #1134 from matrix-org/erikj/fix_stream_public_deletion
Fix _delete_old_forward_extrem_cache query
2016-09-21 17:04:04 +01:00
Erik Johnston
8009d84364 Match against event_id, rather than room_id 2016-09-21 16:46:59 +01:00
Erik Johnston
dc692556d6 Remove spurious AS clause 2016-09-21 16:28:47 +01:00
Erik Johnston
dc78db8c56 Update correct table 2016-09-21 15:52:44 +01:00
Erik Johnston
4f78108d8c Readd entries to public_room_list_stream that were deleted 2016-09-21 15:24:22 +01:00
Erik Johnston
0b78d8adf2 Fix _delete_old_forward_extrem_cache query 2016-09-21 15:20:56 +01:00
Erik Johnston
85827eef2d Merge pull request #1133 from matrix-org/erikj/public_room_count
Add total_room_count_estimate to /publicRooms
2016-09-21 13:51:52 +01:00
Erik Johnston
90c070c850 Add total_room_count_estimate to /publicRooms 2016-09-21 13:30:05 +01:00
Erik Johnston
87528f0756 Support /initialSync in synchrotron worker 2016-09-21 11:46:28 +01:00
Erik Johnston
43253c10b8 Remove redundant event_auth index 2016-09-13 11:47:48 +01:00
Mark Haines
4a32d25d4c Merge branch 'develop' into markjh/bearer_token 2016-09-12 11:14:56 +01:00
Mark Haines
ec609f8094 Fix unit tests 2016-09-12 10:46:02 +01:00
Mark Haines
3ddec016ff Merge branch 'develop' into markjh/bearer_token 2016-09-09 18:51:22 +01:00
Mark Haines
8e01263587 Allow clients to supply access_tokens as headers
Clients can continue to supply access tokens as query parameters
or can supply the token as a header:

   Authorization: Bearer <access_token_goes_here>

This matches the OAuth2 format of
https://tools.ietf.org/html/rfc6750#section-2.1
2016-09-09 18:17:42 +01:00
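
For illustration, the two equivalent ways a client can now authenticate (homeserver, endpoint and token below are placeholders)::

    import requests

    HS = "https://matrix.example.com"      # placeholder homeserver
    TOKEN = "MDAxOexampleaccesstoken"      # placeholder access token
    URL = HS + "/_matrix/client/r0/account/whoami"  # any authed endpoint

    # Old style: access token as a query parameter
    r1 = requests.get(URL, params={"access_token": TOKEN})

    # New style: RFC 6750 bearer token in the Authorization header
    r2 = requests.get(URL, headers={"Authorization": "Bearer " + TOKEN})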
Erik Johnston
3bb3f02517 Enable guest access for private rooms by default 2016-03-17 16:23:53 +00:00
374 changed files with 34566 additions and 10035 deletions

.github/ISSUE_TEMPLATE.md (new file)

@@ -0,0 +1,47 @@
<!--
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
You will likely get better support more quickly if you ask in ** #matrix:matrix.org ** ;)
This is a bug report template. By following the instructions below and
filling out the sections with your information, you will help us to get all
the necessary data to fix your issue.
You can also preview your report before submitting it. You may remove sections
that aren't relevant to your particular case.
Text between <!-- and --> marks will be invisible in the report.
-->
### Description
Describe here the problem that you are experiencing, or the feature you are requesting.
### Steps to reproduce
- For bugs, list the steps
- that reproduce the bug
- using hyphens as bullet points
Describe how what happens differs from what you expected.
If you can identify any relevant log snippets from _homeserver.log_, please include
those here (please be careful to remove any personal or private data):
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Homeserver**: Was this issue identified on matrix.org or another homeserver?
If not matrix.org:
- **Version**: What version of Synapse is running? <!--
You can find the Synapse version by inspecting the server headers (replace matrix.org with
your own homeserver domain):
$ curl -v https://matrix.org/_matrix/client/versions 2>&1 | grep "Server:"
-->
- **Install method**: package manager/git clone/pip
- **Platform**: Tell us about the environment in which your homeserver is operating
- distro, hardware, if it's running in a vm/container, etc.

.gitignore

@@ -24,10 +24,10 @@ homeserver*.yaml
.coverage
htmlcov
demo/*.db
demo/*.log
demo/*.log.*
demo/*.pid
demo/*/*.db
demo/*/*.log
demo/*/*.log.*
demo/*/*.pid
demo/media_store.*
demo/etc

.travis.yml (new file)

@@ -0,0 +1,17 @@
sudo: false
language: python
python: 2.7
# tell travis to cache ~/.cache/pip
cache: pip
env:
- TOX_ENV=packaging
- TOX_ENV=pep8
- TOX_ENV=py27
install:
- pip install tox
script:
- tox -e $TOX_ENV


@@ -1,3 +1,808 @@
Changes in synapse v0.25.1 (2017-11-17)
=======================================
Bug fixes:
* Fix login with LDAP and other password provider modules (PR #2678). Thanks to
@jkolo!
Changes in synapse v0.25.0 (2017-11-15)
=======================================
Bug fixes:
* Fix port script (PR #2673)
Changes in synapse v0.25.0-rc1 (2017-11-14)
===========================================
Features:
* Add is_public to groups table to allow for private groups (PR #2582)
* Add a route for determining who you are (PR #2668) Thanks to @turt2live!
* Add more features to the password providers (PR #2608, #2610, #2620, #2622,
#2623, #2624, #2626, #2628, #2629)
* Add a hook for custom rest endpoints (PR #2627)
* Add API to update group room visibility (PR #2651)
Changes:
* Ignore <noscript> tags when generating URL preview descriptions (PR #2576)
Thanks to @maximevaillancourt!
* Register some /unstable endpoints in /r0 as well (PR #2579) Thanks to
@krombel!
* Support /keys/upload on /r0 as well as /unstable (PR #2585)
* Front-end proxy: pass through auth header (PR #2586)
* Allow ASes to deactivate their own users (PR #2589)
* Remove refresh tokens (PR #2613)
* Automatically set default displayname on register (PR #2617)
* Log login requests (PR #2618)
* Always return `is_public` in the `/groups/:group_id/rooms` API (PR #2630)
* Avoid no-op media deletes (PR #2637) Thanks to @spantaleev!
* Fix various embarrassing typos around user_directory and add some doc. (PR
#2643)
* Return whether a user is an admin within a group (PR #2647)
* Namespace visibility options for groups (PR #2657)
* Downcase UserIDs on registration (PR #2662)
* Cache failures when fetching URL previews (PR #2669)
Bug fixes:
* Fix port script (PR #2577)
* Fix error when running synapse with no logfile (PR #2581)
* Fix UI auth when deleting devices (PR #2591)
* Fix typo when checking if user is invited to group (PR #2599)
* Fix the port script to drop NUL values in all tables (PR #2611)
* Fix appservices being backlogged and not receiving new events due to a bug in
notify_interested_services (PR #2631) Thanks to @xyzz!
* Fix updating rooms avatar/display name when modified by admin (PR #2636)
Thanks to @farialima!
* Fix bug in state group storage (PR #2649)
* Fix 500 on invalid utf-8 in request (PR #2663)
Changes in synapse v0.24.1 (2017-10-24)
=======================================
Bug fixes:
* Fix updating group profiles over federation (PR #2567)
Changes in synapse v0.24.0 (2017-10-23)
=======================================
No changes since v0.24.0-rc1
Changes in synapse v0.24.0-rc1 (2017-10-19)
===========================================
Features:
* Add Group Server (PR #2352, #2363, #2374, #2377, #2378, #2382, #2410, #2426,
#2430, #2454, #2471, #2472, #2544)
* Add support for channel notifications (PR #2501)
* Add basic implementation of backup media store (PR #2538)
* Add config option to auto-join new users to rooms (PR #2545)
Changes:
* Make the spam checker a module (PR #2474)
* Delete expired url cache data (PR #2478)
* Ignore incoming events for rooms that we have left (PR #2490)
* Allow spam checker to reject invites too (PR #2492)
* Add room creation checks to spam checker (PR #2495)
* Spam checking: add the invitee to user_may_invite (PR #2502)
* Process events from federation for different rooms in parallel (PR #2520)
* Allow error strings from spam checker (PR #2531)
* Improve error handling for missing files in config (PR #2551)
Bug fixes:
* Fix handling SERVFAILs when doing AAAA lookups for federation (PR #2477)
* Fix incompatibility with newer versions of ujson (PR #2483) Thanks to
@jeremycline!
* Fix notification keywords that start/end with non-word chars (PR #2500)
* Fix stack overflow and logcontexts from linearizer (PR #2532)
* Fix 500 error when fields missing from power_levels event (PR #2552)
* Fix 500 error when we get an error handling a PDU (PR #2553)
Changes in synapse v0.23.1 (2017-10-02)
=======================================
Changes:
* Make 'affinity' package optional, as it is not supported on some platforms
Changes in synapse v0.23.0 (2017-10-02)
=======================================
No changes since v0.23.0-rc2
Changes in synapse v0.23.0-rc2 (2017-09-26)
===========================================
Bug fixes:
* Fix regression in performance of syncs (PR #2470)
Changes in synapse v0.23.0-rc1 (2017-09-25)
===========================================
Features:
* Add a frontend proxy worker (PR #2344)
* Add support for event_id_only push format (PR #2450)
* Add a PoC for filtering spammy events (PR #2456)
* Add a config option to block all room invites (PR #2457)
Changes:
* Use bcrypt module instead of py-bcrypt (PR #2288) Thanks to @kyrias!
* Improve performance of generating push notifications (PR #2343, #2357, #2365,
#2366, #2371)
* Improve DB performance for device list handling in sync (PR #2362)
* Include a sample prometheus config (PR #2416)
* Document known to work postgres version (PR #2433) Thanks to @ptman!
Bug fixes:
* Fix caching error in the push evaluator (PR #2332)
* Fix bug where pusherpool didn't start and broke some rooms (PR #2342)
* Fix port script for user directory tables (PR #2375)
* Fix device lists notifications when user rejoins a room (PR #2443, #2449)
* Fix sync to always send down current state events in timeline (PR #2451)
* Fix bug where guest users were incorrectly kicked (PR #2453)
* Fix bug talking to IPv6 only servers using SRV records (PR #2462)
Changes in synapse v0.22.1 (2017-07-06)
=======================================
Bug fixes:
* Fix bug where pusher pool didn't start and caused issues when
interacting with some rooms (PR #2342)
Changes in synapse v0.22.0 (2017-07-06)
=======================================
No changes since v0.22.0-rc2
Changes in synapse v0.22.0-rc2 (2017-07-04)
===========================================
Changes:
* Improve performance of storing user IPs (PR #2307, #2308)
* Slightly improve performance of verifying access tokens (PR #2320)
* Slightly improve performance of event persistence (PR #2321)
* Increase default cache factor size from 0.1 to 0.5 (PR #2330)
Bug fixes:
* Fix bug with storing registration sessions that caused frequent CPU churn
(PR #2319)
Changes in synapse v0.22.0-rc1 (2017-06-26)
===========================================
Features:
* Add a user directory API (PR #2252, and many more)
* Add shutdown room API to remove room from local server (PR #2291)
* Add API to quarantine media (PR #2292)
* Add new config option to not send event contents to push servers (PR #2301)
Thanks to @cjdelisle!
Changes:
* Various performance fixes (PR #2177, #2233, #2230, #2238, #2248, #2256,
#2274)
* Deduplicate sync filters (PR #2219) Thanks to @krombel!
* Correct a typo in UPGRADE.rst (PR #2231) Thanks to @aaronraimist!
* Add count of one time keys to sync stream (PR #2237)
* Only store event_auth for state events (PR #2247)
* Store URL cache preview downloads separately (PR #2299)
Bug fixes:
* Fix users not getting notifications when AS listened to that user_id (PR
#2216) Thanks to @slipeer!
* Fix users without push set up not getting notifications after joining rooms
(PR #2236)
* Fix preview url API to trim long descriptions (PR #2243)
* Fix bug where we used cached but unpersisted state group as prev group,
resulting in broken state of restart (PR #2263)
* Fix removing of pushers when using workers (PR #2267)
* Fix CORS headers to allow Authorization header (PR #2285) Thanks to @krombel!
Changes in synapse v0.21.1 (2017-06-15)
=======================================
Bug fixes:
* Fix bug in anonymous usage statistic reporting (PR #2281)
Changes in synapse v0.21.0 (2017-05-18)
=======================================
No changes since v0.21.0-rc3
Changes in synapse v0.21.0-rc3 (2017-05-17)
===========================================
Features:
* Add per user rate-limiting overrides (PR #2208)
* Add config option to limit maximum number of events requested by ``/sync``
and ``/messages`` (PR #2221) Thanks to @psaavedra!
Changes:
* Various small performance fixes (PR #2201, #2202, #2224, #2226, #2227, #2228,
#2229)
* Update username availability checker API (PR #2209, #2213)
* When purging, don't de-delta state groups we're about to delete (PR #2214)
* Documentation to check synapse version (PR #2215) Thanks to @hamber-dick!
* Add an index to event_search to speed up purge history API (PR #2218)
Bug fixes:
* Fix API to allow clients to upload one-time-keys with new sigs (PR #2206)
Changes in synapse v0.21.0-rc2 (2017-05-08)
===========================================
Changes:
* Always mark remotes as up if we receive a signed request from them (PR #2190)
Bug fixes:
* Fix bug where users got pushed for rooms they had muted (PR #2200)
Changes in synapse v0.21.0-rc1 (2017-05-08)
===========================================
Features:
* Add username availability checker API (PR #2183)
* Add read marker API (PR #2120)
Changes:
* Enable guest access for the 3pl/3pid APIs (PR #1986)
* Add setting to support TURN for guests (PR #2011)
* Various performance improvements (PR #2075, #2076, #2080, #2083, #2108,
#2158, #2176, #2185)
* Make synctl a bit more user friendly (PR #2078, #2127) Thanks @APwhitehat!
* Replace HTTP replication with TCP replication (PR #2082, #2097, #2098,
#2099, #2103, #2014, #2016, #2115, #2116, #2117)
* Support authenticated SMTP (PR #2102) Thanks @DanielDent!
* Add a counter metric for successfully-sent transactions (PR #2121)
* Propagate errors sensibly from proxied IS requests (PR #2147)
* Add more granular event send metrics (PR #2178)
Bug fixes:
* Fix nuke-room script to work with current schema (PR #1927) Thanks
@zuckschwerdt!
* Fix db port script to not assume postgres tables are in the public schema
(PR #2024) Thanks @jerrykan!
* Fix getting latest device IP for user with no devices (PR #2118)
* Fix rejection of invites to unreachable servers (PR #2145)
* Fix code for reporting old verify keys in synapse (PR #2156)
* Fix invite state to always include all events (PR #2163)
* Fix bug where synapse would always fetch state for any missing event (PR #2170)
* Fix a leak with timed out HTTP connections (PR #2180)
* Fix bug where we didn't time out HTTP requests to ASes (PR #2192)
Docs:
* Clarify doc for SQLite to PostgreSQL port (PR #1961) Thanks @benhylau!
* Fix typo in synctl help (PR #2107) Thanks @HarHarLinks!
* ``web_client_location`` documentation fix (PR #2131) Thanks @matthewjwolff!
* Update README.rst with FreeBSD changes (PR #2132) Thanks @feld!
* Clarify setting up metrics (PR #2149) Thanks @encks!
Changes in synapse v0.20.0 (2017-04-11)
=======================================
Bug fixes:
* Fix joining rooms over federation where not all servers in the room saw the
new server had joined (PR #2094)
Changes in synapse v0.20.0-rc1 (2017-03-30)
===========================================
Features:
* Add delete_devices API (PR #1993)
* Add phone number registration/login support (PR #1994, #2055)
Changes:
* Use JSONSchema for validation of filters. Thanks @pik! (PR #1783)
* Reread log config on SIGHUP (PR #1982)
* Speed up public room list (PR #1989)
* Add helpful texts to logger config options (PR #1990)
* Minor ``/sync`` performance improvements. (PR #2002, #2013, #2022)
* Add some debug to help diagnose weird federation issue (PR #2035)
* Correctly limit retries for all federation requests (PR #2050, #2061)
* Don't lock table when persisting new one time keys (PR #2053)
* Reduce some CPU work on DB threads (PR #2054)
* Cache hosts in room (PR #2060)
* Batch sending of device list pokes (PR #2063)
* Speed up persist event path in certain edge cases (PR #2070)
Bug fixes:
* Fix bug where current_state_events renamed to current_state_ids (PR #1849)
* Fix routing loop when fetching remote media (PR #1992)
* Fix current_state_events table to not lie (PR #1996)
* Fix CAS login to handle PartialDownloadError (PR #1997)
* Fix assertion to stop transaction queue getting wedged (PR #2010)
* Fix presence to fallback to last_active_ts if it beats the last sync time.
Thanks @Half-Shot! (PR #2014)
* Fix bug when federation received a PDU while a room join is in progress (PR
#2016)
* Fix resetting state on rejected events (PR #2025)
* Fix installation issues in readme. Thanks @ricco386 (PR #2037)
* Fix caching of remote servers' signature keys (PR #2042)
* Fix some leaking log context (PR #2048, #2049, #2057, #2058)
* Fix rejection of invites not reaching sync (PR #2056)
Changes in synapse v0.19.3 (2017-03-20)
=======================================
No changes since v0.19.3-rc2
Changes in synapse v0.19.3-rc2 (2017-03-13)
===========================================
Bug fixes:
* Fix bug in handling of incoming device list updates over federation.
Changes in synapse v0.19.3-rc1 (2017-03-08)
===========================================
Features:
* Add some administration functionalities. Thanks to morteza-araby! (PR #1784)
Changes:
* Reduce database table sizes (PR #1873, #1916, #1923, #1963)
* Update contrib/ to not use syutil. Thanks to andrewshadura! (PR #1907)
* Don't fetch current state when sending an event in common case (PR #1955)
Bug fixes:
* Fix synapse_port_db failure. Thanks to Pneumaticat! (PR #1904)
* Fix caching to not cache error responses (PR #1913)
* Fix APIs to make kick & ban reasons work (PR #1917)
* Fix bugs in the /keys/changes api (PR #1921)
* Fix bug where users couldn't forget rooms they were banned from (PR #1922)
* Fix issue with long language values in pushers API (PR #1925)
* Fix a race in transaction queue (PR #1930)
* Fix dynamic thumbnailing to preserve aspect ratio. Thanks to jkolo! (PR
#1945)
* Fix device list update to not constantly resync (PR #1964)
* Fix potential for huge memory usage when getting devices that have
changed (PR #1969)
Changes in synapse v0.19.2 (2017-02-20)
=======================================
* Fix bug with event visibility check in /context/ API. Thanks to Tokodomo for
pointing it out! (PR #1929)
Changes in synapse v0.19.1 (2017-02-09)
=======================================
* Fix bug where state was incorrectly reset in a room when synapse received an
event over federation that did not pass auth checks (PR #1892)
Changes in synapse v0.19.0 (2017-02-04)
=======================================
No changes since RC 4.
Changes in synapse v0.19.0-rc4 (2017-02-02)
===========================================
* Bump cache sizes for common membership queries (PR #1879)
Changes in synapse v0.19.0-rc3 (2017-02-02)
===========================================
* Fix email push in pusher worker (PR #1875)
* Make presence.get_new_events a bit faster (PR #1876)
* Make /keys/changes a bit more performant (PR #1877)
Changes in synapse v0.19.0-rc2 (2017-02-02)
===========================================
* Include newly joined users in /keys/changes API (PR #1872)
Changes in synapse v0.19.0-rc1 (2017-02-02)
===========================================
Features:
* Add support for specifying multiple bind addresses (PR #1709, #1712, #1795,
#1835). Thanks to @kyrias!
* Add /account/3pid/delete endpoint (PR #1714)
* Add config option to configure the Riot URL used in notification emails (PR
#1811). Thanks to @aperezdc!
* Add username and password config options for turn server (PR #1832). Thanks
to @xsteadfastx!
* Implement device lists updates over federation (PR #1857, #1861, #1864)
* Implement /keys/changes (PR #1869, #1872)
Changes:
* Improve IPv6 support (PR #1696). Thanks to @kyrias and @glyph!
* Log which files we saved attachments to in the media_repository (PR #1791)
* Linearize updates to membership via PUT /state/ to better handle multiple
joins (PR #1787)
* Limit number of entries to prefill from cache on startup (PR #1792)
* Remove full_twisted_stacktraces option (PR #1802)
* Measure size of some caches by sum of the size of cached values (PR #1815)
* Measure metrics of string_cache (PR #1821)
* Reduce logging verbosity (PR #1822, #1823, #1824)
* Don't clobber a displayname or avatar_url if provided by an m.room.member
event (PR #1852)
* Better handle 401/404 response for federation /send/ (PR #1866, #1871)
Fixes:
* Fix ability to change password to a non-ascii one (PR #1711)
* Fix push getting stuck due to looking at the wrong view of state (PR #1820)
* Fix email address comparison to be case insensitive (PR #1827)
* Fix occasional inconsistencies of room membership (PR #1836, #1840)
Performance:
* Don't block messages sending on bumping presence (PR #1789)
* Change device_inbox stream index to include user (PR #1793)
* Optimise state resolution (PR #1818)
* Use DB cache of joined users for presence (PR #1862)
* Add an index to make membership queries faster (PR #1867)
Changes in synapse v0.18.7 (2017-01-09)
=======================================
No changes from v0.18.7-rc2
Changes in synapse v0.18.7-rc2 (2017-01-07)
===========================================
Bug fixes:
* Fix error in rc1's discarding invalid inbound traffic logic that was
incorrectly discarding missing events
Changes in synapse v0.18.7-rc1 (2017-01-06)
===========================================
Bug fixes:
* Fix error in PR #1764 to actually fix the nightmare #1753 bug.
* Improve deadlock logging further
* Discard inbound federation traffic from invalid domains, to immunise
against #1753
Changes in synapse v0.18.6 (2017-01-06)
=======================================
Bug fixes:
* Fix bug when checking if a guest user is allowed to join a room (PR #1772)
Thanks to Patrik Oldsberg for diagnosing and the fix!
Changes in synapse v0.18.6-rc3 (2017-01-05)
===========================================
Bug fixes:
* Fix bug where we failed to send ban events to the banned server (PR #1758)
* Fix bug where we sent event that didn't originate on this server to
other servers (PR #1764)
* Fix bug where processing an event from a remote server took a long time
because we were making long HTTP requests (PR #1765, PR #1744)
Changes:
* Improve logging for debugging deadlocks (PR #1766, PR #1767)
Changes in synapse v0.18.6-rc2 (2016-12-30)
===========================================
Bug fixes:
* Fix memory leak in twisted by initialising logging correctly (PR #1731)
* Fix bug where fetching missing events took an unacceptable amount of time in
large rooms (PR #1734)
Changes in synapse v0.18.6-rc1 (2016-12-29)
===========================================
Bug fixes:
* Make sure that outbound connections are closed (PR #1725)
Changes in synapse v0.18.5 (2016-12-16)
=======================================
Bug fixes:
* Fix federation /backfill returning events it shouldn't (PR #1700)
* Fix crash in url preview (PR #1701)
Changes in synapse v0.18.5-rc3 (2016-12-13)
===========================================
Features:
* Add support for E2E for guests (PR #1653)
* Add new API appservice specific public room list (PR #1676)
* Add new room membership APIs (PR #1680)
Changes:
* Enable guest access for private rooms by default (PR #653)
* Limit the number of events that can be created on a given room concurrently
(PR #1620)
* Log the args that we have on UI auth completion (PR #1649)
* Stop generating refresh_tokens (PR #1654)
* Stop putting a time caveat on access tokens (PR #1656)
* Remove unspecced GET endpoints for e2e keys (PR #1694)
Bug fixes:
* Fix handling of 500 and 429's over federation (PR #1650)
* Fix Content-Type header parsing (PR #1660)
* Fix error when previewing sites that include unicode, thanks to kyrias (PR
#1664)
* Fix some cases where we drop read receipts (PR #1678)
* Fix bug where calls to ``/sync`` didn't correctly timeout (PR #1683)
* Fix bug where E2E key query would fail if a single remote host failed (PR
#1686)
Changes in synapse v0.18.5-rc2 (2016-11-24)
===========================================
Bug fixes:
* Don't send old events over federation, fixes bug in -rc1.
Changes in synapse v0.18.5-rc1 (2016-11-24)
===========================================
Features:
* Implement "event_fields" in filters (PR #1638)
Changes:
* Use external ldap auth package (PR #1628)
* Split out federation transaction sending to a worker (PR #1635)
* Fail with a coherent error message if `/sync?filter=` is invalid (PR #1636)
* More efficient notif count queries (PR #1644)
Changes in synapse v0.18.4 (2016-11-22)
=======================================
Bug fixes:
* Add workaround for buggy clients that fail to register (PR #1632)
Changes in synapse v0.18.4-rc1 (2016-11-14)
===========================================
Changes:
* Various database efficiency improvements (PR #1188, #1192)
* Update default config to blacklist more internal IPs, thanks to Euan Kemp (PR
#1198)
* Allow specifying duration in minutes in config, thanks to Daniel Dent (PR
#1625)
Bug fixes:
* Fix media repo to set CORs headers on responses (PR #1190)
* Fix registration to not error on non-ascii passwords (PR #1191)
* Fix create event code to limit the number of prev_events (PR #1615)
* Fix bug in transaction ID deduplication (PR #1624)
Changes in synapse v0.18.3 (2016-11-08)
=======================================
SECURITY UPDATE
Explicitly require authentication when using LDAP3. This is the default on
versions of ``ldap3`` above 1.0, but some distributions will package an older
version.
If you are using LDAP3 login and have a version of ``ldap3`` older than 1.0 it
is **CRITICAL to upgrade**.
Changes in synapse v0.18.2 (2016-11-01)
=======================================
No changes since v0.18.2-rc5
Changes in synapse v0.18.2-rc5 (2016-10-28)
===========================================
Bug fixes:
* Fix prometheus process metrics in worker processes (PR #1184)
Changes in synapse v0.18.2-rc4 (2016-10-27)
===========================================
Bug fixes:
* Fix ``user_threepids`` schema delta, which in some instances prevented
startup after upgrade (PR #1183)
Changes in synapse v0.18.2-rc3 (2016-10-27)
===========================================
Changes:
* Allow clients to supply access tokens as headers (PR #1098)
* Clarify error codes for GET /filter/, thanks to Alexander Maznev (PR #1164)
* Make password reset email field case insensitive (PR #1170)
* Reduce redundant database work in email pusher (PR #1174)
* Allow configurable rate limiting per AS (PR #1175)
* Check whether to ratelimit sooner to avoid work (PR #1176)
* Standardise prometheus metrics (PR #1177)
Bug fixes:
* Fix incredibly slow back pagination query (PR #1178)
* Fix infinite typing bug (PR #1179)
Changes in synapse v0.18.2-rc2 (2016-10-25)
===========================================
(This release did not include the changes advertised and was identical to RC1)
Changes in synapse v0.18.2-rc1 (2016-10-17)
===========================================
Changes:
* Remove redundant event_auth index (PR #1113)
* Reduce DB hits for replication (PR #1141)
* Implement pluggable password auth (PR #1155)
* Remove rate limiting from app service senders and fix get_or_create_user
requester, thanks to Patrik Oldsberg (PR #1157)
* window.postmessage for Interactive Auth fallback (PR #1159)
* Use sys.executable instead of hardcoded python, thanks to Pedro Larroy
(PR #1162)
* Add config option for adding additional TLS fingerprints (PR #1167)
* User-interactive auth on delete device (PR #1168)
Bug fixes:
* Fix not being allowed to set your own state_key, thanks to Patrik Oldsberg
(PR #1150)
* Fix interactive auth to return 401 from for incorrect password (PR #1160,
#1166)
* Fix email push notifs being dropped (PR #1169)
Changes in synapse v0.18.1 (2016-10-05)
======================================
No changes since v0.18.1-rc1
Changes in synapse v0.18.1-rc1 (2016-09-30)
===========================================
Features:
* Add total_room_count_estimate to ``/publicRooms`` (PR #1133)
Changes:
* Time out typing over federation (PR #1140)
* Restructure LDAP authentication (PR #1153)
Bug fixes:
* Fix 3pid invites when server is already in the room (PR #1136)
* Fix upgrading with SQLite taking lots of CPU for a few days
after upgrade (PR #1144)
* Fix upgrading from very old database versions (PR #1145)
* Fix port script to work with recently added tables (PR #1146)
Changes in synapse v0.18.0 (2016-09-19)
=======================================
@@ -6,7 +811,7 @@ significantly reduce database size. Synapse will attempt to upgrade the current
data in the background. Servers with large SQLite database may experience
degradation of performance while this upgrade is in progress, therefore you may
want to consider migrating to using Postgres before upgrading very large SQLite
daabases
databases
Changes:


@@ -27,4 +27,5 @@ exclude jenkins*.sh
exclude jenkins*
recursive-exclude jenkins *.sh
prune .github
prune demo/etc


@@ -11,7 +11,7 @@ VoIP. The basics you need to know to get up and running are:
like ``#matrix:matrix.org`` or ``#test:localhost:8448``.
- Matrix user IDs look like ``@matthew:matrix.org`` (although in the future
you will normally refer to yourself and others using a third party identifier
(3PID): email address, phone number, etc rather than manipulating Matrix user IDs)
The overall architecture is::
@@ -20,12 +20,13 @@ The overall architecture is::
https://somewhere.org/_matrix https://elsewhere.net/_matrix
``#matrix:matrix.org`` is the official support room for Matrix, and can be
accessed by any client from https://matrix.org/blog/try-matrix-now or via IRC
bridge at irc://irc.freenode.net/matrix.
accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html or
via IRC bridge at irc://irc.freenode.net/matrix.
Synapse is currently in rapid development, but as of version 0.5 we believe it
is sufficiently stable to be run as an internet-facing service for real usage!
About Matrix
============
@@ -52,10 +53,10 @@ generation of fully open and interoperable messaging and VoIP apps for the
internet.
Synapse is a reference "homeserver" implementation of Matrix from the core
development team at matrix.org, written in Python/Twisted for clarity and
simplicity. It is intended to showcase the concept of Matrix and let folks see
the spec in the context of a codebase and let you run your own homeserver and
generally help bootstrap the ecosystem.
development team at matrix.org, written in Python/Twisted. It is intended to
showcase the concept of Matrix and let folks see the spec in the context of a
codebase and let you run your own homeserver and generally help bootstrap the
ecosystem.
In Matrix, every user runs one or more Matrix clients, which connect through to
a Matrix homeserver. The homeserver stores all their personal chat history and
@@ -66,26 +67,16 @@ hosted by someone else (e.g. matrix.org) - there is no single point of control
or mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts,
etc.
Synapse ships with two basic demo Matrix clients: webclient (a basic group chat
web client demo implemented in AngularJS) and cmdclient (a basic Python
command line utility which lets you easily see what the JSON APIs are up to).
Meanwhile, iOS and Android SDKs and clients are available from:
- https://github.com/matrix-org/matrix-ios-sdk
- https://github.com/matrix-org/matrix-ios-kit
- https://github.com/matrix-org/matrix-ios-console
- https://github.com/matrix-org/matrix-android-sdk
We'd like to invite you to join #matrix:matrix.org (via
https://matrix.org/blog/try-matrix-now), run a homeserver, take a look at the
Matrix spec at https://matrix.org/docs/spec and API docs at
https://matrix.org/docs/api, experiment with the APIs and the demo clients, and
report any bugs via https://matrix.org/jira.
https://matrix.org/docs/projects/try-matrix-now.html), run a homeserver, take a look
at the `Matrix spec <https://matrix.org/docs/spec>`_, and experiment with the
`APIs <https://matrix.org/docs/api>`_ and `Client SDKs
<http://matrix.org/docs/projects/try-matrix-now.html#client-sdks>`_.
Thanks for using Matrix!
[1] End-to-end encryption is currently in development - see https://matrix.org/git/olm
[1] End-to-end encryption is currently in beta: `blog post <https://matrix.org/blog/2016/11/21/matrixs-olm-end-to-end-encryption-security-assessment-released-and-implemented-cross-platform-on-riot-at-last>`_.
Synapse Installation
====================
@@ -93,11 +84,17 @@ Synapse Installation
Synapse is the reference python/twisted Matrix homeserver implementation.
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 2.7
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
Synapse is written in python but some of the libraries is uses are written in
Installing from source
----------------------
(Prebuilt packages are available for some platforms - see `Platform-Specific
Instructions`_.)
Synapse is written in python but some of the libraries it uses are written in
C. So before we can install synapse itself we need a working C compiler and the
header files for python C extensions.
@@ -112,10 +109,10 @@ Installing prerequisites on ArchLinux::
sudo pacman -S base-devel python2 python-pip \
python-setuptools python-virtualenv sqlite3
Installing prerequisites on CentOS 7::
Installing prerequisites on CentOS 7 or Fedora 25::
sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \
python-virtualenv libffi-devel openssl-devel
sudo yum groupinstall "Development Tools"
@@ -124,6 +121,7 @@ Installing prerequisites on Mac OS X::
xcode-select --install
sudo easy_install pip
sudo pip install virtualenv
brew install pkg-config libffi
Installing prerequisites on Raspbian::
@@ -140,10 +138,16 @@ Installing prerequisites on openSUSE::
sudo zypper in python-pip python-setuptools sqlite3 python-virtualenv \
python-devel libffi-devel libopenssl-devel libjpeg62-devel
Installing prerequisites on OpenBSD::
doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \
libxslt
To install the synapse homeserver run::
virtualenv -p python2.7 ~/.synapse
source ~/.synapse/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install https://github.com/matrix-org/synapse/tarball/master
@@ -151,38 +155,74 @@ This installs synapse, along with the libraries it uses, into a virtual
environment under ``~/.synapse``. Feel free to pick a different directory
if you prefer.
In case of problems, please see the _Troubleshooting section below.
In case of problems, please see the _`Troubleshooting` section below.
Alternatively, Silvio Fricke has contributed a Dockerfile to automate the
above in Docker at https://registry.hub.docker.com/u/silviof/docker-matrix/.
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
for details.
To set up your homeserver, run (in your virtualenv, as before)::
Configuring synapse
-------------------
Before you can start Synapse, you will need to generate a configuration
file. To do this, run (in your virtualenv, as before)::
cd ~/.synapse
python -m synapse.app.homeserver \
--server-name machine.my.domain.name \
--server-name my.domain.name \
--config-path homeserver.yaml \
--generate-config \
--report-stats=[yes|no]
...substituting your host and domain name as appropriate.
... substituting an appropriate value for ``--server-name``. The server name
determines the "domain" part of user-ids for users on your server: these will
all be of the format ``@user:my.domain.name``. It also determines how other
matrix servers will reach yours for `Federation`_. For a test configuration,
set this to the hostname of your server. For a more production-ready setup, you
will probably want to specify your domain (``example.com``) rather than a
matrix-specific hostname here (in the same way that your email address is
probably ``user@example.com`` rather than ``user@email.example.com``) - but
doing so may require more advanced setup - see `Setting up
Federation`_. Beware that the server name cannot be changed later.
This will generate you a config file that you can then customise, but it will
This command will generate you a config file that you can then customise, but it will
also generate a set of keys for you. These keys will allow your Home Server to
identify itself to other Home Servers, so don't lose or delete them. It would be
wise to back them up somewhere safe. If, for whatever reason, you do need to
wise to back them up somewhere safe. (If, for whatever reason, you do need to
change your Home Server's keys, you may find that other Home Servers have the
old key cached. If you update the signing key, you should change the name of the
key in the <server name>.signing.key file (the second word) to something different.
key in the ``<server name>.signing.key`` file (the second word) to something
different. See `the spec`__ for more information on key management.)
By default, registration of new users is disabled. You can either enable
registration in the config by specifying ``enable_registration: true``
(it is then recommended to also set up CAPTCHA - see docs/CAPTCHA_SETUP), or
you can use the command line to register new users::
.. __: `key_management`_
The default configuration exposes two HTTP ports: 8008 and 8448. Port 8008 is
configured without TLS; it should be behind a reverse proxy for TLS/SSL
termination on port 443 which in turn should be used for clients. Port 8448
is configured to use TLS with a self-signed certificate. If you would like
to do an initial test with a client without having to set up a reverse proxy,
you can temporarily use another certificate. (Note that a self-signed
certificate is fine for `Federation`_). You can do so by changing
``tls_certificate_path``, ``tls_private_key_path`` and ``tls_dh_params_path``
in ``homeserver.yaml``; alternatively, you can use a reverse-proxy, but be sure
to read `Using a reverse proxy with Synapse`_ when doing so.
Apart from port 8448 using TLS, both ports are the same in the default
configuration.
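For example, the relevant keys in ``homeserver.yaml`` look something like
this (paths are placeholders)::

    tls_certificate_path: "/path/to/my.domain.name.tls.crt"
    tls_private_key_path: "/path/to/my.domain.name.tls.key"
    tls_dh_params_path: "/path/to/my.domain.name.tls.dh"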
Registering a user
------------------
You will need at least one user on your server in order to use a Matrix
client. Users can be registered either `via a Matrix client`__, or via a
commandline script.
.. __: `client-user-reg`_
To get started, it is easiest to use the command line to register new users::
$ source ~/.synapse/bin/activate
$ synctl start # if not already running
@@ -190,10 +230,41 @@ you can use the command line to register new users::
New user localpart: erikj
Password:
Confirm password:
Make admin [no]:
Success!
This process uses a setting ``registration_shared_secret`` in
``homeserver.yaml``, which is shared between Synapse itself and the
``register_new_matrix_user`` script. It doesn't matter what it is (a random
value is generated by ``--generate-config``), but it should be kept secret, as
anyone with knowledge of it can register users on your server even if
``enable_registration`` is ``false``.
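The corresponding line in ``homeserver.yaml`` looks like (value is a
placeholder)::

    registration_shared_secret: "<long random string from --generate-config>"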
Setting up a TURN server
------------------------
For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See docs/turn-howto.rst for details.
a TURN server. See `<docs/turn-howto.rst>`_ for details.
IPv6
----
As of Synapse 0.19 we finally support IPv6, many thanks to @kyrias and @glyph
for providing PR #1696.
However, for federation to work on hosts with IPv6 DNS servers you **must**
be running Twisted 17.1.0 or later - see https://github.com/matrix-org/synapse/issues/1002
for details. We can't make Synapse depend on Twisted 17.1 by default
yet as it will break most older distributions (see https://github.com/matrix-org/synapse/pull/1909)
so if you are using operating system dependencies you'll have to install your
own Twisted 17.1 package via pip or backports etc.
If you're running in a virtualenv then pip should have installed the newest
Twisted automatically, but if your virtualenv is old you will need to manually
upgrade to a newer Twisted dependency via:
pip install Twisted>=17.1.0
Running Synapse
===============
@@ -205,11 +276,60 @@ run (e.g. ``~/.synapse``), and::
source ./bin/activate
synctl start
Connecting to Synapse from a client
===================================
The easiest way to try out your new Synapse installation is by connecting to it
from a web client. The easiest option is probably the one at
http://riot.im/app. You will need to specify a "Custom server" when you log on
or register: set this to ``https://domain.tld`` if you set up a reverse proxy
following the recommended setup, or ``https://localhost:8448`` - remember to
specify the port (``:8448``) if it is not ``:443``, unless you changed the
configuration. (Leave the identity
server as the default - see `Identity servers`_.)
If using port 8448 you will run into errors until you accept the self-signed
certificate. You can easily do this by going to ``https://localhost:8448``
directly with your browser and accepting the presented certificate. You can
then go back to your web client and proceed further.
If all goes well you should at least be able to log in, create a room, and
start sending messages.
(The homeserver runs a web client by default at https://localhost:8448/, though
as of the time of writing it is somewhat outdated and not really recommended -
https://github.com/matrix-org/synapse/issues/1527).
.. _`client-user-reg`:
Registering a new user from a client
------------------------------------
By default, registration of new users via Matrix clients is disabled. To enable
it, specify ``enable_registration: true`` in ``homeserver.yaml``. (It is then
recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.rst>`_.)
Once ``enable_registration`` is set to ``true``, it is possible to register a
user via `riot.im <https://riot.im/app/#/register>`_ or other Matrix clients.
Your new user name will be formed partly from the ``server_name`` (see
`Configuring synapse`_), and partly from a localpart you specify when you
create the account. Your name will take the form of::
@localpart:my.domain.name
(pronounced "at localpart on my dot domain dot name").
As when logging in, you will need to specify a "Custom server". Specify your
desired ``localpart`` in the 'User name' box.
Security Note
=============
Matrix serves raw user generated data in some APIs - specifically the content
repository endpoints: http://matrix.org/docs/spec/client_server/r0.2.0.html#get-matrix-media-r0-download-servername-mediaid
Matrix serves raw user generated data in some APIs - specifically the `content
repository endpoints <http://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid>`_.
Whilst we have tried to mitigate against possible XSS attacks (e.g.
https://github.com/matrix-org/synapse/pull/1021) we recommend running
matrix homeservers on a dedicated domain name, to limit any malicious user generated
@@ -220,26 +340,8 @@ server on the same domain.
See https://github.com/vector-im/vector-web/issues/1977 and
https://developer.github.com/changes/2014-04-25-user-content-security for more details.
Using PostgreSQL
================
As of Synapse 0.9, `PostgreSQL <http://www.postgresql.org>`_ is supported as an
alternative to the `SQLite <http://sqlite.org/>`_ database that Synapse has
traditionally used for convenience and simplicity.
The advantages of Postgres include:
* significant performance improvements due to the superior threading and
caching model and a smarter query optimiser
* allowing the DB to be run on separate hardware
* allowing basic active/backup high-availability with a "hot spare" synapse
pointing at the same DB master, as well as enabling DB replication in
synapse itself.
For information on how to install and use PostgreSQL, please see
`docs/postgres.rst <docs/postgres.rst>`_.
Platform Specific Instructions
Platform-Specific Instructions
==============================
Debian
@@ -247,7 +349,7 @@ Debian
Matrix provides official Debian packages via apt from http://matrix.org/packages/debian/.
Note that these packages do not include a client - choose one from
https://matrix.org/blog/try-matrix-now/ (or build your own with one of our SDKs :)
https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one of our SDKs :)
Fedora
------
@@ -258,10 +360,12 @@ https://obs.infoserver.lv/project/monitor/matrix-synapse
ArchLinux
---------
The quickest way to get up and running with ArchLinux is probably with Ivan
Shapovalov's AUR package from
https://aur.archlinux.org/packages/matrix-synapse/, which should pull in all
the necessary dependencies.
The quickest way to get up and running with ArchLinux is probably with the community package
https://www.archlinux.org/packages/community/any/matrix-synapse/, which should pull in most of
the necessary dependencies. If the default web client is to be served (enabled by default in
the generated config),
https://www.archlinux.org/packages/community/any/python2-matrix-angular-sdk/ will also need to
be installed.
Alternatively, to install using pip, a few changes may be needed, as ArchLinux
defaults to Python 3 while Synapse currently assumes Python 2.7 by default:
@@ -298,9 +402,35 @@ FreeBSD
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
- Ports: ``cd /usr/ports/net/py-matrix-synapse && make install clean``
- Ports: ``cd /usr/ports/net-im/py-matrix-synapse && make install clean``
- Packages: ``pkg install py27-matrix-synapse``
OpenBSD
-------
There is currently no port for OpenBSD. Additionally, OpenBSD's security
settings require a slightly more difficult installation process.
1) Create a new directory in ``/usr/local`` called ``_synapse``. Also, create a
new user called ``_synapse`` and set that directory as the new user's home.
This is required because, by default, OpenBSD only allows binaries which need
write and execute permissions on the same memory space to be run from
``/usr/local``.
2) ``su`` to the new ``_synapse`` user and change to their home directory.
3) Create a new virtualenv: ``virtualenv -p python2.7 ~/.synapse``
4) Source the virtualenv configuration located at
``/usr/local/_synapse/.synapse/bin/activate``. This is done in ``ksh`` by
using the ``.`` command, rather than ``bash``'s ``source``.
5) Optionally, use ``pip`` to install ``lxml``, which Synapse needs to parse
webpages for their titles.
6) Use ``pip`` to install this repository: ``pip install
https://github.com/matrix-org/synapse/tarball/master``
7) Optionally, change ``_synapse``'s shell to ``/bin/false`` to reduce the
chance of a compromised Synapse server being used to take over your box.
After this, you may proceed with the rest of the install directions.
NixOS
-----
@@ -340,6 +470,7 @@ Troubleshooting:
you do, you may need to create a symlink to ``libsodium.a`` so ``ld`` can find
it: ``ln -s /usr/local/lib/libsodium.a /usr/lib/libsodium.a``
Troubleshooting
===============
@@ -403,6 +534,30 @@ fix try re-installing from PyPI or directly from
# Install from github
pip install --user https://github.com/pyca/pynacl/tarball/master
Running out of File Handles
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If synapse runs out of file handles, it typically fails badly - live-locking
at 100% CPU, and/or failing to accept new TCP connections (blocking the
connecting client). Matrix currently can legitimately use a lot of file handles,
thanks to busy rooms like #matrix:matrix.org containing hundreds of participating
servers. The first time a server talks in a room it will try to connect
simultaneously to all participating servers, which could exhaust the available
file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow
to respond. (We need to improve the routing algorithm to use something better
than a full mesh, but as of June 2017 this hasn't happened yet).
If you hit this failure mode, we recommend increasing the maximum number of
open file handles to be at least 4096 (assuming a default of 1024 or 256).
This is typically done by editing ``/etc/security/limits.conf``
Separately, Synapse may leak file handles if inbound HTTP requests get stuck
during processing - e.g. blocked behind a lock or talking to a remote server etc.
This is best diagnosed by matching up the 'Received request' and 'Processed request'
log lines and looking for any 'Processed request' lines which take more than
a few seconds to execute. Please let us know at #matrix-dev:matrix.org if
you see this failure mode so we can help debug it, however.
ArchLinux
~~~~~~~~~
@@ -413,37 +568,6 @@ you will need to explicitly call Python2.7 - either running as::
...or by editing synctl with the correct python executable.
Synapse Development
===================
To check out a synapse for development, clone the git repo into a working
directory of your choice::
git clone https://github.com/matrix-org/synapse.git
cd synapse
Synapse has a number of external dependencies that are easiest
to install using pip and a virtualenv::
virtualenv env
source env/bin/activate
python synapse/python_dependencies.py | xargs -n1 pip install
pip install setuptools_trial mock
This will download and install all the needed dependencies into a virtual
env.
Once this is done, you may wish to run Synapse's unit tests, to
check that everything is installed as it should be::
python setup.py test
This should end with a 'PASSED' result::
Ran 143 tests in 0.601s
PASSED (successes=143)
Upgrading an existing Synapse
=============================
@@ -454,143 +578,254 @@ versions of synapse.
.. _UPGRADE.rst: UPGRADE.rst
.. _federation:
Setting up Federation
=====================
In order for other homeservers to send messages to your server, it will need to
be publicly visible on the internet, and they will need to know its host name.
You have two choices here, which will influence the form of your Matrix user
IDs:
Federation is the process by which users on different servers can participate
in the same room. For this to work, those other servers must be able to contact
yours to send messages.
1) Use the machine's own hostname as available on public DNS in the form of
its A records. This is easier to set up initially, perhaps for
testing, but lacks the flexibility of SRV.
As explained in `Configuring synapse`_, the ``server_name`` in your
``homeserver.yaml`` file determines the way that other servers will reach
yours. By default, they will treat it as a hostname and try to connect to
port 8448. This is easy to set up and will work with the default configuration,
provided you set the ``server_name`` to match your machine's public DNS
hostname.
2) Set up a SRV record for your domain name. This requires you create a SRV
record in DNS, but gives the flexibility to run the server on your own
choice of TCP port, on a machine that might not be the same name as the
domain name.
For a more flexible configuration, you can set up a DNS SRV record. This allows
you to run your server on a machine that might not have the same name as your
domain name. For example, you might want to run your server at
``synapse.example.com``, but have your Matrix user-ids look like
``@user:example.com``. (A SRV record also allows you to change the port from
the default 8448. However, if you are thinking of using a reverse-proxy on the
federation port, which is not recommended, be sure to read
`Reverse-proxying the federation port`_ first.)
For the first form, simply pass the required hostname (of the machine) as the
--server-name parameter::
To use a SRV record, first create your SRV record and publish it in DNS. This
should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
<synapse.server.name>``. The DNS record should then look something like::
$ dig -t srv _matrix._tcp.example.com
_matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com.
You can then configure your homeserver to use ``<yourdomain.com>`` as the domain in
its user-ids, by setting ``server_name``::
python -m synapse.app.homeserver \
--server-name machine.my.domain.name \
--server-name <yourdomain.com> \
--config-path homeserver.yaml \
--generate-config
python -m synapse.app.homeserver --config-path homeserver.yaml
Alternatively, you can run ``synctl start`` to guide you through the process.
For the second form, first create your SRV record and publish it in DNS. This
needs to be named _matrix._tcp.YOURDOMAIN, and point at at least one hostname
and port where the server is running. (At the current time synapse does not
support clustering multiple servers into a single logical homeserver). The DNS
record would then look something like::
$ dig -t srv _matrix._tcp.machine.my.domain.name
_matrix._tcp IN SRV 10 0 8448 machine.my.domain.name.
At this point, you should then run the homeserver with the hostname of this
SRV record, as that is the name other machines will expect it to have::
python -m synapse.app.homeserver \
--server-name YOURDOMAIN \
--config-path homeserver.yaml \
--generate-config
python -m synapse.app.homeserver --config-path homeserver.yaml
If you've already generated the config file, you need to edit the "server_name"
in you ```homeserver.yaml``` file. If you've already started Synapse and a
If you've already generated the config file, you need to edit the ``server_name``
in your ``homeserver.yaml`` file. If you've already started Synapse and a
database has been created, you will have to recreate the database.
You may additionally want to pass one or more "-v" options, in order to
increase the verbosity of logging output, at least for initial testing.
If all goes well, you should be able to `connect to your server with a client`__,
and then join a room via federation. (Try ``#matrix-dev:matrix.org`` as a first
step. "Matrix HQ"'s sheer size and activity level tends to make even the
largest boxes pause for thought.)
.. __: `Connecting to Synapse from a client`_
Troubleshooting
---------------
The typical failure mode with federation is that when you try to join a room,
it is rejected with "401: Unauthorized". Generally this means that other
servers in the room couldn't access yours. (Joining a room over federation is a
complicated dance which requires connections in both directions).
So, things to check are:
* If you are trying to use a reverse-proxy, read `Reverse-proxying the
federation port`_.
* If you are not using a SRV record, check that your ``server_name`` (the part
of your user-id after the ``:``) matches your hostname, and that port 8448 on
that hostname is reachable from outside your network.
* If you *are* using a SRV record, check that it matches your ``server_name``
(it should be ``_matrix._tcp.<server_name>``), and that the port and hostname
it specifies are reachable from outside your network.
Running a Demo Federation of Synapses
-------------------------------------
If you want to get up and running quickly with a trio of homeservers in a
private federation (``localhost:8080``, ``localhost:8081`` and
``localhost:8082``) which you can then access through the webclient running at
http://localhost:8080, simply run::
demo/start.sh
This is mainly useful just for development purposes.
Running The Demo Web Client
===========================
The homeserver runs a web client by default at https://localhost:8448/.
If this is the first time you have used the client from that browser (it uses
HTML5 local storage to remember its config), you will need to log in to your
account. If you don't yet have an account, because you've just started the
homeserver for the first time, then you'll need to register one.
private federation, there is a script in the ``demo`` directory. This is mainly
useful just for development purposes. See `<demo/README>`_.
Registering A New Account
-------------------------
Using PostgreSQL
================
Your new user name will be formed partly from the hostname your server is
running as, and partly from a localpart you specify when you create the
account. Your name will take the form of::
As of Synapse 0.9, `PostgreSQL <http://www.postgresql.org>`_ is supported as an
alternative to the `SQLite <http://sqlite.org/>`_ database that Synapse has
traditionally used for convenience and simplicity.
@localpart:my.domain.here
(pronounced "at localpart on my dot domain dot here")
The advantages of Postgres include:
Specify your desired localpart in the topmost box of the "Register for an
account" form, and click the "Register" button. Hostnames can contain ports if
required due to lack of SRV records (e.g. @matthew:localhost:8448 on an
internal synapse sandbox running on localhost).
* significant performance improvements due to the superior threading and
caching model and a smarter query optimiser
* allowing the DB to be run on separate hardware
* allowing basic active/backup high-availability with a "hot spare" synapse
pointing at the same DB master, as well as enabling DB replication in
synapse itself.
If registration fails, you may need to enable it in the homeserver (see
`Synapse Installation`_ above).
For information on how to install and use PostgreSQL, please see
`docs/postgres.rst <docs/postgres.rst>`_.
Logging In To An Existing Account
---------------------------------
.. _reverse-proxy:
Using a reverse proxy with Synapse
==================================
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
`HAProxy <http://www.haproxy.org/>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
The most important thing to know here is that Matrix clients and other Matrix
servers do not necessarily need to connect to your server via the same
port. Indeed, clients will use port 443 by default, whereas servers default to
port 8448. Where these are different, we refer to the 'client port' and the
'federation port'.
The next most important thing to know is that using a reverse-proxy on the
federation port has a number of pitfalls. It is possible, but be sure to read
`Reverse-proxying the federation port`_.
The recommended setup is therefore to configure your reverse-proxy to forward
port 443 to port 8008 of synapse for client connections, but to also directly
expose port 8448 for server-server connections. All the Matrix endpoints begin
with ``/_matrix``, so an example nginx configuration might look like::
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name matrix.example.com;

        location /_matrix {
            proxy_pass http://localhost:8008;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
You will also want to set ``bind_addresses: ['127.0.0.1']`` and ``x_forwarded: true``
for port 8008 in ``homeserver.yaml`` to ensure that client IP addresses are
recorded correctly.
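As a sketch, assuming the usual ``listeners`` layout in ``homeserver.yaml``,
the client listener sitting behind the proxy might then look something like
this (ports and resource names as in the recommended setup above)::

    listeners:
      - port: 8008
        type: http
        tls: false
        # only listen locally, since the reverse-proxy terminates TLS for us
        bind_addresses: ['127.0.0.1']
        # trust the X-Forwarded-For header set by the reverse-proxy
        x_forwarded: true
        resources:
          - names: [client]
            compress: true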
Having done so, you can then use ``https://matrix.example.com`` (instead of
``https://matrix.example.com:8448``) as the "Custom server" when `Connecting to
Synapse from a client`_.
Reverse-proxying the federation port
------------------------------------
There are two issues to consider before using a reverse-proxy on the federation
port:
* Due to the way SSL certificates are managed in the Matrix federation protocol
(see `spec`__), Synapse needs to be configured with the path to the SSL
certificate, *even if you do not terminate SSL at Synapse*.
.. __: `key_management`_
* Synapse does not currently support SNI on the federation protocol
(`bug #1491 <https://github.com/matrix-org/synapse/issues/1491>`_), which
means that using name-based virtual hosting is unreliable.
Furthermore, a number of the normal reasons for using a reverse-proxy do not
apply:
* Other servers will connect on port 8448 by default, so there is no need to
listen on port 443 (for federation, at least), which avoids the need for root
privileges and virtual hosting.
* A self-signed SSL certificate is fine for federation, so there is no need to
automate renewals. (The certificate generated by ``--generate-config`` is
valid for 10 years.)
If you want to set up a reverse-proxy on the federation port despite these
caveats, you will need to do the following:
* In ``homeserver.yaml``, set ``tls_certificate_path`` to the path to the SSL
certificate file used by your reverse-proxy, and set ``no_tls`` to ``True``.
(``tls_private_key_path`` will be ignored if ``no_tls`` is ``True``.) A sketch
of these settings is shown after this list.
* In your reverse-proxy configuration:
* If there are other virtual hosts on the same port, make sure that the
*default* one uses the certificate configured above.
* Forward ``/_matrix`` to Synapse.
* If your reverse-proxy is not listening on port 8448, publish a SRV record to
tell other servers how to find you. See `Setting up Federation`_.
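As a sketch, the ``homeserver.yaml`` changes from the first bullet above might
look like this (the certificate path is a hypothetical example - use the path
to the certificate your reverse-proxy actually presents)::

    # certificate presented by the reverse-proxy (hypothetical path)
    tls_certificate_path: "/etc/ssl/example.com/fullchain.pem"
    # don't terminate TLS in Synapse itself
    no_tls: True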
When updating the SSL certificate, just update the file pointed to by
``tls_certificate_path``: there is no need to restart synapse. (You may like to
use a symbolic link to help make this process atomic.)
The most common mistake when setting up federation is not to tell Synapse about
your SSL certificate. To check it, you can visit
``https://matrix.org/federationtester/api/report?server_name=<your_server_name>``.
Unfortunately, there is no UI for this yet, but you should see
``"MatchingTLSFingerprint": true``. If not, check that
``Certificates[0].SHA256Fingerprint`` (the fingerprint of the certificate
presented by your reverse-proxy) matches ``Keys.tls_fingerprints[0].sha256``
(the fingerprint of the certificate Synapse is using).
Just enter the ``@localpart:my.domain.here`` Matrix user ID and password into
the form and click the Login button.
Identity Servers
================
The job of authenticating 3PIDs and tracking which 3PIDs are associated with a
given Matrix user is very security-sensitive, as there is obvious risk of spam
if it is too easy to sign up for Matrix accounts or harvest 3PID data.
Meanwhile the job of publishing the end-to-end encryption public keys for
Matrix users is also very security-sensitive for similar reasons.
Identity servers have the job of mapping email addresses and other 3rd Party
IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs
before creating that mapping.
Therefore the role of managing trusted identity in the Matrix ecosystem is
farmed out to a cluster of known trusted ecosystem partners, who run 'Matrix
Identity Servers' such as ``sydent``, whose role is purely to authenticate and
track 3PID logins and publish end-user public keys.
**They are not where accounts or credentials are stored - these live on home
servers. Identity Servers are just for mapping 3rd party IDs to matrix IDs.**
It's currently early days for identity servers as Matrix is not yet using 3PIDs
as the primary means of identity and E2E encryption is not complete. As such,
we are running a single identity server (https://matrix.org) at the current
time.
This process is very security-sensitive, as there is obvious risk of spam if it
is too easy to sign up for Matrix accounts or harvest 3PID data. In the longer
term, we hope to create a decentralised system to manage it (`matrix-doc #712
<https://github.com/matrix-org/matrix-doc/issues/712>`_), but in the meantime,
the role of managing trusted identity in the Matrix ecosystem is farmed out to
a cluster of known trusted ecosystem partners, who run 'Matrix Identity
Servers' such as `Sydent <https://github.com/matrix-org/sydent>`_, whose role
is purely to authenticate and track 3PID logins and publish end-user public
keys.
You can host your own copy of Sydent, but this will prevent you from reaching
other users in the Matrix ecosystem via their email address, and prevent them
from finding you. We therefore recommend that you use one of the centralised
identity servers at ``https://matrix.org`` or ``https://vector.im`` for now.
To reiterate: the Identity server will only be used if you choose to associate
an email address with your account, or send an invite to another user via their
email address.
URL Previews
============
Synapse 0.15.0 introduces an experimental new API for previewing URLs at
/_matrix/media/r0/preview_url. This is disabled by default. To turn it on
you must enable the `url_preview_enabled: True` config parameter and explicitly
specify the IP ranges that Synapse is not allowed to spider for previewing in
the `url_preview_ip_range_blacklist` configuration parameter. This is critical
from a security perspective to stop arbitrary Matrix users spidering 'internal'
URLs on your network. At the very least we recommend that your loopback and
RFC1918 IP addresses are blacklisted.
Synapse 0.15.0 introduces a new API for previewing URLs at
``/_matrix/media/r0/preview_url``. This is disabled by default. To turn it on
you must enable the ``url_preview_enabled: True`` config parameter and
explicitly specify the IP ranges that Synapse is not allowed to spider for
previewing in the ``url_preview_ip_range_blacklist`` configuration parameter.
This is critical from a security perspective to stop arbitrary Matrix users
spidering 'internal' URLs on your network. At the very least we recommend that
your loopback and RFC1918 IP addresses are blacklisted.
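For example, a minimal sketch of the relevant ``homeserver.yaml`` settings -
the blacklist below covers loopback plus the RFC1918 private ranges, and
should be extended to suit your own network::

    url_preview_enabled: True
    url_preview_ip_range_blacklist:
      - '127.0.0.0/8'
      - '10.0.0.0/8'
      - '172.16.0.0/12'
      - '192.168.0.0/16'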
This also requires the optional lxml and netaddr python dependencies to be
installed.
installed. This in turn requires the libxml2 library to be available - on
Debian/Ubuntu this means ``apt-get install libxml2-dev``, or equivalent for
your OS.
Password reset
@@ -601,24 +836,54 @@ server, they can request a password-reset token via clients such as Vector.
A manual password reset can be done via direct database access as follows.
First calculate the hash of the new password:
First calculate the hash of the new password::
$ source ~/.synapse/bin/activate
$ ./scripts/hash_password
Password:
Confirm password:
$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Then update the `users` table in the database:
Then update the `users` table in the database::
UPDATE users SET password_hash='$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
WHERE name='@test:test.com';
Where's the spec?!
==================
The source of the matrix spec lives at https://github.com/matrix-org/matrix-doc.
A recent HTML snapshot of this lives at http://matrix.org/docs/spec
Synapse Development
===================
Before setting up a development environment for synapse, make sure you have the
system dependencies (such as the python header files) installed - see
`Installing from source`_.
To check out a synapse for development, clone the git repo into a working
directory of your choice::
git clone https://github.com/matrix-org/synapse.git
cd synapse
Synapse has a number of external dependencies that are easiest
to install using pip and a virtualenv::
virtualenv -p python2.7 env
source env/bin/activate
python synapse/python_dependencies.py | xargs pip install
pip install lxml mock
This will download and install all the needed dependencies into a virtual
env.
Once this is done, you may wish to run Synapse's unit tests, to
check that everything is installed as it should be::
PYTHONPATH="." trial tests
This should end with a 'PASSED' result::
Ran 143 tests in 0.601s
PASSED (successes=143)
Building Internal API Documentation
@@ -635,7 +900,6 @@ Building internal API documentation::
python setup.py build_sphinx
Help!! Synapse eats all my RAM!
===============================
@@ -644,10 +908,9 @@ cache a lot of recent room data and metadata in RAM in order to speed up
common requests. We'll improve this in future, but for now the easiest
way to reduce the RAM usage (at the risk of slowing things down)
is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
variable. Roughly speaking, a SYNAPSE_CACHE_FACTOR of 1.0 will max out
at around 3-4GB of resident memory - this is what we currently run the
matrix.org on. The default setting is currently 0.1, which is probably
around a ~700MB footprint. You can dial it down further to 0.02 if
desired, which targets roughly ~512MB. Conversely you can dial it up if
you need performance for lots of users and have a box with a lot of RAM.
variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
.. _`key_management`: https://matrix.org/docs/spec/server_server/unstable.html#retrieving-server-keys
View File
@@ -5,30 +5,48 @@ Before upgrading check if any special steps are required to upgrade from the
what you currently have installed to current version of synapse. The extra
instructions that may be required are listed later in this document.
If synapse was installed in a virtualenv then activate that virtualenv before
upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then run:
1. If synapse was installed in a virtualenv then activate that virtualenv before
upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then
run:
.. code:: bash
source ~/.synapse/bin/activate
2. If synapse was installed using pip then upgrade to the latest version by
running:
.. code:: bash
pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
# restart synapse
synctl restart
If synapse was installed using git then upgrade to the latest version by
running:
.. code:: bash
# Pull the latest version of the master branch.
git pull
# Update the versions of synapse's python dependencies.
python synapse/python_dependencies.py | xargs pip install --upgrade
# restart synapse
./synctl restart
To check whether your update was successful, you can check the Server header
returned by the Client-Server API:
.. code:: bash
source ~/.synapse/bin/activate
If synapse was installed using pip then upgrade to the latest version by
running:
.. code:: bash
pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
If synapse was installed using git then upgrade to the latest version by
running:
.. code:: bash
# Pull the latest version of the master branch.
git pull
# Update the versions of synapse's python dependencies.
python synapse/python_dependencies.py | xargs -n1 pip install --upgrade
# replace <host.name> with the hostname of your synapse homeserver.
# You may need to specify a port (eg, :8448) if your server is not
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to v0.15.0
====================
@@ -68,7 +86,7 @@ It has been replaced by specifying a list of application service registrations i
``homeserver.yaml``::
app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
Where ``registration-01.yaml`` looks like::
url: <String> # e.g. "https://my.application.service.com"
@@ -157,7 +175,7 @@ This release completely changes the database schema and so requires upgrading
it before starting the new version of the homeserver.
The script "database-prepare-for-0.5.0.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, which
rooms the home server was a member of and room alias mappings.
@@ -166,18 +184,18 @@ file and ask for help in #matrix:matrix.org. The upgrade process is,
unfortunately, non trivial and requires human intervention to resolve any
resulting conflicts during the upgrade process.
Before running the command, the homeserver should first be completely
shut down. To run it, simply specify the location of the database, e.g.:
./scripts/database-prepare-for-0.5.0.sh "homeserver.db"
Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.
On startup of the new version, users can either rejoin remote rooms using room
aliases or by being reinvited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in, the local HS will
automatically rejoin the room.
Upgrading to v0.4.0
@@ -236,7 +254,7 @@ automatically generate default config use::
--config-path homeserver.config \
--generate-config
This config can be edited if desired, for example to specify a different SSL
certificate to use. Once done you can run the home server using::
$ python synapse/app/homeserver.py --config-path homeserver.config
@@ -257,20 +275,20 @@ This release completely changes the database schema and so requires upgrading
it before starting the new version of the homeserver.
The script "database-prepare-for-0.0.1.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, which
rooms the home server was a member of and room alias mappings.
Before running the command, the homeserver should first be completely
shut down. To run it, simply specify the location of the database, e.g.:
./scripts/database-prepare-for-0.0.1.sh "homeserver.db"
Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.
On startup of the new version, users can either rejoin remote rooms using room
aliases or by being reinvited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in, the local HS will
automatically rejoin the room.
View File
@@ -32,7 +32,7 @@ import urlparse
import nacl.signing
import nacl.encoding
from syutil.crypto.jsonsign import verify_signed_json, SignatureVerifyException
from signedjson.sign import verify_signed_json, SignatureVerifyException
CONFIG_JSON = "cmdclient_config.json"
View File
@@ -36,15 +36,13 @@ class HttpClient(object):
the request body. This will be encoded as JSON.
Returns:
Deferred: Succeeds when we get *any* HTTP response.
The result of the deferred is a tuple of `(code, response)`,
where `response` is a dict representing the decoded JSON body.
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.
"""
pass
def get_json(self, url, args=None):
""" Get's some json from the given host homeserver and path
""" Gets some json from the given host homeserver and path
Args:
url (str): The URL to GET data from.
@@ -54,10 +52,8 @@ class HttpClient(object):
and *not* a string.
Returns:
Deferred: Succeeds when we get *any* HTTP response.
The result of the deferred is a tuple of `(code, response)`,
where `response` is a dict representing the decoded JSON body.
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.
"""
pass
@@ -214,4 +210,4 @@ class _JsonProducer(object):
pass
def stopProducing(self):
pass
View File
@@ -0,0 +1,50 @@
# Example log_config file for synapse. To enable, point `log_config` to it in
# `homeserver.yaml`, and restart synapse.
#
# This configuration will produce similar results to the defaults within
# synapse, but can be edited to give more flexibility.
version: 1

formatters:
    fmt:
        format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s'

filters:
    context:
        (): synapse.util.logcontext.LoggingContextFilter
        request: ""

handlers:
    # example output to console
    console:
        class: logging.StreamHandler
        filters: [context]

    # example output to file - to enable, edit 'root' config below.
    file:
        class: logging.handlers.RotatingFileHandler
        formatter: fmt
        filename: /var/log/synapse/homeserver.log
        maxBytes: 100000000
        backupCount: 3
        filters: [context]

root:
    level: INFO
    handlers: [console] # to use file handler instead, switch to [file]

loggers:
    synapse:
        level: INFO

    synapse.storage.SQL:
        # beware: increasing this to DEBUG will make synapse log sensitive
        # information such as access tokens.
        level: INFO

    # example of enabling debugging for a component:
    #
    # synapse.federation.transport.server:
    #     level: DEBUG
contrib/prometheus/README Normal file
View File
@@ -0,0 +1,37 @@
This directory contains some sample monitoring config for using the
'Prometheus' monitoring server against synapse.
To use it, first install prometheus by following the instructions at
http://prometheus.io/
### for Prometheus v1
Add a new job to the main prometheus.conf file:
job: {
  name: "synapse"
  target_group: {
    target: "http://SERVER.LOCATION.HERE:PORT/_synapse/metrics"
  }
}
### for Prometheus v2
Add a new job to the main prometheus.yml file:
- job_name: "synapse"
  metrics_path: "/_synapse/metrics"
  # when endpoint uses https:
  scheme: "https"
  static_configs:
    - targets: ['SERVER.LOCATION:PORT']
To use `synapse.rules` add:

rule_files:
  - "/PATH/TO/synapse-v2.rules"
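Putting the v2 pieces together, a complete minimal prometheus.yml might look
like the following sketch (the scrape interval and rules file path are
illustrative assumptions, not requirements):

global:
  # how often to poll synapse for metrics (an arbitrary example value)
  scrape_interval: 15s

rule_files:
  - "/PATH/TO/synapse-v2.rules"

scrape_configs:
  - job_name: "synapse"
    metrics_path: "/_synapse/metrics"
    static_configs:
      - targets: ["SERVER.LOCATION:PORT"]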
Metrics are disabled by default when running synapse; they must be enabled
with the 'enable-metrics' option, either in the synapse config file or as a
command-line option.
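For instance, one way to do this in homeserver.yaml is to switch on
enable_metrics and add a dedicated metrics listener - a sketch, with an
arbitrary example port:

enable_metrics: true
listeners:
  # ... your existing client/federation listeners ...
  - port: 9092
    type: http
    # keep the metrics endpoint off the public internet
    bind_addresses: ['127.0.0.1']
    resources:
      - names: [metrics]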
View File
@@ -0,0 +1,395 @@
{{ template "head" . }}
{{ template "prom_content_head" . }}
<h1>System Resources</h1>
<h3>CPU</h3>
<div id="process_resource_utime"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_resource_utime"),
expr: "rate(process_cpu_seconds_total[2m]) * 100",
name: "[[job]]",
min: 0,
max: 100,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "%",
yTitle: "CPU Usage"
})
</script>
<h3>Memory</h3>
<div id="process_resource_maxrss"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_resource_maxrss"),
expr: "process_psutil_rss:max",
name: "Maxrss",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "bytes",
yTitle: "Usage"
})
</script>
<h3>File descriptors</h3>
<div id="process_fds"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_fds"),
expr: "process_open_fds{job='synapse'}",
name: "FDs",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "",
yTitle: "Descriptors"
})
</script>
<h1>Reactor</h1>
<h3>Total reactor time</h3>
<div id="reactor_total_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_total_time"),
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / 1000",
name: "time",
max: 1,
min: 0,
renderer: "area",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Average reactor tick time</h3>
<div id="reactor_average_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_average_time"),
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / rate(python_twisted_reactor_tick_time:count[2m]) / 1000",
name: "time",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s",
yTitle: "Time"
})
</script>
<h3>Pending calls per tick</h3>
<div id="reactor_pending_calls"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_pending_calls"),
expr: "rate(python_twisted_reactor_pending_calls:total[30s])/rate(python_twisted_reactor_pending_calls:count[30s])",
name: "calls",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yTitle: "Pending Calls"
})
</script>
<h1>Storage</h1>
<h3>Queries</h3>
<div id="synapse_storage_query_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_query_time"),
expr: "rate(synapse_storage_query_time:count[2m])",
name: "[[verb]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "queries/s",
yTitle: "Queries"
})
</script>
<h3>Transactions</h3>
<div id="synapse_storage_transaction_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transaction_time"),
expr: "rate(synapse_storage_transaction_time:count[2m])",
name: "[[desc]]",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "txn/s",
yTitle: "Transactions"
})
</script>
<h3>Transaction execution time</h3>
<div id="synapse_storage_transactions_time_msec"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transactions_time_msec"),
expr: "rate(synapse_storage_transaction_time:total[2m]) / 1000",
name: "[[desc]]",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Database scheduling latency</h3>
<div id="synapse_storage_schedule_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_schedule_time"),
expr: "rate(synapse_storage_schedule_time:total[2m]) / 1000",
name: "Total latency",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Cache hit ratio</h3>
<div id="synapse_cache_ratio"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_cache_ratio"),
expr: "rate(synapse_util_caches_cache:total[2m]) * 100",
name: "[[name]]",
min: 0,
max: 100,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "%",
yTitle: "Percentage"
})
</script>
<h3>Cache size</h3>
<div id="synapse_cache_size"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_cache_size"),
expr: "synapse_util_caches_cache:size",
name: "[[name]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Items"
})
</script>
<h1>Requests</h1>
<h3>Requests by Servlet</h3>
<div id="synapse_http_server_requests_servlet"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet"),
expr: "rate(synapse_http_server_requests:servlet[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
<div id="synapse_http_server_requests_servlet_minus_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet_minus_events"),
expr: "rate(synapse_http_server_requests:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Average response times</h3>
<div id="synapse_http_server_response_time_avg"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_time_avg"),
expr: "rate(synapse_http_server_response_time:total[2m]) / rate(synapse_http_server_response_time:count[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req",
yTitle: "Response time"
})
</script>
<h3>All responses by code</h3>
<div id="synapse_http_server_responses"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_responses"),
expr: "rate(synapse_http_server_responses[2m])",
name: "[[method]] / [[code]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Error responses by code</h3>
<div id="synapse_http_server_responses_err"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_responses_err"),
expr: "rate(synapse_http_server_responses{code=~\"[45]..\"}[2m])",
name: "[[method]] / [[code]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>CPU Usage</h3>
<div id="synapse_http_server_response_ru_utime"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_ru_utime"),
expr: "rate(synapse_http_server_response_ru_utime:total[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "CPU Usage"
})
</script>
<h3>DB Usage</h3>
<div id="synapse_http_server_response_db_txn_duration"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
expr: "rate(synapse_http_server_response_db_txn_duration:total[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "DB Usage"
})
</script>
<h3>Average event send times</h3>
<div id="synapse_http_server_send_time_avg"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_send_time_avg"),
expr: "rate(synapse_http_server_response_time:total{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_time:count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req",
yTitle: "Response time"
})
</script>
<h1>Federation</h1>
<h3>Sent Messages</h3>
<div id="synapse_federation_client_sent"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_client_sent"),
expr: "rate(synapse_federation_client_sent[2m])",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Received Messages</h3>
<div id="synapse_federation_server_received"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_server_received"),
expr: "rate(synapse_federation_server_received[2m])",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Pending</h3>
<div id="synapse_federation_transaction_queue_pending"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_transaction_queue_pending"),
expr: "synapse_federation_transaction_queue_pending",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Units"
})
</script>
<h1>Clients</h1>
<h3>Notifiers</h3>
<div id="synapse_notifier_listeners"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_listeners"),
expr: "synapse_notifier_listeners",
name: "listeners",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Listeners"
})
</script>
<h3>Notified Events</h3>
<div id="synapse_notifier_notified_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_notified_events"),
expr: "rate(synapse_notifier_notified_events[2m])",
name: "events",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "events/s",
yTitle: "Event rate"
})
</script>
{{ template "prom_content_tail" . }}
{{ template "tail" }}
View File
@@ -0,0 +1,21 @@
synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method)
synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet)
synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet)
synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])
synapse_federation_client_sent{type="EDU"} = synapse_federation_client_sent_edus + 0
synapse_federation_client_sent{type="PDU"} = synapse_federation_client_sent_pdu_destinations:count + 0
synapse_federation_client_sent{type="Query"} = sum(synapse_federation_client_sent_queries) by (job)
synapse_federation_server_received{type="EDU"} = synapse_federation_server_received_edus + 0
synapse_federation_server_received{type="PDU"} = synapse_federation_server_received_pdus + 0
synapse_federation_server_received{type="Query"} = sum(synapse_federation_server_received_queries) by (job)
synapse_federation_transaction_queue_pending{type="EDU"} = synapse_federation_transaction_queue_pending_edus + 0
synapse_federation_transaction_queue_pending{type="PDU"} = synapse_federation_transaction_queue_pending_pdus + 0
View File
@@ -0,0 +1,60 @@
groups:
- name: synapse
  rules:
  - record: "synapse_federation_transaction_queue_pendingEdus:total"
    expr: "sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)"
  - record: "synapse_federation_transaction_queue_pendingPdus:total"
    expr: "sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)"
  - record: 'synapse_http_server_requests:method'
    labels:
      servlet: ""
    expr: "sum(synapse_http_server_requests) by (method)"
  - record: 'synapse_http_server_requests:servlet'
    labels:
      method: ""
    expr: 'sum(synapse_http_server_requests) by (servlet)'
  - record: 'synapse_http_server_requests:total'
    labels:
      servlet: ""
    expr: 'sum(synapse_http_server_requests:by_method) by (servlet)'
  - record: 'synapse_cache:hit_ratio_5m'
    expr: 'rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])'
  - record: 'synapse_cache:hit_ratio_30s'
    expr: 'rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])'
  - record: 'synapse_federation_client_sent'
    labels:
      type: "EDU"
    expr: 'synapse_federation_client_sent_edus + 0'
  - record: 'synapse_federation_client_sent'
    labels:
      type: "PDU"
    expr: 'synapse_federation_client_sent_pdu_destinations:count + 0'
  - record: 'synapse_federation_client_sent'
    labels:
      type: "Query"
    expr: 'sum(synapse_federation_client_sent_queries) by (job)'
  - record: 'synapse_federation_server_received'
    labels:
      type: "EDU"
    expr: 'synapse_federation_server_received_edus + 0'
  - record: 'synapse_federation_server_received'
    labels:
      type: "PDU"
    expr: 'synapse_federation_server_received_pdus + 0'
  - record: 'synapse_federation_server_received'
    labels:
      type: "Query"
    expr: 'sum(synapse_federation_server_received_queries) by (job)'
  - record: 'synapse_federation_transaction_queue_pending'
    labels:
      type: "EDU"
    expr: 'synapse_federation_transaction_queue_pending_edus + 0'
  - record: 'synapse_federation_transaction_queue_pending'
    labels:
      type: "PDU"
    expr: 'synapse_federation_transaction_queue_pending_pdus + 0'
View File
@@ -1,5 +1,5 @@
# This assumes that Synapse has been installed as a system package
# (e.g. https://aur.archlinux.org/packages/matrix-synapse/ for ArchLinux)
# (e.g. https://www.archlinux.org/packages/community/any/matrix-synapse/ for ArchLinux)
# rather than in a user home directory or similar under virtualenv.
[Unit]
@@ -9,9 +9,10 @@ Description=Synapse Matrix homeserver
Type=simple
User=synapse
Group=synapse
EnvironmentFile=-/etc/sysconfig/synapse
WorkingDirectory=/var/lib/synapse
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml --log-config=/etc/synapse/log_config.yaml
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml
ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml
[Install]
WantedBy=multi-user.target
View File
@@ -10,13 +10,13 @@ https://developers.google.com/recaptcha/
Setting ReCaptcha Keys
----------------------
The keys are a config option on the home server config. If they are not
visible, you can generate them via --generate-config. Set the following value:
The keys are a config option on the home server config. If they are not
visible, you can generate them via --generate-config. Set the following value::
recaptcha_public_key: YOUR_PUBLIC_KEY
recaptcha_private_key: YOUR_PRIVATE_KEY
In addition, you MUST enable captchas via:
In addition, you MUST enable captchas via::
enable_registration_captcha: true
@@ -25,7 +25,5 @@ Configuring IP used for auth
The ReCaptcha API requires that the IP address of the user who solved the
captcha is sent. If the client is connecting through a proxy or load balancer,
it may be required to use the X-Forwarded-For (XFF) header instead of the origin
IP address. This can be configured as an option on the home server like so:
captcha_ip_origin_is_x_forwarded: true
IP address. This can be configured using the x_forwarded directive in the
listeners section of the homeserver.yaml configuration file.
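For example, a minimal sketch of such a listener (other listener options
omitted for brevity)::

    listeners:
      - port: 8008
        type: http
        x_forwarded: true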
View File
@@ -4,6 +4,8 @@ Purge History API
The purge history API allows server admins to purge historic events from their
database, reclaiming disk space.
**NB!** This will not delete local events (locally sent messages, content, etc.)
from the database, but it will remove much of the metadata about them, which
dramatically reduces on-disk space usage.
Depending on the amount of history being purged a call to the API may take
several minutes or longer. During this period users will not be able to
paginate further back in the room from the point being purged from.
View File
@@ -2,15 +2,13 @@ Purge Remote Media API
======================
The purge remote media API allows server admins to purge old cached remote
media.
The API is::
POST /_matrix/client/r0/admin/purge_media_cache
POST /_matrix/client/r0/admin/purge_media_cache?before_ts=<unix_timestamp_in_ms>&access_token=<access_token>
{
"before_ts": <unix_timestamp_in_ms>
}
{}
Which will remove all cached media that was last accessed before
``<unix_timestamp_in_ms>``.
View File
@@ -0,0 +1,73 @@
Query Account
=============
This API returns information about a specific user account.
The api is::
GET /_matrix/client/r0/admin/whois/<user_id>
including an ``access_token`` of a server admin.
It returns a JSON body like the following:
.. code:: json
{
    "user_id": "<user_id>",
    "devices": {
        "": {
            "sessions": [
                {
                    "connections": [
                        {
                            "ip": "1.2.3.4",
                            "last_seen": 1417222374433,
                            "user_agent": "Mozilla/5.0 ..."
                        },
                        {
                            "ip": "1.2.3.10",
                            "last_seen": 1417222374500,
                            "user_agent": "Dalvik/2.1.0 ..."
                        }
                    ]
                }
            ]
        }
    }
}
``last_seen`` is measured in milliseconds since the Unix epoch.
Deactivate Account
==================
This API deactivates an account. It removes active access tokens, resets the
password, and deletes third-party IDs (to prevent the user requesting a
password reset).
The api is::
POST /_matrix/client/r0/admin/deactivate/<user_id>
including an ``access_token`` of a server admin, and an empty request body.
Reset password
==============
Changes the password of another user.
The api is::
POST /_matrix/client/r0/admin/reset_password/<user_id>
with a body of:
.. code:: json
{
"new_password": "<secret>"
}
including an ``access_token`` of a server admin.
View File
@@ -1,52 +1,119 @@
Basically, PEP8
- Everything should comply with PEP8. Code should pass
``pep8 --max-line-length=100`` without any warnings.
- NEVER tabs. 4 spaces to indent.
- Max line width: 79 chars (with flexibility to overflow by a "few chars" if
- **Indenting**:
- NEVER tabs. 4 spaces to indent.
- follow PEP8; either hanging indent or multiline-visual indent depending
on the size and shape of the arguments and what makes more sense to the
author. In other words, both this::
print("I am a fish %s" % "moo")
and this::
print("I am a fish %s" %
"moo")
and this::
print(
"I am a fish %s" %
"moo",
)
...are valid, although given each one takes up 2x more vertical space than
the previous, it's up to the author's discretion as to which layout makes
most sense for their function invocation. (e.g. if they want to add
comments per-argument, or put expressions in the arguments, or group
related arguments together, or want to deliberately extend or preserve
vertical/horizontal space)
- **Line length**:
Max line length is 79 chars (with flexibility to overflow by a "few chars" if
the overflowing content is not semantically significant and avoids an
explosion of vertical whitespace).
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes.
- Use parentheses instead of '\\' for line continuation wherever possible
(which is pretty much everywhere)
- There should be max a single new line between:
Use parentheses instead of ``\`` for line continuation wherever possible
(which is pretty much everywhere).
- **Naming**:
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
- **Blank lines**:
- There should be max a single new line between:
- statements
- functions in a class
- There should be two new lines between:
- There should be two new lines between:
- definitions in a module (e.g., between different classes)
- There should be spaces where spaces should be and not where there shouldn't be:
- a single space after a comma
- a single space before and after for '=' when used as assignment
- no spaces before and after for '=' for default values and keyword arguments.
- Indenting must follow PEP8; either hanging indent or multiline-visual indent
depending on the size and shape of the arguments and what makes more sense to
the author. In other words, both this::
print("I am a fish %s" % "moo")
- **Whitespace**:
and this::
There should be spaces where spaces should be and not where there shouldn't
be:
print("I am a fish %s" %
"moo")
- a single space after a comma
- a single space before and after for '=' when used as assignment
- no spaces before and after for '=' for default values and keyword arguments.
and this::
- **Comments**: should follow the `google code style
<http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with `sphinx
<http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples
<http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
in the sphinx documentation.
print(
"I am a fish %s" %
"moo"
)
- **Imports**:
...are valid, although given each one takes up 2x more vertical space than
the previous, it's up to the author's discretion as to which layout makes most
sense for their function invocation. (e.g. if they want to add comments
per-argument, or put expressions in the arguments, or group related arguments
together, or want to deliberately extend or preserve vertical/horizontal
space)
- Prefer to import classes and functions than packages or modules.
Comments should follow the `google code style <http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with
`sphinx <http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
in the sphinx documentation.
Example::
Code should pass pep8 --max-line-length=100 without any warnings.
from synapse.types import UserID
...
user_id = UserID(local, server)
is preferred over::
from synapse import types
...
user_id = types.UserID(local, server)
(or any other variant).
This goes against the advice in the Google style guide, but it means that
errors in the name are caught early (at import time).
- Multiple imports from the same package can be combined onto one line::
from synapse.types import GroupID, RoomID, UserID
An effort should be made to keep the individual imports in alphabetical
order.
If the list becomes long, wrap it with parentheses and split it over
multiple lines.
- As per `PEP-8 <https://www.python.org/dev/peps/pep-0008/#imports>`_,
imports should be grouped in the following order, with a blank line between
each group:
1. standard library imports
2. related third party imports
3. local application/library specific imports
- Imports within each group should be sorted alphabetically by module name.
- Avoid wildcard imports (``from synapse.types import *``) and relative
imports (``from .types import UserID``).
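
For example, the top of a module following the import rules above might look
like this (module names purely illustrative)::

    import logging
    import os

    from twisted.internet import defer

    from synapse.types import GroupID, RoomID, UserID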

View File

@@ -1,10 +1,442 @@
Log contexts
============

.. contents::
To help track the processing of individual requests, synapse uses a
'log context' to track which request it is handling at any given moment. This
is done via a thread-local variable; a ``logging.Filter`` is then used to fish
the information back out of the thread-local variable and add it to each log
record.
Logcontexts are also used for CPU and database accounting, so that we can track
which requests were responsible for high CPU use or database activity.
The ``synapse.util.logcontext`` module provides facilities for managing the
current log context (as well as providing the ``LoggingContextFilter`` class).
Deferreds make the whole thing complicated, so this document describes how it
all works, and how to write code which follows the rules.
Logcontexts without Deferreds
-----------------------------
In the absence of any Deferred voodoo, things are simple enough. As with any
code of this nature, the rule is that our function should leave things as it
found them:
.. code:: python
from synapse.util import logcontext # omitted from future snippets
def handle_request(request_id):
request_context = logcontext.LoggingContext()
calling_context = logcontext.LoggingContext.current_context()
logcontext.LoggingContext.set_current_context(request_context)
try:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
finally:
logcontext.LoggingContext.set_current_context(calling_context)
def do_request_handling():
logger.debug("phew") # this will be logged against request_id
LoggingContext implements the context management methods, so the above can be
written much more succinctly as:
.. code:: python
def handle_request(request_id):
with logcontext.LoggingContext() as request_context:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
def do_request_handling():
logger.debug("phew")
Using logcontexts with Deferreds
--------------------------------
Deferreds — and in particular, ``defer.inlineCallbacks`` — break
the linear flow of code so that there is no longer a single entry point where
we should set the logcontext and a single exit point where we should remove it.
Consider the example above, where ``do_request_handling`` needs to do some
blocking operation, and returns a deferred:
.. code:: python
@defer.inlineCallbacks
def handle_request(request_id):
with logcontext.LoggingContext() as request_context:
request_context.request = request_id
yield do_request_handling()
logger.debug("finished")
In the above flow:
* The logcontext is set
* ``do_request_handling`` is called, and returns a deferred
* ``handle_request`` yields the deferred
* The ``inlineCallbacks`` wrapper of ``handle_request`` returns a deferred
So we have stopped processing the request (and will probably go on to start
processing the next), without clearing the logcontext.
To circumvent this problem, synapse code assumes that, wherever you have a
deferred, you will want to yield on it. To that end, wherever functions return
a deferred, we adopt the following conventions:
**Rules for functions returning deferreds:**
* If the deferred is already complete, the function returns with the same
logcontext it started with.
* If the deferred is incomplete, the function clears the logcontext before
returning; when the deferred completes, it restores the logcontext before
running any callbacks.
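To illustrate, a hand-rolled sketch of what these rules mean for a function
returning an incomplete deferred (the helpers described later do this for you;
``start_some_io`` is hypothetical):

.. code:: python

    from twisted.internet import defer

    def rule_following_operation():
        calling_context = logcontext.LoggingContext.current_context()
        d = defer.Deferred()

        def restore_context(result):
            # restore the logcontext before running any callbacks
            logcontext.LoggingContext.set_current_context(calling_context)
            return result
        d.addBoth(restore_context)

        start_some_io(d)  # hypothetical: fires ``d`` when the IO completes

        # the deferred is incomplete, so clear the logcontext before
        # returning control to the reactor
        logcontext.LoggingContext.set_current_context(
            logcontext.LoggingContext.sentinel
        )
        return d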
That sounds complicated, but actually it means a lot of code (including the
example above) "just works". There are two cases:
* If ``do_request_handling`` returns a completed deferred, then the logcontext
will still be in place. In this case, execution will continue immediately
after the ``yield``; the "finished" line will be logged against the right
context, and the ``with`` block restores the original context before we
return to the caller.
* If the returned deferred is incomplete, ``do_request_handling`` clears the
logcontext before returning. The logcontext is therefore clear when
``handle_request`` yields the deferred. At that point, the ``inlineCallbacks``
wrapper adds a callback to the deferred, and returns another (incomplete)
deferred to the caller, and it is safe to begin processing the next request.
Once ``do_request_handling``'s deferred completes, it will reinstate the
logcontext, before running the callback added by the ``inlineCallbacks``
wrapper. That callback runs the second half of ``handle_request``, so again
the "finished" line will be logged against the right
context, and the ``with`` block restores the original context.
As an aside, it's worth noting that ``handle_request`` follows our rules -
though that only matters if the caller has its own logcontext which it cares
about.
The following sections describe pitfalls and helpful patterns when implementing
these rules.
Always yield your deferreds
---------------------------
Whenever you get a deferred back from a function, you should ``yield`` on it
as soon as possible. (Returning it directly to your caller is ok too, if you're
not doing ``inlineCallbacks``.) Do not pass go; do not do any logging; do not
call any other functions.
.. code:: python
@defer.inlineCallbacks
def fun():
logger.debug("starting")
yield do_some_stuff() # just like this
d = more_stuff()
result = yield d # also fine, of course
defer.returnValue(result)
def nonInlineCallbacksFun():
logger.debug("just a wrapper really")
return do_some_stuff() # this is ok too - the caller will yield on
# it anyway.
Provided this pattern is followed all the way back up to the callchain to where
the logcontext was set, this will make things work out ok: provided
``do_some_stuff`` and ``more_stuff`` follow the rules above, then so will
``fun`` (as wrapped by ``inlineCallbacks``) and ``nonInlineCallbacksFun``.
It's all too easy to forget to ``yield``: for instance if we forgot that
``do_some_stuff`` returned a deferred, we might plough on regardless. This
leads to a mess; it will probably work itself out eventually, but not before
a load of stuff has been logged against the wrong context. (Normally, other
things will break, more obviously, if you forget to ``yield``, so this tends
not to be a major problem in practice.)
Of course sometimes you need to do something a bit fancier with your Deferreds
- not all code follows the linear A-then-B-then-C pattern. Notes on
implementing more complex patterns are in later sections.
Where you create a new Deferred, make it follow the rules
---------------------------------------------------------
Most of the time, a Deferred comes from another synapse function. Sometimes,
though, we need to make up a new Deferred, or we get a Deferred back from
external code. We need to make it follow our rules.
The easy way to do it is with a combination of ``defer.inlineCallbacks``, and
``logcontext.PreserveLoggingContext``. Suppose we want to implement ``sleep``,
which returns a deferred which will run its callbacks after a given number of
seconds. That might look like:
.. code:: python
# not a logcontext-rules-compliant function
def get_sleep_deferred(seconds):
d = defer.Deferred()
reactor.callLater(seconds, d.callback, None)
return d
That doesn't follow the rules, but we can fix it by wrapping it with
``PreserveLoggingContext`` and ``yield``-ing on it:
.. code:: python
@defer.inlineCallbacks
def sleep(seconds):
with PreserveLoggingContext():
yield get_sleep_deferred(seconds)
This technique works equally for external functions which return deferreds,
or deferreds we have made ourselves.
You can also use ``logcontext.make_deferred_yieldable``, which just does the
boilerplate for you, so the above could be written:
.. code:: python
def sleep(seconds):
return logcontext.make_deferred_yieldable(get_sleep_deferred(seconds))
Fire-and-forget
---------------
Sometimes you want to fire off a chain of execution, but not wait for its
result. That might look a bit like this:
.. code:: python
@defer.inlineCallbacks
def do_request_handling():
yield foreground_operation()
# *don't* do this
background_operation()
logger.debug("Request handling complete")
@defer.inlineCallbacks
def background_operation():
yield first_background_step()
logger.debug("Completed first step")
yield second_background_step()
logger.debug("Completed second step")
The above code does a couple of steps in the background after
``do_request_handling`` has finished. The log lines are still logged against
the ``request_context`` logcontext, which may or may not be desirable. There
are two big problems with the above, however. The first problem is that, if
``background_operation`` returns an incomplete Deferred, it will expect its
caller to ``yield`` immediately, so will have cleared the logcontext. In this
example, that means that 'Request handling complete' will be logged without any
context.
The second problem, which is potentially even worse, is that when the Deferred
returned by ``background_operation`` completes, it will restore the original
logcontext. There is nothing waiting on that Deferred, so the logcontext will
leak into the reactor and possibly get attached to some arbitrary future
operation.
There are two potential solutions to this.
One option is to surround the call to ``background_operation`` with a
``PreserveLoggingContext`` call. That will reset the logcontext before
starting ``background_operation`` (so the context restored when the deferred
completes will be the empty logcontext), and will restore the current
logcontext before continuing the foreground process:
.. code:: python
@defer.inlineCallbacks
def do_request_handling():
yield foreground_operation()
# start background_operation off in the empty logcontext, to
# avoid leaking the current context into the reactor.
with PreserveLoggingContext():
background_operation()
# this will now be logged against the request context
logger.debug("Request handling complete")
Obviously that option means that the operations done in
``background_operation`` would not be logged against a logcontext (though
that might be fixed by setting a different logcontext via a ``with
LoggingContext(...)`` in ``background_operation``).
The second option is to use ``logcontext.preserve_fn``, which wraps a function
so that it doesn't reset the logcontext even when it returns an incomplete
deferred, and adds a callback to the returned deferred to reset the
logcontext. In other words, it turns a function that follows the Synapse rules
about logcontexts and Deferreds into one which behaves more like an external
function — the opposite operation to that described in the previous section.
It can be used like this:
.. code:: python
@defer.inlineCallbacks
def do_request_handling():
yield foreground_operation()
logcontext.preserve_fn(background_operation)()
# this will now be logged against the request context
logger.debug("Request handling complete")
Passing synapse deferreds into third-party functions
----------------------------------------------------
A typical example of this is where we want to collect together two or more
deferred via ``defer.gatherResults``:
.. code:: python
d1 = operation1()
d2 = operation2()
d3 = defer.gatherResults([d1, d2])
This is really a variation of the fire-and-forget problem above, in that we are
firing off ``d1`` and ``d2`` without yielding on them. The difference
is that we now have third-party code attached to their callbacks. Anyway, either
technique given in the `Fire-and-forget`_ section will work.
Of course, the new Deferred returned by ``gatherResults`` needs to be wrapped
in order to make it follow the logcontext rules before we can yield it, as
described in `Where you create a new Deferred, make it follow the rules`_.
So, option one: reset the logcontext before starting the operations to be
gathered:
.. code:: python
@defer.inlineCallbacks
def do_request_handling():
with PreserveLoggingContext():
d1 = operation1()
d2 = operation2()
result = yield defer.gatherResults([d1, d2])
In this case particularly, though, option two, of using
``logcontext.preserve_fn`` almost certainly makes more sense, so that
``operation1`` and ``operation2`` are both logged against the original
logcontext. This looks like:
.. code:: python
@defer.inlineCallbacks
def do_request_handling():
d1 = logcontext.preserve_fn(operation1)()
d2 = logcontext.preserve_fn(operation2)()
with PreserveLoggingContext():
result = yield defer.gatherResults([d1, d2])
Was all this really necessary?
------------------------------
The conventions used work fine for a linear flow where everything happens in
series via ``defer.inlineCallbacks`` and ``yield``, but are certainly tricky to
follow for any more exotic flows. It's hard not to wonder if we could have done
something else.
We're not going to rewrite Synapse now, so the following is entirely of
academic interest, but I'd like to record some thoughts on an alternative
approach.
I briefly prototyped some code following an alternative set of rules. I think
it would work, but I certainly didn't get as far as thinking how it would
interact with concepts as complicated as the cache descriptors.
My alternative rules were:
* functions always preserve the logcontext of their caller, whether or not they
are returning a Deferred.
* Deferreds returned by synapse functions run their callbacks in the same
context as the function was originally called in.
The main point of this scheme is that everywhere that sets the logcontext is
responsible for clearing it before returning control to the reactor.
So, for example, if you were the function which started a ``with
LoggingContext`` block, you wouldn't ``yield`` within it — instead you'd start
off the background process, and then leave the ``with`` block to wait for it:
.. code:: python
def handle_request(request_id):
with logcontext.LoggingContext() as request_context:
request_context.request = request_id
d = do_request_handling()
def cb(r):
logger.debug("finished")
d.addCallback(cb)
return d
(in general, mixing ``with LoggingContext`` blocks and
``defer.inlineCallbacks`` in the same function leads to slightly
counter-intuitive code, under this scheme).
Because we leave the original ``with`` block as soon as the Deferred is
returned (as opposed to waiting for it to be resolved, as we do today), the
logcontext is cleared before control passes back to the reactor; so if there is
some code within ``do_request_handling`` which needs to wait for a Deferred to
complete, there is no need for it to worry about clearing the logcontext before
doing so:
.. code:: python
def handle_request():
r = do_some_stuff()
r.addCallback(do_some_more_stuff)
return r
— and provided ``do_some_stuff`` follows the rules of returning a Deferred which
runs its callbacks in the original logcontext, all is happy.
The business of a Deferred which runs its callbacks in the original logcontext
isn't hard to achieve — we have it today, in the shape of
``logcontext._PreservingContextDeferred``:
.. code:: python
def do_some_stuff():
deferred = do_some_io()
pcd = _PreservingContextDeferred(LoggingContext.current_context())
deferred.chainDeferred(pcd)
return pcd
It turns out that, thanks to the way that Deferreds chain together, we
automatically get the property of a context-preserving deferred with
``defer.inlineCallbacks``, provided the final Deferred the function ``yields``
on has that property. So we can just write:
.. code:: python
@defer.inlineCallbacks
def handle_request():
yield do_some_stuff()
yield do_some_more_stuff()
To conclude: I think this scheme would have worked equally well, with less
danger of messing it up, and probably made some more esoteric code easier to
write. But again — changing the conventions of the entire Synapse codebase is
not a sensible option for the marginal improvement offered.

View File

@@ -1,50 +1,68 @@
How to monitor Synapse metrics using Prometheus
===============================================
1. Install prometheus:

   Follow instructions at http://prometheus.io/docs/introduction/install/

2. Enable synapse metrics:

   Simply setting a (local) port number will enable it. Pick a port.
   prometheus itself defaults to 9090, so starting just above that for
   locally monitored services seems reasonable. E.g. 9092:

   Add to homeserver.yaml::

     metrics_port: 9092

   Also ensure that ``enable_metrics`` is set to ``True``.

   Restart synapse.

3. Add a prometheus target for synapse.

   It needs to set the ``metrics_path`` to a non-default value (under
   ``scrape_configs``)::

    - job_name: "synapse"
      metrics_path: "/_synapse/metrics"
      static_configs:
        - targets: ["my.server.here:9092"]

   If your prometheus is older than 1.5.2, you will need to replace
   ``static_configs`` in the above with ``target_groups``.

   Restart prometheus.

Standard Metric Names
---------------------

As of synapse version 0.18.2, the format of the process-wide metrics has been
changed to fit prometheus standard naming conventions. Additionally the units
have been changed to seconds, from milliseconds.

================================== =============================
New name                           Old name
---------------------------------- -----------------------------
process_cpu_user_seconds_total     process_resource_utime / 1000
process_cpu_system_seconds_total   process_resource_stime / 1000
process_open_fds (no 'type' label) process_fds
================================== =============================
The python-specific counts of garbage collector performance have been renamed.
=========================== ======================
New name Old name
--------------------------- ----------------------
python_gc_time reactor_gc_time
python_gc_unreachable_total reactor_gc_unreachable
python_gc_counts reactor_gc_counts
=========================== ======================
The twisted-specific reactor metrics have been renamed.
==================================== =====================
New name Old name
------------------------------------ ---------------------
python_twisted_reactor_pending_calls reactor_pending_calls
python_twisted_reactor_tick_time reactor_tick_time
==================================== =====================

View File

@@ -0,0 +1,99 @@
Password auth provider modules
==============================
Password auth providers offer a way for server administrators to integrate
their Synapse installation with an existing authentication system.
A password auth provider is a Python class which is dynamically loaded into
Synapse, and provides a number of methods by which it can integrate with the
authentication system.
This document serves as a reference for those looking to implement their own
password auth providers.
Required methods
----------------
Password auth provider classes must provide the following methods:
*class* ``SomeProvider.parse_config``\(*config*)
This method is passed the ``config`` object for this module from the
homeserver configuration file.
It should perform any appropriate sanity checks on the provided
configuration, and return an object which is then passed into ``__init__``.
*class* ``SomeProvider``\(*config*, *account_handler*)
The constructor is passed the config object returned by ``parse_config``,
and a ``synapse.module_api.ModuleApi`` object which allows the
password provider to check if accounts exist and/or create new ones.
Optional methods
----------------
Password auth provider classes may optionally provide the following methods.
*class* ``SomeProvider.get_db_schema_files``\()
This method, if implemented, should return an Iterable of ``(name,
stream)`` pairs of database schema files. Each file is applied in turn at
initialisation, and a record is then made in the database so that it is
not re-applied on the next start.
``someprovider.get_supported_login_types``\()
This method, if implemented, should return a ``dict`` mapping from a login
type identifier (such as ``m.login.password``) to an iterable giving the
fields which must be provided by the user in the submission to the
``/login`` api. These fields are passed in the ``login_dict`` dictionary
to ``check_auth``.
For example, if a password auth provider wants to implement a custom login
type of ``com.example.custom_login``, where the client is expected to pass
the fields ``secret1`` and ``secret2``, the provider should implement this
method and return the following dict::
{"com.example.custom_login": ("secret1", "secret2")}
``someprovider.check_auth``\(*username*, *login_type*, *login_dict*)
This method is the one that does the real work. If implemented, it will be
called for each login attempt where the login type matches one of the keys
returned by ``get_supported_login_types``.
It is passed the (possibly unqualified) ``user`` field provided by the client,
the login type, and a dictionary of login secrets passed by the client.
The method should return a Twisted ``Deferred`` object, which resolves to
the canonical ``@localpart:domain`` user id if authentication is successful,
and ``None`` if not.
Alternatively, the ``Deferred`` can resolve to a ``(str, func)`` tuple, in
which case the second field is a callback which will be called with the
result from the ``/login`` call (including ``access_token``, ``device_id``,
etc.)
``someprovider.check_password``\(*user_id*, *password*)
This method provides a simpler interface than ``get_supported_login_types``
and ``check_auth`` for password auth providers that just want to provide a
mechanism for validating ``m.login.password`` logins.
If implemented, it will be called to check logins with an
``m.login.password`` login type. It is passed a qualified
``@localpart:domain`` user id, and the password provided by the user.
The method should return a Twisted ``Deferred`` object, which resolves to
``True`` if authentication is successful, and ``False`` if not.
``someprovider.on_logged_out``\(*user_id*, *device_id*, *access_token*)
This method, if implemented, is called when a user logs out. It is passed
the qualified user ID, the ID of the deactivated device (if any: access
tokens are occasionally created without an associated device ID), and the
(now deactivated) access token.
It may return a Twisted ``Deferred`` object; the logout request will wait
for the deferred to complete but the result is ignored.
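
As an illustrative sketch only (``shared_password`` is an invented config
setting; a real provider would check against a proper backend), a minimal
provider validating ``m.login.password`` logins might look like::

    from twisted.internet import defer

    class DummyPasswordProvider(object):
        def __init__(self, config, account_handler):
            # config is whatever parse_config returned
            self.account_handler = account_handler
            self.shared_password = config["shared_password"]

        @staticmethod
        def parse_config(config):
            if "shared_password" not in config:
                raise Exception("shared_password is required")
            return config

        def check_password(self, user_id, password):
            # user_id is the qualified @localpart:domain user id
            return defer.succeed(password == self.shared_password)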

View File

@@ -1,6 +1,8 @@
Using Postgres
--------------
Postgres version 9.4 or later is known to work.
Set up database
===============
@@ -112,9 +114,9 @@ script one last time, e.g. if the SQLite database is at ``homeserver.db``
run::
synapse_port_db --sqlite-database homeserver.db \
--postgres-config homeserver-postgres.yaml
Once that has completed, change the synapse config to point at the PostgreSQL
database configuration file ``homeserver-postgres.yaml`` (i.e. rename it to
``homeserver.yaml``) and restart synapse. Synapse should now be running against
PostgreSQL.

View File

@@ -26,28 +26,10 @@ expose the append-only log to the readers should be fairly minimal.
Architecture
------------
The Replication Protocol
~~~~~~~~~~~~~~~~~~~~~~~~
See ``tcp_replication.rst``
The Slaved DataStore

View File

@@ -50,7 +50,7 @@ master_doc = 'index'
# General information about the project.
project = u'Synapse'
copyright = u'Copyright 2014-2017 OpenMarket Ltd, 2017 Vector Creations Ltd, 2017 New Vector Ltd'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the

docs/tcp_replication.rst Normal file
View File

@@ -0,0 +1,223 @@
TCP Replication
===============
Motivation
----------
Previously the workers used an HTTP long poll mechanism to get updates from the
master, which had the problem of causing a lot of duplicate work on the server.
This TCP protocol replaces those APIs with the aim of increased efficiency.
Overview
--------
The protocol is based on fire-and-forget, line-based commands. An example flow
would be (where '>' indicates master to worker and '<' worker to master flows)::
> SERVER example.com
< REPLICATE events 53
> RDATA events 54 ["$foo1:bar.com", ...]
> RDATA events 55 ["$foo4:bar.com", ...]
The example shows the server accepting a new connection and sending its identity
with the ``SERVER`` command, followed by the client asking to subscribe to the
``events`` stream from the token ``53``. The server then periodically sends ``RDATA``
commands which have the format ``RDATA <stream_name> <token> <row>``, where the
format of ``<row>`` is defined by the individual streams.
Error reporting happens by either the client or server sending an ``ERROR``
command, and usually the connection will be closed.
Since the protocol is a simple line-based one, it's possible to connect to the
server manually using a tool like netcat (a sketch of such a session follows
the list below). A few things should be noted when using the protocol manually:
* When subscribing to a stream using ``REPLICATE``, the special token ``NOW`` can
be used to get all future updates. The special stream name ``ALL`` can be used
with ``NOW`` to subscribe to all available streams.
* The federation stream is only available if federation sending has been
disabled on the main process.
* The server will only time connections out that have sent a ``PING`` command.
If a ping is sent then the connection will be closed if no further commands
are received within 15s. Both the client and server protocol implementations
will send an initial PING on connection and ensure at least one command every
5s is sent (not necessarily ``PING``).
* ``RDATA`` commands *usually* include a numeric token, however if the stream
has multiple rows to replicate per token the server will send multiple
``RDATA`` commands, with all but the last having a token of ``batch``. See
the documentation on ``commands.RdataCommand`` for further details.
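For instance, a minimal manual client might look like this in Python
(``localhost:9092`` is an assumed replication listener address)::

    import socket

    sock = socket.create_connection(("localhost", 9092))
    conn = sock.makefile("rw")
    conn.write("NAME manual-test\n")      # optional friendly name
    conn.write("PING 1490197665618\n")    # enables the idle timeouts
    conn.write("REPLICATE events NOW\n")  # future updates to the events stream
    conn.flush()
    for line in conn:
        print(line.strip())               # SERVER, POSITION, RDATA, PING, ...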
Architecture
------------
The basic structure of the protocol is line based, where the initial word of
each line specifies the command. The rest of the line is parsed based on the
command. For example, the ``RDATA`` command is defined as::
RDATA <stream_name> <token> <row_json>
(Note that ``<row_json>`` may contain spaces, but cannot contain newlines.)
Blank lines are ignored.
Keep alives
~~~~~~~~~~~
Both sides are expected to send at least one command every 5s or so, and
should send a ``PING`` command if necessary. If either side does not receive a
command within e.g. 15s then the connection should be closed.
Because the server may be connected to manually using e.g. netcat, the timeouts
aren't enabled until an initial ``PING`` command is seen. Both the client and
server implementations below send a ``PING`` command immediately on connection to
ensure the timeouts are enabled.
This ensures that both sides can quickly realize if the TCP connection has gone away
and handle the situation appropriately.
Start up
~~~~~~~~
When a new connection is made, the server:
* Sends a ``SERVER`` command, which includes the identity of the server, allowing
the client to detect if it's connected to the expected server
* Sends a ``PING`` command as above, to enable the client to time out connections
promptly.
The client:
* Sends a ``NAME`` command, allowing the server to associate a human friendly
name with the connection. This is optional.
* Sends a ``PING`` as above
* For each stream the client wishes to subscribe to it sends a ``REPLICATE``
with the stream_name and token it wants to subscribe from.
* On receipt of a ``SERVER`` command, checks that the server name matches the
expected server name.
Error handling
~~~~~~~~~~~~~~
If either side detects an error it can send an ``ERROR`` command and close the
connection.
If the client side loses the connection to the server it should reconnect,
following the steps above.
Congestion
~~~~~~~~~~
If the server sends messages faster than the client can consume them the server
will first buffer a (fairly large) number of commands and then disconnect the
client. This ensures that we don't queue up an unbounded number of commands in
memory and gives us a potential opportunity to squawk loudly. When/if the client
recovers it can reconnect to the server and ask for missed messages.
Reliability
~~~~~~~~~~~
In general the replication stream should be considered an unreliable transport
since e.g. commands are not resent if the connection disappears.
The exception to that are the replication streams, i.e. RDATA commands, since
these include tokens which can be used to restart the stream on connection
errors.
The client should keep track of the token in the last RDATA command received
for each stream so that on reconnection it can start streaming from the correct
place. Note: not all RDATA have valid tokens due to batching. See
``RdataCommand`` for more details.
Example
~~~~~~~
An example interaction is shown below. Each line is prefixed with '>' or '<' to
indicate which side is sending; these prefixes are *not* included on the wire::
* connection established *
> SERVER localhost:8823
> PING 1490197665618
< NAME synapse.app.appservice
< PING 1490197665618
< REPLICATE events 1
< REPLICATE backfill 1
< REPLICATE caches 1
> POSITION events 1
> POSITION backfill 1
> POSITION caches 1
> RDATA caches 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
> RDATA events 14 ["$149019767112vOHxz:localhost:8823",
"!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null]
< PING 1490197675618
> ERROR server stopping
* connection closed by server *
The ``POSITION`` command sent by the server is used to set the clients position
without needing to send data with the ``RDATA`` command.
An example of a batched set of ``RDATA`` is::
> RDATA caches batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
> RDATA caches batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
> RDATA caches batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
> RDATA caches 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
In this case the client shouldn't advance its caches token until it sees the
last ``RDATA``.
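
A sketch of the corresponding client-side handling (``process_rows`` and
``save_token`` are hypothetical helpers)::

    pending_rows = []

    def on_rdata(stream_name, token, row):
        # rows in a batch carry the placeholder token "batch"; only the
        # final RDATA carries a real token, so buffer until we see one
        pending_rows.append(row)
        if token == "batch":
            return
        process_rows(stream_name, pending_rows)  # hypothetical handler
        save_token(stream_name, token)           # persist the resume position
        del pending_rows[:]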
List of commands
~~~~~~~~~~~~~~~~
The list of valid commands, with which side can send it: server (S) or client (C):
SERVER (S)
Sent at the start to identify which server the client is talking to
RDATA (S)
A single update in a stream
POSITION (S)
The position of the stream has been updated
ERROR (S, C)
There was an error
PING (S, C)
Sent periodically to ensure the connection is still alive
NAME (C)
Sent at the start by client to inform the server who they are
REPLICATE (C)
Asks the server to replicate a given stream
USER_SYNC (C)
A user has started or stopped syncing
FEDERATION_ACK (C)
Acknowledge receipt of some federation data
REMOVE_PUSHER (C)
Inform the server a pusher should be removed
INVALIDATE_CACHE (C)
Inform the server a cache should be invalidated
SYNC (S, C)
Used exclusively in tests
See ``synapse/replication/tcp/commands.py`` for a detailed description and the
format of each command.

View File

@@ -50,14 +50,37 @@ You may be able to setup coturn via your package manager, or set it up manually
pwgen -s 64 1
5. Consider your security settings. TURN lets users request a relay
   which will connect to arbitrary IP addresses and ports. At the least
   we recommend:

       # VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
       no-tcp-relay

       # don't let the relay ever try to connect to private IP address ranges within your network (if any)
       # given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
       denied-peer-ip=10.0.0.0-10.255.255.255
       denied-peer-ip=192.168.0.0-192.168.255.255
       denied-peer-ip=172.16.0.0-172.31.255.255

       # special case the turn server itself so that client->TURN->TURN->client flows work
       allowed-peer-ip=10.0.0.1

       # consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
       user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
       total-quota=1200

   Ideally coturn should refuse to relay traffic which isn't SRTP;
   see https://github.com/matrix-org/synapse/issues/2009

6. Ensure your firewall allows traffic into the TURN server on
   the ports you've configured it to listen on (remember to allow
   both TCP and UDP TURN traffic)

7. If you've configured coturn to support TLS/DTLS, generate or
   import your private key and certificate.

8. Start the turn server::
bin/turnserver -o
@@ -83,12 +106,19 @@ Your home server configuration file needs the following extra keys:
to refresh credentials. The TURN REST API specification recommends
one day (86400000).
4. "turn_allow_guests": Whether to allow guest users to use the TURN
server. This is enabled by default, as otherwise VoIP will not
work reliably for guests. However, it does introduce a security risk
as it lets guests connect to arbitrary endpoints without having gone
through a CAPTCHA or similar to register a real account.
As an example, here is the relevant section of the config file for
matrix.org::
turn_uris: [ "turn:turn.matrix.org:3478?transport=udp", "turn:turn.matrix.org:3478?transport=tcp" ]
turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons
turn_user_lifetime: 86400000
turn_allow_guests: True
Now, restart synapse::

View File

@@ -56,6 +56,7 @@ As a first cut, let's do #2 and have the receiver hit the API to calculate its o
API
---
```
GET /_matrix/media/r0/preview_url?url=http://wherever.com
200 OK
{
@@ -66,6 +67,7 @@ GET /_matrix/media/r0/preview_url?url=http://wherever.com
"og:description" : "“Synapse 0.12 is out! Lots of polishing, performance &amp;amp; bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”"
"og:site_name" : "Twitter"
}
```
* Downloads the URL
* If HTML, just stores it in RAM and parses it for OG meta tags

docs/user_directory.md Normal file
View File

@@ -0,0 +1,17 @@
User Directory API Implementation
=================================
The user directory is currently maintained based on the 'visible' users
on this particular server - i.e. ones which your account shares a room with, or
who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
quickest solution to fix it is:
```
UPDATE user_directory_stream_pos SET stream_id = NULL;
```
and restart synapse, which should then start a background task to
flush the current tables and regenerate the directory.

View File

@@ -1,63 +1,67 @@
Scaling synapse via workers
===========================
Synapse has experimental support for splitting out functionality into
multiple separate python processes, helping greatly with scalability. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.
All of the below is highly experimental and subject to change as Synapse evolves,
but documenting it here to help folks needing highly scalable Synapses similar
to the one running matrix.org!
All processes continue to share the same database instance, and as such, workers
only work with postgres based synapse deployments (sharing a single sqlite
across multiple processes is a recipe for disaster, plus you should be using
postgres anyway if you care about scalability).
The workers communicate with the master synapse process via a synapse-specific
TCP protocol called 'replication' - analogous to MySQL or Postgres style
database replication; feeding a stream of relevant data to the workers so they
can be kept in sync with the main synapse process and database state.
Configuration
-------------
To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. Note that this includes
requests made to the federation port. The caveats regarding running a
reverse-proxy on the federation port still apply (see
https://github.com/matrix-org/synapse/blob/master/README.rst#reverse-proxying-the-federation-port).
To enable workers, you need to add a replication listener to the master synapse, e.g.::
    listeners:
      - port: 9092
        bind_address: '127.0.0.1'
        type: replication
Under **no circumstances** should this replication API listener be exposed to the
public internet; it currently implements no authentication whatsoever and is
unencrypted.
You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
subdirectory, to allow synctl to manipulate them.
Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
e.g. the HTTP listener that it provides (if any); logging configuration; etc.
You should minimise the number of overrides though to maintain a usable config.
You must specify the type of worker application (``worker_app``). The currently
available worker applications are listed below. You must also specify the
replication endpoint that it's talking to on the main synapse process
(``worker_replication_host`` and ``worker_replication_port``).
For instance::
    worker_app: synapse.app.synchrotron

    # The replication listener on the synapse to talk to.
    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092

    worker_listeners:
     - type: http
@@ -71,11 +75,11 @@ For instance::
worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
...is a full configuration for a synchrotron worker instance, which will expose a
plain HTTP ``/sync`` endpoint on port 8083 separately from the ``/sync`` endpoint provided
by the main synapse.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (``localhost:8083`` in the above example).
Finally, to actually run your worker-based synapse, you must pass synctl the -a
commandline option to tell it to operate on all the worker configurations found
@@ -92,7 +96,114 @@ To manipulate a specific worker, you pass the -w option to synctl::
synctl -w $CONFIG/workers/synchrotron.yaml restart
Available worker applications
-----------------------------
``synapse.app.pusher``
~~~~~~~~~~~~~~~~~~~~~~
Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set ``start_pushers: False`` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.synchrotron``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The synchrotron handles ``sync`` requests from clients. In particular, it can
handle REST endpoints matching the following regular expressions::
^/_matrix/client/(v2_alpha|r0)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0)/events$
^/_matrix/client/(api/v1|r0)/initialSync$
^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
The above endpoints should all be routed to the synchrotron worker by the
reverse-proxy configuration.
It is possible to run multiple instances of the synchrotron to scale
horizontally. In this case the reverse-proxy should be configured to
load-balance across the instances, though it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting
a userid from the access token is currently left as an exercise for the reader.
``synapse.app.appservice``
~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set ``notify_appservices: False`` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.federation_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles a subset of federation endpoints. In particular, it can handle REST
endpoints matching the following regular expressions::
^/_matrix/federation/v1/event/
^/_matrix/federation/v1/state/
^/_matrix/federation/v1/state_ids/
^/_matrix/federation/v1/backfill/
^/_matrix/federation/v1/get_missing_events/
^/_matrix/federation/v1/publicRooms
The above endpoints should all be routed to the federation_reader worker by the
reverse-proxy configuration.
``synapse.app.federation_sender``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set ``send_federation: False`` in the
shared configuration file to stop the main synapse sending this traffic.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.media_repository``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles the media repository. It can handle all endpoints starting with::
/_matrix/media/
You should also set ``enable_media_repo: False`` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.
Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.client_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handles client API endpoints. It can handle REST endpoints matching the
following regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
``synapse.app.user_dir``
~~~~~~~~~~~~~~~~~~~~~~~~
Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
``synapse.app.frontend_proxy``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/keys/upload
It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
file. For example::
worker_main_http_uri: http://127.0.0.1:8008

View File

@@ -0,0 +1,23 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
export HAPROXY_BIN=/home/haproxy/haproxy-1.6.11/haproxy
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git
./dendron/jenkins/build_dendron.sh
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--haproxy \

View File

@@ -15,10 +15,6 @@ export SYNAPSE_CACHE_FACTOR=1
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--pusher \
--synchrotron \
--federation-reader \
--client-reader \
--appservice \

View File

@@ -14,4 +14,5 @@ export SYNAPSE_CACHE_FACTOR=1
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

View File

@@ -12,4 +12,5 @@ export SYNAPSE_CACHE_FACTOR=1
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

View File

@@ -15,6 +15,6 @@ tox -e py27 --notest -v
TOX_BIN=$TOX_DIR/py27/bin
$TOX_BIN/pip install setuptools
{ python synapse/python_dependencies.py
  echo lxml psycopg2
} | xargs $TOX_BIN/pip install

View File

@@ -18,7 +18,9 @@
<div class="summarytext">{{ summary_text }}</div>
</td>
<td class="logo">
{% if app_name == "Vector" %}
{% if app_name == "Riot" %}
<img src="http://matrix.org/img/riot-logo-email.png" width="83" height="83" alt="[Riot]"/>
{% elif app_name == "Vector" %}
<img src="http://matrix.org/img/vector-logo-email.png" width="64" height="83" alt="[Vector]"/>
{% else %}
<img src="http://matrix.org/img/matrix-120x51.png" width="120" height="51" alt="[matrix]"/>

scripts-dev/federation_client.py Normal file → Executable file
View File

@@ -1,10 +1,30 @@
#!/usr/bin/env python
#
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import argparse
import nacl.signing
import json
import base64
import requests
import sys
import srvlookup
import yaml
def encode_base64(input_bytes):
"""Encode bytes as a base64 string without any padding."""
@@ -103,15 +123,25 @@ def lookup(destination, path):
except:
return "https://%s:%d%s" % (destination, 8448, path)
def request_json(method, origin_name, origin_key, destination, path, content):
    if method is None:
        if content is None:
            method = "GET"
        else:
            method = "POST"

    json_to_sign = {
        "method": method,
        "uri": path,
        "origin": origin_name,
        "destination": destination,
    }

    if content is not None:
        json_to_sign["content"] = json.loads(content)

    signed_json = sign_json(json_to_sign, origin_key, origin_name)

    authorization_headers = []
@@ -120,30 +150,97 @@ def get_json(origin_name, origin_key, destination, path):
        origin_name, key, sig,
    )
    authorization_headers.append(bytes(header))
    print ("Authorization: %s" % header, file=sys.stderr)

    dest = lookup(destination, path)
    print ("Requesting %s" % dest, file=sys.stderr)

    result = requests.request(
        method=method,
        url=dest,
        headers={"Authorization": authorization_headers[0]},
        verify=False,
        data=content,
    )
    sys.stderr.write("Status Code: %d\n" % (result.status_code,))
    return result.json()


def main():
    parser = argparse.ArgumentParser(
        description=
            "Signs and sends a federation request to a matrix homeserver",
    )

    parser.add_argument(
        "-N", "--server-name",
        help="Name to give as the local homeserver. If unspecified, will be "
             "read from the config file.",
    )

    parser.add_argument(
        "-k", "--signing-key-path",
        help="Path to the file containing the private ed25519 key to sign the "
             "request with.",
    )

    parser.add_argument(
        "-c", "--config",
        default="homeserver.yaml",
        help="Path to server config file. Ignored if --server-name and "
             "--signing-key-path are both given.",
    )

    parser.add_argument(
        "-d", "--destination",
        default="matrix.org",
        help="name of the remote homeserver. We will do SRV lookups and "
             "connect appropriately.",
    )

    parser.add_argument(
        "-X", "--method",
        help="HTTP method to use for the request. Defaults to GET if --body is "
             "unspecified, POST if it is."
    )

    parser.add_argument(
        "--body",
        help="Data to send as the body of the HTTP request"
    )

    parser.add_argument(
        "path",
        help="request path. We will add '/_matrix/federation/v1/' to this."
    )

    args = parser.parse_args()

    if not args.server_name or not args.signing_key_path:
        read_args_from_config(args)

    with open(args.signing_key_path) as f:
        key = read_signing_keys(f)[0]

    result = request_json(
        args.method,
        args.server_name, key, args.destination,
        "/_matrix/federation/v1/" + args.path,
        content=args.body,
    )

    json.dump(result, sys.stdout)
    print ("")


def read_args_from_config(args):
    with open(args.config, 'r') as fh:
        config = yaml.safe_load(fh)
        if not args.server_name:
            args.server_name = config['server_name']
        if not args.signing_key_path:
            args.signing_key_path = config['signing_key_path']
if __name__ == "__main__":
main()
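For reference, the sign_json helper this script depends on (its body is elided
from this diff) follows the standard Matrix signing recipe: the object, minus
any existing "signatures" and "unsigned" keys, is encoded as canonical JSON and
signed with the server's ed25519 key. A minimal sketch, reusing the script's
encode_base64 helper and the nacl.signing key type imported above; the key_id
default is illustrative:

def sign_json(json_object, signing_key, signing_name, key_id="ed25519:a_key"):
    # Strip fields excluded from the signature, then sign the canonical
    # JSON encoding (sorted keys, no insignificant whitespace).
    signatures = json_object.pop("signatures", {})
    json_object.pop("unsigned", None)
    message = json.dumps(
        json_object, sort_keys=True, separators=(",", ":"), ensure_ascii=False,
    ).encode("utf-8")
    signed = signing_key.sign(message)
    signatures.setdefault(signing_name, {})[key_id] = encode_base64(signed.signature)
    json_object["signatures"] = signatures
    return json_object

The Authorization headers assembled above then carry the origin name, key id
and signature in the X-Matrix scheme defined by the federation specification.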


@@ -9,16 +9,39 @@
ROOMID="$1"
sqlite3 homeserver.db <<EOF
DELETE FROM context_depth WHERE context = '$ROOMID';
DELETE FROM current_state WHERE context = '$ROOMID';
DELETE FROM feedback WHERE room_id = '$ROOMID';
DELETE FROM messages WHERE room_id = '$ROOMID';
DELETE FROM pdu_backward_extremities WHERE context = '$ROOMID';
DELETE FROM pdu_edges WHERE context = '$ROOMID';
DELETE FROM pdu_forward_extremities WHERE context = '$ROOMID';
DELETE FROM pdus WHERE context = '$ROOMID';
DELETE FROM room_data WHERE room_id = '$ROOMID';
DELETE FROM event_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_backward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_edges WHERE room_id = '$ROOMID';
DELETE FROM room_depth WHERE room_id = '$ROOMID';
DELETE FROM state_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM events WHERE room_id = '$ROOMID';
DELETE FROM event_json WHERE room_id = '$ROOMID';
DELETE FROM state_events WHERE room_id = '$ROOMID';
DELETE FROM current_state_events WHERE room_id = '$ROOMID';
DELETE FROM room_memberships WHERE room_id = '$ROOMID';
DELETE FROM feedback WHERE room_id = '$ROOMID';
DELETE FROM topics WHERE room_id = '$ROOMID';
DELETE FROM room_names WHERE room_id = '$ROOMID';
DELETE FROM rooms WHERE room_id = '$ROOMID';
DELETE FROM state_pdus WHERE context = '$ROOMID';
DELETE FROM room_hosts WHERE room_id = '$ROOMID';
DELETE FROM room_aliases WHERE room_id = '$ROOMID';
DELETE FROM state_groups WHERE room_id = '$ROOMID';
DELETE FROM state_groups_state WHERE room_id = '$ROOMID';
DELETE FROM receipts_graph WHERE room_id = '$ROOMID';
DELETE FROM receipts_linearized WHERE room_id = '$ROOMID';
DELETE FROM event_search_content WHERE c1room_id = '$ROOMID';
DELETE FROM guest_access WHERE room_id = '$ROOMID';
DELETE FROM history_visibility WHERE room_id = '$ROOMID';
DELETE FROM room_tags WHERE room_id = '$ROOMID';
DELETE FROM room_tags_revisions WHERE room_id = '$ROOMID';
DELETE FROM room_account_data WHERE room_id = '$ROOMID';
DELETE FROM event_push_actions WHERE room_id = '$ROOMID';
DELETE FROM local_invites WHERE room_id = '$ROOMID';
DELETE FROM pusher_throttle WHERE room_id = '$ROOMID';
DELETE FROM event_reports WHERE room_id = '$ROOMID';
DELETE FROM public_room_list_stream WHERE room_id = '$ROOMID';
DELETE FROM stream_ordering_to_exterm WHERE room_id = '$ROOMID';
DELETE FROM event_auth WHERE room_id = '$ROOMID';
DELETE FROM appservice_room_list WHERE room_id = '$ROOMID';
VACUUM;
EOF


@@ -39,6 +39,17 @@ BOOLEAN_COLUMNS = {
"event_edges": ["is_state"],
"presence_list": ["accepted"],
"presence_stream": ["currently_active"],
"public_room_list_stream": ["visibility"],
"device_lists_outbound_pokes": ["sent"],
"users_who_share_rooms": ["share_private"],
"groups": ["is_public"],
"group_rooms": ["is_public"],
"group_users": ["is_public", "is_admin"],
"group_summary_rooms": ["is_public"],
"group_room_categories": ["is_public"],
"group_summary_users": ["is_public"],
"group_roles": ["is_public"],
"local_group_membership": ["is_publicised", "is_admin"],
}
@@ -71,6 +82,14 @@ APPEND_ONLY_TABLES = [
"event_to_state_groups",
"rejections",
"event_search",
"presence_stream",
"push_rules_stream",
"current_state_resets",
"ex_outlier_stream",
"cache_invalidation_stream",
"public_room_list_stream",
"state_group_edges",
"stream_ordering_to_exterm",
]
@@ -101,6 +120,7 @@ class Store(object):
_simple_update_one = SQLBaseStore.__dict__["_simple_update_one"]
_simple_update_one_txn = SQLBaseStore.__dict__["_simple_update_one_txn"]
_simple_update_txn = SQLBaseStore.__dict__["_simple_update_txn"]
def runInteraction(self, desc, func, *args, **kwargs):
def r(conn):
@@ -111,7 +131,7 @@ class Store(object):
try:
txn = conn.cursor()
return func(
LoggingTransaction(txn, desc, self.database_engine, []),
LoggingTransaction(txn, desc, self.database_engine, [], []),
*args, **kwargs
)
except self.database_engine.module.DatabaseError as e:
@@ -241,6 +261,25 @@ class Porter(object):
)
return
if table in (
"user_directory", "user_directory_search", "users_who_share_rooms",
"users_in_pubic_room",
):
# We don't port these tables, as they're a faff and we can regenerate
# them anyway.
self.progress.update(table, table_size) # Mark table as done
return
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null)`, as that is
# what synapse expects to be there.
yield self.postgres_store._simple_insert(
table=table,
values={"stream_id": None},
)
self.progress.update(table, table_size) # Mark table as done
return
forward_select = (
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?"
% (table,)
@@ -288,7 +327,7 @@ class Porter(object):
backward_chunk = min(row[0] for row in brows) - 1
rows = frows + brows
self._convert_rows(table, headers, rows)
rows = self._convert_rows(table, headers, rows)
def insert(txn):
self.postgres_store.insert_many_txn(
@@ -346,10 +385,13 @@ class Porter(object):
" VALUES (?,?,?,?,to_tsvector('english', ?),?,?)"
)
rows_dict = [
dict(zip(headers, row))
for row in rows
]
rows_dict = []
for row in rows:
d = dict(zip(headers, row))
if "\0" in d['value']:
logger.warn('dropping search row %s', d)
else:
rows_dict.append(d)
txn.executemany(sql, [
(
@@ -437,9 +479,7 @@ class Porter(object):
postgres_tables = yield self.postgres_store._simple_select_onecol(
table="information_schema.tables",
keyvalues={
"table_schema": "public",
},
keyvalues={},
retcol="distinct table_name",
)
@@ -523,17 +563,29 @@ class Porter(object):
i for i, h in enumerate(headers) if h in bool_col_names
]
class BadValueException(Exception):
pass
def conv(j, col):
if j in bool_cols:
return bool(col)
elif isinstance(col, basestring) and "\0" in col:
logger.warn("DROPPING ROW: NUL value in table %s col %s: %r", table, headers[j], col)
raise BadValueException()
return col
outrows = []
for i, row in enumerate(rows):
rows[i] = tuple(
conv(j, col)
for j, col in enumerate(row)
if j > 0
)
try:
outrows.append(tuple(
conv(j, col)
for j, col in enumerate(row)
if j > 0
))
except BadValueException:
pass
return outrows
@defer.inlineCallbacks
def _setup_sent_transactions(self):
@@ -561,7 +613,7 @@ class Porter(object):
"select", r,
)
self._convert_rows("sent_transactions", headers, rows)
rows = self._convert_rows("sent_transactions", headers, rows)
inserted_rows = len(rows)
if inserted_rows:

scripts/sync_room_to_group.pl Executable file

@@ -0,0 +1,45 @@
#!/usr/bin/env perl
use strict;
use warnings;
use JSON::XS;
use LWP::UserAgent;
use URI::Escape;
if (@ARGV < 4) {
die "usage: $0 <homeserver url> <access_token> <room_id|room_alias> <group_id>\n";
}
my ($hs, $access_token, $room_id, $group_id) = @ARGV;
my $ua = LWP::UserAgent->new();
$ua->timeout(10);
if ($room_id =~ /^#/) {
$room_id = uri_escape($room_id);
$room_id = decode_json($ua->get("${hs}/_matrix/client/r0/directory/room/${room_id}?access_token=${access_token}")->decoded_content)->{room_id};
}
my $room_users = [ keys %{decode_json($ua->get("${hs}/_matrix/client/r0/rooms/${room_id}/joined_members?access_token=${access_token}")->decoded_content)->{joined}} ];
my $group_users = [
(map { $_->{user_id} } @{decode_json($ua->get("${hs}/_matrix/client/unstable/groups/${group_id}/users?access_token=${access_token}" )->decoded_content)->{chunk}}),
(map { $_->{user_id} } @{decode_json($ua->get("${hs}/_matrix/client/unstable/groups/${group_id}/invited_users?access_token=${access_token}" )->decoded_content)->{chunk}}),
];
die "refusing to sync from empty room" unless (@$room_users);
die "refusing to sync to empty group" unless (@$group_users);
my $diff = {};
foreach my $user (@$room_users) { $diff->{$user}++ }
foreach my $user (@$group_users) { $diff->{$user}-- }
foreach my $user (keys %$diff) {
if ($diff->{$user} == 1) {
warn "inviting $user";
print STDERR $ua->put("${hs}/_matrix/client/unstable/groups/${group_id}/admin/users/invite/${user}?access_token=${access_token}", Content=>'{}')->status_line."\n";
}
elsif ($diff->{$user} == -1) {
warn "removing $user";
print STDERR $ua->put("${hs}/_matrix/client/unstable/groups/${group_id}/admin/users/remove/${user}?access_token=${access_token}", Content=>'{}')->status_line."\n";
}
}


@@ -23,6 +23,45 @@ import sys
here = os.path.abspath(os.path.dirname(__file__))
# Some notes on `setup.py test`:
#
# Once upon a time we used to try to make `setup.py test` run `tox` to run the
# tests. That's a bad idea for three reasons:
#
# 1: `setup.py test` is supposed to find out whether the tests work in the
# *current* environment, not whatever tox sets up.
# 2: Empirically, trying to install tox during the test run wasn't working ("No
# module named virtualenv").
# 3: The tox documentation advises against it[1].
#
# Even further back in time, we used to use setuptools_trial [2]. That has its
# own set of issues: for instance, it requires installation of Twisted to build
# an sdist (because the recommended mode of usage is to add it to
# `setup_requires`). That in turn means that in order to successfully run tox
# you have to have the python header files installed for whichever version of
# python tox uses (which is python3 on recent ubuntus, for example).
#
# So, for now at least, we stick with what appears to be the convention among
# Twisted projects, and don't attempt to do anything when someone runs
# `setup.py test`; instead we direct people to run `trial` directly if they
# care.
#
# [1]: http://tox.readthedocs.io/en/2.5.0/example/basic.html#integration-with-setup-py-test-command
# [2]: https://pypi.python.org/pypi/setuptools_trial
class TestCommand(Command):
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
print ("""Synapse's tests cannot be run via setup.py. To run them, try:
PYTHONPATH="." trial tests
""")
def read_file(path_segments):
"""Read a file from the package. Takes a list of strings to join to
make the path"""
@@ -39,38 +78,6 @@ def exec_file(path_segments):
return result
class Tox(Command):
user_options = [('tox-args=', 'a', "Arguments to pass to tox")]
def initialize_options(self):
self.tox_args = None
def finalize_options(self):
self.test_args = []
self.test_suite = True
def run(self):
#import here, cause outside the eggs aren't loaded
try:
import tox
except ImportError:
try:
self.distribution.fetch_build_eggs("tox")
import tox
except:
raise RuntimeError(
"The tests need 'tox' to run. Please install 'tox'."
)
import shlex
args = self.tox_args
if args:
args = shlex.split(self.tox_args)
else:
args = []
errno = tox.cmdline(args=args)
sys.exit(errno)
version = exec_file(("synapse", "__init__.py"))["__version__"]
dependencies = exec_file(("synapse", "python_dependencies.py"))
long_description = read_file(("README.rst",))
@@ -86,5 +93,5 @@ setup(
zip_safe=False,
long_description=long_description,
scripts=["synctl"] + glob.glob("scripts/*"),
cmdclass={'test': Tox},
cmdclass={'test': TestCommand},
)


@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.18.0"
__version__ = "0.25.1"

File diff suppressed because it is too large


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -43,10 +44,8 @@ class JoinRules(object):
class LoginType(object):
PASSWORD = u"m.login.password"
OAUTH = u"m.login.oauth2"
EMAIL_CODE = u"m.login.email.code"
EMAIL_URL = u"m.login.email.url"
EMAIL_IDENTITY = u"m.login.email.identity"
MSISDN = u"m.login.msisdn"
RECAPTCHA = u"m.login.recaptcha"
DUMMY = u"m.login.dummy"


@@ -15,6 +15,7 @@
"""Contains exceptions and error codes."""
import json
import logging
logger = logging.getLogger(__name__)
@@ -39,6 +40,7 @@ class Codes(object):
CAPTCHA_NEEDED = "M_CAPTCHA_NEEDED"
CAPTCHA_INVALID = "M_CAPTCHA_INVALID"
MISSING_PARAM = "M_MISSING_PARAM"
INVALID_PARAM = "M_INVALID_PARAM"
TOO_LARGE = "M_TOO_LARGE"
EXCLUSIVE = "M_EXCLUSIVE"
THREEPID_AUTH_FAILED = "M_THREEPID_AUTH_FAILED"
@@ -49,27 +51,46 @@ class Codes(object):
class CodeMessageException(RuntimeError):
"""An exception with integer code and message string attributes."""
"""An exception with integer code and message string attributes.
Attributes:
code (int): HTTP error code
msg (str): string describing the error
"""
def __init__(self, code, msg):
super(CodeMessageException, self).__init__("%d: %s" % (code, msg))
self.code = code
self.msg = msg
self.response_code_message = None
def error_dict(self):
return cs_error(self.msg)
class MatrixCodeMessageException(CodeMessageException):
"""An error from a general matrix endpoint, eg. from a proxied Matrix API call.
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN):
super(MatrixCodeMessageException, self).__init__(code, msg)
self.errcode = errcode
class SynapseError(CodeMessageException):
"""A base error which can be caught for all synapse events."""
"""A base exception type for matrix errors which have an errcode and error
message (as well as an HTTP status code).
Attributes:
errcode (str): Matrix error code, e.g. 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN):
"""Constructs a synapse error.
Args:
code (int): The integer error code (an HTTP response code)
msg (str): The human-readable error message.
err (str): The error code e.g 'M_FORBIDDEN'
errcode (str): The matrix error code, e.g. 'M_FORBIDDEN'
"""
super(SynapseError, self).__init__(code, msg)
self.errcode = errcode
@@ -80,12 +101,61 @@ class SynapseError(CodeMessageException):
self.errcode,
)
@classmethod
def from_http_response_exception(cls, err):
"""Make a SynapseError based on an HTTPResponseException
This is useful when a proxied request has failed, and we need to
decide how to map the failure onto a matrix error to send back to the
client.
An attempt is made to parse the body of the http response as a matrix
error. If that succeeds, the errcode and error message from the body
are used as the errcode and error message in the new synapse error.
Otherwise, the errcode is set to M_UNKNOWN, and the error message is
set to the reason code from the HTTP response.
Args:
err (HttpResponseException):
Returns:
SynapseError:
"""
# try to parse the body as json, to get better errcode/msg, but
# default to M_UNKNOWN with the HTTP status as the error text
try:
j = json.loads(err.response)
except ValueError:
j = {}
errcode = j.get('errcode', Codes.UNKNOWN)
errmsg = j.get('error', err.msg)
res = SynapseError(err.code, errmsg, errcode)
return res
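A typical call site (illustrative; the surrounding servlet plumbing and
variables are assumed) is a worker that has proxied a request to the master
process and wants to relay the failure to its own client:

try:
    result = yield http_client.post_json_get_json(uri, body)
except HttpResponseException as e:
    raise SynapseError.from_http_response_exception(e)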
class RegistrationError(SynapseError):
"""An error raised when a registration event fails."""
pass
class InteractiveAuthIncompleteError(Exception):
"""An error raised when UI auth is not yet complete
(This indicates we should return a 401 with 'result' as the body)
Attributes:
result (dict): the server response to the request, which should be
passed back to the client
"""
def __init__(self, result):
super(InteractiveAuthIncompleteError, self).__init__(
"Interactive auth not yet complete",
)
self.result = result
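The intended handling pattern is a wrapper around the servlet logic that turns
the exception back into the 401 response the client-server API expects. A
sketch; the wrapper name and plumbing here are illustrative:

@defer.inlineCallbacks
def interactive_auth_handler(servlet_method):
    # If UI auth is incomplete, hand the stored flow state back to the
    # client as the body of a 401 response.
    try:
        response = yield servlet_method()
    except InteractiveAuthIncompleteError as e:
        response = (401, e.result)
    defer.returnValue(response)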
class UnrecognizedRequestError(SynapseError):
"""An error indicating we don't understand the request you're trying to make"""
def __init__(self, *args, **kwargs):
@@ -105,13 +175,11 @@ class UnrecognizedRequestError(SynapseError):
class NotFoundError(SynapseError):
"""An error indicating we can't find the thing you asked for"""
def __init__(self, *args, **kwargs):
if "errcode" not in kwargs:
kwargs["errcode"] = Codes.NOT_FOUND
def __init__(self, msg="Not found", errcode=Codes.NOT_FOUND):
super(NotFoundError, self).__init__(
404,
"Not found",
**kwargs
msg,
errcode=errcode
)
@@ -172,7 +240,6 @@ class LimitExceededError(SynapseError):
errcode=Codes.LIMIT_EXCEEDED):
super(LimitExceededError, self).__init__(code, msg, errcode)
self.retry_after_ms = retry_after_ms
self.response_code_message = "Too Many Requests"
def error_dict(self):
return cs_error(
@@ -242,6 +309,19 @@ class FederationError(RuntimeError):
class HttpResponseException(CodeMessageException):
"""
Represents an HTTP-level failure of an outbound request
Attributes:
response (str): body of response
"""
def __init__(self, code, msg, response):
self.response = response
"""
Args:
code (int): HTTP status code
msg (str): reason phrase from HTTP response status line
response (str): body of response
"""
super(HttpResponseException, self).__init__(code, msg)
self.response = response


@@ -13,11 +13,174 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.errors import SynapseError
from synapse.storage.presence import UserPresenceState
from synapse.types import UserID, RoomID
from twisted.internet import defer
import ujson as json
import jsonschema
from jsonschema import FormatChecker
FILTER_SCHEMA = {
"additionalProperties": False,
"type": "object",
"properties": {
"limit": {
"type": "number"
},
"senders": {
"$ref": "#/definitions/user_id_array"
},
"not_senders": {
"$ref": "#/definitions/user_id_array"
},
# TODO: We don't limit event type values but we probably should...
# check types are valid event types
"types": {
"type": "array",
"items": {
"type": "string"
}
},
"not_types": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
ROOM_FILTER_SCHEMA = {
"additionalProperties": False,
"type": "object",
"properties": {
"not_rooms": {
"$ref": "#/definitions/room_id_array"
},
"rooms": {
"$ref": "#/definitions/room_id_array"
},
"ephemeral": {
"$ref": "#/definitions/room_event_filter"
},
"include_leave": {
"type": "boolean"
},
"state": {
"$ref": "#/definitions/room_event_filter"
},
"timeline": {
"$ref": "#/definitions/room_event_filter"
},
"account_data": {
"$ref": "#/definitions/room_event_filter"
},
}
}
ROOM_EVENT_FILTER_SCHEMA = {
"additionalProperties": False,
"type": "object",
"properties": {
"limit": {
"type": "number"
},
"senders": {
"$ref": "#/definitions/user_id_array"
},
"not_senders": {
"$ref": "#/definitions/user_id_array"
},
"types": {
"type": "array",
"items": {
"type": "string"
}
},
"not_types": {
"type": "array",
"items": {
"type": "string"
}
},
"rooms": {
"$ref": "#/definitions/room_id_array"
},
"not_rooms": {
"$ref": "#/definitions/room_id_array"
},
"contains_url": {
"type": "boolean"
}
}
}
USER_ID_ARRAY_SCHEMA = {
"type": "array",
"items": {
"type": "string",
"format": "matrix_user_id"
}
}
ROOM_ID_ARRAY_SCHEMA = {
"type": "array",
"items": {
"type": "string",
"format": "matrix_room_id"
}
}
USER_FILTER_SCHEMA = {
"$schema": "http://json-schema.org/draft-04/schema#",
"description": "schema for a Sync filter",
"type": "object",
"definitions": {
"room_id_array": ROOM_ID_ARRAY_SCHEMA,
"user_id_array": USER_ID_ARRAY_SCHEMA,
"filter": FILTER_SCHEMA,
"room_filter": ROOM_FILTER_SCHEMA,
"room_event_filter": ROOM_EVENT_FILTER_SCHEMA
},
"properties": {
"presence": {
"$ref": "#/definitions/filter"
},
"account_data": {
"$ref": "#/definitions/filter"
},
"room": {
"$ref": "#/definitions/room_filter"
},
"event_format": {
"type": "string",
"enum": ["client", "federation"]
},
"event_fields": {
"type": "array",
"items": {
"type": "string",
# Don't allow '\\' in event field filters. This makes matching
# events a lot easier as we can then use a negative lookbehind
# assertion to split '\.' If we allowed \\ then it would
# incorrectly split '\\.' See synapse.events.utils.serialize_event
"pattern": "^((?!\\\).)*$"
}
}
},
"additionalProperties": False
}
@FormatChecker.cls_checks('matrix_room_id')
def matrix_room_id_validator(room_id_str):
return RoomID.from_string(room_id_str)
@FormatChecker.cls_checks('matrix_user_id')
def matrix_user_id_validator(user_id_str):
return UserID.from_string(user_id_str)
class Filtering(object):
@@ -52,83 +215,11 @@ class Filtering(object):
# NB: Filters are the complete json blobs. "Definitions" are an
# individual top-level key e.g. public_user_data. Filters are made of
# many definitions.
top_level_definitions = [
"presence", "account_data"
]
room_level_definitions = [
"state", "timeline", "ephemeral", "account_data"
]
for key in top_level_definitions:
if key in user_filter_json:
self._check_definition(user_filter_json[key])
if "room" in user_filter_json:
self._check_definition_room_lists(user_filter_json["room"])
for key in room_level_definitions:
if key in user_filter_json["room"]:
self._check_definition(user_filter_json["room"][key])
def _check_definition_room_lists(self, definition):
"""Check that "rooms" and "not_rooms" are lists of room ids if they
are present
Args:
definition(dict): The filter definition
Raises:
SynapseError: If there was a problem with this definition.
"""
# check rooms are valid room IDs
room_id_keys = ["rooms", "not_rooms"]
for key in room_id_keys:
if key in definition:
if type(definition[key]) != list:
raise SynapseError(400, "Expected %s to be a list." % key)
for room_id in definition[key]:
RoomID.from_string(room_id)
def _check_definition(self, definition):
"""Check if the provided definition is valid.
This inspects not only the types but also the values to make sure they
make sense.
Args:
definition(dict): The filter definition
Raises:
SynapseError: If there was a problem with this definition.
"""
# NB: Filters are the complete json blobs. "Definitions" are an
# individual top-level key e.g. public_user_data. Filters are made of
# many definitions.
if type(definition) != dict:
raise SynapseError(
400, "Expected JSON object, not %s" % (definition,)
)
self._check_definition_room_lists(definition)
# check senders are valid user IDs
user_id_keys = ["senders", "not_senders"]
for key in user_id_keys:
if key in definition:
if type(definition[key]) != list:
raise SynapseError(400, "Expected %s to be a list." % key)
for user_id in definition[key]:
UserID.from_string(user_id)
# TODO: We don't limit event type values but we probably should...
# check types are valid event types
event_keys = ["types", "not_types"]
for key in event_keys:
if key in definition:
if type(definition[key]) != list:
raise SynapseError(400, "Expected %s to be a list." % key)
for event_type in definition[key]:
if not isinstance(event_type, basestring):
raise SynapseError(400, "Event type should be a string")
try:
jsonschema.validate(user_filter_json, USER_FILTER_SCHEMA,
format_checker=FormatChecker())
except jsonschema.ValidationError as e:
raise SynapseError(400, e.message)
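The schema and format checkers replace the hand-rolled checks above. For
instance (illustrative values), a well-formed filter validates cleanly, while a
wrongly-shaped one raises jsonschema.ValidationError, which the except clause
turns into a 400:

valid = {"room": {"timeline": {"limit": 10, "types": ["m.room.message"]}}}
jsonschema.validate(valid, USER_FILTER_SCHEMA, format_checker=FormatChecker())

invalid = {"room": {"timeline": {"types": "m.room.message"}}}  # not an array
jsonschema.validate(invalid, USER_FILTER_SCHEMA, format_checker=FormatChecker())
# -> ValidationError, surfaced to the client as SynapseError(400, ...)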
class FilterCollection(object):
@@ -152,6 +243,7 @@ class FilterCollection(object):
self.include_leave = filter_json.get("room", {}).get(
"include_leave", False
)
self.event_fields = filter_json.get("event_fields", [])
def __repr__(self):
return "<FilterCollection %s>" % (json.dumps(self._filter_json),)
@@ -186,6 +278,26 @@ class FilterCollection(object):
def filter_room_account_data(self, events):
return self._room_account_data.filter(self._room_filter.filter(events))
def blocks_all_presence(self):
return (
self._presence_filter.filters_all_types() or
self._presence_filter.filters_all_senders()
)
def blocks_all_room_ephemeral(self):
return (
self._room_ephemeral_filter.filters_all_types() or
self._room_ephemeral_filter.filters_all_senders() or
self._room_ephemeral_filter.filters_all_rooms()
)
def blocks_all_room_timeline(self):
return (
self._room_timeline_filter.filters_all_types() or
self._room_timeline_filter.filters_all_senders() or
self._room_timeline_filter.filters_all_rooms()
)
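As an example, a sync filter of {"presence": {"not_types": ["*"]}} makes
filters_all_types() true for the presence filter, so blocks_all_presence()
returns True and the sync handler can skip presence processing for that
request entirely.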
class Filter(object):
def __init__(self, filter_json):
@@ -202,25 +314,50 @@ class Filter(object):
self.contains_url = self.filter_json.get("contains_url", None)
def filters_all_types(self):
return "*" in self.not_types
def filters_all_senders(self):
return "*" in self.not_senders
def filters_all_rooms(self):
return "*" in self.not_rooms
def check(self, event):
"""Checks whether the filter matches the given event.
Returns:
bool: True if the event matches
"""
sender = event.get("sender", None)
if not sender:
# Presence events have their 'sender' in content.user_id
content = event.get("content")
# account_data has been allowed to have non-dict content, so check type first
if isinstance(content, dict):
sender = content.get("user_id")
# We usually get the full "events" as dictionaries coming through,
# except for presence which actually gets passed around as its own
# namedtuple type.
if isinstance(event, UserPresenceState):
sender = event.user_id
room_id = None
ev_type = "m.presence"
is_url = False
else:
sender = event.get("sender", None)
if not sender:
# Presence events had their 'sender' in content.user_id, but are
# now handled above. We don't know if anything else uses this
# form. TODO: Check this and probably remove it.
content = event.get("content")
# account_data has been allowed to have non-dict content, so
# check type first
if isinstance(content, dict):
sender = content.get("user_id")
room_id = event.get("room_id", None)
ev_type = event.get("type", None)
is_url = "url" in event.get("content", {})
return self.check_fields(
event.get("room_id", None),
room_id,
sender,
event.get("type", None),
"url" in event.get("content", {})
ev_type,
is_url,
)
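A quick illustration of the two code paths (identifiers are placeholders):

f = Filter({"types": ["m.room.message"]})
event = {
    "sender": "@alice:example.com",
    "type": "m.room.message",
    "room_id": "!room:example.com",
    "content": {"body": "hi"},
}
f.check(event)  # True: an ordinary dict-shaped event
# A UserPresenceState instance would instead take the first branch and be
# matched as an "m.presence" event with no room_id.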
def check_fields(self, room_id, sender, event_type, contains_url):


@@ -23,7 +23,7 @@ class Ratelimiter(object):
def __init__(self):
self.message_counts = collections.OrderedDict()
def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count):
def send_message(self, user_id, time_now_s, msg_rate_hz, burst_count, update=True):
"""Can the user send a message?
Args:
user_id: The user sending a message.
@@ -32,12 +32,15 @@ class Ratelimiter(object):
second.
burst_count: How many messages the user can send before being
limited.
update (bool): Whether to update the message rates or not. This is
useful for checking whether a message would be allowed before it is
actually sent.
Returns:
A pair of a bool indicating if they can send a message now and a
time in seconds of when they can next send a message.
"""
self.prune_message_counts(time_now_s)
message_count, time_start, _ignored = self.message_counts.pop(
message_count, time_start, _ignored = self.message_counts.get(
user_id, (0., time_now_s, None),
)
time_delta = time_now_s - time_start
@@ -52,9 +55,10 @@ class Ratelimiter(object):
allowed = True
message_count += 1
self.message_counts[user_id] = (
message_count, time_start, msg_rate_hz
)
if update:
self.message_counts[user_id] = (
message_count, time_start, msg_rate_hz
)
if msg_rate_hz > 0:
time_allowed = (

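The update flag enables a check-then-act pattern: a caller can ask whether a
message would be allowed without charging it against the limit, and record it
only once the send actually happens. A sketch (the import path and rates are
assumed for illustration):

import time
from synapse.api.ratelimiter import Ratelimiter

limiter = Ratelimiter()
now = time.time()
allowed, time_allowed = limiter.send_message(
    "@alice:example.com", now, msg_rate_hz=0.17, burst_count=10,
    update=False,  # peek only; message_counts is left untouched
)
if allowed:
    # ... do the expensive work, then record the message for real
    limiter.send_message("@alice:example.com", now, 0.17, 10)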
synapse/app/_base.py Normal file

@@ -0,0 +1,178 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import logging
import sys
try:
import affinity
except Exception:
affinity = None
from daemonize import Daemonize
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
from twisted.internet import error, reactor
logger = logging.getLogger(__name__)
def start_worker_reactor(appname, config):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor. Pulls configuration from the 'worker' settings in 'config'.
Args:
appname (str): application name which will be sent to syslog
config (synapse.config.Config): config object
"""
logger = logging.getLogger(config.worker_app)
start_reactor(
appname,
config.soft_file_limit,
config.gc_thresholds,
config.worker_pid_file,
config.worker_daemonize,
config.worker_cpu_affinity,
logger,
)
def start_reactor(
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
cpu_affinity,
logger,
):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor
Args:
appname (str): application name which will be sent to syslog
soft_file_limit (int):
gc_thresholds:
pid_file (str): name of pid file to write to if daemonize is True
daemonize (bool): true to run the reactor in a background process
cpu_affinity (int|None): cpu affinity mask
logger (logging.Logger): logger instance to pass to Daemonize
"""
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
if cpu_affinity is not None:
if not affinity:
quit_with_error(
"Missing package 'affinity' required for cpu_affinity\n"
"option\n\n"
"Install by running:\n\n"
" pip install affinity\n\n"
)
logger.info("Setting CPU affinity to %s" % cpu_affinity)
affinity.set_process_affinity_mask(0, cpu_affinity)
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
reactor.run()
if daemonize:
daemon = Daemonize(
app=appname,
pid=pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
def quit_with_error(error_string):
message_lines = error_string.split("\n")
line_length = max([len(l) for l in message_lines if len(l) < 80]) + 2
sys.stderr.write("*" * line_length + '\n')
for line in message_lines:
sys.stderr.write(" %s\n" % (line.rstrip(),))
sys.stderr.write("*" * line_length + '\n')
sys.exit(1)
def listen_tcp(bind_addresses, port, factory, backlog=50):
"""
Create a TCP socket for a port and several addresses
"""
for address in bind_addresses:
try:
reactor.listenTCP(
port,
factory,
backlog,
address
)
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
def listen_ssl(bind_addresses, port, factory, context_factory, backlog=50):
"""
Create an SSL socket for a port and several addresses
"""
for address in bind_addresses:
try:
reactor.listenSSL(
port,
factory,
context_factory,
backlog,
address
)
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
def check_bind_error(e, address, bind_addresses):
"""
This method checks whether an exception that occurred while binding on
0.0.0.0 can safely be ignored: if :: is also present in the bind
addresses, a warning is logged; otherwise the exception is re-raised.
Binding on both 0.0.0.0 and :: raises an exception on Linux and macOS
because :: binds on both IPv4 and IPv6 (as per RFC 3493), so the failure
of a subsequent bind on 0.0.0.0 is expected and harmless.
Args:
e (Exception): Exception that was caught.
address (str): Address on which binding was attempted.
bind_addresses (list): Addresses on which the service listens.
"""
if address == '0.0.0.0' and '::' in bind_addresses:
logger.warn('Failed to listen on 0.0.0.0, continuing because listening on [::]')
else:
raise e
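In practice a worker can now list both wildcard addresses and let the helper
sort out the platform differences. A sketch, with a trivial Twisted factory
standing in for a real one:

from twisted.internet import protocol

factory = protocol.ServerFactory()
listen_tcp(["::", "0.0.0.0"], 8008, factory)
# On Linux and macOS the 0.0.0.0 bind raises CannotListenError because "::"
# already accepts IPv4 connections; check_bind_error downgrades that to a
# warning instead of crashing the worker.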


@@ -13,36 +13,31 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.appservice")
@@ -74,7 +69,7 @@ class AppserviceServer(HomeServer):
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
@@ -83,16 +78,18 @@ class AppserviceServer(HomeServer):
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
)
logger.info("Synapse appservice now listening on port %d", port)
def start_listening(self, listeners):
@@ -100,42 +97,37 @@ class AppserviceServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
appservice_handler = self.get_application_service_handler()
self.get_tcp_replication().start_replication(self)
@defer.inlineCallbacks
def replicate(results):
stream = results.get("events")
if stream:
max_stream_id = stream["position"]
yield appservice_handler.notify_interested_services(max_stream_id)
def build_tcp_replication(self):
return ASReplicationHandler(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
replicate(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
class ASReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(ASReplicationHandler, self).__init__(hs.get_datastore())
self.appservice_handler = hs.get_application_service_handler()
def on_rdata(self, stream_name, token, rows):
super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
if stream_name == "events":
max_stream_id = self.store.get_room_max_stream_ordering()
preserve_fn(
self.appservice_handler.notify_interested_services
)(max_stream_id)
def start(config_options):
@@ -149,7 +141,9 @@ def start(config_options):
assert config.worker_app == "synapse.app.appservice"
setup_logging(config.worker_log_config, config.worker_log_file)
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -176,33 +170,13 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-appservice",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-appservice", config)
if __name__ == '__main__':


@@ -13,44 +13,39 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import PublicRoomListRestServlet
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.crypto import context_factory
from twisted.internet import reactor, defer
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.client_reader")
@@ -61,8 +56,9 @@ class ClientReaderSlavedStore(
DirectoryStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
TransactionStore,
SlavedClientIpStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
pass
@@ -88,7 +84,7 @@ class ClientReaderServer(HomeServer):
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
@@ -106,16 +102,18 @@ class ClientReaderServer(HomeServer):
})
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
)
logger.info("Synapse client reader now listening on port %d", port)
def start_listening(self, listeners):
@@ -123,33 +121,23 @@ class ClientReaderServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
self.get_tcp_replication().start_replication(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
@@ -163,7 +151,9 @@ def start(config_options):
assert config.worker_app == "synapse.app.client_reader"
setup_logging(config.worker_log_config, config.worker_log_file)
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -182,33 +172,13 @@ def start(config_options):
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-client-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-client-reader", config)
if __name__ == '__main__':


@@ -13,42 +13,36 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.api.urls import FEDERATION_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.api.urls import FEDERATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.crypto import context_factory
from twisted.internet import reactor, defer
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.federation_reader")
@@ -84,7 +78,7 @@ class FederationReaderServer(HomeServer):
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
@@ -97,16 +91,18 @@ class FederationReaderServer(HomeServer):
})
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
)
logger.info("Synapse federation reader now listening on port %d", port)
def start_listening(self, listeners):
@@ -114,33 +110,22 @@ class FederationReaderServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
self.get_tcp_replication().start_replication(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
@@ -154,7 +139,9 @@ def start(config_options):
assert config.worker_app == "synapse.app.federation_reader"
setup_logging(config.worker_log_config, config.worker_log_file)
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -173,33 +160,13 @@ def start(config_options):
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-federation-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-federation-reader", config)
if __name__ == '__main__':


@@ -0,0 +1,270 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
logger = logging.getLogger("synapse.app.federation_sender")
class FederationSenderSlaveStore(
SlavedDeviceInboxStore, TransactionStore, SlavedReceiptsStore, SlavedEventStore,
SlavedRegistrationStore, SlavedDeviceStore, SlavedPresenceStore,
):
def __init__(self, db_conn, hs):
super(FederationSenderSlaveStore, self).__init__(db_conn, hs)
# We pull out the current federation stream position now so that we
# always have a known value for the federation position in memory so
# that we don't have to bounce via a deferred once when we start the
# replication streams.
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = (
"SELECT stream_id FROM federation_stream_position"
" WHERE type = ?"
)
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()
txn.execute(sql, ("federation",))
rows = txn.fetchall()
txn.close()
return rows[0][0] if rows else -1
class FederationSenderServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = FederationSenderSlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse federation_sender now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return FederationSenderReplicationHandler(self)
class FederationSenderReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(FederationSenderReplicationHandler, self).__init__(hs.get_datastore())
self.send_handler = FederationSenderHandler(hs, self)
def on_rdata(self, stream_name, token, rows):
super(FederationSenderReplicationHandler, self).on_rdata(
stream_name, token, rows
)
self.send_handler.process_replication_rows(stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(FederationSenderReplicationHandler, self).get_streams_to_replicate()
args.update(self.send_handler.stream_positions())
return args
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse federation sender", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.federation_sender"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
if config.send_federation:
sys.stderr.write(
"\nThe send_federation must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``send_federation: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the federation sender to start, since it will be disabled in the main config
config.send_federation = True
tls_server_context_factory = context_factory.ServerContextFactory(config)
ps = FederationSenderServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ps.setup()
ps.start_listening(config.worker_listeners)
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-federation-sender", config)
class FederationSenderHandler(object):
"""Processes the replication stream and forwards the appropriate entries
to the federation sender.
"""
def __init__(self, hs, replication_client):
self.store = hs.get_datastore()
self.federation_sender = hs.get_federation_sender()
self.replication_client = replication_client
self.federation_position = self.store.federation_out_pos_startup
self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
self._last_ack = self.federation_position
self._room_serials = {}
self._room_typing = {}
def on_start(self):
# There may be some events that are persisted but haven't been sent,
# so send them now.
self.federation_sender.notify_new_events(
self.store.get_room_max_stream_ordering()
)
def stream_positions(self):
return {"federation": self.federation_position}
def process_replication_rows(self, stream_name, token, rows):
# The federation stream contains things that we want to send out, e.g.
# presence, typing, etc.
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
preserve_fn(self.update_token)(token)
# We also need to poke the federation sender when new events happen
elif stream_name == "events":
self.federation_sender.notify_new_events(token)
@defer.inlineCallbacks
def update_token(self, token):
self.federation_position = token
# We linearize here to ensure we don't have races updating the token
with (yield self._fed_position_linearizer.queue(None)):
if self._last_ack < self.federation_position:
yield self.store.update_federation_out_pos(
"federation", self.federation_position
)
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(self.federation_position)
self._last_ack = self.federation_position
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])


@@ -0,0 +1,241 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.api.errors import SynapseError
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.servlet import (
RestServlet, parse_json_object_from_request,
)
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha._base import client_v2_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
logger = logging.getLogger("synapse.app.frontend_proxy")
class KeyUploadServlet(RestServlet):
PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$")
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): server
"""
super(KeyUploadServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker_main_http_uri
@defer.inlineCallbacks
def on_POST(self, request, device_id):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
if device_id is not None:
# passing the device_id here is deprecated; however, we allow it
# for now for compatibility with older clients.
if (requester.device_id is not None and
device_id != requester.device_id):
logger.warning("Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id, device_id)
else:
device_id = requester.device_id
if device_id is None:
raise SynapseError(
400,
"To upload keys, you must pass device_id when authenticating"
)
if body:
            # They're actually trying to upload something; proxy to main synapse.
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
headers = {
"Authorization": auth_headers,
}
result = yield self.http_client.post_json_get_json(
self.main_uri + request.uri,
body,
headers=headers,
)
defer.returnValue((200, result))
else:
# Just interested in counts.
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
defer.returnValue((200, {"one_time_key_counts": result}))
class FrontendProxySlavedStore(
SlavedDeviceStore,
SlavedClientIpStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
BaseSlavedStore,
):
pass
class FrontendProxyServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse client reader now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse frontend proxy", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.frontend_proxy"
assert config.worker_main_http_uri is not None
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
ss = FrontendProxyServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-frontend-proxy", config)
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])
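For reference, _listen_http above reads listener entries of roughly this shape; the keys are taken from the code, while the values below are hypothetical:

listener = {
    "type": "http",
    "port": 8009,                         # required
    "bind_addresses": ["::", "0.0.0.0"],  # required; one listen per address
    "tag": "frontend",                    # optional, defaults to the port
    "resources": [
        {"names": ["client"]},            # mounts KeyUploadServlet
        {"names": ["metrics"]},           # mounts MetricsResource
    ],
}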


@@ -13,59 +13,51 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
import gc
import logging
import os
import sys
import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \
LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \
STATIC_PREFIX, WEB_CLIENT_PREFIX
from synapse.app import _base
from synapse.app._base import quit_with_error, listen_ssl, listen_tcp
from synapse.config._base import ConfigError
from synapse.python_dependencies import (
check_requirements, DEPENDENCY_LINKS
)
from synapse.rest import ClientRestResource
from synapse.storage.engines import create_engine, IncorrectDatabaseSetup
from synapse.storage import are_all_users_on_domain
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.server import HomeServer
from twisted.internet import reactor, task, defer
from twisted.application import service
from twisted.web.resource import Resource, EncodingResourceWrapper
from twisted.web.static import File
from twisted.web.server import GzipEncoderFactory
from synapse.http.server import RootRedirect
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.api.urls import (
FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX,
SERVER_KEY_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, STATIC_PREFIX,
SERVER_KEY_V2_PREFIX,
)
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.util.logcontext import LoggingContext
from synapse.metrics import register_memory_metrics, get_metrics_for
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.resource import ReplicationResource, REPLICATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.module_api import ModuleApi
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import register_memory_metrics
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
check_requirements
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage import are_all_users_on_domain
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.http.site import SynapseSite
from synapse import events
from daemonize import Daemonize
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, Resource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
logger = logging.getLogger("synapse.app.homeserver")
@@ -90,7 +82,7 @@ def build_resource_for_web_client(hs):
"\n"
"You can also disable hosting of the webclient via the\n"
"configuration option `web_client`\n"
% {"dep": DEPENDENCY_LINKS["matrix-angular-sdk"]}
% {"dep": CONDITIONAL_REQUIREMENTS["web_client"].keys()[0]}
)
syweb_path = os.path.dirname(syweb.__file__)
webclient_path = os.path.join(syweb_path, "webclient")
@@ -107,7 +99,7 @@ def build_resource_for_web_client(hs):
class SynapseHomeServer(HomeServer):
def _listener_http(self, config, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
bind_addresses = listener_config["bind_addresses"]
tls = listener_config.get("tls", False)
site_tag = listener_config.get("tag", port)
@@ -117,55 +109,18 @@ class SynapseHomeServer(HomeServer):
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "client":
client_resource = ClientRestResource(self)
if res["compress"]:
client_resource = gz_wrap(client_resource)
resources.update(self._configure_named_resource(
name, res.get("compress", False),
))
resources.update({
"/_matrix/client/api/v1": client_resource,
"/_matrix/client/r0": client_resource,
"/_matrix/client/unstable": client_resource,
"/_matrix/client/v2_alpha": client_resource,
"/_matrix/client/versions": client_resource,
})
if name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
})
if name in ["static", "client"]:
resources.update({
STATIC_PREFIX: File(
os.path.join(os.path.dirname(synapse.__file__), "static")
),
})
if name in ["media", "federation", "client"]:
media_repo = MediaRepositoryResource(self)
resources.update({
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
})
if name in ["keys", "federation"]:
resources.update({
SERVER_KEY_PREFIX: LocalKey(self),
SERVER_KEY_V2_PREFIX: KeyApiV2Resource(self),
})
if name == "webclient":
resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
if name == "metrics" and self.get_config().enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(self)
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationResource(self)
additional_resources = listener_config.get("additional_resources", {})
logger.debug("Configuring additional resources: %r",
additional_resources)
module_api = ModuleApi(self, self.get_auth_handler())
for path, resmodule in additional_resources.items():
handler_cls, config = load_module(resmodule)
handler = handler_cls(config, module_api)
resources[path] = AdditionalResource(self, handler.handle_request)
if WEB_CLIENT_PREFIX in resources:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
@@ -173,8 +128,10 @@ class SynapseHomeServer(HomeServer):
root_resource = Resource()
root_resource = create_resource_tree(resources, root_resource)
if tls:
reactor.listenSSL(
listen_ssl(
bind_addresses,
port,
SynapseSite(
"synapse.access.https.%s" % (site_tag,),
@@ -183,21 +140,87 @@ class SynapseHomeServer(HomeServer):
root_resource,
),
self.tls_server_context_factory,
interface=bind_address
)
else:
reactor.listenTCP(
listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
)
logger.info("Synapse now listening on port %d", port)
def _configure_named_resource(self, name, compress=False):
"""Build a resource map for a named resource
Args:
name (str): named resource: one of "client", "federation", etc
compress (bool): whether to enable gzip compression for this
resource
Returns:
dict[str, Resource]: map from path to HTTP resource
"""
resources = {}
if name == "client":
client_resource = ClientRestResource(self)
if compress:
client_resource = gz_wrap(client_resource)
resources.update({
"/_matrix/client/api/v1": client_resource,
"/_matrix/client/r0": client_resource,
"/_matrix/client/unstable": client_resource,
"/_matrix/client/v2_alpha": client_resource,
"/_matrix/client/versions": client_resource,
})
if name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
})
if name in ["static", "client"]:
resources.update({
STATIC_PREFIX: File(
os.path.join(os.path.dirname(synapse.__file__), "static")
),
})
if name in ["media", "federation", "client"]:
if self.get_config().enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update({
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
})
elif name == "media":
raise ConfigError(
"'media' resource conflicts with enable_media_repo=False",
)
if name in ["keys", "federation"]:
resources.update({
SERVER_KEY_PREFIX: LocalKey(self),
SERVER_KEY_V2_PREFIX: KeyApiV2Resource(self),
})
if name == "webclient":
resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
if name == "metrics" and self.get_config().enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(self)
return resources
def start_listening(self):
config = self.get_config()
@@ -205,15 +228,25 @@ class SynapseHomeServer(HomeServer):
if listener["type"] == "http":
self._listener_http(config, listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
)
elif listener["type"] == "replication":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
factory = ReplicationStreamProtocolFactory(self)
server_listener = reactor.listenTCP(
listener["port"], factory, interface=address
)
reactor.addSystemEventTrigger(
"before", "shutdown", server_listener.stopListening,
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -247,16 +280,6 @@ class SynapseHomeServer(HomeServer):
return db_conn
def quit_with_error(error_string):
message_lines = error_string.split("\n")
line_length = max([len(l) for l in message_lines if len(l) < 80]) + 2
sys.stderr.write("*" * line_length + '\n')
for line in message_lines:
sys.stderr.write(" %s\n" % (line.rstrip(),))
sys.stderr.write("*" * line_length + '\n')
sys.exit(1)
def setup(config_options):
"""
Args:
@@ -280,7 +303,7 @@ def setup(config_options):
# generating config files and shouldn't try to continue.
sys.exit(0)
config.setup_logging()
synapse.config.logger.setup_logging(config, use_worker_options=False)
# check any extra requirements we have now we have a config
check_requirements(config)
@@ -383,7 +406,8 @@ def run(hs):
ThreadPool._worker = profile(ThreadPool._worker)
reactor.run = profile(reactor.run)
start_time = hs.get_clock().time()
clock = hs.get_clock()
start_time = clock.time()
stats = {}
@@ -395,41 +419,23 @@ def run(hs):
if uptime < 0:
uptime = 0
        # If the stats dictionary is empty then this is the first time we've
        # reported stats.
first_time = not stats
stats["homeserver"] = hs.config.server_name
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
stats["total_users"] = yield hs.get_datastore().count_all_users()
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
daily_messages = yield hs.get_datastore().count_daily_messages()
if daily_messages is not None:
stats["daily_messages"] = daily_messages
else:
stats.pop("daily_messages", None)
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
if first_time:
# Add callbacks to report the synapse stats as metrics whenever
# prometheus requests them, typically every 30s.
# As some of the stats are expensive to calculate we only update
# them when synapse phones home to matrix.org every 24 hours.
metrics = get_metrics_for("synapse.usage")
metrics.add_callback("timestamp", lambda: stats["timestamp"])
metrics.add_callback("uptime_seconds", lambda: stats["uptime_seconds"])
metrics.add_callback("total_users", lambda: stats["total_users"])
metrics.add_callback("total_room_count", lambda: stats["total_room_count"])
metrics.add_callback(
"daily_active_users", lambda: stats["daily_active_users"]
)
metrics.add_callback(
"daily_messages", lambda: stats.get("daily_messages", 0)
)
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
logger.info("Reporting stats to matrix.org: %s" % (stats,))
try:
@@ -441,36 +447,25 @@ def run(hs):
logger.warn("Error reporting stats: %s", e)
if hs.config.report_stats:
phone_home_task = task.LoopingCall(phone_stats_home)
logger.info("Scheduling stats reporting for 24 hour intervals")
phone_home_task.start(60 * 60 * 24, now=False)
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)
def in_thread():
# Uncomment to enable tracing of log context changes.
# sys.settrace(logcontext_tracer)
with LoggingContext("run"):
change_resource_limit(hs.config.soft_file_limit)
if hs.config.gc_thresholds:
gc.set_threshold(*hs.config.gc_thresholds)
reactor.run()
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(5 * 60, phone_stats_home)
if hs.config.daemonize:
if hs.config.daemonize and hs.config.print_pidfile:
print (hs.config.pid_file)
if hs.config.print_pidfile:
print (hs.config.pid_file)
daemon = Daemonize(
app="synapse-homeserver",
pid=hs.config.pid_file,
action=lambda: in_thread(),
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
in_thread()
_base.start_reactor(
"synapse-homeserver",
hs.config.soft_file_limit,
hs.config.gc_thresholds,
hs.config.pid_file,
hs.config.daemonize,
hs.config.cpu_affinity,
logger,
)
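The daemonize/run boilerplate deleted above moves into _base.start_reactor; reconstructed from the deleted lines, that helper presumably looks roughly like this (simplified sketch; cpu_affinity handling omitted, names assumed):

import gc

from daemonize import Daemonize
from twisted.internet import reactor

from synapse.util.logcontext import LoggingContext
from synapse.util.rlimit import change_resource_limit

def start_reactor(appname, soft_file_limit, gc_thresholds, pid_file,
                  daemonize, cpu_affinity, logger):
    def run():
        with LoggingContext("run"):
            change_resource_limit(soft_file_limit)
            if gc_thresholds:
                gc.set_threshold(*gc_thresholds)
            reactor.run()

    if daemonize:
        Daemonize(
            app=appname, pid=pid_file, action=run,
            auto_close_fds=False, verbose=True, logger=logger,
        ).start()
    else:
        run()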
def main():


@@ -13,53 +13,48 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from synapse.api.urls import (
CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
)
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from twisted.internet import reactor, defer
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.media_repository")
class MediaRepositorySlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
TransactionStore,
BaseSlavedStore,
MediaRepositoryStore,
ClientIpStore,
):
pass
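These worker stores are assembled purely by multiple inheritance, so Python's method resolution order decides which mixin's implementation of a given method wins; that is why BaseSlavedStore sits near the end of the list. A minimal self-contained illustration with hypothetical classes:

class Base(object):
    def get_thing(self):
        return "from Base"

class SlavedMixin(object):
    def get_thing(self):
        # Wins over Base because it appears earlier in the MRO.
        return "from SlavedMixin"

class WorkerStore(SlavedMixin, Base):
    pass

assert WorkerStore().get_thing() == "from SlavedMixin"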
@@ -85,7 +80,7 @@ class MediaRepositoryServer(HomeServer):
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
@@ -93,7 +88,7 @@ class MediaRepositoryServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "media":
media_repo = MediaRepositoryResource(self)
media_repo = self.get_media_repository_resource()
resources.update({
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
@@ -103,16 +98,18 @@ class MediaRepositoryServer(HomeServer):
})
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
)
logger.info("Synapse media repository now listening on port %d", port)
def start_listening(self, listeners):
@@ -120,33 +117,22 @@ class MediaRepositoryServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
self.get_tcp_replication().start_replication(self)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
@@ -160,7 +146,16 @@ def start(config_options):
assert config.worker_app == "synapse.app.media_repository"
setup_logging(config.worker_log_config, config.worker_log_file)
if config.enable_media_repo:
_base.quit_with_error(
"enable_media_repo must be disabled in the main synapse process\n"
"before the media repo can be run in a separate worker.\n"
"Please add ``enable_media_repo: false`` to the main config\n"
)
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -179,33 +174,13 @@ def start(config_options):
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-media-repository",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-media-repository", config)
if __name__ == '__main__':


@@ -13,38 +13,33 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.storage.roommember import RoomMemberStore
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.storage.engines import create_engine
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.async import sleep
from synapse.storage.engines import create_engine
from synapse.storage.roommember import RoomMemberStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.pusher")
@@ -86,7 +81,6 @@ class PusherSlaveStore(
class PusherServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
@@ -106,20 +100,11 @@ class PusherServer(HomeServer):
logger.info("Finished setting up.")
def remove_pusher(self, app_id, push_key, user_id):
http_client = self.get_simple_http_client()
replication_url = self.config.worker_replication_url
url = replication_url + "/remove_pushers"
return http_client.post_json_get_json(url, {
"remove": [{
"app_id": app_id,
"push_key": push_key,
"user_id": user_id,
}]
})
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
@@ -128,16 +113,18 @@ class PusherServer(HomeServer):
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
)
logger.info("Synapse pusher now listening on port %d", port)
def start_listening(self, listeners):
@@ -145,85 +132,64 @@ class PusherServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return PusherReplicationHandler(self)
class PusherReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(PusherReplicationHandler, self).__init__(hs.get_datastore())
self.pusher_pool = hs.get_pusherpool()
def on_rdata(self, stream_name, token, rows):
super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
preserve_fn(self.poke_pushers)(stream_name, token, rows)
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
pusher_pool = self.get_pusherpool()
def stop_pusher(user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
def start_pusher(user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return pusher_pool._refresh_pusher(app_id, pushkey, user_id)
@defer.inlineCallbacks
def poke_pushers(results):
pushers_rows = set(
map(tuple, results.get("pushers", {}).get("rows", []))
def poke_pushers(self, stream_name, token, rows):
if stream_name == "pushers":
for row in rows:
if row.deleted:
yield self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
yield self.pusher_pool.on_new_notifications(
token, token,
)
deleted_pushers_rows = set(
map(tuple, results.get("deleted_pushers", {}).get("rows", []))
elif stream_name == "receipts":
yield self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
for row in sorted(pushers_rows | deleted_pushers_rows):
if row in deleted_pushers_rows:
user_id, app_id, pushkey = row[1:4]
stop_pusher(user_id, app_id, pushkey)
elif row in pushers_rows:
user_id = row[1]
app_id = row[5]
pushkey = row[8]
yield start_pusher(user_id, app_id, pushkey)
stream = results.get("events")
if stream:
min_stream_id = stream["rows"][0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_notifications)(
min_stream_id, max_stream_id
)
def stop_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = self.pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
stream = results.get("receipts")
if stream:
rows = stream["rows"]
affected_room_ids = set(row[1] for row in rows)
min_stream_id = rows[0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_receipts)(
min_stream_id, max_stream_id, affected_room_ids
)
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
poke_pushers(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
def start_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return self.pusher_pool._refresh_pusher(app_id, pushkey, user_id)
def start(config_options):
@@ -237,7 +203,9 @@ def start(config_options):
assert config.worker_app == "synapse.app.pusher"
setup_logging(config.worker_log_config, config.worker_log_file)
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
if config.start_pushers:
sys.stderr.write(
@@ -264,34 +232,14 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_pusherpool().start()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-pusher",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-pusher", config)
if __name__ == '__main__':


@@ -13,54 +13,51 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import logging
import sys
import synapse
from synapse.api.constants import EventTypes, PresenceState
from synapse.api.constants import EventTypes
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.events import FrozenEvent
from synapse.handlers.presence import PresenceHandler
from synapse.http.site import SynapseSite
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.rest.client.v2_alpha import sync
from synapse.rest.client.v1 import events
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.presence import PresenceStore, UserPresenceState
from synapse.storage.presence import UserPresenceState
from synapse.storage.roommember import RoomMemberStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import contextlib
import gc
import ujson as json
logger = logging.getLogger("synapse.app.synchrotron")
@@ -73,23 +70,20 @@ class SynchrotronSlavedStore(
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
# XXX: This is a bit broken because we don't persist the accepted list in a
# way that can be replicated. This means that we don't have a way to
# invalidate the cache correctly.
get_presence_list_accepted = PresenceStore.__dict__[
"get_presence_list_accepted"
]
get_presence_list_observers_accepted = PresenceStore.__dict__[
"get_presence_list_observers_accepted"
]
did_forget = (
RoomMemberStore.__dict__["did_forget"]
)
UPDATE_SYNCING_USERS_MS = 10 * 1000
@@ -97,11 +91,11 @@ UPDATE_SYNCING_USERS_MS = 10 * 1000
class SynchrotronPresence(object):
def __init__(self, hs):
self.hs = hs
self.is_mine_id = hs.is_mine_id
self.http_client = hs.get_simple_http_client()
self.store = hs.get_datastore()
self.user_to_num_current_syncs = {}
self.syncing_users_url = hs.config.worker_replication_url + "/syncing_users"
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
@@ -111,17 +105,52 @@ class SynchrotronPresence(object):
for state in active_presence
}
# user_id -> last_sync_ms. Lists the users that have stopped syncing
# but we haven't notified the master of that yet
self.users_going_offline = {}
self._send_stop_syncing_loop = self.clock.looping_call(
self.send_stop_syncing, 10 * 1000
)
self.process_id = random_string(16)
logger.info("Presence process_id is %r", self.process_id)
self._sending_sync = False
self._need_to_send_sync = False
self.clock.looping_call(
self._send_syncing_users_regularly,
UPDATE_SYNCING_USERS_MS,
)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
self.hs.get_tcp_replication().send_user_sync(user_id, is_syncing, last_sync_ms)
reactor.addSystemEventTrigger("before", "shutdown", self._on_shutdown)
def mark_as_coming_online(self, user_id):
"""A user has started syncing. Send a UserSync to the master, unless they
had recently stopped syncing.
Args:
user_id (str)
"""
going_offline = self.users_going_offline.pop(user_id, None)
if not going_offline:
# Safe to skip because we haven't yet told the master they were offline
self.send_user_sync(user_id, True, self.clock.time_msec())
def mark_as_going_offline(self, user_id):
"""A user has stopped syncing. We wait before notifying the master as
its likely they'll come back soon. This allows us to avoid sending
a stopped syncing immediately followed by a started syncing notification
to the master
Args:
user_id (str)
"""
self.users_going_offline[user_id] = self.clock.time_msec()
def send_stop_syncing(self):
"""Check if there are any users who have stopped syncing a while ago
and haven't come back yet. If there are poke the master about them.
"""
now = self.clock.time_msec()
for user_id, last_sync_ms in self.users_going_offline.items():
if now - last_sync_ms > 10 * 1000:
self.users_going_offline.pop(user_id, None)
self.send_user_sync(user_id, False, last_sync_ms)
def set_state(self, user, state, ignore_status_msg=False):
        # TODO How's this supposed to work?
@@ -129,18 +158,16 @@ class SynchrotronPresence(object):
get_states = PresenceHandler.get_states.__func__
get_state = PresenceHandler.get_state.__func__
_get_interested_parties = PresenceHandler._get_interested_parties.__func__
current_state_for_users = PresenceHandler.current_state_for_users.__func__
@defer.inlineCallbacks
def user_syncing(self, user_id, affect_presence):
if affect_presence:
curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
self.user_to_num_current_syncs[user_id] = curr_sync + 1
prev_states = yield self.current_state_for_users([user_id])
if prev_states[user_id].state == PresenceState.OFFLINE:
# TODO: Don't block the sync request on this HTTP hit.
yield self._send_syncing_users_now()
# If we went from no in flight sync to some, notify replication
if self.user_to_num_current_syncs[user_id] == 1:
self.mark_as_coming_online(user_id)
def _end():
# We check that the user_id is in user_to_num_current_syncs because
@@ -149,6 +176,10 @@ class SynchrotronPresence(object):
if affect_presence and user_id in self.user_to_num_current_syncs:
self.user_to_num_current_syncs[user_id] -= 1
# If we went from one in flight sync to non, notify replication
if self.user_to_num_current_syncs[user_id] == 0:
self.mark_as_going_offline(user_id)
@contextlib.contextmanager
def _user_syncing():
try:
@@ -156,56 +187,12 @@ class SynchrotronPresence(object):
finally:
_end()
defer.returnValue(_user_syncing())
@defer.inlineCallbacks
def _on_shutdown(self):
# When the synchrotron is shutdown tell the master to clear the in
# progress syncs for this process
self.user_to_num_current_syncs.clear()
yield self._send_syncing_users_now()
def _send_syncing_users_regularly(self):
# Only send an update if we aren't in the middle of sending one.
if not self._sending_sync:
preserve_fn(self._send_syncing_users_now)()
@defer.inlineCallbacks
def _send_syncing_users_now(self):
if self._sending_sync:
# We don't want to race with sending another update.
# Instead we wait for that update to finish and send another
# update afterwards.
self._need_to_send_sync = True
return
# Flag that we are sending an update.
self._sending_sync = True
yield self.http_client.post_json_get_json(self.syncing_users_url, {
"process_id": self.process_id,
"syncing_users": [
user_id for user_id, count in self.user_to_num_current_syncs.items()
if count > 0
],
})
# Unset the flag as we are no longer sending an update.
self._sending_sync = False
if self._need_to_send_sync:
# If something happened while we were sending the update then
# we might need to send another update.
# TODO: Check if the update that was sent matches the current state
# as we only need to send an update if they are different.
self._need_to_send_sync = False
yield self._send_syncing_users_now()
return defer.succeed(_user_syncing())
@defer.inlineCallbacks
def notify_from_replication(self, states, stream_id):
parties = yield self._get_interested_parties(
states, calculate_remote_hosts=False
)
room_ids_to_states, users_to_states, _ = parties
parties = yield get_interested_parties(self.store, states)
room_ids_to_states, users_to_states = parties
self.notifier.on_new_event(
"presence_key", stream_id, rooms=room_ids_to_states.keys(),
@@ -213,26 +200,24 @@ class SynchrotronPresence(object):
)
@defer.inlineCallbacks
def process_replication(self, result):
stream = result.get("presence", {"rows": []})
states = []
for row in stream["rows"]:
(
position, user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
) = row
state = UserPresenceState(
user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
)
self.user_to_current_state[user_id] = state
states.append(state)
def process_replication_rows(self, token, rows):
states = [UserPresenceState(
row.user_id, row.state, row.last_active_ts,
row.last_federation_update_ts, row.last_user_sync_ts, row.status_msg,
row.currently_active
) for row in rows]
if states and "position" in stream:
stream_id = int(stream["position"])
yield self.notify_from_replication(states, stream_id)
        for state in states:
            self.user_to_current_state[state.user_id] = state
stream_id = token
yield self.notify_from_replication(states, stream_id)
def get_currently_syncing_users(self):
return [
user_id for user_id, count in self.user_to_num_current_syncs.iteritems()
if count > 0
]
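The mark_as_coming_online / mark_as_going_offline pair above implements a simple debounce: stopping a sync only records a timestamp, and send_stop_syncing later reports users who have stayed quiet for more than ten seconds, so a quick reconnect never reaches the master at all. A condensed, self-contained sketch of that state machine (the clock is anything with a time_msec() method):

OFFLINE_DELAY_MS = 10 * 1000

class SyncDebounce(object):
    def __init__(self, clock, send_user_sync):
        self.clock = clock                    # provides time_msec()
        self.send_user_sync = send_user_sync  # (user_id, is_syncing, ts)
        self.users_going_offline = {}

    def coming_online(self, user_id):
        if self.users_going_offline.pop(user_id, None) is None:
            # The master still thinks the user is offline, so tell it.
            self.send_user_sync(user_id, True, self.clock.time_msec())
        # Otherwise they bounced back before we reported anything.

    def going_offline(self, user_id):
        self.users_going_offline[user_id] = self.clock.time_msec()

    def flush(self):
        # Called periodically, like send_stop_syncing above.
        now = self.clock.time_msec()
        for user_id, last_sync_ms in list(self.users_going_offline.items()):
            if now - last_sync_ms > OFFLINE_DELAY_MS:
                self.users_going_offline.pop(user_id, None)
                self.send_user_sync(user_id, False, last_sync_ms)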
class SynchrotronTyping(object):
@@ -247,16 +232,12 @@ class SynchrotronTyping(object):
# value which we *must* use for the next replication request.
return {"typing": self._latest_room_serial}
def process_replication(self, result):
stream = result.get("typing")
if stream:
self._latest_room_serial = int(stream["position"])
def process_replication_rows(self, token, rows):
self._latest_room_serial = token
for row in stream["rows"]:
position, room_id, typing_json = row
typing = json.loads(typing_json)
self._room_serials[room_id] = position
self._room_typing[room_id] = typing
for row in rows:
self._room_serials[row.room_id] = token
self._room_typing[row.room_id] = row.user_ids
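For clarity, each typing stream row carries a room and the full set of users currently typing there, and the stream token doubles as the room serial (illustrative values only):

# token = 1234
# row.room_id  = "!room:example.com"
# row.user_ids = ["@alice:example.com", "@bob:example.com"]
# afterwards: self._room_serials["!room:example.com"] == 1234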
class SynchrotronApplicationService(object):
@@ -285,7 +266,7 @@ class SynchrotronServer(HomeServer):
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
@@ -296,6 +277,8 @@ class SynchrotronServer(HomeServer):
resource = JsonResource(self, canonical_json=False)
sync.register_servlets(self, resource)
events.register_servlets(self, resource)
InitialSyncRestServlet(self).register(resource)
RoomInitialSyncRestServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
@@ -304,16 +287,18 @@ class SynchrotronServer(HomeServer):
})
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
)
logger.info("Synapse synchrotron now listening on port %d", port)
def start_listening(self, listeners):
@@ -321,104 +306,22 @@ class SynchrotronServer(HomeServer):
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
notifier = self.get_notifier()
presence_handler = self.get_presence_handler()
typing_handler = self.get_typing_handler()
self.get_tcp_replication().start_replication(self)
def notify_from_stream(
result, stream_name, stream_key, room=None, user=None
):
stream = result.get(stream_name)
if stream:
position_index = stream["field_names"].index("position")
if room:
room_index = stream["field_names"].index(room)
if user:
user_index = stream["field_names"].index(user)
users = ()
rooms = ()
for row in stream["rows"]:
position = row[position_index]
if user:
users = (row[user_index],)
if room:
rooms = (row[room_index],)
notifier.on_new_event(
stream_key, position, users=users, rooms=rooms
)
def notify(result):
stream = result.get("events")
if stream:
max_position = stream["position"]
for row in stream["rows"]:
position = row[0]
internal = json.loads(row[1])
event_json = json.loads(row[2])
event = FrozenEvent(event_json, internal_metadata_dict=internal)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
notifier.on_new_room_event(
event, position, max_position, extra_users
)
notify_from_stream(
result, "push_rules", "push_rules_key", user="user_id"
)
notify_from_stream(
result, "user_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "room_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "tag_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "receipts", "receipt_key", room="room_id"
)
notify_from_stream(
result, "typing", "typing_key", room="room_id"
)
notify_from_stream(
result, "to_device", "to_device_key", user="user_id"
)
while True:
try:
args = store.stream_positions()
args.update(typing_handler.stream_positions())
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
yield store.process_replication(result)
typing_handler.process_replication(result)
yield presence_handler.process_replication(result)
notify(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_tcp_replication(self):
return SyncReplicationHandler(self)
def build_presence_handler(self):
return SynchrotronPresence(self)
@@ -427,6 +330,82 @@ class SynchrotronServer(HomeServer):
return SynchrotronTyping(self)
class SyncReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(SyncReplicationHandler, self).__init__(hs.get_datastore())
self.store = hs.get_datastore()
self.typing_handler = hs.get_typing_handler()
# NB this is a SynchrotronPresence, not a normal PresenceHandler
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
def on_rdata(self, stream_name, token, rows):
super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
preserve_fn(self.process_and_notify)(stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(SyncReplicationHandler, self).get_streams_to_replicate()
args.update(self.typing_handler.stream_positions())
return args
def get_currently_syncing_users(self):
return self.presence_handler.get_currently_syncing_users()
@defer.inlineCallbacks
def process_and_notify(self, stream_name, token, rows):
if stream_name == "events":
            # We shouldn't get multiple rows per token for the events stream,
            # so we don't need to optimise this for multiple rows.
for row in rows:
event = yield self.store.get_event(row.event_id)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
max_token = self.store.get_room_max_stream_ordering()
self.notifier.on_new_room_event(
event, token, max_token, extra_users
)
elif stream_name == "push_rules":
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows],
)
elif stream_name in ("account_data", "tag_account_data",):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows],
)
elif stream_name == "receipts":
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "typing":
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "to_device":
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event(
"to_device_key", token, users=entities,
)
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = yield self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event(
"device_list_key", token, rooms=all_room_ids,
)
elif stream_name == "presence":
yield self.presence_handler.process_replication_rows(token, rows)
        elif stream_name == "groups":
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows],
)
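As a reading aid, the dispatch above reduces to the following stream-to-notifier-key mapping (summary only; the events, typing, device_lists and presence branches also do extra work before or instead of notifying):

STREAM_TO_NOTIFIER_KEY = {
    "push_rules": "push_rules_key",
    "account_data": "account_data_key",
    "tag_account_data": "account_data_key",
    "receipts": "receipt_key",
    "typing": "typing_key",
    "to_device": "to_device_key",
    "device_lists": "device_list_key",
    "groups": "groups_key",
}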
def start(config_options):
try:
config = HomeServerConfig.load_config(
@@ -438,7 +417,9 @@ def start(config_options):
assert config.worker_app == "synapse.app.synchrotron"
setup_logging(config.worker_log_config, config.worker_log_file)
setup_logging(config, use_worker_options=True)
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -454,33 +435,13 @@ def start(config_options):
ss.setup()
ss.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_datastore().start_profiling()
ss.replicate()
ss.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-synchrotron",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-synchrotron", config)
if __name__ == '__main__':


@@ -23,14 +23,27 @@ import signal
import subprocess
import sys
import yaml
import errno
import time
SYNAPSE = ["python", "-B", "-m", "synapse.app.homeserver"]
SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
GREEN = "\x1b[1;32m"
YELLOW = "\x1b[1;33m"
RED = "\x1b[1;31m"
NORMAL = "\x1b[m"
def pid_running(pid):
    try:
        # Signal 0 performs error checking only: it tests whether the
        # process exists without actually delivering a signal.
        os.kill(pid, 0)
        return True
    except OSError as err:
        if err.errno == errno.EPERM:
            # The process exists but is owned by another user, which
            # still counts as running for our purposes.
            return True
        return False
def write(message, colour=NORMAL, stream=sys.stdout):
if colour == NORMAL:
stream.write(message + "\n")
@@ -38,6 +51,11 @@ def write(message, colour=NORMAL, stream=sys.stdout):
stream.write(colour + message + NORMAL + "\n")
def abort(message, colour=RED, stream=sys.stderr):
write(message, colour, stream)
sys.exit(1)
def start(configfile):
write("Starting ...")
args = SYNAPSE
@@ -45,7 +63,8 @@ def start(configfile):
try:
subprocess.check_call(args)
write("started synapse.app.homeserver(%r)" % (configfile,), colour=GREEN)
write("started synapse.app.homeserver(%r)" %
(configfile,), colour=GREEN)
except subprocess.CalledProcessError as e:
write(
"error starting (exit code: %d); see above for logs" % e.returncode,
@@ -76,8 +95,16 @@ def start_worker(app, configfile, worker_configfile):
def stop(pidfile, app):
if os.path.exists(pidfile):
pid = int(open(pidfile).read())
os.kill(pid, signal.SIGTERM)
write("stopped %s" % (app,), colour=GREEN)
        try:
            os.kill(pid, signal.SIGTERM)
            write("stopped %s" % (app,), colour=GREEN)
        except OSError as err:
            if err.errno == errno.ESRCH:
                write("%s not running" % (app,), colour=YELLOW)
            elif err.errno == errno.EPERM:
                abort("Cannot stop %s: Operation not permitted" % (app,))
            else:
                abort("Cannot stop %s: Unknown error" % (app,))
Worker = collections.namedtuple("Worker", [
@@ -98,7 +125,7 @@ def main():
"configfile",
nargs="?",
default="homeserver.yaml",
help="the homeserver config file, defaults to homserver.yaml",
help="the homeserver config file, defaults to homeserver.yaml",
)
parser.add_argument(
"-w", "--worker",
@@ -175,7 +202,8 @@ def main():
worker_app = worker_config["worker_app"]
worker_pidfile = worker_config["worker_pid_file"]
worker_daemonize = worker_config["worker_daemonize"]
assert worker_daemonize # TODO print something more user friendly
assert worker_daemonize, "In config %r: expected '%s' to be True" % (
worker_configfile, "worker_daemonize")
worker_cache_factor = worker_config.get("synctl_cache_factor")
workers.append(Worker(
worker_app, worker_configfile, worker_pidfile, worker_cache_factor,
@@ -190,10 +218,25 @@ def main():
if start_stop_synapse:
stop(pidfile, "synapse.app.homeserver")
# TODO: Wait for synapse to actually shutdown before starting it again
# Wait for synapse to actually shutdown before starting it again
if action == "restart":
running_pids = []
if start_stop_synapse and os.path.exists(pidfile):
running_pids.append(int(open(pidfile).read()))
for worker in workers:
if os.path.exists(worker.pidfile):
running_pids.append(int(open(worker.pidfile).read()))
if len(running_pids) > 0:
write("Waiting for process to exit before restarting...")
for running_pid in running_pids:
while pid_running(running_pid):
time.sleep(0.2)
if action == "start" or action == "restart":
if start_stop_synapse:
# Check if synapse is already running
if os.path.exists(pidfile) and pid_running(int(open(pidfile).read())):
abort("synapse.app.homeserver already running")
start(configfile)
for worker in workers:

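Editor's note: the pid_running helper above relies on a POSIX detail worth spelling out: sending signal 0 delivers nothing but still performs the existence and permission checks, so EPERM means the process exists but is owned by another user. A self-contained restatement:

    import errno
    import os

    def pid_running(pid):
        try:
            os.kill(pid, 0)  # signal 0: nothing is sent, but existence is checked
            return True
        except OSError as err:
            # EPERM: the process exists but we may not signal it -> still running
            return err.errno == errno.EPERM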
synapse/app/user_dir.py (new file, 237 lines)

@@ -0,0 +1,237 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
logger = logging.getLogger("synapse.app.user_dir")
class UserDirectorySlaveStore(
SlavedEventStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
UserDirectoryStore,
BaseSlavedStore,
):
def __init__(self, db_conn, hs):
super(UserDirectorySlaveStore, self).__init__(db_conn, hs)
events_max = self._stream_id_gen.get_current_token()
curr_state_delta_prefill, min_curr_state_delta_id = self._get_cache_dict(
db_conn, "current_state_delta_stream",
entity_column="room_id",
stream_column="stream_id",
max_value=events_max, # As we share the stream id with events token
limit=1000,
)
self._curr_state_delta_stream_cache = StreamChangeCache(
"_curr_state_delta_stream_cache", min_curr_state_delta_id,
prefilled_cache=curr_state_delta_prefill,
)
self._current_state_delta_pos = events_max
def stream_positions(self):
result = super(UserDirectorySlaveStore, self).stream_positions()
result["current_state_deltas"] = self._current_state_delta_pos
return result
def process_replication_rows(self, stream_name, token, rows):
if stream_name == "current_state_deltas":
self._current_state_delta_pos = token
for row in rows:
self._curr_state_delta_stream_cache.entity_has_changed(
row.room_id, token
)
return super(UserDirectorySlaveStore, self).process_replication_rows(
stream_name, token, rows
)
class UserDirectoryServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = UserDirectorySlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
user_directory.register_servlets(self, resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
)
)
logger.info("Synapse user_dir now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return UserDirectoryReplicationHandler(self)
class UserDirectoryReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(UserDirectoryReplicationHandler, self).__init__(hs.get_datastore())
self.user_directory = hs.get_user_directory_handler()
def on_rdata(self, stream_name, token, rows):
super(UserDirectoryReplicationHandler, self).on_rdata(
stream_name, token, rows
)
if stream_name == "current_state_deltas":
preserve_fn(self.user_directory.notify_new_event)()
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse user directory", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.user_dir"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
if config.update_user_directory:
sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``update_user_directory: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the user directory updater to run, since it is disabled in the main config
config.update_user_directory = True
tls_server_context_factory = context_factory.ServerContextFactory(config)
ps = UserDirectoryServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ps.setup()
ps.start_listening(config.worker_listeners)
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-user-dir", config)
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

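Editor's note: the cp_-prefix filtering in get_db_conn above exists because keys such as cp_min and cp_max configure twisted's adbapi connection pool and would be rejected by the database driver. For illustration:

    db_args = {"database": "synapse", "host": "localhost", "cp_min": 5, "cp_max": 10}
    db_params = {
        k: v for k, v in db_args.items()
        if not k.startswith("cp_")
    }
    assert db_params == {"database": "synapse", "host": "localhost"}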

@@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.constants import EventTypes
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.types import GroupID, get_domain_from_id
from twisted.internet import defer
@@ -80,21 +82,27 @@ class ApplicationService(object):
# values.
NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
def __init__(self, token, url=None, namespaces=None, hs_token=None,
sender=None, id=None, protocols=None):
def __init__(self, token, hostname, url=None, namespaces=None, hs_token=None,
sender=None, id=None, protocols=None, rate_limited=True):
self.token = token
self.url = url
self.hs_token = hs_token
self.sender = sender
self.server_name = hostname
self.namespaces = self._check_namespaces(namespaces)
self.id = id
if "|" in self.id:
raise Exception("application service ID cannot contain '|' character")
# .protocols is a publicly visible field
if protocols:
self.protocols = set(protocols)
else:
self.protocols = set()
self.rate_limited = rate_limited
def _check_namespaces(self, namespaces):
# Sanity check that it is of the form:
# {
@@ -119,29 +127,41 @@ class ApplicationService(object):
raise ValueError(
"Expected bool for 'exclusive' in ns '%s'" % ns
)
if not isinstance(regex_obj.get("regex"), basestring):
group_id = regex_obj.get("group_id")
if group_id:
if not isinstance(group_id, str):
raise ValueError(
"Expected string for 'group_id' in ns '%s'" % ns
)
try:
GroupID.from_string(group_id)
except Exception:
raise ValueError(
"Expected valid group ID for 'group_id' in ns '%s'" % ns
)
if get_domain_from_id(group_id) != self.server_name:
raise ValueError(
"Expected 'group_id' to be this host in ns '%s'" % ns
)
regex = regex_obj.get("regex")
if isinstance(regex, basestring):
regex_obj["regex"] = re.compile(regex) # Pre-compile regex
else:
raise ValueError(
"Expected string for 'regex' in ns '%s'" % ns
)
return namespaces
def _matches_regex(self, test_string, namespace_key, return_obj=False):
if not isinstance(test_string, basestring):
logger.error(
"Expected a string to test regex against, but got %s",
test_string
)
return False
def _matches_regex(self, test_string, namespace_key):
for regex_obj in self.namespaces[namespace_key]:
if re.match(regex_obj["regex"], test_string):
if return_obj:
return regex_obj
return True
return False
if regex_obj["regex"].match(test_string):
return regex_obj
return None
def _is_exclusive(self, ns_key, test_string):
regex_obj = self._matches_regex(test_string, ns_key, return_obj=True)
regex_obj = self._matches_regex(test_string, ns_key)
if regex_obj:
return regex_obj["exclusive"]
return False
@@ -161,7 +181,14 @@ class ApplicationService(object):
if not store:
defer.returnValue(False)
member_list = yield store.get_users_in_room(event.room_id)
does_match = yield self._matches_user_in_member_list(event.room_id, store)
defer.returnValue(does_match)
@cachedInlineCallbacks(num_args=1, cache_context=True)
def _matches_user_in_member_list(self, room_id, store, cache_context):
member_list = yield store.get_users_in_room(
room_id, on_invalidate=cache_context.invalidate
)
# check joined member events
for user_id in member_list:
@@ -214,10 +241,10 @@ class ApplicationService(object):
)
def is_interested_in_alias(self, alias):
return self._matches_regex(alias, ApplicationService.NS_ALIASES)
return bool(self._matches_regex(alias, ApplicationService.NS_ALIASES))
def is_interested_in_room(self, room_id):
return self._matches_regex(room_id, ApplicationService.NS_ROOMS)
return bool(self._matches_regex(room_id, ApplicationService.NS_ROOMS))
def is_exclusive_user(self, user_id):
return (
@@ -234,5 +261,33 @@ class ApplicationService(object):
def is_exclusive_room(self, room_id):
return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)
def get_exlusive_user_regexes(self):
"""Get the list of regexes used to determine if a user is exclusively
registered by the AS
"""
return [
regex_obj["regex"]
for regex_obj in self.namespaces[ApplicationService.NS_USERS]
if regex_obj["exclusive"]
]
def get_groups_for_user(self, user_id):
"""Get the groups that this user is associated with by this AS
Args:
user_id (str): The ID of the user.
Returns:
iterable[str]: an iterable that yields group_id strings.
"""
return (
regex_obj["group_id"]
for regex_obj in self.namespaces[ApplicationService.NS_USERS]
if "group_id" in regex_obj and regex_obj["regex"].match(user_id)
)
def is_rate_limited(self):
return self.rate_limited
def __str__(self):
return "ApplicationService: %s" % (self.__dict__,)

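Editor's note: to illustrate the new group_id namespace support, here is a hypothetical (made-up) appservice namespace with the generator logic from get_groups_for_user above applied to it:

    import re

    namespaces = {
        "users": [
            {
                "regex": re.compile(r"@irc_.*:example\.com"),
                "exclusive": True,
                "group_id": "+irc:example.com",
            },
        ],
    }

    def get_groups_for_user(user_id):
        return (
            regex_obj["group_id"]
            for regex_obj in namespaces["users"]
            if "group_id" in regex_obj and regex_obj["regex"].match(user_id)
        )

    print(list(get_groups_for_user("@irc_alice:example.com")))  # ['+irc:example.com']
    print(list(get_groups_for_user("@bob:example.com")))        # []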

@@ -18,7 +18,9 @@ from synapse.api.constants import ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
from synapse.http.client import SimpleHttpClient
from synapse.events.utils import serialize_event
from synapse.util.logcontext import preserve_fn, make_deferred_yieldable
from synapse.util.caches.response_cache import ResponseCache
from synapse.types import ThirdPartyInstanceID
import logging
import urllib
@@ -177,6 +179,13 @@ class ApplicationServiceApi(SimpleHttpClient):
" valid result", uri)
defer.returnValue(None)
for instance in info.get("instances", []):
network_id = instance.get("network_id", None)
if network_id is not None:
instance["instance_id"] = ThirdPartyInstanceID(
service.id, network_id,
).to_string()
defer.returnValue(info)
except Exception as ex:
logger.warning("query_3pe_protocol to %s threw exception %s",
@@ -184,9 +193,12 @@ class ApplicationServiceApi(SimpleHttpClient):
defer.returnValue(None)
key = (service.id, protocol)
return self.protocol_meta_cache.get(key) or (
self.protocol_meta_cache.set(key, _get())
)
result = self.protocol_meta_cache.get(key)
if not result:
result = self.protocol_meta_cache.set(
key, preserve_fn(_get)()
)
return make_deferred_yieldable(result)
@defer.inlineCallbacks
def push_bulk(self, service, events, txn_id=None):


@@ -123,7 +123,7 @@ class _ServiceQueuer(object):
with Measure(self.clock, "servicequeuer.send"):
try:
yield self.txn_ctrl.send(service, events)
except:
except Exception:
logger.exception("AS request failed")
finally:
self.requests_in_flight.discard(service.id)


@@ -64,11 +64,12 @@ class Config(object):
if isinstance(value, int) or isinstance(value, long):
return value
second = 1000
hour = 60 * 60 * second
minute = 60 * second
hour = 60 * minute
day = 24 * hour
week = 7 * day
year = 365 * day
sizes = {"s": second, "h": hour, "d": day, "w": week, "y": year}
sizes = {"s": second, "m": minute, "h": hour, "d": day, "w": week, "y": year}
size = 1
suffix = value[-1]
if suffix in sizes:
@@ -80,22 +81,38 @@ class Config(object):
def abspath(file_path):
return os.path.abspath(file_path) if file_path else file_path
@classmethod
def path_exists(cls, file_path):
"""Check if a file exists
Unlike os.path.exists, this throws an exception if there is an error
checking if the file exists (for example, if there is a perms error on
the parent dir).
Returns:
bool: True if the file exists; False if not.
"""
try:
os.stat(file_path)
return True
except OSError as e:
if e.errno != errno.ENOENT:
raise e
return False
@classmethod
def check_file(cls, file_path, config_name):
if file_path is None:
raise ConfigError(
"Missing config for %s."
" You must specify a path for the config file. You can "
"do this with the -c or --config-path option. "
"Adding --generate-config along with --server-name "
"<server name> will generate a config file at the given path."
% (config_name,)
)
if not os.path.exists(file_path):
try:
os.stat(file_path)
except OSError as e:
raise ConfigError(
"File %s config for %s doesn't exist."
" Try running again with --generate-config"
% (file_path, config_name,)
"Error accessing file '%s' (config for %s): %s"
% (file_path, config_name, e.strerror)
)
return cls.abspath(file_path)
@@ -247,7 +264,7 @@ class Config(object):
" -c CONFIG-FILE\""
)
(config_path,) = config_files
if not os.path.exists(config_path):
if not cls.path_exists(config_path):
if config_args.keys_directory:
config_dir_path = config_args.keys_directory
else:
@@ -260,7 +277,7 @@ class Config(object):
"Must specify a server_name to a generate config for."
" Pass -H server.name."
)
if not os.path.exists(config_dir_path):
if not cls.path_exists(config_dir_path):
os.makedirs(config_dir_path)
with open(config_path, "wb") as config_file:
config_bytes, config = obj.generate_config(

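Editor's note: the first hunk above adds minute support to the duration parser. A standalone sketch of the resulting suffix handling (not the synapse implementation itself, which also passes plain integers through as milliseconds):

    def parse_duration(value):
        second = 1000
        minute = 60 * second
        hour = 60 * minute
        day = 24 * hour
        week = 7 * day
        year = 365 * day
        sizes = {"s": second, "m": minute, "h": hour, "d": day, "w": week, "y": year}
        size = 1
        suffix = value[-1]
        if suffix in sizes:
            value = value[:-1]
            size = sizes[suffix]
        return int(value) * size

    assert parse_duration("90s") == 90000
    assert parse_duration("2m") == 120000   # "m" is the newly supported suffix
    assert parse_duration("1h") == 3600000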

@@ -110,6 +110,11 @@ def _load_appservice(hostname, as_info, config_filename):
user = UserID(localpart, hostname)
user_id = user.to_string()
# Rate limiting for users of this AS is on by default (excludes sender)
rate_limited = True
if isinstance(as_info.get("rate_limited"), bool):
rate_limited = as_info.get("rate_limited")
# namespace checks
if not isinstance(as_info.get("namespaces"), dict):
raise KeyError("Requires 'namespaces' object.")
@@ -149,10 +154,12 @@ def _load_appservice(hostname, as_info, config_filename):
)
return ApplicationService(
token=as_info["as_token"],
hostname=hostname,
url=as_info["url"],
namespaces=as_info["namespaces"],
hs_token=as_info["hs_token"],
sender=user_id,
id=as_info["id"],
protocols=protocols,
rate_limited=rate_limited
)


@@ -41,7 +41,7 @@ class CasConfig(Config):
#cas_config:
# enabled: true
# server_url: "https://cas-server.com"
# service_url: "https://homesever.domain.com:8448"
# service_url: "https://homeserver.domain.com:8448"
# #required_attributes:
# # name: value
"""


@@ -68,6 +68,18 @@ class EmailConfig(Config):
self.email_notif_for_new_users = email_config.get(
"notif_for_new_users", True
)
self.email_riot_base_url = email_config.get(
"riot_base_url", None
)
self.email_smtp_user = email_config.get(
"smtp_user", None
)
self.email_smtp_pass = email_config.get(
"smtp_pass", None
)
self.require_transport_security = email_config.get(
"require_transport_security", False
)
if "app_name" in email_config:
self.email_app_name = email_config["app_name"]
else:
@@ -85,14 +97,25 @@ class EmailConfig(Config):
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Enable sending emails for notification events
# Defining a custom URL for Riot is only needed if email notifications
# should contain links to a self-hosted installation of Riot; when set
# the "app_name" setting is ignored.
#
# If your SMTP server requires authentication, the optional smtp_user &
# smtp_pass variables should be used
#
#email:
# enable_notifs: false
# smtp_host: "localhost"
# smtp_port: 25
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: False
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
# template_dir: res/templates
# notif_template_html: notif_mail.html
# notif_template_text: notif_mail.txt
# notif_for_new_users: True
# riot_base_url: "http://localhost/riot"
"""

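Editor's note: the new smtp_user/smtp_pass and require_transport_security options map onto the standard SMTP AUTH and STARTTLS steps. A sketch of the connection logic they imply, using the standard library (an illustration only, not synapse's mailer code):

    import smtplib

    def connect_smtp(host, port, user=None, password=None, require_tls=False):
        conn = smtplib.SMTP(host, port)
        conn.ehlo()
        if conn.has_extn("starttls"):
            conn.starttls()  # upgrade to TLS before any credentials are sent
            conn.ehlo()
        elif require_tls:
            raise RuntimeError("SMTP server does not offer STARTTLS")
        if user is not None:
            conn.login(user, password)
        return conn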
synapse/config/groups.py (new file, 32 lines)

@@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class GroupsConfig(Config):
def read_config(self, config):
self.enable_group_creation = config.get("enable_group_creation", False)
self.group_creation_prefix = config.get("group_creation_prefix", "")
def default_config(self, **kwargs):
return """\
# Whether to allow non server admins to create groups on this server
enable_group_creation: false
# If enabled, non server admins can only create groups with local parts
# starting with this prefix
# group_creation_prefix: "unofficial/"
"""


@@ -30,17 +30,22 @@ from .saml2 import SAML2Config
from .cas import CasConfig
from .password import PasswordConfig
from .jwt import JWTConfig
from .ldap import LDAPConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .emailconfig import EmailConfig
from .workers import WorkerConfig
from .push import PushConfig
from .spam_checker import SpamCheckerConfig
from .groups import GroupsConfig
from .user_directory import UserDirectoryConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
RatelimitConfig, ContentRepositoryConfig, CaptchaConfig,
VoipConfig, RegistrationConfig, MetricsConfig, ApiConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
JWTConfig, LDAPConfig, PasswordConfig, EmailConfig,
WorkerConfig,):
JWTConfig, PasswordConfig, EmailConfig,
WorkerConfig, PasswordAuthProviderConfig, PushConfig,
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,):
pass


@@ -118,10 +118,9 @@ class KeyConfig(Config):
signing_keys = self.read_file(signing_key_path, "signing_key")
try:
return read_signing_keys(signing_keys.splitlines(True))
except Exception:
except Exception as e:
raise ConfigError(
"Error reading signing_key."
" Try running again with --generate-config"
"Error reading signing_key: %s" % (str(e))
)
def read_old_signing_keys(self, old_signing_keys):
@@ -141,7 +140,8 @@ class KeyConfig(Config):
def generate_files(self, config):
signing_key_path = config["signing_key_path"]
if not os.path.exists(signing_key_path):
if not self.path_exists(signing_key_path):
with open(signing_key_path, "w") as signing_key_file:
key_id = "a_" + random_string(4)
write_signing_keys(


@@ -1,100 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Niklas Riekenbrauck
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
MISSING_LDAP3 = (
"Missing ldap3 library. This is required for LDAP Authentication."
)
class LDAPMode(object):
SIMPLE = "simple",
SEARCH = "search",
LIST = (SIMPLE, SEARCH)
class LDAPConfig(Config):
def read_config(self, config):
ldap_config = config.get("ldap_config", {})
self.ldap_enabled = ldap_config.get("enabled", False)
if self.ldap_enabled:
# verify dependencies are available
try:
import ldap3
ldap3 # to stop unused lint
except ImportError:
raise ConfigError(MISSING_LDAP3)
self.ldap_mode = LDAPMode.SIMPLE
# verify config sanity
self.require_keys(ldap_config, [
"uri",
"base",
"attributes",
])
self.ldap_uri = ldap_config["uri"]
self.ldap_start_tls = ldap_config.get("start_tls", False)
self.ldap_base = ldap_config["base"]
self.ldap_attributes = ldap_config["attributes"]
if "bind_dn" in ldap_config:
self.ldap_mode = LDAPMode.SEARCH
self.require_keys(ldap_config, [
"bind_dn",
"bind_password",
])
self.ldap_bind_dn = ldap_config["bind_dn"]
self.ldap_bind_password = ldap_config["bind_password"]
self.ldap_filter = ldap_config.get("filter", None)
# verify attribute lookup
self.require_keys(ldap_config['attributes'], [
"uid",
"name",
"mail",
])
def require_keys(self, config, required):
missing = [key for key in required if key not in config]
if missing:
raise ConfigError(
"LDAP enabled but missing required config values: {}".format(
", ".join(missing)
)
)
def default_config(self, **kwargs):
return """\
# ldap_config:
# enabled: true
# uri: "ldap://ldap.example.com:389"
# start_tls: true
# base: "ou=users,dc=example,dc=com"
# attributes:
# uid: "cn"
# mail: "email"
# name: "givenName"
# #bind_dn:
# #bind_password:
# #filter: "(objectClass=posixAccount)"
"""


@@ -15,14 +15,13 @@
from ._base import Config
from synapse.util.logcontext import LoggingContextFilter
from twisted.python.log import PythonLoggingObserver
from twisted.logger import globalLogBeginner, STDLibLogObserver
import logging
import logging.config
import yaml
from string import Template
import os
import signal
from synapse.util.debug import debug_deferreds
DEFAULT_LOG_CONFIG = Template("""
@@ -46,16 +45,18 @@ handlers:
maxBytes: 104857600
backupCount: 10
filters: [context]
level: INFO
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse:
level: INFO
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
level: INFO
root:
@@ -68,10 +69,9 @@ class LoggingConfig(Config):
def read_config(self, config):
self.verbosity = config.get("verbose", 0)
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
self.log_config = self.abspath(config.get("log_config"))
self.log_file = self.abspath(config.get("log_file"))
if config.get("full_twisted_stacktraces"):
debug_deferreds()
def default_config(self, config_dir_path, server_name, **kwargs):
log_file = self.abspath("homeserver.log")
@@ -79,24 +79,21 @@ class LoggingConfig(Config):
os.path.join(config_dir_path, server_name + ".log.config")
)
return """
# Logging verbosity level.
# Logging verbosity level. Ignored if log_config is specified.
verbose: 0
# File to write logging to
# File to write logging to. Ignored if log_config is specified.
log_file: "%(log_file)s"
# A yaml python logging config file
log_config: "%(log_config)s"
# Stop twisted from discarding the stack traces of exceptions in
# deferreds by waiting a reactor tick before running a deferred's
# callbacks.
# full_twisted_stacktraces: true
""" % locals()
def read_arguments(self, args):
if args.verbose is not None:
self.verbosity = args.verbose
if args.no_redirect_stdio is not None:
self.no_redirect_stdio = args.no_redirect_stdio
if args.log_config is not None:
self.log_config = args.log_config
if args.log_file is not None:
@@ -106,16 +103,22 @@ class LoggingConfig(Config):
logging_group = parser.add_argument_group("logging")
logging_group.add_argument(
'-v', '--verbose', dest="verbose", action='count',
help="The verbosity level."
help="The verbosity level. Specify multiple times to increase "
"verbosity. (Ignored if --log-config is specified.)"
)
logging_group.add_argument(
'-f', '--log-file', dest="log_file",
help="File to log to."
help="File to log to. (Ignored if --log-config is specified.)"
)
logging_group.add_argument(
'--log-config', dest="log_config", default=None,
help="Python logging config file"
)
logging_group.add_argument(
'-n', '--no-redirect-stdio',
action='store_true', default=None,
help="Do not redirect stdout/stderr to the log"
)
def generate_files(self, config):
log_config = config.get("log_config")
@@ -125,22 +128,33 @@ class LoggingConfig(Config):
DEFAULT_LOG_CONFIG.substitute(log_file=config["log_file"])
)
def setup_logging(self):
setup_logging(self.log_config, self.log_file, self.verbosity)
def setup_logging(config, use_worker_options=False):
""" Set up python logging
Args:
config (LoggingConfig | synapse.config.workers.WorkerConfig):
configuration data
use_worker_options (bool): True to use 'worker_log_config' and
'worker_log_file' options instead of 'log_config' and 'log_file'.
"""
log_config = (config.worker_log_config if use_worker_options
else config.log_config)
log_file = (config.worker_log_file if use_worker_options
else config.log_file)
def setup_logging(log_config=None, log_file=None, verbosity=None):
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if log_config is None:
if log_config is None:
level = logging.INFO
level_for_storage = logging.INFO
if verbosity:
if config.verbosity:
level = logging.DEBUG
if verbosity > 1:
if config.verbosity > 1:
level_for_storage = logging.DEBUG
# FIXME: we need a logging.WARN for a -q quiet option
@@ -160,24 +174,50 @@ def setup_logging(log_config=None, log_file=None, verbosity=None):
logger.info("Closing log file due to SIGHUP")
handler.doRollover()
logger.info("Opened new log file due to SIGHUP")
# TODO(paul): obviously this is a terrible mechanism for
# stealing SIGHUP, because it means no other part of synapse
# can use it instead. If we want to catch SIGHUP anywhere
# else as well, I'd suggest we find a nicer way to broadcast
# it around.
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
else:
handler = logging.StreamHandler()
def sighup(signum, stack):
pass
handler.setFormatter(formatter)
handler.addFilter(LoggingContextFilter(request=""))
logger.addHandler(handler)
else:
with open(log_config, 'r') as f:
logging.config.dictConfig(yaml.load(f))
def load_log_config():
with open(log_config, 'r') as f:
logging.config.dictConfig(yaml.load(f))
observer = PythonLoggingObserver()
observer.start()
def sighup(signum, stack):
# it might be better to use a file watcher or something for this.
logging.info("Reloading log config from %s due to SIGHUP",
log_config)
load_log_config()
load_log_config()
# TODO(paul): obviously this is a terrible mechanism for
# stealing SIGHUP, because it means no other part of synapse
# can use it instead. If we want to catch SIGHUP anywhere
# else as well, I'd suggest we find a nicer way to broadcast
# it around.
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
# It's critical to point twisted's internal logging somewhere, otherwise it
# stacks up and leaks up to 64K objects;
# see: https://twistedmatrix.com/trac/ticket/8164
#
# Routing to the python logging framework could be a performance problem if
# the handlers blocked for a long time as python.logging is a blocking API
# see https://twistedmatrix.com/documents/current/core/howto/logger.html
# filed as https://github.com/matrix-org/synapse/issues/1727
#
# However this may not be too much of a problem if we are just writing to a file.
observer = STDLibLogObserver()
globalLogBeginner.beginLoggingTo(
[observer],
redirectStandardIO=not config.no_redirect_stdio,
)

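Editor's note: the key change above is routing twisted's own logger into the stdlib logging framework instead of letting events accumulate. Reduced to its essentials:

    from twisted.logger import globalLogBeginner, STDLibLogObserver

    # Forward all twisted log events to the python logging framework; without
    # an observer, twisted buffers events and they accumulate indefinitely.
    globalLogBeginner.beginLoggingTo(
        [STDLibLogObserver()],
        redirectStandardIO=False,  # synapse makes this configurable via no_redirect_stdio
    )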

@@ -0,0 +1,69 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Openmarket
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.util.module_loader import load_module
LDAP_PROVIDER = 'ldap_auth_provider.LdapAuthProvider'
class PasswordAuthProviderConfig(Config):
def read_config(self, config):
self.password_providers = []
providers = []
# We want to be backwards compatible with the old `ldap_config`
# param.
ldap_config = config.get("ldap_config", {})
if ldap_config.get("enabled", False):
providers.append({
'module': LDAP_PROVIDER,
'config': ldap_config,
})
providers.extend(config.get("password_providers", []))
for provider in providers:
mod_name = provider['module']
# This is for backwards compat when the ldap auth provider resided
# in this package.
if mod_name == "synapse.util.ldap_auth_provider.LdapAuthProvider":
mod_name = LDAP_PROVIDER
(provider_class, provider_config) = load_module({
"module": mod_name,
"config": provider['config'],
})
self.password_providers.append((provider_class, provider_config))
def default_config(self, **kwargs):
return """\
# password_providers:
# - module: "ldap_auth_provider.LdapAuthProvider"
# config:
# enabled: true
# uri: "ldap://ldap.example.com:389"
# start_tls: true
# base: "ou=users,dc=example,dc=com"
# attributes:
# uid: "cn"
# mail: "email"
# name: "givenName"
# #bind_dn:
# #bind_password:
# #filter: "(objectClass=posixAccount)"
"""

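Editor's note: the backwards-compatibility shim above can be demonstrated in isolation: an old-style ldap_config block is translated into a password_providers entry before the provider modules are loaded:

    LDAP_PROVIDER = 'ldap_auth_provider.LdapAuthProvider'

    config = {
        "ldap_config": {"enabled": True, "uri": "ldap://ldap.example.com:389"},
    }

    providers = []
    ldap_config = config.get("ldap_config", {})
    if ldap_config.get("enabled", False):
        providers.append({'module': LDAP_PROVIDER, 'config': ldap_config})
    providers.extend(config.get("password_providers", []))

    print(providers)
    # [{'module': 'ldap_auth_provider.LdapAuthProvider',
    #   'config': {'enabled': True, 'uri': 'ldap://ldap.example.com:389'}}]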
synapse/config/push.py (new file, 61 lines)

@@ -0,0 +1,61 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class PushConfig(Config):
def read_config(self, config):
push_config = config.get("push", {})
self.push_include_content = push_config.get("include_content", True)
# There was a 'redact_content' setting, but it was mistakenly read from the
# 'email' section. Check for the flag in the 'push' section, and log,
# but do not honour it to avoid nasty surprises when people upgrade.
if push_config.get("redact_content") is not None:
print(
"The push.redact_content content option has never worked. "
"Please set push.include_content if you want this behaviour"
)
# Now check for the one in the 'email' section and honour it,
# with a warning.
push_config = config.get("email", {})
redact_content = push_config.get("redact_content")
if redact_content is not None:
print(
"The 'email.redact_content' option is deprecated: "
"please set push.include_content instead"
)
self.push_include_content = not redact_content
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Clients requesting push notifications can either have the body of
# the message sent in the notification poke along with other details
# like the sender, or just the event ID and room ID (`event_id_only`).
# If clients choose the former, this option controls whether the
# notification request includes the content of the event (other details
# like the sender are still included). For `event_id_only` push, it
# has no effect.
# For modern android devices the notification content will still appear
# because it is loaded by the app. iPhone, however, will send a
# notification saying only that a message arrived and who it came from.
#
#push:
# include_content: true
"""


@@ -32,7 +32,6 @@ class RegistrationConfig(Config):
)
self.registration_shared_secret = config.get("registration_shared_secret")
self.user_creation_max_duration = int(config["user_creation_max_duration"])
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
self.trusted_third_party_id_servers = config["trusted_third_party_id_servers"]
@@ -42,6 +41,8 @@ class RegistrationConfig(Config):
self.allow_guest_access and config.get("invite_3pid_guest", False)
)
self.auto_join_rooms = config.get("auto_join_rooms", [])
def default_config(self, **kwargs):
registration_shared_secret = random_string_with_symbols(50)
@@ -55,11 +56,6 @@ class RegistrationConfig(Config):
# secret, even if registration is otherwise disabled.
registration_shared_secret: "%(registration_shared_secret)s"
# Sets the expiry for the short term user creation in
# milliseconds. For instance the bellow duration is two weeks
# in milliseconds.
user_creation_max_duration: 1209600000
# Set the number of bcrypt rounds used to generate password hash.
# Larger numbers increase the work factor needed to generate the hash.
# The default number of rounds is 12.
@@ -75,6 +71,12 @@ class RegistrationConfig(Config):
trusted_third_party_id_servers:
- matrix.org
- vector.im
- riot.im
# Users who register on this homeserver will automatically be joined
# to these rooms
#auto_join_rooms:
# - "#example:example.com"
""" % locals()
def add_arguments(self, parser):


@@ -70,7 +70,19 @@ class ContentRepositoryConfig(Config):
self.max_upload_size = self.parse_size(config["max_upload_size"])
self.max_image_pixels = self.parse_size(config["max_image_pixels"])
self.max_spider_size = self.parse_size(config["max_spider_size"])
self.media_store_path = self.ensure_directory(config["media_store_path"])
self.backup_media_store_path = config.get("backup_media_store_path")
if self.backup_media_store_path:
self.backup_media_store_path = self.ensure_directory(
self.backup_media_store_path
)
self.synchronous_backup_media_store = config.get(
"synchronous_backup_media_store", False
)
self.uploads_path = self.ensure_directory(config["uploads_path"])
self.dynamic_thumbnails = config["dynamic_thumbnails"]
self.thumbnail_requirements = parse_thumbnail_requirements(
@@ -115,6 +127,14 @@ class ContentRepositoryConfig(Config):
# Directory where uploaded images and attachments are stored.
media_store_path: "%(media_store)s"
# A secondary directory where uploaded images and attachments are
# stored as a backup.
# backup_media_store_path: "%(media_store)s"
# Whether to wait for successful write to backup media store before
# returning successfully.
# synchronous_backup_media_store: false
# Directory where in-progress uploads are stored.
uploads_path: "%(uploads_path)s"
@@ -167,6 +187,8 @@ class ContentRepositoryConfig(Config):
# - '10.0.0.0/8'
# - '172.16.0.0/12'
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '169.254.0.0/16'
#
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -29,6 +30,30 @@ class ServerConfig(Config):
self.user_agent_suffix = config.get("user_agent_suffix")
self.use_frozen_dicts = config.get("use_frozen_dicts", False)
self.public_baseurl = config.get("public_baseurl")
self.cpu_affinity = config.get("cpu_affinity")
# Whether to send federation traffic out in this process. This only
# applies to some federation traffic, and so shouldn't be used to
# "disable" federation
self.send_federation = config.get("send_federation", True)
# Whether to update the user directory or not. This should be set to
# false only if we are updating the user directory in a worker
self.update_user_directory = config.get("update_user_directory", True)
# whether to enable the media repository endpoints. This should be set
# to false if the media repository is running as a separate endpoint;
# doing so ensures that we will not run cache cleanup jobs on the
# master, potentially causing inconsistency.
self.enable_media_repo = config.get("enable_media_repo", True)
self.filter_timeline_limit = config.get("filter_timeline_limit", -1)
# Whether we should block invites sent to users on this server
# (other than those sent by local server admins)
self.block_non_admin_invites = config.get(
"block_non_admin_invites", False,
)
if self.public_baseurl is not None:
if self.public_baseurl[-1] != '/':
@@ -37,6 +62,15 @@ class ServerConfig(Config):
self.listeners = config.get("listeners", [])
for listener in self.listeners:
bind_address = listener.pop("bind_address", None)
bind_addresses = listener.setdefault("bind_addresses", [])
if bind_address:
bind_addresses.append(bind_address)
elif not bind_addresses:
bind_addresses.append('')
self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
bind_port = config.get("bind_port")
@@ -49,7 +83,7 @@ class ServerConfig(Config):
self.listeners.append({
"port": bind_port,
"bind_address": bind_host,
"bind_addresses": [bind_host],
"tls": True,
"type": "http",
"resources": [
@@ -68,7 +102,7 @@ class ServerConfig(Config):
if unsecure_port:
self.listeners.append({
"port": unsecure_port,
"bind_address": bind_host,
"bind_addresses": [bind_host],
"tls": False,
"type": "http",
"resources": [
@@ -87,7 +121,7 @@ class ServerConfig(Config):
if manhole:
self.listeners.append({
"port": manhole,
"bind_address": "127.0.0.1",
"bind_addresses": ["127.0.0.1"],
"type": "manhole",
})
@@ -95,7 +129,7 @@ class ServerConfig(Config):
if metrics_port:
self.listeners.append({
"port": metrics_port,
"bind_address": config.get("metrics_bind_host", "127.0.0.1"),
"bind_addresses": [config.get("metrics_bind_host", "127.0.0.1")],
"tls": False,
"type": "http",
"resources": [
@@ -127,9 +161,36 @@ class ServerConfig(Config):
# When running as a daemon, the file to store the pid in
pid_file: %(pid_file)s
# CPU affinity mask. Setting this restricts the CPUs on which the
# process will be scheduled. It is represented as a bitmask, with the
# lowest order bit corresponding to the first logical CPU and the
# highest order bit corresponding to the last logical CPU. Not all CPUs
# may exist on a given system but a mask may specify more CPUs than are
# present.
#
# For example:
# 0x00000001 is processor #0,
# 0x00000003 is processors #0 and #1,
# 0xFFFFFFFF is all processors (#0 through #31).
#
# Pinning a Python process to a single CPU is desirable, because Python
# is inherently single-threaded due to the GIL, and can suffer a
# 30-40%% slowdown due to cache blow-out and thread context switching
# if the scheduler happens to schedule the underlying threads across
# different cores. See
# https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
#
# cpu_affinity: 0xFFFFFFFF
# Whether to serve a web client from the HTTP/HTTPS root resource.
web_client: True
# The root directory to serve for the above web client.
# If left undefined, synapse will serve the matrix-angular-sdk web client.
# Make sure matrix-angular-sdk is installed with pip if web_client is True
# and web_client_location is undefined
# web_client_location: "/path/to/web/root"
# The public-facing base URL for the client API (not including _matrix/...)
# public_baseurl: https://example.com:8448/
@@ -141,6 +202,14 @@ class ServerConfig(Config):
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
# gc_thresholds: [700, 10, 10]
# Set the limit on the returned events in the timeline in the get
# and sync operations. The default value is -1, means no upper limit.
# filter_timeline_limit: 5000
# Whether room invites to users on this server should be blocked
# (except those sent by local server admins). The default is False.
# block_non_admin_invites: True
# List of ports that Synapse should listen on, their purpose and their
# configuration.
listeners:
@@ -150,9 +219,13 @@ class ServerConfig(Config):
# The port to listen for HTTPS requests on.
port: %(bind_port)s
# Local interface to listen on.
# The empty string will cause synapse to listen on all interfaces.
bind_address: ''
# Local addresses to listen on.
# On Linux and Mac OS, `::` will listen on all IPv4 and IPv6
# addresses by default. For most other OSes, this will only listen
# on IPv6.
bind_addresses:
- '::'
- '0.0.0.0'
# This is a 'http' listener, allows us to specify 'resources'.
type: http
@@ -179,11 +252,18 @@ class ServerConfig(Config):
- names: [federation] # Federation APIs
compress: false
# optional list of additional endpoints which can be loaded via
# dynamic modules
# additional_resources:
# "/_matrix/my/custom/endpoint":
# module: my_module.CustomRequestHandler
# config: {}
# Unsecure HTTP listener,
# For when matrix traffic passes through loadbalancer that unwraps TLS.
- port: %(unsecure_port)s
tls: false
bind_address: ''
bind_addresses: ['::', '0.0.0.0']
type: http
x_forwarded: false
@@ -197,7 +277,7 @@ class ServerConfig(Config):
# Turn on the twisted ssh manhole service on localhost on the given
# port.
# - port: 9000
# bind_address: 127.0.0.1
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
""" % locals()
@@ -235,7 +315,7 @@ def read_gc_thresholds(thresholds):
return (
int(thresholds[0]), int(thresholds[1]), int(thresholds[2]),
)
except:
except Exception:
raise ConfigError(
"Value of `gc_threshold` must be a list of three integers if set"
)

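Editor's note: the cpu_affinity mask semantics described in the comment above can be checked with a few lines: bit i of the mask selects logical CPU #i.

    def cpus_in_mask(mask):
        # Bit i set -> logical CPU #i is allowed
        return [i for i in range(mask.bit_length()) if mask & (1 << i)]

    assert cpus_in_mask(0x00000001) == [0]
    assert cpus_in_mask(0x00000003) == [0, 1]
    assert cpus_in_mask(0xFFFFFFFF) == list(range(32))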

@@ -0,0 +1,35 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util.module_loader import load_module
from ._base import Config
class SpamCheckerConfig(Config):
def read_config(self, config):
self.spam_checker = None
provider = config.get("spam_checker", None)
if provider is not None:
self.spam_checker = load_module(provider)
def default_config(self, **kwargs):
return """\
# spam_checker:
# module: "my_custom_project.SuperSpamChecker"
# config:
# example_option: 'things'
"""

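Editor's note: for the spam_checker option above, the module/config pair is loaded via load_module. A hypothetical checker module of the shape synapse expects (method names beyond check_event_for_spam are not shown in this diff, so treat this as a sketch only):

    class SuperSpamChecker(object):
        def __init__(self, config):
            self.example_option = config.get("example_option")

        def check_event_for_spam(self, event):
            # Return True to reject the event as spam.
            return False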

@@ -19,6 +19,9 @@ from OpenSSL import crypto
import subprocess
import os
from hashlib import sha256
from unpaddedbase64 import encode_base64
GENERATE_DH_PARAMS = False
@@ -42,6 +45,19 @@ class TlsConfig(Config):
config.get("tls_dh_params_path"), "tls_dh_params"
)
self.tls_fingerprints = config["tls_fingerprints"]
# Check that our own certificate is included in the list of fingerprints
# and include it if it is not.
x509_certificate_bytes = crypto.dump_certificate(
crypto.FILETYPE_ASN1,
self.tls_certificate
)
sha256_fingerprint = encode_base64(sha256(x509_certificate_bytes).digest())
sha256_fingerprints = set(f["sha256"] for f in self.tls_fingerprints)
if sha256_fingerprint not in sha256_fingerprints:
self.tls_fingerprints.append({u"sha256": sha256_fingerprint})
# This config option applies to non-federation HTTP clients
# (e.g. for talking to recaptcha, identity servers, and such)
# It should never be used in production, and is intended for
@@ -73,6 +89,34 @@ class TlsConfig(Config):
# Don't bind to the https port
no_tls: False
# List of allowed TLS fingerprints for this server to publish along
# with the signing keys for this server. Other matrix servers that
# make HTTPS requests to this server will check that the TLS
# certificates returned by this server match one of the fingerprints.
#
# Synapse automatically adds the fingerprint of its own certificate
# to the list. So if federation traffic is handled directly by synapse
# then no modification to the list is required.
#
# If synapse is run behind a load balancer that handles the TLS then it
# will be necessary to add the fingerprints of the certificates used by
# the loadbalancers to this list if they are different to the one
# synapse is using.
#
# Homeservers are permitted to cache the list of TLS fingerprints
# returned in the key responses up to the "valid_until_ts" returned in
# key. It may be necessary to publish the fingerprints of a new
# certificate and wait until the "valid_until_ts" of the previous key
# responses have passed before deploying it.
#
# You can calculate a fingerprint from a given TLS listener via:
# openssl s_client -connect $host:$port < /dev/null 2> /dev/null |
# openssl x509 -outform DER | openssl sha256 -binary | base64 | tr -d '='
# or by checking matrix.org/federationtester/api/report?server_name=$host
#
tls_fingerprints: []
# tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]
""" % locals()
def read_tls_certificate(self, cert_path):
@@ -88,7 +132,7 @@ class TlsConfig(Config):
tls_private_key_path = config["tls_private_key_path"]
tls_dh_params_path = config["tls_dh_params_path"]
if not os.path.exists(tls_private_key_path):
if not self.path_exists(tls_private_key_path):
with open(tls_private_key_path, "w") as private_key_file:
tls_private_key = crypto.PKey()
tls_private_key.generate_key(crypto.TYPE_RSA, 2048)
@@ -103,7 +147,7 @@ class TlsConfig(Config):
crypto.FILETYPE_PEM, private_key_pem
)
if not os.path.exists(tls_certificate_path):
if not self.path_exists(tls_certificate_path):
with open(tls_certificate_path, "w") as certificate_file:
cert = crypto.X509()
subject = cert.get_subject()
@@ -121,7 +165,7 @@ class TlsConfig(Config):
certificate_file.write(cert_pem)
if not os.path.exists(tls_dh_params_path):
if not self.path_exists(tls_dh_params_path):
if GENERATE_DH_PARAMS:
subprocess.check_call([
"openssl", "dhparam",

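Editor's note: the fingerprint that synapse now auto-appends is the unpadded-base64 SHA-256 of the DER-encoded certificate, mirroring the code above and the openssl pipeline in the comment:

    from hashlib import sha256

    from OpenSSL import crypto
    from unpaddedbase64 import encode_base64

    def tls_fingerprint(x509_certificate):
        der_bytes = crypto.dump_certificate(crypto.FILETYPE_ASN1, x509_certificate)
        return encode_base64(sha256(der_bytes).digest())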

@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class UserDirectoryConfig(Config):
"""User Directory Configuration
Configuration for the behaviour of the /user_directory API
"""
def read_config(self, config):
self.user_directory_search_all_users = False
user_directory_config = config.get("user_directory", None)
if user_directory_config:
self.user_directory_search_all_users = (
user_directory_config.get("search_all_users", False)
)
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# User Directory configuration
#
# 'search_all_users' defines whether to search all users visible to your HS
# when searching the user directory, rather than limiting to users visible
# in public rooms. Defaults to false. If you set it True, you'll have to run
# UPDATE user_directory_stream_pos SET stream_id = NULL;
# on your database to tell it to rebuild the user_directory search indexes.
#
#user_directory:
# search_all_users: false
"""


@@ -19,8 +19,11 @@ class VoipConfig(Config):
def read_config(self, config):
self.turn_uris = config.get("turn_uris", [])
self.turn_shared_secret = config["turn_shared_secret"]
self.turn_shared_secret = config.get("turn_shared_secret")
self.turn_username = config.get("turn_username")
self.turn_password = config.get("turn_password")
self.turn_user_lifetime = self.parse_duration(config["turn_user_lifetime"])
self.turn_allow_guests = config.get("turn_allow_guests", True)
def default_config(self, **kwargs):
return """\
@@ -32,6 +35,18 @@ class VoipConfig(Config):
# The shared secret used to compute passwords for the TURN server
turn_shared_secret: "YOUR_SHARED_SECRET"
# The Username and password if the TURN server needs them and
# does not use a token
#turn_username: "TURNSERVER_USERNAME"
#turn_password: "TURNSERVER_PASSWORD"
# How long generated TURN credentials last
turn_user_lifetime: "1h"
# Whether guests should be allowed to use the TURN server.
# This defaults to True, otherwise VoIP will be unreliable for guests.
# However, it does introduce a slight security risk as it allows users to
# connect to arbitrary endpoints without having first signed up for a
# valid account (e.g. by passing a CAPTCHA).
turn_allow_guests: True
"""


@@ -28,4 +28,19 @@ class WorkerConfig(Config):
self.worker_pid_file = config.get("worker_pid_file")
self.worker_log_file = config.get("worker_log_file")
self.worker_log_config = config.get("worker_log_config")
self.worker_replication_url = config.get("worker_replication_url")
self.worker_replication_host = config.get("worker_replication_host", None)
self.worker_replication_port = config.get("worker_replication_port", None)
self.worker_name = config.get("worker_name", self.worker_app)
self.worker_main_http_uri = config.get("worker_main_http_uri", None)
self.worker_cpu_affinity = config.get("worker_cpu_affinity")
if self.worker_listeners:
for listener in self.worker_listeners:
bind_address = listener.pop("bind_address", None)
bind_addresses = listener.setdefault("bind_addresses", [])
if bind_address:
bind_addresses.append(bind_address)
elif not bind_addresses:
bind_addresses.append('')

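Editor's note: both the server and worker configs now apply the same legacy migration: a singular bind_address is folded into the bind_addresses list, and an empty string (all interfaces) is used when neither is given. In isolation:

    def normalise_listener(listener):
        bind_address = listener.pop("bind_address", None)
        bind_addresses = listener.setdefault("bind_addresses", [])
        if bind_address:
            bind_addresses.append(bind_address)
        elif not bind_addresses:
            bind_addresses.append('')
        return listener

    assert normalise_listener({"port": 8448, "bind_address": "1.2.3.4"}) == \
        {"port": 8448, "bind_addresses": ["1.2.3.4"]}
    assert normalise_listener({"port": 8448}) == {"port": 8448, "bind_addresses": ['']}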

@@ -34,7 +34,7 @@ class ServerContextFactory(ssl.ContextFactory):
try:
_ecCurve = _OpenSSLECCurve(_defaultCurveName)
_ecCurve.addECKeyToContext(context)
except:
except Exception:
logger.exception("Failed to enable elliptic curve for TLS")
context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
context.use_certificate_chain_file(config.tls_certificate_file)


@@ -32,18 +32,25 @@ def check_event_content_hash(event, hash_algorithm=hashlib.sha256):
"""Check whether the hash for this PDU matches the contents"""
name, expected_hash = compute_content_hash(event, hash_algorithm)
logger.debug("Expecting hash: %s", encode_base64(expected_hash))
if name not in event.hashes:
# some malformed events lack a 'hashes'. Protect against it being missing
# or a weird type by basically treating it the same as an unhashed event.
hashes = event.get("hashes")
if not isinstance(hashes, dict):
raise SynapseError(400, "Malformed 'hashes'", Codes.UNAUTHORIZED)
if name not in hashes:
raise SynapseError(
400,
"Algorithm %s not in hashes %s" % (
name, list(event.hashes),
name, list(hashes),
),
Codes.UNAUTHORIZED,
)
message_hash_base64 = event.hashes[name]
message_hash_base64 = hashes[name]
try:
message_hash_bytes = decode_base64(message_hash_base64)
except:
except Exception:
raise SynapseError(
400,
"Invalid base64: %s" % (message_hash_base64,),

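Editor's note: the hardening above treats a missing or non-dict 'hashes' field the same as a bad hash. A simplified standalone version of the check (compute_content_hash is synapse-internal and not shown here, so the expected digest is taken as an argument):

    from unpaddedbase64 import decode_base64

    def check_content_hash(event_dict, expected_digest, name="sha256"):
        hashes = event_dict.get("hashes")
        if not isinstance(hashes, dict):
            raise ValueError("Malformed 'hashes'")
        if name not in hashes:
            raise ValueError("Algorithm %s not in hashes %s" % (name, list(hashes)))
        return decode_base64(hashes[name]) == expected_digest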

@@ -13,14 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util import logcontext
from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util.logcontext import (
preserve_context_over_fn, preserve_context_over_deferred
)
import simplejson as json
import logging
@@ -43,14 +40,10 @@ def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1):
for i in range(5):
try:
protocol = yield preserve_context_over_fn(
endpoint.connect, factory
)
server_response, server_certificate = yield preserve_context_over_deferred(
protocol.remote_key
)
defer.returnValue((server_response, server_certificate))
return
with logcontext.PreserveLoggingContext():
protocol = yield endpoint.connect(factory)
server_response, server_certificate = yield protocol.remote_key
defer.returnValue((server_response, server_certificate))
except SynapseKeyClientError as e:
logger.exception("Error getting key for %r" % (server_name,))
if e.status.startswith("4"):


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,11 +16,9 @@
from synapse.crypto.keyclient import fetch_server_key
from synapse.api.errors import SynapseError, Codes
from synapse.util.retryutils import get_retry_limiter
from synapse.util import unwrapFirstError
from synapse.util.async import ObservableDeferred
from synapse.util import unwrapFirstError, logcontext
from synapse.util.logcontext import (
preserve_context_over_deferred, preserve_context_over_fn, PreserveLoggingContext,
PreserveLoggingContext,
preserve_fn
)
from synapse.util.metrics import Measure
@@ -58,7 +57,8 @@ Attributes:
json_object(dict): The JSON object to verify.
deferred(twisted.internet.defer.Deferred):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched
a verify key has been fetched. The deferreds' callbacks are run with no
logcontext.
"""
@@ -75,31 +75,41 @@ class Keyring(object):
self.perspective_servers = self.config.perspectives
self.hs = hs
# map from server name to Deferred. Has an entry for each server with
# an ongoing key download; the Deferred completes once the download
# completes.
#
# These are regular, logcontext-agnostic Deferreds.
self.key_downloads = {}
def verify_json_for_server(self, server_name, json_object):
return logcontext.make_deferred_yieldable(
    self.verify_json_objects_for_server(
        [(server_name, json_object)]
    )[0]
)
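With the `make_deferred_yieldable` wrapper in place, callers can yield the result directly from their own logcontext. An illustrative call site (names assumed, not from the source):

    @defer.inlineCallbacks
    def authenticate_request(keyring, origin, json_request):
        # errbacks with a SynapseError if verification fails
        yield keyring.verify_json_for_server(origin, json_request)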
def verify_json_objects_for_server(self, server_and_json):
"""Bulk verfies signatures of json objects, bulk fetching keys as
"""Bulk verifies signatures of json objects, bulk fetching keys as
necessary.
Args:
server_and_json (list): List of pairs of (server_name, json_object)
Returns:
List<Deferred>: for each input pair, a deferred indicating success
or failure to verify each json object's signature for the given
server_name. The deferreds run their callbacks in the sentinel
logcontext.
"""
verify_requests = []
for server_name, json_object in server_and_json:
logger.debug("Verifying for %s", server_name)
key_ids = signature_ids(json_object, server_name)
if not key_ids:
logger.warn("Request from %s: no supported signature keys",
server_name)
deferred = defer.fail(SynapseError(
400,
"Not signed with a supported algorithm",
@@ -108,97 +118,81 @@ class Keyring(object):
else:
deferred = defer.Deferred()
logger.debug("Verifying for %s with key_ids %s",
server_name, key_ids)
verify_request = VerifyKeyRequest(
server_name, key_ids, json_object, deferred
)
verify_requests.append(verify_request)
preserve_fn(self._start_key_lookups)(verify_requests)

# Pass those keys to handle_key_deferred so that the json object
# signatures can be verified
handle = preserve_fn(_handle_key_deferred)
return [
    handle(rq) for rq in verify_requests
]
@defer.inlineCallbacks
def _start_key_lookups(self, verify_requests):
"""Sets off the key fetches for each verify request
Once each fetch completes, verify_request.deferred will be resolved.
Args:
verify_requests (List[VerifyKeyRequest]):
"""
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred()
for rq in verify_requests
}
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(
[rq.server_name for rq in verify_requests],
server_to_deferred,
)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
server_to_request_ids = {}
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for verify_request in verify_requests:
verify_request.deferred.addBoth(
remove_deferreds, verify_request,
)
@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_names, server_to_deferred):
"""Waits for any previous key lookups for the given servers to finish.
@@ -206,7 +200,13 @@ class Keyring(object):
Args:
server_names (list): list of server_names we want to lookup
server_to_deferred (dict): server_name to deferred which gets
resolved once we've finished looking up keys for that server.
The Deferreds should be regular twisted ones which call their
callbacks with no logcontext.
Returns: a Deferred which resolves once all key lookups for the given
servers have completed. Follows the synapse rules of logcontext
preservation.
"""
while True:
wait_on = [
@@ -220,19 +220,23 @@ class Keyring(object):
else:
break
def rm(r, server_name_):
    self.key_downloads.pop(server_name_, None)
    return r

for server_name, deferred in server_to_deferred.items():
    self.key_downloads[server_name] = deferred
    deferred.addBoth(rm, server_name)

def _get_server_verify_keys(self, verify_requests):
    """Tries to find at least one key for each verify request

    For each verify_request, verify_request.deferred is called back with
    params (server_name, key_id, VerifyKey) if a key is found, or errbacked
    with a SynapseError if none of the keys are found.

    Args:
        verify_requests (list[VerifyKeyRequest]): list of verify requests
"""
# These are functions that produce keys given a list of key ids
@@ -245,8 +249,11 @@ class Keyring(object):
@defer.inlineCallbacks
def do_iterations():
with Measure(self.clock, "get_server_verify_keys"):
# dict[str, dict[str, VerifyKey]]: results so far.
# map server_name -> key_id -> VerifyKey
merged_results = {}
# dict[str, set(str)]: keys to fetch for each server
missing_keys = {}
for verify_request in verify_requests:
missing_keys.setdefault(verify_request.server_name, set()).update(
@@ -290,25 +297,37 @@ class Keyring(object):
if not missing_keys:
break
with PreserveLoggingContext():
    for verify_request in requests_missing_keys:
        verify_request.deferred.errback(SynapseError(
            401,
            "No key for %s with id %s" % (
                verify_request.server_name, verify_request.key_ids,
            ),
            Codes.UNAUTHORIZED,
        ))

def on_err(err):
    with PreserveLoggingContext():
        for verify_request in verify_requests:
            if not verify_request.deferred.called:
                verify_request.deferred.errback(err)

preserve_fn(do_iterations)().addErrback(on_err)
@defer.inlineCallbacks
def get_keys_from_store(self, server_name_and_key_ids):
"""
Args:
server_name_and_key_ids (list[(str, iterable[str])]):
list of (server_name, iterable[key_id]) tuples to fetch keys for
Returns:
Deferred: resolves to dict[str, dict[str, VerifyKey]]: map from
server_name -> key_id -> VerifyKey
"""
res = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.get_server_verify_keys)(
server_name, key_ids
@@ -316,7 +335,7 @@ class Keyring(object):
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError))
defer.returnValue(dict(res))
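This `preserve_fn` / `make_deferred_yieldable` / `unwrapFirstError` combination is the recurring parallel-fetch idiom in this file. Stripped of specifics (`fetch_one` and `items` are placeholders), the shape is:

    results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
        [
            # preserve_fn runs each call in the current logcontext and
            # returns a deferred that fires in the sentinel context
            preserve_fn(fetch_one)(item)
            for item in items
        ],
        consumeErrors=True,
    ).addErrback(unwrapFirstError))  # re-raise the underlying error,
                                     # not twisted's FirstError wrapper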
@@ -337,13 +356,13 @@ class Keyring(object):
)
defer.returnValue({})
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError))
union_of_keys = {}
for result in results:
@@ -356,40 +375,34 @@ class Keyring(object):
def get_keys_from_server(self, server_name_and_key_ids):
@defer.inlineCallbacks
def get_key(server_name, key_ids):
keys = None
try:
    keys = yield self.get_server_verify_key_v2_direct(
        server_name, key_ids
    )
except Exception as e:
    logger.info(
        "Unable to get key %r for %r directly: %s %s",
        key_ids, server_name,
        type(e).__name__, str(e.message),
    )

if not keys:
    keys = yield self.get_server_verify_key_v1_direct(
        server_name, key_ids
    )

    keys = {server_name: keys}
defer.returnValue(keys)
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(server_name, key_ids)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError))
merged = {}
for result in results:
@@ -466,7 +479,7 @@ class Keyring(object):
for server_name, response_keys in processed_response.items():
keys.setdefault(server_name, {}).update(response_keys)
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
server_name=server_name,
@@ -476,7 +489,7 @@ class Keyring(object):
for server_name, response_keys in keys.items()
],
consumeErrors=True
).addErrback(unwrapFirstError))
defer.returnValue(keys)
@@ -524,7 +537,7 @@ class Keyring(object):
keys.update(response_keys)
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
server_name=key_server_name,
@@ -534,7 +547,7 @@ class Keyring(object):
for key_server_name, verify_keys in keys.items()
],
consumeErrors=True
).addErrback(unwrapFirstError))
defer.returnValue(keys)
@@ -600,7 +613,7 @@ class Keyring(object):
response_keys.update(verify_keys)
response_keys.update(old_verify_keys)
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_keys_json)(
server_name=server_name,
@@ -613,7 +626,7 @@ class Keyring(object):
for key_id in updated_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError))
results[server_name] = response_keys
@@ -691,7 +704,6 @@ class Keyring(object):
defer.returnValue(verify_keys)
def store_keys(self, server_name, from_server, verify_keys):
"""Store a collection of verify keys for a given server
Args:
@@ -702,7 +714,7 @@ class Keyring(object):
A deferred that completes when the keys are stored.
"""
# TODO(markjh): Store whether the keys have expired.
return logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_verify_key)(
server_name, server_name, key.time_added, key
@@ -710,4 +722,48 @@ class Keyring(object):
for key_id, key in verify_keys.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError))
@defer.inlineCallbacks
def _handle_key_deferred(verify_request):
server_name = verify_request.server_name
try:
with PreserveLoggingContext():
_, key_id, verify_key = yield verify_request.deferred
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
502,
"Error downloading keys for %s" % (server_name,),
Codes.UNAUTHORIZED,
)
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
401,
"No key for %s with id %s" % (server_name, verify_request.key_ids),
Codes.UNAUTHORIZED,
)
json_object = verify_request.json_object
logger.debug("Got key %s %s:%s for server %s, verifying" % (
key_id, verify_key.alg, verify_key.version, server_name,
))
try:
verify_signed_json(json_object, server_name, verify_key)
except Exception:
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
),
Codes.UNAUTHORIZED,
)

synapse/event_auth.py (new file, 678 lines)

@@ -0,0 +1,678 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json, SignatureVerifyException
from unpaddedbase64 import decode_base64
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, SynapseError, EventSizeError
from synapse.types import UserID, get_domain_from_id
logger = logging.getLogger(__name__)
def check(event, auth_events, do_sig_check=True, do_size_check=True):
""" Checks if this event is correctly authed.
Args:
event: the event being checked.
auth_events (dict: event-key -> event): the existing room state.
Returns:
True if the auth checks pass.
"""
if do_size_check:
_check_size_limits(event)
if not hasattr(event, "room_id"):
raise AuthError(500, "Event has no room_id: %s" % event)
if do_sig_check:
sender_domain = get_domain_from_id(event.sender)
event_id_domain = get_domain_from_id(event.event_id)
is_invite_via_3pid = (
event.type == EventTypes.Member
and event.membership == Membership.INVITE
and "third_party_invite" in event.content
)
# Check the sender's domain has signed the event
if not event.signatures.get(sender_domain):
# We allow invites via 3pid to have a sender from a different
# HS, as the sender must match the sender of the original
# 3pid invite. This is checked further down with the
# other dedicated membership checks.
if not is_invite_via_3pid:
raise AuthError(403, "Event not signed by sender's server")
# Check the event_id's domain has signed the event
if not event.signatures.get(event_id_domain):
raise AuthError(403, "Event not signed by sending server")
if auth_events is None:
# Oh, we don't know what the state of the room was, so we
# are trusting that this is allowed (at least for now)
logger.warn("Trusting event: %s", event.event_id)
return True
if event.type == EventTypes.Create:
room_id_domain = get_domain_from_id(event.room_id)
if room_id_domain != sender_domain:
raise AuthError(
403,
"Creation event's room_id domain does not match sender's"
)
# FIXME
return True
creation_event = auth_events.get((EventTypes.Create, ""), None)
if not creation_event:
raise SynapseError(
403,
"Room %r does not exist" % (event.room_id,)
)
creating_domain = get_domain_from_id(event.room_id)
originating_domain = get_domain_from_id(event.sender)
if creating_domain != originating_domain:
if not _can_federate(event, auth_events):
raise AuthError(
403,
"This room has been marked as unfederatable."
)
# FIXME: Temp hack
if event.type == EventTypes.Aliases:
if not event.is_state():
raise AuthError(
403,
"Alias event must be a state event",
)
if not event.state_key:
raise AuthError(
403,
"Alias event must have non-empty state_key"
)
sender_domain = get_domain_from_id(event.sender)
if event.state_key != sender_domain:
raise AuthError(
403,
"Alias event's state_key does not match sender's domain"
)
return True
if logger.isEnabledFor(logging.DEBUG):
logger.debug(
"Auth events: %s",
[a.event_id for a in auth_events.values()]
)
if event.type == EventTypes.Member:
allowed = _is_membership_change_allowed(
event, auth_events
)
if allowed:
logger.debug("Allowing! %s", event)
else:
logger.debug("Denying! %s", event)
return allowed
_check_event_sender_in_room(event, auth_events)
# Special case to allow m.room.third_party_invite events wherever
# a user is allowed to issue invites. Fixes
# https://github.com/vector-im/vector-web/issues/1208 hopefully
if event.type == EventTypes.ThirdPartyInvite:
user_level = get_user_power_level(event.user_id, auth_events)
invite_level = _get_named_level(auth_events, "invite", 0)
if user_level < invite_level:
raise AuthError(
403, (
"You cannot issue a third party invite for %s." %
(event.content.display_name,)
)
)
else:
return True
_can_send_event(event, auth_events)
if event.type == EventTypes.PowerLevels:
_check_power_levels(event, auth_events)
if event.type == EventTypes.Redaction:
check_redaction(event, auth_events)
logger.debug("Allowing! %s", event)
def _check_size_limits(event):
def too_big(field):
raise EventSizeError("%s too large" % (field,))
if len(event.user_id) > 255:
too_big("user_id")
if len(event.room_id) > 255:
too_big("room_id")
if event.is_state() and len(event.state_key) > 255:
too_big("state_key")
if len(event.type) > 255:
too_big("type")
if len(event.event_id) > 255:
too_big("event_id")
if len(encode_canonical_json(event.get_pdu_json())) > 65536:
too_big("event")
def _can_federate(event, auth_events):
creation_event = auth_events.get((EventTypes.Create, ""))
return creation_event.content.get("m.federate", True) is True
def _is_membership_change_allowed(event, auth_events):
membership = event.content["membership"]
# Check if this is the room creator joining:
if len(event.prev_events) == 1 and Membership.JOIN == membership:
# Get room creation event:
key = (EventTypes.Create, "", )
create = auth_events.get(key)
if create and event.prev_events[0][0] == create.event_id:
if create.content["creator"] == event.state_key:
return True
target_user_id = event.state_key
creating_domain = get_domain_from_id(event.room_id)
target_domain = get_domain_from_id(target_user_id)
if creating_domain != target_domain:
if not _can_federate(event, auth_events):
raise AuthError(
403,
"This room has been marked as unfederatable."
)
# get info about the caller
key = (EventTypes.Member, event.user_id, )
caller = auth_events.get(key)
caller_in_room = caller and caller.membership == Membership.JOIN
caller_invited = caller and caller.membership == Membership.INVITE
# get info about the target
key = (EventTypes.Member, target_user_id, )
target = auth_events.get(key)
target_in_room = target and target.membership == Membership.JOIN
target_banned = target and target.membership == Membership.BAN
key = (EventTypes.JoinRules, "", )
join_rule_event = auth_events.get(key)
if join_rule_event:
join_rule = join_rule_event.content.get(
"join_rule", JoinRules.INVITE
)
else:
join_rule = JoinRules.INVITE
user_level = get_user_power_level(event.user_id, auth_events)
target_level = get_user_power_level(
target_user_id, auth_events
)
# FIXME (erikj): What should we do here as the default?
ban_level = _get_named_level(auth_events, "ban", 50)
logger.debug(
"_is_membership_change_allowed: %s",
{
"caller_in_room": caller_in_room,
"caller_invited": caller_invited,
"target_banned": target_banned,
"target_in_room": target_in_room,
"membership": membership,
"join_rule": join_rule,
"target_user_id": target_user_id,
"event.user_id": event.user_id,
}
)
if Membership.INVITE == membership and "third_party_invite" in event.content:
if not _verify_third_party_invite(event, auth_events):
raise AuthError(403, "You are not invited to this room.")
if target_banned:
raise AuthError(
403, "%s is banned from the room" % (target_user_id,)
)
return True
if Membership.JOIN != membership:
if (caller_invited
and Membership.LEAVE == membership
and target_user_id == event.user_id):
return True
if not caller_in_room: # caller isn't joined
raise AuthError(
403,
"%s not in room %s." % (event.user_id, event.room_id,)
)
if Membership.INVITE == membership:
# TODO (erikj): We should probably handle this more intelligently
# PRIVATE join rules.
# Invites are valid iff caller is in the room and target isn't.
if target_banned:
raise AuthError(
403, "%s is banned from the room" % (target_user_id,)
)
elif target_in_room: # the target is already in the room.
raise AuthError(403, "%s is already in the room." %
target_user_id)
else:
invite_level = _get_named_level(auth_events, "invite", 0)
if user_level < invite_level:
raise AuthError(
403, "You cannot invite user %s." % target_user_id
)
elif Membership.JOIN == membership:
# Joins are valid iff caller == target and they were:
# invited: They are accepting the invitation
# joined: It's a NOOP
if event.user_id != target_user_id:
raise AuthError(403, "Cannot force another user to join.")
elif target_banned:
raise AuthError(403, "You are banned from this room")
elif join_rule == JoinRules.PUBLIC:
pass
elif join_rule == JoinRules.INVITE:
if not caller_in_room and not caller_invited:
raise AuthError(403, "You are not invited to this room.")
else:
# TODO (erikj): may_join list
# TODO (erikj): private rooms
raise AuthError(403, "You are not allowed to join this room")
elif Membership.LEAVE == membership:
# TODO (erikj): Implement kicks.
if target_banned and user_level < ban_level:
raise AuthError(
403, "You cannot unban user &s." % (target_user_id,)
)
elif target_user_id != event.user_id:
kick_level = _get_named_level(auth_events, "kick", 50)
if user_level < kick_level or user_level <= target_level:
raise AuthError(
403, "You cannot kick user %s." % target_user_id
)
elif Membership.BAN == membership:
if user_level < ban_level or user_level <= target_level:
raise AuthError(403, "You don't have permission to ban")
else:
raise AuthError(500, "Unknown membership %s" % membership)
return True
def _check_event_sender_in_room(event, auth_events):
key = (EventTypes.Member, event.user_id, )
member_event = auth_events.get(key)
return _check_joined_room(
member_event,
event.user_id,
event.room_id
)
def _check_joined_room(member, user_id, room_id):
if not member or member.membership != Membership.JOIN:
raise AuthError(403, "User %s not in room %s (%s)" % (
user_id, room_id, repr(member)
))
def get_send_level(etype, state_key, auth_events):
key = (EventTypes.PowerLevels, "", )
send_level_event = auth_events.get(key)
send_level = None
if send_level_event:
send_level = send_level_event.content.get("events", {}).get(
etype
)
if send_level is None:
if state_key is not None:
send_level = send_level_event.content.get(
"state_default", 50
)
else:
send_level = send_level_event.content.get(
"events_default", 0
)
if send_level:
send_level = int(send_level)
else:
send_level = 0
return send_level
def _can_send_event(event, auth_events):
send_level = get_send_level(
event.type, event.get("state_key", None), auth_events
)
user_level = get_user_power_level(event.user_id, auth_events)
if user_level < send_level:
raise AuthError(
403,
"You don't have permission to post that to the room. " +
"user_level (%d) < send_level (%d)" % (user_level, send_level)
)
# Check state_key
if hasattr(event, "state_key"):
if event.state_key.startswith("@"):
if event.state_key != event.user_id:
raise AuthError(
403,
"You are not allowed to set others state"
)
return True
def check_redaction(event, auth_events):
"""Check whether the event sender is allowed to redact the target event.
Returns:
True if the sender is allowed to redact the target event if the
target event was created by them.
False if the sender is allowed to redact the target event with no
further checks.
Raises:
AuthError if the event sender is definitely not allowed to redact
the target event.
"""
user_level = get_user_power_level(event.user_id, auth_events)
redact_level = _get_named_level(auth_events, "redact", 50)
if user_level >= redact_level:
return False
redacter_domain = get_domain_from_id(event.event_id)
redactee_domain = get_domain_from_id(event.redacts)
if redacter_domain == redactee_domain:
return True
raise AuthError(
403,
"You don't have permission to redact events"
)
def _check_power_levels(event, auth_events):
user_list = event.content.get("users", {})
# Validate users
for k, v in user_list.items():
try:
UserID.from_string(k)
except Exception:
raise SynapseError(400, "Not a valid user_id: %s" % (k,))
try:
int(v)
except Exception:
raise SynapseError(400, "Not a valid power level: %s" % (v,))
key = (event.type, event.state_key, )
current_state = auth_events.get(key)
if not current_state:
return
user_level = get_user_power_level(event.user_id, auth_events)
# Check other levels:
levels_to_check = [
("users_default", None),
("events_default", None),
("state_default", None),
("ban", None),
("redact", None),
("kick", None),
("invite", None),
]
old_list = current_state.content.get("users", {})
for user in set(old_list.keys() + user_list.keys()):
levels_to_check.append(
(user, "users")
)
old_list = current_state.content.get("events", {})
new_list = event.content.get("events", {})
for ev_id in set(old_list.keys() + new_list.keys()):
levels_to_check.append(
(ev_id, "events")
)
old_state = current_state.content
new_state = event.content
for level_to_check, dir in levels_to_check:
old_loc = old_state
new_loc = new_state
if dir:
old_loc = old_loc.get(dir, {})
new_loc = new_loc.get(dir, {})
if level_to_check in old_loc:
old_level = int(old_loc[level_to_check])
else:
old_level = None
if level_to_check in new_loc:
new_level = int(new_loc[level_to_check])
else:
new_level = None
if new_level is not None and old_level is not None:
if new_level == old_level:
continue
if dir == "users" and level_to_check != event.user_id:
if old_level == user_level:
raise AuthError(
403,
"You don't have permission to remove ops level equal "
"to your own"
)
if old_level > user_level or new_level > user_level:
raise AuthError(
403,
"You don't have permission to add ops level greater "
"than your own"
)
def _get_power_level_event(auth_events):
key = (EventTypes.PowerLevels, "", )
return auth_events.get(key)
def get_user_power_level(user_id, auth_events):
power_level_event = _get_power_level_event(auth_events)
if power_level_event:
level = power_level_event.content.get("users", {}).get(user_id)
if not level:
level = power_level_event.content.get("users_default", 0)
if level is None:
return 0
else:
return int(level)
else:
key = (EventTypes.Create, "", )
create_event = auth_events.get(key)
if (create_event is not None and
create_event.content["creator"] == user_id):
return 100
else:
return 0
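A worked example of the lookup order above, with hypothetical values:

    # given an m.room.power_levels event whose content is
    #   {"users": {"@admin:example.com": 100}, "users_default": 5}
    # then:
    #   get_user_power_level("@admin:example.com", auth_events)   == 100
    #   get_user_power_level("@someone:example.com", auth_events) == 5
    # and if the room has no power_levels event at all, the creator
    # (from m.room.create) gets 100 and everyone else gets 0.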
def _get_named_level(auth_events, name, default):
power_level_event = _get_power_level_event(auth_events)
if not power_level_event:
return default
level = power_level_event.content.get(name, None)
if level is not None:
return int(level)
else:
return default
def _verify_third_party_invite(event, auth_events):
"""
Validates that the invite event is authorized by a previous third-party invite.
Checks that the public key, and keyserver, match those in the third party invite,
and that the invite event has a signature issued using that public key.
Args:
event: The m.room.member join event being validated.
auth_events: All relevant previous context events which may be used
for authorization decisions.
Return:
True if the event fulfills the expectations of a previous third party
invite event.
"""
if "third_party_invite" not in event.content:
return False
if "signed" not in event.content["third_party_invite"]:
return False
signed = event.content["third_party_invite"]["signed"]
for key in {"mxid", "token"}:
if key not in signed:
return False
token = signed["token"]
invite_event = auth_events.get(
(EventTypes.ThirdPartyInvite, token,)
)
if not invite_event:
return False
if invite_event.sender != event.sender:
return False
if event.user_id != invite_event.user_id:
return False
if signed["mxid"] != event.state_key:
return False
if signed["token"] != token:
return False
for public_key_object in get_public_keys(invite_event):
public_key = public_key_object["public_key"]
try:
for server, signature_block in signed["signatures"].items():
for key_name, encoded_signature in signature_block.items():
if not key_name.startswith("ed25519:"):
continue
verify_key = decode_verify_key_bytes(
key_name,
decode_base64(public_key)
)
verify_signed_json(signed, server, verify_key)
# We got the public key from the invite, so we know that the
# correct server signed the signed bundle.
# The caller is responsible for checking that the signing
# server has not revoked that public key.
return True
except (KeyError, SignatureVerifyException,):
continue
return False
def get_public_keys(invite_event):
public_keys = []
if "public_key" in invite_event.content:
o = {
"public_key": invite_event.content["public_key"],
}
if "key_validity_url" in invite_event.content:
o["key_validity_url"] = invite_event.content["key_validity_url"]
public_keys.append(o)
public_keys.extend(invite_event.content.get("public_keys", []))
return public_keys
def auth_types_for_event(event):
"""Given an event, return a list of (EventType, StateKey) that may be
needed to auth the event. The returned list may be a superset of what
would actually be required depending on the full state of the room.
Used to limit the number of events to fetch from the database to
actually auth the event.
"""
if event.type == EventTypes.Create:
return []
auth_types = []
auth_types.append((EventTypes.PowerLevels, "", ))
auth_types.append((EventTypes.Member, event.user_id, ))
auth_types.append((EventTypes.Create, "", ))
if event.type == EventTypes.Member:
membership = event.content["membership"]
if membership in [Membership.JOIN, Membership.INVITE]:
auth_types.append((EventTypes.JoinRules, "", ))
auth_types.append((EventTypes.Member, event.state_key, ))
if membership == Membership.INVITE:
if "third_party_invite" in event.content:
key = (
EventTypes.ThirdPartyInvite,
event.content["third_party_invite"]["signed"]["token"]
)
auth_types.append(key)
return auth_types
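An illustrative caller for this module; `check` generally signals rejection by raising rather than returning False, so a typical wrapper looks like this (a sketch, not Synapse's actual call sites):

    from synapse import event_auth
    from synapse.api.errors import AuthError, SynapseError

    def is_event_allowed(event, auth_events):
        # auth_events: {(event_type, state_key): event} for the relevant state
        try:
            event_auth.check(event, auth_events, do_sig_check=False)
            return True
        except (AuthError, SynapseError):
            return False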


@@ -36,6 +36,15 @@ class _EventInternalMetadata(object):
def is_invite_from_remote(self):
return getattr(self, "invite_from_remote", False)
def get_send_on_behalf_of(self):
"""Whether this server should send the event on behalf of another server.
This is used by the federation "send_join" API to forward the initial join
event for a server in the room.
Returns a str with the name of the server this event is sent on behalf of.
"""
return getattr(self, "send_on_behalf_of", None)
def _event_dict_property(key):
def getter(self):
@@ -70,7 +79,6 @@ class EventBase(object):
auth_events = _event_dict_property("auth_events")
depth = _event_dict_property("depth")
content = _event_dict_property("content")
event_id = _event_dict_property("event_id")
hashes = _event_dict_property("hashes")
origin = _event_dict_property("origin")
origin_server_ts = _event_dict_property("origin_server_ts")
@@ -79,8 +87,6 @@ class EventBase(object):
redacts = _event_dict_property("redacts")
room_id = _event_dict_property("room_id")
sender = _event_dict_property("sender")
state_key = _event_dict_property("state_key")
type = _event_dict_property("type")
user_id = _event_dict_property("sender")
@property
@@ -153,6 +159,11 @@ class FrozenEvent(EventBase):
else:
frozen_dict = event_dict
self.event_id = event_dict["event_id"]
self.type = event_dict["type"]
if "state_key" in event_dict:
self.state_key = event_dict["state_key"]
super(FrozenEvent, self).__init__(
frozen_dict,
signatures=signatures,


@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from . import EventBase, FrozenEvent, _event_dict_property
from synapse.types import EventID
@@ -34,6 +34,10 @@ class EventBuilder(EventBase):
internal_metadata_dict=internal_metadata_dict,
)
event_id = _event_dict_property("event_id")
state_key = _event_dict_property("state_key")
type = _event_dict_property("type")
def build(self):
return FrozenEvent.from_event(self)
@@ -51,7 +55,7 @@ class EventBuilderFactory(object):
local_part = str(int(self.clock.time())) + i + random_string(5)
e_id = EventID(local_part, self.hostname)
return e_id.to_string()


@@ -15,6 +15,32 @@
class EventContext(object):
"""
Attributes:
current_state_ids (dict[(str, str), str]):
The current state map including the current event.
(type, state_key) -> event_id
prev_state_ids (dict[(str, str), str]):
The current state map excluding the current event.
(type, state_key) -> event_id
state_group (int): state group id
rejected (bool|str): A rejection reason if the event was rejected, else
False
push_actions (list[(str, list[object])]): list of (user_id, actions)
tuples
prev_group (int): Previously persisted state group. ``None`` for an
outlier.
delta_ids (dict[(str, str), str]): Delta from ``prev_group``.
(type, state_key) -> event_id. ``None`` for an outlier.
prev_state_events (?): XXX: is this ever set to anything other than
the empty list?
"""
__slots__ = [
"current_state_ids",
"prev_state_ids",
@@ -24,6 +50,7 @@ class EventContext(object):
"prev_group",
"delta_ids",
"prev_state_events",
"app_service",
]
def __init__(self):
@@ -42,3 +69,5 @@ class EventContext(object):
self.delta_ids = None
self.prev_state_events = None
self.app_service = None

synapse/events/spamcheck.py (new file, 113 lines)

@@ -0,0 +1,113 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class SpamChecker(object):
def __init__(self, hs):
self.spam_checker = None
module = None
config = None
try:
module, config = hs.config.spam_checker
except Exception:
pass
if module is not None:
self.spam_checker = module(config=config)
def check_event_for_spam(self, event):
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
sent by a local user. If it is sent by a user on another server, then
users receive a blank event.
Args:
event (synapse.events.EventBase): the event to be checked
Returns:
bool: True if the event is spammy.
"""
if self.spam_checker is None:
return False
return self.spam_checker.check_event_for_spam(event)
def user_may_invite(self, inviter_userid, invitee_userid, room_id):
"""Checks if a given user may send an invite
If this method returns false, the invite will be rejected.
Args:
inviter_userid (string): The user ID of the sender
invitee_userid (string): The user ID of the invitee
room_id (string): The room ID of the proposed invite
Returns:
bool: True if the user may send an invite, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_invite(inviter_userid, invitee_userid, room_id)
def user_may_create_room(self, userid):
"""Checks if a given user may create a room
If this method returns false, the creation request will be rejected.
Args:
userid (string): The sender's user ID
Returns:
bool: True if the user may create a room, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_create_room(userid)
def user_may_create_room_alias(self, userid, room_alias):
"""Checks if a given user may create a room alias
If this method returns false, the association request will be rejected.
Args:
userid (string): The sender's user ID
room_alias (string): The alias to be created
Returns:
bool: True if the user may create a room alias, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_create_room_alias(userid, room_alias)
def user_may_publish_room(self, userid, room_id):
"""Checks if a given user may publish a room to the directory
If this method returns false, the publish request will be rejected.
Args:
userid (string): The sender's user ID
room_id (string): The ID of the room that would be published
Returns:
bool: True if the user may publish the room, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_publish_room(userid, room_id)
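Given that the constructor above instantiates the configured module with `config=`, a pluggable checker only needs to provide these five methods. A minimal sketch of such a module (hypothetical class name and config key):

    class ExampleSpamChecker(object):
        def __init__(self, config):
            self.forbidden = config.get("forbidden_words", [])

        def check_event_for_spam(self, event):
            body = event.content.get("body", "")
            return any(word in body for word in self.forbidden)

        def user_may_invite(self, inviter_userid, invitee_userid, room_id):
            return True

        def user_may_create_room(self, userid):
            return True

        def user_may_create_room_alias(self, userid, room_alias):
            return True

        def user_may_publish_room(self, userid, room_id):
            return True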


@@ -16,6 +16,17 @@
from synapse.api.constants import EventTypes
from . import EventBase
from frozendict import frozendict
import re
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
# (?<!stuff) matches if the current position in the string is not preceded
# by a match for 'stuff'.
# TODO: This is fast, but fails to handle "foo\\.bar" which should be treated as
# the literal fields "foo\" and "bar" but will instead be treated as "foo\\.bar"
SPLIT_FIELD_REGEX = re.compile(r'(?<!\\)\.')
def prune_event(event):
""" Returns a pruned version of the given event, which removes all keys we
@@ -97,6 +108,83 @@ def prune_event(event):
)
def _copy_field(src, dst, field):
"""Copy the field in 'src' to 'dst'.
For example, if src={"foo":{"bar":5}} and dst={}, and field=["foo","bar"]
then dst={"foo":{"bar":5}}.
Args:
src(dict): The dict to read from.
dst(dict): The dict to modify.
field(list<str>): List of keys to drill down to in 'src'.
"""
if len(field) == 0: # this should be impossible
return
if len(field) == 1: # common case e.g. 'origin_server_ts'
if field[0] in src:
dst[field[0]] = src[field[0]]
return
# Else is a nested field e.g. 'content.body'
# Pop the last field as that's the key to move across and we need the
# parent dict in order to access the data. Drill down to the right dict.
key_to_move = field.pop(-1)
sub_dict = src
for sub_field in field: # e.g. sub_field => "content"
if sub_field in sub_dict and type(sub_dict[sub_field]) in [dict, frozendict]:
sub_dict = sub_dict[sub_field]
else:
return
if key_to_move not in sub_dict:
return
# Insert the key into the output dictionary, creating nested objects
# as required. We couldn't do this any earlier or else we'd need to delete
# the empty objects if the key didn't exist.
sub_out_dict = dst
for sub_field in field:
sub_out_dict = sub_out_dict.setdefault(sub_field, {})
sub_out_dict[key_to_move] = sub_dict[key_to_move]
def only_fields(dictionary, fields):
"""Return a new dict with only the fields in 'dictionary' which are present
in 'fields'.
If there are no event fields specified then all fields are included.
The entries may include '.' characters to indicate sub-fields.
So ['content.body'] will include the 'body' field of the 'content' object.
A literal '.' character in a field name may be escaped using a '\'.
Args:
dictionary(dict): The dictionary to read from.
fields(list<str>): A list of fields to copy over. Only shallow refs are
taken.
Returns:
dict: A new dictionary with only the given fields. If fields was empty,
the same dictionary is returned.
"""
if len(fields) == 0:
return dictionary
# for each field, convert it:
# ["content.body.thing\.with\.dots"] => [["content", "body", "thing\.with\.dots"]]
split_fields = [SPLIT_FIELD_REGEX.split(f) for f in fields]
# for each element of the output array of arrays:
# remove escaping so we can use the right key names.
split_fields[:] = [
[f.replace(r'\.', r'.') for f in field_array] for field_array in split_fields
]
output = {}
for field_array in split_fields:
_copy_field(dictionary, output, field_array)
return output
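For example, applying the rules described above:

    d = {
        "event_id": "$ev:example.com",
        "content": {"body": "hello", "msgtype": "m.text"},
    }
    only_fields(d, ["content.body"])   # => {"content": {"body": "hello"}}
    only_fields(d, [])                 # => d, unchanged (no fields given)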
def format_event_raw(d):
return d
@@ -137,7 +225,22 @@ def format_event_for_client_v2_without_room_id(d):
def serialize_event(e, time_now_ms, as_client_event=True,
event_format=format_event_for_client_v1,
token_id=None, only_event_fields=None, is_invite=False):
"""Serialize event for clients
Args:
e (EventBase)
time_now_ms (int)
as_client_event (bool)
event_format
token_id
only_event_fields
is_invite (bool): Whether this is an invite that is being sent to the
invitee
Returns:
dict
"""
# FIXME(erikj): To handle the case of presence events and the like
if not isinstance(e, EventBase):
return e
@@ -163,7 +266,19 @@ def serialize_event(e, time_now_ms, as_client_event=True,
if txn_id is not None:
d["unsigned"]["transaction_id"] = txn_id
# If this is an invite for somebody else, then we don't care about the
# invite_room_state as that's meant solely for the invitee. Other clients
# will already have the state since they're in the room.
if not is_invite:
d["unsigned"].pop("invite_room_state", None)
if as_client_event:
    d = event_format(d)

if only_event_fields:
    if (not isinstance(only_event_fields, list) or
            not all(isinstance(f, basestring) for f in only_event_fields)):
        raise TypeError("only_event_fields must be a list of strings")
    d = only_fields(d, only_event_fields)

return d
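An illustrative call combining the new parameters (the clock is assumed; a v2 formatter is defined alongside the v1 formatter in this module):

    d = serialize_event(
        event,
        time_now_ms=clock.time_msec(),
        event_format=format_event_for_client_v2,
        only_event_fields=["content.body", "sender"],
        is_invite=False,
    )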


@@ -17,10 +17,9 @@
"""
from .replication import ReplicationLayer
def initialize_http_replication(hs):
    transport = hs.get_federation_transport_client()
    return ReplicationLayer(hs, transport)


@@ -12,28 +12,20 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from synapse.api.errors import SynapseError
from synapse.crypto.event_signing import check_event_content_hash
from synapse.events.utils import prune_event
from synapse.util import unwrapFirstError, logcontext
from twisted.internet import defer
logger = logging.getLogger(__name__)
class FederationBase(object):
def __init__(self, hs):
self.spam_checker = hs.get_spam_checker()
@defer.inlineCallbacks
def _check_sigs_and_hash_and_fetch(self, origin, pdus, outlier=False,
@@ -57,56 +49,52 @@ class FederationBase(object):
"""
deferreds = self._check_sigs_and_hashes(pdus)
@defer.inlineCallbacks
def handle_check_result(pdu, deferred):
    try:
        res = yield logcontext.make_deferred_yieldable(deferred)
    except SynapseError:
        res = None

    if not res:
        # Check local db.
        res = yield self.store.get_event(
            pdu.event_id,
            allow_rejected=True,
            allow_none=True,
        )

    if not res and pdu.origin != origin:
        try:
            res = yield self.get_pdu(
                destinations=[pdu.origin],
                event_id=pdu.event_id,
                outlier=outlier,
                timeout=10000,
            )
        except SynapseError:
            pass

    if not res:
        logger.warn(
            "Failed to find copy of %s with valid signature",
            pdu.event_id,
        )

    defer.returnValue(res)

handle = logcontext.preserve_fn(handle_check_result)
deferreds2 = [
    handle(pdu, deferred)
    for pdu, deferred in zip(pdus, deferreds)
]

valid_pdus = yield logcontext.make_deferred_yieldable(
    defer.gatherResults(
        deferreds2,
        consumeErrors=True,
    )
).addErrback(unwrapFirstError)
if include_none:
defer.returnValue(valid_pdus)
@@ -114,15 +102,24 @@ class FederationBase(object):
defer.returnValue([p for p in valid_pdus if p])
def _check_sigs_and_hash(self, pdu):
return logcontext.make_deferred_yieldable(
self._check_sigs_and_hashes([pdu])[0],
)
def _check_sigs_and_hashes(self, pdus):
"""Throws a SynapseError if a PDU does not have the correct
signatures.
"""Checks that each of the received events is correctly signed by the
sending server.
Args:
pdus (list[FrozenEvent]): the events to be checked
Returns:
FrozenEvent: Either the given event or it redacted if it failed the
content hash check.
list[Deferred]: for each input event, a deferred which:
* returns the original event if the checks pass
* returns a redacted version of the event (if the signature
matched but the hash did not)
* throws a SynapseError if the signature check failed.
The deferreds run their callbacks in the sentinel logcontext.
"""
redacted_pdus = [
@@ -130,26 +127,38 @@ class FederationBase(object):
for pdu in pdus
]
deferreds = self.keyring.verify_json_objects_for_server([
(p.origin, p.get_pdu_json())
for p in redacted_pdus
])
ctx = logcontext.LoggingContext.current_context()
def callback(_, pdu, redacted):
with logcontext.PreserveLoggingContext(ctx):
    if not check_event_content_hash(pdu):
        logger.warn(
            "Event content has been tampered, redacting %s: %s",
            pdu.event_id, pdu.get_pdu_json()
        )
        return redacted

    if self.spam_checker.check_event_for_spam(pdu):
        logger.warn(
            "Event contains spam, redacting %s: %s",
            pdu.event_id, pdu.get_pdu_json()
        )
        return redacted

    return pdu
def errback(failure, pdu):
failure.trap(SynapseError)
with logcontext.PreserveLoggingContext(ctx):
    logger.warn(
        "Signature check failed for %s",
        pdu.event_id,
    )
return failure
for deferred, pdu, redacted in zip(deferreds, pdus, redacted_pdus):


@@ -18,20 +18,18 @@ from twisted.internet import defer
from .federation_base import FederationBase
from synapse.api.constants import Membership
from .units import Edu
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError,
)
from synapse.util import unwrapFirstError, logcontext
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logutils import log_function
from synapse.types import get_domain_from_id
from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
from synapse.events import FrozenEvent, builder
import synapse.metrics
from synapse.util.retryutils import NotRetryingDestination
import copy
import itertools
@@ -45,10 +43,6 @@ logger = logging.getLogger(__name__)
# synapse.federation.federation_client is a silly name
metrics = synapse.metrics.get_metrics_for("synapse.federation.client")
sent_queries_counter = metrics.register_counter("sent_queries", labels=["type"])
@@ -92,68 +86,9 @@ class FederationClient(FederationBase):
self._get_pdu_cache.start()
@log_function
def make_query(self, destination, query_type, args,
retry_on_dns_fail=False, ignore_backoff=False):
"""Sends a federation Query to a remote homeserver of the given type
and arguments.
@@ -163,6 +98,8 @@ class FederationClient(FederationBase):
handler name used in register_query_handler().
args (dict): Mapping of strings to strings containing the details
of the query request.
ignore_backoff (bool): true to ignore the historical backoff data
and try the request anyway.
Returns:
a Deferred which will eventually yield a JSON object from the
@@ -171,7 +108,8 @@ class FederationClient(FederationBase):
sent_queries_counter.inc(query_type)
return self.transport_layer.make_query(
destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail,
ignore_backoff=ignore_backoff,
)
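An illustrative caller using the new flag; this mirrors how profile lookups use the query API, with the destination and user hypothetical:

    result = yield self.federation_client.make_query(
        destination="remote.example.com",
        query_type="profile",
        args={"user_id": "@alice:remote.example.com", "field": "displayname"},
        ignore_backoff=True,
    )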
@log_function
@@ -191,6 +129,16 @@ class FederationClient(FederationBase):
destination, content, timeout
)
@log_function
def query_user_devices(self, destination, user_id, timeout=30000):
"""Query the device keys for a list of user ids hosted on a remote
server.
"""
sent_queries_counter.inc("user_devices")
return self.transport_layer.query_user_devices(
destination, user_id, timeout
)
@log_function
def claim_client_keys(self, destination, content, timeout):
"""Claims one-time keys for a device hosted on a remote server.
@@ -241,10 +189,10 @@ class FederationClient(FederationBase):
]
# FIXME: We should handle signature failures more gracefully.
pdus[:] = yield logcontext.make_deferred_yieldable(defer.gatherResults(
self._check_sigs_and_hashes(pdus),
consumeErrors=True,
).addErrback(unwrapFirstError))
defer.returnValue(pdus)
@@ -261,8 +209,7 @@ class FederationClient(FederationBase):
Args:
destinations (list): Which home servers to query
pdu_origin (str): The home server that originally sent the pdu.
event_id (str): event to fetch
outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if
it's from an arbitary point in the context as opposed to part
of the current block of PDUs. Defaults to `False`
@@ -290,31 +237,24 @@ class FederationClient(FederationBase):
continue
try:
transaction_data = yield self.transport_layer.get_event(
    destination, event_id, timeout=timeout,
)

logger.debug("transaction_data %r", transaction_data)

pdu_list = [
    self.event_from_pdu_json(p, outlier=outlier)
    for p in transaction_data["pdus"]
]

if pdu_list and pdu_list[0]:
    pdu = pdu_list[0]

    # Check signatures are correct.
    signed_pdu = yield self._check_sigs_and_hash(pdu)

    break
pdu_attempts[destination] = now
@@ -480,7 +420,7 @@ class FederationClient(FederationBase):
for e_id in batch
]
res = yield make_deferred_yieldable(
defer.DeferredList(deferreds, consumeErrors=True)
)
for success, result in res:
@@ -534,8 +474,13 @@ class FederationClient(FederationBase):
content (object): Any additional data to put into the content field
of the event.
Return:
Deferred: resolves to a tuple of (origin (str), event (object))
where origin is the remote homeserver which generated the event.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
if membership not in valid_memberships:
@@ -564,8 +509,10 @@ class FederationClient(FederationBase):
if "prev_state" not in pdu_dict:
pdu_dict["prev_state"] = []
ev = builder.EventBuilder(pdu_dict)
defer.returnValue(
(destination, ev)
)
break
except CodeMessageException as e:
@@ -586,6 +533,27 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
def send_join(self, destinations, pdu):
"""Sends a join event to one of a list of homeservers.
Doing so will cause the remote server to add the event to the graph,
and send the event out to the rest of the federation.
Args:
destinations (str): Candidate homeservers which are probably
participating in the room.
pdu (BaseEvent): event to be sent
Return:
Deferred: resolves to a dict with members ``origin`` (a string
giving the server the event was sent to), ``state`` (?) and
``auth_chain``.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
@@ -693,6 +661,26 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
def send_leave(self, destinations, pdu):
"""Sends a leave event to one of a list of homeservers.
Doing so will cause the remote server to add the event to the graph,
and send the event out to the rest of the federation.
This is mostly useful to reject received invites.
Args:
destinations (str): Candidate homeservers which are probably
participating in the room.
pdu (BaseEvent): event to be sent
Return:
Deferred: resolves to None.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a non-200 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
@@ -719,12 +707,15 @@ class FederationClient(FederationBase):
raise RuntimeError("Failed to send to any server.")
def get_public_rooms(self, destination, limit=None, since_token=None,
search_filter=None):
search_filter=None, include_all_networks=False,
third_party_instance_id=None):
if destination == self.server_name:
return
return self.transport_layer.get_public_rooms(
destination, limit, since_token, search_filter
destination, limit, since_token, search_filter,
include_all_networks=include_all_networks,
third_party_instance_id=third_party_instance_id,
)
@defer.inlineCallbacks
@@ -769,7 +760,7 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
def get_missing_events(self, destination, room_id, earliest_events_ids,
latest_events, limit, min_depth):
latest_events, limit, min_depth, timeout):
"""Tries to fetch events we are missing. This is called when we receive
an event without having received all of its ancestors.
@@ -783,6 +774,7 @@ class FederationClient(FederationBase):
have all previous events for.
limit (int): Maximum number of events to return.
min_depth (int): Minimum depth of events to return.
timeout (int): Max time to wait in ms
"""
try:
content = yield self.transport_layer.get_missing_events(
@@ -792,6 +784,7 @@ class FederationClient(FederationBase):
latest_events=[e.event_id for e in latest_events],
limit=limit,
min_depth=min_depth,
timeout=timeout,
)
events = [
@@ -802,8 +795,6 @@ class FederationClient(FederationBase):
signed_events = yield self._check_sigs_and_hash_and_fetch(
destination, events, outlier=False
)
have_gotten_all_from_destination = True
except HttpResponseException as e:
if not e.code == 400:
raise
@@ -811,72 +802,6 @@ class FederationClient(FederationBase):
# We are probably hitting an old server that doesn't support
# get_missing_events
signed_events = []
have_gotten_all_from_destination = False
if len(signed_events) >= limit:
defer.returnValue(signed_events)
users = yield self.state.get_current_user_in_room(room_id)
servers = set(get_domain_from_id(u) for u in users)
servers = set(servers)
servers.discard(self.server_name)
failed_to_fetch = set()
while len(signed_events) < limit:
# Are we missing any?
seen_events = set(earliest_events_ids)
seen_events.update(e.event_id for e in signed_events if e)
missing_events = {}
for e in itertools.chain(latest_events, signed_events):
if e.depth > min_depth:
missing_events.update({
e_id: e.depth for e_id, _ in e.prev_events
if e_id not in seen_events
and e_id not in failed_to_fetch
})
if not missing_events:
break
have_seen = yield self.store.have_events(missing_events)
for k in have_seen:
missing_events.pop(k, None)
if not missing_events:
break
# Okay, we haven't gotten everything yet. Lets get them.
ordered_missing = sorted(missing_events.items(), key=lambda x: x[0])
if have_gotten_all_from_destination:
servers.discard(destination)
def random_server_list():
srvs = list(servers)
random.shuffle(srvs)
return srvs
deferreds = [
preserve_fn(self.get_pdu)(
destinations=random_server_list(),
event_id=e_id,
)
for e_id, depth in ordered_missing[:limit - len(signed_events)]
]
res = yield preserve_context_over_deferred(
defer.DeferredList(deferreds, consumeErrors=True)
)
for (result, val), (e_id, _) in zip(res, ordered_missing):
if result and val:
signed_events.append(val)
else:
failed_to_fetch.add(e_id)
defer.returnValue(signed_events)


@@ -12,17 +12,17 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from .federation_base import FederationBase
from .units import Transaction, Edu
from synapse.util.async import Linearizer
from synapse.util import async
from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
from synapse.util.logutils import log_function
from synapse.util.caches.response_cache import ResponseCache
from synapse.events import FrozenEvent
from synapse.types import get_domain_from_id
import synapse.metrics
from synapse.api.errors import AuthError, FederationError, SynapseError
@@ -32,6 +32,9 @@ from synapse.crypto.event_signing import compute_event_signature
import simplejson as json
import logging
# when processing incoming transactions, we try to handle multiple rooms in
# parallel, up to this limit.
TRANSACTION_CONCURRENCY_LIMIT = 10
logger = logging.getLogger(__name__)
@@ -51,8 +54,8 @@ class FederationServer(FederationBase):
self.auth = hs.get_auth()
self._room_pdu_linearizer = Linearizer()
self._server_linearizer = Linearizer()
self._server_linearizer = async.Linearizer("fed_server")
self._transaction_linearizer = async.Linearizer("fed_txn_handler")
# We cache responses to state queries, as they take a while and often
# come in waves.
@@ -109,30 +112,46 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
@log_function
def on_incoming_transaction(self, transaction_data):
# keep this as early as possible to make the calculated origin ts as
# accurate as possible.
request_time = self._clock.time_msec()
transaction = Transaction(**transaction_data)
received_pdus_counter.inc_by(len(transaction.pdus))
for p in transaction.pdus:
if "unsigned" in p:
unsigned = p["unsigned"]
if "age" in unsigned:
p["age"] = unsigned["age"]
if "age" in p:
p["age_ts"] = int(self._clock.time_msec()) - int(p["age"])
del p["age"]
pdu_list = [
self.event_from_pdu_json(p) for p in transaction.pdus
]
if not transaction.transaction_id:
raise Exception("Transaction missing transaction_id")
if not transaction.origin:
raise Exception("Transaction missing origin")
logger.debug("[%s] Got transaction", transaction.transaction_id)
# use a linearizer to ensure that we don't process the same transaction
# multiple times in parallel.
with (yield self._transaction_linearizer.queue(
(transaction.origin, transaction.transaction_id),
)):
result = yield self._handle_incoming_transaction(
transaction, request_time,
)
defer.returnValue(result)
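# Editor's note: a minimal sketch of the linearizer pattern used above -
# queue(key) returns a deferred that fires with a context manager once all
# earlier holders of the same key have released. This is an illustrative
# approximation, not Synapse's actual Linearizer implementation.
from contextlib import contextmanager
from twisted.internet import defer

class LinearizerSketch(object):
    def __init__(self):
        self._tail = {}  # key -> deferred fired when the current holder exits

    @defer.inlineCallbacks
    def queue(self, key):
        prev = self._tail.get(key, defer.succeed(None))
        done = defer.Deferred()
        self._tail[key] = done
        yield prev  # wait for the previous holder of this key to release

        @contextmanager
        def _ctx():
            try:
                yield
            finally:
                done.callback(None)  # wake the next waiter for this key

        # (a real implementation would also clean stale entries out of _tail)
        defer.returnValue(_ctx())

# usage, as above: with (yield linearizer.queue((origin, txn_id))): ...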
@defer.inlineCallbacks
def _handle_incoming_transaction(self, transaction, request_time):
""" Process an incoming transaction and return the HTTP response
Args:
transaction (Transaction): incoming transaction
request_time (int): timestamp that the HTTP request arrived at
Returns:
Deferred[(int, object)]: http response code and body
"""
response = yield self.transaction_actions.have_responded(transaction)
if response:
logger.debug(
"[%s] We've already responed to this request",
"[%s] We've already responded to this request",
transaction.transaction_id
)
defer.returnValue(response)
@@ -140,18 +159,49 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id)
results = []
received_pdus_counter.inc_by(len(transaction.pdus))
for pdu in pdu_list:
try:
yield self._handle_new_pdu(transaction.origin, pdu)
results.append({})
except FederationError as e:
self.send_failure(e, transaction.origin)
results.append({"error": str(e)})
except Exception as e:
results.append({"error": str(e)})
logger.exception("Failed to handle PDU")
pdus_by_room = {}
for p in transaction.pdus:
if "unsigned" in p:
unsigned = p["unsigned"]
if "age" in unsigned:
p["age"] = unsigned["age"]
if "age" in p:
p["age_ts"] = request_time - int(p["age"])
del p["age"]
event = self.event_from_pdu_json(p)
room_id = event.room_id
pdus_by_room.setdefault(room_id, []).append(event)
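# Editor's note: a worked example of the age/age_ts round trip. The sender
# (see json_data_cb in TransactionQueue) converts a stored absolute age_ts
# into a relative age just before serialising; the loop above converts it
# back using the receiver's own request_time. Values are illustrative.
p = {"age_ts": 1000}                       # sender's clock when event arrived
now = 1500                                 # sender's clock at serialisation
p.setdefault("unsigned", {})["age"] = now - p.pop("age_ts")
assert p == {"unsigned": {"age": 500}}

request_time = 99000                       # receiver's clock at HTTP arrival
if "unsigned" in p and "age" in p["unsigned"]:
    p["age"] = p["unsigned"]["age"]
if "age" in p:
    p["age_ts"] = request_time - int(p["age"])
    del p["age"]
assert p == {"unsigned": {"age": 500}, "age_ts": 98500}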
pdu_results = {}
# we can process different rooms in parallel (which is useful if they
# require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu.
@defer.inlineCallbacks
def process_pdus_for_room(room_id):
logger.debug("Processing PDUs for %s", room_id)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
try:
yield self._handle_received_pdu(
transaction.origin, pdu
)
pdu_results[event_id] = {}
except FederationError as e:
logger.warn("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
pdu_results[event_id] = {"error": str(e)}
logger.exception("Failed to handle PDU %s", event_id)
yield async.concurrently_execute(
process_pdus_for_room, pdus_by_room.keys(),
TRANSACTION_CONCURRENCY_LIMIT,
)
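# Editor's note: a minimal sketch of a concurrency-limited executor with the
# semantics concurrently_execute is used for here (run func over all args,
# at most `limit` in flight). Assumed behaviour, not Synapse's actual helper.
from twisted.internet import defer

def concurrently_execute_sketch(func, args, limit):
    it = iter(args)

    @defer.inlineCallbacks
    def _worker():
        for arg in it:  # workers share one iterator, so each arg runs once
            yield func(arg)

    return defer.gatherResults(
        [_worker() for _ in range(limit)], consumeErrors=True,
    )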
if hasattr(transaction, "edus"):
for edu in (Edu(**x) for x in transaction.edus):
@@ -161,17 +211,16 @@ class FederationServer(FederationBase):
edu.content
)
for failure in getattr(transaction, "pdu_failures", []):
logger.info("Got failure %r", failure)
logger.debug("Returning: %s", str(results))
pdu_failures = getattr(transaction, "pdu_failures", [])
for failure in pdu_failures:
logger.info("Got failure %r", failure)
response = {
"pdus": dict(zip(
(p.event_id for p in pdu_list), results
)),
"pdus": pdu_results,
}
logger.debug("Returning: %s", str(response))
yield self.transaction_actions.set_response(
transaction,
200, response
@@ -205,12 +254,13 @@ class FederationServer(FederationBase):
result = self._state_resp_cache.get((room_id, event_id))
if not result:
with (yield self._server_linearizer.queue((origin, room_id))):
resp = yield self._state_resp_cache.set(
d = self._state_resp_cache.set(
(room_id, event_id),
self._on_context_state_request_compute(room_id, event_id)
preserve_fn(self._on_context_state_request_compute)(room_id, event_id)
)
resp = yield make_deferred_yieldable(d)
else:
resp = yield result
resp = yield make_deferred_yieldable(result)
defer.returnValue((200, resp))
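# Editor's note: the ResponseCache usage above is a request-deduplication
# pattern - the first caller stores the still-running deferred, and
# concurrent callers for the same key wait on it instead of recomputing.
# A rough illustrative sketch (the real class also keeps results around
# for a short time after completion):
class ResponseCacheSketch(object):
    def __init__(self):
        self._pending = {}  # key -> deferred for the in-flight computation

    def get(self, key):
        return self._pending.get(key)

    def set(self, key, deferred):
        self._pending[key] = deferred

        def _remove(res):
            self._pending.pop(key, None)
            return res

        deferred.addBoth(_remove)
        return deferred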
@@ -395,6 +445,9 @@ class FederationServer(FederationBase):
def on_query_client_keys(self, origin, content):
return self.on_query_request("client_keys", content)
def on_query_user_devices(self, origin, user_id):
return self.on_query_request("user_devices", user_id)
@defer.inlineCallbacks
@log_function
def on_claim_client_keys(self, origin, content):
@@ -413,6 +466,16 @@ class FederationServer(FederationBase):
key_id: json.loads(json_bytes)
}
logger.info(
"Claimed one-time-keys: %s",
",".join((
"%s for %s:%s" % (key_id, user_id, device_id)
for user_id, user_keys in json_result.iteritems()
for device_id, device_keys in user_keys.iteritems()
for key_id, _ in device_keys.iteritems()
)),
)
defer.returnValue({"one_time_keys": json_result})
@defer.inlineCallbacks
@@ -425,6 +488,7 @@ class FederationServer(FederationBase):
" limit: %d, min_depth: %d",
earliest_events, latest_events, limit, min_depth
)
missing_events = yield self.handler.on_get_missing_events(
origin, room_id, earliest_events, latest_events, limit, min_depth
)
@@ -472,25 +536,39 @@ class FederationServer(FederationBase):
)
@defer.inlineCallbacks
@log_function
def _handle_new_pdu(self, origin, pdu, get_missing=True):
# We reprocess pdus when we have seen them only as outliers
existing = yield self._get_persisted_pdu(
origin, pdu.event_id, do_auth=False
)
def _handle_received_pdu(self, origin, pdu):
""" Process a PDU received in a federation /send/ transaction.
# FIXME: Currently we fetch an event again when we already have it
# if it has been marked as an outlier.
Args:
origin (str): server which sent the pdu
pdu (FrozenEvent): received pdu
already_seen = (
existing and (
not existing.internal_metadata.is_outlier()
or pdu.internal_metadata.is_outlier()
)
)
if already_seen:
logger.debug("Already seen pdu %s", pdu.event_id)
return
Returns (Deferred): completes with None
Raises: FederationError if the signatures / hash do not match
"""
# check that it's actually being sent from a valid destination, to
# work around bug #1753 in 0.18.5 and 0.18.6
if origin != get_domain_from_id(pdu.event_id):
# We continue to accept join events from any server; this is
# necessary for the federation join dance to work correctly.
# (When we join over federation, the "helper" server is
# responsible for sending out the join event, rather than the
# origin. See bug #1893).
if not (
pdu.type == 'm.room.member' and
pdu.content and
pdu.content.get("membership", None) == 'join'
):
logger.info(
"Discarding PDU %s from invalid origin %s",
pdu.event_id, origin
)
return
else:
logger.info(
"Accepting join PDU %s from %s",
pdu.event_id, origin
)
# Check signature.
try:
@@ -503,114 +581,7 @@ class FederationServer(FederationBase):
affected=pdu.event_id,
)
state = None
auth_chain = []
have_seen = yield self.store.have_events(
[ev for ev, _ in pdu.prev_events]
)
fetch_state = False
# Get missing pdus if necessary.
if not pdu.internal_metadata.is_outlier():
# We only backfill backwards to the min depth.
min_depth = yield self.handler.get_min_depth_for_context(
pdu.room_id
)
logger.debug(
"_handle_new_pdu min_depth for %s: %d",
pdu.room_id, min_depth
)
prevs = {e_id for e_id, _ in pdu.prev_events}
seen = set(have_seen.keys())
if min_depth and pdu.depth < min_depth:
# This is so that we don't notify the user about this
# message, to work around the fact that some events will
# reference really really old events we really don't want to
# send to the clients.
pdu.internal_metadata.outlier = True
elif min_depth and pdu.depth > min_depth:
if get_missing and prevs - seen:
# If we're missing stuff, ensure we only fetch stuff one
# at a time.
with (yield self._room_pdu_linearizer.queue(pdu.room_id)):
# We recalculate seen, since it may have changed.
have_seen = yield self.store.have_events(prevs)
seen = set(have_seen.keys())
if prevs - seen:
latest = yield self.store.get_latest_event_ids_in_room(
pdu.room_id
)
# We add the prev events that we have seen to the latest
# list to ensure the remote server doesn't give them to us
latest = set(latest)
latest |= seen
logger.info(
"Missing %d events for room %r: %r...",
len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
)
missing_events = yield self.get_missing_events(
origin,
pdu.room_id,
earliest_events_ids=list(latest),
latest_events=[pdu],
limit=10,
min_depth=min_depth,
)
# We want to sort these by depth so we process them and
# tell clients about them in order.
missing_events.sort(key=lambda x: x.depth)
for e in missing_events:
yield self._handle_new_pdu(
origin,
e,
get_missing=False
)
have_seen = yield self.store.have_events(
[ev for ev, _ in pdu.prev_events]
)
prevs = {e_id for e_id, _ in pdu.prev_events}
seen = set(have_seen.keys())
if prevs - seen:
logger.info(
"Still missing %d events for room %r: %r...",
len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
)
fetch_state = True
if fetch_state:
# We need to get the state at this event, since we haven't
# processed all the prev events.
logger.debug(
"_handle_new_pdu getting state for %s",
pdu.room_id
)
try:
state, auth_chain = yield self.get_state_for_room(
origin, pdu.room_id, pdu.event_id,
)
except:
logger.exception("Failed to get state for event: %s", pdu.event_id)
yield self.handler.on_receive_pdu(
origin,
pdu,
state=state,
auth_chain=auth_chain,
)
yield self.handler.on_receive_pdu(origin, pdu, get_missing=True)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name


@@ -20,8 +20,6 @@ a given transport.
from .federation_client import FederationClient
from .federation_server import FederationServer
from .transaction_queue import TransactionQueue
from .persistence import TransactionActions
import logging
@@ -66,9 +64,6 @@ class ReplicationLayer(FederationClient, FederationServer):
self._clock = hs.get_clock()
self.transaction_actions = TransactionActions(self.store)
self._transaction_queue = TransactionQueue(hs, transport_layer)
self._order = 0
self.hs = hs


@@ -0,0 +1,548 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A federation sender that forwards things to be sent across replication to
a worker process.
It assumes there is a single worker process feeding off of it.
Each row in the replication stream consists of a type and some json, where the
types indicate whether they are presence, or edus, etc.
Ephemeral or non-event data are queued up in-memory. When the worker requests
updates since a particular point, all in-memory data since before that point is
dropped. We also expire things in the queue after 5 minutes, to ensure that a
dead worker doesn't cause the queues to grow limitlessly.
Events are replicated via a separate events stream.
"""
from .units import Edu
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
import synapse.metrics
from blist import sorteddict
from collections import namedtuple
import logging
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
class FederationRemoteSendQueue(object):
"""A drop in replacement for TransactionQueue"""
def __init__(self, hs):
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id
self.presence_map = {} # Pending presence map user_id -> UserPresenceState
self.presence_changed = sorteddict() # Stream position -> user_id
self.keyed_edu = {} # (destination, key) -> EDU
self.keyed_edu_changed = sorteddict() # stream position -> (destination, key)
self.edus = sorteddict() # stream position -> Edu
self.failures = sorteddict() # stream position -> (destination, Failure)
self.device_messages = sorteddict() # stream position -> destination
self.pos = 1
self.pos_time = sorteddict()
# EVERYTHING IS SAD. In particular, python only makes new scopes when
# we make a new function, so we need to make a new function so the inner
# lambda binds to the queue rather than to the name of the queue which
# changes. ARGH.
def register(name, queue):
metrics.register_callback(
name + "_size",
lambda: len(queue),
)
for queue_name in [
"presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed",
"edus", "failures", "device_messages", "pos_time",
]:
register(queue_name, getattr(self, queue_name))
self.clock.looping_call(self._clear_queue, 30 * 1000)
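# Editor's note: the "EVERYTHING IS SAD" comment is about Python's late
# binding of closures - a lambda made in a loop captures the variable, not
# the value it had when the lambda was created:
callbacks = [lambda: name for name in ["a", "b"]]
print([cb() for cb in callbacks])                  # ['b', 'b'] - late binding
callbacks = [(lambda n: (lambda: n))(name) for name in ["a", "b"]]
print([cb() for cb in callbacks])                  # ['a', 'b'] - per-iteration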
def _next_pos(self):
pos = self.pos
self.pos += 1
self.pos_time[self.clock.time_msec()] = pos
return pos
def _clear_queue(self):
"""Clear the queues for anything older than N minutes"""
FIVE_MINUTES_AGO = 5 * 60 * 1000
now = self.clock.time_msec()
keys = self.pos_time.keys()
time = keys.bisect_left(now - FIVE_MINUTES_AGO)
if not keys[:time]:
return
position_to_delete = max(keys[:time])
for key in keys[:time]:
del self.pos_time[key]
self._clear_queue_before_pos(position_to_delete)
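# Editor's note: pos_time keeps timestamp -> position in sorted key order, so
# "everything older than the cutoff" is a bisect. The same idea with the
# stdlib (the real code uses blist.sorteddict, whose key view has bisect_left):
from bisect import bisect_left
pos_time = {100: 1, 250: 2, 900: 3}   # time_msec -> stream position
keys = sorted(pos_time)
cutoff = 300                          # now - FIVE_MINUTES_AGO
expired = keys[:bisect_left(keys, cutoff)]
assert expired == [100, 250]          # drop these; the entry at 900 survives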
def _clear_queue_before_pos(self, position_to_delete):
"""Clear all the queues from before a given position"""
with Measure(self.clock, "send_queue._clear"):
# Delete things out of presence maps
keys = self.presence_changed.keys()
i = keys.bisect_left(position_to_delete)
for key in keys[:i]:
del self.presence_changed[key]
user_ids = set(
user_id
for uids in self.presence_changed.itervalues()
for user_id in uids
)
to_del = [
user_id for user_id in self.presence_map if user_id not in user_ids
]
for user_id in to_del:
del self.presence_map[user_id]
# Delete things out of keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_left(position_to_delete)
for key in keys[:i]:
del self.keyed_edu_changed[key]
live_keys = set()
for edu_key in self.keyed_edu_changed.values():
live_keys.add(edu_key)
to_del = [edu_key for edu_key in self.keyed_edu if edu_key not in live_keys]
for edu_key in to_del:
del self.keyed_edu[edu_key]
# Delete things out of edu map
keys = self.edus.keys()
i = keys.bisect_left(position_to_delete)
for key in keys[:i]:
del self.edus[key]
# Delete things out of failure map
keys = self.failures.keys()
i = keys.bisect_left(position_to_delete)
for key in keys[:i]:
del self.failures[key]
# Delete things out of device map
keys = self.device_messages.keys()
i = keys.bisect_left(position_to_delete)
for key in keys[:i]:
del self.device_messages[key]
def notify_new_events(self, current_id):
"""As per TransactionQueue"""
# We don't need to replicate this as it gets sent down a different
# stream.
pass
def send_edu(self, destination, edu_type, content, key=None):
"""As per TransactionQueue"""
pos = self._next_pos()
edu = Edu(
origin=self.server_name,
destination=destination,
edu_type=edu_type,
content=content,
)
if key:
assert isinstance(key, tuple)
self.keyed_edu[(destination, key)] = edu
self.keyed_edu_changed[pos] = (destination, key)
else:
self.edus[pos] = edu
self.notifier.on_new_replication_data()
def send_presence(self, states):
"""As per TransactionQueue
Args:
states (list(UserPresenceState))
"""
pos = self._next_pos()
# We only want to send presence for our own users, so let's always
# filter here, just in case.
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
self.presence_map.update({state.user_id: state for state in local_states})
self.presence_changed[pos] = [state.user_id for state in local_states]
self.notifier.on_new_replication_data()
def send_failure(self, failure, destination):
"""As per TransactionQueue"""
pos = self._next_pos()
self.failures[pos] = (destination, str(failure))
self.notifier.on_new_replication_data()
def send_device_messages(self, destination):
"""As per TransactionQueue"""
pos = self._next_pos()
self.device_messages[pos] = destination
self.notifier.on_new_replication_data()
def get_current_token(self):
return self.pos - 1
def federation_ack(self, token):
self._clear_queue_before_pos(token)
def get_replication_rows(self, from_token, to_token, limit, federation_ack=None):
"""Get rows to be sent over federation between the two tokens
Args:
from_token (int)
to_token(int)
limit (int)
federation_ack (int): Optional. The position which the worker has
explicitly acknowledged it has handled. Allows us to drop
data from before that point
"""
# TODO: Handle limit.
# To handle restarts where we wrap around
if from_token > self.pos:
from_token = -1
# list of tuple(int, BaseFederationRow), where the first is the position
# of the federation stream.
rows = []
# There should be only one reader, so let's delete everything it has
# acknowledged it has seen.
if federation_ack:
self._clear_queue_before_pos(federation_ack)
# Fetch changed presence
keys = self.presence_changed.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
dest_user_ids = [
(pos, user_id)
for pos in keys[i:j]
for user_id in self.presence_changed[pos]
]
for (key, user_id) in dest_user_ids:
rows.append((key, PresenceRow(
state=self.presence_map[user_id],
)))
# Fetch changes keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
# We purposefully clobber based on the key here: python dict comprehensions
# always keep the last value for a repeated key, so this will correctly
# point to the last stream position.
keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]}
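# (e.g. {k: v for k, v in [("a", 1), ("b", 2), ("a", 3)]} == {"a": 3, "b": 2})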
for ((destination, edu_key), pos) in keyed_edus.iteritems():
rows.append((pos, KeyedEduRow(
key=edu_key,
edu=self.keyed_edu[(destination, edu_key)],
)))
# Fetch changed edus
keys = self.edus.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
edus = ((k, self.edus[k]) for k in keys[i:j])
for (pos, edu) in edus:
rows.append((pos, EduRow(edu)))
# Fetch changed failures
keys = self.failures.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
failures = ((k, self.failures[k]) for k in keys[i:j])
for (pos, (destination, failure)) in failures:
rows.append((pos, FailureRow(
destination=destination,
failure=failure,
)))
# Fetch changed device messages
keys = self.device_messages.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
device_messages = {self.device_messages[k]: k for k in keys[i:j]}
for (destination, pos) in device_messages.iteritems():
rows.append((pos, DeviceRow(
destination=destination,
)))
# Sort rows based on pos
rows.sort()
return [(pos, row.TypeId, row.to_data()) for pos, row in rows]
class BaseFederationRow(object):
"""Base class for rows to be sent in the federation stream.
Specifies how to identify, serialize and deserialize the different types.
"""
TypeId = None # Unique string that ids the type. Must be overridden in subclasses.
@staticmethod
def from_data(data):
"""Parse the data from the federation stream into a row.
Args:
data: The value of ``data`` from FederationStreamRow.data, type
depends on the type of stream
"""
raise NotImplementedError()
def to_data(self):
"""Serialize this row to be sent over the federation stream.
Returns:
The value to be sent in FederationStreamRow.data. The type depends
on the type of stream.
"""
raise NotImplementedError()
def add_to_buffer(self, buff):
"""Add this row to the appropriate field in the buffer ready for this
to be sent over federation.
We use a buffer so that we can batch up events that have come in at
the same time and send them all at once.
Args:
buff (BufferedToSend)
"""
raise NotImplementedError()
class PresenceRow(BaseFederationRow, namedtuple("PresenceRow", (
"state", # UserPresenceState
))):
TypeId = "p"
@staticmethod
def from_data(data):
return PresenceRow(
state=UserPresenceState.from_dict(data)
)
def to_data(self):
return self.state.as_dict()
def add_to_buffer(self, buff):
buff.presence.append(self.state)
class KeyedEduRow(BaseFederationRow, namedtuple("KeyedEduRow", (
"key", # tuple(str) - the edu key passed to send_edu
"edu", # Edu
))):
"""Streams EDUs that have an associated key that is ued to clobber. For example,
typing EDUs clobber based on room_id.
"""
TypeId = "k"
@staticmethod
def from_data(data):
return KeyedEduRow(
key=tuple(data["key"]),
edu=Edu(**data["edu"]),
)
def to_data(self):
return {
"key": self.key,
"edu": self.edu.get_internal_dict(),
}
def add_to_buffer(self, buff):
buff.keyed_edus.setdefault(
self.edu.destination, {}
)[self.key] = self.edu
class EduRow(BaseFederationRow, namedtuple("EduRow", (
"edu", # Edu
))):
"""Streams EDUs that don't have keys. See KeyedEduRow
"""
TypeId = "e"
@staticmethod
def from_data(data):
return EduRow(Edu(**data))
def to_data(self):
return self.edu.get_internal_dict()
def add_to_buffer(self, buff):
buff.edus.setdefault(self.edu.destination, []).append(self.edu)
class FailureRow(BaseFederationRow, namedtuple("FailureRow", (
"destination", # str
"failure",
))):
"""Streams failures to a remote server. Failures are issued when there was
something wrong with a transaction the remote sent us, e.g. it included
an event that was invalid.
"""
TypeId = "f"
@staticmethod
def from_data(data):
return FailureRow(
destination=data["destination"],
failure=data["failure"],
)
def to_data(self):
return {
"destination": self.destination,
"failure": self.failure,
}
def add_to_buffer(self, buff):
buff.failures.setdefault(self.destination, []).append(self.failure)
class DeviceRow(BaseFederationRow, namedtuple("DeviceRow", (
"destination", # str
))):
"""Streams the fact that either a) there is pending to device messages for
users on the remote, or b) a local users device has changed and needs to
be sent to the remote.
"""
TypeId = "d"
@staticmethod
def from_data(data):
return DeviceRow(destination=data["destination"])
def to_data(self):
return {"destination": self.destination}
def add_to_buffer(self, buff):
buff.device_destinations.add(self.destination)
TypeToRow = {
Row.TypeId: Row
for Row in (
PresenceRow,
KeyedEduRow,
EduRow,
FailureRow,
DeviceRow,
)
}
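# Editor's note: a hedged round-trip sketch through the registry above,
# mirroring what get_replication_rows emits and process_rows_for_federation
# consumes; the Edu constructor arguments follow the usage earlier in this
# file.
edu = Edu(
    origin="hs1.example.org",
    destination="hs2.example.org",
    edu_type="m.typing",
    content={"room_id": "!abc:hs1.example.org", "typing": True},
)
pos, type_id, data = 7, EduRow.TypeId, EduRow(edu).to_data()
parsed = TypeToRow[type_id].from_data(data)  # -> an equivalent EduRow
assert isinstance(parsed, EduRow)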
ParsedFederationStreamData = namedtuple("ParsedFederationStreamData", (
"presence", # list(UserPresenceState)
"keyed_edus", # dict of destination -> { key -> Edu }
"edus", # dict of destination -> [Edu]
"failures", # dict of destination -> [failures]
"device_destinations", # set of destinations
))
def process_rows_for_federation(transaction_queue, rows):
"""Parse a list of rows from the federation stream and put them in the
transaction queue ready for sending to the relevant homeservers.
Args:
transaction_queue (TransactionQueue)
rows (list(synapse.replication.tcp.streams.FederationStreamRow))
"""
# The federation stream contains a bunch of different types of
# rows that need to be handled differently. We parse the rows, put
# them into the appropriate collection and then send them off.
buff = ParsedFederationStreamData(
presence=[],
keyed_edus={},
edus={},
failures={},
device_destinations=set(),
)
# Parse the rows in the stream and add to the buffer
for row in rows:
if row.type not in TypeToRow:
logger.error("Unrecognized federation row type %r", row.type)
continue
RowType = TypeToRow[row.type]
parsed_row = RowType.from_data(row.data)
parsed_row.add_to_buffer(buff)
if buff.presence:
transaction_queue.send_presence(buff.presence)
for destination, edu_map in buff.keyed_edus.iteritems():
for key, edu in edu_map.items():
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=key,
)
for destination, edu_list in buff.edus.iteritems():
for edu in edu_list:
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=None,
)
for destination, failure_list in buff.failures.iteritems():
for failure in failure_list:
transaction_queue.send_failure(destination, failure)
for destination in buff.device_destinations:
transaction_queue.send_device_messages(destination)


@@ -12,7 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
from twisted.internet import defer
@@ -20,13 +20,11 @@ from .persistence import TransactionActions
from .units import Transaction, Edu
from synapse.api.errors import HttpResponseException
from synapse.util import logcontext, PreserveLoggingContext
from synapse.util.async import run_on_reactor
from synapse.util.logcontext import preserve_context_over_fn
from synapse.util.retryutils import (
get_retry_limiter, NotRetryingDestination,
)
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
from synapse.util.metrics import measure_func
from synapse.handlers.presence import format_user_presence_state
from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
import synapse.metrics
import logging
@@ -36,6 +34,14 @@ logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
client_metrics = synapse.metrics.get_metrics_for("synapse.federation.client")
sent_pdus_destination_dist = client_metrics.register_distribution(
"sent_pdu_destinations"
)
sent_edus_counter = client_metrics.register_counter("sent_edus")
sent_transactions_counter = client_metrics.register_counter("sent_transactions")
class TransactionQueue(object):
"""This class makes sure we only have one transaction in flight at
@@ -44,15 +50,17 @@ class TransactionQueue(object):
It batches pending PDUs into single transactions.
"""
def __init__(self, hs, transport_layer):
def __init__(self, hs):
self.server_name = hs.hostname
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.transaction_actions = TransactionActions(self.store)
self.transport_layer = transport_layer
self.transport_layer = hs.get_federation_transport_client()
self.clock = hs.get_clock()
self.is_mine_id = hs.is_mine_id
# Is a mapping from destinations -> deferreds. Used to keep track
# of which destinations have transactions in flight and when they are
@@ -70,8 +78,18 @@ class TransactionQueue(object):
# destination -> list of tuple(edu, deferred)
self.pending_edus_by_dest = edus = {}
# Presence needs to be separate as we send single aggregate EDUs
# Map of user_id -> UserPresenceState for all the pending presence
# to be sent out by user_id. Entries here get processed and put in
# pending_presence_by_dest
self.pending_presence = {}
# Map of destination -> user_id -> UserPresenceState of pending presence
# to be sent to each destination
self.pending_presence_by_dest = presence = {}
# Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
# based on their key (e.g. typing events by room_id)
# Map of destination -> (edu_type, key) -> Edu
self.pending_edus_keyed_by_dest = edus_keyed = {}
metrics.register_callback(
@@ -90,11 +108,24 @@ class TransactionQueue(object):
# destination -> list of tuple(failure, deferred)
self.pending_failures_by_dest = {}
# destination -> stream_id of last successfully sent to-device message.
# NB: may be a long or an int.
self.last_device_stream_id_by_dest = {}
# destination -> stream_id of last successfully sent device list
# update.
self.last_device_list_stream_id_by_dest = {}
# HACK to get unique tx id
self._next_txn_id = int(self.clock.time_msec())
self._order = 1
self._is_processing = False
self._last_poked_id = -1
self._processing_pending_presence = False
def can_send_to(self, destination):
"""Can we send messages to the given server?
@@ -115,11 +146,80 @@ class TransactionQueue(object):
else:
return not destination.startswith("localhost")
def enqueue_pdu(self, pdu, destinations, order):
def notify_new_events(self, current_id):
"""This gets called when we have some new events we might want to
send out to other servers.
"""
self._last_poked_id = max(current_id, self._last_poked_id)
if self._is_processing:
return
# fire off a processing loop in the background. It's likely it will
# outlast the current request, so run it in the sentinel logcontext.
with PreserveLoggingContext():
self._process_event_queue_loop()
@defer.inlineCallbacks
def _process_event_queue_loop(self):
try:
self._is_processing = True
while True:
last_token = yield self.store.get_federation_out_pos("events")
next_token, events = yield self.store.get_all_new_events_stream(
last_token, self._last_poked_id, limit=20,
)
logger.debug("Handling %s -> %s", last_token, next_token)
if not events and next_token >= self._last_poked_id:
break
for event in events:
# Only send events for this server.
send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of()
is_mine = self.is_mine_id(event.event_id)
if not is_mine and send_on_behalf_of is None:
continue
# Get the state from before the event.
# We need to make sure that this is the state from before
# the event and not from after it.
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
destinations = yield self.state.get_current_hosts_in_room(
event.room_id, latest_event_ids=[
prev_id for prev_id, _ in event.prev_events
],
)
destinations = set(destinations)
if send_on_behalf_of is not None:
# If we are sending the event on behalf of another server
# then it already has the event and there is no reason to
# send the event to it.
destinations.discard(send_on_behalf_of)
logger.debug("Sending %s to %r", event, destinations)
self._send_pdu(event, destinations)
yield self.store.update_federation_out_pos(
"events", next_token
)
finally:
self._is_processing = False
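# Editor's note: the stream token is only advanced after a whole batch has
# been handed to _send_pdu, so a crash mid-batch replays the batch on
# restart; delivery from this loop is at-least-once rather than at-most-once.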
def _send_pdu(self, pdu, destinations):
# We loop through all destinations to see whether we already have
# a transaction in progress. If we do, stick it in the pending_pdus
# table and we'll get back to it later.
order = self._order
self._order += 1
destinations = set(destinations)
destinations = set(
dest for dest in destinations if self.can_send_to(dest)
@@ -130,30 +230,94 @@ class TransactionQueue(object):
if not destinations:
return
sent_pdus_destination_dist.inc_by(len(destinations))
for destination in destinations:
self.pending_pdus_by_dest.setdefault(destination, []).append(
(pdu, order)
)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
self._attempt_new_transaction(destination)
def enqueue_presence(self, destination, states):
self.pending_presence_by_dest.setdefault(destination, {}).update({
@logcontext.preserve_fn # the caller should not yield on this
@defer.inlineCallbacks
def send_presence(self, states):
"""Send the new presence states to the appropriate destinations.
This actually queues up the presence states ready for sending and
triggers a background task to process them and send out the transactions.
Args:
states (list(UserPresenceState))
"""
# First we queue up the new presence by user ID, so multiple presence
# updates in quick succession are correctly handled.
# We only want to send presence for our own users, so let's always
# filter here, just in case.
self.pending_presence.update({
state.user_id: state for state in states
if self.is_mine_id(state.user_id)
})
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
# We then handle the new pending presence in batches, first figuring
# out the destinations we need to send each state to and then poking it
# to attempt a new transaction. We linearize this so that we don't
# accidentally mess up the ordering and send multiple presence updates
# in the wrong order
if self._processing_pending_presence:
return
def enqueue_edu(self, edu, key=None):
destination = edu.destination
self._processing_pending_presence = True
try:
while True:
states_map = self.pending_presence
self.pending_presence = {}
if not states_map:
break
yield self._process_presence_inner(states_map.values())
finally:
self._processing_pending_presence = False
@measure_func("txnqueue._process_presence")
@defer.inlineCallbacks
def _process_presence_inner(self, states):
"""Given a list of states populate self.pending_presence_by_dest and
poke to send a new transaction to each destination
Args:
states (list(UserPresenceState))
"""
hosts_and_states = yield get_interested_remotes(self.store, states, self.state)
for destinations, states in hosts_and_states:
for destination in destinations:
if not self.can_send_to(destination):
continue
self.pending_presence_by_dest.setdefault(
destination, {}
).update({
state.user_id: state for state in states
})
self._attempt_new_transaction(destination)
def send_edu(self, destination, edu_type, content, key=None):
edu = Edu(
origin=self.server_name,
destination=destination,
edu_type=edu_type,
content=content,
)
if not self.can_send_to(destination):
return
sent_edus_counter.inc()
if key:
self.pending_edus_keyed_by_dest.setdefault(
destination, {}
@@ -161,11 +325,9 @@ class TransactionQueue(object):
else:
self.pending_edus_by_dest.setdefault(destination, []).append(edu)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
self._attempt_new_transaction(destination)
def enqueue_failure(self, failure, destination):
def send_failure(self, failure, destination):
if destination == self.server_name or destination == "localhost":
return
@@ -176,23 +338,33 @@ class TransactionQueue(object):
destination, []
).append(failure)
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
self._attempt_new_transaction(destination)
def enqueue_device_messages(self, destination):
def send_device_messages(self, destination):
if destination == self.server_name or destination == "localhost":
return
if not self.can_send_to(destination):
return
preserve_context_over_fn(
self._attempt_new_transaction, destination
)
self._attempt_new_transaction(destination)
def get_current_token(self):
return 0
@defer.inlineCallbacks
def _attempt_new_transaction(self, destination):
"""Try to start a new transaction to this destination
If there is already a transaction in progress to this destination,
returns immediately. Otherwise kicks off the process of sending a
transaction in the background.
Args:
destination (str):
Returns:
None
"""
# list of (pending_pdu, deferred, order)
if destination in self.pending_transactions:
# XXX: pending_transactions can get stuck on by a never-ending
@@ -205,74 +377,124 @@ class TransactionQueue(object):
)
return
logger.debug("TX [%s] Starting transaction loop", destination)
# Drop the logcontext before starting the transaction. It doesn't
# really make sense to log all the outbound transactions against
# whatever path led us to this point: that's pretty arbitrary really.
#
# (this also means we can fire off _perform_transaction without
# yielding)
with logcontext.PreserveLoggingContext():
self._transaction_transmission_loop(destination)
@defer.inlineCallbacks
def _transaction_transmission_loop(self, destination):
pending_pdus = []
try:
self.pending_transactions[destination] = 1
# This will throw if we wouldn't retry. We do this here so we fail
# quickly, but we will later check this again in the http client,
# hence why we throw the result away.
yield get_retry_limiter(destination, self.clock, self.store)
# XXX: what's this for?
yield run_on_reactor()
pending_pdus = []
while True:
pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
pending_edus = self.pending_edus_by_dest.pop(destination, [])
pending_presence = self.pending_presence_by_dest.pop(destination, {})
pending_failures = self.pending_failures_by_dest.pop(destination, [])
device_message_edus, device_stream_id, dev_list_id = (
yield self._get_new_device_messages(destination)
)
pending_edus.extend(
self.pending_edus_keyed_by_dest.pop(destination, {}).values()
# BEGIN CRITICAL SECTION
#
# In order to avoid a race condition, we need to make sure that
# the following code (from popping the queues up to the point
# where we decide if we actually have any pending messages) is
# atomic - otherwise new PDUs or EDUs might arrive in the
# meantime, but not get sent because we hold the
# pending_transactions flag.
pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
pending_edus = self.pending_edus_by_dest.pop(destination, [])
pending_presence = self.pending_presence_by_dest.pop(destination, {})
pending_failures = self.pending_failures_by_dest.pop(destination, [])
pending_edus.extend(
self.pending_edus_keyed_by_dest.pop(destination, {}).values()
)
pending_edus.extend(device_message_edus)
if pending_presence:
pending_edus.append(
Edu(
origin=self.server_name,
destination=destination,
edu_type="m.presence",
content={
"push": [
format_user_presence_state(
presence, self.clock.time_msec()
)
for presence in pending_presence.values()
]
},
)
)
limiter = yield get_retry_limiter(
destination,
self.clock,
self.store,
)
if pending_pdus:
logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
destination, len(pending_pdus))
device_message_edus, device_stream_id = (
yield self._get_new_device_messages(destination)
if not pending_pdus and not pending_edus and not pending_failures:
logger.debug("TX [%s] Nothing to send", destination)
self.last_device_stream_id_by_dest[destination] = (
device_stream_id
)
return
pending_edus.extend(device_message_edus)
if pending_presence:
pending_edus.append(
Edu(
origin=self.server_name,
destination=destination,
edu_type="m.presence",
content={
"push": [
format_user_presence_state(
presence, self.clock.time_msec()
)
for presence in pending_presence.values()
]
},
)
# END CRITICAL SECTION
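# (Editor's note: the atomicity here comes from Twisted's single-threaded
# reactor - with no yield between popping the queues and checking whether
# anything is pending, no other code can run in between, so nothing can be
# enqueued and then silently missed while pending_transactions is held.)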
success = yield self._send_new_transaction(
destination, pending_pdus, pending_edus, pending_failures,
)
if success:
sent_transactions_counter.inc()
# Remove the acknowledged device messages from the database
# Only bother if we actually sent some device messages
if device_message_edus:
yield self.store.delete_device_msgs_for_remote(
destination, device_stream_id
)
logger.info("Marking as sent %r %r", destination, dev_list_id)
yield self.store.mark_as_sent_devices_by_remote(
destination, dev_list_id
)
if pending_pdus:
logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
destination, len(pending_pdus))
if not pending_pdus and not pending_edus and not pending_failures:
logger.debug("TX [%s] Nothing to send", destination)
self.last_device_stream_id_by_dest[destination] = (
device_stream_id
)
return
success = yield self._send_new_transaction(
destination, pending_pdus, pending_edus, pending_failures,
device_stream_id,
should_delete_from_device_stream=bool(device_message_edus),
limiter=limiter,
)
if not success:
break
except NotRetryingDestination:
logger.info(
"TX [%s] not ready for retry yet - "
self.last_device_stream_id_by_dest[destination] = device_stream_id
self.last_device_list_stream_id_by_dest[destination] = dev_list_id
else:
break
except NotRetryingDestination as e:
logger.debug(
"TX [%s] not ready for retry yet (next retry at %s) - "
"dropping transaction for now",
destination,
datetime.datetime.fromtimestamp(
(e.retry_last_ts + e.retry_interval) / 1000.0
),
)
except Exception as e:
logger.warn(
"TX [%s] Failed to send transaction: %s",
destination,
e,
)
for p, _ in pending_pdus:
logger.info("Failed to send event %s to %s", p.event_id,
destination)
finally:
# We want to be *very* sure we delete this after we stop processing
self.pending_transactions.pop(destination, None)
@@ -293,13 +515,26 @@ class TransactionQueue(object):
)
for content in contents
]
defer.returnValue((edus, stream_id))
last_device_list = self.last_device_list_stream_id_by_dest.get(destination, 0)
now_stream_id, results = yield self.store.get_devices_by_remote(
destination, last_device_list
)
edus.extend(
Edu(
origin=self.server_name,
destination=destination,
edu_type="m.device_list_update",
content=content,
)
for content in results
)
defer.returnValue((edus, stream_id, now_stream_id))
@measure_func("_send_new_transaction")
@defer.inlineCallbacks
def _send_new_transaction(self, destination, pending_pdus, pending_edus,
pending_failures, device_stream_id,
should_delete_from_device_stream, limiter):
pending_failures):
# Sort based on the order field
pending_pdus.sort(key=lambda t: t[1])
@@ -309,132 +544,104 @@ class TransactionQueue(object):
success = True
logger.debug("TX [%s] _attempt_new_transaction", destination)
txn_id = str(self._next_txn_id)
logger.debug(
"TX [%s] {%s} Attempting new transaction"
" (pdus: %d, edus: %d, failures: %d)",
destination, txn_id,
len(pdus),
len(edus),
len(failures)
)
logger.debug("TX [%s] Persisting transaction...", destination)
transaction = Transaction.create_new(
origin_server_ts=int(self.clock.time_msec()),
transaction_id=txn_id,
origin=self.server_name,
destination=destination,
pdus=pdus,
edus=edus,
pdu_failures=failures,
)
self._next_txn_id += 1
yield self.transaction_actions.prepare_to_send(transaction)
logger.debug("TX [%s] Persisted transaction", destination)
logger.info(
"TX [%s] {%s} Sending transaction [%s],"
" (PDUs: %d, EDUs: %d, failures: %d)",
destination, txn_id,
transaction.transaction_id,
len(pdus),
len(edus),
len(failures),
)
# Actually send the transaction
# FIXME (erikj): This is a bit of a hack to make the Pdu age
# keys work
def json_data_cb():
data = transaction.get_dict()
now = int(self.clock.time_msec())
if "pdus" in data:
for p in data["pdus"]:
if "age_ts" in p:
unsigned = p.setdefault("unsigned", {})
unsigned["age"] = now - int(p["age_ts"])
del p["age_ts"]
return data
try:
logger.debug("TX [%s] _attempt_new_transaction", destination)
txn_id = str(self._next_txn_id)
logger.debug(
"TX [%s] {%s} Attempting new transaction"
" (pdus: %d, edus: %d, failures: %d)",
destination, txn_id,
len(pdus),
len(edus),
len(failures)
response = yield self.transport_layer.send_transaction(
transaction, json_data_cb
)
code = 200
logger.debug("TX [%s] Persisting transaction...", destination)
transaction = Transaction.create_new(
origin_server_ts=int(self.clock.time_msec()),
transaction_id=txn_id,
origin=self.server_name,
destination=destination,
pdus=pdus,
edus=edus,
pdu_failures=failures,
)
self._next_txn_id += 1
yield self.transaction_actions.prepare_to_send(transaction)
logger.debug("TX [%s] Persisted transaction", destination)
logger.info(
"TX [%s] {%s} Sending transaction [%s],"
" (PDUs: %d, EDUs: %d, failures: %d)",
destination, txn_id,
transaction.transaction_id,
len(pdus),
len(edus),
len(failures),
)
with limiter:
# Actually send the transaction
# FIXME (erikj): This is a bit of a hack to make the Pdu age
# keys work
def json_data_cb():
data = transaction.get_dict()
now = int(self.clock.time_msec())
if "pdus" in data:
for p in data["pdus"]:
if "age_ts" in p:
unsigned = p.setdefault("unsigned", {})
unsigned["age"] = now - int(p["age_ts"])
del p["age_ts"]
return data
try:
response = yield self.transport_layer.send_transaction(
transaction, json_data_cb
)
code = 200
if response:
for e_id, r in response.get("pdus", {}).items():
if "error" in r:
logger.warn(
"Transaction returned error for %s: %s",
e_id, r,
)
except HttpResponseException as e:
code = e.code
response = e.response
if response:
for e_id, r in response.get("pdus", {}).items():
if "error" in r:
logger.warn(
"Transaction returned error for %s: %s",
e_id, r,
)
except HttpResponseException as e:
code = e.code
response = e.response
if e.code in (401, 404, 429) or 500 <= e.code:
logger.info(
"TX [%s] {%s} got %d response",
destination, txn_id, code
)
raise e
logger.debug("TX [%s] Sent transaction", destination)
logger.debug("TX [%s] Marking as delivered...", destination)
logger.info(
"TX [%s] {%s} got %d response",
destination, txn_id, code
)
yield self.transaction_actions.delivered(
transaction, code, response
)
logger.debug("TX [%s] Sent transaction", destination)
logger.debug("TX [%s] Marking as delivered...", destination)
logger.debug("TX [%s] Marked as delivered", destination)
yield self.transaction_actions.delivered(
transaction, code, response
)
if code != 200:
for p in pdus:
logger.info(
"Failed to send event %s to %s", p.event_id, destination
)
success = False
else:
# Remove the acknowledged device messages from the database
if should_delete_from_device_stream:
yield self.store.delete_device_msgs_for_remote(
destination, device_stream_id
)
self.last_device_stream_id_by_dest[destination] = device_stream_id
except RuntimeError as e:
# We capture this here as there as nothing actually listens
# for this finishing functions deferred.
logger.warn(
"TX [%s] Problem in _attempt_transaction: %s",
destination,
e,
)
success = False
logger.debug("TX [%s] Marked as delivered", destination)
if code != 200:
for p in pdus:
logger.info("Failed to send event %s to %s", p.event_id, destination)
except Exception as e:
# We capture this here as there is nothing that actually listens
# for this function's deferred to finish.
logger.warn(
"TX [%s] Problem in _attempt_transaction: %s",
destination,
e,
)
logger.info(
"Failed to send event %s to %s", p.event_id, destination
)
success = False
for p in pdus:
logger.info("Failed to send event %s to %s", p.event_id, destination)
defer.returnValue(success)


@@ -163,6 +163,7 @@ class TransportLayerClient(object):
data=json_data,
json_data_callback=json_data_callback,
long_retries=True,
backoff_on_404=True, # If we get a 404 the other side has gone
)
logger.debug(
@@ -174,7 +175,8 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def make_query(self, destination, query_type, args, retry_on_dns_fail):
def make_query(self, destination, query_type, args, retry_on_dns_fail,
ignore_backoff=False):
path = PREFIX + "/query/%s" % query_type
content = yield self.client.get_json(
@@ -183,6 +185,7 @@ class TransportLayerClient(object):
args=args,
retry_on_dns_fail=retry_on_dns_fail,
timeout=10000,
ignore_backoff=ignore_backoff,
)
defer.returnValue(content)
@@ -190,6 +193,26 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def make_membership_event(self, destination, room_id, user_id, membership):
"""Asks a remote server to build and sign us a membership event
Note that this does not append any events to any graphs.
Args:
destination (str): address of remote homeserver
room_id (str): room to join/leave
user_id (str): user to be joined/left
membership (str): one of join/leave
Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body (ie, the new event).
Fails with ``HTTPRequestException`` if we get an HTTP response
code >= 300.
Fails with ``NotRetryingDestination`` if we are not yet ready
to retry this server.
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
if membership not in valid_memberships:
raise RuntimeError(
@@ -198,11 +221,23 @@ class TransportLayerClient(object):
)
path = PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id)
ignore_backoff = False
retry_on_dns_fail = False
if membership == Membership.LEAVE:
# we particularly want to do our best to send leave events. The
# problem is that if it fails, we won't retry it later, so if the
# remote server was just having a momentary blip, the room will be
# out of sync.
ignore_backoff = True
retry_on_dns_fail = True
content = yield self.client.get_json(
destination=destination,
path=path,
retry_on_dns_fail=False,
retry_on_dns_fail=retry_on_dns_fail,
timeout=20000,
ignore_backoff=ignore_backoff,
)
defer.returnValue(content)
@@ -229,6 +264,12 @@ class TransportLayerClient(object):
destination=destination,
path=path,
data=content,
# we want to do our best to send this through. The problem is
# that if it fails, we won't retry it later, so if the remote
# server was just having a momentary blip, the room will be out of
# sync.
ignore_backoff=True,
)
defer.returnValue(response)
@@ -242,6 +283,7 @@ class TransportLayerClient(object):
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
defer.returnValue(response)
@@ -249,10 +291,15 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def get_public_rooms(self, remote_server, limit, since_token,
search_filter=None):
search_filter=None, include_all_networks=False,
third_party_instance_id=None):
path = PREFIX + "/publicRooms"
args = {}
args = {
"include_all_networks": "true" if include_all_networks else "false",
}
if third_party_instance_id:
args["third_party_instance_id"] = third_party_instance_id,
if limit:
args["limit"] = [str(limit)]
if since_token:
@@ -264,6 +311,7 @@ class TransportLayerClient(object):
destination=remote_server,
path=path,
args=args,
ignore_backoff=True,
)
defer.returnValue(response)
@@ -341,6 +389,32 @@ class TransportLayerClient(object):
)
defer.returnValue(content)
@defer.inlineCallbacks
@log_function
def query_user_devices(self, destination, user_id, timeout):
"""Query the devices for a user id hosted on a remote server.
Response:
{
"stream_id": "...",
"devices": [ { ... } ]
}
Args:
destination(str): The server to query.
user_id(str): The user id of the user whose devices to query.
timeout(int): How long to wait for the response.
Returns:
A dict containing the device keys.
"""
path = PREFIX + "/user/devices/" + user_id
content = yield self.client.get_json(
destination=destination,
path=path,
timeout=timeout,
)
defer.returnValue(content)
@defer.inlineCallbacks
@log_function
def claim_client_keys(self, destination, query_content, timeout):
@@ -381,7 +455,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def get_missing_events(self, destination, room_id, earliest_events,
latest_events, limit, min_depth):
latest_events, limit, min_depth, timeout):
path = PREFIX + "/get_missing_events/%s" % (room_id,)
content = yield self.client.post_json(
@@ -392,7 +466,423 @@ class TransportLayerClient(object):
"min_depth": int(min_depth),
"earliest_events": earliest_events,
"latest_events": latest_events,
}
},
timeout=timeout,
)
defer.returnValue(content)
@log_function
def get_group_profile(self, destination, group_id, requester_user_id):
"""Get a group profile
"""
path = PREFIX + "/groups/%s/profile" % (group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def update_group_profile(self, destination, group_id, requester_user_id, content):
"""Update a remote group profile
Args:
destination (str)
group_id (str)
requester_user_id (str)
content (dict): The new profile of the group
"""
path = PREFIX + "/groups/%s/profile" % (group_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def get_group_summary(self, destination, group_id, requester_user_id):
"""Get a group summary
"""
path = PREFIX + "/groups/%s/summary" % (group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_rooms_in_group(self, destination, group_id, requester_user_id):
"""Get all rooms in a group
"""
path = PREFIX + "/groups/%s/rooms" % (group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
def add_room_to_group(self, destination, group_id, requester_user_id, room_id,
content):
"""Add a room to a group
"""
path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
def update_room_in_group(self, destination, group_id, requester_user_id, room_id,
config_key, content):
"""Update room in group
"""
path = PREFIX + "/groups/%s/room/%s/config/%s" % (group_id, room_id, config_key,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
def remove_room_from_group(self, destination, group_id, requester_user_id, room_id):
"""Remove a room from a group
"""
path = PREFIX + "/groups/%s/room/%s" % (group_id, room_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_users_in_group(self, destination, group_id, requester_user_id):
"""Get users in a group
"""
path = PREFIX + "/groups/%s/users" % (group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_invited_users_in_group(self, destination, group_id, requester_user_id):
"""Get users that have been invited to a group
"""
path = PREFIX + "/groups/%s/invited_users" % (group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def accept_group_invite(self, destination, group_id, user_id, content):
"""Accept a group invite
"""
path = PREFIX + "/groups/%s/users/%s/accept_invite" % (group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def invite_to_group(self, destination, group_id, user_id, requester_user_id, content):
"""Invite a user to a group
"""
path = PREFIX + "/groups/%s/users/%s/invite" % (group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def invite_to_group_notification(self, destination, group_id, user_id, content):
"""Sent by group server to inform a user's server that they have been
invited.
"""
path = PREFIX + "/groups/local/%s/users/%s/invite" % (group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
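Note the two path families: invite_to_group is addressed to the group server, while invite_to_group_notification is sent by the group server to the invited user's own homeserver under /groups/local/. An illustration with invented ids (and PREFIX assumed to be the v1 federation prefix):

    PREFIX = "/_matrix/federation/v1"  # assumed value of the module constant
    group_id, user_id = "+team:example.com", "@bob:their-server.example"
    to_group_server = PREFIX + "/groups/%s/users/%s/invite" % (group_id, user_id)
    to_users_server = PREFIX + "/groups/local/%s/users/%s/invite" % (group_id, user_id)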
@log_function
def remove_user_from_group(self, destination, group_id, requester_user_id,
user_id, content):
"""Remove a user fron a group
"""
path = PREFIX + "/groups/%s/users/%s/remove" % (group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def remove_user_from_group_notification(self, destination, group_id, user_id,
content):
"""Sent by group server to inform a user's server that they have been
kicked from the group.
"""
path = PREFIX + "/groups/local/%s/users/%s/remove" % (group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def renew_group_attestation(self, destination, group_id, user_id, content):
"""Sent by either a group server or a user's server to periodically update
the attestations
"""
path = PREFIX + "/groups/%s/renew_attestation/%s" % (group_id, user_id)
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
@log_function
def update_group_summary_room(self, destination, group_id, user_id, room_id,
category_id, content):
"""Update a room entry in a group summary
"""
if category_id:
path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % (
group_id, category_id, room_id,
)
else:
path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": user_id},
data=content,
ignore_backoff=True,
)
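update_group_summary_room and its delete counterpart below branch on category_id: with a category the room is filed under it, without one it lands in the group's uncategorised summary. The two resulting paths, with invented ids:

    group_id, room_id = "+team:example.com", "!chat:example.com"
    with_category = "/groups/%s/summary/categories/%s/rooms/%s" % (
        group_id, "projects", room_id,  # "projects" is an invented category_id
    )
    without_category = "/groups/%s/summary/rooms/%s" % (group_id, room_id)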
@log_function
def delete_group_summary_room(self, destination, group_id, user_id, room_id,
category_id):
"""Delete a room entry in a group summary
"""
if category_id:
path = PREFIX + "/groups/%s/summary/categories/%s/rooms/%s" % (
group_id, category_id, room_id,
)
else:
path = PREFIX + "/groups/%s/summary/rooms/%s" % (group_id, room_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": user_id},
ignore_backoff=True,
)
@log_function
def get_group_categories(self, destination, group_id, requester_user_id):
"""Get all categories in a group
"""
path = PREFIX + "/groups/%s/categories" % (group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_group_category(self, destination, group_id, requester_user_id, category_id):
"""Get category info in a group
"""
path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def update_group_category(self, destination, group_id, requester_user_id, category_id,
content):
"""Update a category in a group
"""
path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def delete_group_category(self, destination, group_id, requester_user_id,
category_id):
"""Delete a category in a group
"""
path = PREFIX + "/groups/%s/categories/%s" % (group_id, category_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_group_roles(self, destination, group_id, requester_user_id):
"""Get all roles in a group
"""
path = PREFIX + "/groups/%s/roles" % (group_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def get_group_role(self, destination, group_id, requester_user_id, role_id):
"""Get a roles info
"""
path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
return self.client.get_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def update_group_role(self, destination, group_id, requester_user_id, role_id,
content):
"""Update a role in a group
"""
path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def delete_group_role(self, destination, group_id, requester_user_id, role_id):
"""Delete a role in a group
"""
path = PREFIX + "/groups/%s/roles/%s" % (group_id, role_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
@log_function
def update_group_summary_user(self, destination, group_id, requester_user_id,
user_id, role_id, content):
"""Update a users entry in a group
"""
if role_id:
path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
group_id, role_id, user_id,
)
else:
path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
return self.client.post_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
data=content,
ignore_backoff=True,
)
@log_function
def delete_group_summary_user(self, destination, group_id, requester_user_id,
user_id, role_id):
"""Delete a users entry in a group
"""
if role_id:
path = PREFIX + "/groups/%s/summary/roles/%s/users/%s" % (
group_id, role_id, user_id,
)
else:
path = PREFIX + "/groups/%s/summary/users/%s" % (group_id, user_id,)
return self.client.delete_json(
destination=destination,
path=path,
args={"requester_user_id": requester_user_id},
ignore_backoff=True,
)
def bulk_get_publicised_groups(self, destination, user_ids):
"""Get the groups a list of users are publicising
"""
path = PREFIX + "/get_groups_publicised"
content = {"user_ids": user_ids}
return self.client.post_json(
destination=destination,
path=path,
data=content,
ignore_backoff=True,
)
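A hedged usage sketch for bulk_get_publicised_groups; the destination and user ids are invented, and the response shape shown is an assumption (a mapping from user id to the group ids they publicise):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def publicised_groups_for(transport, user_ids):
        result = yield transport.bulk_get_publicised_groups(
            destination="remote.example.com",
            user_ids=user_ids,
        )
        # plausibly: {"users": {"@alice:remote.example.com": ["+team:..."]}}
        defer.returnValue(result)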
