Compare commits

1395 Commits

Author SHA1 Message Date
Richard van der Hoff
3504982cb7 changelog for 0.33.3 2018-08-22 14:07:44 +01:00
Richard van der Hoff
4e5a4549b6 bump version to 0.33.3 2018-08-22 14:07:10 +01:00
Richard van der Hoff
9b7d9d8ba0 Update attributions and PR links in changelog 2018-08-22 14:06:20 +01:00
Richard van der Hoff
d7585a4c83 Merge pull request #3732 from matrix-org/rav/fix_gdpr_consent
Fix 500 error from /consent form
2018-08-22 09:15:06 +01:00
Richard van der Hoff
afb4b490a4 changelog 2018-08-21 23:19:14 +01:00
Richard van der Hoff
f7bf181a90 fix another consent encoding fail 2018-08-21 23:14:25 +01:00
Richard van der Hoff
f7baff6f7b Fix 500 error from /consent form
Fixes #3731
2018-08-21 22:47:07 +01:00
Amber Brown
3b5b64ac99 changelog 2018-08-21 03:48:55 +10:00
Amber Brown
23d7e63a4a Merge pull request #3723 from matrix-org/rav/fix_logcontext_disaster
Fix exceptions when a connection is closed before we read the headers
2018-08-21 03:47:52 +10:00
Richard van der Hoff
012d612f9d changelog 2018-08-20 18:26:27 +01:00
Richard van der Hoff
be6527325a Fix exceptions when a connection is closed before we read the headers
This fixes bugs introduced in #3700, by making sure that we behave sanely
when an incoming connection is closed before the headers are read.
2018-08-20 18:21:10 +01:00
Richard van der Hoff
55e6bdf287 Robustness fix for logcontext filter
Make the logcontext filter not explode if it somehow ends up with a logcontext
of None, since that infinite-loops the whole logging system.
2018-08-20 18:20:07 +01:00
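
A minimal sketch of that hardening, with hypothetical names (`_thread_local`, `current_context`, `record.request`) standing in for Synapse's logcontext machinery:

```python
import logging
import threading

# Hypothetical holder for the per-thread logcontext.
_thread_local = threading.local()


class LoggingContextFilter(logging.Filter):
    def filter(self, record):
        context = getattr(_thread_local, "current_context", None)
        # Tolerate a None context: raising an exception here would itself
        # be logged, re-entering this filter and looping forever.
        record.request = getattr(context, "request", None)
        return True
```
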
Amber Brown
80bf7d3580 changelog 2018-08-21 00:01:14 +10:00
Amber Brown
9a2f960736 version 2018-08-21 00:00:19 +10:00
Amber Brown
324525f40c Port over enough to get some sytests running on Python 3 (#3668) 2018-08-20 23:54:49 +10:00
Erik Johnston
cf6f9a8b53 Merge pull request #3719 from matrix-org/erikj/use_cache_fact
Use get_cache_factor_for function for `state_cache`
2018-08-20 13:33:35 +01:00
Erik Johnston
48a910e128 Newsfile 2018-08-20 13:33:20 +01:00
Erik Johnston
f2a48d87df Use get_cache_factor_for function for state_cache
This allows the cache factor for `state_cache` to be individually
specified in the environment
2018-08-20 13:01:46 +01:00
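
A sketch of such a helper, assuming the `SYNAPSE_CACHE_FACTOR*` environment-variable convention: a per-cache override wins, falling back to the global factor.

```python
import os

# Global cache factor, overridable per cache via a suffixed variable,
# e.g. SYNAPSE_CACHE_FACTOR_STATE_CACHE for the state_cache.
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.5))


def get_cache_factor_for(cache_name):
    env_var = "SYNAPSE_CACHE_FACTOR_" + cache_name.upper()
    factor = os.environ.get(env_var)
    if factor:
        return float(factor)
    return CACHE_SIZE_FACTOR
```
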
Erik Johnston
2aa7cc6a46 Merge pull request #3713 from matrix-org/erikj/fixup_fed_logging
Fix logging bug in EDU handling over replication
2018-08-20 10:51:45 +01:00
Richard van der Hoff
3cef867cc1 Merge pull request #3709 from matrix-org/rav/logcontext_for_replication_commands
Logcontexts for replication command handlers
2018-08-17 16:22:07 +01:00
Richard van der Hoff
c144252a8c Merge pull request #3710 from matrix-org/rav/logcontext_for_pusher_updates
Fix logcontexts for running pushers
2018-08-17 16:21:49 +01:00
Amber Brown
c334ca67bb Integrate presence from hotfixes (#3694) 2018-08-18 01:08:45 +10:00
Amber Brown
04f5d2db62 Remove v1/register's broken shared secret functionality (#3703) 2018-08-18 00:55:01 +10:00
Richard van der Hoff
63260397c6 Merge pull request #3701 from matrix-org/rav/use_producer_for_responses
Use a producer to stream back responses
2018-08-17 14:58:45 +01:00
Richard van der Hoff
3f8709ffe4 Merge pull request #3700 from matrix-org/rav/wait_for_producers
Refactor request logging code
2018-08-17 14:57:45 +01:00
Neil Johnson
4c22b4047b Merge pull request #3707 from matrix-org/neilj/limit_exceeded_error
add new error type ResourceLimit
2018-08-17 13:33:54 +00:00
Neil Johnson
9fd161c6fb Merge branch 'neilj/limit_exceeded_error' of github.com:matrix-org/synapse into neilj/limit_exceeded_error 2018-08-17 13:58:40 +01:00
Neil Johnson
0195dfbf52 server limits config docs 2018-08-17 13:58:25 +01:00
Neil Johnson
69c49d3fa3 Merge branch 'develop' into neilj/limit_exceeded_error 2018-08-17 12:44:26 +00:00
Neil Johnson
a2d872e7b3 Merge pull request #3708 from matrix-org/neilj/resource_Limit_block_event_creation
Neilj/resource limit block event creation
2018-08-17 12:42:59 +00:00
Amber Brown
fa27073b14 Merge pull request #3712 from matrix-org/travis/register-admin-docs
Update the admin register documentation to return a real user ID
2018-08-17 22:31:02 +10:00
Erik Johnston
73737bd0f9 Newsfile 2018-08-17 11:14:05 +01:00
Erik Johnston
3b2dcfff78 Fix logging bug in EDU handling over replication 2018-08-17 11:11:06 +01:00
Neil Johnson
521d369e7a remove errant yield 2018-08-17 10:12:11 +01:00
Travis Ralston
b99a0f3941 Create 3712.misc 2018-08-17 02:47:31 -06:00
Travis Ralston
a8ffc27db7 Update the admin register documentation to return a real user ID
Presumably this is the intention anyway. I've also updated the domain part to be something more along the lines of what people might expect.
2018-08-17 02:46:25 -06:00
Richard van der Hoff
4c9da1440f changelog 2018-08-17 00:50:36 +01:00
Richard van der Hoff
d9efd87d55 changelog 2018-08-17 00:49:22 +01:00
Richard van der Hoff
0e8d78f6aa Logcontexts for replication command handlers
Run the handlers for replication commands as background processes. This should
improve the visibility in our metrics, and reduce the number of "running db
transaction from sentinel context" warnings.

Ideally it means converting the things that fire off deferreds into the night
into things that actually return a Deferred when they are done. I've made a bit
of a stab at this, but it will probably be leaky.
2018-08-17 00:43:43 +01:00
Richard van der Hoff
66f7dc8c87 Fix logcontexts for running pushers
First of all, avoid resetting the logcontext before running the pushers, to fix
the "Starting db txn 'get_all_updated_receipts' from sentinel context" warning.

Instead, give them their own "background process" logcontexts.
2018-08-17 00:32:39 +01:00
Neil Johnson
7e51342196 For resource limit blocked users, prevent writing into rooms 2018-08-16 23:05:20 +01:00
Neil Johnson
bcfeb44afe call reap on start up and fix under reaping bug 2018-08-16 22:55:32 +01:00
Neil Johnson
372bf073c1 block event creation and room creation on hitting resource limits 2018-08-16 21:25:16 +01:00
Neil Johnson
7edd11623d add new error type ResourceLimit 2018-08-16 21:04:30 +01:00
Neil Johnson
13ad9930c8 add new error type ResourceLimit 2018-08-16 18:02:02 +01:00
Erik Johnston
b4d6db5c4a Merge pull request #3705 from matrix-org/erikj/fix_inbound_fed_worker
Fix inbound federation on reader worker
2018-08-16 15:42:50 +01:00
Erik Johnston
aae86a81ef Newsfile 2018-08-16 15:41:48 +01:00
Erik Johnston
7d5b1a60a3 Fix inbound federation on reader worker
Inbound federation requires calculating push, which in turn relies on
having access to account data.
2018-08-16 15:32:20 +01:00
Will Hunt
c151b32b1d Add GET media/v1/config (#3184) 2018-08-16 14:23:38 +01:00
Matthew Hodgson
762a758fea lazyload aware /messages (#3589) 2018-08-16 14:22:47 +01:00
Matthew Hodgson
3f543dc021 initial cut at a room summary API (#3574) 2018-08-16 09:46:50 +01:00
Richard van der Hoff
859989f958 Merge pull request #3686 from matrix-org/rav/changelog_links_to_prs
Changelog should link to PRs
2018-08-15 22:22:38 +01:00
Neil Johnson
81d727efa9 Merge pull request #3689 from matrix-org/neilj/fix_off_by_1+maus
Fix Mau off by one errors
2018-08-15 16:19:41 +00:00
Neil Johnson
da7fe43899 Fix mau blocking calculation bug on login 2018-08-15 17:03:30 +01:00
Richard van der Hoff
e2e846628d Remove incorrectly re-added changelog files
These were removed for the 0.33.2 release, but were incorrectly re-added in
2018-08-15 16:38:46 +01:00
Matthew Hodgson
2f78f432c4 speed up /members and add at= and membership params (#3568) 2018-08-15 16:35:22 +01:00
Neil Johnson
86a00e05e1 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-15 16:27:08 +01:00
Richard van der Hoff
d82fa0e9a6 changelog 2018-08-15 15:12:35 +01:00
Amber Brown
a87af25fbb Fix the tests 2018-08-15 15:12:23 +01:00
Richard van der Hoff
afcd655ab6 Use a producer to stream back responses
The problem with dumping all of the json response into the Request object at
once is that doing so starts the timeout for the next request to be received:
so if it takes longer than 60s to stream back the response to the client, the
client never gets it.

The correct solution is to use a Producer; then the timeout is only started
once all of the content is sent over the TCP connection.
2018-08-15 15:04:16 +01:00
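
A minimal sketch of the producer approach, not Synapse's actual implementation: a Twisted pull producer hands the encoded body to the transport chunk by chunk, so the next-request timeout is only armed once everything has been written.

```python
from twisted.internet.interfaces import IPullProducer
from zope.interface import implementer


@implementer(IPullProducer)
class JsonBodyProducer(object):
    CHUNK_SIZE = 64 * 1024

    def __init__(self, request, body):
        self._request = request
        self._body = body  # the pre-encoded JSON response, as bytes
        self._offset = 0

    def resumeProducing(self):
        # Called by the transport whenever it is ready for more data.
        chunk = self._body[self._offset:self._offset + self.CHUNK_SIZE]
        self._offset += self.CHUNK_SIZE
        if chunk:
            self._request.write(chunk)
        else:
            self._request.unregisterProducer()
            self._request.finish()

    def stopProducing(self):
        # The client went away; drop the body and stop writing.
        self._body = b""


# Usage: request.registerProducer(JsonBodyProducer(request, json_bytes), False)
```
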
Erik Johnston
dc56c47dc0 Merge pull request #3653 from matrix-org/erikj/split_federation
Move more federation APIs to workers
2018-08-15 14:59:02 +01:00
Neil Johnson
4601129c44 Merge pull request #3687 from matrix-org/neilj/admin_email
support admin_email config and pass through into blocking errors,
2018-08-15 13:52:25 +00:00
Erik Johnston
ef184caf30 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_federation 2018-08-15 14:25:46 +01:00
Erik Johnston
488ffe6fdb Use federation handler function rather than duplicate
This involves renaming _persist_events to be a public function.
2018-08-15 14:17:18 +01:00
Erik Johnston
773db62a22 Rename slave TransactionStore to SlaveTransactionStore 2018-08-15 14:17:06 +01:00
Neil Johnson
39176f27f7 remove changelog referencing another PR 2018-08-15 13:53:51 +01:00
Richard van der Hoff
2de813a611 changelog 2018-08-15 13:49:03 +01:00
Richard van der Hoff
eaaa2248ff Refactor request logging code
This commit moves a bunch of the logic for deciding when to log the receipt and
completion of HTTP requests into SynapseRequest, rather than in the request
handling wrappers.

Advantages of this are:
 * we get logs for *all* requests (including OPTIONS and HEADs), rather than
   just those that end up hitting handlers we've remembered to decorate
   correctly.

 * when a request handler wires up a Producer (as the media stuff does
   currently, and as other things will do soon), we log at the point that all
   of the traffic has been sent to the client.
2018-08-15 13:47:52 +01:00
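
A sketch of the shape of that refactoring, assuming a `twisted.web.server.Request` subclass along the lines of SynapseRequest: logging receipt in `process()` and completion in `finish()` covers every request without decorating individual handlers.

```python
import logging

from twisted.web.server import Request

logger = logging.getLogger(__name__)


class LoggingRequest(Request):
    def process(self):
        # Runs for every request, including OPTIONS and HEAD.
        logger.info("Received request: %s %s", self.method, self.uri)
        Request.process(self)

    def finish(self):
        # Runs once all content (including producer traffic) is sent.
        Request.finish(self)
        logger.info("Completed request: %s %s", self.method, self.uri)
```
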
Neil Johnson
87a824bad1 clean up AuthError 2018-08-15 11:58:03 +01:00
Neil Johnson
c4eb97518f Merge branch 'neilj/update_limits_error_codes' of github.com:matrix-org/synapse into neilj/admin_email 2018-08-15 11:53:01 +01:00
Neil Johnson
55afba0fc5 update admin email to uri 2018-08-15 11:41:18 +01:00
Neil Johnson
75c663c7b9 update error codes 2018-08-15 11:27:48 +01:00
Neil Johnson
1c5e690a6b Merge pull request #3690 from matrix-org/neilj/change_prometheus_mau_metric_name
combine mau metrics into one group
2018-08-15 09:53:59 +00:00
Erik Johnston
fef2e65d12 Merge pull request #3667 from matrix-org/erikj/fixup_unbind
Don't fail requests to unbind 3pids for non supporting ID servers
2018-08-15 10:32:12 +01:00
Neil Johnson
596ce63576 update resource limit error codes 2018-08-15 10:23:28 +01:00
Neil Johnson
b8429c7c81 update error codes for resource limiting 2018-08-15 10:19:48 +01:00
Neil Johnson
ab035bdeac replace admin_email with admin_uri for greater flexibility 2018-08-15 10:16:41 +01:00
Neil Johnson
19b433e3f4 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/admin_email 2018-08-14 17:44:46 +01:00
Neil Johnson
b586b8b986 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 17:43:22 +01:00
Neil Johnson
70e48cbbb1 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/change_prometheus_mau_metric_name 2018-08-14 17:40:59 +01:00
Neil Johnson
aa3220df6a Merge pull request #3692 from matrix-org/neil/fix_postgres_test_initialise_reserved_users
Neil/fix postgres test initialise reserved users
2018-08-14 16:40:11 +00:00
Neil Johnson
7277216d01 fix setup_test_homeserver to be postgres compatible 2018-08-14 17:14:39 +01:00
Neil Johnson
cd0c749c4f Fix missing yield in synapse.storage.monthly_active_users.initialise_reserved_users 2018-08-14 17:06:51 +01:00
Neil Johnson
9ecbaf8ba8 adding missing yield 2018-08-14 16:55:28 +01:00
Neil Johnson
1522ed9c07 in case max_mau is less than I think 2018-08-14 16:52:30 +01:00
Neil Johnson
629f390035 Rename MAU prometheus metric 2018-08-14 16:38:05 +01:00
Neil Johnson
e5962f845c pep8 2018-08-14 16:36:14 +01:00
Neil Johnson
e7d091fb86 combine mau metrics into one group 2018-08-14 16:26:55 +01:00
Neil Johnson
414d54b61a Merge pull request #3670 from matrix-org/neilj/mau_sync_block
Block ability to read via sync if mau limit exceeded
2018-08-14 15:21:31 +00:00
Neil Johnson
2545993ce4 make comments clearer 2018-08-14 15:48:12 +01:00
Neil Johnson
06b331ff40 fix off by 1 errors 2018-08-14 15:28:15 +01:00
Neil Johnson
8f9a7eb58d support admin_email config and pass through into blocking errors, return AuthError in all cases 2018-08-14 15:11:54 +01:00
Neil Johnson
c74c71128d remove blank line 2018-08-14 15:06:24 +01:00
Neil Johnson
ed4bc3d2fc fix off by 1s on mau 2018-08-14 15:04:48 +01:00
Neil Johnson
bd92c8eaa7 Merge branch 'neilj/admin_email' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 14:56:45 +01:00
Neil Johnson
99ebaed8e6 Update register.py
remove comments
2018-08-14 14:55:55 +01:00
Neil Johnson
9b5bf3d858 Merge branch 'neilj/admin_email' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 14:51:38 +01:00
Neil Johnson
e25d87d97b Merge branch 'neilj/mau_sync_block' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 14:32:18 +01:00
Amber Brown
614e6d517d Dockerised sytest (#3660) 2018-08-14 21:30:34 +10:00
Amber Brown
0bdc362598 Merge pull request #3681 from matrix-org/neilj/fix_reap_users_in_postgres
Neilj/fix reap users in postgres
2018-08-14 21:29:35 +10:00
Amber Brown
591bf87c6a Merge remote-tracking branch 'origin/develop' into neilj/fix_reap_users_in_postgres 2018-08-14 20:56:23 +10:00
Amber Brown
bdfbd934d6 Implement a new test baseclass to cut down on boilerplate (#3684) 2018-08-14 20:53:43 +10:00
Jan Christian Grünhage
8f0c430ca4 Merge pull request #3669 from matrix-org/jcgruenhage/update-docker-base
update docker base-image to alpine 3.8
2018-08-14 11:43:59 +02:00
Neil Johnson
1f24d8681b set admin email via config 2018-08-13 21:26:43 +01:00
Neil Johnson
f4b49152e2 support admin_email config and pass through into blocking errors, return AuthError in all cases 2018-08-13 21:09:47 +01:00
Richard van der Hoff
b6d55588e1 Changelog should link to PRs
The changelog makes much more sense if it consistently links to PRs, rather
than mostly PRs with the occasional issue #.
2018-08-13 18:56:55 +01:00
Richard van der Hoff
8ba2dac6ea Name changelog after PR, not bug 2018-08-13 18:54:01 +01:00
Richard van der Hoff
50bcaf1a27 Merge pull request #3677 from andrewshadura/master
Use recaptcha_ajax.js directly from Google
2018-08-13 18:52:56 +01:00
Neil Johnson
ce7de9ae6b Revert "support admin_email config and pass through into blocking errors, return AuthError in all cases"
This reverts commit 0d43f991a1.
2018-08-13 18:06:18 +01:00
Neil Johnson
0d43f991a1 support admin_email config and pass through into blocking errors, return AuthError in all cases 2018-08-13 18:00:23 +01:00
Amber Brown
99dd975dae Run tests under PostgreSQL (#3423) 2018-08-13 16:47:46 +10:00
Neil Johnson
ac205a54b2 Fixes test_reap_monthly_active_users so it passes under postgres 2018-08-11 22:50:09 +01:00
Neil Johnson
31fa743567 fix sqlite/postgres incompatibility in reap_monthly_active_users 2018-08-11 22:38:34 +01:00
Amber Brown
a001038b92 Merge pull request #3679 from matrix-org/hawkowl/blackify-tests
Blackify the tests
2018-08-11 00:12:56 +10:00
Amber Brown
807449d8f2 fix up a forced long line 2018-08-11 00:01:43 +10:00
Richard van der Hoff
c31793a784 Merge branch 'rav/fix_linearizer_cancellation' into develop 2018-08-10 14:57:27 +01:00
Richard van der Hoff
c08f9d95b2 log *after* reloading log config
... because logging *before* reloading means the log message gets lost in the old MemoryLogger
2018-08-10 14:56:48 +01:00
Amber Brown
dd16e7dfcc changelog 2018-08-10 23:54:46 +10:00
black
8b3d9b6b19 Run black. 2018-08-10 23:54:09 +10:00
Amber Brown
b37c472419 Rename async to async_helpers because async is a keyword on Python 3.7 (#3678) 2018-08-10 23:50:21 +10:00
Andrej Shadura
c75b71a397 Use recaptcha_ajax.js directly from Google
The script recaptcha_ajax.js contains minified non-free code which
we probably cannot redistribute. Since it talks to Google servers
anyway, it is better to just download it from Google directly instead
of shipping it.

This fixes #1932.
2018-08-10 13:36:14 +02:00
Richard van der Hoff
3c0213a217 Merge pull request #3439 from vojeroen/send_sni_for_federation_requests
send SNI for federation requests
2018-08-10 12:23:54 +01:00
Richard van der Hoff
178ab76ac0 changelog 2018-08-10 11:05:23 +01:00
Richard van der Hoff
638d35ef08 Fix linearizer cancellation on twisted < 18.7
Turns out that cancellation of inlineDeferreds didn't really work properly
until Twisted 18.7. This commit refactors Linearizer.queue to avoid
inlineCallbacks.
2018-08-10 10:59:09 +01:00
Jeroen
2e9c73e8ca more generic conversion of str/bytes to unicode 2018-08-09 21:31:26 +02:00
Jeroen
64899341dc include private functions from twisted 2018-08-09 21:04:22 +02:00
Neil Johnson
885ea9c602 rename _user_last_seen_monthly_active 2018-08-09 18:02:12 +01:00
Neil Johnson
c1f9dec92a fix errant parenthesis 2018-08-09 17:43:26 +01:00
Neil Johnson
04df714259 fix imports 2018-08-09 17:41:52 +01:00
Neil Johnson
09cf130898 only block on sync where user is not part of the mau cohort 2018-08-09 17:39:12 +01:00
Neil Johnson
c6b28fb479 Where server is disabled, block ability for locked out users to read new messages 2018-08-09 12:45:58 +01:00
Jan Christian Grünhage
d967653705 update docker base-image to alpine 3.8 2018-08-09 13:31:10 +02:00
Neil Johnson
69ce057ea6 block sync if auth checks fail 2018-08-09 12:26:27 +01:00
Neil Johnson
3dce9050cf Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_sync_block 2018-08-09 11:42:02 +01:00
Neil Johnson
0ad98e38d0 Merge pull request #3655 from matrix-org/neilj/disable_hs
Flag to disable HS without disabling federation
2018-08-09 10:41:43 +00:00
Neil Johnson
a5ef110749 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_sync_block 2018-08-09 11:40:37 +01:00
Erik Johnston
5c6226707d Update docs/workers.rst 2018-08-09 10:37:42 +01:00
Erik Johnston
8876ce7f77 Newsfile 2018-08-09 10:37:42 +01:00
Erik Johnston
b179537f2a Move clean_room_for_join to master 2018-08-09 10:37:38 +01:00
Erik Johnston
72d1902bbe Fixup doc comments 2018-08-09 10:23:49 +01:00
Amber Brown
984376745b Merge branch 'master' into develop 2018-08-09 19:23:47 +10:00
Amber Brown
67dbe4c899 0.33.2 changelog 2018-08-09 19:21:07 +10:00
Amber Brown
dafe90a6be 0.33.2 2018-08-09 19:20:41 +10:00
Amber Brown
3d6f9fba19 fix the changelog & template 2018-08-09 19:19:30 +10:00
Erik Johnston
5785b93711 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_federation 2018-08-09 10:16:16 +01:00
Erik Johnston
bf7598f582 Log when we 3pid/unbind request fails 2018-08-09 10:09:56 +01:00
Erik Johnston
2bdafaf3c1 Merge pull request #3632 from matrix-org/erikj/refactor_repl_servlet
Add helper base class for generating new replication endpoints
2018-08-09 10:06:23 +01:00
Erik Johnston
62564797f5 Fixup wording and remove dead code 2018-08-09 09:56:10 +01:00
Amber Brown
2511f3f8a0 Test fixes for Python 3 (#3647) 2018-08-09 12:22:01 +10:00
Richard van der Hoff
bb89c84614 Merge pull request #3664 from matrix-org/rav/federation_metrics
more metrics for the federation and appservice senders
2018-08-08 23:16:58 +01:00
Jeroen
d5c0ce4cad updated docstring for ServerContextFactory 2018-08-08 19:25:01 +02:00
Neil Johnson
e92fb00f32 sync auth blocking 2018-08-08 17:54:49 +01:00
Neil Johnson
839a317c96 fix pep8 too many lines 2018-08-08 17:39:04 +01:00
Neil Johnson
5298d79fb5 Merge branch 'develop' into neilj/disable_hs 2018-08-08 16:13:03 +00:00
Richard van der Hoff
8521ae13e3 Merge pull request #3654 from matrix-org/rav/room_versions
Support for room versioning
2018-08-08 17:10:53 +01:00
Neil Johnson
d2f3ef98ac Merge branch 'develop' into neilj/disable_hs 2018-08-08 15:55:47 +00:00
Neil Johnson
54685d294d Ability to disable client/server Synapse via conf toggle 2018-08-08 15:38:54 +01:00
Neil Johnson
cc187debf3 Merge pull request #3662 from matrix-org/neilj/reserved_users
Reserved users for MAU limits
2018-08-08 14:33:17 +00:00
Neil Johnson
2b5baebeba Merge branch 'develop' of github.com:matrix-org/synapse into neilj/disable_hs 2018-08-08 13:47:07 +01:00
Neil Johnson
be59910b93 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/reserved_users 2018-08-08 13:45:21 +01:00
Neil Johnson
990fe9fc23 Merge pull request #3633 from matrix-org/neilj/mau_tracker
API for monthly_active_users table
2018-08-08 12:44:15 +00:00
Neil Johnson
312ae74746 typos 2018-08-08 13:33:16 +01:00
Neil Johnson
d92675a486 Ability to whitelist specific threepids against monthly active user limiting 2018-08-08 12:06:42 +01:00
Erik Johnston
360ba89c50 Don't fail requests to unbind 3pids for non supporting ID servers
Older identity servers may not support the unbind 3pid request, so we
shouldn't fail the requests if we received one of 400/404/501. The
request still fails if we receive e.g. 500 responses, allowing clients
to retry requests on transient identity server errors that otherwise do
support the API.

Fixes #3661
2018-08-08 12:06:18 +01:00
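
A sketch of the tolerant error handling; the `http_client` object, its `post_json_get_json` method, and the stand-in exception are illustrative assumptions. 400/404/501 from the identity server mark the unbind as unsupported rather than failed, while other errors still propagate so clients can retry.

```python
from twisted.internet import defer


class HttpResponseException(Exception):
    """Stand-in for Synapse's exception carrying the HTTP status code."""

    def __init__(self, code):
        super(HttpResponseException, self).__init__("HTTP %d" % (code,))
        self.code = code


@defer.inlineCallbacks
def try_unbind_threepid(http_client, url, body):
    try:
        yield http_client.post_json_get_json(url, body)
    except HttpResponseException as e:
        if e.code in (400, 404, 501):
            # Older identity servers don't implement unbind: not fatal.
            defer.returnValue(False)
        # e.g. a 500: propagate so the client can retry the request.
        raise
    defer.returnValue(True)
```
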
Neil Johnson
7f3d897e7a mock config.max_mau_value 2018-08-08 11:46:23 +01:00
Erik Johnston
bebe325e6c Rename POST param to METHOD 2018-08-08 10:36:18 +01:00
Erik Johnston
5011417632 Fixup logging and docstrings 2018-08-08 10:29:58 +01:00
Richard van der Hoff
e6d73b8582 changelog 2018-08-07 22:14:35 +01:00
Richard van der Hoff
bab94da79c fix metric name 2018-08-07 22:11:45 +01:00
Richard van der Hoff
53bca4690b more metrics for the federation and appservice senders 2018-08-07 19:09:48 +01:00
Neil Johnson
ef3589063a prevent total number of reserved users being too large 2018-08-07 17:59:27 +01:00
Neil Johnson
e8eba2b4e3 implement reserved users for mau limits 2018-08-07 17:49:43 +01:00
Richard van der Hoff
e5d2c67844 Merge pull request #3658 from matrix-org/rav/fix_event_persisted_position_metrics
Fix occasional glitches in the synapse_event_persisted_position metric
2018-08-07 14:48:08 +01:00
Richard van der Hoff
3523f5432a Don't expose default_room_version as config opt 2018-08-07 12:51:57 +01:00
Richard van der Hoff
9b92720d88 fix event lag graph 2018-08-07 12:27:34 +01:00
Amber Brown
23cd86554e fix changelog 2018-08-07 21:20:42 +10:00
Amber Brown
848431be1d fix for rc1 2018-08-07 21:13:22 +10:00
Amber Brown
cf8ef11f35 changelog 2018-08-07 21:11:24 +10:00
Amber Brown
3b9662339b version 2018-08-07 21:10:52 +10:00
Richard van der Hoff
865d07cd38 changelog 2018-08-07 10:02:01 +01:00
Richard van der Hoff
ca9bc1f4fe Fix occasional glitches in the synapse_event_persisted_position metric
Every so often this metric glitched to a negative number. I'm assuming it was
due to backfilled events.
2018-08-07 10:00:03 +01:00
Neil Johnson
a74b25faaa WIP building out mau reserved users 2018-08-06 23:25:25 +01:00
Neil Johnson
fbe255f9a4 add default mau_limits_reserved_threepids 2018-08-06 23:24:54 +01:00
Neil Johnson
7daa8a78c5 load mau limit threepids 2018-08-06 22:55:05 +01:00
Neil Johnson
89834d9a29 Merge branch 'neilj/mau_tracker' of github.com:matrix-org/synapse into neilj/disable_hs 2018-08-06 21:56:56 +01:00
Neil Johnson
e54794f5b6 Fix postgres compatibility bug 2018-08-06 21:55:54 +01:00
Neil Johnson
7bcf126b18 Merge branch 'neilj/mau_tracker' of github.com:matrix-org/synapse into neilj/disable_hs 2018-08-06 21:39:44 +01:00
Neil Johnson
1911c037cb update comments to reflect new sig 2018-08-06 18:01:46 +01:00
Neil Johnson
33bd07d062 Merge branch 'neilj/mau_tracker' of github.com:matrix-org/synapse into neilj/disable_hs 2018-08-06 17:52:28 +01:00
Neil Johnson
16d78be315 make use of _simple_select_one_onecol, improved comments 2018-08-06 17:51:15 +01:00
Richard van der Hoff
7f3f108561 changelog 2018-08-06 16:30:52 +01:00
Richard van der Hoff
19a17068f1 Check m.room.create for sane room_versions 2018-08-06 16:11:24 +01:00
Erik Johnston
96a9a29645 Pull in necessary stores in federation_reader 2018-08-06 15:23:57 +01:00
Erik Johnston
62ace05c45 Move necessary storage functions to worker classes 2018-08-06 15:23:38 +01:00
Erik Johnston
1e2bed9656 Import all functions from TransactionStore 2018-08-06 15:23:38 +01:00
Erik Johnston
a3f5bf79a0 Add EDU/query handling over replication 2018-08-06 15:23:31 +01:00
Erik Johnston
e26dbd82ef Add replication APIs for persisting federation events 2018-08-06 15:02:28 +01:00
Erik Johnston
051a99c400 Fix isort 2018-08-06 14:29:31 +01:00
Richard van der Hoff
f900d50824 include known room versions in outgoing make_joins 2018-08-06 13:45:37 +01:00
Neil Johnson
42c6823827 disable HS from config 2018-08-04 22:07:04 +01:00
Neil Johnson
e40a510fbf py3 fix 2018-08-03 23:19:13 +01:00
Neil Johnson
d08296f9f2 remove unused import 2018-08-03 23:00:16 +01:00
Neil Johnson
886be75ad1 bug fixes 2018-08-03 22:29:03 +01:00
Will Hunt
16d9701892 Return M_NOT_FOUND when a profile could not be found. (#3596) 2018-08-03 19:08:05 +01:00
Neil Johnson
e10830e976 wip commit - tests failing 2018-08-03 17:55:50 +01:00
Richard van der Hoff
3777fa26aa sanity check response from make_join 2018-08-03 16:08:32 +01:00
Richard van der Hoff
0d63d93ca8 Enforce compatibility when processing make_join requests
Reject make_join requests from servers which do not support the room version.

Also include the room version in the response.
2018-08-03 16:08:32 +01:00
Richard van der Hoff
0ca459ea33 Basic support for room versioning
This is the first tranche of support for room versioning. It includes:
 * setting the default room version in the config file
 * new room_version param on the createRoom API
 * storing the version of newly-created rooms in the m.room.create event
 * fishing the version of existing rooms out of the m.room.create event
2018-08-03 16:08:32 +01:00
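
For illustration, fishing the version of an existing room out of the create event as described; the `room_version` key in the m.room.create content follows the Matrix spec, while the helper name is hypothetical.

```python
DEFAULT_ROOM_VERSION = "1"


def get_room_version(create_event_content):
    # Rooms created before versioning have no room_version key and are
    # treated as version "1".
    return create_event_content.get("room_version", DEFAULT_ROOM_VERSION)


# e.g. {"creator": "@alice:example.com", "room_version": "2"} -> "2"
```
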
Richard van der Hoff
15c1ae45e5 Docstrings for BaseFederationServlet
... to save me reverse-engineering this stuff again.
2018-08-03 16:08:32 +01:00
Neil Johnson
5593ff6773 fix (lots of) py3 test failures 2018-08-03 14:59:17 +01:00
Neil Johnson
b2aab04d2c fix py3 test failure 2018-08-03 14:12:56 +01:00
Neil Johnson
da90337d89 Add ability to limit number of monthly active users on the server 2018-08-03 14:05:20 +01:00
Neil Johnson
950807d93a fix caching and tests 2018-08-03 13:49:53 +01:00
Michael Kaye
df7f9871c1 Merge pull request #3644 from matrix-org/michaelkaye/refactor_docker_locations_v2
Refactor Dockerfile location
2018-08-03 13:44:35 +01:00
Neil Johnson
897c51d274 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_tracker 2018-08-03 13:40:47 +01:00
Erik Johnston
cb298ff623 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/refactor_repl_servlet 2018-08-03 09:25:15 +01:00
Michael Kaye
42960aa047 Update README.md
Link to contrib/docker
2018-08-03 09:16:01 +01:00
Michael Kaye
c3f596180f Update README.md
Link to docker/README.md
2018-08-03 09:15:19 +01:00
Michael Kaye
637b11b9ed Update README.rst
Link to contrib/docker
2018-08-03 09:13:54 +01:00
Michael Kaye
feacd13932 Update README.rst
wrap at 80ish
2018-08-03 09:10:41 +01:00
Neil Johnson
c0affa7b4f update generate_monthly_active_users, and reap_monthly_active_users 2018-08-02 23:03:01 +01:00
Neil Johnson
5e2f7b8084 typo 2018-08-02 22:47:48 +01:00
Neil Johnson
4ecb4bdac9 wip attempt at caching 2018-08-02 22:41:05 +01:00
Amber Brown
8a86d7ea96 Merge pull request #3645 from matrix-org/michaelkaye/mention_newsfragment
Mention the word "newsfragment" in CONTRIBUTING.rst
2018-08-03 04:40:18 +10:00
Michael Kaye
70baa61e95 newsfragment 2018-08-02 18:58:43 +01:00
Michael Kaye
4c4a123bd0 Mention the word "newsfragment" in CONTRIBUTING.rst 2018-08-02 18:53:44 +01:00
Michael Kaye
489949879e Add news entry 2018-08-02 18:49:50 +01:00
Michael Kaye
1758f4e1c7 Address SPAG issues 2018-08-02 18:21:34 +01:00
Michael Kaye
26a37f3d4d Do not include docker files in python build 2018-08-02 18:21:34 +01:00
Michael Kaye
0d25724419 Refactor docker locations and README.
This addresses #3224
2018-08-02 18:21:32 +01:00
Richard van der Hoff
1fa98495d0 Merge pull request #3639 from matrix-org/rav/refactor_error_handling
Clean up handling of errors from outbound requests
2018-08-02 17:38:24 +01:00
Richard van der Hoff
bdae8f2e68 Merge pull request #3638 from matrix-org/rav/refactor_federation_client_exception_handling
Factor out exception handling in federation_client
2018-08-02 17:37:46 +01:00
Neil Johnson
74b1d46ad9 do mau checks based on monthly_active_users table 2018-08-02 16:57:35 +01:00
Neil Johnson
9180061b49 remove unused count_monthly_users 2018-08-02 15:55:29 +01:00
Richard van der Hoff
704c3e6239 Merge branch 'master' into develop 2018-08-02 15:43:30 +01:00
Richard van der Hoff
43ecfe0b10 Merge tag 'v0.33.1'
Synapse 0.33.1 (2018-08-02)
===========================

SECURITY FIXES
--------------

- Fix a potential issue where servers could request events for rooms they have not joined. (`#3641 <https://github.com/matrix-org/synapse/issues/3641>`_)
- Fix a potential issue where users could see events in private rooms before they joined. (`#3642 <https://github.com/matrix-org/synapse/issues/3642>`_)
2018-08-02 15:40:44 +01:00
Richard van der Hoff
c2a83349f0 changelog: this is a security release 2018-08-02 15:35:42 +01:00
Richard van der Hoff
db1f33fb36 fix changelog typos 2018-08-02 15:33:53 +01:00
Richard van der Hoff
14a4e7d5a4 Prepare 0.33.1 2018-08-02 15:31:04 +01:00
Richard van der Hoff
50d9d97408 Merge pull request #3642 from matrix-org/rav/another_room_id_check
Check room visibility for /event/ requests
2018-08-02 15:21:59 +01:00
Richard van der Hoff
8cefc690c9 changelogs 2018-08-02 15:11:19 +01:00
Richard van der Hoff
0bf5ec0db7 Check room visibility for /event/ requests
Make sure that the user has permission to view the requested event for
/event/{eventId} and /room/{roomId}/event/{eventId} requests.

Also check that the event is in the given room for
/room/{roomId}/event/{eventId}, for sanity.
2018-08-02 15:03:27 +01:00
Richard van der Hoff
a937497cf5 Merge pull request #3641 from matrix-org/rav/room_id_check
Validation for events/rooms in fed requests
2018-08-02 14:22:05 +01:00
Richard van der Hoff
a013404292 changelog 2018-08-02 14:00:29 +01:00
Richard van der Hoff
14fa9d4d92 Avoid extra db lookups
Since we're about to look up the events themselves anyway, we can skip the
extra db queries here.
2018-08-02 13:55:51 +01:00
Neil Johnson
c4ffbecb68 fix test, update constructor call 2018-08-02 13:51:05 +01:00
Richard van der Hoff
0a65450d04 Validation for events/rooms in fed requests
When we get a federation request which refers to an event id, make sure that
said event is in the room the caller claims it is in.

(patch supplied by @turt2live)
2018-08-02 13:48:40 +01:00
Neil Johnson
00f99f74b1 insertion into monthly_active_users 2018-08-02 13:47:19 +01:00
Neil Johnson
4a6725d9d1 Merge branch 'neilj/mau_tracker' of github.com:matrix-org/synapse into neilj/mau_tracker 2018-08-02 11:04:18 +01:00
Neil Johnson
165e067033 Revert "change monthly_active_users table to be a single column"
This reverts commit ec716a35b2.
2018-08-02 10:59:58 +01:00
Erik Johnston
40c1c59cf4 Merge pull request #3621 from matrix-org/erikj/split_fed_store
Split out DB writes in federation handler
2018-08-02 10:41:42 +01:00
Neil Johnson
08281fe6b7 self.db_conn unused 2018-08-01 23:26:24 +01:00
Neil Johnson
c21d82bab3 normalise reaping query 2018-08-01 23:24:38 +01:00
Neil Johnson
ec716a35b2 change monthly_active_users table to be a single column 2018-08-01 17:54:37 +01:00
Neil Johnson
d766f26de9 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_tracker 2018-08-01 17:49:41 +01:00
Neil Johnson
085435e13a Merge pull request #3630 from matrix-org/neilj/mau_sign_in_log_in_limits
Initial impl of capping MAU
2018-08-01 15:58:45 +00:00
Richard van der Hoff
b8d7d3996b Merge pull request #3620 from fuzzmz/return-404-room-not-found
return 404 if room not found
2018-08-01 16:34:32 +01:00
Richard van der Hoff
908be65e64 changelog 2018-08-01 16:23:31 +01:00
Neil Johnson
b7f203a566 count_monthly_users is now async 2018-08-01 16:17:42 +01:00
Neil Johnson
7ff44d9215 improve clarity 2018-08-01 16:17:00 +01:00
Richard van der Hoff
38b98e5a98 changelog 2018-08-01 16:07:49 +01:00
Richard van der Hoff
01e93f48ed Kill off MatrixCodeMessageException
This code brings the SimpleHttpClient into line with the
MatrixFederationHttpClient by having it raise HttpResponseExceptions when a
request fails (rather than trying to parse for matrix errors and maybe raising
MatrixCodeMessageException).

Then, whenever we were checking for MatrixCodeMessageException and turning them
into SynapseErrors, we now need to check for HttpResponseExceptions and call
to_synapse_error.
2018-08-01 16:02:46 +01:00
Richard van der Hoff
018d75a148 Refactor code for turning HttpResponseException into SynapseError
This commit replaces SynapseError.from_http_response_exception with
HttpResponseException.to_synapse_error.

The new method actually returns a ProxiedRequestError, which allows us to pass
through additional metadata from the API call.
2018-08-01 16:02:46 +01:00
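
A sketch of that pattern using stand-in classes; Synapse's real HttpResponseException and error classes live in synapse.api.errors, and the parsing details here are illustrative.

```python
import json


class ProxiedRequestError(Exception):
    """Stand-in: a SynapseError carrying a remote server's errcode."""

    def __init__(self, code, msg, errcode):
        super(ProxiedRequestError, self).__init__(msg)
        self.code = code
        self.errcode = errcode


class HttpResponseException(Exception):
    def __init__(self, code, msg, response_body):
        super(HttpResponseException, self).__init__("%d: %s" % (code, msg))
        self.code = code
        self.msg = msg
        self.response_body = response_body

    def to_synapse_error(self):
        # Preserve the remote matrix errcode where the body parses as a
        # standard error object; fall back to M_UNKNOWN otherwise.
        try:
            err = json.loads(self.response_body)
            errcode = err.get("errcode", "M_UNKNOWN")
            errmsg = err.get("error", self.msg)
        except ValueError:
            errcode, errmsg = "M_UNKNOWN", self.msg
        return ProxiedRequestError(self.code, errmsg, errcode)
```
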
Richard van der Hoff
fa7dc889f1 Be more careful which errors we send back over the C-S API
We really shouldn't be sending all CodeMessageExceptions back over the C-S API;
it will include things like 401s which we shouldn't proxy.

That means that we need to explicitly turn a few HttpResponseExceptions into
SynapseErrors in the federation layer.

The effect of the latter is that the matrix errcode will get passed through
correctly to calling clients, which might help with some of the random
M_UNKNOWN errors when trying to join rooms.
2018-08-01 16:02:38 +01:00
Richard van der Hoff
c82ccd3027 Factor out exception handling in federation_client
Factor out the error handling from make_membership_event, send_join, and
send_leave, so that it can be shared.
2018-08-01 16:01:04 +01:00
Amber Brown
da7785147d Python 3: Convert some unicode/bytes uses (#3569) 2018-08-02 00:54:06 +10:00
Neil Johnson
c480c4c962 fix isort 2018-08-01 14:25:58 +01:00
Neil Johnson
6eed16d8a2 fix test for py3 2018-08-01 14:02:10 +01:00
Neil Johnson
303f1c851f Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_sign_in_log_in_limits 2018-08-01 13:42:50 +01:00
Erik Johnston
a6d7b74915 update docs 2018-08-01 13:39:14 +01:00
Erik Johnston
4b256b9271 _persist_auth_tree no longer returns anything 2018-08-01 13:39:07 +01:00
Neil Johnson
4e5ac901dd clean up 2018-08-01 12:03:57 +01:00
Neil Johnson
f9f5559971 fix comment 2018-08-01 12:03:42 +01:00
Neil Johnson
4e6e00152c fix known broken test 2018-08-01 11:48:37 +01:00
Neil Johnson
0aba3d361a count_monthly_users() async 2018-08-01 11:47:58 +01:00
Neil Johnson
2c54f1c225 remove need to plot limit_usage_by_mau 2018-08-01 11:46:59 +01:00
Jan Christian Grünhage
c4842e16cb Merge pull request #3543 from bebehei/docker
Improvements for Docker usage
2018-08-01 11:32:45 +02:00
Richard van der Hoff
6e63d6868c Update 2952.bugfix 2018-08-01 10:31:22 +01:00
Richard van der Hoff
f49147d14f Merge pull request #3634 from matrix-org/rav/wtf_is_a_replication_layer
rename replication_layer to federation_client
2018-08-01 10:29:29 +01:00
Richard van der Hoff
cab782c17e Merge pull request #3384 from matrix-org/rav/rewrite_cachedlist_decorator
Rewrite cache list decorator
2018-08-01 10:28:56 +01:00
Neil Johnson
6023cdd227 remove errant print 2018-08-01 10:27:17 +01:00
Neil Johnson
7931393495 make count_monthly_users async synapse/handlers/auth.py 2018-08-01 10:21:56 +01:00
Neil Johnson
c507fa15ce only need to loop if mau limiting is enabled 2018-08-01 10:20:42 +01:00
Serban Constantin
70af98e361 return NotFoundError if room not found
Per the Client-Server API[0] we should return
`M_NOT_FOUND` if the room isn't found, instead
of a generic SynapseError.

This ensures that the /directory/list API returns
404 for a room not found, instead of 400.

[0]: https://matrix.org/docs/spec/client_server/unstable.html#get-matrix-client-r0-directory-list-room-roomid

Signed-off-by: Serban Constantin <serban.constantin@gmail.com>
2018-07-31 21:47:23 +03:00
Travis Ralston
5e2ee64660 Merge pull request #3628 from turt2live/travis/goodby-pdu-failures
Remove pdu_failures from transactions
2018-07-31 12:13:09 -06:00
Richard van der Hoff
1841672c85 changelog 2018-07-31 18:26:54 +01:00
Neil Johnson
6ef983ce5c api into monthly_active_users table 2018-07-31 16:36:24 +01:00
Richard van der Hoff
bdbdceeafa rename replication_layer to federation_client
I have HAD ENOUGH of trying to remember wtf a replication layer is in terms of
classes.
2018-07-31 15:44:05 +01:00
Erik Johnston
16bd63f32f Newsfile 2018-07-31 14:48:43 +01:00
Erik Johnston
443da003bc Use new helper base class for membership requests 2018-07-31 14:32:23 +01:00
Erik Johnston
729b672823 Use new helper base class for ReplicationSendEventRestServlet 2018-07-31 14:32:23 +01:00
Erik Johnston
d81602b75a Add helper base class for generating new replication endpoints
This will hopefully reduce the boilerplate required to implement new
internal HTTP requests.
2018-07-31 14:32:20 +01:00
Richard van der Hoff
5de936caa1 Merge pull request #3612 from matrix-org/rav/store_heirarchy
Make EventStore inherit from EventFederationStore
2018-07-31 13:44:04 +01:00
Neil Johnson
5bb39b1e0c mau limits 2018-07-31 13:22:14 +01:00
Neil Johnson
df2235e7fa coding style 2018-07-31 13:16:20 +01:00
Richard van der Hoff
0bc9b9e397 reinstate explicit include of EventsWorkerStore 2018-07-31 13:11:04 +01:00
Neil Johnson
7d05406a07 fix user_ips counting 2018-07-31 12:03:23 +01:00
Richard van der Hoff
82977477e3 Merge pull request #3629 from ptman/patch-1
Add some documentation for using the dashboard
2018-07-31 11:31:52 +01:00
Paul Tötterman
9c14c2b561 Add some documentation for using the dashboard 2018-07-31 12:48:37 +03:00
Richard van der Hoff
6aab397ada synapse grafana dashboard 2018-07-31 09:45:58 +01:00
Amber Brown
52384f2ee5 Merge pull request #3626 from krombel/only_import_secrets_when_available
Only import secrets when available
2018-07-31 08:58:24 +10:00
Travis Ralston
e908b86832 Remove pdu_failures from transactions
The field is never read from, and all the opportunities given to populate it are not utilized. It should be very safe to remove this.
2018-07-30 16:28:47 -06:00
Krombel
254e8267e2 Only import secrets when available
The secrets module was introduced in Python 3.6, so it is not available
in 3.5 and before.

This now checks the running Python version and only uses
secrets if the version is 3.6 or above.

Signed-Off-By: Matthias Kesler <krombel@krombel.de>
2018-07-30 23:59:02 +02:00
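
A sketch of the version gate, close in spirit to what such a shim might look like; the `token_bytes` helper is illustrative.

```python
import sys

if sys.version_info[0:2] >= (3, 6):
    import secrets

    def token_bytes(nbytes=32):
        return secrets.token_bytes(nbytes)

else:
    import os

    def token_bytes(nbytes=32):
        # os.urandom is the pre-3.6 source of cryptographic randomness.
        return os.urandom(nbytes)
```
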
Neil Johnson
21276ff846 remove errant logging 2018-07-30 22:42:12 +01:00
Neil Johnson
fef7e58ac6 actually close conn 2018-07-30 22:29:44 +01:00
Neil Johnson
cefac79c10 monthly_active_tests 2018-07-30 22:08:09 +01:00
Neil Johnson
9b13817e06 factor out metrics from __init__ to app/homeserver 2018-07-30 22:07:07 +01:00
Neil Johnson
251e6c1210 limit register and sign in on number of monthly users 2018-07-30 15:55:57 +01:00
Erik Johnston
143f1a2532 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_fed_store 2018-07-30 09:56:18 +01:00
Jeroen
2903e65aff fix isort 2018-07-29 19:47:08 +02:00
Richard van der Hoff
a8cbce0ced fix invalidation 2018-07-27 16:17:17 +01:00
Matthew Hodgson
e9b2d047f6 make /context lazyload & filter aware (#3567)
make /context lazyload & filter aware.
2018-07-27 15:12:50 +01:00
Erik Johnston
ad7cd95a78 Newsfile 2018-07-27 15:02:57 +01:00
Richard van der Hoff
f102c05856 Rewrite cache list decorator
Because it was complicated and annoyed me. I suspect this will be more
efficient too.
2018-07-27 13:47:04 +01:00
Jeroen
8e3f75b39a fix accidental removal of hs 2018-07-27 12:17:31 +02:00
Richard van der Hoff
b0b5566f36 Merge pull request #3391 from t3chguy/t3chguy/default_inviter_display_name_3pid
if inviter_display_name == ""||None then default to inviter MXID
2018-07-27 10:08:39 +01:00
Richard van der Hoff
781c2bd21a Merge pull request #3469 from DJViking/master
Add instructions for install on OpenSUSE and SLES
2018-07-27 09:21:41 +01:00
Richard van der Hoff
7041cd872b Merge branch 'develop' into send_sni_for_federation_requests 2018-07-27 09:17:11 +01:00
Richard van der Hoff
d42455cdc9 Merge pull request #3616 from matrix-org/travis/event_id_send_leave
Update the send_leave path to be an event_id
2018-07-26 23:25:17 +01:00
Richard van der Hoff
65c8dee900 Merge pull request #3614 from matrix-org/rav/stop_populating_event_content
Stop populating events.content
2018-07-26 22:54:08 +01:00
Matthew Hodgson
a75231b507 Deduplicate redundant lazy-loaded members (#3331)
* attempt at deduplicating lazy-loaded members

as per the proposal; we can deduplicate redundant lazy-loaded members
which are sent in the same sync sequence. we do this heuristically
rather than requiring the client to somehow tell us which members it
has chosen to cache, by instead caching the last N members sent to
a client, and not sending them again.  For now we hardcode N to 100.
Each cache for a given (user,device) tuple is in turn cached for up to
X minutes (to avoid the caches building up).  For now we hardcode X to 30.

* add include_redundant_members filter option & make it work

* remove stale todo

* add tests for _get_some_state_from_cache

* incorporate review
2018-07-26 22:51:30 +01:00
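
A heuristic sketch of the cache described above, with N=100 and X=30 minutes; the structures and names are illustrative, not Synapse's actual code.

```python
import time
from collections import OrderedDict

N_MEMBERS = 100      # remember the last N members sent to a client
CACHE_TTL = 30 * 60  # each (user, device) cache lives for X = 30 minutes

# (user_id, device_id) -> (expiry timestamp, recently sent member state_keys)
_caches = {}


def filter_redundant_members(user_id, device_id, member_events):
    now = time.time()
    expiry, seen = _caches.get((user_id, device_id), (0, OrderedDict()))
    if expiry < now:
        seen = OrderedDict()  # cache expired: start afresh
    fresh = [e for e in member_events if e["state_key"] not in seen]
    for e in member_events:
        # Re-inserting moves the key to the "most recently sent" end.
        seen.pop(e["state_key"], None)
        seen[e["state_key"]] = True
        if len(seen) > N_MEMBERS:
            seen.popitem(last=False)  # evict the least recently sent
    _caches[(user_id, device_id)] = (now + CACHE_TTL, seen)
    return fresh
```
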
Richard van der Hoff
9e68b1bd2d Merge pull request #3613 from matrix-org/rav/stop_using_event_edges_room_id
Remove some redundant joins on event_edges.room_id
2018-07-26 22:31:01 +01:00
Travis Ralston
49254d43a6 Create 3616.misc 2018-07-26 14:45:48 -06:00
Travis Ralston
7d32f0d745 Update the send_leave path to be an event_id
It's still not used; however, the parameter is an event ID, not a transaction ID.
2018-07-26 14:41:59 -06:00
Richard van der Hoff
ef9d51b081 Merge pull request #3610 from matrix-org/rav/fix_looping_calls
Fix some looping_call calls which were broken in #3604
2018-07-26 14:55:57 +01:00
Richard van der Hoff
85531a06a2 changelog 2018-07-26 14:55:29 +01:00
Richard van der Hoff
51d7df1915 Create the column nullable
There's no real point in ever making the column non-nullable, and doing so
breaks the sytests.
2018-07-26 14:54:04 +01:00
Richard van der Hoff
5c1d301fd9 Stop populating events.content
This field is no longer read from, so we should stop populating it. Once we're
happy that this doesn't break everything, and a rollback is unlikely, we can
think about dropping the column.
2018-07-26 14:43:02 +01:00
Richard van der Hoff
cf78eaebad changelog 2018-07-26 13:22:40 +01:00
Richard van der Hoff
bd4b25f4d0 Remove some redundant joins on event_edges.room_id
We've long passed the point where it's possible to have the same event_id in
different tables, so these join conditions are redundant: we can just join on
event_id.

event_edges is of non-trivial size, and the room_id column is wasteful, so
let's stop reading from it. In future, we can stop writing to it, and then drop
it.
2018-07-26 13:19:08 +01:00
Richard van der Hoff
1b4d73fa52 comment on event_edges 2018-07-26 12:53:51 +01:00
Richard van der Hoff
a15ed52267 changelog 2018-07-26 12:53:18 +01:00
Richard van der Hoff
21e878ebb6 Make EventStore inherit from EventFederationStore
(since it uses methods therein)

Turns out that we had a bunch of things which were incorrectly importing
EventWorkerStore from events.py rather than events_worker.py, which broke once
I removed the import into events.py.
2018-07-26 12:48:51 +01:00
Richard van der Hoff
03751a6420 Fix some looping_call calls which were broken in #3604
It turns out that looping_call does check the deferred returned by its
callback, and (at least in the case of client_ips), we were relying on this,
and I broke it in #3604.

Update run_as_background_process to return the deferred, and make sure we
return it to clock.looping_call.
2018-07-26 11:48:08 +01:00
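
A sketch of the Deferred plumbing; Synapse's real run_as_background_process also sets up a logcontext and metrics, and the store method name here is illustrative.

```python
from twisted.internet import defer, task


def run_as_background_process(desc, func, *args, **kwargs):
    # Returning the Deferred (rather than dropping it) lets callers
    # observe completion and failures.
    return defer.maybeDeferred(func, *args, **kwargs)


def start_client_ip_loop(store):
    call = task.LoopingCall(
        run_as_background_process, "update_client_ips", store.update_client_ips
    )
    # LoopingCall waits for a returned Deferred to fire before scheduling
    # the next run, which is the behaviour that dropping the Deferred broke.
    call.start(15)
    return call
```
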
Matthew Hodgson
1bcd0490c2 Merge pull request #2970 from matrix-org/matthew/filter_members
Implement the lazy_load_members room state filter parameter
2018-07-26 00:03:01 +01:00
Matthew Hodgson
bc7944e6d2 switch missing_types to be a bool 2018-07-25 23:36:31 +01:00
Travis Ralston
a4fe9d2d36 Merge pull request #3609 from matrix-org/travis/doc-typo-2
Fix a minor documentation typo in on_make_leave
2018-07-25 16:09:45 -06:00
Travis Ralston
6185650f9c Create 3609.misc 2018-07-25 15:46:56 -06:00
Travis Ralston
d8e65ed7e1 Fix a minor documentation typo in on_make_leave 2018-07-25 15:44:41 -06:00
Matthew Hodgson
2565804030 Merge branch 'develop' into matthew/filter_members 2018-07-25 17:27:49 +01:00
Matthew Hodgson
0620d27f4d flake8 2018-07-25 17:21:17 +01:00
Matthew Hodgson
0a7ee0ab8b add tests for _get_some_state_from_cache 2018-07-25 16:33:52 +01:00
Matthew Hodgson
7d9fb88617 incorporate more review. 2018-07-25 16:33:50 +01:00
Erik Johnston
78a691d005 Split out DB writes in federation handler
This will allow us to easily add an internal replication API to proxy
these requests to master, so that we can move federation APIs to
workers.
2018-07-25 16:22:56 +01:00
Erik Johnston
3849f7f69f Merge pull request #3603 from matrix-org/erikj/handle_outliers
Correctly handle outliers during persist events
2018-07-25 13:24:04 +01:00
Richard van der Hoff
cee1ae1b72 Merge pull request #3606 from matrix-org/rav/logcontext_fixes_once_more
Fix another logcontext leak in _persist_events
2018-07-25 11:56:00 +01:00
Richard van der Hoff
32b30e15f2 Merge pull request #3607 from matrix-org/rav/fix_persist_events_integrity_error
Fix occasional 'tuple index out of range' error
2018-07-25 11:55:45 +01:00
Richard van der Hoff
4081bd1560 Merge pull request #3605 from matrix-org/rav/fix_update_remote_profile_cache
Fix updating of cached remote profiles
2018-07-25 11:54:46 +01:00
Richard van der Hoff
1bfb5bed1d Merge pull request #3604 from matrix-org/rav/background_process_fixes
Wrap a number of things that run in the background
2018-07-25 11:54:33 +01:00
Erik Johnston
7780a7b47c Actually fix it by adding continue 2018-07-25 11:13:20 +01:00
Richard van der Hoff
1be94440d3 Fix occasional 'tuple index out of range' error
This fixes a bug in _delete_existing_rows_txn which was introduced in #3435
(though it's been on matrix-org-hotfixes for *years*). This code is only called
when there is some sort of conflict the first time we try to persist an event,
so it only happens rarely. Still, the exceptions are annoying.
2018-07-25 11:05:58 +01:00
Richard van der Hoff
07defd5fe6 Fix another logcontext leak in _persist_events
We need to run the errback in the sentinel context to avoid losing our own
context.

Also: add logging to runInteraction to help identify where "Starting db
connection from sentinel context" warnings are coming from
2018-07-25 10:53:23 +01:00
Richard van der Hoff
9c237a7ab5 changelog 2018-07-25 10:46:01 +01:00
Richard van der Hoff
55acd6856c Fix updating of cached remote profiles
_update_remote_profile_cache was missing its `defer.inlineCallbacks`, so when
it was called, would just return a generator object, without actually running
any of the method body.
2018-07-25 10:34:48 +01:00
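
The bug pattern in miniature, with illustrative names: without the decorator, calling the function merely creates a generator object and none of the body runs.

```python
from twisted.internet import defer


def update_cache_broken(store, user_id, profile):
    # BUG: no @defer.inlineCallbacks, so calling this just returns a
    # generator object; the body never executes.
    yield store.update_remote_profile_cache(user_id, profile)


@defer.inlineCallbacks
def update_cache_fixed(store, user_id, profile):
    # With the decorator, the generator is actually driven to completion.
    yield store.update_remote_profile_cache(user_id, profile)
```
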
Richard van der Hoff
f59be4eb0e Fix unit tests
on_notifier_poke no longer runs synchronously, so we have to do a different hack
to make sure that the replication data has been sent. Let's actually listen for
its arrival.
2018-07-25 10:30:36 +01:00
Erik Johnston
a297ff2b16 Fix typo in conditional 2018-07-25 09:48:01 +01:00
Richard van der Hoff
3f11d84534 Changelog 2018-07-25 09:43:25 +01:00
Richard van der Hoff
371da42ae4 Wrap a number of things that run in the background
This will reduce the number of "Starting db connection from sentinel context"
warnings, and will help with our metrics.
2018-07-25 09:41:12 +01:00
Erik Johnston
0b300d323a Newsfile 2018-07-25 09:39:45 +01:00
Erik Johnston
ec56121b0d Correctly handle outliers during persist events
We incorrectly asserted that all contexts must have a non-None state
group, without considering outliers. This would usually be fine, as the
assertion would never be hit, as there is a shortcut during persistence
if the forward extremities don't change.

However, if the outlier is being persisted with non-outlier events, the
function would be called and the assertion would be hit.

Fixes #3601
2018-07-25 09:35:02 +01:00
Matthew Hodgson
cb5c37a57c handle the edge case for _get_some_state_from_cache where types is [] 2018-07-24 20:34:45 +01:00
Michael Telatynski
38eaa5280d add changelog entry for PR#3391
Signed-off-by: Michael Telatynski <7t3chguy@gmail.com>
2018-07-24 17:25:42 +01:00
Erik Johnston
1e5dbdcbb1 Merge pull request #3597 from matrix-org/erikj/did_forget
Fix client_reader worker being able to handle /context requests
2018-07-24 17:20:39 +01:00
Erik Johnston
1674a85238 Move newsfile 2018-07-24 17:20:27 +01:00
Michael Telatynski
87951d3891 Merge branch 'develop' of github.com:matrix-org/synapse into t3chguy/default_inviter_display_name_3pid 2018-07-24 17:17:46 +01:00
Erik Johnston
f14c866e37 Newsfile 2018-07-24 16:51:19 +01:00
Erik Johnston
8b8c4f34a3 Replace usage of get_current_token with StreamToken.START
This allows us to handle /context/ requests on the client_reader worker
without having to pull in all the various stream handlers (e.g.
presence, typing, pushers etc). The only thing the token gets used for
is pagination, and that ignores everything but the room portion of the
token.
2018-07-24 16:49:17 +01:00
Erik Johnston
3188973857 Pull out did_forget to worker store 2018-07-24 16:49:14 +01:00
Erik Johnston
60a1d147a7 Merge pull request #3595 from matrix-org/erikj/use_deltas
Use deltas to calculate current state deltas
2018-07-24 15:41:18 +01:00
Erik Johnston
709c309b0e Expand on docstring comment about return value 2018-07-24 15:12:50 +01:00
Erik Johnston
8f65ab98d2 Remove unnecessary iteritems 2018-07-24 15:08:01 +01:00
Erik Johnston
ed0dd68731 Fixup comment (and indent) 2018-07-24 14:31:38 +01:00
Erik Johnston
f33c596533 Newsfile 2018-07-24 14:28:04 +01:00
Erik Johnston
811ac73a42 Don't fetch state from the database unless needed 2018-07-24 14:18:23 +01:00
Erik Johnston
a79410e7b8 Have _get_new_state_after_events return delta
If we have a delta from the existing to new current state, then we can
reuse that rather than manually working it out by fetching both lots of
state.
2018-07-24 14:14:17 +01:00
Richard van der Hoff
f7b133fc26 Merge pull request #3594 from matrix-org/richvdh-patch-1
Update ISSUE_TEMPLATE.md
2018-07-24 14:13:15 +01:00
Richard van der Hoff
81946db9cf Merge pull request #3587 from matrix-org/rav/better_exception_logging
Improve logging for exceptions when handling PDUs
2018-07-24 14:12:12 +01:00
Richard van der Hoff
a321f78991 Merge pull request #3586 from matrix-org/rav/optimise_resolve_state_groups
Fixes and optimisations for resolve_state_groups
2018-07-24 14:11:45 +01:00
Richard van der Hoff
93b0722c50 Merge pull request #3583 from matrix-org/rav/remove_who_forgot_in_room
Remove redundant checks on room forgottenness
2018-07-24 14:11:11 +01:00
Matthew Hodgson
454f59b7ad Merge branch 'develop' into matthew/filter_members 2018-07-24 14:03:37 +01:00
Matthew Hodgson
1a01a5b964 clarify comment on p_ids 2018-07-24 14:03:15 +01:00
Erik Johnston
223341205e Don't require to_delete to have event_ids 2018-07-24 14:02:40 +01:00
Matthew Hodgson
e22700c3dd consider non-filter_type types as wildcards, thus missing from the state-group-cache 2018-07-24 13:59:07 +01:00
Richard van der Hoff
7417951117 Update ISSUE_TEMPLATE.md
request backticks for logs
2018-07-24 13:46:39 +01:00
Matthew Hodgson
eb1d911ab7 rather than adding ll_ids, remove them from p_ids 2018-07-24 13:40:49 +01:00
Erik Johnston
0a8e4f3af9 Merge pull request #3592 from matrix-org/erikj/speed_up_calculate_state_delta
Speed up _calculate_state_delta
2018-07-24 13:39:58 +01:00
Matthew Hodgson
d19fba3655 Merge branch 'develop' into matthew/filter_members 2018-07-24 12:39:54 +01:00
Matthew Hodgson
cd241d6bda incorporate more review 2018-07-24 12:39:40 +01:00
Richard van der Hoff
30bfed5aa5 Merge remote-tracking branch 'origin/develop' into rav/remove_who_forgot_in_room 2018-07-24 11:46:09 +01:00
Erik Johnston
97acd385a3 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/speed_up_calculate_state_delta 2018-07-24 11:32:13 +01:00
Erik Johnston
0fa73e4a63 Remove unnecessary if 2018-07-24 11:19:23 +01:00
Erik Johnston
2581eb3e1d Newsfile 2018-07-24 11:14:28 +01:00
Richard van der Hoff
69292de6cc Merge pull request #3591 from matrix-org/rav/logcontext_fixes
Logcontext fixes
2018-07-24 11:11:27 +01:00
Erik Johnston
ff5426f6b8 Speed up _calculate_state_delta 2018-07-24 10:55:11 +01:00
Richard van der Hoff
a678145010 Merge branch 'develop' into rav/logcontext_fixes 2018-07-24 10:43:30 +01:00
Erik Johnston
d436ad332c Merge pull request #3555 from matrix-org/erikj/client_apis_move
Make client_reader support some more read only APIs
2018-07-24 10:42:28 +01:00
Richard van der Hoff
2601ee28bd Merge pull request #3590 from matrix-org/rav/persist_events_metrics
Add some measure blocks to persist_events
2018-07-24 10:41:51 +01:00
Erik Johnston
536bc63a4e Merge branch 'develop' into erikj/client_apis_move 2018-07-24 09:57:05 +01:00
Richard van der Hoff
cf2d15c6a9 another couple of logcontext leaks 2018-07-24 00:57:48 +01:00
Richard van der Hoff
c6e66821a9 Changelog 2018-07-24 00:38:39 +01:00
Richard van der Hoff
8dff6e0322 Logcontext fixes
Fix some random logcontext leaks.
2018-07-24 00:37:17 +01:00
Richard van der Hoff
30957a941a changelog 2018-07-24 00:24:38 +01:00
Richard van der Hoff
69fb5dbdab fix idiocy 2018-07-24 00:04:44 +01:00
Richard van der Hoff
1938cffaea Add some measure blocks to persist_events
... to help us figure out where 40% of CPU is going
2018-07-23 23:48:19 +01:00
Matthew Hodgson
efcdacad7d handle case where types is [] on postgres correctly 2018-07-23 22:41:05 +01:00
Matthew Hodgson
004a83b43a changelog 2018-07-23 22:32:35 +01:00
Richard van der Hoff
d8709df739 changelog 2018-07-23 22:16:22 +01:00
Richard van der Hoff
ce0c18dec5 Improve logging for exceptions handling PDUs
when we get an exception handling a federation PDU, log the whole stacktrace.
2018-07-23 22:13:19 +01:00
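
Illustrative only (the handler names are assumptions): logging via `logger.exception` inside the except block attaches the full stack trace to the log line, rather than just the exception's string form.

```python
import logging

from twisted.internet import defer

logger = logging.getLogger(__name__)


@defer.inlineCallbacks
def _process_pdu(origin, pdu):
    # Stand-in for the real PDU processing.
    yield defer.succeed(None)


@defer.inlineCallbacks
def handle_pdu(origin, pdu):
    try:
        yield _process_pdu(origin, pdu)
    except Exception:
        # logger.exception logs at ERROR and includes the whole stacktrace.
        logger.exception("Failed to handle PDU %s", pdu.event_id)
```
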
Richard van der Hoff
c1f80effbe Handle delta_ids being None in _update_context_for_auth_events
it's easier to create the new state group as a delta from the existing one.

(There's an outside chance this will help with
https://github.com/matrix-org/synapse/issues/3364)
2018-07-23 22:06:50 +01:00
Matthew Hodgson
adfe29ec0b Merge branch 'develop' into matthew/filter_members 2018-07-23 19:21:37 +01:00
Matthew Hodgson
254fb430d1 incorporate review 2018-07-23 19:21:20 +01:00
Richard van der Hoff
cc99256e90 newsfile 2018-07-23 19:10:50 +01:00
Richard van der Hoff
5c705f70c9 Fixes and optimisations for resolve_state_groups
First of all, fix the logic which looks for identical input state groups so
that we actually use them. This turned out to be most easily done by factoring
the relevant code out to a separate function so that we could do an early
return.

Secondly, avoid building the whole `conflicted_state` dict (which was only ever
used as a boolean flag).

Thirdly, replace the construction of the `state` dict (which mapped from keys
to events that set them), with an optimistic construction of the resolution
result assuming there will be no conflicts. This should be no slower than
building the old `state` dict, and:
  - in the conflicted case, we'll short-cut it, saving part of the work
  - in the unconflicted case, it saves rebuilding the resolution from the
    `state` dict.

Finally, do a couple of s/values/itervalues/.
2018-07-23 19:10:11 +01:00
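
A simplified sketch of the optimistic construction described in the third point; real state resolution then resolves the conflicted keys separately.

```python
def optimistic_resolution(state_sets):
    """Build the resolution assuming no conflicts, recording any keys
    where the input state groups actually disagree."""
    resolved = dict(state_sets[0]) if state_sets else {}
    conflicted_keys = set()
    for state_set in state_sets[1:]:
        for key, event_id in state_set.items():
            if resolved.setdefault(key, event_id) != event_id:
                conflicted_keys.add(key)
    return resolved, conflicted_keys


# State sets map (event_type, state_key) -> event_id, e.g.:
# optimistic_resolution([{("m.room.name", ""): "$a"},
#                        {("m.room.name", ""): "$b"}])
# -> ({("m.room.name", ""): "$a"}, {("m.room.name", "")})
```
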
Erik Johnston
f559119de0 Merge pull request #3584 from matrix-org/erikj/use_cached
Only get cached state from context in persist_event
2018-07-23 17:52:45 +01:00
Erik Johnston
8b9f164fff Comments 2018-07-23 17:43:01 +01:00
Erik Johnston
2d5bba151b Newsfile 2018-07-23 17:29:32 +01:00
Erik Johnston
50c60e5fad Only get cached state from context in persist_event
We don't want to bother pulling the current state out of the DB
until we know we have to. Checking the context for state is just an
optimisation.
2018-07-23 17:21:40 +01:00
Richard van der Hoff
4f5cc8e4e7 Merge remote-tracking branch 'origin/develop' into rav/remove_who_forgot_in_room 2018-07-23 17:15:12 +01:00
Richard van der Hoff
dae6dc1e77 Remove redundant checks on room forgottenness
Fixes #3550
2018-07-23 17:13:34 +01:00
Erik Johnston
a646bdc670 Merge pull request #3582 from matrix-org/erikj/fixup_stateless
Fix missing attributes on workers.
2018-07-23 16:44:42 +01:00
Erik Johnston
9f41ad491d Newsfile 2018-07-23 16:31:46 +01:00
Erik Johnston
0faa3223cd Fix missing attributes on workers.
This was missed during the transition from attribute to getter for
getting state from context.
2018-07-23 16:28:00 +01:00
Erik Johnston
37e87611bc Merge pull request #3581 from matrix-org/erikj/fixup_stateless
Fix EventContext when using workers
2018-07-23 16:06:59 +01:00
Erik Johnston
a4d24781bf Newsfile 2018-07-23 15:28:51 +01:00
Erik Johnston
999bcf9d01 Fix EventContext when using workers
We were:
  1. Not correctly setting all attributes
  2. Using defer.inlineCallbacks in a non-generator
2018-07-23 15:24:21 +01:00
Erik Johnston
9c294ea864 Merge pull request #3579 from matrix-org/erikj/stateless_contexts_4
Add concept of StatelessContext, take 4.
2018-07-23 15:14:39 +01:00
Erik Johnston
4797ed000e Update docstrings to make sense 2018-07-23 15:05:56 +01:00
Richard van der Hoff
726a0b1e64 Merge branch 'develop' into erikj/stateless_contexts_4 2018-07-23 14:44:27 +01:00
Erik Johnston
f0a1b8e4cd Merge pull request #3577 from matrix-org/erikj/cleanup_context
Refactor EventContext to accept state during init
2018-07-23 14:41:37 +01:00
Erik Johnston
8fbe418777 Fix unit tests 2018-07-23 13:33:49 +01:00
Erik Johnston
0b0b24cb82 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/client_apis_move 2018-07-23 13:21:15 +01:00
Erik Johnston
4fc52b1037 Update docs/workers.rst 2018-07-23 13:20:43 +01:00
Erik Johnston
f3182bb1d0 Newsfile 2018-07-23 13:19:24 +01:00
Erik Johnston
027bc01a1b Add support for updating state 2018-07-23 13:17:25 +01:00
Erik Johnston
e42510ba63 Use new getters 2018-07-23 13:17:22 +01:00
Erik Johnston
440b8845b5 Make EventContext lazy load state 2018-07-23 12:56:56 +01:00
Erik Johnston
842cdece42 pep8 2018-07-23 12:31:35 +01:00
Erik Johnston
959f4b9074 Newsfile 2018-07-23 12:22:59 +01:00
Erik Johnston
acbfdc3442 Refactor EventContext to accept state during init 2018-07-23 12:20:55 +01:00
Matthew Hodgson
354a99c968 Merge pull request #3520 from matrix-org/matthew/sync_deleted_devices
Announce deleted devices explicitly over federation.
2018-07-23 10:16:05 +01:00
Matthew Hodgson
9b34f3ea3a Merge branch 'develop' into matthew/sync_deleted_devices 2018-07-23 10:03:28 +01:00
Matthew Hodgson
c1bf2b587e add trailing comma 2018-07-23 09:56:23 +01:00
Amber Brown
3132b89f12 Make the rest of the .iterwhatever go away (#3562) 2018-07-21 15:47:18 +10:00
Erik Johnston
5c88bb722f Move PaginationHandler to its own file 2018-07-20 15:32:23 +01:00
Erik Johnston
0ecf68aedc Move check_in_room_or_world_readable to Auth 2018-07-20 15:30:59 +01:00
Richard van der Hoff
ff48ab8527 Merge pull request #3572 from matrix-org/rav/linearizer_cancellation
Test and fix support for cancellation in Linearizer
2018-07-20 14:44:02 +01:00
Richard van der Hoff
5c30cb709a Changelog 2018-07-20 14:01:36 +01:00
Richard van der Hoff
3d6df84658 Test and fix support for cancellation in Linearizer 2018-07-20 13:59:55 +01:00
Richard van der Hoff
4f67623674 Merge pull request #3571 from matrix-org/rav/limiter_fixes
A set of improvements to the Limiter
2018-07-20 13:53:03 +01:00
Amber Brown
e1a237eaab Admin API for creating new users (#3415) 2018-07-20 22:41:13 +10:00
Richard van der Hoff
683f4058c1 changelogs 2018-07-20 13:16:39 +01:00
Richard van der Hoff
7c712f95bb Combine Limiter and Linearizer
Linearizer was effectively a Limiter with max_count=1, so rather than
maintaining two sets of code, let's combine them.
2018-07-20 13:11:43 +01:00
Richard van der Hoff
8462c26485 Improvements to the Limiter
* give them names, to improve logging
* use a deque rather than a list for efficiency
2018-07-20 12:50:27 +01:00
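On the deque point: popping waiters from the front of a plain list is O(n), since every remaining element shifts, while collections.deque.popleft() is O(1). A standalone illustration (not the Limiter itself):

```python
from collections import deque

waiters = deque()          # FIFO of queued requests
waiters.append("first")
waiters.append("second")

# list.pop(0) would shift all remaining elements; popleft() is O(1).
assert waiters.popleft() == "first"
```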
Richard van der Hoff
d7275eecf3 Add a sleep to the Limiter to fix stack overflows.
Fixes #3570
2018-07-20 12:37:12 +01:00
Matthew Hodgson
650daf5628 make test work 2018-07-19 20:49:44 +01:00
Matthew Hodgson
1fa4f7e03e first cut of a UT for testing state store (untested) 2018-07-19 20:19:32 +01:00
Matthew Hodgson
2f558300cc fix thinkos; unbreak tests 2018-07-19 19:22:27 +01:00
Matthew Hodgson
bcaec2915a incorporate review 2018-07-19 19:03:50 +01:00
Matthew Hodgson
924eb34d94 add a filtered_types param to limit filtering to specific types 2018-07-19 18:32:02 +01:00
Richard van der Hoff
7044af3298 Merge pull request #3564 from matrix-org/hawkowl/markdown
Switch changes to markdown & flip content on for miscs
2018-07-19 17:31:59 +01:00
Richard van der Hoff
5f3658baf5 Merge pull request #3377 from Valodim/note-affinity
document that the affinity package is required for the cpu_affinity setting
2018-07-19 14:35:06 +01:00
Amber Brown
fd9b08873c Merge remote-tracking branch 'origin/develop' into hawkowl/markdown 2018-07-19 21:48:28 +10:00
Richard van der Hoff
11d592290c Merge branch 'master' into develop 2018-07-19 12:44:04 +01:00
Amber Brown
87086ac69d fixes 2018-07-19 21:42:40 +10:00
Amber Brown
7814e4cc86 fix up weird translation things 2018-07-19 21:37:57 +10:00
Richard van der Hoff
6284f579bf Update r0.33.0 release notes
(mostly just clarifications)
2018-07-19 12:37:55 +01:00
Amber Brown
7cf76c9a09 changelog 2018-07-19 21:26:30 +10:00
Amber Brown
37af0d2a13 make pyproject point at it 2018-07-19 21:25:47 +10:00
Amber Brown
252f80094c rst -> md 2018-07-19 21:25:33 +10:00
Amber Brown
9a8acdc720 Merge branch 'master' into develop 2018-07-19 21:16:44 +10:00
Amber Brown
d69decd5c7 0.33.0 final changelog 2018-07-19 21:12:15 +10:00
Amber Brown
38f53399a2 0.33 final 2018-07-19 21:11:40 +10:00
Amber Brown
13d501c773 update changelogs 2018-07-19 21:11:24 +10:00
Amber Brown
ce0545eca1 Revert "0.33.0rc1 changelog"
This reverts commit 21d3b87943.
2018-07-19 21:03:15 +10:00
Amber Brown
95ccb6e2ec Don't spew errors because we can't save metrics (#3563) 2018-07-19 20:58:18 +10:00
Richard van der Hoff
ba6477feac Merge pull request #3507 from Peetz0r/patch-1
Changed http links to https
2018-07-19 11:55:40 +01:00
Matthew Hodgson
be3adfc331 merge develop pydoc for _get_state_for_groups 2018-07-19 11:26:04 +01:00
Matthew Hodgson
9e40834f74 yes, we do need to invalidate the device_id_exists_cache when deleting a remote device 2018-07-19 11:15:10 +01:00
Richard van der Hoff
f1a15ea206 revert 00bc979
... we've fixed the things that caused the warnings, so we should reinstate the
warning.
2018-07-19 11:14:20 +01:00
Richard van der Hoff
1ffb7bec20 Merge remote-tracking branch 'origin/release-v0.33.0' into develop 2018-07-19 11:12:33 +01:00
Richard van der Hoff
2de3d994f3 Merge pull request #3561 from matrix-org/rav/disable_logcontext_warning
Disable logcontext warning
2018-07-19 11:05:57 +01:00
Richard van der Hoff
c754e006f4 Merge pull request #3556 from matrix-org/rav/background_processes
Run things as background processes
2018-07-19 11:04:18 +01:00
Amber Brown
a97c845271 Move v1-only APIs into their own module & isolate deprecated ones (#3460) 2018-07-19 20:03:33 +10:00
Matthew Hodgson
c0685f67c0 spell out that include_deleted_devices requires include_all_devices 2018-07-19 10:59:02 +01:00
Richard van der Hoff
18a2b2c0b4 changelog 2018-07-19 10:54:39 +01:00
Richard van der Hoff
00bc979137 Disable logcontext warning
Temporary workaround to #3518 while we release 0.33.0.
2018-07-19 10:51:15 +01:00
Benedikt Heine
f1dd89fe86 [Docker] Use sorted multiline package lists
This matches docker best practices.

Signed-off-by: Benedikt Heine <bebe@bebehei.de>
2018-07-19 11:38:58 +02:00
Erik Johnston
6f62a6ef21 Merge pull request #3554 from matrix-org/erikj/response_metrics_code
Add response code to response timer metrics
2018-07-19 10:25:18 +01:00
Amber Brown
77091d7e8e Merge pull request #3559 from matrix-org/rav/pep8_config
add config for pep8
2018-07-19 13:17:07 +10:00
Richard van der Hoff
eed24893fa changelog 2018-07-18 22:12:19 +01:00
Richard van der Hoff
3f9e649f17 add config for pep8
Since, for better or worse, we seem to have configured isort to generate
89-character lines, pycharm is now complaining at me that our lines are too
long.

So, let's configure pep8 to behave consistently with flake8.
2018-07-18 20:55:37 +01:00
Richard van der Hoff
8c69b735e3 Make Distributor run its processes as a background process
This is more involved than it might otherwise be, because the current
implementation just drops its logcontexts and runs everything in the sentinel
context.

It turns out that we aren't actually using a bunch of the functionality here
(notably suppress_failures and the fact that Distributor.fire returns a
deferred), so the easiest way to fix this is actually by simplifying a bunch of
code.
2018-07-18 20:55:05 +01:00
Richard van der Hoff
08436c556a changelog 2018-07-18 20:55:05 +01:00
Richard van der Hoff
667fba68f3 Run things as background processes
This fixes #3518, and ensures that we get useful logs and metrics for lots of
things that happen in the background.

(There are certainly more things that happen in the background; these are just
the common ones I've found running a single-process synapse locally).
2018-07-18 20:55:05 +01:00
Erik Johnston
3a993a660d Newsfile 2018-07-18 15:39:49 +01:00
Erik Johnston
9b596177ae Add some room read only APIs to client_reader 2018-07-18 15:33:03 +01:00
Erik Johnston
bacdf0cbf9 Move RoomContextHandler out of Handlers
This is in preparation for moving GET /context/ to a worker
2018-07-18 15:33:03 +01:00
Erik Johnston
8cb8df55e9 Split MessageHandler into read only and writers
This will let us call the read only parts from workers, and so be able
to move some APIs off of master, e.g. the `/state` API.
2018-07-18 15:33:03 +01:00
Richard van der Hoff
65d6a0e477 Merge pull request #3553 from matrix-org/rav/background_process_tracking
Resource tracking for background processes
2018-07-18 14:27:50 +01:00
Erik Johnston
5bd0a47fcd pep8 2018-07-18 14:19:00 +01:00
Erik Johnston
00845c49d2 Newsfile 2018-07-18 14:03:16 +01:00
Erik Johnston
e45a46b6e4 Add response code to response timer metrics 2018-07-18 13:59:36 +01:00
Richard van der Hoff
dab00faa83 Merge pull request #3367 from matrix-org/rav/drop_re_signing_hacks
Remove event re-signing hacks
2018-07-18 12:46:27 +01:00
David Baker
c91a44572e Merge pull request #3514 from matrix-org/dbkr/turn_dont_add_defaults
Comment dummy TURN parameters in default config
2018-07-18 11:56:18 +01:00
Richard van der Hoff
92aecd557b changelog 2018-07-18 11:29:48 +01:00
Richard van der Hoff
6e3fc657b4 Resource tracking for background processes
This introduces a mechanism for tracking resource usage by background
processes, along with an example of how it will be used.

This will help address #3518, but more importantly will give us better insights
into things which are happening but not being shown up by the request metrics.

We *could* do this with Measure blocks, but:
 - I think having them pulled out as a completely separate metric class will
   make it easier to distinguish top-level processes from those which are
   nested.

 - I want to be able to report on in-flight background processes, and I don't
   think we want to do this for *all* Measure blocks.
2018-07-18 10:50:33 +01:00
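A hedged sketch of the sort of mechanism described: count in-flight background processes per name with a prometheus_client Gauge. The metric name and helper here are illustrative, not what Synapse actually registers.

```python
from contextlib import contextmanager
from prometheus_client import Gauge

# Illustrative metric, not Synapse's real one.
IN_FLIGHT = Gauge(
    "background_process_in_flight_sketch",
    "Background processes currently running",
    ["name"],
)

@contextmanager
def track_background_process(name):
    IN_FLIGHT.labels(name).inc()
    try:
        yield
    finally:
        IN_FLIGHT.labels(name).dec()

# Usage: with track_background_process("pusher"): do_work()
```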
Amber Brown
21d3b87943 0.33.0rc1 changelog 2018-07-18 12:53:32 +10:00
Amber Brown
5f3d02f6eb bump to 0.33.0rc1 2018-07-18 12:52:56 +10:00
Richard van der Hoff
0aed3fc346 Merge pull request #3546 from matrix-org/rav/fix_erasure_over_federation
Fix visibility of events from erased users over federation
2018-07-17 15:16:45 +01:00
Richard van der Hoff
79eb339c66 add a comment 2018-07-17 14:53:34 +01:00
Richard van der Hoff
4a11df5b64 changelog 2018-07-17 14:24:29 +01:00
Richard van der Hoff
d897be6a98 Fix visibility of events from erased users over federation 2018-07-17 14:02:07 +01:00
Richard van der Hoff
9c04b4abf9 Merge pull request #3541 from matrix-org/rav/optimize_filter_events_for_server
Refactor and optimise filter_events_for_server
2018-07-17 14:01:39 +01:00
Richard van der Hoff
94440ae994 fix imports 2018-07-17 11:51:26 +01:00
Amber Brown
bc006b3c9d Refactor REST API tests to use explicit reactors (#3351) 2018-07-17 20:43:18 +10:00
Erik Johnston
c7320a5564 Merge pull request #3544 from matrix-org/erikj/fixup_stream_cache
Fix perf regression in PR #3530
2018-07-17 11:16:59 +01:00
Richard van der Hoff
2172a3d8cb add a comment 2018-07-17 11:13:57 +01:00
Erik Johnston
b2aa05a8d6 Use efficient .intersection 2018-07-17 11:07:04 +01:00
Erik Johnston
850238b4ef Add unit test 2018-07-17 10:59:02 +01:00
Erik Johnston
9952d18e4d Newsfile 2018-07-17 10:31:51 +01:00
Erik Johnston
547b1355d3 Fix perf regression in PR #3530
The get_entities_changed function was changed to return all changed
entities since the given stream position, rather than only those changed
from a given list of entities. This resulted in the function incorrectly
returning large numbers of entities that, for example, caused large
increases in database usage.
2018-07-17 10:27:51 +01:00
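The shape of the fix, sketched against a simplified stand-in for the stream cache (entity -> last stream position it changed at); the point is to intersect with the caller's entities instead of returning everything that changed:

```python
def get_entities_changed(entity_to_key, entities, stream_pos, earliest_known):
    """Subset of `entities` that may have changed after stream_pos (sketch)."""
    if stream_pos <= earliest_known:
        # Cache doesn't go back far enough: assume everything changed.
        return set(entities)
    changed = {e for e, pos in entity_to_key.items() if pos > stream_pos}
    # Intersect with the caller's entities, rather than returning every
    # changed entity in the cache (the regression being fixed here).
    return changed.intersection(entities)
```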
Benedikt Heine
fe089b13cb [Docker] Build docker image via compose
It's much easier to build the image via docker-compose instead of an
error-prone low-level docker call.

Signed-off-by: Benedikt Heine <bebe@bebehei.de>
2018-07-17 09:24:33 +02:00
Amber Brown
3fe0938b76 Merge pull request #3530 from matrix-org/erikj/stream_cache
Don't return unknown entities in get_entities_changed
2018-07-17 13:44:46 +10:00
Amber Brown
fe10dd9fb2 Merge pull request #3540 from krombel/enforce_isort
check isort by travis
2018-07-17 13:41:59 +10:00
Krombel
9677b1d1c0 rename 'isort' to 'check_isort' as requested 2018-07-16 16:03:41 +02:00
Richard van der Hoff
2731bf7ac3 Changelog 2018-07-16 14:12:25 +01:00
Richard van der Hoff
09e29fb58b Attempt to make _filter_events_for_server more efficient 2018-07-16 14:06:09 +01:00
Richard van der Hoff
15b13b537f Add a test which profiles filter_events_for_server in a large room 2018-07-16 14:06:09 +01:00
Krombel
78a9ddcf9a rerun isort with latest version 2018-07-16 14:23:25 +02:00
Richard van der Hoff
ea69d35651 Move filter_events_for_server out of FederationHandler
for easier unit testing.
2018-07-16 13:06:24 +01:00
Krombel
4a27000548 check isort by travis 2018-07-16 13:57:33 +02:00
Jeroen
505530f36a Merge remote-tracking branch 'upstream/develop' into send_sni_for_federation_requests
# Conflicts:
#	synapse/crypto/context_factory.py
2018-07-14 20:24:46 +02:00
Amber Brown
8a4f05fefb Fix develop because I broke it :( (#3535) 2018-07-14 09:51:00 +10:00
Amber Brown
8532953c04 Merge pull request #3534 from krombel/use_parse_and_asserts_from_servlet
Use parse and asserts from http.servlet
2018-07-14 09:09:19 +10:00
Amber Brown
a2374b2c7f fix sytests 2018-07-14 07:52:58 +10:00
Amber Brown
33b60c01b5 Make auth & transactions more testable (#3499) 2018-07-14 07:34:49 +10:00
Krombel
516f960ad8 add changelog 2018-07-13 22:19:19 +02:00
Krombel
3366b9c534 rename assert_params_in_request to assert_params_in_dict
the method "assert_params_in_request" handles dicts, not requests: a
request body has to be parsed to JSON before this method can be used
2018-07-13 21:53:01 +02:00
Krombel
32fd6910d0 Use parse_{int,str} and assert from http.servlet
parse_integer and parse_string can take a request and raise errors
when params are wrong or missing.
This PR uses them more widely, to deduplicate some code and make it
more readable
2018-07-13 21:40:14 +02:00
Erik Johnston
bc832f822f Fixup unit test 2018-07-13 17:03:04 +01:00
Richard van der Hoff
2aba1f549c Merge pull request #3533 from matrix-org/rav/fix_federation_ratelimite_queue
Make FederationRateLimiter queue requests properly
2018-07-13 16:59:18 +01:00
Richard van der Hoff
08546be40f better changelog 2018-07-13 16:47:13 +01:00
Richard van der Hoff
b03a0e14db changelog 2018-07-13 16:28:28 +01:00
Richard van der Hoff
3b391d9c45 Fix unit tests 2018-07-13 16:28:04 +01:00
Richard van der Hoff
33b40d0a25 Make FederationRateLimiter queue requests properly
popitem removes the *most recent* item by default [1]. We want the oldest.

Fixes #3524

[1]: https://docs.python.org/2/library/collections.html#collections.OrderedDict.popitem
2018-07-13 16:19:40 +01:00
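The fix in miniature: OrderedDict.popitem() is LIFO by default, so a queue built on one must pass last=False to pop the oldest entry first.

```python
from collections import OrderedDict

queue = OrderedDict()
queue["req1"] = "oldest"
queue["req2"] = "newest"

key, _ = queue.popitem(last=False)  # FIFO: pops req1, the oldest
assert key == "req1"
```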
Erik Johnston
5f263b607e Newsfile 2018-07-13 15:47:25 +01:00
Erik Johnston
77b692e65d Don't return unknown entities in get_entities_changed
The stream cache keeps track of all entities that have changed since
a particular stream position, so get_entities_changed does not need to
return unknown entities when given a larger stream position.

This makes it consistent with the behaviour of has_entity_changed.
2018-07-13 15:26:10 +01:00
Matthew Hodgson
ba22b6a456 typo 2018-07-13 12:03:39 +01:00
Richard van der Hoff
6dff49b8a9 Merge pull request #3521 from matrix-org/rav/optimise_stream_change_cache
Reduce set building in get_entities_changed
2018-07-12 12:08:49 +01:00
Matthew Hodgson
37c4fba0ac changelog 2018-07-12 11:45:33 +01:00
Richard van der Hoff
38c5fa7ee4 changelog 2018-07-12 11:45:28 +01:00
Matthew Hodgson
12ec58301f shift to using an explicit deleted flag on m.device_list_update EDUs
and generally make it work.
2018-07-12 11:39:43 +01:00
Richard van der Hoff
fa5c2bc082 Reduce set building in get_entities_changed
This line shows up as about 5% of cpu time on a synchrotron:

    not_known_entities = set(entities) - set(self._entity_to_key)

Presumably the problem here is that _entity_to_key can be largeish, and
building a set for its keys every time this function is called is slow.

Here we rewrite the logic to avoid building so many sets.
2018-07-12 11:37:44 +01:00
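The rewrite pattern, sketched: classify the caller's entities in one pass, instead of materialising a throwaway set from every key of the large dict on each call.

```python
def split_known_unknown(entities, entity_to_key):
    """Equivalent to `set(entities) - set(entity_to_key)` for the unknowns,
    but without copying all of entity_to_key's keys into a new set."""
    known, unknown = set(), set()
    for entity in entities:
        (known if entity in entity_to_key else unknown).add(entity)
    return known, unknown
```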
Richard van der Hoff
4c4dd6299d Merge pull request #3316 from matrix-org/rav/enforce_report_api
Enforce the specified API for report_event
2018-07-12 10:50:05 +01:00
Richard van der Hoff
c969754a91 changelog 2018-07-12 10:00:57 +01:00
Richard van der Hoff
482d17b58b Merge branch 'develop' into rav/enforce_report_api 2018-07-12 09:56:28 +01:00
Erik Johnston
0456e05977 Merge pull request #3505 from matrix-org/erikj/receipts_cahce
Use stream cache in get_linearized_receipts_for_room
2018-07-12 09:46:29 +01:00
Erik Johnston
aff1dfdf3d Update return value docstring 2018-07-12 09:45:37 +01:00
Matthew Hodgson
5797f5542b WIP to announce deleted devices over federation
Previously we queued up the poke correctly when the device was deleted,
but then the actual EDU wouldn't get sent, as the device was no longer known.
Instead, we now send EDUs for deleted devices too if there's a poke for them.
2018-07-12 01:32:39 +01:00
David Baker
1b5425527c I failed to correctly guess the PR number 2018-07-11 16:02:29 +01:00
David Baker
36f4fd3e1e Comment dummy TURN parameters in default config
This default config is parsed and used as a base before the actual
config is overlaid, so with these values not commented out, the
code to detect when no turn params were set and refuse to generate
credentials was never firing because the dummy default was always set.
2018-07-11 15:49:29 +01:00
Peter
2ef3f84945 change http links to https
Especially useful for the debian repo, as this makes it easier to get the key in a secure way
2018-07-10 21:14:22 +02:00
Amber Brown
129ffd7b88 Merge pull request #3498 from OlegGirko/fix_attrs_syntax
* Use more portable syntax using attrs package.
2018-07-11 04:22:46 +10:00
Amber Brown
85354bb18e changelog entry 2018-07-11 03:27:03 +10:00
Erik Johnston
6ccefef07a Use 'is not None' and add comments 2018-07-10 18:12:39 +01:00
Matthew Hodgson
ea752bdd99 s/becuase/because/g 2018-07-10 17:58:18 +01:00
Erik Johnston
bb3d536087 Newsfile 2018-07-10 17:28:31 +01:00
Erik Johnston
05f5dabc10 Use stream cache in get_linearized_receipts_for_room
This avoids unnecessarily hitting the database when there has been
no change for the room
2018-07-10 17:22:42 +01:00
Richard van der Hoff
c3c29aa196 Attempt to include db threads in cpu usage stats (#3496)
Let's try to include time spent in the DB threads in the per-request/block cpu
usage metrics.
2018-07-10 16:12:36 +01:00
Richard van der Hoff
55370331da Refactor logcontext resource usage tracking (#3501)
Factor out the resource usage tracking out to a separate object, which can be
passed around and copied independently of the logcontext itself.
2018-07-10 13:56:07 +01:00
Matthew Hodgson
16b10666e7 another typo 2018-07-10 12:28:42 +01:00
Matthew Hodgson
4ea391a6ae typo (i think) 2018-07-10 12:08:09 +01:00
Richard van der Hoff
b1fe697b3c Merge pull request #3497 from matrix-org/rav/measure_fetch_event_loop
Add CPU metrics for _fetch_event_list
2018-07-10 10:00:24 +01:00
Oleg Girko
6c1ec5a1bd Use more portable syntax using attrs package.
Newer syntax

    attr.ib(factory=dict)

is just a syntactic sugar for

    attr.ib(default=attr.Factory(dict))

It was introduced in newest version of attrs package (18.1.0)
and doesn't work with older versions.

We should either require minimum version of attrs to be 18.1.0,
or use older (slightly more verbose) syntax.
Requiring newest version is not a good solution because
Linux distributions may have older version of attrs (17.4.0 in Fedora 28),
and requiring to build (and package)
newer version just to use newer syntactic sugar in only one test
is just too much.
It's much better to fix that test to use older syntax.

Signed-off-by: Oleg Girko <ol@infoserver.lv>
2018-07-10 00:38:49 +01:00
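The two spellings side by side; they define the same attribute, but only the second works on attrs older than 18.1.0:

```python
import attr

@attr.s
class NewSyntax(object):
    cache = attr.ib(factory=dict)  # requires attrs >= 18.1.0

@attr.s
class OldSyntax(object):
    cache = attr.ib(default=attr.Factory(dict))  # works on e.g. 17.4.0 too

assert NewSyntax().cache == OldSyntax().cache == {}
```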
Richard van der Hoff
f3b3b9dd8f changelog 2018-07-09 18:16:52 +01:00
Richard van der Hoff
e31e5dee38 Add CPU metrics for _fetch_event_list
add a Measure block on _fetch_event_list, in the hope that we can better
measure CPU usage here.
2018-07-09 18:15:54 +01:00
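Synapse's Measure block is internal, but the pattern is a timing context manager; a self-contained toy version, purely illustrative:

```python
import time
from contextlib import contextmanager

@contextmanager
def measure(name, report):
    """Toy stand-in for a Measure block: time a section and report it."""
    start = time.time()
    try:
        yield
    finally:
        report(name, time.time() - start)

durations = {}
with measure("_fetch_event_list", lambda n, d: durations.update({n: d})):
    sum(range(100000))  # placeholder for the actual DB fetch
```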
Richard van der Hoff
395fa8d1fd Merge pull request #3464 from matrix-org/hawkowl/isort-run
Run isort on Synapse
2018-07-09 10:24:43 +01:00
Jeroen
b5e157d895 Merge branch 'develop' into send_sni_for_federation_requests
# Conflicts:
#	synapse/http/endpoint.py
2018-07-09 08:51:11 +02:00
Amber Brown
09477bd884 changelog 2018-07-09 16:09:37 +10:00
Amber Brown
49af402019 run isort 2018-07-09 16:09:20 +10:00
Amber Brown
2ee9f1bd1a Add an isort configuration (#3463) 2018-07-09 16:05:21 +10:00
Amber Brown
1241156c82 changelog 2018-07-07 10:48:30 +10:00
Amber Brown
3060bcc8e9 version 2018-07-07 10:48:06 +10:00
Amber Brown
e845fd41c2 Correct attrs package name in requirements (#3492) 2018-07-07 10:46:59 +10:00
Richard van der Hoff
20ff89d6e1 Merge tag 'v0.32.1'
Synapse 0.32.1 (2018-07-06)
===========================

Bugfixes
--------

- Add explicit dependency on netaddr ([#3488](https://github.com/matrix-org/synapse/issues/3488))
2018-07-06 16:56:23 +01:00
Richard van der Hoff
1cfc2c4790 Prepare 0.32.1 release 2018-07-06 16:50:52 +01:00
Richard van der Hoff
2087d5d046 Merge pull request #3488 from matrix-org/rav/fix-netaddr-dep
Add explicit dependency on netaddr
2018-07-06 16:39:56 +01:00
Richard van der Hoff
1464a0578a Add explicit dependency on netaddr
netaddr was missing from the dependencies file, causing failures on upgrade
(and presumably for new installs).
2018-07-06 16:27:17 +01:00
Neil Johnson
277c561766 0.32.0 version bump, update changelog 2018-07-06 15:07:29 +01:00
Amber Brown
89690aaaeb changelog 2018-07-05 20:46:40 +10:00
Amber Brown
be8b32dbc2 ACL changelog 2018-07-05 20:45:12 +10:00
Amber Brown
d196fe42a9 bump version to 0.32.0rc1 2018-07-05 20:22:35 +10:00
Neil Johnson
feef8461d1 Merge remote-tracking branch 'hera/rav/server_acls' into develop 2018-07-05 11:04:01 +01:00
Erik Johnston
1a88640677 Merge pull request #3483 from matrix-org/rav/more_server_name_validation
More server_name validation
2018-07-05 10:04:20 +01:00
Richard van der Hoff
3cf3e08a97 Implementation of server_acls
... as described at
https://docs.google.com/document/d/1EttUVzjc2DWe2ciw4XPtNpUpIl9lWXGEsy2ewDS7rtw.
2018-07-04 19:06:20 +01:00
Richard van der Hoff
546bc9e28b More server_name validation
We need to do a bit more validation when we get a server name, but don't want
to be re-doing it all over the shop, so factor out a separate
parse_and_validate_server_name, and do the extra validation.

Also, use it to verify the server name in the config file.
2018-07-04 18:59:51 +01:00
Richard van der Hoff
f192a93875 Merge pull request #3481 from matrix-org/rav/fix_cachedescriptor_test
Reinstate lost run_on_reactor in unit test
2018-07-04 18:55:33 +01:00
Erik Johnston
13f7adf84b Merge pull request #3473 from matrix-org/erikj/thread_cache
Invalidate cache on correct thread
2018-07-04 10:11:38 +01:00
Erik Johnston
40252d13d1 Merge pull request #3474 from matrix-org/erikj/py3_auth
Fix up auth check
2018-07-04 09:41:33 +01:00
Richard van der Hoff
ea555d5633 Reinstate lost run_on_reactor in unit test
a61738b removed a call to run_on_reactor from a unit test, but that call was
doing something useful, in making the function in question asynchronous.

Reinstate the call and add a check that we are testing what we wanted to be
testing.
2018-07-04 09:40:01 +01:00
Richard van der Hoff
c4e7ad0e0f Add changelog 2018-07-04 09:15:45 +01:00
Richard van der Hoff
a4ab491371 Merge branch 'develop' into rav/drop_re_signing_hacks 2018-07-04 07:13:38 +01:00
Richard van der Hoff
508196e08a Reject invalid server names (#3480)
Make sure that server_names used in auth headers are sane, and reject them with
a sensible error code, before they disappear off into the depths of the system.
2018-07-03 14:36:14 +01:00
Richard van der Hoff
f741630847 Merge pull request #3470 from matrix-org/matthew/fix-utf8-logging
don't mix unicode strings with utf8-in-byte-strings
2018-07-02 15:25:36 +01:00
Matthew Hodgson
6ec3aa2f72 news snippet 2018-07-02 13:43:34 +01:00
Erik Johnston
abb183438c Correct newsfile 2018-07-02 13:12:38 +01:00
Erik Johnston
f88dea577d Newsfile 2018-07-02 11:46:32 +01:00
Erik Johnston
cea4662b13 Newsfile 2018-07-02 11:46:20 +01:00
Erik Johnston
2c33b55738 Avoid relying on int vs None comparison
Python 3 doesn't support comparing None to ints
2018-07-02 11:40:32 +01:00
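The difference in one line: Python 2 silently ordered None below every int, while Python 3 raises TypeError, so the None case has to be handled explicitly.

```python
def newer_than(stream_pos, threshold):
    if stream_pos is None:     # py3: `None > 3` raises TypeError
        return False           # so decide the None case explicitly
    return stream_pos > threshold

assert newer_than(None, 3) is False
assert newer_than(10, 3) is True
```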
Erik Johnston
cbf82dddf1 Ensure that we define sender_domain 2018-07-02 11:37:57 +01:00
Erik Johnston
3905c693c5 Invalidate cache on correct thread 2018-07-02 11:36:44 +01:00
Matthew Hodgson
fc4f8f33be replace invalid utf8 with \ufffd 2018-07-02 11:33:02 +01:00
Matthew Hodgson
1c867f5391 a fix which doesn't NPE everywhere 2018-07-01 11:56:33 +01:00
Matthew Hodgson
f131bf8d3e don't mix unicode strings with utf8-in-byte-strings
otherwise we explode with:

```
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/handlers.py", line 78, in emit
    logging.FileHandler.emit(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 950, in emit
    StreamHandler.emit(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 887, in emit
    self.handleError(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 810, in handleError
    None, sys.stderr)
  File "/usr/lib/python2.7/traceback.py", line 124, in print_exception
    _print(file, 'Traceback (most recent call last):')
  File "/usr/lib/python2.7/traceback.py", line 13, in _print
    file.write(str+terminator)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_io.py", line 170, in write
    self.log.emit(self.level, format=u"{log_io}", log_io=line)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 144, in emit
    self.observer(event)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 136, in __call__
    errorLogger = self._errorLoggerForObserver(brokenObserver)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 156, in _errorLoggerForObserver
    if obs is not observer
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 81, in __init__
    self.log = Logger(observer=self)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 64, in __init__
    namespace = self._namespaceFromCallingContext()
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 42, in _namespaceFromCallingContext
    return currentframe(2).f_globals["__name__"]
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/python/compat.py", line 93, in currentframe
    for x in range(n + 1):
RuntimeError: maximum recursion depth exceeded while calling a Python object
Logged from file site.py, line 129
  File "/usr/lib/python2.7/logging/__init__.py", line 859, in emit
    msg = self.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 732, in format
    return fmt.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 471, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 335, in getMessage
    msg = msg % self.args
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 4: ordinal not in range(128)
Logged from file site.py, line 129
```

...where the logger apparently recurses whilst trying to log the error, hitting the
maximum recursion depth and killing everything badly.
2018-07-01 05:08:58 +01:00
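The rule the commit enforces, in miniature: decode UTF-8 bytes at the boundary, then only ever interpolate text into text destined for the logger.

```python
raw = b"caf\xc3\xa9"                       # utf-8 bytes off the wire
text = raw.decode("utf-8")                 # decode at the boundary...
message = u"got room name: %s" % (text,)   # ...then mix text with text only
assert message == u"got room name: caf\xe9"
```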
Sverre Moe
ff65916108 Add instructions for install on OpenSUSE and SLES 2018-07-01 00:06:51 +02:00
Matthew Hodgson
75d4986a8c Merge pull request #3467 from matrix-org/hawkowl/contributor-requirements
Clarify "real name" in contributor requirements
2018-06-30 00:21:10 +01:00
Amber Brown
7c0cdd330f topfile 2018-06-29 14:13:15 +01:00
Erik Johnston
e3b4043800 Merge pull request #3456 from matrix-org/hawkowl/federation-prevevent-checking
Check the state of prev_events a bit more thoroughly when coming over federation
2018-06-29 13:55:02 +01:00
Amber Brown
ec766b2530 clarification on what "real names" are 2018-06-29 10:33:31 +01:00
Matthew Hodgson
fc0e17b3e5 Merge pull request #3465 from matrix-org/matthew/as_ip_lock
add ip_range_whitelist parameter to limit where ASes can connect from
2018-06-28 21:15:06 +01:00
Matthew Hodgson
f82cf3c7df add test 2018-06-28 21:14:16 +01:00
Matthew Hodgson
0d7eabeada add towncrier snippet 2018-06-28 20:59:01 +01:00
Matthew Hodgson
e72234f6bd fix tests 2018-06-28 20:56:07 +01:00
Matthew Hodgson
f4f1cda928 add ip_range_whitelist parameter to limit where ASes can connect from 2018-06-28 20:32:00 +01:00
Amber Brown
6350bf925e Attempt to be more performant on PyPy (#3462) 2018-06-28 14:49:57 +01:00
Amber Brown
cfda61e9cd topfile update 2018-06-28 11:13:08 +01:00
Amber Brown
72d2143ea8 Revert "Revert "Try to not use as much CPU in the StreamChangeCache"" (#3454) 2018-06-28 11:04:18 +01:00
Amber Brown
caf07f770a topfile 2018-06-27 15:07:16 +01:00
Amber Brown
99800de63d try and clean up 2018-06-27 11:40:27 +01:00
Amber Brown
f03a5d1a17 pep8 2018-06-27 11:38:14 +01:00
Amber Brown
94f09618e5 cleanups 2018-06-27 11:38:03 +01:00
Amber Brown
a7ecf34b70 cleanups 2018-06-27 11:36:03 +01:00
Amber Brown
35cc3e8b14 stylistic cleanup 2018-06-27 11:32:09 +01:00
Amber Brown
8d62baa48c cleanups 2018-06-27 11:31:48 +01:00
Amber Brown
77078d6c8e handle federation not telling us about prev_events 2018-06-27 11:27:32 +01:00
Amber Brown
cd6bcdaf87 Better testing framework for homeserver-using things (#3446) 2018-06-27 10:37:24 +01:00
Jeroen
95341a8f6f take idna implementation from twisted 2018-06-26 21:15:14 +02:00
Jeroen
26651b0d6a towncrier changelog 2018-06-26 21:10:45 +02:00
Jeroen
b7f34ee348 allow self-signed certificates 2018-06-26 20:41:05 +02:00
Matthew Hodgson
71ad43a8b5 Merge pull request #3452 from matrix-org/revert-3451-hawkowl/sorteddict-api
Revert "Try to not use as much CPU in the StreamChangeCache"
2018-06-26 18:09:28 +01:00
Matthew Hodgson
8057489b26 Revert "Try to not use as much CPU in the StreamChangeCache" 2018-06-26 18:09:01 +01:00
Matthew Hodgson
d91efb06cf Merge pull request #3451 from matrix-org/hawkowl/sorteddict-api
Try to not use as much CPU in the cache
2018-06-26 17:49:55 +01:00
Amber Brown
1202508067 fixes 2018-06-26 17:29:01 +01:00
Amber Brown
bd3d329c88 fixes 2018-06-26 17:28:12 +01:00
Amber Brown
abfe4b2957 try and make loading items from the cache faster 2018-06-26 17:25:34 +01:00
Matthew Hodgson
c695a8d003 Merge pull request #3449 from matrix-org/dbkr/fix_deactivate_account_multiple_pending
Fix error on deleting users pending deactivation
2018-06-26 10:59:19 +01:00
David Baker
028490afd4 Fix error on deleting users pending deactivation
Use simple_delete instead of simple_delete_one as commented
2018-06-26 10:52:52 +01:00
Matthew Hodgson
c7f6b420ae Merge pull request #3448 from matrix-org/matthew/gdpr-deactivate-admin-api
add GDPR erase param to deactivate API
2018-06-26 10:43:14 +01:00
Matthew Hodgson
9570aa82eb update doc for deactivate API 2018-06-26 10:42:50 +01:00
Matthew Hodgson
1e788db430 add GDPR erase param to deactivate API 2018-06-26 10:26:54 +01:00
Amber Brown
1d62c4a127 Merge pull request #3438 from turt2live/travis/dont-print-access-tokens-in-logs
Stop including access tokens in warnings in the log
2018-06-26 09:55:55 +01:00
Erik Johnston
d72fb9a448 Merge pull request #3442 from matrix-org/matthew/allow-unconsented-parts
allow non-consented users to still part rooms (to let us autopart them)
2018-06-25 20:14:18 +01:00
Erik Johnston
1b947d6dde Merge pull request #3443 from matrix-org/erikj/fast_filter_servers
Add fast path to _filter_events_for_server
2018-06-25 20:11:00 +01:00
Erik Johnston
df48f7ef37 Actually fix it 2018-06-25 20:03:41 +01:00
Erik Johnston
a0e8a53c6d Comment 2018-06-25 19:57:38 +01:00
Erik Johnston
7bdc5c8fa3 Fix bug with assuming wrong type 2018-06-25 19:56:02 +01:00
Erik Johnston
ea7a9c0483 Add fast path to _filter_events_for_server
Most rooms have a trivial history visibility like "shared" or
"world_readable", especially large rooms, so lets not bother getting the
full membership of those rooms in that case.
2018-06-25 19:49:13 +01:00
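A sketch of that fast path (simplified; the real visibility rules are more involved): skip the membership fetch entirely when nothing stricter than "shared"/"world_readable" is in play.

```python
def filter_events_for_server_sketch(events, visibilities, fetch_membership,
                                    is_visible):
    # Fast path: with only "shared"/"world_readable" visibility, every
    # event is visible and the (expensive) membership fetch is skipped.
    if all(v in ("shared", "world_readable") for v in visibilities):
        return list(events)
    membership = fetch_membership()  # slow path only when necessary
    return [e for e in events if is_visible(e, membership)]
```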
Matthew Hodgson
0269367f18 allow non-consented users to still part rooms (to let us autopart them) 2018-06-25 17:56:10 +01:00
Matthew Hodgson
784189b1f4 typos 2018-06-25 17:37:16 +01:00
Matthew Hodgson
6eb861b67f typo 2018-06-25 17:37:16 +01:00
Erik Johnston
947fea67cb Need to pass reactor to endpoint fac 2018-06-25 15:22:57 +01:00
Amber Brown
36cb570641 Use towncrier to build the changelog (#3425) 2018-06-25 14:42:27 +01:00
Erik Johnston
33fdcfa957 Merge pull request #3441 from matrix-org/erikj/redo_erasure
Fix user erasure and re-enable
2018-06-25 14:37:01 +01:00
Erik Johnston
eb50c44eaf Add UserErasureWorkerStore to workers 2018-06-25 14:22:24 +01:00
Amber Brown
07cad26d65 Remove all global reactor imports & pass it around explicitly (#3424) 2018-06-25 14:08:28 +01:00
Erik Johnston
244484bf3c Revert "Revert "Merge pull request #3431 from matrix-org/rav/erasure_visibility""
This reverts commit 1d009013b3.
2018-06-25 13:42:55 +01:00
Jeroen
07b4f88de9 formatting changes for pep8 2018-06-25 12:31:16 +02:00
Jeroen
3d605853c8 send SNI for federation requests 2018-06-24 22:38:43 +02:00
Travis Ralston
ec1e799e17 Don't print invalid access tokens in the logs
Tokens shouldn't be appearing in the logs, valid or invalid.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-06-24 12:17:01 -06:00
Richard van der Hoff
1d009013b3 Revert "Merge pull request #3431 from matrix-org/rav/erasure_visibility"
This reverts commit ce0d911156, reversing
changes made to b4a5d767a9.
2018-06-22 16:35:10 +01:00
Richard van der Hoff
516f884176 Merge pull request #3435 from matrix-org/rav/fix_event_push_actions_tablescan
Fix event_push_actions tablescan when reinserting events
2018-06-22 15:57:22 +01:00
Mark Haines
9850f66abe Deleting from event_push_actions needs to use an index 2018-06-22 15:54:48 +01:00
Richard van der Hoff
fe8d2968e3 Merge pull request #3434 from matrix-org/rav/search_logging
Add some logging to /search and /context queries
2018-06-22 15:51:11 +01:00
Erik Johnston
28ddc6cfbe Also log number of events for search context 2018-06-22 15:42:11 +01:00
Erik Johnston
4b4cec3989 Add some logging to search queries 2018-06-22 15:42:11 +01:00
Richard van der Hoff
200e11c5bf Merge pull request #3432 from matrix-org/rav/joined_hosts_cache_non_iterable
Make _get_joined_hosts_cache cache non-iterable
2018-06-22 15:18:51 +01:00
Erik Johnston
f8272813a9 Make _get_joined_hosts_cache cache non-iterable 2018-06-22 15:12:26 +01:00
Richard van der Hoff
1d7ad11747 Merge pull request #3430 from matrix-org/rav/configurable_push_action_rotation
Make push actions rotation configurable
2018-06-22 15:07:31 +01:00
Erik Johnston
ce0d911156 Merge pull request #3431 from matrix-org/rav/erasure_visibility
Support hiding events from deleted users
2018-06-22 15:06:44 +01:00
Erik Johnston
b4a5d767a9 Merge pull request #3428 from matrix-org/erikj/persisted_pdu
Simplify get_persisted_pdu
2018-06-22 14:47:43 +01:00
Erik Johnston
f79abda87f Merge pull request #3427 from matrix-org/erikj/remove_filters
remove dead filter_events_for_clients
2018-06-22 14:45:42 +01:00
Erik Johnston
75dc3ddeab Make push actions rotation configurable 2018-06-22 14:44:37 +01:00
Richard van der Hoff
bb018d0b5b Merge pull request #3383 from matrix-org/rav/hacky_speedup_state_group_cache
Improve StateGroupCache efficiency for wildcard lookups
2018-06-22 14:01:55 +01:00
Richard van der Hoff
43e02c409d Disable partial state group caching for wildcard lookups
When _get_state_for_groups is given a wildcard filter, just do a complete
lookup. Hopefully this will give us the best of both worlds by not filling up
RAM if we only need one or two keys, but also making the cache still work
for the federation reader use case.
2018-06-22 11:52:07 +01:00
Richard van der Hoff
240f192523 Merge pull request #3382 from matrix-org/rav/optimise_state_groups
Optimise state_group_cache update
2018-06-22 11:20:20 +01:00
Richard van der Hoff
70e6501913 Merge pull request #3419 from matrix-org/rav/events_per_request
Log number of events fetched from DB
2018-06-22 11:17:56 +01:00
Richard van der Hoff
0495fe0035 Indirect evt_count updates via method call
so that we can stub it for the sentinel and not have a billion failing UTs
2018-06-22 10:42:28 +01:00
Amber Brown
9a685d60ae Merge pull request #3418 from matrix-org/rav/fix_metric_desc
Fix description of "python_gc_time" metric
2018-06-22 09:43:26 +01:00
Amber Brown
77ac14b960 Pass around the reactor explicitly (#3385) 2018-06-22 09:37:10 +01:00
Richard van der Hoff
cbbfaa4be8 Fix description of "python_gc_time" metric 2018-06-21 10:02:42 +01:00
Amber Brown
c2eff937ac Populate synapse_federation_client_sent_pdu_destinations:count again (#3386) 2018-06-21 09:39:58 +01:00
Amber Brown
99b77aa829 Fix tcp protocol metrics naming (#3410) 2018-06-21 09:39:27 +01:00
Richard van der Hoff
b088aafcae Log number of events fetched from DB
When we finish processing a request, log the number of events we fetched from
the database to handle it.

[I'm trying to figure out which requests are responsible for large amounts of
event cache churn. It may turn out to be more helpful to add counts to the
prometheus per-request/block metrics, but that is an extension to this code
anyway.]
2018-06-21 06:15:03 +01:00
Richard van der Hoff
aff3d76920 Merge pull request #3416 from matrix-org/rav/restart_indicator
Write a clear restart indicator in logs
2018-06-20 18:14:08 +01:00
Richard van der Hoff
02bfc581f8 Merge pull request #3399 from costacruise/master
Add error code to room creation error
2018-06-20 17:26:25 +01:00
Richard van der Hoff
245d53d32a Write a clear restart indicator in logs
I'm fed up with never being able to find the point at which a server restarted
in the logs.
2018-06-20 15:33:14 +01:00
Amber Brown
f6c4d74f96 Fix inflight requests metric (incorrect name & traceback) (#3413) 2018-06-20 11:18:57 +01:00
Matthew Hodgson
ccfdaf68be spell gauge correctly 2018-06-16 07:10:34 +01:00
Richard van der Hoff
9a793f861c Merge branch 'master' into develop 2018-06-14 16:36:01 +01:00
Richard van der Hoff
53969e1960 Merge tag 'v0.31.2'
SECURITY UPDATE: Prevent unauthorised users from setting state events in a room
when there is no `m.room.power_levels` event in force in the room. (PR #3397)

Discussion around the Matrix Spec change proposal for this change can be
followed at https://github.com/matrix-org/matrix-doc/issues/1304.
2018-06-14 16:35:33 +01:00
Richard van der Hoff
667c6546bd link to spec proposal from changelog 2018-06-14 16:27:41 +01:00
Richard van der Hoff
7e1c616452 v0.31.2 2018-06-14 16:24:32 +01:00
Richard van der Hoff
ba438a3ac1 changelog for 0.31.2 2018-06-14 16:22:46 +01:00
Richard van der Hoff
61ab08a197 Merge pull request #3397 from matrix-org/rav/adjust_auth_rules
Adjust event auth rules when there is no PL event
2018-06-14 16:09:13 +01:00
Richard van der Hoff
1e77ac66e3 Fix broken unit test
We need power levels for this test to do what it is supposed to do.
2018-06-14 14:21:29 +01:00
Richard van der Hoff
a502cfec00 remove spurious debug 2018-06-14 14:20:53 +01:00
Michael Wagner
19cd3120ec Add error code to room creation error
This error code is mentioned in the documentation at https://matrix.org/docs/api/client-server/#!/Room32creation/createRoom
2018-06-14 14:08:40 +02:00
Richard van der Hoff
5c9afd6f80 Make default state_default 50
Make it so that, before there is a power-levels event in the room, you need a
power level of at least 50 to send state.

Partially addresses https://github.com/matrix-org/matrix-doc/issues/1192
2018-06-14 12:38:09 +01:00
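The new default, sketched with simplified inputs (not the full event-auth code): absent an m.room.power_levels event, sending state now requires level 50.

```python
def send_state_threshold(power_levels_content):
    """Power level needed to send a state event (simplified sketch)."""
    if power_levels_content is None:
        return 50  # previously lower, which is what this change addresses
    return power_levels_content.get("state_default", 50)

assert send_state_threshold(None) == 50
assert send_state_threshold({"state_default": 30}) == 30
```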
Richard van der Hoff
52423607bd Clarify interface for event_auth
stop pretending that it returns a boolean, which almost gave me a heart
attack.
2018-06-14 12:26:17 +01:00
Amber Brown
f116f32ace add a last seen metric (#3396) 2018-06-14 20:26:59 +10:00
Richard van der Hoff
557b686eac Refactor get_send_level to take a power_levels event
it makes it easier for me to reason about
2018-06-14 11:26:27 +01:00
Amber Brown
a61738b316 Remove run_on_reactor (#3395) 2018-06-14 18:27:37 +10:00
Richard van der Hoff
3681437c35 Merge pull request #3368 from matrix-org/rav/fix_federation_client_host
Fix commandline federation_client to send the right Host header
2018-06-13 15:41:51 +01:00
Amber Brown
0fde1896cd Merge pull request #3389 from turt2live/travis/name_metrics
Use the correct flag (enable_metrics) when warning about an incorrect metrics setup
2018-06-13 23:50:10 +10:00
Amber Brown
2a4fde0a6f Merge pull request #3390 from turt2live/travis/appsvc-metrics
Use the RegistryProxy for appservices too
2018-06-13 22:05:15 +10:00
Michael Telatynski
94700e55fa if inviter_display_name == ""||None then default to inviter MXID
to prevent email invite from "None"
2018-06-13 10:31:01 +01:00
Travis Ralston
45768d1640 Use the RegistryProxy for appservices too
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-06-12 12:55:48 -06:00
Travis Ralston
12285a1a76 The flag is named enable_metrics, not collect_metrics
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-06-12 12:51:31 -06:00
Richard van der Hoff
96bad44f87 Fix federation_client to send the right Host
This appears to have stopped working since matrix.org moved to cloudflare. The
Host header should match the name of the server, not whatever is in the SRV
record.
2018-06-12 14:14:36 +01:00
Richard van der Hoff
b6faef2ad7 Filter out erased messages
Redact any messages sent by erased users.
2018-06-12 09:53:18 +01:00
Richard van der Hoff
f1023ebf4b mark accounts as erased when requested 2018-06-12 09:53:18 +01:00
Richard van der Hoff
3ff8a619f5 UserErasureStore
to store which users have been erased
2018-06-12 09:52:22 +01:00
Richard van der Hoff
9fc5b74b24 simplify get_persisted_pdu
it doesn't make much sense to use get_persisted_pdu on the receive path: just
get the event straight from the store.
2018-06-12 09:51:31 +01:00
Richard van der Hoff
bd348f0af6 remove dead filter_events_for_clients
This is only used by filter_events_for_client, so we can simplify the whole
thing by just doing one user at a time, and removing a dead storage function to
boot.
2018-06-12 09:51:31 +01:00
Richard van der Hoff
eb32b2ca20 Optimise state_group_cache update
(1) matrix-org-hotfixes has removed the intern calls; let's do the same here.
(2) remove redundant iteritems() so we can use an optimised db update.
2018-06-11 22:56:11 +01:00
David Baker
187a546bff Merge pull request #3276 from matrix-org/dbkr/unbind
Remove email addresses / phone numbers from ID servers when they're removed from synapse
2018-06-11 16:02:00 +01:00
Matthew Hodgson
d6cc369205 fix idiotic typo in state res 2018-06-11 14:43:55 +01:00
Matthew Hodgson
c96d882a02 Merge branch 'develop' into matthew/filter_members 2018-06-10 12:26:14 +03:00
Vincent Breitmoser
b800834351 add note that the affinity package is required for the cpu_affinity setting 2018-06-09 22:50:29 +02:00
Neil Johnson
ed5a0780a4 Merge branch 'master' into develop 2018-06-08 15:47:11 +01:00
Neil Johnson
1032393dfb Merge tag 'v0.31.1'
Changes in synapse v0.31.1 (2018-06-08)
=======================================

v0.31.1 fixes a security bug in the ``get_missing_events`` federation API
where event visibility rules were not applied correctly.

We are not aware of it being actively exploited but please upgrade asap.

Bug Fixes:

* Fix event filtering in get_missing_events handler (PR #3371)
2018-06-08 15:46:18 +01:00
Neil Johnson
aefcc0f5e5 tweak changelog 2018-06-08 15:32:54 +01:00
Neil Johnson
82e751c43f Update CHANGES.rst 2018-06-08 15:22:34 +01:00
Neil Johnson
0eb4722932 changelog a bump version 2018-06-08 15:21:46 +01:00
Richard van der Hoff
c6b1441c52 Fix event filtering in get_missing_events handler 2018-06-08 14:15:31 +01:00
Matthew Hodgson
8b98acca05 fix various changelog bugs and typos 2018-06-08 14:15:16 +01:00
David Baker
0e505b1913 Merge pull request #3372 from matrix-org/rav/better_verification_logging
Try to log more helpful info when a sig verification fails
2018-06-08 13:32:02 +01:00
David Baker
ad9edd1d96 Merge pull request #3371 from matrix-org/rav/fix_get_missing_events
Fix event filtering in get_missing_events handler
2018-06-08 13:19:15 +01:00
Richard van der Hoff
e82db24a0e Try to log more helpful info when a sig verification fails
Firstly, don't swallow the reason for the failure

Secondly, don't assume all exceptions are verification failures

Thirdly, log a bit of info about the key being used if debug is enabled
2018-06-08 12:13:08 +01:00
Richard van der Hoff
0834b49c6a Fix event filtering in get_missing_events handler 2018-06-08 11:34:46 +01:00
Matthew Hodgson
36446ffedb fix various changelog bugs and typos 2018-06-07 23:54:16 +03:00
Richard van der Hoff
8503dd0047 Remove event re-signing hacks
These "temporary fixes" have been here three and a half years, and I can't find
any events in the matrix.org database where the calculated signature differs
from what's in the db. It's time for them to go away.
2018-06-07 16:08:29 +01:00
Will Hunt
13d211edc1 Merge pull request #3344 from Half-Shot/hs/as-metrics
Add metrics to track appservice transactions
2018-06-07 11:37:12 +01:00
Richard van der Hoff
1152f495a0 Merge pull request #3363 from matrix-org/rav/fix_purge
Fix event-purge-by-ts admin API
2018-06-07 11:34:46 +01:00
Richard van der Hoff
e2acf536d4 Merge pull request #3355 from matrix-org/rav/fix_federation_backfill
Fix federation backfill from sqlite servers
2018-06-07 10:18:01 +01:00
Amber Brown
0160f6601f Merge pull request #3356 from matrix-org/rav/add_missing_attr_dep
Make sure that attr is installed
2018-06-07 16:33:30 +10:00
Richard van der Hoff
f4caf3f83d fix log 2018-06-07 00:26:38 +01:00
Richard van der Hoff
0546715c18 Fix event-purge-by-ts admin API
This got completely broken in 0.30.

Fixes #3300.
2018-06-07 00:15:49 +01:00
Richard van der Hoff
57e3f923d2 Add missing dependency on attr
We've recently added a dep on `attr`. I don't know why the CI didn't pick this
up, but we should make it explicit anyway.
2018-06-06 17:12:41 +01:00
Richard van der Hoff
d3a8c9c55e Fix sql error in _get_state_groups_from_groups
If this was called with a `(type, None)` entry in types (which is supposed to
return all state of type `type`), it would explode with a sql error.
2018-06-06 14:19:01 +01:00
Neil Johnson
fef6c2cdcc Merge branch 'master' into develop 2018-06-06 12:28:15 +01:00
Neil Johnson
752b7b32ed Merge tag 'v0.31.0'
Changes in synapse v0.31.0 (2018-06-06)
======================================

Most notable change from v0.30.0 is to switch to python prometheus library to improve system
stats reporting. WARNING this changes a number of prometheus metrics in a
backwards-incompatible manner. For more details, see
`docs/metrics-howto.rst <docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310>`_.

Bug Fixes:

* Fix metric documentation tables (PR #3341)
* Fix LaterGauge error handling (694968f)
* Fix replication metrics (b7e7fd2)

Changes in synapse v0.31.0-rc1 (2018-06-04)
==========================================

Features:

* Switch to the Python Prometheus library (PR #3256, #3274)
* Let users leave the server notice room after joining (PR #3287)

Changes:

* daily user type phone home stats (PR #3264)
* Use iter* methods for _filter_events_for_server (PR #3267)
* Docs on consent bits (PR #3268)
* Remove users from user directory on deactivate (PR #3277)
* Avoid sending consent notice to guest users (PR #3288)
* disable CPUMetrics if no /proc/self/stat (PR #3299)
* Add local and loopback IPv6 addresses to url_preview_ip_range_blacklist (PR #3312) Thanks to @thegcat!
* Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
* Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
* Reduce stuck read-receipts: ignore depth when updating (PR #3318)
* Put python's logs into Trial when running unit tests (PR #3319)

Changes, python 3 migration:

* Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
* replace some iteritems with six (PR #3244) Thanks to @NotAFile!
* Add batch_iter to utils (PR #3245) Thanks to @NotAFile!
* use repr, not str (PR #3246) Thanks to @NotAFile!
* Misc Python3 fixes (PR #3247) Thanks to @NotAFile!
* Py3 storage/_base.py (PR #3278) Thanks to @NotAFile!
* more six iteritems (PR #3279) Thanks to @NotAFile!
* More Misc. py3 fixes (PR #3280) Thanks to @NotAFile!
* remaining isinstance fixes (PR #3281) Thanks to @NotAFile!
* py3-ize state.py (PR #3283) Thanks to @NotAFile!
* extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
* use memoryview in py3 (PR #3303) Thanks to @NotAFile!

Bugs:

* Fix federation backfill bugs (PR #3261)
* federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!
2018-06-06 12:27:33 +01:00
Neil Johnson
3f589f9097 7 char sha in changelog 2018-06-06 11:39:42 +01:00
Neil Johnson
176f1206d1 Update CHANGES.rst 2018-06-06 11:28:30 +01:00
Neil Johnson
61134debdc bump version and changelog 2018-06-06 11:26:21 +01:00
Richard van der Hoff
ad459a106c Merge pull request #3349 from t3chguy/redact_as_request_token
Redact AS tokens in log (fixes to #3327)
2018-06-06 10:58:07 +01:00
Michael Telatynski
592c162516 also redact __str__ of ApplicationService used for logging 2018-06-06 10:35:29 +01:00
Michael Telatynski
330432031b redact_uri in two missed log paths 2018-06-06 10:25:48 +01:00
David Baker
bf54c1cf6c pep8 2018-06-06 10:15:33 +01:00
David Baker
3e4bc4488c More doc fixes 2018-06-06 09:44:10 +01:00
Amber Brown
48e2b48888 Merge pull request #3347 from krombel/py3_extend_tox_2
update tox.ini to cover 292 succeeding tests
2018-06-06 16:37:19 +10:00
Amber Brown
d8db6d9267 Merge pull request #3348 from intelfx/fix-sortedcontainers
federation/send_queue.py: fix usage of sortedcontainers.SortedDict
2018-06-06 16:36:53 +10:00
Amber Brown
23c785992f Fix metric documentation tables (#3341) 2018-06-06 07:12:16 +01:00
Richard van der Hoff
b3b16490f7 Add note to changelog on prometheus metrics 2018-06-06 07:08:36 +01:00
Richard van der Hoff
592ee217a3 Merge commit 'b7e7fd2' into release-v0.31.0 2018-06-06 07:02:02 +01:00
Amber Brown
304bb22c1d Fix metric documentation tables (#3341) 2018-06-06 15:52:37 +10:00
Ivan Shapovalov
c88d50aa8f federation/send_queue.py: fix usage of sortedcontainers.SortedDict 2018-06-06 02:45:18 +03:00
Krombel
0c87eed294 update tox.ini to cover 292 succeeding tests
Signed-Off-By: Matthias Kesler <krombel@krombel.de>
2018-06-05 23:10:15 +02:00
Richard van der Hoff
e316407b5d Merge pull request #3327 from t3chguy/redact_as_request_token
Strip `access_token` from outgoing requests
2018-06-05 19:08:46 +01:00
Michael Telatynski
e6cbf47773 factor out uri redaction into a method on http 2018-06-05 18:31:40 +01:00
David Baker
607bd27c83 fix pep8 2018-06-05 18:10:35 +01:00
David Baker
d62162bbec doc fixes 2018-06-05 18:09:13 +01:00
Richard van der Hoff
617afee069 Merge pull request #3340 from ArchangeGabriel/patch-1
doc/postgres.rst: fix display of the last command block
2018-06-05 17:45:17 +01:00
Richard van der Hoff
522bd3c8a3 Merge remote-tracking branch 'origin/master' into develop 2018-06-05 17:42:49 +01:00
Will Hunt
d6e3c2c79b Let's try labels instead of label, that might work 2018-06-05 17:30:45 +01:00
Amber Brown
f7869f8f8b Port to sortedcontainers (with tests!) (#3332) 2018-06-06 00:13:57 +10:00
Will Hunt
604cff1a06 Add metrics to track appservice transactions 2018-06-05 13:21:30 +01:00
Bruno Pagani
b50f18171d doc/postgres.rst: fix display of the last command block
Also indent all of them with 4 spaces.
2018-06-04 22:41:52 +00:00
Richard van der Hoff
f29b41fde9 Merge pull request #3324 from matrix-org/rav/remove_dead_method
Remove was_forgotten_at
2018-06-04 18:18:08 +01:00
Richard van der Hoff
28b0490dfd Merge pull request #3334 from matrix-org/rav/cache_factor_override
Cache factor override system for specific caches
2018-06-04 17:19:33 +01:00
Richard van der Hoff
b7e7fd2d0e Fix replication metrics
fix bug introduced in #3256
2018-06-04 16:23:05 +01:00
Neil Johnson
244ab974e7 bump version and changelog 2018-06-04 16:09:58 +01:00
Richard van der Hoff
694968fa81 Hopefully, fix LaterGauge error handling 2018-06-04 15:59:14 +01:00
Erik Johnston
042eedfa2b Add hacky cache factor override system 2018-06-04 15:39:28 +01:00
David Baker
c5930d513a Docstring 2018-06-04 12:05:58 +01:00
David Baker
6a29e815fc Fix comment 2018-06-04 12:01:23 +01:00
David Baker
e44150a6de Missing yield 2018-06-04 12:01:13 +01:00
David Baker
f731e42baf docstring 2018-06-04 12:00:51 +01:00
Amber Brown
5dbf305444 Put python's logs into Trial when running unit tests (#3319) 2018-06-04 16:06:06 +10:00
Amber Brown
86accac5d5 Merge pull request #3328 from intelfx/fix-metrics-LaterGauge-usage
federation: fix LaterGauge usage
2018-06-04 15:35:32 +10:00
Matthew Hodgson
28f09fcdd5 Merge branch 'develop' into matthew/filter_members 2018-06-04 00:09:17 +03:00
Matthew Hodgson
5f6122fe10 more comments 2018-06-04 00:08:52 +03:00
Ivan Shapovalov
7d9d75e4e8 federation/send_queue.py: fix usage of LaterGauge
Fixes a startup crash due to commit df9f72d9e5
"replacing portions".
2018-06-03 14:16:17 +03:00
Michael Telatynski
09503126df Strip access_token from outgoing requests using existing regex 2018-06-02 23:25:13 +01:00
Richard van der Hoff
c1f4118bb6 Remove was_forgotten_at
This is unused. IT MUST DIE!!!1
̧̪͈̱̹̳͖͙H̵̰̤̰͕̖e̛ ͚͉̗̼̞w̶̩̥͉̮h̩̺̪̩͘ͅọ͎͉̟ ̜̩͔̦̘ͅW̪̫̩̣̲͔̳a͏͔̳͖i͖͜t͓̤̠͓͙s̘̰̩̥̙̝ͅ ̲̠̬̥Be̡̙̫̦h̰̩i̛̫͙͔̭̤̗̲n̳͞d̸ ͎̻͘T̛͇̝̲̹̠̗ͅh̫̦̝ͅe̩̫͟ ͓͖̼W͕̳͎͚̙̥ą̙l̘͚̺͔͞ͅl̳͍̙̤̤̮̳.̢
̟̺̜̙͉Z̤̲̙̙͎̥̝A͎̣͔̙͘L̥̻̗̳̻̳̳͢G͉̖̯͓̞̩̦O̹̹̺!̙͈͎̞̬ *
2018-06-01 18:21:49 +01:00
Richard van der Hoff
a9e97dcd65 Merge pull request #3317 from thegcat/feature/3312-add_ipv6_to_blacklist_example_config
Add private IPv6 addresses to example config for url preview blacklist
2018-06-01 14:45:14 +01:00
Neil Johnson
71477f3317 Merge pull request #3264 from matrix-org/neil/sign-up-stats
daily user type phone home stats
2018-06-01 13:42:01 +00:00
Richard van der Hoff
41006d9c28 Merge pull request #3318 from matrix-org/rav/ignore_depth_on_rrs
Ignore depth when updating read-receipts
2018-06-01 14:14:45 +01:00
Richard van der Hoff
9f797a24a4 Handle RRs which arrive before their events 2018-06-01 14:01:43 +01:00
Richard van der Hoff
857e6fd8b6 Ignore depth when updating read-receipts
Order read receipts by stream ordering instead of depth
2018-06-01 12:18:11 +01:00
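The idea in miniature: a receipt only advances if the new event is later in stream order; topological depth no longer matters.

```python
def should_advance_receipt(current, candidate):
    """current/candidate: dicts with an int "stream_ordering" (sketch)."""
    if current is None:
        return True
    return candidate["stream_ordering"] > current["stream_ordering"]

assert should_advance_receipt({"stream_ordering": 10},
                              {"stream_ordering": 12})
```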
Felix Schäfer
4ef76f3ac4 Add private IPv6 addresses to preview blacklist #3312
The added addresses are expected to be local or loopback addresses and
shouldn't be spidered for previews.

Signed-off-by: Felix Schäfer <felix@thegcat.net>
2018-06-01 12:18:35 +02:00
Neil Johnson
4986b084f8 remove unnecessary INSERT 2018-06-01 10:50:40 +01:00
Richard van der Hoff
7e15410f02 Enforce the specified API for report_event
as per
https://matrix.org/docs/spec/client_server/unstable.html#post-matrix-client-r0-rooms-roomid-report-eventid
2018-05-31 18:17:11 +01:00
Richard van der Hoff
c2c3092cce code_style.rst: formatting 2018-05-31 16:11:34 +01:00
Amber Brown
febe0ec8fd Run Prometheus on a different port, optionally. (#3274) 2018-05-31 19:04:50 +10:00
Amber Brown
c936a52a9e Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (#3307) 2018-05-31 19:03:47 +10:00
Richard van der Hoff
e73635191f Merge pull request #3290 from rubo77/patch-7
Add link to thorough instruction how to configure consent
2018-05-30 19:52:01 +01:00
Richard van der Hoff
219c2a322b remove trailing whitespace 2018-05-30 19:42:19 +01:00
Richard van der Hoff
2e4be8bfd9 fix english and wrap comment 2018-05-30 19:24:12 +01:00
Amber Brown
872cf43516 Merge pull request #3303 from NotAFile/py3-memoryview
use memoryview in py3
2018-05-30 12:51:42 +10:00
Amber Brown
debff7ae09 Merge pull request #3281 from NotAFile/py3-six-isinstance
remaining isintance fixes
2018-05-30 12:44:46 +10:00
Richard van der Hoff
34b85df7f5 Update some comments and docstrings in SyncHandler 2018-05-29 22:31:18 +01:00
Richard van der Hoff
711f61a31d Merge pull request #3304 from matrix-org/rav/exempt_as_users_from_gdpr
Exempt AS-registered users from GDPR consent
2018-05-29 20:25:12 +01:00
Richard van der Hoff
a995fdae39 fix tests 2018-05-29 20:19:29 +01:00
Richard van der Hoff
4a9cbdbc15 Exempt AS-registered users from GDPR consent 2018-05-29 19:54:32 +01:00
Neil Johnson
ab0ef31dc7 create users index on creation_ts 2018-05-29 17:51:08 +01:00
Neil Johnson
558f3d376a create index in background 2018-05-29 17:47:55 +01:00
Neil Johnson
c379acd4fd bump version 2018-05-29 17:47:28 +01:00
Richard van der Hoff
db2e4608ab Merge pull request #3302 from krombel/py3_extend_tox_testing
extend tox testing for py3 to avoid regressions
2018-05-29 17:37:52 +01:00
Adrian Tschira
4b9d0cde97 add remaining memoryview changes 2018-05-29 17:42:43 +02:00
Adrian Tschira
7873cde526 pep8 2018-05-29 17:35:55 +02:00
Adrian Tschira
1afafb3497 use memoryview in py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-29 17:14:34 +02:00
Krombel
d6cd55532b extend tox testing for py3 to avoid regressions 2018-05-29 16:02:41 +02:00
Matthew Hodgson
9bbb9f5556 add lazy_load_members to the filter json schema 2018-05-29 04:26:10 +01:00
Matthew Hodgson
8df7bad839 pep8 2018-05-29 02:49:36 +01:00
Amber Brown
235b53263a Merge pull request #3299 from matrix-org/matthew/macos-fixes
disable CPUMetrics if no /proc/self/stat
2018-05-29 11:45:45 +10:00
Matthew Hodgson
ff1bc0a279 pep8 2018-05-29 02:32:15 +01:00
Matthew Hodgson
adb6bac4d5 fix another dumb typo 2018-05-29 02:29:22 +01:00
Matthew Hodgson
0a240ad36e disable CPUMetrics if no /proc/self/stat
fixes build on macOS again
2018-05-29 02:23:30 +01:00
Matthew Hodgson
284e36ad04 fix dumb typo 2018-05-29 02:23:11 +01:00
Matthew Hodgson
b69ff33d9e disable CPUMetrics if no /proc/self/stat
fixes build on macOS again
2018-05-29 02:22:27 +01:00
Matthew Hodgson
5e6b31f0da fix dumb typo 2018-05-29 02:22:09 +01:00
Matthew Hodgson
a6c8f7c875 add pydoc 2018-05-29 01:09:55 +01:00
Matthew Hodgson
7a6df013cc merge develop 2018-05-29 00:25:22 +01:00
Amber Brown
81717e8515 Merge pull request #3256 from matrix-org/3218-official-prom
Switch to the Python Prometheus library
2018-05-28 23:30:09 +10:00
Amber Brown
57ad76fa4a fix up tests 2018-05-28 19:51:53 +10:00
Amber Brown
3ef5cd74a6 update to more consistently use seconds in any metrics or logging 2018-05-28 19:39:27 +10:00
Amber Brown
5c40ce3777 invalid syntax :( 2018-05-28 19:16:09 +10:00
Amber Brown
357c74a50f add comment about why unreg 2018-05-28 19:14:41 +10:00
Amber Brown
a2eb5db4a0 update metrics to be in seconds 2018-05-28 19:10:27 +10:00
Amber Brown
754826a830 Merge remote-tracking branch 'origin/develop' into 3218-official-prom 2018-05-28 18:57:23 +10:00
Ruben Barkow
08ea5fe635 add link to thorough instruction how to configure consent 2018-05-25 23:19:55 +02:00
Richard van der Hoff
60f09b1e11 Merge pull request #3288 from matrix-org/rav/no_spam_guests
Avoid sending consent notice to guest users
2018-05-25 11:44:45 +01:00
Richard van der Hoff
66bdae986f Fix default for send_server_notice_to_guests
bool("False") == True...
2018-05-25 11:42:05 +01:00
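The pitfall above is plain Python: bool() of any non-empty string is truthy,
so parsing a config value of "False" with bool() silently yields True. A
minimal illustration, using the standard library's distutils.util.strtobool
as one way to parse the string's actual meaning:

    from distutils.util import strtobool

    print(bool("False"))             # True: any non-empty string is truthy
    print(bool(strtobool("False")))  # False: parses the string's meaning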
Richard van der Hoff
ba1b163590 Avoid sending consent notice to guest users
we think it makes sense not to send the notices to guest users.
2018-05-25 11:36:43 +01:00
Richard van der Hoff
41921ac01b Merge pull request #3287 from matrix-org/rav/allow_leaving_server_notices_room
Let users leave the server notice room after joining
2018-05-25 11:15:45 +01:00
Richard van der Hoff
757ed27258 Let users leave the server notice room after joining
They still can't reject invites, but we let them leave it.
2018-05-25 11:07:21 +01:00
Adrian Tschira
4ee4450d66 fix recursion error 2018-05-24 21:44:10 +02:00
Amber Brown
9c36c150e7 Merge pull request #3283 from NotAFile/py3-state
py3-ize state.py
2018-05-24 14:23:33 -05:00
Amber Brown
cc1349c06a Merge pull request #3279 from NotAFile/py3-more-iteritems
more six iteritems
2018-05-24 14:23:13 -05:00
Amber Brown
5b788aba90 Merge pull request #3280 from NotAFile/py3-more-misc
More Misc. py3 fixes
2018-05-24 14:22:59 -05:00
Adrian Tschira
0e61705661 py3-ize state.py 2018-05-24 20:59:00 +02:00
Adrian Tschira
dd068ca979 remaining isintance fixes
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 20:55:08 +02:00
Adrian Tschira
17a70cf6e9 Misc. py3 fixes
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 20:20:33 +02:00
Adrian Tschira
6c16a4ec1b more iteritems 2018-05-24 20:19:06 +02:00
Amber Brown
7ea07c7305 Merge pull request #3278 from NotAFile/py3-storage-base
Py3 storage/_base.py
2018-05-24 13:08:09 -05:00
Amber Brown
1f69693347 Merge pull request #3244 from NotAFile/py3-six-4
replace some iteritems with six
2018-05-24 13:04:07 -05:00
Amber Brown
c4fb15a06c Merge pull request #3246 from NotAFile/py3-repr-string
use repr, not str
2018-05-24 13:00:20 -05:00
Amber Brown
36501068d8 Merge pull request #3247 from NotAFile/py3-misc
Misc Python3 fixes
2018-05-24 12:58:37 -05:00
Amber Brown
2aff6eab6d Merge pull request #3245 from NotAFile/batch-iter
Add batch_iter to utils
2018-05-24 12:54:12 -05:00
Adrian Tschira
095292304f Py3 storage/_base.py
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 18:24:12 +02:00
David Baker
77a23e2e05 Merge remote-tracking branch 'origin/develop' into dbkr/unbind 2018-05-24 16:20:53 +01:00
David Baker
ecc4b88bd1 Merge pull request #3277 from matrix-org/dbkr/remove_from_user_dir
Remove users from user directory on deactivate
2018-05-24 16:12:12 +01:00
Erik Johnston
46345187cc Merge pull request #3243 from NotAFile/py3-six-3
Replace some more comparisons with six
2018-05-24 16:08:57 +01:00
Neil Johnson
037c6db85d Merge branch 'master' into develop 2018-05-24 16:03:44 +01:00
David Baker
7a1af504d7 Remove users from user directory on deactivate 2018-05-24 15:59:58 +01:00
Neil Johnson
14ca678674 Update CHANGES.rst 2018-05-24 15:54:02 +01:00
Neil Johnson
6f67163c63 Update CHANGES.rst 2018-05-24 15:04:41 +01:00
Neil Johnson
bdd2ed5acf update for v0.30.0 2018-05-24 15:02:03 +01:00
Erik Johnston
f72d5a44d5 Merge pull request #3261 from matrix-org/erikj/pagination_fixes
Fix federation backfill bugs
2018-05-24 14:52:03 +01:00
Erik Johnston
68399fc4de Merge pull request #3267 from matrix-org/erikj/iter_filter
Use iter* methods for _filter_events_for_server
2018-05-24 14:44:57 +01:00
Neil Johnson
91d95a1d8e bump version 2018-05-24 14:05:07 +01:00
David Baker
9700d15611 pep8 2018-05-24 11:23:15 +01:00
David Baker
a21a41bad7 comment 2018-05-24 11:19:59 +01:00
David Baker
b3bff53178 Unbind 3pids when they're deleted too 2018-05-24 11:08:05 +01:00
Richard van der Hoff
8c98281b8d Merge branch 'release-v0.30.0' into develop 2018-05-24 10:33:12 +01:00
Richard van der Hoff
6abcb5d22d Merge pull request #3273 from matrix-org/rav/server_notices_avatar_url
Allow overriding the server_notices user's avatar
2018-05-24 10:31:43 +01:00
Amber Brown
389dac2c15 pepeightttt 2018-05-23 13:08:59 -05:00
Amber Brown
472a5ec4e2 add back CPU metrics 2018-05-23 13:03:56 -05:00
Amber Brown
e987079037 fixes 2018-05-23 13:03:51 -05:00
Richard van der Hoff
9bf4b2bda3 Allow overriding the server_notices user's avatar
probably should have done this in the first place, like @turt2live suggested.
2018-05-23 17:43:30 +01:00
Richard van der Hoff
23aa70cea8 Merge pull request #3272 from matrix-org/rav/localpart_in_consent_uri
Use the localpart in the consent uri
2018-05-23 16:29:44 +01:00
Richard van der Hoff
043f05a078 Merge docs on consent bits from PR #3268 into release branch 2018-05-23 16:24:58 +01:00
Richard van der Hoff
96f07cebda Merge pull request #3268 from matrix-org/rav/privacy_policy_docs
Docs on consent bits
2018-05-23 16:23:05 +01:00
Richard van der Hoff
a0b3946fe2 Merge branch 'release-v0.30.0' into rav/localpart_in_consent_uri 2018-05-23 16:06:03 +01:00
Richard van der Hoff
2f7008d4eb Merge pull request #3271 from matrix-org/rav/consent_uri_in_messages
Support for putting %(consent_uri)s in messages
2018-05-23 16:04:30 +01:00
Richard van der Hoff
e206b2c9ac consent_tracking.md: clarify link 2018-05-23 15:57:10 +01:00
Richard van der Hoff
2df8c3139a minor post-review tweaks 2018-05-23 15:39:52 +01:00
Richard van der Hoff
dda40fb55d fix typo 2018-05-23 15:30:26 +01:00
Richard van der Hoff
3ff6f50eac Use the localpart in the consent uri
... because it's shorter.
2018-05-23 15:28:23 +01:00
Richard van der Hoff
82191b08f6 Support for putting %(consent_uri)s in messages
Make it possible to put the URI in the error message and the server notice that
get sent by the server
2018-05-23 15:24:31 +01:00
Richard van der Hoff
2c62ea2515 Merge pull request #3270 from matrix-org/rav/block_remote_server_notices
Block attempts to send server notices to remote users
2018-05-23 15:06:35 +01:00
Richard van der Hoff
cd8ab9a0d8 mention public_baseurl 2018-05-23 14:43:09 +01:00
David Baker
2c7866d664 Hit the 3pid unbind endpoint on deactivation 2018-05-23 14:38:56 +01:00
Richard van der Hoff
321f02d263 Block attempts to send server notices to remote users 2018-05-23 14:30:47 +01:00
Richard van der Hoff
1cbb8e5a33 fix wrapping 2018-05-23 13:58:28 +01:00
Richard van der Hoff
052d08a6a5 Using the manhole to send server notices 2018-05-23 13:55:39 +01:00
Richard van der Hoff
5ad1149f38 Notes on the manhole 2018-05-23 13:47:34 +01:00
Richard van der Hoff
563606b8f2 consent_tracking: formatting etc 2018-05-23 12:37:39 +01:00
Richard van der Hoff
2574ea3dc8 server_notices.md: fix link 2018-05-23 12:34:34 +01:00
Richard van der Hoff
833db2d922 consent tracking docs 2018-05-23 12:32:38 +01:00
Neil Johnson
9e8ab0a4f4 style 2018-05-23 12:05:58 +01:00
Neil Johnson
3601a240aa bump version and changelog 2018-05-23 12:03:23 +01:00
Richard van der Hoff
e7598b666b Some docs about server notices 2018-05-23 11:14:23 +01:00
Erik Johnston
5aaa3189d5 s/values/itervalues/ 2018-05-23 10:13:05 +01:00
Erik Johnston
0a4bca4134 Use iter* methods for _filter_events_for_server 2018-05-23 10:04:23 +01:00
Amber Brown
b6063631c3 more cleanup 2018-05-22 17:36:20 -05:00
Amber Brown
53cc2cde1f cleanup 2018-05-22 17:32:57 -05:00
Amber Brown
071206304d cleanup pep8 errors 2018-05-22 16:54:22 -05:00
Amber Brown
4abeaedcf3 tests/metrics is gone now 2018-05-22 16:45:11 -05:00
Amber Brown
85ba83eb51 fixes 2018-05-22 16:28:23 -05:00
Amber Brown
228f1f584e fix the test failures 2018-05-22 15:02:38 -05:00
Erik Johnston
e85b5a0ff7 Use iter* methods 2018-05-22 19:02:48 +01:00
Erik Johnston
586b66b197 Fix that states is a dict of dicts 2018-05-22 19:02:36 +01:00
Erik Johnston
35ca3e7b65 Merge pull request #3265 from matrix-org/erikj/limit_pagination
Don't support limitless pagination
2018-05-22 18:29:36 +01:00
Erik Johnston
a17e901f4d Remove unused string formatting param 2018-05-22 18:24:32 +01:00
Erik Johnston
5494c1d71e Don't support limitless pagination
The pagination storage function supported not specifying a limit on the
number of events returned. This was triggered when using the search or
context API with a limit of zero, which the storage function took to
mean no limit.
2018-05-22 18:15:21 +01:00
Neil Johnson
d8cb7225d2 daily user type phone home stats 2018-05-22 18:09:09 +01:00
hera
ad2823ee27 fix synchrotron 2018-05-22 17:47:42 +01:00
Richard van der Hoff
08bfc48abf custom error code for not leaving server notices room 2018-05-22 17:27:27 +01:00
David Baker
0a078026ea comment typo 2018-05-22 17:14:06 +01:00
Amber Brown
07bb9bdae8 update gitignore to remove some files that got put on my machine 2018-05-22 10:57:28 -05:00
Amber Brown
8f5a688d42 cleanups, self-registration 2018-05-22 10:56:03 -05:00
Amber Brown
a8990fa2ec Merge remote-tracking branch 'origin/develop' into 3218-official-prom 2018-05-22 10:50:26 -05:00
Erik Johnston
cb2a2ad791 get_domains_from_state returns list of tuples 2018-05-22 16:23:39 +01:00
Richard van der Hoff
08a14b32ae Merge pull request #3262 from matrix-org/rav/has_already_consented
Add a 'has_consented' template var to consent forms
2018-05-22 15:35:10 +01:00
Richard van der Hoff
82c2a52987 Merge pull request #3263 from matrix-org/rav/fix_jinja_dep
Fix dependency on jinja2
2018-05-22 15:31:08 +01:00
Richard van der Hoff
7b36d06a69 Add a 'has_consented' template var to consent forms
fixes #3260
2018-05-22 14:58:34 +01:00
Richard van der Hoff
669400e22f Enable auto-escaping for the consent templates
... to reduce the risk of somebody introducing an html injection attack...
2018-05-22 14:58:34 +01:00
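For context, auto-escaping in Jinja2 is a flag on the environment; with it
set, values interpolated into a template are HTML-escaped unless explicitly
marked safe. A minimal sketch (the template name and variable here are
illustrative, not synapse's actual consent templates):

    from jinja2 import Environment, DictLoader

    env = Environment(
        loader=DictLoader({"consent.html": "<p>Hello {{ username }}</p>"}),
        autoescape=True,  # escape interpolated values by default
    )
    page = env.get_template("consent.html").render(
        username="<script>alert(1)</script>",
    )
    print(page)  # the injected tag comes out as &lt;script&gt;...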
Richard van der Hoff
b5b2d5d64b Fix dependency on jinja2
Delay the import of ConsentResource, so that we can get away without jinja2 if
people don't have the consent resource enabled.

Fixes #3259
2018-05-22 14:03:45 +01:00
Richard van der Hoff
3b2def6c7a Merge pull request #3257 from matrix-org/rav/fonx_on_no_consent
Reject attempts to send event before privacy consent is given
2018-05-22 12:01:43 +01:00
Richard van der Hoff
a5e2941aad Reject attempts to send event before privacy consent is given
Returns an M_CONSENT_NOT_GIVEN error (cf
https://github.com/matrix-org/matrix-doc/issues/1252) if consent is not yet
given.
2018-05-22 12:00:47 +01:00
Richard van der Hoff
8aeb529262 Merge pull request #3236 from matrix-org/rav/consent_notice
Send users a server notice about consent
2018-05-22 11:56:09 +01:00
Richard van der Hoff
8810685df9 Stub out ServerNoticesSender on the workers
... and have the sync endpoints call it directly rather than via obscure
indirection through PresenceHandler
2018-05-22 11:54:51 +01:00
Richard van der Hoff
d5dca9a04f Move consent config parsing into ConsentConfig
turns out we need to reuse this, so it's better in the config class.
2018-05-22 11:54:51 +01:00
Richard van der Hoff
9ea219c514 Send users a server notice about consent
When a user first syncs, we will send them a server notice asking them to
consent to the privacy policy if they have not already done so.
2018-05-22 11:54:51 +01:00
Richard van der Hoff
d14d7b8fdc Rename 'version' param on user consent config
we're going to use it for the version we require too.
2018-05-22 11:54:51 +01:00
Erik Johnston
7cfa8a87a1 Merge pull request #3258 from matrix-org/erikj/fixup_logcontext_rusage
Fix logcontext resource usage tracking
2018-05-22 11:39:56 +01:00
Erik Johnston
7948ecf234 Comment 2018-05-22 11:39:43 +01:00
Erik Johnston
020377a550 Fix logcontext resource usage tracking 2018-05-22 11:16:07 +01:00
Erik Johnston
13a8dfba0d Merge pull request #3252 from matrix-org/erikj/in_flight_requests
Add in flight request metrics
2018-05-22 09:54:30 +01:00
Erik Johnston
c435b0b441 Don't store context 2018-05-22 09:34:26 +01:00
Erik Johnston
fb2806b186 Move in_flight_requests_count to be a callback metric 2018-05-22 09:31:53 +01:00
Amber Brown
fcc525b0b7 rest of the changes 2018-05-21 19:48:57 -05:00
Amber Brown
df9f72d9e5 replacing portions 2018-05-21 19:47:37 -05:00
Amber Brown
c60e0d5e02 don't need the resource portion 2018-05-21 17:03:20 -05:00
Amber Brown
02c1d29133 look at the Prometheus metrics instead 2018-05-21 17:02:20 -05:00
Amber Brown
f258deffcb remove old metrics libs 2018-05-21 17:01:15 -05:00
Richard van der Hoff
413482f578 Merge pull request #3255 from matrix-org/rav/fix_transactions
Stop the transaction cache caching failures
2018-05-21 17:58:08 +01:00
Neil Johnson
4aac88928f Merge pull request #3251 from matrix-org/cohort_analytics
Tighter filtering for user_daily_visits
2018-05-21 16:42:18 +00:00
Richard van der Hoff
6e1cb54a05 Fix logcontext leak in HttpTransactionCache
ONE DAY I WILL PURGE THE WORLD OF THIS EVIL
2018-05-21 16:58:20 +01:00
Richard van der Hoff
6d6e7288fe Stop the transaction cache caching failures
The transaction cache has some code which tries to stop it caching failures,
but if the callback function failed straight away, then things would happen
backwards and we'd end up with the failure stuck in the cache.
2018-05-21 16:49:59 +01:00
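The shape of the bug: the cache only arranged to evict an entry on failure
after the wrapped function had been called, so a function that failed
synchronously left its failure cached for later callers. A simplified sketch
of the fixed pattern, with hypothetical names (synapse's real cache wraps
the deferred in its ObservableDeferred helper and does logcontext juggling,
both omitted here):

    from twisted.internet import defer

    class TransactionCache(object):
        def __init__(self):
            self.transactions = {}

        def fetch_or_execute(self, txn_key, fn, *args, **kwargs):
            if txn_key not in self.transactions:
                # maybeDeferred turns a synchronous raise into a failed
                # Deferred, so the eviction errback below is always wired
                # up before a failure can be observed
                d = defer.maybeDeferred(fn, *args, **kwargs)
                self.transactions[txn_key] = d

                def evict(f):
                    # never leave a failure sitting in the cache
                    self.transactions.pop(txn_key, None)
                    return f

                d.addErrback(evict)
            return self.transactions[txn_key]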
Michael Kaye
d689e0dba1 Merge pull request #3239 from ptman/develop
Add lxml to docker image for web previews
2018-05-21 16:31:18 +01:00
Erik Johnston
dfa70adc33 Add in flight request metrics
This tracks CPU and DB usage while requests are in flight, rather than
when we write the response.
2018-05-21 16:23:06 +01:00
Adrian Tschira
933bf2dd35 replace some iteritems with six
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:59:26 +02:00
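For context, the pattern these iteritems PRs apply: dict.iteritems() no
longer exists on Python 3, and six.iteritems(d) resolves to the lazy
iterator on both versions:

    from six import iteritems

    counts = {"a": 1, "b": 2}
    for key, value in iteritems(counts):  # d.iteritems() on py2, d.items() on py3
        print(key, value)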
Adrian Tschira
d9fe2b2d9d Replace some more comparisons with six
plus a bonus b"" string I missed last time

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:56:31 +02:00
Adrian Tschira
45b55e23d3 Add batch_iter to utils
There's a frequent idiom I noticed where an iterable is split up into a
number of chunks/batches. Unfortunately the usual slicing approach does not work with
iterators like dict.keys() in python3. This implementation works with
iterators.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:48:30 +02:00
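A minimal implementation of such a helper, built on itertools.islice so it
consumes any iterable exactly once (a sketch of the idea, not necessarily
the exact code merged):

    from itertools import islice

    def batch_iter(iterable, size):
        """Yield tuples of at most `size` items, even from one-shot
        iterators such as dict.keys() on Python 3."""
        sourceiter = iter(iterable)
        # the two-argument form of iter() calls the lambda until it
        # returns the sentinel ()
        return iter(lambda: tuple(islice(sourceiter, size)), ())

    for batch in batch_iter(range(10), 3):
        print(batch)  # (0, 1, 2), (3, 4, 5), (6, 7, 8), (9,)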
Adrian Tschira
dcc235b47d use stand-in value if maxint is not available
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:35:44 +02:00
Adrian Tschira
73cbdef5f7 fix py3 intern and remove unnecessary py3 encode
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:35:31 +02:00
Adrian Tschira
aafb0f6b0d py3-ize url preview 2018-05-19 17:35:20 +02:00
Adrian Tschira
b932b4ea25 use repr, not str
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:28:42 +02:00
Neil Johnson
644aac5f73 Tighter filtering for user_daily_visits 2018-05-18 17:10:35 +01:00
Neil Johnson
08462620bf Merge pull request #3241 from matrix-org/fix_user_visits_insertion
fix psql compatibility bug
2018-05-18 15:01:01 +00:00
Neil Johnson
ef466b3a13 fix psql compatibility bug 2018-05-18 15:51:21 +01:00
Paul Tötterman
861f8a9b21 Add lxml to docker image for web previews
Signed-off-by: Paul Tötterman <paul.totterman@iki.fi>
2018-05-18 16:20:08 +03:00
Neil Johnson
2725223f08 Merge branch 'master' into develop 2018-05-18 14:07:50 +01:00
Neil Johnson
ab5e888927 Merge tag 'v0.29.1'
Changes in synapse v0.29.1 (2018-05-17)
==========================================
Changes:

* Update docker documentation (PR #3222)

Changes in synapse v0.29.0 (2018-05-16)
===========================================
No changes since v0.29.0-rc1

Changes in synapse v0.29.0-rc1 (2018-05-14)
===========================================

Notable changes: a Dockerfile for running Synapse (thanks to @kaiyou!) and
the closing of a spec bug in the Client-Server API, plus further preparation
for the Python 3 migration.

Potentially breaking change:

* Make Client-Server API return 401 for invalid token (PR #3161).

  This changes the Client-server spec to return a 401 error code instead of 403
  when the access token is unrecognised. This is the behaviour required by the
  specification, but some clients may be relying on the old, incorrect
  behaviour.

  Thanks to @NotAFile for fixing this.

Features:

* Add a Dockerfile for synapse (PR #2846) Thanks to @kaiyou!

Changes - General:

* nuke-room-from-db.sh: added postgresql option and help (PR #2337) Thanks to @rubo77!
* Part user from rooms on account deactivate (PR #3201)
* Make 'unexpected logging context' into warnings (PR #3007)
* Set Server header in SynapseRequest (PR #3208)
* remove duplicates from groups tables (PR #3129)
* Improve exception handling for background processes (PR #3138)
* Add missing consumeErrors to improve exception handling (PR #3139)
* reraise exceptions more carefully (PR #3142)
* Remove redundant call to preserve_fn (PR #3143)
* Trap exceptions thrown within run_in_background (PR #3144)

Changes - Refactors:

* Refactor /context to reuse pagination storage functions (PR #3193)
* Refactor recent events func to use pagination func (PR #3195)
* Refactor pagination DB API to return concrete type (PR #3196)
* Refactor get_recent_events_for_room return type (PR #3198)
* Refactor sync APIs to reuse pagination API (PR #3199)
* Remove unused code path from member change DB func (PR #3200)
* Refactor request handling wrappers (PR #3203)
* transaction_id, destination defined twice (PR #3209) Thanks to @damir-manapov!
* Refactor event storage to prepare for changes in state calculations (PR #3141)
* Set Server header in SynapseRequest (PR #3208)
* Use deferred.addTimeout instead of time_bound_deferred (PR #3127, #3178)
* Use run_in_background in preference to preserve_fn (PR #3140)

Changes - Python 3 migration:

* Construct HMAC as bytes on py3 (PR #3156) Thanks to @NotAFile!
* run config tests on py3 (PR #3159) Thanks to @NotAFile!
* Open certificate files as bytes (PR #3084) Thanks to @NotAFile!
* Open config file in non-bytes mode (PR #3085) Thanks to @NotAFile!
* Make event properties raise AttributeError instead (PR #3102) Thanks to @NotAFile!
* Use six.moves.urlparse (PR #3108) Thanks to @NotAFile!
* Add py3 tests to tox with folders that work (PR #3145) Thanks to @NotAFile!
* Don't yield in list comprehensions (PR #3150) Thanks to @NotAFile!
* Move more xrange to six (PR #3151) Thanks to @NotAFile!
* make imports local (PR #3152) Thanks to @NotAFile!
* move httplib import to six (PR #3153) Thanks to @NotAFile!
* Replace stringIO imports with six (PR #3154, #3168) Thanks to @NotAFile!
* more bytes strings (PR #3155) Thanks to @NotAFile!

Bug Fixes:

* synapse fails to start under Twisted >= 18.4 (PR #3157)
* Fix a class of logcontext leaks (PR #3170)
* Fix a couple of logcontext leaks in unit tests (PR #3172)
* Fix logcontext leak in media repo (PR #3174)
* Escape label values in prometheus metrics (PR #3175, #3186)
* Fix 'Unhandled Error' logs with Twisted 18.4 (PR #3182) Thanks to @Half-Shot!
* Fix logcontext leaks in rate limiter (PR #3183)
* notifications: Convert next_token to string according to the spec (PR #3190) Thanks to @mujx!
* nuke-room-from-db.sh: fix deletion from search table (PR #3194) Thanks to @rubo77!
* add guard for None on purge_history api (PR #3160) Thanks to @krombel!
2018-05-18 14:06:34 +01:00
Neil Johnson
f3d9dca975 Merge branch 'release-v0.29.0' of https://github.com/matrix-org/synapse into release-v0.29.0 2018-05-18 13:55:19 +01:00
Richard van der Hoff
6d9dc67139 Merge pull request #3232 from matrix-org/rav/server_notices_room
Infrastructure for a server notices room
2018-05-18 11:28:04 +01:00
Richard van der Hoff
ed3125b0a1 Merge pull request #3235 from matrix-org/rav/fix_receipts_deferred
Fix error in handling receipts
2018-05-18 11:23:11 +01:00
Richard van der Hoff
67af392712 Merge pull request #3233 from matrix-org/rav/remove_dead_code
Remove unused `update_external_syncs`
2018-05-18 11:22:43 +01:00
Richard van der Hoff
011e1f4010 Better docstrings 2018-05-18 11:22:12 +01:00
Richard van der Hoff
26305788fe Make sure we reject attempts to invite the notices user 2018-05-18 11:18:39 +01:00
Richard van der Hoff
d10707c810 Replace inline docstrings with "Attributes" in class docstring 2018-05-18 11:00:55 +01:00
Erik Johnston
fa30ac38cc Merge pull request #3221 from matrix-org/erikj/purge_token
Make purge_history operate on tokens
2018-05-18 10:35:23 +01:00
Richard van der Hoff
8b1c856d81 Fix error in handling receipts
Fixes an error which has been happening ever since #2158 (v0.21.0-rc1):

> TypeError: argument of type 'ObservableDeferred' is not iterable

fixes #3234
2018-05-18 09:15:35 +01:00
Neil Johnson
6958459b50 bump version, change log 2018-05-17 21:35:07 +01:00
Richard van der Hoff
88d3405332 fix missing yield for server_notices_room 2018-05-17 18:33:45 +01:00
Richard van der Hoff
d43d480d86 Remove unused update_external_syncs
This method isn't used anywhere. Burninate it.
2018-05-17 18:22:19 +01:00
Neil Johnson
a2da6de40e light grammar changes 2018-05-17 18:12:00 +01:00
Michael Kaye
450f500d0c Note that secrets need to be retained. 2018-05-17 18:12:00 +01:00
Michael Kaye
82b0361f02 Document macaroon env var correctly 2018-05-17 18:12:00 +01:00
Michael Kaye
1b1b47aec6 Reference synapse docker image and docker-compose 2018-05-17 18:12:00 +01:00
Richard van der Hoff
fed62e21ad Infrastructure for a server notices room
Server Notices use a special room which the user can't dismiss. They are
created on demand when some other bit of the code calls send_notice.

(This doesn't actually do much yet because we don't call send_notice anywhere)
2018-05-17 17:58:25 +01:00
Richard van der Hoff
f8a1e76d64 Merge pull request #3225 from matrix-org/rav/move_creation_handler
Move RoomCreationHandler out of synapse.handlers.Handlers
2018-05-17 17:56:42 +01:00
Erik Johnston
f7906203f6 Merge pull request #3212 from matrix-org/erikj/epa_stream
Use stream rather than depth ordering for push actions
2018-05-17 12:01:21 +01:00
Richard van der Hoff
ae53c71d90 Merge pull request #1756 from rubo77/patch-4
Add instructions how to setup the postgres user and clarify the final step
2018-05-17 10:52:54 +01:00
rubo77
616da9eb1d postgres.rst: Add instructions how to setup the postgres user and clarify the final step 2018-05-17 11:48:56 +02:00
Richard van der Hoff
c46367d0d7 Move RoomCreationHandler out of synapse.handlers.Handlers
Handlers is deprecated nowadays, so let's move this out before I add a new
dependency on it.

Also fix the docstrings on create_room.
2018-05-17 09:08:42 +01:00
Neil Johnson
85b8acdeb4 Merge branch 'master' into develop 2018-05-16 16:48:10 +01:00
Neil Johnson
3c099219e0 bump version and changelog for 0.29.0 2018-05-16 15:44:59 +01:00
Erik Johnston
680530cc7f Clarify comment 2018-05-16 11:47:29 +01:00
Erik Johnston
43e6e82c4d Comments 2018-05-16 11:13:31 +01:00
Neil Johnson
dc8930ea9e Merge pull request #3163 from matrix-org/cohort_analytics
user visit data
2018-05-16 10:09:24 +00:00
Erik Johnston
c945af8799 Move and rename variable 2018-05-16 10:52:06 +01:00
Neil Johnson
be11a02c4f remove empty line 2018-05-16 10:45:40 +01:00
Neil Johnson
a2204cc9cc remove unused method recurring_user_daily_visit_stats 2018-05-16 09:47:20 +01:00
Neil Johnson
31c2502ca8 style and further constraining the query 2018-05-16 09:46:43 +01:00
Richard van der Hoff
8030a825c8 Merge pull request #3213 from matrix-org/rav/consent_handler
ConsentResource to gather policy consent from users
2018-05-16 07:19:18 +01:00
Neil Johnson
c92a8aa578 pep8 2018-05-15 17:31:11 +01:00
Neil Johnson
05ac15ae82 Limit query load of generate_user_daily_visits
The aim is to keep track of when it was last called and only query from that point in time
2018-05-15 17:01:33 +01:00
Erik Johnston
5f27ed75ad Make purge_history operate on tokens
As we're soon going to change how topological_ordering works
2018-05-15 16:23:50 +01:00
Erik Johnston
37dbee6490 Use events_to_purge table rather than token 2018-05-15 16:23:47 +01:00
Richard van der Hoff
47815edcfa ConsentResource to gather policy consent from users
Hopefully there are enough comments and docs in this that it makes sense on its
own.
2018-05-15 15:11:59 +01:00
Neil Johnson
589ecc5b58 further musical chairs 2018-05-14 17:49:59 +01:00
Neil Johnson
e71fb118f4 rearrange and collect related PRs 2018-05-14 17:39:22 +01:00
Neil Johnson
aea80a0118 v0.29.0-rc1: bump version and change log 2018-05-14 15:50:57 +01:00
Neil Johnson
f077e97914 insert user daily visit data incrementally through the day, instead of all at once at the end of the day 2018-05-14 13:50:58 +01:00
David Baker
8cbbfd16fb Merge pull request #3201 from matrix-org/dbkr/leave_rooms_on_deactivate
Part user from rooms on account deactivate
2018-05-14 11:31:48 +01:00
Neil Johnson
977765bde2 Merge branch 'develop' of https://github.com/matrix-org/synapse into cohort_analytics 2018-05-14 09:31:42 +01:00
Michael Kaye
16f41237f0 Merge pull request #2846 from kaiyou/feat-dockerfile
Add a Dockerfile for synapse
2018-05-11 17:21:12 +01:00
Richard van der Hoff
c25d7ba12e Merge pull request #3208 from matrix-org/rav/more_refactor_request_handler
Set Server header in SynapseRequest
2018-05-11 16:07:47 +01:00
Erik Johnston
6406b70aeb Use stream rather than depth ordering for push actions
This simplifies things as it is, but will also allow us to change the
way we traverse topologically without having to update the way push
actions work.
2018-05-11 15:30:11 +01:00
Richard van der Hoff
23e2dfe940 Merge pull request #3209 from damir-manapov/master
transaction_id, destination defined twice
2018-05-11 00:35:13 +01:00
Richard van der Hoff
bd8d0cfab1 Merge remote-tracking branch 'origin/master' into develop 2018-05-11 00:19:26 +01:00
Damir Manapov
db18d854cd transaction_id, destination twice 2018-05-10 22:13:31 +03:00
Richard van der Hoff
318711e139 Set Server header in SynapseRequest
(instead of everywhere that writes a response. Or rather, the subset of places
which write responses where we haven't forgotten it).

This also means that we don't have to have the mysterious version_string
attribute in anything with a request handler.

Unfortunately it does mean that we have to pass the version string wherever we
instantiate a SynapseSite, which has been c&ped 150 times, but that is code
that ought to be cleaned up anyway really.
2018-05-10 18:50:27 +01:00
Richard van der Hoff
7b411007e6 Merge pull request #3203 from matrix-org/rav/refactor_request_handler
Refactor request handling wrappers
2018-05-10 15:26:53 +01:00
David Baker
6b49628e3b Catch failure to part user from room 2018-05-10 12:23:53 +01:00
David Baker
217bc53c98 Many docstrings 2018-05-10 12:20:40 +01:00
Richard van der Hoff
645cb4bf06 Remove redundant request_handler decorator
This is needless complexity; we might as well use the wrapper directly.

Also rename wrap_request_handler->wrap_json_request_handler.
2018-05-10 12:19:53 +01:00
Richard van der Hoff
09f570b935 Factor wrap_request_handler_with_logging out of wrap_request_handler
... so that it can be used on non-JSON endpoints
2018-05-10 12:19:52 +01:00
Richard van der Hoff
9589a1925e Remove include_metrics param
The metrics are now available via the request, so this is redundant and can go
away at last.
2018-05-10 12:19:52 +01:00
Richard van der Hoff
49e5a613f1 Move outgoing_responses_counter handling to RequestMetrics
it's much neater there.
2018-05-10 12:19:52 +01:00
Richard van der Hoff
b8700dd7d0 Bump requests_counter in wrapped_request_handler
less magic
2018-05-10 12:19:52 +01:00
Richard van der Hoff
c6f730282c Move RequestMetrics handling into SynapseRequest.processing()
It fits quite nicely here, and opens the path to getting rid of the
"include_metrics" mess.
2018-05-10 12:19:51 +01:00
Richard van der Hoff
09b29f9c4a Make RequestMetrics take a raw time rather than a clock
... which is going to make it easier to move around.
2018-05-10 12:18:52 +01:00
David Baker
4d298506dd Oops, don't call function passed to run_in_background 2018-05-10 11:57:13 +01:00
Richard van der Hoff
8460e48d06 Move request_id management into SynapseRequest 2018-05-10 11:48:17 +01:00
Richard van der Hoff
18e144fe08 Move RequestsMetrics to its own file
This is useful in its own right, because server.py is full of stuff; but more
importantly, I want to do some refactoring that will cause a circular reference
as it is.
2018-05-09 19:55:03 +01:00
Erik Johnston
bfe1f73855 Merge pull request #3199 from matrix-org/erikj/pagination_sync
Refactor sync APIs to reuse pagination API
2018-05-09 16:16:56 +01:00
Erik Johnston
5adb75bcba Merge pull request #3198 from matrix-org/erikj/fixup_return_pagination
Refactor get_recent_events_for_room return type
2018-05-09 16:07:14 +01:00
Erik Johnston
a5c98dda48 Merge pull request #3200 from matrix-org/erikj/remove_membership_change
Remove unused code path from member change DB func
2018-05-09 16:02:13 +01:00
Erik Johnston
d26bec8a43 Add comment to sync as to why code path is split 2018-05-09 15:56:07 +01:00
Erik Johnston
fcf55f2255 Fix returned token is no longer a tuple 2018-05-09 15:43:00 +01:00
Erik Johnston
7ce98804ff Fix up comment 2018-05-09 15:42:39 +01:00
Erik Johnston
cddf91c8b9 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/remove_membership_change 2018-05-09 15:32:07 +01:00
Erik Johnston
9896dab8f6 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fixup_return_pagination 2018-05-09 15:31:33 +01:00
Erik Johnston
1e5280b7d0 Merge pull request #3196 from matrix-org/erikj/pagination_return
Refactor pagination DB API to return concrete type
2018-05-09 15:27:17 +01:00
Erik Johnston
75552d2148 Update comments 2018-05-09 15:15:38 +01:00
David Baker
294e9a0c9b Prefix internal functions 2018-05-09 15:10:37 +01:00
David Baker
46df23f581 Add the schema file 2018-05-09 15:07:54 +01:00
David Baker
52281e4c54 Indent fail 2018-05-09 15:06:16 +01:00
David Baker
7e8726b8fb Part deactivated users in the background
One room at a time so we don't take out the whole server with leave
events, and restart at server restart.
2018-05-09 14:54:28 +01:00
Erik Johnston
c0e08dc45b Remove unused code path from member change DB func
The function is never called without a from_key, so we can remove all
the handling for that scenario.
2018-05-09 14:31:32 +01:00
Erik Johnston
0461ef01b7 Merge pull request #3195 from matrix-org/erikj/pagination_refactor
Refactor recent events func to use pagination func
2018-05-09 14:12:24 +01:00
Erik Johnston
e2accd7f1d Refactor sync APIs to reuse pagination API
The sync API often returns events in a topological rather than stream
ordering, e.g. when the user joined the room or on initial sync. When
this happens we can reuse existing pagination storage functions.
2018-05-09 13:43:39 +01:00
Erik Johnston
e5ab9cd24b Don't unnecessarily require token to be stream token
This allows calling the `get_recent_event_ids_for_room` function in more
situations.
2018-05-09 11:58:35 +01:00
Richard van der Hoff
60590211c1 Merge pull request #3194 from rubo77/fix-nuke
nuke-room-from-db.sh: fix deletion from search table
2018-05-09 11:58:07 +01:00
Erik Johnston
c4af4c24ca Refactor get_recent_events_for_room return type
There is no reason to return a tuple of tokens when the last token is
always the token passed as an argument. Changing it makes it consistent
with other storage APIs
2018-05-09 11:55:34 +01:00
Erik Johnston
05e0a2462c Refactor pagination DB API to return concrete type
This makes it easier to document what is being returned by the storage
functions and what some functions expect as arguments.
2018-05-09 11:34:24 +01:00
Erik Johnston
7dd13415db Remove unused from_token param 2018-05-09 10:58:16 +01:00
Erik Johnston
27cf170558 Refactor recent events func to use pagination func
This also removes a cache that is unlikely to ever get hit.
2018-05-09 10:55:55 +01:00
Erik Johnston
1aeb5e28a9 Merge pull request #3193 from matrix-org/erikj/pagination_refactor
Refactor /context to reuse pagination storage functions
2018-05-09 10:22:38 +01:00
Erik Johnston
23ec51c94c Fix up comments and make function private 2018-05-09 09:55:19 +01:00
Richard van der Hoff
d5377eba55 Merge pull request #2337 from rubo77/patch-5
nuke-room-from-db.sh: added postgresql option and help
2018-05-09 09:43:07 +01:00
rubo77
d11b8b6b65 nuke-room-from-db.sh: nuke from table event_search too 2018-05-09 00:46:47 +02:00
rubo77
8ff8ab3bce Dont nuke non-existing table event_search_content 2018-05-09 00:21:00 +02:00
rubo77
6c957e26f0 nuke-room-from-db.sh: added postgresql option and help 2018-05-09 00:14:01 +02:00
Erik Johnston
696f532453 Reuse existing pagination code for context API 2018-05-08 16:20:19 +01:00
Erik Johnston
3e6d306e94 Parse tokens before calling DB function 2018-05-08 16:18:58 +01:00
Erik Johnston
274b8c6025 Only fetch required fields from database 2018-05-08 16:15:25 +01:00
Erik Johnston
06c0d0ed08 Split paginate_room_events storage function 2018-05-08 16:14:26 +01:00
David Baker
bf98fa0864 Part user from rooms on account deactivate
This is implemented very crudely: it probably isn't viable
because parting a user from all their rooms could take a long time,
and if the HS gets restarted in that time the process will be
aborted.
2018-05-08 15:58:35 +01:00
Richard van der Hoff
678e649b78 Merge pull request #3190 from mujx/notif-token-fix
notifications: Convert next_token to string according to the spec
2018-05-08 13:54:47 +01:00
Erik Johnston
0b7dfbb194 Merge pull request #3186 from matrix-org/erikj/fix_int_values_metrics
Fix metrics that have integer value labels
2018-05-08 09:48:50 +01:00
Konstantinos Sideris
88868b2839 notifications: Convert next_token to string according to the spec
Currently the parameter is serialized as an integer.

Signed-off-by: Konstantinos Sideris <sideris.konstantin@gmail.com>
2018-05-05 12:55:02 +03:00
kaiyou
5addeaa02c Add Docker packaging in the author list 2018-05-04 21:23:01 +02:00
Erik Johnston
6d8ec3462d Note that label values can be anything 2018-05-03 16:25:05 +01:00
Erik Johnston
95b6912045 Fix metrics that have integer value labels 2018-05-03 15:51:04 +01:00
Richard van der Hoff
966686c845 Merge pull request #3007 from matrix-org/rav/warn_on_logcontext_fail
Make 'unexpected logging context' into warnings
2018-05-03 15:10:04 +01:00
Richard van der Hoff
093d8c415a Merge remote-tracking branch 'origin/develop' into rav/warn_on_logcontext_fail 2018-05-03 14:59:29 +01:00
Richard van der Hoff
0ba609dc6f Merge pull request #3183 from matrix-org/rav/moar_logcontext_leaks
Fix logcontext leaks in rate limiter
2018-05-03 14:58:13 +01:00
Richard van der Hoff
2117f84323 Merge pull request #3182 from Half-Shot/hs/fix-twisted-shutdown
Fix 'Unhandled Error' logs with Twisted 18.4
2018-05-03 12:40:11 +01:00
Richard van der Hoff
a7fe62f0cb Fix logcontext leaks in rate limiter 2018-05-03 12:31:59 +01:00
Will Hunt
2e7a94c36b Don't abortConnection() if the transport connection has already closed. 2018-05-03 12:31:47 +01:00
Richard van der Hoff
a2aaa9cb3c Merge pull request #3178 from matrix-org/rav/fix_request_timeouts
fix http request timeout code
2018-05-03 11:33:26 +01:00
Richard van der Hoff
d72faf2fad Fix changes warning 2018-05-03 10:56:42 +01:00
Richard van der Hoff
a0501ac57e Warn of potential client incompatibility from #3161 2018-05-03 10:51:39 +01:00
Erik Johnston
0a3b51c420 Merge pull request #3141 from matrix-org/erikj/fixup_state
Refactor event storage to prepare for changes in state calculations
2018-05-03 10:39:20 +01:00
Erik Johnston
31c7c29d43 Fix up grammar 2018-05-03 10:38:58 +01:00
Richard van der Hoff
902673e356 Merge pull request #3161 from NotAFile/remove-v1auth
Make Client-Server API return 401 for invalid token
2018-05-03 10:10:57 +01:00
Erik Johnston
53a5fdf312 Merge pull request #3175 from matrix-org/erikj/escape_metric_values
Escape label values in prometheus metrics
2018-05-03 10:01:04 +01:00
Richard van der Hoff
1dfd650348 add missing param to cancelled_to_request_timed_out_error
This gets two arguments, not one.
2018-05-02 22:42:36 +01:00
kaiyou
9a779c2ddb Merge remote-tracking branch 'upstream/master' into feat-dockerfile 2018-05-02 20:22:41 +02:00
Erik Johnston
a41117c63b Make _escape_character take MatchObject 2018-05-02 17:27:27 +01:00
Erik Johnston
32015e1109 Escape label values in prometheus metrics 2018-05-02 16:52:42 +01:00
Richard van der Hoff
3a42aed9a1 Merge pull request #3170 from matrix-org/rav/more_logcontext_leaks
Fix a class of logcontext leaks
2018-05-02 16:45:51 +01:00
Richard van der Hoff
5a0be97ab2 Merge pull request #3174 from matrix-org/rav/media_repo_logcontext_leaks
Fix logcontext leak in media repo
2018-05-02 16:43:04 +01:00
Richard van der Hoff
415c6b672e Merge branch 'develop' into rav/more_logcontext_leaks 2018-05-02 16:16:01 +01:00
Richard van der Hoff
4e9bdeba57 Merge pull request #3172 from matrix-org/rav/fix_test_logcontext_leaks
Fix a couple of logcontext leaks in unit tests
2018-05-02 16:15:22 +01:00
Richard van der Hoff
be31adb036 Fix logcontext leak in media repo
Make FileResponder.write_to_consumer uphold the logcontext contract
2018-05-02 16:14:50 +01:00
Richard van der Hoff
11607006d9 Remove spurious unittest.DEBUG 2018-05-02 15:48:47 +01:00
Richard van der Hoff
46beeb9a30 Fix a couple of logcontext leaks in unit tests
... which were making other, innocent, tests, fail.

Plus remove a spurious unittest.DEBUG which was making the output noisy.
2018-05-02 15:46:22 +01:00
Richard van der Hoff
f22e7cda2c Fix a class of logcontext leaks
So, it turns out that if you have a first `Deferred` `D1`, you can add a
callback which returns another `Deferred` `D2`, and `D2` must then complete
before any further callbacks on `D1` will execute (and later callbacks on `D1`
get the *result* of `D2` rather than `D2` itself).

So, `D1` might have `called=True` (as in, it has started running its
callbacks), but any new callbacks added to `D1` won't get run until `D2`
completes - so if you `yield D1` in an `inlineCallbacks` function, your `yield`
will 'block'.

In conclusion: some of our assumptions in `logcontext` were invalid. We need to
make sure that we don't optimise out the logcontext juggling when this
situation happens. Fortunately, it is easy to detect by checking `D1.paused`.
2018-05-02 11:58:00 +01:00
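The behaviour described above can be seen in isolation; `paused` is the
attribute the fix checks (a self-contained demo, not synapse code):

    from twisted.internet import defer

    d1 = defer.Deferred()
    d2 = defer.Deferred()

    d1.addCallback(lambda _: d2)  # a callback that returns another Deferred
    d1.callback(None)

    print(d1.called, d1.paused)   # True 1: d1 has "fired" but is paused on d2
    d1.addCallback(lambda res: print("ran with", res))

    d2.callback("d2's result")    # only now does the later callback run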
Richard van der Hoff
a8d8bf92e0 Merge pull request #3168 from matrix-org/rav/fix_logformatter
Fix incorrect reference to StringIO
2018-05-02 10:03:36 +01:00
Richard van der Hoff
e482f8cd85 Fix incorrect reference to StringIO
This was introduced in 4f2f5171
2018-05-02 09:12:26 +01:00
kaiyou
4f2e898c29 Make the logging level configurable 2018-05-01 20:50:03 +02:00
kaiyou
d4c14e1438 Fix the documentation about 'POSTGRES_DB' 2018-05-01 20:47:58 +02:00
Matthew Hodgson
9f21de6a01 missing word :| 2018-05-01 19:19:46 +01:00
Matthew Hodgson
da602419b2 missing word :| 2018-05-01 19:19:23 +01:00
Matthew Hodgson
8ae7096958 Merge branch 'release-v0.28.1' into develop 2018-05-01 19:05:03 +01:00
Matthew Hodgson
562532dd2d Merge branch 'release-v0.28.1' 2018-05-01 19:04:11 +01:00
Matthew Hodgson
5c2214f4c7 fix markdown 2018-05-01 19:03:35 +01:00
Neil Johnson
2414178ed6 Merge branch 'master' into develop 2018-05-01 18:53:56 +01:00
Neil Johnson
40d1bbd257 fix conflict in changelog from previous release 2018-05-01 18:52:44 +01:00
Matthew Hodgson
8e6bd0e324 changelog for 0.28.1 2018-05-01 18:28:23 +01:00
Neil Johnson
8570bb84cc Update __init__.py
bump version
2018-05-01 18:22:53 +01:00
Richard van der Hoff
ca7211104e Merge branch 'release-v0.28.1' into develop 2018-05-01 18:16:57 +01:00
Richard van der Hoff
d5eee5d601 Merge commit '33f469b' into release-v0.28.1 2018-05-01 18:14:18 +01:00
Richard van der Hoff
d858f3bd4e Miscellaneous fixes to python_dependencies
* add some doc about wtf this thing does
* pin Twisted to < 18.4
* add explicit dep on six (fixes #3089)
2018-05-01 18:13:54 +01:00
Richard van der Hoff
33f469ba19 Apply some limits to depth to counter abuse
* When creating a new event, cap its depth to 2^63 - 1
* When receiving events, reject any without a sensible depth

As per https://docs.google.com/document/d/1I3fi2S-XnpO45qrpCsowZv8P8dHcNZ4fsBsbOW7KABI
2018-05-01 17:54:19 +01:00
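Schematically, the two limits amount to clamping on create and validating on
receive; a sketch with hypothetical helper names, using the 2^63 - 1 cap
from the commit above:

    MAX_DEPTH = 2 ** 63 - 1  # largest value a signed 64-bit column can hold

    def depth_for_new_event(prev_event_depths):
        # a new event normally sits one deeper than its prev_events...
        depth = max(prev_event_depths or [0]) + 1
        # ...capped so it can never overflow
        return min(depth, MAX_DEPTH)

    def check_received_depth(depth):
        # reject received events whose depth is missing or not sensible
        if not isinstance(depth, int) or not (0 <= depth <= MAX_DEPTH):
            raise ValueError("Bad depth: %r" % (depth,))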
Neil Johnson
dd1a832419 remove user agent from data model, will just join on user_ips 2018-05-01 12:13:49 +01:00
Neil Johnson
d0857702e8 add indexes based on usage 2018-05-01 12:12:57 +01:00
Neil Johnson
5917562b60 10 mins seems more reasonable than every minute 2018-05-01 12:12:22 +01:00
Adrian Tschira
6495dbb326 Burninate v1auth
This closes #2602

v1auth was created to account for the differences in status code between
the v1 and v2_alpha revisions of the protocol (401 vs 403 for invalid
tokens). However since those protocols were merged, this makes the r0
version/endpoint internally inconsistent, and violates the
specification for the r0 endpoint.

This might break clients that rely on this inconsistency with the
specification. This is said to affect the legacy angular reference
client. However, I feel that restoring parity with the spec is more
important. Either way, it is critical to inform developers about this
change, in case they rely on the illegal behaviour.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-30 22:20:43 +02:00
Will Hunt
2ad3fc36e6 Fixes #3135 - Replace _OpenSSLECCurve with crypto.get_elliptic_curve (#3157)
fixes #3135

Signed-off-by: Will Hunt will@half-shot.uk
2018-04-30 16:21:11 +01:00
Richard van der Hoff
cead75fae3 Merge pull request #3160 from krombel/fix_3076
add guard for None on purge_history api
2018-04-30 15:03:59 +01:00
Krombel
576b71dd3d add guard for None on purge_history api 2018-04-30 14:29:48 +02:00
Matthew Hodgson
99a54bf2af Merge pull request #3129 from matrix-org/matthew/fix_group_dups
remove duplicates from groups tables
2018-04-30 11:47:25 +01:00
Richard van der Hoff
63ae5cbf34 Merge pull request #3143 from matrix-org/rav/remove_redundant_preserve_fn
Remove redundant call to preserve_fn
2018-04-30 10:23:59 +01:00
Richard van der Hoff
fdb6849b81 Merge pull request #3144 from matrix-org/rav/run_in_background_exception_handling
Trap exceptions thrown within run_in_background
2018-04-30 10:23:02 +01:00
Richard van der Hoff
66aa32ede2 Merge pull request #3159 from NotAFile/py3-tests-config
run config tests on py3
2018-04-30 10:22:45 +01:00
Adrian Tschira
6e005d1382 run config tests on py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-30 10:39:45 +02:00
Richard van der Hoff
01e8a52825 Merge pull request #3102 from NotAFile/py3-attributeerror
Make event properties raise AttributeError instead
2018-04-30 09:22:09 +01:00
Adrian Tschira
0c9db26260 add comment explaining attributeerror 2018-04-30 09:49:10 +02:00
Richard van der Hoff
950a32eb47 Merge pull request #3152 from NotAFile/py3-local-imports
make imports local
2018-04-30 01:28:13 +01:00
Richard van der Hoff
bc2017a594 Merge pull request #3153 from NotAFile/py3-httplib
move httplib import to six
2018-04-30 01:26:42 +01:00
Richard van der Hoff
683149c1f9 Merge pull request #3151 from NotAFile/py3-xrange-1
Move more xrange to six
2018-04-30 01:20:06 +01:00
Richard van der Hoff
7b908aeec4 Merge branch 'rav/test_36' into develop 2018-04-30 01:12:58 +01:00
Richard van der Hoff
3b0e431c82 Merge pull request #3150 from NotAFile/py3-listcomp-yield
Don't yield in list comprehensions
2018-04-30 01:11:41 +01:00
Richard van der Hoff
db75c86e84 Merge branch 'develop' into py3-xrange-1 2018-04-30 01:02:25 +01:00
Richard van der Hoff
2fd96727b1 Merge pull request #3085 from NotAFile/py3-config-text-mode
Open config file in non-bytes mode
2018-04-30 01:00:23 +01:00
Richard van der Hoff
b8ee12b978 Merge pull request #3084 from NotAFile/py3-certs-byte-mode
Open certificate files as bytes
2018-04-30 01:00:05 +01:00
Richard van der Hoff
049b0b5af2 Merge pull request #3154 from NotAFile/py3-stringio
Replace stringIO imports with six
2018-04-30 00:59:04 +01:00
Richard van der Hoff
d1d54d6088 add py36 to build matrix 2018-04-30 00:58:31 +01:00
Richard van der Hoff
ac5f2f4d86 Merge pull request #3145 from NotAFile/py3-tests
Add py3 tests to tox with folders that work
2018-04-30 00:53:05 +01:00
Richard van der Hoff
af3cc50511 Remove redundant call to preserve_fn
submit_event_for_as doesn't return a deferred anyway, so this is pointless.
2018-04-30 00:48:36 +01:00
Richard van der Hoff
dbf6f28d64 Merge pull request #3155 from NotAFile/py3-bytes-1
more bytes strings
2018-04-30 00:38:21 +01:00
Richard van der Hoff
7767a9fc0e Update tox.ini
add missing comma
2018-04-30 00:37:32 +01:00
Richard van der Hoff
aab2e4da60 Merge pull request #3140 from matrix-org/rav/use_run_in_background
Use run_in_background in preference to preserve_fn
2018-04-30 00:34:28 +01:00
Richard van der Hoff
1315d374cc Merge pull request #3156 from NotAFile/py3-hmac-bytes
Construct HMAC as bytes on py3
2018-04-30 00:33:20 +01:00
Richard van der Hoff
9e2601f830 Merge pull request #3108 from NotAFile/py3-six-urlparse
Use six.moves.urlparse
2018-04-30 00:33:05 +01:00
Adrian Tschira
122593265b Construct HMAC as bytes on py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-29 00:19:41 +02:00
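The underlying incompatibility: on Python 3, hmac.new requires bytes for
both the key and the message, where Python 2 accepted str. Encoding
explicitly works on both versions (the secret and message here are
illustrative):

    import hashlib
    import hmac

    shared_secret = "registration_shared_secret"
    nonce = "some-nonce"

    mac = hmac.new(
        key=shared_secret.encode("utf8"),  # py3 rejects a str key
        msg=nonce.encode("utf8"),
        digestmod=hashlib.sha1,
    ).hexdigest()
    print(mac)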
Adrian Tschira
e9143b6593 more bytes strings
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-29 00:13:57 +02:00
Matthew Hodgson
adaf3ec87f fix missing import 2018-04-28 22:39:15 +01:00
Matthew Hodgson
006e18b6bb pep8 2018-04-28 22:32:24 +01:00
Matthew Hodgson
42c89c8215 make it work with sqlite 2018-04-28 22:27:30 +01:00
Adrian Tschira
d82b6ea9e6 Move more xrange to six
plus a bonus next()

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:57:00 +02:00
Adrian Tschira
4f2f5171b7 replace stringIO imports 2018-04-28 13:46:23 +02:00
Adrian Tschira
94f4d7f49e move httplib import to six 2018-04-28 13:43:34 +02:00
Adrian Tschira
57b58e2174 make imports local
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:41:41 +02:00
Adrian Tschira
cdb4647a80 Don't yield in list comprehensions
I've tried to grep for more of this with no success.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:36:30 +02:00
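For context, the construct being removed only ever worked under Python 2
with inlineCallbacks; on Python 3, yield inside a list comprehension either
misbehaves (the comprehension itself becomes a generator) or, on newer
versions, is an outright syntax error. The portable rewrite is an explicit
loop; a sketch with a hypothetical fetch function:

    from twisted.internet import defer

    @defer.inlineCallbacks
    def get_all(keys, fetch):
        # py2-only, removed: results = [(yield fetch(k)) for k in keys]
        results = []
        for k in keys:
            res = yield fetch(k)  # fine on both py2 and py3
            results.append(res)
        defer.returnValue(results)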
Adrian Tschira
a376d8f761 open log_config in text mode too
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-28 13:34:13 +02:00
Adrian Tschira
4f5694e2ce Add py3 tests to tox with folders that work
It's just a few tests, but it will at least prevent a few files from
regressing. Also, it makes it easier to check your code against py36
while writing it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-27 16:29:41 +02:00
Richard van der Hoff
9558236728 Merge pull request #3127 from matrix-org/rav/deferred_timeout
Use deferred.addTimeout instead of time_bound_deferred
2018-04-27 14:32:54 +01:00
Richard van der Hoff
453adf00b6 pep8; remove spurious import 2018-04-27 14:32:08 +01:00
Richard van der Hoff
fc149b4eeb Merge remote-tracking branch 'origin/develop' into rav/use_run_in_background 2018-04-27 14:31:23 +01:00
Richard van der Hoff
6146332387 Merge remote-tracking branch 'origin/develop' into rav/deferred_timeout 2018-04-27 14:18:00 +01:00
Erik Johnston
d2737c1fae Merge branch 'master' of github.com:matrix-org/synapse into develop 2018-04-27 13:28:51 +01:00
Richard van der Hoff
2a13af23bc Use run_in_background in preference to preserve_fn
While I was going through uses of preserve_fn for other PRs, I converted places
which only use the wrapped function once to use run_in_background, to avoid
creating the function object.
2018-04-27 12:55:51 +01:00
Richard van der Hoff
3d1ae61399 Merge branch 'develop' into rav/deferred_timeout 2018-04-27 12:54:43 +01:00
Richard van der Hoff
9d2c1b8429 Backport deferred.addTimeout
Twisted 16.0 doesn't have addTimeout, so let's backport it.
2018-04-27 12:52:30 +01:00
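Twisted 16.0 predates Deferred.addTimeout, so a backport has to do the same
job by hand: schedule a cancel, and unschedule it if the deferred fires
first. A minimal sketch (not necessarily the exact code merged):

    def add_timeout_to_deferred(deferred, timeout, reactor):
        """Cancel `deferred` if it has not fired within `timeout` seconds."""
        delayed_call = reactor.callLater(timeout, deferred.cancel)

        def cancel_timeout(result):
            # the deferred fired first, so stop the pending cancel
            if delayed_call.active():
                delayed_call.cancel()
            return result

        deferred.addBoth(cancel_timeout)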
Richard van der Hoff
13843f771e Trap exceptions thrown within run_in_background
Turn any exceptions that get thrown synchronously within run_in_background into
Failures instead.
2018-04-27 12:17:13 +01:00
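The failure mode: if f raises before returning a Deferred, the exception
escapes run_in_background synchronously instead of surfacing through the
returned Deferred like an asynchronous error would. A simplified sketch of
the trap, without synapse's logcontext handling:

    from twisted.internet import defer
    from twisted.python import failure

    def run_in_background(f, *args, **kwargs):
        try:
            res = f(*args, **kwargs)
        except Exception:
            # a synchronous throw becomes a failed Deferred,
            # just like an asynchronous one
            return defer.fail(failure.Failure())
        if not isinstance(res, defer.Deferred):
            return defer.succeed(res)
        return res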
Richard van der Hoff
41d4b07a53 Merge pull request #3142 from matrix-org/rav/reraise
reraise exceptions more carefully
2018-04-27 12:16:19 +01:00
Neil Johnson
05ba7e3a44 Update CHANGES.rst 2018-04-27 12:13:12 +01:00
Neil Johnson
53849ea9d3 Update CHANGES.rst 2018-04-27 12:11:39 +01:00
Richard van der Hoff
268e40341b Merge pull request #3136 from matrix-org/rav/fix_dependencies
Miscellaneous fixes to python_dependencies
2018-04-27 11:48:28 +01:00
Richard van der Hoff
9c3da24561 Merge pull request #3138 from matrix-org/rav/catch_unhandled_exceptions
Improve exception handling for background processes
2018-04-27 11:47:49 +01:00
Richard van der Hoff
53494c34df Merge pull request #3139 from matrix-org/rav/consume_errors
Add missing consumeErrors to improve exception handling
2018-04-27 11:47:21 +01:00
Richard van der Hoff
6493b22b42 reraise exceptions more carefully
We need to be careful (under python 2, at least) that when we reraise an
exception after doing some error handling, we actually reraise the original
exception rather than anything that might have been raised (and handled) during
the error handling.
2018-04-27 11:40:06 +01:00
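Concretely: on Python 2 a bare `raise` re-raises whatever exception was
handled most recently, which may be one raised (and caught) inside the error
handling rather than the original. Capturing sys.exc_info() up front and
re-raising it explicitly avoids this; six.reraise works on both versions:

    import sys

    import six

    def run_with_cleanup(work, cleanup):
        try:
            return work()
        except Exception:
            exc_info = sys.exc_info()  # capture before any further handling
            cleanup()  # may raise and handle its own exceptions
            # a bare `raise` here could re-raise the wrong exception on py2
            six.reraise(*exc_info)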
Erik Johnston
6e10eed28e Refactor event storage to not require state
This is in preparation for using contexts that may or may not have the
current_state_ids set. This will allow us to avoid unnecessarily pulling
out state for an event on the master process when using workers.

We also add a check to see if the state groups of the old extremities
are the same as the new ones.
2018-04-27 11:38:02 +01:00
Richard van der Hoff
605defb9e4 Add missing consumeErrors
In general we want defer.gatherResults to consumeErrors, rather than having
exceptions hanging around and getting logged as CRITICAL unhandled errors.
2018-04-27 11:16:28 +01:00
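For context: without consumeErrors=True, gatherResults errbacks with a
FirstError but leaves the underlying failures unconsumed, so each one is
later logged as an unhandled error when its deferred is garbage-collected.
A small demo:

    from twisted.internet import defer

    d1 = defer.fail(RuntimeError("boom"))
    d2 = defer.succeed(42)

    # consumeErrors=True marks the per-deferred failures as handled;
    # the gathered deferred still errbacks with a FirstError
    d = defer.gatherResults([d1, d2], consumeErrors=True)
    d.addErrback(lambda f: print("gathered failure:", f.value))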
Richard van der Hoff
9255a6cb17 Improve exception handling for background processes
There were a bunch of places where we fire off a process to happen in the
background, but don't have any exception handling on it - instead relying on
the unhandled error being logged when the relevent deferred gets
garbage-collected.

This is unsatisfactory for a number of reasons:
 - logging on garbage collection is best-effort and may happen some time after
   the error, if at all
 - it can be hard to figure out where the error actually happened.
 - it is logged as a scary CRITICAL error which (a) I always forget to grep for
   and (b) it's not really CRITICAL if a background process we don't care about
   fails.

So this is an attempt to add exception handling to everything we fire off into
the background.
2018-04-27 11:07:40 +01:00
Neil Johnson
d842ed14f4 Merge tag 'v0.28.0'
Changes in synapse v0.28.0 (2018-04-26)
===========================================

Bug Fixes:

* Fix quarantine media admin API and search reindex (PR #3130)
* Fix media admin APIs (PR #3134)

Changes in synapse v0.28.0-rc1 (2018-04-24)
===========================================

Minor performance improvement to federation sending and bug fixes.

(Note: this release does not include the state resolution changes discussed in Matrix Live)

Features:

* Add metrics for event processing lag (PR #3090)
* Add metrics for ResponseCache (PR #3092)

Changes:

* Synapse on PyPy (PR #2760) Thanks to @Valodim!
* move handling of auto_join_rooms to RegisterHandler (PR #2996) Thanks to @krombel!
* Improve handling of SRV records for federation connections (PR #3016) Thanks to @silkeh!
* Document the behaviour of ResponseCache (PR #3059)
* Preparation for py3 (PR #3061, #3073, #3074, #3075, #3103, #3104, #3106, #3107, #3109, #3110) Thanks to @NotAFile!
* update prometheus dashboard to use new metric names (PR #3069) Thanks to @krombel!
* use python3-compatible prints (PR #3074) Thanks to @NotAFile!
* Send federation events concurrently (PR #3078)
* Limit concurrent event sends for a room (PR #3079)
* Improve R30 stat definition (PR #3086)
* Send events to ASes concurrently (PR #3088)
* Refactor ResponseCache usage (PR #3093)
* Clarify that SRV may not point to a CNAME (PR #3100) Thanks to @silkeh!
* Use str(e) instead of e.message (PR #3103) Thanks to @NotAFile!
* Use six.itervalues in some places (PR #3106) Thanks to @NotAFile!
* Refactor store.have_events (PR #3117)

Bug Fixes:

* Return 401 for invalid access_token on logout (PR #2938) Thanks to @dklug!
* Return a 404 rather than a 500 on rejoining empty rooms (PR #3080)
* fix federation_domain_whitelist (PR #3099)
* Avoid creating events with huge numbers of prev_events (PR #3113)
* Reject events which have lots of prev_events (PR #3118)
2018-04-27 10:40:27 +01:00
Richard van der Hoff
31c8be956f also upgrade pip when installing 2018-04-27 01:56:58 +01:00
Neil Johnson
28dd536e80 update changelog and bump version to 0.28.0 2018-04-26 15:51:39 +01:00
Neil Johnson
8721580303 Merge branch 'develop' of https://github.com/matrix-org/synapse into release-v0.28.0-rc1 2018-04-26 15:44:54 +01:00
Richard van der Hoff
dbf76fd4b9 jenkins build: make sure we have a recent setuptools 2018-04-26 13:11:03 +01:00
Richard van der Hoff
d78ada3166 Miscellaneous fixes to python_dependencies
* add some doc about wtf this thing does
* pin Twisted to < 18.4
* add explicit dep on six (fixes #3089)
2018-04-26 13:11:03 +01:00
Erik Johnston
0ced8b5b47 Merge pull request #3134 from matrix-org/erikj/fix_admin_media_api
Fix media admin APIs
2018-04-26 12:02:40 +01:00
Erik Johnston
7ec8e798b4 Fix media admin APIs 2018-04-26 11:31:22 +01:00
Neil Johnson
fb6015d0a6 pep8 2018-04-25 17:56:11 +01:00
Erik Johnston
a5ad88913c Merge pull request #3130 from matrix-org/erikj/fix_quarantine_room
Fix quarantine media admin API
2018-04-25 17:54:12 +01:00
Neil Johnson
617bf40924 Generate user daily stats 2018-04-25 17:37:29 +01:00
Erik Johnston
22881b3d69 Also fix reindexing of search 2018-04-25 15:32:04 +01:00
Erik Johnston
ba3166743c Fix quarantine media admin API 2018-04-25 15:11:18 +01:00
Matthew Hodgson
e3a373f002 remove duplicates from groups tables
and rename inconsistently named indexes.
Based on https://github.com/matrix-org/synapse/pull/3128 - thanks @vurpo!
2018-04-25 14:58:43 +01:00
Neil Johnson
48c01ae851 ignore atom editor python ide files 2018-04-25 11:02:28 +01:00
Neil Johnson
6ab3b9c743 Update CHANGES.rst
Rephrase v0.28.0-rc1 summary
2018-04-24 16:39:20 +01:00
Neil Johnson
1bb83d5d41 Merge branch 'master' into develop 2018-04-24 15:52:43 +01:00
Neil Johnson
13a2beabca Update CHANGES.rst
fix formatting on line break
2018-04-24 15:43:30 +01:00
Neil Johnson
2c3e995f38 Bump version and update changelog 2018-04-24 15:33:22 +01:00
Neil Johnson
8e8b06715f Revert "Bump version and update changelog"
This reverts commit 08b29d4574.
2018-04-24 13:58:45 +01:00
Neil Johnson
08b29d4574 Bump version and update changelog 2018-04-24 13:56:12 +01:00
Richard van der Hoff
77ebef9d43 Merge pull request #3118 from matrix-org/rav/reject_prev_events
Reject events which have lots of prev_events
2018-04-23 17:51:38 +01:00
Richard van der Hoff
9b9c38373c Remove spurious param 2018-04-23 12:00:06 +01:00
Richard van der Hoff
286e20f2bc Merge pull request #3109 from NotAFile/py3-tests-fix
Make tests py3 compatible
2018-04-23 11:59:03 +01:00
Richard van der Hoff
1ea904b9f0 Use deferred.addTimeout instead of time_bound_deferred
This doesn't feel like a wheel we need to reinvent.
2018-04-23 00:53:18 +01:00
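For reference, the Twisted API being adopted here looks roughly like this (a sketch; the 10-second value is an arbitrary assumption):

```
from twisted.internet import defer, reactor

d = defer.Deferred()
# If no result arrives within 10 seconds, d is cancelled and its errback
# chain fires with a TimeoutError.
d.addTimeout(10, reactor)
```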
Richard van der Hoff
dc875d2712 Merge pull request #3106 from NotAFile/py3-six-itervalues-1
Use six.itervalues in some places
2018-04-20 15:43:52 +01:00
Richard van der Hoff
8dc4a6144b Merge pull request #3107 from NotAFile/py3-bool-nonzero
add __bool__ alias to __nonzero__ methods
2018-04-20 15:43:39 +01:00
Richard van der Hoff
d06a9ea5f7 Merge pull request #3104 from NotAFile/py3-unittest-config
Add some more variables to the unittest config
2018-04-20 15:35:58 +01:00
Richard van der Hoff
c09a6daf09 Merge pull request #3110 from NotAFile/py3-six-queue
Replace Queue with six.moves.queue
2018-04-20 15:35:00 +01:00
Richard van der Hoff
692a3cc806 Merge pull request #3103 from NotAFile/py3-baseexcepton-message
Use str(e) instead of e.message
2018-04-20 15:34:49 +01:00
Erik Johnston
366dd893fc Merge pull request #3100 from silkeh/readme-srv-cname
Clarify that SRV may not point to a CNAME
2018-04-20 15:18:44 +01:00
Erik Johnston
bdb7714d13 Merge pull request #3125 from matrix-org/erikj/add_contrib_docs
Document contrib directory
2018-04-20 13:02:24 +01:00
Erik Johnston
67dabe143d Document contrib directory 2018-04-20 11:47:38 +01:00
Richard van der Hoff
3de7d9fe99 accept stupid events over backfill 2018-04-20 11:41:03 +01:00
Richard van der Hoff
11a67b7c9d Merge pull request #3093 from matrix-org/rav/response_cache_wrap
Refactor ResponseCache usage
2018-04-20 11:31:17 +01:00
Richard van der Hoff
0c280d4d99 Reinstate linearizer for federation_server.on_context_state_request 2018-04-20 11:10:04 +01:00
Richard van der Hoff
bc381d5798 Merge pull request #3117 from matrix-org/rav/refactor_have_events
Refactor store.have_events
2018-04-20 10:26:12 +01:00
Richard van der Hoff
b1dfbc3c40 Refactor store.have_events
It turns out that most of the time we were calling have_events, we were only
using half of the result. Replace have_events with have_seen_events and
get_rejection_reasons, so that we can see what's going on a bit more clearly.
2018-04-20 10:25:56 +01:00
Richard van der Hoff
dacf3a50ac Merge pull request #3113 from matrix-org/rav/fix_huge_prev_events
Avoid creating events with huge numbers of prev_events
2018-04-18 11:27:56 +01:00
Richard van der Hoff
1f4b498b73 Add some comments 2018-04-18 00:15:36 +01:00
Richard van der Hoff
e585228860 Check events on backfill too 2018-04-18 00:06:42 +01:00
Richard van der Hoff
9b7794262f Reject events which have too many auth_events or prev_events
... this should protect us from being DoS'd by people making silly events
(deliberately or otherwise)
2018-04-18 00:06:42 +01:00
Richard van der Hoff
639480e14a Avoid creating events with huge numbers of prev_events
In most cases, we limit the number of prev_events for a given event to 10
events. This fixes a particular code path which created events with huge
numbers of prev_events.
2018-04-16 18:41:37 +01:00
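The real selection logic lives in the event-creation path; the following is only a hedged sketch of the shape of the fix (the function name and the sampling strategy are illustrative, not Synapse's exact code):

```
import random

MAX_PREV_EVENTS = 10  # the limit mentioned in the commit message


def cap_prev_events(prev_event_ids):
    """Keep at most MAX_PREV_EVENTS prev_events for a new event."""
    prev_event_ids = list(prev_event_ids)
    if len(prev_event_ids) <= MAX_PREV_EVENTS:
        return prev_event_ids
    return random.sample(prev_event_ids, MAX_PREV_EVENTS)
```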
Adrian Tschira
878995e660 Replace Queue with six.moves.queue
and a six.range change which I missed the last time

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-16 00:46:21 +02:00
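The rename being papered over, assuming six is installed:

```
# py2's Queue module became queue on py3; six.moves gives one spelling
# that works on both.
from six.moves import queue

q = queue.Queue()
q.put("item")
assert q.get() == "item"
```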
Adrian Tschira
a1a3c9660f Make tests py3 compatible
This is a mixed commit that fixes various small issues

 * print parentheses
 * 01 is invalid syntax (it was octal in py2)
 * [x for i in 1, 2] is invalid syntax
 * six moves

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-16 00:39:32 +02:00
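Tiny examples of each py2-only construct listed above, with the py3-compatible spelling (illustrative, not the exact lines changed):

```
from six.moves import range  # "six moves" cover renamed stdlib names

print("hello")               # py2 allowed: print "hello"
mode = 0o755                 # py2 allowed: 0755 (leading-zero octal literal)
pairs = [x for x in (1, 2)]  # py2 allowed: [x for x in 1, 2]
total = sum(range(3))        # six.moves.range is xrange on py2, range on py3
```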
Matthew Hodgson
512633ef44 fix spurious changelog dup 2018-04-15 22:45:06 +01:00
Adrian Tschira
2a3c33ff03 Use six.moves.urlparse
The imports were shuffled around a bunch in py3

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 21:22:43 +02:00
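The py2 `urlparse` module moved to `urllib.parse` on py3; with six the import becomes (the URL here is just an example):

```
from six.moves.urllib.parse import urlparse

parts = urlparse("https://matrix.org/_matrix/client/r0/sync")
assert parts.netloc == "matrix.org"
```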
Adrian Tschira
f63ff73c7f add __bool__ alias to __nonzero__ methods
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:40:47 +02:00
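The pattern in question, shown on a hypothetical class (py2 truth-testing calls `__nonzero__`, py3 calls `__bool__`):

```
class Result(object):  # hypothetical example class
    def __init__(self, items):
        self.items = items

    def __nonzero__(self):        # py2 truth-testing hook
        return bool(self.items)

    __bool__ = __nonzero__        # py3 looks for __bool__ instead


assert not Result([])
assert Result([1])
```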
Adrian Tschira
36c59ce669 Use six.itervalues in some places
There's more where that came from

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:39:43 +02:00
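What the substitution looks like (six required; the dict is illustrative):

```
from six import itervalues

cache = {"a": 1, "b": 2}
# py2: cache.itervalues(); py3: cache.values() -- either way, no
# intermediate list is materialised.
total = sum(itervalues(cache))
assert total == 3
```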
Adrian Tschira
cb9cdfecd0 Add some more variables to the unittest config
These worked accidentally before (python2 doesn't complain if you
compare incompatible types) but under py3 this blows up spectacularly

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:36:39 +02:00
Adrian Tschira
1515560f5c Use str(e) instead of e.message
Doing this I learned e.message was pretty short-lived: it was added in 2.6,
then recognised as a bad idea and deprecated in 2.7

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:32:42 +02:00
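The replacement in miniature:

```
try:
    raise ValueError("boom")
except ValueError as e:
    # py2-only (and deprecated since 2.7): e.message
    message = str(e)  # works on both py2 and py3
assert message == "boom"
```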
Adrian Tschira
bfc2ade9b3 Make event properties raise AttributeError instead
They raised KeyError before. I'm changing this because the code uses
hasattr() to check for the presence of a key. This worked accidentally
before, because hasattr() silences all exceptions in python 2. However,
in python3, this isn't the case anymore.

I had a look around to see if anything depended on this raising a
KeyError and I couldn't find anything. Of course, I could have simply
missed it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-15 20:16:59 +02:00
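A minimal sketch of the behaviour change (the class is a hypothetical stand-in for the event properties, not Synapse's actual code):

```
class FakeEvent(object):  # hypothetical stand-in
    def __init__(self, d):
        self._dict = d

    @property
    def sender(self):
        try:
            return self._dict["sender"]
        except KeyError:
            # Raising AttributeError (not KeyError) makes hasattr() return
            # False on py3; py2's hasattr() swallowed *any* exception.
            raise AttributeError("sender")


assert not hasattr(FakeEvent({}), "sender")
assert hasattr(FakeEvent({"sender": "@a:b"}), "sender")
```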
Silke
c4bdbc2bd2 Clarify that SRV may not point to a CNAME
Signed-off-by: Silke Hofstra <silke@slxh.eu>
2018-04-14 14:55:25 +02:00
kaiyou
041b41a825 Merge remote-tracking branch 'upstream/master' into feat-dockerfile 2018-04-14 12:27:16 +02:00
Neil Johnson
154b44c249 Merge branch 'master' of https://github.com/matrix-org/synapse into develop 2018-04-13 17:07:54 +01:00
Matthew Hodgson
0d8c50df44 Merge pull request #3099 from matrix-org/matthew/fix-federation-domain-whitelist
fix federation_domain_whitelist
2018-04-13 15:51:13 +01:00
Matthew Hodgson
78a9698650 fix federation_domain_whitelist
we were checking the wrong server_name on inbound requests
2018-04-13 15:47:43 +01:00
Matthew Hodgson
25b0ba30b1 revert last to PR properly 2018-04-13 15:46:37 +01:00
Matthew Hodgson
f8d46cad3c correctly auth inbound federation_domain_whitelist reqs 2018-04-13 15:41:52 +01:00
Neil Johnson
d4b2e05852 Merge branch 'master' of https://github.com/matrix-org/synapse into develop 2018-04-13 12:16:27 +01:00
Neil Johnson
eb53439c4a Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-04-13 12:14:57 +01:00
Neil Johnson
51d628d28d Bump version and Change log 2018-04-13 12:08:19 +01:00
Neil Johnson
df77837a33 Merge pull request #3095 from matrix-org/rav/bump_canonical_json
Update canonicaljson dependency
2018-04-13 12:04:46 +01:00
Richard van der Hoff
d3347ad485 Revert "Use sortedcontainers instead of blist"
This reverts commit 9fbe70a7dc.

It turns out that sortedcontainers.SortedDict is not an exact match for
blist.sorteddict; in particular, `popitem()` removes things from the opposite
end of the dict.

This is trivial to fix, but I want to add some unit tests, and potentially some
more thought about it, before we do so.
2018-04-13 11:16:43 +01:00
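The mismatch is easy to demonstrate with sortedcontainers (the behaviour shown is SortedDict's default; the exact keyword arguments for choosing an end vary between sortedcontainers versions):

```
from sortedcontainers import SortedDict

sd = SortedDict({1: "a", 2: "b", 3: "c"})

# SortedDict.popitem() defaults to removing the item with the *largest*
# key -- the opposite end from what the blist-based code expected.
key, value = sd.popitem()
assert (key, value) == (3, "c")
```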
Richard van der Hoff
fac3f9e678 Bump canonicaljson to 1.1.3
1.1.2 was a bit broken too :/
2018-04-13 10:21:38 +01:00
Richard van der Hoff
60f6014bb7 ResponseCache: fix handling of completed results
Turns out that ObservableDeferred.observe doesn't return a deferred if the
result is already completed. Fix handling and improve documentation.
2018-04-13 07:32:29 +01:00
Richard van der Hoff
119596ab8f Update canonicaljson dependency
1.1.0 and 1.1.1 were broken, so we're updating this to help people make sure
they don't end up on a broken version.

Also, 1.1.0 is speedier...
2018-04-12 17:31:44 +01:00
Richard van der Hoff
b78395b7fe Refactor ResponseCache usage
Adds a `.wrap` method to ResponseCache which wraps up the boilerplate of a
(get, set) pair, and then use it throughout the codebase.

This is largely a non-functional change, but it does include the following
functional changes:

* federation_server.on_context_state_request: drops use of _server_linearizer
  which looked redundant and could cause incorrect cache misses by yielding
  between the get and the set.
* RoomListHandler.get_remote_public_room_list(): fixes logcontext leaks
* the wrap function includes some logging. I'm hoping this won't be too noisy
  on production.
2018-04-12 13:02:15 +01:00
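As a self-contained, synchronous sketch of the boilerplate being folded away (the real ResponseCache deals in deferreds and logcontexts; the class and names here are a toy model, not Synapse's API):

```
class MiniResponseCache(object):  # toy model, not Synapse's class
    def __init__(self):
        self._cache = {}

    def get(self, key):
        return self._cache.get(key)

    def set(self, key, value):
        self._cache[key] = value
        return value

    def wrap(self, key, callback, *args, **kwargs):
        """Fold the (get, set) pair into one call: look up key and,
        on a miss, invoke callback and cache its result."""
        hit = self.get(key)
        if hit is not None:
            return hit
        return self.set(key, callback(*args, **kwargs))


cache = MiniResponseCache()
value = cache.wrap("room:1", lambda room_id: room_id.upper(), "room:1")
assert value == "ROOM:1"
```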
Richard van der Hoff
d5c74b9f6c Merge pull request #3092 from matrix-org/rav/response_cache_metrics
Add metrics for ResponseCache
2018-04-12 12:59:36 +01:00
Erik Johnston
0f13f30fca Merge pull request #3090 from matrix-org/erikj/processed_event_lag
Add metrics for event processing lag
2018-04-12 12:18:57 +01:00
Erik Johnston
415aeefd89 Format docstring 2018-04-12 12:07:09 +01:00
Erik Johnston
19ceb4851f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/processed_event_lag 2018-04-12 11:36:07 +01:00
Richard van der Hoff
261124396e Merge pull request #3059 from matrix-org/rav/doc_response_cache
Document the behaviour of ResponseCache
2018-04-12 11:22:30 +01:00
Erik Johnston
23a7f9d7f4 Doc we raise on unknown event 2018-04-12 11:20:51 +01:00
Erik Johnston
d7bf3a68f0 s/list/tuple 2018-04-12 11:19:04 +01:00
Erik Johnston
f67e906e18 Set all metrics at the same time 2018-04-12 11:18:19 +01:00
Erik Johnston
971059a733 Merge pull request #3088 from matrix-org/erikj/as_parallel
Send events to ASes concurrently
2018-04-12 10:42:36 +01:00
Erik Johnston
e939f3bca6 Fix tests 2018-04-11 14:37:11 +01:00
Erik Johnston
4dae4a97ed Track last processed event received_ts 2018-04-11 14:27:09 +01:00
Erik Johnston
92e34615c5 Track where event stream processing have gotten up to 2018-04-11 12:13:40 +01:00
Erik Johnston
ab825aa328 Add GaugeMetric 2018-04-11 12:13:40 +01:00
Richard van der Hoff
233699c42e Merge pull request #2760 from Valodim/pypy
Synapse on PyPy
2018-04-11 11:20:01 +01:00
Neil Johnson
427e6c4059 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse 2018-04-11 10:59:00 +01:00
Neil Johnson
781cd8c54f bump version/changelog 2018-04-11 10:54:43 +01:00
Neil Johnson
9ef0b179e0 Merge commit '11d2609da70af797405241cdf7d9df19db5628f2' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-11 10:51:59 +01:00
Erik Johnston
121591568b Send events to ASes concurrently 2018-04-11 09:56:00 +01:00
Richard van der Hoff
b3384232a0 Add metrics for ResponseCache 2018-04-10 23:14:47 +01:00
Matthew Hodgson
360d899a64 Merge pull request #3086 from matrix-org/r30_stats
fix typo
2018-04-10 17:46:37 +01:00
Neil Johnson
d54cfbb7a8 fix typo 2018-04-10 17:38:16 +01:00
Erik Johnston
eaa2ebf20b Merge pull request #3079 from matrix-org/erikj/limit_concurrent_sends
Limit concurrent event sends for a room
2018-04-10 16:43:58 +01:00
Erik Johnston
9daf82278f Merge pull request #3078 from matrix-org/erikj/federation_sender
Send federation events concurrently
2018-04-10 16:43:48 +01:00
Adrian Tschira
a3f9ddbede Open certificate files as bytes
That's what pyOpenSSL expects on python3

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-10 17:36:29 +02:00
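The change in miniature (assumes pyOpenSSL is installed and a ``cert.pem`` file exists; illustrative only):

```
from OpenSSL import crypto

# pyOpenSSL's loaders want bytes on py3, so read in binary mode.
with open("cert.pem", "rb") as f:  # "rb", not "r"
    cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())
```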
Adrian Tschira
7f8eebc8ee Open config file in non-bytes mode
Nothing written into it is encoded, so opening it in bytes mode makes
little sense, and it did break on python3 the way it was before.

The variable names were adjusted to be less misleading.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-10 17:32:40 +02:00
Neil Johnson
dd723267b2 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into develop 2018-04-10 15:03:29 +01:00
Erik Johnston
a060dfa132 Use run_in_background instead 2018-04-10 14:25:11 +01:00
Erik Johnston
f8e8ec013b Note why we're limiting concurrent event sends 2018-04-10 14:00:46 +01:00
Erik Johnston
1246d23710 Preserve log contexts correctly 2018-04-10 12:04:32 +01:00
Erik Johnston
d49cbf712f Log event ID on exception 2018-04-10 12:03:41 +01:00
Erik Johnston
ce72d590ed Merge pull request #3082 from matrix-org/erikj/urlencode_paths
URL quote path segments over federation
2018-04-10 11:31:16 +01:00
Vincent Breitmoser
6d7f0f8dd3 Don't disable GC when running on PyPy
PyPy's incminimark GC can't be triggered manually. From what I observed
there are no obvious issues with just letting it run normally. And
unlike CPython, it actually returns unused RAM to the system.

Signed-off-by: Vincent Breitmoser <look@my.amazin.horse>
2018-04-10 11:35:34 +02:00
Vincent Breitmoser
f4284d943a In DomainSpecificString, override __repr__ in addition to __str__
For some reason, string interpolation on a DomainSpecificString object
like "%r" % (domainSpecificStringObj) fails under PyPy, because the
default __repr__ implementation wants to iterate over the object. I'm
not sure why that happens, but overriding __repr__ instead of __str__
fixes this problem, and is arguably the more appropriate thing to do
anyway.
2018-04-10 11:35:29 +02:00
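A simplified stand-in showing the fix (the real DomainSpecificString is a namedtuple subclass with a per-type sigil; this sketch only demonstrates the __repr__ point):

```
class DomainSpecificString(object):  # simplified stand-in
    SIGIL = "@"

    def __init__(self, localpart, domain):
        self.localpart = localpart
        self.domain = domain

    def __str__(self):
        return "%s%s:%s" % (self.SIGIL, self.localpart, self.domain)

    # With an explicit __repr__, "%r" % (obj,) uses this string instead of
    # whatever default the runtime (CPython or PyPy) provides.
    __repr__ = __str__


assert "%r" % (DomainSpecificString("alice", "example.com"),) == "@alice:example.com"
```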
Richard van der Hoff
d1e56cfcd1 Fix pep8 error on psycopg2cffi hack 2018-04-10 11:35:29 +02:00
Vincent Breitmoser
89de934981 Use psycopg2cffi module instead of psycopg2 if running on pypy
The psycopg2 package isn't available for PyPy. This commit adds a check
for whether the runtime is PyPy and, if so, uses the psycopg2cffi module
in place of psycopg2. This is almost a drop-in replacement, except for one
place where an additional cast to string is required.
2018-04-10 11:29:52 +02:00
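The usual registration shim for this looks like the following (a sketch; psycopg2cffi's `compat.register()` installs the module under the name ``psycopg2`` so existing imports keep working):

```
import platform

if platform.python_implementation() == "PyPy":
    from psycopg2cffi import compat
    compat.register()  # makes "import psycopg2" resolve to psycopg2cffi

import psycopg2  # the cffi implementation on PyPy, the C one elsewhere
```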
Vincent Breitmoser
9fbe70a7dc Use sortedcontainers instead of blist
This commit drop-in replaces blist with SortedContainers. They are
written in pure Python so they work with PyPy, but perform as well as
native implementations, at least in a couple of benchmarks:

http://www.grantjenks.com/docs/sortedcontainers/performance.html
2018-04-10 11:29:51 +02:00
Richard van der Hoff
a3599dda97 Merge pull request #2996 from krombel/allow_auto_join_rooms
move handling of auto_join_rooms to RegisterHandler
2018-04-10 01:11:00 +01:00
Richard van der Hoff
87478c5a60 Merge pull request #3061 from NotAFile/add-some-byte-strings
Add b prefixes to some strings that are bytes in py3
2018-04-09 23:54:05 +01:00
Richard van der Hoff
c508b2f2f0 Merge pull request #3073 from NotAFile/use-six-reraise
Replace old-style raise with six.reraise
2018-04-09 23:53:40 +01:00
Richard van der Hoff
37354b55c9 Merge pull request #2938 from dklug/develop
Return 401 for invalid access_token on logout
2018-04-09 23:52:56 +01:00
Richard van der Hoff
0e9aa1d091 Merge pull request #3074 from NotAFile/fix-py3-prints
use python3-compatible prints
2018-04-09 23:44:41 +01:00
Richard van der Hoff
8eaa141d8f Merge pull request #3075 from NotAFile/six-type-checks
Replace some type checks with six type checks
2018-04-09 23:40:44 +01:00
Richard van der Hoff
664adb4236 Merge pull request #3016 from silkeh/improve-service-lookups
Improve handling of SRV records for federation connections
2018-04-09 23:40:06 +01:00
Richard van der Hoff
aea3a93611 Merge pull request #3069 from krombel/update_prometheus_config
update prometheus dashboard to use new metric names
2018-04-09 23:37:18 +01:00
Neil Johnson
41e0611895 remove errant print 2018-04-09 18:44:20 +01:00
Neil Johnson
61b439c904 Fix msec to sec, again 2018-04-09 18:43:48 +01:00
Neil Johnson
87770300d5 Fix msec to sec 2018-04-09 18:38:59 +01:00
Neil Johnson
9a311adfea v0.27.3-rc2 2018-04-09 17:52:08 +01:00
Neil Johnson
64bc2162ef Fix psycopg2 interpolation 2018-04-09 17:50:36 +01:00
Richard van der Hoff
d2c6f4d626 Merge pull request #3080 from matrix-org/rav/fix_500_on_rejoin
Return a 404 rather than a 500 on rejoining empty rooms
2018-04-09 17:32:36 +01:00
Neil Johnson
5232d3bfb1 version bump v0.27.3-rc2 2018-04-09 17:25:57 +01:00
Neil Johnson
5e785d4d5b Merge branch 'develop' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 17:21:34 +01:00
Erik Johnston
6e025a97b4 Handle all events in a room correctly 2018-04-09 16:02:48 +01:00
Neil Johnson
414b2b3bd1 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 16:02:08 +01:00
Neil Johnson
b151eb14a2 Update CHANGES.rst 2018-04-09 16:01:59 +01:00
Neil Johnson
64cebbc730 Merge branch 'release-v0.27.0' of https://github.com/matrix-org/synapse into release-v0.27.0 2018-04-09 16:00:51 +01:00
Neil Johnson
d9ae2bc826 bump version to release candidate 2018-04-09 16:00:31 +01:00
Neil Johnson
21d5a2a08e Update CHANGES.rst 2018-04-09 15:55:41 +01:00
Neil Johnson
c115deed12 Update CHANGES.rst 2018-04-09 15:54:32 +01:00
Neil Johnson
072fb59446 bump version 2018-04-09 13:49:25 +01:00
Neil Johnson
89dda61315 Merge branch 'develop' into release-v0.27.0 2018-04-09 13:48:15 +01:00
Neil Johnson
687f3451bd 0.27.3 2018-04-09 13:44:05 +01:00
Richard van der Hoff
f3ef60662f Return a 404 rather than a 500 on rejoining empty rooms
Filter ourselves out of the server list before checking for an empty remote
host list, to fix 500 error

Fixes #2141
2018-04-09 12:56:22 +01:00
Erik Johnston
e5082494eb Limit concurrent event sends for a room 2018-04-09 12:07:39 +01:00
Erik Johnston
56b0589865 Use create_and_send_nonmember_event everywhere 2018-04-09 12:04:18 +01:00
Erik Johnston
11974f3787 Send federation events concurrently 2018-04-09 11:47:10 +01:00
Erik Johnston
145d14656b Handle exceptions in get_hosts_for_room when sending events over federation 2018-04-09 11:47:01 +01:00
kaiyou
a13b7860c6 Merge remote-tracking branch 'upstream/master' into feat-dockerfile 2018-04-08 17:56:44 +02:00
Adrian Tschira
e54c202b81 Replace some type checks with six type checks
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-07 01:02:32 +02:00
Adrian Tschira
b0500d3774 use python3-compatible prints 2018-04-06 23:35:27 +02:00
Adrian Tschira
4f40d058cc Replace old-style raise with six.reraise
The old style raise is invalid syntax in python3. As noted in the docs,
this adds one more frame in the traceback, but I think this is
acceptable:

    <ipython-input-7-bcc5cba3de3f> in <module>()
         16     except:
         17         pass
    ---> 18     six.reraise(*x)

    /usr/lib/python3.6/site-packages/six.py in reraise(tp, value, tb)
        691             if value.__traceback__ is not tb:
        692                 raise value.with_traceback(tb)
    --> 693             raise value
        694         finally:
        695             value = None

    <ipython-input-7-bcc5cba3de3f> in <module>()
          9
         10 try:
    ---> 11     x()
         12 except:
         13     x = sys.exc_info()

Also note that this uses six, which is not formally a dependency yet,
but is included indirectly since most packages depend on it.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-06 23:06:24 +02:00
Krombel
c7ede92d0b make prometheus config compliant to v0.28 2018-04-05 23:34:01 +02:00
Adrian Tschira
6168351877 Add b prefixes to some strings that are bytes in py3
This has no effect on python2

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-04-04 13:48:51 +02:00
Silke
72251d1b97 Remove address resolution of hosts in SRV records
Signed-off-by: Silke Hofstra <silke@slxh.eu>
2018-04-04 12:26:50 +02:00
Richard van der Hoff
a9a74101a4 Document the behaviour of ResponseCache
it looks like everything that uses ResponseCache expects to have to
`make_deferred_yieldable` its results. It's debatable whether that is the best
approach, but let's document it for now to avoid further confusion.
2018-04-04 09:06:22 +01:00
Neil Johnson
16aeb41547 Update README.rst
update docker hub url
2018-03-28 16:47:56 +01:00
Krombel
6152e253d8 Merge branch 'develop' of into allow_auto_join_rooms 2018-03-28 14:45:28 +02:00
Matthew Hodgson
b2f2282947 make lazy_load_members configurable in filters 2018-03-19 01:15:13 +00:00
Matthew Hodgson
478af0f720 reshuffle todo & comments 2018-03-19 01:00:12 +00:00
Matthew Hodgson
366f730bf6 only get member state IDs for incremental syncs if we're filtering 2018-03-18 21:40:43 +00:00
kaiyou
757f1b5843 Merge remote-tracking branch 'upstream/master' into feat-dockerfile 2018-03-17 16:02:08 +01:00
Matthew Hodgson
0b56290f0b remove stale import 2018-03-16 01:45:49 +00:00
Matthew Hodgson
fc5397fdf5 remove debug 2018-03-16 01:44:55 +00:00
Matthew Hodgson
4f0493c850 fix tsm search again 2018-03-16 01:43:37 +00:00
Matthew Hodgson
f7dcc404f2 add state_ids for timeline entries 2018-03-16 01:37:53 +00:00
Matthew Hodgson
5b3b3aada8 simplify timeline_start_members 2018-03-16 01:17:34 +00:00
Erik Johnston
bf49d2dca8 Replace some ujson with simplejson to make it work 2018-03-16 00:55:44 +00:00
Matthew Hodgson
3bc5bd2d22 make incr syncs work 2018-03-16 00:52:04 +00:00
Richard van der Hoff
5a6e54264d Make 'unexpected logging context' into warnings
I think we've now fixed enough of these that the rest can be logged at
warning.
2018-03-15 18:40:38 +00:00
Krombel
91ea0202e6 move handling of auto_join_rooms to RegisterHandler
Currently the handling of auto_join_rooms only works when a user
registers via the public register API. Registrations via
registration_shared_secret and ModuleApi do not work.

This auto_joins the user in the registration handler, which enables
the auto-join feature for all three registration paths.

This is related to issue #2725

Signed-Off-by: Matthias Kesler <krombel@krombel.de>
2018-03-14 16:45:37 +01:00
Matthew Hodgson
056a6df546 Merge branch 'develop' into matthew/filter_members 2018-03-14 15:38:05 +00:00
Matthew Hodgson
9f77001e27 pep8 2018-03-14 00:07:47 +00:00
Matthew Hodgson
4d0cfef6ee add copyright to nudge CI 2018-03-14 00:02:20 +00:00
Matthew Hodgson
c9d72e4571 oops 2018-03-13 23:46:45 +00:00
Matthew Hodgson
ccca02846d make it work 2018-03-13 22:31:41 +00:00
Matthew Hodgson
f0f9a0605b remove comment now #2969 is fixed 2018-03-13 22:12:15 +00:00
Matthew Hodgson
12350e3f9a merge proper fix to bug 2969 2018-03-13 22:11:58 +00:00
Matthew Hodgson
14a9d2f73d ensure we always include the members for a given timeline block 2018-03-13 22:03:42 +00:00
Matthew Hodgson
afbf4d3dcc typoe 2018-03-13 19:48:04 +00:00
Matthew Hodgson
865377a70d disable optimisation for searching for state groups
when type filter includes wildcards on state_key
2018-03-13 19:46:04 +00:00
Matthew Hodgson
b2aba9e430 build where_clause sanely 2018-03-13 18:13:44 +00:00
Matthew Hodgson
52f7e23c72 PR feedbackz 2018-03-13 18:07:55 +00:00
Matthew Hodgson
1b1c137771 fix bug #2926 2018-03-13 17:52:52 +00:00
Matthew Hodgson
fdedcd1f4d correctly handle None state_keys
and fix include_other_types thinko
2018-03-12 01:39:06 +00:00
Matthew Hodgson
97c0496cfa fix sqlite where clause 2018-03-12 00:27:06 +00:00
Matthew Hodgson
8713365265 typos 2018-03-11 20:10:25 +00:00
Matthew Hodgson
9b334b3f97 WIP experiment in lazyloading room members 2018-03-11 20:01:41 +00:00
dklug
af7ed8e1ef Return 401 for invalid access_token on logout
Signed-off-by: Duncan Klug <dklug@ucmerced.edu>
2018-03-02 22:01:27 -08:00
kaiyou
f44b7c022f Disable logging to file and rely on the console when using Docker 2018-02-10 23:57:51 +01:00
kaiyou
07f1b71819 Explicitly provide the postgres password to synapse in the Compose example 2018-02-10 23:57:36 +01:00
kaiyou
b815aa0e2d Remove an accidentally committed test configuration 2018-02-10 21:59:58 +01:00
kaiyou
6f0b1f85f9 Generate macaroon and registration secrets, then store the results to the data dir 2018-02-10 00:05:03 +01:00
kaiyou
ca70148c05 Fix the path to the log config file 2018-02-09 00:23:19 +01:00
kaiyou
e511979fe6 Make SYNAPSE_MACAROON_SECRET_KEY a mandatory option 2018-02-09 00:13:26 +01:00
kaiyou
a03c382966 Specify the Docker registry for the postgres image 2018-02-08 22:00:43 +01:00
kaiyou
48e2c641b8 Specify the Docker registry in the build tag 2018-02-08 21:58:12 +01:00
kaiyou
d8680c969b Make it clear that the image has two modes of operation 2018-02-08 21:55:35 +01:00
kaiyou
b9b668e4bb Update to Alpine 3.7 and switch to libressl 2018-02-08 21:39:36 +01:00
kaiyou
ef1f8d4be6 Enable email server configuration from environment variables 2018-02-08 20:53:12 +01:00
kaiyou
a0af0054ec Honor the SYNAPSE_REPORT_STATS parameter in the Docker image 2018-02-08 20:46:11 +01:00
kaiyou
914a59cb8c Disable the Web client in the Docker image 2018-02-08 20:43:45 +01:00
kaiyou
e174c46a29 Use 'synapse' as a default postgres user in Docker examples 2018-02-08 20:42:57 +01:00
kaiyou
b8a4dceb3c Refactor the start script to better handle mandatory parameters 2018-02-08 20:41:41 +01:00
kaiyou
084afbb6a0 Rename the permissions variable to avoid confusion 2018-02-08 19:50:04 +01:00
kaiyou
58df3a8c5d Add some documentation about high performance storage 2018-02-08 19:48:53 +01:00
kaiyou
63fd148724 Make it clear that two modes are available in the documentation, improve the compose file 2018-02-08 19:46:11 +01:00
kaiyou
1ffd9cb936 Support loading application service files from /data/appservices/ 2018-02-05 23:13:27 +01:00
kaiyou
107a5c9441 Add the non-tls port to the expose list 2018-02-05 23:02:33 +01:00
kaiyou
ee3b160a2a Only generate configuration files when necessary 2018-02-05 22:57:35 +01:00
kaiyou
630573a932 Do not copy documentation files to the Docker root folder 2018-02-05 22:57:22 +01:00
kaiyou
f5364b47ec Point to the 'latest' tag in the Docker documentation 2018-02-05 22:14:40 +01:00
kaiyou
d8c7da5dca Fix a typo in the Docker README 2018-02-05 22:12:50 +01:00
kaiyou
cf4ef60e28 Document the cache factor environment variable for Docker 2018-02-05 22:10:03 +01:00
kaiyou
cd51931b62 Add dynamic TURN configuration in the Docker image 2018-02-05 21:53:53 +01:00
kaiyou
81010a126e Add dynamic recaptcha configuration in the Docker image 2018-02-05 21:28:15 +01:00
kaiyou
8db84e9b21 Remove docker related files from the python manifest 2018-02-05 20:08:35 +01:00
kaiyou
e9021e16c4 Run the server as an unprivileged user 2018-02-04 23:19:08 +01:00
kaiyou
f72c9c1fb6 Fix multiple typos 2018-02-04 16:18:40 +01:00
kaiyou
b8ab78b82c Add the build cache/ folder to gitignore 2018-02-04 15:41:54 +01:00
kaiyou
9a87b8aaf7 Update sumperdump Docker readme to match this image's properties 2018-02-04 15:27:32 +01:00
kaiyou
84a9209ba7 Remove etc/service files from rob's branch 2018-02-04 15:08:43 +01:00
kaiyou
53965334da Merge remote-tracking branch 'origin/rob/docker' into feat-dockerfile 2018-02-04 15:08:19 +01:00
kaiyou
a207cccb05 Reuse environment variables of the postgres container 2018-02-04 15:04:26 +01:00
kaiyou
1ba2fe114c Provide an example docker compose file 2018-02-04 12:55:20 +01:00
kaiyou
042757feb2 Install the postgres dependencies 2018-02-04 12:28:42 +01:00
kaiyou
886c2d5019 Support an external postgresql config in the Docker image 2018-02-04 12:20:29 +01:00
kaiyou
f2bf0cda02 Generate shared secrets if not defined in the environment 2018-02-04 11:40:20 +01:00
kaiyou
6d1e28a842 Generate any missing keys before starting synapse 2018-02-04 11:14:06 +01:00
kaiyou
48bc22f89d Allow for a wheel cache and include missing files in the build 2018-02-04 10:58:07 +01:00
kaiyou
d434ae3387 Add template config files for the Docker image 2018-02-03 20:30:08 +01:00
kaiyou
431476fbc4 Initial commit including a Dockerfile for synapse 2018-02-03 20:18:36 +01:00
Robert Swain
24d162814b docker: s/matrix-org/matrixdotorg/g 2017-09-29 11:40:15 +02:00
Robert Swain
95e02b856b docker: Initial Dockerfile and docker-compose.yaml 2017-09-28 12:12:47 +02:00
446 changed files with 28753 additions and 13510 deletions

48
.circleci/config.yml Normal file
View File

@@ -0,0 +1,48 @@
version: 2
jobs:
sytestpy2:
machine: true
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy2
- store_artifacts:
path: ~/project/logs
destination: logs
sytestpy2postgres:
machine: true
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy2
- store_artifacts:
path: ~/project/logs
destination: logs
sytestpy3:
machine: true
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs hawkowl/sytestpy3
- store_artifacts:
path: ~/project/logs
destination: logs
sytestpy3postgres:
machine: true
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy3
- store_artifacts:
path: ~/project/logs
destination: logs
workflows:
version: 2
build:
jobs:
- sytestpy2
- sytestpy2postgres
# Currently broken while the Python 3 port is incomplete
# - sytestpy3
# - sytestpy3postgres

8
.dockerignore Normal file
View File

@@ -0,0 +1,8 @@
Dockerfile
.travis.yml
.gitignore
demo/etc
tox.ini
synctl
.git/*
.tox/*

View File

@@ -27,8 +27,9 @@ Describe here the problem that you are experiencing, or the feature you are requ
Describe how what happens differs from what you expected.
If you can identify any relevant log snippets from _homeserver.log_, please include
those here (please be careful to remove any personal or private data):
<!-- If you can identify any relevant log snippets from _homeserver.log_, please include
those (please be careful to remove any personal or private data). Please surround them with
``` (three backticks, on a line on their own), so that they are formatted legibly. -->
### Version information

6
.gitignore vendored
View File

@@ -1,5 +1,6 @@
*.pyc
.*.swp
*~
.DS_Store
_trial_temp/
@@ -13,6 +14,7 @@ docs/build/
cmdclient_config.json
homeserver*.db
homeserver*.log
homeserver*.log.*
homeserver*.pid
homeserver*.yaml
@@ -32,6 +34,7 @@ demo/media_store.*
demo/etc
uploads
cache
.idea/
media_store/
@@ -39,6 +42,8 @@ media_store/
*.tac
build/
venv/
venv*/
localhost-800*/
static/client/register/register_config.js
@@ -48,3 +53,4 @@ env/
*.config
.vscode/
.ropeproject/

View File

@@ -1,14 +1,43 @@
sudo: false
language: python
python: 2.7
# tell travis to cache ~/.cache/pip
cache: pip
env:
- TOX_ENV=packaging
- TOX_ENV=pep8
- TOX_ENV=py27
before_script:
- git remote set-branches --add origin develop
- git fetch origin develop
services:
- postgresql
matrix:
fast_finish: true
include:
- python: 2.7
env: TOX_ENV=packaging
- python: 2.7
env: TOX_ENV=pep8
- python: 2.7
env: TOX_ENV=py27
- python: 2.7
env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
- python: 3.6
env: TOX_ENV=py36
- python: 3.6
env: TOX_ENV=check_isort
- python: 3.6
env: TOX_ENV=check-newsfragment
allow_failures:
- python: 2.7
env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
install:
- pip install tox

View File

@@ -60,3 +60,9 @@ Niklas Riekenbrauck <nikriek at gmail dot.com>
Christoph Witzany <christoph at web.crofting.com>
* Add LDAP support for authentication
Pierre Jaury <pierre at jaury.eu>
* Docker packaging
Serban Constantin <serban.constantin at gmail dot com>
* Small bug fix

2634
CHANGES.md Normal file

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -30,11 +30,11 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `Jenkins <http://matrix.org/jenkins>`_ and
We use `Jenkins <http://matrix.org/jenkins>`_ and
`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an administrator to start them. If your change
breaks the build, this will be shown in github, so please keep an eye on the
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an administrator to start them. If your change
breaks the build, this will be shown in github, so please keep an eye on the
pull request for feedback.
Code style
@@ -48,6 +48,26 @@ Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
Changelog
~~~~~~~~~
All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d``
directory, named in the format of ``PRnumber.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes). The content of
the file is your changelog entry, which can contain RestructuredText
formatting. A note of contributors is welcomed in changelogs for
non-misc changes (the content of misc changes is not displayed).
For example, a fix in PR #1234 would have its changelog entry in
``changelog.d/1234.bugfix``, and contain content like "The security levels of
Florbs are now validated when received over federation. Contributed by Jane
Matrix".
Attribution
~~~~~~~~~~~
@@ -105,16 +125,20 @@ the contribution or otherwise have the right to contribute it to Matrix::
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment::
Signed-off-by: Your Name <your@email.example.org>
...using your real name; unfortunately pseudonyms and anonymous contributions
can't be accepted. Git makes this trivial - just use the -s flag when you do
``git commit``, having first set ``user.name`` and ``user.email`` git configs
(which you should have done anyway :)
We accept contributions under a legally identifiable name, such as
your name on government documentation or common-law names (names
claimed by legitimate usage or repute). Unfortunately, we cannot
accept anonymous contributions at this time.
Git allows you to add this signoff automatically when using the ``-s``
flag to ``git commit``, which uses the name and email set in your
``user.name`` and ``user.email`` git configs.
Conclusion
~~~~~~~~~~

View File

@@ -2,6 +2,7 @@ include synctl
include LICENSE
include VERSION
include *.rst
include *.md
include demo/README
include demo/demo.tls.dh
include demo/*.py
@@ -25,7 +26,14 @@ recursive-include synapse/static *.js
exclude jenkins.sh
exclude jenkins*.sh
exclude jenkins*
exclude Dockerfile
exclude .dockerignore
recursive-exclude jenkins *.sh
include pyproject.toml
recursive-include changelog.d *
prune .github
prune demo/etc
prune docker
prune .circleci

View File

@@ -71,7 +71,7 @@ We'd like to invite you to join #matrix:matrix.org (via
https://matrix.org/docs/projects/try-matrix-now.html), run a homeserver, take a look
at the `Matrix spec <https://matrix.org/docs/spec>`_, and experiment with the
`APIs <https://matrix.org/docs/api>`_ and `Client SDKs
<http://matrix.org/docs/projects/try-matrix-now.html#client-sdks>`_.
<https://matrix.org/docs/projects/try-matrix-now.html#client-sdks>`_.
Thanks for using Matrix!
@@ -157,11 +157,19 @@ if you prefer.
In case of problems, please see the _`Troubleshooting` section below.
Alternatively, Silvio Fricke has contributed a Dockerfile to automate the
above in Docker at https://registry.hub.docker.com/u/silviof/docker-matrix/.
There is an official synapse image available at
https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with
the docker-compose file available at `contrib/docker <contrib/docker>`_. Further information on
this including configuration options is available in the README on
hub.docker.com.
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
Dockerfile to automate a synapse server in a single Docker image, at
https://hub.docker.com/r/avhost/docker-matrix/tags/
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
tested with VirtualBox/AWS/DigitalOcean - see
https://github.com/EMnify/matrix-synapse-auto-deploy
for details.
Configuring synapse
@@ -282,7 +290,7 @@ Connecting to Synapse from a client
The easiest way to try out your new Synapse installation is by connecting to it
from a web client. The easiest option is probably the one at
http://riot.im/app. You will need to specify a "Custom server" when you log on
https://riot.im/app. You will need to specify a "Custom server" when you log on
or register: set this to ``https://domain.tld`` if you setup a reverse proxy
following the recommended setup, or ``https://localhost:8448`` - remember to specify the
port (``:8448``) if not ``:443`` unless you changed the configuration. (Leave the identity
@@ -328,7 +336,7 @@ Security Note
=============
Matrix serves raw user generated data in some APIs - specifically the `content
repository endpoints <http://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid>`_.
repository endpoints <https://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid>`_.
Whilst we have tried to mitigate against possible XSS attacks (e.g.
https://github.com/matrix-org/synapse/pull/1021) we recommend running
@@ -347,7 +355,7 @@ Platform-Specific Instructions
Debian
------
Matrix provides official Debian packages via apt from http://matrix.org/packages/debian/.
Matrix provides official Debian packages via apt from https://matrix.org/packages/debian/.
Note that these packages do not include a client - choose one from
https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one of our SDKs :)
@@ -361,6 +369,19 @@ Synapse is in the Fedora repositories as ``matrix-synapse``::
Oleg Girko provides Fedora RPMs at
https://obs.infoserver.lv/project/monitor/matrix-synapse
OpenSUSE
--------
Synapse is in the OpenSUSE repositories as ``matrix-synapse``::
sudo zypper install matrix-synapse
SUSE Linux Enterprise Server
----------------------------
Unofficial packages are built for SLES 15 in the openSUSE:Backports:SLE-15 repository at
https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15/standard/
ArchLinux
---------
@@ -523,7 +544,7 @@ Troubleshooting Running
-----------------------
If synapse fails with ``missing "sodium.h"`` crypto errors, you may need
to manually upgrade PyNaCL, as synapse uses NaCl (http://nacl.cr.yp.to/) for
to manually upgrade PyNaCL, as synapse uses NaCl (https://nacl.cr.yp.to/) for
encryption and digital signatures.
Unfortunately PyNACL currently has a few issues
(https://github.com/pyca/pynacl/issues/53) and
@@ -614,6 +635,9 @@ should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
$ dig -t srv _matrix._tcp.example.com
_matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com.
Note that the server hostname cannot be an alias (CNAME record): it has to point
directly to the server hosting the synapse instance.
You can then configure your homeserver to use ``<yourdomain.com>`` as the domain in
its user-ids, by setting ``server_name``::
@@ -668,8 +692,8 @@ useful just for development purposes. See `<demo/README>`_.
Using PostgreSQL
================
As of Synapse 0.9, `PostgreSQL <http://www.postgresql.org>`_ is supported as an
alternative to the `SQLite <http://sqlite.org/>`_ database that Synapse has
As of Synapse 0.9, `PostgreSQL <https://www.postgresql.org>`_ is supported as an
alternative to the `SQLite <https://sqlite.org/>`_ database that Synapse has
traditionally used for convenience and simplicity.
The advantages of Postgres include:
@@ -693,7 +717,7 @@ Using a reverse proxy with Synapse
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
`HAProxy <http://www.haproxy.org/>`_ in front of Synapse. One advantage of
`HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.

1
changelog.d/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
!.gitignore

10
contrib/README.rst Normal file
View File

@@ -0,0 +1,10 @@
Community Contributions
=======================
Everything in this directory is a project submitted by the community that may be
useful to others. As such, the project maintainers cannot guarantee support,
stability or backwards compatibility of these projects.
Files in this directory should *not* be relied on directly, as they may not
continue to work or exist in future. If you wish to use any of these files then
they should be copied to avoid them breaking from underneath you.

41
contrib/docker/README.md Normal file
View File

@@ -0,0 +1,41 @@
# Synapse Docker
### Automated configuration
It is recommended that you use Docker Compose to run your containers, including
this image and a Postgres server. A sample ``docker-compose.yml`` is provided,
including example labels for reverse proxying and other artifacts.
Read the section about environment variables and set at least the mandatory variables,
then run the server:
```
docker-compose up -d
```
If secrets are not specified in the environment variables, they will be generated
as part of the startup. Please ensure these secrets are kept between launches of the
Docker container, as their loss may require users to log in again.
### Manual configuration
A sample ``docker-compose.yml`` is provided, including example labels for
reverse proxying and other artifacts. The docker-compose file is an example,
please comment/uncomment sections that are not suitable for your use case.
Specify a ``SYNAPSE_CONFIG_PATH``, preferably to a persistent path,
to use manual configuration. To generate a fresh ``homeserver.yaml``, simply run:
```
docker-compose run --rm -e SYNAPSE_SERVER_NAME=my.matrix.host synapse generate
```
Then, customize your configuration and run the server:
```
docker-compose up -d
```
### More information
For more information on required environment variables and mounts, see the main docker documentation at [/docker/README.md](../../docker/README.md)

View File

@@ -0,0 +1,50 @@
# This compose file is compatible with Compose itself; it might need some
# adjustments to run properly with stack.
version: '3'
services:
synapse:
build: ../..
image: docker.io/matrixdotorg/synapse:latest
# Since synapse does not retry connecting to the database, restart upon
# failure
restart: unless-stopped
# See the readme for a full documentation of the environment settings
environment:
- SYNAPSE_SERVER_NAME=my.matrix.host
- SYNAPSE_REPORT_STATS=no
- SYNAPSE_ENABLE_REGISTRATION=yes
- SYNAPSE_LOG_LEVEL=INFO
- POSTGRES_PASSWORD=changeme
volumes:
# You may either store all the files in a local folder
- ./files:/data
# .. or you may split this between different storage points
# - ./files:/data
# - /path/to/ssd:/data/uploads
# - /path/to/large_hdd:/data/media
depends_on:
- db
# In order to expose Synapse, remove one of the following, you might for
# instance expose the TLS port directly:
ports:
- 8448:8448/tcp
# ... or use a reverse proxy, here is an example for traefik:
labels:
- traefik.enable=true
- traefik.frontend.rule=Host:my.matrix.host
- traefik.port=8448
db:
image: docker.io/postgres:10-alpine
# Change that password, of course!
environment:
- POSTGRES_USER=synapse
- POSTGRES_PASSWORD=changeme
volumes:
# You may store the database tables in a local folder..
- ./schemas:/var/lib/postgresql/data
# .. or store them on some high performance storage for better results
# - /path/to/ssd/storage:/var/lib/postgresql/data

View File

@@ -0,0 +1,6 @@
# Using the Synapse Grafana dashboard
0. Set up Prometheus and Grafana. Out of scope for this readme. Useful documentation about using Grafana with Prometheus: http://docs.grafana.org/features/datasources/prometheus/
1. Have your Prometheus scrape your Synapse. https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst
2. Import dashboard into Grafana. Download `synapse.json`. Import it to Grafana and select the correct Prometheus datasource. http://docs.grafana.org/reference/export_import/
3. Set up additional recording rules

4969
contrib/grafana/synapse.json Normal file

File diff suppressed because it is too large

View File

@@ -22,6 +22,8 @@ import argparse
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
from six import string_types
def make_graph(file_name, room_id, file_prefix, limit):
print "Reading lines"
@@ -58,7 +60,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for key, value in unfreeze(event.get_dict()["content"]).items():
if value is None:
value = "<null>"
elif isinstance(value, basestring):
elif isinstance(value, string_types):
pass
else:
value = json.dumps(value)

View File

@@ -202,11 +202,11 @@ new PromConsole.Graph({
<h1>Requests</h1>
<h3>Requests by Servlet</h3>
<div id="synapse_http_server_requests_servlet"></div>
<div id="synapse_http_server_request_count_servlet"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet"),
expr: "rate(synapse_http_server_requests:servlet[2m])",
node: document.querySelector("#synapse_http_server_request_count_servlet"),
expr: "rate(synapse_http_server_request_count:servlet[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -215,11 +215,11 @@ new PromConsole.Graph({
})
</script>
<h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
<div id="synapse_http_server_requests_servlet_minus_events"></div>
<div id="synapse_http_server_request_count_servlet_minus_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet_minus_events"),
expr: "rate(synapse_http_server_requests:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
node: document.querySelector("#synapse_http_server_request_count_servlet_minus_events"),
expr: "rate(synapse_http_server_request_count:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -233,7 +233,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_time_avg"),
expr: "rate(synapse_http_server_response_time:total[2m]) / rate(synapse_http_server_response_time:count[2m]) / 1000",
expr: "rate(synapse_http_server_response_time_seconds[2m]) / rate(synapse_http_server_response_count[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -276,7 +276,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_ru_utime"),
expr: "rate(synapse_http_server_response_ru_utime:total[2m])",
expr: "rate(synapse_http_server_response_ru_utime_seconds[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -291,7 +291,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
expr: "rate(synapse_http_server_response_db_txn_duration:total[2m])",
expr: "rate(synapse_http_server_response_db_txn_duration_seconds[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -306,7 +306,7 @@ new PromConsole.Graph({
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_send_time_avg"),
expr: "rate(synapse_http_server_response_time:total{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_time:count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
expr: "rate(synapse_http_server_response_time_second{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,

View File

@@ -1,10 +1,10 @@
synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method)
synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet)
synapse_http_server_request_count:method{servlet=""} = sum(synapse_http_server_request_count) by (method)
synapse_http_server_request_count:servlet{method=""} = sum(synapse_http_server_request_count) by (servlet)
synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet)
synapse_http_server_request_count:total{servlet=""} = sum(synapse_http_server_request_count:by_method) by (servlet)
synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])

View File

@@ -5,19 +5,19 @@ groups:
expr: "sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)"
- record: "synapse_federation_transaction_queue_pendingPdus:total"
expr: "sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)"
- record: 'synapse_http_server_requests:method'
- record: 'synapse_http_server_request_count:method'
labels:
servlet: ""
expr: "sum(synapse_http_server_requests) by (method)"
- record: 'synapse_http_server_requests:servlet'
expr: "sum(synapse_http_server_request_count) by (method)"
- record: 'synapse_http_server_request_count:servlet'
labels:
method: ""
expr: 'sum(synapse_http_server_requests) by (servlet)'
expr: 'sum(synapse_http_server_request_count) by (servlet)'
- record: 'synapse_http_server_requests:total'
- record: 'synapse_http_server_request_count:total'
labels:
servlet: ""
expr: 'sum(synapse_http_server_requests:by_method) by (servlet)'
expr: 'sum(synapse_http_server_request_count:by_method) by (servlet)'
- record: 'synapse_cache:hit_ratio_5m'
expr: 'rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])'

View File

@@ -2,6 +2,9 @@
# (e.g. https://www.archlinux.org/packages/community/any/matrix-synapse/ for ArchLinux)
# rather than in a user home directory or similar under virtualenv.
# **NOTE:** This is an example service file that may change in the future. If you
# wish to use this please copy rather than symlink it.
[Unit]
Description=Synapse Matrix homeserver
@@ -12,6 +15,7 @@ Group=synapse
WorkingDirectory=/var/lib/synapse
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml
ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml
# EnvironmentFile=-/etc/sysconfig/synapse # Can be used to e.g. set SYNAPSE_CACHE_FACTOR
[Install]
WantedBy=multi-user.target

35
docker/Dockerfile Normal file
View File

@@ -0,0 +1,35 @@
FROM docker.io/python:2-alpine3.8
RUN apk add --no-cache --virtual .nacl_deps \
build-base \
libffi-dev \
libjpeg-turbo-dev \
libressl-dev \
libxslt-dev \
linux-headers \
postgresql-dev \
su-exec \
zlib-dev
COPY . /synapse
# A wheel cache may be provided in ./cache for faster build
RUN cd /synapse \
&& pip install --upgrade \
lxml \
pip \
psycopg2 \
setuptools \
&& mkdir -p /synapse/cache \
&& pip install -f /synapse/cache --upgrade --process-dependency-links . \
&& mv /synapse/docker/start.py /synapse/docker/conf / \
&& rm -rf \
setup.cfg \
setup.py \
synapse
VOLUME ["/data"]
EXPOSE 8008/tcp 8448/tcp
ENTRYPOINT ["/start.py"]

124
docker/README.md Normal file
View File

@@ -0,0 +1,124 @@
# Synapse Docker
This Docker image will run Synapse as a single process. It does not provide a database
server or a TURN server; you should run these separately.
## Run
We do not currently offer a `latest` image, as this has somewhat undefined semantics.
We instead release only tagged versions so upgrading between releases is entirely
within your control.
### Using docker-compose (easier)
This image is designed to run either with an automatically generated configuration
file or with a custom configuration that requires manual editing.
An easy way to make use of this image is via docker-compose. See the
[contrib/docker](../contrib/docker)
section of the synapse project for examples.
### Without Compose (harder)
If you do not wish to use Compose, you may still run this image using plain
Docker commands. Note that the following is just a guideline and you may need
to add parameters to the docker run command to account for the network situation
with your postgres database.
```
docker run \
-d \
--name synapse \
-v ${DATA_PATH}:/data \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
docker.io/matrixdotorg/synapse:latest
```
## Volumes
The image expects a single volume, located at ``/data``, that will hold:
* temporary files during uploads;
* uploaded media and thumbnails;
* the SQLite database if you do not configure postgres;
* the appservices configuration.
You are free to use separate volumes depending on storage endpoints at your
disposal. For instance, ``/data/media`` could be stored on a large but low
performance hdd storage while other files could be stored on high performance
endpoints.
In order to set up an application service, simply create an ``appservices``
directory in the data volume and write the application service Yaml
configuration file there. Multiple application services are supported.
## Environment
Unless you specify a custom path for the configuration file, a very generic
file will be generated, based on the following environment settings.
These are a good starting point for setting up your own deployment.
Global settings:
* ``UID``, the user id Synapse will run as [default 991]
* ``GID``, the group id Synapse will run as [default 991]
* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
then customize it manually. No other environment variable is required.
Otherwise, a dynamic configuration file will be used. The following environment
variables are available for configuration:
* ``SYNAPSE_SERVER_NAME`` (mandatory), the current server public hostname.
* ``SYNAPSE_REPORT_STATS``, (mandatory, ``yes`` or ``no``), enable anonymous
statistics reporting back to the Matrix project which helps us to get funding.
* ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
you run your own TLS-capable reverse proxy).
* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
the Synapse instance.
* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guests to join this server.
* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
* ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
key in order to enable recaptcha upon registration.
* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
key in order to enable recaptcha upon registration.
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
URIs to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
Shared secrets, which will be initialized to random values if not set:
* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
registration is disabled.
* ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
to the server.
Database specific values (will use SQLite if not set):
* `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
* `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
* `POSTGRES_USER` - The user for the synapse postgres database. [default: `matrix`]
Mail server specific values (will not send emails if not set):
* ``SYNAPSE_SMTP_HOST``, hostname of the mail server.
* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if any.
* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server if any.
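For a Compose-based variant of the above, a minimal sketch of a `docker-compose.yaml` tying these variables together might look like the following (the passwords, paths and postgres image here are illustrative placeholders; see [contrib/docker](../contrib/docker) for the maintained examples):
```
version: '3'
services:
  db:
    image: postgres:10
    environment:
      - POSTGRES_USER=synapse
      - POSTGRES_PASSWORD=changeme
      - POSTGRES_DB=synapse
  synapse:
    image: docker.io/matrixdotorg/synapse:<version>
    depends_on:
      - db
    environment:
      - SYNAPSE_SERVER_NAME=my.matrix.host
      - SYNAPSE_REPORT_STATS=yes
      - POSTGRES_USER=synapse
      - POSTGRES_PASSWORD=changeme
    volumes:
      - ./data:/data
```
Setting `POSTGRES_PASSWORD` is what switches the generated configuration from SQLite to postgres, and the default `POSTGRES_HOST` of `db` matches the service name above.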
## Build
Build the docker image with the `docker build` command from the root of the synapse repository.
```
docker build -t docker.io/matrixdotorg/synapse . -f docker/Dockerfile
```
The `-t` option sets the image tag. Official images are tagged `matrixdotorg/synapse:<version>` where `<version>` is the same as the release tag in the synapse git repository.
You may have a local Python wheel cache available, in which case copy the relevant
packages into the ``cache/`` directory at the root of the project.

219
docker/conf/homeserver.yaml Normal file

@@ -0,0 +1,219 @@
# vim:ft=yaml
## TLS ##
tls_certificate_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.crt"
tls_private_key_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.key"
tls_dh_params_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.dh"
no_tls: {{ "True" if SYNAPSE_NO_TLS else "False" }}
tls_fingerprints: []
## Server ##
server_name: "{{ SYNAPSE_SERVER_NAME }}"
pid_file: /homeserver.pid
web_client: False
soft_file_limit: 0
## Ports ##
listeners:
{% if not SYNAPSE_NO_TLS %}
-
port: 8448
bind_addresses: ['0.0.0.0']
type: http
tls: true
x_forwarded: false
resources:
- names: [client]
compress: true
- names: [federation] # Federation APIs
compress: false
{% endif %}
- port: 8008
tls: false
bind_addresses: ['0.0.0.0']
type: http
x_forwarded: false
resources:
- names: [client]
compress: true
- names: [federation]
compress: false
## Database ##
{% if POSTGRES_PASSWORD %}
database:
name: "psycopg2"
args:
user: "{{ POSTGRES_USER or "synapse" }}"
password: "{{ POSTGRES_PASSWORD }}"
database: "{{ POSTGRES_DB or "synapse" }}"
host: "{{ POSTGRES_HOST or "db" }}"
port: "{{ POSTGRES_PORT or "5432" }}"
cp_min: 5
cp_max: 10
{% else %}
database:
name: "sqlite3"
args:
database: "/data/homeserver.db"
{% endif %}
## Performance ##
event_cache_size: "{{ SYNAPSE_EVENT_CACHE_SIZE or "10K" }}"
verbose: 0
log_file: "/data/homeserver.log"
log_config: "/compiled/log.config"
## Ratelimiting ##
rc_messages_per_second: 0.2
rc_message_burst_count: 10.0
federation_rc_window_size: 1000
federation_rc_sleep_limit: 10
federation_rc_sleep_delay: 500
federation_rc_reject_limit: 50
federation_rc_concurrent: 3
## Files ##
media_store_path: "/data/media"
uploads_path: "/data/uploads"
max_upload_size: "10M"
max_image_pixels: "32M"
dynamic_thumbnails: false
# List of thumbnail to precalculate when an image is uploaded.
thumbnail_sizes:
- width: 32
height: 32
method: crop
- width: 96
height: 96
method: crop
- width: 320
height: 240
method: scale
- width: 640
height: 480
method: scale
- width: 800
height: 600
method: scale
url_preview_enabled: False
max_spider_size: "10M"
## Captcha ##
{% if SYNAPSE_RECAPTCHA_PUBLIC_KEY %}
recaptcha_public_key: "{{ SYNAPSE_RECAPTCHA_PUBLIC_KEY }}"
recaptcha_private_key: "{{ SYNAPSE_RECAPTCHA_PRIVATE_KEY }}"
enable_registration_captcha: True
recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
{% else %}
recaptcha_public_key: "YOUR_PUBLIC_KEY"
recaptcha_private_key: "YOUR_PRIVATE_KEY"
enable_registration_captcha: False
recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
{% endif %}
## Turn ##
{% if SYNAPSE_TURN_URIS %}
turn_uris:
{% for uri in SYNAPSE_TURN_URIS.split(',') %} - "{{ uri }}"
{% endfor %}
turn_shared_secret: "{{ SYNAPSE_TURN_SECRET }}"
turn_user_lifetime: "1h"
turn_allow_guests: True
{% else %}
turn_uris: []
turn_shared_secret: "YOUR_SHARED_SECRET"
turn_user_lifetime: "1h"
turn_allow_guests: True
{% endif %}
## Registration ##
enable_registration: {{ "True" if SYNAPSE_ENABLE_REGISTRATION else "False" }}
registration_shared_secret: "{{ SYNAPSE_REGISTRATION_SHARED_SECRET }}"
bcrypt_rounds: 12
allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
enable_group_creation: true
# The list of identity servers trusted to verify third party
# identifiers by this server.
trusted_third_party_id_servers:
- matrix.org
- vector.im
- riot.im
## Metrics ###
{% if SYNAPSE_REPORT_STATS.lower() == "yes" %}
enable_metrics: True
report_stats: True
{% else %}
enable_metrics: False
report_stats: False
{% endif %}
## API Configuration ##
room_invite_state_types:
- "m.room.join_rules"
- "m.room.canonical_alias"
- "m.room.avatar"
- "m.room.name"
{% if SYNAPSE_APPSERVICES %}
app_service_config_files:
{% for appservice in SYNAPSE_APPSERVICES %} - "{{ appservice }}"
{% endfor %}
{% else %}
app_service_config_files: []
{% endif %}
macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"
expire_access_token: False
## Signing Keys ##
signing_key_path: "/data/{{ SYNAPSE_SERVER_NAME }}.signing.key"
old_signing_keys: {}
key_refresh_interval: "1d" # 1 Day.
# The trusted servers to download signing keys from.
perspectives:
servers:
"matrix.org":
verify_keys:
"ed25519:auto":
key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"
password_config:
enabled: true
{% if SYNAPSE_SMTP_HOST %}
email:
enable_notifs: false
smtp_host: "{{ SYNAPSE_SMTP_HOST }}"
smtp_port: {{ SYNAPSE_SMTP_PORT or "25" }}
smtp_user: "{{ SYNAPSE_SMTP_USER }}"
smtp_pass: "{{ SYNAPSE_SMTP_PASSWORD }}"
require_transport_security: False
notif_from: "{{ SYNAPSE_SMTP_FROM or "hostmaster@" + SYNAPSE_SERVER_NAME }}"
app_name: Matrix
template_dir: res/templates
notif_template_html: notif_mail.html
notif_template_text: notif_mail.txt
notif_for_new_users: True
riot_base_url: "https://{{ SYNAPSE_SERVER_NAME }}"
{% endif %}

29
docker/conf/log.config Normal file

@@ -0,0 +1,29 @@
version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
request: ""
handlers:
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse:
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
root:
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
handlers: [console]

66
docker/start.py Executable file

@@ -0,0 +1,66 @@
#!/usr/local/bin/python
import jinja2
import os
import sys
import subprocess
import glob
# Utility functions
convert = lambda src, dst, environ: open(dst, "w").write(jinja2.Template(open(src).read()).render(**environ))
def check_arguments(environ, args):
for argument in args:
if argument not in environ:
print("Environment variable %s is mandatory, exiting." % argument)
sys.exit(2)
def generate_secrets(environ, secrets):
for name, secret in secrets.items():
if secret not in environ:
filename = "/data/%s.%s.key" % (environ["SYNAPSE_SERVER_NAME"], name)
if os.path.exists(filename):
with open(filename) as handle: value = handle.read()
else:
print("Generating a random secret for {}".format(name))
value = os.urandom(32).encode("hex")
with open(filename, "w") as handle: handle.write(value)
environ[secret] = value
# Prepare the configuration
mode = sys.argv[1] if len(sys.argv) > 1 else None
environ = os.environ.copy()
ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
args = ["python", "-m", "synapse.app.homeserver"]
# In generate mode, generate a configuration (and any missing keys), then exit
if mode == "generate":
check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS", "SYNAPSE_CONFIG_PATH"))
args += [
"--server-name", environ["SYNAPSE_SERVER_NAME"],
"--report-stats", environ["SYNAPSE_REPORT_STATS"],
"--config-path", environ["SYNAPSE_CONFIG_PATH"],
"--generate-config"
]
os.execv("/usr/local/bin/python", args)
# In normal mode, generate missing keys if any, then run synapse
else:
# Parse the configuration file
if "SYNAPSE_CONFIG_PATH" in environ:
args += ["--config-path", environ["SYNAPSE_CONFIG_PATH"]]
else:
check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"))
generate_secrets(environ, {
"registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
"macaroon": "SYNAPSE_MACAROON_SECRET_KEY"
})
environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
if not os.path.exists("/compiled"): os.mkdir("/compiled")
convert("/conf/homeserver.yaml", "/compiled/homeserver.yaml", environ)
convert("/conf/log.config", "/compiled/log.config", environ)
subprocess.check_output(["chown", "-R", ownership, "/data"])
args += ["--config-path", "/compiled/homeserver.yaml"]
# Generate missing keys and start synapse
subprocess.check_output(args + ["--generate-keys"])
os.execv("/sbin/su-exec", ["su-exec", ownership] + args)


@@ -0,0 +1,63 @@
Shared-Secret Registration
==========================
This API allows for the creation of users in an administrative and
non-interactive way. This is generally used for bootstrapping a Synapse
instance with administrator accounts.
To authenticate yourself to the server, you will need both the shared secret
(``registration_shared_secret`` in the homeserver configuration), and a
one-time nonce. If the registration shared secret is not configured, this API
is not enabled.
To fetch the nonce, you need to request one from the API::
> GET /_matrix/client/r0/admin/register
< {"nonce": "thisisanonce"}
Once you have the nonce, you can make a ``POST`` to the same URL with a JSON
body containing the nonce, username, password, whether they are an admin
(optional, False by default), and an HMAC digest of the content.
As an example::
> POST /_matrix/client/r0/admin/register
> {
"nonce": "thisisanonce",
"username": "pepper_roni",
"password": "pizza",
"admin": true,
"mac": "mac_digest_here"
}
< {
"access_token": "token_here",
"user_id": "@pepper_roni:localhost",
"home_server": "test",
"device_id": "device_id_here"
}
The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being
the shared secret and the content being the nonce, user, password, and either
the string "admin" or "notadmin", each separated by NULs. For an example of
generation in Python::
import hmac, hashlib
def generate_mac(nonce, user, password, admin=False):
mac = hmac.new(
key=shared_secret,
digestmod=hashlib.sha1,
)
mac.update(nonce.encode('utf8'))
mac.update(b"\x00")
mac.update(user.encode('utf8'))
mac.update(b"\x00")
mac.update(password.encode('utf8'))
mac.update(b"\x00")
mac.update(b"admin" if admin else b"notadmin")
return mac.hexdigest()
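Putting the pieces together, a complete registration round-trip might look something like the following sketch (this is not part of Synapse itself; it assumes the homeserver is reachable at ``http://localhost:8008`` and that the ``requests`` library is available)::

    import hmac, hashlib
    import requests

    base = "http://localhost:8008"                    # illustrative homeserver URL
    shared_secret = b"<registration_shared_secret>"   # from homeserver.yaml

    def generate_mac(nonce, user, password, admin=False):
        mac = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
        mac.update(nonce.encode('utf8'))
        mac.update(b"\x00")
        mac.update(user.encode('utf8'))
        mac.update(b"\x00")
        mac.update(password.encode('utf8'))
        mac.update(b"\x00")
        mac.update(b"admin" if admin else b"notadmin")
        return mac.hexdigest()

    # fetch a one-time nonce, then register with the HMAC over its contents
    nonce = requests.get(base + "/_matrix/client/r0/admin/register").json()["nonce"]
    resp = requests.post(base + "/_matrix/client/r0/admin/register", json={
        "nonce": nonce,
        "username": "pepper_roni",
        "password": "pizza",
        "admin": True,
        "mac": generate_mac(nonce, "pepper_roni", "pizza", admin=True),
    })
    print(resp.json())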


@@ -44,13 +44,26 @@ Deactivate Account
This API deactivates an account. It removes active access tokens, resets the
password, and deletes third-party IDs (to prevent the user requesting a
password reset).
password reset). It can also mark the user as GDPR-erased (stopping their data
from being distributed further, and deleting it entirely if there are no other
references to it).
The API is::
POST /_matrix/client/r0/admin/deactivate/<user_id>
including an ``access_token`` of a server admin, and an empty request body.
with a body of:
.. code:: json
{
"erase": true
}
including an ``access_token`` of a server admin.
The ``erase`` parameter is optional and defaults to ``false``.
An empty body may be passed for backwards compatibility.
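For example, a minimal sketch using Python's ``requests`` library (the URL, token and user id below are placeholders, not defaults)::

    import requests

    resp = requests.post(
        "http://localhost:8008/_matrix/client/r0/admin/deactivate/@user:example.com",
        params={"access_token": "<admin_access_token>"},
        # "erase" is optional; omit it (or send an empty body) to
        # deactivate without GDPR-erasing
        json={"erase": True},
    )
    print(resp.status_code, resp.json())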
Reset password


@@ -16,7 +16,7 @@
print("I am a fish %s" %
"moo")
and this::
and this::
print(
"I am a fish %s" %

160
docs/consent_tracking.md Normal file

@@ -0,0 +1,160 @@
Support in Synapse for tracking agreement to server terms and conditions
========================================================================
Synapse 0.30 introduces support for tracking whether users have agreed to the
terms and conditions set by the administrator of a server - and blocking access
to the server until they have.
There are several parts to this functionality; each requires some specific
configuration in `homeserver.yaml` to be enabled.
Note that various parts of the configuration and this document refer to the
"privacy policy": agreement with a privacy policy is one particular use of this
feature, but of course administrators can specify other terms and conditions
unrelated to "privacy" per se.
Collecting policy agreement from a user
---------------------------------------
Synapse can be configured to serve the user a simple policy form with an
"accept" button. Clicking "Accept" records the user's acceptance in the
database and shows a success page.
To enable this, first create templates for the policy and success pages.
These should be stored on the local filesystem.
These templates use the [Jinja2](http://jinja.pocoo.org) templating language,
and [docs/privacy_policy_templates](privacy_policy_templates) gives
examples of the sort of thing that can be done.
Note that the templates must be stored under a name giving the language of the
template - currently this must always be `en` (for "English");
internationalisation support is intended for the future.
The template for the policy itself should be versioned and named according to
the version: for example `1.0.html`. The version of the policy which the user
has agreed to is stored in the database.
Once the templates are in place, make the following changes to `homeserver.yaml`:
1. Add a `user_consent` section, which should look like:
```yaml
user_consent:
template_dir: privacy_policy_templates
version: 1.0
```
`template_dir` points to the directory containing the policy
templates. `version` defines the version of the policy which will be served
to the user. In the example above, Synapse will serve
`privacy_policy_templates/en/1.0.html`.
2. Add a `form_secret` setting at the top level:
```yaml
form_secret: "<unique secret>"
```
This should be set to an arbitrary secret string (try `pwgen -y 30` to
generate suitable secrets).
More on what this is used for below.
3. Add `consent` wherever the `client` resource is currently enabled in the
`listeners` configuration. For example:
```yaml
listeners:
- port: 8008
resources:
- names:
- client
- consent
```
Finally, ensure that `jinja2` is installed. If you are using a virtualenv, this
should be a matter of `pip install Jinja2`. On Debian, try `apt-get install
python-jinja2`.
Once this is complete, and the server has been restarted, try visiting
`https://<server>/_matrix/consent`. If correctly configured, this should give
an error "Missing string query parameter 'u'". It is now possible to manually
construct URIs where users can give their consent.
### Constructing the consent URI
It may be useful to manually construct the "consent URI" for a given user - for
instance, in order to send them an email asking them to consent. To do this,
take the base `https://<server>/_matrix/consent` URL and add the following
query parameters:
* `u`: the user id of the user. This can either be a full MXID
(`@user:server.com`) or just the localpart (`user`).
* `h`: hex-encoded HMAC-SHA256 of `u` using the `form_secret` as a key. It is
possible to calculate this on the commandline with something like:
```bash
echo -n '<user>' | openssl sha256 -hmac '<form_secret>'
```
This should result in a URI which looks something like:
`https://<server>/_matrix/consent?u=<user>&h=68a152465a4d...`.
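Equivalently, a short Python 3 sketch (the secret and server below are placeholders for your own values):

```python
import hmac, hashlib
from urllib.parse import quote

form_secret = b"<form_secret>"   # the form_secret from homeserver.yaml
user = "@user:server.com"        # or just the localpart, e.g. "user"

h = hmac.new(form_secret, user.encode('utf8'), hashlib.sha256).hexdigest()
print("https://<server>/_matrix/consent?u=%s&h=%s" % (quote(user), h))
```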
Sending users a server notice asking them to agree to the policy
----------------------------------------------------------------
It is possible to configure Synapse to send a [server
notice](server_notices.md) to anybody who has not yet agreed to the current
version of the policy. To do so:
* ensure that the consent resource is configured, as in the previous section
* ensure that server notices are configured, as in [server_notices.md](server_notices.md).
* Add `server_notice_content` under `user_consent` in `homeserver.yaml`. For
example:
```yaml
user_consent:
server_notice_content:
msgtype: m.text
body: >-
Please give your consent to the privacy policy at %(consent_uri)s.
```
Synapse automatically replaces the placeholder `%(consent_uri)s` with the
consent uri for that user.
* ensure that `public_baseurl` is set in `homeserver.yaml`, and gives the base
URI that clients use to connect to the server. (It is used to construct
`consent_uri` in the server notice.)
Blocking users from using the server until they agree to the policy
-------------------------------------------------------------------
Synapse can be configured to block any attempts to join rooms or send messages
until the user has given their agreement to the policy. (Joining the server
notices room is exempted from this).
To enable this, add `block_events_error` under `user_consent`. For example:
```yaml
user_consent:
block_events_error: >-
You can't send any messages until you consent to the privacy policy at
%(consent_uri)s.
```
Synapse automatically replaces the placeholder `%(consent_uri)s` with the
consent uri for that user.
As with the server notice above, ensure that `public_baseurl` is set in
`homeserver.yaml`, and gives the base URI that clients use to connect to the
server. (It is used to construct `consent_uri` in the error.)

43
docs/manhole.md Normal file

@@ -0,0 +1,43 @@
Using the synapse manhole
=========================
The "manhole" allows server administrators to access a Python shell on a running
Synapse installation. This is a very powerful mechanism for administration and
debugging.
To enable it, first uncomment the `manhole` listener configuration in
`homeserver.yaml`:
```yaml
listeners:
- port: 9000
bind_addresses: ['::1', '127.0.0.1']
type: manhole
```
(`bind_addresses` in the above is important: it ensures that access to the
manhole is only possible for local users).
Note that this will give administrative access to synapse to **all users** with
shell access to the server. It should therefore **not** be enabled in
environments where untrusted users have shell access.
Then restart synapse, and point an ssh client at port 9000 on localhost, using
the username `matrix`:
```bash
ssh -p9000 matrix@localhost
```
The password is `rabbithole`.
This gives a Python REPL in which `hs` gives access to the
`synapse.server.HomeServer` object - which in turn gives access to many other
parts of the process.
As a simple example, retrieving an event from the database:
```
>>> hs.get_datastore().get_event('$1416420717069yeQaw:matrix.org')
<Deferred at 0x7ff253fc6998 current result: <FrozenEvent event_id='$1416420717069yeQaw:matrix.org', type='m.room.create', state_key=''>>
```


@@ -1,25 +1,47 @@
How to monitor Synapse metrics using Prometheus
===============================================
1. Install prometheus:
1. Install Prometheus:
Follow instructions at http://prometheus.io/docs/introduction/install/
2. Enable synapse metrics:
2. Enable Synapse metrics:
Simply setting a (local) port number will enable it. Pick a port.
prometheus itself defaults to 9090, so starting just above that for
locally monitored services seems reasonable. E.g. 9092:
There are two methods of enabling metrics in Synapse.
Add to homeserver.yaml::
The first serves the metrics as a part of the usual web server and can be
enabled by adding the "metrics" resource to the existing listener as such::
metrics_port: 9092
resources:
- names:
- client
- metrics
Also ensure that ``enable_metrics`` is set to ``True``.
This provides a simple way of adding metrics to your Synapse installation,
and serves under ``/_synapse/metrics``. If you do not wish your metrics to be
publicly exposed, you will need to either filter them out at your load
balancer, or use the second method.
Restart synapse.
The second method runs the metrics server on a different port, in a
different thread to Synapse. This can make it more resilient when heavy load
would otherwise prevent metrics from being retrieved, and makes it easier to
expose the metrics to internal networks only. The served metrics are available
over HTTP only, and will be available at ``/``.
3. Add a prometheus target for synapse.
Add a new listener to homeserver.yaml::
listeners:
- type: metrics
port: 9000
bind_addresses:
- '0.0.0.0'
For both options, you will need to ensure that ``enable_metrics`` is set to
``True``.
Restart Synapse.
3. Add a Prometheus target for Synapse.
It needs to set the ``metrics_path`` to a non-default value (under ``scrape_configs``)::
@@ -31,7 +53,50 @@ How to monitor Synapse metrics using Prometheus
If your prometheus is older than 1.5.2, you will need to replace
``static_configs`` in the above with ``target_groups``.
Restart prometheus.
Restart Prometheus.
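As a quick sanity check that metrics are being served at all (a sketch assuming the first method above, with the ``metrics`` resource on the port-8008 listener, and the ``requests`` library available)::

    import requests

    body = requests.get("http://localhost:8008/_synapse/metrics").text
    # print the response-time histogram mentioned below
    for line in body.splitlines():
        if line.startswith("synapse_http_server_response_time"):
            print(line)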
Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------
The duplicated metrics deprecated in Synapse 0.27.0 have been removed.
All time duration-based metrics have been changed to be seconds. This affects:
+----------------------------------+
| msec -> sec metrics |
+==================================+
| python_gc_time |
+----------------------------------+
| python_twisted_reactor_tick_time |
+----------------------------------+
| synapse_storage_query_time |
+----------------------------------+
| synapse_storage_schedule_time |
+----------------------------------+
| synapse_storage_transaction_time |
+----------------------------------+
Several metrics have been changed to be histograms, which sort entries into
buckets and allow better analysis. The following metrics are now histograms:
+-------------------------------------------+
| Altered metrics |
+===========================================+
| python_gc_time |
+-------------------------------------------+
| python_twisted_reactor_pending_calls |
+-------------------------------------------+
| python_twisted_reactor_tick_time |
+-------------------------------------------+
| synapse_http_server_response_time_seconds |
+-------------------------------------------+
| synapse_storage_query_time |
+-------------------------------------------+
| synapse_storage_schedule_time |
+-------------------------------------------+
| synapse_storage_transaction_time |
+-------------------------------------------+
Block and response metrics renamed for 0.27.0


@@ -6,16 +6,22 @@ Postgres version 9.4 or later is known to work.
Set up database
===============
The PostgreSQL database used *must* have the correct encoding set, otherwise
Assuming your PostgreSQL database user is called ``postgres``, create a user
``synapse_user`` with::
su - postgres
createuser --pwprompt synapse_user
The PostgreSQL database used *must* have the correct encoding set, otherwise it
will not be able to store UTF8 strings. To create a database with the correct
encoding use, e.g.::
CREATE DATABASE synapse
ENCODING 'UTF8'
LC_COLLATE='C'
LC_CTYPE='C'
template=template0
OWNER synapse_user;
CREATE DATABASE synapse
ENCODING 'UTF8'
LC_COLLATE='C'
LC_CTYPE='C'
template=template0
OWNER synapse_user;
This would create an appropriate database named ``synapse`` owned by the
``synapse_user`` user (which must already exist).
@@ -46,8 +52,8 @@ As with Debian/Ubuntu, postgres support depends on the postgres python connector
Synapse config
==============
When you are ready to start using PostgreSQL, add the following line to your
config file::
When you are ready to start using PostgreSQL, edit the ``database`` section in
your config file to match the following lines::
database:
name: psycopg2
@@ -96,9 +102,12 @@ complete, restart synapse. For instance::
cp homeserver.db homeserver.db.snapshot
./synctl start
Assuming your new config file (as described in the section *Synapse config*)
is named ``homeserver-postgres.yaml`` and the SQLite snapshot is at
``homeserver.db.snapshot`` then simply run::
Copy the old config file into a new config file::
cp homeserver.yaml homeserver-postgres.yaml
Edit the database section as described in the section *Synapse config* above
and, with the SQLite snapshot located at ``homeserver.db.snapshot``, simply run::
synapse_port_db --sqlite-database homeserver.db.snapshot \
--postgres-config homeserver-postgres.yaml
@@ -117,6 +126,11 @@ run::
--postgres-config homeserver-postgres.yaml
Once that has completed, change the synapse config to point at the PostgreSQL
database configuration file ``homeserver-postgres.yaml`` (i.e. rename it to
``homeserver.yaml``) and restart synapse. Synapse should now be running against
PostgreSQL.
database configuration file ``homeserver-postgres.yaml``::
./synctl stop
mv homeserver.yaml homeserver-old-sqlite.yaml
mv homeserver-postgres.yaml homeserver.yaml
./synctl start
Synapse should now be running against PostgreSQL.


@@ -0,0 +1,23 @@
<!doctype html>
<html lang="en">
<head>
<title>Matrix.org Privacy policy</title>
</head>
<body>
{% if has_consented %}
<p>
Your base already belong to us.
</p>
{% else %}
<p>
All your base are belong to us.
</p>
<form method="post" action="consent">
<input type="hidden" name="v" value="{{version}}"/>
<input type="hidden" name="u" value="{{user}}"/>
<input type="hidden" name="h" value="{{userhmac}}"/>
<input type="submit" value="Sure thing!"/>
</form>
{% endif %}
</body>
</html>


@@ -0,0 +1,11 @@
<!doctype html>
<html lang="en">
<head>
<title>Matrix.org Privacy policy</title>
</head>
<body>
<p>
Sweet.
</p>
</body>
</html>

74
docs/server_notices.md Normal file
View File

@@ -0,0 +1,74 @@
Server Notices
==============
'Server Notices' are a new feature introduced in Synapse 0.30. They provide a
channel whereby server administrators can send messages to users on the server.
They are used as part of communication of server policies (see
[consent_tracking.md](consent_tracking.md)); however, the intention is that
they may also find a use for features such as "Message of the day".
This is a feature specific to Synapse, but it uses standard Matrix
communication mechanisms, so should work with any Matrix client.
User experience
---------------
When the user is first sent a server notice, they will get an invitation to a
room (typically called 'Server Notices', though this is configurable in
`homeserver.yaml`). They will be **unable to reject** this invitation -
attempts to do so will receive an error.
Once they accept the invitation, they will see the notice message in the room
history; it will appear to have come from the 'server notices user' (see
below).
The user is prevented from sending any messages in this room by the power
levels.
Having joined the room, the user can leave the room if they want. Subsequent
server notices will then cause a new room to be created.
Synapse configuration
---------------------
Server notices come from a specific user id on the server. Server
administrators are free to choose the user id - something like `server` is
suggested, meaning the notices will come from
`@server:<your_server_name>`. Once the Server Notices user is configured, that
user id becomes a special, privileged user, so administrators should ensure
that **it is not already allocated**.
In order to support server notices, it is necessary to add some configuration
to the `homeserver.yaml` file. In particular, you should add a `server_notices`
section, which should look like this:
```yaml
server_notices:
system_mxid_localpart: server
system_mxid_display_name: "Server Notices"
system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
room_name: "Server Notices"
```
The only compulsory setting is `system_mxid_localpart`, which defines the user
id of the Server Notices user, as above. `room_name` defines the name of the
room which will be created.
`system_mxid_display_name` and `system_mxid_avatar_url` can be used to set the
displayname and avatar of the Server Notices user.
Sending notices
---------------
As of the current version of synapse, there is no convenient interface for
sending notices (other than the automated ones sent as part of consent
tracking).
In the meantime, it is possible to test this feature using the manhole. Having
gone into the manhole as described in [manhole.md](manhole.md), a notice can be
sent with something like:
```
>>> hs.get_server_notices_manager().send_notice('@user:server.com', {'msgtype':'m.text', 'body':'foo'})
```


@@ -173,10 +173,23 @@ endpoints matching the following regular expressions::
^/_matrix/federation/v1/backfill/
^/_matrix/federation/v1/get_missing_events/
^/_matrix/federation/v1/publicRooms
^/_matrix/federation/v1/query/
^/_matrix/federation/v1/make_join/
^/_matrix/federation/v1/make_leave/
^/_matrix/federation/v1/send_join/
^/_matrix/federation/v1/send_leave/
^/_matrix/federation/v1/invite/
^/_matrix/federation/v1/query_auth/
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/exchange_third_party_invite/
^/_matrix/federation/v1/send/
The above endpoints should all be routed to the federation_reader worker by the
reverse-proxy configuration.
The `^/_matrix/federation/v1/send/` endpoint must only be handled by a single
instance.
``synapse.app.federation_sender``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -206,6 +219,10 @@ Handles client API endpoints. It can handle REST endpoints matching the
following regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
``synapse.app.user_dir``
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -224,6 +241,14 @@ regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/keys/upload
If ``use_presence`` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
This "stub" presence handler will pass through ``GET`` request but make the
``PUT`` effectively a no-op.
It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
file.
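For illustration, a frontend_proxy worker configuration might look something like the following sketch (the port, paths and addresses are placeholders, not defaults)::

    worker_app: synapse.app.frontend_proxy

    worker_listeners:
      - type: http
        port: 8009
        resources:
          - names: [client]

    # location of the main synapse process, for proxying unhandled requests
    worker_main_http_uri: http://127.0.0.1:8008

    worker_daemonize: True
    worker_pid_file: /path/to/frontend_proxy.pid
    worker_log_config: /path/to/frontend_proxy_log_config.yaml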


@@ -1,5 +1,7 @@
#! /bin/bash
set -eux
cd "`dirname $0`/.."
TOX_DIR=$WORKSPACE/.tox
@@ -14,7 +16,20 @@ fi
tox -e py27 --notest -v
TOX_BIN=$TOX_DIR/py27/bin
$TOX_BIN/pip install setuptools
# cryptography 2.2 requires setuptools >= 18.5.
#
# older versions of virtualenv (?) give us a virtualenv with the same version
# of setuptools as is installed on the system python (and tox runs virtualenv
# under python3, so we get the version of setuptools that is installed on that).
#
# anyway, make sure that we have a recent enough setuptools.
$TOX_BIN/pip install 'setuptools>=18.5'
# we also need a semi-recent version of pip, because old ones fail to install
# the "enum34" dependency of cryptography.
$TOX_BIN/pip install 'pip>=10'
{ python synapse/python_dependencies.py
echo lxml psycopg2
} | xargs $TOX_BIN/pip install

30
pyproject.toml Normal file

@@ -0,0 +1,30 @@
[tool.towncrier]
package = "synapse"
filename = "CHANGES.md"
directory = "changelog.d"
issue_format = "[\\#{issue}](https://github.com/matrix-org/synapse/issues/{issue})"
[[tool.towncrier.type]]
directory = "feature"
name = "Features"
showcontent = true
[[tool.towncrier.type]]
directory = "bugfix"
name = "Bugfixes"
showcontent = true
[[tool.towncrier.type]]
directory = "doc"
name = "Improved Documentation"
showcontent = true
[[tool.towncrier.type]]
directory = "removal"
name = "Deprecations and Removals"
showcontent = true
[[tool.towncrier.type]]
directory = "misc"
name = "Internal Changes"
showcontent = true


@@ -18,14 +18,22 @@
from __future__ import print_function
import argparse
from urlparse import urlparse, urlunparse
import nacl.signing
import json
import base64
import requests
import sys
from requests.adapters import HTTPAdapter
import srvlookup
import yaml
# uncomment the following to enable debug logging of http requests
#from httplib import HTTPConnection
#HTTPConnection.debuglevel = 1
def encode_base64(input_bytes):
"""Encode bytes as a base64 string without any padding."""
@@ -113,17 +121,6 @@ def read_signing_keys(stream):
return keys
def lookup(destination, path):
if ":" in destination:
return "https://%s%s" % (destination, path)
else:
try:
srv = srvlookup.lookup("matrix", "tcp", destination)[0]
return "https://%s:%d%s" % (srv.host, srv.port, path)
except:
return "https://%s:%d%s" % (destination, 8448, path)
def request_json(method, origin_name, origin_key, destination, path, content):
if method is None:
if content is None:
@@ -152,13 +149,19 @@ def request_json(method, origin_name, origin_key, destination, path, content):
authorization_headers.append(bytes(header))
print ("Authorization: %s" % header, file=sys.stderr)
dest = lookup(destination, path)
dest = "matrix://%s%s" % (destination, path)
print ("Requesting %s" % dest, file=sys.stderr)
result = requests.request(
s = requests.Session()
s.mount("matrix://", MatrixConnectionAdapter())
result = s.request(
method=method,
url=dest,
headers={"Authorization": authorization_headers[0]},
headers={
"Host": destination,
"Authorization": authorization_headers[0]
},
verify=False,
data=content,
)
@@ -242,5 +245,39 @@ def read_args_from_config(args):
args.signing_key_path = config['signing_key_path']
class MatrixConnectionAdapter(HTTPAdapter):
@staticmethod
def lookup(s):
if s[-1] == ']':
# ipv6 literal (with no port)
return s, 8448
if ":" in s:
out = s.rsplit(":",1)
try:
port = int(out[1])
except ValueError:
raise ValueError("Invalid host:port '%s'" % s)
return out[0], port
try:
srv = srvlookup.lookup("matrix", "tcp", s)[0]
return srv.host, srv.port
except:
return s, 8448
def get_connection(self, url, proxies=None):
parsed = urlparse(url)
(host, port) = self.lookup(parsed.netloc)
netloc = "%s:%d" % (host, port)
print("Connecting to %s" % (netloc,), file=sys.stderr)
url = urlunparse((
"https", netloc, parsed.path, parsed.params, parsed.query,
parsed.fragment,
))
return super(MatrixConnectionAdapter, self).get_connection(url, proxies)
if __name__ == "__main__":
main()


@@ -6,9 +6,19 @@
## Do not run it lightly.
set -e
if [ "$1" == "-h" ] || [ "$1" == "" ]; then
echo "Call with ROOM_ID as first option and then pipe it into the database. So for instance you might run"
echo " nuke-room-from-db.sh <room_id> | sqlite3 homeserver.db"
echo "or"
echo " nuke-room-from-db.sh <room_id> | psql --dbname=synapse"
exit
fi
ROOMID="$1"
sqlite3 homeserver.db <<EOF
cat <<EOF
DELETE FROM event_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_backward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_edges WHERE room_id = '$ROOMID';
@@ -29,7 +39,7 @@ DELETE FROM state_groups WHERE room_id = '$ROOMID';
DELETE FROM state_groups_state WHERE room_id = '$ROOMID';
DELETE FROM receipts_graph WHERE room_id = '$ROOMID';
DELETE FROM receipts_linearized WHERE room_id = '$ROOMID';
DELETE FROM event_search_content WHERE c1room_id = '$ROOMID';
DELETE FROM event_search WHERE room_id = '$ROOMID';
DELETE FROM guest_access WHERE room_id = '$ROOMID';
DELETE FROM history_visibility WHERE room_id = '$ROOMID';
DELETE FROM room_tags WHERE room_id = '$ROOMID';


@@ -26,11 +26,37 @@ import yaml
def request_registration(user, password, server_location, shared_secret, admin=False):
req = urllib2.Request(
"%s/_matrix/client/r0/admin/register" % (server_location,),
headers={'Content-Type': 'application/json'}
)
try:
if sys.version_info[:3] >= (2, 7, 9):
# As of version 2.7.9, urllib2 now checks SSL certs
import ssl
f = urllib2.urlopen(req, context=ssl.SSLContext(ssl.PROTOCOL_SSLv23))
else:
f = urllib2.urlopen(req)
body = f.read()
f.close()
nonce = json.loads(body)["nonce"]
except urllib2.HTTPError as e:
print "ERROR! Received %d %s" % (e.code, e.reason,)
if 400 <= e.code < 500:
if e.info().type == "application/json":
resp = json.load(e)
if "error" in resp:
print resp["error"]
sys.exit(1)
mac = hmac.new(
key=shared_secret,
digestmod=hashlib.sha1,
)
mac.update(nonce)
mac.update("\x00")
mac.update(user)
mac.update("\x00")
mac.update(password)
@@ -40,10 +66,10 @@ def request_registration(user, password, server_location, shared_secret, admin=F
mac = mac.hexdigest()
data = {
"user": user,
"nonce": nonce,
"username": user,
"password": password,
"mac": mac,
"type": "org.matrix.login.shared_secret",
"admin": admin,
}
@@ -52,7 +78,7 @@ def request_registration(user, password, server_location, shared_secret, admin=F
print "Sending registration request..."
req = urllib2.Request(
"%s/_matrix/client/api/v1/register" % (server_location,),
"%s/_matrix/client/r0/admin/register" % (server_location,),
data=json.dumps(data),
headers={'Content-Type': 'application/json'}
)


@@ -30,6 +30,8 @@ import time
import traceback
import yaml
from six import string_types
logger = logging.getLogger("synapse_port_db")
@@ -574,7 +576,7 @@ class Porter(object):
def conv(j, col):
if j in bool_cols:
return bool(col)
elif isinstance(col, basestring) and "\0" in col:
elif isinstance(col, string_types) and "\0" in col:
logger.warn("DROPPING ROW: NUL value in table %s col %s: %r", table, headers[j], col)
raise BadValueException();
return col


@@ -14,7 +14,26 @@ ignore =
pylint.cfg
tox.ini
[flake8]
[pep8]
max-line-length = 90
# W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it.
ignore = W503
# W503 requires that binary operators be at the end, not start, of lines. Erik
# doesn't like it. E203 is contrary to PEP8.
ignore = W503,E203
[flake8]
# note that flake8 inherits the "ignore" settings from "pep8" (because it uses
# pep8 to do those checks), but not the "max-line-length" setting
max-line-length = 90
[isort]
line_length = 89
not_skip = __init__.py
sections=FUTURE,STDLIB,COMPAT,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
default_section=THIRDPARTY
known_first_party = synapse
known_tests=tests
known_compat = mock,six
known_twisted=twisted,OpenSSL
multi_line_output=3
include_trailing_comma=true
combine_as_imports=true


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,4 +17,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.27.2"
__version__ = "0.33.3"


@@ -15,15 +15,19 @@
import logging
from six import itervalues
import pymacaroons
from netaddr import IPAddress
from twisted.internet import defer
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, Codes
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, Codes, ResourceLimitError
from synapse.types import UserID
from synapse.util.caches import register_cache, CACHE_SIZE_FACTOR
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
@@ -57,16 +61,17 @@ class Auth(object):
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("token_cache", self.token_cache)
register_cache("cache", "token_cache", self.token_cache)
@defer.inlineCallbacks
def check_from_context(self, event, context, do_sig_check=True):
prev_state_ids = yield context.get_prev_state_ids(self.store)
auth_events_ids = yield self.compute_auth_events(
event, context.prev_state_ids, for_verification=True,
event, prev_state_ids, for_verification=True,
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
(e.type, e.state_key): e for e in auth_events.values()
(e.type, e.state_key): e for e in itervalues(auth_events)
}
self.check(event, auth_events=auth_events, do_sig_check=do_sig_check)
@@ -189,7 +194,7 @@ class Auth(object):
synapse.types.create_requester(user_id, app_service=app_service)
)
access_token = get_access_token_from_request(
access_token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
@@ -204,11 +209,11 @@ class Auth(object):
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
"User-Agent",
default=[""]
)[0]
b"User-Agent",
default=[b""]
)[0].decode('ascii', 'surrogateescape')
if user and access_token and ip_addr:
self.store.insert_client_ip(
yield self.store.insert_client_ip(
user_id=user.to_string(),
access_token=access_token,
ip=ip_addr,
@@ -235,17 +240,22 @@ class Auth(object):
@defer.inlineCallbacks
def _get_appservice_user_id(self, request):
app_service = self.store.get_app_service_by_token(
get_access_token_from_request(
self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
)
if app_service is None:
defer.returnValue((None, None))
if "user_id" not in request.args:
if app_service.ip_range_whitelist:
ip_address = IPAddress(self.hs.get_ip_from_request(request))
if ip_address not in app_service.ip_range_whitelist:
defer.returnValue((None, None))
if b"user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
user_id = request.args["user_id"][0]
user_id = request.args[b"user_id"][0].decode('utf8')
if app_service.sender == user_id:
defer.returnValue((app_service.sender, app_service))
@@ -486,7 +496,7 @@ class Auth(object):
def _look_up_user_by_access_token(self, token):
ret = yield self.store.get_user_by_access_token(token)
if not ret:
logger.warn("Unrecognised access token - not in store: %s" % (token,))
logger.warn("Unrecognised access token - not in store.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN
@@ -504,12 +514,12 @@ class Auth(object):
def get_appservice_by_req(self, request):
try:
token = get_access_token_from_request(
token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token: %s" % (token,))
logger.warn("Unrecognised appservice access token.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
@@ -535,7 +545,8 @@ class Auth(object):
@defer.inlineCallbacks
def add_auth_events(self, builder, context):
auth_ids = yield self.compute_auth_events(builder, context.prev_state_ids)
prev_state_ids = yield context.get_prev_state_ids(self.store)
auth_ids = yield self.compute_auth_events(builder, prev_state_ids)
auth_events_entries = yield self.store.add_event_hashes(
auth_ids
@@ -653,7 +664,7 @@ class Auth(object):
auth_events[(EventTypes.PowerLevels, "")] = power_level_event
send_level = event_auth.get_send_level(
EventTypes.Aliases, "", auth_events
EventTypes.Aliases, "", power_level_event,
)
user_level = event_auth.get_user_power_level(user_id, auth_events)
@@ -664,67 +675,134 @@ class Auth(object):
" edit its room list entry"
)
@staticmethod
def has_access_token(request):
"""Checks if the request has an access_token.
def has_access_token(request):
"""Checks if the request has an access_token.
Returns:
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get(b"access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
Returns:
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
return bool(query_params) or bool(auth_headers)
@staticmethod
def get_access_token_from_request(request, token_not_found_http_status=401):
"""Extracts the access_token from the request.
Args:
request: The http request.
token_not_found_http_status(int): The HTTP status code to set in the
AuthError if the token isn't found. This is used in some of the
legacy APIs to change the status code to 403 from the default of
401 since some of the old clients depended on auth errors returning
403.
Returns:
unicode: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
"""
def get_access_token_from_request(request, token_not_found_http_status=401):
"""Extracts the access_token from the request.
Args:
request: The http request.
token_not_found_http_status(int): The HTTP status code to set in the
AuthError if the token isn't found. This is used in some of the
legacy APIs to change the status code to 403 from the default of
401 since some of the old clients depended on auth errors returning
403.
Returns:
str: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
"""
auth_headers = request.requestHeaders.getRawHeaders("Authorization")
query_params = request.args.get("access_token")
if auth_headers:
# Try the get the access_token from a "Authorization: Bearer"
# header
if query_params is not None:
raise AuthError(
token_not_found_http_status,
"Mixing Authorization headers and access_token query parameters.",
errcode=Codes.MISSING_TOKEN,
)
if len(auth_headers) > 1:
raise AuthError(
token_not_found_http_status,
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
parts = auth_headers[0].split(" ")
if parts[0] == "Bearer" and len(parts) == 2:
return parts[1]
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
query_params = request.args.get(b"access_token")
if auth_headers:
# Try to get the access_token from an "Authorization: Bearer"
# header
if query_params is not None:
raise AuthError(
token_not_found_http_status,
"Mixing Authorization headers and access_token query parameters.",
errcode=Codes.MISSING_TOKEN,
)
if len(auth_headers) > 1:
raise AuthError(
token_not_found_http_status,
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
parts = auth_headers[0].split(b" ")
if parts[0] == b"Bearer" and len(parts) == 2:
return parts[1].decode('ascii')
else:
raise AuthError(
token_not_found_http_status,
"Invalid Authorization header.",
errcode=Codes.MISSING_TOKEN,
)
else:
raise AuthError(
token_not_found_http_status,
"Invalid Authorization header.",
errcode=Codes.MISSING_TOKEN,
# Try to get the access_token from the query params.
if not query_params:
raise AuthError(
token_not_found_http_status,
"Missing access token.",
errcode=Codes.MISSING_TOKEN
)
return query_params[0].decode('ascii')
@defer.inlineCallbacks
def check_in_room_or_world_readable(self, room_id, user_id):
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.
Returns:
Deferred[tuple[str, str|None]]: Resolves to the current membership of
the user in the room and the membership event ID of the user. If
the user is not in the room and never has been, then
`(Membership.JOIN, None)` is returned.
"""
try:
# check_user_was_in_room will return the most recent membership
# event for the user if:
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.check_user_was_in_room(room_id, user_id)
defer.returnValue((member_event.membership, member_event.event_id))
except AuthError:
visibility = yield self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
)
else:
# Try to get the access_token from the query params.
if not query_params:
if (
visibility and
visibility.content["history_visibility"] == "world_readable"
):
defer.returnValue((Membership.JOIN, None))
return
raise AuthError(
token_not_found_http_status,
"Missing access token.",
errcode=Codes.MISSING_TOKEN
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
)
return query_params[0]
@defer.inlineCallbacks
def check_auth_blocking(self, user_id=None):
"""Checks if the user should be rejected for some external reason,
such as monthly active user limiting or global disable flag
Args:
user_id(str|None): If present, checks for presence against existing
MAU cohort
"""
if self.hs.config.hs_disabled:
raise ResourceLimitError(
403, self.hs.config.hs_disabled_message,
errcode=Codes.RESOURCE_LIMIT_EXCEED,
admin_uri=self.hs.config.admin_uri,
limit_type=self.hs.config.hs_disabled_limit_type
)
if self.hs.config.limit_usage_by_mau is True:
# If the user is already part of the MAU cohort
if user_id:
timestamp = yield self.store.user_last_seen_monthly_active(user_id)
if timestamp:
return
# Else if there is no room in the MAU bucket, bail
current_mau = yield self.store.get_monthly_active_count()
if current_mau >= self.hs.config.max_mau_value:
raise ResourceLimitError(
403, "Monthly Active User Limit Exceeded",
admin_uri=self.hs.config.admin_uri,
errcode=Codes.RESOURCE_LIMIT_EXCEED,
limit_type="monthly_active_user"
)


@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,6 +17,9 @@
"""Contains constants from the specification."""
# the "depth" field on events is limited to 2**63 - 1
MAX_DEPTH = 2**63 - 1
class Membership(object):
@@ -73,6 +77,8 @@ class EventTypes(object):
Topic = "m.room.topic"
Name = "m.room.name"
ServerACL = "m.room.server_acl"
class RejectedReason(object):
AUTH_ERROR = "auth_error"
@@ -89,3 +95,11 @@ class RoomCreationPreset(object):
class ThirdPartyEntityKind(object):
USER = "user"
LOCATION = "location"
# the version we will give rooms which are created on this server
DEFAULT_ROOM_VERSION = "1"
# vdh-test-version is a placeholder to get room versioning support working and tested
# until we have a working v2.
KNOWN_ROOM_VERSIONS = {"1", "vdh-test-version"}


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -17,7 +18,10 @@
import logging
import simplejson as json
from six import iteritems
from six.moves import http_client
from canonicaljson import json
logger = logging.getLogger(__name__)
@@ -50,6 +54,11 @@ class Codes(object):
THREEPID_DENIED = "M_THREEPID_DENIED"
INVALID_USERNAME = "M_INVALID_USERNAME"
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
CONSENT_NOT_GIVEN = "M_CONSENT_NOT_GIVEN"
CANNOT_LEAVE_SERVER_NOTICE_ROOM = "M_CANNOT_LEAVE_SERVER_NOTICE_ROOM"
RESOURCE_LIMIT_EXCEED = "M_RESOURCE_LIMIT_EXCEED"
UNSUPPORTED_ROOM_VERSION = "M_UNSUPPORTED_ROOM_VERSION"
INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
class CodeMessageException(RuntimeError):
@@ -64,20 +73,6 @@ class CodeMessageException(RuntimeError):
self.code = code
self.msg = msg
def error_dict(self):
return cs_error(self.msg)
class MatrixCodeMessageException(CodeMessageException):
"""An error from a general matrix endpoint, eg. from a proxied Matrix API call.
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN):
super(MatrixCodeMessageException, self).__init__(code, msg)
self.errcode = errcode
class SynapseError(CodeMessageException):
"""A base exception type for matrix errors which have an errcode and error
@@ -103,38 +98,54 @@ class SynapseError(CodeMessageException):
self.errcode,
)
@classmethod
def from_http_response_exception(cls, err):
"""Make a SynapseError based on an HTTPResponseException
This is useful when a proxied request has failed, and we need to
decide how to map the failure onto a matrix error to send back to the
client.
class ProxiedRequestError(SynapseError):
"""An error from a general matrix endpoint, eg. from a proxied Matrix API call.
An attempt is made to parse the body of the http response as a matrix
error. If that succeeds, the errcode and error message from the body
are used as the errcode and error message in the new synapse error.
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN, additional_fields=None):
super(ProxiedRequestError, self).__init__(
code, msg, errcode
)
if additional_fields is None:
self._additional_fields = {}
else:
self._additional_fields = dict(additional_fields)
Otherwise, the errcode is set to M_UNKNOWN, and the error message is
set to the reason code from the HTTP response.
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
**self._additional_fields
)
class ConsentNotGivenError(SynapseError):
"""The error returned to the client when the user has not consented to the
privacy policy.
"""
def __init__(self, msg, consent_uri):
"""Constructs a ConsentNotGivenError
Args:
err (HttpResponseException):
Returns:
SynapseError:
msg (str): The human-readable error message
consent_url (str): The URL where the user can give their consent
"""
# try to parse the body as json, to get better errcode/msg, but
# default to M_UNKNOWN with the HTTP status as the error text
try:
j = json.loads(err.response)
except ValueError:
j = {}
errcode = j.get('errcode', Codes.UNKNOWN)
errmsg = j.get('error', err.msg)
super(ConsentNotGivenError, self).__init__(
code=http_client.FORBIDDEN,
msg=msg,
errcode=Codes.CONSENT_NOT_GIVEN
)
self._consent_uri = consent_uri
res = SynapseError(err.code, errmsg, errcode)
return res
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
consent_uri=self._consent_uri
)
class RegistrationError(SynapseError):
@@ -220,6 +231,30 @@ class AuthError(SynapseError):
super(AuthError, self).__init__(*args, **kwargs)
class ResourceLimitError(SynapseError):
"""
Any error raised when there is a problem with resource usage.
For instance, the monthly active user limit for the server has been exceeded
"""
def __init__(
self, code, msg,
errcode=Codes.RESOURCE_LIMIT_EXCEED,
admin_uri=None,
limit_type=None,
):
self.admin_uri = admin_uri
self.limit_type = limit_type
super(ResourceLimitError, self).__init__(code, msg, errcode=errcode)
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
admin_uri=self.admin_uri,
limit_type=self.limit_type
)
class EventSizeError(SynapseError):
"""An error raised when an event is too big."""
@@ -277,12 +312,25 @@ class LimitExceededError(SynapseError):
)
def cs_exception(exception):
if isinstance(exception, CodeMessageException):
return exception.error_dict()
else:
logger.error("Unknown exception type: %s", type(exception))
return {}
class IncompatibleRoomVersionError(SynapseError):
"""A server is trying to join a room whose version it does not support."""
def __init__(self, room_version):
super(IncompatibleRoomVersionError, self).__init__(
code=400,
msg="Your homeserver does not support the features required to "
"join this room",
errcode=Codes.INCOMPATIBLE_ROOM_VERSION,
)
self._room_version = room_version
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
room_version=self._room_version,
)
def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
@@ -291,13 +339,13 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
Args:
msg (str): The error message.
code (int): The error code.
code (str): The error code.
kwargs : Additional keys to add to the response.
Returns:
A dict representing the error response JSON.
"""
err = {"error": msg, "errcode": code}
for key, value in kwargs.iteritems():
for key, value in iteritems(kwargs):
err[key] = value
return err
@@ -341,7 +389,7 @@ class HttpResponseException(CodeMessageException):
Represents an HTTP-level failure of an outbound request
Attributes:
response (str): body of response
response (bytes): body of response
"""
def __init__(self, code, msg, response):
"""
@@ -349,7 +397,39 @@ class HttpResponseException(CodeMessageException):
Args:
code (int): HTTP status code
msg (str): reason phrase from HTTP response status line
response (str): body of response
response (bytes): body of response
"""
super(HttpResponseException, self).__init__(code, msg)
self.response = response
def to_synapse_error(self):
"""Make a SynapseError based on an HTTPResponseException
This is useful when a proxied request has failed, and we need to
decide how to map the failure onto a matrix error to send back to the
client.
An attempt is made to parse the body of the http response as a matrix
error. If that succeeds, the errcode and error message from the body
are used as the errcode and error message in the new synapse error.
Otherwise, the errcode is set to M_UNKNOWN, and the error message is
set to the reason code from the HTTP response.
Returns:
SynapseError:
"""
# try to parse the body as json, to get better errcode/msg, but
# default to M_UNKNOWN with the HTTP status as the error text
try:
j = json.loads(self.response)
except ValueError:
j = {}
if not isinstance(j, dict):
j = {}
errcode = j.pop('errcode', Codes.UNKNOWN)
errmsg = j.pop('error', self.msg)
return ProxiedRequestError(self.code, errmsg, errcode, j)
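A hedged sketch of the new helper in action, constructing the exception by hand (the status line and response body here are invented):

    from synapse.api.errors import HttpResponseException

    e = HttpResponseException(
        429, "Too Many Requests",
        b'{"errcode": "M_LIMIT_EXCEEDED", "error": "Too fast", "retry_after_ms": 2000}',
    )
    err = e.to_synapse_error()
    # The body parsed as a matrix error, so its errcode/error win:
    assert err.errcode == "M_LIMIT_EXCEEDED"
    assert err.msg == "Too fast"
    # Leftover keys (retry_after_ms) survive via ProxiedRequestError.error_dict().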

View File

@@ -12,14 +12,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.errors import SynapseError
from synapse.storage.presence import UserPresenceState
from synapse.types import UserID, RoomID
import jsonschema
from canonicaljson import json
from jsonschema import FormatChecker
from twisted.internet import defer
import simplejson as json
import jsonschema
from jsonschema import FormatChecker
from synapse.api.errors import SynapseError
from synapse.storage.presence import UserPresenceState
from synapse.types import RoomID, UserID
FILTER_SCHEMA = {
"additionalProperties": False,
@@ -112,7 +113,13 @@ ROOM_EVENT_FILTER_SCHEMA = {
},
"contains_url": {
"type": "boolean"
}
},
"lazy_load_members": {
"type": "boolean"
},
"include_redundant_members": {
"type": "boolean"
},
}
}
@@ -260,6 +267,12 @@ class FilterCollection(object):
def ephemeral_limit(self):
return self._room_ephemeral_filter.limit()
def lazy_load_members(self):
return self._room_state_filter.lazy_load_members()
def include_redundant_members(self):
return self._room_state_filter.include_redundant_members()
def filter_presence(self, events):
return self._presence_filter.filter(events)
@@ -411,11 +424,17 @@ class Filter(object):
return room_ids
def filter(self, events):
return filter(self.check, events)
return list(filter(self.check, events))
def limit(self):
return self.filter_json.get("limit", 10)
def lazy_load_members(self):
return self.filter_json.get("lazy_load_members", False)
def include_redundant_members(self):
return self.filter_json.get("include_redundant_members", False)
def _matches_wildcard(actual_value, filter_value):
if filter_value.endswith("*"):

View File

@@ -72,7 +72,7 @@ class Ratelimiter(object):
return allowed, time_allowed
def prune_message_counts(self, time_now_s):
for user_id in self.message_counts.keys():
for user_id in list(self.message_counts.keys()):
message_count, time_start, msg_rate_hz = (
self.message_counts[user_id]
)
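The list() copy matters on Python 3, where dict.keys() is a live view over the dict; a minimal reproduction, independent of Synapse:

    counts = {"@a:hs": 1, "@b:hs": 5}
    # Without list(), deleting during iteration raises RuntimeError on Python 3:
    for user_id in list(counts.keys()):
        if counts[user_id] < 2:
            del counts[user_id]  # safe: we iterate over a snapshot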

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,6 +15,12 @@
# limitations under the License.
"""Contains the URL paths to prefix various aspects of the server with. """
import hmac
from hashlib import sha256
from six.moves.urllib.parse import urlencode
from synapse.config import ConfigError
CLIENT_PREFIX = "/_matrix/client/api/v1"
CLIENT_V2_ALPHA_PREFIX = "/_matrix/client/v2_alpha"
@@ -25,3 +32,46 @@ SERVER_KEY_PREFIX = "/_matrix/key/v1"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_PREFIX = "/_matrix/media/r0"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
class ConsentURIBuilder(object):
def __init__(self, hs_config):
"""
Args:
hs_config (synapse.config.homeserver.HomeServerConfig):
"""
if hs_config.form_secret is None:
raise ConfigError(
"form_secret not set in config",
)
if hs_config.public_baseurl is None:
raise ConfigError(
"public_baseurl not set in config",
)
self._hmac_secret = hs_config.form_secret.encode("utf-8")
self._public_baseurl = hs_config.public_baseurl
def build_user_consent_uri(self, user_id):
"""Build a URI which we can give to the user to do their privacy
policy consent
Args:
user_id (str): mxid or username of user
Returns
(str) the URI where the user can do consent
"""
mac = hmac.new(
key=self._hmac_secret,
msg=user_id,
digestmod=sha256,
).hexdigest()
consent_uri = "%s_matrix/consent?%s" % (
self._public_baseurl,
urlencode({
"u": user_id,
"h": mac
}),
)
return consent_uri
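A sketch of the matching check the consent resource can perform on the `h` query parameter (the verifier function is an assumption, not part of this diff):

    import hmac
    from hashlib import sha256

    def check_consent_hmac(form_secret, user_id, received_mac):
        # Recompute the digest the same way build_user_consent_uri does
        # (encoding user_id here for Python 3; the code above passes the
        # string directly, which works on Python 2).
        expected = hmac.new(
            key=form_secret.encode("utf-8"),
            msg=user_id.encode("utf-8"),
            digestmod=sha256,
        ).hexdigest()
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(expected, received_mac)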

View File

@@ -14,9 +14,11 @@
# limitations under the License.
import sys
from synapse import python_dependencies # noqa: E402
sys.dont_write_bytecode = True
from synapse import python_dependencies # noqa: E402
try:
python_dependencies.check_requirements()

View File

@@ -17,15 +17,18 @@ import gc
import logging
import sys
from daemonize import Daemonize
from twisted.internet import error, reactor
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
try:
import affinity
except Exception:
affinity = None
from daemonize import Daemonize
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
from twisted.internet import error, reactor
logger = logging.getLogger(__name__)
@@ -124,7 +127,20 @@ def quit_with_error(error_string):
sys.exit(1)
def listen_tcp(bind_addresses, port, factory, backlog=50):
def listen_metrics(bind_addresses, port):
"""
Start Prometheus metrics server.
"""
from synapse.metrics import RegistryProxy
from prometheus_client import start_http_server
for host in bind_addresses:
reactor.callInThread(start_http_server, int(port),
addr=host, registry=RegistryProxy)
logger.info("Metrics now reporting on %s:%d", host, port)
def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
"""
Create a TCP socket for a port and several addresses
"""
@@ -140,7 +156,9 @@ def listen_tcp(bind_addresses, port, factory, backlog=50):
check_bind_error(e, address, bind_addresses)
def listen_ssl(bind_addresses, port, factory, context_factory, backlog=50):
def listen_ssl(
bind_addresses, port, factory, context_factory, reactor=reactor, backlog=50
):
"""
Create an SSL socket for a port and several addresses
"""

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -23,6 +26,7 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
@@ -32,11 +36,9 @@ from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.appservice")
@@ -62,7 +64,7 @@ class AppserviceServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
@@ -74,6 +76,7 @@ class AppserviceServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -93,6 +96,13 @@ class AppserviceServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -107,14 +117,20 @@ class ASReplicationHandler(ReplicationClientHandler):
super(ASReplicationHandler, self).__init__(hs.get_datastore())
self.appservice_handler = hs.get_application_service_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
yield super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
if stream_name == "events":
max_stream_id = self.store.get_room_max_stream_ordering()
preserve_fn(
self.appservice_handler.notify_interested_services
)(max_stream_id)
run_in_background(self._notify_app_services, max_stream_id)
@defer.inlineCallbacks
def _notify_app_services(self, room_stream_id):
try:
yield self.appservice_handler.notify_interested_services(room_stream_id)
except Exception:
logger.exception("Error notifying application services of event")
def start(config_options):

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -25,8 +28,10 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.directory import DirectoryStore
@@ -34,29 +39,34 @@ from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import PublicRoomListRestServlet
from synapse.rest.client.v1.room import (
JoinedRoomMemberListRestServlet,
PublicRoomListRestServlet,
RoomEventContextServlet,
RoomMemberListRestServlet,
RoomStateRestServlet,
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.client_reader")
class ClientReaderSlavedStore(
SlavedAccountDataStore,
SlavedEventStore,
SlavedKeyStore,
RoomStore,
DirectoryStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
TransactionStore,
SlavedTransactionStore,
SlavedClientIpStore,
BaseSlavedStore,
):
@@ -77,10 +87,16 @@ class ClientReaderServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
PublicRoomListRestServlet(self).register(resource)
RoomMemberListRestServlet(self).register(resource)
JoinedRoomMemberListRestServlet(self).register(resource)
RoomStateRestServlet(self).register(resource)
RoomEventContextServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
@@ -98,6 +114,7 @@ class ClientReaderServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -117,7 +134,13 @@ class ClientReaderServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -145,11 +168,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = ClientReaderServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -25,6 +28,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -39,11 +43,13 @@ from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import (
RoomSendEventRestServlet, RoomMembershipRestServlet, RoomStateEventRestServlet,
JoinRoomAliasServlet,
RoomMembershipRestServlet,
RoomSendEventRestServlet,
RoomStateEventRestServlet,
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
@@ -51,15 +57,13 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.event_creator")
class EventCreatorSlavedStore(
DirectoryStore,
TransactionStore,
SlavedTransactionStore,
SlavedProfileStore,
SlavedAccountDataStore,
SlavedPusherStore,
@@ -90,7 +94,7 @@ class EventCreatorServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
RoomSendEventRestServlet(self).register(resource)
@@ -114,6 +118,7 @@ class EventCreatorServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -133,6 +138,13 @@ class EventCreatorServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -162,11 +174,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = EventCreatorServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.urls import FEDERATION_PREFIX
@@ -26,13 +29,20 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
@@ -40,18 +50,22 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.federation_reader")
class FederationReaderSlavedStore(
SlavedAccountDataStore,
SlavedProfileStore,
SlavedApplicationServiceStore,
SlavedPusherStore,
SlavedPushRuleStore,
SlavedReceiptsStore,
SlavedEventStore,
SlavedKeyStore,
RoomStore,
DirectoryStore,
TransactionStore,
SlavedTransactionStore,
BaseSlavedStore,
):
pass
@@ -71,7 +85,7 @@ class FederationReaderServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
@@ -87,6 +101,7 @@ class FederationReaderServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -106,6 +121,13 @@ class FederationReaderServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -133,11 +155,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = FederationReaderServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -25,6 +28,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
@@ -32,23 +36,21 @@ from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import Linearizer
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.federation_sender")
class FederationSenderSlaveStore(
SlavedDeviceInboxStore, TransactionStore, SlavedReceiptsStore, SlavedEventStore,
SlavedDeviceInboxStore, SlavedTransactionStore, SlavedReceiptsStore, SlavedEventStore,
SlavedRegistrationStore, SlavedDeviceStore, SlavedPresenceStore,
):
def __init__(self, db_conn, hs):
@@ -89,7 +91,7 @@ class FederationSenderServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
@@ -101,6 +103,7 @@ class FederationSenderServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -120,6 +123,13 @@ class FederationSenderServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -134,8 +144,9 @@ class FederationSenderReplicationHandler(ReplicationClientHandler):
super(FederationSenderReplicationHandler, self).__init__(hs.get_datastore())
self.send_handler = FederationSenderHandler(hs, self)
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(FederationSenderReplicationHandler, self).on_rdata(
yield super(FederationSenderReplicationHandler, self).on_rdata(
stream_name, token, rows
)
self.send_handler.process_replication_rows(stream_name, token, rows)
@@ -176,11 +187,13 @@ def start(config_options):
config.send_federation = True
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ps = FederationSenderServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
@@ -229,7 +242,7 @@ class FederationSenderHandler(object):
# presence, typing, etc.
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
preserve_fn(self.update_token)(token)
run_in_background(self.update_token, token)
# We also need to poke the federation sender when new events happen
elif stream_name == "events":
@@ -237,19 +250,22 @@ class FederationSenderHandler(object):
@defer.inlineCallbacks
def update_token(self, token):
self.federation_position = token
try:
self.federation_position = token
# We linearize here to ensure we don't have races updating the token
with (yield self._fed_position_linearizer.queue(None)):
if self._last_ack < self.federation_position:
yield self.store.update_federation_out_pos(
"federation", self.federation_position
)
# We linearize here to ensure we don't have races updating the token
with (yield self._fed_position_linearizer.queue(None)):
if self._last_ack < self.federation_position:
yield self.store.update_federation_out_pos(
"federation", self.federation_position
)
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(self.federation_position)
self._last_ack = self.federation_position
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(self.federation_position)
self._last_ack = self.federation_position
except Exception:
logger.exception("Error updating federation stream position")
if __name__ == '__main__':

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.errors import SynapseError
@@ -25,10 +28,9 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.servlet import (
RestServlet, parse_json_object_from_request,
)
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -36,6 +38,7 @@ from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.base import ClientV1RestServlet, client_path_patterns
from synapse.rest.client.v2_alpha._base import client_v2_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
@@ -43,12 +46,39 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.frontend_proxy")
class PresenceStatusStubServlet(ClientV1RestServlet):
PATTERNS = client_path_patterns("/presence/(?P<user_id>[^/]*)/status")
def __init__(self, hs):
super(PresenceStatusStubServlet, self).__init__(hs)
self.http_client = hs.get_simple_http_client()
self.auth = hs.get_auth()
self.main_uri = hs.config.worker_main_http_uri
@defer.inlineCallbacks
def on_GET(self, request, user_id):
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
headers = {
"Authorization": auth_headers,
}
result = yield self.http_client.get_json(
self.main_uri + request.uri,
headers=headers,
)
defer.returnValue((200, result))
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
yield self.auth.get_user_by_req(request)
defer.returnValue((200, {}))
class KeyUploadServlet(RestServlet):
PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$")
@@ -90,7 +120,7 @@ class KeyUploadServlet(RestServlet):
# They're actually trying to upload something, proxy to main synapse.
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
headers = {
"Authorization": auth_headers,
}
@@ -131,10 +161,16 @@ class FrontendProxyServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
# If presence is disabled, use the stub servlet that does
# not allow sending presence
if not self.config.use_presence:
PresenceStatusStubServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
@@ -152,7 +188,9 @@ class FrontendProxyServer(HomeServer):
site_tag,
listener_config,
root_resource,
)
self.version_string,
),
reactor=self.get_reactor()
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -171,6 +209,13 @@ class FrontendProxyServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -200,11 +245,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = FrontendProxyServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,

View File

@@ -18,27 +18,44 @@ import logging
import os
import sys
from six import iteritems
from prometheus_client import Gauge
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \
LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \
STATIC_PREFIX, WEB_CLIENT_PREFIX
from synapse.api.urls import (
CONTENT_REPO_PREFIX,
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,
SERVER_KEY_PREFIX,
SERVER_KEY_V2_PREFIX,
STATIC_PREFIX,
WEB_CLIENT_PREFIX,
)
from synapse.app import _base
from synapse.app._base import quit_with_error, listen_ssl, listen_tcp
from synapse.app._base import listen_ssl, listen_tcp, quit_with_error
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.module_api import ModuleApi
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import register_memory_metrics
from synapse.metrics import RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
check_requirements
from synapse.replication.http import ReplicationRestResource, REPLICATION_PREFIX
from synapse.module_api import ModuleApi
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, check_requirements
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.key.v1.server_key_resource import LocalKey
@@ -55,11 +72,6 @@ from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
logger = logging.getLogger("synapse.app.homeserver")
@@ -140,6 +152,7 @@ class SynapseHomeServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
),
self.tls_server_context_factory,
)
@@ -153,6 +166,7 @@ class SynapseHomeServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
logger.info("Synapse now listening on port %d", port)
@@ -182,6 +196,15 @@ class SynapseHomeServer(HomeServer):
"/_matrix/client/versions": client_resource,
})
if name == "consent":
from synapse.rest.consent.consent_resource import ConsentResource
consent_resource = ConsentResource(self)
if compress:
consent_resource = gz_wrap(consent_resource)
resources.update({
"/_matrix/consent": consent_resource,
})
if name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
@@ -219,7 +242,7 @@ class SynapseHomeServer(HomeServer):
resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
if name == "metrics" and self.get_config().enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
@@ -252,6 +275,13 @@ class SynapseHomeServer(HomeServer):
reactor.addSystemEventTrigger(
"before", "shutdown", server_listener.stopListening,
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -272,6 +302,11 @@ class SynapseHomeServer(HomeServer):
quit_with_error(e.message)
# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
max_mau_gauge = Gauge("synapse_admin_mau:max", "MAU Limit")
def setup(config_options):
"""
Args:
@@ -300,14 +335,10 @@ def setup(config_options):
# check any extra requirements we have now we have a config
check_requirements(config)
version_string = "Synapse/" + get_version_string(synapse)
logger.info("Server hostname: %s", config.server_name)
logger.info("Server version: %s", version_string)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
database_engine = create_engine(config.database_config)
config.database_config["args"]["cp_openfun"] = database_engine.on_new_connection
@@ -316,8 +347,9 @@ def setup(config_options):
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string=version_string,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
@@ -351,8 +383,6 @@ def setup(config_options):
hs.get_datastore().start_doing_background_updates()
hs.get_federation_client().start_get_pdu_cache()
register_memory_metrics(hs)
reactor.callWhenRunning(start)
return hs
@@ -407,6 +437,9 @@ def run(hs):
# currently either 0 or 1
stats_process = []
def start_phone_stats_home():
return run_as_background_process("phone_stats_home", phone_stats_home)
@defer.inlineCallbacks
def phone_stats_home():
logger.info("Gathering stats for reporting")
@@ -423,6 +456,10 @@ def run(hs):
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
for name, count in iteritems(daily_user_type_results):
stats["daily_user_type_" + name] = count
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
@@ -431,7 +468,7 @@ def run(hs):
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in r30_results.iteritems():
for name, count in iteritems(r30_results):
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
@@ -473,9 +510,42 @@ def run(hs):
" changes across releases."
)
def generate_user_daily_visit_stats():
return run_as_background_process(
"generate_user_daily_visits",
hs.get_datastore().generate_user_daily_visits,
)
# Rather than update on per session basis, batch up the requests.
# If you increase the loop period, the accuracy of user_daily_visits
# table will decrease
clock.looping_call(generate_user_daily_visit_stats, 5 * 60 * 1000)
# monthly active user limiting functionality
clock.looping_call(
hs.get_datastore().reap_monthly_active_users, 1000 * 60 * 60
)
hs.get_datastore().reap_monthly_active_users()
@defer.inlineCallbacks
def generate_monthly_active_users():
count = 0
if hs.config.limit_usage_by_mau:
count = yield hs.get_datastore().get_monthly_active_count()
current_mau_gauge.set(float(count))
max_mau_gauge.set(float(hs.config.max_mau_value))
hs.get_datastore().initialise_reserved_users(
hs.config.mau_limits_reserved_threepids
)
generate_monthly_active_users()
if hs.config.limit_usage_by_mau:
clock.looping_call(generate_monthly_active_users, 5 * 60 * 1000)
# End of monthly active user settings
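A minimal standalone sketch of the gauge pattern above (metric names shortened so they do not clash with the ones registered in this file):

    from prometheus_client import Gauge

    current_gauge = Gauge("example_mau_current", "Current MAU")
    max_gauge = Gauge("example_mau_max", "MAU Limit")

    def refresh_mau_gauges(count, limit):
        # Gauges hold the latest sampled values; Prometheus scrapes
        # whatever was most recently set.
        current_gauge.set(float(count))
        max_gauge.set(float(limit))

    refresh_mau_gauges(42, 100)  # illustrative values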
if hs.config.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)
clock.looping_call(start_phone_stats_home, 3 * 60 * 60 * 1000)
# We need to defer this init for the cases that we daemonize
# otherwise the process ID we get is that of the non-daemon process
@@ -483,7 +553,7 @@ def run(hs):
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(5 * 60, phone_stats_home)
clock.call_later(5 * 60, start_phone_stats_home)
if hs.config.daemonize and hs.config.print_pidfile:
print (hs.config.pid_file)

View File

@@ -16,23 +16,25 @@
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.urls import (
CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
)
from synapse.api.urls import CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
@@ -42,8 +44,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.media_repository")
@@ -52,7 +52,7 @@ class MediaRepositorySlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
TransactionStore,
SlavedTransactionStore,
BaseSlavedStore,
MediaRepositoryStore,
):
@@ -73,7 +73,7 @@ class MediaRepositoryServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "media":
media_repo = self.get_media_repository_resource()
resources.update({
@@ -94,6 +94,7 @@ class MediaRepositoryServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -113,6 +114,13 @@ class MediaRepositoryServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -147,11 +155,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = MediaRepositoryServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -23,6 +26,7 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -33,11 +37,9 @@ from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.pusher")
@@ -92,7 +94,7 @@ class PusherServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
@@ -104,6 +106,7 @@ class PusherServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -123,6 +126,13 @@ class PusherServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -138,26 +148,30 @@ class PusherReplicationHandler(ReplicationClientHandler):
self.pusher_pool = hs.get_pusherpool()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
preserve_fn(self.poke_pushers)(stream_name, token, rows)
yield super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.poke_pushers, stream_name, token, rows)
@defer.inlineCallbacks
def poke_pushers(self, stream_name, token, rows):
if stream_name == "pushers":
for row in rows:
if row.deleted:
yield self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
yield self.pusher_pool.on_new_notifications(
token, token,
)
elif stream_name == "receipts":
yield self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
try:
if stream_name == "pushers":
for row in rows:
if row.deleted:
yield self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
self.pusher_pool.on_new_notifications(
token, token,
)
elif stream_name == "receipts":
self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
except Exception:
logger.exception("Error poking pushers")
def stop_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)

View File

@@ -17,6 +17,11 @@ import contextlib
import logging
import sys
from six import iteritems
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse.api.constants import EventTypes
from synapse.app import _base
@@ -26,6 +31,7 @@ from synapse.config.logger import setup_logging
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -35,12 +41,12 @@ from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
@@ -49,14 +55,11 @@ from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.presence import UserPresenceState
from synapse.storage.roommember import RoomMemberStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.synchrotron")
@@ -77,9 +80,7 @@ class SynchrotronSlavedStore(
RoomStore,
BaseSlavedStore,
):
did_forget = (
RoomMemberStore.__dict__["did_forget"]
)
pass
UPDATE_SYNCING_USERS_MS = 10 * 1000
@@ -113,7 +114,10 @@ class SynchrotronPresence(object):
logger.info("Presence process_id is %r", self.process_id)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
self.hs.get_tcp_replication().send_user_sync(user_id, is_syncing, last_sync_ms)
if self.hs.config.use_presence:
self.hs.get_tcp_replication().send_user_sync(
user_id, is_syncing, last_sync_ms
)
def mark_as_coming_online(self, user_id):
"""A user has started syncing. Send a UserSync to the master, unless they
@@ -210,10 +214,13 @@ class SynchrotronPresence(object):
yield self.notify_from_replication(states, stream_id)
def get_currently_syncing_users(self):
return [
user_id for user_id, count in self.user_to_num_current_syncs.iteritems()
if count > 0
]
if self.hs.config.use_presence:
return [
user_id for user_id, count in iteritems(self.user_to_num_current_syncs)
if count > 0
]
else:
return set()
class SynchrotronTyping(object):
@@ -255,7 +262,7 @@ class SynchrotronServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
sync.register_servlets(self, resource)
@@ -279,6 +286,7 @@ class SynchrotronServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -298,6 +306,13 @@ class SynchrotronServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -323,10 +338,10 @@ class SyncReplicationHandler(ReplicationClientHandler):
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
preserve_fn(self.process_and_notify)(stream_name, token, rows)
yield super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.process_and_notify, stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(SyncReplicationHandler, self).get_streams_to_replicate()
@@ -338,55 +353,58 @@ class SyncReplicationHandler(ReplicationClientHandler):
@defer.inlineCallbacks
def process_and_notify(self, stream_name, token, rows):
if stream_name == "events":
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
event = yield self.store.get_event(row.event_id)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
max_token = self.store.get_room_max_stream_ordering()
self.notifier.on_new_room_event(
event, token, max_token, extra_users
)
elif stream_name == "push_rules":
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows],
)
elif stream_name in ("account_data", "tag_account_data",):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows],
)
elif stream_name == "receipts":
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "typing":
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "to_device":
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
try:
if stream_name == "events":
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
event = yield self.store.get_event(row.event_id)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
max_token = self.store.get_room_max_stream_ordering()
self.notifier.on_new_room_event(
event, token, max_token, extra_users
)
elif stream_name == "push_rules":
self.notifier.on_new_event(
"to_device_key", token, users=entities,
"push_rules_key", token, users=[row.user_id for row in rows],
)
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = yield self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event(
"device_list_key", token, rooms=all_room_ids,
)
elif stream_name == "presence":
yield self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows],
)
elif stream_name in ("account_data", "tag_account_data",):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows],
)
elif stream_name == "receipts":
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "typing":
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows],
)
elif stream_name == "to_device":
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event(
"to_device_key", token, users=entities,
)
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = yield self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event(
"device_list_key", token, rooms=all_room_ids,
)
elif stream_name == "presence":
yield self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows],
)
except Exception:
logger.exception("Error processing replication")
def start(config_options):

View File

@@ -16,16 +16,19 @@
import argparse
import collections
import errno
import glob
import os
import os.path
import signal
import subprocess
import sys
import yaml
import errno
import time
from six import iteritems
import yaml
SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
GREEN = "\x1b[1;32m"
@@ -171,6 +174,10 @@ def main():
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
cache_factors = config.get("synctl_cache_factors", {})
for cache_name, factor in iteritems(cache_factors):
os.environ["SYNAPSE_CACHE_FACTOR_" + cache_name.upper()] = str(factor)
worker_configfiles = []
if options.worker:
start_stop_synapse = False

View File

@@ -17,6 +17,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -26,6 +29,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -39,11 +43,9 @@ from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.user_dir")
@@ -105,7 +107,7 @@ class UserDirectoryServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
user_directory.register_servlets(self, resource)
@@ -126,6 +128,7 @@ class UserDirectoryServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -145,6 +148,13 @@ class UserDirectoryServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -159,12 +169,20 @@ class UserDirectoryReplicationHandler(ReplicationClientHandler):
super(UserDirectoryReplicationHandler, self).__init__(hs.get_datastore())
self.user_directory = hs.get_user_directory_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(UserDirectoryReplicationHandler, self).on_rdata(
yield super(UserDirectoryReplicationHandler, self).on_rdata(
stream_name, token, rows
)
if stream_name == "current_state_deltas":
preserve_fn(self.user_directory.notify_new_event)()
run_in_background(self._notify_directory)
@defer.inlineCallbacks
def _notify_directory(self):
try:
yield self.user_directory.notify_new_event()
except Exception:
logger.exception("Error notifiying user directory of state update")
def start(config_options):
@@ -197,11 +215,13 @@ def start(config_options):
config.update_user_directory = True
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ps = UserDirectoryServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,

View File

@@ -12,14 +12,16 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.constants import EventTypes
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.types import GroupID, get_domain_from_id
import logging
import re
from six import string_types
from twisted.internet import defer
import logging
import re
from synapse.api.constants import EventTypes
from synapse.types import GroupID, get_domain_from_id
from synapse.util.caches.descriptors import cachedInlineCallbacks
logger = logging.getLogger(__name__)
@@ -83,7 +85,8 @@ class ApplicationService(object):
NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
def __init__(self, token, hostname, url=None, namespaces=None, hs_token=None,
sender=None, id=None, protocols=None, rate_limited=True):
sender=None, id=None, protocols=None, rate_limited=True,
ip_range_whitelist=None):
self.token = token
self.url = url
self.hs_token = hs_token
@@ -91,6 +94,7 @@ class ApplicationService(object):
self.server_name = hostname
self.namespaces = self._check_namespaces(namespaces)
self.id = id
self.ip_range_whitelist = ip_range_whitelist
if "|" in self.id:
raise Exception("application service ID cannot contain '|' character")
@@ -146,7 +150,7 @@ class ApplicationService(object):
)
regex = regex_obj.get("regex")
if isinstance(regex, basestring):
if isinstance(regex, string_types):
regex_obj["regex"] = re.compile(regex) # Pre-compile regex
else:
raise ValueError(
@@ -290,4 +294,8 @@ class ApplicationService(object):
return self.rate_limited
def __str__(self):
return "ApplicationService: %s" % (self.__dict__,)
# copy dictionary and redact token fields so they don't get logged
dict_copy = self.__dict__.copy()
dict_copy["token"] = "<redacted>"
dict_copy["hs_token"] = "<redacted>"
return "ApplicationService: %s" % (dict_copy,)

View File

@@ -12,21 +12,39 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import urllib
from prometheus_client import Counter
from twisted.internet import defer
from synapse.api.constants import ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
from synapse.http.client import SimpleHttpClient
from synapse.events.utils import serialize_event
from synapse.util.logcontext import preserve_fn, make_deferred_yieldable
from synapse.util.caches.response_cache import ResponseCache
from synapse.http.client import SimpleHttpClient
from synapse.types import ThirdPartyInstanceID
import logging
import urllib
from synapse.util.caches.response_cache import ResponseCache
logger = logging.getLogger(__name__)
sent_transactions_counter = Counter(
"synapse_appservice_api_sent_transactions",
"Number of /transactions/ requests sent",
["service"]
)
failed_transactions_counter = Counter(
"synapse_appservice_api_failed_transactions",
"Number of /transactions/ requests that failed to send",
["service"]
)
sent_events_counter = Counter(
"synapse_appservice_api_sent_events",
"Number of events sent to the AS",
["service"]
)
HOUR_IN_MS = 60 * 60 * 1000
@@ -73,7 +91,8 @@ class ApplicationServiceApi(SimpleHttpClient):
super(ApplicationServiceApi, self).__init__(hs)
self.clock = hs.get_clock()
self.protocol_meta_cache = ResponseCache(hs, timeout_ms=HOUR_IN_MS)
self.protocol_meta_cache = ResponseCache(hs, "as_protocol_meta",
timeout_ms=HOUR_IN_MS)
@defer.inlineCallbacks
def query_user(self, service, user_id):
@@ -193,12 +212,7 @@ class ApplicationServiceApi(SimpleHttpClient):
defer.returnValue(None)
key = (service.id, protocol)
result = self.protocol_meta_cache.get(key)
if not result:
result = self.protocol_meta_cache.set(
key, preserve_fn(_get)()
)
return make_deferred_yieldable(result)
return self.protocol_meta_cache.wrap(key, _get)
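For context, a rough sketch of what ResponseCache.wrap does, inferred from the get/set pattern it replaces above (not the verbatim synapse.util.caches.response_cache implementation):

def wrap(self, key, callback, *args, **kwargs):
    # reuse the in-flight or cached result for this key, if any...
    result = self.get(key)
    if not result:
        # ...otherwise start the callback and cache its deferred
        result = self.set(key, run_in_background(callback, *args, **kwargs))
    return make_deferred_yieldable(result)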
@defer.inlineCallbacks
def push_bulk(self, service, events, txn_id=None):
@@ -224,12 +238,15 @@ class ApplicationServiceApi(SimpleHttpClient):
args={
"access_token": service.hs_token
})
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
defer.returnValue(True)
return
except CodeMessageException as e:
logger.warning("push_bulk to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("push_bulk to %s threw exception %s", uri, ex)
failed_transactions_counter.labels(service.id).inc()
defer.returnValue(False)
def _serialize(self, events):

View File

@@ -48,14 +48,14 @@ UP & quit +---------- YES SUCCESS
This is all tied together by the AppServiceScheduler which DIs the required
components.
"""
import logging
from twisted.internet import defer
from synapse.appservice import ApplicationServiceState
from synapse.util.logcontext import preserve_fn
from synapse.util.logcontext import run_in_background
from synapse.util.metrics import Measure
import logging
logger = logging.getLogger(__name__)
@@ -106,7 +106,7 @@ class _ServiceQueuer(object):
def enqueue(self, service, event):
# if this service isn't being sent something
self.queued_events.setdefault(service.id, []).append(event)
preserve_fn(self._send_request)(service)
run_in_background(self._send_request, service)
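A note on the preserve_fn to run_in_background migration seen throughout these diffs: at this point in the codebase preserve_fn was essentially a thin wrapper, so the two spellings are equivalent (sketch):

def preserve_fn(f):
    def g(*args, **kwargs):
        return run_in_background(f, *args, **kwargs)
    return g

# hence: preserve_fn(self._send_request)(service)
#    ==  run_in_background(self._send_request, service)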
@defer.inlineCallbacks
def _send_request(self, service):
@@ -152,10 +152,10 @@ class _TransactionController(object):
if sent:
yield txn.complete(self.store)
else:
preserve_fn(self._start_recoverer)(service)
except Exception as e:
logger.exception(e)
preserve_fn(self._start_recoverer)(service)
run_in_background(self._start_recoverer, service)
except Exception:
logger.exception("Error creating appservice transaction")
run_in_background(self._start_recoverer, service)
@defer.inlineCallbacks
def on_recovered(self, recoverer):
@@ -176,17 +176,20 @@ class _TransactionController(object):
@defer.inlineCallbacks
def _start_recoverer(self, service):
yield self.store.set_appservice_state(
service,
ApplicationServiceState.DOWN
)
logger.info(
"Application service falling behind. Starting recoverer. AS ID %s",
service.id
)
recoverer = self.recoverer_fn(service, self.on_recovered)
self.add_recoverers([recoverer])
recoverer.recover()
try:
yield self.store.set_appservice_state(
service,
ApplicationServiceState.DOWN
)
logger.info(
"Application service falling behind. Starting recoverer. AS ID %s",
service.id
)
recoverer = self.recoverer_fn(service, self.on_recovered)
self.add_recoverers([recoverer])
recoverer.recover()
except Exception:
logger.exception("Error starting AS recoverer")
@defer.inlineCallbacks
def _is_service_up(self, service):

View File

@@ -12,3 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import ConfigError
# export ConfigError if somebody does import *
# this is largely a fudge to stop PEP8 moaning about the import
__all__ = ["ConfigError"]

View File

@@ -16,9 +16,12 @@
import argparse
import errno
import os
import yaml
from textwrap import dedent
from six import integer_types
import yaml
class ConfigError(Exception):
pass
@@ -49,7 +52,7 @@ Missing mandatory `server_name` config option.
class Config(object):
@staticmethod
def parse_size(value):
if isinstance(value, int) or isinstance(value, long):
if isinstance(value, integer_types):
return value
sizes = {"K": 1024, "M": 1024 * 1024}
size = 1
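Worked examples for parse_size with the units above (the string-suffix handling itself falls outside this hunk, so these are inferred):

# parse_size(1024)  -> 1024       ints (and longs on py2) pass through
# parse_size("10M") -> 10485760   10 * 1024 * 1024
# parse_size("64K") -> 65536      64 * 1024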
@@ -61,7 +64,7 @@ class Config(object):
@staticmethod
def parse_duration(value):
if isinstance(value, int) or isinstance(value, long):
if isinstance(value, integer_types):
return value
second = 1000
minute = 60 * second
@@ -279,31 +282,31 @@ class Config(object):
)
if not cls.path_exists(config_dir_path):
os.makedirs(config_dir_path)
with open(config_path, "wb") as config_file:
config_bytes, config = obj.generate_config(
with open(config_path, "w") as config_file:
config_str, config = obj.generate_config(
config_dir_path=config_dir_path,
server_name=server_name,
report_stats=(config_args.report_stats == "yes"),
is_generating_file=True
)
obj.invoke_all("generate_files", config)
config_file.write(config_bytes)
print (
config_file.write(config_str)
print((
"A config file has been generated in %r for server name"
" %r with corresponding SSL keys and self-signed"
" certificates. Please review this file and customise it"
" to your needs."
) % (config_path, server_name)
print (
) % (config_path, server_name))
print(
"If this server name is incorrect, you will need to"
" regenerate the SSL certificates"
)
return
else:
print (
print((
"Config file %r already exists. Generating any missing key"
" files."
) % (config_path,)
) % (config_path,))
generate_keys = True
parser = argparse.ArgumentParser(

View File

@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.api.constants import EventTypes
from ._base import Config
class ApiConfig(Config):

View File

@@ -12,14 +12,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
import logging
from six import string_types
from six.moves.urllib import parse as urlparse
import yaml
from netaddr import IPSet
from synapse.appservice import ApplicationService
from synapse.types import UserID
import urllib
import yaml
import logging
from ._base import Config, ConfigError
logger = logging.getLogger(__name__)
@@ -89,21 +93,21 @@ def _load_appservice(hostname, as_info, config_filename):
"id", "as_token", "hs_token", "sender_localpart"
]
for field in required_string_fields:
if not isinstance(as_info.get(field), basestring):
if not isinstance(as_info.get(field), string_types):
raise KeyError("Required string field: '%s' (%s)" % (
field, config_filename,
))
# 'url' must either be a string or explicitly null, not missing
# to avoid accidentally turning off push for ASes.
if (not isinstance(as_info.get("url"), basestring) and
if (not isinstance(as_info.get("url"), string_types) and
as_info.get("url", "") is not None):
raise KeyError(
"Required string field or explicit null: 'url' (%s)" % (config_filename,)
)
localpart = as_info["sender_localpart"]
if urllib.quote(localpart) != localpart:
if urlparse.quote(localpart) != localpart:
raise ValueError(
"sender_localpart needs characters which are not URL encoded."
)
@@ -128,7 +132,7 @@ def _load_appservice(hostname, as_info, config_filename):
"Expected namespace entry in %s to be an object,"
" but got %s", ns, regex_obj
)
if not isinstance(regex_obj.get("regex"), basestring):
if not isinstance(regex_obj.get("regex"), string_types):
raise ValueError(
"Missing/bad type 'regex' key in %s", regex_obj
)
@@ -152,6 +156,13 @@ def _load_appservice(hostname, as_info, config_filename):
" will not receive events or queries.",
config_filename,
)
ip_range_whitelist = None
if as_info.get('ip_range_whitelist'):
ip_range_whitelist = IPSet(
as_info.get('ip_range_whitelist')
)
return ApplicationService(
token=as_info["as_token"],
hostname=hostname,
@@ -161,5 +172,6 @@ def _load_appservice(hostname, as_info, config_filename):
sender=user_id,
id=as_info["id"],
protocols=protocols,
rate_limited=rate_limited
rate_limited=rate_limited,
ip_range_whitelist=ip_range_whitelist,
)

View File

@@ -0,0 +1,88 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
DEFAULT_CONFIG = """\
# User Consent configuration
#
# for detailed instructions, see
# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
#
# Parts of this section are required if enabling the 'consent' resource under
# 'listeners', in particular 'template_dir' and 'version'.
#
# 'template_dir' gives the location of the templates for the HTML forms.
# This directory should contain one subdirectory per language (eg, 'en', 'fr'),
# and each language directory should contain the policy document (named as
# '<version>.html') and a success page (success.html).
#
# 'version' specifies the 'current' version of the policy document. It defines
# the version to be served by the consent resource if there is no 'v'
# parameter.
#
# 'server_notice_content', if enabled, will send a user a "Server Notice"
# asking them to consent to the privacy policy. The 'server_notices' section
# must also be configured for this to work. Notices will *not* be sent to
# guest users unless 'send_server_notice_to_guests' is set to true.
#
# 'block_events_error', if set, will block any attempts to send events
# until the user consents to the privacy policy. The value of the setting is
# used as the text of the error.
#
# user_consent:
# template_dir: res/templates/privacy
# version: 1.0
# server_notice_content:
# msgtype: m.text
# body: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
# send_server_notice_to_guests: True
# block_events_error: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
#
"""
class ConsentConfig(Config):
def __init__(self):
super(ConsentConfig, self).__init__()
self.user_consent_version = None
self.user_consent_template_dir = None
self.user_consent_server_notice_content = None
self.user_consent_server_notice_to_guests = False
self.block_events_without_consent_error = None
def read_config(self, config):
consent_config = config.get("user_consent")
if consent_config is None:
return
self.user_consent_version = str(consent_config["version"])
self.user_consent_template_dir = consent_config["template_dir"]
self.user_consent_server_notice_content = consent_config.get(
"server_notice_content",
)
self.block_events_without_consent_error = consent_config.get(
"block_events_error",
)
self.user_consent_server_notice_to_guests = bool(consent_config.get(
"send_server_notice_to_guests", False,
))
def default_config(self, **kwargs):
return DEFAULT_CONFIG
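A hypothetical round trip through read_config, to show where the options documented above land (all values invented):

config = ConsentConfig()
config.read_config({
    "user_consent": {
        "version": "1.0",
        "template_dir": "res/templates/privacy",
        "block_events_error": "Agree to the terms at %(consent_uri)s first",
    },
})
assert config.user_consent_version == "1.0"
assert config.user_consent_server_notice_to_guests is False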

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,31 +13,32 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .tls import TlsConfig
from .server import ServerConfig
from .logger import LoggingConfig
from .database import DatabaseConfig
from .ratelimiting import RatelimitConfig
from .repository import ContentRepositoryConfig
from .captcha import CaptchaConfig
from .voip import VoipConfig
from .registration import RegistrationConfig
from .metrics import MetricsConfig
from .api import ApiConfig
from .appservice import AppServiceConfig
from .key import KeyConfig
from .saml2 import SAML2Config
from .captcha import CaptchaConfig
from .cas import CasConfig
from .password import PasswordConfig
from .jwt import JWTConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .consent_config import ConsentConfig
from .database import DatabaseConfig
from .emailconfig import EmailConfig
from .workers import WorkerConfig
from .push import PushConfig
from .spam_checker import SpamCheckerConfig
from .groups import GroupsConfig
from .jwt import JWTConfig
from .key import KeyConfig
from .logger import LoggingConfig
from .metrics import MetricsConfig
from .password import PasswordConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .push import PushConfig
from .ratelimiting import RatelimitConfig
from .registration import RegistrationConfig
from .repository import ContentRepositoryConfig
from .saml2 import SAML2Config
from .server import ServerConfig
from .server_notices_config import ServerNoticesConfig
from .spam_checker import SpamCheckerConfig
from .tls import TlsConfig
from .user_directory import UserDirectoryConfig
from .voip import VoipConfig
from .workers import WorkerConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
@@ -45,12 +47,15 @@ class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
JWTConfig, PasswordConfig, EmailConfig,
WorkerConfig, PasswordAuthProviderConfig, PushConfig,
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,):
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,
ConsentConfig,
ServerNoticesConfig,
):
pass
if __name__ == '__main__':
import sys
sys.stdout.write(
HomeServerConfig().generate_config(sys.argv[1], sys.argv[2])[0]
HomeServerConfig().generate_config(sys.argv[1], sys.argv[2], True)[0]
)

View File

@@ -15,7 +15,6 @@
from ._base import Config, ConfigError
MISSING_JWT = (
"""Missing jwt library. This is required for jwt login.

View File

@@ -13,21 +13,24 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from synapse.util.stringutils import random_string
from signedjson.key import (
generate_signing_key, is_signing_algorithm_supported,
decode_signing_key_base64, decode_verify_key_bytes,
read_signing_keys, write_signing_keys, NACL_ED25519
)
from unpaddedbase64 import decode_base64
from synapse.util.stringutils import random_string_with_symbols
import os
import hashlib
import logging
import os
from signedjson.key import (
NACL_ED25519,
decode_signing_key_base64,
decode_verify_key_bytes,
generate_signing_key,
is_signing_algorithm_supported,
read_signing_keys,
write_signing_keys,
)
from unpaddedbase64 import decode_base64
from synapse.util.stringutils import random_string, random_string_with_symbols
from ._base import Config, ConfigError
logger = logging.getLogger(__name__)
@@ -59,14 +62,20 @@ class KeyConfig(Config):
self.expire_access_token = config.get("expire_access_token", False)
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values
self.form_secret = config.get("form_secret", None)
def default_config(self, config_dir_path, server_name, is_generating_file=False,
**kwargs):
base_key_name = os.path.join(config_dir_path, server_name)
if is_generating_file:
macaroon_secret_key = random_string_with_symbols(50)
form_secret = '"%s"' % random_string_with_symbols(50)
else:
macaroon_secret_key = None
form_secret = 'null'
return """\
macaroon_secret_key: "%(macaroon_secret_key)s"
@@ -74,6 +83,10 @@ class KeyConfig(Config):
# Used to enable access token expiration.
expire_access_token: False
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values
form_secret: %(form_secret)s
## Signing Keys ##
# Path to the signing key to sign messages with

View File

@@ -12,17 +12,22 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.util.logcontext import LoggingContextFilter
from twisted.logger import globalLogBeginner, STDLibLogObserver
import logging
import logging.config
import yaml
from string import Template
import os
import signal
import sys
from string import Template
import yaml
from twisted.logger import STDLibLogObserver, globalLogBeginner
import synapse
from synapse.util.logcontext import LoggingContextFilter
from synapse.util.versionstring import get_version_string
from ._base import Config
DEFAULT_LOG_CONFIG = Template("""
version: 1
@@ -117,7 +122,7 @@ class LoggingConfig(Config):
log_config = config.get("log_config")
if log_config and not os.path.exists(log_config):
log_file = self.abspath("homeserver.log")
with open(log_config, "wb") as log_config_file:
with open(log_config, "w") as log_config_file:
log_config_file.write(
DEFAULT_LOG_CONFIG.substitute(log_file=log_file)
)
@@ -163,7 +168,8 @@ def setup_logging(config, use_worker_options=False):
if log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3,
encoding='utf8'
)
def sighup(signum, stack):
@@ -188,9 +194,8 @@ def setup_logging(config, use_worker_options=False):
def sighup(signum, stack):
# it might be better to use a file watcher or something for this.
logging.info("Reloading log config from %s due to SIGHUP",
log_config)
load_log_config()
logging.info("Reloaded log config from %s due to SIGHUP", log_config)
load_log_config()
@@ -202,6 +207,15 @@ def setup_logging(config, use_worker_options=False):
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
# make sure that the first thing we log is a thing we can grep backwards
# for
logging.warn("***** STARTING SERVER *****")
logging.warn(
"Server %s version %s",
sys.argv[0], get_version_string(synapse),
)
logging.info("Server hostname: %s", config.server_name)
# It's critical to point twisted's internal logging somewhere, otherwise it
# stacks up and leaks up to 64K objects;
# see: https://twistedmatrix.com/trac/ticket/8164

View File

@@ -13,10 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.util.module_loader import load_module
from ._base import Config
LDAP_PROVIDER = 'ldap_auth_provider.LdapAuthProvider'

View File

@@ -13,11 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from distutils.util import strtobool
from synapse.util.stringutils import random_string_with_symbols
from distutils.util import strtobool
from ._base import Config
class RegistrationConfig(Config):

View File

@@ -13,11 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from collections import namedtuple
from synapse.util.module_loader import load_module
from ._base import Config, ConfigError
MISSING_NETADDR = (
"Missing netaddr library. This is required for URL preview API."
@@ -250,6 +250,9 @@ class ContentRepositoryConfig(Config):
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '169.254.0.0/16'
# - '::1/128'
# - 'fe80::/64'
# - 'fc00::/7'
#
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.

View File

@@ -14,13 +14,25 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from synapse.http.endpoint import parse_and_validate_server_name
from ._base import Config, ConfigError
logger = logging.Logger(__name__)
class ServerConfig(Config):
def read_config(self, config):
self.server_name = config["server_name"]
try:
parse_and_validate_server_name(self.server_name)
except ValueError as e:
raise ConfigError(str(e))
self.pid_file = self.abspath(config.get("pid_file"))
self.web_client = config["web_client"]
self.web_client_location = config.get("web_client_location", None)
@@ -37,6 +49,9 @@ class ServerConfig(Config):
# "disable" federation
self.send_federation = config.get("send_federation", True)
# Whether to enable user presence.
self.use_presence = config.get("use_presence", True)
# Whether to update the user directory or not. This should be set to
# false only if we are updating the user directory in a worker
self.update_user_directory = config.get("update_user_directory", True)
@@ -55,6 +70,26 @@ class ServerConfig(Config):
"block_non_admin_invites", False,
)
# Options to control access by tracking MAU
self.limit_usage_by_mau = config.get("limit_usage_by_mau", False)
self.max_mau_value = 0
if self.limit_usage_by_mau:
self.max_mau_value = config.get(
"max_mau_value", 0,
)
self.mau_limits_reserved_threepids = config.get(
"mau_limit_reserved_threepids", []
)
# Options to disable HS
self.hs_disabled = config.get("hs_disabled", False)
self.hs_disabled_message = config.get("hs_disabled_message", "")
self.hs_disabled_limit_type = config.get("hs_disabled_limit_type", "")
# Admin uri to direct users at should their instance become blocked
# due to resource constraints
self.admin_uri = config.get("admin_uri", None)
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist = None
federation_domain_whitelist = config.get(
@@ -138,6 +173,12 @@ class ServerConfig(Config):
metrics_port = config.get("metrics_port")
if metrics_port:
logger.warn(
("The metrics_port configuration option is deprecated in Synapse 0.31 "
"in favour of a listener. Please see "
"http://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst"
" on how to configure the new listener."))
self.listeners.append({
"port": metrics_port,
"bind_addresses": [config.get("metrics_bind_host", "127.0.0.1")],
@@ -152,8 +193,8 @@ class ServerConfig(Config):
})
def default_config(self, server_name, **kwargs):
if ":" in server_name:
bind_port = int(server_name.split(":")[1])
_, bind_port = parse_and_validate_server_name(server_name)
if bind_port is not None:
unsecure_port = bind_port - 400
else:
bind_port = 8448
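Assuming parse_and_validate_server_name returns a (hostname, port) pair, the arithmetic above works out as follows for a generated config:

# "example.com:8448" -> bind_port 8448, unsecure_port 8048 (bind_port - 400)
# "example.com"      -> no explicit port, falls through to the defaults
#                       (bind_port 8448)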
@@ -191,6 +232,8 @@ class ServerConfig(Config):
# different cores. See
# https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
#
# This setting requires the affinity package to be installed!
#
# cpu_affinity: 0xFFFFFFFF
# Whether to serve a web client from the HTTP/HTTPS root resource.
@@ -210,6 +253,9 @@ class ServerConfig(Config):
# hard limit.
soft_file_limit: 0
# Set to false to disable presence tracking on this homeserver.
use_presence: true
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
# gc_thresholds: [700, 10, 10]
@@ -301,6 +347,32 @@ class ServerConfig(Config):
# - port: 9000
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
# Homeserver blocking
#
# How to reach the server admin, used in ResourceLimitError
# admin_uri: 'mailto:admin@server.com'
#
# Global block config
#
# hs_disabled: False
# hs_disabled_message: 'Human readable reason for why the HS is blocked'
# hs_disabled_limit_type: 'error code(str), to help clients decode reason'
#
# Monthly Active User Blocking
#
# Enables monthly active user checking
# limit_usage_by_mau: False
# max_mau_value: 50
#
# Sometimes the server admin will want to ensure certain accounts are
# never blocked by mau checking. These accounts are specified here.
#
# mau_limit_reserved_threepids:
# - medium: 'email'
# address: 'reserved_user@example.com'
""" % locals()
def read_arguments(self, args):

View File

@@ -0,0 +1,87 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.types import UserID
from ._base import Config
DEFAULT_CONFIG = """\
# Server Notices room configuration
#
# Uncomment this section to enable a room which can be used to send notices
# from the server to users. It is a special room which cannot be left; notices
# come from a special "notices" user id.
#
# If you uncomment this section, you *must* define the system_mxid_localpart
# setting, which defines the id of the user which will be used to send the
# notices.
#
# It's also possible to override the room name, the display name of the
# "notices" user, and the avatar for the user.
#
# server_notices:
# system_mxid_localpart: notices
# system_mxid_display_name: "Server Notices"
# system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
# room_name: "Server Notices"
"""
class ServerNoticesConfig(Config):
"""Configuration for the server notices room.
Attributes:
server_notices_mxid (str|None):
The MXID to use for server notices.
None if server notices are not enabled.
server_notices_mxid_display_name (str|None):
The display name to use for the server notices user.
None if server notices are not enabled.
server_notices_mxid_avatar_url (str|None):
The avatar URL to use for the server notices user.
None if server notices are not enabled.
server_notices_room_name (str|None):
The name to use for the server notices room.
None if server notices are not enabled.
"""
def __init__(self):
super(ServerNoticesConfig, self).__init__()
self.server_notices_mxid = None
self.server_notices_mxid_display_name = None
self.server_notices_mxid_avatar_url = None
self.server_notices_room_name = None
def read_config(self, config):
c = config.get("server_notices")
if c is None:
return
mxid_localpart = c['system_mxid_localpart']
self.server_notices_mxid = UserID(
mxid_localpart, self.server_name,
).to_string()
self.server_notices_mxid_display_name = c.get(
'system_mxid_display_name', None,
)
self.server_notices_mxid_avatar_url = c.get(
'system_mxid_avatar_url', None,
)
# todo: i18n
self.server_notices_room_name = c.get('room_name', "Server Notices")
def default_config(self, **kwargs):
return DEFAULT_CONFIG

View File

@@ -13,14 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
import os
import subprocess
from hashlib import sha256
from unpaddedbase64 import encode_base64
from OpenSSL import crypto
import subprocess
import os
from hashlib import sha256
from unpaddedbase64 import encode_base64
from ._base import Config
GENERATE_DH_PARAMS = False
@@ -133,7 +134,7 @@ class TlsConfig(Config):
tls_dh_params_path = config["tls_dh_params_path"]
if not self.path_exists(tls_private_key_path):
with open(tls_private_key_path, "w") as private_key_file:
with open(tls_private_key_path, "wb") as private_key_file:
tls_private_key = crypto.PKey()
tls_private_key.generate_key(crypto.TYPE_RSA, 2048)
private_key_pem = crypto.dump_privatekey(
@@ -148,7 +149,7 @@ class TlsConfig(Config):
)
if not self.path_exists(tls_certificate_path):
with open(tls_certificate_path, "w") as certificate_file:
with open(tls_certificate_path, "wb") as certificate_file:
cert = crypto.X509()
subject = cert.get_subject()
subject.CN = config["server_name"]

View File

@@ -30,10 +30,10 @@ class VoipConfig(Config):
## Turn ##
# The public URIs of the TURN server to give to clients
turn_uris: []
#turn_uris: []
# The shared secret used to compute passwords for the TURN server
turn_shared_secret: "YOUR_SHARED_SECRET"
#turn_shared_secret: "YOUR_SHARED_SECRET"
# The Username and password if the TURN server needs them and
# does not use a token

View File

@@ -11,19 +11,22 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import ssl
from OpenSSL import SSL
from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName
import logging
from zope.interface import implementer
from OpenSSL import SSL, crypto
from twisted.internet._sslverify import _defaultCurveName
from twisted.internet.interfaces import IOpenSSLClientConnectionCreator
from twisted.internet.ssl import CertificateOptions, ContextFactory
from twisted.python.failure import Failure
logger = logging.getLogger(__name__)
class ServerContextFactory(ssl.ContextFactory):
class ServerContextFactory(ContextFactory):
"""Factory for PyOpenSSL SSL contexts that are used to handle incoming
connections and to make connections to remote servers."""
connections."""
def __init__(self, config):
self._context = SSL.Context(SSL.SSLv23_METHOD)
@@ -32,8 +35,9 @@ class ServerContextFactory(ssl.ContextFactory):
@staticmethod
def configure_context(context, config):
try:
_ecCurve = _OpenSSLECCurve(_defaultCurveName)
_ecCurve.addECKeyToContext(context)
_ecCurve = crypto.get_elliptic_curve(_defaultCurveName)
context.set_tmp_ecdh(_ecCurve)
except Exception:
logger.exception("Failed to enable elliptic curve for TLS")
context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
@@ -47,3 +51,78 @@ class ServerContextFactory(ssl.ContextFactory):
def getContext(self):
return self._context
def _idnaBytes(text):
"""
Convert some text typed by a human into some ASCII bytes. This is a
copy of twisted.internet._idna._idnaBytes. For documentation, see the
twisted documentation.
"""
try:
import idna
except ImportError:
return text.encode("idna")
else:
return idna.encode(text)
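For example (assuming the idna package is importable; values follow the IDNA encoding rules):

# _idnaBytes(u"example.com")    == b"example.com"
# _idnaBytes(u"bücher.example") == b"xn--bcher-kva.example"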
def _tolerateErrors(wrapped):
"""
Wrap up an info_callback for pyOpenSSL so that if something goes wrong
the error is immediately logged and the connection is dropped if possible.
This is a copy of twisted.internet._sslverify._tolerateErrors. For
documentation, see the twisted documentation.
"""
def infoCallback(connection, where, ret):
try:
return wrapped(connection, where, ret)
except: # noqa: E722, taken from the twisted implementation
f = Failure()
logger.exception("Error during info_callback")
connection.get_app_data().failVerification(f)
return infoCallback
@implementer(IOpenSSLClientConnectionCreator)
class ClientTLSOptions(object):
"""
Client creator for TLS without certificate identity verification. This is a
copy of twisted.internet._sslverify.ClientTLSOptions with the identity
verification left out. For documentation, see the twisted documentation.
"""
def __init__(self, hostname, ctx):
self._ctx = ctx
self._hostname = hostname
self._hostnameBytes = _idnaBytes(hostname)
ctx.set_info_callback(
_tolerateErrors(self._identityVerifyingInfoCallback)
)
def clientConnectionForTLS(self, tlsProtocol):
context = self._ctx
connection = SSL.Connection(context, None)
connection.set_app_data(tlsProtocol)
return connection
def _identityVerifyingInfoCallback(self, connection, where, ret):
if where & SSL.SSL_CB_HANDSHAKE_START:
connection.set_tlsext_host_name(self._hostnameBytes)
class ClientTLSOptionsFactory(object):
"""Factory for Twisted ClientTLSOptions that are used to make connections
to remote servers for federation."""
def __init__(self, config):
# We don't use config options yet
pass
def get_options(self, host):
return ClientTLSOptions(
host.decode('utf-8'),
CertificateOptions(verify=False).getContext()
)

View File

@@ -15,16 +15,16 @@
# limitations under the License.
from synapse.api.errors import SynapseError, Codes
from synapse.events.utils import prune_event
from canonicaljson import encode_canonical_json
from unpaddedbase64 import encode_base64, decode_base64
from signedjson.sign import sign_json
import hashlib
import logging
from canonicaljson import encode_canonical_json
from signedjson.sign import sign_json
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.errors import Codes, SynapseError
from synapse.events.utils import prune_event
logger = logging.getLogger(__name__)

View File

@@ -13,14 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util import logcontext
from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
import simplejson as json
import logging
from canonicaljson import json
from twisted.internet import defer, reactor
from twisted.internet.protocol import Factory
from twisted.web.http import HTTPClient
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util import logcontext
logger = logging.getLogger(__name__)
@@ -28,14 +30,14 @@ KEY_API_V1 = b"/_matrix/key/v1/"
@defer.inlineCallbacks
def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1):
def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
"""Fetch the keys for a remote server."""
factory = SynapseKeyClientFactory()
factory.path = path
factory.host = server_name
endpoint = matrix_federation_endpoint(
reactor, server_name, ssl_context_factory, timeout=30
reactor, server_name, tls_client_options_factory, timeout=30
)
for i in range(5):

View File

@@ -14,32 +14,37 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.crypto.keyclient import fetch_server_key
from synapse.api.errors import SynapseError, Codes
from synapse.util import unwrapFirstError, logcontext
from synapse.util.logcontext import (
PreserveLoggingContext,
preserve_fn
)
from synapse.util.metrics import Measure
import hashlib
import logging
import urllib
from collections import namedtuple
from twisted.internet import defer
from signedjson.sign import (
verify_signed_json, signature_ids, sign_json, encode_canonical_json
)
from signedjson.key import (
is_signing_algorithm_supported, decode_verify_key_bytes
decode_verify_key_bytes,
encode_verify_key_base64,
is_signing_algorithm_supported,
)
from signedjson.sign import (
SignatureVerifyException,
encode_canonical_json,
sign_json,
signature_ids,
verify_signed_json,
)
from unpaddedbase64 import decode_base64, encode_base64
from OpenSSL import crypto
from twisted.internet import defer
from collections import namedtuple
import urllib
import hashlib
import logging
from synapse.api.errors import Codes, SynapseError
from synapse.crypto.keyclient import fetch_server_key
from synapse.util import logcontext, unwrapFirstError
from synapse.util.logcontext import (
PreserveLoggingContext,
preserve_fn,
run_in_background,
)
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
@@ -55,7 +60,7 @@ Attributes:
key_ids(set(str)): The set of key_ids that could be used to verify the
JSON object
json_object(dict): The JSON object to verify.
deferred(twisted.internet.defer.Deferred):
deferred(Deferred[str, str, nacl.signing.VerifyKey]):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched. The deferreds' callbacks are run with no
logcontext.
@@ -127,7 +132,7 @@ class Keyring(object):
verify_requests.append(verify_request)
preserve_fn(self._start_key_lookups)(verify_requests)
run_in_background(self._start_key_lookups, verify_requests)
# Pass those keys to handle_key_deferred so that the json object
# signatures can be verified
@@ -146,53 +151,56 @@ class Keyring(object):
verify_requests (List[VerifyKeyRequest]):
"""
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred()
for rq in verify_requests
}
try:
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred()
for rq in verify_requests
}
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(
[rq.server_name for rq in verify_requests],
server_to_deferred,
)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
server_to_request_ids = {}
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for verify_request in verify_requests:
verify_request.deferred.addBoth(
remove_deferreds, verify_request,
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(
[rq.server_name for rq in verify_requests],
server_to_deferred,
)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
server_to_request_ids = {}
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for verify_request in verify_requests:
verify_request.deferred.addBoth(
remove_deferreds, verify_request,
)
except Exception:
logger.exception("Error starting key lookups")
@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_names, server_to_deferred):
"""Waits for any previous key lookups for the given servers to finish.
@@ -313,7 +321,7 @@ class Keyring(object):
if not verify_request.deferred.called:
verify_request.deferred.errback(err)
preserve_fn(do_iterations)().addErrback(on_err)
run_in_background(do_iterations).addErrback(on_err)
@defer.inlineCallbacks
def get_keys_from_store(self, server_name_and_key_ids):
@@ -329,8 +337,9 @@ class Keyring(object):
"""
res = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.get_server_verify_keys)(
server_name, key_ids
run_in_background(
self.store.get_server_verify_keys,
server_name, key_ids,
).addCallback(lambda ks, server: (server, ks), server_name)
for server_name, key_ids in server_name_and_key_ids
],
@@ -352,13 +361,13 @@ class Keyring(object):
logger.exception(
"Unable to get key from %r: %s %s",
perspective_name,
type(e).__name__, str(e.message),
type(e).__name__, str(e),
)
defer.returnValue({})
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(p_name, p_keys)
run_in_background(get_key, p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
@@ -384,7 +393,7 @@ class Keyring(object):
logger.info(
"Unable to get key %r for %r directly: %s %s",
key_ids, server_name,
type(e).__name__, str(e.message),
type(e).__name__, str(e),
)
if not keys:
@@ -398,7 +407,7 @@ class Keyring(object):
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(server_name, key_ids)
run_in_background(get_key, server_name, key_ids)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
@@ -481,7 +490,8 @@ class Keyring(object):
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
run_in_background(
self.store_keys,
server_name=server_name,
from_server=perspective_name,
verify_keys=response_keys,
@@ -502,7 +512,7 @@ class Keyring(object):
continue
(response, tls_certificate) = yield fetch_server_key(
server_name, self.hs.tls_server_context_factory,
server_name, self.hs.tls_client_options_factory,
path=(b"/_matrix/key/v2/server/%s" % (
urllib.quote(requested_key_id),
)).encode("ascii"),
@@ -539,7 +549,8 @@ class Keyring(object):
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
run_in_background(
self.store_keys,
server_name=key_server_name,
from_server=server_name,
verify_keys=verify_keys,
@@ -615,7 +626,8 @@ class Keyring(object):
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_keys_json)(
run_in_background(
self.store.store_server_keys_json,
server_name=server_name,
key_id=key_id,
from_server=server_name,
@@ -643,7 +655,7 @@ class Keyring(object):
# Try to fetch the key from the remote server.
(response, tls_certificate) = yield fetch_server_key(
server_name, self.hs.tls_server_context_factory
server_name, self.hs.tls_client_options_factory
)
# Check the response.
@@ -716,7 +728,8 @@ class Keyring(object):
# TODO(markjh): Store whether the keys have expired.
return logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_verify_key)(
run_in_background(
self.store.store_server_verify_key,
server_name, server_name, key.time_added, key
)
for key_id, key in verify_keys.items()
@@ -727,6 +740,17 @@ class Keyring(object):
@defer.inlineCallbacks
def _handle_key_deferred(verify_request):
"""Waits for the key to become available, and then performs a verification
Args:
verify_request (VerifyKeyRequest):
Returns:
Deferred[None]
Raises:
SynapseError if there was a problem performing the verification
"""
server_name = verify_request.server_name
try:
with PreserveLoggingContext():
@@ -734,7 +758,7 @@ def _handle_key_deferred(verify_request):
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
server_name, type(e).__name__, str(e),
)
raise SynapseError(
502,
@@ -744,7 +768,7 @@ def _handle_key_deferred(verify_request):
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
server_name, type(e).__name__, str(e),
)
raise SynapseError(
401,
@@ -759,11 +783,17 @@ def _handle_key_deferred(verify_request):
))
try:
verify_signed_json(json_object, server_name, verify_key)
except Exception:
except SignatureVerifyException as e:
logger.debug(
"Error verifying signature for %s:%s:%s with key %s: %s",
server_name, verify_key.alg, verify_key.version,
encode_verify_key_base64(verify_key),
str(e),
)
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
"Invalid signature for server %s with key %s:%s: %s" % (
server_name, verify_key.alg, verify_key.version, str(e),
),
Codes.UNAUTHORIZED,
)

View File

@@ -17,11 +17,11 @@ import logging
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json, SignatureVerifyException
from signedjson.sign import SignatureVerifyException, verify_signed_json
from unpaddedbase64 import decode_base64
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, SynapseError, EventSizeError
from synapse.api.constants import KNOWN_ROOM_VERSIONS, EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, EventSizeError, SynapseError
from synapse.types import UserID, get_domain_from_id
logger = logging.getLogger(__name__)
@@ -34,9 +34,11 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
event: the event being checked.
auth_events (dict: event-key -> event): the existing room state.
Raises:
AuthError if the checks fail
Returns:
True if the auth checks pass.
if the auth checks pass.
"""
if do_size_check:
_check_size_limits(event)
@@ -71,17 +73,27 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
# Oh, we don't know what the state of the room was, so we
# are trusting that this is allowed (at least for now)
logger.warn("Trusting event: %s", event.event_id)
return True
return
if event.type == EventTypes.Create:
sender_domain = get_domain_from_id(event.sender)
room_id_domain = get_domain_from_id(event.room_id)
if room_id_domain != sender_domain:
raise AuthError(
403,
"Creation event's room_id domain does not match sender's"
)
room_version = event.content.get("room_version", "1")
if room_version not in KNOWN_ROOM_VERSIONS:
raise AuthError(
403,
"room appears to have unsupported version %s" % (
room_version,
))
# FIXME
return True
logger.debug("Allowing! %s", event)
return
creation_event = auth_events.get((EventTypes.Create, ""), None)
@@ -118,7 +130,8 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
403,
"Alias event's state_key does not match sender's domain"
)
return True
logger.debug("Allowing! %s", event)
return
if logger.isEnabledFor(logging.DEBUG):
logger.debug(
@@ -127,14 +140,9 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
)
if event.type == EventTypes.Member:
allowed = _is_membership_change_allowed(
event, auth_events
)
if allowed:
logger.debug("Allowing! %s", event)
else:
logger.debug("Denying! %s", event)
return allowed
_is_membership_change_allowed(event, auth_events)
logger.debug("Allowing! %s", event)
return
_check_event_sender_in_room(event, auth_events)
@@ -153,7 +161,8 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
)
)
else:
return True
logger.debug("Allowing! %s", event)
return
_can_send_event(event, auth_events)
@@ -200,7 +209,7 @@ def _is_membership_change_allowed(event, auth_events):
create = auth_events.get(key)
if create and event.prev_events[0][0] == create.event_id:
if create.content["creator"] == event.state_key:
return True
return
target_user_id = event.state_key
@@ -265,13 +274,13 @@ def _is_membership_change_allowed(event, auth_events):
raise AuthError(
403, "%s is banned from the room" % (target_user_id,)
)
return True
return
if Membership.JOIN != membership:
if (caller_invited
and Membership.LEAVE == membership
and target_user_id == event.user_id):
return True
return
if not caller_in_room: # caller isn't joined
raise AuthError(
@@ -334,8 +343,6 @@ def _is_membership_change_allowed(event, auth_events):
else:
raise AuthError(500, "Unknown membership %s" % membership)
return True
def _check_event_sender_in_room(event, auth_events):
key = (EventTypes.Member, event.user_id, )
@@ -355,35 +362,46 @@ def _check_joined_room(member, user_id, room_id):
))
def get_send_level(etype, state_key, auth_events):
key = (EventTypes.PowerLevels, "", )
send_level_event = auth_events.get(key)
send_level = None
if send_level_event:
send_level = send_level_event.content.get("events", {}).get(
etype
)
if send_level is None:
if state_key is not None:
send_level = send_level_event.content.get(
"state_default", 50
)
else:
send_level = send_level_event.content.get(
"events_default", 0
)
def get_send_level(etype, state_key, power_levels_event):
"""Get the power level required to send an event of a given type
if send_level:
send_level = int(send_level)
The federation spec [1] refers to this as "Required Power Level".

[1]: https://matrix.org/docs/spec/server_server/unstable.html#definitions
Args:
etype (str): type of event
state_key (str|None): state_key of state event, or None if it is not
a state event.
power_levels_event (synapse.events.EventBase|None): power levels event
in force at this point in the room
Returns:
int: power level required to send this event.
"""
if power_levels_event:
power_levels_content = power_levels_event.content
else:
send_level = 0
power_levels_content = {}
return send_level
# see if we have a custom level for this event type
send_level = power_levels_content.get("events", {}).get(etype)
# otherwise, fall back to the state_default/events_default.
if send_level is None:
if state_key is not None:
send_level = power_levels_content.get("state_default", 50)
else:
send_level = power_levels_content.get("events_default", 0)
return int(send_level)
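A worked example against the fallback chain above (the power-levels content is hypothetical):

# pl_event.content = {"events": {"m.room.name": 50},
#                     "state_default": 50, "events_default": 0}
# get_send_level("m.room.name", "", pl_event)       -> 50  explicit entry
# get_send_level("m.room.topic", "", pl_event)      -> 50  state_default
# get_send_level("m.room.message", None, pl_event)  -> 0   events_default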
def _can_send_event(event, auth_events):
power_levels_event = _get_power_level_event(auth_events)
send_level = get_send_level(
event.type, event.get("state_key", None), auth_events
event.type, event.get("state_key"), power_levels_event,
)
user_level = get_user_power_level(event.user_id, auth_events)
@@ -471,14 +489,14 @@ def _check_power_levels(event, auth_events):
]
old_list = current_state.content.get("users", {})
for user in set(old_list.keys() + user_list.keys()):
for user in set(list(old_list) + list(user_list)):
levels_to_check.append(
(user, "users")
)
old_list = current_state.content.get("events", {})
new_list = event.content.get("events", {})
for ev_id in set(old_list.keys() + new_list.keys()):
for ev_id in set(list(old_list) + list(new_list)):
levels_to_check.append(
(ev_id, "events")
)
@@ -515,7 +533,11 @@ def _check_power_levels(event, auth_events):
"to your own"
)
if old_level > user_level or new_level > user_level:
# Check if the old and new levels are greater than the user level
# (if defined)
old_level_too_big = old_level is not None and old_level > user_level
new_level_too_big = new_level is not None and new_level > user_level
if old_level_too_big or new_level_too_big:
raise AuthError(
403,
"You don't have permission to add ops level greater "
@@ -524,13 +546,22 @@ def _check_power_levels(event, auth_events):
def _get_power_level_event(auth_events):
key = (EventTypes.PowerLevels, "", )
return auth_events.get(key)
return auth_events.get((EventTypes.PowerLevels, ""))
def get_user_power_level(user_id, auth_events):
power_level_event = _get_power_level_event(auth_events)
"""Get a user's power level
Args:
user_id (str): user's id to look up in power_levels
auth_events (dict[(str, str), synapse.events.EventBase]):
state in force at this point in the room (or rather, a subset of
it, including at least the create event and power levels event).
Returns:
int: the user's power level in this room.
"""
power_level_event = _get_power_level_event(auth_events)
if power_level_event:
level = power_level_event.content.get("users", {}).get(user_id)
if not level:
@@ -541,6 +572,11 @@ def get_user_power_level(user_id, auth_events):
else:
return int(level)
else:
# if there is no power levels event, the creator gets 100 and everyone
# else gets 0.
# some things which call this don't pass the create event: hack around
# that.
key = (EventTypes.Create, "", )
create_event = auth_events.get(key)
if (create_event is not None and
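A condensed sketch (hypothetical helper, not in the diff) of the fallback described in the comment above: when there is no power-levels event, the room creator is treated as level 100 and everyone else as 0:

```python
# Hypothetical condensation of the fallback rule, for illustration only.
def user_power_level_fallback(user_id, create_event):
    if create_event is not None and create_event.content.get("creator") == user_id:
        return 100
    return 0
```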

View File

@@ -13,9 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util.frozenutils import freeze
from synapse.util.caches import intern_dict
from synapse.util.frozenutils import freeze
# Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
# bugs where we accidentally share e.g. signature dicts. However, converting
@@ -47,14 +46,26 @@ class _EventInternalMetadata(object):
def _event_dict_property(key):
# We want to be able to use hasattr with the event dict properties.
# However, (on python3) hasattr expects AttributeError to be raised. Hence,
# we need to transform the KeyError into an AttributeError
def getter(self):
return self._event_dict[key]
try:
return self._event_dict[key]
except KeyError:
raise AttributeError(key)
def setter(self, v):
self._event_dict[key] = v
try:
self._event_dict[key] = v
except KeyError:
raise AttributeError(key)
def delete(self):
del self._event_dict[key]
try:
del self._event_dict[key]
except KeyError:
raise AttributeError(key)
return property(
getter,
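The KeyError-to-AttributeError translation above is what keeps `hasattr()` working on Python 3: `hasattr` only swallows AttributeError, so a raw KeyError from the backing dict would propagate out of it. A self-contained sketch of the same pattern:

```python
# Minimal sketch, independent of synapse: without the translation,
# hasattr() would raise KeyError instead of returning False.
class Demo(object):
    def __init__(self):
        self._event_dict = {}

    @property
    def sender(self):
        try:
            return self._event_dict["sender"]
        except KeyError:
            raise AttributeError("sender")

assert not hasattr(Demo(), "sender")
```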
@@ -134,7 +145,7 @@ class EventBase(object):
return field in self._event_dict
def items(self):
return self._event_dict.items()
return list(self._event_dict.items())
class FrozenEvent(EventBase):

View File

@@ -13,13 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from . import EventBase, FrozenEvent, _event_dict_property
import copy
from synapse.types import EventID
from synapse.util.stringutils import random_string
import copy
from . import EventBase, FrozenEvent, _event_dict_property
class EventBuilder(EventBase):

View File

@@ -13,22 +13,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from six import iteritems
from frozendict import frozendict
from twisted.internet import defer
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
class EventContext(object):
"""
Attributes:
current_state_ids (dict[(str, str), str]):
The current state map including the current event.
(type, state_key) -> event_id
prev_state_ids (dict[(str, str), str]):
The current state map excluding the current event.
(type, state_key) -> event_id
state_group (int|None): state group id, if the state has been stored
as a state group. This is usually only None if e.g. the event is
an outlier.
@@ -45,38 +41,77 @@ class EventContext(object):
prev_state_events (?): XXX: is this ever set to anything other than
the empty list?
_current_state_ids (dict[(str, str), str]|None):
The current state map including the current event. None if outlier
or we haven't fetched the state from DB yet.
(type, state_key) -> event_id
_prev_state_ids (dict[(str, str), str]|None):
The current state map excluding the current event. None if outlier
or we haven't fetched the state from DB yet.
(type, state_key) -> event_id
_fetching_state_deferred (Deferred|None): Resolves when *_state_ids have
been calculated. None if we haven't started calculating yet
_event_type (str): The type of the event the context is associated with.
Only set when state has not been fetched yet.
_event_state_key (str|None): The state_key of the event the context is
associated with. Only set when state has not been fetched yet.
_prev_state_id (str|None): If the event associated with the context is
a state event, then `_prev_state_id` is the event_id of the state
that was replaced.
Only set when state has not been fetched yet.
"""
__slots__ = [
"current_state_ids",
"prev_state_ids",
"state_group",
"rejected",
"prev_group",
"delta_ids",
"prev_state_events",
"app_service",
"_current_state_ids",
"_prev_state_ids",
"_prev_state_id",
"_event_type",
"_event_state_key",
"_fetching_state_deferred",
]
def __init__(self):
# The current state including the current event
self.current_state_ids = None
# The current state excluding the current event
self.prev_state_ids = None
self.state_group = None
self.prev_state_events = []
self.rejected = False
self.app_service = None
@staticmethod
def with_state(state_group, current_state_ids, prev_state_ids,
prev_group=None, delta_ids=None):
context = EventContext()
# The current state including the current event
context._current_state_ids = current_state_ids
# The current state excluding the current event
context._prev_state_ids = prev_state_ids
context.state_group = state_group
context._prev_state_id = None
context._event_type = None
context._event_state_key = None
context._fetching_state_deferred = defer.succeed(None)
# A previously persisted state group and a delta between that
# and this state.
self.prev_group = None
self.delta_ids = None
context.prev_group = prev_group
context.delta_ids = delta_ids
self.prev_state_events = None
return context
self.app_service = None
def serialize(self, event):
@defer.inlineCallbacks
def serialize(self, event, store):
"""Converts self to a type that can be serialized as JSON, and then
deserialized by `deserialize`
@@ -92,11 +127,12 @@ class EventContext(object):
# the prev_state_ids, so if we're a state event we include the event
# id that we replaced in the state.
if event.is_state():
prev_state_id = self.prev_state_ids.get((event.type, event.state_key))
prev_state_ids = yield self.get_prev_state_ids(store)
prev_state_id = prev_state_ids.get((event.type, event.state_key))
else:
prev_state_id = None
return {
defer.returnValue({
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
@@ -106,10 +142,9 @@ class EventContext(object):
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None
}
})
@staticmethod
@defer.inlineCallbacks
def deserialize(store, input):
"""Converts a dict that was produced by `serialize` back into a
EventContext.
@@ -122,32 +157,115 @@ class EventContext(object):
EventContext
"""
context = EventContext()
context.state_group = input["state_group"]
context.rejected = input["rejected"]
context.prev_group = input["prev_group"]
context.delta_ids = _decode_state_dict(input["delta_ids"])
context.prev_state_events = input["prev_state_events"]
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
prev_state_id = input["prev_state_id"]
event_type = input["event_type"]
event_state_key = input["event_state_key"]
context._prev_state_id = input["prev_state_id"]
context._event_type = input["event_type"]
context._event_state_key = input["event_state_key"]
context.current_state_ids = yield store.get_state_ids_for_group(
context.state_group,
)
if prev_state_id and event_state_key:
context.prev_state_ids = dict(context.current_state_ids)
context.prev_state_ids[(event_type, event_state_key)] = prev_state_id
else:
context.prev_state_ids = context.current_state_ids
context._current_state_ids = None
context._prev_state_ids = None
context._fetching_state_deferred = None
context.state_group = input["state_group"]
context.prev_group = input["prev_group"]
context.delta_ids = _decode_state_dict(input["delta_ids"])
context.rejected = input["rejected"]
context.prev_state_events = input["prev_state_events"]
app_service_id = input["app_service_id"]
if app_service_id:
context.app_service = store.get_app_service_by_id(app_service_id)
defer.returnValue(context)
return context
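A hypothetical round trip under the new lazy scheme, inside an `inlineCallbacks` function with an event, a context and a store in scope: `serialize` now takes the store so it can compute `prev_state_ids` on demand, while `deserialize` (apparently synchronous now, per the `return context` above) no longer touches the database up front:

```python
# Sketch only: the DB fetch is deferred until state is actually needed.
serialized = yield context.serialize(event, store)
context2 = EventContext.deserialize(store, serialized)
current_ids = yield context2.get_current_state_ids(store)  # DB hit happens here
```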
@defer.inlineCallbacks
def get_current_state_ids(self, store):
"""Gets the current state IDs
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
is None, which happens when the associated event is an outlier.
"""
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store,
)
yield make_deferred_yieldable(self._fetching_state_deferred)
defer.returnValue(self._current_state_ids)
@defer.inlineCallbacks
def get_prev_state_ids(self, store):
"""Gets the prev state IDs
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
is None, which happens when the associated event is an outlier.
"""
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store,
)
yield make_deferred_yieldable(self._fetching_state_deferred)
defer.returnValue(self._prev_state_ids)
def get_cached_current_state_ids(self):
"""Gets the current state IDs if we have them already cached.
Returns:
dict[(str, str), str]|None: Returns None if we haven't cached the
state or if state_group is None, which happens when the associated
event is an outlier.
"""
return self._current_state_ids
@defer.inlineCallbacks
def _fill_out_state(self, store):
"""Called to populate the _current_state_ids and _prev_state_ids
attributes by loading from the database.
"""
if self.state_group is None:
return
self._current_state_ids = yield store.get_state_ids_for_group(
self.state_group,
)
if self._prev_state_id and self._event_state_key is not None:
self._prev_state_ids = dict(self._current_state_ids)
key = (self._event_type, self._event_state_key)
self._prev_state_ids[key] = self._prev_state_id
else:
self._prev_state_ids = self._current_state_ids
@defer.inlineCallbacks
def update_state(self, state_group, prev_state_ids, current_state_ids,
prev_group, delta_ids):
"""Replace the state in the context
"""
# We need to make sure we wait for any ongoing fetching of state
# to complete so that the updated state doesn't get clobbered
if self._fetching_state_deferred:
yield make_deferred_yieldable(self._fetching_state_deferred)
self.state_group = state_group
self._prev_state_ids = prev_state_ids
self.prev_group = prev_group
self._current_state_ids = current_state_ids
self.delta_ids = delta_ids
# We need to ensure that we've marked the state as having been fetched
self._fetching_state_deferred = defer.succeed(None)
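The `_fetching_state_deferred` dance above is a dedup-and-cache pattern: the first caller starts `_fill_out_state`, every later caller waits on the same Deferred, and results are read back from the attributes rather than from the Deferred's result (the real code additionally wraps the Deferred with `run_in_background`/`make_deferred_yieldable` for logcontext correctness). A stripped-down sketch of the idea, with hypothetical names:

```python
from twisted.internet import defer


class LazyValue(object):
    """Sketch: load a value once; concurrent callers share the same fetch."""

    def __init__(self, load):
        self._load = load      # callable returning a Deferred
        self._fetching = None
        self._value = None

    @defer.inlineCallbacks
    def get(self):
        if self._fetching is None:
            self._fetching = self._fill()
        yield self._fetching            # all callers wait on the same fetch
        defer.returnValue(self._value)  # result read from the attribute

    @defer.inlineCallbacks
    def _fill(self):
        self._value = yield self._load()
```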
def _encode_state_dict(state_dict):
@@ -159,7 +277,7 @@ def _encode_state_dict(state_dict):
return [
(etype, state_key, v)
for (etype, state_key), v in state_dict.iteritems()
for (etype, state_key), v in iteritems(state_dict)
]

View File

@@ -13,12 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.constants import EventTypes
from . import EventBase
import re
from six import string_types
from frozendict import frozendict
import re
from synapse.api.constants import EventTypes
from . import EventBase
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
# (?<!stuff) matches if the current position in the string is not preceded
@@ -277,7 +280,7 @@ def serialize_event(e, time_now_ms, as_client_event=True,
if only_event_fields:
if (not isinstance(only_event_fields, list) or
not all(isinstance(f, basestring) for f in only_event_fields)):
not all(isinstance(f, string_types) for f in only_event_fields)):
raise TypeError("only_event_fields must be a list of strings")
d = only_fields(d, only_event_fields)

View File

@@ -13,9 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.types import EventID, RoomID, UserID
from synapse.api.errors import SynapseError
from six import string_types
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.types import EventID, RoomID, UserID
class EventValidator(object):
@@ -49,7 +51,7 @@ class EventValidator(object):
strings.append("state_key")
for s in strings:
if not isinstance(getattr(event, s), basestring):
if not isinstance(getattr(event, s), string_types):
raise SynapseError(400, "Not '%s' a string type" % (s,))
if event.type == EventTypes.Member:
@@ -88,5 +90,5 @@ class EventValidator(object):
for s in keys:
if s not in d:
raise SynapseError(400, "'%s' not in content" % (s,))
if not isinstance(d[s], basestring):
if not isinstance(d[s], string_types):
raise SynapseError(400, "Not '%s' a string type" % (s,))

View File

@@ -14,13 +14,17 @@
# limitations under the License.
import logging
from synapse.api.errors import SynapseError
import six
from twisted.internet import defer
from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import Codes, SynapseError
from synapse.crypto.event_signing import check_event_content_hash
from synapse.events import FrozenEvent
from synapse.events.utils import prune_event
from synapse.http.servlet import assert_params_in_request
from synapse.util import unwrapFirstError, logcontext
from twisted.internet import defer
from synapse.http.servlet import assert_params_in_dict
from synapse.util import logcontext, unwrapFirstError
logger = logging.getLogger(__name__)
@@ -190,11 +194,23 @@ def event_from_pdu_json(pdu_json, outlier=False):
FrozenEvent
Raises:
SynapseError: if the pdu is missing required fields
SynapseError: if the pdu is missing required fields or is otherwise
not a valid matrix event
"""
# we could probably enforce a bunch of other fields here (room_id, sender,
# origin, etc etc)
assert_params_in_request(pdu_json, ('event_id', 'type'))
assert_params_in_dict(pdu_json, ('event_id', 'type', 'depth'))
depth = pdu_json['depth']
if not isinstance(depth, six.integer_types):
raise SynapseError(400, "Depth %r not an intger" % (depth, ),
Codes.BAD_JSON)
if depth < 0:
raise SynapseError(400, "Depth too small", Codes.BAD_JSON)
elif depth > MAX_DEPTH:
raise SynapseError(400, "Depth too large", Codes.BAD_JSON)
event = FrozenEvent(
pdu_json
)
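Under the stricter validation above, PDUs like the following would now be rejected before any further processing. A sketch using only the names imported in this file (`event_from_pdu_json`, `SynapseError`, `MAX_DEPTH`):

```python
# Sketch: each of these PDUs is now rejected with a SynapseError.
bad_pdus = [
    {"event_id": "$a:s", "type": "m.room.message"},                          # no depth
    {"event_id": "$a:s", "type": "m.room.message", "depth": "3"},            # not an integer
    {"event_id": "$a:s", "type": "m.room.message", "depth": -1},             # too small
    {"event_id": "$a:s", "type": "m.room.message", "depth": MAX_DEPTH + 1},  # too large
]
for pdu in bad_pdus:
    try:
        event_from_pdu_json(pdu)
        raise AssertionError("expected rejection")
    except SynapseError:
        pass  # rejected as expected
```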

View File

@@ -19,36 +19,42 @@ import itertools
import logging
import random
from six.moves import range
from prometheus_client import Counter
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.constants import KNOWN_ROOM_VERSIONS, EventTypes, Membership
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError, FederationDeniedError
CodeMessageException,
FederationDeniedError,
HttpResponseException,
SynapseError,
)
from synapse.events import builder
from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
import synapse.metrics
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.util import logcontext, unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.logutils import log_function
from synapse.util.retryutils import NotRetryingDestination
logger = logging.getLogger(__name__)
# synapse.federation.federation_client is a silly name
metrics = synapse.metrics.get_metrics_for("synapse.federation.client")
sent_queries_counter = metrics.register_counter("sent_queries", labels=["type"])
sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["type"])
PDU_RETRY_TIME_MS = 1 * 60 * 1000
class InvalidResponseError(RuntimeError):
"""Helper for _try_destination_list: indicates that the server returned a response
we couldn't parse
"""
pass
class FederationClient(FederationBase):
def __init__(self, hs):
super(FederationClient, self).__init__(hs)
@@ -106,7 +112,7 @@ class FederationClient(FederationBase):
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc(query_type)
sent_queries_counter.labels(query_type).inc()
return self.transport_layer.make_query(
destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail,
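The metric calls above follow the prometheus_client migration pattern used throughout this diff: counters become module-level `Counter` objects and per-type values become labels. A minimal sketch using only the public prometheus_client API:

```python
from prometheus_client import Counter

# new style: one Counter with a "type" label ...
sent_queries_counter = Counter(
    "synapse_federation_client_sent_queries", "", ["type"],
)

# ... incremented per query type
# (old style was sent_queries_counter.inc(query_type))
sent_queries_counter.labels("client_device_keys").inc()
```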
@@ -125,7 +131,7 @@ class FederationClient(FederationBase):
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc("client_device_keys")
sent_queries_counter.labels("client_device_keys").inc()
return self.transport_layer.query_client_keys(
destination, content, timeout
)
@@ -135,7 +141,7 @@ class FederationClient(FederationBase):
"""Query the device keys for a list of user ids hosted on a remote
server.
"""
sent_queries_counter.inc("user_devices")
sent_queries_counter.labels("user_devices").inc()
return self.transport_layer.query_user_devices(
destination, user_id, timeout
)
@@ -152,7 +158,7 @@ class FederationClient(FederationBase):
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc("client_one_time_keys")
sent_queries_counter.labels("client_one_time_keys").inc()
return self.transport_layer.claim_client_keys(
destination, content, timeout
)
@@ -392,9 +398,9 @@ class FederationClient(FederationBase):
"""
if return_local:
seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
signed_events = seen_events.values()
signed_events = list(seen_events.values())
else:
seen_events = yield self.store.have_events(event_ids)
seen_events = yield self.store.have_seen_events(event_ids)
signed_events = []
failed_to_fetch = set()
@@ -413,11 +419,12 @@ class FederationClient(FederationBase):
batch_size = 20
missing_events = list(missing_events)
for i in xrange(0, len(missing_events), batch_size):
for i in range(0, len(missing_events), batch_size):
batch = set(missing_events[i:i + batch_size])
deferreds = [
preserve_fn(self.get_pdu)(
run_in_background(
self.get_pdu,
destinations=random_server_list(),
event_id=e_id,
)
@@ -458,8 +465,63 @@ class FederationClient(FederationBase):
defer.returnValue(signed_auth)
@defer.inlineCallbacks
def _try_destination_list(self, description, destinations, callback):
"""Try an operation on a series of servers, until it succeeds
Args:
description (unicode): description of the operation we're doing, for logging
destinations (Iterable[unicode]): list of server_names to try
callback (callable): Function to run for each server. Passed a single
argument: the server_name to try. May return a deferred.
If the callback raises a CodeMessageException with a 300/400 code,
attempts to perform the operation stop immediately and the exception is
reraised.
Otherwise, if the callback raises an Exception the error is logged and the
next server is tried. Normally the stacktrace is logged but this is
suppressed if the exception is an InvalidResponseError.
Returns:
The [Deferred] result of callback, if it succeeds
Raises:
SynapseError if the chosen remote server returns a 300/400 code.
RuntimeError if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
try:
res = yield callback(destination)
defer.returnValue(res)
except InvalidResponseError as e:
logger.warn(
"Failed to %s via %s: %s",
description, destination, e,
)
except HttpResponseException as e:
if not 500 <= e.code < 600:
raise e.to_synapse_error()
else:
logger.warn(
"Failed to %s via %s: %i %s",
description, destination, e.code, e.message,
)
except Exception:
logger.warn(
"Failed to %s via %s",
description, destination, exc_info=1,
)
raise RuntimeError("Failed to %s via any server" % (description, ))
def make_membership_event(self, destinations, room_id, user_id, membership,
content={},):
content, params):
"""
Creates an m.room.member event, with context, without participating in the room.
@@ -475,13 +537,15 @@ class FederationClient(FederationBase):
user_id (str): The user whose membership is being evented.
membership (str): The "membership" property of the event. Must be
one of "join" or "leave".
content (object): Any additional data to put into the content field
content (dict): Any additional data to put into the content field
of the event.
params (dict[str, str|Iterable[str]]): Query parameters to include in the
request.
Return:
Deferred: resolves to a tuple of (origin (str), event (object))
where origin is the remote homeserver which generated the event.
Fails with a ``CodeMessageException`` if the chosen remote server
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
@@ -492,50 +556,37 @@ class FederationClient(FederationBase):
"make_membership_event called with membership='%s', must be one of %s" %
(membership, ",".join(valid_memberships))
)
for destination in destinations:
if destination == self.server_name:
continue
try:
ret = yield self.transport_layer.make_membership_event(
destination, room_id, user_id, membership
)
@defer.inlineCallbacks
def send_request(destination):
ret = yield self.transport_layer.make_membership_event(
destination, room_id, user_id, membership, params,
)
pdu_dict = ret["event"]
pdu_dict = ret.get("event", None)
if not isinstance(pdu_dict, dict):
raise InvalidResponseError("Bad 'event' field in response")
logger.debug("Got response to make_%s: %s", membership, pdu_dict)
logger.debug("Got response to make_%s: %s", membership, pdu_dict)
pdu_dict["content"].update(content)
pdu_dict["content"].update(content)
# The protoevent received over the JSON wire may not have all
# the required fields. Let's just gloss over that because
# there are some we never care about
if "prev_state" not in pdu_dict:
pdu_dict["prev_state"] = []
# The protoevent received over the JSON wire may not have all
# the required fields. Let's just gloss over that because
# there are some we never care about
if "prev_state" not in pdu_dict:
pdu_dict["prev_state"] = []
ev = builder.EventBuilder(pdu_dict)
ev = builder.EventBuilder(pdu_dict)
defer.returnValue(
(destination, ev)
)
break
except CodeMessageException as e:
if not 500 <= e.code < 600:
raise
else:
logger.warn(
"Failed to make_%s via %s: %s",
membership, destination, e.message
)
except Exception as e:
logger.warn(
"Failed to make_%s via %s: %s",
membership, destination, e.message
)
defer.returnValue(
(destination, ev)
)
raise RuntimeError("Failed to send to any server.")
return self._try_destination_list(
"make_" + membership, destinations, send_request,
)
@defer.inlineCallbacks
def send_join(self, destinations, pdu):
"""Sends a join event to one of a list of homeservers.
@@ -552,103 +603,111 @@ class FederationClient(FederationBase):
giving the server the event was sent to, ``state`` (?) and
``auth_chain``.
Fails with a ``CodeMessageException`` if the chosen remote server
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
try:
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_join(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
def check_authchain_validity(signed_auth_chain):
for e in signed_auth_chain:
if e.type == EventTypes.Create:
create_event = e
break
else:
raise InvalidResponseError(
"no %s in auth chain" % (EventTypes.Create,),
)
logger.debug("Got content: %s", content)
# the room version should be sane.
room_version = create_event.content.get("room_version", "1")
if room_version not in KNOWN_ROOM_VERSIONS:
# This shouldn't be possible, because the remote server should have
# rejected the join attempt during make_join.
raise InvalidResponseError(
"room appears to have unsupported version %s" % (
room_version,
))
state = [
event_from_pdu_json(p, outlier=True)
for p in content.get("state", [])
]
@defer.inlineCallbacks
def send_request(destination):
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_join(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
auth_chain = [
event_from_pdu_json(p, outlier=True)
for p in content.get("auth_chain", [])
]
logger.debug("Got content: %s", content)
pdus = {
p.event_id: p
for p in itertools.chain(state, auth_chain)
}
state = [
event_from_pdu_json(p, outlier=True)
for p in content.get("state", [])
]
valid_pdus = yield self._check_sigs_and_hash_and_fetch(
destination, pdus.values(),
outlier=True,
)
auth_chain = [
event_from_pdu_json(p, outlier=True)
for p in content.get("auth_chain", [])
]
valid_pdus_map = {
p.event_id: p
for p in valid_pdus
}
pdus = {
p.event_id: p
for p in itertools.chain(state, auth_chain)
}
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
signed_state = [
copy.copy(valid_pdus_map[p.event_id])
for p in state
if p.event_id in valid_pdus_map
]
valid_pdus = yield self._check_sigs_and_hash_and_fetch(
destination, list(pdus.values()),
outlier=True,
)
signed_auth = [
valid_pdus_map[p.event_id]
for p in auth_chain
if p.event_id in valid_pdus_map
]
valid_pdus_map = {
p.event_id: p
for p in valid_pdus
}
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
for s in signed_state:
s.internal_metadata = copy.deepcopy(s.internal_metadata)
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
signed_state = [
copy.copy(valid_pdus_map[p.event_id])
for p in state
if p.event_id in valid_pdus_map
]
auth_chain.sort(key=lambda e: e.depth)
signed_auth = [
valid_pdus_map[p.event_id]
for p in auth_chain
if p.event_id in valid_pdus_map
]
defer.returnValue({
"state": signed_state,
"auth_chain": signed_auth,
"origin": destination,
})
except CodeMessageException as e:
if not 500 <= e.code < 600:
raise
else:
logger.exception(
"Failed to send_join via %s: %s",
destination, e.message
)
except Exception as e:
logger.exception(
"Failed to send_join via %s: %s",
destination, e.message
)
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
for s in signed_state:
s.internal_metadata = copy.deepcopy(s.internal_metadata)
raise RuntimeError("Failed to send to any server.")
check_authchain_validity(signed_auth)
defer.returnValue({
"state": signed_state,
"auth_chain": signed_auth,
"origin": destination,
})
return self._try_destination_list("send_join", destinations, send_request)
@defer.inlineCallbacks
def send_invite(self, destination, room_id, event_id, pdu):
time_now = self._clock.time_msec()
code, content = yield self.transport_layer.send_invite(
destination=destination,
room_id=room_id,
event_id=event_id,
content=pdu.get_pdu_json(time_now),
)
try:
code, content = yield self.transport_layer.send_invite(
destination=destination,
room_id=room_id,
event_id=event_id,
content=pdu.get_pdu_json(time_now),
)
except HttpResponseException as e:
if e.code == 403:
raise e.to_synapse_error()
raise
pdu_dict = content["event"]
@@ -663,7 +722,6 @@ class FederationClient(FederationBase):
defer.returnValue(pdu)
@defer.inlineCallbacks
def send_leave(self, destinations, pdu):
"""Sends a leave event to one of a list of homeservers.
@@ -680,35 +738,25 @@ class FederationClient(FederationBase):
Return:
Deferred: resolves to None.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a non-200 code.
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
@defer.inlineCallbacks
def send_request(destination):
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_leave(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
try:
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_leave(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
logger.debug("Got content: %s", content)
defer.returnValue(None)
logger.debug("Got content: %s", content)
defer.returnValue(None)
except CodeMessageException:
raise
except Exception as e:
logger.exception(
"Failed to send_leave via %s: %s",
destination, e.message
)
raise RuntimeError("Failed to send to any server.")
return self._try_destination_list("send_leave", destinations, send_request)
def get_public_rooms(self, destination, limit=None, since_token=None,
search_filter=None, include_all_networks=False,

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,24 +14,38 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import re
import six
from six import iteritems
from canonicaljson import json
from prometheus_client import Counter
import simplejson as json
from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
from synapse.api.errors import AuthError, FederationError, SynapseError, NotFoundError
from synapse.crypto.event_signing import compute_event_signature
from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
from synapse.api.constants import EventTypes
from synapse.api.errors import (
AuthError,
FederationError,
IncompatibleRoomVersionError,
NotFoundError,
SynapseError,
)
from synapse.crypto.event_signing import compute_event_signature
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
import synapse.metrics
from synapse.http.endpoint import parse_server_name
from synapse.replication.http.federation import (
ReplicationFederationSendEduRestServlet,
ReplicationGetQueryRestServlet,
)
from synapse.types import get_domain_from_id
from synapse.util import async
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logcontext import make_deferred_yieldable, preserve_fn
from synapse.util.logutils import log_function
# when processing incoming transactions, we try to handle multiple rooms in
@@ -39,25 +54,25 @@ TRANSACTION_CONCURRENCY_LIMIT = 10
logger = logging.getLogger(__name__)
# synapse.federation.federation_server is a silly name
metrics = synapse.metrics.get_metrics_for("synapse.federation.server")
received_pdus_counter = Counter("synapse_federation_server_received_pdus", "")
received_pdus_counter = metrics.register_counter("received_pdus")
received_edus_counter = Counter("synapse_federation_server_received_edus", "")
received_edus_counter = metrics.register_counter("received_edus")
received_queries_counter = metrics.register_counter("received_queries", labels=["type"])
received_queries_counter = Counter(
"synapse_federation_server_received_queries", "", ["type"]
)
class FederationServer(FederationBase):
def __init__(self, hs):
super(FederationServer, self).__init__(hs)
self.auth = hs.get_auth()
self.handler = hs.get_handlers().federation_handler
self._server_linearizer = async.Linearizer("fed_server")
self._transaction_linearizer = async.Linearizer("fed_txn_handler")
self._server_linearizer = Linearizer("fed_server")
self._transaction_linearizer = Linearizer("fed_txn_handler")
self.transaction_actions = TransactionActions(self.store)
@@ -65,12 +80,15 @@ class FederationServer(FederationBase):
# We cache responses to state queries, as they take a while and often
# come in waves.
self._state_resp_cache = ResponseCache(hs, timeout_ms=30000)
self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)
@defer.inlineCallbacks
@log_function
def on_backfill_request(self, origin, room_id, versions, limit):
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdus = yield self.handler.on_backfill_request(
origin, room_id, versions, limit
)
@@ -129,7 +147,9 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id)
received_pdus_counter.inc_by(len(transaction.pdus))
received_pdus_counter.inc(len(transaction.pdus))
origin_host, _ = parse_server_name(transaction.origin)
pdus_by_room = {}
@@ -151,9 +171,21 @@ class FederationServer(FederationBase):
# we can process different rooms in parallel (which is useful if they
# require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu.
@defer.inlineCallbacks
def process_pdus_for_room(room_id):
logger.debug("Processing PDUs for %s", room_id)
try:
yield self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warn(
"Ignoring PDUs for room %s from banned server", room_id,
)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
return
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
try:
@@ -165,10 +197,14 @@ class FederationServer(FederationBase):
logger.warn("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
f = failure.Failure()
pdu_results[event_id] = {"error": str(e)}
logger.exception("Failed to handle PDU %s", event_id)
logger.error(
"Failed to handle PDU %s: %s",
event_id, f.getTraceback().rstrip(),
)
yield async.concurrently_execute(
yield concurrently_execute(
process_pdus_for_room, pdus_by_room.keys(),
TRANSACTION_CONCURRENCY_LIMIT,
)
@@ -181,10 +217,6 @@ class FederationServer(FederationBase):
edu.content
)
pdu_failures = getattr(transaction, "pdu_failures", [])
for failure in pdu_failures:
logger.info("Got failure %r", failure)
response = {
"pdus": pdu_results,
}
@@ -208,20 +240,24 @@ class FederationServer(FederationBase):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
result = self._state_resp_cache.get((room_id, event_id))
if not result:
with (yield self._server_linearizer.queue((origin, room_id))):
d = self._state_resp_cache.set(
(room_id, event_id),
preserve_fn(self._on_context_state_request_compute)(room_id, event_id)
)
resp = yield make_deferred_yieldable(d)
else:
resp = yield make_deferred_yieldable(result)
# we grab the linearizer to protect ourselves from servers which hammer
# us. In theory we might already have the response to this query
# in the cache so we could return it without waiting for the linearizer
# - but that's non-trivial to get right, and anyway somewhat defeats
# the point of the linearizer.
with (yield self._server_linearizer.queue((origin, room_id))):
resp = yield self._state_resp_cache.wrap(
(room_id, event_id),
self._on_context_state_request_compute,
room_id, event_id,
)
defer.returnValue((200, resp))
@@ -230,6 +266,9 @@ class FederationServer(FederationBase):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@@ -273,7 +312,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
@log_function
def on_pdu_request(self, origin, event_id):
pdu = yield self._get_persisted_pdu(origin, event_id)
pdu = yield self.handler.get_persisted_pdu(origin, event_id)
if pdu:
defer.returnValue(
@@ -289,19 +328,32 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
received_queries_counter.inc(query_type)
received_queries_counter.labels(query_type).inc()
resp = yield self.registry.on_query(query_type, args)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_make_join_request(self, room_id, user_id):
def on_make_join_request(self, origin, room_id, user_id, supported_versions):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
room_version = yield self.store.get_room_version(room_id)
if room_version not in supported_versions:
logger.warn("Room version %s not in %s", room_version, supported_versions)
raise IncompatibleRoomVersionError(room_version=room_version)
pdu = yield self.handler.on_make_join_request(room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)})
defer.returnValue({
"event": pdu.get_pdu_json(time_now),
"room_version": room_version,
})
@defer.inlineCallbacks
def on_invite_request(self, origin, content):
pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
ret_pdu = yield self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec()
defer.returnValue((200, {"event": ret_pdu.get_pdu_json(time_now)}))
@@ -310,6 +362,10 @@ class FederationServer(FederationBase):
def on_send_join_request(self, origin, content):
logger.debug("on_send_join_request: content: %s", content)
pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
res_pdus = yield self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
@@ -321,7 +377,9 @@ class FederationServer(FederationBase):
}))
@defer.inlineCallbacks
def on_make_leave_request(self, room_id, user_id):
def on_make_leave_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_leave_request(room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)})
@@ -330,6 +388,10 @@ class FederationServer(FederationBase):
def on_send_leave_request(self, origin, content):
logger.debug("on_send_leave_request: content: %s", content)
pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
yield self.handler.on_send_leave_request(origin, pdu)
defer.returnValue((200, {}))
@@ -337,6 +399,9 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id):
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
res = {
@@ -365,6 +430,9 @@ class FederationServer(FederationBase):
Deferred: Results in `dict` with the same format as `content`
"""
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
auth_chain = [
event_from_pdu_json(e)
for e in content["auth_chain"]
@@ -377,6 +445,7 @@ class FederationServer(FederationBase):
ret = yield self.handler.on_query_auth(
origin,
event_id,
room_id,
signed_auth,
content.get("rejects", []),
content.get("missing", []),
@@ -425,9 +494,9 @@ class FederationServer(FederationBase):
"Claimed one-time-keys: %s",
",".join((
"%s for %s:%s" % (key_id, user_id, device_id)
for user_id, user_keys in json_result.iteritems()
for device_id, device_keys in user_keys.iteritems()
for key_id, _ in device_keys.iteritems()
for user_id, user_keys in iteritems(json_result)
for device_id, device_keys in iteritems(user_keys)
for key_id, _ in iteritems(device_keys)
)),
)
@@ -438,6 +507,9 @@ class FederationServer(FederationBase):
def on_get_missing_events(self, origin, room_id, earliest_events,
latest_events, limit, min_depth):
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
logger.info(
"on_get_missing_events: earliest_events: %r, latest_events: %r,"
" limit: %d, min_depth: %d",
@@ -466,17 +538,6 @@ class FederationServer(FederationBase):
ts_now_ms = self._clock.time_msec()
return self.store.get_user_id_for_open_id_token(token, ts_now_ms)
@log_function
def _get_persisted_pdu(self, origin, event_id, do_auth=True):
""" Get a PDU from the database with given origin and id.
Returns:
Deferred: Results in a `Pdu`.
"""
return self.handler.get_persisted_pdu(
origin, event_id, do_auth=do_auth
)
def _transaction_from_pdus(self, pdu_list):
"""Returns a new Transaction containing the given PDUs suitable for
transmission.
@@ -494,13 +555,33 @@ class FederationServer(FederationBase):
def _handle_received_pdu(self, origin, pdu):
""" Process a PDU received in a federation /send/ transaction.
If the event is invalid, then this method throws a FederationError.
(The error will then be logged and sent back to the sender (which
probably won't do anything with it), and other events in the
transaction will be processed as normal).
It is likely that we'll then receive other events which refer to
this rejected_event in their prev_events, etc. When that happens,
we'll attempt to fetch the rejected event again, which will presumably
fail, so those second-generation events will also get rejected.
Eventually, we get to the point where there are more than 10 events
between any new events and the original rejected event. Since we
only try to backfill 10 events deep on received pdu, we then accept the
new event, possibly introducing a discontinuity in the DAG, with new
forward extremities, so normal service is approximately returned,
until we try to backfill across the discontinuity.
Args:
origin (str): server which sent the pdu
pdu (FrozenEvent): received pdu
Returns (Deferred): completes with None
Raises: FederationError if the signatures / hash do not match
"""
Raises: FederationError if the signatures / hash do not match, or
if the event was unacceptable for any other reason (eg, too large,
too many prev_events, couldn't find the prev_events)
"""
# check that it's actually being sent from a valid destination to
# workaround bug #1753 in 0.18.5 and 0.18.6
if origin != get_domain_from_id(pdu.event_id):
@@ -536,7 +617,9 @@ class FederationServer(FederationBase):
affected=pdu.event_id,
)
yield self.handler.on_receive_pdu(origin, pdu, get_missing=True)
yield self.handler.on_receive_pdu(
origin, pdu, get_missing=True, sent_to_us_directly=True,
)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name
@@ -564,6 +647,101 @@ class FederationServer(FederationBase):
)
defer.returnValue(ret)
@defer.inlineCallbacks
def check_server_matches_acl(self, server_name, room_id):
"""Check if the given server is allowed by the server ACLs in the room
Args:
server_name (str): name of server, *without any port part*
room_id (str): ID of the room to check
Raises:
AuthError if the server does not match the ACL
"""
state_ids = yield self.store.get_current_state_ids(room_id)
acl_event_id = state_ids.get((EventTypes.ServerACL, ""))
if not acl_event_id:
return
acl_event = yield self.store.get_event(acl_event_id)
if server_matches_acl_event(server_name, acl_event):
return
raise AuthError(code=403, msg="Server is banned from room")
def server_matches_acl_event(server_name, acl_event):
"""Check if the given server is allowed by the ACL event
Args:
server_name (str): name of server, without any port part
acl_event (EventBase): m.room.server_acl event
Returns:
bool: True if this server is allowed by the ACLs
"""
logger.debug("Checking %s against acl %s", server_name, acl_event.content)
# first of all, check if literal IPs are blocked, and if so, whether the
# server name is a literal IP
allow_ip_literals = acl_event.content.get("allow_ip_literals", True)
if not isinstance(allow_ip_literals, bool):
logger.warn("Ignorning non-bool allow_ip_literals flag")
allow_ip_literals = True
if not allow_ip_literals:
# check for ipv6 literals. These start with '['.
if server_name[0] == '[':
return False
# check for ipv4 literals. We can just lift the routine from twisted.
if isIPAddress(server_name):
return False
# next, check the deny list
deny = acl_event.content.get("deny", [])
if not isinstance(deny, (list, tuple)):
logger.warn("Ignorning non-list deny ACL %s", deny)
deny = []
for e in deny:
if _acl_entry_matches(server_name, e):
# logger.info("%s matched deny rule %s", server_name, e)
return False
# then the allow list.
allow = acl_event.content.get("allow", [])
if not isinstance(allow, (list, tuple)):
logger.warn("Ignorning non-list allow ACL %s", allow)
allow = []
for e in allow:
if _acl_entry_matches(server_name, e):
# logger.info("%s matched allow rule %s", server_name, e)
return True
# everything else should be rejected.
# logger.info("%s fell through", server_name)
return False
def _acl_entry_matches(server_name, acl_entry):
if not isinstance(acl_entry, six.string_types):
logger.warn("Ignoring non-str ACL entry '%s' (is %s)", acl_entry, type(acl_entry))
return False
regex = _glob_to_regex(acl_entry)
return regex.match(server_name)
def _glob_to_regex(glob):
res = ''
for c in glob:
if c == '*':
res = res + '.*'
elif c == '?':
res = res + '.'
else:
res = res + re.escape(c)
return re.compile(res + "\\Z", re.IGNORECASE)
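An illustration (not in the diff) of the glob semantics defined by `_glob_to_regex`: `*` maps to `.*`, `?` to `.`, everything else is escaped, and the pattern is anchored with `\Z` and matched case-insensitively:

```python
assert _glob_to_regex("*.example.com").match("Matrix.Example.COM")  # case-insensitive
assert not _glob_to_regex("*.example.com").match("example.com")     # no subdomain
assert _glob_to_regex("ev?l.example.com").match("evil.example.com")
assert not _glob_to_regex("1.2.3.4").match("1x2x3x4")               # '.' is escaped
```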
class FederationHandlerRegistry(object):
"""Allows classes to register themselves as handlers for a given EDU or
@@ -586,6 +764,8 @@ class FederationHandlerRegistry(object):
if edu_type in self.edu_handlers:
raise KeyError("Already have an EDU handler for %s" % (edu_type,))
logger.info("Registering federation EDU handler for %r", edu_type)
self.edu_handlers[edu_type] = handler
def register_query_handler(self, query_type, handler):
@@ -604,6 +784,8 @@ class FederationHandlerRegistry(object):
"Already have a Query handler for %s" % (query_type,)
)
logger.info("Registering federation query handler for %r", query_type)
self.query_handlers[query_type] = handler
@defer.inlineCallbacks
@@ -626,3 +808,49 @@ class FederationHandlerRegistry(object):
raise NotFoundError("No handler for Query type '%s'" % (query_type,))
return handler(args)
class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
"""A FederationHandlerRegistry for worker processes.
When receiving EDU or queries it will check if an appropriate handler has
been registered on the worker, if there isn't one then it calls off to the
master process.
"""
def __init__(self, hs):
self.config = hs.config
self.http_client = hs.get_simple_http_client()
self.clock = hs.get_clock()
self._get_query_client = ReplicationGetQueryRestServlet.make_client(hs)
self._send_edu = ReplicationFederationSendEduRestServlet.make_client(hs)
super(ReplicationFederationHandlerRegistry, self).__init__()
def on_edu(self, edu_type, origin, content):
"""Overrides FederationHandlerRegistry
"""
handler = self.edu_handlers.get(edu_type)
if handler:
return super(ReplicationFederationHandlerRegistry, self).on_edu(
edu_type, origin, content,
)
return self._send_edu(
edu_type=edu_type,
origin=origin,
content=content,
)
def on_query(self, query_type, args):
"""Overrides FederationHandlerRegistry
"""
handler = self.query_handlers.get(query_type)
if handler:
return handler(args)
return self._get_query_client(
query_type=query_type,
args=args,
)
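A hedged usage sketch of the worker registry above: `hs` and the handler body are hypothetical, and `register_edu_handler` is assumed to mirror the `register_query_handler` signature shown earlier. Locally registered handlers run in-process; anything else is proxied to the master over the replication client:

```python
registry = ReplicationFederationHandlerRegistry(hs)

def on_typing_edu(origin, content):
    # hypothetical EDU handler body
    pass

registry.register_edu_handler("m.typing", on_typing_edu)

registry.on_edu("m.typing", "remote.example.com", {})    # handled locally
registry.on_edu("m.presence", "remote.example.com", {})  # forwarded to master
```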

View File

@@ -19,13 +19,12 @@ package.
These actions are mostly only used by the :py:mod:`.replication` module.
"""
import logging
from twisted.internet import defer
from synapse.util.logutils import log_function
import logging
logger = logging.getLogger(__name__)

View File

@@ -29,23 +29,22 @@ dead worker doesn't cause the queues to grow limitlessly.
Events are replicated via a separate events stream.
"""
from .units import Edu
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
import synapse.metrics
from blist import sorteddict
import logging
from collections import namedtuple
import logging
from six import iteritems, itervalues
from sortedcontainers import SortedDict
from synapse.metrics import LaterGauge
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
from .units import Edu
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
class FederationRemoteSendQueue(object):
"""A drop in replacement for TransactionQueue"""
@@ -56,33 +55,29 @@ class FederationRemoteSendQueue(object):
self.is_mine_id = hs.is_mine_id
self.presence_map = {} # Pending presence map user_id -> UserPresenceState
self.presence_changed = sorteddict() # Stream position -> user_id
self.presence_changed = SortedDict() # Stream position -> user_id
self.keyed_edu = {} # (destination, key) -> EDU
self.keyed_edu_changed = sorteddict() # stream position -> (destination, key)
self.keyed_edu_changed = SortedDict() # stream position -> (destination, key)
self.edus = sorteddict() # stream position -> Edu
self.edus = SortedDict() # stream position -> Edu
self.failures = sorteddict() # stream position -> (destination, Failure)
self.device_messages = sorteddict() # stream position -> destination
self.device_messages = SortedDict() # stream position -> destination
self.pos = 1
self.pos_time = sorteddict()
self.pos_time = SortedDict()
# EVERYTHING IS SAD. In particular, python only makes new scopes when
# we make a new function, so we need to make a new function so the inner
# lambda binds to the queue rather than to the name of the queue which
# changes. ARGH.
def register(name, queue):
metrics.register_callback(
queue_name + "_size",
lambda: len(queue),
)
LaterGauge("synapse_federation_send_queue_%s_size" % (queue_name,),
"", [], lambda: len(queue))
for queue_name in [
"presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed",
"edus", "failures", "device_messages", "pos_time",
"edus", "device_messages", "pos_time",
]:
register(queue_name, getattr(self, queue_name))
@@ -101,7 +96,7 @@ class FederationRemoteSendQueue(object):
now = self.clock.time_msec()
keys = self.pos_time.keys()
time = keys.bisect_left(now - FIVE_MINUTES_AGO)
time = self.pos_time.bisect_left(now - FIVE_MINUTES_AGO)
if not keys[:time]:
return
@@ -116,13 +111,13 @@ class FederationRemoteSendQueue(object):
with Measure(self.clock, "send_queue._clear"):
# Delete things out of presence maps
keys = self.presence_changed.keys()
i = keys.bisect_left(position_to_delete)
i = self.presence_changed.bisect_left(position_to_delete)
for key in keys[:i]:
del self.presence_changed[key]
user_ids = set(
user_id
for uids in self.presence_changed.itervalues()
for uids in itervalues(self.presence_changed)
for user_id in uids
)
@@ -134,7 +129,7 @@ class FederationRemoteSendQueue(object):
# Delete things out of keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_left(position_to_delete)
i = self.keyed_edu_changed.bisect_left(position_to_delete)
for key in keys[:i]:
del self.keyed_edu_changed[key]
@@ -148,19 +143,13 @@ class FederationRemoteSendQueue(object):
# Delete things out of edu map
keys = self.edus.keys()
i = keys.bisect_left(position_to_delete)
i = self.edus.bisect_left(position_to_delete)
for key in keys[:i]:
del self.edus[key]
# Delete things out of failure map
keys = self.failures.keys()
i = keys.bisect_left(position_to_delete)
for key in keys[:i]:
del self.failures[key]
# Delete things out of device map
keys = self.device_messages.keys()
i = keys.bisect_left(position_to_delete)
i = self.device_messages.bisect_left(position_to_delete)
for key in keys[:i]:
del self.device_messages[key]
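The queue-expiry hunks above all follow the same `sortedcontainers.SortedDict` idiom that replaces the old `blist.sorteddict`: `bisect_left`/`bisect_right` are called on the dict itself, and the key view supports slicing. A small self-contained sketch:

```python
from sortedcontainers import SortedDict

pos_time = SortedDict({1: "a", 3: "b", 7: "c"})  # stream position -> payload

i = pos_time.bisect_left(5)             # index of the first key >= 5
for key in list(pos_time.keys()[:i]):   # copy the slice before deleting
    del pos_time[key]

assert list(pos_time) == [7]
```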
@@ -200,20 +189,13 @@ class FederationRemoteSendQueue(object):
# We only want to send presence for our own users, so lets always just
# filter here just in case.
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
local_states = list(filter(lambda s: self.is_mine_id(s.user_id), states))
self.presence_map.update({state.user_id: state for state in local_states})
self.presence_changed[pos] = [state.user_id for state in local_states]
self.notifier.on_new_replication_data()
def send_failure(self, failure, destination):
"""As per TransactionQueue"""
pos = self._next_pos()
self.failures[pos] = (destination, str(failure))
self.notifier.on_new_replication_data()
def send_device_messages(self, destination):
"""As per TransactionQueue"""
pos = self._next_pos()
@@ -253,13 +235,12 @@ class FederationRemoteSendQueue(object):
self._clear_queue_before_pos(federation_ack)
# Fetch changed presence
keys = self.presence_changed.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
i = self.presence_changed.bisect_right(from_token)
j = self.presence_changed.bisect_right(to_token) + 1
dest_user_ids = [
(pos, user_id)
for pos in keys[i:j]
for user_id in self.presence_changed[pos]
for pos, user_id_list in self.presence_changed.items()[i:j]
for user_id in user_id_list
]
for (key, user_id) in dest_user_ids:
@@ -268,48 +249,33 @@ class FederationRemoteSendQueue(object):
)))
# Fetch changes keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
i = self.keyed_edu_changed.bisect_right(from_token)
j = self.keyed_edu_changed.bisect_right(to_token) + 1
# We purposefully clobber based on the key here, python dict comprehensions
# always use the last value, so this will correctly point to the last
# stream position.
keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]}
keyed_edus = {v: k for k, v in self.keyed_edu_changed.items()[i:j]}
for ((destination, edu_key), pos) in keyed_edus.iteritems():
for ((destination, edu_key), pos) in iteritems(keyed_edus):
rows.append((pos, KeyedEduRow(
key=edu_key,
edu=self.keyed_edu[(destination, edu_key)],
)))
# Fetch changed edus
keys = self.edus.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
edus = ((k, self.edus[k]) for k in keys[i:j])
i = self.edus.bisect_right(from_token)
j = self.edus.bisect_right(to_token) + 1
edus = self.edus.items()[i:j]
for (pos, edu) in edus:
rows.append((pos, EduRow(edu)))
# Fetch changed failures
keys = self.failures.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
failures = ((k, self.failures[k]) for k in keys[i:j])
for (pos, (destination, failure)) in failures:
rows.append((pos, FailureRow(
destination=destination,
failure=failure,
)))
# Fetch changed device messages
keys = self.device_messages.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
device_messages = {self.device_messages[k]: k for k in keys[i:j]}
i = self.device_messages.bisect_right(from_token)
j = self.device_messages.bisect_right(to_token) + 1
device_messages = {v: k for k, v in self.device_messages.items()[i:j]}
for (destination, pos) in device_messages.iteritems():
for (destination, pos) in iteritems(device_messages):
rows.append((pos, DeviceRow(
destination=destination,
)))
@@ -425,34 +391,6 @@ class EduRow(BaseFederationRow, namedtuple("EduRow", (
buff.edus.setdefault(self.edu.destination, []).append(self.edu)
class FailureRow(BaseFederationRow, namedtuple("FailureRow", (
"destination", # str
"failure",
))):
"""Streams failures to a remote server. Failures are issued when there was
something wrong with a transaction the remote sent us, e.g. it included
an event that was invalid.
"""
TypeId = "f"
@staticmethod
def from_data(data):
return FailureRow(
destination=data["destination"],
failure=data["failure"],
)
def to_data(self):
return {
"destination": self.destination,
"failure": self.failure,
}
def add_to_buffer(self, buff):
buff.failures.setdefault(self.destination, []).append(self.failure)
class DeviceRow(BaseFederationRow, namedtuple("DeviceRow", (
"destination", # str
))):
@@ -479,7 +417,6 @@ TypeToRow = {
PresenceRow,
KeyedEduRow,
EduRow,
FailureRow,
DeviceRow,
)
}
@@ -489,7 +426,6 @@ ParsedFederationStreamData = namedtuple("ParsedFederationStreamData", (
"presence", # list(UserPresenceState)
"keyed_edus", # dict of destination -> { key -> Edu }
"edus", # dict of destination -> [Edu]
"failures", # dict of destination -> [failures]
"device_destinations", # set of destinations
))
@@ -511,7 +447,6 @@ def process_rows_for_federation(transaction_queue, rows):
presence=[],
keyed_edus={},
edus={},
failures={},
device_destinations=set(),
)
@@ -528,21 +463,17 @@ def process_rows_for_federation(transaction_queue, rows):
if buff.presence:
transaction_queue.send_presence(buff.presence)
for destination, edu_map in buff.keyed_edus.iteritems():
for destination, edu_map in iteritems(buff.keyed_edus):
for key, edu in edu_map.items():
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=key,
)
for destination, edu_list in buff.edus.iteritems():
for destination, edu_list in iteritems(buff.edus):
for edu in edu_list:
transaction_queue.send_edu(
edu.destination, edu.edu_type, edu.content, key=None,
)
for destination, failure_list in buff.failures.iteritems():
for failure in failure_list:
transaction_queue.send_failure(destination, failure)
for destination in buff.device_destinations:
transaction_queue.send_device_messages(destination)
