Compare commits

869 Commits

Author SHA1 Message Date
Erik Johnston
9854f4c7ff Add basic file API 2016-07-13 16:29:35 +01:00
Erik Johnston
518b3a3f89 Track in DB file message events 2016-07-13 15:16:02 +01:00
Mark Haines
10f4856b0c Merge pull request #914 from matrix-org/markjh/upgrade
Ensure that the guest user is in the database when upgrading accounts
2016-07-08 17:02:06 +01:00
Mark Haines
dfde67a6fe Add a comment explaining allow_none 2016-07-08 15:57:06 +01:00
Mark Haines
10c843fcfb Ensure that the guest user is in the database when upgrading accounts 2016-07-08 15:15:55 +01:00
Erik Johnston
58930da52b Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-07-08 14:11:37 +01:00
Erik Johnston
0870588c20 Merge branch 'hotfixes-v0.16.1' 2016-07-08 13:22:32 +01:00
Erik Johnston
f90cf150e2 Bump version and changelog 2016-07-07 16:33:00 +01:00
Erik Johnston
067596d341 Fix bug where we did not correctly explode when multiple user_ids were set in macaroon 2016-07-07 16:22:24 +01:00
Erik Johnston
70d650be2b Merge pull request #911 from matrix-org/erikj/purge_history
Feature: Purge local room history.
2016-07-07 13:34:28 +01:00
Erik Johnston
b92e7955be Comment 2016-07-07 11:42:15 +01:00
Erik Johnston
c98e1479bd Return 400 rather than 500 2016-07-07 11:41:07 +01:00
Erik Johnston
67f2c901ea Add rest servlet. Fix SQL. 2016-07-06 15:56:59 +01:00
Erik Johnston
eef7778af9 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/test2 2016-07-06 14:50:22 +01:00
Erik Johnston
a17e7caeb7 Merge branch 'erikj/shared_secret' into erikj/test2 2016-07-06 14:46:31 +01:00
Erik Johnston
f0c06ac65c Merge pull request #909 from matrix-org/erikj/shared_secret
Add an admin option to shared secret registration (breaks backwards compat)
2016-07-06 14:08:51 +01:00
Erik Johnston
76b18df3d9 Check that there are no null bytes in user and password 2016-07-06 11:17:53 +01:00
Erik Johnston
0da24cac8b Add null separator to hmac 2016-07-06 11:05:16 +01:00
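
A minimal sketch of the nul-separated HMAC construction these two commits refer to; the function name and the admin-flag encoding are illustrative, not necessarily synapse's exact scheme:

```
import hmac
from hashlib import sha1

def registration_mac(shared_secret, user, password, admin):
    # NUL separators ensure ("ab", "c") and ("a", "bc") can never
    # produce the same MAC input, which is also why NUL bytes must be
    # rejected in the user and password themselves (see above).
    mac = hmac.new(shared_secret.encode("utf8"), digestmod=sha1)
    mac.update(user.encode("utf8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()
```
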
Erik Johnston
2e3c8acc68 Merge pull request #910 from KentShikama/hash_password_followup
Follow up to adding password pepper
2016-07-06 09:59:59 +01:00
Kent Shikama
8d9a884cee Update password config comment
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-06 12:18:19 +09:00
Kent Shikama
896bc6cd46 Update hash_password script
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-06 12:17:54 +09:00
Erik Johnston
be3548f7e1 Remove spurious txn 2016-07-05 17:46:51 +01:00
Erik Johnston
4adf93e0f7 Fix for postgres 2016-07-05 17:34:25 +01:00
Erik Johnston
651faee698 Add an admin option to shared secret registration 2016-07-05 17:30:22 +01:00
Erik Johnston
caf33b2d9b Protect password when registering using shared secret 2016-07-05 17:18:19 +01:00
Erik Johnston
8f8798bc0d Add ReadWriteLock for pagination and history prune 2016-07-05 15:30:25 +01:00
Erik Johnston
7335f0adda Add ReadWriteLock 2016-07-05 15:23:17 +01:00
David Baker
ef535178ff Merge pull request #904 from matrix-org/dbkr/register_email_no_untrusted_id_server
requestToken update
2016-07-05 15:13:34 +01:00
Mark Haines
04dee11e97 Merge pull request #906 from matrix-org/markjh/faster_events_around
Use a query that postgresql optimises better for get_events_around
2016-07-05 14:48:34 +01:00
Mark Haines
dd2ccee27d Fix typo 2016-07-05 14:06:07 +01:00
Mark Haines
b6b0132ac7 Make get_events_around more efficient on sqlite3 2016-07-05 13:55:18 +01:00
Erik Johnston
e34cb5e7dc Merge pull request #907 from KentShikama/pepper
Add pepper to password hashing
2016-07-05 11:26:22 +01:00
Kent Shikama
252ee2d979 Remove default password pepper string 2016-07-05 19:15:51 +09:00
Kent Shikama
14362bf359 Fix password config 2016-07-05 19:12:53 +09:00
Kent Shikama
1ee2584307 Fix pep8 2016-07-05 19:01:00 +09:00
Kent Shikama
507b8bb091 Add comment to prompt changing of pepper 2016-07-05 18:42:35 +09:00
Mark Haines
d44d11d864 Use true/false for boolean parameter inclusive to avoid potential for SQL injection, and possibly make the code clearer 2016-07-05 10:39:13 +01:00
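
A sketch of the pattern this commit describes: derive the SQL operator from a real boolean instead of splicing a caller-supplied string into the query (names here are illustrative):

```
def bounds_clause(inclusive):
    # Forcing `inclusive` to a bool means only these two operators can
    # ever reach the SQL text; values still go through ? placeholders.
    op = "<=" if bool(inclusive) else "<"
    return "stream_ordering %s ?" % (op,)
```
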
Erik Johnston
2d21d43c34 Add purge_history API 2016-07-05 10:28:51 +01:00
Mark Haines
0fb76c71ac Use different SQL for postgres and sqlite3 for when using multicolumn indexes 2016-07-04 19:44:55 +01:00
Kent Shikama
8bdaf5f7af Add pepper to password hashing
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-05 02:13:52 +09:00
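
In sketch form, peppering mixes a server-side secret into the password before bcrypt hashing; the function names are illustrative:

```
import bcrypt

def hash_password(password, pepper):
    # The pepper lives in server config, not the database, so a leaked
    # users table alone is not enough for an offline attack.
    return bcrypt.hashpw((password + pepper).encode("utf8"), bcrypt.gensalt())

def check_password(password, pepper, stored_hash):
    return bcrypt.checkpw((password + pepper).encode("utf8"), stored_hash)
```
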
Erik Johnston
a67bf0b074 Add storage function to purge history for a room 2016-07-04 16:02:50 +01:00
Mark Haines
f18d7546c6 Use a query that postgresql optimises better for get_events_around 2016-07-04 15:48:25 +01:00
Erik Johnston
3de8168343 Merge pull request #905 from KentShikama/add-password-hash
Optionally include password hash in createUser endpoint
2016-07-04 14:23:04 +01:00
Kent Shikama
bb069079bb Fix style violations
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-04 22:07:11 +09:00
Kent Shikama
2e5a31f197 Use .get() instead of [] to access password_hash 2016-07-04 22:00:13 +09:00
Kent Shikama
fc8007dbec Optionally include password hash in createUser endpoint
Signed-off-by: Kent Shikama <kent@kentshikama.com>
2016-07-03 15:08:15 +09:00
Richard van der Hoff
1238203bc4 code_style.rst: add link to sphinx examples 2016-07-01 09:36:51 +01:00
Richard van der Hoff
41f072fd0e code_style.rst: *fix* link to google style 2016-07-01 09:09:40 +01:00
Richard van der Hoff
5a6ef20ef6 code_style.rst: add link to google style 2016-07-01 09:08:35 +01:00
David Baker
be8be535f7 requestToken update
Don't send requestToken request to untrusted ID servers

Also correct the THREEPID_IN_USE error to add the M_ prefix. This is a backwards-incompatible change, but the only thing using this is the Angular client, which is now unmaintained, so it's probably better to just do this now.
2016-06-30 17:51:28 +01:00
Erik Johnston
ab71589c0b Merge pull request #903 from matrix-org/erikj/deactivate_user
Feature: Add deactivate account admin API
2016-06-30 15:57:29 +01:00
Erik Johnston
f328d95cef Feature: Add deactivate account admin API
Allows server admins to "deactivate" accounts, which:

- Revokes all access tokens
- Removes all threepids
- Removes password

The API is a POST to `/admin/deactivate/<user_id>`
2016-06-30 15:40:58 +01:00
Erik Johnston
aac546c978 Merge pull request #902 from matrix-org/erikj/expire_media
Feature: Implement purge_media_cache admin API
2016-06-29 15:50:57 +01:00
Erik Johnston
f52cb4cd78 Remove race 2016-06-29 15:24:50 +01:00
Mark Haines
6783534a0f Merge pull request #886 from matrix-org/markjh/async_commit
Optionally make committing to postgres asynchronous.
2016-06-29 15:21:58 +01:00
Erik Johnston
a70688445d Implement purge_media_cache admin API 2016-06-29 14:57:59 +01:00
Erik Johnston
314b146b2e Track approximate last access time for remote media 2016-06-29 11:41:20 +01:00
David Baker
7846ac3125 Merge pull request #900 from RickCogley/RickCogley-coturn-readme-2
Rick cogley coturn readme 2
2016-06-28 10:49:02 +01:00
Rick Cogley
56ec5869c9 Update turn-howto.rst to use git clone (2)
Not logical to use svn checkout against a github repo, so changed to git clone. 

Signed-off-by: Rick Cogley <rick.cogley@esolia.co.jp>
2016-06-28 18:34:38 +09:00
Rick Cogley
1ea358b28b Update turn-howto.rst to use git clone
svn checkout is not logical for a checkout from github, so changed the checkout to "git clone". 
thanks @dbkr

Signed-off-by: Rick Cogley <rick.cogley@esolia.co.jp>
2016-06-28 18:27:54 +09:00
David Baker
db74dcda5b Merge pull request #894 from matrix-org/dbkr/push_room_naming
Use similar naming we use in email notifs for push
2016-06-28 10:12:24 +01:00
Rick Cogley
551fe80bed Remove double spaces
Reading the RST spec, I was trying to get breaks to appear by entering the double spaces after the lines in the code blocks. It does not work anyway and, as pointed out, I've removed them.
2016-06-28 12:47:55 +09:00
Matthew Hodgson
63bb8f0df9 remove vector.im from default secondary DS list 2016-06-27 13:13:33 +04:00
Rick Cogley
70d820c875 Update to reflect new location at github.
Additionally, it does not appear that there is a turnserver.conf.default, but rather just /etc/turnserver.conf.
2016-06-26 19:07:07 +09:00
David Baker
3f7652c56f Merge remote-tracking branch 'origin/develop' into dbkr/push_room_naming 2016-06-24 14:03:39 +01:00
Mark Haines
535b6bfacc Merge pull request #895 from matrix-org/markjh/jenkins_port_range
Fix the sytests to use a port-range rather than a port base
2016-06-24 14:03:01 +01:00
Mark Haines
f7fe0e5f67 Fix the sytests to use a port-range rather than a port base 2016-06-24 13:53:03 +01:00
David Baker
2455ad8468 Remove room name & alias test
as get_room_name_and_alias is now gone
2016-06-24 13:34:20 +01:00
David Baker
0b640aa56b even more pep8 2016-06-24 11:47:11 +01:00
David Baker
aa3a4944d5 more pep8 2016-06-24 11:45:23 +01:00
David Baker
46b7362304 pep8 2016-06-24 11:44:57 +01:00
David Baker
870c45913e Use similar naming we use in email notifs for push
Fixes https://github.com/vector-im/vector-web/issues/1654
2016-06-24 11:41:11 +01:00
Mark Haines
05f1a4596a Merge branch 'master' into develop 2016-06-23 11:17:48 +01:00
David Baker
13517e2914 Merge pull request #892 from matrix-org/dbkr/email_notif_most_recent
Put most recent 20 messages in notif
2016-06-23 09:25:45 +01:00
David Baker
b5fb7458d5 Actually we need to order these properly
otherwise we'll end up returning the wrong 20
2016-06-22 18:07:14 +01:00
David Baker
f73fdb04a6 Style 2016-06-22 17:51:40 +01:00
David Baker
3a4120e49a Put most recent 20 messages in notif
Fixes https://github.com/vector-im/vector-web/issues/1648
2016-06-22 17:47:18 +01:00
Erik Johnston
9fe894402f Merge pull request #843 from mweinelt/ldap3-rewrite
Rewrite LDAP Authentication against ldap3
2016-06-22 17:23:07 +01:00
Martin Weinelt
0a32208e5d Rework ldap integration with ldap3
Use the pure-python ldap3 library, which eliminates the need for a
system dependency.

Offer both a `search` and `simple_bind` mode, for more sophisticated
ldap scenarios.
- `search` tries to find a matching DN within the `user_base` while
  employing the `user_filter`, then tries the bind when a single
  matching DN was found.
- `simple_bind` tries the bind against a specific DN by combining the
  localpart and `user_base`

Offer support for STARTTLS on a plain connection.

The configuration was changed to reflect these new possibilities.

Signed-off-by: Martin Weinelt <hexa@darmstadt.ccc.de>
2016-06-22 17:51:59 +02:00
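
A rough sketch of the `search` mode described above, assuming the ldap3 API; the filter templating and variable names are illustrative:

```
from ldap3 import Connection, Server

def search_then_bind(uri, bind_dn, bind_password,
                     user_base, user_filter, localpart, password):
    server = Server(uri)
    # Bind with service credentials and look for exactly one matching
    # DN under user_base, employing the configured user_filter.
    conn = Connection(server, bind_dn, bind_password)
    if not conn.bind():
        return False
    conn.search(user_base, user_filter % {"user": localpart})
    if len(conn.entries) != 1:
        return False
    user_dn = conn.entries[0].entry_dn
    # Then try the real bind as the DN that was found.
    return Connection(server, user_dn, password).bind()
```
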
Mark Haines
774f3a692c Merge pull request #889 from matrix-org/markjh/synctl_workers
Optionally start or stop workers in synctl.
2016-06-21 17:58:19 +01:00
Mark Haines
5cc7564c5c Optionally start or stop workers in synctl.
Optionally start or stop an individual worker by passing -w with
the path to the worker config.
Optionally start or stop every worker and the main synapse by
passing -a with a path to a directory containing worker configs.

The "-w" is intended to be used to bounce individual workers proceses.
THe "-a" is intended for when you want to restart all the workers
simultaneuously, for example when performing database upgrades.
2016-06-21 16:38:05 +01:00
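
For illustration, the two modes might be driven like this from a deployment script; the paths are hypothetical and the flag placement follows the description above:

```
import subprocess

# Bounce a single worker process ("-w" with one worker config):
subprocess.check_call(
    ["synctl", "restart", "-w", "/etc/synapse/workers/pusher.yaml"])

# Restart the main synapse and every worker at once ("-a" with a
# directory of worker configs), e.g. around a database upgrade:
subprocess.check_call(
    ["synctl", "restart", "-a", "/etc/synapse/workers"])
```
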
Mark Haines
0fe0b0eeb6 Merge pull request #888 from matrix-org/markjh/content_repo
Remove the legacy v0 content upload API.
2016-06-21 14:01:01 +01:00
David Baker
0126a5d60a Merge pull request #887 from matrix-org/dbkr/notif_template_subs_fail
Fix substitution failure in mail template
2016-06-21 13:48:54 +01:00
Mark Haines
13e334506c Remove the legacy v0 content upload API.
The existing content can still be downloaded. The last upload to the
matrix.org server was in January 2015, so it is probably safe to remove
the upload API.
2016-06-21 11:47:39 +01:00
David Baker
6b40e4f52a Fix substitution failure in mail template 2016-06-21 11:37:56 +01:00
Mark Haines
d5fb561709 Optionally make committing to postgres asynchronous.
Useful when running tests when you don't care whether the server
will lose data that it claims that it has committed.
2016-06-20 17:53:38 +01:00
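
One way to get this effect is postgres's `synchronous_commit` setting; a minimal psycopg2 sketch of the trade-off, assuming a disposable test database:

```
import psycopg2

conn = psycopg2.connect("dbname=synapse_test")
with conn.cursor() as cur:
    # The server now acknowledges commits before the WAL is flushed to
    # disk: fast, but recently "committed" transactions can be lost on
    # a crash, which is acceptable only for throwaway test data.
    cur.execute("SET synchronous_commit = off")
conn.commit()
```
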
Erik Johnston
d8ec81cc31 Merge pull request #879 from matrix-org/erikj/linearize_fed_server
Linearize some federation endpoints based on (origin, room_id)
2016-06-20 17:34:29 +01:00
Erik Johnston
bc72d381b2 Merge branch 'release-v0.16.1' of github.com:matrix-org/synapse 2016-06-20 14:18:04 +01:00
Erik Johnston
4d362a61ea Bump version and changelog 2016-06-20 14:17:42 +01:00
Erik Johnston
00c281f6a4 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.16.1 2016-06-20 14:13:54 +01:00
Mark Haines
c8cd41cdd8 Merge pull request #880 from matrix-org/markjh/registered_user
Remove registered_users from the distributor.
2016-06-17 19:45:06 +01:00
Mark Haines
41e4b2efea Add the create_profile method back since the tests use it 2016-06-17 19:20:47 +01:00
Mark Haines
0c13d45522 Add a comment on why we don't create a profile for upgrading users 2016-06-17 19:18:53 +01:00
Mark Haines
9f1800fba8 Remove registered_users from the distributor.
The only place it was observed was to set the profile. I've made it
so that the profile is set within store.register in the same transaction
that creates the user.

This required some slight changes to the registration code for upgrading
guest users, since it previously relied on the distributor swallowing errors
if the profile already existed.
2016-06-17 19:14:16 +01:00
Erik Johnston
8f4a9bbc16 Linearize some federation endpoints based on (origin, room_id) 2016-06-17 16:43:45 +01:00
Erik Johnston
9ba2bf1570 Merge pull request #878 from matrix-org/erikj/ujson
Disable responding with canonical json for federation
2016-06-17 16:22:12 +01:00
Erik Johnston
120c238705 Disable responding with canonical json for federation 2016-06-17 16:10:37 +01:00
Erik Johnston
1c1f633b13 Merge pull request #877 from matrix-org/erikj/frozen_default
Turn use_frozen_events off by default
2016-06-17 15:33:55 +01:00
Erik Johnston
6660f37558 Merge pull request #876 from matrix-org/erikj/sign_own
Only re-sign our own events
2016-06-17 15:23:20 +01:00
Mark Haines
20e5b46b20 Merge pull request #875 from matrix-org/markjh/email_formatting
Fix ``KeyError: 'msgtype'``. Use ``.get``
2016-06-17 15:14:58 +01:00
Erik Johnston
0113ad36ee Enable use_frozen_events in tests 2016-06-17 15:13:13 +01:00
Erik Johnston
3e41de05cc Turn use_frozen_events off by default 2016-06-17 15:11:22 +01:00
Erik Johnston
2884712ca7 Only re-sign our own events 2016-06-17 14:47:33 +01:00
Mark Haines
ded01c3bf6 Fix ``KeyError: 'msgtype'``. Use ``.get``
Fixes a key error where the mailer tried to get the ``msgtype`` of an
event that was missing a ``msgtype``.

```
 File "synapse/push/mailer.py", line 264, in get_notif_vars
 File "synapse/push/mailer.py", line 285, in get_message_vars
 File ".../frozendict/__init__.py", line 10, in __getitem__
    return self.__dict[key]
    KeyError: 'msgtype'
```
2016-06-17 13:49:16 +01:00
Mark Haines
8c75040c25 Fix setting gc thresholds in the workers 2016-06-17 11:48:12 +01:00
Mark Haines
f1073ad43d Merge pull request #874 from matrix-org/markjh/worker_config
Inline the synchrotron and pusher configs into the main config
2016-06-17 10:58:52 +01:00
Mark Haines
a352b68acf Use worker_ prefixes for worker config, use existing support for multiple config files 2016-06-16 17:29:50 +01:00
Mark Haines
364d616792 Access the event_cache_size directly from the server object.
This means that the workers can override the event_cache_size
directly without clobbering the value in the main synapse config.
2016-06-16 12:53:15 +01:00
Mark Haines
bde13833cb Access replication_url from the worker config directly 2016-06-16 12:44:40 +01:00
Mark Haines
80a1bc7db5 Comment on what's going on in clobber_with_worker_config 2016-06-16 11:29:45 +01:00
Mark Haines
f1f70bf4b5 Merge remote-tracking branch 'origin/develop' into markjh/worker_config 2016-06-16 11:20:17 +01:00
Mark Haines
dbb5a39b64 Add worker config module 2016-06-16 11:09:15 +01:00
Mark Haines
885ee861f7 Inline the synchrotron and pusher configs into the main config 2016-06-16 11:06:12 +01:00
Erik Johnston
486b9a6a2d Merge pull request #873 from vt0r/bugfix/bcrypt-utf8-encode
Fix TypeError in call to bcrypt.hashpw
2016-06-16 10:12:17 +01:00
Erik Johnston
1f31381611 Merge pull request #872 from matrix-org/erikj/preview_url_fixes
Fix some `/preview_url` explosions
2016-06-16 10:08:29 +01:00
Salvatore LaMendola
ed5f43a55a Fix TypeError in call to bcrypt.hashpw
- At the very least, this TypeError caused logins to fail on my own
  running instance of Synapse, and the simple (explicit) UTF-8
  conversion resolved login errors for me.

Signed-off-by: Salvatore LaMendola <salvatore.lamendola@gmail.com>
2016-06-16 00:43:42 -04:00
Erik Johnston
09a17f965c Line lengths 2016-06-15 16:58:12 +01:00
Erik Johnston
1e9026e484 Handle floats as img widths 2016-06-15 16:58:05 +01:00
Erik Johnston
a60169ea09 Handle og props with no content 2016-06-15 16:57:48 +01:00
Mark Haines
78a16d395c Merge pull request #867 from matrix-org/markjh/enable_jenkins_synchrotron
Enable testing the synchrotron on jenkins
2016-06-15 16:41:58 +01:00
Erik Johnston
5436848955 Merge branch 'release-v0.16.1' of github.com:matrix-org/synapse into develop 2016-06-15 16:24:07 +01:00
Erik Johnston
0477368e9a Update change log 2016-06-15 16:06:26 +01:00
Erik Johnston
0ef0655b83 Bump version and changelog 2016-06-15 15:50:48 +01:00
Erik Johnston
a64dbae90b Merge pull request #871 from matrix-org/erikj/linearize_state_fetch_on_pdu
Linearize fetching of gaps on incoming events
2016-06-15 15:31:57 +01:00
Erik Johnston
d41a1a91d3 Linearize fetching of gaps on incoming events
This potentially stops the server from doing multiple requests for the
same data.
2016-06-15 15:16:14 +01:00
Richard van der Hoff
15bf3e3376 Merge pull request #870 from matrix-org/rav/work_around_tls_bug
Work around TLS bug in twisted
2016-06-15 13:28:13 +01:00
Erik Johnston
d9f7fa2e57 Merge pull request #869 from matrix-org/erikj/backfill_fix
Correctly mark backfilled events as backfilled
2016-06-15 11:21:31 +01:00
Erik Johnston
b31c49d676 Correctly mark backfilled events as backfilled 2016-06-15 10:59:08 +01:00
Richard van der Hoff
255c229f23 Work around TLS bug in twisted
Wrap up twisted's FileBodyProducer to work around
https://twistedmatrix.com/trac/ticket/8473. Hopefully this fixes
https://matrix.org/jira/browse/SYN-700.
2016-06-15 10:39:08 +01:00
Erik Johnston
d12134ce37 Merge pull request #868 from matrix-org/erikj/invalid_id
Make get_domain_from_id throw SynapseError on invalid ID
2016-06-14 15:09:07 +01:00
Erik Johnston
36e2aade87 Make get_domain_from_id throw SynapseError on invalid ID 2016-06-14 13:25:29 +01:00
Matthew Hodgson
33546b58aa point to the CAPTCHA docs 2016-06-12 23:11:29 +01:00
Mark Haines
fdc015c6e9 Enable testing the synchrotron on jenkins 2016-06-10 16:30:26 +01:00
Erik Johnston
e9892b4b26 Merge pull request #866 from bartekrutkowski/develop
Change /bin/bash to /bin/sh in tox.ini
2016-06-10 13:02:48 +01:00
Bartek Rutkowski
50f69e2ef2 Change /bin/bash to /bin/sh in tox.ini
No features of Bash are used here, so using /bin/sh makes it more portable to systems that don't have Bash natively (like BSD systems).
2016-06-10 11:33:43 +01:00
Mark Haines
41b35412bf Merge pull request #863 from matrix-org/markjh/load_config
Add function to load config without generating it
2016-06-10 11:12:32 +01:00
Mark Haines
7dbb473339 Add function to load config without generating it
Renames ``load_config`` to ``load_or_generate_config``
Adds a method called ``load_config`` that just loads the
config.

The main synapse.app.homeserver will continue to use
``load_or_generate_config`` to retain backwards compat.
However new worker processes can use ``load_config`` to
load the config avoiding some of the cruft needed to generate
the config.

As the new ``load_config`` method is expected to be used by new
configs it removes support for the legacy commandline overrides
that ``load_or_generate_config`` supports
2016-06-09 18:50:38 +01:00
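
A minimal sketch of the resulting split, assuming a Config base class with a ``read_config`` hook; the details are illustrative:

```
import yaml

class Config(object):
    def read_config(self, config):
        raise NotImplementedError

    @classmethod
    def load_config(cls, config_files):
        # Just read and merge the given YAML files: no generation step
        # and no legacy commandline overrides.
        merged = {}
        for path in config_files:
            with open(path) as f:
                merged.update(yaml.safe_load(f))
        instance = cls()
        instance.read_config(merged)
        return instance
```
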
Erik Johnston
16a8884233 Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-06-09 16:57:33 +01:00
Erik Johnston
ba0406d10d Merge branch 'release-v0.16.0' of github.com:matrix-org/synapse 2016-06-09 14:21:23 +01:00
Erik Johnston
8327d5df70 Change CHANGELOG 2016-06-09 14:16:26 +01:00
Erik Johnston
a31befbcd0 Bump version and changelog 2016-06-09 13:23:41 +01:00
Erik Johnston
5ee5b655b2 Merge pull request #862 from matrix-org/erikj/media_remote_error
502 on /thumbnail when can't contact remote server
2016-06-09 13:11:02 +01:00
Erik Johnston
0a43219a27 Merge pull request #860 from negzi/bug_fix_get_or_create_user
Fix a bug caused by a change in auth_handler function
2016-06-09 11:40:12 +01:00
Erik Johnston
eba4ff1bcb 502 on /thumbnail when can't contact remote server 2016-06-09 11:29:43 +01:00
Erik Johnston
2eab219a70 Merge pull request #861 from matrix-org/erikj/events_log
Remove redundant exception log in /events
2016-06-09 11:22:20 +01:00
Erik Johnston
95f305c35a Remove redundant exception log in /events 2016-06-09 11:15:04 +01:00
Negar Fazeli
6e7dc7c7dd Fix a bug caused by a change in auth_handler function
Fix the relevant unit test cases
2016-06-08 23:22:39 +02:00
Erik Johnston
b7fbc9bd95 Merge pull request #859 from matrix-org/erikj/public_room_performance
Pull full state for each room all at once
2016-06-08 16:36:06 +01:00
Erik Johnston
919a2c74f6 Merge branch 'release-v0.16.0' of github.com:matrix-org/synapse into develop 2016-06-08 16:26:26 +01:00
Erik Johnston
81c07a32fd Pull full state for each room all at once 2016-06-08 15:51:49 +01:00
Erik Johnston
b063784b78 Merge pull request #857 from matrix-org/erikj/default_visibility
Don't make rooms visible by default
2016-06-08 15:18:58 +01:00
Mark Haines
defa28efa1 Disable the synchrotron on jenkins until the sytest support lands (#855)
* Disable the synchrotron on jenkins until the sytest support lands

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins

* Poke jenkins
2016-06-08 15:11:31 +01:00
Erik Johnston
690029d1a3 Don't make rooms visible by default 2016-06-08 14:47:42 +01:00
Erik Johnston
d88faf92d1 Fix up federation PublicRoomList 2016-06-08 14:39:31 +01:00
Erik Johnston
958c968d02 Merge pull request #856 from matrix-org/erikj/fed_pub_rooms
Enable auth on /publicRoom endpoints
2016-06-08 14:36:09 +01:00
Erik Johnston
746b2f5657 Merge pull request #854 from matrix-org/erikj/federation_logging
Add some logging for when servers ask for missing events
2016-06-08 14:31:41 +01:00
Erik Johnston
efeabd3180 Log user that is making /publicRooms calls 2016-06-08 14:23:15 +01:00
Erik Johnston
1fd6eb695d Enable auth on federation PublicRoomList 2016-06-08 14:15:18 +01:00
Erik Johnston
128360d4f0 Merge pull request #853 from matrix-org/erikj/replication_noop
Don't hit DB for noop replications queries
2016-06-08 13:26:26 +01:00
Erik Johnston
17aab5827a Add some logging for when servers ask for missing events 2016-06-08 11:55:31 +01:00
Erik Johnston
1a815fb04f Don't hit DB for noop replications queries 2016-06-08 11:33:30 +01:00
Erik Johnston
66503a69c9 Update commit hash in changelog 2016-06-08 11:13:56 +01:00
Erik Johnston
bab916bccc Bump version and changelog to v0.16.0-rc2 2016-06-08 11:05:45 +01:00
Erik Johnston
5c73115155 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.16.0 2016-06-08 10:50:31 +01:00
Erik Johnston
e0fda29f94 Merge pull request #850 from matrix-org/erikj/gc_threshold
Add gc_threshold to pusher and synchrotron
2016-06-08 10:05:56 +01:00
Erik Johnston
02270b4b3d Merge pull request #852 from matrix-org/erikj/gc_metrics
Add GC counts to metrics
2016-06-08 10:05:49 +01:00
Mark Haines
6babdb671b Merge pull request #851 from matrix-org/markjh/jenkins_synchrotron
Add script for running sytest with dendron
2016-06-07 17:20:41 +01:00
Erik Johnston
0f2165ccf4 Don't track total objects as its too expensive to calculate 2016-06-07 17:00:45 +01:00
Erik Johnston
18f0cc7d99 Record some more GC metrics 2016-06-07 16:55:49 +01:00
Mark Haines
64935d11f7 Add script for running sytest with dendron 2016-06-07 16:45:40 +01:00
Erik Johnston
2d1d1025fa Add gc_threshold to pusher and synchrotron 2016-06-07 16:26:25 +01:00
Erik Johnston
b01e71e719 Merge pull request #849 from matrix-org/erikj/gc_threshold
Allow setting of gc.set_thresholds
2016-06-07 16:07:54 +01:00
Erik Johnston
dded389ac1 Allow setting of gc.set_thresholds 2016-06-07 15:45:56 +01:00
Mark Haines
c54fcd9ee4 Merge pull request #848 from matrix-org/markjh/unusedIV
Remove dead code.
2016-06-07 15:27:12 +01:00
Mark Haines
0b2158719c Remove dead code.
Loading push rules now happens in the datastore, so we can remove
the methods that loaded them outside the datastore.

The ``waiting_for_join_list`` in the federation handler isn't populated by
anything, so it can be removed.

The ``_get_members_events_txn`` method isn't called from anywhere
so can be removed.
2016-06-07 15:07:11 +01:00
Erik Johnston
188f8d63e2 Merge pull request #847 from matrix-org/erikj/gc_tick
Change the way we do stats for GC
2016-06-07 13:46:26 +01:00
Erik Johnston
48e65099b5 Also record number of unreachable objects 2016-06-07 13:40:22 +01:00
Erik Johnston
75331c5fca Change the way we do stats 2016-06-07 13:33:13 +01:00
Erik Johnston
8c966fbd51 Merge pull request #771 from matrix-org/erikj/gc_tick
Manually run GC on reactor tick.
2016-06-07 13:18:36 +01:00
Mark Haines
efe8126290 Merge pull request #846 from matrix-org/markjh/user_joined_notifier
Notify users for events in rooms they join.
2016-06-07 11:43:49 +01:00
Mark Haines
88625db05f Notify users for events in rooms they join.
Change how the notifier updates the map from room_id to user streams on
receiving a join event. Make it update the map when it notifies for the
join event, rather than using the "user_joined_room" distributor signal
2016-06-07 11:33:36 +01:00
Erik Johnston
84379062f9 Fix AS retries, but with correct ordering 2016-06-07 10:24:50 +01:00
Erik Johnston
310197bab5 Fix AS retries 2016-06-07 09:34:50 +01:00
Mark Haines
b0932b34cb Merge pull request #845 from matrix-org/markjh/synchrotron_presence
Fix a KeyError in the synchrotron presence
2016-06-06 16:52:27 +01:00
Mark Haines
4a5bbb1941 Fix a KeyError in the synchrotron presence 2016-06-06 16:37:12 +01:00
Mark Haines
87f60e7053 Merge pull request #844 from matrix-org/markjh/yield_on_sleep
Yield on the sleeps intended to backoff replication
2016-06-06 16:20:54 +01:00
Mark Haines
5ef84da4f1 Yield on the sleeps intended to backoff replication 2016-06-06 16:05:28 +01:00
Erik Johnston
216a05b3e3 .values() returns list of sets 2016-06-06 16:00:09 +01:00
Erik Johnston
9f715573aa Merge pull request #842 from matrix-org/erikj/presence_timer
Fire after 30s not 8h
2016-06-06 15:51:23 +01:00
Erik Johnston
96dc600579 Fix typos 2016-06-06 15:44:41 +01:00
Erik Johnston
377eb480ca Fire after 30s not 8h 2016-06-06 15:14:21 +01:00
Erik Johnston
e4134c5e13 Merge pull request #841 from matrix-org/erikj/event_counter
Add metric counter for number of persisted events
2016-06-06 14:17:40 +01:00
Erik Johnston
778c1fea8b Merge pull request #840 from matrix-org/erikj/event_write_through
Add events to cache when we persist them
2016-06-06 14:17:35 +01:00
Erik Johnston
7aa778fba9 Add metric counter for number of persisted events 2016-06-06 11:58:09 +01:00
Erik Johnston
70aee0717c Add events to cache when we persist them 2016-06-06 11:34:53 +01:00
Erik Johnston
3210f4c385 Merge pull request #836 from matrix-org/erikj/change_event_cache
Change the way we cache events
2016-06-03 18:47:46 +01:00
Mark Haines
040a560a48 Merge pull request #837 from matrix-org/markjh/synchrotron_presence_list
Add get_presence_list_accepted to the broken caches in synchrotron
2016-06-03 18:28:08 +01:00
Erik Johnston
cffe46408f Don't rely on options when inserting event into cache 2016-06-03 18:25:21 +01:00
Mark Haines
ac9716f154 Fix spelling 2016-06-03 18:10:00 +01:00
Mark Haines
8f79084bd4 Add get_presence_list_accepted to the broken caches in synchrotron 2016-06-03 18:03:40 +01:00
Erik Johnston
10ea3f46ba Change the way we cache events 2016-06-03 17:57:50 +01:00
Erik Johnston
f6be734be9 Merge pull request #835 from matrix-org/erikj/get_event_txn
Remove event fetching from DB threads
2016-06-03 17:30:00 +01:00
Erik Johnston
05e01f21d7 Remove event fetching from DB threads 2016-06-03 17:22:13 +01:00
David Baker
81d226888f Merge pull request #834 from matrix-org/dbkr/fix_email_from
Fix email notif From
2016-06-03 16:54:29 +01:00
David Baker
72c4d482e9 3rd time lucky: we'd already calculated it above 2016-06-03 16:39:50 +01:00
David Baker
fbf608decb Oops, we're using the dict form 2016-06-03 16:38:39 +01:00
David Baker
06d40c8b98 Add substitutions to email notif From 2016-06-03 16:31:23 +01:00
Erik Johnston
5f88549f4a Merge branch 'release-v0.16.0' of github.com:matrix-org/synapse into develop 2016-06-03 16:27:24 +01:00
Mark Haines
ca457f594e Merge pull request #831 from matrix-org/markjh/synchrotronII
Add a separate process that can handle /sync requests
2016-06-03 16:06:25 +01:00
Erik Johnston
c11614bcdc Note that v0.15.x was never released 2016-06-03 15:50:15 +01:00
Erik Johnston
21961c93c7 Bump changelog and version 2016-06-03 15:33:14 +01:00
Mark Haines
48340e4f13 Clear the list of ongoing syncs on shutdown 2016-06-03 15:02:27 +01:00
Erik Johnston
fcbc282f56 Merge branch 'release-v0.15.0' of github.com:matrix-org/synapse into release-v0.16.0 2016-06-03 15:02:04 +01:00
Erik Johnston
51773bcbaf Merge pull request #832 from matrix-org/erikj/presence_coount
Change def of small delta in presence stream. Add metrics.
2016-06-03 14:57:00 +01:00
Mark Haines
da491e75b2 Appease flake8 2016-06-03 14:56:36 +01:00
Mark Haines
0b3c80a234 Use ClientIpStore to record client ips 2016-06-03 14:55:01 +01:00
Mark Haines
dd6f62ed99 Merge branch 'develop' into markjh/synchrotronII 2016-06-03 14:51:33 +01:00
Mark Haines
c367ba5dd2 Merge pull request #833 from matrix-org/markjh/client_ips
Move insert_client_ip to a separate class
2016-06-03 14:51:03 +01:00
Mark Haines
eef541a291 Move insert_client_ip to a separate class 2016-06-03 14:42:35 +01:00
Mark Haines
80aade3805 Send updates to the syncing users every ten seconds or immediately if they've just come online 2016-06-03 14:24:19 +01:00
Erik Johnston
ab116bdb0c Fix typo 2016-06-03 14:03:42 +01:00
Erik Johnston
4ce84a1acd Change metric style 2016-06-03 13:49:16 +01:00
Erik Johnston
a7ff5a1770 Presence metrics. Change def of small delta 2016-06-03 13:40:55 +01:00
Matthew Hodgson
aa6fab0cc2 Merge pull request #822 from matrix-org/matthew/brand-from-header
brand the email from header
2016-06-03 12:15:01 +01:00
Matthew Hodgson
8d740132f4 Merge branch 'develop' into matthew/brand-from-header 2016-06-03 12:14:18 +01:00
Erik Johnston
3b096c5f5c Merge branch 'erikj/cache_perf' of github.com:matrix-org/synapse into develop 2016-06-03 12:00:33 +01:00
Erik Johnston
43b7f371f5 Merge pull request #830 from matrix-org/erikj/metrics_perf
Change CacheMetrics to be quicker
2016-06-03 11:57:29 +01:00
Mark Haines
abb151f3c9 Add a separate process that can handle /sync requests 2016-06-03 11:57:26 +01:00
Erik Johnston
4982b28868 Merge pull request #829 from matrix-org/erikj/poke_notifier
Poke notifier on next reactor tick
2016-06-03 11:52:10 +01:00
Erik Johnston
d06f2a229e Merge pull request #828 from matrix-org/erikj/joined_hosts_for_room
Make get_joined_hosts_for_room use get_users_in_room
2016-06-03 11:50:30 +01:00
Erik Johnston
58a224a651 Pull out update_results_dict 2016-06-03 11:47:07 +01:00
Mark Haines
20eccd84d4 Merge pull request #827 from matrix-org/markjh/more_slaved_methods
Add methods to events, account data and receipt slaves
2016-06-03 11:46:21 +01:00
Erik Johnston
722472b48c Merge pull request #825 from matrix-org/erikj/cache_push_rules
Load push rules in storage layer so that they get cached
2016-06-03 11:44:32 +01:00
Erik Johnston
73c7112433 Change CacheMetrics to be quicker
We change it so that each cache has an individual CacheMetric, instead
of having one global CacheMetric. This means that when a cache tries to
increment a counter it does not need to go through so many indirections.
2016-06-03 11:26:52 +01:00
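
In sketch form, each cache owning its own metric object turns the hot path into a plain attribute increment rather than a lookup through a shared registry (names illustrative):

```
class CacheMetric(object):
    """One counter object per cache, instead of one global registry."""

    def __init__(self, name):
        self.name = name
        self.hits = 0
        self.misses = 0

    def inc_hits(self):
        self.hits += 1

    def inc_misses(self):
        self.misses += 1
```
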
Mark Haines
b09f348530 Merge pull request #824 from matrix-org/markjh/slaved_presence_store
Add a slaved store for presence
2016-06-03 11:26:33 +01:00
Erik Johnston
4c04222fa5 Poke notifier on next reactor tick 2016-06-03 11:24:16 +01:00
Erik Johnston
ccb56fc24b Make get_joined_hosts_for_room use get_users_in_room 2016-06-03 11:20:23 +01:00
Mark Haines
81cf449daa Add methods to events, account data and receipt slaves
Adds the methods needed by /sync to the slaved events,
account data and receipt stores.
2016-06-03 11:19:27 +01:00
Erik Johnston
e043ede4a2 Small optimisation to CacheListDescriptor 2016-06-03 11:19:22 +01:00
Mark Haines
b821c839dc Merge pull request #823 from matrix-org/markjh/more_slaved_stores
Add slaved stores for filters, tokens, and push rules
2016-06-03 11:17:43 +01:00
Erik Johnston
597013caa5 Make cachedList go a bit faster 2016-06-03 11:13:29 +01:00
Erik Johnston
6a0afa582a Load push rules in storage layer, so that they get cached 2016-06-03 11:10:00 +01:00
Mark Haines
3ae915b27e Add a slaved store for presence 2016-06-03 11:05:53 +01:00
Erik Johnston
59f2d73522 Remove unnecessary sets 2016-06-03 11:05:45 +01:00
Erik Johnston
9c26b390a2 Only get local users 2016-06-03 11:04:31 +01:00
Mark Haines
f88d747f79 Add a comment explaining why the filter cache doesn't need exipiring 2016-06-03 11:03:10 +01:00
Erik Johnston
065e739d6e Merge pull request #811 from matrix-org/erikj/state_users_in_room
Use state to calculate get_users_in_room
2016-06-03 10:58:27 +01:00
Erik Johnston
696d7c5937 Merge pull request #809 from matrix-org/erikj/cache_receipts_in_room
Add get_users_with_read_receipts_in_room cache
2016-06-03 10:58:24 +01:00
Mark Haines
0eae075723 Add slaved stores for filters, tokens, and push rules 2016-06-03 10:58:03 +01:00
Matthew Hodgson
79d1f072f4 brand the email from header 2016-06-02 21:34:40 +01:00
David Baker
6bb9aacf9d Merge pull request #821 from matrix-org/dbkr/email_unsubscribe
Email unsubscribe links that don't require logging in
2016-06-02 17:44:55 +01:00
David Baker
745ddb4dd0 peppate 2016-06-02 17:38:41 +01:00
David Baker
7a5a5f2df2 Merge pull request #820 from matrix-org/dbkr/email_notif_string_fmt_error
Fix error in email notification string formatting
2016-06-02 17:26:06 +01:00
David Baker
1f31cc37f8 Working unsubscribe links going straight to the HS
and authed by macaroons that let you delete pushers and nothing else
2016-06-02 17:21:31 +01:00
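
As a sketch of the idea using pymacaroons: first-party caveats pin the token to one user and one capability, so the emailed link authorises deleting pushers and nothing else (the caveat strings are illustrative):

```
import pymacaroons

def make_delete_pusher_macaroon(server_name, secret_key, user_id):
    macaroon = pymacaroons.Macaroon(
        location=server_name, identifier="key", key=secret_key)
    macaroon.add_first_party_caveat("gen = 1")
    macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
    # Restrict the token to this single purpose.
    macaroon.add_first_party_caveat("type = delete_pusher")
    return macaroon.serialize()
```
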
Matthew Hodgson
2675c1e40e add some branding debugging 2016-06-02 17:21:12 +01:00
David Baker
c71177f285 Merge remote-tracking branch 'origin/dbkr/email_notif_string_fmt_error' into dbkr/email_unsubscribe 2016-06-02 17:20:56 +01:00
David Baker
07a5559916 Fix error in email notification string formatting 2016-06-02 17:17:16 +01:00
Mark Haines
56d15a0530 Store the typing users as user_id strings. (#819)
Rather than storing them as UserID objects.
2016-06-02 16:28:54 +01:00
David Baker
812b5de0fe Merge remote-tracking branch 'origin/develop' into dbkr/email_unsubscribe 2016-06-02 15:33:28 +01:00
Mark Haines
80f34d7b57 Fix setting the _clock in SQLBaseStore 2016-06-02 15:23:56 +01:00
Mark Haines
661a540dd1 Deduplicate presence entries in sync (#818) 2016-06-02 15:20:28 +01:00
Mark Haines
70599ce925 Allow external processes to mark a user as syncing. (#812)
* Add infrastructure to the presence handler to track sync requests in external processes

* Expire stale entries for dead external processes

* Add an http endpoint for making users as syncing

Add some docstrings and comments.

* Fixes
2016-06-02 15:20:15 +01:00
David Baker
fb2193cc63 Merge pull request #817 from matrix-org/dbkr/split_out_auth_handler
Split out the auth handler
2016-06-02 14:31:35 +01:00
Erik Johnston
356f13c069 Disable INCLUDE_ALL_UNREAD_NOTIFS 2016-06-02 14:07:38 +01:00
Erik Johnston
02ac463dbf Merge pull request #800 from matrix-org/erikj/sync_refactor
Refactor SyncHandler
2016-06-02 14:02:13 +01:00
Matthew Hodgson
c5af1b6b00 Merge pull request #814 from matrix-org/matthew/3pid_invite_auth
special case m.room.third_party_invite event auth to match invites,
2016-06-02 13:47:40 +01:00
David Baker
3a3fb2f6f9 Merge branch 'dbkr/split_out_auth_handler' into dbkr/email_unsubscribe 2016-06-02 13:35:25 +01:00
David Baker
4a10510cd5 Split out the auth handler 2016-06-02 13:31:45 +01:00
Matthew Hodgson
f84b89f0c6 if an email pusher specifies a brand param, use it 2016-06-02 13:29:48 +01:00
David Baker
a15ad60849 Email unsubscribing that may, in theory, work
Were it not for the fact that you can't use the base handler in the pusher because it pulls in the world. Committing while I fix that on a different branch.
2016-06-02 11:44:15 +01:00
David Baker
07233a1ec8 Merge pull request #815 from matrix-org/dbkr/email_greeting_not_none
Use user_id in email greeting if display name is null
2016-06-02 09:52:19 +01:00
David Baker
e793866398 Use user_id in email greeting if display name is null 2016-06-02 09:41:13 +01:00
Matthew Hodgson
aaa70e26a2 special case m.room.third_party_invite event auth to match invites, otherwise they get out of sync and you get https://github.com/vector-im/vector-web/issues/1208 2016-06-01 22:13:47 +01:00
Erik Johnston
a04a2d043c Merge pull request #807 from matrix-org/erikj/push_rules_cache
Ensure we always return boolean in push rules
2016-06-01 18:07:48 +01:00
Erik Johnston
0f06b496d1 Merge pull request #806 from matrix-org/erikj/hash_cache
Cache get_event_reference_hashes
2016-06-01 18:07:42 +01:00
Erik Johnston
83b70c9f63 Merge pull request #813 from matrix-org/dbkr/fix_room_list_spidering
Fix room list spidering
2016-06-01 18:02:27 +01:00
David Baker
e0deeff23e Fix room list spidering 2016-06-01 17:58:58 +01:00
David Baker
991af8b0d6 WIP on unsubscribing email notifs without logging in 2016-06-01 17:40:52 +01:00
David Baker
00c487a8db Merge pull request #808 from matrix-org/dbkr/room_list_spider
Add secondary_directory_servers option to fetch room list from other servers
2016-06-01 15:32:52 +01:00
Erik Johnston
c8285564a3 Use state to calculate get_users_in_room 2016-06-01 15:25:25 +01:00
David Baker
1db79d6192 Merge pull request #810 from matrix-org/dbkr/limit_email_notifs
Limit number of notifications in an email notification
2016-06-01 12:49:22 +01:00
David Baker
d60eed0710 Limit number of notifications in an email notification 2016-06-01 11:45:43 +01:00
David Baker
195254cae8 Inject fake room list handler in tests
Otherwise it tries to start the remote public room list updating looping call, which breaks.
2016-06-01 11:14:16 +01:00
Erik Johnston
43db0d9f6a Add get_users_with_read_receipts_in_room cache 2016-06-01 10:54:32 +01:00
David Baker
8e539f13c0 Merge remote-tracking branch 'origin/develop' into dbkr/room_list_spider 2016-06-01 09:54:36 +01:00
David Baker
6ecb2ca4ec pep8 2016-06-01 09:48:55 +01:00
Matthew Hodgson
58ee43d020 handle emotes & notices correctly in email notifs 2016-05-31 20:28:42 +01:00
David Baker
2a449fec4d Add cache to remote room lists
Poll for updates from remote servers, waiting for the poll if there's no cache entry.
2016-05-31 18:27:23 +01:00
David Baker
6ca4d3ae9a Add vector.im to default secondary_directory_servers and add comment explaining it's not a permanent solution 2016-05-31 17:24:50 +01:00
Erik Johnston
dea9f20f8c Force boolean 2016-05-31 17:24:30 +01:00
David Baker
963e3ed282 Apparently I am not permitted to have two blank lines here 2016-05-31 17:22:53 +01:00
David Baker
d240796ded Basic, un-cached support for secondary_directory_servers 2016-05-31 17:20:07 +01:00
Mark Haines
c8c5bf950a Fix synapse/storage/schema/delta/30/as_users.py 2016-05-31 17:10:40 +01:00
Erik Johnston
c9ca285d33 Merge pull request #805 from matrix-org/erikj/push_rules_cache
Fix GET /push_rules
2016-05-31 16:42:21 +01:00
Erik Johnston
1d4ee854e2 Fix typo 2016-05-31 15:45:53 +01:00
Erik Johnston
cca0093fa9 Change fix 2016-05-31 15:44:08 +01:00
Erik Johnston
4efa389299 Fix GET /push_rules 2016-05-31 15:37:53 +01:00
Erik Johnston
aefd2d1cbc Cache get_event_reference_hashes 2016-05-31 15:32:32 +01:00
Erik Johnston
10de8c2631 Merge pull request #804 from matrix-org/erikj/push_rules_cache
Add caches to bulk_get_push_rules*
2016-05-31 15:04:40 +01:00
Mark Haines
014e0799f9 Merge pull request #803 from matrix-org/markjh/liberate_appservice_handler
Move the AS handler out of the Handlers object.
2016-05-31 14:31:19 +01:00
Mark Haines
c626fc576a Move the AS handler out of the Handlers object.
Access it directly from the homeserver itself. It already wasn't
inheriting from BaseHandler, and storing it on the Handlers object was
already somewhat dubious.
2016-05-31 13:53:48 +01:00
Erik Johnston
e5b0bbcd33 Add caches to bulk_get_push_rules* 2016-05-31 13:46:58 +01:00
David Baker
70ecb415f5 Fix c+p fail 2016-05-31 12:00:54 +01:00
David Baker
e1625d62a8 Add federation room list servlet 2016-05-31 11:55:57 +01:00
David Baker
163e48c0e3 Merge pull request #802 from matrix-org/dbkr/split_room_list_handler
Split out the room list handler
2016-05-31 11:32:44 +01:00
David Baker
887c6e6f05 Split out the room list handler
So I can use it from federation bits without pulling in all the handlers.
2016-05-31 11:05:16 +01:00
Matthew Hodgson
42a9ea37e4 Merge pull request #801 from ruma/readme-history-storage
Alter phrasing to clarify where info is stored.
2016-05-29 10:37:04 +01:00
Jimmy Cuadra
8b5dbee47e Alter phrasing to clarify where info is stored.
A user on #matrix:matrix.org was confused by the phrasing of the first
sentence in the paragraph and couldn't tell whether it was saying that
the homeserver stored the data or the clients did. This change splits it
into two sentences to make the subject of each sentence clear.
2016-05-29 02:31:56 -07:00
Erik Johnston
85b992f621 Fix to allow start with postgres 2016-05-27 10:44:44 +01:00
Erik Johnston
cc84f7cb8e Send down correct error response if user not found 2016-05-27 10:35:15 +01:00
David Baker
209ba4d024 Merge pull request #795 from matrix-org/dbkr/delete_push_actions_after_a_month
Only delete push actions after 30 days
2016-05-24 16:22:13 +01:00
David Baker
b007ee4606 Check for presence of 'avatar_url' key 2016-05-24 15:12:05 +01:00
Matrix
95b86e6dad tweak mail notifs 2016-05-24 14:52:29 +01:00
David Baker
cbf8d146ac Merge pull request #799 from matrix-org/matthew/quieter-email-notifs
Tune email notifs to make them quieter:
2016-05-24 14:30:51 +01:00
Erik Johnston
faad233ea6 Change short circuit path 2016-05-24 14:27:19 +01:00
Erik Johnston
6900303997 Don't send down all ephemeral events 2016-05-24 11:44:55 +01:00
Erik Johnston
1c5ed2a19b Only work out newly_joined_users for incremental sync 2016-05-24 11:21:34 +01:00
Erik Johnston
b08ad0389e Only include non-offline presence in initial sync 2016-05-24 11:15:05 +01:00
Erik Johnston
be2c677386 Spell builder correctly 2016-05-24 10:53:03 +01:00
Erik Johnston
79bea8ab9a Inline function. Make load_filtered_recents private 2016-05-24 10:22:24 +01:00
Erik Johnston
84f94e4cbb Add comments 2016-05-24 10:14:53 +01:00
Erik Johnston
d16cc52b5d Merge pull request #798 from negzi/bugfix_create_user_feature
Fix set profile error with Requester.
2016-05-24 10:13:53 +01:00
Erik Johnston
137e6a4557 Shuffle things around 2016-05-24 09:50:55 +01:00
Matthew Hodgson
680f1d9387 catch thinko in presentable names 2016-05-23 22:55:11 +01:00
Matthew Hodgson
cb8a321bdd fix NPE in room ordering 2016-05-23 22:54:56 +01:00
Matthew Hodgson
d437882581 fix debug text 2016-05-23 22:54:32 +01:00
Matthew Hodgson
88ea5ab2c3 consistency is the better part of valour 2016-05-23 19:33:45 +01:00
Matthew Hodgson
989bdc9e56 Tune email notifs to make them quieter:
* After initial 10 minute window, only alert every 24h for room notifs
* Reset room state after 6h of idleness
* Synchronise throttles for messages sent in the same notif, so the 24 hourly notifs 'line up'
* Fix the email subjects to say what triggered the notification
* Order the rooms in reverse activity order in the email, so the 'reason' room should always come first
2016-05-23 19:24:11 +01:00
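
A very rough sketch of the throttle those bullet points describe; the constants and decision order are illustrative rather than synapse's exact logic:

```
TEN_MIN = 10 * 60 * 1000
SIX_HOURS = 6 * 60 * 60 * 1000
ONE_DAY = 24 * 60 * 60 * 1000

def email_throttle_ms(now, first_notif_ts, last_room_activity_ts):
    if now - last_room_activity_ts > SIX_HOURS:
        return TEN_MIN   # room went idle: drop back to the short delay
    if now - first_notif_ts < TEN_MIN:
        return TEN_MIN   # still inside the initial 10 minute window
    return ONE_DAY       # afterwards, at most one email per day per room
```
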
Negi Fazeli
6fe04ffef2 Fix set profile error with Requester.
Replace flush_user with delete access token due to function removal
Add a new test case for if the user is already registered
2016-05-23 19:50:28 +02:00
Erik Johnston
c0c79ef444 Add back concurrently_execute 2016-05-23 18:21:27 +01:00
Erik Johnston
b5605dfecc Refactor SyncHandler 2016-05-23 18:08:18 +01:00
David Baker
31b5395ab6 Remove debug logging 2016-05-23 16:32:01 +01:00
Richard van der Hoff
09804c9862 Fix link to A-S spec 2016-05-23 16:29:38 +01:00
David Baker
c2da3406fc Oops, missing comma 2016-05-20 18:03:31 +01:00
David Baker
ccffb0965d Remove stale line 2016-05-20 17:59:10 +01:00
David Baker
18d68bfee4 Handle empty events table 2016-05-20 17:58:09 +01:00
David Baker
d4503e25ed Make deleting push actions more efficient
There's no index on received_ts, so manually binary search using the stream_ordering index, and only update it once an hour.
2016-05-20 17:56:10 +01:00
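
The binary search it mentions, sketched: `received_ts` has no index, but it increases with `stream_ordering`, which does, so bisect for the newest ordering older than the cutoff (the helper name is hypothetical):

```
def find_stream_ordering_cutoff(get_received_ts, lo, hi, cutoff_ts):
    """Largest stream_ordering in [lo, hi] with received_ts < cutoff_ts.

    Assumes get_received_ts(lo) < cutoff_ts (at least one row
    qualifies) and that received_ts is monotonic in stream_ordering.
    """
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if get_received_ts(mid) < cutoff_ts:
            lo = mid
        else:
            hi = mid - 1
    return lo
```
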
David Baker
149fa411e2 Only delete push actions after 30 days 2016-05-20 15:25:12 +01:00
Kegsay
60ff2e7984 Merge pull request #794 from matrix-org/kegan/join-with-server-name
Allow clients to specify a server_name to avoid 'No known servers'
2016-05-19 14:13:11 +01:00
Kegan Dougal
332d7e9b97 Allow clients to specify a server_name to avoid 'No known servers'
Multiple server_names are supported via ?server_name=foo&server_name=bar
2016-05-19 13:50:52 +01:00
Matthew Hodgson
6fb51eaf7b Merge pull request #793 from matrix-org/matthew/one-push-badge-per-convo
increment badge count per missed convo, not per msg
2016-05-18 13:56:38 +01:00
Matthew Hodgson
e837df6adb increment badge count per missed convo, not per msg 2016-05-18 11:53:25 +01:00
Erik Johnston
42368ea8db Add desc to get_presence_for_users 2016-05-18 11:38:10 +01:00
Mark Haines
ee660c6b9b Merge pull request #792 from matrix-org/markjh/liberate_typing_handler
Move typing handler out of the Handlers object
2016-05-17 16:06:17 +01:00
Mark Haines
0cb441fedd Move typing handler out of the Handlers object 2016-05-17 15:58:46 +01:00
Mark Haines
5adf627551 Merge pull request #791 from matrix-org/markjh/app_service_config
Move the functions for parsing app service config
2016-05-17 11:40:49 +01:00
Mark Haines
6a30a0bfd3 Move the functions for parsing app service config 2016-05-17 11:28:58 +01:00
Mark Haines
c4c98ce8da Merge pull request #790 from matrix-org/markjh/liberate_sync_handler
Move SyncHandler out of the Handlers object
2016-05-17 10:54:39 +01:00
Mark Haines
523d5bcd0b Merge remote-tracking branch 'origin/develop' into markjh/liberate_sync_handler 2016-05-17 10:43:58 +01:00
Matthew Hodgson
43e1e0489c Merge pull request #786 from matrix-org/matthew/email_notifs_tuning
tune email notifs, fix CSS a bit, and add debugging details
2016-05-17 10:43:24 +01:00
Mark Haines
1ed33784a6 Merge pull request #789 from matrix-org/markjh/member_cleanup
Cleanup room member handler
2016-05-17 10:43:19 +01:00
Mark Haines
526bf8126f Remove unused get_joined_rooms_for_user 2016-05-17 10:20:51 +01:00
Mark Haines
425e6b4983 Merge branch 'develop' into markjh/member_cleanup 2016-05-17 10:13:16 +01:00
Mark Haines
b153f5b150 Merge pull request #787 from matrix-org/markjh/liberate_presence_handler
Move the presence handler out of the Handlers object
2016-05-17 10:09:43 +01:00
Mark Haines
3fd8a07fca Merge pull request #788 from matrix-org/markjh/domian
Spell "domain" correctly
2016-05-17 10:09:34 +01:00
Mark Haines
f68eea808a Move SyncHandler out of the Handlers object 2016-05-16 20:19:26 +01:00
Mark Haines
53e171f345 Merge branch 'markjh/liberate_presence_handler' into markjh/liberate_sync_handler 2016-05-16 20:08:32 +01:00
Mark Haines
80cb9becd8 Remove get_joined_rooms_for_user from RoomMemberHandler 2016-05-16 20:06:55 +01:00
Mark Haines
816df9f267 get_room_members is unused now 2016-05-16 19:51:43 +01:00
Mark Haines
821306120a Replaces calls to fetch_room_distributions_into with get_joined_hosts_for_room 2016-05-16 19:48:07 +01:00
Mark Haines
1a3a2002ff Spell "domain" correctly
s/domian/domain/g
2016-05-16 19:17:23 +01:00
Mark Haines
e168abbcff Don't inherit PresenceHandler from BaseHandler, remove references to self.hs from presence handler 2016-05-16 19:08:40 +01:00
Matthew Hodgson
e501e9ecb2 tweak text 2016-05-16 19:02:22 +01:00
Matthew Hodgson
cbd2adc95e tune email notifs, fix CSS a bit, and add debugging details 2016-05-16 18:58:38 +01:00
Mark Haines
3b86ecfa79 Move the presence handler out of the Handlers object 2016-05-16 18:56:37 +01:00
David Baker
647781ca56 Fix emailpusher import
Try importing at the root level rather than conditionally importing, as per comment
2016-05-16 18:41:32 +01:00
Erik Johnston
c39f305067 os.environ requires a string 2016-05-16 17:21:30 +01:00
Erik Johnston
678c8a7f1e Merge pull request #785 from matrix-org/erikj/cache_factor_synctly
Make synctl read a cache factor from config file
2016-05-16 17:07:35 +01:00
Erik Johnston
c5c5a7403b Make synctl read a cache factor from config file 2016-05-16 17:01:57 +01:00
Matthew Hodgson
2d98c960ec Merge pull request #760 from matrix-org/matthew/preview_url_ip_whitelist
add a url_preview_ip_range_whitelist config param
2016-05-16 13:13:26 +01:00
Mark Haines
eb79110beb Clean up the blacklist/whitelist handling.
Always set the config key with an empty list, even if a list isn't specified.
This means that the codepaths are the same for both the empty list and
for a missing key. Since the behaviour is the same for both cases this
makes the code somewhat easier to reason about.
2016-05-16 13:03:59 +01:00
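
The clean-up in miniature: a missing key and an explicit empty list collapse to the same value, so every later codepath can simply iterate (key names as in the related PR):

```
config["url_preview_ip_range_blacklist"] = (
    config.get("url_preview_ip_range_blacklist") or [])
config["url_preview_ip_range_whitelist"] = (
    config.get("url_preview_ip_range_whitelist") or [])
```
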
Mark Haines
dd95eb4cb5 Merge branch 'develop' into matthew/preview_url_ip_whitelist 2016-05-16 12:59:41 +01:00
Erik Johnston
60d53f9e95 Count number of GC collects 2016-05-16 09:34:42 +01:00
Matrix
c0112fabc2 fix logo 2016-05-13 18:07:21 +01:00
Matthew Hodgson
0e8364c7f5 rogue img 2016-05-13 17:53:31 +01:00
Matthew Hodgson
782471b7e1 fix matrix.to URLs 2016-05-13 17:50:16 +01:00
Mark Haines
0466454b00 Assert that stream replicated stream positions are ints 2016-05-13 17:33:44 +01:00
Mark Haines
077468f6a9 Merge pull request #780 from matrix-org/dbkr/email_notifs_on_pusher
Make email notifs work on the pusher synapse
2016-05-13 17:27:46 +01:00
Mark Haines
b3f29dc1e5 Manually expire broken caches like the who_forgot_in_room 2016-05-13 17:16:27 +01:00
Mark Haines
f03ddc98ec Use the SlavedAccountDataStore 2016-05-13 17:01:28 +01:00
Mark Haines
1f71f386f6 Merge branch 'develop' into dbkr/email_notifs_on_pusher 2016-05-13 16:59:56 +01:00
Mark Haines
206eb9fd94 Shift some of the state_group methods into the SlavedEventStore 2016-05-13 16:58:14 +01:00
Erik Johnston
7d6e89ed22 Add a comment 2016-05-13 16:31:08 +01:00
Mark Haines
21018c2c13 Merge pull request #783 from matrix-org/markjh/slave_account_data
Add a slaved datastore for account data
2016-05-13 15:56:04 +01:00
Mark Haines
a8affd606e Merge pull request #784 from matrix-org/markjh/receipts_fix
Allow receipts for events we haven't seen in the db
2016-05-13 15:55:26 +01:00
Mark Haines
b7381d5338 Allow receipts for events we haven't seen in the db 2016-05-13 15:46:41 +01:00
Mark Haines
3abab26458 Add a slaved datastore for account data 2016-05-13 15:34:06 +01:00
Erik Johnston
99b5a2e560 Merge pull request #741 from negzi/create_user_with_expiry
Create user with expiry
2016-05-13 14:46:53 +01:00
Erik Johnston
ba5c616ff4 Merge pull request #778 from matrix-org/erikj/add_pusher
Fixup add_pusher
2016-05-13 14:43:23 +01:00
Erik Johnston
0c11c1be88 Spelling 2016-05-13 14:42:25 +01:00
Erik Johnston
e00e8f2166 Merge pull request #769 from matrix-org/erikj/push_actions_delete
Delete old pushers
2016-05-13 14:41:36 +01:00
Erik Johnston
fd8e921b6e Merge pull request #779 from matrix-org/erikj/receipts
Use tree cache for get_linearized_receipts_for_room
2016-05-13 14:41:21 +01:00
Erik Johnston
024cda9a97 Merge pull request #782 from matrix-org/erikj/remove_indices
Remove unused indices
2016-05-13 14:40:45 +01:00
Erik Johnston
c9aff0736c Remove topics table 2016-05-13 14:40:38 +01:00
Negi Fazeli
40aa6e8349 Create user with expiry
- Add unittests for client, api and handler

Signed-off-by: Negar Fazeli <negar.fazeli@ericsson.com>
2016-05-13 15:34:15 +02:00
Mark Haines
9295fa30a8 Annotate the removed indicies with why they were removed. 2016-05-13 14:16:57 +01:00
Erik Johnston
5e50058473 Remove unused indices
This includes removing both unused indices and indices that are subsets
of other indices.
2016-05-13 13:28:07 +01:00
Mark Haines
cdda850ce1 Merge pull request #781 from matrix-org/markjh/replication_problems
Fix a bug in replication that was causing the pusher to tight loop
2016-05-13 12:05:19 +01:00
Mark Haines
0e792e7903 Log the stream IDs in an order that makes sense 2016-05-13 11:54:44 +01:00
Mark Haines
3547e66bc6 Make sure we advance our stream position 2016-05-13 11:53:00 +01:00
Erik Johnston
6da7f39d95 Use tree cache for get_linearized_receipts_for_room 2016-05-13 11:41:23 +01:00
David Baker
b5e646a18c Make email notifs work on the pusher synapse
Plus general bugfix to email notif code
2016-05-13 11:36:50 +01:00
Mark Haines
048b3ece36 Merge pull request #777 from matrix-org/markjh/move_filter_for_client
move filter_events_for_client out of base handler
2016-05-13 11:30:22 +01:00
Erik Johnston
13d37c3c56 Fixup add_pusher 2016-05-13 11:25:02 +01:00
Mark Haines
a458a40337 missed a spot 2016-05-12 18:19:58 +01:00
Mark Haines
7e23476814 move filter_events_for_client out of base handler 2016-05-11 13:42:37 +01:00
Mark Haines
260b498ee5 Merge pull request #776 from matrix-org/markjh/lazy_signing_key
Shuffle when we get the signing_key attribute.
2016-05-11 13:31:29 +02:00
Mark Haines
1620578b13 Shuffle when we get the signing_key attribute.
Wait until we sign a message to get the signing key from the homeserver
config. This means that the message handler can be created without
having a signing key in the config which means that separate processes
like the pusher that don't send messages and don't need to sign them can
still access the handlers.
2016-05-11 12:20:57 +01:00
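
Sketched, the shuffle just defers the attribute read from construction time to signing time; the class and attribute names are illustrative:

```
class MessageHandler(object):
    def __init__(self, hs):
        # Nothing is read from the config here, so a pusher process
        # whose config has no signing key can still build the handler.
        self.hs = hs

    @property
    def signing_key(self):
        # Consulted only at the moment a message is actually signed.
        return self.hs.config.signing_key[0]
```
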
Erik Johnston
108434e53d Merge pull request #775 from matrix-org/erikj/password_hash
Correctly handle NULL password hashes from the database
2016-05-11 12:18:13 +01:00
Erik Johnston
1400bb1663 Correctly handle NULL password hashes from the database 2016-05-11 12:06:02 +01:00
Mark Haines
a284a32d78 Merge pull request #774 from matrix-org/markjh/shuffle_base_handler
Shuffle some methods out of base handler
2016-05-11 13:00:28 +02:00
Mark Haines
458a435114 Fix typo 2016-05-11 10:35:33 +01:00
Mark Haines
30057b1e15 Move _create_new_client_event and handle_new_client_event out of base handler 2016-05-11 09:09:20 +01:00
David Baker
ae1af262f6 Pass through _get_state_group_for_events 2016-05-10 19:18:03 +02:00
David Baker
90afc07f39 StateStore, not EventsStore 2016-05-10 19:10:46 +02:00
David Baker
89b5ef7c4b Cached functions must be accessed through the dict 2016-05-10 19:05:22 +02:00
David Baker
35b6e6d2a8 Pass though _get_state_group_for_events 2016-05-10 18:56:40 +02:00
David Baker
3367e65476 Pass through get_state_groups 2016-05-10 18:53:15 +02:00
David Baker
0c4ccdcb83 Also pass through get_profile_displayname 2016-05-10 18:51:14 +02:00
David Baker
f28643cea9 Uncommit accidentally commited edit to cipher list 2016-05-10 18:44:32 +02:00
David Baker
5f46be19a7 Pass through get_events to pusher too 2016-05-10 18:43:40 +02:00
David Baker
d46b18a00f Pass through _get_event_txn 2016-05-10 18:27:06 +02:00
Matrix
3b1930e8ec unbreak schema 2016-05-10 16:42:37 +01:00
Matthew Hodgson
fe97b81c09 Merge pull request #759 from matrix-org/dbkr/email_notifs
Send email notifications for missed messages
2016-05-10 16:30:05 +02:00
David Baker
997db04648 Merge remote-tracking branch 'origin/develop' into dbkr/email_notifs 2016-05-10 14:40:19 +02:00
David Baker
c00b484eff More consistent config naming 2016-05-10 14:39:16 +02:00
David Baker
94040b0798 Add config option to not send email notifs for new users 2016-05-10 14:34:53 +02:00
Erik Johnston
e581754f09 Merge pull request #763 from matrix-org/erikj/ignore_user
Implement basic ignore user API
2016-05-10 13:27:41 +01:00
David Baker
e04b1d6b0a Make pep8 happy 2016-05-10 14:23:16 +02:00
Matthew Hodgson
5599608887 Switch from CSS to Table layout for HTML mails so they work in Outlook aka Word
Remove templates-vector and instead theme the templates with variables
Switch to matrix.to URLs by default for links
2016-05-10 00:14:48 +02:00
Matthew Hodgson
6b45ffd2d1 Switch from CSS to Table layout for HTML mails so they work in Outlook aka Word
Remove templates-vector and instead theme the templates with variables
Switch to matrix.to URLs by default for links
2016-05-09 20:16:56 +02:00
Erik Johnston
c9eb6dfc1b Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ignore_user 2016-05-09 13:21:06 +01:00
Erik Johnston
3f84da139c Merge pull request #773 from matrix-org/erikj/get_domian_from_id
Add and use get_domain_from_id
2016-05-09 13:21:00 +01:00
Erik Johnston
def64d6ef3 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ignore_user 2016-05-09 13:05:09 +01:00
Erik Johnston
100e2c42f6 Merge pull request #764 from matrix-org/erikj/replication_logging
Add some log information at returned replication streams
2016-05-09 11:18:00 +01:00
Erik Johnston
8715731559 Merge pull request #772 from matrix-org/erikj/get_user_cache
Add cache to get_user_by_id
2016-05-09 11:12:11 +01:00
Erik Johnston
34b3af3363 Merge pull request #770 from matrix-org/erikj/transaction_reactor
Run transaction queue on reactor
2016-05-09 11:12:06 +01:00
Erik Johnston
7354667bd3 Merge pull request #768 from matrix-org/erikj/queue_evens_persist
Queue events by room for persistence
2016-05-09 11:11:53 +01:00
Erik Johnston
08dfa8eee2 Add and use get_domain_from_id 2016-05-09 10:36:03 +01:00
Erik Johnston
f904b1c60c Merge pull request #766 from sbts/patch-1
Fix Typo in README.rst s/Halp/Help/
2016-05-09 10:21:14 +01:00
Erik Johnston
1f1dee94f6 Manually run GC on reactor tick.
This also adds a metric for amount of time spent in GC.
2016-05-09 10:13:25 +01:00
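One way to approximate what that commit describes, with a ``LoopingCall`` standing in for the reactor-tick hook and a plain counter standing in for the real metric (both hypothetical here)::

    import gc
    import time

    from twisted.internet import task

    gc_seconds = [0.0]  # stand-in for the "time spent in GC" metric

    def run_gc():
        start = time.time()
        gc.collect()
        gc_seconds[0] += time.time() - start

    # Run collections from the reactor on a fixed cadence instead of
    # letting them fire at unpredictable points mid-request; started
    # once at startup, with the reactor running.
    task.LoopingCall(run_gc).start(1.0)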
Erik Johnston
f6ebaf4a32 Run transaction queue on reactor
This ensures that any CPU work that happens doesn't block message
sending.
2016-05-09 10:10:06 +01:00
Erik Johnston
4ea762c1a2 Add cache to get_user_by_id 2016-05-09 10:08:21 +01:00
Erik Johnston
012cb5416c Merge branch 'develop' of github.com:matrix-org/synapse into erikj/push_actions_delete 2016-05-06 15:59:20 +01:00
Erik Johnston
fcb2c3f0db Remove unused import 2016-05-06 15:47:40 +01:00
Erik Johnston
fd85b167ec Pull loop one level up 2016-05-06 15:38:42 +01:00
Erik Johnston
b6e0be701e Queue events for persistence 2016-05-06 14:31:44 +01:00
Erik Johnston
96d9d5d388 Merge pull request #767 from matrix-org/erikj/transaction_txn
Reduce database inserts when sending transactions
2016-05-06 13:20:43 +01:00
Erik Johnston
d13459636f Pull prev txn from in memory 2016-05-06 11:30:55 +01:00
Erik Johnston
1d275dba69 Don't needlessly enter transaction 2016-05-06 11:25:58 +01:00
Erik Johnston
56b5e83e36 Reduce database inserts when sending transactions 2016-05-06 11:20:18 +01:00
Mark Haines
1f590f3e9a Merge pull request #765 from matrix-org/markjh/open_id
Add an openidish mechanism for proving that you own a given user_id
2016-05-05 19:26:30 +01:00
David
1b45e6a9bc Fix Typo in README.rst s/Halp/Help/ 2016-05-06 02:07:59 +08:00
Matthew Hodgson
53ca739f1f better mail subject lines 2016-05-05 15:55:44 +01:00
Matthew Hodgson
81c2176cba fix layout; handle app naming in synapse, not jinja 2016-05-05 15:54:29 +01:00
Mark Haines
573ef3f1c9 Rename openid/token to openid/request_token 2016-05-05 15:15:00 +01:00
Erik Johnston
8940281d1b Don't warn 2016-05-05 15:10:03 +01:00
Mark Haines
9c272da05f Add an openidish mechanism for proving to third parties that you own a given user_id 2016-05-05 13:42:44 +01:00
Matthew Hodgson
a5974f89d6 fix app branding 2016-05-05 11:25:56 +01:00
Erik Johnston
5d8a93a10e Add some log information at returned replication streams 2016-05-05 10:29:21 +01:00
Erik Johnston
1f0f5ffa1e Add bulk fetch storage API 2016-05-05 10:03:15 +01:00
Matthew Hodgson
634efb65f1 pep8 2016-05-05 02:10:57 +01:00
Matthew Hodgson
c64d5fc66c fix room.txt 2016-05-05 02:00:33 +01:00
Matthew Hodgson
3c39fa8902 First cut at Vector-branded mail templates 2016-05-05 02:00:33 +01:00
Matthew Hodgson
ce81ccb063 handle fragments correctly on mxc URLs.
switch to vector.im permalinks as matrix.to isn't ready yet.
merge overlapping notifications together.
give one message of context after a notification (in the unlikely event it exists, but it's possible thanks to throttling).
include name of app in mail templates
2016-05-05 02:00:33 +01:00
Matthew Hodgson
1cf5c379cb spell out emailpusher full path 2016-05-05 01:59:39 +01:00
Erik Johnston
fee1118a20 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ignore_user 2016-05-04 19:08:27 +01:00
Erik Johnston
97b9141245 Merge pull request #762 from matrix-org/erikj/report_event
Add /report endpoint
2016-05-04 19:05:49 +01:00
Erik Johnston
fcd1eb642d Add primary key 2016-05-04 16:51:51 +01:00
Erik Johnston
8e6a163f27 Add timestamp and auto incrementing ID 2016-05-04 15:19:12 +01:00
David Baker
39d0a99972 Include no context
until we can de-dup between the context and other notifs
2016-05-04 14:52:49 +01:00
David Baker
9ef05a12c3 Add date header & message id 2016-05-04 14:52:10 +01:00
David Baker
80be396464 Correct SQL statement for postgres
In standard SQL, JOIN binds tighter than the comma, so we were joining on the wrong table. Postgres follows the standard (apparently).
2016-05-04 13:19:59 +01:00
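The precedence difference is easy to demonstrate with a hypothetical three-table query (not the query from the commit). SQLite evaluates ``FROM`` items strictly left to right, while PostgreSQL parses ``a, b JOIN c`` as ``a, (b JOIN c)`` and rejects the reference to ``a`` inside the ``ON`` clause::

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript(
        "CREATE TABLE a (x); CREATE TABLE b (x); CREATE TABLE c (x);"
    )

    # Accepted by sqlite, rejected by postgres: JOIN binds tighter than
    # the comma, so the ON clause belongs to "b JOIN c" alone.
    conn.execute("SELECT * FROM a, b JOIN c ON a.x = c.x")

    # Spelling the joins out removes the ambiguity on both engines.
    conn.execute("SELECT * FROM a CROSS JOIN b JOIN c ON a.x = c.x")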
Erik Johnston
5650e38e7d Move event_id to path 2016-05-04 13:19:39 +01:00
Matthew Hodgson
8a04412fa1 starting point for doc on how log contexts are supposed to work 2016-05-04 12:19:04 +01:00
David Baker
8cc82aad87 Add db functions used for email to the pusher app 2016-05-04 11:47:59 +01:00
David Baker
de22001ab5 pep8 2016-05-04 11:41:35 +01:00
Matthew Hodgson
f1026418ea copyright 2016-05-04 11:38:01 +01:00
Matthew Hodgson
17cbf773b9 fix assorted typos in default config 2016-05-04 11:38:01 +01:00
Erik Johnston
984d4a2c0f Add /report endpoint 2016-05-04 11:28:10 +01:00
David Baker
e6bffa4475 Unused import 2016-05-04 11:26:58 +01:00
David Baker
92f0f3d21d Catch all exceptions when creating a pusher 2016-05-04 11:24:07 +01:00
Erik Johnston
a438a6d2bc Implement basic ignore user 2016-05-04 10:16:46 +01:00
Erik Johnston
7ea3b4118d Merge pull request #757 from matrix-org/erikj/event_auth_typo
Fix typo in event_auth servlet path
2016-05-03 14:23:15 +01:00
Erik Johnston
183f23f10d Delete old pushers 2016-05-03 14:22:33 +01:00
Matthew Hodgson
792def4928 add a url_preview_ip_range_whitelist config param so we can whitelist the matrix.org IP space 2016-05-01 12:44:24 +01:00
David Baker
2df75de505 Merge remote-tracking branch 'origin/develop' into dbkr/email_notifs 2016-04-29 20:28:47 +01:00
David Baker
b084e4d963 Add constant for throttle multiplier 2016-04-29 20:14:55 +01:00
David Baker
35b7b8e4bc Remove unused function 2016-04-29 20:10:34 +01:00
David Baker
6b9b6a9169 Remove unused arg 2016-04-29 20:02:52 +01:00
David Baker
c7c75e87dc Docstring 2016-04-29 19:47:35 +01:00
David Baker
b0a1036d93 Use explicit join 2016-04-29 19:28:56 +01:00
David Baker
8f99cd5996 Oops, actually specify the user id 2016-04-29 19:27:03 +01:00
David Baker
60f44c098d Remove unnecessary if 2016-04-29 19:17:10 +01:00
David Baker
50ad8005e4 Put spaces at start of line 2016-04-29 19:16:15 +01:00
David Baker
83618d719a Try imports in config 2016-04-29 19:13:52 +01:00
David Baker
e7a76b5123 Use the constant 2016-04-29 19:10:45 +01:00
David Baker
29c8cf8db8 Avoid vars builtin 2016-04-29 19:09:28 +01:00
David Baker
d3da5294e8 Use named parameter format 2016-04-29 19:04:40 +01:00
David Baker
765f2b8446 Default enable email notifs to False 2016-04-29 14:46:18 +01:00
David Baker
ff5e5423e5 Include res in the manifest 2016-04-29 14:43:48 +01:00
David Baker
311b5ce051 pep8 2016-04-29 14:37:30 +01:00
David Baker
3facde2536 Remove rather pointless get function 2016-04-29 14:36:45 +01:00
David Baker
4364ea1272 Stop processing notifs once we've sent a mail 2016-04-29 14:31:27 +01:00
David Baker
4b0c3a3270 Correct public_baseurl default 2016-04-29 14:30:15 +01:00
David Baker
5048455965 Nicer get() shorthand 2016-04-29 14:27:40 +01:00
David Baker
6c8957be7f Remove redundant docstring 2016-04-29 14:25:28 +01:00
David Baker
18ce88bd2d Correct default template and add text template 2016-04-29 14:24:25 +01:00
David Baker
40d40e470d Send mail notifs with a plaintext part too 2016-04-29 13:56:21 +01:00
David Baker
56aae0eaf5 Merge pull request #758 from matrix-org/dbkr/fix_password_reset
Fix password reset
2016-04-29 12:45:39 +01:00
David Baker
dc2c527ce9 Fix password reset
Default requester to None, otherwise it isn't defined when resetting using email auth
2016-04-29 12:07:54 +01:00
Erik Johnston
62b51b8452 Fix typo in event_auth servlet path 2016-04-29 12:00:51 +01:00
David Baker
b2c04da8dc Add an email pusher for new users
If they registered with an email address and email notifs are enabled on the HS
2016-04-29 11:43:57 +01:00
David Baker
ec9cbe847d pep8 newline 2016-04-29 10:07:30 +01:00
David Baker
acded821c4 Merge remote-tracking branch 'origin/develop' into dbkr/email_notifs 2016-04-29 10:05:20 +01:00
David Baker
69e519052b Remove vector specific style 2016-04-28 17:34:32 +01:00
David Baker
46b5547a42 Some basic css to make it halfway legible 2016-04-28 17:29:39 +01:00
David Baker
36bb5c2383 Fix notification link 2016-04-28 17:28:48 +01:00
David Baker
e800ee2f63 May as well always include room link 2016-04-28 17:28:27 +01:00
David Baker
cc0874cf71 Put back real delay before mailing 2016-04-28 17:00:40 +01:00
David Baker
68f8fc2f14 Support file messages & fix plain text 2016-04-28 16:59:57 +01:00
David Baker
4845c7359d Support image notifs 2016-04-28 15:55:53 +01:00
Mark Haines
351b50a887 Fix more typos in per-request metrics 2016-04-28 15:29:46 +01:00
David Baker
60f86fc876 pep8 2016-04-28 15:16:30 +01:00
David Baker
937c407eef Only import email pusher if email notifs are on 2016-04-28 15:12:14 +01:00
Mark Haines
dcfc10b129 Fix typo in request metrics 2016-04-28 15:11:06 +01:00
Matthew Hodgson
aebd0c9717 fix typo 2016-04-28 15:09:25 +01:00
Mark Haines
1c188cf73c Merge pull request #756 from matrix-org/markjh/more_metrics
Report per request metrics for all of the things using request_handler
2016-04-28 14:59:45 +01:00
Mark Haines
1a12766e3b Add a comment explaining why automatic metric reporting is disabled for JsonResource 2016-04-28 12:31:26 +01:00
Mark Haines
6037349512 Check if report_metrics is True 2016-04-28 12:26:07 +01:00
David Baker
ebbabc4986 Handle room invites in email notifs 2016-04-28 11:49:36 +01:00
Mark Haines
8d7ad44331 Report per request metrics for all of the things using request_handler 2016-04-28 10:57:49 +01:00
David Baker
5367708236 Add the jinja template for individual notifs 2016-04-28 10:55:32 +01:00
David Baker
9dba1b668c Linkify plain text messages too 2016-04-28 10:55:08 +01:00
David Baker
424a7f48f8 Run filter_events_for_client
so we don't accidentally mail out events people shouldn't see
2016-04-27 17:50:49 +01:00
David Baker
4ed1e45869 Make html messages work 2016-04-27 17:18:51 +01:00
Mark Haines
21d188bf95 Merge pull request #755 from matrix-org/markjh/right_direction
Fix backfill replication to advance the stream correctly
2016-04-27 15:54:48 +01:00
Mark Haines
8a65666454 Fix backfill replication to advance the stream correctly 2016-04-27 15:38:43 +01:00
David Baker
8781083960 Better grammar for multiple messages in a room
Say who the messages are from if there's no room name, otherwise it's a bit nonsensical
2016-04-27 15:30:41 +01:00
David Baker
fa12209c1b Hopefully all remaining bits for email notifs
Add public facing base url to the server so synapse knows what URL to use when converting mxc to http urls for use in emails
2016-04-27 15:09:55 +01:00
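A rough sketch of the conversion the public facing base URL enables (helper name hypothetical; the download path is the v1 media API, and ``public_baseurl`` is assumed to end with a slash)::

    def mxc_to_http(public_baseurl, mxc_url):
        # mxc URLs look like mxc://<server_name>/<media_id> and are only
        # resolvable via a homeserver, so emails need an absolute HTTP URL.
        server_name, media_id = mxc_url[len("mxc://"):].split("/", 1)
        return "%s_matrix/media/v1/download/%s/%s" % (
            public_baseurl, server_name, media_id,
        )

    mxc_to_http("https://matrix.org/", "mxc://matrix.org/abcDEF")
    # -> "https://matrix.org/_matrix/media/v1/download/matrix.org/abcDEF"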
Mark Haines
c7c03bf303 Merge pull request #754 from matrix-org/markjh/check_for_nop
Check that something has happened before running the selects
2016-04-27 13:36:47 +01:00
Mark Haines
871357d539 Check that something has happened before running the selects 2016-04-27 11:54:13 +01:00
Mark Haines
71df327190 Actually start the pusher daemon 2016-04-26 17:07:09 +01:00
Mark Haines
c9eab73f2a Fix typo in default pusher config 2016-04-26 17:06:18 +01:00
Mark Haines
47571d11db Merge pull request #753 from matrix-org/markjh/daemon_pusher
Optionally daemonize the pusher
2016-04-26 16:11:44 +01:00
Mark Haines
b80b93ea0f Add a log context to the daemonized pusher 2016-04-26 15:57:28 +01:00
Mark Haines
6df5a6a833 Optionally daemonize the pusher 2016-04-26 15:37:41 +01:00
Erik Johnston
5164ccc3e5 Bump changelog and version 2016-04-26 11:26:32 +01:00
Erik Johnston
3306cf45ca Merge pull request #750 from matrix-org/erikj/jwt_optional
Make pyjwt dependency optional
2016-04-26 11:07:22 +01:00
Mark Haines
c487c42492 Merge pull request #752 from matrix-org/markjh/more_updates
Add a couple of update methods to the PusherSlaveStore
2016-04-26 11:00:40 +01:00
Mark Haines
9c417c54d4 Add a couple of update methods to the PusherSlaveStore 2016-04-26 10:45:02 +01:00
Mark Haines
9843f2a657 Merge pull request #751 from matrix-org/markjh/pusher_metrics_manhole
Add a metrics listener and a ssh listener to the pusher
2016-04-26 10:35:51 +01:00
David Baker
7b4715bad7 More variable calculation for email notifs
Include name of the person we're sending to and add summary text at the top giving an overview of what's happened.
2016-04-25 18:27:04 +01:00
Mark Haines
f15e9e8de4 Remove the uncomments from the comments 2016-04-25 17:56:24 +01:00
Mark Haines
72e2fafa20 Add a metrics listener and a ssh listener to the pusher 2016-04-25 17:34:25 +01:00
Mark Haines
233bf78ab4 Merge pull request #749 from matrix-org/markjh/split_manhole
Split out setting up the manhole to a separate file
2016-04-25 15:23:53 +01:00
Mark Haines
f22f46f4f9 Move the listenTCP call outside the manhole function 2016-04-25 14:59:21 +01:00
David Baker
290f125a13 Typo 2016-04-25 14:42:59 +01:00
Erik Johnston
52ecbc2843 Make pyjwt dependency optional 2016-04-25 14:30:15 +01:00
David Baker
05e49ffbdf No we don't: it's just the display name 2016-04-22 18:44:17 +01:00
David Baker
bd0f9c2065 Actually do UTF8 correctly 2016-04-22 18:42:00 +01:00
David Baker
c5b3c6e101 Sort member events
So names of people in a room are given in order
2016-04-22 18:33:36 +01:00
David Baker
83bf65297a Mime part is binary so encode it first.
Doesn't get character encoding right yet but makes it not error.
2016-04-22 18:31:47 +01:00
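For reference, the stdlib handles this once the charset is passed explicitly; a minimal sketch (the fuller UTF-8 handling lands in the "Actually do UTF8 correctly" commit above)::

    from email.mime.text import MIMEText

    # Passing _charset makes MIMEText encode the unicode body itself,
    # rather than the part carrying raw unencoded text.
    part = MIMEText(u"caf\xe9 \u2615", _subtype="html", _charset="utf-8")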
David Baker
e8701e64b9 Implement group-of-people names 2016-04-22 17:28:42 +01:00
David Baker
c553797c4f No inlineCallbacks necessary on this 2016-04-22 17:27:54 +01:00
Mark Haines
5905f36f05 Split out setting up the manhole to a separate file 2016-04-22 17:09:15 +01:00
Mark Haines
c3f8dbf6b5 Merge pull request #748 from matrix-org/markjh/split_out_site.py
Move SynapseSite to its own file
2016-04-22 17:08:21 +01:00
Mark Haines
62607d5452 Merge branch 'develop' into markjh/split_out_site.py
Conflicts:
	synapse/app/homeserver.py
2016-04-22 16:26:57 +01:00
Mark Haines
e57df8fa86 Merge pull request #747 from matrix-org/markjh/http_resource_tree
Split out create_resource_tree to a separate file
2016-04-22 16:24:55 +01:00
Mark Haines
e856036f4c Move SynapseSite to its own file 2016-04-22 16:09:55 +01:00
Mark Haines
9e7aa98c22 Split out create_resource_tree to a separate file 2016-04-22 15:40:51 +01:00
Mark Haines
2022ae0fb9 Merge pull request #746 from matrix-org/markjh/split_out_pusher
Optionally split out the pushers into a separate process
2016-04-22 11:34:08 +01:00
Erik Johnston
64ec3493c1 Merge pull request #745 from matrix-org/erikj/search-index
Optimise event_search in postgres
2016-04-22 11:23:49 +01:00
Erik Johnston
4063fe0283 Update port script 2016-04-22 10:35:53 +01:00
Erik Johnston
183cacac90 Simplify query and handle finishing correctly 2016-04-22 10:01:57 +01:00
David Baker
19d6b6cd7a Add WIP email template files 2016-04-21 19:22:53 +01:00
David Baker
c10ed26c30 Flesh out email templating
Mostly WIP porting the room name calculation logic from the web client so our room names in the email mirror the clients.
2016-04-21 19:19:07 +01:00
Erik Johnston
ae571810f2 Order NULLs first 2016-04-21 18:14:18 +01:00
Erik Johnston
3ddbb1687c Fix query 2016-04-21 18:02:36 +01:00
Erik Johnston
8fae3d7b1e Use special UPDATE syntax 2016-04-21 18:01:49 +01:00
Erik Johnston
b57dcb4b51 Typo 2016-04-21 17:49:00 +01:00
Erik Johnston
26db18bc90 Need to do _background_update_progress_txn in actual transaction 2016-04-21 17:45:56 +01:00
Erik Johnston
b9675ef6e6 Merge pull request #687 from nikriek/jwt-fix
Fix issues with JWT login
2016-04-21 17:42:25 +01:00
Erik Johnston
e395eb1108 Update progress when creating index 2016-04-21 17:39:24 +01:00
Erik Johnston
3b0fa77f50 Fix SQL statement 2016-04-21 17:37:42 +01:00
Erik Johnston
129e403487 Create index must be on a conn 2016-04-21 17:35:51 +01:00
Mark Haines
a3ac837599 Optionally split out the pushers into a separate process 2016-04-21 17:22:37 +01:00
Mark Haines
78741cf025 Merge pull request #743 from matrix-org/markjh/slave_pushers
Replicate the pushers
2016-04-21 17:21:29 +01:00
Erik Johnston
51bb339ab2 Create index concurrently 2016-04-21 17:16:11 +01:00
Erik Johnston
b743c1237e Add missing run_upgrade 2016-04-21 17:12:04 +01:00
Mark Haines
31719ad124 Merge pull request #744 from matrix-org/markjh/replication_remove_pusher
Add a replication endpoint for deleting pushers
2016-04-21 17:10:49 +01:00
Niklas Riekenbrauck
565c2edb0a Fix issues with JWT login 2016-04-21 18:10:48 +02:00
Erik Johnston
c877f0f034 Optimise event_search in postgres 2016-04-21 16:56:14 +01:00
Mark Haines
b15266fe06 Merge pull request #742 from matrix-org/markjh/slave_event_push_actions
Replicate push actions
2016-04-21 16:39:04 +01:00
Mark Haines
cfe1ff4bdb Add a replication endpoint for deleting pushers 2016-04-21 16:33:05 +01:00
Mark Haines
d4823efad9 Replicate the pushers 2016-04-21 16:18:00 +01:00
Mark Haines
9f53491cab Merge branch 'develop' into markjh/slave_event_push_actions 2016-04-21 16:00:43 +01:00
Mark Haines
02a27a6c4f pip install new python dependencies in jenkins.sh 2016-04-21 16:00:28 +01:00
Mark Haines
a611c968cc Merge branch 'develop' into markjh/slave_event_push_actions 2016-04-21 15:37:01 +01:00
Erik Johnston
59698906eb Make jenkins install lxml 2016-04-21 15:36:13 +01:00
Mark Haines
c0d8e0eb63 Replicate push actions 2016-04-21 15:25:58 +01:00
David Baker
2ed0adb075 Generate mails from a template 2016-04-20 18:35:29 +01:00
Erik Johnston
68ebb81e86 Merge pull request #740 from matrix-org/erikj/state_cache
Always use state cache entry if it exists
2016-04-20 13:52:59 +01:00
David Baker
05adc6c2de more pep8 2016-04-20 13:02:45 +01:00
David Baker
f63bd4ff47 Send a rather basic email notif
Also pep8 fixes
2016-04-20 13:02:01 +01:00
Erik Johnston
5bbc321588 Always use state cache entry if it exists
Also check if the resolved state matches an existing state group.
2016-04-20 11:49:10 +01:00
Erik Johnston
4cf4320593 Add some logging to state resolve_events 2016-04-20 11:06:02 +01:00
Erik Johnston
eab47ea1e5 Merge pull request #739 from matrix-org/erikj/cache_get_state_groups_for_groups
Add cache to _get_state_groups_from_groups
2016-04-19 17:37:19 +01:00
Mark Haines
f52dd35ac3 Merge pull request #738 from matrix-org/markjh/slaved_receipts
Add a slaved receipts store
2016-04-19 17:31:59 +01:00
Erik Johnston
61c7edfd34 Add cache to _get_state_groups_from_groups 2016-04-19 17:22:03 +01:00
Mark Haines
5bbd424ee0 Add a slaved receipts store 2016-04-19 17:14:08 +01:00
Erik Johnston
6ac40f7b65 Merge pull request #737 from matrix-org/erikj/spider_ssl_factory
Use tls_server_context_factory for SpiderEndpoint
2016-04-19 16:22:05 +01:00
Erik Johnston
f505575f69 Make InsecureInterceptableContextFactory work with SpiderEndpoint 2016-04-19 16:08:14 +01:00
Mark Haines
4084c58aa1 Merge pull request #736 from matrix-org/markjh/slave_invited_rooms_for_user
Replicate get_invited_rooms_for_user
2016-04-19 15:46:00 +01:00
Mark Haines
e99365f601 Replicate get_invited_rooms_for_user 2016-04-19 15:22:14 +01:00
David Baker
e2a01455af Add single instance & logging stuff
Copy the stuff over from the http pusher that prevents multiple instances of the process running at once, and sets up logging and measure blocks.
2016-04-19 14:52:58 +01:00
Erik Johnston
e8884e5e9c Add self.media_repo to PreviewUrlResource 2016-04-19 14:51:34 +01:00
Erik Johnston
a7001c311b _make_dirs was moved to MediaRepository 2016-04-19 14:49:31 +01:00
Erik Johnston
9181e2f4c7 Add store to PreviewUrlResource 2016-04-19 14:48:24 +01:00
Erik Johnston
fb76a81ff7 Reorder imports 2016-04-19 14:45:05 +01:00
David Baker
07d765209d First bits of emailpusher
Mostly logic of when to send an email
2016-04-19 14:24:36 +01:00
Erik Johnston
48af68ba8e Merge pull request #735 from matrix-org/erikj/media_resource_cleanup
Split out BaseMediaResource into MediaRepository
2016-04-19 13:56:59 +01:00
Erik Johnston
0c93df89b6 Move MediaRepository to media_repository module 2016-04-19 11:31:43 +01:00
Erik Johnston
43f0941e8f Split out BaseMediaResource into MediaRepository
This is so that a single MediaRepository can be shared across all
resources, rather than having a "copy" per resource.

In particular this allows us to guard against both the thumbnail and
download resource triggering a download of remote content at the same
time.
2016-04-19 11:24:59 +01:00
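A sketch of the guard a single shared repository makes possible (names hypothetical; synapse's real implementation additionally wraps the deferred so several callers can observe it safely)::

    class MediaRepository(object):
        def __init__(self):
            # (server_name, media_id) -> Deferred for an in-flight fetch
            self.downloads = {}

        def get_remote_media(self, server_name, media_id):
            key = (server_name, media_id)
            download = self.downloads.get(key)
            if download is None:
                # First caller triggers the fetch; concurrent callers
                # reuse the same in-flight download.
                download = self._fetch_remote_media(server_name, media_id)
                self.downloads[key] = download

                def cleanup(result):
                    self.downloads.pop(key, None)
                    return result
                download.addBoth(cleanup)
            return download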
Erik Johnston
481119f7d6 Merge pull request #734 from matrix-org/erikj/measure
Create log context in Measure if one doesn't exist
2016-04-18 19:03:57 +01:00
Erik Johnston
9f56645038 Merge pull request #728 from OlegGirko/systemd_env_file
Add environment file to systemd unit configuration.
2016-04-18 16:22:17 +01:00
Erik Johnston
eb8619e256 Create log context in Measure if one doesn't exist 2016-04-18 16:08:32 +01:00
Erik Johnston
4ef7a25c10 Merge pull request #733 from matrix-org/erikj/make_member_timeout
Lower timeout for make_membership_event
2016-04-18 15:08:05 +01:00
Erik Johnston
3727a15764 Merge pull request #732 from matrix-org/erikj/login
Simplify _check_password
2016-04-18 15:07:57 +01:00
Matthew Hodgson
aaabbd3e9e explicitly pass in the charset from Content-Type to lxml to fix cyrillic woes better 2016-04-15 14:32:25 +01:00
Matthew Hodgson
84f9cac4d0 fix cyrillic URL previews by hardcoding all page decoding to UTF-8 for now, rather than relying on lxml's heuristics which seem to get it wrong 2016-04-15 13:20:08 +01:00
Erik Johnston
914f1eafac Lower timeout for make_membership_event
Calls to make_membership_event are done in response to client requests,
and so should not be retried over long timeframes.
2016-04-15 11:22:23 +01:00
Erik Johnston
6fd2f685fe Simplify _check_password 2016-04-15 11:17:18 +01:00
Erik Johnston
737aee9295 Merge pull request #731 from matrix-org/erikj/timed_otu
Use SynapseError 504 for Timeout errors
2016-04-15 10:31:17 +01:00
Erik Johnston
cb9c465707 Use SynapseError 504 for Timeout errors 2016-04-15 10:21:32 +01:00
Mark Haines
3c79bdd7a0 Fix check_password rather than inverting the meaning of _check_local_password (#730) 2016-04-14 19:00:21 +01:00
David Baker
a4c56bf67b Merge pull request #729 from matrix-org/dbkr/fix_login_nonexistent_user
Fix login to error for nonexistent users
2016-04-14 18:46:45 +01:00
David Baker
4c1b32d7e2 Fix login to error for nonexistent users
Fixes SYN-680
2016-04-14 18:28:42 +01:00
Matthew Hodgson
f78b479118 fix urlparse import thinko breaking tiny URLs 2016-04-14 15:23:55 +01:00
Kegsay
4802f9cdb6 Merge pull request #727 from matrix-org/kegan/fix-asapi-reg
Make v2_alpha reg follow the AS API specification
2016-04-14 15:08:54 +01:00
Kegan Dougal
83776d6219 Make v2_alpha reg follow the AS API specification
The spec is clear the key should be 'user' not 'username' and this is indeed
the case for v1. This is not true for v2_alpha though, which is what this
commit is fixing.
2016-04-14 14:52:26 +01:00
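In terms of the request body, the fix amounts to reading the key that v1 and the AS API already use; an abridged, hypothetical payload::

    as_register_body = {
        "type": "m.login.application_service",
        "user": "_irc_alice",  # the key the AS API specifies
    }

    # pre-fix v2_alpha (a KeyError for spec-compliant ASes):
    #     localpart = as_register_body["username"]
    localpart = as_register_body["user"]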
Oleg Girko
e83f8c0aa5 Add environment file to systemd unit configuration.
Now there is at least one environment variable that controls
synapse server's behaviour: SYNAPSE_CACHE_FACTOR.
So, it makes sense to make the systemd unit file use an environment
configuration file that can set this variable's value.

Signed-off-by: Oleg Girko <ol@infoserver.lv>
2016-04-14 14:46:18 +01:00
Matthew Hodgson
bd77216d06 comment out 2c838f6459 due to risk of https://en.wikipedia.org/wiki/Billion_laughs attacks - thanks @torhve 2016-04-14 14:39:24 +01:00
Erik Johnston
5a578ea4c7 Merge pull request #726 from matrix-org/erikj/push_metric
Measure push action generator
2016-04-14 13:49:09 +01:00
Erik Johnston
9ae64c9910 Measure push action generator 2016-04-14 13:42:22 +01:00
Erik Johnston
b42ad359e9 Merge pull request #725 from matrix-org/dbkr/push_only_joined
Don't push for everyone who ever sent an RR to the room
2016-04-14 12:05:13 +01:00
David Baker
757e2c79b4 Don't push for everyone who ever sent an RR to the room 2016-04-14 12:02:50 +01:00
Erik Johnston
86e9bbc74e Add missing yield 2016-04-14 11:56:52 +01:00
Erik Johnston
e40f25ebe1 Rename log context 2016-04-14 11:54:14 +01:00
Erik Johnston
ff1d333a02 Merge pull request #724 from matrix-org/erikj/push_measure
Add push index. Add extra Measure
2016-04-14 11:46:46 +01:00
Erik Johnston
2ae91a9e2f Make send_badge private 2016-04-14 11:37:50 +01:00
Erik Johnston
d213d69fe3 Add desc arg 2016-04-14 11:36:23 +01:00
Erik Johnston
56da835eaf Add necessary logging contexts 2016-04-14 11:33:50 +01:00
Erik Johnston
96bcfb29c7 Add index 2016-04-14 11:26:33 +01:00
Erik Johnston
7be1065b8f Add extra Measure 2016-04-14 11:26:15 +01:00
Erik Johnston
a2546b9082 Fix query for get_unread_push_actions_for_user_in_range 2016-04-14 11:08:31 +01:00
Erik Johnston
ceeb5b909f Merge pull request #721 from matrix-org/erikj/spider
Sanitize the optional dependencies for spider API
2016-04-14 09:59:29 +01:00
David Baker
43a89cca8e Merge pull request #722 from matrix-org/dbkr/only_unread_event_actions
Only return unread notifications
2016-04-13 14:54:26 +01:00
Erik Johnston
f338bf9257 Give install requirements 2016-04-13 14:33:48 +01:00
David Baker
767fc0b739 pep8 2016-04-13 14:23:27 +01:00
David Baker
54d08c8868 Only return unread notifications
Make get_unread_push_actions_for_user_in_range only return unread event actions, being more true to its name. Done in two separate SQL queries to get actions after a read receipt and those in a room with no receipt at all. SQL queries by Erik.
2016-04-13 14:16:45 +01:00
Erik Johnston
5880bc5417 Merge pull request #718 from matrix-org/erikj/public_room_list
Don't return empty public rooms
2016-04-13 14:07:26 +01:00
Erik Johnston
f613a3e332 Merge pull request #720 from matrix-org/erikj/auth_chec
Don't auto log failed auth checks
2016-04-13 14:07:23 +01:00
Erik Johnston
bfe586843f Add back in helpful description for missing url_preview_ip_range_blacklist 2016-04-13 13:52:57 +01:00
Erik Johnston
d0633e6dbe Sanitize the optional dependencies for spider API 2016-04-13 13:38:09 +01:00
Erik Johnston
0f2ca8cde1 Measure Auth.check 2016-04-13 11:15:59 +01:00
Erik Johnston
c53f9d561e Don't auto log failed auth checks 2016-04-13 11:11:46 +01:00
David Baker
65141161f6 Unused member variable 2016-04-12 16:25:26 +01:00
Erik Johnston
72f454b752 Don't return empty public rooms 2016-04-12 16:06:18 +01:00
Mark Haines
10ebbaea2e Update replication.rst 2016-04-12 15:53:45 +01:00
Mark Haines
aa5ce4d450 Add some design documentation for replication 2016-04-12 15:14:10 +01:00
David Baker
d33d623f0d Merge pull request #716 from matrix-org/dbkr/get_pushers
Add get endpoint for pushers
2016-04-12 14:40:37 +01:00
David Baker
7984ffdc6a Unneccessarywhitespaceisunnecessary 2016-04-12 13:55:57 +01:00
David Baker
c1267d04c5 Oops, forgot the desc. 2016-04-12 13:55:32 +01:00
David Baker
a04c076b7f Make the /set part mandatory 2016-04-12 13:54:41 +01:00
David Baker
44891b4a0a Tidy up get_pusher functions
Decodes pushers rows on the main thread rather than the db thread and uses _simple_select_list. Also do the same to the function I copied and factor out the duplication into a helper function.
2016-04-12 13:47:17 +01:00
David Baker
7b39bcdaae Mis-named function 2016-04-12 13:35:08 +01:00
David Baker
d937f342bb Split into separate servlet classes 2016-04-12 13:33:30 +01:00
Erik Johnston
318cb1f207 Merge pull request #717 from matrix-org/erikj/backfill_state
Check if we've already backfilled events
2016-04-12 13:30:30 +01:00
Erik Johnston
c48465dbaa More comments 2016-04-12 12:48:30 +01:00
Erik Johnston
8be1a37909 More comments 2016-04-12 12:04:19 +01:00
Erik Johnston
d3d0be4167 Don't append to unused list 2016-04-12 11:59:00 +01:00
Erik Johnston
762ada1e07 Add back backfilled parameter that was removed 2016-04-12 11:58:04 +01:00
Erik Johnston
0d3da210f0 Add comment 2016-04-12 11:54:41 +01:00
Erik Johnston
cccf86dd05 Check if we've already backfilled events 2016-04-12 11:19:32 +01:00
David Baker
8a76094965 Add get endpoint for pushers
As per https://github.com/matrix-org/matrix-doc/pull/308
2016-04-11 18:00:03 +01:00
Mark Haines
790f5848b2 Fix the rule_id for .m.rule.invite_for_me (#715) 2016-04-11 16:10:39 +01:00
Mark Haines
82d7eea7e3 Move the versionstring code out of app.homeserver into util 2016-04-11 14:57:09 +01:00
David Baker
2547dffccc Merge pull request #705 from matrix-org/dbkr/pushers_use_event_actions
Change pushers to use the event_actions table
2016-04-11 12:58:55 +01:00
David Baker
9bb041791c Run unsafe process in a loop until we've caught up
and wrap unsafe process in a try block
2016-04-11 12:48:30 +01:00
Erik Johnston
17515bae14 PEP8 2016-04-11 11:02:50 +01:00
Matthew Hodgson
4bd3d25218 Merge pull request #688 from matrix-org/matthew/preview_urls
URL previewing support
2016-04-11 10:40:29 +01:00
Matthew Hodgson
5ffacc5e84 fix typos and needless try/except from PR review 2016-04-11 10:39:16 +01:00
Matthew Hodgson
83b2f83da0 actually throw meaningful errors 2016-04-08 21:36:59 +01:00
Mark Haines
b36270b5e1 Fix pep8 warning 2016-04-08 19:52:23 +01:00
Matthew Hodgson
6ff7a79308 move local_media_repository_url_cache.sql to schema v31 2016-04-08 19:09:02 +01:00
Matthew Hodgson
af582b66bb fix typo 2016-04-08 19:08:47 +01:00
Matthew Hodgson
2460d904bd fix error checking for new SQL 2016-04-08 19:04:29 +01:00
Matthew Hodgson
1ccabe2965 more PR feedback 2016-04-08 18:58:08 +01:00
Matthew Hodgson
fb83f6a1fc fix SQL based on PR feedback 2016-04-08 18:55:38 +01:00
Matthew Hodgson
b04f81284a Add more doc 2016-04-08 18:55:27 +01:00
Matthew Hodgson
ec9331f851 Add doc 2016-04-08 18:54:18 +01:00
Matthew Hodgson
dafef5a688 Add url_preview_enabled config option to turn on/off preview_url endpoint. defaults to off.
Add url_preview_ip_range_blacklist to let admins specify internal IP ranges that must not be spidered.
Add url_preview_url_blacklist to let admins specify URL patterns that must not be spidered.
Implement a custom SpiderEndpoint and associated support classes to implement url_preview_ip_range_blacklist
Add commentary and generally address PR feedback
2016-04-08 18:37:15 +01:00
David Baker
d96a070a3a Actually check if we're processing 2016-04-08 16:49:39 +01:00
David Baker
ed3979df5f Fix invite pushes
* If the event is an invite event, add the invitee to the list of users we run push rules for (if they have a pusher etc)
 * Move invite_for_me to be higher prio than member events, otherwise member events match them
 * Spell override right
2016-04-08 15:29:59 +01:00
Erik Johnston
79fc4ff6f9 Merge pull request #677 from matrix-org/erikj/dns_cache
Read from DNS cache if within TTL
2016-04-08 14:09:56 +01:00
David Baker
7b6d519482 Make sure max stream ordering only increases 2016-04-08 14:08:16 +01:00
David Baker
52d1008661 Unsafe process should call itself if the max has changed 2016-04-08 14:06:54 +01:00
Erik Johnston
96bd8ff57c Merge pull request #707 from matrix-org/markjh/remove_changed_presencelike_data
changed_presencelike_data isn't observed anywhere so can be removed
2016-04-08 14:04:54 +01:00
David Baker
ce3fe52498 Comment why unsafe process is unsafe 2016-04-08 14:02:38 +01:00
Mark Haines
7e2f971c08 Remove some unused functions (#711)
* Remove some unused functions

* get_room_events_stream is only used in tests

* is_exclusive_room might actually be something we want
2016-04-08 14:01:56 +01:00
Mark Haines
d63b49137a Merge pull request #710 from matrix-org/markjh/move_fire
Move all the wrapper functions for distributor.fire
2016-04-08 11:39:34 +01:00
Mark Haines
b9ee5650b0 Move all the wrapper functions for distributor.fire
Move the functions inside the distributor and import them
where needed. This reduces duplication and makes it possible
for flake8 to detect when the functions aren't used in a
given file.
2016-04-08 11:01:38 +01:00
Mark Haines
caef337587 changed_presencelike_data isn't observed anywhere in synapse so can be removed 2016-04-08 10:37:19 +01:00
Mark Haines
b4a5002a6e Merge pull request #708 from matrix-org/markjh/remove_collect_presencelike_data
Call profile handler get_displayname directly
2016-04-08 09:51:36 +01:00
Mark Haines
86be915cce Call profile handler get_displayname directly rather than using collect_presencelike_data 2016-04-07 18:11:49 +01:00
David Baker
d9f38561c8 Literally a dictionary 2016-04-07 17:45:01 +01:00
David Baker
4836864f56 generate id in the main thread 2016-04-07 17:38:48 +01:00
David Baker
a4a31fa8dc Only pass in what we need 2016-04-07 17:37:19 +01:00
Erik Johnston
f942980c0b Merge pull request #701 from DoubleMalt/ldap-auth
Add LDAP authentication
2016-04-07 17:35:28 +01:00
David Baker
3fb35cbd6f Oops, inequality fail 2016-04-07 17:33:37 +01:00
David Baker
15e0f1696f Wrap process in a flag so we don't process whilst already processing. 2016-04-07 17:31:08 +01:00
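Taken together with the "run unsafe process in a loop" and "call itself if the max has changed" commits above, the pattern is roughly this (a sketch with hypothetical names, not the pusher's actual code)::

    from twisted.internet import defer

    class EmailPusher(object):
        def __init__(self):
            self.processing = False
            self.max_stream_ordering = 0
            self.last_processed = 0

        @defer.inlineCallbacks
        def process(self):
            # Pokes arriving mid-run return immediately; the loop below
            # picks up whatever accumulated while we were busy.
            if self.processing:
                return
            self.processing = True
            try:
                while self.last_processed < self.max_stream_ordering:
                    yield self.unsafe_process()
            finally:
                self.processing = False

        def unsafe_process(self):
            # Stand-in for one batch of real work; must advance
            # last_processed or the loop would never terminate.
            self.last_processed = self.max_stream_ordering
            return defer.succeed(None)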
Mark Haines
da84fa3d74 Merge pull request #706 from matrix-org/markjh/slaveIV
Add tests for redactions
2016-04-07 17:30:37 +01:00
Matthew Hodgson
d6e7333ae4 Merge branch 'develop' into matthew/preview_urls 2016-04-07 17:26:44 +01:00
David Baker
6ec02e9ecf indenting 2016-04-07 17:24:05 +01:00
David Baker
25cd5bb697 defer.gatherResults rather than doing all the pokes in series 2016-04-07 17:22:14 +01:00
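That is, roughly (pusher method hypothetical)::

    from twisted.internet import defer

    @defer.inlineCallbacks
    def poke_pushers(pushers, event):
        # Start every poke, then wait for the whole batch, instead of
        # yielding on each poke in turn.
        yield defer.gatherResults(
            [p.on_new_notifications(event) for p in pushers],
            consumeErrors=True,
        )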
David Baker
fa129ce5b5 Add measure blocks 2016-04-07 17:12:29 +01:00
David Baker
e1e042f2a1 Add comments on min_stream_id
saying that the min stream id won't be completely accurate all the time
2016-04-07 17:09:36 +01:00
Mark Haines
ceb599e789 Add tests for redactions 2016-04-07 16:52:07 +01:00
Mark Haines
8c82b06904 Merge pull request #704 from matrix-org/markh/slaveIII
Add tests for get_latest_event_ids_in_room and get_current_state
2016-04-07 16:49:34 +01:00
David Baker
05d044aac3 pep8 2016-04-07 16:45:38 +01:00
David Baker
2d5c693fd3 Fix port script for changes merged from develop 2016-04-07 16:43:54 +01:00
Mark Haines
57fa1801c3 Add sensible __eq__ operators inside the tests.
Rather than adding them globally. This limits the changes to only
affect the tests.
2016-04-07 16:41:37 +01:00
Erik Johnston
a294b04bf0 Merge pull request #700 from matrix-org/erikj/deduplicate_joins
Deduplicate membership changes
2016-04-07 16:35:40 +01:00
David Baker
9c99ab4572 Merge remote-tracking branch 'origin/develop' into dbkr/pushers_use_event_actions 2016-04-07 16:35:22 +01:00
David Baker
d549fdfa22 Remove code that's now been obsoleted or moved elsewhere 2016-04-07 16:31:38 +01:00
Erik Johnston
95ac3078da Rename things 2016-04-07 16:07:16 +01:00
David Baker
92e3071623 Send badge count pushes.
Also fix bugs with retrying.
2016-04-07 15:39:53 +01:00
Erik Johnston
ee5aef6c72 Log contexts and squash things together 2016-04-07 15:34:21 +01:00
Erik Johnston
639cd07d6d Add comment 2016-04-07 14:24:12 +01:00
Erik Johnston
af03ecf352 Deduplicate joins 2016-04-07 14:19:02 +01:00
Mark Haines
60ec9793fb Add tests for get_latest_event_ids_in_room and get_current_state 2016-04-07 13:17:56 +01:00
Christoph Witzany
674379e673 Add myself to AUTHORS.rst
Signed-off-by: Christoph Witzany <christoph@web.crofting.com>
2016-04-07 13:01:09 +02:00
Erik Johnston
a28d066732 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/dns_cache 2016-04-07 11:11:17 +01:00
Erik Johnston
8495b6d365 Merge pull request #703 from matrix-org/erikj/member
Set profile information when joining rooms remotely
2016-04-07 11:08:15 +01:00
Erik Johnston
1ef0365670 Set profile information when joining rooms remotely 2016-04-07 09:42:52 +01:00
Richard van der Hoff
87a30890a3 Merge pull request #699 from matrix-org/rav/show_own_leave_event
Let users see their own leave events
2016-04-06 17:57:06 +01:00
Christoph Witzany
ed4d18f516 fix check for failed authentication 2016-04-06 18:30:11 +02:00
Christoph Witzany
9c62fcdb68 remove line 2016-04-06 18:23:46 +02:00
Christoph Witzany
27a0c21c38 make tests for ldap more specific to not be fooled by Mocks 2016-04-06 18:23:46 +02:00
Christoph Witzany
3555a659ec output ldap version for info and to pacify pep8 2016-04-06 18:23:46 +02:00
Christoph Witzany
4c5e8adf8b conditionally import ldap 2016-04-06 18:23:46 +02:00
Christoph Witzany
875ed05bdc fix pep8 2016-04-06 18:23:46 +02:00
Christoph Witzany
67f3a50e9a fix exception handling 2016-04-06 18:23:46 +02:00
Christoph Witzany
afff321e9a code style 2016-04-06 18:23:46 +02:00
Christoph Witzany
8f0e47fae8 cleanup 2016-04-06 18:23:45 +02:00
Christoph Witzany
823b8be4b7 add tls property and twist my head around twisted 2016-04-06 18:23:45 +02:00
Christoph Witzany
92767dd703 add tls property 2016-04-06 18:23:45 +02:00
Christoph Witzany
7b9319b1c8 move LDAP authentication to AuthenticationHandler 2016-04-06 18:23:45 +02:00
Christoph Witzany
3d95405e5f Introduce LDAP authentication 2016-04-06 18:23:45 +02:00
Mark Haines
8d2bca1a90 Merge pull request #702 from matrix-org/markjh/slaveII
Test that room membership is replicated
2016-04-06 16:52:39 +01:00
David Baker
0fd1cd2400 pep8 2016-04-06 16:50:47 +01:00
Mark Haines
6bfec56796 Test that room membership is replicated 2016-04-06 16:20:13 +01:00
Mark Haines
e815763b7f Merge pull request #697 from matrix-org/markjh/slaveI
Add a slaved events store class
2016-04-06 16:19:25 +01:00
David Baker
7e2c89a37f Make pushers use the event_push_actions table instead of listening on an event stream & running the rules again. Sytest passes, but remaining to do:
* Make badges work again
 * Remove old, unused code
2016-04-06 15:42:15 +01:00
Richard van der Hoff
1e05637e37 Let users see their own leave events
... otherwise clients get confused.

Fixes https://matrix.org/jira/browse/SYN-662,
https://github.com/vector-im/vector-web/issues/368
2016-04-06 15:36:19 +01:00
Erik Johnston
b713934b2e Merge pull request #698 from matrix-org/erikj/port_script_fix
Don't require config to create database
2016-04-06 14:32:45 +01:00
Mark Haines
75fb9ac1be Add a slaved events store class
Add a test to check that get_room_names_and_aliases does the same
thing on both the master and on the slave data store.
2016-04-06 14:18:35 +01:00
Erik Johnston
8aab9d87fa Don't require config to create database 2016-04-06 14:15:45 +01:00
Mark Haines
7d11f825aa Merge pull request #694 from matrix-org/markjh/caches
Move _get_cache_dict into the SQLBaseStore
2016-04-06 13:21:25 +01:00
Mark Haines
196ebaf662 Merge pull request #695 from matrix-org/markjh/cachesII
Make the cache objects be per instance rather than being global
2016-04-06 13:21:19 +01:00
Mark Haines
87f2dec8d4 Make the cache objects be per instance rather than being global 2016-04-06 13:08:05 +01:00
Mark Haines
a1e0d316ea Move _get_cache_dict into the SQLBaseStore 2016-04-06 13:05:19 +01:00
Erik Johnston
11860637e1 Tests 2016-04-06 10:12:30 +01:00
Mark Haines
2e308a3a38 Merge pull request #692 from matrix-org/markjh/replicate_reshuffle
Separate generating the replication response...
2016-04-05 13:23:36 +01:00
Erik Johnston
c2b429ab24 Merge pull request #693 from matrix-org/erikj/backfill_self
Don't backfill from self
2016-04-05 13:04:36 +01:00
Erik Johnston
6222ae51ce Don't backfill from self 2016-04-05 12:56:29 +01:00
Erik Johnston
b29f98377d Merge pull request #691 from matrix-org/erikj/member
Fix stuck invites
2016-04-05 12:44:39 +01:00
Mark Haines
1d4deff25a Separate generating the replication response...
from doing the http request parsing to make it easier
to write unit tests for replication.
2016-04-05 11:23:57 +01:00
Erik Johnston
df727f2126 Fix stuck invites
If rejecting a remote invite fails with an error response don't fail
the entire request; instead mark the invite as locally rejected.

This fixes the bug where users can get stuck invites which they can
neither accept nor reject.
2016-04-05 11:13:24 +01:00
Erik Johnston
7a77f8b6d5 Merge pull request #690 from matrix-org/erikj/member
Store invites in a separate table.
2016-04-05 09:12:27 +01:00
Erik Johnston
0c53d750e7 Docs and indents 2016-04-04 18:02:48 +01:00
Erik Johnston
92ab45a330 Add upgrade path, rename table 2016-04-04 17:07:43 +01:00
Erik Johnston
3d76b7cb2b Store invites in a separate table. 2016-04-04 16:30:15 +01:00
Erik Johnston
bf14883a04 Merge pull request #689 from matrix-org/erikj/member
Do checks for memberships before creating events
2016-04-04 11:56:40 +01:00
Matthew Hodgson
9f7dc2bef7 Merge branch 'develop' into matthew/preview_urls 2016-04-04 00:38:21 +01:00
Matthew Hodgson
cf51c4120e report image size (bytewise) in OG meta 2016-04-03 23:57:05 +01:00
Matthew Hodgson
0834b152fb char encoding 2016-04-03 12:59:27 +01:00
Matthew Hodgson
8b98a7e8c3 pep8 2016-04-03 12:56:29 +01:00
Matthew Hodgson
eab4d462f8 fix etag typing error. fix timestamp typing error 2016-04-03 02:02:46 +01:00
Matthew Hodgson
c3916462f6 rebase all image URLs 2016-04-03 01:33:12 +01:00
Matthew Hodgson
110780b18b remove stale todo 2016-04-03 00:48:31 +01:00
Matthew Hodgson
b09e29a03c Ensure only one download for a given URL is active at a time 2016-04-03 00:47:40 +01:00
Matthew Hodgson
7426c86eb8 add a persistent cache of URL lookups, and fix up the in-memory one to work 2016-04-03 00:31:57 +01:00
Matthew Hodgson
d1b154a10f support gzip compression, and don't pass through error msgs 2016-04-02 03:06:39 +01:00
Matthew Hodgson
9377157961 how was _respond_default_thumbnail ever meant to work? 2016-04-02 02:31:45 +01:00
Matthew Hodgson
2c838f6459 pass back SVGs as their own thumbnails 2016-04-02 02:30:07 +01:00
Matthew Hodgson
5037ee0d37 handle missing dimensions without crashing 2016-04-02 02:29:57 +01:00
Matthew Hodgson
b26e8604f1 make meta comparisons case insensitive 2016-04-02 01:35:44 +01:00
Matthew Hodgson
5fd07da764 refactor calc_og; spider image URLs; fix xpath; add a (broken) expiringcache; loads of other fixes 2016-04-02 00:35:49 +01:00
Erik Johnston
d76d89323c Use computed prev event ids 2016-04-01 17:39:32 +01:00
Erik Johnston
aa82cb38e9 Remove state hack from _create_new_client_event 2016-04-01 16:36:54 +01:00
Mark Haines
89e6839a48 Merge pull request #686 from matrix-org/markjh/doc_strings
Use google style doc strings.
2016-04-01 16:20:09 +01:00
Erik Johnston
c906f30661 Do checks for memberships before creating events 2016-04-01 16:17:32 +01:00
Mark Haines
2a37467fa1 Use google style doc strings.
pycharm supports them so there is no need to use the other format.

Might as well convert the existing strings to reduce the risk of
people accidentally cargo culting the wrong doc string format.
2016-04-01 16:12:07 +01:00
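For reference, the format being converged on looks like this (an illustrative example, not a real synapse function)::

    def get_room_name(store, room_id, default=None):
        """Look up the name of a room.

        Args:
            store (DataStore): storage layer to read from.
            room_id (str): the room to look up.
            default (str|None): value to return if the room has no name.

        Returns:
            str|None: the room's name, or ``default`` if none is set.
        """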
Mark Haines
f2b916534b Merge pull request #684 from matrix-org/markjh/backfill_id_gen
Use a stream id generator for backfilled ids
2016-04-01 15:13:14 +01:00
Mark Haines
9bc5b4c663 Assert that the step != 0 2016-04-01 15:08:20 +01:00
Mark Haines
35b5c4ba1b use google style doc strings 2016-04-01 15:07:01 +01:00
Erik Johnston
a853cdec5b Merge pull request #685 from matrix-org/erikj/sync_leave
Add concurrently_execute function
2016-04-01 15:02:59 +01:00
Erik Johnston
3f4eb4c924 Comment 2016-04-01 14:15:27 +01:00
Erik Johnston
8d73cd502b Add concurrently_execute function 2016-04-01 14:06:00 +01:00
Mark Haines
a2866e2e6a Rename direction to step, apply checks consistently 2016-04-01 13:50:54 +01:00
Mark Haines
e36bfbab38 Use a stream id generator for backfilled ids 2016-04-01 13:29:05 +01:00
Erik Johnston
35bb465b86 Filter rooms list before chunking 2016-04-01 13:14:53 +01:00
Mark Haines
c42f46ab7d Merge pull request #682 from matrix-org/markjh/fix_invalidate
Fix the invalidation of the names and aliases cache
2016-04-01 10:52:29 +01:00
Mark Haines
7753fc6570 Fix the invalidation of the names and aliases cache 2016-04-01 10:34:51 +01:00
Matthew Hodgson
c60b751694 fix assorted redirect, unicode and screenscraping bugs 2016-04-01 02:17:48 +01:00
Matthew Hodgson
683e564815 handle spidered relative images correctly 2016-03-31 23:52:58 +01:00
Mark Haines
431aa8ada9 Merge pull request #681 from matrix-org/markjh/remove_outlier
Remove outlier parameter from compute_event_context
2016-03-31 15:44:37 +01:00
Mark Haines
dc4c1579d4 Remove outlier parameter from compute_event_context
Use event.internal_metadata.is_outlier instead.
2016-03-31 15:32:24 +01:00
Mark Haines
03e406eefc Merge pull request #680 from matrix-org/markjh/remove_is_new_state
Remove the is_new_state argument to persist event.
2016-03-31 15:14:48 +01:00
Matthew Hodgson
72550c3803 prevent choking on invalid utf-8, and handle image thumbnailing smarter 2016-03-31 15:14:14 +01:00
Mark Haines
5d06929169 Move the check for backfilled outside the for loop 2016-03-31 15:09:09 +01:00
Mark Haines
76503f95ed Remove the is_new_state argument to persist event.
Move the checks for whether an event is new state inside persist
event itself.

This was harder than expected because there wasn't enough information
passed to persist event to correctly handle invites from remote servers
for new rooms.
2016-03-31 15:00:42 +01:00
Erik Johnston
fe95943305 Merge pull request #679 from matrix-org/erikj/member
Split out RoomMemberHandler
2016-03-31 14:45:57 +01:00
Matthew Hodgson
bb9a2ca87c synthesise basic OG metadata from pages lacking it 2016-03-31 14:15:09 +01:00
Erik Johnston
d35780eda0 Split out RoomMemberHandler 2016-03-31 13:08:45 +01:00
Matthew Hodgson
0d3d7de6fc sync in changes from matrixfederationclient 2016-03-31 12:42:27 +01:00
Mark Haines
62e395f0e3 Merge pull request #676 from matrix-org/markjh/replicate_stateIII
Add replication streams for ex outliers and current state resets
2016-03-31 11:20:57 +01:00
Erik Johnston
5260db7663 Line length 2016-03-31 10:49:27 +01:00
Mark Haines
2ec5426035 Use a namedtuple rather than tuple unpacking 2016-03-31 10:33:02 +01:00
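That is (field names illustrative)::

    from collections import namedtuple

    StreamPosition = namedtuple("StreamPosition", ("topological", "stream"))

    pos = StreamPosition(topological=5, stream=97)
    pos.stream  # self-documenting at the call site

    # versus bare tuple unpacking, where every caller must remember the
    # field order:
    topological, stream = (5, 97)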
David Baker
c9500a9c1d Merge pull request #678 from matrix-org/dbkr/push_obey_enable
Don't ignore the obey overlay if the rule has an enabled attribute of False
2016-03-31 10:26:22 +01:00
Erik Johnston
f9d3665c88 Allow clock to be passed in to func 2016-03-31 10:23:48 +01:00
David Baker
c27c51484a Don't ignore the obey overlay if the rule has an enabled attribute of False
Fixes https://github.com/vector-im/vector-web/issues/1244
2016-03-31 10:12:31 +01:00
Erik Johnston
f699b8f997 Read from DNS cache if within TTL 2016-03-31 10:04:28 +01:00
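A sketch of the TTL check (cache layout hypothetical; on a miss or an expired entry the caller falls back to a real DNS resolution and repopulates the cache)::

    import time

    def cached_lookup(cache, host):
        entry = cache.get(host)
        if entry and time.time() - entry["fetched_at"] < entry["ttl"]:
            return entry["addresses"]
        return None  # expired or missing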
Matthew Hodgson
a8a5dd3b44 handle requests with missing content-length headers (e.g. YouTube) 2016-03-31 01:55:21 +01:00
Matthew Hodgson
a68c1b15aa spell out more packages 2016-03-30 17:29:42 +01:00
Matthew Hodgson
9113316b0e typo 2016-03-30 17:29:42 +01:00
Matthew Hodgson
7178ab7da0 spell out more packages 2016-03-30 17:29:22 +01:00
Mark Haines
1fbb094c6f Add replication streams for ex outliers and current state resets 2016-03-30 17:19:56 +01:00
Mark Haines
98c460cecd Merge pull request #675 from matrix-org/markjh/replicate_stateII
Add a replication stream for state groups
2016-03-30 16:40:17 +01:00
Mark Haines
8b8052909f return the state_group for backfill 2016-03-30 16:20:07 +01:00
Mark Haines
61407986b4 Add a entry to current_state_resets table when the current state is reset 2016-03-30 16:18:46 +01:00
Mark Haines
31a9eceda5 Add a replication stream for state groups 2016-03-30 16:01:58 +01:00
Mark Haines
fc66df1e60 Merge pull request #674 from matrix-org/markjh/replicate_state
Use a stream id generator to assign state group ids
2016-03-30 15:58:49 +01:00
Erik Johnston
178c9fb200 Merge pull request #673 from matrix-org/erikj/forget
Require user to have left room to forget room
2016-03-30 15:55:24 +01:00
Erik Johnston
73b6bf4629 Only forget room if you were in the room 2016-03-30 15:09:18 +01:00
Erik Johnston
08a8514b7a Remove spurious comment 2016-03-30 15:05:33 +01:00
Erik Johnston
d24662b88a Merge branch 'master' of github.com:matrix-org/synapse into develop 2016-03-30 14:41:31 +01:00
Mark Haines
1e25f62ee6 Use a stream id generator to assign state group ids 2016-03-30 12:55:02 +01:00
Erik Johnston
fddb6fddc1 Require user to have left room to forget room
This dramatically simplifies the forget API code - in particular it no
longer generates a leave event.
2016-03-30 11:03:00 +01:00
Erik Johnston
f5bf45a2e5 Merge pull request #671 from nikriek/jwt-support
Support login using JSON Web Tokens (JWT)
2016-03-29 16:31:42 +01:00
Niklas Riekenbrauck
3f9948a069 Add JWT support 2016-03-29 14:36:36 +02:00
Matthew Hodgson
ae5831d303 fix bugs 2016-03-29 03:32:55 +01:00
Matthew Hodgson
721b2bfa85 implement redirects 2016-03-29 03:32:52 +01:00
Matthew Hodgson
19038582d3 debug 2016-03-29 03:14:16 +01:00
Matthew Hodgson
64b4aead15 make it work 2016-03-29 03:13:25 +01:00
Matthew Hodgson
dd4287ca5d make it build 2016-03-29 02:07:57 +01:00
Matthew Hodgson
e0c2490a14 Merge branch 'develop' into matthew/preview_urls 2016-03-29 01:20:25 +01:00
Matthew Hodgson
ec0cf996c9 typo 2016-03-29 01:20:14 +01:00
Matthew Hodgson
d9d48aad2d Merge branch 'develop' into matthew/preview_urls 2016-03-27 22:54:42 +01:00
Matthew Hodgson
adafa24b0a typo 2016-03-25 23:38:19 +00:00
Mark Haines
3e8bb99a2b Merge pull request #668 from matrix-org/markjh/deduplicate
Deduplicate identical /sync requests
2016-03-24 18:07:30 +00:00
Mark Haines
77cba688ed Fix typo 2016-03-24 18:02:37 +00:00
Mark Haines
54a546091a Add a response cache for getting the public room list 2016-03-24 18:02:10 +00:00
Mark Haines
191c7bef6b Deduplicate identical /sync requests 2016-03-24 17:47:31 +00:00
David Baker
31e6f8636f Merge pull request #667 from matrix-org/dbkr/never_notify_member_events
Never notify for member events.
2016-03-24 13:48:02 +00:00
David Baker
3b554bda26 Never notify for member events. This fixes https://github.com/vector-im/vector-web/issues/828 2016-03-24 13:19:39 +00:00
Matthew Hodgson
7dd0c1730a initial WIP of a tentative preview_url endpoint - incomplete, untested, experimental, etc. just putting it here for safekeeping for now 2016-01-24 18:47:27 -05:00
243 changed files with 15319 additions and 6050 deletions


@@ -57,3 +57,6 @@ Florent Violleau <floviolleau at gmail dot com>
Niklas Riekenbrauck <nikriek at gmail dot com>
* Add JWT support for registration and login
Christoph Witzany <christoph at web.crofting.com>
* Add LDAP support for authentication


@@ -1,3 +1,175 @@
Changes in synapse v0.16.1-r1 (2016-07-08)
==========================================
THIS IS A CRITICAL SECURITY UPDATE.
This fixes a bug which allowed users' accounts to be accessed by unauthorised
users.
Changes in synapse v0.16.1 (2016-06-20)
=======================================
Bug fixes:
* Fix assorted bugs in ``/preview_url`` (PR #872)
* Fix TypeError when setting unicode passwords (PR #873)
Performance improvements:
* Turn ``use_frozen_events`` off by default (PR #877)
* Disable responding with canonical json for federation (PR #878)
Changes in synapse v0.16.1-rc1 (2016-06-15)
===========================================
Features: None
Changes:
* Log requester for ``/publicRoom`` endpoints when possible (PR #856)
* 502 on ``/thumbnail`` when can't connect to remote server (PR #862)
* Linearize fetching of gaps on incoming events (PR #871)
Bug fixes:
* Fix bug where rooms where marked as published by default (PR #857)
* Fix bug where joining room with an event with invalid sender (PR #868)
* Fix bug where backfilled events were sent down sync streams (PR #869)
* Fix bug where outgoing connections could wedge indefinitely, causing push
notifications to be unreliable (PR #870)
Performance improvements:
* Improve ``/publicRooms`` performance (PR #859)
Changes in synapse v0.16.0 (2016-06-09)
=======================================
NB: As of v0.14 all AS config files must have an ID field.
Bug fixes:
* Don't make rooms published by default (PR #857)
Changes in synapse v0.16.0-rc2 (2016-06-08)
===========================================
Features:
* Add configuration option for tuning GC via ``gc.set_threshold`` (PR #849)
Changes:
* Record metrics about GC (PR #771, #847, #852)
* Add metric counter for number of persisted events (PR #841)
Bug fixes:
* Fix 'From' header in email notifications (PR #843)
* Fix presence where timeouts were not being fired for the first 8h after
restarts (PR #842)
* Fix bug where synapse sent malformed transactions to AS's when retrying
transactions (Commits 310197b, 8437906)
Performance improvements:
* Remove event fetching from DB threads (PR #835)
* Change the way we cache events (PR #836)
* Add events to cache when we persist them (PR #840)
Changes in synapse v0.16.0-rc1 (2016-06-03)
===========================================
Version 0.15 was not released. See v0.15.0-rc1 below for additional changes.
Features:
* Add email notifications for missed messages (PR #759, #786, #799, #810, #815,
#821)
* Add a ``url_preview_ip_range_whitelist`` config param (PR #760)
* Add /report endpoint (PR #762)
* Add basic ignore user API (PR #763)
* Add an openidish mechanism for proving that you own a given user_id (PR #765)
* Allow clients to specify a server_name to avoid 'No known servers' (PR #794)
* Add secondary_directory_servers option to fetch room list from other servers
(PR #808, #813)
Changes:
* Report per request metrics for all of the things using request_handler (PR
#756)
* Correctly handle ``NULL`` password hashes from the database (PR #775)
* Allow receipts for events we haven't seen in the db (PR #784)
* Make synctl read a cache factor from config file (PR #785)
* Increment badge count per missed convo, not per msg (PR #793)
* Special case m.room.third_party_invite event auth to match invites (PR #814)
Bug fixes:
* Fix typo in event_auth servlet path (PR #757)
* Fix password reset (PR #758)
Performance improvements:
* Reduce database inserts when sending transactions (PR #767)
* Queue events by room for persistence (PR #768)
* Add cache to ``get_user_by_id`` (PR #772)
* Add and use ``get_domain_from_id`` (PR #773)
* Use tree cache for ``get_linearized_receipts_for_room`` (PR #779)
* Remove unused indices (PR #782)
* Add caches to ``bulk_get_push_rules*`` (PR #804)
* Cache ``get_event_reference_hashes`` (PR #806)
* Add ``get_users_with_read_receipts_in_room`` cache (PR #809)
* Use state to calculate ``get_users_in_room`` (PR #811)
* Load push rules in storage layer so that they get cached (PR #825)
* Make ``get_joined_hosts_for_room`` use get_users_in_room (PR #828)
* Poke notifier on next reactor tick (PR #829)
* Change CacheMetrics to be quicker (PR #830)
Changes in synapse v0.15.0-rc1 (2016-04-26)
===========================================
Features:
* Add login support for JSON Web Tokens, thanks to Niklas Riekenbrauck
(PR #671, #687)
* Add URL previewing support (PR #688)
* Add login support for LDAP, thanks to Christoph Witzany (PR #701)
* Add GET endpoint for pushers (PR #716)
Changes:
* Never notify for member events (PR #667)
* Deduplicate identical ``/sync`` requests (PR #668)
* Require user to have left room to forget room (PR #673)
* Use DNS cache if within TTL (PR #677)
* Let users see their own leave events (PR #699)
* Deduplicate membership changes (PR #700)
* Increase performance of pusher code (PR #705)
* Respond with error status 504 if failed to talk to remote server (PR #731)
* Increase search performance on postgres (PR #745)
Bug fixes:
* Fix bug where disabling all notifications still resulted in push (PR #678)
* Fix bug where users couldn't reject remote invites if remote refused (PR #691)
* Fix bug where synapse attempted to backfill from itself (PR #693)
* Fix bug where profile information was not correctly added when joining remote
rooms (PR #703)
* Fix bug where register API required incorrect key name for AS registration
(PR #727)
Changes in synapse v0.14.0 (2016-03-30)
=======================================
@@ -511,7 +683,7 @@ Configuration:
* Add support for changing the bind host of the metrics listener via the
``metrics_bind_host`` option.
Changes in synapse v0.9.0-r5 (2015-05-21)
=========================================
@@ -853,7 +1025,7 @@ See UPGRADE for information about changes to the client server API, including
breaking backwards compatibility with VoIP calls and registration API.
Homeserver:
* When a user changes their displayname or avatar the server will now update
all their join states to reflect this.
* The server now adds "age" key to events to indicate how old they are. This
is clock independent, so at no point does any server or webclient have to
@@ -911,7 +1083,7 @@ Changes in synapse 0.2.2 (2014-09-06)
=====================================
Homeserver:
* When the server returns state events it now also includes the previous
content.
* Add support for inviting people when creating a new room.
* Make the homeserver inform the room via `m.room.aliases` when a new alias
@@ -923,7 +1095,7 @@ Webclient:
* Handle `m.room.aliases` events.
* Asynchronously send messages and show a local echo.
* Inform the UI when a message failed to send.
* Only autoscroll on receiving a new message if the user was already at the
bottom of the screen.
* Add support for ban/kick reasons.

View File

@@ -11,6 +11,7 @@ recursive-include synapse/storage/schema *.sql
recursive-include synapse/storage/schema *.py
recursive-include docs *
recursive-include res *
recursive-include scripts *
recursive-include scripts-dev *
recursive-include tests *.py

View File

@@ -58,12 +58,13 @@ the spec in the context of a codebase and let you run your own homeserver and
generally help bootstrap the ecosystem.
In Matrix, every user runs one or more Matrix clients, which connect through to
a Matrix homeserver. The homeserver stores all their personal chat history and
user account information - much as a mail client connects through to an
IMAP/SMTP server. Just like email, you can either run your own Matrix
homeserver and control and own your own communications and history or use one
hosted by someone else (e.g. matrix.org) - there is no single point of control
or mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts,
etc.
Synapse ships with two basic demo Matrix clients: webclient (a basic group chat
web client demo implemented in AngularJS) and cmdclient (a basic Python
@@ -104,7 +105,7 @@ Installing prerequisites on Ubuntu or Debian::
sudo apt-get install build-essential python2.7-dev libffi-dev \
python-pip python-setuptools sqlite3 \
libssl-dev python-virtualenv libjpeg-dev libxslt1-dev
Installing prerequisites on ArchLinux::
@@ -118,7 +119,6 @@ Installing prerequisites on CentOS 7::
python-virtualenv libffi-devel openssl-devel
sudo yum groupinstall "Development Tools"
Installing prerequisites on Mac OS X::
xcode-select --install
@@ -150,12 +150,7 @@ In case of problems, please see the _Troubleshooting section below.
Alternatively, Silvio Fricke has contributed a Dockerfile to automate the
above in Docker at https://registry.hub.docker.com/u/silviof/docker-matrix/.
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
for details.
@@ -229,6 +224,19 @@ For information on how to install and use PostgreSQL, please see
Platform Specific Instructions
==============================
Debian
------
Matrix provides official Debian packages via apt from http://matrix.org/packages/debian/.
Note that these packages do not include a client - choose one from
https://matrix.org/blog/try-matrix-now/ (or build your own with one of our SDKs :)
Fedora
------
Oleg Girko provides Fedora RPMs at
https://obs.infoserver.lv/project/monitor/matrix-synapse
ArchLinux
---------
@@ -270,11 +278,17 @@ During setup of Synapse you need to call python2.7 directly again::
FreeBSD
-------
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
- Ports: ``cd /usr/ports/net/py-matrix-synapse && make install clean``
- Packages: ``pkg install py27-matrix-synapse``
NixOS
-----
Robin Lambertz has packaged Synapse for NixOS at:
https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-synapse.nix
Windows Install
---------------
Synapse can be installed on Cygwin. It requires the following Cygwin packages:
@@ -544,6 +558,23 @@ as the primary means of identity and E2E encryption is not complete. As such,
we are running a single identity server (https://matrix.org) at the current
time.
URL Previews
============
Synapse 0.15.0 introduces an experimental new API for previewing URLs at
/_matrix/media/r0/preview_url. This is disabled by default. To turn it on
you must enable the `url_preview_enabled: True` config parameter and explicitly
specify the IP ranges that Synapse is not allowed to spider for previewing in
the `url_preview_ip_range_blacklist` configuration parameter. This is critical
from a security perspective to stop arbitrary Matrix users spidering 'internal'
URLs on your network. At the very least we recommend that your loopback and
RFC1918 IP addresses are blacklisted.
This also requires the optional lxml and netaddr python dependencies to be
installed.
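For example, a minimal homeserver.yaml fragment enabling the API might look
like the following (the blacklist entries are illustrative - list whatever
ranges are internal on your network)::

    url_preview_enabled: True
    url_preview_ip_range_blacklist:
      - '127.0.0.0/8'
      - '10.0.0.0/8'
      - '172.16.0.0/12'
      - '192.168.0.0/16'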
Password reset
==============
@@ -587,7 +618,7 @@ Building internal API documentation::
Help!! Synapse eats all my RAM!
===============================
Synapse's architecture is quite RAM hungry currently - we deliberately

View File

@@ -30,6 +30,14 @@ running:
python synapse/python_dependencies.py | xargs -n1 pip install
Upgrading to v0.15.0
====================
If you want to use the new URL previewing API (/_matrix/media/r0/preview_url)
then you have to explicitly enable it in the config and update your
dependencies. See README.rst for details.
Upgrading to v0.11.0
====================

View File

@@ -9,6 +9,7 @@ Description=Synapse Matrix homeserver
Type=simple
User=synapse
Group=synapse
EnvironmentFile=-/etc/sysconfig/synapse
WorkingDirectory=/var/lib/synapse
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml --log-config=/etc/synapse/log_config.yaml

View File

@@ -32,5 +32,4 @@ The format of the AS configuration file is as follows:
See the spec_ for further details on how application services work.
.. _spec: https://matrix.org/docs/spec/application_service/unstable.html

View File

@@ -43,7 +43,10 @@ Basically, PEP8
together, or want to deliberately extend or preserve vertical/horizontal
space)
Comments should follow the `google code style <http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with
`sphinx <http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
in the sphinx documentation.
Code should pass pep8 --max-line-length=100 without any warnings.

docs/log_contexts.rst Normal file
View File

@@ -0,0 +1,10 @@
What do I do about "Unexpected logging context" debug log-lines everywhere?
<Mjark> The logging context lives in thread local storage
<Mjark> Sometimes it gets out of sync with what it should actually be, usually because something scheduled something to run on the reactor without preserving the logging context.
<Matthew> what is the impact of it getting out of sync? and how and when should we preserve log context?
<Mjark> The impact is that some of the CPU and database metrics will be under-reported, and some log lines will be mis-attributed.
<Mjark> It should happen auto-magically in all the APIs that do IO or otherwise defer to the reactor.
<Erik> Mjark: the other place is if we branch, e.g. using defer.gatherResults
Unanswered: how and when should we preserve log context?
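A rough sketch of the failure mode and the fix, using the
``synapse.util.logcontext`` helpers that already exist in the codebase (the
request name and IO function here are made up for illustration)::

    from twisted.internet import defer, reactor

    from synapse.util.logcontext import LoggingContext, preserve_context_over_fn


    def do_io():
        # Stand-in for real IO: a Deferred that fires on a later reactor tick.
        d = defer.Deferred()
        reactor.callLater(0, d.callback, None)
        return d


    @defer.inlineCallbacks
    def handle_request():
        with LoggingContext("request-1234"):
            # Yielding do_io() directly risks the callback running with the
            # wrong (or no) logging context; the wrapper restores
            # "request-1234" when the Deferred fires, so CPU/database metrics
            # and log lines stay correctly attributed.
            yield preserve_context_over_fn(do_io)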

docs/replication.rst Normal file
View File

@@ -0,0 +1,58 @@
Replication Architecture
========================
Motivation
----------
We'd like to be able to split some of the work that synapse does into multiple
python processes. In theory multiple synapse processes could share a single
postgresql database and we'd scale up by running more synapse processes.
However, much of synapse assumes that only one process is interacting with the
database, whether for assigning unique identifiers when inserting into tables,
notifying components about new updates, or invalidating its caches.
So running multiple copies of the current code isn't an option. One way to
run multiple processes would be to have a single writer process and multiple
reader processes connected to the same database. In order to do this we'd need
a way for the reader process to invalidate its in-memory caches when an update
happens on the writer. One way to do this is for the writer to present an
append-only log of updates which the readers can consume to invalidate their
caches and to push updates to listening clients or pushers.
Synapse already stores much of its data as an append-only log so that it can
correctly respond to /sync requests; the amount of code change needed to
expose the append-only log to the readers should therefore be fairly minimal.
Architecture
------------
The Replication API
~~~~~~~~~~~~~~~~~~~
Synapse will optionally expose a long poll HTTP API for extracting updates. The
API will have a similar shape to /sync in that clients provide tokens
indicating where in the log they have reached and a timeout. The synapse server
then either responds with updates immediately if it already has updates or it
waits until the timeout for more updates. If the timeout expires and nothing
happened then the server returns an empty response.
However, unlike the /sync API, this replication API returns synapse-specific
data rather than trying to implement a matrix specification. The replication
results are returned as arrays of rows where the rows are mostly lifted
directly from the database. This avoids unnecessary JSON parsing on the server
and hopefully avoids an impedance mismatch between the data returned and the
required updates to the datastore.
This does not replicate all the database tables as many of the database tables
are indexes that can be recovered from the contents of other tables.
The format and parameters for the api are documented in
``synapse/replication/resource.py``.
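Purely as an illustration - the endpoint path, stream names and query
parameters below are assumptions, not the documented API - a reader process
might drive the long poll like this::

    import requests  # illustrative only; synapse itself is Twisted-based


    def follow_updates(base_url, token="-1"):
        while True:
            # Long poll: the server replies immediately if it already has
            # updates past our token, otherwise it waits up to the timeout
            # and then returns an empty response.
            resp = requests.get(base_url + "/_synapse/replication",
                                params={"events": token, "timeout": 30000})
            streams = resp.json()
            if streams:
                # Each stream is an array of rows lifted more or less
                # directly from the database; consume them to invalidate
                # caches, then advance the token past the consumed rows.
                token = advance_token(streams)  # hypothetical helper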
The Slaved DataStore
~~~~~~~~~~~~~~~~~~~~
There are read-only versions of the synapse storage layer in
``synapse/replication/slave/storage`` that use the response of the replication
API to invalidate their caches.

View File

@@ -9,31 +9,35 @@ the Home Server to generate credentials that are valid for use on the TURN
server through the use of a secret shared between the Home Server and the
TURN server.
This document describes how to install coturn
(https://github.com/coturn/coturn) which also supports the TURN REST API,
and integrate it with synapse.
coturn Setup
============
You may be able to set up coturn via your package manager, or set it up manually using the usual ``configure, make, make install`` process.
1. Check out coturn::
git clone https://github.com/coturn/coturn.git coturn
cd coturn
2. Configure it::
./configure
You may need to install ``libevent2``: if so, you should do so
in the way recommended by your operating system.
You can ignore warnings about lack of database support: a
database is unnecessary for this purpose.
3. Build and install it::
make
make install
4. Create or edit the config file in ``/etc/turnserver.conf``. The relevant
lines, with example values, are::
lt-cred-mech
@@ -41,7 +45,7 @@ coturn Setup
static-auth-secret=[your secret key here]
realm=turn.myserver.org
See turnserver.conf for explanations of the options.
One way to generate the static-auth-secret is with pwgen::
pwgen -s 64 1
@@ -54,6 +58,7 @@ coturn Setup
import your private key and certificate.
7. Start the turn server::
bin/turnserver -o

docs/url_previews.rst Normal file
View File

@@ -0,0 +1,74 @@
URL Previews
============
Design notes on a URL previewing service for Matrix:
Options are:
1. Have an AS which listens for URLs, downloads them, and inserts an event that describes their metadata.
* Pros:
* Decouples the implementation entirely from Synapse.
* Uses existing Matrix events & content repo to store the metadata.
* Cons:
* Which AS should provide this service for a room, and why should you trust it?
* Doesn't work well with E2E; you'd have to cut the AS into every room
* the AS would end up subscribing to every room anyway.
2. Have a generic preview API (nothing to do with Matrix) that provides a previewing service:
* Pros:
* Simple and flexible; can be used by any clients at any point
* Cons:
* If each HS provides one of these independently, all the HSes in a room may needlessly DoS the target URI
* We need somewhere to store the URL metadata rather than just using Matrix itself
* We can't piggyback on matrix to distribute the metadata between HSes.
3. Make the synapse of the sending user responsible for spidering the URL and inserting an event asynchronously which describes the metadata.
* Pros:
* Works transparently for all clients
* Piggy-backs nicely on using Matrix for distributing the metadata.
* No confusion as to which AS
* Cons:
* Doesn't work with E2E
* We might want to decouple the implementation of the spider from the HS, given spider behaviour can be quite complicated and evolve much more rapidly than the HS. It's more like a bot than a core part of the server.
4. Make the sending client use the preview API and insert the event itself when successful.
* Pros:
* Works well with E2E
* No custom server functionality
* Lets the client customise the preview that they send (like on FB)
* Cons:
* Entirely specific to the sending client, whereas it'd be nice if /any/ URL was correctly previewed if clients support it.
5. Have the option of specifying a shared (centralised) previewing service used by a room, to avoid all the different HSes in the room DoSing the target.
Best solution is probably a combination of both 2 and 4.
* Sending clients do their best to create and send a preview at the point of sending the message, perhaps delaying the message until the preview is computed? (This also lets the user validate the preview before sending)
* Receiving clients have the option of going and creating their own preview if one doesn't arrive soon enough (or if the original sender didn't create one)
This is a bit magical though in that the preview could come from two entirely different sources - the sending HS or your local one. However, this can always be exposed to users: "Generate your own URL previews if none are available?"
This is also tantamount to senders calculating their own thumbnails in advance of sending the main content - we are trusting the sender not to lie about the content in the thumbnail, whereas currently thumbnails are calculated by the receiving homeserver to avoid this attack.
However, this kind of phishing attack does exist whether we let senders pick their thumbnails or not, in that a malicious sender can send normal text messages around the attachment claiming it to be legitimate. We could rely on (future) reputation/abuse management to punish users who phish (be it with bogus metadata or bogus descriptions). Bogus metadata is particularly bad though, especially if it's avoidable.
As a first cut, let's do #2 and have the receiver hit the API to calculate its own previews (as it does currently for image thumbnails). We can then extend/optimise this to option 4 as a special extra if needed.
API
---
GET /_matrix/media/r0/preview_url?url=http://wherever.com
200 OK
{
    "og:type" : "article",
    "og:url" : "https://twitter.com/matrixdotorg/status/684074366691356672",
    "og:title" : "Matrix on Twitter",
    "og:image" : "https://pbs.twimg.com/profile_images/500400952029888512/yI0qtFi7_400x400.png",
    "og:description" : "“Synapse 0.12 is out! Lots of polishing, performance &amp; bugfixes: /sync API, /r0 prefix, fulltext search, 3PID invites https://t.co/5alhXLLEGP”",
    "og:site_name" : "Twitter"
}
* Downloads the URL
* If HTML, just stores it in RAM and parses it for OG meta tags
* Download any media OG meta tags to the media repo, and refer to them in the OG via mxc:// URIs.
* If a media filetype we know we can thumbnail: store it on disk, and hand it to the thumbnailer. Generate OG meta tags from the thumbnailer contents.
* Otherwise, don't bother downloading further.
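A quick sketch of a client hitting the endpoint, in the Python 2 style of the
rest of the codebase (the homeserver URL and access token are placeholders)::

    import json
    import urllib
    import urllib2

    params = urllib.urlencode({
        "url": "https://twitter.com/matrixdotorg/status/684074366691356672",
        "access_token": "MDAx...",  # placeholder access token
    })
    resp = urllib2.urlopen(
        "http://localhost:8008/_matrix/media/r0/preview_url?" + params)
    og = json.load(resp)
    print og.get("og:title")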

jenkins-dendron-postgres.sh Executable file
View File

@@ -0,0 +1,87 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
# Output flake8 violations to violations.flake8.log
# Don't exit with non-0 status code on Jenkins,
# so that the build steps continue and a later step can decide whether to
# mark this build UNSTABLE or FAILURE.
export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?"
rm .coverage* || echo "No coverage files to remove"
tox --notest -e py27
TOX_BIN=$WORKSPACE/.tox/py27/bin
python synapse/python_dependencies.py | xargs -n1 $TOX_BIN/pip install
$TOX_BIN/pip install psycopg2
$TOX_BIN/pip install lxml
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
if [[ ! -e .dendron-base ]]; then
git clone https://github.com/matrix-org/dendron.git .dendron-base --mirror
else
(cd .dendron-base; git fetch -p)
fi
rm -rf dendron
git clone .dendron-base dendron --shared
cd dendron
: ${GOPATH:=${WORKSPACE}/.gopath}
if [[ "${GOPATH}" != *:* ]]; then
mkdir -p "${GOPATH}"
export PATH="${GOPATH}/bin:${PATH}"
fi
export GOPATH
git checkout "${GIT_BRANCH}" || (echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" ; git checkout develop)
go get github.com/constabulary/gb/...
gb generate
gb build
cd ..
if [[ ! -e .sytest-base ]]; then
git clone https://github.com/matrix-org/sytest.git .sytest-base --mirror
else
(cd .sytest-base; git fetch -p)
fi
rm -rf sytest
git clone .sytest-base sytest --shared
cd sytest
git checkout "${GIT_BRANCH}" || (echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" ; git checkout develop)
: ${PORT_BASE:=8000}
: ${PORT_COUNT=20}
./jenkins/prep_sytest_for_postgres.sh
mkdir -p var
echo >&2 "Running sytest with PostgreSQL";
./jenkins/install_and_run.sh --python $TOX_BIN/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--pusher \
--synchrotron \
--port-range ${PORT_BASE}:$((PORT_BASE+PORT_COUNT-1))
cd ..

View File

@@ -25,7 +25,9 @@ rm .coverage* || echo "No coverage files to remove"
tox --notest -e py27
TOX_BIN=$WORKSPACE/.tox/py27/bin
python synapse/python_dependencies.py | xargs -n1 $TOX_BIN/pip install
$TOX_BIN/pip install psycopg2
$TOX_BIN/pip install lxml
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
@@ -42,6 +44,7 @@ cd sytest
git checkout "${GIT_BRANCH}" || (echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" ; git checkout develop)
: ${PORT_BASE:=8000}
: ${PORT_COUNT=20}
./jenkins/prep_sytest_for_postgres.sh
@@ -49,7 +52,7 @@ echo >&2 "Running sytest with PostgreSQL";
./jenkins/install_and_run.sh --coverage \
--python $TOX_BIN/python \
--synapse-directory $WORKSPACE \
--port-range ${PORT_BASE}:$((PORT_BASE+PORT_COUNT-1))
cd ..
cp sytest/.coverage.* .

View File

@@ -24,6 +24,8 @@ rm .coverage* || echo "No coverage files to remove"
tox --notest -e py27
TOX_BIN=$WORKSPACE/.tox/py27/bin
python synapse/python_dependencies.py | xargs -n1 $TOX_BIN/pip install
$TOX_BIN/pip install lxml
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
@@ -39,11 +41,12 @@ cd sytest
git checkout "${GIT_BRANCH}" || (echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" ; git checkout develop)
: ${PORT_COUNT=20}
: ${PORT_BASE:=8000}
./jenkins/install_and_run.sh --coverage \
--python $TOX_BIN/python \
--synapse-directory $WORKSPACE \
--port-range ${PORT_BASE}:$((PORT_BASE+PORT_COUNT-1))
cd ..
cp sytest/.coverage.* .

View File

@@ -1,86 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
# Output flake8 violations to violations.flake8.log
# Don't exit with non-0 status code on Jenkins,
# so that the build steps continue and a later step can decide whether to
# mark this build UNSTABLE or FAILURE.
export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?"
rm .coverage* || echo "No coverage files to remove"
tox
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
TOX_BIN=$WORKSPACE/.tox/py27/bin
if [[ ! -e .sytest-base ]]; then
git clone https://github.com/matrix-org/sytest.git .sytest-base --mirror
else
(cd .sytest-base; git fetch -p)
fi
rm -rf sytest
git clone .sytest-base sytest --shared
cd sytest
git checkout "${GIT_BRANCH}" || (echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" ; git checkout develop)
: ${PERL5LIB:=$WORKSPACE/perl5/lib/perl5}
: ${PERL_MB_OPT:=--install_base=$WORKSPACE/perl5}
: ${PERL_MM_OPT:=INSTALL_BASE=$WORKSPACE/perl5}
export PERL5LIB PERL_MB_OPT PERL_MM_OPT
./install-deps.pl
: ${PORT_BASE:=8000}
echo >&2 "Running sytest with SQLite3";
./run-tests.pl --coverage -O tap --synapse-directory $WORKSPACE \
--python $TOX_BIN/python --all --port-base $PORT_BASE > results-sqlite3.tap
RUN_POSTGRES=""
for port in $(($PORT_BASE + 1)) $(($PORT_BASE + 2)); do
if psql synapse_jenkins_$port <<< ""; then
RUN_POSTGRES="$RUN_POSTGRES:$port"
cat > localhost-$port/database.yaml << EOF
name: psycopg2
args:
database: synapse_jenkins_$port
EOF
fi
done
# Run if both postgresql databases exist
if test "$RUN_POSTGRES" = ":$(($PORT_BASE + 1)):$(($PORT_BASE + 2))"; then
echo >&2 "Running sytest with PostgreSQL";
$TOX_BIN/pip install psycopg2
./run-tests.pl --coverage -O tap --synapse-directory $WORKSPACE \
--python $TOX_BIN/python --all --port-base $PORT_BASE > results-postgresql.tap
else
echo >&2 "Skipping running sytest with PostgreSQL, $RUN_POSTGRES"
fi
cd ..
cp sytest/.coverage.* .
# Combine the coverage reports
echo "Combining:" .coverage.*
$TOX_BIN/python -m coverage combine
# Output coverage to coverage.xml
$TOX_BIN/coverage xml -o coverage.xml

View File

@@ -0,0 +1,7 @@
.header {
border-bottom: 4px solid #e4f7ed ! important;
}
.notif_link a, .footer a {
color: #76CFA6 ! important;
}

res/templates/mail.css Normal file
View File

@@ -0,0 +1,156 @@
body {
margin: 0px;
}
pre, code {
word-break: break-word;
white-space: pre-wrap;
}
#page {
font-family: 'Open Sans', Helvetica, Arial, Sans-Serif;
color: #454545;
font-size: 12pt;
width: 100%;
padding: 20px;
}
#inner {
width: 640px;
}
.header {
width: 100%;
height: 87px;
color: #454545;
border-bottom: 4px solid #e5e5e5;
}
.logo {
text-align: right;
margin-left: 20px;
}
.salutation {
padding-top: 10px;
font-weight: bold;
}
.summarytext {
}
.room {
width: 100%;
color: #454545;
border-bottom: 1px solid #e5e5e5;
}
.room_header td {
padding-top: 38px;
padding-bottom: 10px;
border-bottom: 1px solid #e5e5e5;
}
.room_name {
vertical-align: middle;
font-size: 18px;
font-weight: bold;
}
.room_header h2 {
margin-top: 0px;
margin-left: 75px;
font-size: 20px;
}
.room_avatar {
width: 56px;
line-height: 0px;
text-align: center;
vertical-align: middle;
}
.room_avatar img {
width: 48px;
height: 48px;
object-fit: cover;
border-radius: 24px;
}
.notif {
border-bottom: 1px solid #e5e5e5;
margin-top: 16px;
padding-bottom: 16px;
}
.historical_message .sender_avatar {
opacity: 0.3;
}
/* spell out opacity and historical_message class names for Outlook aka Word */
.historical_message .sender_name {
color: #e3e3e3;
}
.historical_message .message_time {
color: #e3e3e3;
}
.historical_message .message_body {
color: #c7c7c7;
}
.historical_message td,
.message td {
padding-top: 10px;
}
.sender_avatar {
width: 56px;
text-align: center;
vertical-align: top;
}
.sender_avatar img {
margin-top: -2px;
width: 32px;
height: 32px;
border-radius: 16px;
}
.sender_name {
display: inline;
font-size: 13px;
color: #a2a2a2;
}
.message_time {
text-align: right;
width: 100px;
font-size: 11px;
color: #a2a2a2;
}
.message_body {
}
.notif_link td {
padding-top: 10px;
padding-bottom: 10px;
font-weight: bold;
}
.notif_link a, .footer a {
color: #454545;
text-decoration: none;
}
.debug {
font-size: 10px;
color: #888;
}
.footer {
margin-top: 20px;
text-align: center;
}

res/templates/notif.html Normal file
View File

@@ -0,0 +1,45 @@
{% for message in notif.messages %}
<tr class="{{ "historical_message" if message.is_historical else "message" }}">
<td class="sender_avatar">
{% if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %}
{% if message.sender_avatar_url %}
<img alt="" class="sender_avatar" src="{{ message.sender_avatar_url|mxc_to_http(32,32) }}" />
{% else %}
{% if message.sender_hash % 3 == 0 %}
<img class="sender_avatar" src="https://vector.im/beta/img/76cfa6.png" />
{% elif message.sender_hash % 3 == 1 %}
<img class="sender_avatar" src="https://vector.im/beta/img/50e2c2.png" />
{% else %}
<img class="sender_avatar" src="https://vector.im/beta/img/f4c371.png" />
{% endif %}
{% endif %}
{% endif %}
</td>
<td class="message_contents">
{% if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %}
<div class="sender_name">{% if message.msgtype == "m.emote" %}*{% endif %} {{ message.sender_name }}</div>
{% endif %}
<div class="message_body">
{% if message.msgtype == "m.text" %}
{{ message.body_text_html }}
{% elif message.msgtype == "m.emote" %}
{{ message.body_text_html }}
{% elif message.msgtype == "m.notice" %}
{{ message.body_text_html }}
{% elif message.msgtype == "m.image" %}
<img src="{{ message.image_url|mxc_to_http(640, 480, scale) }}" />
{% elif message.msgtype == "m.file" %}
<span class="filename">{{ message.body_text_plain }}</span>
{% endif %}
</div>
</td>
<td class="message_time">{{ message.ts|format_ts("%H:%M") }}</td>
</tr>
{% endfor %}
<tr class="notif_link">
<td></td>
<td>
<a href="{{ notif.link }}">View {{ room.title }}</a>
</td>
<td></td>
</tr>

res/templates/notif.txt Normal file
View File

@@ -0,0 +1,16 @@
{% for message in notif.messages %}
{% if message.msgtype == "m.emote" %}* {% endif %}{{ message.sender_name }} ({{ message.ts|format_ts("%H:%M") }})
{% if message.msgtype == "m.text" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.emote" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.notice" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.image" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.file" %}
{{ message.body_text_plain }}
{% endif %}
{% endfor %}
View {{ room.title }} at {{ notif.link }}

View File

@@ -0,0 +1,53 @@
<!doctype html>
<html lang="en">
<head>
<style type="text/css">
{% include 'mail.css' without context %}
{% include "mail-%s.css" % app_name ignore missing without context %}
</style>
</head>
<body>
<table id="page">
<tr>
<td> </td>
<td id="inner">
<table class="header">
<tr>
<td>
<div class="salutation">Hi {{ user_display_name }},</div>
<div class="summarytext">{{ summary_text }}</div>
</td>
<td class="logo">
{% if app_name == "Vector" %}
<img src="http://matrix.org/img/vector-logo-email.png" width="64" height="83" alt="[Vector]"/>
{% else %}
<img src="http://matrix.org/img/matrix-120x51.png" width="120" height="51" alt="[matrix]"/>
{% endif %}
</td>
</tr>
</table>
{% for room in rooms %}
{% include 'room.html' with context %}
{% endfor %}
<div class="footer">
<a href="{{ unsubscribe_link }}">Unsubscribe</a>
<br/>
<br/>
<div class="debug">
Sending email at {{ reason.now|format_ts("%c") }} due to activity in room {{ reason.room_name }} because
an event was received at {{ reason.received_at|format_ts("%c") }}
which is more than {{ "%.1f"|format(reason.delay_before_mail_ms / (60*1000)) }} ({{ reason.delay_before_mail_ms }}) mins ago,
{% if reason.last_sent_ts %}
and the last time we sent a mail for this room was {{ reason.last_sent_ts|format_ts("%c") }},
which is more than {{ "%.1f"|format(reason.throttle_ms / (60*1000)) }} (current throttle_ms) mins ago.
{% else %}
and we don't have a last time we sent a mail for this room.
{% endif %}
</div>
</div>
</td>
<td> </td>
</tr>
</table>
</body>
</html>

View File

@@ -0,0 +1,10 @@
Hi {{ user_display_name }},
{{ summary_text }}
{% for room in rooms %}
{% include 'room.txt' with context %}
{% endfor %}
You can disable these notifications at {{ unsubscribe_link }}

res/templates/room.html Normal file
View File

@@ -0,0 +1,33 @@
<table class="room">
<tr class="room_header">
<td class="room_avatar">
{% if room.avatar_url %}
<img alt="" src="{{ room.avatar_url|mxc_to_http(48,48) }}" />
{% else %}
{% if room.hash % 3 == 0 %}
<img alt="" src="https://vector.im/beta/img/76cfa6.png" />
{% elif room.hash % 3 == 1 %}
<img alt="" src="https://vector.im/beta/img/50e2c2.png" />
{% else %}
<img alt="" src="https://vector.im/beta/img/f4c371.png" />
{% endif %}
{% endif %}
</td>
<td class="room_name" colspan="2">
{{ room.title }}
</td>
</tr>
{% if room.invite %}
<tr>
<td></td>
<td>
<a href="{{ room.link }}">Join the conversation.</a>
</td>
<td></td>
</tr>
{% else %}
{% for notif in room.notifs %}
{% include 'notif.html' with context %}
{% endfor %}
{% endif %}
</table>

res/templates/room.txt Normal file
View File

@@ -0,0 +1,9 @@
{{ room.title }}
{% if room.invite %}
You've been invited, join at {{ room.link }}
{% else %}
{% for notif in room.notifs %}
{% include 'notif.txt' with context %}
{% endfor %}
{% endif %}

View File

@@ -1,10 +1,16 @@
#!/usr/bin/env python
import argparse
import sys
import bcrypt
import getpass
import yaml
bcrypt_rounds = 12
password_pepper = ""
def prompt_for_pass():
password = getpass.getpass("Password: ")
@@ -28,12 +34,22 @@ if __name__ == "__main__":
default=None,
help="New password for user. Will prompt if omitted.",
)
parser.add_argument(
"-c", "--config",
type=argparse.FileType('r'),
help="Path to server config file. Used to read in bcrypt_rounds and password_pepper.",
)
args = parser.parse_args()
if "config" in args and args.config:
config = yaml.safe_load(args.config)
bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
password_config = config.get("password_config", {})
password_pepper = password_config.get("pepper", password_pepper)
password = args.password
if not password:
password = prompt_for_pass()
print bcrypt.hashpw(password + password_pepper, bcrypt.gensalt(bcrypt_rounds))
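Note that anything which later checks one of these hashes must append the same
pepper before calling bcrypt; a minimal sketch of such a check (the helper name
is ours, not synapse's)::

    import bcrypt


    def check_password(password, stored_hash, pepper=""):
        # bcrypt re-derives the hash using the salt embedded in stored_hash;
        # the comparison only succeeds if the same pepper is appended.
        return bcrypt.hashpw(password + pepper, stored_hash) == stored_hash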

View File

@@ -25,18 +25,26 @@ import urllib2
import yaml
def request_registration(user, password, server_location, shared_secret, admin=False):
mac = hmac.new(
key=shared_secret,
digestmod=hashlib.sha1,
)
mac.update(user)
mac.update("\x00")
mac.update(password)
mac.update("\x00")
mac.update("admin" if admin else "notadmin")
mac = mac.hexdigest()
data = {
"user": user,
"password": password,
"mac": mac,
"type": "org.matrix.login.shared_secret",
"admin": admin,
}
server_location = server_location.rstrip("/")
@@ -68,7 +76,7 @@ def request_registration(user, password, server_location, shared_secret):
sys.exit(1)
def register_new_user(user, password, server_location, shared_secret, admin):
if not user:
try:
default_user = getpass.getuser()
@@ -99,7 +107,14 @@ def register_new_user(user, password, server_location, shared_secret):
print "Passwords do not match"
sys.exit(1)
if not admin:
admin = raw_input("Make admin [no]: ")
if admin in ("y", "yes", "true"):
admin = True
else:
admin = False
request_registration(user, password, server_location, shared_secret, bool(admin))
if __name__ == "__main__":
@@ -119,6 +134,11 @@ if __name__ == "__main__":
default=None,
help="New password for user. Will prompt if omitted.",
)
parser.add_argument(
"-a", "--admin",
action="store_true",
help="Register new user as an admin. Will prompt if omitted.",
)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument(
@@ -151,4 +171,4 @@ if __name__ == "__main__":
else:
secret = args.shared_secret
register_new_user(args.user, args.password, args.server_url, secret, args.admin)
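The homeserver accepts the registration only if it can recompute the same MAC
from its own copy of the shared secret. A sketch of that check (the function
name and use of ``hmac.compare_digest`` are ours)::

    import hashlib
    import hmac


    def mac_is_valid(shared_secret, user, password, admin, supplied_mac):
        want = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
        want.update(user)
        want.update("\x00")    # the null separators remove any ambiguity
        want.update(password)  # about where one field ends and the next
        want.update("\x00")    # begins, e.g. ("ab", "c") vs ("a", "bc")
        want.update("admin" if admin else "notadmin")
        return hmac.compare_digest(want.hexdigest(), supplied_mac)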

View File

@@ -19,6 +19,7 @@ from twisted.enterprise import adbapi
from synapse.storage._base import LoggingTransaction, SQLBaseStore
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
import argparse
import curses
@@ -37,6 +38,7 @@ BOOLEAN_COLUMNS = {
"rooms": ["is_public"],
"event_edges": ["is_state"],
"presence_list": ["accepted"],
"presence_stream": ["currently_active"],
}
@@ -212,6 +214,10 @@ class Porter(object):
self.progress.add_table(table, postgres_size, table_size)
if table == "event_search":
yield self.handle_search_table(postgres_size, table_size, next_chunk)
return
select = (
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?"
% (table,)
@@ -230,51 +236,19 @@ class Porter(object):
if rows:
next_chunk = rows[-1][0] + 1
if table == "event_search":
# We have to treat event_search differently since it has a
# different structure in the two different databases.
def insert(txn):
sql = (
"INSERT INTO event_search (event_id, room_id, key, sender, vector)"
" VALUES (?,?,?,?,to_tsvector('english', ?))"
)
self._convert_rows(table, headers, rows)
def insert(txn):
self.postgres_store.insert_many_txn(
txn, table, headers[1:], rows
)
self.postgres_store._simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": table},
updatevalues={"rowid": next_chunk},
)
yield self.postgres_store.execute(insert)
@@ -284,6 +258,73 @@ class Porter(object):
else:
return
@defer.inlineCallbacks
def handle_search_table(self, postgres_size, table_size, next_chunk):
select = (
"SELECT es.rowid, es.*, e.origin_server_ts, e.stream_ordering"
" FROM event_search as es"
" INNER JOIN events AS e USING (event_id, room_id)"
" WHERE es.rowid >= ?"
" ORDER BY es.rowid LIMIT ?"
)
while True:
def r(txn):
txn.execute(select, (next_chunk, self.batch_size,))
rows = txn.fetchall()
headers = [column[0] for column in txn.description]
return headers, rows
headers, rows = yield self.sqlite_store.runInteraction("select", r)
if rows:
next_chunk = rows[-1][0] + 1
# We have to treat event_search differently since it has a
# different structure in the two different databases.
def insert(txn):
sql = (
"INSERT INTO event_search (event_id, room_id, key,"
" sender, vector, origin_server_ts, stream_ordering)"
" VALUES (?,?,?,?,to_tsvector('english', ?),?,?)"
)
rows_dict = [
dict(zip(headers, row))
for row in rows
]
txn.executemany(sql, [
(
row["event_id"],
row["room_id"],
row["key"],
row["sender"],
row["value"],
row["origin_server_ts"],
row["stream_ordering"],
)
for row in rows_dict
])
self.postgres_store._simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": "event_search"},
updatevalues={"rowid": next_chunk},
)
yield self.postgres_store.execute(insert)
postgres_size += len(rows)
self.progress.update("event_search", postgres_size)
else:
return
def setup_db(self, db_config, database_engine):
db_conn = database_engine.module.connect(
**{
@@ -292,7 +333,7 @@ class Porter(object):
}
)
prepare_database(db_conn, database_engine, config=None)
db_conn.commit()
@@ -309,8 +350,8 @@ class Porter(object):
**self.postgres_config["args"]
)
sqlite_engine = create_engine(sqlite_config)
postgres_engine = create_engine(postgres_config)
self.sqlite_store = Store(sqlite_db_pool, sqlite_engine)
self.postgres_store = Store(postgres_db_pool, postgres_engine)
@@ -792,8 +833,3 @@ if __name__ == "__main__":
if end_error_exec_info:
exc_type, exc_value, exc_traceback = end_error_exec_info
traceback.print_exception(exc_type, exc_value, exc_traceback)
class FakeConfig:
def __init__(self, database_config):
self.database_config = database_config

View File

@@ -17,3 +17,6 @@ ignore =
[flake8]
max-line-length = 90
ignore = W503 ; W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it.
[pep8]
max-line-length = 90

View File

@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.14.0"
__version__ = "0.16.1-r1"

View File

@@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module contains classes for authenticating the user."""
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json, SignatureVerifyException
@@ -22,9 +21,10 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, Codes, SynapseError, EventSizeError
from synapse.types import Requester, UserID, get_domain_from_id
from synapse.util.logutils import log_function
from synapse.util.logcontext import preserve_context_over_fn
from synapse.util.metrics import Measure
from unpaddedbase64 import decode_base64
import logging
@@ -41,12 +41,20 @@ AuthEventTypes = (
class Auth(object):
"""
FIXME: This class contains a mix of functions for authenticating users
of our client-server API and authenticating events added to room graphs.
"""
def __init__(self, hs):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
# Docs for these currently live at
# https://github.com/matrix-org/matrix-doc/blob/master/drafts/macaroons_caveats.rst
# In addition, we have type == delete_pusher which grants access only to
# delete pushers.
self._KNOWN_CAVEAT_PREFIXES = set([
"gen = ",
"guest = ",
@@ -66,9 +74,9 @@ class Auth(object):
Returns:
True if the auth checks pass.
"""
with Measure(self.clock, "auth.check"):
self.check_size_limits(event)
try:
if not hasattr(event, "room_id"):
raise AuthError(500, "Event has no room_id: %s" % event)
if auth_events is None:
@@ -89,8 +97,8 @@ class Auth(object):
"Room %r does not exist" % (event.room_id,)
)
creating_domain = get_domain_from_id(event.room_id)
originating_domain = get_domain_from_id(event.sender)
if creating_domain != originating_domain:
if not self.can_federate(event, auth_events):
raise AuthError(
@@ -118,6 +126,24 @@ class Auth(object):
return allowed
self.check_event_sender_in_room(event, auth_events)
# Special case to allow m.room.third_party_invite events wherever
# a user is allowed to issue invites. Fixes
# https://github.com/vector-im/vector-web/issues/1208 hopefully
if event.type == EventTypes.ThirdPartyInvite:
user_level = self._get_user_power_level(event.user_id, auth_events)
invite_level = self._get_named_level(auth_events, "invite", 0)
if user_level < invite_level:
raise AuthError(
403, (
"You cannot issue a third party invite for %s." %
(event.content.display_name,)
)
)
else:
return True
self._can_send_event(event, auth_events)
if event.type == EventTypes.PowerLevels:
@@ -127,13 +153,6 @@ class Auth(object):
self.check_redaction(event, auth_events)
logger.debug("Allowing! %s", event)
except AuthError as e:
logger.info(
"Event auth check failed on event %s with msg: %s",
event, e.msg
)
logger.info("Denying! %s", event)
raise
def check_size_limits(self, event):
def too_big(field):
@@ -224,7 +243,7 @@ class Auth(object):
for event in curr_state.values():
if event.type == EventTypes.Member:
try:
if get_domain_from_id(event.state_key) != host:
continue
except:
logger.warn("state_key not user_id: %s", event.state_key)
@@ -271,8 +290,8 @@ class Auth(object):
target_user_id = event.state_key
creating_domain = get_domain_from_id(event.room_id)
target_domain = get_domain_from_id(target_user_id)
if creating_domain != target_domain:
if not self.can_federate(event, auth_events):
raise AuthError(
@@ -512,7 +531,7 @@ class Auth(object):
return default
@defer.inlineCallbacks
def get_user_by_req(self, request, allow_guest=False, rights="access"):
""" Get a registered user's ID.
Args:
@@ -534,7 +553,7 @@ class Auth(object):
)
access_token = request.args["access_token"][0]
user_info = yield self.get_user_by_access_token(access_token, rights)
user = user_info["user"]
token_id = user_info["token_id"]
is_guest = user_info["is_guest"]
@@ -595,7 +614,7 @@ class Auth(object):
defer.returnValue(user_id)
@defer.inlineCallbacks
def get_user_by_access_token(self, token, rights="access"):
""" Get a registered user's ID.
Args:
@@ -606,7 +625,7 @@ class Auth(object):
AuthError if no user by that token exists or the token is invalid.
"""
try:
ret = yield self.get_user_from_macaroon(token, rights)
except AuthError:
# TODO(daniel): Remove this fallback when all existing access tokens
# have been re-issued as macaroons.
@@ -614,20 +633,26 @@ class Auth(object):
defer.returnValue(ret)
@defer.inlineCallbacks
def get_user_from_macaroon(self, macaroon_str, rights="access"):
try:
macaroon = pymacaroons.Macaroon.deserialize(macaroon_str)
self.validate_macaroon(macaroon, "access", False)
user_prefix = "user_id = "
user = None
user_id = None
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
user_id = caveat.caveat_id[len(user_prefix):]
user = UserID.from_string(user_id)
elif caveat.caveat_id == "guest = true":
guest = True
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
)
if user is None:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "No user caveat in macaroon",
@@ -640,6 +665,13 @@ class Auth(object):
"is_guest": True,
"token_id": None,
}
elif rights == "delete_pusher":
# We don't store these tokens in the database
ret = {
"user": user,
"is_guest": False,
"token_id": None,
}
else:
# This codepath exists so that we can actually return a
# token ID, because we use token IDs in place of device
@@ -665,13 +697,14 @@ class Auth(object):
errcode=Codes.UNKNOWN_TOKEN
)
def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
"""
validate that a Macaroon is understood by and was signed by this server.
Args:
macaroon(pymacaroons.Macaroon): The macaroon to validate
type_string(str): The kind of token required (e.g. "access", "refresh",
"delete_pusher")
verify_expiry(bool): Whether to verify whether the macaroon has expired.
This should really always be True, but no clients currently implement
token refresh, so we can't enforce expiry yet.
@@ -679,7 +712,7 @@ class Auth(object):
v = pymacaroons.Verifier()
v.satisfy_exact("gen = 1")
v.satisfy_exact("type = " + type_string)
v.satisfy_general(lambda c: c.startswith("user_id = "))
v.satisfy_exact("user_id = %s" % user_id)
v.satisfy_exact("guest = true")
if verify_expiry:
v.satisfy_general(self._verify_expiry)
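For reference, a standalone sketch of the pymacaroons round trip that
``validate_macaroon`` relies on (the signing key and user ID are made up)::

    import pymacaroons

    key = "macaroon-secret-key"  # made-up signing key

    m = pymacaroons.Macaroon(
        location="example.com", identifier="key0", key=key)
    m.add_first_party_caveat("gen = 1")
    m.add_first_party_caveat("type = access")
    m.add_first_party_caveat("user_id = @alice:example.com")

    v = pymacaroons.Verifier()
    v.satisfy_exact("gen = 1")
    v.satisfy_exact("type = access")
    # Pinning the exact user_id caveat is the point of this change: a token
    # carrying multiple user_id caveats can no longer satisfy the verifier.
    v.satisfy_exact("user_id = @alice:example.com")
    v.verify(m, key)  # raises an exception on mismatch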
@@ -894,8 +927,8 @@ class Auth(object):
if user_level >= redact_level:
return False
redacter_domain = get_domain_from_id(event.event_id)
redactee_domain = get_domain_from_id(event.redacts)
if redacter_domain == redactee_domain:
return True

View File

@@ -42,8 +42,9 @@ class Codes(object):
TOO_LARGE = "M_TOO_LARGE"
EXCLUSIVE = "M_EXCLUSIVE"
THREEPID_AUTH_FAILED = "M_THREEPID_AUTH_FAILED"
THREEPID_IN_USE = "THREEPID_IN_USE"
THREEPID_IN_USE = "M_THREEPID_IN_USE"
INVALID_USERNAME = "M_INVALID_USERNAME"
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
class CodeMessageException(RuntimeError):

View File

@@ -15,6 +15,8 @@
from synapse.api.errors import SynapseError
from synapse.types import UserID, RoomID
from twisted.internet import defer
import ujson as json
@@ -24,10 +26,10 @@ class Filtering(object):
super(Filtering, self).__init__()
self.store = hs.get_datastore()
@defer.inlineCallbacks
def get_user_filter(self, user_localpart, filter_id):
result = yield self.store.get_user_filter(user_localpart, filter_id)
defer.returnValue(FilterCollection(result))
def add_user_filter(self, user_localpart, user_filter):
self.check_valid_filter(user_filter)

View File

@@ -16,14 +16,10 @@
import synapse
import contextlib
import gc
import logging
import os
import re
import resource
import subprocess
import sys
import time
from synapse.config._base import ConfigError
from synapse.python_dependencies import (
@@ -33,22 +29,15 @@ from synapse.python_dependencies import (
from synapse.rest import ClientRestResource
from synapse.storage.engines import create_engine, IncorrectDatabaseSetup
from synapse.storage import are_all_users_on_domain
from synapse.storage.prepare_database import UpgradeDatabaseException
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.server import HomeServer
from twisted.conch.manhole import ColoredManhole
from twisted.conch.insults import insults
from twisted.conch import manhole_ssh
from twisted.cred import checkers, portal
from twisted.internet import reactor, task, defer
from twisted.application import service
from twisted.web.resource import Resource, EncodingResourceWrapper
from twisted.web.static import File
from twisted.web.server import GzipEncoderFactory
from synapse.http.server import RootRedirect
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
@@ -66,6 +55,13 @@ from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.resource import ReplicationResource, REPLICATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.http.site import SynapseSite
from synapse import events
from daemonize import Daemonize
@@ -73,9 +69,6 @@ from daemonize import Daemonize
logger = logging.getLogger("synapse.app.homeserver")
ACCESS_TOKEN_RE = re.compile(r'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
def gz_wrap(r):
return EncodingResourceWrapper(r, [GzipEncoderFactory()])
@@ -154,7 +147,7 @@ class SynapseHomeServer(HomeServer):
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
})
@@ -173,7 +166,12 @@ class SynapseHomeServer(HomeServer):
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationResource(self)
if WEB_CLIENT_PREFIX in resources:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
root_resource = Resource()
root_resource = create_resource_tree(resources, root_resource)
if tls:
reactor.listenSSL(
port,
@@ -206,24 +204,13 @@ class SynapseHomeServer(HomeServer):
if listener["type"] == "http":
self._listener_http(config, listener)
elif listener["type"] == "manhole":
checker = checkers.InMemoryUsernamePasswordDatabaseDontUse(
matrix="rabbithole"
)
rlm = manhole_ssh.TerminalRealm()
rlm.chainedProtocolFactory = lambda: insults.ServerProtocol(
ColoredManhole,
{
"__name__": "__console__",
"hs": self,
}
)
f = manhole_ssh.ConchFactory(portal.Portal(rlm, [checker]))
reactor.listenTCP(
listener["port"],
f,
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
@@ -245,7 +232,7 @@ class SynapseHomeServer(HomeServer):
except IncorrectDatabaseSetup as e:
quit_with_error(e.message)
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
@@ -254,7 +241,8 @@ class SynapseHomeServer(HomeServer):
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
@@ -268,86 +256,6 @@ def quit_with_error(error_string):
sys.exit(1)
def get_version_string():
try:
null = open(os.devnull, 'w')
cwd = os.path.dirname(os.path.abspath(__file__))
try:
git_branch = subprocess.check_output(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
stderr=null,
cwd=cwd,
).strip()
git_branch = "b=" + git_branch
except subprocess.CalledProcessError:
git_branch = ""
try:
git_tag = subprocess.check_output(
['git', 'describe', '--exact-match'],
stderr=null,
cwd=cwd,
).strip()
git_tag = "t=" + git_tag
except subprocess.CalledProcessError:
git_tag = ""
try:
git_commit = subprocess.check_output(
['git', 'rev-parse', '--short', 'HEAD'],
stderr=null,
cwd=cwd,
).strip()
except subprocess.CalledProcessError:
git_commit = ""
try:
dirty_string = "-this_is_a_dirty_checkout"
is_dirty = subprocess.check_output(
['git', 'describe', '--dirty=' + dirty_string],
stderr=null,
cwd=cwd,
).strip().endswith(dirty_string)
git_dirty = "dirty" if is_dirty else ""
except subprocess.CalledProcessError:
git_dirty = ""
if git_branch or git_tag or git_commit or git_dirty:
git_version = ",".join(
s for s in
(git_branch, git_tag, git_commit, git_dirty,)
if s
)
return (
"Synapse/%s (%s)" % (
synapse.__version__, git_version,
)
).encode("ascii")
except Exception as e:
logger.info("Failed to check for git repository: %s", e)
return ("Synapse/%s" % (synapse.__version__,)).encode("ascii")
def change_resource_limit(soft_file_no):
try:
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if not soft_file_no:
soft_file_no = hard
resource.setrlimit(resource.RLIMIT_NOFILE, (soft_file_no, hard))
logger.info("Set file limit to: %d", soft_file_no)
resource.setrlimit(
resource.RLIMIT_CORE, (resource.RLIM_INFINITY, resource.RLIM_INFINITY)
)
except (ValueError, resource.error) as e:
logger.warn("Failed to set file or core limit: %s", e)
def setup(config_options):
"""
Args:
@@ -358,10 +266,9 @@ def setup(config_options):
HomeServer
"""
try:
config = HomeServerConfig.load_or_generate_config(
"Synapse Homeserver",
config_options,
generate_section="Homeserver"
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
@@ -377,7 +284,7 @@ def setup(config_options):
# check any extra requirements we have now we have a config
check_requirements(config)
version_string = get_version_string("Synapse", synapse)
logger.info("Server hostname: %s", config.server_name)
logger.info("Server version: %s", version_string)
@@ -386,7 +293,7 @@ def setup(config_options):
tls_server_context_factory = context_factory.ServerContextFactory(config)
database_engine = create_engine(config.database_config)
config.database_config["args"]["cp_openfun"] = database_engine.on_new_connection
hs = SynapseHomeServer(
@@ -394,7 +301,6 @@ def setup(config_options):
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
content_addr=config.content_addr,
version_string=version_string,
database_engine=database_engine,
)
@@ -402,8 +308,10 @@ def setup(config_options):
logger.info("Preparing database: %s...", config.database_config['name'])
try:
db_conn = hs.get_db_conn(run_new_connection=False)
prepare_database(db_conn, database_engine, config=config)
database_engine.on_new_connection(db_conn)
hs.run_startup_checks(db_conn, database_engine)
db_conn.commit()
@@ -442,215 +350,13 @@ class SynapseService(service.Service):
def startService(self):
hs = setup(self.config)
change_resource_limit(hs.config.soft_file_limit)
if hs.config.gc_thresholds:
gc.set_threshold(*hs.config.gc_thresholds)
def stopService(self):
return self._port.stopListening()
class SynapseRequest(Request):
def __init__(self, site, *args, **kw):
Request.__init__(self, *args, **kw)
self.site = site
self.authenticated_entity = None
self.start_time = 0
def __repr__(self):
# We overwrite this so that we don't log ``access_token``
return '<%s at 0x%x method=%s uri=%s clientproto=%s site=%s>' % (
self.__class__.__name__,
id(self),
self.method,
self.get_redacted_uri(),
self.clientproto,
self.site.site_tag,
)
def get_redacted_uri(self):
return ACCESS_TOKEN_RE.sub(
r'\1<redacted>\3',
self.uri
)
def get_user_agent(self):
return self.requestHeaders.getRawHeaders("User-Agent", [None])[-1]
def started_processing(self):
self.site.access_logger.info(
"%s - %s - Received request: %s %s",
self.getClientIP(),
self.site.site_tag,
self.method,
self.get_redacted_uri()
)
self.start_time = int(time.time() * 1000)
def finished_processing(self):
try:
context = LoggingContext.current_context()
ru_utime, ru_stime = context.get_resource_usage()
db_txn_count = context.db_txn_count
db_txn_duration = context.db_txn_duration
except:
ru_utime, ru_stime = (0, 0)
db_txn_count, db_txn_duration = (0, 0)
self.site.access_logger.info(
"%s - %s - {%s}"
" Processed request: %dms (%dms, %dms) (%dms/%d)"
" %sB %s \"%s %s %s\" \"%s\"",
self.getClientIP(),
self.site.site_tag,
self.authenticated_entity,
int(time.time() * 1000) - self.start_time,
int(ru_utime * 1000),
int(ru_stime * 1000),
int(db_txn_duration * 1000),
int(db_txn_count),
self.sentLength,
self.code,
self.method,
self.get_redacted_uri(),
self.clientproto,
self.get_user_agent(),
)
@contextlib.contextmanager
def processing(self):
self.started_processing()
yield
self.finished_processing()
class XForwardedForRequest(SynapseRequest):
def __init__(self, *args, **kw):
SynapseRequest.__init__(self, *args, **kw)
"""
Add a layer on top of another request that only uses the value of an
X-Forwarded-For header as the result of C{getClientIP}.
"""
def getClientIP(self):
"""
@return: The client address (the first address) in the value of the
I{X-Forwarded-For header}. If the header is not present, return
C{b"-"}.
"""
return self.requestHeaders.getRawHeaders(
b"x-forwarded-for", [b"-"])[0].split(b",")[0].strip()
class SynapseRequestFactory(object):
def __init__(self, site, x_forwarded_for):
self.site = site
self.x_forwarded_for = x_forwarded_for
def __call__(self, *args, **kwargs):
if self.x_forwarded_for:
return XForwardedForRequest(self.site, *args, **kwargs)
else:
return SynapseRequest(self.site, *args, **kwargs)
class SynapseSite(Site):
"""
Subclass of a twisted http Site that does access logging with python's
standard logging
"""
def __init__(self, logger_name, site_tag, config, resource, *args, **kwargs):
Site.__init__(self, resource, *args, **kwargs)
self.site_tag = site_tag
proxied = config.get("x_forwarded", False)
self.requestFactory = SynapseRequestFactory(self, proxied)
self.access_logger = logging.getLogger(logger_name)
def log(self, request):
pass
def create_resource_tree(desired_tree, redirect_root_to_web_client=True):
"""Create the resource tree for this Home Server.
This is unduly complicated because Twisted does not support putting
child resources more than 1 level deep at a time.
Args:
desired_tree (dict): Mapping from full resource paths to the Resource
to attach at each path.
redirect_root_to_web_client (bool): True to redirect '/' to the
location of the web client. This does nothing if the web client is
not in the tree.
"""
if redirect_root_to_web_client and WEB_CLIENT_PREFIX in desired_tree:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
root_resource = Resource()
# ideally we'd just use getChild and putChild but getChild doesn't work
# unless you give it a Request object IN ADDITION to the name :/ So
# instead, we'll store a copy of this mapping so we can actually add
# extra resources to existing nodes. See self._resource_id for the key.
resource_mappings = {}
for full_path, res in desired_tree.items():
logger.info("Attaching %s to path %s", res, full_path)
last_resource = root_resource
for path_seg in full_path.split('/')[1:-1]:
if path_seg not in last_resource.listNames():
# resource doesn't exist, so make a "dummy resource"
child_resource = Resource()
last_resource.putChild(path_seg, child_resource)
res_id = _resource_id(last_resource, path_seg)
resource_mappings[res_id] = child_resource
last_resource = child_resource
else:
# we have an existing Resource, use that instead.
res_id = _resource_id(last_resource, path_seg)
last_resource = resource_mappings[res_id]
# ===========================
# now attach the actual desired resource
last_path_seg = full_path.split('/')[-1]
# if there is already a resource here, thieve its children and
# replace it
res_id = _resource_id(last_resource, last_path_seg)
if res_id in resource_mappings:
# there is a dummy resource at this path already, which needs
# to be replaced with the desired resource.
existing_dummy_resource = resource_mappings[res_id]
for child_name in existing_dummy_resource.listNames():
child_res_id = _resource_id(
existing_dummy_resource, child_name
)
child_resource = resource_mappings[child_res_id]
# steal the children
res.putChild(child_name, child_resource)
# finally, insert the desired resource in the right place
last_resource.putChild(last_path_seg, res)
res_id = _resource_id(last_resource, last_path_seg)
resource_mappings[res_id] = res
return root_resource
def _resource_id(resource, path_seg):
"""Construct an arbitrary resource ID so you can retrieve the mapping
later.
If you want to represent resource A putChild resource B with path C,
the mapping should look like _resource_id(A,C) = B.
Args:
resource (Resource): The *parent* Resource.
path_seg (str): The name of the child Resource to be attached.
Returns:
str: A unique string which can be a key to the child Resource.
"""
return "%s-%s" % (resource, path_seg)
def run(hs):
PROFILE_SYNAPSE = False
if PROFILE_SYNAPSE:
@@ -717,6 +423,8 @@ def run(hs):
# sys.settrace(logcontext_tracer)
with LoggingContext("run"):
change_resource_limit(hs.config.soft_file_limit)
if hs.config.gc_thresholds:
gc.set_threshold(*hs.config.gc_thresholds)
reactor.run()
if hs.config.daemonize:

synapse/app/pusher.py Normal file

@@ -0,0 +1,314 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
from synapse.server import HomeServer
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.storage.roommember import RoomMemberStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.storage.engines import create_engine
from synapse.storage import DataStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.pusher")
class PusherSlaveStore(
SlavedEventStore, SlavedPusherStore, SlavedReceiptsStore,
SlavedAccountDataStore
):
update_pusher_last_stream_ordering_and_success = (
DataStore.update_pusher_last_stream_ordering_and_success.__func__
)
update_pusher_failing_since = (
DataStore.update_pusher_failing_since.__func__
)
update_pusher_last_stream_ordering = (
DataStore.update_pusher_last_stream_ordering.__func__
)
get_throttle_params_by_room = (
DataStore.get_throttle_params_by_room.__func__
)
set_throttle_params = (
DataStore.set_throttle_params.__func__
)
get_time_of_last_push_action_before = (
DataStore.get_time_of_last_push_action_before.__func__
)
get_profile_displayname = (
DataStore.get_profile_displayname.__func__
)
# XXX: This is a bit broken because we don't persist forgotten rooms
# in a way that they can be streamed. This means that we don't have a
# way to invalidate the forgotten rooms cache correctly.
# For now we expire the cache every hour.
BROKEN_CACHE_EXPIRY_MS = 60 * 60 * 1000
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
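The assignments above borrow method implementations from the full DataStore rather than inheriting the whole class. A sketch of the underlying Python 2 mechanics (class names hypothetical): an unbound method exposes its plain function as __func__, which can then be dropped into another class body.
class Full(object):
    def greet(self):
        return "hello from %s" % (type(self).__name__,)

class Slim(object):
    # Reuse Full's implementation without inheriting anything else.
    greet = Full.greet.__func__

print(Slim().greet())  # -> hello from Slim
Fetching who_forgot_in_room via RoomMemberStore.__dict__ serves a similar purpose for a decorated attribute: it retrieves the raw class-dict entry rather than whatever ordinary attribute lookup would produce.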
class PusherServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = PusherSlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def remove_pusher(self, app_id, push_key, user_id):
http_client = self.get_simple_http_client()
replication_url = self.config.worker_replication_url
url = replication_url + "/remove_pushers"
return http_client.post_json_get_json(url, {
"remove": [{
"app_id": app_id,
"push_key": push_key,
"user_id": user_id,
}]
})
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
logger.info("Synapse pusher now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
pusher_pool = self.get_pusherpool()
clock = self.get_clock()
def stop_pusher(user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
def start_pusher(user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return pusher_pool._refresh_pusher(app_id, pushkey, user_id)
@defer.inlineCallbacks
def poke_pushers(results):
pushers_rows = set(
map(tuple, results.get("pushers", {}).get("rows", []))
)
deleted_pushers_rows = set(
map(tuple, results.get("deleted_pushers", {}).get("rows", []))
)
for row in sorted(pushers_rows | deleted_pushers_rows):
if row in deleted_pushers_rows:
user_id, app_id, pushkey = row[1:4]
stop_pusher(user_id, app_id, pushkey)
elif row in pushers_rows:
user_id = row[1]
app_id = row[5]
pushkey = row[8]
yield start_pusher(user_id, app_id, pushkey)
stream = results.get("events")
if stream:
min_stream_id = stream["rows"][0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_notifications)(
min_stream_id, max_stream_id
)
stream = results.get("receipts")
if stream:
rows = stream["rows"]
affected_room_ids = set(row[1] for row in rows)
min_stream_id = rows[0][0]
max_stream_id = stream["position"]
preserve_fn(pusher_pool.on_new_receipts)(
min_stream_id, max_stream_id, affected_room_ids
)
def expire_broken_caches():
store.who_forgot_in_room.invalidate_all()
next_expire_broken_caches_ms = 0
while True:
try:
args = store.stream_positions()
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
now_ms = clock.time_msec()
if now_ms > next_expire_broken_caches_ms:
expire_broken_caches()
next_expire_broken_caches_ms = (
now_ms + store.BROKEN_CACHE_EXPIRY_MS
)
yield store.process_replication(result)
poke_pushers(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(30)
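The loop above is a long poll: each iteration reports the latest token the worker has seen for every stream, and the master replies with any rows past those tokens. A hypothetical illustration of one poll (the stream names come from the code above; the token values are invented):
args = {"pushers": 1043, "events": 20544, "timeout": 30000}
# -> GET <replication_url>?pushers=1043&events=20544&timeout=30000
# The JSON response maps stream names to {"position": ..., "rows": [...]};
# poke_pushers() then diffs the "pushers" and "deleted_pushers" rows into
# start_pusher()/stop_pusher() calls.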
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse pusher", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.pusher"
setup_logging(config.worker_log_config, config.worker_log_file)
if config.start_pushers:
sys.stderr.write(
"\nThe pushers must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``start_pushers: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the pushers to start since they will be disabled in the main config
config.start_pushers = True
database_engine = create_engine(config.database_config)
ps = PusherServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string=get_version_string("Synapse", synapse),
database_engine=database_engine,
)
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.replicate()
ps.get_pusherpool().start()
ps.get_datastore().start_profiling()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-pusher",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
if __name__ == '__main__':
with LoggingContext("main"):
ps = start(sys.argv[1:])

synapse/app/synchrotron.py Normal file

@@ -0,0 +1,465 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
from synapse.api.constants import EventTypes, PresenceState
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.events import FrozenEvent
from synapse.handlers.presence import PresenceHandler
from synapse.http.site import SynapseSite
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.rest.client.v2_alpha import sync
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.presence import PresenceStore, UserPresenceState
from synapse.storage.roommember import RoomMemberStore
from synapse.util.async import sleep
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import contextlib
import gc
import ujson as json
logger = logging.getLogger("synapse.app.synchrotron")
class SynchrotronSlavedStore(
SlavedPushRuleStore,
SlavedEventStore,
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
# XXX: This is a bit broken because we don't persist forgotten rooms
# in a way that they can be streamed. This means that we don't have a
# way to invalidate the forgotten rooms cache correctly.
# For now we expire the cache every hour.
BROKEN_CACHE_EXPIRY_MS = 60 * 60 * 1000
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
)
# XXX: This is a bit broken because we don't persist the accepted list in a
# way that can be replicated. This means that we don't have a way to
# invalidate the cache correctly.
get_presence_list_accepted = PresenceStore.__dict__[
"get_presence_list_accepted"
]
UPDATE_SYNCING_USERS_MS = 10 * 1000
class SynchrotronPresence(object):
def __init__(self, hs):
self.http_client = hs.get_simple_http_client()
self.store = hs.get_datastore()
self.user_to_num_current_syncs = {}
self.syncing_users_url = hs.config.worker_replication_url + "/syncing_users"
self.clock = hs.get_clock()
active_presence = self.store.take_presence_startup_info()
self.user_to_current_state = {
state.user_id: state
for state in active_presence
}
self.process_id = random_string(16)
logger.info("Presence process_id is %r", self.process_id)
self._sending_sync = False
self._need_to_send_sync = False
self.clock.looping_call(
self._send_syncing_users_regularly,
UPDATE_SYNCING_USERS_MS,
)
reactor.addSystemEventTrigger("before", "shutdown", self._on_shutdown)
def set_state(self, user, state):
# TODO: How's this supposed to work?
pass
get_states = PresenceHandler.get_states.__func__
current_state_for_users = PresenceHandler.current_state_for_users.__func__
@defer.inlineCallbacks
def user_syncing(self, user_id, affect_presence):
if affect_presence:
curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
self.user_to_num_current_syncs[user_id] = curr_sync + 1
prev_states = yield self.current_state_for_users([user_id])
if prev_states[user_id].state == PresenceState.OFFLINE:
# TODO: Don't block the sync request on this HTTP hit.
yield self._send_syncing_users_now()
def _end():
# We check that the user_id is in user_to_num_current_syncs because
# user_to_num_current_syncs may have been cleared if we are
# shutting down.
if affect_presence and user_id in self.user_to_num_current_syncs:
self.user_to_num_current_syncs[user_id] -= 1
@contextlib.contextmanager
def _user_syncing():
try:
yield
finally:
_end()
defer.returnValue(_user_syncing())
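A sketch of how a caller might consume user_syncing() (the handler name is hypothetical): the deferred yields a context manager, and the sync counter is decremented when the with-block exits, even on error.
@defer.inlineCallbacks
def handle_sync_request(presence, user_id):
    context = yield presence.user_syncing(user_id, affect_presence=True)
    with context:
        pass  # ... serve the /sync request here ...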
@defer.inlineCallbacks
def _on_shutdown(self):
# When the synchrotron is shutdown tell the master to clear the in
# progress syncs for this process
self.user_to_num_current_syncs.clear()
yield self._send_syncing_users_now()
def _send_syncing_users_regularly(self):
# Only send an update if we aren't in the middle of sending one.
if not self._sending_sync:
preserve_fn(self._send_syncing_users_now)()
@defer.inlineCallbacks
def _send_syncing_users_now(self):
if self._sending_sync:
# We don't want to race with sending another update.
# Instead we wait for that update to finish and send another
# update afterwards.
self._need_to_send_sync = True
return
# Flag that we are sending an update.
self._sending_sync = True
yield self.http_client.post_json_get_json(self.syncing_users_url, {
"process_id": self.process_id,
"syncing_users": [
user_id for user_id, count in self.user_to_num_current_syncs.items()
if count > 0
],
})
# Unset the flag as we are no longer sending an update.
self._sending_sync = False
if self._need_to_send_sync:
# If something happened while we were sending the update then
# we might need to send another update.
# TODO: Check if the update that was sent matches the current state
# as we only need to send an update if they are different.
self._need_to_send_sync = False
yield self._send_syncing_users_now()
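The _sending_sync/_need_to_send_sync pair implements a single-flight update with coalescing: while one update is in flight, any number of further triggers collapse into exactly one follow-up send. A synchronous sketch of the same pattern (the deferred version above interleaves these steps around the HTTP call):
class Coalescer(object):
    def __init__(self, send):
        self._send = send
        self._busy = False
        self._pending = False

    def trigger(self):
        if self._busy:
            # A send is in flight; remember to send again afterwards.
            self._pending = True
            return
        self._busy = True
        self._send()
        self._busy = False
        if self._pending:
            self._pending = False
            self.trigger()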
def process_replication(self, result):
stream = result.get("presence", {"rows": []})
for row in stream["rows"]:
(
position, user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
) = row
self.user_to_current_state[user_id] = UserPresenceState(
user_id, state, last_active_ts,
last_federation_update_ts, last_user_sync_ts, status_msg,
currently_active
)
class SynchrotronTyping(object):
def __init__(self, hs):
self._latest_room_serial = 0
self._room_serials = {}
self._room_typing = {}
def stream_positions(self):
return {"typing": self._latest_room_serial}
def process_replication(self, result):
stream = result.get("typing")
if stream:
self._latest_room_serial = int(stream["position"])
for row in stream["rows"]:
position, room_id, typing_json = row
typing = json.loads(typing_json)
self._room_serials[room_id] = position
self._room_typing[room_id] = typing
class SynchrotronApplicationService(object):
def notify_interested_services(self, event):
pass
class SynchrotronServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = SynchrotronSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
sync.register_servlets(self, resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
})
root_resource = create_resource_tree(resources, Resource())
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=bind_address
)
logger.info("Synapse synchrotron now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=listener.get("bind_address", '127.0.0.1')
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@defer.inlineCallbacks
def replicate(self):
http_client = self.get_simple_http_client()
store = self.get_datastore()
replication_url = self.config.worker_replication_url
clock = self.get_clock()
notifier = self.get_notifier()
presence_handler = self.get_presence_handler()
typing_handler = self.get_typing_handler()
def expire_broken_caches():
store.who_forgot_in_room.invalidate_all()
store.get_presence_list_accepted.invalidate_all()
def notify_from_stream(
result, stream_name, stream_key, room=None, user=None
):
stream = result.get(stream_name)
if stream:
position_index = stream["field_names"].index("position")
if room:
room_index = stream["field_names"].index(room)
if user:
user_index = stream["field_names"].index(user)
users = ()
rooms = ()
for row in stream["rows"]:
position = row[position_index]
if user:
users = (row[user_index],)
if room:
rooms = (row[room_index],)
notifier.on_new_event(
stream_key, position, users=users, rooms=rooms
)
def notify(result):
stream = result.get("events")
if stream:
max_position = stream["position"]
for row in stream["rows"]:
position = row[0]
internal = json.loads(row[1])
event_json = json.loads(row[2])
event = FrozenEvent(event_json, internal_metadata_dict=internal)
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
notifier.on_new_room_event(
event, position, max_position, extra_users
)
notify_from_stream(
result, "push_rules", "push_rules_key", user="user_id"
)
notify_from_stream(
result, "user_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "room_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "tag_account_data", "account_data_key", user="user_id"
)
notify_from_stream(
result, "receipts", "receipt_key", room="room_id"
)
notify_from_stream(
result, "typing", "typing_key", room="room_id"
)
next_expire_broken_caches_ms = 0
while True:
try:
args = store.stream_positions()
args.update(typing_handler.stream_positions())
args["timeout"] = 30000
result = yield http_client.get_json(replication_url, args=args)
now_ms = clock.time_msec()
if now_ms > next_expire_broken_caches_ms:
expire_broken_caches()
next_expire_broken_caches_ms = (
now_ms + store.BROKEN_CACHE_EXPIRY_MS
)
yield store.process_replication(result)
typing_handler.process_replication(result)
presence_handler.process_replication(result)
notify(result)
except:
logger.exception("Error replicating from %r", replication_url)
yield sleep(5)
def build_presence_handler(self):
return SynchrotronPresence(self)
def build_typing_handler(self):
return SynchrotronTyping(self)
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse synchrotron", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.synchrotron"
setup_logging(config.worker_log_config, config.worker_log_file)
database_engine = create_engine(config.database_config)
ss = SynchrotronServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string=get_version_string("Synapse", synapse),
database_engine=database_engine,
application_service_handler=SynchrotronApplicationService(),
)
ss.setup()
ss.start_listening(config.worker_listeners)
def run():
with LoggingContext("run"):
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_datastore().start_profiling()
ss.replicate()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-synchrotron",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

synctl

@@ -14,11 +14,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import argparse
import collections
import glob
import os
import os.path
import subprocess
import signal
import subprocess
import sys
import yaml
SYNAPSE = ["python", "-B", "-m", "synapse.app.homeserver"]
@@ -28,57 +31,182 @@ RED = "\x1b[1;31m"
NORMAL = "\x1b[m"
def write(message, colour=NORMAL, stream=sys.stdout):
if colour == NORMAL:
stream.write(message + "\n")
else:
stream.write(colour + message + NORMAL + "\n")
def start(configfile):
print ("Starting ...")
write("Starting ...")
args = SYNAPSE
args.extend(["--daemonize", "-c", configfile])
try:
subprocess.check_call(args)
print (GREEN + "started" + NORMAL)
write("started synapse.app.homeserver(%r)" % (configfile,), colour=GREEN)
except subprocess.CalledProcessError as e:
print (
RED +
"error starting (exit code: %d); see above for logs" % e.returncode +
NORMAL
write(
"error starting (exit code: %d); see above for logs" % e.returncode,
colour=RED,
)
def stop(pidfile):
def start_worker(app, configfile, worker_configfile):
args = [
"python", "-B",
"-m", app,
"-c", configfile,
"-c", worker_configfile
]
try:
subprocess.check_call(args)
write("started %s(%r)" % (app, worker_configfile), colour=GREEN)
except subprocess.CalledProcessError as e:
write(
"error starting %s(%r) (exit code: %d); see above for logs" % (
app, worker_configfile, e.returncode,
),
colour=RED,
)
def stop(pidfile, app):
if os.path.exists(pidfile):
pid = int(open(pidfile).read())
os.kill(pid, signal.SIGTERM)
print (GREEN + "stopped" + NORMAL)
write("stopped %s" % (app,), colour=GREEN)
Worker = collections.namedtuple("Worker", [
"app", "configfile", "pidfile", "cache_factor"
])
def main():
configfile = sys.argv[2] if len(sys.argv) == 3 else "homeserver.yaml"
if not os.path.exists(configfile):
sys.stderr.write(
"No config file found\n"
"To generate a config file, run '%s -c %s --generate-config"
" --server-name=<server name>'\n" % (
" ".join(SYNAPSE), configfile
)
parser = argparse.ArgumentParser()
parser.add_argument(
"action",
choices=["start", "stop", "restart"],
help="whether to start, stop or restart the synapse",
)
parser.add_argument(
"configfile",
nargs="?",
default="homeserver.yaml",
help="the homeserver config file, defaults to homserver.yaml",
)
parser.add_argument(
"-w", "--worker",
metavar="WORKERCONFIG",
help="start or stop a single worker",
)
parser.add_argument(
"-a", "--all-processes",
metavar="WORKERCONFIGDIR",
help="start or stop all the workers in the given directory"
" and the main synapse process",
)
options = parser.parse_args()
if options.worker and options.all_processes:
write(
'Cannot use "--worker" with "--all-processes"',
stream=sys.stderr
)
sys.exit(1)
config = yaml.load(open(configfile))
pidfile = config["pid_file"]
configfile = options.configfile
action = sys.argv[1] if sys.argv[1:] else "usage"
if action == "start":
start(configfile)
elif action == "stop":
stop(pidfile)
elif action == "restart":
stop(pidfile)
start(configfile)
else:
sys.stderr.write("Usage: %s [start|stop|restart] [configfile]\n" % (sys.argv[0],))
if not os.path.exists(configfile):
write(
"No config file found\n"
"To generate a config file, run '%s -c %s --generate-config"
" --server-name=<server name>'\n" % (
" ".join(SYNAPSE), options.configfile
),
stream=sys.stderr,
)
sys.exit(1)
with open(configfile) as stream:
config = yaml.load(stream)
pidfile = config["pid_file"]
cache_factor = config.get("synctl_cache_factor")
start_stop_synapse = True
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
worker_configfiles = []
if options.worker:
start_stop_synapse = False
worker_configfile = options.worker
if not os.path.exists(worker_configfile):
write(
"No worker config found at %r" % (worker_configfile,),
stream=sys.stderr,
)
sys.exit(1)
worker_configfiles.append(worker_configfile)
if options.all_processes:
worker_configdir = options.all_processes
if not os.path.isdir(worker_configdir):
write(
"No worker config directory found at %r" % (worker_configdir,),
stream=sys.stderr,
)
sys.exit(1)
worker_configfiles.extend(sorted(glob.glob(
os.path.join(worker_configdir, "*.yaml")
)))
workers = []
for worker_configfile in worker_configfiles:
with open(worker_configfile) as stream:
worker_config = yaml.load(stream)
worker_app = worker_config["worker_app"]
worker_pidfile = worker_config["worker_pid_file"]
worker_daemonize = worker_config["worker_daemonize"]
assert worker_daemonize # TODO print something more user friendly
worker_cache_factor = worker_config.get("synctl_cache_factor")
workers.append(Worker(
worker_app, worker_configfile, worker_pidfile, worker_cache_factor,
))
action = options.action
if action == "stop" or action == "restart":
for worker in workers:
stop(worker.pidfile, worker.app)
if start_stop_synapse:
stop(pidfile, "synapse.app.homeserver")
# TODO: Wait for synapse to actually shutdown before starting it again
if action == "start" or action == "restart":
if start_stop_synapse:
start(configfile)
for worker in workers:
if worker.cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(worker.cache_factor)
start_worker(worker.app, configfile, worker.configfile)
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
else:
os.environ.pop("SYNAPSE_CACHE_FACTOR", None)
if __name__ == "__main__":
main()

synapse/appservice/api.py

@@ -100,11 +100,6 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning("push_bulk to %s threw exception %s", uri, ex)
defer.returnValue(False)
@defer.inlineCallbacks
def push(self, service, event, txn_id=None):
response = yield self.push_bulk(service, [event], txn_id)
defer.returnValue(response)
def _serialize(self, events):
time_now = self.clock.time_msec()
return [

synapse/appservice/scheduler.py

@@ -56,22 +56,22 @@ import logging
logger = logging.getLogger(__name__)
class AppServiceScheduler(object):
class ApplicationServiceScheduler(object):
""" Public facing API for this module. Does the required DI to tie the
components together. This also serves as the "event_pool", which in this
case is a simple array.
"""
def __init__(self, clock, store, as_api):
self.clock = clock
self.store = store
self.as_api = as_api
def __init__(self, hs):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.as_api = hs.get_application_service_api()
def create_recoverer(service, callback):
return _Recoverer(clock, store, as_api, service, callback)
return _Recoverer(self.clock, self.store, self.as_api, service, callback)
self.txn_ctrl = _TransactionController(
clock, store, as_api, create_recoverer
self.clock, self.store, self.as_api, create_recoverer
)
self.queuer = _ServiceQueuer(self.txn_ctrl)

synapse/config/_base.py

@@ -157,9 +157,40 @@ class Config(object):
return default_config, config
@classmethod
def load_config(cls, description, argv, generate_section=None):
obj = cls()
def load_config(cls, description, argv):
config_parser = argparse.ArgumentParser(
description=description,
)
config_parser.add_argument(
"-c", "--config-path",
action="append",
metavar="CONFIG_FILE",
help="Specify config file. Can be given multiple times and"
" may specify directories containing *.yaml files."
)
config_parser.add_argument(
"--keys-directory",
metavar="DIRECTORY",
help="Where files such as certs and signing keys are stored when"
" their location is given explicitly in the config."
" Defaults to the directory containing the last config file",
)
config_args = config_parser.parse_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)
obj = cls()
obj.read_config_files(
config_files,
keys_directory=config_args.keys_directory,
generate_keys=False,
)
return obj
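Usage sketch for the new classmethod (the file names are hypothetical); this is the path the worker entry points take, since workers must never generate keys or config:
config = HomeServerConfig.load_config(
    "Synapse pusher", ["-c", "homeserver.yaml", "-c", "pusher.yaml"]
)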
@classmethod
def load_or_generate_config(cls, description, argv):
config_parser = argparse.ArgumentParser(add_help=False)
config_parser.add_argument(
"-c", "--config-path",
@@ -176,7 +207,7 @@ class Config(object):
config_parser.add_argument(
"--report-stats",
action="store",
help="Stuff",
help="Whether the generated config reports anonymized usage statistics",
choices=["yes", "no"]
)
config_parser.add_argument(
@@ -197,36 +228,11 @@ class Config(object):
)
config_args, remaining_args = config_parser.parse_known_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)
generate_keys = config_args.generate_keys
config_files = []
if config_args.config_path:
for config_path in config_args.config_path:
if os.path.isdir(config_path):
# We accept specifying directories as config paths, we search
# inside that directory for all files matching *.yaml, and then
# we apply them in *sorted* order.
files = []
for entry in os.listdir(config_path):
entry_path = os.path.join(config_path, entry)
if not os.path.isfile(entry_path):
print (
"Found subdirectory in config directory: %r. IGNORING."
) % (entry_path, )
continue
if not entry.endswith(".yaml"):
print (
"Found file in config directory that does not"
" end in '.yaml': %r. IGNORING."
) % (entry_path, )
continue
files.append(entry_path)
config_files.extend(sorted(files))
else:
config_files.append(config_path)
obj = cls()
if config_args.generate_config:
if config_args.report_stats is None:
@@ -299,28 +305,43 @@ class Config(object):
" -c CONFIG-FILE\""
)
if config_args.keys_directory:
config_dir_path = config_args.keys_directory
else:
config_dir_path = os.path.dirname(config_args.config_path[-1])
config_dir_path = os.path.abspath(config_dir_path)
obj.read_config_files(
config_files,
keys_directory=config_args.keys_directory,
generate_keys=generate_keys,
)
if generate_keys:
return None
obj.invoke_all("read_arguments", args)
return obj
def read_config_files(self, config_files, keys_directory=None,
generate_keys=False):
if not keys_directory:
keys_directory = os.path.dirname(config_files[-1])
config_dir_path = os.path.abspath(keys_directory)
specified_config = {}
for config_file in config_files:
yaml_config = cls.read_config_file(config_file)
yaml_config = self.read_config_file(config_file)
specified_config.update(yaml_config)
if "server_name" not in specified_config:
raise ConfigError(MISSING_SERVER_NAME)
server_name = specified_config["server_name"]
_, config = obj.generate_config(
_, config = self.generate_config(
config_dir_path=config_dir_path,
server_name=server_name,
is_generating_file=False,
)
config.pop("log_config")
config.update(specified_config)
if "report_stats" not in config:
raise ConfigError(
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS + "\n" +
@@ -328,11 +349,51 @@ class Config(object):
)
if generate_keys:
obj.invoke_all("generate_files", config)
self.invoke_all("generate_files", config)
return
obj.invoke_all("read_config", config)
self.invoke_all("read_config", config)
obj.invoke_all("read_arguments", args)
return obj
def find_config_files(search_paths):
"""Finds config files using a list of search paths. If a path is a file
then that file path is added to the list. If a search path is a directory
then all the "*.yaml" files in that directory are added to the list in
sorted order.
Args:
search_paths(list(str)): A list of paths to search.
Returns:
list(str): A list of file paths.
"""
config_files = []
if search_paths:
for config_path in search_paths:
if os.path.isdir(config_path):
# We accept specifying directories as config paths, we search
# inside that directory for all files matching *.yaml, and then
# we apply them in *sorted* order.
files = []
for entry in os.listdir(config_path):
entry_path = os.path.join(config_path, entry)
if not os.path.isfile(entry_path):
print (
"Found subdirectory in config directory: %r. IGNORING."
) % (entry_path, )
continue
if not entry.endswith(".yaml"):
print (
"Found file in config directory that does not"
" end in '.yaml': %r. IGNORING."
) % (entry_path, )
continue
files.append(entry_path)
config_files.extend(sorted(files))
else:
config_files.append(config_path)
return config_files
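A sketch of the search behaviour (paths hypothetical): files are taken as-is, while a directory contributes its *.yaml entries in sorted order, so numeric prefixes give a predictable override order.
files = find_config_files(["homeserver.yaml", "conf.d"])
# e.g. -> ["homeserver.yaml", "conf.d/00-base.yaml", "conf.d/10-workers.yaml"]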

synapse/config/appservice.py

@@ -12,7 +12,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from ._base import Config, ConfigError
from synapse.appservice import ApplicationService
from synapse.types import UserID
import urllib
import yaml
import logging
logger = logging.getLogger(__name__)
class AppServiceConfig(Config):
@@ -25,3 +34,99 @@ class AppServiceConfig(Config):
# A list of application service config file to use
app_service_config_files: []
"""
def load_appservices(hostname, config_files):
"""Returns a list of Application Services from the config files."""
if not isinstance(config_files, list):
logger.warning(
"Expected %s to be a list of AS config files.", config_files
)
return []
# Dicts of value -> filename
seen_as_tokens = {}
seen_ids = {}
appservices = []
for config_file in config_files:
try:
with open(config_file, 'r') as f:
appservice = _load_appservice(
hostname, yaml.load(f), config_file
)
if appservice.id in seen_ids:
raise ConfigError(
"Cannot reuse ID across application services: "
"%s (files: %s, %s)" % (
appservice.id, config_file, seen_ids[appservice.id],
)
)
seen_ids[appservice.id] = config_file
if appservice.token in seen_as_tokens:
raise ConfigError(
"Cannot reuse as_token across application services: "
"%s (files: %s, %s)" % (
appservice.token,
config_file,
seen_as_tokens[appservice.token],
)
)
seen_as_tokens[appservice.token] = config_file
logger.info("Loaded application service: %s", appservice)
appservices.append(appservice)
except Exception as e:
logger.error("Failed to load appservice from '%s'", config_file)
logger.exception(e)
raise
return appservices
def _load_appservice(hostname, as_info, config_filename):
required_string_fields = [
"id", "url", "as_token", "hs_token", "sender_localpart"
]
for field in required_string_fields:
if not isinstance(as_info.get(field), basestring):
raise KeyError("Required string field: '%s' (%s)" % (
field, config_filename,
))
localpart = as_info["sender_localpart"]
if urllib.quote(localpart) != localpart:
raise ValueError(
"sender_localpart needs characters which are not URL encoded."
)
user = UserID(localpart, hostname)
user_id = user.to_string()
# namespace checks
if not isinstance(as_info.get("namespaces"), dict):
raise KeyError("Requires 'namespaces' object.")
for ns in ApplicationService.NS_LIST:
# specific namespaces are optional
if ns in as_info["namespaces"]:
# expect a list of dicts with exclusive and regex keys
for regex_obj in as_info["namespaces"][ns]:
if not isinstance(regex_obj, dict):
raise ValueError(
"Expected namespace entry in %s to be an object,"
" but got %s", ns, regex_obj
)
if not isinstance(regex_obj.get("regex"), basestring):
raise ValueError(
"Missing/bad type 'regex' key in %s", regex_obj
)
if not isinstance(regex_obj.get("exclusive"), bool):
raise ValueError(
"Missing/bad type 'exclusive' key in %s", regex_obj
)
return ApplicationService(
token=as_info["as_token"],
url=as_info["url"],
namespaces=as_info["namespaces"],
hs_token=as_info["hs_token"],
sender=user_id,
id=as_info["id"],
)
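A minimal registration that would pass the checks above (every value is hypothetical): all five required string fields, a URL-safe sender_localpart, and a namespaces dict whose entries carry a regex string and an exclusive bool.
as_info = {
    "id": "bridge1",
    "url": "https://bridge.example.com",
    "as_token": "as_secret",
    "hs_token": "hs_secret",
    "sender_localpart": "bridgebot",
    "namespaces": {
        "users": [{"regex": "@bridge_.*", "exclusive": True}],
    },
}
service = _load_appservice("example.com", as_info, "bridge1.yaml")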

synapse/config/captcha.py

@@ -27,6 +27,7 @@ class CaptchaConfig(Config):
def default_config(self, **kwargs):
return """\
## Captcha ##
# See docs/CAPTCHA_SETUP for full details of configuring this.
# This Home Server's ReCAPTCHA public key.
recaptcha_public_key: "YOUR_PUBLIC_KEY"

synapse/config/emailconfig.py Normal file

@@ -0,0 +1,98 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file can't be called email.py because if it is, we cannot:
import email.utils
from ._base import Config
class EmailConfig(Config):
def read_config(self, config):
self.email_enable_notifs = False
email_config = config.get("email", {})
self.email_enable_notifs = email_config.get("enable_notifs", False)
if self.email_enable_notifs:
# make sure we can import the required deps
import jinja2
import bleach
# prevent unused warnings
jinja2
bleach
required = [
"smtp_host",
"smtp_port",
"notif_from",
"template_dir",
"notif_template_html",
"notif_template_text",
]
missing = []
for k in required:
if k not in email_config:
missing.append(k)
if (len(missing) > 0):
raise RuntimeError(
"email.enable_notifs is True but required keys are missing: %s" %
(", ".join(["email." + k for k in missing]),)
)
if config.get("public_baseurl") is None:
raise RuntimeError(
"email.enable_notifs is True but no public_baseurl is set"
)
self.email_smtp_host = email_config["smtp_host"]
self.email_smtp_port = email_config["smtp_port"]
self.email_notif_from = email_config["notif_from"]
self.email_template_dir = email_config["template_dir"]
self.email_notif_template_html = email_config["notif_template_html"]
self.email_notif_template_text = email_config["notif_template_text"]
self.email_notif_for_new_users = email_config.get(
"notif_for_new_users", True
)
if "app_name" in email_config:
self.email_app_name = email_config["app_name"]
else:
self.email_app_name = "Matrix"
# make sure it's valid
parsed = email.utils.parseaddr(self.email_notif_from)
if parsed[1] == '':
raise RuntimeError("Invalid notif_from address")
else:
self.email_enable_notifs = False
# Not much point setting defaults for the rest: it would be an
# error for them to be used.
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Enable sending emails for notification events
#email:
# enable_notifs: false
# smtp_host: "localhost"
# smtp_port: 25
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
# template_dir: res/templates
# notif_template_html: notif_mail.html
# notif_template_text: notif_mail.txt
# notif_for_new_users: True
"""

synapse/config/homeserver.py

@@ -29,13 +29,18 @@ from .key import KeyConfig
from .saml2 import SAML2Config
from .cas import CasConfig
from .password import PasswordConfig
from .jwt import JWTConfig
from .ldap import LDAPConfig
from .emailconfig import EmailConfig
from .workers import WorkerConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
RatelimitConfig, ContentRepositoryConfig, CaptchaConfig,
VoipConfig, RegistrationConfig, MetricsConfig, ApiConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
PasswordConfig,):
JWTConfig, LDAPConfig, PasswordConfig, EmailConfig,
WorkerConfig,):
pass

synapse/config/jwt.py Normal file

@@ -0,0 +1,54 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Niklas Riekenbrauck
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
MISSING_JWT = (
"""Missing jwt library. This is required for jwt login.
Install by running:
pip install pyjwt
"""
)
class JWTConfig(Config):
def read_config(self, config):
jwt_config = config.get("jwt_config", None)
if jwt_config:
self.jwt_enabled = jwt_config.get("enabled", False)
self.jwt_secret = jwt_config["secret"]
self.jwt_algorithm = jwt_config["algorithm"]
try:
import jwt
jwt # To stop unused lint.
except ImportError:
raise ConfigError(MISSING_JWT)
else:
self.jwt_enabled = False
self.jwt_secret = None
self.jwt_algorithm = None
def default_config(self, **kwargs):
return """\
# The JWT needs to contain a globally unique "sub" (subject) claim.
#
# jwt_config:
# enabled: true
# secret: "a secret"
# algorithm: "HS256"
"""

synapse/config/key.py

@@ -57,6 +57,8 @@ class KeyConfig(Config):
seed = self.signing_key[0].seed
self.macaroon_secret_key = hashlib.sha256(seed)
self.expire_access_token = config.get("expire_access_token", False)
def default_config(self, config_dir_path, server_name, is_generating_file=False,
**kwargs):
base_key_name = os.path.join(config_dir_path, server_name)
@@ -69,6 +71,9 @@ class KeyConfig(Config):
return """\
macaroon_secret_key: "%(macaroon_secret_key)s"
# Used to enable access token expiration.
expire_access_token: False
## Signing Keys ##
# Path to the signing key to sign messages with

synapse/config/ldap.py Normal file

@@ -0,0 +1,100 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Niklas Riekenbrauck
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
MISSING_LDAP3 = (
"Missing ldap3 library. This is required for LDAP Authentication."
)
class LDAPMode(object):
SIMPLE = "simple",
SEARCH = "search",
LIST = (SIMPLE, SEARCH)
class LDAPConfig(Config):
def read_config(self, config):
ldap_config = config.get("ldap_config", {})
self.ldap_enabled = ldap_config.get("enabled", False)
if self.ldap_enabled:
# verify dependencies are available
try:
import ldap3
ldap3 # to stop unused lint
except ImportError:
raise ConfigError(MISSING_LDAP3)
self.ldap_mode = LDAPMode.SIMPLE
# verify config sanity
self.require_keys(ldap_config, [
"uri",
"base",
"attributes",
])
self.ldap_uri = ldap_config["uri"]
self.ldap_start_tls = ldap_config.get("start_tls", False)
self.ldap_base = ldap_config["base"]
self.ldap_attributes = ldap_config["attributes"]
if "bind_dn" in ldap_config:
self.ldap_mode = LDAPMode.SEARCH
self.require_keys(ldap_config, [
"bind_dn",
"bind_password",
])
self.ldap_bind_dn = ldap_config["bind_dn"]
self.ldap_bind_password = ldap_config["bind_password"]
self.ldap_filter = ldap_config.get("filter", None)
# verify attribute lookup
self.require_keys(ldap_config['attributes'], [
"uid",
"name",
"mail",
])
def require_keys(self, config, required):
missing = [key for key in required if key not in config]
if missing:
raise ConfigError(
"LDAP enabled but missing required config values: {}".format(
", ".join(missing)
)
)
def default_config(self, **kwargs):
return """\
# ldap_config:
# enabled: true
# uri: "ldap://ldap.example.com:389"
# start_tls: true
# base: "ou=users,dc=example,dc=com"
# attributes:
# uid: "cn"
# mail: "email"
# name: "givenName"
# #bind_dn:
# #bind_password:
# #filter: "(objectClass=posixAccount)"
"""

synapse/config/logger.py

@@ -126,54 +126,58 @@ class LoggingConfig(Config):
)
def setup_logging(self):
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if self.log_config is None:
setup_logging(self.log_config, self.log_file, self.verbosity)
level = logging.INFO
level_for_storage = logging.INFO
if self.verbosity:
level = logging.DEBUG
if self.verbosity > 1:
level_for_storage = logging.DEBUG
# FIXME: we need a logging.WARN for a -q quiet option
logger = logging.getLogger('')
logger.setLevel(level)
def setup_logging(log_config=None, log_file=None, verbosity=None):
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if log_config is None:
logging.getLogger('synapse.storage').setLevel(level_for_storage)
level = logging.INFO
level_for_storage = logging.INFO
if verbosity:
level = logging.DEBUG
if verbosity > 1:
level_for_storage = logging.DEBUG
formatter = logging.Formatter(log_format)
if self.log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
self.log_file, maxBytes=(1000 * 1000 * 100), backupCount=3
)
# FIXME: we need a logging.WARN for a -q quiet option
logger = logging.getLogger('')
logger.setLevel(level)
def sighup(signum, stack):
logger.info("Closing log file due to SIGHUP")
handler.doRollover()
logger.info("Opened new log file due to SIGHUP")
logging.getLogger('synapse.storage').setLevel(level_for_storage)
# TODO(paul): obviously this is a terrible mechanism for
# stealing SIGHUP, because it means no other part of synapse
# can use it instead. If we want to catch SIGHUP anywhere
# else as well, I'd suggest we find a nicer way to broadcast
# it around.
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
else:
handler = logging.StreamHandler()
handler.setFormatter(formatter)
formatter = logging.Formatter(log_format)
if log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3
)
handler.addFilter(LoggingContextFilter(request=""))
def sighup(signum, stack):
logger.info("Closing log file due to SIGHUP")
handler.doRollover()
logger.info("Opened new log file due to SIGHUP")
logger.addHandler(handler)
# TODO(paul): obviously this is a terrible mechanism for
# stealing SIGHUP, because it means no other part of synapse
# can use it instead. If we want to catch SIGHUP anywhere
# else as well, I'd suggest we find a nicer way to broadcast
# it around.
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
else:
with open(self.log_config, 'r') as f:
logging.config.dictConfig(yaml.load(f))
handler = logging.StreamHandler()
handler.setFormatter(formatter)
observer = PythonLoggingObserver()
observer.start()
handler.addFilter(LoggingContextFilter(request=""))
logger.addHandler(handler)
else:
with open(log_config, 'r') as f:
logging.config.dictConfig(yaml.load(f))
observer = PythonLoggingObserver()
observer.start()

synapse/config/password.py

@@ -23,10 +23,14 @@ class PasswordConfig(Config):
def read_config(self, config):
password_config = config.get("password_config", {})
self.password_enabled = password_config.get("enabled", True)
self.password_pepper = password_config.get("pepper", "")
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Enable password for login.
password_config:
enabled: true
# Uncomment and change to a secret random string for extra security.
# DO NOT CHANGE THIS AFTER INITIAL SETUP!
#pepper: ""
"""

synapse/config/registration.py

@@ -32,6 +32,7 @@ class RegistrationConfig(Config):
)
self.registration_shared_secret = config.get("registration_shared_secret")
self.user_creation_max_duration = int(config["user_creation_max_duration"])
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
self.trusted_third_party_id_servers = config["trusted_third_party_id_servers"]
@@ -54,6 +55,11 @@ class RegistrationConfig(Config):
# secret, even if registration is otherwise disabled.
registration_shared_secret: "%(registration_shared_secret)s"
# Sets the expiry for the short term user creation in
# milliseconds. For instance the below duration is two weeks
# in milliseconds.
user_creation_max_duration: 1209600000
# Set the number of bcrypt rounds used to generate password hash.
# Larger numbers increase the work factor needed to generate the hash.
# The default number of rounds is 12.

synapse/config/repository.py

@@ -13,9 +13,25 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from ._base import Config, ConfigError
from collections import namedtuple
MISSING_NETADDR = (
"Missing netaddr library. This is required for URL preview API."
)
MISSING_LXML = (
"""Missing lxml library. This is required for URL preview API.
Install by running:
pip install lxml
Requires libxslt1-dev system package.
"""
)
ThumbnailRequirement = namedtuple(
"ThumbnailRequirement", ["width", "height", "method", "media_type"]
)
@@ -23,7 +39,7 @@ ThumbnailRequirement = namedtuple(
def parse_thumbnail_requirements(thumbnail_sizes):
""" Takes a list of dictionaries with "width", "height", and "method" keys
and creates a map from image media types to the thumbnail size, thumnailing
and creates a map from image media types to the thumbnail size, thumbnailing
method, and thumbnail media type to precalculate
Args:
@@ -53,12 +69,44 @@ class ContentRepositoryConfig(Config):
def read_config(self, config):
self.max_upload_size = self.parse_size(config["max_upload_size"])
self.max_image_pixels = self.parse_size(config["max_image_pixels"])
self.max_spider_size = self.parse_size(config["max_spider_size"])
self.media_store_path = self.ensure_directory(config["media_store_path"])
self.uploads_path = self.ensure_directory(config["uploads_path"])
self.dynamic_thumbnails = config["dynamic_thumbnails"]
self.thumbnail_requirements = parse_thumbnail_requirements(
config["thumbnail_sizes"]
)
self.url_preview_enabled = config.get("url_preview_enabled", False)
if self.url_preview_enabled:
try:
import lxml
lxml # To stop unused lint.
except ImportError:
raise ConfigError(MISSING_LXML)
try:
from netaddr import IPSet
except ImportError:
raise ConfigError(MISSING_NETADDR)
if "url_preview_ip_range_blacklist" in config:
self.url_preview_ip_range_blacklist = IPSet(
config["url_preview_ip_range_blacklist"]
)
else:
raise ConfigError(
"For security, you must specify an explicit target IP address "
"blacklist in url_preview_ip_range_blacklist for url previewing "
"to work"
)
self.url_preview_ip_range_whitelist = IPSet(
config.get("url_preview_ip_range_whitelist", ())
)
self.url_preview_url_blacklist = config.get(
"url_preview_url_blacklist", ()
)
def default_config(self, **kwargs):
media_store = self.default_path("media_store")
@@ -80,7 +128,7 @@ class ContentRepositoryConfig(Config):
# the resolution requested by the client. If true then whenever
# a new resolution is requested by the client the server will
# generate a new thumbnail. If false the server will pick a thumbnail
# from a precalcualted list.
# from a precalculated list.
dynamic_thumbnails: false
# List of thumbnail to precalculate when an image is uploaded.
@@ -100,4 +148,71 @@ class ContentRepositoryConfig(Config):
- width: 800
height: 600
method: scale
# Is the preview URL API enabled? If enabled, you *must* specify
# an explicit url_preview_ip_range_blacklist of IPs that the spider is
# denied from accessing.
url_preview_enabled: False
# List of IP address CIDR ranges that the URL preview spider is denied
# from accessing. There are no defaults: you must explicitly
# specify a list for URL previewing to work. You should specify any
# internal services in your network that you do not want synapse to try
# to connect to, otherwise anyone in any Matrix room could cause your
# synapse to issue arbitrary GET requests to your internal services,
# causing serious security issues.
#
# url_preview_ip_range_blacklist:
# - '127.0.0.0/8'
# - '10.0.0.0/8'
# - '172.16.0.0/12'
# - '192.168.0.0/16'
#
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.
# This is useful for specifying exceptions to wide-ranging blacklisted
# target IP ranges - e.g. for enabling URL previews for a specific private
# website only visible in your network.
#
# url_preview_ip_range_whitelist:
# - '192.168.1.1'
# Optional list of URL matches that the URL preview spider is
# denied from accessing. You should use url_preview_ip_range_blacklist
# in preference to this, otherwise someone could define a public DNS
# entry that points to a private IP address and circumvent the blacklist.
# This is more useful when you know of an entire shape of URL that
# you will never want synapse to try to spider.
#
# Each list entry is a dictionary of url component attributes as returned
# by urlparse.urlsplit as applied to the absolute form of the URL. See
# https://docs.python.org/2/library/urlparse.html#urlparse.urlsplit
# The values of the dictionary are treated as a filename match pattern
# applied to that component of URLs, unless they start with a ^ in which
# case they are treated as a regular expression match. If all the
# specified component matches for a given list item succeed, the URL is
# blacklisted.
#
# url_preview_url_blacklist:
# # blacklist any URL with a username in its URI
# - username: '*'
#
# # blacklist all *.google.com URLs
# - netloc: 'google.com'
# - netloc: '*.google.com'
#
# # blacklist all plain HTTP URLs
# - scheme: 'http'
#
# # blacklist http(s)://www.acme.com/foo
# - netloc: 'www.acme.com'
# path: '/foo'
#
# # blacklist any URL with a literal IPv4 address
# - netloc: '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
# The largest allowed URL preview spidering size in bytes
max_spider_size: "10M"
""" % locals()


@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from ._base import Config, ConfigError
class ServerConfig(Config):
@@ -27,10 +27,19 @@ class ServerConfig(Config):
self.daemonize = config.get("daemonize")
self.print_pidfile = config.get("print_pidfile")
self.user_agent_suffix = config.get("user_agent_suffix")
self.use_frozen_dicts = config.get("use_frozen_dicts", True)
self.use_frozen_dicts = config.get("use_frozen_dicts", False)
self.public_baseurl = config.get("public_baseurl")
self.secondary_directory_servers = config.get("secondary_directory_servers", [])
if self.public_baseurl is not None:
if self.public_baseurl[-1] != '/':
self.public_baseurl += '/'
self.start_pushers = config.get("start_pushers", True)
self.listeners = config.get("listeners", [])
self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
bind_port = config.get("bind_port")
if bind_port:
self.listeners = []
@@ -98,26 +107,6 @@ class ServerConfig(Config):
]
})
# Attempt to guess the content_addr for the v0 content repository
content_addr = config.get("content_addr")
if not content_addr:
for listener in self.listeners:
if listener["type"] == "http" and not listener.get("tls", False):
unsecure_port = listener["port"]
break
else:
raise RuntimeError("Could not determine 'content_addr'")
host = self.server_name
if ':' not in host:
host = "%s:%d" % (host, unsecure_port)
else:
host = host.split(':')[0]
host = "%s:%d" % (host, unsecure_port)
content_addr = "http://%s" % (host,)
self.content_addr = content_addr
def default_config(self, server_name, **kwargs):
if ":" in server_name:
bind_port = int(server_name.split(":")[1])
@@ -142,11 +131,25 @@ class ServerConfig(Config):
# Whether to serve a web client from the HTTP/HTTPS root resource.
web_client: True
# The public-facing base URL for the client API (not including _matrix/...)
# public_baseurl: https://example.com:8448/
# Set the soft limit on the number of file descriptors synapse can use
# Zero is used to indicate synapse should set the soft limit to the
# hard limit.
soft_file_limit: 0
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
# gc_thresholds: [700, 10, 10]
# A list of other Home Servers to fetch the public room directory from
# and include in the public room directory of this home server
# This is a temporary stopgap solution to populate a new server with a
# list of rooms until there exists a good solution for a decentralized
# room directory.
# secondary_directory_servers:
# - matrix.org
# List of ports that Synapse should listen on, their purpose and their
# configuration.
listeners:
@@ -228,3 +231,20 @@ class ServerConfig(Config):
type=int,
help="Turn on the twisted telnet manhole"
" service on the given port.")
def read_gc_thresholds(thresholds):
"""Reads the three integer thresholds for garbage collection. Ensures that
the thresholds are integers if thresholds are supplied.
"""
if thresholds is None:
return None
try:
assert len(thresholds) == 3
return (
int(thresholds[0]), int(thresholds[1]), int(thresholds[2]),
)
except Exception:
raise ConfigError(
"Value of `gc_thresholds` must be a list of three integers if set"
)
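A minimal usage sketch of the parsed value:

import gc

thresholds = read_gc_thresholds([700, 10, 10])
if thresholds is not None:
    # Apply the tuple to the interpreter's garbage collector.
    gc.set_threshold(*thresholds)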

synapse/config/workers.py Normal file

@@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
# Copyright 2016 matrix.org
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class WorkerConfig(Config):
"""The workers are processes run separately to the main synapse process.
They have their own pid_file and listener configuration. They use the
replication_url to talk to the main synapse process."""
def read_config(self, config):
self.worker_app = config.get("worker_app")
self.worker_listeners = config.get("worker_listeners")
self.worker_daemonize = config.get("worker_daemonize")
self.worker_pid_file = config.get("worker_pid_file")
self.worker_log_file = config.get("worker_log_file")
self.worker_log_config = config.get("worker_log_config")
self.worker_replication_url = config.get("worker_replication_url")
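A sketch of the dict read_config expects, with hypothetical values (in practice this comes from the worker's YAML config file):

worker_config = {
    "worker_app": "synapse.app.synchrotron",  # hypothetical worker module
    "worker_listeners": [{"type": "http", "port": 8083}],
    "worker_daemonize": True,
    "worker_pid_file": "/var/run/synapse-worker.pid",
    "worker_log_file": "/var/log/synapse-worker.log",
    "worker_log_config": None,
    "worker_replication_url": "http://127.0.0.1:9092/_synapse/replication",
}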


@@ -31,7 +31,10 @@ class _EventInternalMetadata(object):
return dict(self.__dict__)
def is_outlier(self):
return hasattr(self, "outlier") and self.outlier
return getattr(self, "outlier", False)
def is_invite_from_remote(self):
return getattr(self, "invite_from_remote", False)
def _event_dict_property(key):


@@ -31,6 +31,9 @@ logger = logging.getLogger(__name__)
class FederationBase(object):
def __init__(self, hs):
pass
@defer.inlineCallbacks
def _check_sigs_and_hash_and_fetch(self, origin, pdus, outlier=False,
include_none=False):


@@ -24,6 +24,7 @@ from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError,
)
from synapse.util import unwrapFirstError
from synapse.util.async import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logutils import log_function
from synapse.events import FrozenEvent
@@ -51,6 +52,8 @@ sent_queries_counter = metrics.register_counter("sent_queries", labels=["type"])
class FederationClient(FederationBase):
def __init__(self, hs):
super(FederationClient, self).__init__(hs)
def start_get_pdu_cache(self):
self._get_pdu_cache = ExpiringCache(
@@ -550,6 +553,25 @@ class FederationClient(FederationBase):
raise RuntimeError("Failed to send to any server.")
@defer.inlineCallbacks
def get_public_rooms(self, destinations):
results_by_server = {}
@defer.inlineCallbacks
def _get_result(s):
if s == self.server_name:
defer.returnValue(None)
try:
result = yield self.transport_layer.get_public_rooms(s)
results_by_server[s] = result
except:
logger.exception("Error getting room list from server %r", s)
yield concurrently_execute(_get_result, destinations, 3)
defer.returnValue(results_by_server)
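A rough sketch of the concurrency pattern relied on here, assuming (from the call above) that concurrently_execute(func, args, limit) runs func over args with at most limit calls in flight:

from twisted.internet import defer

@defer.inlineCallbacks
def concurrently_execute_sketch(func, args, limit):
    it = iter(args)

    @defer.inlineCallbacks
    def _worker():
        for arg in it:  # the workers share a single iterator
            yield func(arg)

    yield defer.gatherResults(
        [_worker() for _ in range(limit)], consumeErrors=True
    )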
@defer.inlineCallbacks
def query_auth(self, destination, room_id, event_id, local_auth):
"""


@@ -19,6 +19,7 @@ from twisted.internet import defer
from .federation_base import FederationBase
from .units import Transaction, Edu
from synapse.util.async import Linearizer
from synapse.util.logutils import log_function
from synapse.events import FrozenEvent
import synapse.metrics
@@ -44,6 +45,12 @@ received_queries_counter = metrics.register_counter("received_queries", labels=[
class FederationServer(FederationBase):
def __init__(self, hs):
super(FederationServer, self).__init__(hs)
self._room_pdu_linearizer = Linearizer()
self._server_linearizer = Linearizer()
def set_handler(self, handler):
"""Sets the handler that the replication layer will use to communicate
receipt of new PDUs from other home servers. The required methods are
@@ -83,11 +90,14 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
@log_function
def on_backfill_request(self, origin, room_id, versions, limit):
pdus = yield self.handler.on_backfill_request(
origin, room_id, versions, limit
)
with (yield self._server_linearizer.queue((origin, room_id))):
pdus = yield self.handler.on_backfill_request(
origin, room_id, versions, limit
)
defer.returnValue((200, self._transaction_from_pdus(pdus).get_dict()))
res = self._transaction_from_pdus(pdus).get_dict()
defer.returnValue((200, res))
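The Linearizer introduced above serialises handling per key; a sketch of the pattern, assuming queue(key) yields a context manager that admits one caller at a time (_do_work is a hypothetical stand-in):

@defer.inlineCallbacks
def handle_request(self, origin, room_id):
    with (yield self._server_linearizer.queue((origin, room_id))):
        # Only one request for this (origin, room_id) runs here at a time.
        result = yield self._do_work(origin, room_id)
    defer.returnValue(result)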
@defer.inlineCallbacks
@log_function
@@ -178,24 +188,28 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
@log_function
def on_context_state_request(self, origin, room_id, event_id):
if event_id:
pdus = yield self.handler.get_state_for_pdu(
origin, room_id, event_id,
)
auth_chain = yield self.store.get_auth_chain(
[pdu.event_id for pdu in pdus]
)
for event in auth_chain:
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
)
with (yield self._server_linearizer.queue((origin, room_id))):
if event_id:
pdus = yield self.handler.get_state_for_pdu(
origin, room_id, event_id,
)
else:
raise NotImplementedError("Specify an event")
auth_chain = yield self.store.get_auth_chain(
[pdu.event_id for pdu in pdus]
)
for event in auth_chain:
# We sign these again because there was a bug where we
# incorrectly signed things the first time round
if self.hs.is_mine_id(event.event_id):
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
)
)
else:
raise NotImplementedError("Specify an event")
defer.returnValue((200, {
"pdus": [pdu.get_pdu_json() for pdu in pdus],
@@ -274,14 +288,16 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id):
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
defer.returnValue((200, {
"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus],
}))
with (yield self._server_linearizer.queue((origin, room_id))):
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
res = {
"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus],
}
defer.returnValue((200, res))
@defer.inlineCallbacks
def on_query_auth_request(self, origin, content, event_id):
def on_query_auth_request(self, origin, content, room_id, event_id):
"""
Content is a dict with keys::
auth_chain (list): A list of events that give the auth chain.
@@ -300,32 +316,33 @@ class FederationServer(FederationBase):
Returns:
Deferred: Results in `dict` with the same format as `content`
"""
auth_chain = [
self.event_from_pdu_json(e)
for e in content["auth_chain"]
]
with (yield self._server_linearizer.queue((origin, room_id))):
auth_chain = [
self.event_from_pdu_json(e)
for e in content["auth_chain"]
]
signed_auth = yield self._check_sigs_and_hash_and_fetch(
origin, auth_chain, outlier=True
)
signed_auth = yield self._check_sigs_and_hash_and_fetch(
origin, auth_chain, outlier=True
)
ret = yield self.handler.on_query_auth(
origin,
event_id,
signed_auth,
content.get("rejects", []),
content.get("missing", []),
)
ret = yield self.handler.on_query_auth(
origin,
event_id,
signed_auth,
content.get("rejects", []),
content.get("missing", []),
)
time_now = self._clock.time_msec()
send_content = {
"auth_chain": [
e.get_pdu_json(time_now)
for e in ret["auth_chain"]
],
"rejects": ret.get("rejects", []),
"missing": ret.get("missing", []),
}
time_now = self._clock.time_msec()
send_content = {
"auth_chain": [
e.get_pdu_json(time_now)
for e in ret["auth_chain"]
],
"rejects": ret.get("rejects", []),
"missing": ret.get("missing", []),
}
defer.returnValue(
(200, send_content)
@@ -377,16 +394,34 @@ class FederationServer(FederationBase):
@log_function
def on_get_missing_events(self, origin, room_id, earliest_events,
latest_events, limit, min_depth):
missing_events = yield self.handler.on_get_missing_events(
origin, room_id, earliest_events, latest_events, limit, min_depth
)
with (yield self._server_linearizer.queue((origin, room_id))):
logger.info(
"on_get_missing_events: earliest_events: %r, latest_events: %r,"
" limit: %d, min_depth: %d",
earliest_events, latest_events, limit, min_depth
)
missing_events = yield self.handler.on_get_missing_events(
origin, room_id, earliest_events, latest_events, limit, min_depth
)
time_now = self._clock.time_msec()
if len(missing_events) < 5:
logger.info(
"Returning %d events: %r", len(missing_events), missing_events
)
else:
logger.info("Returning %d events", len(missing_events))
time_now = self._clock.time_msec()
defer.returnValue({
"events": [ev.get_pdu_json(time_now) for ev in missing_events],
})
@log_function
def on_openid_userinfo(self, token):
ts_now_ms = self._clock.time_msec()
return self.store.get_user_id_for_open_id_token(token, ts_now_ms)
@log_function
def _get_persisted_pdu(self, origin, event_id, do_auth=True):
""" Get a PDU from the database with given origin and id.
@@ -476,42 +511,59 @@ class FederationServer(FederationBase):
pdu.internal_metadata.outlier = True
elif min_depth and pdu.depth > min_depth:
if get_missing and prevs - seen:
latest = yield self.store.get_latest_event_ids_in_room(
pdu.room_id
)
# If we're missing stuff, ensure we only fetch stuff one
# at a time.
with (yield self._room_pdu_linearizer.queue(pdu.room_id)):
# We recalculate seen, since it may have changed.
have_seen = yield self.store.have_events(prevs)
seen = set(have_seen.keys())
# We add the prev events that we have seen to the latest
# list to ensure the remote server doesn't give them to us
latest = set(latest)
latest |= seen
if prevs - seen:
latest = yield self.store.get_latest_event_ids_in_room(
pdu.room_id
)
missing_events = yield self.get_missing_events(
origin,
pdu.room_id,
earliest_events_ids=list(latest),
latest_events=[pdu],
limit=10,
min_depth=min_depth,
)
# We add the prev events that we have seen to the latest
# list to ensure the remote server doesn't give them to us
latest = set(latest)
latest |= seen
# We want to sort these by depth so we process them and
# tell clients about them in order.
missing_events.sort(key=lambda x: x.depth)
logger.info(
"Missing %d events for room %r: %r...",
len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
)
for e in missing_events:
yield self._handle_new_pdu(
origin,
e,
get_missing=False
)
missing_events = yield self.get_missing_events(
origin,
pdu.room_id,
earliest_events_ids=list(latest),
latest_events=[pdu],
limit=10,
min_depth=min_depth,
)
have_seen = yield self.store.have_events(
[ev for ev, _ in pdu.prev_events]
)
# We want to sort these by depth so we process them and
# tell clients about them in order.
missing_events.sort(key=lambda x: x.depth)
for e in missing_events:
yield self._handle_new_pdu(
origin,
e,
get_missing=False
)
have_seen = yield self.store.have_events(
[ev for ev, _ in pdu.prev_events]
)
prevs = {e_id for e_id, _ in pdu.prev_events}
seen = set(have_seen.keys())
if prevs - seen:
logger.info(
"Still missing %d events for room %r: %r...",
len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
)
fetch_state = True
if fetch_state:


@@ -72,5 +72,7 @@ class ReplicationLayer(FederationClient, FederationServer):
self.hs = hs
super(ReplicationLayer, self).__init__(hs)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name


@@ -20,6 +20,7 @@ from .persistence import TransactionActions
from .units import Transaction
from synapse.api.errors import HttpResponseException
from synapse.util.async import run_on_reactor
from synapse.util.logutils import log_function
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.retryutils import (
@@ -199,6 +200,8 @@ class TransactionQueue(object):
@defer.inlineCallbacks
@log_function
def _attempt_new_transaction(self, destination):
yield run_on_reactor()
# list of (pending_pdu, deferred, order)
if destination in self.pending_transactions:
# XXX: pending_transactions can get stuck on by a never-ending


@@ -179,7 +179,8 @@ class TransportLayerClient(object):
content = yield self.client.get_json(
destination=destination,
path=path,
retry_on_dns_fail=True,
retry_on_dns_fail=False,
timeout=20000,
)
defer.returnValue(content)
@@ -223,6 +224,18 @@ class TransportLayerClient(object):
defer.returnValue(response)
@defer.inlineCallbacks
@log_function
def get_public_rooms(self, remote_server):
path = PREFIX + "/publicRooms"
response = yield self.client.get_json(
destination=remote_server,
path=path,
)
defer.returnValue(response)
@defer.inlineCallbacks
@log_function
def exchange_third_party_invite(self, destination, room_id, event_dict):


@@ -18,7 +18,7 @@ from twisted.internet import defer
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.api.errors import Codes, SynapseError
from synapse.http.server import JsonResource
from synapse.http.servlet import parse_json_object_from_request
from synapse.http.servlet import parse_json_object_from_request, parse_string
from synapse.util.ratelimitutils import FederationRateLimiter
import functools
@@ -37,7 +37,7 @@ class TransportLayerServer(JsonResource):
self.hs = hs
self.clock = hs.get_clock()
super(TransportLayerServer, self).__init__(hs)
super(TransportLayerServer, self).__init__(hs, canonical_json=False)
self.authenticator = Authenticator(hs)
self.ratelimiter = FederationRateLimiter(
@@ -134,10 +134,12 @@ class Authenticator(object):
class BaseFederationServlet(object):
def __init__(self, handler, authenticator, ratelimiter, server_name):
def __init__(self, handler, authenticator, ratelimiter, server_name,
room_list_handler):
self.handler = handler
self.authenticator = authenticator
self.ratelimiter = ratelimiter
self.room_list_handler = room_list_handler
def _wrap(self, code):
authenticator = self.authenticator
@@ -323,7 +325,7 @@ class FederationSendLeaveServlet(BaseFederationServlet):
class FederationEventAuthServlet(BaseFederationServlet):
PATH = "/event_auth(?P<context>[^/]*)/(?P<event_id>[^/]*)"
PATH = "/event_auth/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
def on_GET(self, origin, content, query, context, event_id):
return self.handler.on_event_auth(origin, context, event_id)
@@ -386,7 +388,7 @@ class FederationQueryAuthServlet(BaseFederationServlet):
@defer.inlineCallbacks
def on_POST(self, origin, content, query, context, event_id):
new_content = yield self.handler.on_query_auth_request(
origin, content, event_id
origin, content, context, event_id
)
defer.returnValue((200, new_content))
@@ -448,6 +450,89 @@ class On3pidBindServlet(BaseFederationServlet):
return code
class OpenIdUserInfo(BaseFederationServlet):
"""
Exchange a bearer token for information about a user.
The response format should be compatible with:
http://openid.net/specs/openid-connect-core-1_0.html#UserInfoResponse
GET /openid/userinfo?access_token=ABDEFGH HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
{
"sub": "@userpart:example.org",
}
"""
PATH = "/openid/userinfo"
@defer.inlineCallbacks
def on_GET(self, request):
token = parse_string(request, "access_token")
if token is None:
defer.returnValue((401, {
"errcode": "M_MISSING_TOKEN", "error": "Access Token required"
}))
return
user_id = yield self.handler.on_openid_userinfo(token)
if user_id is None:
defer.returnValue((401, {
"errcode": "M_UNKNOWN_TOKEN",
"error": "Access Token unknown or expired"
}))
defer.returnValue((200, {"sub": user_id}))
# Avoid doing remote HS authorization checks which are done by default by
# BaseFederationServlet.
def _wrap(self, code):
return code
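For illustration, a sketch of exercising this endpoint under the federation prefix (hostname and token hypothetical):

import requests

resp = requests.get(
    "https://matrix.example.org:8448/_matrix/federation/v1/openid/userinfo",
    params={"access_token": "ABDEFGH"},
)
# 200 -> {"sub": "@userpart:example.org"}
# 401 -> {"errcode": "M_MISSING_TOKEN"} or {"errcode": "M_UNKNOWN_TOKEN"}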
class PublicRoomList(BaseFederationServlet):
"""
Fetch the public room list for this server.
This API returns information in the same format as /publicRooms on the
client API, but will only ever include local public rooms and hence is
intended for consumption by other home servers.
GET /publicRooms HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
{
"chunk": [
{
"aliases": [
"#test:localhost"
],
"guest_can_join": false,
"name": "test room",
"num_joined_members": 3,
"room_id": "!whkydVegtvatLfXmPN:localhost",
"world_readable": false
}
],
"end": "END",
"start": "START"
}
"""
PATH = "/publicRooms"
@defer.inlineCallbacks
def on_GET(self, origin, content, query):
data = yield self.room_list_handler.get_local_public_room_list()
defer.returnValue((200, data))
SERVLET_CLASSES = (
FederationSendServlet,
FederationPullServlet,
@@ -468,6 +553,8 @@ SERVLET_CLASSES = (
FederationClientKeysClaimServlet,
FederationThirdPartyInviteExchangeServlet,
On3pidBindServlet,
OpenIdUserInfo,
PublicRoomList,
)
@@ -478,4 +565,5 @@ def register_servlets(hs, resource, authenticator, ratelimiter):
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
room_list_handler=hs.get_room_list_handler(),
).register(resource)


@@ -13,23 +13,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.appservice.scheduler import AppServiceScheduler
from synapse.appservice.api import ApplicationServiceApi
from .register import RegistrationHandler
from .room import (
RoomCreationHandler, RoomMemberHandler, RoomListHandler, RoomContextHandler,
RoomCreationHandler, RoomContextHandler,
)
from .room_member import RoomMemberHandler
from .message import MessageHandler
from .events import EventStreamHandler, EventHandler
from .federation import FederationHandler
from .profile import ProfileHandler
from .presence import PresenceHandler
from .directory import DirectoryHandler
from .typing import TypingNotificationHandler
from .admin import AdminHandler
from .appservice import ApplicationServicesHandler
from .sync import SyncHandler
from .auth import AuthHandler
from .identity import IdentityHandler
from .receipts import ReceiptsHandler
from .search import SearchHandler
@@ -52,22 +46,9 @@ class Handlers(object):
self.event_handler = EventHandler(hs)
self.federation_handler = FederationHandler(hs)
self.profile_handler = ProfileHandler(hs)
self.presence_handler = PresenceHandler(hs)
self.room_list_handler = RoomListHandler(hs)
self.directory_handler = DirectoryHandler(hs)
self.typing_notification_handler = TypingNotificationHandler(hs)
self.admin_handler = AdminHandler(hs)
self.receipts_handler = ReceiptsHandler(hs)
asapi = ApplicationServiceApi(hs)
self.appservice_handler = ApplicationServicesHandler(
hs, asapi, AppServiceScheduler(
clock=hs.get_clock(),
store=hs.get_datastore(),
as_api=asapi
)
)
self.sync_handler = SyncHandler(hs)
self.auth_handler = AuthHandler(hs)
self.identity_handler = IdentityHandler(hs)
self.search_handler = SearchHandler(hs)
self.room_context_handler = RoomContextHandler(hs)


@@ -15,13 +15,10 @@
from twisted.internet import defer
from synapse.api.errors import LimitExceededError, SynapseError, AuthError
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.api.errors import LimitExceededError
from synapse.api.constants import Membership, EventTypes
from synapse.types import UserID, RoomAlias, Requester
from synapse.push.action_generator import ActionGenerator
from synapse.types import UserID, Requester
from synapse.util.logcontext import PreserveLoggingContext
import logging
@@ -29,20 +26,13 @@ import logging
logger = logging.getLogger(__name__)
VISIBILITY_PRIORITY = (
"world_readable",
"shared",
"invited",
"joined",
)
class BaseHandler(object):
"""
Common base class for the event handlers.
:type store: synapse.storage.events.StateStore
:type state_handler: synapse.state.StateHandler
Attributes:
store (synapse.storage.events.StateStore):
state_handler (synapse.state.StateHandler):
"""
def __init__(self, hs):
@@ -55,137 +45,10 @@ class BaseHandler(object):
self.clock = hs.get_clock()
self.hs = hs
self.signing_key = hs.config.signing_key[0]
self.server_name = hs.hostname
self.event_builder_factory = hs.get_event_builder_factory()
@defer.inlineCallbacks
def filter_events_for_clients(self, user_tuples, events, event_id_to_state):
""" Returns dict of user_id -> list of events that user is allowed to
see.
:param (str, bool) user_tuples: (user id, is_peeking) for each
user to be checked. is_peeking should be true if:
* the user is not currently a member of the room, and:
* the user has not been a member of the room since the given
events
"""
forgotten = yield defer.gatherResults([
self.store.who_forgot_in_room(
room_id,
)
for room_id in frozenset(e.room_id for e in events)
], consumeErrors=True)
# Set of membership event_ids that have been forgotten
event_id_forgotten = frozenset(
row["event_id"] for rows in forgotten for row in rows
)
def allowed(event, user_id, is_peeking):
state = event_id_to_state[event.event_id]
# get the room_visibility at the time of the event.
visibility_event = state.get((EventTypes.RoomHistoryVisibility, ""), None)
if visibility_event:
visibility = visibility_event.content.get("history_visibility", "shared")
else:
visibility = "shared"
if visibility not in VISIBILITY_PRIORITY:
visibility = "shared"
# if it was world_readable, it's easy: everyone can read it
if visibility == "world_readable":
return True
# Always allow history visibility events on boundaries. This is done
# by setting the effective visibility to the least restrictive
# of the old vs new.
if event.type == EventTypes.RoomHistoryVisibility:
prev_content = event.unsigned.get("prev_content", {})
prev_visibility = prev_content.get("history_visibility", None)
if prev_visibility not in VISIBILITY_PRIORITY:
prev_visibility = "shared"
new_priority = VISIBILITY_PRIORITY.index(visibility)
old_priority = VISIBILITY_PRIORITY.index(prev_visibility)
if old_priority < new_priority:
visibility = prev_visibility
# get the user's membership at the time of the event. (or rather,
# just *after* the event. Which means that people can see their
# own join events, but not (currently) their own leave events.)
membership_event = state.get((EventTypes.Member, user_id), None)
if membership_event:
if membership_event.event_id in event_id_forgotten:
membership = None
else:
membership = membership_event.membership
else:
membership = None
# if the user was a member of the room at the time of the event,
# they can see it.
if membership == Membership.JOIN:
return True
if visibility == "joined":
# we weren't a member at the time of the event, so we can't
# see this event.
return False
elif visibility == "invited":
# user can also see the event if they were *invited* at the time
# of the event.
return membership == Membership.INVITE
else:
# visibility is shared: user can also see the event if they have
# become a member since the event
#
# XXX: if the user has subsequently joined and then left again,
# ideally we would share history up to the point they left. But
# we don't know when they left.
return not is_peeking
defer.returnValue({
user_id: [
event
for event in events
if allowed(event, user_id, is_peeking)
]
for user_id, is_peeking in user_tuples
})
@defer.inlineCallbacks
def _filter_events_for_client(self, user_id, events, is_peeking=False):
"""
Check which events a user is allowed to see
:param str user_id: user id to be checked
:param [synapse.events.EventBase] events: list of events to be checked
:param bool is_peeking: should be True if:
* the user is not currently a member of the room, and:
* the user has not been a member of the room since the given
events
:rtype [synapse.events.EventBase]
"""
types = (
(EventTypes.RoomHistoryVisibility, ""),
(EventTypes.Member, user_id),
)
event_id_to_state = yield self.store.get_state_for_events(
frozenset(e.event_id for e in events),
types=types
)
res = yield self.filter_events_for_clients(
[(user_id, is_peeking)], events, event_id_to_state
)
defer.returnValue(res.get(user_id, []))
def ratelimit(self, requester):
time_now = self.clock.time()
allowed, time_allowed = self.ratelimiter.send_message(
@@ -198,95 +61,6 @@ class BaseHandler(object):
retry_after_ms=int(1000 * (time_allowed - time_now)),
)
@defer.inlineCallbacks
def _create_new_client_event(self, builder):
latest_ret = yield self.store.get_latest_event_ids_and_hashes_in_room(
builder.room_id,
)
if latest_ret:
depth = max([d for _, _, d in latest_ret]) + 1
else:
depth = 1
prev_events = [
(event_id, prev_hashes)
for event_id, prev_hashes, _ in latest_ret
]
builder.prev_events = prev_events
builder.depth = depth
state_handler = self.state_handler
context = yield state_handler.compute_event_context(builder)
# If we've received an invite over federation, there are no latest
# events in the room, because we don't know enough about the graph
# fragment we received to treat it like a graph, so the above returned
# no relevant events. It may have returned some events (if we have
# joined and left the room), but not useful ones, like the invite.
if (
not self.is_host_in_room(context.current_state) and
builder.type == EventTypes.Member
):
prev_member_event = yield self.store.get_room_member(
builder.sender, builder.room_id
)
# The prev_member_event may already be in context.current_state,
# despite us not being present in the room; in particular, if
# inviting user, and all other local users, have already left.
#
# In that case, we have all the information we need, and we don't
# want to drop "context" - not least because we may need to handle
# the invite locally, which will require us to have the whole
# context (not just prev_member_event) to auth it.
#
context_event_ids = (
e.event_id for e in context.current_state.values()
)
if (
prev_member_event and
prev_member_event.event_id not in context_event_ids
):
# The prev_member_event is missing from context, so it must
# have arrived over federation and is an outlier. We forcibly
# set our context to the invite we received over federation
builder.prev_events = (
prev_member_event.event_id,
prev_member_event.prev_events
)
context = yield state_handler.compute_event_context(
builder,
old_state=(prev_member_event,),
outlier=True
)
if builder.is_state():
builder.prev_state = yield self.store.add_event_hashes(
context.prev_state_events
)
yield self.auth.add_auth_events(builder, context)
add_hashes_and_signatures(
builder, self.server_name, self.signing_key
)
event = builder.build()
logger.debug(
"Created event %s with current state: %s",
event.event_id, context.current_state,
)
defer.returnValue(
(event, context,)
)
def is_host_in_room(self, current_state):
room_members = [
(state_key, event.membership)
@@ -301,143 +75,12 @@ class BaseHandler(object):
return True
for (state_key, membership) in room_members:
if (
UserID.from_string(state_key).domain == self.hs.hostname
self.hs.is_mine_id(state_key)
and membership == Membership.JOIN
):
return True
return False
@defer.inlineCallbacks
def handle_new_client_event(
self,
requester,
event,
context,
ratelimit=True,
extra_users=[]
):
# We now need to go and hit out to wherever we need to hit out to.
if ratelimit:
self.ratelimit(requester)
self.auth.check(event, auth_events=context.current_state)
yield self.maybe_kick_guest_users(event, context.current_state.values())
if event.type == EventTypes.CanonicalAlias:
# Check the alias is actually valid (at this time at least)
room_alias_str = event.content.get("alias", None)
if room_alias_str:
room_alias = RoomAlias.from_string(room_alias_str)
directory_handler = self.hs.get_handlers().directory_handler
mapping = yield directory_handler.get_association(room_alias)
if mapping["room_id"] != event.room_id:
raise SynapseError(
400,
"Room alias %s does not point to the room" % (
room_alias_str,
)
)
federation_handler = self.hs.get_handlers().federation_handler
if event.type == EventTypes.Member:
if event.content["membership"] == Membership.INVITE:
def is_inviter_member_event(e):
return (
e.type == EventTypes.Member and
e.sender == event.sender
)
event.unsigned["invite_room_state"] = [
{
"type": e.type,
"state_key": e.state_key,
"content": e.content,
"sender": e.sender,
}
for k, e in context.current_state.items()
if e.type in self.hs.config.room_invite_state_types
or is_inviter_member_event(e)
]
invitee = UserID.from_string(event.state_key)
if not self.hs.is_mine(invitee):
# TODO: Can we add signature from remote server in a nicer
# way? If we have been invited by a remote server, we need
# to get them to sign the event.
returned_invite = yield federation_handler.send_invite(
invitee.domain,
event,
)
event.unsigned.pop("room_state", None)
# TODO: Make sure the signatures actually are correct.
event.signatures.update(
returned_invite.signatures
)
if event.type == EventTypes.Redaction:
if self.auth.check_redaction(event, auth_events=context.current_state):
original_event = yield self.store.get_event(
event.redacts,
check_redacted=False,
get_prev_content=False,
allow_rejected=False,
allow_none=False
)
if event.user_id != original_event.user_id:
raise AuthError(
403,
"You don't have permission to redact events"
)
if event.type == EventTypes.Create and context.current_state:
raise AuthError(
403,
"Changing the room create event is forbidden",
)
action_generator = ActionGenerator(self.hs)
yield action_generator.handle_push_actions_for_event(
event, context, self
)
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event, context=context
)
destinations = set()
for k, s in context.current_state.items():
try:
if k[0] == EventTypes.Member:
if s.content["membership"] == Membership.JOIN:
destinations.add(
UserID.from_string(s.state_key).domain
)
except SynapseError:
logger.warn(
"Failed to get destination from event %s", s.event_id
)
with PreserveLoggingContext():
# Don't block waiting on waking up all the listeners.
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
# If invite, remove room_state from unsigned before sending.
event.unsigned.pop("invite_room_state", None)
federation_handler.handle_new_event(
event, destinations=destinations,
)
@defer.inlineCallbacks
def maybe_kick_guest_users(self, event, current_state):
# Technically this function invalidates current_state by changing it.


@@ -17,7 +17,6 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes
from synapse.appservice import ApplicationService
from synapse.types import UserID
import logging
@@ -35,16 +34,13 @@ def log_failure(failure):
)
# NB: Purposefully not inheriting BaseHandler since that contains way too much
# setup code which this handler does not need or use. This makes testing a lot
# easier.
class ApplicationServicesHandler(object):
def __init__(self, hs, appservice_api, appservice_scheduler):
def __init__(self, hs):
self.store = hs.get_datastore()
self.hs = hs
self.appservice_api = appservice_api
self.scheduler = appservice_scheduler
self.is_mine_id = hs.is_mine_id
self.appservice_api = hs.get_application_service_api()
self.scheduler = hs.get_application_service_scheduler()
self.started_scheduler = False
@defer.inlineCallbacks
@@ -169,8 +165,7 @@ class ApplicationServicesHandler(object):
@defer.inlineCallbacks
def _is_unknown_user(self, user_id):
user = UserID.from_string(user_id)
if not self.hs.is_mine(user):
if not self.is_mine_id(user_id):
# we don't know if they are unknown or not since it isn't one of our
# users. We can't poke ASes.
defer.returnValue(False)


@@ -18,8 +18,9 @@ from twisted.internet import defer
from ._base import BaseHandler
from synapse.api.constants import LoginType
from synapse.types import UserID
from synapse.api.errors import AuthError, LoginError, Codes
from synapse.api.errors import AuthError, LoginError, Codes, StoreError, SynapseError
from synapse.util.async import run_on_reactor
from synapse.config.ldap import LDAPMode
from twisted.web.client import PartialDownloadError
@@ -28,6 +29,12 @@ import bcrypt
import pymacaroons
import simplejson
try:
import ldap3
except ImportError:
ldap3 = None
pass
import synapse.util.stringutils as stringutils
@@ -49,6 +56,24 @@ class AuthHandler(BaseHandler):
self.sessions = {}
self.INVALID_TOKEN_HTTP_STATUS = 401
self.ldap_enabled = hs.config.ldap_enabled
if self.ldap_enabled:
if not ldap3:
raise RuntimeError(
'Missing ldap3 library. This is required for LDAP Authentication.'
)
self.ldap_mode = hs.config.ldap_mode
self.ldap_uri = hs.config.ldap_uri
self.ldap_start_tls = hs.config.ldap_start_tls
self.ldap_base = hs.config.ldap_base
self.ldap_filter = hs.config.ldap_filter
self.ldap_attributes = hs.config.ldap_attributes
if self.ldap_mode == LDAPMode.SEARCH:
self.ldap_bind_dn = hs.config.ldap_bind_dn
self.ldap_bind_password = hs.config.ldap_bind_password
self.hs = hs # FIXME: is there a better way to access the registration handler later?
@defer.inlineCallbacks
def check_auth(self, flows, clientdict, clientip):
"""
@@ -163,9 +188,13 @@ class AuthHandler(BaseHandler):
def get_session_id(self, clientdict):
"""
Gets the session ID for a client given the client dictionary
:param clientdict: The dictionary sent by the client in the request
:return: The string session ID the client sent. If the client did not
send a session ID, returns None.
Args:
clientdict: The dictionary sent by the client in the request
Returns:
str|None: The string session ID the client sent. If the client did
not send a session ID, returns None.
"""
sid = None
if clientdict and 'auth' in clientdict:
@@ -179,9 +208,11 @@ class AuthHandler(BaseHandler):
Store a key-value pair into the sessions data associated with this
request. This data is stored server-side and cannot be modified by
the client.
:param session_id: (string) The ID of this session as returned from check_auth
:param key: (string) The key to store the data under
:param value: (any) The data to store
Args:
session_id (string): The ID of this session as returned from check_auth
key (string): The key to store the data under
value (any): The data to store
"""
sess = self._get_session_info(session_id)
sess.setdefault('serverdict', {})[key] = value
@@ -190,9 +221,11 @@ class AuthHandler(BaseHandler):
def get_session_data(self, session_id, key, default=None):
"""
Retrieve data stored with set_session_data
:param session_id: (string) The ID of this session as returned from check_auth
:param key: (string) The key to store the data under
:param default: (any) Value to return if the key has not been set
Args:
session_id (string): The ID of this session as returned from check_auth
key (string): The key to store the data under
default (any): Value to return if the key has not been set
"""
sess = self._get_session_info(session_id)
return sess.setdefault('serverdict', {}).get(key, default)
@@ -207,8 +240,10 @@ class AuthHandler(BaseHandler):
if not user_id.startswith('@'):
user_id = UserID.create(user_id, self.hs.hostname).to_string()
user_id, password_hash = yield self._find_user_id_and_pwd_hash(user_id)
self._check_password(user_id, password, password_hash)
if not (yield self._check_password(user_id, password)):
logger.warn("Failed password login for user %s", user_id)
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
defer.returnValue(user_id)
@defer.inlineCallbacks
@@ -332,8 +367,10 @@ class AuthHandler(BaseHandler):
StoreError if there was a problem storing the token.
LoginError if there was an authentication problem.
"""
user_id, password_hash = yield self._find_user_id_and_pwd_hash(user_id)
self._check_password(user_id, password, password_hash)
if not (yield self._check_password(user_id, password)):
logger.warn("Failed password login for user %s", user_id)
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
logger.info("Logging in user %s", user_id)
access_token = yield self.issue_access_token(user_id)
@@ -399,11 +436,194 @@ class AuthHandler(BaseHandler):
else:
defer.returnValue(user_infos.popitem())
def _check_password(self, user_id, password, stored_hash):
"""Checks that user_id has passed password, raises LoginError if not."""
if not self.validate_hash(password, stored_hash):
logger.warn("Failed password login for user %s", user_id)
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
@defer.inlineCallbacks
def _check_password(self, user_id, password):
"""
Returns:
True if the user_id successfully authenticated
"""
valid_ldap = yield self._check_ldap_password(user_id, password)
if valid_ldap:
defer.returnValue(True)
valid_local_password = yield self._check_local_password(user_id, password)
if valid_local_password:
defer.returnValue(True)
defer.returnValue(False)
@defer.inlineCallbacks
def _check_local_password(self, user_id, password):
try:
user_id, password_hash = yield self._find_user_id_and_pwd_hash(user_id)
defer.returnValue(self.validate_hash(password, password_hash))
except LoginError:
defer.returnValue(False)
@defer.inlineCallbacks
def _check_ldap_password(self, user_id, password):
""" Attempt to authenticate a user against an LDAP Server
and register an account if none exists.
Returns:
True if authentication against LDAP was successful
"""
if not ldap3 or not self.ldap_enabled:
defer.returnValue(False)
if self.ldap_mode not in LDAPMode.LIST:
raise RuntimeError(
'Invalid ldap mode specified: {mode}'.format(
mode=self.ldap_mode
)
)
try:
server = ldap3.Server(self.ldap_uri)
logger.debug(
"Attempting ldap connection with %s",
self.ldap_uri
)
localpart = UserID.from_string(user_id).localpart
if self.ldap_mode == LDAPMode.SIMPLE:
# bind with the local user's ldap credentials
bind_dn = "{prop}={value},{base}".format(
prop=self.ldap_attributes['uid'],
value=localpart,
base=self.ldap_base
)
conn = ldap3.Connection(server, bind_dn, password)
logger.debug(
"Established ldap connection in simple mode: %s",
conn
)
if self.ldap_start_tls:
conn.start_tls()
logger.debug(
"Upgraded ldap connection in simple mode through StartTLS: %s",
conn
)
conn.bind()
elif self.ldap_mode == LDAPMode.SEARCH:
# connect with preconfigured credentials and search for local user
conn = ldap3.Connection(
server,
self.ldap_bind_dn,
self.ldap_bind_password
)
logger.debug(
"Established ldap connection in search mode: %s",
conn
)
if self.ldap_start_tls:
conn.start_tls()
logger.debug(
"Upgraded ldap connection in search mode through StartTLS: %s",
conn
)
conn.bind()
# find matching dn
query = "({prop}={value})".format(
prop=self.ldap_attributes['uid'],
value=localpart
)
if self.ldap_filter:
query = "(&{query}{filter})".format(
query=query,
filter=self.ldap_filter
)
logger.debug("ldap search filter: %s", query)
result = conn.search(self.ldap_base, query)
if result and len(conn.response) == 1:
# found exactly one result
user_dn = conn.response[0]['dn']
logger.debug('ldap search found dn: %s', user_dn)
# unbind and reconnect, rebind with found dn
conn.unbind()
conn = ldap3.Connection(
server,
user_dn,
password,
auto_bind=True
)
else:
# found 0 or > 1 results, abort!
logger.warn(
"ldap search returned unexpected (%d!=1) amount of results",
len(conn.response)
)
defer.returnValue(False)
logger.info(
"User authenticated against ldap server: %s",
conn
)
# check for existing account, if none exists, create one
if not (yield self.does_user_exist(user_id)):
# query user metadata for account creation
query = "({prop}={value})".format(
prop=self.ldap_attributes['uid'],
value=localpart
)
if self.ldap_mode == LDAPMode.SEARCH and self.ldap_filter:
query = "(&{filter}{user_filter})".format(
filter=query,
user_filter=self.ldap_filter
)
logger.debug("ldap registration filter: %s", query)
result = conn.search(
search_base=self.ldap_base,
search_filter=query,
attributes=[
self.ldap_attributes['name'],
self.ldap_attributes['mail']
]
)
if len(conn.response) == 1:
attrs = conn.response[0]['attributes']
mail = attrs[self.ldap_attributes['mail']][0]
name = attrs[self.ldap_attributes['name']][0]
# create account
registration_handler = self.hs.get_handlers().registration_handler
user_id, access_token = (
yield registration_handler.register(localpart=localpart)
)
# TODO: bind email, set displayname with data from ldap directory
logger.info(
"ldap registration successful: %d: %s (%s, %)",
user_id,
localpart,
name,
mail
)
else:
logger.warn(
"ldap registration failed: unexpected (%d!=1) number of results",
len(conn.response)
)
defer.returnValue(False)
defer.returnValue(True)
except ldap3.core.exceptions.LDAPException as e:
logger.warn("Error during ldap authentication: %s", e)
defer.returnValue(False)
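A minimal sketch of the SIMPLE-mode bind performed above (URI and DN hypothetical):

import ldap3

server = ldap3.Server("ldap://ldap.example.org")
conn = ldap3.Connection(server, "uid=alice,ou=users,dc=example,dc=org", "secret")
conn.start_tls()  # mirrors the optional ldap_start_tls upgrade
if conn.bind():
    # The directory accepted the credentials.
    conn.unbind()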
@defer.inlineCallbacks
def issue_access_token(self, user_id):
@@ -438,14 +658,19 @@ class AuthHandler(BaseHandler):
))
return m.serialize()
def generate_short_term_login_token(self, user_id):
def generate_short_term_login_token(self, user_id, duration_in_ms=(2 * 60 * 1000)):
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = login")
now = self.hs.get_clock().time_msec()
expiry = now + (2 * 60 * 1000)
expiry = now + duration_in_ms
macaroon.add_first_party_caveat("time < %d" % (expiry,))
return macaroon.serialize()
def generate_delete_pusher_token(self, user_id):
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = delete_pusher")
return macaroon.serialize()
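A toy sketch of the macaroon construction used here; location, identifier and key are hypothetical stand-ins for whatever _generate_base_macaroon supplies:

import pymacaroons

now_ms = 1467000000000  # hypothetical clock reading
m = pymacaroons.Macaroon(
    location="example.org", identifier="key_id_1", key="macaroon-secret"
)
m.add_first_party_caveat("type = login")
m.add_first_party_caveat("time < %d" % (now_ms + 2 * 60 * 1000,))
token = m.serialize()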
def validate_short_term_login_token_and_get_user_id(self, login_token):
try:
macaroon = pymacaroons.Macaroon.deserialize(login_token)
@@ -480,7 +705,12 @@ class AuthHandler(BaseHandler):
except_access_token_ids = [requester.access_token_id] if requester else []
yield self.store.user_set_password_hash(user_id, password_hash)
try:
yield self.store.user_set_password_hash(user_id, password_hash)
except StoreError as e:
if e.code == 404:
raise SynapseError(404, "Unknown user", Codes.NOT_FOUND)
raise e
yield self.store.user_delete_access_tokens(
user_id, except_access_token_ids
)
@@ -520,7 +750,8 @@ class AuthHandler(BaseHandler):
Returns:
Hashed password (str).
"""
return bcrypt.hashpw(password, bcrypt.gensalt(self.bcrypt_rounds))
return bcrypt.hashpw(password + self.hs.config.password_pepper,
bcrypt.gensalt(self.bcrypt_rounds))
def validate_hash(self, password, stored_hash):
"""Validates that self.hash(password) == stored_hash.
@@ -532,4 +763,8 @@ class AuthHandler(BaseHandler):
Returns:
Whether self.hash(password) == stored_hash (bool).
"""
return bcrypt.hashpw(password, stored_hash) == stored_hash
if stored_hash:
return bcrypt.hashpw(password + self.hs.config.password_pepper,
stored_hash.encode('utf-8')) == stored_hash
else:
return False
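A minimal sketch of the peppered check wired up above: the pepper is a server-side secret appended to the password before hashing, so leaked hashes are useless without it (values hypothetical; Python 2 str, as in the handler):

import bcrypt

pepper = "server-secret-pepper"
stored = bcrypt.hashpw("s3kr3t" + pepper, bcrypt.gensalt(12))
# Validation re-hashes with the stored value as the salt and compares.
assert bcrypt.hashpw("s3kr3t" + pepper, stored) == stored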


@@ -33,6 +33,7 @@ class DirectoryHandler(BaseHandler):
super(DirectoryHandler, self).__init__(hs)
self.state = hs.get_state_handler()
self.appservice_handler = hs.get_application_service_handler()
self.federation = hs.get_replication_layer()
self.federation.register_query_handler(
@@ -281,7 +282,7 @@ class DirectoryHandler(BaseHandler):
)
if not result:
# Query AS to see if it exists
as_handler = self.hs.get_handlers().appservice_handler
as_handler = self.appservice_handler
result = yield as_handler.query_room_alias_exists(room_alias)
defer.returnValue(result)


@@ -58,7 +58,7 @@ class EventStreamHandler(BaseHandler):
If `only_keys` is not None, events from keys will be sent down.
"""
auth_user = UserID.from_string(auth_user_id)
presence_handler = self.hs.get_handlers().presence_handler
presence_handler = self.hs.get_presence_handler()
context = yield presence_handler.user_syncing(
auth_user_id, affect_presence=affect_presence,


@@ -26,20 +26,21 @@ from synapse.api.errors import (
from synapse.api.constants import EventTypes, Membership, RejectedReason
from synapse.events.validator import EventValidator
from synapse.util import unwrapFirstError
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.logcontext import PreserveLoggingContext, preserve_fn
from synapse.util.logutils import log_function
from synapse.util.async import run_on_reactor
from synapse.util.frozenutils import unfreeze
from synapse.crypto.event_signing import (
compute_event_signature, add_hashes_and_signatures,
)
from synapse.types import UserID
from synapse.types import UserID, get_domain_from_id
from synapse.events.utils import prune_event
from synapse.util.retryutils import NotRetryingDestination
from synapse.push.action_generator import ActionGenerator
from synapse.util.distributor import user_joined_room
from twisted.internet import defer
@@ -49,10 +50,6 @@ import logging
logger = logging.getLogger(__name__)
def user_joined_room(distributor, user, room_id):
return distributor.fire("user_joined_room", user, room_id)
class FederationHandler(BaseHandler):
"""Handles events that originated from federation.
Responsible for:
@@ -69,10 +66,6 @@ class FederationHandler(BaseHandler):
self.hs = hs
self.distributor.observe("user_joined_room", self.user_joined_room)
self.waiting_for_join_list = {}
self.store = hs.get_datastore()
self.replication_layer = hs.get_replication_layer()
self.state_handler = hs.get_state_handler()
@@ -102,8 +95,7 @@ class FederationHandler(BaseHandler):
@log_function
@defer.inlineCallbacks
def on_receive_pdu(self, origin, pdu, state=None,
auth_chain=None):
def on_receive_pdu(self, origin, pdu, state=None, auth_chain=None):
""" Called by the ReplicationLayer when we have a new pdu. We need to
do auth checks and put it through the StateHandler.
"""
@@ -174,11 +166,7 @@ class FederationHandler(BaseHandler):
})
seen_ids.add(e.event_id)
yield self._handle_new_events(
origin,
event_infos,
outliers=True
)
yield self._handle_new_events(origin, event_infos)
try:
context, event_stream_id, max_stream_id = yield self._handle_new_event(
@@ -288,7 +276,14 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
def backfill(self, dest, room_id, limit, extremities=[]):
""" Trigger a backfill request to `dest` for the given `room_id`
This will attempt to get more events from the remote. It may succeed
and still return no events if the other side has no new events to
offer.
"""
if dest == self.server_name:
raise SynapseError(400, "Can't backfill from self.")
if not extremities:
extremities = yield self.store.get_oldest_events_in_room(room_id)
@@ -299,6 +294,16 @@ class FederationHandler(BaseHandler):
extremities=extremities,
)
# Don't bother processing events we already have.
seen_events = yield self.store.have_events_in_timeline(
set(e.event_id for e in events)
)
events = [e for e in events if e.event_id not in seen_events]
if not events:
defer.returnValue([])
event_map = {e.event_id: e for e in events}
event_ids = set(e.event_id for e in events)
@@ -340,24 +345,27 @@ class FederationHandler(BaseHandler):
)
missing_auth = required_auth - set(auth_events)
results = yield defer.gatherResults(
[
self.replication_layer.get_pdu(
[dest],
event_id,
outlier=True,
timeout=10000,
)
for event_id in missing_auth
],
consumeErrors=True
).addErrback(unwrapFirstError)
auth_events.update({a.event_id: a for a in results})
if missing_auth:
logger.info("Missing auth for backfill: %r", missing_auth)
results = yield defer.gatherResults(
[
self.replication_layer.get_pdu(
[dest],
event_id,
outlier=True,
timeout=10000,
)
for event_id in missing_auth
],
consumeErrors=True
).addErrback(unwrapFirstError)
auth_events.update({a.event_id: a for a in results})
ev_infos = []
for a in auth_events.values():
if a.event_id in seen_events:
continue
a.internal_metadata.outlier = True
ev_infos.append({
"event": a,
"auth_events": {
@@ -378,20 +386,23 @@ class FederationHandler(BaseHandler):
}
})
yield self._handle_new_events(
dest, ev_infos,
backfilled=True,
)
events.sort(key=lambda e: e.depth)
for event in events:
if event in events_to_state:
continue
ev_infos.append({
"event": event,
})
yield self._handle_new_events(
dest, ev_infos,
backfilled=True,
)
# We store these one at a time since each event depends on the
# previous to work out the state.
# TODO: We can probably do something more clever here.
yield self._handle_new_event(
dest, event, backfilled=True,
)
defer.returnValue(events)
@@ -440,7 +451,7 @@ class FederationHandler(BaseHandler):
joined_domains = {}
for u, d in joined_users:
try:
dom = UserID.from_string(u).domain
dom = get_domain_from_id(u)
old_d = joined_domains.get(dom)
if old_d:
joined_domains[dom] = min(d, old_d)
@@ -455,7 +466,7 @@ class FederationHandler(BaseHandler):
likely_domains = [
domain for domain, depth in curr_domains
if domain is not self.server_name
if domain != self.server_name
]
@defer.inlineCallbacks
@@ -463,11 +474,15 @@ class FederationHandler(BaseHandler):
# TODO: Should we try multiple of these at a time?
for dom in domains:
try:
events = yield self.backfill(
yield self.backfill(
dom, room_id,
limit=100,
extremities=[e for e in extremities.keys()]
)
# If this succeeded then we probably already have the
# appropriate stuff.
# TODO: We can probably do something more intelligent here.
defer.returnValue(True)
except SynapseError as e:
logger.info(
"Failed to backfill from %s because %s",
@@ -493,8 +508,6 @@ class FederationHandler(BaseHandler):
)
continue
if events:
defer.returnValue(True)
defer.returnValue(False)
success = yield try_backfill(likely_domains)
@@ -666,9 +679,14 @@ class FederationHandler(BaseHandler):
"state_key": user_id,
})
event, context = yield self._create_new_client_event(
builder=builder,
)
try:
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
builder=builder,
)
except AuthError as e:
logger.warn("Failed to create join %r because %s", event, e)
raise e
self.auth.check(event, auth_events=context.current_state)
@@ -724,9 +742,7 @@ class FederationHandler(BaseHandler):
try:
if k[0] == EventTypes.Member:
if s.content["membership"] == Membership.JOIN:
destinations.add(
UserID.from_string(s.state_key).domain
)
destinations.add(get_domain_from_id(s.state_key))
except:
logger.warn(
"Failed to get destination from event %s", s.event_id
@@ -761,6 +777,7 @@ class FederationHandler(BaseHandler):
event = pdu
event.internal_metadata.outlier = True
event.internal_metadata.invite_from_remote = True
event.signatures.update(
compute_event_signature(
@@ -788,13 +805,19 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
def do_remotely_reject_invite(self, target_hosts, room_id, user_id):
origin, event = yield self._make_and_verify_event(
target_hosts,
room_id,
user_id,
"leave"
)
signed_event = self._sign_event(event)
try:
origin, event = yield self._make_and_verify_event(
target_hosts,
room_id,
user_id,
"leave"
)
signed_event = self._sign_event(event)
except SynapseError:
raise
except CodeMessageException as e:
logger.warn("Failed to reject invite: %s", e)
raise SynapseError(500, "Failed to reject invite")
# Try the host we successfully got a response to /make_join/
# request first.
@@ -804,10 +827,16 @@ class FederationHandler(BaseHandler):
except ValueError:
pass
yield self.replication_layer.send_leave(
target_hosts,
signed_event
)
try:
yield self.replication_layer.send_leave(
target_hosts,
signed_event
)
except SynapseError:
raise
except CodeMessageException as e:
logger.warn("Failed to reject invite: %s", e)
raise SynapseError(500, "Failed to reject invite")
context = yield self.state_handler.compute_event_context(event)
@@ -883,11 +912,16 @@ class FederationHandler(BaseHandler):
"state_key": user_id,
})
event, context = yield self._create_new_client_event(
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
builder=builder,
)
self.auth.check(event, auth_events=context.current_state)
try:
self.auth.check(event, auth_events=context.current_state)
except AuthError as e:
logger.warn("Failed to create new leave %r because %s", event, e)
raise e
defer.returnValue(event)
@@ -934,9 +968,7 @@ class FederationHandler(BaseHandler):
try:
if k[0] == EventTypes.Member:
if s.content["membership"] == Membership.LEAVE:
destinations.add(
UserID.from_string(s.state_key).domain
)
destinations.add(get_domain_from_id(s.state_key))
except:
logger.warn(
"Failed to get destination from event %s", s.event_id
@@ -986,13 +1018,16 @@ class FederationHandler(BaseHandler):
res = results.values()
for event in res:
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
# We sign these again because there was a bug where we
# incorrectly signed things the first time round
if self.hs.is_mine_id(event.event_id):
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
)
)
)
defer.returnValue(res)
else:
@@ -1057,21 +1092,10 @@ class FederationHandler(BaseHandler):
def get_min_depth_for_context(self, context):
return self.store.get_min_depth(context)
@log_function
def user_joined_room(self, user, room_id):
waiters = self.waiting_for_join_list.get(
(user.to_string(), room_id),
[]
)
while waiters:
waiters.pop().callback(None)
@defer.inlineCallbacks
@log_function
def _handle_new_event(self, origin, event, state=None, auth_events=None):
outlier = event.internal_metadata.is_outlier()
def _handle_new_event(self, origin, event, state=None, auth_events=None,
backfilled=False):
context = yield self._prep_event(
origin, event,
state=state,
@@ -1081,20 +1105,30 @@ class FederationHandler(BaseHandler):
if not event.internal_metadata.is_outlier():
action_generator = ActionGenerator(self.hs)
yield action_generator.handle_push_actions_for_event(
event, context, self
event, context
)
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
is_new_state=not outlier,
backfilled=backfilled,
)
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
preserve_fn(self.hs.get_pusherpool().on_new_notifications)(
event_stream_id, max_stream_id
)
defer.returnValue((context, event_stream_id, max_stream_id))
@defer.inlineCallbacks
def _handle_new_events(self, origin, event_infos, backfilled=False,
outliers=False):
def _handle_new_events(self, origin, event_infos, backfilled=False):
"""Creates the appropriate contexts and persists events. The events
should not depend on one another, e.g. this should be used to persist
a bunch of outliers, but not a chunk of individual events that depend
on each other for state calculations.
"""
contexts = yield defer.gatherResults(
[
self._prep_event(
@@ -1113,7 +1147,6 @@ class FederationHandler(BaseHandler):
for ev_info, context in itertools.izip(event_infos, contexts)
],
backfilled=backfilled,
is_new_state=(not outliers and not backfilled),
)
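_handle_new_events persists a batch whose members must not depend on each other, which is what lets it compute all the contexts concurrently via defer.gatherResults. A stripped-down sketch of that concurrency pattern (prep_event stands in for self._prep_event and must return a Deferred):

from twisted.internet import defer

@defer.inlineCallbacks
def prep_all(prep_event, event_infos):
    # Independent events: contexts can be computed concurrently.
    contexts = yield defer.gatherResults(
        [prep_event(ev_info) for ev_info in event_infos],
        consumeErrors=True,
    )
    defer.returnValue(contexts)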
@defer.inlineCallbacks
@@ -1128,11 +1161,9 @@ class FederationHandler(BaseHandler):
"""
events_to_context = {}
for e in itertools.chain(auth_events, state):
ctx = yield self.state_handler.compute_event_context(
e, outlier=True,
)
events_to_context[e.event_id] = ctx
e.internal_metadata.outlier = True
ctx = yield self.state_handler.compute_event_context(e)
events_to_context[e.event_id] = ctx
event_map = {
e.event_id: e
@@ -1176,16 +1207,14 @@ class FederationHandler(BaseHandler):
(e, events_to_context[e.event_id])
for e in itertools.chain(auth_events, state)
],
is_new_state=False,
)
new_event_context = yield self.state_handler.compute_event_context(
event, old_state=state, outlier=False,
event, old_state=state
)
event_stream_id, max_stream_id = yield self.store.persist_event(
event, new_event_context,
is_new_state=True,
current_state=state,
)
@@ -1193,10 +1222,9 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
def _prep_event(self, origin, event, state=None, auth_events=None):
outlier = event.internal_metadata.is_outlier()
context = yield self.state_handler.compute_event_context(
event, old_state=state, outlier=outlier,
event, old_state=state,
)
if not auth_events:
@@ -1385,7 +1413,7 @@ class FederationHandler(BaseHandler):
local_view = dict(auth_events)
remote_view = dict(auth_events)
remote_view.update({
(d.type, d.state_key): d for d in different_events
(d.type, d.state_key): d for d in different_events if d
})
new_state, prev_state = self.state_handler.resolve_events(
@@ -1482,8 +1510,9 @@ class FederationHandler(BaseHandler):
try:
self.auth.check(event, auth_events=auth_events)
except AuthError:
raise
except AuthError as e:
logger.warn("Failed auth resolution for %r because %s", event, e)
raise e
@defer.inlineCallbacks
def construct_auth_difference(self, local_auth, remote_auth):
@@ -1653,13 +1682,21 @@ class FederationHandler(BaseHandler):
if (yield self.auth.check_host_in_room(room_id, self.hs.hostname)):
builder = self.event_builder_factory.new(event_dict)
EventValidator().validate_new(builder)
event, context = yield self._create_new_client_event(builder=builder)
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
builder=builder
)
event, context = yield self.add_display_name_to_third_party_invite(
event_dict, event, context
)
self.auth.check(event, context.current_state)
try:
self.auth.check(event, context.current_state)
except AuthError as e:
logger.warn("Denying new third party invite %r because %s", event, e)
raise e
yield self._check_signature(event, auth_events=context.current_state)
member_handler = self.hs.get_handlers().room_member_handler
yield member_handler.send_membership_event(None, event, context)
@@ -1676,7 +1713,8 @@ class FederationHandler(BaseHandler):
def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
builder = self.event_builder_factory.new(event_dict)
event, context = yield self._create_new_client_event(
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(
builder=builder,
)
@@ -1684,7 +1722,11 @@ class FederationHandler(BaseHandler):
event_dict, event, context
)
self.auth.check(event, auth_events=context.current_state)
try:
self.auth.check(event, auth_events=context.current_state)
except AuthError as e:
logger.warn("Denying third party invite %r because %s", event, e)
raise e
yield self._check_signature(event, auth_events=context.current_state)
returned_invite = yield self.send_invite(origin, event)
@@ -1711,20 +1753,23 @@ class FederationHandler(BaseHandler):
event_dict["content"]["third_party_invite"]["display_name"] = display_name
builder = self.event_builder_factory.new(event_dict)
EventValidator().validate_new(builder)
event, context = yield self._create_new_client_event(builder=builder)
message_handler = self.hs.get_handlers().message_handler
event, context = yield message_handler._create_new_client_event(builder=builder)
defer.returnValue((event, context))
@defer.inlineCallbacks
def _check_signature(self, event, auth_events):
"""
Checks that the signature in the event is consistent with its invite.
:param event (Event): The m.room.member event to check
:param auth_events (dict<(event type, state_key), event>)
:raises
AuthError if signature didn't match any keys, or key has been
Args:
event (Event): The m.room.member event to check
auth_events (dict<(event type, state_key), event>):
Raises:
AuthError: if signature didn't match any keys, or key has been
revoked,
SynapseError if a transient error meant a key couldn't be checked
SynapseError: if a transient error meant a key couldn't be checked
for revocation.
"""
signed = event.content["third_party_invite"]["signed"]
@@ -1766,12 +1811,13 @@ class FederationHandler(BaseHandler):
"""
Checks whether public_key has been revoked.
:param public_key (str): base-64 encoded public key.
:param url (str): Key revocation URL.
Args:
public_key (str): base-64 encoded public key.
url (str): Key revocation URL.
:raises
AuthError if the key has been revoked.
SynapseError if a transient error meant a key couldn't be checked
Raises:
AuthError: if the key has been revoked.
SynapseError: if a transient error meant a key couldn't be checked
for revocation.
"""
try:

View File

@@ -21,7 +21,7 @@ from synapse.api.errors import (
)
from ._base import BaseHandler
from synapse.util.async import run_on_reactor
from synapse.api.errors import SynapseError
from synapse.api.errors import SynapseError, Codes
import json
import logging
@@ -41,6 +41,20 @@ class IdentityHandler(BaseHandler):
hs.config.use_insecure_ssl_client_just_for_testing_do_not_use
)
def _should_trust_id_server(self, id_server):
if id_server not in self.trusted_id_servers:
if self.trust_any_id_server_just_for_testing_do_not_use:
logger.warn(
"Trusting untrustworthy ID server %r even though it isn't"
" in the trusted id list for testing because"
" 'use_insecure_ssl_client_just_for_testing_do_not_use'"
" is set in the config",
id_server,
)
else:
return False
return True
@defer.inlineCallbacks
def threepid_from_creds(self, creds):
yield run_on_reactor()
@@ -59,19 +73,12 @@ class IdentityHandler(BaseHandler):
else:
raise SynapseError(400, "No client_secret in creds")
if id_server not in self.trusted_id_servers:
if self.trust_any_id_server_just_for_testing_do_not_use:
logger.warn(
"Trusting untrustworthy ID server %r even though it isn't"
" in the trusted id list for testing because"
" 'use_insecure_ssl_client_just_for_testing_do_not_use'"
" is set in the config",
id_server,
)
else:
logger.warn('%s is not a trusted ID server: rejecting 3pid ' +
'credentials', id_server)
defer.returnValue(None)
if not self._should_trust_id_server(id_server):
logger.warn(
'%s is not a trusted ID server: rejecting 3pid ' +
'credentials', id_server
)
defer.returnValue(None)
data = {}
try:
@@ -129,6 +136,12 @@ class IdentityHandler(BaseHandler):
def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs):
yield run_on_reactor()
if not self._should_trust_id_server(id_server):
raise SynapseError(
400, "Untrusted ID server '%s'" % id_server,
Codes.SERVER_NOT_TRUSTED
)
params = {
'email': email,
'client_secret': client_secret,

View File

@@ -17,12 +17,19 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.streams.config import PaginationConfig
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.push.action_generator import ActionGenerator
from synapse.streams.config import PaginationConfig
from synapse.types import (
UserID, RoomAlias, RoomStreamToken, StreamToken, get_domain_from_id
)
from synapse.util import unwrapFirstError
from synapse.util.async import concurrently_execute, run_on_reactor, ReadWriteLock
from synapse.util.caches.snapshot_cache import SnapshotCache
from synapse.types import UserID, RoomStreamToken, StreamToken
from synapse.util.logcontext import preserve_fn
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
@@ -33,10 +40,6 @@ import logging
logger = logging.getLogger(__name__)
def collect_presencelike_data(distributor, user, content):
return distributor.fire("collect_presencelike_data", user, content)
class MessageHandler(BaseHandler):
def __init__(self, hs):
@@ -47,34 +50,19 @@ class MessageHandler(BaseHandler):
self.validator = EventValidator()
self.snapshot_cache = SnapshotCache()
self.pagination_lock = ReadWriteLock()
@defer.inlineCallbacks
def get_message(self, msg_id=None, room_id=None, sender_id=None,
user_id=None):
""" Retrieve a message.
def purge_history(self, room_id, event_id):
event = yield self.store.get_event(event_id)
Args:
msg_id (str): The message ID to obtain.
room_id (str): The room where the message resides.
sender_id (str): The user ID of the user who sent the message.
user_id (str): The user ID of the user making this request.
Returns:
The message, or None if no message exists.
Raises:
SynapseError if something went wrong.
"""
yield self.auth.check_joined_room(room_id, user_id)
if event.room_id != room_id:
raise SynapseError(400, "Event is for wrong room.")
# Pull out the message from the db
# msg = yield self.store.get_message(
# room_id=room_id,
# msg_id=msg_id,
# user_id=sender_id
# )
depth = event.depth
# TODO (erikj): Once we work out the correct c-s api we need to think
# on how to do this.
defer.returnValue(None)
with (yield self.pagination_lock.write(room_id)):
yield self.store.delete_old_state(room_id, depth)
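purge_history takes the write half of the per-room pagination_lock while get_messages (below) takes the read half, so a purge waits for in-flight /messages requests and blocks new ones until the deletion finishes. A minimal sketch using the ReadWriteLock imported in this diff; the bodies are placeholders:

from twisted.internet import defer
from synapse.util.async import ReadWriteLock

pagination_lock = ReadWriteLock()

@defer.inlineCallbacks
def paginate(room_id):
    with (yield pagination_lock.read(room_id)):
        pass  # many readers may hold the lock for a room at once

@defer.inlineCallbacks
def purge(room_id):
    with (yield pagination_lock.write(room_id)):
        pass  # exclusive: waits for readers, blocks new ones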
@defer.inlineCallbacks
def get_messages(self, requester, room_id=None, pagin_config=None,
@@ -111,42 +99,43 @@ class MessageHandler(BaseHandler):
source_config = pagin_config.get_source_config("room")
membership, member_event_id = yield self._check_in_room_or_world_readable(
room_id, user_id
)
if source_config.direction == 'b':
# if we're going backwards, we might need to backfill. This
# requires that we have a topo token.
if room_token.topological:
max_topo = room_token.topological
else:
max_topo = yield self.store.get_max_topological_token_for_stream_and_room(
room_id, room_token.stream
)
if membership == Membership.LEAVE:
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
leave_token = yield self.store.get_topological_token_for_event(
member_event_id
)
leave_token = RoomStreamToken.parse(leave_token)
if leave_token.topological < max_topo:
source_config.from_key = str(leave_token)
yield self.hs.get_handlers().federation_handler.maybe_backfill(
room_id, max_topo
with (yield self.pagination_lock.read(room_id)):
membership, member_event_id = yield self._check_in_room_or_world_readable(
room_id, user_id
)
events, next_key = yield data_source.get_pagination_rows(
requester.user, source_config, room_id
)
if source_config.direction == 'b':
# if we're going backwards, we might need to backfill. This
# requires that we have a topo token.
if room_token.topological:
max_topo = room_token.topological
else:
max_topo = yield self.store.get_max_topological_token(
room_id, room_token.stream
)
next_token = pagin_config.from_token.copy_and_replace(
"room_key", next_key
)
if membership == Membership.LEAVE:
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
leave_token = yield self.store.get_topological_token_for_event(
member_event_id
)
leave_token = RoomStreamToken.parse(leave_token)
if leave_token.topological < max_topo:
source_config.from_key = str(leave_token)
yield self.hs.get_handlers().federation_handler.maybe_backfill(
room_id, max_topo
)
events, next_key = yield data_source.get_pagination_rows(
requester.user, source_config, room_id
)
next_token = pagin_config.from_token.copy_and_replace(
"room_key", next_key
)
if not events:
defer.returnValue({
@@ -155,7 +144,8 @@ class MessageHandler(BaseHandler):
"end": next_token.to_string(),
})
events = yield self._filter_events_for_client(
events = yield filter_events_for_client(
self.store,
user_id,
events,
is_peeking=(member_event_id is None),
@@ -175,7 +165,102 @@ class MessageHandler(BaseHandler):
defer.returnValue(chunk)
@defer.inlineCallbacks
def create_event(self, event_dict, token_id=None, txn_id=None):
def get_files(self, requester, room_id, pagin_config):
"""Get files in a room.
Args:
requester (Requester): The user requesting files.
room_id (str): The room they want files from.
pagin_config (synapse.streams.config.PaginationConfig): The pagination
config rules to apply, if any.
Returns:
dict: Pagination API results
"""
user_id = requester.user.to_string()
if pagin_config.from_token:
room_token = pagin_config.from_token.room_key
else:
pagin_config.from_token = (
yield self.hs.get_event_sources().get_current_token(
direction='b'
)
)
room_token = pagin_config.from_token.room_key
room_token = RoomStreamToken.parse(room_token)
pagin_config.from_token = pagin_config.from_token.copy_and_replace(
"room_key", str(room_token)
)
source_config = pagin_config.get_source_config("room")
membership, member_event_id = yield self._check_in_room_or_world_readable(
room_id, user_id
)
if source_config.direction == 'b':
if room_token.topological:
max_topo = room_token.topological
else:
max_topo = yield self.store.get_max_topological_token(
room_id, room_token.stream
)
if membership == Membership.LEAVE:
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
leave_token = yield self.store.get_topological_token_for_event(
member_event_id
)
leave_token = RoomStreamToken.parse(leave_token)
if leave_token.topological < max_topo:
source_config.from_key = str(leave_token)
events, next_key = yield self.store.paginate_room_file_events(
room_id,
from_key=source_config.from_key,
to_key=source_config.to_key,
direction=source_config.direction,
limit=source_config.limit,
)
next_token = pagin_config.from_token.copy_and_replace(
"room_key", next_key
)
if not events:
defer.returnValue({
"chunk": [],
"start": pagin_config.from_token.to_string(),
"end": next_token.to_string(),
})
events = yield filter_events_for_client(
self.store,
user_id,
events,
is_peeking=(member_event_id is None),
)
time_now = self.clock.time_msec()
chunk = {
"chunk": [
serialize_event(e, time_now)
for e in events
],
"start": pagin_config.from_token.to_string(),
"end": next_token.to_string(),
}
defer.returnValue(chunk)
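The dict returned above follows the standard Matrix pagination shape, so clients can page through a room's files with the returned end token. An illustrative example (all values made up):

example_chunk = {
    "chunk": [
        {
            "type": "m.room.message",
            "content": {"msgtype": "m.file", "url": "mxc://hs/abc"},
        },
    ],
    "start": "t123-456",
    "end": "t120-450",
}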
@defer.inlineCallbacks
def create_event(self, event_dict, token_id=None, txn_id=None, prev_event_ids=None):
"""
Given a dict from a client, create a new event.
@@ -186,6 +271,9 @@ class MessageHandler(BaseHandler):
Args:
event_dict (dict): An entire event
token_id (str)
txn_id (str)
prev_event_ids (list): The prev event ids to use when creating the event
Returns:
Tuple of created event (FrozenEvent), Context
@@ -198,12 +286,8 @@ class MessageHandler(BaseHandler):
membership = builder.content.get("membership", None)
target = UserID.from_string(builder.state_key)
if membership == Membership.JOIN:
if membership in {Membership.JOIN, Membership.INVITE}:
# If event doesn't include a display name, add one.
yield collect_presencelike_data(
self.distributor, target, builder.content
)
elif membership == Membership.INVITE:
profile = self.hs.get_handlers().profile_handler
content = builder.content
@@ -224,6 +308,7 @@ class MessageHandler(BaseHandler):
event, context = yield self._create_new_client_event(
builder=builder,
prev_event_ids=prev_event_ids,
)
defer.returnValue((event, context))
@@ -261,7 +346,7 @@ class MessageHandler(BaseHandler):
)
if event.type == EventTypes.Message:
presence = self.hs.get_handlers().presence_handler
presence = self.hs.get_presence_handler()
yield presence.bump_presence_active_time(user)
def deduplicate_state_event(self, event, context):
@@ -515,8 +600,8 @@ class MessageHandler(BaseHandler):
]
).addErrback(unwrapFirstError)
messages = yield self._filter_events_for_client(
user_id, messages
messages = yield filter_events_for_client(
self.store, user_id, messages
)
start_token = now_token.copy_and_replace("room_key", token[0])
@@ -556,14 +641,7 @@ class MessageHandler(BaseHandler):
except:
logger.exception("Failed to get snapshot")
# Only do N rooms at once
n = 5
d_list = [handle_room(e) for e in room_list]
for i in range(0, len(d_list), n):
yield defer.gatherResults(
d_list[i:i + n],
consumeErrors=True
).addErrback(unwrapFirstError)
yield concurrently_execute(handle_room, room_list, 10)
account_data_events = []
for account_data_type, content in account_data.items():
@@ -658,8 +736,8 @@ class MessageHandler(BaseHandler):
end_token=stream_token
)
messages = yield self._filter_events_for_client(
user_id, messages, is_peeking=is_peeking
messages = yield filter_events_for_client(
self.store, user_id, messages, is_peeking=is_peeking
)
start_token = StreamToken.START.copy_and_replace("room_key", token[0])
@@ -706,7 +784,7 @@ class MessageHandler(BaseHandler):
and m.content["membership"] == Membership.JOIN
]
presence_handler = self.hs.get_handlers().presence_handler
presence_handler = self.hs.get_presence_handler()
@defer.inlineCallbacks
def get_presence():
@@ -739,8 +817,8 @@ class MessageHandler(BaseHandler):
consumeErrors=True,
).addErrback(unwrapFirstError)
messages = yield self._filter_events_for_client(
user_id, messages, is_peeking=is_peeking,
messages = yield filter_events_for_client(
self.store, user_id, messages, is_peeking=is_peeking,
)
start_token = now_token.copy_and_replace("room_key", token[0])
@@ -763,3 +841,196 @@ class MessageHandler(BaseHandler):
ret["membership"] = membership
defer.returnValue(ret)
@defer.inlineCallbacks
def _create_new_client_event(self, builder, prev_event_ids=None):
if prev_event_ids:
prev_events = yield self.store.add_event_hashes(prev_event_ids)
prev_max_depth = yield self.store.get_max_depth_of_events(prev_event_ids)
depth = prev_max_depth + 1
else:
latest_ret = yield self.store.get_latest_event_ids_and_hashes_in_room(
builder.room_id,
)
if latest_ret:
depth = max([d for _, _, d in latest_ret]) + 1
else:
depth = 1
prev_events = [
(event_id, prev_hashes)
for event_id, prev_hashes, _ in latest_ret
]
builder.prev_events = prev_events
builder.depth = depth
state_handler = self.state_handler
context = yield state_handler.compute_event_context(builder)
if builder.is_state():
builder.prev_state = yield self.store.add_event_hashes(
context.prev_state_events
)
yield self.auth.add_auth_events(builder, context)
signing_key = self.hs.config.signing_key[0]
add_hashes_and_signatures(
builder, self.server_name, signing_key
)
event = builder.build()
logger.debug(
"Created event %s with current state: %s",
event.event_id, context.current_state,
)
defer.returnValue(
(event, context,)
)
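The depth rule in _create_new_client_event is worth spelling out: a new event sits one deeper than the deepest of its prev_events, and the first event in a room gets depth 1. A worked example using the same (event_id, prev_hashes, depth) tuple shape returned by get_latest_event_ids_and_hashes_in_room:

def next_depth(latest_ret):
    # latest_ret: list of (event_id, prev_hashes, depth) tuples
    if latest_ret:
        return max(d for _, _, d in latest_ret) + 1
    return 1

assert next_depth([]) == 1
assert next_depth([("$a:hs", {}, 4), ("$b:hs", {}, 7)]) == 8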
@defer.inlineCallbacks
def handle_new_client_event(
self,
requester,
event,
context,
ratelimit=True,
extra_users=[]
):
# We now need to go and hit out to wherever we need to hit out to.
if ratelimit:
self.ratelimit(requester)
try:
self.auth.check(event, auth_events=context.current_state)
except AuthError as err:
logger.warn("Denying new event %r because %s", event, err)
raise err
yield self.maybe_kick_guest_users(event, context.current_state.values())
if event.type == EventTypes.CanonicalAlias:
# Check the alias is actually valid (at this time at least)
room_alias_str = event.content.get("alias", None)
if room_alias_str:
room_alias = RoomAlias.from_string(room_alias_str)
directory_handler = self.hs.get_handlers().directory_handler
mapping = yield directory_handler.get_association(room_alias)
if mapping["room_id"] != event.room_id:
raise SynapseError(
400,
"Room alias %s does not point to the room" % (
room_alias_str,
)
)
federation_handler = self.hs.get_handlers().federation_handler
if event.type == EventTypes.Member:
if event.content["membership"] == Membership.INVITE:
def is_inviter_member_event(e):
return (
e.type == EventTypes.Member and
e.sender == event.sender
)
event.unsigned["invite_room_state"] = [
{
"type": e.type,
"state_key": e.state_key,
"content": e.content,
"sender": e.sender,
}
for k, e in context.current_state.items()
if e.type in self.hs.config.room_invite_state_types
or is_inviter_member_event(e)
]
invitee = UserID.from_string(event.state_key)
if not self.hs.is_mine(invitee):
# TODO: Can we add signature from remote server in a nicer
# way? If we have been invited by a remote server, we need
# to get them to sign the event.
returned_invite = yield federation_handler.send_invite(
invitee.domain,
event,
)
event.unsigned.pop("room_state", None)
# TODO: Make sure the signatures actually are correct.
event.signatures.update(
returned_invite.signatures
)
if event.type == EventTypes.Redaction:
if self.auth.check_redaction(event, auth_events=context.current_state):
original_event = yield self.store.get_event(
event.redacts,
check_redacted=False,
get_prev_content=False,
allow_rejected=False,
allow_none=False
)
if event.user_id != original_event.user_id:
raise AuthError(
403,
"You don't have permission to redact events"
)
if event.type == EventTypes.Create and context.current_state:
raise AuthError(
403,
"Changing the room create event is forbidden",
)
action_generator = ActionGenerator(self.hs)
yield action_generator.handle_push_actions_for_event(
event, context
)
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event, context=context
)
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
preserve_fn(self.hs.get_pusherpool().on_new_notifications)(
event_stream_id, max_stream_id
)
destinations = set()
for k, s in context.current_state.items():
try:
if k[0] == EventTypes.Member:
if s.content["membership"] == Membership.JOIN:
destinations.add(get_domain_from_id(s.state_key))
except SynapseError:
logger.warn(
"Failed to get destination from event %s", s.event_id
)
@defer.inlineCallbacks
def _notify():
yield run_on_reactor()
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
preserve_fn(_notify)()
# If invite, remove room_state from unsigned before sending.
event.unsigned.pop("invite_room_state", None)
federation_handler.handle_new_event(
event, destinations=destinations,
)

View File

@@ -33,11 +33,9 @@ from synapse.util.logcontext import preserve_fn
from synapse.util.logutils import log_function
from synapse.util.metrics import Measure
from synapse.util.wheel_timer import WheelTimer
from synapse.types import UserID
from synapse.types import UserID, get_domain_from_id
import synapse.metrics
from ._base import BaseHandler
import logging
@@ -52,6 +50,8 @@ timers_fired_counter = metrics.register_counter("timers_fired")
federation_presence_counter = metrics.register_counter("federation_presence")
bump_active_time_counter = metrics.register_counter("bump_active_time")
get_updates_counter = metrics.register_counter("get_updates", labels=["type"])
# If a user was last active in the last LAST_ACTIVE_GRANULARITY, consider them
# "currently_active"
@@ -70,14 +70,18 @@ FEDERATION_TIMEOUT = 30 * 60 * 1000
# How often to resend presence to remote servers
FEDERATION_PING_INTERVAL = 25 * 60 * 1000
# How long we will wait before assuming that the syncs from an external process
# are dead.
EXTERNAL_PROCESS_EXPIRY = 5 * 60 * 1000
assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER
class PresenceHandler(BaseHandler):
class PresenceHandler(object):
def __init__(self, hs):
super(PresenceHandler, self).__init__(hs)
self.hs = hs
self.is_mine = hs.is_mine
self.is_mine_id = hs.is_mine_id
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.wheel_timer = WheelTimer()
@@ -138,7 +142,7 @@ class PresenceHandler(BaseHandler):
obj=state.user_id,
then=state.last_user_sync_ts + SYNC_ONLINE_TIMEOUT,
)
if self.hs.is_mine_id(state.user_id):
if self.is_mine_id(state.user_id):
self.wheel_timer.insert(
now=now,
obj=state.user_id,
@@ -160,15 +164,26 @@ class PresenceHandler(BaseHandler):
self.serial_to_user = {}
self._next_serial = 1
# Keeps track of the number of *ongoing* syncs. While this is non zero
# a user will never go offline.
# Keeps track of the number of *ongoing* syncs on this process. While
# this is non zero a user will never go offline.
self.user_to_num_current_syncs = {}
# Keeps track of the number of *ongoing* syncs on other processes.
# While any sync is ongoing on another process the user will never
# go offline.
# Each process has a unique identifier and an update frequency. If
# no update is received from that process within the update period then
# we assume that all the sync requests on that process have stopped.
# Stored as a dict from process_id to set of user_id, and a dict of
# process_id to millisecond timestamp last updated.
self.external_process_to_current_syncs = {}
self.external_process_last_updated_ms = {}
# Start a LoopingCall in 30s that fires every 5s.
# The initial delay is to allow disconnected clients a chance to
# reconnect before we treat them as offline.
self.clock.call_later(
0 * 1000,
30,
self.clock.looping_call,
self._handle_timeouts,
5000,
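The comment above describes a delayed-start polling loop: wait 30 seconds so disconnected clients can reconnect, then check for presence timeouts every 5 seconds. The same shape in stock Twisted, for illustration (seconds throughout, whereas Synapse's Clock.looping_call takes milliseconds):

from twisted.internet import reactor, task

def start_timeout_loop(handle_timeouts, delay=30.0, interval=5.0):
    # Delay the first run, then poll on a fixed interval.
    loop = task.LoopingCall(handle_timeouts)
    reactor.callLater(delay, loop.start, interval)
    return loop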
@@ -228,7 +243,7 @@ class PresenceHandler(BaseHandler):
new_state, should_notify, should_ping = handle_update(
prev_state, new_state,
is_mine=self.hs.is_mine_id(user_id),
is_mine=self.is_mine_id(user_id),
wheel_timer=self.wheel_timer,
now=now
)
@@ -268,31 +283,48 @@ class PresenceHandler(BaseHandler):
"""Checks the presence of users that have timed out and updates as
appropriate.
"""
logger.info("Handling presence timeouts")
now = self.clock.time_msec()
with Measure(self.clock, "presence_handle_timeouts"):
# Fetch the list of users that *may* have timed out. Things may have
# changed since the timeout was set, so we won't necessarily have to
# take any action.
users_to_check = self.wheel_timer.fetch(now)
try:
with Measure(self.clock, "presence_handle_timeouts"):
# Fetch the list of users that *may* have timed out. Things may have
# changed since the timeout was set, so we won't necessarily have to
# take any action.
users_to_check = set(self.wheel_timer.fetch(now))
states = [
self.user_to_current_state.get(
user_id, UserPresenceState.default(user_id)
# Check whether the lists of syncing processes from an external
# process have expired.
expired_process_ids = [
process_id for process_id, last_update
in self.external_process_last_updated_ms.items()
if now - last_update > EXTERNAL_PROCESS_EXPIRY
]
for process_id in expired_process_ids:
users_to_check.update(
self.external_process_to_current_syncs.pop(process_id, ())
)
self.external_process_last_updated_ms.pop(process_id, None)
states = [
self.user_to_current_state.get(
user_id, UserPresenceState.default(user_id)
)
for user_id in users_to_check
]
timers_fired_counter.inc_by(len(states))
changes = handle_timeouts(
states,
is_mine_fn=self.is_mine_id,
syncing_user_ids=self.get_currently_syncing_users(),
now=now,
)
for user_id in set(users_to_check)
]
timers_fired_counter.inc_by(len(states))
changes = handle_timeouts(
states,
is_mine_fn=self.hs.is_mine_id,
user_to_num_current_syncs=self.user_to_num_current_syncs,
now=now,
)
preserve_fn(self._update_states)(changes)
preserve_fn(self._update_states)(changes)
except:
logger.exception("Exception in _handle_timeouts loop")
@defer.inlineCallbacks
def bump_presence_active_time(self, user):
@@ -365,6 +397,74 @@ class PresenceHandler(BaseHandler):
defer.returnValue(_user_syncing())
def get_currently_syncing_users(self):
"""Get the set of user ids that are currently syncing on this HS.
Returns:
set(str): A set of user_id strings.
"""
syncing_user_ids = {
user_id for user_id, count in self.user_to_num_current_syncs.items()
if count
}
for user_ids in self.external_process_to_current_syncs.values():
syncing_user_ids.update(user_ids)
return syncing_user_ids
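get_currently_syncing_users merges two sources: local per-user sync counts and the per-process user sets reported by external workers. The merge in isolation:

def currently_syncing(local_counts, external_syncs):
    # Users with at least one ongoing local sync...
    syncing = {user_id for user_id, count in local_counts.items() if count}
    # ...plus everyone any external process reported as syncing.
    for user_ids in external_syncs.values():
        syncing |= set(user_ids)
    return syncing

assert currently_syncing(
    {"@a:hs": 1, "@b:hs": 0}, {"worker1": {"@c:hs"}}
) == {"@a:hs", "@c:hs"}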
@defer.inlineCallbacks
def update_external_syncs(self, process_id, syncing_user_ids):
"""Update the syncing users for an external process
Args:
process_id(str): An identifier for the process the users are
syncing against. This allows synapse to process updates
as user start and stop syncing against a given process.
syncing_user_ids(set(str)): The set of user_ids that are
currently syncing on that server.
"""
# Grab the previous list of user_ids that were syncing on that process
prev_syncing_user_ids = (
self.external_process_to_current_syncs.get(process_id, set())
)
# Grab the current presence state for both the users that are syncing
# now and the users that were syncing before this update.
prev_states = yield self.current_state_for_users(
syncing_user_ids | prev_syncing_user_ids
)
updates = []
time_now_ms = self.clock.time_msec()
# For each new user that is syncing check if we need to mark them as
# being online.
for new_user_id in syncing_user_ids - prev_syncing_user_ids:
prev_state = prev_states[new_user_id]
if prev_state.state == PresenceState.OFFLINE:
updates.append(prev_state.copy_and_replace(
state=PresenceState.ONLINE,
last_active_ts=time_now_ms,
last_user_sync_ts=time_now_ms,
))
else:
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
))
# For each user that is still syncing or stopped syncing update the
# last sync time so that we will correctly apply the grace period when
# they stop syncing.
for old_user_id in prev_syncing_user_ids:
prev_state = prev_states[old_user_id]
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
))
yield self._update_states(updates)
# Update the last updated time for the process. We expire the entries
# if we don't receive an update in the given timeframe.
self.external_process_last_updated_ms[process_id] = self.clock.time_msec()
self.external_process_to_current_syncs[process_id] = syncing_user_ids
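An illustrative call sequence for update_external_syncs (the process id and user id are made up): each report replaces the previous set for that process, so newly present users are marked online, and users that drop out get their last_user_sync_ts bumped so the normal grace period applies.

from twisted.internet import defer

@defer.inlineCallbacks
def report_worker_syncs(presence_handler):
    # @alice starts syncing via an external worker...
    yield presence_handler.update_external_syncs(
        "worker1", {"@alice:example.com"}
    )
    # ...and later stops; the empty set starts her offline grace period.
    yield presence_handler.update_external_syncs("worker1", set())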
@defer.inlineCallbacks
def current_state_for_user(self, user_id):
"""Get the current presence state for a user.
@@ -427,7 +527,7 @@ class PresenceHandler(BaseHandler):
hosts_to_states = {}
for room_id, states in room_ids_to_states.items():
local_states = filter(lambda s: self.hs.is_mine_id(s.user_id), states)
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
if not local_states:
continue
@@ -436,11 +536,11 @@ class PresenceHandler(BaseHandler):
hosts_to_states.setdefault(host, []).extend(local_states)
for user_id, states in users_to_states.items():
local_states = filter(lambda s: self.hs.is_mine_id(s.user_id), states)
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
if not local_states:
continue
host = UserID.from_string(user_id).domain
host = get_domain_from_id(user_id)
hosts_to_states.setdefault(host, []).extend(local_states)
# TODO: de-dup hosts_to_states, as a single host might have multiple
@@ -611,14 +711,14 @@ class PresenceHandler(BaseHandler):
# don't need to send to local clients here, as that is done as part
# of the event stream/sync.
# TODO: Only send to servers not already in the room.
if self.hs.is_mine(user):
if self.is_mine(user):
state = yield self.current_state_for_user(user.to_string())
hosts = yield self.store.get_joined_hosts_for_room(room_id)
self._push_to_remotes({host: (state,) for host in hosts})
else:
user_ids = yield self.store.get_users_in_room(room_id)
user_ids = filter(self.hs.is_mine_id, user_ids)
user_ids = filter(self.is_mine_id, user_ids)
states = yield self.current_state_for_users(user_ids)
@@ -628,7 +728,7 @@ class PresenceHandler(BaseHandler):
def get_presence_list(self, observer_user, accepted=None):
"""Returns the presence for all users in their presence list.
"""
if not self.hs.is_mine(observer_user):
if not self.is_mine(observer_user):
raise SynapseError(400, "User is not hosted on this Home Server")
presence_list = yield self.store.get_presence_list(
@@ -659,7 +759,7 @@ class PresenceHandler(BaseHandler):
observer_user.localpart, observed_user.to_string()
)
if self.hs.is_mine(observed_user):
if self.is_mine(observed_user):
yield self.invite_presence(observed_user, observer_user)
else:
yield self.federation.send_edu(
@@ -675,11 +775,11 @@ class PresenceHandler(BaseHandler):
def invite_presence(self, observed_user, observer_user):
"""Handles new presence invites.
"""
if not self.hs.is_mine(observed_user):
if not self.is_mine(observed_user):
raise SynapseError(400, "User is not hosted on this Home Server")
# TODO: Don't auto accept
if self.hs.is_mine(observer_user):
if self.is_mine(observer_user):
yield self.accept_presence(observed_user, observer_user)
else:
self.federation.send_edu(
@@ -742,7 +842,7 @@ class PresenceHandler(BaseHandler):
Returns:
A Deferred.
"""
if not self.hs.is_mine(observer_user):
if not self.is_mine(observer_user):
raise SynapseError(400, "User is not hosted on this Home Server")
yield self.store.del_presence_list(
@@ -834,7 +934,11 @@ def _format_user_presence_state(state, now):
class PresenceEventSource(object):
def __init__(self, hs):
self.hs = hs
# We can't call get_presence_handler here because there's a cycle:
#
# Presence -> Notifier -> PresenceEventSource -> Presence
#
self.get_presence_handler = hs.get_presence_handler
self.clock = hs.get_clock()
self.store = hs.get_datastore()
@@ -860,7 +964,7 @@ class PresenceEventSource(object):
from_key = int(from_key)
room_ids = room_ids or []
presence = self.hs.get_handlers().presence_handler
presence = self.get_presence_handler()
stream_change_cache = self.store.presence_stream_cache
if not room_ids:
@@ -877,13 +981,13 @@ class PresenceEventSource(object):
user_ids_changed = set()
changed = None
if from_key and max_token - from_key < 100:
# For small deltas, its quicker to get all changes and then
# work out if we share a room or they're in our presence list
if from_key:
changed = stream_change_cache.get_all_entities_changed(from_key)
# get_all_entities_changed can return None
if changed is not None:
if changed is not None and len(changed) < 500:
# For small deltas, its quicker to get all changes and then
# work out if we share a room or they're in our presence list
get_updates_counter.inc("stream")
for other_user_id in changed:
if other_user_id in friends:
user_ids_changed.add(other_user_id)
@@ -895,6 +999,8 @@ class PresenceEventSource(object):
else:
# Too many possible updates. Find all users we can see and check
# if any of them have changed.
get_updates_counter.inc("full")
user_ids_to_check = set()
for room_id in room_ids:
users = yield self.store.get_users_in_room(room_id)
@@ -933,15 +1039,14 @@ class PresenceEventSource(object):
return self.get_new_events(user, from_key=None, include_offline=False)
def handle_timeouts(user_states, is_mine_fn, user_to_num_current_syncs, now):
def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now):
"""Checks the presence of users that have timed out and updates as
appropriate.
Args:
user_states(list): List of UserPresenceState's to check.
is_mine_fn (fn): Function that returns if a user_id is ours
user_to_num_current_syncs (dict): Mapping of user_id to number of currently
active syncs.
syncing_user_ids (set): Set of user_ids with active syncs.
now (int): Current time in ms.
Returns:
@@ -952,21 +1057,20 @@ def handle_timeouts(user_states, is_mine_fn, user_to_num_current_syncs, now):
for state in user_states:
is_mine = is_mine_fn(state.user_id)
new_state = handle_timeout(state, is_mine, user_to_num_current_syncs, now)
new_state = handle_timeout(state, is_mine, syncing_user_ids, now)
if new_state:
changes[state.user_id] = new_state
return changes.values()
def handle_timeout(state, is_mine, user_to_num_current_syncs, now):
def handle_timeout(state, is_mine, syncing_user_ids, now):
"""Checks the presence of the user to see if any of the timers have elapsed
Args:
state (UserPresenceState)
is_mine (bool): Whether the user is ours
user_to_num_current_syncs (dict): Mapping of user_id to number of currently
active syncs.
syncing_user_ids (set): Set of user_ids with active syncs.
now (int): Current time in ms.
Returns:
@@ -1000,7 +1104,7 @@ def handle_timeout(state, is_mine, user_to_num_current_syncs, now):
# If there are have been no sync for a while (and none ongoing),
# set presence to offline
if not user_to_num_current_syncs.get(user_id, 0):
if user_id not in syncing_user_ids:
if now - state.last_user_sync_ts > SYNC_ONLINE_TIMEOUT:
state = state.copy_and_replace(
state=PresenceState.OFFLINE,

View File

@@ -17,7 +17,6 @@ from twisted.internet import defer
from synapse.api.errors import SynapseError, AuthError, CodeMessageException
from synapse.types import UserID, Requester
from synapse.util import unwrapFirstError
from ._base import BaseHandler
@@ -27,14 +26,6 @@ import logging
logger = logging.getLogger(__name__)
def changed_presencelike_data(distributor, user, state):
return distributor.fire("changed_presencelike_data", user, state)
def collect_presencelike_data(distributor, user, content):
return distributor.fire("collect_presencelike_data", user, content)
class ProfileHandler(BaseHandler):
def __init__(self, hs):
@@ -45,21 +36,6 @@ class ProfileHandler(BaseHandler):
"profile", self.on_profile_query
)
distributor = hs.get_distributor()
self.distributor = distributor
distributor.declare("collect_presencelike_data")
distributor.declare("changed_presencelike_data")
distributor.observe("registered_user", self.registered_user)
distributor.observe(
"collect_presencelike_data", self.collect_presencelike_data
)
def registered_user(self, user):
return self.store.create_profile(user.localpart)
@defer.inlineCallbacks
def get_displayname(self, target_user):
if self.hs.is_mine(target_user):
@@ -105,10 +81,6 @@ class ProfileHandler(BaseHandler):
target_user.localpart, new_displayname
)
yield changed_presencelike_data(self.distributor, target_user, {
"displayname": new_displayname,
})
yield self._update_join_states(requester)
@defer.inlineCallbacks
@@ -152,30 +124,8 @@ class ProfileHandler(BaseHandler):
target_user.localpart, new_avatar_url
)
yield changed_presencelike_data(self.distributor, target_user, {
"avatar_url": new_avatar_url,
})
yield self._update_join_states(requester)
@defer.inlineCallbacks
def collect_presencelike_data(self, user, state):
if not self.hs.is_mine(user):
defer.returnValue(None)
(displayname, avatar_url) = yield defer.gatherResults(
[
self.store.get_profile_displayname(user.localpart),
self.store.get_profile_avatar_url(user.localpart),
],
consumeErrors=True
).addErrback(unwrapFirstError)
state["displayname"] = displayname
state["avatar_url"] = avatar_url
defer.returnValue(None)
@defer.inlineCallbacks
def on_profile_query(self, args):
user = UserID.from_string(args["user_id"])

View File

@@ -29,6 +29,8 @@ class ReceiptsHandler(BaseHandler):
def __init__(self, hs):
super(ReceiptsHandler, self).__init__(hs)
self.server_name = hs.config.server_name
self.store = hs.get_datastore()
self.hs = hs
self.federation = hs.get_replication_layer()
self.federation.register_edu_handler(
@@ -80,6 +82,9 @@ class ReceiptsHandler(BaseHandler):
def _handle_new_receipts(self, receipts):
"""Takes a list of receipts, stores them and informs the notifier.
"""
min_batch_id = None
max_batch_id = None
for receipt in receipts:
room_id = receipt["room_id"]
receipt_type = receipt["receipt_type"]
@@ -97,10 +102,21 @@ class ReceiptsHandler(BaseHandler):
stream_id, max_persisted_id = res
with PreserveLoggingContext():
self.notifier.on_new_event(
"receipt_key", max_persisted_id, rooms=[room_id]
)
if min_batch_id is None or stream_id < min_batch_id:
min_batch_id = stream_id
if max_batch_id is None or max_persisted_id > max_batch_id:
max_batch_id = max_persisted_id
affected_room_ids = list(set([r["room_id"] for r in receipts]))
with PreserveLoggingContext():
self.notifier.on_new_event(
"receipt_key", max_batch_id, rooms=affected_room_ids
)
# Note that the min here shouldn't be relied upon to be accurate.
self.hs.get_pusherpool().on_new_receipts(
min_batch_id, max_batch_id, affected_room_ids
)
defer.returnValue(True)
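The batching above folds the per-receipt (stream_id, max_persisted_id) results into a single (min, max) pair so the notifier and pusher pool are poked once per batch instead of once per receipt. The fold in isolation:

def batch_range(ids):
    min_batch_id = None
    max_batch_id = None
    for stream_id, max_persisted_id in ids:
        if min_batch_id is None or stream_id < min_batch_id:
            min_batch_id = stream_id
        if max_batch_id is None or max_persisted_id > max_batch_id:
            max_batch_id = max_persisted_id
    return min_batch_id, max_batch_id

assert batch_range([(5, 7), (3, 9)]) == (3, 9)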
@@ -117,12 +133,9 @@ class ReceiptsHandler(BaseHandler):
event_ids = receipt["event_ids"]
data = receipt["data"]
remotedomains = set()
rm_handler = self.hs.get_handlers().room_member_handler
yield rm_handler.fetch_room_distributions_into(
room_id, localusers=None, remotedomains=remotedomains
)
remotedomains = yield self.store.get_joined_hosts_for_room(room_id)
remotedomains = remotedomains.copy()
remotedomains.discard(self.server_name)
logger.debug("Sending receipt to: %r", remotedomains)

View File

@@ -16,7 +16,7 @@
"""Contains functions for registering clients."""
from twisted.internet import defer
from synapse.types import UserID
from synapse.types import UserID, Requester
from synapse.api.errors import (
AuthError, Codes, SynapseError, RegistrationError, InvalidCaptchaError
)
@@ -30,18 +30,12 @@ import urllib
logger = logging.getLogger(__name__)
def registered_user(distributor, user):
return distributor.fire("registered_user", user)
class RegistrationHandler(BaseHandler):
def __init__(self, hs):
super(RegistrationHandler, self).__init__(hs)
self.auth = hs.get_auth()
self.distributor = hs.get_distributor()
self.distributor.declare("registered_user")
self.captcha_client = CaptchaServerHttpClient(hs)
self._next_generated_user_id = None
@@ -96,7 +90,8 @@ class RegistrationHandler(BaseHandler):
password=None,
generate_token=True,
guest_access_token=None,
make_guest=False
make_guest=False,
admin=False,
):
"""Registers a new client on the server.
@@ -143,9 +138,12 @@ class RegistrationHandler(BaseHandler):
password_hash=password_hash,
was_guest=was_guest,
make_guest=make_guest,
create_profile_with_localpart=(
# If the user was a guest then they already have a profile
None if was_guest else user.localpart
),
admin=admin,
)
yield registered_user(self.distributor, user)
else:
# autogen a sequential user ID
attempts = 0
@@ -163,7 +161,8 @@ class RegistrationHandler(BaseHandler):
user_id=user_id,
token=token,
password_hash=password_hash,
make_guest=make_guest
make_guest=make_guest,
create_profile_with_localpart=user.localpart,
)
except SynapseError:
# if user id is taken, just generate another
@@ -171,7 +170,6 @@ class RegistrationHandler(BaseHandler):
user_id = None
token = None
attempts += 1
yield registered_user(self.distributor, user)
# We used to generate default identicons here, but nowadays
# we want clients to generate their own as part of their branding
@@ -204,8 +202,8 @@ class RegistrationHandler(BaseHandler):
token=token,
password_hash="",
appservice_id=service_id,
create_profile_with_localpart=user.localpart,
)
yield registered_user(self.distributor, user)
defer.returnValue((user_id, token))
@defer.inlineCallbacks
@@ -251,9 +249,9 @@ class RegistrationHandler(BaseHandler):
yield self.store.register(
user_id=user_id,
token=token,
password_hash=None
password_hash=None,
create_profile_with_localpart=user.localpart,
)
yield registered_user(self.distributor, user)
except Exception as e:
yield self.store.add_access_token_to_user(user_id, token)
# Ignore Registration errors
@@ -361,8 +359,62 @@ class RegistrationHandler(BaseHandler):
)
defer.returnValue(data)
@defer.inlineCallbacks
def get_or_create_user(self, localpart, displayname, duration_seconds,
password_hash=None):
"""Creates a new user if the user does not exist,
else revokes all previous access tokens and generates a new one.
Args:
localpart : The local part of the user ID to register. Must be
supplied; a 400 error is raised if it is None.
Returns:
A tuple of (user_id, access_token).
Raises:
RegistrationError if there was a problem registering.
"""
yield run_on_reactor()
if localpart is None:
raise SynapseError(400, "Request must include user id")
need_register = True
try:
yield self.check_username(localpart)
except SynapseError as e:
if e.errcode == Codes.USER_IN_USE:
need_register = False
else:
raise
user = UserID(localpart, self.hs.hostname)
user_id = user.to_string()
token = self.auth_handler().generate_short_term_login_token(
user_id, duration_seconds)
if need_register:
yield self.store.register(
user_id=user_id,
token=token,
password_hash=password_hash,
create_profile_with_localpart=user.localpart,
)
else:
yield self.store.user_delete_access_tokens(user_id=user_id)
yield self.store.add_access_token_to_user(user_id=user_id, token=token)
if displayname is not None:
logger.info("setting user display name: %s -> %s", user_id, displayname)
profile_handler = self.hs.get_handlers().profile_handler
yield profile_handler.set_displayname(
user, Requester(user, token, False), displayname
)
defer.returnValue((user_id, token))
def auth_handler(self):
return self.hs.get_handlers().auth_handler
return self.hs.get_auth_handler()
@defer.inlineCallbacks
def guest_access_token_for(self, medium, address, inviter_user_id):

View File

@@ -18,19 +18,17 @@ from twisted.internet import defer
from ._base import BaseHandler
from synapse.types import UserID, RoomAlias, RoomID, RoomStreamToken, Requester
from synapse.types import UserID, RoomAlias, RoomID, RoomStreamToken
from synapse.api.constants import (
EventTypes, Membership, JoinRules, RoomCreationPreset,
EventTypes, JoinRules, RoomCreationPreset, Membership,
)
from synapse.api.errors import AuthError, StoreError, SynapseError, Codes
from synapse.util import stringutils, unwrapFirstError
from synapse.util.logcontext import preserve_context_over_fn
from signedjson.sign import verify_signed_json
from signedjson.key import decode_verify_key_bytes
from synapse.api.errors import AuthError, StoreError, SynapseError
from synapse.util import stringutils
from synapse.util.async import concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
from synapse.visibility import filter_events_for_client
from collections import OrderedDict
from unpaddedbase64 import decode_base64
import logging
import math
@@ -38,23 +36,11 @@ import string
logger = logging.getLogger(__name__)
REMOTE_ROOM_LIST_POLL_INTERVAL = 60 * 1000
id_server_scheme = "https://"
def user_left_room(distributor, user, room_id):
return preserve_context_over_fn(
distributor.fire,
"user_left_room", user=user, room_id=room_id
)
def user_joined_room(distributor, user, room_id):
return preserve_context_over_fn(
distributor.fire,
"user_joined_room", user=user, room_id=room_id
)
class RoomCreationHandler(BaseHandler):
PRESETS_DICT = {
@@ -356,666 +342,147 @@ class RoomCreationHandler(BaseHandler):
)
class RoomMemberHandler(BaseHandler):
# TODO(paul): This handler currently contains a messy conflation of
# low-level API that works on UserID objects and so on, and REST-level
# API that takes ID strings and returns pagination chunks. These concerns
# ought to be separated out a lot better.
def __init__(self, hs):
super(RoomMemberHandler, self).__init__(hs)
self.clock = hs.get_clock()
self.distributor = hs.get_distributor()
self.distributor.declare("user_joined_room")
self.distributor.declare("user_left_room")
@defer.inlineCallbacks
def get_room_members(self, room_id):
users = yield self.store.get_users_in_room(room_id)
defer.returnValue([UserID.from_string(u) for u in users])
@defer.inlineCallbacks
def fetch_room_distributions_into(self, room_id, localusers=None,
remotedomains=None, ignore_user=None):
"""Fetch the distribution of a room, adding elements to either
'localusers' or 'remotedomains', which should be a set() if supplied.
If ignore_user is set, ignore that user.
This function returns nothing; it works by side-effect on the two
passed sets. This allows easy accumulation of member lists of
multiple rooms at once if required.
"""
members = yield self.get_room_members(room_id)
for member in members:
if ignore_user is not None and member == ignore_user:
continue
if self.hs.is_mine(member):
if localusers is not None:
localusers.add(member)
else:
if remotedomains is not None:
remotedomains.add(member.domain)
@defer.inlineCallbacks
def update_membership(
self,
requester,
target,
room_id,
action,
txn_id=None,
remote_room_hosts=None,
third_party_signed=None,
ratelimit=True,
):
effective_membership_state = action
if action in ["kick", "unban"]:
effective_membership_state = "leave"
elif action == "forget":
effective_membership_state = "leave"
if third_party_signed is not None:
replication = self.hs.get_replication_layer()
yield replication.exchange_third_party_invite(
third_party_signed["sender"],
target.to_string(),
room_id,
third_party_signed,
)
msg_handler = self.hs.get_handlers().message_handler
content = {"membership": effective_membership_state}
if requester.is_guest:
content["kind"] = "guest"
event, context = yield msg_handler.create_event(
{
"type": EventTypes.Member,
"content": content,
"room_id": room_id,
"sender": requester.user.to_string(),
"state_key": target.to_string(),
# For backwards compatibility:
"membership": effective_membership_state,
},
token_id=requester.access_token_id,
txn_id=txn_id,
)
old_state = context.current_state.get((EventTypes.Member, event.state_key))
old_membership = old_state.content.get("membership") if old_state else None
if action == "unban" and old_membership != "ban":
raise SynapseError(
403,
"Cannot unban user who was not banned (membership=%s)" % old_membership,
errcode=Codes.BAD_STATE
)
if old_membership == "ban" and action != "unban":
raise SynapseError(
403,
"Cannot %s user who was is banned" % (action,),
errcode=Codes.BAD_STATE
)
member_handler = self.hs.get_handlers().room_member_handler
yield member_handler.send_membership_event(
requester,
event,
context,
ratelimit=ratelimit,
remote_room_hosts=remote_room_hosts,
)
if action == "forget":
yield self.forget(requester.user, room_id)
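The action handling at the top of update_membership collapses several REST-level actions onto membership states: kick, unban and forget all become "leave" events, with "forget" additionally forgetting the room after the event is sent. As a small table-in-code:

def effective_membership(action):
    if action in ("kick", "unban", "forget"):
        return "leave"
    return action

assert effective_membership("kick") == "leave"
assert effective_membership("join") == "join"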
@defer.inlineCallbacks
def send_membership_event(
self,
requester,
event,
context,
remote_room_hosts=None,
ratelimit=True,
):
"""
Change the membership status of a user in a room.
Args:
requester (Requester): The local user who requested the membership
event. If None, certain checks, like whether this homeserver can
act as the sender, will be skipped.
event (SynapseEvent): The membership event.
context: The context of the event.
remote_room_hosts ([str]): Homeservers which are likely to already be in
the room, and could be danced with in order to join this
homeserver for the first time.
ratelimit (bool): Whether to rate limit this request.
Raises:
SynapseError if there was a problem changing the membership.
"""
remote_room_hosts = remote_room_hosts or []
target_user = UserID.from_string(event.state_key)
room_id = event.room_id
if requester is not None:
sender = UserID.from_string(event.sender)
assert sender == requester.user, (
"Sender (%s) must be same as requester (%s)" %
(sender, requester.user)
)
assert self.hs.is_mine(sender), "Sender must be our own: %s" % (sender,)
else:
requester = Requester(target_user, None, False)
message_handler = self.hs.get_handlers().message_handler
prev_event = message_handler.deduplicate_state_event(event, context)
if prev_event is not None:
return
action = "send"
if event.membership == Membership.JOIN:
if requester.is_guest and not self._can_guest_join(context.current_state):
# This should be an auth check, but guests are a local concept,
# so don't really fit into the general auth process.
raise AuthError(403, "Guest access not allowed")
do_remote_join_dance, remote_room_hosts = self._should_do_dance(
context,
(self.get_inviter(event.state_key, context.current_state)),
remote_room_hosts,
)
if do_remote_join_dance:
action = "remote_join"
elif event.membership == Membership.LEAVE:
is_host_in_room = self.is_host_in_room(context.current_state)
if not is_host_in_room:
# perhaps we've been invited
inviter = self.get_inviter(target_user.to_string(), context.current_state)
if not inviter:
raise SynapseError(404, "Not a known room")
if self.hs.is_mine(inviter):
# the inviter was on our server, but has now left. Carry on
# with the normal rejection codepath.
#
# This is a bit of a hack, because the room might still be
# active on other servers.
pass
else:
# send the rejection to the inviter's HS.
remote_room_hosts = remote_room_hosts + [inviter.domain]
action = "remote_reject"
federation_handler = self.hs.get_handlers().federation_handler
if action == "remote_join":
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
# We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
yield federation_handler.do_invite_join(
remote_room_hosts,
event.room_id,
event.user_id,
event.content,
)
elif action == "remote_reject":
yield federation_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
event.user_id
)
else:
yield self.handle_new_client_event(
requester,
event,
context,
extra_users=[target_user],
ratelimit=ratelimit,
)
prev_member_event = context.current_state.get(
(EventTypes.Member, target_user.to_string()),
None
)
if event.membership == Membership.JOIN:
if not prev_member_event or prev_member_event.membership != Membership.JOIN:
# Only fire user_joined_room if the user has actually joined the
# room. Don't bother if the user is just changing their profile
# info.
yield user_joined_room(self.distributor, target_user, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event and prev_member_event.membership == Membership.JOIN:
user_left_room(self.distributor, target_user, room_id)
def _can_guest_join(self, current_state):
"""
Returns whether a guest can join a room based on its current state.
"""
guest_access = current_state.get((EventTypes.GuestAccess, ""), None)
return (
guest_access
and guest_access.content
and "guest_access" in guest_access.content
and guest_access.content["guest_access"] == "can_join"
)
def _should_do_dance(self, context, inviter, room_hosts=None):
# TODO: Shouldn't this be remote_room_host?
room_hosts = room_hosts or []
is_host_in_room = self.is_host_in_room(context.current_state)
if is_host_in_room:
return False, room_hosts
if inviter and not self.hs.is_mine(inviter):
room_hosts.append(inviter.domain)
return True, room_hosts
@defer.inlineCallbacks
def lookup_room_alias(self, room_alias):
"""
Get the room ID associated with a room alias.
Args:
room_alias (RoomAlias): The alias to look up.
Returns:
A tuple of:
The room ID as a RoomID object.
Hosts likely to be participating in the room ([str]).
Raises:
SynapseError if room alias could not be found.
"""
directory_handler = self.hs.get_handlers().directory_handler
mapping = yield directory_handler.get_association(room_alias)
if not mapping:
raise SynapseError(404, "No such room alias")
room_id = mapping["room_id"]
servers = mapping["servers"]
defer.returnValue((RoomID.from_string(room_id), servers))
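The (room ID, servers) pair exists because the caller may need somewhere to send the join if this homeserver is not yet in the room. A hypothetical call site (join_by_alias is illustrative, not part of the codebase):

from twisted.internet import defer

# Hypothetical caller; it shows the (RoomID, servers) tuple shape
# documented above.
@defer.inlineCallbacks
def join_by_alias(member_handler, room_alias):
    room_id, servers = yield member_handler.lookup_room_alias(room_alias)
    # servers are candidate remote_room_hosts if we are not yet in the room.
    defer.returnValue((room_id.to_string(), servers))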
def get_inviter(self, user_id, current_state):
prev_state = current_state.get((EventTypes.Member, user_id))
if prev_state and prev_state.membership == Membership.INVITE:
return UserID.from_string(prev_state.user_id)
return None
@defer.inlineCallbacks
def get_joined_rooms_for_user(self, user):
"""Returns a list of roomids that the user has any of the given
membership states in."""
rooms = yield self.store.get_rooms_for_user(
user.to_string(),
)
# For some reason the list of events contains duplicates
# TODO(paul): work out why because I really don't think it should
room_ids = set(r.room_id for r in rooms)
defer.returnValue(room_ids)
@defer.inlineCallbacks
def do_3pid_invite(
self,
room_id,
inviter,
medium,
address,
id_server,
requester,
txn_id
):
invitee = yield self._lookup_3pid(
id_server, medium, address
)
if invitee:
handler = self.hs.get_handlers().room_member_handler
yield handler.update_membership(
requester,
UserID.from_string(invitee),
room_id,
"invite",
txn_id=txn_id,
)
else:
yield self._make_and_store_3pid_invite(
requester,
id_server,
medium,
address,
room_id,
inviter,
txn_id=txn_id
)
@defer.inlineCallbacks
def _lookup_3pid(self, id_server, medium, address):
"""Looks up a 3pid in the passed identity server.
Args:
id_server (str): The server name (including port, if required)
of the identity server to use.
medium (str): The type of the third party identifier (e.g. "email").
address (str): The third party identifier (e.g. "foo@example.com").
Returns:
(str) the matrix ID of the 3pid, or None if it is not recognized.
"""
try:
data = yield self.hs.get_simple_http_client().get_json(
"%s%s/_matrix/identity/api/v1/lookup" % (id_server_scheme, id_server,),
{
"medium": medium,
"address": address,
}
)
if "mxid" in data:
if "signatures" not in data:
raise AuthError(401, "No signatures on 3pid binding")
self.verify_any_signature(data, id_server)
defer.returnValue(data["mxid"])
except IOError as e:
logger.warn("Error from identity server lookup: %s" % (e,))
defer.returnValue(None)
@defer.inlineCallbacks
def verify_any_signature(self, data, server_hostname):
if server_hostname not in data["signatures"]:
raise AuthError(401, "No signature from server %s" % (server_hostname,))
for key_name, signature in data["signatures"][server_hostname].items():
key_data = yield self.hs.get_simple_http_client().get_json(
"%s%s/_matrix/identity/api/v1/pubkey/%s" %
(id_server_scheme, server_hostname, key_name,),
)
if "public_key" not in key_data:
raise AuthError(401, "No public key named %s from %s" %
(key_name, server_hostname,))
verify_signed_json(
data,
server_hostname,
decode_verify_key_bytes(key_name, decode_base64(key_data["public_key"]))
)
return
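verify_any_signature accepts the binding as long as any one key the identity server advertises verifies the JSON. The underlying primitives can be exercised in isolation; a signing round trip, assuming nothing beyond the signedjson package already imported by this module:

# Self-contained sign/verify round trip with signedjson, the same library
# used above; a tampered object makes verify_signed_json raise.
from signedjson.key import generate_signing_key, get_verify_key
from signedjson.sign import sign_json, verify_signed_json

signing_key = generate_signing_key("0")  # key version "0" -> id "ed25519:0"
signed = sign_json({"mxid": "@alice:example.com"}, "example.com", signing_key)
verify_signed_json(signed, "example.com", get_verify_key(signing_key))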
@defer.inlineCallbacks
def _make_and_store_3pid_invite(
self,
requester,
id_server,
medium,
address,
room_id,
user,
txn_id
):
room_state = yield self.hs.get_state_handler().get_current_state(room_id)
inviter_display_name = ""
inviter_avatar_url = ""
member_event = room_state.get((EventTypes.Member, user.to_string()))
if member_event:
inviter_display_name = member_event.content.get("displayname", "")
inviter_avatar_url = member_event.content.get("avatar_url", "")
canonical_room_alias = ""
canonical_alias_event = room_state.get((EventTypes.CanonicalAlias, ""))
if canonical_alias_event:
canonical_room_alias = canonical_alias_event.content.get("alias", "")
room_name = ""
room_name_event = room_state.get((EventTypes.Name, ""))
if room_name_event:
room_name = room_name_event.content.get("name", "")
room_join_rules = ""
join_rules_event = room_state.get((EventTypes.JoinRules, ""))
if join_rules_event:
room_join_rules = join_rules_event.content.get("join_rule", "")
room_avatar_url = ""
room_avatar_event = room_state.get((EventTypes.RoomAvatar, ""))
if room_avatar_event:
room_avatar_url = room_avatar_event.content.get("url", "")
token, public_keys, fallback_public_key, display_name = (
yield self._ask_id_server_for_third_party_invite(
id_server=id_server,
medium=medium,
address=address,
room_id=room_id,
inviter_user_id=user.to_string(),
room_alias=canonical_room_alias,
room_avatar_url=room_avatar_url,
room_join_rules=room_join_rules,
room_name=room_name,
inviter_display_name=inviter_display_name,
inviter_avatar_url=inviter_avatar_url
)
)
msg_handler = self.hs.get_handlers().message_handler
yield msg_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.ThirdPartyInvite,
"content": {
"display_name": display_name,
"public_keys": public_keys,
# For backwards compatibility:
"key_validity_url": fallback_public_key["key_validity_url"],
"public_key": fallback_public_key["public_key"],
},
"room_id": room_id,
"sender": user.to_string(),
"state_key": token,
},
txn_id=txn_id,
)
@defer.inlineCallbacks
def _ask_id_server_for_third_party_invite(
self,
id_server,
medium,
address,
room_id,
inviter_user_id,
room_alias,
room_avatar_url,
room_join_rules,
room_name,
inviter_display_name,
inviter_avatar_url
):
"""
Asks an identity server for a third party invite.
:param id_server (str): hostname + optional port for the identity server.
:param medium (str): The literal string "email".
:param address (str): The third party address being invited.
:param room_id (str): The ID of the room to which the user is invited.
:param inviter_user_id (str): The user ID of the inviter.
:param room_alias (str): An alias for the room, for cosmetic
notifications.
:param room_avatar_url (str): The URL of the room's avatar, for cosmetic
notifications.
:param room_join_rules (str): The join rules of the room
(e.g. "public").
:param room_name (str): The m.room.name of the room.
:param inviter_display_name (str): The current display name of the
inviter.
:param inviter_avatar_url (str): The URL of the inviter's avatar.
:return: A deferred tuple containing:
token (str): The token which must be signed to prove authenticity.
public_keys ([{"public_key": str, "key_validity_url": str}]):
public_key is a base64-encoded ed25519 public key.
fallback_public_key: One element from public_keys.
display_name (str): A user-friendly name to represent the invited
user.
"""
is_url = "%s%s/_matrix/identity/api/v1/store-invite" % (
id_server_scheme, id_server,
)
invite_config = {
"medium": medium,
"address": address,
"room_id": room_id,
"room_alias": room_alias,
"room_avatar_url": room_avatar_url,
"room_join_rules": room_join_rules,
"room_name": room_name,
"sender": inviter_user_id,
"sender_display_name": inviter_display_name,
"sender_avatar_url": inviter_avatar_url,
}
if self.hs.config.invite_3pid_guest:
registration_handler = self.hs.get_handlers().registration_handler
guest_access_token = yield registration_handler.guest_access_token_for(
medium=medium,
address=address,
inviter_user_id=inviter_user_id,
)
guest_user_info = yield self.hs.get_auth().get_user_by_access_token(
guest_access_token
)
invite_config.update({
"guest_access_token": guest_access_token,
"guest_user_id": guest_user_info["user"].to_string(),
})
data = yield self.hs.get_simple_http_client().post_urlencoded_get_json(
is_url,
invite_config
)
# TODO: Check for success
token = data["token"]
public_keys = data.get("public_keys", [])
if "public_key" in data:
fallback_public_key = {
"public_key": data["public_key"],
"key_validity_url": "%s%s/_matrix/identity/api/v1/pubkey/isvalid" % (
id_server_scheme, id_server,
),
}
else:
fallback_public_key = public_keys[0]
if not public_keys:
public_keys.append(fallback_public_key)
display_name = data["display_name"]
defer.returnValue((token, public_keys, fallback_public_key, display_name))
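The tail of this method enforces two invariants relied on by the event content built earlier: fallback_public_key always carries both public_key and key_validity_url, and public_keys is never empty. The same normalisation as a pure function (the response dicts and helper name are illustrative):

# Illustrative, stand-alone version of the key normalisation above.
def normalise_invite_keys(data, id_server):
    public_keys = data.get("public_keys", [])
    if "public_key" in data:
        fallback = {
            "public_key": data["public_key"],
            "key_validity_url":
                "https://%s/_matrix/identity/api/v1/pubkey/isvalid" % id_server,
        }
    else:
        fallback = public_keys[0]  # assumes the server sent at least one key
    if not public_keys:
        public_keys.append(fallback)
    return public_keys, fallback

keys, fb = normalise_invite_keys({"public_key": "base64+key"}, "id.example.com")
assert keys == [fb] and "key_validity_url" in fb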
def forget(self, user, room_id):
return self.store.forget(user.to_string(), room_id)
class RoomListHandler(BaseHandler):
def __init__(self, hs):
super(RoomListHandler, self).__init__(hs)
self.response_cache = ResponseCache()
self.remote_list_request_cache = ResponseCache()
self.remote_list_cache = {}
self.fetch_looping_call = hs.get_clock().looping_call(
self.fetch_all_remote_lists, REMOTE_ROOM_LIST_POLL_INTERVAL
)
self.fetch_all_remote_lists()
def get_local_public_room_list(self):
result = self.response_cache.get(())
if not result:
result = self.response_cache.set((), self._get_public_room_list())
return result
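The get/set dance above coalesces concurrent callers onto one in-flight computation instead of recomputing the room list per request. A simplified stand-in for that pattern (Synapse's ResponseCache additionally wraps the deferred so many callers can each observe the result; this sketch returns the shared deferred directly):

# Simplified stand-in for the request-coalescing cache; not Synapse's
# ResponseCache, which hands each caller its own observer on the result.
class TinyResponseCache(object):
    def __init__(self):
        self._pending = {}

    def get(self, key):
        return self._pending.get(key)

    def set(self, key, deferred):
        self._pending[key] = deferred

        def uncache(result):
            self._pending.pop(key, None)
            return result

        deferred.addBoth(uncache)
        return deferred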
@defer.inlineCallbacks
def get_public_room_list(self):
def _get_public_room_list(self):
room_ids = yield self.store.get_public_room_ids()
results = []
@defer.inlineCallbacks
def handle_room(room_id):
aliases = yield self.store.get_aliases_for_room(room_id)
# We pull each bit of state out individually to avoid pulling the
# full state into memory. Due to how the caching works this should
# be fairly quick, even if not originally in the cache.
def get_state(etype, state_key):
return self.state_handler.get_current_state(room_id, etype, state_key)
current_state = yield self.state_handler.get_current_state(room_id)
# Double check that this is actually a public room.
join_rules_event = yield get_state(EventTypes.JoinRules, "")
join_rules_event = current_state.get((EventTypes.JoinRules, ""))
if join_rules_event:
join_rule = join_rules_event.content.get("join_rule", None)
if join_rule and join_rule != JoinRules.PUBLIC:
defer.returnValue(None)
result = {"room_id": room_id}
num_joined_users = len([
1 for _, event in current_state.items()
if event.type == EventTypes.Member
and event.membership == Membership.JOIN
])
if num_joined_users == 0:
return
result["num_joined_members"] = num_joined_users
aliases = yield self.store.get_aliases_for_room(room_id)
if aliases:
result["aliases"] = aliases
name_event = yield get_state(EventTypes.Name, "")
name_event = yield current_state.get((EventTypes.Name, ""))
if name_event:
name = name_event.content.get("name", None)
if name:
result["name"] = name
topic_event = yield get_state(EventTypes.Topic, "")
topic_event = current_state.get((EventTypes.Topic, ""))
if topic_event:
topic = topic_event.content.get("topic", None)
if topic:
result["topic"] = topic
canonical_event = yield get_state(EventTypes.CanonicalAlias, "")
canonical_event = current_state.get((EventTypes.CanonicalAlias, ""))
if canonical_event:
canonical_alias = canonical_event.content.get("alias", None)
if canonical_alias:
result["canonical_alias"] = canonical_alias
visibility_event = yield get_state(EventTypes.RoomHistoryVisibility, "")
visibility_event = current_state.get((EventTypes.RoomHistoryVisibility, ""))
visibility = None
if visibility_event:
visibility = visibility_event.content.get("history_visibility", None)
result["world_readable"] = visibility == "world_readable"
guest_event = yield get_state(EventTypes.GuestAccess, "")
guest_event = current_state.get((EventTypes.GuestAccess, ""))
guest = None
if guest_event:
guest = guest_event.content.get("guest_access", None)
result["guest_can_join"] = guest == "can_join"
avatar_event = yield get_state("m.room.avatar", "")
avatar_event = current_state.get(("m.room.avatar", ""))
if avatar_event:
avatar_url = avatar_event.content.get("url", None)
if avatar_url:
result["avatar_url"] = avatar_url
joined_users = yield self.store.get_users_in_room(room_id)
result["num_joined_members"] = len(joined_users)
results.append(result)
defer.returnValue(result)
result = []
for chunk in (room_ids[i:i + 10] for i in xrange(0, len(room_ids), 10)):
chunk_result = yield defer.gatherResults([
handle_room(room_id)
for room_id in chunk
], consumeErrors=True).addErrback(unwrapFirstError)
result.extend(v for v in chunk_result if v)
yield concurrently_execute(handle_room, room_ids, 10)
# FIXME (erikj): START is no longer a valid value
defer.returnValue({"start": "START", "end": "END", "chunk": result})
defer.returnValue({"start": "START", "end": "END", "chunk": results})
@defer.inlineCallbacks
def fetch_all_remote_lists(self):
deferred = self.hs.get_replication_layer().get_public_rooms(
self.hs.config.secondary_directory_servers
)
self.remote_list_request_cache.set((), deferred)
self.remote_list_cache = yield deferred
@defer.inlineCallbacks
def get_aggregated_public_room_list(self):
"""
Get the public room list from this server and the servers
specified in the secondary_directory_servers config option.
XXX: Pagination...
"""
# We return the results from our cache which is updated by a looping call,
# unless we're missing a cache entry, in which case wait for the result
# of the fetch if there's one in progress. If not, omit that server.
wait = False
for s in self.hs.config.secondary_directory_servers:
if s not in self.remote_list_cache:
logger.warn("No cached room list from %s: waiting for fetch", s)
wait = True
break
if wait and self.remote_list_request_cache.get(()):
yield self.remote_list_request_cache.get(())
public_rooms = yield self.get_local_public_room_list()
# keep track of which room IDs we've seen so we can de-dup
room_ids = set()
# tag all the ones in our list with our server name.
# Also add them to the de-duping set
for room in public_rooms['chunk']:
room["server_name"] = self.hs.hostname
room_ids.add(room["room_id"])
# Now add the results from federation
for server_name, server_result in self.remote_list_cache.items():
for room in server_result["chunk"]:
if room["room_id"] not in room_ids:
room["server_name"] = server_name
public_rooms["chunk"].append(room)
room_ids.add(room["room_id"])
defer.returnValue(public_rooms)
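So the merge tags every local room with this server's name, then folds in each remote server's chunk, skipping any room ID already seen; a room published on several servers is listed once, attributed to the first server that reported it. The same fold as a pure function (names are illustrative):

# Pure-function version of the de-duplicating merge above (names are
# illustrative; chunks are lists of room dicts with a "room_id" key).
def merge_room_lists(local_chunk, local_server_name, remote_chunks):
    merged, seen = [], set()
    for room in local_chunk:
        room["server_name"] = local_server_name
        seen.add(room["room_id"])
        merged.append(room)
    for server_name, result in remote_chunks.items():
        for room in result["chunk"]:
            if room["room_id"] not in seen:
                room["server_name"] = server_name
                seen.add(room["room_id"])
                merged.append(room)
    return merged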
class RoomContextHandler(BaseHandler):
@@ -1040,10 +507,12 @@ class RoomContextHandler(BaseHandler):
now_token = yield self.hs.get_event_sources().get_current_token()
def filter_evts(events):
return self._filter_events_for_client(
return filter_events_for_client(
self.store,
user.to_string(),
events,
is_peeking=is_guest)
is_peeking=is_guest
)
event = yield self.store.get_event(event_id, get_prev_content=True,
allow_none=True)


@@ -0,0 +1,677 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from ._base import BaseHandler
from synapse.types import UserID, RoomID, Requester
from synapse.api.constants import (
EventTypes, Membership,
)
from synapse.api.errors import AuthError, SynapseError, Codes
from synapse.util.async import Linearizer
from synapse.util.distributor import user_left_room, user_joined_room
from signedjson.sign import verify_signed_json
from signedjson.key import decode_verify_key_bytes
from unpaddedbase64 import decode_base64
import logging
logger = logging.getLogger(__name__)
id_server_scheme = "https://"
class RoomMemberHandler(BaseHandler):
# TODO(paul): This handler currently contains a messy conflation of
# low-level API that works on UserID objects and so on, and REST-level
# API that takes ID strings and returns pagination chunks. These concerns
# ought to be separated out a lot better.
def __init__(self, hs):
super(RoomMemberHandler, self).__init__(hs)
self.member_linearizer = Linearizer()
self.clock = hs.get_clock()
self.distributor = hs.get_distributor()
self.distributor.declare("user_joined_room")
self.distributor.declare("user_left_room")
@defer.inlineCallbacks
def _local_membership_update(
self, requester, target, room_id, membership,
prev_event_ids,
txn_id=None,
ratelimit=True,
):
msg_handler = self.hs.get_handlers().message_handler
content = {"membership": membership}
if requester.is_guest:
content["kind"] = "guest"
event, context = yield msg_handler.create_event(
{
"type": EventTypes.Member,
"content": content,
"room_id": room_id,
"sender": requester.user.to_string(),
"state_key": target.to_string(),
# For backwards compatibility:
"membership": membership,
},
token_id=requester.access_token_id,
txn_id=txn_id,
prev_event_ids=prev_event_ids,
)
yield msg_handler.handle_new_client_event(
requester,
event,
context,
extra_users=[target],
ratelimit=ratelimit,
)
prev_member_event = context.current_state.get(
(EventTypes.Member, target.to_string()),
None
)
if event.membership == Membership.JOIN:
if not prev_member_event or prev_member_event.membership != Membership.JOIN:
# Only fire user_joined_room if the user has actually joined the
# room. Don't bother if the user is just changing their profile
# info.
yield user_joined_room(self.distributor, target, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event and prev_member_event.membership == Membership.JOIN:
user_left_room(self.distributor, target, room_id)
@defer.inlineCallbacks
def remote_join(self, remote_room_hosts, room_id, user, content):
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
# We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
yield self.hs.get_handlers().federation_handler.do_invite_join(
remote_room_hosts,
room_id,
user.to_string(),
content,
)
yield user_joined_room(self.distributor, user, room_id)
def reject_remote_invite(self, user_id, room_id, remote_room_hosts):
return self.hs.get_handlers().federation_handler.do_remotely_reject_invite(
remote_room_hosts,
room_id,
user_id
)
@defer.inlineCallbacks
def update_membership(
self,
requester,
target,
room_id,
action,
txn_id=None,
remote_room_hosts=None,
third_party_signed=None,
ratelimit=True,
):
key = (target, room_id,)
with (yield self.member_linearizer.queue(key)):
result = yield self._update_membership(
requester,
target,
room_id,
action,
txn_id=txn_id,
remote_room_hosts=remote_room_hosts,
third_party_signed=third_party_signed,
ratelimit=ratelimit,
)
defer.returnValue(result)
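The Linearizer gives each (target, room_id) pair its own queue, so two racing membership requests for the same user and room run strictly one after the other, while different pairs proceed in parallel. A simplified per-key stand-in built on Twisted's DeferredLock, matching the with (yield queue(key)) shape used above:

# Simplified stand-in for the per-key Linearizer, built on DeferredLock;
# usage matches the code above: with (yield linearizer.queue(key)): ...
from contextlib import contextmanager
from twisted.internet import defer

class TinyLinearizer(object):
    def __init__(self):
        self._locks = {}

    @defer.inlineCallbacks
    def queue(self, key):
        lock = self._locks.setdefault(key, defer.DeferredLock())
        yield lock.acquire()

        @contextmanager
        def ctx():
            try:
                yield
            finally:
                lock.release()

        defer.returnValue(ctx())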
@defer.inlineCallbacks
def _update_membership(
self,
requester,
target,
room_id,
action,
txn_id=None,
remote_room_hosts=None,
third_party_signed=None,
ratelimit=True,
):
effective_membership_state = action
if action in ["kick", "unban"]:
effective_membership_state = "leave"
if third_party_signed is not None:
replication = self.hs.get_replication_layer()
yield replication.exchange_third_party_invite(
third_party_signed["sender"],
target.to_string(),
room_id,
third_party_signed,
)
if not remote_room_hosts:
remote_room_hosts = []
latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
current_state = yield self.state_handler.get_current_state(
room_id, latest_event_ids=latest_event_ids,
)
old_state = current_state.get((EventTypes.Member, target.to_string()))
old_membership = old_state.content.get("membership") if old_state else None
if action == "unban" and old_membership != "ban":
raise SynapseError(
403,
"Cannot unban user who was not banned (membership=%s)" % old_membership,
errcode=Codes.BAD_STATE
)
if old_membership == "ban" and action != "unban":
raise SynapseError(
403,
"Cannot %s user who was banned" % (action,),
errcode=Codes.BAD_STATE
)
is_host_in_room = self.is_host_in_room(current_state)
if effective_membership_state == Membership.JOIN:
if requester.is_guest and not self._can_guest_join(current_state):
# This should be an auth check, but guests are a local concept,
# so don't really fit into the general auth process.
raise AuthError(403, "Guest access not allowed")
if not is_host_in_room:
inviter = yield self.get_inviter(target.to_string(), room_id)
if inviter and not self.hs.is_mine(inviter):
remote_room_hosts.append(inviter.domain)
content = {"membership": Membership.JOIN}
profile = self.hs.get_handlers().profile_handler
content["displayname"] = yield profile.get_displayname(target)
content["avatar_url"] = yield profile.get_avatar_url(target)
if requester.is_guest:
content["kind"] = "guest"
ret = yield self.remote_join(
remote_room_hosts, room_id, target, content
)
defer.returnValue(ret)
elif effective_membership_state == Membership.LEAVE:
if not is_host_in_room:
# perhaps we've been invited
inviter = yield self.get_inviter(target.to_string(), room_id)
if not inviter:
raise SynapseError(404, "Not a known room")
if self.hs.is_mine(inviter):
# the inviter was on our server, but has now left. Carry on
# with the normal rejection codepath.
#
# This is a bit of a hack, because the room might still be
# active on other servers.
pass
else:
# send the rejection to the inviter's HS.
remote_room_hosts = remote_room_hosts + [inviter.domain]
try:
ret = yield self.reject_remote_invite(
target.to_string(), room_id, remote_room_hosts
)
defer.returnValue(ret)
except SynapseError as e:
logger.warn("Failed to reject invite: %s", e)
yield self.store.locally_reject_invite(
target.to_string(), room_id
)
defer.returnValue({})
yield self._local_membership_update(
requester=requester,
target=target,
room_id=room_id,
membership=effective_membership_state,
txn_id=txn_id,
ratelimit=ratelimit,
prev_event_ids=latest_event_ids,
)
@defer.inlineCallbacks
def send_membership_event(
self,
requester,
event,
context,
remote_room_hosts=None,
ratelimit=True,
):
"""
Change the membership status of a user in a room.
Args:
requester (Requester): The local user who requested the membership
event. If None, certain checks, like whether this homeserver can
act as the sender, will be skipped.
event (SynapseEvent): The membership event.
context: The context of the event.
remote_room_hosts ([str]): Homeservers which are likely to already be in
the room, and could be danced with in order to join this
homeserver for the first time.
ratelimit (bool): Whether to rate limit this request.
Raises:
SynapseError if there was a problem changing the membership.
"""
remote_room_hosts = remote_room_hosts or []
target_user = UserID.from_string(event.state_key)
room_id = event.room_id
if requester is not None:
sender = UserID.from_string(event.sender)
assert sender == requester.user, (
"Sender (%s) must be same as requester (%s)" %
(sender, requester.user)
)
assert self.hs.is_mine(sender), "Sender must be our own: %s" % (sender,)
else:
requester = Requester(target_user, None, False)
message_handler = self.hs.get_handlers().message_handler
prev_event = message_handler.deduplicate_state_event(event, context)
if prev_event is not None:
return
if event.membership == Membership.JOIN:
if requester.is_guest and not self._can_guest_join(context.current_state):
# This should be an auth check, but guests are a local concept,
# so don't really fit into the general auth process.
raise AuthError(403, "Guest access not allowed")
yield message_handler.handle_new_client_event(
requester,
event,
context,
extra_users=[target_user],
ratelimit=ratelimit,
)
prev_member_event = context.current_state.get(
(EventTypes.Member, target_user.to_string()),
None
)
if event.membership == Membership.JOIN:
if not prev_member_event or prev_member_event.membership != Membership.JOIN:
# Only fire user_joined_room if the user has actually joined the
# room. Don't bother if the user is just changing their profile
# info.
yield user_joined_room(self.distributor, target_user, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event and prev_member_event.membership == Membership.JOIN:
user_left_room(self.distributor, target_user, room_id)
def _can_guest_join(self, current_state):
"""
Returns whether a guest can join a room based on its current state.
"""
guest_access = current_state.get((EventTypes.GuestAccess, ""), None)
return (
guest_access
and guest_access.content
and "guest_access" in guest_access.content
and guest_access.content["guest_access"] == "can_join"
)
@defer.inlineCallbacks
def lookup_room_alias(self, room_alias):
"""
Get the room ID associated with a room alias.
Args:
room_alias (RoomAlias): The alias to look up.
Returns:
A tuple of:
The room ID as a RoomID object.
Hosts likely to be participating in the room ([str]).
Raises:
SynapseError if room alias could not be found.
"""
directory_handler = self.hs.get_handlers().directory_handler
mapping = yield directory_handler.get_association(room_alias)
if not mapping:
raise SynapseError(404, "No such room alias")
room_id = mapping["room_id"]
servers = mapping["servers"]
defer.returnValue((RoomID.from_string(room_id), servers))
@defer.inlineCallbacks
def get_inviter(self, user_id, room_id):
invite = yield self.store.get_invite_for_user_in_room(
user_id=user_id,
room_id=room_id,
)
if invite:
defer.returnValue(UserID.from_string(invite.sender))
@defer.inlineCallbacks
def do_3pid_invite(
self,
room_id,
inviter,
medium,
address,
id_server,
requester,
txn_id
):
invitee = yield self._lookup_3pid(
id_server, medium, address
)
if invitee:
yield self.update_membership(
requester,
UserID.from_string(invitee),
room_id,
"invite",
txn_id=txn_id,
)
else:
yield self._make_and_store_3pid_invite(
requester,
id_server,
medium,
address,
room_id,
inviter,
txn_id=txn_id
)
@defer.inlineCallbacks
def _lookup_3pid(self, id_server, medium, address):
"""Looks up a 3pid in the passed identity server.
Args:
id_server (str): The server name (including port, if required)
of the identity server to use.
medium (str): The type of the third party identifier (e.g. "email").
address (str): The third party identifier (e.g. "foo@example.com").
Returns:
str: the matrix ID of the 3pid, or None if it is not recognized.
"""
try:
data = yield self.hs.get_simple_http_client().get_json(
"%s%s/_matrix/identity/api/v1/lookup" % (id_server_scheme, id_server,),
{
"medium": medium,
"address": address,
}
)
if "mxid" in data:
if "signatures" not in data:
raise AuthError(401, "No signatures on 3pid binding")
self.verify_any_signature(data, id_server)
defer.returnValue(data["mxid"])
except IOError as e:
logger.warn("Error from identity server lookup: %s" % (e,))
defer.returnValue(None)
@defer.inlineCallbacks
def verify_any_signature(self, data, server_hostname):
if server_hostname not in data["signatures"]:
raise AuthError(401, "No signature from server %s" % (server_hostname,))
for key_name, signature in data["signatures"][server_hostname].items():
key_data = yield self.hs.get_simple_http_client().get_json(
"%s%s/_matrix/identity/api/v1/pubkey/%s" %
(id_server_scheme, server_hostname, key_name,),
)
if "public_key" not in key_data:
raise AuthError(401, "No public key named %s from %s" %
(key_name, server_hostname,))
verify_signed_json(
data,
server_hostname,
decode_verify_key_bytes(key_name, decode_base64(key_data["public_key"]))
)
return
@defer.inlineCallbacks
def _make_and_store_3pid_invite(
self,
requester,
id_server,
medium,
address,
room_id,
user,
txn_id
):
room_state = yield self.hs.get_state_handler().get_current_state(room_id)
inviter_display_name = ""
inviter_avatar_url = ""
member_event = room_state.get((EventTypes.Member, user.to_string()))
if member_event:
inviter_display_name = member_event.content.get("displayname", "")
inviter_avatar_url = member_event.content.get("avatar_url", "")
canonical_room_alias = ""
canonical_alias_event = room_state.get((EventTypes.CanonicalAlias, ""))
if canonical_alias_event:
canonical_room_alias = canonical_alias_event.content.get("alias", "")
room_name = ""
room_name_event = room_state.get((EventTypes.Name, ""))
if room_name_event:
room_name = room_name_event.content.get("name", "")
room_join_rules = ""
join_rules_event = room_state.get((EventTypes.JoinRules, ""))
if join_rules_event:
room_join_rules = join_rules_event.content.get("join_rule", "")
room_avatar_url = ""
room_avatar_event = room_state.get((EventTypes.RoomAvatar, ""))
if room_avatar_event:
room_avatar_url = room_avatar_event.content.get("url", "")
token, public_keys, fallback_public_key, display_name = (
yield self._ask_id_server_for_third_party_invite(
id_server=id_server,
medium=medium,
address=address,
room_id=room_id,
inviter_user_id=user.to_string(),
room_alias=canonical_room_alias,
room_avatar_url=room_avatar_url,
room_join_rules=room_join_rules,
room_name=room_name,
inviter_display_name=inviter_display_name,
inviter_avatar_url=inviter_avatar_url
)
)
msg_handler = self.hs.get_handlers().message_handler
yield msg_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.ThirdPartyInvite,
"content": {
"display_name": display_name,
"public_keys": public_keys,
# For backwards compatibility:
"key_validity_url": fallback_public_key["key_validity_url"],
"public_key": fallback_public_key["public_key"],
},
"room_id": room_id,
"sender": user.to_string(),
"state_key": token,
},
txn_id=txn_id,
)
@defer.inlineCallbacks
def _ask_id_server_for_third_party_invite(
self,
id_server,
medium,
address,
room_id,
inviter_user_id,
room_alias,
room_avatar_url,
room_join_rules,
room_name,
inviter_display_name,
inviter_avatar_url
):
"""
Asks an identity server for a third party invite.
Args:
id_server (str): hostname + optional port for the identity server.
medium (str): The literal string "email".
address (str): The third party address being invited.
room_id (str): The ID of the room to which the user is invited.
inviter_user_id (str): The user ID of the inviter.
room_alias (str): An alias for the room, for cosmetic notifications.
room_avatar_url (str): The URL of the room's avatar, for cosmetic
notifications.
room_join_rules (str): The join rules of the room (e.g. "public").
room_name (str): The m.room.name of the room.
inviter_display_name (str): The current display name of the
inviter.
inviter_avatar_url (str): The URL of the inviter's avatar.
Returns:
A deferred tuple containing:
token (str): The token which must be signed to prove authenticity.
public_keys ([{"public_key": str, "key_validity_url": str}]):
public_key is a base64-encoded ed25519 public key.
fallback_public_key: One element from public_keys.
display_name (str): A user-friendly name to represent the invited
user.
"""
is_url = "%s%s/_matrix/identity/api/v1/store-invite" % (
id_server_scheme, id_server,
)
invite_config = {
"medium": medium,
"address": address,
"room_id": room_id,
"room_alias": room_alias,
"room_avatar_url": room_avatar_url,
"room_join_rules": room_join_rules,
"room_name": room_name,
"sender": inviter_user_id,
"sender_display_name": inviter_display_name,
"sender_avatar_url": inviter_avatar_url,
}
if self.hs.config.invite_3pid_guest:
registration_handler = self.hs.get_handlers().registration_handler
guest_access_token = yield registration_handler.guest_access_token_for(
medium=medium,
address=address,
inviter_user_id=inviter_user_id,
)
guest_user_info = yield self.hs.get_auth().get_user_by_access_token(
guest_access_token
)
invite_config.update({
"guest_access_token": guest_access_token,
"guest_user_id": guest_user_info["user"].to_string(),
})
data = yield self.hs.get_simple_http_client().post_urlencoded_get_json(
is_url,
invite_config
)
# TODO: Check for success
token = data["token"]
public_keys = data.get("public_keys", [])
if "public_key" in data:
fallback_public_key = {
"public_key": data["public_key"],
"key_validity_url": "%s%s/_matrix/identity/api/v1/pubkey/isvalid" % (
id_server_scheme, id_server,
),
}
else:
fallback_public_key = public_keys[0]
if not public_keys:
public_keys.append(fallback_public_key)
display_name = data["display_name"]
defer.returnValue((token, public_keys, fallback_public_key, display_name))
@defer.inlineCallbacks
def forget(self, user, room_id):
user_id = user.to_string()
member = yield self.state_handler.get_current_state(
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
)
membership = member.membership if member else None
if membership is not None and membership != Membership.LEAVE:
raise SynapseError(400, "User %s in room %s" % (
user_id, room_id
))
if membership:
yield self.store.forget(user_id, room_id)


@@ -21,6 +21,7 @@ from synapse.api.constants import Membership, EventTypes
from synapse.api.filtering import Filter
from synapse.api.errors import SynapseError
from synapse.events.utils import serialize_event
from synapse.visibility import filter_events_for_client
from unpaddedbase64 import decode_base64, encode_base64
@@ -172,8 +173,8 @@ class SearchHandler(BaseHandler):
filtered_events = search_filter.filter([r["event"] for r in results])
events = yield self._filter_events_for_client(
user.to_string(), filtered_events
events = yield filter_events_for_client(
self.store, user.to_string(), filtered_events
)
events.sort(key=lambda e: -rank_map[e.event_id])
@@ -223,8 +224,8 @@ class SearchHandler(BaseHandler):
r["event"] for r in results
])
events = yield self._filter_events_for_client(
user.to_string(), filtered_events
events = yield filter_events_for_client(
self.store, user.to_string(), filtered_events
)
room_events.extend(events)
@@ -281,12 +282,12 @@ class SearchHandler(BaseHandler):
event.room_id, event.event_id, before_limit, after_limit
)
res["events_before"] = yield self._filter_events_for_client(
user.to_string(), res["events_before"]
res["events_before"] = yield filter_events_for_client(
self.store, user.to_string(), res["events_before"]
)
res["events_after"] = yield self._filter_events_for_client(
user.to_string(), res["events_after"]
res["events_after"] = yield filter_events_for_client(
self.store, user.to_string(), res["events_after"]
)
res["start"] = now_token.copy_and_replace(

File diff suppressed because it is too large


@@ -15,8 +15,6 @@
from twisted.internet import defer
from ._base import BaseHandler
from synapse.api.errors import SynapseError, AuthError
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.metrics import Measure
@@ -32,14 +30,16 @@ logger = logging.getLogger(__name__)
# A tiny object useful for storing a user's membership in a room, as a mapping
# key
RoomMember = namedtuple("RoomMember", ("room_id", "user"))
RoomMember = namedtuple("RoomMember", ("room_id", "user_id"))
class TypingNotificationHandler(BaseHandler):
class TypingHandler(object):
def __init__(self, hs):
super(TypingNotificationHandler, self).__init__(hs)
self.homeserver = hs
self.store = hs.get_datastore()
self.server_name = hs.config.server_name
self.auth = hs.get_auth()
self.is_mine_id = hs.is_mine_id
self.notifier = hs.get_notifier()
self.clock = hs.get_clock()
@@ -67,20 +67,23 @@ class TypingNotificationHandler(BaseHandler):
@defer.inlineCallbacks
def started_typing(self, target_user, auth_user, room_id, timeout):
if not self.hs.is_mine(target_user):
target_user_id = target_user.to_string()
auth_user_id = auth_user.to_string()
if not self.is_mine_id(target_user_id):
raise SynapseError(400, "User is not hosted on this Home Server")
if target_user != auth_user:
if target_user_id != auth_user_id:
raise AuthError(400, "Cannot set another user's typing state")
yield self.auth.check_joined_room(room_id, target_user.to_string())
yield self.auth.check_joined_room(room_id, target_user_id)
logger.debug(
"%s has started typing in %s", target_user.to_string(), room_id
"%s has started typing in %s", target_user_id, room_id
)
until = self.clock.time_msec() + timeout
member = RoomMember(room_id=room_id, user=target_user)
member = RoomMember(room_id=room_id, user_id=target_user_id)
was_present = member in self._member_typing_until
@@ -104,25 +107,28 @@ class TypingNotificationHandler(BaseHandler):
yield self._push_update(
room_id=room_id,
user=target_user,
user_id=target_user_id,
typing=True,
)
@defer.inlineCallbacks
def stopped_typing(self, target_user, auth_user, room_id):
if not self.hs.is_mine(target_user):
target_user_id = target_user.to_string()
auth_user_id = auth_user.to_string()
if not self.is_mine_id(target_user_id):
raise SynapseError(400, "User is not hosted on this Home Server")
if target_user != auth_user:
if target_user_id != auth_user_id:
raise AuthError(400, "Cannot set another user's typing state")
yield self.auth.check_joined_room(room_id, target_user.to_string())
yield self.auth.check_joined_room(room_id, target_user_id)
logger.debug(
"%s has stopped typing in %s", target_user.to_string(), room_id
"%s has stopped typing in %s", target_user_id, room_id
)
member = RoomMember(room_id=room_id, user=target_user)
member = RoomMember(room_id=room_id, user_id=target_user_id)
if member in self._member_typing_timer:
self.clock.cancel_call_later(self._member_typing_timer[member])
@@ -132,8 +138,9 @@ class TypingNotificationHandler(BaseHandler):
@defer.inlineCallbacks
def user_left_room(self, user, room_id):
if self.hs.is_mine(user):
member = RoomMember(room_id=room_id, user=user)
user_id = user.to_string()
if self.is_mine_id(user_id):
member = RoomMember(room_id=room_id, user_id=user_id)
yield self._stopped_typing(member)
@defer.inlineCallbacks
@@ -144,7 +151,7 @@ class TypingNotificationHandler(BaseHandler):
yield self._push_update(
room_id=member.room_id,
user=member.user,
user_id=member.user_id,
typing=False,
)
@@ -156,61 +163,53 @@ class TypingNotificationHandler(BaseHandler):
del self._member_typing_timer[member]
@defer.inlineCallbacks
def _push_update(self, room_id, user, typing):
localusers = set()
remotedomains = set()
rm_handler = self.homeserver.get_handlers().room_member_handler
yield rm_handler.fetch_room_distributions_into(
room_id, localusers=localusers, remotedomains=remotedomains
)
if localusers:
self._push_update_local(
room_id=room_id,
user=user,
typing=typing
)
def _push_update(self, room_id, user_id, typing):
domains = yield self.store.get_joined_hosts_for_room(room_id)
deferreds = []
for domain in remotedomains:
deferreds.append(self.federation.send_edu(
destination=domain,
edu_type="m.typing",
content={
"room_id": room_id,
"user_id": user.to_string(),
"typing": typing,
},
))
for domain in domains:
if domain == self.server_name:
self._push_update_local(
room_id=room_id,
user_id=user_id,
typing=typing
)
else:
deferreds.append(self.federation.send_edu(
destination=domain,
edu_type="m.typing",
content={
"room_id": room_id,
"user_id": user_id,
"typing": typing,
},
))
yield defer.DeferredList(deferreds, consumeErrors=True)
@defer.inlineCallbacks
def _recv_edu(self, origin, content):
room_id = content["room_id"]
user = UserID.from_string(content["user_id"])
user_id = content["user_id"]
localusers = set()
# Check that the string is a valid user id
UserID.from_string(user_id)
rm_handler = self.homeserver.get_handlers().room_member_handler
yield rm_handler.fetch_room_distributions_into(
room_id, localusers=localusers
)
domains = yield self.store.get_joined_hosts_for_room(room_id)
if localusers:
if self.server_name in domains:
self._push_update_local(
room_id=room_id,
user=user,
user_id=user_id,
typing=content["typing"]
)
def _push_update_local(self, room_id, user, typing):
def _push_update_local(self, room_id, user_id, typing):
room_set = self._room_typing.setdefault(room_id, set())
if typing:
room_set.add(user)
room_set.add(user_id)
else:
room_set.discard(user)
room_set.discard(user_id)
self._latest_room_serial += 1
self._room_serials[room_id] = self._latest_room_serial
@@ -222,13 +221,14 @@ class TypingNotificationHandler(BaseHandler):
def get_all_typing_updates(self, last_id, current_id):
# TODO: Work out a way to do this without scanning the entire state.
if last_id == current_id:
return []
rows = []
for room_id, serial in self._room_serials.items():
if last_id < serial and serial <= current_id:
typing = self._room_typing[room_id]
typing_bytes = json.dumps([
u.to_string() for u in typing
], ensure_ascii=False)
typing_bytes = json.dumps(list(typing), ensure_ascii=False)
rows.append((serial, room_id, typing_bytes))
rows.sort()
return rows
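Each room carries a monotonically increasing serial, and a replication poller asks for every room whose serial landed in (last_id, current_id]. The windowing in isolation (plain dicts stand in for the handler's _room_serials and _room_typing maps):

import json

# The replication window in isolation; plain dicts stand in for the
# handler's _room_serials and _room_typing maps.
def typing_updates(room_serials, room_typing, last_id, current_id):
    rows = [
        (serial, room_id, json.dumps(sorted(room_typing[room_id])))
        for room_id, serial in room_serials.items()
        if last_id < serial <= current_id
    ]
    rows.sort()
    return rows

assert typing_updates({"!a:x": 3}, {"!a:x": {"@u:x"}}, 2, 3) == [
    (3, "!a:x", '["@u:x"]'),
]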
@@ -238,34 +238,26 @@ class TypingNotificationEventSource(object):
def __init__(self, hs):
self.hs = hs
self.clock = hs.get_clock()
self._handler = None
self._room_member_handler = None
def handler(self):
# Avoid cyclic dependency in handler setup
if not self._handler:
self._handler = self.hs.get_handlers().typing_notification_handler
return self._handler
def room_member_handler(self):
if not self._room_member_handler:
self._room_member_handler = self.hs.get_handlers().room_member_handler
return self._room_member_handler
# We can't call get_typing_handler here because there's a cycle:
#
# Typing -> Notifier -> TypingNotificationEventSource -> Typing
#
self.get_typing_handler = hs.get_typing_handler
def _make_event_for(self, room_id):
typing = self.handler()._room_typing[room_id]
typing = self.get_typing_handler()._room_typing[room_id]
return {
"type": "m.typing",
"room_id": room_id,
"content": {
"user_ids": [u.to_string() for u in typing],
"user_ids": list(typing),
},
}
def get_new_events(self, from_key, room_ids, **kwargs):
with Measure(self.clock, "typing.get_new_events"):
from_key = int(from_key)
handler = self.handler()
handler = self.get_typing_handler()
events = []
for room_id in room_ids:
@@ -279,7 +271,7 @@ class TypingNotificationEventSource(object):
return events, handler._latest_room_serial
def get_current_key(self):
return self.handler()._latest_room_serial
return self.get_typing_handler()._latest_room_serial
def get_pagination_rows(self, user, pagination_config, key):
return ([], pagination_config.from_key)


@@ -15,17 +15,25 @@
from OpenSSL import SSL
from OpenSSL.SSL import VERIFY_NONE
from synapse.api.errors import CodeMessageException
from synapse.api.errors import (
CodeMessageException, SynapseError, Codes,
)
from synapse.util.logcontext import preserve_context_over_fn
import synapse.metrics
from synapse.http.endpoint import SpiderEndpoint
from canonicaljson import encode_canonical_json
from twisted.internet import defer, reactor, ssl
from twisted.internet import defer, reactor, ssl, protocol, task
from twisted.internet.endpoints import SSL4ClientEndpoint, TCP4ClientEndpoint
from twisted.web.client import (
Agent, readBody, FileBodyProducer, PartialDownloadError,
BrowserLikeRedirectAgent, ContentDecoderAgent, GzipDecoder, Agent,
readBody, PartialDownloadError,
)
from twisted.web.client import FileBodyProducer as TwistedFileBodyProducer
from twisted.web.http import PotentialDataLoss
from twisted.web.http_headers import Headers
from twisted.web._newclient import ResponseDone
from StringIO import StringIO
@@ -238,6 +246,107 @@ class SimpleHttpClient(object):
else:
raise CodeMessageException(response.code, body)
# XXX: FIXME: This is horribly copy-pasted from matrixfederationclient.
# The two should be factored out.
@defer.inlineCallbacks
def get_file(self, url, output_stream, max_size=None):
"""GETs a file from a given URL
Args:
url (str): The URL to GET
output_stream (file): File to write the response body to.
Returns:
A (int,dict,string,int) tuple of the file length, dict of the response
headers, absolute URI of the response and HTTP response code.
"""
response = yield self.request(
"GET",
url.encode("ascii"),
headers=Headers({
b"User-Agent": [self.user_agent],
})
)
headers = dict(response.headers.getAllRawHeaders())
if 'Content-Length' in headers and headers['Content-Length'] > max_size:
logger.warn("Requested URL is too large > %r bytes" % (self.max_size,))
raise SynapseError(
502,
"Requested file is too large > %r bytes" % (self.max_size,),
Codes.TOO_LARGE,
)
if response.code > 299:
logger.warn("Got %d when downloading %s" % (response.code, url))
raise SynapseError(
502,
"Got error %d" % (response.code,),
Codes.UNKNOWN,
)
# TODO: if our Content-Type is HTML or something, just read the first
# N bytes into RAM rather than saving it all to disk only to read it
# straight back in again
try:
length = yield preserve_context_over_fn(
_readBodyToFile,
response, output_stream, max_size
)
except Exception as e:
logger.exception("Failed to download body")
raise SynapseError(
502,
("Failed to download remote body: %s" % e),
Codes.UNKNOWN,
)
defer.returnValue((length, headers, response.request.absoluteURI, response.code))
# XXX: FIXME: This is horribly copy-pasted from matrixfederationclient.
# The two should be factored out.
class _ReadBodyToFileProtocol(protocol.Protocol):
def __init__(self, stream, deferred, max_size):
self.stream = stream
self.deferred = deferred
self.length = 0
self.max_size = max_size
def dataReceived(self, data):
self.stream.write(data)
self.length += len(data)
if self.max_size is not None and self.length >= self.max_size:
self.deferred.errback(SynapseError(
502,
"Requested file is too large > %r bytes" % (self.max_size,),
Codes.TOO_LARGE,
))
self.deferred = defer.Deferred()
self.transport.loseConnection()
def connectionLost(self, reason):
if reason.check(ResponseDone):
self.deferred.callback(self.length)
elif reason.check(PotentialDataLoss):
# stolen from https://github.com/twisted/treq/pull/49/files
# http://twistedmatrix.com/trac/ticket/4840
self.deferred.callback(self.length)
else:
self.deferred.errback(reason)
# XXX: FIXME: This is horribly copy-pasted from matrixfederationclient.
# The two should be factored out.
def _readBodyToFile(response, stream, max_size):
d = defer.Deferred()
response.deliverBody(_ReadBodyToFileProtocol(stream, d, max_size))
return d
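The protocol enforces the size cap incrementally, as bytes arrive, instead of buffering the whole body and checking afterwards. The same idea as a plain synchronous loop (copy_with_limit is an illustrative helper, not part of the codebase):

from io import BytesIO

# The incremental size-limit idea as a plain loop (illustrative helper).
def copy_with_limit(read_chunk, output_stream, max_size, chunk_size=4096):
    length = 0
    while True:
        data = read_chunk(chunk_size)
        if not data:
            return length
        output_stream.write(data)
        length += len(data)
        if max_size is not None and length >= max_size:
            raise ValueError("body larger than %r bytes" % (max_size,))

assert copy_with_limit(BytesIO(b"x" * 10).read, BytesIO(), None) == 10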
class CaptchaServerHttpClient(SimpleHttpClient):
"""
@@ -269,6 +378,60 @@ class CaptchaServerHttpClient(SimpleHttpClient):
defer.returnValue(e.response)
class SpiderEndpointFactory(object):
def __init__(self, hs):
self.blacklist = hs.config.url_preview_ip_range_blacklist
self.whitelist = hs.config.url_preview_ip_range_whitelist
self.policyForHTTPS = hs.get_http_client_context_factory()
def endpointForURI(self, uri):
logger.info("Getting endpoint for %s", uri.toBytes())
if uri.scheme == "http":
return SpiderEndpoint(
reactor, uri.host, uri.port, self.blacklist, self.whitelist,
endpoint=TCP4ClientEndpoint,
endpoint_kw_args={
'timeout': 15
},
)
elif uri.scheme == "https":
tlsPolicy = self.policyForHTTPS.creatorForNetloc(uri.host, uri.port)
return SpiderEndpoint(
reactor, uri.host, uri.port, self.blacklist, self.whitelist,
endpoint=SSL4ClientEndpoint,
endpoint_kw_args={
'sslContextFactory': tlsPolicy,
'timeout': 15
},
)
else:
logger.warn("Can't get endpoint for unrecognised scheme %s", uri.scheme)
class SpiderHttpClient(SimpleHttpClient):
"""
Separate HTTP client for spidering arbitrary URLs.
Special in that it follows redirects and has a UA that looks
like a browser.
Used by the preview_url endpoint in the content repo.
"""
def __init__(self, hs):
SimpleHttpClient.__init__(self, hs)
# clobber the base class's agent and UA:
self.agent = ContentDecoderAgent(
BrowserLikeRedirectAgent(
Agent.usingEndpointFactory(
reactor,
SpiderEndpointFactory(hs)
)
), [('gzip', GzipDecoder)]
)
# We could look like Chrome:
# self.user_agent = ("Mozilla/5.0 (%s) (KHTML, like Gecko)
# Chrome Safari" % hs.version_string)
def encode_urlencode_args(args):
return {k: encode_urlencode_arg(v) for k, v in args.items()}
@@ -301,5 +464,31 @@ class InsecureInterceptableContextFactory(ssl.ContextFactory):
self._context = SSL.Context(SSL.SSLv23_METHOD)
self._context.set_verify(VERIFY_NONE, lambda *_: None)
def getContext(self, hostname, port):
def getContext(self, hostname=None, port=None):
return self._context
def creatorForNetloc(self, hostname, port):
return self
class FileBodyProducer(TwistedFileBodyProducer):
"""Workaround for https://twistedmatrix.com/trac/ticket/8473
We override the pauseProducing and resumeProducing methods in twisted's
FileBodyProducer so that they do not raise exceptions if the task has
already completed.
"""
def pauseProducing(self):
try:
super(FileBodyProducer, self).pauseProducing()
except task.TaskDone:
# task has already completed
pass
def resumeProducing(self):
try:
super(FileBodyProducer, self).resumeProducing()
except task.NotPaused:
# task was not paused (probably because it had already completed)
pass


@@ -22,6 +22,7 @@ from twisted.names.error import DNSNameError, DomainError
import collections
import logging
import random
import time
logger = logging.getLogger(__name__)
@@ -31,7 +32,7 @@ SERVER_CACHE = {}
_Server = collections.namedtuple(
"_Server", "priority weight host port"
"_Server", "priority weight host port expires"
)
@@ -74,6 +75,41 @@ def matrix_federation_endpoint(reactor, destination, ssl_context_factory=None,
return transport_endpoint(reactor, domain, port, **endpoint_kw_args)
class SpiderEndpoint(object):
"""An endpoint which refuses to connect to blacklisted IP addresses
Implements twisted.internet.interfaces.IStreamClientEndpoint.
"""
def __init__(self, reactor, host, port, blacklist, whitelist,
endpoint=TCP4ClientEndpoint, endpoint_kw_args={}):
self.reactor = reactor
self.host = host
self.port = port
self.blacklist = blacklist
self.whitelist = whitelist
self.endpoint = endpoint
self.endpoint_kw_args = endpoint_kw_args
@defer.inlineCallbacks
def connect(self, protocolFactory):
address = yield self.reactor.resolve(self.host)
from netaddr import IPAddress
ip_address = IPAddress(address)
if ip_address in self.blacklist:
if self.whitelist is None or ip_address not in self.whitelist:
raise ConnectError(
"Refusing to spider blacklisted IP address %s" % address
)
logger.info("Connecting to %s:%s", address, self.port)
endpoint = self.endpoint(
self.reactor, address, self.port, **self.endpoint_kw_args
)
connection = yield endpoint.connect(protocolFactory)
defer.returnValue(connection)
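Resolving first and then checking the literal address closes the obvious hole where a DNS name points at an internal host; the whitelist then punches narrow holes in a broad blacklist. The address test on its own, with the same netaddr types the config produces:

# The address check in isolation, using netaddr as the endpoint above does.
from netaddr import IPAddress, IPSet

def is_blocked(address, blacklist, whitelist):
    ip = IPAddress(address)
    return ip in blacklist and (whitelist is None or ip not in whitelist)

blacklist = IPSet(["10.0.0.0/8", "127.0.0.0/8"])
whitelist = IPSet(["10.1.2.3/32"])
assert is_blocked("127.0.0.1", blacklist, whitelist)
assert not is_blocked("10.1.2.3", blacklist, whitelist)   # whitelisted
assert not is_blocked("8.8.8.8", blacklist, whitelist)    # never blacklisted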
class SRVClientEndpoint(object):
"""An endpoint which looks up SRV records for a service.
Cycles through the list of servers starting with each call to connect
@@ -92,7 +128,8 @@ class SRVClientEndpoint(object):
host=domain,
port=default_port,
priority=0,
weight=0
weight=0,
expires=0,
)
else:
self.default_server = None
@@ -118,7 +155,7 @@ class SRVClientEndpoint(object):
return self.default_server
else:
raise ConnectError(
"Not server available for %s", self.service_name
"Not server available for %s" % self.service_name
)
min_priority = self.servers[0].priority
@@ -153,7 +190,13 @@ class SRVClientEndpoint(object):
@defer.inlineCallbacks
def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE):
def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=time):
cache_entry = cache.get(service_name, None)
if cache_entry:
if all(s.expires > int(clock.time()) for s in cache_entry):
servers = list(cache_entry)
defer.returnValue(servers)
servers = []
try:
@@ -166,34 +209,33 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE):
and answers[0].type == dns.SRV
and answers[0].payload
and answers[0].payload.target == dns.Name('.')):
raise ConnectError("Service %s unavailable", service_name)
raise ConnectError("Service %s unavailable" % service_name)
for answer in answers:
if answer.type != dns.SRV or not answer.payload:
continue
payload = answer.payload
host = str(payload.target)
srv_ttl = answer.ttl
try:
answers, _, _ = yield dns_client.lookupAddress(host)
except DNSNameError:
continue
ips = [
answer.payload.dottedQuad()
for answer in answers
if answer.type == dns.A and answer.payload
]
for answer in answers:
if answer.type == dns.A and answer.payload:
ip = answer.payload.dottedQuad()
host_ttl = min(srv_ttl, answer.ttl)
for ip in ips:
servers.append(_Server(
host=ip,
port=int(payload.port),
priority=int(payload.priority),
weight=int(payload.weight)
))
servers.append(_Server(
host=ip,
port=int(payload.port),
priority=int(payload.priority),
weight=int(payload.weight),
expires=int(clock.time()) + host_ttl,
))
servers.sort()
cache[service_name] = list(servers)
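Each cached server now records an absolute expiry, now + min(SRV TTL, A-record TTL), and the cache is honoured only while every entry is still fresh, which is what the all(...) guard at the top of resolve_service checks. The freshness test in isolation (the namedtuple mirrors _Server's new shape):

import collections
import time

# Mirrors _Server's new shape, including the added expires field.
Server = collections.namedtuple("Server", "priority weight host port expires")

def cache_is_fresh(entry, now=None):
    now = int(time.time()) if now is None else now
    return bool(entry) and all(s.expires > now for s in entry)

entry = [Server(0, 0, "1.2.3.4", 8448, expires=100)]
assert cache_is_fresh(entry, now=50)
assert not cache_is_fresh(entry, now=100)  # TTL has elapsed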


@@ -74,7 +74,12 @@ response_db_txn_duration = metrics.register_distribution(
_next_request_id = 0
def request_handler(request_handler):
def request_handler(report_metrics=True):
"""Decorator for ``wrap_request_handler``"""
return lambda request_handler: wrap_request_handler(request_handler, report_metrics)
def wrap_request_handler(request_handler, report_metrics):
"""Wraps a method that acts as a request handler with the necessary logging
and exception handling.
@@ -96,7 +101,12 @@ def request_handler(request_handler):
global _next_request_id
request_id = "%s-%s" % (request.method, _next_request_id)
_next_request_id += 1
with LoggingContext(request_id) as request_context:
if report_metrics:
request_metrics = RequestMetrics()
request_metrics.start(self.clock)
request_context.request = request_id
with request.processing():
try:
@@ -133,6 +143,14 @@ def request_handler(request_handler):
},
send_cors=True
)
finally:
try:
if report_metrics:
request_metrics.stop(
self.clock, request, self.__class__.__name__
)
except:
pass
return wrapped_request_handler
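request_handler has become a decorator factory: calling it with report_metrics yields the actual decorator, which is why call sites change from @request_handler to @request_handler(report_metrics=False). The pattern in miniature (names are illustrative):

import functools

# The decorator-factory pattern in miniature (illustrative names).
def logged(tag):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            print("%s: calling %s" % (tag, fn.__name__))
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@logged("http")  # logged("http") returns the real decorator
def handle(request):
    return 200

assert handle("GET /") == 200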
@@ -197,19 +215,23 @@ class JsonResource(HttpServer, resource.Resource):
self._async_render(request)
return server.NOT_DONE_YET
@request_handler
# Disable metric reporting because _async_render does its own metrics.
# It does its own metric reporting because _async_render dispatches to
# a callback and it's the class name of that callback we want to report
# against rather than the JsonResource itself.
@request_handler(report_metrics=False)
@defer.inlineCallbacks
def _async_render(self, request):
""" This gets called from render() every time someone sends us a request.
This checks if anyone has registered a callback for that method and
path.
"""
start = self.clock.time_msec()
if request.method == "OPTIONS":
self._send_response(request, 200, {})
return
start_context = LoggingContext.current_context()
request_metrics = RequestMetrics()
request_metrics.start(self.clock)
# Loop through all the registered callbacks to check if the method
# and path regex match
@@ -241,40 +263,7 @@ class JsonResource(HttpServer, resource.Resource):
self._send_response(request, code, response)
try:
context = LoggingContext.current_context()
tag = ""
if context:
tag = context.tag
if context != start_context:
logger.warn(
"Context have unexpectedly changed %r, %r",
context, self.start_context
)
return
incoming_requests_counter.inc(request.method, servlet_classname, tag)
response_timer.inc_by(
self.clock.time_msec() - start, request.method,
servlet_classname, tag
)
ru_utime, ru_stime = context.get_resource_usage()
response_ru_utime.inc_by(
ru_utime, request.method, servlet_classname, tag
)
response_ru_stime.inc_by(
ru_stime, request.method, servlet_classname, tag
)
response_db_txn_count.inc_by(
context.db_txn_count, request.method, servlet_classname, tag
)
response_db_txn_duration.inc_by(
context.db_txn_duration, request.method, servlet_classname, tag
)
request_metrics.stop(self.clock, request, servlet_classname)
except:
pass
@@ -307,6 +296,48 @@ class JsonResource(HttpServer, resource.Resource):
)
class RequestMetrics(object):
def start(self, clock):
self.start = clock.time_msec()
self.start_context = LoggingContext.current_context()
def stop(self, clock, request, servlet_classname):
context = LoggingContext.current_context()
tag = ""
if context:
tag = context.tag
if context != self.start_context:
logger.warn(
"Context have unexpectedly changed %r, %r",
context, self.start_context
)
return
incoming_requests_counter.inc(request.method, servlet_classname, tag)
response_timer.inc_by(
clock.time_msec() - self.start, request.method,
servlet_classname, tag
)
ru_utime, ru_stime = context.get_resource_usage()
response_ru_utime.inc_by(
ru_utime, request.method, servlet_classname, tag
)
response_ru_stime.inc_by(
ru_stime, request.method, servlet_classname, tag
)
response_db_txn_count.inc_by(
context.db_txn_count, request.method, servlet_classname, tag
)
response_db_txn_duration.inc_by(
context.db_txn_duration, request.method, servlet_classname, tag
)
class RootRedirect(resource.Resource):
"""Redirects the root '/' path to another path."""


@@ -26,14 +26,19 @@ logger = logging.getLogger(__name__)
def parse_integer(request, name, default=None, required=False):
"""Parse an integer parameter from the request string
:param request: the twisted HTTP request.
:param name (str): the name of the query parameter.
:param default: value to use if the parameter is absent, defaults to None.
:param required (bool): whether to raise a 400 SynapseError if the
parameter is absent, defaults to False.
:return: An int value or the default.
:raises
SynapseError if the parameter is absent and required, or if the
Args:
request: the twisted HTTP request.
name (str): the name of the query parameter.
default (int|None): value to use if the parameter is absent, defaults
to None.
required (bool): whether to raise a 400 SynapseError if the
parameter is absent, defaults to False.
Returns:
int|None: An int value or the default.
Raises:
SynapseError: if the parameter is absent and required, or if the
parameter is present and not an integer.
"""
if name in request.args:
@@ -53,14 +58,19 @@ def parse_integer(request, name, default=None, required=False):
def parse_boolean(request, name, default=None, required=False):
"""Parse a boolean parameter from the request query string
:param request: the twisted HTTP request.
:param name (str): the name of the query parameter.
:param default: value to use if the parameter is absent, defaults to None.
:param required (bool): whether to raise a 400 SynapseError if the
parameter is absent, defaults to False.
:return: A bool value or the default.
:raises
SynapseError if the parameter is absent and required, or if the
Args:
request: the twisted HTTP request.
name (str): the name of the query parameter.
default (bool|None): value to use if the parameter is absent, defaults
to None.
required (bool): whether to raise a 400 SynapseError if the
parameter is absent, defaults to False.
Returns:
bool|None: A bool value or the default.
Raises:
SynapseError: if the parameter is absent and required, or if the
parameter is present and not one of "true" or "false".
"""
@@ -88,15 +98,20 @@ def parse_string(request, name, default=None, required=False,
allowed_values=None, param_type="string"):
"""Parse a string parameter from the request query string.
:param request: the twisted HTTP request.
:param name (str): the name of the query parameter.
:param default: value to use if the parameter is absent, defaults to None.
:param required (bool): whether to raise a 400 SynapseError if the
parameter is absent, defaults to False.
:param allowed_values (list): List of allowed values for the string,
or None if any value is allowed, defaults to None
:return: A string value or the default.
:raises
Args:
request: the twisted HTTP request.
name (str): the name of the query parameter.
default (str|None): value to use if the parameter is absent, defaults
to None.
required (bool): whether to raise a 400 SynapseError if the
parameter is absent, defaults to False.
allowed_values (list[str]): List of allowed values for the string,
or None if any value is allowed, defaults to None
Returns:
str|None: A string value or the default.
Raises:
SynapseError if the parameter is absent and required, or if the
parameter is present, must be one of a list of allowed values and
is not one of those allowed values.
@@ -122,9 +137,13 @@ def parse_string(request, name, default=None, required=False,
def parse_json_value_from_request(request):
"""Parse a JSON value from the body of a twisted HTTP request.
:param request: the twisted HTTP request.
:returns: The JSON value.
:raises
Args:
request: the twisted HTTP request.
Returns:
The JSON value.
Raises:
SynapseError if the request body couldn't be decoded as JSON.
"""
try:
@@ -143,8 +162,10 @@ def parse_json_value_from_request(request):
def parse_json_object_from_request(request):
"""Parse a JSON object from the body of a twisted HTTP request.
:param request: the twisted HTTP request.
:raises
Args:
request: the twisted HTTP request.
Raises:
SynapseError if the request body couldn't be decoded as JSON or
if it wasn't a JSON object.
"""

synapse/http/site.py (new file, 146 lines)
View File

@@ -0,0 +1,146 @@
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util.logcontext import LoggingContext
from twisted.web.server import Site, Request
import contextlib
import logging
import re
import time
ACCESS_TOKEN_RE = re.compile(r'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
class SynapseRequest(Request):
def __init__(self, site, *args, **kw):
Request.__init__(self, *args, **kw)
self.site = site
self.authenticated_entity = None
self.start_time = 0
def __repr__(self):
# We overwrite this so that we don't log ``access_token``
return '<%s at 0x%x method=%s uri=%s clientproto=%s site=%s>' % (
self.__class__.__name__,
id(self),
self.method,
self.get_redacted_uri(),
self.clientproto,
self.site.site_tag,
)
def get_redacted_uri(self):
return ACCESS_TOKEN_RE.sub(
r'\1<redacted>\3',
self.uri
)
def get_user_agent(self):
return self.requestHeaders.getRawHeaders("User-Agent", [None])[-1]
def started_processing(self):
self.site.access_logger.info(
"%s - %s - Received request: %s %s",
self.getClientIP(),
self.site.site_tag,
self.method,
self.get_redacted_uri()
)
self.start_time = int(time.time() * 1000)
def finished_processing(self):
try:
context = LoggingContext.current_context()
ru_utime, ru_stime = context.get_resource_usage()
db_txn_count = context.db_txn_count
db_txn_duration = context.db_txn_duration
except:
ru_utime, ru_stime = (0, 0)
db_txn_count, db_txn_duration = (0, 0)
self.site.access_logger.info(
"%s - %s - {%s}"
" Processed request: %dms (%dms, %dms) (%dms/%d)"
" %sB %s \"%s %s %s\" \"%s\"",
self.getClientIP(),
self.site.site_tag,
self.authenticated_entity,
int(time.time() * 1000) - self.start_time,
int(ru_utime * 1000),
int(ru_stime * 1000),
int(db_txn_duration * 1000),
int(db_txn_count),
self.sentLength,
self.code,
self.method,
self.get_redacted_uri(),
self.clientproto,
self.get_user_agent(),
)
@contextlib.contextmanager
def processing(self):
self.started_processing()
yield
self.finished_processing()
class XForwardedForRequest(SynapseRequest):
def __init__(self, *args, **kw):
SynapseRequest.__init__(self, *args, **kw)
"""
Add a layer on top of another request that only uses the value of an
X-Forwarded-For header as the result of C{getClientIP}.
"""
def getClientIP(self):
"""
@return: The client address (the first address) in the value of the
I{X-Forwarded-For header}. If the header is not present, return
C{b"-"}.
"""
return self.requestHeaders.getRawHeaders(
b"x-forwarded-for", [b"-"])[0].split(b",")[0].strip()
class SynapseRequestFactory(object):
def __init__(self, site, x_forwarded_for):
self.site = site
self.x_forwarded_for = x_forwarded_for
def __call__(self, *args, **kwargs):
if self.x_forwarded_for:
return XForwardedForRequest(self.site, *args, **kwargs)
else:
return SynapseRequest(self.site, *args, **kwargs)
class SynapseSite(Site):
"""
Subclass of a twisted http Site that does access logging with python's
standard logging
"""
def __init__(self, logger_name, site_tag, config, resource, *args, **kwargs):
Site.__init__(self, resource, *args, **kwargs)
self.site_tag = site_tag
proxied = config.get("x_forwarded", False)
self.requestFactory = SynapseRequestFactory(self, proxied)
self.access_logger = logging.getLogger(logger_name)
def log(self, request):
pass
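The redaction in get_redacted_uri is a plain regex substitution, so its behaviour is easy to check in isolation (same pattern as above; sample URI invented):

    import re

    ACCESS_TOKEN_RE = re.compile(r'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')

    uri = "/_matrix/client/r0/sync?access_token=s3cr3t&since=s72594_4483"
    print(ACCESS_TOKEN_RE.sub(r'\1<redacted>\3', uri))
    # /_matrix/client/r0/sync?access_token=<redacted>&since=s72594_4483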

View File

@@ -22,6 +22,7 @@ import functools
import os
import stat
import time
import gc
from twisted.internet import reactor
@@ -33,11 +34,7 @@ from .metric import (
logger = logging.getLogger(__name__)
# We'll keep all the available metrics in a single toplevel dict, one shared
# for the entire process. We don't currently support per-HomeServer instances
# of metrics, because in practice any one python VM will host only one
# HomeServer anyway. This makes a lot of implementation neater
all_metrics = {}
all_metrics = []
class Metrics(object):
@@ -53,7 +50,7 @@ class Metrics(object):
metric = metric_class(full_name, *args, **kwargs)
all_metrics[full_name] = metric
all_metrics.append(metric)
return metric
def register_counter(self, *args, **kwargs):
@@ -84,12 +81,12 @@ def render_all():
# TODO(paul): Internal hack
update_resource_metrics()
for name in sorted(all_metrics.keys()):
for metric in all_metrics:
try:
strs += all_metrics[name].render()
strs += metric.render()
except Exception:
strs += ["# FAILED to render %s" % name]
logger.exception("Failed to render %s metric", name)
strs += ["# FAILED to render"]
logger.exception("Failed to render metric")
strs.append("") # to generate a final CRLF
@@ -156,6 +153,13 @@ reactor_metrics = get_metrics_for("reactor")
tick_time = reactor_metrics.register_distribution("tick_time")
pending_calls_metric = reactor_metrics.register_distribution("pending_calls")
gc_time = reactor_metrics.register_distribution("gc_time", labels=["gen"])
gc_unreachable = reactor_metrics.register_counter("gc_unreachable", labels=["gen"])
reactor_metrics.register_callback(
"gc_counts", lambda: {(i,): v for i, v in enumerate(gc.get_count())}, labels=["gen"]
)
def runUntilCurrentTimer(func):
@@ -182,6 +186,22 @@ def runUntilCurrentTimer(func):
end = time.time() * 1000
tick_time.inc_by(end - start)
pending_calls_metric.inc_by(num_pending)
# Check if we need to do a manual GC (since it's been disabled), and do
# one if necessary.
threshold = gc.get_threshold()
counts = gc.get_count()
for i in (2, 1, 0):
if threshold[i] < counts[i]:
logger.info("Collecting gc %d", i)
start = time.time() * 1000
unreachable = gc.collect(i)
end = time.time() * 1000
gc_time.inc_by(end - start, i)
gc_unreachable.inc_by(unreachable, i)
return ret
return f
@@ -196,5 +216,9 @@ try:
# runUntilCurrent is called when we have pending calls. It is called once
# per iteration after fd polling.
reactor.runUntilCurrent = runUntilCurrentTimer(reactor.runUntilCurrent)
# We manually run the GC each reactor tick so that we can get some metrics
# about time spent doing GC.
gc.disable()
except AttributeError:
pass
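Since gc.disable() stops automatic collection entirely, the reactor hook above recreates CPython's generational trigger by hand: collect generation i once its allocation count exceeds its threshold, checking the oldest generation first. A standalone sketch of that check, outside the reactor:

    import gc
    import time

    gc.disable()
    threshold = gc.get_threshold()   # e.g. (700, 10, 10)
    counts = gc.get_count()          # allocations since the last collection
    for i in (2, 1, 0):
        if threshold[i] < counts[i]:
            start = time.time() * 1000
            unreachable = gc.collect(i)
            elapsed_ms = time.time() * 1000 - start
            # elapsed_ms and unreachable are the values that feed the
            # gc_time and gc_unreachable metrics above
    gc.enable()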

View File

@@ -47,9 +47,6 @@ class BaseMetric(object):
for k, v in zip(self.labels, values)])
)
def render(self):
return map_concat(self.render_item, sorted(self.counts.keys()))
class CounterMetric(BaseMetric):
"""The simplest kind of metric; one that stores a monotonically-increasing
@@ -83,6 +80,9 @@ class CounterMetric(BaseMetric):
def render_item(self, k):
return ["%s%s %d" % (self.name, self._render_key(k), self.counts[k])]
def render(self):
return map_concat(self.render_item, sorted(self.counts.keys()))
class CallbackMetric(BaseMetric):
"""A metric that returns the numeric value returned by a callback whenever
@@ -126,30 +126,30 @@ class DistributionMetric(object):
class CacheMetric(object):
"""A combination of two CounterMetrics, one to count cache hits and one to
count a total, and a callback metric to yield the current size.
__slots__ = ("name", "cache_name", "hits", "misses", "size_callback")
This metric generates standard metric name pairs, so that monitoring rules
can easily be applied to measure hit ratio."""
def __init__(self, name, size_callback, labels=[]):
def __init__(self, name, size_callback, cache_name):
self.name = name
self.cache_name = cache_name
self.hits = CounterMetric(name + ":hits", labels=labels)
self.total = CounterMetric(name + ":total", labels=labels)
self.hits = 0
self.misses = 0
self.size = CallbackMetric(
name + ":size",
callback=size_callback,
labels=labels,
)
self.size_callback = size_callback
def inc_hits(self, *values):
self.hits.inc(*values)
self.total.inc(*values)
def inc_hits(self):
self.hits += 1
def inc_misses(self, *values):
self.total.inc(*values)
def inc_misses(self):
self.misses += 1
def render(self):
return self.hits.render() + self.total.render() + self.size.render()
size = self.size_callback()
hits = self.hits
total = self.misses + self.hits
return [
"""%s:hits{name="%s"} %d""" % (self.name, self.cache_name, hits),
"""%s:total{name="%s"} %d""" % (self.name, self.cache_name, total),
"""%s:size{name="%s"} %d""" % (self.name, self.cache_name, size),
]
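The rewrite trades the generic CounterMetric/CallbackMetric pair for three hand-rendered lines per cache. Using CacheMetric as it reads after this change (metric and cache names invented), a quick sketch of what render() produces:

    metric = CacheMetric("synapse_cache", lambda: 42, "get_event")
    metric.inc_hits()
    metric.inc_misses()
    for line in metric.render():
        print(line)
    # synapse_cache:hits{name="get_event"} 1
    # synapse_cache:total{name="get_event"} 2
    # synapse_cache:size{name="get_event"} 42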

View File

@@ -14,13 +14,14 @@
# limitations under the License.
from twisted.internet import defer
from synapse.api.constants import EventTypes
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError
from synapse.util.logutils import log_function
from synapse.util.async import ObservableDeferred
from synapse.util.logcontext import PreserveLoggingContext
from synapse.types import StreamToken
from synapse.visibility import filter_events_for_client
import synapse.metrics
from collections import namedtuple
@@ -139,8 +140,6 @@ class Notifier(object):
UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000
def __init__(self, hs):
self.hs = hs
self.user_to_user_stream = {}
self.room_to_user_streams = {}
self.appservice_to_user_streams = {}
@@ -150,10 +149,8 @@ class Notifier(object):
self.pending_new_room_events = []
self.clock = hs.get_clock()
hs.get_distributor().observe(
"user_joined_room", self._user_joined_room
)
self.appservice_handler = hs.get_application_service_handler()
self.state_handler = hs.get_state_handler()
self.clock.looping_call(
self.remove_expired_streams, self.UNUSED_STREAM_EXPIRY_MS
@@ -231,9 +228,7 @@ class Notifier(object):
def _on_new_room_event(self, event, room_stream_id, extra_users=[]):
"""Notify any user streams that are interested in this room event"""
# poke any interested application service.
self.hs.get_handlers().appservice_handler.notify_interested_services(
event
)
self.appservice_handler.notify_interested_services(event)
app_streams = set()
@@ -249,6 +244,9 @@ class Notifier(object):
)
app_streams |= app_user_streams
if event.type == EventTypes.Member and event.membership == Membership.JOIN:
self._user_joined_room(event.state_key, event.room_id)
self.on_new_event(
"room_key", room_stream_id,
users=extra_users,
@@ -398,8 +396,8 @@ class Notifier(object):
)
if name == "room":
room_member_handler = self.hs.get_handlers().room_member_handler
new_events = yield room_member_handler._filter_events_for_client(
new_events = yield filter_events_for_client(
self.store,
user.to_string(),
new_events,
is_peeking=is_peeking,
@@ -448,7 +446,7 @@ class Notifier(object):
@defer.inlineCallbacks
def _is_world_readable(self, room_id):
state = yield self.hs.get_state_handler().get_current_state(
state = yield self.state_handler.get_current_state(
room_id,
EventTypes.RoomHistoryVisibility
)
@@ -484,9 +482,8 @@ class Notifier(object):
user_stream.appservice, set()
).add(user_stream)
def _user_joined_room(self, user, room_id):
user = str(user)
new_user_stream = self.user_to_user_stream.get(user)
def _user_joined_room(self, user_id, room_id):
new_user_stream = self.user_to_user_stream.get(user_id)
if new_user_stream is not None:
room_streams = self.room_to_user_streams.setdefault(room_id, set())
room_streams.add(new_user_stream)
@@ -503,13 +500,14 @@ class Notifier(object):
def wait_for_replication(self, callback, timeout):
"""Wait for an event to happen.
:param callback:
Gets called whenever an event happens. If this returns a truthy
value then ``wait_for_replication`` returns, otherwise it waits
for another event.
:param int timeout:
How many milliseconds to wait for callback return a truthy value.
:returns:
Args:
callback: Gets called whenever an event happens. If this returns a
truthy value then ``wait_for_replication`` returns, otherwise
it waits for another event.
timeout: How many milliseconds to wait for callback return a truthy
value.
Returns:
A deferred that resolves with the value returned by the callback.
"""
listener = _NotificationListener(None)

View File

@@ -13,333 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.streams.config import PaginationConfig
from synapse.types import StreamToken
from synapse.util.logcontext import LoggingContext
from synapse.util.metrics import Measure
import synapse.util.async
from .push_rule_evaluator import evaluator_for_user_id
import logging
import random
logger = logging.getLogger(__name__)
_NEXT_ID = 1
def _get_next_id():
global _NEXT_ID
_id = _NEXT_ID
_NEXT_ID += 1
return _id
# Pushers could now be moved to pull out of the event_push_actions table instead
# of listening on the event stream: this would avoid them having to run the
# rules again.
class Pusher(object):
INITIAL_BACKOFF = 1000
MAX_BACKOFF = 60 * 60 * 1000
GIVE_UP_AFTER = 24 * 60 * 60 * 1000
def __init__(self, _hs, user_id, app_id,
app_display_name, device_display_name, pushkey, pushkey_ts,
data, last_token, last_success, failing_since):
self.hs = _hs
self.evStreamHandler = self.hs.get_handlers().event_stream_handler
self.store = self.hs.get_datastore()
self.clock = self.hs.get_clock()
self.user_id = user_id
self.app_id = app_id
self.app_display_name = app_display_name
self.device_display_name = device_display_name
self.pushkey = pushkey
self.pushkey_ts = pushkey_ts
self.data = data
self.last_token = last_token
self.last_success = last_success # not actually used
self.backoff_delay = Pusher.INITIAL_BACKOFF
self.failing_since = failing_since
self.alive = True
self.badge = None
self.name = "Pusher-%d" % (_get_next_id(),)
# The last value of last_active_time that we saw
self.last_last_active_time = 0
self.has_unread = True
@defer.inlineCallbacks
def get_context_for_event(self, ev):
name_aliases = yield self.store.get_room_name_and_aliases(
ev['room_id']
)
ctx = {'aliases': name_aliases[1]}
if name_aliases[0] is not None:
ctx['name'] = name_aliases[0]
their_member_events_for_room = yield self.store.get_current_state(
room_id=ev['room_id'],
event_type='m.room.member',
state_key=ev['user_id']
)
for mev in their_member_events_for_room:
if mev.content['membership'] == 'join' and 'displayname' in mev.content:
dn = mev.content['displayname']
if dn is not None:
ctx['sender_display_name'] = dn
defer.returnValue(ctx)
@defer.inlineCallbacks
def start(self):
with LoggingContext(self.name):
if not self.last_token:
# First-time setup: get a token to start from (we can't
# just start from no token, ie. 'now'
# because we need the result to be reproducible in case
# we fail to dispatch the push)
config = PaginationConfig(from_token=None, limit='1')
chunk = yield self.evStreamHandler.get_stream(
self.user_id, config, timeout=0, affect_presence=False
)
self.last_token = chunk['end']
yield self.store.update_pusher_last_token(
self.app_id, self.pushkey, self.user_id, self.last_token
)
logger.info("New pusher %s for user %s starting from token %s",
self.pushkey, self.user_id, self.last_token)
else:
logger.info(
"Old pusher %s for user %s starting",
self.pushkey, self.user_id,
)
wait = 0
while self.alive:
try:
if wait > 0:
yield synapse.util.async.sleep(wait)
with Measure(self.clock, "push"):
yield self.get_and_dispatch()
wait = 0
except:
if wait == 0:
wait = 1
else:
wait = min(wait * 2, 1800)
logger.exception(
"Exception in pusher loop for pushkey %s. Pausing for %ds",
self.pushkey, wait
)
@defer.inlineCallbacks
def get_and_dispatch(self):
from_tok = StreamToken.from_string(self.last_token)
config = PaginationConfig(from_token=from_tok, limit='1')
timeout = (300 + random.randint(-60, 60)) * 1000
chunk = yield self.evStreamHandler.get_stream(
self.user_id, config, timeout=timeout, affect_presence=False,
only_keys=("room", "receipt",),
)
# limiting to 1 may get 1 event plus 1 presence event, so
# pick out the actual event
single_event = None
read_receipt = None
for c in chunk['chunk']:
if 'event_id' in c: # Hmmm...
single_event = c
elif c['type'] == 'm.receipt':
read_receipt = c
have_updated_badge = False
if read_receipt:
for receipt_part in read_receipt['content'].values():
if 'm.read' in receipt_part:
if self.user_id in receipt_part['m.read'].keys():
have_updated_badge = True
if not single_event:
if have_updated_badge:
yield self.update_badge()
self.last_token = chunk['end']
yield self.store.update_pusher_last_token(
self.app_id,
self.pushkey,
self.user_id,
self.last_token
)
return
if not self.alive:
return
processed = False
rule_evaluator = yield \
evaluator_for_user_id(
self.user_id, single_event['room_id'], self.store
)
actions = yield rule_evaluator.actions_for_event(single_event)
tweaks = rule_evaluator.tweaks_for_actions(actions)
if 'notify' in actions:
self.badge = yield self._get_badge_count()
rejected = yield self.dispatch_push(single_event, tweaks, self.badge)
self.has_unread = True
if isinstance(rejected, list) or isinstance(rejected, tuple):
processed = True
for pk in rejected:
if pk != self.pushkey:
# for sanity, we only remove the pushkey if it
# was the one we actually sent...
logger.warn(
("Ignoring rejected pushkey %s because we"
" didn't send it"), pk
)
else:
logger.info(
"Pushkey %s was rejected: removing",
pk
)
yield self.hs.get_pusherpool().remove_pusher(
self.app_id, pk, self.user_id
)
else:
if have_updated_badge:
yield self.update_badge()
processed = True
if not self.alive:
return
if processed:
self.backoff_delay = Pusher.INITIAL_BACKOFF
self.last_token = chunk['end']
yield self.store.update_pusher_last_token_and_success(
self.app_id,
self.pushkey,
self.user_id,
self.last_token,
self.clock.time_msec()
)
if self.failing_since:
self.failing_since = None
yield self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_id,
self.failing_since)
else:
if not self.failing_since:
self.failing_since = self.clock.time_msec()
yield self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_id,
self.failing_since
)
if (self.failing_since and
self.failing_since <
self.clock.time_msec() - Pusher.GIVE_UP_AFTER):
# we really only give up so that if the URL gets
# fixed, we don't suddenly deliver a load
# of old notifications.
logger.warn("Giving up on a notification to user %s, "
"pushkey %s",
self.user_id, self.pushkey)
self.backoff_delay = Pusher.INITIAL_BACKOFF
self.last_token = chunk['end']
yield self.store.update_pusher_last_token(
self.app_id,
self.pushkey,
self.user_id,
self.last_token
)
self.failing_since = None
yield self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_id,
self.failing_since
)
else:
logger.warn("Failed to dispatch push for user %s "
"(failing for %dms)."
"Trying again in %dms",
self.user_id,
self.clock.time_msec() - self.failing_since,
self.backoff_delay)
yield synapse.util.async.sleep(self.backoff_delay / 1000.0)
self.backoff_delay *= 2
if self.backoff_delay > Pusher.MAX_BACKOFF:
self.backoff_delay = Pusher.MAX_BACKOFF
def stop(self):
self.alive = False
def dispatch_push(self, p, tweaks, badge):
"""
Overridden by implementing classes to actually deliver the notification
Args:
p: The event to notify for as a single event from the event stream
Returns: If the notification was delivered, an array containing any
pushkeys that were rejected by the push gateway.
False if the notification could not be delivered (ie.
should be retried).
"""
pass
@defer.inlineCallbacks
def update_badge(self):
new_badge = yield self._get_badge_count()
if self.badge != new_badge:
self.badge = new_badge
yield self.send_badge(self.badge)
def send_badge(self, badge):
"""
Overridden by implementing classes to send an updated badge count
"""
pass
@defer.inlineCallbacks
def _get_badge_count(self):
invites, joins = yield defer.gatherResults([
self.store.get_invited_rooms_for_user(self.user_id),
self.store.get_rooms_for_user(self.user_id),
], consumeErrors=True)
my_receipts_by_room = yield self.store.get_receipts_for_user(
self.user_id,
"m.read",
)
badge = len(invites)
for r in joins:
if r.room_id in my_receipts_by_room:
last_unread_event_id = my_receipts_by_room[r.room_id]
notifs = yield (
self.store.get_unread_event_push_actions_by_room_for_user(
r.room_id, self.user_id, last_unread_event_id
)
)
badge += notifs["notify_count"]
defer.returnValue(badge)
class PusherConfigException(Exception):
def __init__(self, msg):

View File

@@ -15,7 +15,9 @@
from twisted.internet import defer
from .bulk_push_rule_evaluator import evaluator_for_room_id
from .bulk_push_rule_evaluator import evaluator_for_event
from synapse.util.metrics import Measure
import logging
@@ -25,6 +27,7 @@ logger = logging.getLogger(__name__)
class ActionGenerator:
def __init__(self, hs):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastore()
# really we want to get all user ids and all profile tags too,
# since we want the actions for each profile tag for every user and
@@ -34,15 +37,16 @@ class ActionGenerator:
# tag (ie. we just need all the users).
@defer.inlineCallbacks
def handle_push_actions_for_event(self, event, context, handler):
bulk_evaluator = yield evaluator_for_room_id(
event.room_id, self.hs, self.store
)
def handle_push_actions_for_event(self, event, context):
with Measure(self.clock, "handle_push_actions_for_event"):
bulk_evaluator = yield evaluator_for_event(
event, self.hs, self.store, context.current_state
)
actions_by_user = yield bulk_evaluator.action_for_event_by_user(
event, handler, context.current_state
)
actions_by_user = yield bulk_evaluator.action_for_event_by_user(
event, context.current_state
)
context.push_actions = [
(uid, actions) for uid, actions in actions_by_user.items()
]
context.push_actions = [
(uid, actions) for uid, actions in actions_by_user.items()
]

View File

@@ -19,9 +19,11 @@ import copy
def list_with_base_rules(rawrules):
"""Combine the list of rules set by the user with the default push rules
:param list rawrules: The rules the user has modified or set.
:returns: A new list with the rules set by the user combined with the
defaults.
Args:
rawrules (list): The rules the user has modified or set.
Returns:
A new list with the rules set by the user combined with the defaults.
"""
ruleslist = []
@@ -77,7 +79,7 @@ def make_base_append_rules(kind, modified_base_rules):
rules = []
if kind == 'override':
rules = BASE_APPEND_OVRRIDE_RULES
rules = BASE_APPEND_OVERRIDE_RULES
elif kind == 'underride':
rules = BASE_APPEND_UNDERRIDE_RULES
elif kind == 'content':
@@ -146,7 +148,7 @@ BASE_PREPEND_OVERRIDE_RULES = [
]
BASE_APPEND_OVRRIDE_RULES = [
BASE_APPEND_OVERRIDE_RULES = [
{
'rule_id': 'global/override/.m.rule.suppress_notices',
'conditions': [
@@ -160,7 +162,61 @@ BASE_APPEND_OVRRIDE_RULES = [
'actions': [
'dont_notify',
]
}
},
# NB. .m.rule.invite_for_me must be higher prio than .m.rule.member_event
# otherwise invites will be matched by .m.rule.member_event
{
'rule_id': 'global/override/.m.rule.invite_for_me',
'conditions': [
{
'kind': 'event_match',
'key': 'type',
'pattern': 'm.room.member',
'_id': '_member',
},
{
'kind': 'event_match',
'key': 'content.membership',
'pattern': 'invite',
'_id': '_invite_member',
},
{
'kind': 'event_match',
'key': 'state_key',
'pattern_type': 'user_id'
},
],
'actions': [
'notify',
{
'set_tweak': 'sound',
'value': 'default'
}, {
'set_tweak': 'highlight',
'value': False
}
]
},
# Will we sometimes want to know about people joining and leaving?
# Perhaps: if so, this could be expanded upon. Seems the most usual case
# is that we don't though. We add this override rule so that even if
# the room rule is set to notify, we don't get notifications about
# join/leave/avatar/displayname events.
# See also: https://matrix.org/jira/browse/SYN-607
{
'rule_id': 'global/override/.m.rule.member_event',
'conditions': [
{
'kind': 'event_match',
'key': 'type',
'pattern': 'm.room.member',
'_id': '_member',
}
],
'actions': [
'dont_notify'
]
},
]
@@ -229,57 +285,6 @@ BASE_APPEND_UNDERRIDE_RULES = [
}
]
},
{
'rule_id': 'global/underride/.m.rule.invite_for_me',
'conditions': [
{
'kind': 'event_match',
'key': 'type',
'pattern': 'm.room.member',
'_id': '_member',
},
{
'kind': 'event_match',
'key': 'content.membership',
'pattern': 'invite',
'_id': '_invite_member',
},
{
'kind': 'event_match',
'key': 'state_key',
'pattern_type': 'user_id'
},
],
'actions': [
'notify',
{
'set_tweak': 'sound',
'value': 'default'
}, {
'set_tweak': 'highlight',
'value': False
}
]
},
# This is too simple: https://matrix.org/jira/browse/SYN-607
# Removing for now
# {
# 'rule_id': 'global/underride/.m.rule.member_event',
# 'conditions': [
# {
# 'kind': 'event_match',
# 'key': 'type',
# 'pattern': 'm.room.member',
# '_id': '_member',
# }
# ],
# 'actions': [
# 'notify', {
# 'set_tweak': 'highlight',
# 'value': False
# }
# ]
# },
{
'rule_id': 'global/underride/.m.rule.message',
'conditions': [
@@ -312,7 +317,7 @@ for r in BASE_PREPEND_OVERRIDE_RULES:
r['default'] = True
BASE_RULE_IDS.add(r['rule_id'])
for r in BASE_APPEND_OVRRIDE_RULES:
for r in BASE_APPEND_OVERRIDE_RULES:
r['priority_class'] = PRIORITY_CLASS_MAP['override']
r['default'] = True
BASE_RULE_IDS.add(r['rule_id'])
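The ordering note above ("invite_for_me must be higher prio than member_event") matters because push rules are evaluated top to bottom and the first matching rule supplies the actions. A toy sketch of that first-match behaviour (predicates drastically simplified, structure invented for illustration):

    def first_matching_actions(event, rules):
        for rule_id, matches, actions in rules:
            if matches(event):
                return rule_id, actions
        return None, []

    toy_rules = [
        ('.m.rule.invite_for_me',
         lambda e: e['type'] == 'm.room.member'
         and e['content']['membership'] == 'invite',
         ['notify']),
        ('.m.rule.member_event',
         lambda e: e['type'] == 'm.room.member',
         ['dont_notify']),
    ]

    invite = {'type': 'm.room.member', 'content': {'membership': 'invite'}}
    print(first_matching_actions(invite, toy_rules))
    # ('.m.rule.invite_for_me', ['notify']) -- swap the order and the
    # invite would be swallowed by the dont_notify member_event rule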

View File

@@ -14,67 +14,66 @@
# limitations under the License.
import logging
import ujson as json
from twisted.internet import defer
from .baserules import list_with_base_rules
from .push_rule_evaluator import PushRuleEvaluatorForEvent
from synapse.api.constants import EventTypes
from synapse.api.constants import EventTypes, Membership
from synapse.visibility import filter_events_for_clients
logger = logging.getLogger(__name__)
def decode_rule_json(rule):
rule['conditions'] = json.loads(rule['conditions'])
rule['actions'] = json.loads(rule['actions'])
return rule
@defer.inlineCallbacks
def _get_rules(room_id, user_ids, store):
rules_by_user = yield store.bulk_get_push_rules(user_ids)
rules_enabled_by_user = yield store.bulk_get_push_rules_enabled(user_ids)
rules_by_user = {
uid: list_with_base_rules([
decode_rule_json(rule_list)
for rule_list in rules_by_user.get(uid, [])
])
for uid in user_ids
}
# We apply the rules-enabled map here: bulk_get_push_rules doesn't
# fetch disabled rules, but this won't account for any server default
# rules the user has disabled, so we need to do this too.
for uid in user_ids:
if uid not in rules_enabled_by_user:
continue
user_enabled_map = rules_enabled_by_user[uid]
for i, rule in enumerate(rules_by_user[uid]):
rule_id = rule['rule_id']
if rule_id in user_enabled_map:
if rule.get('enabled', True) != bool(user_enabled_map[rule_id]):
# Rules are cached across users.
rule = dict(rule)
rule['enabled'] = bool(user_enabled_map[rule_id])
rules_by_user[uid][i] = rule
rules_by_user = {k: v for k, v in rules_by_user.items() if v is not None}
defer.returnValue(rules_by_user)
@defer.inlineCallbacks
def evaluator_for_room_id(room_id, hs, store):
results = yield store.get_receipts_for_room(room_id, "m.read")
user_ids = [
row["user_id"] for row in results
if hs.is_mine_id(row["user_id"])
]
def evaluator_for_event(event, hs, store, current_state):
room_id = event.room_id
# We will also want to generate notifs for other people in the room so
# their unread counts are correct in the event stream, but to avoid
# generating them for bot / AS users etc, we only do so for people who've
# sent a read receipt into the room.
local_users_in_room = set(
e.state_key for e in current_state.values()
if e.type == EventTypes.Member and e.membership == Membership.JOIN
and hs.is_mine_id(e.state_key)
)
# users in the room who have pushers need to get push rules run because
# that's how their pushers work
if_users_with_pushers = yield store.get_if_users_have_pushers(
local_users_in_room
)
user_ids = set(
uid for uid, have_pusher in if_users_with_pushers.items() if have_pusher
)
users_with_receipts = yield store.get_users_with_read_receipts_in_room(room_id)
# any users with pushers must be ours: they have pushers
for uid in users_with_receipts:
if uid in local_users_in_room:
user_ids.add(uid)
# if this event is an invite event, we may need to run rules for the user
# who's been invited, otherwise they won't get told they've been invited
if event.type == 'm.room.member' and event.content['membership'] == 'invite':
invited_user = event.state_key
if invited_user and hs.is_mine_id(invited_user):
has_pusher = yield store.user_has_pusher(invited_user)
if has_pusher:
user_ids.add(invited_user)
rules_by_user = yield _get_rules(room_id, user_ids, store)
defer.returnValue(BulkPushRuleEvaluator(
@@ -98,16 +97,24 @@ class BulkPushRuleEvaluator:
self.store = store
@defer.inlineCallbacks
def action_for_event_by_user(self, event, handler, current_state):
def action_for_event_by_user(self, event, current_state):
actions_by_user = {}
users_dict = yield self.store.are_guests(self.rules_by_user.keys())
# None of these users can be peeking since this list of users comes
# from the set of users in the room, so we know for sure they're all
# actually in the room.
user_tuples = [
(u, False) for u in self.rules_by_user.keys()
]
filtered_by_user = yield handler.filter_events_for_clients(
users_dict.items(), [event], {event.event_id: current_state}
filtered_by_user = yield filter_events_for_clients(
self.store, user_tuples, [event], {event.event_id: current_state}
)
room_members = yield self.store.get_users_in_room(self.room_id)
room_members = set(
e.state_key for e in current_state.values()
if e.type == EventTypes.Member and e.membership == Membership.JOIN
)
evaluator = PushRuleEvaluatorForEvent(event, len(room_members))
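Both the audience selection and the member count above are now derived from current_state rather than store lookups. The extraction itself is a single pass over the state map; a self-contained sketch with a stand-in event type (fields invented, real events carry far more):

    from collections import namedtuple

    Ev = namedtuple("Ev", ["type", "membership", "state_key"])

    current_state = {
        ("m.room.member", "@a:hs"): Ev("m.room.member", "join", "@a:hs"),
        ("m.room.member", "@b:hs"): Ev("m.room.member", "leave", "@b:hs"),
        ("m.room.name", ""): Ev("m.room.name", None, ""),
    }

    room_members = set(
        e.state_key for e in current_state.values()
        if e.type == "m.room.member" and e.membership == "join"
    )
    print(room_members)  # only joined members are counted: @a:hs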

View File

@@ -13,29 +13,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.push.baserules import list_with_base_rules
from synapse.push.rulekinds import (
PRIORITY_CLASS_MAP, PRIORITY_CLASS_INVERSE_MAP
)
import copy
import simplejson as json
def format_push_rules_for_user(user, rawrules, enabled_map):
def format_push_rules_for_user(user, ruleslist):
"""Converts a list of rawrules and a enabled map into nested dictionaries
to match the Matrix client-server format for push rules"""
ruleslist = []
for rawrule in rawrules:
rule = dict(rawrule)
rule["conditions"] = json.loads(rawrule["conditions"])
rule["actions"] = json.loads(rawrule["actions"])
ruleslist.append(rule)
# We're going to be mutating this a lot, so do a deep copy
ruleslist = copy.deepcopy(list_with_base_rules(ruleslist))
ruleslist = copy.deepcopy(ruleslist)
rules = {'global': {}, 'device': {}}
@@ -60,9 +50,7 @@ def format_push_rules_for_user(user, rawrules, enabled_map):
template_rule = _rule_to_template(r)
if template_rule:
if r['rule_id'] in enabled_map:
template_rule['enabled'] = enabled_map[r['rule_id']]
elif 'enabled' in r:
if 'enabled' in r:
template_rule['enabled'] = r['enabled']
else:
template_rule['enabled'] = True

synapse/push/emailpusher.py (new file, 283 lines)
View File

@@ -0,0 +1,283 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer, reactor
import logging
from synapse.util.metrics import Measure
from synapse.util.logcontext import LoggingContext
from mailer import Mailer
logger = logging.getLogger(__name__)
# The amount of time we always wait before ever emailing about a notification
# (to give the user a chance to respond to other push or notice the window)
DELAY_BEFORE_MAIL_MS = 10 * 60 * 1000
# THROTTLE is the minimum time between mail notifications sent for a given room.
# Each room maintains its own throttle counter, but each new mail notification
# sends the pending notifications for all rooms.
THROTTLE_START_MS = 10 * 60 * 1000
THROTTLE_MAX_MS = 24 * 60 * 60 * 1000 # 24h
# THROTTLE_MULTIPLIER = 6 # 10 mins, 1 hour, 6 hours, 24 hours
THROTTLE_MULTIPLIER = 144 # 10 mins, 24 hours - i.e. jump straight to 1 day
# If no event triggers a notification for this long after the previous,
# the throttle is released.
# 12 hours - a gap of 12 hours in conversation is surely enough to merit a new
# notification when things get going again...
THROTTLE_RESET_AFTER_MS = (12 * 60 * 60 * 1000)
# does each email include all unread notifs, or just the ones which have happened
# since the last mail?
# XXX: this is currently broken as it includes ones from parted rooms(!)
INCLUDE_ALL_UNREAD_NOTIFS = False
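With these constants the throttle schedule is deliberately short: the first mail for a room can go out after the 10 minute delay, and the next multiplication jumps straight past every intermediate step to the 24 hour cap. A worked sketch of the progression, mirroring the min/multiplier logic used in sent_notif_update_throttle below:

    THROTTLE_START_MS = 10 * 60 * 1000
    THROTTLE_MAX_MS = 24 * 60 * 60 * 1000
    THROTTLE_MULTIPLIER = 144

    delay_ms, schedule = 0, []
    for _ in range(4):
        if delay_ms == 0:
            delay_ms = THROTTLE_START_MS
        else:
            delay_ms = min(delay_ms * THROTTLE_MULTIPLIER, THROTTLE_MAX_MS)
        schedule.append(delay_ms / 60000)
    print(schedule)  # [10, 1440, 1440, 1440] minutes: ten minutes, then a day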
class EmailPusher(object):
"""
A pusher that sends email notifications about events (approximately)
when they happen.
This shares quite a bit of code with httpusher: it would be good to
factor out the common parts
"""
def __init__(self, hs, pusherdict):
self.hs = hs
self.store = self.hs.get_datastore()
self.clock = self.hs.get_clock()
self.pusher_id = pusherdict['id']
self.user_id = pusherdict['user_name']
self.app_id = pusherdict['app_id']
self.email = pusherdict['pushkey']
self.last_stream_ordering = pusherdict['last_stream_ordering']
self.timed_call = None
self.throttle_params = None
# See httppusher
self.max_stream_ordering = None
self.processing = False
if self.hs.config.email_enable_notifs:
if 'data' in pusherdict and 'brand' in pusherdict['data']:
app_name = pusherdict['data']['brand']
else:
app_name = self.hs.config.email_app_name
self.mailer = Mailer(self.hs, app_name)
else:
self.mailer = None
@defer.inlineCallbacks
def on_started(self):
if self.mailer is not None:
self.throttle_params = yield self.store.get_throttle_params_by_room(
self.pusher_id
)
yield self._process()
def on_stop(self):
if self.timed_call:
self.timed_call.cancel()
@defer.inlineCallbacks
def on_new_notifications(self, min_stream_ordering, max_stream_ordering):
self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
yield self._process()
def on_new_receipts(self, min_stream_id, max_stream_id):
# We could wake up and cancel the timer but there tend to be quite a
# lot of read receipts so it's probably less work to just let the
# timer fire
return defer.succeed(None)
@defer.inlineCallbacks
def on_timer(self):
self.timed_call = None
yield self._process()
@defer.inlineCallbacks
def _process(self):
if self.processing:
return
with LoggingContext("emailpush._process"):
with Measure(self.clock, "emailpush._process"):
try:
self.processing = True
# if the max ordering changes while we're running _unsafe_process,
# call it again, and so on until we've caught up.
while True:
starting_max_ordering = self.max_stream_ordering
try:
yield self._unsafe_process()
except:
logger.exception("Exception processing notifs")
if self.max_stream_ordering == starting_max_ordering:
break
finally:
self.processing = False
@defer.inlineCallbacks
def _unsafe_process(self):
"""
Main logic of the push loop without the wrapper function that sets
up logging, measures and guards against multiple instances of it
being run.
"""
start = 0 if INCLUDE_ALL_UNREAD_NOTIFS else self.last_stream_ordering
unprocessed = yield self.store.get_unread_push_actions_for_user_in_range(
self.user_id, start, self.max_stream_ordering
)
soonest_due_at = None
for push_action in unprocessed:
received_at = push_action['received_ts']
if received_at is None:
received_at = 0
notif_ready_at = received_at + DELAY_BEFORE_MAIL_MS
room_ready_at = self.room_ready_to_notify_at(
push_action['room_id']
)
should_notify_at = max(notif_ready_at, room_ready_at)
if should_notify_at < self.clock.time_msec():
# one of our notifications is ready for sending, so we send
# *one* email updating the user on their notifications,
# we then consider all previously outstanding notifications
# to be delivered.
reason = {
'room_id': push_action['room_id'],
'now': self.clock.time_msec(),
'received_at': received_at,
'delay_before_mail_ms': DELAY_BEFORE_MAIL_MS,
'last_sent_ts': self.get_room_last_sent_ts(push_action['room_id']),
'throttle_ms': self.get_room_throttle_ms(push_action['room_id']),
}
yield self.send_notification(unprocessed, reason)
yield self.save_last_stream_ordering_and_success(max([
ea['stream_ordering'] for ea in unprocessed
]))
# we update the throttle on all the possible unprocessed push actions
for ea in unprocessed:
yield self.sent_notif_update_throttle(
ea['room_id'], ea
)
break
else:
if soonest_due_at is None or should_notify_at < soonest_due_at:
soonest_due_at = should_notify_at
if self.timed_call is not None:
self.timed_call.cancel()
self.timed_call = None
if soonest_due_at is not None:
self.timed_call = reactor.callLater(
self.seconds_until(soonest_due_at), self.on_timer
)
@defer.inlineCallbacks
def save_last_stream_ordering_and_success(self, last_stream_ordering):
self.last_stream_ordering = last_stream_ordering
yield self.store.update_pusher_last_stream_ordering_and_success(
self.app_id, self.email, self.user_id,
last_stream_ordering, self.clock.time_msec()
)
def seconds_until(self, ts_msec):
return (ts_msec - self.clock.time_msec()) / 1000
def get_room_throttle_ms(self, room_id):
if room_id in self.throttle_params:
return self.throttle_params[room_id]["throttle_ms"]
else:
return 0
def get_room_last_sent_ts(self, room_id):
if room_id in self.throttle_params:
return self.throttle_params[room_id]["last_sent_ts"]
else:
return 0
def room_ready_to_notify_at(self, room_id):
"""
Determines when, given the per-room throttle, we may next send an
email notification for the given room
Returns: The timestamp when we are next allowed to send an email notif
for this room
"""
last_sent_ts = self.get_room_last_sent_ts(room_id)
throttle_ms = self.get_room_throttle_ms(room_id)
may_send_at = last_sent_ts + throttle_ms
return may_send_at
@defer.inlineCallbacks
def sent_notif_update_throttle(self, room_id, notified_push_action):
# We have sent a notification, so update the throttle accordingly.
# If the event that triggered the notif happened more than
# THROTTLE_RESET_AFTER_MS after the previous one that triggered a
# notif, we release the throttle. Otherwise, the throttle is increased.
time_of_previous_notifs = yield self.store.get_time_of_last_push_action_before(
notified_push_action['stream_ordering']
)
time_of_this_notifs = notified_push_action['received_ts']
if time_of_previous_notifs is not None and time_of_this_notifs is not None:
gap = time_of_this_notifs - time_of_previous_notifs
else:
# if we don't know the arrival time of one of the notifs (it was not
# stored prior to email notification code) then assume a gap of
# zero which will just not reset the throttle
gap = 0
current_throttle_ms = self.get_room_throttle_ms(room_id)
if gap > THROTTLE_RESET_AFTER_MS:
new_throttle_ms = THROTTLE_START_MS
else:
if current_throttle_ms == 0:
new_throttle_ms = THROTTLE_START_MS
else:
new_throttle_ms = min(
current_throttle_ms * THROTTLE_MULTIPLIER,
THROTTLE_MAX_MS
)
self.throttle_params[room_id] = {
"last_sent_ts": self.clock.time_msec(),
"throttle_ms": new_throttle_ms
}
yield self.store.set_throttle_params(
self.pusher_id, room_id, self.throttle_params[room_id]
)
@defer.inlineCallbacks
def send_notification(self, push_actions, reason):
logger.info("Sending notif email for user %r", self.user_id)
yield self.mailer.send_notification_mail(
self.app_id, self.user_id, self.email, push_actions, reason
)

View File

@@ -13,60 +13,242 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.push import Pusher, PusherConfigException
from synapse.push import PusherConfigException
from twisted.internet import defer
from twisted.internet import defer, reactor
import logging
import push_rule_evaluator
import push_tools
from synapse.util.logcontext import LoggingContext
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
class HttpPusher(Pusher):
def __init__(self, _hs, user_id, app_id,
app_display_name, device_display_name, pushkey, pushkey_ts,
data, last_token, last_success, failing_since):
super(HttpPusher, self).__init__(
_hs,
user_id,
app_id,
app_display_name,
device_display_name,
pushkey,
pushkey_ts,
data,
last_token,
last_success,
failing_since
class HttpPusher(object):
INITIAL_BACKOFF_SEC = 1 # in seconds because that's what Twisted takes
MAX_BACKOFF_SEC = 60 * 60
# This one's in ms because we compare it against the clock
GIVE_UP_AFTER_MS = 24 * 60 * 60 * 1000
def __init__(self, hs, pusherdict):
self.hs = hs
self.store = self.hs.get_datastore()
self.clock = self.hs.get_clock()
self.state_handler = self.hs.get_state_handler()
self.user_id = pusherdict['user_name']
self.app_id = pusherdict['app_id']
self.app_display_name = pusherdict['app_display_name']
self.device_display_name = pusherdict['device_display_name']
self.pushkey = pusherdict['pushkey']
self.pushkey_ts = pusherdict['ts']
self.data = pusherdict['data']
self.last_stream_ordering = pusherdict['last_stream_ordering']
self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
self.failing_since = pusherdict['failing_since']
self.timed_call = None
self.processing = False
# This is the highest stream ordering we know it's safe to process.
# When new events arrive, we'll be given a window of new events: we
# should honour this rather than just looking for anything higher
# because of potential out-of-order event serialisation. This starts
# off as None though as we don't know any better.
self.max_stream_ordering = None
if 'data' not in pusherdict:
raise PusherConfigException(
"No 'data' key for HTTP pusher"
)
self.data = pusherdict['data']
self.name = "%s/%s/%s" % (
pusherdict['user_name'],
pusherdict['app_id'],
pusherdict['pushkey'],
)
if 'url' not in data:
if 'url' not in self.data:
raise PusherConfigException(
"'url' required in data for HTTP pusher"
)
self.url = data['url']
self.http_client = _hs.get_simple_http_client()
self.url = self.data['url']
self.http_client = hs.get_simple_http_client()
self.data_minus_url = {}
self.data_minus_url.update(self.data)
del self.data_minus_url['url']
@defer.inlineCallbacks
def _build_notification_dict(self, event, tweaks, badge):
# we probably do not want to push for every presence update
# (we may want to be able to set up notifications when specific
# people sign in, but we'd want to only deliver the pertinent ones)
# Actually, presence events will not get this far now because we
# need to filter them out in the main Pusher code.
if 'event_id' not in event:
defer.returnValue(None)
def on_started(self):
yield self._process()
ctx = yield self.get_context_for_event(event)
@defer.inlineCallbacks
def on_new_notifications(self, min_stream_ordering, max_stream_ordering):
self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
yield self._process()
@defer.inlineCallbacks
def on_new_receipts(self, min_stream_id, max_stream_id):
# Note that the min here shouldn't be relied upon to be accurate.
# We could check the receipts are actually m.read receipts here,
# but currently that's the only type of receipt anyway...
with LoggingContext("push.on_new_receipts"):
with Measure(self.clock, "push.on_new_receipts"):
badge = yield push_tools.get_badge_count(
self.hs.get_datastore(), self.user_id
)
yield self._send_badge(badge)
@defer.inlineCallbacks
def on_timer(self):
yield self._process()
def on_stop(self):
if self.timed_call:
self.timed_call.cancel()
@defer.inlineCallbacks
def _process(self):
if self.processing:
return
with LoggingContext("push._process"):
with Measure(self.clock, "push._process"):
try:
self.processing = True
# if the max ordering changes while we're running _unsafe_process,
# call it again, and so on until we've caught up.
while True:
starting_max_ordering = self.max_stream_ordering
try:
yield self._unsafe_process()
except:
logger.exception("Exception processing notifs")
if self.max_stream_ordering == starting_max_ordering:
break
finally:
self.processing = False
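# The guard above is the crux of _process: bail if a run is already in
# flight, otherwise loop until max_stream_ordering stops moving, so work
# that arrives mid-run is never dropped. A synchronous toy version of the
# same pattern (hypothetical names, no Twisted):
class ToyWorker(object):
    def __init__(self):
        self.processing = False
        self.max_stream_ordering = 0
        self.processed_up_to = 0

    def process(self):
        if self.processing:
            return  # the in-flight run will pick up the new work
        try:
            self.processing = True
            while True:
                starting_max = self.max_stream_ordering
                self.processed_up_to = self.max_stream_ordering  # "the work"
                if self.max_stream_ordering == starting_max:
                    break  # nothing new arrived while we were working
        finally:
            self.processing = False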
@defer.inlineCallbacks
def _unsafe_process(self):
"""
Looks for unsent notifications and dispatches them, in order
Never call this directly: use _process which will only allow this to
run once per pusher.
"""
unprocessed = yield self.store.get_unread_push_actions_for_user_in_range(
self.user_id, self.last_stream_ordering, self.max_stream_ordering
)
for push_action in unprocessed:
processed = yield self._process_one(push_action)
if processed:
self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
self.last_stream_ordering = push_action['stream_ordering']
yield self.store.update_pusher_last_stream_ordering_and_success(
self.app_id, self.pushkey, self.user_id,
self.last_stream_ordering,
self.clock.time_msec()
)
if self.failing_since:
self.failing_since = None
yield self.store.update_pusher_failing_since(
self.app_id, self.pushkey, self.user_id,
self.failing_since
)
else:
if not self.failing_since:
self.failing_since = self.clock.time_msec()
yield self.store.update_pusher_failing_since(
self.app_id, self.pushkey, self.user_id,
self.failing_since
)
if (
self.failing_since and
self.failing_since <
self.clock.time_msec() - HttpPusher.GIVE_UP_AFTER_MS
):
# we really only give up so that if the URL gets
# fixed, we don't suddenly deliver a load
# of old notifications.
logger.warn("Giving up on a notification to user %s, "
"pushkey %s",
self.user_id, self.pushkey)
self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
self.last_stream_ordering = push_action['stream_ordering']
yield self.store.update_pusher_last_stream_ordering(
self.app_id,
self.pushkey,
self.user_id,
self.last_stream_ordering
)
self.failing_since = None
yield self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_id,
self.failing_since
)
else:
logger.info("Push failed: delaying for %ds", self.backoff_delay)
self.timed_call = reactor.callLater(self.backoff_delay, self.on_timer)
self.backoff_delay = min(self.backoff_delay * 2, self.MAX_BACKOFF_SEC)
break
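# The failure path above is plain capped exponential backoff: schedule a
# retry in backoff_delay seconds, then double it up to MAX_BACKOFF_SEC.
# A sketch of the resulting retry schedule under this class's constants:
delay, delays = 1, []                     # INITIAL_BACKOFF_SEC
for _ in range(14):
    delays.append(delay)
    delay = min(delay * 2, 60 * 60)       # MAX_BACKOFF_SEC
print(delays)
# [1, 2, 4, 8, ..., 2048, 3600, 3600]: doubling, capped at one hour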
@defer.inlineCallbacks
def _process_one(self, push_action):
if 'notify' not in push_action['actions']:
defer.returnValue(True)
tweaks = push_rule_evaluator.tweaks_for_actions(push_action['actions'])
badge = yield push_tools.get_badge_count(self.hs.get_datastore(), self.user_id)
event = yield self.store.get_event(push_action['event_id'], allow_none=True)
if event is None:
defer.returnValue(True) # It's been redacted
rejected = yield self.dispatch_push(event, tweaks, badge)
if rejected is False:
defer.returnValue(False)
if isinstance(rejected, list) or isinstance(rejected, tuple):
for pk in rejected:
if pk != self.pushkey:
# for sanity, we only remove the pushkey if it
# was the one we actually sent...
logger.warn(
("Ignoring rejected pushkey %s because we"
" didn't send it"), pk
)
else:
logger.info(
"Pushkey %s was rejected: removing",
pk
)
yield self.hs.remove_pusher(
self.app_id, pk, self.user_id
)
defer.returnValue(True)
@defer.inlineCallbacks
def _build_notification_dict(self, event, tweaks, badge):
ctx = yield push_tools.get_context_for_event(
self.state_handler, event, self.user_id
)
d = {
'notification': {
'id': event['event_id'],
'room_id': event['room_id'],
'type': event['type'],
'sender': event['user_id'],
'id': event.event_id, # deprecated: remove soon
'event_id': event.event_id,
'room_id': event.room_id,
'type': event.type,
'sender': event.user_id,
'counts': { # -- we don't mark messages as read yet so
# we have no way of knowing
# Just set the badge to 1 until we have read receipts
@@ -84,14 +266,14 @@ class HttpPusher(Pusher):
]
}
}
if event['type'] == 'm.room.member':
d['notification']['membership'] = event['content']['membership']
d['notification']['user_is_target'] = event['state_key'] == self.user_id
if event.type == 'm.room.member':
d['notification']['membership'] = event.content['membership']
d['notification']['user_is_target'] = event.state_key == self.user_id
if 'content' in event:
d['notification']['content'] = event['content']
d['notification']['content'] = event.content
if len(ctx['aliases']):
d['notification']['room_alias'] = ctx['aliases'][0]
# We no longer send aliases separately; instead, we send the
# human-readable name of the room, which may be an alias.
if 'sender_display_name' in ctx and len(ctx['sender_display_name']) > 0:
d['notification']['sender_display_name'] = ctx['sender_display_name']
if 'name' in ctx and len(ctx['name']) > 0:
@@ -115,7 +297,7 @@ class HttpPusher(Pusher):
defer.returnValue(rejected)
@defer.inlineCallbacks
def send_badge(self, badge):
def _send_badge(self, badge):
logger.info("Sending updated badge count %d to %r", badge, self.user_id)
d = {
'notification': {

synapse/push/mailer.py (new file, 515 lines)
View File

@@ -0,0 +1,515 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from twisted.mail.smtp import sendmail
import email.utils
import email.mime.multipart
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from synapse.util.async import concurrently_execute
from synapse.util.presentable_names import (
calculate_room_name, name_from_member_event, descriptor_from_member_events
)
from synapse.types import UserID
from synapse.api.errors import StoreError
from synapse.api.constants import EventTypes
from synapse.visibility import filter_events_for_client
import jinja2
import bleach
import time
import urllib
import logging
logger = logging.getLogger(__name__)
MESSAGE_FROM_PERSON_IN_ROOM = "You have a message on %(app)s from %(person)s " \
"in the %(room)s room..."
MESSAGE_FROM_PERSON = "You have a message on %(app)s from %(person)s..."
MESSAGES_FROM_PERSON = "You have messages on %(app)s from %(person)s..."
MESSAGES_IN_ROOM = "You have messages on %(app)s in the %(room)s room..."
MESSAGES_IN_ROOM_AND_OTHERS = \
"You have messages on %(app)s in the %(room)s room and others..."
MESSAGES_FROM_PERSON_AND_OTHERS = \
"You have messages on %(app)s from %(person)s and others..."
INVITE_FROM_PERSON_TO_ROOM = "%(person)s has invited you to join the " \
"%(room)s room on %(app)s..."
INVITE_FROM_PERSON = "%(person)s has invited you to chat on %(app)s..."
CONTEXT_BEFORE = 1
CONTEXT_AFTER = 1
# From https://github.com/matrix-org/matrix-react-sdk/blob/master/src/HtmlUtils.js
ALLOWED_TAGS = [
'font', # custom to matrix for IRC-style font coloring
'del', # for markdown
# deliberately no h1/h2 to stop people shouting.
'h3', 'h4', 'h5', 'h6', 'blockquote', 'p', 'a', 'ul', 'ol',
'nl', 'li', 'b', 'i', 'u', 'strong', 'em', 'strike', 'code', 'hr', 'br', 'div',
'table', 'thead', 'caption', 'tbody', 'tr', 'th', 'td', 'pre'
]
ALLOWED_ATTRS = {
# custom ones first:
"font": ["color"], # custom to matrix
"a": ["href", "name", "target"], # remote target: custom to matrix
# We don't currently allow img itself by default, but this
# would make sense if we did
"img": ["src"],
}
# When bleach releases a version with this option, we can specify schemes
# ALLOWED_SCHEMES = ["http", "https", "ftp", "mailto"]
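The two allow-lists above are in the shape bleach's sanitiser expects; a hedged sketch of them in use (lists trimmed and sample HTML invented; the actual call site in this file is not shown in the diff):

    import bleach

    tags = ['b', 'i', 'a']                      # trimmed-down stand-ins
    attrs = {'a': ['href', 'name', 'target']}

    dirty = '<script>alert(1)</script><b onclick="x()">hi</b>'
    print(bleach.clean(dirty, tags=tags, attributes=attrs))
    # &lt;script&gt;alert(1)&lt;/script&gt;<b>hi</b> -- disallowed tags
    # are escaped and disallowed attributes dropped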
class Mailer(object):
def __init__(self, hs, app_name):
self.hs = hs
self.store = self.hs.get_datastore()
self.auth_handler = self.hs.get_auth_handler()
self.state_handler = self.hs.get_state_handler()
loader = jinja2.FileSystemLoader(self.hs.config.email_template_dir)
self.app_name = app_name
logger.info("Created Mailer for app_name %s" % app_name)
env = jinja2.Environment(loader=loader)
env.filters["format_ts"] = format_ts_filter
env.filters["mxc_to_http"] = self.mxc_to_http_filter
self.notif_template_html = env.get_template(
self.hs.config.email_notif_template_html
)
self.notif_template_text = env.get_template(
self.hs.config.email_notif_template_text
)
@defer.inlineCallbacks
def send_notification_mail(self, app_id, user_id, email_address,
push_actions, reason):
try:
from_string = self.hs.config.email_notif_from % {
"app": self.app_name
}
except TypeError:
from_string = self.hs.config.email_notif_from
raw_from = email.utils.parseaddr(from_string)[1]
raw_to = email.utils.parseaddr(email_address)[1]
if raw_to == '':
raise RuntimeError("Invalid 'to' address")
rooms_in_order = deduped_ordered_list(
[pa['room_id'] for pa in push_actions]
)
notif_events = yield self.store.get_events(
[pa['event_id'] for pa in push_actions]
)
notifs_by_room = {}
for pa in push_actions:
notifs_by_room.setdefault(pa["room_id"], []).append(pa)
# collect the current state for all the rooms in which we have
# notifications
state_by_room = {}
try:
user_display_name = yield self.store.get_profile_displayname(
UserID.from_string(user_id).localpart
)
if user_display_name is None:
user_display_name = user_id
except StoreError:
user_display_name = user_id
@defer.inlineCallbacks
def _fetch_room_state(room_id):
room_state = yield self.state_handler.get_current_state(room_id)
state_by_room[room_id] = room_state
# Run at most 3 of these at once: sync does 10 at a time but email
# notifs are much less realtime than sync so we can afford to wait a bit.
yield concurrently_execute(_fetch_room_state, rooms_in_order, 3)
# actually sort our so-called rooms_in_order list, most recent room first
rooms_in_order.sort(
key=lambda r: -(notifs_by_room[r][-1]['received_ts'] or 0)
)
rooms = []
for r in rooms_in_order:
roomvars = yield self.get_room_vars(
r, user_id, notifs_by_room[r], notif_events, state_by_room[r]
)
rooms.append(roomvars)
reason['room_name'] = calculate_room_name(
state_by_room[reason['room_id']], user_id, fallback_to_members=True
)
summary_text = self.make_summary_text(
notifs_by_room, state_by_room, notif_events, user_id, reason
)
template_vars = {
"user_display_name": user_display_name,
"unsubscribe_link": self.make_unsubscribe_link(
user_id, app_id, email_address
),
"summary_text": summary_text,
"app_name": self.app_name,
"rooms": rooms,
"reason": reason,
}
html_text = self.notif_template_html.render(**template_vars)
html_part = MIMEText(html_text, "html", "utf8")
plain_text = self.notif_template_text.render(**template_vars)
text_part = MIMEText(plain_text, "plain", "utf8")
multipart_msg = MIMEMultipart('alternative')
multipart_msg['Subject'] = "[%s] %s" % (self.app_name, summary_text)
multipart_msg['From'] = from_string
multipart_msg['To'] = email_address
multipart_msg['Date'] = email.utils.formatdate()
multipart_msg['Message-ID'] = email.utils.make_msgid()
multipart_msg.attach(text_part)
multipart_msg.attach(html_part)
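# The assembled message is multipart/alternative with the plain-text
# part attached first and the HTML part second; clients that can render
# HTML generally prefer the last alternative they support.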
logger.info("Sending email push notification to %s" % email_address)
# logger.debug(html_text)
yield sendmail(
self.hs.config.email_smtp_host,
raw_from, raw_to, multipart_msg.as_string(),
port=self.hs.config.email_smtp_port
)
@defer.inlineCallbacks
def get_room_vars(self, room_id, user_id, notifs, notif_events, room_state):
my_member_event = room_state[("m.room.member", user_id)]
is_invite = my_member_event.content["membership"] == "invite"
room_vars = {
"title": calculate_room_name(room_state, user_id),
"hash": string_ordinal_total(room_id), # See sender avatar hash
"notifs": [],
"invite": is_invite,
"link": self.make_room_link(room_id),
}
if not is_invite:
for n in notifs:
notifvars = yield self.get_notif_vars(
n, user_id, notif_events[n['event_id']], room_state
)
# merge overlapping notifs together.
# relies on the notifs being in chronological order.
merge = False
if room_vars['notifs'] and 'messages' in room_vars['notifs'][-1]:
prev_messages = room_vars['notifs'][-1]['messages']
for message in notifvars['messages']:
pm = filter(lambda m: m['id'] == message['id'], prev_messages)
if pm:
if not message["is_historical"]:
pm[0]["is_historical"] = False
merge = True
elif merge:
# we're merging, so append any remaining messages
# in this notif to the previous one
prev_messages.append(message)
if not merge:
room_vars['notifs'].append(notifvars)
defer.returnValue(room_vars)
@defer.inlineCallbacks
def get_notif_vars(self, notif, user_id, notif_event, room_state):
results = yield self.store.get_events_around(
notif['room_id'], notif['event_id'],
before_limit=CONTEXT_BEFORE, after_limit=CONTEXT_AFTER
)
ret = {
"link": self.make_notif_link(notif),
"ts": notif['received_ts'],
"messages": [],
}
the_events = yield filter_events_for_client(
self.store, user_id, results["events_before"]
)
the_events.append(notif_event)
for event in the_events:
messagevars = self.get_message_vars(notif, event, room_state)
if messagevars is not None:
ret['messages'].append(messagevars)
defer.returnValue(ret)
def get_message_vars(self, notif, event, room_state):
if event.type != EventTypes.Message:
return None
sender_state_event = room_state[("m.room.member", event.sender)]
sender_name = name_from_member_event(sender_state_event)
sender_avatar_url = sender_state_event.content.get("avatar_url")
# 'hash' for deterministically picking default images: use
# sender_hash % the number of default images to choose from
sender_hash = string_ordinal_total(event.sender)
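# e.g. with a hypothetical list of default avatars:
#   default_avatars[sender_hash % len(default_avatars)]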
msgtype = event.content.get("msgtype")
ret = {
"msgtype": msgtype,
"is_historical": event.event_id != notif['event_id'],
"id": event.event_id,
"ts": event.origin_server_ts,
"sender_name": sender_name,
"sender_avatar_url": sender_avatar_url,
"sender_hash": sender_hash,
}
if msgtype == "m.text":
self.add_text_message_vars(ret, event)
elif msgtype == "m.image":
self.add_image_message_vars(ret, event)
if "body" in event.content:
ret["body_text_plain"] = event.content["body"]
return ret
def add_text_message_vars(self, messagevars, event):
msgformat = event.content.get("format")
messagevars["format"] = msgformat
formatted_body = event.content.get("formatted_body")
body = event.content.get("body")
if msgformat == "org.matrix.custom.html" and formatted_body:
messagevars["body_text_html"] = safe_markup(formatted_body)
elif body:
messagevars["body_text_html"] = safe_text(body)
return messagevars
def add_image_message_vars(self, messagevars, event):
messagevars["image_url"] = event.content["url"]
return messagevars
def make_summary_text(self, notifs_by_room, state_by_room,
notif_events, user_id, reason):
if len(notifs_by_room) == 1:
# Only one room has new stuff
room_id = notifs_by_room.keys()[0]
# If the room has some kind of name, use it, but we don't
# want the generated-from-names one here, otherwise we'll
# end up with "new message from Bob in the Bob room"
room_name = calculate_room_name(
state_by_room[room_id], user_id, fallback_to_members=False
)
my_member_event = state_by_room[room_id][("m.room.member", user_id)]
if my_member_event.content["membership"] == "invite":
inviter_member_event = state_by_room[room_id][
("m.room.member", my_member_event.sender)
]
inviter_name = name_from_member_event(inviter_member_event)
if room_name is None:
return INVITE_FROM_PERSON % {
"person": inviter_name,
"app": self.app_name
}
else:
return INVITE_FROM_PERSON_TO_ROOM % {
"person": inviter_name,
"room": room_name,
"app": self.app_name,
}
sender_name = None
if len(notifs_by_room[room_id]) == 1:
# There is just the one notification, so give some detail
event = notif_events[notifs_by_room[room_id][0]["event_id"]]
if ("m.room.member", event.sender) in state_by_room[room_id]:
state_event = state_by_room[room_id][("m.room.member", event.sender)]
sender_name = name_from_member_event(state_event)
if sender_name is not None and room_name is not None:
return MESSAGE_FROM_PERSON_IN_ROOM % {
"person": sender_name,
"room": room_name,
"app": self.app_name,
}
elif sender_name is not None:
return MESSAGE_FROM_PERSON % {
"person": sender_name,
"app": self.app_name,
}
else:
# There's more than one notification for this room, so just
# say there are several
if room_name is not None:
return MESSAGES_IN_ROOM % {
"room": room_name,
"app": self.app_name,
}
else:
# If the room doesn't have a name, say who the messages
# are from explicitly to avoid "messages in the Bob room"
sender_ids = list(set([
notif_events[n['event_id']].sender
for n in notifs_by_room[room_id]
]))
return MESSAGES_FROM_PERSON % {
"person": descriptor_from_member_events([
state_by_room[room_id][("m.room.member", s)]
for s in sender_ids
]),
"app": self.app_name,
}
else:
# Stuff's happened in multiple different rooms
# ...but we still refer to the 'reason' room which triggered the mail
if reason['room_name'] is not None:
return MESSAGES_IN_ROOM_AND_OTHERS % {
"room": reason['room_name'],
"app": self.app_name,
}
else:
# If the reason room doesn't have a name, say who the messages
# are from explicitly to avoid "messages in the Bob room"
sender_ids = list(set([
notif_events[n['event_id']].sender
for n in notifs_by_room[reason['room_id']]
]))
return MESSAGES_FROM_PERSON_AND_OTHERS % {
"person": descriptor_from_member_events([
state_by_room[reason['room_id']][("m.room.member", s)]
for s in sender_ids
]),
"app": self.app_name,
}
def make_room_link(self, room_id):
# need /beta for Universal Links to work on iOS
if self.app_name == "Vector":
return "https://vector.im/beta/#/room/%s" % (room_id,)
else:
return "https://matrix.to/#/%s" % (room_id,)
def make_notif_link(self, notif):
# need /beta for Universal Links to work on iOS
if self.app_name == "Vector":
return "https://vector.im/beta/#/room/%s/%s" % (
notif['room_id'], notif['event_id']
)
else:
return "https://matrix.to/#/%s/%s" % (
notif['room_id'], notif['event_id']
)
def make_unsubscribe_link(self, user_id, app_id, email_address):
params = {
"access_token": self.auth_handler.generate_delete_pusher_token(user_id),
"app_id": app_id,
"pushkey": email_address,
}
# XXX: make r0 once API is stable
return "%s_matrix/client/unstable/pushers/remove?%s" % (
self.hs.config.public_baseurl,
urllib.urlencode(params),
)
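# e.g. the generated link looks like (hypothetical values; email pushers
# conventionally use the "m.email" app_id):
#   https://example.com/_matrix/client/unstable/pushers/remove?access_token=...&app_id=m.email&pushkey=alice%40example.com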
def mxc_to_http_filter(self, value, width, height, resize_method="crop"):
if value[0:6] != "mxc://":
return ""
serverAndMediaId = value[6:]
fragment = None
if '#' in serverAndMediaId:
(serverAndMediaId, fragment) = serverAndMediaId.split('#', 1)
fragment = "#" + fragment
params = {
"width": width,
"height": height,
"method": resize_method,
}
return "%s_matrix/media/v1/thumbnail/%s?%s%s" % (
self.hs.config.public_baseurl,
serverAndMediaId,
urllib.urlencode(params),
fragment or "",
)
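# Illustrative example, assuming public_baseurl is "https://example.com/"
# (query parameter order may vary):
#   mxc_to_http_filter("mxc://example.org/abc123", 32, 32)
#   -> "https://example.com/_matrix/media/v1/thumbnail/example.org/abc123?width=32&height=32&method=crop"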
def safe_markup(raw_html):
return jinja2.Markup(bleach.linkify(bleach.clean(
raw_html, tags=ALLOWED_TAGS, attributes=ALLOWED_ATTRS,
# bleach master has this, but it isn't released yet
# protocols=ALLOWED_SCHEMES,
strip=True
)))
def safe_text(raw_text):
"""
Process text: treat it as HTML but escape any tags (i.e. just escape the
HTML) then linkify it.
"""
return jinja2.Markup(bleach.linkify(bleach.clean(
raw_text, tags=[], attributes={},
strip=False
)))
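# Illustrative behaviour of the two sanitisers (output is approximate and
# depends on the installed bleach version):
#   safe_markup('<b>hi</b><em onclick="evil()">there</em>')
#       -> markup like '<b>hi</b><em>there</em>' (disallowed attributes dropped)
#   safe_text('<b>hi</b> see matrix.org')
#       -> markup like '&lt;b&gt;hi&lt;/b&gt; see <a href="http://matrix.org" rel="nofollow">matrix.org</a>'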
def deduped_ordered_list(l):
seen = set()
ret = []
for item in l:
if item not in seen:
seen.add(item)
ret.append(item)
return ret
def string_ordinal_total(s):
tot = 0
for c in s:
tot += ord(c)
return tot
def format_ts_filter(value, format):
return time.strftime(format, time.localtime(value / 1000))
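For reference, a minimal sketch of what the pure helpers above produce (illustrative values; the timestamp result depends on the local timezone):

deduped_ordered_list(["!a:hs", "!b:hs", "!a:hs"])  # -> ['!a:hs', '!b:hs']
string_ordinal_total("abc")                        # -> 294 (97 + 98 + 99)
format_ts_filter(1467817200000, "%d %b %Y")        # e.g. '06 Jul 2016'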


@@ -13,12 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from .baserules import list_with_base_rules
import logging
import simplejson as json
import re
from synapse.types import UserID
@@ -32,22 +27,6 @@ IS_GLOB = re.compile(r'[\?\*\[\]]')
INEQUALITY_EXPR = re.compile("^([=<>]*)([0-9]*)$")
@defer.inlineCallbacks
def evaluator_for_user_id(user_id, room_id, store):
rawrules = yield store.get_push_rules_for_user(user_id)
enabled_map = yield store.get_push_rules_enabled_for_user(user_id)
our_member_event = yield store.get_current_state(
room_id=room_id,
event_type='m.room.member',
state_key=user_id,
)
defer.returnValue(PushRuleEvaluator(
user_id, rawrules, enabled_map,
room_id, our_member_event, store
))
def _room_member_count(ev, condition, room_member_count):
if 'is' not in condition:
return False
@@ -74,110 +53,14 @@ def _room_member_count(ev, condition, room_member_count):
return False
class PushRuleEvaluator:
DEFAULT_ACTIONS = []
def __init__(self, user_id, raw_rules, enabled_map, room_id,
our_member_event, store):
self.user_id = user_id
self.room_id = room_id
self.our_member_event = our_member_event
self.store = store
rules = []
for raw_rule in raw_rules:
rule = dict(raw_rule)
rule['conditions'] = json.loads(raw_rule['conditions'])
rule['actions'] = json.loads(raw_rule['actions'])
rules.append(rule)
self.rules = list_with_base_rules(rules)
self.enabled_map = enabled_map
@staticmethod
def tweaks_for_actions(actions):
tweaks = {}
for a in actions:
if not isinstance(a, dict):
continue
if 'set_tweak' in a and 'value' in a:
tweaks[a['set_tweak']] = a['value']
return tweaks
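# e.g. tweaks_for_actions([
#     "notify",
#     {"set_tweak": "sound", "value": "default"},
#     {"set_tweak": "highlight", "value": False},
# ]) -> {"sound": "default", "highlight": False}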
@defer.inlineCallbacks
def actions_for_event(self, ev):
"""
This should take into account notification settings that the user
has configured both globally and per-room when we have the ability
to do such things.
"""
if ev['user_id'] == self.user_id:
# let's assume you probably know about messages you sent yourself
defer.returnValue([])
room_id = ev['room_id']
# get *our* member event for display name matching
my_display_name = None
if self.our_member_event:
my_display_name = self.our_member_event[0].content.get("displayname")
room_members = yield self.store.get_users_in_room(room_id)
room_member_count = len(room_members)
evaluator = PushRuleEvaluatorForEvent(ev, room_member_count)
for r in self.rules:
enabled = self.enabled_map.get(r['rule_id'], None)
if enabled is not None and not enabled:
continue
if not r.get("enabled", True):
continue
conditions = r['conditions']
actions = r['actions']
# ignore rules with no actions (we have an explicit 'dont_notify')
if len(actions) == 0:
logger.warn(
"Ignoring rule id %s with no actions for user %s",
r['rule_id'], self.user_id
)
continue
matches = True
for c in conditions:
matches = evaluator.matches(
c, self.user_id, my_display_name
)
if not matches:
break
logger.debug(
"Rule %s %s",
r['rule_id'], "matches" if matches else "doesn't match"
)
if matches:
logger.debug(
"%s matches for user %s, event %s",
r['rule_id'], self.user_id, ev['event_id']
)
# filter out dont_notify as we treat an empty actions list
# as dont_notify, and this doesn't take up a row in our database
actions = [x for x in actions if x != 'dont_notify']
defer.returnValue(actions)
logger.debug(
"No rules match for user %s, event %s",
self.user_id, ev['event_id']
)
defer.returnValue(PushRuleEvaluator.DEFAULT_ACTIONS)
def tweaks_for_actions(actions):
tweaks = {}
for a in actions:
if not isinstance(a, dict):
continue
if 'set_tweak' in a and 'value' in a:
tweaks[a['set_tweak']] = a['value']
return tweaks
class PushRuleEvaluatorForEvent(object):


@@ -0,0 +1,68 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.util.presentable_names import (
calculate_room_name, name_from_member_event
)
@defer.inlineCallbacks
def get_badge_count(store, user_id):
invites, joins = yield defer.gatherResults([
store.get_invited_rooms_for_user(user_id),
store.get_rooms_for_user(user_id),
], consumeErrors=True)
my_receipts_by_room = yield store.get_receipts_for_user(
user_id, "m.read",
)
badge = len(invites)
for r in joins:
if r.room_id in my_receipts_by_room:
last_unread_event_id = my_receipts_by_room[r.room_id]
notifs = yield (
store.get_unread_event_push_actions_by_room_for_user(
r.room_id, user_id, last_unread_event_id
)
)
# return one badge count per conversation, as count per
# message is so noisy as to be almost useless
badge += 1 if notifs["notify_count"] else 0
defer.returnValue(badge)
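# e.g. a user with two pending invites plus unread notifications in three
# joined rooms gets a badge of 5: one per invite and one per room with
# unread notifications, however many messages each room contains.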
@defer.inlineCallbacks
def get_context_for_event(state_handler, ev, user_id):
ctx = {}
room_state = yield state_handler.get_current_state(ev.room_id)
# we no longer bother setting room_alias, and make room_name the
# human-readable name instead, be that m.room.name, an alias or
# a list of people in the room
name = calculate_room_name(
room_state, user_id, fallback_to_single_member=False
)
if name:
ctx['name'] = name
sender_state_event = room_state[("m.room.member", ev.sender)]
ctx['sender_display_name'] = name_from_member_event(sender_state_event)
defer.returnValue(ctx)

synapse/push/pusher.py

@@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from httppusher import HttpPusher
import logging
logger = logging.getLogger(__name__)
# We try importing this if we can (it will fail if we don't
# have the optional email dependencies installed). We don't
# yet have the config to know if we need the email pusher,
# but importing this after daemonizing seems to fail
# (even though a simple test of importing from a daemonized
# process works fine)
try:
from synapse.push.emailpusher import EmailPusher
except Exception:
pass
def create_pusher(hs, pusherdict):
logger.info("trying to create_pusher for %r", pusherdict)
PUSHER_TYPES = {
"http": HttpPusher,
}
logger.info("email enable notifs: %r", hs.config.email_enable_notifs)
if hs.config.email_enable_notifs:
PUSHER_TYPES["email"] = EmailPusher
logger.info("defined email pusher type")
if pusherdict['kind'] in PUSHER_TYPES:
logger.info("found pusher")
return PUSHER_TYPES[pusherdict['kind']](hs, pusherdict)
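# Illustrative: create_pusher(hs, {"kind": "http", ...}) returns an
# HttpPusher; an unrecognised "kind" falls through and returns None.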


@@ -16,9 +16,9 @@
from twisted.internet import defer
from .httppusher import HttpPusher
from synapse.push import PusherConfigException
import pusher
from synapse.util.logcontext import preserve_fn
from synapse.util.async import run_on_reactor
import logging
@@ -28,10 +28,10 @@ logger = logging.getLogger(__name__)
class PusherPool:
def __init__(self, _hs):
self.hs = _hs
self.start_pushers = _hs.config.start_pushers
self.store = self.hs.get_datastore()
self.clock = self.hs.get_clock()
self.pushers = {}
self.last_pusher_started = -1
@defer.inlineCallbacks
def start(self):
@@ -48,7 +48,8 @@ class PusherPool:
# will then get pulled out of the database,
# recreated, added and started: this means we have only one
# code path adding pushers.
self._create_pusher({
pusher.create_pusher(self.hs, {
"id": None,
"user_name": user_id,
"kind": kind,
"app_id": app_id,
@@ -58,10 +59,18 @@ class PusherPool:
"ts": time_now_msec,
"lang": lang,
"data": data,
"last_token": None,
"last_stream_ordering": None,
"last_success": None,
"failing_since": None
})
# create the pusher setting last_stream_ordering to the current maximum
# stream ordering in event_push_actions, so it will process
# pushes from this point onwards.
last_stream_ordering = (
yield self.store.get_latest_push_action_stream_ordering()
)
yield self.store.add_pusher(
user_id=user_id,
access_token=access_token,
@@ -73,6 +82,7 @@ class PusherPool:
pushkey_ts=time_now_msec,
lang=lang,
data=data,
last_stream_ordering=last_stream_ordering,
profile_tag=profile_tag,
)
yield self._refresh_pusher(app_id, pushkey, user_id)
@@ -106,26 +116,51 @@ class PusherPool:
)
yield self.remove_pusher(p['app_id'], p['pushkey'], p['user_name'])
def _create_pusher(self, pusherdict):
if pusherdict['kind'] == 'http':
return HttpPusher(
self.hs,
user_id=pusherdict['user_name'],
app_id=pusherdict['app_id'],
app_display_name=pusherdict['app_display_name'],
device_display_name=pusherdict['device_display_name'],
pushkey=pusherdict['pushkey'],
pushkey_ts=pusherdict['ts'],
data=pusherdict['data'],
last_token=pusherdict['last_token'],
last_success=pusherdict['last_success'],
failing_since=pusherdict['failing_since']
@defer.inlineCallbacks
def on_new_notifications(self, min_stream_id, max_stream_id):
yield run_on_reactor()
try:
users_affected = yield self.store.get_push_action_users_in_range(
min_stream_id, max_stream_id
)
else:
raise PusherConfigException(
"Unknown pusher type '%s' for user %s" %
(pusherdict['kind'], pusherdict['user_name'])
deferreds = []
for u in users_affected:
if u in self.pushers:
for p in self.pushers[u].values():
deferreds.append(
p.on_new_notifications(min_stream_id, max_stream_id)
)
yield defer.gatherResults(deferreds)
except Exception:
logger.exception("Exception in pusher on_new_notifications")
@defer.inlineCallbacks
def on_new_receipts(self, min_stream_id, max_stream_id, affected_room_ids):
yield run_on_reactor()
try:
# Need to subtract 1 from the minimum because the lower bound here
# is not inclusive
updated_receipts = yield self.store.get_all_updated_receipts(
min_stream_id - 1, max_stream_id
)
# Each returned row is a tuple; user_id is at index 3
users_affected = set([r[3] for r in updated_receipts])
deferreds = []
for u in users_affected:
if u in self.pushers:
for p in self.pushers[u].values():
deferreds.append(
p.on_new_receipts(min_stream_id, max_stream_id)
)
yield defer.gatherResults(deferreds)
except Exception:
logger.exception("Exception in pusher on_new_receipts")
@defer.inlineCallbacks
def _refresh_pusher(self, app_id, pushkey, user_id):
@@ -143,33 +178,40 @@ class PusherPool:
self._start_pushers([p])
def _start_pushers(self, pushers):
if not self.start_pushers:
logger.info("Not starting pushers because they are disabled in the config")
return
logger.info("Starting %d pushers", len(pushers))
for pusherdict in pushers:
try:
p = self._create_pusher(pusherdict)
except PusherConfigException:
logger.exception("Couldn't start a pusher: caught PusherConfigException")
p = pusher.create_pusher(self.hs, pusherdict)
except:
logger.exception("Couldn't start a pusher: caught Exception")
continue
if p:
fullid = "%s:%s:%s" % (
appid_pushkey = "%s:%s" % (
pusherdict['app_id'],
pusherdict['pushkey'],
pusherdict['user_name']
)
if fullid in self.pushers:
self.pushers[fullid].stop()
self.pushers[fullid] = p
preserve_fn(p.start)()
byuser = self.pushers.setdefault(pusherdict['user_name'], {})
if appid_pushkey in byuser:
byuser[appid_pushkey].on_stop()
byuser[appid_pushkey] = p
preserve_fn(p.on_started)()
logger.info("Started pushers")
@defer.inlineCallbacks
def remove_pusher(self, app_id, pushkey, user_id):
fullid = "%s:%s:%s" % (app_id, pushkey, user_id)
if fullid in self.pushers:
logger.info("Stopping pusher %s", fullid)
self.pushers[fullid].stop()
del self.pushers[fullid]
appid_pushkey = "%s:%s" % (app_id, pushkey)
byuser = self.pushers.get(user_id, {})
if appid_pushkey in byuser:
logger.info("Stopping pusher %s / %s", user_id, appid_pushkey)
byuser[appid_pushkey].on_stop()
del byuser[appid_pushkey]
yield self.store.delete_pusher_by_app_id_pushkey_user_id(
app_id, pushkey, user_id
)


@@ -40,7 +40,17 @@ REQUIREMENTS = {
CONDITIONAL_REQUIREMENTS = {
"web_client": {
"matrix_angular_sdk>=0.6.8": ["syweb>=0.6.8"],
}
},
"preview_url": {
"netaddr>=0.7.18": ["netaddr"],
},
"email.enable_notifs": {
"Jinja2>=2.8": ["Jinja2>=2.8"],
"bleach>=1.4.2": ["bleach>=1.4.2"],
},
"ldap": {
"ldap3>=1.0": ["ldap3>=1.0"],
},
}
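# Illustrative: with email.enable_notifs turned on, the optional
# dependencies can be satisfied with e.g.
#   pip install "Jinja2>=2.8" "bleach>=1.4.2"
# (any install method that makes both importable will do).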


@@ -0,0 +1,59 @@
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.http.server import respond_with_json_bytes, request_handler
from synapse.http.servlet import parse_json_object_from_request
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
from twisted.internet import defer
class PresenceResource(Resource):
"""
HTTP endpoint for marking users as syncing.
POST /_synapse/replication/presence HTTP/1.1
Content-Type: application/json
{
"process_id": "<process_id>",
"syncing_users": ["<user_id>"]
}
"""
def __init__(self, hs):
Resource.__init__(self) # Resource is old-style, so no super()
self.version_string = hs.version_string
self.clock = hs.get_clock()
self.presence_handler = hs.get_presence_handler()
def render_POST(self, request):
self._async_render_POST(request)
return NOT_DONE_YET
@request_handler()
@defer.inlineCallbacks
def _async_render_POST(self, request):
content = parse_json_object_from_request(request)
process_id = content["process_id"]
syncing_user_ids = content["syncing_users"]
yield self.presence_handler.update_external_syncs(
process_id, set(syncing_user_ids)
)
respond_with_json_bytes(request, 200, "{}")
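A minimal sketch of calling this endpoint from another process, using the third-party requests library (the host and port here are hypothetical replication listener settings):

import json
import requests

# Mark one user as actively syncing on behalf of process "worker-1".
resp = requests.post(
    "http://localhost:9093/_synapse/replication/presence",
    data=json.dumps({
        "process_id": "worker-1",
        "syncing_users": ["@alice:example.com"],
    }),
    headers={"Content-Type": "application/json"},
)
assert resp.status_code == 200  # the handler replies with an empty JSON object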

Some files were not shown because too many files have changed in this diff.