Compare commits


2 Commits

Author         SHA1        Message                                                               Date
Andrew Morgan  3360be1829  Add header margin change                                              2022-03-10 18:35:08 +00:00
Andrew Morgan  19ca533bcc  Rename indent-section-headers -> section-headers to be more generic   2022-03-10 18:34:58 +00:00
211 changed files with 5919 additions and 8201 deletions


@@ -377,7 +377,7 @@ jobs:
# Run Complement
- run: |
set -o pipefail
go test -v -json -tags synapse_blacklist,msc2403,msc2716,msc3030 ./tests/... 2>&1 | gotestfmt
go test -v -json -p 1 -tags synapse_blacklist,msc2403,msc2716,msc3030 ./tests/... 2>&1 | gotestfmt
shell: bash
name: Run Complement Tests
env:

File diff suppressed because it is too large.

CHANGES.md (3845 lines changed): file diff suppressed because it is too large.


@@ -33,7 +33,7 @@ site-url = "/synapse/"
additional-css = [
"docs/website_files/table-of-contents.css",
"docs/website_files/remove-nav-buttons.css",
"docs/website_files/indent-section-headers.css",
"docs/website_files/section-headers.css",
]
additional-js = ["docs/website_files/table-of-contents.js"]
theme = "docs/website_files/theme"


@@ -1 +0,0 @@
Fix a performance regression in `/sync` handling, introduced in 1.49.0.


@@ -0,0 +1 @@
Remove workaround introduced in Synapse 1.50.0 for Mjolnir compatibility. Breaks compatibility with Mjolnir 1.3.1 and earlier.

changelog.d/11915.misc (new file, 1 line)

@@ -0,0 +1 @@
Simplify the `ApplicationService` class' set of public methods related to interest checking.

changelog.d/11998.doc (new file, 1 line)

@@ -0,0 +1 @@
Fix complexity checking config example in [Resource Constrained Devices](https://matrix-org.github.io/synapse/v1.54/other/running_synapse_on_single_board_computers.html) docs page.


@@ -0,0 +1 @@
Add third-party rules rules callbacks `check_can_shutdown_room` and `check_can_deactivate_user`.

changelog.d/12042.misc (new file, 1 line)

@@ -0,0 +1 @@
Correct type hints for txredis.

changelog.d/12090.bugfix (new file, 1 line)

@@ -0,0 +1 @@
Use the proper serialization format for bundled thread aggregations. The bug has existed since Synapse v1.48.0.

changelog.d/12101.misc (new file, 1 line)

@@ -0,0 +1 @@
Limit the size of `aggregation_key` on annotations.

changelog.d/12108.misc (new file, 1 line)

@@ -0,0 +1 @@
Add type hints to `tests/rest/client`.

changelog.d/12113.bugfix (new file, 1 line)

@@ -0,0 +1 @@
Fix a long-standing bug when redacting events with relations.

changelog.d/12118.misc (new file, 1 line)

@@ -0,0 +1 @@
Move scripts to Synapse package and expose as setuptools entry points.

changelog.d/12121.bugfix (new file, 1 line)

@@ -0,0 +1 @@
Fix a long-standing bug when redacting events with relations.

changelog.d/12128.misc (new file, 1 line)

@@ -0,0 +1 @@
Fix data validation to compare to lists, not sequences.

changelog.d/12130.bugfix (new file, 1 line)

@@ -0,0 +1 @@
Fix a long-standing bug when redacting events with relations.

changelog.d/12131.misc (new file, 1 line)

@@ -0,0 +1 @@
Fix CI not attaching source distributions and wheels to the GitHub releases.


@@ -0,0 +1 @@
Improve performance of logging in for large accounts.


@@ -0,0 +1 @@
Add experimental env var `SYNAPSE_ASYNC_IO_REACTOR` that causes Synapse to use the asyncio reactor for Twisted.

changelog.d/12136.misc (new file, 1 line)

@@ -0,0 +1 @@
Remove unused mocks from `test_typing`.

changelog.d/12137.misc (new file, 1 line)

@@ -0,0 +1 @@
Give `scripts-dev` scripts suffixes for neater CI config.


@@ -0,0 +1 @@
Remove backwards compatibilty with pagination tokens from the `/relations` and `/aggregations` endpoints generated from Synapse < v1.52.0.

changelog.d/12140.misc (new file, 1 line)

@@ -0,0 +1 @@
Move `synctl` into `synapse._scripts` and expose as an entry point.

changelog.d/12142.misc (new file, 1 line)

@@ -0,0 +1 @@
Move the snapcraft configuration file to `contrib`.

changelog.d/12143.doc (new file, 1 line)

@@ -0,0 +1 @@
Improve documentation for demo scripts.

changelog.d/12144.misc (new file, 1 line)

@@ -0,0 +1 @@
Enable [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) Complement tests in CI.

changelog.d/12145.misc (new file, 1 line)

@@ -0,0 +1 @@
Enable [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) Complement tests in CI.

changelog.d/12146.misc (new file, 1 line)

@@ -0,0 +1 @@
Add type hints to `tests/rest`.

changelog.d/12149.misc (new file, 1 line)

@@ -0,0 +1 @@
Add test for `ObservableDeferred`'s cancellation behaviour.

changelog.d/12150.misc (new file, 1 line)

@@ -0,0 +1 @@
Use `ParamSpec` in type hints for `synapse.logging.context`.


@@ -0,0 +1 @@
Support the stable identifiers from [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440): threads.

changelog.d/12152.misc (new file, 1 line)

@@ -0,0 +1 @@
Prune unused jobs from `tox` config.

changelog.d/12153.misc (new file, 1 line)

@@ -0,0 +1 @@
Move CI checks out of tox, to facilitate a move to using poetry.

changelog.d/12154.misc (new file, 1 line)

@@ -0,0 +1 @@
Avoid generating state groups for local out-of-band leaves.

changelog.d/12155.misc (new file, 1 line)

@@ -0,0 +1 @@
Avoid trying to calculate the state at outlier events.

changelog.d/12156.misc (new file, 1 line)

@@ -0,0 +1 @@
Fix some type annotations.

changelog.d/12157.bugfix (new file, 1 line)

@@ -0,0 +1 @@
Fix a bug introduced in #4864 whereby background updates are never run with the default background batch size.

changelog.d/12159.misc (new file, 1 line)

@@ -0,0 +1 @@
Add type hints for `ObservableDeferred` attributes.

changelog.d/12161.misc (new file, 1 line)

@@ -0,0 +1 @@
Use a prebuilt Action for the `tests-done` CI job.

changelog.d/12163.misc (new file, 1 line)

@@ -0,0 +1 @@
Reduce number of DB queries made during processing of `/sync`.

changelog.d/12173.misc (new file, 1 line)

@@ -0,0 +1 @@
Avoid trying to calculate the state at outlier events.

changelog.d/12175.bugfix (new file, 1 line)

@@ -0,0 +1 @@
Fix a bug where non-standard information was returned from the `/hierarchy` API. Introduced in Synapse v1.41.0.

changelog.d/12179.doc (new file, 1 line)

@@ -0,0 +1 @@
Updates to the Room DAG concepts development document.

changelog.d/12182.misc (new file, 1 line)

@@ -0,0 +1 @@
Retry HTTP replication failures, this should prevent 502's when restarting stateful workers (main, event persisters, stream writers). Contributed by Nick @ Beeper.

changelog.d/12187.misc (new file, 1 line)

@@ -0,0 +1 @@
Remove unused variables.

changelog.d/12189.bugfix (new file, 1 line)

@@ -0,0 +1 @@
Fix a long-standing bug when redacting events with relations.

changelog.d/12192.misc (new file, 1 line)

@@ -0,0 +1 @@
Rename `HomeServer.get_tcp_replication` to `get_replication_command_handler`.

changelog.d/12197.misc (new file, 1 line)

@@ -0,0 +1 @@
Remove some dead code.


@@ -1 +0,0 @@
Reduce overhead of restarting synchrotrons.


@@ -193,15 +193,12 @@ class TrivialXmppClient:
time.sleep(7)
print("SSRC spammer started")
while self.running:
ssrcMsg = (
"<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>"
% {
"tojid": "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid),
"nick": self.userId,
"assrc": self.ssrcs["audio"],
"vssrc": self.ssrcs["video"],
}
)
ssrcMsg = "<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>" % {
"tojid": "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid),
"nick": self.userId,
"assrc": self.ssrcs["audio"],
"vssrc": self.ssrcs["video"],
}
res = self.sendIq(ssrcMsg)
print("reply from ssrc announce: ", res)
time.sleep(10)

debian/changelog (vendored, 36 lines changed)

@@ -1,39 +1,3 @@
matrix-synapse-py3 (1.56.0) stable; urgency=medium
* New synapse release 1.56.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 05 Apr 2022 12:38:39 +0100
matrix-synapse-py3 (1.56.0~rc1) stable; urgency=medium
* New synapse release 1.56.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 29 Mar 2022 10:40:50 +0100
matrix-synapse-py3 (1.55.2) stable; urgency=medium
* New synapse release 1.55.2.
-- Synapse Packaging team <packages@matrix.org> Thu, 24 Mar 2022 19:07:11 +0000
matrix-synapse-py3 (1.55.1) stable; urgency=medium
* New synapse release 1.55.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 24 Mar 2022 17:44:23 +0000
matrix-synapse-py3 (1.55.0) stable; urgency=medium
* New synapse release 1.55.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Mar 2022 13:59:26 +0000
matrix-synapse-py3 (1.55.0~rc1) stable; urgency=medium
* New synapse release 1.55.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Mar 2022 10:59:31 +0000
matrix-synapse-py3 (1.54.0) stable; urgency=medium
* New synapse release 1.54.0.


@@ -38,7 +38,6 @@ for port in 8080 8081 8082; do
printf '\n\n# Customisation made by demo/start.sh\n\n'
echo "public_baseurl: http://localhost:$port/"
echo 'enable_registration: true'
echo 'enable_registration_without_verification: true'
echo ''
# Warning, this heredoc depends on the interaction of tabs and spaces.


@@ -458,17 +458,6 @@ Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
### Private Sign off
If you would like to provide your legal name privately to the Matrix.org
Foundation (instead of in a public commit or comment), you can do so
by emailing your legal name and a link to the pull request to
[dco@matrix.org](mailto:dco@matrix.org?subject=Private%20sign%20off).
It helps to include "sign off" or similar in the subject line. You will then
be instructed further.
Once private sign off is complete, doing so for future contributions will not
be required.
# 10. Turn feedback into better code.


@@ -172,7 +172,7 @@ any of the subsequent implementations of this callback.
_First introduced in Synapse v1.37.0_
```python
async def check_username_for_spam(user_profile: synapse.module_api.UserProfile) -> bool
async def check_username_for_spam(user_profile: Dict[str, str]) -> bool
```
Called when computing search results in the user directory. The module must return a
@@ -182,11 +182,9 @@ search results; otherwise return `False`.
The profile is represented as a dictionary with the following keys:
* `user_id: str`. The Matrix ID for this user.
* `display_name: Optional[str]`. The user's display name, or `None` if this user
has not set a display name.
* `avatar_url: Optional[str]`. The `mxc://` URL to the user's avatar, or `None`
if this user has not set an avatar.
* `user_id`: The Matrix ID for this user.
* `display_name`: The user's display name.
* `avatar_url`: The `mxc://` URL to the user's avatar.
The module is given a copy of the original dictionary, so modifying it from within the
module cannot modify a user's profile when included in user directory search results.
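For illustration, a minimal sketch of a module that registers this callback through the module API. The module class, its config handling, and the spam heuristic are hypothetical; only `register_spam_checker_callbacks` and the profile keys listed above come from Synapse itself:

```python
from typing import Dict

from synapse.module_api import ModuleApi


class ExampleUsernameSpamChecker:
    """Hypothetical module that hides obviously spammy profiles from user directory search."""

    def __init__(self, config: dict, api: ModuleApi):
        self._api = api
        api.register_spam_checker_callbacks(
            check_username_for_spam=self.check_username_for_spam,
        )

    async def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool:
        # The profile dictionary carries "user_id", "display_name" and "avatar_url",
        # as described above.
        display_name = user_profile.get("display_name") or ""
        # Returning True marks the profile as spammy, which keeps it out of
        # user directory search results.
        return "buy cheap" in display_name.lower()
```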


@@ -225,8 +225,6 @@ oidc_providers:
3. Create an application for synapse in Authentik and link it to the provider.
4. Note the slug of your application, Client ID and Client Secret.
Note: RSA keys must be used for signing for Authentik, ECC keys do not work.
Synapse config:
```yaml
oidc_providers:
@@ -242,7 +240,7 @@ oidc_providers:
- "email"
user_mapping_provider:
config:
localpart_template: "{{ user.preferred_username }}"
localpart_template: "{{ user.preferred_username }}}"
display_name_template: "{{ user.preferred_username|capitalize }}" # TO BE FILLED: If your users have names in Authentik and you want those in Synapse, this should be replaced with user.name|capitalize.
```


@@ -234,13 +234,12 @@ host all all ::1/128 ident
### Fixing incorrect `COLLATE` or `CTYPE`
Synapse will refuse to set up a new database if it has the wrong values of
`COLLATE` and `CTYPE` set. Synapse will also refuse to start an existing database with incorrect values
of `COLLATE` and `CTYPE` unless the config flag `allow_unsafe_locale`, found in the
`database` section of the config, is set to true. Using different locales can cause issues if the locale library is updated from
`COLLATE` and `CTYPE` set, and will log warnings on existing databases. Using
different locales can cause issues if the locale library is updated from
underneath the database, or if a different version of the locale is used on any
replicas.
If you have a databse with an unsafe locale, the safest way to fix the issue is to dump the database and recreate it with
The safest way to fix the issue is to dump the database and recreate it with
the correct locale parameter (as shown above). It is also possible to change the
parameters on a live database and run a `REINDEX` on the entire database,
however extreme care must be taken to avoid database corruption.
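As a quick way to see which locale values a database currently has, here is a small sketch of roughly the kind of check Synapse performs at start-up. The connection details are placeholders and `psycopg2` is assumed to be installed:

```python
import psycopg2

# Placeholders: point these at your own database.
with psycopg2.connect(dbname="synapse", user="synapse_user", host="localhost") as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT datcollate, datctype FROM pg_database WHERE datname = current_database()"
        )
        collate, ctype = cur.fetchone()
        # Synapse expects 'C' for both unless the unsafe-locale override is enabled.
        print(f"COLLATE={collate} CTYPE={ctype}")
```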


@@ -182,7 +182,7 @@ matrix.example.com {
```
frontend https
bind *:443,[::]:443 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-For %[src]
@@ -195,7 +195,7 @@ frontend https
use_backend matrix if matrix-host matrix-path
frontend matrix-federation
bind *:8448,[::]:8448 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-For %[src]


@@ -783,12 +783,6 @@ caches:
# 'txn_limit' gives the maximum number of transactions to run per connection
# before reconnecting. Defaults to 0, which means no limit.
#
# 'allow_unsafe_locale' is an option specific to Postgres. Under the default behavior, Synapse will refuse to
# start if the postgres db is set to a non-C locale. You can override this behavior (which is *not* recommended)
# by setting 'allow_unsafe_locale' to true. Note that doing so may corrupt your database. You can find more information
# here: https://matrix-org.github.io/synapse/latest/postgres.html#fixing-incorrect-collate-or-ctype and here:
# https://wiki.postgresql.org/wiki/Locale_data_changes
#
# 'args' gives options which are passed through to the database engine,
# except for options starting 'cp_', which are used to configure the Twisted
# connection pool. For a reference to valid arguments, see:
@@ -1218,18 +1212,10 @@ oembed:
# Registration can be rate-limited using the parameters in the "Ratelimiting"
# section of this file.
# Enable registration for new users. Defaults to 'false'. It is highly recommended that if you enable registration,
# you use either captcha, email, or token-based verification to verify that new users are not bots. In order to enable registration
# without any verification, you must also set `enable_registration_without_verification`, found below.
# Enable registration for new users.
#
#enable_registration: false
# Enable registration without email or captcha verification. Note: this option is *not* recommended,
# as registration without verification is a known vector for spam and abuse. Defaults to false. Has no effect
# unless `enable_registration` is also enabled.
#
#enable_registration_without_verification: true
# Time that a user's session remains valid for, after they log in.
#
# Note that this is not currently compatible with guest logins.
@@ -1961,14 +1947,8 @@ saml2_config:
#
# localpart_template: Jinja2 template for the localpart of the MXID.
# If this is not set, the user will be prompted to choose their
# own username (see the documentation for the
# 'sso_auth_account_details.html' template). This template can
# use the 'localpart_from_email' filter.
#
# confirm_localpart: Whether to prompt the user to validate (or
# change) the generated localpart (see the documentation for the
# 'sso_auth_account_details.html' template), instead of
# registering the account right away.
# own username (see 'sso_auth_account_details.html' in the 'sso'
# section of this file).
#
# display_name_template: Jinja2 template for the display name to set
# on first login. If unset, no displayname will be set.
@@ -2749,35 +2729,3 @@ redis:
# Optional password if configured on the Redis instance
#
#password: <secret_password>
## Background Updates ##
# Background updates are database updates that are run in the background in batches.
# The duration, minimum batch size, default batch size, whether to sleep between batches and if so, how long to
# sleep can all be configured. This is helpful to speed up or slow down the updates.
#
background_updates:
# How long in milliseconds to run a batch of background updates for. Defaults to 100. Uncomment and set
# a time to change the default.
#
#background_update_duration_ms: 500
# Whether to sleep between updates. Defaults to True. Uncomment to change the default.
#
#sleep_enabled: false
# If sleeping between updates, how long in milliseconds to sleep for. Defaults to 1000. Uncomment
# and set a duration to change the default.
#
#sleep_duration_ms: 300
# Minimum size a batch of background updates can be. Must be greater than 0. Defaults to 1. Uncomment and
# set a size to change the default.
#
#min_batch_size: 10
# The batch size to use for the first iteration of a new background update. The default is 100.
# Uncomment and set a size to change the default.
#
#default_batch_size: 50


@@ -36,13 +36,6 @@ Turns a `mxc://` URL for media content into an HTTP(S) one using the homeserver'
Example: `message.sender_avatar_url|mxc_to_http(32,32)`
```python
localpart_from_email(address: str) -> str
```
Returns the local part of an email address (e.g. `alice` in `alice@example.com`).
Example: `user.email_address|localpart_from_email`
## Email templates
@@ -183,11 +176,8 @@ Below are the templates Synapse will look for when generating pages related to S
for the brand of the IdP
* `user_attributes`: an object containing details about the user that
we received from the IdP. May have the following attributes:
* `display_name`: the user's display name
* `emails`: a list of email addresses
* `localpart`: the local part of the Matrix user ID to register,
if `localpart_template` is set in the mapping provider configuration (empty
string if not)
* display_name: the user's display_name
* emails: a list of email addresses
The template should render a form which submits the following fields:
* `username`: the localpart of the user's chosen user id
* `sso_new_user_consent.html`: HTML page allowing the user to consent to the
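For context, the `localpart_from_email` filter shown earlier in this file's diff is a very small piece of string handling. A sketch of what such a Jinja filter can look like, registered here with a plain `jinja2.Environment` purely for illustration (this is not Synapse's exact code):

```python
from jinja2 import Environment


def localpart_from_email(address: str) -> str:
    # "alice@example.com" -> "alice"; split on the last "@" only.
    return address.rsplit("@", 1)[0]


env = Environment()
env.filters["localpart_from_email"] = localpart_from_email
template = env.from_string("{{ user_email | localpart_from_email }}")
print(template.render(user_email="alice@example.com"))  # alice
```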


@@ -85,32 +85,6 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.56.0
## Open registration without verification is now disabled by default
Synapse will refuse to start if registration is enabled without email, captcha, or token-based verification unless the new config
flag `enable_registration_without_verification` is set to "true".
## Groups/communities feature has been deprecated
The non-standard groups/communities feature in Synapse has been deprecated and will
be disabled by default in Synapse v1.58.0.
You can test disabling it by adding the following to your homeserver configuration:
```yaml
experimental_features:
groups_enabled: false
```
## Change in behaviour for PostgreSQL databases with unsafe locale
Synapse now refuses to start when using PostgreSQL with non-`C` values for `COLLATE` and
`CTYPE` unless the config flag `allow_unsafe_locale`, found in the database section of
the configuration file, is set to `true`. See the [PostgreSQL documentation](https://matrix-org.github.io/synapse/latest/postgres.html#fixing-incorrect-collate-or-ctype)
for more information and instructions on how to fix a database with incorrect values.
# Upgrading to v1.55.0
## `synctl` script has been moved


@@ -1,7 +0,0 @@
/*
* Indents each chapter title in the left sidebar so that they aren't
* at the same level as the section headers.
*/
.chapter-item {
margin-left: 1em;
}


@@ -0,0 +1,20 @@
/*
* Indents each chapter title in the left sidebar so that they aren't
* at the same level as the section headers.
*/
.chapter-item {
margin-left: 1em;
}
/*
* Prevents a large gap between successive section headers.
*
* mdbook sets 'margin-top: 2.5em' on h2 and h3 headers. This makes sense when separating
* a header from the paragraph beforehand, but has the downside of introducing a large
* gap between headers that are next to each other with no text in between.
*
* This rule reduces the margin in this case.
*/
h1 + h2, h2 + h3 {
margin-top: 1.0em;
}


@@ -185,8 +185,8 @@ worker: refer to the [stream writers](#stream-writers) section below for further
information.
# Sync requests
^/_matrix/client/(r0|v3)/sync$
^/_matrix/client/(api/v1|r0|v3)/events$
^/_matrix/client/(v2_alpha|r0|v3)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$
^/_matrix/client/(api/v1|r0|v3)/initialSync$
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
@@ -200,9 +200,13 @@ information.
^/_matrix/federation/v1/query/
^/_matrix/federation/v1/make_join/
^/_matrix/federation/v1/make_leave/
^/_matrix/federation/(v1|v2)/send_join/
^/_matrix/federation/(v1|v2)/send_leave/
^/_matrix/federation/(v1|v2)/invite/
^/_matrix/federation/v1/send_join/
^/_matrix/federation/v2/send_join/
^/_matrix/federation/v1/send_leave/
^/_matrix/federation/v2/send_leave/
^/_matrix/federation/v1/invite/
^/_matrix/federation/v2/invite/
^/_matrix/federation/v1/query_auth/
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/exchange_third_party_invite/
^/_matrix/federation/v1/user/devices/
@@ -270,8 +274,6 @@ information.
Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/federation/v1/groups/
^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
^/_matrix/client/(r0|v3|unstable)/groups/
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
@@ -349,11 +351,8 @@ is only supported with Redis-based replication.)
To enable this, the worker must have a HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. The same worker
can handle multiple streams, but unless otherwise documented, each stream can only
have a single writer.
For example, to move event persistence off to a dedicated worker, the shared
configuration would include:
can handle multiple streams. For example, to move event persistence off to a
dedicated worker, the shared configuration would include:
```yaml
instance_map:
@@ -371,8 +370,8 @@ streams and the endpoints associated with them:
##### The `events` stream
The `events` stream experimentally supports having multiple writers, where work
is sharded between them by room ID. Note that you *must* restart all worker
The `events` stream also experimentally supports having multiple writers, where
work is sharded between them by room ID. Note that you *must* restart all worker
instances when adding or removing event persisters. An example `stream_writers`
configuration with multiple writers:
@@ -385,38 +384,38 @@ stream_writers:
##### The `typing` stream
The following endpoints should be routed directly to the worker configured as
the stream writer for the `typing` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `typing` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing
##### The `to_device` stream
The following endpoints should be routed directly to the worker configured as
the stream writer for the `to_device` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `to_device` stream:
^/_matrix/client/(r0|v3|unstable)/sendToDevice/
^/_matrix/client/(api/v1|r0|v3|unstable)/sendToDevice/
##### The `account_data` stream
The following endpoints should be routed directly to the worker configured as
the stream writer for the `account_data` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `account_data` stream:
^/_matrix/client/(r0|v3|unstable)/.*/tags
^/_matrix/client/(r0|v3|unstable)/.*/account_data
^/_matrix/client/(api/v1|r0|v3|unstable)/.*/tags
^/_matrix/client/(api/v1|r0|v3|unstable)/.*/account_data
##### The `receipts` stream
The following endpoints should be routed directly to the worker configured as
the stream writer for the `receipts` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `receipts` stream:
^/_matrix/client/(r0|v3|unstable)/rooms/.*/receipt
^/_matrix/client/(r0|v3|unstable)/rooms/.*/read_markers
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/receipt
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/read_markers
##### The `presence` stream
The following endpoints should be routed directly to the worker configured as
the stream writer for the `presence` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `presence` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
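To make the stream-writer routing rules above concrete, a small check of a request path against one of the patterns (the `typing` one). The helper and list names are invented, and real deployments express this in reverse-proxy configuration rather than Python:

```python
import re

TYPING_WRITER_PATTERNS = [
    re.compile(r"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing"),
]


def goes_to_typing_writer(path: str) -> bool:
    # Route a request to the typing stream writer if its path matches any pattern.
    return any(pattern.match(path) for pattern in TYPING_WRITER_PATTERNS)


print(goes_to_typing_writer("/_matrix/client/v3/rooms/!abc:example.org/typing"))  # True
print(goes_to_typing_writer("/_matrix/client/v3/sync"))  # False
```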
@@ -526,28 +525,19 @@ Note that if a reverse proxy is used , then `/_matrix/media/` must be routed for
Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions:
^/_matrix/client/(r0|v3|unstable)/user_directory/search$
^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$
When using this worker you must also set `update_user_directory: false` in the
When using this worker you must also set `update_user_directory: False` in the
shared configuration file to stop the main synapse running background
jobs related to updating the user directory.
Above endpoint is not *required* to be routed to this worker. By default,
`update_user_directory` is set to `true`, which means the main process
will handle updates. All workers configured with `client` can handle the above
endpoint as long as either this worker or the main process are configured to
handle it, and are online.
If `update_user_directory` is set to `false`, and this worker is not running,
the above endpoint may give outdated results.
### `synapse.app.frontend_proxy`
Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions:
^/_matrix/client/(r0|v3|unstable)/keys/upload
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload
If `use_presence` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions:


@@ -38,11 +38,17 @@ exclude = (?x)
|synapse/_scripts/update_synapse_database.py
|synapse/storage/databases/__init__.py
|synapse/storage/databases/main/__init__.py
|synapse/storage/databases/main/cache.py
|synapse/storage/databases/main/devices.py
|synapse/storage/databases/main/event_federation.py
|synapse/storage/databases/main/group_server.py
|synapse/storage/databases/main/metrics.py
|synapse/storage/databases/main/monthly_active_users.py
|synapse/storage/databases/main/push_rule.py
|synapse/storage/databases/main/receipts.py
|synapse/storage/databases/main/roommember.py
|synapse/storage/databases/main/search.py
|synapse/storage/databases/main/state.py
|synapse/storage/schema/
@@ -60,6 +66,14 @@ exclude = (?x)
|tests/federation/test_federation_server.py
|tests/federation/transport/test_knocking.py
|tests/federation/transport/test_server.py
|tests/handlers/test_cas.py
|tests/handlers/test_directory.py
|tests/handlers/test_e2e_keys.py
|tests/handlers/test_federation.py
|tests/handlers/test_oidc.py
|tests/handlers/test_presence.py
|tests/handlers/test_profile.py
|tests/handlers/test_saml.py
|tests/handlers/test_typing.py
|tests/http/federation/test_matrix_federation_agent.py
|tests/http/federation/test_srv_resolver.py
@@ -71,15 +85,22 @@ exclude = (?x)
|tests/logging/test_terse_json.py
|tests/module_api/test_api.py
|tests/push/test_email.py
|tests/push/test_http.py
|tests/push/test_presentable_names.py
|tests/push/test_push_rule_evaluator.py
|tests/rest/client/test_transactions.py
|tests/rest/media/v1/test_media_storage.py
|tests/rest/media/v1/test_url_preview.py
|tests/scripts/test_new_matrix_user.py
|tests/server.py
|tests/server_notices/test_resource_limits_server_notices.py
|tests/state/test_v2.py
|tests/storage/test_background_update.py
|tests/storage/test_base.py
|tests/storage/test_client_ips.py
|tests/storage/test_database.py
|tests/storage/test_event_federation.py
|tests/storage/test_id_generators.py
|tests/storage/test_roommember.py
|tests/test_metrics.py
|tests/test_phone_home.py


@@ -66,15 +66,11 @@ def cli():
./scripts-dev/release.py tag
# ... wait for assets to build ...
# ... wait for asssets to build ...
./scripts-dev/release.py publish
./scripts-dev/release.py upload
# Optional: generate some nice links for the announcement
./scripts-dev/release.py upload
If the env var GH_TOKEN (or GITHUB_TOKEN) is set, or passed into the
`tag`/`publish` command, then a new draft release will be created/published.
"""
@@ -419,41 +415,6 @@ def upload():
)
@cli.command()
def announce():
"""Generate markdown to announce the release."""
current_version, _, _ = parse_version_from_module()
tag_name = f"v{current_version}"
click.echo(
f"""
Hi everyone. Synapse {current_version} has just been released.
[notes](https://github.com/matrix-org/synapse/releases/tag/{tag_name}) |\
[docker](https://hub.docker.com/r/matrixdotorg/synapse/tags?name={tag_name}) | \
[debs](https://packages.matrix.org/debian/) | \
[pypi](https://pypi.org/project/matrix-synapse/{current_version}/)"""
)
if "rc" in tag_name:
click.echo(
"""
Announce the RC in
- #homeowners:matrix.org (Synapse Announcements)
- #synapse-dev:matrix.org"""
)
else:
click.echo(
"""
Announce the release in
- #homeowners:matrix.org (Synapse Announcements), bumping the version in the topic
- #synapse:matrix.org (Synapse Admins), bumping the version in the topic
- #synapse-dev:matrix.org
- #synapse-package-maintainers:matrix.org"""
)
def parse_version_from_module() -> Tuple[
version.Version, redbaron.RedBaron, redbaron.Node
]:


@@ -95,7 +95,7 @@ CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS)
# We pin black so that our tests don't start failing on new releases.
CONDITIONAL_REQUIREMENTS["lint"] = [
"isort==5.7.0",
"black==22.3.0",
"black==21.12b0",
"flake8-comprehensions",
"flake8-bugbear==21.3.2",
"flake8",
@@ -108,7 +108,6 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
"types-jsonschema>=3.2.0",
"types-opentracing>=2.4.2",
"types-Pillow>=8.3.4",
"types-psycopg2>=2.9.9",
"types-pyOpenSSL>=20.0.7",
"types-PyYAML>=5.4.10",
"types-requests>=2.26.0",
@@ -128,7 +127,7 @@ CONDITIONAL_REQUIREMENTS["dev"] = (
+ CONDITIONAL_REQUIREMENTS["test"]
+ [
# The following are used by the release script
"click==8.1.0",
"click==7.1.2",
"redbaron==0.9.2",
"GitPython==3.1.14",
"commonmark==0.9.1",


@@ -68,7 +68,7 @@ try:
except ImportError:
pass
__version__ = "1.56.0"
__version__ = "1.54.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@@ -23,7 +23,7 @@ from typing_extensions import Final
MAX_PDU_SIZE = 65536
# the "depth" field on events is limited to 2**63 - 1
MAX_DEPTH = 2**63 - 1
MAX_DEPTH = 2 ** 63 - 1
# the maximum length for a room alias is 255 characters
MAX_ALIAS_LENGTH = 255


@@ -322,8 +322,7 @@ class GenericWorkerServer(HomeServer):
presence.register_servlets(self, resource)
if self.config.experimental.groups_enabled:
groups.register_servlets(self, resource)
groups.register_servlets(self, resource)
resources.update({CLIENT_API_PREFIX: resource})


@@ -261,10 +261,7 @@ class SynapseHomeServer(HomeServer):
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
if name == "metrics" and self.config.metrics.enable_metrics:
metrics_resource: Resource = MetricsResource(RegistryProxy)
if compress:
metrics_resource = gz_wrap(metrics_resource)
resources[METRICS_PREFIX] = metrics_resource
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
@@ -351,23 +348,6 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
if config.server.gc_seconds:
synapse.metrics.MIN_TIME_BETWEEN_GCS = config.server.gc_seconds
if (
config.registration.enable_registration
and not config.registration.enable_registration_without_verification
):
if (
not config.captcha.enable_registration_captcha
and not config.registration.registrations_require_3pid
and not config.registration.registration_requires_token
):
raise ConfigError(
"You have enabled open registration without any verification. This is a known vector for "
"spam and abuse. If you would like to allow public registration, please consider adding email, "
"captcha, or token-based verification. Otherwise this check can be removed by setting the "
"`enable_registration_without_verification` config option to `true`."
)
hs = SynapseHomeServer(
config.server.server_name,
config=config,


@@ -428,7 +428,7 @@ class _Recoverer:
"as-recoverer-%s" % (self.service.id,), self.retry
)
delay = 2**self.backoff_counter
delay = 2 ** self.backoff_counter
logger.info("Scheduling retries on %s in %fs", self.service.id, delay)
self.clock.call_later(delay, _retry)


@@ -19,7 +19,6 @@ from synapse.config import (
api,
appservice,
auth,
background_updates,
cache,
captcha,
cas,
@@ -114,7 +113,6 @@ class RootConfig:
caches: cache.CacheConfig
federation: federation.FederationConfig
retention: retention.RetentionConfig
background_updates: background_updates.BackgroundUpdateConfig
config_classes: List[Type["Config"]] = ...
def __init__(self) -> None: ...


@@ -1,68 +0,0 @@
# Copyright 2022 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class BackgroundUpdateConfig(Config):
section = "background_updates"
def generate_config_section(self, **kwargs) -> str:
return """\
## Background Updates ##
# Background updates are database updates that are run in the background in batches.
# The duration, minimum batch size, default batch size, whether to sleep between batches and if so, how long to
# sleep can all be configured. This is helpful to speed up or slow down the updates.
#
background_updates:
# How long in milliseconds to run a batch of background updates for. Defaults to 100. Uncomment and set
# a time to change the default.
#
#background_update_duration_ms: 500
# Whether to sleep between updates. Defaults to True. Uncomment to change the default.
#
#sleep_enabled: false
# If sleeping between updates, how long in milliseconds to sleep for. Defaults to 1000. Uncomment
# and set a duration to change the default.
#
#sleep_duration_ms: 300
# Minimum size a batch of background updates can be. Must be greater than 0. Defaults to 1. Uncomment and
# set a size to change the default.
#
#min_batch_size: 10
# The batch size to use for the first iteration of a new background update. The default is 100.
# Uncomment and set a size to change the default.
#
#default_batch_size: 50
"""
def read_config(self, config, **kwargs) -> None:
bg_update_config = config.get("background_updates") or {}
self.update_duration_ms = bg_update_config.get(
"background_update_duration_ms", 100
)
self.sleep_enabled = bg_update_config.get("sleep_enabled", True)
self.sleep_duration_ms = bg_update_config.get("sleep_duration_ms", 1000)
self.min_batch_size = bg_update_config.get("min_batch_size", 1)
self.default_batch_size = bg_update_config.get("default_batch_size", 100)


@@ -37,12 +37,6 @@ DEFAULT_CONFIG = """\
# 'txn_limit' gives the maximum number of transactions to run per connection
# before reconnecting. Defaults to 0, which means no limit.
#
# 'allow_unsafe_locale' is an option specific to Postgres. Under the default behavior, Synapse will refuse to
# start if the postgres db is set to a non-C locale. You can override this behavior (which is *not* recommended)
# by setting 'allow_unsafe_locale' to true. Note that doing so may corrupt your database. You can find more information
# here: https://matrix-org.github.io/synapse/latest/postgres.html#fixing-incorrect-collate-or-ctype and here:
# https://wiki.postgresql.org/wiki/Locale_data_changes
#
# 'args' gives options which are passed through to the database engine,
# except for options starting 'cp_', which are used to configure the Twisted
# connection pool. For a reference to valid arguments, see:


@@ -74,6 +74,3 @@ class ExperimentalConfig(Config):
# MSC3720 (Account status endpoint)
self.msc3720_enabled: bool = experimental.get("msc3720_enabled", False)
# The deprecated groups feature.
self.groups_enabled: bool = experimental.get("groups_enabled", True)


@@ -16,7 +16,6 @@ from .account_validity import AccountValidityConfig
from .api import ApiConfig
from .appservice import AppServiceConfig
from .auth import AuthConfig
from .background_updates import BackgroundUpdateConfig
from .cache import CacheConfig
from .captcha import CaptchaConfig
from .cas import CasConfig
@@ -100,5 +99,4 @@ class HomeServerConfig(RootConfig):
WorkerConfig,
RedisConfig,
ExperimentalConfig,
BackgroundUpdateConfig,
]


@@ -182,14 +182,8 @@ class OIDCConfig(Config):
#
# localpart_template: Jinja2 template for the localpart of the MXID.
# If this is not set, the user will be prompted to choose their
# own username (see the documentation for the
# 'sso_auth_account_details.html' template). This template can
# use the 'localpart_from_email' filter.
#
# confirm_localpart: Whether to prompt the user to validate (or
# change) the generated localpart (see the documentation for the
# 'sso_auth_account_details.html' template), instead of
# registering the account right away.
# own username (see 'sso_auth_account_details.html' in the 'sso'
# section of this file).
#
# display_name_template: Jinja2 template for the display name to set
# on first login. If unset, no displayname will be set.


@@ -33,10 +33,6 @@ class RegistrationConfig(Config):
str(config["disable_registration"])
)
self.enable_registration_without_verification = strtobool(
str(config.get("enable_registration_without_verification", False))
)
self.registrations_require_3pid = config.get("registrations_require_3pid", [])
self.allowed_local_3pids = config.get("allowed_local_3pids", [])
self.enable_3pid_lookup = config.get("enable_3pid_lookup", True)
@@ -211,18 +207,10 @@ class RegistrationConfig(Config):
# Registration can be rate-limited using the parameters in the "Ratelimiting"
# section of this file.
# Enable registration for new users. Defaults to 'false'. It is highly recommended that if you enable registration,
# you use either captcha, email, or token-based verification to verify that new users are not bots. In order to enable registration
# without any verification, you must also set `enable_registration_without_verification`, found below.
# Enable registration for new users.
#
#enable_registration: false
# Enable registration without email or captcha verification. Note: this option is *not* recommended,
# as registration without verification is a known vector for spam and abuse. Defaults to false. Has no effect
# unless `enable_registration` is also enabled.
#
#enable_registration_without_verification: true
# Time that a user's session remains valid for, after they log in.
#
# Note that this is not currently compatible with guest logins.


@@ -676,10 +676,6 @@ class ServerConfig(Config):
):
raise ConfigError("'custom_template_directory' must be a string")
self.use_account_validity_in_account_status: bool = (
config.get("use_account_validity_in_account_status") or False
)
def has_tls_listener(self) -> bool:
return any(listener.tls for listener in self.listeners)


@@ -25,8 +25,8 @@ logger = logging.getLogger(__name__)
LEGACY_SPAM_CHECKER_WARNING = """
This server is using a spam checker module that is implementing the deprecated spam
checker interface. Please check with the module's maintainer to see if a new version
supporting Synapse's generic modules system is available. For more information, please
see https://matrix-org.github.io/synapse/latest/modules/index.html
supporting Synapse's generic modules system is available.
For more information, please see https://matrix-org.github.io/synapse/latest/modules.html
---------------------------------------------------------------------------------------"""


@@ -182,7 +182,7 @@ class Keyring:
vk = get_verify_key(hs.signing_key)
self._local_verify_keys[f"{vk.alg}:{vk.version}"] = FetchKeyResult(
verify_key=vk,
valid_until_ts=2**63, # fake future timestamp
valid_until_ts=2 ** 63, # fake future timestamp
)
async def verify_json_for_server(


@@ -21,6 +21,7 @@ from typing import (
Awaitable,
Callable,
Collection,
Dict,
List,
Optional,
Tuple,
@@ -30,7 +31,7 @@ from typing import (
from synapse.rest.media.v1._base import FileInfo
from synapse.rest.media.v1.media_storage import ReadableFileWrapper
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.types import RoomAlias, UserProfile
from synapse.types import RoomAlias
from synapse.util.async_helpers import maybe_awaitable
if TYPE_CHECKING:
@@ -49,7 +50,7 @@ USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[[str, str, str, str], Awaitable[bo
USER_MAY_CREATE_ROOM_CALLBACK = Callable[[str], Awaitable[bool]]
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[[str, RoomAlias], Awaitable[bool]]
USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[[str, str], Awaitable[bool]]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[UserProfile], Awaitable[bool]]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[Dict[str, str]], Awaitable[bool]]
LEGACY_CHECK_REGISTRATION_FOR_SPAM_CALLBACK = Callable[
[
Optional[dict],
@@ -244,8 +245,8 @@ class SpamChecker:
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
sent by a local user. If it is sent by a user on another server, the
event is soft-failed.
sent by a local user. If it is sent by a user on another server, then
users receive a blank event.
Args:
event: the event to be checked
@@ -382,7 +383,7 @@ class SpamChecker:
return True
async def check_username_for_spam(self, user_profile: UserProfile) -> bool:
async def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool:
"""Checks if a user ID or display name are considered "spammy" by this server.
If the server considers a username spammy, then it will not be included in


@@ -38,8 +38,8 @@ from synapse.util.frozenutils import unfreeze
from . import EventBase
if TYPE_CHECKING:
from synapse.handlers.relations import BundledAggregations
from synapse.server import HomeServer
from synapse.storage.databases.main.relations import BundledAggregations
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
@@ -49,7 +49,7 @@ if TYPE_CHECKING:
# the literal fields "foo\" and "bar" but will instead be treated as "foo\\.bar"
SPLIT_FIELD_REGEX = re.compile(r"(?<!\\)\.")
CANONICALJSON_MAX_INT = (2**53) - 1
CANONICALJSON_MAX_INT = (2 ** 53) - 1
CANONICALJSON_MIN_INT = -CANONICALJSON_MAX_INT
@@ -530,12 +530,9 @@ class EventClientSerializer:
# Include the bundled aggregations in the event.
if serialized_aggregations:
# There is likely already an "unsigned" field, but a filter might
# have stripped it off (via the event_fields option). The server is
# allowed to return additional fields, so add it back.
serialized_event.setdefault("unsigned", {}).setdefault(
"m.relations", {}
).update(serialized_aggregations)
serialized_event["unsigned"].setdefault("m.relations", {}).update(
serialized_aggregations
)
def serialize_events(
self,
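The comment in this hunk explains the underlying issue: a client filter using the `event_fields` option may strip `unsigned` from the serialized event entirely, so indexing it directly raises a `KeyError`, while `setdefault` recreates the field before merging in the bundled aggregations. A toy illustration with plain dictionaries, not Synapse's real serializer:

```python
serialized_event = {"type": "m.room.message", "content": {}}  # "unsigned" was stripped by a filter
aggregations = {"m.thread": {"count": 2}}

# Indexing "unsigned" directly assumes the field survived filtering:
try:
    serialized_event["unsigned"].setdefault("m.relations", {}).update(aggregations)
except KeyError:
    print("KeyError: 'unsigned' was stripped by the event_fields filter")

# Using setdefault recreates the field before merging the bundled aggregations:
serialized_event.setdefault("unsigned", {}).setdefault("m.relations", {}).update(aggregations)
print(serialized_event["unsigned"])  # {'m.relations': {'m.thread': {'count': 2}}}
```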


@@ -22,6 +22,7 @@ from typing import (
Callable,
Collection,
Dict,
Iterable,
List,
Optional,
Tuple,
@@ -576,10 +577,10 @@ class FederationServer(FederationBase):
async def _on_context_state_request_compute(
self, room_id: str, event_id: Optional[str]
) -> Dict[str, list]:
pdus: Collection[EventBase]
if event_id:
event_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
pdus = await self.store.get_events_as_list(event_ids)
pdus: Iterable[EventBase] = await self.handler.get_state_for_pdu(
room_id, event_id
)
else:
pdus = (await self.state.get_current_state(room_id)).values()
@@ -1092,7 +1093,7 @@ class FederationServer(FederationBase):
# has started processing).
while True:
async with lock:
logger.info("handling received PDU in room %s: %s", room_id, event)
logger.info("handling received PDU: %s", event)
try:
with nested_logging_context(event.event_id):
await self._federation_event_handler.on_receive_pdu(


@@ -289,7 +289,7 @@ class OpenIdUserInfo(BaseFederationServlet):
return 200, {"sub": user_id}
SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
DEFAULT_SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
"federation": FEDERATION_SERVLET_CLASSES,
"room_list": (PublicRoomList,),
"group_server": GROUP_SERVER_SERVLET_CLASSES,
@@ -298,10 +298,6 @@ SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
"openid": (OpenIdUserInfo,),
}
DEFAULT_SERVLET_GROUPS = ("federation", "room_list", "openid")
GROUP_SERVLET_GROUPS = ("group_server", "group_local", "group_attestation")
def register_servlets(
hs: "HomeServer",
@@ -324,19 +320,16 @@ def register_servlets(
Defaults to ``DEFAULT_SERVLET_GROUPS``.
"""
if not servlet_groups:
servlet_groups = DEFAULT_SERVLET_GROUPS
# Only allow the groups servlets if the deprecated groups feature is enabled.
if hs.config.experimental.groups_enabled:
servlet_groups = servlet_groups + GROUP_SERVLET_GROUPS
servlet_groups = DEFAULT_SERVLET_GROUPS.keys()
for servlet_group in servlet_groups:
# Skip unknown servlet groups.
if servlet_group not in SERVLET_GROUPS:
if servlet_group not in DEFAULT_SERVLET_GROUPS:
raise RuntimeError(
f"Attempting to register unknown federation servlet: '{servlet_group}'"
)
for servletclass in SERVLET_GROUPS[servlet_group]:
for servletclass in DEFAULT_SERVLET_GROUPS[servlet_group]:
# Only allow the `/timestamp_to_event` servlet if msc3030 is enabled
if (
servletclass == FederationTimestampLookupServlet


@@ -26,10 +26,6 @@ class AccountHandler:
self._main_store = hs.get_datastores().main
self._is_mine = hs.is_mine
self._federation_client = hs.get_federation_client()
self._use_account_validity_in_account_status = (
hs.config.server.use_account_validity_in_account_status
)
self._account_validity_handler = hs.get_account_validity_handler()
async def get_account_statuses(
self,
@@ -110,13 +106,6 @@ class AccountHandler:
"deactivated": userinfo.is_deactivated,
}
if self._use_account_validity_in_account_status:
status[
"org.matrix.expired"
] = await self._account_validity_handler.is_user_expired(
user_id.to_string()
)
return status
async def _get_remote_account_statuses(


@@ -371,6 +371,7 @@ class DeviceHandler(DeviceWorkerHandler):
log_kv(
{"reason": "User doesn't have device id.", "device_id": device_id}
)
pass
else:
raise
@@ -413,6 +414,7 @@ class DeviceHandler(DeviceWorkerHandler):
# no match
set_tag("error", True)
set_tag("reason", "User doesn't have that device id.")
pass
else:
raise


@@ -950,35 +950,54 @@ class FederationHandler:
return event
async def get_state_for_pdu(self, room_id: str, event_id: str) -> List[EventBase]:
"""Returns the state at the event. i.e. not including said event."""
event = await self.store.get_event(event_id, check_room_id=room_id)
state_groups = await self.state_store.get_state_groups(room_id, [event_id])
if state_groups:
_, state = list(state_groups.items()).pop()
results = {(e.type, e.state_key): e for e in state}
if event.is_state():
# Get previous state
if "replaces_state" in event.unsigned:
prev_id = event.unsigned["replaces_state"]
if prev_id != event.event_id:
prev_event = await self.store.get_event(prev_id)
results[(event.type, event.state_key)] = prev_event
else:
del results[(event.type, event.state_key)]
res = list(results.values())
return res
else:
return []
async def get_state_ids_for_pdu(self, room_id: str, event_id: str) -> List[str]:
"""Returns the state at the event. i.e. not including said event."""
event = await self.store.get_event(event_id, check_room_id=room_id)
if event.internal_metadata.outlier:
raise NotFoundError("State not known at event %s" % (event_id,))
state_groups = await self.state_store.get_state_groups_ids(room_id, [event_id])
# get_state_groups_ids should return exactly one result
assert len(state_groups) == 1
if state_groups:
_, state = list(state_groups.items()).pop()
results = state
state_map = next(iter(state_groups.values()))
if event.is_state():
# Get previous state
if "replaces_state" in event.unsigned:
prev_id = event.unsigned["replaces_state"]
if prev_id != event.event_id:
results[(event.type, event.state_key)] = prev_id
else:
results.pop((event.type, event.state_key), None)
state_key = event.get_state_key()
if state_key is not None:
# the event was not rejected (get_event raises a NotFoundError for rejected
# events) so the state at the event should include the event itself.
assert (
state_map.get((event.type, state_key)) == event.event_id
), "State at event did not include event itself"
# ... but we need the state *before* that event
if "replaces_state" in event.unsigned:
prev_id = event.unsigned["replaces_state"]
state_map[(event.type, state_key)] = prev_id
else:
del state_map[(event.type, state_key)]
return list(state_map.values())
return list(results.values())
else:
return []
async def on_backfill_request(
self, origin: str, room_id: str, pdu_list: List[str], limit: int


@@ -277,8 +277,8 @@ class MessageHandler:
# If this is an AS, double check that they are allowed to see the members.
# This can either be because the AS user is in the room or because there
# is a user in the room that the AS is "interested in"
if False and requester.app_service and user_id not in users_with_profile: # type: ignore[unreachable]
for uid in users_with_profile: # type: ignore[unreachable]
if requester.app_service and user_id not in users_with_profile:
for uid in users_with_profile:
if requester.app_service.is_interested_in_user(uid):
break
else:
@@ -493,7 +493,6 @@ class EventCreationHandler:
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
@@ -528,15 +527,6 @@ class EventCreationHandler:
If non-None, prev_event_ids must also be provided.
state_event_ids:
The full state at a given event. This is used particularly by the MSC2716
/batch_send endpoint. One use case is with insertion events which float at
the beginning of a historical batch and don't have any `prev_events` to
derive from; we add all of these state events as the explicit state so the
rest of the historical batch can inherit the same state and state_group.
This should normally be left as None, which will cause the auth_event_ids
to be calculated based on the room state at the prev_events.
require_consent: Whether to check if the requester has
consented to the privacy policy.
@@ -622,7 +612,6 @@ class EventCreationHandler:
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
state_event_ids=state_event_ids,
depth=depth,
)
@@ -782,7 +771,7 @@ class EventCreationHandler:
event_dict: dict,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
ratelimit: bool = True,
txn_id: Optional[str] = None,
ignore_shadow_ban: bool = False,
@@ -806,14 +795,12 @@ class EventCreationHandler:
The event IDs to use as the prev events.
Should normally be left as None to automatically request them
from the database.
state_event_ids:
The full state at a given event. This is used particularly by the MSC2716
/batch_send endpoint. One use case is with insertion events which float at
the beginning of a historical batch and don't have any `prev_events` to
derive from; we add all of these state events as the explicit state so the
rest of the historical batch can inherit the same state and state_group.
This should normally be left as None, which will cause the auth_event_ids
to be calculated based on the room state at the prev_events.
auth_event_ids:
The event ids to use as the auth_events for the new event.
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
If non-None, prev_event_ids must also be provided.
ratelimit: Whether to rate limit this send.
txn_id: The transaction ID.
ignore_shadow_ban: True if shadow-banned users should be allowed to
@@ -869,9 +856,8 @@ class EventCreationHandler:
requester,
event_dict,
txn_id=txn_id,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
state_event_ids=state_event_ids,
auth_event_ids=auth_event_ids,
outlier=outlier,
historical=historical,
depth=depth,
@@ -907,7 +893,6 @@ class EventCreationHandler:
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None,
depth: Optional[int] = None,
) -> Tuple[EventBase, EventContext]:
"""Create a new event for a local client
@@ -930,15 +915,6 @@ class EventCreationHandler:
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
state_event_ids:
The full state at a given event. This is used particularly by the MSC2716
/batch_send endpoint. One use case is with insertion events which float at
the beginning of a historical batch and don't have any `prev_events` to
derive from; we add all of these state events as the explicit state so the
rest of the historical batch can inherit the same state and state_group.
This should normally be left as None, which will cause the auth_event_ids
to be calculated based on the room state at the prev_events.
depth: Override the depth used to order the event in the DAG.
Should normally be set to None, which will cause the depth to be calculated
based on the prev_events.
@@ -946,26 +922,31 @@ class EventCreationHandler:
Returns:
Tuple of created event, context
"""
# Strip down the state_event_ids to only what we need to auth the event.
# Strip down the auth_event_ids to only what we need to auth the event.
# For example, we don't need extra m.room.member that don't match event.sender
if state_event_ids is not None:
# Do a quick check to make sure that prev_event_ids is present to
# make the type-checking around `builder.build` happy.
full_state_ids_at_event = None
if auth_event_ids is not None:
# If auth events are provided, prev events must be also.
# prev_event_ids could be an empty array though.
assert prev_event_ids is not None
# Copy the full auth state before it stripped down
full_state_ids_at_event = auth_event_ids.copy()
temp_event = await builder.build(
prev_event_ids=prev_event_ids,
auth_event_ids=state_event_ids,
auth_event_ids=auth_event_ids,
depth=depth,
)
state_events = await self.store.get_events_as_list(state_event_ids)
auth_events = await self.store.get_events_as_list(auth_event_ids)
# Create a StateMap[str]
state_map = {(e.type, e.state_key): e.event_id for e in state_events}
# Actually strip down and only use the necessary auth events
auth_event_state_map = {
(e.type, e.state_key): e.event_id for e in auth_events
}
# Actually strip down and use the necessary auth events
auth_event_ids = self._event_auth_handler.compute_auth_events(
event=temp_event,
current_state_ids=state_map,
current_state_ids=auth_event_state_map,
for_verification=False,
)
@@ -1008,16 +989,12 @@ class EventCreationHandler:
context = EventContext.for_outlier()
elif (
event.type == EventTypes.MSC2716_INSERTION
and state_event_ids
and full_state_ids_at_event
and builder.internal_metadata.is_historical()
):
# Add explicit state to the insertion event so it has state to derive
# from even though it's floating with no `prev_events`. The rest of
# the batch can derive from this state and state_group.
#
# TODO(faster_joins): figure out how this works, and make sure that the
# old state is complete.
old_state = await self.store.get_events_as_list(state_event_ids)
old_state = await self.store.get_events_as_list(full_state_ids_at_event)
context = await self.state.compute_event_context(event, old_state=old_state)
else:
context = await self.state.compute_event_context(event)
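Note: the stripping step in this hunk reduces the full state at the event to just the entries needed to authorise it. A rough standalone sketch of that idea (hypothetical helper; Synapse's real compute_auth_events also handles membership events and other special cases):

from typing import Dict, List, Tuple

StateMap = Dict[Tuple[str, str], str]  # (event type, state_key) -> event ID

def strip_state_to_auth_events(full_state: StateMap, sender: str) -> List[str]:
    # Keep only the state entries a plain (non-membership) event is authed
    # against: the create event, the power levels, and the sender's own membership.
    wanted = [
        ("m.room.create", ""),
        ("m.room.power_levels", ""),
        ("m.room.member", sender),
    ]
    return [full_state[key] for key in wanted if key in full_state]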

View File

@@ -45,7 +45,6 @@ from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
from synapse.util import Clock, json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.templates import _localpart_from_email_filter
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -1229,7 +1228,6 @@ class OidcSessionData:
class UserAttributeDict(TypedDict):
localpart: Optional[str]
confirm_localpart: bool
display_name: Optional[str]
emails: List[str]
@@ -1309,11 +1307,6 @@ def jinja_finalize(thing: Any) -> Any:
env = Environment(finalize=jinja_finalize)
env.filters.update(
{
"localpart_from_email": _localpart_from_email_filter,
}
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -1323,7 +1316,6 @@ class JinjaOidcMappingConfig:
display_name_template: Optional[Template]
email_template: Optional[Template]
extra_attributes: Dict[str, Template]
confirm_localpart: bool = False
class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
@@ -1365,17 +1357,12 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
"invalid jinja template", path=["extra_attributes", key]
) from e
confirm_localpart = config.get("confirm_localpart") or False
if not isinstance(confirm_localpart, bool):
raise ConfigError("must be a bool", path=["confirm_localpart"])
return JinjaOidcMappingConfig(
subject_claim=subject_claim,
localpart_template=localpart_template,
display_name_template=display_name_template,
email_template=email_template,
extra_attributes=extra_attributes,
confirm_localpart=confirm_localpart,
)
def get_remote_user_id(self, userinfo: UserInfo) -> str:
@@ -1411,10 +1398,7 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
emails.append(email)
return UserAttributeDict(
localpart=localpart,
display_name=display_name,
emails=emails,
confirm_localpart=self._config.confirm_localpart,
localpart=localpart, display_name=display_name, emails=emails
)
async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict:
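For context, the filter removed above let OIDC mapping templates derive a Matrix localpart from an email claim. A minimal sketch of the same idea with plain Jinja2 (the filter body here is illustrative, not Synapse's exact implementation):

from jinja2 import Environment

def localpart_from_email(address: str) -> str:
    # Keep the part before the '@' and lowercase it.
    return address.rsplit("@", 1)[0].lower()

env = Environment()
env.filters["localpart_from_email"] = localpart_from_email

template = env.from_string("{{ user.email | localpart_from_email }}")
print(template.render(user={"email": "Alice@example.com"}))  # prints: alice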

View File

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Collection, Dict, List, Optional, Set
from typing import TYPE_CHECKING, Any, Collection, Dict, List, Optional, Set
import attr
@@ -134,7 +134,6 @@ class PaginationHandler:
self.clock = hs.get_clock()
self._server_name = hs.hostname
self._room_shutdown_handler = hs.get_room_shutdown_handler()
self._relations_handler = hs.get_relations_handler()
self.pagination_lock = ReadWriteLock()
# IDs of rooms in which there is currently an active purge *or delete* operation.
@@ -351,7 +350,7 @@ class PaginationHandler:
"""
self._purges_in_progress_by_room.add(room_id)
try:
async with self.pagination_lock.write(room_id):
with await self.pagination_lock.write(room_id):
await self.storage.purge_events.purge_history(
room_id, token, delete_local_events
)
@@ -407,7 +406,7 @@ class PaginationHandler:
room_id: room to be purged
force: set true to skip checking for joined users.
"""
async with self.pagination_lock.write(room_id):
with await self.pagination_lock.write(room_id):
# first check that we have no users in this room
if not force:
joined = await self.store.is_host_joined(room_id, self._server_name)
@@ -423,7 +422,7 @@ class PaginationHandler:
pagin_config: PaginationConfig,
as_client_event: bool = True,
event_filter: Optional[Filter] = None,
) -> JsonDict:
) -> Dict[str, Any]:
"""Get messages in a room.
Args:
@@ -432,7 +431,6 @@ class PaginationHandler:
pagin_config: The pagination config rules to apply, if any.
as_client_event: True to get events in client-server format.
event_filter: Filter to apply to results or None
Returns:
Pagination API results
"""
@@ -450,7 +448,7 @@ class PaginationHandler:
room_token = from_token.room_key
async with self.pagination_lock.read(room_id):
with await self.pagination_lock.read(room_id):
(
membership,
member_event_id,
@@ -540,9 +538,7 @@ class PaginationHandler:
state_dict = await self.store.get_events(list(state_ids.values()))
state = state_dict.values()
aggregations = await self._relations_handler.get_bundled_aggregations(
events, user_id
)
aggregations = await self.store.get_bundled_aggregations(events, user_id)
time_now = self.clock.time_msec()
@@ -619,7 +615,7 @@ class PaginationHandler:
self._purges_in_progress_by_room.add(room_id)
try:
async with self.pagination_lock.write(room_id):
with await self.pagination_lock.write(room_id):
self._delete_by_id[delete_id].status = DeleteStatus.STATUS_SHUTTING_DOWN
self._delete_by_id[
delete_id
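The lock changes in this file swap between `async with self.pagination_lock.write(room_id)` and the older `with await ...` form. A toy sketch of why a per-room lock is held at all, using a plain asyncio lock (Synapse's ReadWriteLock additionally lets multiple readers proceed concurrently):

import asyncio
from collections import defaultdict
from typing import DefaultDict

# One lock per room: a purge and a /messages pagination in the same room
# must not interleave, but different rooms stay independent.
_room_locks: DefaultDict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

async def purge_history(room_id: str) -> None:
    async with _room_locks[room_id]:
        ...  # delete events for this room

async def get_messages(room_id: str) -> None:
    async with _room_locks[room_id]:
        ...  # read and paginate events for this room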

View File

@@ -267,6 +267,7 @@ class BasePresenceHandler(abc.ABC):
is_syncing: Whether or not the user is now syncing
sync_time_msec: Time in ms when the user was last syncing
"""
pass
async def update_external_syncs_clear(self, process_id: str) -> None:
"""Marks all users that had been marked as syncing by a given process
@@ -276,6 +277,7 @@ class BasePresenceHandler(abc.ABC):
This is a no-op when presence is handled by a different worker.
"""
pass
async def process_replication_rows(
self, stream_name: str, instance_name: str, token: int, rows: list

View File

@@ -336,18 +336,12 @@ class ProfileHandler:
"""Check that the size and content type of the avatar at the given MXC URI are
within the configured limits.
If the given `mxc` is empty, no checks are performed. (Users are always able to
unset their avatar.)
Args:
mxc: The MXC URI at which the avatar can be found.
Returns:
A boolean indicating whether the file can be allowed to be set as an avatar.
"""
if mxc == "":
return True
if not self.max_avatar_size and not self.allowed_avatar_mimetypes:
return True
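The remainder of the method (outside this hunk) compares the stored media against those limits. A standalone sketch of that comparison, assuming the avatar's size and MIME type have already been looked up:

from typing import Collection, Optional

def avatar_within_limits(
    size_bytes: int,
    mime_type: str,
    max_avatar_size: Optional[int],
    allowed_avatar_mimetypes: Optional[Collection[str]],
) -> bool:
    # Reject avatars that exceed the configured size cap, if one is set.
    if max_avatar_size is not None and size_bytes > max_avatar_size:
        return False
    # Reject content types outside the configured allow-list, if one is set.
    if allowed_avatar_mimetypes and mime_type not in allowed_avatar_mimetypes:
        return False
    return True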

View File

@@ -1,271 +0,0 @@
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Dict, Iterable, Optional, cast
import attr
from frozendict import frozendict
from synapse.api.constants import RelationTypes
from synapse.api.errors import SynapseError
from synapse.events import EventBase
from synapse.types import JsonDict, Requester, StreamToken
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.databases.main import DataStore
logger = logging.getLogger(__name__)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class _ThreadAggregation:
# The latest event in the thread.
latest_event: EventBase
# The latest edit to the latest event in the thread.
latest_edit: Optional[EventBase]
# The total number of events in the thread.
count: int
# True if the current user has sent an event to the thread.
current_user_participated: bool
@attr.s(slots=True, auto_attribs=True)
class BundledAggregations:
"""
The bundled aggregations for an event.
Some values require additional processing during serialization.
"""
annotations: Optional[JsonDict] = None
references: Optional[JsonDict] = None
replace: Optional[EventBase] = None
thread: Optional[_ThreadAggregation] = None
def __bool__(self) -> bool:
return bool(self.annotations or self.references or self.replace or self.thread)
class RelationsHandler:
def __init__(self, hs: "HomeServer"):
self._main_store = hs.get_datastores().main
self._storage = hs.get_storage()
self._auth = hs.get_auth()
self._clock = hs.get_clock()
self._event_handler = hs.get_event_handler()
self._event_serializer = hs.get_event_client_serializer()
async def get_relations(
self,
requester: Requester,
event_id: str,
room_id: str,
relation_type: Optional[str] = None,
event_type: Optional[str] = None,
aggregation_key: Optional[str] = None,
limit: int = 5,
direction: str = "b",
from_token: Optional[StreamToken] = None,
to_token: Optional[StreamToken] = None,
) -> JsonDict:
"""Get related events of a event, ordered by topological ordering.
TODO Accept a PaginationConfig instead of individual pagination parameters.
Args:
requester: The user requesting the relations.
event_id: Fetch events that relate to this event ID.
room_id: The room the event belongs to.
relation_type: Only fetch events with this relation type, if given.
event_type: Only fetch events with this event type, if given.
aggregation_key: Only fetch events with this aggregation key, if given.
limit: Only fetch the most recent `limit` events.
direction: Whether to fetch the most recent first (`"b"`) or the
oldest first (`"f"`).
from_token: Fetch rows from the given token, or from the start if None.
to_token: Fetch rows up to the given token, or up to the end if None.
Returns:
The pagination chunk.
"""
user_id = requester.user.to_string()
# TODO Properly handle a user leaving a room.
(_, member_event_id) = await self._auth.check_user_in_room_or_world_readable(
room_id, user_id, allow_departed_users=True
)
# This gets the original event and checks that a) the event exists and
# b) the user is allowed to view it.
event = await self._event_handler.get_event(requester.user, room_id, event_id)
if event is None:
raise SynapseError(404, "Unknown parent event.")
pagination_chunk = await self._main_store.get_relations_for_event(
event_id=event_id,
event=event,
room_id=room_id,
relation_type=relation_type,
event_type=event_type,
aggregation_key=aggregation_key,
limit=limit,
direction=direction,
from_token=from_token,
to_token=to_token,
)
events = await self._main_store.get_events_as_list(
[c["event_id"] for c in pagination_chunk.chunk]
)
events = await filter_events_for_client(
self._storage, user_id, events, is_peeking=(member_event_id is None)
)
now = self._clock.time_msec()
# Do not bundle aggregations when retrieving the original event because
# we want the content before relations are applied to it.
original_event = self._event_serializer.serialize_event(
event, now, bundle_aggregations=None
)
# The relations returned for the requested event do include their
# bundled aggregations.
aggregations = await self.get_bundled_aggregations(
events, requester.user.to_string()
)
serialized_events = self._event_serializer.serialize_events(
events, now, bundle_aggregations=aggregations
)
return_value = await pagination_chunk.to_dict(self._main_store)
return_value["chunk"] = serialized_events
return_value["original_event"] = original_event
return return_value
async def _get_bundled_aggregation_for_event(
self, event: EventBase, user_id: str
) -> Optional[BundledAggregations]:
"""Generate bundled aggregations for an event.
Note that this does not use a cache, but depends on cached methods.
Args:
event: The event to calculate bundled aggregations for.
user_id: The user requesting the bundled aggregations.
Returns:
The bundled aggregations for an event, if bundled aggregations are
enabled and the event can have bundled aggregations.
"""
# Do not bundle aggregations for an event which represents an edit or an
# annotation. It does not make sense for them to have related events.
relates_to = event.content.get("m.relates_to")
if isinstance(relates_to, (dict, frozendict)):
relation_type = relates_to.get("rel_type")
if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE):
return None
event_id = event.event_id
room_id = event.room_id
# The bundled aggregations to include, a mapping of relation type to a
# type-specific value. Some types include the direct return type here
# while others need more processing during serialization.
aggregations = BundledAggregations()
annotations = await self._main_store.get_aggregation_groups_for_event(
event_id, room_id
)
if annotations.chunk:
aggregations.annotations = await annotations.to_dict(
cast("DataStore", self)
)
references = await self._main_store.get_relations_for_event(
event_id, event, room_id, RelationTypes.REFERENCE, direction="f"
)
if references.chunk:
aggregations.references = await references.to_dict(cast("DataStore", self))
# Store the bundled aggregations in the event metadata for later use.
return aggregations
async def get_bundled_aggregations(
self, events: Iterable[EventBase], user_id: str
) -> Dict[str, BundledAggregations]:
"""Generate bundled aggregations for events.
Args:
events: The iterable of events to calculate bundled aggregations for.
user_id: The user requesting the bundled aggregations.
Returns:
A map of event ID to the bundled aggregation for the event. Not all
events may have bundled aggregations in the results.
"""
# De-duplicate events by ID to handle the same event requested multiple times.
#
# State events do not get bundled aggregations.
events_by_id = {
event.event_id: event for event in events if not event.is_state()
}
# event ID -> bundled aggregation in non-serialized form.
results: Dict[str, BundledAggregations] = {}
# Fetch other relations per event.
for event in events_by_id.values():
event_result = await self._get_bundled_aggregation_for_event(event, user_id)
if event_result:
results[event.event_id] = event_result
# Fetch any edits (but not for redacted events).
edits = await self._main_store.get_applicable_edits(
[
event_id
for event_id, event in events_by_id.items()
if not event.internal_metadata.is_redacted()
]
)
for event_id, edit in edits.items():
results.setdefault(event_id, BundledAggregations()).replace = edit
# Fetch thread summaries.
summaries = await self._main_store.get_thread_summaries(events_by_id.keys())
# Only fetch the participated flag for the limited set of events that
# actually had summaries.
participated = await self._main_store.get_threads_participated(
[event_id for event_id, summary in summaries.items() if summary], user_id
)
for event_id, summary in summaries.items():
if summary:
thread_count, latest_thread_event, edit = summary
results.setdefault(
event_id, BundledAggregations()
).thread = _ThreadAggregation(
latest_event=latest_thread_event,
latest_edit=edit,
count=thread_count,
# If there's a thread summary it must also exist in the
# participated dictionary.
current_user_participated=participated[event_id],
)
return results
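For reference, a BundledAggregations value produced here is later folded into the event's unsigned section by the serializer, roughly along these lines (field names follow MSC2675/MSC3440; treat the exact shape as a sketch, the serializer is authoritative):

def fold_into_unsigned(serialized_event: dict, aggregations: "BundledAggregations") -> None:
    # Sketch only: attach bundled aggregations under unsigned["m.relations"].
    relations: dict = {}
    if aggregations.annotations:
        relations["m.annotation"] = aggregations.annotations
    if aggregations.references:
        relations["m.reference"] = aggregations.references
    if aggregations.replace is not None:
        # The real serializer emits the full replacement event here.
        relations["m.replace"] = {"event_id": aggregations.replace.event_id}
    if aggregations.thread is not None:
        relations["m.thread"] = {
            "count": aggregations.thread.count,
            "current_user_participated": aggregations.thread.current_user_participated,
            # latest_event (and its latest edit) are serialized in full by the real code.
            "latest_event": {"event_id": aggregations.thread.latest_event.event_id},
        }
    if relations:
        serialized_event.setdefault("unsigned", {})["m.relations"] = relations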

View File

@@ -60,8 +60,8 @@ from synapse.events import EventBase
from synapse.events.utils import copy_power_levels_contents
from synapse.federation.federation_client import InvalidResponseError
from synapse.handlers.federation import get_domains_from_state
from synapse.handlers.relations import BundledAggregations
from synapse.rest.admin._base import assert_user_is_admin
from synapse.storage.databases.main.relations import BundledAggregations
from synapse.storage.state import StateFilter
from synapse.streams import EventSource
from synapse.types import (
@@ -1118,7 +1118,6 @@ class RoomContextHandler:
self.store = hs.get_datastores().main
self.storage = hs.get_storage()
self.state_store = self.storage.state
self._relations_handler = hs.get_relations_handler()
async def get_event_context(
self,
@@ -1191,7 +1190,7 @@ class RoomContextHandler:
event = filtered[0]
# Fetch the aggregations.
aggregations = await self._relations_handler.get_bundled_aggregations(
aggregations = await self.store.get_bundled_aggregations(
itertools.chain(events_before, (event,), events_after),
user.to_string(),
)

View File

@@ -121,11 +121,12 @@ class RoomBatchHandler:
return create_requester(user_id, app_service=app_service)
async def get_most_recent_full_state_ids_from_event_id_list(
async def get_most_recent_auth_event_ids_from_event_id_list(
self, event_ids: List[str]
) -> List[str]:
"""Find the most recent event_id and grab the full state at that event.
We will use this as a base to auth our historical messages against.
"""Find the most recent auth event ids (derived from state events) that
allowed that message to be sent. We will use this as a base
to auth our historical messages against.
Args:
event_ids: List of event IDs to look at
@@ -135,37 +136,38 @@ class RoomBatchHandler:
"""
(
most_recent_event_id,
most_recent_prev_event_id,
_,
) = await self.store.get_max_depth_of(event_ids)
# mapping from (type, state_key) -> state_event_id
prev_state_map = await self.state_store.get_state_ids_for_event(
most_recent_event_id
most_recent_prev_event_id
)
# List of state event IDs
full_state_ids = list(prev_state_map.values())
prev_state_ids = list(prev_state_map.values())
auth_event_ids = prev_state_ids
return full_state_ids
return auth_event_ids
async def persist_state_events_at_start(
self,
state_events_at_start: List[JsonDict],
room_id: str,
initial_state_event_ids: List[str],
initial_auth_event_ids: List[str],
app_service_requester: Requester,
) -> List[str]:
"""Takes all `state_events_at_start` event dictionaries and creates/persists
them in a floating state event chain which doesn't resolve into the current room
state. They are floating because they reference no prev_events and are marked
as outliers which disconnects them from the normal DAG.
them as floating state events which don't resolve into the current room state.
They are floating because they reference a fake prev_event which doesn't connect
to the normal DAG at all.
Args:
state_events_at_start:
room_id: Room where you want the events persisted in.
initial_state_event_ids:
The base set of state for the historical batch which the floating
state chain will derive from. This should probably be the state
from the `prev_event` defined by `/batch_send?prev_event_id=$abc`.
initial_auth_event_ids: These will be the auth_events for the first
state event created. Each event created afterwards will be
added to the list of auth events for the next state event
created.
app_service_requester: The requester of an application service.
Returns:
@@ -174,7 +176,7 @@ class RoomBatchHandler:
assert app_service_requester.app_service
state_event_ids_at_start = []
state_event_ids = initial_state_event_ids.copy()
auth_event_ids = initial_auth_event_ids.copy()
# Make the state events float off on their own by specifying no
# prev_events for the first one in the chain so we don't have a bunch of
@@ -187,7 +189,9 @@ class RoomBatchHandler:
)
logger.debug(
"RoomBatchSendEventRestServlet inserting state_event=%s", state_event
"RoomBatchSendEventRestServlet inserting state_event=%s, auth_event_ids=%s",
state_event,
auth_event_ids,
)
event_dict = {
@@ -213,26 +217,16 @@ class RoomBatchHandler:
room_id=room_id,
action=membership,
content=event_dict["content"],
# Mark as an outlier to disconnect it from the normal DAG
# and not show up between batches of history.
outlier=True,
historical=True,
# Only the first event in the state chain should be floating.
# Only the first event in the chain should be floating.
# The rest should hang off each other in a chain.
allow_no_prev_events=index == 0,
prev_event_ids=prev_event_ids_for_state_chain,
# Since each state event is marked as an outlier, the
# `EventContext.for_outlier()` won't have any `state_ids`
# set and therefore can't derive any state even though the
# prev_events are set. Also since the first event in the
# state chain is floating with no `prev_events`, it can't
# derive state from anywhere automatically. So we need to
# set some state explicitly.
#
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same
# reference and also update in the event when we append later.
state_event_ids=state_event_ids.copy(),
auth_event_ids=auth_event_ids.copy(),
)
else:
# TODO: Add some complement tests that adds state that is not member joins
@@ -246,31 +240,21 @@ class RoomBatchHandler:
state_event["sender"], app_service_requester.app_service
),
event_dict,
# Mark as an outlier to disconnect it from the normal DAG
# and not show up between batches of history.
outlier=True,
historical=True,
# Only the first event in the state chain should be floating.
# Only the first event in the chain should be floating.
# The rest should hang off each other in a chain.
allow_no_prev_events=index == 0,
prev_event_ids=prev_event_ids_for_state_chain,
# Since each state event is marked as an outlier, the
# `EventContext.for_outlier()` won't have any `state_ids`
# set and therefore can't derive any state even though the
# prev_events are set. Also since the first event in the
# state chain is floating with no `prev_events`, it can't
# derive state from anywhere automatically. So we need to
# set some state explicitly.
#
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same
# reference and also update in the event when we append later.
state_event_ids=state_event_ids.copy(),
auth_event_ids=auth_event_ids.copy(),
)
event_id = event.event_id
state_event_ids_at_start.append(event_id)
state_event_ids.append(event_id)
auth_event_ids.append(event_id)
# Connect all the state in a floating chain
prev_event_ids_for_state_chain = [event_id]
@@ -281,7 +265,7 @@ class RoomBatchHandler:
events_to_create: List[JsonDict],
room_id: str,
inherited_depth: int,
initial_state_event_ids: List[str],
auth_event_ids: List[str],
app_service_requester: Requester,
) -> List[str]:
"""Create and persists all events provided sequentially. Handles the
@@ -297,10 +281,8 @@ class RoomBatchHandler:
room_id: Room where you want the events persisted in.
inherited_depth: The depth to create the events at (you will
probably get by calling inherit_depth_from_prev_ids(...)).
initial_state_event_ids:
This is used to set explicit state for the insertion event at
the start of the historical batch since it's floating with no
prev_events to derive state from automatically.
auth_event_ids: Define which events allow you to create the given
event in the room.
app_service_requester: The requester of an application service.
Returns:
@@ -308,11 +290,6 @@ class RoomBatchHandler:
"""
assert app_service_requester.app_service
# We expect the first event in a historical batch to be an insertion event
assert events_to_create[0]["type"] == EventTypes.MSC2716_INSERTION
# We expect the last event in a historical batch to be a batch event
assert events_to_create[-1]["type"] == EventTypes.MSC2716_BATCH
# Make the historical event chain float off on its own by specifying no
# prev_events for the first event in the chain which causes the HS to
# ask for the state at the start of the batch later.
@@ -344,16 +321,11 @@ class RoomBatchHandler:
ev["sender"], app_service_requester.app_service
),
event_dict,
# Only the first event (which is the insertion event) in the
# chain should be floating. The rest should hang off each other
# in a chain.
# Only the first event in the chain should be floating.
# The rest should hang off each other in a chain.
allow_no_prev_events=index == 0,
prev_event_ids=event_dict.get("prev_events"),
# Since the first event (which is the insertion event) in the
# chain is floating with no `prev_events`, it can't derive state
# from anywhere automatically. So we need to set some state
# explicitly.
state_event_ids=initial_state_event_ids if index == 0 else None,
auth_event_ids=auth_event_ids,
historical=True,
depth=inherited_depth,
)
@@ -371,9 +343,10 @@ class RoomBatchHandler:
)
logger.debug(
"RoomBatchSendEventRestServlet inserting event=%s, prev_event_ids=%s",
"RoomBatchSendEventRestServlet inserting event=%s, prev_event_ids=%s, auth_event_ids=%s",
event,
prev_event_ids,
auth_event_ids,
)
events_to_persist.append((event, context))
@@ -403,12 +376,12 @@ class RoomBatchHandler:
room_id: str,
batch_id_to_connect_to: str,
inherited_depth: int,
initial_state_event_ids: List[str],
auth_event_ids: List[str],
app_service_requester: Requester,
) -> Tuple[List[str], str]:
"""
Handles creating and persisting all of the historical events as well as
insertion and batch meta events to make the batch navigable in the DAG.
Handles creating and persisting all of the historical events as well
as insertion and batch meta events to make the batch navigable in the DAG.
Args:
events_to_create: List of historical events to create in JSON
@@ -418,13 +391,8 @@ class RoomBatchHandler:
want this batch to connect to.
inherited_depth: The depth to create the events at (you will
probably get by calling inherit_depth_from_prev_ids(...)).
initial_state_event_ids:
This is used to set explicit state for the insertion event at
the start of the historical batch since it's floating with no
prev_events to derive state from automatically. This should
probably be the state from the `prev_event` defined by
`/batch_send?prev_event_id=$abc` plus the outcome of
`persist_state_events_at_start`
auth_event_ids: Define which events allow you to create the given
event in the room.
app_service_requester: The requester of an application service.
Returns:
@@ -470,7 +438,7 @@ class RoomBatchHandler:
events_to_create=events_to_create,
room_id=room_id,
inherited_depth=inherited_depth,
initial_state_event_ids=initial_state_event_ids,
auth_event_ids=auth_event_ids,
app_service_requester=app_service_requester,
)
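To make the chaining logic above concrete, a toy model of how the floating state chain is planned (only the prev/auth bookkeeping is shown; event creation and persistence are stubbed out with fake IDs):

from typing import Dict, List

def plan_state_chain(
    initial_auth_event_ids: List[str], state_events: List[dict]
) -> List[Dict[str, object]]:
    auth_event_ids = list(initial_auth_event_ids)
    prev_event_ids: List[str] = []  # empty => the first event floats
    plan = []
    for index, _state_event in enumerate(state_events):
        event_id = f"$fake-{index}"  # stand-in for the real create-and-send call
        plan.append(
            {
                "event_id": event_id,
                "allow_no_prev_events": index == 0,
                "prev_event_ids": list(prev_event_ids),
                "auth_event_ids": list(auth_event_ids),
            }
        )
        # Each created event becomes an auth event for the next one, and the
        # next event chains off this one as its only prev_event.
        auth_event_ids.append(event_id)
        prev_event_ids = [event_id]
    return plan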

View File

@@ -271,7 +271,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
membership: str,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
txn_id: Optional[str] = None,
ratelimit: bool = True,
content: Optional[dict] = None,
@@ -294,14 +294,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
events should have a prev_event and we should only use this in special
cases like MSC2716.
prev_event_ids: The event IDs to use as the prev events
state_event_ids:
The full state at a given event. This is used particularly by the MSC2716
/batch_send endpoint. One use case is the historical `state_events_at_start`;
since each is marked as an `outlier`, the `EventContext.for_outlier()` won't
have any `state_ids` set and therefore can't derive any state even though the
prev_events are set so we need to set them ourselves via this argument.
This should normally be left as None, which will cause the auth_event_ids
to be calculated based on the room state at the prev_events.
auth_event_ids:
The event ids to use as the auth_events for the new event.
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
txn_id:
ratelimit:
@@ -356,7 +352,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
txn_id=txn_id,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
state_event_ids=state_event_ids,
auth_event_ids=auth_event_ids,
require_consent=require_consent,
outlier=outlier,
historical=historical,
@@ -459,7 +455,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
historical: bool = False,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
) -> Tuple[str, int]:
"""Update a user's membership in a room.
@@ -487,14 +483,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
events should have a prev_event and we should only use this in special
cases like MSC2716.
prev_event_ids: The event IDs to use as the prev events
state_event_ids:
The full state at a given event. This is used particularly by the MSC2716
/batch_send endpoint. One use case is the historical `state_events_at_start`;
since each is marked as an `outlier`, the `EventContext.for_outlier()` won't
have any `state_ids` set and therefore can't derive any state even though the
prev_events are set so we need to set them ourselves via this argument.
This should normally be left as None, which will cause the auth_event_ids
to be calculated based on the room state at the prev_events.
auth_event_ids:
The event ids to use as the auth_events for the new event.
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
Returns:
A tuple of the new event ID and stream ID.
@@ -513,24 +505,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if requester.app_service:
as_id = requester.app_service.id
then = self.clock.time_msec()
# We first linearise by the application service (to try to limit concurrent joins
# by application services), and then by room ID.
with (await self.member_as_limiter.queue(as_id)):
diff = self.clock.time_msec() - then
if diff > 80 * 1000:
# haproxy would have timed the request out anyway...
raise SynapseError(504, "took to long to process")
with (await self.member_linearizer.queue(key)):
diff = self.clock.time_msec() - then
if diff > 80 * 1000:
# haproxy would have timed the request out anyway...
raise SynapseError(504, "took to long to process")
result = await self.update_membership_locked(
requester,
target,
@@ -547,7 +525,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
historical=historical,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
state_event_ids=state_event_ids,
auth_event_ids=auth_event_ids,
)
return result
@@ -569,7 +547,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
historical: bool = False,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
state_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
) -> Tuple[str, int]:
"""Helper for update_membership.
@@ -599,14 +577,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
events should have a prev_event and we should only use this in special
cases like MSC2716.
prev_event_ids: The event IDs to use as the prev events
state_event_ids:
The full state at a given event. This is used particularly by the MSC2716
/batch_send endpoint. One use case is the historical `state_events_at_start`;
since each is marked as an `outlier`, the `EventContext.for_outlier()` won't
have any `state_ids` set and therefore can't derive any state even though the
prev_events are set so we need to set them ourselves via this argument.
This should normally be left as None, which will cause the auth_event_ids
to be calculated based on the room state at the prev_events.
auth_event_ids:
The event ids to use as the auth_events for the new event.
Should normally be left as None, which will cause them to be calculated
based on the room state at the prev_events.
Returns:
A tuple of the new event ID and stream ID.
@@ -733,7 +707,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
ratelimit=ratelimit,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
state_event_ids=state_event_ids,
auth_event_ids=auth_event_ids,
content=content,
require_consent=require_consent,
outlier=outlier,
@@ -957,7 +931,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
txn_id=txn_id,
ratelimit=ratelimit,
prev_event_ids=latest_event_ids,
state_event_ids=state_event_ids,
auth_event_ids=auth_event_ids,
content=content,
require_consent=require_consent,
outlier=outlier,
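The limiter hunk earlier in this file queues membership updates first per application service and then per (room, user) key, bailing out if the request waited longer than the proxy timeout. A generic asyncio sketch of that fail-fast pattern (the 80-second threshold comes from the hunk; the queue here is a plain lock rather than Synapse's Linearizer):

import asyncio
import time

MAX_WAIT_MS = 80 * 1000  # haproxy would have timed the request out anyway

async def do_membership_update() -> str:
    return "$event_id"

async def queued_membership_update(lock: asyncio.Lock) -> str:
    then = time.monotonic()
    async with lock:
        waited_ms = (time.monotonic() - then) * 1000
        if waited_ms > MAX_WAIT_MS:
            # The client has long since given up; don't do the work.
            raise TimeoutError("took too long to process")
        return await do_membership_update()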

View File

@@ -54,7 +54,6 @@ class SearchHandler:
self.clock = hs.get_clock()
self.hs = hs
self._event_serializer = hs.get_event_client_serializer()
self._relations_handler = hs.get_relations_handler()
self.storage = hs.get_storage()
self.state_store = self.storage.state
self.auth = hs.get_auth()
@@ -355,7 +354,7 @@ class SearchHandler:
aggregations = None
if self._msc3666_enabled:
aggregations = await self._relations_handler.get_bundled_aggregations(
aggregations = await self.store.get_bundled_aggregations(
# Generate an iterable of EventBase for all the events that will be
# returned, including contextual events.
itertools.chain(

Some files were not shown because too many files have changed in this diff.