Compare commits

...

58 Commits

Author SHA1 Message Date
Erik Johnston
e923a8db81 Get encryption state at the time 2024-08-30 15:26:16 +01:00
Erik Johnston
f78ab68fa2 Add cache 2024-08-30 14:53:08 +01:00
Erik Johnston
e76954b9ce Parameterize tests 2024-08-30 14:49:43 +01:00
Erik Johnston
82f58bf7b7 Factor out _filter_relevant_room_to_send 2024-08-30 13:58:36 +01:00
Erik Johnston
acb57ee42e Use filter_membership_for_sync 2024-08-30 13:44:52 +01:00
Erik Johnston
5d6386a3c9 Use dm_room_ids 2024-08-30 13:36:20 +01:00
Erik Johnston
6c4ad323a9 Faster have_finished_sliding_sync_background_jobs 2024-08-30 13:31:06 +01:00
Erik Johnston
2980422e9b Apply suggestions from code review
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-30 13:14:54 +01:00
Erik Johnston
bc4cb1fc41 Handle state resets in rooms 2024-08-29 19:13:16 +01:00
Erik Johnston
676754d7a7 WIP 2024-08-29 18:23:15 +01:00
Erik Johnston
a02739766e Newsfile 2024-08-29 17:23:36 +01:00
Erik Johnston
c038ff9e24 Proper join 2024-08-29 16:28:12 +01:00
Erik Johnston
86a0730f73 Add trace 2024-08-29 16:28:12 +01:00
Erik Johnston
e2c0a4b205 Use new tables 2024-08-29 16:28:12 +01:00
Erik Johnston
c9a915648f Add DB functions 2024-08-29 16:28:12 +01:00
Erik Johnston
58071bc9e5 Split out fetching of newly joined/left rooms 2024-08-29 16:27:50 +01:00
Erik Johnston
74bec29c1d Split out _rewind_current_membership_to_token function 2024-08-29 16:27:50 +01:00
Erik Johnston
2999a14aed Sliding Sync: Make PerConnectionState immutable (#17600)
This is so that we can cache it.

We also move the sliding sync types to
`synapse/types/handlers/sliding_sync.py`. This is mainly in prep for
#17599, to avoid circular imports.

The only change in behaviour is that
`RoomSyncConfig.combine_sync_config(..)` now returns a new room sync
config rather than mutating in-place.
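
A minimal sketch of that behaviour change, assuming an attrs-style frozen class (the field and combination rule here are illustrative, not Synapse's actual ones):

```python
import attr

@attr.s(auto_attribs=True, frozen=True)
class RoomSyncConfig:
    timeline_limit: int = 0

    def combine_sync_config(self, other: "RoomSyncConfig") -> "RoomSyncConfig":
        # Return a new config instead of mutating `self`, so instances
        # stay safe to cache and share.
        return RoomSyncConfig(
            timeline_limit=max(self.timeline_limit, other.timeline_limit)
        )
```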

Reviewable commit-by-commit.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-29 16:22:57 +01:00
Eric Eastwood
1a6b718f8c Sliding Sync: Pre-populate room data for quick filtering/sorting (#17512)
Pre-populate room data for quick filtering/sorting in the Sliding Sync
API

Spawning from
https://github.com/element-hq/synapse/pull/17450#discussion_r1697335578

This PR is acting as the Synapse version `N+1` step in the gradual
migration being tracked by
https://github.com/element-hq/synapse/issues/17623

Adding two new database tables:

- `sliding_sync_joined_rooms`: A table for storing metadata about rooms that
the local server is still participating in. The info here can be shared
across all users with `Membership.JOIN`. Keyed on `(room_id)` and updated when the
relevant room current state changes or a new event is sent in the room.
- `sliding_sync_membership_snapshots`: A table for storing a snapshot of
room metadata at the time of the local user's membership. Keyed on
`(room_id, user_id)` and only updated when a user's membership in a room
changes.

Also adds background updates to populate these tables with all of the
existing data.
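
As a rough sketch of the shapes described above (column names beyond the keys and `event_stream_ordering` are illustrative; the real schema lives in Synapse's delta files and has more columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    -- One row per room the local server is still participating in;
    -- shared across all locally joined users.
    CREATE TABLE sliding_sync_joined_rooms (
        room_id TEXT NOT NULL PRIMARY KEY,
        event_stream_ordering BIGINT NOT NULL, -- bumped on every new event
        room_name TEXT,                        -- example piece of current state
        room_type TEXT
    );

    -- One row per (room, user): a snapshot taken at the time of the
    -- user's membership event, updated only when membership changes.
    CREATE TABLE sliding_sync_membership_snapshots (
        room_id TEXT NOT NULL,
        user_id TEXT NOT NULL,
        membership TEXT NOT NULL,
        event_stream_ordering BIGINT NOT NULL,
        PRIMARY KEY (room_id, user_id)
    );
    """
)
```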


We want the guarantee that if a row exists in the sliding sync tables, we
are able to rely on it (accurate data). And if a row doesn't exist, we use
a fallback to get the same info until the background updates fill in the
rows or a new event comes in, triggering it to be fully inserted. This
means we need a couple of extra things in place until we bump
`SCHEMA_COMPAT_VERSION` and run the foreground update in the `N+2` part of
the gradual migration. For context on why we can't rely on the tables
without these things, see [1].

1. On start-up, block until we clear out any rows for the rooms that have
had events since the max-`stream_ordering` of the
`sliding_sync_joined_rooms` table (compared to the max-`stream_ordering` of
the `events` table). For `sliding_sync_membership_snapshots`, we can
compare to the max-`stream_ordering` of `local_current_membership` (see the
sketch after this list).
- This accounts for when someone downgrades their Synapse version and
then upgrades it again, and ensures that we don't have any
stale/out-of-date data in the
`sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` tables,
since any new events sent in rooms would have also needed to be written
to the sliding sync tables. For example, a new event needs to bump
`event_stream_ordering` in the `sliding_sync_joined_rooms` table, as does
some state change in the room (like the room name); similarly, someone's
membership changing in a room affects `sliding_sync_membership_snapshots`.
1. Add another background update that will catch up with any rows that
were just deleted from the sliding sync tables (based on the activity in
`events`/`local_current_membership`). The rooms that need
recalculating are added to the
`sliding_sync_joined_rooms_to_recalculate` table.
1. Make sure rows are fully inserted. Instead of partially inserting, we
need to check whether the row already exists and, if not, fully insert all
data.
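
A minimal sketch of the start-up clear-out from step 1, assuming the illustrative column names from the schema sketch above plus the `(room_id)`-keyed `sliding_sync_joined_rooms_to_recalculate` table from step 2:

```python
import sqlite3

def clear_stale_sliding_sync_rows(txn: sqlite3.Cursor) -> None:
    # Rooms with events newer than anything the sliding sync table has seen
    # may have been written while running an older (downgraded) Synapse.
    txn.execute(
        "SELECT COALESCE(MAX(event_stream_ordering), 0) FROM sliding_sync_joined_rooms"
    )
    (ss_max_stream_ordering,) = txn.fetchone()
    txn.execute(
        "SELECT DISTINCT room_id FROM events WHERE stream_ordering > ?",
        (ss_max_stream_ordering,),
    )
    for (room_id,) in txn.fetchall():
        # Delete the possibly-stale row and queue the room for the
        # catch-up background update (step 2) to recalculate.
        txn.execute(
            "DELETE FROM sliding_sync_joined_rooms WHERE room_id = ?", (room_id,)
        )
        txn.execute(
            "INSERT OR IGNORE INTO sliding_sync_joined_rooms_to_recalculate (room_id) VALUES (?)",
            (room_id,),
        )
```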

All of this extra functionality can be removed once the
`SCHEMA_COMPAT_VERSION` is bumped with support for the new sliding sync
tables so people can no longer downgrade (the `N+2` part of the gradual
migration).


<details>
<summary><sup>[1]</sup></summary>

For `sliding_sync_joined_rooms`, since we partially insert rows as state
comes in, we can't rely on the existence of the row for a given
`room_id`. We can't even rely on looking at whether the background
update has finished. There could still be partial rows from when someone
reverted their Synapse version after the background update finished, had
some state changes (or new rooms), then upgraded again and had more state
changes happen, leaving a partial row.

For `sliding_sync_membership_snapshots`, we insert items as a whole,
except for the `forgotten` column ~~so we can rely on rows existing and
just need to always use a fallback for the `forgotten` data. We can't
use the `forgotten` column in the table for the same reasons above about
`sliding_sync_joined_rooms`.~~ We could have an out-of-date membership
from when someone reverted their Synapse version (the same problems as
outlined for `sliding_sync_joined_rooms` above).

Discussed in an [internal
meeting](https://docs.google.com/document/d/1MnuvPkaCkT_wviSQZ6YKBjiWciCBFMd-7hxyCO-OCbQ/edit#bookmark=id.dz5x6ef4mxz7)

</details>


### TODO

 - [x] Update `stream_ordering`/`bump_stamp`
 - [x] Handle remote invites
 - [x] Handle state resets
- [x] Consider adding `sender` so we can filter `LEAVE` memberships and
distinguish them from kicks.
     - [x] We should add it to be able to tell leaves from kicks
- [x] Consider adding `tombstone` state to help address
https://github.com/element-hq/synapse/issues/17540
     - [x] We should add it as `tombstone_successor_room_id`
- [x] Consider adding `forgotten` status to avoid an extra
lookup/table-join on `room_memberships`
    - [x] We should add it
- [x] Background update to fill in values for all joined rooms and
non-join memberships
 - [x] Clean-up tables when room is deleted
 - [ ] Make sure tables are useful to our use case
- First explored in
https://github.com/element-hq/synapse/compare/erikj/ss_use_new_tables
- Also explored in
76b5a576eb
 - [x] Plan for how we can use this with a fallback
     - See the plan discussed above in the main area of the issue description
- Discussed in an [internal
meeting](https://docs.google.com/document/d/1MnuvPkaCkT_wviSQZ6YKBjiWciCBFMd-7hxyCO-OCbQ/edit#bookmark=id.dz5x6ef4mxz7)
 - [x] Plan for how we can rely on this new table without a fallback (see
the version-constant sketch after this list)
- Synapse version `N+1`: (this PR) Bump `SCHEMA_VERSION` to `87`. Add
new tables and background update to backfill all rows. Since this is a
new table, we don't have to add any `NOT VALID` constraints and validate
them when the background update completes. Read from new tables with a
fallback in cases where the rows aren't filled in yet.
- Synapse version `N+2`: Bump `SCHEMA_VERSION` to `88` and bump
`SCHEMA_COMPAT_VERSION` to `87` because we don't want people to
downgrade and miss writes while they are on an older version. Add a
foreground update to finish off the backfill so we can read from new
tables without the fallback. Application code can now rely on the new
tables being populated.
- Discussed in an [internal
meeting](https://docs.google.com/document/d/1MnuvPkaCkT_wviSQZ6YKBjiWciCBFMd-7hxyCO-OCbQ/edit#bookmark=id.hh7shg4cxdhj)
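
Expressed as the schema constants (which live in `synapse/storage/schema/__init__.py`), the `N+1`/`N+2` plan above looks roughly like this; the `N+1` compat value is an assumption:

```python
# Synapse version N+1 (this PR):
SCHEMA_VERSION = 87         # new sliding sync tables + background updates exist
SCHEMA_COMPAT_VERSION = 86  # assumed: rollback to the previous version still allowed

# Synapse version N+2 would then ship:
# SCHEMA_VERSION = 88
# SCHEMA_COMPAT_VERSION = 87  # rollback to versions that would miss writes is refused
```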




### Dev notes

```
SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial tests.storage.test_events.SlidingSyncPrePopulatedTablesTestCase

SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_USER=postgres SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial tests.storage.test_events.SlidingSyncPrePopulatedTablesTestCase
```

```
SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial tests.handlers.test_sliding_sync.FilterRoomsTestCase
```

Reference:

- [Development docs on background updates and worked examples of gradual
migrations](1dfa59b238/docs/development/database_schema.md (background-updates))
- A real example of a gradual migration:
https://github.com/matrix-org/synapse/pull/15649#discussion_r1213779514
- Adding `rooms.creator` field that needed a background update to
backfill data, https://github.com/matrix-org/synapse/pull/10697
- Adding `rooms.room_version` that needed a background update to
backfill data, https://github.com/matrix-org/synapse/pull/6729
- Adding `room_stats_state.room_type` that needed a background update to
backfill data, https://github.com/matrix-org/synapse/pull/13031
- Tables from MSC2716: `insertion_events`, `insertion_event_edges`,
`insertion_event_extremities`, `batch_events`
- `current_state_events` updated in
`synapse/storage/databases/main/events.py`

---

```
persist_event (adds to queue)
  _persist_event_batch
    _persist_events_and_state_updates (assigns `stream_ordering` to events)
      _persist_events_txn
        _store_event_txn
        _update_metadata_tables_txn
          _store_room_members_txn
        _update_current_state_txn
```

---

> Concatenated Indexes [...] (also known as multi-column, composite or
> combined index)
>
> [...] key consists of multiple columns.
>
> We can take advantage of the fact that the first index column is always
> usable for searching
>
> *-- https://use-the-index-luke.com/sql/where-clause/the-equals-operator/concatenated-keys*
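
For instance (illustrative table, not Synapse's), a composite index on `(room_id, user_id)` serves queries that filter on the leading column alone, but not on the trailing column alone:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memberships (room_id TEXT, user_id TEXT, membership TEXT)")
conn.execute("CREATE INDEX memberships_idx ON memberships (room_id, user_id)")

queries = [
    "SELECT membership FROM memberships WHERE room_id = ? AND user_id = ?",  # full key
    "SELECT membership FROM memberships WHERE room_id = ?",  # leading column: index usable
    "SELECT membership FROM memberships WHERE user_id = ?",  # trailing column only: full scan
]
for sql in queries:
    params = ("!room:x", "@user:x")[: sql.count("?")]
    for row in conn.execute("EXPLAIN QUERY PLAN " + sql, params):
        # e.g. "SEARCH memberships USING INDEX memberships_idx ..." vs "SCAN memberships"
        print(row[3])
```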

---

Dealing with `portdb` (`synapse/_scripts/synapse_port_db.py`),
https://github.com/element-hq/synapse/pull/17512#discussion_r1725998219

---

<details>
<summary>SQL queries:</summary>

Both of these are equivalent and work in both SQLite and Postgres.

Option 1:
```sql
WITH data_table (room_id, user_id, membership_event_id, membership, event_stream_ordering, {", ".join(insert_keys)}) AS (
    VALUES (
        ?, ?, ?,
        (SELECT membership FROM room_memberships WHERE event_id = ?),
        (SELECT stream_ordering FROM events WHERE event_id = ?),
        {", ".join("?" for _ in insert_values)}
    )
)
INSERT INTO sliding_sync_non_join_memberships
    (room_id, user_id, membership_event_id, membership, event_stream_ordering, {", ".join(insert_keys)})
SELECT * FROM data_table
WHERE membership != ?
ON CONFLICT (room_id, user_id)
DO UPDATE SET
    membership_event_id = EXCLUDED.membership_event_id,
    membership = EXCLUDED.membership,
    event_stream_ordering = EXCLUDED.event_stream_ordering,
    {", ".join(f"{key} = EXCLUDED.{key}" for key in insert_keys)}
```

Option 2:
```sql
INSERT INTO sliding_sync_non_join_memberships
    (room_id, user_id, membership_event_id, membership, event_stream_ordering, {", ".join(insert_keys)})
SELECT 
    column1 as room_id,
    column2 as user_id,
    column3 as membership_event_id,
    column4 as membership,
    column5 as event_stream_ordering,
    {", ".join("column" + str(i) for i in range(6, 6 + len(insert_keys)))}
FROM (
    VALUES (
        ?, ?, ?,
        (SELECT membership FROM room_memberships WHERE event_id = ?),
        (SELECT stream_ordering FROM events WHERE event_id = ?),
        {", ".join("?" for _ in insert_values)}
    )
) as v
WHERE membership != ?
ON CONFLICT (room_id, user_id)
DO UPDATE SET
    membership_event_id = EXCLUDED.membership_event_id,
    membership = EXCLUDED.membership,
    event_stream_ordering = EXCLUDED.event_stream_ordering,
    {", ".join(f"{key} = EXCLUDED.{key}" for key in insert_keys)}
```

If we don't need the `membership` condition, we could use:

```sql
INSERT INTO sliding_sync_non_join_memberships
    (room_id, membership_event_id, user_id, membership, event_stream_ordering, {", ".join(insert_keys)})
VALUES (
    ?, ?, ?,
    (SELECT membership FROM room_memberships WHERE event_id = ?),
    (SELECT stream_ordering FROM events WHERE event_id = ?),
    {", ".join("?" for _ in insert_values)}
)
ON CONFLICT (room_id, user_id)
DO UPDATE SET
    membership_event_id = EXCLUDED.membership_event_id,
    membership = EXCLUDED.membership,
    event_stream_ordering = EXCLUDED.event_stream_ordering,
    {", ".join(f"{key} = EXCLUDED.{key}" for key in insert_keys)}
```

</details>

### Pull Request Checklist

<!-- Please read
https://element-hq.github.io/synapse/latest/development/contributing_guide.html
before submitting your pull request -->

* [x] Pull request is based on the develop branch
* [x] Pull request includes a [changelog
file](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#changelog).
The entry should:
- Be a short description of your change which makes sense to users.
"Fixed a bug that prevented receiving messages from other servers."
instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
  - Use markdown where necessary, mostly for `code blocks`.
  - End with either a period (.) or an exclamation mark (!).
  - Start with a capital letter.
- Feel free to credit yourself, by adding a sentence "Contributed by
@github_username." or "Contributed by [Your Name]." to the end of the
entry.
* [x] [Code
style](https://element-hq.github.io/synapse/latest/code_style.html) is
correct
(run the
[linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))

---------

Co-authored-by: Erik Johnston <erik@matrix.org>
2024-08-29 16:09:51 +01:00
Gordan Trevis
594cd5f9fd Fix Internal Server Error for Non-Local Users in Room Actions (#17607) 2024-08-29 14:34:29 +00:00
Erik Johnston
b21134de3b Fix starting non-media repos (#17626)
Regressed in #17543.

The `max_download_size` config is not available on workers that don't
load the media repo.

Besides, we should honour the max_size param that was passed into the
function.
2024-08-29 12:26:17 +00:00
meise
a8f29c9913 docs: fix typo in saml2_config example (#17594) 2024-08-29 10:39:16 +00:00
Dirk Klimpel
9eed8cd878 fix listener docs - admin api only on main process (#17590) 2024-08-29 10:33:14 +00:00
Erik Johnston
8678516e79 Sliding sync: Always send your own receipts down (#17617)
When returning receipts in sliding sync for initial rooms we should
always include our own receipts in the room (even if they don't match
any timeline events).

Reviewable commit-by-commit.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-29 10:09:40 +01:00
Till
573c6d7e69 Use max_upload_size as the limit when following the Location header (#17543)
Otherwise we use the `expected_size` from the initial federation
request, which might be far too low.

### Pull Request Checklist

<!-- Please read
https://element-hq.github.io/synapse/latest/development/contributing_guide.html
before submitting your pull request -->

* [x] Pull request is based on the develop branch
* [x] Pull request includes a [changelog
file](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#changelog).
The entry should:
- Be a short description of your change which makes sense to users.
"Fixed a bug that prevented receiving messages from other servers."
instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
  - Use markdown where necessary, mostly for `code blocks`.
  - End with either a period (.) or an exclamation mark (!).
  - Start with a capital letter.
- Feel free to credit yourself, by adding a sentence "Contributed by
@github_username." or "Contributed by [Your Name]." to the end of the
entry.
* [x] [Code
style](https://element-hq.github.io/synapse/latest/code_style.html) is
correct
(run the
[linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))

---------

Co-authored-by: Erik Johnston <erikj@element.io>
2024-08-29 09:25:10 +02:00
Erik Johnston
689641b903 Sliding sync: factor out room list logic (#17622)
Move calculating of the room lists out of the core handler. This should
make it easier to switch things around to start using the tables in
#17512.

This is just moving code between files and methods.

Reviewable commit-by-commit
2024-08-28 18:42:19 +01:00
Krishan
e75a23a63d Fix hierarchy returning 403 when room is accessible through federation (#17194) 2024-08-28 15:45:49 +01:00
Shay
e563e4bdf3 Fix content length on federation /thumbnail responses (#17532) 2024-08-28 11:29:12 +01:00
dependabot[bot]
f4032d3e71 Bump serde from 1.0.208 to 1.0.209 (#17613) 2024-08-28 10:09:26 +01:00
eyJhb
8da16e55fe hash_password accepts stdin now (#17608)
`hash_password` now actually accepts a password from stdin. `getpass`
reads from the TTY and does NOT accept stdin in any way.

The manpage has been updated to reflect that.
2024-08-27 18:51:43 +01:00
dependabot[bot]
d9cc0faf4b Bump pyyaml from 6.0.1 to 6.0.2 (#17611) 2024-08-27 14:55:56 +01:00
dependabot[bot]
cca77af68f Bump phonenumbers from 8.13.43 to 8.13.44 (#17610) 2024-08-27 14:55:47 +01:00
dependabot[bot]
48742da536 Bump attrs from 23.2.0 to 24.2.0 (#17609) 2024-08-27 14:55:38 +01:00
dependabot[bot]
940b932405 Bump pygithub from 2.3.0 to 2.4.0 (#17612) 2024-08-27 14:55:27 +01:00
dependabot[bot]
a2b2f6d09b Bump serde_json from 1.0.125 to 1.0.127 (#17614) 2024-08-27 14:55:03 +01:00
Erik Johnston
defd4aca67 Speed up fetching latest stream positions via cache (#17606)
The idea is to engineer it so that the vast majority of the rooms can
stay in the cache, so we can just ignore them.
2024-08-27 11:03:56 +00:00
Erik Johnston
b4d95409fb Fix @tag_args for non-methods (#17604)
The decorator assumed we were always wrapping instance methods.
2024-08-27 11:47:28 +01:00
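
A toy sketch of the idea (not Synapse's actual `@tag_args` implementation): only skip the first positional argument when the wrapped callable really takes `self`:

```python
import inspect
from functools import wraps

def tag_args(func):
    argspec = inspect.getfullargspec(func)
    # Previously the first argument was unconditionally assumed to be `self`;
    # for plain functions there is no `self` to skip.
    offset = 1 if argspec.args and argspec.args[0] == "self" else 0

    @wraps(func)
    def wrapper(*args, **kwargs):
        for name, value in zip(argspec.args[offset:], args[offset:]):
            print(f"tag: {func.__name__}.{name}={value!r}")  # stand-in for set_tag()
        return func(*args, **kwargs)

    return wrapper
```
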
dependabot[bot]
f1a1c7fc53 Bump types-setuptools from 71.1.0.20240726 to 71.1.0.20240818 (#17586) 2024-08-23 09:53:14 +01:00
dependabot[bot]
cb9fa062b7 Bump sentry-sdk from 2.12.0 to 2.13.0 (#17585) 2024-08-23 09:53:06 +01:00
dependabot[bot]
74b75cfd54 Bump cryptography from 42.0.8 to 43.0.0 (#17584) 2024-08-23 09:52:53 +01:00
dependabot[bot]
87d13fd143 Bump types-jsonschema from 4.23.0.20240712 to 4.23.0.20240813 (#17583) 2024-08-23 09:52:34 +01:00
dependabot[bot]
ad2cd9aefd Bump serde_json from 1.0.124 to 1.0.125 (#17582) 2024-08-23 09:52:26 +01:00
dependabot[bot]
ad0ee53993 Bump serde from 1.0.206 to 1.0.208 (#17581) 2024-08-23 09:52:16 +01:00
Erik Johnston
92b38c1afd Sliding sync: Split up handler into its own module (#17595)
That file was getting long.

The changes are non-functional, and simply split things up into:
- the main class
- the connection store
- the extensions
- the types
2024-08-20 18:30:23 +00:00
Quentin Gliech
a8e313836d changelog: move some SSS changes into the features section 2024-08-20 15:18:13 +02:00
Quentin Gliech
7c9684b5dc 1.114.0rc1 2024-08-20 14:57:22 +02:00
Erik Johnston
f1e8d2d15a Sliding Sync: Speed up getting receipts for initial rooms (#17592)
Let's only pull out the events we care about. Note that the index isn't
necessary here, as Postgres is happy to scan the set of rooms for the
events.
2024-08-20 12:57:34 +01:00
Erik Johnston
10428046e4 Add metrics for sliding sync processing time (#17593)
This should let us see how quickly we actually process things in
practice.
2024-08-20 11:36:49 +00:00
Erik Johnston
6eb98a4f1c Sliding Sync: Handle timeline limit changes (take 2) (#17579)
This supersedes #17503; given the per-connection state is being heavily
rewritten, it felt easier to recreate the PR on top of that work.

This correctly handles the case of timeline limits going up and down.

This does not handle changes in `required_state`, but that can be done
as a separate PR.

Based on #17575.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-20 10:31:25 +01:00
Erik Johnston
950ba844f7 Sliding Sync: Batch up fetching receipts (#17589)
This is to make initial sliding sync a bit faster.
2024-08-20 10:13:26 +01:00
Erik Johnston
8b8d74d12f Sliding sync: Correctly track which read receipts we have or have not sent down. (#17575)
Add connection tracking to the receipts extension.

Based on #17574

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-19 21:16:07 +01:00
Erik Johnston
261e746281 Sliding sync: Add classes for per-connection state (#17574)
This is some prep work ahead of correctly tracking receipts, where we
will also want to track the room status in terms of the last receipt we
have sent down.

Essentially, we add two classes `PerConnectionState` and a mutable
version, and then operate on those.
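
A minimal sketch of the frozen/mutable pair (attribute names are illustrative, not the actual ones):

```python
import attr

@attr.s(auto_attribs=True, frozen=True)
class PerConnectionState:
    # Immutable snapshot of what we've already sent down this connection.
    room_configs: "frozenset[str]" = frozenset()

    def get_mutable(self) -> "MutablePerConnectionState":
        return MutablePerConnectionState(room_configs=set(self.room_configs))

@attr.s(auto_attribs=True)
class MutablePerConnectionState:
    room_configs: "set[str]"

    def copy(self) -> PerConnectionState:
        # Freeze back into an immutable snapshot once the request is done.
        return PerConnectionState(room_configs=frozenset(self.room_configs))
```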

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-19 20:09:41 +01:00
Erik Johnston
993644ded0 Fix zero length media handling (#17570)
Results in:

```
AssertionError: null
  File "synapse/http/server.py", line 332, in _async_render_wrapper
    callback_return = await self._async_render(request)
  File "synapse/http/server.py", line 544, in _async_render
    callback_return = await raw_callback_return
  File "synapse/federation/transport/server/_base.py", line 369, in new_func
    response = await func(
  File "synapse/federation/transport/server/federation.py", line 826, in on_GET
    await self.media_repo.get_local_media(
  File "synapse/media/media_repository.py", line 473, in get_local_media
    await respond_with_multipart_responder(
  File "synapse/media/_base.py", line 353, in respond_with_multipart_responder
    assert content_length is not None
```
2024-08-19 15:06:44 +01:00
Erik Johnston
a5d25bb623 Test github token before running release script (#17562)
This stops people from getting halfway through a step only for it to fail
because the GitHub token has expired (this happens to me every damn
time).
2024-08-19 14:15:36 +01:00
Erik Johnston
f162c92f2a Speed up /keys/changes (#17548)
Follow on from #17537.

This is just adding a batched lookup function (you might want to hide
whitespace in the diff).
2024-08-16 16:04:02 +01:00
Erik Johnston
9ce489be5e Add a flag to /versions about SSS support (#17571)
So that clients can check for support. Note that if the feature is only
enabled for some users, the `/versions` request must be authenticated to
pick up that SSS is enabled for the user.
2024-08-16 08:54:57 +01:00
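
A hedged sketch of how a client might probe for the flag (the endpoint path is the standard client-server one; the flag name appears in the changelog below; the homeserver URL and token are placeholders):

```python
import requests

resp = requests.get(
    "https://matrix.example.org/_matrix/client/versions",
    # Authenticate so per-user enablement of the feature is reflected.
    headers={"Authorization": "Bearer <access_token>"},
)
supports_sss = resp.json().get("unstable_features", {}).get(
    "org.matrix.simplified_msc3575", False
)
```
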
Andrew Morgan
fae75b0376 Register the media threadpool with our metrics (#17566) 2024-08-14 15:11:22 +01:00
Tulir Asokan
f77bfbfa30 Fix fetching signing keys when old_verify_keys is omitted (#17568)
`old_verify_keys` isn't marked as required in
https://spec.matrix.org/v1.11/server-server-api/#get_matrixkeyv2server
and there's no functional difference between an empty object and
omitting the object, so I don't think there's any reason Synapse should
explode when the field is omitted.
2024-08-14 14:13:56 +01:00
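
A sketch of the tolerant parsing this implies (assumed response shape, not the exact Synapse code):

```python
def get_old_verify_keys(server_keys: dict) -> dict:
    # An omitted `old_verify_keys` is functionally identical to `{}`,
    # so default to an empty mapping instead of erroring.
    return server_keys.get("old_verify_keys") or {}
```
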
108 changed files with 14107 additions and 4084 deletions


@@ -1,3 +1,57 @@
# Synapse 1.114.0rc1 (2024-08-20)
### Features
- Add a flag to `/versions`, `org.matrix.simplified_msc3575`, to indicate whether experimental sliding sync support has been enabled. ([\#17571](https://github.com/element-hq/synapse/issues/17571))
- Handle changes in `timeline_limit` in experimental sliding sync. ([\#17579](https://github.com/element-hq/synapse/issues/17579))
- Correctly track read receipts that should be sent down in experimental sliding sync. ([\#17575](https://github.com/element-hq/synapse/issues/17575), [\#17589](https://github.com/element-hq/synapse/issues/17589), [\#17592](https://github.com/element-hq/synapse/issues/17592))
### Bugfixes
- Start handlers for new media endpoints when media resource configured. ([\#17483](https://github.com/element-hq/synapse/issues/17483))
- Fix timeline ordering (using `stream_ordering` instead of topological ordering) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17510](https://github.com/element-hq/synapse/issues/17510))
- Fix experimental sliding sync implementation to remember any updates in rooms that were not sent down immediately. ([\#17535](https://github.com/element-hq/synapse/issues/17535))
- Better exclude partially stated rooms if we must await full state in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17538](https://github.com/element-hq/synapse/issues/17538))
- Handle lower-case http headers in `_Mulitpart_Parser_Protocol`. ([\#17545](https://github.com/element-hq/synapse/issues/17545))
- Fix fetching federation signing keys from servers that omit `old_verify_keys`. Contributed by @tulir @ Beeper. ([\#17568](https://github.com/element-hq/synapse/issues/17568))
- Fix bug where we would respond with an error when a remote server asked for media that had a length of 0, using the new multipart federation media endpoint. ([\#17570](https://github.com/element-hq/synapse/issues/17570))
### Improved Documentation
- Clarify default behaviour of the
[`auto_accept_invites.worker_to_run_on`](https://element-hq.github.io/synapse/develop/usage/configuration/config_documentation.html#auto-accept-invites)
option. ([\#17515](https://github.com/element-hq/synapse/issues/17515))
- Improve docstrings for profile methods. ([\#17559](https://github.com/element-hq/synapse/issues/17559))
### Internal Changes
- Add more tracing to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17514](https://github.com/element-hq/synapse/issues/17514))
- Fixup comment in sliding sync implementation. ([\#17531](https://github.com/element-hq/synapse/issues/17531))
- Replace override of deprecated method `HTTPAdapter.get_connection` with `get_connection_with_tls_context`. ([\#17536](https://github.com/element-hq/synapse/issues/17536))
- Fix performance of device lists in `/key/changes` and sliding sync. ([\#17537](https://github.com/element-hq/synapse/issues/17537), [\#17548](https://github.com/element-hq/synapse/issues/17548))
- Bump setuptools from 67.6.0 to 72.1.0. ([\#17542](https://github.com/element-hq/synapse/issues/17542))
- Add a utility function for generating random event IDs. ([\#17557](https://github.com/element-hq/synapse/issues/17557))
- Speed up responding to media requests. ([\#17558](https://github.com/element-hq/synapse/issues/17558), [\#17561](https://github.com/element-hq/synapse/issues/17561), [\#17564](https://github.com/element-hq/synapse/issues/17564), [\#17566](https://github.com/element-hq/synapse/issues/17566), [\#17567](https://github.com/element-hq/synapse/issues/17567), [\#17569](https://github.com/element-hq/synapse/issues/17569))
- Test github token before running release script steps. ([\#17562](https://github.com/element-hq/synapse/issues/17562))
- Reduce log spam of multipart files. ([\#17563](https://github.com/element-hq/synapse/issues/17563))
- Refactor per-connection state in experimental sliding sync handler. ([\#17574](https://github.com/element-hq/synapse/issues/17574))
- Add histogram metrics for sliding sync processing time. ([\#17593](https://github.com/element-hq/synapse/issues/17593))
### Updates to locked dependencies
* Bump bytes from 1.6.1 to 1.7.1. ([\#17526](https://github.com/element-hq/synapse/issues/17526))
* Bump lxml from 5.2.2 to 5.3.0. ([\#17550](https://github.com/element-hq/synapse/issues/17550))
* Bump phonenumbers from 8.13.42 to 8.13.43. ([\#17551](https://github.com/element-hq/synapse/issues/17551))
* Bump regex from 1.10.5 to 1.10.6. ([\#17527](https://github.com/element-hq/synapse/issues/17527))
* Bump sentry-sdk from 2.10.0 to 2.12.0. ([\#17553](https://github.com/element-hq/synapse/issues/17553))
* Bump serde from 1.0.204 to 1.0.206. ([\#17556](https://github.com/element-hq/synapse/issues/17556))
* Bump serde_json from 1.0.122 to 1.0.124. ([\#17555](https://github.com/element-hq/synapse/issues/17555))
* Bump sigstore/cosign-installer from 3.5.0 to 3.6.0. ([\#17549](https://github.com/element-hq/synapse/issues/17549))
* Bump types-pyyaml from 6.0.12.20240311 to 6.0.12.20240808. ([\#17552](https://github.com/element-hq/synapse/issues/17552))
* Bump types-requests from 2.31.0.20240406 to 2.32.0.20240712. ([\#17524](https://github.com/element-hq/synapse/issues/17524))
# Synapse 1.113.0 (2024-08-13)
No significant changes since 1.113.0rc1.

12
Cargo.lock generated

@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]]
name = "serde"
version = "1.0.206"
version = "1.0.209"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b3e4cd94123dd520a128bcd11e34d9e9e423e7e3e50425cb1b4b1e3549d0284"
checksum = "99fce0ffe7310761ca6bf9faf5115afbc19688edd00171d81b1bb1b116c63e09"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.206"
version = "1.0.209"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fabfb6138d2383ea8208cf98ccf69cdfb1aff4088460681d84189aa259762f97"
checksum = "a5831b979fd7b5439637af1752d535ff49f4860c0f341d1baeb6faf0f4242170"
dependencies = [
"proc-macro2",
"quote",
@@ -505,9 +505,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.124"
version = "1.0.127"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "66ad62847a56b3dba58cc891acd13884b9c61138d330c0d7b6181713d4fce38d"
checksum = "8043c06d9f82bd7271361ed64f415fe5e12a77fdb52e573e7f06a516dea329ad"
dependencies = [
"itoa",
"memchr",

1
changelog.d/17194.bugfix Normal file

@@ -0,0 +1 @@
Fix hierarchy returning 403 when room is accessible through federation. Contributed by Krishan (@kfiven).


@@ -1 +0,0 @@
Start handlers for new media endpoints when media resource configured.


@@ -1 +0,0 @@
Fix timeline ordering (using `stream_ordering` instead of topological ordering) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

1
changelog.d/17512.misc Normal file

@@ -0,0 +1 @@
Pre-populate room data used in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint for quick filtering/sorting.


@@ -1 +0,0 @@
Add more tracing to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.


@@ -1,3 +0,0 @@
Clarify default behaviour of the
[`auto_accept_invites.worker_to_run_on`](https://element-hq.github.io/synapse/develop/usage/configuration/config_documentation.html#auto-accept-invites)
option.


@@ -1 +0,0 @@
Fixup comment in sliding sync implementation.

1
changelog.d/17532.bugfix Normal file

@@ -0,0 +1 @@
Fix content-length on federation /thumbnail responses.


@@ -1 +0,0 @@
Fix experimental sliding sync implementation to remember any updates in rooms that were not sent down immediately.


@@ -1 +0,0 @@
Replace override of deprecated method `HTTPAdapter.get_connection` with `get_connection_with_tls_context`.


@@ -1 +0,0 @@
Fix performance of device lists in `/key/changes` and sliding sync.


@@ -1 +0,0 @@
Better exclude partially stated rooms if we must await full state in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.


@@ -1 +0,0 @@
Bump setuptools from 67.6.0 to 72.1.0.

1
changelog.d/17543.bugfix Normal file

@@ -0,0 +1 @@
Fix authenticated media responses using a wrong limit when following redirects over federation.


@@ -1 +0,0 @@
Handle lower-case http headers in `_Mulitpart_Parser_Protocol`.


@@ -1 +0,0 @@
Add a utility function for generating random event IDs.


@@ -1 +0,0 @@
Speed up responding to media requests.


@@ -1 +0,0 @@
Improve docstrings for profile methods.


@@ -1 +0,0 @@
Speed up responding to media requests.


@@ -1 +0,0 @@
Reduce log spam of multipart files.


@@ -1 +0,0 @@
Speed up responding to media requests.


@@ -1 +0,0 @@
Speed up responding to media requests.


@@ -1 +0,0 @@
Speed up responding to media requests.

1
changelog.d/17590.doc Normal file

@@ -0,0 +1 @@
Clarify that the admin api resource is only loaded on the main process and not workers.

1
changelog.d/17594.doc Normal file

@@ -0,0 +1 @@
Fixed typo in `saml2_config` config [example](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#saml2_config).

1
changelog.d/17595.misc Normal file

@@ -0,0 +1 @@
Refactor sliding sync class into multiple files.

1
changelog.d/17600.misc Normal file

@@ -0,0 +1 @@
Make the sliding sync `PerConnectionState` class immutable.

1
changelog.d/17604.misc Normal file

@@ -0,0 +1 @@
Add support to `@tag_args` for standalone functions.

1
changelog.d/17606.misc Normal file

@@ -0,0 +1 @@
Speed up incremental syncs in sliding sync by adding some more caching.

1
changelog.d/17607.bugfix Normal file

@@ -0,0 +1 @@
Return `400 M_BAD_JSON` upon attempting to complete various room actions with a non-local user ID and unknown room ID, rather than an internal server error.


@@ -0,0 +1 @@
Make `hash_password` accept password input from stdin.

1
changelog.d/17617.misc Normal file

@@ -0,0 +1 @@
Always return the user's own read receipts in sliding sync.

1
changelog.d/17622.misc Normal file

@@ -0,0 +1 @@
Refactor sliding sync code to move room list logic out into a separate class.

1
changelog.d/17626.bugfix Normal file

@@ -0,0 +1 @@
Fix authenticated media responses using a wrong limit when following redirects over federation.

1
changelog.d/17630.misc Normal file

@@ -0,0 +1 @@
Use new database tables for sliding sync.

6
debian/changelog vendored

@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.114.0~rc1) stable; urgency=medium
* New synapse release 1.114.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 Aug 2024 12:55:28 +0000
matrix-synapse-py3 (1.113.0) stable; urgency=medium
* New Synapse release 1.113.0.


@@ -1,10 +1,13 @@
.\" generated with Ronn-NG/v0.8.0
.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
.TH "HASH_PASSWORD" "1" "July 2021" "" ""
.\" generated with Ronn-NG/v0.10.1
.\" http://github.com/apjanke/ronn-ng/tree/0.10.1
.TH "HASH_PASSWORD" "1" "August 2024" ""
.SH "NAME"
\fBhash_password\fR \- Calculate the hash of a new password, so that passwords can be reset
.SH "SYNOPSIS"
\fBhash_password\fR [\fB\-p\fR|\fB\-\-password\fR [password]] [\fB\-c\fR|\fB\-\-config\fR \fIfile\fR]
.TS
allbox;
\fBhash_password\fR [\fB\-p\fR \fB\-\-password\fR [password]] [\fB\-c\fR \fB\-\-config\fR \fIfile\fR]
.TE
.SH "DESCRIPTION"
\fBhash_password\fR calculates the hash of a supplied password using bcrypt\.
.P
@@ -20,7 +23,7 @@ bcrypt_rounds: 17 password_config: pepper: "random hashing pepper"
.SH "OPTIONS"
.TP
\fB\-p\fR, \fB\-\-password\fR
Read the password form the command line if [password] is supplied\. If not, prompt the user and read the password form the \fBSTDIN\fR\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
Read the password from the command line if [password] is supplied, or from \fBSTDIN\fR\. If not, prompt the user and read the password from the tty prompt\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
.TP
\fB\-c\fR, \fB\-\-config\fR
Read the supplied YAML \fIfile\fR containing the options \fBbcrypt_rounds\fR and the \fBpassword_config\fR section containing the \fBpepper\fR value\.
@@ -33,7 +36,17 @@ $2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8\.X8fWFpum7SxZ9MFe
.fi
.IP "" 0
.P
Hash from the STDIN:
Hash from the stdin:
.IP "" 4
.nf
$ cat password_file | hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX\.rcuAbM8ErLoUhybG
.fi
.IP "" 0
.P
Hash from the prompt:
.IP "" 4
.nf
$ hash_password
@@ -53,6 +66,6 @@ $2b$12$CwI\.wBNr\.w3kmiUlV3T5s\.GT2wH7uebDCovDrCOh18dFedlANK99O
.fi
.IP "" 0
.SH "COPYRIGHT"
This man page was written by Rahul De <\fI\%mailto:rahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
This man page was written by Rahul De «rahulde@swecha\.net» for Debian GNU/Linux distribution\.
.SH "SEE ALSO"
synctl(1), synapse_port_db(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

182
debian/hash_password.1.html vendored Normal file

@@ -0,0 +1,182 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv='content-type' content='text/html;charset=utf-8'>
<meta name='generator' content='Ronn-NG/v0.10.1 (http://github.com/apjanke/ronn-ng/tree/0.10.1)'>
<title>hash_password(1) - Calculate the hash of a new password, so that passwords can be reset</title>
<style type='text/css' media='all'>
/* style: man */
body#manpage {margin:0}
.mp {max-width:100ex;padding:0 9ex 1ex 4ex}
.mp p,.mp pre,.mp ul,.mp ol,.mp dl {margin:0 0 20px 0}
.mp h2 {margin:10px 0 0 0}
.mp > p,.mp > pre,.mp > ul,.mp > ol,.mp > dl {margin-left:8ex}
.mp h3 {margin:0 0 0 4ex}
.mp dt {margin:0;clear:left}
.mp dt.flush {float:left;width:8ex}
.mp dd {margin:0 0 0 9ex}
.mp h1,.mp h2,.mp h3,.mp h4 {clear:left}
.mp pre {margin-bottom:20px}
.mp pre+h2,.mp pre+h3 {margin-top:22px}
.mp h2+pre,.mp h3+pre {margin-top:5px}
.mp img {display:block;margin:auto}
.mp h1.man-title {display:none}
.mp,.mp code,.mp pre,.mp tt,.mp kbd,.mp samp,.mp h3,.mp h4 {font-family:monospace;font-size:14px;line-height:1.42857142857143}
.mp h2 {font-size:16px;line-height:1.25}
.mp h1 {font-size:20px;line-height:2}
.mp {text-align:justify;background:#fff}
.mp,.mp code,.mp pre,.mp pre code,.mp tt,.mp kbd,.mp samp {color:#131211}
.mp h1,.mp h2,.mp h3,.mp h4 {color:#030201}
.mp u {text-decoration:underline}
.mp code,.mp strong,.mp b {font-weight:bold;color:#131211}
.mp em,.mp var {font-style:italic;color:#232221;text-decoration:none}
.mp a,.mp a:link,.mp a:hover,.mp a code,.mp a pre,.mp a tt,.mp a kbd,.mp a samp {color:#0000ff}
.mp b.man-ref {font-weight:normal;color:#434241}
.mp pre {padding:0 4ex}
.mp pre code {font-weight:normal;color:#434241}
.mp h2+pre,h3+pre {padding-left:0}
ol.man-decor,ol.man-decor li {margin:3px 0 10px 0;padding:0;float:left;width:33%;list-style-type:none;text-transform:uppercase;color:#999;letter-spacing:1px}
ol.man-decor {width:100%}
ol.man-decor li.tl {text-align:left}
ol.man-decor li.tc {text-align:center;letter-spacing:4px}
ol.man-decor li.tr {text-align:right;float:right}
</style>
</head>
<!--
The following styles are deprecated and will be removed at some point:
div#man, div#man ol.man, div#man ol.head, div#man ol.man.
The .man-page, .man-decor, .man-head, .man-foot, .man-title, and
.man-navigation should be used instead.
-->
<body id='manpage'>
<div class='mp' id='man'>
<div class='man-navigation' style='display:none'>
<a href="#NAME">NAME</a>
<a href="#SYNOPSIS">SYNOPSIS</a>
<a href="#DESCRIPTION">DESCRIPTION</a>
<a href="#FILES">FILES</a>
<a href="#OPTIONS">OPTIONS</a>
<a href="#EXAMPLES">EXAMPLES</a>
<a href="#COPYRIGHT">COPYRIGHT</a>
<a href="#SEE-ALSO">SEE ALSO</a>
</div>
<ol class='man-decor man-head man head'>
<li class='tl'>hash_password(1)</li>
<li class='tc'></li>
<li class='tr'>hash_password(1)</li>
</ol>
<h2 id="NAME">NAME</h2>
<p class="man-name">
<code>hash_password</code> - <span class="man-whatis">Calculate the hash of a new password, so that passwords can be reset</span>
</p>
<h2 id="SYNOPSIS">SYNOPSIS</h2>
<table>
<tbody>
<tr>
<td>
<code>hash_password</code> [<code>-p</code>
</td>
<td>
<code>--password</code> [password]] [<code>-c</code>
</td>
<td>
<code>--config</code> <var>file</var>]</td>
</tr>
</tbody>
</table>
<h2 id="DESCRIPTION">DESCRIPTION</h2>
<p><strong>hash_password</strong> calculates the hash of a supplied password using bcrypt.</p>
<p><code>hash_password</code> takes a password as a parameter either on the command line
or from <code>STDIN</code> if not supplied.</p>
<p>It accepts a YAML file which can be used to specify parameters like the
number of rounds for bcrypt and a password_config section containing the pepper
value used for the hashing. By default <code>bcrypt_rounds</code> is set to <strong>12</strong>.</p>
<p>The hashed password is written to <code>STDOUT</code>.</p>
<h2 id="FILES">FILES</h2>
<p>A sample YAML file accepted by <code>hash_password</code> is described below:</p>
<p>bcrypt_rounds: 17
password_config:
pepper: "random hashing pepper"</p>
<h2 id="OPTIONS">OPTIONS</h2>
<dl>
<dt>
<code>-p</code>, <code>--password</code>
</dt>
<dd>Read the password from the command line if [password] is supplied, or from <code>STDIN</code>.
If not, prompt the user and read the password from the TTY prompt.
It is not recommended to type the password on the command line
directly. Use STDIN instead.</dd>
<dt>
<code>-c</code>, <code>--config</code>
</dt>
<dd>Read the supplied YAML <var>file</var> containing the options <code>bcrypt_rounds</code>
and the <code>password_config</code> section containing the <code>pepper</code> value.</dd>
</dl>
<h2 id="EXAMPLES">EXAMPLES</h2>
<p>Hash from the command line:</p>
<pre><code>$ hash_password -p "p@ssw0rd"
$2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe
</code></pre>
<p>Hash from the stdin:</p>
<pre><code>$ cat password_file | hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
</code></pre>
<p>Hash from the prompt:</p>
<pre><code>$ hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
</code></pre>
<p>Using a config file:</p>
<pre><code>$ hash_password -c config.yml
Password:
Confirm password:
$2b$12$CwI.wBNr.w3kmiUlV3T5s.GT2wH7uebDCovDrCOh18dFedlANK99O
</code></pre>
<h2 id="COPYRIGHT">COPYRIGHT</h2>
<p>This man page was written by Rahul De «rahulde@swecha.net»
for Debian GNU/Linux distribution.</p>
<h2 id="SEE-ALSO">SEE ALSO</h2>
<p><span class="man-ref">synctl<span class="s">(1)</span></span>, <span class="man-ref">synapse_port_db<span class="s">(1)</span></span>, <span class="man-ref">register_new_matrix_user<span class="s">(1)</span></span>, <span class="man-ref">synapse_review_recent_signups<span class="s">(1)</span></span></p>
<ol class='man-decor man-foot man foot'>
<li class='tl'></li>
<li class='tc'>August 2024</li>
<li class='tr'>hash_password(1)</li>
</ol>
</div>
</body>
</html>


@@ -29,8 +29,8 @@ A sample YAML file accepted by `hash_password` is described below:
## OPTIONS
* `-p`, `--password`:
Read the password form the command line if [password] is supplied.
If not, prompt the user and read the password form the `STDIN`.
Read the password from the command line if [password] is supplied, or from `STDIN`.
If not, prompt the user and read the password from the tty prompt.
It is not recommended to type the password on the command line
directly. Use the STDIN instead.
@@ -45,7 +45,14 @@ Hash from the command line:
$ hash_password -p "p@ssw0rd"
$2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe
Hash from the STDIN:
Hash from the stdin:
$ cat password_file | hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
Hash from the prompt:
$ hash_password
Password:


@@ -509,7 +509,8 @@ Unix socket support (_Added in Synapse 1.89.0_):
Valid resource names are:
* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies `media` and `static`.
* `client`: the client-server API (/_matrix/client). Also implies `media` and `static`.
If configuring the main process, the Synapse Admin API (/_synapse/admin) is also implied.
* `consent`: user consent forms (/_matrix/consent). See [here](../../consent_tracking.md) for more.
@@ -1765,7 +1766,7 @@ rc_3pid_validation:
This option sets ratelimiting how often invites can be sent in a room or to a
specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10`,
`per_user` defaults to `per_second: 0.003`, `burst_count: 5`, and `per_issuer`
`per_user` defaults to `per_second: 0.003`, `burst_count: 5`, and `per_issuer`
defaults to `per_second: 0.3`, `burst_count: 10`.
Client requests that invite user(s) when [creating a
@@ -1966,7 +1967,7 @@ max_image_pixels: 35M
---
### `remote_media_download_burst_count`
Remote media downloads are ratelimited using a [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket), where a given "bucket" is keyed to the IP address of the requester when requesting remote media downloads. This configuration option sets the size of the bucket against which the size in bytes of downloads are penalized - if the bucket is full, ie a given number of bytes have already been downloaded, further downloads will be denied until the bucket drains. Defaults to 500MiB. See also `remote_media_download_per_second` which determines the rate at which the "bucket" is emptied and thus has available space to authorize new requests.
Remote media downloads are ratelimited using a [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket), where a given "bucket" is keyed to the IP address of the requester when requesting remote media downloads. This configuration option sets the size of the bucket against which the size in bytes of downloads are penalized - if the bucket is full, ie a given number of bytes have already been downloaded, further downloads will be denied until the bucket drains. Defaults to 500MiB. See also `remote_media_download_per_second` which determines the rate at which the "bucket" is emptied and thus has available space to authorize new requests.
Example configuration:
```yaml
@@ -3302,8 +3303,8 @@ saml2_config:
contact_person:
- given_name: Bob
sur_name: "the Sysadmin"
email_address": ["admin@example.com"]
contact_type": technical
email_address: ["admin@example.com"]
contact_type: technical
saml_session_lifetime: 5m

211
poetry.lock generated

@@ -16,22 +16,22 @@ typing-extensions = {version = ">=4.0.0", markers = "python_version < \"3.9\""}
[[package]]
name = "attrs"
version = "23.2.0"
version = "24.2.0"
description = "Classes Without Boilerplate"
optional = false
python-versions = ">=3.7"
files = [
{file = "attrs-23.2.0-py3-none-any.whl", hash = "sha256:99b87a485a5820b23b879f04c2305b44b951b502fd64be915879d77a7e8fc6f1"},
{file = "attrs-23.2.0.tar.gz", hash = "sha256:935dc3b529c262f6cf76e50877d35a4bd3c1de194fd41f47a2b7ae8f19971f30"},
{file = "attrs-24.2.0-py3-none-any.whl", hash = "sha256:81921eb96de3191c8258c199618104dd27ac608d9366f5e35d011eae1867ede2"},
{file = "attrs-24.2.0.tar.gz", hash = "sha256:5cfb1b9148b5b086569baec03f20d7b6bf3bcacc9a42bebf87ffaaca362f6346"},
]
[package.extras]
cov = ["attrs[tests]", "coverage[toml] (>=5.3)"]
dev = ["attrs[tests]", "pre-commit"]
docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope-interface"]
tests = ["attrs[tests-no-zope]", "zope-interface"]
tests-mypy = ["mypy (>=1.6)", "pytest-mypy-plugins"]
tests-no-zope = ["attrs[tests-mypy]", "cloudpickle", "hypothesis", "pympler", "pytest (>=4.3.0)", "pytest-xdist[psutil]"]
benchmark = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-codspeed", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
cov = ["cloudpickle", "coverage[toml] (>=5.3)", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
dev = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pre-commit", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
docs = ["cogapp", "furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier (<24.7)"]
tests = ["cloudpickle", "hypothesis", "mypy (>=1.11.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"]
tests-mypy = ["mypy (>=1.11.1)", "pytest-mypy-plugins"]
[[package]]
name = "authlib"
@@ -403,43 +403,38 @@ files = [
[[package]]
name = "cryptography"
version = "42.0.8"
version = "43.0.0"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = ">=3.7"
files = [
{file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:81d8a521705787afe7a18d5bfb47ea9d9cc068206270aad0b96a725022e18d2e"},
{file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:961e61cefdcb06e0c6d7e3a1b22ebe8b996eb2bf50614e89384be54c48c6b63d"},
{file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e3ec3672626e1b9e55afd0df6d774ff0e953452886e06e0f1eb7eb0c832e8902"},
{file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e599b53fd95357d92304510fb7bda8523ed1f79ca98dce2f43c115950aa78801"},
{file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5226d5d21ab681f432a9c1cf8b658c0cb02533eece706b155e5fbd8a0cdd3949"},
{file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:6b7c4f03ce01afd3b76cf69a5455caa9cfa3de8c8f493e0d3ab7d20611c8dae9"},
{file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:2346b911eb349ab547076f47f2e035fc8ff2c02380a7cbbf8d87114fa0f1c583"},
{file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:ad803773e9df0b92e0a817d22fd8a3675493f690b96130a5e24f1b8fabbea9c7"},
{file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2f66d9cd9147ee495a8374a45ca445819f8929a3efcd2e3df6428e46c3cbb10b"},
{file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d45b940883a03e19e944456a558b67a41160e367a719833c53de6911cabba2b7"},
{file = "cryptography-42.0.8-cp37-abi3-win32.whl", hash = "sha256:a0c5b2b0585b6af82d7e385f55a8bc568abff8923af147ee3c07bd8b42cda8b2"},
{file = "cryptography-42.0.8-cp37-abi3-win_amd64.whl", hash = "sha256:57080dee41209e556a9a4ce60d229244f7a66ef52750f813bfbe18959770cfba"},
{file = "cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:dea567d1b0e8bc5764b9443858b673b734100c2871dc93163f58c46a97a83d28"},
{file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4783183f7cb757b73b2ae9aed6599b96338eb957233c58ca8f49a49cc32fd5e"},
{file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0608251135d0e03111152e41f0cc2392d1e74e35703960d4190b2e0f4ca9c70"},
{file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dc0fdf6787f37b1c6b08e6dfc892d9d068b5bdb671198c72072828b80bd5fe4c"},
{file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9c0c1716c8447ee7dbf08d6db2e5c41c688544c61074b54fc4564196f55c25a7"},
{file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:fff12c88a672ab9c9c1cf7b0c80e3ad9e2ebd9d828d955c126be4fd3e5578c9e"},
{file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:cafb92b2bc622cd1aa6a1dce4b93307792633f4c5fe1f46c6b97cf67073ec961"},
{file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:31f721658a29331f895a5a54e7e82075554ccfb8b163a18719d342f5ffe5ecb1"},
{file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b297f90c5723d04bcc8265fc2a0f86d4ea2e0f7ab4b6994459548d3a6b992a14"},
{file = "cryptography-42.0.8-cp39-abi3-win32.whl", hash = "sha256:2f88d197e66c65be5e42cd72e5c18afbfae3f741742070e3019ac8f4ac57262c"},
{file = "cryptography-42.0.8-cp39-abi3-win_amd64.whl", hash = "sha256:fa76fbb7596cc5839320000cdd5d0955313696d9511debab7ee7278fc8b5c84a"},
{file = "cryptography-42.0.8-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:ba4f0a211697362e89ad822e667d8d340b4d8d55fae72cdd619389fb5912eefe"},
{file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:81884c4d096c272f00aeb1f11cf62ccd39763581645b0812e99a91505fa48e0c"},
{file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c9bb2ae11bfbab395bdd072985abde58ea9860ed84e59dbc0463a5d0159f5b71"},
{file = "cryptography-42.0.8-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:7016f837e15b0a1c119d27ecd89b3515f01f90a8615ed5e9427e30d9cdbfed3d"},
{file = "cryptography-42.0.8-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5a94eccb2a81a309806027e1670a358b99b8fe8bfe9f8d329f27d72c094dde8c"},
{file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:dec9b018df185f08483f294cae6ccac29e7a6e0678996587363dc352dc65c842"},
{file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:343728aac38decfdeecf55ecab3264b015be68fc2816ca800db649607aeee648"},
{file = "cryptography-42.0.8-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:013629ae70b40af70c9a7a5db40abe5d9054e6f4380e50ce769947b73bf3caad"},
{file = "cryptography-42.0.8.tar.gz", hash = "sha256:8d09d05439ce7baa8e9e95b07ec5b6c886f548deb7e0f69ef25f64b3bce842f2"},
{file = "cryptography-43.0.0-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:64c3f16e2a4fc51c0d06af28441881f98c5d91009b8caaff40cf3548089e9c74"},
{file = "cryptography-43.0.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3dcdedae5c7710b9f97ac6bba7e1052b95c7083c9d0e9df96e02a1932e777895"},
{file = "cryptography-43.0.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d9a1eca329405219b605fac09ecfc09ac09e595d6def650a437523fcd08dd22"},
{file = "cryptography-43.0.0-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ea9e57f8ea880eeea38ab5abf9fbe39f923544d7884228ec67d666abd60f5a47"},
{file = "cryptography-43.0.0-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9a8d6802e0825767476f62aafed40532bd435e8a5f7d23bd8b4f5fd04cc80ecf"},
{file = "cryptography-43.0.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:cc70b4b581f28d0a254d006f26949245e3657d40d8857066c2ae22a61222ef55"},
{file = "cryptography-43.0.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:4a997df8c1c2aae1e1e5ac49c2e4f610ad037fc5a3aadc7b64e39dea42249431"},
{file = "cryptography-43.0.0-cp37-abi3-win32.whl", hash = "sha256:6e2b11c55d260d03a8cf29ac9b5e0608d35f08077d8c087be96287f43af3ccdc"},
{file = "cryptography-43.0.0-cp37-abi3-win_amd64.whl", hash = "sha256:31e44a986ceccec3d0498e16f3d27b2ee5fdf69ce2ab89b52eaad1d2f33d8778"},
{file = "cryptography-43.0.0-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:7b3f5fe74a5ca32d4d0f302ffe6680fcc5c28f8ef0dc0ae8f40c0f3a1b4fca66"},
{file = "cryptography-43.0.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ac1955ce000cb29ab40def14fd1bbfa7af2017cca696ee696925615cafd0dce5"},
{file = "cryptography-43.0.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:299d3da8e00b7e2b54bb02ef58d73cd5f55fb31f33ebbf33bd00d9aa6807df7e"},
{file = "cryptography-43.0.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ee0c405832ade84d4de74b9029bedb7b31200600fa524d218fc29bfa371e97f5"},
{file = "cryptography-43.0.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:cb013933d4c127349b3948aa8aaf2f12c0353ad0eccd715ca789c8a0f671646f"},
{file = "cryptography-43.0.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:fdcb265de28585de5b859ae13e3846a8e805268a823a12a4da2597f1f5afc9f0"},
{file = "cryptography-43.0.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:2905ccf93a8a2a416f3ec01b1a7911c3fe4073ef35640e7ee5296754e30b762b"},
{file = "cryptography-43.0.0-cp39-abi3-win32.whl", hash = "sha256:47ca71115e545954e6c1d207dd13461ab81f4eccfcb1345eac874828b5e3eaaf"},
{file = "cryptography-43.0.0-cp39-abi3-win_amd64.whl", hash = "sha256:0663585d02f76929792470451a5ba64424acc3cd5227b03921dab0e2f27b1709"},
{file = "cryptography-43.0.0-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:2c6d112bf61c5ef44042c253e4859b3cbbb50df2f78fa8fae6747a7814484a70"},
{file = "cryptography-43.0.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:844b6d608374e7d08f4f6e6f9f7b951f9256db41421917dfb2d003dde4cd6b66"},
{file = "cryptography-43.0.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:51956cf8730665e2bdf8ddb8da0056f699c1a5715648c1b0144670c1ba00b48f"},
{file = "cryptography-43.0.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:aae4d918f6b180a8ab8bf6511a419473d107df4dbb4225c7b48c5c9602c38c7f"},
{file = "cryptography-43.0.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:232ce02943a579095a339ac4b390fbbe97f5b5d5d107f8a08260ea2768be8cc2"},
{file = "cryptography-43.0.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5bcb8a5620008a8034d39bce21dc3e23735dfdb6a33a06974739bfa04f853947"},
{file = "cryptography-43.0.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:08a24a7070b2b6804c1940ff0f910ff728932a9d0e80e7814234269f9d46d069"},
{file = "cryptography-43.0.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:e9c5266c432a1e23738d178e51c2c7a5e2ddf790f248be939448c0ba2021f9d1"},
{file = "cryptography-43.0.0.tar.gz", hash = "sha256:b88075ada2d51aa9f18283532c9f60e72170041bba88d7f37e49cbb10275299e"},
]
[package.dependencies]
@@ -452,7 +447,7 @@ nox = ["nox"]
pep8test = ["check-sdist", "click", "mypy", "ruff"]
sdist = ["build"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
test = ["certifi", "cryptography-vectors (==43.0.0)", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
test-randomorder = ["pytest-randomly"]
[[package]]
@@ -1512,13 +1507,13 @@ files = [
[[package]]
name = "phonenumbers"
version = "8.13.43"
version = "8.13.44"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.43-py2.py3-none-any.whl", hash = "sha256:339e521403fe4dd9c664dbbeb2fe434f9ea5c81e54c0fdfadbaeb53b26a76c27"},
{file = "phonenumbers-8.13.43.tar.gz", hash = "sha256:35b904e4a79226eee027fbb467a9aa6f1ab9ffc3c09c91bf14b885c154936726"},
{file = "phonenumbers-8.13.44-py2.py3-none-any.whl", hash = "sha256:52cd02865dab1428ca9e89d442629b61d407c7dc687cfb80a3e8d068a584513c"},
{file = "phonenumbers-8.13.44.tar.gz", hash = "sha256:2175021e84ee4e41b43c890f2d0af51f18c6ca9ad525886d6d6e4ea882e46fac"},
]
[[package]]
@@ -1882,13 +1877,13 @@ typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0"
[[package]]
name = "pygithub"
version = "2.3.0"
version = "2.4.0"
description = "Use the full Github API v3"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "PyGithub-2.3.0-py3-none-any.whl", hash = "sha256:65b499728be3ce7b0cd2cd760da3b32f0f4d7bc55e5e0677617f90f6564e793e"},
{file = "PyGithub-2.3.0.tar.gz", hash = "sha256:0148d7347a1cdeed99af905077010aef81a4dad988b0ba51d4108bf66b443f7e"},
{file = "PyGithub-2.4.0-py3-none-any.whl", hash = "sha256:81935aa4bdc939fba98fee1cb47422c09157c56a27966476ff92775602b9ee24"},
{file = "pygithub-2.4.0.tar.gz", hash = "sha256:6601e22627e87bac192f1e2e39c6e6f69a43152cfb8f307cee575879320b3051"},
]
[package.dependencies]
@@ -2089,51 +2084,64 @@ files = [
[[package]]
name = "pyyaml"
version = "6.0.1"
version = "6.0.2"
description = "YAML parser and emitter for Python"
optional = false
python-versions = ">=3.6"
python-versions = ">=3.8"
files = [
{file = "PyYAML-6.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d858aa552c999bc8a8d57426ed01e40bef403cd8ccdd0fc5f6f04a00414cac2a"},
{file = "PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd66fc5d0da6d9815ba2cebeb4205f95818ff4b79c3ebe268e75d961704af52f"},
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"},
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"},
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"},
{file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"},
{file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"},
{file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"},
{file = "PyYAML-6.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f003ed9ad21d6a4713f0a9b5a7a0a79e08dd0f221aff4525a2be4c346ee60aab"},
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"},
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"},
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"},
{file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"},
{file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
{file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"},
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"},
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"},
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afd7e57eddb1a54f0f1a974bc4391af8bcce0b444685d936840f125cf046d5bd"},
{file = "PyYAML-6.0.1-cp36-cp36m-win32.whl", hash = "sha256:fca0e3a251908a499833aa292323f32437106001d436eca0e6e7833256674585"},
{file = "PyYAML-6.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:f22ac1c3cac4dbc50079e965eba2c1058622631e526bd9afd45fedd49ba781fa"},
{file = "PyYAML-6.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b1275ad35a5d18c62a7220633c913e1b42d44b46ee12554e5fd39c70a243d6a3"},
{file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18aeb1bf9a78867dc38b259769503436b7c72f7a1f1f4c93ff9a17de54319b27"},
{file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:596106435fa6ad000c2991a98fa58eeb8656ef2325d7e158344fb33864ed87e3"},
{file = "PyYAML-6.0.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:baa90d3f661d43131ca170712d903e6295d1f7a0f595074f151c0aed377c9b9c"},
{file = "PyYAML-6.0.1-cp37-cp37m-win32.whl", hash = "sha256:9046c58c4395dff28dd494285c82ba00b546adfc7ef001486fbf0324bc174fba"},
{file = "PyYAML-6.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:4fb147e7a67ef577a588a0e2c17b6db51dda102c71de36f8549b6816a96e1867"},
{file = "PyYAML-6.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d4c7e777c441b20e32f52bd377e0c409713e8bb1386e1099c2415f26e479595"},
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"},
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"},
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"},
{file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"},
{file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"},
{file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"},
{file = "PyYAML-6.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c8098ddcc2a85b61647b2590f825f3db38891662cfc2fc776415143f599bb859"},
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"},
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"},
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"},
{file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"},
{file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"},
{file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"},
{file = "PyYAML-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0a9a2848a5b7feac301353437eb7d5957887edbf81d56e903999a75a3d743086"},
{file = "PyYAML-6.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:29717114e51c84ddfba879543fb232a6ed60086602313ca38cce623c1d62cfbf"},
{file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8824b5a04a04a047e72eea5cec3bc266db09e35de6bdfe34c9436ac5ee27d237"},
{file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7c36280e6fb8385e520936c3cb3b8042851904eba0e58d277dca80a5cfed590b"},
{file = "PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec031d5d2feb36d1d1a24380e4db6d43695f3748343d99434e6f5f9156aaa2ed"},
{file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:936d68689298c36b53b29f23c6dbb74de12b4ac12ca6cfe0e047bedceea56180"},
{file = "PyYAML-6.0.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:23502f431948090f597378482b4812b0caae32c22213aecf3b55325e049a6c68"},
{file = "PyYAML-6.0.2-cp310-cp310-win32.whl", hash = "sha256:2e99c6826ffa974fe6e27cdb5ed0021786b03fc98e5ee3c5bfe1fd5015f42b99"},
{file = "PyYAML-6.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:a4d3091415f010369ae4ed1fc6b79def9416358877534caf6a0fdd2146c87a3e"},
{file = "PyYAML-6.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:cc1c1159b3d456576af7a3e4d1ba7e6924cb39de8f67111c735f6fc832082774"},
{file = "PyYAML-6.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1e2120ef853f59c7419231f3bf4e7021f1b936f6ebd222406c3b60212205d2ee"},
{file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5d225db5a45f21e78dd9358e58a98702a0302f2659a3c6cd320564b75b86f47c"},
{file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ac9328ec4831237bec75defaf839f7d4564be1e6b25ac710bd1a96321cc8317"},
{file = "PyYAML-6.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ad2a3decf9aaba3d29c8f537ac4b243e36bef957511b4766cb0057d32b0be85"},
{file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ff3824dc5261f50c9b0dfb3be22b4567a6f938ccce4587b38952d85fd9e9afe4"},
{file = "PyYAML-6.0.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:797b4f722ffa07cc8d62053e4cff1486fa6dc094105d13fea7b1de7d8bf71c9e"},
{file = "PyYAML-6.0.2-cp311-cp311-win32.whl", hash = "sha256:11d8f3dd2b9c1207dcaf2ee0bbbfd5991f571186ec9cc78427ba5bd32afae4b5"},
{file = "PyYAML-6.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:e10ce637b18caea04431ce14fabcf5c64a1c61ec9c56b071a4b7ca131ca52d44"},
{file = "PyYAML-6.0.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab"},
{file = "PyYAML-6.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725"},
{file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5"},
{file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425"},
{file = "PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476"},
{file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48"},
{file = "PyYAML-6.0.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b"},
{file = "PyYAML-6.0.2-cp312-cp312-win32.whl", hash = "sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4"},
{file = "PyYAML-6.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8"},
{file = "PyYAML-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba"},
{file = "PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1"},
{file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133"},
{file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484"},
{file = "PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5"},
{file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc"},
{file = "PyYAML-6.0.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652"},
{file = "PyYAML-6.0.2-cp313-cp313-win32.whl", hash = "sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183"},
{file = "PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563"},
{file = "PyYAML-6.0.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:24471b829b3bf607e04e88d79542a9d48bb037c2267d7927a874e6c205ca7e9a"},
{file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7fded462629cfa4b685c5416b949ebad6cec74af5e2d42905d41e257e0869f5"},
{file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d84a1718ee396f54f3a086ea0a66d8e552b2ab2017ef8b420e92edbc841c352d"},
{file = "PyYAML-6.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9056c1ecd25795207ad294bcf39f2db3d845767be0ea6e6a34d856f006006083"},
{file = "PyYAML-6.0.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:82d09873e40955485746739bcb8b4586983670466c23382c19cffecbf1fd8706"},
{file = "PyYAML-6.0.2-cp38-cp38-win32.whl", hash = "sha256:43fa96a3ca0d6b1812e01ced1044a003533c47f6ee8aca31724f78e93ccc089a"},
{file = "PyYAML-6.0.2-cp38-cp38-win_amd64.whl", hash = "sha256:01179a4a8559ab5de078078f37e5c1a30d76bb88519906844fd7bdea1b7729ff"},
{file = "PyYAML-6.0.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:688ba32a1cffef67fd2e9398a2efebaea461578b0923624778664cc1c914db5d"},
{file = "PyYAML-6.0.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a8786accb172bd8afb8be14490a16625cbc387036876ab6ba70912730faf8e1f"},
{file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d8e03406cac8513435335dbab54c0d385e4a49e4945d2909a581c83647ca0290"},
{file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f753120cb8181e736c57ef7636e83f31b9c0d1722c516f7e86cf15b7aa57ff12"},
{file = "PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3b1fdb9dc17f5a7677423d508ab4f243a726dea51fa5e70992e59a7411c89d19"},
{file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0b69e4ce7a131fe56b7e4d770c67429700908fc0752af059838b1cfb41960e4e"},
{file = "PyYAML-6.0.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a9f8c2e67970f13b16084e04f134610fd1d374bf477b17ec1599185cf611d725"},
{file = "PyYAML-6.0.2-cp39-cp39-win32.whl", hash = "sha256:6395c297d42274772abc367baaa79683958044e5d3835486c16da75d2a694631"},
{file = "PyYAML-6.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:39693e1f8320ae4f43943590b49779ffb98acb81f788220ea932a6b6c51004d8"},
{file = "pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e"},
]
[[package]]
@@ -2403,13 +2411,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
[[package]]
name = "sentry-sdk"
version = "2.12.0"
version = "2.13.0"
description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = ">=3.6"
files = [
{file = "sentry_sdk-2.12.0-py2.py3-none-any.whl", hash = "sha256:7a8d5163d2ba5c5f4464628c6b68f85e86972f7c636acc78aed45c61b98b7a5e"},
{file = "sentry_sdk-2.12.0.tar.gz", hash = "sha256:8763840497b817d44c49b3fe3f5f7388d083f2337ffedf008b2cdb63b5c86dc6"},
{file = "sentry_sdk-2.13.0-py2.py3-none-any.whl", hash = "sha256:6beede8fc2ab4043da7f69d95534e320944690680dd9a963178a49de71d726c6"},
{file = "sentry_sdk-2.13.0.tar.gz", hash = "sha256:8d4a576f7a98eb2fdb40e13106e41f330e5c79d72a68be1316e7852cf4995260"},
]
[package.dependencies]
@@ -2436,6 +2444,7 @@ httpx = ["httpx (>=0.16.0)"]
huey = ["huey (>=2)"]
huggingface-hub = ["huggingface-hub (>=0.22)"]
langchain = ["langchain (>=0.0.210)"]
litestar = ["litestar (>=2.0.0)"]
loguru = ["loguru (>=0.5)"]
openai = ["openai (>=1.0.0)", "tiktoken (>=0.3.0)"]
opentelemetry = ["opentelemetry-distro (>=0.35b0)"]
@@ -2802,13 +2811,13 @@ files = [
[[package]]
name = "types-jsonschema"
version = "4.23.0.20240712"
version = "4.23.0.20240813"
description = "Typing stubs for jsonschema"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-jsonschema-4.23.0.20240712.tar.gz", hash = "sha256:b20db728dcf7ea3e80e9bdeb55e8b8420c6c040cda14e8cf284465adee71d217"},
{file = "types_jsonschema-4.23.0.20240712-py3-none-any.whl", hash = "sha256:8c33177ce95336241c1d61ccb56a9964d4361b99d5f1cd81a1ab4909b0dd7cf4"},
{file = "types-jsonschema-4.23.0.20240813.tar.gz", hash = "sha256:c93f48206f209a5bc4608d295ac39f172fb98b9e24159ce577dbd25ddb79a1c0"},
{file = "types_jsonschema-4.23.0.20240813-py3-none-any.whl", hash = "sha256:be283e23f0b87547316c2ee6b0fd36d95ea30e921db06478029e10b5b6aa6ac3"},
]
[package.dependencies]
@@ -2900,13 +2909,13 @@ urllib3 = ">=2"
[[package]]
name = "types-setuptools"
version = "71.1.0.20240726"
version = "71.1.0.20240818"
description = "Typing stubs for setuptools"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-setuptools-71.1.0.20240726.tar.gz", hash = "sha256:85ba28e9461bb1be86ebba4db0f1c2408f2b11115b1966334ea9dc464e29303e"},
{file = "types_setuptools-71.1.0.20240726-py3-none-any.whl", hash = "sha256:a7775376f36e0ff09bcad236bf265777590a66b11623e48c20bfc30f1444ea36"},
{file = "types-setuptools-71.1.0.20240818.tar.gz", hash = "sha256:f62eaffaa39774462c65fbb49368c4dc1d91a90a28371cb14e1af090ff0e41e3"},
{file = "types_setuptools-71.1.0.20240818-py3-none-any.whl", hash = "sha256:c4f95302f88369ac0ac46c67ddbfc70c6c4dbbb184d9fed356244217a2934025"},
]
[[package]]

View File

@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.113.0"
version = "1.114.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

View File

@@ -38,6 +38,7 @@ from mypy.types import (
NoneType,
TupleType,
TypeAliasType,
TypeVarType,
UninhabitedType,
UnionType,
)
@@ -233,6 +234,7 @@ IMMUTABLE_CUSTOM_TYPES = {
"synapse.synapse_rust.push.FilteredPushRules",
# This is technically not immutable, but close enough.
"signedjson.types.VerifyKey",
"synapse.types.StrCollection",
}
# Immutable containers only if the values are also immutable.
@@ -298,7 +300,7 @@ def is_cacheable(
elif rt.type.fullname in MUTABLE_CONTAINER_TYPES:
# Mutable containers are mutable regardless of their underlying type.
return False, None
return False, f"container {rt.type.fullname} is mutable"
elif "attrs" in rt.type.metadata:
# attrs classes are only cacheable iff they are frozen (i.e. immutable themselves)
@@ -318,6 +320,9 @@ def is_cacheable(
else:
return False, "non-frozen attrs class"
elif rt.type.is_enum:
# We assume Enum values are immutable
return True, None
else:
# Ensure we fail for unknown types, these generally means that the
# above code is not complete.
@@ -326,6 +331,18 @@ def is_cacheable(
f"Don't know how to handle {rt.type.fullname} return type instance",
)
elif isinstance(rt, TypeVarType):
# We consider TypeVars immutable if they are bound to a set of immutable
# types.
if rt.values:
for value in rt.values:
ok, note = is_cacheable(value, signature, verbose)
if not ok:
return False, f"TypeVar bound not cacheable {value}"
return True, None
return False, "TypeVar is unbound"
elif isinstance(rt, NoneType):
# None is cacheable.
return True, None
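As an aside, a minimal sketch (not part of the diff) of a value-constrained TypeVar that this new branch would accept, since every permitted constraint type is immutable:

from typing import TypeVar

# Hypothetical example: str and int are both immutable, so a cached
# function returning T would pass the plugin's TypeVar check.
T = TypeVar("T", str, int)

def identity(value: T) -> T:
    return value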

View File

@@ -324,6 +324,11 @@ def tag(gh_token: Optional[str]) -> None:
def _tag(gh_token: Optional[str]) -> None:
"""Tags the release and generates a draft GitHub release"""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Make sure we're in a git repo.
repo = get_repo_and_check_clean_checkout()
@@ -418,6 +423,11 @@ def publish(gh_token: str) -> None:
def _publish(gh_token: str) -> None:
"""Publish release on GitHub."""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Make sure we're in a git repo.
get_repo_and_check_clean_checkout()
@@ -460,6 +470,11 @@ def upload(gh_token: Optional[str]) -> None:
def _upload(gh_token: Optional[str]) -> None:
"""Upload release to pypi."""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
current_version = get_package_version()
tag_name = f"v{current_version}"
@@ -555,6 +570,11 @@ def wait_for_actions(gh_token: Optional[str]) -> None:
def _wait_for_actions(gh_token: Optional[str]) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Find out the version and tag name.
current_version = get_package_version()
tag_name = f"v{current_version}"
@@ -711,6 +731,11 @@ Ask the designated people to do the blog and tweets."""
@cli.command()
@click.option("--gh-token", envvar=["GH_TOKEN", "GITHUB_TOKEN"], required=True)
def full(gh_token: str) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
click.echo("1. If this is a security release, read the security wiki page.")
click.echo("2. Check for any release blockers before proceeding.")
click.echo(" https://github.com/element-hq/synapse/labels/X-Release-Blocker")

View File

@@ -56,7 +56,9 @@ def main() -> None:
password_pepper = password_config.get("pepper", password_pepper)
password = args.password
if not password:
if not password and not sys.stdin.isatty():
password = sys.stdin.readline().strip()
elif not password:
password = prompt_for_pass()
# On Python 2, make sure we decode it to Unicode before we normalise it
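A self-contained sketch of the new fallback order (read_password and the getpass prompt are illustrative stand-ins for the script's own helpers):

import sys
import getpass
from typing import Optional

def read_password(arg_password: Optional[str]) -> str:
    # Prefer an explicit argument, then piped stdin, then an
    # interactive prompt as the last resort.
    if arg_password:
        return arg_password
    if not sys.stdin.isatty():
        return sys.stdin.readline().strip()
    return getpass.getpass("Password: ")

This lets the script be driven non-interactively, e.g. by piping the password in on stdin.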

View File

@@ -129,6 +129,11 @@ BOOLEAN_COLUMNS = {
"remote_media_cache": ["authenticated"],
"room_stats_state": ["is_federatable"],
"rooms": ["is_public", "has_auth_chain_index"],
"sliding_sync_joined_rooms": ["is_encrypted"],
"sliding_sync_membership_snapshots": [
"has_known_state",
"is_encrypted",
],
"users": ["shadow_banned", "approved", "locked", "suspended"],
"un_partial_stated_event_stream": ["rejection_status_changed"],
"users_who_share_rooms": ["share_private"],

View File

@@ -245,6 +245,8 @@ class EventContentFields:
# `m.room.encryption`` algorithm field
ENCRYPTION_ALGORITHM: Final = "algorithm"
TOMBSTONE_SUCCESSOR_ROOM: Final = "replacement_room"
class EventUnsignedContentFields:
"""Fields found inside the 'unsigned' data on events"""

View File

@@ -589,7 +589,7 @@ class BaseV2KeyFetcher(KeyFetcher):
% (server_name,)
)
for key_id, key_data in response_json["old_verify_keys"].items():
for key_id, key_data in response_json.get("old_verify_keys", {}).items():
if is_signing_algorithm_supported(key_id):
key_base64 = key_data["key"]
key_bytes = decode_base64(key_base64)
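A tiny self-contained sketch of the defensive pattern (response_json here is a hypothetical key-server response that omits the optional field):

response_json = {"server_name": "example.org", "verify_keys": {}}

# .get() with a default tolerates responses that omit "old_verify_keys"
# instead of raising KeyError, matching the fix above.
for key_id, key_data in response_json.get("old_verify_keys", {}).items():
    print(key_id, key_data)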

View File

@@ -267,31 +267,27 @@ class DeviceWorkerHandler:
newly_left_rooms.add(change.room_id)
# We now work out if any other users have since joined or left the rooms
# the user is currently in. First we filter out rooms that we know
# haven't changed recently.
rooms_changed = self.store.get_rooms_that_changed(
joined_room_ids, from_token.room_key
)
# the user is currently in.
# List of membership changes per room
room_to_deltas: Dict[str, List[StateDelta]] = {}
# The set of event IDs of membership events (so we can fetch their
# associated membership).
memberships_to_fetch: Set[str] = set()
for room_id in rooms_changed:
# TODO: Only pull out membership events?
state_changes = await self.store.get_current_state_deltas_for_room(
room_id, from_token=from_token.room_key, to_token=now_token.room_key
)
for delta in state_changes:
if delta.event_type != EventTypes.Member:
continue
room_to_deltas.setdefault(room_id, []).append(delta)
if delta.event_id:
memberships_to_fetch.add(delta.event_id)
if delta.prev_event_id:
memberships_to_fetch.add(delta.prev_event_id)
# TODO: Only pull out membership events?
state_changes = await self.store.get_current_state_deltas_for_rooms(
joined_room_ids, from_token=from_token.room_key, to_token=now_token.room_key
)
for delta in state_changes:
if delta.event_type != EventTypes.Member:
continue
room_to_deltas.setdefault(delta.room_id, []).append(delta)
if delta.event_id:
memberships_to_fetch.add(delta.event_id)
if delta.prev_event_id:
memberships_to_fetch.add(delta.prev_event_id)
# Fetch all the memberships for the membership events
event_id_to_memberships = await self.store.get_membership_from_event_ids(

View File

@@ -183,8 +183,13 @@ class RoomSummaryHandler:
) -> JsonDict:
"""See docstring for SpaceSummaryHandler.get_room_hierarchy."""
# First of all, check that the room is accessible.
if not await self._is_local_room_accessible(requested_room_id, requester):
# If the room is available locally, quickly check that the user can access it.
local_room = await self._store.is_host_joined(
requested_room_id, self._server_name
)
if local_room and not await self._is_local_room_accessible(
requested_room_id, requester
):
raise UnstableSpecAuthError(
403,
"User %s not in room %s, and room previews are disabled"
@@ -192,6 +197,22 @@ class RoomSummaryHandler:
errcode=Codes.NOT_JOINED,
)
if not local_room:
room_hierarchy = await self._summarize_remote_room_hierarchy(
_RoomQueueEntry(requested_room_id, ()),
False,
)
root_room_entry = room_hierarchy[0]
if not root_room_entry or not await self._is_remote_room_accessible(
requester, requested_room_id, root_room_entry.room
):
raise UnstableSpecAuthError(
403,
"User %s not in room %s, and room previews are disabled"
% (requester, requested_room_id),
errcode=Codes.NOT_JOINED,
)
# If this is continuing a previous session, pull the persisted data.
if from_token:
try:

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -0,0 +1,699 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import itertools
import logging
from typing import TYPE_CHECKING, AbstractSet, Dict, Mapping, Optional, Sequence, Set
from typing_extensions import assert_never
from synapse.api.constants import AccountDataTypes, EduTypes
from synapse.handlers.receipts import ReceiptEventSource
from synapse.logging.opentracing import trace
from synapse.storage.databases.main.receipts import ReceiptInRoom
from synapse.types import (
DeviceListUpdates,
JsonMapping,
MultiWriterStreamToken,
SlidingSyncStreamToken,
StrCollection,
StreamToken,
)
from synapse.types.handlers.sliding_sync import (
HaveSentRoomFlag,
MutablePerConnectionState,
OperationType,
PerConnectionState,
SlidingSyncConfig,
SlidingSyncResult,
)
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class SlidingSyncExtensionHandler:
"""Handles the extensions to sliding sync."""
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastores().main
self.event_sources = hs.get_event_sources()
self.device_handler = hs.get_device_handler()
self.push_rules_handler = hs.get_push_rules_handler()
@trace
async def get_extensions_response(
self,
sync_config: SlidingSyncConfig,
previous_connection_state: "PerConnectionState",
new_connection_state: "MutablePerConnectionState",
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
actual_room_response_map: Mapping[str, SlidingSyncResult.RoomResult],
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> SlidingSyncResult.Extensions:
"""Handle extension requests.
Args:
sync_config: Sync configuration
previous_connection_state: Snapshot of the per-connection state from
the previous request.
new_connection_state: A mutable copy of the per-connection
state, used to record updates to the state during this request.
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
if sync_config.extensions is None:
return SlidingSyncResult.Extensions()
to_device_response = None
if sync_config.extensions.to_device is not None:
to_device_response = await self.get_to_device_extension_response(
sync_config=sync_config,
to_device_request=sync_config.extensions.to_device,
to_token=to_token,
)
e2ee_response = None
if sync_config.extensions.e2ee is not None:
e2ee_response = await self.get_e2ee_extension_response(
sync_config=sync_config,
e2ee_request=sync_config.extensions.e2ee,
to_token=to_token,
from_token=from_token,
)
account_data_response = None
if sync_config.extensions.account_data is not None:
account_data_response = await self.get_account_data_extension_response(
sync_config=sync_config,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
account_data_request=sync_config.extensions.account_data,
to_token=to_token,
from_token=from_token,
)
receipts_response = None
if sync_config.extensions.receipts is not None:
receipts_response = await self.get_receipts_extension_response(
sync_config=sync_config,
previous_connection_state=previous_connection_state,
new_connection_state=new_connection_state,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
actual_room_response_map=actual_room_response_map,
receipts_request=sync_config.extensions.receipts,
to_token=to_token,
from_token=from_token,
)
typing_response = None
if sync_config.extensions.typing is not None:
typing_response = await self.get_typing_extension_response(
sync_config=sync_config,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
actual_room_response_map=actual_room_response_map,
typing_request=sync_config.extensions.typing,
to_token=to_token,
from_token=from_token,
)
return SlidingSyncResult.Extensions(
to_device=to_device_response,
e2ee=e2ee_response,
account_data=account_data_response,
receipts=receipts_response,
typing=typing_response,
)
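A hypothetical request-body sketch (field names mirror the extension blocks handled above): each branch in get_extensions_response only runs when the corresponding block is present and enabled.

request_extensions = {
    "to_device": {"enabled": True, "limit": 100},
    "e2ee": {"enabled": True},
    "account_data": {"enabled": True},
    "receipts": {"enabled": True, "lists": ["*"], "rooms": []},
    "typing": {"enabled": True},
}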
def find_relevant_room_ids_for_extension(
self,
requested_lists: Optional[StrCollection],
requested_room_ids: Optional[StrCollection],
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: AbstractSet[str],
) -> Set[str]:
"""
Handle the reserved `lists`/`rooms` keys for extensions. Extensions should only
return results for rooms in the Sliding Sync response. This matches up the
requested rooms/lists with the actual lists/rooms in the Sliding Sync response.
{"lists": []} // Do not process any lists.
{"lists": ["rooms", "dms"]} // Process only a subset of lists.
{"lists": ["*"]} // Process all lists defined in the Sliding Window API. (This is the default.)
{"rooms": []} // Do not process any specific rooms.
{"rooms": ["!a:b", "!c:d"]} // Process only a subset of room subscriptions.
{"rooms": ["*"]} // Process all room subscriptions defined in the Room Subscription API. (This is the default.)
Args:
requested_lists: The `lists` from the extension request.
requested_room_ids: The `rooms` from the extension request.
actual_lists: The actual lists from the Sliding Sync response.
actual_room_ids: The actual room subscriptions from the Sliding Sync request.
"""
# We only want to include account data for rooms that are already in the sliding
# sync response AND that were requested in the account data request.
relevant_room_ids: Set[str] = set()
# See what rooms from the room subscriptions we should get account data for
if requested_room_ids is not None:
for room_id in requested_room_ids:
# A wildcard means we process all rooms from the room subscriptions
if room_id == "*":
relevant_room_ids.update(actual_room_ids)
break
if room_id in actual_room_ids:
relevant_room_ids.add(room_id)
# See what rooms from the sliding window lists we should get account data for
if requested_lists is not None:
for list_key in requested_lists:
# Just some typing because we share the variable name in multiple places
actual_list: Optional[SlidingSyncResult.SlidingWindowList] = None
# A wildcard means we process rooms from all lists
if list_key == "*":
for actual_list in actual_lists.values():
# We only expect a single SYNC operation for any list
assert len(actual_list.ops) == 1
sync_op = actual_list.ops[0]
assert sync_op.op == OperationType.SYNC
relevant_room_ids.update(sync_op.room_ids)
break
actual_list = actual_lists.get(list_key)
if actual_list is not None:
# We only expect a single SYNC operation for any list
assert len(actual_list.ops) == 1
sync_op = actual_list.ops[0]
assert sync_op.op == OperationType.SYNC
relevant_room_ids.update(sync_op.room_ids)
return relevant_room_ids
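A hypothetical usage sketch (handler, actual_lists and the room IDs stand in for values from a real Sliding Sync response):

relevant = handler.find_relevant_room_ids_for_extension(
    requested_lists=["dms"],           # only the "dms" sliding window list
    requested_room_ids=["*"],          # wildcard: all room subscriptions
    actual_lists=actual_lists,         # lists in the computed response
    actual_room_ids={"!a:b", "!c:d"},  # subscriptions in the response
)
# `relevant` is the union of the rooms in the "dms" list's SYNC op and
# both room subscriptions.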
@trace
async def get_to_device_extension_response(
self,
sync_config: SlidingSyncConfig,
to_device_request: SlidingSyncConfig.Extensions.ToDeviceExtension,
to_token: StreamToken,
) -> Optional[SlidingSyncResult.Extensions.ToDeviceExtension]:
"""Handle to-device extension (MSC3885)
Args:
sync_config: Sync configuration
to_device_request: The to-device extension from the request
to_token: The point in the stream to sync up to.
"""
user_id = sync_config.user.to_string()
device_id = sync_config.requester.device_id
# Skip if the extension is not enabled
if not to_device_request.enabled:
return None
# Check that this request has a valid device ID (not all requests have
# to belong to a device, and so device_id is None)
if device_id is None:
return SlidingSyncResult.Extensions.ToDeviceExtension(
next_batch=f"{to_token.to_device_key}",
events=[],
)
since_stream_id = 0
if to_device_request.since is not None:
# We've already validated this is an int.
since_stream_id = int(to_device_request.since)
if to_token.to_device_key < since_stream_id:
# The since token is ahead of our current token, so we return an
# empty response.
logger.warning(
"Got to-device.since from the future. since token: %r is ahead of our current to_device stream position: %r",
since_stream_id,
to_token.to_device_key,
)
return SlidingSyncResult.Extensions.ToDeviceExtension(
next_batch=to_device_request.since,
events=[],
)
# Delete everything before the given since token, as we know the
# device must have received them.
deleted = await self.store.delete_messages_for_device(
user_id=user_id,
device_id=device_id,
up_to_stream_id=since_stream_id,
)
logger.debug(
"Deleted %d to-device messages up to %d for %s",
deleted,
since_stream_id,
user_id,
)
messages, stream_id = await self.store.get_messages_for_device(
user_id=user_id,
device_id=device_id,
from_stream_id=since_stream_id,
to_stream_id=to_token.to_device_key,
limit=min(to_device_request.limit, 100), # Limit to at most 100 events
)
return SlidingSyncResult.Extensions.ToDeviceExtension(
next_batch=f"{stream_id}",
events=messages,
)
@trace
async def get_e2ee_extension_response(
self,
sync_config: SlidingSyncConfig,
e2ee_request: SlidingSyncConfig.Extensions.E2eeExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.E2eeExtension]:
"""Handle E2EE device extension (MSC3884)
Args:
sync_config: Sync configuration
e2ee_request: The e2ee extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
user_id = sync_config.user.to_string()
device_id = sync_config.requester.device_id
# Skip if the extension is not enabled
if not e2ee_request.enabled:
return None
device_list_updates: Optional[DeviceListUpdates] = None
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
device_list_updates = await self.device_handler.get_user_ids_changed(
user_id=user_id,
from_token=from_token.stream_token,
)
device_one_time_keys_count: Mapping[str, int] = {}
device_unused_fallback_key_types: Sequence[str] = []
if device_id:
# TODO: We should have a way to let clients differentiate between the states of:
# * no change in OTK count since the provided since token
# * the server has zero OTKs left for this device
# Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
device_one_time_keys_count = await self.store.count_e2e_one_time_keys(
user_id, device_id
)
device_unused_fallback_key_types = (
await self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
)
return SlidingSyncResult.Extensions.E2eeExtension(
device_list_updates=device_list_updates,
device_one_time_keys_count=device_one_time_keys_count,
device_unused_fallback_key_types=device_unused_fallback_key_types,
)
@trace
async def get_account_data_extension_response(
self,
sync_config: SlidingSyncConfig,
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
account_data_request: SlidingSyncConfig.Extensions.AccountDataExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.AccountDataExtension]:
"""Handle Account Data extension (MSC3959)
Args:
sync_config: Sync configuration
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
account_data_request: The account_data extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
user_id = sync_config.user.to_string()
# Skip if the extension is not enabled
if not account_data_request.enabled:
return None
global_account_data_map: Mapping[str, JsonMapping] = {}
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
global_account_data_map = (
await self.store.get_updated_global_account_data_for_user(
user_id, from_token.stream_token.account_data_key
)
)
have_push_rules_changed = await self.store.have_push_rules_changed_for_user(
user_id, from_token.stream_token.push_rules_key
)
if have_push_rules_changed:
global_account_data_map = dict(global_account_data_map)
# TODO: This should take into account the `from_token` and `to_token`
global_account_data_map[AccountDataTypes.PUSH_RULES] = (
await self.push_rules_handler.push_rules_for_user(sync_config.user)
)
else:
# TODO: This should take into account the `to_token`
all_global_account_data = await self.store.get_global_account_data_for_user(
user_id
)
global_account_data_map = dict(all_global_account_data)
# TODO: This should take into account the `to_token`
global_account_data_map[AccountDataTypes.PUSH_RULES] = (
await self.push_rules_handler.push_rules_for_user(sync_config.user)
)
# Fetch room account data
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]] = {}
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=account_data_request.lists,
requested_room_ids=account_data_request.rooms,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
)
if len(relevant_room_ids) > 0:
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
account_data_by_room_map = (
await self.store.get_updated_room_account_data_for_user(
user_id, from_token.stream_token.account_data_key
)
)
else:
# TODO: This should take into account the `to_token`
account_data_by_room_map = (
await self.store.get_room_account_data_for_user(user_id)
)
# Filter down to the relevant rooms
account_data_by_room_map = {
room_id: account_data_map
for room_id, account_data_map in account_data_by_room_map.items()
if room_id in relevant_room_ids
}
return SlidingSyncResult.Extensions.AccountDataExtension(
global_account_data_map=global_account_data_map,
account_data_by_room_map=account_data_by_room_map,
)
@trace
async def get_receipts_extension_response(
self,
sync_config: SlidingSyncConfig,
previous_connection_state: "PerConnectionState",
new_connection_state: "MutablePerConnectionState",
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
actual_room_response_map: Mapping[str, SlidingSyncResult.RoomResult],
receipts_request: SlidingSyncConfig.Extensions.ReceiptsExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.ReceiptsExtension]:
"""Handle Receipts extension (MSC3960)
Args:
sync_config: Sync configuration
previous_connection_state: The per-connection state from the previous request
new_connection_state: A mutable copy of the per-connection
state, used to record updates to the state.
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
receipts_request: The receipts extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
# Skip if the extension is not enabled
if not receipts_request.enabled:
return None
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=receipts_request.lists,
requested_room_ids=receipts_request.rooms,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
)
room_id_to_receipt_map: Dict[str, JsonMapping] = {}
if len(relevant_room_ids) > 0:
# We need to handle the different cases depending on if we have sent
# down receipts previously or not, so we split the relevant rooms
# up into different collections based on status.
live_rooms = set()
previously_rooms: Dict[str, MultiWriterStreamToken] = {}
initial_rooms = set()
for room_id in relevant_room_ids:
if not from_token:
initial_rooms.add(room_id)
continue
# If we're sending down the room from scratch again for some
# reason, we should always resend the receipts as well
# (regardless of if we've sent them down before). This is to
# mimic the behaviour of what happens on initial sync, where you
# get a chunk of timeline with all of the corresponding receipts
# for the events in the timeline.
#
# We also resend down receipts when we "expand" the timeline,
# (see the "XXX: Odd behavior" in
# `synapse.handlers.sliding_sync`).
room_result = actual_room_response_map.get(room_id)
if room_result is not None:
if room_result.initial or room_result.unstable_expanded_timeline:
initial_rooms.add(room_id)
continue
room_status = previous_connection_state.receipts.have_sent_room(room_id)
if room_status.status == HaveSentRoomFlag.LIVE:
live_rooms.add(room_id)
elif room_status.status == HaveSentRoomFlag.PREVIOUSLY:
assert room_status.last_token is not None
previously_rooms[room_id] = room_status.last_token
elif room_status.status == HaveSentRoomFlag.NEVER:
initial_rooms.add(room_id)
else:
assert_never(room_status.status)
# The set of receipts that we fetched. Private receipts need to be
# filtered out before returning.
fetched_receipts = []
# For live rooms we just fetch all receipts in those rooms since the
# `since` token.
if live_rooms:
assert from_token is not None
receipts = await self.store.get_linearized_receipts_for_rooms(
room_ids=live_rooms,
from_key=from_token.stream_token.receipt_key,
to_key=to_token.receipt_key,
)
fetched_receipts.extend(receipts)
# For rooms we've previously sent down, but aren't up to date, we
# need to use the from token from the room status.
if previously_rooms:
for room_id, receipt_token in previously_rooms.items():
# TODO: Limit the number of receipts we're about to send down
# for the room; if it's too many we should TODO
previously_receipts = (
await self.store.get_linearized_receipts_for_room(
room_id=room_id,
from_key=receipt_token,
to_key=to_token.receipt_key,
)
)
fetched_receipts.extend(previously_receipts)
if initial_rooms:
# We also always send down receipts for the current user.
user_receipts = (
await self.store.get_linearized_receipts_for_user_in_rooms(
user_id=sync_config.user.to_string(),
room_ids=initial_rooms,
to_key=to_token.receipt_key,
)
)
# For rooms we haven't previously sent down, we could send all receipts
# from that room but we only want to include receipts for events
# in the timeline to avoid bloating and blowing up the sync response
# as the number of users in the room increases. (This behavior is part of the spec.)
initial_rooms_and_event_ids = [
(room_id, event.event_id)
for room_id in initial_rooms
if room_id in actual_room_response_map
for event in actual_room_response_map[room_id].timeline_events
]
initial_receipts = await self.store.get_linearized_receipts_for_events(
room_and_event_ids=initial_rooms_and_event_ids,
)
# Combine the receipts for a room and add them to
# `fetched_receipts`
for room_id in initial_receipts.keys() | user_receipts.keys():
receipt_content = ReceiptInRoom.merge_to_content(
list(
itertools.chain(
initial_receipts.get(room_id, []),
user_receipts.get(room_id, []),
)
)
)
fetched_receipts.append(
{
"room_id": room_id,
"type": EduTypes.RECEIPT,
"content": receipt_content,
}
)
fetched_receipts = ReceiptEventSource.filter_out_private_receipts(
fetched_receipts, sync_config.user.to_string()
)
for receipt in fetched_receipts:
# These fields should exist for every receipt
room_id = receipt["room_id"]
type = receipt["type"]
content = receipt["content"]
room_id_to_receipt_map[room_id] = {"type": type, "content": content}
# Now we update the per-connection state to track which receipts we have
# and haven't sent down.
new_connection_state.receipts.record_sent_rooms(relevant_room_ids)
if from_token:
# Now find the set of rooms that may have receipts that we're not sending
# down. We only need to check rooms that we have previously returned
# receipts for (in `previous_connection_state`) because we only care about
# updating `LIVE` rooms to `PREVIOUSLY`. The `PREVIOUSLY` rooms will just
# stay pointing at their previous position so we don't need to waste time
# checking those and since we default to `NEVER`, rooms that were `NEVER`
# sent before don't need to be recorded as we'll handle them correctly when
# they come into range for the first time.
rooms_no_receipts = [
room_id
for room_id, room_status in previous_connection_state.receipts._statuses.items()
if room_status.status == HaveSentRoomFlag.LIVE
and room_id not in relevant_room_ids
]
changed_rooms = await self.store.get_rooms_with_receipts_between(
rooms_no_receipts,
from_key=from_token.stream_token.receipt_key,
to_key=to_token.receipt_key,
)
new_connection_state.receipts.record_unsent_rooms(
changed_rooms, from_token.stream_token.receipt_key
)
return SlidingSyncResult.Extensions.ReceiptsExtension(
room_id_to_receipt_map=room_id_to_receipt_map,
)
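The three-way split above can be condensed into a self-contained sketch (the classify helper and the local Flag enum are illustrative; semantics follow the comments above):

from enum import Enum, auto

class Flag(Enum):
    NEVER = auto()
    PREVIOUSLY = auto()
    LIVE = auto()

def classify(has_from_token: bool, resent_from_scratch: bool, flag: Flag) -> str:
    # Initial syncs and re-sent/expanded rooms get timeline-scoped
    # receipts; LIVE rooms only need receipts since the request token;
    # PREVIOUSLY rooms resume from their recorded receipt token.
    if not has_from_token or resent_from_scratch:
        return "initial"
    if flag is Flag.LIVE:
        return "live"
    if flag is Flag.PREVIOUSLY:
        return "previously"
    return "initial"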
async def get_typing_extension_response(
self,
sync_config: SlidingSyncConfig,
actual_lists: Mapping[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
actual_room_response_map: Mapping[str, SlidingSyncResult.RoomResult],
typing_request: SlidingSyncConfig.Extensions.TypingExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.TypingExtension]:
"""Handle Typing Notification extension (MSC3961)
Args:
sync_config: Sync configuration
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
typing_request: The typing extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
# Skip if the extension is not enabled
if not typing_request.enabled:
return None
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=typing_request.lists,
requested_room_ids=typing_request.rooms,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
)
room_id_to_typing_map: Dict[str, JsonMapping] = {}
if len(relevant_room_ids) > 0:
# Note: We don't need to take connection tracking into account for typing
# notifications because the client will get anything that is still relevant
# and hasn't timed out when the room comes into range. We consider the gap
# where the room fell out of range long enough for any typing notifications
# to have timed out (it's not worth the 30 seconds of data we may have missed).
typing_source = self.event_sources.sources.typing
typing_notifications, _ = await typing_source.get_new_events(
user=sync_config.user,
from_key=(from_token.stream_token.typing_key if from_token else 0),
to_key=to_token.typing_key,
# This is a dummy value and isn't used in the function
limit=0,
room_ids=relevant_room_ids,
is_guest=False,
)
for typing_notification in typing_notifications:
# These fields should exist for every typing notification
room_id = typing_notification["room_id"]
type = typing_notification["type"]
content = typing_notification["content"]
room_id_to_typing_map[room_id] = {"type": type, "content": content}
return SlidingSyncResult.Extensions.TypingExtension(
room_id_to_typing_map=room_id_to_typing_map,
)

File diff suppressed because it is too large

View File

@@ -0,0 +1,200 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
from typing import TYPE_CHECKING, Dict, Optional, Tuple
import attr
from synapse.api.errors import SlidingSyncUnknownPosition
from synapse.logging.opentracing import trace
from synapse.types import SlidingSyncStreamToken
from synapse.types.handlers.sliding_sync import (
MutablePerConnectionState,
PerConnectionState,
SlidingSyncConfig,
)
if TYPE_CHECKING:
pass
logger = logging.getLogger(__name__)
@attr.s(auto_attribs=True)
class SlidingSyncConnectionStore:
"""In-memory store of per-connection state, including what rooms we have
previously sent down a sliding sync connection.
Note: This is NOT safe to run in a worker setup because connection positions will
point to different sets of rooms on different workers. e.g. for the same connection,
a connection position of 5 might have totally different states on worker A and
worker B.
One complication that we need to deal with here is needing to handle requests being
resent, i.e. if we sent down a room in a response that the client received, we must
consider the room *not* sent when we get the request again.
This is handled by using an integer "token", which is returned to the client
as part of the sync token. For each connection we store a mapping from
tokens to the room states, and create a new entry when we send down new
rooms.
Note that for any given sliding sync connection we will only store a maximum
of two different tokens: the previous token from the request and a new token
sent in the response. When we receive a request with a given token, we then
clear out all other entries with a different token.
Attributes:
_connections: Mapping from `(user_id, conn_id)` to mapping of
`connection_position` to `PerConnectionState`.
"""
# `(user_id, conn_id)` -> `connection_position` -> `PerConnectionState`
_connections: Dict[Tuple[str, str], Dict[int, PerConnectionState]] = attr.Factory(
dict
)
async def is_valid_token(
self, sync_config: SlidingSyncConfig, connection_token: int
) -> bool:
"""Return whether the connection token is valid/recognized"""
if connection_token == 0:
return True
conn_key = self._get_connection_key(sync_config)
return connection_token in self._connections.get(conn_key, {})
async def get_per_connection_state(
self,
sync_config: SlidingSyncConfig,
from_token: Optional[SlidingSyncStreamToken],
) -> PerConnectionState:
"""Fetch the per-connection state for the token.
Raises:
SlidingSyncUnknownPosition if the connection_token is unknown
"""
if from_token is None:
return PerConnectionState()
connection_position = from_token.connection_position
if connection_position == 0:
# Initial sync (request without a `from_token`) starts at `0` so
# there is no existing per-connection state
return PerConnectionState()
conn_key = self._get_connection_key(sync_config)
sync_statuses = self._connections.get(conn_key, {})
connection_state = sync_statuses.get(connection_position)
if connection_state is None:
raise SlidingSyncUnknownPosition()
return connection_state
@trace
async def record_new_state(
self,
sync_config: SlidingSyncConfig,
from_token: Optional[SlidingSyncStreamToken],
new_connection_state: MutablePerConnectionState,
) -> int:
"""Record updated per-connection state, returning the connection
position associated with the new state.
If there are no changes to the state this may return the same token as
the existing per-connection state.
"""
prev_connection_token = 0
if from_token is not None:
prev_connection_token = from_token.connection_position
if not new_connection_state.has_updates():
return prev_connection_token
conn_key = self._get_connection_key(sync_config)
sync_statuses = self._connections.setdefault(conn_key, {})
# Generate a new token, removing any existing entries in that token
# (which can happen if requests get resent).
new_store_token = prev_connection_token + 1
sync_statuses.pop(new_store_token, None)
# We copy the `MutablePerConnectionState` so that the inner `ChainMap`s
# don't grow forever.
sync_statuses[new_store_token] = new_connection_state.copy()
return new_store_token
@trace
async def mark_token_seen(
self,
sync_config: SlidingSyncConfig,
from_token: Optional[SlidingSyncStreamToken],
) -> None:
"""We have received a request with the given token, so we can clear out
any other tokens associated with the connection.
If there is no from token then we have started afresh, and so we delete
all tokens associated with the device.
"""
# Clear out any tokens for the connection that don't match the one
# from the request.
conn_key = self._get_connection_key(sync_config)
sync_statuses = self._connections.pop(conn_key, {})
if from_token is None:
return
sync_statuses = {
connection_token: room_statuses
for connection_token, room_statuses in sync_statuses.items()
if connection_token == from_token.connection_position
}
if sync_statuses:
self._connections[conn_key] = sync_statuses
@staticmethod
def _get_connection_key(sync_config: SlidingSyncConfig) -> Tuple[str, str]:
"""Return a unique identifier for this connection.
The first part is simply the user ID.
The second part is generally a combination of device ID and conn_id.
However, both these two are optional (e.g. puppet access tokens don't
have device IDs), so this handles those edge cases.
We use this over the raw `conn_id` to avoid clashes between different
clients that use the same `conn_id`. Imagine a user uses a web client
that uses `conn_id: main_sync_loop` and an Android client that also has
a `conn_id: main_sync_loop`.
"""
user_id = sync_config.user.to_string()
# Only one sliding sync connection is allowed per given conn_id (empty
# or not).
conn_id = sync_config.conn_id or ""
if sync_config.requester.device_id:
return (user_id, f"D/{sync_config.requester.device_id}/{conn_id}")
if sync_config.requester.access_token_id:
# If we don't have a device, then the access token ID should be a
# stable ID.
return (user_id, f"A/{sync_config.requester.access_token_id}/{conn_id}")
# If we have neither then it's likely an AS or some weird token. Either
# way we can just fail here.
raise Exception("Cannot use sliding sync with access token type")
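As a quick illustration of the key scheme (a standalone sketch with made-up values, not part of the diff):

# Two clients that share conn_id "main_sync_loop" still get distinct
# connection keys because the device ID is mixed into the second element.
web_key = ("@alice:example.org", "D/WEBDEVICE/main_sync_loop")
android_key = ("@alice:example.org", "D/ANDROIDDEVICE/main_sync_loop")
assert web_key != android_key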


@@ -1756,8 +1756,10 @@ class MatrixFederationHttpClient:
request.destination,
str_url,
)
# We don't know how large the response will be upfront, so limit it to
# the `max_size` config value.
length, headers, _, _ = await self._simple_http_client.get_file(
str_url, output_stream, expected_size
str_url, output_stream, max_size
)
logger.info(


@@ -1032,13 +1032,13 @@ def tag_args(func: Callable[P, R]) -> Callable[P, R]:
def _wrapping_logic(
_func: Callable[P, R], *args: P.args, **kwargs: P.kwargs
) -> Generator[None, None, None]:
# We use `[1:]` to skip the `self` object reference and `start=1` to
# make the index line up with `argspec.args`.
#
# FIXME: We could update this to handle any type of function by ignoring the
# first argument only if it's named `self` or `cls`. This isn't fool-proof
# but handles the idiomatic cases.
for i, arg in enumerate(args[1:], start=1):
for i, arg in enumerate(args, start=0):
if argspec.args[i] in ("self", "cls"):
# Ignore `self` and `cls` values. Ideally we'd properly detect
# if we were wrapping a method, but that is really non-trivial
# and this is good enough.
continue
set_tag(SynapseTags.FUNC_ARG_PREFIX + argspec.args[i], str(arg))
set_tag(SynapseTags.FUNC_ARGS, str(args[len(argspec.args) :]))
set_tag(SynapseTags.FUNC_KWARGS, str(kwargs))
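A small standalone sketch of why `start=0` now lines up with `argspec.args` (illustrative only):

import inspect

class Example:
    def method(self, a, b):
        pass

# argspec.args includes "self", so indexing the raw call args from 0 maps
# each value to its parameter name; "self"/"cls" are then skipped by name
# rather than by position.
argspec = inspect.getfullargspec(Example.method)
assert argspec.args == ["self", "a", "b"]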


@@ -60,8 +60,6 @@ from synapse.util.stringutils import is_ascii
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.databases.main.media_repository import LocalMedia
logger = logging.getLogger(__name__)
@@ -290,7 +288,9 @@ async def respond_with_multipart_responder(
clock: Clock,
request: SynapseRequest,
responder: "Optional[Responder]",
media_info: "LocalMedia",
media_type: str,
media_length: Optional[int],
upload_name: Optional[str],
) -> None:
"""
Responds to requests originating from the federation media `/download` endpoint by
@@ -314,7 +314,7 @@ async def respond_with_multipart_responder(
)
return
if media_info.media_type.lower().split(";", 1)[0] in INLINE_CONTENT_TYPES:
if media_type.lower().split(";", 1)[0] in INLINE_CONTENT_TYPES:
disposition = "inline"
else:
disposition = "attachment"
@@ -322,16 +322,16 @@ async def respond_with_multipart_responder(
def _quote(x: str) -> str:
return urllib.parse.quote(x.encode("utf-8"))
if media_info.upload_name:
if _can_encode_filename_as_token(media_info.upload_name):
if upload_name:
if _can_encode_filename_as_token(upload_name):
disposition = "%s; filename=%s" % (
disposition,
media_info.upload_name,
upload_name,
)
else:
disposition = "%s; filename*=utf-8''%s" % (
disposition,
_quote(media_info.upload_name),
_quote(upload_name),
)
from synapse.media.media_storage import MultipartFileConsumer
@@ -341,14 +341,14 @@ async def respond_with_multipart_responder(
multipart_consumer = MultipartFileConsumer(
clock,
request,
media_info.media_type,
media_type,
{},
disposition,
media_info.media_length,
media_length,
)
logger.debug("Responding to media request with responder %s", responder)
if media_info.media_length is not None:
if media_length is not None:
content_length = multipart_consumer.content_length()
assert content_length is not None
request.setHeader(b"Content-Length", b"%d" % (content_length,))


@@ -471,7 +471,7 @@ class MediaRepository:
responder = await self.media_storage.fetch_media(file_info)
if federation:
await respond_with_multipart_responder(
self.clock, request, responder, media_info
self.clock, request, responder, media_type, media_length, upload_name
)
else:
await respond_with_responder(
@@ -1008,7 +1008,7 @@ class MediaRepository:
t_method: str,
t_type: str,
url_cache: bool,
) -> Optional[str]:
) -> Optional[Tuple[str, FileInfo]]:
input_path = await self.media_storage.ensure_media_is_in_local_cache(
FileInfo(None, media_id, url_cache=url_cache)
)
@@ -1070,7 +1070,7 @@ class MediaRepository:
t_len,
)
return output_path
return output_path, file_info
# Could not generate thumbnail.
return None


@@ -544,7 +544,7 @@ class MultipartFileConsumer:
Calculate the content length of the multipart response
in bytes.
"""
if not self.length:
if self.length is None:
return None
# calculate length of json field and content-type, disposition headers
json_field = json.dumps(self.json_field)
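The change from a truthiness check to an explicit `None` check matters for empty files; a minimal sketch of the difference:

# A zero-length upload is valid, but `not 0` is True, so the old
# `if not self.length` would have wrongly returned None for it.
length = 0
assert not length          # old check: bails out
assert length is not None  # new check: proceeds to compute the length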


@@ -348,7 +348,12 @@ class ThumbnailProvider:
if responder:
if for_federation:
await respond_with_multipart_responder(
self.hs.get_clock(), request, responder, media_info
self.hs.get_clock(),
request,
responder,
info.type,
info.length,
None,
)
return
else:
@@ -360,7 +365,7 @@ class ThumbnailProvider:
logger.debug("We don't have a thumbnail of that size. Generating")
# Okay, so we generate one.
file_path = await self.media_repo.generate_local_exact_thumbnail(
thumbnail_result = await self.media_repo.generate_local_exact_thumbnail(
media_id,
desired_width,
desired_height,
@@ -369,13 +374,18 @@ class ThumbnailProvider:
url_cache=bool(media_info.url_cache),
)
if file_path:
if thumbnail_result:
file_path, file_info = thumbnail_result
assert file_info.thumbnail is not None
if for_federation:
await respond_with_multipart_responder(
self.hs.get_clock(),
request,
FileResponder(self.hs, open(file_path, "rb")),
media_info,
file_info.thumbnail.type,
file_info.thumbnail.length,
None,
)
else:
await respond_with_file(self.hs, request, desired_type, file_path)
@@ -580,7 +590,12 @@ class ThumbnailProvider:
if for_federation:
assert media_info is not None
await respond_with_multipart_responder(
self.hs.get_clock(), request, responder, media_info
self.hs.get_clock(),
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
None,
)
return
else:
@@ -634,7 +649,12 @@ class ThumbnailProvider:
if for_federation:
assert media_info is not None
await respond_with_multipart_responder(
self.hs.get_clock(), request, responder, media_info
self.hs.get_clock(),
request,
responder,
file_info.thumbnail.type,
file_info.thumbnail.length,
None,
)
else:
await respond_with_responder(


@@ -21,7 +21,7 @@
import itertools
import logging
from collections import defaultdict
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Tuple, Union
from synapse.api.constants import AccountDataTypes, EduTypes, Membership, PresenceState
from synapse.api.errors import Codes, StoreError, SynapseError
@@ -975,7 +975,7 @@ class SlidingSyncRestServlet(RestServlet):
return response
def encode_lists(
self, lists: Dict[str, SlidingSyncResult.SlidingWindowList]
self, lists: Mapping[str, SlidingSyncResult.SlidingWindowList]
) -> JsonDict:
def encode_operation(
operation: SlidingSyncResult.SlidingWindowList.Operation,
@@ -1044,6 +1044,11 @@ class SlidingSyncRestServlet(RestServlet):
if room_result.initial:
serialized_rooms[room_id]["initial"] = room_result.initial
if room_result.unstable_expanded_timeline:
serialized_rooms[room_id][
"unstable_expanded_timeline"
] = room_result.unstable_expanded_timeline
# This will be omitted for invite/knock rooms with `stripped_state`
if (
room_result.required_state is not None


@@ -64,6 +64,7 @@ class VersionsRestServlet(RestServlet):
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
msc3881_enabled = self.config.experimental.msc3881_enabled
msc3575_enabled = self.config.experimental.msc3575_enabled
if self.auth.has_access_token(request):
requester = await self.auth.get_user_by_req(
@@ -77,6 +78,9 @@ class VersionsRestServlet(RestServlet):
msc3881_enabled = await self.store.is_feature_enabled(
user_id, ExperimentalFeature.MSC3881
)
msc3575_enabled = await self.store.is_feature_enabled(
user_id, ExperimentalFeature.MSC3575
)
return (
200,
@@ -169,6 +173,8 @@ class VersionsRestServlet(RestServlet):
),
# MSC4151: Report room API (Client-Server API)
"org.matrix.msc4151": self.config.experimental.msc4151_enabled,
# Simplified sliding sync
"org.matrix.simplified_msc3575": msc3575_enabled,
},
},
)


@@ -124,6 +124,7 @@ from synapse.http.client import (
)
from synapse.http.matrixfederationclient import MatrixFederationHttpClient
from synapse.media.media_repository import MediaRepository
from synapse.metrics import register_threadpool
from synapse.metrics.common_usage_metrics import CommonUsageMetricsManager
from synapse.module_api import ModuleApi
from synapse.module_api.callbacks import ModuleApiCallbacks
@@ -959,4 +960,7 @@ class HomeServer(metaclass=abc.ABCMeta):
"during", "shutdown", media_threadpool.stop
)
# Register the threadpool with our metrics.
register_threadpool("media", media_threadpool)
return media_threadpool


@@ -123,6 +123,9 @@ class SQLBaseStore(metaclass=ABCMeta):
self._attempt_to_invalidate_cache(
"_get_rooms_for_local_user_where_membership_is_inner", (user_id,)
)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", (user_id,)
)
# Purge other caches based on room state.
self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
@@ -157,6 +160,7 @@ class SQLBaseStore(metaclass=ABCMeta):
self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
def _attempt_to_invalidate_cache(
self, cache_name: str, key: Optional[Collection[Any]]


@@ -44,7 +44,7 @@ from synapse._pydantic_compat import HAS_PYDANTIC_V2
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.engines import PostgresEngine
from synapse.storage.types import Connection, Cursor
from synapse.types import JsonDict
from synapse.types import JsonDict, StrCollection
from synapse.util import Clock, json_encoder
from . import engines
@@ -487,6 +487,25 @@ class BackgroundUpdater:
return not update_exists
async def have_completed_background_updates(
self, update_names: StrCollection
) -> bool:
"""Return the name of background updates that have not yet been
completed"""
if self._all_done:
return True
rows = await self.db_pool.simple_select_many_batch(
table="background_updates",
column="update_name",
iterable=update_names,
retcols=("update_name",),
desc="get_uncompleted_background_updates",
)
# If we find any rows then not all of the given updates have completed.
return not bool(rows)
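A hypothetical caller might gate a fast path on the sliding sync updates having finished (update names taken from the schema delta later in this diff):

async def can_use_sliding_sync_tables(updater: "BackgroundUpdater") -> bool:
    # Sketch only: returns True once both sliding sync background
    # updates have been marked as complete.
    return await updater.have_completed_background_updates(
        (
            "sliding_sync_joined_rooms_bg_update",
            "sliding_sync_membership_snapshots_bg_update",
        )
    )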
async def do_next_background_update(self, sleep: bool = True) -> bool:
"""Does some amount of work on the next queued background update


@@ -502,8 +502,15 @@ class EventsPersistenceStorageController:
"""
state = await self._calculate_current_state(room_id)
delta = await self._calculate_state_delta(room_id, state)
sliding_sync_table_changes = (
await self.persist_events_store._calculate_sliding_sync_table_changes(
room_id, [], delta
)
)
await self.persist_events_store.update_current_state(room_id, delta)
await self.persist_events_store.update_current_state(
room_id, delta, sliding_sync_table_changes
)
async def _calculate_current_state(self, room_id: str) -> StateMap[str]:
"""Calculate the current state of a room, based on the forward extremities


@@ -35,6 +35,7 @@ from typing import (
Iterable,
Iterator,
List,
Mapping,
Optional,
Sequence,
Tuple,
@@ -1254,9 +1255,9 @@ class DatabasePool:
self,
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
values: Dict[str, Any],
insertion_values: Optional[Dict[str, Any]] = None,
keyvalues: Mapping[str, Any],
values: Mapping[str, Any],
insertion_values: Optional[Mapping[str, Any]] = None,
where_clause: Optional[str] = None,
) -> bool:
"""
@@ -1299,9 +1300,9 @@ class DatabasePool:
self,
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
values: Dict[str, Any],
insertion_values: Optional[Dict[str, Any]] = None,
keyvalues: Mapping[str, Any],
values: Mapping[str, Any],
insertion_values: Optional[Mapping[str, Any]] = None,
where_clause: Optional[str] = None,
lock: bool = True,
) -> bool:
@@ -1322,7 +1323,7 @@ class DatabasePool:
if lock:
# We need to lock the table :(
self.engine.lock_table(txn, table)
txn.database_engine.lock_table(txn, table)
def _getwhere(key: str) -> str:
# If the value we're passing in is None (aka NULL), we need to use
@@ -1376,13 +1377,13 @@ class DatabasePool:
# successfully inserted
return True
@staticmethod
def simple_upsert_txn_native_upsert(
self,
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
values: Dict[str, Any],
insertion_values: Optional[Dict[str, Any]] = None,
keyvalues: Mapping[str, Any],
values: Mapping[str, Any],
insertion_values: Optional[Mapping[str, Any]] = None,
where_clause: Optional[str] = None,
) -> bool:
"""
@@ -1535,8 +1536,8 @@ class DatabasePool:
self.simple_upsert_txn_emulated(txn, table, _keys, _vals, lock=False)
@staticmethod
def simple_upsert_many_txn_native_upsert(
self,
txn: LoggingTransaction,
table: str,
key_names: Collection[str],
@@ -1966,8 +1967,8 @@ class DatabasePool:
def simple_update_txn(
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
updatevalues: Dict[str, Any],
keyvalues: Mapping[str, Any],
updatevalues: Mapping[str, Any],
) -> int:
"""
Update rows in the given database table.


@@ -313,6 +313,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
"get_unread_event_push_actions_by_room_for_user", (room_id,)
)
self._attempt_to_invalidate_cache("_get_max_event_pos", (room_id,))
# The `_get_membership_from_event_id` cache is immutable, except for the
# case where we look up an event *before* persisting it.
self._attempt_to_invalidate_cache("_get_membership_from_event_id", (event_id,))
@@ -404,6 +406,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
)
self._attempt_to_invalidate_cache("get_relations_for_event", (room_id,))
self._attempt_to_invalidate_cache("_get_max_event_pos", (room_id,))
self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)
self._attempt_to_invalidate_cache("get_applicable_edit", None)
self._attempt_to_invalidate_cache("get_thread_id", None)
@@ -476,6 +480,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
self._attempt_to_invalidate_cache("_get_max_event_pos", (room_id,))
# And delete state caches.
self._invalidate_state_caches_all(room_id)

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -457,6 +457,8 @@ class EventsWorkerStore(SQLBaseStore):
) -> Optional[EventBase]:
"""Get an event from the database by event_id.
Events for unknown room versions will also be filtered out.
Args:
event_id: The event_id of the event to fetch
@@ -511,6 +513,10 @@ class EventsWorkerStore(SQLBaseStore):
) -> Dict[str, EventBase]:
"""Get events from the database
Unknown events will be omitted from the response.
Events for unknown room versions will also be filtered out.
Args:
event_ids: The event_ids of the events to fetch
@@ -553,6 +559,8 @@ class EventsWorkerStore(SQLBaseStore):
Unknown events will be omitted from the response.
Events for unknown room versions will also be filtered out.
Args:
event_ids: The event_ids of the events to fetch


@@ -454,6 +454,10 @@ class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore):
# so must be deleted first.
"local_current_membership",
"room_memberships",
# Note: the sliding_sync_ tables have foreign keys to the `events` table
# so must be deleted first.
"sliding_sync_joined_rooms",
"sliding_sync_membership_snapshots",
"events",
"federation_inbound_events_staging",
"receipts_graph",


@@ -30,10 +30,12 @@ from typing import (
Mapping,
Optional,
Sequence,
Set,
Tuple,
cast,
)
import attr
from immutabledict import immutabledict
from synapse.api.constants import EduTypes
@@ -43,6 +45,7 @@ from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
LoggingTransaction,
make_tuple_in_list_sql_clause,
)
from synapse.storage.engines._base import IsolationLevel
from synapse.storage.util.id_generators import MultiWriterIdGenerator
@@ -51,10 +54,12 @@ from synapse.types import (
JsonMapping,
MultiWriterStreamToken,
PersistedPosition,
StrCollection,
)
from synapse.util import json_encoder
from synapse.util.caches.descriptors import cached, cachedList
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.iterutils import batch_iter
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -62,6 +67,57 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
@attr.s(auto_attribs=True, slots=True, frozen=True)
class ReceiptInRoom:
receipt_type: str
user_id: str
event_id: str
thread_id: Optional[str]
data: JsonMapping
@staticmethod
def merge_to_content(receipts: Collection["ReceiptInRoom"]) -> JsonMapping:
"""Merge the given set of receipts (in a room) into the receipt
content format.
Returns:
A mapping of the combined receipts: event ID -> receipt type -> user
ID -> receipt data.
"""
# MSC4102: always replace threaded receipts with unthreaded ones if
# there is a clash. This means we will drop some receipts, but MSC4102
# is designed to drop semantically meaningless receipts, so this is
# okay. Previously, we would drop meaningful data!
#
# We do this by finding the unthreaded receipts, and then filtering out
# matching threaded receipts.
# Set of (user_id, event_id)
unthreaded_receipts: Set[Tuple[str, str]] = {
(receipt.user_id, receipt.event_id)
for receipt in receipts
if receipt.thread_id is None
}
# event_id -> receipt_type -> user_id -> receipt data
content: Dict[str, Dict[str, Dict[str, JsonMapping]]] = {}
for receipt in receipts:
data = receipt.data
if receipt.thread_id is not None:
if (receipt.user_id, receipt.event_id) in unthreaded_receipts:
# Ignore threaded receipts if we have an unthreaded one.
continue
data = dict(data)
data["thread_id"] = receipt.thread_id
content.setdefault(receipt.event_id, {}).setdefault(
receipt.receipt_type, {}
)[receipt.user_id] = data
return content
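A tiny worked example of the MSC4102 clash rule, mirroring the filtering step above (standalone sketch, plain tuples instead of `ReceiptInRoom`):

# (receipt_type, user_id, event_id, thread_id)
receipts = [
    ("m.read", "@alice:example.org", "$event1", "thread1"),  # threaded
    ("m.read", "@alice:example.org", "$event1", None),       # unthreaded
]
unthreaded = {(u, e) for _, u, e, t in receipts if t is None}
# The threaded receipt clashes with an unthreaded one, so it is dropped.
kept = [r for r in receipts if r[3] is None or (r[1], r[2]) not in unthreaded]
assert kept == [("m.read", "@alice:example.org", "$event1", None)]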
class ReceiptsWorkerStore(SQLBaseStore):
def __init__(
self,
@@ -398,7 +454,7 @@ class ReceiptsWorkerStore(SQLBaseStore):
def f(
txn: LoggingTransaction,
) -> List[Tuple[str, str, str, str, Optional[str], str]]:
) -> Mapping[str, Sequence[ReceiptInRoom]]:
if from_key:
sql = """
SELECT stream_id, instance_name, room_id, receipt_type,
@@ -428,50 +484,46 @@ class ReceiptsWorkerStore(SQLBaseStore):
txn.execute(sql + clause, [to_key.get_max_stream_pos()] + list(args))
return [
(room_id, receipt_type, user_id, event_id, thread_id, data)
for stream_id, instance_name, room_id, receipt_type, user_id, event_id, thread_id, data in txn
if MultiWriterStreamToken.is_stream_position_in_range(
results: Dict[str, List[ReceiptInRoom]] = {}
for (
stream_id,
instance_name,
room_id,
receipt_type,
user_id,
event_id,
thread_id,
data,
) in txn:
if not MultiWriterStreamToken.is_stream_position_in_range(
from_key, to_key, instance_name, stream_id
):
continue
results.setdefault(room_id, []).append(
ReceiptInRoom(
receipt_type=receipt_type,
user_id=user_id,
event_id=event_id,
thread_id=thread_id,
data=db_to_json(data),
)
)
]
return results
txn_results = await self.db_pool.runInteraction(
"_get_linearized_receipts_for_rooms", f
)
results: JsonDict = {}
for room_id, receipt_type, user_id, event_id, thread_id, data in txn_results:
# We want a single event per room, since we want to batch the
# receipts by room, event and type.
room_event = results.setdefault(
room_id,
{"type": EduTypes.RECEIPT, "room_id": room_id, "content": {}},
)
# The content is of the form:
# {"$foo:bar": { "read": { "@user:host": <receipt> }, .. }, .. }
event_entry = room_event["content"].setdefault(event_id, {})
receipt_type_dict = event_entry.setdefault(receipt_type, {})
# MSC4102: always replace threaded receipts with unthreaded ones if there is a clash.
# Specifically:
# - if there is no existing receipt, great, set the data.
# - if there is an existing receipt, is it threaded (thread_id present)?
# YES: replace if this receipt has no thread id. NO: do not replace.
# This means we will drop some receipts, but MSC4102 is designed to drop semantically
# meaningless receipts, so this is okay. Previously, we would drop meaningful data!
receipt_data = db_to_json(data)
if user_id in receipt_type_dict: # existing receipt
# is the existing receipt threaded and we are currently processing an unthreaded one?
if "thread_id" in receipt_type_dict[user_id] and not thread_id:
receipt_type_dict[user_id] = (
receipt_data # replace with unthreaded one
)
else: # receipt does not exist, just set it
receipt_type_dict[user_id] = receipt_data
if thread_id:
receipt_type_dict[user_id]["thread_id"] = thread_id
results: JsonDict = {
room_id: {
"room_id": room_id,
"type": EduTypes.RECEIPT,
"content": ReceiptInRoom.merge_to_content(receipts),
}
for room_id, receipts in txn_results.items()
}
results = {
room_id: [results[room_id]] if room_id in results else []
@@ -479,6 +531,69 @@ class ReceiptsWorkerStore(SQLBaseStore):
}
return results
async def get_linearized_receipts_for_events(
self,
room_and_event_ids: Collection[Tuple[str, str]],
) -> Mapping[str, Sequence[ReceiptInRoom]]:
"""Get all receipts for the given set of events.
Arguments:
room_and_event_ids: A collection of 2-tuples of room ID and
event ID to fetch receipts for
Returns:
A map from room ID to the receipts in that room.
"""
if not room_and_event_ids:
return {}
def get_linearized_receipts_for_events_txn(
txn: LoggingTransaction,
room_id_event_id_tuples: Collection[Tuple[str, str]],
) -> List[Tuple[str, str, str, str, Optional[str], str]]:
clause, args = make_tuple_in_list_sql_clause(
self.database_engine, ("room_id", "event_id"), room_id_event_id_tuples
)
sql = f"""
SELECT room_id, receipt_type, user_id, event_id, thread_id, data
FROM receipts_linearized
WHERE {clause}
"""
txn.execute(sql, args)
return txn.fetchall()
# room_id -> receipts
room_to_receipts: Dict[str, List[ReceiptInRoom]] = {}
for batch in batch_iter(room_and_event_ids, 1000):
batch_results = await self.db_pool.runInteraction(
"get_linearized_receipts_for_events",
get_linearized_receipts_for_events_txn,
batch,
)
for (
room_id,
receipt_type,
user_id,
event_id,
thread_id,
data,
) in batch_results:
room_to_receipts.setdefault(room_id, []).append(
ReceiptInRoom(
receipt_type=receipt_type,
user_id=user_id,
event_id=event_id,
thread_id=thread_id,
data=db_to_json(data),
)
)
return room_to_receipts
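The batching pattern used here (and twice more below) keeps the SQL `IN`/tuple-`IN` clauses bounded; a quick sketch of `batch_iter`'s behaviour:

from synapse.util.iterutils import batch_iter

# 2500 items are split into chunks of at most 1000.
batches = list(batch_iter(range(2500), 1000))
assert [len(b) for b in batches] == [1000, 1000, 500]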
@cached(
num_args=2,
)
@@ -550,6 +665,114 @@ class ReceiptsWorkerStore(SQLBaseStore):
return results
async def get_linearized_receipts_for_user_in_rooms(
self, user_id: str, room_ids: StrCollection, to_key: MultiWriterStreamToken
) -> Mapping[str, Sequence[ReceiptInRoom]]:
"""Fetch all receipts for the user in the given room.
Returns:
A dict from room ID to receipts in the room.
"""
def get_linearized_receipts_for_user_in_rooms_txn(
txn: LoggingTransaction,
batch_room_ids: StrCollection,
) -> List[Tuple[str, str, str, str, Optional[str], str]]:
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", batch_room_ids
)
sql = f"""
SELECT instance_name, stream_id, room_id, receipt_type, user_id, event_id, thread_id, data
FROM receipts_linearized
WHERE {clause} AND user_id = ? AND stream_id <= ?
"""
args.append(user_id)
args.append(to_key.get_max_stream_pos())
txn.execute(sql, args)
return [
(room_id, receipt_type, user_id, event_id, thread_id, data)
for instance_name, stream_id, room_id, receipt_type, user_id, event_id, thread_id, data in txn
if MultiWriterStreamToken.is_stream_position_in_range(
low=None,
high=to_key,
instance_name=instance_name,
pos=stream_id,
)
]
# room_id -> receipts
room_to_receipts: Dict[str, List[ReceiptInRoom]] = {}
for batch in batch_iter(room_ids, 1000):
batch_results = await self.db_pool.runInteraction(
"get_linearized_receipts_for_events",
get_linearized_receipts_for_user_in_rooms_txn,
batch,
)
for (
room_id,
receipt_type,
user_id,
event_id,
thread_id,
data,
) in batch_results:
room_to_receipts.setdefault(room_id, []).append(
ReceiptInRoom(
receipt_type=receipt_type,
user_id=user_id,
event_id=event_id,
thread_id=thread_id,
data=db_to_json(data),
)
)
return room_to_receipts
async def get_rooms_with_receipts_between(
self,
room_ids: StrCollection,
from_key: MultiWriterStreamToken,
to_key: MultiWriterStreamToken,
) -> StrCollection:
"""Given a set of room_ids, find out which ones (may) have receipts
between the two tokens (> `from_key` and <= `to_key`)."""
room_ids = self._receipts_stream_cache.get_entities_changed(
room_ids, from_key.stream
)
if not room_ids:
return []
def f(txn: LoggingTransaction, room_ids: StrCollection) -> StrCollection:
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", room_ids
)
sql = f"""
SELECT DISTINCT room_id FROM receipts_linearized
WHERE {clause} AND ? < stream_id AND stream_id <= ?
"""
args.append(from_key.stream)
args.append(to_key.get_max_stream_pos())
txn.execute(sql, args)
return [room_id for room_id, in txn]
results: List[str] = []
for batch in batch_iter(room_ids, 1000):
batch_result = await self.db_pool.runInteraction(
"get_rooms_with_receipts_between", f, batch
)
results.extend(batch_result)
return results
async def get_users_sent_receipts_between(
self, last_id: int, current_id: int
) -> List[str]:
@@ -954,6 +1177,12 @@ class ReceiptsBackgroundUpdateStore(SQLBaseStore):
self.RECEIPTS_GRAPH_UNIQUE_INDEX_UPDATE_NAME,
self._background_receipts_graph_unique_index,
)
self.db_pool.updates.register_background_index_update(
update_name="receipts_room_id_event_id_index",
index_name="receipts_linearized_event_id",
table="receipts_linearized",
columns=("room_id", "event_id"),
)
async def _populate_receipt_event_stream_ordering(
self, progress: JsonDict, batch_size: int


@@ -19,6 +19,7 @@
#
#
import logging
from http import HTTPStatus
from typing import (
TYPE_CHECKING,
AbstractSet,
@@ -39,6 +40,7 @@ from typing import (
import attr
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.logging.opentracing import trace
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import wrap_as_background_process
@@ -51,7 +53,12 @@ from synapse.storage.database import (
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.engines import Sqlite3Engine
from synapse.storage.roommember import MemberSummary, ProfileInfo, RoomsForUser
from synapse.storage.roommember import (
MemberSummary,
ProfileInfo,
RoomsForUser,
RoomsForUserSlidingSync,
)
from synapse.types import (
JsonDict,
PersistedEventPosition,
@@ -631,10 +638,8 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
"""
# Paranoia check.
if not self.hs.is_mine_id(user_id):
raise Exception(
"Cannot call 'get_local_current_membership_for_user_in_room' on "
"non-local user %s" % (user_id,),
)
message = f"Provided user_id {user_id} is a non-local user"
raise SynapseError(HTTPStatus.BAD_REQUEST, message, errcode=Codes.BAD_JSON)
results = cast(
Optional[Tuple[str, str]],
@@ -1337,6 +1342,12 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
keyvalues={"user_id": user_id, "room_id": room_id},
updatevalues={"forgotten": 1},
)
self.db_pool.simple_update_txn(
txn,
table="sliding_sync_membership_snapshots",
keyvalues={"user_id": user_id, "room_id": room_id},
updatevalues={"forgotten": 1},
)
self._invalidate_cache_and_stream(txn, self.did_forget, (user_id, room_id))
self._invalidate_cache_and_stream(
@@ -1371,6 +1382,54 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
desc="room_forgetter_stream_pos",
)
@cached(iterable=True, max_entries=10000)
async def get_sliding_sync_rooms_for_user(
self,
user_id: str,
) -> Mapping[str, RoomsForUserSlidingSync]:
"""Get all the rooms for a user to handle a sliding sync request.
Ignores forgotten rooms.
Returns:
Map from room ID to membership info
"""
def get_sliding_sync_rooms_for_user_txn(
txn: LoggingTransaction,
) -> Dict[str, RoomsForUserSlidingSync]:
sql = """
SELECT m.room_id, m.sender, m.membership, m.membership_event_id,
r.room_version,
m.event_instance_name, m.event_stream_ordering,
COALESCE(j.room_type, m.room_type),
COALESCE(j.is_encrypted, m.is_encrypted)
FROM sliding_sync_membership_snapshots AS m
INNER JOIN rooms AS r USING (room_id)
LEFT JOIN sliding_sync_joined_rooms AS j ON (j.room_id = m.room_id AND m.membership = 'join')
WHERE user_id = ?
AND m.forgotten = 0
"""
txn.execute(sql, (user_id,))
return {
row[0]: RoomsForUserSlidingSync(
room_id=row[0],
sender=row[1],
membership=row[2],
event_id=row[3],
room_version_id=row[4],
event_pos=PersistedEventPosition(row[5], row[6]),
room_type=row[7],
is_encrypted=row[8],
)
for row in txn
}
return await self.db_pool.runInteraction(
"get_sliding_sync_rooms_for_user",
get_sliding_sync_rooms_for_user_txn,
)
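The `COALESCE` in the query above prefers the up-to-date values from `sliding_sync_joined_rooms` for joins and falls back to the membership snapshot otherwise; a sketch of the same semantics in Python (hypothetical values):

def coalesce(*values):
    # First non-NULL value wins, like SQL's COALESCE.
    return next((v for v in values if v is not None), None)

j_room_type = None       # no joined-room row matched (e.g. a `leave`)
m_room_type = "m.space"  # value snapshotted at the time of the membership
assert coalesce(j_room_type, m_room_type) == "m.space"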
class RoomMemberBackgroundUpdateStore(SQLBaseStore):
def __init__(


@@ -26,10 +26,11 @@ import attr
from synapse.logging.opentracing import trace
from synapse.storage._base import SQLBaseStore
from synapse.storage.database import LoggingTransaction
from synapse.storage.database import LoggingTransaction, make_in_list_sql_clause
from synapse.storage.databases.main.stream import _filter_results_by_stream
from synapse.types import RoomStreamToken
from synapse.types import RoomStreamToken, StrCollection
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.iterutils import batch_iter
logger = logging.getLogger(__name__)
@@ -160,43 +161,137 @@ class StateDeltasStore(SQLBaseStore):
self._get_max_stream_id_in_current_state_deltas_txn,
)
def get_current_state_deltas_for_room_txn(
self,
txn: LoggingTransaction,
room_id: str,
*,
from_token: Optional[RoomStreamToken],
to_token: Optional[RoomStreamToken],
) -> List[StateDelta]:
"""
Get the state deltas between two tokens.
(> `from_token` and <= `to_token`)
"""
from_clause = ""
from_args = []
if from_token is not None:
from_clause = "AND ? < stream_id"
from_args = [from_token.stream]
to_clause = ""
to_args = []
if to_token is not None:
to_clause = "AND stream_id <= ?"
to_args = [to_token.get_max_stream_pos()]
sql = f"""
SELECT instance_name, stream_id, type, state_key, event_id, prev_event_id
FROM current_state_delta_stream
WHERE room_id = ? {from_clause} {to_clause}
ORDER BY stream_id ASC
"""
txn.execute(sql, [room_id] + from_args + to_args)
return [
StateDelta(
stream_id=row[1],
room_id=room_id,
event_type=row[2],
state_key=row[3],
event_id=row[4],
prev_event_id=row[5],
)
for row in txn
if _filter_results_by_stream(from_token, to_token, row[0], row[1])
]
@trace
async def get_current_state_deltas_for_room(
self, room_id: str, from_token: RoomStreamToken, to_token: RoomStreamToken
self,
room_id: str,
*,
from_token: Optional[RoomStreamToken],
to_token: Optional[RoomStreamToken],
) -> List[StateDelta]:
"""Get the state deltas between two tokens."""
"""
Get the state deltas between two tokens.
if not self._curr_state_delta_stream_cache.has_entity_changed(
room_id, from_token.stream
(> `from_token` and <= `to_token`)
"""
if (
from_token is not None
and not self._curr_state_delta_stream_cache.has_entity_changed(
room_id, from_token.stream
)
):
return []
def get_current_state_deltas_for_room_txn(
return await self.db_pool.runInteraction(
"get_current_state_deltas_for_room",
self.get_current_state_deltas_for_room_txn,
room_id,
from_token=from_token,
to_token=to_token,
)
@trace
async def get_current_state_deltas_for_rooms(
self,
room_ids: StrCollection,
from_token: RoomStreamToken,
to_token: RoomStreamToken,
) -> List[StateDelta]:
"""Get the state deltas between two tokens for the set of rooms."""
room_ids = self._curr_state_delta_stream_cache.get_entities_changed(
room_ids, from_token.stream
)
if not room_ids:
return []
def get_current_state_deltas_for_rooms_txn(
txn: LoggingTransaction,
room_ids: StrCollection,
) -> List[StateDelta]:
sql = """
SELECT instance_name, stream_id, type, state_key, event_id, prev_event_id
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", room_ids
)
sql = f"""
SELECT instance_name, stream_id, room_id, type, state_key, event_id, prev_event_id
FROM current_state_delta_stream
WHERE room_id = ? AND ? < stream_id AND stream_id <= ?
WHERE {clause} AND ? < stream_id AND stream_id <= ?
ORDER BY stream_id ASC
"""
txn.execute(
sql, (room_id, from_token.stream, to_token.get_max_stream_pos())
)
args.append(from_token.stream)
args.append(to_token.get_max_stream_pos())
txn.execute(sql, args)
return [
StateDelta(
stream_id=row[1],
room_id=room_id,
event_type=row[2],
state_key=row[3],
event_id=row[4],
prev_event_id=row[5],
room_id=row[2],
event_type=row[3],
state_key=row[4],
event_id=row[5],
prev_event_id=row[6],
)
for row in txn
if _filter_results_by_stream(from_token, to_token, row[0], row[1])
]
return await self.db_pool.runInteraction(
"get_current_state_deltas_for_room", get_current_state_deltas_for_room_txn
)
results = []
for batch in batch_iter(room_ids, 1000):
deltas = await self.db_pool.runInteraction(
"get_current_state_deltas_for_rooms",
get_current_state_deltas_for_rooms_txn,
batch,
)
results.extend(deltas)
return results
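Hypothetical usage of the new bulk variant; the range is exclusive at the lower bound and inclusive at the upper bound:

async def deltas_since(store, room_ids, prev_token, now_token):
    # Sketch only: fetch deltas for several rooms in one call,
    # i.e. deltas with (> prev_token and <= now_token).
    return await store.get_current_state_deltas_for_rooms(
        room_ids, from_token=prev_token, to_token=now_token
    )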


@@ -50,6 +50,7 @@ from typing import (
Dict,
Iterable,
List,
Mapping,
Optional,
Protocol,
Set,
@@ -80,7 +81,7 @@ from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import PersistedEventPosition, RoomStreamToken, StrCollection
from synapse.util.caches.descriptors import cached
from synapse.util.caches.descriptors import cached, cachedList
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.cancellation import cancellable
from synapse.util.iterutils import batch_iter
@@ -1263,12 +1264,76 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
return None
async def get_last_event_pos_in_room(
self,
room_id: str,
event_types: Optional[StrCollection] = None,
) -> Optional[Tuple[str, PersistedEventPosition]]:
"""
Returns the ID and event position of the last event in a room.
Based on `get_last_event_pos_in_room_before_stream_ordering(...)`
Args:
room_id
event_types: Optional allowlist of event types to filter by
Returns:
The ID of the most recent event and its position, or None if there are no
events in the room that match the given event types.
"""
def _get_last_event_pos_in_room_txn(
txn: LoggingTransaction,
) -> Optional[Tuple[str, PersistedEventPosition]]:
event_type_clause = ""
event_type_args: List[str] = []
if event_types is not None and len(event_types) > 0:
event_type_clause, event_type_args = make_in_list_sql_clause(
txn.database_engine, "type", event_types
)
event_type_clause = f"AND {event_type_clause}"
sql = f"""
SELECT event_id, stream_ordering, instance_name
FROM events
LEFT JOIN rejections USING (event_id)
WHERE room_id = ?
{event_type_clause}
AND NOT outlier
AND rejections.event_id IS NULL
ORDER BY stream_ordering DESC
LIMIT 1
"""
txn.execute(
sql,
[room_id] + event_type_args,
)
row = cast(Optional[Tuple[str, int, str]], txn.fetchone())
if row is not None:
event_id, stream_ordering, instance_name = row
return event_id, PersistedEventPosition(
# If instance_name is null we default to "master"
instance_name or "master",
stream_ordering,
)
return None
return await self.db_pool.runInteraction(
"get_last_event_pos_in_room",
_get_last_event_pos_in_room_txn,
)
@trace
async def get_last_event_pos_in_room_before_stream_ordering(
self,
room_id: str,
end_token: RoomStreamToken,
event_types: Optional[Collection[str]] = None,
event_types: Optional[StrCollection] = None,
) -> Optional[Tuple[str, PersistedEventPosition]]:
"""
Returns the ID and event position of the last event in a room at or before a
@@ -1381,8 +1446,52 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
rooms
"""
# First we just get the latest positions for the room, as the vast
# majority of them will be before the given end token anyway. By doing
# this we can cache most rooms.
uncapped_results = await self._bulk_get_max_event_pos(room_ids)
# Check that the stream position for the rooms are from before the
# minimum position of the token. If not then we need to fetch more
# rows.
results: Dict[str, int] = {}
recheck_rooms: Set[str] = set()
min_token = end_token.stream
max_token = end_token.get_max_stream_pos()
for room_id, stream in uncapped_results.items():
if stream <= min_token:
results[room_id] = stream
else:
recheck_rooms.add(room_id)
if not recheck_rooms:
return results
# There shouldn't be many rooms that we need to recheck, so we do them
# one-by-one.
for room_id in recheck_rooms:
result = await self.get_last_event_pos_in_room_before_stream_ordering(
room_id, end_token
)
if result is not None:
results[room_id] = result[1].stream
return results
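A sketch of the capping strategy described above (made-up positions): cached per-room maxima are only trusted at or below the token's minimum stream position; anything newer might have been persisted "after" the token on another writer, so those rooms are rechecked individually.

min_token = 105
uncapped = {"!a:example.org": 100, "!b:example.org": 110}
safe = {r: s for r, s in uncapped.items() if s <= min_token}
recheck = set(uncapped) - set(safe)
assert safe == {"!a:example.org": 100}
assert recheck == {"!b:example.org"}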
@cached()
async def _get_max_event_pos(self, room_id: str) -> int:
raise NotImplementedError()
@cachedList(cached_method_name="_get_max_event_pos", list_name="room_ids")
async def _bulk_get_max_event_pos(
self, room_ids: StrCollection
) -> Mapping[str, int]:
"""Fetch the max position of a persisted event in the room."""
# We need to be careful not to return positions ahead of the current
# positions, so we get the current token now and cap our queries to it.
now_token = self.get_room_max_token()
max_pos = now_token.get_max_stream_pos()
results: Dict[str, int] = {}
# First, we check for the rooms in the stream change cache to see if we
@@ -1390,31 +1499,32 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
missing_room_ids: Set[str] = set()
for room_id in room_ids:
stream_pos = self._events_stream_cache.get_max_pos_of_last_change(room_id)
if stream_pos and stream_pos <= min_token:
if stream_pos is not None:
results[room_id] = stream_pos
else:
missing_room_ids.add(room_id)
if not missing_room_ids:
return results
# Next, we query the stream position from the DB. At first we fetch all
# positions less than the *max* stream pos in the token, then filter
# them down. We do this as a) this is a cheaper query, and b) the vast
# majority of rooms will have a latest token from before the min stream
# pos.
def bulk_get_last_event_pos_txn(
txn: LoggingTransaction, batch_room_ids: StrCollection
def bulk_get_max_event_pos_txn(
txn: LoggingTransaction, batched_room_ids: StrCollection
) -> Dict[str, int]:
# This query fetches the latest stream position in the rooms before
# the given max position.
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", batch_room_ids
self.database_engine, "room_id", batched_room_ids
)
sql = f"""
SELECT room_id, (
SELECT stream_ordering FROM events AS e
LEFT JOIN rejections USING (event_id)
WHERE e.room_id = r.room_id
AND stream_ordering <= ?
AND e.stream_ordering <= ?
AND NOT outlier
AND rejection_reason IS NULL
ORDER BY stream_ordering DESC
@@ -1423,72 +1533,29 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
FROM rooms AS r
WHERE {clause}
"""
txn.execute(sql, [max_token] + args)
txn.execute(sql, [max_pos] + args)
return {row[0]: row[1] for row in txn}
recheck_rooms: Set[str] = set()
for batched in batch_iter(missing_room_ids, 1000):
result = await self.db_pool.runInteraction(
"bulk_get_last_event_pos_in_room_before_stream_ordering",
bulk_get_last_event_pos_txn,
batched,
for batched in batch_iter(missing_room_ids, 1000):
batch_results = await self.db_pool.runInteraction(
"_bulk_get_max_event_pos", bulk_get_max_event_pos_txn, batched
)
# Check that the stream position for the rooms are from before the
# minimum position of the token. If not then we need to fetch more
# rows.
for room_id, stream in result.items():
if stream <= min_token:
results[room_id] = stream
for room_id, stream_ordering in batch_results.items():
if stream_ordering <= now_token.stream:
results[room_id] = stream_ordering
else:
recheck_rooms.add(room_id)
if not recheck_rooms:
return results
# For the remaining rooms we need to fetch all rows between the min and
# max stream positions in the end token, and filter out the rows that
# are after the end token.
#
# This query should be fast as the range between the min and max should
# be small.
def bulk_get_last_event_pos_recheck_txn(
txn: LoggingTransaction, batch_room_ids: StrCollection
) -> Dict[str, int]:
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", batch_room_ids
# We now need to handle rooms where the above query returned a stream
# position that was potentially too new. This should happen very rarely
# so we just query the rooms one-by-one
for room_id in recheck_rooms:
result = await self.get_last_event_pos_in_room_before_stream_ordering(
room_id, now_token
)
sql = f"""
SELECT room_id, instance_name, stream_ordering
FROM events
WHERE ? < stream_ordering AND stream_ordering <= ?
AND NOT outlier
AND rejection_reason IS NULL
AND {clause}
ORDER BY stream_ordering ASC
"""
txn.execute(sql, [min_token, max_token] + args)
# We take the max stream ordering that is less than the token. Since
# we ordered by stream ordering we just need to iterate through and
# take the last matching stream ordering.
txn_results: Dict[str, int] = {}
for row in txn:
room_id = row[0]
event_pos = PersistedEventPosition(row[1], row[2])
if not event_pos.persisted_after(end_token):
txn_results[room_id] = event_pos.stream
return txn_results
for batched in batch_iter(recheck_rooms, 1000):
recheck_result = await self.db_pool.runInteraction(
"bulk_get_last_event_pos_in_room_before_stream_ordering_recheck",
bulk_get_last_event_pos_recheck_txn,
batched,
)
results.update(recheck_result)
if result is not None:
results[room_id] = result[1].stream
return results


@@ -39,6 +39,19 @@ class RoomsForUser:
room_version_id: str
@attr.s(slots=True, frozen=True, weakref_slot=False, auto_attribs=True)
class RoomsForUserSlidingSync:
room_id: str
sender: Optional[str]
membership: str
event_id: Optional[str]
event_pos: PersistedEventPosition
room_version_id: str
room_type: Optional[str]
is_encrypted: bool
@attr.s(slots=True, frozen=True, weakref_slot=False, auto_attribs=True)
class GetRoomsForUserWithStreamOrdering:
room_id: str


@@ -19,7 +19,7 @@
#
#
SCHEMA_VERSION = 86 # remember to update the list below when updating
SCHEMA_VERSION = 87 # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema
This should be incremented whenever the codebase changes its requirements on the
@@ -142,6 +142,10 @@ Changes in SCHEMA_VERSION = 85
Changes in SCHEMA_VERSION = 86
- Add a column `authenticated` to the tables `local_media_repository` and `remote_media_cache`
Changes in SCHEMA_VERSION = 87
- Add tables to store Sliding Sync data for quick filtering/sorting
(`sliding_sync_joined_rooms`, `sliding_sync_membership_snapshots`)
"""


@@ -0,0 +1,15 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8602, 'receipts_room_id_event_id_index', '{}');


@@ -0,0 +1,169 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- This table is a list/queue used to keep track of which rooms need to be inserted into
-- `sliding_sync_joined_rooms`. We do this to avoid reading from `current_state_events`
-- during the background update to populate `sliding_sync_joined_rooms`. That approach
-- works, but it takes a lot of work for the database to grab the `DISTINCT` room_ids
-- given how many state events there are for each room.
--
-- This table is prefilled with every room in the `rooms` table (see the
-- `sliding_sync_prefill_joined_rooms_to_recalculate_table_bg_update` background
-- update). This table is also updated whenever we come across stale data so that we can
-- catch-up with all of the new data if Synapse was downgraded (see
-- `_resolve_stale_data_in_sliding_sync_tables`).
--
-- FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
-- foreground update for
-- `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
-- https://github.com/element-hq/synapse/issues/17623)
CREATE TABLE IF NOT EXISTS sliding_sync_joined_rooms_to_recalculate(
room_id TEXT NOT NULL REFERENCES rooms(room_id),
PRIMARY KEY (room_id)
);
-- A table for storing room meta data (current state relevant to sliding sync) that the
-- local server is still participating in (someone local is joined to the room).
--
-- We store the joined rooms in a separate table from `sliding_sync_membership_snapshots`
-- because we need up-to-date information for joined rooms and it can be shared across
-- everyone who is joined.
--
-- This table is kept in sync with `current_state_events` which means if the server is
-- no longer participating in a room, the row will be deleted.
CREATE TABLE IF NOT EXISTS sliding_sync_joined_rooms(
room_id TEXT NOT NULL REFERENCES rooms(room_id),
-- The `stream_ordering` of the most-recent/latest event in the room
event_stream_ordering BIGINT NOT NULL REFERENCES events(stream_ordering),
-- The `stream_ordering` of the last event according to the `bump_event_types`
bump_stamp BIGINT,
-- `m.room.create` -> `content.type` (current state)
--
-- Useful for the `spaces`/`not_spaces` filter in the Sliding Sync API
room_type TEXT,
-- `m.room.name` -> `content.name` (current state)
--
-- Useful for the room meta data and `room_name_like` filter in the Sliding Sync API
room_name TEXT,
-- `m.room.encryption` -> `content.algorithm` (current state)
--
-- Useful for the `is_encrypted` filter in the Sliding Sync API
is_encrypted BOOLEAN DEFAULT FALSE NOT NULL,
-- `m.room.tombstone` -> `content.replacement_room` (current state)
--
-- Useful for the `include_old_rooms` functionality in the Sliding Sync API
tombstone_successor_room_id TEXT,
PRIMARY KEY (room_id)
);
-- So we can purge rooms easily.
--
-- The primary key is already `room_id`
-- So we can sort by `stream_ordering`
CREATE UNIQUE INDEX IF NOT EXISTS sliding_sync_joined_rooms_event_stream_ordering ON sliding_sync_joined_rooms(event_stream_ordering);
-- A table for storing a snapshot of room meta data (historical current state relevant
-- for sliding sync) at the time of a local user's membership. Only has rows for the
-- latest membership event for a given local user in a room which matches
-- `local_current_membership`.
--
-- We store all memberships including joins. This makes it easy to reference this table
-- to find all memberships for a given user and shares the same semantics as
-- `local_current_membership`. And we get to avoid some table maintenance; if we only
-- stored non-joins, we would have to delete the row for the user when the user joins
-- the room.
--
-- For remote invites/knocks where the server is not participating in the room, we will
-- use stripped state events to populate this table. We assume that if any stripped
-- state is given, it will include all possible stripped state event types. For
-- example, if stripped state is given but `m.room.encryption` isn't included, we will
-- assume that the room is not encrypted. Stripped state doesn't include the
-- `m.room.tombstone` event, so we just assume that the room doesn't have a tombstone.
--
-- We don't include `bump_stamp` here because we can just use the `stream_ordering` from
-- the membership event itself as the `bump_stamp`.
CREATE TABLE IF NOT EXISTS sliding_sync_membership_snapshots(
room_id TEXT NOT NULL REFERENCES rooms(room_id),
user_id TEXT NOT NULL,
-- Useful to be able to tell leaves from kicks (where the `user_id` is different from the `sender`)
sender TEXT NOT NULL,
membership_event_id TEXT NOT NULL REFERENCES events(event_id),
membership TEXT NOT NULL,
-- This is an integer just to match `room_memberships` and also means we don't need
-- to do any casting.
forgotten INTEGER DEFAULT 0 NOT NULL,
-- `stream_ordering` of the `membership_event_id`
event_stream_ordering BIGINT NOT NULL REFERENCES events(stream_ordering),
-- `instance_name` of the worker that persisted the `membership_event_id`.
-- Useful for crafting `PersistedEventPosition(...)`
event_instance_name TEXT NOT NULL,
-- For remote invites/knocks that don't include any stripped state, we want to be
-- able to distinguish between a room with `None` as valid value for some state and
-- room where the state is completely unknown. Basically, this should be True unless
-- no stripped state was provided for a remote invite/knock (False).
has_known_state BOOLEAN DEFAULT FALSE NOT NULL,
-- `m.room.create` -> `content.type` (according to the current state at the time of
-- the membership).
--
-- Useful for the `spaces`/`not_spaces` filter in the Sliding Sync API
room_type TEXT,
-- `m.room.name` -> `content.name` (according to the current state at the time of
-- the membership).
--
-- Useful for the room meta data and `room_name_like` filter in the Sliding Sync API
room_name TEXT,
-- `m.room.encryption` -> `content.algorithm` (according to the current state at the
-- time of the membership).
--
-- Useful for the `is_encrypted` filter in the Sliding Sync API
is_encrypted BOOLEAN DEFAULT FALSE NOT NULL,
-- `m.room.tombstone` -> `content.replacement_room` (according to the current state at the
-- time of the membership).
--
-- Useful for the `include_old_rooms` functionality in the Sliding Sync API
tombstone_successor_room_id TEXT,
PRIMARY KEY (room_id, user_id)
);
-- So we can purge rooms easily.
--
-- Since we're using a multi-column index as the primary key (room_id, user_id), the
-- first index column (room_id) is always usable for searching so we don't need to
-- create a separate index for it.
--
-- CREATE INDEX IF NOT EXISTS sliding_sync_membership_snapshots_room_id ON sliding_sync_membership_snapshots(room_id);
-- So we can fetch all rooms for a given user
CREATE INDEX IF NOT EXISTS sliding_sync_membership_snapshots_user_id ON sliding_sync_membership_snapshots(user_id);
-- So we can sort by `stream_ordering`
CREATE UNIQUE INDEX IF NOT EXISTS sliding_sync_membership_snapshots_event_stream_ordering ON sliding_sync_membership_snapshots(event_stream_ordering);
-- Add a series of background updates to populate the new `sliding_sync_joined_rooms` table:
--
-- 1. Add a background update to prefill `sliding_sync_joined_rooms_to_recalculate`.
-- We do a one-shot bulk insert from the `rooms` table to prefill.
-- 2. Add a background update to populate the new `sliding_sync_joined_rooms` table
-- based on the rooms listed in the `sliding_sync_joined_rooms_to_recalculate`
-- table.
--
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8701, 'sliding_sync_prefill_joined_rooms_to_recalculate_table_bg_update', '{}');
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
(8701, 'sliding_sync_joined_rooms_bg_update', '{}', 'sliding_sync_prefill_joined_rooms_to_recalculate_table_bg_update');
-- Add a background update to populate the new `sliding_sync_membership_snapshots` table
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8701, 'sliding_sync_membership_snapshots_bg_update', '{}');


@@ -17,33 +17,23 @@
# [This file includes modifications made by New Vector Limited]
#
#
from enum import Enum
from typing import TYPE_CHECKING, Dict, Final, List, Mapping, Optional, Sequence, Tuple
import attr
from typing_extensions import TypedDict
from synapse._pydantic_compat import HAS_PYDANTIC_V2
from typing import List, Optional, TypedDict
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import Extra
else:
from pydantic import Extra
from synapse.api.constants import EventTypes
from synapse.events import EventBase
from synapse.types import (
DeviceListUpdates,
JsonDict,
JsonMapping,
Requester,
SlidingSyncStreamToken,
StreamToken,
UserID,
)
from synapse.types.rest.client import SlidingSyncBody
if TYPE_CHECKING:
from synapse.handlers.relations import BundledAggregations
# Sliding Sync: The event types that clients should consider as new activity and affect
# the `bump_stamp`
SLIDING_SYNC_DEFAULT_BUMP_EVENT_TYPES = {
EventTypes.Create,
EventTypes.Message,
EventTypes.Encrypted,
EventTypes.Sticker,
EventTypes.CallInvite,
EventTypes.PollStart,
EventTypes.LiveLocationShareStart,
}
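For illustration, a bump stamp could be derived from a timeline like this (standalone sketch; the real computation lives in the sliding sync handler):

def latest_bump_stamp(timeline_events, bump_types):
    # timeline_events: (event_type, stream_ordering) pairs.
    stamps = [pos for etype, pos in timeline_events if etype in bump_types]
    return max(stamps, default=None)

# A membership change does not bump the room; a message does.
events = [("m.room.member", 7), ("m.room.message", 9), ("m.room.topic", 11)]
assert latest_bump_stamp(events, {"m.room.message"}) == 9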
class ShutdownRoomParams(TypedDict):
@@ -101,331 +91,3 @@ class ShutdownRoomResponse(TypedDict):
failed_to_kick_users: List[str]
local_aliases: List[str]
new_room_id: Optional[str]
class SlidingSyncConfig(SlidingSyncBody):
"""
Inherit from `SlidingSyncBody` since we need all of the same fields and add a few
extra fields that we need in the handler
"""
user: UserID
requester: Requester
# Pydantic config
class Config:
# By default, ignore fields that we don't recognise.
extra = Extra.ignore
# By default, don't allow fields to be reassigned after parsing.
allow_mutation = False
# Allow custom types like `UserID` to be used in the model
arbitrary_types_allowed = True
class OperationType(Enum):
"""
Represents the operation types in a Sliding Sync window.
Attributes:
SYNC: Sets a range of entries. Clients SHOULD discard what they previously knew about
entries in this range.
INSERT: Sets a single entry. If the position is not empty then clients MUST move
entries to the left or the right depending on where the closest empty space is.
DELETE: Remove a single entry. Often comes before an INSERT to allow entries to move
places.
INVALIDATE: Remove a range of entries. Clients MAY persist the invalidated range for
offline support, but they should be treated as empty when additional operations
which concern indexes in the range arrive from the server.
"""
SYNC: Final = "SYNC"
INSERT: Final = "INSERT"
DELETE: Final = "DELETE"
INVALIDATE: Final = "INVALIDATE"
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SlidingSyncResult:
"""
The Sliding Sync result to be serialized to JSON for a response.
Attributes:
next_pos: The next position token in the sliding window to request (next_batch).
lists: Sliding window API. A map of list key to list results.
rooms: Room subscription API. A map of room ID to room results.
extensions: Extensions API. A map of extension key to extension results.
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomResult:
"""
Attributes:
name: Room name or calculated room name.
avatar: Room avatar
heroes: List of stripped membership events (containing `user_id` and optionally
`avatar_url` and `displayname`) for the users used to calculate the room name.
is_dm: Flag to specify whether the room is a direct-message room (most likely
between two people).
initial: Flag which is set when this is the first time the server is sending this
data on this connection. Clients can use this flag to replace or update
their local state. When there is an update, servers MUST omit this flag
entirely and NOT send "initial":false as this is wasteful on bandwidth. The
absence of this flag means 'false'.
required_state: The current state of the room
timeline: Latest events in the room. The last event is the most recent.
bundled_aggregations: A mapping of event ID to the bundled aggregations for
the timeline events above. This allows clients to show accurate reaction
counts (or edits, threads), even if some of the reaction events were skipped
over in a gappy sync.
stripped_state: Stripped state events (for rooms where the user is
invited/knocked). Same as `rooms.invite.$room_id.invite_state` in sync v2,
absent on joined/left rooms
prev_batch: A token that can be passed as a start parameter to the
`/rooms/<room_id>/messages` API to retrieve earlier messages.
limited: True if there are more events than `timeline_limit` looking
backwards from the `response.pos` to the `request.pos`.
num_live: The number of timeline events which have just occurred and are not historical.
The last N events are 'live' and should be treated as such. This is mostly
useful to determine whether a given @mention event should make a noise or not.
Clients cannot rely solely on the absence of `initial: true` to determine live
events because if a room not in the sliding window bumps into the window because
of an @mention it will have `initial: true` yet contain a single live event
(with potentially other old events in the timeline).
bump_stamp: The `stream_ordering` of the last event according to the
`bump_event_types`. This helps clients sort more readily without them
needing to pull in a bunch of the timeline to determine the last activity.
`bump_event_types` exists because, for example, we don't want display
name changes to mark the room as unread and bump it to the top. For
encrypted rooms, we just have to consider any activity as a bump because we
can't see the content and the client has to figure it out for themselves.
joined_count: The number of users with membership of join, including the client's
own user ID. (same as sync `v2 m.joined_member_count`)
invited_count: The number of users with membership of invite. (same as sync v2
`m.invited_member_count`)
notification_count: The total number of unread notifications for this room. (same
as sync v2)
highlight_count: The number of unread notifications for this room with the highlight
flag set. (same as sync v2)
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class StrippedHero:
user_id: str
display_name: Optional[str]
avatar_url: Optional[str]
name: Optional[str]
avatar: Optional[str]
heroes: Optional[List[StrippedHero]]
is_dm: bool
initial: bool
# Should be empty for invite/knock rooms with `stripped_state`
required_state: List[EventBase]
# Should be empty for invite/knock rooms with `stripped_state`
timeline_events: List[EventBase]
bundled_aggregations: Optional[Dict[str, "BundledAggregations"]]
# Optional because it's only relevant to invite/knock rooms
stripped_state: List[JsonDict]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
prev_batch: Optional[StreamToken]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
limited: Optional[bool]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
num_live: Optional[int]
bump_stamp: int
joined_count: int
invited_count: int
notification_count: int
highlight_count: int
def __bool__(self) -> bool:
return (
# If this is the first time the client is seeing the room, we should not filter it out
# under any circumstance.
self.initial
# We need to let the client know if there are any new events
or bool(self.required_state)
or bool(self.timeline_events)
or bool(self.stripped_state)
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SlidingWindowList:
"""
Attributes:
count: The total number of entries in the list. Always present if this list
is present.
ops: The sliding list operations to perform.
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class Operation:
"""
Attributes:
op: The operation type to perform.
range: Which index positions are affected by this operation. These are
both inclusive.
room_ids: Which room IDs are affected by this operation. These IDs match
up to the positions in the `range`, so the last room ID in this list
matches the last index in the `range`. The room data is held in a separate object.
"""
op: OperationType
range: Tuple[int, int]
room_ids: List[str]
count: int
ops: List[Operation]
@attr.s(slots=True, frozen=True, auto_attribs=True)
class Extensions:
"""Responses for extensions
Attributes:
to_device: The to-device extension (MSC3885)
e2ee: The E2EE device extension (MSC3884)
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ToDeviceExtension:
"""The to-device extension (MSC3885)
Attributes:
next_batch: The to-device stream token the client should use
to get more results
events: A list of to-device messages for the client
"""
next_batch: str
events: Sequence[JsonMapping]
def __bool__(self) -> bool:
return bool(self.events)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class E2eeExtension:
"""The E2EE device extension (MSC3884)
Attributes:
device_list_updates: List of user_ids whose devices have changed or left (only
present on incremental syncs).
device_one_time_keys_count: Map from key algorithm to the number of
unclaimed one-time keys currently held on the server for this device. If
an algorithm is unlisted, the count for that algorithm is assumed to be
zero. If this entire parameter is missing, the count for all algorithms
is assumed to be zero.
device_unused_fallback_key_types: List of unused fallback key algorithms
for this device.
"""
# Only present on incremental syncs
device_list_updates: Optional[DeviceListUpdates]
device_one_time_keys_count: Mapping[str, int]
device_unused_fallback_key_types: Sequence[str]
def __bool__(self) -> bool:
# Note that "signed_curve25519" is always returned in key count responses
# regardless of whether we uploaded any keys for it. This is necessary until
# https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
#
# Also related:
# https://github.com/element-hq/element-android/issues/3725 and
# https://github.com/matrix-org/synapse/issues/10456
default_otk = self.device_one_time_keys_count.get("signed_curve25519")
more_than_default_otk = len(self.device_one_time_keys_count) > 1 or (
default_otk is not None and default_otk > 0
)
return bool(
more_than_default_otk
or self.device_list_updates
or self.device_unused_fallback_key_types
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class AccountDataExtension:
"""The Account Data extension (MSC3959)
Attributes:
global_account_data_map: Mapping from `type` to `content` of global account
data events.
account_data_by_room_map: Mapping from room_id to mapping of `type` to
`content` of room account data events.
"""
global_account_data_map: Mapping[str, JsonMapping]
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]]
def __bool__(self) -> bool:
return bool(
self.global_account_data_map or self.account_data_by_room_map
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ReceiptsExtension:
"""The Receipts extension (MSC3960)
Attributes:
room_id_to_receipt_map: Mapping from room_id to `m.receipt` ephemeral
event (type, content)
"""
room_id_to_receipt_map: Mapping[str, JsonMapping]
def __bool__(self) -> bool:
return bool(self.room_id_to_receipt_map)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class TypingExtension:
"""The Typing Notification extension (MSC3961)
Attributes:
room_id_to_typing_map: Mapping from room_id to `m.typing` ephemeral
event (type, content)
"""
room_id_to_typing_map: Mapping[str, JsonMapping]
def __bool__(self) -> bool:
return bool(self.room_id_to_typing_map)
to_device: Optional[ToDeviceExtension] = None
e2ee: Optional[E2eeExtension] = None
account_data: Optional[AccountDataExtension] = None
receipts: Optional[ReceiptsExtension] = None
typing: Optional[TypingExtension] = None
def __bool__(self) -> bool:
return bool(
self.to_device
or self.e2ee
or self.account_data
or self.receipts
or self.typing
)
next_pos: SlidingSyncStreamToken
lists: Dict[str, SlidingWindowList]
rooms: Dict[str, RoomResult]
extensions: Extensions
def __bool__(self) -> bool:
"""Make the result appear empty if there are no updates. This is used
to tell if the notifier needs to wait for more events when polling for
events.
"""
# We don't include `self.lists` here, as a) `lists` is always non-empty even if
# there are no changes, and b) since we're sorting rooms by `stream_ordering` of
# the latest activity, anything that would cause the order to change would end
# up in `self.rooms` and cause us to send down the change.
return bool(self.rooms or self.extensions)
@staticmethod
def empty(next_pos: SlidingSyncStreamToken) -> "SlidingSyncResult":
"Return a new empty result"
return SlidingSyncResult(
next_pos=next_pos,
lists={},
rooms={},
extensions=SlidingSyncResult.Extensions(),
)

View File

@@ -0,0 +1,853 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
import typing
from collections import ChainMap
from enum import Enum
from typing import (
TYPE_CHECKING,
AbstractSet,
Callable,
Dict,
Final,
Generic,
List,
Mapping,
MutableMapping,
Optional,
Sequence,
Set,
Tuple,
TypeVar,
cast,
)
import attr
from synapse._pydantic_compat import HAS_PYDANTIC_V2
from synapse.api.constants import EventTypes
from synapse.types import MultiWriterStreamToken, RoomStreamToken, StrCollection, UserID
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import Extra
else:
from pydantic import Extra
from synapse.events import EventBase
from synapse.types import (
DeviceListUpdates,
JsonDict,
JsonMapping,
Requester,
SlidingSyncStreamToken,
StreamToken,
)
from synapse.types.rest.client import SlidingSyncBody
if TYPE_CHECKING:
from synapse.handlers.relations import BundledAggregations
logger = logging.getLogger(__name__)
class SlidingSyncConfig(SlidingSyncBody):
"""
Inherit from `SlidingSyncBody` since we need all of the same fields and add a few
extra fields that we need in the handler
"""
user: UserID
requester: Requester
# Pydantic config
class Config:
# By default, ignore fields that we don't recognise.
extra = Extra.ignore
# By default, don't allow fields to be reassigned after parsing.
allow_mutation = False
# Allow custom types like `UserID` to be used in the model
arbitrary_types_allowed = True
class OperationType(Enum):
"""
Represents the operation types in a Sliding Sync window.
Attributes:
SYNC: Sets a range of entries. Clients SHOULD discard what they previously knew about
entries in this range.
INSERT: Sets a single entry. If the position is not empty then clients MUST move
entries to the left or the right depending on where the closest empty space is.
DELETE: Remove a single entry. Often comes before an INSERT to allow entries to move
places.
INVALIDATE: Remove a range of entries. Clients MAY persist the invalidated range for
offline support, but they should be treated as empty when additional operations
which concern indexes in the range arrive from the server.
"""
SYNC: Final = "SYNC"
INSERT: Final = "INSERT"
DELETE: Final = "DELETE"
INVALIDATE: Final = "INVALIDATE"
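# A hedged sketch of how a *client* might apply these operations to its local
# copy of a window (illustrative only; client behaviour is out of scope for
# this module). `window` is a plain list of Optional[room_id] by position.
from typing import List, Optional, Tuple

def apply_op_sketch(
    window: List[Optional[str]],
    op: OperationType,
    rng: Tuple[int, int],
    room_ids: List[str],
) -> None:
    start, end = rng  # both ends inclusive
    if op == OperationType.SYNC:
        # Overwrite the whole range with the given room IDs.
        for i, room_id in zip(range(start, end + 1), room_ids):
            window[i] = room_id
    elif op == OperationType.INVALIDATE:
        for i in range(start, end + 1):
            window[i] = None
    elif op == OperationType.DELETE:
        window[start] = None
    elif op == OperationType.INSERT:
        # Real clients must first shift entries toward the nearest empty
        # position if `start` is occupied; elided here for brevity.
        window[start] = room_ids[0]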
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SlidingSyncResult:
"""
The Sliding Sync result to be serialized to JSON for a response.
Attributes:
next_pos: The next position token in the sliding window to request (next_batch).
lists: Sliding window API. A map of list key to list results.
rooms: Room subscription API. A map of room ID to room results.
extensions: Extensions API. A map of extension key to extension results.
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomResult:
"""
Attributes:
name: Room name or calculated room name.
avatar: Room avatar
heroes: List of stripped membership events (containing `user_id` and optionally
`avatar_url` and `displayname`) for the users used to calculate the room name.
is_dm: Flag to specify whether the room is a direct-message room (most likely
between two people).
initial: Flag which is set when this is the first time the server is sending this
data on this connection. Clients can use this flag to replace or update
their local state. When there is an update, servers MUST omit this flag
entirely and NOT send "initial":false as this is wasteful on bandwidth. The
absence of this flag means 'false'.
unstable_expanded_timeline: Flag which is set if we're returning more historic
events due to the timeline limit having increased. See "XXX: Odd behavior"
comment in `synapse.handlers.sliding_sync`.
required_state: The current state of the room
timeline: Latest events in the room. The last event is the most recent.
bundled_aggregations: A mapping of event ID to the bundled aggregations for
the timeline events above. This allows clients to show accurate reaction
counts (or edits, threads), even if some of the reaction events were skipped
over in a gappy sync.
stripped_state: Stripped state events (for rooms where the user is
invited/knocked). Same as `rooms.invite.$room_id.invite_state` in sync v2,
absent on joined/left rooms
prev_batch: A token that can be passed as a start parameter to the
`/rooms/<room_id>/messages` API to retrieve earlier messages.
limited: True if there are more events than `timeline_limit` looking
backwards from the `response.pos` to the `request.pos`.
num_live: The number of timeline events which have just occurred and are not historical.
The last N events are 'live' and should be treated as such. This is mostly
useful to determine whether a given @mention event should make a noise or not.
Clients cannot rely solely on the absence of `initial: true` to determine live
events because if a room not in the sliding window bumps into the window because
of an @mention it will have `initial: true` yet contain a single live event
(with potentially other old events in the timeline).
bump_stamp: The `stream_ordering` of the last event according to the
`bump_event_types`. This helps clients sort more readily without them
needing to pull in a bunch of the timeline to determine the last activity.
`bump_event_types` exists because, for example, we don't want display
name changes to mark the room as unread and bump it to the top. For
encrypted rooms, we just have to consider any activity as a bump because we
can't see the content and the client has to figure it out for themselves.
joined_count: The number of users with membership of join, including the client's
own user ID. (same as sync `v2 m.joined_member_count`)
invited_count: The number of users with membership of invite. (same as sync v2
`m.invited_member_count`)
notification_count: The total number of unread notifications for this room. (same
as sync v2)
highlight_count: The number of unread notifications for this room with the highlight
flag set. (same as sync v2)
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class StrippedHero:
user_id: str
display_name: Optional[str]
avatar_url: Optional[str]
name: Optional[str]
avatar: Optional[str]
heroes: Optional[List[StrippedHero]]
is_dm: bool
initial: bool
unstable_expanded_timeline: bool
# Should be empty for invite/knock rooms with `stripped_state`
required_state: List[EventBase]
# Should be empty for invite/knock rooms with `stripped_state`
timeline_events: List[EventBase]
bundled_aggregations: Optional[Dict[str, "BundledAggregations"]]
# Optional because it's only relevant to invite/knock rooms
stripped_state: List[JsonDict]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
prev_batch: Optional[StreamToken]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
limited: Optional[bool]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
num_live: Optional[int]
bump_stamp: int
joined_count: int
invited_count: int
notification_count: int
highlight_count: int
def __bool__(self) -> bool:
return (
# If this is the first time the client is seeing the room, we should not filter it out
# under any circumstance.
self.initial
# We need to let the client know if there are any new events
or bool(self.required_state)
or bool(self.timeline_events)
or bool(self.stripped_state)
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SlidingWindowList:
"""
Attributes:
count: The total number of entries in the list. Always present if this list
is present.
ops: The sliding list operations to perform.
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class Operation:
"""
Attributes:
op: The operation type to perform.
range: Which index positions are affected by this operation. These are
both inclusive.
room_ids: Which room IDs are affected by this operation. These IDs match
up to the positions in the `range`, so the last room ID in this list
matches the last index in the `range`. The room data is held in a separate object.
"""
op: OperationType
range: Tuple[int, int]
room_ids: List[str]
count: int
ops: List[Operation]
@attr.s(slots=True, frozen=True, auto_attribs=True)
class Extensions:
"""Responses for extensions
Attributes:
to_device: The to-device extension (MSC3885)
e2ee: The E2EE device extension (MSC3884)
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ToDeviceExtension:
"""The to-device extension (MSC3885)
Attributes:
next_batch: The to-device stream token the client should use
to get more results
events: A list of to-device messages for the client
"""
next_batch: str
events: Sequence[JsonMapping]
def __bool__(self) -> bool:
return bool(self.events)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class E2eeExtension:
"""The E2EE device extension (MSC3884)
Attributes:
device_list_updates: List of user_ids whose devices have changed or left (only
present on incremental syncs).
device_one_time_keys_count: Map from key algorithm to the number of
unclaimed one-time keys currently held on the server for this device. If
an algorithm is unlisted, the count for that algorithm is assumed to be
zero. If this entire parameter is missing, the count for all algorithms
is assumed to be zero.
device_unused_fallback_key_types: List of unused fallback key algorithms
for this device.
"""
# Only present on incremental syncs
device_list_updates: Optional[DeviceListUpdates]
device_one_time_keys_count: Mapping[str, int]
device_unused_fallback_key_types: Sequence[str]
def __bool__(self) -> bool:
# Note that "signed_curve25519" is always returned in key count responses
# regardless of whether we uploaded any keys for it. This is necessary until
# https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
#
# Also related:
# https://github.com/element-hq/element-android/issues/3725 and
# https://github.com/matrix-org/synapse/issues/10456
default_otk = self.device_one_time_keys_count.get("signed_curve25519")
more_than_default_otk = len(self.device_one_time_keys_count) > 1 or (
default_otk is not None and default_otk > 0
)
return bool(
more_than_default_otk
or self.device_list_updates
or self.device_unused_fallback_key_types
)
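# Worked example of the emptiness rule above: a count map containing only the
# always-present "signed_curve25519" bucket with a zero count does not, on its
# own, make the extension non-empty.
_e2ee_example = SlidingSyncResult.Extensions.E2eeExtension(
    device_list_updates=None,
    device_one_time_keys_count={"signed_curve25519": 0},
    device_unused_fallback_key_types=[],
)
assert not _e2ee_example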
@attr.s(slots=True, frozen=True, auto_attribs=True)
class AccountDataExtension:
"""The Account Data extension (MSC3959)
Attributes:
global_account_data_map: Mapping from `type` to `content` of global account
data events.
account_data_by_room_map: Mapping from room_id to mapping of `type` to
`content` of room account data events.
"""
global_account_data_map: Mapping[str, JsonMapping]
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]]
def __bool__(self) -> bool:
return bool(
self.global_account_data_map or self.account_data_by_room_map
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ReceiptsExtension:
"""The Receipts extension (MSC3960)
Attributes:
room_id_to_receipt_map: Mapping from room_id to `m.receipt` ephemeral
event (type, content)
"""
room_id_to_receipt_map: Mapping[str, JsonMapping]
def __bool__(self) -> bool:
return bool(self.room_id_to_receipt_map)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class TypingExtension:
"""The Typing Notification extension (MSC3961)
Attributes:
room_id_to_typing_map: Mapping from room_id to `m.typing` ephemeral
event (type, content)
"""
room_id_to_typing_map: Mapping[str, JsonMapping]
def __bool__(self) -> bool:
return bool(self.room_id_to_typing_map)
to_device: Optional[ToDeviceExtension] = None
e2ee: Optional[E2eeExtension] = None
account_data: Optional[AccountDataExtension] = None
receipts: Optional[ReceiptsExtension] = None
typing: Optional[TypingExtension] = None
def __bool__(self) -> bool:
return bool(
self.to_device
or self.e2ee
or self.account_data
or self.receipts
or self.typing
)
next_pos: SlidingSyncStreamToken
lists: Mapping[str, SlidingWindowList]
rooms: Dict[str, RoomResult]
extensions: Extensions
def __bool__(self) -> bool:
"""Make the result appear empty if there are no updates. This is used
to tell if the notifier needs to wait for more events when polling for
events.
"""
# We don't include `self.lists` here, as a) `lists` is always non-empty even if
# there are no changes, and b) since we're sorting rooms by `stream_ordering` of
# the latest activity, anything that would cause the order to change would end
# up in `self.rooms` and cause us to send down the change.
return bool(self.rooms or self.extensions)
@staticmethod
def empty(next_pos: SlidingSyncStreamToken) -> "SlidingSyncResult":
"Return a new empty result"
return SlidingSyncResult(
next_pos=next_pos,
lists={},
rooms={},
extensions=SlidingSyncResult.Extensions(),
)
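# Quick check of the falsiness contract documented in `__bool__` above, using
# a dummy token (the token's value is irrelevant to emptiness):
from unittest.mock import Mock

assert not SlidingSyncResult.empty(next_pos=Mock(spec=SlidingSyncStreamToken))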
class StateValues:
"""
Understood values of the (type, state_key) tuple in `required_state`.
"""
# Include all state events of the given type
WILDCARD: Final = "*"
# Lazy-load room membership events (include room membership events for any event
# `sender` in the timeline). We only give special meaning to this value when it's a
# `state_key`.
LAZY: Final = "$LAZY"
# Substitute the requester's user ID. Typically used by clients to get
# the user's membership.
ME: Final = "$ME"
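# Example `required_state` entries using the special values above
# (illustrative; the exact (type, state_key) tuples a client sends are up to
# the client):
_EXAMPLE_REQUIRED_STATE = [
    (EventTypes.Name, ""),                  # one specific piece of state
    (EventTypes.Member, StateValues.LAZY),  # lazy-load timeline senders' memberships
    (EventTypes.Member, StateValues.ME),    # the requester's own membership
    (StateValues.WILDCARD, ""),             # any event type with an empty state_key
]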
# We can't freeze this class because we want to update it in place with the
# de-duplicated data.
@attr.s(slots=True, auto_attribs=True, frozen=True)
class RoomSyncConfig:
"""
Holds the config for what data we should fetch for a room in the sync response.
Attributes:
timeline_limit: The maximum number of events to return in the timeline.
required_state_map: Map from state event type to state_keys requested for the
room. The values are close to `StateKey` but actually use a syntax where you
can provide `*` wildcard and `$LAZY` for lazy-loading room members.
"""
timeline_limit: int
required_state_map: Mapping[str, AbstractSet[str]]
@classmethod
def from_room_config(
cls,
room_params: SlidingSyncConfig.CommonRoomParameters,
) -> "RoomSyncConfig":
"""
Create a `RoomSyncConfig` from a `SlidingSyncList`/`RoomSubscription` config.
Args:
room_params: `SlidingSyncConfig.SlidingSyncList` or `SlidingSyncConfig.RoomSubscription`
"""
required_state_map: Dict[str, Set[str]] = {}
for (
state_type,
state_key,
) in room_params.required_state:
# If we already have a wildcard for this specific `state_key`, we don't need
# to add it since the wildcard already covers it.
if state_key in required_state_map.get(StateValues.WILDCARD, set()):
continue
# If we already have a wildcard `state_key` for this `state_type`, we don't need
# to add anything else
if StateValues.WILDCARD in required_state_map.get(state_type, set()):
continue
# If we're getting wildcards for the `state_type` and `state_key`, that's
# all that matters so get rid of any other entries
if state_type == StateValues.WILDCARD and state_key == StateValues.WILDCARD:
required_state_map = {StateValues.WILDCARD: {StateValues.WILDCARD}}
# We can break, since we don't need to add anything else
break
# If we're getting a wildcard for the `state_type`, get rid of any other
# entries with the same `state_key`, since the wildcard will cover it already.
elif state_type == StateValues.WILDCARD:
# Get rid of any entries that match the `state_key`
#
# Make a copy so we don't run into an error: `dictionary changed size
# during iteration`, when we remove items
for (
existing_state_type,
existing_state_key_set,
) in list(required_state_map.items()):
# Make a copy so we don't run into an error: `Set changed size during
# iteration`, when we filter out and remove items
for existing_state_key in existing_state_key_set.copy():
if existing_state_key == state_key:
existing_state_key_set.remove(state_key)
# If we've left the `set()` empty, remove it from the map
if existing_state_key_set == set():
required_state_map.pop(existing_state_type, None)
# If we're getting a wildcard `state_key`, get rid of any other state_keys
# for this `state_type` since the wildcard will cover it already.
if state_key == StateValues.WILDCARD:
required_state_map[state_type] = {state_key}
# Otherwise, just add it to the set
else:
if required_state_map.get(state_type) is None:
required_state_map[state_type] = {state_key}
else:
required_state_map[state_type].add(state_key)
return cls(
timeline_limit=room_params.timeline_limit,
required_state_map=required_state_map,
)
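# Worked example of the de-duplication above. The real input is a
# `CommonRoomParameters` pydantic model; for illustration, a bare namespace
# carrying the two fields `from_room_config` reads is enough (an assumption,
# not Synapse API):
from types import SimpleNamespace

_params = SimpleNamespace(
    timeline_limit=10,
    required_state=[
        (EventTypes.Member, "@alice:test"),
        (EventTypes.Member, StateValues.WILDCARD),  # wildcard state_key wins
        (StateValues.WILDCARD, "@alice:test"),      # wildcard type subsumes matches
    ],
)
assert RoomSyncConfig.from_room_config(_params).required_state_map == {
    EventTypes.Member: {StateValues.WILDCARD},
    StateValues.WILDCARD: {"@alice:test"},
}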
def combine_room_sync_config(
self, other_room_sync_config: "RoomSyncConfig"
) -> "RoomSyncConfig":
"""
Combine this `RoomSyncConfig` with another `RoomSyncConfig` and return the
superset union of the two.
"""
timeline_limit = self.timeline_limit
required_state_map = {
event_type: set(state_keys)
for event_type, state_keys in self.required_state_map.items()
}
# Take the highest timeline limit
if timeline_limit < other_room_sync_config.timeline_limit:
timeline_limit = other_room_sync_config.timeline_limit
# Union the required state
for (
state_type,
state_key_set,
) in other_room_sync_config.required_state_map.items():
# If we already have a wildcard for everything, we don't need to add
# anything else
if StateValues.WILDCARD in required_state_map.get(
StateValues.WILDCARD, set()
):
break
# If we already have a wildcard `state_key` for this `state_type`, we don't need
# to add anything else
if StateValues.WILDCARD in required_state_map.get(state_type, set()):
continue
# If we're getting wildcards for the `state_type` and `state_key`, that's
# all that matters so get rid of any other entries
if (
state_type == StateValues.WILDCARD
and StateValues.WILDCARD in state_key_set
):
required_state_map = {state_type: {StateValues.WILDCARD}}
# We can break, since we don't need to add anything else
break
for state_key in state_key_set:
# If we already have a wildcard for this specific `state_key`, we don't need
# to add it since the wildcard already covers it.
if state_key in required_state_map.get(StateValues.WILDCARD, set()):
continue
# If we're getting a wildcard for the `state_type`, get rid of any other
# entries with the same `state_key`, since the wildcard will cover it already.
if state_type == StateValues.WILDCARD:
# Get rid of any entries that match the `state_key`
#
# Make a copy so we don't run into an error: `dictionary changed size
# during iteration`, when we remove items
for existing_state_type, existing_state_key_set in list(
required_state_map.items()
):
# Make a copy so we don't run into an error: `Set changed size during
# iteration`, when we filter out and remove items
for existing_state_key in existing_state_key_set.copy():
if existing_state_key == state_key:
existing_state_key_set.remove(state_key)
# If we've left the `set()` empty, remove it from the map
if existing_state_key_set == set():
required_state_map.pop(existing_state_type, None)
# If we're getting a wildcard `state_key`, get rid of any other state_keys
# for this `state_type` since the wildcard will cover it already.
if state_key == StateValues.WILDCARD:
required_state_map[state_type] = {state_key}
break
# Otherwise, just add it to the set
else:
if required_state_map.get(state_type) is None:
required_state_map[state_type] = {state_key}
else:
required_state_map[state_type].add(state_key)
return RoomSyncConfig(timeline_limit, required_state_map)
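# Worked example of the union above: the higher timeline limit wins, and a
# wildcard state_key absorbs specific state_keys of the same type.
_config_a = RoomSyncConfig(
    timeline_limit=5,
    required_state_map={EventTypes.Member: {"@alice:test"}},
)
_config_b = RoomSyncConfig(
    timeline_limit=10,
    required_state_map={EventTypes.Member: {StateValues.WILDCARD}},
)
_combined = _config_a.combine_room_sync_config(_config_b)
assert _combined.timeline_limit == 10
assert _combined.required_state_map == {EventTypes.Member: {StateValues.WILDCARD}}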
def must_await_full_state(
self,
is_mine_id: Callable[[str], bool],
) -> bool:
"""
Check whether we're only requesting `required_state` which is completely
satisfied even with partial state; if so, we don't need to `await_full_state`
before we can return it.
Also see `StateFilter.must_await_full_state(...)` for comparison
Partially-stated rooms should have all state events except for remote membership
events so if we require a remote membership event anywhere, then we need to
return `True` (requires full state).
Args:
is_mine_id: a callable which confirms if a given state_key matches a mxid
of a local user
"""
wildcard_state_keys = self.required_state_map.get(StateValues.WILDCARD)
# Requesting *all* state in the room so we have to wait
if (
wildcard_state_keys is not None
and StateValues.WILDCARD in wildcard_state_keys
):
return True
# If the wildcards don't refer to remote user IDs, then we don't need to wait
# for full state.
if wildcard_state_keys is not None:
for possible_user_id in wildcard_state_keys:
if not possible_user_id[0].startswith(UserID.SIGIL):
# Not a user ID
continue
localpart_hostname = possible_user_id.split(":", 1)
if len(localpart_hostname) < 2:
# Not a user ID
continue
if not is_mine_id(possible_user_id):
return True
membership_state_keys = self.required_state_map.get(EventTypes.Member)
# We aren't requesting any membership events at all so the partial state will
# cover us.
if membership_state_keys is None:
return False
# If we're requesting entirely local users, the partial state will cover us.
for user_id in membership_state_keys:
if user_id == StateValues.ME:
continue
# We're lazy-loading membership so we can just return the state we have.
# Lazy-loading means we include membership for any event `sender` in the
# timeline but since we had to auth those timeline events, we will have the
# membership state for them (including from remote senders).
elif user_id == StateValues.LAZY:
continue
elif user_id == StateValues.WILDCARD:
return False
elif not is_mine_id(user_id):
return True
# Local users only so the partial state will cover us.
return False
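# Worked examples of the rules above, with a toy `is_mine_id` that treats
# `:test` user IDs as local (an assumption for illustration only):
def _is_mine_sketch(user_id: str) -> bool:
    return user_id.endswith(":test")

# Requesting a remote user's membership requires full state ...
assert RoomSyncConfig(
    timeline_limit=1,
    required_state_map={EventTypes.Member: {"@bob:remote"}},
).must_await_full_state(_is_mine_sketch)

# ... but lazy-loaded, own, or local-only membership does not.
assert not RoomSyncConfig(
    timeline_limit=1,
    required_state_map={EventTypes.Member: {StateValues.LAZY, StateValues.ME, "@alice:test"}},
).must_await_full_state(_is_mine_sketch)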
class HaveSentRoomFlag(Enum):
"""Flag for whether we have sent the room down a sliding sync connection.
The valid state changes here are:
NEVER -> LIVE
LIVE -> PREVIOUSLY
PREVIOUSLY -> LIVE
"""
# The room has never been sent down (or we have forgotten we have sent it
# down).
NEVER = "never"
# We have previously sent the room down, but there are updates that we
# haven't sent down.
PREVIOUSLY = "previously"
# We have sent the room down and the client has received all updates.
LIVE = "live"
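# A hedged sketch (not part of Synapse) encoding the valid transitions
# documented above, e.g. as a guard one could use in tests:
_VALID_FLAG_TRANSITIONS = {
    (HaveSentRoomFlag.NEVER, HaveSentRoomFlag.LIVE),
    (HaveSentRoomFlag.LIVE, HaveSentRoomFlag.PREVIOUSLY),
    (HaveSentRoomFlag.PREVIOUSLY, HaveSentRoomFlag.LIVE),
}

def assert_valid_flag_transition(old: HaveSentRoomFlag, new: HaveSentRoomFlag) -> None:
    assert (old, new) in _VALID_FLAG_TRANSITIONS, f"invalid transition {old} -> {new}"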
T = TypeVar("T", str, RoomStreamToken, MultiWriterStreamToken)
@attr.s(auto_attribs=True, slots=True, frozen=True)
class HaveSentRoom(Generic[T]):
"""Whether we have sent the room data down a sliding sync connection.
We are generic over the type of token used, e.g. `RoomStreamToken` or
`MultiWriterStreamToken`.
Attributes:
status: Flag for whether we have sent down the room
last_token: If the flag is `PREVIOUSLY` then this is non-null and
contains the last stream token of the last updates we sent down
the room, i.e. we still need to send everything since then to the
client.
"""
status: HaveSentRoomFlag
last_token: Optional[T]
@staticmethod
def live() -> "HaveSentRoom[T]":
return HaveSentRoom(HaveSentRoomFlag.LIVE, None)
@staticmethod
def previously(last_token: T) -> "HaveSentRoom[T]":
"""Constructor for `PREVIOUSLY` flag."""
return HaveSentRoom(HaveSentRoomFlag.PREVIOUSLY, last_token)
@staticmethod
def never() -> "HaveSentRoom[T]":
return HaveSentRoom(HaveSentRoomFlag.NEVER, None)
@attr.s(auto_attribs=True, slots=True, frozen=True)
class RoomStatusMap(Generic[T]):
"""For a given stream, e.g. events, records what we have or have not sent
down for that stream in a given room."""
# `room_id` -> `HaveSentRoom`
_statuses: Mapping[str, HaveSentRoom[T]] = attr.Factory(dict)
def have_sent_room(self, room_id: str) -> HaveSentRoom[T]:
"""Return whether we have previously sent the room down"""
return self._statuses.get(room_id, HaveSentRoom.never())
def get_mutable(self) -> "MutableRoomStatusMap[T]":
"""Get a mutable copy of this state."""
return MutableRoomStatusMap(
statuses=self._statuses,
)
def copy(self) -> "RoomStatusMap[T]":
"""Make a copy of the class. Useful for converting from a mutable to
immutable version."""
return RoomStatusMap(statuses=dict(self._statuses))
class MutableRoomStatusMap(RoomStatusMap[T]):
"""A mutable version of `RoomStatusMap`"""
# We use a ChainMap here so that we can easily track what has been updated
# and what hasn't. Note that when we persist the per connection state this
# will get flattened to a normal dict (via calling `.copy()`)
_statuses: typing.ChainMap[str, HaveSentRoom[T]]
def __init__(
self,
statuses: Mapping[str, HaveSentRoom[T]],
) -> None:
# ChainMap requires a mutable mapping, but we're not actually going to
# mutate it.
statuses = cast(MutableMapping, statuses)
super().__init__(
statuses=ChainMap({}, statuses),
)
def get_updates(self) -> Mapping[str, HaveSentRoom[T]]:
"""Return only the changes that were made"""
return self._statuses.maps[0]
def record_sent_rooms(self, room_ids: StrCollection) -> None:
"""Record that we have sent these rooms in the response"""
for room_id in room_ids:
current_status = self._statuses.get(room_id, HaveSentRoom.never())
if current_status.status == HaveSentRoomFlag.LIVE:
continue
self._statuses[room_id] = HaveSentRoom.live()
def record_unsent_rooms(self, room_ids: StrCollection, from_token: T) -> None:
"""Record that we have not sent these rooms in the response, but there
have been updates.
"""
# Whether we add/update the entries for unsent rooms depends on the
# existing entry:
# - LIVE: We have previously sent down everything up to
# `last_room_token`, so we update the entry to be `PREVIOUSLY` with
# `last_room_token`.
# - PREVIOUSLY: We have previously sent down everything up to *a*
# given token, so we don't need to update the entry.
# - NEVER: We have never previously sent down the room, and we haven't
# sent anything down this time either so we leave it as NEVER.
for room_id in room_ids:
current_status = self._statuses.get(room_id, HaveSentRoom.never())
if current_status.status != HaveSentRoomFlag.LIVE:
continue
self._statuses[room_id] = HaveSentRoom.previously(from_token)
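# Illustrative round trip through the mutable map: record rooms as sent, read
# back only the delta, then flatten to an immutable copy (here instantiated
# over plain `str` tokens purely for demonstration):
_status_map: RoomStatusMap[str] = RoomStatusMap()
_mutable_statuses = _status_map.get_mutable()
_mutable_statuses.record_sent_rooms(["!room:test"])
assert _mutable_statuses.get_updates() == {"!room:test": HaveSentRoom.live()}
assert _mutable_statuses.copy().have_sent_room("!room:test").status == HaveSentRoomFlag.LIVE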
@attr.s(auto_attribs=True, frozen=True)
class PerConnectionState:
"""The per-connection state. A snapshot of what we've sent down the
connection before.
Currently, we track whether we've sent down various aspects of a given room
before.
We use the `rooms` field to store the position in the events stream for each
room that we've previously sent to the client before. On the next request
that includes the room, we can then send only what's changed since that
recorded position.
Same goes for the `receipts` field, so we only need to send the new receipts
since the client's last sync request.
Attributes:
rooms: The status of each room for the events stream.
receipts: The status of each room for the receipts stream.
room_configs: Map from room_id to the `RoomSyncConfig` of all
rooms that we have previously sent down.
"""
rooms: RoomStatusMap[RoomStreamToken] = attr.Factory(RoomStatusMap)
receipts: RoomStatusMap[MultiWriterStreamToken] = attr.Factory(RoomStatusMap)
room_configs: Mapping[str, RoomSyncConfig] = attr.Factory(dict)
def get_mutable(self) -> "MutablePerConnectionState":
"""Get a mutable copy of this state."""
room_configs = cast(MutableMapping[str, RoomSyncConfig], self.room_configs)
return MutablePerConnectionState(
rooms=self.rooms.get_mutable(),
receipts=self.receipts.get_mutable(),
room_configs=ChainMap({}, room_configs),
)
def copy(self) -> "PerConnectionState":
return PerConnectionState(
rooms=self.rooms.copy(),
receipts=self.receipts.copy(),
room_configs=dict(self.room_configs),
)
@attr.s(auto_attribs=True)
class MutablePerConnectionState(PerConnectionState):
"""A mutable version of `PerConnectionState`"""
rooms: MutableRoomStatusMap[RoomStreamToken]
receipts: MutableRoomStatusMap[MultiWriterStreamToken]
room_configs: typing.ChainMap[str, RoomSyncConfig]
def has_updates(self) -> bool:
return (
bool(self.rooms.get_updates())
or bool(self.receipts.get_updates())
or bool(self.get_room_config_updates())
)
def get_room_config_updates(self) -> Mapping[str, RoomSyncConfig]:
"""Get updates to the room sync config"""
return self.room_configs.maps[0]
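# Illustrative end-to-end flow of the per-connection snapshot: take a mutable
# view for one request, record what was sent, then persist a flattened copy.
_prev_state = PerConnectionState()
_mutable_state = _prev_state.get_mutable()
_mutable_state.rooms.record_sent_rooms(["!room:test"])
assert _mutable_state.has_updates()
_next_state = _mutable_state.copy()  # flattens the ChainMaps into plain dicts
assert _next_state.rooms.have_sent_room("!room:test").status == HaveSentRoomFlag.LIVE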

View File

@@ -6,7 +6,7 @@ import synapse.rest.admin
import synapse.rest.client.login
import synapse.rest.client.room
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import LimitExceededError, SynapseError
from synapse.api.errors import Codes, LimitExceededError, SynapseError
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events import FrozenEventV3
from synapse.federation.federation_client import SendJoinResult
@@ -383,6 +383,26 @@ class RoomMemberMasterHandlerTestCase(HomeserverTestCase):
"""Tests that a user cannot not forgets a room that has not left."""
self.get_failure(self.handler.forget(self.alice_ID, self.room_id), SynapseError)
def test_nonlocal_room_user_action(self) -> None:
"""
Test that non-local user ids cannot perform room actions through
this homeserver.
"""
alien_user_id = UserID.from_string("@cheeky_monkey:matrix.org")
bad_room_id = f"{self.room_id}+BAD_ID"
exc = self.get_failure(
self.handler.update_membership(
create_requester(self.alice),
alien_user_id,
bad_room_id,
"unban",
),
SynapseError,
).value
self.assertEqual(exc.errcode, Codes.BAD_JSON)
def test_rejoin_forgotten_by_user(self) -> None:
"""Test that a user that has forgotten a room can do a re-join.
The room was not forgotten from the local server.

View File

@@ -757,6 +757,54 @@ class SpaceSummaryTestCase(unittest.HomeserverTestCase):
)
self._assert_hierarchy(result, expected)
def test_fed_root(self) -> None:
"""
Test that the requested room is available over federation.
"""
fed_hostname = self.hs.hostname + "2"
fed_space = "#fed_space:" + fed_hostname
fed_subroom = "#fed_sub_room:" + fed_hostname
requested_room_entry = _RoomEntry(
fed_space,
{
"room_id": fed_space,
"world_readable": True,
"room_type": RoomTypes.SPACE,
},
[
{
"type": EventTypes.SpaceChild,
"room_id": fed_space,
"state_key": fed_subroom,
"content": {"via": [fed_hostname]},
}
],
)
child_room = {
"room_id": fed_subroom,
"world_readable": True,
}
async def summarize_remote_room_hierarchy(
_self: Any, room: Any, suggested_only: bool
) -> Tuple[Optional[_RoomEntry], Dict[str, JsonDict], Set[str]]:
return requested_room_entry, {fed_subroom: child_room}, set()
expected = [
(fed_space, [fed_subroom]),
(fed_subroom, ()),
]
with mock.patch(
"synapse.handlers.room_summary.RoomSummaryHandler._summarize_remote_room_hierarchy",
new=summarize_remote_room_hierarchy,
):
result = self.get_success(
self.handler.get_room_hierarchy(create_requester(self.user), fed_space)
)
self._assert_hierarchy(result, expected)
def test_fed_filtering(self) -> None:
"""
Rooms returned over federation should be properly filtered to only include

View File

@@ -18,7 +18,6 @@
#
#
import logging
from copy import deepcopy
from typing import Dict, List, Optional
from unittest.mock import patch
@@ -47,7 +46,7 @@ from synapse.rest.client import knock, login, room
from synapse.server import HomeServer
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import JsonDict, StreamToken, UserID
from synapse.types.handlers import SlidingSyncConfig
from synapse.types.handlers.sliding_sync import SlidingSyncConfig
from synapse.util import Clock
from tests.replication._base import BaseMultiWorkerStreamTestCase
@@ -566,23 +565,11 @@ class RoomSyncConfigTestCase(TestCase):
"""
Combine A into B and B into A to make sure we get the same result.
"""
# Since we're mutating these in place, make a copy for each of our trials
room_sync_config_a = deepcopy(a)
room_sync_config_b = deepcopy(b)
combined_config = a.combine_room_sync_config(b)
self._assert_room_config_equal(combined_config, expected, "B into A")
# Combine B into A
room_sync_config_a.combine_room_sync_config(room_sync_config_b)
self._assert_room_config_equal(room_sync_config_a, expected, "B into A")
# Since we're mutating these in place, make a copy for each of our trials
room_sync_config_a = deepcopy(a)
room_sync_config_b = deepcopy(b)
# Combine A into B
room_sync_config_b.combine_room_sync_config(room_sync_config_a)
self._assert_room_config_equal(room_sync_config_b, expected, "A into B")
combined_config = a.combine_room_sync_config(b)
self._assert_room_config_equal(combined_config, expected, "A into B")
class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
@@ -620,7 +607,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
now_token = self.event_sources.get_current_token()
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=now_token,
to_token=now_token,
@@ -647,7 +634,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
after_room_token = self.event_sources.get_current_token()
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room_token,
to_token=after_room_token,
@@ -682,7 +669,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
after_room_token = self.event_sources.get_current_token()
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room_token,
to_token=after_room_token,
@@ -756,7 +743,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
after_room_token = self.event_sources.get_current_token()
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room_token,
to_token=after_room_token,
@@ -828,7 +815,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
after_kick_token = self.event_sources.get_current_token()
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_kick_token,
to_token=after_kick_token,
@@ -921,7 +908,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
self.assertEqual(channel.code, 200, channel.result)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room_forgets,
to_token=before_room_forgets,
@@ -951,7 +938,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
after_room2_token = self.event_sources.get_current_token()
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room1_token,
to_token=after_room2_token,
@@ -1001,7 +988,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
self.helper.join(room_id2, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room1_token,
to_token=after_room1_token,
@@ -1041,7 +1028,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
leave_response = self.helper.leave(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room1_token,
to_token=after_room1_token,
@@ -1088,7 +1075,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
leave_response = self.helper.leave(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room1_token,
to_token=after_room1_token,
@@ -1152,7 +1139,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
leave_response = self.helper.leave(kick_room_id, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_kick_token,
to_token=after_kick_token,
@@ -1208,7 +1195,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
leave_response2 = self.helper.leave(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room1_token,
to_token=after_room1_token,
@@ -1263,7 +1250,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
join_response2 = self.helper.join(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room1_token,
to_token=after_room1_token,
@@ -1322,7 +1309,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
self.helper.join(room_id2, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=None,
to_token=after_room1_token,
@@ -1404,7 +1391,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
self.helper.join(room_id4, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=from_token,
to_token=to_token,
@@ -1477,7 +1464,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
self.helper.leave(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room1_token,
to_token=after_room1_token,
@@ -1520,7 +1507,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
self.helper.join(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room1_token,
to_token=after_room1_token,
@@ -1570,7 +1557,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
leave_response3 = self.helper.leave(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room1_token,
to_token=after_room1_token,
@@ -1632,7 +1619,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
leave_response3 = self.helper.leave(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room1_token,
to_token=after_room1_token,
@@ -1691,7 +1678,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
leave_response = self.helper.leave(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room1_token,
to_token=after_room1_token,
@@ -1765,7 +1752,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room1_token,
to_token=after_room1_token,
@@ -1830,7 +1817,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
after_change1_token = self.event_sources.get_current_token()
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room1_token,
to_token=after_change1_token,
@@ -1902,7 +1889,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room1_token,
to_token=after_room1_token,
@@ -1984,7 +1971,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
self.helper.leave(room_id1, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room1_token,
to_token=after_room1_token,
@@ -2052,7 +2039,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room1_token,
to_token=after_room1_token,
@@ -2088,7 +2075,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
after_more_changes_token = self.event_sources.get_current_token()
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=after_room1_token,
to_token=after_more_changes_token,
@@ -2153,7 +2140,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
after_room1_token = self.event_sources.get_current_token()
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room1_token,
to_token=after_room1_token,
@@ -2229,7 +2216,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
self.helper.leave(room_id3, user1_id, tok=user1_tok)
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_room3_token,
to_token=after_room3_token,
@@ -2365,7 +2352,7 @@ class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):
# The function under test
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_reset_token,
to_token=after_reset_token,
@@ -2579,7 +2566,7 @@ class GetRoomMembershipForUserAtToTokenShardTestCase(BaseMultiWorkerStreamTestCa
# The function under test
room_id_results = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
UserID.from_string(user1_id),
from_token=before_stuck_activity_token,
to_token=stuck_activity_token,
@@ -2669,14 +2656,14 @@ class FilterRoomsRelevantForSyncTestCase(HomeserverTestCase):
Get the rooms the user should be syncing with
"""
room_membership_for_user_map = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
user=user,
from_token=from_token,
to_token=to_token,
)
)
filtered_sync_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms_relevant_for_sync(
self.sliding_sync_handler.room_lists.filter_rooms_relevant_for_sync(
user=user,
room_membership_for_user_map=room_membership_for_user_map,
)
@@ -3030,14 +3017,14 @@ class FilterRoomsTestCase(HomeserverTestCase):
Get the rooms the user should be syncing with
"""
room_membership_for_user_map = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
user=user,
from_token=from_token,
to_token=to_token,
)
)
filtered_sync_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms_relevant_for_sync(
self.sliding_sync_handler.room_lists.filter_rooms_relevant_for_sync(
user=user,
room_membership_for_user_map=room_membership_for_user_map,
)
@@ -3196,7 +3183,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_dm=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3210,7 +3197,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_dm=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3252,7 +3239,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3266,7 +3253,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3316,7 +3303,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3330,7 +3317,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3390,7 +3377,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3404,7 +3391,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3463,7 +3450,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3484,7 +3471,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3533,7 +3520,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3549,7 +3536,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3619,7 +3606,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3637,7 +3624,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3700,7 +3687,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3716,7 +3703,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_encrypted=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3760,7 +3747,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_invite=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3774,7 +3761,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try with `is_invite=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3827,7 +3814,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only normal rooms
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -3839,7 +3826,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only spaces
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -3851,7 +3838,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding normal rooms and spaces
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3865,7 +3852,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding an arbitrary room type
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3918,7 +3905,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding *NOT* normal rooms
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(not_room_types=[None]),
@@ -3930,7 +3917,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding *NOT* spaces
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3944,7 +3931,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding *NOT* normal rooms or spaces
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3959,7 +3946,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Test how it behaves when we have both `room_types` and `not_room_types`.
# `not_room_types` should win.
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -3975,7 +3962,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Test how it behaves when we have both `room_types` and `not_room_types`.
# `not_room_types` should win.
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
@@ -4025,7 +4012,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only normal rooms
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4037,7 +4024,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only spaces
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -4094,7 +4081,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only normal rooms
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4106,7 +4093,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only spaces
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -4152,7 +4139,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only normal rooms
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4166,7 +4153,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only spaces
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -4228,7 +4215,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only normal rooms
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4242,7 +4229,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only spaces
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -4305,7 +4292,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only normal rooms
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[None]),
@@ -4319,7 +4306,7 @@ class FilterRoomsTestCase(HomeserverTestCase):
# Try finding only spaces
filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
self.sliding_sync_handler.room_lists.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(room_types=[RoomTypes.SPACE]),
@@ -4366,14 +4353,14 @@ class SortRoomsTestCase(HomeserverTestCase):
Get the rooms the user should be syncing with
"""
room_membership_for_user_map = self.get_success(
self.sliding_sync_handler.get_room_membership_for_user_at_to_token(
self.sliding_sync_handler.room_lists.get_room_membership_for_user_at_to_token(
user=user,
from_token=from_token,
to_token=to_token,
)
)
filtered_sync_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms_relevant_for_sync(
self.sliding_sync_handler.room_lists.filter_rooms_relevant_for_sync(
user=user,
room_membership_for_user_map=room_membership_for_user_map,
)
@@ -4408,7 +4395,7 @@ class SortRoomsTestCase(HomeserverTestCase):
# Sort the rooms (what we're testing)
sorted_sync_rooms = self.get_success(
self.sliding_sync_handler.sort_rooms(
self.sliding_sync_handler.room_lists.sort_rooms(
sync_room_map=sync_room_map,
to_token=after_rooms_token,
)
@@ -4489,7 +4476,7 @@ class SortRoomsTestCase(HomeserverTestCase):
# Sort the rooms (what we're testing)
sorted_sync_rooms = self.get_success(
self.sliding_sync_handler.sort_rooms(
self.sliding_sync_handler.room_lists.sort_rooms(
sync_room_map=sync_room_map,
to_token=after_rooms_token,
)
@@ -4553,7 +4540,7 @@ class SortRoomsTestCase(HomeserverTestCase):
# Sort the rooms (what we're testing)
sorted_sync_rooms = self.get_success(
self.sliding_sync_handler.sort_rooms(
self.sliding_sync_handler.room_lists.sort_rooms(
sync_room_map=sync_room_map,
to_token=after_rooms_token,
)


@@ -17,6 +17,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
import io
from typing import Any, Dict, Generator
from unittest.mock import ANY, Mock, create_autospec
@@ -32,7 +33,9 @@ from twisted.web.http import HTTPChannel
from twisted.web.http_headers import Headers
from synapse.api.errors import HttpResponseException, RequestSendFailed
from synapse.api.ratelimiting import Ratelimiter
from synapse.config._base import ConfigError
from synapse.config.ratelimiting import RatelimitSettings
from synapse.http.matrixfederationclient import (
ByteParser,
MatrixFederationHttpClient,
@@ -337,6 +340,81 @@ class FederationClientTests(HomeserverTestCase):
r = self.successResultOf(d)
self.assertEqual(r.code, 200)
def test_authed_media_redirect_response(self) -> None:
"""
Validate that, when following a `Location` redirect, the
maximum size is _not_ set to the initial response `Content-Length` and
the media file can be downloaded.
"""
limiter = Ratelimiter(
store=self.hs.get_datastores().main,
clock=self.clock,
cfg=RatelimitSettings(key="", per_second=0.17, burst_count=1048576),
)
output_stream = io.BytesIO()
d = defer.ensureDeferred(
self.cl.federation_get_file(
"testserv:8008", "path", output_stream, limiter, "127.0.0.1", 10000
)
)
self.pump()
clients = self.reactor.tcpClients
self.assertEqual(len(clients), 1)
(host, port, factory, _timeout, _bindAddress) = clients[0]
self.assertEqual(host, "1.2.3.4")
self.assertEqual(port, 8008)
# complete the connection and wire it up to a fake transport
protocol = factory.buildProtocol(None)
transport = StringTransport()
protocol.makeConnection(transport)
# Deferred does not have a result
self.assertNoResult(d)
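# The body below mimics the `multipart/mixed` response from the
# federation media endpoint: the first part is a JSON metadata object,
# and the second part carries a `Location` header (rather than file
# bytes) redirecting us to where the file can actually be downloaded.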
redirect_data = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nLocation: http://testserv:8008/ab/c1/2345.txt\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a--\r\n\r\n"
protocol.dataReceived(
b"HTTP/1.1 200 OK\r\n"
b"Server: Fake\r\n"
b"Content-Length: %i\r\n"
b"Content-Type: multipart/mixed; boundary=6067d4698f8d40a0a794ea7d7379d53a\r\n\r\n"
% (len(redirect_data))
)
protocol.dataReceived(redirect_data)
# Still no result, as we haven't followed the redirect yet
self.assertNoResult(d)
# Now send the response returned by the server at `Location`
clients = self.reactor.tcpClients
(host, port, factory, _timeout, _bindAddress) = clients[1]
self.assertEqual(host, "1.2.3.4")
self.assertEqual(port, 8008)
protocol = factory.buildProtocol(None)
transport = StringTransport()
protocol.makeConnection(transport)
# make sure this body is longer than the initial response's `Content-Length`
data = b"Hello world!" * 30
protocol.dataReceived(
b"HTTP/1.1 200 OK\r\n"
b"Server: Fake\r\n"
b"Content-Length: %i\r\n"
b"Content-Type: text/plain\r\n"
b"\r\n"
b"%s\r\n"
b"\r\n" % (len(data), data)
)
# We should get a successful response
length, _, _ = self.successResultOf(d)
self.assertEqual(length, len(data))
self.assertEqual(output_stream.getvalue(), data)
@parameterized.expand(["get_json", "post_json", "delete_json", "put_json"])
def test_timeout_reading_body(self, method_name: str) -> None:
"""


@@ -13,7 +13,7 @@
#
import logging
from parameterized import parameterized
from parameterized import parameterized, parameterized_class
from twisted.test.proto_helpers import MemoryReactor
@@ -28,6 +28,18 @@ from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
"""
Test connection tracking in the Sliding Sync API.
@@ -44,6 +56,8 @@ class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
self.store = hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
super().prepare(reactor, clock, hs)
def test_rooms_required_state_incremental_sync_LIVE(self) -> None:
"""Test that we only get state updates in incremental sync for rooms
we've already seen (LIVE).
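
Since the `@parameterized_class` decorator above now appears in each of these sliding sync test files, here is a minimal, self-contained sketch (plain `unittest`, not Synapse's test helpers) of how it behaves: each parameter tuple yields a separately generated subclass with `use_new_tables` injected as a class attribute, and `class_name_func` names the variants so they show up as `..._new` and `..._fallback` in test output.

import unittest

from parameterized import parameterized_class


@parameterized_class(
    ("use_new_tables",),
    [
        (True,),
        (False,),
    ],
    class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class ExampleTestCase(unittest.TestCase):
    # Set on each generated subclass by the decorator.
    use_new_tables: bool

    def test_sees_parameter(self) -> None:
        # ExampleTestCase_new runs with True, ExampleTestCase_fallback with False.
        self.assertIsInstance(self.use_new_tables, bool)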


@@ -13,6 +13,8 @@
#
import logging
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
@@ -28,6 +30,18 @@ from tests.server import TimedOutException
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
"""Tests for the account_data sliding sync extension"""
@@ -43,6 +57,8 @@ class SlidingSyncAccountDataExtensionTestCase(SlidingSyncBase):
self.store = hs.get_datastores().main
self.account_data_handler = hs.get_account_data_handler()
super().prepare(reactor, clock, hs)
def test_no_data_initial_sync(self) -> None:
"""
Test that enabling the account_data extension works during an initial sync,


@@ -13,6 +13,8 @@
#
import logging
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
@@ -27,6 +29,18 @@ from tests.server import TimedOutException
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncE2eeExtensionTestCase(SlidingSyncBase):
"""Tests for the e2ee sliding sync extension"""
@@ -42,6 +56,8 @@ class SlidingSyncE2eeExtensionTestCase(SlidingSyncBase):
self.store = hs.get_datastores().main
self.e2e_keys_handler = hs.get_e2e_keys_handler()
super().prepare(reactor, clock, hs)
def test_no_data_initial_sync(self) -> None:
"""
Test that enabling the e2ee extension works during an initial sync, even if there


@@ -13,6 +13,8 @@
#
import logging
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
@@ -28,6 +30,18 @@ from tests.server import TimedOutException
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncReceiptsExtensionTestCase(SlidingSyncBase):
"""Tests for the receipts sliding sync extension"""
@@ -42,6 +56,8 @@ class SlidingSyncReceiptsExtensionTestCase(SlidingSyncBase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = hs.get_datastores().main
super().prepare(reactor, clock, hs)
def test_no_data_initial_sync(self) -> None:
"""
Test that enabling the receipts extension works during an initial sync,
@@ -677,3 +693,240 @@ class SlidingSyncReceiptsExtensionTestCase(SlidingSyncBase):
set(),
exact=True,
)
def test_receipts_incremental_sync_out_of_range(self) -> None:
"""Tests that we don't return read receipts for rooms that fall out of
range, but then do send all read receipts once they're back in range.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)
room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id2, user1_id, tok=user1_tok)
# Send a message and read receipt into room2
event_response = self.helper.send(room_id2, body="new event", tok=user2_tok)
room2_event_id = event_response["event_id"]
self.helper.send_read_receipt(room_id2, room2_event_id, tok=user1_tok)
# Now send a message into room1 so that it is at the top of the list
self.helper.send(room_id1, body="new event", tok=user2_tok)
# Make a SS request for only the top room.
sync_body = {
"lists": {
"main": {
"ranges": [[0, 0]],
"required_state": [],
"timeline_limit": 5,
}
},
"extensions": {
"receipts": {
"enabled": True,
}
},
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# The receipt is in room2, but only room1 is returned, so we don't
# expect to get the receipt.
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
set(),
exact=True,
)
# Move room2 into range.
self.helper.send(room_id2, body="new event", tok=user2_tok)
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# We expect to see the read receipt of room2, as that has the most
# recent update.
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
{room_id2},
exact=True,
)
receipt = response_body["extensions"]["receipts"]["rooms"][room_id2]
self.assertIncludes(
receipt["content"][room2_event_id][ReceiptTypes.READ].keys(),
{user1_id},
exact=True,
)
# Send a message into room1 to bump it to the top, but also send a
# receipt in room2
self.helper.send(room_id1, body="new event", tok=user2_tok)
self.helper.send_read_receipt(room_id2, room2_event_id, tok=user2_tok)
# We don't expect to see the new read receipt.
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
set(),
exact=True,
)
# But if we send a new message into room2, we expect to get the missing receipts.
self.helper.send(room_id2, body="new event", tok=user2_tok)
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
{room_id2},
exact=True,
)
# We should only see the new receipt
receipt = response_body["extensions"]["receipts"]["rooms"][room_id2]
self.assertIncludes(
receipt["content"][room2_event_id][ReceiptTypes.READ].keys(),
{user2_id},
exact=True,
)
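# For reference, the nesting those assertions walk looks roughly like the
# following sketch (illustrative room/event/user IDs, not captured from a
# real response); `ReceiptTypes.READ` is the `m.read` receipt type.
sketch_response_body = {
    "extensions": {
        "receipts": {
            "rooms": {
                # room ID -> receipt payload for that room
                "!room2:server": {
                    "content": {
                        # event ID -> receipt type -> user ID -> data
                        "$room2_event_id": {
                            "m.read": {
                                "@user2:server": {"ts": 1234567890},
                            },
                        },
                    },
                },
            },
        },
    },
}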
def test_return_own_read_receipts(self) -> None:
"""Test that we always send the user's own read receipts in initial
rooms, even if the receipts don't match events in the timeline.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)
# Send a message and read receipts into room1
event_response = self.helper.send(room_id1, body="new event", tok=user2_tok)
room1_event_id = event_response["event_id"]
self.helper.send_read_receipt(room_id1, room1_event_id, tok=user1_tok)
self.helper.send_read_receipt(room_id1, room1_event_id, tok=user2_tok)
# Now send a message so the above message is not in the timeline.
self.helper.send(room_id1, body="new event", tok=user2_tok)
# Make a SS request for only the latest message.
sync_body = {
"lists": {
"main": {
"ranges": [[0, 0]],
"required_state": [],
"timeline_limit": 1,
}
},
"extensions": {
"receipts": {
"enabled": True,
}
},
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
# We should get our own receipt in room1, even though it's not within
# the timeline limit.
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
{room_id1},
exact=True,
)
# We should only see our read receipt, not the other user's.
receipt = response_body["extensions"]["receipts"]["rooms"][room_id1]
self.assertIncludes(
receipt["content"][room1_event_id][ReceiptTypes.READ].keys(),
{user1_id},
exact=True,
)
def test_read_receipts_expanded_timeline(self) -> None:
"""Test that we get read receipts when we expand the timeline limit (`unstable_expanded_timeline`)."""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)
# Send a message and read receipt into room1
event_response = self.helper.send(room_id1, body="new event", tok=user2_tok)
room1_event_id = event_response["event_id"]
self.helper.send_read_receipt(room_id1, room1_event_id, tok=user2_tok)
# Now send a message so the above message is not in the timeline.
self.helper.send(room_id1, body="new event", tok=user2_tok)
# Make a SS request for only the latest message.
sync_body = {
"lists": {
"main": {
"ranges": [[0, 0]],
"required_state": [],
"timeline_limit": 1,
}
},
"extensions": {
"receipts": {
"enabled": True,
}
},
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# We shouldn't see user2's read receipt, as it's not in the timeline
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
set(),
exact=True,
)
# Now do another request with a room subscription with an increased timeline limit
sync_body["room_subscriptions"] = {
room_id1: {
"required_state": [],
"timeline_limit": 2,
}
}
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# Assert that we did actually get an expanded timeline
room_response = response_body["rooms"][room_id1]
self.assertNotIn("initial", room_response)
self.assertEqual(room_response["unstable_expanded_timeline"], True)
# We should now see user2's read receipt, as it's in the expanded timeline
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
{room_id1},
exact=True,
)
# We should only see user2's read receipt.
receipt = response_body["extensions"]["receipts"]["rooms"][room_id1]
self.assertIncludes(
receipt["content"][room1_event_id][ReceiptTypes.READ].keys(),
{user2_id},
exact=True,
)


@@ -14,6 +14,8 @@
import logging
from typing import List
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
@@ -28,6 +30,18 @@ from tests.server import TimedOutException
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncToDeviceExtensionTestCase(SlidingSyncBase):
"""Tests for the to-device sliding sync extension"""
@@ -40,6 +54,7 @@ class SlidingSyncToDeviceExtensionTestCase(SlidingSyncBase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = hs.get_datastores().main
super().prepare(reactor, clock, hs)
def _assert_to_device_response(
self, response_body: JsonDict, expected_messages: List[JsonDict]


@@ -13,6 +13,8 @@
#
import logging
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
@@ -28,6 +30,18 @@ from tests.server import TimedOutException
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncTypingExtensionTestCase(SlidingSyncBase):
"""Tests for the typing notification sliding sync extension"""
@@ -41,6 +55,8 @@ class SlidingSyncTypingExtensionTestCase(SlidingSyncBase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = hs.get_datastores().main
super().prepare(reactor, clock, hs)
def test_no_data_initial_sync(self) -> None:
"""
Test that enabling the typing extension works during an initial sync,


@@ -14,7 +14,7 @@
import logging
from typing import Literal
from parameterized import parameterized
from parameterized import parameterized, parameterized_class
from typing_extensions import assert_never
from twisted.test.proto_helpers import MemoryReactor
@@ -30,6 +30,18 @@ from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncExtensionsTestCase(SlidingSyncBase):
"""
Test general extensions behavior in the Sliding Sync API. Each extension has its
@@ -49,6 +61,8 @@ class SlidingSyncExtensionsTestCase(SlidingSyncBase):
self.storage_controllers = hs.get_storage_controllers()
self.account_data_handler = hs.get_account_data_handler()
super().prepare(reactor, clock, hs)
# Any extensions that use `lists`/`rooms` should be tested here
@parameterized.expand([("account_data",), ("receipts",), ("typing",)])
def test_extensions_lists_rooms_relevant_rooms(
@@ -120,19 +134,26 @@ class SlidingSyncExtensionsTestCase(SlidingSyncBase):
"foo-list": {
"ranges": [[0, 1]],
"required_state": [],
"timeline_limit": 0,
# We set this to `1` because we're testing `receipts`, which
# interacts with the `timeline`. With receipts, when a room
# hasn't been sent down the connection before or it appears
# as `initial: true`, we only include receipts for events in
# the timeline to avoid bloating and blowing up the sync
# response as the number of users in the room increases.
# (this behavior is part of the spec)
"timeline_limit": 1,
},
# We expect this list range to include room5, room4, room3
"bar-list": {
"ranges": [[0, 2]],
"required_state": [],
"timeline_limit": 0,
"timeline_limit": 1,
},
},
"room_subscriptions": {
room_id1: {
"required_state": [],
"timeline_limit": 0,
"timeline_limit": 1,
}
},
}
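
The long comment about `timeline_limit` above describes how receipts are trimmed for rooms that are new to the connection. Condensed to a sketch (an illustration of the described rule, not the actual Synapse implementation, with the own-receipt carve-out from test_return_own_read_receipts above folded in):

def receipts_for_initial_room(receipts, timeline_event_ids, own_user_id):
    # Keep receipts only for events in the returned timeline, so the
    # response stays bounded as the number of users in the room grows...
    return {
        event_id: receipt
        for event_id, receipt in receipts.items()
        # ...except the syncing user's own receipts, which are always sent
        # (see test_return_own_read_receipts above).
        if event_id in timeline_event_ids or own_user_id in receipt.get("m.read", {})
    }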


@@ -14,6 +14,8 @@
import logging
from http import HTTPStatus
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
@@ -27,6 +29,18 @@ from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncRoomSubscriptionsTestCase(SlidingSyncBase):
"""
Test `room_subscriptions` in the Sliding Sync API.
@@ -43,6 +57,8 @@ class SlidingSyncRoomSubscriptionsTestCase(SlidingSyncBase):
self.store = hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
super().prepare(reactor, clock, hs)
def test_room_subscriptions_with_join_membership(self) -> None:
"""
Test `room_subscriptions` with a joined room should give us timeline and current


@@ -13,6 +13,8 @@
#
import logging
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
@@ -27,6 +29,18 @@ from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncRoomsInvitesTestCase(SlidingSyncBase):
"""
Test to make sure the `rooms` response looks good for invites in the Sliding Sync API.
@@ -49,6 +63,8 @@ class SlidingSyncRoomsInvitesTestCase(SlidingSyncBase):
self.store = hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
super().prepare(reactor, clock, hs)
def test_rooms_invite_shared_history_initial_sync(self) -> None:
"""
Test that `rooms` we are invited to have some stripped `invite_state` during an


@@ -13,10 +13,12 @@
#
import logging
from parameterized import parameterized_class
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
from synapse.api.constants import EventTypes, Membership
from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.api.room_versions import RoomVersions
from synapse.rest.client import login, room, sync
from synapse.server import HomeServer
@@ -28,6 +30,18 @@ from tests.test_utils.event_injection import create_event
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
"""
Test rooms meta info like name, avatar, joined_count, invited_count, is_dm,
@@ -44,6 +58,12 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
self.state_handler = self.hs.get_state_handler()
persistence = self.hs.get_storage_controllers().persistence
assert persistence is not None
self.persistence = persistence
super().prepare(reactor, clock, hs)
def test_rooms_meta_when_joined(self) -> None:
"""
@@ -600,16 +620,16 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
Test that `bump_stamp` ignores backfilled events, i.e. events with a
negative stream ordering.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# Create a remote room
creator = "@user:other"
room_id = "!foo:other"
room_version = RoomVersions.V10
shared_kwargs = {
"room_id": room_id,
"room_version": "10",
"room_version": room_version.identifier,
}
create_tuple = self.get_success(
@@ -618,6 +638,12 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
prev_event_ids=[],
type=EventTypes.Create,
state_key="",
content={
# The `ROOM_CREATOR` field could be removed if we used a room
# version > 10 (in favor of relying on `sender`)
EventContentFields.ROOM_CREATOR: creator,
EventContentFields.ROOM_VERSION: room_version.identifier,
},
sender=creator,
**shared_kwargs,
)
@@ -667,22 +693,29 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
]
# Ensure the local HS knows the room version
self.get_success(
self.store.store_room(room_id, creator, False, RoomVersions.V10)
)
self.get_success(self.store.store_room(room_id, creator, False, room_version))
# Persist these events as backfilled events.
persistence = self.hs.get_storage_controllers().persistence
assert persistence is not None
for event, context in remote_events_and_contexts:
self.get_success(persistence.persist_event(event, context, backfilled=True))
self.get_success(
self.persistence.persist_event(event, context, backfilled=True)
)
# Now we join the local user to the room
join_tuple = self.get_success(
# Now we join the local user to the room. We want to make this feel as close to
# the real `process_remote_join()` as possible, but we'd like to avoid some of
# the auth checks that would be done in the real code.
#
# FIXME: The test was originally written using this less-real
# `persist_event(...)` shortcut but it would be nice to use the real remote join
# process in a `FederatingHomeserverTestCase`.
flawed_join_tuple = self.get_success(
create_event(
self.hs,
prev_event_ids=[invite_tuple[0].event_id],
# This doesn't correctly create an `EventContext` that includes both of
# these state events. I assume it's because we're working on our local
# homeserver, which has the remote state set as `outlier`. We have to
# create our own EventContext below to get this right.
auth_event_ids=[create_tuple[0].event_id, invite_tuple[0].event_id],
type=EventTypes.Member,
state_key=user1_id,
@@ -691,7 +724,22 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
**shared_kwargs,
)
)
self.get_success(persistence.persist_event(*join_tuple))
# We have to create our own context to get the state set correctly. If we use
# the `EventContext` from the `flawed_join_tuple`, the `current_state_events`
# table will only have the join event in it, which should never happen in our
# real server.
join_event = flawed_join_tuple[0]
join_context = self.get_success(
self.state_handler.compute_event_context(
join_event,
state_ids_before_event={
(e.type, e.state_key): e.event_id
for e in [create_tuple[0], invite_tuple[0]]
},
partial_state=False,
)
)
self.get_success(self.persistence.persist_event(join_event, join_context))
# Doing an SS request should return a positive `bump_stamp`, even though
# the only event that matches the bump types has a negative stream


@@ -13,7 +13,7 @@
#
import logging
from parameterized import parameterized
from parameterized import parameterized, parameterized_class
from twisted.test.proto_helpers import MemoryReactor
@@ -30,6 +30,18 @@ from tests.test_utils.event_injection import mark_event_as_partial_state
logger = logging.getLogger(__name__)
# FIXME: This can be removed once we bump `SCHEMA_COMPAT_VERSION` and run the
# foreground update for
# `sliding_sync_joined_rooms`/`sliding_sync_membership_snapshots` (tracked by
# https://github.com/element-hq/synapse/issues/17623)
@parameterized_class(
("use_new_tables",),
[
(True,),
(False,),
],
class_name_func=lambda cls, num, params_dict: f"{cls.__name__}_{'new' if params_dict['use_new_tables'] else 'fallback'}",
)
class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
"""
Test `rooms.required_state` in the Sliding Sync API.
@@ -46,6 +58,8 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
self.store = hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
super().prepare(reactor, clock, hs)
def test_rooms_no_required_state(self) -> None:
"""
Empty `rooms.required_state` should not return any state events in the room

Some files were not shown because too many files have changed in this diff.