Compare commits

...

219 Commits

Author SHA1 Message Date
Erik Johnston
4fd68e2736 FIXUP 2024-08-28 15:05:56 +01:00
Erik Johnston
c5a90575bf FIXUP 2024-08-28 15:04:48 +01:00
Erik Johnston
a217155570 Merge branch 'madlittlemods/sliding-sync-pre-populate-room-meta-data' into erikj/ss_hacks 2024-08-28 14:58:31 +01:00
Erik Johnston
90d0e035dd Fix port script tests by handling empty DBs correctly 2024-08-28 14:39:25 +01:00
Erik Johnston
ab414f2ab8 Use event_auth table to get previous membership 2024-08-28 14:23:32 +01:00
Erik Johnston
6f9932d146 Handle old rows with null event_stream_ordering column 2024-08-28 14:23:06 +01:00
Erik Johnston
bb905cd02c Only run the sliding sync background updates on the main database 2024-08-28 11:44:56 +01:00
Erik Johnston
7c9c62051c Remove all rooms pulled out from the queue 2024-08-28 11:31:19 +01:00
Eric Eastwood
da463fb102 Add unique index right away for sliding_sync_joined_rooms_to_recalculate
This makes it so we can always `upsert` to avoid duplicates. Otherwise,
I'm not sure how to avoid inserting duplicates in certain situations
(see FIXME in the diff), which would cause problems down the line
for the unique index being added later.
2024-08-28 00:50:33 -05:00
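To illustrate the pattern this commit enables, here is a minimal sketch, assuming a room-ID key (the SQL is illustrative, not the actual Synapse migration): the unique index is what makes an idempotent upsert possible.

```python
# Illustrative sketch: a unique index on room_id lets ON CONFLICT resolve
# duplicate inserts, so re-queuing a room can never create duplicate rows.
# `txn` is any DB-API cursor.
def create_recalculate_table(txn):
    txn.execute(
        """
        CREATE TABLE sliding_sync_joined_rooms_to_recalculate (
            room_id TEXT NOT NULL
        )
        """
    )
    txn.execute(
        """
        CREATE UNIQUE INDEX sliding_sync_joined_rooms_to_recalculate_room_id_idx
        ON sliding_sync_joined_rooms_to_recalculate (room_id)
        """
    )

def mark_room_for_recalculation(txn, room_id):
    # the unique index makes this insert-or-ignore safe to repeat
    txn.execute(
        """
        INSERT INTO sliding_sync_joined_rooms_to_recalculate (room_id)
        VALUES (?)
        ON CONFLICT (room_id) DO NOTHING
        """,
        (room_id,),
    )
```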
Eric Eastwood
8468401a97 Adapt to using sliding_sync_joined_rooms_to_recalculate table 2024-08-28 00:42:14 -05:00
Eric Eastwood
53b7309f6c Add sliding_sync_joined_rooms_to_recalculate table 2024-08-27 20:48:56 -05:00
Eric Eastwood
94e1a54687 get_events(...) will omit events from unknown room versions
Thanks @erikjohnston
2024-08-27 20:01:55 -05:00
Eric Eastwood
a507f152c9 Use stream_id of some point before we fetch the current state
This is simpler and some rooms are so old that they don't have
`current_state_delta_stream` yet. It's easier if we just get a
general max `stream_id` of the whole table than the max `stream_id`
for the specific room anyway.

Thanks @erikjohnston
2024-08-27 19:45:50 -05:00
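A sketch of what that simplification looks like (table and column names are taken from the commit message; this is not the exact query Synapse runs):

```python
# Illustrative: a global max over the whole delta stream always exists
# (as long as any room has state), whereas a per-room max can be NULL for
# rooms so old they have no current_state_delta_stream rows at all.
def get_max_stream_id(txn):
    txn.execute("SELECT MAX(stream_id) FROM current_state_delta_stream")
    (max_stream_id,) = txn.fetchone()
    return max_stream_id
```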
Eric Eastwood
9d08bc2157 Remove debug logs 2024-08-27 19:35:05 -05:00
Eric Eastwood
56a4c0ba6e Round out tests 2024-08-27 19:34:16 -05:00
Eric Eastwood
85a60c3132 More tests 2024-08-27 19:27:24 -05:00
Eric Eastwood
e5e7269998 Add more tests 2024-08-27 18:49:53 -05:00
Eric Eastwood
c8e17f7479 Add test when no rooms 2024-08-27 18:21:25 -05:00
Eric Eastwood
4dc9e268e6 Add test for catch-up background update 2024-08-27 18:08:17 -05:00
Eric Eastwood
9a7d8c2be4 Start catch-up if nothing written yet 2024-08-27 17:28:07 -05:00
Eric Eastwood
c51a309da5 Maybe: always start background update 2024-08-27 17:11:51 -05:00
Eric Eastwood
9764f626ea Fix query in Postgres 2024-08-27 12:09:00 -05:00
Eric Eastwood
7a0c281028 Add placeholder tests 2024-08-26 19:43:52 -05:00
Eric Eastwood
7fe5d31e20 Note down caveat about forgotten 2024-08-26 18:52:10 -05:00
Eric Eastwood
53473a0eb4 Adapt sliding_sync_joined_rooms background update to use event_stream_ordering for progress
This way we can re-use it for the catch-up background process
2024-08-26 18:35:49 -05:00
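A rough sketch of what stream-ordering-keyed progress enables (names assumed from the commit message; Synapse's real background updates run inside its updates framework):

```python
# Each batch resumes from the highest event_stream_ordering processed so
# far, so the same loop can double as a catch-up pass started from any
# point in the stream.
def process_batch(txn, last_event_stream_ordering, batch_size):
    txn.execute(
        """
        SELECT room_id, event_stream_ordering
        FROM sliding_sync_joined_rooms
        WHERE event_stream_ordering > ?
        ORDER BY event_stream_ordering ASC
        LIMIT ?
        """,
        (last_event_stream_ordering, batch_size),
    )
    rows = txn.fetchall()
    # ... recalculate the snapshot for each room here ...
    return rows[-1][1] if rows else None  # new progress marker, or done
```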
Eric Eastwood
eb3c84cf45 Kick-off background update for out-of-date snapshots 2024-08-26 17:31:26 -05:00
Eric Eastwood
6a44686dc3 Why it matters 2024-08-26 16:32:59 -05:00
Eric Eastwood
a94c1dd62c Add more context for why 2024-08-26 16:27:20 -05:00
Eric Eastwood
8bddbe23bd Clear out-of-date rows 2024-08-26 16:19:47 -05:00
Eric Eastwood
addb91485f Split test cases 2024-08-26 16:11:56 -05:00
Eric Eastwood
9795556052 Update comment 2024-08-26 14:42:55 -05:00
Erik Johnston
c44db28958 Merge branch 'erikj/ss_sort_cache' into erikj/ss_hacks 2024-08-23 15:49:31 +01:00
Erik Johnston
c6204d3fa1 Speed up sort rooms via cache 2024-08-23 15:49:15 +01:00
Erik Johnston
1ead5e0c0e Merge branch 'erikj/ss_store_lists' into erikj/ss_hacks 2024-08-23 11:34:33 +01:00
Erik Johnston
8d1d8f9b3b Add TODO 2024-08-23 11:34:17 +01:00
Erik Johnston
651e520292 Store list data 2024-08-23 11:33:31 +01:00
Erik Johnston
f457dbee35 Add room_to_lists field. 2024-08-23 11:24:41 +01:00
Erik Johnston
77d3fa8b9e Fixup 2024-08-23 10:26:06 +01:00
Erik Johnston
4a9d81f6ad Fixup 2024-08-23 10:12:16 +01:00
Erik Johnston
96f476d9b4 Merge remote-tracking branch 'origin/madlittlemods/sliding-sync-pre-populate-room-meta-data' into erikj/ss_hacks 2024-08-23 10:00:55 +01:00
Erik Johnston
e2501a0bd7 Merge remote-tracking branch 'origin/erikj/ss_store_state' into erikj/ss_hacks 2024-08-23 09:54:21 +01:00
dependabot[bot]
f1a1c7fc53 Bump types-setuptools from 71.1.0.20240726 to 71.1.0.20240818 (#17586) 2024-08-23 09:53:14 +01:00
dependabot[bot]
cb9fa062b7 Bump sentry-sdk from 2.12.0 to 2.13.0 (#17585) 2024-08-23 09:53:06 +01:00
dependabot[bot]
74b75cfd54 Bump cryptography from 42.0.8 to 43.0.0 (#17584) 2024-08-23 09:52:53 +01:00
dependabot[bot]
87d13fd143 Bump types-jsonschema from 4.23.0.20240712 to 4.23.0.20240813 (#17583) 2024-08-23 09:52:34 +01:00
dependabot[bot]
ad2cd9aefd Bump serde_json from 1.0.124 to 1.0.125 (#17582) 2024-08-23 09:52:26 +01:00
dependabot[bot]
ad0ee53993 Bump serde from 1.0.206 to 1.0.208 (#17581) 2024-08-23 09:52:16 +01:00
Eric Eastwood
a57d47b778 Use simple_upsert_txn for sliding_sync_membership_snapshots in background update 2024-08-22 23:59:06 -05:00
Eric Eastwood
b6a7d2bf6c Use simple_upsert_txn for sliding_sync_joined_rooms in background update 2024-08-22 23:30:00 -05:00
Eric Eastwood
f8926d07df Fix partial-stated room re-syncing state but nothing has changed
Fixes failing test in CI: `tests.handlers.test_federation.PartialJoinTestCase.test_failed_partial_join_is_clean`
```
2024-08-22 18:57:22-0500 [-] synapse.metrics.background_process_metrics - 253 - ERROR - sync_partial_state_room-0 - Background process 'sync_partial_state_room' threw an exception
	Traceback (most recent call last):
	  File "synapse/synapse/metrics/background_process_metrics.py", line 251, in run
	    return await func(*args, **kwargs)
	           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
	  File "synapse/synapse/handlers/federation.py", line 1842, in _sync_partial_state_room_wrapper
	    await self._sync_partial_state_room(
	  File "synapse/synapse/handlers/federation.py", line 1933, in _sync_partial_state_room
	    await self.state_handler.update_current_state(room_id)
	  File "synapse/synapse/state/__init__.py", line 554, in update_current_state
	    await self._storage_controllers.persistence.update_current_state(room_id)
	  File "synapse/synapse/storage/controllers/persist_events.py", line 491, in update_current_state
	    await self._event_persist_queue.add_to_queue(
	  File "synapse/synapse/storage/controllers/persist_events.py", line 245, in add_to_queue
	    res = await make_deferred_yieldable(end_item.deferred.observe())
	          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	  File "synapse/synapse/storage/controllers/persist_events.py", line 288, in handle_queue_loop
	    ret = await self._per_item_callback(room_id, item.task)
	          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	  File "synapse/synapse/storage/controllers/persist_events.py", line 370, in _process_event_persist_queue_task
	    await self._update_current_state(room_id, task)
	  File "synapse/synapse/storage/controllers/persist_events.py", line 507, in _update_current_state
	    await self.persist_events_store._calculate_sliding_sync_table_changes(
	  File "synapse/synapse/storage/databases/main/events.py", line 624, in _calculate_sliding_sync_table_changes
	    assert most_recent_event_pos_results, (
	           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	AssertionError: We should not be seeing `None` here because we are still in the room (!room:example.com) and it should at-least have a join membership event that's keeping us here.
```
2024-08-22 19:06:04 -05:00
Eric Eastwood
21cc97ba9d Use simple_upsert_many_txn for sliding_sync_membership_snapshots
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726820006
2024-08-22 18:52:05 -05:00
Eric Eastwood
fdb8b5931f Correct comment 2024-08-22 18:43:51 -05:00
Eric Eastwood
4b866c4fca Simplify what we need to think about to grab the best effort value 2024-08-22 18:36:43 -05:00
Eric Eastwood
088a4c7cf0 Use simple_upsert_txn to update sliding_sync_joined_rooms
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726817206
2024-08-22 18:26:37 -05:00
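For context, `simple_upsert_txn` is Synapse's generic keyed-upsert helper in the database layer. A hedged sketch of how this commit would use it (the values shown are placeholders, not the actual update):

```python
def upsert_joined_room(store, txn, room_id, event_stream_ordering, bump_stamp):
    # keyed on room_id: inserts the row if it's missing, otherwise updates
    # it in place, so the update can be re-run safely
    store.db_pool.simple_upsert_txn(
        txn,
        table="sliding_sync_joined_rooms",
        keyvalues={"room_id": room_id},
        values={
            "event_stream_ordering": event_stream_ordering,
            "bump_stamp": bump_stamp,
        },
    )
```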
Eric Eastwood
0726a6d58b Derive best effort stream_ordering outside of the transaction
See https://github.com/element-hq/synapse/pull/17512#discussion_r1727995882
2024-08-22 18:13:00 -05:00
Eric Eastwood
6edc4c78ce Allow for no bump_stamp (fix portdb CI job)
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725998219
2024-08-22 17:09:43 -05:00
Eric Eastwood
44432e2118 Move tests to dedicated file
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726849798
2024-08-22 16:56:09 -05:00
Eric Eastwood
bcba8cccfe No need for transaction
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726844107
2024-08-22 16:52:31 -05:00
Eric Eastwood
693c06b2f1 Move away from backfill language 2024-08-22 16:48:02 -05:00
Eric Eastwood
4d87fa61c6 "backfill" -> "bg_update"
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726837698
2024-08-22 16:44:02 -05:00
Eric Eastwood
d61aada8ba Simplify _update_sliding_sync_tables_with_new_persisted_events_txn()
See
https://github.com/element-hq/synapse/pull/17512#discussion_r1719997640
https://github.com/element-hq/synapse/pull/17512#discussion_r1726828894
https://github.com/element-hq/synapse/pull/17512#discussion_r1726836440
2024-08-22 16:35:59 -05:00
Eric Eastwood
6723824c4a Prefer simple_delete_many_txn
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726819380
2024-08-22 15:47:44 -05:00
Eric Eastwood
980ee9aad6 Prefer simple_update_txn
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726808112
2024-08-22 15:40:08 -05:00
Erik Johnston
03eac5ae60 Newsfile 2024-08-22 18:04:23 +01:00
Erik Johnston
ed7591cbef Remove mark_token_seen 2024-08-22 18:04:23 +01:00
Erik Johnston
3838b18d3b Store state 2024-08-22 18:04:23 +01:00
Erik Johnston
b3d8e2d2bd Add simple_insert_returning_txn 2024-08-22 18:04:23 +01:00
Erik Johnston
d1ee253bef Allow making columns AUTOINCREMENT 2024-08-22 18:04:23 +01:00
Erik Johnston
87d53368d7 Newsfile 2024-08-22 18:03:57 +01:00
Erik Johnston
e34d634778 Make PerConnectionState immutable 2024-08-22 18:03:57 +01:00
Erik Johnston
7087c7c3d5 Make RoomSyncConfig immutable 2024-08-22 18:03:57 +01:00
Eric Eastwood
fc73b6ffc9 Rename insert_key/insert_value
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726805894
2024-08-22 11:44:57 -05:00
Eric Eastwood
9b8d2017af Check events_and_context for state events
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726798803
2024-08-22 11:34:40 -05:00
Erik Johnston
5b77f4a67a Update mypy plugin to handle enums and typevars 2024-08-22 17:04:21 +01:00
Erik Johnston
e2ade85250 Move sliding sync types 2024-08-22 17:04:21 +01:00
Eric Eastwood
339500d067 Fix sub-query selecting multiple rows 2024-08-21 22:56:25 -05:00
Eric Eastwood
31300f4ce5 More docstring 2024-08-21 22:15:19 -05:00
Eric Eastwood
b45b1896aa Fill out docstring 2024-08-21 22:09:57 -05:00
Eric Eastwood
ee2ef0b4d9 Add forgotten column 2024-08-21 21:54:22 -05:00
Eric Eastwood
0a938b137a Add missing boolean column to portdb script 2024-08-21 19:59:35 -05:00
Eric Eastwood
97248362d0 Log which room is strange 2024-08-21 19:26:14 -05:00
Eric Eastwood
02711552cf Better handle none case 2024-08-21 19:11:08 -05:00
Eric Eastwood
8ddf5c7235 Add tombstone to tests 2024-08-21 19:05:59 -05:00
Eric Eastwood
513ec8e906 Update tests 2024-08-21 18:51:04 -05:00
Eric Eastwood
cda2311520 Add tombstone_successor_room_id column 2024-08-21 18:21:44 -05:00
Eric Eastwood
c612572d12 Move away from stream_id
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725806543
2024-08-21 17:14:34 -05:00
Eric Eastwood
5b1db39bb7 Add sender column so we can tell leaves from kicks 2024-08-21 16:42:12 -05:00
Eric Eastwood
e7a3328228 Pre-populate membership and membership_event_stream_ordering
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725311745
2024-08-21 16:20:00 -05:00
Eric Eastwood
f6d7ffd9c5 Move _calculate_sliding_sync_table_changes(...) after we assign stream_ordering to events
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725728637
2024-08-21 16:10:14 -05:00
Eric Eastwood
772c501bb6 Use available stream_id
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725310035
2024-08-21 15:04:51 -05:00
Eric Eastwood
cda92af4a6 No need to update event_stream_ordering/bump_stamp ON CONFLICT 2024-08-21 14:46:50 -05:00
Eric Eastwood
d3f90e4bd8 Get full events for _sliding_sync_joined_rooms_backfill 2024-08-21 14:39:54 -05:00
Eric Eastwood
a5e06c6a8d Move back to the main store 2024-08-21 11:14:15 -05:00
Eric Eastwood
0233e20aa3 Use full event version after solving the circular import issues 2024-08-21 10:44:49 -05:00
Eric Eastwood
357132db1d Go back to simpler fetching senders 2024-08-20 21:54:03 -05:00
Eric Eastwood
726a8e9698 Attempt getting real events in background update (needs work) 2024-08-20 21:48:44 -05:00
Eric Eastwood
cc200ee9f5 Merge branch 'develop' into madlittlemods/sliding-sync-pre-populate-room-meta-data 2024-08-20 17:25:48 -05:00
Eric Eastwood
3eb77c3a2a Add sanity checks and fix wrong variable usage 2024-08-20 17:24:43 -05:00
Eric Eastwood
45c89ec625 Move pre-processing completely outside transaction 2024-08-20 15:41:53 -05:00
Erik Johnston
92b38c1afd Sliding sync: Split up handler into its own module (#17595)
That file was getting long.

The changes are non functional, and simply split things up into:
- the main class
- the connection store
- the extensions
- the types
2024-08-20 18:30:23 +00:00
Eric Eastwood
ac5b05c86b Use TypedDict 2024-08-20 13:29:56 -05:00
Eric Eastwood
2964c567d3 Use dicts 2024-08-20 13:21:19 -05:00
Eric Eastwood
95d39db772 Closer types 2024-08-20 11:55:24 -05:00
Eric Eastwood
6cc6bdbedf Start of moving logic outside of the transaction (pre-process) 2024-08-20 11:10:34 -05:00
Quentin Gliech
a8e313836d changelog: move some SSSS changes into the features section 2024-08-20 15:18:13 +02:00
Quentin Gliech
7c9684b5dc 1.114.0rc1 2024-08-20 14:57:22 +02:00
Erik Johnston
f1e8d2d15a Sliding Sync: Speed up getting receipts for initial rooms (#17592)
Let's only pull out the events we care about. Note that the index isn't
necessary here, as postgres is happy to scan the set of rooms for the
events.
2024-08-20 12:57:34 +01:00
Erik Johnston
10428046e4 Add metrics for sliding sync processing time (#17593)
This should let us see how quickly we actually process things in
practice.
2024-08-20 11:36:49 +00:00
Erik Johnston
6eb98a4f1c Sliding Sync: Handle timeline limit changes (take 2) (#17579)
This supersedes #17503; given the per-connection state is being heavily
rewritten, it felt easier to recreate the PR on top of that work.

This correctly handles the case of timeline limits going up and down.

This does not handle changes in `required_state`, but that can be done
as a separate PR.

Based on #17575.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-20 10:31:25 +01:00
Erik Johnston
950ba844f7 Sliding Sync: Batch up fetching receipts (#17589)
This is to make initial sliding sync a bit faster
2024-08-20 10:13:26 +01:00
Eric Eastwood
574a04a40f Test state reset on membership 2024-08-19 23:30:25 -05:00
Eric Eastwood
8ee2e114dd Add test to handle state reset in the meta data 2024-08-19 23:22:24 -05:00
Eric Eastwood
98fb56e5fe Prefer _update_sliding_sync_tables_with_new_persisted_events_txn(...) to do the right thing
See https://github.com/element-hq/synapse/pull/17512#discussion_r1719992152
2024-08-19 22:30:50 -05:00
Eric Eastwood
df0c57d383 Merge branch 'develop' into madlittlemods/sliding-sync-pre-populate-room-meta-data 2024-08-19 16:34:41 -05:00
Erik Johnston
8b8d74d12f Sliding sync: Correctly track which read receipts we have or have not sent down. (#17575)
Add connection tracking to the receipts extension.

Based on #17574

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-19 21:16:07 +01:00
Erik Johnston
261e746281 Sliding sync: Add classes for per-connection state (#17574)
This is some prep work ahead of correctly tracking receipts, where we
will also want to track the room status in terms of last receipt we had
sent down.

Essentially, we add two classes, `PerConnectionState` and a mutable
version, and then operate on those.

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-19 20:09:41 +01:00
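A minimal sketch of that immutable/mutable pairing (attrs-based, as Synapse commonly uses; the field shown is an illustrative assumption, not the real class definition):

```python
import attr

@attr.s(auto_attribs=True, frozen=True, slots=True)
class PerConnectionState:
    # immutable snapshot of what we've sent down this connection
    room_ids: "frozenset[str]" = frozenset()

@attr.s(auto_attribs=True, slots=True)
class MutablePerConnectionState:
    # working copy that handlers mutate while processing a request
    room_ids: "set[str]" = attr.Factory(set)

    def freeze(self) -> PerConnectionState:
        # snapshot the working state back into the immutable form
        return PerConnectionState(room_ids=frozenset(self.room_ids))
```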
Erik Johnston
993644ded0 Fix zero length media handling (#17570)
Results in:

```
AssertionError: null
  File "synapse/http/server.py", line 332, in _async_render_wrapper
    callback_return = await self._async_render(request)
  File "synapse/http/server.py", line 544, in _async_render
    callback_return = await raw_callback_return
  File "synapse/federation/transport/server/_base.py", line 369, in new_func
    response = await func(
  File "synapse/federation/transport/server/federation.py", line 826, in on_GET
    await self.media_repo.get_local_media(
  File "synapse/media/media_repository.py", line 473, in get_local_media
    await respond_with_multipart_responder(
  File "synapse/media/_base.py", line 353, in respond_with_multipart_responder
    assert content_length is not None
```
2024-08-19 15:06:44 +01:00
Erik Johnston
a5d25bb623 Test github token before running release script (#17562)
This stops people from getting half way through a step and it failing
due to the github token having expired (this happens to me every damn
time).
2024-08-19 14:15:36 +01:00
Erik Johnston
f162c92f2a Speed up /keys/changes (#17548)
Follow on from #17537.

This is just adding a batched lookup function (you might want to hide
whitespace in the diff).
2024-08-16 16:04:02 +01:00
Erik Johnston
9ce489be5e Add a flag to /versions about SSS support (#17571)
So that clients can check for support. Note that if the feature is only
enabled for some users, the `/versions` request must be authenticated to
pick up that SSS is enabled for the user
2024-08-16 08:54:57 +01:00
Eric Eastwood
d2f5247e77 Update comment 2024-08-16 00:15:03 -05:00
Eric Eastwood
c89d859c7c Fill in docstrings 2024-08-15 23:52:01 -05:00
Eric Eastwood
2ec93e3f0d Move function next to other helpers 2024-08-15 23:39:39 -05:00
Eric Eastwood
fa63c02648 Fix lints 2024-08-15 23:30:16 -05:00
Eric Eastwood
419be7c6b2 Finish off background update tests 2024-08-15 23:29:29 -05:00
Eric Eastwood
ef5f0fca3a Add more tests 2024-08-15 23:18:50 -05:00
Eric Eastwood
fb5af8f5fa Add background update test for sliding_sync_membership_snapshots 2024-08-15 22:13:32 -05:00
Eric Eastwood
8461faf384 Add historical case to background update 2024-08-15 21:56:12 -05:00
Eric Eastwood
6c2fc1d20f Move background updates to StateBackgroundUpdateStore
So we can access `_get_state_groups_from_groups_txn(...)`
2024-08-15 20:51:43 -05:00
Eric Eastwood
cbeff57402 Use helper 2024-08-15 20:31:57 -05:00
Eric Eastwood
4b42e44ef9 Work on background update for sliding_sync_membership_snapshots 2024-08-15 00:21:35 -05:00
Eric Eastwood
d113e743ae Fix lints 2024-08-14 19:30:52 -05:00
Eric Eastwood
23e0d34a2d Add more tests 2024-08-14 19:30:22 -05:00
Eric Eastwood
1c931cb3e7 Add background update for sliding_sync_joined_rooms 2024-08-14 19:19:15 -05:00
Eric Eastwood
9f551f0e97 Fix lints 2024-08-14 11:32:33 -05:00
Eric Eastwood
c8508f113a Clean up tables when a room is purged/deleted 2024-08-14 11:27:57 -05:00
Andrew Morgan
fae75b0376 Register the media threadpool with our metrics (#17566) 2024-08-14 15:11:22 +01:00
Tulir Asokan
f77bfbfa30 Fix fetching signing keys when old_verify_keys is omitted (#17568)
`old_verify_keys` isn't marked as required in
https://spec.matrix.org/v1.11/server-server-api/#get_matrixkeyv2server
and there's no functional difference between an empty object and
omitting the object, so I don't think there's any reason synapse should
explode when the field is omitted.
2024-08-14 14:13:56 +01:00
Erik Johnston
1892ba5f67 Fix 'Producer was not unregistered' error (#17569)
Follows on from #17567
2024-08-14 13:46:22 +01:00
Erik Johnston
a51daffba5 Reduce concurrent thread usage in media (#17567)
Follow on from #17558

Basically, we want to reduce the number of threads in use at a time,
i.e. reduce the number of threads that are paused/blocked. We do this
by returning from the thread when the consumer pauses the producer,
rather than pausing in the thread.

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2024-08-14 12:41:53 +01:00
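A minimal sketch of the pause-by-returning idea (illustrative, not Synapse's actual responder): each chunk is read via one `deferToThread` call, and on pause we simply stop scheduling reads instead of leaving a thread blocked:

```python
from twisted.internet import interfaces, threads
from zope.interface import implementer

@implementer(interfaces.IPushProducer)
class ChunkedFileSender:
    CHUNK_SIZE = 64 * 1024

    def __init__(self, fileobj, consumer):
        self._file = fileobj
        self._consumer = consumer
        self._paused = False

    def start(self):
        self._consumer.registerProducer(self, True)  # streaming producer
        self._read_next_chunk()

    def pauseProducing(self):
        # just set a flag: the in-flight thread call finishes and returns,
        # so no threadpool thread sits blocked while we're paused
        self._paused = True

    def resumeProducing(self):
        self._paused = False
        self._read_next_chunk()

    def stopProducing(self):
        self._paused = True

    def _read_next_chunk(self):
        d = threads.deferToThread(self._file.read, self.CHUNK_SIZE)

        def _on_chunk(data):
            if not data:  # EOF
                self._consumer.unregisterProducer()
                return
            self._consumer.write(data)
            if not self._paused:
                self._read_next_chunk()

        d.addCallback(_on_chunk)
```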
Shay
b05b2e14bb Handle lower-case http headers in _Mulitpart_Parser_Protocol (#17545) 2024-08-14 09:49:01 +01:00
Eric Eastwood
f49003c35c No invites needed 2024-08-13 18:55:59 -05:00
Eric Eastwood
8b0e1692f9 More realistic remote room forgotten test 2024-08-13 18:51:11 -05:00
Eric Eastwood
5df94f47b5 Fix running into StopIteration
More context about how/why `StopIteration` was being ignored silently
which made this problem harder to debug. See
https://github.com/element-hq/synapse/pull/17512#discussion_r1715954505
2024-08-13 17:15:09 -05:00
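As a general illustration of this class of bug (hypothetical snippet, not Synapse's actual code): a bare `next()` on an empty iterator raises `StopIteration`, which generator-based machinery can treat as normal exhaustion and swallow silently; giving `next()` a default makes the empty case explicit:

```python
def first_membership_event(events):
    # next() with a default never raises StopIteration; the empty case
    # surfaces as an explicit, debuggable error instead
    event = next((e for e in events if e.get("type") == "m.room.member"), None)
    if event is None:
        raise ValueError("expected at least one membership event")
    return event
```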
Eric Eastwood
3566abd9bc Fix boolean schema for Postgres 2024-08-13 15:14:48 -05:00
Eric Eastwood
96a4614f92 Update fixme comment 2024-08-13 14:50:11 -05:00
Eric Eastwood
dc447a673f Clarify when/why we upsert 2024-08-13 14:47:17 -05:00
Eric Eastwood
32ae162278 Fix rejecting invite when no_longer_in_room (and other non-join transitions) 2024-08-13 14:35:24 -05:00
Eric Eastwood
a90f3d4ae2 Merge branch 'develop' into madlittlemods/sliding-sync-pre-populate-room-meta-data 2024-08-13 12:28:36 -05:00
Eric Eastwood
a308d99f30 Sliding Sync: Exclude partially stated rooms if we must await full state (#17538)
Previously, we just had very basic partial-room exclusion based on
whether we were lazy-loading room members. With this PR, we add
`must_await_full_state(...)` with rules to check whether we're only
requesting `required_state` that is completely satisfied even with
partial state.

Partially-stated rooms should have all state events except for remote
membership events, so if we require a remote membership event anywhere,
we need to return `True`.
2024-08-13 12:27:42 -05:00
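A rough sketch of the rule described above (names and shapes are assumptions for illustration): only wait for full state when the requested `required_state` could include a remote membership event, since that is the only state a partially-stated room is missing:

```python
def must_await_full_state(required_state, is_mine_id) -> bool:
    for state_type, state_key in required_state:
        if state_type != "m.room.member":
            # non-membership state is present even in partially-stated rooms
            continue
        # a wildcard or an explicitly remote user could name a remote
        # membership event we don't have yet
        if state_key == "*" or not is_mine_id(state_key):
            return True
    return False
```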
Eric Eastwood
eb3a185cfc Fix federating backfill test 2024-08-13 12:24:53 -05:00
Erik Johnston
a9fc1fd112 Use a larger, dedicated threadpool for media sending (#17564) 2024-08-13 17:59:47 +01:00
Andrew Morgan
6a11bdf01d Add a utility function for generating fake event IDs (#17557) 2024-08-13 16:55:05 +00:00
Patrick Cloke
8fea190a1f Add missing docstrings related to profile methods. (#17559) 2024-08-13 17:04:35 +01:00
Erik Johnston
81c19c4cd2 Merge branch 'master' into develop 2024-08-13 15:58:11 +01:00
Erik Johnston
aaa3c36420 Remove logging in multipart (#17563)
This is really spurious and causes a lot of spam. I don't think there is
a use for it even at DEBUG level.
2024-08-13 15:56:18 +01:00
Erik Johnston
3e7eb45eb1 Fixup media logcontexts (#17561)
Regression from #17558
2024-08-13 15:00:57 +01:00
Erik Johnston
bab37dfc6f 1.113.0 2024-08-13 14:37:01 +01:00
Erik Johnston
9f9ec92526 Speed up responding to media requests (#17558)
We do this by reading from a threadpool, rather than blocking the main
thread.

This is broadly what we do in the [S3 storage
provider](https://github.com/matrix-org/synapse-s3-storage-provider/blob/main/s3_storage_provider.py#L234)
2024-08-13 14:06:17 +01:00
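A minimal sketch of the threadpool read (illustrative; Synapse's actual responder code is more involved): the blocking `read()` runs on Twisted's threadpool via `deferToThread`, keeping the reactor thread free:

```python
from twisted.internet import threads

async def read_file_chunk(fileobj, chunk_size=64 * 1024):
    # file.read() blocks on disk I/O, so run it in a worker thread and
    # await the resulting Deferred rather than stalling the reactor
    return await threads.deferToThread(fileobj.read, chunk_size)
```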
dependabot[bot]
ff7b27013e Bump serde from 1.0.204 to 1.0.206 (#17556) 2024-08-13 08:59:35 +01:00
Eric Eastwood
517946d940 Fix lints 2024-08-12 20:31:25 -05:00
Eric Eastwood
f600eacd0d Adjust test description 2024-08-12 20:30:48 -05:00
Eric Eastwood
3423eb72d5 Add test to make sure snapshot evolves with membership 2024-08-12 20:29:58 -05:00
Eric Eastwood
5589ae48ca Add test for remote invite rejected/retracted 2024-08-12 20:14:14 -05:00
Eric Eastwood
83a5858083 Add tests for remote invites 2024-08-12 19:57:28 -05:00
Eric Eastwood
3e1f24ea11 User ID is not unique because a user is joined to many rooms 2024-08-12 19:46:23 -05:00
Eric Eastwood
ab074f5335 Fix events from rooms we're not joined to affecting the joined room stream ordering 2024-08-12 19:40:53 -05:00
Eric Eastwood
53232e6df5 Fill in for remote invites (out of band, outlier membership) 2024-08-12 18:14:02 -05:00
Eric Eastwood
f069659343 Fix lints 2024-08-12 15:49:40 -05:00
Eric Eastwood
552f8f496d Update descriptions 2024-08-12 15:43:06 -05:00
Eric Eastwood
0af3b4822c Refactor to sliding_sync_membership_snapshots 2024-08-12 15:10:44 -05:00
Eric Eastwood
ed47a7eff5 Fix bumping when events are persisted out of order 2024-08-12 11:27:17 -05:00
Andrew Morgan
e1f5f0fbb8 Bump setuptools from 67.6.0 to 72.1.0 (#17542) 2024-08-12 14:58:01 +01:00
dependabot[bot]
8c9f2743bc Bump serde_json from 1.0.122 to 1.0.124 (#17555) 2024-08-12 14:33:12 +01:00
dependabot[bot]
b076941a36 Bump sentry-sdk from 2.10.0 to 2.12.0 (#17553) 2024-08-12 14:32:18 +01:00
dependabot[bot]
8bbe65f3c0 Bump types-pyyaml from 6.0.12.20240311 to 6.0.12.20240808 (#17552) 2024-08-12 14:32:05 +01:00
dependabot[bot]
b7faf01f26 Bump phonenumbers from 8.13.42 to 8.13.43 (#17551) 2024-08-12 14:31:56 +01:00
dependabot[bot]
4f7f6ee9a0 Bump lxml from 5.2.2 to 5.3.0 (#17550) 2024-08-12 14:31:45 +01:00
dependabot[bot]
a640b318df Bump sigstore/cosign-installer from 3.5.0 to 3.6.0 (#17549) 2024-08-12 14:31:34 +01:00
dependabot[bot]
34b7586446 Bump types-requests from 2.31.0.20240406 to 2.32.0.20240712 (#17524)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-12 10:20:38 +01:00
Erik Johnston
70b0e38603 Fix performance of device lists in /key/changes and sliding sync (#17537)
We do this by reusing the code from sync v2.

Reviewable commit-by-commit. The function `get_user_ids_changed` has
been rewritten entirely, so I would recommend not looking at the diff.
2024-08-09 11:59:44 +01:00
Eric Eastwood
3367422fd3 Need to fix upsert 2024-08-08 18:23:50 -05:00
Eric Eastwood
ca909013c8 Fill in stream_ordering/bump_stamp for any event being persisted 2024-08-08 17:49:15 -05:00
Eric Eastwood
cc2d2b6b9f Fill in stream_ordering/bump_stamp when we add current state to the joined rooms table 2024-08-08 15:41:55 -05:00
devonh
f31360e34b Start handlers for new media endpoints when media resource configured (#17483)
This is in response to issue #17473. Currently, not all of the handlers
necessary to deal with media requests are started when configuring
Synapse to use a media worker as per the [example
config](https://element-hq.github.io/synapse/latest/workers.html#synapseappmedia_repository).
The new media endpoints introduced with authenticated media fall under
the `client` & `federation` handlers in synapse.
This PR starts up handlers for the new media endpoints if a worker has
been configured with only the `media` resource type.

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2024-08-08 14:35:46 +00:00
Andrew Morgan
3ad38b644d Replace deprecated HTTPAdapter.get_connection method with get_connection_with_tls_context (#17536) 2024-08-08 14:59:37 +01:00
Erik Johnston
44ac2aa3b6 SSS: Implement PREVIOUSLY room tracking (#17535)
Implement tracking of rooms that have had updates that have not been
sent down to clients.

Simplified Sliding Sync (SSS)
2024-08-08 10:44:17 +01:00
Eric Eastwood
bc3796d333 Fix some lints 2024-08-07 20:49:46 -05:00
Eric Eastwood
5cf3ad3d7f Handle server left room 2024-08-07 20:47:13 -05:00
Eric Eastwood
bf78692ba0 Handle to_delete 2024-08-07 20:09:53 -05:00
Eric Eastwood
a1aaa47dad Add more tests 2024-08-07 19:58:51 -05:00
Eric Eastwood
c590474757 Test non-joins 2024-08-07 19:24:58 -05:00
Eric Eastwood
5b1053f23e Better test assertions 2024-08-07 19:07:43 -05:00
Eric Eastwood
68a3daf605 Fix comparison and insert 2024-08-07 18:10:24 -05:00
Eric Eastwood
61cea4e9b7 Closer to right 2024-08-07 18:07:53 -05:00
Eric Eastwood
87d95615d4 Change to updating the latest membership in the room 2024-08-07 16:37:17 -05:00
Eric Eastwood
11db575218 Sliding Sync: Use stream_ordering based timeline pagination for incremental sync (#17510)
Use `stream_ordering` based `timeline` pagination for incremental
`/sync` in Sliding Sync. Previously, we were always using a
`topological_ordering` but we should only be using that for historical
scenarios (initial `/sync`, newly joined, or haven't sent the room down
the connection before).

This is slightly different than what the [spec
suggests](https://spec.matrix.org/v1.10/client-server-api/#syncing)

> Events are ordered in this API according to the arrival time of the
event on the homeserver. This can conflict with other APIs which order
events based on their partial ordering in the event graph. This can
result in duplicate events being received (once per distinct API
called). Clients SHOULD de-duplicate events based on the event ID when
this happens.

But we've had a [discussion below in this
PR](https://github.com/element-hq/synapse/pull/17510#discussion_r1699105569)
and this matches what Sync v2 already does and seems like it makes
sense. Created a spec issue
https://github.com/matrix-org/matrix-spec/issues/1917 to clarify this.

Related issues:

 - https://github.com/matrix-org/matrix-spec/issues/1917
 - https://github.com/matrix-org/matrix-spec/issues/852
 - https://github.com/matrix-org/matrix-spec-proposals/pull/4033
2024-08-07 11:27:50 -05:00
Eric Eastwood
cb335805d4 Server left room test 2024-08-07 10:46:34 -05:00
dependabot[bot]
30e9f6e469 Bump bytes from 1.6.1 to 1.7.1 (#17526) 2024-08-07 10:37:54 +01:00
dependabot[bot]
eb62d12063 Bump regex from 1.10.5 to 1.10.6 (#17527) 2024-08-07 10:37:13 +01:00
Erik Johnston
ceb3686dcd Fixup sliding sync comment (#17531)
c.f.
https://github.com/element-hq/synapse/pull/17529#discussion_r1705780925
2024-08-07 10:32:36 +01:00
Eric Eastwood
2f3bd27284 Test is running 2024-08-06 16:50:14 -05:00
Eric Eastwood
f96d0c36a3 Special treatment for boolean columns
See 1dfa59b238/docs/development/database_schema.md (boolean-columns)
2024-08-06 15:30:51 -05:00
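For context, the linked doc describes how SQLite stores booleans as `0`/`1` integers while Postgres uses a real `BOOLEAN` type, so the port script must convert such columns explicitly. A hedged sketch of that conversion (`BOOLEAN_COLUMNS` exists in `synapse_port_db`; the helper and the example entry below are illustrative):

```python
# tables/columns whose SQLite 0/1 integers must become Postgres booleans
BOOLEAN_COLUMNS = {
    "sliding_sync_membership_snapshots": ["forgotten"],  # example entry
}

def convert_booleans(table, headers, row):
    bool_cols = BOOLEAN_COLUMNS.get(table, [])
    return tuple(
        bool(value) if header in bool_cols else value
        for header, value in zip(headers, row)
    )
```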
Eric Eastwood
1a251d5211 Fill in sliding_sync_non_join_memberships when current state changes 2024-08-06 15:18:34 -05:00
Eric Eastwood
1dfa59b238 Sliding Sync: Add more tracing (#17514)
Spawning from looking at a couple traces and wanting a little more info.

Follow-up to github.com/element-hq/synapse/pull/17501

The changes in this PR allow you to find slow Sliding Sync traces ignoring the
`wait_for_events` time. In Jaeger, you can now filter for the `current_sync_for_user`
operation with `RESULT.result=true` indicating that it actually returned non-empty results.

If you want to find traces for your own user, you can use
`RESULT.result=true ARG.sync_config.user="@madlittlemods:matrix.org"`
2024-08-06 11:43:43 -05:00
Andrew Morgan
bef6568537 Merge branch 'release-v1.113' into develop 2024-08-06 14:19:12 +01:00
Andrew Morgan
244a255065 Clarify auto_accept_invites.worker_to_run_on config docs (#17515) 2024-08-06 13:26:51 +01:00
Andrew Morgan
932cb0a928 1.113.0rc1 2024-08-06 12:24:47 +01:00
dependabot[bot]
2dad718265 Bump phonenumbers from 8.13.39 to 8.13.42 (#17521) 2024-08-06 11:47:19 +01:00
dependabot[bot]
5d8446298c Bump towncrier from 23.11.0 to 24.7.1 (#17523) 2024-08-06 11:47:06 +01:00
dependabot[bot]
d845e939a9 Bump black from 24.4.2 to 24.8.0 (#17522) 2024-08-06 11:46:48 +01:00
dependabot[bot]
23727869c7 Bump serde_json from 1.0.121 to 1.0.122 (#17525) 2024-08-06 11:45:44 +01:00
Erik Johnston
c270355349 SS: Reset connection if token is unrecognized (#17529)
This triggers the client to start a new sliding sync connection. If we
don't do this and the client asks for the full range of rooms, we end up
sending down all rooms and their state from scratch (which can be very
slow)

This causes things like
https://github.com/element-hq/element-x-ios/issues/3115 after we restart
the server

---------

Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
2024-08-06 10:39:11 +01:00
Eric Eastwood
2b5f07d714 Start of updating sliding_sync_joined_rooms 2024-08-05 22:34:21 -05:00
Eric Eastwood
ad1c887b4c Merge branch 'develop' into madlittlemods/sliding-sync-pre-populate-room-meta-data 2024-08-05 18:19:17 -05:00
Eric Eastwood
e3db7b2d81 Sliding Sync: Easier to understand timeline assertions in tests (#17511)
Added `_assertTimelineEqual(...)` because I got fed up trying to
understand the crazy diffs from the standard
`self.assertEqual(...)`/`self.assertListEqual(...)`

Before:
```
[FAIL]
Traceback (most recent call last):
  File "/home/eric/Documents/github/element/synapse/tests/rest/client/sliding_sync/test_rooms_timeline.py", line 103, in test_rooms_limited_initial_sync
    self.assertListEqual(
  File "/usr/lib/python3.12/unittest/case.py", line 1091, in assertListEqual
    self.assertSequenceEqual(list1, list2, msg, seq_type=list)
  File "/usr/lib/python3.12/unittest/case.py", line 1073, in assertSequenceEqual
    self.fail(msg)
twisted.trial.unittest.FailTest: Lists differ: ['$4QcmnzhdazSnDYcYSZCS_6-MWSzM_dN3RC7TRvW0w[95 chars]isM'] != ['$8N1XJ7e-3K_wxAanLVD3v8KQ96_B5Xj4huGkgy4N4[95 chars]nnU']

First differing element 0:
'$4QcmnzhdazSnDYcYSZCS_6-MWSzM_dN3RC7TRvW0wWA'
'$8N1XJ7e-3K_wxAanLVD3v8KQ96_B5Xj4huGkgy4N4-E'

- ['$4QcmnzhdazSnDYcYSZCS_6-MWSzM_dN3RC7TRvW0wWA',
-  '$8N1XJ7e-3K_wxAanLVD3v8KQ96_B5Xj4huGkgy4N4-E',
? ^

+ ['$8N1XJ7e-3K_wxAanLVD3v8KQ96_B5Xj4huGkgy4N4-E',
? ^

-  '$q4PRxQ_pBZkQI1keYuZPTtExQ23DqpUI3-Lxwfj_isM']
+  '$4QcmnzhdazSnDYcYSZCS_6-MWSzM_dN3RC7TRvW0wWA',
+  '$j3Xj-t2F1wH9kUHsI8X5yqS7hkdSyN2owaArfvk8nnU']
```

After:

```
[FAIL]
Traceback (most recent call last):
  File "/home/eric/Documents/github/element/synapse/tests/rest/client/sliding_sync/test_rooms_timeline.py", line 178, in test_rooms_limited_initial_sync
    self._assertTimelineEqual(
  File "/home/eric/Documents/github/element/synapse/tests/rest/client/sliding_sync/test_rooms_timeline.py", line 110, in _assertTimelineEqual
    self._assertListEqual(
  File "/home/eric/Documents/github/element/synapse/tests/rest/client/sliding_sync/test_rooms_timeline.py", line 79, in _assertListEqual
    self.fail(f"{diff_message}\n{message}")
twisted.trial.unittest.FailTest: Items must
Expected items to be in actual ('?' = missing expected items):
 [
   (10, master) $w-BoqW1PQQFU4TzVJW5OIelugxh0mY12wrfw6mbC6D4 (m.room.message) activity4
   (11, master) $sSidTZf1EOQmCVDU4mrH_1-bopMQhwcDUO2IhoemR6M (m.room.message) activity5
?  (12, master) $bgOcc3D-2QSkbk4aBxKVyOOQJGs7ZuncRJwG3cEANZg (m.room.member, @user1:test) join
 ]
Actual ('+' = found expected items):
 [
+  (11, master) $sSidTZf1EOQmCVDU4mrH_1-bopMQhwcDUO2IhoemR6M (m.room.message) activity5
+  (10, master) $w-BoqW1PQQFU4TzVJW5OIelugxh0mY12wrfw6mbC6D4 (m.room.message) activity4
   (9, master) $FmCNyc11YeFwiJ4an7_q6H0LCCjQOKd6UCr5VKeXXUw (m.room.message, None) activity3
 ]
```
2024-08-05 13:20:15 -05:00
Eric Eastwood
8392d6ac3b Use foreign keys 2024-07-31 19:00:55 -05:00
Eric Eastwood
e7e9cb289d Add changelog 2024-07-31 18:43:50 -05:00
Eric Eastwood
d26ac746d4 Start thinking about schemas 2024-07-31 18:40:23 -05:00
102 changed files with 12483 additions and 2636 deletions


@@ -30,7 +30,7 @@ jobs:
         run: docker buildx inspect
       - name: Install Cosign
-        uses: sigstore/cosign-installer@v3.5.0
+        uses: sigstore/cosign-installer@v3.6.0
       - name: Checkout repository
         uses: actions/checkout@v4


@@ -1,3 +1,108 @@
# Synapse 1.114.0rc1 (2024-08-20)

### Features

- Add a flag to `/versions`, `org.matrix.simplified_msc3575`, to indicate whether experimental sliding sync support has been enabled. ([\#17571](https://github.com/element-hq/synapse/issues/17571))
- Handle changes in `timeline_limit` in experimental sliding sync. ([\#17579](https://github.com/element-hq/synapse/issues/17579))
- Correctly track read receipts that should be sent down in experimental sliding sync. ([\#17575](https://github.com/element-hq/synapse/issues/17575), [\#17589](https://github.com/element-hq/synapse/issues/17589), [\#17592](https://github.com/element-hq/synapse/issues/17592))

### Bugfixes

- Start handlers for new media endpoints when media resource configured. ([\#17483](https://github.com/element-hq/synapse/issues/17483))
- Fix timeline ordering (using `stream_ordering` instead of topological ordering) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17510](https://github.com/element-hq/synapse/issues/17510))
- Fix experimental sliding sync implementation to remember any updates in rooms that were not sent down immediately. ([\#17535](https://github.com/element-hq/synapse/issues/17535))
- Better exclude partially stated rooms if we must await full state in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17538](https://github.com/element-hq/synapse/issues/17538))
- Handle lower-case http headers in `_Mulitpart_Parser_Protocol`. ([\#17545](https://github.com/element-hq/synapse/issues/17545))
- Fix fetching federation signing keys from servers that omit `old_verify_keys`. Contributed by @tulir @ Beeper. ([\#17568](https://github.com/element-hq/synapse/issues/17568))
- Fix bug where we would respond with an error when a remote server asked for media that had a length of 0, using the new multipart federation media endpoint. ([\#17570](https://github.com/element-hq/synapse/issues/17570))

### Improved Documentation

- Clarify default behaviour of the
  [`auto_accept_invites.worker_to_run_on`](https://element-hq.github.io/synapse/develop/usage/configuration/config_documentation.html#auto-accept-invites)
  option. ([\#17515](https://github.com/element-hq/synapse/issues/17515))
- Improve docstrings for profile methods. ([\#17559](https://github.com/element-hq/synapse/issues/17559))

### Internal Changes

- Add more tracing to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17514](https://github.com/element-hq/synapse/issues/17514))
- Fixup comment in sliding sync implementation. ([\#17531](https://github.com/element-hq/synapse/issues/17531))
- Replace override of deprecated method `HTTPAdapter.get_connection` with `get_connection_with_tls_context`. ([\#17536](https://github.com/element-hq/synapse/issues/17536))
- Fix performance of device lists in `/key/changes` and sliding sync. ([\#17537](https://github.com/element-hq/synapse/issues/17537), [\#17548](https://github.com/element-hq/synapse/issues/17548))
- Bump setuptools from 67.6.0 to 72.1.0. ([\#17542](https://github.com/element-hq/synapse/issues/17542))
- Add a utility function for generating random event IDs. ([\#17557](https://github.com/element-hq/synapse/issues/17557))
- Speed up responding to media requests. ([\#17558](https://github.com/element-hq/synapse/issues/17558), [\#17561](https://github.com/element-hq/synapse/issues/17561), [\#17564](https://github.com/element-hq/synapse/issues/17564), [\#17566](https://github.com/element-hq/synapse/issues/17566), [\#17567](https://github.com/element-hq/synapse/issues/17567), [\#17569](https://github.com/element-hq/synapse/issues/17569))
- Test github token before running release script steps. ([\#17562](https://github.com/element-hq/synapse/issues/17562))
- Reduce log spam of multipart files. ([\#17563](https://github.com/element-hq/synapse/issues/17563))
- Refactor per-connection state in experimental sliding sync handler. ([\#17574](https://github.com/element-hq/synapse/issues/17574))
- Add histogram metrics for sliding sync processing time. ([\#17593](https://github.com/element-hq/synapse/issues/17593))

### Updates to locked dependencies

* Bump bytes from 1.6.1 to 1.7.1. ([\#17526](https://github.com/element-hq/synapse/issues/17526))
* Bump lxml from 5.2.2 to 5.3.0. ([\#17550](https://github.com/element-hq/synapse/issues/17550))
* Bump phonenumbers from 8.13.42 to 8.13.43. ([\#17551](https://github.com/element-hq/synapse/issues/17551))
* Bump regex from 1.10.5 to 1.10.6. ([\#17527](https://github.com/element-hq/synapse/issues/17527))
* Bump sentry-sdk from 2.10.0 to 2.12.0. ([\#17553](https://github.com/element-hq/synapse/issues/17553))
* Bump serde from 1.0.204 to 1.0.206. ([\#17556](https://github.com/element-hq/synapse/issues/17556))
* Bump serde_json from 1.0.122 to 1.0.124. ([\#17555](https://github.com/element-hq/synapse/issues/17555))
* Bump sigstore/cosign-installer from 3.5.0 to 3.6.0. ([\#17549](https://github.com/element-hq/synapse/issues/17549))
* Bump types-pyyaml from 6.0.12.20240311 to 6.0.12.20240808. ([\#17552](https://github.com/element-hq/synapse/issues/17552))
* Bump types-requests from 2.31.0.20240406 to 2.32.0.20240712. ([\#17524](https://github.com/element-hq/synapse/issues/17524))

# Synapse 1.113.0 (2024-08-13)

No significant changes since 1.113.0rc1.

# Synapse 1.113.0rc1 (2024-08-06)

### Features

- Track which rooms have been sent to clients in the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17447](https://github.com/element-hq/synapse/issues/17447))
- Add Account Data extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17477](https://github.com/element-hq/synapse/issues/17477))
- Add receipts extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17489](https://github.com/element-hq/synapse/issues/17489))
- Add typing notification extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17505](https://github.com/element-hq/synapse/issues/17505))

### Bugfixes

- Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to handle invite/knock rooms when filtering. ([\#17450](https://github.com/element-hq/synapse/issues/17450))
- Fix a bug introduced in v1.110.0 which caused `/keys/query` to return incomplete results, leading to high network activity and CPU usage on Matrix clients. ([\#17499](https://github.com/element-hq/synapse/issues/17499))

### Improved Documentation

- Update the [`allowed_local_3pids`](https://element-hq.github.io/synapse/v1.112/usage/configuration/config_documentation.html#allowed_local_3pids) config option's msisdn address to a working example. ([\#17476](https://github.com/element-hq/synapse/issues/17476))

### Internal Changes

- Change sliding sync to use their own token format in preparation for storing per-connection state. ([\#17452](https://github.com/element-hq/synapse/issues/17452))
- Ensure we don't send down negative `bump_stamp` in experimental sliding sync endpoint. ([\#17478](https://github.com/element-hq/synapse/issues/17478))
- Do not send down empty room entries down experimental sliding sync endpoint. ([\#17479](https://github.com/element-hq/synapse/issues/17479))
- Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`. ([\#17481](https://github.com/element-hq/synapse/issues/17481), [\#17482](https://github.com/element-hq/synapse/issues/17482))
- Add some opentracing tags and logging to the experimental sliding sync implementation. ([\#17501](https://github.com/element-hq/synapse/issues/17501))
- Split and move Sliding Sync tests so we have some more sane test file sizes. ([\#17504](https://github.com/element-hq/synapse/issues/17504))
- Update the `limited` field description in the Sliding Sync response to accurately describe what it actually represents. ([\#17507](https://github.com/element-hq/synapse/issues/17507))
- Easier to understand `timeline` assertions in Sliding Sync tests. ([\#17511](https://github.com/element-hq/synapse/issues/17511))
- Reset the sliding sync connection if we don't recognize the per-connection state position. ([\#17529](https://github.com/element-hq/synapse/issues/17529))

### Updates to locked dependencies

* Bump bcrypt from 4.1.3 to 4.2.0. ([\#17495](https://github.com/element-hq/synapse/issues/17495))
* Bump black from 24.4.2 to 24.8.0. ([\#17522](https://github.com/element-hq/synapse/issues/17522))
* Bump phonenumbers from 8.13.39 to 8.13.42. ([\#17521](https://github.com/element-hq/synapse/issues/17521))
* Bump ruff from 0.5.4 to 0.5.5. ([\#17494](https://github.com/element-hq/synapse/issues/17494))
* Bump serde_json from 1.0.120 to 1.0.121. ([\#17493](https://github.com/element-hq/synapse/issues/17493))
* Bump serde_json from 1.0.121 to 1.0.122. ([\#17525](https://github.com/element-hq/synapse/issues/17525))
* Bump towncrier from 23.11.0 to 24.7.1. ([\#17523](https://github.com/element-hq/synapse/issues/17523))
* Bump types-pyopenssl from 24.1.0.20240425 to 24.1.0.20240722. ([\#17496](https://github.com/element-hq/synapse/issues/17496))
* Bump types-setuptools from 70.1.0.20240627 to 71.1.0.20240726. ([\#17497](https://github.com/element-hq/synapse/issues/17497))

# Synapse 1.112.0 (2024-07-30)

This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).

Cargo.lock generated

@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"

 [[package]]
 name = "bytes"
-version = "1.6.1"
+version = "1.7.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a12916984aab3fa6e39d655a33e09c0071eb36d6ab3aea5c2d78551f1df6d952"
+checksum = "8318a53db07bb3f8dca91a600466bdb3f2eaadeedfdbcf02e1accbad9271ba50"

 [[package]]
 name = "cfg-if"

@@ -444,9 +444,9 @@ dependencies = [
 [[package]]
 name = "regex"
-version = "1.10.5"
+version = "1.10.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b91213439dad192326a0d7c6ee3955910425f441d7038e0d6933b0aec5c4517f"
+checksum = "4219d74c6b67a3654a9fbebc4b419e22126d13d2f3c4a07ee0cb61ff79a79619"
 dependencies = [
  "aho-corasick",
  "memchr",

@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"

 [[package]]
 name = "serde"
-version = "1.0.204"
+version = "1.0.208"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bc76f558e0cbb2a839d37354c575f1dc3fdc6546b5be373ba43d95f231bf7c12"
+checksum = "cff085d2cb684faa248efb494c39b68e522822ac0de72ccf08109abde717cfb2"
 dependencies = [
  "serde_derive",
 ]

 [[package]]
 name = "serde_derive"
-version = "1.0.204"
+version = "1.0.208"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e0cd7e117be63d3c3678776753929474f3b04a43a080c744d6b0ae2a8c28e222"
+checksum = "24008e81ff7613ed8e5ba0cfaf24e2c2f1e5b8a0495711e44fcd4882fca62bcf"
 dependencies = [
  "proc-macro2",
  "quote",

@@ -505,9 +505,9 @@ dependencies = [
 [[package]]
 name = "serde_json"
-version = "1.0.121"
+version = "1.0.125"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4ab380d7d9f22ef3f21ad3e6c1ebe8e4fc7a2000ccba2e4d71fc96f15b2cb609"
+checksum = "83c8e735a073ccf5be70aa8066aa984eaf2fa000db6c8d0100ae605b366d31ed"
 dependencies = [
  "itoa",
  "memchr",

@@ -1 +0,0 @@
-Track which rooms have been sent to clients in the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +0,0 @@
-Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to handle invite/knock rooms when filtering.

@@ -1 +0,0 @@
-Change sliding sync to use their own token format in preparation for storing per-connection state.

@@ -1 +0,0 @@
-Update the [`allowed_local_3pids`](https://element-hq.github.io/synapse/v1.112/usage/configuration/config_documentation.html#allowed_local_3pids) config option's msisdn address to a working example.

@@ -1 +0,0 @@
-Add Account Data extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +0,0 @@
-Ensure we don't send down negative `bump_stamp` in experimental sliding sync endpoint.

@@ -1 +0,0 @@
-Do not send down empty room entries down experimental sliding sync endpoint.

@@ -1 +0,0 @@
-Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`.

@@ -1 +0,0 @@
-Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`.

@@ -1 +0,0 @@
-Add receipts extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +0,0 @@
-Fix a bug introduced in v1.110.0 which caused `/keys/query` to return incomplete results, leading to high network activity and CPU usage on Matrix clients.

@@ -1 +0,0 @@
-Add some opentracing tags and logging to the experimental sliding sync implementation.

@@ -1 +0,0 @@
-Split and move Sliding Sync tests so we have some more sane test file sizes.

@@ -1 +0,0 @@
-Add typing notification extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +0,0 @@
-Update the `limited` field description in the Sliding Sync response to accurately describe what it actually represents.

changelog.d/17512.misc (new file)

@@ -0,0 +1 @@
+Pre-populate room data used in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint for quick filtering/sorting.

changelog.d/17595.misc (new file)

@@ -0,0 +1 @@
+Refactor sliding sync class into multiple files.

changelog.d/17599.misc (new file)

@@ -0,0 +1 @@
+Store sliding sync per-connection state in the database.

changelog.d/17600.misc (new file)

@@ -0,0 +1 @@
+Make the sliding sync `PerConnectionState` class immutable.

debian/changelog vendored

@@ -1,3 +1,21 @@
+matrix-synapse-py3 (1.114.0~rc1) stable; urgency=medium
+
+  * New synapse release 1.114.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 20 Aug 2024 12:55:28 +0000
+
+matrix-synapse-py3 (1.113.0) stable; urgency=medium
+
+  * New Synapse release 1.113.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 13 Aug 2024 14:36:56 +0100
+
+matrix-synapse-py3 (1.113.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.113.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 06 Aug 2024 12:23:23 +0100
+
 matrix-synapse-py3 (1.112.0) stable; urgency=medium

   * New Synapse release 1.112.0.


@@ -21,8 +21,10 @@ incrementing integer, but backfilled events start with `stream_ordering=-1` and

 ---

-- `/sync` returns things in the order they arrive at the server (`stream_ordering`).
-- `/messages` (and `/backfill` in the federation API) return them in the order determined by the event graph `(topological_ordering, stream_ordering)`.
+- Incremental `/sync?since=xxx` returns things in the order they arrive at the server
+  (`stream_ordering`).
+- Initial `/sync`, `/messages` (and `/backfill` in the federation API) return them in
+  the order determined by the event graph `(topological_ordering, stream_ordering)`.

 The general idea is that, if you're following a room in real-time (i.e.
 `/sync`), you probably want to see the messages as they arrive at your server,


@@ -4685,7 +4685,9 @@ This setting has the following sub-options:
 * `only_for_direct_messages`: Whether invites should be automatically accepted for all room types, or only
   for direct messages. Defaults to false.
 * `only_from_local_users`: Whether to only automatically accept invites from users on this homeserver. Defaults to false.
-* `worker_to_run_on`: Which worker to run this module on. This must match the "worker_name".
+* `worker_to_run_on`: Which worker to run this module on. This must match
+  the "worker_name". If not set or `null`, invites will be accepted on the
+  main process.

 NOTE: Care should be taken not to enable this setting if the `synapse_auto_accept_invite` module is enabled and installed.
 The two modules will compete to perform the same task and may result in undesired behaviour. For example, multiple join
poetry.lock generated

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
[[package]]
name = "annotated-types"
@@ -107,33 +107,33 @@ typecheck = ["mypy"]

 [[package]]
 name = "black"
-version = "24.4.2"
+version = "24.8.0"
 description = "The uncompromising code formatter."
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "black-24.4.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:dd1b5a14e417189db4c7b64a6540f31730713d173f0b63e55fabd52d61d8fdce"},
-    {file = "black-24.4.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8e537d281831ad0e71007dcdcbe50a71470b978c453fa41ce77186bbe0ed6021"},
-    {file = "black-24.4.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eaea3008c281f1038edb473c1aa8ed8143a5535ff18f978a318f10302b254063"},
-    {file = "black-24.4.2-cp310-cp310-win_amd64.whl", hash = "sha256:7768a0dbf16a39aa5e9a3ded568bb545c8c2727396d063bbaf847df05b08cd96"},
-    {file = "black-24.4.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:257d724c2c9b1660f353b36c802ccece186a30accc7742c176d29c146df6e474"},
-    {file = "black-24.4.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:bdde6f877a18f24844e381d45e9947a49e97933573ac9d4345399be37621e26c"},
-    {file = "black-24.4.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e151054aa00bad1f4e1f04919542885f89f5f7d086b8a59e5000e6c616896ffb"},
-    {file = "black-24.4.2-cp311-cp311-win_amd64.whl", hash = "sha256:7e122b1c4fb252fd85df3ca93578732b4749d9be076593076ef4d07a0233c3e1"},
-    {file = "black-24.4.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:accf49e151c8ed2c0cdc528691838afd217c50412534e876a19270fea1e28e2d"},
-    {file = "black-24.4.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:88c57dc656038f1ab9f92b3eb5335ee9b021412feaa46330d5eba4e51fe49b04"},
-    {file = "black-24.4.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:be8bef99eb46d5021bf053114442914baeb3649a89dc5f3a555c88737e5e98fc"},
-    {file = "black-24.4.2-cp312-cp312-win_amd64.whl", hash = "sha256:415e686e87dbbe6f4cd5ef0fbf764af7b89f9057b97c908742b6008cc554b9c0"},
-    {file = "black-24.4.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bf10f7310db693bb62692609b397e8d67257c55f949abde4c67f9cc574492cc7"},
-    {file = "black-24.4.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:98e123f1d5cfd42f886624d84464f7756f60ff6eab89ae845210631714f6db94"},
-    {file = "black-24.4.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:48a85f2cb5e6799a9ef05347b476cce6c182d6c71ee36925a6c194d074336ef8"},
-    {file = "black-24.4.2-cp38-cp38-win_amd64.whl", hash = "sha256:b1530ae42e9d6d5b670a34db49a94115a64596bc77710b1d05e9801e62ca0a7c"},
-    {file = "black-24.4.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:37aae07b029fa0174d39daf02748b379399b909652a806e5708199bd93899da1"},
-    {file = "black-24.4.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:da33a1a5e49c4122ccdfd56cd021ff1ebc4a1ec4e2d01594fef9b6f267a9e741"},
-    {file = "black-24.4.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef703f83fc32e131e9bcc0a5094cfe85599e7109f896fe8bc96cc402f3eb4b6e"},
-    {file = "black-24.4.2-cp39-cp39-win_amd64.whl", hash = "sha256:b9176b9832e84308818a99a561e90aa479e73c523b3f77afd07913380ae2eab7"},
-    {file = "black-24.4.2-py3-none-any.whl", hash = "sha256:d36ed1124bb81b32f8614555b34cc4259c3fbc7eec17870e8ff8ded335b58d8c"},
-    {file = "black-24.4.2.tar.gz", hash = "sha256:c872b53057f000085da66a19c55d68f6f8ddcac2642392ad3a355878406fbd4d"},
+    {file = "black-24.8.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:09cdeb74d494ec023ded657f7092ba518e8cf78fa8386155e4a03fdcc44679e6"},
+    {file = "black-24.8.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:81c6742da39f33b08e791da38410f32e27d632260e599df7245cccee2064afeb"},
+    {file = "black-24.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:707a1ca89221bc8a1a64fb5e15ef39cd755633daa672a9db7498d1c19de66a42"},
+    {file = "black-24.8.0-cp310-cp310-win_amd64.whl", hash = "sha256:d6417535d99c37cee4091a2f24eb2b6d5ec42b144d50f1f2e436d9fe1916fe1a"},
+    {file = "black-24.8.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:fb6e2c0b86bbd43dee042e48059c9ad7830abd5c94b0bc518c0eeec57c3eddc1"},
+    {file = "black-24.8.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:837fd281f1908d0076844bc2b801ad2d369c78c45cf800cad7b61686051041af"},
+    {file = "black-24.8.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:62e8730977f0b77998029da7971fa896ceefa2c4c4933fcd593fa599ecbf97a4"},
+    {file = "black-24.8.0-cp311-cp311-win_amd64.whl", hash = "sha256:72901b4913cbac8972ad911dc4098d5753704d1f3c56e44ae8dce99eecb0e3af"},
+    {file = "black-24.8.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:7c046c1d1eeb7aea9335da62472481d3bbf3fd986e093cffd35f4385c94ae368"},
+    {file = "black-24.8.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:649f6d84ccbae73ab767e206772cc2d7a393a001070a4c814a546afd0d423aed"},
+    {file = "black-24.8.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2b59b250fdba5f9a9cd9d0ece6e6d993d91ce877d121d161e4698af3eb9c1018"},
+    {file = "black-24.8.0-cp312-cp312-win_amd64.whl", hash = "sha256:6e55d30d44bed36593c3163b9bc63bf58b3b30e4611e4d88a0c3c239930ed5b2"},
+    {file = "black-24.8.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:505289f17ceda596658ae81b61ebbe2d9b25aa78067035184ed0a9d855d18afd"},
+    {file = "black-24.8.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b19c9ad992c7883ad84c9b22aaa73562a16b819c1d8db7a1a1a49fb7ec13c7d2"},
+    {file = "black-24.8.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1f13f7f386f86f8121d76599114bb8c17b69d962137fc70efe56137727c7047e"},
+    {file = "black-24.8.0-cp38-cp38-win_amd64.whl", hash = "sha256:f490dbd59680d809ca31efdae20e634f3fae27fba3ce0ba3208333b713bc3920"},
+    {file = "black-24.8.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:eab4dd44ce80dea27dc69db40dab62d4ca96112f87996bca68cd75639aeb2e4c"},
+    {file = "black-24.8.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3c4285573d4897a7610054af5a890bde7c65cb466040c5f0c8b732812d7f0e5e"},
+    {file = "black-24.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9e84e33b37be070ba135176c123ae52a51f82306def9f7d063ee302ecab2cf47"},
+    {file = "black-24.8.0-cp39-cp39-win_amd64.whl", hash = "sha256:73bbf84ed136e45d451a260c6b73ed674652f90a2b3211d6a35e78054563a9bb"},
+    {file = "black-24.8.0-py3-none-any.whl", hash = "sha256:972085c618ee94f402da1af548a4f218c754ea7e5dc70acb168bfaca4c2542ed"},
+    {file = "black-24.8.0.tar.gz", hash = "sha256:2500945420b6784c38b9ee885af039f5e7471ef284ab03fa35ecdde4688cd83f"},
 ]

 [package.dependencies]
@@ -403,43 +403,38 @@ files = [
[[package]]
name = "cryptography"
version = "42.0.8"
version = "43.0.0"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = ">=3.7"
files = [
{file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:81d8a521705787afe7a18d5bfb47ea9d9cc068206270aad0b96a725022e18d2e"},
{file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:961e61cefdcb06e0c6d7e3a1b22ebe8b996eb2bf50614e89384be54c48c6b63d"},
{file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e3ec3672626e1b9e55afd0df6d774ff0e953452886e06e0f1eb7eb0c832e8902"},
{file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e599b53fd95357d92304510fb7bda8523ed1f79ca98dce2f43c115950aa78801"},
{file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5226d5d21ab681f432a9c1cf8b658c0cb02533eece706b155e5fbd8a0cdd3949"},
{file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:6b7c4f03ce01afd3b76cf69a5455caa9cfa3de8c8f493e0d3ab7d20611c8dae9"},
{file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:2346b911eb349ab547076f47f2e035fc8ff2c02380a7cbbf8d87114fa0f1c583"},
{file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:ad803773e9df0b92e0a817d22fd8a3675493f690b96130a5e24f1b8fabbea9c7"},
{file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2f66d9cd9147ee495a8374a45ca445819f8929a3efcd2e3df6428e46c3cbb10b"},
{file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d45b940883a03e19e944456a558b67a41160e367a719833c53de6911cabba2b7"},
{file = "cryptography-42.0.8-cp37-abi3-win32.whl", hash = "sha256:a0c5b2b0585b6af82d7e385f55a8bc568abff8923af147ee3c07bd8b42cda8b2"},
{file = "cryptography-42.0.8-cp37-abi3-win_amd64.whl", hash = "sha256:57080dee41209e556a9a4ce60d229244f7a66ef52750f813bfbe18959770cfba"},
{file = "cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:dea567d1b0e8bc5764b9443858b673b734100c2871dc93163f58c46a97a83d28"},
{file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4783183f7cb757b73b2ae9aed6599b96338eb957233c58ca8f49a49cc32fd5e"},
{file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0608251135d0e03111152e41f0cc2392d1e74e35703960d4190b2e0f4ca9c70"},
{file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dc0fdf6787f37b1c6b08e6dfc892d9d068b5bdb671198c72072828b80bd5fe4c"},
{file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9c0c1716c8447ee7dbf08d6db2e5c41c688544c61074b54fc4564196f55c25a7"},
{file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:fff12c88a672ab9c9c1cf7b0c80e3ad9e2ebd9d828d955c126be4fd3e5578c9e"},
{file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:cafb92b2bc622cd1aa6a1dce4b93307792633f4c5fe1f46c6b97cf67073ec961"},
{file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:31f721658a29331f895a5a54e7e82075554ccfb8b163a18719d342f5ffe5ecb1"},
{file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b297f90c5723d04bcc8265fc2a0f86d4ea2e0f7ab4b6994459548d3a6b992a14"},
{file = "cryptography-42.0.8-cp39-abi3-win32.whl", hash = "sha256:2f88d197e66c65be5e42cd72e5c18afbfae3f741742070e3019ac8f4ac57262c"},
{file = "cryptography-42.0.8-cp39-abi3-win_amd64.whl", hash = "sha256:fa76fbb7596cc5839320000cdd5d0955313696d9511debab7ee7278fc8b5c84a"},
{file = "cryptography-42.0.8-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:ba4f0a211697362e89ad822e667d8d340b4d8d55fae72cdd619389fb5912eefe"},
{file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:81884c4d096c272f00aeb1f11cf62ccd39763581645b0812e99a91505fa48e0c"},
{file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c9bb2ae11bfbab395bdd072985abde58ea9860ed84e59dbc0463a5d0159f5b71"},
{file = "cryptography-42.0.8-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:7016f837e15b0a1c119d27ecd89b3515f01f90a8615ed5e9427e30d9cdbfed3d"},
{file = "cryptography-42.0.8-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5a94eccb2a81a309806027e1670a358b99b8fe8bfe9f8d329f27d72c094dde8c"},
{file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:dec9b018df185f08483f294cae6ccac29e7a6e0678996587363dc352dc65c842"},
{file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:343728aac38decfdeecf55ecab3264b015be68fc2816ca800db649607aeee648"},
{file = "cryptography-42.0.8-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:013629ae70b40af70c9a7a5db40abe5d9054e6f4380e50ce769947b73bf3caad"},
{file = "cryptography-42.0.8.tar.gz", hash = "sha256:8d09d05439ce7baa8e9e95b07ec5b6c886f548deb7e0f69ef25f64b3bce842f2"},
{file = "cryptography-43.0.0-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:64c3f16e2a4fc51c0d06af28441881f98c5d91009b8caaff40cf3548089e9c74"},
{file = "cryptography-43.0.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3dcdedae5c7710b9f97ac6bba7e1052b95c7083c9d0e9df96e02a1932e777895"},
{file = "cryptography-43.0.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d9a1eca329405219b605fac09ecfc09ac09e595d6def650a437523fcd08dd22"},
{file = "cryptography-43.0.0-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ea9e57f8ea880eeea38ab5abf9fbe39f923544d7884228ec67d666abd60f5a47"},
{file = "cryptography-43.0.0-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9a8d6802e0825767476f62aafed40532bd435e8a5f7d23bd8b4f5fd04cc80ecf"},
{file = "cryptography-43.0.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:cc70b4b581f28d0a254d006f26949245e3657d40d8857066c2ae22a61222ef55"},
{file = "cryptography-43.0.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:4a997df8c1c2aae1e1e5ac49c2e4f610ad037fc5a3aadc7b64e39dea42249431"},
{file = "cryptography-43.0.0-cp37-abi3-win32.whl", hash = "sha256:6e2b11c55d260d03a8cf29ac9b5e0608d35f08077d8c087be96287f43af3ccdc"},
{file = "cryptography-43.0.0-cp37-abi3-win_amd64.whl", hash = "sha256:31e44a986ceccec3d0498e16f3d27b2ee5fdf69ce2ab89b52eaad1d2f33d8778"},
{file = "cryptography-43.0.0-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:7b3f5fe74a5ca32d4d0f302ffe6680fcc5c28f8ef0dc0ae8f40c0f3a1b4fca66"},
{file = "cryptography-43.0.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ac1955ce000cb29ab40def14fd1bbfa7af2017cca696ee696925615cafd0dce5"},
{file = "cryptography-43.0.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:299d3da8e00b7e2b54bb02ef58d73cd5f55fb31f33ebbf33bd00d9aa6807df7e"},
{file = "cryptography-43.0.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ee0c405832ade84d4de74b9029bedb7b31200600fa524d218fc29bfa371e97f5"},
{file = "cryptography-43.0.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:cb013933d4c127349b3948aa8aaf2f12c0353ad0eccd715ca789c8a0f671646f"},
{file = "cryptography-43.0.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:fdcb265de28585de5b859ae13e3846a8e805268a823a12a4da2597f1f5afc9f0"},
{file = "cryptography-43.0.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:2905ccf93a8a2a416f3ec01b1a7911c3fe4073ef35640e7ee5296754e30b762b"},
{file = "cryptography-43.0.0-cp39-abi3-win32.whl", hash = "sha256:47ca71115e545954e6c1d207dd13461ab81f4eccfcb1345eac874828b5e3eaaf"},
{file = "cryptography-43.0.0-cp39-abi3-win_amd64.whl", hash = "sha256:0663585d02f76929792470451a5ba64424acc3cd5227b03921dab0e2f27b1709"},
{file = "cryptography-43.0.0-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:2c6d112bf61c5ef44042c253e4859b3cbbb50df2f78fa8fae6747a7814484a70"},
{file = "cryptography-43.0.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:844b6d608374e7d08f4f6e6f9f7b951f9256db41421917dfb2d003dde4cd6b66"},
{file = "cryptography-43.0.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:51956cf8730665e2bdf8ddb8da0056f699c1a5715648c1b0144670c1ba00b48f"},
{file = "cryptography-43.0.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:aae4d918f6b180a8ab8bf6511a419473d107df4dbb4225c7b48c5c9602c38c7f"},
{file = "cryptography-43.0.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:232ce02943a579095a339ac4b390fbbe97f5b5d5d107f8a08260ea2768be8cc2"},
{file = "cryptography-43.0.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5bcb8a5620008a8034d39bce21dc3e23735dfdb6a33a06974739bfa04f853947"},
{file = "cryptography-43.0.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:08a24a7070b2b6804c1940ff0f910ff728932a9d0e80e7814234269f9d46d069"},
{file = "cryptography-43.0.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:e9c5266c432a1e23738d178e51c2c7a5e2ddf790f248be939448c0ba2021f9d1"},
{file = "cryptography-43.0.0.tar.gz", hash = "sha256:b88075ada2d51aa9f18283532c9f60e72170041bba88d7f37e49cbb10275299e"},
]
[package.dependencies]
@@ -452,7 +447,7 @@ nox = ["nox"]
pep8test = ["check-sdist", "click", "mypy", "ruff"]
sdist = ["build"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
test = ["certifi", "cryptography-vectors (==43.0.0)", "pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"]
test-randomorder = ["pytest-randomly"]
[[package]]
@@ -998,153 +993,149 @@ pyasn1 = ">=0.4.6"
[[package]]
name = "lxml"
version = "5.2.2"
version = "5.3.0"
description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
optional = true
python-versions = ">=3.6"
files = [
{file = "lxml-5.2.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:364d03207f3e603922d0d3932ef363d55bbf48e3647395765f9bfcbdf6d23632"},
{file = "lxml-5.2.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:50127c186f191b8917ea2fb8b206fbebe87fd414a6084d15568c27d0a21d60db"},
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:74e4f025ef3db1c6da4460dd27c118d8cd136d0391da4e387a15e48e5c975147"},
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:981a06a3076997adf7c743dcd0d7a0415582661e2517c7d961493572e909aa1d"},
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aef5474d913d3b05e613906ba4090433c515e13ea49c837aca18bde190853dff"},
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1e275ea572389e41e8b039ac076a46cb87ee6b8542df3fff26f5baab43713bca"},
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5b65529bb2f21ac7861a0e94fdbf5dc0daab41497d18223b46ee8515e5ad297"},
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:bcc98f911f10278d1daf14b87d65325851a1d29153caaf146877ec37031d5f36"},
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:b47633251727c8fe279f34025844b3b3a3e40cd1b198356d003aa146258d13a2"},
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:fbc9d316552f9ef7bba39f4edfad4a734d3d6f93341232a9dddadec4f15d425f"},
{file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:13e69be35391ce72712184f69000cda04fc89689429179bc4c0ae5f0b7a8c21b"},
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:3b6a30a9ab040b3f545b697cb3adbf3696c05a3a68aad172e3fd7ca73ab3c835"},
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:a233bb68625a85126ac9f1fc66d24337d6e8a0f9207b688eec2e7c880f012ec0"},
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:dfa7c241073d8f2b8e8dbc7803c434f57dbb83ae2a3d7892dd068d99e96efe2c"},
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:1a7aca7964ac4bb07680d5c9d63b9d7028cace3e2d43175cb50bba8c5ad33316"},
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:ae4073a60ab98529ab8a72ebf429f2a8cc612619a8c04e08bed27450d52103c0"},
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:ffb2be176fed4457e445fe540617f0252a72a8bc56208fd65a690fdb1f57660b"},
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:e290d79a4107d7d794634ce3e985b9ae4f920380a813717adf61804904dc4393"},
{file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:96e85aa09274955bb6bd483eaf5b12abadade01010478154b0ec70284c1b1526"},
{file = "lxml-5.2.2-cp310-cp310-win32.whl", hash = "sha256:f956196ef61369f1685d14dad80611488d8dc1ef00be57c0c5a03064005b0f30"},
{file = "lxml-5.2.2-cp310-cp310-win_amd64.whl", hash = "sha256:875a3f90d7eb5c5d77e529080d95140eacb3c6d13ad5b616ee8095447b1d22e7"},
{file = "lxml-5.2.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:45f9494613160d0405682f9eee781c7e6d1bf45f819654eb249f8f46a2c22545"},
{file = "lxml-5.2.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b0b3f2df149efb242cee2ffdeb6674b7f30d23c9a7af26595099afaf46ef4e88"},
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d28cb356f119a437cc58a13f8135ab8a4c8ece18159eb9194b0d269ec4e28083"},
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:657a972f46bbefdbba2d4f14413c0d079f9ae243bd68193cb5061b9732fa54c1"},
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b74b9ea10063efb77a965a8d5f4182806fbf59ed068b3c3fd6f30d2ac7bee734"},
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:07542787f86112d46d07d4f3c4e7c760282011b354d012dc4141cc12a68cef5f"},
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:303f540ad2dddd35b92415b74b900c749ec2010e703ab3bfd6660979d01fd4ed"},
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:2eb2227ce1ff998faf0cd7fe85bbf086aa41dfc5af3b1d80867ecfe75fb68df3"},
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:1d8a701774dfc42a2f0b8ccdfe7dbc140500d1049e0632a611985d943fcf12df"},
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:56793b7a1a091a7c286b5f4aa1fe4ae5d1446fe742d00cdf2ffb1077865db10d"},
{file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:eb00b549b13bd6d884c863554566095bf6fa9c3cecb2e7b399c4bc7904cb33b5"},
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1a2569a1f15ae6c8c64108a2cd2b4a858fc1e13d25846be0666fc144715e32ab"},
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:8cf85a6e40ff1f37fe0f25719aadf443686b1ac7652593dc53c7ef9b8492b115"},
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:d237ba6664b8e60fd90b8549a149a74fcc675272e0e95539a00522e4ca688b04"},
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0b3f5016e00ae7630a4b83d0868fca1e3d494c78a75b1c7252606a3a1c5fc2ad"},
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:23441e2b5339bc54dc949e9e675fa35efe858108404ef9aa92f0456929ef6fe8"},
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:2fb0ba3e8566548d6c8e7dd82a8229ff47bd8fb8c2da237607ac8e5a1b8312e5"},
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:79d1fb9252e7e2cfe4de6e9a6610c7cbb99b9708e2c3e29057f487de5a9eaefa"},
{file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6dcc3d17eac1df7859ae01202e9bb11ffa8c98949dcbeb1069c8b9a75917e01b"},
{file = "lxml-5.2.2-cp311-cp311-win32.whl", hash = "sha256:4c30a2f83677876465f44c018830f608fa3c6a8a466eb223535035fbc16f3438"},
{file = "lxml-5.2.2-cp311-cp311-win_amd64.whl", hash = "sha256:49095a38eb333aaf44c06052fd2ec3b8f23e19747ca7ec6f6c954ffea6dbf7be"},
{file = "lxml-5.2.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:7429e7faa1a60cad26ae4227f4dd0459efde239e494c7312624ce228e04f6391"},
{file = "lxml-5.2.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:50ccb5d355961c0f12f6cf24b7187dbabd5433f29e15147a67995474f27d1776"},
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc911208b18842a3a57266d8e51fc3cfaccee90a5351b92079beed912a7914c2"},
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:33ce9e786753743159799fdf8e92a5da351158c4bfb6f2db0bf31e7892a1feb5"},
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ec87c44f619380878bd49ca109669c9f221d9ae6883a5bcb3616785fa8f94c97"},
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08ea0f606808354eb8f2dfaac095963cb25d9d28e27edcc375d7b30ab01abbf6"},
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75a9632f1d4f698b2e6e2e1ada40e71f369b15d69baddb8968dcc8e683839b18"},
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:74da9f97daec6928567b48c90ea2c82a106b2d500f397eeb8941e47d30b1ca85"},
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:0969e92af09c5687d769731e3f39ed62427cc72176cebb54b7a9d52cc4fa3b73"},
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:9164361769b6ca7769079f4d426a41df6164879f7f3568be9086e15baca61466"},
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:d26a618ae1766279f2660aca0081b2220aca6bd1aa06b2cf73f07383faf48927"},
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ab67ed772c584b7ef2379797bf14b82df9aa5f7438c5b9a09624dd834c1c1aaf"},
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:3d1e35572a56941b32c239774d7e9ad724074d37f90c7a7d499ab98761bd80cf"},
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:8268cbcd48c5375f46e000adb1390572c98879eb4f77910c6053d25cc3ac2c67"},
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e282aedd63c639c07c3857097fc0e236f984ceb4089a8b284da1c526491e3f3d"},
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6dfdc2bfe69e9adf0df4915949c22a25b39d175d599bf98e7ddf620a13678585"},
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:4aefd911793b5d2d7a921233a54c90329bf3d4a6817dc465f12ffdfe4fc7b8fe"},
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:8b8df03a9e995b6211dafa63b32f9d405881518ff1ddd775db4e7b98fb545e1c"},
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f11ae142f3a322d44513de1018b50f474f8f736bc3cd91d969f464b5bfef8836"},
{file = "lxml-5.2.2-cp312-cp312-win32.whl", hash = "sha256:16a8326e51fcdffc886294c1e70b11ddccec836516a343f9ed0f82aac043c24a"},
{file = "lxml-5.2.2-cp312-cp312-win_amd64.whl", hash = "sha256:bbc4b80af581e18568ff07f6395c02114d05f4865c2812a1f02f2eaecf0bfd48"},
{file = "lxml-5.2.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e3d9d13603410b72787579769469af730c38f2f25505573a5888a94b62b920f8"},
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:38b67afb0a06b8575948641c1d6d68e41b83a3abeae2ca9eed2ac59892b36706"},
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c689d0d5381f56de7bd6966a4541bff6e08bf8d3871bbd89a0c6ab18aa699573"},
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:cf2a978c795b54c539f47964ec05e35c05bd045db5ca1e8366988c7f2fe6b3ce"},
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:739e36ef7412b2bd940f75b278749106e6d025e40027c0b94a17ef7968d55d56"},
{file = "lxml-5.2.2-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d8bbcd21769594dbba9c37d3c819e2d5847656ca99c747ddb31ac1701d0c0ed9"},
{file = "lxml-5.2.2-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:2304d3c93f2258ccf2cf7a6ba8c761d76ef84948d87bf9664e14d203da2cd264"},
{file = "lxml-5.2.2-cp36-cp36m-win32.whl", hash = "sha256:02437fb7308386867c8b7b0e5bc4cd4b04548b1c5d089ffb8e7b31009b961dc3"},
{file = "lxml-5.2.2-cp36-cp36m-win_amd64.whl", hash = "sha256:edcfa83e03370032a489430215c1e7783128808fd3e2e0a3225deee278585196"},
{file = "lxml-5.2.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:28bf95177400066596cdbcfc933312493799382879da504633d16cf60bba735b"},
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a745cc98d504d5bd2c19b10c79c61c7c3df9222629f1b6210c0368177589fb8"},
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1b590b39ef90c6b22ec0be925b211298e810b4856909c8ca60d27ffbca6c12e6"},
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b336b0416828022bfd5a2e3083e7f5ba54b96242159f83c7e3eebaec752f1716"},
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:c2faf60c583af0d135e853c86ac2735ce178f0e338a3c7f9ae8f622fd2eb788c"},
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:4bc6cb140a7a0ad1f7bc37e018d0ed690b7b6520ade518285dc3171f7a117905"},
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7ff762670cada8e05b32bf1e4dc50b140790909caa8303cfddc4d702b71ea184"},
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:57f0a0bbc9868e10ebe874e9f129d2917750adf008fe7b9c1598c0fbbfdde6a6"},
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:a6d2092797b388342c1bc932077ad232f914351932353e2e8706851c870bca1f"},
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:60499fe961b21264e17a471ec296dcbf4365fbea611bf9e303ab69db7159ce61"},
{file = "lxml-5.2.2-cp37-cp37m-win32.whl", hash = "sha256:d9b342c76003c6b9336a80efcc766748a333573abf9350f4094ee46b006ec18f"},
{file = "lxml-5.2.2-cp37-cp37m-win_amd64.whl", hash = "sha256:b16db2770517b8799c79aa80f4053cd6f8b716f21f8aca962725a9565ce3ee40"},
{file = "lxml-5.2.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7ed07b3062b055d7a7f9d6557a251cc655eed0b3152b76de619516621c56f5d3"},
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f60fdd125d85bf9c279ffb8e94c78c51b3b6a37711464e1f5f31078b45002421"},
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8a7e24cb69ee5f32e003f50e016d5fde438010c1022c96738b04fc2423e61706"},
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23cfafd56887eaed93d07bc4547abd5e09d837a002b791e9767765492a75883f"},
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:19b4e485cd07b7d83e3fe3b72132e7df70bfac22b14fe4bf7a23822c3a35bff5"},
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:7ce7ad8abebe737ad6143d9d3bf94b88b93365ea30a5b81f6877ec9c0dee0a48"},
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:e49b052b768bb74f58c7dda4e0bdf7b79d43a9204ca584ffe1fb48a6f3c84c66"},
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d14a0d029a4e176795cef99c056d58067c06195e0c7e2dbb293bf95c08f772a3"},
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:be49ad33819d7dcc28a309b86d4ed98e1a65f3075c6acd3cd4fe32103235222b"},
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:a6d17e0370d2516d5bb9062c7b4cb731cff921fc875644c3d751ad857ba9c5b1"},
{file = "lxml-5.2.2-cp38-cp38-win32.whl", hash = "sha256:5b8c041b6265e08eac8a724b74b655404070b636a8dd6d7a13c3adc07882ef30"},
{file = "lxml-5.2.2-cp38-cp38-win_amd64.whl", hash = "sha256:f61efaf4bed1cc0860e567d2ecb2363974d414f7f1f124b1df368bbf183453a6"},
{file = "lxml-5.2.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:fb91819461b1b56d06fa4bcf86617fac795f6a99d12239fb0c68dbeba41a0a30"},
{file = "lxml-5.2.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d4ed0c7cbecde7194cd3228c044e86bf73e30a23505af852857c09c24e77ec5d"},
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54401c77a63cc7d6dc4b4e173bb484f28a5607f3df71484709fe037c92d4f0ed"},
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:625e3ef310e7fa3a761d48ca7ea1f9d8718a32b1542e727d584d82f4453d5eeb"},
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:519895c99c815a1a24a926d5b60627ce5ea48e9f639a5cd328bda0515ea0f10c"},
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c7079d5eb1c1315a858bbf180000757db8ad904a89476653232db835c3114001"},
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:343ab62e9ca78094f2306aefed67dcfad61c4683f87eee48ff2fd74902447726"},
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:cd9e78285da6c9ba2d5c769628f43ef66d96ac3085e59b10ad4f3707980710d3"},
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_ppc64le.whl", hash = "sha256:546cf886f6242dff9ec206331209db9c8e1643ae642dea5fdbecae2453cb50fd"},
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_s390x.whl", hash = "sha256:02f6a8eb6512fdc2fd4ca10a49c341c4e109aa6e9448cc4859af5b949622715a"},
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:339ee4a4704bc724757cd5dd9dc8cf4d00980f5d3e6e06d5847c1b594ace68ab"},
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0a028b61a2e357ace98b1615fc03f76eb517cc028993964fe08ad514b1e8892d"},
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:f90e552ecbad426eab352e7b2933091f2be77115bb16f09f78404861c8322981"},
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:d83e2d94b69bf31ead2fa45f0acdef0757fa0458a129734f59f67f3d2eb7ef32"},
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a02d3c48f9bb1e10c7788d92c0c7db6f2002d024ab6e74d6f45ae33e3d0288a3"},
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6d68ce8e7b2075390e8ac1e1d3a99e8b6372c694bbe612632606d1d546794207"},
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:453d037e09a5176d92ec0fd282e934ed26d806331a8b70ab431a81e2fbabf56d"},
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:3b019d4ee84b683342af793b56bb35034bd749e4cbdd3d33f7d1107790f8c472"},
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:cb3942960f0beb9f46e2a71a3aca220d1ca32feb5a398656be934320804c0df9"},
{file = "lxml-5.2.2-cp39-cp39-win32.whl", hash = "sha256:ac6540c9fff6e3813d29d0403ee7a81897f1d8ecc09a8ff84d2eea70ede1cdbf"},
{file = "lxml-5.2.2-cp39-cp39-win_amd64.whl", hash = "sha256:610b5c77428a50269f38a534057444c249976433f40f53e3b47e68349cca1425"},
{file = "lxml-5.2.2-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:b537bd04d7ccd7c6350cdaaaad911f6312cbd61e6e6045542f781c7f8b2e99d2"},
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4820c02195d6dfb7b8508ff276752f6b2ff8b64ae5d13ebe02e7667e035000b9"},
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2a09f6184f17a80897172863a655467da2b11151ec98ba8d7af89f17bf63dae"},
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:76acba4c66c47d27c8365e7c10b3d8016a7da83d3191d053a58382311a8bf4e1"},
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b128092c927eaf485928cec0c28f6b8bead277e28acf56800e972aa2c2abd7a2"},
{file = "lxml-5.2.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:ae791f6bd43305aade8c0e22f816b34f3b72b6c820477aab4d18473a37e8090b"},
{file = "lxml-5.2.2-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a2f6a1bc2460e643785a2cde17293bd7a8f990884b822f7bca47bee0a82fc66b"},
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e8d351ff44c1638cb6e980623d517abd9f580d2e53bfcd18d8941c052a5a009"},
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bec4bd9133420c5c52d562469c754f27c5c9e36ee06abc169612c959bd7dbb07"},
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:55ce6b6d803890bd3cc89975fca9de1dff39729b43b73cb15ddd933b8bc20484"},
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8ab6a358d1286498d80fe67bd3d69fcbc7d1359b45b41e74c4a26964ca99c3f8"},
{file = "lxml-5.2.2-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:06668e39e1f3c065349c51ac27ae430719d7806c026fec462e5693b08b95696b"},
{file = "lxml-5.2.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9cd5323344d8ebb9fb5e96da5de5ad4ebab993bbf51674259dbe9d7a18049525"},
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:89feb82ca055af0fe797a2323ec9043b26bc371365847dbe83c7fd2e2f181c34"},
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e481bba1e11ba585fb06db666bfc23dbe181dbafc7b25776156120bf12e0d5a6"},
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:9d6c6ea6a11ca0ff9cd0390b885984ed31157c168565702959c25e2191674a14"},
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:3d98de734abee23e61f6b8c2e08a88453ada7d6486dc7cdc82922a03968928db"},
{file = "lxml-5.2.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:69ab77a1373f1e7563e0fb5a29a8440367dec051da6c7405333699d07444f511"},
{file = "lxml-5.2.2-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:34e17913c431f5ae01d8658dbf792fdc457073dcdfbb31dc0cc6ab256e664a8d"},
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:05f8757b03208c3f50097761be2dea0aba02e94f0dc7023ed73a7bb14ff11eb0"},
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a520b4f9974b0a0a6ed73c2154de57cdfd0c8800f4f15ab2b73238ffed0b36e"},
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5e097646944b66207023bc3c634827de858aebc226d5d4d6d16f0b77566ea182"},
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b5e4ef22ff25bfd4ede5f8fb30f7b24446345f3e79d9b7455aef2836437bc38a"},
{file = "lxml-5.2.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:ff69a9a0b4b17d78170c73abe2ab12084bdf1691550c5629ad1fe7849433f324"},
{file = "lxml-5.2.2.tar.gz", hash = "sha256:bb2dc4898180bea79863d5487e5f9c7c34297414bad54bcd0f0852aee9cfdb87"},
{file = "lxml-5.3.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:dd36439be765e2dde7660212b5275641edbc813e7b24668831a5c8ac91180656"},
{file = "lxml-5.3.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ae5fe5c4b525aa82b8076c1a59d642c17b6e8739ecf852522c6321852178119d"},
{file = "lxml-5.3.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:501d0d7e26b4d261fca8132854d845e4988097611ba2531408ec91cf3fd9d20a"},
{file = "lxml-5.3.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb66442c2546446944437df74379e9cf9e9db353e61301d1a0e26482f43f0dd8"},
{file = "lxml-5.3.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9e41506fec7a7f9405b14aa2d5c8abbb4dbbd09d88f9496958b6d00cb4d45330"},
{file = "lxml-5.3.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f7d4a670107d75dfe5ad080bed6c341d18c4442f9378c9f58e5851e86eb79965"},
{file = "lxml-5.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:41ce1f1e2c7755abfc7e759dc34d7d05fd221723ff822947132dc934d122fe22"},
{file = "lxml-5.3.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:44264ecae91b30e5633013fb66f6ddd05c006d3e0e884f75ce0b4755b3e3847b"},
{file = "lxml-5.3.0-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:3c174dc350d3ec52deb77f2faf05c439331d6ed5e702fc247ccb4e6b62d884b7"},
{file = "lxml-5.3.0-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:2dfab5fa6a28a0b60a20638dc48e6343c02ea9933e3279ccb132f555a62323d8"},
{file = "lxml-5.3.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:b1c8c20847b9f34e98080da785bb2336ea982e7f913eed5809e5a3c872900f32"},
{file = "lxml-5.3.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:2c86bf781b12ba417f64f3422cfc302523ac9cd1d8ae8c0f92a1c66e56ef2e86"},
{file = "lxml-5.3.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:c162b216070f280fa7da844531169be0baf9ccb17263cf5a8bf876fcd3117fa5"},
{file = "lxml-5.3.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:36aef61a1678cb778097b4a6eeae96a69875d51d1e8f4d4b491ab3cfb54b5a03"},
{file = "lxml-5.3.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:f65e5120863c2b266dbcc927b306c5b78e502c71edf3295dfcb9501ec96e5fc7"},
{file = "lxml-5.3.0-cp310-cp310-win32.whl", hash = "sha256:ef0c1fe22171dd7c7c27147f2e9c3e86f8bdf473fed75f16b0c2e84a5030ce80"},
{file = "lxml-5.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:052d99051e77a4f3e8482c65014cf6372e61b0a6f4fe9edb98503bb5364cfee3"},
{file = "lxml-5.3.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:74bcb423462233bc5d6066e4e98b0264e7c1bed7541fff2f4e34fe6b21563c8b"},
{file = "lxml-5.3.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a3d819eb6f9b8677f57f9664265d0a10dd6551d227afb4af2b9cd7bdc2ccbf18"},
{file = "lxml-5.3.0-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b8f5db71b28b8c404956ddf79575ea77aa8b1538e8b2ef9ec877945b3f46442"},
{file = "lxml-5.3.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c3406b63232fc7e9b8783ab0b765d7c59e7c59ff96759d8ef9632fca27c7ee4"},
{file = "lxml-5.3.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2ecdd78ab768f844c7a1d4a03595038c166b609f6395e25af9b0f3f26ae1230f"},
{file = "lxml-5.3.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:168f2dfcfdedf611eb285efac1516c8454c8c99caf271dccda8943576b67552e"},
{file = "lxml-5.3.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa617107a410245b8660028a7483b68e7914304a6d4882b5ff3d2d3eb5948d8c"},
{file = "lxml-5.3.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:69959bd3167b993e6e710b99051265654133a98f20cec1d9b493b931942e9c16"},
{file = "lxml-5.3.0-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:bd96517ef76c8654446fc3db9242d019a1bb5fe8b751ba414765d59f99210b79"},
{file = "lxml-5.3.0-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:ab6dd83b970dc97c2d10bc71aa925b84788c7c05de30241b9e96f9b6d9ea3080"},
{file = "lxml-5.3.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:eec1bb8cdbba2925bedc887bc0609a80e599c75b12d87ae42ac23fd199445654"},
{file = "lxml-5.3.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:6a7095eeec6f89111d03dabfe5883a1fd54da319c94e0fb104ee8f23616b572d"},
{file = "lxml-5.3.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:6f651ebd0b21ec65dfca93aa629610a0dbc13dbc13554f19b0113da2e61a4763"},
{file = "lxml-5.3.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:f422a209d2455c56849442ae42f25dbaaba1c6c3f501d58761c619c7836642ec"},
{file = "lxml-5.3.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:62f7fdb0d1ed2065451f086519865b4c90aa19aed51081979ecd05a21eb4d1be"},
{file = "lxml-5.3.0-cp311-cp311-win32.whl", hash = "sha256:c6379f35350b655fd817cd0d6cbeef7f265f3ae5fedb1caae2eb442bbeae9ab9"},
{file = "lxml-5.3.0-cp311-cp311-win_amd64.whl", hash = "sha256:9c52100e2c2dbb0649b90467935c4b0de5528833c76a35ea1a2691ec9f1ee7a1"},
{file = "lxml-5.3.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:e99f5507401436fdcc85036a2e7dc2e28d962550afe1cbfc07c40e454256a859"},
{file = "lxml-5.3.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:384aacddf2e5813a36495233b64cb96b1949da72bef933918ba5c84e06af8f0e"},
{file = "lxml-5.3.0-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:874a216bf6afaf97c263b56371434e47e2c652d215788396f60477540298218f"},
{file = "lxml-5.3.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:65ab5685d56914b9a2a34d67dd5488b83213d680b0c5d10b47f81da5a16b0b0e"},
{file = "lxml-5.3.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aac0bbd3e8dd2d9c45ceb82249e8bdd3ac99131a32b4d35c8af3cc9db1657179"},
{file = "lxml-5.3.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b369d3db3c22ed14c75ccd5af429086f166a19627e84a8fdade3f8f31426e52a"},
{file = "lxml-5.3.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c24037349665434f375645fa9d1f5304800cec574d0310f618490c871fd902b3"},
{file = "lxml-5.3.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:62d172f358f33a26d6b41b28c170c63886742f5b6772a42b59b4f0fa10526cb1"},
{file = "lxml-5.3.0-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:c1f794c02903c2824fccce5b20c339a1a14b114e83b306ff11b597c5f71a1c8d"},
{file = "lxml-5.3.0-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:5d6a6972b93c426ace71e0be9a6f4b2cfae9b1baed2eed2006076a746692288c"},
{file = "lxml-5.3.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:3879cc6ce938ff4eb4900d901ed63555c778731a96365e53fadb36437a131a99"},
{file = "lxml-5.3.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:74068c601baff6ff021c70f0935b0c7bc528baa8ea210c202e03757c68c5a4ff"},
{file = "lxml-5.3.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:ecd4ad8453ac17bc7ba3868371bffb46f628161ad0eefbd0a855d2c8c32dd81a"},
{file = "lxml-5.3.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:7e2f58095acc211eb9d8b5771bf04df9ff37d6b87618d1cbf85f92399c98dae8"},
{file = "lxml-5.3.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e63601ad5cd8f860aa99d109889b5ac34de571c7ee902d6812d5d9ddcc77fa7d"},
{file = "lxml-5.3.0-cp312-cp312-win32.whl", hash = "sha256:17e8d968d04a37c50ad9c456a286b525d78c4a1c15dd53aa46c1d8e06bf6fa30"},
{file = "lxml-5.3.0-cp312-cp312-win_amd64.whl", hash = "sha256:c1a69e58a6bb2de65902051d57fde951febad631a20a64572677a1052690482f"},
{file = "lxml-5.3.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:8c72e9563347c7395910de6a3100a4840a75a6f60e05af5e58566868d5eb2d6a"},
{file = "lxml-5.3.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:e92ce66cd919d18d14b3856906a61d3f6b6a8500e0794142338da644260595cd"},
{file = "lxml-5.3.0-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d04f064bebdfef9240478f7a779e8c5dc32b8b7b0b2fc6a62e39b928d428e51"},
{file = "lxml-5.3.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c2fb570d7823c2bbaf8b419ba6e5662137f8166e364a8b2b91051a1fb40ab8b"},
{file = "lxml-5.3.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0c120f43553ec759f8de1fee2f4794452b0946773299d44c36bfe18e83caf002"},
{file = "lxml-5.3.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:562e7494778a69086f0312ec9689f6b6ac1c6b65670ed7d0267e49f57ffa08c4"},
{file = "lxml-5.3.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:423b121f7e6fa514ba0c7918e56955a1d4470ed35faa03e3d9f0e3baa4c7e492"},
{file = "lxml-5.3.0-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:c00f323cc00576df6165cc9d21a4c21285fa6b9989c5c39830c3903dc4303ef3"},
{file = "lxml-5.3.0-cp313-cp313-manylinux_2_28_ppc64le.whl", hash = "sha256:1fdc9fae8dd4c763e8a31e7630afef517eab9f5d5d31a278df087f307bf601f4"},
{file = "lxml-5.3.0-cp313-cp313-manylinux_2_28_s390x.whl", hash = "sha256:658f2aa69d31e09699705949b5fc4719cbecbd4a97f9656a232e7d6c7be1a367"},
{file = "lxml-5.3.0-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:1473427aff3d66a3fa2199004c3e601e6c4500ab86696edffdbc84954c72d832"},
{file = "lxml-5.3.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a87de7dd873bf9a792bf1e58b1c3887b9264036629a5bf2d2e6579fe8e73edff"},
{file = "lxml-5.3.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:0d7b36afa46c97875303a94e8f3ad932bf78bace9e18e603f2085b652422edcd"},
{file = "lxml-5.3.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:cf120cce539453ae086eacc0130a324e7026113510efa83ab42ef3fcfccac7fb"},
{file = "lxml-5.3.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:df5c7333167b9674aa8ae1d4008fa4bc17a313cc490b2cca27838bbdcc6bb15b"},
{file = "lxml-5.3.0-cp313-cp313-win32.whl", hash = "sha256:c802e1c2ed9f0c06a65bc4ed0189d000ada8049312cfeab6ca635e39c9608957"},
{file = "lxml-5.3.0-cp313-cp313-win_amd64.whl", hash = "sha256:406246b96d552e0503e17a1006fd27edac678b3fcc9f1be71a2f94b4ff61528d"},
{file = "lxml-5.3.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:8f0de2d390af441fe8b2c12626d103540b5d850d585b18fcada58d972b74a74e"},
{file = "lxml-5.3.0-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1afe0a8c353746e610bd9031a630a95bcfb1a720684c3f2b36c4710a0a96528f"},
{file = "lxml-5.3.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:56b9861a71575f5795bde89256e7467ece3d339c9b43141dbdd54544566b3b94"},
{file = "lxml-5.3.0-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:9fb81d2824dff4f2e297a276297e9031f46d2682cafc484f49de182aa5e5df99"},
{file = "lxml-5.3.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:2c226a06ecb8cdef28845ae976da407917542c5e6e75dcac7cc33eb04aaeb237"},
{file = "lxml-5.3.0-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:7d3d1ca42870cdb6d0d29939630dbe48fa511c203724820fc0fd507b2fb46577"},
{file = "lxml-5.3.0-cp36-cp36m-win32.whl", hash = "sha256:094cb601ba9f55296774c2d57ad68730daa0b13dc260e1f941b4d13678239e70"},
{file = "lxml-5.3.0-cp36-cp36m-win_amd64.whl", hash = "sha256:eafa2c8658f4e560b098fe9fc54539f86528651f61849b22111a9b107d18910c"},
{file = "lxml-5.3.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:cb83f8a875b3d9b458cada4f880fa498646874ba4011dc974e071a0a84a1b033"},
{file = "lxml-5.3.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:25f1b69d41656b05885aa185f5fdf822cb01a586d1b32739633679699f220391"},
{file = "lxml-5.3.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23e0553b8055600b3bf4a00b255ec5c92e1e4aebf8c2c09334f8368e8bd174d6"},
{file = "lxml-5.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9ada35dd21dc6c039259596b358caab6b13f4db4d4a7f8665764d616daf9cc1d"},
{file = "lxml-5.3.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:81b4e48da4c69313192d8c8d4311e5d818b8be1afe68ee20f6385d0e96fc9512"},
{file = "lxml-5.3.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:2bc9fd5ca4729af796f9f59cd8ff160fe06a474da40aca03fcc79655ddee1a8b"},
{file = "lxml-5.3.0-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:07da23d7ee08577760f0a71d67a861019103e4812c87e2fab26b039054594cc5"},
{file = "lxml-5.3.0-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:ea2e2f6f801696ad7de8aec061044d6c8c0dd4037608c7cab38a9a4d316bfb11"},
{file = "lxml-5.3.0-cp37-cp37m-win32.whl", hash = "sha256:5c54afdcbb0182d06836cc3d1be921e540be3ebdf8b8a51ee3ef987537455f84"},
{file = "lxml-5.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:f2901429da1e645ce548bf9171784c0f74f0718c3f6150ce166be39e4dd66c3e"},
{file = "lxml-5.3.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:c56a1d43b2f9ee4786e4658c7903f05da35b923fb53c11025712562d5cc02753"},
{file = "lxml-5.3.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6ee8c39582d2652dcd516d1b879451500f8db3fe3607ce45d7c5957ab2596040"},
{file = "lxml-5.3.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0fdf3a3059611f7585a78ee10399a15566356116a4288380921a4b598d807a22"},
{file = "lxml-5.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:146173654d79eb1fc97498b4280c1d3e1e5d58c398fa530905c9ea50ea849b22"},
{file = "lxml-5.3.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:0a7056921edbdd7560746f4221dca89bb7a3fe457d3d74267995253f46343f15"},
{file = "lxml-5.3.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:9e4b47ac0f5e749cfc618efdf4726269441014ae1d5583e047b452a32e221920"},
{file = "lxml-5.3.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:f914c03e6a31deb632e2daa881fe198461f4d06e57ac3d0e05bbcab8eae01945"},
{file = "lxml-5.3.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:213261f168c5e1d9b7535a67e68b1f59f92398dd17a56d934550837143f79c42"},
{file = "lxml-5.3.0-cp38-cp38-win32.whl", hash = "sha256:218c1b2e17a710e363855594230f44060e2025b05c80d1f0661258142b2add2e"},
{file = "lxml-5.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:315f9542011b2c4e1d280e4a20ddcca1761993dda3afc7a73b01235f8641e903"},
{file = "lxml-5.3.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:1ffc23010330c2ab67fac02781df60998ca8fe759e8efde6f8b756a20599c5de"},
{file = "lxml-5.3.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2b3778cb38212f52fac9fe913017deea2fdf4eb1a4f8e4cfc6b009a13a6d3fcc"},
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4b0c7a688944891086ba192e21c5229dea54382f4836a209ff8d0a660fac06be"},
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:747a3d3e98e24597981ca0be0fd922aebd471fa99d0043a3842d00cdcad7ad6a"},
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:86a6b24b19eaebc448dc56b87c4865527855145d851f9fc3891673ff97950540"},
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b11a5d918a6216e521c715b02749240fb07ae5a1fefd4b7bf12f833bc8b4fe70"},
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:68b87753c784d6acb8a25b05cb526c3406913c9d988d51f80adecc2b0775d6aa"},
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:109fa6fede314cc50eed29e6e56c540075e63d922455346f11e4d7a036d2b8cf"},
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_28_ppc64le.whl", hash = "sha256:02ced472497b8362c8e902ade23e3300479f4f43e45f4105c85ef43b8db85229"},
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_28_s390x.whl", hash = "sha256:6b038cc86b285e4f9fea2ba5ee76e89f21ed1ea898e287dc277a25884f3a7dfe"},
{file = "lxml-5.3.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:7437237c6a66b7ca341e868cda48be24b8701862757426852c9b3186de1da8a2"},
{file = "lxml-5.3.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:7f41026c1d64043a36fda21d64c5026762d53a77043e73e94b71f0521939cc71"},
{file = "lxml-5.3.0-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:482c2f67761868f0108b1743098640fbb2a28a8e15bf3f47ada9fa59d9fe08c3"},
{file = "lxml-5.3.0-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:1483fd3358963cc5c1c9b122c80606a3a79ee0875bcac0204149fa09d6ff2727"},
{file = "lxml-5.3.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:2dec2d1130a9cda5b904696cec33b2cfb451304ba9081eeda7f90f724097300a"},
{file = "lxml-5.3.0-cp39-cp39-win32.whl", hash = "sha256:a0eabd0a81625049c5df745209dc7fcef6e2aea7793e5f003ba363610aa0a3ff"},
{file = "lxml-5.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:89e043f1d9d341c52bf2af6d02e6adde62e0a46e6755d5eb60dc6e4f0b8aeca2"},
{file = "lxml-5.3.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7b1cd427cb0d5f7393c31b7496419da594fe600e6fdc4b105a54f82405e6626c"},
{file = "lxml-5.3.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:51806cfe0279e06ed8500ce19479d757db42a30fd509940b1701be9c86a5ff9a"},
{file = "lxml-5.3.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ee70d08fd60c9565ba8190f41a46a54096afa0eeb8f76bd66f2c25d3b1b83005"},
{file = "lxml-5.3.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:8dc2c0395bea8254d8daebc76dcf8eb3a95ec2a46fa6fae5eaccee366bfe02ce"},
{file = "lxml-5.3.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:6ba0d3dcac281aad8a0e5b14c7ed6f9fa89c8612b47939fc94f80b16e2e9bc83"},
{file = "lxml-5.3.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:6e91cf736959057f7aac7adfc83481e03615a8e8dd5758aa1d95ea69e8931dba"},
{file = "lxml-5.3.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:94d6c3782907b5e40e21cadf94b13b0842ac421192f26b84c45f13f3c9d5dc27"},
{file = "lxml-5.3.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c300306673aa0f3ed5ed9372b21867690a17dba38c68c44b287437c362ce486b"},
{file = "lxml-5.3.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78d9b952e07aed35fe2e1a7ad26e929595412db48535921c5013edc8aa4a35ce"},
{file = "lxml-5.3.0-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:01220dca0d066d1349bd6a1726856a78f7929f3878f7e2ee83c296c69495309e"},
{file = "lxml-5.3.0-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:2d9b8d9177afaef80c53c0a9e30fa252ff3036fb1c6494d427c066a4ce6a282f"},
{file = "lxml-5.3.0-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:20094fc3f21ea0a8669dc4c61ed7fa8263bd37d97d93b90f28fc613371e7a875"},
{file = "lxml-5.3.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:ace2c2326a319a0bb8a8b0e5b570c764962e95818de9f259ce814ee666603f19"},
{file = "lxml-5.3.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:92e67a0be1639c251d21e35fe74df6bcc40cba445c2cda7c4a967656733249e2"},
{file = "lxml-5.3.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd5350b55f9fecddc51385463a4f67a5da829bc741e38cf689f38ec9023f54ab"},
{file = "lxml-5.3.0-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:4c1fefd7e3d00921c44dc9ca80a775af49698bbfd92ea84498e56acffd4c5469"},
{file = "lxml-5.3.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:71a8dd38fbd2f2319136d4ae855a7078c69c9a38ae06e0c17c73fd70fc6caad8"},
{file = "lxml-5.3.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:97acf1e1fd66ab53dacd2c35b319d7e548380c2e9e8c54525c6e76d21b1ae3b1"},
{file = "lxml-5.3.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:68934b242c51eb02907c5b81d138cb977b2129a0a75a8f8b60b01cb8586c7b21"},
{file = "lxml-5.3.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b710bc2b8292966b23a6a0121f7a6c51d45d2347edcc75f016ac123b8054d3f2"},
{file = "lxml-5.3.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18feb4b93302091b1541221196a2155aa296c363fd233814fa11e181adebc52f"},
{file = "lxml-5.3.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3eb44520c4724c2e1a57c0af33a379eee41792595023f367ba3952a2d96c2aab"},
{file = "lxml-5.3.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:609251a0ca4770e5a8768ff902aa02bf636339c5a93f9349b48eb1f606f7f3e9"},
{file = "lxml-5.3.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:516f491c834eb320d6c843156440fe7fc0d50b33e44387fcec5b02f0bc118a4c"},
{file = "lxml-5.3.0.tar.gz", hash = "sha256:4e109ca30d1edec1ac60cdbe341905dc3b8f55b16855e03a54aaf59e51ec8c6f"},
]
[package.extras]
@@ -1152,7 +1143,7 @@ cssselect = ["cssselect (>=0.7)"]
html-clean = ["lxml-html-clean"]
html5 = ["html5lib"]
htmlsoup = ["BeautifulSoup4"]
source = ["Cython (>=3.0.10)"]
source = ["Cython (>=3.0.11)"]
[[package]]
name = "lxml-stubs"
@@ -1516,13 +1507,13 @@ files = [
[[package]]
name = "phonenumbers"
version = "8.13.39"
version = "8.13.43"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.39-py2.py3-none-any.whl", hash = "sha256:3ad2d086fa71e7eef409001b9195ac54bebb0c6e3e752209b558ca192c9229a0"},
{file = "phonenumbers-8.13.39.tar.gz", hash = "sha256:db7ca4970d206b2056231105300753b1a5b229f43416f8c2b3010e63fbb68d77"},
{file = "phonenumbers-8.13.43-py2.py3-none-any.whl", hash = "sha256:339e521403fe4dd9c664dbbeb2fe434f9ea5c81e54c0fdfadbaeb53b26a76c27"},
{file = "phonenumbers-8.13.43.tar.gz", hash = "sha256:35b904e4a79226eee027fbb467a9aa6f1ab9ffc3c09c91bf14b885c154936726"},
]
[[package]]
@@ -2418,13 +2409,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
[[package]]
name = "sentry-sdk"
version = "2.10.0"
version = "2.13.0"
description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = ">=3.6"
files = [
{file = "sentry_sdk-2.10.0-py2.py3-none-any.whl", hash = "sha256:87b3d413c87d8e7f816cc9334bff255a83d8b577db2b22042651c30c19c09190"},
{file = "sentry_sdk-2.10.0.tar.gz", hash = "sha256:545fcc6e36c335faa6d6cda84669b6e17025f31efbf3b2211ec14efe008b75d1"},
{file = "sentry_sdk-2.13.0-py2.py3-none-any.whl", hash = "sha256:6beede8fc2ab4043da7f69d95534e320944690680dd9a963178a49de71d726c6"},
{file = "sentry_sdk-2.13.0.tar.gz", hash = "sha256:8d4a576f7a98eb2fdb40e13106e41f330e5c79d72a68be1316e7852cf4995260"},
]
[package.dependencies]
@@ -2451,10 +2442,11 @@ httpx = ["httpx (>=0.16.0)"]
huey = ["huey (>=2)"]
huggingface-hub = ["huggingface-hub (>=0.22)"]
langchain = ["langchain (>=0.0.210)"]
litestar = ["litestar (>=2.0.0)"]
loguru = ["loguru (>=0.5)"]
openai = ["openai (>=1.0.0)", "tiktoken (>=0.3.0)"]
opentelemetry = ["opentelemetry-distro (>=0.35b0)"]
opentelemetry-experimental = ["opentelemetry-instrumentation-aio-pika (==0.46b0)", "opentelemetry-instrumentation-aiohttp-client (==0.46b0)", "opentelemetry-instrumentation-aiopg (==0.46b0)", "opentelemetry-instrumentation-asgi (==0.46b0)", "opentelemetry-instrumentation-asyncio (==0.46b0)", "opentelemetry-instrumentation-asyncpg (==0.46b0)", "opentelemetry-instrumentation-aws-lambda (==0.46b0)", "opentelemetry-instrumentation-boto (==0.46b0)", "opentelemetry-instrumentation-boto3sqs (==0.46b0)", "opentelemetry-instrumentation-botocore (==0.46b0)", "opentelemetry-instrumentation-cassandra (==0.46b0)", "opentelemetry-instrumentation-celery (==0.46b0)", "opentelemetry-instrumentation-confluent-kafka (==0.46b0)", "opentelemetry-instrumentation-dbapi (==0.46b0)", "opentelemetry-instrumentation-django (==0.46b0)", "opentelemetry-instrumentation-elasticsearch (==0.46b0)", "opentelemetry-instrumentation-falcon (==0.46b0)", "opentelemetry-instrumentation-fastapi (==0.46b0)", "opentelemetry-instrumentation-flask (==0.46b0)", "opentelemetry-instrumentation-grpc (==0.46b0)", "opentelemetry-instrumentation-httpx (==0.46b0)", "opentelemetry-instrumentation-jinja2 (==0.46b0)", "opentelemetry-instrumentation-kafka-python (==0.46b0)", "opentelemetry-instrumentation-logging (==0.46b0)", "opentelemetry-instrumentation-mysql (==0.46b0)", "opentelemetry-instrumentation-mysqlclient (==0.46b0)", "opentelemetry-instrumentation-pika (==0.46b0)", "opentelemetry-instrumentation-psycopg (==0.46b0)", "opentelemetry-instrumentation-psycopg2 (==0.46b0)", "opentelemetry-instrumentation-pymemcache (==0.46b0)", "opentelemetry-instrumentation-pymongo (==0.46b0)", "opentelemetry-instrumentation-pymysql (==0.46b0)", "opentelemetry-instrumentation-pyramid (==0.46b0)", "opentelemetry-instrumentation-redis (==0.46b0)", "opentelemetry-instrumentation-remoulade (==0.46b0)", "opentelemetry-instrumentation-requests (==0.46b0)", "opentelemetry-instrumentation-sklearn (==0.46b0)", "opentelemetry-instrumentation-sqlalchemy (==0.46b0)", "opentelemetry-instrumentation-sqlite3 (==0.46b0)", "opentelemetry-instrumentation-starlette (==0.46b0)", "opentelemetry-instrumentation-system-metrics (==0.46b0)", "opentelemetry-instrumentation-threading (==0.46b0)", "opentelemetry-instrumentation-tornado (==0.46b0)", "opentelemetry-instrumentation-tortoiseorm (==0.46b0)", "opentelemetry-instrumentation-urllib (==0.46b0)", "opentelemetry-instrumentation-urllib3 (==0.46b0)", "opentelemetry-instrumentation-wsgi (==0.46b0)"]
opentelemetry-experimental = ["opentelemetry-distro"]
pure-eval = ["asttokens", "executing", "pure-eval"]
pymongo = ["pymongo (>=3.1)"]
pyspark = ["pyspark (>=2.4.4)"]
@@ -2492,19 +2484,19 @@ tests = ["coverage[toml] (>=5.0.2)", "pytest"]
[[package]]
name = "setuptools"
version = "67.6.0"
version = "72.1.0"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "setuptools-67.6.0-py3-none-any.whl", hash = "sha256:b78aaa36f6b90a074c1fa651168723acbf45d14cb1196b6f02c0fd07f17623b2"},
{file = "setuptools-67.6.0.tar.gz", hash = "sha256:2ee892cd5f29f3373097f5a814697e397cf3ce313616df0af11231e2ad118077"},
{file = "setuptools-72.1.0-py3-none-any.whl", hash = "sha256:5a03e1860cf56bb6ef48ce186b0e557fdba433237481a9a625176c2831be15d1"},
{file = "setuptools-72.1.0.tar.gz", hash = "sha256:8d243eff56d095e5817f796ede6ae32941278f542e0f941867cc05ae52b162ec"},
]
[package.extras]
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-hoverxref (<2)", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (==0.8.3)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8 (<5)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pip-run (>=8.8)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
core = ["importlib-metadata (>=6)", "importlib-resources (>=5.10.2)", "jaraco.text (>=3.7)", "more-itertools (>=8.8)", "ordered-set (>=3.1.1)", "packaging (>=24)", "platformdirs (>=2.6.2)", "tomli (>=2.0.1)", "wheel (>=0.43.0)"]
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "pyproject-hooks (!=1.1)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "jaraco.test", "mypy (==1.11.*)", "packaging (>=23.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy", "pytest-perf", "pytest-ruff (<0.4)", "pytest-ruff (>=0.2.1)", "pytest-ruff (>=0.3.2)", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
[[package]]
name = "setuptools-rust"
@@ -2649,24 +2641,24 @@ files = [
[[package]]
name = "towncrier"
version = "23.11.0"
version = "24.7.1"
description = "Building newsfiles for your project."
optional = false
python-versions = ">=3.8"
files = [
{file = "towncrier-23.11.0-py3-none-any.whl", hash = "sha256:2e519ca619426d189e3c98c99558fe8be50c9ced13ea1fc20a4a353a95d2ded7"},
{file = "towncrier-23.11.0.tar.gz", hash = "sha256:13937c247e3f8ae20ac44d895cf5f96a60ad46cfdcc1671759530d7837d9ee5d"},
{file = "towncrier-24.7.1-py3-none-any.whl", hash = "sha256:685e2a94335b5dc47537b4d3b449a25b18571ea85b07dcf6e8df31ba40f692dd"},
{file = "towncrier-24.7.1.tar.gz", hash = "sha256:57a057faedabcadf1a62f6f9bad726ae566c1f31a411338ddb8316993f583b3d"},
]
[package.dependencies]
click = "*"
importlib-metadata = {version = ">=4.6", markers = "python_version < \"3.10\""}
importlib-resources = {version = ">=5", markers = "python_version < \"3.10\""}
incremental = "*"
jinja2 = "*"
tomli = {version = "*", markers = "python_version < \"3.11\""}
[package.extras]
dev = ["furo", "packaging", "sphinx (>=5)", "twisted"]
dev = ["furo (>=2024.05.06)", "nox", "packaging", "sphinx (>=5)", "twisted"]
[[package]]
name = "treq"
@@ -2817,13 +2809,13 @@ files = [
[[package]]
name = "types-jsonschema"
version = "4.23.0.20240712"
version = "4.23.0.20240813"
description = "Typing stubs for jsonschema"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-jsonschema-4.23.0.20240712.tar.gz", hash = "sha256:b20db728dcf7ea3e80e9bdeb55e8b8420c6c040cda14e8cf284465adee71d217"},
{file = "types_jsonschema-4.23.0.20240712-py3-none-any.whl", hash = "sha256:8c33177ce95336241c1d61ccb56a9964d4361b99d5f1cd81a1ab4909b0dd7cf4"},
{file = "types-jsonschema-4.23.0.20240813.tar.gz", hash = "sha256:c93f48206f209a5bc4608d295ac39f172fb98b9e24159ce577dbd25ddb79a1c0"},
{file = "types_jsonschema-4.23.0.20240813-py3-none-any.whl", hash = "sha256:be283e23f0b87547316c2ee6b0fd36d95ea30e921db06478029e10b5b6aa6ac3"},
]
[package.dependencies]
@@ -2890,24 +2882,24 @@ types-cffi = "*"
[[package]]
name = "types-pyyaml"
version = "6.0.12.20240311"
version = "6.0.12.20240808"
description = "Typing stubs for PyYAML"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-PyYAML-6.0.12.20240311.tar.gz", hash = "sha256:a9e0f0f88dc835739b0c1ca51ee90d04ca2a897a71af79de9aec5f38cb0a5342"},
{file = "types_PyYAML-6.0.12.20240311-py3-none-any.whl", hash = "sha256:b845b06a1c7e54b8e5b4c683043de0d9caf205e7434b3edc678ff2411979b8f6"},
{file = "types-PyYAML-6.0.12.20240808.tar.gz", hash = "sha256:b8f76ddbd7f65440a8bda5526a9607e4c7a322dc2f8e1a8c405644f9a6f4b9af"},
{file = "types_PyYAML-6.0.12.20240808-py3-none-any.whl", hash = "sha256:deda34c5c655265fc517b546c902aa6eed2ef8d3e921e4765fe606fe2afe8d35"},
]
[[package]]
name = "types-requests"
version = "2.31.0.20240406"
version = "2.32.0.20240712"
description = "Typing stubs for requests"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-requests-2.31.0.20240406.tar.gz", hash = "sha256:4428df33c5503945c74b3f42e82b181e86ec7b724620419a2966e2de604ce1a1"},
{file = "types_requests-2.31.0.20240406-py3-none-any.whl", hash = "sha256:6216cdac377c6b9a040ac1c0404f7284bd13199c0e1bb235f4324627e8898cf5"},
{file = "types-requests-2.32.0.20240712.tar.gz", hash = "sha256:90c079ff05e549f6bf50e02e910210b98b8ff1ebdd18e19c873cd237737c1358"},
{file = "types_requests-2.32.0.20240712-py3-none-any.whl", hash = "sha256:f754283e152c752e46e70942fa2a146b5bc70393522257bb85bd1ef7e019dcc3"},
]
[package.dependencies]
@@ -2915,13 +2907,13 @@ urllib3 = ">=2"
[[package]]
name = "types-setuptools"
version = "71.1.0.20240726"
version = "71.1.0.20240818"
description = "Typing stubs for setuptools"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-setuptools-71.1.0.20240726.tar.gz", hash = "sha256:85ba28e9461bb1be86ebba4db0f1c2408f2b11115b1966334ea9dc464e29303e"},
{file = "types_setuptools-71.1.0.20240726-py3-none-any.whl", hash = "sha256:a7775376f36e0ff09bcad236bf265777590a66b11623e48c20bfc30f1444ea36"},
{file = "types-setuptools-71.1.0.20240818.tar.gz", hash = "sha256:f62eaffaa39774462c65fbb49368c4dc1d91a90a28371cb14e1af090ff0e41e3"},
{file = "types_setuptools-71.1.0.20240818-py3-none-any.whl", hash = "sha256:c4f95302f88369ac0ac46c67ddbfc70c6c4dbbb184d9fed356244217a2934025"},
]
[[package]]
@@ -3196,4 +3188,4 @@ user-search = ["pyicu"]
[metadata]
lock-version = "2.0"
python-versions = "^3.8.0"
content-hash = "5f458ce53b7469844af2e0c5a9c5ef720736de5f080c4eb8d3a0e60286424f44"
content-hash = "c165cdc1f6612c9f1b5bfd8063c23e2d595d717dd8ac1a468519e902be2cdf93"

View File

@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.112.0"
version = "1.114.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

View File

@@ -43,7 +43,7 @@ import argparse
import base64
import json
import sys
from typing import Any, Dict, Optional, Tuple
from typing import Any, Dict, Mapping, Optional, Tuple, Union
from urllib import parse as urlparse
import requests
@@ -75,7 +75,7 @@ def encode_canonical_json(value: object) -> bytes:
value,
# Encode code-points outside of ASCII as UTF-8 rather than \u escapes
ensure_ascii=False,
# Remove unecessary white space.
# Remove unnecessary white space.
separators=(",", ":"),
# Sort the keys of dictionaries.
sort_keys=True,
@@ -298,12 +298,23 @@ class MatrixConnectionAdapter(HTTPAdapter):
return super().send(request, *args, **kwargs)
def get_connection(
self, url: str, proxies: Optional[Dict[str, str]] = None
def get_connection_with_tls_context(
self,
request: PreparedRequest,
verify: Optional[Union[bool, str]],
proxies: Optional[Mapping[str, str]] = None,
cert: Optional[Union[Tuple[str, str], str]] = None,
) -> HTTPConnectionPool:
# overrides the get_connection() method in the base class
parsed = urlparse.urlsplit(url)
(host, port, ssl_server_name) = self._lookup(parsed.netloc)
# overrides the get_connection_with_tls_context() method in the base class
parsed = urlparse.urlsplit(request.url)
# Extract the server name from the request URL, and ensure it's a str.
hostname = parsed.netloc
if isinstance(hostname, bytes):
hostname = hostname.decode("utf-8")
assert isinstance(hostname, str)
(host, port, ssl_server_name) = self._lookup(hostname)
print(
f"Connecting to {host}:{port} with SNI {ssl_server_name}", file=sys.stderr
)
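
This adapts the script to requests 2.32, where the `get_connection()` hook on `HTTPAdapter` was replaced by `get_connection_with_tls_context()`, which receives the whole `PreparedRequest` rather than a bare URL. A minimal sketch of overriding the new hook, assuming requests >= 2.32 (the logging is illustrative; Synapse's `_lookup()` SNI resolution is not reproduced here):

from typing import Mapping, Optional, Tuple, Union
from urllib import parse as urlparse

from requests import PreparedRequest, Session
from requests.adapters import HTTPAdapter
from urllib3.connectionpool import HTTPConnectionPool

class LoggingAdapter(HTTPAdapter):
    def get_connection_with_tls_context(
        self,
        request: PreparedRequest,
        verify: Optional[Union[bool, str]],
        proxies: Optional[Mapping[str, str]] = None,
        cert: Optional[Union[Tuple[str, str], str]] = None,
    ) -> HTTPConnectionPool:
        # request.url may be bytes or str depending on how the request
        # was prepared, so normalise before parsing.
        url = request.url
        if isinstance(url, bytes):
            url = url.decode("utf-8")
        print(f"Opening pool for {urlparse.urlsplit(url).netloc}")
        return super().get_connection_with_tls_context(
            request, verify, proxies=proxies, cert=cert
        )

session = Session()
session.mount("https://", LoggingAdapter())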

View File

@@ -38,6 +38,7 @@ from mypy.types import (
NoneType,
TupleType,
TypeAliasType,
TypeVarType,
UninhabitedType,
UnionType,
)
@@ -233,6 +234,7 @@ IMMUTABLE_CUSTOM_TYPES = {
"synapse.synapse_rust.push.FilteredPushRules",
# This is technically not immutable, but close enough.
"signedjson.types.VerifyKey",
"synapse.types.StrCollection",
}
# Immutable containers only if the values are also immutable.
@@ -298,7 +300,7 @@ def is_cacheable(
elif rt.type.fullname in MUTABLE_CONTAINER_TYPES:
# Mutable containers are mutable regardless of their underlying type.
return False, None
return False, f"container {rt.type.fullname} is mutable"
elif "attrs" in rt.type.metadata:
# attrs classes are only cacheable iff they are frozen (i.e. immutable themselves)
@@ -318,6 +320,9 @@ def is_cacheable(
else:
return False, "non-frozen attrs class"
elif rt.type.is_enum:
# We assume Enum values are immutable
return True, None
else:
# Ensure we fail for unknown types, these generally means that the
# above code is not complete.
@@ -326,6 +331,18 @@ def is_cacheable(
f"Don't know how to handle {rt.type.fullname} return type instance",
)
elif isinstance(rt, TypeVarType):
# We consider TypeVars immutable if they are bound to a set of immutable
# types.
if rt.values:
for value in rt.values:
ok, note = is_cacheable(value, signature, verbose)
if not ok:
return False, f"TypeVar bound not cacheable {value}"
return True, None
return False, "TypeVar is unbound"
elif isinstance(rt, NoneType):
# None is cacheable.
return True, None
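
To illustrate the TypeVar rule above: a value-restricted TypeVar is treated as cacheable only when every permitted value type is itself immutable, while an unconstrained TypeVar is rejected. A hedged sketch of what the plugin would accept or flag (the function names are hypothetical examples, not Synapse code):

from typing import Dict, TypeVar

# Restricted to immutable types: return values are safe to cache.
K = TypeVar("K", str, int)

def cacheable_identity(value: K) -> K:
    return value

# Unbound TypeVar: the plugin cannot prove immutability, so a cached
# method returning this would be flagged ("TypeVar is unbound").
T = TypeVar("T")

def not_provably_cacheable(value: T) -> T:
    return value

# Restricted to a mutable type: flagged because Dict is mutable.
M = TypeVar("M", Dict[str, str], str)

def mutable_restriction(value: M) -> M:
    return value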

View File

@@ -324,6 +324,11 @@ def tag(gh_token: Optional[str]) -> None:
def _tag(gh_token: Optional[str]) -> None:
"""Tags the release and generates a draft GitHub release"""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Make sure we're in a git repo.
repo = get_repo_and_check_clean_checkout()
@@ -418,6 +423,11 @@ def publish(gh_token: str) -> None:
def _publish(gh_token: str) -> None:
"""Publish release on GitHub."""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Make sure we're in a git repo.
get_repo_and_check_clean_checkout()
@@ -460,6 +470,11 @@ def upload(gh_token: Optional[str]) -> None:
def _upload(gh_token: Optional[str]) -> None:
"""Upload release to pypi."""
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
current_version = get_package_version()
tag_name = f"v{current_version}"
@@ -555,6 +570,11 @@ def wait_for_actions(gh_token: Optional[str]) -> None:
def _wait_for_actions(gh_token: Optional[str]) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
# Find out the version and tag name.
current_version = get_package_version()
tag_name = f"v{current_version}"
@@ -711,6 +731,11 @@ Ask the designated people to do the blog and tweets."""
@cli.command()
@click.option("--gh-token", envvar=["GH_TOKEN", "GITHUB_TOKEN"], required=True)
def full(gh_token: str) -> None:
if gh_token:
# Test that the GH Token is valid before continuing.
gh = Github(gh_token)
gh.get_user()
click.echo("1. If this is a security release, read the security wiki page.")
click.echo("2. Check for any release blockers before proceeding.")
click.echo(" https://github.com/element-hq/synapse/labels/X-Release-Blocker")

View File

@@ -129,6 +129,11 @@ BOOLEAN_COLUMNS = {
"remote_media_cache": ["authenticated"],
"room_stats_state": ["is_federatable"],
"rooms": ["is_public", "has_auth_chain_index"],
"sliding_sync_joined_rooms": ["is_encrypted"],
"sliding_sync_membership_snapshots": [
"has_known_state",
"is_encrypted",
],
"users": ["shadow_banned", "approved", "locked", "suspended"],
"un_partial_stated_event_stream": ["rejection_status_changed"],
"users_who_share_rooms": ["share_private"],

View File

@@ -245,6 +245,8 @@ class EventContentFields:
# `m.room.encryption`` algorithm field
ENCRYPTION_ALGORITHM: Final = "algorithm"
TOMBSTONE_SUCCESSOR_ROOM: Final = "replacement_room"
class EventUnsignedContentFields:
"""Fields found inside the 'unsigned' data on events"""

View File

@@ -128,6 +128,10 @@ class Codes(str, Enum):
# MSC2677
DUPLICATE_ANNOTATION = "M_DUPLICATE_ANNOTATION"
# MSC3575 we are telling the client they need to expire their sliding sync
# connection.
UNKNOWN_POS = "M_UNKNOWN_POS"
class CodeMessageException(RuntimeError):
"""An exception with integer code, a message string attributes and optional headers.
@@ -847,3 +851,17 @@ class PartialStateConflictError(SynapseError):
msg=PartialStateConflictError.message(),
errcode=Codes.UNKNOWN,
)
class SlidingSyncUnknownPosition(SynapseError):
"""An error that Synapse can return to signal to the client to expire their
sliding sync connection (i.e. send a new request without a `?since=`
param).
"""
def __init__(self) -> None:
super().__init__(
HTTPStatus.BAD_REQUEST,
msg="Unknown position",
errcode=Codes.UNKNOWN_POS,
)
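
From the client's perspective, receiving `M_UNKNOWN_POS` means the stored sliding sync position is no longer valid and the next request must be sent without a position parameter. A hedged client-side sketch (the endpoint path and parameter handling are illustrative, not a spec-accurate client):

from typing import Any, Dict, Optional

import requests

def sliding_sync_once(base_url: str, token: str, pos: Optional[str]) -> Dict[str, Any]:
    """One sliding sync round trip, restarting if the server expired us."""
    resp = requests.post(
        # Unstable endpoint path is an assumption for illustration.
        f"{base_url}/_matrix/client/unstable/org.matrix.simplified_msc3575/sync",
        params={"pos": pos} if pos is not None else {},
        headers={"Authorization": f"Bearer {token}"},
        json={},
    )
    if resp.status_code == 400 and resp.json().get("errcode") == "M_UNKNOWN_POS":
        # Connection expired: drop the stored position and start afresh.
        return sliding_sync_once(base_url, token, pos=None)
    resp.raise_for_status()
    return resp.json()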

View File

@@ -98,6 +98,7 @@ from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
from synapse.storage.databases.main.signatures import SignatureWorkerStore
from synapse.storage.databases.main.sliding_sync import SlidingSyncStore
from synapse.storage.databases.main.state import StateGroupWorkerStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.stream import StreamWorkerStore
@@ -159,6 +160,7 @@ class GenericWorkerStore(
SessionStore,
TaskSchedulerWorkerStore,
ExperimentalFeaturesStore,
SlidingSyncStore,
):
# Properties that multiple storage classes define. Tell mypy what the
# expected type is.
@@ -206,6 +208,21 @@ class GenericWorkerServer(HomeServer):
"/_synapse/admin": admin_resource,
}
)
if "federation" not in res.names:
# Only load the federation media resource separately if federation
# resource is not specified since federation resource includes media
# resource.
resources[FEDERATION_PREFIX] = TransportLayerServer(
self, servlet_groups=["media"]
)
if "client" not in res.names:
# Only load the client media resource separately if client
# resource is not specified since client resource includes media
# resource.
resources[CLIENT_API_PREFIX] = ClientRestResource(
self, servlet_groups=["media"]
)
else:
logger.warning(
"A 'media' listener is configured but the media"

View File

@@ -101,6 +101,12 @@ class SynapseHomeServer(HomeServer):
# Skip loading openid resource if federation is defined
# since federation resource will include openid
continue
if name == "media" and (
"federation" in res.names or "client" in res.names
):
# Skip loading media resource if federation or client are defined
# since federation & client resources will include media
continue
if name == "health":
# Skip loading, health resource is always included
continue
@@ -231,6 +237,14 @@ class SynapseHomeServer(HomeServer):
"'media' resource conflicts with enable_media_repo=False"
)
if name == "media":
resources[FEDERATION_PREFIX] = TransportLayerServer(
self, servlet_groups=["media"]
)
resources[CLIENT_API_PREFIX] = ClientRestResource(
self, servlet_groups=["media"]
)
if name in ["keys", "federation"]:
resources[SERVER_KEY_PREFIX] = KeyResource(self)

View File

@@ -589,7 +589,7 @@ class BaseV2KeyFetcher(KeyFetcher):
% (server_name,)
)
for key_id, key_data in response_json["old_verify_keys"].items():
for key_id, key_data in response_json.get("old_verify_keys", {}).items():
if is_signing_algorithm_supported(key_id):
key_base64 = key_data["key"]
key_bytes = decode_base64(key_base64)
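
The change is defensive: a key response may omit `old_verify_keys` entirely, and indexing raised a `KeyError` where `.get()` now simply yields nothing to iterate. A small sketch of the difference:

# A key response that omits "old_verify_keys" entirely.
response_json = {"verify_keys": {"ed25519:abc": {"key": "..."}}}

# Before: response_json["old_verify_keys"] would raise KeyError here.
# After: .get() falls back to an empty dict, so the loop is a no-op.
for key_id, key_data in response_json.get("old_verify_keys", {}).items():
    print(key_id, key_data)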

View File

@@ -271,6 +271,10 @@ SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
"federation": FEDERATION_SERVLET_CLASSES,
"room_list": (PublicRoomList,),
"openid": (OpenIdUserInfo,),
"media": (
FederationMediaDownloadServlet,
FederationMediaThumbnailServlet,
),
}

View File

@@ -912,6 +912,4 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
FederationV1SendKnockServlet,
FederationMakeKnockServlet,
FederationAccountStatusServlet,
FederationMediaDownloadServlet,
FederationMediaThumbnailServlet,
)

View File

@@ -197,8 +197,14 @@ class AdminHandler:
# events that we have and then filtering, this isn't the most
# efficient method perhaps but it does guarantee we get everything.
while True:
events, _ = await self._store.paginate_room_events(
room_id, from_key, to_key, limit=100, direction=Direction.FORWARDS
events, _ = (
await self._store.paginate_room_events_by_topological_ordering(
room_id=room_id,
from_key=from_key,
to_key=to_key,
limit=100,
direction=Direction.FORWARDS,
)
)
if not events:
break

View File

@@ -20,10 +20,20 @@
#
#
import logging
from typing import TYPE_CHECKING, Dict, Iterable, List, Mapping, Optional, Set, Tuple
from typing import (
TYPE_CHECKING,
AbstractSet,
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
Tuple,
)
from synapse.api import errors
from synapse.api.constants import EduTypes, EventTypes
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.errors import (
Codes,
FederationDeniedError,
@@ -38,6 +48,7 @@ from synapse.metrics.background_process_metrics import (
wrap_as_background_process,
)
from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
from synapse.storage.databases.main.state_deltas import StateDelta
from synapse.types import (
DeviceListUpdates,
JsonDict,
@@ -222,129 +233,111 @@ class DeviceWorkerHandler:
set_tag("user_id", user_id)
set_tag("from_token", str(from_token))
now_room_key = self.store.get_room_max_token()
room_ids = await self.store.get_rooms_for_user(user_id)
now_token = self._event_sources.get_current_token()
changed = await self.get_device_changes_in_shared_rooms(
user_id, room_ids, from_token
# We need to work out all the different membership changes for the user
# and the users they share a room with, to pass to
# `generate_sync_entry_for_device_list`. See its docstring for details
# on the data required.
joined_room_ids = await self.store.get_rooms_for_user(user_id)
# Get the set of rooms that the user has joined/left
membership_changes = (
await self.store.get_current_state_delta_membership_changes_for_user(
user_id, from_key=from_token.room_key, to_key=now_token.room_key
)
)
# Then work out if any users have since joined
rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key)
# Check for newly joined or left rooms. We need to make sure that we add
# to newly joined in the case membership goes from join -> leave -> join
# again.
newly_joined_rooms: Set[str] = set()
newly_left_rooms: Set[str] = set()
for change in membership_changes:
# We check for changes in "joinedness", i.e. if the membership has
# changed to or from JOIN.
if change.membership == Membership.JOIN:
if change.prev_membership != Membership.JOIN:
newly_joined_rooms.add(change.room_id)
newly_left_rooms.discard(change.room_id)
elif change.prev_membership == Membership.JOIN:
newly_joined_rooms.discard(change.room_id)
newly_left_rooms.add(change.room_id)
member_events = await self.store.get_membership_changes_for_user(
user_id, from_token.room_key, now_room_key
# We now work out if any other users have since joined or left the rooms
# the user is currently in.
# List of membership changes per room
room_to_deltas: Dict[str, List[StateDelta]] = {}
# The set of event IDs of membership events (so we can fetch their
# associated membership).
memberships_to_fetch: Set[str] = set()
# TODO: Only pull out membership events?
state_changes = await self.store.get_current_state_deltas_for_rooms(
joined_room_ids, from_token=from_token.room_key, to_token=now_token.room_key
)
rooms_changed.update(event.room_id for event in member_events)
stream_ordering = from_token.room_key.stream
possibly_changed = set(changed)
possibly_left = set()
for room_id in rooms_changed:
# Check if the forward extremities have changed. If not then we know
# the current state won't have changed, and so we can skip this room.
try:
if not await self.store.have_room_forward_extremities_changed_since(
room_id, stream_ordering
):
continue
except errors.StoreError:
pass
current_state_ids = await self._state_storage.get_current_state_ids(
room_id, await_full_state=False
)
# The user may have left the room
# TODO: Check if they actually did or if we were just invited.
if room_id not in room_ids:
for etype, state_key in current_state_ids.keys():
if etype != EventTypes.Member:
continue
possibly_left.add(state_key)
for delta in state_changes:
if delta.event_type != EventTypes.Member:
continue
# Fetch the current state at the time.
try:
event_ids = await self.store.get_forward_extremities_for_room_at_stream_ordering(
room_id, stream_ordering=stream_ordering
)
except errors.StoreError:
# we have purged the stream_ordering index since the stream
# ordering: treat it the same as a new room
event_ids = []
room_to_deltas.setdefault(delta.room_id, []).append(delta)
if delta.event_id:
memberships_to_fetch.add(delta.event_id)
if delta.prev_event_id:
memberships_to_fetch.add(delta.prev_event_id)
# special-case for an empty prev state: include all members
# in the changed list
if not event_ids:
log_kv(
{"event": "encountered empty previous state", "room_id": room_id}
)
for etype, state_key in current_state_ids.keys():
if etype != EventTypes.Member:
continue
possibly_changed.add(state_key)
continue
# Fetch all the memberships for the membership events
event_id_to_memberships = await self.store.get_membership_from_event_ids(
memberships_to_fetch
)
current_member_id = current_state_ids.get((EventTypes.Member, user_id))
if not current_member_id:
continue
joined_invited_knocked = (
Membership.JOIN,
Membership.INVITE,
Membership.KNOCK,
)
# mapping from event_id -> state_dict
prev_state_ids = await self._state_storage.get_state_ids_for_events(
event_ids,
await_full_state=False,
)
# We now want to find any users that have newly joined/been invited/knocked,
# or newly left, similarly to above.
newly_joined_or_invited_or_knocked_users: Set[str] = set()
newly_left_users: Set[str] = set()
for _, deltas in room_to_deltas.items():
for delta in deltas:
# Get the prev/new memberships for the delta
new_membership = None
prev_membership = None
if delta.event_id:
m = event_id_to_memberships.get(delta.event_id)
if m is not None:
new_membership = m.membership
if delta.prev_event_id:
m = event_id_to_memberships.get(delta.prev_event_id)
if m is not None:
prev_membership = m.membership
# Check if we've joined the room. If so we just blindly add all the users to
# the "possibly changed" users.
for state_dict in prev_state_ids.values():
member_event = state_dict.get((EventTypes.Member, user_id), None)
if not member_event or member_event != current_member_id:
for etype, state_key in current_state_ids.keys():
if etype != EventTypes.Member:
continue
possibly_changed.add(state_key)
break
# Check if a user has newly joined/invited/knocked, or left.
if new_membership in joined_invited_knocked:
if prev_membership not in joined_invited_knocked:
newly_joined_or_invited_or_knocked_users.add(delta.state_key)
newly_left_users.discard(delta.state_key)
elif prev_membership in joined_invited_knocked:
newly_joined_or_invited_or_knocked_users.discard(delta.state_key)
newly_left_users.add(delta.state_key)
# If there has been any change in membership, include them in the
# possibly changed list. We'll check if they are joined below,
# and we're not too worried about spuriously adding users.
for key, event_id in current_state_ids.items():
etype, state_key = key
if etype != EventTypes.Member:
continue
# check if this member has changed since any of the extremities
# at the stream_ordering, and add them to the list if so.
for state_dict in prev_state_ids.values():
prev_event_id = state_dict.get(key, None)
if not prev_event_id or prev_event_id != event_id:
if state_key != user_id:
possibly_changed.add(state_key)
break
if possibly_changed or possibly_left:
possibly_joined = possibly_changed
possibly_left = possibly_changed | possibly_left
# Double check if we still share rooms with the given user.
users_rooms = await self.store.get_rooms_for_users(possibly_left)
for changed_user_id, entries in users_rooms.items():
if any(rid in room_ids for rid in entries):
possibly_left.discard(changed_user_id)
else:
possibly_joined.discard(changed_user_id)
else:
possibly_joined = set()
possibly_left = set()
device_list_updates = DeviceListUpdates(
changed=possibly_joined,
left=possibly_left,
# Now we actually calculate the device list entry with the information
# calculated above.
device_list_updates = await self.generate_sync_entry_for_device_list(
user_id=user_id,
since_token=from_token,
now_token=now_token,
joined_room_ids=joined_room_ids,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
log_kv(
@@ -356,6 +349,88 @@ class DeviceWorkerHandler:
return device_list_updates
@measure_func("_generate_sync_entry_for_device_list")
async def generate_sync_entry_for_device_list(
self,
user_id: str,
since_token: StreamToken,
now_token: StreamToken,
joined_room_ids: AbstractSet[str],
newly_joined_rooms: AbstractSet[str],
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
newly_left_rooms: AbstractSet[str],
newly_left_users: AbstractSet[str],
) -> DeviceListUpdates:
"""Generate the DeviceListUpdates section of sync
Args:
user_id: The user to generate device list updates for.
since_token: The token from the previous sync.
now_token: The token to sync up to.
joined_room_ids: Set of rooms the user is currently joined to.
newly_joined_rooms: Set of rooms user has joined since previous sync
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
been invited to a room or are knocking on a room since
previous sync.
newly_left_rooms: Set of rooms user has left since previous sync
newly_left_users: Set of users that have left a room we're in since
previous sync
"""
# Take a copy since these fields will be mutated later.
newly_joined_or_invited_or_knocked_users = set(
newly_joined_or_invited_or_knocked_users
)
newly_left_users = set(newly_left_users)
# We want to figure out what user IDs the client should refetch
# device keys for, and which users we aren't going to track changes
# for anymore.
#
# For the first step we check:
# a. if any users we share a room with have updated their devices,
# and
# b. we also check if we've joined any new rooms, or if a user has
# joined a room we're in.
#
# For the second step we just find any users we no longer share a
# room with by looking at all users that have left a room plus users
# that were in a room we've left.
users_that_have_changed = set()
# Step 1a, check for changes in devices of users we share a room
# with
users_that_have_changed = await self.get_device_changes_in_shared_rooms(
user_id,
joined_room_ids,
from_token=since_token,
now_token=now_token,
)
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = await self.store.get_users_in_room(room_id)
newly_joined_or_invited_or_knocked_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = await self.store.get_users_in_room(room_id)
newly_left_users.update(left_users)
# Remove any users that we still share a room with.
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
for user_id, entries in left_users_rooms.items():
if any(rid in joined_room_ids for rid in entries):
newly_left_users.discard(user_id)
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
async def on_federation_query_user_devices(self, user_id: str) -> JsonDict:
if not self.hs.is_mine(UserID.from_string(user_id)):
raise SynapseError(400, "User is not hosted on this homeserver")
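
The join -> leave -> join caveat above is why the loop both adds to one set and discards from the other: after processing all deltas in stream order, each room lands in at most one of `newly_joined_rooms`/`newly_left_rooms` according to its net change. A self-contained sketch of that toggle, simplified to plain membership strings:

from typing import List, Optional, Set, Tuple

JOIN = "join"

def classify_rooms(
    changes: List[Tuple[str, Optional[str], Optional[str]]],
) -> Tuple[Set[str], Set[str]]:
    """changes: (room_id, prev_membership, membership), in stream order."""
    newly_joined: Set[str] = set()
    newly_left: Set[str] = set()
    for room_id, prev_membership, membership in changes:
        if membership == JOIN:
            if prev_membership != JOIN:
                newly_joined.add(room_id)
                newly_left.discard(room_id)
        elif prev_membership == JOIN:
            newly_joined.discard(room_id)
            newly_left.add(room_id)
    return newly_joined, newly_left

# join -> leave -> join nets out as newly joined, not newly left:
assert classify_rooms(
    [("!r:a", None, "join"), ("!r:a", "join", "leave"), ("!r:a", "leave", "join")]
) == ({"!r:a"}, set())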

View File

@@ -507,13 +507,15 @@ class PaginationHandler:
# Initially fetch the events from the database. With any luck, we can return
# these without blocking on backfill (handled below).
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
events, next_key = (
await self.store.paginate_room_events_by_topological_ordering(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
)
if pagin_config.direction == Direction.BACKWARDS:
@@ -582,13 +584,15 @@ class PaginationHandler:
# If we did backfill something, refetch the events from the database to
# catch anything new that might have been added since we last fetched.
if did_backfill:
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
events, next_key = (
await self.store.paginate_room_events_by_topological_ordering(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
)
else:
# Otherwise, we can backfill in the background for eventual

View File

@@ -74,6 +74,17 @@ class ProfileHandler:
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
async def get_profile(self, user_id: str, ignore_backoff: bool = True) -> JsonDict:
"""
Get a user's profile as a JSON dictionary.
Args:
user_id: The user to fetch the profile of.
ignore_backoff: True to ignore backoff when fetching over federation.
Returns:
A JSON dictionary. For local queries this will include the displayname and avatar_url
fields. For remote queries it may contain arbitrary information.
"""
target_user = UserID.from_string(user_id)
if self.hs.is_mine(target_user):
@@ -107,6 +118,15 @@ class ProfileHandler:
raise e.to_synapse_error()
async def get_displayname(self, target_user: UserID) -> Optional[str]:
"""
Fetch a user's display name from their profile.
Args:
target_user: The user to fetch the display name of.
Returns:
The user's display name or None if unset.
"""
if self.hs.is_mine(target_user):
try:
displayname = await self.store.get_profile_displayname(target_user)
@@ -203,6 +223,15 @@ class ProfileHandler:
await self._update_join_states(requester, target_user)
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
"""
Fetch a user's avatar URL from their profile.
Args:
target_user: The user to fetch the avatar URL of.
Returns:
The user's avatar URL or None if unset.
"""
if self.hs.is_mine(target_user):
try:
avatar_url = await self.store.get_profile_avatar_url(target_user)
@@ -403,6 +432,12 @@ class ProfileHandler:
async def _update_join_states(
self, requester: Requester, target_user: UserID
) -> None:
"""
Update the membership events of each room the user is joined to with the
new profile information.
Note that this stomps over any custom display name or avatar URL in member events.
"""
if not self.hs.is_mine(target_user):
return

View File

@@ -1750,7 +1750,7 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
from_key=from_key,
to_key=to_key,
limit=limit or 10,
order="ASC",
direction=Direction.FORWARDS,
)
events = list(room_events)

View File

@@ -0,0 +1,662 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
from typing import TYPE_CHECKING, Dict, List, Mapping, Optional, Sequence, Set
from typing_extensions import assert_never
from synapse.api.constants import AccountDataTypes
from synapse.handlers.receipts import ReceiptEventSource
from synapse.logging.opentracing import trace
from synapse.types import (
DeviceListUpdates,
JsonMapping,
MultiWriterStreamToken,
SlidingSyncStreamToken,
StreamToken,
)
from synapse.types.handlers.sliding_sync import (
HaveSentRoomFlag,
MutablePerConnectionState,
OperationType,
PerConnectionState,
SlidingSyncConfig,
SlidingSyncResult,
)
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class SlidingSyncExtensionHandler:
"""Handles the extensions to sliding sync."""
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastores().main
self.event_sources = hs.get_event_sources()
self.device_handler = hs.get_device_handler()
self.push_rules_handler = hs.get_push_rules_handler()
@trace
async def get_extensions_response(
self,
sync_config: SlidingSyncConfig,
previous_connection_state: "PerConnectionState",
new_connection_state: "MutablePerConnectionState",
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
actual_room_response_map: Dict[str, SlidingSyncResult.RoomResult],
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> SlidingSyncResult.Extensions:
"""Handle extension requests.
Args:
sync_config: Sync configuration
previous_connection_state: Snapshot of the per-connection state from
the previous request.
new_connection_state: A mutable copy of the per-connection
state, used to record updates to the state during this request.
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
if sync_config.extensions is None:
return SlidingSyncResult.Extensions()
to_device_response = None
if sync_config.extensions.to_device is not None:
to_device_response = await self.get_to_device_extension_response(
sync_config=sync_config,
to_device_request=sync_config.extensions.to_device,
to_token=to_token,
)
e2ee_response = None
if sync_config.extensions.e2ee is not None:
e2ee_response = await self.get_e2ee_extension_response(
sync_config=sync_config,
e2ee_request=sync_config.extensions.e2ee,
to_token=to_token,
from_token=from_token,
)
account_data_response = None
if sync_config.extensions.account_data is not None:
account_data_response = await self.get_account_data_extension_response(
sync_config=sync_config,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
account_data_request=sync_config.extensions.account_data,
to_token=to_token,
from_token=from_token,
)
receipts_response = None
if sync_config.extensions.receipts is not None:
receipts_response = await self.get_receipts_extension_response(
sync_config=sync_config,
previous_connection_state=previous_connection_state,
new_connection_state=new_connection_state,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
actual_room_response_map=actual_room_response_map,
receipts_request=sync_config.extensions.receipts,
to_token=to_token,
from_token=from_token,
)
typing_response = None
if sync_config.extensions.typing is not None:
typing_response = await self.get_typing_extension_response(
sync_config=sync_config,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
actual_room_response_map=actual_room_response_map,
typing_request=sync_config.extensions.typing,
to_token=to_token,
from_token=from_token,
)
return SlidingSyncResult.Extensions(
to_device=to_device_response,
e2ee=e2ee_response,
account_data=account_data_response,
receipts=receipts_response,
typing=typing_response,
)
def find_relevant_room_ids_for_extension(
self,
requested_lists: Optional[List[str]],
requested_room_ids: Optional[List[str]],
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
) -> Set[str]:
"""
Handle the reserved `lists`/`rooms` keys for extensions. Extensions should only
return results for rooms in the Sliding Sync response. This matches up the
requested rooms/lists with the actual lists/rooms in the Sliding Sync response.
{"lists": []} // Do not process any lists.
{"lists": ["rooms", "dms"]} // Process only a subset of lists.
{"lists": ["*"]} // Process all lists defined in the Sliding Window API. (This is the default.)
{"rooms": []} // Do not process any specific rooms.
{"rooms": ["!a:b", "!c:d"]} // Process only a subset of room subscriptions.
{"rooms": ["*"]} // Process all room subscriptions defined in the Room Subscription API. (This is the default.)
Args:
requested_lists: The `lists` from the extension request.
requested_room_ids: The `rooms` from the extension request.
actual_lists: The actual lists from the Sliding Sync response.
actual_room_ids: The actual room subscriptions from the Sliding Sync request.
"""
# We only want to include account data for rooms that are already in the sliding
# sync response AND that were requested in the account data request.
relevant_room_ids: Set[str] = set()
# See what rooms from the room subscriptions we should get account data for
if requested_room_ids is not None:
for room_id in requested_room_ids:
# A wildcard means we process all rooms from the room subscriptions
if room_id == "*":
relevant_room_ids.update(actual_room_ids)
break
if room_id in actual_room_ids:
relevant_room_ids.add(room_id)
# See what rooms from the sliding window lists we should get account data for
if requested_lists is not None:
for list_key in requested_lists:
# Just some typing because we share the variable name in multiple places
actual_list: Optional[SlidingSyncResult.SlidingWindowList] = None
# A wildcard means we process rooms from all lists
if list_key == "*":
for actual_list in actual_lists.values():
# We only expect a single SYNC operation for any list
assert len(actual_list.ops) == 1
sync_op = actual_list.ops[0]
assert sync_op.op == OperationType.SYNC
relevant_room_ids.update(sync_op.room_ids)
break
actual_list = actual_lists.get(list_key)
if actual_list is not None:
# We only expect a single SYNC operation for any list
assert len(actual_list.ops) == 1
sync_op = actual_list.ops[0]
assert sync_op.op == OperationType.SYNC
relevant_room_ids.update(sync_op.room_ids)
return relevant_room_ids
@trace
async def get_to_device_extension_response(
self,
sync_config: SlidingSyncConfig,
to_device_request: SlidingSyncConfig.Extensions.ToDeviceExtension,
to_token: StreamToken,
) -> Optional[SlidingSyncResult.Extensions.ToDeviceExtension]:
"""Handle to-device extension (MSC3885)
Args:
sync_config: Sync configuration
to_device_request: The to-device extension from the request
to_token: The point in the stream to sync up to.
"""
user_id = sync_config.user.to_string()
device_id = sync_config.requester.device_id
# Skip if the extension is not enabled
if not to_device_request.enabled:
return None
# Check that this request has a valid device ID (not all requests have
# to belong to a device, and so device_id is None)
if device_id is None:
return SlidingSyncResult.Extensions.ToDeviceExtension(
next_batch=f"{to_token.to_device_key}",
events=[],
)
since_stream_id = 0
if to_device_request.since is not None:
# We've already validated this is an int.
since_stream_id = int(to_device_request.since)
if to_token.to_device_key < since_stream_id:
# The since token is ahead of our current token, so we return an
# empty response.
logger.warning(
"Got to-device.since from the future. since token: %r is ahead of our current to_device stream position: %r",
since_stream_id,
to_token.to_device_key,
)
return SlidingSyncResult.Extensions.ToDeviceExtension(
next_batch=to_device_request.since,
events=[],
)
# Delete everything before the given since token, as we know the
# device must have received them.
deleted = await self.store.delete_messages_for_device(
user_id=user_id,
device_id=device_id,
up_to_stream_id=since_stream_id,
)
logger.debug(
"Deleted %d to-device messages up to %d for %s",
deleted,
since_stream_id,
user_id,
)
messages, stream_id = await self.store.get_messages_for_device(
user_id=user_id,
device_id=device_id,
from_stream_id=since_stream_id,
to_stream_id=to_token.to_device_key,
limit=min(to_device_request.limit, 100), # Limit to at most 100 events
)
return SlidingSyncResult.Extensions.ToDeviceExtension(
next_batch=f"{stream_id}",
events=messages,
)
@trace
async def get_e2ee_extension_response(
self,
sync_config: SlidingSyncConfig,
e2ee_request: SlidingSyncConfig.Extensions.E2eeExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.E2eeExtension]:
"""Handle E2EE device extension (MSC3884)
Args:
sync_config: Sync configuration
e2ee_request: The e2ee extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
user_id = sync_config.user.to_string()
device_id = sync_config.requester.device_id
# Skip if the extension is not enabled
if not e2ee_request.enabled:
return None
device_list_updates: Optional[DeviceListUpdates] = None
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
device_list_updates = await self.device_handler.get_user_ids_changed(
user_id=user_id,
from_token=from_token.stream_token,
)
device_one_time_keys_count: Mapping[str, int] = {}
device_unused_fallback_key_types: Sequence[str] = []
if device_id:
# TODO: We should have a way to let clients differentiate between the states of:
# * no change in OTK count since the provided since token
# * the server has zero OTKs left for this device
# Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
device_one_time_keys_count = await self.store.count_e2e_one_time_keys(
user_id, device_id
)
device_unused_fallback_key_types = (
await self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
)
return SlidingSyncResult.Extensions.E2eeExtension(
device_list_updates=device_list_updates,
device_one_time_keys_count=device_one_time_keys_count,
device_unused_fallback_key_types=device_unused_fallback_key_types,
)
@trace
async def get_account_data_extension_response(
self,
sync_config: SlidingSyncConfig,
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
account_data_request: SlidingSyncConfig.Extensions.AccountDataExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.AccountDataExtension]:
"""Handle Account Data extension (MSC3959)
Args:
sync_config: Sync configuration
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
account_data_request: The account_data extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
user_id = sync_config.user.to_string()
# Skip if the extension is not enabled
if not account_data_request.enabled:
return None
global_account_data_map: Mapping[str, JsonMapping] = {}
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
global_account_data_map = (
await self.store.get_updated_global_account_data_for_user(
user_id, from_token.stream_token.account_data_key
)
)
have_push_rules_changed = await self.store.have_push_rules_changed_for_user(
user_id, from_token.stream_token.push_rules_key
)
if have_push_rules_changed:
global_account_data_map = dict(global_account_data_map)
# TODO: This should take into account the `from_token` and `to_token`
global_account_data_map[AccountDataTypes.PUSH_RULES] = (
await self.push_rules_handler.push_rules_for_user(sync_config.user)
)
else:
# TODO: This should take into account the `to_token`
all_global_account_data = await self.store.get_global_account_data_for_user(
user_id
)
global_account_data_map = dict(all_global_account_data)
# TODO: This should take into account the `to_token`
global_account_data_map[AccountDataTypes.PUSH_RULES] = (
await self.push_rules_handler.push_rules_for_user(sync_config.user)
)
# Fetch room account data
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]] = {}
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=account_data_request.lists,
requested_room_ids=account_data_request.rooms,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
)
if len(relevant_room_ids) > 0:
if from_token is not None:
# TODO: This should take into account the `from_token` and `to_token`
account_data_by_room_map = (
await self.store.get_updated_room_account_data_for_user(
user_id, from_token.stream_token.account_data_key
)
)
else:
# TODO: This should take into account the `to_token`
account_data_by_room_map = (
await self.store.get_room_account_data_for_user(user_id)
)
# Filter down to the relevant rooms
account_data_by_room_map = {
room_id: account_data_map
for room_id, account_data_map in account_data_by_room_map.items()
if room_id in relevant_room_ids
}
return SlidingSyncResult.Extensions.AccountDataExtension(
global_account_data_map=global_account_data_map,
account_data_by_room_map=account_data_by_room_map,
)
@trace
async def get_receipts_extension_response(
self,
sync_config: SlidingSyncConfig,
previous_connection_state: "PerConnectionState",
new_connection_state: "MutablePerConnectionState",
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
actual_room_response_map: Dict[str, SlidingSyncResult.RoomResult],
receipts_request: SlidingSyncConfig.Extensions.ReceiptsExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.ReceiptsExtension]:
"""Handle Receipts extension (MSC3960)
Args:
sync_config: Sync configuration
previous_connection_state: The current per-connection state
new_connection_state: A mutable copy of the per-connection
state, used to record updates to the state.
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
receipts_request: The receipts extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
# Skip if the extension is not enabled
if not receipts_request.enabled:
return None
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=receipts_request.lists,
requested_room_ids=receipts_request.rooms,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
)
room_id_to_receipt_map: Dict[str, JsonMapping] = {}
if len(relevant_room_ids) > 0:
# We need to handle the different cases depending on if we have sent
# down receipts previously or not, so we split the relevant rooms
# up into different collections based on status.
live_rooms = set()
previously_rooms: Dict[str, MultiWriterStreamToken] = {}
initial_rooms = set()
for room_id in relevant_room_ids:
if not from_token:
initial_rooms.add(room_id)
continue
# If we're sending down the room from scratch again for some reason, we
# should always resend the receipts as well (regardless of whether
# we've sent them down before). This is to mimic what happens on
# initial sync, where you get a chunk of timeline with all of the
# corresponding receipts for the events in the timeline.
room_result = actual_room_response_map.get(room_id)
if room_result is not None and room_result.initial:
initial_rooms.add(room_id)
continue
room_status = previous_connection_state.receipts.have_sent_room(room_id)
if room_status.status == HaveSentRoomFlag.LIVE:
live_rooms.add(room_id)
elif room_status.status == HaveSentRoomFlag.PREVIOUSLY:
assert room_status.last_token is not None
previously_rooms[room_id] = room_status.last_token
elif room_status.status == HaveSentRoomFlag.NEVER:
initial_rooms.add(room_id)
else:
assert_never(room_status.status)
# The set of receipts that we fetched. Private receipts need to be
# filtered out before returning.
fetched_receipts = []
# For live rooms we just fetch all receipts in those rooms since the
# `since` token.
if live_rooms:
assert from_token is not None
receipts = await self.store.get_linearized_receipts_for_rooms(
room_ids=live_rooms,
from_key=from_token.stream_token.receipt_key,
to_key=to_token.receipt_key,
)
fetched_receipts.extend(receipts)
# For rooms we've previously sent down, but aren't up to date, we
# need to use the from token from the room status.
if previously_rooms:
for room_id, receipt_token in previously_rooms.items():
# TODO: Limit the number of receipts we're about to send down
# for the room; if it's too many we should TODO
previously_receipts = (
await self.store.get_linearized_receipts_for_room(
room_id=room_id,
from_key=receipt_token,
to_key=to_token.receipt_key,
)
)
fetched_receipts.extend(previously_receipts)
# For rooms we haven't previously sent down, we could send all receipts
# from that room but we only want to include receipts for events
# in the timeline to avoid bloating and blowing up the sync response
# as the number of users in the room increases. (This behavior is part of the spec.)
initial_rooms_and_event_ids = [
(room_id, event.event_id)
for room_id in initial_rooms
if room_id in actual_room_response_map
for event in actual_room_response_map[room_id].timeline_events
]
if initial_rooms_and_event_ids:
initial_receipts = await self.store.get_linearized_receipts_for_events(
room_and_event_ids=initial_rooms_and_event_ids,
)
fetched_receipts.extend(initial_receipts)
fetched_receipts = ReceiptEventSource.filter_out_private_receipts(
fetched_receipts, sync_config.user.to_string()
)
for receipt in fetched_receipts:
# These fields should exist for every receipt
room_id = receipt["room_id"]
type = receipt["type"]
content = receipt["content"]
room_id_to_receipt_map[room_id] = {"type": type, "content": content}
# Now we update the per-connection state to track which receipts we have
# and haven't sent down.
new_connection_state.receipts.record_sent_rooms(relevant_room_ids)
if from_token:
# Now find the set of rooms that may have receipts that we're not sending
# down. We only need to check rooms that we have previously returned
# receipts for (in `previous_connection_state`) because we only care about
# updating `LIVE` rooms to `PREVIOUSLY`. The `PREVIOUSLY` rooms will just
# stay pointing at their previous position so we don't need to waste time
# checking those and since we default to `NEVER`, rooms that were `NEVER`
# sent before don't need to be recorded as we'll handle them correctly when
# they come into range for the first time.
rooms_no_receipts = [
room_id
for room_id, room_status in previous_connection_state.receipts._statuses.items()
if room_status.status == HaveSentRoomFlag.LIVE
and room_id not in relevant_room_ids
]
changed_rooms = await self.store.get_rooms_with_receipts_between(
rooms_no_receipts,
from_key=from_token.stream_token.receipt_key,
to_key=to_token.receipt_key,
)
new_connection_state.receipts.record_unsent_rooms(
changed_rooms, from_token.stream_token.receipt_key
)
return SlidingSyncResult.Extensions.ReceiptsExtension(
room_id_to_receipt_map=room_id_to_receipt_map,
)
async def get_typing_extension_response(
self,
sync_config: SlidingSyncConfig,
actual_lists: Dict[str, SlidingSyncResult.SlidingWindowList],
actual_room_ids: Set[str],
actual_room_response_map: Dict[str, SlidingSyncResult.RoomResult],
typing_request: SlidingSyncConfig.Extensions.TypingExtension,
to_token: StreamToken,
from_token: Optional[SlidingSyncStreamToken],
) -> Optional[SlidingSyncResult.Extensions.TypingExtension]:
"""Handle Typing Notification extension (MSC3961)
Args:
sync_config: Sync configuration
actual_lists: Sliding window API. A map of list key to list results in the
Sliding Sync response.
actual_room_ids: The actual room IDs in the Sliding Sync response.
actual_room_response_map: A map of room ID to room results in the
Sliding Sync response.
typing_request: The typing extension from the request
to_token: The point in the stream to sync up to.
from_token: The point in the stream to sync from.
"""
# Skip if the extension is not enabled
if not typing_request.enabled:
return None
relevant_room_ids = self.find_relevant_room_ids_for_extension(
requested_lists=typing_request.lists,
requested_room_ids=typing_request.rooms,
actual_lists=actual_lists,
actual_room_ids=actual_room_ids,
)
room_id_to_typing_map: Dict[str, JsonMapping] = {}
if len(relevant_room_ids) > 0:
# Note: We don't need to take connection tracking into account for typing
# notifications because they'll get anything that's still relevant and hasn't
# timed out when the room comes into range. We consider the gap where the room
# fell out of range to be long enough for any typing notifications to have
# timed out (it's not worth the 30 seconds of data we may have missed).
typing_source = self.event_sources.sources.typing
typing_notifications, _ = await typing_source.get_new_events(
user=sync_config.user,
from_key=(from_token.stream_token.typing_key if from_token else 0),
to_key=to_token.typing_key,
# This is a dummy value and isn't used in the function
limit=0,
room_ids=relevant_room_ids,
is_guest=False,
)
for typing_notification in typing_notifications:
# These fields should exist for every typing notification
room_id = typing_notification["room_id"]
type = typing_notification["type"]
content = typing_notification["content"]
room_id_to_typing_map[room_id] = {"type": type, "content": content}
return SlidingSyncResult.Extensions.TypingExtension(
room_id_to_typing_map=room_id_to_typing_map,
)
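For reference, each per-room entry built above follows the `m.typing` EDU shape; a hand-written sketch of one possible response map (room and user IDs invented):

room_id_to_typing_map = {
    "!room:example.org": {
        "type": "m.typing",
        "content": {"user_ids": ["@alice:example.org", "@bob:example.org"]},
    },
}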


@@ -0,0 +1,123 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
from typing import TYPE_CHECKING, Optional
import attr
from synapse.logging.opentracing import trace
from synapse.types import SlidingSyncStreamToken
from synapse.types.handlers.sliding_sync import (
MutablePerConnectionState,
PerConnectionState,
SlidingSyncConfig,
)
if TYPE_CHECKING:
from synapse.storage.databases.main import DataStore
logger = logging.getLogger(__name__)
@attr.s(auto_attribs=True)
class SlidingSyncConnectionStore:
"""In-memory store of per-connection state, including what rooms we have
previously sent down a sliding sync connection.
Note: This is NOT safe to run in a worker setup because connection positions will
point to different sets of rooms on different workers, e.g. for the same connection,
a connection position of 5 might have totally different states on worker A and
worker B.
One complication that we need to deal with here is handling requests being
resent, i.e. if we sent down a room in a response that the client never
received, we must consider the room *not* sent when we get the request again.
This is handled by using an integer "token", which is returned to the client
as part of the sync token. For each connection we store a mapping from
tokens to the room states, and create a new entry when we send down new
rooms.
Note that for any given sliding sync connection we will only store a maximum
of two different tokens: the previous token from the request and a new token
sent in the response. When we receive a request with a given token, we then
clear out all other entries with a different token.
Attributes:
store: The datastore used to read and persist the per-connection state.
"""
store: "DataStore"
async def get_per_connection_state(
self,
sync_config: SlidingSyncConfig,
from_token: Optional[SlidingSyncStreamToken],
) -> PerConnectionState:
"""Fetch the per-connection state for the token.
Raises:
SlidingSyncUnknownPosition if the connection_token is unknown
"""
if from_token is None or from_token.connection_position == 0:
return PerConnectionState()
conn_id = sync_config.conn_id or ""
device_id = sync_config.requester.device_id
assert device_id is not None
return await self.store.get_per_connection_state(
sync_config.user.to_string(),
device_id,
conn_id,
from_token.connection_position,
)
@trace
async def record_new_state(
self,
sync_config: SlidingSyncConfig,
from_token: Optional[SlidingSyncStreamToken],
new_connection_state: MutablePerConnectionState,
) -> int:
"""Record updated per-connection state, returning the connection
position associated with the new state.
If there are no changes to the state this may return the same token as
the existing per-connection state.
"""
if not new_connection_state.has_updates():
if from_token is not None:
return from_token.connection_position
else:
return 0
if from_token is not None and from_token.connection_position == 0:
from_token = None
conn_id = sync_config.conn_id or ""
device_id = sync_config.requester.device_id
assert device_id is not None
return await self.store.persist_per_connection_state(
sync_config.user.to_string(),
device_id,
conn_id,
from_token.connection_position if from_token else None,
new_connection_state,
)
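To illustrate the two-token bookkeeping described in the class docstring, a toy in-memory model (plain ints and dicts stand in for the DB-backed positions and room states):

positions = {}  # connection position -> {room_id: status}
_next_pos = 1

def record(prev_pos, new_statuses):
    global _next_pos
    state = dict(positions.get(prev_pos, {}))
    state.update(new_statuses)
    pos, _next_pos = _next_pos, _next_pos + 1
    positions[pos] = state
    # Keep only the position the client proved it received, plus the new one.
    for p in list(positions):
        if p not in (prev_pos, pos):
            positions.pop(p)
    return pos

p1 = record(None, {"!a:x": "sent"})
p2 = record(p1, {"!b:x": "sent"})
# If the client never saw the response for p2 and retries with p1, "!b:x"
# is still correctly treated as unsent:
assert "!b:x" not in positions[p1]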


@@ -43,6 +43,7 @@ from prometheus_client import Counter
from synapse.api.constants import (
AccountDataTypes,
Direction,
EventContentFields,
EventTypes,
JoinRules,
@@ -64,6 +65,7 @@ from synapse.logging.opentracing import (
)
from synapse.storage.databases.main.event_push_actions import RoomNotifCounts
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
from synapse.storage.databases.main.stream import PaginateFunction
from synapse.storage.roommember import MemberSummary
from synapse.types import (
DeviceListUpdates,
@@ -84,7 +86,7 @@ from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.caches.lrucache import LruCache
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.metrics import Measure, measure_func
from synapse.util.metrics import Measure
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
@@ -879,22 +881,49 @@ class SyncHandler:
since_key = since_token.room_key
while limited and len(recents) < timeline_limit and max_repeat:
# If we have a since_key then we are trying to get any events
# that have happened since `since_key` up to `end_key`, so we
# can just use `get_room_events_stream_for_room`.
# Otherwise, we want to return the last N events in the room
# in topological ordering.
if since_key:
events, end_key = await self.store.get_room_events_stream_for_room(
room_id,
limit=load_limit + 1,
from_key=since_key,
to_key=end_key,
)
else:
events, end_key = await self.store.get_recent_events_for_room(
room_id, limit=load_limit + 1, end_token=end_key
)
# For initial `/sync`, we want to view a historical section of the
# timeline; to fetch events by `topological_ordering` (best
# representation of the room DAG as others were seeing it at the time).
# This also aligns with the order that `/messages` returns events in.
#
# For incremental `/sync`, we want to get all updates for rooms since
# the last `/sync` (regardless if those updates arrived late or happened
# a while ago in the past); to fetch events by `stream_ordering` (in the
# order they were received by the server).
#
# Relevant spec issue: https://github.com/matrix-org/matrix-spec/issues/1917
#
# FIXME: Using workaround for mypy,
# https://github.com/python/mypy/issues/10740#issuecomment-1997047277 and
# https://github.com/python/mypy/issues/17479
paginate_room_events_by_topological_ordering: PaginateFunction = (
self.store.paginate_room_events_by_topological_ordering
)
paginate_room_events_by_stream_ordering: PaginateFunction = (
self.store.paginate_room_events_by_stream_ordering
)
pagination_method: PaginateFunction = (
# Use `topological_ordering` for historical events
paginate_room_events_by_topological_ordering
if since_key is None
# Use `stream_ordering` for updates
else paginate_room_events_by_stream_ordering
)
events, end_key = await pagination_method(
room_id=room_id,
# The bounds are reversed so we can paginate backwards
# (from newer to older events), starting at `end_key`.
# This ensures we fill the `limit` with the newest events first.
from_key=end_key,
to_key=since_key,
direction=Direction.BACKWARDS,
# We add one so we can determine if there are enough events to saturate
# the limit or not (see `limited`)
limit=load_limit + 1,
)
# We want to return the events in ascending order (the last event is the
# most recent).
events.reverse()
log_kv({"loaded_recents": len(events)})
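Condensed, the branch above boils down to the following sketch (`Direction` is the real constant from the imports above; `store` and the paginate helpers are assumed to expose the keyword interface used here):

from synapse.api.constants import Direction

async def load_timeline(store, room_id, since_key, end_key, limit):
    # Initial sync (no since_key): topological order, matching /messages.
    # Incremental sync: stream order, so late-arriving events are included.
    paginate = (
        store.paginate_room_events_by_topological_ordering
        if since_key is None
        else store.paginate_room_events_by_stream_ordering
    )
    events, next_key = await paginate(
        room_id=room_id,
        from_key=end_key,    # newer bound first...
        to_key=since_key,    # ...walking back towards the older bound
        direction=Direction.BACKWARDS,
        limit=limit + 1,     # +1 so we can tell whether we were `limited`
    )
    events.reverse()         # ascending order, most recent event last
    return events, next_key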
@@ -1750,8 +1779,15 @@ class SyncHandler:
)
if include_device_list_updates:
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
# include_device_list_updates can only be True if we have a
# since token.
assert since_token is not None
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
user_id=user_id,
since_token=since_token,
now_token=sync_result_builder.now_token,
joined_room_ids=sync_result_builder.joined_room_ids,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
@@ -1863,8 +1899,14 @@ class SyncHandler:
newly_left_users,
) = sync_result_builder.calculate_user_changes()
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
# include_device_list_updates can only be True if we have a
# since token.
assert since_token is not None
device_lists = await self._device_handler.generate_sync_entry_for_device_list(
user_id=user_id,
since_token=since_token,
now_token=sync_result_builder.now_token,
joined_room_ids=sync_result_builder.joined_room_ids,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
@@ -2041,94 +2083,6 @@ class SyncHandler:
return sync_result_builder
@measure_func("_generate_sync_entry_for_device_list")
async def _generate_sync_entry_for_device_list(
self,
sync_result_builder: "SyncResultBuilder",
newly_joined_rooms: AbstractSet[str],
newly_joined_or_invited_or_knocked_users: AbstractSet[str],
newly_left_rooms: AbstractSet[str],
newly_left_users: AbstractSet[str],
) -> DeviceListUpdates:
"""Generate the DeviceListUpdates section of sync
Args:
sync_result_builder
newly_joined_rooms: Set of rooms user has joined since previous sync
newly_joined_or_invited_or_knocked_users: Set of users that have joined,
been invited to a room or are knocking on a room since
previous sync.
newly_left_rooms: Set of rooms user has left since previous sync
newly_left_users: Set of users that have left a room we're in since
previous sync
"""
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
assert since_token is not None
# Take a copy since these fields will be mutated later.
newly_joined_or_invited_or_knocked_users = set(
newly_joined_or_invited_or_knocked_users
)
newly_left_users = set(newly_left_users)
# We want to figure out what user IDs the client should refetch
# device keys for, and which users we aren't going to track changes
# for anymore.
#
# For the first step we check:
# a. if any users we share a room with have updated their devices,
# and
# b. we also check if we've joined any new rooms, or if a user has
# joined a room we're in.
#
# For the second step we just find any users we no longer share a
# room with by looking at all users that have left a room plus users
# that were in a room we've left.
users_that_have_changed = set()
joined_room_ids = sync_result_builder.joined_room_ids
# Step 1a, check for changes in devices of users we share a room
# with
users_that_have_changed = (
await self._device_handler.get_device_changes_in_shared_rooms(
user_id,
joined_room_ids,
from_token=since_token,
now_token=sync_result_builder.now_token,
)
)
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = await self.store.get_users_in_room(room_id)
newly_joined_or_invited_or_knocked_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = await self.store.get_users_in_room(room_id)
newly_left_users.update(left_users)
# Remove any users that we still share a room with.
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
for user_id, entries in left_users_rooms.items():
if any(rid in joined_room_ids for rid in entries):
newly_left_users.discard(user_id)
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
@trace
async def _generate_sync_entry_for_to_device(
self, sync_result_builder: "SyncResultBuilder"
@@ -2641,9 +2595,10 @@ class SyncHandler:
# a "gap" in the timeline, as described by the spec for /sync.
room_to_events = await self.store.get_room_events_stream_for_rooms(
room_ids=sync_result_builder.joined_room_ids,
from_key=since_token.room_key,
to_key=now_token.room_key,
from_key=now_token.room_key,
to_key=since_token.room_key,
limit=timeline_limit + 1,
direction=Direction.BACKWARDS,
)
# We loop through all room ids, even if there are no new events, in case
@@ -2654,6 +2609,9 @@ class SyncHandler:
newly_joined = room_id in newly_joined_rooms
if room_entry:
events, start_key = room_entry
# We want to return the events in ascending order (the last event is the
# most recent).
events.reverse()
prev_batch_token = now_token.copy_and_replace(
StreamKeyType.ROOM, start_key


@@ -1057,11 +1057,11 @@ class _MultipartParserProtocol(protocol.Protocol):
if not self.parser:
def on_header_field(data: bytes, start: int, end: int) -> None:
if data[start:end] == b"Location":
if data[start:end].lower() == b"location":
self.has_redirect = True
if data[start:end] == b"Content-Disposition":
if data[start:end].lower() == b"content-disposition":
self.in_disposition = True
if data[start:end] == b"Content-Type":
if data[start:end].lower() == b"content-type":
self.in_content_type = True
def on_header_value(data: bytes, start: int, end: int) -> None:
@@ -1088,7 +1088,6 @@ class _MultipartParserProtocol(protocol.Protocol):
return
# otherwise we are in the file part
else:
logger.info("Writing multipart file data to stream")
try:
self.stream.write(data[start:end])
except Exception as e:


@@ -74,7 +74,6 @@ from synapse.api.errors import (
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import defer_to_thread, preserve_fn, run_in_background
from synapse.logging.opentracing import active_span, start_active_span, trace_servlet
from synapse.types import ISynapseReactor
from synapse.util import json_encoder
from synapse.util.caches import intern_dict
from synapse.util.cancellation import is_function_cancellable
@@ -869,8 +868,7 @@ async def _async_write_json_to_request_in_thread(
with start_active_span("encode_json_response"):
span = active_span()
reactor: ISynapseReactor = request.reactor # type: ignore
json_str = await defer_to_thread(reactor, encode, span)
json_str = await defer_to_thread(request.reactor, encode, span)
_write_bytes_to_request(request, json_str)


@@ -658,7 +658,7 @@ class SynapseSite(ProxySite):
)
self.site_tag = site_tag
self.reactor = reactor
self.reactor: ISynapseReactor = reactor
assert config.http_options is not None
proxied = config.http_options.x_forwarded


@@ -28,6 +28,7 @@ from types import TracebackType
from typing import (
TYPE_CHECKING,
Awaitable,
BinaryIO,
Dict,
Generator,
List,
@@ -37,19 +38,28 @@ from typing import (
)
import attr
from zope.interface import implementer
from twisted.internet import interfaces
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IConsumer
from twisted.protocols.basic import FileSender
from twisted.python.failure import Failure
from twisted.web.server import Request
from synapse.api.errors import Codes, cs_error
from synapse.http.server import finish_request, respond_with_json
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.context import (
defer_to_threadpool,
make_deferred_yieldable,
run_in_background,
)
from synapse.util import Clock
from synapse.util.async_helpers import DeferredEvent
from synapse.util.stringutils import is_ascii
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.databases.main.media_repository import LocalMedia
@@ -122,6 +132,7 @@ def respond_404(request: SynapseRequest) -> None:
async def respond_with_file(
hs: "HomeServer",
request: SynapseRequest,
media_type: str,
file_path: str,
@@ -138,7 +149,7 @@ async def respond_with_file(
add_file_headers(request, media_type, file_size, upload_name)
with open(file_path, "rb") as f:
await make_deferred_yieldable(FileSender().beginFileTransfer(f, request))
await ThreadedFileSender(hs).beginFileTransfer(f, request)
finish_request(request)
else:
@@ -601,3 +612,151 @@ def _parseparam(s: bytes) -> Generator[bytes, None, None]:
f = s[:end]
yield f.strip()
s = s[end:]
@implementer(interfaces.IPushProducer)
class ThreadedFileSender:
"""
A producer that sends the contents of a file to a consumer, reading from the
file on a thread.
This works by having a loop in a threadpool repeatedly reading from the
file, until the consumer pauses the producer. There is then a loop in the
main thread that waits until the consumer resumes the producer and then
starts reading in the threadpool again.
This is done to ensure that we're never waiting in the threadpool, as
otherwise it's easy to starve it of threads.
"""
# How much data to read in one go.
CHUNK_SIZE = 2**14
# How long we wait for the consumer to be ready again before aborting the
# read.
TIMEOUT_SECONDS = 90.0
def __init__(self, hs: "HomeServer") -> None:
self.reactor = hs.get_reactor()
self.thread_pool = hs.get_media_sender_thread_pool()
self.file: Optional[BinaryIO] = None
self.deferred: "Deferred[None]" = Deferred()
self.consumer: Optional[interfaces.IConsumer] = None
# Signals if the thread should keep reading/sending data. Set means
# continue, clear means pause.
self.wakeup_event = DeferredEvent(self.reactor)
# Signals if the thread should terminate, e.g. because the consumer has
# gone away.
self.stop_writing = False
def beginFileTransfer(
self, file: BinaryIO, consumer: interfaces.IConsumer
) -> "Deferred[None]":
"""
Begin transferring a file
"""
self.file = file
self.consumer = consumer
self.consumer.registerProducer(self, True)
# We set the wakeup signal as we should start producing immediately.
self.wakeup_event.set()
run_in_background(self.start_read_loop)
return make_deferred_yieldable(self.deferred)
def resumeProducing(self) -> None:
"""interfaces.IPushProducer"""
self.wakeup_event.set()
def pauseProducing(self) -> None:
"""interfaces.IPushProducer"""
self.wakeup_event.clear()
def stopProducing(self) -> None:
"""interfaces.IPushProducer"""
# Unregister the consumer so we don't try and interact with it again.
if self.consumer:
self.consumer.unregisterProducer()
self.consumer = None
# Terminate the loop.
self.stop_writing = True
self.wakeup_event.set()
if not self.deferred.called:
self.deferred.errback(Exception("Consumer asked us to stop producing"))
async def start_read_loop(self) -> None:
"""This is the loop that drives reading/writing"""
try:
while not self.stop_writing:
# Start the loop in the threadpool to read data.
more_data = await defer_to_threadpool(
self.reactor, self.thread_pool, self._on_thread_read_loop
)
if not more_data:
# Reached EOF, we can just return.
return
if not self.wakeup_event.is_set():
ret = await self.wakeup_event.wait(self.TIMEOUT_SECONDS)
if not ret:
raise Exception("Timed out waiting to resume")
except Exception:
self._error(Failure())
finally:
self._finish()
def _on_thread_read_loop(self) -> bool:
"""This is the loop that happens on a thread.
Returns:
Whether there is more data to send.
"""
while not self.stop_writing and self.wakeup_event.is_set():
# The file should always have been set before we get here.
assert self.file is not None
chunk = self.file.read(self.CHUNK_SIZE)
if not chunk:
return False
self.reactor.callFromThread(self._write, chunk)
return True
def _write(self, chunk: bytes) -> None:
"""Called from the thread to write a chunk of data"""
if self.consumer:
self.consumer.write(chunk)
def _error(self, failure: Failure) -> None:
"""Called when there was a fatal error"""
if self.consumer:
self.consumer.unregisterProducer()
self.consumer = None
if not self.deferred.called:
self.deferred.errback(failure)
def _finish(self) -> None:
"""Called when we have finished writing (either on success or
failure)."""
if self.file:
self.file.close()
self.file = None
if self.consumer:
self.consumer.unregisterProducer()
self.consumer = None
if not self.deferred.called:
self.deferred.callback(None)
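A minimal usage sketch, mirroring how `respond_with_file` above drives it (`hs` stands in for a real HomeServer, and the consumer would typically be the request itself):

async def send_file(hs, consumer, path: str) -> None:
    # ThreadedFileSender registers itself as a push producer on the consumer
    # and reads CHUNK_SIZE chunks on the media threadpool until EOF.
    with open(path, "rb") as f:
        await ThreadedFileSender(hs).beginFileTransfer(f, consumer)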


@@ -49,15 +49,11 @@ from zope.interface import implementer
from twisted.internet import interfaces
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IConsumer
from twisted.protocols.basic import FileSender
from synapse.api.errors import NotFoundError
from synapse.logging.context import (
defer_to_thread,
make_deferred_yieldable,
run_in_background,
)
from synapse.logging.context import defer_to_thread, run_in_background
from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
from synapse.media._base import ThreadedFileSender
from synapse.util import Clock
from synapse.util.file_consumer import BackgroundFileConsumer
@@ -213,7 +209,7 @@ class MediaStorage:
local_path = os.path.join(self.local_media_directory, path)
if os.path.exists(local_path):
logger.debug("responding with local file %s", local_path)
return FileResponder(open(local_path, "rb"))
return FileResponder(self.hs, open(local_path, "rb"))
logger.debug("local file %s did not exist", local_path)
for provider in self.storage_providers:
@@ -336,13 +332,12 @@ class FileResponder(Responder):
is closed when finished streaming.
"""
def __init__(self, open_file: IO):
def __init__(self, hs: "HomeServer", open_file: BinaryIO):
self.hs = hs
self.open_file = open_file
def write_to_consumer(self, consumer: IConsumer) -> Deferred:
return make_deferred_yieldable(
FileSender().beginFileTransfer(self.open_file, consumer)
)
return ThreadedFileSender(self.hs).beginFileTransfer(self.open_file, consumer)
def __exit__(
self,
@@ -549,7 +544,7 @@ class MultipartFileConsumer:
Calculate the content length of the multipart response
in bytes.
"""
if not self.length:
if self.length is None:
return None
# calculate length of json field and content-type, disposition headers
json_field = json.dumps(self.json_field)


@@ -145,6 +145,7 @@ class FileStorageProviderBackend(StorageProvider):
def __init__(self, hs: "HomeServer", config: str):
self.hs = hs
self.reactor = hs.get_reactor()
self.cache_directory = hs.config.media.media_store_path
self.base_directory = config
@@ -165,7 +166,7 @@ class FileStorageProviderBackend(StorageProvider):
shutil_copyfile: Callable[[str, str], str] = shutil.copyfile
with start_active_span("shutil_copyfile"):
await defer_to_thread(
self.hs.get_reactor(),
self.reactor,
shutil_copyfile,
primary_fname,
backup_fname,
@@ -177,7 +178,7 @@ class FileStorageProviderBackend(StorageProvider):
backup_fname = os.path.join(self.base_directory, path)
if os.path.isfile(backup_fname):
return FileResponder(open(backup_fname, "rb"))
return FileResponder(self.hs, open(backup_fname, "rb"))
return None


@@ -259,6 +259,7 @@ class ThumbnailProvider:
media_storage: MediaStorage,
):
self.hs = hs
self.reactor = hs.get_reactor()
self.media_repo = media_repo
self.media_storage = media_storage
self.store = hs.get_datastores().main
@@ -373,11 +374,11 @@ class ThumbnailProvider:
await respond_with_multipart_responder(
self.hs.get_clock(),
request,
FileResponder(open(file_path, "rb")),
FileResponder(self.hs, open(file_path, "rb")),
media_info,
)
else:
await respond_with_file(request, desired_type, file_path)
await respond_with_file(self.hs, request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")
@@ -455,7 +456,7 @@ class ThumbnailProvider:
)
if file_path:
await respond_with_file(request, desired_type, file_path)
await respond_with_file(self.hs, request, desired_type, file_path)
else:
logger.warning("Failed to generate thumbnail")
raise SynapseError(400, "Failed to generate thumbnail.")


@@ -18,7 +18,8 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import TYPE_CHECKING, Callable
import logging
from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Optional, Tuple
from synapse.http.server import HttpServer, JsonResource
from synapse.rest import admin
@@ -67,11 +68,64 @@ from synapse.rest.client import (
voip,
)
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from synapse.server import HomeServer
RegisterServletsFunc = Callable[["HomeServer", HttpServer], None]
CLIENT_SERVLET_FUNCTIONS: Tuple[RegisterServletsFunc, ...] = (
versions.register_servlets,
initial_sync.register_servlets,
room.register_deprecated_servlets,
events.register_servlets,
room.register_servlets,
login.register_servlets,
profile.register_servlets,
presence.register_servlets,
directory.register_servlets,
voip.register_servlets,
pusher.register_servlets,
push_rule.register_servlets,
logout.register_servlets,
sync.register_servlets,
filter.register_servlets,
account.register_servlets,
register.register_servlets,
auth.register_servlets,
receipts.register_servlets,
read_marker.register_servlets,
room_keys.register_servlets,
keys.register_servlets,
tokenrefresh.register_servlets,
tags.register_servlets,
account_data.register_servlets,
reporting.register_servlets,
openid.register_servlets,
notifications.register_servlets,
devices.register_servlets,
thirdparty.register_servlets,
sendtodevice.register_servlets,
user_directory.register_servlets,
room_upgrade_rest_servlet.register_servlets,
capabilities.register_servlets,
account_validity.register_servlets,
relations.register_servlets,
password_policy.register_servlets,
knock.register_servlets,
appservice_ping.register_servlets,
admin.register_servlets_for_client_rest_resource,
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_issuer.register_servlets,
)
SERVLET_GROUPS: Dict[str, Iterable[RegisterServletsFunc]] = {
"client": CLIENT_SERVLET_FUNCTIONS,
}
class ClientRestResource(JsonResource):
"""Matrix Client API REST resource.
@@ -83,80 +137,56 @@ class ClientRestResource(JsonResource):
* etc
"""
def __init__(self, hs: "HomeServer"):
def __init__(self, hs: "HomeServer", servlet_groups: Optional[List[str]] = None):
JsonResource.__init__(self, hs, canonical_json=False)
self.register_servlets(self, hs)
if hs.config.media.can_load_media_repo:
# This import is here to prevent a circular import failure
from synapse.rest.client import media
SERVLET_GROUPS["media"] = (media.register_servlets,)
self.register_servlets(self, hs, servlet_groups)
@staticmethod
def register_servlets(client_resource: HttpServer, hs: "HomeServer") -> None:
def register_servlets(
client_resource: HttpServer,
hs: "HomeServer",
servlet_groups: Optional[Iterable[str]] = None,
) -> None:
# Some servlets are only registered on the main process (and not worker
# processes).
is_main_process = hs.config.worker.worker_app is None
versions.register_servlets(hs, client_resource)
if not servlet_groups:
servlet_groups = SERVLET_GROUPS.keys()
# Deprecated in r0
initial_sync.register_servlets(hs, client_resource)
room.register_deprecated_servlets(hs, client_resource)
for servlet_group in servlet_groups:
# Fail on unknown servlet groups.
if servlet_group not in SERVLET_GROUPS:
if servlet_group == "media":
logger.warning(
"media.can_load_media_repo needs to be configured for the media servlet to be available"
)
raise RuntimeError(
f"Attempting to register unknown client servlet: '{servlet_group}'"
)
# Partially deprecated in r0
events.register_servlets(hs, client_resource)
for servletfunc in SERVLET_GROUPS[servlet_group]:
if not is_main_process and servletfunc in [
pusher.register_servlets,
logout.register_servlets,
auth.register_servlets,
tokenrefresh.register_servlets,
reporting.register_servlets,
openid.register_servlets,
thirdparty.register_servlets,
room_upgrade_rest_servlet.register_servlets,
account_validity.register_servlets,
admin.register_servlets_for_client_rest_resource,
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_issuer.register_servlets,
]:
continue
room.register_servlets(hs, client_resource)
login.register_servlets(hs, client_resource)
profile.register_servlets(hs, client_resource)
presence.register_servlets(hs, client_resource)
directory.register_servlets(hs, client_resource)
voip.register_servlets(hs, client_resource)
if is_main_process:
pusher.register_servlets(hs, client_resource)
push_rule.register_servlets(hs, client_resource)
if is_main_process:
logout.register_servlets(hs, client_resource)
sync.register_servlets(hs, client_resource)
filter.register_servlets(hs, client_resource)
account.register_servlets(hs, client_resource)
register.register_servlets(hs, client_resource)
if is_main_process:
auth.register_servlets(hs, client_resource)
receipts.register_servlets(hs, client_resource)
read_marker.register_servlets(hs, client_resource)
room_keys.register_servlets(hs, client_resource)
keys.register_servlets(hs, client_resource)
if is_main_process:
tokenrefresh.register_servlets(hs, client_resource)
tags.register_servlets(hs, client_resource)
account_data.register_servlets(hs, client_resource)
if is_main_process:
reporting.register_servlets(hs, client_resource)
openid.register_servlets(hs, client_resource)
notifications.register_servlets(hs, client_resource)
devices.register_servlets(hs, client_resource)
if is_main_process:
thirdparty.register_servlets(hs, client_resource)
sendtodevice.register_servlets(hs, client_resource)
user_directory.register_servlets(hs, client_resource)
if is_main_process:
room_upgrade_rest_servlet.register_servlets(hs, client_resource)
capabilities.register_servlets(hs, client_resource)
if is_main_process:
account_validity.register_servlets(hs, client_resource)
relations.register_servlets(hs, client_resource)
password_policy.register_servlets(hs, client_resource)
knock.register_servlets(hs, client_resource)
appservice_ping.register_servlets(hs, client_resource)
if hs.config.media.can_load_media_repo:
from synapse.rest.client import media
media.register_servlets(hs, client_resource)
# moving to /_synapse/admin
if is_main_process:
admin.register_servlets_for_client_rest_resource(hs, client_resource)
# unstable
if is_main_process:
mutual_rooms.register_servlets(hs, client_resource)
login_token_request.register_servlets(hs, client_resource)
rendezvous.register_servlets(hs, client_resource)
auth_issuer.register_servlets(hs, client_resource)
servletfunc(hs, client_resource)
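With this refactor a resource can be scoped to particular servlet groups; a brief sketch of the intended usage (only the "client" and, when the media repo is loadable, "media" group names exist at this point):

# Register every known group (the default behaviour):
resource = ClientRestResource(hs)

# Register only the client servlets:
resource = ClientRestResource(hs, servlet_groups=["client"])

# Unknown group names raise a RuntimeError at construction time.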


@@ -67,7 +67,8 @@ from synapse.streams.config import PaginationConfig
from synapse.types import JsonDict, Requester, StreamToken, ThirdPartyInstanceID, UserID
from synapse.types.state import StateFilter
from synapse.util.cancellation import cancellable
from synapse.util.stringutils import parse_and_validate_server_name, random_string
from synapse.util.events import generate_fake_event_id
from synapse.util.stringutils import parse_and_validate_server_name
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -325,7 +326,7 @@ class RoomStateEventRestServlet(RestServlet):
)
event_id = event.event_id
except ShadowBanError:
event_id = "$" + random_string(43)
event_id = generate_fake_event_id()
set_tag("event_id", event_id)
ret = {"event_id": event_id}
@@ -377,7 +378,7 @@ class RoomSendEventRestServlet(TransactionRestServlet):
)
event_id = event.event_id
except ShadowBanError:
event_id = "$" + random_string(43)
event_id = generate_fake_event_id()
set_tag("event_id", event_id)
return 200, {"event_id": event_id}
@@ -1193,7 +1194,7 @@ class RoomRedactEventRestServlet(TransactionRestServlet):
event_id = event.event_id
except ShadowBanError:
event_id = "$" + random_string(43)
event_id = generate_fake_event_id()
set_tag("event_id", event_id)
return 200, {"event_id": event_id}


@@ -899,6 +899,9 @@ class SlidingSyncRestServlet(RestServlet):
body = parse_and_validate_json_object_from_request(request, SlidingSyncBody)
# Tag and log useful data to differentiate requests.
set_tag(
"sliding_sync.sync_type", "initial" if from_token is None else "incremental"
)
set_tag("sliding_sync.conn_id", body.conn_id or "")
log_kv(
{
@@ -912,6 +915,12 @@ class SlidingSyncRestServlet(RestServlet):
"sliding_sync.room_subscriptions": list(
(body.room_subscriptions or {}).keys()
),
# We also include the number of room subscriptions because logs are
# limited to 1024 characters and the large room ID list above can be cut
# off.
"sliding_sync.num_room_subscriptions": len(
(body.room_subscriptions or {}).keys()
),
}
)
@@ -1035,6 +1044,11 @@ class SlidingSyncRestServlet(RestServlet):
if room_result.initial:
serialized_rooms[room_id]["initial"] = room_result.initial
if room_result.unstable_expanded_timeline:
serialized_rooms[room_id][
"unstable_expanded_timeline"
] = room_result.unstable_expanded_timeline
# This will be omitted for invite/knock rooms with `stripped_state`
if (
room_result.required_state is not None


@@ -64,6 +64,7 @@ class VersionsRestServlet(RestServlet):
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
msc3881_enabled = self.config.experimental.msc3881_enabled
msc3575_enabled = self.config.experimental.msc3575_enabled
if self.auth.has_access_token(request):
requester = await self.auth.get_user_by_req(
@@ -77,6 +78,9 @@ class VersionsRestServlet(RestServlet):
msc3881_enabled = await self.store.is_feature_enabled(
user_id, ExperimentalFeature.MSC3881
)
msc3575_enabled = await self.store.is_feature_enabled(
user_id, ExperimentalFeature.MSC3575
)
return (
200,
@@ -169,6 +173,8 @@ class VersionsRestServlet(RestServlet):
),
# MSC4151: Report room API (Client-Server API)
"org.matrix.msc4151": self.config.experimental.msc4151_enabled,
# Simplified sliding sync
"org.matrix.simplified_msc3575": msc3575_enabled,
},
},
)
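The effect on the advertised feature map, sketched as the relevant slice of a /versions response (other entries elided; values shown assume both features are enabled for the requesting user):

{
    "unstable_features": {
        "org.matrix.msc3881": True,
        # Per-user: True once MSC3575 is enabled for this user.
        "org.matrix.simplified_msc3575": True,
    },
}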


@@ -34,6 +34,7 @@ from typing_extensions import TypeAlias
from twisted.internet.interfaces import IOpenSSLContextFactory
from twisted.internet.tcp import Port
from twisted.python.threadpool import ThreadPool
from twisted.web.iweb import IPolicyForHTTPS
from twisted.web.resource import Resource
@@ -123,6 +124,7 @@ from synapse.http.client import (
)
from synapse.http.matrixfederationclient import MatrixFederationHttpClient
from synapse.media.media_repository import MediaRepository
from synapse.metrics import register_threadpool
from synapse.metrics.common_usage_metrics import CommonUsageMetricsManager
from synapse.module_api import ModuleApi
from synapse.module_api.callbacks import ModuleApiCallbacks
@@ -941,3 +943,24 @@ class HomeServer(metaclass=abc.ABCMeta):
@cache_in_self
def get_task_scheduler(self) -> TaskScheduler:
return TaskScheduler(self)
@cache_in_self
def get_media_sender_thread_pool(self) -> ThreadPool:
"""Fetch the threadpool used to read files when responding to media
download requests."""
# We can choose a large threadpool size as these threads predominantly
# do IO rather than CPU work.
media_threadpool = ThreadPool(
name="media_threadpool", minthreads=1, maxthreads=50
)
media_threadpool.start()
self.get_reactor().addSystemEventTrigger(
"during", "shutdown", media_threadpool.stop
)
# Register the threadpool with our metrics.
register_threadpool("media", media_threadpool)
return media_threadpool


@@ -502,8 +502,15 @@ class EventsPersistenceStorageController:
"""
state = await self._calculate_current_state(room_id)
delta = await self._calculate_state_delta(room_id, state)
sliding_sync_table_changes = (
await self.persist_events_store._calculate_sliding_sync_table_changes(
room_id, [], delta
)
)
await self.persist_events_store.update_current_state(room_id, delta)
await self.persist_events_store.update_current_state(
room_id, delta, sliding_sync_table_changes
)
async def _calculate_current_state(self, room_id: str) -> StateMap[str]:
"""Calculate the current state of a room, based on the forward extremities


@@ -35,6 +35,7 @@ from typing import (
Iterable,
Iterator,
List,
Mapping,
Optional,
Sequence,
Tuple,
@@ -64,6 +65,7 @@ from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.background_updates import BackgroundUpdater
from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine
from synapse.storage.types import Connection, Cursor, SQLQueryParameters
from synapse.types import StrCollection
from synapse.util.async_helpers import delay_cancellation
from synapse.util.iterutils import batch_iter
@@ -1095,6 +1097,50 @@ class DatabasePool:
txn.execute(sql, vals)
@staticmethod
def simple_insert_returning_txn(
txn: LoggingTransaction,
table: str,
values: Dict[str, Any],
returning: StrCollection,
) -> Tuple[Any, ...]:
"""Executes an `INSERT INTO... RETURNING...` statement (or equivalent for
SQLite versions that don't support it).
"""
if txn.database_engine.supports_returning:
keys, vals = zip(*values.items())
sql = "INSERT INTO %s (%s) VALUES(%s) RETURNING %s" % (
table,
", ".join(k for k in keys),
", ".join("?" for _ in keys),
", ".join(k for k in returning),
)
txn.execute(sql, vals)
row = txn.fetchone()
assert row is not None
return row
else:
# For old versions of SQLite we do a standard insert and then can
# use `last_insert_rowid` to get at the row we just inserted
DatabasePool.simple_insert_txn(
txn,
table=table,
values=values,
)
txn.execute("SELECT last_insert_rowid()")
row = txn.fetchone()
assert row is not None
(rowid,) = row
row = DatabasePool.simple_select_one_txn(
txn, table=table, keyvalues={"rowid": rowid}, retcols=returning
)
assert row is not None
return row
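For a concrete sense of the helper, a sketch of a call and the SQL it emits (the table and column names are invented; `txn` is assumed to be an open transaction):

(widget_id,) = DatabasePool.simple_insert_returning_txn(
    txn,
    table="widgets",
    values={"name": "a"},
    returning=("widget_id",),
)
# Postgres / newer SQLite:
#     INSERT INTO widgets (name) VALUES (?) RETURNING widget_id
# Older SQLite: a plain INSERT, then SELECT last_insert_rowid(), then a
# re-select of the requested columns by rowid.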
async def simple_insert_many(
self,
table: str,
@@ -1254,9 +1300,9 @@ class DatabasePool:
self,
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
values: Dict[str, Any],
insertion_values: Optional[Dict[str, Any]] = None,
keyvalues: Mapping[str, Any],
values: Mapping[str, Any],
insertion_values: Optional[Mapping[str, Any]] = None,
where_clause: Optional[str] = None,
) -> bool:
"""
@@ -1299,9 +1345,9 @@ class DatabasePool:
self,
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
values: Dict[str, Any],
insertion_values: Optional[Dict[str, Any]] = None,
keyvalues: Mapping[str, Any],
values: Mapping[str, Any],
insertion_values: Optional[Mapping[str, Any]] = None,
where_clause: Optional[str] = None,
lock: bool = True,
) -> bool:
@@ -1322,7 +1368,7 @@ class DatabasePool:
if lock:
# We need to lock the table :(
self.engine.lock_table(txn, table)
txn.database_engine.lock_table(txn, table)
def _getwhere(key: str) -> str:
# If the value we're passing in is None (aka NULL), we need to use
@@ -1376,13 +1422,13 @@ class DatabasePool:
# successfully inserted
return True
@staticmethod
def simple_upsert_txn_native_upsert(
self,
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
values: Dict[str, Any],
insertion_values: Optional[Dict[str, Any]] = None,
keyvalues: Mapping[str, Any],
values: Mapping[str, Any],
insertion_values: Optional[Mapping[str, Any]] = None,
where_clause: Optional[str] = None,
) -> bool:
"""
@@ -1535,8 +1581,8 @@ class DatabasePool:
self.simple_upsert_txn_emulated(txn, table, _keys, _vals, lock=False)
@staticmethod
def simple_upsert_many_txn_native_upsert(
self,
txn: LoggingTransaction,
table: str,
key_names: Collection[str],
@@ -1966,8 +2012,8 @@ class DatabasePool:
def simple_update_txn(
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
updatevalues: Dict[str, Any],
keyvalues: Mapping[str, Any],
updatevalues: Mapping[str, Any],
) -> int:
"""
Update rows in the given database table.


@@ -33,6 +33,7 @@ from synapse.storage.database import (
LoggingDatabaseConnection,
LoggingTransaction,
)
from synapse.storage.databases.main.sliding_sync import SlidingSyncStore
from synapse.storage.databases.main.stats import UserSortOrder
from synapse.storage.engines import BaseDatabaseEngine
from synapse.storage.types import Cursor
@@ -156,6 +157,7 @@ class DataStore(
LockStore,
SessionStore,
TaskSchedulerWorkerStore,
SlidingSyncStore,
):
def __init__(
self,


@@ -313,6 +313,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
"get_unread_event_push_actions_by_room_for_user", (room_id,)
)
self._attempt_to_invalidate_cache("_get_max_event_pos", (room_id,))
# The `_get_membership_from_event_id` cache is immutable, except for the
# case where we look up an event *before* persisting it.
self._attempt_to_invalidate_cache("_get_membership_from_event_id", (event_id,))

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -457,6 +457,8 @@ class EventsWorkerStore(SQLBaseStore):
) -> Optional[EventBase]:
"""Get an event from the database by event_id.
Events for unknown room versions will also be filtered out.
Args:
event_id: The event_id of the event to fetch
@@ -511,6 +513,10 @@ class EventsWorkerStore(SQLBaseStore):
) -> Dict[str, EventBase]:
"""Get events from the database
Unknown events will be omitted from the response.
Events for unknown room versions will also be filtered out.
Args:
event_ids: The event_ids of the events to fetch
@@ -553,6 +559,8 @@ class EventsWorkerStore(SQLBaseStore):
Unknown events will be omitted from the response.
Events for unknown room versions will also be filtered out.
Args:
event_ids: The event_ids of the events to fetch


@@ -144,6 +144,16 @@ class ProfileWorkerStore(SQLBaseStore):
return 50
async def get_profileinfo(self, user_id: UserID) -> ProfileInfo:
"""
Fetch the display name and avatar URL of a user.
Args:
user_id: The user ID to fetch the profile for.
Returns:
The user's display name and avatar URL. Values may be null if unset
or if the user doesn't exist.
"""
profile = await self.db_pool.simple_select_one(
table="profiles",
keyvalues={"full_user_id": user_id.to_string()},
@@ -158,6 +168,15 @@ class ProfileWorkerStore(SQLBaseStore):
return ProfileInfo(avatar_url=profile[1], display_name=profile[0])
async def get_profile_displayname(self, user_id: UserID) -> Optional[str]:
"""
Fetch the display name of a user.
Args:
user_id: The user to get the display name for.
Raises:
404 if the user does not exist.
"""
return await self.db_pool.simple_select_one_onecol(
table="profiles",
keyvalues={"full_user_id": user_id.to_string()},
@@ -166,6 +185,15 @@ class ProfileWorkerStore(SQLBaseStore):
)
async def get_profile_avatar_url(self, user_id: UserID) -> Optional[str]:
"""
Fetch the avatar URL of a user.
Args:
user_id: The user to get the avatar URL for.
Raises:
404 if the user does not exist.
"""
return await self.db_pool.simple_select_one_onecol(
table="profiles",
keyvalues={"full_user_id": user_id.to_string()},
@@ -174,6 +202,12 @@ class ProfileWorkerStore(SQLBaseStore):
)
async def create_profile(self, user_id: UserID) -> None:
"""
Create a blank profile for a user.
Args:
user_id: The user to create the profile for.
"""
user_localpart = user_id.localpart
await self.db_pool.simple_insert(
table="profiles",


@@ -454,6 +454,10 @@ class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore):
# so must be deleted first.
"local_current_membership",
"room_memberships",
# Note: the sliding_sync_ tables have foreign keys to the `events` table
# so must be deleted first.
"sliding_sync_joined_rooms",
"sliding_sync_membership_snapshots",
"events",
"federation_inbound_events_staging",
"receipts_graph",


@@ -43,6 +43,7 @@ from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
LoggingTransaction,
make_tuple_in_list_sql_clause,
)
from synapse.storage.engines._base import IsolationLevel
from synapse.storage.util.id_generators import MultiWriterIdGenerator
@@ -51,10 +52,12 @@ from synapse.types import (
JsonMapping,
MultiWriterStreamToken,
PersistedPosition,
StrCollection,
)
from synapse.util import json_encoder
from synapse.util.caches.descriptors import cached, cachedList
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.iterutils import batch_iter
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -479,6 +482,83 @@ class ReceiptsWorkerStore(SQLBaseStore):
}
return results
async def get_linearized_receipts_for_events(
self,
room_and_event_ids: Collection[Tuple[str, str]],
) -> Sequence[JsonMapping]:
"""Get all receipts for the given set of events.
Arguments:
room_and_event_ids: A collection of (room ID, event ID) 2-tuples
to fetch receipts for.
Returns:
A list of receipts, one per room.
"""
def get_linearized_receipts_for_events_txn(
txn: LoggingTransaction,
room_id_event_id_tuples: Collection[Tuple[str, str]],
) -> List[Tuple[str, str, str, str, Optional[str], str]]:
clause, args = make_tuple_in_list_sql_clause(
self.database_engine, ("room_id", "event_id"), room_id_event_id_tuples
)
sql = f"""
SELECT room_id, receipt_type, user_id, event_id, thread_id, data
FROM receipts_linearized
WHERE {clause}
"""
txn.execute(sql, args)
return txn.fetchall()
# room_id -> event_id -> receipt_type -> user_id -> receipt data
room_to_content: Dict[str, Dict[str, Dict[str, Dict[str, JsonMapping]]]] = {}
for batch in batch_iter(room_and_event_ids, 1000):
batch_results = await self.db_pool.runInteraction(
"get_linearized_receipts_for_events",
get_linearized_receipts_for_events_txn,
batch,
)
for (
room_id,
receipt_type,
user_id,
event_id,
thread_id,
data,
) in batch_results:
content = room_to_content.setdefault(room_id, {})
user_receipts = content.setdefault(event_id, {}).setdefault(
receipt_type, {}
)
receipt_data = db_to_json(data)
if thread_id is not None:
receipt_data["thread_id"] = thread_id
# MSC4102: always replace threaded receipts with unthreaded ones
# if there is a clash. Specifically:
# - if there is no existing receipt, great, set the data.
# - if there is an existing receipt, is it threaded (thread_id
# present)? YES: replace if this receipt has no thread id.
# NO: do not replace. This means we will drop some receipts, but
# MSC4102 is designed to drop semantically meaningless receipts,
# so this is okay. Previously, we would drop meaningful data!
if user_id in user_receipts:
if "thread_id" in user_receipts[user_id] and not thread_id:
user_receipts[user_id] = receipt_data
else:
user_receipts[user_id] = receipt_data
return [
{"type": EduTypes.RECEIPT, "room_id": room_id, "content": content}
for room_id, content in room_to_content.items()
]
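A worked instance of the MSC4102 clash rule above (IDs invented): when the same user has both a threaded and an unthreaded receipt on one event, the unthreaded receipt wins, and a threaded receipt never replaces an existing one:

user_receipts = {"@alice:x": {"ts": 1, "thread_id": "$root"}}  # threaded first
incoming = {"ts": 2}  # unthreaded receipt for the same event and user
existing = user_receipts.get("@alice:x")
if existing is None or ("thread_id" in existing and "thread_id" not in incoming):
    user_receipts["@alice:x"] = incoming
assert user_receipts["@alice:x"] == {"ts": 2}  # unthreaded replaced threaded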
@cached(
num_args=2,
)
@@ -550,6 +630,46 @@ class ReceiptsWorkerStore(SQLBaseStore):
return results
async def get_rooms_with_receipts_between(
self,
room_ids: StrCollection,
from_key: MultiWriterStreamToken,
to_key: MultiWriterStreamToken,
) -> StrCollection:
"""Given a set of room_ids, find out which ones (may) have receipts
between the two tokens (> `from_key` and <= `to_key`)."""
room_ids = self._receipts_stream_cache.get_entities_changed(
room_ids, from_key.stream
)
if not room_ids:
return []
def f(txn: LoggingTransaction, room_ids: StrCollection) -> StrCollection:
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", room_ids
)
sql = f"""
SELECT DISTINCT room_id FROM receipts_linearized
WHERE {clause} AND ? < stream_id AND stream_id <= ?
"""
args.append(from_key.stream)
args.append(to_key.get_max_stream_pos())
txn.execute(sql, args)
return [room_id for room_id, in txn]
results: List[str] = []
for batch in batch_iter(room_ids, 1000):
batch_result = await self.db_pool.runInteraction(
"get_rooms_with_receipts_between", f, batch
)
results.extend(batch_result)
return results
async def get_users_sent_receipts_between(
self, last_id: int, current_id: int
) -> List[str]:
@@ -954,6 +1074,12 @@ class ReceiptsBackgroundUpdateStore(SQLBaseStore):
self.RECEIPTS_GRAPH_UNIQUE_INDEX_UPDATE_NAME,
self._background_receipts_graph_unique_index,
)
self.db_pool.updates.register_background_index_update(
update_name="receipts_room_id_event_id_index",
index_name="receipts_linearized_event_id",
table="receipts_linearized",
columns=("room_id", "event_id"),
)
async def _populate_receipt_event_stream_ordering(
self, progress: JsonDict, batch_size: int


@@ -39,6 +39,7 @@ from typing import (
import attr
from synapse.api.constants import EventTypes, Membership
from synapse.logging.opentracing import trace
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
@@ -422,6 +423,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
return invite
return None
@trace
async def get_rooms_for_local_user_where_membership_is(
self,
user_id: str,
@@ -1335,6 +1337,12 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
keyvalues={"user_id": user_id, "room_id": room_id},
updatevalues={"forgotten": 1},
)
self.db_pool.simple_update_txn(
txn,
table="sliding_sync_membership_snapshots",
keyvalues={"user_id": user_id, "room_id": room_id},
updatevalues={"forgotten": 1},
)
self._invalidate_cache_and_stream(txn, self.did_forget, (user_id, room_id))
self._invalidate_cache_and_stream(


@@ -0,0 +1,506 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
from typing import TYPE_CHECKING, AbstractSet, Dict, List, Mapping, Optional, Set, cast
import attr
from synapse.api.errors import SlidingSyncUnknownPosition
from synapse.logging.opentracing import log_kv
from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage.database import LoggingTransaction
from synapse.types import MultiWriterStreamToken, RoomStreamToken
from synapse.types.handlers.sliding_sync import (
HaveSentRoom,
HaveSentRoomFlag,
MutablePerConnectionState,
PerConnectionState,
RoomStatusMap,
RoomSyncConfig,
)
from synapse.util import json_encoder
from synapse.util.caches.descriptors import cached
if TYPE_CHECKING:
from synapse.storage.databases.main import DataStore
class SlidingSyncStore(SQLBaseStore):
async def persist_per_connection_state(
self,
user_id: str,
device_id: str,
conn_id: str,
previous_connection_position: Optional[int],
per_connection_state: "MutablePerConnectionState",
) -> int:
"""Persist updates to the per-connection state for a sliding sync
connection.
Returns:
The connection position of the newly persisted state.
"""
store = cast("DataStore", self)
return await self.db_pool.runInteraction(
"persist_per_connection_state",
self.persist_per_connection_state_txn,
user_id=user_id,
device_id=device_id,
conn_id=conn_id,
previous_connection_position=previous_connection_position,
per_connection_state=await PerConnectionStateDB.from_state(
per_connection_state, store
),
)
def persist_per_connection_state_txn(
self,
txn: LoggingTransaction,
user_id: str,
device_id: str,
conn_id: str,
previous_connection_position: Optional[int],
per_connection_state: "PerConnectionStateDB",
) -> int:
# First we fetch (or create) the connection key associated with the
# previous connection position.
if previous_connection_position is not None:
# The `previous_connection_position` is a user-supplied value, so we
# need to make sure that the one they supplied is actually theirs.
sql = """
SELECT connection_key
FROM sliding_sync_connection_positions
INNER JOIN sliding_sync_connections USING (connection_key)
WHERE
connection_position = ?
AND user_id = ? AND device_id = ? AND conn_id = ?
"""
txn.execute(
sql, (previous_connection_position, user_id, device_id, conn_id)
)
row = txn.fetchone()
if row is None:
raise SlidingSyncUnknownPosition()
(connection_key,) = row
else:
# We're restarting the connection, so we clear all existing
# connections. We do this here to ensure that if we get lots of
# one-shot requests we don't stack up lots of entries.
self.db_pool.simple_delete_txn(
txn,
table="sliding_sync_connections",
keyvalues={
"user_id": user_id,
"device_id": device_id,
"conn_id": conn_id,
},
)
(connection_key,) = self.db_pool.simple_insert_returning_txn(
txn,
table="sliding_sync_connections",
values={
"user_id": user_id,
"device_id": device_id,
"conn_id": conn_id,
"created_ts": self._clock.time_msec(),
},
returning=("connection_key",),
)
# Define a new connection position for the updates
(connection_position,) = self.db_pool.simple_insert_returning_txn(
txn,
table="sliding_sync_connection_positions",
values={
"connection_key": connection_key,
"created_ts": self._clock.time_msec(),
},
returning=("connection_position",),
)
# We need to deduplicate the `required_state` JSON. We do this by
# fetching all JSON associated with the connection and comparing that
# with the updates to `required_state`.
# Dict from required state json -> required state ID
required_state_to_id: Dict[str, int] = {}
if previous_connection_position is not None:
rows = self.db_pool.simple_select_list_txn(
txn,
table="sliding_sync_connection_required_state",
keyvalues={"connection_key": connection_key},
retcols=("required_state_id", "required_state"),
)
for required_state_id, required_state in rows:
required_state_to_id[required_state] = required_state_id
room_to_state_ids: Dict[str, int] = {}
unique_required_state: Dict[str, List[str]] = {}
for room_id, room_state in per_connection_state.room_configs.items():
serialized_state = json_encoder.encode(
# We store the required state as a sorted list of event type /
# state key tuples.
sorted(
(event_type, state_key)
for event_type, state_keys in room_state.required_state_map.items()
for state_key in state_keys
)
)
existing_state_id = required_state_to_id.get(serialized_state)
if existing_state_id is not None:
room_to_state_ids[room_id] = existing_state_id
else:
unique_required_state.setdefault(serialized_state, []).append(room_id)
# Insert any new `required_state` json we haven't previously seen.
for serialized_required_state, room_ids in unique_required_state.items():
(required_state_id,) = self.db_pool.simple_insert_returning_txn(
txn,
table="sliding_sync_connection_required_state",
values={
"connection_key": connection_key,
"required_state": serialized_required_state,
},
returning=("required_state_id",),
)
for room_id in room_ids:
room_to_state_ids[room_id] = required_state_id
# Copy over state from the previous connection position (we'll overwrite
# these rows with any changes).
if previous_connection_position is not None:
sql = """
INSERT INTO sliding_sync_connection_streams
(connection_position, stream, room_id, room_status, last_position)
SELECT ?, stream, room_id, room_status, last_position
FROM sliding_sync_connection_streams
WHERE connection_position = ?
"""
txn.execute(sql, (connection_position, previous_connection_position))
sql = """
INSERT INTO sliding_sync_connection_room_configs
(connection_position, room_id, timeline_limit, required_state_id)
SELECT ?, room_id, timeline_limit, required_state_id
FROM sliding_sync_connection_room_configs
WHERE connection_position = ?
"""
txn.execute(sql, (connection_position, previous_connection_position))
# We now upsert the changes to the various streams.
key_values = []
value_values = []
for room_id, have_sent_room in per_connection_state.rooms._statuses.items():
key_values.append((connection_position, "rooms", room_id))
value_values.append(
(have_sent_room.status.value, have_sent_room.last_token)
)
for room_id, have_sent_room in per_connection_state.receipts._statuses.items():
key_values.append((connection_position, "receipts", room_id))
value_values.append(
(have_sent_room.status.value, have_sent_room.last_token)
)
self.db_pool.simple_upsert_many_txn(
txn,
table="sliding_sync_connection_streams",
key_names=(
"connection_position",
"stream",
"room_id",
),
key_values=key_values,
value_names=(
"room_status",
"last_position",
),
value_values=value_values,
)
# ... and upsert changes to the room configs.
keys = []
values = []
for room_id, room_config in per_connection_state.room_configs.items():
keys.append((connection_position, room_id))
values.append((room_config.timeline_limit, room_to_state_ids[room_id]))
self.db_pool.simple_upsert_many_txn(
txn,
table="sliding_sync_connection_room_configs",
key_names=(
"connection_position",
"room_id",
),
key_values=keys,
value_names=(
"timeline_limit",
"required_state_id",
),
value_values=values,
)
# Persist changes to the room lists
for list_name, list_room_ids in per_connection_state.list_to_rooms.items():
self.db_pool.simple_delete_txn(
txn,
table="sliding_sync_connection_room_lists",
keyvalues={"connection_key": connection_key, "list_name": list_name},
)
self.db_pool.simple_insert_many_txn(
txn,
table="sliding_sync_connection_room_lists",
keys=("connection_key", "list_name", "room_id"),
values=[
(connection_key, list_name, room_id) for room_id in list_room_ids
],
)
return connection_position
@cached(iterable=True, max_entries=100000)
async def get_per_connection_state(
self, user_id: str, device_id: str, conn_id: str, connection_position: int
) -> "PerConnectionState":
"""Get the per-connection state for the given connection position."""
per_connection_state_db = await self.db_pool.runInteraction(
"get_per_connection_state",
self._get_per_connection_state_txn,
user_id=user_id,
device_id=device_id,
conn_id=conn_id,
connection_position=connection_position,
)
store = cast("DataStore", self)
return await per_connection_state_db.to_state(store)
def _get_per_connection_state_txn(
self,
txn: LoggingTransaction,
user_id: str,
device_id: str,
conn_id: str,
connection_position: int,
) -> "PerConnectionStateDB":
# The `previous_connection_position` is a user-supplied value, so we
# need to make sure that the one they supplied is actually theirs.
sql = """
SELECT connection_key
FROM sliding_sync_connection_positions
INNER JOIN sliding_sync_connections USING (connection_key)
WHERE
connection_position = ?
AND user_id = ? AND device_id = ? AND conn_id = ?
"""
txn.execute(sql, (connection_position, user_id, device_id, conn_id))
row = txn.fetchone()
if row is None:
raise SlidingSyncUnknownPosition()
(connection_key,) = row
# Now that we have seen the client has received and used the connection
# position, we can delete all the other connection positions.
sql = """
DELETE FROM sliding_sync_connection_positions
WHERE connection_key = ? AND connection_position != ?
"""
txn.execute(sql, (connection_key, connection_position))
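# Note the knock-on effect: if the client later retries with one of the
# positions we just deleted, the ownership check above will find no row and
# raise `SlidingSyncUnknownPosition`, forcing the client to start a fresh
# connection.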
# Fetch and create a mapping from required state ID to the actual
# required state for the connection.
rows = self.db_pool.simple_select_list_txn(
txn,
table="sliding_sync_connection_required_state",
keyvalues={"connection_key": connection_key},
retcols=(
"required_state_id",
"required_state",
),
)
required_state_map: Dict[int, Dict[str, Set[str]]] = {}
for row in rows:
state = required_state_map[row[0]] = {}
for event_type, state_keys in db_to_json(row[1]):
state[event_type] = set(state_keys)
# Get all the room configs, looking up the required state from the map
# above.
room_config_rows = self.db_pool.simple_select_list_txn(
txn,
table="sliding_sync_connection_room_configs",
keyvalues={"connection_position": connection_position},
retcols=(
"room_id",
"timeline_limit",
"required_state_id",
),
)
room_configs: Dict[str, RoomSyncConfig] = {}
for (
room_id,
timeline_limit,
required_state_id,
) in room_config_rows:
room_configs[room_id] = RoomSyncConfig(
timeline_limit=timeline_limit,
required_state_map=required_state_map[required_state_id],
)
# Now look up the per-room stream data.
rooms: Dict[str, HaveSentRoom[str]] = {}
receipts: Dict[str, HaveSentRoom[str]] = {}
receipt_rows = self.db_pool.simple_select_list_txn(
txn,
table="sliding_sync_connection_streams",
keyvalues={"connection_position": connection_position},
retcols=(
"stream",
"room_id",
"room_status",
"last_position",
),
)
for stream, room_id, room_status, last_position in receipt_rows:
have_sent_room: HaveSentRoom[str] = HaveSentRoom(
status=HaveSentRoomFlag(room_status), last_token=last_position
)
if stream == "rooms":
rooms[room_id] = have_sent_room
elif stream == "receipts":
receipts[room_id] = have_sent_room
# Fetch any stored lists for the connection
rows = self.db_pool.simple_select_list_txn(
txn,
table="sliding_sync_connection_room_lists",
keyvalues={
"connection_key": connection_key,
},
retcols=("list_name", "room_id"),
)
list_to_rooms: Dict[str, Set[str]] = {}
for list_name, room_id in rows:
list_to_rooms.setdefault(list_name, set()).add(room_id)
return PerConnectionStateDB(
rooms=RoomStatusMap(rooms),
receipts=RoomStatusMap(receipts),
room_configs=room_configs,
list_to_rooms=list_to_rooms,
)
@attr.s(auto_attribs=True, frozen=True)
class PerConnectionStateDB:
"""An equivalent to `PerConnectionState` that holds data in a format stored
in the DB.
The principal difference is that the tokens for the different streams are
serialized to strings.
When persisting, this *only* contains updates to the state.
"""
rooms: "RoomStatusMap[str]"
receipts: "RoomStatusMap[str]"
room_configs: Mapping[str, "RoomSyncConfig"]
list_to_rooms: Mapping[str, AbstractSet[str]]
@staticmethod
async def from_state(
per_connection_state: "MutablePerConnectionState", store: "DataStore"
) -> "PerConnectionStateDB":
"""Convert from a standard `PerConnectionState`"""
rooms = {
room_id: HaveSentRoom(
status=status.status,
last_token=(
await status.last_token.to_string(store)
if status.last_token is not None
else None
),
)
for room_id, status in per_connection_state.rooms.get_updates().items()
}
receipts = {
room_id: HaveSentRoom(
status=status.status,
last_token=(
await status.last_token.to_string(store)
if status.last_token is not None
else None
),
)
for room_id, status in per_connection_state.receipts.get_updates().items()
}
log_kv(
{
"rooms": rooms,
"receipts": receipts,
"room_configs": per_connection_state.room_configs.maps[0],
}
)
return PerConnectionStateDB(
rooms=RoomStatusMap(rooms),
receipts=RoomStatusMap(receipts),
room_configs=per_connection_state.room_configs.maps[0],
list_to_rooms=per_connection_state.list_to_rooms.maps[0],
)
async def to_state(self, store: "DataStore") -> "PerConnectionState":
"""Convert into a standard `PerConnectionState`"""
rooms = {
room_id: HaveSentRoom(
status=status.status,
last_token=(
await RoomStreamToken.parse(store, status.last_token)
if status.last_token is not None
else None
),
)
for room_id, status in self.rooms._statuses.items()
}
receipts = {
room_id: HaveSentRoom(
status=status.status,
last_token=(
await MultiWriterStreamToken.parse(store, status.last_token)
if status.last_token is not None
else None
),
)
for room_id, status in self.receipts._statuses.items()
}
return PerConnectionState(
rooms=RoomStatusMap(rooms),
receipts=RoomStatusMap(receipts),
room_configs=self.room_configs,
list_to_rooms=self.list_to_rooms,
)
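# An illustrative round-trip (assuming a `store` is in scope), mirroring the
# serialization used by `from_state`/`to_state` above:
#
#     token = RoomStreamToken(stream=1234)
#     as_str = await token.to_string(store)  # e.g. "s1234"
#     parsed = await RoomStreamToken.parse(store, as_str)
#     assert parsed == token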

View File

@@ -24,11 +24,13 @@ from typing import List, Optional, Tuple
import attr
from synapse.logging.opentracing import trace
from synapse.storage._base import SQLBaseStore
from synapse.storage.database import LoggingTransaction
from synapse.storage.database import LoggingTransaction, make_in_list_sql_clause
from synapse.storage.databases.main.stream import _filter_results_by_stream
from synapse.types import RoomStreamToken
from synapse.types import RoomStreamToken, StrCollection
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.iterutils import batch_iter
logger = logging.getLogger(__name__)
@@ -159,37 +161,137 @@ class StateDeltasStore(SQLBaseStore):
self._get_max_stream_id_in_current_state_deltas_txn,
)
async def get_current_state_deltas_for_room(
self, room_id: str, from_token: RoomStreamToken, to_token: RoomStreamToken
def get_current_state_deltas_for_room_txn(
self,
txn: LoggingTransaction,
room_id: str,
*,
from_token: Optional[RoomStreamToken],
to_token: Optional[RoomStreamToken],
) -> List[StateDelta]:
"""Get the state deltas between two tokens."""
"""
Get the state deltas between two tokens.
def get_current_state_deltas_for_room_txn(
txn: LoggingTransaction,
) -> List[StateDelta]:
sql = """
(> `from_token` and <= `to_token`)
"""
from_clause = ""
from_args = []
if from_token is not None:
from_clause = "AND ? < stream_id"
from_args = [from_token.stream]
to_clause = ""
to_args = []
if to_token is not None:
to_clause = "AND stream_id <= ?"
to_args = [to_token.get_max_stream_pos()]
sql = f"""
SELECT instance_name, stream_id, type, state_key, event_id, prev_event_id
FROM current_state_delta_stream
WHERE room_id = ? AND ? < stream_id AND stream_id <= ?
WHERE room_id = ? {from_clause} {to_clause}
ORDER BY stream_id ASC
"""
txn.execute(
sql, (room_id, from_token.stream, to_token.get_max_stream_pos())
txn.execute(sql, [room_id] + from_args + to_args)
return [
StateDelta(
stream_id=row[1],
room_id=room_id,
event_type=row[2],
state_key=row[3],
event_id=row[4],
prev_event_id=row[5],
)
for row in txn
if _filter_results_by_stream(from_token, to_token, row[0], row[1])
]
@trace
async def get_current_state_deltas_for_room(
self,
room_id: str,
*,
from_token: Optional[RoomStreamToken],
to_token: Optional[RoomStreamToken],
) -> List[StateDelta]:
"""
Get the state deltas between two tokens.
(> `from_token` and <= `to_token`)
"""
if (
from_token is not None
and not self._curr_state_delta_stream_cache.has_entity_changed(
room_id, from_token.stream
)
):
return []
return await self.db_pool.runInteraction(
"get_current_state_deltas_for_room",
self.get_current_state_deltas_for_room_txn,
room_id,
from_token=from_token,
to_token=to_token,
)
@trace
async def get_current_state_deltas_for_rooms(
self,
room_ids: StrCollection,
from_token: RoomStreamToken,
to_token: RoomStreamToken,
) -> List[StateDelta]:
"""Get the state deltas between two tokens for the set of rooms."""
room_ids = self._curr_state_delta_stream_cache.get_entities_changed(
room_ids, from_token.stream
)
if not room_ids:
return []
def get_current_state_deltas_for_rooms_txn(
txn: LoggingTransaction,
room_ids: StrCollection,
) -> List[StateDelta]:
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", room_ids
)
sql = f"""
SELECT instance_name, stream_id, room_id, type, state_key, event_id, prev_event_id
FROM current_state_delta_stream
WHERE {clause} AND ? < stream_id AND stream_id <= ?
ORDER BY stream_id ASC
"""
args.append(from_token.stream)
args.append(to_token.get_max_stream_pos())
txn.execute(sql, args)
return [
StateDelta(
stream_id=row[1],
room_id=room_id,
event_type=row[2],
state_key=row[3],
event_id=row[4],
prev_event_id=row[5],
room_id=row[2],
event_type=row[3],
state_key=row[4],
event_id=row[5],
prev_event_id=row[6],
)
for row in txn
if _filter_results_by_stream(from_token, to_token, row[0], row[1])
]
return await self.db_pool.runInteraction(
"get_current_state_deltas_for_room", get_current_state_deltas_for_room_txn
)
results = []
for batch in batch_iter(room_ids, 1000):
deltas = await self.db_pool.runInteraction(
"get_current_state_deltas_for_rooms",
get_current_state_deltas_for_rooms_txn,
batch,
)
results.extend(deltas)
return results
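# An illustrative call-site (hypothetical tokens), showing the intended
# batched usage rather than looping over the single-room variant:
#
#     deltas = await store.get_current_state_deltas_for_rooms(
#         room_ids={"!a:example.com", "!b:example.com"},
#         from_token=since_token.room_key,
#         to_token=now_token.room_key,
#     )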

View File

@@ -50,7 +50,9 @@ from typing import (
Dict,
Iterable,
List,
Mapping,
Optional,
Protocol,
Set,
Tuple,
cast,
@@ -59,7 +61,7 @@ from typing import (
import attr
from immutabledict import immutabledict
from typing_extensions import Literal
from typing_extensions import Literal, assert_never
from twisted.internet import defer
@@ -67,7 +69,7 @@ from synapse.api.constants import Direction, EventTypes, Membership
from synapse.api.filtering import Filter
from synapse.events import EventBase
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import trace
from synapse.logging.opentracing import tag_args, trace
from synapse.storage._base import SQLBaseStore
from synapse.storage.database import (
DatabasePool,
@@ -79,7 +81,7 @@ from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import PersistedEventPosition, RoomStreamToken, StrCollection
from synapse.util.caches.descriptors import cached
from synapse.util.caches.descriptors import cached, cachedList
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.cancellation import cancellable
from synapse.util.iterutils import batch_iter
@@ -97,6 +99,18 @@ _STREAM_TOKEN = "stream"
_TOPOLOGICAL_TOKEN = "topological"
class PaginateFunction(Protocol):
async def __call__(
self,
*,
room_id: str,
from_key: RoomStreamToken,
to_key: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = 0,
) -> Tuple[List[EventBase], RoomStreamToken]: ...
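# Both `paginate_room_events_by_stream_ordering` and
# `paginate_room_events_by_topological_ordering` (below) are intended to
# satisfy this protocol, so call-sites can select a pagination strategy at
# runtime without changing how they invoke it.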
# Used as return values for pagination APIs
@attr.s(slots=True, frozen=True, auto_attribs=True)
class _EventDictReturn:
@@ -280,7 +294,7 @@ def generate_pagination_bounds(
def generate_next_token(
direction: Direction, last_topo_ordering: int, last_stream_ordering: int
direction: Direction, last_topo_ordering: Optional[int], last_stream_ordering: int
) -> RoomStreamToken:
"""
Generate the next room stream token based on the currently returned data.
@@ -447,7 +461,6 @@ def _filter_results_by_stream(
The `instance_name` arg is optional to handle historic rows, and is
interpreted as if it was "master".
"""
if instance_name is None:
instance_name = "master"
@@ -660,33 +673,43 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
async def get_room_events_stream_for_rooms(
self,
*,
room_ids: Collection[str],
from_key: RoomStreamToken,
to_key: RoomStreamToken,
to_key: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = 0,
order: str = "DESC",
) -> Dict[str, Tuple[List[EventBase], RoomStreamToken]]:
"""Get new room events in stream ordering since `from_key`.
Args:
room_ids
from_key: Token from which no events are returned before
to_key: Token from which no events are returned after. (This
is typically the current stream token)
from_key: The token to stream from (starting point and heading in the given
direction)
to_key: The token representing the end stream position (end point)
limit: Maximum number of events to return
order: Either "DESC" or "ASC". Determines which events are
returned when the result is limited. If "DESC" then the most
recent `limit` events are returned, otherwise returns the
oldest `limit` events.
direction: Indicates whether we are paginating forwards or backwards
from `from_key`.
Returns:
A map from room id to a tuple containing:
- list of recent events in the room
- stream ordering key for the start of the chunk of events returned.
When Direction.FORWARDS: from_key < x <= to_key, (ascending order)
When Direction.BACKWARDS: from_key >= x > to_key, (descending order)
"""
room_ids = self._events_stream_cache.get_entities_changed(
room_ids, from_key.stream
)
if direction == Direction.FORWARDS:
room_ids = self._events_stream_cache.get_entities_changed(
room_ids, from_key.stream
)
elif direction == Direction.BACKWARDS:
if to_key is not None:
room_ids = self._events_stream_cache.get_entities_changed(
room_ids, to_key.stream
)
else:
assert_never(direction)
if not room_ids:
return {}
@@ -698,12 +721,12 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
defer.gatherResults(
[
run_in_background(
self.get_room_events_stream_for_room,
room_id,
from_key,
to_key,
limit,
order=order,
self.paginate_room_events_by_stream_ordering,
room_id=room_id,
from_key=from_key,
to_key=to_key,
direction=direction,
limit=limit,
)
for room_id in rm_ids
],
@@ -727,69 +750,122 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
if self._events_stream_cache.has_entity_changed(room_id, from_id)
}
async def get_room_events_stream_for_room(
async def paginate_room_events_by_stream_ordering(
self,
*,
room_id: str,
from_key: RoomStreamToken,
to_key: RoomStreamToken,
to_key: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = 0,
order: str = "DESC",
) -> Tuple[List[EventBase], RoomStreamToken]:
"""Get new room events in stream ordering since `from_key`.
"""
Paginate events by `stream_ordering` in the room from the `from_key` in the
given `direction` to the `to_key` or `limit`.
Args:
room_id
from_key: Token from which no events are returned before
to_key: Token from which no events are returned after. (This
is typically the current stream token)
from_key: The token to stream from (starting point and heading in the given
direction)
to_key: The token representing the end stream position (end point)
direction: Indicates whether we are paginating forwards or backwards
from `from_key`.
limit: Maximum number of events to return
order: Either "DESC" or "ASC". Determines which events are
returned when the result is limited. If "DESC" then the most
recent `limit` events are returned, otherwise returns the
oldest `limit` events.
Returns:
The list of events (in ascending stream order) and the token from the start
of the chunk of events returned.
"""
if from_key == to_key:
return [], from_key
has_changed = self._events_stream_cache.has_entity_changed(
room_id, from_key.stream
)
The results as a list of events and a token that points to the end
of the result set. If no events are returned then the end of the
stream has been reached (i.e. there are no events between `from_key`
and `to_key`).
When Direction.FORWARDS: from_key < x <= to_key, (ascending order)
When Direction.BACKWARDS: from_key >= x > to_key, (descending order)
"""
# FIXME: When going forwards, we should enforce that the `to_key` is not `None`
# because we always need an upper bound when querying the events stream (as
# otherwise we'll potentially pick up events that are not fully persisted).
# We should only be working with `stream_ordering` tokens here
assert from_key is None or from_key.topological is None
assert to_key is None or to_key.topological is None
# We can bail early if we're looking forwards, and our `to_key` is already
# before our `from_key`.
if (
direction == Direction.FORWARDS
and to_key is not None
and to_key.is_before_or_eq(from_key)
):
# Token selection matches what we do below if there are no rows
return [], to_key if to_key else from_key
# Or vice-versa, if we're looking backwards and our `from_key` is already before
# our `to_key`.
elif (
direction == Direction.BACKWARDS
and to_key is not None
and from_key.is_before_or_eq(to_key)
):
# Token selection matches what we do below if there are no rows
return [], to_key if to_key else from_key
# We can do a quick sanity check to see if any events have been sent in the room
# since the earlier token.
has_changed = True
if direction == Direction.FORWARDS:
has_changed = self._events_stream_cache.has_entity_changed(
room_id, from_key.stream
)
elif direction == Direction.BACKWARDS:
if to_key is not None:
has_changed = self._events_stream_cache.has_entity_changed(
room_id, to_key.stream
)
else:
assert_never(direction)
if not has_changed:
return [], from_key
# Token selection matches what we do below if there are no rows
return [], to_key if to_key else from_key
order, from_bound, to_bound = generate_pagination_bounds(
direction, from_key, to_key
)
bounds = generate_pagination_where_clause(
direction=direction,
# The empty string will shortcut downstream code to only use the
# `stream_ordering` column
column_names=("", "stream_ordering"),
from_token=from_bound,
to_token=to_bound,
engine=self.database_engine,
)
def f(txn: LoggingTransaction) -> List[_EventDictReturn]:
# To handle tokens with a non-empty instance_map we fetch more
# results than necessary and then filter down
min_from_id = from_key.stream
max_to_id = to_key.get_max_stream_pos()
sql = """
SELECT event_id, instance_name, topological_ordering, stream_ordering
sql = f"""
SELECT event_id, instance_name, stream_ordering
FROM events
WHERE
room_id = ?
AND not outlier
AND stream_ordering > ? AND stream_ordering <= ?
ORDER BY stream_ordering %s LIMIT ?
""" % (
order,
)
txn.execute(sql, (room_id, min_from_id, max_to_id, 2 * limit))
AND {bounds}
ORDER BY stream_ordering {order} LIMIT ?
"""
txn.execute(sql, (room_id, 2 * limit))
rows = [
_EventDictReturn(event_id, None, stream_ordering)
for event_id, instance_name, topological_ordering, stream_ordering in txn
if _filter_results(
from_key,
to_key,
instance_name,
topological_ordering,
stream_ordering,
for event_id, instance_name, stream_ordering in txn
if _filter_results_by_stream(
lower_token=(
to_key if direction == Direction.BACKWARDS else from_key
),
upper_token=(
from_key if direction == Direction.BACKWARDS else to_key
),
instance_name=instance_name,
stream_ordering=stream_ordering,
)
][:limit]
return rows
@@ -800,18 +876,20 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
[r.event_id for r in rows], get_prev_content=True
)
if order.lower() == "desc":
ret.reverse()
if rows:
key = RoomStreamToken(stream=min(r.stream_ordering for r in rows))
next_key = generate_next_token(
direction=direction,
last_topo_ordering=None,
last_stream_ordering=rows[-1].stream_ordering,
)
else:
# Assume we didn't get anything because there was nothing to
# get.
key = from_key
# TODO (erikj): We should work out what to do here instead. (same as
# `_paginate_room_events_by_topological_ordering_txn(...)`)
next_key = to_key if to_key else from_key
return ret, key
return ret, next_key
@trace
async def get_current_state_delta_membership_changes_for_user(
self,
user_id: str,
@@ -1117,7 +1195,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
rows, token = await self.db_pool.runInteraction(
"get_recent_event_ids_for_room",
self._paginate_room_events_txn,
self._paginate_room_events_by_topological_ordering_txn,
room_id,
from_token=end_token,
limit=limit,
@@ -1186,11 +1264,76 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
return None
async def get_last_event_pos_in_room(
self,
room_id: str,
event_types: Optional[StrCollection] = None,
) -> Optional[Tuple[str, PersistedEventPosition]]:
"""
Returns the ID and event position of the last event in a room.
Based on `get_last_event_pos_in_room_before_stream_ordering(...)`
Args:
room_id
event_types: Optional allowlist of event types to filter by
Returns:
The ID of the most recent event and its position, or None if there are no
events in the room that match the given event types.
"""
def _get_last_event_pos_in_room_txn(
txn: LoggingTransaction,
) -> Optional[Tuple[str, PersistedEventPosition]]:
event_type_clause = ""
event_type_args: List[str] = []
if event_types is not None and len(event_types) > 0:
event_type_clause, event_type_args = make_in_list_sql_clause(
txn.database_engine, "type", event_types
)
event_type_clause = f"AND {event_type_clause}"
sql = f"""
SELECT event_id, stream_ordering, instance_name
FROM events
LEFT JOIN rejections USING (event_id)
WHERE room_id = ?
{event_type_clause}
AND NOT outlier
AND rejections.event_id IS NULL
ORDER BY stream_ordering DESC
LIMIT 1
"""
txn.execute(
sql,
[room_id] + event_type_args,
)
row = cast(Optional[Tuple[str, int, str]], txn.fetchone())
if row is not None:
event_id, stream_ordering, instance_name = row
return event_id, PersistedEventPosition(
# If instance_name is null we default to "master"
instance_name or "master",
stream_ordering,
)
return None
return await self.db_pool.runInteraction(
"get_last_event_pos_in_room",
_get_last_event_pos_in_room_txn,
)
@trace
async def get_last_event_pos_in_room_before_stream_ordering(
self,
room_id: str,
end_token: RoomStreamToken,
event_types: Optional[Collection[str]] = None,
event_types: Optional[StrCollection] = None,
) -> Optional[Tuple[str, PersistedEventPosition]]:
"""
Returns the ID and event position of the last event in a room at or before a
@@ -1304,7 +1447,6 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
"""
min_token = end_token.stream
max_token = end_token.get_max_stream_pos()
results: Dict[str, int] = {}
# First, we check for the rooms in the stream change cache to see if we
@@ -1317,26 +1459,76 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
else:
missing_room_ids.add(room_id)
if not missing_room_ids:
return results
# Next, we query the stream position from the DB. At first we fetch all
# positions less than the *max* stream pos in the token, then filter
# them down. We do this as a) this is a cheaper query, and b) the vast
# majority of rooms will have a latest token from before the min stream
# pos.
def bulk_get_last_event_pos_txn(
txn: LoggingTransaction, batch_room_ids: StrCollection
uncapped_results = await self._bulk_get_max_event_pos(missing_room_ids)
# Check that the stream position for the rooms are from before the
# minimum position of the token. If not then we need to fetch more
# rows.
recheck_rooms: Set[str] = set()
for room_id, stream in uncapped_results.items():
if stream <= min_token:
results[room_id] = stream
else:
recheck_rooms.add(room_id)
if not recheck_rooms:
return results
for room_id in recheck_rooms:
result = await self.get_last_event_pos_in_room_before_stream_ordering(
room_id, end_token
)
if result is not None:
results[room_id] = result[1].stream
return results
@cached()
async def _get_max_event_pos(self, room_id: str) -> int:
raise NotImplementedError()
@cachedList(cached_method_name="_get_max_event_pos", list_name="room_ids")
async def _bulk_get_max_event_pos(
self, room_ids: StrCollection
) -> Mapping[str, int]:
"""Fetch the max position of a persisted event in the room."""
now_token = self.get_room_max_token()
max_pos = now_token.get_max_stream_pos()
results: Dict[str, int] = {}
missing_room_ids: Set[str] = set()
for room_id in room_ids:
stream_pos = self._events_stream_cache.get_max_pos_of_last_change(room_id)
if stream_pos is not None:
results[room_id] = stream_pos
else:
missing_room_ids.add(room_id)
if not missing_room_ids:
return results
def bulk_get_max_event_pos_txn(
txn: LoggingTransaction, batched_room_ids: StrCollection
) -> Dict[str, int]:
# This query fetches the latest stream position in the rooms before
# the given max position.
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", batch_room_ids
self.database_engine, "room_id", batched_room_ids
)
sql = f"""
SELECT room_id, (
SELECT stream_ordering FROM events AS e
LEFT JOIN rejections USING (event_id)
WHERE e.room_id = r.room_id
AND stream_ordering <= ?
AND e.stream_ordering <= ?
AND NOT outlier
AND rejection_reason IS NULL
ORDER BY stream_ordering DESC
@@ -1345,72 +1537,26 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
FROM rooms AS r
WHERE {clause}
"""
txn.execute(sql, [max_token] + args)
txn.execute(sql, [max_pos] + args)
return {row[0]: row[1] for row in txn}
recheck_rooms: Set[str] = set()
for batched in batch_iter(missing_room_ids, 1000):
result = await self.db_pool.runInteraction(
"bulk_get_last_event_pos_in_room_before_stream_ordering",
bulk_get_last_event_pos_txn,
batched,
for batched in batch_iter(missing_room_ids, 1000):
batch_results = await self.db_pool.runInteraction(
"_bulk_get_max_event_pos", bulk_get_max_event_pos_txn, batched
)
# Check that the stream position for the rooms are from before the
# minimum position of the token. If not then we need to fetch more
# rows.
for room_id, stream in result.items():
if stream <= min_token:
results[room_id] = stream
for room_id, stream_ordering in batch_results.items():
if stream_ordering <= now_token.stream:
results[room_id] = stream_ordering
else:
recheck_rooms.add(room_id)
if not recheck_rooms:
return results
# For the remaining rooms we need to fetch all rows between the min and
# max stream positions in the end token, and filter out the rows that
# are after the end token.
#
# This query should be fast as the range between the min and max should
# be small.
def bulk_get_last_event_pos_recheck_txn(
txn: LoggingTransaction, batch_room_ids: StrCollection
) -> Dict[str, int]:
clause, args = make_in_list_sql_clause(
self.database_engine, "room_id", batch_room_ids
for room_id in recheck_rooms:
result = await self.get_last_event_pos_in_room_before_stream_ordering(
room_id, now_token
)
sql = f"""
SELECT room_id, instance_name, stream_ordering
FROM events
WHERE ? < stream_ordering AND stream_ordering <= ?
AND NOT outlier
AND rejection_reason IS NULL
AND {clause}
ORDER BY stream_ordering ASC
"""
txn.execute(sql, [min_token, max_token] + args)
# We take the max stream ordering that is less than the token. Since
# we ordered by stream ordering we just need to iterate through and
# take the last matching stream ordering.
txn_results: Dict[str, int] = {}
for row in txn:
room_id = row[0]
event_pos = PersistedEventPosition(row[1], row[2])
if not event_pos.persisted_after(end_token):
txn_results[room_id] = event_pos.stream
return txn_results
for batched in batch_iter(recheck_rooms, 1000):
recheck_result = await self.db_pool.runInteraction(
"bulk_get_last_event_pos_in_room_before_stream_ordering_recheck",
bulk_get_last_event_pos_recheck_txn,
batched,
)
results.update(recheck_result)
if result is not None:
results[room_id] = result[1].stream
return results
@@ -1622,7 +1768,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
topological=topological_ordering, stream=stream_ordering
)
rows, start_token = self._paginate_room_events_txn(
rows, start_token = self._paginate_room_events_by_topological_ordering_txn(
txn,
room_id,
before_token,
@@ -1632,7 +1778,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
)
events_before = [r.event_id for r in rows]
rows, end_token = self._paginate_room_events_txn(
rows, end_token = self._paginate_room_events_by_topological_ordering_txn(
txn,
room_id,
after_token,
@@ -1795,14 +1941,14 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
def has_room_changed_since(self, room_id: str, stream_id: int) -> bool:
return self._events_stream_cache.has_entity_changed(room_id, stream_id)
def _paginate_room_events_txn(
def _paginate_room_events_by_topological_ordering_txn(
self,
txn: LoggingTransaction,
room_id: str,
from_token: RoomStreamToken,
to_token: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = -1,
limit: int = 0,
event_filter: Optional[Filter] = None,
) -> Tuple[List[_EventDictReturn], RoomStreamToken]:
"""Returns list of events before or after a given token.
@@ -1824,6 +1970,24 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
been reached (i.e. there are no events between `from_token` and
`to_token`), or `limit` is zero.
"""
# We can bail early if we're looking forwards, and our `to_key` is already
# before our `from_token`.
if (
direction == Direction.FORWARDS
and to_token is not None
and to_token.is_before_or_eq(from_token)
):
# Token selection matches what we do below if there are no rows
return [], to_token if to_token else from_token
# Or vice-versa, if we're looking backwards and our `from_token` is already before
# our `to_token`.
elif (
direction == Direction.BACKWARDS
and to_token is not None
and from_token.is_before_or_eq(to_token)
):
# Token selection matches what we do below if there are no rows
return [], to_token if to_token else from_token
args: List[Any] = [room_id]
@@ -1908,7 +2072,6 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
"bounds": bounds,
"order": order,
}
txn.execute(sql, args)
# Filter the result set.
@@ -1940,27 +2103,30 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
return rows, next_token
@trace
async def paginate_room_events(
@tag_args
async def paginate_room_events_by_topological_ordering(
self,
*,
room_id: str,
from_key: RoomStreamToken,
to_key: Optional[RoomStreamToken] = None,
direction: Direction = Direction.BACKWARDS,
limit: int = -1,
limit: int = 0,
event_filter: Optional[Filter] = None,
) -> Tuple[List[EventBase], RoomStreamToken]:
"""Returns list of events before or after a given token.
When Direction.FORWARDS: from_key < x <= to_key
When Direction.BACKWARDS: from_key >= x > to_key
"""
Paginate events by `topological_ordering` (tie-break with `stream_ordering`) in
the room from the `from_key` in the given `direction` to the `to_key` or
`limit`.
Args:
room_id
from_key: The token used to stream from
to_key: A token which if given limits the results to only those before
from_key: The token to stream from (starting point and heading in the given
direction)
to_key: The token representing the end stream position (end point)
direction: Indicates whether we are paginating forwards or backwards
from `from_key`.
limit: The maximum number of events to return.
limit: Maximum number of events to return
event_filter: If provided filters the events to those that match the filter.
Returns:
@@ -1968,8 +2134,18 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
of the result set. If no events are returned then the end of the
stream has been reached (i.e. there are no events between `from_key`
and `to_key`).
When Direction.FORWARDS: from_key < x <= to_key, (ascending order)
When Direction.BACKWARDS: from_key >= x > to_key, (descending order)
"""
# FIXME: When going forwards, we should enforce that the `to_key` is not `None`
# because we always need an upper bound when querying the events stream (as
# otherwise we'll potentially pick up events that are not fully persisted).
# We have these checks outside of the transaction function (txn) to save getting
# a DB connection and switching threads if we don't need to.
#
# We can bail early if we're looking forwards, and our `to_key` is already
# before our `from_key`.
if (
@@ -1992,8 +2168,8 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
return [], to_key if to_key else from_key
rows, token = await self.db_pool.runInteraction(
"paginate_room_events",
self._paginate_room_events_txn,
"paginate_room_events_by_topological_ordering",
self._paginate_room_events_by_topological_ordering_txn,
room_id,
from_key,
to_key,
@@ -2105,6 +2281,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
return None
@trace
def get_rooms_that_might_have_updates(
self, room_ids: StrCollection, from_token: RoomStreamToken
) -> StrCollection:

View File

@@ -28,6 +28,11 @@ if TYPE_CHECKING:
from synapse.storage.database import LoggingDatabaseConnection
# A string that will be replaced with the appropriate auto increment directive
# for the database engine, expands to an auto incrementing integer primary key.
AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER = "$%AUTO_INCREMENT_PRIMARY_KEY%$"
class IsolationLevel(IntEnum):
READ_COMMITTED: int = 1
REPEATABLE_READ: int = 2

View File

@@ -25,6 +25,7 @@ from typing import TYPE_CHECKING, Any, Mapping, NoReturn, Optional, Tuple, cast
import psycopg2.extensions
from synapse.storage.engines._base import (
AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER,
BaseDatabaseEngine,
IncorrectDatabaseSetup,
IsolationLevel,
@@ -256,4 +257,10 @@ class PostgresEngine(
executing the script in its own transaction. The script transaction is
left open and it is the responsibility of the caller to commit it.
"""
# Replace auto increment placeholder with the appropriate directive
script = script.replace(
AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER,
"BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY",
)
cursor.execute(f"COMMIT; BEGIN TRANSACTION; {script}")

View File

@@ -25,6 +25,7 @@ import threading
from typing import TYPE_CHECKING, Any, List, Mapping, Optional
from synapse.storage.engines import BaseDatabaseEngine
from synapse.storage.engines._base import AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER
from synapse.storage.types import Cursor
if TYPE_CHECKING:
@@ -168,6 +169,11 @@ class Sqlite3Engine(BaseDatabaseEngine[sqlite3.Connection, sqlite3.Cursor]):
> first. No other implicit transaction control is performed; any transaction
> control must be added to sql_script.
"""
# Replace auto increment placeholder with the appropriate directive
script = script.replace(
AUTO_INCREMENT_PRIMARY_KEYPLACEHOLDER, "INTEGER PRIMARY KEY AUTOINCREMENT"
)
# The implementation of `executescript` can be found at
# https://github.com/python/cpython/blob/3.11/Modules/_sqlite/cursor.c#L1035.
cursor.executescript(f"BEGIN TRANSACTION; {script}")

View File

@@ -19,7 +19,7 @@
#
#
SCHEMA_VERSION = 86 # remember to update the list below when updating
SCHEMA_VERSION = 87 # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema
This should be incremented whenever the codebase changes its requirements on the
@@ -142,6 +142,10 @@ Changes in SCHEMA_VERSION = 85
Changes in SCHEMA_VERSION = 86
- Add a column `authenticated` to the tables `local_media_repository` and `remote_media_cache`
Changes in SCHEMA_VERSION = 87
- Add tables to store Sliding Sync data for quick filtering/sorting
(`sliding_sync_joined_rooms`, `sliding_sync_membership_snapshots`)
"""

View File

@@ -0,0 +1,15 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8602, 'receipts_room_id_event_id_index', '{}');

View File

@@ -0,0 +1,153 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- This table is a list/queue used to keep track of which rooms need to be inserted into
-- `sliding_sync_joined_rooms`. We do this to avoid reading from `current_state_events`
-- during the background update to populate `sliding_sync_joined_rooms`. That
-- approach works, but it takes a lot of work for the database to grab
-- `DISTINCT` room_ids given how many state events there are for each room.
CREATE TABLE IF NOT EXISTS sliding_sync_joined_rooms_to_recalculate(
room_id TEXT NOT NULL REFERENCES rooms(room_id),
PRIMARY KEY (room_id)
);
-- A table for storing room meta data (current state relevant to sliding sync) that the
-- local server is still participating in (someone local is joined to the room).
--
-- We store the joined rooms in a separate table from `sliding_sync_membership_snapshots`
-- because we need up-to-date information for joined rooms and it can be shared across
-- everyone who is joined.
--
-- This table is kept in sync with `current_state_events` which means if the server is
-- no longer participating in a room, the row will be deleted.
CREATE TABLE IF NOT EXISTS sliding_sync_joined_rooms(
room_id TEXT NOT NULL REFERENCES rooms(room_id),
-- The `stream_ordering` of the most-recent/latest event in the room
event_stream_ordering BIGINT NOT NULL REFERENCES events(stream_ordering),
-- The `stream_ordering` of the last event according to the `bump_event_types`
bump_stamp BIGINT,
-- `m.room.create` -> `content.type` (current state)
--
-- Useful for the `spaces`/`not_spaces` filter in the Sliding Sync API
room_type TEXT,
-- `m.room.name` -> `content.name` (current state)
--
-- Useful for the room meta data and `room_name_like` filter in the Sliding Sync API
room_name TEXT,
-- `m.room.encryption` -> `content.algorithm` (current state)
--
-- Useful for the `is_encrypted` filter in the Sliding Sync API
is_encrypted BOOLEAN DEFAULT FALSE NOT NULL,
-- `m.room.tombstone` -> `content.replacement_room` (current state)
--
-- Useful for the `include_old_rooms` functionality in the Sliding Sync API
tombstone_successor_room_id TEXT,
PRIMARY KEY (room_id)
);
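-- A minimal sketch (illustrative only, not part of this migration) of the
-- kind of upsert the Python storage layer performs to keep a row fresh:
--
--   INSERT INTO sliding_sync_joined_rooms
--       (room_id, event_stream_ordering, bump_stamp, room_name, is_encrypted)
--   VALUES ('!room:example.com', 42, 42, 'My Room', FALSE)
--   ON CONFLICT (room_id) DO UPDATE SET
--       event_stream_ordering = EXCLUDED.event_stream_ordering,
--       bump_stamp = EXCLUDED.bump_stamp,
--       room_name = EXCLUDED.room_name,
--       is_encrypted = EXCLUDED.is_encrypted;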
-- So we can purge rooms easily.
--
-- The primary key is already `room_id`
-- So we can sort by `stream_ordering`
CREATE UNIQUE INDEX IF NOT EXISTS sliding_sync_joined_rooms_event_stream_ordering ON sliding_sync_joined_rooms(event_stream_ordering);
-- A table for storing a snapshot of room meta data (historical current state relevant
-- for sliding sync) at the time of a local user's membership. Only has rows for the
-- latest membership event for a given local user in a room which matches
-- `local_current_membership`.
--
-- We store all memberships including joins. This makes it easy to reference this table
-- to find all memberships for a given user and shares the same semantics as
-- `local_current_membership`. And we get to avoid some table maintenance; if we only
-- stored non-joins, we would have to delete the row for the user when the user joins
-- the room. Stripped state doesn't include the `m.room.tombstone` event, so we just
-- assume that the room doesn't have a tombstone.
--
-- For remote invite/knocks where the server is not participating in the room, we will
-- use stripped state events to populate this table. We assume that if any stripped
-- state is given, it will include all possible stripped state events types. For
-- example, if stripped state is given but `m.room.encryption` isn't included, we will
-- assume that the room is not encrypted.
--
-- We don't include `bump_stamp` here because we can just use the `stream_ordering` from
-- the membership event itself as the `bump_stamp`.
CREATE TABLE IF NOT EXISTS sliding_sync_membership_snapshots(
room_id TEXT NOT NULL REFERENCES rooms(room_id),
user_id TEXT NOT NULL,
-- Useful to be able to tell leaves from kicks (where the `user_id` is different from the `sender`)
sender TEXT NOT NULL,
membership_event_id TEXT NOT NULL REFERENCES events(event_id),
membership TEXT NOT NULL,
-- This is an integer just to match `room_memberships` and also means we don't need
-- to do any casting.
forgotten INTEGER DEFAULT 0 NOT NULL,
-- `stream_ordering` of the `membership_event_id`
event_stream_ordering BIGINT NOT NULL REFERENCES events(stream_ordering),
-- For remote invites/knocks that don't include any stripped state, we want to be
-- able to distinguish between a room with `None` as a valid value for some state and
-- a room where the state is completely unknown. Basically, this should be True unless
-- no stripped state was provided for a remote invite/knock (False).
has_known_state BOOLEAN DEFAULT FALSE NOT NULL,
-- `m.room.create` -> `content.type` (according to the current state at the time of
-- the membership).
--
-- Useful for the `spaces`/`not_spaces` filter in the Sliding Sync API
room_type TEXT,
-- `m.room.name` -> `content.name` (according to the current state at the time of
-- the membership).
--
-- Useful for the room meta data and `room_name_like` filter in the Sliding Sync API
room_name TEXT,
-- `m.room.encryption` -> `content.algorithm` (according to the current state at the
-- time of the membership).
--
-- Useful for the `is_encrypted` filter in the Sliding Sync API
is_encrypted BOOLEAN DEFAULT FALSE NOT NULL,
-- `m.room.tombstone` -> `content.replacement_room` (according to the current state at the
-- time of the membership).
--
-- Useful for the `include_old_rooms` functionality in the Sliding Sync API
tombstone_successor_room_id TEXT,
PRIMARY KEY (room_id, user_id)
);
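-- An illustrative read path (not part of this migration): fetching every room
-- a local user has a membership in, mirroring `local_current_membership`
-- semantics:
--
--   SELECT room_id, membership, event_stream_ordering
--   FROM sliding_sync_membership_snapshots
--   WHERE user_id = '@alice:example.com' AND forgotten = 0;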
-- So we can purge rooms easily.
--
-- Since we're using a multi-column index as the primary key (room_id, user_id), the
-- first index column (room_id) is always usable for searching so we don't need to
-- create a separate index for it.
--
-- CREATE INDEX IF NOT EXISTS sliding_sync_membership_snapshots_room_id ON sliding_sync_membership_snapshots(room_id);
-- So we can fetch all rooms for a given user
CREATE INDEX IF NOT EXISTS sliding_sync_membership_snapshots_user_id ON sliding_sync_membership_snapshots(user_id);
-- So we can sort by `stream_ordering`
CREATE UNIQUE INDEX IF NOT EXISTS sliding_sync_membership_snapshots_event_stream_ordering ON sliding_sync_membership_snapshots(event_stream_ordering);
-- Add a series of background updates to populate the new `sliding_sync_joined_rooms` table:
--
-- 1. Add a background update to prefill `sliding_sync_joined_rooms_to_recalculate`.
-- We do a one-shot bulk insert from the `rooms` table to prefill.
-- 2. Add a background update to populate the new `sliding_sync_joined_rooms` table
--
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8701, 'sliding_sync_prefill_joined_rooms_to_recalculate_table_bg_update', '{}');
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
(8701, 'sliding_sync_joined_rooms_bg_update', '{}', 'sliding_sync_prefill_joined_rooms_to_recalculate_table_bg_update');
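-- Note: both updates above share ordering 8701; it is the `depends_on` column,
-- not the ordering, that guarantees the prefill completes before the
-- `sliding_sync_joined_rooms` population update runs.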
-- Add a background updates to populate the new `sliding_sync_membership_snapshots` table
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(8701, 'sliding_sync_membership_snapshots_bg_update', '{}');

View File

@@ -0,0 +1,78 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Table to track active sliding sync connections.
--
-- A new connection will be created for every sliding sync request without a
-- `since` token for a given `conn_id` for a device.
--
-- Once a new connection is created and used we delete all other connections for
-- the `conn_id`.
CREATE TABLE sliding_sync_connections(
connection_key $%AUTO_INCREMENT_PRIMARY_KEY%$,
user_id TEXT NOT NULL,
device_id TEXT NOT NULL,
conn_id TEXT NOT NULL,
created_ts BIGINT NOT NULL
);
CREATE INDEX sliding_sync_connections_idx ON sliding_sync_connections(user_id, device_id, conn_id);
-- We track per-connection state by associating changes to the state with
-- connection positions. This ensures that we correctly track state even if we
-- see retries of requests.
--
-- If the client starts a "new" connection (by not specifying a since token),
-- we'll clear out the other connections (to ensure that we don't end up with
-- lots of connection keys).
CREATE TABLE sliding_sync_connection_positions(
connection_position $%AUTO_INCREMENT_PRIMARY_KEY%$,
connection_key BIGINT NOT NULL REFERENCES sliding_sync_connections(connection_key) ON DELETE CASCADE,
created_ts BIGINT NOT NULL
);
CREATE INDEX sliding_sync_connection_positions_key ON sliding_sync_connection_positions(connection_key);
-- To save space we deduplicate the `required_state` json by assigning IDs to
-- different values.
CREATE TABLE sliding_sync_connection_required_state(
required_state_id $%AUTO_INCREMENT_PRIMARY_KEY%$,
connection_key BIGINT NOT NULL REFERENCES sliding_sync_connections(connection_key) ON DELETE CASCADE,
required_state TEXT NOT NULL -- We store this as a json list of event type / state key tuples.
);
CREATE INDEX sliding_sync_connection_required_state_conn_pos ON sliding_sync_connection_required_state(connection_key);
-- Stores the room configs we have seen for rooms in a connection.
CREATE TABLE sliding_sync_connection_room_configs(
connection_position BIGINT NOT NULL REFERENCES sliding_sync_connection_positions(connection_position) ON DELETE CASCADE,
room_id TEXT NOT NULL,
timeline_limit BIGINT NOT NULL,
required_state_id BIGINT NOT NULL REFERENCES sliding_sync_connection_required_state(required_state_id)
);
CREATE UNIQUE INDEX sliding_sync_connection_room_configs_idx ON sliding_sync_connection_room_configs(connection_position, room_id);
-- Stores what data we have sent for given streams down given connections.
CREATE TABLE sliding_sync_connection_streams(
connection_position BIGINT NOT NULL REFERENCES sliding_sync_connection_positions(connection_position) ON DELETE CASCADE,
stream TEXT NOT NULL, -- e.g. "events" or "receipts"
room_id TEXT NOT NULL,
room_status TEXT NOT NULL, -- "live" or "previously", i.e. the `HaveSentRoomFlag` value
last_position TEXT -- For "previously", the token for the stream we have sent up to.
);
CREATE UNIQUE INDEX sliding_sync_connection_streams_idx ON sliding_sync_connection_streams(connection_position, room_id, stream);
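-- An illustrative read path (not part of this migration), matching the Python
-- storage layer: fetch everything recorded for one connection position:
--
--   SELECT stream, room_id, room_status, last_position
--   FROM sliding_sync_connection_streams
--   WHERE connection_position = ?;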

View File

@@ -0,0 +1,22 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2024 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Stores the room lists for a connection
CREATE TABLE sliding_sync_connection_room_lists(
connection_key BIGINT NOT NULL REFERENCES sliding_sync_connections(connection_key) ON DELETE CASCADE,
list_name TEXT NOT NULL,
room_id TEXT NOT NULL
);
CREATE INDEX sliding_sync_connection_room_lists_idx ON sliding_sync_connection_room_lists(connection_key);

View File

@@ -17,33 +17,9 @@
# [This file includes modifications made by New Vector Limited]
#
#
from enum import Enum
from typing import TYPE_CHECKING, Dict, Final, List, Mapping, Optional, Sequence, Tuple
import attr
from typing_extensions import TypedDict
from synapse._pydantic_compat import HAS_PYDANTIC_V2
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import Extra
else:
from pydantic import Extra
from synapse.events import EventBase
from synapse.types import (
DeviceListUpdates,
JsonDict,
JsonMapping,
Requester,
SlidingSyncStreamToken,
StreamToken,
UserID,
)
from synapse.types.rest.client import SlidingSyncBody
if TYPE_CHECKING:
from synapse.handlers.relations import BundledAggregations
from typing import List, Optional, TypedDict
class ShutdownRoomParams(TypedDict):
@@ -101,331 +77,3 @@ class ShutdownRoomResponse(TypedDict):
failed_to_kick_users: List[str]
local_aliases: List[str]
new_room_id: Optional[str]
class SlidingSyncConfig(SlidingSyncBody):
"""
Inherit from `SlidingSyncBody` since we need all of the same fields and add a few
extra fields that we need in the handler
"""
user: UserID
requester: Requester
# Pydantic config
class Config:
# By default, ignore fields that we don't recognise.
extra = Extra.ignore
# By default, don't allow fields to be reassigned after parsing.
allow_mutation = False
# Allow custom types like `UserID` to be used in the model
arbitrary_types_allowed = True
class OperationType(Enum):
"""
Represents the operation types in a Sliding Sync window.
Attributes:
SYNC: Sets a range of entries. Clients SHOULD discard what they previously knew about
entries in this range.
INSERT: Sets a single entry. If the position is not empty then clients MUST move
entries to the left or the right depending on where the closest empty space is.
DELETE: Remove a single entry. Often comes before an INSERT to allow entries to move
places.
INVALIDATE: Remove a range of entries. Clients MAY persist the invalidated range for
offline support, but they should be treated as empty when additional operations
which concern indexes in the range arrive from the server.
"""
SYNC: Final = "SYNC"
INSERT: Final = "INSERT"
DELETE: Final = "DELETE"
INVALIDATE: Final = "INVALIDATE"
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SlidingSyncResult:
"""
The Sliding Sync result to be serialized to JSON for a response.
Attributes:
next_pos: The next position token in the sliding window to request (next_batch).
lists: Sliding window API. A map of list key to list results.
rooms: Room subscription API. A map of room ID to room results.
extensions: Extensions API. A map of extension key to extension results.
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomResult:
"""
Attributes:
name: Room name or calculated room name.
avatar: Room avatar
heroes: List of stripped membership events (containing `user_id` and optionally
`avatar_url` and `displayname`) for the users used to calculate the room name.
is_dm: Flag to specify whether the room is a direct-message room (most likely
between two people).
initial: Flag which is set when this is the first time the server is sending this
data on this connection. Clients can use this flag to replace or update
their local state. When there is an update, servers MUST omit this flag
entirely and NOT send "initial":false as this is wasteful on bandwidth. The
absence of this flag means 'false'.
required_state: The current state of the room
timeline: Latest events in the room. The last event is the most recent.
bundled_aggregations: A mapping of event ID to the bundled aggregations for
the timeline events above. This allows clients to show accurate reaction
counts (or edits, threads), even if some of the reaction events were skipped
over in a gappy sync.
stripped_state: Stripped state events (for rooms where the user is
invited/knocked). Same as `rooms.invite.$room_id.invite_state` in sync v2,
absent on joined/left rooms
prev_batch: A token that can be passed as a start parameter to the
`/rooms/<room_id>/messages` API to retrieve earlier messages.
limited: True if there are more events than `timeline_limit` looking
backwards from the `response.pos` to the `request.pos`.
num_live: The number of timeline events which have just occurred and are not historical.
The last N events are 'live' and should be treated as such. This is mostly
useful to determine whether a given @mention event should make a noise or not.
Clients cannot rely solely on the absence of `initial: true` to determine live
events because if a room not in the sliding window bumps into the window because
of an @mention it will have `initial: true` yet contain a single live event
(with potentially other old events in the timeline).
bump_stamp: The `stream_ordering` of the last event according to the
`bump_event_types`. This helps clients sort more readily without them
needing to pull in a bunch of the timeline to determine the last activity.
`bump_event_types` is a thing because for example, we don't want display
name changes to mark the room as unread and bump it to the top. For
encrypted rooms, we just have to consider any activity as a bump because we
can't see the content and the client has to figure it out for itself.
joined_count: The number of users with membership of join, including the client's
own user ID. (same as sync `v2 m.joined_member_count`)
invited_count: The number of users with membership of invite. (same as sync v2
`m.invited_member_count`)
notification_count: The total number of unread notifications for this room. (same
as sync v2)
highlight_count: The number of unread notifications for this room with the highlight
flag set. (same as sync v2)
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class StrippedHero:
user_id: str
display_name: Optional[str]
avatar_url: Optional[str]
name: Optional[str]
avatar: Optional[str]
heroes: Optional[List[StrippedHero]]
is_dm: bool
initial: bool
# Should be empty for invite/knock rooms with `stripped_state`
required_state: List[EventBase]
# Should be empty for invite/knock rooms with `stripped_state`
timeline_events: List[EventBase]
bundled_aggregations: Optional[Dict[str, "BundledAggregations"]]
# Optional because it's only relevant to invite/knock rooms
stripped_state: List[JsonDict]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
prev_batch: Optional[StreamToken]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
limited: Optional[bool]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
num_live: Optional[int]
bump_stamp: int
joined_count: int
invited_count: int
notification_count: int
highlight_count: int
def __bool__(self) -> bool:
return (
# If this is the first time the client is seeing the room, we should not filter it out
# under any circumstance.
self.initial
# We need to let the client know if there are any new events
or bool(self.required_state)
or bool(self.timeline_events)
or bool(self.stripped_state)
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SlidingWindowList:
"""
Attributes:
count: The total number of entries in the list. Always present if this
list is present.
ops: The sliding list operations to perform.
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class Operation:
"""
Attributes:
op: The operation type to perform.
range: Which index positions are affected by this operation. These are
both inclusive.
room_ids: Which room IDs are affected by this operation. These IDs match
up to the positions in the `range`; e.g. for a range of `[0, 9]`, the
last room ID in this list corresponds to index 9. The room data is held
in a separate object.
"""
op: OperationType
range: Tuple[int, int]
room_ids: List[str]
count: int
ops: List[Operation]
@attr.s(slots=True, frozen=True, auto_attribs=True)
class Extensions:
"""Responses for extensions
Attributes:
to_device: The to-device extension (MSC3885)
e2ee: The E2EE device extension (MSC3884)
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ToDeviceExtension:
"""The to-device extension (MSC3885)
Attributes:
next_batch: The to-device stream token the client should use
to get more results
events: A list of to-device messages for the client
"""
next_batch: str
events: Sequence[JsonMapping]
def __bool__(self) -> bool:
return bool(self.events)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class E2eeExtension:
"""The E2EE device extension (MSC3884)
Attributes:
device_list_updates: List of user_ids whose devices have changed or left (only
present on incremental syncs).
device_one_time_keys_count: Map from key algorithm to the number of
unclaimed one-time keys currently held on the server for this device. If
an algorithm is unlisted, the count for that algorithm is assumed to be
zero. If this entire parameter is missing, the count for all algorithms
is assumed to be zero.
device_unused_fallback_key_types: List of unused fallback key algorithms
for this device.
"""
# Only present on incremental syncs
device_list_updates: Optional[DeviceListUpdates]
device_one_time_keys_count: Mapping[str, int]
device_unused_fallback_key_types: Sequence[str]
def __bool__(self) -> bool:
# Note that "signed_curve25519" is always returned in key count responses
# regardless of whether we uploaded any keys for it. This is necessary until
# https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
#
# Also related:
# https://github.com/element-hq/element-android/issues/3725 and
# https://github.com/matrix-org/synapse/issues/10456
default_otk = self.device_one_time_keys_count.get("signed_curve25519")
more_than_default_otk = len(self.device_one_time_keys_count) > 1 or (
default_otk is not None and default_otk > 0
)
return bool(
more_than_default_otk
or self.device_list_updates
or self.device_unused_fallback_key_types
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class AccountDataExtension:
"""The Account Data extension (MSC3959)
Attributes:
global_account_data_map: Mapping from `type` to `content` of global account
data events.
account_data_by_room_map: Mapping from room_id to mapping of `type` to
`content` of room account data events.
"""
global_account_data_map: Mapping[str, JsonMapping]
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]]
def __bool__(self) -> bool:
return bool(
self.global_account_data_map or self.account_data_by_room_map
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ReceiptsExtension:
"""The Receipts extension (MSC3960)
Attributes:
room_id_to_receipt_map: Mapping from room_id to `m.receipt` ephemeral
event (type, content)
"""
room_id_to_receipt_map: Mapping[str, JsonMapping]
def __bool__(self) -> bool:
return bool(self.room_id_to_receipt_map)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class TypingExtension:
"""The Typing Notification extension (MSC3961)
Attributes:
room_id_to_typing_map: Mapping from room_id to `m.typing` ephemeral
event (type, content)
"""
room_id_to_typing_map: Mapping[str, JsonMapping]
def __bool__(self) -> bool:
return bool(self.room_id_to_typing_map)
to_device: Optional[ToDeviceExtension] = None
e2ee: Optional[E2eeExtension] = None
account_data: Optional[AccountDataExtension] = None
receipts: Optional[ReceiptsExtension] = None
typing: Optional[TypingExtension] = None
def __bool__(self) -> bool:
return bool(
self.to_device
or self.e2ee
or self.account_data
or self.receipts
or self.typing
)
next_pos: SlidingSyncStreamToken
lists: Dict[str, SlidingWindowList]
rooms: Dict[str, RoomResult]
extensions: Extensions
def __bool__(self) -> bool:
"""Make the result appear empty if there are no updates. This is used
to tell if the notifier needs to wait for more events when polling for
events.
"""
# We don't include `self.lists` here, as a) `lists` is always non-empty even if
# there are no changes, and b) since we're sorting rooms by `stream_ordering` of
# the latest activity, anything that would cause the order to change would end
# up in `self.rooms` and cause us to send down the change.
return bool(self.rooms or self.extensions)
@staticmethod
def empty(next_pos: SlidingSyncStreamToken) -> "SlidingSyncResult":
"Return a new empty result"
return SlidingSyncResult(
next_pos=next_pos,
lists={},
rooms={},
extensions=SlidingSyncResult.Extensions(),
)
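# A hedged sketch (hypothetical helper, not actual Synapse wiring) of how the
# emptiness check above is meant to be used when polling the notifier:
def _should_keep_waiting(next_pos: SlidingSyncStreamToken) -> bool:
    result = SlidingSyncResult.empty(next_pos)
    # An empty result is falsy, which tells the poll loop to wait for more
    # events instead of responding immediately.
    return not result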


@@ -0,0 +1,899 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
import typing
from collections import ChainMap
from enum import Enum
from typing import (
TYPE_CHECKING,
AbstractSet,
Callable,
Dict,
Final,
Generic,
List,
Mapping,
MutableMapping,
Optional,
Sequence,
Set,
Tuple,
TypeVar,
cast,
)
import attr
from synapse._pydantic_compat import HAS_PYDANTIC_V2
from synapse.api.constants import EventTypes
from synapse.types import MultiWriterStreamToken, RoomStreamToken, StrCollection, UserID
if TYPE_CHECKING or HAS_PYDANTIC_V2:
from pydantic.v1 import Extra
else:
from pydantic import Extra
from synapse.events import EventBase
from synapse.types import (
DeviceListUpdates,
JsonDict,
JsonMapping,
Requester,
SlidingSyncStreamToken,
StreamToken,
)
from synapse.types.rest.client import SlidingSyncBody
if TYPE_CHECKING:
from synapse.handlers.relations import BundledAggregations
logger = logging.getLogger(__name__)
# Sliding Sync: The event types that clients should consider as new activity and affect
# the `bump_stamp`
SLIDING_SYNC_DEFAULT_BUMP_EVENT_TYPES = {
EventTypes.Create,
EventTypes.Message,
EventTypes.Encrypted,
EventTypes.Sticker,
EventTypes.CallInvite,
EventTypes.PollStart,
EventTypes.LiveLocationShareStart,
}
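# For example (illustrative check, not part of the module's logic): a plain
# `m.room.message` counts as a bump, while a membership change does not.
assert EventTypes.Message in SLIDING_SYNC_DEFAULT_BUMP_EVENT_TYPES
assert EventTypes.Member not in SLIDING_SYNC_DEFAULT_BUMP_EVENT_TYPES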
class SlidingSyncConfig(SlidingSyncBody):
"""
Inherit from `SlidingSyncBody` since we need all of the same fields and add a few
extra fields that we need in the handler
"""
user: UserID
requester: Requester
# Pydantic config
class Config:
# By default, ignore fields that we don't recognise.
extra = Extra.ignore
# By default, don't allow fields to be reassigned after parsing.
allow_mutation = False
# Allow custom types like `UserID` to be used in the model
arbitrary_types_allowed = True
class OperationType(Enum):
"""
Represents the operation types in a Sliding Sync window.
Attributes:
SYNC: Sets a range of entries. Clients SHOULD discard what they previously knew about
entries in this range.
INSERT: Sets a single entry. If the position is not empty then clients MUST move
entries to the left or the right depending on where the closest empty space is.
DELETE: Remove a single entry. Often comes before an INSERT to allow entries to move
places.
INVALIDATE: Remove a range of entries. Clients MAY persist the invalidated range for
offline support, but they should be treated as empty when additional operations
which concern indexes in the range arrive from the server.
"""
SYNC: Final = "SYNC"
INSERT: Final = "INSERT"
DELETE: Final = "DELETE"
INVALIDATE: Final = "INVALIDATE"
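# Illustrative only (made-up room IDs): the wire shape of a SYNC operation,
# per MSC3575, using the enum above:
_EXAMPLE_SYNC_OP = {
    "op": OperationType.SYNC.value,
    "range": [0, 2],
    "room_ids": ["!a:example.org", "!b:example.org", "!c:example.org"],
}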
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SlidingSyncResult:
"""
The Sliding Sync result to be serialized to JSON for a response.
Attributes:
next_pos: The next position token in the sliding window to request (next_batch).
lists: Sliding window API. A map of list key to list results.
rooms: Room subscription API. A map of room ID to room results.
extensions: Extensions API. A map of extension key to extension results.
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomResult:
"""
Attributes:
name: Room name or calculated room name.
avatar: Room avatar
heroes: List of stripped membership events (containing `user_id` and optionally
`avatar_url` and `displayname`) for the users used to calculate the room name.
is_dm: Flag to specify whether the room is a direct-message room (most likely
between two people).
initial: Flag which is set when this is the first time the server is sending this
data on this connection. Clients can use this flag to replace or update
their local state. When there is an update, servers MUST omit this flag
entirely and NOT send "initial":false as this is wasteful on bandwidth. The
absence of this flag means 'false'.
unstable_expanded_timeline: Flag which is set if we're returning more historic
events due to the timeline limit having increased. See the "XXX: Odd behavior"
comment in `synapse.handlers.sliding_sync`.
required_state: The current state of the room
timeline: Latest events in the room. The last event is the most recent.
bundled_aggregations: A mapping of event ID to the bundled aggregations for
the timeline events above. This allows clients to show accurate reaction
counts (or edits, threads), even if some of the reaction events were skipped
over in a gappy sync.
stripped_state: Stripped state events (for rooms where the user is
invited/knocked). Same as `rooms.invite.$room_id.invite_state` in sync v2,
absent on joined/left rooms.
prev_batch: A token that can be passed as a start parameter to the
`/rooms/<room_id>/messages` API to retrieve earlier messages.
limited: True if there are more events than `timeline_limit` looking
backwards from the `response.pos` to the `request.pos`.
num_live: The number of timeline events which have just occurred and are not historical.
The last N events are 'live' and should be treated as such. This is mostly
useful to determine whether a given @mention event should make a noise or not.
Clients cannot rely solely on the absence of `initial: true` to determine live
events because if a room not in the sliding window bumps into the window because
of an @mention it will have `initial: true` yet contain a single live event
(with potentially other old events in the timeline).
bump_stamp: The `stream_ordering` of the last event according to the
`bump_event_types`. This helps clients sort more readily without them
needing to pull in a bunch of the timeline to determine the last activity.
`bump_event_types` exists because, for example, we don't want display
name changes to mark the room as unread and bump it to the top. For
encrypted rooms, we just have to consider any activity as a bump because we
can't see the content and the client has to figure it out for itself.
joined_count: The number of users with membership of join, including the client's
own user ID. (same as sync v2 `m.joined_member_count`)
invited_count: The number of users with membership of invite. (same as sync v2
`m.invited_member_count`)
notification_count: The total number of unread notifications for this room. (same
as sync v2)
highlight_count: The number of unread notifications for this room with the highlight
flag set. (same as sync v2)
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class StrippedHero:
user_id: str
display_name: Optional[str]
avatar_url: Optional[str]
name: Optional[str]
avatar: Optional[str]
heroes: Optional[List[StrippedHero]]
is_dm: bool
initial: bool
unstable_expanded_timeline: bool
# Should be empty for invite/knock rooms with `stripped_state`
required_state: List[EventBase]
# Should be empty for invite/knock rooms with `stripped_state`
timeline_events: List[EventBase]
bundled_aggregations: Optional[Dict[str, "BundledAggregations"]]
# Optional because it's only relevant to invite/knock rooms
stripped_state: List[JsonDict]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
prev_batch: Optional[StreamToken]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
limited: Optional[bool]
# Only optional because it won't be included for invite/knock rooms with `stripped_state`
num_live: Optional[int]
bump_stamp: int
joined_count: int
invited_count: int
notification_count: int
highlight_count: int
def __bool__(self) -> bool:
return (
# If this is the first time the client is seeing the room, we should not filter it out
# under any circumstance.
self.initial
# We need to let the client know if there are any new events
or bool(self.required_state)
or bool(self.timeline_events)
or bool(self.stripped_state)
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SlidingWindowList:
"""
Attributes:
count: The total number of entries in the list. Always present if this
list is present.
ops: The sliding list operations to perform.
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class Operation:
"""
Attributes:
op: The operation type to perform.
range: Which index positions are affected by this operation. These are
both inclusive.
room_ids: Which room IDs are affected by this operation. These IDs match
up to the positions in the `range`; e.g. for a range of `[0, 9]`, the
last room ID in this list corresponds to index 9. The room data is held
in a separate object.
"""
op: OperationType
range: Tuple[int, int]
room_ids: List[str]
count: int
ops: List[Operation]
@attr.s(slots=True, frozen=True, auto_attribs=True)
class Extensions:
"""Responses for extensions
Attributes:
to_device: The to-device extension (MSC3885)
e2ee: The E2EE device extension (MSC3884)
"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ToDeviceExtension:
"""The to-device extension (MSC3885)
Attributes:
next_batch: The to-device stream token the client should use
to get more results
events: A list of to-device messages for the client
"""
next_batch: str
events: Sequence[JsonMapping]
def __bool__(self) -> bool:
return bool(self.events)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class E2eeExtension:
"""The E2EE device extension (MSC3884)
Attributes:
device_list_updates: List of user_ids whose devices have changed or left (only
present on incremental syncs).
device_one_time_keys_count: Map from key algorithm to the number of
unclaimed one-time keys currently held on the server for this device. If
an algorithm is unlisted, the count for that algorithm is assumed to be
zero. If this entire parameter is missing, the count for all algorithms
is assumed to be zero.
device_unused_fallback_key_types: List of unused fallback key algorithms
for this device.
"""
# Only present on incremental syncs
device_list_updates: Optional[DeviceListUpdates]
device_one_time_keys_count: Mapping[str, int]
device_unused_fallback_key_types: Sequence[str]
def __bool__(self) -> bool:
# Note that "signed_curve25519" is always returned in key count responses
# regardless of whether we uploaded any keys for it. This is necessary until
# https://github.com/matrix-org/matrix-doc/issues/3298 is fixed.
#
# Also related:
# https://github.com/element-hq/element-android/issues/3725 and
# https://github.com/matrix-org/synapse/issues/10456
default_otk = self.device_one_time_keys_count.get("signed_curve25519")
more_than_default_otk = len(self.device_one_time_keys_count) > 1 or (
default_otk is not None and default_otk > 0
)
return bool(
more_than_default_otk
or self.device_list_updates
or self.device_unused_fallback_key_types
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class AccountDataExtension:
"""The Account Data extension (MSC3959)
Attributes:
global_account_data_map: Mapping from `type` to `content` of global account
data events.
account_data_by_room_map: Mapping from room_id to mapping of `type` to
`content` of room account data events.
"""
global_account_data_map: Mapping[str, JsonMapping]
account_data_by_room_map: Mapping[str, Mapping[str, JsonMapping]]
def __bool__(self) -> bool:
return bool(
self.global_account_data_map or self.account_data_by_room_map
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ReceiptsExtension:
"""The Receipts extension (MSC3960)
Attributes:
room_id_to_receipt_map: Mapping from room_id to `m.receipt` ephemeral
event (type, content)
"""
room_id_to_receipt_map: Mapping[str, JsonMapping]
def __bool__(self) -> bool:
return bool(self.room_id_to_receipt_map)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class TypingExtension:
"""The Typing Notification extension (MSC3961)
Attributes:
room_id_to_typing_map: Mapping from room_id to `m.typing` ephemeral
event (type, content)
"""
room_id_to_typing_map: Mapping[str, JsonMapping]
def __bool__(self) -> bool:
return bool(self.room_id_to_typing_map)
to_device: Optional[ToDeviceExtension] = None
e2ee: Optional[E2eeExtension] = None
account_data: Optional[AccountDataExtension] = None
receipts: Optional[ReceiptsExtension] = None
typing: Optional[TypingExtension] = None
def __bool__(self) -> bool:
return bool(
self.to_device
or self.e2ee
or self.account_data
or self.receipts
or self.typing
)
next_pos: SlidingSyncStreamToken
lists: Dict[str, SlidingWindowList]
rooms: Dict[str, RoomResult]
extensions: Extensions
def __bool__(self) -> bool:
"""Make the result appear empty if there are no updates. This is used
to tell if the notifier needs to wait for more events when polling for
events.
"""
# We don't include `self.lists` here, as a) `lists` is always non-empty even if
# there are no changes, and b) since we're sorting rooms by `stream_ordering` of
# the latest activity, anything that would cause the order to change would end
# up in `self.rooms` and cause us to send down the change.
return bool(self.rooms or self.extensions)
@staticmethod
def empty(next_pos: SlidingSyncStreamToken) -> "SlidingSyncResult":
"Return a new empty result"
return SlidingSyncResult(
next_pos=next_pos,
lists={},
rooms={},
extensions=SlidingSyncResult.Extensions(),
)
class StateValues:
"""
Understood values of the (type, state_key) tuple in `required_state`.
"""
# Include all state events of the given type
WILDCARD: Final = "*"
# Lazy-load room membership events (include room membership events for any event
# `sender` in the timeline). We only give special meaning to this value when it's a
# `state_key`.
LAZY: Final = "$LAZY"
# Substitute with the requester's user ID. Typically used by clients to get
# the user's membership.
ME: Final = "$ME"
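# For example (hypothetical request values), a client's `required_state` can
# mix concrete values with the special values above:
_EXAMPLE_REQUIRED_STATE = [
    (EventTypes.Create, ""),  # one specific (type, state_key) pair
    (EventTypes.Member, StateValues.LAZY),  # lazy-load timeline senders' membership
    (EventTypes.Member, StateValues.ME),  # the requester's own membership
    (StateValues.WILDCARD, StateValues.WILDCARD),  # all state in the room
]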
# This class is frozen; `combine_room_sync_config` returns a new combined
# config rather than mutating in place.
@attr.s(slots=True, auto_attribs=True, frozen=True)
class RoomSyncConfig:
"""
Holds the config for what data we should fetch for a room in the sync response.
Attributes:
timeline_limit: The maximum number of events to return in the timeline.
required_state_map: Map from state event type to state_keys requested for the
room. The values are close to `StateKey` but actually use a syntax where you
can provide `*` wildcard and `$LAZY` for lazy-loading room members.
"""
timeline_limit: int
required_state_map: Mapping[str, AbstractSet[str]]
@classmethod
def from_room_config(
cls,
room_params: SlidingSyncConfig.CommonRoomParameters,
) -> "RoomSyncConfig":
"""
Create a `RoomSyncConfig` from a `SlidingSyncList`/`RoomSubscription` config.
Args:
room_params: `SlidingSyncConfig.SlidingSyncList` or `SlidingSyncConfig.RoomSubscription`
"""
required_state_map: Dict[str, Set[str]] = {}
for (
state_type,
state_key,
) in room_params.required_state:
# If we already have a wildcard for this specific `state_key`, we don't need
# to add it since the wildcard already covers it.
if state_key in required_state_map.get(StateValues.WILDCARD, set()):
continue
# If we already have a wildcard `state_key` for this `state_type`, we don't need
# to add anything else
if StateValues.WILDCARD in required_state_map.get(state_type, set()):
continue
# If we're getting wildcards for the `state_type` and `state_key`, that's
# all that matters so get rid of any other entries
if state_type == StateValues.WILDCARD and state_key == StateValues.WILDCARD:
required_state_map = {StateValues.WILDCARD: {StateValues.WILDCARD}}
# We can break, since we don't need to add anything else
break
# If we're getting a wildcard for the `state_type`, get rid of any other
# entries with the same `state_key`, since the wildcard will cover it already.
elif state_type == StateValues.WILDCARD:
# Get rid of any entries that match the `state_key`
#
# Make a copy so we don't run into an error: `dictionary changed size
# during iteration`, when we remove items
for (
existing_state_type,
existing_state_key_set,
) in list(required_state_map.items()):
# Make a copy so we don't run into an error: `Set changed size during
# iteration`, when we filter out and remove items
for existing_state_key in existing_state_key_set.copy():
if existing_state_key == state_key:
existing_state_key_set.remove(state_key)
# If we've left the `set()` empty, remove it from the map
if existing_state_key_set == set():
required_state_map.pop(existing_state_type, None)
# If we're getting a wildcard `state_key`, get rid of any other state_keys
# for this `state_type` since the wildcard will cover it already.
if state_key == StateValues.WILDCARD:
required_state_map[state_type] = {state_key}
# Otherwise, just add it to the set
else:
if required_state_map.get(state_type) is None:
required_state_map[state_type] = {state_key}
else:
required_state_map[state_type].add(state_key)
return cls(
timeline_limit=room_params.timeline_limit,
required_state_map=required_state_map,
)
def deep_copy(self) -> "RoomSyncConfig":
required_state_map: Dict[str, Set[str]] = {
state_type: set(state_key_set)
for state_type, state_key_set in self.required_state_map.items()
}
return RoomSyncConfig(
timeline_limit=self.timeline_limit,
required_state_map=required_state_map,
)
def combine_room_sync_config(
self, other_room_sync_config: "RoomSyncConfig"
) -> "RoomSyncConfig":
"""
Combine this `RoomSyncConfig` with another `RoomSyncConfig` and return the
superset union of the two.
"""
timeline_limit = self.timeline_limit
required_state_map = {
event_type: set(state_keys)
for event_type, state_keys in self.required_state_map.items()
}
# Take the highest timeline limit
if self.timeline_limit < other_room_sync_config.timeline_limit:
timeline_limit = other_room_sync_config.timeline_limit
# Union the required state
for (
state_type,
state_key_set,
) in other_room_sync_config.required_state_map.items():
# If we already have a wildcard for everything, we don't need to add
# anything else
if StateValues.WILDCARD in required_state_map.get(
StateValues.WILDCARD, set()
):
break
# If we already have a wildcard `state_key` for this `state_type`, we don't need
# to add anything else
if StateValues.WILDCARD in required_state_map.get(state_type, set()):
continue
# If we're getting wildcards for the `state_type` and `state_key`, that's
# all that matters so get rid of any other entries
if (
state_type == StateValues.WILDCARD
and StateValues.WILDCARD in state_key_set
):
required_state_map = {state_type: {StateValues.WILDCARD}}
# We can break, since we don't need to add anything else
break
for state_key in state_key_set:
# If we already have a wildcard for this specific `state_key`, we don't need
# to add it since the wildcard already covers it.
if state_key in required_state_map.get(StateValues.WILDCARD, set()):
continue
# If we're getting a wildcard for the `state_type`, get rid of any other
# entries with the same `state_key`, since the wildcard will cover it already.
if state_type == StateValues.WILDCARD:
# Get rid of any entries that match the `state_key`
#
# Make a copy so we don't run into an error: `dictionary changed size
# during iteration`, when we remove items
for existing_state_type, existing_state_key_set in list(
required_state_map.items()
):
# Make a copy so we don't run into an error: `Set changed size during
# iteration`, when we filter out and remove items
for existing_state_key in existing_state_key_set.copy():
if existing_state_key == state_key:
existing_state_key_set.remove(state_key)
# If we've left the `set()` empty, remove it from the map
if existing_state_key_set == set():
required_state_map.pop(existing_state_type, None)
# If we're getting a wildcard `state_key`, get rid of any other state_keys
# for this `state_type` since the wildcard will cover it already.
if state_key == StateValues.WILDCARD:
required_state_map[state_type] = {state_key}
break
# Otherwise, just add it to the set
else:
if required_state_map.get(state_type) is None:
required_state_map[state_type] = {state_key}
else:
required_state_map[state_type].add(state_key)
return RoomSyncConfig(timeline_limit, required_state_map)
def must_await_full_state(
self,
is_mine_id: Callable[[str], bool],
) -> bool:
"""
Check whether we're only requesting `required_state` which is completely
satisfied even with partial state; if so, we don't need to `await_full_state`
before we can return it.
Also see `StateFilter.must_await_full_state(...)` for comparison
Partially-stated rooms should have all state events except for remote membership
events so if we require a remote membership event anywhere, then we need to
return `True` (requires full state).
Args:
is_mine_id: a callable which confirms if a given state_key matches a mxid
of a local user
"""
wildcard_state_keys = self.required_state_map.get(StateValues.WILDCARD)
# Requesting *all* state in the room so we have to wait
if (
wildcard_state_keys is not None
and StateValues.WILDCARD in wildcard_state_keys
):
return True
# If the wildcards don't refer to remote user IDs, then we don't need to wait
# for full state.
if wildcard_state_keys is not None:
for possible_user_id in wildcard_state_keys:
if not possible_user_id[0].startswith(UserID.SIGIL):
# Not a user ID
continue
localpart_hostname = possible_user_id.split(":", 1)
if len(localpart_hostname) < 2:
# Not a user ID
continue
if not is_mine_id(possible_user_id):
return True
membership_state_keys = self.required_state_map.get(EventTypes.Member)
# We aren't requesting any membership events at all so the partial state will
# cover us.
if membership_state_keys is None:
return False
# If we're requesting entirely local users, the partial state will cover us.
for user_id in membership_state_keys:
if user_id == StateValues.ME:
continue
# We're lazy-loading membership so we can just return the state we have.
# Lazy-loading means we include membership for any event `sender` in the
# timeline but since we had to auth those timeline events, we will have the
# membership state for them (including from remote senders).
elif user_id == StateValues.LAZY:
continue
elif user_id == StateValues.WILDCARD:
return False
elif not is_mine_id(user_id):
return True
# Local users only so the partial state will cover us.
return False
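# A minimal sketch (hypothetical values) exercising `combine_room_sync_config`
# and `must_await_full_state` above:
_list_config = RoomSyncConfig(
    timeline_limit=10,
    required_state_map={EventTypes.Create: {""}},
)
_subscription_config = RoomSyncConfig(
    timeline_limit=50,
    required_state_map={EventTypes.Member: {StateValues.LAZY}},
)
_combined = _list_config.combine_room_sync_config(_subscription_config)
# The union keeps the higher timeline limit and merges the state maps.
assert _combined.timeline_limit == 50

def _is_mine_id(user_id: str) -> bool:
    # Assume ":test" is the local server name for this sketch.
    return user_id.endswith(":test")

# Only lazy-loaded membership is requested, so partial state suffices:
assert _combined.must_await_full_state(_is_mine_id) is False
# Requesting a remote member's membership forces a wait for full state:
_remote_config = RoomSyncConfig(
    timeline_limit=1,
    required_state_map={EventTypes.Member: {"@someone:remote"}},
)
assert _remote_config.must_await_full_state(_is_mine_id) is True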
class HaveSentRoomFlag(Enum):
"""Flag for whether we have sent the room down a sliding sync connection.
The valid state changes here are:
NEVER -> LIVE
LIVE -> PREVIOUSLY
PREVIOUSLY -> LIVE
"""
# The room has never been sent down (or we have forgotten we have sent it
# down).
NEVER = "never"
# We have previously sent the room down, but there are updates that we
# haven't sent down.
PREVIOUSLY = "previously"
# We have sent the room down and the client has received all updates.
LIVE = "live"
T = TypeVar("T", str, RoomStreamToken, MultiWriterStreamToken)
@attr.s(auto_attribs=True, slots=True, frozen=True)
class HaveSentRoom(Generic[T]):
"""Whether we have sent the room data down a sliding sync connection.
We are generic over the type of token used, e.g. `RoomStreamToken` or
`MultiWriterStreamToken`.
Attributes:
status: Flag for whether we have sent the room down
last_token: If the flag is `PREVIOUSLY` then this is non-null and
contains the last stream token of the last updates we sent down
the room, i.e. we still need to send everything since then to the
client.
"""
status: HaveSentRoomFlag
last_token: Optional[T]
@staticmethod
def live() -> "HaveSentRoom[T]":
return HaveSentRoom(HaveSentRoomFlag.LIVE, None)
@staticmethod
def previously(last_token: T) -> "HaveSentRoom[T]":
"""Constructor for `PREVIOUSLY` flag."""
return HaveSentRoom(HaveSentRoomFlag.PREVIOUSLY, last_token)
@staticmethod
def never() -> "HaveSentRoom[T]":
return HaveSentRoom(HaveSentRoomFlag.NEVER, None)
@attr.s(auto_attribs=True, slots=True, frozen=True)
class RoomStatusMap(Generic[T]):
"""For a given stream, e.g. events, records what we have or have not sent
down for that stream in a given room."""
# `room_id` -> `HaveSentRoom`
_statuses: Mapping[str, HaveSentRoom[T]] = attr.Factory(dict)
def have_sent_room(self, room_id: str) -> HaveSentRoom[T]:
"""Return whether we have previously sent the room down"""
return self._statuses.get(room_id, HaveSentRoom.never())
def get_mutable(self) -> "MutableRoomStatusMap[T]":
"""Get a mutable copy of this state."""
return MutableRoomStatusMap(
statuses=self._statuses,
)
def copy(self) -> "RoomStatusMap[T]":
"""Make a copy of the class. Useful for converting from a mutable to
immutable version."""
return RoomStatusMap(statuses=dict(self._statuses))
def __len__(self) -> int:
return len(self._statuses)
class MutableRoomStatusMap(RoomStatusMap[T]):
"""A mutable version of `RoomStatusMap`"""
# We use a ChainMap here so that we can easily track what has been updated
# and what hasn't. Note that when we persist the per connection state this
# will get flattened to a normal dict (via calling `.copy()`)
_statuses: typing.ChainMap[str, HaveSentRoom[T]]
def __init__(
self,
statuses: Mapping[str, HaveSentRoom[T]],
) -> None:
# ChainMap requires a mutable mapping, but we're not actually going to
# mutate it.
statuses = cast(MutableMapping, statuses)
super().__init__(
statuses=ChainMap({}, statuses),
)
def get_updates(self) -> Mapping[str, HaveSentRoom[T]]:
"""Return only the changes that were made"""
return self._statuses.maps[0]
def record_sent_rooms(self, room_ids: StrCollection) -> None:
"""Record that we have sent these rooms in the response"""
for room_id in room_ids:
current_status = self._statuses.get(room_id, HaveSentRoom.never())
if current_status.status == HaveSentRoomFlag.LIVE:
continue
self._statuses[room_id] = HaveSentRoom.live()
def record_unsent_rooms(self, room_ids: StrCollection, from_token: T) -> None:
"""Record that we have not sent these rooms in the response, but there
have been updates.
"""
# Whether we add/update the entries for unsent rooms depends on the
# existing entry:
# - LIVE: We have previously sent down everything up to
# `last_room_token`, so we update the entry to be `PREVIOUSLY` with
# `last_room_token`.
# - PREVIOUSLY: We have previously sent down everything up to *a*
# given token, so we don't need to update the entry.
# - NEVER: We have never previously sent down the room, and we haven't
# sent anything down this time either so we leave it as NEVER.
for room_id in room_ids:
current_status = self._statuses.get(room_id, HaveSentRoom.never())
if current_status.status != HaveSentRoomFlag.LIVE:
continue
self._statuses[room_id] = HaveSentRoom.previously(from_token)
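# A minimal sketch (hypothetical room ID) of the delta tracking above:
_statuses: RoomStatusMap[str] = RoomStatusMap()
_mutable = _statuses.get_mutable()
_mutable.record_sent_rooms(["!room:test"])
assert _mutable.have_sent_room("!room:test").status == HaveSentRoomFlag.LIVE
# Only the change is visible in `get_updates()`, thanks to the ChainMap layer:
assert set(_mutable.get_updates().keys()) == {"!room:test"}
_flattened = _mutable.copy()  # back to a plain, immutable `RoomStatusMap`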
@attr.s(auto_attribs=True, frozen=True)
class PerConnectionState:
"""The per-connection state. A snapshot of what we've sent down the
connection before.
Currently, we track whether we've sent down various aspects of a given room
before.
We use the `rooms` field to store the position in the events stream for each
room that we've previously sent to the client before. On the next request
that includes the room, we can then send only what's changed since that
recorded position.
Same goes for the `receipts` field, so we only need to send the new receipts
since the last time the client made a sync request.
Attributes:
rooms: The status of each room for the events stream.
receipts: The status of each room for the receipts stream.
room_configs: Map from room_id to the `RoomSyncConfig` of all
rooms that we have previously sent down.
"""
rooms: RoomStatusMap[RoomStreamToken] = attr.Factory(RoomStatusMap)
receipts: RoomStatusMap[MultiWriterStreamToken] = attr.Factory(RoomStatusMap)
room_configs: Mapping[str, RoomSyncConfig] = attr.Factory(dict)
list_to_rooms: Mapping[str, AbstractSet[str]] = attr.Factory(dict)
def get_mutable(self) -> "MutablePerConnectionState":
"""Get a mutable copy of this state."""
room_configs = cast(MutableMapping[str, RoomSyncConfig], self.room_configs)
list_to_rooms = cast(MutableMapping[str, Set[str]], self.list_to_rooms)
return MutablePerConnectionState(
rooms=self.rooms.get_mutable(),
receipts=self.receipts.get_mutable(),
room_configs=ChainMap({}, room_configs),
list_to_rooms=ChainMap({}, list_to_rooms),
)
def copy(self) -> "PerConnectionState":
return PerConnectionState(
rooms=self.rooms.copy(),
receipts=self.receipts.copy(),
room_configs=dict(self.room_configs),
list_to_rooms=dict(self.list_to_rooms),
)
def __len__(self) -> int:
return (
len(self.rooms)
+ len(self.receipts)
+ len(self.room_configs)
+ len(self.list_to_rooms)
)
@attr.s(auto_attribs=True)
class MutablePerConnectionState(PerConnectionState):
"""A mutable version of `PerConnectionState`"""
rooms: MutableRoomStatusMap[RoomStreamToken]
receipts: MutableRoomStatusMap[MultiWriterStreamToken]
room_configs: typing.ChainMap[str, RoomSyncConfig]
list_to_rooms: typing.ChainMap[str, Set[str]]
def has_updates(self) -> bool:
return (
bool(self.rooms.get_updates())
or bool(self.receipts.get_updates())
or bool(self.get_room_config_updates())
or bool(self.list_to_rooms.maps[0])
)
def get_room_config_updates(self) -> Mapping[str, RoomSyncConfig]:
"""Get updates to the room sync config"""
return self.room_configs.maps[0]
def get_list_to_rooms_updates(self) -> Mapping[str, StrCollection]:
"""Get updates to the `list_to_rooms`"""
return self.list_to_rooms.maps[0]
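# A hedged end-to-end sketch (hypothetical room ID) of the mutable/immutable
# round trip for per-connection state:
_prev_state = PerConnectionState()
_new_state = _prev_state.get_mutable()
_new_state.rooms.record_sent_rooms(["!room:test"])
if _new_state.has_updates():
    # Flatten the ChainMaps into plain dicts before persisting the snapshot.
    _to_persist = _new_state.copy()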


@@ -885,3 +885,46 @@ class AwakenableSleeper:
# Cancel the sleep if we were woken up
if call.active():
call.cancel()
class DeferredEvent:
"""Like threading.Event but for async code"""
def __init__(self, reactor: IReactorTime) -> None:
self._reactor = reactor
self._deferred: "defer.Deferred[None]" = defer.Deferred()
def set(self) -> None:
if not self._deferred.called:
self._deferred.callback(None)
def clear(self) -> None:
if self._deferred.called:
self._deferred = defer.Deferred()
def is_set(self) -> bool:
return self._deferred.called
async def wait(self, timeout_seconds: float) -> bool:
if self.is_set():
return True
# Create a deferred that gets called in N seconds
sleep_deferred: "defer.Deferred[None]" = defer.Deferred()
call = self._reactor.callLater(timeout_seconds, sleep_deferred.callback, None)
try:
await make_deferred_yieldable(
defer.DeferredList(
[sleep_deferred, self._deferred],
fireOnOneCallback=True,
fireOnOneErrback=True,
consumeErrors=True,
)
)
finally:
# Cancel the sleep if we were woken up
if call.active():
call.cancel()
return self.is_set()
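# A hedged usage sketch (hypothetical function; assumes `reactor` provides
# `IReactorTime`, e.g. the global Twisted reactor):
async def _example_wait(reactor: IReactorTime) -> bool:
    event = DeferredEvent(reactor)
    # Something elsewhere wakes us up after one second...
    reactor.callLater(1.0, event.set)
    # ...so this returns True well before the 30s timeout; it would return
    # False if nothing called `set()` in time.
    return await event.wait(timeout_seconds=30.0)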

synapse/util/events.py Normal file

@@ -0,0 +1,29 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#
from synapse.util.stringutils import random_string
def generate_fake_event_id() -> str:
"""
Generate an event ID from random ASCII characters.
This is primarily useful for generating fake event IDs in response to
requests from shadow-banned users.
Returns:
A string intended to look like an event ID, but with no actual meaning.
"""
return "$" + random_string(43)


@@ -44,7 +44,7 @@ from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.storage.databases.main.events_worker import EventCacheEntry
from synapse.util import Clock
from synapse.util.stringutils import random_string
from synapse.util.events import generate_fake_event_id
from tests import unittest
from tests.test_utils import event_injection
@@ -52,10 +52,6 @@ from tests.test_utils import event_injection
logger = logging.getLogger(__name__)
def generate_fake_event_id() -> str:
return "$fake_" + random_string(43)
class FederationTestCase(unittest.FederatingHomeserverTestCase):
servlets = [
admin.register_servlets,


@@ -18,7 +18,6 @@
#
#
import logging
from copy import deepcopy
from typing import Dict, List, Optional
from unittest.mock import patch
@@ -47,7 +46,7 @@ from synapse.rest.client import knock, login, room
from synapse.server import HomeServer
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import JsonDict, StreamToken, UserID
from synapse.types.handlers import SlidingSyncConfig
from synapse.types.handlers.sliding_sync import SlidingSyncConfig
from synapse.util import Clock
from tests.replication._base import BaseMultiWorkerStreamTestCase
@@ -566,23 +565,11 @@ class RoomSyncConfigTestCase(TestCase):
"""
Combine A into B and B into A to make sure we get the same result.
"""
# Since we're mutating these in place, make a copy for each of our trials
room_sync_config_a = deepcopy(a)
room_sync_config_b = deepcopy(b)
combined_config = a.combine_room_sync_config(b)
self._assert_room_config_equal(combined_config, expected, "B into A")
# Combine B into A
room_sync_config_a.combine_room_sync_config(room_sync_config_b)
self._assert_room_config_equal(room_sync_config_a, expected, "B into A")
# Since we're mutating these in place, make a copy for each of our trials
room_sync_config_a = deepcopy(a)
room_sync_config_b = deepcopy(b)
# Combine A into B
room_sync_config_b.combine_room_sync_config(room_sync_config_a)
self._assert_room_config_equal(room_sync_config_b, expected, "A into B")
combined_config = a.combine_room_sync_config(b)
self._assert_room_config_equal(combined_config, expected, "A into B")
class GetRoomMembershipForUserAtToTokenTestCase(HomeserverTestCase):


@@ -49,8 +49,11 @@ from tests.unittest import TestCase
class ReadMultipartResponseTests(TestCase):
data1 = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: text/plain\r\nContent-Disposition: inline; filename=test_upload\r\n\r\nfile_"
data2 = b"to_stream\r\n--6067d4698f8d40a0a794ea7d7379d53a--\r\n\r\n"
multipart_response_data1 = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: text/plain\r\nContent-Disposition: inline; filename=test_upload\r\n\r\nfile_"
multipart_response_data2 = (
b"to_stream\r\n--6067d4698f8d40a0a794ea7d7379d53a--\r\n\r\n"
)
multipart_response_data_cased = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\ncOntEnt-type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-tyPe: text/plain\r\nconTent-dispOsition: inline; filename=test_upload\r\n\r\nfile_"
redirect_data = b"\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nContent-Type: application/json\r\n\r\n{}\r\n--6067d4698f8d40a0a794ea7d7379d53a\r\nLocation: https://cdn.example.org/ab/c1/2345.txt\r\n\r\n--6067d4698f8d40a0a794ea7d7379d53a--\r\n\r\n"
@@ -103,8 +106,31 @@ class ReadMultipartResponseTests(TestCase):
result, deferred, protocol = self._build_multipart_response(249, 250)
# Start sending data.
protocol.dataReceived(self.data1)
protocol.dataReceived(self.data2)
protocol.dataReceived(self.multipart_response_data1)
protocol.dataReceived(self.multipart_response_data2)
# Close the connection.
protocol.connectionLost(Failure(ResponseDone()))
multipart_response: MultipartResponse = deferred.result # type: ignore[assignment]
self.assertEqual(multipart_response.json, b"{}")
self.assertEqual(result.getvalue(), b"file_to_stream")
self.assertEqual(multipart_response.length, len(b"file_to_stream"))
self.assertEqual(multipart_response.content_type, b"text/plain")
self.assertEqual(
multipart_response.disposition, b"inline; filename=test_upload"
)
def test_parse_file_lowercase_headers(self) -> None:
"""
Check that a multipart response containing a file is properly parsed
into the json/file parts, and that the json and file are still captured when
the HTTP headers are not canonically cased
"""
result, deferred, protocol = self._build_multipart_response(249, 250)
# Start sending data.
protocol.dataReceived(self.multipart_response_data_cased)
protocol.dataReceived(self.multipart_response_data2)
# Close the connection.
protocol.connectionLost(Failure(ResponseDone()))
@@ -143,7 +169,7 @@ class ReadMultipartResponseTests(TestCase):
result, deferred, protocol = self._build_multipart_response(UNKNOWN_LENGTH, 180)
# Start sending data.
protocol.dataReceived(self.data1)
protocol.dataReceived(self.multipart_response_data1)
self.assertEqual(result.getvalue(), b"file_")
self._assert_error(deferred, protocol)
@@ -154,11 +180,11 @@ class ReadMultipartResponseTests(TestCase):
result, deferred, protocol = self._build_multipart_response(UNKNOWN_LENGTH, 180)
# Start sending data.
protocol.dataReceived(self.data1)
protocol.dataReceived(self.multipart_response_data1)
self._assert_error(deferred, protocol)
# More data might have come in.
protocol.dataReceived(self.data2)
protocol.dataReceived(self.multipart_response_data2)
self.assertEqual(result.getvalue(), b"file_")
self._assert_error(deferred, protocol)
@@ -172,7 +198,7 @@ class ReadMultipartResponseTests(TestCase):
self.assertFalse(deferred.called)
# Start sending data.
protocol.dataReceived(self.data1)
protocol.dataReceived(self.multipart_response_data1)
self._assert_error(deferred, protocol)
self._cleanup_error(deferred)


@@ -21,8 +21,6 @@ import synapse.rest.admin
from synapse.api.constants import EventTypes
from synapse.rest.client import login, room, sync
from synapse.server import HomeServer
from synapse.types import SlidingSyncStreamToken
from synapse.types.handlers import SlidingSyncConfig
from synapse.util import Clock
from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
@@ -130,7 +128,6 @@ class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
self.helper.send(room_id1, "msg", tok=user1_tok)
timeline_limit = 5
conn_id = "conn_id"
sync_body = {
"lists": {
"foo-list": {
@@ -170,40 +167,6 @@ class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
response_body["rooms"].keys(), {room_id2}, response_body["rooms"]
)
# FIXME: This is a hack to record that the first room wasn't sent down
# sync, as we don't implement that currently.
sliding_sync_handler = self.hs.get_sliding_sync_handler()
requester = self.get_success(
self.hs.get_auth().get_user_by_access_token(user1_tok)
)
sync_config = SlidingSyncConfig(
user=requester.user,
requester=requester,
conn_id=conn_id,
)
parsed_initial_from_token = self.get_success(
SlidingSyncStreamToken.from_string(self.store, initial_from_token)
)
connection_position = self.get_success(
sliding_sync_handler.connection_store.record_rooms(
sync_config,
parsed_initial_from_token,
sent_room_ids=[],
unsent_room_ids=[room_id1],
)
)
# FIXME: Now fix up `from_token` with new connect position above.
parsed_from_token = self.get_success(
SlidingSyncStreamToken.from_string(self.store, from_token)
)
parsed_from_token = SlidingSyncStreamToken(
stream_token=parsed_from_token.stream_token,
connection_position=connection_position,
)
from_token = self.get_success(parsed_from_token.to_string(self.store))
# We now send another event to room1, so we should sync all the missing events.
resp = self.helper.send(room_id1, "msg2", tok=user1_tok)
expected_events.append(resp["event_id"])
@@ -238,7 +201,6 @@ class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
self.helper.send(room_id1, "msg", tok=user1_tok)
conn_id = "conn_id"
sync_body = {
"lists": {
"foo-list": {
@@ -279,40 +241,6 @@ class SlidingSyncConnectionTrackingTestCase(SlidingSyncBase):
response_body["rooms"].keys(), {room_id2}, response_body["rooms"]
)
# FIXME: This is a hack to record that the first room wasn't sent down
# sync, as we don't implement that currently.
sliding_sync_handler = self.hs.get_sliding_sync_handler()
requester = self.get_success(
self.hs.get_auth().get_user_by_access_token(user1_tok)
)
sync_config = SlidingSyncConfig(
user=requester.user,
requester=requester,
conn_id=conn_id,
)
parsed_initial_from_token = self.get_success(
SlidingSyncStreamToken.from_string(self.store, initial_from_token)
)
connection_position = self.get_success(
sliding_sync_handler.connection_store.record_rooms(
sync_config,
parsed_initial_from_token,
sent_room_ids=[],
unsent_room_ids=[room_id1],
)
)
# FIXME: Now fix up `from_token` with new connect position above.
parsed_from_token = self.get_success(
SlidingSyncStreamToken.from_string(self.store, from_token)
)
parsed_from_token = SlidingSyncStreamToken(
stream_token=parsed_from_token.stream_token,
connection_position=connection_position,
)
from_token = self.get_success(parsed_from_token.to_string(self.store))
# We now send another event to room1, so we should sync all the missing state.
self.helper.send(room_id1, "msg", tok=user1_tok)


@@ -677,3 +677,108 @@ class SlidingSyncReceiptsExtensionTestCase(SlidingSyncBase):
set(),
exact=True,
)
def test_receipts_incremental_sync_out_of_range(self) -> None:
"""Tests that we don't return read receipts for rooms that fall out of
range, but then do send all read receipts once they're back in range.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)
room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id2, user1_id, tok=user1_tok)
# Send a message and read receipt into room2
event_response = self.helper.send(room_id2, body="new event", tok=user2_tok)
room2_event_id = event_response["event_id"]
self.helper.send_read_receipt(room_id2, room2_event_id, tok=user1_tok)
# Now send a message into room1 so that it is at the top of the list
self.helper.send(room_id1, body="new event", tok=user2_tok)
# Make a SS request for only the top room.
sync_body = {
"lists": {
"main": {
"ranges": [[0, 0]],
"required_state": [],
"timeline_limit": 5,
}
},
"extensions": {
"receipts": {
"enabled": True,
}
},
}
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
# The receipt is in room2, but only room1 is returned, so we don't
# expect to get the receipt.
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
set(),
exact=True,
)
# Move room2 into range.
self.helper.send(room_id2, body="new event", tok=user2_tok)
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
# We expect to see the read receipt of room2, as that has the most
# recent update.
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
{room_id2},
exact=True,
)
receipt = response_body["extensions"]["receipts"]["rooms"][room_id2]
self.assertIncludes(
receipt["content"][room2_event_id][ReceiptTypes.READ].keys(),
{user1_id},
exact=True,
)
# Send a message into room1 to bump it to the top, but also send a
# receipt in room2
self.helper.send(room_id1, body="new event", tok=user2_tok)
self.helper.send_read_receipt(room_id2, room2_event_id, tok=user2_tok)
# We don't expect to see the new read receipt.
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
set(),
exact=True,
)
# But if we send a new message into room2, we expect to get the missing receipts
self.helper.send(room_id2, body="new event", tok=user2_tok)
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
self.assertIncludes(
response_body["extensions"]["receipts"].get("rooms").keys(),
{room_id2},
exact=True,
)
# We should only see the new receipt
receipt = response_body["extensions"]["receipts"]["rooms"][room_id2]
self.assertIncludes(
receipt["content"][room2_event_id][ReceiptTypes.READ].keys(),
{user2_id},
exact=True,
)


@@ -120,19 +120,26 @@ class SlidingSyncExtensionsTestCase(SlidingSyncBase):
"foo-list": {
"ranges": [[0, 1]],
"required_state": [],
"timeline_limit": 0,
# We set this to `1` because we're testing `receipts` which
# interact with the `timeline`. With receipts, when a room
# hasn't been sent down the connection before or it appears
# as `initial: true`, we only include receipts for events in
# the timeline to avoid bloating and blowing up the sync
# response as the number of users in the room increases.
# (this behavior is part of the spec)
"timeline_limit": 1,
},
# We expect this list range to include room5, room4, room3
"bar-list": {
"ranges": [[0, 2]],
"required_state": [],
"timeline_limit": 0,
"timeline_limit": 1,
},
},
"room_subscriptions": {
room_id1: {
"required_state": [],
"timeline_limit": 0,
"timeline_limit": 1,
}
},
}


@@ -16,7 +16,7 @@ import logging
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
from synapse.api.constants import EventTypes, Membership
from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.api.room_versions import RoomVersions
from synapse.rest.client import login, room, sync
from synapse.server import HomeServer
@@ -44,6 +44,10 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
self.state_handler = self.hs.get_state_handler()
persistence = self.hs.get_storage_controllers().persistence
assert persistence is not None
self.persistence = persistence
def test_rooms_meta_when_joined(self) -> None:
"""
@@ -600,16 +604,16 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
Test that `bump_stamp` ignores backfilled events, i.e. events with a
negative stream ordering.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# Create a remote room
creator = "@user:other"
room_id = "!foo:other"
room_version = RoomVersions.V10
shared_kwargs = {
"room_id": room_id,
"room_version": "10",
"room_version": room_version.identifier,
}
create_tuple = self.get_success(
@@ -618,6 +622,12 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
prev_event_ids=[],
type=EventTypes.Create,
state_key="",
content={
# The `ROOM_CREATOR` field could be removed if we used a room
# version > 10 (in favor of relying on `sender`)
EventContentFields.ROOM_CREATOR: creator,
EventContentFields.ROOM_VERSION: room_version.identifier,
},
sender=creator,
**shared_kwargs,
)
@@ -667,22 +677,29 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
]
# Ensure the local HS knows the room version
self.get_success(
self.store.store_room(room_id, creator, False, RoomVersions.V10)
)
self.get_success(self.store.store_room(room_id, creator, False, room_version))
# Persist these events as backfilled events.
persistence = self.hs.get_storage_controllers().persistence
assert persistence is not None
for event, context in remote_events_and_contexts:
self.get_success(persistence.persist_event(event, context, backfilled=True))
self.get_success(
self.persistence.persist_event(event, context, backfilled=True)
)
# Now we join the local user to the room
join_tuple = self.get_success(
# Now we join the local user to the room. We want to make this feel as close to
# the real `process_remote_join()` as possible but we'd like to avoid some of
# the auth checks that would be done in the real code.
#
# FIXME: The test was originally written using this less-real
# `persist_event(...)` shortcut but it would be nice to use the real remote join
# process in a `FederatingHomeserverTestCase`.
flawed_join_tuple = self.get_success(
create_event(
self.hs,
prev_event_ids=[invite_tuple[0].event_id],
# This doesn't work correctly to create an `EventContext` that includes
# both of these state events. I assume it's because we're working on our
# local homeserver which has the remote state set as `outlier`. We have
# to create our own EventContext below to get this right.
auth_event_ids=[create_tuple[0].event_id, invite_tuple[0].event_id],
type=EventTypes.Member,
state_key=user1_id,
@@ -691,7 +708,22 @@ class SlidingSyncRoomsMetaTestCase(SlidingSyncBase):
**shared_kwargs,
)
)
self.get_success(persistence.persist_event(*join_tuple))
# We have to create our own context to get the state set correctly. If we use
# the `EventContext` from the `flawed_join_tuple`, the `current_state_events`
# table will only have the join event in it which should never happen in our
# real server.
join_event = flawed_join_tuple[0]
join_context = self.get_success(
self.state_handler.compute_event_context(
join_event,
state_ids_before_event={
(e.type, e.state_key): e.event_id
for e in [create_tuple[0], invite_tuple[0]]
},
partial_state=False,
)
)
self.get_success(self.persistence.persist_event(join_event, join_context))
# Doing an SS request should return a positive `bump_stamp`, even though
# the only event that matches the bump types has as negative stream


@@ -161,10 +161,10 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
self.assertIsNone(response_body["rooms"][room_id1].get("required_state"))
self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
def test_rooms_required_state_incremental_sync_restart(self) -> None:
def test_rooms_incremental_sync_restart(self) -> None:
"""
Test `rooms.required_state` returns requested state events in the room during an
incremental sync, after a restart (and so the in memory caches are reset).
Test that, after a restart (and so the in-memory caches are reset), we
correctly return an `M_UNKNOWN_POS` error
"""
user1_id = self.register_user("user1", "pass")
@@ -191,26 +191,26 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
}
_, from_token = self.do_sync(sync_body, tok=user1_tok)
# Reset the in-memory cache
self.hs.get_sliding_sync_handler().connection_store._connections.clear()
# Reset the positions
self.get_success(
self.store.db_pool.simple_delete(
table="sliding_sync_connections",
keyvalues={"user_id": user1_id},
desc="clear_cache",
)
)
# Make the Sliding Sync request
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
# If the cache has been cleared then we do expect the state to come down
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
channel = self.make_request(
method="POST",
path=self.sync_endpoint + f"?pos={from_token}",
content=sync_body,
access_token=user1_tok,
)
self._assertRequiredStateIncludes(
response_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
state_map[(EventTypes.RoomHistoryVisibility, "")],
},
exact=True,
self.assertEqual(channel.code, 400, channel.json_body)
self.assertEqual(
channel.json_body["errcode"], "M_UNKNOWN_POS", channel.json_body
)
self.assertIsNone(response_body["rooms"][room_id1].get("invite_state"))
def test_rooms_required_state_wildcard(self) -> None:
"""
@@ -637,8 +637,7 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
def test_rooms_required_state_partial_state(self) -> None:
"""
Test partially-stated room are excluded unless `rooms.required_state` is
lazy-loading room members.
Test that partially-stated rooms are excluded if they require full state.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
@@ -655,59 +654,195 @@ class SlidingSyncRoomsRequiredStateTestCase(SlidingSyncBase):
mark_event_as_partial_state(self.hs, join_response2["event_id"], room_id2)
)
# Make the Sliding Sync request (NOT lazy-loading room members)
# Make the Sliding Sync request with examples where `must_await_full_state()` is
# `False`
sync_body = {
"lists": {
"foo-list": {
"no-state-list": {
"ranges": [[0, 1]],
"required_state": [],
"timeline_limit": 0,
},
"other-state-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
],
"timeline_limit": 0,
},
"lazy-load-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
# Lazy-load room members
[EventTypes.Member, StateValues.LAZY],
# Local member
[EventTypes.Member, user2_id],
],
"timeline_limit": 0,
},
"local-members-only-list": {
"ranges": [[0, 1]],
"required_state": [
# Own user ID
[EventTypes.Member, user1_id],
# Local member
[EventTypes.Member, user2_id],
],
"timeline_limit": 0,
},
"me-list": {
"ranges": [[0, 1]],
"required_state": [
# Own user ID
[EventTypes.Member, StateValues.ME],
# Local member
[EventTypes.Member, user2_id],
],
"timeline_limit": 0,
},
"wildcard-type-local-state-key-list": {
"ranges": [[0, 1]],
"required_state": [
["*", user1_id],
# Not a user ID
["*", "foobarbaz"],
# Not a user ID
["*", "foo.bar.baz"],
# Not a user ID
["*", "@foo"],
],
"timeline_limit": 0,
},
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
# The list should include both rooms now because we don't need full state
for list_key in response_body["lists"].keys():
self.assertIncludes(
set(response_body["lists"][list_key]["ops"][0]["room_ids"]),
{room_id2, room_id1},
exact=True,
message=f"Expected all rooms to show up for list_key={list_key}. Response "
+ str(response_body["lists"][list_key]),
)
# Take each of the list variants and apply them to room subscriptions to make
# sure the same rules apply
for list_key in sync_body["lists"].keys():
sync_body_for_subscriptions = {
"room_subscriptions": {
room_id1: {
"required_state": sync_body["lists"][list_key][
"required_state"
],
"timeline_limit": 0,
},
room_id2: {
"required_state": sync_body["lists"][list_key][
"required_state"
],
"timeline_limit": 0,
},
}
}
response_body, _ = self.do_sync(sync_body_for_subscriptions, tok=user1_tok)
self.assertIncludes(
set(response_body["rooms"].keys()),
{room_id2, room_id1},
exact=True,
message=f"Expected all rooms to show up for test_key={list_key}.",
)
# =====================================================================
# Make the Sliding Sync request with examples where `must_await_full_state()` is
# `True`
sync_body = {
"lists": {
"wildcard-list": {
"ranges": [[0, 1]],
"required_state": [
["*", "*"],
],
"timeline_limit": 0,
},
"wildcard-type-remote-state-key-list": {
"ranges": [[0, 1]],
"required_state": [
["*", "@some:remote"],
# Not a user ID
["*", "foobarbaz"],
# Not a user ID
["*", "foo.bar.baz"],
# Not a user ID
["*", "@foo"],
],
"timeline_limit": 0,
},
"remote-member-list": {
"ranges": [[0, 1]],
"required_state": [
# Own user ID
[EventTypes.Member, user1_id],
# Remote member
[EventTypes.Member, "@some:remote"],
# Local member
[EventTypes.Member, user2_id],
],
"timeline_limit": 0,
},
"lazy-but-remote-member-list": {
"ranges": [[0, 1]],
"required_state": [
# Lazy-load room members
[EventTypes.Member, StateValues.LAZY],
# Remote member
[EventTypes.Member, "@some:remote"],
],
"timeline_limit": 0,
},
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
# Make sure the list includes room1 but room2 is excluded because it's still
# partially-stated
self.assertListEqual(
list(response_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 1],
"room_ids": [room_id1],
}
],
response_body["lists"]["foo-list"],
)
for list_key in response_body["lists"].keys():
self.assertIncludes(
set(response_body["lists"][list_key]["ops"][0]["room_ids"]),
{room_id1},
exact=True,
message=f"Expected only fully-stated rooms to show up for list_key={list_key}. Response "
+ str(response_body["lists"][list_key]),
)
# Make the Sliding Sync request (with lazy-loading room members)
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
# Lazy-load room members
[EventTypes.Member, StateValues.LAZY],
],
"timeline_limit": 0,
},
# Take each of the list variants and apply them to room subscriptions to make
# sure the same rules apply
for list_key in sync_body["lists"].keys():
sync_body_for_subscriptions = {
"room_subscriptions": {
room_id1: {
"required_state": sync_body["lists"][list_key][
"required_state"
],
"timeline_limit": 0,
},
room_id2: {
"required_state": sync_body["lists"][list_key][
"required_state"
],
"timeline_limit": 0,
},
}
}
}
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
response_body, _ = self.do_sync(sync_body_for_subscriptions, tok=user1_tok)
# The list should include both rooms now because we're lazy-loading room members
self.assertListEqual(
list(response_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 1],
"room_ids": [room_id2, room_id1],
}
],
response_body["lists"]["foo-list"],
)
self.assertIncludes(
set(response_body["rooms"].keys()),
{room_id1},
exact=True,
message=f"Expected only fully-stated rooms to show up for test_key={list_key}.",
)
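Taken together, the two groups above pin down the shape of `must_await_full_state()`. A condensed paraphrase of the rule as these tests encode it (illustrative only, not Synapse's actual implementation; `is_mine_id` is assumed to report whether a user ID belongs to the local server, and `StateValues.LAZY`/`StateValues.ME` are taken to be the "$LAZY"/"$ME" sentinels):

WILDCARD = "*"
LAZY = "$LAZY"
ME = "$ME"
MEMBER = "m.room.member"

def must_await_full_state(required_state, is_mine_id):
    """Sketch: a partially-stated room can be served unless the request
    can only be answered with full state."""
    for event_type, state_key in required_state:
        if event_type == WILDCARD:
            # ("*", "*") asks for all state in the room.
            if state_key == WILDCARD:
                return True
            # A wildcard type with a *remote* user ID as the state key also
            # needs full state; strings that aren't user IDs ("foobarbaz",
            # "@foo" with no domain) are fine.
            if (
                state_key.startswith("@")
                and ":" in state_key
                and not is_mine_id(state_key)
            ):
                return True
        elif event_type == MEMBER:
            # $LAZY, $ME and explicit local members are all satisfiable
            # from partial state; any other member filter is not.
            if state_key == WILDCARD:
                return True
            if state_key not in (LAZY, ME) and not is_mine_id(state_key):
                return True
    return False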


@@ -12,13 +12,15 @@
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import logging
from typing import List, Optional
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
from synapse.api.constants import EventTypes
from synapse.rest.client import login, room, sync
from synapse.server import HomeServer
from synapse.types import StreamToken
from synapse.types import StreamToken, StrSequence
from synapse.util import Clock
from tests.rest.client.sliding_sync.test_sliding_sync import SlidingSyncBase
@@ -42,6 +44,82 @@ class SlidingSyncRoomsTimelineTestCase(SlidingSyncBase):
self.store = hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
def _assertListEqual(
self,
actual_items: StrSequence,
expected_items: StrSequence,
message: Optional[str] = None,
) -> None:
"""
Like `self.assertListEqual(...)` but with an actually understandable diff message.
"""
if actual_items == expected_items:
return
expected_lines: List[str] = []
for expected_item in expected_items:
is_expected_in_actual = expected_item in actual_items
expected_lines.append(
"{} {}".format(" " if is_expected_in_actual else "?", expected_item)
)
actual_lines: List[str] = []
for actual_item in actual_items:
is_actual_in_expected = actual_item in expected_items
actual_lines.append(
"{} {}".format("+" if is_actual_in_expected else " ", actual_item)
)
newline = "\n"
expected_string = f"Expected items to be in actual ('?' = missing expected items):\n [\n{newline.join(expected_lines)}\n ]"
actual_string = f"Actual ('+' = found expected items):\n [\n{newline.join(actual_lines)}\n ]"
first_message = "Items must"
diff_message = f"{first_message}\n{expected_string}\n{actual_string}"
self.fail(f"{diff_message}\n{message}")
def _assertTimelineEqual(
self,
*,
room_id: str,
actual_event_ids: List[str],
expected_event_ids: List[str],
message: Optional[str] = None,
) -> None:
"""
Like `self.assertListEqual(...)` for event IDs in a room but will give a nicer
output with context for what each event_id is (type, stream_ordering, content,
etc).
"""
if actual_event_ids == expected_event_ids:
return
event_id_set = set(actual_event_ids + expected_event_ids)
events = self.get_success(self.store.get_events(event_id_set))
def event_id_to_string(event_id: str) -> str:
event = events.get(event_id)
if event:
state_key = event.get_state_key()
state_key_piece = f", {state_key}" if state_key is not None else ""
return (
f"({event.internal_metadata.stream_ordering: >2}, {event.internal_metadata.instance_name}) "
+ f"{event.event_id} ({event.type}{state_key_piece}) {event.content.get('membership', '')}{event.content.get('body', '')}"
)
return f"{event_id} <event not found in room_id={room_id}>"
self._assertListEqual(
actual_items=[
event_id_to_string(event_id) for event_id in actual_event_ids
],
expected_items=[
event_id_to_string(event_id) for event_id in expected_event_ids
],
message=message,
)
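For illustration, a failing call to the new helper renders a readable diff; the event IDs below are made up:

self._assertListEqual(
    actual_items=["$event_A", "$event_C"],
    expected_items=["$event_A", "$event_B"],
)
# Fails with a message along these lines:
#   Items must be in the same order as expected
#   Expected items to be in actual ('?' = missing expected items):
#    [
#     $event_A
#   ? $event_B
#    ]
#   Actual ('+' = found expected items):
#    [
#   + $event_A
#     $event_C
#    ]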
def test_rooms_limited_initial_sync(self) -> None:
"""
Test that we mark `rooms` as `limited=True` when we saturate the `timeline_limit`
@@ -85,17 +163,18 @@ class SlidingSyncRoomsTimelineTestCase(SlidingSyncBase):
response_body["rooms"][room_id1],
)
# Check to make sure the latest events are returned
self.assertEqual(
[
self._assertTimelineEqual(
room_id=room_id1,
actual_event_ids=[
event["event_id"]
for event in response_body["rooms"][room_id1]["timeline"]
],
[
expected_event_ids=[
event_response4["event_id"],
event_response5["event_id"],
user1_join_response["event_id"],
],
response_body["rooms"][room_id1]["timeline"],
message=str(response_body["rooms"][room_id1]["timeline"]),
)
# Check to make sure the `prev_batch` points at the right place
@@ -227,16 +306,17 @@ class SlidingSyncRoomsTimelineTestCase(SlidingSyncBase):
+ str(response_body["rooms"][room_id1]),
)
# Check to make sure the latest events are returned
self.assertEqual(
[
self._assertTimelineEqual(
room_id=room_id1,
actual_event_ids=[
event["event_id"]
for event in response_body["rooms"][room_id1]["timeline"]
],
[
expected_event_ids=[
event_response2["event_id"],
event_response3["event_id"],
],
response_body["rooms"][room_id1]["timeline"],
message=str(response_body["rooms"][room_id1]["timeline"]),
)
# All events are "live"
@@ -303,18 +383,19 @@ class SlidingSyncRoomsTimelineTestCase(SlidingSyncBase):
+ str(response_body["rooms"][room_id1]),
)
# Check to make sure that the "live" and historical events are returned
self.assertEqual(
[
self._assertTimelineEqual(
room_id=room_id1,
actual_event_ids=[
event["event_id"]
for event in response_body["rooms"][room_id1]["timeline"]
],
[
expected_event_ids=[
event_response2["event_id"],
user1_join_response["event_id"],
event_response3["event_id"],
event_response4["event_id"],
],
response_body["rooms"][room_id1]["timeline"],
message=str(response_body["rooms"][room_id1]["timeline"]),
)
# Only events after the `from_token` are "live" (join, event3, event4)
@@ -361,17 +442,18 @@ class SlidingSyncRoomsTimelineTestCase(SlidingSyncBase):
response_body, _ = self.do_sync(sync_body, tok=user1_tok)
# We should see events before the ban but not after
self.assertEqual(
[
self._assertTimelineEqual(
room_id=room_id1,
actual_event_ids=[
event["event_id"]
for event in response_body["rooms"][room_id1]["timeline"]
],
[
expected_event_ids=[
event_response3["event_id"],
event_response4["event_id"],
user1_ban_response["event_id"],
],
response_body["rooms"][room_id1]["timeline"],
message=str(response_body["rooms"][room_id1]["timeline"]),
)
# No "live" events in an initial sync (no `from_token` to define the "live"
# range)
@@ -428,17 +510,18 @@ class SlidingSyncRoomsTimelineTestCase(SlidingSyncBase):
response_body, _ = self.do_sync(sync_body, since=from_token, tok=user1_tok)
# We should see events before the ban but not after
self.assertEqual(
[
self._assertTimelineEqual(
room_id=room_id1,
actual_event_ids=[
event["event_id"]
for event in response_body["rooms"][room_id1]["timeline"]
],
[
expected_event_ids=[
event_response3["event_id"],
event_response4["event_id"],
user1_ban_response["event_id"],
],
response_body["rooms"][room_id1]["timeline"],
message=str(response_body["rooms"][room_id1]["timeline"]),
)
# All live events in the incremental sync
self.assertEqual(
@@ -491,3 +574,138 @@ class SlidingSyncRoomsTimelineTestCase(SlidingSyncBase):
# Nothing to see for this banned user in the room in the token range
self.assertIsNone(response_body["rooms"].get(room_id1))
def test_increasing_timeline_range_sends_more_messages(self) -> None:
"""
Test that increasing the timeline limit via room subscriptions sends the
room down with more messages in a limited sync.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok)
sync_body = {
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [[EventTypes.Create, ""]],
"timeline_limit": 1,
}
}
}
message_events = []
for _ in range(10):
resp = self.helper.send(room_id1, "msg", tok=user1_tok)
message_events.append(resp["event_id"])
# Make the first Sliding Sync request
response_body, from_token = self.do_sync(sync_body, tok=user1_tok)
room_response = response_body["rooms"][room_id1]
self.assertEqual(room_response["initial"], True)
self.assertNotIn("unstable_expanded_timeline", room_response)
self.assertEqual(room_response["limited"], True)
# We only expect the last message at first
self._assertTimelineEqual(
room_id=room_id1,
actual_event_ids=[event["event_id"] for event in room_response["timeline"]],
expected_event_ids=message_events[-1:],
message=str(room_response["timeline"]),
)
# We also expect to get the create event state.
state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)
self._assertRequiredStateIncludes(
room_response["required_state"],
{state_map[(EventTypes.Create, "")]},
exact=True,
)
# Now do another request with a room subscription with an increased timeline limit
sync_body["room_subscriptions"] = {
room_id1: {
"required_state": [],
"timeline_limit": 10,
}
}
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
room_response = response_body["rooms"][room_id1]
self.assertNotIn("initial", room_response)
self.assertEqual(room_response["unstable_expanded_timeline"], True)
self.assertEqual(room_response["limited"], True)
# Now we expect all the messages
self._assertTimelineEqual(
room_id=room_id1,
actual_event_ids=[event["event_id"] for event in room_response["timeline"]],
expected_event_ids=message_events,
message=str(room_response["timeline"]),
)
# We don't expect to get the room create down, as nothing has changed.
self.assertNotIn("required_state", room_response)
# Decreasing the timeline limit shouldn't resend any events
sync_body["room_subscriptions"] = {
room_id1: {
"required_state": [],
"timeline_limit": 5,
}
}
event_response = self.helper.send(room_id1, "msg", tok=user1_tok)
latest_event_id = event_response["event_id"]
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
room_response = response_body["rooms"][room_id1]
self.assertNotIn("initial", room_response)
self.assertNotIn("unstable_expanded_timeline", room_response)
self.assertEqual(room_response["limited"], False)
self._assertTimelineEqual(
room_id=room_id1,
actual_event_ids=[event["event_id"] for event in room_response["timeline"]],
expected_event_ids=[latest_event_id],
message=str(room_response["timeline"]),
)
# Increasing the limit to what it was before also should not resend any
# events
sync_body["room_subscriptions"] = {
room_id1: {
"required_state": [],
"timeline_limit": 10,
}
}
event_response = self.helper.send(room_id1, "msg", tok=user1_tok)
latest_event_id = event_response["event_id"]
response_body, from_token = self.do_sync(
sync_body, since=from_token, tok=user1_tok
)
room_response = response_body["rooms"][room_id1]
self.assertNotIn("initial", room_response)
self.assertNotIn("unstable_expanded_timeline", room_response)
self.assertEqual(room_response["limited"], False)
self._assertTimelineEqual(
room_id=room_id1,
actual_event_ids=[event["event_id"] for event in room_response["timeline"]],
expected_event_ids=[latest_event_id],
message=str(room_response["timeline"]),
)
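In summary, the behaviour this test pins down is that only growing the timeline limit beyond what the connection has already seen triggers the expanded resend. A one-line paraphrase of the rule (illustrative, not the actual implementation):

# Sketch: track the largest timeline_limit previously used on this
# connection for the room; only a strictly larger limit expands.
def should_expand_timeline(previous_max_limit, new_limit):
    return new_limit > previous_max_limit

assert should_expand_timeline(1, 10)       # bump 1 -> 10: expanded resend
assert not should_expand_timeline(10, 5)   # decrease: no resend
assert not should_expand_timeline(10, 10)  # back to the old max: no resend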


@@ -45,7 +45,7 @@ from typing_extensions import Literal
from twisted.test.proto_helpers import MemoryReactorClock
from twisted.web.server import Site
from synapse.api.constants import Membership
from synapse.api.constants import Membership, ReceiptTypes
from synapse.api.errors import Codes
from synapse.server import HomeServer
from synapse.types import JsonDict
@@ -944,3 +944,15 @@ class RestHelper:
assert len(p.links) == 1, "not exactly one link in confirmation page"
oauth_uri = p.links[0]
return oauth_uri
def send_read_receipt(self, room_id: str, event_id: str, *, tok: str) -> None:
"""Send a read receipt into the room at the given event"""
channel = make_request(
self.reactor,
self.site,
method="POST",
path=f"/rooms/{room_id}/receipt/{ReceiptTypes.READ}/{event_id}",
content={},
access_token=tok,
)
assert channel.code == HTTPStatus.OK, channel.text_body
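Hypothetical usage of the new helper from a test (the variables are placeholders):

# e.g. inside a HomeserverTestCase, after sending a message:
event_response = self.helper.send(room_id, "msg", tok=user_tok)
self.helper.send_read_receipt(room_id, event_response["event_id"], tok=user_tok)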


@@ -1166,6 +1166,12 @@ def setup_test_homeserver(
hs.get_auth_handler().validate_hash = validate_hash # type: ignore[assignment]
# We need to replace the media threadpool with the fake test threadpool.
def thread_pool() -> threadpool.ThreadPool:
return reactor.getThreadPool()
hs.get_media_sender_thread_pool = thread_pool # type: ignore[method-assign]
# Load any configured modules into the homeserver
module_api = hs.get_module_api()
for module, module_config in hs.config.modules.loaded_modules:


@@ -112,6 +112,24 @@ class UpdateUpsertManyTests(unittest.HomeserverTestCase):
{(1, "user1", "hello"), (2, "user2", "bleb")},
)
self.get_success(
self.storage.db_pool.runInteraction(
"test",
self.storage.db_pool.simple_upsert_many_txn,
self.table_name,
key_names=key_names,
key_values=[[2, "user2"]],
value_names=[],
value_values=[],
)
)
# Check results are what we expect
self.assertEqual(
set(self._dump_table_to_tuple()),
{(1, "user1", "hello"), (2, "user2", "bleb")},
)
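This new case upserts an existing key with no non-key values, so nothing should change. On Postgres that is expected to reduce to roughly the following (an assumption about the generated SQL, not taken from the source; column names follow the test's dumped tuples):

# Assumed Postgres equivalent of the no-value upsert above:
#
#   INSERT INTO tablename (id, username)
#   VALUES (2, 'user2')
#   ON CONFLICT (id, username) DO NOTHING;
#
# The existing row keeps its value column ("bleb"), which is exactly what
# the assertion checks.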
def test_simple_update_many(self) -> None:
"""
simple_update_many performs many updates at once.


@@ -19,6 +19,7 @@
#
#
import logging
from typing import List, Optional
from twisted.test.proto_helpers import MemoryReactor
@@ -35,6 +36,8 @@ from synapse.util import Clock
from tests.unittest import HomeserverTestCase
logger = logging.getLogger(__name__)
class ExtremPruneTestCase(HomeserverTestCase):
servlets = [


@@ -24,7 +24,7 @@ from typing import List, Optional, Tuple, cast
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.constants import EventContentFields, EventTypes, JoinRules, Membership
from synapse.api.room_versions import RoomVersions
from synapse.rest import admin
from synapse.rest.admin import register_servlets_for_client_rest_resource
@@ -38,6 +38,7 @@ from synapse.util import Clock
from tests import unittest
from tests.server import TestHomeServer
from tests.test_utils import event_injection
from tests.test_utils.event_injection import create_event
from tests.unittest import skip_unless
logger = logging.getLogger(__name__)
@@ -54,6 +55,10 @@ class RoomMemberStoreTestCase(unittest.HomeserverTestCase):
# We can't test the RoomMemberStore on its own without the other event
# storage logic
self.store = hs.get_datastores().main
self.state_handler = self.hs.get_state_handler()
persistence = self.hs.get_storage_controllers().persistence
assert persistence is not None
self.persistence = persistence
self.u_alice = self.register_user("alice", "pass")
self.t_alice = self.login("alice", "pass")
@@ -220,31 +225,166 @@ class RoomMemberStoreTestCase(unittest.HomeserverTestCase):
)
def test_join_locally_forgotten_room(self) -> None:
"""Tests if a user joins a forgotten room the room is not forgotten anymore."""
self.room = self.helper.create_room_as(self.u_alice, tok=self.t_alice)
self.assertFalse(
self.get_success(self.store.is_locally_forgotten_room(self.room))
)
"""
Tests that if a user joins a forgotten room, the room is no longer forgotten.
# after leaving and forgetting the room, it is forgotten
self.get_success(
event_injection.inject_member_event(
self.hs, self.room, self.u_alice, "leave"
Since a room can't be re-joined if everyone has left, this can only happen
with a room that has remote users in it.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# Create a remote room
creator = "@user:other"
room_id = "!foo:other"
room_version = RoomVersions.V10
shared_kwargs = {
"room_id": room_id,
"room_version": room_version.identifier,
}
create_tuple = self.get_success(
create_event(
self.hs,
prev_event_ids=[],
type=EventTypes.Create,
state_key="",
content={
# The `ROOM_CREATOR` field could be removed if we used a room
# version > 10 (in favor of relying on `sender`)
EventContentFields.ROOM_CREATOR: creator,
EventContentFields.ROOM_VERSION: room_version.identifier,
},
sender=creator,
**shared_kwargs,
)
)
self.get_success(self.store.forget(self.u_alice, self.room))
self.assertTrue(
self.get_success(self.store.is_locally_forgotten_room(self.room))
)
# after rejoining, the room is not forgotten anymore
self.get_success(
event_injection.inject_member_event(
self.hs, self.room, self.u_alice, "join"
creator_tuple = self.get_success(
create_event(
self.hs,
prev_event_ids=[create_tuple[0].event_id],
auth_event_ids=[create_tuple[0].event_id],
type=EventTypes.Member,
state_key=creator,
content={"membership": Membership.JOIN},
sender=creator,
**shared_kwargs,
)
)
remote_events_and_contexts = [
create_tuple,
creator_tuple,
]
# Ensure the local HS knows the room version
self.get_success(self.store.store_room(room_id, creator, False, room_version))
# Persist these events as backfilled events.
for event, context in remote_events_and_contexts:
self.get_success(
self.persistence.persist_event(event, context, backfilled=True)
)
# Now we join the local user to the room. We want to make this feel as close to
# the real `process_remote_join()` as possible but we'd like to avoid some of
# the auth checks that would be done in the real code.
#
# FIXME: The test was originally written using this less-real
# `persist_event(...)` shortcut but it would be nice to use the real remote join
# process in a `FederatingHomeserverTestCase`.
flawed_join_tuple = self.get_success(
create_event(
self.hs,
prev_event_ids=[creator_tuple[0].event_id],
# This doesn't work correctly to create an `EventContext` that includes
# both of these state events. I assume it's because we're working on our
# local homeserver which has the remote state set as `outlier`. We have
# to create our own EventContext below to get this right.
auth_event_ids=[create_tuple[0].event_id],
type=EventTypes.Member,
state_key=user1_id,
content={"membership": Membership.JOIN},
sender=user1_id,
**shared_kwargs,
)
)
# We have to create our own context to get the state set correctly. If we use
# the `EventContext` from the `flawed_join_tuple`, the `current_state_events`
# table will only have the join event in it which should never happen in our
# real server.
join_event = flawed_join_tuple[0]
join_context = self.get_success(
self.state_handler.compute_event_context(
join_event,
state_ids_before_event={
(e.type, e.state_key): e.event_id for e in [create_tuple[0]]
},
partial_state=False,
)
)
self.get_success(self.persistence.persist_event(join_event, join_context))
# The room shouldn't be forgotten because the local user just joined
self.assertFalse(
self.get_success(self.store.is_locally_forgotten_room(self.room))
self.get_success(self.store.is_locally_forgotten_room(room_id))
)
# After all of the local users (there is only user1) leave and forgetting the
# room, it is forgotten
user1_leave_response = self.helper.leave(room_id, user1_id, tok=user1_tok)
user1_leave_event = self.get_success(
self.store.get_event(user1_leave_response["event_id"])
)
self.get_success(self.store.forget(user1_id, room_id))
self.assertTrue(self.get_success(self.store.is_locally_forgotten_room(room_id)))
# Join the local user to the room (again). We want to make this feel as close to
# the real `process_remote_join()` as possible but we'd like to avoid some of
# the auth checks that would be done in the real code.
#
# FIXME: The test was originally written using this less-real
# `event_injection.inject_member_event(...)` shortcut but it would be nice to
# use the real remote join process in a `FederatingHomeserverTestCase`.
flawed_join_tuple = self.get_success(
create_event(
self.hs,
prev_event_ids=[user1_leave_response["event_id"]],
# This doesn't work correctly to create an `EventContext` that includes
# both of these state events. I assume it's because we're working on our
# local homeserver which has the remote state set as `outlier`. We have
# to create our own EventContext below to get this right.
auth_event_ids=[
create_tuple[0].event_id,
user1_leave_response["event_id"],
],
type=EventTypes.Member,
state_key=user1_id,
content={"membership": Membership.JOIN},
sender=user1_id,
**shared_kwargs,
)
)
# We have to create our own context to get the state set correctly. If we use
# the `EventContext` from the `flawed_join_tuple`, the `current_state_events`
# table will only have the join event in it which should never happen in our
# real server.
join_event = flawed_join_tuple[0]
join_context = self.get_success(
self.state_handler.compute_event_context(
join_event,
state_ids_before_event={
(e.type, e.state_key): e.event_id
for e in [create_tuple[0], user1_leave_event]
},
partial_state=False,
)
)
self.get_success(self.persistence.persist_event(join_event, join_context))
# After the local user rejoins the remote room, it isn't forgotten anymore
self.assertFalse(
self.get_success(self.store.is_locally_forgotten_room(room_id))
)
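The predicate this test exercises can be summarised as: a room is locally forgotten only while every local user's latest membership in it is flagged as forgotten, and any new local join clears that. A sketch under those assumptions (not the real storage query):

# Sketch (illustrative): the room counts as locally forgotten iff all
# local membership rows are marked forgotten.
def is_locally_forgotten(local_membership_rows):
    return bool(local_membership_rows) and all(
        row["forgotten"] for row in local_membership_rows
    )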

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.