Compare commits

3 Commits

Author         SHA1        Message                                          Date
Erik Johnston  760e6c61e6  Update storage provider interface                2024-06-18 11:15:50 +01:00
Erik Johnston  cdbe676bc4  Comments                                         2024-06-17 18:07:39 +01:00
Erik Johnston  1a3a6b63ef  We need to support 3rd party storage providers   2024-06-17 17:58:43 +01:00
                           (So we need to wrap the responders rather than
                           changing them)
62 changed files with 1067 additions and 1217 deletions

View File

@@ -72,7 +72,7 @@ jobs:
- name: Build and push all platforms
id: build-and-push
uses: docker/build-push-action@v6
uses: docker/build-push-action@v5
with:
push: true
labels: |

View File

@@ -14,7 +14,7 @@ jobs:
# There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
# (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
- name: 📥 Download artifact
uses: dawidd6/action-download-artifact@bf251b5aa9c2f7eeb574a96ee720e24f801b7c11 # v6
uses: dawidd6/action-download-artifact@deb3bb83256a78589fef6a7b942e5f2573ad7c13 # v5
with:
workflow: docs-pr.yaml
run_id: ${{ github.event.workflow_run.id }}

View File

@@ -102,7 +102,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-20.04, macos-12]
os: [ubuntu-20.04, macos-11]
arch: [x86_64, aarch64]
# is_pr is a flag used to exclude certain jobs from the matrix on PRs.
# It is not read by the rest of the workflow.
@@ -112,9 +112,9 @@ jobs:
exclude:
# Don't build macos wheels on PR CI.
- is_pr: true
os: "macos-12"
os: "macos-11"
# Don't build aarch64 wheels on mac.
- os: "macos-12"
- os: "macos-11"
arch: aarch64
# Don't build aarch64 wheels on PR CI.
- is_pr: true
@@ -130,7 +130,7 @@ jobs:
python-version: "3.x"
- name: Install cibuildwheel
run: python -m pip install cibuildwheel==2.19.1
run: python -m pip install cibuildwheel==2.16.2
- name: Set up QEMU to emulate aarch64
if: matrix.arch == 'aarch64'

View File

@@ -1,12 +1,3 @@
# Synapse 1.109.0 (2024-06-18)
### Internal Changes
- Fix the building of binary wheels for macOS by switching to macOS 12 CI runners. ([\#17319](https://github.com/element-hq/synapse/issues/17319))
# Synapse 1.109.0rc3 (2024-06-17)
### Bugfixes

View File

@@ -1,34 +1,21 @@
.. image:: https://github.com/element-hq/product/assets/87339233/7abf477a-5277-47f3-be44-ea44917d8ed7
:height: 60px
=========================================================================
Synapse |support| |development| |documentation| |license| |pypi| |python|
=========================================================================
===========================================================================================================
Element Synapse - Matrix homeserver implementation |support| |development| |documentation| |license| |pypi| |python|
===========================================================================================================
Synapse is an open-source `Matrix <https://matrix.org/>`_ homeserver written and
maintained by the Matrix.org Foundation. We began rapid development in 2014,
reaching v1.0.0 in 2019. Development on Synapse and the Matrix protocol itself continues
in earnest today.
Synapse is an open source `Matrix <https://matrix.org>`_ homeserver
implementation, written and maintained by `Element <https://element.io>`_.
`Matrix <https://github.com/matrix-org>`_ is the open standard for
secure and interoperable real time communications. You can directly run
and manage the source code in this repository, available under an AGPL
license. There is no support provided from Element unless you have a
subscription.
Subscription alternative
------------------------
Alternatively, for those that need an enterprise-ready solution, Element
Server Suite (ESS) is `available as a subscription <https://element.io/pricing>`_.
ESS builds on Synapse to offer a complete Matrix-based backend including the full
`Admin Console product <https://element.io/enterprise-functionality/admin-console>`_,
giving admins the power to easily manage an organization-wide
deployment. It includes advanced identity management, auditing,
moderation and data retention options as well as Long Term Support and
SLAs. ESS can be used to support any Matrix-based frontend client.
Briefly, Matrix is an open standard for communications on the internet, supporting
federation, encryption and VoIP. Matrix.org has more to say about the `goals of the
Matrix project <https://matrix.org/docs/guides/introduction>`_, and the `formal specification
<https://spec.matrix.org/>`_ describes the technical details.
.. contents::
🛠️ Installing and configuration
===============================
Installing and configuration
============================
The Synapse documentation describes `how to install Synapse <https://element-hq.github.io/synapse/latest/setup/installation.html>`_. We recommend using
`Docker images <https://element-hq.github.io/synapse/latest/setup/installation.html#docker-images-and-ansible-playbooks>`_ or `Debian packages from Matrix.org
@@ -118,8 +105,8 @@ Following this advice ensures that even if an XSS is found in Synapse, the
impact to other applications will be minimal.
🧪 Testing a new installation
============================
Testing a new installation
==========================
The easiest way to try out your new Synapse installation is by connecting to it
from a web client.
@@ -172,20 +159,8 @@ the form of::
As when logging in, you will need to specify a "Custom server". Specify your
desired ``localpart`` in the 'User name' box.
🎯 Troubleshooting and support
=============================
🚀 Professional support
----------------------
Enterprise quality support for Synapse including SLAs is available as part of an
`Element Server Suite (ESS) <https://element.io/pricing>` subscription.
If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`
and access the `knowledge base <https://ems-docs.element.io>`.
🤝 Community support
-------------------
Troubleshooting and support
===========================
The `Admin FAQ <https://element-hq.github.io/synapse/latest/usage/administration/admin_faq.html>`_
includes tips on dealing with some common problems. For more details, see
@@ -201,8 +176,8 @@ issues for support requests, only for bug reports and feature requests.
.. |docs| replace:: ``docs``
.. _docs: docs
🪪 Identity Servers
==================
Identity Servers
================
Identity servers have the job of mapping email addresses and other 3rd Party
IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs
@@ -231,8 +206,8 @@ an email address with your account, or send an invite to another user via their
email address.
🛠️ Development
==============
Development
===========
We welcome contributions to Synapse from the community!
The best place to get started is our
@@ -250,8 +225,8 @@ Alongside all that, join our developer community on Matrix:
`#synapse-dev:matrix.org <https://matrix.to/#/#synapse-dev:matrix.org>`_, featuring real humans!
.. |support| image:: https://img.shields.io/badge/matrix-community%20support-success
:alt: (get community support in #synapse:matrix.org)
.. |support| image:: https://img.shields.io/matrix/synapse:matrix.org?label=support&logo=matrix
:alt: (get support on #synapse:matrix.org)
:target: https://matrix.to/#/#synapse:matrix.org
.. |development| image:: https://img.shields.io/matrix/synapse-dev:matrix.org?label=development&logo=matrix

View File

@@ -0,0 +1,2 @@
Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md)
by adding a federation /download endpoint (#17172).

View File

@@ -1 +0,0 @@
Remove unused `expire_access_token` option in the Synapse Docker config file. Contributed by @AaronDewes.

View File

@@ -1 +0,0 @@
Filter for public and empty rooms added to Admin-API [List Room API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#list-room-api).

View File

@@ -1 +0,0 @@
Add `is_encrypted` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

View File

@@ -1 +0,0 @@
Fix a long-standing bug where an invalid 'from' parameter to [`/notifications`](https://spec.matrix.org/v1.10/client-server-api/#get_matrixclientv3notifications) would result in an Internal Server Error.

View File

@@ -1 +0,0 @@
Add `stream_ordering` sort to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

View File

@@ -1,2 +0,0 @@
`register_new_matrix_user` now supports a --password-file flag, which
is useful for scripting.

View File

@@ -1,2 +0,0 @@
`register_new_matrix_user` now supports a --exists-ok flag to allow registration of users that already exist in the database.
This is useful for scripts that bootstrap user accounts with initial passwords.

View File

@@ -1 +0,0 @@
Add missing quotes to the example for `exclude_rooms_from_sync`.

View File

@@ -1 +0,0 @@
Add support for the `via` query parameter from MSC4156.

View File

@@ -1 +0,0 @@
Update the README with Element branding, improve headers and fix the #synapse:matrix.org support room link rendering.

View File

@@ -1 +0,0 @@
This is a changelog so tests will run.

View File

@@ -1 +0,0 @@
Change path of the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync implementation to `/org.matrix.simplified_msc3575/sync` since our simplified API is slightly incompatible with what's in the current MSC.

View File

@@ -1 +0,0 @@
Tidy up `parse_integer` docs and call sites to reflect the fact that they require non-negative integers by default, and bring `parse_integer_from_args` default in alignment. Contributed by Denis Kasak (@dkasak).

debian/changelog (vendored, 12 lines changed)
View File

@@ -1,15 +1,3 @@
matrix-synapse-py3 (1.109.0+nmu1) UNRELEASED; urgency=medium
* `register_new_matrix_user` now supports a --password-file and a --exists-ok flag.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Jun 2024 13:29:36 +0100
matrix-synapse-py3 (1.109.0) stable; urgency=medium
* New synapse release 1.109.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Jun 2024 09:45:15 +0000
matrix-synapse-py3 (1.109.0~rc3) stable; urgency=medium
* New synapse release 1.109.0rc3.

View File

@@ -31,12 +31,8 @@ A sample YAML file accepted by `register_new_matrix_user` is described below:
Local part of the new user. Will prompt if omitted.
* `-p`, `--password`:
New password for user. Will prompt if this option and `--password-file` are omitted.
Supplying the password on the command line is not recommended.
* `--password-file`:
File containing the new password for user. If set, overrides `--password`.
This is a more secure alternative to specifying the password on the command line.
New password for user. Will prompt if omitted. Supplying the password
on the command line is not recommended. Use STDIN instead.
* `-a`, `--admin`:
Register new user as an admin. Will prompt if omitted.
@@ -48,9 +44,6 @@ A sample YAML file accepted by `register_new_matrix_user` is described below:
Shared secret as defined in server config file. This is an optional
parameter as it can be also supplied via the YAML file.
* `--exists-ok`:
Do not fail if the user already exists. The user account will not be updated in this case.
* `server_url`:
URL of the home server. Defaults to 'https://localhost:8448'.

View File

@@ -176,6 +176,7 @@ app_service_config_files:
{% endif %}
macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"
expire_access_token: False
## Signing Keys ##

View File

@@ -36,10 +36,6 @@ The following query parameters are available:
- the room's name,
- the local part of the room's canonical alias, or
- the complete (local and server part) room's id (case sensitive).
* `public_rooms` - Optional flag to filter public rooms. If `true`, only public rooms are queried. If `false`, public rooms are excluded from
the query. When the flag is absent (the default), **both** public and non-public rooms are included in the search results.
* `empty_rooms` - Optional flag to filter empty rooms. A room is empty if joined_members is zero. If `true`, only empty rooms are queried. If `false`, empty rooms are excluded from
the query. When the flag is absent (the default), **both** empty and non-empty rooms are included in the search results.
Defaults to no filtering.
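
A hedged sketch of querying the List Room Admin API with both flags described above, using `requests`; the base URL and admin access token are placeholders, not values from the diff:

```python
# Hedged sketch: querying the List Room Admin API with the new filters.
# The base URL and the admin access token below are placeholders.
import requests

resp = requests.get(
    "https://localhost:8448/_synapse/admin/v1/rooms",
    params={"public_rooms": "false", "empty_rooms": "true"},
    headers={"Authorization": "Bearer <admin_access_token>"},
)
# Only rooms that are non-public AND empty (joined_members == 0) are returned;
# omitting a flag includes both values for that dimension, as documented above.
for room in resp.json()["rooms"]:
    print(room["room_id"], room["joined_members"])
```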

View File

@@ -4150,7 +4150,7 @@ By default, no room is excluded.
Example configuration:
```yaml
exclude_rooms_from_sync:
- "!foo:example.com"
- !foo:example.com
```
---
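
The quotes in the example matter because a bare `!` begins a YAML tag, so the unquoted form does not parse as a string. A minimal check with PyYAML:

```python
import yaml

# Unquoted, "!foo:example.com" is read as a YAML tag, not a string, and rejected.
try:
    yaml.safe_load("exclude_rooms_from_sync:\n  - !foo:example.com")
except yaml.YAMLError as e:
    print("rejected:", e)

# Quoted, it parses as the intended room ID string.
print(yaml.safe_load('exclude_rooms_from_sync:\n  - "!foo:example.com"'))
# {'exclude_rooms_from_sync': ['!foo:example.com']}
```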

poetry.lock (generated, 134 lines changed)
poetry.lock (generated, 134 lines changed)
View File

@@ -1319,67 +1319,67 @@ files = [
[[package]]
name = "msgpack"
version = "1.0.8"
version = "1.0.7"
description = "MessagePack serializer"
optional = false
python-versions = ">=3.8"
files = [
{file = "msgpack-1.0.8-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:505fe3d03856ac7d215dbe005414bc28505d26f0c128906037e66d98c4e95868"},
{file = "msgpack-1.0.8-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e6b7842518a63a9f17107eb176320960ec095a8ee3b4420b5f688e24bf50c53c"},
{file = "msgpack-1.0.8-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:376081f471a2ef24828b83a641a02c575d6103a3ad7fd7dade5486cad10ea659"},
{file = "msgpack-1.0.8-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e390971d082dba073c05dbd56322427d3280b7cc8b53484c9377adfbae67dc2"},
{file = "msgpack-1.0.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:00e073efcba9ea99db5acef3959efa45b52bc67b61b00823d2a1a6944bf45982"},
{file = "msgpack-1.0.8-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:82d92c773fbc6942a7a8b520d22c11cfc8fd83bba86116bfcf962c2f5c2ecdaa"},
{file = "msgpack-1.0.8-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9ee32dcb8e531adae1f1ca568822e9b3a738369b3b686d1477cbc643c4a9c128"},
{file = "msgpack-1.0.8-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e3aa7e51d738e0ec0afbed661261513b38b3014754c9459508399baf14ae0c9d"},
{file = "msgpack-1.0.8-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:69284049d07fce531c17404fcba2bb1df472bc2dcdac642ae71a2d079d950653"},
{file = "msgpack-1.0.8-cp310-cp310-win32.whl", hash = "sha256:13577ec9e247f8741c84d06b9ece5f654920d8365a4b636ce0e44f15e07ec693"},
{file = "msgpack-1.0.8-cp310-cp310-win_amd64.whl", hash = "sha256:e532dbd6ddfe13946de050d7474e3f5fb6ec774fbb1a188aaf469b08cf04189a"},
{file = "msgpack-1.0.8-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9517004e21664f2b5a5fd6333b0731b9cf0817403a941b393d89a2f1dc2bd836"},
{file = "msgpack-1.0.8-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d16a786905034e7e34098634b184a7d81f91d4c3d246edc6bd7aefb2fd8ea6ad"},
{file = "msgpack-1.0.8-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e2872993e209f7ed04d963e4b4fbae72d034844ec66bc4ca403329db2074377b"},
{file = "msgpack-1.0.8-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c330eace3dd100bdb54b5653b966de7f51c26ec4a7d4e87132d9b4f738220ba"},
{file = "msgpack-1.0.8-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:83b5c044f3eff2a6534768ccfd50425939e7a8b5cf9a7261c385de1e20dcfc85"},
{file = "msgpack-1.0.8-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1876b0b653a808fcd50123b953af170c535027bf1d053b59790eebb0aeb38950"},
{file = "msgpack-1.0.8-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:dfe1f0f0ed5785c187144c46a292b8c34c1295c01da12e10ccddfc16def4448a"},
{file = "msgpack-1.0.8-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:3528807cbbb7f315bb81959d5961855e7ba52aa60a3097151cb21956fbc7502b"},
{file = "msgpack-1.0.8-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e2f879ab92ce502a1e65fce390eab619774dda6a6ff719718069ac94084098ce"},
{file = "msgpack-1.0.8-cp311-cp311-win32.whl", hash = "sha256:26ee97a8261e6e35885c2ecd2fd4a6d38252246f94a2aec23665a4e66d066305"},
{file = "msgpack-1.0.8-cp311-cp311-win_amd64.whl", hash = "sha256:eadb9f826c138e6cf3c49d6f8de88225a3c0ab181a9b4ba792e006e5292d150e"},
{file = "msgpack-1.0.8-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:114be227f5213ef8b215c22dde19532f5da9652e56e8ce969bf0a26d7c419fee"},
{file = "msgpack-1.0.8-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:d661dc4785affa9d0edfdd1e59ec056a58b3dbb9f196fa43587f3ddac654ac7b"},
{file = "msgpack-1.0.8-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d56fd9f1f1cdc8227d7b7918f55091349741904d9520c65f0139a9755952c9e8"},
{file = "msgpack-1.0.8-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0726c282d188e204281ebd8de31724b7d749adebc086873a59efb8cf7ae27df3"},
{file = "msgpack-1.0.8-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8db8e423192303ed77cff4dce3a4b88dbfaf43979d280181558af5e2c3c71afc"},
{file = "msgpack-1.0.8-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:99881222f4a8c2f641f25703963a5cefb076adffd959e0558dc9f803a52d6a58"},
{file = "msgpack-1.0.8-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b5505774ea2a73a86ea176e8a9a4a7c8bf5d521050f0f6f8426afe798689243f"},
{file = "msgpack-1.0.8-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:ef254a06bcea461e65ff0373d8a0dd1ed3aa004af48839f002a0c994a6f72d04"},
{file = "msgpack-1.0.8-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e1dd7839443592d00e96db831eddb4111a2a81a46b028f0facd60a09ebbdd543"},
{file = "msgpack-1.0.8-cp312-cp312-win32.whl", hash = "sha256:64d0fcd436c5683fdd7c907eeae5e2cbb5eb872fafbc03a43609d7941840995c"},
{file = "msgpack-1.0.8-cp312-cp312-win_amd64.whl", hash = "sha256:74398a4cf19de42e1498368c36eed45d9528f5fd0155241e82c4082b7e16cffd"},
{file = "msgpack-1.0.8-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:0ceea77719d45c839fd73abcb190b8390412a890df2f83fb8cf49b2a4b5c2f40"},
{file = "msgpack-1.0.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1ab0bbcd4d1f7b6991ee7c753655b481c50084294218de69365f8f1970d4c151"},
{file = "msgpack-1.0.8-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:1cce488457370ffd1f953846f82323cb6b2ad2190987cd4d70b2713e17268d24"},
{file = "msgpack-1.0.8-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3923a1778f7e5ef31865893fdca12a8d7dc03a44b33e2a5f3295416314c09f5d"},
{file = "msgpack-1.0.8-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a22e47578b30a3e199ab067a4d43d790249b3c0587d9a771921f86250c8435db"},
{file = "msgpack-1.0.8-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bd739c9251d01e0279ce729e37b39d49a08c0420d3fee7f2a4968c0576678f77"},
{file = "msgpack-1.0.8-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d3420522057ebab1728b21ad473aa950026d07cb09da41103f8e597dfbfaeb13"},
{file = "msgpack-1.0.8-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5845fdf5e5d5b78a49b826fcdc0eb2e2aa7191980e3d2cfd2a30303a74f212e2"},
{file = "msgpack-1.0.8-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6a0e76621f6e1f908ae52860bdcb58e1ca85231a9b0545e64509c931dd34275a"},
{file = "msgpack-1.0.8-cp38-cp38-win32.whl", hash = "sha256:374a8e88ddab84b9ada695d255679fb99c53513c0a51778796fcf0944d6c789c"},
{file = "msgpack-1.0.8-cp38-cp38-win_amd64.whl", hash = "sha256:f3709997b228685fe53e8c433e2df9f0cdb5f4542bd5114ed17ac3c0129b0480"},
{file = "msgpack-1.0.8-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:f51bab98d52739c50c56658cc303f190785f9a2cd97b823357e7aeae54c8f68a"},
{file = "msgpack-1.0.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:73ee792784d48aa338bba28063e19a27e8d989344f34aad14ea6e1b9bd83f596"},
{file = "msgpack-1.0.8-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f9904e24646570539a8950400602d66d2b2c492b9010ea7e965025cb71d0c86d"},
{file = "msgpack-1.0.8-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e75753aeda0ddc4c28dce4c32ba2f6ec30b1b02f6c0b14e547841ba5b24f753f"},
{file = "msgpack-1.0.8-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5dbf059fb4b7c240c873c1245ee112505be27497e90f7c6591261c7d3c3a8228"},
{file = "msgpack-1.0.8-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4916727e31c28be8beaf11cf117d6f6f188dcc36daae4e851fee88646f5b6b18"},
{file = "msgpack-1.0.8-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7938111ed1358f536daf311be244f34df7bf3cdedb3ed883787aca97778b28d8"},
{file = "msgpack-1.0.8-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:493c5c5e44b06d6c9268ce21b302c9ca055c1fd3484c25ba41d34476c76ee746"},
{file = "msgpack-1.0.8-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5fbb160554e319f7b22ecf530a80a3ff496d38e8e07ae763b9e82fadfe96f273"},
{file = "msgpack-1.0.8-cp39-cp39-win32.whl", hash = "sha256:f9af38a89b6a5c04b7d18c492c8ccf2aee7048aff1ce8437c4683bb5a1df893d"},
{file = "msgpack-1.0.8-cp39-cp39-win_amd64.whl", hash = "sha256:ed59dd52075f8fc91da6053b12e8c89e37aa043f8986efd89e61fae69dc1b011"},
{file = "msgpack-1.0.8.tar.gz", hash = "sha256:95c02b0e27e706e48d0e5426d1710ca78e0f0628d6e89d5b5a5b91a5f12274f3"},
{file = "msgpack-1.0.7-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:04ad6069c86e531682f9e1e71b71c1c3937d6014a7c3e9edd2aa81ad58842862"},
{file = "msgpack-1.0.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cca1b62fe70d761a282496b96a5e51c44c213e410a964bdffe0928e611368329"},
{file = "msgpack-1.0.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e50ebce52f41370707f1e21a59514e3375e3edd6e1832f5e5235237db933c98b"},
{file = "msgpack-1.0.7-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a7b4f35de6a304b5533c238bee86b670b75b03d31b7797929caa7a624b5dda6"},
{file = "msgpack-1.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:28efb066cde83c479dfe5a48141a53bc7e5f13f785b92ddde336c716663039ee"},
{file = "msgpack-1.0.7-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4cb14ce54d9b857be9591ac364cb08dc2d6a5c4318c1182cb1d02274029d590d"},
{file = "msgpack-1.0.7-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b573a43ef7c368ba4ea06050a957c2a7550f729c31f11dd616d2ac4aba99888d"},
{file = "msgpack-1.0.7-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:ccf9a39706b604d884d2cb1e27fe973bc55f2890c52f38df742bc1d79ab9f5e1"},
{file = "msgpack-1.0.7-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cb70766519500281815dfd7a87d3a178acf7ce95390544b8c90587d76b227681"},
{file = "msgpack-1.0.7-cp310-cp310-win32.whl", hash = "sha256:b610ff0f24e9f11c9ae653c67ff8cc03c075131401b3e5ef4b82570d1728f8a9"},
{file = "msgpack-1.0.7-cp310-cp310-win_amd64.whl", hash = "sha256:a40821a89dc373d6427e2b44b572efc36a2778d3f543299e2f24eb1a5de65415"},
{file = "msgpack-1.0.7-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:576eb384292b139821c41995523654ad82d1916da6a60cff129c715a6223ea84"},
{file = "msgpack-1.0.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:730076207cb816138cf1af7f7237b208340a2c5e749707457d70705715c93b93"},
{file = "msgpack-1.0.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:85765fdf4b27eb5086f05ac0491090fc76f4f2b28e09d9350c31aac25a5aaff8"},
{file = "msgpack-1.0.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3476fae43db72bd11f29a5147ae2f3cb22e2f1a91d575ef130d2bf49afd21c46"},
{file = "msgpack-1.0.7-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6d4c80667de2e36970ebf74f42d1088cc9ee7ef5f4e8c35eee1b40eafd33ca5b"},
{file = "msgpack-1.0.7-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b0bf0effb196ed76b7ad883848143427a73c355ae8e569fa538365064188b8e"},
{file = "msgpack-1.0.7-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:f9a7c509542db4eceed3dcf21ee5267ab565a83555c9b88a8109dcecc4709002"},
{file = "msgpack-1.0.7-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:84b0daf226913133f899ea9b30618722d45feffa67e4fe867b0b5ae83a34060c"},
{file = "msgpack-1.0.7-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:ec79ff6159dffcc30853b2ad612ed572af86c92b5168aa3fc01a67b0fa40665e"},
{file = "msgpack-1.0.7-cp311-cp311-win32.whl", hash = "sha256:3e7bf4442b310ff154b7bb9d81eb2c016b7d597e364f97d72b1acc3817a0fdc1"},
{file = "msgpack-1.0.7-cp311-cp311-win_amd64.whl", hash = "sha256:3f0c8c6dfa6605ab8ff0611995ee30d4f9fcff89966cf562733b4008a3d60d82"},
{file = "msgpack-1.0.7-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:f0936e08e0003f66bfd97e74ee530427707297b0d0361247e9b4f59ab78ddc8b"},
{file = "msgpack-1.0.7-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:98bbd754a422a0b123c66a4c341de0474cad4a5c10c164ceed6ea090f3563db4"},
{file = "msgpack-1.0.7-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b291f0ee7961a597cbbcc77709374087fa2a9afe7bdb6a40dbbd9b127e79afee"},
{file = "msgpack-1.0.7-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ebbbba226f0a108a7366bf4b59bf0f30a12fd5e75100c630267d94d7f0ad20e5"},
{file = "msgpack-1.0.7-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1e2d69948e4132813b8d1131f29f9101bc2c915f26089a6d632001a5c1349672"},
{file = "msgpack-1.0.7-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bdf38ba2d393c7911ae989c3bbba510ebbcdf4ecbdbfec36272abe350c454075"},
{file = "msgpack-1.0.7-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:993584fc821c58d5993521bfdcd31a4adf025c7d745bbd4d12ccfecf695af5ba"},
{file = "msgpack-1.0.7-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:52700dc63a4676669b341ba33520f4d6e43d3ca58d422e22ba66d1736b0a6e4c"},
{file = "msgpack-1.0.7-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e45ae4927759289c30ccba8d9fdce62bb414977ba158286b5ddaf8df2cddb5c5"},
{file = "msgpack-1.0.7-cp312-cp312-win32.whl", hash = "sha256:27dcd6f46a21c18fa5e5deed92a43d4554e3df8d8ca5a47bf0615d6a5f39dbc9"},
{file = "msgpack-1.0.7-cp312-cp312-win_amd64.whl", hash = "sha256:7687e22a31e976a0e7fc99c2f4d11ca45eff652a81eb8c8085e9609298916dcf"},
{file = "msgpack-1.0.7-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5b6ccc0c85916998d788b295765ea0e9cb9aac7e4a8ed71d12e7d8ac31c23c95"},
{file = "msgpack-1.0.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:235a31ec7db685f5c82233bddf9858748b89b8119bf4538d514536c485c15fe0"},
{file = "msgpack-1.0.7-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:cab3db8bab4b7e635c1c97270d7a4b2a90c070b33cbc00c99ef3f9be03d3e1f7"},
{file = "msgpack-1.0.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bfdd914e55e0d2c9e1526de210f6fe8ffe9705f2b1dfcc4aecc92a4cb4b533d"},
{file = "msgpack-1.0.7-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:36e17c4592231a7dbd2ed09027823ab295d2791b3b1efb2aee874b10548b7524"},
{file = "msgpack-1.0.7-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:38949d30b11ae5f95c3c91917ee7a6b239f5ec276f271f28638dec9156f82cfc"},
{file = "msgpack-1.0.7-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:ff1d0899f104f3921d94579a5638847f783c9b04f2d5f229392ca77fba5b82fc"},
{file = "msgpack-1.0.7-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:dc43f1ec66eb8440567186ae2f8c447d91e0372d793dfe8c222aec857b81a8cf"},
{file = "msgpack-1.0.7-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:dd632777ff3beaaf629f1ab4396caf7ba0bdd075d948a69460d13d44357aca4c"},
{file = "msgpack-1.0.7-cp38-cp38-win32.whl", hash = "sha256:4e71bc4416de195d6e9b4ee93ad3f2f6b2ce11d042b4d7a7ee00bbe0358bd0c2"},
{file = "msgpack-1.0.7-cp38-cp38-win_amd64.whl", hash = "sha256:8f5b234f567cf76ee489502ceb7165c2a5cecec081db2b37e35332b537f8157c"},
{file = "msgpack-1.0.7-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:bfef2bb6ef068827bbd021017a107194956918ab43ce4d6dc945ffa13efbc25f"},
{file = "msgpack-1.0.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:484ae3240666ad34cfa31eea7b8c6cd2f1fdaae21d73ce2974211df099a95d81"},
{file = "msgpack-1.0.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3967e4ad1aa9da62fd53e346ed17d7b2e922cba5ab93bdd46febcac39be636fc"},
{file = "msgpack-1.0.7-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8dd178c4c80706546702c59529ffc005681bd6dc2ea234c450661b205445a34d"},
{file = "msgpack-1.0.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6ffbc252eb0d229aeb2f9ad051200668fc3a9aaa8994e49f0cb2ffe2b7867e7"},
{file = "msgpack-1.0.7-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:822ea70dc4018c7e6223f13affd1c5c30c0f5c12ac1f96cd8e9949acddb48a61"},
{file = "msgpack-1.0.7-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:384d779f0d6f1b110eae74cb0659d9aa6ff35aaf547b3955abf2ab4c901c4819"},
{file = "msgpack-1.0.7-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f64e376cd20d3f030190e8c32e1c64582eba56ac6dc7d5b0b49a9d44021b52fd"},
{file = "msgpack-1.0.7-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5ed82f5a7af3697b1c4786053736f24a0efd0a1b8a130d4c7bfee4b9ded0f08f"},
{file = "msgpack-1.0.7-cp39-cp39-win32.whl", hash = "sha256:f26a07a6e877c76a88e3cecac8531908d980d3d5067ff69213653649ec0f60ad"},
{file = "msgpack-1.0.7-cp39-cp39-win_amd64.whl", hash = "sha256:1dc93e8e4653bdb5910aed79f11e165c85732067614f180f70534f056da97db3"},
{file = "msgpack-1.0.7.tar.gz", hash = "sha256:572efc93db7a4d27e404501975ca6d2d9775705c2d922390d878fcf768d92c87"},
]
[[package]]
@@ -1524,13 +1524,13 @@ files = [
[[package]]
name = "phonenumbers"
version = "8.13.39"
version = "8.13.37"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.39-py2.py3-none-any.whl", hash = "sha256:3ad2d086fa71e7eef409001b9195ac54bebb0c6e3e752209b558ca192c9229a0"},
{file = "phonenumbers-8.13.39.tar.gz", hash = "sha256:db7ca4970d206b2056231105300753b1a5b229f43416f8c2b3010e63fbb68d77"},
{file = "phonenumbers-8.13.37-py2.py3-none-any.whl", hash = "sha256:4ea00ef5012422c08c7955c21131e7ae5baa9a3ef52cf2d561e963f023006b80"},
{file = "phonenumbers-8.13.37.tar.gz", hash = "sha256:bd315fed159aea0516f7c367231810fe8344d5bec26156b88fa18374c11d1cf2"},
]
[[package]]
@@ -2822,13 +2822,13 @@ referencing = "*"
[[package]]
name = "types-netaddr"
version = "1.3.0.20240530"
version = "1.2.0.20240219"
description = "Typing stubs for netaddr"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-netaddr-1.3.0.20240530.tar.gz", hash = "sha256:742c2ec1f202b666f544223e2616b34f1f13df80c91e5aeaaa93a72e4d0774ea"},
{file = "types_netaddr-1.3.0.20240530-py3-none-any.whl", hash = "sha256:354998d018e326da4f1d9b005fc91137b7c2c473aaf03c4ef64bf83c6861b440"},
{file = "types-netaddr-1.2.0.20240219.tar.gz", hash = "sha256:984e70ad838218d3032f37f05a7e294f7b007fe274ec9d774265c8c06698395f"},
{file = "types_netaddr-1.2.0.20240219-py3-none-any.whl", hash = "sha256:b26144e878acb8a1a9008e6997863714db04f8029a0f7f6bfe483c977d21b522"},
]
[[package]]
@@ -2881,13 +2881,13 @@ types-cffi = "*"
[[package]]
name = "types-pyyaml"
version = "6.0.12.20240311"
version = "6.0.12.12"
description = "Typing stubs for PyYAML"
optional = false
python-versions = ">=3.8"
python-versions = "*"
files = [
{file = "types-PyYAML-6.0.12.20240311.tar.gz", hash = "sha256:a9e0f0f88dc835739b0c1ca51ee90d04ca2a897a71af79de9aec5f38cb0a5342"},
{file = "types_PyYAML-6.0.12.20240311-py3-none-any.whl", hash = "sha256:b845b06a1c7e54b8e5b4c683043de0d9caf205e7434b3edc678ff2411979b8f6"},
{file = "types-PyYAML-6.0.12.12.tar.gz", hash = "sha256:334373d392fde0fdf95af5c3f1661885fa10c52167b14593eb856289e1855062"},
{file = "types_PyYAML-6.0.12.12-py3-none-any.whl", hash = "sha256:c05bc6c158facb0676674b7f11fe3960db4f389718e19e62bd2b84d6205cfd24"},
]
[[package]]

View File

@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.109.0"
version = "1.109.0rc3"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

View File

@@ -52,7 +52,6 @@ def request_registration(
user_type: Optional[str] = None,
_print: Callable[[str], None] = print,
exit: Callable[[int], None] = sys.exit,
exists_ok: bool = False,
) -> None:
url = "%s/_synapse/admin/v1/register" % (server_location.rstrip("/"),)
@@ -98,10 +97,6 @@ def request_registration(
r = requests.post(url, json=data)
if r.status_code != 200:
response = r.json()
if exists_ok and response["errcode"] == "M_USER_IN_USE":
_print("User already exists. Skipping.")
return
_print("ERROR! Received %d %s" % (r.status_code, r.reason))
if 400 <= r.status_code < 500:
try:
@@ -120,7 +115,6 @@ def register_new_user(
shared_secret: str,
admin: Optional[bool],
user_type: Optional[str],
exists_ok: bool = False,
) -> None:
if not user:
try:
@@ -160,13 +154,7 @@ def register_new_user(
admin = False
request_registration(
user,
password,
server_location,
shared_secret,
bool(admin),
user_type,
exists_ok=exists_ok,
user, password, server_location, shared_secret, bool(admin), user_type
)
@@ -186,22 +174,10 @@ def main() -> None:
help="Local part of the new user. Will prompt if omitted.",
)
parser.add_argument(
"--exists-ok",
action="store_true",
help="Do not fail if user already exists.",
)
password_group = parser.add_mutually_exclusive_group()
password_group.add_argument(
"-p",
"--password",
default=None,
help="New password for user. Will prompt for a password if "
"this flag and `--password-file` are both omitted.",
)
password_group.add_argument(
"--password-file",
default=None,
help="File containing the new password for user. If set, will override `--password`.",
help="New password for user. Will prompt if omitted.",
)
parser.add_argument(
"-t",
@@ -209,7 +185,6 @@ def main() -> None:
default=None,
help="User type as specified in synapse.api.constants.UserTypes",
)
admin_group = parser.add_mutually_exclusive_group()
admin_group.add_argument(
"-a",
@@ -272,11 +247,6 @@ def main() -> None:
print(_NO_SHARED_SECRET_OPTS_ERROR, file=sys.stderr)
sys.exit(1)
if args.password_file:
password = _read_file(args.password_file, "password-file").strip()
else:
password = args.password
if args.server_url:
server_url = args.server_url
elif config is not None:
@@ -300,13 +270,7 @@ def main() -> None:
admin = args.admin
register_new_user(
args.user,
password,
server_url,
secret,
admin,
args.user_type,
exists_ok=args.exists_ok,
args.user, args.password, server_url, secret, admin, args.user_type
)
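
For reference, a hedged sketch of calling the registration helper with the `exists_ok` flag touched by this hunk; the module path, server URL, and shared secret are placeholders/assumptions, not values from the diff:

```python
# Hedged sketch of invoking the registration helper shown above. Argument
# order mirrors the call site in the hunk; all values are placeholders.
from synapse._scripts.register_new_matrix_user import request_registration

request_registration(
    "alice",                          # user
    "s3cret-password",                # password
    "https://localhost:8448",         # server_location
    "<registration_shared_secret>",   # shared_secret
    False,                            # admin
    None,                             # user_type
    exists_ok=True,  # print "User already exists. Skipping." instead of failing
)
```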

View File

@@ -439,6 +439,3 @@ class ExperimentalConfig(Config):
# MSC4151: Report room API (Client-Server API)
self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)
# MSC4156: Migrate server_name to via
self.msc4156_enabled: bool = experimental.get("msc4156_enabled", False)

View File

@@ -19,6 +19,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
import inspect
import logging
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple, Type
@@ -33,6 +34,7 @@ from synapse.federation.transport.server.federation import (
FEDERATION_SERVLET_CLASSES,
FederationAccountStatusServlet,
FederationUnstableClientKeysClaimServlet,
FederationUnstableMediaDownloadServlet,
)
from synapse.http.server import HttpServer, JsonResource
from synapse.http.servlet import (
@@ -315,6 +317,28 @@ def register_servlets(
):
continue
if servletclass == FederationUnstableMediaDownloadServlet:
if (
not hs.config.server.enable_media_repo
or not hs.config.experimental.msc3916_authenticated_media_enabled
):
continue
# don't load the endpoint if the storage provider is incompatible
media_repo = hs.get_media_repository()
load_download_endpoint = True
for provider in media_repo.media_storage.storage_providers:
signature = inspect.signature(provider.backend.fetch)
if "federation" not in signature.parameters:
logger.warning(
f"Federation media `/download` endpoint will not be enabled as storage provider {provider.backend} is not compatible with this endpoint."
)
load_download_endpoint = False
break
if not load_download_endpoint:
continue
servletclass(
hs=hs,
authenticator=authenticator,

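The `inspect.signature` check above gates the federation `/download` endpoint on whether each storage provider's `fetch` accepts a `federation` parameter. A hedged sketch of a third-party backend that passes that check; parameter names other than `federation` are assumptions for illustration:

```python
# Hedged sketch of a storage provider backend compatible with the check above:
# its fetch() declares a "federation" parameter. Other names are assumptions.
import inspect
from typing import Any, Optional


class CompatibleBackend:
    async def fetch(
        self,
        path: str,
        file_info: Any,
        media_info: Optional[Any] = None,
        federation: bool = False,
    ) -> Optional[Any]:
        # Return a Responder for the stored file, or None if not found.
        ...


# Mirrors the registration-time check: "federation" must be a parameter.
assert "federation" in inspect.signature(CompatibleBackend().fetch).parameters
```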
View File

@@ -360,13 +360,29 @@ class BaseFederationServlet:
"request"
)
return None
if (
func.__self__.__class__.__name__ # type: ignore
== "FederationUnstableMediaDownloadServlet"
):
response = await func(
origin, content, request, *args, **kwargs
)
else:
response = await func(
origin, content, request.args, *args, **kwargs
)
else:
if (
func.__self__.__class__.__name__ # type: ignore
== "FederationUnstableMediaDownloadServlet"
):
response = await func(
origin, content, request, *args, **kwargs
)
else:
response = await func(
origin, content, request.args, *args, **kwargs
)
else:
response = await func(
origin, content, request.args, *args, **kwargs
)
finally:
# if we used the origin's context as the parent, add a new span using
# the servlet span as a parent, so that we have a link

View File

@@ -44,10 +44,13 @@ from synapse.federation.transport.server._base import (
)
from synapse.http.servlet import (
parse_boolean_from_args,
parse_integer,
parse_integer_from_args,
parse_string_from_args,
parse_strings_from_args,
)
from synapse.http.site import SynapseRequest
from synapse.media._base import DEFAULT_MAX_TIMEOUT_MS, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
from synapse.types import JsonDict
from synapse.util import SYNAPSE_VERSION
from synapse.util.ratelimitutils import FederationRateLimiter
@@ -787,6 +790,43 @@ class FederationAccountStatusServlet(BaseFederationServerServlet):
return 200, {"account_statuses": statuses, "failures": failures}
class FederationUnstableMediaDownloadServlet(BaseFederationServerServlet):
"""
Implementation of new federation media `/download` endpoint outlined in MSC3916. Returns
a multipart/form-data response consisting of a JSON object and the requested media
item. This endpoint only returns local media.
"""
PATH = "/media/download/(?P<media_id>[^/]*)"
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc3916"
RATELIMIT = True
def __init__(
self,
hs: "HomeServer",
ratelimiter: FederationRateLimiter,
authenticator: Authenticator,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self.media_repo = self.hs.get_media_repository()
async def on_GET(
self,
origin: Optional[str],
content: Literal[None],
request: SynapseRequest,
media_id: str,
) -> None:
max_timeout_ms = parse_integer(
request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
)
max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
await self.media_repo.get_local_media(
request, media_id, None, max_timeout_ms, federation=True
)
FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
FederationSendServlet,
FederationEventServlet,
@@ -818,4 +858,5 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
FederationV1SendKnockServlet,
FederationMakeKnockServlet,
FederationAccountStatusServlet,
FederationUnstableMediaDownloadServlet,
)
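
Putting `PREFIX` and `PATH` together, the servlet listens on a URL of the following shape, assuming `FEDERATION_UNSTABLE_PREFIX` is `/_matrix/federation/unstable`; requests must carry a federation `X-Matrix` signature, not a client access token:

```python
# Hedged sketch of the route registered above. The prefix constant's value is
# an assumption; the media_id is a placeholder.
media_id = "ascERGshawAWawugaAcauga"
url = (
    "https://remote.example.com:8448"
    "/_matrix/federation/unstable/org.matrix.msc3916"
    f"/media/download/{media_id}"
    "?timeout_ms=20000"  # capped server-side at MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
)
print(url)
```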

View File

@@ -201,7 +201,7 @@ class MessageHandler:
if at_token:
last_event_id = (
await self.store.get_last_event_id_in_room_before_stream_ordering(
await self.store.get_last_event_in_room_before_stream_ordering(
room_id,
end_token=at_token.room_key,
)

View File

@@ -18,22 +18,14 @@
#
#
import logging
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
from typing import TYPE_CHECKING, AbstractSet, Dict, List, Optional
from immutabledict import immutabledict
from synapse.api.constants import AccountDataTypes, EventTypes, Membership
from synapse.api.constants import AccountDataTypes, Membership
from synapse.events import EventBase
from synapse.storage.roommember import RoomsForUser
from synapse.types import (
PersistedEventPosition,
Requester,
RoomStreamToken,
StreamToken,
UserID,
)
from synapse.types import Requester, RoomStreamToken, StreamToken, UserID
from synapse.types.handlers import OperationType, SlidingSyncConfig, SlidingSyncResult
from synapse.types.state import StateFilter
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -41,27 +33,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
def convert_event_to_rooms_for_user(event: EventBase) -> RoomsForUser:
"""
Quick helper to convert an event to a `RoomsForUser` object.
"""
# These fields should be present for all persisted events
assert event.internal_metadata.stream_ordering is not None
assert event.internal_metadata.instance_name is not None
return RoomsForUser(
room_id=event.room_id,
sender=event.sender,
membership=event.membership,
event_id=event.event_id,
event_pos=PersistedEventPosition(
event.internal_metadata.instance_name,
event.internal_metadata.stream_ordering,
),
room_version_id=event.room_version.identifier,
)
def filter_membership_for_sync(*, membership: str, user_id: str, sender: str) -> bool:
"""
Returns True if the membership event should be included in the sync response,
@@ -86,7 +57,6 @@ class SlidingSyncHandler:
def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock()
self.store = hs.get_datastores().main
self.storage_controllers = hs.get_storage_controllers()
self.auth_blocking = hs.get_auth_blocking()
self.notifier = hs.get_notifier()
self.event_sources = hs.get_event_sources()
@@ -199,28 +169,26 @@ class SlidingSyncHandler:
# See https://github.com/matrix-org/matrix-doc/issues/1144
raise NotImplementedError()
# Get all of the room IDs that the user should be able to see in the sync
# response
room_id_set = await self.get_sync_room_ids_for_user(
sync_config.user,
from_token=from_token,
to_token=to_token,
)
# Assemble sliding window lists
lists: Dict[str, SlidingSyncResult.SlidingWindowList] = {}
if sync_config.lists:
# Get all of the room IDs that the user should be able to see in the sync
# response
sync_room_map = await self.get_sync_room_ids_for_user(
sync_config.user,
from_token=from_token,
to_token=to_token,
)
for list_key, list_config in sync_config.lists.items():
# Apply filters
filtered_sync_room_map = sync_room_map
filtered_room_ids = room_id_set
if list_config.filters is not None:
filtered_sync_room_map = await self.filter_rooms(
sync_config.user, sync_room_map, list_config.filters, to_token
filtered_room_ids = await self.filter_rooms(
sync_config.user, room_id_set, list_config.filters, to_token
)
sorted_room_info = await self.sort_rooms(
filtered_sync_room_map, to_token
)
# TODO: Apply sorts
sorted_room_ids = sorted(filtered_room_ids)
ops: List[SlidingSyncResult.SlidingWindowList.Operation] = []
if list_config.ranges:
@@ -229,17 +197,12 @@ class SlidingSyncHandler:
SlidingSyncResult.SlidingWindowList.Operation(
op=OperationType.SYNC,
range=range,
room_ids=[
room_id
for room_id, _ in sorted_room_info[
range[0] : range[1]
]
],
room_ids=sorted_room_ids[range[0] : range[1]],
)
)
lists[list_key] = SlidingSyncResult.SlidingWindowList(
count=len(sorted_room_info),
count=len(sorted_room_ids),
ops=ops,
)
@@ -256,7 +219,7 @@ class SlidingSyncHandler:
user: UserID,
to_token: StreamToken,
from_token: Optional[StreamToken] = None,
) -> Dict[str, RoomsForUser]:
) -> AbstractSet[str]:
"""
Fetch room IDs that should be listed for this user in the sync response (the
full room list that will be filtered, sorted, and sliced).
@@ -274,14 +237,11 @@ class SlidingSyncHandler:
to tell when a room was forgotten at the moment so we can't factor it into the
from/to range.
Args:
user: User to fetch rooms for
to_token: The token to fetch rooms up to.
from_token: The point in the stream to sync from.
Returns:
A dictionary of room IDs that should be listed in the sync response along
with membership information in that room at the time of `to_token`.
"""
user_id = user.to_string()
@@ -301,11 +261,11 @@ class SlidingSyncHandler:
# If the user has never joined any rooms before, we can just return an empty list
if not room_for_user_list:
return {}
return set()
# Our working list of rooms that can show up in the sync response
sync_room_id_set = {
room_for_user.room_id: room_for_user
room_for_user.room_id
for room_for_user in room_for_user_list
if filter_membership_for_sync(
membership=room_for_user.membership,
@@ -455,9 +415,7 @@ class SlidingSyncHandler:
not was_last_membership_already_included
and should_prev_membership_be_included
):
sync_room_id_set[room_id] = convert_event_to_rooms_for_user(
last_membership_change_after_to_token
)
sync_room_id_set.add(room_id)
# 1b) Remove rooms that the user joined (hasn't left) after the `to_token`
#
# For example, if the last membership event after the `to_token` is a "join"
@@ -468,7 +426,7 @@ class SlidingSyncHandler:
was_last_membership_already_included
and not should_prev_membership_be_included
):
del sync_room_id_set[room_id]
sync_room_id_set.discard(room_id)
# 2) -----------------------------------------------------
# We fix-up newly_left rooms after the first fixup because it may have removed
@@ -503,32 +461,25 @@ class SlidingSyncHandler:
# include newly_left rooms because the last event that the user should see
# is their own leave event
if last_membership_change_in_from_to_range.membership == Membership.LEAVE:
sync_room_id_set[room_id] = convert_event_to_rooms_for_user(
last_membership_change_in_from_to_range
)
sync_room_id_set.add(room_id)
return sync_room_id_set
async def filter_rooms(
self,
user: UserID,
sync_room_map: Dict[str, RoomsForUser],
room_id_set: AbstractSet[str],
filters: SlidingSyncConfig.SlidingSyncList.Filters,
to_token: StreamToken,
) -> Dict[str, RoomsForUser]:
) -> AbstractSet[str]:
"""
Filter rooms based on the sync request.
Args:
user: User to filter rooms for
sync_room_map: Dictionary of room IDs to sort along with membership
information in the room at the time of `to_token`.
room_id_set: Set of room IDs to filter down
filters: Filters to apply
to_token: We filter based on the state of the room at this token
Returns:
A filtered dictionary of room IDs along with membership information in the
room at the time of `to_token`.
"""
user_id = user.to_string()
@@ -537,7 +488,7 @@ class SlidingSyncHandler:
# TODO: Exclude partially stated rooms unless the `required_state` has
# `["m.room.member", "$LAZY"]`
filtered_room_id_set = set(sync_room_map.keys())
filtered_room_id_set = set(room_id_set)
# Filter for Direct-Message (DM) rooms
if filters.is_dm is not None:
@@ -572,26 +523,8 @@ class SlidingSyncHandler:
if filters.spaces:
raise NotImplementedError()
# Filter for encrypted rooms
if filters.is_encrypted is not None:
# Make a copy so we don't run into an error: `Set changed size during
# iteration`, when we filter out and remove items
for room_id in list(filtered_room_id_set):
state_at_to_token = await self.storage_controllers.state.get_state_at(
room_id,
to_token,
state_filter=StateFilter.from_types(
[(EventTypes.RoomEncryption, "")]
),
)
is_encrypted = state_at_to_token.get((EventTypes.RoomEncryption, ""))
# If we're looking for encrypted rooms, filter out rooms that are not
# encrypted and vice versa
if (filters.is_encrypted and not is_encrypted) or (
not filters.is_encrypted and is_encrypted
):
filtered_room_id_set.remove(room_id)
if filters.is_encrypted:
raise NotImplementedError()
if filters.is_invite:
raise NotImplementedError()
@@ -611,57 +544,4 @@ class SlidingSyncHandler:
if filters.not_tags:
raise NotImplementedError()
# Assemble a new sync room map but only with the `filtered_room_id_set`
return {room_id: sync_room_map[room_id] for room_id in filtered_room_id_set}
async def sort_rooms(
self,
sync_room_map: Dict[str, RoomsForUser],
to_token: StreamToken,
) -> List[Tuple[str, RoomsForUser]]:
"""
Sort by `stream_ordering` of the last event that the user should see in the
room. `stream_ordering` is unique so we get a stable sort.
Args:
sync_room_map: Dictionary of room IDs to sort along with membership
information in the room at the time of `to_token`.
to_token: We sort based on the events in the room at this token (<= `to_token`)
Returns:
A sorted list of room IDs by `stream_ordering` along with membership information.
"""
# Assemble a map of room ID to the `stream_ordering` of the last activity that the
# user should see in the room (<= `to_token`)
last_activity_in_room_map: Dict[str, int] = {}
for room_id, room_for_user in sync_room_map.items():
# If they are fully-joined to the room, let's find the latest activity
# at/before the `to_token`.
if room_for_user.membership == Membership.JOIN:
last_event_result = (
await self.store.get_last_event_pos_in_room_before_stream_ordering(
room_id, to_token.room_key
)
)
# If the room has no events at/before the `to_token`, this is probably a
# mistake in the code that generates the `sync_room_map` since that should
# only give us rooms that the user had membership in during the token range.
assert last_event_result is not None
_, event_pos = last_event_result
last_activity_in_room_map[room_id] = event_pos.stream
else:
# Otherwise, if the user has left/been invited/knocked/been banned from
# a room, they shouldn't see anything past that point.
last_activity_in_room_map[room_id] = room_for_user.event_pos.stream
return sorted(
sync_room_map.items(),
# Sort by the last activity (stream_ordering) in the room
key=lambda room_info: last_activity_in_room_map[room_info[0]],
# We want descending order
reverse=True,
)
return filtered_room_id_set
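
The removed `sort_rooms` helper boils down to a descending sort over a per-room `stream_ordering` map; a standalone illustration with made-up room IDs:

```python
# Standalone illustration of the descending stream_ordering sort performed by
# the removed sort_rooms(). Room IDs and orderings are made up.
last_activity_in_room_map = {"!a:x": 10, "!b:x": 42, "!c:x": 7}

sorted_rooms = sorted(
    last_activity_in_room_map.items(),
    key=lambda room_info: room_info[1],  # last activity (stream_ordering)
    reverse=True,  # most recently active room first
)
print(sorted_rooms)  # [('!b:x', 42), ('!a:x', 10), ('!c:x', 7)]
```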

View File

@@ -979,6 +979,89 @@ class SyncHandler:
bundled_aggregations=bundled_aggregations,
)
async def get_state_after_event(
self,
event_id: str,
state_filter: Optional[StateFilter] = None,
await_full_state: bool = True,
) -> StateMap[str]:
"""
Get the room state after the given event
Args:
event_id: event of interest
state_filter: The state filter used to fetch state from the database.
await_full_state: if `True`, will block if we do not yet have complete state
at the event and `state_filter` is not satisfied by partial state.
Defaults to `True`.
"""
state_ids = await self._state_storage_controller.get_state_ids_for_event(
event_id,
state_filter=state_filter or StateFilter.all(),
await_full_state=await_full_state,
)
# using get_metadata_for_events here (instead of get_event) sidesteps an issue
# with redactions: if `event_id` is a redaction event, and we don't have the
# original (possibly because it got purged), get_event will refuse to return
# the redaction event, which isn't terribly helpful here.
#
# (To be fair, in that case we could assume it's *not* a state event, and
# therefore we don't need to worry about it. But still, it seems cleaner just
# to pull the metadata.)
m = (await self.store.get_metadata_for_events([event_id]))[event_id]
if m.state_key is not None and m.rejection_reason is None:
state_ids = dict(state_ids)
state_ids[(m.event_type, m.state_key)] = event_id
return state_ids
async def get_state_at(
self,
room_id: str,
stream_position: StreamToken,
state_filter: Optional[StateFilter] = None,
await_full_state: bool = True,
) -> StateMap[str]:
"""Get the room state at a particular stream position
Args:
room_id: room for which to get state
stream_position: point at which to get state
state_filter: The state filter used to fetch state from the database.
await_full_state: if `True`, will block if we do not yet have complete state
at the last event in the room before `stream_position` and
`state_filter` is not satisfied by partial state. Defaults to `True`.
"""
# FIXME: This gets the state at the latest event before the stream ordering,
# which might not be the same as the "current state" of the room at the time
# of the stream token if there were multiple forward extremities at the time.
last_event_id = await self.store.get_last_event_in_room_before_stream_ordering(
room_id,
end_token=stream_position.room_key,
)
if last_event_id:
state = await self.get_state_after_event(
last_event_id,
state_filter=state_filter or StateFilter.all(),
await_full_state=await_full_state,
)
else:
# no events in this room - so presumably no state
state = {}
# (erikj) This should be rarely hit, but we've had some reports that
# we get more state down gappy syncs than we should, so let's add
# some logging.
logger.info(
"Failed to find any events in room %s at %s",
room_id,
stream_position.room_key,
)
return state
async def compute_summary(
self,
room_id: str,
@@ -1352,7 +1435,7 @@ class SyncHandler:
await_full_state = True
lazy_load_members = False
state_at_timeline_end = await self._state_storage_controller.get_state_at(
state_at_timeline_end = await self.get_state_at(
room_id,
stream_position=end_token,
state_filter=state_filter,
@@ -1436,7 +1519,7 @@ class SyncHandler:
# We need to make sure the first event in our batch points to the
# last event in the previous batch.
last_event_id_prev_batch = (
await self.store.get_last_event_id_in_room_before_stream_ordering(
await self.store.get_last_event_in_room_before_stream_ordering(
room_id,
end_token=since_token.room_key,
)
@@ -1480,7 +1563,7 @@ class SyncHandler:
else:
# We can get here if the user has ignored the senders of all
# the recent events.
state_at_timeline_start = await self._state_storage_controller.get_state_at(
state_at_timeline_start = await self.get_state_at(
room_id,
stream_position=end_token,
state_filter=state_filter,
@@ -1502,14 +1585,14 @@ class SyncHandler:
# about them).
state_filter = StateFilter.all()
state_at_previous_sync = await self._state_storage_controller.get_state_at(
state_at_previous_sync = await self.get_state_at(
room_id,
stream_position=since_token,
state_filter=state_filter,
await_full_state=await_full_state,
)
state_at_timeline_end = await self._state_storage_controller.get_state_at(
state_at_timeline_end = await self.get_state_at(
room_id,
stream_position=end_token,
state_filter=state_filter,
@@ -2508,7 +2591,7 @@ class SyncHandler:
continue
if room_id in sync_result_builder.joined_room_ids or has_join:
old_state_ids = await self._state_storage_controller.get_state_at(
old_state_ids = await self.get_state_at(
room_id,
since_token,
state_filter=StateFilter.from_types([(EventTypes.Member, user_id)]),
@@ -2538,14 +2621,12 @@ class SyncHandler:
newly_left_rooms.append(room_id)
else:
if not old_state_ids:
old_state_ids = (
await self._state_storage_controller.get_state_at(
room_id,
since_token,
state_filter=StateFilter.from_types(
[(EventTypes.Member, user_id)]
),
)
old_state_ids = await self.get_state_at(
room_id,
since_token,
state_filter=StateFilter.from_types(
[(EventTypes.Member, user_id)]
),
)
old_mem_ev_id = old_state_ids.get(
(EventTypes.Member, user_id), None

View File

@@ -119,15 +119,14 @@ def parse_integer(
default: value to use if the parameter is absent, defaults to None.
required: whether to raise a 400 SynapseError if the parameter is absent,
defaults to False.
negative: whether to allow negative integers, defaults to False (disallowing
negatives).
negative: whether to allow negative integers, defaults to True.
Returns:
An int value or the default.
Raises:
SynapseError: if the parameter is absent and required, if the
parameter is present and not an integer, or if the
parameter is illegitimately negative.
parameter is illegitimate negative.
"""
args: Mapping[bytes, Sequence[bytes]] = request.args # type: ignore
return parse_integer_from_args(args, name, default, required, negative)
@@ -165,7 +164,7 @@ def parse_integer_from_args(
name: str,
default: Optional[int] = None,
required: bool = False,
negative: bool = False,
negative: bool = True,
) -> Optional[int]:
"""Parse an integer parameter from the request string
@@ -175,8 +174,7 @@ def parse_integer_from_args(
default: value to use if the parameter is absent, defaults to None.
required: whether to raise a 400 SynapseError if the parameter is absent,
defaults to False.
negative: whether to allow negative integers, defaults to False (disallowing
negatives).
negative: whether to allow negative integers, defaults to True.
Returns:
An int value or the default.
@@ -184,7 +182,7 @@ def parse_integer_from_args(
Raises:
SynapseError: if the parameter is absent and required, if the
parameter is present and not an integer, or if the
parameter is illegitimately negative.
parameter is illegitimate negative.
"""
name_bytes = name.encode("ascii")
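
A standalone re-creation of the documented `negative` semantics (not Synapse's implementation): with the stricter default, a negative query-parameter value is rejected rather than accepted.

```python
# Minimal re-creation of the documented "negative" flag behaviour; this is an
# illustration, not Synapse's implementation.
from typing import Mapping, Optional


def parse_integer_demo(
    args: Mapping[str, str],
    name: str,
    default: Optional[int] = None,
    negative: bool = False,
) -> Optional[int]:
    if name not in args:
        return default
    value = int(args[name])
    if not negative and value < 0:
        # Synapse raises a 400 SynapseError at this point.
        raise ValueError(f"Query parameter {name!r} must not be negative")
    return value


print(parse_integer_demo({"limit": "10"}, "limit"))  # 10
print(parse_integer_demo({}, "limit", default=5))    # 5
# parse_integer_demo({"limit": "-1"}, "limit")       # would raise
```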

View File

@@ -25,7 +25,16 @@ import os
import urllib
from abc import ABC, abstractmethod
from types import TracebackType
from typing import Awaitable, Dict, Generator, List, Optional, Tuple, Type
from typing import (
TYPE_CHECKING,
Awaitable,
Dict,
Generator,
List,
Optional,
Tuple,
Type,
)
import attr
@@ -37,8 +46,13 @@ from synapse.api.errors import Codes, cs_error
from synapse.http.server import finish_request, respond_with_json
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.util import Clock
from synapse.util.stringutils import is_ascii
if TYPE_CHECKING:
from synapse.storage.databases.main.media_repository import LocalMedia
logger = logging.getLogger(__name__)
# list all text content types that will have the charset default to UTF-8 when
@@ -260,6 +274,61 @@ def _can_encode_filename_as_token(x: str) -> bool:
return True
async def respond_with_multipart_responder(
clock: Clock,
request: SynapseRequest,
responder: "Optional[Responder]",
media_info: "LocalMedia",
) -> None:
"""
Responds via a Multipart responder for the federation media `/download` requests
Args:
clock: the homeserver's Clock, passed through to the multipart consumer
request: the federation request to respond to
responder: the Multipart responder which will send the response
media_info: metadata about the media item
"""
if not responder:
respond_404(request)
return
# If we have a responder we *must* use it as a context manager.
with responder:
if request._disconnected:
logger.warning(
"Not sending response to request %s, already disconnected.", request
)
return
from synapse.media.media_storage import MultipartFileConsumer
multipart_consumer = MultipartFileConsumer(
clock, request, media_info.media_type, {}
)
logger.debug("Responding to media request with responder %s", responder)
if media_info.media_length is not None:
request.setHeader(b"Content-Length", b"%d" % (media_info.media_length,))
request.setHeader(
b"Content-Type",
b"multipart/mixed; boundary=%s" % multipart_consumer.boundary,
)
try:
await responder.write_to_consumer(multipart_consumer)
except Exception as e:
# The majority of the time this will be due to the client having gone
# away. Unfortunately, Twisted simply throws a generic exception at us
# in that case.
logger.warning("Failed to write to consumer: %s %s", type(e), e)
# Unregister the producer, if it has one, so Twisted doesn't complain
if request.producer:
request.unregisterProducer()
finish_request(request)
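For orientation, a sketch of the byte stream the multipart consumer produces for a small text file, assuming a boundary of aabbcc (the real boundary is a random uuid4 hex) and the current {} JSON placeholder:

b"\r\n--aabbcc\r\n"
b"Content-Type: application/json\r\n"
b"{}"                        # JSON metadata part (placeholder for now)
b"\r\n--aabbcc\r\n"
b"Content-Type: text/plain\r\n"
b"file_to_stream"            # the raw file bytes, streamed chunk by chunk
b"\r\n--aabbcc--\r\n"        # closing delimiter, written on unregisterProducer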
async def respond_with_responder(
request: SynapseRequest,
responder: "Optional[Responder]",


@@ -54,6 +54,7 @@ from synapse.media._base import (
ThumbnailInfo,
get_filename_from_headers,
respond_404,
respond_with_multipart_responder,
respond_with_responder,
)
from synapse.media.filepath import MediaFilePaths
@@ -429,6 +430,7 @@ class MediaRepository:
media_id: str,
name: Optional[str],
max_timeout_ms: int,
federation: bool = False,
) -> None:
"""Responds to requests for local media, if exists, or returns 404.
@@ -440,6 +442,7 @@ class MediaRepository:
the filename in the Content-Disposition header of the response.
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
federation: whether the local media being fetched is for a federation request
Returns:
Resolves once a response has successfully been written to request
@@ -459,10 +462,18 @@ class MediaRepository:
file_info = FileInfo(None, media_id, url_cache=bool(url_cache))
responder = await self.media_storage.fetch_media(file_info)
await respond_with_responder(
request, responder, media_type, media_length, upload_name
responder = await self.media_storage.fetch_media(
file_info, media_info, federation
)
if federation:
# this should always be a multipart responder, but handle it defensively just in case
await respond_with_multipart_responder(
self.clock, request, responder, media_info
)
else:
await respond_with_responder(
request, responder, media_type, media_length, upload_name
)
async def get_remote_media(
self,


@@ -19,9 +19,12 @@
#
#
import contextlib
import json
import logging
import os
import shutil
from contextlib import closing
from io import BytesIO
from types import TracebackType
from typing import (
IO,
@@ -30,33 +33,47 @@ from typing import (
AsyncIterator,
BinaryIO,
Callable,
List,
Optional,
Sequence,
Tuple,
Type,
Union,
cast,
)
from uuid import uuid4
import attr
from zope.interface import implementer
from twisted.internet import interfaces
from twisted.internet.defer import Deferred
from twisted.internet.interfaces import IConsumer
from twisted.protocols.basic import FileSender
from synapse.api.errors import NotFoundError
from synapse.logging.context import defer_to_thread, make_deferred_yieldable
from synapse.logging.context import (
defer_to_thread,
make_deferred_yieldable,
run_in_background,
)
from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
from synapse.util import Clock
from synapse.util.file_consumer import BackgroundFileConsumer
from ..storage.databases.main.media_repository import LocalMedia
from ..types import JsonDict
from ._base import FileInfo, Responder
from .filepath import MediaFilePaths
if TYPE_CHECKING:
from synapse.media.storage_provider import StorageProvider
from synapse.media.storage_provider import StorageProviderWrapper
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
CRLF = b"\r\n"
class MediaStorage:
"""Responsible for storing/fetching files from local sources.
@@ -73,7 +90,7 @@ class MediaStorage:
hs: "HomeServer",
local_media_directory: str,
filepaths: MediaFilePaths,
storage_providers: Sequence["StorageProvider"],
storage_providers: Sequence["StorageProviderWrapper"],
):
self.hs = hs
self.reactor = hs.get_reactor()
@@ -169,15 +186,23 @@ class MediaStorage:
raise e from None
async def fetch_media(self, file_info: FileInfo) -> Optional[Responder]:
async def fetch_media(
self,
file_info: FileInfo,
media_info: Optional[LocalMedia] = None,
federation: bool = False,
) -> Optional[Responder]:
"""Attempts to fetch media described by file_info from the local cache
and configured storage providers.
Args:
file_info
file_info: Metadata about the media file
media_info: Metadata about the media item
federation: Whether this file is being fetched for a federation request
Returns:
Returns a Responder if the file was found, otherwise None.
If the file was found returns a Responder (a Multipart Responder if the requested
file is for the federation /download endpoint), otherwise None.
"""
paths = [self._file_info_to_path(file_info)]
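The lookup order itself is unchanged by this hunk: the local cache is consulted before any storage provider. A condensed sketch of the remainder of the method, under the assumption that the body still follows the pre-existing pattern:

for path in paths:
    local_path = os.path.join(self.local_media_directory, path)
    if os.path.exists(local_path):
        return FileResponder(open(local_path, "rb"))

# not in the local cache: fall back to the configured providers, in order
for provider in self.storage_providers:
    for path in paths:
        res = await provider.fetch(path, file_info)
        if res:
            return res
return None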
@@ -316,7 +341,7 @@ class FileResponder(Responder):
"""Wraps an open file that can be sent to a request.
Args:
open_file: A file like object to be streamed ot the client,
open_file: A file like object to be streamed to the client,
is closed when finished streaming.
"""
@@ -370,3 +395,211 @@ class ReadableFileWrapper:
# We yield to the reactor by sleeping for 0 seconds.
await self.clock.sleep(0)
@implementer(interfaces.IConsumer)
@implementer(interfaces.IPushProducer)
class MultipartFileConsumer:
"""Wraps a given consumer so that any data that gets written to it gets
converted to a multipart format.
"""
def __init__(
self,
clock: Clock,
wrapped_consumer: interfaces.IConsumer,
file_content_type: str,
json_object: JsonDict,
) -> None:
self.clock = clock
self.wrapped_consumer = wrapped_consumer
self.json_field = json_object
self.json_field_written = False
self.content_type_written = False
self.file_content_type = file_content_type
self.boundary = uuid4().hex.encode("ascii")
# The producer that registered with us, and if its a push or pull
# producer.
self.producer: Optional["interfaces.IProducer"] = None
self.streaming: Optional[bool] = None
# Whether the wrapped consumer has asked us to pause.
self.paused = False
### IConsumer APIs ###
def registerProducer(
self, producer: "interfaces.IProducer", streaming: bool
) -> None:
"""
Register to receive data from a producer.
This sets self to be a consumer for a producer. When this object runs
out of data (as when a send(2) call on a socket succeeds in moving the
last data from a userspace buffer into a kernelspace buffer), it will
ask the producer to resumeProducing().
For L{IPullProducer} providers, C{resumeProducing} will be called once
each time data is required.
For L{IPushProducer} providers, C{pauseProducing} will be called
whenever the write buffer fills up and C{resumeProducing} will only be
called when it empties. The consumer will only call C{resumeProducing}
to balance a previous C{pauseProducing} call; the producer is assumed
to start in an un-paused state.
@param streaming: C{True} if C{producer} provides L{IPushProducer},
C{False} if C{producer} provides L{IPullProducer}.
@raise RuntimeError: If a producer is already registered.
"""
self.producer = producer
self.streaming = streaming
self.wrapped_consumer.registerProducer(self, True)
def unregisterProducer(self) -> None:
"""
Stop consuming data from a producer, without disconnecting.
"""
self.wrapped_consumer.write(CRLF + b"--" + self.boundary + b"--" + CRLF)
self.wrapped_consumer.unregisterProducer()
self.paused = True
def write(self, data: bytes) -> None:
"""
The producer will write data by calling this method.
The implementation must be non-blocking and perform whatever
buffering is necessary. If the producer has provided enough data
for now and it is a L{IPushProducer}, the consumer may call its
C{pauseProducing} method.
"""
if not self.json_field_written:
self.wrapped_consumer.write(CRLF + b"--" + self.boundary + CRLF)
content_type = Header(b"Content-Type", b"application/json")
self.wrapped_consumer.write(bytes(content_type) + CRLF)
json_field = json.dumps(self.json_field)
json_bytes = json_field.encode("utf-8")
self.wrapped_consumer.write(json_bytes)
self.wrapped_consumer.write(CRLF + b"--" + self.boundary + CRLF)
self.json_field_written = True
# if we haven't written the content type yet, do so
if not self.content_type_written:
file_content_type = self.file_content_type.encode("utf-8")
content_type = Header(b"Content-Type", file_content_type)
self.wrapped_consumer.write(bytes(content_type) + CRLF)
self.content_type_written = True
self.wrapped_consumer.write(data)
### IPushProducer APIs ###
def stopProducing(self) -> None:
"""
Stop producing data.
This tells a producer that its consumer has died, so it must stop
producing data for good.
"""
assert self.producer is not None
self.paused = True
self.producer.stopProducing()
def pauseProducing(self) -> None:
"""
Pause producing data.
Tells a producer that it has produced too much data to process for
the time being, and to stop until C{resumeProducing()} is called.
"""
assert self.producer is not None
self.paused = True
if self.streaming:
cast("interfaces.IPushProducer", self.producer).pauseProducing()
else:
self.paused = True
def resumeProducing(self) -> None:
"""
Resume producing data.
This tells a producer to re-add itself to the main loop and produce
more data for its consumer.
"""
assert self.producer is not None
if self.streaming:
cast("interfaces.IPushProducer", self.producer).resumeProducing()
else:
# If the producer is not a streaming producer we need to start
# repeatedly calling `resumeProducing` in a loop.
run_in_background(self._resumeProducingRepeatedly)
### Internal APIs. ###
async def _resumeProducingRepeatedly(self) -> None:
assert self.producer is not None
assert not self.streaming
producer = cast("interfaces.IPullProducer", self.producer)
self.paused = False
while not self.paused:
producer.resumeProducing()
await self.clock.sleep(0)
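Putting the consumer to work is a two-step dance; a minimal usage sketch, mirroring respond_with_multipart_responder above:

multipart_consumer = MultipartFileConsumer(
    clock, request, media_info.media_type, {}
)
# write_to_consumer registers a producer on the consumer; each chunk the
# producer delivers is framed into the multipart body by write() above.
await responder.write_to_consumer(multipart_consumer)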
class Header:
"""
A tiny wrapper that produces request headers. We can't use the
standard Python header class because it encodes unicode fields
using RFC 2047 =?...?= encoded-words, which is correct, but no one
in the HTTP world expects that; everyone wants raw UTF-8 bytes.
(stolen from treq.multipart)
"""
def __init__(
self,
name: bytes,
value: Any,
params: Optional[List[Tuple[Any, Any]]] = None,
):
self.name = name
self.value = value
self.params = params or []
def add_param(self, name: Any, value: Any) -> None:
self.params.append((name, value))
def __bytes__(self) -> bytes:
with closing(BytesIO()) as h:
h.write(self.name + b": " + escape(self.value).encode("us-ascii"))
if self.params:
for name, val in self.params:
h.write(b"; ")
h.write(escape(name).encode("us-ascii"))
h.write(b"=")
h.write(b'"' + escape(val).encode("utf-8") + b'"')
h.seek(0)
return h.read()
def escape(value: Union[str, bytes]) -> str:
"""
This function prevents header values from corrupting the request:
a newline in the filename parameter makes a form-data request unreadable
to the majority of parsers. (stolen from treq.multipart)
"""
if isinstance(value, bytes):
value = value.decode("utf-8")
return value.replace("\r", "").replace("\n", "").replace('"', '\\"')
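A short sketch of what Header and escape actually emit (values hypothetical):

h = Header(b"Content-Type", b"text/plain", params=[(b"charset", b"utf-8")])
bytes(h)  # b'Content-Type: text/plain; charset="utf-8"'

escape('bad\r\nname"')  # 'badname\\"' - CR/LF dropped, quote backslash-escaped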


@@ -30,6 +30,7 @@ from synapse.logging.context import defer_to_thread, run_in_background
from synapse.logging.opentracing import start_active_span, trace_with_opname
from synapse.util.async_helpers import maybe_awaitable
from ..storage.databases.main.media_repository import LocalMedia
from ._base import FileInfo, Responder
from .media_storage import FileResponder
@@ -55,7 +56,11 @@ class StorageProvider(metaclass=abc.ABCMeta):
"""
@abc.abstractmethod
async def fetch(self, path: str, file_info: FileInfo) -> Optional[Responder]:
async def fetch(
self,
path: str,
file_info: FileInfo,
) -> Optional[Responder]:
"""Attempt to fetch the file described by file_info and stream it
into writer.
@@ -124,7 +129,11 @@ class StorageProviderWrapper(StorageProvider):
run_in_background(store)
@trace_with_opname("StorageProviderWrapper.fetch")
async def fetch(self, path: str, file_info: FileInfo) -> Optional[Responder]:
async def fetch(
self,
path: str,
file_info: FileInfo,
) -> Optional[Responder]:
if file_info.url_cache:
# Files in the URL preview cache definitely aren't stored here,
# so avoid any potentially slow I/O or network access.
@@ -172,7 +181,13 @@ class FileStorageProviderBackend(StorageProvider):
)
@trace_with_opname("FileStorageProviderBackend.fetch")
async def fetch(self, path: str, file_info: FileInfo) -> Optional[Responder]:
async def fetch(
self,
path: str,
file_info: FileInfo,
media_info: Optional[LocalMedia] = None,
federation: bool = False,
) -> Optional[Responder]:
"""See StorageProvider.fetch"""
backup_fname = os.path.join(self.base_directory, path)


@@ -61,8 +61,8 @@ class ListDestinationsRestServlet(RestServlet):
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self._auth, request)
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
start = parse_integer(request, "from", default=0, negative=False)
limit = parse_integer(request, "limit", default=100, negative=False)
destination = parse_string(request, "destination")
@@ -181,8 +181,8 @@ class DestinationMembershipRestServlet(RestServlet):
if not await self._store.is_destination_known(destination):
raise NotFoundError("Unknown destination")
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
start = parse_integer(request, "from", default=0, negative=False)
limit = parse_integer(request, "limit", default=100, negative=False)
direction = parse_enum(request, "dir", Direction, default=Direction.FORWARDS)


@@ -311,8 +311,8 @@ class DeleteMediaByDateSize(RestServlet):
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
before_ts = parse_integer(request, "before_ts", required=True)
size_gt = parse_integer(request, "size_gt", default=0)
before_ts = parse_integer(request, "before_ts", required=True, negative=False)
size_gt = parse_integer(request, "size_gt", default=0, negative=False)
keep_profiles = parse_boolean(request, "keep_profiles", default=True)
if before_ts < 30000000000: # Dec 1970 in milliseconds, Aug 2920 in seconds
@@ -377,8 +377,8 @@ class UserMediaRestServlet(RestServlet):
if user is None:
raise NotFoundError("Unknown user")
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
start = parse_integer(request, "from", default=0, negative=False)
limit = parse_integer(request, "limit", default=100, negative=False)
# If neither `order_by` nor `dir` is set, set the default order
# to newest media is on top for backward compatibility.
@@ -421,8 +421,8 @@ class UserMediaRestServlet(RestServlet):
if user is None:
raise NotFoundError("Unknown user")
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
start = parse_integer(request, "from", default=0, negative=False)
limit = parse_integer(request, "limit", default=100, negative=False)
# If neither `order_by` nor `dir` is set, set the default order
# to newest media is on top for backward compatibility.


@@ -35,7 +35,6 @@ from synapse.http.servlet import (
ResolveRoomIdMixin,
RestServlet,
assert_params_in_dict,
parse_boolean,
parse_enum,
parse_integer,
parse_json,
@@ -243,23 +242,13 @@ class ListRoomRestServlet(RestServlet):
errcode=Codes.INVALID_PARAM,
)
public_rooms = parse_boolean(request, "public_rooms")
empty_rooms = parse_boolean(request, "empty_rooms")
direction = parse_enum(request, "dir", Direction, default=Direction.FORWARDS)
reverse_order = True if direction == Direction.BACKWARDS else False
# Return list of rooms according to parameters
rooms, total_rooms = await self.store.get_rooms_paginate(
start,
limit,
order_by,
reverse_order,
search_term,
public_rooms,
empty_rooms,
start, limit, order_by, reverse_order, search_term
)
response = {
# next_token should be opaque, so return a value the client can parse
"offset": start,


@@ -63,10 +63,10 @@ class UserMediaStatisticsRestServlet(RestServlet):
),
)
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
from_ts = parse_integer(request, "from_ts", default=0)
until_ts = parse_integer(request, "until_ts")
start = parse_integer(request, "from", default=0, negative=False)
limit = parse_integer(request, "limit", default=100, negative=False)
from_ts = parse_integer(request, "from_ts", default=0, negative=False)
until_ts = parse_integer(request, "until_ts", negative=False)
if until_ts is not None:
if until_ts <= from_ts:


@@ -90,8 +90,8 @@ class UsersRestServletV2(RestServlet):
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
start = parse_integer(request, "from", default=0, negative=False)
limit = parse_integer(request, "limit", default=100, negative=False)
user_id = parse_string(request, "user_id")
name = parse_string(request, "name", encoding="utf-8")


@@ -53,7 +53,6 @@ class KnockRoomAliasServlet(RestServlet):
super().__init__()
self.room_member_handler = hs.get_room_member_handler()
self.auth = hs.get_auth()
self._support_via = hs.config.experimental.msc4156_enabled
async def on_POST(
self,
@@ -75,13 +74,6 @@ class KnockRoomAliasServlet(RestServlet):
remote_room_hosts = parse_strings_from_args(
args, "server_name", required=False
)
if self._support_via:
remote_room_hosts = parse_strings_from_args(
args,
"org.matrix.msc4156.via",
default=remote_room_hosts,
required=False,
)
elif RoomAlias.is_valid(room_identifier):
handler = self.room_member_handler
room_alias = RoomAlias.from_string(room_identifier)


@@ -32,7 +32,6 @@ from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict
from ...api.errors import SynapseError
from ._base import client_patterns
if TYPE_CHECKING:
@@ -57,22 +56,7 @@ class NotificationsServlet(RestServlet):
requester = await self.auth.get_user_by_req(request)
user_id = requester.user.to_string()
# While this is intended to be "string" to clients, the 'from' token
# is actually based on a numeric ID. So it must parse to an int.
from_token_str = parse_string(request, "from", required=False)
if from_token_str is not None:
# Parse to an integer.
try:
from_token = int(from_token_str)
except ValueError:
# If it doesn't parse to an integer, then this cannot possibly be a valid
# pagination token, as we only hand out integers.
raise SynapseError(
400, 'Query parameter "from" contains unrecognised token'
)
else:
from_token = None
from_token = parse_string(request, "from", required=False)
limit = parse_integer(request, "limit", default=50)
only = parse_string(request, "only", required=False)


@@ -417,7 +417,6 @@ class JoinRoomAliasServlet(ResolveRoomIdMixin, TransactionRestServlet):
super().__init__(hs)
super(ResolveRoomIdMixin, self).__init__(hs) # ensure the Mixin is set up
self.auth = hs.get_auth()
self._support_via = hs.config.experimental.msc4156_enabled
def register(self, http_server: HttpServer) -> None:
# /join/$room_identifier[/$txn_id]
@@ -436,13 +435,6 @@ class JoinRoomAliasServlet(ResolveRoomIdMixin, TransactionRestServlet):
# twisted.web.server.Request.args is incorrectly defined as Optional[Any]
args: Dict[bytes, List[bytes]] = request.args # type: ignore
remote_room_hosts = parse_strings_from_args(args, "server_name", required=False)
if self._support_via:
remote_room_hosts = parse_strings_from_args(
args,
"org.matrix.msc4156.via",
default=remote_room_hosts,
required=False,
)
room_id, remote_room_hosts = await self.resolve_room_id(
room_identifier,
remote_room_hosts,
@@ -510,7 +502,7 @@ class PublicRoomListRestServlet(RestServlet):
if server:
raise e
limit: Optional[int] = parse_integer(request, "limit", 0)
limit: Optional[int] = parse_integer(request, "limit", 0, negative=False)
since_token = parse_string(request, "since")
if limit == 0:
@@ -1430,7 +1422,16 @@ class RoomHierarchyRestServlet(RestServlet):
requester = await self._auth.get_user_by_req(request, allow_guest=True)
max_depth = parse_integer(request, "max_depth")
if max_depth is not None and max_depth < 0:
raise SynapseError(
400, "'max_depth' must be a non-negative integer", Codes.BAD_JSON
)
limit = parse_integer(request, "limit")
if limit is not None and limit <= 0:
raise SynapseError(
400, "'limit' must be a positive integer", Codes.BAD_JSON
)
return 200, await self._room_summary_handler.get_room_hierarchy(
requester,


@@ -864,7 +864,7 @@ class SlidingSyncRestServlet(RestServlet):
"""
PATTERNS = client_patterns(
"/org.matrix.simplified_msc3575/sync$", releases=[], v1=False, unstable=True
"/org.matrix.msc3575/sync$", releases=[], v1=False, unstable=True
)
def __init__(self, hs: "HomeServer"):


@@ -45,7 +45,7 @@ from synapse.storage.util.partial_state_events_tracker import (
PartialStateEventsTracker,
)
from synapse.synapse_rust.acl import ServerAclEvaluator
from synapse.types import MutableStateMap, StateMap, StreamToken, get_domain_from_id
from synapse.types import MutableStateMap, StateMap, get_domain_from_id
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer
from synapse.util.caches import intern_string
@@ -372,91 +372,6 @@ class StateStorageController:
)
return state_map[event_id]
async def get_state_after_event(
self,
event_id: str,
state_filter: Optional[StateFilter] = None,
await_full_state: bool = True,
) -> StateMap[str]:
"""
Get the room state after the given event
Args:
event_id: event of interest
state_filter: The state filter used to fetch state from the database.
await_full_state: if `True`, will block if we do not yet have complete state
at the event and `state_filter` is not satisfied by partial state.
Defaults to `True`.
"""
state_ids = await self.get_state_ids_for_event(
event_id,
state_filter=state_filter or StateFilter.all(),
await_full_state=await_full_state,
)
# using get_metadata_for_events here (instead of get_event) sidesteps an issue
# with redactions: if `event_id` is a redaction event, and we don't have the
# original (possibly because it got purged), get_event will refuse to return
# the redaction event, which isn't terribly helpful here.
#
# (To be fair, in that case we could assume it's *not* a state event, and
# therefore we don't need to worry about it. But still, it seems cleaner just
# to pull the metadata.)
m = (await self.stores.main.get_metadata_for_events([event_id]))[event_id]
if m.state_key is not None and m.rejection_reason is None:
state_ids = dict(state_ids)
state_ids[(m.event_type, m.state_key)] = event_id
return state_ids
async def get_state_at(
self,
room_id: str,
stream_position: StreamToken,
state_filter: Optional[StateFilter] = None,
await_full_state: bool = True,
) -> StateMap[str]:
"""Get the room state at a particular stream position
Args:
room_id: room for which to get state
stream_position: point at which to get state
state_filter: The state filter used to fetch state from the database.
await_full_state: if `True`, will block if we do not yet have complete state
at the last event in the room before `stream_position` and
`state_filter` is not satisfied by partial state. Defaults to `True`.
"""
# FIXME: This gets the state at the latest event before the stream ordering,
# which might not be the same as the "current state" of the room at the time
# of the stream token if there were multiple forward extremities at the time.
last_event_id = (
await self.stores.main.get_last_event_id_in_room_before_stream_ordering(
room_id,
end_token=stream_position.room_key,
)
)
if last_event_id:
state = await self.get_state_after_event(
last_event_id,
state_filter=state_filter or StateFilter.all(),
await_full_state=await_full_state,
)
else:
# no events in this room - so presumably no state
state = {}
# (erikj) This should be rarely hit, but we've had some reports that
# we get more state down gappy syncs than we should, so let's add
# some logging.
logger.info(
"Failed to find any events in room %s at %s",
room_id,
stream_position.room_key,
)
return state
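Condensed, the logic being relocated out of this controller (and back into the sync handler, per the hunks above) is roughly a two-hop lookup:

last_event_id = await store.get_last_event_id_in_room_before_stream_ordering(
    room_id, end_token=stream_position.room_key
)
state = (
    await get_state_after_event(last_event_id, state_filter, await_full_state)
    if last_event_id
    # no events before the token, so presumably no state
    else {}
)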
@trace
@tag_args
async def get_state_for_groups(


@@ -1829,7 +1829,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
async def get_push_actions_for_user(
self,
user_id: str,
before: Optional[int] = None,
before: Optional[str] = None,
limit: int = 50,
only_highlight: bool = False,
) -> List[UserPushAction]:


@@ -606,8 +606,6 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
order_by: str,
reverse_order: bool,
search_term: Optional[str],
public_rooms: Optional[bool],
empty_rooms: Optional[bool],
) -> Tuple[List[Dict[str, Any]], int]:
"""Function to retrieve a paginated list of rooms as json.
@@ -619,49 +617,30 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
search_term: a string to filter room names,
canonical alias and room ids by.
Room ID must match exactly. Canonical alias must match a substring of the local part.
public_rooms: Optional flag to filter public and non-public rooms. If true, public rooms are queried.
If false, public rooms are excluded from the query. When it is
None (the default), both public and non-public rooms are queried.
empty_rooms: Optional flag to filter empty and non-empty rooms.
A room is empty if joined_members is zero.
If true, empty rooms are queried.
If false, empty rooms are excluded from the query. When it is
None (the default), both empty and non-empty rooms are queried.
Returns:
A list of room dicts and an integer representing the total number of
rooms that exist given this query
"""
# Filter room names by a string
filter_ = []
where_args = []
where_statement = ""
search_pattern: List[object] = []
if search_term:
filter_ = [
"LOWER(state.name) LIKE ? OR "
"LOWER(state.canonical_alias) LIKE ? OR "
"state.room_id = ?"
]
where_statement = """
WHERE LOWER(state.name) LIKE ?
OR LOWER(state.canonical_alias) LIKE ?
OR state.room_id = ?
"""
# Our postgres db driver converts ? -> %s in SQL strings as that's the
# placeholder for postgres.
# HOWEVER, if you put a % into your SQL then everything goes wibbly.
# To get around this, we're going to surround search_term with %'s
# before giving it to the database in python instead
where_args = [
f"%{search_term.lower()}%",
f"#%{search_term.lower()}%:%",
search_pattern = [
"%" + search_term.lower() + "%",
"#%" + search_term.lower() + "%:%",
search_term,
]
if public_rooms is not None:
filter_arg = "1" if public_rooms else "0"
filter_.append(f"rooms.is_public = '{filter_arg}'")
if empty_rooms is not None:
if empty_rooms:
filter_.append("curr.joined_members = 0")
else:
filter_.append("curr.joined_members <> 0")
where_clause = "WHERE " + " AND ".join(filter_) if len(filter_) > 0 else ""
# Set ordering
if RoomSortOrder(order_by) == RoomSortOrder.SIZE:
@@ -738,7 +717,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
LIMIT ?
OFFSET ?
""".format(
where=where_clause,
where=where_statement,
order_by=order_by_column,
direction="ASC" if order_by_asc else "DESC",
)
@@ -747,12 +726,10 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
count_sql = """
SELECT count(*) FROM (
SELECT room_id FROM room_stats_state state
INNER JOIN room_stats_current curr USING (room_id)
INNER JOIN rooms USING (room_id)
{where}
) AS get_room_ids
""".format(
where=where_clause,
where=where_statement,
)
def _get_rooms_paginate_txn(
@@ -760,7 +737,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
) -> Tuple[List[Dict[str, Any]], int]:
# Add the search term into the WHERE clause
# and execute the data query
txn.execute(info_sql, where_args + [limit, start])
txn.execute(info_sql, search_pattern + [limit, start])
# Refactor room query data into a structured dictionary
rooms = []
@@ -790,7 +767,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
# Execute the count query
# Add the search term into the WHERE clause if present
txn.execute(count_sql, where_args)
txn.execute(count_sql, search_pattern)
room_count = cast(Tuple[int], txn.fetchone())
return rooms, room_count[0]


@@ -895,7 +895,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
"get_room_event_before_stream_ordering", _f
)
async def get_last_event_id_in_room_before_stream_ordering(
async def get_last_event_in_room_before_stream_ordering(
self,
room_id: str,
end_token: RoomStreamToken,
@@ -910,38 +910,10 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
The ID of the most recent event, or None if there are no events in the room
before this stream ordering.
"""
last_event_result = (
await self.get_last_event_pos_in_room_before_stream_ordering(
room_id, end_token
)
)
if last_event_result:
return last_event_result[0]
return None
async def get_last_event_pos_in_room_before_stream_ordering(
self,
room_id: str,
end_token: RoomStreamToken,
) -> Optional[Tuple[str, PersistedEventPosition]]:
"""
Returns the ID and event position of the last event in a room at or before a
stream ordering.
Args:
room_id
end_token: The token used to stream from
Returns:
The ID of the most recent event and its position, or None if there are no
events in the room before this stream ordering.
"""
def get_last_event_pos_in_room_before_stream_ordering_txn(
def get_last_event_in_room_before_stream_ordering_txn(
txn: LoggingTransaction,
) -> Optional[Tuple[str, PersistedEventPosition]]:
) -> Optional[str]:
# We're looking for the closest event at or before the token. We need to
# handle the fact that the stream token can be a vector clock (with an
# `instance_map`) and events can be persisted on different instances
@@ -1003,15 +975,13 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
topological_ordering=topological_ordering,
stream_ordering=stream_ordering,
):
return event_id, PersistedEventPosition(
instance_name, stream_ordering
)
return event_id
return None
return await self.db_pool.runInteraction(
"get_last_event_pos_in_room_before_stream_ordering",
get_last_event_pos_in_room_before_stream_ordering_txn,
"get_last_event_in_room_before_stream_ordering",
get_last_event_in_room_before_stream_ordering_txn,
)
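A hedged sketch of the call-site shape this rename restores, as used by get_state_at in the sync-handler hunks above:

last_event_id = await store.get_last_event_in_room_before_stream_ordering(
    room_id,
    end_token=stream_position.room_key,
)
# end_token may be a vector clock (a per-writer instance_map), so the
# transaction compares each candidate event against the position of the
# instance that persisted it rather than a single global ordering.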
async def get_current_room_stream_token_for_room_id(


@@ -75,6 +75,9 @@ class PaginationConfig:
raise SynapseError(400, "'to' parameter is invalid")
limit = parse_integer(request, "limit", default=default_limit)
if limit < 0:
raise SynapseError(400, "Limit must be 0 or above")
limit = min(limit, MAX_LIMIT)
try:


@@ -175,8 +175,22 @@ class SlidingSyncBody(RequestBodyModel):
ranges: Sliding window ranges. If this field is missing, no sliding window
is used and all rooms are returned in this list. Integers are
*inclusive*.
sort: How the list should be sorted on the server. The first value is
applied first, then tiebreaks are performed with each subsequent sort
listed.
FIXME: Furthermore, it's not currently defined how servers should behave
if they encounter a filter or sort operation they do not recognise. If
the server rejects the request with an HTTP 400 then that will break
backwards compatibility with new clients vs old servers. However, the
client would be otherwise unaware that only some of the sort/filter
operations have taken effect. We may need to include a "warnings"
section to indicate which sort/filter operations are unrecognised,
allowing for some form of graceful degradation of service.
-- https://github.com/matrix-org/matrix-spec-proposals/blob/kegan/sync-v3/proposals/3575-sync.md#filter-and-sort-extensions
slow_get_all_rooms: Just get all rooms (for clients that don't want to deal with
sliding windows). When true, the `ranges` field is ignored.
sliding windows). When true, the `ranges` and `sort` fields are ignored.
required_state: Required state for each room returned. An array of event
type and state key tuples. Elements in this array are ORd together to
produce the final set of state events to return.
@@ -215,6 +229,12 @@ class SlidingSyncBody(RequestBodyModel):
`user_id` and optionally `avatar_url` and `displayname`) for the users used
to calculate the room name.
filters: Filters to apply to the list before sorting.
bump_event_types: Allowlist of event types which should be considered recent activity
when sorting `by_recency`. By omitting event types from this field,
clients can ensure that uninteresting events (e.g. a profile rename) do
not cause a room to jump to the top of its list(s). An empty or omitted
`bump_event_types` has no effect: all events in a room will be
considered recent activity.
"""
class Filters(RequestBodyModel):
@@ -280,9 +300,11 @@ class SlidingSyncBody(RequestBodyModel):
ranges: Optional[List[Tuple[int, int]]] = None
else:
ranges: Optional[List[Tuple[conint(ge=0, strict=True), conint(ge=0, strict=True)]]] = None # type: ignore[valid-type]
sort: Optional[List[StrictStr]] = None
slow_get_all_rooms: Optional[StrictBool] = False
include_heroes: Optional[StrictBool] = False
filters: Optional[Filters] = None
bump_event_types: Optional[List[StrictStr]] = None
class RoomSubscription(CommonRoomParameters):
pass
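For reference, a hypothetical request body exercising these fields (every value is illustrative only, with sort operations as sketched in MSC3575):

sliding_sync_body = {
    "lists": {
        "main": {
            "ranges": [[0, 99]],
            "sort": ["by_notification_level", "by_recency", "by_name"],
            "required_state": [["m.room.join_rules", ""], ["m.room.topic", ""]],
            "timeline_limit": 10,
            "filters": {"is_dm": True},
            "bump_event_types": ["m.room.message", "m.room.encrypted"],
        }
    }
}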


@@ -0,0 +1,234 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import io
import os
import shutil
import tempfile
from typing import Optional
from twisted.test.proto_helpers import MemoryReactor
from synapse.media._base import FileInfo, Responder
from synapse.media.filepath import MediaFilePaths
from synapse.media.media_storage import MediaStorage
from synapse.media.storage_provider import (
FileStorageProviderBackend,
StorageProviderWrapper,
)
from synapse.server import HomeServer
from synapse.storage.databases.main.media_repository import LocalMedia
from synapse.types import JsonDict, UserID
from synapse.util import Clock
from tests import unittest
from tests.test_utils import SMALL_PNG
from tests.unittest import override_config
class FederationUnstableMediaDownloadsTest(unittest.FederatingHomeserverTestCase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
super().prepare(reactor, clock, hs)
self.test_dir = tempfile.mkdtemp(prefix="synapse-tests-")
self.addCleanup(shutil.rmtree, self.test_dir)
self.primary_base_path = os.path.join(self.test_dir, "primary")
self.secondary_base_path = os.path.join(self.test_dir, "secondary")
hs.config.media.media_store_path = self.primary_base_path
storage_providers = [
StorageProviderWrapper(
FileStorageProviderBackend(hs, self.secondary_base_path),
store_local=True,
store_remote=False,
store_synchronous=True,
)
]
self.filepaths = MediaFilePaths(self.primary_base_path)
self.media_storage = MediaStorage(
hs, self.primary_base_path, self.filepaths, storage_providers
)
self.media_repo = hs.get_media_repository()
@override_config(
{"experimental_features": {"msc3916_authenticated_media_enabled": True}}
)
def test_file_download(self) -> None:
content = io.BytesIO(b"file_to_stream")
content_uri = self.get_success(
self.media_repo.create_content(
"text/plain",
"test_upload",
content,
46,
UserID.from_string("@user_id:whatever.org"),
)
)
# test with a text file
channel = self.make_signed_federation_request(
"GET",
f"/_matrix/federation/unstable/org.matrix.msc3916/media/download/{content_uri.media_id}",
)
self.pump()
self.assertEqual(200, channel.code)
content_type = channel.headers.getRawHeaders("content-type")
assert content_type is not None
assert "multipart/mixed" in content_type[0]
assert "boundary" in content_type[0]
# extract boundary
boundary = content_type[0].split("boundary=")[1]
# split on boundary and check that json field and expected value exist
stripped = channel.text_body.split("\r\n" + "--" + boundary)
# TODO: the json object expected will change once MSC3911 is implemented, currently
# {} is returned for all requests as a placeholder (per MSC3916)
found_json = any(
"\r\nContent-Type: application/json\r\n{}" in field for field in stripped
)
self.assertTrue(found_json)
# check that text file and expected value exist
found_file = any(
"\r\nContent-Type: text/plain\r\nfile_to_stream" in field
for field in stripped
)
self.assertTrue(found_file)
content = io.BytesIO(SMALL_PNG)
content_uri = self.get_success(
self.media_repo.create_content(
"image/png",
"test_png_upload",
content,
67,
UserID.from_string("@user_id:whatever.org"),
)
)
# test with an image file
channel = self.make_signed_federation_request(
"GET",
f"/_matrix/federation/unstable/org.matrix.msc3916/media/download/{content_uri.media_id}",
)
self.pump()
self.assertEqual(200, channel.code)
content_type = channel.headers.getRawHeaders("content-type")
assert content_type is not None
assert "multipart/mixed" in content_type[0]
assert "boundary" in content_type[0]
# extract boundary
boundary = content_type[0].split("boundary=")[1]
# split on boundary and check that json field and expected value exist
body = channel.result.get("body")
assert body is not None
stripped_bytes = body.split(b"\r\n" + b"--" + boundary.encode("utf-8"))
found_json = any(
b"\r\nContent-Type: application/json\r\n{}" in field
for field in stripped_bytes
)
self.assertTrue(found_json)
# check that png file exists and matches what was uploaded
found_file = any(SMALL_PNG in field for field in stripped_bytes)
self.assertTrue(found_file)
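The manual boundary-splitting used above generalises into a small helper; a sketch (hypothetical, not part of the test suite) matching the framing asserted on here:

from typing import List

def split_multipart(body: bytes, boundary: str) -> List[bytes]:
    # split on the dash-boundary delimiter, dropping the empty preamble
    # and the closing "--\r\n" epilogue marker
    delimiter = b"\r\n--" + boundary.encode("ascii")
    return [p for p in body.split(delimiter) if p and p != b"--\r\n"]

parts = split_multipart(body, boundary)
# parts[0] holds the JSON metadata part, parts[1] the file part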
@override_config(
{"experimental_features": {"msc3916_authenticated_media_enabled": False}}
)
def test_disable_config(self) -> None:
content = io.BytesIO(b"file_to_stream")
content_uri = self.get_success(
self.media_repo.create_content(
"text/plain",
"test_upload",
content,
46,
UserID.from_string("@user_id:whatever.org"),
)
)
channel = self.make_signed_federation_request(
"GET",
f"/_matrix/federation/unstable/org.matrix.msc3916/media/download/{content_uri.media_id}",
)
self.pump()
self.assertEqual(404, channel.code)
self.assertEqual(channel.json_body.get("errcode"), "M_UNRECOGNIZED")
class FakeFileStorageProviderBackend:
"""
Fake storage provider stub with incompatible `fetch` signature for testing
"""
def __init__(self, hs: "HomeServer", config: str):
self.hs = hs
self.cache_directory = hs.config.media.media_store_path
self.base_directory = config
def __str__(self) -> str:
return "FakeFileStorageProviderBackend[%s]" % (self.base_directory,)
async def fetch(
self, path: str, file_info: FileInfo, media_info: Optional[LocalMedia] = None
) -> Optional[Responder]:
pass
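One plausible way for the endpoint wiring to notice such an incompatibility is to inspect the provider's fetch signature before registering the federation download route; a purely illustrative sketch (not necessarily how this branch implements the check):

import inspect

def fetch_is_media_aware(provider) -> bool:
    # providers written against the old two-argument interface won't
    # accept the extra media kwargs the federation path needs
    params = inspect.signature(provider.fetch).parameters
    return "media_info" in params and "federation" in params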
TEST_DIR = tempfile.mkdtemp(prefix="synapse-tests-")
class FederationUnstableMediaEndpointCompatibilityTest(
unittest.FederatingHomeserverTestCase
):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
super().prepare(reactor, clock, hs)
self.test_dir = TEST_DIR
self.addCleanup(shutil.rmtree, self.test_dir)
self.media_repo = hs.get_media_repository()
def default_config(self) -> JsonDict:
config = super().default_config()
primary_base_path = os.path.join(TEST_DIR, "primary")
config["media_storage_providers"] = [
{
"module": "tests.federation.test_federation_media.FakeFileStorageProviderBackend",
"store_local": "True",
"store_remote": "False",
"store_synchronous": "False",
"config": {"directory": primary_base_path},
}
]
return config
@override_config(
{"experimental_features": {"msc3916_authenticated_media_enabled": True}}
)
def test_incompatible_storage_provider_fails_to_load_endpoint(self) -> None:
channel = self.make_signed_federation_request(
"GET",
"/_matrix/federation/unstable/org.matrix.msc3916/media/download/xyz",
)
self.pump()
self.assertEqual(404, channel.code)
self.assertEqual(channel.json_body.get("errcode"), "M_UNRECOGNIZED")


@@ -20,8 +20,6 @@
import logging
from unittest.mock import patch
from parameterized import parameterized
from twisted.test.proto_helpers import MemoryReactor
from synapse.api.constants import AccountDataTypes, EventTypes, JoinRules, Membership
@@ -81,7 +79,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
)
self.assertEqual(room_id_results.keys(), set())
self.assertEqual(room_id_results, set())
def test_get_newly_joined_room(self) -> None:
"""
@@ -105,7 +103,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
)
self.assertEqual(room_id_results.keys(), {room_id})
self.assertEqual(room_id_results, {room_id})
def test_get_already_joined_room(self) -> None:
"""
@@ -126,7 +124,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
)
self.assertEqual(room_id_results.keys(), {room_id})
self.assertEqual(room_id_results, {room_id})
def test_get_invited_banned_knocked_room(self) -> None:
"""
@@ -182,7 +180,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
# Ensure that the invited, ban, and knock rooms show up
self.assertEqual(
room_id_results.keys(),
room_id_results,
{
invited_room_id,
ban_room_id,
@@ -228,7 +226,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# The kicked room should show up
self.assertEqual(room_id_results.keys(), {kick_room_id})
self.assertEqual(room_id_results, {kick_room_id})
def test_forgotten_rooms(self) -> None:
"""
@@ -310,7 +308,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# We shouldn't see the room because it was forgotten
self.assertEqual(room_id_results.keys(), set())
self.assertEqual(room_id_results, set())
def test_only_newly_left_rooms_show_up(self) -> None:
"""
@@ -342,7 +340,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# Only the newly_left room should show up
self.assertEqual(room_id_results.keys(), {room_id2})
self.assertEqual(room_id_results, {room_id2})
def test_no_joins_after_to_token(self) -> None:
"""
@@ -370,7 +368,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
)
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_join_during_range_and_left_room_after_to_token(self) -> None:
"""
@@ -400,7 +398,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
# We should still see the room because we were joined during the
# from_token/to_token time period.
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_join_before_range_and_left_room_after_to_token(self) -> None:
"""
@@ -427,7 +425,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# We should still see the room because we were joined before the `from_token`
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_kicked_before_range_and_left_after_to_token(self) -> None:
"""
@@ -475,7 +473,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# We shouldn't see the room because it was forgotten
self.assertEqual(room_id_results.keys(), {kick_room_id})
self.assertEqual(room_id_results, {kick_room_id})
def test_newly_left_during_range_and_join_leave_after_to_token(self) -> None:
"""
@@ -512,7 +510,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# Room should still show up because it's newly_left during the from/to range
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_newly_left_during_range_and_join_after_to_token(self) -> None:
"""
@@ -548,7 +546,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# Room should still show up because it's newly_left during the from/to range
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_no_from_token(self) -> None:
"""
@@ -589,7 +587,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# Only rooms we were joined to before the `to_token` should show up
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_from_token_ahead_of_to_token(self) -> None:
"""
@@ -650,7 +648,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
#
# There won't be any newly_left rooms because the `from_token` is ahead of the
# `to_token` and that range will give no membership changes to check.
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_leave_before_range_and_join_leave_after_to_token(self) -> None:
"""
@@ -685,7 +683,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# Room shouldn't show up because it was left before the `from_token`
self.assertEqual(room_id_results.keys(), set())
self.assertEqual(room_id_results, set())
def test_leave_before_range_and_join_after_to_token(self) -> None:
"""
@@ -719,7 +717,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# Room shouldn't show up because it was left before the `from_token`
self.assertEqual(room_id_results.keys(), set())
self.assertEqual(room_id_results, set())
def test_join_leave_multiple_times_during_range_and_after_to_token(
self,
@@ -761,7 +759,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# Room should show up because it was newly_left and joined during the from/to range
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_join_leave_multiple_times_before_range_and_after_to_token(
self,
@@ -801,7 +799,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# Room should show up because we were joined before the from/to range
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_invite_before_range_and_join_leave_after_to_token(
self,
@@ -838,7 +836,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
# Room should show up because we were invited before the from/to range
self.assertEqual(room_id_results.keys(), {room_id1})
self.assertEqual(room_id_results, {room_id1})
def test_multiple_rooms_are_not_confused(
self,
@@ -891,7 +889,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
)
self.assertEqual(
room_id_results.keys(),
room_id_results,
{
# `room_id1` shouldn't show up because we left before the from/to range
#
@@ -1050,6 +1048,7 @@ class GetSyncRoomIdsForUserEventShardTestCase(BaseMultiWorkerStreamTestCase):
# Get a token while things are stuck after our activity
stuck_activity_token = self.event_sources.get_current_token()
logger.info("stuck_activity_token %s", stuck_activity_token)
# Let's make sure we're working with a token that has an `instance_map`
self.assertNotEqual(len(stuck_activity_token.room_key.instance_map), 0)
@@ -1059,6 +1058,7 @@ class GetSyncRoomIdsForUserEventShardTestCase(BaseMultiWorkerStreamTestCase):
join_on_worker2_pos = self.get_success(
self.store.get_position_for_event(join_on_worker2_response["event_id"])
)
logger.info("join_on_worker2_pos %s", join_on_worker2_pos)
# Ensure the join technically came after our token
self.assertGreater(
join_on_worker2_pos.stream,
@@ -1077,6 +1077,7 @@ class GetSyncRoomIdsForUserEventShardTestCase(BaseMultiWorkerStreamTestCase):
join_on_worker3_pos = self.get_success(
self.store.get_position_for_event(join_on_worker3_response["event_id"])
)
logger.info("join_on_worker3_pos %s", join_on_worker3_pos)
# Ensure the join came after the min but still encapsulated by the token
self.assertGreaterEqual(
join_on_worker3_pos.stream,
@@ -1102,7 +1103,7 @@ class GetSyncRoomIdsForUserEventShardTestCase(BaseMultiWorkerStreamTestCase):
)
self.assertEqual(
room_id_results.keys(),
room_id_results,
{
room_id1,
# room_id2 shouldn't show up because we left before the from/to range
@@ -1216,20 +1217,11 @@ class FilterRoomsTestCase(HomeserverTestCase):
after_rooms_token = self.event_sources.get_current_token()
# Get the rooms the user should be syncing with
sync_room_map = self.get_success(
self.sliding_sync_handler.get_sync_room_ids_for_user(
UserID.from_string(user1_id),
from_token=None,
to_token=after_rooms_token,
)
)
# Try with `is_dm=True`
truthy_filtered_room_map = self.get_success(
truthy_filtered_room_ids = self.get_success(
self.sliding_sync_handler.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
{room_id, dm_room_id},
SlidingSyncConfig.SlidingSyncList.Filters(
is_dm=True,
),
@@ -1237,13 +1229,13 @@ class FilterRoomsTestCase(HomeserverTestCase):
)
)
self.assertEqual(truthy_filtered_room_map.keys(), {dm_room_id})
self.assertEqual(truthy_filtered_room_ids, {dm_room_id})
# Try with `is_dm=False`
falsy_filtered_room_map = self.get_success(
falsy_filtered_room_ids = self.get_success(
self.sliding_sync_handler.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
{room_id, dm_room_id},
SlidingSyncConfig.SlidingSyncList.Filters(
is_dm=False,
),
@@ -1251,226 +1243,4 @@ class FilterRoomsTestCase(HomeserverTestCase):
)
)
self.assertEqual(falsy_filtered_room_map.keys(), {room_id})
def test_filter_encrypted_rooms(self) -> None:
"""
Test `filter.is_encrypted` for encrypted rooms
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
# Create a normal room
room_id = self.helper.create_room_as(
user1_id,
is_public=False,
tok=user1_tok,
)
# Create an encrypted room
encrypted_room_id = self.helper.create_room_as(
user1_id,
is_public=False,
tok=user1_tok,
)
self.helper.send_state(
encrypted_room_id,
EventTypes.RoomEncryption,
{"algorithm": "m.megolm.v1.aes-sha2"},
tok=user1_tok,
)
after_rooms_token = self.event_sources.get_current_token()
# Get the rooms the user should be syncing with
sync_room_map = self.get_success(
self.sliding_sync_handler.get_sync_room_ids_for_user(
UserID.from_string(user1_id),
from_token=None,
to_token=after_rooms_token,
)
)
# Try with `is_encrypted=True`
truthy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
is_encrypted=True,
),
after_rooms_token,
)
)
self.assertEqual(truthy_filtered_room_map.keys(), {encrypted_room_id})
# Try with `is_encrypted=False`
falsy_filtered_room_map = self.get_success(
self.sliding_sync_handler.filter_rooms(
UserID.from_string(user1_id),
sync_room_map,
SlidingSyncConfig.SlidingSyncList.Filters(
is_encrypted=False,
),
after_rooms_token,
)
)
self.assertEqual(falsy_filtered_room_map.keys(), {room_id})
class SortRoomsTestCase(HomeserverTestCase):
"""
Tests Sliding Sync handler `sort_rooms()` to make sure it sorts/orders rooms
correctly.
"""
servlets = [
admin.register_servlets,
knock.register_servlets,
login.register_servlets,
room.register_servlets,
]
def default_config(self) -> JsonDict:
config = super().default_config()
# Enable sliding sync
config["experimental_features"] = {"msc3575_enabled": True}
return config
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.sliding_sync_handler = self.hs.get_sliding_sync_handler()
self.store = self.hs.get_datastores().main
self.event_sources = hs.get_event_sources()
def test_sort_activity_basic(self) -> None:
"""
Rooms with newer activity are sorted first.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
room_id1 = self.helper.create_room_as(
user1_id,
tok=user1_tok,
)
room_id2 = self.helper.create_room_as(
user1_id,
tok=user1_tok,
)
after_rooms_token = self.event_sources.get_current_token()
# Get the rooms the user should be syncing with
sync_room_map = self.get_success(
self.sliding_sync_handler.get_sync_room_ids_for_user(
UserID.from_string(user1_id),
from_token=None,
to_token=after_rooms_token,
)
)
# Sort the rooms (what we're testing)
sorted_room_info = self.get_success(
self.sliding_sync_handler.sort_rooms(
sync_room_map=sync_room_map,
to_token=after_rooms_token,
)
)
self.assertEqual(
[room_id for room_id, _ in sorted_room_info],
[room_id2, room_id1],
)
@parameterized.expand(
[
(Membership.LEAVE,),
(Membership.INVITE,),
(Membership.KNOCK,),
(Membership.BAN,),
]
)
def test_activity_after_xxx(self, room1_membership: str) -> None:
"""
When someone has left/been invited/knocked/been banned from a room, they
shouldn't take anything into account after that membership event.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
before_rooms_token = self.event_sources.get_current_token()
# Create the rooms as user2 so we can have user1 with a clean slate to work from
# and join in whatever order we need for the tests.
room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok, is_public=True)
# If we're testing knocks, set the room to knock
if room1_membership == Membership.KNOCK:
self.helper.send_state(
room_id1,
EventTypes.JoinRules,
{"join_rule": JoinRules.KNOCK},
tok=user2_tok,
)
room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok, is_public=True)
room_id3 = self.helper.create_room_as(user2_id, tok=user2_tok, is_public=True)
# Here is the activity with user1 that will determine the sort of the rooms
# (room2, room1, room3)
self.helper.join(room_id3, user1_id, tok=user1_tok)
if room1_membership == Membership.LEAVE:
self.helper.join(room_id1, user1_id, tok=user1_tok)
self.helper.leave(room_id1, user1_id, tok=user1_tok)
elif room1_membership == Membership.INVITE:
self.helper.invite(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
elif room1_membership == Membership.KNOCK:
self.helper.knock(room_id1, user1_id, tok=user1_tok)
elif room1_membership == Membership.BAN:
self.helper.ban(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
self.helper.join(room_id2, user1_id, tok=user1_tok)
# Activity before the token, but the user has only been xxx'd in this room so it
# shouldn't be taken into account
self.helper.send(room_id1, "activity in room1", tok=user2_tok)
after_rooms_token = self.event_sources.get_current_token()
# Activity after the token. Just make it in a different order than what we
# expect to make sure we're not taking the activity after the token into
# account.
self.helper.send(room_id1, "activity in room1", tok=user2_tok)
self.helper.send(room_id2, "activity in room2", tok=user2_tok)
self.helper.send(room_id3, "activity in room3", tok=user2_tok)
# Get the rooms the user should be syncing with
sync_room_map = self.get_success(
self.sliding_sync_handler.get_sync_room_ids_for_user(
UserID.from_string(user1_id),
from_token=before_rooms_token,
to_token=after_rooms_token,
)
)
# Sort the rooms (what we're testing)
sorted_room_info = self.get_success(
self.sliding_sync_handler.sort_rooms(
sync_room_map=sync_room_map,
to_token=after_rooms_token,
)
)
self.assertEqual(
[room_id for room_id, _ in sorted_room_info],
[room_id2, room_id1, room_id3],
"Corresponding map to disambiguate the opaque room IDs: "
+ str(
{
"room_id1": room_id1,
"room_id2": room_id2,
"room_id3": room_id3,
}
),
)
self.assertEqual(falsy_filtered_room_ids, {room_id})


@@ -49,7 +49,10 @@ from synapse.logging.context import make_deferred_yieldable
from synapse.media._base import FileInfo, ThumbnailInfo
from synapse.media.filepath import MediaFilePaths
from synapse.media.media_storage import MediaStorage, ReadableFileWrapper
from synapse.media.storage_provider import FileStorageProviderBackend
from synapse.media.storage_provider import (
FileStorageProviderBackend,
StorageProviderWrapper,
)
from synapse.media.thumbnailer import ThumbnailProvider
from synapse.module_api import ModuleApi
from synapse.module_api.callbacks.spamchecker_callbacks import load_legacy_spam_checkers
@@ -78,7 +81,14 @@ class MediaStorageTests(unittest.HomeserverTestCase):
hs.config.media.media_store_path = self.primary_base_path
storage_providers = [FileStorageProviderBackend(hs, self.secondary_base_path)]
storage_providers = [
StorageProviderWrapper(
FileStorageProviderBackend(hs, self.secondary_base_path),
store_local=True,
store_remote=False,
store_synchronous=True,
)
]
self.filepaths = MediaFilePaths(self.primary_base_path)
self.media_storage = MediaStorage(

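The hunk above wraps the built-in file backend in `StorageProviderWrapper` rather than changing the provider itself, which is what keeps third-party storage providers working. A minimal sketch of wrapping a custom backend the same way, assuming only the constructor arguments visible in the diff (`backend` stands in for any object implementing the storage provider interface):

# Sketch, assuming the wrapper signature shown in the diff above.
from synapse.media.storage_provider import StorageProviderWrapper

def wrap_third_party_provider(backend):
    # Mirror the test setup: store local media synchronously and
    # skip remote media.
    return StorageProviderWrapper(
        backend,
        store_local=True,
        store_remote=False,
        store_synchronous=True,
    )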
View File

@@ -688,7 +688,7 @@ class ModuleApiTestCase(BaseModuleApiTestCase):
channel = self.make_request(
"GET",
"/notifications",
"/notifications?from=",
access_token=tok,
)
self.assertEqual(channel.code, 200, channel.result)
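The updated request passes an empty `from` value, which suggests the endpoint should treat a blank pagination token like an absent one. A hedged sketch of that behaviour with a hypothetical helper (this is not Synapse's actual parsing code):

# Sketch only: "" and None both mean "start from the newest notifications".
from typing import Optional

def parse_from_token(raw: Optional[str]) -> Optional[str]:
    return raw or None

assert parse_from_token("") is None
assert parse_from_token(None) is None
assert parse_from_token("12345") == "12345"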

View File

@@ -1795,83 +1795,6 @@ class RoomTestCase(unittest.HomeserverTestCase):
self.assertEqual(room_id, channel.json_body["rooms"][0].get("room_id"))
self.assertEqual("ж", channel.json_body["rooms"][0].get("name"))
def test_filter_public_rooms(self) -> None:
self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok, is_public=True
)
self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok, is_public=True
)
self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok, is_public=False
)
response = self.make_request(
"GET",
"/_synapse/admin/v1/rooms",
access_token=self.admin_user_tok,
)
self.assertEqual(200, response.code, msg=response.json_body)
self.assertEqual(3, response.json_body["total_rooms"])
self.assertEqual(3, len(response.json_body["rooms"]))
response = self.make_request(
"GET",
"/_synapse/admin/v1/rooms?public_rooms=true",
access_token=self.admin_user_tok,
)
self.assertEqual(200, response.code, msg=response.json_body)
self.assertEqual(2, response.json_body["total_rooms"])
self.assertEqual(2, len(response.json_body["rooms"]))
response = self.make_request(
"GET",
"/_synapse/admin/v1/rooms?public_rooms=false",
access_token=self.admin_user_tok,
)
self.assertEqual(200, response.code, msg=response.json_body)
self.assertEqual(1, response.json_body["total_rooms"])
self.assertEqual(1, len(response.json_body["rooms"]))
def test_filter_empty_rooms(self) -> None:
self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok, is_public=True
)
self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok, is_public=True
)
room_id = self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok, is_public=False
)
self.helper.leave(room_id, self.admin_user, tok=self.admin_user_tok)
response = self.make_request(
"GET",
"/_synapse/admin/v1/rooms",
access_token=self.admin_user_tok,
)
self.assertEqual(200, response.code, msg=response.json_body)
self.assertEqual(3, response.json_body["total_rooms"])
self.assertEqual(3, len(response.json_body["rooms"]))
response = self.make_request(
"GET",
"/_synapse/admin/v1/rooms?empty_rooms=false",
access_token=self.admin_user_tok,
)
self.assertEqual(200, response.code, msg=response.json_body)
self.assertEqual(2, response.json_body["total_rooms"])
self.assertEqual(2, len(response.json_body["rooms"]))
response = self.make_request(
"GET",
"/_synapse/admin/v1/rooms?empty_rooms=true",
access_token=self.admin_user_tok,
)
self.assertEqual(200, response.code, msg=response.json_body)
self.assertEqual(1, response.json_body["total_rooms"])
self.assertEqual(1, len(response.json_body["rooms"]))
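The two tests above pin down the `public_rooms` and `empty_rooms` filters on the room-list admin API: `public_rooms` appears to select rooms published to the public directory, and `empty_rooms` rooms that every local user has left. A usage sketch, with the endpoint and parameters taken directly from the requests above (the base URL and token are placeholders, and `requests` is an assumed third-party HTTP client):

import requests  # assumed available; any HTTP client works

BASE = "https://homeserver.example.com"
HEADERS = {"Authorization": "Bearer <admin_access_token>"}

# Only rooms published to the public room directory:
requests.get(f"{BASE}/_synapse/admin/v1/rooms?public_rooms=true", headers=HEADERS)
# Only rooms that every local user has left:
requests.get(f"{BASE}/_synapse/admin/v1/rooms?empty_rooms=true", headers=HEADERS)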
def test_single_room(self) -> None:
"""Test that a single room can be requested correctly"""
# Create two test rooms

View File

@@ -18,7 +18,6 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import List, Optional, Tuple
from unittest.mock import AsyncMock, Mock
from twisted.test.proto_helpers import MemoryReactor
@@ -49,14 +48,6 @@ class HTTPPusherTests(HomeserverTestCase):
self.sync_handler = homeserver.get_sync_handler()
self.auth_handler = homeserver.get_auth_handler()
self.user_id = self.register_user("user", "pass")
self.access_token = self.login("user", "pass")
self.other_user_id = self.register_user("otheruser", "pass")
self.other_access_token = self.login("otheruser", "pass")
# Create a room
self.room_id = self.helper.create_room_as(self.user_id, tok=self.access_token)
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
# Mock out the calls over federation.
fed_transport_client = Mock(spec=["send_transaction"])
@@ -70,22 +61,32 @@ class HTTPPusherTests(HomeserverTestCase):
"""
Local users will get notified for invites
"""
user_id = self.register_user("user", "pass")
access_token = self.login("user", "pass")
other_user_id = self.register_user("otheruser", "pass")
other_access_token = self.login("otheruser", "pass")
# Create a room
room = self.helper.create_room_as(user_id, tok=access_token)
# Check we start with no pushes
self._request_notifications(from_token=None, limit=1, expected_count=0)
channel = self.make_request(
"GET",
"/notifications",
access_token=other_access_token,
)
self.assertEqual(channel.code, 200, channel.result)
self.assertEqual(len(channel.json_body["notifications"]), 0, channel.json_body)
# Send an invite
self.helper.invite(
room=self.room_id,
src=self.user_id,
targ=self.other_user_id,
tok=self.access_token,
)
self.helper.invite(room=room, src=user_id, targ=other_user_id, tok=access_token)
# We should have a notification now
channel = self.make_request(
"GET",
"/notifications",
access_token=self.other_access_token,
access_token=other_access_token,
)
self.assertEqual(channel.code, 200)
self.assertEqual(len(channel.json_body["notifications"]), 1, channel.json_body)
@@ -94,139 +95,3 @@ class HTTPPusherTests(HomeserverTestCase):
"invite",
channel.json_body,
)
def test_pagination_of_notifications(self) -> None:
"""
Check that pagination of notifications works.
"""
# Check we start with no pushes
self._request_notifications(from_token=None, limit=1, expected_count=0)
# Send an invite and have the other user join the room.
self.helper.invite(
room=self.room_id,
src=self.user_id,
targ=self.other_user_id,
tok=self.access_token,
)
self.helper.join(self.room_id, self.other_user_id, tok=self.other_access_token)
# Send 5 messages in the room and note down their event IDs.
sent_event_ids = []
for _ in range(5):
resp = self.helper.send_event(
self.room_id,
"m.room.message",
{"body": "honk", "msgtype": "m.text"},
tok=self.access_token,
)
sent_event_ids.append(resp["event_id"])
# We expect to get notifications for messages in reverse order.
# So reverse this list of event IDs to make it easier to compare
# against later.
sent_event_ids.reverse()
# We should have a few notifications now. Let's try and fetch the first 2.
notification_event_ids, _ = self._request_notifications(
from_token=None, limit=2, expected_count=2
)
# Check we got the expected event IDs back.
self.assertEqual(notification_event_ids, sent_event_ids[:2])
# Try requesting again without a 'from' query parameter. We should get the
# same two notifications back.
notification_event_ids, next_token = self._request_notifications(
from_token=None, limit=2, expected_count=2
)
self.assertEqual(notification_event_ids, sent_event_ids[:2])
# Ask for the next 5 notifications, though there should only be
# 4 remaining; the next 3 messages and the invite.
#
# We need to use the "next_token" from the response as the "from"
# query parameter in the next request in order to paginate.
notification_event_ids, next_token = self._request_notifications(
from_token=next_token, limit=5, expected_count=4
)
# Ensure we chop off the invite on the end.
notification_event_ids = notification_event_ids[:-1]
self.assertEqual(notification_event_ids, sent_event_ids[2:])
def _request_notifications(
self, from_token: Optional[str], limit: int, expected_count: int
) -> Tuple[List[str], str]:
"""
Make a request to /notifications to get the latest events to be notified about.
Only the event IDs are returned. The request is made by the "other user".
Args:
from_token: An optional pagination token to continue from.
limit: The maximum number of results to return.
expected_count: The number of events to expect in the response.
Returns:
A tuple of the event IDs that the client should be notified about
(newest first) and the `next_token` to use in the next request.
"""
# Construct the request path.
path = f"/notifications?limit={limit}"
if from_token is not None:
path += f"&from={from_token}"
channel = self.make_request(
"GET",
path,
access_token=self.other_access_token,
)
self.assertEqual(channel.code, 200)
self.assertEqual(
len(channel.json_body["notifications"]), expected_count, channel.json_body
)
# Extract the necessary data from the response.
next_token = channel.json_body["next_token"]
event_ids = [
event["event"]["event_id"] for event in channel.json_body["notifications"]
]
return event_ids, next_token
def test_parameters(self) -> None:
"""
Test that appropriate errors are returned when query parameters are malformed.
"""
# Test that no parameters are required.
channel = self.make_request(
"GET",
"/notifications",
access_token=self.other_access_token,
)
self.assertEqual(channel.code, 200)
# Test that limit cannot be negative
channel = self.make_request(
"GET",
"/notifications?limit=-1",
access_token=self.other_access_token,
)
self.assertEqual(channel.code, 400)
# Test that the 'limit' parameter must be an integer.
channel = self.make_request(
"GET",
"/notifications?limit=foobar",
access_token=self.other_access_token,
)
self.assertEqual(channel.code, 400)
# Test that the 'from' parameter must be an integer.
channel = self.make_request(
"GET",
"/notifications?from=osborne",
access_token=self.other_access_token,
)
self.assertEqual(channel.code, 400)
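Taken together, these tests describe the pagination contract for `/notifications`: pass the previous response's `next_token` as `from` to fetch the next page. A client-side sketch under that contract (`get_json` is a hypothetical authenticated-GET helper, not part of Synapse):

# Sketch of draining the endpoint: keep passing `next_token` as `from`
# until a page comes back shorter than the requested limit.
def fetch_all_notifications(get_json, limit: int = 100):
    event_ids = []
    from_token = None
    while True:
        path = f"/notifications?limit={limit}"
        if from_token is not None:
            path += f"&from={from_token}"
        body = get_json(path)
        event_ids.extend(n["event"]["event_id"] for n in body["notifications"])
        if len(body["notifications"]) < limit:
            return event_ids
        from_token = body["next_token"]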

View File

@@ -1228,9 +1228,7 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = hs.get_datastores().main
self.sync_endpoint = (
"/_matrix/client/unstable/org.matrix.simplified_msc3575/sync"
)
self.sync_endpoint = "/_matrix/client/unstable/org.matrix.msc3575/sync"
self.store = hs.get_datastores().main
self.event_sources = hs.get_event_sources()
@@ -1301,6 +1299,7 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
"lists": {
"foo-list": {
"ranges": [[0, 99]],
"sort": ["by_notification_level", "by_recency", "by_name"],
"required_state": [
["m.room.join_rules", ""],
["m.room.history_visibility", ""],
@@ -1362,6 +1361,7 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
"lists": {
"foo-list": {
"ranges": [[0, 99]],
"sort": ["by_notification_level", "by_recency", "by_name"],
"required_state": [
["m.room.join_rules", ""],
["m.room.history_visibility", ""],
@@ -1415,12 +1415,14 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
"lists": {
"dms": {
"ranges": [[0, 99]],
"sort": ["by_recency"],
"required_state": [],
"timeline_limit": 1,
"filters": {"is_dm": True},
},
"foo-list": {
"ranges": [[0, 99]],
"sort": ["by_recency"],
"required_state": [],
"timeline_limit": 1,
"filters": {"is_dm": False},
@@ -1461,60 +1463,3 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
],
list(channel.json_body["lists"]["foo-list"]),
)
def test_sort_list(self) -> None:
"""
Test that the lists are sorted by `stream_ordering`
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok, is_public=True)
room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok, is_public=True)
room_id3 = self.helper.create_room_as(user1_id, tok=user1_tok, is_public=True)
# Activity that will order the rooms
self.helper.send(room_id3, "activity in room3", tok=user1_tok)
self.helper.send(room_id1, "activity in room1", tok=user1_tok)
self.helper.send(room_id2, "activity in room2", tok=user1_tok)
# Make the Sliding Sync request
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 99]],
"required_state": [
["m.room.join_rules", ""],
["m.room.history_visibility", ""],
["m.space.child", "*"],
],
"timeline_limit": 1,
}
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)
# Make sure it has the foo-list we requested
self.assertListEqual(
list(channel.json_body["lists"].keys()),
["foo-list"],
channel.json_body["lists"].keys(),
)
# Make sure the list is sorted in the way we expect
self.assertListEqual(
list(channel.json_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 99],
"room_ids": [room_id2, room_id1, room_id3],
}
],
channel.json_body["lists"]["foo-list"],
)
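The assertion above reads the room ordering out of the list's `ops`. A small sketch of flattening `SYNC` ops into an ordered room list, using exactly the field names asserted above:

# Sketch: collect room IDs from SYNC ops in order.
def rooms_in_order(ops):
    ordered = []
    for op in ops:
        if op["op"] == "SYNC":
            ordered.extend(op["room_ids"])
    return ordered

assert rooms_in_order(
    [{"op": "SYNC", "range": [0, 99], "room_ids": ["!b", "!a", "!c"]}]
) == ["!b", "!a", "!c"]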

View File

@@ -277,7 +277,7 @@ class PaginationTestCase(HomeserverTestCase):
class GetLastEventInRoomBeforeStreamOrderingTestCase(HomeserverTestCase):
"""
Test `get_last_event_pos_in_room_before_stream_ordering(...)`
Test `get_last_event_in_room_before_stream_ordering(...)`
"""
servlets = [
@@ -336,14 +336,14 @@ class GetLastEventInRoomBeforeStreamOrderingTestCase(HomeserverTestCase):
room_id = self.helper.create_room_as(user1_id, tok=user1_tok, is_public=True)
last_event_result = self.get_success(
self.store.get_last_event_pos_in_room_before_stream_ordering(
last_event = self.get_success(
self.store.get_last_event_in_room_before_stream_ordering(
room_id=room_id,
end_token=before_room_token.room_key,
)
)
self.assertIsNone(last_event_result)
self.assertIsNone(last_event)
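The edits in this file track a shape change: the old `get_last_event_pos_in_room_before_stream_ordering` returned an `(event_id, position)` tuple (or `None`), while the new `get_last_event_in_room_before_stream_ordering` returns just the event ID (or `None`). A sketch of the new call site, mirroring the arguments used in these tests:

async def last_event_id_before(store, room_id, token):
    # Returns the event ID directly (or None); call sites no longer
    # need to unpack an (event_id, position) tuple.
    return await store.get_last_event_in_room_before_stream_ordering(
        room_id=room_id,
        end_token=token.room_key,
    )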
def test_after_room_created(self) -> None:
"""
@@ -356,16 +356,14 @@ class GetLastEventInRoomBeforeStreamOrderingTestCase(HomeserverTestCase):
after_room_token = self.event_sources.get_current_token()
last_event_result = self.get_success(
self.store.get_last_event_pos_in_room_before_stream_ordering(
last_event = self.get_success(
self.store.get_last_event_in_room_before_stream_ordering(
room_id=room_id,
end_token=after_room_token.room_key,
)
)
assert last_event_result is not None
last_event_id, _ = last_event_result
self.assertIsNotNone(last_event_id)
self.assertIsNotNone(last_event)
def test_activity_in_other_rooms(self) -> None:
"""
@@ -382,18 +380,16 @@ class GetLastEventInRoomBeforeStreamOrderingTestCase(HomeserverTestCase):
after_room_token = self.event_sources.get_current_token()
last_event_result = self.get_success(
self.store.get_last_event_pos_in_room_before_stream_ordering(
last_event = self.get_success(
self.store.get_last_event_in_room_before_stream_ordering(
room_id=room_id1,
end_token=after_room_token.room_key,
)
)
assert last_event_result is not None
last_event_id, _ = last_event_result
# Make sure it's the event we expect (which also means we know it's from the
# correct room)
self.assertEqual(last_event_id, event_response["event_id"])
self.assertEqual(last_event, event_response["event_id"])
def test_activity_after_token_has_no_effect(self) -> None:
"""
@@ -412,17 +408,15 @@ class GetLastEventInRoomBeforeStreamOrderingTestCase(HomeserverTestCase):
self.helper.send(room_id1, "after1", tok=user1_tok)
self.helper.send(room_id1, "after2", tok=user1_tok)
last_event_result = self.get_success(
self.store.get_last_event_pos_in_room_before_stream_ordering(
last_event = self.get_success(
self.store.get_last_event_in_room_before_stream_ordering(
room_id=room_id1,
end_token=after_room_token.room_key,
)
)
assert last_event_result is not None
last_event_id, _ = last_event_result
# Make sure it's the last event before the token
self.assertEqual(last_event_id, event_response["event_id"])
self.assertEqual(last_event, event_response["event_id"])
def test_last_event_within_sharded_token(self) -> None:
"""
@@ -463,20 +457,18 @@ class GetLastEventInRoomBeforeStreamOrderingTestCase(HomeserverTestCase):
self.helper.send(room_id1, "after1", tok=user1_tok)
self.helper.send(room_id1, "after2", tok=user1_tok)
last_event_result = self.get_success(
self.store.get_last_event_pos_in_room_before_stream_ordering(
last_event = self.get_success(
self.store.get_last_event_in_room_before_stream_ordering(
room_id=room_id1,
end_token=end_token,
)
)
assert last_event_result is not None
last_event_id, _ = last_event_result
# Should find closest event before the token in room1
# Should find closest event at/before the token in room1
self.assertEqual(
last_event_id,
last_event,
event_response3["event_id"],
f"We expected {event_response3['event_id']} but saw {last_event_id} which corresponds to "
f"We expected {event_response3['event_id']} but saw {last_event} which corresponds to "
+ str(
{
"event1": event_response1["event_id"],
@@ -522,20 +514,18 @@ class GetLastEventInRoomBeforeStreamOrderingTestCase(HomeserverTestCase):
self.helper.send(room_id1, "after1", tok=user1_tok)
self.helper.send(room_id1, "after2", tok=user1_tok)
last_event_result = self.get_success(
self.store.get_last_event_pos_in_room_before_stream_ordering(
last_event = self.get_success(
self.store.get_last_event_in_room_before_stream_ordering(
room_id=room_id1,
end_token=end_token,
)
)
assert last_event_result is not None
last_event_id, _ = last_event_result
# Should find closest event before the token in room1
# Should find closest event at/before the token in room1
self.assertEqual(
last_event_id,
last_event,
event_response2["event_id"],
f"We expected {event_response2['event_id']} but saw {last_event_id} which corresponds to "
f"We expected {event_response2['event_id']} but saw {last_event} which corresponds to "
+ str(
{
"event1": event_response1["event_id"],