Mirror of https://github.com/element-hq/synapse.git

Compare commits: erikj/ss_t ... erikj/expe (7 commits)
| Author | SHA1 | Date |
|---|---|---|
| | d48caa1800 | |
| | e3d5988aa6 | |
| | 0721903ee9 | |
| | ff16dd5af5 | |
| | 8ce087dd00 | |
| | e104003b4f | |
| | e61b6a1cb3 | |
.github/workflows/tests.yml (vendored, 4 changes)

@@ -73,7 +73,7 @@ jobs:
       - 'pyproject.toml'
       - 'poetry.lock'
       - '.github/workflows/tests.yml'


   linting_readme:
       - 'README.rst'

@@ -139,7 +139,7 @@ jobs:
       - name: Semantic checks (ruff)
         # --quiet suppresses the update check.
-        run: poetry run ruff check --quiet .
+        run: poetry run ruff --quiet .

   lint-mypy:
     runs-on: ubuntu-latest
CHANGES.md

@@ -1,10 +1,3 @@
-# Synapse 1.110.0 (2024-07-03)
-
-No significant changes since 1.110.0rc3.
-
-
 # Synapse 1.110.0rc3 (2024-07-02)

 ### Bugfixes

@@ -48,7 +41,7 @@ No significant changes since 1.110.0rc3.
   This is useful for scripts that bootstrap user accounts with initial passwords. ([\#17304](https://github.com/element-hq/synapse/issues/17304))
 - Add support for via query parameter from [MSC4156](https://github.com/matrix-org/matrix-spec-proposals/pull/4156). ([\#17322](https://github.com/element-hq/synapse/issues/17322))
 - Add `is_invite` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17335](https://github.com/element-hq/synapse/issues/17335))
-- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/3916-authentication-for-media.md) by adding a federation /download endpoint. ([\#17350](https://github.com/element-hq/synapse/issues/17350))
+- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md) by adding a federation /download endpoint. ([\#17350](https://github.com/element-hq/synapse/issues/17350))

 ### Bugfixes
README.rst

@@ -179,10 +179,10 @@ desired ``localpart`` in the 'User name' box.
 -----------------------

 Enterprise quality support for Synapse including SLAs is available as part of an
-`Element Server Suite (ESS) <https://element.io/pricing>`_ subscription.
+`Element Server Suite (ESS) <https://element.io/pricing>` subscription.

-If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`_
-and access the `knowledge base <https://ems-docs.element.io>`_.
+If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`
+and access the `knowledge base <https://ems-docs.element.io>`.

 🤝 Community support
 --------------------
changelog.d (entries removed or changed)

@@ -1 +0,0 @@
-Make the release script create a release branch for Complement as well.

@@ -1 +0,0 @@
-Return "required state" in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

@@ -1 +1 @@
-Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/3916-authentication-for-media.md) by adding _matrix/client/v1/media/download endpoint.
+Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md) by adding _matrix/client/v1/media/download endpoint.

@@ -1 +0,0 @@
-Fix broken links in README.

@@ -1 +0,0 @@
-Fix linting errors from new `ruff` version.

@@ -1,3 +0,0 @@
-Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md)
-by adding `_matrix/client/v1/media/thumbnail`, `_matrix/federation/v1/media/thumbnail` endpoints and stabilizing the
-remaining `_matrix/client/v1/media` endpoints.

@@ -1 +0,0 @@
-Clarify that changelog content *and file extension* need to match in order for entries to merge.

@@ -1 +0,0 @@
-Forget all of a user's rooms upon deactivation, enabling future purges.

@@ -1 +0,0 @@
-Declare support for [Matrix 1.11](https://matrix.org/blog/2024/06/20/matrix-v1.11-release/).

@@ -1 +0,0 @@
-MSC3861: allow overriding the introspection endpoint.

@@ -1 +0,0 @@
-Add to-device support to experimental sliding sync implementation.
debian/changelog (vendored, 6 changes)

@@ -1,9 +1,3 @@
-matrix-synapse-py3 (1.110.0) stable; urgency=medium
-
-  * New Synapse release 1.110.0.
-
- -- Synapse Packaging team <packages@matrix.org>  Wed, 03 Jul 2024 09:08:59 -0600
-
 matrix-synapse-py3 (1.110.0~rc3) stable; urgency=medium

   * New Synapse release 1.110.0rc3.
docs/admin_api/experimental_features.md

@@ -1,17 +1,21 @@
 # Experimental Features API

 This API allows a server administrator to enable or disable some experimental features on a per-user
-basis. The currently supported features are:
-- [MSC3881](https://github.com/matrix-org/matrix-spec-proposals/pull/3881): enable remotely toggling push notifications
-  for another client
-- [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575): enable experimental sliding sync support
+basis. The currently supported features are:
+- [MSC3026](https://github.com/matrix-org/matrix-spec-proposals/pull/3026): busy
+  presence state enabled
+- [MSC3881](https://github.com/matrix-org/matrix-spec-proposals/pull/3881): enable remotely toggling push notifications
+  for another client
+- [MSC3967](https://github.com/matrix-org/matrix-spec-proposals/pull/3967): do not require
+  UIA when first uploading cross-signing keys.

 To use it, you will need to authenticate by providing an `access_token`
 for a server admin: see [Admin API](../usage/administration/admin_api/).

 ## Enabling/Disabling Features

 This API allows a server administrator to enable experimental features for a given user. The request must
 provide a body containing the user id and listing the features to enable/disable in the following format:
 ```json
 {

@@ -31,7 +35,7 @@ PUT /_synapse/admin/v1/experimental_features/<user_id>
 ```

 ## Listing Enabled Features

 To list which features are enabled/disabled for a given user send a request to the following API:

 ```

@@ -48,4 +52,4 @@ user like so:
     "msc3967": false
   }
 }
 ```
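For illustration, a minimal sketch of exercising this admin API from Python. It assumes a homeserver reachable at `http://localhost:8008`, a placeholder admin token, and the third-party `requests` library; the `{"features": {...}}` body shape follows the JSON format quoted in the document above, and the listing request is assumed to be a `GET` against the same path.

```python
import requests  # third-party HTTP client, assumed installed

BASE = "http://localhost:8008"  # hypothetical homeserver URL
HEADERS = {"Authorization": "Bearer <admin_access_token>"}  # placeholder token
USER_ID = "@alice:example.com"  # hypothetical user

# Enable MSC3026 and disable MSC3881 for one user.
resp = requests.put(
    f"{BASE}/_synapse/admin/v1/experimental_features/{USER_ID}",
    headers=HEADERS,
    json={"features": {"msc3026": True, "msc3881": False}},
)
resp.raise_for_status()

# List which features are enabled/disabled for the same user.
resp = requests.get(
    f"{BASE}/_synapse/admin/v1/experimental_features/{USER_ID}",
    headers=HEADERS,
)
print(resp.json())  # e.g. {"features": {"msc3026": true, "msc3881": false}}
```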
docs/development/contributing_guide.md

@@ -449,9 +449,9 @@ For example, a fix in PR #1234 would have its changelog entry in
 > The security levels of Florbs are now validated when received
 > via the `/federation/florb` endpoint. Contributed by Jane Matrix.

-If there are multiple pull requests involved in a single bugfix/feature/etc, then the
-content for each `changelog.d` file and file extension should be the same. Towncrier
-will merge the matching files together into a single changelog entry when we come to
+If there are multiple pull requests involved in a single bugfix/feature/etc,
+then the content for each `changelog.d` file should be the same. Towncrier will
+merge the matching files together into a single changelog entry when we come to
 release.

 ### How do I know what to call the changelog file before I create the PR?
poetry.lock (generated, 47 changes)

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.

 [[package]]
 name = "annotated-types"

@@ -182,13 +182,13 @@ files = [

 [[package]]
 name = "certifi"
-version = "2024.7.4"
+version = "2023.7.22"
 description = "Python package for providing Mozilla's CA Bundle."
 optional = false
 python-versions = ">=3.6"
 files = [
-    {file = "certifi-2024.7.4-py3-none-any.whl", hash = "sha256:c198e21b1289c2ab85ee4e67bb4b4ef3ead0892059901a8d5b622f24a1101e90"},
-    {file = "certifi-2024.7.4.tar.gz", hash = "sha256:5a1e7645bc0ec61a09e26c36f6106dd4cf40c6db3a1fb6352b0244e7fb057c7b"},
+    {file = "certifi-2023.7.22-py3-none-any.whl", hash = "sha256:92d6037539857d8206b8f6ae472e8b77db8058fec5937a1ef3f54304089edbb9"},
+    {file = "certifi-2023.7.22.tar.gz", hash = "sha256:539cc1d13202e33ca466e88b2807e29f4c13049d6d87031a3c110744495cb082"},
 ]

@@ -2345,29 +2345,28 @@ files = [

 [[package]]
 name = "ruff"
-version = "0.5.0"
+version = "0.3.7"
 description = "An extremely fast Python linter and code formatter, written in Rust."
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "ruff-0.5.0-py3-none-linux_armv6l.whl", hash = "sha256:ee770ea8ab38918f34e7560a597cc0a8c9a193aaa01bfbd879ef43cb06bd9c4c"},
-    {file = "ruff-0.5.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:38f3b8327b3cb43474559d435f5fa65dacf723351c159ed0dc567f7ab735d1b6"},
-    {file = "ruff-0.5.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:7594f8df5404a5c5c8f64b8311169879f6cf42142da644c7e0ba3c3f14130370"},
-    {file = "ruff-0.5.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:adc7012d6ec85032bc4e9065110df205752d64010bed5f958d25dbee9ce35de3"},
-    {file = "ruff-0.5.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d505fb93b0fabef974b168d9b27c3960714d2ecda24b6ffa6a87ac432905ea38"},
-    {file = "ruff-0.5.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9dc5cfd3558f14513ed0d5b70ce531e28ea81a8a3b1b07f0f48421a3d9e7d80a"},
-    {file = "ruff-0.5.0-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:db3ca35265de239a1176d56a464b51557fce41095c37d6c406e658cf80bbb362"},
-    {file = "ruff-0.5.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b1a321c4f68809fddd9b282fab6a8d8db796b270fff44722589a8b946925a2a8"},
-    {file = "ruff-0.5.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2c4dfcd8d34b143916994b3876b63d53f56724c03f8c1a33a253b7b1e6bf2a7d"},
-    {file = "ruff-0.5.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:81e5facfc9f4a674c6a78c64d38becfbd5e4f739c31fcd9ce44c849f1fad9e4c"},
-    {file = "ruff-0.5.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e589e27971c2a3efff3fadafb16e5aef7ff93250f0134ec4b52052b673cf988d"},
-    {file = "ruff-0.5.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:d2ffbc3715a52b037bcb0f6ff524a9367f642cdc5817944f6af5479bbb2eb50e"},
-    {file = "ruff-0.5.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:cd096e23c6a4f9c819525a437fa0a99d1c67a1b6bb30948d46f33afbc53596cf"},
-    {file = "ruff-0.5.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:46e193b36f2255729ad34a49c9a997d506e58f08555366b2108783b3064a0e1e"},
-    {file = "ruff-0.5.0-py3-none-win32.whl", hash = "sha256:49141d267100f5ceff541b4e06552e98527870eafa1acc9dec9139c9ec5af64c"},
-    {file = "ruff-0.5.0-py3-none-win_amd64.whl", hash = "sha256:e9118f60091047444c1b90952736ee7b1792910cab56e9b9a9ac20af94cd0440"},
-    {file = "ruff-0.5.0-py3-none-win_arm64.whl", hash = "sha256:ed5c4df5c1fb4518abcb57725b576659542bdbe93366f4f329e8f398c4b71178"},
-    {file = "ruff-0.5.0.tar.gz", hash = "sha256:eb641b5873492cf9bd45bc9c5ae5320648218e04386a5f0c264ad6ccce8226a1"},
+    {file = "ruff-0.3.7-py3-none-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:0e8377cccb2f07abd25e84fc5b2cbe48eeb0fea9f1719cad7caedb061d70e5ce"},
+    {file = "ruff-0.3.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:15a4d1cc1e64e556fa0d67bfd388fed416b7f3b26d5d1c3e7d192c897e39ba4b"},
+    {file = "ruff-0.3.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d28bdf3d7dc71dd46929fafeec98ba89b7c3550c3f0978e36389b5631b793663"},
+    {file = "ruff-0.3.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:379b67d4f49774ba679593b232dcd90d9e10f04d96e3c8ce4a28037ae473f7bb"},
+    {file = "ruff-0.3.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c060aea8ad5ef21cdfbbe05475ab5104ce7827b639a78dd55383a6e9895b7c51"},
+    {file = "ruff-0.3.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:ebf8f615dde968272d70502c083ebf963b6781aacd3079081e03b32adfe4d58a"},
+    {file = "ruff-0.3.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d48098bd8f5c38897b03604f5428901b65e3c97d40b3952e38637b5404b739a2"},
+    {file = "ruff-0.3.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:da8a4fda219bf9024692b1bc68c9cff4b80507879ada8769dc7e985755d662ea"},
+    {file = "ruff-0.3.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c44e0149f1d8b48c4d5c33d88c677a4aa22fd09b1683d6a7ff55b816b5d074f"},
+    {file = "ruff-0.3.7-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:3050ec0af72b709a62ecc2aca941b9cd479a7bf2b36cc4562f0033d688e44fa1"},
+    {file = "ruff-0.3.7-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:a29cc38e4c1ab00da18a3f6777f8b50099d73326981bb7d182e54a9a21bb4ff7"},
+    {file = "ruff-0.3.7-py3-none-musllinux_1_2_i686.whl", hash = "sha256:5b15cc59c19edca917f51b1956637db47e200b0fc5e6e1878233d3a938384b0b"},
+    {file = "ruff-0.3.7-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e491045781b1e38b72c91247cf4634f040f8d0cb3e6d3d64d38dcf43616650b4"},
+    {file = "ruff-0.3.7-py3-none-win32.whl", hash = "sha256:bc931de87593d64fad3a22e201e55ad76271f1d5bfc44e1a1887edd0903c7d9f"},
+    {file = "ruff-0.3.7-py3-none-win_amd64.whl", hash = "sha256:5ef0e501e1e39f35e03c2acb1d1238c595b8bb36cf7a170e7c1df1b73da00e74"},
+    {file = "ruff-0.3.7-py3-none-win_arm64.whl", hash = "sha256:789e144f6dc7019d1f92a812891c645274ed08af6037d11fc65fcbc183b7d59f"},
+    {file = "ruff-0.3.7.tar.gz", hash = "sha256:d5c1aebee5162c2226784800ae031f660c350e7a3402c4d1f8ea4e97e232e3ba"},
 ]

 [[package]]

@@ -3202,4 +3201,4 @@ user-search = ["pyicu"]
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.8.0"
-content-hash = "3372a97db99050a34f8eddad2ddf8efe8b7b704b6123df4a3e36ddc171e8f34d"
+content-hash = "e8d5806e10eb69bc06900fde18ea3df38f38490ab6baa73fe4a563dfb6abacba"
pyproject.toml

@@ -43,7 +43,6 @@ target-version = ['py38', 'py39', 'py310', 'py311']
 [tool.ruff]
 line-length = 88

-[tool.ruff.lint]
 # See https://beta.ruff.rs/docs/rules/#error-e
 # for error codes. The ones we ignore are:
 # E501: Line too long (black enforces this for us)

@@ -97,7 +96,7 @@ module-name = "synapse.synapse_rust"

 [tool.poetry]
 name = "matrix-synapse"
-version = "1.110.0"
+version = "1.110.0rc3"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"

@@ -322,7 +321,7 @@ all = [
 # This helps prevents merge conflicts when running a batch of dependabot updates.
 isort = ">=5.10.1"
 black = ">=22.7.0"
-ruff = "0.5.0"
+ruff = "0.3.7"
 # Type checking only works with the pydantic.v1 compat module from pydantic v2
 pydantic = "^2"
scripts-dev/lint.sh

@@ -112,7 +112,7 @@ python3 -m black "${files[@]}"

 # Catch any common programming mistakes in Python code.
 # --quiet suppresses the update check.
-ruff check --quiet --fix "${files[@]}"
+ruff --quiet --fix "${files[@]}"

 # Catch any common programming mistakes in Rust code.
 #
scripts-dev/release.py

@@ -70,7 +70,6 @@ def cli() -> None:
     pip install -e .[dev]

 - A checkout of the sytest repository at ../sytest
-- A checkout of the complement repository at ../complement

 Then to use:

@@ -113,12 +112,10 @@ def _prepare() -> None:
     # Make sure we're in a git repo.
     synapse_repo = get_repo_and_check_clean_checkout()
     sytest_repo = get_repo_and_check_clean_checkout("../sytest", "sytest")
-    complement_repo = get_repo_and_check_clean_checkout("../complement", "complement")

     click.secho("Updating Synapse and Sytest git repos...")
     synapse_repo.remote().fetch()
     sytest_repo.remote().fetch()
-    complement_repo.remote().fetch()

     # Get the current version and AST from root Synapse module.
     current_version = get_package_version()

@@ -211,15 +208,7 @@ def _prepare() -> None:
         "Which branch should the release be based on?", default=default
     )

-    for repo_name, repo in {
-        "synapse": synapse_repo,
-        "sytest": sytest_repo,
-        "complement": complement_repo,
-    }.items():
-        # Special case for Complement: `develop` maps to `main`
-        if repo_name == "complement" and branch_name == "develop":
-            branch_name = "main"
-
+    for repo_name, repo in {"synapse": synapse_repo, "sytest": sytest_repo}.items():
         base_branch = find_ref(repo, branch_name)
         if not base_branch:
             print(f"Could not find base branch {branch_name} for {repo_name}!")

@@ -242,12 +231,6 @@ def _prepare() -> None:
     if click.confirm("Push new SyTest branch?", default=True):
         sytest_repo.git.push("-u", sytest_repo.remote().name, release_branch_name)

-    # Same for Complement
-    if click.confirm("Push new Complement branch?", default=True):
-        complement_repo.git.push(
-            "-u", complement_repo.remote().name, release_branch_name
-        )
-
     # Switch to the release branch and ensure it's up to date.
     synapse_repo.git.checkout(release_branch_name)
     update_branch(synapse_repo)

@@ -647,9 +630,6 @@ def _merge_back() -> None:
     else:
         # Full release
         sytest_repo = get_repo_and_check_clean_checkout("../sytest", "sytest")
-        complement_repo = get_repo_and_check_clean_checkout(
-            "../complement", "complement"
-        )

     if click.confirm(f"Merge {branch_name} → master?", default=True):
         _merge_into(synapse_repo, branch_name, "master")

@@ -663,9 +643,6 @@ def _merge_back() -> None:
     if click.confirm("On SyTest, merge master → develop?", default=True):
         _merge_into(sytest_repo, "master", "develop")

-    if click.confirm(f"On Complement, merge {branch_name} → main?", default=True):
-        _merge_into(complement_repo, branch_name, "main")


 @cli.command()
 def announce() -> None:
scripts-dev (generate_workers_map script)

@@ -44,7 +44,7 @@ logger = logging.getLogger("generate_workers_map")


 class MockHomeserver(HomeServer):
-    DATASTORE_CLASS = DataStore
+    DATASTORE_CLASS = DataStore  # type: ignore

     def __init__(self, config: HomeServerConfig, worker_app: Optional[str]) -> None:
         super().__init__(config.server.server_name, config=config)
synapse/_scripts/update_synapse_database.py

@@ -41,7 +41,7 @@ logger = logging.getLogger("update_database")


 class MockHomeserver(HomeServer):
-    DATASTORE_CLASS = DataStore
+    DATASTORE_CLASS = DataStore  # type: ignore [assignment]

     def __init__(self, config: HomeServerConfig):
         super().__init__(
synapse/api/auth/msc3861_delegated.py

@@ -145,18 +145,6 @@ class MSC3861DelegatedAuth(BaseAuth):
         # metadata.validate_introspection_endpoint()
         return metadata

-    async def _introspection_endpoint(self) -> str:
-        """
-        Returns the introspection endpoint of the issuer
-
-        It uses the config option if set, otherwise it will use OIDC discovery to get it
-        """
-        if self._config.introspection_endpoint is not None:
-            return self._config.introspection_endpoint
-
-        metadata = await self._load_metadata()
-        return metadata.get("introspection_endpoint")
-
     async def _introspect_token(self, token: str) -> IntrospectionToken:
         """
         Send a token to the introspection endpoint and returns the introspection response

@@ -173,7 +161,8 @@ class MSC3861DelegatedAuth(BaseAuth):
         Returns:
             The introspection response
         """
-        introspection_endpoint = await self._introspection_endpoint()
+        metadata = await self._issuer_metadata.get()
+        introspection_endpoint = metadata.get("introspection_endpoint")
         raw_headers: Dict[str, str] = {
             "Content-Type": "application/x-www-form-urlencoded",
             "User-Agent": str(self._http_client.user_agent, "utf-8"),
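The helper removed above implemented a simple override-or-discover rule. As a standalone sketch (illustrative names, not Synapse's actual classes), the logic is:

```python
from typing import Awaitable, Callable, Dict, Optional


async def resolve_introspection_endpoint(
    configured_endpoint: Optional[str],
    load_metadata: Callable[[], Awaitable[Dict[str, str]]],
) -> Optional[str]:
    # Prefer the explicitly configured MSC3861 `introspection_endpoint`
    # option when it is set...
    if configured_endpoint is not None:
        return configured_endpoint
    # ...otherwise fall back to the issuer's OIDC discovery metadata.
    metadata = await load_metadata()
    return metadata.get("introspection_endpoint")
```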
synapse/app/admin_cmd.py

@@ -110,7 +110,7 @@ class AdminCmdStore(


 class AdminCmdServer(HomeServer):
-    DATASTORE_CLASS = AdminCmdStore
+    DATASTORE_CLASS = AdminCmdStore  # type: ignore


 async def export_data_command(hs: HomeServer, args: argparse.Namespace) -> None:

synapse/app/generic_worker.py

@@ -163,7 +163,7 @@ class GenericWorkerStore(


 class GenericWorkerServer(HomeServer):
-    DATASTORE_CLASS = GenericWorkerStore
+    DATASTORE_CLASS = GenericWorkerStore  # type: ignore

     def _listen_http(self, listener_config: ListenerConfig) -> None:
         assert listener_config.http_options is not None

synapse/app/homeserver.py

@@ -81,7 +81,7 @@ def gz_wrap(r: Resource) -> Resource:


 class SynapseHomeServer(HomeServer):
-    DATASTORE_CLASS = DataStore
+    DATASTORE_CLASS = DataStore  # type: ignore

     def _listener_http(
         self,
synapse/config/experimental.py

@@ -140,12 +140,6 @@ class MSC3861:
         ("experimental", "msc3861", "client_auth_method"),
     )

-    introspection_endpoint: Optional[str] = attr.ib(
-        default=None,
-        validator=attr.validators.optional(attr.validators.instance_of(str)),
-    )
-    """The URL of the introspection endpoint used to validate access tokens."""
-
     account_management_url: Optional[str] = attr.ib(
         default=None,
         validator=attr.validators.optional(attr.validators.instance_of(str)),

@@ -443,6 +437,10 @@ class ExperimentalConfig(Config):
             "msc3823_account_suspension", False
         )

+        self.msc3916_authenticated_media_enabled = experimental.get(
+            "msc3916_authenticated_media_enabled", False
+        )
+
         # MSC4151: Report room API (Client-Server API)
         self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)
synapse/federation/sender/per_destination_queue.py

@@ -322,6 +322,7 @@ class PerDestinationQueue:
         )

     async def _transaction_transmission_loop(self) -> None:
+        pending_pdus: List[EventBase] = []
         try:
             self.transmission_loop_running = True

@@ -337,6 +338,7 @@ class PerDestinationQueue:
                 # not caught up yet
                 return

+            pending_pdus = []
             while True:
                 self._new_data_to_send = False
synapse/federation/transport/server/__init__.py

@@ -33,7 +33,6 @@ from synapse.federation.transport.server.federation import (
     FEDERATION_SERVLET_CLASSES,
     FederationAccountStatusServlet,
     FederationMediaDownloadServlet,
-    FederationMediaThumbnailServlet,
     FederationUnstableClientKeysClaimServlet,
 )
 from synapse.http.server import HttpServer, JsonResource

@@ -317,10 +316,7 @@ def register_servlets(
         ):
             continue

-        if (
-            servletclass == FederationMediaDownloadServlet
-            or servletclass == FederationMediaThumbnailServlet
-        ):
+        if servletclass == FederationMediaDownloadServlet:
             if not hs.config.server.enable_media_repo:
                 continue

synapse/federation/transport/server/_base.py

@@ -363,8 +363,6 @@ class BaseFederationServlet:
                         if (
                             func.__self__.__class__.__name__  # type: ignore
                             == "FederationMediaDownloadServlet"
-                            or func.__self__.__class__.__name__  # type: ignore
-                            == "FederationMediaThumbnailServlet"
                         ):
                             response = await func(
                                 origin, content, request, *args, **kwargs

@@ -377,8 +375,6 @@ class BaseFederationServlet:
                         if (
                             func.__self__.__class__.__name__  # type: ignore
                             == "FederationMediaDownloadServlet"
-                            or func.__self__.__class__.__name__  # type: ignore
-                            == "FederationMediaThumbnailServlet"
                         ):
                             response = await func(
                                 origin, content, request, *args, **kwargs
synapse/federation/transport/server/federation.py

@@ -46,13 +46,11 @@ from synapse.http.servlet import (
     parse_boolean_from_args,
     parse_integer,
     parse_integer_from_args,
-    parse_string,
     parse_string_from_args,
     parse_strings_from_args,
 )
 from synapse.http.site import SynapseRequest
 from synapse.media._base import DEFAULT_MAX_TIMEOUT_MS, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
-from synapse.media.thumbnailer import ThumbnailProvider
 from synapse.types import JsonDict
 from synapse.util import SYNAPSE_VERSION
 from synapse.util.ratelimitutils import FederationRateLimiter

@@ -828,59 +826,6 @@ class FederationMediaDownloadServlet(BaseFederationServerServlet):
         )


-class FederationMediaThumbnailServlet(BaseFederationServerServlet):
-    """
-    Implementation of new federation media `/thumbnail` endpoint outlined in MSC3916. Returns
-    a multipart/mixed response consisting of a JSON object and the requested media
-    item. This endpoint only returns local media.
-    """
-
-    PATH = "/media/thumbnail/(?P<media_id>[^/]*)"
-    RATELIMIT = True
-
-    def __init__(
-        self,
-        hs: "HomeServer",
-        ratelimiter: FederationRateLimiter,
-        authenticator: Authenticator,
-        server_name: str,
-    ):
-        super().__init__(hs, authenticator, ratelimiter, server_name)
-        self.media_repo = self.hs.get_media_repository()
-        self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
-        self.thumbnail_provider = ThumbnailProvider(
-            hs, self.media_repo, self.media_repo.media_storage
-        )
-
-    async def on_GET(
-        self,
-        origin: Optional[str],
-        content: Literal[None],
-        request: SynapseRequest,
-        media_id: str,
-    ) -> None:
-
-        width = parse_integer(request, "width", required=True)
-        height = parse_integer(request, "height", required=True)
-        method = parse_string(request, "method", "scale")
-        # TODO Parse the Accept header to get an prioritised list of thumbnail types.
-        m_type = "image/png"
-        max_timeout_ms = parse_integer(
-            request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
-        )
-        max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
-
-        if self.dynamic_thumbnails:
-            await self.thumbnail_provider.select_or_generate_local_thumbnail(
-                request, media_id, width, height, method, m_type, max_timeout_ms, True
-            )
-        else:
-            await self.thumbnail_provider.respond_local_thumbnail(
-                request, media_id, width, height, method, m_type, max_timeout_ms, True
-            )
-        self.media_repo.mark_recently_accessed(None, media_id)
-
-
 FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
     FederationSendServlet,
     FederationEventServlet,

@@ -913,5 +858,4 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
     FederationMakeKnockServlet,
     FederationAccountStatusServlet,
     FederationMediaDownloadServlet,
-    FederationMediaThumbnailServlet,
 )
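For reference, the removed servlet's parameter handling corresponds to a request of roughly this shape (a sketch; the remote host and media ID are placeholders, and the path prefix follows the `_matrix/federation/v1/media/thumbnail` endpoint named in the changelog entries above):

```python
from urllib.parse import urlencode

# Hypothetical GET against the (removed) federation thumbnail endpoint.
params = urlencode(
    {
        "width": 96,          # required integer
        "height": 96,         # required integer
        "method": "scale",    # defaults to "scale" if omitted
        "timeout_ms": 20000,  # clamped to MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
    }
)
url = (
    "https://remote.example.com"
    f"/_matrix/federation/v1/media/thumbnail/someMediaId?{params}"
)
```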
synapse/handlers/deactivate_account.py

@@ -283,10 +283,6 @@ class DeactivateAccountHandler:
                     ratelimit=False,
                     require_consent=False,
                 )
-
-                # Mark the room forgotten too, because they won't be able to do this
-                # for us. This may lead to the room being purged eventually.
-                await self._room_member_handler.forget(user, room_id)
             except Exception:
                 logger.exception(
                     "Failed to part user %r from room %r: ignoring and continuing",
synapse/handlers/sliding_sync.py

@@ -18,7 +18,7 @@
 #
 #
 import logging
-from typing import TYPE_CHECKING, Any, Dict, Final, List, Optional, Set, Tuple
+from typing import TYPE_CHECKING, Any, Dict, List, Optional, Set, Tuple

 import attr
 from immutabledict import immutabledict

@@ -39,7 +39,6 @@ from synapse.types import (
     PersistedEventPosition,
     Requester,
     RoomStreamToken,
-    StateMap,
     StreamKeyType,
     StreamToken,
     UserID,
@@ -91,186 +90,14 @@ class RoomSyncConfig:

     Attributes:
         timeline_limit: The maximum number of events to return in the timeline.
-        required_state_map: Map from state event type to state_keys requested for the
-            room. The values are close to `StateKey` but actually use a syntax where you
-            can provide `*` wildcard and `$LAZY` for lazy-loading room members.
+        required_state: The set of state events requested for the room. The
+            values are close to `StateKey` but actually use a syntax where you can
+            provide `*` wildcard and `$LAZY` for lazy room members as the `state_key` part
+            of the tuple (type, state_key).
     """

     timeline_limit: int
-    required_state_map: Dict[str, Set[str]]
-
-    @classmethod
-    def from_room_config(
-        cls,
-        room_params: SlidingSyncConfig.CommonRoomParameters,
-    ) -> "RoomSyncConfig":
-        """
-        Create a `RoomSyncConfig` from a `SlidingSyncList`/`RoomSubscription` config.
-
-        Args:
-            room_params: `SlidingSyncConfig.SlidingSyncList` or `SlidingSyncConfig.RoomSubscription`
-        """
-        required_state_map: Dict[str, Set[str]] = {}
-        for (
-            state_type,
-            state_key,
-        ) in room_params.required_state:
-            # If we already have a wildcard for this specific `state_key`, we don't need
-            # to add it since the wildcard already covers it.
-            if state_key in required_state_map.get(StateValues.WILDCARD, set()):
-                continue
-
-            # If we already have a wildcard `state_key` for this `state_type`, we don't need
-            # to add anything else
-            if StateValues.WILDCARD in required_state_map.get(state_type, set()):
-                continue
-
-            # If we're getting wildcards for the `state_type` and `state_key`, that's
-            # all that matters so get rid of any other entries
-            if state_type == StateValues.WILDCARD and state_key == StateValues.WILDCARD:
-                required_state_map = {StateValues.WILDCARD: {StateValues.WILDCARD}}
-                # We can break, since we don't need to add anything else
-                break
-
-            # If we're getting a wildcard for the `state_type`, get rid of any other
-            # entries with the same `state_key`, since the wildcard will cover it already.
-            elif state_type == StateValues.WILDCARD:
-                # Get rid of any entries that match the `state_key`
-                #
-                # Make a copy so we don't run into an error: `dictionary changed size
-                # during iteration`, when we remove items
-                for (
-                    existing_state_type,
-                    existing_state_key_set,
-                ) in list(required_state_map.items()):
-                    # Make a copy so we don't run into an error: `Set changed size during
-                    # iteration`, when we filter out and remove items
-                    for existing_state_key in existing_state_key_set.copy():
-                        if existing_state_key == state_key:
-                            existing_state_key_set.remove(state_key)
-
-                    # If we've the left the `set()` empty, remove it from the map
-                    if existing_state_key_set == set():
-                        required_state_map.pop(existing_state_type, None)
-
-            # If we're getting a wildcard `state_key`, get rid of any other state_keys
-            # for this `state_type` since the wildcard will cover it already.
-            if state_key == StateValues.WILDCARD:
-                required_state_map[state_type] = {state_key}
-            # Otherwise, just add it to the set
-            else:
-                if required_state_map.get(state_type) is None:
-                    required_state_map[state_type] = {state_key}
-                else:
-                    required_state_map[state_type].add(state_key)
-
-        return cls(
-            timeline_limit=room_params.timeline_limit,
-            required_state_map=required_state_map,
-        )
-
-    def deep_copy(self) -> "RoomSyncConfig":
-        required_state_map: Dict[str, Set[str]] = {
-            state_type: state_key_set.copy()
-            for state_type, state_key_set in self.required_state_map.items()
-        }
-
-        return RoomSyncConfig(
-            timeline_limit=self.timeline_limit,
-            required_state_map=required_state_map,
-        )
-
-    def combine_room_sync_config(
-        self, other_room_sync_config: "RoomSyncConfig"
-    ) -> None:
-        """
-        Combine this `RoomSyncConfig` with another `RoomSyncConfig` and take the
-        superset union of the two.
-        """
-        # Take the highest timeline limit
-        if self.timeline_limit < other_room_sync_config.timeline_limit:
-            self.timeline_limit = other_room_sync_config.timeline_limit
-
-        # Union the required state
-        for (
-            state_type,
-            state_key_set,
-        ) in other_room_sync_config.required_state_map.items():
-            # If we already have a wildcard for everything, we don't need to add
-            # anything else
-            if StateValues.WILDCARD in self.required_state_map.get(
-                StateValues.WILDCARD, set()
-            ):
-                break
-
-            # If we already have a wildcard `state_key` for this `state_type`, we don't need
-            # to add anything else
-            if StateValues.WILDCARD in self.required_state_map.get(state_type, set()):
-                continue
-
-            # If we're getting wildcards for the `state_type` and `state_key`, that's
-            # all that matters so get rid of any other entries
-            if (
-                state_type == StateValues.WILDCARD
-                and StateValues.WILDCARD in state_key_set
-            ):
-                self.required_state_map = {state_type: {StateValues.WILDCARD}}
-                # We can break, since we don't need to add anything else
-                break
-
-            for state_key in state_key_set:
-                # If we already have a wildcard for this specific `state_key`, we don't need
-                # to add it since the wildcard already covers it.
-                if state_key in self.required_state_map.get(
-                    StateValues.WILDCARD, set()
-                ):
-                    continue
-
-                # If we're getting a wildcard for the `state_type`, get rid of any other
-                # entries with the same `state_key`, since the wildcard will cover it already.
-                if state_type == StateValues.WILDCARD:
-                    # Get rid of any entries that match the `state_key`
-                    #
-                    # Make a copy so we don't run into an error: `dictionary changed size
-                    # during iteration`, when we remove items
-                    for existing_state_type, existing_state_key_set in list(
-                        self.required_state_map.items()
-                    ):
-                        # Make a copy so we don't run into an error: `Set changed size during
-                        # iteration`, when we filter out and remove items
-                        for existing_state_key in existing_state_key_set.copy():
-                            if existing_state_key == state_key:
-                                existing_state_key_set.remove(state_key)
-
-                        # If we've the left the `set()` empty, remove it from the map
-                        if existing_state_key_set == set():
-                            self.required_state_map.pop(existing_state_type, None)
-
-                # If we're getting a wildcard `state_key`, get rid of any other state_keys
-                # for this `state_type` since the wildcard will cover it already.
-                if state_key == StateValues.WILDCARD:
-                    self.required_state_map[state_type] = {state_key}
-                    break
-                # Otherwise, just add it to the set
-                else:
-                    if self.required_state_map.get(state_type) is None:
-                        self.required_state_map[state_type] = {state_key}
-                    else:
-                        self.required_state_map[state_type].add(state_key)
-
-
-class StateValues:
-    """
-    Understood values of the (type, state_key) tuple in `required_state`.
-    """
-
-    # Include all state events of the given type
-    WILDCARD: Final = "*"
-    # Lazy-load room membership events (include room membership events for any event
-    # `sender` in the timeline). We only give special meaning to this value when it's a
-    # `state_key`.
-    LAZY: Final = "$LAZY"
+    required_state: Set[Tuple[str, str]]


 @attr.s(slots=True, frozen=True, auto_attribs=True)
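To make the removed wildcard-folding rules concrete, here is a standalone, simplified sketch (plain dicts and sets, independent of Synapse's classes; it omits the pruning of existing entries that a later type-wildcard makes redundant):

```python
from typing import Dict, List, Set, Tuple

WILDCARD = "*"  # mirrors StateValues.WILDCARD above


def fold_required_state(
    required_state: List[Tuple[str, str]],
) -> Dict[str, Set[str]]:
    required_state_map: Dict[str, Set[str]] = {}
    for state_type, state_key in required_state:
        # A ("*", "*") entry supersedes everything else.
        if state_type == WILDCARD and state_key == WILDCARD:
            return {WILDCARD: {WILDCARD}}
        # Skip entries already covered by an existing wildcard.
        if state_key in required_state_map.get(WILDCARD, set()):
            continue
        if WILDCARD in required_state_map.get(state_type, set()):
            continue
        if state_key == WILDCARD:
            # A wildcard state_key replaces any specific keys for this type.
            required_state_map[state_type] = {WILDCARD}
        else:
            required_state_map.setdefault(state_type, set()).add(state_key)
    return required_state_map


print(
    fold_required_state(
        [
            ("m.room.member", "@alice:example.com"),
            ("m.room.member", "*"),
            ("m.room.topic", ""),
        ]
    )
)
# -> {"m.room.member": {"*"}, "m.room.topic": {""}}
```

And a hypothetical use of the removed `deep_copy`/`combine_room_sync_config` pair, showing the superset semantics when two lists request the same room:

```python
# Assuming the RoomSyncConfig class shown in the removed code above.
list_a = RoomSyncConfig(
    timeline_limit=10,
    required_state_map={"m.room.member": {"$LAZY"}},
)
list_b = RoomSyncConfig(
    timeline_limit=50,
    required_state_map={"m.room.topic": {""}},
)

merged = list_a.deep_copy()
merged.combine_room_sync_config(list_b)
# merged.timeline_limit == 50 (the highest limit wins)
# merged.required_state_map ==
#     {"m.room.member": {"$LAZY"}, "m.room.topic": {""}}
```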
@@ -415,8 +242,6 @@ class SlidingSyncHandler:

         # Assemble sliding window lists
         lists: Dict[str, SlidingSyncResult.SlidingWindowList] = {}
-        # Keep track of the rooms that we're going to display and need to fetch more
-        # info about
         relevant_room_map: Dict[str, RoomSyncConfig] = {}
         if sync_config.lists:
             # Get all of the room IDs that the user should be able to see in the sync
@@ -435,76 +260,49 @@ class SlidingSyncHandler:
                     sync_config.user, sync_room_map, list_config.filters, to_token
                 )

                 # Sort the list
                 sorted_room_info = await self.sort_rooms(
                     filtered_sync_room_map, to_token
                 )

-                # Find which rooms are partially stated and may need to be filtered out
-                # depending on the `required_state` requested (see below).
-                partial_state_room_map = await self.store.is_partial_state_room_batched(
-                    filtered_sync_room_map.keys()
-                )
-
-                # Since creating the `RoomSyncConfig` takes some work, let's just do it
-                # once and make a copy whenever we need it.
-                room_sync_config = RoomSyncConfig.from_room_config(list_config)
-                membership_state_keys = room_sync_config.required_state_map.get(
-                    EventTypes.Member
-                )
-                lazy_loading = (
-                    membership_state_keys is not None
-                    and len(membership_state_keys) == 1
-                    and StateValues.LAZY in membership_state_keys
-                )
-
                 ops: List[SlidingSyncResult.SlidingWindowList.Operation] = []
                 if list_config.ranges:
                     for range in list_config.ranges:
-                        room_ids_in_list: List[str] = []
-
-                        # We're going to loop through the sorted list of rooms starting
-                        # at the range start index and keep adding rooms until we fill
-                        # up the range or run out of rooms.
-                        #
-                        # Both sides of range are inclusive so we `+ 1`
-                        max_num_rooms = range[1] - range[0] + 1
-                        for room_id, _ in sorted_room_info[range[0] :]:
-                            if len(room_ids_in_list) >= max_num_rooms:
-                                break
-
-                            # Exclude partially-stated rooms unless the `required_state`
-                            # only has `["m.room.member", "$LAZY"]` for membership
-                            # (lazy-loading room members).
-                            if partial_state_room_map.get(room_id) and not lazy_loading:
-                                continue
-
-                            # Take the superset of the `RoomSyncConfig` for each room.
-                            #
-                            # Update our `relevant_room_map` with the room we're going
-                            # to display and need to fetch more info about.
-                            existing_room_sync_config = relevant_room_map.get(room_id)
-                            if existing_room_sync_config is not None:
-                                existing_room_sync_config.combine_room_sync_config(
-                                    room_sync_config
-                                )
-                            else:
-                                # Make a copy so if we modify it later, it doesn't
-                                # affect all references.
-                                relevant_room_map[room_id] = (
-                                    room_sync_config.deep_copy()
-                                )
-
-                            room_ids_in_list.append(room_id)
+                        sliced_room_ids = [
+                            room_id
+                            # Both sides of range are inclusive
+                            for room_id, _ in sorted_room_info[range[0] : range[1] + 1]
+                        ]

                         ops.append(
                             SlidingSyncResult.SlidingWindowList.Operation(
                                 op=OperationType.SYNC,
                                 range=range,
-                                room_ids=room_ids_in_list,
+                                room_ids=sliced_room_ids,
                             )
                         )

+                        # Take the superset of the `RoomSyncConfig` for each room
+                        for room_id in sliced_room_ids:
+                            if relevant_room_map.get(room_id) is not None:
+                                # Take the highest timeline limit
+                                if (
+                                    relevant_room_map[room_id].timeline_limit
+                                    < list_config.timeline_limit
+                                ):
+                                    relevant_room_map[room_id].timeline_limit = (
+                                        list_config.timeline_limit
+                                    )
+
+                                # Union the required state
+                                relevant_room_map[room_id].required_state.update(
+                                    list_config.required_state
+                                )
+                            else:
+                                relevant_room_map[room_id] = RoomSyncConfig(
+                                    timeline_limit=list_config.timeline_limit,
+                                    required_state=set(list_config.required_state),
+                                )

                 lists[list_key] = SlidingSyncResult.SlidingWindowList(
                     count=len(sorted_room_info),
                     ops=ops,
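Both ends of a sliding-window range are inclusive, which is where the `+ 1` in both versions above comes from; a quick standalone check:

```python
# Inclusive range arithmetic used when filling a sliding window list.
sorted_room_ids = ["!a:x", "!b:x", "!c:x", "!d:x", "!e:x"]  # pretend sort output

requested_range = (1, 3)  # both sides inclusive
max_num_rooms = requested_range[1] - requested_range[0] + 1  # 3 rooms, not 2

room_ids_in_list = sorted_room_ids[requested_range[0] : requested_range[1] + 1]
assert room_ids_in_list == ["!b:x", "!c:x", "!d:x"]
assert len(room_ids_in_list) == max_num_rooms
```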
@@ -526,15 +324,11 @@ class SlidingSyncHandler:

             rooms[room_id] = room_sync_result

-        extensions = await self.get_extensions_response(
-            sync_config=sync_config, to_token=to_token
-        )
-
         return SlidingSyncResult(
             next_pos=to_token,
             lists=lists,
             rooms=rooms,
-            extensions=extensions,
+            extensions={},
         )

     async def get_sync_room_ids_for_user(
@@ -857,6 +651,9 @@ class SlidingSyncHandler:
         user_id = user.to_string()

         # TODO: Apply filters
+        #
+        # TODO: Exclude partially stated rooms unless the `required_state` has
+        # `["m.room.member", "$LAZY"]`

         filtered_room_id_set = set(sync_room_map.keys())
@@ -897,18 +694,16 @@ class SlidingSyncHandler:
         if filters.is_encrypted is not None:
             # Make a copy so we don't run into an error: `Set changed size during
             # iteration`, when we filter out and remove items
-            for room_id in filtered_room_id_set.copy():
+            for room_id in list(filtered_room_id_set):
                 state_at_to_token = await self.storage_controllers.state.get_state_at(
                     room_id,
                     to_token,
                     state_filter=StateFilter.from_types(
                         [(EventTypes.RoomEncryption, "")]
                     ),
-                    # Partially-stated rooms should have all state events except for the
-                    # membership events so we don't need to wait because we only care
-                    # about retrieving the `EventTypes.RoomEncryption` state event here.
-                    # Plus we don't want to block the whole sync waiting for this one
-                    # room.
+                    # Partially stated rooms should have all state events except for the
+                    # membership events so we don't need to wait. Plus we don't want to
+                    # block the whole sync waiting for this one room.
                     await_full_state=False,
                 )
                 is_encrypted = state_at_to_token.get((EventTypes.RoomEncryption, ""))

@@ -924,7 +719,7 @@ class SlidingSyncHandler:
         if filters.is_invite is not None:
             # Make a copy so we don't run into an error: `Set changed size during
             # iteration`, when we filter out and remove items
-            for room_id in filtered_room_id_set.copy():
+            for room_id in list(filtered_room_id_set):
                 room_for_user = sync_room_map[room_id]
                 # If we're looking for invite rooms, filter out rooms that the user is
                 # not invited to and vice versa

@@ -942,7 +737,7 @@ class SlidingSyncHandler:
         if filters.room_types is not None or filters.not_room_types is not None:
             # Make a copy so we don't run into an error: `Set changed size during
             # iteration`, when we filter out and remove items
-            for room_id in filtered_room_id_set.copy():
+            for room_id in list(filtered_room_id_set):
                 create_event = await self.store.get_create_event_for_room(room_id)
                 room_type = create_event.content.get(EventContentFields.ROOM_TYPE)
                 if (
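All three filters share the same copy-before-iterating idiom (whether spelled `.copy()` or `list(...)`); in isolation:

```python
# Removing from a set while iterating over it directly raises
# "RuntimeError: Set changed size during iteration", hence the copy.
filtered_room_id_set = {"!a:x", "!b:x", "!c:x"}

for room_id in list(filtered_room_id_set):  # or filtered_room_id_set.copy()
    if room_id == "!b:x":  # stand-in for a room failing the filter
        filtered_room_id_set.remove(room_id)

assert filtered_room_id_set == {"!a:x", "!c:x"}
```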
@@ -1048,7 +843,7 @@ class SlidingSyncHandler:

         # Assemble the list of timeline events
         #
-        # FIXME: It would be nice to make the `rooms` response more uniform regardless of
+        # It would be nice to make the `rooms` response more uniform regardless of
         # membership. Currently, we have to make all of these optional because
         # `invite`/`knock` rooms only have `stripped_state`. See
         # https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1653045932
@@ -1215,136 +1010,6 @@ class SlidingSyncHandler:
         # state reset happened. Perhaps we should indicate this by setting `initial:
         # True` and empty `required_state`.

-        # TODO: Since we can't determine whether we've already sent a room down this
-        # Sliding Sync connection before (we plan to add this optimization in the
-        # future), we're always returning the requested room state instead of
-        # updates.
-        initial = True
-
-        # Fetch the required state for the room
-        #
-        # No `required_state` for invite/knock rooms (just `stripped_state`)
-        #
-        # FIXME: It would be nice to make the `rooms` response more uniform regardless
-        # of membership. Currently, we have to make this optional because
-        # `invite`/`knock` rooms only have `stripped_state`. See
-        # https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1653045932
-        room_state: Optional[StateMap[EventBase]] = None
-        if rooms_membership_for_user_at_to_token.membership not in (
-            Membership.INVITE,
-            Membership.KNOCK,
-        ):
-            # Calculate the `StateFilter` based on the `required_state` for the room
-            state_filter: Optional[StateFilter] = StateFilter.none()
-            # If we have a double wildcard ("*", "*") in the `required_state`, we need
-            # to fetch all state for the room
-            #
-            # Note: MSC3575 describes different behavior to how we're handling things
-            # here but since it's not wrong to return more state than requested
-            # (`required_state` is just the minimum requested), it doesn't matter if we
-            # include more than client wanted. This complexity is also under scrutiny,
-            # see
-            # https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1185109050
-            #
-            # > One unique exception is when you request all state events via ["*", "*"]. When used,
-            # > all state events are returned by default, and additional entries FILTER OUT the returned set
-            # > of state events. These additional entries cannot use '*' themselves.
-            # > For example, ["*", "*"], ["m.room.member", "@alice:example.com"] will _exclude_ every m.room.member
-            # > event _except_ for @alice:example.com, and include every other state event.
-            # > In addition, ["*", "*"], ["m.space.child", "*"] is an error, the m.space.child filter is not
-            # > required as it would have been returned anyway.
-            # >
-            # > -- MSC3575 (https://github.com/matrix-org/matrix-spec-proposals/pull/3575)
-            if StateValues.WILDCARD in room_sync_config.required_state_map.get(
-                StateValues.WILDCARD, set()
-            ):
-                state_filter = StateFilter.all()
-            # TODO: `StateFilter` currently doesn't support wildcard event types. We're
-            # currently working around this by returning all state to the client but it
-            # would be nice to fetch less from the database and return just what the
-            # client wanted.
-            elif (
-                room_sync_config.required_state_map.get(StateValues.WILDCARD)
-                is not None
-            ):
-                state_filter = StateFilter.all()
-            else:
-                required_state_types: List[Tuple[str, Optional[str]]] = []
-                for (
-                    state_type,
-                    state_key_set,
-                ) in room_sync_config.required_state_map.items():
-                    for state_key in state_key_set:
-                        if state_key == StateValues.WILDCARD:
-                            # `None` is a wildcard in the `StateFilter`
-                            required_state_types.append((state_type, None))
-                        # We need to fetch all relevant people when we're lazy-loading membership
-                        elif (
-                            state_type == EventTypes.Member
-                            and state_key == StateValues.LAZY
-                        ):
-                            # Everyone in the timeline is relevant
-                            timeline_membership: Set[str] = set()
-                            if timeline_events is not None:
-                                for timeline_event in timeline_events:
-                                    timeline_membership.add(timeline_event.sender)
-
-                            for user_id in timeline_membership:
-                                required_state_types.append(
-                                    (EventTypes.Member, user_id)
-                                )
-
-                            # FIXME: We probably also care about invite, ban, kick, targets, etc
-                            # but the spec only mentions "senders".
-                        else:
-                            required_state_types.append((state_type, state_key))
-
-                state_filter = StateFilter.from_types(required_state_types)
-
-            # We can skip fetching state if we don't need any
-            if state_filter != StateFilter.none():
-                # We can return all of the state that was requested if we're doing an
-                # initial sync
-                if initial:
-                    # People shouldn't see past their leave/ban event
-                    if rooms_membership_for_user_at_to_token.membership in (
-                        Membership.LEAVE,
-                        Membership.BAN,
-                    ):
-                        room_state = await self.storage_controllers.state.get_state_at(
-                            room_id,
-                            stream_position=to_token.copy_and_replace(
-                                StreamKeyType.ROOM,
-                                rooms_membership_for_user_at_to_token.event_pos.to_room_stream_token(),
-                            ),
-                            state_filter=state_filter,
-                            # Partially-stated rooms should have all state events except for
-                            # the membership events and since we've already excluded
-                            # partially-stated rooms unless `required_state` only has
-                            # `["m.room.member", "$LAZY"]` for membership, we should be able
-                            # to retrieve everything requested. Plus we don't want to block
-                            # the whole sync waiting for this one room.
-                            await_full_state=False,
-                        )
-                    # Otherwise, we can get the latest current state in the room
-                    else:
-                        room_state = await self.storage_controllers.state.get_current_state(
-                            room_id,
-                            state_filter,
-                            # Partially-stated rooms should have all state events except for
-                            # the membership events and since we've already excluded
-                            # partially-stated rooms unless `required_state` only has
-                            # `["m.room.member", "$LAZY"]` for membership, we should be able
-                            # to retrieve everything requested. Plus we don't want to block
-                            # the whole sync waiting for this one room.
-                            await_full_state=False,
-                        )
-                    # TODO: Query `current_state_delta_stream` and reverse/rewind back to the `to_token`
-                else:
-                    # TODO: Once we can figure out if we've sent a room down this connection before,
-                    # we can return updates instead of the full required state.
-                    raise NotImplementedError()
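The `else` branch removed above boils down to a translation from the `required_state_map` into `(type, state_key)` tuples, with `None` standing in for a wildcard `state_key` and `$LAZY` membership expanded to the timeline's senders. A standalone sketch of just that translation (literal strings in place of Synapse's constants; the sender list is hypothetical):

```python
from typing import Dict, List, Optional, Set, Tuple

required_state_map: Dict[str, Set[str]] = {
    "m.room.topic": {""},
    "m.room.member": {"$LAZY"},
    "m.room.pinned_events": {"*"},
}
timeline_senders = ["@alice:example.com", "@bob:example.com"]  # hypothetical

required_state_types: List[Tuple[str, Optional[str]]] = []
for state_type, state_key_set in required_state_map.items():
    for state_key in state_key_set:
        if state_key == "*":
            # `None` is the wildcard state_key in StateFilter.from_types.
            required_state_types.append((state_type, None))
        elif state_type == "m.room.member" and state_key == "$LAZY":
            # Lazy-loading membership: everyone who sent a timeline event.
            for user_id in timeline_senders:
                required_state_types.append(("m.room.member", user_id))
        else:
            required_state_types.append((state_type, state_key))

# required_state_types now holds ("m.room.topic", ""), one membership entry
# per sender, and ("m.room.pinned_events", None).
```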

         return SlidingSyncResult.RoomResult(
             # TODO: Dummy value
             name=None,

@@ -1352,16 +1017,20 @@ class SlidingSyncHandler:
             avatar=None,
             # TODO: Dummy value
             heroes=None,
-            initial=initial,
-            required_state=list(room_state.values()) if room_state else None,
+            # TODO: Since we can't determine whether we've already sent a room down this
+            # Sliding Sync connection before (we plan to add this optimization in the
+            # future), we're always returning the requested room state instead of
+            # updates.
+            initial=True,
+            # TODO: Dummy value
+            is_dm=False,
+            required_state=[],
             timeline_events=timeline_events,
             bundled_aggregations=bundled_aggregations,
-            # TODO: Dummy value
-            is_dm=False,
             stripped_state=stripped_state,
             prev_batch=prev_batch_token,
             limited=limited,
             num_live=num_live,
             # TODO: Dummy values
             joined_count=0,
             invited_count=0,
@@ -1370,87 +1039,5 @@ class SlidingSyncHandler:
             # (encrypted rooms).
             notification_count=0,
             highlight_count=0,
         )
-
-    async def get_extensions_response(
-        self,
-        sync_config: SlidingSyncConfig,
-        to_token: StreamToken,
-    ) -> SlidingSyncResult.Extensions:
-        """Handle extension requests."""
-
-        if sync_config.extensions is None:
-            return SlidingSyncResult.Extensions()
-
-        to_device_response = None
-        if sync_config.extensions.to_device:
-            to_device_response = await self.get_to_device_extensions_response(
-                sync_config=sync_config,
-                to_device_request=sync_config.extensions.to_device,
-                to_token=to_token,
-            )
-
-        return SlidingSyncResult.Extensions(to_device=to_device_response)
-
-    async def get_to_device_extensions_response(
-        self,
-        sync_config: SlidingSyncConfig,
-        to_device_request: SlidingSyncConfig.Extensions.ToDeviceExtension,
-        to_token: StreamToken,
-    ) -> SlidingSyncResult.Extensions.ToDeviceExtension:
-        """Handle to-device extension (MSC3885)"""
-
-        user_id = sync_config.user.to_string()
-        device_id = sync_config.device_id
-
-        # Check that this request has a valid device ID (not all requests have
-        # to belong to a device, and so device_id is None), and that the
-        # extension is enabled.
-        if device_id is None or not to_device_request.enabled:
-            return SlidingSyncResult.Extensions.ToDeviceExtension(
-                next_batch=f"{to_token.to_device_key}",
-                events=[],
-            )
-
-        since_stream_id = 0
-        if to_device_request.since:
-            # We've already validated this is an int.
-            since_stream_id = int(to_device_request.since)
-
-            if to_token.to_device_key < since_stream_id:
-                # The since token is ahead of our current token, so we return an
-                # empty response.
-                logger.warning(
-                    "Got to-device.since from the future. Next_batch: %r, current: %r",
-                    since_stream_id,
-                    to_token.to_device_key,
-                )
-                return SlidingSyncResult.Extensions.ToDeviceExtension(
-                    next_batch=to_device_request.since,
-                    events=[],
-                )
-
-            # Delete everything before the given since token, as we know the
-            # device must have received them.
-            deleted = await self.store.delete_messages_for_device(
-                user_id=user_id,
-                device_id=device_id,
-                up_to_stream_id=since_stream_id,
-            )
-
-            logger.debug(
-                "Deleted %d to-device messages up to %d", deleted, since_stream_id
-            )
-
-        messages, stream_id = await self.store.get_messages_for_device(
-            user_id=user_id,
-            device_id=device_id,
-            from_stream_id=since_stream_id,
-            to_stream_id=to_token.to_device_key,
-            limit=min(to_device_request.limit, 100),  # Limit to at most 100 events
-        )
-
-        return SlidingSyncResult.Extensions.ToDeviceExtension(
-            next_batch=f"{stream_id}",
-            events=messages,
-        )
synapse/handlers/sync.py

@@ -1352,7 +1352,7 @@ class SyncHandler:
             await_full_state = True
             lazy_load_members = False

-        state_at_timeline_end = await self._state_storage_controller.get_state_ids_at(
+        state_at_timeline_end = await self._state_storage_controller.get_state_at(
             room_id,
             stream_position=end_token,
             state_filter=state_filter,

@@ -1480,13 +1480,11 @@ class SyncHandler:
         else:
             # We can get here if the user has ignored the senders of all
             # the recent events.
-            state_at_timeline_start = (
-                await self._state_storage_controller.get_state_ids_at(
-                    room_id,
-                    stream_position=end_token,
-                    state_filter=state_filter,
-                    await_full_state=await_full_state,
-                )
-            )
+            state_at_timeline_start = await self._state_storage_controller.get_state_at(
+                room_id,
+                stream_position=end_token,
+                state_filter=state_filter,
+                await_full_state=await_full_state,
+            )

         if batch.limited:

@@ -1504,14 +1502,14 @@ class SyncHandler:
             # about them).
             state_filter = StateFilter.all()

-        state_at_previous_sync = await self._state_storage_controller.get_state_ids_at(
+        state_at_previous_sync = await self._state_storage_controller.get_state_at(
             room_id,
             stream_position=since_token,
             state_filter=state_filter,
             await_full_state=await_full_state,
         )

-        state_at_timeline_end = await self._state_storage_controller.get_state_ids_at(
+        state_at_timeline_end = await self._state_storage_controller.get_state_at(
             room_id,
             stream_position=end_token,
             state_filter=state_filter,

@@ -2510,7 +2508,7 @@ class SyncHandler:
                 continue

             if room_id in sync_result_builder.joined_room_ids or has_join:
-                old_state_ids = await self._state_storage_controller.get_state_ids_at(
+                old_state_ids = await self._state_storage_controller.get_state_at(
                     room_id,
                     since_token,
                     state_filter=StateFilter.from_types([(EventTypes.Member, user_id)]),

@@ -2541,7 +2539,7 @@ class SyncHandler:
             else:
                 if not old_state_ids:
                     old_state_ids = (
-                        await self._state_storage_controller.get_state_ids_at(
+                        await self._state_storage_controller.get_state_at(
                             room_id,
                             since_token,
                             state_filter=StateFilter.from_types(
@@ -542,12 +542,7 @@ class MediaRepository:
            respond_404(request)

    async def get_remote_media_info(
        self,
        server_name: str,
        media_id: str,
        max_timeout_ms: int,
        ip_address: str,
        use_federation: bool,
        self, server_name: str, media_id: str, max_timeout_ms: int, ip_address: str
    ) -> RemoteMedia:
        """Gets the media info associated with the remote file, downloading
        if necessary.
@@ -558,8 +553,6 @@ class MediaRepository:
            max_timeout_ms: the maximum number of milliseconds to wait for the
                media to be uploaded.
            ip_address: IP address of the requester
            use_federation: if a download is necessary, whether to request the remote file
                over the federation `/download` endpoint

        Returns:
            The media info of the file
@@ -580,7 +573,7 @@ class MediaRepository:
            max_timeout_ms,
            self.download_ratelimiter,
            ip_address,
            use_federation,
            False,
        )

        # Ensure we actually use the responder so that it releases resources
@@ -36,11 +36,9 @@ from synapse.media._base import (
    ThumbnailInfo,
    respond_404,
    respond_with_file,
    respond_with_multipart_responder,
    respond_with_responder,
)
from synapse.media.media_storage import FileResponder, MediaStorage
from synapse.storage.databases.main.media_repository import LocalMedia
from synapse.media.media_storage import MediaStorage

if TYPE_CHECKING:
    from synapse.media.media_repository import MediaRepository
@@ -273,7 +271,6 @@ class ThumbnailProvider:
        method: str,
        m_type: str,
        max_timeout_ms: int,
        for_federation: bool,
    ) -> None:
        media_info = await self.media_repo.get_local_media_info(
            request, media_id, max_timeout_ms
@@ -293,8 +290,6 @@ class ThumbnailProvider:
            media_id,
            url_cache=bool(media_info.url_cache),
            server_name=None,
            for_federation=for_federation,
            media_info=media_info,
        )

    async def select_or_generate_local_thumbnail(
@@ -306,7 +301,6 @@ class ThumbnailProvider:
        desired_method: str,
        desired_type: str,
        max_timeout_ms: int,
        for_federation: bool,
    ) -> None:
        media_info = await self.media_repo.get_local_media_info(
            request, media_id, max_timeout_ms
@@ -332,16 +326,10 @@ class ThumbnailProvider:

                responder = await self.media_storage.fetch_media(file_info)
                if responder:
                    if for_federation:
                        await respond_with_multipart_responder(
                            self.hs.get_clock(), request, responder, media_info
                        )
                        return
                    else:
                        await respond_with_responder(
                            request, responder, info.type, info.length
                        )
                        return
                    await respond_with_responder(
                        request, responder, info.type, info.length
                    )
                    return

        logger.debug("We don't have a thumbnail of that size. Generating")

@@ -356,15 +344,7 @@ class ThumbnailProvider:
        )

        if file_path:
            if for_federation:
                await respond_with_multipart_responder(
                    self.hs.get_clock(),
                    request,
                    FileResponder(open(file_path, "rb")),
                    media_info,
                )
            else:
                await respond_with_file(request, desired_type, file_path)
            await respond_with_file(request, desired_type, file_path)
        else:
            logger.warning("Failed to generate thumbnail")
            raise SynapseError(400, "Failed to generate thumbnail.")
@@ -380,10 +360,9 @@ class ThumbnailProvider:
        desired_type: str,
        max_timeout_ms: int,
        ip_address: str,
        use_federation: bool,
    ) -> None:
        media_info = await self.media_repo.get_remote_media_info(
            server_name, media_id, max_timeout_ms, ip_address, use_federation
            server_name, media_id, max_timeout_ms, ip_address
        )
        if not media_info:
            respond_404(request)
@@ -445,13 +424,12 @@ class ThumbnailProvider:
        m_type: str,
        max_timeout_ms: int,
        ip_address: str,
        use_federation: bool,
    ) -> None:
        # TODO: Don't download the whole remote file
        # We should proxy the thumbnail from the remote server instead of
        # downloading the remote file and generating our own thumbnails.
        media_info = await self.media_repo.get_remote_media_info(
            server_name, media_id, max_timeout_ms, ip_address, use_federation
            server_name, media_id, max_timeout_ms, ip_address
        )
        if not media_info:
            return
@@ -470,7 +448,6 @@ class ThumbnailProvider:
            media_info.filesystem_id,
            url_cache=False,
            server_name=server_name,
            for_federation=False,
        )

    async def _select_and_respond_with_thumbnail(
@@ -484,9 +461,7 @@ class ThumbnailProvider:
        media_id: str,
        file_id: str,
        url_cache: bool,
        for_federation: bool,
        server_name: Optional[str] = None,
        media_info: Optional[LocalMedia] = None,
    ) -> None:
        """
        Respond to a request with an appropriate thumbnail from the previously generated thumbnails.
@@ -501,8 +476,6 @@ class ThumbnailProvider:
            file_id: The ID of the media that a thumbnail is being requested for.
            url_cache: True if this is from a URL cache.
            server_name: The server name, if this is a remote thumbnail.
            for_federation: whether the request is from the federation /thumbnail request
            media_info: metadata about the media being requested.
        """
        logger.debug(
            "_select_and_respond_with_thumbnail: media_id=%s desired=%sx%s (%s) thumbnail_infos=%s",
@@ -538,20 +511,13 @@ class ThumbnailProvider:

            responder = await self.media_storage.fetch_media(file_info)
            if responder:
                if for_federation:
                    assert media_info is not None
                    await respond_with_multipart_responder(
                        self.hs.get_clock(), request, responder, media_info
                    )
                    return
                else:
                    await respond_with_responder(
                        request,
                        responder,
                        file_info.thumbnail.type,
                        file_info.thumbnail.length,
                    )
                    return
                await respond_with_responder(
                    request,
                    responder,
                    file_info.thumbnail.type,
                    file_info.thumbnail.length,
                )
                return

        # If we can't find the thumbnail we regenerate it. This can happen
        # if e.g. we've deleted the thumbnails but still have the original
@@ -592,18 +558,12 @@ class ThumbnailProvider:
            )

            responder = await self.media_storage.fetch_media(file_info)
            if for_federation:
                assert media_info is not None
                await respond_with_multipart_responder(
                    self.hs.get_clock(), request, responder, media_info
                )
            else:
                await respond_with_responder(
                    request,
                    responder,
                    file_info.thumbnail.type,
                    file_info.thumbnail.length,
                )
            await respond_with_responder(
                request,
                responder,
                file_info.thumbnail.type,
                file_info.thumbnail.length,
            )
        else:
            # This might be because:
            # 1. We can't create thumbnails for the given media (corrupted or
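The `respond_with_multipart_responder` calls being removed above wrap media in a two-part `multipart/mixed` body (a JSON metadata part, then the raw file bytes), which the federation tests later in this diff parse by splitting on the boundary. A rough sketch of that layout, with an illustrative boundary value rather than Synapse's implementation:

    # Sketch of the two-part multipart/mixed layout used by the MSC3916
    # federation media endpoints: a JSON metadata part, then the file bytes.
    def build_multipart_mixed(
        file_bytes: bytes,
        file_content_type: str,
        boundary: str = "gc0p4Jq0M2Yt08jU534c0p",  # illustrative
    ) -> bytes:
        delim = f"--{boundary}\r\n".encode()
        parts = [
            delim + b"Content-Type: application/json\r\n\r\n{}\r\n",
            delim
            + f"Content-Type: {file_content_type}\r\n\r\n".encode()
            + file_bytes
            + b"\r\n",
            f"--{boundary}--\r\n".encode(),
        ]
        return b"".join(parts)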
@@ -47,7 +47,7 @@ from synapse.util.stringutils import parse_and_validate_server_name
logger = logging.getLogger(__name__)


class PreviewURLServlet(RestServlet):
class UnstablePreviewURLServlet(RestServlet):
    """
    Same as `GET /_matrix/media/r0/preview_url`, this endpoint provides a generic preview API
    for URLs which outputs Open Graph (https://ogp.me/) responses (with some Matrix
@@ -65,7 +65,9 @@ class PreviewURLServlet(RestServlet):
    * Matrix cannot be used to distribute the metadata between homeservers.
    """

    PATTERNS = [re.compile(r"^/_matrix/client/v1/media/preview_url$")]
    PATTERNS = [
        re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/preview_url$")
    ]

    def __init__(
        self,
@@ -93,8 +95,10 @@ class PreviewURLServlet(RestServlet):
        respond_with_json_bytes(request, 200, og, send_cors=True)


class MediaConfigResource(RestServlet):
    PATTERNS = [re.compile(r"^/_matrix/client/v1/media/config$")]
class UnstableMediaConfigResource(RestServlet):
    PATTERNS = [
        re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/config$")
    ]

    def __init__(self, hs: "HomeServer"):
        super().__init__()
@@ -108,10 +112,10 @@ class MediaConfigResource(RestServlet):
        respond_with_json(request, 200, self.limits_dict, send_cors=True)


class ThumbnailResource(RestServlet):
class UnstableThumbnailResource(RestServlet):
    PATTERNS = [
        re.compile(
            "/_matrix/client/v1/media/thumbnail/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
            "/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
        )
    ]

@@ -155,25 +159,11 @@ class ThumbnailResource(RestServlet):
        if self._is_mine_server_name(server_name):
            if self.dynamic_thumbnails:
                await self.thumbnailer.select_or_generate_local_thumbnail(
                    request,
                    media_id,
                    width,
                    height,
                    method,
                    m_type,
                    max_timeout_ms,
                    False,
                    request, media_id, width, height, method, m_type, max_timeout_ms
                )
            else:
                await self.thumbnailer.respond_local_thumbnail(
                    request,
                    media_id,
                    width,
                    height,
                    method,
                    m_type,
                    max_timeout_ms,
                    False,
                    request, media_id, width, height, method, m_type, max_timeout_ms
                )
            self.media_repo.mark_recently_accessed(None, media_id)
        else:
@@ -201,7 +191,6 @@ class ThumbnailResource(RestServlet):
                m_type,
                max_timeout_ms,
                ip_address,
                True,
            )
            self.media_repo.mark_recently_accessed(server_name, media_id)

@@ -271,9 +260,11 @@ class DownloadResource(RestServlet):
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
    media_repo = hs.get_media_repository()
    if hs.config.media.url_preview_enabled:
        PreviewURLServlet(hs, media_repo, media_repo.media_storage).register(
        UnstablePreviewURLServlet(hs, media_repo, media_repo.media_storage).register(
            http_server
        )
    MediaConfigResource(hs).register(http_server)
    ThumbnailResource(hs, media_repo, media_repo.media_storage).register(http_server)
    UnstableMediaConfigResource(hs).register(http_server)
    UnstableThumbnailResource(hs, media_repo, media_repo.media_storage).register(
        http_server
    )
    DownloadResource(hs, media_repo).register(http_server)
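These renames move the endpoints from the stable `/_matrix/client/v1/media/...` prefix back to the unstable `org.matrix.msc3916` one. A hedged sketch of a client probing both paths (host and token are placeholders, not values from this diff):

    import requests

    BASE = "https://example.org"  # hypothetical homeserver

    def get_media_config(token: str) -> dict:
        # Try the unstable MSC3916 path first, then fall back to the stable one.
        for path in (
            "/_matrix/client/unstable/org.matrix.msc3916/media/config",
            "/_matrix/client/v1/media/config",
        ):
            resp = requests.get(BASE + path, headers={"Authorization": f"Bearer {token}"})
            if resp.status_code == 200:
                return resp.json()
        resp.raise_for_status()
        return {}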
@@ -942,9 +942,7 @@ class SlidingSyncRestServlet(RestServlet):
        response["rooms"] = await self.encode_rooms(
            requester, sliding_sync_result.rooms
        )
        response["extensions"] = await self.encode_extensions(
            requester, sliding_sync_result.extensions
        )
        response["extensions"] = {}  # TODO: sliding_sync_result.extensions

        return response

@@ -1004,7 +1002,7 @@ class SlidingSyncRestServlet(RestServlet):
            if room_result.initial:
                serialized_rooms[room_id]["initial"] = room_result.initial

            # This will be omitted for invite/knock rooms with `stripped_state`
            # This will omitted for invite/knock rooms with `stripped_state`
            if room_result.required_state is not None:
                serialized_required_state = (
                    await self.event_serializer.serialize_events(
@@ -1015,7 +1013,7 @@ class SlidingSyncRestServlet(RestServlet):
                )
                serialized_rooms[room_id]["required_state"] = serialized_required_state

            # This will be omitted for invite/knock rooms with `stripped_state`
            # This will omitted for invite/knock rooms with `stripped_state`
            if room_result.timeline_events is not None:
                serialized_timeline = await self.event_serializer.serialize_events(
                    room_result.timeline_events,
@@ -1025,17 +1023,17 @@ class SlidingSyncRestServlet(RestServlet):
                )
                serialized_rooms[room_id]["timeline"] = serialized_timeline

            # This will be omitted for invite/knock rooms with `stripped_state`
            # This will omitted for invite/knock rooms with `stripped_state`
            if room_result.limited is not None:
                serialized_rooms[room_id]["limited"] = room_result.limited

            # This will be omitted for invite/knock rooms with `stripped_state`
            # This will omitted for invite/knock rooms with `stripped_state`
            if room_result.prev_batch is not None:
                serialized_rooms[room_id]["prev_batch"] = (
                    await room_result.prev_batch.to_string(self.store)
                )

            # This will be omitted for invite/knock rooms with `stripped_state`
            # This will omitted for invite/knock rooms with `stripped_state`
            if room_result.num_live is not None:
                serialized_rooms[room_id]["num_live"] = room_result.num_live

@@ -1055,19 +1053,6 @@ class SlidingSyncRestServlet(RestServlet):

        return serialized_rooms

    async def encode_extensions(
        self, requester: Requester, extensions: SlidingSyncResult.Extensions
    ) -> JsonDict:
        result = {}

        if extensions.to_device is not None:
            result["to_device"] = {
                "next_batch": extensions.to_device.next_batch,
                "events": extensions.to_device.events,
            }

        return result


def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
    SyncRestServlet(hs).register(http_server)
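For reference, the removed `encode_extensions` produced a response fragment shaped roughly like the following; values are illustrative, not from the diff:

    # Illustrative sliding sync response fragment, as the removed
    # encode_extensions would have emitted it; the stub above now always
    # returns an empty object.
    response = {
        "rooms": {},
        "extensions": {
            "to_device": {
                "next_batch": "12345",  # opaque to-device stream position
                "events": [
                    {"type": "m.room_key_request", "sender": "@alice:example.org", "content": {}}
                ],
            }
        },
    }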
@@ -102,7 +102,6 @@ class VersionsRestServlet(RestServlet):
                    "v1.8",
                    "v1.9",
                    "v1.10",
                    "v1.11",
                ],
                # as per MSC1497:
                "unstable_features": {
@@ -88,25 +88,11 @@ class ThumbnailResource(RestServlet):
        if self._is_mine_server_name(server_name):
            if self.dynamic_thumbnails:
                await self.thumbnail_provider.select_or_generate_local_thumbnail(
                    request,
                    media_id,
                    width,
                    height,
                    method,
                    m_type,
                    max_timeout_ms,
                    False,
                    request, media_id, width, height, method, m_type, max_timeout_ms
                )
            else:
                await self.thumbnail_provider.respond_local_thumbnail(
                    request,
                    media_id,
                    width,
                    height,
                    method,
                    m_type,
                    max_timeout_ms,
                    False,
                    request, media_id, width, height, method, m_type, max_timeout_ms
                )
            self.media_repo.mark_recently_accessed(None, media_id)
        else:
@@ -134,6 +120,5 @@ class ThumbnailResource(RestServlet):
                m_type,
                max_timeout_ms,
                ip_address,
                False,
            )
            self.media_repo.mark_recently_accessed(server_name, media_id)
@@ -28,7 +28,7 @@
import abc
import functools
import logging
from typing import TYPE_CHECKING, Callable, Dict, List, Optional, Type, TypeVar, cast
from typing import TYPE_CHECKING, Callable, Dict, List, Optional, TypeVar, cast

from typing_extensions import TypeAlias

@@ -161,7 +161,6 @@ if TYPE_CHECKING:
    from synapse.handlers.jwt import JwtHandler
    from synapse.handlers.oidc import OidcHandler
    from synapse.handlers.saml import SamlHandler
    from synapse.storage._base import SQLBaseStore


# The annotation for `cache_in_self` used to be
@@ -256,13 +255,10 @@ class HomeServer(metaclass=abc.ABCMeta):
        "stats",
    ]

    @property
    @abc.abstractmethod
    def DATASTORE_CLASS(self) -> Type["SQLBaseStore"]:
        # This is overridden in derived application classes
        # (such as synapse.app.homeserver.SynapseHomeServer) and gives the class to be
        # instantiated during setup() for future return by get_datastores()
        pass
    # This is overridden in derived application classes
    # (such as synapse.app.homeserver.SynapseHomeServer) and gives the class to be
    # instantiated during setup() for future return by get_datastores()
    DATASTORE_CLASS = abc.abstractproperty()

    def __init__(
        self,
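This hunk trades the `@property` plus `@abc.abstractmethod` pair for `abc.abstractproperty()`, which does the same job but has been deprecated since Python 3.3; both spellings force subclasses to define the attribute. A standalone sketch of the equivalence (names here are illustrative, not Synapse's):

    import abc

    class Base(metaclass=abc.ABCMeta):
        @property
        @abc.abstractmethod
        def DATASTORE_CLASS(self) -> type:
            """Subclasses must provide the datastore class."""

    class Impl(Base):
        # A plain class attribute satisfies the abstract property.
        DATASTORE_CLASS = dict

    Impl()  # ok; instantiating Base() would raise TypeError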
@@ -409,7 +409,7 @@ class StateStorageController:

        return state_ids

    async def get_state_ids_at(
    async def get_state_at(
        self,
        room_id: str,
        stream_position: StreamToken,
@@ -460,30 +460,6 @@ class StateStorageController:
        )
        return state

    @trace
    @tag_args
    async def get_state_at(
        self,
        room_id: str,
        stream_position: StreamToken,
        state_filter: Optional[StateFilter] = None,
        await_full_state: bool = True,
    ) -> StateMap[EventBase]:
        """Same as `get_state_ids_at` but also fetches the events"""
        state_map_ids = await self.get_state_ids_at(
            room_id, stream_position, state_filter, await_full_state
        )

        event_map = await self.stores.main.get_events(list(state_map_ids.values()))

        state_map = {}
        for key, event_id in state_map_ids.items():
            event = event_map.get(event_id)
            if event:
                state_map[key] = event

        return state_map

    @trace
    @tag_args
    async def get_state_for_groups(
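A hedged usage sketch of the renamed `get_state_at`, which resolves event IDs into full events (import paths and the `controller`/`token` arguments are assumptions for illustration):

    from synapse.api.constants import EventTypes
    from synapse.types.state import StateFilter

    async def membership_at(controller, room_id: str, token, user_id: str):
        # Fetch only this user's m.room.member event at the given stream
        # position; get_state_at returns a StateMap[EventBase], not event IDs.
        state = await controller.get_state_at(
            room_id,
            stream_position=token,
            state_filter=StateFilter.from_types([(EventTypes.Member, user_id)]),
        )
        return state.get((EventTypes.Member, user_id))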
@@ -18,7 +18,7 @@
#
#
from enum import Enum
from typing import TYPE_CHECKING, Dict, Final, List, Optional, Sequence, Tuple
from typing import TYPE_CHECKING, Dict, Final, List, Optional, Tuple

import attr
from typing_extensions import TypedDict
@@ -156,8 +156,6 @@ class SlidingSyncResult:
        avatar: Room avatar
        heroes: List of stripped membership events (containing `user_id` and optionally
            `avatar_url` and `displayname`) for the users used to calculate the room name.
        is_dm: Flag to specify whether the room is a direct-message room (most likely
            between two people).
        initial: Flag which is set when this is the first time the server is sending this
            data on this connection. Clients can use this flag to replace or update
            their local state. When there is an update, servers MUST omit this flag
@@ -169,6 +167,8 @@ class SlidingSyncResult:
            the timeline events above. This allows clients to show accurate reaction
            counts (or edits, threads), even if some of the reaction events were skipped
            over in a gappy sync.
        is_dm: Flag to specify whether the room is a direct-message room (most likely
            between two people).
        stripped_state: Stripped state events (for rooms where the user is
            invited/knocked). Same as `rooms.invite.$room_id.invite_state` in sync v2,
            absent on joined/left rooms
@@ -176,13 +176,6 @@ class SlidingSyncResult:
            `/rooms/<room_id>/messages` API to retrieve earlier messages.
        limited: True if there are more events than fit between the given position and now.
            Sync again to get more.
        num_live: The number of timeline events which have just occurred and are not historical.
            The last N events are 'live' and should be treated as such. This is mostly
            useful to determine whether a given @mention event should make a noise or not.
            Clients cannot rely solely on the absence of `initial: true` to determine live
            events because if a room not in the sliding window bumps into the window because
            of an @mention it will have `initial: true` yet contain a single live event
            (with potentially other old events in the timeline).
        joined_count: The number of users with membership of join, including the client's
            own user ID. (same as sync `v2 m.joined_member_count`)
        invited_count: The number of users with membership of invite. (same as sync v2
@@ -191,30 +184,37 @@ class SlidingSyncResult:
            as sync v2)
        highlight_count: The number of unread notifications for this room with the highlight
            flag set. (same as sync v2)
        num_live: The number of timeline events which have just occurred and are not historical.
            The last N events are 'live' and should be treated as such. This is mostly
            useful to determine whether a given @mention event should make a noise or not.
            Clients cannot rely solely on the absence of `initial: true` to determine live
            events because if a room not in the sliding window bumps into the window because
            of an @mention it will have `initial: true` yet contain a single live event
            (with potentially other old events in the timeline).
        """

        name: Optional[str]
        avatar: Optional[str]
        heroes: Optional[List[EventBase]]
        is_dm: bool
        initial: bool
        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
        required_state: Optional[List[EventBase]]
        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
        timeline_events: Optional[List[EventBase]]
        bundled_aggregations: Optional[Dict[str, "BundledAggregations"]]
        is_dm: bool
        # Optional because it's only relevant to invite/knock rooms
        stripped_state: Optional[List[JsonDict]]
        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
        prev_batch: Optional[StreamToken]
        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
        limited: Optional[bool]
        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
        num_live: Optional[int]
        joined_count: int
        invited_count: int
        notification_count: int
        highlight_count: int
        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
        num_live: Optional[int]

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class SlidingWindowList:
@@ -244,27 +244,10 @@ class SlidingSyncResult:
        count: int
        ops: List[Operation]

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class Extensions:
        @attr.s(slots=True, frozen=True, auto_attribs=True)
        class ToDeviceExtension:
            """The to-device extension (MSC3885)"""

            next_batch: str
            events: Sequence[JsonMapping]

            def __bool__(self) -> bool:
                return bool(self.events)

        to_device: Optional[ToDeviceExtension] = None

        def __bool__(self) -> bool:
            return bool(self.to_device)

    next_pos: StreamToken
    lists: Dict[str, SlidingWindowList]
    rooms: Dict[str, RoomResult]
    extensions: Extensions
    extensions: JsonMapping

    def __bool__(self) -> bool:
        """Make the result appear empty if there are no updates. This is used
@@ -280,5 +263,5 @@ class SlidingSyncResult:
            next_pos=next_pos,
            lists={},
            rooms={},
            extensions=SlidingSyncResult.Extensions(),
            extensions={},
        )
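The nested `__bool__` methods being removed above let emptiness propagate upward so the servlet can keep long-polling instead of returning a no-op response. A toy standalone illustration of the same pattern (simplified class names, not Synapse's):

    from typing import List, Optional

    import attr

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class ToDevice:
        events: List[dict]

        def __bool__(self) -> bool:
            return bool(self.events)

    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class Extensions:
        to_device: Optional[ToDevice] = None

        def __bool__(self) -> bool:
            return bool(self.to_device)

    # An extension with no events is falsy, and that emptiness bubbles up.
    assert not Extensions(to_device=ToDevice(events=[]))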
@@ -276,37 +276,10 @@ class SlidingSyncBody(RequestBodyModel):
    class RoomSubscription(CommonRoomParameters):
        pass

    class Extensions(RequestBodyModel):
        """The extensions section of the request."""

        class ToDeviceExtension(RequestBodyModel):
            """The to-device extension (MSC3885)

            Args:
                enabled
                limit: Maximum number of to-device messages to return
                since: The `next_batch` from the previous sync response
            """

            enabled: Optional[StrictBool] = False
            limit: StrictInt = 100
            since: Optional[StrictStr] = None

            @validator("since")
            def lists_length_check(
                cls, value: Optional[StrictStr]
            ) -> Optional[StrictStr]:
                if value is None:
                    return value

                try:
                    int(value)
                except ValueError:
                    raise ValueError("'extensions.to_device.since' is invalid")

                return value

        to_device: Optional[ToDeviceExtension] = None
    class Extension(RequestBodyModel):
        enabled: Optional[StrictBool] = False
        lists: Optional[List[StrictStr]] = None
        rooms: Optional[List[StrictStr]] = None

    # mypy workaround via https://github.com/pydantic/pydantic/issues/156#issuecomment-1130883884
    if TYPE_CHECKING:
@@ -314,7 +287,7 @@ class SlidingSyncBody(RequestBodyModel):
    else:
        lists: Optional[Dict[constr(max_length=64, strict=True), SlidingSyncList]] = None  # type: ignore[valid-type]
        room_subscriptions: Optional[Dict[StrictStr, RoomSubscription]] = None
        extensions: Optional[Extensions] = None
        extensions: Optional[Dict[StrictStr, Extension]] = None

    @validator("lists")
    def lists_length_check(
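The `since` validator being removed above only checks that the token parses as an integer. A standalone pydantic v1-style sketch of the same guard (field and error text are illustrative, not Synapse's exact model):

    from typing import Optional

    from pydantic import BaseModel, StrictStr, validator  # pydantic v1 API

    class ToDeviceExtension(BaseModel):
        since: Optional[StrictStr] = None

        @validator("since")
        def since_is_int(cls, value: Optional[str]) -> Optional[str]:
            # Accept only stringified integers, mirroring the removed check.
            if value is not None:
                try:
                    int(value)
                except ValueError:
                    raise ValueError("'since' must be a stringified integer")
            return value

    ToDeviceExtension(since="42")     # ok
    # ToDeviceExtension(since="abc")  # raises pydantic.ValidationError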
@@ -35,7 +35,6 @@ from synapse.types import UserID
from synapse.util import Clock

from tests import unittest
from tests.media.test_media_storage import small_png
from tests.test_utils import SMALL_PNG


@@ -147,112 +146,3 @@ class FederationMediaDownloadsTest(unittest.FederatingHomeserverTestCase):
        # check that the png file exists and matches what was uploaded
        found_file = any(SMALL_PNG in field for field in stripped_bytes)
        self.assertTrue(found_file)


class FederationThumbnailTest(unittest.FederatingHomeserverTestCase):

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        super().prepare(reactor, clock, hs)
        self.test_dir = tempfile.mkdtemp(prefix="synapse-tests-")
        self.addCleanup(shutil.rmtree, self.test_dir)
        self.primary_base_path = os.path.join(self.test_dir, "primary")
        self.secondary_base_path = os.path.join(self.test_dir, "secondary")

        hs.config.media.media_store_path = self.primary_base_path

        storage_providers = [
            StorageProviderWrapper(
                FileStorageProviderBackend(hs, self.secondary_base_path),
                store_local=True,
                store_remote=False,
                store_synchronous=True,
            )
        ]

        self.filepaths = MediaFilePaths(self.primary_base_path)
        self.media_storage = MediaStorage(
            hs, self.primary_base_path, self.filepaths, storage_providers
        )
        self.media_repo = hs.get_media_repository()

    def test_thumbnail_download_scaled(self) -> None:
        content = io.BytesIO(small_png.data)
        content_uri = self.get_success(
            self.media_repo.create_content(
                "image/png",
                "test_png_thumbnail",
                content,
                67,
                UserID.from_string("@user_id:whatever.org"),
            )
        )
        # test with an image file
        channel = self.make_signed_federation_request(
            "GET",
            f"/_matrix/federation/v1/media/thumbnail/{content_uri.media_id}?width=32&height=32&method=scale",
        )
        self.pump()
        self.assertEqual(200, channel.code)

        content_type = channel.headers.getRawHeaders("content-type")
        assert content_type is not None
        assert "multipart/mixed" in content_type[0]
        assert "boundary" in content_type[0]

        # extract boundary
        boundary = content_type[0].split("boundary=")[1]
        # split on boundary and check that json field and expected value exist
        body = channel.result.get("body")
        assert body is not None
        stripped_bytes = body.split(b"\r\n" + b"--" + boundary.encode("utf-8"))
        found_json = any(
            b"\r\nContent-Type: application/json\r\n\r\n{}" in field
            for field in stripped_bytes
        )
        self.assertTrue(found_json)

        # check that the png file exists and matches the expected scaled bytes
        found_file = any(small_png.expected_scaled in field for field in stripped_bytes)
        self.assertTrue(found_file)

    def test_thumbnail_download_cropped(self) -> None:
        content = io.BytesIO(small_png.data)
        content_uri = self.get_success(
            self.media_repo.create_content(
                "image/png",
                "test_png_thumbnail",
                content,
                67,
                UserID.from_string("@user_id:whatever.org"),
            )
        )
        # test with an image file
        channel = self.make_signed_federation_request(
            "GET",
            f"/_matrix/federation/v1/media/thumbnail/{content_uri.media_id}?width=32&height=32&method=crop",
        )
        self.pump()
        self.assertEqual(200, channel.code)

        content_type = channel.headers.getRawHeaders("content-type")
        assert content_type is not None
        assert "multipart/mixed" in content_type[0]
        assert "boundary" in content_type[0]

        # extract boundary
        boundary = content_type[0].split("boundary=")[1]
        # split on boundary and check that json field and expected value exist
        body = channel.result.get("body")
        assert body is not None
        stripped_bytes = body.split(b"\r\n" + b"--" + boundary.encode("utf-8"))
        found_json = any(
            b"\r\nContent-Type: application/json\r\n\r\n{}" in field
            for field in stripped_bytes
        )
        self.assertTrue(found_json)

        # check that the png file exists and matches the expected cropped bytes
        found_file = any(
            small_png.expected_cropped in field for field in stripped_bytes
        )
        self.assertTrue(found_file)
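The tests above pull the boundary out with a raw `split("boundary=")`; a more robust stdlib alternative, offered as a sketch rather than a suggested change to the tests:

    # Parse the boundary parameter out of a Content-Type header using the
    # stdlib email machinery, which handles quoting for us.
    from email.message import Message

    def get_boundary(content_type: str) -> str:
        msg = Message()
        msg["content-type"] = content_type
        boundary = msg.get_param("boundary")
        assert isinstance(boundary, str), "multipart response must carry a boundary"
        return boundary

    assert get_boundary('multipart/mixed; boundary="abc123"') == "abc123"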
@@ -461,25 +461,3 @@ class DeactivateAccountTestCase(HomeserverTestCase):
        # Validate that there is no displayname in any of the events
        for event in events:
            self.assertTrue("displayname" not in event.content)

    def test_rooms_forgotten_upon_deactivation(self) -> None:
        """
        Tests that the user 'forgets' the rooms they left upon deactivation.
        """
        # Create a room
        room_id = self.helper.create_room_as(
            self.user,
            is_public=True,
            tok=self.token,
        )

        # Deactivate the account
        self._deactivate_my_account()

        # Get all of the user's forgotten rooms
        forgotten_rooms = self.get_success(
            self._store.get_forgotten_rooms_for_user(self.user)
        )

        # Validate that the created room is forgotten
        self.assertTrue(room_id in forgotten_rooms)
@@ -18,8 +18,6 @@
#
#
import logging
from copy import deepcopy
from typing import Optional
from unittest.mock import patch

from parameterized import parameterized
@@ -35,550 +33,20 @@ from synapse.api.constants import (
    RoomTypes,
)
from synapse.api.room_versions import RoomVersions
from synapse.handlers.sliding_sync import RoomSyncConfig, StateValues
from synapse.handlers.sliding_sync import SlidingSyncConfig
from synapse.rest import admin
from synapse.rest.client import knock, login, room
from synapse.server import HomeServer
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import JsonDict, UserID
from synapse.types.handlers import SlidingSyncConfig
from synapse.util import Clock

from tests.replication._base import BaseMultiWorkerStreamTestCase
from tests.unittest import HomeserverTestCase, TestCase
from tests.unittest import HomeserverTestCase

logger = logging.getLogger(__name__)


class RoomSyncConfigTestCase(TestCase):
    def _assert_room_config_equal(
        self,
        actual: RoomSyncConfig,
        expected: RoomSyncConfig,
        message_prefix: Optional[str] = None,
    ) -> None:
        self.assertEqual(actual.timeline_limit, expected.timeline_limit, message_prefix)

        # `self.assertEqual(...)` works fine to catch differences but the output is
        # almost impossible to read because of the way it truncates the output and the
        # order doesn't actually matter.
        self.assertCountEqual(
            actual.required_state_map, expected.required_state_map, message_prefix
        )
        for event_type, expected_state_keys in expected.required_state_map.items():
            self.assertCountEqual(
                actual.required_state_map[event_type],
                expected_state_keys,
                f"{message_prefix}: Mismatch for {event_type}",
            )

    @parameterized.expand(
        [
            (
                "from_list_config",
                """
                Test that we can convert a `SlidingSyncConfig.SlidingSyncList` to a
                `RoomSyncConfig`.
                """,
                # Input
                SlidingSyncConfig.SlidingSyncList(
                    timeline_limit=10,
                    required_state=[
                        (EventTypes.Name, ""),
                        (EventTypes.Member, "@foo"),
                        (EventTypes.Member, "@bar"),
                        (EventTypes.Member, "@baz"),
                        (EventTypes.CanonicalAlias, ""),
                    ],
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Name: {""},
                        EventTypes.Member: {
                            "@foo",
                            "@bar",
                            "@baz",
                        },
                        EventTypes.CanonicalAlias: {""},
                    },
                ),
            ),
            (
                "from_room_subscription",
                """
                Test that we can convert a `SlidingSyncConfig.RoomSubscription` to a
                `RoomSyncConfig`.
                """,
                # Input
                SlidingSyncConfig.RoomSubscription(
                    timeline_limit=10,
                    required_state=[
                        (EventTypes.Name, ""),
                        (EventTypes.Member, "@foo"),
                        (EventTypes.Member, "@bar"),
                        (EventTypes.Member, "@baz"),
                        (EventTypes.CanonicalAlias, ""),
                    ],
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Name: {""},
                        EventTypes.Member: {
                            "@foo",
                            "@bar",
                            "@baz",
                        },
                        EventTypes.CanonicalAlias: {""},
                    },
                ),
            ),
            (
                "wildcard",
                """
                Test that a wildcard (*) for both the `event_type` and `state_key` will override
                all other values.

                Note: MSC3575 describes different behavior to how we're handling things here but
                since it's not wrong to return more state than requested (`required_state` is
                just the minimum requested), it doesn't matter if we include things that the
                client wanted excluded. This complexity is also under scrutiny, see
                https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1185109050

                > One unique exception is when you request all state events via ["*", "*"]. When used,
                > all state events are returned by default, and additional entries FILTER OUT the returned set
                > of state events. These additional entries cannot use '*' themselves.
                > For example, ["*", "*"], ["m.room.member", "@alice:example.com"] will _exclude_ every m.room.member
                > event _except_ for @alice:example.com, and include every other state event.
                > In addition, ["*", "*"], ["m.space.child", "*"] is an error, the m.space.child filter is not
                > required as it would have been returned anyway.
                >
                > -- MSC3575 (https://github.com/matrix-org/matrix-spec-proposals/pull/3575)
                """,
                # Input
                SlidingSyncConfig.SlidingSyncList(
                    timeline_limit=10,
                    required_state=[
                        (EventTypes.Name, ""),
                        (StateValues.WILDCARD, StateValues.WILDCARD),
                        (EventTypes.Member, "@foo"),
                        (EventTypes.CanonicalAlias, ""),
                    ],
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        StateValues.WILDCARD: {StateValues.WILDCARD},
                    },
                ),
            ),
            (
                "wildcard_type",
                """
                Test that a wildcard (*) as a `event_type` will override all other values for the
                same `state_key`.
                """,
                # Input
                SlidingSyncConfig.SlidingSyncList(
                    timeline_limit=10,
                    required_state=[
                        (EventTypes.Name, ""),
                        (StateValues.WILDCARD, ""),
                        (EventTypes.Member, "@foo"),
                        (EventTypes.CanonicalAlias, ""),
                    ],
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        StateValues.WILDCARD: {""},
                        EventTypes.Member: {"@foo"},
                    },
                ),
            ),
            (
                "multiple_wildcard_type",
                """
                Test that multiple wildcard (*) as a `event_type` will override all other values
                for the same `state_key`.
                """,
                # Input
                SlidingSyncConfig.SlidingSyncList(
                    timeline_limit=10,
                    required_state=[
                        (EventTypes.Name, ""),
                        (StateValues.WILDCARD, ""),
                        (EventTypes.Member, "@foo"),
                        (StateValues.WILDCARD, "@foo"),
                        ("org.matrix.personal_count", "@foo"),
                        (EventTypes.Member, "@bar"),
                        (EventTypes.CanonicalAlias, ""),
                    ],
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        StateValues.WILDCARD: {
                            "",
                            "@foo",
                        },
                        EventTypes.Member: {"@bar"},
                    },
                ),
            ),
            (
                "wildcard_state_key",
                """
                Test that a wildcard (*) as a `state_key` will override all other values for the
                same `event_type`.
                """,
                # Input
                SlidingSyncConfig.SlidingSyncList(
                    timeline_limit=10,
                    required_state=[
                        (EventTypes.Name, ""),
                        (EventTypes.Member, "@foo"),
                        (EventTypes.Member, StateValues.WILDCARD),
                        (EventTypes.Member, "@bar"),
                        (EventTypes.Member, StateValues.LAZY),
                        (EventTypes.Member, "@baz"),
                        (EventTypes.CanonicalAlias, ""),
                    ],
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Name: {""},
                        EventTypes.Member: {
                            StateValues.WILDCARD,
                        },
                        EventTypes.CanonicalAlias: {""},
                    },
                ),
            ),
            (
                "wildcard_merge",
                """
                Test that a wildcard (*) entries for the `event_type` and another one for
                `state_key` will play together.
                """,
                # Input
                SlidingSyncConfig.SlidingSyncList(
                    timeline_limit=10,
                    required_state=[
                        (EventTypes.Name, ""),
                        (StateValues.WILDCARD, ""),
                        (EventTypes.Member, "@foo"),
                        (EventTypes.Member, StateValues.WILDCARD),
                        (EventTypes.Member, "@bar"),
                        (EventTypes.CanonicalAlias, ""),
                    ],
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        StateValues.WILDCARD: {""},
                        EventTypes.Member: {StateValues.WILDCARD},
                    },
                ),
            ),
            (
                "wildcard_merge2",
                """
                Test that an all wildcard ("*", "*") entry will override any other
                values (including other wildcards).
                """,
                # Input
                SlidingSyncConfig.SlidingSyncList(
                    timeline_limit=10,
                    required_state=[
                        (EventTypes.Name, ""),
                        (StateValues.WILDCARD, ""),
                        (EventTypes.Member, StateValues.WILDCARD),
                        (EventTypes.Member, "@foo"),
                        # One of these should take precedence over everything else
                        (StateValues.WILDCARD, StateValues.WILDCARD),
                        (StateValues.WILDCARD, StateValues.WILDCARD),
                        (EventTypes.CanonicalAlias, ""),
                    ],
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        StateValues.WILDCARD: {StateValues.WILDCARD},
                    },
                ),
            ),
            (
                "lazy_members",
                """
                `$LAZY` room members should just be another additional key next to other
                explicit keys. We will unroll the special `$LAZY` meaning later.
                """,
                # Input
                SlidingSyncConfig.SlidingSyncList(
                    timeline_limit=10,
                    required_state=[
                        (EventTypes.Name, ""),
                        (EventTypes.Member, "@foo"),
                        (EventTypes.Member, "@bar"),
                        (EventTypes.Member, StateValues.LAZY),
                        (EventTypes.Member, "@baz"),
                        (EventTypes.CanonicalAlias, ""),
                    ],
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Name: {""},
                        EventTypes.Member: {
                            "@foo",
                            "@bar",
                            StateValues.LAZY,
                            "@baz",
                        },
                        EventTypes.CanonicalAlias: {""},
                    },
                ),
            ),
        ]
    )
    def test_from_room_config(
        self,
        _test_label: str,
        _test_description: str,
        room_params: SlidingSyncConfig.CommonRoomParameters,
        expected_room_sync_config: RoomSyncConfig,
    ) -> None:
        """
        Test `RoomSyncConfig.from_room_config(room_params)` will result in the `expected_room_sync_config`.
        """
        room_sync_config = RoomSyncConfig.from_room_config(room_params)

        self._assert_room_config_equal(
            room_sync_config,
            expected_room_sync_config,
        )

    @parameterized.expand(
        [
            (
                "no_direct_overlap",
                # A
                RoomSyncConfig(
                    timeline_limit=9,
                    required_state_map={
                        EventTypes.Name: {""},
                        EventTypes.Member: {
                            "@foo",
                            "@bar",
                        },
                    },
                ),
                # B
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Member: {
                            StateValues.LAZY,
                            "@baz",
                        },
                        EventTypes.CanonicalAlias: {""},
                    },
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Name: {""},
                        EventTypes.Member: {
                            "@foo",
                            "@bar",
                            StateValues.LAZY,
                            "@baz",
                        },
                        EventTypes.CanonicalAlias: {""},
                    },
                ),
            ),
            (
                "wildcard_overlap",
                # A
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        StateValues.WILDCARD: {StateValues.WILDCARD},
                    },
                ),
                # B
                RoomSyncConfig(
                    timeline_limit=9,
                    required_state_map={
                        EventTypes.Dummy: {StateValues.WILDCARD},
                        StateValues.WILDCARD: {"@bar"},
                        EventTypes.Member: {"@foo"},
                    },
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        StateValues.WILDCARD: {StateValues.WILDCARD},
                    },
                ),
            ),
            (
                "state_type_wildcard_overlap",
                # A
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Dummy: {"dummy"},
                        StateValues.WILDCARD: {
                            "",
                            "@foo",
                        },
                        EventTypes.Member: {"@bar"},
                    },
                ),
                # B
                RoomSyncConfig(
                    timeline_limit=9,
                    required_state_map={
                        EventTypes.Dummy: {"dummy2"},
                        StateValues.WILDCARD: {
                            "",
                            "@bar",
                        },
                        EventTypes.Member: {"@foo"},
                    },
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Dummy: {
                            "dummy",
                            "dummy2",
                        },
                        StateValues.WILDCARD: {
                            "",
                            "@foo",
                            "@bar",
                        },
                    },
                ),
            ),
            (
                "state_key_wildcard_overlap",
                # A
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Dummy: {"dummy"},
                        EventTypes.Member: {StateValues.WILDCARD},
                        "org.matrix.flowers": {StateValues.WILDCARD},
                    },
                ),
                # B
                RoomSyncConfig(
                    timeline_limit=9,
                    required_state_map={
                        EventTypes.Dummy: {StateValues.WILDCARD},
                        EventTypes.Member: {StateValues.WILDCARD},
                        "org.matrix.flowers": {"tulips"},
                    },
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Dummy: {StateValues.WILDCARD},
                        EventTypes.Member: {StateValues.WILDCARD},
                        "org.matrix.flowers": {StateValues.WILDCARD},
                    },
                ),
            ),
            (
                "state_type_and_state_key_wildcard_merge",
                # A
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Dummy: {"dummy"},
                        StateValues.WILDCARD: {
                            "",
                            "@foo",
                        },
                        EventTypes.Member: {"@bar"},
                    },
                ),
                # B
                RoomSyncConfig(
                    timeline_limit=9,
                    required_state_map={
                        EventTypes.Dummy: {"dummy2"},
                        StateValues.WILDCARD: {""},
                        EventTypes.Member: {StateValues.WILDCARD},
                    },
                ),
                # Expected
                RoomSyncConfig(
                    timeline_limit=10,
                    required_state_map={
                        EventTypes.Dummy: {
                            "dummy",
                            "dummy2",
                        },
                        StateValues.WILDCARD: {
                            "",
                            "@foo",
                        },
                        EventTypes.Member: {StateValues.WILDCARD},
                    },
                ),
            ),
        ]
    )
    def test_combine_room_sync_config(
        self,
        _test_label: str,
        a: RoomSyncConfig,
        b: RoomSyncConfig,
        expected: RoomSyncConfig,
    ) -> None:
        """
        Combine A into B and B into A to make sure we get the same result.
        """
        # Since we're mutating these in place, make a copy for each of our trials
        room_sync_config_a = deepcopy(a)
        room_sync_config_b = deepcopy(b)

        # Combine B into A
        room_sync_config_a.combine_room_sync_config(room_sync_config_b)

        self._assert_room_config_equal(room_sync_config_a, expected, "B into A")

        # Since we're mutating these in place, make a copy for each of our trials
        room_sync_config_a = deepcopy(a)
        room_sync_config_b = deepcopy(b)

        # Combine A into B
        room_sync_config_b.combine_room_sync_config(room_sync_config_a)

        self._assert_room_config_equal(room_sync_config_b, expected, "A into B")


class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
    """
    Tests Sliding Sync handler `get_sync_room_ids_for_user()` to make sure it returns
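The removed `RoomSyncConfigTestCase` pinned down the `required_state` merge rules, in particular that a full `("*", "*")` wildcard subsumes every other entry. A simplified toy version of that merge rule (Synapse's `combine_room_sync_config` handles more cases, such as per-type wildcards):

    from typing import Dict, Set

    WILDCARD = "*"

    def combine(a: Dict[str, Set[str]], b: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
        # Union the two required_state maps key by key.
        merged: Dict[str, Set[str]] = {}
        for m in (a, b):
            for event_type, state_keys in m.items():
                merged.setdefault(event_type, set()).update(state_keys)
        # An all-wildcard entry overrides everything else (including other wildcards).
        if WILDCARD in merged.get(WILDCARD, set()):
            return {WILDCARD: {WILDCARD}}
        return merged

    assert combine({"*": {"*"}}, {"m.room.member": {"@foo"}}) == {"*": {"*"}}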
@@ -18,6 +18,7 @@
|
||||
# [This file includes modifications made by New Vector Limited]
|
||||
#
|
||||
#
|
||||
import itertools
|
||||
import os
|
||||
import shutil
|
||||
import tempfile
|
||||
@@ -226,15 +227,19 @@ test_images = [
|
||||
empty_file,
|
||||
SVG,
|
||||
]
|
||||
input_values = [(x,) for x in test_images]
|
||||
urls = [
|
||||
"_matrix/media/r0/thumbnail",
|
||||
"_matrix/client/unstable/org.matrix.msc3916/media/thumbnail",
|
||||
]
|
||||
|
||||
|
||||
@parameterized_class(("test_image",), input_values)
|
||||
@parameterized_class(("test_image", "url"), itertools.product(test_images, urls))
|
||||
class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
servlets = [media.register_servlets]
|
||||
test_image: ClassVar[TestImage]
|
||||
hijack_auth = True
|
||||
user_id = "@test:user"
|
||||
url: ClassVar[str]
|
||||
|
||||
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
|
||||
self.fetches: List[
|
||||
@@ -299,6 +304,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
"config": {"directory": self.storage_path},
|
||||
}
|
||||
config["media_storage_providers"] = [provider_config]
|
||||
config["experimental_features"] = {"msc3916_authenticated_media_enabled": True}
|
||||
|
||||
hs = self.setup_test_homeserver(config=config, federation_http_client=client)
|
||||
|
||||
@@ -503,7 +509,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
params = "?width=32&height=32&method=scale"
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/media/r0/thumbnail/{self.media_id}{params}",
|
||||
f"/{self.url}/{self.media_id}{params}",
|
||||
shorthand=False,
|
||||
await_result=False,
|
||||
)
|
||||
@@ -531,7 +537,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/media/r0/thumbnail/{self.media_id}{params}",
|
||||
f"/{self.url}/{self.media_id}{params}",
|
||||
shorthand=False,
|
||||
await_result=False,
|
||||
)
|
||||
@@ -567,7 +573,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
params = "?width=32&height=32&method=" + method
|
||||
channel = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/media/r0/thumbnail/{self.media_id}{params}",
|
||||
f"/{self.url}/{self.media_id}{params}",
|
||||
shorthand=False,
|
||||
await_result=False,
|
||||
)
|
||||
@@ -602,7 +608,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
channel.json_body,
|
||||
{
|
||||
"errcode": "M_UNKNOWN",
|
||||
"error": "Cannot find any thumbnails for the requested media ('/_matrix/media/r0/thumbnail/example.com/12345'). This might mean the media is not a supported_media_format=(image/jpeg, image/jpg, image/webp, image/gif, image/png) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)",
|
||||
"error": f"Cannot find any thumbnails for the requested media ('/{self.url}/example.com/12345'). This might mean the media is not a supported_media_format=(image/jpeg, image/jpg, image/webp, image/gif, image/png) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)",
|
||||
},
|
||||
)
|
||||
else:
|
||||
@@ -612,7 +618,7 @@ class MediaRepoTests(unittest.HomeserverTestCase):
|
||||
channel.json_body,
|
||||
{
|
||||
"errcode": "M_NOT_FOUND",
|
||||
"error": "Not found '/_matrix/media/r0/thumbnail/example.com/12345'",
|
||||
"error": f"Not found '/{self.url}/example.com/12345'",
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
@@ -23,15 +23,12 @@ import io
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
import shutil
|
||||
from typing import Any, BinaryIO, Dict, List, Optional, Sequence, Tuple, Type
|
||||
from typing import Any, BinaryIO, ClassVar, Dict, List, Optional, Sequence, Tuple, Type
|
||||
from unittest.mock import MagicMock, Mock, patch
|
||||
from urllib import parse
|
||||
from urllib.parse import quote, urlencode
|
||||
|
||||
from parameterized import parameterized, parameterized_class
|
||||
from PIL import Image as Image
|
||||
from typing_extensions import ClassVar
|
||||
from parameterized import parameterized_class
|
||||
|
||||
from twisted.internet import defer
|
||||
from twisted.internet._resolver import HostResolution
|
||||
@@ -43,6 +40,7 @@ from twisted.python.failure import Failure
|
||||
from twisted.test.proto_helpers import AccumulatingProtocol, MemoryReactor
|
||||
from twisted.web.http_headers import Headers
|
||||
from twisted.web.iweb import UNKNOWN_LENGTH, IResponse
|
||||
from twisted.web.resource import Resource
|
||||
|
||||
from synapse.api.errors import HttpResponseException
|
||||
from synapse.api.ratelimiting import Ratelimiter
|
||||
@@ -50,8 +48,7 @@ from synapse.config.oembed import OEmbedEndpointConfig
|
||||
from synapse.http.client import MultipartResponse
|
||||
from synapse.http.types import QueryParams
|
||||
from synapse.logging.context import make_deferred_yieldable
|
||||
from synapse.media._base import FileInfo, ThumbnailInfo
|
||||
from synapse.media.thumbnailer import ThumbnailProvider
|
||||
from synapse.media._base import FileInfo
|
||||
from synapse.media.url_previewer import IMAGE_CACHE_EXPIRY_MS
|
||||
from synapse.rest import admin
|
||||
from synapse.rest.client import login, media
|
||||
@@ -79,7 +76,7 @@ except ImportError:
|
||||
lxml = None # type: ignore[assignment]
|
||||
|
||||
|
||||
class MediaDomainBlockingTests(unittest.HomeserverTestCase):
|
||||
class UnstableMediaDomainBlockingTests(unittest.HomeserverTestCase):
|
||||
remote_media_id = "doesnotmatter"
|
||||
remote_server_name = "evil.com"
|
||||
servlets = [
|
||||
@@ -147,6 +144,7 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
|
||||
# Should result in a 404.
|
||||
"prevent_media_downloads_from": ["evil.com"],
|
||||
"dynamic_thumbnails": True,
|
||||
"experimental_features": {"msc3916_authenticated_media_enabled": True},
|
||||
}
|
||||
)
|
||||
def test_cannot_download_blocked_media_thumbnail(self) -> None:
|
||||
@@ -155,7 +153,7 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
|
||||
"""
|
||||
response = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/client/v1/media/thumbnail/evil.com/{self.remote_media_id}?width=100&height=100",
|
||||
f"/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/evil.com/{self.remote_media_id}?width=100&height=100",
|
||||
shorthand=False,
|
||||
content={"width": 100, "height": 100},
|
||||
access_token=self.tok,
|
||||
@@ -168,6 +166,7 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
|
||||
# This proves we haven't broken anything.
|
||||
"prevent_media_downloads_from": ["not-listed.com"],
|
||||
"dynamic_thumbnails": True,
|
||||
"experimental_features": {"msc3916_authenticated_media_enabled": True},
|
||||
}
|
||||
)
|
||||
def test_remote_media_thumbnail_normally_unblocked(self) -> None:
|
||||
@@ -176,14 +175,14 @@ class MediaDomainBlockingTests(unittest.HomeserverTestCase):
|
||||
"""
|
||||
response = self.make_request(
|
||||
"GET",
|
||||
f"/_matrix/client/v1/media/thumbnail/evil.com/{self.remote_media_id}?width=100&height=100",
|
||||
f"/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/evil.com/{self.remote_media_id}?width=100&height=100",
|
||||
shorthand=False,
|
||||
access_token=self.tok,
|
||||
)
|
||||
self.assertEqual(response.code, 200)
|
||||
|
||||
|
||||
class URLPreviewTests(unittest.HomeserverTestCase):
class UnstableURLPreviewTests(unittest.HomeserverTestCase):
if not lxml:
skip = "url preview feature requires lxml"

@@ -199,6 +198,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
config = self.default_config()
config["experimental_features"] = {"msc3916_authenticated_media_enabled": True}
config["url_preview_enabled"] = True
config["max_spider_size"] = 9999999
config["url_preview_ip_range_blacklist"] = (
@@ -284,6 +284,18 @@ class URLPreviewTests(unittest.HomeserverTestCase):

self.reactor.nameResolver = Resolver() # type: ignore[assignment]

def create_resource_dict(self) -> Dict[str, Resource]:
"""Create a resource tree for the test server

A resource tree is a mapping from path to twisted.web.resource.

The default implementation creates a JsonResource and calls each function in
`servlets` to register servlets against it.
"""
resources = super().create_resource_dict()
resources["/_matrix/media"] = self.hs.get_media_repository_resource()
return resources

def _assert_small_png(self, json_body: JsonDict) -> None:
"""Assert properties from the SMALL_PNG test image."""
self.assertTrue(json_body["og:image"].startswith("mxc://"))
@@ -297,7 +309,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -322,7 +334,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
# Check the cache returns the correct response
channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
)

@@ -340,7 +352,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
# Check the database cache returns the correct response
channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
)

@@ -363,7 +375,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -393,7 +405,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -429,7 +441,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -470,7 +482,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -505,7 +517,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -538,7 +550,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://example.com",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://example.com",
shorthand=False,
await_result=False,
)
@@ -568,7 +580,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://example.com",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://example.com",
shorthand=False,
)

@@ -591,7 +603,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://example.com",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://example.com",
shorthand=False,
)

@@ -610,7 +622,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
"""
channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://192.168.1.1",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://192.168.1.1",
shorthand=False,
)

@@ -628,7 +640,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
"""
channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://1.1.1.2",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://1.1.1.2",
shorthand=False,
)

@@ -647,7 +659,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://example.com",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://example.com",
shorthand=False,
await_result=False,
)
@@ -684,7 +696,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://example.com",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://example.com",
shorthand=False,
)
self.assertEqual(channel.code, 502)
@@ -706,7 +718,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://example.com",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://example.com",
shorthand=False,
)

@@ -729,7 +741,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://example.com",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://example.com",
shorthand=False,
)

@@ -748,7 +760,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
"""
channel = self.make_request(
"OPTIONS",
"/_matrix/client/v1/media/preview_url?url=http://example.com",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://example.com",
shorthand=False,
)
self.assertEqual(channel.code, 204)
@@ -762,7 +774,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

# Build and make a request to the server
channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://example.com",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://example.com",
shorthand=False,
await_result=False,
)
@@ -815,7 +827,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -865,7 +877,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -907,7 +919,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -947,7 +959,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -988,7 +1000,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/preview_url?{query_params}",
f"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?{query_params}",
shorthand=False,
)
self.pump()
@@ -1009,7 +1021,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://matrix.org",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://matrix.org",
shorthand=False,
await_result=False,
)
@@ -1046,7 +1058,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://twitter.com/matrixdotorg/status/12345",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://twitter.com/matrixdotorg/status/12345",
shorthand=False,
await_result=False,
)
@@ -1106,7 +1118,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://twitter.com/matrixdotorg/status/12345",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://twitter.com/matrixdotorg/status/12345",
shorthand=False,
await_result=False,
)
@@ -1155,7 +1167,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://www.hulu.com/watch/12345",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://www.hulu.com/watch/12345",
shorthand=False,
await_result=False,
)
@@ -1200,7 +1212,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://twitter.com/matrixdotorg/status/12345",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://twitter.com/matrixdotorg/status/12345",
shorthand=False,
await_result=False,
)
@@ -1229,7 +1241,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://www.twitter.com/matrixdotorg/status/12345",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://www.twitter.com/matrixdotorg/status/12345",
shorthand=False,
await_result=False,
)
@@ -1321,7 +1333,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://www.twitter.com/matrixdotorg/status/12345",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://www.twitter.com/matrixdotorg/status/12345",
shorthand=False,
await_result=False,
)
@@ -1362,7 +1374,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=http://cdn.twitter.com/matrixdotorg",
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url=http://cdn.twitter.com/matrixdotorg",
shorthand=False,
await_result=False,
)
@@ -1404,7 +1416,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
# Check fetching
channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/download/{host}/{media_id}",
f"/_matrix/media/v3/download/{host}/{media_id}",
shorthand=False,
await_result=False,
)
@@ -1417,7 +1429,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
f"/_matrix/client/v1/download/{host}/{media_id}",
f"/_matrix/media/v3/download/{host}/{media_id}",
shorthand=False,
await_result=False,
)
@@ -1452,7 +1464,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):
# Check fetching
channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/thumbnail/{host}/{media_id}?width=32&height=32&method=scale",
f"/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/{host}/{media_id}?width=32&height=32&method=scale",
shorthand=False,
await_result=False,
)
@@ -1470,7 +1482,7 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/thumbnail/{host}/{media_id}?width=32&height=32&method=scale",
f"/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/{host}/{media_id}?width=32&height=32&method=scale",
shorthand=False,
await_result=False,
)
@@ -1520,7 +1532,8 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=" + bad_url,
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url="
+ bad_url,
shorthand=False,
await_result=False,
)
@@ -1529,7 +1542,8 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=" + good_url,
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url="
+ good_url,
shorthand=False,
await_result=False,
)
@@ -1561,7 +1575,8 @@ class URLPreviewTests(unittest.HomeserverTestCase):

channel = self.make_request(
"GET",
"/_matrix/client/v1/media/preview_url?url=" + bad_url,
"/_matrix/client/unstable/org.matrix.msc3916/media/preview_url?url="
+ bad_url,
shorthand=False,
await_result=False,
)
@@ -1569,7 +1584,7 @@
self.assertEqual(channel.code, 403, channel.result)


class MediaConfigTest(unittest.HomeserverTestCase):
class UnstableMediaConfigTest(unittest.HomeserverTestCase):
servlets = [
media.register_servlets,
admin.register_servlets,
@@ -1580,6 +1595,7 @@ class MediaConfigTest(unittest.HomeserverTestCase):
self, reactor: ThreadedMemoryReactorClock, clock: Clock
) -> HomeServer:
config = self.default_config()
config["experimental_features"] = {"msc3916_authenticated_media_enabled": True}

self.storage_path = self.mktemp()
self.media_store_path = self.mktemp()
@@ -1606,7 +1622,7 @@ class MediaConfigTest(unittest.HomeserverTestCase):
def test_media_config(self) -> None:
channel = self.make_request(
"GET",
"/_matrix/client/v1/media/config",
"/_matrix/client/unstable/org.matrix.msc3916/media/config",
shorthand=False,
access_token=self.tok,
)
@@ -1883,7 +1899,7 @@ input_values = [(x,) for x in test_images]


@parameterized_class(("test_image",), input_values)
class DownloadAndThumbnailTestCase(unittest.HomeserverTestCase):
class DownloadTestCase(unittest.HomeserverTestCase):
test_image: ClassVar[TestImage]
servlets = [
media.register_servlets,
@@ -1989,6 +2005,7 @@ class DownloadAndThumbnailTestCase(unittest.HomeserverTestCase):
"config": {"directory": self.storage_path},
}
config["media_storage_providers"] = [provider_config]
config["experimental_features"] = {"msc3916_authenticated_media_enabled": True}

hs = self.setup_test_homeserver(config=config, federation_http_client=client)

@@ -2147,7 +2164,7 @@ class DownloadAndThumbnailTestCase(unittest.HomeserverTestCase):

def test_unknown_federation_endpoint(self) -> None:
"""
Test that if the download request to remote federation endpoint returns a 404
Test that if the downloadd request to remote federation endpoint returns a 404
we fall back to the _matrix/media endpoint
"""
channel = self.make_request(
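A hedged sketch, not Synapse's actual implementation, of the fallback behaviour the docstring above describes; the endpoint paths and the fetch callable are illustrative assumptions:

from typing import Awaitable, Callable, Tuple

async def download_with_fallback(
    fetch: Callable[[str], Awaitable[Tuple[int, bytes]]],
    new_path: str,
    legacy_path: str,
) -> bytes:
    # Try the newer (authenticated) federation download endpoint first.
    status, body = await fetch(new_path)
    if status == 404:
        # The remote does not know the endpoint: retry via the legacy
        # /_matrix/media API instead.
        status, body = await fetch(legacy_path)
    if status != 200:
        raise RuntimeError(f"download failed with HTTP {status}")
    return body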
@@ -2193,236 +2210,3 @@ class DownloadAndThumbnailTestCase(unittest.HomeserverTestCase):

self.pump()
self.assertEqual(channel.code, 200)

def test_thumbnail_crop(self) -> None:
"""Test that a cropped remote thumbnail is available."""
self._test_thumbnail(
"crop",
self.test_image.expected_cropped,
expected_found=self.test_image.expected_found,
unable_to_thumbnail=self.test_image.unable_to_thumbnail,
)

def test_thumbnail_scale(self) -> None:
"""Test that a scaled remote thumbnail is available."""
self._test_thumbnail(
"scale",
self.test_image.expected_scaled,
expected_found=self.test_image.expected_found,
unable_to_thumbnail=self.test_image.unable_to_thumbnail,
)

def test_invalid_type(self) -> None:
"""An invalid thumbnail type is never available."""
self._test_thumbnail(
"invalid",
None,
expected_found=False,
unable_to_thumbnail=self.test_image.unable_to_thumbnail,
)

@unittest.override_config(
{"thumbnail_sizes": [{"width": 32, "height": 32, "method": "scale"}]}
)
def test_no_thumbnail_crop(self) -> None:
"""
Override the config to generate only scaled thumbnails, but request a cropped one.
"""
self._test_thumbnail(
"crop",
None,
expected_found=False,
unable_to_thumbnail=self.test_image.unable_to_thumbnail,
)

@unittest.override_config(
{"thumbnail_sizes": [{"width": 32, "height": 32, "method": "crop"}]}
)
def test_no_thumbnail_scale(self) -> None:
"""
Override the config to generate only cropped thumbnails, but request a scaled one.
"""
self._test_thumbnail(
"scale",
None,
expected_found=False,
unable_to_thumbnail=self.test_image.unable_to_thumbnail,
)

def test_thumbnail_repeated_thumbnail(self) -> None:
"""Test that fetching the same thumbnail works, and deleting the on disk
thumbnail regenerates it.
"""
self._test_thumbnail(
"scale",
self.test_image.expected_scaled,
expected_found=self.test_image.expected_found,
unable_to_thumbnail=self.test_image.unable_to_thumbnail,
)

if not self.test_image.expected_found:
return

# Fetching again should work, without re-requesting the image from the
# remote.
params = "?width=32&height=32&method=scale"
channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/thumbnail/{self.remote}/{self.media_id}{params}",
shorthand=False,
await_result=False,
access_token=self.tok,
)
self.pump()

self.assertEqual(channel.code, 200)
if self.test_image.expected_scaled:
self.assertEqual(
channel.result["body"],
self.test_image.expected_scaled,
channel.result["body"],
)

# Deleting the thumbnail on disk then re-requesting it should work as
# Synapse should regenerate missing thumbnails.
info = self.get_success(
self.store.get_cached_remote_media(self.remote, self.media_id)
)
assert info is not None
file_id = info.filesystem_id

thumbnail_dir = self.media_repo.filepaths.remote_media_thumbnail_dir(
self.remote, file_id
)
shutil.rmtree(thumbnail_dir, ignore_errors=True)

channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/thumbnail/{self.remote}/{self.media_id}{params}",
shorthand=False,
await_result=False,
access_token=self.tok,
)
self.pump()

self.assertEqual(channel.code, 200)
if self.test_image.expected_scaled:
self.assertEqual(
channel.result["body"],
self.test_image.expected_scaled,
channel.result["body"],
)

def _test_thumbnail(
self,
method: str,
expected_body: Optional[bytes],
expected_found: bool,
unable_to_thumbnail: bool = False,
) -> None:
"""Test the given thumbnailing method works as expected.

Args:
method: The thumbnailing method to use (crop, scale).
expected_body: The expected bytes from thumbnailing, or None if
test should just check for a valid image.
expected_found: True if the file should exist on the server, or False if
a 404/400 is expected.
unable_to_thumbnail: True if we expect the thumbnailing to fail (400), or
False if the thumbnailing should succeed or a normal 404 is expected.
"""

params = "?width=32&height=32&method=" + method
channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/thumbnail/{self.remote}/{self.media_id}{params}",
shorthand=False,
await_result=False,
access_token=self.tok,
)
self.pump()
headers = {
b"Content-Length": [b"%d" % (len(self.test_image.data))],
b"Content-Type": [self.test_image.content_type],
}
self.fetches[0][0].callback(
(self.test_image.data, (len(self.test_image.data), headers))
)
self.pump()
if expected_found:
self.assertEqual(channel.code, 200)

self.assertEqual(
channel.headers.getRawHeaders(b"Cross-Origin-Resource-Policy"),
[b"cross-origin"],
)

if expected_body is not None:
self.assertEqual(
channel.result["body"], expected_body, channel.result["body"]
)
else:
# ensure that the result is at least some valid image
Image.open(io.BytesIO(channel.result["body"]))
elif unable_to_thumbnail:
# A 400 with a JSON body.
self.assertEqual(channel.code, 400)
self.assertEqual(
channel.json_body,
{
"errcode": "M_UNKNOWN",
"error": "Cannot find any thumbnails for the requested media ('/_matrix/client/v1/media/thumbnail/example.com/12345'). This might mean the media is not a supported_media_format=(image/jpeg, image/jpg, image/webp, image/gif, image/png) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)",
},
)
else:
# A 404 with a JSON body.
self.assertEqual(channel.code, 404)
self.assertEqual(
channel.json_body,
{
"errcode": "M_NOT_FOUND",
"error": "Not found '/_matrix/client/v1/media/thumbnail/example.com/12345'",
},
)

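A condensed summary, my own rather than test code, of the outcome matrix that _test_thumbnail asserts above:

def expected_status(expected_found: bool, unable_to_thumbnail: bool) -> int:
    # Mirrors the branches above: found -> 200 (with a Cross-Origin-Resource-Policy
    # header and either exact bytes or at least a decodable image), thumbnailing
    # failed -> 400 with errcode M_UNKNOWN, otherwise -> 404 with M_NOT_FOUND.
    if expected_found:
        return 200
    if unable_to_thumbnail:
        return 400
    return 404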
@parameterized.expand([("crop", 16), ("crop", 64), ("scale", 16), ("scale", 64)])
def test_same_quality(self, method: str, desired_size: int) -> None:
"""Test that choosing between thumbnails with the same quality rating succeeds.

We are not particular about which thumbnail is chosen."""

content_type = self.test_image.content_type.decode()
media_repo = self.hs.get_media_repository()
thumbnail_provider = ThumbnailProvider(
self.hs, media_repo, media_repo.media_storage
)

self.assertIsNotNone(
thumbnail_provider._select_thumbnail(
desired_width=desired_size,
desired_height=desired_size,
desired_method=method,
desired_type=content_type,
# Provide two identical thumbnails which are guaranteed to have the same
# quality rating.
thumbnail_infos=[
ThumbnailInfo(
width=32,
height=32,
method=method,
type=content_type,
length=256,
),
ThumbnailInfo(
width=32,
height=32,
method=method,
type=content_type,
length=256,
),
],
file_id=f"image{self.test_image.extension.decode()}",
url_cache=False,
server_name=None,
)
)

@@ -20,7 +20,7 @@
#
import json
import logging
from typing import AbstractSet, Any, Dict, Iterable, List, Optional
from typing import Dict, List

from parameterized import parameterized, parameterized_class

@@ -32,12 +32,9 @@ from synapse.api.constants import (
EventContentFields,
EventTypes,
HistoryVisibility,
Membership,
ReceiptTypes,
RelationTypes,
)
from synapse.events import EventBase
from synapse.handlers.sliding_sync import StateValues
from synapse.rest.client import devices, knock, login, read_marker, receipts, room, sync
from synapse.server import HomeServer
from synapse.types import JsonDict, RoomStreamToken, StreamKeyType, StreamToken, UserID
@@ -48,7 +45,6 @@ from tests.federation.transport.test_knocking import (
KnockingStrippedStateEventHelperMixin,
)
from tests.server import TimedOutException
from tests.test_utils.event_injection import mark_event_as_partial_state

logger = logging.getLogger(__name__)

@@ -1241,94 +1237,6 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
)
self.store = hs.get_datastores().main
self.event_sources = hs.get_event_sources()
self.storage_controllers = hs.get_storage_controllers()

def _assertRequiredStateIncludes(
self,
actual_required_state: Any,
expected_state_events: Iterable[EventBase],
exact: bool = False,
) -> None:
"""
Wrapper around `_assertIncludes` to give slightly better looking diff error
messages that include some context "$event_id (type, state_key)".

Args:
actual_required_state: The "required_state" of a room from a Sliding Sync
request response.
expected_state_events: The expected state events to be included in the
`actual_required_state`.
exact: Whether the actual state should be exactly equal to the expected
state (no extras).
"""

assert isinstance(actual_required_state, list)
for event in actual_required_state:
assert isinstance(event, dict)

self._assertIncludes(
{
f'{event["event_id"]} ("{event["type"]}", "{event["state_key"]}")'
for event in actual_required_state
},
{
f'{event.event_id} ("{event.type}", "{event.state_key}")'
for event in expected_state_events
},
exact=exact,
# Message to help understand the diff in context
message=str(actual_required_state),
)

def _assertIncludes(
self,
actual_items: AbstractSet[str],
expected_items: AbstractSet[str],
exact: bool = False,
message: Optional[str] = None,
) -> None:
"""
Assert that all of the `expected_items` are included in the `actual_items`.

This assert could also be called `assertContains`, `assertItemsInSet`

Args:
actual_items: The container
expected_items: The items to check for in the container
exact: Whether the actual state should be exactly equal to the expected
state (no extras).
message: Optional message to include in the failure message.
"""
# Check that each set has the same items
if exact and actual_items == expected_items:
return
# Check for a superset
elif not exact and actual_items >= expected_items:
return

expected_lines: List[str] = []
for expected_item in expected_items:
is_expected_in_actual = expected_item in actual_items
expected_lines.append(
"{} {}".format(" " if is_expected_in_actual else "?", expected_item)
)

actual_lines: List[str] = []
for actual_item in actual_items:
is_actual_in_expected = actual_item in expected_items
actual_lines.append(
"{} {}".format("+" if is_actual_in_expected else " ", actual_item)
)

newline = "\n"
expected_string = f"Expected items to be in actual ('?' = missing expected items):\n {{\n{newline.join(expected_lines)}\n }}"
actual_string = f"Actual ('+' = found expected items):\n {{\n{newline.join(actual_lines)}\n }}"
first_message = (
"Items must match exactly" if exact else "Some expected items are missing."
)
diff_message = f"{first_message}\n{expected_string}\n{actual_string}"

self.fail(f"{diff_message}\n{message}")

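For orientation, a minimal usage sketch of the helper above; the item strings are made up:

actual = {'$a ("m.room.create", "")', '$b ("m.room.member", "@u:test")'}
expected = {'$a ("m.room.create", "")'}
# exact=False passes whenever `actual` is a superset of `expected`:
#     self._assertIncludes(actual, expected)
# exact=True requires set equality, so this call would fail with the
# "Items must match exactly" diff message built above:
#     self._assertIncludes(actual, expected, exact=True)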
def _add_new_dm_to_global_account_data(
self, source_user_id: str, target_user_id: str, target_room_id: str
@@ -2183,11 +2091,6 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
channel.json_body["rooms"][room_id1].get("prev_batch"),
channel.json_body["rooms"][room_id1],
)
# `required_state` is omitted for `invite` rooms with `stripped_state`
self.assertIsNone(
channel.json_body["rooms"][room_id1].get("required_state"),
channel.json_body["rooms"][room_id1],
)
# We should have some `stripped_state` so the potential joiner can identify the
# room (we don't care about the order).
self.assertCountEqual(
@@ -2297,11 +2200,6 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
channel.json_body["rooms"][room_id1].get("prev_batch"),
channel.json_body["rooms"][room_id1],
)
# `required_state` is omitted for `invite` rooms with `stripped_state`
self.assertIsNone(
channel.json_body["rooms"][room_id1].get("required_state"),
channel.json_body["rooms"][room_id1],
)
# We should have some `stripped_state` so the potential joiner can identify the
# room (we don't care about the order).
self.assertCountEqual(
@@ -2423,11 +2321,6 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
channel.json_body["rooms"][room_id1].get("prev_batch"),
channel.json_body["rooms"][room_id1],
)
# `required_state` is omitted for `invite` rooms with `stripped_state`
self.assertIsNone(
channel.json_body["rooms"][room_id1].get("required_state"),
channel.json_body["rooms"][room_id1],
)
# We should have some `stripped_state` so the potential joiner can identify the
# room (we don't care about the order).
self.assertCountEqual(
@@ -2555,11 +2448,6 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
channel.json_body["rooms"][room_id1].get("prev_batch"),
channel.json_body["rooms"][room_id1],
)
# `required_state` is omitted for `invite` rooms with `stripped_state`
self.assertIsNone(
channel.json_body["rooms"][room_id1].get("required_state"),
channel.json_body["rooms"][room_id1],
)
# We should have some `stripped_state` so the potential joiner can identify the
# room (we don't care about the order).
self.assertCountEqual(
@@ -2793,602 +2681,3 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
False,
channel.json_body["rooms"][room_id1],
)

def test_rooms_no_required_state(self) -> None:
"""
Empty `rooms.required_state` should not return any state events in the room
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)

# Make the Sliding Sync request
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
# Empty `required_state`
"required_state": [],
"timeline_limit": 0,
}
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

# No `required_state` in response
self.assertIsNone(
channel.json_body["rooms"][room_id1].get("required_state"),
channel.json_body["rooms"][room_id1],
)

def test_rooms_required_state_initial_sync(self) -> None:
"""
Test `rooms.required_state` returns requested state events in the room during an
initial sync.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)

# Make the Sliding Sync request
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
[EventTypes.RoomHistoryVisibility, ""],
# This one doesn't exist in the room
[EventTypes.Tombstone, ""],
],
"timeline_limit": 0,
}
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)

self._assertRequiredStateIncludes(
channel.json_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
state_map[(EventTypes.RoomHistoryVisibility, "")],
},
exact=True,
)

def test_rooms_required_state_incremental_sync(self) -> None:
"""
Test `rooms.required_state` returns requested state events in the room during an
incremental sync.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)

after_room_token = self.event_sources.get_current_token()

# Make the Sliding Sync request
channel = self.make_request(
"POST",
self.sync_endpoint
+ f"?pos={self.get_success(after_room_token.to_string(self.store))}",
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
[EventTypes.RoomHistoryVisibility, ""],
# This one doesn't exist in the room
[EventTypes.Tombstone, ""],
],
"timeline_limit": 0,
}
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)

# The returned state doesn't change from initial to incremental sync. In the
# future, we will only return updates but only if we've sent the room down the
# connection before.
self._assertRequiredStateIncludes(
channel.json_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
state_map[(EventTypes.RoomHistoryVisibility, "")],
},
exact=True,
)

def test_rooms_required_state_wildcard(self) -> None:
"""
Test `rooms.required_state` returns all state events when using wildcard `["*", "*"]`.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)

self.helper.send_state(
room_id1,
event_type="org.matrix.foo_state",
state_key="",
body={"foo": "bar"},
tok=user2_tok,
)
self.helper.send_state(
room_id1,
event_type="org.matrix.foo_state",
state_key="namespaced",
body={"foo": "bar"},
tok=user2_tok,
)

# Make the Sliding Sync request with wildcards for the `event_type` and `state_key`
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[StateValues.WILDCARD, StateValues.WILDCARD],
],
"timeline_limit": 0,
}
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)

self._assertRequiredStateIncludes(
channel.json_body["rooms"][room_id1]["required_state"],
# We should see all the state events in the room
state_map.values(),
exact=True,
)

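The `required_state` shapes these tests exercise, written out as bare request fragments; the literal sentinels "*" and "$LAZY" are assumed from the docstrings to be what `StateValues.WILDCARD` and `StateValues.LAZY` expand to:

wildcard_all = {"required_state": [["*", "*"]], "timeline_limit": 0}
wildcard_event_type = {"required_state": [["*", "@user2:test"]], "timeline_limit": 0}
lazy_members = {"required_state": [["m.room.member", "$LAZY"]], "timeline_limit": 3}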
def test_rooms_required_state_wildcard_event_type(self) -> None:
"""
Test `rooms.required_state` returns relevant state events when using wildcard in
the event_type `["*", "foobarbaz"]`.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)

self.helper.send_state(
room_id1,
event_type="org.matrix.foo_state",
state_key="",
body={"foo": "bar"},
tok=user2_tok,
)
self.helper.send_state(
room_id1,
event_type="org.matrix.foo_state",
state_key=user2_id,
body={"foo": "bar"},
tok=user2_tok,
)

# Make the Sliding Sync request with wildcards for the `event_type`
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[StateValues.WILDCARD, user2_id],
],
"timeline_limit": 0,
}
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)

# We expect at-least any state event with the `user2_id` as the `state_key`
self._assertRequiredStateIncludes(
channel.json_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Member, user2_id)],
state_map[("org.matrix.foo_state", user2_id)],
},
# Ideally, this would be exact but we're currently returning all state
# events when the `event_type` is a wildcard.
exact=False,
)

def test_rooms_required_state_wildcard_state_key(self) -> None:
"""
Test `rooms.required_state` returns relevant state events when using wildcard in
the state_key `["foobarbaz","*"]`.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)

# Make the Sliding Sync request with wildcards for the `state_key`
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Member, StateValues.WILDCARD],
],
"timeline_limit": 0,
}
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)

self._assertRequiredStateIncludes(
channel.json_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Member, user1_id)],
state_map[(EventTypes.Member, user2_id)],
},
exact=True,
)

def test_rooms_required_state_lazy_loading_room_members(self) -> None:
"""
Test `rooms.required_state` returns people relevant to the timeline when
lazy-loading room members, `["m.room.member","$LAZY"]`.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
user3_id = self.register_user("user3", "pass")
user3_tok = self.login(user3_id, "pass")

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)
self.helper.join(room_id1, user3_id, tok=user3_tok)

self.helper.send(room_id1, "1", tok=user2_tok)
self.helper.send(room_id1, "2", tok=user3_tok)
self.helper.send(room_id1, "3", tok=user2_tok)

# Make the Sliding Sync request with lazy loading for the room members
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
[EventTypes.Member, StateValues.LAZY],
],
"timeline_limit": 3,
}
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)

# Only user2 and user3 sent events in the 3 events we see in the `timeline`
self._assertRequiredStateIncludes(
channel.json_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
state_map[(EventTypes.Member, user2_id)],
state_map[(EventTypes.Member, user3_id)],
},
exact=True,
)

@parameterized.expand([(Membership.LEAVE,), (Membership.BAN,)])
def test_rooms_required_state_leave_ban(self, stop_membership: str) -> None:
"""
Test `rooms.required_state` should not return state past a leave/ban event.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")
user3_id = self.register_user("user3", "pass")
user3_tok = self.login(user3_id, "pass")

from_token = self.event_sources.get_current_token()

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)
self.helper.join(room_id1, user3_id, tok=user3_tok)

self.helper.send_state(
room_id1,
event_type="org.matrix.foo_state",
state_key="",
body={"foo": "bar"},
tok=user2_tok,
)

if stop_membership == Membership.LEAVE:
# User 1 leaves
self.helper.leave(room_id1, user1_id, tok=user1_tok)
elif stop_membership == Membership.BAN:
# User 1 is banned
self.helper.ban(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)

state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)

# Change the state after user 1 leaves
self.helper.send_state(
room_id1,
event_type="org.matrix.foo_state",
state_key="",
body={"foo": "qux"},
tok=user2_tok,
)
self.helper.leave(room_id1, user3_id, tok=user3_tok)

# Make the Sliding Sync request with lazy loading for the room members
channel = self.make_request(
"POST",
self.sync_endpoint
+ f"?pos={self.get_success(from_token.to_string(self.store))}",
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
[EventTypes.Member, "*"],
["org.matrix.foo_state", ""],
],
"timeline_limit": 3,
}
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

# Only user2 and user3 sent events in the 3 events we see in the `timeline`
self._assertRequiredStateIncludes(
channel.json_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
state_map[(EventTypes.Member, user1_id)],
state_map[(EventTypes.Member, user2_id)],
state_map[(EventTypes.Member, user3_id)],
state_map[("org.matrix.foo_state", "")],
},
exact=True,
)

def test_rooms_required_state_combine_superset(self) -> None:
"""
Test `rooms.required_state` is combined across lists and room subscriptions.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
self.helper.join(room_id1, user1_id, tok=user1_tok)

self.helper.send_state(
room_id1,
event_type="org.matrix.foo_state",
state_key="",
body={"foo": "bar"},
tok=user2_tok,
)

# Make the Sliding Sync request with wildcards for the `state_key`
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
[EventTypes.Member, user1_id],
],
"timeline_limit": 0,
},
"bar-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Member, StateValues.WILDCARD],
["org.matrix.foo_state", ""],
],
"timeline_limit": 0,
},
}
# TODO: Room subscription should also combine with the `required_state`
# "room_subscriptions": {
#     room_id1: {
#         "required_state": [
#             ["org.matrix.bar_state", ""]
#         ],
#         "timeline_limit": 0,
#     }
# }
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

state_map = self.get_success(
self.storage_controllers.state.get_current_state(room_id1)
)

self._assertRequiredStateIncludes(
channel.json_body["rooms"][room_id1]["required_state"],
{
state_map[(EventTypes.Create, "")],
state_map[(EventTypes.Member, user1_id)],
state_map[(EventTypes.Member, user2_id)],
state_map[("org.matrix.foo_state", "")],
},
exact=True,
)

def test_rooms_required_state_partial_state(self) -> None:
"""
Test partially-stated room are excluded unless `rooms.required_state` is
lazy-loading room members.
"""
user1_id = self.register_user("user1", "pass")
user1_tok = self.login(user1_id, "pass")
user2_id = self.register_user("user2", "pass")
user2_tok = self.login(user2_id, "pass")

room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
_join_response1 = self.helper.join(room_id1, user1_id, tok=user1_tok)
join_response2 = self.helper.join(room_id2, user1_id, tok=user1_tok)

# Mark room2 as partial state
self.get_success(
mark_event_as_partial_state(self.hs, join_response2["event_id"], room_id2)
)

# Make the Sliding Sync request (NOT lazy-loading room members)
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
],
"timeline_limit": 0,
},
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

# Make sure the list includes room1 but room2 is excluded because it's still
# partially-stated
self.assertListEqual(
list(channel.json_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 1],
"room_ids": [room_id1],
}
],
channel.json_body["lists"]["foo-list"],
)

# Make the Sliding Sync request (with lazy-loading room members)
channel = self.make_request(
"POST",
self.sync_endpoint,
{
"lists": {
"foo-list": {
"ranges": [[0, 1]],
"required_state": [
[EventTypes.Create, ""],
# Lazy-load room members
[EventTypes.Member, StateValues.LAZY],
],
"timeline_limit": 0,
},
}
},
access_token=user1_tok,
)
self.assertEqual(channel.code, 200, channel.json_body)

# The list should include both rooms now because we're lazy-loading room members
self.assertListEqual(
list(channel.json_body["lists"]["foo-list"]["ops"]),
[
{
"op": "SYNC",
"range": [0, 1],
"room_ids": [room_id2, room_id1],
}
],
channel.json_body["lists"]["foo-list"],
)

@@ -946,7 +946,7 @@ def connect_client(


class TestHomeServer(HomeServer):
DATASTORE_CLASS = DataStore
DATASTORE_CLASS = DataStore # type: ignore[assignment]


def setup_test_homeserver(

@@ -125,15 +125,13 @@ async def mark_event_as_partial_state(
in this table).
"""
store = hs.get_datastores().main
# Use the store helper to insert into the database so the caches are busted
await store.store_partial_state_room(
room_id=room_id,
servers={hs.hostname},
device_lists_stream_id=0,
joined_via=hs.hostname,
await store.db_pool.simple_upsert(
table="partial_state_rooms",
keyvalues={"room_id": room_id},
values={},
insertion_values={"room_id": room_id},
)

# FIXME: Bust the cache
await store.db_pool.simple_insert(
table="partial_state_events",
values={
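Roughly, the `simple_upsert` call above amounts to the following SQL; this is my paraphrase, and with `values={}` there is nothing to update on conflict, so the row is inserted only if it is absent:

# Assumed SQL equivalent of the upsert into partial_state_rooms:
#   INSERT INTO partial_state_rooms (room_id) VALUES (?)
#   ON CONFLICT (room_id) DO NOTHING
# The follow-up simple_insert into partial_state_events then marks the
# specific event as having partial state.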
||||