Mirror of https://github.com/element-hq/synapse.git, synced 2025-12-07 01:20:16 +00:00

Compare commits
1 Commits
v1.23.1-mu ... erikj/rele

| Author | SHA1 | Date |
|---|---|---|
| | 0eaa6dd30e | |

Binary file not shown.
@@ -5,30 +5,53 @@ jobs:
      - image: docker:git
    steps:
      - checkout
      - setup_remote_docker
      - docker_prepare
      - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
      - docker_build:
          tag: -t matrixdotorg/synapse:v1.23.1
          tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
          platforms: linux/amd64
      - docker_build:
          tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
          platforms: linux/amd64,linux/arm/v7,linux/arm64

  dockerhubuploadlatest:
    docker:
      - image: docker:git
    steps:
      - checkout
      - setup_remote_docker
      - docker_prepare
      - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
      - docker_build:
          tag: -t matrixdotorg/synapse:latest
          platforms: linux/amd64
      - docker_build:
          tag: -t matrixdotorg/synapse:latest
          platforms: linux/amd64,linux/arm/v7,linux/arm64

workflows:
  build:
    jobs:
      - dockerhubuploadrelease
      - dockerhubuploadrelease:
          filters:
            tags:
              only: /v[0-9].[0-9]+.[0-9]+.*/
            branches:
              ignore: /.*/
      - dockerhubuploadlatest:
          filters:
            branches:
              only: master

commands:
  docker_prepare:
    description: Sets up a remote docker server, downloads the buildx cli plugin, and enables multiarch images
    description: Downloads the buildx cli plugin and enables multiarch images
    parameters:
      buildx_version:
        type: string
        default: "v0.4.1"
    steps:
      - setup_remote_docker:
          # 19.03.13 was the most recent available on circleci at the time of
          # writing.
          version: 19.03.13
      - run: apk add --no-cache curl
      - run: mkdir -vp ~/.docker/cli-plugins/ ~/dockercache
      - run: curl --silent -L "https://github.com/docker/buildx/releases/download/<< parameters.buildx_version >>/buildx-<< parameters.buildx_version >>.linux-amd64" > ~/.docker/cli-plugins/docker-buildx
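The `docker_prepare` command above downloads the buildx CLI plugin, but this excerpt cuts off before the remaining steps. Enabling multi-arch builds typically also requires marking the plugin executable and creating a builder; a sketch of those assumed follow-up steps, not taken from this config:

```sh
# Assumed follow-up steps; the actual config may differ
chmod a+x ~/.docker/cli-plugins/docker-buildx
docker buildx create --use --name multiarch
docker buildx inspect --bootstrap
```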
151 CHANGES.md
@@ -1,154 +1,3 @@

Synapse 1.23.1 (2020-12-09)
===========================

Due to the two security issues highlighted below, server administrators are
encouraged to update Synapse. We are not aware of these vulnerabilities being
exploited in the wild.

Security advisory
-----------------

The following issues are fixed in v1.23.1 and v1.24.0.

- There is a denial of service attack
  ([CVE-2020-26257](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26257))
  against the federation APIs in which future events will not be correctly sent
  to other servers over federation. This affects all servers that participate in
  open federation. (Fixed in [#8776](https://github.com/matrix-org/synapse/pull/8776)).

- Synapse may be affected by OpenSSL
  [CVE-2020-1971](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1971).
  Synapse administrators should ensure that they have the latest versions of
  the cryptography Python package installed.

To upgrade Synapse along with the cryptography package:

* Administrators using the [`matrix.org` Docker
  image](https://hub.docker.com/r/matrixdotorg/synapse/) or the [Debian/Ubuntu
  packages from
  `matrix.org`](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#matrixorg-packages)
  should ensure that they have version 1.24.0 or 1.23.1 installed: these images include
  the updated packages.
* Administrators who have [installed Synapse from
  source](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#installing-from-source)
  should upgrade the cryptography package within their virtualenv by running:
  ```sh
  <path_to_virtualenv>/bin/pip install 'cryptography>=3.3'
  ```
* Administrators who have installed Synapse from distribution packages should
  consult the information from their distributions.
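For the packaged installations listed above, the upgrade itself is the usual image or package update. A minimal sketch, assuming the matrix.org Docker image tag and Debian package name that appear elsewhere in this diff; adjust to your own deployment:

```sh
# Docker image users: pull the fixed tag and recreate the container
docker pull matrixdotorg/synapse:v1.23.1

# matrix.org Debian/Ubuntu package users
apt-get update && apt-get install matrix-synapse-py3
```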

Bugfixes
--------

- Fix a bug in some federation APIs which could lead to unexpected behaviour if different parameters were set in the URI and the request body. ([\#8776](https://github.com/matrix-org/synapse/issues/8776))


Internal Changes
----------------

- Add a maximum version for pysaml2 on Python 3.5. ([\#8898](https://github.com/matrix-org/synapse/issues/8898))

Synapse 1.23.0 (2020-11-18)
|
||||
===========================
|
||||
|
||||
This release changes the way structured logging is configured. See the [upgrade notes](UPGRADE.rst#upgrading-to-v1230) for details.
|
||||
|
||||
**Note**: We are aware of a trivially exploitable denial of service vulnerability in versions of Synapse prior to 1.20.0. Complete details will be disclosed on Monday, November 23rd. If you have not upgraded recently, please do so.
|
||||
|
||||
Bugfixes
|
||||
--------
|
||||
|
||||
- Fix a dependency versioning bug in the Dockerfile that prevented Synapse from starting. ([\#8767](https://github.com/matrix-org/synapse/issues/8767))
|
||||
|
||||
|
||||
Synapse 1.23.0rc1 (2020-11-13)
|
||||
==============================
|
||||
|
||||
Features
|
||||
--------
|
||||
|
||||
- Add a push rule that highlights when a jitsi conference is created in a room. ([\#8286](https://github.com/matrix-org/synapse/issues/8286))
|
||||
- Add an admin api to delete a single file or files that were not used for a defined time from server. Contributed by @dklimpel. ([\#8519](https://github.com/matrix-org/synapse/issues/8519))
|
||||
- Split admin API for reported events (`GET /_synapse/admin/v1/event_reports`) into detail and list endpoints. This is a breaking change to #8217 which was introduced in Synapse v1.21.0. Those who already use this API should check their scripts. Contributed by @dklimpel. ([\#8539](https://github.com/matrix-org/synapse/issues/8539))
|
||||
- Support generating structured logs via the standard logging configuration. ([\#8607](https://github.com/matrix-org/synapse/issues/8607), [\#8685](https://github.com/matrix-org/synapse/issues/8685))
|
||||
- Add an admin API to allow server admins to list users' pushers. Contributed by @dklimpel. ([\#8610](https://github.com/matrix-org/synapse/issues/8610), [\#8689](https://github.com/matrix-org/synapse/issues/8689))
|
||||
- Add an admin API `GET /_synapse/admin/v1/users/<user_id>/media` to get information about uploaded media. Contributed by @dklimpel. ([\#8647](https://github.com/matrix-org/synapse/issues/8647))
|
||||
- Add an admin API for local user media statistics. Contributed by @dklimpel. ([\#8700](https://github.com/matrix-org/synapse/issues/8700))
|
||||
- Add `displayname` to Shared-Secret Registration for admins. ([\#8722](https://github.com/matrix-org/synapse/issues/8722))
|
||||
|
||||
|
||||
Bugfixes
|
||||
--------
|
||||
|
||||
- Fix fetching of E2E cross signing keys over federation when only one of the master key and device signing key is cached already. ([\#8455](https://github.com/matrix-org/synapse/issues/8455))
|
||||
- Fix a bug where Synapse would blindly forward bad responses from federation to clients when retrieving profile information. ([\#8580](https://github.com/matrix-org/synapse/issues/8580))
|
||||
- Fix a bug where the account validity endpoint would silently fail if the user ID did not have an expiration time. It now returns a 400 error. ([\#8620](https://github.com/matrix-org/synapse/issues/8620))
|
||||
- Fix email notifications for invites without local state. ([\#8627](https://github.com/matrix-org/synapse/issues/8627))
|
||||
- Fix handling of invalid group IDs to return a 400 rather than log an exception and return a 500. ([\#8628](https://github.com/matrix-org/synapse/issues/8628))
|
||||
- Fix handling of User-Agent headers that are invalid UTF-8, which caused user agents of users to not get correctly recorded. ([\#8632](https://github.com/matrix-org/synapse/issues/8632))
|
||||
- Fix a bug in the `joined_rooms` admin API if the user has never joined any rooms. The bug was introduced, along with the API, in v1.21.0. ([\#8643](https://github.com/matrix-org/synapse/issues/8643))
|
||||
- Fix exception during handling multiple concurrent requests for remote media when using multiple media repositories. ([\#8682](https://github.com/matrix-org/synapse/issues/8682))
|
||||
- Fix bug that prevented Synapse from recovering after losing connection to the database. ([\#8726](https://github.com/matrix-org/synapse/issues/8726))
|
||||
- Fix bug where the `/_synapse/admin/v1/send_server_notice` API could send notices to non-notice rooms. ([\#8728](https://github.com/matrix-org/synapse/issues/8728))
|
||||
- Fix PostgreSQL port script fails when DB has no backfilled events. Broke in v1.21.0. ([\#8729](https://github.com/matrix-org/synapse/issues/8729))
|
||||
- Fix PostgreSQL port script to correctly handle foreign key constraints. Broke in v1.21.0. ([\#8730](https://github.com/matrix-org/synapse/issues/8730))
|
||||
- Fix PostgreSQL port script so that it can be run again after a failure. Broke in v1.21.0. ([\#8755](https://github.com/matrix-org/synapse/issues/8755))
|
||||
|
||||
|
||||
Improved Documentation
|
||||
----------------------
|
||||
|
||||
- Instructions for Azure AD in the OpenID Connect documentation. Contributed by peterk. ([\#8582](https://github.com/matrix-org/synapse/issues/8582))
|
||||
- Improve the sample configuration for single sign-on providers. ([\#8635](https://github.com/matrix-org/synapse/issues/8635))
|
||||
- Fix the filepath of Dex's example config and the link to Dex's Getting Started guide in the OpenID Connect docs. ([\#8657](https://github.com/matrix-org/synapse/issues/8657))
|
||||
- Note support for Python 3.9. ([\#8665](https://github.com/matrix-org/synapse/issues/8665))
|
||||
- Minor updates to docs on running tests. ([\#8666](https://github.com/matrix-org/synapse/issues/8666))
|
||||
- Interlink prometheus/grafana documentation. ([\#8667](https://github.com/matrix-org/synapse/issues/8667))
|
||||
- Notes on SSO logins and media_repository worker. ([\#8701](https://github.com/matrix-org/synapse/issues/8701))
|
||||
- Document experimental support for running multiple event persisters. ([\#8706](https://github.com/matrix-org/synapse/issues/8706))
|
||||
- Add information regarding the various sources of, and expected contributions to, Synapse's documentation to `CONTRIBUTING.md`. ([\#8714](https://github.com/matrix-org/synapse/issues/8714))
|
||||
- Migrate documentation `docs/admin_api/event_reports` to markdown. ([\#8742](https://github.com/matrix-org/synapse/issues/8742))
|
||||
- Add some helpful hints to the README for new Synapse developers. Contributed by @chagai95. ([\#8746](https://github.com/matrix-org/synapse/issues/8746))
|
||||
|
||||
|
||||
Internal Changes
|
||||
----------------
|
||||
|
||||
- Optimise `/createRoom` with multiple invited users. ([\#8559](https://github.com/matrix-org/synapse/issues/8559))
|
||||
- Implement and use an `@lru_cache` decorator. ([\#8595](https://github.com/matrix-org/synapse/issues/8595))
|
||||
- Don't instansiate Requester directly. ([\#8614](https://github.com/matrix-org/synapse/issues/8614))
|
||||
- Type hints for `RegistrationStore`. ([\#8615](https://github.com/matrix-org/synapse/issues/8615))
|
||||
- Change schema to support access tokens belonging to one user but granting access to another. ([\#8616](https://github.com/matrix-org/synapse/issues/8616))
|
||||
- Remove unused OPTIONS handlers. ([\#8621](https://github.com/matrix-org/synapse/issues/8621))
|
||||
- Run `mypy` as part of the lint.sh script. ([\#8633](https://github.com/matrix-org/synapse/issues/8633))
|
||||
- Correct Synapse's PyPI package name in the OpenID Connect installation instructions. ([\#8634](https://github.com/matrix-org/synapse/issues/8634))
|
||||
- Catch exceptions during initialization of `password_providers`. Contributed by Nicolai Søborg. ([\#8636](https://github.com/matrix-org/synapse/issues/8636))
|
||||
- Fix typos and spelling errors in the code. ([\#8639](https://github.com/matrix-org/synapse/issues/8639))
|
||||
- Reduce number of OpenTracing spans started. ([\#8640](https://github.com/matrix-org/synapse/issues/8640), [\#8668](https://github.com/matrix-org/synapse/issues/8668), [\#8670](https://github.com/matrix-org/synapse/issues/8670))
|
||||
- Add field `total` to device list in admin API. ([\#8644](https://github.com/matrix-org/synapse/issues/8644))
|
||||
- Add more type hints to the application services code. ([\#8655](https://github.com/matrix-org/synapse/issues/8655), [\#8693](https://github.com/matrix-org/synapse/issues/8693))
|
||||
- Tell Black to format code for Python 3.5. ([\#8664](https://github.com/matrix-org/synapse/issues/8664))
|
||||
- Don't pull event from DB when handling replication traffic. ([\#8669](https://github.com/matrix-org/synapse/issues/8669))
|
||||
- Abstract some invite-related code in preparation for landing knocking. ([\#8671](https://github.com/matrix-org/synapse/issues/8671), [\#8688](https://github.com/matrix-org/synapse/issues/8688))
|
||||
- Clarify representation of events in logfiles. ([\#8679](https://github.com/matrix-org/synapse/issues/8679))
|
||||
- Don't require `hiredis` package to be installed to run unit tests. ([\#8680](https://github.com/matrix-org/synapse/issues/8680))
|
||||
- Fix typing info on cache call signature to accept `on_invalidate`. ([\#8684](https://github.com/matrix-org/synapse/issues/8684))
|
||||
- Fail tests if they do not await coroutines. ([\#8690](https://github.com/matrix-org/synapse/issues/8690))
|
||||
- Improve start time by adding an index to `e2e_cross_signing_keys.stream_id`. ([\#8694](https://github.com/matrix-org/synapse/issues/8694))
|
||||
- Re-organize the structured logging code to separate the TCP transport handling from the JSON formatting. ([\#8697](https://github.com/matrix-org/synapse/issues/8697))
|
||||
- Use Python 3.8 in Docker images by default. ([\#8698](https://github.com/matrix-org/synapse/issues/8698))
|
||||
- Remove the "draft" status of the Room Details Admin API. ([\#8702](https://github.com/matrix-org/synapse/issues/8702))
|
||||
- Improve the error returned when a non-string displayname or avatar_url is used when updating a user's profile. ([\#8705](https://github.com/matrix-org/synapse/issues/8705))
|
||||
- Block attempts by clients to send server ACLs, or redactions of server ACLs, that would result in the local server being blocked from the room. ([\#8708](https://github.com/matrix-org/synapse/issues/8708))
|
||||
- Add metrics the allow the local sysadmin to track 3PID `/requestToken` requests. ([\#8712](https://github.com/matrix-org/synapse/issues/8712))
|
||||
- Consolidate duplicated lists of purged tables that are checked in tests. ([\#8713](https://github.com/matrix-org/synapse/issues/8713))
|
||||
- Add some `mdui:UIInfo` element examples for `saml2_config` in the homeserver config. ([\#8718](https://github.com/matrix-org/synapse/issues/8718))
|
||||
- Improve the error message returned when a remote server incorrectly sets the `Content-Type` header in response to a JSON request. ([\#8719](https://github.com/matrix-org/synapse/issues/8719))
|
||||
- Speed up repeated state resolutions on the same room by caching event ID to auth event ID lookups. ([\#8752](https://github.com/matrix-org/synapse/issues/8752))
|
||||
|
||||
|
||||
Synapse 1.22.1 (2020-10-30)
|
||||
===========================
|
||||
|
||||
|
||||
@@ -156,24 +156,6 @@ directory, you will need both a regular newsfragment *and* an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)

## Documentation

There is a growing amount of documentation located in the [docs](docs)
directory. This documentation is intended primarily for sysadmins running their
own Synapse instance, as well as developers interacting externally with
Synapse. [docs/dev](docs/dev) exists primarily to house documentation for
Synapse developers. [docs/admin_api](docs/admin_api) houses documentation
regarding Synapse's Admin API, which is used mostly by sysadmins and external
service developers.

New files added to both folders should be written in [Github-Flavoured
Markdown](https://guides.github.com/features/mastering-markdown/), and attempts
should be made to migrate existing documents to markdown where possible.

Some documentation also exists in [Synapse's Github
Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
contributed to by community authors.

## Sign off

In order to have a concrete record that your contribution is intentional
16 README.rst
@@ -261,22 +261,18 @@ to install using pip and a virtualenv::

    pip install -e ".[all,test]"

This will run a process of downloading and installing all the needed
dependencies into a virtual env. If any dependencies fail to install,
try installing the failing modules individually::
dependencies into a virtual env.

    pip install -e "module-name"

Once this is done, you may wish to run Synapse's unit tests to
check that everything is installed correctly::
Once this is done, you may wish to run Synapse's unit tests, to
check that everything is installed as it should be::

    python -m twisted.trial tests

This should end with a 'PASSED' result (note that exact numbers will
differ)::
This should end with a 'PASSED' result::

    Ran 1337 tests in 716.064s
    Ran 1266 tests in 643.930s

    PASSED (skips=15, successes=1322)
    PASSED (skips=15, successes=1251)
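If a full run is too slow while iterating, `trial` will also accept an individual module or test case. A minimal sketch, assuming the virtualenv is active; the module path is illustrative:

```sh
# Run a single test module instead of the whole suite (module path illustrative)
python -m twisted.trial tests.handlers.test_register
```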
Running the Integration Tests
=============================
@@ -87,7 +87,7 @@ then it should be modified based on the `structured logging documentation
<https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md>`_.

The ``structured`` and ``drains`` logging options are now deprecated and should
be replaced by standard logging configuration of ``handlers`` and ``formatters``.
be replaced by standard logging configuration of ``handlers`` and ``formatters`.

A future release of Synapse will make using ``structured: true`` an error.
1
changelog.d/8455.bugfix
Normal file
1
changelog.d/8455.bugfix
Normal file
@@ -0,0 +1 @@
|
||||
Fix fetching of E2E cross signing keys over federation when only one of the master key and device signing key is cached already.
|
||||
1
changelog.d/8519.feature
Normal file
1
changelog.d/8519.feature
Normal file
@@ -0,0 +1 @@
|
||||
Add an admin api to delete a single file or files were not used for a defined time from server. Contributed by @dklimpel.
|
||||
1
changelog.d/8539.feature
Normal file
1
changelog.d/8539.feature
Normal file
@@ -0,0 +1 @@
|
||||
Split admin API for reported events (`GET /_synapse/admin/v1/event_reports`) into detail and list endpoints. This is a breaking change to #8217 which was introduced in Synapse v1.21.0. Those who already use this API should check their scripts. Contributed by @dklimpel.
|
||||
1
changelog.d/8559.misc
Normal file
1
changelog.d/8559.misc
Normal file
@@ -0,0 +1 @@
|
||||
Optimise `/createRoom` with multiple invited users.
|
||||
1
changelog.d/8580.bugfix
Normal file
1
changelog.d/8580.bugfix
Normal file
@@ -0,0 +1 @@
|
||||
Fix a bug where Synapse would blindly forward bad responses from federation to clients when retrieving profile information.
|
||||
1
changelog.d/8582.doc
Normal file
1
changelog.d/8582.doc
Normal file
@@ -0,0 +1 @@
|
||||
Instructions for Azure AD in the OpenID Connect documentation. Contributed by peterk.
|
||||
1
changelog.d/8595.misc
Normal file
1
changelog.d/8595.misc
Normal file
@@ -0,0 +1 @@
|
||||
Implement and use an @lru_cache decorator.
|
||||
1
changelog.d/8607.feature
Normal file
1
changelog.d/8607.feature
Normal file
@@ -0,0 +1 @@
|
||||
Support generating structured logs via the standard logging configuration.
|
||||
1
changelog.d/8610.feature
Normal file
1
changelog.d/8610.feature
Normal file
@@ -0,0 +1 @@
|
||||
Add an admin APIs to allow server admins to list users' pushers. Contributed by @dklimpel.
|
||||
1
changelog.d/8614.misc
Normal file
1
changelog.d/8614.misc
Normal file
@@ -0,0 +1 @@
|
||||
Don't instansiate Requester directly.
|
||||
1
changelog.d/8615.misc
Normal file
1
changelog.d/8615.misc
Normal file
@@ -0,0 +1 @@
|
||||
Type hints for `RegistrationStore`.
|
||||
1
changelog.d/8616.misc
Normal file
1
changelog.d/8616.misc
Normal file
@@ -0,0 +1 @@
|
||||
Change schema to support access tokens belonging to one user but granting access to another.
|
||||
1
changelog.d/8620.bugfix
Normal file
1
changelog.d/8620.bugfix
Normal file
@@ -0,0 +1 @@
|
||||
Fix a bug where the account validity endpoint would silently fail if the user ID did not have an expiration time. It now returns a 400 error.
|
||||
1
changelog.d/8621.misc
Normal file
1
changelog.d/8621.misc
Normal file
@@ -0,0 +1 @@
|
||||
Remove unused OPTIONS handlers.
|
||||
1
changelog.d/8627.bugfix
Normal file
1
changelog.d/8627.bugfix
Normal file
@@ -0,0 +1 @@
|
||||
Fix email notifications for invites without local state.
|
||||
1
changelog.d/8628.bugfix
Normal file
1
changelog.d/8628.bugfix
Normal file
@@ -0,0 +1 @@
|
||||
Fix handling of invalid group IDs to return a 400 rather than log an exception and return a 500.
|
||||
1
changelog.d/8632.bugfix
Normal file
1
changelog.d/8632.bugfix
Normal file
@@ -0,0 +1 @@
|
||||
Fix handling of User-Agent headers that are invalid UTF-8, which caused user agents of users to not get correctly recorded.
|
||||
1
changelog.d/8633.misc
Normal file
1
changelog.d/8633.misc
Normal file
@@ -0,0 +1 @@
|
||||
Run `mypy` as part of the lint.sh script.
|
||||
1
changelog.d/8634.misc
Normal file
1
changelog.d/8634.misc
Normal file
@@ -0,0 +1 @@
|
||||
Correct Synapse's PyPI package name in the OpenID Connect installation instructions.
|
||||
1
changelog.d/8635.doc
Normal file
1
changelog.d/8635.doc
Normal file
@@ -0,0 +1 @@
|
||||
Improve the sample configuration for single sign-on providers.
|
||||
1
changelog.d/8639.misc
Normal file
1
changelog.d/8639.misc
Normal file
@@ -0,0 +1 @@
|
||||
Fix typos and spelling errors in the code.
|
||||
1
changelog.d/8640.misc
Normal file
1
changelog.d/8640.misc
Normal file
@@ -0,0 +1 @@
|
||||
Reduce number of OpenTracing spans started.
|
||||
1
changelog.d/8643.bugfix
Normal file
1
changelog.d/8643.bugfix
Normal file
@@ -0,0 +1 @@
|
||||
Fix a bug in the `joined_rooms` admin API if the user has never joined any rooms. The bug was introduced, along with the API, in v1.21.0.
|
||||
1
changelog.d/8644.misc
Normal file
1
changelog.d/8644.misc
Normal file
@@ -0,0 +1 @@
|
||||
Add field `total` to device list in admin API.
|
||||
1
changelog.d/8647.feature
Normal file
1
changelog.d/8647.feature
Normal file
@@ -0,0 +1 @@
|
||||
Add an admin API `GET /_synapse/admin/v1/users/<user_id>/media` to get information about uploaded media. Contributed by @dklimpel.
|
||||
1
changelog.d/8655.misc
Normal file
1
changelog.d/8655.misc
Normal file
@@ -0,0 +1 @@
|
||||
Add more type hints to the application services code.
|
||||
1
changelog.d/8657.doc
Normal file
1
changelog.d/8657.doc
Normal file
@@ -0,0 +1 @@
|
||||
Fix the filepath of Dex's example config and the link to Dex's Getting Started guide in the OpenID Connect docs.
|
||||
1
changelog.d/8664.misc
Normal file
1
changelog.d/8664.misc
Normal file
@@ -0,0 +1 @@
|
||||
Tell Black to format code for Python 3.5.
|
||||
1
changelog.d/8665.doc
Normal file
1
changelog.d/8665.doc
Normal file
@@ -0,0 +1 @@
|
||||
Note support for Python 3.9.
|
||||
1
changelog.d/8666.doc
Normal file
1
changelog.d/8666.doc
Normal file
@@ -0,0 +1 @@
|
||||
Minor updates to docs on running tests.
|
||||
1
changelog.d/8667.doc
Normal file
1
changelog.d/8667.doc
Normal file
@@ -0,0 +1 @@
|
||||
Interlink prometheus/grafana documentation.
|
||||
1
changelog.d/8668.misc
Normal file
1
changelog.d/8668.misc
Normal file
@@ -0,0 +1 @@
|
||||
Reduce number of OpenTracing spans started.
|
||||
1
changelog.d/8669.misc
Normal file
1
changelog.d/8669.misc
Normal file
@@ -0,0 +1 @@
|
||||
Don't pull event from DB when handling replication traffic.
|
||||
1
changelog.d/8670.misc
Normal file
1
changelog.d/8670.misc
Normal file
@@ -0,0 +1 @@
|
||||
Reduce number of OpenTracing spans started.
|
||||
1
changelog.d/8671.misc
Normal file
1
changelog.d/8671.misc
Normal file
@@ -0,0 +1 @@
|
||||
Abstract some invite-related code in preparation for landing knocking.
|
||||
1
changelog.d/8679.misc
Normal file
1
changelog.d/8679.misc
Normal file
@@ -0,0 +1 @@
|
||||
Clarify representation of events in logfiles.
|
||||
1
changelog.d/8680.misc
Normal file
1
changelog.d/8680.misc
Normal file
@@ -0,0 +1 @@
|
||||
Don't require `hiredis` package to be installed to run unit tests.
|
||||
1
changelog.d/8682.bugfix
Normal file
1
changelog.d/8682.bugfix
Normal file
@@ -0,0 +1 @@
|
||||
Fix exception during handling multiple concurrent requests for remote media when using multiple media repositories.
|
||||
1
changelog.d/8684.misc
Normal file
1
changelog.d/8684.misc
Normal file
@@ -0,0 +1 @@
|
||||
Fix typing info on cache call signature to accept `on_invalidate`.
|
||||
1
changelog.d/8685.feature
Normal file
1
changelog.d/8685.feature
Normal file
@@ -0,0 +1 @@
|
||||
Support generating structured logs via the standard logging configuration.
|
||||
1
changelog.d/8688.misc
Normal file
1
changelog.d/8688.misc
Normal file
@@ -0,0 +1 @@
|
||||
Abstract some invite-related code in preparation for landing knocking.
|
||||
1
changelog.d/8689.feature
Normal file
1
changelog.d/8689.feature
Normal file
@@ -0,0 +1 @@
|
||||
Add an admin APIs to allow server admins to list users' pushers. Contributed by @dklimpel.
|
||||
1
changelog.d/8690.misc
Normal file
1
changelog.d/8690.misc
Normal file
@@ -0,0 +1 @@
|
||||
Fail tests if they do not await coroutines.
|
||||
1
changelog.d/8693.misc
Normal file
1
changelog.d/8693.misc
Normal file
@@ -0,0 +1 @@
|
||||
Add more type hints to the application services code.
|
||||
@@ -1 +0,0 @@
|
||||
Fix multiarch docker image builds.
|
||||
12 debian/changelog (vendored)
@@ -1,15 +1,3 @@
matrix-synapse-py3 (1.23.1) stable; urgency=medium

  * New synapse release 1.23.1.

 -- Synapse Packaging team <packages@matrix.org>  Wed, 09 Dec 2020 10:40:39 +0000

matrix-synapse-py3 (1.23.0) stable; urgency=medium

  * New synapse release 1.23.0.

 -- Synapse Packaging team <packages@matrix.org>  Wed, 18 Nov 2020 11:41:28 +0000

matrix-synapse-py3 (1.22.1) stable; urgency=medium

  * New synapse release 1.22.1.
@@ -11,7 +11,7 @@
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
#

ARG PYTHON_VERSION=3.8
ARG PYTHON_VERSION=3.7

###
### Stage 0: builder
@@ -36,8 +36,7 @@ RUN pip install --prefix="/install" --no-warn-script-location \
        frozendict \
        jaeger-client \
        opentracing \
        # Match the version constraints of Synapse
        "prometheus_client>=0.4.0,<0.9.0" \
        prometheus-client \
        psycopg2 \
        pycparser \
        pyrsistent \
@@ -1,172 +0,0 @@
|
||||
# Show reported events
|
||||
|
||||
This API returns information about reported events.
|
||||
|
||||
The api is:
|
||||
```
|
||||
GET /_synapse/admin/v1/event_reports?from=0&limit=10
|
||||
```
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: see [README.rst](README.rst).
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
```json
|
||||
{
|
||||
"event_reports": [
|
||||
{
|
||||
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
|
||||
"id": 2,
|
||||
"reason": "foo",
|
||||
"score": -100,
|
||||
"received_ts": 1570897107409,
|
||||
"canonical_alias": "#alias1:matrix.org",
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"name": "Matrix HQ",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@foo:matrix.org"
|
||||
},
|
||||
{
|
||||
"event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
|
||||
"id": 3,
|
||||
"reason": "bar",
|
||||
"score": -100,
|
||||
"received_ts": 1598889612059,
|
||||
"canonical_alias": "#alias2:matrix.org",
|
||||
"room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
|
||||
"name": "Your room name here",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@bar:matrix.org"
|
||||
}
|
||||
],
|
||||
"next_token": 2,
|
||||
"total": 4
|
||||
}
|
||||
```
|
||||
|
||||
To paginate, check for `next_token` and if present, call the endpoint again with `from`
|
||||
set to the value of `next_token`. This will return a new page.
|
||||
|
||||
If the endpoint does not return a `next_token` then there are no more reports to
|
||||
paginate through.
|
||||
|
||||
**URL parameters:**
|
||||
|
||||
* `limit`: integer - Is optional but is used for pagination, denoting the maximum number
|
||||
of items to return in this call. Defaults to `100`.
|
||||
* `from`: integer - Is optional but used for pagination, denoting the offset in the
|
||||
returned results. This should be treated as an opaque value and not explicitly set to
|
||||
anything other than the return value of `next_token` from a previous call. Defaults to `0`.
|
||||
* `dir`: string - Direction of event report order. Whether to fetch the most recent
|
||||
first (`b`) or the oldest first (`f`). Defaults to `b`.
|
||||
* `user_id`: string - Is optional and filters to only return users with user IDs that
|
||||
contain this value. This is the user who reported the event and wrote the reason.
|
||||
* `room_id`: string - Is optional and filters to only return rooms with room IDs that
|
||||
contain this value.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
* `id`: integer - ID of event report.
|
||||
* `received_ts`: integer - The timestamp (in milliseconds since the unix epoch) when this
|
||||
report was sent.
|
||||
* `room_id`: string - The ID of the room in which the event being reported is located.
|
||||
* `name`: string - The name of the room.
|
||||
* `event_id`: string - The ID of the reported event.
|
||||
* `user_id`: string - This is the user who reported the event and wrote the reason.
|
||||
* `reason`: string - Comment made by the `user_id` in this report. May be blank.
|
||||
* `score`: integer - Content is reported based upon a negative score, where -100 is
|
||||
"most offensive" and 0 is "inoffensive".
|
||||
* `sender`: string - This is the ID of the user who sent the original message/event that
|
||||
was reported.
|
||||
* `canonical_alias`: string - The canonical alias of the room. `null` if the room does not
|
||||
have a canonical alias set.
|
||||
* `next_token`: integer - Indication for pagination. See above.
|
||||
* `total`: integer - Total number of event reports related to the query
|
||||
(`user_id` and `room_id`).
|
||||
|
||||
# Show details of a specific event report
|
||||
|
||||
This API returns information about a specific event report.
|
||||
|
||||
The api is:
|
||||
```
|
||||
GET /_synapse/admin/v1/event_reports/<report_id>
|
||||
```
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: see [README.rst](README.rst).
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
```jsonc
|
||||
{
|
||||
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
|
||||
"event_json": {
|
||||
"auth_events": [
|
||||
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
|
||||
"$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
|
||||
],
|
||||
"content": {
|
||||
"body": "matrix.org: This Week in Matrix",
|
||||
"format": "org.matrix.custom.html",
|
||||
"formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
|
||||
"msgtype": "m.notice"
|
||||
},
|
||||
"depth": 546,
|
||||
"hashes": {
|
||||
"sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
|
||||
},
|
||||
"origin": "matrix.org",
|
||||
"origin_server_ts": 1592291711430,
|
||||
"prev_events": [
|
||||
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
|
||||
],
|
||||
"prev_state": [],
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"signatures": {
|
||||
"matrix.org": {
|
||||
"ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
|
||||
}
|
||||
},
|
||||
"type": "m.room.message",
|
||||
"unsigned": {
|
||||
"age_ts": 1592291711430,
|
||||
}
|
||||
},
|
||||
"id": <report_id>,
|
||||
"reason": "foo",
|
||||
"score": -100,
|
||||
"received_ts": 1570897107409,
|
||||
"canonical_alias": "#alias1:matrix.org",
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"name": "Matrix HQ",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@foo:matrix.org"
|
||||
}
|
||||
```
|
||||
|
||||
**URL parameters:**
|
||||
|
||||
* `report_id`: string - The ID of the event report.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
* `id`: integer - ID of event report.
|
||||
* `received_ts`: integer - The timestamp (in milliseconds since the unix epoch) when this
|
||||
report was sent.
|
||||
* `room_id`: string - The ID of the room in which the event being reported is located.
|
||||
* `name`: string - The name of the room.
|
||||
* `event_id`: string - The ID of the reported event.
|
||||
* `user_id`: string - This is the user who reported the event and wrote the reason.
|
||||
* `reason`: string - Comment made by the `user_id` in this report. May be blank.
|
||||
* `score`: integer - Content is reported based upon a negative score, where -100 is
|
||||
"most offensive" and 0 is "inoffensive".
|
||||
* `sender`: string - This is the ID of the user who sent the original message/event that
|
||||
was reported.
|
||||
* `canonical_alias`: string - The canonical alias of the room. `null` if the room does not
|
||||
have a canonical alias set.
|
||||
* `event_json`: object - Details of the original event that was reported.
|
||||
165 docs/admin_api/event_reports.rst (Normal file)
@@ -0,0 +1,165 @@
|
||||
Show reported events
|
||||
====================
|
||||
|
||||
This API returns information about reported events.
|
||||
|
||||
The api is::
|
||||
|
||||
GET /_synapse/admin/v1/event_reports?from=0&limit=10
|
||||
|
||||
To use it, you will need to authenticate by providing an ``access_token`` for a
|
||||
server admin: see `README.rst <README.rst>`_.
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
.. code:: jsonc
|
||||
|
||||
{
|
||||
"event_reports": [
|
||||
{
|
||||
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
|
||||
"id": 2,
|
||||
"reason": "foo",
|
||||
"score": -100,
|
||||
"received_ts": 1570897107409,
|
||||
"canonical_alias": "#alias1:matrix.org",
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"name": "Matrix HQ",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@foo:matrix.org"
|
||||
},
|
||||
{
|
||||
"event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
|
||||
"id": 3,
|
||||
"reason": "bar",
|
||||
"score": -100,
|
||||
"received_ts": 1598889612059,
|
||||
"canonical_alias": "#alias2:matrix.org",
|
||||
"room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
|
||||
"name": "Your room name here",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@bar:matrix.org"
|
||||
}
|
||||
],
|
||||
"next_token": 2,
|
||||
"total": 4
|
||||
}
|
||||
|
||||
To paginate, check for ``next_token`` and if present, call the endpoint again
|
||||
with ``from`` set to the value of ``next_token``. This will return a new page.
|
||||
|
||||
If the endpoint does not return a ``next_token`` then there are no more
|
||||
reports to paginate through.
|
||||
|
||||
**URL parameters:**
|
||||
|
||||
- ``limit``: integer - Is optional but is used for pagination,
|
||||
denoting the maximum number of items to return in this call. Defaults to ``100``.
|
||||
- ``from``: integer - Is optional but used for pagination,
|
||||
denoting the offset in the returned results. This should be treated as an opaque value and
|
||||
not explicitly set to anything other than the return value of ``next_token`` from a previous call.
|
||||
Defaults to ``0``.
|
||||
- ``dir``: string - Direction of event report order. Whether to fetch the most recent first (``b``) or the
|
||||
oldest first (``f``). Defaults to ``b``.
|
||||
- ``user_id``: string - Is optional and filters to only return users with user IDs that contain this value.
|
||||
This is the user who reported the event and wrote the reason.
|
||||
- ``room_id``: string - Is optional and filters to only return rooms with room IDs that contain this value.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
- ``id``: integer - ID of event report.
|
||||
- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
|
||||
- ``room_id``: string - The ID of the room in which the event being reported is located.
|
||||
- ``name``: string - The name of the room.
|
||||
- ``event_id``: string - The ID of the reported event.
|
||||
- ``user_id``: string - This is the user who reported the event and wrote the reason.
|
||||
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
|
||||
- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
|
||||
- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
|
||||
- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
|
||||
- ``next_token``: integer - Indication for pagination. See above.
|
||||
- ``total``: integer - Total number of event reports related to the query (``user_id`` and ``room_id``).
|
||||
|
||||
Show details of a specific event report
|
||||
=======================================
|
||||
|
||||
This API returns information about a specific event report.
|
||||
|
||||
The api is::
|
||||
|
||||
GET /_synapse/admin/v1/event_reports/<report_id>
|
||||
|
||||
To use it, you will need to authenticate by providing an ``access_token`` for a
|
||||
server admin: see `README.rst <README.rst>`_.
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
.. code:: jsonc
|
||||
|
||||
{
|
||||
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
|
||||
"event_json": {
|
||||
"auth_events": [
|
||||
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
|
||||
"$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
|
||||
],
|
||||
"content": {
|
||||
"body": "matrix.org: This Week in Matrix",
|
||||
"format": "org.matrix.custom.html",
|
||||
"formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
|
||||
"msgtype": "m.notice"
|
||||
},
|
||||
"depth": 546,
|
||||
"hashes": {
|
||||
"sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
|
||||
},
|
||||
"origin": "matrix.org",
|
||||
"origin_server_ts": 1592291711430,
|
||||
"prev_events": [
|
||||
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
|
||||
],
|
||||
"prev_state": [],
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"signatures": {
|
||||
"matrix.org": {
|
||||
"ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
|
||||
}
|
||||
},
|
||||
"type": "m.room.message",
|
||||
"unsigned": {
|
||||
"age_ts": 1592291711430,
|
||||
}
|
||||
},
|
||||
"id": <report_id>,
|
||||
"reason": "foo",
|
||||
"score": -100,
|
||||
"received_ts": 1570897107409,
|
||||
"canonical_alias": "#alias1:matrix.org",
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"name": "Matrix HQ",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@foo:matrix.org"
|
||||
}
|
||||
|
||||
**URL parameters:**
|
||||
|
||||
- ``report_id``: string - The ID of the event report.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
- ``id``: integer - ID of event report.
|
||||
- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
|
||||
- ``room_id``: string - The ID of the room in which the event being reported is located.
|
||||
- ``name``: string - The name of the room.
|
||||
- ``event_id``: string - The ID of the reported event.
|
||||
- ``user_id``: string - This is the user who reported the event and wrote the reason.
|
||||
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
|
||||
- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
|
||||
- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
|
||||
- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
|
||||
- ``event_json``: object - Details of the original event that was reported.
|
||||
@@ -18,8 +18,7 @@ To fetch the nonce, you need to request one from the API::

Once you have the nonce, you can make a ``POST`` to the same URL with a JSON
body containing the nonce, username, password, whether they are an admin
(optional, False by default), and a HMAC digest of the content. Also you can
set the displayname (optional, ``username`` by default).
(optional, False by default), and a HMAC digest of the content.

As an example::

@@ -27,7 +26,6 @@ As an example::
    > {
        "nonce": "thisisanonce",
        "username": "pepper_roni",
        "displayname": "Pepper Roni",
        "password": "pizza",
        "admin": true,
        "mac": "mac_digest_here"
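The HMAC referred to above is keyed with the homeserver's registration shared secret. A minimal sketch of computing it with `openssl`; the NUL-separated field order (nonce, username, password, then an admin flag) follows the upstream registration documentation and should be treated as an assumption here, since this excerpt does not show it:

```sh
# Assumed field order: nonce, username, password, "admin"/"notadmin",
# joined by NUL bytes and keyed with registration_shared_secret.
nonce="thisisanonce"
printf '%s\0%s\0%s\0%s' "$nonce" "pepper_roni" "pizza" "admin" \
  | openssl dgst -sha1 -hmac "$REGISTRATION_SHARED_SECRET"
```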
@@ -265,10 +265,12 @@ Response:
Once the `next_token` parameter is no longer present, we know we've reached the
end of the list.

# Room Details API
# DRAFT: Room Details API

The Room Details admin API allows server admins to get all details of a room.

This API is still a draft and details might change!

The following fields are possible in the JSON response body:

* `room_id` - The ID of the room.
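The `next_token` pagination described above follows the same pattern across these admin APIs. A minimal sketch; the endpoint path is illustrative, and the admin access token is supplied as a bearer token as the other admin API docs in this diff describe:

```sh
# First page (endpoint path illustrative)
curl -H "Authorization: Bearer $ADMIN_ACCESS_TOKEN" \
    "https://homeserver.example/_synapse/admin/v1/rooms?limit=100"

# If the response contained "next_token": 100, fetch the next page with from=100
curl -H "Authorization: Bearer $ADMIN_ACCESS_TOKEN" \
    "https://homeserver.example/_synapse/admin/v1/rooms?from=100&limit=100"
```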
@@ -1,83 +0,0 @@
|
||||
# Users' media usage statistics
|
||||
|
||||
Returns information about all local media usage of users. Gives the
|
||||
possibility to filter them by time and user.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v1/statistics/users/media
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [README.rst](README.rst).
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"users": [
|
||||
{
|
||||
"displayname": "foo_user_0",
|
||||
"media_count": 2,
|
||||
"media_length": 134,
|
||||
"user_id": "@foo_user_0:test"
|
||||
},
|
||||
{
|
||||
"displayname": "foo_user_1",
|
||||
"media_count": 2,
|
||||
"media_length": 134,
|
||||
"user_id": "@foo_user_1:test"
|
||||
}
|
||||
],
|
||||
"next_token": 3,
|
||||
"total": 10
|
||||
}
|
||||
```
|
||||
|
||||
To paginate, check for `next_token` and if present, call the endpoint
|
||||
again with `from` set to the value of `next_token`. This will return a new page.
|
||||
|
||||
If the endpoint does not return a `next_token` then there are no more
|
||||
reports to paginate through.
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
|
||||
* `limit`: string representing a positive integer - Is optional but is
|
||||
used for pagination, denoting the maximum number of items to return
|
||||
in this call. Defaults to `100`.
|
||||
* `from`: string representing a positive integer - Is optional but used for pagination,
|
||||
denoting the offset in the returned results. This should be treated as an opaque value
|
||||
and not explicitly set to anything other than the return value of `next_token` from a
|
||||
previous call. Defaults to `0`.
|
||||
* `order_by` - string - The method in which to sort the returned list of users. Valid values are:
|
||||
- `user_id` - Users are ordered alphabetically by `user_id`. This is the default.
|
||||
- `displayname` - Users are ordered alphabetically by `displayname`.
|
||||
- `media_length` - Users are ordered by the total size of uploaded media in bytes.
|
||||
Smallest to largest.
|
||||
- `media_count` - Users are ordered by number of uploaded media. Smallest to largest.
|
||||
* `from_ts` - string representing a positive integer - Considers only
|
||||
files created at this timestamp or later. Unix timestamp in ms.
|
||||
* `until_ts` - string representing a positive integer - Considers only
|
||||
files created at this timestamp or earlier. Unix timestamp in ms.
|
||||
* `search_term` - string - Filter users by their user ID localpart **or** displayname.
|
||||
The search term can be found in any part of the string.
|
||||
Defaults to no filtering.
|
||||
* `dir` - string - Direction of order. Either `f` for forwards or `b` for backwards.
|
||||
Setting this value to `b` will reverse the above sort order. Defaults to `f`.
|
||||
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
* `users` - An array of objects, each containing information
|
||||
about the user and their local media. Objects contain the following fields:
|
||||
- `displayname` - string - Displayname of this user.
|
||||
- `media_count` - integer - Number of uploaded media by this user.
|
||||
- `media_length` - integer - Size of uploaded media in bytes by this user.
|
||||
- `user_id` - string - Fully-qualified user ID (ex. `@user:server.com`).
|
||||
* `next_token` - integer - Opaque value used for pagination. See above.
|
||||
* `total` - integer - Total number of users after filtering.
|
||||
@@ -205,7 +205,7 @@ GitHub is a bit special as it is not an OpenID Connect compliant provider, but
just a regular OAuth2 provider.

The [`/user` API endpoint](https://developer.github.com/v3/users/#get-the-authenticated-user)
can be used to retrieve information on the authenticated user. As the Synapse
can be used to retrieve information on the authenticated user. As the Synaspse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.
@@ -1560,28 +1560,6 @@ saml2_config:
        #description: ["My awesome SP", "en"]
        #name: ["Test SP", "en"]

        #ui_info:
        #  display_name:
        #    - lang: en
        #      text: "Display Name is the descriptive name of your service."
        #  description:
        #    - lang: en
        #      text: "Description should be a short paragraph explaining the purpose of the service."
        #  information_url:
        #    - lang: en
        #      text: "https://example.com/terms-of-service"
        #  privacy_statement_url:
        #    - lang: en
        #      text: "https://example.com/privacy-policy"
        #  keywords:
        #    - lang: en
        #      text: ["Matrix", "Element"]
        #  logo:
        #    - lang: en
        #      text: "https://example.com/logo.svg"
        #      width: "200"
        #      height: "80"

        #organization:
        #  name: Example com
        #  display_name:
@@ -37,10 +37,10 @@ synapse master process to be started as part of the `matrix-synapse.target`
target.
1. For each worker process to be enabled, run `systemctl enable
   matrix-synapse-worker@<worker_name>.service`. For each `<worker_name>`, there
   should be a corresponding configuration file.
   should be a corresponding configuration file
   `/etc/matrix-synapse/workers/<worker_name>.yaml`.
1. Start all the synapse processes with `systemctl start matrix-synapse.target`.
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`.
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`/
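Concretely, the enable/start sequence from that list looks something like the following; the worker name is illustrative and must correspond to a file under `/etc/matrix-synapse/workers/`:

```sh
# Enable one unit per configured worker (worker name illustrative)
systemctl enable matrix-synapse-worker@federation_reader.service

# Start everything now, and again automatically on boot
systemctl start matrix-synapse.target
systemctl enable matrix-synapse.target
```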
## Usage
@@ -116,7 +116,7 @@ public internet; it has no authentication and is unencrypted.
### Worker configuration

In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unique name for the worker
application (`worker_app`), and you should specify a unqiue name for the worker
(`worker_name`). The currently available worker applications are listed below.
You must also specify the HTTP replication endpoint that it should talk to on
the main synapse process. `worker_replication_host` should specify the host of
@@ -262,9 +262,6 @@ using):
Note that a HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.

Ensure that all SSO logins go to a single process (usually the main process).
For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530).

#### Load balancing

@@ -305,7 +302,7 @@ Additionally, there is *experimental* support for moving writing of specific
streams (such as events) off of the main process to a particular worker. (This
is only supported with Redis-based replication.)

Currently supported streams are `events` and `typing`.
Currently support streams are `events` and `typing`.

To enable this, the worker must have a HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. For example to
@@ -322,18 +319,6 @@ stream_writers:
    events: event_persister1
```

The `events` stream also experimentally supports having multiple writers, where
work is sharded between them by room ID. Note that you *must* restart all worker
instances when adding or removing event persisters. An example `stream_writers`
configuration with multiple writers:

```yaml
stream_writers:
    events:
        - event_persister1
        - event_persister2
```

#### Background tasks

There is also *experimental* support for moving background tasks to a separate
@@ -423,8 +408,6 @@ and you must configure a single instance to run the background tasks, e.g.:
    media_instance_running_background_jobs: "media-repository-1"
```

Note that if a reverse proxy is used , then `/_matrix/media/` must be routed for both inbound client and federation requests (if they are handled separately).

### `synapse.app.user_dir`

Handles searches in the user directory. It can handle REST endpoints matching
1 mypy.ini
@@ -13,7 +13,6 @@ files =
    synapse/config,
    synapse/event_auth.py,
    synapse/events/builder.py,
    synapse/events/validator.py,
    synapse/events/spamcheck.py,
    synapse/federation,
    synapse/handlers/_base.py,
227 scripts-dev/release.py (Normal file)
@@ -0,0 +1,227 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2020 The Matrix.org Foundation C.I.C.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.

import subprocess
import sys
from typing import Optional

import click
import git
from packaging import version
from redbaron import RedBaron


def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
    """Find the branch/ref, looking first locally then in the remote."""
    if ref_name in repo.refs:
        return repo.refs[ref_name]
    elif ref_name in repo.remote().refs:
        return repo.remote().refs[ref_name]
    else:
        return None


def update_branch(repo: git.Repo):
    """Ensure the branch is up to date if it has a remote."""
    if repo.active_branch.tracking_branch():
        repo.git.merge(repo.active_branch.tracking_branch().name)


@click.command()
def release():
    """Main release command."""

    # Make sure we're in a git repo.
    try:
        repo = git.Repo()
    except git.InvalidGitRepositoryError:
        raise click.ClickException("Not in Synapse repo.")

    if repo.is_dirty():
        raise click.ClickException("Uncommitted changes exist.")

    click.secho("Updating git repo...")
    repo.remote().fetch()

    # Parse the AST and load the `__version__` node so that we can edit it
    # later.
    with open("synapse/__init__.py") as f:
        red = RedBaron(f.read())

    version_node = None
    for node in red:
        if node.type != "assignment":
            continue

        if node.target.type != "name":
            continue

        if node.target.value != "__version__":
            continue

        version_node = node
        break

    if not version_node:
        print("Failed to find '__version__' definition in synapse/__init__.py")
        sys.exit(1)

    # Parse the current version.
    current_version = version.parse(version_node.value.value.strip('"'))
    assert isinstance(current_version, version.Version)

    # Figure out what sort of release we're doing and calculate the new version.
    rc = click.confirm("RC", default=True)
    if current_version.pre:
        # If the current version is an RC we don't need to bump any of the
        # version numbers (other than the RC number).
        base_version = "{}.{}.{}".format(
            current_version.major, current_version.minor, current_version.micro,
        )

        if rc:
            new_version = "{}.{}.{}rc{}".format(
                current_version.major,
                current_version.minor,
                current_version.micro,
                current_version.pre[1] + 1,
            )
        else:
            new_version = base_version
    else:
        # If this is a new release cycle then we need to know if it's a major
        # version bump or a hotfix.
        release_type = click.prompt(
            "Release type",
            type=click.Choice(("major", "hotfix")),
            show_choices=True,
            default="major",
        )

        if release_type == "major":
            base_version = new_version = "{}.{}.{}".format(
                current_version.major, current_version.minor + 1, 0,
            )
            if rc:
                new_version = "{}.{}.{}rc1".format(
                    current_version.major, current_version.minor + 1, 0,
                )

        else:
            base_version = new_version = "{}.{}.{}".format(
                current_version.major, current_version.minor, current_version.micro + 1,
            )
            if rc:
                new_version = "{}.{}.{}rc1".format(
                    current_version.major,
                    current_version.minor,
                    current_version.micro + 1,
                )

    # Confirm the calculated version is OK.
    if not click.confirm(f"Create new version: {new_version}?", default=True):
        click.get_current_context().abort()

    # Switch to the release branch.
    release_branch_name = f"release-v{base_version}"
    release_branch = find_ref(repo, release_branch_name)
    if release_branch:
        if release_branch.is_remote():
            # If the release branch only exists on the remote we check it out
            # locally.
            repo.git.checkout(release_branch_name)
            release_branch = repo.active_branch
    else:
        # If the branch doesn't exist we create one. We ask which branch it
        # should be based off, defaulting to sensible values depending on the
        # release type.
        if current_version.is_prerelease:
            default = release_branch_name
        elif release_type == "major":
            default = "develop"
        else:
            default = "master"

        branch_name = click.prompt(
            "Which branch should the release be based off of?", default=default
        )

        base_branch = find_ref(repo, branch_name)
        if not base_branch:
            print(f"Could not find base branch {branch_name}!")
            click.get_current_context().abort()

        # Check out the base branch and ensure it's up to date.
        repo.head.reference = base_branch
        repo.head.reset(index=True, working_tree=True)
        if not base_branch.is_remote():
            update_branch(repo)

        # Create the new release branch.
        release_branch = repo.create_head(release_branch_name, commit=base_branch)

    # Switch to the release branch and ensure it's up to date.
    repo.git.checkout(release_branch_name)
    update_branch(repo)

    # Update the `__version__` variable and write it back to the file.
    version_node.value = '"' + new_version + '"'
    with open("synapse/__init__.py", "w") as f:
        f.write(red.dumps())

    # Generate changelogs.
    subprocess.run("python3 -m towncrier", shell=True)

    # Generate debian changelogs if it's not an RC.
    if not rc:
        subprocess.run(
            f'dch -M -v {new_version} "New synapse release {new_version}."', shell=True
        )
        subprocess.run('dch -M -r -D stable ""', shell=True)

    # Show the user the changes and ask if they want to edit the changelog.
    repo.git.add("-u")
    subprocess.run("git diff --cached", shell=True)

    if click.confirm("Edit changelog?", default=False):
        click.edit(filename="CHANGES.md")

    # Commit the changes.
    repo.git.add("-u")
    repo.git.commit(f"-m {new_version}")

    # We give the option to bail here in case the user wants to make sure things
    # are OK before pushing.
    if not click.confirm("Push branch to github?", default=True):
        print("")
        print("Run when ready to push:")
        print("")
        print(f"\tgit push -u {repo.remote().name} {repo.active_branch.name}")
        print("")
        sys.exit(0)

    # Otherwise, push and open the changelog in the browser.
    repo.git.push(f"-u {repo.remote().name} {repo.active_branch.name}")

    click.launch(
        f"https://github.com/matrix-org/synapse/blob/{repo.active_branch.name}/CHANGES.md"
    )


if __name__ == "__main__":
    release()
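As a side note, the RC-bump rule implemented above can be illustrated with a tiny
standalone snippet (the version string is hypothetical; only the `packaging`
dependency already imported by the script is assumed):

    from packaging import version

    current = version.parse("1.24.0rc1")
    assert current.pre == ("rc", 1)

    next_rc = "{}.{}.{}rc{}".format(
        current.major, current.minor, current.micro, current.pre[1] + 1,
    )
    final = "{}.{}.{}".format(current.major, current.minor, current.micro)

    print(next_rc)  # 1.24.0rc2 -- another RC in the same cycle
    print(final)    # 1.24.0    -- the non-RC release of that cycle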
@@ -22,7 +22,7 @@ import logging
|
||||
import sys
|
||||
import time
|
||||
import traceback
|
||||
from typing import Dict, Optional, Set
|
||||
from typing import Optional
|
||||
|
||||
import yaml
|
||||
|
||||
@@ -40,7 +40,6 @@ from synapse.storage.database import DatabasePool, make_conn
|
||||
from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
|
||||
from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpdateStore
|
||||
from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
|
||||
from synapse.storage.databases.main.end_to_end_keys import EndToEndKeyBackgroundStore
|
||||
from synapse.storage.databases.main.events_bg_updates import (
|
||||
EventsBackgroundUpdatesStore,
|
||||
)
|
||||
@@ -175,7 +174,6 @@ class Store(
|
||||
StateBackgroundUpdateStore,
|
||||
MainStateBackgroundUpdateStore,
|
||||
UserDirectoryBackgroundUpdateStore,
|
||||
EndToEndKeyBackgroundStore,
|
||||
StatsStore,
|
||||
):
|
||||
def execute(self, f, *args, **kwargs):
|
||||
@@ -292,34 +290,6 @@ class Porter(object):
|
||||
|
||||
return table, already_ported, total_to_port, forward_chunk, backward_chunk
|
||||
|
||||
async def get_table_constraints(self) -> Dict[str, Set[str]]:
|
||||
"""Returns a map of tables that have foreign key constraints to tables they depend on.
|
||||
"""
|
||||
|
||||
def _get_constraints(txn):
|
||||
# We can pull the information about foreign key constraints out from
|
||||
# the postgres schema tables.
|
||||
sql = """
|
||||
SELECT DISTINCT
|
||||
tc.table_name,
|
||||
ccu.table_name AS foreign_table_name
|
||||
FROM
|
||||
information_schema.table_constraints AS tc
|
||||
INNER JOIN information_schema.constraint_column_usage AS ccu
|
||||
USING (table_schema, constraint_name)
|
||||
WHERE tc.constraint_type = 'FOREIGN KEY';
|
||||
"""
|
||||
txn.execute(sql)
|
||||
|
||||
results = {}
|
||||
for table, foreign_table in txn:
|
||||
results.setdefault(table, set()).add(foreign_table)
|
||||
return results
|
||||
|
||||
return await self.postgres_store.db_pool.runInteraction(
|
||||
"get_table_constraints", _get_constraints
|
||||
)
|
||||
|
||||
async def handle_table(
|
||||
self, table, postgres_size, table_size, forward_chunk, backward_chunk
|
||||
):
|
||||
@@ -619,18 +589,7 @@ class Porter(object):
|
||||
"create_port_table", create_port_table
|
||||
)
|
||||
|
||||
# Step 2. Set up sequences
|
||||
#
|
||||
# We do this before porting the tables so that even if we fail half
# way through, the postgres DB always has sequences that are greater
# than their respective tables. If we don't, then creating the
|
||||
# `DataStore` object will fail due to the inconsistency.
|
||||
self.progress.set_state("Setting up sequence generators")
|
||||
await self._setup_state_group_id_seq()
|
||||
await self._setup_user_id_seq()
|
||||
await self._setup_events_stream_seqs()
|
||||
|
||||
# Step 3. Get tables.
|
||||
# Step 2. Get tables.
|
||||
self.progress.set_state("Fetching tables")
|
||||
sqlite_tables = await self.sqlite_store.db_pool.simple_select_onecol(
|
||||
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
|
||||
@@ -645,7 +604,7 @@ class Porter(object):
|
||||
tables = set(sqlite_tables) & set(postgres_tables)
|
||||
logger.info("Found %d tables", len(tables))
|
||||
|
||||
# Step 4. Figure out what still needs copying
|
||||
# Step 3. Figure out what still needs copying
|
||||
self.progress.set_state("Checking on port progress")
|
||||
setup_res = await make_deferred_yieldable(
|
||||
defer.gatherResults(
|
||||
@@ -658,43 +617,21 @@ class Porter(object):
|
||||
consumeErrors=True,
|
||||
)
|
||||
)
|
||||
# Map from table name to args passed to `handle_table`, i.e. a tuple
|
||||
# of: `postgres_size`, `table_size`, `forward_chunk`, `backward_chunk`.
|
||||
tables_to_port_info_map = {r[0]: r[1:] for r in setup_res}
|
||||
|
||||
# Step 5. Do the copying.
|
||||
#
|
||||
# This is slightly convoluted as we need to ensure tables are ported
|
||||
# in the correct order due to foreign key constraints.
|
||||
# Step 4. Do the copying.
|
||||
self.progress.set_state("Copying to postgres")
|
||||
|
||||
constraints = await self.get_table_constraints()
|
||||
tables_ported = set() # type: Set[str]
|
||||
|
||||
while tables_to_port_info_map:
|
||||
# Pulls out all tables that are still to be ported and which
|
||||
# only depend on tables that are already ported (if any).
|
||||
tables_to_port = [
|
||||
table
|
||||
for table in tables_to_port_info_map
|
||||
if not constraints.get(table, set()) - tables_ported
|
||||
]
|
||||
|
||||
await make_deferred_yieldable(
|
||||
defer.gatherResults(
|
||||
[
|
||||
run_in_background(
|
||||
self.handle_table,
|
||||
table,
|
||||
*tables_to_port_info_map.pop(table),
|
||||
)
|
||||
for table in tables_to_port
|
||||
],
|
||||
consumeErrors=True,
|
||||
)
|
||||
await make_deferred_yieldable(
|
||||
defer.gatherResults(
|
||||
[run_in_background(self.handle_table, *res) for res in setup_res],
|
||||
consumeErrors=True,
|
||||
)
|
||||
)
|
||||
|
||||
tables_ported.update(tables_to_port)
|
||||
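The loop above schedules tables in dependency order. A rough standalone sketch of
that scheduling (the table names and constraint map here are invented for
illustration; the real map comes from get_table_constraints()):

    # Each table maps to the set of tables it has foreign keys to.
    constraints = {
        "event_json": {"events"},
        "events": {"rooms"},
        "rooms": set(),
    }

    to_port = set(constraints)
    ported = set()

    while to_port:
        # Tables whose dependencies (if any) have all been ported already.
        ready = [t for t in to_port if not constraints.get(t, set()) - ported]
        assert ready, "circular foreign key constraints"
        for table in ready:
            print("porting", table)  # the real code runs handle_table() concurrently
        ported.update(ready)
        to_port -= set(ready)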
# Step 5. Set up sequences
|
||||
self.progress.set_state("Setting up sequence generators")
|
||||
await self._setup_state_group_id_seq()
|
||||
await self._setup_user_id_seq()
|
||||
await self._setup_events_stream_seqs()
|
||||
|
||||
self.progress.done()
|
||||
except Exception as e:
|
||||
@@ -853,62 +790,45 @@ class Porter(object):
|
||||
|
||||
return done, remaining + done
|
||||
|
||||
async def _setup_state_group_id_seq(self):
|
||||
curr_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
|
||||
table="state_groups", keyvalues={}, retcol="MAX(id)", allow_none=True
|
||||
)
|
||||
|
||||
if not curr_id:
|
||||
return
|
||||
|
||||
def _setup_state_group_id_seq(self):
|
||||
def r(txn):
|
||||
txn.execute("SELECT MAX(id) FROM state_groups")
|
||||
curr_id = txn.fetchone()[0]
|
||||
if not curr_id:
|
||||
return
|
||||
next_id = curr_id + 1
|
||||
txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
|
||||
|
||||
await self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
|
||||
|
||||
async def _setup_user_id_seq(self):
|
||||
curr_id = await self.sqlite_store.db_pool.runInteraction(
|
||||
"setup_user_id_seq", find_max_generated_user_id_localpart
|
||||
)
|
||||
return self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
|
||||
|
||||
def _setup_user_id_seq(self):
|
||||
def r(txn):
|
||||
next_id = curr_id + 1
|
||||
next_id = find_max_generated_user_id_localpart(txn) + 1
|
||||
txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
|
||||
|
||||
return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
|
||||
|
||||
async def _setup_events_stream_seqs(self):
|
||||
"""Set the event stream sequences to the correct values.
|
||||
"""
|
||||
|
||||
# We get called before we've ported the events table, so we need to
|
||||
# fetch the current positions from the SQLite store.
|
||||
curr_forward_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
|
||||
table="events", keyvalues={}, retcol="MAX(stream_ordering)", allow_none=True
|
||||
)
|
||||
|
||||
curr_backward_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
|
||||
table="events",
|
||||
keyvalues={},
|
||||
retcol="MAX(-MIN(stream_ordering), 1)",
|
||||
allow_none=True,
|
||||
)
|
||||
|
||||
def _setup_events_stream_seqs_set_pos(txn):
|
||||
if curr_forward_id:
|
||||
def _setup_events_stream_seqs(self):
|
||||
def r(txn):
|
||||
txn.execute("SELECT MAX(stream_ordering) FROM events")
|
||||
curr_id = txn.fetchone()[0]
|
||||
if curr_id:
|
||||
next_id = curr_id + 1
|
||||
txn.execute(
|
||||
"ALTER SEQUENCE events_stream_seq RESTART WITH %s",
|
||||
(curr_forward_id + 1,),
|
||||
"ALTER SEQUENCE events_stream_seq RESTART WITH %s", (next_id,)
|
||||
)
|
||||
|
||||
txn.execute(
|
||||
"ALTER SEQUENCE events_backfill_stream_seq RESTART WITH %s",
|
||||
(curr_backward_id + 1,),
|
||||
)
|
||||
txn.execute("SELECT -MIN(stream_ordering) FROM events")
|
||||
curr_id = txn.fetchone()[0]
|
||||
if curr_id:
|
||||
next_id = curr_id + 1
|
||||
txn.execute(
|
||||
"ALTER SEQUENCE events_backfill_stream_seq RESTART WITH %s",
|
||||
(next_id,),
|
||||
)
|
||||
|
||||
return await self.postgres_store.db_pool.runInteraction(
|
||||
"_setup_events_stream_seqs", _setup_events_stream_seqs_set_pos,
|
||||
return self.postgres_store.db_pool.runInteraction(
|
||||
"_setup_events_stream_seqs", r
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -48,7 +48,7 @@ try:
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
__version__ = "1.23.1"
|
||||
__version__ = "1.22.1"
|
||||
|
||||
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
|
||||
# We import here so that we don't have to install a bunch of deps when
|
||||
|
||||
@@ -49,6 +49,7 @@ def register_sighup(func, *args, **kwargs):
|
||||
|
||||
Args:
|
||||
func (function): Function to be called when sent a SIGHUP signal.
|
||||
Will be called with a single default argument, the homeserver.
|
||||
*args, **kwargs: args and kwargs to be passed to the target function.
|
||||
"""
|
||||
_sighup_callbacks.append((func, args, kwargs))
|
||||
@@ -250,13 +251,13 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
|
||||
sdnotify(b"RELOADING=1")
|
||||
|
||||
for i, args, kwargs in _sighup_callbacks:
|
||||
i(*args, **kwargs)
|
||||
i(hs, *args, **kwargs)
|
||||
|
||||
sdnotify(b"READY=1")
|
||||
|
||||
signal.signal(signal.SIGHUP, handle_sighup)
|
||||
|
||||
register_sighup(refresh_certificate, hs)
|
||||
register_sighup(refresh_certificate)
|
||||
|
||||
# Load the certificate from disk.
|
||||
refresh_certificate(hs)
|
||||
|
||||
@@ -271,28 +271,6 @@ class SAML2Config(Config):
|
||||
#description: ["My awesome SP", "en"]
|
||||
#name: ["Test SP", "en"]
|
||||
|
||||
#ui_info:
|
||||
# display_name:
|
||||
# - lang: en
|
||||
# text: "Display Name is the descriptive name of your service."
|
||||
# description:
|
||||
# - lang: en
|
||||
# text: "Description should be a short paragraph explaining the purpose of the service."
|
||||
# information_url:
|
||||
# - lang: en
|
||||
# text: "https://example.com/terms-of-service"
|
||||
# privacy_statement_url:
|
||||
# - lang: en
|
||||
# text: "https://example.com/privacy-policy"
|
||||
# keywords:
|
||||
# - lang: en
|
||||
# text: ["Matrix", "Element"]
|
||||
# logo:
|
||||
# - lang: en
|
||||
# text: "https://example.com/logo.svg"
|
||||
# width: "200"
|
||||
# height: "80"
|
||||
|
||||
#organization:
|
||||
# name: Example com
|
||||
# display_name:
|
||||
|
||||
@@ -13,26 +13,20 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from typing import Union
|
||||
|
||||
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.api.room_versions import EventFormatVersions
|
||||
from synapse.config.homeserver import HomeServerConfig
|
||||
from synapse.events import EventBase
|
||||
from synapse.events.builder import EventBuilder
|
||||
from synapse.events.utils import validate_canonicaljson
|
||||
from synapse.federation.federation_server import server_matches_acl_event
|
||||
from synapse.types import EventID, RoomID, UserID
|
||||
|
||||
|
||||
class EventValidator:
|
||||
def validate_new(self, event: EventBase, config: HomeServerConfig):
|
||||
def validate_new(self, event, config):
|
||||
"""Validates the event has roughly the right format
|
||||
|
||||
Args:
|
||||
event: The event to validate.
|
||||
config: The homeserver's configuration.
|
||||
event (FrozenEvent): The event to validate.
|
||||
config (Config): The homeserver's configuration.
|
||||
"""
|
||||
self.validate_builder(event)
|
||||
|
||||
@@ -82,18 +76,12 @@ class EventValidator:
|
||||
if event.type == EventTypes.Retention:
|
||||
self._validate_retention(event)
|
||||
|
||||
if event.type == EventTypes.ServerACL:
|
||||
if not server_matches_acl_event(config.server_name, event):
|
||||
raise SynapseError(
|
||||
400, "Can't create an ACL event that denies the local server"
|
||||
)
|
||||
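To make the ServerACL check above concrete: a hypothetical m.room.server_acl
content that would be rejected on a server named example.org, because an empty
allow list denies every server, including our own:

    denied_everywhere = {
        "allow": [],              # nothing matches, so example.org is denied too
        "allow_ip_literals": False,
    }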
|
||||
def _validate_retention(self, event: EventBase):
|
||||
def _validate_retention(self, event):
|
||||
"""Checks that an event that defines the retention policy for a room respects the
|
||||
format enforced by the spec.
|
||||
|
||||
Args:
|
||||
event: The event to validate.
|
||||
event (FrozenEvent): The event to validate.
|
||||
"""
|
||||
if not event.is_state():
|
||||
raise SynapseError(code=400, msg="must be a state event")
|
||||
@@ -128,10 +116,13 @@ class EventValidator:
|
||||
errcode=Codes.BAD_JSON,
|
||||
)
|
||||
|
||||
def validate_builder(self, event: Union[EventBase, EventBuilder]):
|
||||
def validate_builder(self, event):
|
||||
"""Validates that the builder/event has roughly the right format. Only
|
||||
checks values that we expect a proto event to have, rather than all the
|
||||
fields an event would have
|
||||
|
||||
Args:
|
||||
event (EventBuilder|FrozenEvent)
|
||||
"""
|
||||
|
||||
strings = ["room_id", "sender", "type"]
|
||||
|
||||
@@ -49,7 +49,6 @@ from synapse.federation.federation_base import FederationBase, event_from_pdu_js
|
||||
from synapse.federation.persistence import TransactionActions
|
||||
from synapse.federation.units import Edu, Transaction
|
||||
from synapse.http.endpoint import parse_server_name
|
||||
from synapse.http.servlet import assert_params_in_dict
|
||||
from synapse.logging.context import (
|
||||
make_deferred_yieldable,
|
||||
nested_logging_context,
|
||||
@@ -392,7 +391,7 @@ class FederationServer(FederationBase):
|
||||
TRANSACTION_CONCURRENCY_LIMIT,
|
||||
)
|
||||
|
||||
async def on_room_state_request(
|
||||
async def on_context_state_request(
|
||||
self, origin: str, room_id: str, event_id: str
|
||||
) -> Tuple[int, Dict[str, Any]]:
|
||||
origin_host, _ = parse_server_name(origin)
|
||||
@@ -515,12 +514,11 @@ class FederationServer(FederationBase):
|
||||
return {"event": ret_pdu.get_pdu_json(time_now)}
|
||||
|
||||
async def on_send_join_request(
|
||||
self, origin: str, content: JsonDict
|
||||
self, origin: str, content: JsonDict, room_id: str
|
||||
) -> Dict[str, Any]:
|
||||
logger.debug("on_send_join_request: content: %s", content)
|
||||
|
||||
assert_params_in_dict(content, ["room_id"])
|
||||
room_version = await self.store.get_room_version(content["room_id"])
|
||||
room_version = await self.store.get_room_version(room_id)
|
||||
pdu = event_from_pdu_json(content, room_version)
|
||||
|
||||
origin_host, _ = parse_server_name(origin)
|
||||
@@ -549,11 +547,12 @@ class FederationServer(FederationBase):
|
||||
time_now = self._clock.time_msec()
|
||||
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
|
||||
|
||||
async def on_send_leave_request(self, origin: str, content: JsonDict) -> dict:
|
||||
async def on_send_leave_request(
|
||||
self, origin: str, content: JsonDict, room_id: str
|
||||
) -> dict:
|
||||
logger.debug("on_send_leave_request: content: %s", content)
|
||||
|
||||
assert_params_in_dict(content, ["room_id"])
|
||||
room_version = await self.store.get_room_version(content["room_id"])
|
||||
room_version = await self.store.get_room_version(room_id)
|
||||
pdu = event_from_pdu_json(content, room_version)
|
||||
|
||||
origin_host, _ = parse_server_name(origin)
|
||||
@@ -749,8 +748,12 @@ class FederationServer(FederationBase):
|
||||
)
|
||||
return ret
|
||||
|
||||
async def on_exchange_third_party_invite_request(self, event_dict: Dict):
|
||||
ret = await self.handler.on_exchange_third_party_invite_request(event_dict)
|
||||
async def on_exchange_third_party_invite_request(
|
||||
self, room_id: str, event_dict: Dict
|
||||
):
|
||||
ret = await self.handler.on_exchange_third_party_invite_request(
|
||||
room_id, event_dict
|
||||
)
|
||||
return ret
|
||||
|
||||
async def check_server_matches_acl(self, server_name: str, room_id: str):
|
||||
|
||||
@@ -440,13 +440,13 @@ class FederationEventServlet(BaseFederationServlet):
|
||||
|
||||
|
||||
class FederationStateV1Servlet(BaseFederationServlet):
|
||||
PATH = "/state/(?P<room_id>[^/]*)/?"
|
||||
PATH = "/state/(?P<context>[^/]*)/?"
|
||||
|
||||
# This is when someone asks for all data for a given room.
|
||||
async def on_GET(self, origin, content, query, room_id):
|
||||
return await self.handler.on_room_state_request(
|
||||
# This is when someone asks for all data for a given context.
|
||||
async def on_GET(self, origin, content, query, context):
|
||||
return await self.handler.on_context_state_request(
|
||||
origin,
|
||||
room_id,
|
||||
context,
|
||||
parse_string_from_args(query, "event_id", None, required=False),
|
||||
)
|
||||
|
||||
@@ -463,16 +463,16 @@ class FederationStateIdsServlet(BaseFederationServlet):
|
||||
|
||||
|
||||
class FederationBackfillServlet(BaseFederationServlet):
|
||||
PATH = "/backfill/(?P<room_id>[^/]*)/?"
|
||||
PATH = "/backfill/(?P<context>[^/]*)/?"
|
||||
|
||||
async def on_GET(self, origin, content, query, room_id):
|
||||
async def on_GET(self, origin, content, query, context):
|
||||
versions = [x.decode("ascii") for x in query[b"v"]]
|
||||
limit = parse_integer_from_args(query, "limit", None)
|
||||
|
||||
if not limit:
|
||||
return 400, {"error": "Did not include limit param"}
|
||||
|
||||
return await self.handler.on_backfill_request(origin, room_id, versions, limit)
|
||||
return await self.handler.on_backfill_request(origin, context, versions, limit)
|
||||
|
||||
|
||||
class FederationQueryServlet(BaseFederationServlet):
|
||||
@@ -487,9 +487,9 @@ class FederationQueryServlet(BaseFederationServlet):
|
||||
|
||||
|
||||
class FederationMakeJoinServlet(BaseFederationServlet):
|
||||
PATH = "/make_join/(?P<room_id>[^/]*)/(?P<user_id>[^/]*)"
|
||||
PATH = "/make_join/(?P<context>[^/]*)/(?P<user_id>[^/]*)"
|
||||
|
||||
async def on_GET(self, origin, _content, query, room_id, user_id):
|
||||
async def on_GET(self, origin, _content, query, context, user_id):
|
||||
"""
|
||||
Args:
|
||||
origin (unicode): The authenticated server_name of the calling server
|
||||
@@ -511,16 +511,16 @@ class FederationMakeJoinServlet(BaseFederationServlet):
|
||||
supported_versions = ["1"]
|
||||
|
||||
content = await self.handler.on_make_join_request(
|
||||
origin, room_id, user_id, supported_versions=supported_versions
|
||||
origin, context, user_id, supported_versions=supported_versions
|
||||
)
|
||||
return 200, content
|
||||
|
||||
|
||||
class FederationMakeLeaveServlet(BaseFederationServlet):
|
||||
PATH = "/make_leave/(?P<room_id>[^/]*)/(?P<user_id>[^/]*)"
|
||||
PATH = "/make_leave/(?P<context>[^/]*)/(?P<user_id>[^/]*)"
|
||||
|
||||
async def on_GET(self, origin, content, query, room_id, user_id):
|
||||
content = await self.handler.on_make_leave_request(origin, room_id, user_id)
|
||||
async def on_GET(self, origin, content, query, context, user_id):
|
||||
content = await self.handler.on_make_leave_request(origin, context, user_id)
|
||||
return 200, content
|
||||
|
||||
|
||||
@@ -528,7 +528,7 @@ class FederationV1SendLeaveServlet(BaseFederationServlet):
|
||||
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
content = await self.handler.on_send_leave_request(origin, content)
|
||||
content = await self.handler.on_send_leave_request(origin, content, room_id)
|
||||
return 200, (200, content)
|
||||
|
||||
|
||||
@@ -538,43 +538,43 @@ class FederationV2SendLeaveServlet(BaseFederationServlet):
|
||||
PREFIX = FEDERATION_V2_PREFIX
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
content = await self.handler.on_send_leave_request(origin, content)
|
||||
content = await self.handler.on_send_leave_request(origin, content, room_id)
|
||||
return 200, content
|
||||
|
||||
|
||||
class FederationEventAuthServlet(BaseFederationServlet):
|
||||
PATH = "/event_auth/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/event_auth/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
async def on_GET(self, origin, content, query, room_id, event_id):
|
||||
return await self.handler.on_event_auth(origin, room_id, event_id)
|
||||
async def on_GET(self, origin, content, query, context, event_id):
|
||||
return await self.handler.on_event_auth(origin, context, event_id)
|
||||
|
||||
|
||||
class FederationV1SendJoinServlet(BaseFederationServlet):
|
||||
PATH = "/send_join/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/send_join/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
# TODO(paul): assert that room_id/event_id parsed from path actually
|
||||
async def on_PUT(self, origin, content, query, context, event_id):
|
||||
# TODO(paul): assert that context/event_id parsed from path actually
|
||||
# match those given in content
|
||||
content = await self.handler.on_send_join_request(origin, content)
|
||||
content = await self.handler.on_send_join_request(origin, content, context)
|
||||
return 200, (200, content)
|
||||
|
||||
|
||||
class FederationV2SendJoinServlet(BaseFederationServlet):
|
||||
PATH = "/send_join/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/send_join/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
PREFIX = FEDERATION_V2_PREFIX
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
# TODO(paul): assert that room_id/event_id parsed from path actually
|
||||
async def on_PUT(self, origin, content, query, context, event_id):
|
||||
# TODO(paul): assert that context/event_id parsed from path actually
|
||||
# match those given in content
|
||||
content = await self.handler.on_send_join_request(origin, content)
|
||||
content = await self.handler.on_send_join_request(origin, content, context)
|
||||
return 200, content
|
||||
|
||||
|
||||
class FederationV1InviteServlet(BaseFederationServlet):
|
||||
PATH = "/invite/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/invite/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
async def on_PUT(self, origin, content, query, context, event_id):
|
||||
# We don't get a room version, so we have to assume it's EITHER v1 or
|
||||
# v2. This is "fine" as the only difference between V1 and V2 is the
|
||||
# state resolution algorithm, and we don't use that for processing
|
||||
@@ -589,12 +589,12 @@ class FederationV1InviteServlet(BaseFederationServlet):
|
||||
|
||||
|
||||
class FederationV2InviteServlet(BaseFederationServlet):
|
||||
PATH = "/invite/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/invite/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
PREFIX = FEDERATION_V2_PREFIX
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
# TODO(paul): assert that room_id/event_id parsed from path actually
|
||||
async def on_PUT(self, origin, content, query, context, event_id):
|
||||
# TODO(paul): assert that context/event_id parsed from path actually
|
||||
# match those given in content
|
||||
|
||||
room_version = content["room_version"]
|
||||
@@ -616,7 +616,9 @@ class FederationThirdPartyInviteExchangeServlet(BaseFederationServlet):
|
||||
PATH = "/exchange_third_party_invite/(?P<room_id>[^/]*)"
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id):
|
||||
content = await self.handler.on_exchange_third_party_invite_request(content)
|
||||
content = await self.handler.on_exchange_third_party_invite_request(
|
||||
room_id, content
|
||||
)
|
||||
return 200, content
|
||||
|
||||
|
||||
|
||||
@@ -181,15 +181,10 @@ class AuthHandler(BaseHandler):
|
||||
# better way to break the loop
|
||||
account_handler = ModuleApi(hs, self)
|
||||
|
||||
self.password_providers = []
|
||||
for module, config in hs.config.password_providers:
|
||||
try:
|
||||
self.password_providers.append(
|
||||
module(config=config, account_handler=account_handler)
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error("Error while initializing %r: %s", module, e)
|
||||
raise
|
||||
self.password_providers = [
|
||||
module(config=config, account_handler=account_handler)
|
||||
for module, config in hs.config.password_providers
|
||||
]
|
||||
|
||||
logger.info("Extra password_providers: %r", self.password_providers)
|
||||
|
||||
|
||||
@@ -55,7 +55,6 @@ from synapse.events import EventBase
|
||||
from synapse.events.snapshot import EventContext
|
||||
from synapse.events.validator import EventValidator
|
||||
from synapse.handlers._base import BaseHandler
|
||||
from synapse.http.servlet import assert_params_in_dict
|
||||
from synapse.logging.context import (
|
||||
make_deferred_yieldable,
|
||||
nested_logging_context,
|
||||
@@ -2687,7 +2686,7 @@ class FederationHandler(BaseHandler):
|
||||
)
|
||||
|
||||
async def on_exchange_third_party_invite_request(
|
||||
self, event_dict: JsonDict
|
||||
self, room_id: str, event_dict: JsonDict
|
||||
) -> None:
|
||||
"""Handle an exchange_third_party_invite request from a remote server
|
||||
|
||||
@@ -2695,11 +2694,12 @@ class FederationHandler(BaseHandler):
|
||||
into a normal m.room.member invite.
|
||||
|
||||
Args:
|
||||
event_dict: Dictionary containing the event body.
|
||||
room_id: The ID of the room.
|
||||
|
||||
event_dict (dict[str, Any]): Dictionary containing the event body.
|
||||
|
||||
"""
|
||||
assert_params_in_dict(event_dict, ["room_id"])
|
||||
room_version = await self.store.get_room_version_id(event_dict["room_id"])
|
||||
room_version = await self.store.get_room_version_id(room_id)
|
||||
|
||||
# NB: event_dict has a particular specced format we might need to fudge
|
||||
# if we change event formats too much.
|
||||
|
||||
@@ -1138,9 +1138,6 @@ class EventCreationHandler:
|
||||
if original_event.room_id != event.room_id:
|
||||
raise SynapseError(400, "Cannot redact event from a different room")
|
||||
|
||||
if original_event.type == EventTypes.ServerACL:
|
||||
raise AuthError(403, "Redacting server ACL events is not permitted")
|
||||
|
||||
prev_state_ids = await context.get_prev_state_ids()
|
||||
auth_events_ids = self.auth.compute_auth_events(
|
||||
event, prev_state_ids, for_verification=True
|
||||
|
||||
@@ -189,9 +189,7 @@ class ProfileHandler(BaseHandler):
|
||||
)
|
||||
|
||||
if not isinstance(new_displayname, str):
|
||||
raise SynapseError(
|
||||
400, "'displayname' must be a string", errcode=Codes.INVALID_PARAM
|
||||
)
|
||||
raise SynapseError(400, "Invalid displayname")
|
||||
|
||||
if len(new_displayname) > MAX_DISPLAYNAME_LEN:
|
||||
raise SynapseError(
|
||||
@@ -275,9 +273,7 @@ class ProfileHandler(BaseHandler):
|
||||
)
|
||||
|
||||
if not isinstance(new_avatar_url, str):
|
||||
raise SynapseError(
|
||||
400, "'avatar_url' must be a string", errcode=Codes.INVALID_PARAM
|
||||
)
|
||||
raise SynapseError(400, "Invalid displayname")
|
||||
|
||||
if len(new_avatar_url) > MAX_AVATAR_URL_LEN:
|
||||
raise SynapseError(
|
||||
|
||||
@@ -1063,19 +1063,13 @@ def check_content_type_is_json(headers):
|
||||
"""
|
||||
c_type = headers.getRawHeaders(b"Content-Type")
|
||||
if c_type is None:
|
||||
raise RequestSendFailed(
|
||||
RuntimeError("No Content-Type header received from remote server"),
|
||||
can_retry=False,
|
||||
)
|
||||
raise RequestSendFailed(RuntimeError("No Content-Type header"), can_retry=False)
|
||||
|
||||
c_type = c_type[0].decode("ascii") # only the first header
|
||||
val, options = cgi.parse_header(c_type)
|
||||
if val != "application/json":
|
||||
raise RequestSendFailed(
|
||||
RuntimeError(
|
||||
"Remote server sent Content-Type header of '%s', not 'application/json'"
|
||||
% c_type,
|
||||
),
|
||||
RuntimeError("Content-Type not application/json: was '%s'" % c_type),
|
||||
can_retry=False,
|
||||
)
|
||||
|
||||
|
||||
@@ -502,16 +502,6 @@ build_info.labels(
|
||||
|
||||
last_ticked = time.time()
|
||||
|
||||
# 3PID send info
|
||||
threepid_send_requests = Histogram(
|
||||
"synapse_threepid_send_requests_with_tries",
|
||||
documentation="Number of requests for a 3pid token by try count. Note if"
|
||||
" there is a request with try count of 4, then there would have been one"
|
||||
" each for 1, 2 and 3",
|
||||
buckets=(1, 2, 3, 4, 5, 10),
|
||||
labelnames=("type", "reason"),
|
||||
)
|
||||
|
||||
|
||||
class ReactorLastSeenMetric:
|
||||
def collect(self):
|
||||
|
||||
@@ -498,30 +498,6 @@ BASE_APPEND_UNDERRIDE_RULES = [
|
||||
],
|
||||
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
|
||||
},
|
||||
{
|
||||
"rule_id": "global/underride/.im.vector.jitsi",
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "type",
|
||||
"pattern": "im.vector.modular.widgets",
|
||||
"_id": "_type_modular_widgets",
|
||||
},
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "content.type",
|
||||
"pattern": "jitsi",
|
||||
"_id": "_content_type_jitsi",
|
||||
},
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "state_key",
|
||||
"pattern": "*",
|
||||
"_id": "_is_state_event",
|
||||
},
|
||||
],
|
||||
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
|
||||
},
|
||||
]
|
||||
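For context, a hypothetical state event that the .im.vector.jitsi underride rule
above would match (it satisfies all three conditions: the event type, a
content.type of "jitsi", and a state_key matching the "*" pattern):

    jitsi_widget_event = {
        "type": "im.vector.modular.widgets",
        "state_key": "jitsi_widget_1",       # any state_key satisfies "*"
        "content": {"type": "jitsi", "url": "https://meet.example.com/MyRoom"},
    }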
|
||||
|
||||
|
||||
@@ -72,10 +72,6 @@ REQUIREMENTS = [
|
||||
# prom-client has a history of breaking backwards compatibility between
|
||||
# minor versions (https://github.com/prometheus/client_python/issues/317),
|
||||
# so we also pin the minor version.
|
||||
#
|
||||
# Note that we replicate these constraints in the Synapse Dockerfile while
|
||||
# pre-installing dependencies. If these constraints are updated here, the
|
||||
# same change should be made in the Dockerfile.
|
||||
"prometheus_client>=0.4.0,<0.9.0",
|
||||
# we use attr.validators.deep_iterable, which arrived in 19.1.0 (Note:
|
||||
# Fedora 31 only has 19.1, so if we want to upgrade we should wait until 33
|
||||
@@ -99,11 +95,7 @@ CONDITIONAL_REQUIREMENTS = {
|
||||
# python 3.5.2, as per https://github.com/itamarst/eliot/issues/418
|
||||
'eliot<1.8.0;python_version<"3.5.3"',
|
||||
],
|
||||
"saml2": [
|
||||
# pysaml2 6.4.0 is incompatible with Python 3.5 (see https://github.com/IdentityPython/pysaml2/issues/749)
|
||||
"pysaml2>=4.5.0,<6.4.0;python_version<'3.6'",
|
||||
"pysaml2>=4.5.0;python_version>='3.6'",
|
||||
],
|
||||
"saml2": ["pysaml2>=4.5.0"],
|
||||
"oidc": ["authlib>=0.14.0"],
|
||||
"systemd": ["systemd-python>=231"],
|
||||
"url_preview": ["lxml>=3.5.0"],
|
||||
|
||||
@@ -47,7 +47,6 @@ from synapse.rest.admin.rooms import (
|
||||
ShutdownRoomRestServlet,
|
||||
)
|
||||
from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
|
||||
from synapse.rest.admin.statistics import UserMediaStatisticsRestServlet
|
||||
from synapse.rest.admin.users import (
|
||||
AccountValidityRenewServlet,
|
||||
DeactivateAccountRestServlet,
|
||||
@@ -228,7 +227,6 @@ def register_servlets(hs, http_server):
|
||||
DeviceRestServlet(hs).register(http_server)
|
||||
DevicesRestServlet(hs).register(http_server)
|
||||
DeleteDevicesRestServlet(hs).register(http_server)
|
||||
UserMediaStatisticsRestServlet(hs).register(http_server)
|
||||
EventReportDetailRestServlet(hs).register(http_server)
|
||||
EventReportsRestServlet(hs).register(http_server)
|
||||
PushersRestServlet(hs).register(http_server)
|
||||
|
||||
@@ -1,122 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2020 Dirk Klimpel
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Tuple
|
||||
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.http.servlet import RestServlet, parse_integer, parse_string
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
|
||||
from synapse.storage.databases.main.stats import UserSortOrder
|
||||
from synapse.types import JsonDict
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class UserMediaStatisticsRestServlet(RestServlet):
|
||||
"""
|
||||
Get statistics about uploaded media by users.
|
||||
"""
|
||||
|
||||
PATTERNS = admin_patterns("/statistics/users/media$")
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
self.hs = hs
|
||||
self.auth = hs.get_auth()
|
||||
self.store = hs.get_datastore()
|
||||
|
||||
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
|
||||
await assert_requester_is_admin(self.auth, request)
|
||||
|
||||
order_by = parse_string(
|
||||
request, "order_by", default=UserSortOrder.USER_ID.value
|
||||
)
|
||||
if order_by not in (
|
||||
UserSortOrder.MEDIA_LENGTH.value,
|
||||
UserSortOrder.MEDIA_COUNT.value,
|
||||
UserSortOrder.USER_ID.value,
|
||||
UserSortOrder.DISPLAYNAME.value,
|
||||
):
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Unknown value for order_by: %s" % (order_by,),
|
||||
errcode=Codes.INVALID_PARAM,
|
||||
)
|
||||
|
||||
start = parse_integer(request, "from", default=0)
|
||||
if start < 0:
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Query parameter from must be a string representing a positive integer.",
|
||||
errcode=Codes.INVALID_PARAM,
|
||||
)
|
||||
|
||||
limit = parse_integer(request, "limit", default=100)
|
||||
if limit < 0:
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Query parameter limit must be a string representing a positive integer.",
|
||||
errcode=Codes.INVALID_PARAM,
|
||||
)
|
||||
|
||||
from_ts = parse_integer(request, "from_ts", default=0)
|
||||
if from_ts < 0:
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Query parameter from_ts must be a string representing a positive integer.",
|
||||
errcode=Codes.INVALID_PARAM,
|
||||
)
|
||||
|
||||
until_ts = parse_integer(request, "until_ts")
|
||||
if until_ts is not None:
|
||||
if until_ts < 0:
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Query parameter until_ts must be a string representing a positive integer.",
|
||||
errcode=Codes.INVALID_PARAM,
|
||||
)
|
||||
if until_ts <= from_ts:
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Query parameter until_ts must be greater than from_ts.",
|
||||
errcode=Codes.INVALID_PARAM,
|
||||
)
|
||||
|
||||
search_term = parse_string(request, "search_term")
|
||||
if search_term == "":
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Query parameter search_term cannot be an empty string.",
|
||||
errcode=Codes.INVALID_PARAM,
|
||||
)
|
||||
|
||||
direction = parse_string(request, "dir", default="f")
|
||||
if direction not in ("f", "b"):
|
||||
raise SynapseError(
|
||||
400, "Unknown direction: %s" % (direction,), errcode=Codes.INVALID_PARAM
|
||||
)
|
||||
|
||||
users_media, total = await self.store.get_users_media_usage_paginate(
|
||||
start, limit, from_ts, until_ts, order_by, direction, search_term
|
||||
)
|
||||
ret = {"users": users_media, "total": total}
|
||||
if (start + limit) < total:
|
||||
ret["next_token"] = start + len(users_media)
|
||||
|
||||
return 200, ret
|
||||
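As a usage illustration for the servlet above (an assumption for illustration, not
part of the change): a request against a homeserver assumed to listen on
localhost:8008, authenticated with an admin access token, using the query
parameters parsed by on_GET:

    import requests

    resp = requests.get(
        "http://localhost:8008/_synapse/admin/v1/statistics/users/media",
        params={"order_by": "media_length", "dir": "b", "from": 0, "limit": 10},
        headers={"Authorization": "Bearer <admin_access_token>"},
    )
    # Expected shape: {"users": [...], "total": <int>}, plus "next_token" when
    # more rows remain.
    print(resp.json())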
@@ -412,7 +412,6 @@ class UserRegisterServlet(RestServlet):
|
||||
|
||||
admin = body.get("admin", None)
|
||||
user_type = body.get("user_type", None)
|
||||
displayname = body.get("displayname", None)
|
||||
|
||||
if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
|
||||
raise SynapseError(400, "Invalid user type")
|
||||
@@ -449,7 +448,6 @@ class UserRegisterServlet(RestServlet):
|
||||
password_hash=password_hash,
|
||||
admin=bool(admin),
|
||||
user_type=user_type,
|
||||
default_display_name=displayname,
|
||||
by_admin=True,
|
||||
)
|
||||
|
||||
|
||||
@@ -38,7 +38,6 @@ from synapse.http.servlet import (
|
||||
parse_json_object_from_request,
|
||||
parse_string,
|
||||
)
|
||||
from synapse.metrics import threepid_send_requests
|
||||
from synapse.push.mailer import Mailer
|
||||
from synapse.util.msisdn import phone_number_to_msisdn
|
||||
from synapse.util.stringutils import assert_valid_client_secret, random_string
|
||||
@@ -144,10 +143,6 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
|
||||
# Wrap the session id in a JSON object
|
||||
ret = {"sid": sid}
|
||||
|
||||
threepid_send_requests.labels(type="email", reason="password_reset").observe(
|
||||
send_attempt
|
||||
)
|
||||
|
||||
return 200, ret
|
||||
|
||||
|
||||
@@ -416,10 +411,6 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
|
||||
# Wrap the session id in a JSON object
|
||||
ret = {"sid": sid}
|
||||
|
||||
threepid_send_requests.labels(type="email", reason="add_threepid").observe(
|
||||
send_attempt
|
||||
)
|
||||
|
||||
return 200, ret
|
||||
|
||||
|
||||
@@ -490,10 +481,6 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
|
||||
next_link,
|
||||
)
|
||||
|
||||
threepid_send_requests.labels(type="msisdn", reason="add_threepid").observe(
|
||||
send_attempt
|
||||
)
|
||||
|
||||
return 200, ret
|
||||
|
||||
|
||||
|
||||
@@ -45,7 +45,6 @@ from synapse.http.servlet import (
|
||||
parse_json_object_from_request,
|
||||
parse_string,
|
||||
)
|
||||
from synapse.metrics import threepid_send_requests
|
||||
from synapse.push.mailer import Mailer
|
||||
from synapse.util.msisdn import phone_number_to_msisdn
|
||||
from synapse.util.ratelimitutils import FederationRateLimiter
|
||||
@@ -164,10 +163,6 @@ class EmailRegisterRequestTokenRestServlet(RestServlet):
|
||||
# Wrap the session id in a JSON object
|
||||
ret = {"sid": sid}
|
||||
|
||||
threepid_send_requests.labels(type="email", reason="register").observe(
|
||||
send_attempt
|
||||
)
|
||||
|
||||
return 200, ret
|
||||
|
||||
|
||||
@@ -239,10 +234,6 @@ class MsisdnRegisterRequestTokenRestServlet(RestServlet):
|
||||
next_link,
|
||||
)
|
||||
|
||||
threepid_send_requests.labels(type="msisdn", reason="register").observe(
|
||||
send_attempt
|
||||
)
|
||||
|
||||
return 200, ret
|
||||
|
||||
|
||||
|
||||
@@ -119,7 +119,7 @@ class ServerNoticesManager:
|
||||
# manages to invite the system user to a room, that doesn't make it
|
||||
# the server notices room.
|
||||
user_ids = await self._store.get_users_in_room(room.room_id)
|
||||
if len(user_ids) <= 2 and self.server_notices_mxid in user_ids:
|
||||
if self.server_notices_mxid in user_ids:
|
||||
# we found a room which our user shares with the system notice
|
||||
# user
|
||||
logger.info(
|
||||
|
||||
@@ -88,18 +88,13 @@ def make_pool(
|
||||
"""Get the connection pool for the database.
|
||||
"""
|
||||
|
||||
# By default enable `cp_reconnect`. We need to fiddle with db_args in case
|
||||
# someone has explicitly set `cp_reconnect`.
|
||||
db_args = dict(db_config.config.get("args", {}))
|
||||
db_args.setdefault("cp_reconnect", True)
|
||||
|
||||
return adbapi.ConnectionPool(
|
||||
db_config.config["name"],
|
||||
cp_reactor=reactor,
|
||||
cp_openfun=lambda conn: engine.on_new_connection(
|
||||
LoggingDatabaseConnection(conn, engine, "on_new_connection")
|
||||
),
|
||||
**db_args,
|
||||
**db_config.config.get("args", {}),
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -24,7 +24,7 @@ from twisted.enterprise.adbapi import Connection
|
||||
|
||||
from synapse.logging.opentracing import log_kv, set_tag, trace
|
||||
from synapse.storage._base import SQLBaseStore, db_to_json
|
||||
from synapse.storage.database import DatabasePool, make_in_list_sql_clause
|
||||
from synapse.storage.database import make_in_list_sql_clause
|
||||
from synapse.storage.types import Cursor
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util import json_encoder
|
||||
@@ -33,7 +33,6 @@ from synapse.util.iterutils import batch_iter
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.handlers.e2e_keys import SignatureListItem
|
||||
from synapse.server import HomeServer
|
||||
|
||||
|
||||
@attr.s(slots=True)
|
||||
@@ -48,20 +47,7 @@ class DeviceKeyLookupResult:
|
||||
keys = attr.ib(type=Optional[JsonDict])
|
||||
|
||||
|
||||
class EndToEndKeyBackgroundStore(SQLBaseStore):
|
||||
def __init__(self, database: DatabasePool, db_conn: Connection, hs: "HomeServer"):
|
||||
super().__init__(database, db_conn, hs)
|
||||
|
||||
self.db_pool.updates.register_background_index_update(
|
||||
"e2e_cross_signing_keys_idx",
|
||||
index_name="e2e_cross_signing_keys_stream_idx",
|
||||
table="e2e_cross_signing_keys",
|
||||
columns=["stream_id"],
|
||||
unique=True,
|
||||
)
|
||||
|
||||
|
||||
class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore):
|
||||
class EndToEndKeyWorkerStore(SQLBaseStore):
|
||||
async def get_e2e_device_keys_for_federation_query(
|
||||
self, user_id: str
|
||||
) -> Tuple[int, List[JsonDict]]:
|
||||
|
||||
@@ -26,7 +26,6 @@ from synapse.storage.databases.main.events_worker import EventsWorkerStore
|
||||
from synapse.storage.databases.main.signatures import SignatureWorkerStore
|
||||
from synapse.types import Collection
|
||||
from synapse.util.caches.descriptors import cached
|
||||
from synapse.util.caches.lrucache import LruCache
|
||||
from synapse.util.iterutils import batch_iter
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -41,11 +40,6 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
|
||||
self._delete_old_forward_extrem_cache, 60 * 60 * 1000
|
||||
)
|
||||
|
||||
# Cache of event ID to list of auth event IDs and their depths.
|
||||
self._event_auth_cache = LruCache(
|
||||
500000, "_event_auth_cache", size_callback=len
|
||||
) # type: LruCache[str, List[Tuple[str, int]]]
|
||||
|
||||
async def get_auth_chain(
|
||||
self, event_ids: Collection[str], include_given: bool = False
|
||||
) -> List[EventBase]:
|
||||
@@ -90,45 +84,17 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
|
||||
else:
|
||||
results = set()
|
||||
|
||||
# We pull out the depth simply so that we can populate the
|
||||
# `_event_auth_cache` cache.
|
||||
base_sql = """
|
||||
SELECT a.event_id, auth_id, depth
|
||||
FROM event_auth AS a
|
||||
INNER JOIN events AS e ON (e.event_id = a.auth_id)
|
||||
WHERE
|
||||
"""
|
||||
base_sql = "SELECT DISTINCT auth_id FROM event_auth WHERE "
|
||||
|
||||
front = set(event_ids)
|
||||
while front:
|
||||
new_front = set()
|
||||
for chunk in batch_iter(front, 100):
|
||||
# Pull the auth events either from the cache or DB.
|
||||
to_fetch = [] # Event IDs to fetch from DB # type: List[str]
|
||||
for event_id in chunk:
|
||||
res = self._event_auth_cache.get(event_id)
|
||||
if res is None:
|
||||
to_fetch.append(event_id)
|
||||
else:
|
||||
new_front.update(auth_id for auth_id, depth in res)
|
||||
|
||||
if to_fetch:
|
||||
clause, args = make_in_list_sql_clause(
|
||||
txn.database_engine, "a.event_id", to_fetch
|
||||
)
|
||||
txn.execute(base_sql + clause, args)
|
||||
|
||||
# Note we need to batch up the results by event ID before
|
||||
# adding to the cache.
|
||||
to_cache = {}
|
||||
for event_id, auth_event_id, auth_event_depth in txn:
|
||||
to_cache.setdefault(event_id, []).append(
|
||||
(auth_event_id, auth_event_depth)
|
||||
)
|
||||
new_front.add(auth_event_id)
|
||||
|
||||
for event_id, auth_events in to_cache.items():
|
||||
self._event_auth_cache.set(event_id, auth_events)
|
||||
clause, args = make_in_list_sql_clause(
|
||||
txn.database_engine, "event_id", chunk
|
||||
)
|
||||
txn.execute(base_sql + clause, args)
|
||||
new_front.update(r[0] for r in txn)
|
||||
|
||||
new_front -= results
|
||||
|
||||
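The cache-or-fetch batching above boils down to the following rough sketch
(fetch_from_db is a stand-in for the real batched SQL query and is assumed to
return {event_id: [(auth_id, depth), ...]}):

    from synapse.util.caches.lrucache import LruCache

    event_auth_cache = LruCache(500000, "_event_auth_cache", size_callback=len)

    def auth_ids_for(event_ids, fetch_from_db):
        found = set()
        to_fetch = []
        for event_id in event_ids:
            cached = event_auth_cache.get(event_id)
            if cached is None:
                to_fetch.append(event_id)
            else:
                found.update(auth_id for auth_id, _depth in cached)

        if to_fetch:
            # One batched DB query for all cache misses; results are cached per event.
            for event_id, rows in fetch_from_db(to_fetch).items():
                event_auth_cache.set(event_id, rows)
                found.update(auth_id for auth_id, _depth in rows)

        return found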
@@ -247,38 +213,14 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
|
||||
break
|
||||
|
||||
# Fetch the auth events and their depths of the N last events we're
|
||||
# currently walking, either from cache or DB.
|
||||
# currently walking
|
||||
search, chunk = search[:-100], search[-100:]
|
||||
clause, args = make_in_list_sql_clause(
|
||||
txn.database_engine, "a.event_id", [e_id for _, e_id in chunk]
|
||||
)
|
||||
txn.execute(base_sql + clause, args)
|
||||
|
||||
found = [] # Results found # type: List[Tuple[str, str, int]]
|
||||
to_fetch = [] # Event IDs to fetch from DB # type: List[str]
|
||||
for _, event_id in chunk:
|
||||
res = self._event_auth_cache.get(event_id)
|
||||
if res is None:
|
||||
to_fetch.append(event_id)
|
||||
else:
|
||||
found.extend((event_id, auth_id, depth) for auth_id, depth in res)
|
||||
|
||||
if to_fetch:
|
||||
clause, args = make_in_list_sql_clause(
|
||||
txn.database_engine, "a.event_id", to_fetch
|
||||
)
|
||||
txn.execute(base_sql + clause, args)
|
||||
|
||||
# We parse the results and add them to the `found` set and the
|
||||
# cache (note we need to batch up the results by event ID before
|
||||
# adding to the cache).
|
||||
to_cache = {}
|
||||
for event_id, auth_event_id, auth_event_depth in txn:
|
||||
to_cache.setdefault(event_id, []).append(
|
||||
(auth_event_id, auth_event_depth)
|
||||
)
|
||||
found.append((event_id, auth_event_id, auth_event_depth))
|
||||
|
||||
for event_id, auth_events in to_cache.items():
|
||||
self._event_auth_cache.set(event_id, auth_events)
|
||||
|
||||
for event_id, auth_event_id, auth_event_depth in found:
|
||||
for event_id, auth_event_id, auth_event_depth in txn:
|
||||
event_to_auth_events.setdefault(event_id, set()).add(auth_event_id)
|
||||
|
||||
sets = event_to_missing_sets.get(auth_event_id)
|
||||
|
||||
@@ -1,17 +0,0 @@
|
||||
/* Copyright 2020 The Matrix.org Foundation C.I.C
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
INSERT INTO background_updates (update_name, progress_json) VALUES
|
||||
('e2e_cross_signing_keys_idx', '{}');
|
||||
@@ -16,18 +16,15 @@
|
||||
|
||||
import logging
|
||||
from collections import Counter
|
||||
from enum import Enum
|
||||
from itertools import chain
|
||||
from typing import Any, Dict, List, Optional, Tuple
|
||||
|
||||
from twisted.internet.defer import DeferredLock
|
||||
|
||||
from synapse.api.constants import EventTypes, Membership
|
||||
from synapse.api.errors import StoreError
|
||||
from synapse.storage.database import DatabasePool
|
||||
from synapse.storage.databases.main.state_deltas import StateDeltasStore
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util.caches.descriptors import cached
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -62,23 +59,6 @@ TYPE_TO_TABLE = {"room": ("room_stats", "room_id"), "user": ("user_stats", "user
|
||||
TYPE_TO_ORIGIN_TABLE = {"room": ("rooms", "room_id"), "user": ("users", "name")}
|
||||
|
||||
|
||||
class UserSortOrder(Enum):
|
||||
"""
|
||||
Enum to define the sorting method used when returning users
|
||||
with get_users_media_usage_paginate
|
||||
|
||||
MEDIA_LENGTH = ordered by size of uploaded media. Smallest to largest.
|
||||
MEDIA_COUNT = ordered by number of uploaded media. Smallest to largest.
|
||||
USER_ID = ordered alphabetically by `user_id`.
|
||||
DISPLAYNAME = ordered alphabetically by `displayname`
|
||||
"""
|
||||
|
||||
MEDIA_LENGTH = "media_length"
|
||||
MEDIA_COUNT = "media_count"
|
||||
USER_ID = "user_id"
|
||||
DISPLAYNAME = "displayname"
|
||||
|
||||
|
||||
class StatsStore(StateDeltasStore):
|
||||
def __init__(self, database: DatabasePool, db_conn, hs):
|
||||
super().__init__(database, db_conn, hs)
|
||||
@@ -902,110 +882,3 @@ class StatsStore(StateDeltasStore):
|
||||
complete_with_stream_id=pos,
|
||||
absolute_field_overrides={"joined_rooms": joined_rooms},
|
||||
)
|
||||
|
||||
async def get_users_media_usage_paginate(
|
||||
self,
|
||||
start: int,
|
||||
limit: int,
|
||||
from_ts: Optional[int] = None,
|
||||
until_ts: Optional[int] = None,
|
||||
order_by: Optional[UserSortOrder] = UserSortOrder.USER_ID.value,
|
||||
direction: Optional[str] = "f",
|
||||
search_term: Optional[str] = None,
|
||||
) -> Tuple[List[JsonDict], Dict[str, int]]:
|
||||
"""Function to retrieve a paginated list of users and their uploaded local media
|
||||
(size and number). This will return a json list of users and the
|
||||
total number of users matching the filter criteria.
|
||||
|
||||
Args:
|
||||
start: offset to begin the query from
|
||||
limit: number of rows to retrieve
|
||||
from_ts: request only media that are created later than this timestamp (ms)
|
||||
until_ts: request only media that are created earlier than this timestamp (ms)
|
||||
order_by: the sort order of the returned list
|
||||
direction: sort ascending or descending
|
||||
search_term: a string to filter user names by
|
||||
Returns:
|
||||
A list of user dicts and an integer representing the total number of
|
||||
users that exist given this query
|
||||
"""
|
||||
|
||||
def get_users_media_usage_paginate_txn(txn):
|
||||
filters = []
|
||||
args = [self.hs.config.server_name]
|
||||
|
||||
if search_term:
|
||||
filters.append("(lmr.user_id LIKE ? OR displayname LIKE ?)")
|
||||
args.extend(["@%" + search_term + "%:%", "%" + search_term + "%"])
|
||||
|
||||
if from_ts:
|
||||
filters.append("created_ts >= ?")
|
||||
args.extend([from_ts])
|
||||
if until_ts:
|
||||
filters.append("created_ts <= ?")
|
||||
args.extend([until_ts])
|
||||
|
||||
# Set ordering
|
||||
if UserSortOrder(order_by) == UserSortOrder.MEDIA_LENGTH:
|
||||
order_by_column = "media_length"
|
||||
elif UserSortOrder(order_by) == UserSortOrder.MEDIA_COUNT:
|
||||
order_by_column = "media_count"
|
||||
elif UserSortOrder(order_by) == UserSortOrder.USER_ID:
|
||||
order_by_column = "lmr.user_id"
|
||||
elif UserSortOrder(order_by) == UserSortOrder.DISPLAYNAME:
|
||||
order_by_column = "displayname"
|
||||
else:
|
||||
raise StoreError(
|
||||
500, "Incorrect value for order_by provided: %s" % order_by
|
||||
)
|
||||
|
||||
if direction == "b":
|
||||
order = "DESC"
|
||||
else:
|
||||
order = "ASC"
|
||||
|
||||
where_clause = "WHERE " + " AND ".join(filters) if len(filters) > 0 else ""
|
||||
|
||||
sql_base = """
|
||||
FROM local_media_repository as lmr
|
||||
LEFT JOIN profiles AS p ON lmr.user_id = '@' || p.user_id || ':' || ?
|
||||
{}
|
||||
GROUP BY lmr.user_id, displayname
|
||||
""".format(
|
||||
where_clause
|
||||
)
|
||||
|
||||
# SQLite does not support SELECT COUNT(*) OVER()
|
||||
sql = """
|
||||
SELECT COUNT(*) FROM (
|
||||
SELECT lmr.user_id
|
||||
{sql_base}
|
||||
) AS count_user_ids
|
||||
""".format(
|
||||
sql_base=sql_base,
|
||||
)
|
||||
txn.execute(sql, args)
|
||||
count = txn.fetchone()[0]
|
||||
|
||||
sql = """
|
||||
SELECT
|
||||
lmr.user_id,
|
||||
displayname,
|
||||
COUNT(lmr.user_id) as media_count,
|
||||
SUM(media_length) as media_length
|
||||
{sql_base}
|
||||
ORDER BY {order_by_column} {order}
|
||||
LIMIT ? OFFSET ?
|
||||
""".format(
|
||||
sql_base=sql_base, order_by_column=order_by_column, order=order,
|
||||
)
|
||||
|
||||
args += [limit, start]
|
||||
txn.execute(sql, args)
|
||||
users = self.db_pool.cursor_to_dict(txn)
|
||||
|
||||
return users, count
|
||||
|
||||
return await self.db_pool.runInteraction(
|
||||
"get_users_media_usage_paginate_txn", get_users_media_usage_paginate_txn
|
||||
)
|
||||
|
||||
@@ -59,6 +59,7 @@ class FederationTestCase(unittest.HomeserverTestCase):
|
||||
)
|
||||
|
||||
d = self.handler.on_exchange_third_party_invite_request(
|
||||
room_id=room_id,
|
||||
event_dict={
|
||||
"type": EventTypes.Member,
|
||||
"room_id": room_id,
|
||||
|
||||
@@ -154,60 +154,3 @@ class EventCreationTestCase(unittest.HomeserverTestCase):
# Check that we've deduplicated the events.
self.assertEqual(len(events), 2)
self.assertEqual(events[0].event_id, events[1].event_id)


class ServerAclValidationTestCase(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets,
login.register_servlets,
room.register_servlets,
]

def prepare(self, reactor, clock, hs):
self.user_id = self.register_user("tester", "foobar")
self.access_token = self.login("tester", "foobar")
self.room_id = self.helper.create_room_as(self.user_id, tok=self.access_token)

def test_allow_server_acl(self):
"""Test that sending an ACL that blocks everyone but ourselves works.
"""

self.helper.send_state(
self.room_id,
EventTypes.ServerACL,
body={"allow": [self.hs.hostname]},
tok=self.access_token,
expect_code=200,
)

def test_deny_server_acl_block_outselves(self):
"""Test that sending an ACL that blocks ourselves does not work.
"""
self.helper.send_state(
self.room_id,
EventTypes.ServerACL,
body={},
tok=self.access_token,
expect_code=400,
)

def test_deny_redact_server_acl(self):
"""Test that attempting to redact an ACL is blocked.
"""

body = self.helper.send_state(
self.room_id,
EventTypes.ServerACL,
body={"allow": [self.hs.hostname]},
tok=self.access_token,
expect_code=200,
)
event_id = body["event_id"]

# Redaction of event should fail.
path = "/_matrix/client/r0/rooms/%s/redact/%s" % (self.room_id, event_id)
request, channel = self.make_request(
"POST", path, content={}, access_token=self.access_token
)
self.render(request)
self.assertEqual(int(channel.result["code"]), 403)

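The removed ServerAclValidationTestCase above only ever sets the `allow` key of an `m.room.server_acl` event. For reference (this comes from the Matrix specification, not from this diff), a full ACL content usually carries `allow`, `deny` and `allow_ip_literals`; a hedged illustration in the same test style:

```python
# Illustrative m.room.server_acl content; the tests above only set "allow".
acl_content = {
    "allow": ["*"],                 # server-name globs permitted to participate
    "deny": ["evil.example.com"],   # server-name globs to reject
    "allow_ip_literals": False,     # whether IP-literal server names are allowed
}
self.helper.send_state(
    self.room_id, EventTypes.ServerACL, body=acl_content, tok=self.access_token
)
```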
@@ -531,7 +531,40 @@ class DeleteRoomTestCase(unittest.HomeserverTestCase):
def _is_purged(self, room_id):
"""Test that the following tables have been purged of all rows related to the room.
"""
for table in PURGE_TABLES:
for table in (
"current_state_events",
"event_backward_extremities",
"event_forward_extremities",
"event_json",
"event_push_actions",
"event_search",
"events",
"group_rooms",
"public_room_list_stream",
"receipts_graph",
"receipts_linearized",
"room_aliases",
"room_depth",
"room_memberships",
"room_stats_state",
"room_stats_current",
"room_stats_historical",
"room_stats_earliest_token",
"rooms",
"stream_ordering_to_exterm",
"users_in_public_rooms",
"users_who_share_private_rooms",
"appservice_room_list",
"e2e_room_keys",
"event_push_summary",
"pusher_throttle",
"group_summary_rooms",
"local_invites",
"room_account_data",
"room_tags",
# "state_groups", # Current impl leaves orphaned state groups around.
"state_groups_state",
):
count = self.get_success(
self.store.db_pool.simple_select_one_onecol(
table=table,
@@ -600,7 +633,39 @@ class PurgeRoomTestCase(unittest.HomeserverTestCase):
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])

# Test that the following tables have been purged of all rows related to the room.
for table in PURGE_TABLES:
for table in (
"current_state_events",
"event_backward_extremities",
"event_forward_extremities",
"event_json",
"event_push_actions",
"event_search",
"events",
"group_rooms",
"public_room_list_stream",
"receipts_graph",
"receipts_linearized",
"room_aliases",
"room_depth",
"room_memberships",
"room_stats_state",
"room_stats_current",
"room_stats_historical",
"room_stats_earliest_token",
"rooms",
"stream_ordering_to_exterm",
"users_in_public_rooms",
"users_who_share_private_rooms",
"appservice_room_list",
"e2e_room_keys",
"event_push_summary",
"pusher_throttle",
"group_summary_rooms",
"room_account_data",
"room_tags",
# "state_groups", # Current impl leaves orphaned state groups around.
"state_groups_state",
):
count = self.get_success(
self.store.db_pool.simple_select_one_onecol(
table=table,
@@ -1435,39 +1500,3 @@ class JoinAliasRoomTestCase(unittest.HomeserverTestCase):
self.render(request)
self.assertEquals(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])


PURGE_TABLES = [
"current_state_events",
"event_backward_extremities",
"event_forward_extremities",
"event_json",
"event_push_actions",
"event_search",
"events",
"group_rooms",
"public_room_list_stream",
"receipts_graph",
"receipts_linearized",
"room_aliases",
"room_depth",
"room_memberships",
"room_stats_state",
"room_stats_current",
"room_stats_historical",
"room_stats_earliest_token",
"rooms",
"stream_ordering_to_exterm",
"users_in_public_rooms",
"users_who_share_private_rooms",
"appservice_room_list",
"e2e_room_keys",
"event_push_summary",
"pusher_throttle",
"group_summary_rooms",
"local_invites",
"room_account_data",
"room_tags",
# "state_groups", # Current impl leaves orphaned state groups around.
"state_groups_state",
]

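The two hunks above are cut off right after `table=table,`. Based on the surrounding context, a hedged reconstruction of the per-table check they perform looks like this; the exact keyword arguments are an assumption, not a quote from the diff:

```python
# Hedged reconstruction: count rows in each table that still reference the
# purged room and assert that none remain.
for table in PURGE_TABLES:
    count = self.get_success(
        self.store.db_pool.simple_select_one_onecol(
            table=table,
            keyvalues={"room_id": room_id},  # assumed filter column
            retcol="COUNT(*)",
            desc="test_purge_room",
        )
    )
    self.assertEqual(count, 0, msg="Rows not purged in {}".format(table))
```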
@@ -1,485 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2020 Dirk Klimpel
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
from binascii import unhexlify
from typing import Any, Dict, List, Optional

import synapse.rest.admin
from synapse.api.errors import Codes
from synapse.rest.client.v1 import login

from tests import unittest


class UserMediaStatisticsTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
]

def prepare(self, reactor, clock, hs):
self.store = hs.get_datastore()
self.media_repo = hs.get_media_repository_resource()

self.admin_user = self.register_user("admin", "pass", admin=True)
self.admin_user_tok = self.login("admin", "pass")

self.other_user = self.register_user("user", "pass")
self.other_user_tok = self.login("user", "pass")

self.url = "/_synapse/admin/v1/statistics/users/media"

def test_no_auth(self):
"""
Try to list users without authentication.
"""
request, channel = self.make_request("GET", self.url, b"{}")
self.render(request)

self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])

def test_requester_is_no_admin(self):
"""
If the user is not a server admin, an error 403 is returned.
"""
request, channel = self.make_request(
"GET", self.url, json.dumps({}), access_token=self.other_user_tok,
)
self.render(request)

self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])

def test_invalid_parameter(self):
"""
If parameters are invalid, an error is returned.
"""
# unknown order_by
request, channel = self.make_request(
"GET", self.url + "?order_by=bar", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])

# negative from
request, channel = self.make_request(
"GET", self.url + "?from=-5", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])

# negative limit
request, channel = self.make_request(
"GET", self.url + "?limit=-5", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])

# negative from_ts
request, channel = self.make_request(
"GET", self.url + "?from_ts=-1234", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])

# negative until_ts
request, channel = self.make_request(
"GET", self.url + "?until_ts=-1234", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])

# until_ts smaller than from_ts
request, channel = self.make_request(
"GET",
self.url + "?from_ts=10&until_ts=5",
access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])

# empty search term
request, channel = self.make_request(
"GET", self.url + "?search_term=", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])

# invalid search order
request, channel = self.make_request(
"GET", self.url + "?dir=bar", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])

def test_limit(self):
"""
Testing list of media with limit
"""
self._create_users_with_media(10, 2)

request, channel = self.make_request(
"GET", self.url + "?limit=5", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 10)
self.assertEqual(len(channel.json_body["users"]), 5)
self.assertEqual(channel.json_body["next_token"], 5)
self._check_fields(channel.json_body["users"])

def test_from(self):
"""
Testing list of media with a defined starting point (from)
"""
self._create_users_with_media(20, 2)

request, channel = self.make_request(
"GET", self.url + "?from=5", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 20)
self.assertEqual(len(channel.json_body["users"]), 15)
self.assertNotIn("next_token", channel.json_body)
self._check_fields(channel.json_body["users"])

def test_limit_and_from(self):
"""
Testing list of media with a defined starting point and limit
"""
self._create_users_with_media(20, 2)

request, channel = self.make_request(
"GET", self.url + "?from=5&limit=10", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 20)
self.assertEqual(channel.json_body["next_token"], 15)
self.assertEqual(len(channel.json_body["users"]), 10)
self._check_fields(channel.json_body["users"])

def test_next_token(self):
"""
Testing that `next_token` appears at the right place
"""

number_users = 20
self._create_users_with_media(number_users, 3)

# `next_token` does not appear
# Number of results is the number of entries
request, channel = self.make_request(
"GET", self.url + "?limit=20", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), number_users)
self.assertNotIn("next_token", channel.json_body)

# `next_token` does not appear
# Number of max results is larger than the number of entries
request, channel = self.make_request(
"GET", self.url + "?limit=21", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), number_users)
self.assertNotIn("next_token", channel.json_body)

# `next_token` does appear
# Number of max results is smaller than the number of entries
request, channel = self.make_request(
"GET", self.url + "?limit=19", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), 19)
self.assertEqual(channel.json_body["next_token"], 19)

# Set `from` to value of `next_token` to request the remaining entries
# Check `next_token` does not appear
request, channel = self.make_request(
"GET", self.url + "?from=19", access_token=self.admin_user_tok,
)
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), 1)
self.assertNotIn("next_token", channel.json_body)

def test_no_media(self):
"""
Tests that a normal lookup for statistics is successful
if users have no media created
"""
|
||||
|
||||
request, channel = self.make_request(
|
||||
"GET", self.url, access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
|
||||
self.assertEqual(200, channel.code, msg=channel.json_body)
|
||||
self.assertEqual(0, channel.json_body["total"])
|
||||
self.assertEqual(0, len(channel.json_body["users"]))
|
||||
|
||||
def test_order_by(self):
|
||||
"""
|
||||
Testing order list with parameter `order_by`
|
||||
"""
|
||||
|
||||
# create users
|
||||
self.register_user("user_a", "pass", displayname="UserZ")
|
||||
userA_tok = self.login("user_a", "pass")
|
||||
self._create_media(userA_tok, 1)
|
||||
|
||||
self.register_user("user_b", "pass", displayname="UserY")
|
||||
userB_tok = self.login("user_b", "pass")
|
||||
self._create_media(userB_tok, 3)
|
||||
|
||||
self.register_user("user_c", "pass", displayname="UserX")
|
||||
userC_tok = self.login("user_c", "pass")
|
||||
self._create_media(userC_tok, 2)
|
||||
|
||||
# order by user_id
|
||||
self._order_test("user_id", ["@user_a:test", "@user_b:test", "@user_c:test"])
|
||||
self._order_test(
|
||||
"user_id", ["@user_a:test", "@user_b:test", "@user_c:test"], "f",
|
||||
)
|
||||
self._order_test(
|
||||
"user_id", ["@user_c:test", "@user_b:test", "@user_a:test"], "b",
|
||||
)
|
||||
|
||||
# order by displayname
|
||||
self._order_test(
|
||||
"displayname", ["@user_c:test", "@user_b:test", "@user_a:test"]
|
||||
)
|
||||
self._order_test(
|
||||
"displayname", ["@user_c:test", "@user_b:test", "@user_a:test"], "f",
|
||||
)
|
||||
self._order_test(
|
||||
"displayname", ["@user_a:test", "@user_b:test", "@user_c:test"], "b",
|
||||
)
|
||||
|
||||
# order by media_length
|
||||
self._order_test(
|
||||
"media_length", ["@user_a:test", "@user_c:test", "@user_b:test"],
|
||||
)
|
||||
self._order_test(
|
||||
"media_length", ["@user_a:test", "@user_c:test", "@user_b:test"], "f",
|
||||
)
|
||||
self._order_test(
|
||||
"media_length", ["@user_b:test", "@user_c:test", "@user_a:test"], "b",
|
||||
)
|
||||
|
||||
# order by media_count
|
||||
self._order_test(
|
||||
"media_count", ["@user_a:test", "@user_c:test", "@user_b:test"],
|
||||
)
|
||||
self._order_test(
|
||||
"media_count", ["@user_a:test", "@user_c:test", "@user_b:test"], "f",
|
||||
)
|
||||
self._order_test(
|
||||
"media_count", ["@user_b:test", "@user_c:test", "@user_a:test"], "b",
|
||||
)
|
||||
|
||||
def test_from_until_ts(self):
|
||||
"""
|
||||
Testing filter by time with parameters `from_ts` and `until_ts`
|
||||
"""
|
||||
# create media earlier than `ts1` to ensure that `from_ts` is working
|
||||
self._create_media(self.other_user_tok, 3)
|
||||
self.pump(1)
|
||||
ts1 = self.clock.time_msec()
|
||||
|
||||
# list all media when filter is not set
|
||||
request, channel = self.make_request(
|
||||
"GET", self.url, access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||
self.assertEqual(channel.json_body["users"][0]["media_count"], 3)
|
||||
|
||||
# filter media starting at `ts1` after creating first media
|
||||
# result is 0
|
||||
request, channel = self.make_request(
|
||||
"GET", self.url + "?from_ts=%s" % (ts1,), access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||
self.assertEqual(channel.json_body["total"], 0)
|
||||
|
||||
self._create_media(self.other_user_tok, 3)
|
||||
self.pump(1)
|
||||
ts2 = self.clock.time_msec()
|
||||
# create media after `ts2` to ensure that `until_ts` is working
|
||||
self._create_media(self.other_user_tok, 3)
|
||||
|
||||
# filter media between `ts1` and `ts2`
|
||||
request, channel = self.make_request(
|
||||
"GET",
|
||||
self.url + "?from_ts=%s&until_ts=%s" % (ts1, ts2),
|
||||
access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||
self.assertEqual(channel.json_body["users"][0]["media_count"], 3)
|
||||
|
||||
# filter media until `ts2` and earlier
|
||||
request, channel = self.make_request(
|
||||
"GET", self.url + "?until_ts=%s" % (ts2,), access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||
self.assertEqual(channel.json_body["users"][0]["media_count"], 6)
|
||||
|
||||
def test_search_term(self):
|
||||
self._create_users_with_media(20, 1)
|
||||
|
||||
# check without filter get all users
|
||||
request, channel = self.make_request(
|
||||
"GET", self.url, access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||
self.assertEqual(channel.json_body["total"], 20)
|
||||
|
||||
# filter user 1 and 10-19 by `user_id`
|
||||
request, channel = self.make_request(
|
||||
"GET",
|
||||
self.url + "?search_term=foo_user_1",
|
||||
access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||
self.assertEqual(channel.json_body["total"], 11)
|
||||
|
||||
# filter on this user in `displayname`
|
||||
request, channel = self.make_request(
|
||||
"GET",
|
||||
self.url + "?search_term=bar_user_10",
|
||||
access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||
self.assertEqual(channel.json_body["users"][0]["displayname"], "bar_user_10")
|
||||
self.assertEqual(channel.json_body["total"], 1)
|
||||
|
||||
# filter and get empty result
|
||||
request, channel = self.make_request(
|
||||
"GET", self.url + "?search_term=foobar", access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
|
||||
self.assertEqual(channel.json_body["total"], 0)
|
||||
|
||||
def _create_users_with_media(self, number_users: int, media_per_user: int):
|
||||
"""
|
||||
Create a number of users with a number of media
|
||||
Args:
|
||||
number_users: Number of users to be created
|
||||
media_per_user: Number of media to be created for each user
|
||||
"""
|
||||
for i in range(number_users):
|
||||
self.register_user("foo_user_%s" % i, "pass", displayname="bar_user_%s" % i)
|
||||
user_tok = self.login("foo_user_%s" % i, "pass")
|
||||
self._create_media(user_tok, media_per_user)
|
||||
|
||||
def _create_media(self, user_token: str, number_media: int):
|
||||
"""
|
||||
Create a number of media for a specific user
|
||||
Args:
|
||||
user_token: Access token of the user
|
||||
number_media: Number of media to be created for the user
|
||||
"""
|
||||
upload_resource = self.media_repo.children[b"upload"]
|
||||
for i in range(number_media):
|
||||
# file size is 67 Byte
|
||||
image_data = unhexlify(
|
||||
b"89504e470d0a1a0a0000000d4948445200000001000000010806"
|
||||
b"0000001f15c4890000000a49444154789c63000100000500010d"
|
||||
b"0a2db40000000049454e44ae426082"
|
||||
)
|
||||
|
||||
# Upload some media into the room
|
||||
self.helper.upload_media(
|
||||
upload_resource, image_data, tok=user_token, expect_code=200
|
||||
)
|
||||
|
||||
def _check_fields(self, content: List[Dict[str, Any]]):
|
||||
"""Checks that all attributes are present in content
|
||||
Args:
|
||||
content: List that is checked for content
|
||||
"""
|
||||
for c in content:
|
||||
self.assertIn("user_id", c)
|
||||
self.assertIn("displayname", c)
|
||||
self.assertIn("media_count", c)
|
||||
self.assertIn("media_length", c)
|
||||
|
||||
def _order_test(
|
||||
self, order_type: str, expected_user_list: List[str], dir: Optional[str] = None
|
||||
):
|
||||
"""Request the list of users in a certain order. Assert that order is what
|
||||
we expect
|
||||
Args:
|
||||
order_type: The type of ordering to give the server
|
||||
expected_user_list: The list of user_ids in the order we expect to get
|
||||
back from the server
|
||||
dir: The direction of ordering to give the server
|
||||
"""
|
||||
|
||||
url = self.url + "?order_by=%s" % (order_type,)
|
||||
if dir is not None and dir in ("b", "f"):
|
||||
url += "&dir=%s" % (dir,)
|
||||
request, channel = self.make_request(
|
||||
"GET", url.encode("ascii"), access_token=self.admin_user_tok,
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEqual(200, channel.code, msg=channel.json_body)
|
||||
self.assertEqual(channel.json_body["total"], len(expected_user_list))
|
||||
|
||||
returned_order = [row["user_id"] for row in channel.json_body["users"]]
|
||||
self.assertListEqual(expected_user_list, returned_order)
|
||||
self._check_fields(channel.json_body["users"])
|
||||
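Taken together, the deleted tests above pin down the response shape of `GET /_synapse/admin/v1/statistics/users/media`. An illustrative response consistent with those assertions (the field values here are invented for the example):

```python
# Example payload shape only; values are made up.
example_response = {
    "users": [
        {
            "user_id": "@foo_user_0:test",
            "displayname": "bar_user_0",
            "media_count": 2,      # number of uploaded media items
            "media_length": 134,   # total size of uploaded media, in bytes
        }
    ],
    "total": 20,
    "next_token": 5,               # present only when more rows remain
}
```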
@@ -24,7 +24,7 @@ from mock import Mock
import synapse.rest.admin
from synapse.api.constants import UserTypes
from synapse.api.errors import Codes, HttpResponseException, ResourceLimitError
from synapse.rest.client.v1 import login, profile, room
from synapse.rest.client.v1 import login, room
from synapse.rest.client.v2_alpha import sync

from tests import unittest
@@ -34,10 +34,7 @@ from tests.unittest import override_config

class UserRegisterTestCase(unittest.HomeserverTestCase):

servlets = [
synapse.rest.admin.register_servlets_for_client_rest_resource,
profile.register_servlets,
]
servlets = [synapse.rest.admin.register_servlets_for_client_rest_resource]

def make_homeserver(self, reactor, clock):

@@ -328,120 +325,6 @@ class UserRegisterTestCase(unittest.HomeserverTestCase):
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("Invalid user type", channel.json_body["error"])

def test_displayname(self):
"""
Test that displayname of new user is set
"""

# set no displayname
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]

want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob1\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()

body = json.dumps(
{"nonce": nonce, "username": "bob1", "password": "abc123", "mac": want_mac}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob1:test", channel.json_body["user_id"])

request, channel = self.make_request("GET", "/profile/@bob1:test/displayname")
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("bob1", channel.json_body["displayname"])

# displayname is None
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]

want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob2\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()

body = json.dumps(
{
"nonce": nonce,
"username": "bob2",
"displayname": None,
"password": "abc123",
"mac": want_mac,
}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob2:test", channel.json_body["user_id"])

request, channel = self.make_request("GET", "/profile/@bob2:test/displayname")
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("bob2", channel.json_body["displayname"])

# displayname is empty
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]

want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob3\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()

body = json.dumps(
{
"nonce": nonce,
"username": "bob3",
"displayname": "",
"password": "abc123",
"mac": want_mac,
}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob3:test", channel.json_body["user_id"])

request, channel = self.make_request("GET", "/profile/@bob3:test/displayname")
self.render(request)
self.assertEqual(404, int(channel.result["code"]), msg=channel.result["body"])

# set displayname
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]

want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob4\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()

body = json.dumps(
{
"nonce": nonce,
"username": "bob4",
"displayname": "Bob's Name",
"password": "abc123",
"mac": want_mac,
}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)

self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob4:test", channel.json_body["user_id"])

request, channel = self.make_request("GET", "/profile/@bob4:test/displayname")
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("Bob's Name", channel.json_body["displayname"])

@override_config(
{"limit_usage_by_mau": True, "max_mau_value": 2, "mau_trial_days": 0}
)

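The removed `test_displayname` above repeats the same shared-secret MAC construction four times: an HMAC-SHA1 keyed with the registration shared secret over the NUL-joined nonce, username, password and admin flag. A hedged helper that mirrors that construction (the helper itself is not part of the diff):

```python
import hashlib
import hmac


def registration_mac(
    shared_secret: bytes, nonce: str, username: str, password: str, admin: bool = False
) -> str:
    """Mirror of the want_mac construction used in the tests above."""
    mac = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
    mac.update(
        nonce.encode("ascii")
        + b"\x00"
        + username.encode("utf8")
        + b"\x00"
        + password.encode("utf8")
        + b"\x00"
        + (b"admin" if admin else b"notadmin")
    )
    return mac.hexdigest()


# e.g. registration_mac(b"shared", nonce, "bob1", "abc123") matches want_mac above
```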
@@ -546,24 +546,18 @@ class HomeserverTestCase(TestCase):

return result

def register_user(
self,
username: str,
password: str,
admin: Optional[bool] = False,
displayname: Optional[str] = None,
) -> str:
def register_user(self, username, password, admin=False):
"""
Register a user. Requires the Admin API be registered.

Args:
username: The user part of the new user.
password: The password of the new user.
admin: Whether the user should be created as an admin or not.
displayname: The displayname of the new user.
username (bytes/unicode): The user part of the new user.
password (bytes/unicode): The password of the new user.
admin (bool): Whether the user should be created as an admin
or not.

Returns:
The MXID of the new user.
The MXID of the new user (unicode).
"""
self.hs.config.registration_shared_secret = "shared"

@@ -587,7 +581,6 @@ class HomeserverTestCase(TestCase):
{
"nonce": nonce,
"username": username,
"displayname": displayname,
"password": password,
"admin": admin,
"mac": want_mac,