Mirror of https://github.com/element-hq/synapse.git (synced 2025-12-07 01:20:16 +00:00)

Compare commits: rei/ci_deb ... dependabot (1 commit, f7103be392)

.github/workflows/tests.yml (vendored): 22 changed lines
@@ -373,7 +373,7 @@ jobs:
calculate-test-jobs:
if: ${{ !cancelled() && !failure() }} # Allow previous steps to be skipped, but not fail
# needs: linting-done
needs: linting-done
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

@@ -393,7 +393,6 @@ jobs:
- changes
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.trial_test_matrix) }}

@@ -427,24 +426,7 @@ jobs:
if: ${{ matrix.job.postgres-version }}
timeout-minutes: 2
run: until pg_isready -h localhost; do sleep 1; done
- run: |
(
while true; do
echo "......."
date
df -h | grep root
free -m
sleep 10
done
) &
MONITOR_PID=$!

poetry run trial --jobs=6 tests
STATUS=$?

kill $MONITOR_PID
exit $STATUS

- run: poetry run trial --jobs=6 tests
env:
SYNAPSE_POSTGRES: ${{ matrix.job.database == 'postgres' || '' }}
SYNAPSE_POSTGRES_HOST: /var/run/postgresql
.github/workflows/triage_labelled.yml (vendored): 4 changed lines

@@ -16,10 +16,6 @@ jobs:
with:
project-url: "https://github.com/orgs/matrix-org/projects/67"
github-token: ${{ secrets.ELEMENT_BOT_TOKEN }}
# This action will error if the issue already exists on the project. Which is
# common as `X-Needs-Info` will often be added to issues that are already in
# the triage queue. Prevent the whole job from failing in this case.
continue-on-error: true
- name: Set status
env:
GITHUB_TOKEN: ${{ secrets.ELEMENT_BOT_TOKEN }}

CHANGES.md: 20 changed lines
@@ -1,23 +1,3 @@
# Synapse 1.135.0 (2025-08-01)

No significant changes since 1.135.0rc2.


# Synapse 1.135.0rc2 (2025-07-30)

### Bugfixes

- Fix user failing to deactivate with MAS when `/_synapse/mas` is handled by a worker. ([\#18716](https://github.com/element-hq/synapse/issues/18716))

### Internal Changes

- Fix performance regression introduced in [#18238](https://github.com/element-hq/synapse/issues/18238) by adding a cache to `is_server_admin`. ([\#18747](https://github.com/element-hq/synapse/issues/18747))


# Synapse 1.135.0rc1 (2025-07-22)

### Features

Cargo.lock (generated, 574 changed lines): diff suppressed because it is too large.
29
README.rst
29
README.rst
@@ -8,7 +8,7 @@
|
||||
Synapse is an open source `Matrix <https://matrix.org>`__ homeserver
|
||||
implementation, written and maintained by `Element <https://element.io>`_.
|
||||
`Matrix <https://github.com/matrix-org>`__ is the open standard for
|
||||
secure and interoperable real-time communications. You can directly run
|
||||
secure and interoperable real time communications. You can directly run
|
||||
and manage the source code in this repository, available under an AGPL
|
||||
license (or alternatively under a commercial license from Element).
|
||||
There is no support provided by Element unless you have a
|
||||
@@ -23,13 +23,13 @@ ESS builds on Synapse to offer a complete Matrix-based backend including the ful
|
||||
`Admin Console product <https://element.io/enterprise-functionality/admin-console>`_,
|
||||
giving admins the power to easily manage an organization-wide
|
||||
deployment. It includes advanced identity management, auditing,
|
||||
moderation and data retention options as well as Long-Term Support and
|
||||
SLAs. ESS supports any Matrix-compatible client.
|
||||
moderation and data retention options as well as Long Term Support and
|
||||
SLAs. ESS can be used to support any Matrix-based frontend client.
|
||||
|
||||
.. contents::
|
||||
|
||||
🛠️ Installation and configuration
|
||||
==================================
|
||||
🛠️ Installing and configuration
|
||||
===============================
|
||||
|
||||
The Synapse documentation describes `how to install Synapse <https://element-hq.github.io/synapse/latest/setup/installation.html>`_. We recommend using
|
||||
`Docker images <https://element-hq.github.io/synapse/latest/setup/installation.html#docker-images-and-ansible-playbooks>`_ or `Debian packages from Matrix.org
|
||||
@@ -133,7 +133,7 @@ connect from a client: see
|
||||
An easy way to get started is to login or register via Element at
|
||||
https://app.element.io/#/login or https://app.element.io/#/register respectively.
|
||||
You will need to change the server you are logging into from ``matrix.org``
|
||||
and instead specify a homeserver URL of ``https://<server_name>:8448``
|
||||
and instead specify a Homeserver URL of ``https://<server_name>:8448``
|
||||
(or just ``https://<server_name>`` if you are using a reverse proxy).
|
||||
If you prefer to use another client, refer to our
|
||||
`client breakdown <https://matrix.org/ecosystem/clients/>`_.
|
||||
@@ -162,15 +162,16 @@ the public internet. Without it, anyone can freely register accounts on your hom
|
||||
This can be exploited by attackers to create spambots targeting the rest of the Matrix
|
||||
federation.
|
||||
|
||||
Your new Matrix ID will be formed partly from the ``server_name``, and partly
|
||||
from a localpart you specify when you create the account in the form of::
|
||||
Your new user name will be formed partly from the ``server_name``, and partly
|
||||
from a localpart you specify when you create the account. Your name will take
|
||||
the form of::
|
||||
|
||||
@localpart:my.domain.name
|
||||
|
||||
(pronounced "at localpart on my dot domain dot name").
|
||||
|
||||
As when logging in, you will need to specify a "Custom server". Specify your
|
||||
desired ``localpart`` in the 'Username' box.
|
||||
desired ``localpart`` in the 'User name' box.
|
||||
|
||||
🎯 Troubleshooting and support
|
||||
==============================
|
||||
@@ -208,10 +209,10 @@ Identity servers have the job of mapping email addresses and other 3rd Party
|
||||
IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs
|
||||
before creating that mapping.
|
||||
|
||||
**Identity servers do not store accounts or credentials - these are stored and managed on homeservers.
|
||||
Identity Servers are just for mapping 3rd Party IDs to Matrix IDs.**
|
||||
**They are not where accounts or credentials are stored - these live on home
|
||||
servers. Identity Servers are just for mapping 3rd party IDs to matrix IDs.**
|
||||
|
||||
This process is highly security-sensitive, as there is an obvious risk of spam if it
|
||||
This process is very security-sensitive, as there is obvious risk of spam if it
|
||||
is too easy to sign up for Matrix accounts or harvest 3PID data. In the longer
|
||||
term, we hope to create a decentralised system to manage it (`matrix-doc #712
|
||||
<https://github.com/matrix-org/matrix-doc/issues/712>`_), but in the meantime,
|
||||
@@ -237,9 +238,9 @@ email address.
|
||||
We welcome contributions to Synapse from the community!
|
||||
The best place to get started is our
|
||||
`guide for contributors <https://element-hq.github.io/synapse/latest/development/contributing_guide.html>`_.
|
||||
This is part of our broader `documentation <https://element-hq.github.io/synapse/latest>`_, which includes
|
||||
information for Synapse developers as well as Synapse administrators.
|
||||
This is part of our larger `documentation <https://element-hq.github.io/synapse/latest>`_, which includes
|
||||
|
||||
information for Synapse developers as well as Synapse administrators.
|
||||
Developers might be particularly interested in:
|
||||
|
||||
* `Synapse's database schema <https://element-hq.github.io/synapse/latest/development/database_schema.html>`_,
|
||||
|
||||
@@ -19,17 +19,17 @@ def build(setup_kwargs: Dict[str, Any]) -> None:
# This flag is a no-op in the latest versions. Instead, we need to
# specify this in the `bdist_wheel` config below.
py_limited_api=True,
# We always build in release mode, as we can't distinguish
# between using `poetry` in development vs production.
# We force always building in release mode, as we can't tell the
# difference between using `poetry` in development vs production.
debug=False,
)
setup_kwargs.setdefault("rust_extensions", []).append(extension)
setup_kwargs["zip_safe"] = False

# We look up the minimum supported Python version with
# `python_requires` (e.g. ">=3.9.0,<4.0.0") and finding the first Python
# We lookup the minimum supported python version by looking at
# `python_requires` (e.g. ">=3.9.0,<4.0.0") and finding the first python
# version that matches. We then convert that into the `py_limited_api` form,
# e.g. cp39 for Python 3.9.
# e.g. cp39 for python 3.9.
py_limited_api: str
python_bounds = SpecifierSet(setup_kwargs["python_requires"])
for minor_version in itertools.count(start=8):
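The hunk is cut off at the top of that loop. As a rough, self-contained sketch of the lookup the comments describe (assuming the `packaging` library's `SpecifierSet`, with a hard-coded `python_requires` standing in for `setup_kwargs["python_requires"]`; the real loop body may differ):

```python
import itertools

from packaging.specifiers import SpecifierSet

python_requires = ">=3.9.0,<4.0.0"  # illustrative value
python_bounds = SpecifierSet(python_requires)

py_limited_api = None
for minor_version in itertools.count(start=8):
    # Find the first 3.x version that satisfies `python_requires`...
    if python_bounds.contains(f"3.{minor_version}.0"):
        # ...and convert it into the `py_limited_api` tag form, e.g. "cp39".
        py_limited_api = f"cp3{minor_version}"
        break

print(py_limited_api)  # "cp39" for the example bounds above
```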
Deleted one-line changelog entries (each a separate `@@ -1 +0,0 @@` hunk):

- Speed up upgrading a room with large numbers of banned users.
- When admins enable themselves to see soft-failed events, they will also see if the cause is due to the policy server flagging them as spam via `unsigned`.
- Minor improvements to README.
- Refactor `LaterGauge` metrics to be homeserver-scoped.
- Refactor `GaugeBucketCollector` metrics to be homeserver-scoped.
- Advertise experimental support for [MSC4306](https://github.com/matrix-org/matrix-spec-proposals/pull/4306) through `/_matrix/clients/versions` if enabled.
- Improve order of validation and ratelimiting in room creation.
- Refactor `Histogram` metrics to be homeserver-scoped.
- Refactor `Gauge` metrics to be homeserver-scoped.
- Use `twisted.internet.testing` module in tests instead of deprecated `twisted.test.proto_helpers`.
- Bump minimum version bound on Twisted to 21.2.0.
- Remove obsolete `/send_event` replication endpoint.
- Update metrics linting to be able to handle custom metrics.
- Work around `twisted.protocols.amp.TooLong` error by reducing logging in some tests.
- Deprecate `run_as_background_process` exported as part of the module API interface in favor of `ModuleApi.run_as_background_process`. See the relevant section in the upgrade notes for more information.
- Refactor cache metrics to be homeserver-scoped.
- Fix a long-standing bug where suspended users could not have server notices sent to them (a 403 was returned to the admin).
- Refactor `Histogram` metrics to be homeserver-scoped.
- Prevent "Move labelled issues to correct projects" GitHub Actions workflow from failing when an issue is already on the project board.
- Update implementation of [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-doc/issues/4306) to include automatic subscription conflict prevention as introduced in later drafts.
- Bump minimum supported Rust version (MSRV) to 1.82.0. Missed in [#18553](https://github.com/element-hq/synapse/pull/18553) (released in Synapse 1.134.0).
- Stable support for delegating authentication to [Matrix Authentication Service](https://github.com/element-hq/matrix-authentication-service/).
- Document that there can be multiple workers handling the `receipts` stream.
- Improve worker documentation for some device paths.
- Implement the push rules for experimental [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-doc/issues/4306).
- Fix an issue that could cause logcontexts to be lost on rate-limited requests. Found by @realtyem.
- Make `Clock.sleep(..)` return a coroutine, so that mypy can catch places where we don't await on it.
- CI debugging.
@@ -4396,7 +4396,7 @@
|
||||
"exemplar": false,
|
||||
"expr": "(time() - max without (job, index, host) (avg_over_time(synapse_federation_last_received_pdu_time[10m]))) / 60",
|
||||
"instant": false,
|
||||
"legendFormat": "{{origin_server_name}} ",
|
||||
"legendFormat": "{{server_name}} ",
|
||||
"range": true,
|
||||
"refId": "A"
|
||||
}
|
||||
@@ -4518,7 +4518,7 @@
|
||||
"exemplar": false,
|
||||
"expr": "(time() - max without (job, index, host) (avg_over_time(synapse_federation_last_sent_pdu_time[10m]))) / 60",
|
||||
"instant": false,
|
||||
"legendFormat": "{{destination_server_name}}",
|
||||
"legendFormat": "{{server_name}}",
|
||||
"range": true,
|
||||
"refId": "A"
|
||||
}
|
||||
|
||||
debian/changelog (vendored): 12 changed lines

@@ -1,15 +1,3 @@
matrix-synapse-py3 (1.135.0) stable; urgency=medium

* New Synapse release 1.135.0.

-- Synapse Packaging team <packages@matrix.org>  Fri, 01 Aug 2025 13:12:28 +0100

matrix-synapse-py3 (1.135.0~rc2) stable; urgency=medium

* New Synapse release 1.135.0rc2.

-- Synapse Packaging team <packages@matrix.org>  Wed, 30 Jul 2025 12:19:14 +0100

matrix-synapse-py3 (1.135.0~rc1) stable; urgency=medium

* New Synapse release 1.135.0rc1.
@@ -178,7 +178,6 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
|
||||
"^/_matrix/client/(api/v1|r0|v3|unstable)/login$",
|
||||
"^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$",
|
||||
"^/_matrix/client/(api/v1|r0|v3|unstable)/account/whoami$",
|
||||
"^/_matrix/client/(api/v1|r0|v3|unstable)/account/deactivate$",
|
||||
"^/_matrix/client/(api/v1|r0|v3|unstable)/devices(/|$)",
|
||||
"^/_matrix/client/(r0|v3)/delete_devices$",
|
||||
"^/_matrix/client/versions$",
|
||||
|
||||
@@ -22,46 +22,4 @@ To receive soft failed events in APIs like `/sync` and `/messages`, set `return_
|
||||
to `true` in the admin client config. When `false`, the normal behaviour of these endpoints is to
|
||||
exclude soft failed events.
|
||||
|
||||
**Note**: If the policy server flagged the event as spam and that caused soft failure, that will be indicated
|
||||
in the event's `unsigned` content like so:
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "m.room.message",
|
||||
"other": "event_fields_go_here",
|
||||
"unsigned": {
|
||||
"io.element.synapse.soft_failed": true,
|
||||
"io.element.synapse.policy_server_spammy": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Default: `false`
|
||||
|
||||
## See events marked spammy by policy servers
|
||||
|
||||
Learn more about policy servers from [MSC4284](https://github.com/matrix-org/matrix-spec-proposals/pull/4284).
|
||||
|
||||
Similar to `return_soft_failed_events`, clients logged in with admin accounts can see events which were
|
||||
flagged by the policy server as spammy (and thus soft failed) by setting `return_policy_server_spammy_events`
|
||||
to `true`.
|
||||
|
||||
`return_policy_server_spammy_events` may be `true` while `return_soft_failed_events` is `false` to only see
|
||||
policy server-flagged events. When `return_soft_failed_events` is `true` however, `return_policy_server_spammy_events`
|
||||
is always `true`.
|
||||
|
||||
Events which were flagged by the policy will be flagged as `io.element.synapse.policy_server_spammy` in the
|
||||
event's `unsigned` content, like so:
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "m.room.message",
|
||||
"other": "event_fields_go_here",
|
||||
"unsigned": {
|
||||
"io.element.synapse.soft_failed": true,
|
||||
"io.element.synapse.policy_server_spammy": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Default: `true` if `return_soft_failed_events` is `true`, otherwise `false`
|
||||
|
||||
@@ -117,77 +117,6 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
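For example, a quick way to poll that status endpoint (a sketch only; the base URL and admin access token are placeholders you must supply, and the exact response fields are described in the Admin API docs linked above):

```python
import requests

BASE_URL = "https://matrix.example.com"      # placeholder homeserver URL
ADMIN_TOKEN = "<admin access token>"         # placeholder admin token

resp = requests.get(
    f"{BASE_URL}/_synapse/admin/v1/background_updates/status",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()
status = resp.json()

# `enabled` reports whether background updates are running at all;
# `current_updates` (if present) lists per-database progress of any update in flight.
print(status.get("enabled"), status.get("current_updates"))
```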
# Upgrading to v1.136.0

## Deprecate `run_as_background_process` exported as part of the module API interface in favor of `ModuleApi.run_as_background_process`

The `run_as_background_process` function is now a method of the `ModuleApi` class. If
you were using the function directly from the module API, it will continue to work fine
but the background process metrics will not include an accurate `server_name` label.
This kind of metric labeling isn't relevant for many use cases and is used to
differentiate Synapse instances running in the same Python process (relevant to Synapse
Pro: Small Hosts). We recommend updating your usage to use the new
`ModuleApi.run_as_background_process` method to stay on top of future changes.

<details>
<summary>Example <code>run_as_background_process</code> upgrade</summary>

Before:
```python
class MyModule:
    def __init__(self, module_api: ModuleApi) -> None:
        run_as_background_process(__name__ + ":setup_database", self.setup_database)
```

After:
```python
class MyModule:
    def __init__(self, module_api: ModuleApi) -> None:
        module_api.run_as_background_process(__name__ + ":setup_database", self.setup_database)
```

</details>

## Metric labels have changed on `synapse_federation_last_received_pdu_time` and `synapse_federation_last_sent_pdu_time`

Previously, the `synapse_federation_last_received_pdu_time` and
`synapse_federation_last_sent_pdu_time` metrics both used the `server_name` label to
differentiate between different servers that we send and receive events from.

Since we're now using the `server_name` label to differentiate between different Synapse
homeserver instances running in the same process, these metrics have been changed as follows:

- `synapse_federation_last_received_pdu_time` now uses the `origin_server_name` label
- `synapse_federation_last_sent_pdu_time` now uses the `destination_server_name` label

The Grafana dashboard JSON in `contrib/grafana/synapse.json` has been updated to reflect
this change but you will need to manually update your own existing Grafana dashboards
using these metrics.
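As a concrete illustration (a sketch based on the dashboard query shown in the `contrib/grafana/synapse.json` diff above, not an exhaustive migration guide), a panel tracking inbound federation lag keeps the same expression but changes its legend to the new label:

```
# Minutes since we last received a PDU, per remote server
(time() - max without (job, index, host) (avg_over_time(synapse_federation_last_received_pdu_time[10m]))) / 60
# Legend: was "{{server_name}}", becomes "{{origin_server_name}}"
```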
## Stable integration with Matrix Authentication Service

Support for [Matrix Authentication Service (MAS)](https://github.com/element-hq/matrix-authentication-service) is now stable, with a simplified configuration.
This stable integration requires MAS 0.20.0 or later.

The existing `experimental_features.msc3861` configuration option is now deprecated and will be removed in Synapse v1.137.0.

Synapse deployments already using MAS should now use the new configuration options:

```yaml
matrix_authentication_service:
  # Enable the MAS integration
  enabled: true
  # The base URL where Synapse will contact MAS
  endpoint: http://localhost:8080
  # The shared secret used to authenticate MAS requests, must be the same as `matrix.secret` in the MAS configuration
  # See https://element-hq.github.io/matrix-authentication-service/reference/configuration.html#matrix
  secret: "asecurerandomsecretstring"
```

They must remove the `experimental_features.msc3861` configuration option from their configuration.

They can also remove the client previously used by Synapse [in the MAS configuration](https://element-hq.github.io/matrix-authentication-service/reference/configuration.html#clients) as it is no longer in use.
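For reference, the matching secret on the MAS side lives under the `matrix` section of the MAS configuration, roughly like the sketch below (consult the MAS configuration reference linked above for the full set of options; only the shared secret is shown here):

```yaml
# In the MAS configuration file (not homeserver.yaml).
matrix:
  # Must match `matrix_authentication_service.secret` in Synapse's configuration.
  secret: "asecurerandomsecretstring"
```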
# Upgrading to v1.135.0

## `on_user_registration` module API callback may now run on any worker
|
||||
@@ -208,10 +137,10 @@ native ICU library on your system is no longer required.
|
||||
## Documented endpoint which can be delegated to a federation worker
|
||||
|
||||
The endpoint `^/_matrix/federation/v1/version$` can be delegated to a federation
|
||||
worker. This is not new behaviour, but had not been documented yet. The
|
||||
[list of delegatable endpoints](workers.md#synapseappgeneric_worker) has
|
||||
worker. This is not new behaviour, but had not been documented yet. The
|
||||
[list of delegatable endpoints](workers.md#synapseappgeneric_worker) has
|
||||
been updated to include it. Make sure to check your reverse proxy rules if you
|
||||
are using workers.
|
||||
are using workers.
|
||||
|
||||
# Upgrading to v1.126.0
|
||||
|
||||
|
||||
@@ -643,28 +643,6 @@ no_proxy_hosts:
|
||||
- 172.30.0.0/16
|
||||
```
|
||||
---
|
||||
### `matrix_authentication_service`
|
||||
|
||||
*(object)* The `matrix_authentication_service` setting configures integration with [Matrix Authentication Service (MAS)](https://github.com/element-hq/matrix-authentication-service).
|
||||
|
||||
This setting has the following sub-options:
|
||||
|
||||
* `enabled` (boolean): Whether or not to enable the MAS integration. If this is set to `false`, Synapse will use its legacy internal authentication API. Defaults to `false`.
|
||||
|
||||
* `endpoint` (string): The URL where Synapse can reach MAS. This *must* have the `discovery` and `oauth` resources mounted. Defaults to `"http://localhost:8080"`.
|
||||
|
||||
* `secret` (string|null): A shared secret that will be used to authenticate requests from and to MAS.
|
||||
|
||||
* `secret_path` (string|null): Alternative to `secret`, reading the shared secret from a file. The file should be a plain text file, containing only the secret. Synapse reads the secret from the given file once at startup.
|
||||
|
||||
Example configuration:
|
||||
```yaml
|
||||
matrix_authentication_service:
|
||||
enabled: true
|
||||
secret: someverysecuresecret
|
||||
endpoint: http://localhost:8080
|
||||
```
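If you would rather keep the secret out of the main configuration file, the `secret_path` sub-option described above can be used instead of `secret`. A sketch (the file path is a placeholder):

```yaml
matrix_authentication_service:
  enabled: true
  endpoint: http://localhost:8080
  # Read the shared secret once at startup from a plain text file instead of `secret`.
  secret_path: /etc/synapse/mas_shared_secret
```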
|
||||
---
|
||||
### `dummy_events_threshold`
|
||||
|
||||
*(integer)* Forward extremities can build up in a room due to networking delays between homeservers. Once this happens in a large room, calculation of the state of that room can become quite expensive. To mitigate this, once the number of forward extremities reaches a given threshold, Synapse will send an `org.matrix.dummy_event` event, which will reduce the forward extremities in the room.
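A hedged example of setting this option in the homeserver configuration (the value shown is illustrative, not a recommendation):

```yaml
# Send an org.matrix.dummy_event once a room accumulates this many forward extremities.
dummy_events_threshold: 10
```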
|
||||
|
||||
@@ -238,7 +238,6 @@ information.
|
||||
^/_matrix/client/unstable/im.nheko.summary/summary/.*$
|
||||
^/_matrix/client/(r0|v3|unstable)/account/3pid$
|
||||
^/_matrix/client/(r0|v3|unstable)/account/whoami$
|
||||
^/_matrix/client/(r0|v3|unstable)/account/deactivate$
|
||||
^/_matrix/client/(r0|v3)/delete_devices$
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable)/devices(/|$)
|
||||
^/_matrix/client/versions$
|
||||
@@ -260,7 +259,7 @@ information.
|
||||
^/_matrix/client/(r0|v3|unstable)/keys/claim$
|
||||
^/_matrix/client/(r0|v3|unstable)/room_keys/
|
||||
^/_matrix/client/(r0|v3|unstable)/keys/upload
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/device_signing/upload$
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable/keys/device_signing/upload$
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/signatures/upload$
|
||||
|
||||
# Registration/login requests
|
||||
@@ -532,9 +531,8 @@ the stream writer for the `account_data` stream:

##### The `receipts` stream

The `receipts` stream supports multiple writers. The following endpoints
can be handled by any worker, but should be routed directly to one of the workers
configured as stream writer for the `receipts` stream:
The following endpoints should be routed directly to the worker configured as
the stream writer for the `receipts` stream:

    ^/_matrix/client/(r0|v3|unstable)/rooms/.*/receipt
    ^/_matrix/client/(r0|v3|unstable)/rooms/.*/read_markers
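To take advantage of multiple writers, the homeserver configuration points the `receipts` stream at the chosen workers via `stream_writers`, roughly as sketched below (the worker names are placeholders for workers you have defined):

```yaml
stream_writers:
  # The `receipts` stream supports more than one writer.
  receipts:
    - receipts_writer1
    - receipts_writer2
```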
|
||||
@@ -556,13 +554,13 @@ the stream writer for the `push_rules` stream:
|
||||
##### The `device_lists` stream
|
||||
|
||||
The `device_lists` stream supports multiple writers. The following endpoints
|
||||
can be handled by any worker, but should be routed directly to one of the workers
|
||||
can be handled by any worker, but should be routed directly one of the workers
|
||||
configured as stream writer for the `device_lists` stream:
|
||||
|
||||
^/_matrix/client/(r0|v3)/delete_devices$
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable)/devices(/|$)
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable)/devices/
|
||||
^/_matrix/client/(r0|v3|unstable)/keys/upload
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/device_signing/upload$
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable/keys/device_signing/upload$
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/signatures/upload$
|
||||
|
||||
#### Restrict outbound federation traffic to a specific set of workers
|
||||
|
||||
16
mypy.ini
16
mypy.ini
@@ -1,17 +1,6 @@
|
||||
[mypy]
|
||||
namespace_packages = True
|
||||
# Our custom mypy plugin should remain first in this list.
|
||||
#
|
||||
# mypy has a limitation where it only chooses the first plugin that returns a non-None
|
||||
# value for each hook (known-limitation, c.f.
|
||||
# https://github.com/python/mypy/issues/19524). We workaround this by putting our custom
|
||||
# plugin first in the plugin order and then manually calling any other conflicting
|
||||
# plugin hooks in our own plugin followed by our own checks.
|
||||
#
|
||||
# If you add a new plugin, make sure to check whether the hooks being used conflict with
|
||||
# our custom plugin hooks and if so, manually call the other plugin's hooks in our
|
||||
# custom plugin. (also applies to if the plugin is updated in the future)
|
||||
plugins = scripts-dev/mypy_synapse_plugin.py, pydantic.mypy, mypy_zope:plugin
|
||||
plugins = pydantic.mypy, mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
|
||||
follow_imports = normal
|
||||
show_error_codes = True
|
||||
show_traceback = True
|
||||
@@ -110,6 +99,3 @@ ignore_missing_imports = True
|
||||
|
||||
[mypy-multipart.*]
|
||||
ignore_missing_imports = True
|
||||
|
||||
[mypy-mypy_zope.*]
|
||||
ignore_missing_imports = True
|
||||
|
||||
60
poetry.lock
generated
60
poetry.lock
generated
@@ -1454,18 +1454,18 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "mypy-zope"
|
||||
version = "1.0.13"
|
||||
version = "1.0.12"
|
||||
description = "Plugin for mypy to support zope interfaces"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
groups = ["dev"]
|
||||
files = [
|
||||
{file = "mypy_zope-1.0.13-py3-none-any.whl", hash = "sha256:13740c4cbc910cca2c143c6709e1c483c991abeeeb7b629ad6f73d8ac1edad15"},
|
||||
{file = "mypy_zope-1.0.13.tar.gz", hash = "sha256:63fb4d035ea874baf280dc69e714dcde4bd2a4a4837a0fd8d90ce91bea510f99"},
|
||||
{file = "mypy_zope-1.0.12-py3-none-any.whl", hash = "sha256:f2ecf169f886fbc266e9339db0c2f3818528a7536b9bb4f5ece1d5854dc2f27c"},
|
||||
{file = "mypy_zope-1.0.12.tar.gz", hash = "sha256:d6f8f99eb5644885553b4ec7afc8d68f5daf412c9bf238ec3c36b65d97df6cbe"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
mypy = ">=1.0.0,<1.18.0"
|
||||
mypy = ">=1.0.0,<1.17.0"
|
||||
"zope.interface" = "*"
|
||||
"zope.schema" = "*"
|
||||
|
||||
@@ -1543,14 +1543,14 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "phonenumbers"
|
||||
version = "9.0.10"
|
||||
version = "9.0.9"
|
||||
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "phonenumbers-9.0.10-py2.py3-none-any.whl", hash = "sha256:13b12d269be1f2b363c9bc2868656a7e2e8b50f1a1cef629c75005da6c374c6b"},
|
||||
{file = "phonenumbers-9.0.10.tar.gz", hash = "sha256:c2d15a6a9d0534b14a7764f51246ada99563e263f65b80b0251d1a760ac4a1ba"},
|
||||
{file = "phonenumbers-9.0.9-py2.py3-none-any.whl", hash = "sha256:13b91aa153f87675902829b38a556bad54824f9c121b89588bbb5fa8550d97ef"},
|
||||
{file = "phonenumbers-9.0.9.tar.gz", hash = "sha256:c640545019a07e68b0bea57a5fede6eef45c7391165d28935f45615f9a567a5b"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -2409,30 +2409,30 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "ruff"
|
||||
version = "0.12.7"
|
||||
version = "0.12.4"
|
||||
description = "An extremely fast Python linter and code formatter, written in Rust."
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
groups = ["dev"]
|
||||
files = [
|
||||
{file = "ruff-0.12.7-py3-none-linux_armv6l.whl", hash = "sha256:76e4f31529899b8c434c3c1dede98c4483b89590e15fb49f2d46183801565303"},
|
||||
{file = "ruff-0.12.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:789b7a03e72507c54fb3ba6209e4bb36517b90f1a3569ea17084e3fd295500fb"},
|
||||
{file = "ruff-0.12.7-py3-none-macosx_11_0_arm64.whl", hash = "sha256:2e1c2a3b8626339bb6369116e7030a4cf194ea48f49b64bb505732a7fce4f4e3"},
|
||||
{file = "ruff-0.12.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32dec41817623d388e645612ec70d5757a6d9c035f3744a52c7b195a57e03860"},
|
||||
{file = "ruff-0.12.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:47ef751f722053a5df5fa48d412dbb54d41ab9b17875c6840a58ec63ff0c247c"},
|
||||
{file = "ruff-0.12.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a828a5fc25a3efd3e1ff7b241fd392686c9386f20e5ac90aa9234a5faa12c423"},
|
||||
{file = "ruff-0.12.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:5726f59b171111fa6a69d82aef48f00b56598b03a22f0f4170664ff4d8298efb"},
|
||||
{file = "ruff-0.12.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:74e6f5c04c4dd4aba223f4fe6e7104f79e0eebf7d307e4f9b18c18362124bccd"},
|
||||
{file = "ruff-0.12.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5d0bfe4e77fba61bf2ccadf8cf005d6133e3ce08793bbe870dd1c734f2699a3e"},
|
||||
{file = "ruff-0.12.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06bfb01e1623bf7f59ea749a841da56f8f653d641bfd046edee32ede7ff6c606"},
|
||||
{file = "ruff-0.12.7-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e41df94a957d50083fd09b916d6e89e497246698c3f3d5c681c8b3e7b9bb4ac8"},
|
||||
{file = "ruff-0.12.7-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:4000623300563c709458d0ce170c3d0d788c23a058912f28bbadc6f905d67afa"},
|
||||
{file = "ruff-0.12.7-py3-none-musllinux_1_2_i686.whl", hash = "sha256:69ffe0e5f9b2cf2b8e289a3f8945b402a1b19eff24ec389f45f23c42a3dd6fb5"},
|
||||
{file = "ruff-0.12.7-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a07a5c8ffa2611a52732bdc67bf88e243abd84fe2d7f6daef3826b59abbfeda4"},
|
||||
{file = "ruff-0.12.7-py3-none-win32.whl", hash = "sha256:c928f1b2ec59fb77dfdf70e0419408898b63998789cc98197e15f560b9e77f77"},
|
||||
{file = "ruff-0.12.7-py3-none-win_amd64.whl", hash = "sha256:9c18f3d707ee9edf89da76131956aba1270c6348bfee8f6c647de841eac7194f"},
|
||||
{file = "ruff-0.12.7-py3-none-win_arm64.whl", hash = "sha256:dfce05101dbd11833a0776716d5d1578641b7fddb537fe7fa956ab85d1769b69"},
|
||||
{file = "ruff-0.12.7.tar.gz", hash = "sha256:1fc3193f238bc2d7968772c82831a4ff69252f673be371fb49663f0068b7ec71"},
|
||||
{file = "ruff-0.12.4-py3-none-linux_armv6l.whl", hash = "sha256:cb0d261dac457ab939aeb247e804125a5d521b21adf27e721895b0d3f83a0d0a"},
|
||||
{file = "ruff-0.12.4-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:55c0f4ca9769408d9b9bac530c30d3e66490bd2beb2d3dae3e4128a1f05c7442"},
|
||||
{file = "ruff-0.12.4-py3-none-macosx_11_0_arm64.whl", hash = "sha256:a8224cc3722c9ad9044da7f89c4c1ec452aef2cfe3904365025dd2f51daeae0e"},
|
||||
{file = "ruff-0.12.4-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e9949d01d64fa3672449a51ddb5d7548b33e130240ad418884ee6efa7a229586"},
|
||||
{file = "ruff-0.12.4-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:be0593c69df9ad1465e8a2d10e3defd111fdb62dcd5be23ae2c06da77e8fcffb"},
|
||||
{file = "ruff-0.12.4-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a7dea966bcb55d4ecc4cc3270bccb6f87a337326c9dcd3c07d5b97000dbff41c"},
|
||||
{file = "ruff-0.12.4-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:afcfa3ab5ab5dd0e1c39bf286d829e042a15e966b3726eea79528e2e24d8371a"},
|
||||
{file = "ruff-0.12.4-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c057ce464b1413c926cdb203a0f858cd52f3e73dcb3270a3318d1630f6395bb3"},
|
||||
{file = "ruff-0.12.4-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e64b90d1122dc2713330350626b10d60818930819623abbb56535c6466cce045"},
|
||||
{file = "ruff-0.12.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2abc48f3d9667fdc74022380b5c745873499ff827393a636f7a59da1515e7c57"},
|
||||
{file = "ruff-0.12.4-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:2b2449dc0c138d877d629bea151bee8c0ae3b8e9c43f5fcaafcd0c0d0726b184"},
|
||||
{file = "ruff-0.12.4-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:56e45bb11f625db55f9b70477062e6a1a04d53628eda7784dce6e0f55fd549eb"},
|
||||
{file = "ruff-0.12.4-py3-none-musllinux_1_2_i686.whl", hash = "sha256:478fccdb82ca148a98a9ff43658944f7ab5ec41c3c49d77cd99d44da019371a1"},
|
||||
{file = "ruff-0.12.4-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:0fc426bec2e4e5f4c4f182b9d2ce6a75c85ba9bcdbe5c6f2a74fcb8df437df4b"},
|
||||
{file = "ruff-0.12.4-py3-none-win32.whl", hash = "sha256:4de27977827893cdfb1211d42d84bc180fceb7b72471104671c59be37041cf93"},
|
||||
{file = "ruff-0.12.4-py3-none-win_amd64.whl", hash = "sha256:fe0b9e9eb23736b453143d72d2ceca5db323963330d5b7859d60d101147d461a"},
|
||||
{file = "ruff-0.12.4-py3-none-win_arm64.whl", hash = "sha256:0618ec4442a83ab545e5b71202a5c0ed7791e8471435b94e655b570a5031a98e"},
|
||||
{file = "ruff-0.12.4.tar.gz", hash = "sha256:13efa16df6c6eeb7d0f091abae50f58e9522f3843edb40d56ad52a5a4a4b6873"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -2470,15 +2470,15 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
|
||||
|
||||
[[package]]
|
||||
name = "sentry-sdk"
|
||||
version = "2.34.1"
|
||||
version = "2.32.0"
|
||||
description = "Python client for Sentry (https://sentry.io)"
|
||||
optional = true
|
||||
python-versions = ">=3.6"
|
||||
groups = ["main"]
|
||||
markers = "extra == \"all\" or extra == \"sentry\""
|
||||
files = [
|
||||
{file = "sentry_sdk-2.34.1-py2.py3-none-any.whl", hash = "sha256:b7a072e1cdc5abc48101d5146e1ae680fa81fe886d8d95aaa25a0b450c818d32"},
|
||||
{file = "sentry_sdk-2.34.1.tar.gz", hash = "sha256:69274eb8c5c38562a544c3e9f68b5be0a43be4b697f5fd385bf98e4fbe672687"},
|
||||
{file = "sentry_sdk-2.32.0-py2.py3-none-any.whl", hash = "sha256:6cf51521b099562d7ce3606da928c473643abe99b00ce4cb5626ea735f4ec345"},
|
||||
{file = "sentry_sdk-2.32.0.tar.gz", hash = "sha256:9016c75d9316b0f6921ac14c8cd4fb938f26002430ac5be9945ab280f78bec6b"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -3353,4 +3353,4 @@ url-preview = ["lxml"]
|
||||
[metadata]
|
||||
lock-version = "2.1"
|
||||
python-versions = "^3.9.0"
|
||||
content-hash = "600a349d08dde732df251583094a121b5385eb43ae0c6ceff10dcf9749359446"
|
||||
content-hash = "d2560fb09c99bf87690749ad902753cfa3f3063bd14cd9d0c0f37ca9e89a7757"
|
||||
|
||||
@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"
|
||||
|
||||
[tool.poetry]
|
||||
name = "matrix-synapse"
|
||||
version = "1.135.0"
|
||||
version = "1.135.0rc1"
|
||||
description = "Homeserver for the Matrix decentralised comms protocol"
|
||||
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
|
||||
license = "AGPL-3.0-or-later"
|
||||
@@ -324,7 +324,7 @@ all = [
|
||||
# failing on new releases. Keeping lower bounds loose here means that dependabot
|
||||
# can bump versions without having to update the content-hash in the lockfile.
|
||||
# This helps prevents merge conflicts when running a batch of dependabot updates.
|
||||
ruff = "0.12.7"
|
||||
ruff = "0.12.4"
|
||||
# Type checking only works with the pydantic.v1 compat module from pydantic v2
|
||||
pydantic = "^2"
|
||||
|
||||
|
||||
@@ -7,7 +7,7 @@ name = "synapse"
|
||||
version = "0.1.0"
|
||||
|
||||
edition = "2021"
|
||||
rust-version = "1.82.0"
|
||||
rust-version = "1.81.0"
|
||||
|
||||
[lib]
|
||||
name = "synapse"
|
||||
|
||||
@@ -61,7 +61,6 @@ fn bench_match_exact(b: &mut Bencher) {
|
||||
vec![],
|
||||
false,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
@@ -72,10 +71,10 @@ fn bench_match_exact(b: &mut Bencher) {
|
||||
},
|
||||
));
|
||||
|
||||
let matched = eval.match_condition(&condition, None, None, None).unwrap();
|
||||
let matched = eval.match_condition(&condition, None, None).unwrap();
|
||||
assert!(matched, "Didn't match");
|
||||
|
||||
b.iter(|| eval.match_condition(&condition, None, None, None).unwrap());
|
||||
b.iter(|| eval.match_condition(&condition, None, None).unwrap());
|
||||
}
|
||||
|
||||
#[bench]
|
||||
@@ -108,7 +107,6 @@ fn bench_match_word(b: &mut Bencher) {
|
||||
vec![],
|
||||
false,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
@@ -119,10 +117,10 @@ fn bench_match_word(b: &mut Bencher) {
|
||||
},
|
||||
));
|
||||
|
||||
let matched = eval.match_condition(&condition, None, None, None).unwrap();
|
||||
let matched = eval.match_condition(&condition, None, None).unwrap();
|
||||
assert!(matched, "Didn't match");
|
||||
|
||||
b.iter(|| eval.match_condition(&condition, None, None, None).unwrap());
|
||||
b.iter(|| eval.match_condition(&condition, None, None).unwrap());
|
||||
}
|
||||
|
||||
#[bench]
|
||||
@@ -155,7 +153,6 @@ fn bench_match_word_miss(b: &mut Bencher) {
|
||||
vec![],
|
||||
false,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
@@ -166,10 +163,10 @@ fn bench_match_word_miss(b: &mut Bencher) {
|
||||
},
|
||||
));
|
||||
|
||||
let matched = eval.match_condition(&condition, None, None, None).unwrap();
|
||||
let matched = eval.match_condition(&condition, None, None).unwrap();
|
||||
assert!(!matched, "Didn't match");
|
||||
|
||||
b.iter(|| eval.match_condition(&condition, None, None, None).unwrap());
|
||||
b.iter(|| eval.match_condition(&condition, None, None).unwrap());
|
||||
}
|
||||
|
||||
#[bench]
|
||||
@@ -202,7 +199,6 @@ fn bench_eval_message(b: &mut Bencher) {
|
||||
vec![],
|
||||
false,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
@@ -214,8 +210,7 @@ fn bench_eval_message(b: &mut Bencher) {
|
||||
false,
|
||||
false,
|
||||
false,
|
||||
false,
|
||||
);
|
||||
|
||||
b.iter(|| eval.run(&rules, Some("bob"), Some("person"), None));
|
||||
b.iter(|| eval.run(&rules, Some("bob"), Some("person")));
|
||||
}
|
||||
|
||||
@@ -54,7 +54,6 @@ enum EventInternalMetadataData {
|
||||
RecheckRedaction(bool),
|
||||
SoftFailed(bool),
|
||||
ProactivelySend(bool),
|
||||
PolicyServerSpammy(bool),
|
||||
Redacted(bool),
|
||||
TxnId(Box<str>),
|
||||
TokenId(i64),
|
||||
@@ -97,13 +96,6 @@ impl EventInternalMetadataData {
|
||||
.to_owned()
|
||||
.into_any(),
|
||||
),
|
||||
EventInternalMetadataData::PolicyServerSpammy(o) => (
|
||||
pyo3::intern!(py, "policy_server_spammy"),
|
||||
o.into_pyobject(py)
|
||||
.unwrap_infallible()
|
||||
.to_owned()
|
||||
.into_any(),
|
||||
),
|
||||
EventInternalMetadataData::Redacted(o) => (
|
||||
pyo3::intern!(py, "redacted"),
|
||||
o.into_pyobject(py)
|
||||
@@ -163,11 +155,6 @@ impl EventInternalMetadataData {
|
||||
.extract()
|
||||
.with_context(|| format!("'{key_str}' has invalid type"))?,
|
||||
),
|
||||
"policy_server_spammy" => EventInternalMetadataData::PolicyServerSpammy(
|
||||
value
|
||||
.extract()
|
||||
.with_context(|| format!("'{key_str}' has invalid type"))?,
|
||||
),
|
||||
"redacted" => EventInternalMetadataData::Redacted(
|
||||
value
|
||||
.extract()
|
||||
@@ -440,17 +427,6 @@ impl EventInternalMetadata {
|
||||
set_property!(self, ProactivelySend, obj);
|
||||
}
|
||||
|
||||
#[getter]
|
||||
fn get_policy_server_spammy(&self) -> PyResult<bool> {
|
||||
Ok(get_property_opt!(self, PolicyServerSpammy)
|
||||
.copied()
|
||||
.unwrap_or(false))
|
||||
}
|
||||
#[setter]
|
||||
fn set_policy_server_spammy(&mut self, obj: bool) {
|
||||
set_property!(self, PolicyServerSpammy, obj);
|
||||
}
|
||||
|
||||
#[getter]
|
||||
fn get_redacted(&self) -> PyResult<bool> {
|
||||
let bool = get_property!(self, Redacted)?;
|
||||
|
||||
@@ -290,26 +290,6 @@ pub const BASE_APPEND_CONTENT_RULES: &[PushRule] = &[PushRule {
|
||||
}];
|
||||
|
||||
pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
|
||||
PushRule {
|
||||
rule_id: Cow::Borrowed("global/content/.io.element.msc4306.rule.unsubscribed_thread"),
|
||||
priority_class: 1,
|
||||
conditions: Cow::Borrowed(&[Condition::Known(
|
||||
KnownCondition::Msc4306ThreadSubscription { subscribed: false },
|
||||
)]),
|
||||
actions: Cow::Borrowed(&[]),
|
||||
default: true,
|
||||
default_enabled: true,
|
||||
},
|
||||
PushRule {
|
||||
rule_id: Cow::Borrowed("global/content/.io.element.msc4306.rule.subscribed_thread"),
|
||||
priority_class: 1,
|
||||
conditions: Cow::Borrowed(&[Condition::Known(
|
||||
KnownCondition::Msc4306ThreadSubscription { subscribed: true },
|
||||
)]),
|
||||
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION]),
|
||||
default: true,
|
||||
default_enabled: true,
|
||||
},
|
||||
PushRule {
|
||||
rule_id: Cow::Borrowed("global/underride/.m.rule.call"),
|
||||
priority_class: 1,
|
||||
|
||||
@@ -106,11 +106,8 @@ pub struct PushRuleEvaluator {
|
||||
/// flag as MSC1767 (extensible events core).
|
||||
msc3931_enabled: bool,
|
||||
|
||||
/// If MSC4210 (remove legacy mentions) is enabled.
|
||||
// If MSC4210 (remove legacy mentions) is enabled.
|
||||
msc4210_enabled: bool,
|
||||
|
||||
/// If MSC4306 (thread subscriptions) is enabled.
|
||||
msc4306_enabled: bool,
|
||||
}
|
||||
|
||||
#[pymethods]
|
||||
@@ -129,7 +126,6 @@ impl PushRuleEvaluator {
|
||||
room_version_feature_flags,
|
||||
msc3931_enabled,
|
||||
msc4210_enabled,
|
||||
msc4306_enabled,
|
||||
))]
|
||||
pub fn py_new(
|
||||
flattened_keys: BTreeMap<String, JsonValue>,
|
||||
@@ -142,7 +138,6 @@ impl PushRuleEvaluator {
|
||||
room_version_feature_flags: Vec<String>,
|
||||
msc3931_enabled: bool,
|
||||
msc4210_enabled: bool,
|
||||
msc4306_enabled: bool,
|
||||
) -> Result<Self, Error> {
|
||||
let body = match flattened_keys.get("content.body") {
|
||||
Some(JsonValue::Value(SimpleJsonValue::Str(s))) => s.clone().into_owned(),
|
||||
@@ -161,7 +156,6 @@ impl PushRuleEvaluator {
|
||||
room_version_feature_flags,
|
||||
msc3931_enabled,
|
||||
msc4210_enabled,
|
||||
msc4306_enabled,
|
||||
})
|
||||
}
|
||||
|
||||
@@ -173,19 +167,12 @@ impl PushRuleEvaluator {
|
||||
///
|
||||
/// Returns the set of actions, if any, that match (filtering out any
|
||||
/// `dont_notify` and `coalesce` actions).
|
||||
///
|
||||
/// msc4306_thread_subscription_state: (Only populated if MSC4306 is enabled)
|
||||
/// The thread subscription state corresponding to the thread containing this event.
|
||||
/// - `None` if the event is not in a thread, or if MSC4306 is disabled.
|
||||
/// - `Some(true)` if the event is in a thread and the user has a subscription for that thread
|
||||
/// - `Some(false)` if the event is in a thread and the user does NOT have a subscription for that thread
|
||||
#[pyo3(signature = (push_rules, user_id=None, display_name=None, msc4306_thread_subscription_state=None))]
|
||||
#[pyo3(signature = (push_rules, user_id=None, display_name=None))]
|
||||
pub fn run(
|
||||
&self,
|
||||
push_rules: &FilteredPushRules,
|
||||
user_id: Option<&str>,
|
||||
display_name: Option<&str>,
|
||||
msc4306_thread_subscription_state: Option<bool>,
|
||||
) -> Vec<Action> {
|
||||
'outer: for (push_rule, enabled) in push_rules.iter() {
|
||||
if !enabled {
|
||||
@@ -217,12 +204,7 @@ impl PushRuleEvaluator {
|
||||
Condition::Known(KnownCondition::RoomVersionSupports { feature: _ }),
|
||||
);
|
||||
|
||||
match self.match_condition(
|
||||
condition,
|
||||
user_id,
|
||||
display_name,
|
||||
msc4306_thread_subscription_state,
|
||||
) {
|
||||
match self.match_condition(condition, user_id, display_name) {
|
||||
Ok(true) => {}
|
||||
Ok(false) => continue 'outer,
|
||||
Err(err) => {
|
||||
@@ -255,20 +237,14 @@ impl PushRuleEvaluator {
|
||||
}
|
||||
|
||||
/// Check if the given condition matches.
|
||||
#[pyo3(signature = (condition, user_id=None, display_name=None, msc4306_thread_subscription_state=None))]
|
||||
#[pyo3(signature = (condition, user_id=None, display_name=None))]
|
||||
fn matches(
|
||||
&self,
|
||||
condition: Condition,
|
||||
user_id: Option<&str>,
|
||||
display_name: Option<&str>,
|
||||
msc4306_thread_subscription_state: Option<bool>,
|
||||
) -> bool {
|
||||
match self.match_condition(
|
||||
&condition,
|
||||
user_id,
|
||||
display_name,
|
||||
msc4306_thread_subscription_state,
|
||||
) {
|
||||
match self.match_condition(&condition, user_id, display_name) {
|
||||
Ok(true) => true,
|
||||
Ok(false) => false,
|
||||
Err(err) => {
|
||||
@@ -286,7 +262,6 @@ impl PushRuleEvaluator {
|
||||
condition: &Condition,
|
||||
user_id: Option<&str>,
|
||||
display_name: Option<&str>,
|
||||
msc4306_thread_subscription_state: Option<bool>,
|
||||
) -> Result<bool, Error> {
|
||||
let known_condition = match condition {
|
||||
Condition::Known(known) => known,
|
||||
@@ -418,13 +393,6 @@ impl PushRuleEvaluator {
|
||||
&& self.room_version_feature_flags.contains(&flag)
|
||||
}
|
||||
}
|
||||
KnownCondition::Msc4306ThreadSubscription { subscribed } => {
|
||||
if !self.msc4306_enabled {
|
||||
false
|
||||
} else {
|
||||
msc4306_thread_subscription_state == Some(*subscribed)
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
Ok(result)
|
||||
@@ -568,11 +536,10 @@ fn push_rule_evaluator() {
|
||||
vec![],
|
||||
true,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
let result = evaluator.run(&FilteredPushRules::default(), None, Some("bob"), None);
|
||||
let result = evaluator.run(&FilteredPushRules::default(), None, Some("bob"));
|
||||
assert_eq!(result.len(), 3);
|
||||
}
|
||||
|
||||
@@ -599,7 +566,6 @@ fn test_requires_room_version_supports_condition() {
|
||||
flags,
|
||||
true,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
.unwrap();
|
||||
|
||||
@@ -609,7 +575,6 @@ fn test_requires_room_version_supports_condition() {
|
||||
&FilteredPushRules::default(),
|
||||
Some("@bob:example.org"),
|
||||
None,
|
||||
None,
|
||||
);
|
||||
assert_eq!(result.len(), 3);
|
||||
|
||||
@@ -628,17 +593,7 @@ fn test_requires_room_version_supports_condition() {
|
||||
};
|
||||
let rules = PushRules::new(vec![custom_rule]);
|
||||
result = evaluator.run(
|
||||
&FilteredPushRules::py_new(
|
||||
rules,
|
||||
BTreeMap::new(),
|
||||
true,
|
||||
false,
|
||||
true,
|
||||
false,
|
||||
false,
|
||||
false,
|
||||
),
|
||||
None,
|
||||
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false, false),
|
||||
None,
|
||||
None,
|
||||
);
|
||||
|
||||
@@ -369,10 +369,6 @@ pub enum KnownCondition {
|
||||
RoomVersionSupports {
|
||||
feature: Cow<'static, str>,
|
||||
},
|
||||
#[serde(rename = "io.element.msc4306.thread_subscription")]
|
||||
Msc4306ThreadSubscription {
|
||||
subscribed: bool,
|
||||
},
|
||||
}
|
||||
|
||||
impl<'source> IntoPyObject<'source> for Condition {
|
||||
@@ -551,13 +547,11 @@ pub struct FilteredPushRules {
|
||||
msc3664_enabled: bool,
|
||||
msc4028_push_encrypted_events: bool,
|
||||
msc4210_enabled: bool,
|
||||
msc4306_enabled: bool,
|
||||
}
|
||||
|
||||
#[pymethods]
|
||||
impl FilteredPushRules {
|
||||
#[new]
|
||||
#[allow(clippy::too_many_arguments)]
|
||||
pub fn py_new(
|
||||
push_rules: PushRules,
|
||||
enabled_map: BTreeMap<String, bool>,
|
||||
@@ -566,7 +560,6 @@ impl FilteredPushRules {
|
||||
msc3664_enabled: bool,
|
||||
msc4028_push_encrypted_events: bool,
|
||||
msc4210_enabled: bool,
|
||||
msc4306_enabled: bool,
|
||||
) -> Self {
|
||||
Self {
|
||||
push_rules,
|
||||
@@ -576,7 +569,6 @@ impl FilteredPushRules {
|
||||
msc3664_enabled,
|
||||
msc4028_push_encrypted_events,
|
||||
msc4210_enabled,
|
||||
msc4306_enabled,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -627,10 +619,6 @@ impl FilteredPushRules {
|
||||
return false;
|
||||
}
|
||||
|
||||
if !self.msc4306_enabled && rule.rule_id.contains("/.io.element.msc4306.rule.") {
|
||||
return false;
|
||||
}
|
||||
|
||||
true
|
||||
})
|
||||
.map(|r| {
|
||||
|
||||
@@ -656,43 +656,6 @@ properties:
|
||||
- - master.hostname.example.com
|
||||
- 10.1.0.0/16
|
||||
- 172.30.0.0/16
|
||||
matrix_authentication_service:
|
||||
type: object
|
||||
description: >-
|
||||
The `matrix_authentication_service` setting configures integration with
|
||||
[Matrix Authentication Service (MAS)](https://github.com/element-hq/matrix-authentication-service).
|
||||
properties:
|
||||
enabled:
|
||||
type: boolean
|
||||
description: >-
|
||||
Whether or not to enable the MAS integration. If this is set to
|
||||
`false`, Synapse will use its legacy internal authentication API.
|
||||
default: false
|
||||
|
||||
endpoint:
|
||||
type: string
|
||||
format: uri
|
||||
description: >-
|
||||
The URL where Synapse can reach MAS. This *must* have the `discovery`
|
||||
and `oauth` resources mounted.
|
||||
default: http://localhost:8080
|
||||
|
||||
secret:
|
||||
type: ["string", "null"]
|
||||
description: >-
|
||||
A shared secret that will be used to authenticate requests from and to MAS.
|
||||
|
||||
secret_path:
|
||||
type: ["string", "null"]
|
||||
description: >-
|
||||
Alternative to `secret`, reading the shared secret from a file.
|
||||
The file should be a plain text file, containing only the secret.
|
||||
Synapse reads the secret from the given file once at startup.
|
||||
|
||||
examples:
|
||||
- enabled: true
|
||||
secret: someverysecuresecret
|
||||
endpoint: http://localhost:8080
|
||||
dummy_events_threshold:
|
||||
type: integer
|
||||
description: >-
|
||||
|
||||
@@ -23,21 +23,16 @@
|
||||
can crop up, e.g the cache descriptors.
|
||||
"""
|
||||
|
||||
import enum
|
||||
from typing import Callable, Mapping, Optional, Tuple, Type, Union
|
||||
from typing import Callable, Optional, Tuple, Type, Union
|
||||
|
||||
import attr
|
||||
import mypy.types
|
||||
from mypy.erasetype import remove_instance_last_known_values
|
||||
from mypy.errorcodes import ErrorCode
|
||||
from mypy.nodes import ARG_NAMED_OPT, ListExpr, NameExpr, TempNode, TupleExpr, Var
|
||||
from mypy.nodes import ARG_NAMED_OPT, ListExpr, NameExpr, TempNode, Var
|
||||
from mypy.plugin import (
|
||||
ClassDefContext,
|
||||
Context,
|
||||
FunctionLike,
|
||||
FunctionSigContext,
|
||||
MethodSigContext,
|
||||
MypyFile,
|
||||
Plugin,
|
||||
)
|
||||
from mypy.typeops import bind_self
|
||||
@@ -46,15 +41,12 @@ from mypy.types import (
|
||||
CallableType,
|
||||
Instance,
|
||||
NoneType,
|
||||
Options,
|
||||
TupleType,
|
||||
TypeAliasType,
|
||||
TypeVarType,
|
||||
UninhabitedType,
|
||||
UnionType,
|
||||
)
|
||||
from mypy_zope import plugin as mypy_zope_plugin
|
||||
from pydantic.mypy import plugin as mypy_pydantic_plugin
|
||||
|
||||
PROMETHEUS_METRIC_MISSING_SERVER_NAME_LABEL = ErrorCode(
|
||||
"missing-server-name-label",
|
||||
@@ -62,153 +54,17 @@ PROMETHEUS_METRIC_MISSING_SERVER_NAME_LABEL = ErrorCode(
|
||||
category="per-homeserver-tenant-metrics",
|
||||
)
|
||||
|
||||
PROMETHEUS_METRIC_MISSING_FROM_LIST_TO_CHECK = ErrorCode(
|
||||
"metric-type-missing-from-list",
|
||||
"Every Prometheus metric type must be included in the `prometheus_metric_fullname_to_label_arg_map`.",
|
||||
category="per-homeserver-tenant-metrics",
|
||||
)
|
||||
|
||||
|
||||
class Sentinel(enum.Enum):
|
||||
# defining a sentinel in this way allows mypy to correctly handle the
|
||||
# type of a dictionary lookup and subsequent type narrowing.
|
||||
UNSET_SENTINEL = object()
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True)
|
||||
class ArgLocation:
|
||||
keyword_name: str
|
||||
"""
|
||||
The keyword argument name for this argument
|
||||
"""
|
||||
position: int
|
||||
"""
|
||||
The 0-based positional index of this argument
|
||||
"""
|
||||
|
||||
|
||||
prometheus_metric_fullname_to_label_arg_map: Mapping[str, Optional[ArgLocation]] = {
|
||||
# `Collector` subclasses:
|
||||
"prometheus_client.metrics.MetricWrapperBase": ArgLocation("labelnames", 2),
|
||||
"prometheus_client.metrics.Counter": ArgLocation("labelnames", 2),
|
||||
"prometheus_client.metrics.Histogram": ArgLocation("labelnames", 2),
|
||||
"prometheus_client.metrics.Gauge": ArgLocation("labelnames", 2),
|
||||
"prometheus_client.metrics.Summary": ArgLocation("labelnames", 2),
|
||||
"prometheus_client.metrics.Info": ArgLocation("labelnames", 2),
|
||||
"prometheus_client.metrics.Enum": ArgLocation("labelnames", 2),
|
||||
"synapse.metrics.LaterGauge": ArgLocation("labelnames", 2),
|
||||
"synapse.metrics.InFlightGauge": ArgLocation("labels", 2),
|
||||
"synapse.metrics.GaugeBucketCollector": ArgLocation("labelnames", 2),
|
||||
"prometheus_client.registry.Collector": None,
|
||||
"prometheus_client.registry._EmptyCollector": None,
|
||||
"prometheus_client.registry.CollectorRegistry": None,
|
||||
"prometheus_client.process_collector.ProcessCollector": None,
|
||||
"prometheus_client.platform_collector.PlatformCollector": None,
|
||||
"prometheus_client.gc_collector.GCCollector": None,
|
||||
"synapse.metrics._gc.GCCounts": None,
|
||||
"synapse.metrics._gc.PyPyGCStats": None,
|
||||
"synapse.metrics._reactor_metrics.ReactorLastSeenMetric": None,
|
||||
"synapse.metrics.CPUMetrics": None,
|
||||
"synapse.metrics.jemalloc.JemallocCollector": None,
|
||||
"synapse.util.metrics.DynamicCollectorRegistry": None,
|
||||
"synapse.metrics.background_process_metrics._Collector": None,
|
||||
#
|
||||
# `Metric` subclasses:
|
||||
"prometheus_client.metrics_core.Metric": None,
|
||||
"prometheus_client.metrics_core.UnknownMetricFamily": ArgLocation("labels", 3),
|
||||
"prometheus_client.metrics_core.CounterMetricFamily": ArgLocation("labels", 3),
|
||||
"prometheus_client.metrics_core.GaugeMetricFamily": ArgLocation("labels", 3),
|
||||
"prometheus_client.metrics_core.SummaryMetricFamily": ArgLocation("labels", 3),
|
||||
"prometheus_client.metrics_core.InfoMetricFamily": ArgLocation("labels", 3),
|
||||
"prometheus_client.metrics_core.HistogramMetricFamily": ArgLocation("labels", 3),
|
||||
"prometheus_client.metrics_core.GaugeHistogramMetricFamily": ArgLocation(
|
||||
"labels", 4
|
||||
),
|
||||
"prometheus_client.metrics_core.StateSetMetricFamily": ArgLocation("labels", 3),
|
||||
"synapse.metrics.GaugeHistogramMetricFamilyWithLabels": ArgLocation(
|
||||
"labelnames", 4
|
||||
),
|
||||
}
|
||||
"""
|
||||
Map from the fullname of the Prometheus `Metric`/`Collector` classes to the keyword
|
||||
argument name and positional index of the label names. This map is useful because
|
||||
different metrics have different signatures for passing in label names and we just need
|
||||
to know where to look.
|
||||
|
||||
This map should include any metrics that we collect with Prometheus. Which corresponds
|
||||
to anything that inherits from `prometheus_client.registry.Collector`
|
||||
(`synapse.metrics._types.Collector`) or `prometheus_client.metrics_core.Metric`. The
|
||||
exhaustiveness of this list is enforced by `analyze_prometheus_metric_classes`.
|
||||
|
||||
The entries with `None` always fail the lint because they don't have a `labelnames`
|
||||
argument (therefore, no `SERVER_NAME_LABEL`), but we include them here so that people
|
||||
can notice and manually allow via a type ignore comment as the source of truth
|
||||
should be in the source code.
|
||||
"""

# Unbound at this point because we don't know the mypy version yet.
# This is set in the `plugin(...)` function below.
MypyPydanticPluginClass: Type[Plugin]
MypyZopePluginClass: Type[Plugin]


class SynapsePlugin(Plugin):
def __init__(self, options: Options):
super().__init__(options)
self.mypy_pydantic_plugin = MypyPydanticPluginClass(options)
self.mypy_zope_plugin = MypyZopePluginClass(options)

def set_modules(self, modules: dict[str, MypyFile]) -> None:
"""
This is called by mypy internals. We have to override this to ensure it's also
called for any other plugins that we're manually handling.

Here is how mypy describes it:

> [`self._modules`] can't be set in `__init__` because it is executed too soon
> in `build.py`. Therefore, `build.py` *must* set it later before graph processing
> starts by calling `set_modules()`.
"""
super().set_modules(modules)
self.mypy_pydantic_plugin.set_modules(modules)
self.mypy_zope_plugin.set_modules(modules)

def get_base_class_hook(
self, fullname: str
) -> Optional[Callable[[ClassDefContext], None]]:
def _get_base_class_hook(ctx: ClassDefContext) -> None:
# Run any `get_base_class_hook` checks from other plugins first.
#
# Unfortunately, because mypy only chooses the first plugin that returns a
# non-None value (a known limitation, cf.
# https://github.com/python/mypy/issues/19524), we work around this by
# putting our custom plugin first in the plugin order and then calling the
# other plugin's hook manually followed by our own checks.
if callback := self.mypy_pydantic_plugin.get_base_class_hook(fullname):
callback(ctx)
if callback := self.mypy_zope_plugin.get_base_class_hook(fullname):
callback(ctx)

# Now run our own checks
analyze_prometheus_metric_classes(ctx)

return _get_base_class_hook

def get_function_signature_hook(
self, fullname: str
) -> Optional[Callable[[FunctionSigContext], FunctionLike]]:
# Strip off the unique identifier for classes that are dynamically created inside
# functions. ex. `synapse.metrics.jemalloc.JemallocCollector@185` (this is the line
# number)
if "@" in fullname:
fullname = fullname.split("@", 1)[0]

# Look for any Prometheus metrics to make sure they have the `SERVER_NAME_LABEL`
# label.
if fullname in prometheus_metric_fullname_to_label_arg_map.keys():
# Because it's difficult to determine the `fullname` of the function in the
# callback, let's just pass it in while we have it.
return lambda ctx: check_prometheus_metric_instantiation(ctx, fullname)
if fullname in (
"prometheus_client.metrics.Counter",
# TODO: Add other prometheus_client metrics that need checking as we
# refactor, see https://github.com/element-hq/synapse/issues/18592
):
return check_prometheus_metric_instantiation

return None

@@ -232,44 +88,7 @@ class SynapsePlugin(Plugin):
return None


def analyze_prometheus_metric_classes(ctx: ClassDefContext) -> None:
"""
Cross-check the list of Prometheus metric classes against the
`prometheus_metric_fullname_to_label_arg_map` to ensure the list is exhaustive and
up-to-date.
"""

fullname = ctx.cls.fullname
# Strip off the unique identifier for classes that are dynamically created inside
# functions. ex. `synapse.metrics.jemalloc.JemallocCollector@185` (this is the line
# number)
if "@" in fullname:
fullname = fullname.split("@", 1)[0]

if any(
ancestor_type.fullname
in (
# All of the Prometheus metric classes inherit from the `Collector`.
"prometheus_client.registry.Collector",
"synapse.metrics._types.Collector",
# And custom metrics that inherit from `Metric`.
"prometheus_client.metrics_core.Metric",
)
for ancestor_type in ctx.cls.info.mro
):
if fullname not in prometheus_metric_fullname_to_label_arg_map:
ctx.api.fail(
f"Expected {fullname} to be in `prometheus_metric_fullname_to_label_arg_map`, "
f"but it was not found. This is a problem with our custom mypy plugin. "
f"Please add it to the map.",
Context(),
code=PROMETHEUS_METRIC_MISSING_FROM_LIST_TO_CHECK,
)


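# Hypothetical example (not part of this diff) of what the exhaustiveness check
# above reports: a newly added collector that is missing from
# `prometheus_metric_fullname_to_label_arg_map` fails with the
# `PROMETHEUS_METRIC_MISSING_FROM_LIST_TO_CHECK` error code until it is added to
# the map:
#
#     class MyNewCollector(Collector):
#         def collect(self) -> Iterable[Metric]: ...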
def check_prometheus_metric_instantiation(
ctx: FunctionSigContext, fullname: str
) -> CallableType:
def check_prometheus_metric_instantiation(ctx: FunctionSigContext) -> CallableType:
"""
Ensure that the `prometheus_client` metrics include the `SERVER_NAME_LABEL` label
when instantiated.
@@ -279,52 +98,21 @@ def check_prometheus_metric_instantiation(
ensures metrics are correctly separated by homeserver.

There are also some metrics that apply at the process level, such as CPU usage,
Python garbage collection, and Twisted reactor tick time, which shouldn't have the
`SERVER_NAME_LABEL`. In those cases, use a type ignore comment to disable the
Python garbage collection, Twisted reactor tick time which shouldn't have the
`SERVER_NAME_LABEL`. In those cases, use use a type ignore comment to disable the
check, e.g. `# type: ignore[missing-server-name-label]`.

Args:
ctx: The `FunctionSigContext` from mypy.
fullname: The fully qualified name of the function being called,
e.g. `"prometheus_client.metrics.Counter"`
"""
# The true signature, this isn't being modified so this is what will be returned.
signature = ctx.default_signature

# Find where the label names argument is in the function signature.
arg_location = prometheus_metric_fullname_to_label_arg_map.get(
fullname, Sentinel.UNSET_SENTINEL
)
assert arg_location is not Sentinel.UNSET_SENTINEL, (
f"Expected to find {fullname} in `prometheus_metric_fullname_to_label_arg_map`, "
f"but it was not found. This is a problem with our custom mypy plugin. "
f"Please add it to the map. Context: {ctx.context}"
)
# People should be using `# type: ignore[missing-server-name-label]` for
# process-level metrics that should not have the `SERVER_NAME_LABEL`.
if arg_location is None:
ctx.api.fail(
f"{signature.name} does not have a `labelnames`/`labels` argument "
"(if this is untrue, update `prometheus_metric_fullname_to_label_arg_map` "
"in our custom mypy plugin) and should probably have a type ignore comment, "
"e.g. `# type: ignore[missing-server-name-label]`. The reason we don't "
"automatically ignore this is the source of truth should be in the source code.",
ctx.context,
code=PROMETHEUS_METRIC_MISSING_SERVER_NAME_LABEL,
)
return signature
signature: CallableType = ctx.default_signature

# Sanity check the arguments are still as expected in this version of
# `prometheus_client`. ex. `Counter(name, documentation, labelnames, ...)`
#
# `signature.arg_names` should be: ["name", "documentation", "labelnames", ...]
if (
len(signature.arg_names) < (arg_location.position + 1)
or signature.arg_names[arg_location.position] != arg_location.keyword_name
):
if len(signature.arg_names) < 3 or signature.arg_names[2] != "labelnames":
ctx.api.fail(
f"Expected argument number {arg_location.position + 1} of {signature.name} to be `labelnames`/`labels`, "
f"but got {signature.arg_names[arg_location.position]}",
f"Expected the 3rd argument of {signature.name} to be 'labelnames', but got "
f"{signature.arg_names[2]}",
ctx.context,
)
return signature
@@ -347,12 +135,8 @@ def check_prometheus_metric_instantiation(
# ...
# ]
# ```
labelnames_arg_expression = (
ctx.args[arg_location.position][0]
if len(ctx.args[arg_location.position]) > 0
else None
)
if isinstance(labelnames_arg_expression, (ListExpr, TupleExpr)):
labelnames_arg_expression = ctx.args[2][0] if len(ctx.args[2]) > 0 else None
if isinstance(labelnames_arg_expression, ListExpr):
# Check if the `labelnames` argument includes the `server_name` label (`SERVER_NAME_LABEL`).
for labelname_expression in labelnames_arg_expression.items:
if (
@@ -690,13 +474,10 @@ def is_cacheable(


def plugin(version: str) -> Type[SynapsePlugin]:
global MypyPydanticPluginClass, MypyZopePluginClass
# This is the entry point of the plugin, and lets us deal with the fact
# that the mypy plugin interface is *not* stable by looking at the version
# string.
#
# However, since we pin the version of mypy Synapse uses in CI, we don't
# really care.
MypyPydanticPluginClass = mypy_pydantic_plugin(version)
MypyZopePluginClass = mypy_zope_plugin(version)
return SynapsePlugin

@@ -45,6 +45,16 @@ if py_version < (3, 9):
|
||||
|
||||
# Allow using the asyncio reactor via env var.
|
||||
if strtobool(os.environ.get("SYNAPSE_ASYNC_IO_REACTOR", "0")):
|
||||
from incremental import Version
|
||||
|
||||
import twisted
|
||||
|
||||
# We need a bugfix that is included in Twisted 21.2.0:
|
||||
# https://twistedmatrix.com/trac/ticket/9787
|
||||
if twisted.version < Version("Twisted", 21, 2, 0):
|
||||
print("Using asyncio reactor requires Twisted>=21.2.0")
|
||||
sys.exit(1)
|
||||
|
||||
import asyncio
|
||||
|
||||
from twisted.internet import asyncioreactor
|
||||
|
||||
@@ -34,11 +34,9 @@ HAS_PYDANTIC_V2: bool = Version(pydantic_version).major == 2
|
||||
|
||||
if TYPE_CHECKING or HAS_PYDANTIC_V2:
|
||||
from pydantic.v1 import (
|
||||
AnyHttpUrl,
|
||||
BaseModel,
|
||||
Extra,
|
||||
Field,
|
||||
FilePath,
|
||||
MissingError,
|
||||
PydanticValueError,
|
||||
StrictBool,
|
||||
@@ -57,11 +55,9 @@ if TYPE_CHECKING or HAS_PYDANTIC_V2:
|
||||
from pydantic.v1.typing import get_args
|
||||
else:
|
||||
from pydantic import (
|
||||
AnyHttpUrl,
|
||||
BaseModel,
|
||||
Extra,
|
||||
Field,
|
||||
FilePath,
|
||||
MissingError,
|
||||
PydanticValueError,
|
||||
StrictBool,
|
||||
@@ -81,7 +77,6 @@ else:
|
||||
|
||||
__all__ = (
|
||||
"HAS_PYDANTIC_V2",
|
||||
"AnyHttpUrl",
|
||||
"BaseModel",
|
||||
"constr",
|
||||
"conbytes",
|
||||
@@ -90,7 +85,6 @@ __all__ = (
|
||||
"ErrorWrapper",
|
||||
"Extra",
|
||||
"Field",
|
||||
"FilePath",
|
||||
"get_args",
|
||||
"MissingError",
|
||||
"parse_obj_as",
|
||||
|
||||
@@ -29,21 +29,19 @@ import attr
|
||||
|
||||
from synapse.config._base import (
|
||||
Config,
|
||||
ConfigError,
|
||||
RootConfig,
|
||||
find_config_files,
|
||||
read_config_files,
|
||||
)
|
||||
from synapse.config.database import DatabaseConfig
|
||||
from synapse.config.server import ServerConfig
|
||||
from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
|
||||
from synapse.storage.engines import create_engine
|
||||
|
||||
|
||||
class ReviewConfig(RootConfig):
|
||||
"A config class that just pulls out the server and database config"
|
||||
"A config class that just pulls out the database config"
|
||||
|
||||
config_classes = [ServerConfig, DatabaseConfig]
|
||||
config_classes = [DatabaseConfig]
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True)
|
||||
@@ -150,10 +148,6 @@ def main() -> None:
|
||||
config_dict = read_config_files(config_files)
|
||||
config.parse_config_dict(config_dict, "", "")
|
||||
|
||||
server_name = config.server.server_name
|
||||
if not isinstance(server_name, str):
|
||||
raise ConfigError("Must be a string", ("server_name",))
|
||||
|
||||
since_ms = time.time() * 1000 - Config.parse_duration(config_args.since)
|
||||
exclude_users_with_email = config_args.exclude_emails
|
||||
exclude_users_with_appservice = config_args.exclude_app_service
|
||||
@@ -165,12 +159,7 @@ def main() -> None:
|
||||
|
||||
engine = create_engine(database_config.config)
|
||||
|
||||
with make_conn(
|
||||
db_config=database_config,
|
||||
engine=engine,
|
||||
default_txn_name="review_recent_signups",
|
||||
server_name=server_name,
|
||||
) as db_conn:
|
||||
with make_conn(database_config, engine, "review_recent_signups") as db_conn:
|
||||
# This generates a type of Cursor, not LoggingTransaction.
|
||||
user_infos = get_recent_users(
|
||||
db_conn.cursor(),
|
||||
|
||||
@@ -672,14 +672,8 @@ class Porter:
|
||||
engine = create_engine(db_config.config)
|
||||
|
||||
hs = MockHomeserver(self.hs_config)
|
||||
server_name = hs.hostname
|
||||
|
||||
with make_conn(
|
||||
db_config=db_config,
|
||||
engine=engine,
|
||||
default_txn_name="portdb",
|
||||
server_name=server_name,
|
||||
) as db_conn:
|
||||
with make_conn(db_config, engine, "portdb") as db_conn:
|
||||
engine.check_database(
|
||||
db_conn, allow_outdated_version=allow_outdated_version
|
||||
)
|
||||
|
||||
@@ -20,13 +20,10 @@
|
||||
#
|
||||
from typing import TYPE_CHECKING, Optional, Protocol, Tuple
|
||||
|
||||
from prometheus_client import Histogram
|
||||
|
||||
from twisted.web.server import Request
|
||||
|
||||
from synapse.appservice import ApplicationService
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.metrics import SERVER_NAME_LABEL
|
||||
from synapse.types import Requester
|
||||
|
||||
if TYPE_CHECKING:
|
||||
@@ -36,13 +33,6 @@ if TYPE_CHECKING:
|
||||
GUEST_DEVICE_ID = "guest_device"
|
||||
|
||||
|
||||
introspection_response_timer = Histogram(
|
||||
"synapse_api_auth_delegated_introspection_response",
|
||||
"Time taken to get a response for an introspection request",
|
||||
labelnames=["code", SERVER_NAME_LABEL],
|
||||
)
|
||||
|
||||
|
||||
class Auth(Protocol):
|
||||
"""The interface that an auth provider must implement."""
|
||||
|
||||
|
||||
@@ -1,432 +0,0 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2025 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
#
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Optional
|
||||
from urllib.parse import urlencode
|
||||
|
||||
from synapse._pydantic_compat import (
|
||||
BaseModel,
|
||||
Extra,
|
||||
StrictBool,
|
||||
StrictInt,
|
||||
StrictStr,
|
||||
ValidationError,
|
||||
)
|
||||
from synapse.api.auth.base import BaseAuth
|
||||
from synapse.api.errors import (
|
||||
AuthError,
|
||||
HttpResponseException,
|
||||
InvalidClientTokenError,
|
||||
SynapseError,
|
||||
UnrecognizedRequestError,
|
||||
)
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.logging.context import PreserveLoggingContext
|
||||
from synapse.logging.opentracing import (
|
||||
active_span,
|
||||
force_tracing,
|
||||
inject_request_headers,
|
||||
start_active_span,
|
||||
)
|
||||
from synapse.metrics import SERVER_NAME_LABEL
|
||||
from synapse.synapse_rust.http_client import HttpClient
|
||||
from synapse.types import JsonDict, Requester, UserID, create_requester
|
||||
from synapse.util import json_decoder
|
||||
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
|
||||
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
|
||||
|
||||
from . import introspection_response_timer
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.rest.admin.experimental_features import ExperimentalFeature
|
||||
from synapse.server import HomeServer
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Scope as defined by MSC2967
|
||||
# https://github.com/matrix-org/matrix-spec-proposals/pull/2967
|
||||
SCOPE_MATRIX_API = "urn:matrix:org.matrix.msc2967.client:api:*"
|
||||
SCOPE_MATRIX_DEVICE_PREFIX = "urn:matrix:org.matrix.msc2967.client:device:"
|
||||
|
||||
|
||||
class ServerMetadata(BaseModel):
|
||||
class Config:
|
||||
extra = Extra.allow
|
||||
|
||||
issuer: StrictStr
|
||||
account_management_uri: StrictStr
|
||||
|
||||
|
||||
class IntrospectionResponse(BaseModel):
|
||||
retrieved_at_ms: StrictInt
|
||||
active: StrictBool
|
||||
scope: Optional[StrictStr]
|
||||
username: Optional[StrictStr]
|
||||
sub: Optional[StrictStr]
|
||||
device_id: Optional[StrictStr]
|
||||
expires_in: Optional[StrictInt]
|
||||
|
||||
class Config:
|
||||
extra = Extra.allow
|
||||
|
||||
def get_scope_set(self) -> set[str]:
|
||||
if not self.scope:
|
||||
return set()
|
||||
|
||||
return {token for token in self.scope.split(" ") if token}
|
||||
|
||||
def is_active(self, now_ms: int) -> bool:
|
||||
if not self.active:
|
||||
return False
|
||||
|
||||
# Compatibility tokens don't expire and don't have an 'expires_in' field
|
||||
if self.expires_in is None:
|
||||
return True
|
||||
|
||||
absolute_expiry_ms = self.expires_in * 1000 + self.retrieved_at_ms
|
||||
return now_ms < absolute_expiry_ms
|
||||
|
||||
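# Worked example of the helpers above (all values invented for illustration):
#
#     resp = IntrospectionResponse(
#         retrieved_at_ms=1_000_000,
#         active=True,
#         scope="urn:matrix:org.matrix.msc2967.client:api:* urn:matrix:org.matrix.msc2967.client:device:ABCDEFGH",
#         username="alice",
#         sub="01234567-89ab",
#         device_id=None,
#         expires_in=120,
#     )
#     resp.get_scope_set()       # -> both scope tokens as a set
#     resp.is_active(1_060_000)  # -> True: 1_060_000 < 120 * 1000 + 1_000_000
#     resp.is_active(1_121_000)  # -> False: past the absolute expiry of 1_120_000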
|
||||
class MasDelegatedAuth(BaseAuth):
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
super().__init__(hs)
|
||||
|
||||
self.server_name = hs.hostname
|
||||
self._clock = hs.get_clock()
|
||||
self._config = hs.config.mas
|
||||
|
||||
self._http_client = hs.get_proxied_http_client()
|
||||
self._rust_http_client = HttpClient(
|
||||
reactor=hs.get_reactor(),
|
||||
user_agent=self._http_client.user_agent.decode("utf8"),
|
||||
)
|
||||
self._server_metadata = RetryOnExceptionCachedCall[ServerMetadata](
|
||||
self._load_metadata
|
||||
)
|
||||
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
|
||||

# # Token Introspection Cache
# This remembers what users/devices are represented by which access tokens,
# in order to reduce overall system load:
# - on Synapse (as requests are relatively expensive)
# - on the network
# - on MAS
#
# Since there is no invalidation mechanism currently,
# the entries expire after 2 minutes.
# This does mean tokens can be treated as valid by Synapse
# for longer than they really are.
#
# Ideally, tokens should logically be invalidated in the following circumstances:
# - If a session logout happens.
# In this case, MAS will delete the device within Synapse
# anyway and this is good enough as an invalidation.
# - If the client refreshes their token in MAS.
# In this case, the device still exists and it's not the end of the world for
# the old access token to continue working for a short time.
self._introspection_cache: ResponseCache[str] = ResponseCache(
clock=self._clock,
name="mas_token_introspection",
server_name=self.server_name,
timeout_ms=120_000,
# don't log because the keys are access tokens
enable_logging=False,
)
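# Usage note (summarised informally from this file, not a definitive API
# description): wrapping `_introspect_token` via
# `self._introspection_cache.wrap(token, ...)` further down means concurrent
# requests presenting the same access token share one introspection call, and a
# successful result is reused for `timeout_ms` unless the callback sets
# `cache_context.should_cache = False`.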
|
||||
@property
|
||||
def _metadata_url(self) -> str:
|
||||
return f"{self._config.endpoint.rstrip('/')}/.well-known/openid-configuration"
|
||||
|
||||
@property
|
||||
def _introspection_endpoint(self) -> str:
|
||||
return f"{self._config.endpoint.rstrip('/')}/oauth2/introspect"
|
||||
|
||||
async def _load_metadata(self) -> ServerMetadata:
|
||||
response = await self._http_client.get_json(self._metadata_url)
|
||||
metadata = ServerMetadata(**response)
|
||||
return metadata
|
||||
|
||||
async def issuer(self) -> str:
|
||||
metadata = await self._server_metadata.get()
|
||||
return metadata.issuer
|
||||
|
||||
async def account_management_url(self) -> str:
|
||||
metadata = await self._server_metadata.get()
|
||||
return metadata.account_management_uri
|
||||
|
||||
async def auth_metadata(self) -> JsonDict:
|
||||
metadata = await self._server_metadata.get()
|
||||
return metadata.dict()
|
||||
|
||||
def is_request_using_the_shared_secret(self, request: SynapseRequest) -> bool:
|
||||
"""
|
||||
Check if the request is using the shared secret.
|
||||
|
||||
Args:
|
||||
request: The request to check.
|
||||
|
||||
Returns:
|
||||
True if the request is using the shared secret, False otherwise.
|
||||
"""
|
||||
access_token = self.get_access_token_from_request(request)
|
||||
shared_secret = self._config.secret()
|
||||
if not shared_secret:
|
||||
return False
|
||||
|
||||
return access_token == shared_secret
|
||||
|
||||
async def _introspect_token(
|
||||
self, token: str, cache_context: ResponseCacheContext[str]
|
||||
) -> IntrospectionResponse:
|
||||
"""
|
||||
Sends a token to the introspection endpoint and returns the introspection response.
|
||||
|
||||
Parameters:
|
||||
token: The token to introspect
|
||||
|
||||
Raises:
|
||||
HttpResponseException: If the introspection endpoint returns a non-2xx response
|
||||
ValueError: If the introspection endpoint returns an invalid JSON response
|
||||
JSONDecodeError: If the introspection endpoint returns a non-JSON response
|
||||
Exception: If the HTTP request fails
|
||||
|
||||
Returns:
|
||||
The introspection response
|
||||
"""
|
||||
|
||||
# By default, we shouldn't cache the result unless we know it's valid
|
||||
cache_context.should_cache = False
|
||||
raw_headers: dict[str, str] = {
|
||||
"Content-Type": "application/x-www-form-urlencoded",
|
||||
"Accept": "application/json",
|
||||
"Authorization": f"Bearer {self._config.secret()}",
|
||||
# Tell MAS that we support reading the device ID as an explicit
|
||||
# value, not encoded in the scope. This is supported by MAS 0.15+
|
||||
"X-MAS-Supports-Device-Id": "1",
|
||||
}
|
||||
|
||||
args = {"token": token, "token_type_hint": "access_token"}
|
||||
body = urlencode(args, True)
|
||||
|
||||
# Do the actual request
|
||||
|
||||
logger.debug("Fetching token from MAS")
|
||||
start_time = self._clock.time()
|
||||
try:
|
||||
with start_active_span("mas-introspect-token"):
|
||||
inject_request_headers(raw_headers)
|
||||
with PreserveLoggingContext():
|
||||
resp_body = await self._rust_http_client.post(
|
||||
url=self._introspection_endpoint,
|
||||
response_limit=1 * 1024 * 1024,
|
||||
headers=raw_headers,
|
||||
request_body=body,
|
||||
)
|
||||
except HttpResponseException as e:
|
||||
end_time = self._clock.time()
|
||||
introspection_response_timer.labels(
|
||||
code=e.code, **{SERVER_NAME_LABEL: self.server_name}
|
||||
).observe(end_time - start_time)
|
||||
raise
|
||||
except Exception:
|
||||
end_time = self._clock.time()
|
||||
introspection_response_timer.labels(
|
||||
code="ERR", **{SERVER_NAME_LABEL: self.server_name}
|
||||
).observe(end_time - start_time)
|
||||
raise
|
||||
|
||||
logger.debug("Fetched token from MAS")
|
||||
|
||||
end_time = self._clock.time()
|
||||
introspection_response_timer.labels(
|
||||
code=200, **{SERVER_NAME_LABEL: self.server_name}
|
||||
).observe(end_time - start_time)
|
||||
|
||||
raw_response = json_decoder.decode(resp_body.decode("utf-8"))
|
||||
try:
|
||||
response = IntrospectionResponse(
|
||||
retrieved_at_ms=self._clock.time_msec(),
|
||||
**raw_response,
|
||||
)
|
||||
except ValidationError as e:
|
||||
raise ValueError(
|
||||
"The introspection endpoint returned an invalid JSON response"
|
||||
) from e
|
||||
|
||||
# We had a valid response, so we can cache it
|
||||
cache_context.should_cache = True
|
||||
return response
|
||||
|
||||
async def is_server_admin(self, requester: Requester) -> bool:
|
||||
return "urn:synapse:admin:*" in requester.scope
|
||||
|
||||
async def get_user_by_req(
|
||||
self,
|
||||
request: SynapseRequest,
|
||||
allow_guest: bool = False,
|
||||
allow_expired: bool = False,
|
||||
allow_locked: bool = False,
|
||||
) -> Requester:
|
||||
parent_span = active_span()
|
||||
with start_active_span("get_user_by_req"):
|
||||
access_token = self.get_access_token_from_request(request)
|
||||
|
||||
requester = await self.get_appservice_user(request, access_token)
|
||||
if not requester:
|
||||
requester = await self.get_user_by_access_token(
|
||||
token=access_token,
|
||||
allow_expired=allow_expired,
|
||||
)
|
||||
|
||||
await self._record_request(request, requester)
|
||||
|
||||
request.requester = requester
|
||||
|
||||
if parent_span:
|
||||
if requester.authenticated_entity in self._force_tracing_for_users:
|
||||
# request tracing is enabled for this user, so we need to force it
|
||||
# tracing on for the parent span (which will be the servlet span).
|
||||
#
|
||||
# It's too late for the get_user_by_req span to inherit the setting,
|
||||
# so we also force it on for that.
|
||||
force_tracing()
|
||||
force_tracing(parent_span)
|
||||
parent_span.set_tag(
|
||||
"authenticated_entity", requester.authenticated_entity
|
||||
)
|
||||
parent_span.set_tag("user_id", requester.user.to_string())
|
||||
if requester.device_id is not None:
|
||||
parent_span.set_tag("device_id", requester.device_id)
|
||||
if requester.app_service is not None:
|
||||
parent_span.set_tag("appservice_id", requester.app_service.id)
|
||||
return requester
|
||||
|
||||
async def get_user_by_access_token(
|
||||
self,
|
||||
token: str,
|
||||
allow_expired: bool = False,
|
||||
) -> Requester:
|
||||
try:
|
||||
introspection_result = await self._introspection_cache.wrap(
|
||||
token, self._introspect_token, token, cache_context=True
|
||||
)
|
||||
except Exception:
|
||||
logger.exception("Failed to introspect token")
|
||||
raise SynapseError(503, "Unable to introspect the access token")
|
||||
|
||||
logger.debug("Introspection result: %r", introspection_result)
|
||||
if not introspection_result.is_active(self._clock.time_msec()):
|
||||
raise InvalidClientTokenError("Token is not active")
|
||||
|
||||
# Let's look at the scope
|
||||
scope = introspection_result.get_scope_set()
|
||||
|
||||
# Determine type of user based on presence of particular scopes
|
||||
if SCOPE_MATRIX_API not in scope:
|
||||
raise InvalidClientTokenError(
|
||||
"Token doesn't grant access to the Matrix C-S API"
|
||||
)
|
||||
|
||||
if introspection_result.username is None:
|
||||
raise AuthError(
|
||||
500,
|
||||
"Invalid username claim in the introspection result",
|
||||
)
|
||||
|
||||
user_id = UserID(
|
||||
localpart=introspection_result.username,
|
||||
domain=self.server_name,
|
||||
)
|
||||
|
||||
# Try to find a user from the username claim
|
||||
user_info = await self.store.get_user_by_id(user_id=user_id.to_string())
|
||||
if user_info is None:
|
||||
raise AuthError(
|
||||
500,
|
||||
"User not found",
|
||||
)
|
||||
|
||||
# MAS will give us the device ID as an explicit value for *compatibility* sessions
|
||||
# If present, we get it from here, if not we get it in the scope for next-gen sessions
|
||||
device_id = introspection_result.device_id
|
||||
if device_id is None:
|
||||
# Find device_ids in scope
|
||||
# We only allow a single device_id in the scope, so we find them all in the
|
||||
# scope list, and raise if there are more than one. The OIDC server should be
|
||||
# the one enforcing valid scopes, so we raise a 500 if we find an invalid scope.
|
||||
device_ids = [
|
||||
tok[len(SCOPE_MATRIX_DEVICE_PREFIX) :]
|
||||
for tok in scope
|
||||
if tok.startswith(SCOPE_MATRIX_DEVICE_PREFIX)
|
||||
]
|
||||
|
||||
if len(device_ids) > 1:
|
||||
raise AuthError(
|
||||
500,
|
||||
"Multiple device IDs in scope",
|
||||
)
|
||||
|
||||
device_id = device_ids[0] if device_ids else None
|
||||
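# Example of the scope handling above (hypothetical values): given
#     scope = {
#         "urn:matrix:org.matrix.msc2967.client:api:*",
#         "urn:matrix:org.matrix.msc2967.client:device:ABCDEFGH",
#     }
# stripping SCOPE_MATRIX_DEVICE_PREFIX yields device_ids == ["ABCDEFGH"], so
# device_id becomes "ABCDEFGH"; with no device scope it stays None, and more
# than one device scope raises the 500 above.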
|
||||
if device_id is not None:
|
||||
# Sanity check the device_id
|
||||
if len(device_id) > 255 or len(device_id) < 1:
|
||||
raise AuthError(
|
||||
500,
|
||||
"Invalid device ID in introspection result",
|
||||
)
|
||||
|
||||
# Make sure the device exists. This helps with introspection cache
|
||||
# invalidation: if we log out, the device gets deleted by MAS
|
||||
device = await self.store.get_device(
|
||||
user_id=user_id.to_string(),
|
||||
device_id=device_id,
|
||||
)
|
||||
if device is None:
|
||||
# Invalidate the introspection cache, the device was deleted
|
||||
self._introspection_cache.unset(token)
|
||||
raise InvalidClientTokenError("Token is not active")
|
||||
|
||||
return create_requester(
|
||||
user_id=user_id,
|
||||
device_id=device_id,
|
||||
scope=scope,
|
||||
)
|
||||
|
||||
async def get_user_by_req_experimental_feature(
|
||||
self,
|
||||
request: SynapseRequest,
|
||||
feature: "ExperimentalFeature",
|
||||
allow_guest: bool = False,
|
||||
allow_expired: bool = False,
|
||||
allow_locked: bool = False,
|
||||
) -> Requester:
|
||||
try:
|
||||
requester = await self.get_user_by_req(
|
||||
request,
|
||||
allow_guest=allow_guest,
|
||||
allow_expired=allow_expired,
|
||||
allow_locked=allow_locked,
|
||||
)
|
||||
if await self.store.is_feature_enabled(requester.user.to_string(), feature):
|
||||
return requester
|
||||
|
||||
raise UnrecognizedRequestError(code=404)
|
||||
except (AuthError, InvalidClientTokenError):
|
||||
if feature.is_globally_enabled(self.hs.config):
|
||||
# If it's globally enabled then return the auth error
|
||||
raise
|
||||
|
||||
raise UnrecognizedRequestError(code=404)
|
||||
@@ -28,6 +28,7 @@ from authlib.oauth2.auth import encode_client_secret_basic, encode_client_secret
|
||||
from authlib.oauth2.rfc7523 import ClientSecretJWT, PrivateKeyJWT, private_key_jwt_sign
|
||||
from authlib.oauth2.rfc7662 import IntrospectionToken
|
||||
from authlib.oidc.discovery import OpenIDProviderMetadata, get_well_known_url
|
||||
from prometheus_client import Histogram
|
||||
|
||||
from synapse.api.auth.base import BaseAuth
|
||||
from synapse.api.errors import (
|
||||
@@ -46,21 +47,25 @@ from synapse.logging.opentracing import (
|
||||
inject_request_headers,
|
||||
start_active_span,
|
||||
)
|
||||
from synapse.metrics import SERVER_NAME_LABEL
|
||||
from synapse.synapse_rust.http_client import HttpClient
|
||||
from synapse.types import Requester, UserID, create_requester
|
||||
from synapse.util import json_decoder
|
||||
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
|
||||
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
|
||||
|
||||
from . import introspection_response_timer
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.rest.admin.experimental_features import ExperimentalFeature
|
||||
from synapse.server import HomeServer
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
introspection_response_timer = Histogram(
|
||||
"synapse_api_auth_delegated_introspection_response",
|
||||
"Time taken to get a response for an introspection request",
|
||||
["code"],
|
||||
)
|
||||
|
||||
|
||||
# Scope as defined by MSC2967
|
||||
# https://github.com/matrix-org/matrix-spec-proposals/pull/2967
|
||||
SCOPE_MATRIX_API = "urn:matrix:org.matrix.msc2967.client:api:*"
|
||||
@@ -336,23 +341,17 @@ class MSC3861DelegatedAuth(BaseAuth):
|
||||
)
|
||||
except HttpResponseException as e:
|
||||
end_time = self._clock.time()
|
||||
introspection_response_timer.labels(
|
||||
code=e.code, **{SERVER_NAME_LABEL: self.server_name}
|
||||
).observe(end_time - start_time)
|
||||
introspection_response_timer.labels(e.code).observe(end_time - start_time)
|
||||
raise
|
||||
except Exception:
|
||||
end_time = self._clock.time()
|
||||
introspection_response_timer.labels(
|
||||
code="ERR", **{SERVER_NAME_LABEL: self.server_name}
|
||||
).observe(end_time - start_time)
|
||||
introspection_response_timer.labels("ERR").observe(end_time - start_time)
|
||||
raise
|
||||
|
||||
logger.debug("Fetched token from MAS")
|
||||
|
||||
end_time = self._clock.time()
|
||||
introspection_response_timer.labels(
|
||||
code=200, **{SERVER_NAME_LABEL: self.server_name}
|
||||
).observe(end_time - start_time)
|
||||
introspection_response_timer.labels(200).observe(end_time - start_time)
|
||||
|
||||
resp = json_decoder.decode(resp_body.decode("utf-8"))
|
||||
|
||||
|
||||
@@ -140,12 +140,6 @@ class Codes(str, Enum):
|
||||
# Part of MSC4155
|
||||
INVITE_BLOCKED = "ORG.MATRIX.MSC4155.M_INVITE_BLOCKED"
|
||||
|
||||
# Part of MSC4306: Thread Subscriptions
|
||||
MSC4306_CONFLICTING_UNSUBSCRIPTION = (
|
||||
"IO.ELEMENT.MSC4306.M_CONFLICTING_UNSUBSCRIPTION"
|
||||
)
|
||||
MSC4306_NOT_IN_THREAD = "IO.ELEMENT.MSC4306.M_NOT_IN_THREAD"
|
||||
|
||||
|
||||
class CodeMessageException(RuntimeError):
|
||||
"""An exception with integer code, a message string attributes and optional headers.
|
||||
|
||||
@@ -525,12 +525,8 @@ async def start(hs: "HomeServer") -> None:
|
||||
)
|
||||
|
||||
# Register the threadpools with our metrics.
|
||||
register_threadpool(
|
||||
name="default", server_name=server_name, threadpool=reactor.getThreadPool()
|
||||
)
|
||||
register_threadpool(
|
||||
name="gai_resolver", server_name=server_name, threadpool=resolver_threadpool
|
||||
)
|
||||
register_threadpool("default", reactor.getThreadPool())
|
||||
register_threadpool("gai_resolver", resolver_threadpool)
|
||||
|
||||
# Set up the SIGHUP machinery.
|
||||
if hasattr(signal, "SIGHUP"):
|
||||
|
||||
@@ -28,7 +28,6 @@ from prometheus_client import Gauge
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.metrics import SERVER_NAME_LABEL
|
||||
from synapse.metrics.background_process_metrics import (
|
||||
run_as_background_process,
|
||||
)
|
||||
@@ -58,25 +57,16 @@ Phone home stats are sent every 3 hours
|
||||
_stats_process: List[Tuple[int, "resource.struct_rusage"]] = []
|
||||
|
||||
# Gauges to expose monthly active user control metrics
|
||||
current_mau_gauge = Gauge(
|
||||
"synapse_admin_mau_current",
|
||||
"Current MAU",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
)
|
||||
current_mau_gauge = Gauge("synapse_admin_mau_current", "Current MAU")
|
||||
current_mau_by_service_gauge = Gauge(
|
||||
"synapse_admin_mau_current_mau_by_service",
|
||||
"Current MAU by service",
|
||||
labelnames=["app_service", SERVER_NAME_LABEL],
|
||||
)
|
||||
max_mau_gauge = Gauge(
|
||||
"synapse_admin_mau_max",
|
||||
"MAU Limit",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
["app_service"],
|
||||
)
|
||||
max_mau_gauge = Gauge("synapse_admin_mau_max", "MAU Limit")
|
||||
registered_reserved_users_mau_gauge = Gauge(
|
||||
"synapse_admin_mau_registered_reserved_users",
|
||||
"Registered users with reserved threepids",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
)
|
||||
|
||||
|
||||
@@ -247,21 +237,13 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
|
||||
await store.get_monthly_active_count_by_service()
|
||||
)
|
||||
reserved_users = await store.get_registered_reserved_users()
|
||||
current_mau_gauge.labels(**{SERVER_NAME_LABEL: server_name}).set(
|
||||
float(current_mau_count)
|
||||
)
|
||||
current_mau_gauge.set(float(current_mau_count))
|
||||
|
||||
for app_service, count in current_mau_count_by_service.items():
|
||||
current_mau_by_service_gauge.labels(
|
||||
app_service=app_service, **{SERVER_NAME_LABEL: server_name}
|
||||
).set(float(count))
|
||||
current_mau_by_service_gauge.labels(app_service).set(float(count))
|
||||
|
||||
registered_reserved_users_mau_gauge.labels(
|
||||
**{SERVER_NAME_LABEL: server_name}
|
||||
).set(float(len(reserved_users)))
|
||||
max_mau_gauge.labels(**{SERVER_NAME_LABEL: server_name}).set(
|
||||
float(hs.config.server.max_mau_value)
|
||||
)
|
||||
registered_reserved_users_mau_gauge.set(float(len(reserved_users)))
|
||||
max_mau_gauge.set(float(hs.config.server.max_mau_value))
|
||||
|
||||
return run_as_background_process(
|
||||
"generate_monthly_active_users",
|
||||
|
||||
@@ -36,7 +36,6 @@ from synapse.config import ( # noqa: F401
|
||||
jwt,
|
||||
key,
|
||||
logger,
|
||||
mas,
|
||||
metrics,
|
||||
modules,
|
||||
oembed,
|
||||
@@ -125,7 +124,6 @@ class RootConfig:
|
||||
background_updates: background_updates.BackgroundUpdateConfig
|
||||
auto_accept_invites: auto_accept_invites.AutoAcceptInvitesConfig
|
||||
user_types: user_types.UserTypesConfig
|
||||
mas: mas.MasConfig
|
||||
|
||||
config_classes: List[Type["Config"]] = ...
|
||||
config_files: List[str]
|
||||
|
||||
@@ -36,14 +36,13 @@ class AuthConfig(Config):
|
||||
if password_config is None:
|
||||
password_config = {}
|
||||
|
||||
auth_delegated = (config.get("experimental_features") or {}).get(
|
||||
"msc3861", {}
|
||||
).get("enabled", False) or (
|
||||
config.get("matrix_authentication_service") or {}
|
||||
).get("enabled", False)
|
||||
|
||||
# The default value of password_config.enabled is True, unless auth is delegated
|
||||
passwords_enabled = password_config.get("enabled", not auth_delegated)
|
||||
# The default value of password_config.enabled is True, unless msc3861 is enabled.
|
||||
msc3861_enabled = (
|
||||
(config.get("experimental_features") or {})
|
||||
.get("msc3861", {})
|
||||
.get("enabled", False)
|
||||
)
|
||||
passwords_enabled = password_config.get("enabled", not msc3861_enabled)
|
||||
|
||||
# 'only_for_reauth' allows users who have previously set a password to use it,
|
||||
# even though passwords would otherwise be disabled.
|
||||
|
||||
@@ -36,7 +36,6 @@ from .federation import FederationConfig
|
||||
from .jwt import JWTConfig
|
||||
from .key import KeyConfig
|
||||
from .logger import LoggingConfig
|
||||
from .mas import MasConfig
|
||||
from .metrics import MetricsConfig
|
||||
from .modules import ModulesConfig
|
||||
from .oembed import OembedConfig
|
||||
@@ -110,6 +109,4 @@ class HomeServerConfig(RootConfig):
|
||||
BackgroundUpdateConfig,
|
||||
AutoAcceptInvitesConfig,
|
||||
UserTypesConfig,
|
||||
# This must be last, as it checks for conflicts with other config options.
|
||||
MasConfig,
|
||||
]
|
||||
|
||||
@@ -1,192 +0,0 @@
|
||||
#
|
||||
# This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||
#
|
||||
# Copyright (C) 2025 New Vector, Ltd
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU Affero General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# See the GNU Affero General Public License for more details:
|
||||
# <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||
#
|
||||
#
|
||||
|
||||
from typing import Any, Optional
|
||||
|
||||
from synapse._pydantic_compat import (
|
||||
AnyHttpUrl,
|
||||
Field,
|
||||
FilePath,
|
||||
StrictBool,
|
||||
StrictStr,
|
||||
ValidationError,
|
||||
validator,
|
||||
)
|
||||
from synapse.config.experimental import read_secret_from_file_once
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util.pydantic_models import ParseModel
|
||||
|
||||
from ._base import Config, ConfigError, RootConfig
|
||||
|
||||
|
||||
class MasConfigModel(ParseModel):
|
||||
enabled: StrictBool = False
|
||||
endpoint: AnyHttpUrl = Field(default="http://localhost:8080")
|
||||
secret: Optional[StrictStr] = Field(default=None)
|
||||
secret_path: Optional[FilePath] = Field(default=None)
|
||||
|
||||
@validator("secret")
|
||||
def validate_secret_is_set_if_enabled(cls, v: Any, values: dict) -> Any:
|
||||
if values.get("enabled", False) and not values.get("secret_path") and not v:
|
||||
raise ValueError(
|
||||
"You must set a `secret` or `secret_path` when enabling Matrix Authentication Service integration."
|
||||
)
|
||||
|
||||
return v
|
||||
|
||||
@validator("secret_path")
|
||||
def validate_secret_path_is_set_if_enabled(cls, v: Any, values: dict) -> Any:
|
||||
if values.get("secret"):
|
||||
raise ValueError(
|
||||
"`secret` and `secret_path` cannot be set at the same time."
|
||||
)
|
||||
|
||||
return v
|
||||
|
||||
|
||||
class MasConfig(Config):
|
||||
section = "mas"
|
||||
|
||||
def read_config(
|
||||
self, config: JsonDict, allow_secrets_in_config: bool, **kwargs: Any
|
||||
) -> None:
|
||||
mas_config = config.get("matrix_authentication_service", {})
|
||||
if mas_config is None:
|
||||
mas_config = {}
|
||||
|
||||
try:
|
||||
parsed = MasConfigModel(**mas_config)
|
||||
except ValidationError as e:
|
||||
raise ConfigError(
|
||||
"Could not validate Matrix Authentication Service configuration",
|
||||
path=("matrix_authentication_service",),
|
||||
) from e
|
||||
|
||||
if parsed.secret and not allow_secrets_in_config:
|
||||
raise ConfigError(
|
||||
"Config options that expect an in-line secret as value are disabled",
|
||||
("matrix_authentication_service", "secret"),
|
||||
)
|
||||
|
||||
self.enabled = parsed.enabled
|
||||
self.endpoint = parsed.endpoint
|
||||
self._secret = parsed.secret
|
||||
self._secret_path = parsed.secret_path
|
||||
|
||||
self.check_config_conflicts(self.root)
|
||||
|
||||
def check_config_conflicts(
|
||||
self,
|
||||
root: RootConfig,
|
||||
) -> None:
|
||||
"""Checks for any configuration conflicts with other parts of Synapse.
|
||||
|
||||
Raises:
|
||||
ConfigError: If there are any configuration conflicts.
|
||||
"""
|
||||
|
||||
if not self.enabled:
|
||||
return
|
||||
|
||||
if root.experimental.msc3861.enabled:
|
||||
raise ConfigError(
|
||||
"Experimental MSC3861 was replaced by Matrix Authentication Service."
|
||||
"Please disable MSC3861 or disable Matrix Authentication Service.",
|
||||
("experimental", "msc3861"),
|
||||
)
|
||||
|
||||
if (
|
||||
root.auth.password_enabled_for_reauth
|
||||
or root.auth.password_enabled_for_login
|
||||
):
|
||||
raise ConfigError(
|
||||
"Password auth cannot be enabled when OAuth delegation is enabled",
|
||||
("password_config", "enabled"),
|
||||
)
|
||||
|
||||
if root.registration.enable_registration:
|
||||
raise ConfigError(
|
||||
"Registration cannot be enabled when OAuth delegation is enabled",
|
||||
("enable_registration",),
|
||||
)
|
||||
|
||||
# We only need to test the user consent version, as it must be set if the user_consent section was present in the config
|
||||
if root.consent.user_consent_version is not None:
|
||||
raise ConfigError(
|
||||
"User consent cannot be enabled when OAuth delegation is enabled",
|
||||
("user_consent",),
|
||||
)
|
||||
|
||||
if (
|
||||
root.oidc.oidc_enabled
|
||||
or root.saml2.saml2_enabled
|
||||
or root.cas.cas_enabled
|
||||
or root.jwt.jwt_enabled
|
||||
):
|
||||
raise ConfigError("SSO cannot be enabled when OAuth delegation is enabled")
|
||||
|
||||
if bool(root.authproviders.password_providers):
|
||||
raise ConfigError(
|
||||
"Password auth providers cannot be enabled when OAuth delegation is enabled"
|
||||
)
|
||||
|
||||
if root.captcha.enable_registration_captcha:
|
||||
raise ConfigError(
|
||||
"CAPTCHA cannot be enabled when OAuth delegation is enabled",
|
||||
("captcha", "enable_registration_captcha"),
|
||||
)
|
||||
|
||||
if root.auth.login_via_existing_enabled:
|
||||
raise ConfigError(
|
||||
"Login via existing session cannot be enabled when OAuth delegation is enabled",
|
||||
("login_via_existing_session", "enabled"),
|
||||
)
|
||||
|
||||
if root.registration.refresh_token_lifetime:
|
||||
raise ConfigError(
|
||||
"refresh_token_lifetime cannot be set when OAuth delegation is enabled",
|
||||
("refresh_token_lifetime",),
|
||||
)
|
||||
|
||||
if root.registration.nonrefreshable_access_token_lifetime:
|
||||
raise ConfigError(
|
||||
"nonrefreshable_access_token_lifetime cannot be set when OAuth delegation is enabled",
|
||||
("nonrefreshable_access_token_lifetime",),
|
||||
)
|
||||
|
||||
if root.registration.session_lifetime:
|
||||
raise ConfigError(
|
||||
"session_lifetime cannot be set when OAuth delegation is enabled",
|
||||
("session_lifetime",),
|
||||
)
|
||||
|
||||
if root.registration.enable_3pid_changes:
|
||||
raise ConfigError(
|
||||
"enable_3pid_changes cannot be enabled when OAuth delegation is enabled",
|
||||
("enable_3pid_changes",),
|
||||
)
|
||||
|
||||
def secret(self) -> str:
|
||||
if self._secret is not None:
|
||||
return self._secret
|
||||
elif self._secret_path is not None:
|
||||
return read_secret_from_file_once(
|
||||
str(self._secret_path),
|
||||
("matrix_authentication_service", "secret_path"),
|
||||
)
|
||||
else:
|
||||
raise RuntimeError(
|
||||
"Neither `secret` nor `secret_path` are set, this is a bug.",
|
||||
)
|
||||
@@ -148,14 +148,15 @@ class RegistrationConfig(Config):
|
||||
self.enable_set_displayname = config.get("enable_set_displayname", True)
|
||||
self.enable_set_avatar_url = config.get("enable_set_avatar_url", True)
|
||||
|
||||
auth_delegated = (config.get("experimental_features") or {}).get(
|
||||
"msc3861", {}
|
||||
).get("enabled", False) or (
|
||||
config.get("matrix_authentication_service") or {}
|
||||
).get("enabled", False)
|
||||
|
||||
# The default value of enable_3pid_changes is True, unless msc3861 is enabled.
|
||||
self.enable_3pid_changes = config.get("enable_3pid_changes", not auth_delegated)
|
||||
msc3861_enabled = (
|
||||
(config.get("experimental_features") or {})
|
||||
.get("msc3861", {})
|
||||
.get("enabled", False)
|
||||
)
|
||||
self.enable_3pid_changes = config.get(
|
||||
"enable_3pid_changes", not msc3861_enabled
|
||||
)
|
||||
|
||||
self.disable_msisdn_registration = config.get(
|
||||
"disable_msisdn_registration", False
|
||||
|
||||
@@ -114,6 +114,7 @@ class InviteAutoAccepter:
|
||||
# that occurs when responding to invites over federation (see https://github.com/matrix-org/synapse-auto-accept-invite/issues/12)
|
||||
run_as_background_process(
|
||||
"retry_make_join",
|
||||
self.server_name,
|
||||
self._retry_make_join,
|
||||
event.state_key,
|
||||
event.state_key,
|
||||
|
||||
@@ -538,11 +538,8 @@ def serialize_event(
|
||||
d["content"] = dict(d["content"])
|
||||
d["content"]["redacts"] = e.redacts
|
||||
|
||||
if config.include_admin_metadata:
|
||||
if e.internal_metadata.is_soft_failed():
|
||||
d["unsigned"]["io.element.synapse.soft_failed"] = True
|
||||
if e.internal_metadata.policy_server_spammy:
|
||||
d["unsigned"]["io.element.synapse.policy_server_spammy"] = True
|
||||
if config.include_admin_metadata and e.internal_metadata.is_soft_failed():
|
||||
d["unsigned"]["io.element.synapse.soft_failed"] = True
|
||||
|
||||
only_event_fields = config.only_event_fields
|
||||
if only_event_fields:
|
||||
|
||||
@@ -174,7 +174,6 @@ class FederationBase:
|
||||
"Event not allowed by policy server, soft-failing %s", pdu.event_id
|
||||
)
|
||||
pdu.internal_metadata.soft_failed = True
|
||||
pdu.internal_metadata.policy_server_spammy = True
|
||||
# Note: we don't redact the event so admins can inspect the event after the
|
||||
# fact. Other processes may redact the event, but that won't be applied to
|
||||
# the database copy of the event until the server's config requires it.
|
||||
|
||||
@@ -122,13 +122,12 @@ received_queries_counter = Counter(
|
||||
pdu_process_time = Histogram(
|
||||
"synapse_federation_server_pdu_process_time",
|
||||
"Time taken to process an event",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
)
|
||||
|
||||
last_pdu_ts_metric = Gauge(
|
||||
"synapse_federation_last_received_pdu_time",
|
||||
"The timestamp of the last PDU which was successfully received from the given domain",
|
||||
labelnames=("origin_server_name", SERVER_NAME_LABEL),
|
||||
labelnames=("server_name",),
|
||||
)
|
||||
|
||||
|
||||
@@ -555,9 +554,7 @@ class FederationServer(FederationBase):
|
||||
)
|
||||
|
||||
if newest_pdu_ts and origin in self._federation_metrics_domains:
|
||||
last_pdu_ts_metric.labels(
|
||||
origin_server_name=origin, **{SERVER_NAME_LABEL: self.server_name}
|
||||
).set(newest_pdu_ts / 1000)
|
||||
last_pdu_ts_metric.labels(server_name=origin).set(newest_pdu_ts / 1000)
|
||||
|
||||
return pdu_results
|
||||
|
||||
@@ -1325,9 +1322,9 @@ class FederationServer(FederationBase):
|
||||
origin, event.event_id
|
||||
)
|
||||
if received_ts is not None:
|
||||
pdu_process_time.labels(
|
||||
**{SERVER_NAME_LABEL: self.server_name}
|
||||
).observe((self._clock.time_msec() - received_ts) / 1000)
|
||||
pdu_process_time.observe(
|
||||
(self._clock.time_msec() - received_ts) / 1000
|
||||
)
|
||||
|
||||
next = await self._get_next_nonspam_staged_event_for_room(
|
||||
room_id, room_version
|
||||
|
||||
@@ -54,7 +54,7 @@ from sortedcontainers import SortedDict
|
||||
|
||||
from synapse.api.presence import UserPresenceState
|
||||
from synapse.federation.sender import AbstractFederationSender, FederationSender
|
||||
from synapse.metrics import SERVER_NAME_LABEL, LaterGauge
|
||||
from synapse.metrics import LaterGauge
|
||||
from synapse.replication.tcp.streams.federation import FederationStream
|
||||
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken, StrCollection
|
||||
from synapse.util.metrics import Measure
|
||||
@@ -113,10 +113,10 @@ class FederationRemoteSendQueue(AbstractFederationSender):
|
||||
# changes. ARGH.
|
||||
def register(name: str, queue: Sized) -> None:
|
||||
LaterGauge(
|
||||
name="synapse_federation_send_queue_%s_size" % (queue_name,),
|
||||
desc="",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
caller=lambda: {(self.server_name,): len(queue)},
|
||||
"synapse_federation_send_queue_%s_size" % (queue_name,),
|
||||
"",
|
||||
[],
|
||||
lambda: len(queue),
|
||||
)
|
||||
|
||||
for queue_name in [
|
||||
|
||||
@@ -399,37 +399,31 @@ class FederationSender(AbstractFederationSender):
|
||||
self._per_destination_queues: Dict[str, PerDestinationQueue] = {}
|
||||
|
||||
LaterGauge(
|
||||
name="synapse_federation_transaction_queue_pending_destinations",
|
||||
desc="",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
caller=lambda: {
|
||||
(self.server_name,): sum(
|
||||
1
|
||||
for d in self._per_destination_queues.values()
|
||||
if d.transmission_loop_running
|
||||
)
|
||||
},
|
||||
"synapse_federation_transaction_queue_pending_destinations",
|
||||
"",
|
||||
[],
|
||||
lambda: sum(
|
||||
1
|
||||
for d in self._per_destination_queues.values()
|
||||
if d.transmission_loop_running
|
||||
),
|
||||
)
|
||||
|
||||
LaterGauge(
|
||||
name="synapse_federation_transaction_queue_pending_pdus",
|
||||
desc="",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
caller=lambda: {
|
||||
(self.server_name,): sum(
|
||||
d.pending_pdu_count() for d in self._per_destination_queues.values()
|
||||
)
|
||||
},
|
||||
"synapse_federation_transaction_queue_pending_pdus",
|
||||
"",
|
||||
[],
|
||||
lambda: sum(
|
||||
d.pending_pdu_count() for d in self._per_destination_queues.values()
|
||||
),
|
||||
)
|
||||
LaterGauge(
|
||||
name="synapse_federation_transaction_queue_pending_edus",
|
||||
desc="",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
caller=lambda: {
|
||||
(self.server_name,): sum(
|
||||
d.pending_edu_count() for d in self._per_destination_queues.values()
|
||||
)
|
||||
},
|
||||
"synapse_federation_transaction_queue_pending_edus",
|
||||
"",
|
||||
[],
|
||||
lambda: sum(
|
||||
d.pending_edu_count() for d in self._per_destination_queues.values()
|
||||
),
|
||||
)
|
||||
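# The three LaterGauge registrations above use the keyword-argument form seen
# throughout this diff: `caller` returns a mapping from label-value tuples to
# gauge values, keyed by `(self.server_name,)` to line up with
# `labelnames=[SERVER_NAME_LABEL]`. A minimal sketch (gauge name and queue are
# hypothetical):
#
#     LaterGauge(
#         name="synapse_example_queue_size",
#         desc="",
#         labelnames=[SERVER_NAME_LABEL],
#         caller=lambda: {(self.server_name,): len(example_queue)},
#     )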
|
||||
self._is_processing = False
|
||||
@@ -667,8 +661,7 @@ class FederationSender(AbstractFederationSender):
|
||||
ts = event_to_received_ts[event.event_id]
|
||||
assert ts is not None
|
||||
synapse.metrics.event_processing_lag_by_event.labels(
|
||||
name="federation_sender",
|
||||
**{SERVER_NAME_LABEL: self.server_name},
|
||||
"federation_sender"
|
||||
).observe((now - ts) / 1000)
|
||||
|
||||
async def handle_room_events(events: List[EventBase]) -> None:
|
||||
@@ -712,12 +705,10 @@ class FederationSender(AbstractFederationSender):
|
||||
assert ts is not None
|
||||
|
||||
synapse.metrics.event_processing_lag.labels(
|
||||
name="federation_sender",
|
||||
**{SERVER_NAME_LABEL: self.server_name},
|
||||
"federation_sender"
|
||||
).set(now - ts)
|
||||
synapse.metrics.event_processing_last_ts.labels(
|
||||
name="federation_sender",
|
||||
**{SERVER_NAME_LABEL: self.server_name},
|
||||
"federation_sender"
|
||||
).set(ts)
|
||||
|
||||
events_processed_counter.labels(
|
||||
@@ -735,7 +726,7 @@ class FederationSender(AbstractFederationSender):
|
||||
).inc()
|
||||
|
||||
synapse.metrics.event_processing_positions.labels(
|
||||
name="federation_sender", **{SERVER_NAME_LABEL: self.server_name}
|
||||
"federation_sender"
|
||||
).set(next_token)
|
||||
|
||||
finally:
|
||||
|
||||
@@ -34,7 +34,6 @@ from synapse.logging.opentracing import (
|
||||
tags,
|
||||
whitelisted_homeserver,
|
||||
)
|
||||
from synapse.metrics import SERVER_NAME_LABEL
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util import json_decoder
|
||||
from synapse.util.metrics import measure_func
|
||||
@@ -48,7 +47,7 @@ issue_8631_logger = logging.getLogger("synapse.8631_debug")
|
||||
last_pdu_ts_metric = Gauge(
|
||||
"synapse_federation_last_sent_pdu_time",
|
||||
"The timestamp of the last PDU which was successfully sent to the given domain",
|
||||
labelnames=("destination_server_name", SERVER_NAME_LABEL),
|
||||
labelnames=("server_name",),
|
||||
)
|
||||
|
||||
|
||||
@@ -192,7 +191,6 @@ class TransactionManager:
|
||||
|
||||
if pdus and destination in self._federation_metrics_domains:
|
||||
last_pdu = pdus[-1]
|
||||
last_pdu_ts_metric.labels(
|
||||
destination_server_name=destination,
|
||||
**{SERVER_NAME_LABEL: self.server_name},
|
||||
).set(last_pdu.origin_server_ts / 1000)
|
||||
last_pdu_ts_metric.labels(server_name=destination).set(
|
||||
last_pdu.origin_server_ts / 1000
|
||||
)
|
||||
|
||||
@@ -187,8 +187,7 @@ class ApplicationServicesHandler:
|
||||
assert ts is not None
|
||||
|
||||
synapse.metrics.event_processing_lag_by_event.labels(
|
||||
name="appservice_sender",
|
||||
**{SERVER_NAME_LABEL: self.server_name},
|
||||
"appservice_sender"
|
||||
).observe((now - ts) / 1000)
|
||||
|
||||
async def handle_room_events(events: Iterable[EventBase]) -> None:
|
||||
@@ -208,8 +207,7 @@ class ApplicationServicesHandler:
|
||||
await self.store.set_appservice_last_pos(upper_bound)
|
||||
|
||||
synapse.metrics.event_processing_positions.labels(
|
||||
name="appservice_sender",
|
||||
**{SERVER_NAME_LABEL: self.server_name},
|
||||
"appservice_sender"
|
||||
).set(upper_bound)
|
||||
|
||||
events_processed_counter.labels(
|
||||
@@ -232,12 +230,10 @@ class ApplicationServicesHandler:
|
||||
assert ts is not None
|
||||
|
||||
synapse.metrics.event_processing_lag.labels(
|
||||
name="appservice_sender",
|
||||
**{SERVER_NAME_LABEL: self.server_name},
|
||||
"appservice_sender"
|
||||
).set(now - ts)
|
||||
synapse.metrics.event_processing_last_ts.labels(
|
||||
name="appservice_sender",
|
||||
**{SERVER_NAME_LABEL: self.server_name},
|
||||
"appservice_sender"
|
||||
).set(ts)
|
||||
finally:
|
||||
self.is_processing = False
|
||||
|
||||
@@ -222,7 +222,6 @@ class AuthHandler:
|
||||
self._password_localdb_enabled = hs.config.auth.password_localdb_enabled
|
||||
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
|
||||
self._account_validity_handler = hs.get_account_validity_handler()
|
||||
self._pusher_pool = hs.get_pusherpool()
|
||||
|
||||
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
|
||||
# as per `rc_login.failed_attempts`.
|
||||
@@ -282,9 +281,7 @@ class AuthHandler:
|
||||
# response.
|
||||
self._extra_attributes: Dict[str, SsoLoginExtraAttributes] = {}
|
||||
|
||||
self._auth_delegation_enabled = (
|
||||
hs.config.mas.enabled or hs.config.experimental.msc3861.enabled
|
||||
)
|
||||
self.msc3861_oauth_delegation_enabled = hs.config.experimental.msc3861.enabled
|
||||
|
||||
async def validate_user_via_ui_auth(
|
||||
self,
|
||||
@@ -335,7 +332,7 @@ class AuthHandler:
|
||||
LimitExceededError if the ratelimiter's failed request count for this
|
||||
user is too high to proceed
|
||||
"""
|
||||
if self._auth_delegation_enabled:
|
||||
if self.msc3861_oauth_delegation_enabled:
|
||||
raise SynapseError(
|
||||
HTTPStatus.INTERNAL_SERVER_ERROR, "UIA shouldn't be used with MSC3861"
|
||||
)
|
||||
@@ -1665,7 +1662,7 @@ class AuthHandler:
|
||||
)
|
||||
|
||||
if medium == "email":
|
||||
await self._pusher_pool.remove_pusher(
|
||||
await self.store.delete_pusher_by_app_id_pushkey_user_id(
|
||||
app_id="m.email", pushkey=address, user_id=user_id
|
||||
)
|
||||
|
||||
|
||||
@@ -25,9 +25,6 @@ from typing import TYPE_CHECKING, Optional
|
||||
from synapse.api.constants import Membership
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.replication.http.deactivate_account import (
|
||||
ReplicationNotifyAccountDeactivatedServlet,
|
||||
)
|
||||
from synapse.types import Codes, Requester, UserID, create_requester
|
||||
|
||||
if TYPE_CHECKING:
|
||||
@@ -48,7 +45,6 @@ class DeactivateAccountHandler:
|
||||
self._room_member_handler = hs.get_room_member_handler()
|
||||
self._identity_handler = hs.get_identity_handler()
|
||||
self._profile_handler = hs.get_profile_handler()
|
||||
self._pusher_pool = hs.get_pusherpool()
|
||||
self.user_directory_handler = hs.get_user_directory_handler()
|
||||
self._server_name = hs.hostname
|
||||
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
|
||||
@@ -57,16 +53,10 @@ class DeactivateAccountHandler:
|
||||
self._user_parter_running = False
|
||||
self._third_party_rules = hs.get_module_api_callbacks().third_party_event_rules
|
||||
|
||||
self._notify_account_deactivated_client = None
|
||||
|
||||
# Start the user parter loop so it can resume parting users from rooms where
|
||||
# it left off (if it has work left to do).
|
||||
if hs.config.worker.worker_app is None:
|
||||
if hs.config.worker.run_background_tasks:
|
||||
hs.get_reactor().callWhenRunning(self._start_user_parting)
|
||||
else:
|
||||
self._notify_account_deactivated_client = (
|
||||
ReplicationNotifyAccountDeactivatedServlet.make_client(hs)
|
||||
)
|
||||
|
||||
self._account_validity_enabled = (
|
||||
hs.config.account_validity.account_validity_enabled
|
||||
@@ -156,7 +146,7 @@ class DeactivateAccountHandler:
|
||||
# Most of the pushers will have been deleted when we logged out the
|
||||
# associated devices above, but we still need to delete pushers not
|
||||
# associated with devices, e.g. email pushers.
|
||||
await self._pusher_pool.delete_all_pushers_for_user(user_id)
|
||||
await self.store.delete_all_pushers_for_user(user_id)
|
||||
|
||||
# Add the user to a table of users pending deactivation (ie.
|
||||
# removal from all the rooms they're a member of)
|
||||
@@ -180,6 +170,10 @@ class DeactivateAccountHandler:
|
||||
logger.info("Marking %s as erased", user_id)
|
||||
await self.store.mark_user_erased(user_id)
|
||||
|
||||
# Now start the process that goes through that list and
|
||||
# parts users from rooms (if it isn't already running)
|
||||
self._start_user_parting()
|
||||
|
||||
# Reject all pending invites and knocks for the user, so that the
|
||||
# user doesn't show up in the "invited" section of rooms' members list.
|
||||
await self._reject_pending_invites_and_knocks_for_user(user_id)
|
||||
@@ -200,37 +194,15 @@ class DeactivateAccountHandler:
|
||||
# Delete any server-side backup keys
|
||||
await self.store.bulk_delete_backup_keys_and_versions_for_user(user_id)
|
||||
|
||||
# Notify modules and start the room parting process.
|
||||
await self.notify_account_deactivated(user_id, by_admin=by_admin)
|
||||
|
||||
return identity_server_supports_unbinding
|
||||
|
||||
async def notify_account_deactivated(
|
||||
self,
|
||||
user_id: str,
|
||||
by_admin: bool = False,
|
||||
) -> None:
|
||||
"""Notify modules and start the room parting process.
|
||||
Goes through replication if this is not the main process.
|
||||
"""
|
||||
if self._notify_account_deactivated_client is not None:
|
||||
await self._notify_account_deactivated_client(
|
||||
user_id=user_id,
|
||||
by_admin=by_admin,
|
||||
)
|
||||
return
|
||||
|
||||
# Now start the process that goes through that list and
|
||||
# parts users from rooms (if it isn't already running)
|
||||
self._start_user_parting()
|
||||
|
||||
# Let modules know the user has been deactivated.
|
||||
await self._third_party_rules.on_user_deactivation_status_changed(
|
||||
user_id,
|
||||
True,
|
||||
by_admin=by_admin,
|
||||
by_admin,
|
||||
)
|
||||
|
||||
return identity_server_supports_unbinding
|
||||
|
||||
async def _reject_pending_invites_and_knocks_for_user(self, user_id: str) -> None:
|
||||
"""Reject pending invites and knocks addressed to a given user ID.
|
||||
|
||||
|
||||
@@ -22,7 +22,7 @@ from synapse.api.errors import ShadowBanError
from synapse.api.ratelimiting import Ratelimiter
from synapse.config.workers import MAIN_PROCESS_INSTANCE_NAME
from synapse.logging.opentracing import set_tag
from synapse.metrics import SERVER_NAME_LABEL, event_processing_positions
from synapse.metrics import event_processing_positions
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.delayed_events import (
ReplicationAddedDelayedEventRestServlet,

@@ -191,9 +191,7 @@ class DelayedEventsHandler:
self._event_pos = max_pos

# Expose current event processing position to prometheus
event_processing_positions.labels(
name="delayed_events", **{SERVER_NAME_LABEL: self.server_name}
).set(max_pos)
event_processing_positions.labels("delayed_events").set(max_pos)

await self._store.update_delayed_events_stream_pos(max_pos)
|
||||
|
||||
@@ -71,7 +71,6 @@ from synapse.handlers.pagination import PURGE_PAGINATION_LOCK_NAME
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import nested_logging_context
from synapse.logging.opentracing import SynapseTags, set_tag, tag_args, trace
from synapse.metrics import SERVER_NAME_LABEL
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.module_api import NOT_SPAM
from synapse.storage.databases.main.events_worker import EventRedactBehaviour

@@ -91,7 +90,7 @@ logger = logging.getLogger(__name__)
backfill_processing_before_timer = Histogram(
"synapse_federation_backfill_processing_before_time_seconds",
"sec",
labelnames=[SERVER_NAME_LABEL],
[],
buckets=(
0.1,
0.5,

@@ -534,9 +533,9 @@ class FederationHandler:
# backfill points regardless of `current_depth`.
if processing_start_time is not None:
processing_end_time = self.clock.time_msec()
backfill_processing_before_timer.labels(
**{SERVER_NAME_LABEL: self.server_name}
).observe((processing_end_time - processing_start_time) / 1000)
backfill_processing_before_timer.observe(
(processing_end_time - processing_start_time) / 1000
)

success = await try_backfill(likely_domains)
if success:
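As a side note on the timing pattern in the hunk above, this is a small illustrative sketch (the metric name, label and durations are invented for the example) of the two common ways a labelled Prometheus histogram is fed: an explicit `observe()` of a computed duration, or the `time()` context manager.

```python
import time

from prometheus_client import Histogram

processing_timer = Histogram(
    "example_processing_time_seconds",
    "Time spent processing a unit of work",
    labelnames=["server_name"],
    buckets=(0.1, 0.5, 1.0, 5.0),
)

labelled = processing_timer.labels(server_name="hs.example.com")

# Option 1: measure explicitly and observe the duration in seconds.
start = time.monotonic()
time.sleep(0.01)  # stand-in for real work
labelled.observe(time.monotonic() - start)

# Option 2: let the context manager time the block.
with labelled.time():
    time.sleep(0.01)
```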
|
||||
@@ -113,7 +113,7 @@ soft_failed_event_counter = Counter(
|
||||
backfill_processing_after_timer = Histogram(
|
||||
"synapse_federation_backfill_processing_after_time_seconds",
|
||||
"sec",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
[],
|
||||
buckets=(
|
||||
0.1,
|
||||
0.25,
|
||||
@@ -692,9 +692,7 @@ class FederationEventHandler:
|
||||
if not events:
|
||||
return
|
||||
|
||||
with backfill_processing_after_timer.labels(
|
||||
**{SERVER_NAME_LABEL: self.server_name}
|
||||
).time():
|
||||
with backfill_processing_after_timer.time():
|
||||
# if there are any events in the wrong room, the remote server is buggy and
|
||||
# should not be trusted.
|
||||
for ev in events:
|
||||
|
||||
@@ -22,7 +22,7 @@
|
||||
import logging
|
||||
import random
|
||||
from http import HTTPStatus
|
||||
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Sequence, Tuple
|
||||
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional, Tuple
|
||||
|
||||
from canonicaljson import encode_canonical_json
|
||||
|
||||
@@ -55,11 +55,7 @@ from synapse.api.urls import ConsentURIBuilder
|
||||
from synapse.event_auth import validate_event_for_room_version
|
||||
from synapse.events import EventBase, relation_from_event
|
||||
from synapse.events.builder import EventBuilder
|
||||
from synapse.events.snapshot import (
|
||||
EventContext,
|
||||
UnpersistedEventContext,
|
||||
UnpersistedEventContextBase,
|
||||
)
|
||||
from synapse.events.snapshot import EventContext, UnpersistedEventContextBase
|
||||
from synapse.events.utils import SerializeEventConfig, maybe_upsert_event_field
|
||||
from synapse.events.validator import EventValidator
|
||||
from synapse.handlers.directory import DirectoryHandler
|
||||
@@ -67,10 +63,10 @@ from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
|
||||
from synapse.logging import opentracing
|
||||
from synapse.logging.context import make_deferred_yieldable, run_in_background
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.replication.http.send_event import ReplicationSendEventRestServlet
|
||||
from synapse.replication.http.send_events import ReplicationSendEventsRestServlet
|
||||
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
|
||||
from synapse.types import (
|
||||
JsonDict,
|
||||
PersistedEventPosition,
|
||||
Requester,
|
||||
RoomAlias,
|
||||
@@ -505,6 +501,7 @@ class EventCreationHandler:
|
||||
|
||||
self.room_prejoin_state_types = self.hs.config.api.room_prejoin_state
|
||||
|
||||
self.send_event = ReplicationSendEventRestServlet.make_client(hs)
|
||||
self.send_events = ReplicationSendEventsRestServlet.make_client(hs)
|
||||
|
||||
self.request_ratelimiter = hs.get_request_ratelimiter()
|
||||
@@ -647,46 +644,38 @@ class EventCreationHandler:
|
||||
"""
|
||||
await self.auth_blocking.check_auth_blocking(requester=requester)
|
||||
|
||||
# The requester may be a regular user, but puppeted by the server.
|
||||
request_by_server = (
|
||||
requester.authenticated_entity == self.hs.config.server.server_name
|
||||
requester_suspended = await self.store.get_user_suspended_status(
|
||||
requester.user.to_string()
|
||||
)
|
||||
|
||||
# If the request is initiated by the server, ignore whether the
|
||||
# requester or target is suspended.
|
||||
if not request_by_server:
|
||||
requester_suspended = await self.store.get_user_suspended_status(
|
||||
requester.user.to_string()
|
||||
)
|
||||
if requester_suspended:
|
||||
# We want to allow suspended users to perform "corrective" actions
|
||||
# asked of them by server admins, such as redact their messages and
|
||||
# leave rooms.
|
||||
if event_dict["type"] in ["m.room.redaction", "m.room.member"]:
|
||||
if event_dict["type"] == "m.room.redaction":
|
||||
event = await self.store.get_event(
|
||||
event_dict["content"]["redacts"], allow_none=True
|
||||
)
|
||||
if event:
|
||||
if event.sender != requester.user.to_string():
|
||||
raise SynapseError(
|
||||
403,
|
||||
"You can only redact your own events while account is suspended.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
if event_dict["type"] == "m.room.member":
|
||||
if event_dict["content"]["membership"] != "leave":
|
||||
if requester_suspended:
|
||||
# We want to allow suspended users to perform "corrective" actions
|
||||
# asked of them by server admins, such as redact their messages and
|
||||
# leave rooms.
|
||||
if event_dict["type"] in ["m.room.redaction", "m.room.member"]:
|
||||
if event_dict["type"] == "m.room.redaction":
|
||||
event = await self.store.get_event(
|
||||
event_dict["content"]["redacts"], allow_none=True
|
||||
)
|
||||
if event:
|
||||
if event.sender != requester.user.to_string():
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Changing membership while account is suspended is not allowed.",
|
||||
"You can only redact your own events while account is suspended.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
else:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Sending messages while account is suspended is not allowed.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
if event_dict["type"] == "m.room.member":
|
||||
if event_dict["content"]["membership"] != "leave":
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Changing membership while account is suspended is not allowed.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
else:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Sending messages while account is suspended is not allowed.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
|
||||
if event_dict["type"] == EventTypes.Create and event_dict["state_key"] == "":
|
||||
room_version_id = event_dict["content"]["room_version"]
|
||||
@@ -1112,9 +1101,6 @@ class EventCreationHandler:
|
||||
|
||||
policy_allowed = await self._policy_handler.is_event_allowed(event)
|
||||
if not policy_allowed:
|
||||
# We shouldn't need to set the metadata because the raise should
|
||||
# cause the request to be denied, but just in case:
|
||||
event.internal_metadata.policy_server_spammy = True
|
||||
logger.warning(
|
||||
"Event not allowed by policy server, rejecting %s",
|
||||
event.event_id,
|
||||
@@ -1525,92 +1511,6 @@ class EventCreationHandler:
|
||||
|
||||
return result
|
||||
|
||||
async def create_and_send_new_client_events(
self,
requester: Requester,
room_id: str,
prev_event_id: str,
event_dicts: Sequence[JsonDict],
ratelimit: bool = True,
ignore_shadow_ban: bool = False,
) -> None:
"""Helper to create and send a batch of new client events.

This supports sending membership events in very limited circumstances
(namely that the event is valid as is and doesn't need federation
requests or anything). Callers should prefer to use `update_membership`,
which correctly handles membership events in all cases. We allow
sending membership events here as it's useful when copying e.g. bans
between rooms.

All other events and state events are supported.

Args:
requester: The requester sending the events.
room_id: The room ID to send the events in.
prev_event_id: The event ID to use as the previous event for the first
of the events, must have already been persisted.
event_dicts: A sequence of event dictionaries to create and send.
ratelimit: Whether to rate limit this send.
ignore_shadow_ban: True if shadow-banned users should be allowed to
send these events.
"""

if not event_dicts:
# Nothing to do.
return

state_groups = await self._storage_controllers.state.get_state_group_for_events(
|
||||
[prev_event_id]
|
||||
)
|
||||
if prev_event_id not in state_groups:
|
||||
# This should only happen if we got passed a prev event ID that
|
||||
# hasn't been persisted yet.
|
||||
raise Exception("Previous event ID not found ")
|
||||
|
||||
current_state_group = state_groups[prev_event_id]
|
||||
state_map = await self._storage_controllers.state.get_state_ids_for_group(
|
||||
current_state_group
|
||||
)
|
||||
|
||||
events_and_contexts_to_send = []
|
||||
state_map = dict(state_map)
|
||||
depth = None
|
||||
|
||||
for event_dict in event_dicts:
|
||||
event, context = await self.create_event(
|
||||
requester=requester,
|
||||
event_dict=event_dict,
|
||||
prev_event_ids=[prev_event_id],
|
||||
depth=depth,
|
||||
# Take a copy to ensure each event gets a unique copy of
|
||||
# state_map since it is modified below.
|
||||
state_map=dict(state_map),
|
||||
for_batch=True,
|
||||
)
|
||||
events_and_contexts_to_send.append((event, context))
|
||||
|
||||
prev_event_id = event.event_id
|
||||
depth = event.depth + 1
|
||||
if event.is_state():
|
||||
# If this is a state event, we need to update the state map
|
||||
# so that it can be used for the next event.
|
||||
state_map[(event.type, event.state_key)] = event.event_id
|
||||
|
||||
datastore = self.hs.get_datastores().state
|
||||
events_and_context = (
|
||||
await UnpersistedEventContext.batch_persist_unpersisted_contexts(
|
||||
events_and_contexts_to_send, room_id, current_state_group, datastore
|
||||
)
|
||||
)
|
||||
|
||||
await self.handle_new_client_event(
|
||||
requester,
|
||||
events_and_context,
|
||||
ignore_shadow_ban=ignore_shadow_ban,
|
||||
ratelimit=ratelimit,
|
||||
)
|
||||
|
||||
async def _persist_events(
|
||||
self,
|
||||
requester: Requester,
|
||||
|
||||
@@ -780,10 +780,10 @@ class PresenceHandler(BasePresenceHandler):
|
||||
)
|
||||
|
||||
LaterGauge(
|
||||
name="synapse_handlers_presence_user_to_current_state_size",
|
||||
desc="",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
caller=lambda: {(self.server_name,): len(self.user_to_current_state)},
|
||||
"synapse_handlers_presence_user_to_current_state_size",
|
||||
"",
|
||||
[],
|
||||
lambda: len(self.user_to_current_state),
|
||||
)
|
||||
|
||||
# The per-device presence state, maps user to devices to per-device presence state.
|
||||
@@ -883,10 +883,10 @@ class PresenceHandler(BasePresenceHandler):
|
||||
)
|
||||
|
||||
LaterGauge(
|
||||
name="synapse_handlers_presence_wheel_timer_size",
|
||||
desc="",
|
||||
labelnames=[SERVER_NAME_LABEL],
|
||||
caller=lambda: {(self.server_name,): len(self.wheel_timer)},
|
||||
"synapse_handlers_presence_wheel_timer_size",
|
||||
"",
|
||||
[],
|
||||
lambda: len(self.wheel_timer),
|
||||
)
|
||||
|
||||
# Used to handle sending of presence to newly joined users/servers
|
||||
@@ -1568,9 +1568,9 @@ class PresenceHandler(BasePresenceHandler):
|
||||
self._event_pos = max_pos
|
||||
|
||||
# Expose current event processing position to prometheus
|
||||
synapse.metrics.event_processing_positions.labels(
|
||||
name="presence", **{SERVER_NAME_LABEL: self.server_name}
|
||||
).set(max_pos)
|
||||
synapse.metrics.event_processing_positions.labels("presence").set(
|
||||
max_pos
|
||||
)
|
||||
|
||||
async def _handle_state_delta(self, room_id: str, deltas: List[StateDelta]) -> None:
|
||||
"""Process current state deltas for the room to find new joins that need
|
||||
|
||||
@@ -94,7 +94,6 @@ from synapse.types.handlers import ShutdownRoomParams, ShutdownRoomResponse
|
||||
from synapse.types.state import StateFilter
|
||||
from synapse.util import stringutils
|
||||
from synapse.util.caches.response_cache import ResponseCache
|
||||
from synapse.util.iterutils import batch_iter
|
||||
from synapse.util.stringutils import parse_and_validate_server_name
|
||||
from synapse.visibility import filter_events_for_client
|
||||
|
||||
@@ -608,7 +607,7 @@ class RoomCreationHandler:
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
_, last_event_id, _ = await self._send_events_for_new_room(
|
||||
await self._send_events_for_new_room(
|
||||
requester,
|
||||
new_room_id,
|
||||
new_room_version,
|
||||
@@ -621,32 +620,29 @@ class RoomCreationHandler:
|
||||
)
|
||||
|
||||
# Transfer membership events
|
||||
ban_event_ids = await self.store.get_ban_event_ids_in_room(old_room_id)
|
||||
if ban_event_ids:
|
||||
ban_events = await self.store.get_events_as_list(ban_event_ids)
|
||||
old_room_member_state_ids = (
|
||||
await self._storage_controllers.state.get_current_state_ids(
|
||||
old_room_id, StateFilter.from_types([(EventTypes.Member, None)])
|
||||
)
|
||||
)
|
||||
|
||||
# Add any banned users to the new room.
|
||||
#
|
||||
# Note generally we should send membership events via
|
||||
# `update_membership`, however in this case its fine to bypass as
|
||||
# these bans don't need any special treatment, i.e. the sender is in
|
||||
# the room and they don't need any extra signatures, etc.
|
||||
for batched_events in batch_iter(ban_events, 1000):
|
||||
await self.event_creation_handler.create_and_send_new_client_events(
|
||||
requester=requester,
|
||||
room_id=new_room_id,
|
||||
prev_event_id=last_event_id,
|
||||
event_dicts=[
|
||||
{
|
||||
"type": EventTypes.Member,
|
||||
"state_key": ban_event.state_key,
|
||||
"room_id": new_room_id,
|
||||
"sender": requester.user.to_string(),
|
||||
"content": ban_event.content,
|
||||
}
|
||||
for ban_event in batched_events
|
||||
],
|
||||
# map from event_id to BaseEvent
|
||||
old_room_member_state_events = await self.store.get_events(
|
||||
old_room_member_state_ids.values()
|
||||
)
|
||||
for old_event in old_room_member_state_events.values():
|
||||
# Only transfer ban events
|
||||
if (
|
||||
"membership" in old_event.content
|
||||
and old_event.content["membership"] == "ban"
|
||||
):
|
||||
await self.room_member_handler.update_membership(
|
||||
requester,
|
||||
UserID.from_string(old_event.state_key),
|
||||
new_room_id,
|
||||
"ban",
|
||||
ratelimit=False,
|
||||
content=old_event.content,
|
||||
)
|
||||
|
||||
# XXX invites/joins
|
||||
@@ -779,25 +775,6 @@ class RoomCreationHandler:
|
||||
|
||||
await self.auth_blocking.check_auth_blocking(requester=requester)
|
||||
|
||||
if ratelimit:
|
||||
# Limit the rate of room creations,
|
||||
# using both the limiter specific to room creations as well
|
||||
# as the general request ratelimiter.
|
||||
#
|
||||
# Note that we don't rate limit the individual
|
||||
# events in the room — room creation isn't atomic and
|
||||
# historically it was very janky if half the events in the
|
||||
# initial state don't make it because of rate limiting.
|
||||
|
||||
# First check the room creation ratelimiter without updating it
|
||||
# (this is so we don't consume a token if the other ratelimiter doesn't
|
||||
# allow us to proceed)
|
||||
await self.creation_ratelimiter.ratelimit(requester, update=False)
|
||||
|
||||
# then apply the ratelimits
|
||||
await self.common_request_ratelimiter.ratelimit(requester)
|
||||
await self.creation_ratelimiter.ratelimit(requester)
|
||||
|
||||
if (
|
||||
self._server_notices_mxid is not None
|
||||
and user_id == self._server_notices_mxid
|
||||
@@ -829,6 +806,37 @@ class RoomCreationHandler:
|
||||
Codes.MISSING_PARAM,
|
||||
)
|
||||
|
||||
if not is_requester_admin:
|
||||
spam_check = await self._spam_checker_module_callbacks.user_may_create_room(
|
||||
user_id, config
|
||||
)
|
||||
if spam_check != self._spam_checker_module_callbacks.NOT_SPAM:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"You are not permitted to create rooms",
|
||||
errcode=spam_check[0],
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
if ratelimit:
|
||||
# Limit the rate of room creations,
|
||||
# using both the limiter specific to room creations as well
|
||||
# as the general request ratelimiter.
|
||||
#
|
||||
# Note that we don't rate limit the individual
|
||||
# events in the room — room creation isn't atomic and
|
||||
# historically it was very janky if half the events in the
|
||||
# initial state don't make it because of rate limiting.
|
||||
|
||||
# First check the room creation ratelimiter without updating it
|
||||
# (this is so we don't consume a token if the other ratelimiter doesn't
|
||||
# allow us to proceed)
|
||||
await self.creation_ratelimiter.ratelimit(requester, update=False)
|
||||
|
||||
# then apply the ratelimits
|
||||
await self.common_request_ratelimiter.ratelimit(requester)
|
||||
await self.creation_ratelimiter.ratelimit(requester)
|
||||
|
||||
room_version_id = config.get(
|
||||
"room_version", self.config.server.default_room_version.identifier
|
||||
)
|
||||
@@ -920,19 +928,6 @@ class RoomCreationHandler:
|
||||
|
||||
self._validate_room_config(config, visibility)
|
||||
|
||||
# Run the spam checker after other validation
|
||||
if not is_requester_admin:
|
||||
spam_check = await self._spam_checker_module_callbacks.user_may_create_room(
|
||||
user_id, config
|
||||
)
|
||||
if spam_check != self._spam_checker_module_callbacks.NOT_SPAM:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"You are not permitted to create rooms",
|
||||
errcode=spam_check[0],
|
||||
additional_fields=spam_check[1],
|
||||
)
|
||||
|
||||
room_id = await self._generate_and_create_room_id(
|
||||
creator_id=user_id,
|
||||
is_public=is_public,
|
||||
|
||||
@@ -49,7 +49,7 @@ from synapse.handlers.profile import MAX_AVATAR_URL_LEN, MAX_DISPLAYNAME_LEN
|
||||
from synapse.handlers.state_deltas import MatchChange, StateDeltasHandler
|
||||
from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
|
||||
from synapse.logging import opentracing
|
||||
from synapse.metrics import SERVER_NAME_LABEL, event_processing_positions
|
||||
from synapse.metrics import event_processing_positions
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.replication.http.push import ReplicationCopyPusherRestServlet
|
||||
from synapse.storage.databases.main.state_deltas import StateDelta
|
||||
@@ -746,41 +746,35 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
||||
and requester.user.to_string() == self._server_notices_mxid
|
||||
)
|
||||
|
||||
# The requester may be a regular user, but puppeted by the server.
|
||||
request_by_server = requester.authenticated_entity == self._server_name
|
||||
|
||||
# If the request is initiated by the server, ignore whether the
|
||||
# requester or target is suspended.
|
||||
if not request_by_server:
|
||||
requester_suspended = await self.store.get_user_suspended_status(
|
||||
requester.user.to_string()
|
||||
requester_suspended = await self.store.get_user_suspended_status(
|
||||
requester.user.to_string()
|
||||
)
|
||||
if action == Membership.INVITE and requester_suspended:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Sending invites while account is suspended is not allowed.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
if action == Membership.INVITE and requester_suspended:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Sending invites while account is suspended is not allowed.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
|
||||
if target.to_string() != requester.user.to_string():
|
||||
target_suspended = await self.store.get_user_suspended_status(
|
||||
target.to_string()
|
||||
)
|
||||
else:
|
||||
target_suspended = requester_suspended
|
||||
if target.to_string() != requester.user.to_string():
|
||||
target_suspended = await self.store.get_user_suspended_status(
|
||||
target.to_string()
|
||||
)
|
||||
else:
|
||||
target_suspended = requester_suspended
|
||||
|
||||
if action == Membership.JOIN and target_suspended:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Joining rooms while account is suspended is not allowed.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
if action == Membership.KNOCK and target_suspended:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Knocking on rooms while account is suspended is not allowed.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
if action == Membership.JOIN and target_suspended:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Joining rooms while account is suspended is not allowed.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
if action == Membership.KNOCK and target_suspended:
|
||||
raise SynapseError(
|
||||
403,
|
||||
"Knocking on rooms while account is suspended is not allowed.",
|
||||
Codes.USER_ACCOUNT_SUSPENDED,
|
||||
)
|
||||
|
||||
if (
|
||||
not self.allow_per_room_profiles and not is_requester_server_notices_user
|
||||
@@ -2261,9 +2255,7 @@ class RoomForgetterHandler(StateDeltasHandler):
self.pos = max_pos

# Expose current event processing position to prometheus
event_processing_positions.labels(
name="room_forgetter", **{SERVER_NAME_LABEL: self.server_name}
).set(max_pos)
event_processing_positions.labels("room_forgetter").set(max_pos)

await self._store.update_room_forgetter_stream_pos(max_pos)
|
||||
|
||||
@@ -24,13 +24,16 @@ import logging
|
||||
from email.mime.multipart import MIMEMultipart
|
||||
from email.mime.text import MIMEText
|
||||
from io import BytesIO
|
||||
from typing import TYPE_CHECKING, Dict, Optional
|
||||
from typing import TYPE_CHECKING, Any, Dict, Optional
|
||||
|
||||
from pkg_resources import parse_version
|
||||
|
||||
import twisted
|
||||
from twisted.internet.defer import Deferred
|
||||
from twisted.internet.endpoints import HostnameEndpoint
|
||||
from twisted.internet.interfaces import IProtocolFactory
|
||||
from twisted.internet.interfaces import IOpenSSLContextFactory, IProtocolFactory
|
||||
from twisted.internet.ssl import optionsForClientTLS
|
||||
from twisted.mail.smtp import ESMTPSenderFactory
|
||||
from twisted.mail.smtp import ESMTPSender, ESMTPSenderFactory
|
||||
from twisted.protocols.tls import TLSMemoryBIOFactory
|
||||
|
||||
from synapse.logging.context import make_deferred_yieldable
|
||||
@@ -41,6 +44,49 @@ if TYPE_CHECKING:
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
_is_old_twisted = parse_version(twisted.__version__) < parse_version("21")


class _BackportESMTPSender(ESMTPSender):
"""Extend old versions of ESMTPSender to configure TLS.

Unfortunately, before Twisted 21.2, ESMTPSender doesn't give an easy way to
disable TLS, or to configure the hostname used for TLS certificate validation.
This backports the `hostname` parameter for that functionality.
"""

__hostname: Optional[str]

def __init__(self, *args: Any, **kwargs: Any) -> None:
""""""
self.__hostname = kwargs.pop("hostname", None)
super().__init__(*args, **kwargs)

def _getContextFactory(self) -> Optional[IOpenSSLContextFactory]:
if self.context is not None:
return self.context
elif self.__hostname is None:
return None # disable TLS if hostname is None
return optionsForClientTLS(self.__hostname)
|
||||
|
||||
class _BackportESMTPSenderFactory(ESMTPSenderFactory):
|
||||
"""An ESMTPSenderFactory for _BackportESMTPSender.
|
||||
|
||||
This backports the `hostname` parameter, to disable or configure TLS.
|
||||
"""
|
||||
|
||||
__hostname: Optional[str]
|
||||
|
||||
def __init__(self, *args: Any, **kwargs: Any) -> None:
|
||||
self.__hostname = kwargs.pop("hostname", None)
|
||||
super().__init__(*args, **kwargs)
|
||||
|
||||
def protocol(self, *args: Any, **kwargs: Any) -> ESMTPSender: # type: ignore
|
||||
# this overrides ESMTPSenderFactory's `protocol` attribute, with a Callable
|
||||
# instantiating our _BackportESMTPSender, providing the hostname parameter
|
||||
return _BackportESMTPSender(*args, **kwargs, hostname=self.__hostname)
|
||||
|
||||
|
||||
async def _sendmail(
|
||||
reactor: ISynapseReactor,
|
||||
@@ -83,7 +129,9 @@ async def _sendmail(
|
||||
elif tlsname is None:
|
||||
tlsname = smtphost
|
||||
|
||||
factory: IProtocolFactory = ESMTPSenderFactory(
|
||||
factory: IProtocolFactory = (
|
||||
_BackportESMTPSenderFactory if _is_old_twisted else ESMTPSenderFactory
|
||||
)(
|
||||
username,
|
||||
password,
|
||||
from_addr,
|
||||
|
||||
@@ -38,7 +38,6 @@ from synapse.logging.opentracing import (
|
||||
tag_args,
|
||||
trace,
|
||||
)
|
||||
from synapse.metrics import SERVER_NAME_LABEL
|
||||
from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
|
||||
from synapse.storage.databases.main.state_deltas import StateDelta
|
||||
from synapse.storage.databases.main.stream import PaginateFunction
|
||||
@@ -80,7 +79,7 @@ logger = logging.getLogger(__name__)
sync_processing_time = Histogram(
"synapse_sliding_sync_processing_time",
"Time taken to generate a sliding sync response, ignoring wait times.",
labelnames=["initial", SERVER_NAME_LABEL],
["initial"],
)
|
||||
# Limit the number of state_keys we should remember sending down the connection for each
|
||||
@@ -95,7 +94,6 @@ MAX_NUMBER_PREVIOUS_STATE_KEYS_TO_REMEMBER = 100
|
||||
|
||||
class SlidingSyncHandler:
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
self.server_name = hs.hostname
|
||||
self.clock = hs.get_clock()
|
||||
self.store = hs.get_datastores().main
|
||||
self.storage_controllers = hs.get_storage_controllers()
|
||||
@@ -370,9 +368,9 @@ class SlidingSyncHandler:
|
||||
set_tag(SynapseTags.FUNC_ARG_PREFIX + "sync_config.user", user_id)
|
||||
|
||||
end_time_s = self.clock.time()
|
||||
sync_processing_time.labels(
|
||||
initial=from_token is not None, **{SERVER_NAME_LABEL: self.server_name}
|
||||
).observe(end_time_s - start_time_s)
|
||||
sync_processing_time.labels(from_token is not None).observe(
|
||||
end_time_s - start_time_s
|
||||
)
|
||||
|
||||
return sliding_sync_result
|
||||
|
||||
|
||||
@@ -32,7 +32,7 @@ from typing import (
|
||||
)
|
||||
|
||||
from synapse.api.constants import EventContentFields, EventTypes, Membership
|
||||
from synapse.metrics import SERVER_NAME_LABEL, event_processing_positions
|
||||
from synapse.metrics import event_processing_positions
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.storage.databases.main.state_deltas import StateDelta
|
||||
from synapse.types import JsonDict
|
||||
@@ -147,9 +147,7 @@ class StatsHandler:
|
||||
|
||||
logger.debug("Handled room stats to %s -> %s", self.pos, max_pos)
|
||||
|
||||
event_processing_positions.labels(
|
||||
name="stats", **{SERVER_NAME_LABEL: self.server_name}
|
||||
).set(max_pos)
|
||||
event_processing_positions.labels("stats").set(max_pos)
|
||||
|
||||
self.pos = max_pos
|
||||
|
||||
|
||||
@@ -1,15 +1,9 @@
|
||||
import logging
|
||||
from http import HTTPStatus
|
||||
from typing import TYPE_CHECKING, Optional
|
||||
|
||||
from synapse.api.constants import RelationTypes
|
||||
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
|
||||
from synapse.events import relation_from_event
|
||||
from synapse.storage.databases.main.thread_subscriptions import (
|
||||
AutomaticSubscriptionConflicted,
|
||||
ThreadSubscription,
|
||||
)
|
||||
from synapse.types import EventOrderings, UserID
|
||||
from synapse.api.errors import AuthError, NotFoundError
|
||||
from synapse.storage.databases.main.thread_subscriptions import ThreadSubscription
|
||||
from synapse.types import UserID
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
@@ -61,79 +55,42 @@ class ThreadSubscriptionsHandler:
|
||||
room_id: str,
|
||||
thread_root_event_id: str,
|
||||
*,
|
||||
automatic_event_id: Optional[str],
|
||||
automatic: bool,
|
||||
) -> Optional[int]:
|
||||
"""Sets or updates a user's subscription settings for a specific thread root.
|
||||
|
||||
Args:
|
||||
requester_user_id: The ID of the user whose settings are being updated.
|
||||
thread_root_event_id: The event ID of the thread root.
|
||||
automatic_event_id: if the user was subscribed by an automatic decision by
|
||||
their client, the event ID that caused this.
|
||||
automatic: whether the user was subscribed by an automatic decision by
|
||||
their client.
|
||||
|
||||
Returns:
|
||||
The stream ID for this update, if the update isn't no-opped.
|
||||
|
||||
Raises:
|
||||
NotFoundError if the user cannot access the thread root event, or it isn't
|
||||
known to this homeserver. Ditto for the automatic cause event if supplied.
|
||||
|
||||
SynapseError(400, M_NOT_IN_THREAD): if client supplied an automatic cause event
|
||||
but user cannot access the event.
|
||||
|
||||
SynapseError(409, M_SKIPPED): if client requested an automatic subscription
|
||||
but it was skipped because the cause event is logically later than an unsubscription.
|
||||
known to this homeserver.
|
||||
"""
|
||||
# First check that the user can access the thread root event
|
||||
# and that it exists
|
||||
try:
|
||||
thread_root_event = await self.event_handler.get_event(
|
||||
event = await self.event_handler.get_event(
|
||||
user_id, room_id, thread_root_event_id
|
||||
)
|
||||
if thread_root_event is None:
|
||||
if event is None:
|
||||
raise NotFoundError("No such thread root")
|
||||
except AuthError:
|
||||
logger.info("rejecting thread subscriptions change (thread not accessible)")
|
||||
raise NotFoundError("No such thread root")
|
||||
|
||||
if automatic_event_id:
|
||||
autosub_cause_event = await self.event_handler.get_event(
|
||||
user_id, room_id, automatic_event_id
|
||||
)
|
||||
if autosub_cause_event is None:
|
||||
raise NotFoundError("Automatic subscription event not found")
|
||||
relation = relation_from_event(autosub_cause_event)
|
||||
if (
|
||||
relation is None
|
||||
or relation.rel_type != RelationTypes.THREAD
|
||||
or relation.parent_id != thread_root_event_id
|
||||
):
|
||||
raise SynapseError(
|
||||
HTTPStatus.BAD_REQUEST,
|
||||
"Automatic subscription must use an event in the thread",
|
||||
errcode=Codes.MSC4306_NOT_IN_THREAD,
|
||||
)
|
||||
|
||||
automatic_event_orderings = EventOrderings.from_event(autosub_cause_event)
|
||||
else:
|
||||
automatic_event_orderings = None
|
||||
|
||||
outcome = await self.store.subscribe_user_to_thread(
|
||||
return await self.store.subscribe_user_to_thread(
|
||||
user_id.to_string(),
|
||||
room_id,
|
||||
event.room_id,
|
||||
thread_root_event_id,
|
||||
automatic_event_orderings=automatic_event_orderings,
|
||||
automatic=automatic,
|
||||
)
|
||||
|
||||
if isinstance(outcome, AutomaticSubscriptionConflicted):
|
||||
raise SynapseError(
|
||||
HTTPStatus.CONFLICT,
|
||||
"Automatic subscription obsoleted by an unsubscription request.",
|
||||
errcode=Codes.MSC4306_CONFLICTING_UNSUBSCRIPTION,
|
||||
)
|
||||
|
||||
return outcome
|
||||
|
||||
async def unsubscribe_user_from_thread(
|
||||
self, user_id: UserID, room_id: str, thread_root_event_id: str
|
||||
) -> Optional[int]:
|
||||
|
||||
@@ -35,7 +35,6 @@ from synapse.api.constants import (
|
||||
)
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.handlers.state_deltas import MatchChange, StateDeltasHandler
|
||||
from synapse.metrics import SERVER_NAME_LABEL
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.storage.databases.main.state_deltas import StateDelta
|
||||
from synapse.storage.databases.main.user_directory import SearchResult
|
||||
@@ -263,9 +262,9 @@ class UserDirectoryHandler(StateDeltasHandler):
|
||||
self.pos = max_pos
|
||||
|
||||
# Expose current event processing position to prometheus
|
||||
synapse.metrics.event_processing_positions.labels(
|
||||
name="user_dir", **{SERVER_NAME_LABEL: self.server_name}
|
||||
).set(max_pos)
|
||||
synapse.metrics.event_processing_positions.labels("user_dir").set(
|
||||
max_pos
|
||||
)
|
||||
|
||||
await self.store.update_user_directory_stream_pos(max_pos)
|
||||
|
||||
|
||||
@@ -144,31 +144,27 @@ def _get_in_flight_counts() -> Mapping[Tuple[str, ...], int]:
# Cast to a list to prevent it changing while the Prometheus
# thread is collecting metrics
with _in_flight_requests_lock:
request_metrics = list(_in_flight_requests)
reqs = list(_in_flight_requests)

for request_metric in request_metrics:
request_metric.update_metrics()
for rm in reqs:
rm.update_metrics()

# Map from (method, name) -> int, the number of in flight requests of that
# type. The key type is Tuple[str, str], but we leave the length unspecified
# for compatibility with LaterGauge's annotations.
counts: Dict[Tuple[str, ...], int] = {}
for request_metric in request_metrics:
key = (
request_metric.method,
request_metric.name,
request_metric.our_server_name,
)
for rm in reqs:
key = (rm.method, rm.name)
counts[key] = counts.get(key, 0) + 1

return counts


LaterGauge(
name="synapse_http_server_in_flight_requests_count",
desc="",
labelnames=["method", "servlet", SERVER_NAME_LABEL],
caller=_get_in_flight_counts,
"synapse_http_server_in_flight_requests_count",
"",
["method", "servlet"],
_get_in_flight_counts,
)
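To make the `LaterGauge` call above easier to follow, here is a small, self-contained sketch of the underlying idea: a collector whose callback is evaluated at scrape time and whose return value maps label tuples to gauge values. This illustrates the mechanism only; it is not Synapse's actual `LaterGauge` implementation, and every name in it is invented.

```python
from typing import Callable, Dict, Iterable, Tuple

from prometheus_client import REGISTRY
from prometheus_client.core import GaugeMetricFamily, Metric


class CallbackGauge:
    """Evaluates `caller` on every scrape; any object exposing collect() can be registered."""

    def __init__(
        self,
        name: str,
        documentation: str,
        labelnames: Tuple[str, ...],
        caller: Callable[[], Dict[Tuple[str, ...], int]],
    ) -> None:
        self._name = name
        self._documentation = documentation
        self._labelnames = labelnames
        self._caller = caller
        REGISTRY.register(self)

    def collect(self) -> Iterable[Metric]:
        family = GaugeMetricFamily(
            self._name, self._documentation, labels=list(self._labelnames)
        )
        for label_values, value in self._caller().items():
            family.add_metric(list(label_values), value)
        yield family


CallbackGauge(
    "example_in_flight_requests",
    "In-flight requests by method and servlet",
    ("method", "servlet"),
    lambda: {("GET", "sync"): 3},
)
```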
|
||||
|
||||
@@ -240,10 +236,9 @@ class RequestMetrics:
|
||||
|
||||
response_count.labels(**response_base_labels).inc()
|
||||
|
||||
response_timer.labels(
|
||||
code=response_code_str,
|
||||
**response_base_labels,
|
||||
).observe(time_sec - self.start_ts)
|
||||
response_timer.labels(code=response_code_str, **response_base_labels).observe(
|
||||
time_sec - self.start_ts
|
||||
)
|
||||
|
||||
resource_usage = context.get_resource_usage()
|
||||
|
||||
|
||||
@@ -337,7 +337,7 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
|
||||
callback_return = await self._async_render(request)
|
||||
except LimitExceededError as e:
|
||||
if e.pause:
|
||||
await self._clock.sleep(e.pause)
|
||||
self._clock.sleep(e.pause)
|
||||
raise
|
||||
|
||||
if callback_return is not None:
|
||||
|
||||
@@ -33,7 +33,6 @@ from typing import (
|
||||
Iterable,
|
||||
Mapping,
|
||||
Optional,
|
||||
Sequence,
|
||||
Set,
|
||||
Tuple,
|
||||
Type,
|
||||
@@ -156,13 +155,13 @@ class _RegistryProxy:
|
||||
RegistryProxy = cast(CollectorRegistry, _RegistryProxy)
|
||||
|
||||
|
||||
@attr.s(slots=True, hash=True, auto_attribs=True, kw_only=True)
|
||||
@attr.s(slots=True, hash=True, auto_attribs=True)
|
||||
class LaterGauge(Collector):
|
||||
"""A Gauge which periodically calls a user-provided callback to produce metrics."""
|
||||
|
||||
name: str
|
||||
desc: str
|
||||
labelnames: Optional[StrSequence] = attr.ib(hash=False)
|
||||
labels: Optional[StrSequence] = attr.ib(hash=False)
|
||||
# callback: should either return a value (if there are no labels for this metric),
|
||||
# or dict mapping from a label tuple to a value
|
||||
caller: Callable[
|
||||
@@ -170,9 +169,7 @@ class LaterGauge(Collector):
|
||||
]
|
||||
|
||||
def collect(self) -> Iterable[Metric]:
|
||||
# The decision to add `SERVER_NAME_LABEL` is from the `LaterGauge` usage itself
|
||||
# (we don't enforce it here, one level up).
|
||||
g = GaugeMetricFamily(self.name, self.desc, labels=self.labelnames) # type: ignore[missing-server-name-label]
|
||||
g = GaugeMetricFamily(self.name, self.desc, labels=self.labels)
|
||||
|
||||
try:
|
||||
calls = self.caller()
|
||||
@@ -306,9 +303,7 @@ class InFlightGauge(Generic[MetricsEntry], Collector):
|
||||
|
||||
Note: may be called by a separate thread.
|
||||
"""
|
||||
# The decision to add `SERVER_NAME_LABEL` is from the `GaugeBucketCollector`
|
||||
# usage itself (we don't enforce it here, one level up).
|
||||
in_flight = GaugeMetricFamily( # type: ignore[missing-server-name-label]
|
||||
in_flight = GaugeMetricFamily(
|
||||
self.name + "_total", self.desc, labels=self.labels
|
||||
)
|
||||
|
||||
@@ -332,9 +327,7 @@ class InFlightGauge(Generic[MetricsEntry], Collector):
|
||||
yield in_flight
|
||||
|
||||
for name in self.sub_metrics:
|
||||
# The decision to add `SERVER_NAME_LABEL` is from the `InFlightGauge` usage
|
||||
# itself (we don't enforce it here, one level up).
|
||||
gauge = GaugeMetricFamily( # type: ignore[missing-server-name-label]
|
||||
gauge = GaugeMetricFamily(
|
||||
"_".join([self.name, name]), "", labels=self.labels
|
||||
)
|
||||
for key, metrics in metrics_by_key.items():
|
||||
@@ -350,51 +343,6 @@ class InFlightGauge(Generic[MetricsEntry], Collector):
|
||||
all_gauges[self.name] = self
|
||||
|
||||
|
||||
class GaugeHistogramMetricFamilyWithLabels(GaugeHistogramMetricFamily):
|
||||
"""
|
||||
Custom version of `GaugeHistogramMetricFamily` from `prometheus_client` that allows
|
||||
specifying labels and label values.
|
||||
|
||||
A single gauge histogram and its samples.
|
||||
|
||||
For use by custom collectors.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
name: str,
|
||||
documentation: str,
|
||||
gsum_value: float,
|
||||
buckets: Optional[Sequence[Tuple[str, float]]] = None,
|
||||
labelnames: StrSequence = (),
|
||||
labelvalues: StrSequence = (),
|
||||
unit: str = "",
|
||||
):
|
||||
# Sanity check the number of label values matches the number of label names.
|
||||
if len(labelvalues) != len(labelnames):
|
||||
raise ValueError(
|
||||
"The number of label values must match the number of label names"
|
||||
)
|
||||
|
||||
# Call the super to validate and set the labelnames. We use this stable API
|
||||
# instead of setting the internal `_labelnames` field directly.
|
||||
super().__init__(
|
||||
name=name,
|
||||
documentation=documentation,
|
||||
labels=labelnames,
|
||||
# Since `GaugeHistogramMetricFamily` doesn't support supplying `labels` and
|
||||
# `buckets` at the same time (artificial limitation), we will just set these
|
||||
# as `None` and set up the buckets ourselves just below.
|
||||
buckets=None,
|
||||
gsum_value=None,
|
||||
)
|
||||
|
||||
# Create a gauge for each bucket.
|
||||
if buckets is not None:
|
||||
self.add_metric(labels=labelvalues, buckets=buckets, gsum_value=gsum_value)
|
||||
|
||||
|
||||
class GaugeBucketCollector(Collector):
|
||||
"""Like a Histogram, but the buckets are Gauges which are updated atomically.
|
||||
|
||||
@@ -407,17 +355,14 @@ class GaugeBucketCollector(Collector):
|
||||
__slots__ = (
|
||||
"_name",
|
||||
"_documentation",
|
||||
"_labelnames",
|
||||
"_bucket_bounds",
|
||||
"_metric",
|
||||
)
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
name: str,
|
||||
documentation: str,
|
||||
labelnames: Optional[StrSequence],
|
||||
buckets: Iterable[float],
|
||||
registry: CollectorRegistry = REGISTRY,
|
||||
):
|
||||
@@ -431,7 +376,6 @@ class GaugeBucketCollector(Collector):
|
||||
"""
|
||||
self._name = name
|
||||
self._documentation = documentation
|
||||
self._labelnames = labelnames if labelnames else ()
|
||||
|
||||
# the tops of the buckets
|
||||
self._bucket_bounds = [float(b) for b in buckets]
|
||||
@@ -443,7 +387,7 @@ class GaugeBucketCollector(Collector):
|
||||
|
||||
# We initially set this to None. We won't report metrics until
|
||||
# this has been initialised after a successful data update
|
||||
self._metric: Optional[GaugeHistogramMetricFamilyWithLabels] = None
|
||||
self._metric: Optional[GaugeHistogramMetricFamily] = None
|
||||
|
||||
registry.register(self)
|
||||
|
||||
@@ -452,26 +396,15 @@ class GaugeBucketCollector(Collector):
|
||||
if self._metric is not None:
|
||||
yield self._metric
|
||||
|
||||
def update_data(self, values: Iterable[float], labels: StrSequence = ()) -> None:
|
||||
def update_data(self, values: Iterable[float]) -> None:
|
||||
"""Update the data to be reported by the metric
|
||||
|
||||
The existing data is cleared, and each measurement in the input is assigned
|
||||
to the relevant bucket.
|
||||
"""
|
||||
self._metric = self._values_to_metric(values)
|
||||
|
||||
Args:
|
||||
values
|
||||
labels
|
||||
"""
|
||||
self._metric = self._values_to_metric(values, labels)
|
||||
|
||||
def _values_to_metric(
|
||||
self, values: Iterable[float], labels: StrSequence = ()
|
||||
) -> GaugeHistogramMetricFamilyWithLabels:
|
||||
"""
|
||||
Args:
|
||||
values
|
||||
labels
|
||||
"""
|
||||
def _values_to_metric(self, values: Iterable[float]) -> GaugeHistogramMetricFamily:
|
||||
total = 0.0
|
||||
bucket_values = [0 for _ in self._bucket_bounds]
|
||||
|
||||
@@ -489,13 +422,9 @@ class GaugeBucketCollector(Collector):
|
||||
# that bucket or below.
|
||||
accumulated_values = itertools.accumulate(bucket_values)
|
||||
|
||||
# The decision to add `SERVER_NAME_LABEL` is from the `GaugeBucketCollector`
|
||||
# usage itself (we don't enforce it here, one level up).
|
||||
return GaugeHistogramMetricFamilyWithLabels( # type: ignore[missing-server-name-label]
|
||||
name=self._name,
|
||||
documentation=self._documentation,
|
||||
labelnames=self._labelnames,
|
||||
labelvalues=labels,
|
||||
return GaugeHistogramMetricFamily(
|
||||
self._name,
|
||||
self._documentation,
|
||||
buckets=list(
|
||||
zip((str(b) for b in self._bucket_bounds), accumulated_values)
|
||||
),
|
||||
@@ -527,19 +456,16 @@ class CPUMetrics(Collector):
|
||||
line = s.read()
|
||||
raw_stats = line.split(") ", 1)[1].split(" ")
|
||||
|
||||
# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
|
||||
user = GaugeMetricFamily("process_cpu_user_seconds_total", "") # type: ignore[missing-server-name-label]
|
||||
user = GaugeMetricFamily("process_cpu_user_seconds_total", "")
|
||||
user.add_metric([], float(raw_stats[11]) / self.ticks_per_sec)
|
||||
yield user
|
||||
|
||||
# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
|
||||
sys = GaugeMetricFamily("process_cpu_system_seconds_total", "") # type: ignore[missing-server-name-label]
|
||||
sys = GaugeMetricFamily("process_cpu_system_seconds_total", "")
|
||||
sys.add_metric([], float(raw_stats[12]) / self.ticks_per_sec)
|
||||
yield sys
|
||||
|
||||
|
||||
# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
|
||||
REGISTRY.register(CPUMetrics()) # type: ignore[missing-server-name-label]
|
||||
REGISTRY.register(CPUMetrics())
|
||||
|
||||
|
||||
#
|
||||
@@ -569,40 +495,28 @@ event_processing_loop_room_count = Counter(
|
||||
|
||||
# Used to track where various components have processed in the event stream,
|
||||
# e.g. federation sending, appservice sending, etc.
|
||||
event_processing_positions = Gauge(
|
||||
"synapse_event_processing_positions", "", labelnames=["name", SERVER_NAME_LABEL]
|
||||
)
|
||||
event_processing_positions = Gauge("synapse_event_processing_positions", "", ["name"])
|
||||
|
||||
# Used to track the current max events stream position
|
||||
event_persisted_position = Gauge(
|
||||
"synapse_event_persisted_position", "", labelnames=[SERVER_NAME_LABEL]
|
||||
)
|
||||
event_persisted_position = Gauge("synapse_event_persisted_position", "")
|
||||
|
||||
# Used to track the received_ts of the last event processed by various
|
||||
# components
|
||||
event_processing_last_ts = Gauge(
|
||||
"synapse_event_processing_last_ts", "", labelnames=["name", SERVER_NAME_LABEL]
|
||||
)
|
||||
event_processing_last_ts = Gauge("synapse_event_processing_last_ts", "", ["name"])
|
||||
|
||||
# Used to track the lag processing events. This is the time difference
|
||||
# between the last processed event's received_ts and the time it was
|
||||
# finished being processed.
|
||||
event_processing_lag = Gauge(
|
||||
"synapse_event_processing_lag", "", labelnames=["name", SERVER_NAME_LABEL]
|
||||
)
|
||||
event_processing_lag = Gauge("synapse_event_processing_lag", "", ["name"])
|
||||
|
||||
event_processing_lag_by_event = Histogram(
|
||||
"synapse_event_processing_lag_by_event",
|
||||
"Time between an event being persisted and it being queued up to be sent to the relevant remote servers",
|
||||
labelnames=["name", SERVER_NAME_LABEL],
|
||||
["name"],
|
||||
)
|
||||
|
||||
# Build info of the running server.
|
||||
#
|
||||
# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`. We
|
||||
# consider this process-level because all Synapse homeservers running in the process
|
||||
# will use the same Synapse version.
|
||||
build_info = Gauge( # type: ignore[missing-server-name-label]
|
||||
build_info = Gauge(
|
||||
"synapse_build_info", "Build information", ["pythonversion", "version", "osversion"]
|
||||
)
|
||||
build_info.labels(
|
||||
@@ -618,57 +532,44 @@ threepid_send_requests = Histogram(
|
||||
" there is a request with try count of 4, then there would have been one"
|
||||
" each for 1, 2 and 3",
|
||||
buckets=(1, 2, 3, 4, 5, 10),
|
||||
labelnames=("type", "reason", SERVER_NAME_LABEL),
|
||||
labelnames=("type", "reason"),
|
||||
)
|
||||
|
||||
threadpool_total_threads = Gauge(
|
||||
"synapse_threadpool_total_threads",
|
||||
"Total number of threads currently in the threadpool",
|
||||
labelnames=["name", SERVER_NAME_LABEL],
|
||||
["name"],
|
||||
)
|
||||
|
||||
threadpool_total_working_threads = Gauge(
|
||||
"synapse_threadpool_working_threads",
|
||||
"Number of threads currently working in the threadpool",
|
||||
labelnames=["name", SERVER_NAME_LABEL],
|
||||
["name"],
|
||||
)
|
||||
|
||||
threadpool_total_min_threads = Gauge(
|
||||
"synapse_threadpool_min_threads",
|
||||
"Minimum number of threads configured in the threadpool",
|
||||
labelnames=["name", SERVER_NAME_LABEL],
|
||||
["name"],
|
||||
)
|
||||
|
||||
threadpool_total_max_threads = Gauge(
|
||||
"synapse_threadpool_max_threads",
|
||||
"Maximum number of threads configured in the threadpool",
|
||||
labelnames=["name", SERVER_NAME_LABEL],
|
||||
["name"],
|
||||
)
|
||||
|
||||
|
||||
def register_threadpool(*, name: str, server_name: str, threadpool: ThreadPool) -> None:
"""
Add metrics for the threadpool.
def register_threadpool(name: str, threadpool: ThreadPool) -> None:
"""Add metrics for the threadpool."""

Args:
name: The name of the threadpool, used to identify it in the metrics.
server_name: The homeserver name (used to label metrics) (this should be `hs.hostname`).
threadpool: The threadpool to register metrics for.
"""
threadpool_total_min_threads.labels(name).set(threadpool.min)
threadpool_total_max_threads.labels(name).set(threadpool.max)

threadpool_total_min_threads.labels(
name=name, **{SERVER_NAME_LABEL: server_name}
).set(threadpool.min)
threadpool_total_max_threads.labels(
name=name, **{SERVER_NAME_LABEL: server_name}
).set(threadpool.max)

threadpool_total_threads.labels(
name=name, **{SERVER_NAME_LABEL: server_name}
).set_function(lambda: len(threadpool.threads))
threadpool_total_working_threads.labels(
name=name, **{SERVER_NAME_LABEL: server_name}
).set_function(lambda: len(threadpool.working))
threadpool_total_threads.labels(name).set_function(lambda: len(threadpool.threads))
threadpool_total_working_threads.labels(name).set_function(
lambda: len(threadpool.working)
)
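A brief aside on the `set_function` pattern used above: the gauge reads a live value at scrape time instead of being pushed explicit `.set()` calls. The sketch below is illustrative only (the metric name and labels are invented) and is not the code being changed in this diff.

```python
from prometheus_client import Gauge
from twisted.python.threadpool import ThreadPool

# Illustrative metric; a real deployment would reuse the gauges defined above.
example_pool_size = Gauge(
    "example_threadpool_current_threads",
    "Number of threads currently in the pool",
    labelnames=["name", "server_name"],
)

def track_pool(name: str, server_name: str, pool: ThreadPool) -> None:
    # The lambda is re-evaluated on every scrape, so the gauge always
    # reflects the pool's current size.
    example_pool_size.labels(name=name, server_name=server_name).set_function(
        lambda: len(pool.threads)
    )

track_pool("database", "hs.example.com", ThreadPool(minthreads=1, maxthreads=4))
```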
|
||||
|
||||
class MetricsResource(Resource):
|
||||
|
||||
@@ -54,9 +54,8 @@ running_on_pypy = platform.python_implementation() == "PyPy"
|
||||
# Python GC metrics
|
||||
#
|
||||
|
||||
# These are process-level metrics, so they do not have the `SERVER_NAME_LABEL`.
|
||||
gc_unreachable = Gauge("python_gc_unreachable_total", "Unreachable GC objects", ["gen"]) # type: ignore[missing-server-name-label]
|
||||
gc_time = Histogram( # type: ignore[missing-server-name-label]
|
||||
gc_unreachable = Gauge("python_gc_unreachable_total", "Unreachable GC objects", ["gen"])
|
||||
gc_time = Histogram(
|
||||
"python_gc_time",
|
||||
"Time taken to GC (sec)",
|
||||
["gen"],
|
||||
@@ -83,8 +82,7 @@ gc_time = Histogram( # type: ignore[missing-server-name-label]
|
||||
|
||||
class GCCounts(Collector):
|
||||
def collect(self) -> Iterable[Metric]:
|
||||
# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
|
||||
cm = GaugeMetricFamily("python_gc_counts", "GC object counts", labels=["gen"]) # type: ignore[missing-server-name-label]
|
||||
cm = GaugeMetricFamily("python_gc_counts", "GC object counts", labels=["gen"])
|
||||
for n, m in enumerate(gc.get_count()):
|
||||
cm.add_metric([str(n)], m)
|
||||
|
||||
@@ -103,8 +101,7 @@ def install_gc_manager() -> None:
|
||||
if running_on_pypy:
|
||||
return
|
||||
|
||||
# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
|
||||
REGISTRY.register(GCCounts()) # type: ignore[missing-server-name-label]
|
||||
REGISTRY.register(GCCounts())
|
||||
|
||||
gc.disable()
|
||||
|
||||
@@ -179,8 +176,7 @@ class PyPyGCStats(Collector):
|
||||
#
|
||||
# Total time spent in GC: 0.073 # s.total_gc_time
|
||||
|
||||
# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
|
||||
pypy_gc_time = CounterMetricFamily( # type: ignore[missing-server-name-label]
|
||||
pypy_gc_time = CounterMetricFamily(
|
||||
"pypy_gc_time_seconds_total",
|
||||
"Total time spent in PyPy GC",
|
||||
labels=[],
|
||||
@@ -188,8 +184,7 @@ class PyPyGCStats(Collector):
|
||||
pypy_gc_time.add_metric([], s.total_gc_time / 1000)
|
||||
yield pypy_gc_time
|
||||
|
||||
# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
|
||||
pypy_mem = GaugeMetricFamily( # type: ignore[missing-server-name-label]
|
||||
pypy_mem = GaugeMetricFamily(
|
||||
"pypy_memory_bytes",
|
||||
"Memory tracked by PyPy allocator",
|
||||
labels=["state", "class", "kind"],
|
||||
@@ -213,5 +208,4 @@ class PyPyGCStats(Collector):
|
||||
|
||||
|
||||
if running_on_pypy:
|
||||
# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
|
||||
REGISTRY.register(PyPyGCStats()) # type: ignore[missing-server-name-label]
|
||||
REGISTRY.register(PyPyGCStats())
|
||||
|
@@ -62,8 +62,7 @@ logger = logging.getLogger(__name__)
# Twisted reactor metrics
#

# This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
tick_time = Histogram( # type: ignore[missing-server-name-label]
tick_time = Histogram(
    "python_twisted_reactor_tick_time",
    "Tick time of the Twisted reactor (sec)",
    buckets=[0.001, 0.002, 0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1, 2, 5],
@@ -115,8 +114,7 @@ class ReactorLastSeenMetric(Collector):
        self._call_wrapper = call_wrapper

    def collect(self) -> Iterable[Metric]:
        # This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
        cm = GaugeMetricFamily( # type: ignore[missing-server-name-label]
        cm = GaugeMetricFamily(
            "python_twisted_reactor_last_seen",
            "Seconds since the Twisted reactor was last seen",
        )
@@ -167,5 +165,4 @@ except Exception as e:


if wrapper:
    # This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
    REGISTRY.register(ReactorLastSeenMetric(wrapper)) # type: ignore[missing-server-name-label]
    REGISTRY.register(ReactorLastSeenMetric(wrapper))

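`tick_time` is a plain histogram with buckets tuned for sub-second reactor ticks. A short sketch of how such a histogram is typically fed; the metric name and the timed function are illustrative only:

    import time
    from typing import Callable

    from prometheus_client import Histogram

    # Buckets chosen for latencies between 1ms and 5s, mirroring the hunk above.
    tick_time_example = Histogram(
        "example_reactor_tick_time",
        "Time taken by one iteration of a polling loop (sec)",
        buckets=[0.001, 0.002, 0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1, 2, 5],
    )


    def run_one_tick(work: Callable[[], None]) -> None:
        start = time.monotonic()
        work()
        # observe() picks the right bucket and updates the _sum/_count series.
        tick_time_example.observe(time.monotonic() - start)
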
@@ -167,9 +167,7 @@ class _Collector(Collector):
        yield from m.collect()


# The `SERVER_NAME_LABEL` is included in the individual metrics added to this registry,
# so we don't need to worry about it on the collector itself.
REGISTRY.register(_Collector()) # type: ignore[missing-server-name-label]
REGISTRY.register(_Collector())


class _BackgroundProcess:

@@ -22,7 +22,6 @@ from typing import TYPE_CHECKING

import attr

from synapse.metrics import SERVER_NAME_LABEL
from synapse.metrics.background_process_metrics import run_as_background_process

if TYPE_CHECKING:
@@ -34,7 +33,6 @@ from prometheus_client import Gauge
current_dau_gauge = Gauge(
    "synapse_admin_daily_active_users",
    "Current daily active users count",
    labelnames=[SERVER_NAME_LABEL],
)


@@ -91,6 +89,4 @@ class CommonUsageMetricsManager:
        """Update the Prometheus gauges."""
        metrics = await self._collect()

        current_dau_gauge.labels(
            **{SERVER_NAME_LABEL: self.server_name},
        ).set(float(metrics.daily_active_users))
        current_dau_gauge.set(float(metrics.daily_active_users))

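This hunk shows the two gauge styles side by side: with `labelnames=[SERVER_NAME_LABEL]` every write has to go through `.labels(...)`, while the unlabelled gauge is set directly. A minimal sketch of both, with invented metric names and an assumed label key:

    from prometheus_client import Gauge

    SERVER_NAME_LABEL = "server_name"  # assumed label key

    # Labelled variant: one time series per homeserver name.
    dau_labelled = Gauge(
        "example_daily_active_users",
        "Current daily active users count",
        labelnames=[SERVER_NAME_LABEL],
    )
    dau_labelled.labels(**{SERVER_NAME_LABEL: "example.com"}).set(42.0)

    # Unlabelled variant: a single process-wide time series.
    dau_plain = Gauge("example_daily_active_users_total", "Current daily active users count")
    dau_plain.set(42.0)
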
@@ -188,8 +188,7 @@ def _setup_jemalloc_stats() -> None:
        def collect(self) -> Iterable[Metric]:
            stats.refresh_stats()

            # This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
            g = GaugeMetricFamily( # type: ignore[missing-server-name-label]
            g = GaugeMetricFamily(
                "jemalloc_stats_app_memory_bytes",
                "The stats reported by jemalloc",
                labels=["type"],
@@ -231,8 +230,7 @@ def _setup_jemalloc_stats() -> None:

            yield g

    # This is a process-level metric, so it does not have the `SERVER_NAME_LABEL`.
    REGISTRY.register(JemallocCollector()) # type: ignore[missing-server-name-label]
    REGISTRY.register(JemallocCollector())

    logger.debug("Added jemalloc stats")


@@ -23,7 +23,6 @@ import logging
from typing import (
    TYPE_CHECKING,
    Any,
    Awaitable,
    Callable,
    Collection,
    Dict,
@@ -81,9 +80,7 @@ from synapse.logging.context import (
    make_deferred_yieldable,
    run_in_background,
)
from synapse.metrics.background_process_metrics import (
    run_as_background_process as _run_as_background_process,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.module_api.callbacks.account_validity_callbacks import (
    IS_USER_EXPIRED_CALLBACK,
    ON_LEGACY_ADMIN_REQUEST,
@@ -161,9 +158,6 @@ from synapse.util.caches.descriptors import CachedFunction, cached as _cached
from synapse.util.frozenutils import freeze

if TYPE_CHECKING:
    # Old versions don't have `LiteralString`
    from typing_extensions import LiteralString

    from synapse.app.generic_worker import GenericWorkerStore
    from synapse.server import HomeServer

@@ -222,65 +216,6 @@ class UserIpAndAgent:
    last_seen: int


def run_as_background_process(
    desc: "LiteralString",
    func: Callable[..., Awaitable[Optional[T]]],
    *args: Any,
    bg_start_span: bool = True,
    **kwargs: Any,
) -> "defer.Deferred[Optional[T]]":
    """
    XXX: Deprecated: use `ModuleApi.run_as_background_process` instead.

    Run the given function in its own logcontext, with resource metrics

    This should be used to wrap processes which are fired off to run in the
    background, instead of being associated with a particular request.

    It returns a Deferred which completes when the function completes, but it doesn't
    follow the synapse logcontext rules, which makes it appropriate for passing to
    clock.looping_call and friends (or for firing-and-forgetting in the middle of a
    normal synapse async function).

    Args:
        desc: a description for this background process type
        server_name: The homeserver name that this background process is being run for
            (this should be `hs.hostname`).
        func: a function, which may return a Deferred or a coroutine
        bg_start_span: Whether to start an opentracing span. Defaults to True.
            Should only be disabled for processes that will not log to or tag
            a span.
        args: positional args for func
        kwargs: keyword args for func

    Returns:
        Deferred which returns the result of func, or `None` if func raises.
        Note that the returned Deferred does not follow the synapse logcontext
        rules.
    """

    logger.warning(
        "Using deprecated `run_as_background_process` that's exported from the Module API. "
        "Prefer `ModuleApi.run_as_background_process` instead.",
    )

    # Historically, since this function is exported from the module API, we can't just
    # change the signature to require a `server_name` argument. Since
    # `run_as_background_process` internally in Synapse requires `server_name` now, we
    # just have to stub this out with a placeholder value and tell people to use the new
    # function instead.
    stub_server_name = "synapse_module_running_from_unknown_server"

    return _run_as_background_process(
        desc,
        stub_server_name,
        func,
        *args,
        bg_start_span=bg_start_span,
        **kwargs,
    )


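The hunk above keeps the module-level `run_as_background_process` only as a deprecated shim that logs a warning and substitutes a placeholder server name. A hedged sketch of the calling side, assuming the imports behave as the deprecation message suggests:

    from synapse.module_api import ModuleApi, run_as_background_process


    async def prune_stale_entries() -> None:
        ...  # some periodic module work


    def schedule_work(api: ModuleApi) -> None:
        # Deprecated spelling: the shim logs a warning and attributes the work
        # to the placeholder "synapse_module_running_from_unknown_server".
        run_as_background_process("prune_stale_entries", prune_stale_entries)

        # Preferred spelling per the deprecation notice: the ModuleApi method
        # supplies the real homeserver name itself.
        api.run_as_background_process("prune_stale_entries", prune_stale_entries)
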
def cached(
    *,
    max_entries: int = 1000,
@@ -342,9 +277,7 @@ class ModuleApi:
        self._device_handler = hs.get_device_handler()
        self.custom_template_dir = hs.config.server.custom_template_directory
        self._callbacks = hs.get_module_api_callbacks()
        self._auth_delegation_enabled = (
            hs.config.mas.enabled or hs.config.experimental.msc3861.enabled
        )
        self.msc3861_oauth_delegation_enabled = hs.config.experimental.msc3861.enabled
        self._event_serializer = hs.get_event_client_serializer()

        try:
@@ -551,7 +484,7 @@ class ModuleApi:

        Added in Synapse v1.46.0.
        """
        if self._auth_delegation_enabled:
        if self.msc3861_oauth_delegation_enabled:
            raise ConfigError(
                "Cannot use password auth provider callbacks when OAuth delegation is enabled"
            )
@@ -1390,9 +1323,10 @@ class ModuleApi:

        if self._hs.config.worker.run_background_tasks or run_on_all_instances:
            self._clock.looping_call(
                self.run_as_background_process,
                run_as_background_process,
                msec,
                desc,
                self.server_name,
                lambda: maybe_awaitable(f(*args, **kwargs)),
            )
        else:
@@ -1448,8 +1382,9 @@ class ModuleApi:
        return self._clock.call_later(
            # convert ms to seconds as needed by call_later.
            msec * 0.001,
            self.run_as_background_process,
            run_as_background_process,
            desc,
            self.server_name,
            lambda: maybe_awaitable(f(*args, **kwargs)),
        )

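Both scheduling hunks convert the caller's milliseconds to seconds and hand an async callable to the background-process wrapper. Roughly the same shape with plain Twisted, without Synapse's `Clock` wrapper (names are illustrative):

    from twisted.internet import defer, task


    async def send_heartbeat() -> None:
        ...  # periodic async work


    def start_heartbeat(msec: int) -> task.LoopingCall:
        # LoopingCall wants an interval in seconds, hence the ms conversion, and
        # a callable returning a Deferred; ensureDeferred adapts the coroutine.
        loop = task.LoopingCall(lambda: defer.ensureDeferred(send_heartbeat()))
        loop.start(msec * 0.001, now=False)
        return loop
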
@@ -1655,44 +1590,6 @@ class ModuleApi:

        return {key: state_events[event_id] for key, event_id in state_ids.items()}

    def run_as_background_process(
        self,
        desc: "LiteralString",
        func: Callable[..., Awaitable[Optional[T]]],
        *args: Any,
        bg_start_span: bool = True,
        **kwargs: Any,
    ) -> "defer.Deferred[Optional[T]]":
        """Run the given function in its own logcontext, with resource metrics

        This should be used to wrap processes which are fired off to run in the
        background, instead of being associated with a particular request.

        It returns a Deferred which completes when the function completes, but it doesn't
        follow the synapse logcontext rules, which makes it appropriate for passing to
        clock.looping_call and friends (or for firing-and-forgetting in the middle of a
        normal synapse async function).

        Args:
            desc: a description for this background process type
            server_name: The homeserver name that this background process is being run for
                (this should be `hs.hostname`).
            func: a function, which may return a Deferred or a coroutine
            bg_start_span: Whether to start an opentracing span. Defaults to True.
                Should only be disabled for processes that will not log to or tag
                a span.
            args: positional args for func
            kwargs: keyword args for func

        Returns:
            Deferred which returns the result of func, or `None` if func raises.
            Note that the returned Deferred does not follow the synapse logcontext
            rules.
        """
        return _run_as_background_process(
            desc, self.server_name, func, *args, bg_start_span=bg_start_span, **kwargs
        )

    async def defer_to_thread(
        self,
        f: Callable[P, T],

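The truncated `defer_to_thread` signature at the end of this hunk is the ModuleApi entry point for pushing blocking work off the reactor thread. As illustration only, the underlying Twisted primitive looks like this; it is not the ModuleApi implementation itself:

    import hashlib

    from twisted.internet.threads import deferToThread


    def expensive_hash(data: bytes) -> str:
        # CPU-bound/blocking work that should not run on the reactor thread.
        return hashlib.sha256(data).hexdigest()


    def hash_in_thread(data: bytes):
        # Runs expensive_hash in the reactor's thread pool and returns a
        # Deferred that fires with the result.
        return deferToThread(expensive_hash, data)
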
@@ -29,7 +29,6 @@ from typing import (
    Iterable,
    List,
    Literal,
    Mapping,
    Optional,
    Set,
    Tuple,
@@ -264,10 +263,7 @@ class Notifier:
        # This is not a very cheap test to perform, but it's only executed
        # when rendering the metrics page, which is likely once per minute at
        # most when scraping it.
        #
        # Ideally, we'd use `Mapping[Tuple[str], int]` here but mypy doesn't like it.
        # This is close enough and better than a type ignore.
        def count_listeners() -> Mapping[Tuple[str, ...], int]:
        def count_listeners() -> int:
            all_user_streams: Set[_NotifierUserStream] = set()

            for streams in list(self.room_to_user_streams.values()):
@@ -275,34 +271,18 @@ class Notifier:
            for stream in list(self.user_to_user_stream.values()):
                all_user_streams.add(stream)

            return {
                (self.server_name,): sum(
                    stream.count_listeners() for stream in all_user_streams
                )
            }
            return sum(stream.count_listeners() for stream in all_user_streams)

        LaterGauge("synapse_notifier_listeners", "", [], count_listeners)

        LaterGauge(
            name="synapse_notifier_listeners",
            desc="",
            labelnames=[SERVER_NAME_LABEL],
            caller=count_listeners,
        )

        LaterGauge(
            name="synapse_notifier_rooms",
            desc="",
            labelnames=[SERVER_NAME_LABEL],
            caller=lambda: {
                (self.server_name,): count(
                    bool, list(self.room_to_user_streams.values())
                )
            },
            "synapse_notifier_rooms",
            "",
            [],
            lambda: count(bool, list(self.room_to_user_streams.values())),
        )
        LaterGauge(
            name="synapse_notifier_users",
            desc="",
            labelnames=[SERVER_NAME_LABEL],
            caller=lambda: {(self.server_name,): len(self.user_to_user_stream)},
            "synapse_notifier_users", "", [], lambda: len(self.user_to_user_stream)
        )

    def add_replication_callback(self, cb: Callable[[], None]) -> None:

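In the labelled form, the `LaterGauge` caller returns a mapping from label-value tuples to numbers instead of a bare count. `LaterGauge` is a Synapse helper, so the following is only a rough stand-in showing how such a callback-driven gauge could be rendered through a prometheus_client collector (all names invented):

    from typing import Callable, Iterable, Mapping, Tuple

    from prometheus_client import REGISTRY
    from prometheus_client.core import GaugeMetricFamily, Metric
    from prometheus_client.registry import Collector


    class CallbackGauge(Collector):
        """Evaluates `caller` at scrape time; one sample per label-value tuple."""

        def __init__(
            self,
            name: str,
            desc: str,
            labelnames: Tuple[str, ...],
            caller: Callable[[], Mapping[Tuple[str, ...], float]],
        ) -> None:
            self._name = name
            self._desc = desc
            self._labelnames = labelnames
            self._caller = caller

        def collect(self) -> Iterable[Metric]:
            g = GaugeMetricFamily(self._name, self._desc, labels=list(self._labelnames))
            for label_values, value in self._caller().items():
                g.add_metric(list(label_values), value)
            yield g


    # e.g. one "listeners" sample per homeserver name:
    REGISTRY.register(
        CallbackGauge(
            "example_notifier_listeners",
            "Number of active sync streams",
            ("server_name",),
            lambda: {("example.com",): 3.0},
        )
    )
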
@@ -25,7 +25,6 @@ from typing import (
    Any,
    Collection,
    Dict,
    FrozenSet,
    List,
    Mapping,
    Optional,
@@ -478,18 +477,8 @@ class BulkPushRuleEvaluator:
            event.room_version.msc3931_push_features,
            self.hs.config.experimental.msc1767_enabled, # MSC3931 flag
            self.hs.config.experimental.msc4210_enabled,
            self.hs.config.experimental.msc4306_enabled,
        )

        msc4306_thread_subscribers: Optional[FrozenSet[str]] = None
        if self.hs.config.experimental.msc4306_enabled and thread_id != MAIN_TIMELINE:
            # pull out, in batch, all local subscribers to this thread
            # (in the common case, they will all be getting processed for push
            # rules right now)
            msc4306_thread_subscribers = await self.store.get_subscribers_to_thread(
                event.room_id, thread_id
            )

        for uid, rules in rules_by_user.items():
            if event.sender == uid:
                continue
@@ -514,13 +503,7 @@ class BulkPushRuleEvaluator:
                # current user, it'll be added to the dict later.
                actions_by_user[uid] = []

            msc4306_thread_subscription_state: Optional[bool] = None
            if msc4306_thread_subscribers is not None:
                msc4306_thread_subscription_state = uid in msc4306_thread_subscribers

            actions = evaluator.run(
                rules, uid, display_name, msc4306_thread_subscription_state
            )
            actions = evaluator.run(rules, uid, display_name)
            if "notify" in actions:
                # Push rules say we should notify the user of this event
                actions_by_user[uid] = actions

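The MSC4306 branch fetches the set of local thread subscribers once per event and then does a cheap membership test per user inside the push-rule loop. A stripped-down sketch of that batching shape; the function and types are stand-ins, not Synapse's actual storage API:

    from typing import Dict, FrozenSet, Iterable, Optional


    def thread_subscription_states(
        user_ids: Iterable[str],
        thread_subscribers: Optional[FrozenSet[str]],
    ) -> Dict[str, Optional[bool]]:
        # One batched lookup produced `thread_subscribers`; per user we only do
        # a set-membership test. None means "no subscription state available"
        # (main timeline, or the experimental flag is disabled).
        return {
            uid: (uid in thread_subscribers) if thread_subscribers is not None else None
            for uid in user_ids
        }
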
Some files were not shown because too many files have changed in this diff.