Mirror of https://github.com/element-hq/synapse.git, synced 2025-12-09 01:30:18 +00:00.

Compare commits: quenting/d... ... dependabot (28 commits)
| Author | SHA1 | Date |
|---|---|---|
| | f7103be392 | |
| | 2c236be058 | |
| | 458e6410e8 | |
| | 1dd5f68251 | |
| | 8344c944b1 | |
| | b34342eedf | |
| | b7e7f537f1 | |
| | 8fb9c105c9 | |
| | a82b8a966a | |
| | f5f2c9587e | |
| | 0be7fe926d | |
| | 98f84256e9 | |
| | 15b927ffab | |
| | 7fa88d6d07 | |
| | 9ecf192089 | |
| | 6838a1020b | |
| | a77befcc29 | |
| | cedb8cd045 | |
| | bb84121553 | |
| | 7de4fdf61a | |
| | 8fc9aa70a5 | |
| | 3db73b974f | |
| | c51bd89c3b | |
| | 7de9ac01a0 | |
| | 4e118aecd0 | |
| | 11a11414c5 | |
| | 8a4e2e826d | |
| | 875269eb53 | |
.github/workflows/docker.yml (vendored, 2 changed lines)
```diff
@@ -120,7 +120,7 @@ jobs:
         uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1

       - name: Install Cosign
-        uses: sigstore/cosign-installer@398d4b0eeef1380460a10c8013a76f728fb906ac # v3.9.1
+        uses: sigstore/cosign-installer@d58896d6a1865668819e1d91763c7751a165e159 # v3.9.2

       - name: Calculate docker image tag
         uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
```
CHANGES.md (64 changed lines)
```diff
@@ -1,3 +1,67 @@
+# Synapse 1.135.0rc1 (2025-07-22)
+
+### Features
+
+- Add `recaptcha_private_key_path` and `recaptcha_public_key_path` config option. ([\#17984](https://github.com/element-hq/synapse/issues/17984), [\#18684](https://github.com/element-hq/synapse/issues/18684))
+- Add plain-text handling for rich-text topics as per [MSC3765](https://github.com/matrix-org/matrix-spec-proposals/pull/3765). ([\#18195](https://github.com/element-hq/synapse/issues/18195))
+- If enabled by the user, server admins will see [soft failed](https://spec.matrix.org/v1.13/server-server-api/#soft-failure) events over the Client-Server API. ([\#18238](https://github.com/element-hq/synapse/issues/18238))
+- Add experimental support for [MSC4277: Harmonizing the reporting endpoints](https://github.com/matrix-org/matrix-spec-proposals/pull/4277). ([\#18263](https://github.com/element-hq/synapse/issues/18263))
+- Add ability to limit amount of media uploaded by a user in a given time period. ([\#18527](https://github.com/element-hq/synapse/issues/18527))
+- Enable workers to write directly to the device lists stream and handle device list updates, reducing load on the main process. ([\#18581](https://github.com/element-hq/synapse/issues/18581))
+- Support arbitrary profile fields. Contributed by @clokep. ([\#18635](https://github.com/element-hq/synapse/issues/18635))
+- Advertise support for Matrix v1.12. ([\#18647](https://github.com/element-hq/synapse/issues/18647))
+- Add an option to issue redactions as an admin user via the [admin redaction endpoint](https://element-hq.github.io/synapse/latest/admin_api/user_admin_api.html#redact-all-the-events-of-a-user). ([\#18671](https://github.com/element-hq/synapse/issues/18671))
+- Add experimental and incomplete support for [MSC4306: Thread Subscriptions](https://github.com/matrix-org/matrix-spec-proposals/blob/rei/msc_thread_subscriptions/proposals/4306-thread-subscriptions.md). ([\#18674](https://github.com/element-hq/synapse/issues/18674))
+- Include `event_id` when getting state with `?format=event`. Contributed by @tulir @ Beeper. ([\#18675](https://github.com/element-hq/synapse/issues/18675))
+
+### Bugfixes
+
+- Fix CPU and database spinning when retrying sending events to servers whilst at the same time purging those events. ([\#18499](https://github.com/element-hq/synapse/issues/18499))
+- Don't allow creation of tags with names longer than 255 bytes, [as per the spec](https://spec.matrix.org/v1.15/client-server-api/#events-14). ([\#18660](https://github.com/element-hq/synapse/issues/18660))
+- Fix `sliding_sync_connections`-related errors when porting from SQLite to Postgres. ([\#18677](https://github.com/element-hq/synapse/issues/18677))
+- Fix the MAS integration not working when Synapse is started with `--daemonize` or using `synctl`. ([\#18691](https://github.com/element-hq/synapse/issues/18691))
+
+### Improved Documentation
+
+- Document that some config options for the user directory are in violation of the Matrix spec. ([\#18548](https://github.com/element-hq/synapse/issues/18548))
+- Update `rc_delayed_event_mgmt` docs to the actual nesting level. Contributed by @HarHarLinks. ([\#18692](https://github.com/element-hq/synapse/issues/18692))
+
+### Internal Changes
+
+- Add a dedicated internal API for Matrix Authentication Service to Synapse communication. ([\#18520](https://github.com/element-hq/synapse/issues/18520))
+- Allow user registrations to be done on workers. ([\#18552](https://github.com/element-hq/synapse/issues/18552))
+- Remove unnecessary HTTP replication calls. ([\#18564](https://github.com/element-hq/synapse/issues/18564))
+- Refactor `Measure` block metrics to be homeserver-scoped. ([\#18601](https://github.com/element-hq/synapse/issues/18601))
+- Refactor cache metrics to be homeserver-scoped. ([\#18604](https://github.com/element-hq/synapse/issues/18604))
+- Unbreak "Latest dependencies" workflow by using the `--without dev` poetry option instead of removed `--no-dev`. ([\#18617](https://github.com/element-hq/synapse/issues/18617))
+- Update URL Preview code to work with `lxml` 6.0.0+. ([\#18622](https://github.com/element-hq/synapse/issues/18622))
+- Use `markdown-it-py` instead of `commonmark` in the release script. ([\#18637](https://github.com/element-hq/synapse/issues/18637))
+- Fix typing errors with upgraded mypy version. ([\#18653](https://github.com/element-hq/synapse/issues/18653))
+- Add doc comment explaining that config files are shallowly merged. ([\#18664](https://github.com/element-hq/synapse/issues/18664))
+- Minor speed up of insertion into `stream_positions` table. ([\#18672](https://github.com/element-hq/synapse/issues/18672))
+- Remove unused `allow_no_prev_events` option when creating an event. ([\#18676](https://github.com/element-hq/synapse/issues/18676))
+- Clean up `MetricsResource` and Prometheus hacks. ([\#18687](https://github.com/element-hq/synapse/issues/18687))
+- Fix dirty `Cargo.lock` changes appearing after install (`base64`). ([\#18689](https://github.com/element-hq/synapse/issues/18689))
+- Prevent dirty `Cargo.lock` changes from install. ([\#18693](https://github.com/element-hq/synapse/issues/18693))
+- Correct spelling of 'Admin token used' log line. ([\#18697](https://github.com/element-hq/synapse/issues/18697))
+- Reduce log spam when client stops downloading media while it is being streamed to them. ([\#18699](https://github.com/element-hq/synapse/issues/18699))
+
+
+
+### Updates to locked dependencies
+
+* Bump authlib from 1.6.0 to 1.6.1. ([\#18704](https://github.com/element-hq/synapse/issues/18704))
+* Bump base64 from 0.21.7 to 0.22.1. ([\#18666](https://github.com/element-hq/synapse/issues/18666))
+* Bump jsonschema from 4.24.0 to 4.25.0. ([\#18707](https://github.com/element-hq/synapse/issues/18707))
+* Bump lxml from 5.4.0 to 6.0.0. ([\#18631](https://github.com/element-hq/synapse/issues/18631))
+* Bump mypy from 1.13.0 to 1.16.1. ([\#18653](https://github.com/element-hq/synapse/issues/18653))
+* Bump once_cell from 1.19.0 to 1.21.3. ([\#18710](https://github.com/element-hq/synapse/issues/18710))
+* Bump phonenumbers from 9.0.8 to 9.0.9. ([\#18681](https://github.com/element-hq/synapse/issues/18681))
+* Bump ruff from 0.12.2 to 0.12.5. ([\#18683](https://github.com/element-hq/synapse/issues/18683), [\#18705](https://github.com/element-hq/synapse/issues/18705))
+* Bump serde_json from 1.0.140 to 1.0.141. ([\#18709](https://github.com/element-hq/synapse/issues/18709))
+* Bump sigstore/cosign-installer from 3.9.1 to 3.9.2. ([\#18708](https://github.com/element-hq/synapse/issues/18708))
+* Bump types-jsonschema from 4.24.0.20250528 to 4.24.0.20250708. ([\#18682](https://github.com/element-hq/synapse/issues/18682))
+
 # Synapse 1.134.0 (2025-07-15)

 No significant changes since 1.134.0rc1.
```
Cargo.lock (generated, 8 changed lines)
```diff
@@ -887,9 +887,9 @@ dependencies = [

 [[package]]
 name = "once_cell"
-version = "1.19.0"
+version = "1.21.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3fdb12b2476b595f9358c5161aa467c2438859caa136dec86c26fdd2efe17b92"
+checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"

 [[package]]
 name = "openssl-probe"
@@ -1355,9 +1355,9 @@ dependencies = [

 [[package]]
 name = "serde_json"
-version = "1.0.140"
+version = "1.0.141"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "20068b6e96dc6c9bd23e01df8827e6c7e1f2fddd43c21810382803c136b99373"
+checksum = "30b9eff21ebe718216c6ec64e1d9ac57087aad11efc64e32002bce4a0d4c03d3"
 dependencies = [
  "itoa",
  "memchr",
```
changelog.d changes — each file holds a one-line changelog fragment. The deleted entries (`@@ -1 +0,0 @@`) correspond to the new CHANGES.md section above; the added files (`@@ -0,0 +1 @@`) queue changes for a future release:

- Deleted: Add `recaptcha_private_key_path` and `recaptcha_public_key_path` config option.
- Deleted: Add plain-text handling for rich-text topics as per [MSC3765](https://github.com/matrix-org/matrix-spec-proposals/pull/3765).
- Deleted: If enabled by the user, server admins will see [soft failed](https://spec.matrix.org/v1.13/server-server-api/#soft-failure) events over the Client-Server API.
- Deleted: Add experimental support for [MSC4277](https://github.com/matrix-org/matrix-spec-proposals/pull/4277).
- Added (changelog.d/18474.misc): Add debug logging for HMAC digest verification failures when using the admin API to register users.
- Deleted: Fix CPU and database spinning when retrying sending events to servers whilst at the same time purging those events.
- Added (changelog.d/18514.feature): Add configurable rate limiting for the creation of rooms.
- Deleted: Add ability to limit amount uploaded by a user in a given time period.
- Added (changelog.d/18540.feature): Add support for [MSC4293](https://github.com/matrix-org/matrix-spec-proposals/pull/4293) - Redact on Kick/Ban.
- Deleted: Document that some config options for the user directory are in violation of the Matrix spec.
- Deleted: Allow user registrations to be done on workers.
- Deleted: Remove unnecessary HTTP replication calls.
- Added (changelog.d/18580.misc): Fix config documentation generation script on Windows by enforcing UTF-8.
- Deleted: Enable workers to write directly to the device lists stream and handle device list updates, reducing load on the main process.
- Deleted: Refactor `Measure` block metrics to be homeserver-scoped.
- Deleted: Refactor cache metrics to be homeserver-scoped.
- Deleted: Unbreak "Latest dependencies" workflow by using the `--without dev` poetry option instead of removed `--no-dev`.
- Deleted: Update URL Preview code to work with `lxml` 6.0.0+.
- Deleted: Support arbitrary profile fields.
- Deleted: Use `markdown-it-py` instead of `commonmark` in the release script.
- Deleted: Advertise support for Matrix v1.12.
- Deleted: Fix typing errors with upgraded mypy version.
- Added (changelog.d/18656.misc): Refactor `Counter` metrics to be homeserver-scoped.
- Deleted: Don't allow creation of tags with names longer than 255 bytes, as per the spec.
- Deleted: Add doc comment explaining that config files are shallowly merged.
- Added (changelog.d/18670.misc): Refactor background process metrics to be homeserver-scoped.
- Deleted: Minor speed up of insertion into `stream_positions` table.
- Deleted: Include `event_id` when getting state with `?format=event`. Contributed by @tulir @ Beeper.
- Deleted: Remove unused `allow_no_prev_events` option when creating an event.
- Deleted: Fix sliding_sync_connections related errors when porting from SQLite to Postgres.
- Deleted: Add `recaptcha_private_key_path` and `recaptcha_public_key_path` config option.
- Added (changelog.d/18686.feature): Add ability to configure forward/outbound proxy via homeserver config instead of environment variables. See `http_proxy`, `https_proxy`, `no_proxy_hosts`.
- Deleted: Clean up `MetricsResource` and Prometheus hacks.
- Deleted: Fix dirty `Cargo.lock` changes appearing after install (`base64`).
- Deleted: Fix the MAS integration not working when Synapse is started with `--daemonize` or using `synctl`.
- Deleted: Update `rc_delayed_event_mgmt` docs to the actual nesting level. Contributed by @HarHarLinks.
- Deleted: Prevent dirty `Cargo.lock` changes from install.
- Added (changelog.d/18696.bugfix): Allow return code 403 (allowed by C2S Spec since v1.2) when fetching profiles via federation.
- Deleted: Correct spelling of 'Admin token used' log line.
- Added (changelog.d/18718.misc): Reduce database usage in Sliding Sync by not querying for background update completion after the update is known to be complete.
- Added (changelog.d/18726.bugfix): Register the MSC4306 endpoints in the CS API when the experimental feature is enabled.
- Added (changelog.d/18727.misc): Bump minimum version bound on Twisted to 21.2.0.
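One of the entries above (changelog.d/18474.misc) adds debug logging for HMAC digest verification failures in the shared-secret admin registration API. For reference, the `mac` a client sends there is, per Synapse's shared-secret registration docs, an HMAC-SHA1 keyed with `registration_shared_secret` over the NUL-separated nonce, username, password and admin flag. A minimal sketch, with illustrative values:

```python
import hashlib
import hmac


def registration_mac(shared_secret: str, nonce: str, username: str,
                     password: str, admin: bool = False) -> str:
    """Compute the `mac` field for the shared-secret register endpoint:
    HMAC-SHA1 over nonce, username, password and the admin flag,
    NUL-separated, keyed with registration_shared_secret."""
    mac = hmac.new(shared_secret.encode("utf-8"), digestmod=hashlib.sha1)
    mac.update(nonce.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(username.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf-8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()


# Illustrative values only; a real nonce comes from a prior GET to the
# register endpoint.
print(registration_mac("shared-secret", "abcd1234", "alice", "correct-horse"))
```

A digest mismatch on the server side (for example, from a stale nonce or the wrong secret) is exactly the failure the new debug logging is meant to surface.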
debian/changelog (vendored, 6 changed lines)
```diff
@@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.135.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.135.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 22 Jul 2025 12:08:37 +0100
+
 matrix-synapse-py3 (1.134.0) stable; urgency=medium

   * New Synapse release 1.134.0.
```
```diff
@@ -98,6 +98,10 @@ rc_delayed_event_mgmt:
   per_second: 9999
   burst_count: 9999

+rc_room_creation:
+  per_second: 9999
+  burst_count: 9999
+
 federation_rr_transactions_per_room_per_second: 9999

 allow_device_name_lookup_over_federation: true
```
```diff
@@ -327,6 +327,15 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
         "shared_extra_conf": {},
         "worker_extra_conf": "",
     },
+    "thread_subscriptions": {
+        "app": "synapse.app.generic_worker",
+        "listener_resources": ["client", "replication"],
+        "endpoint_patterns": [
+            "^/_matrix/client/unstable/io.element.msc4306/.*",
+        ],
+        "shared_extra_conf": {},
+        "worker_extra_conf": "",
+    },
 }

 # Templates for sections that may be inserted multiple times in config files
@@ -427,6 +436,7 @@ def add_worker_roles_to_shared_config(
         "to_device",
         "typing",
         "push_rules",
+        "thread_subscriptions",
     }

 # Worker-type specific sharding config. Now a single worker can fulfill multiple
```
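The `endpoint_patterns` added above are regexes used to route incoming request paths to a worker type. As a toy illustration of that matching (the helper and the trimmed-down dict are mine, not the script's API):

```python
import re

# Trimmed stand-in for the WORKERS_CONFIG entries shown above; only the
# routing-relevant field is kept.
WORKER_ENDPOINT_PATTERNS = {
    "thread_subscriptions": [
        r"^/_matrix/client/unstable/io.element.msc4306/.*",
    ],
}


def worker_for(path: str) -> str:
    """Return the first worker type whose endpoint_patterns match the path."""
    for worker, patterns in WORKER_ENDPOINT_PATTERNS.items():
        if any(re.match(p, path) for p in patterns):
            return worker
    return "main"  # unmatched requests stay on the main process


print(worker_for("/_matrix/client/unstable/io.element.msc4306/foo"))
print(worker_for("/_matrix/client/v3/sync"))
```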
````diff
@@ -1227,7 +1227,7 @@ See also the

 ## Controlling whether a user is shadow-banned

 Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
 A shadow-banned user receives successful responses to their client-server API requests,
 but the events are not propagated into rooms. This can be an effective tool as it
 (hopefully) takes longer for the user to realise they are being moderated before
@@ -1464,8 +1464,11 @@ _Added in Synapse 1.72.0._

 ## Redact all the events of a user

-This endpoint allows an admin to redact the events of a given user. There are no restrictions on redactions for a
-local user. By default, we puppet the user who sent the message to redact it themselves. Redactions for non-local users are issued using the admin user, and will fail in rooms where the admin user is not admin/does not have the specified power level to issue redactions.
+This endpoint allows an admin to redact the events of a given user. There are no restrictions on
+redactions for a local user. By default, we puppet the user who sent the message to redact it themselves.
+Redactions for non-local users are issued using the admin user, and will fail in rooms where the
+admin user is not admin/does not have the specified power level to issue redactions. An option
+is provided to override the default and allow the admin to issue the redactions in all cases.

 The API is
 ```
@@ -1475,7 +1478,7 @@ POST /_synapse/admin/v1/user/$user_id/redact
 {
   "rooms": ["!roomid1", "!roomid2"]
 }
 ```
 If an empty list is provided as the key for `rooms`, all events in all the rooms the user is member of will be redacted,
 otherwise all the events in the rooms provided in the request will be redacted.

 The API starts redaction process running, and returns immediately with a JSON body with
@@ -1501,7 +1504,10 @@ The following JSON body parameter must be provided:
 The following JSON body parameters are optional:

 - `reason` - Reason the redaction is being requested, ie "spam", "abuse", etc. This will be included in each redaction event, and be visible to users.
-- `limit` - a limit on the number of the user's events to search for ones that can be redacted (events are redacted newest to oldest) in each room, defaults to 1000 if not provided
+- `limit` - a limit on the number of the user's events to search for ones that can be redacted (events are redacted newest to oldest) in each room, defaults to 1000 if not provided.
+- `use_admin` - If set to `true`, the admin user is used to issue the redactions, rather than puppeting the user. Useful
+  when the admin is also the moderator of the rooms that require redactions. Note that the redactions will fail in rooms
+  where the admin does not have the sufficient power level to issue the redactions.

 _Added in Synapse 1.116.0._
````
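Putting the documented parameters together, this sketch builds and prints the request an admin client might send (server, token handling, and user ID are placeholders; the sketch deliberately does not perform any network call):

```python
import json

# Placeholder values; substitute a real homeserver URL and user ID, and
# send the request with an admin access token.
SERVER = "http://localhost:8008"
USER_ID = "@spammer:example.com"

url = f"{SERVER}/_synapse/admin/v1/user/{USER_ID}/redact"
body = {
    "rooms": [],        # empty list = redact in every room the user is in
    "reason": "spam",   # included in each redaction event, visible to users
    "limit": 1000,      # per-room newest-to-oldest search cap (the default)
    "use_admin": True,  # issue redactions as the admin, not by puppeting
}

print("POST", url)
print(json.dumps(body, indent=2))
```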
````diff
@@ -7,8 +7,23 @@ proxy is supported, not SOCKS proxy or anything else.

 ## Configure

-The `http_proxy`, `https_proxy`, `no_proxy` environment variables are used to
-specify proxy settings. The environment variable is not case sensitive.
+The proxy settings can be configured in the homeserver configuration file via
+[`http_proxy`](../usage/configuration/config_documentation.md#http_proxy),
+[`https_proxy`](../usage/configuration/config_documentation.md#https_proxy), and
+[`no_proxy_hosts`](../usage/configuration/config_documentation.md#no_proxy_hosts).
+
+`homeserver.yaml` example:
+```yaml
+http_proxy: http://USERNAME:PASSWORD@10.0.1.1:8080/
+https_proxy: http://USERNAME:PASSWORD@proxy.example.com:8080/
+no_proxy_hosts:
+  - master.hostname.example.com
+  - 10.1.0.0/16
+  - 172.30.0.0/16
+```
+
+The proxy settings can also be configured via the `http_proxy`, `https_proxy`,
+`no_proxy` environment variables. The environment variable is not case sensitive.
 - `http_proxy`: Proxy server to use for HTTP requests.
 - `https_proxy`: Proxy server to use for HTTPS requests.
 - `no_proxy`: Comma-separated list of hosts, IP addresses, or IP ranges in CIDR
   format which should not use the proxy.
````
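As a rough illustration of those bypass semantics, hostname entries match exactly while CIDR entries match any address in the range. A sketch using the example values from this page (the helper is my own illustration, not Synapse's internal code):

```python
import ipaddress

# Example bypass list taken from the documentation above.
NO_PROXY = ["master.hostname.example.com", "10.1.0.0/16", "172.30.0.0/16"]


def bypasses_proxy(host: str) -> bool:
    """True if `host` should be connected to directly, skipping the proxy."""
    for entry in NO_PROXY:
        if "/" in entry:  # CIDR range: only meaningful for IP literals
            try:
                if ipaddress.ip_address(host) in ipaddress.ip_network(entry):
                    return True
            except ValueError:
                continue  # host is a name, not an IP literal
        elif host == entry:
            return True
    return False


print(bypasses_proxy("10.1.2.3"))      # address inside 10.1.0.0/16
print(bypasses_proxy("example.org"))   # matches no entry
```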
```diff
@@ -44,7 +59,7 @@ The proxy will be **used** for:
 - phone-home stats
 - recaptcha validation
 - CAS auth validation
-- OpenID Connect
+- OpenID Connect (OIDC)
 - Outbound federation
 - Federation (checking public key revocation)
 - Fetching public keys of other servers
@@ -53,7 +68,7 @@ The proxy will be **used** for:
 It will **not be used** for:

 - Application Services
-- Identity servers
+- Matrix Identity servers
 - In worker configurations
   - connections between workers
   - connections from workers to Redis
```
````diff
@@ -610,6 +610,39 @@ manhole_settings:
   ssh_pub_key_path: CONFDIR/id_rsa.pub
 ```
 ---
+### `http_proxy`
+
+*(string|null)* Proxy server to use for HTTP requests.
+For more details, see the [forward proxy documentation](../../setup/forward_proxy.md). There is no default for this option.
+
+Example configuration:
+```yaml
+http_proxy: http://USERNAME:PASSWORD@10.0.1.1:8080/
+```
+---
+### `https_proxy`
+
+*(string|null)* Proxy server to use for HTTPS requests.
+For more details, see the [forward proxy documentation](../../setup/forward_proxy.md). There is no default for this option.
+
+Example configuration:
+```yaml
+https_proxy: http://USERNAME:PASSWORD@proxy.example.com:8080/
+```
+---
+### `no_proxy_hosts`
+
+*(array)* List of hosts, IP addresses, or IP ranges in CIDR format which should not use the proxy. Synapse will directly connect to these hosts.
+For more details, see the [forward proxy documentation](../../setup/forward_proxy.md). There is no default for this option.
+
+Example configuration:
+```yaml
+no_proxy_hosts:
+  - master.hostname.example.com
+  - 10.1.0.0/16
+  - 172.30.0.0/16
+```
+---
 ### `dummy_events_threshold`

 *(integer)* Forward extremities can build up in a room due to networking delays between homeservers. Once this happens in a large room, calculation of the state of that room can become quite expensive. To mitigate this, once the number of forward extremities reaches a given threshold, Synapse will send an `org.matrix.dummy_event` event, which will reduce the forward extremities in the room.
@@ -1963,6 +1996,31 @@ rc_reports:
   burst_count: 20.0
 ```
 ---
+### `rc_room_creation`
+
+*(object)* Sets rate limits for how often users are able to create rooms.
+
+This setting has the following sub-options:
+
+* `per_second` (number): Maximum number of requests a client can send per second.
+
+* `burst_count` (number): Maximum number of requests a client can send before being throttled.
+
+Default configuration:
+```yaml
+rc_room_creation:
+  per_user:
+    per_second: 0.016
+    burst_count: 10.0
+```
+
+Example configuration:
+```yaml
+rc_room_creation:
+  per_second: 1.0
+  burst_count: 5.0
+```
+---
 ### `federation_rr_transactions_per_room_per_second`

 *(integer)* Sets outgoing federation transaction frequency for sending read-receipts, per-room.
````
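The `per_second`/`burst_count` pairs used throughout these `rc_*` options describe a token-bucket style limiter: up to `burst_count` requests can go through at once, and capacity refills at `per_second`. A minimal sketch of that behaviour (my own illustration, not Synapse's actual ratelimiter class):

```python
class TokenBucket:
    """Toy limiter: allow bursts of `burst_count`, refilled at `per_second`."""

    def __init__(self, per_second: float, burst_count: float) -> None:
        self.per_second = per_second
        self.burst_count = burst_count
        self.tokens = burst_count  # start with a full bucket
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(
            self.burst_count,
            self.tokens + (now - self.last) * self.per_second,
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Settings in the style of the rc_room_creation default above:
# roughly one room a minute, with bursts of up to 10.
bucket = TokenBucket(per_second=0.016, burst_count=10.0)
results = [bucket.allow(now=0.0) for _ in range(12)]
print(results.count(True))  # only the first 10 simultaneous requests pass
```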
poetry.lock (generated, 119 changed lines)
@@ -1,4 +1,4 @@
|
||||
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
|
||||
# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
|
||||
|
||||
[[package]]
|
||||
name = "annotated-types"
|
||||
@@ -34,15 +34,15 @@ tests-mypy = ["mypy (>=1.11.1) ; platform_python_implementation == \"CPython\" a
|
||||
|
||||
[[package]]
|
||||
name = "authlib"
|
||||
version = "1.6.0"
|
||||
version = "1.6.1"
|
||||
description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients."
|
||||
optional = true
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
markers = "extra == \"oidc\" or extra == \"jwt\" or extra == \"all\""
|
||||
markers = "extra == \"all\" or extra == \"jwt\" or extra == \"oidc\""
|
||||
files = [
|
||||
{file = "authlib-1.6.0-py2.py3-none-any.whl", hash = "sha256:91685589498f79e8655e8a8947431ad6288831d643f11c55c2143ffcc738048d"},
|
||||
{file = "authlib-1.6.0.tar.gz", hash = "sha256:4367d32031b7af175ad3a323d571dc7257b7099d55978087ceae4a0d88cd3210"},
|
||||
{file = "authlib-1.6.1-py2.py3-none-any.whl", hash = "sha256:e9d2031c34c6309373ab845afc24168fe9e93dc52d252631f52642f21f5ed06e"},
|
||||
{file = "authlib-1.6.1.tar.gz", hash = "sha256:4dffdbb1460ba6ec8c17981a4c67af7d8af131231b5a36a88a1e8c80c111cdfd"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -435,7 +435,7 @@ description = "XML bomb protection for Python stdlib modules"
|
||||
optional = true
|
||||
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
|
||||
groups = ["main"]
|
||||
markers = "extra == \"saml2\" or extra == \"all\""
|
||||
markers = "extra == \"all\" or extra == \"saml2\""
|
||||
files = [
|
||||
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
|
||||
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
|
||||
@@ -478,7 +478,7 @@ description = "XPath 1.0/2.0/3.0/3.1 parsers and selectors for ElementTree and l
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
groups = ["main"]
|
||||
markers = "extra == \"saml2\" or extra == \"all\""
|
||||
markers = "extra == \"all\" or extra == \"saml2\""
|
||||
files = [
|
||||
{file = "elementpath-4.1.5-py3-none-any.whl", hash = "sha256:2ac1a2fb31eb22bbbf817f8cf6752f844513216263f0e3892c8e79782fe4bb55"},
|
||||
{file = "elementpath-4.1.5.tar.gz", hash = "sha256:c2d6dc524b29ef751ecfc416b0627668119d8812441c555d7471da41d4bacb8d"},
|
||||
@@ -504,18 +504,19 @@ smmap = ">=3.0.1,<6"
|
||||
|
||||
[[package]]
|
||||
name = "gitpython"
|
||||
version = "3.1.44"
|
||||
version = "3.1.45"
|
||||
description = "GitPython is a Python library used to interact with Git repositories"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
groups = ["dev"]
|
||||
files = [
|
||||
{file = "GitPython-3.1.44-py3-none-any.whl", hash = "sha256:9e0e10cda9bed1ee64bc9a6de50e7e38a9c9943241cd7f585f6df3ed28011110"},
|
||||
{file = "gitpython-3.1.44.tar.gz", hash = "sha256:c87e30b26253bf5418b01b0660f818967f3c503193838337fe5e573331249269"},
|
||||
{file = "gitpython-3.1.45-py3-none-any.whl", hash = "sha256:8908cb2e02fb3b93b7eb0f2827125cb699869470432cc885f019b8fd0fccff77"},
|
||||
{file = "gitpython-3.1.45.tar.gz", hash = "sha256:85b0ee964ceddf211c41b9f27a49086010a190fd8132a24e21f362a4b36a791c"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
gitdb = ">=4.0.1,<5"
|
||||
typing-extensions = {version = ">=3.10.0.2", markers = "python_version < \"3.10\""}
|
||||
|
||||
[package.extras]
|
||||
doc = ["sphinx (>=7.1.2,<7.2)", "sphinx-autodoc-typehints", "sphinx_rtd_theme"]
|
||||
@@ -528,7 +529,7 @@ description = "Python wrapper for hiredis"
|
||||
optional = true
|
||||
python-versions = ">=3.8"
|
||||
groups = ["main"]
|
||||
markers = "extra == \"redis\" or extra == \"all\""
|
||||
markers = "extra == \"all\" or extra == \"redis\""
|
||||
files = [
|
||||
{file = "hiredis-3.2.1-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:add17efcbae46c5a6a13b244ff0b4a8fa079602ceb62290095c941b42e9d5dec"},
|
||||
{file = "hiredis-3.2.1-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:5fe955cc4f66c57df1ae8e5caf4de2925d43b5efab4e40859662311d1bcc5f54"},
|
||||
@@ -865,7 +866,7 @@ description = "Jaeger Python OpenTracing Tracer implementation"
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
groups = ["main"]
|
||||
markers = "extra == \"opentracing\" or extra == \"all\""
|
||||
markers = "extra == \"all\" or extra == \"opentracing\""
|
||||
files = [
|
||||
{file = "jaeger-client-4.8.0.tar.gz", hash = "sha256:3157836edab8e2c209bd2d6ae61113db36f7ee399e66b1dcbb715d87ab49bfe0"},
|
||||
]
|
||||
@@ -936,14 +937,14 @@ i18n = ["Babel (>=2.7)"]
|
||||
|
||||
[[package]]
|
||||
name = "jsonschema"
|
||||
version = "4.24.0"
|
||||
version = "4.25.0"
|
||||
description = "An implementation of JSON Schema validation for Python"
|
||||
optional = false
|
||||
python-versions = ">=3.9"
|
||||
groups = ["main"]
|
||||
files = [
|
||||
{file = "jsonschema-4.24.0-py3-none-any.whl", hash = "sha256:a462455f19f5faf404a7902952b6f0e3ce868f3ee09a359b05eca6673bd8412d"},
|
||||
{file = "jsonschema-4.24.0.tar.gz", hash = "sha256:0b4e8069eb12aedfa881333004bccaec24ecef5a8a6a4b6df142b2cc9599d196"},
|
||||
{file = "jsonschema-4.25.0-py3-none-any.whl", hash = "sha256:24c2e8da302de79c8b9382fee3e76b355e44d2a4364bb207159ce10b517bd716"},
|
||||
{file = "jsonschema-4.25.0.tar.gz", hash = "sha256:e63acf5c11762c0e6672ffb61482bdf57f0876684d8d249c0fe2d730d48bc55f"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -954,7 +955,7 @@ rpds-py = ">=0.7.1"
|
||||
|
||||
[package.extras]
|
||||
format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
|
||||
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=24.6.0)"]
|
||||
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "rfc3987-syntax (>=1.1.0)", "uri-template", "webcolors (>=24.6.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "jsonschema-specifications"
|
||||
@@ -1003,7 +1004,7 @@ description = "A strictly RFC 4510 conforming LDAP V3 pure Python client library
|
||||
optional = true
|
||||
python-versions = "*"
|
||||
groups = ["main"]
|
||||
markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
|
||||
markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
|
||||
files = [
|
||||
{file = "ldap3-2.9.1-py2.py3-none-any.whl", hash = "sha256:5869596fc4948797020d3f03b7939da938778a0f9e2009f7a072ccf92b8e8d70"},
|
||||
{file = "ldap3-2.9.1.tar.gz", hash = "sha256:f3e7fc4718e3f09dda568b57100095e0ce58633bcabbed8667ce3f8fbaa4229f"},
|
||||
@@ -1019,7 +1020,7 @@ description = "Powerful and Pythonic XML processing library combining libxml2/li
 optional = true
 python-versions = ">=3.8"
 groups = ["main"]
-markers = "extra == \"url-preview\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"url-preview\""
 files = [
     {file = "lxml-6.0.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:35bc626eec405f745199200ccb5c6b36f202675d204aa29bb52e27ba2b71dea8"},
     {file = "lxml-6.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:246b40f8a4aec341cbbf52617cad8ab7c888d944bfe12a6abd2b1f6cfb6f6082"},
@@ -1260,7 +1261,7 @@ description = "An LDAP3 auth provider for Synapse"
 optional = true
 python-versions = ">=3.7"
 groups = ["main"]
-markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
 files = [
     {file = "matrix-synapse-ldap3-0.3.0.tar.gz", hash = "sha256:8bb6517173164d4b9cc44f49de411d8cebdb2e705d5dd1ea1f38733c4a009e1d"},
     {file = "matrix_synapse_ldap3-0.3.0-py3-none-any.whl", hash = "sha256:8b4d701f8702551e98cc1d8c20dbed532de5613584c08d0df22de376ba99159d"},
@@ -1493,7 +1494,7 @@ description = "OpenTracing API for Python. See documentation at http://opentraci
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"opentracing\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"opentracing\""
 files = [
     {file = "opentracing-2.4.0.tar.gz", hash = "sha256:a173117e6ef580d55874734d1fa7ecb6f3655160b8b8974a2a1e98e5ec9c840d"},
 ]
@@ -1699,7 +1700,7 @@ description = "psycopg2 - Python-PostgreSQL Database Adapter"
 optional = true
 python-versions = ">=3.8"
 groups = ["main"]
-markers = "extra == \"postgres\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"postgres\""
 files = [
     {file = "psycopg2-2.9.10-cp310-cp310-win32.whl", hash = "sha256:5df2b672140f95adb453af93a7d669d7a7bf0a56bcd26f1502329166f4a61716"},
     {file = "psycopg2-2.9.10-cp310-cp310-win_amd64.whl", hash = "sha256:c6f7b8561225f9e711a9c47087388a97fdc948211c10a4bccbf0ba68ab7b3b5a"},
@@ -1720,7 +1721,7 @@ description = ".. image:: https://travis-ci.org/chtd/psycopg2cffi.svg?branch=mas
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
+markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
 files = [
     {file = "psycopg2cffi-2.9.0.tar.gz", hash = "sha256:7e272edcd837de3a1d12b62185eb85c45a19feda9e62fa1b120c54f9e8d35c52"},
 ]
@@ -1736,7 +1737,7 @@ description = "A Simple library to enable psycopg2 compatability"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
+markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
 files = [
     {file = "psycopg2cffi-compat-1.1.tar.gz", hash = "sha256:d25e921748475522b33d13420aad5c2831c743227dc1f1f2585e0fdb5c914e05"},
 ]
@@ -1996,7 +1997,7 @@ description = "A development tool to measure, monitor and analyze the memory beh
 optional = true
 python-versions = ">=3.6"
 groups = ["main"]
-markers = "extra == \"cache-memory\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"cache-memory\""
 files = [
     {file = "Pympler-1.0.1-py3-none-any.whl", hash = "sha256:d260dda9ae781e1eab6ea15bacb84015849833ba5555f141d2d9b7b7473b307d"},
     {file = "Pympler-1.0.1.tar.gz", hash = "sha256:993f1a3599ca3f4fcd7160c7545ad06310c9e12f70174ae7ae8d4e25f6c5d3fa"},
@@ -2056,7 +2057,7 @@ description = "Python implementation of SAML Version 2 Standard"
 optional = true
 python-versions = ">=3.9,<4.0"
 groups = ["main"]
-markers = "extra == \"saml2\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"saml2\""
 files = [
     {file = "pysaml2-7.5.0-py3-none-any.whl", hash = "sha256:bc6627cc344476a83c757f440a73fda1369f13b6fda1b4e16bca63ffbabb5318"},
     {file = "pysaml2-7.5.0.tar.gz", hash = "sha256:f36871d4e5ee857c6b85532e942550d2cf90ea4ee943d75eb681044bbc4f54f7"},
@@ -2081,7 +2082,7 @@ description = "Extensions to the standard Python datetime module"
 optional = true
 python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
 groups = ["main"]
-markers = "extra == \"saml2\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"saml2\""
 files = [
     {file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
     {file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
@@ -2109,7 +2110,7 @@ description = "World timezone definitions, modern and historical"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"saml2\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"saml2\""
 files = [
     {file = "pytz-2022.7.1-py2.py3-none-any.whl", hash = "sha256:78f4f37d8198e0627c5f1143240bb0206b8691d8d7ac6d78fee88b78733f8c4a"},
     {file = "pytz-2022.7.1.tar.gz", hash = "sha256:01a0681c4b9684a28304615eba55d1ab31ae00bf68ec157ec3708a8182dbbcd0"},
@@ -2408,30 +2409,30 @@ files = [
 
 [[package]]
 name = "ruff"
-version = "0.12.3"
+version = "0.12.4"
 description = "An extremely fast Python linter and code formatter, written in Rust."
 optional = false
 python-versions = ">=3.7"
 groups = ["dev"]
 files = [
-    {file = "ruff-0.12.3-py3-none-linux_armv6l.whl", hash = "sha256:47552138f7206454eaf0c4fe827e546e9ddac62c2a3d2585ca54d29a890137a2"},
-    {file = "ruff-0.12.3-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:0a9153b000c6fe169bb307f5bd1b691221c4286c133407b8827c406a55282041"},
-    {file = "ruff-0.12.3-py3-none-macosx_11_0_arm64.whl", hash = "sha256:fa6b24600cf3b750e48ddb6057e901dd5b9aa426e316addb2a1af185a7509882"},
-    {file = "ruff-0.12.3-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2506961bf6ead54887ba3562604d69cb430f59b42133d36976421bc8bd45901"},
-    {file = "ruff-0.12.3-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c4faaff1f90cea9d3033cbbcdf1acf5d7fb11d8180758feb31337391691f3df0"},
-    {file = "ruff-0.12.3-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40dced4a79d7c264389de1c59467d5d5cefd79e7e06d1dfa2c75497b5269a5a6"},
-    {file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:0262d50ba2767ed0fe212aa7e62112a1dcbfd46b858c5bf7bbd11f326998bafc"},
-    {file = "ruff-0.12.3-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:12371aec33e1a3758597c5c631bae9a5286f3c963bdfb4d17acdd2d395406687"},
-    {file = "ruff-0.12.3-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:560f13b6baa49785665276c963edc363f8ad4b4fc910a883e2625bdb14a83a9e"},
-    {file = "ruff-0.12.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:023040a3499f6f974ae9091bcdd0385dd9e9eb4942f231c23c57708147b06311"},
-    {file = "ruff-0.12.3-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:883d844967bffff5ab28bba1a4d246c1a1b2933f48cb9840f3fdc5111c603b07"},
-    {file = "ruff-0.12.3-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:2120d3aa855ff385e0e562fdee14d564c9675edbe41625c87eeab744a7830d12"},
-    {file = "ruff-0.12.3-py3-none-musllinux_1_2_i686.whl", hash = "sha256:6b16647cbb470eaf4750d27dddc6ebf7758b918887b56d39e9c22cce2049082b"},
-    {file = "ruff-0.12.3-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e1417051edb436230023575b149e8ff843a324557fe0a265863b7602df86722f"},
-    {file = "ruff-0.12.3-py3-none-win32.whl", hash = "sha256:dfd45e6e926deb6409d0616078a666ebce93e55e07f0fb0228d4b2608b2c248d"},
-    {file = "ruff-0.12.3-py3-none-win_amd64.whl", hash = "sha256:a946cf1e7ba3209bdef039eb97647f1c77f6f540e5845ec9c114d3af8df873e7"},
-    {file = "ruff-0.12.3-py3-none-win_arm64.whl", hash = "sha256:5f9c7c9c8f84c2d7f27e93674d27136fbf489720251544c4da7fb3d742e011b1"},
-    {file = "ruff-0.12.3.tar.gz", hash = "sha256:f1b5a4b6668fd7b7ea3697d8d98857390b40c1320a63a178eee6be0899ea2d77"},
+    {file = "ruff-0.12.4-py3-none-linux_armv6l.whl", hash = "sha256:cb0d261dac457ab939aeb247e804125a5d521b21adf27e721895b0d3f83a0d0a"},
+    {file = "ruff-0.12.4-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:55c0f4ca9769408d9b9bac530c30d3e66490bd2beb2d3dae3e4128a1f05c7442"},
+    {file = "ruff-0.12.4-py3-none-macosx_11_0_arm64.whl", hash = "sha256:a8224cc3722c9ad9044da7f89c4c1ec452aef2cfe3904365025dd2f51daeae0e"},
+    {file = "ruff-0.12.4-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e9949d01d64fa3672449a51ddb5d7548b33e130240ad418884ee6efa7a229586"},
+    {file = "ruff-0.12.4-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:be0593c69df9ad1465e8a2d10e3defd111fdb62dcd5be23ae2c06da77e8fcffb"},
+    {file = "ruff-0.12.4-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a7dea966bcb55d4ecc4cc3270bccb6f87a337326c9dcd3c07d5b97000dbff41c"},
+    {file = "ruff-0.12.4-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:afcfa3ab5ab5dd0e1c39bf286d829e042a15e966b3726eea79528e2e24d8371a"},
+    {file = "ruff-0.12.4-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c057ce464b1413c926cdb203a0f858cd52f3e73dcb3270a3318d1630f6395bb3"},
+    {file = "ruff-0.12.4-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e64b90d1122dc2713330350626b10d60818930819623abbb56535c6466cce045"},
+    {file = "ruff-0.12.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2abc48f3d9667fdc74022380b5c745873499ff827393a636f7a59da1515e7c57"},
+    {file = "ruff-0.12.4-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:2b2449dc0c138d877d629bea151bee8c0ae3b8e9c43f5fcaafcd0c0d0726b184"},
+    {file = "ruff-0.12.4-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:56e45bb11f625db55f9b70477062e6a1a04d53628eda7784dce6e0f55fd549eb"},
+    {file = "ruff-0.12.4-py3-none-musllinux_1_2_i686.whl", hash = "sha256:478fccdb82ca148a98a9ff43658944f7ab5ec41c3c49d77cd99d44da019371a1"},
+    {file = "ruff-0.12.4-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:0fc426bec2e4e5f4c4f182b9d2ce6a75c85ba9bcdbe5c6f2a74fcb8df437df4b"},
+    {file = "ruff-0.12.4-py3-none-win32.whl", hash = "sha256:4de27977827893cdfb1211d42d84bc180fceb7b72471104671c59be37041cf93"},
+    {file = "ruff-0.12.4-py3-none-win_amd64.whl", hash = "sha256:fe0b9e9eb23736b453143d72d2ceca5db323963330d5b7859d60d101147d461a"},
+    {file = "ruff-0.12.4-py3-none-win_arm64.whl", hash = "sha256:0618ec4442a83ab545e5b71202a5c0ed7791e8471435b94e655b570a5031a98e"},
+    {file = "ruff-0.12.4.tar.gz", hash = "sha256:13efa16df6c6eeb7d0f091abae50f58e9522f3843edb40d56ad52a5a4a4b6873"},
 ]
 
 [[package]]
@@ -2474,7 +2475,7 @@ description = "Python client for Sentry (https://sentry.io)"
 optional = true
 python-versions = ">=3.6"
 groups = ["main"]
-markers = "extra == \"sentry\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"sentry\""
 files = [
     {file = "sentry_sdk-2.32.0-py2.py3-none-any.whl", hash = "sha256:6cf51521b099562d7ce3606da928c473643abe99b00ce4cb5626ea735f4ec345"},
     {file = "sentry_sdk-2.32.0.tar.gz", hash = "sha256:9016c75d9316b0f6921ac14c8cd4fb938f26002430ac5be9945ab280f78bec6b"},
@@ -2662,7 +2663,7 @@ description = "Tornado IOLoop Backed Concurrent Futures"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"opentracing\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"opentracing\""
 files = [
     {file = "threadloop-1.0.2-py2-none-any.whl", hash = "sha256:5c90dbefab6ffbdba26afb4829d2a9df8275d13ac7dc58dccb0e279992679599"},
     {file = "threadloop-1.0.2.tar.gz", hash = "sha256:8b180aac31013de13c2ad5c834819771992d350267bddb854613ae77ef571944"},
@@ -2678,7 +2679,7 @@ description = "Python bindings for the Apache Thrift RPC system"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"opentracing\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"opentracing\""
 files = [
     {file = "thrift-0.16.0.tar.gz", hash = "sha256:2b5b6488fcded21f9d312aa23c9ff6a0195d0f6ae26ddbd5ad9e3e25dfc14408"},
 ]
@@ -2740,7 +2741,7 @@ description = "Tornado is a Python web framework and asynchronous networking lib
 optional = true
 python-versions = ">=3.9"
 groups = ["main"]
-markers = "extra == \"opentracing\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"opentracing\""
 files = [
     {file = "tornado-6.5-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:f81067dad2e4443b015368b24e802d0083fecada4f0a4572fdb72fc06e54a9a6"},
     {file = "tornado-6.5-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:9ac1cbe1db860b3cbb251e795c701c41d343f06a96049d6274e7c77559117e41"},
@@ -2877,7 +2878,7 @@ description = "non-blocking redis client for python"
 optional = true
 python-versions = "*"
 groups = ["main"]
-markers = "extra == \"redis\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"redis\""
 files = [
     {file = "txredisapi-1.4.11-py3-none-any.whl", hash = "sha256:ac64d7a9342b58edca13ef267d4fa7637c1aa63f8595e066801c1e8b56b22d0b"},
     {file = "txredisapi-1.4.11.tar.gz", hash = "sha256:3eb1af99aefdefb59eb877b1dd08861efad60915e30ad5bf3d5bf6c5cedcdbc6"},
@@ -2931,14 +2932,14 @@ files = [
 
 [[package]]
 name = "types-jsonschema"
-version = "4.24.0.20250708"
+version = "4.25.0.20250720"
 description = "Typing stubs for jsonschema"
 optional = false
 python-versions = ">=3.9"
 groups = ["dev"]
 files = [
-    {file = "types_jsonschema-4.24.0.20250708-py3-none-any.whl", hash = "sha256:d574aa3421d178a8435cc898cf4cf5e5e8c8f37b949c8e3ceeff06da433a18bf"},
-    {file = "types_jsonschema-4.24.0.20250708.tar.gz", hash = "sha256:a910e4944681cbb1b18a93ffb502e09910db788314312fc763df08d8ac2aadb7"},
+    {file = "types_jsonschema-4.25.0.20250720-py3-none-any.whl", hash = "sha256:7d7897c715310d8bf9ae27a2cedba78bbb09e4cad83ce06d2aa79b73a88941df"},
+    {file = "types_jsonschema-4.25.0.20250720.tar.gz", hash = "sha256:765a3b6144798fe3161fd8cbe570a756ed3e8c0e5adb7c09693eb49faad39dbd"},
 ]
 
 [package.dependencies]
@@ -2982,14 +2983,14 @@ files = [
 
 [[package]]
 name = "types-psycopg2"
-version = "2.9.21.20250516"
+version = "2.9.21.20250718"
 description = "Typing stubs for psycopg2"
 optional = false
 python-versions = ">=3.9"
 groups = ["dev"]
 files = [
-    {file = "types_psycopg2-2.9.21.20250516-py3-none-any.whl", hash = "sha256:2a9212d1e5e507017b31486ce8147634d06b85d652769d7a2d91d53cb4edbd41"},
-    {file = "types_psycopg2-2.9.21.20250516.tar.gz", hash = "sha256:6721018279175cce10b9582202e2a2b4a0da667857ccf82a97691bdb5ecd610f"},
+    {file = "types_psycopg2-2.9.21.20250718-py3-none-any.whl", hash = "sha256:bcf085d4293bda48f5943a46dadf0389b2f98f7e8007722f7e1c12ee0f541858"},
+    {file = "types_psycopg2-2.9.21.20250718.tar.gz", hash = "sha256:dc09a97272ef67e739e57b9f4740b761208f4514257e311c0b05c8c7a37d04b4"},
 ]
 
 [[package]]
@@ -3208,7 +3209,7 @@ description = "An XML Schema validator and decoder"
 optional = true
 python-versions = ">=3.7"
 groups = ["main"]
-markers = "extra == \"saml2\" or extra == \"all\""
+markers = "extra == \"all\" or extra == \"saml2\""
 files = [
     {file = "xmlschema-2.4.0-py3-none-any.whl", hash = "sha256:dc87be0caaa61f42649899189aab2fd8e0d567f2cf548433ba7b79278d231a4a"},
     {file = "xmlschema-2.4.0.tar.gz", hash = "sha256:d74cd0c10866ac609e1ef94a5a69b018ad16e39077bc6393408b40c6babee793"},
@@ -3352,4 +3353,4 @@ url-preview = ["lxml"]
 [metadata]
 lock-version = "2.1"
 python-versions = "^3.9.0"
-content-hash = "a6965a294ca751ec2b5b0b92a050acc9afd4efb3e58550845dd32c60b74a70d1"
+content-hash = "d2560fb09c99bf87690749ad902753cfa3f3063bd14cd9d0c0f37ca9e89a7757"

@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"
 
 [tool.poetry]
 name = "matrix-synapse"
-version = "1.134.0"
+version = "1.135.0rc1"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
@@ -178,8 +178,13 @@ signedjson = "^1.1.0"
 service-identity = ">=18.1.0"
-# Twisted 18.9 introduces some logger improvements that the structured
-# logger utilises
-Twisted = {extras = ["tls"], version = ">=18.9.0"}
-treq = ">=15.1"
+# Twisted 19.7.0 moves test helpers to a new module and deprecates the old location.
+# Twisted 21.2.0 introduces contextvar support.
+# We could likely bump this to 22.1 without making distro packagers'
+# lives hard (as of 2025-07, distro support is Ubuntu LTS: 22.1, Debian stable: 22.4,
+# RHEL 9: 22.10)
+Twisted = {extras = ["tls"], version = ">=21.2.0"}
+treq = ">=21.5.0"
 # Twisted has required pyopenssl 16.0 since about Twisted 16.6.
 pyOpenSSL = ">=16.0.0"
 PyYAML = ">=5.3"
@@ -319,7 +324,7 @@ all = [
 # failing on new releases. Keeping lower bounds loose here means that dependabot
 # can bump versions without having to update the content-hash in the lockfile.
 # This helps prevents merge conflicts when running a batch of dependabot updates.
-ruff = "0.12.3"
+ruff = "0.12.4"
 # Type checking only works with the pydantic.v1 compat module from pydantic v2
 pydantic = "^2"

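The hunk above raises the Twisted lower bound from `>=18.9.0` to `>=21.2.0`. As a rough sketch of how such a lower bound behaves (this is illustrative only; real resolvers such as pip and Poetry use full PEP 440 parsing via the `packaging` library, and pre-release suffixes like `rc1` are deliberately left out of this naive comparison):

```python
# Naive lower-bound check for simple dotted version strings.
def version_tuple(version: str) -> tuple:
    """Parse "21.2.0" into (21, 2, 0) so tuples compare numerically."""
    return tuple(int(part) for part in version.split("."))

def satisfies_minimum(candidate: str, minimum: str) -> bool:
    """Return True if `candidate` is at least `minimum`."""
    return version_tuple(candidate) >= version_tuple(minimum)

# The distro versions cited in the comment above all clear the new bound:
print(satisfies_minimum("22.10.0", "21.2.0"))  # True (RHEL 9's Twisted)
print(satisfies_minimum("18.9.0", "21.2.0"))   # False (the old lower bound)
```

Note that string comparison would get this wrong ("18.9.0" > "21.2.0" lexically is false here, but "9.0.0" > "21.2.0" would be true), which is why the tuple conversion matters.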
@@ -1,5 +1,5 @@
 $schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
-$id: https://element-hq.github.io/synapse/schema/synapse/v1.134/synapse-config.schema.json
+$id: https://element-hq.github.io/synapse/schema/synapse/v1.135/synapse-config.schema.json
 type: object
 properties:
   modules:
@@ -629,6 +629,33 @@ properties:
         password: mypassword
         ssh_priv_key_path: CONFDIR/id_rsa
         ssh_pub_key_path: CONFDIR/id_rsa.pub
+  http_proxy:
+    type: ["string", "null"]
+    description: >-
+      Proxy server to use for HTTP requests.
+
+      For more details, see the [forward proxy documentation](../../setup/forward_proxy.md).
+    examples:
+      - "http://USERNAME:PASSWORD@10.0.1.1:8080/"
+  https_proxy:
+    type: ["string", "null"]
+    description: >-
+      Proxy server to use for HTTPS requests.
+
+      For more details, see the [forward proxy documentation](../../setup/forward_proxy.md).
+    examples:
+      - "http://USERNAME:PASSWORD@proxy.example.com:8080/"
+  no_proxy_hosts:
+    type: array
+    description: >-
+      List of hosts, IP addresses, or IP ranges in CIDR format which should not use the
+      proxy. Synapse will directly connect to these hosts.
+
+      For more details, see the [forward proxy documentation](../../setup/forward_proxy.md).
+    examples:
+      - - master.hostname.example.com
+        - 10.1.0.0/16
+        - 172.30.0.0/16
   dummy_events_threshold:
     type: integer
     description: >-
@@ -2201,6 +2228,17 @@ properties:
       examples:
         - per_second: 2.0
           burst_count: 20.0
+  rc_room_creation:
+    $ref: "#/$defs/rc"
+    description: >-
+      Sets rate limits for how often users are able to create rooms.
+    default:
+      per_user:
+        per_second: 0.016
+        burst_count: 10.0
+    examples:
+      - per_second: 1.0
+        burst_count: 5.0
   federation_rr_transactions_per_room_per_second:
     type: integer
    description: >-

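The new `no_proxy_hosts` property accepts IP ranges in CIDR form. A small illustrative sketch (not Synapse's implementation; hostname entries and Synapse's exact matching rules are out of scope) of how a destination IP can be tested against such CIDR entries with the standard `ipaddress` module:

```python
import ipaddress

# Mirrors the CIDR entries from the schema example above.
NO_PROXY_HOSTS = ["10.1.0.0/16", "172.30.0.0/16"]

def should_bypass_proxy(destination_ip: str) -> bool:
    """Return True if the destination falls inside any no-proxy CIDR range."""
    addr = ipaddress.ip_address(destination_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in NO_PROXY_HOSTS)

print(should_bypass_proxy("10.1.2.3"))    # True: inside 10.1.0.0/16, connect directly
print(should_bypass_proxy("192.0.2.10"))  # False: request goes via the proxy
```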
@@ -473,6 +473,10 @@ def section(prop: str, values: dict) -> str:
 
 
 def main() -> None:
+    # For Windows: reconfigure the terminal to be UTF-8 for `print()` calls.
+    if sys.platform == "win32":
+        sys.stdout.reconfigure(encoding="utf-8")
+
     def usage(err_msg: str) -> int:
         script_name = (sys.argv[:1] or ["__main__.py"])[0]
         print(err_msg, file=sys.stderr)
@@ -485,7 +489,10 @@ def main() -> None:
             exit(usage("Too many arguments."))
         if not (filepath := (sys.argv[1:] or [""])[0]):
             exit(usage("No schema file provided."))
-        with open(filepath) as f:
+        with open(filepath, "r", encoding="utf-8") as f:
+            # Note: Windows requires that we specify the encoding otherwise it uses
+            # things like CP-1251, which can cause explosions.
+            # See https://github.com/yaml/pyyaml/issues/123 for more info.
             return yaml.safe_load(f)
 
     schema = read_json_file_arg()

@@ -28,8 +28,13 @@ from typing import Callable, Optional, Tuple, Type, Union
 import mypy.types
 from mypy.erasetype import remove_instance_last_known_values
+from mypy.errorcodes import ErrorCode
-from mypy.nodes import ARG_NAMED_OPT, TempNode, Var
-from mypy.plugin import FunctionSigContext, MethodSigContext, Plugin
+from mypy.nodes import ARG_NAMED_OPT, ListExpr, NameExpr, TempNode, Var
+from mypy.plugin import (
+    FunctionLike,
+    FunctionSigContext,
+    MethodSigContext,
+    Plugin,
+)
 from mypy.typeops import bind_self
 from mypy.types import (
     AnyType,
@@ -43,8 +48,26 @@ from mypy.types import (
     UnionType,
 )
 
+PROMETHEUS_METRIC_MISSING_SERVER_NAME_LABEL = ErrorCode(
+    "missing-server-name-label",
+    "`SERVER_NAME_LABEL` required in metric",
+    category="per-homeserver-tenant-metrics",
+)
+
+
 class SynapsePlugin(Plugin):
+    def get_function_signature_hook(
+        self, fullname: str
+    ) -> Optional[Callable[[FunctionSigContext], FunctionLike]]:
+        if fullname in (
+            "prometheus_client.metrics.Counter",
+            # TODO: Add other prometheus_client metrics that need checking as we
+            # refactor, see https://github.com/element-hq/synapse/issues/18592
+        ):
+            return check_prometheus_metric_instantiation
+
+        return None
+
     def get_method_signature_hook(
         self, fullname: str
     ) -> Optional[Callable[[MethodSigContext], CallableType]]:
@@ -65,6 +88,85 @@ class SynapsePlugin(Plugin):
         return None
 
 
+def check_prometheus_metric_instantiation(ctx: FunctionSigContext) -> CallableType:
+    """
+    Ensure that the `prometheus_client` metrics include the `SERVER_NAME_LABEL` label
+    when instantiated.
+
+    This is important because we support multiple Synapse instances running in the same
+    process, where all metrics share a single global `REGISTRY`. The `server_name` label
+    ensures metrics are correctly separated by homeserver.
+
+    There are also some metrics that apply at the process level, such as CPU usage,
+    Python garbage collection, Twisted reactor tick time which shouldn't have the
+    `SERVER_NAME_LABEL`. In those cases, use a type ignore comment to disable the
+    check, e.g. `# type: ignore[missing-server-name-label]`.
+    """
+    # The true signature, this isn't being modified so this is what will be returned.
+    signature: CallableType = ctx.default_signature
+
+    # Sanity check the arguments are still as expected in this version of
+    # `prometheus_client`. ex. `Counter(name, documentation, labelnames, ...)`
+    #
+    # `signature.arg_names` should be: ["name", "documentation", "labelnames", ...]
+    if len(signature.arg_names) < 3 or signature.arg_names[2] != "labelnames":
+        ctx.api.fail(
+            f"Expected the 3rd argument of {signature.name} to be 'labelnames', but got "
+            f"{signature.arg_names[2]}",
+            ctx.context,
+        )
+        return signature
+
+    # Ensure mypy is passing the correct number of arguments because we are doing some
+    # dirty indexing into `ctx.args` later on.
+    assert len(ctx.args) == len(signature.arg_names), (
+        f"Expected the list of arguments in the {signature.name} signature ({len(signature.arg_names)})"
+        f"to match the number of arguments from the function signature context ({len(ctx.args)})"
+    )
+
+    # Check if the `labelnames` argument includes `SERVER_NAME_LABEL`
+    #
+    # `ctx.args` should look like this:
+    # ```
+    # [
+    #     [StrExpr("name")],
+    #     [StrExpr("documentation")],
+    #     [ListExpr([StrExpr("label1"), StrExpr("label2")])]
+    #     ...
+    # ]
+    # ```
+    labelnames_arg_expression = ctx.args[2][0] if len(ctx.args[2]) > 0 else None
+    if isinstance(labelnames_arg_expression, ListExpr):
+        # Check if the `labelnames` argument includes the `server_name` label (`SERVER_NAME_LABEL`).
+        for labelname_expression in labelnames_arg_expression.items:
+            if (
+                isinstance(labelname_expression, NameExpr)
+                and labelname_expression.fullname == "synapse.metrics.SERVER_NAME_LABEL"
+            ):
+                # Found the `SERVER_NAME_LABEL`, all good!
+                break
+        else:
+            ctx.api.fail(
+                f"Expected {signature.name} to include `SERVER_NAME_LABEL` in the list of labels. "
+                "If this is a process-level metric (vs homeserver-level), use a type ignore comment "
+                "to disable this check.",
+                ctx.context,
+                code=PROMETHEUS_METRIC_MISSING_SERVER_NAME_LABEL,
+            )
+    else:
+        ctx.api.fail(
+            f"Expected the `labelnames` argument of {signature.name} to be a list of label names "
+            f"(including `SERVER_NAME_LABEL`), but got {labelnames_arg_expression}. "
+            "If this is a process-level metric (vs homeserver-level), use a type ignore comment "
+            "to disable this check.",
+            ctx.context,
+            code=PROMETHEUS_METRIC_MISSING_SERVER_NAME_LABEL,
+        )
+        return signature
+
+    return signature
+
+
 def _get_true_return_type(signature: CallableType) -> mypy.types.Type:
     """
     Get the "final" return type of a callable which might return an Awaitable/Deferred.

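The plugin above inspects the `labelnames` list of each `Counter(...)` call at type-check time and flags calls that omit `SERVER_NAME_LABEL`. The same idea can be sketched standalone with the stdlib `ast` module (illustrative only; the real check runs inside mypy on typed expression nodes, and the source strings here are hypothetical):

```python
import ast

REQUIRED_LABEL = "SERVER_NAME_LABEL"

def counter_calls_missing_label(source: str) -> list:
    """Return line numbers of `Counter(...)` calls whose third positional
    argument (the labelnames list) does not mention REQUIRED_LABEL."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "Counter"
            and len(node.args) >= 3
            and isinstance(node.args[2], ast.List)
        ):
            has_label = any(
                isinstance(el, ast.Name) and el.id == REQUIRED_LABEL
                for el in node.args[2].elts
            )
            if not has_label:
                missing.append(node.lineno)
    return missing

good = 'c = Counter("requests", "doc", ["method", SERVER_NAME_LABEL])'
bad = 'c = Counter("requests", "doc", ["method"])'
print(counter_calls_missing_label(good))  # []
print(counter_calls_missing_label(bad))   # [1]
```

The mypy-plugin version is stronger than this sketch: it resolves `SERVER_NAME_LABEL` to its fully-qualified name (`synapse.metrics.SERVER_NAME_LABEL`), so a same-named local variable cannot satisfy the check.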
@@ -48,6 +48,7 @@ if TYPE_CHECKING or HAS_PYDANTIC_V2:
         conint,
         constr,
         parse_obj_as,
+        root_validator,
         validator,
     )
     from pydantic.v1.error_wrappers import ErrorWrapper
@@ -68,6 +69,7 @@ else:
         conint,
         constr,
         parse_obj_as,
+        root_validator,
         validator,
     )
     from pydantic.error_wrappers import ErrorWrapper
@@ -92,4 +94,5 @@ __all__ = (
     "StrictStr",
     "ValidationError",
     "validator",
+    "root_validator",
 )

@@ -136,6 +136,7 @@ BOOLEAN_COLUMNS = {
         "has_known_state",
         "is_encrypted",
     ],
+    "thread_subscriptions": ["subscribed", "automatic"],
     "users": ["shadow_banned", "approved", "locked", "suspended"],
     "un_partial_stated_event_stream": ["rejection_status_changed"],
     "users_who_share_rooms": ["share_private"],

@@ -53,6 +53,7 @@ class MockHomeserver(HomeServer):
 
 
 def run_background_updates(hs: HomeServer) -> None:
+    server_name = hs.hostname
     main = hs.get_datastores().main
     state = hs.get_datastores().state
 
@@ -66,7 +67,11 @@ def run_background_updates(hs: HomeServer) -> None:
     def run() -> None:
         # Apply all background updates on the database.
         defer.ensureDeferred(
-            run_as_background_process("background_updates", run_background_updates)
+            run_as_background_process(
+                "background_updates",
+                server_name,
+                run_background_updates,
+            )
         )
 
     reactor.callWhenRunning(run)

@@ -369,6 +369,12 @@ class MSC3861DelegatedAuth(BaseAuth):
     async def is_server_admin(self, requester: Requester) -> bool:
         return "urn:synapse:admin:*" in requester.scope
 
+    def _is_access_token_the_admin_token(self, token: str) -> bool:
+        admin_token = self._admin_token()
+        if admin_token is None:
+            return False
+        return token == admin_token
+
     async def get_user_by_req(
         self,
         request: SynapseRequest,
@@ -434,7 +440,7 @@ class MSC3861DelegatedAuth(BaseAuth):
         requester = await self.get_user_by_access_token(access_token, allow_expired)
 
         # Do not record requests from MAS using the virtual `__oidc_admin` user.
-        if access_token != self._admin_token():
+        if not self._is_access_token_the_admin_token(access_token):
             await self._record_request(request, requester)
 
         if not allow_guest and requester.is_guest:
@@ -470,13 +476,25 @@ class MSC3861DelegatedAuth(BaseAuth):
 
         raise UnrecognizedRequestError(code=404)
 
+    def is_request_using_the_admin_token(self, request: SynapseRequest) -> bool:
+        """
+        Check if the request is using the admin token.
+
+        Args:
+            request: The request to check.
+
+        Returns:
+            True if the request is using the admin token, False otherwise.
+        """
+        access_token = self.get_access_token_from_request(request)
+        return self._is_access_token_the_admin_token(access_token)
+
     async def get_user_by_access_token(
         self,
         token: str,
         allow_expired: bool = False,
     ) -> Requester:
-        admin_token = self._admin_token()
-        if admin_token is not None and token == admin_token:
+        if self._is_access_token_the_admin_token(token):
             # XXX: This is a temporary solution so that the admin API can be called by
             # the OIDC provider. This will be removed once we have OIDC client
             # credentials grant support in matrix-authentication-service.

|
||||
from synapse.logging.context import PreserveLoggingContext
|
||||
from synapse.logging.opentracing import init_tracer
|
||||
from synapse.metrics import install_gc_manager, register_threadpool
|
||||
from synapse.metrics.background_process_metrics import wrap_as_background_process
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.metrics.jemalloc import setup_jemalloc_stats
|
||||
from synapse.module_api.callbacks.spamchecker_callbacks import load_legacy_spam_checkers
|
||||
from synapse.module_api.callbacks.third_party_event_rules_callbacks import (
|
||||
@@ -512,6 +512,7 @@ async def start(hs: "HomeServer") -> None:
|
||||
Args:
|
||||
hs: homeserver instance
|
||||
"""
|
||||
server_name = hs.hostname
|
||||
reactor = hs.get_reactor()
|
||||
|
||||
# We want to use a separate thread pool for the resolver so that large
|
||||
@@ -530,16 +531,24 @@ async def start(hs: "HomeServer") -> None:
|
||||
# Set up the SIGHUP machinery.
|
||||
if hasattr(signal, "SIGHUP"):
|
||||
|
||||
@wrap_as_background_process("sighup")
|
||||
async def handle_sighup(*args: Any, **kwargs: Any) -> None:
|
||||
# Tell systemd our state, if we're using it. This will silently fail if
|
||||
# we're not using systemd.
|
||||
sdnotify(b"RELOADING=1")
|
||||
def handle_sighup(*args: Any, **kwargs: Any) -> "defer.Deferred[None]":
|
||||
async def _handle_sighup(*args: Any, **kwargs: Any) -> None:
|
||||
# Tell systemd our state, if we're using it. This will silently fail if
|
||||
# we're not using systemd.
|
||||
sdnotify(b"RELOADING=1")
|
||||
|
||||
for i, args, kwargs in _sighup_callbacks:
|
||||
i(*args, **kwargs)
|
||||
for i, args, kwargs in _sighup_callbacks:
|
||||
i(*args, **kwargs)
|
||||
|
||||
sdnotify(b"READY=1")
|
||||
sdnotify(b"READY=1")
|
||||
|
||||
return run_as_background_process(
|
||||
"sighup",
|
||||
server_name,
|
||||
_handle_sighup,
|
||||
*args,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
# We defer running the sighup handlers until next reactor tick. This
|
||||
# is so that we're in a sane state, e.g. flushing the logs may fail
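The hunk above replaces a `@wrap_as_background_process("sighup")` decorator on an async handler with a plain function that explicitly calls `run_as_background_process(...)`, which now also takes the homeserver name. A minimal stand-alone sketch of that wrapping pattern, using `asyncio` in place of Twisted and simplified stand-in names (not Synapse's actual implementation):

```python
import asyncio
from typing import Any, Callable, Coroutine

# Hypothetical stand-in for Synapse's run_as_background_process: it now takes
# an explicit `server_name` so the background-process metrics can be labelled
# per homeserver.
def run_as_background_process(
    desc: str,
    server_name: str,
    func: Callable[..., Coroutine[Any, Any, None]],
    *args: Any,
    **kwargs: Any,
) -> "asyncio.Task[None]":
    async def wrapper() -> None:
        # The real helper would record per-(desc, server_name) metrics here.
        await func(*args, **kwargs)

    return asyncio.ensure_future(wrapper())

calls: list = []

async def _handle_sighup(server_name: str) -> None:
    calls.append(f"reloaded {server_name}")

def handle_sighup(server_name: str) -> "asyncio.Task[None]":
    # The explicit-call style from the diff: a synchronous entry point that
    # kicks off the async body as a named background process.
    return run_as_background_process("sighup", server_name, _handle_sighup, server_name)

async def main() -> None:
    await handle_sighup("example.org")

asyncio.run(main())
```

The visible effect of the refactor is that each call site, rather than the decorator, decides which `server_name` the background process is attributed to.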

@@ -104,6 +104,9 @@ from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.stream import StreamWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.storage.databases.main.task_scheduler import TaskSchedulerWorkerStore
from synapse.storage.databases.main.thread_subscriptions import (
ThreadSubscriptionsWorkerStore,
)
from synapse.storage.databases.main.transactions import TransactionWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
@@ -132,6 +135,7 @@ class GenericWorkerStore(
KeyStore,
RoomWorkerStore,
DirectoryWorkerStore,
ThreadSubscriptionsWorkerStore,
PushRulesWorkerStore,
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,

@@ -26,7 +26,11 @@ from typing import TYPE_CHECKING, List, Mapping, Sized, Tuple

from prometheus_client import Gauge

from synapse.metrics.background_process_metrics import wrap_as_background_process
from twisted.internet import defer

from synapse.metrics.background_process_metrics import (
run_as_background_process,
)
from synapse.types import JsonDict
from synapse.util.constants import ONE_HOUR_SECONDS, ONE_MINUTE_SECONDS

@@ -66,125 +70,136 @@ registered_reserved_users_mau_gauge = Gauge(
)

@wrap_as_background_process("phone_stats_home")
async def phone_stats_home(
def phone_stats_home(
hs: "HomeServer",
stats: JsonDict,
stats_process: List[Tuple[int, "resource.struct_rusage"]] = _stats_process,
) -> None:
"""Collect usage statistics and send them to the configured endpoint.
) -> "defer.Deferred[None]":
server_name = hs.hostname

Args:
hs: the HomeServer object to use for gathering usage data.
stats: the dict in which to store the statistics sent to the configured
endpoint. Mostly used in tests to figure out the data that is supposed to
be sent.
stats_process: statistics about resource usage of the process.
"""
async def _phone_stats_home(
hs: "HomeServer",
stats: JsonDict,
stats_process: List[Tuple[int, "resource.struct_rusage"]] = _stats_process,
) -> None:
"""Collect usage statistics and send them to the configured endpoint.

logger.info("Gathering stats for reporting")
now = int(hs.get_clock().time())
# Ensure the homeserver has started.
assert hs.start_time is not None
uptime = int(now - hs.start_time)
if uptime < 0:
uptime = 0
Args:
hs: the HomeServer object to use for gathering usage data.
stats: the dict in which to store the statistics sent to the configured
endpoint. Mostly used in tests to figure out the data that is supposed to
be sent.
stats_process: statistics about resource usage of the process.
"""

#
# Performance statistics. Keep this early in the function to maintain reliability of `test_performance_100` test.
#
old = stats_process[0]
new = (now, resource.getrusage(resource.RUSAGE_SELF))
stats_process[0] = new
logger.info("Gathering stats for reporting")
now = int(hs.get_clock().time())
# Ensure the homeserver has started.
assert hs.start_time is not None
uptime = int(now - hs.start_time)
if uptime < 0:
uptime = 0

# Get RSS in bytes
stats["memory_rss"] = new[1].ru_maxrss
#
# Performance statistics. Keep this early in the function to maintain reliability of `test_performance_100` test.
#
old = stats_process[0]
new = (now, resource.getrusage(resource.RUSAGE_SELF))
stats_process[0] = new

# Get CPU time in % of a single core, not % of all cores
used_cpu_time = (new[1].ru_utime + new[1].ru_stime) - (
old[1].ru_utime + old[1].ru_stime
)
if used_cpu_time == 0 or new[0] == old[0]:
stats["cpu_average"] = 0
else:
stats["cpu_average"] = math.floor(used_cpu_time / (new[0] - old[0]) * 100)
# Get RSS in bytes
stats["memory_rss"] = new[1].ru_maxrss

#
# General statistics
#

store = hs.get_datastores().main
common_metrics = await hs.get_common_usage_metrics_manager().get_metrics()

stats["homeserver"] = hs.config.server.server_name
stats["server_context"] = hs.config.server.server_context
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
version = sys.version_info
stats["python_version"] = "{}.{}.{}".format(
version.major, version.minor, version.micro
)
stats["total_users"] = await store.count_all_users()

total_nonbridged_users = await store.count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users

daily_user_type_results = await store.count_daily_user_type()
for name, count in daily_user_type_results.items():
stats["daily_user_type_" + name] = count

room_count = await store.get_room_count()
stats["total_room_count"] = room_count

stats["daily_active_users"] = common_metrics.daily_active_users
stats["monthly_active_users"] = await store.count_monthly_users()
daily_active_e2ee_rooms = await store.count_daily_active_e2ee_rooms()
stats["daily_active_e2ee_rooms"] = daily_active_e2ee_rooms
stats["daily_e2ee_messages"] = await store.count_daily_e2ee_messages()
daily_sent_e2ee_messages = await store.count_daily_sent_e2ee_messages()
stats["daily_sent_e2ee_messages"] = daily_sent_e2ee_messages
stats["daily_active_rooms"] = await store.count_daily_active_rooms()
stats["daily_messages"] = await store.count_daily_messages()
daily_sent_messages = await store.count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages

r30v2_results = await store.count_r30v2_users()
for name, count in r30v2_results.items():
stats["r30v2_users_" + name] = count

stats["cache_factor"] = hs.config.caches.global_factor
stats["event_cache_size"] = hs.config.caches.event_cache_size

#
# Database version
#

# This only reports info about the *main* database.
stats["database_engine"] = store.db_pool.engine.module.__name__
stats["database_server_version"] = store.db_pool.engine.server_version

#
# Logging configuration
#
synapse_logger = logging.getLogger("synapse")
log_level = synapse_logger.getEffectiveLevel()
stats["log_level"] = logging.getLevelName(log_level)

logger.info(
"Reporting stats to %s: %s", hs.config.metrics.report_stats_endpoint, stats
)
try:
await hs.get_proxied_http_client().put_json(
hs.config.metrics.report_stats_endpoint, stats
# Get CPU time in % of a single core, not % of all cores
used_cpu_time = (new[1].ru_utime + new[1].ru_stime) - (
old[1].ru_utime + old[1].ru_stime
)
except Exception as e:
logger.warning("Error reporting stats: %s", e)
if used_cpu_time == 0 or new[0] == old[0]:
stats["cpu_average"] = 0
else:
stats["cpu_average"] = math.floor(used_cpu_time / (new[0] - old[0]) * 100)

#
# General statistics
#

store = hs.get_datastores().main
common_metrics = await hs.get_common_usage_metrics_manager().get_metrics()

stats["homeserver"] = hs.config.server.server_name
stats["server_context"] = hs.config.server.server_context
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
version = sys.version_info
stats["python_version"] = "{}.{}.{}".format(
version.major, version.minor, version.micro
)
stats["total_users"] = await store.count_all_users()

total_nonbridged_users = await store.count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users

daily_user_type_results = await store.count_daily_user_type()
for name, count in daily_user_type_results.items():
stats["daily_user_type_" + name] = count

room_count = await store.get_room_count()
stats["total_room_count"] = room_count

stats["daily_active_users"] = common_metrics.daily_active_users
stats["monthly_active_users"] = await store.count_monthly_users()
daily_active_e2ee_rooms = await store.count_daily_active_e2ee_rooms()
stats["daily_active_e2ee_rooms"] = daily_active_e2ee_rooms
stats["daily_e2ee_messages"] = await store.count_daily_e2ee_messages()
daily_sent_e2ee_messages = await store.count_daily_sent_e2ee_messages()
stats["daily_sent_e2ee_messages"] = daily_sent_e2ee_messages
stats["daily_active_rooms"] = await store.count_daily_active_rooms()
stats["daily_messages"] = await store.count_daily_messages()
daily_sent_messages = await store.count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages

r30v2_results = await store.count_r30v2_users()
for name, count in r30v2_results.items():
stats["r30v2_users_" + name] = count

stats["cache_factor"] = hs.config.caches.global_factor
stats["event_cache_size"] = hs.config.caches.event_cache_size

#
# Database version
#

# This only reports info about the *main* database.
stats["database_engine"] = store.db_pool.engine.module.__name__
stats["database_server_version"] = store.db_pool.engine.server_version

#
# Logging configuration
#
synapse_logger = logging.getLogger("synapse")
log_level = synapse_logger.getEffectiveLevel()
stats["log_level"] = logging.getLevelName(log_level)

logger.info(
"Reporting stats to %s: %s", hs.config.metrics.report_stats_endpoint, stats
)
try:
await hs.get_proxied_http_client().put_json(
hs.config.metrics.report_stats_endpoint, stats
)
except Exception as e:
logger.warning("Error reporting stats: %s", e)

return run_as_background_process(
"phone_stats_home", server_name, _phone_stats_home, hs, stats, stats_process
)

def start_phone_stats_home(hs: "HomeServer") -> None:
"""
Start the background tasks which report phone home stats.
"""
server_name = hs.hostname
clock = hs.get_clock()

stats: JsonDict = {}
@@ -210,25 +225,31 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
)
hs.get_datastores().main.reap_monthly_active_users()

@wrap_as_background_process("generate_monthly_active_users")
async def generate_monthly_active_users() -> None:
current_mau_count = 0
current_mau_count_by_service: Mapping[str, int] = {}
reserved_users: Sized = ()
store = hs.get_datastores().main
if hs.config.server.limit_usage_by_mau or hs.config.server.mau_stats_only:
current_mau_count = await store.get_monthly_active_count()
current_mau_count_by_service = (
await store.get_monthly_active_count_by_service()
)
reserved_users = await store.get_registered_reserved_users()
current_mau_gauge.set(float(current_mau_count))
def generate_monthly_active_users() -> "defer.Deferred[None]":
async def _generate_monthly_active_users() -> None:
current_mau_count = 0
current_mau_count_by_service: Mapping[str, int] = {}
reserved_users: Sized = ()
store = hs.get_datastores().main
if hs.config.server.limit_usage_by_mau or hs.config.server.mau_stats_only:
current_mau_count = await store.get_monthly_active_count()
current_mau_count_by_service = (
await store.get_monthly_active_count_by_service()
)
reserved_users = await store.get_registered_reserved_users()
current_mau_gauge.set(float(current_mau_count))

for app_service, count in current_mau_count_by_service.items():
current_mau_by_service_gauge.labels(app_service).set(float(count))
for app_service, count in current_mau_count_by_service.items():
current_mau_by_service_gauge.labels(app_service).set(float(count))

registered_reserved_users_mau_gauge.set(float(len(reserved_users)))
max_mau_gauge.set(float(hs.config.server.max_mau_value))
registered_reserved_users_mau_gauge.set(float(len(reserved_users)))
max_mau_gauge.set(float(hs.config.server.max_mau_value))

return run_as_background_process(
"generate_monthly_active_users",
server_name,
_generate_monthly_active_users,
)

if hs.config.server.limit_usage_by_mau or hs.config.server.mau_stats_only:
generate_monthly_active_users()

@@ -48,6 +48,7 @@ from synapse.events import EventBase
from synapse.events.utils import SerializeEventConfig, serialize_event
from synapse.http.client import SimpleHttpClient, is_unknown_endpoint
from synapse.logging import opentracing
from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import DeviceListUpdates, JsonDict, JsonMapping, ThirdPartyInstanceID
from synapse.util.caches.response_cache import ResponseCache

@@ -59,29 +60,31 @@ logger = logging.getLogger(__name__)
sent_transactions_counter = Counter(
"synapse_appservice_api_sent_transactions",
"Number of /transactions/ requests sent",
["service"],
labelnames=["service", SERVER_NAME_LABEL],
)

failed_transactions_counter = Counter(
"synapse_appservice_api_failed_transactions",
"Number of /transactions/ requests that failed to send",
["service"],
labelnames=["service", SERVER_NAME_LABEL],
)

sent_events_counter = Counter(
"synapse_appservice_api_sent_events", "Number of events sent to the AS", ["service"]
"synapse_appservice_api_sent_events",
"Number of events sent to the AS",
labelnames=["service", SERVER_NAME_LABEL],
)

sent_ephemeral_counter = Counter(
"synapse_appservice_api_sent_ephemeral",
"Number of ephemeral events sent to the AS",
["service"],
labelnames=["service", SERVER_NAME_LABEL],
)

sent_todevice_counter = Counter(
"synapse_appservice_api_sent_todevice",
"Number of todevice messages sent to the AS",
["service"],
labelnames=["service", SERVER_NAME_LABEL],
)

HOUR_IN_MS = 60 * 60 * 1000
@@ -382,6 +385,7 @@ class ApplicationServiceApi(SimpleHttpClient):
"left": list(device_list_summary.left),
}

labels = {"service": service.id, SERVER_NAME_LABEL: self.server_name}
try:
args = None
if self.config.use_appservice_legacy_authorization:
@@ -399,10 +403,10 @@ class ApplicationServiceApi(SimpleHttpClient):
service.url,
[event.get("event_id") for event in events],
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(serialized_events))
sent_ephemeral_counter.labels(service.id).inc(len(ephemeral))
sent_todevice_counter.labels(service.id).inc(len(to_device_messages))
sent_transactions_counter.labels(**labels).inc()
sent_events_counter.labels(**labels).inc(len(serialized_events))
sent_ephemeral_counter.labels(**labels).inc(len(ephemeral))
sent_todevice_counter.labels(**labels).inc(len(to_device_messages))
return True
except CodeMessageException as e:
logger.warning(
@@ -421,7 +425,7 @@ class ApplicationServiceApi(SimpleHttpClient):
ex.args,
exc_info=logger.isEnabledFor(logging.DEBUG),
)
failed_transactions_counter.labels(service.id).inc()
failed_transactions_counter.labels(**labels).inc()
return False
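The hunks above add a `SERVER_NAME_LABEL` label to every appservice counter and switch the call sites from positional `.labels(service.id)` to keyword `.labels(**labels)`, so all counters share one label dict. A rough stdlib-only sketch of that keyword-label pattern (a toy illustration of labelled counters, not the `prometheus_client` implementation):

```python
from collections import defaultdict
from typing import Dict, List, Tuple

SERVER_NAME_LABEL = "server_name"  # assumed label key, mirroring the diff

class Counter:
    """Toy labelled counter: one value per unique combination of label values."""

    def __init__(self, name: str, documentation: str, labelnames: List[str]):
        self.name = name
        self.labelnames = labelnames
        self._values: Dict[Tuple[str, ...], float] = defaultdict(float)

    def labels(self, **labels: str) -> "_Child":
        # Keyword labels must exactly match the declared labelnames.
        if set(labels) != set(self.labelnames):
            raise ValueError(f"expected labels {self.labelnames}, got {sorted(labels)}")
        key = tuple(labels[name] for name in self.labelnames)
        return _Child(self._values, key)

class _Child:
    def __init__(self, values: Dict[Tuple[str, ...], float], key: Tuple[str, ...]):
        self._values = values
        self._key = key

    def inc(self, amount: float = 1.0) -> None:
        self._values[self._key] += amount

sent_transactions_counter = Counter(
    "synapse_appservice_api_sent_transactions",
    "Number of /transactions/ requests sent",
    labelnames=["service", SERVER_NAME_LABEL],
)

# One label dict shared by every counter, as in the diff's `labels = {...}`.
labels = {"service": "as_irc", SERVER_NAME_LABEL: "example.org"}
sent_transactions_counter.labels(**labels).inc()
sent_transactions_counter.labels(**labels).inc()
```

Passing the labels as keywords means a mismatch between call site and declaration fails loudly instead of silently mislabelling a time series.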

async def claim_client_keys(

@@ -103,18 +103,16 @@ MAX_TO_DEVICE_MESSAGES_PER_TRANSACTION = 100

class ApplicationServiceScheduler:
"""Public facing API for this module. Does the required DI to tie the
components together. This also serves as the "event_pool", which in this
"""
Public facing API for this module. Does the required dependency injection (DI) to
tie the components together. This also serves as the "event_pool", which in this
case is a simple array.
"""

def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock()
self.txn_ctrl = _TransactionController(hs)
self.store = hs.get_datastores().main
self.as_api = hs.get_application_service_api()

self.txn_ctrl = _TransactionController(self.clock, self.store, self.as_api)
self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock, hs)
self.queuer = _ServiceQueuer(self.txn_ctrl, hs)

async def start(self) -> None:
logger.info("Starting appservice scheduler")
@@ -184,9 +182,7 @@ class _ServiceQueuer:
appservice at a given time.
"""

def __init__(
self, txn_ctrl: "_TransactionController", clock: Clock, hs: "HomeServer"
):
def __init__(self, txn_ctrl: "_TransactionController", hs: "HomeServer"):
# dict of {service_id: [events]}
self.queued_events: Dict[str, List[EventBase]] = {}
# dict of {service_id: [events]}
@@ -199,10 +195,11 @@ class _ServiceQueuer:
# the appservices which currently have a transaction in flight
self.requests_in_flight: Set[str] = set()
self.txn_ctrl = txn_ctrl
self.clock = clock
self._msc3202_transaction_extensions_enabled: bool = (
hs.config.experimental.msc3202_transaction_extensions
)
self.server_name = hs.hostname
self.clock = hs.get_clock()
self._store = hs.get_datastores().main

def start_background_request(self, service: ApplicationService) -> None:
@@ -210,7 +207,9 @@ class _ServiceQueuer:
if service.id in self.requests_in_flight:
return

run_as_background_process("as-sender", self._send_request, service)
run_as_background_process(
"as-sender", self.server_name, self._send_request, service
)

async def _send_request(self, service: ApplicationService) -> None:
# sanity-check: we shouldn't get here if this service already has a sender
@@ -359,10 +358,11 @@ class _TransactionController:
(Note we only have one of these in the homeserver.)
"""

def __init__(self, clock: Clock, store: DataStore, as_api: ApplicationServiceApi):
self.clock = clock
self.store = store
self.as_api = as_api
def __init__(self, hs: "HomeServer"):
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.store = hs.get_datastores().main
self.as_api = hs.get_application_service_api()

# map from service id to recoverer instance
self.recoverers: Dict[str, "_Recoverer"] = {}
@@ -446,7 +446,12 @@ class _TransactionController:
logger.info("Starting recoverer for AS ID %s", service.id)
assert service.id not in self.recoverers
recoverer = self.RECOVERER_CLASS(
self.clock, self.store, self.as_api, service, self.on_recovered
self.server_name,
self.clock,
self.store,
self.as_api,
service,
self.on_recovered,
)
self.recoverers[service.id] = recoverer
recoverer.recover()
@@ -477,21 +482,24 @@ class _Recoverer:
We have one of these for each appservice which is currently considered DOWN.

Args:
clock (synapse.util.Clock):
store (synapse.storage.DataStore):
as_api (synapse.appservice.api.ApplicationServiceApi):
service (synapse.appservice.ApplicationService): the service we are managing
callback (callable[_Recoverer]): called once the service recovers.
server_name: the homeserver name (used to label metrics) (this should be `hs.hostname`).
clock:
store:
as_api:
service: the service we are managing
callback: called once the service recovers.
"""

def __init__(
self,
server_name: str,
clock: Clock,
store: DataStore,
as_api: ApplicationServiceApi,
service: ApplicationService,
callback: Callable[["_Recoverer"], Awaitable[None]],
):
self.server_name = server_name
self.clock = clock
self.store = store
self.as_api = as_api
@@ -504,7 +512,11 @@ class _Recoverer:
delay = 2**self.backoff_counter
logger.info("Scheduling retries on %s in %fs", self.service.id, delay)
self.scheduled_recovery = self.clock.call_later(
delay, run_as_background_process, "as-recoverer", self.retry
delay,
run_as_background_process,
"as-recoverer",
self.server_name,
self.retry,
)

def _backoff(self) -> None:
@@ -525,6 +537,7 @@ class _Recoverer:
# Run a retry, which will reschedule a recovery if it fails.
run_as_background_process(
"retry",
self.server_name,
self.retry,
)
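The `_Recoverer` above schedules each retry after `delay = 2**self.backoff_counter` seconds, i.e. classic exponential backoff. As a quick illustration of how that schedule grows, here is a small sketch (the cap value and the starting counter are assumptions for the example; the hunk does not show Synapse's actual bound):

```python
def backoff_schedule(attempts: int, cap: float = 3600.0) -> list:
    """Exponential backoff delays in seconds: 2, 4, 8, ... capped at `cap`.

    Assumes the backoff counter starts at 1 and increments once per failure.
    """
    return [min(2.0**counter, cap) for counter in range(1, attempts + 1)]

first_five = backoff_schedule(5)  # [2.0, 4.0, 8.0, 16.0, 32.0]
```

Capping the delay keeps a long-dead appservice from pushing retries out indefinitely while still backing off quickly under sustained failure.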

@@ -581,3 +581,10 @@ class ExperimentalConfig(Config):

# MSC4155: Invite filtering
self.msc4155_enabled: bool = experimental.get("msc4155_enabled", False)

# MSC4293: Redact on Kick/Ban
self.msc4293_enabled: bool = experimental.get("msc4293_enabled", False)

# MSC4306: Thread Subscriptions
# (and MSC4308: sliding sync extension for thread subscriptions)
self.msc4306_enabled: bool = experimental.get("msc4306_enabled", False)

@@ -241,6 +241,12 @@ class RatelimitConfig(Config):
defaults={"per_second": 1, "burst_count": 5},
)

self.rc_room_creation = RatelimitSettings.parse(
config,
"rc_room_creation",
defaults={"per_second": 0.016, "burst_count": 10},
)

self.rc_reports = RatelimitSettings.parse(
config,
"rc_reports",

@@ -22,11 +22,10 @@
import logging
import os
from typing import Any, Dict, List, Tuple
from urllib.request import getproxies_environment

import attr

from synapse.config.server import generate_ip_set
from synapse.config.server import generate_ip_set, parse_proxy_config
from synapse.types import JsonDict
from synapse.util.check_dependencies import check_requirements
from synapse.util.module_loader import load_module
@@ -61,7 +60,7 @@ THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP = {
"image/png": "png",
}

HTTP_PROXY_SET_WARNING = """\
URL_PREVIEW_BLACKLIST_IGNORED_BECAUSE_HTTP_PROXY_SET_WARNING = """\
The Synapse config url_preview_ip_range_blacklist will be ignored as an HTTP(s) proxy is configured."""

@@ -234,17 +233,25 @@ class ContentRepositoryConfig(Config):
if self.url_preview_enabled:
check_requirements("url-preview")

proxy_env = getproxies_environment()
if "url_preview_ip_range_blacklist" not in config:
if "http" not in proxy_env or "https" not in proxy_env:
proxy_config = parse_proxy_config(config)
is_proxy_configured = (
proxy_config.http_proxy is not None
or proxy_config.https_proxy is not None
)
if "url_preview_ip_range_blacklist" in config:
if is_proxy_configured:
logger.warning(
"".join(
URL_PREVIEW_BLACKLIST_IGNORED_BECAUSE_HTTP_PROXY_SET_WARNING
)
)
else:
if not is_proxy_configured:
raise ConfigError(
"For security, you must specify an explicit target IP address "
"blacklist in url_preview_ip_range_blacklist for url previewing "
"to work"
)
else:
if "http" in proxy_env or "https" in proxy_env:
logger.warning("".join(HTTP_PROXY_SET_WARNING))

# we always block '0.0.0.0' and '::', which are supposed to be
# unroutable addresses.

@@ -25,11 +25,13 @@ import logging
import os.path
import urllib.parse
from textwrap import indent
from typing import Any, Dict, Iterable, List, Optional, Set, Tuple, Union
from typing import Any, Dict, Iterable, List, Optional, Set, Tuple, TypedDict, Union
from urllib.request import getproxies_environment

import attr
import yaml
from netaddr import AddrFormatError, IPNetwork, IPSet
from typing_extensions import TypeGuard

from twisted.conch.ssh.keys import Key

@@ -43,6 +45,21 @@ from ._util import validate_config

logger = logging.getLogger(__name__)

# Directly from the mypy docs:
# https://typing.python.org/en/latest/spec/narrowing.html#typeguard
def is_str_list(val: Any, allow_empty: bool) -> TypeGuard[list[str]]:
"""
Type-narrow a value to a list of strings (compatible with mypy).
"""
if not isinstance(val, list):
return False

if len(val) == 0:
return allow_empty
return all(isinstance(x, str) for x in val)

DIRECT_TCP_ERROR = """
Using direct TCP replication for workers is no longer supported.

@@ -291,6 +308,102 @@ class LimitRemoteRoomsConfig:
)

class ProxyConfigDictionary(TypedDict):
"""
Dictionary of proxy settings suitable for interacting with `urllib.request` API's
"""

http: Optional[str]
"""
Proxy server to use for HTTP requests.
"""
https: Optional[str]
"""
Proxy server to use for HTTPS requests.
"""
no: str
"""
Comma-separated list of hosts, IP addresses, or IP ranges in CIDR format which
should not use the proxy.

Empty string means no hosts should be excluded from the proxy.
"""

@attr.s(slots=True, frozen=True, auto_attribs=True)
class ProxyConfig:
"""
Synapse configuration for HTTP proxy settings.
"""

http_proxy: Optional[str]
"""
Proxy server to use for HTTP requests.
"""
https_proxy: Optional[str]
"""
Proxy server to use for HTTPS requests.
"""
no_proxy_hosts: Optional[List[str]]
"""
List of hosts, IP addresses, or IP ranges in CIDR format which should not use the
proxy. Synapse will directly connect to these hosts.
"""

def get_proxies_dictionary(self) -> ProxyConfigDictionary:
"""
Returns a dictionary of proxy settings suitable for interacting with
`urllib.request` API's (e.g. `urllib.request.proxy_bypass_environment`)

The keys are `"http"`, `"https"`, and `"no"`.
"""
return ProxyConfigDictionary(
http=self.http_proxy,
https=self.https_proxy,
no=",".join(self.no_proxy_hosts) if self.no_proxy_hosts else "",
)

def parse_proxy_config(config: JsonDict) -> ProxyConfig:
"""
Figure out forward proxy config for outgoing HTTP requests.

Prefer values from the given config over the environment variables (`http_proxy`,
`https_proxy`, `no_proxy`, not case-sensitive).

Args:
config: The top-level homeserver configuration dictionary.
"""
proxies_from_env = getproxies_environment()
http_proxy = config.get("http_proxy", proxies_from_env.get("http"))
if http_proxy is not None and not isinstance(http_proxy, str):
raise ConfigError("'http_proxy' must be a string", ("http_proxy",))

https_proxy = config.get("https_proxy", proxies_from_env.get("https"))
if https_proxy is not None and not isinstance(https_proxy, str):
raise ConfigError("'https_proxy' must be a string", ("https_proxy",))

# List of hosts which should not use the proxy. Synapse will directly connect to
# these hosts.
no_proxy_hosts = config.get("no_proxy_hosts")
# The `no_proxy` environment variable should be a comma-separated list of hosts,
# IP addresses, or IP ranges in CIDR format
no_proxy_from_env = proxies_from_env.get("no")
if no_proxy_hosts is None and no_proxy_from_env is not None:
no_proxy_hosts = no_proxy_from_env.split(",")

if no_proxy_hosts is not None and not is_str_list(no_proxy_hosts, allow_empty=True):
raise ConfigError(
"'no_proxy_hosts' must be a list of strings", ("no_proxy_hosts",)
)

return ProxyConfig(
http_proxy=http_proxy,
https_proxy=https_proxy,
no_proxy_hosts=no_proxy_hosts,
)
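As `parse_proxy_config` shows, explicit config keys win over the `http_proxy`/`https_proxy`/`no_proxy` environment variables, with the env values only filling gaps. A simplified model of that precedence (the env mapping is passed in directly here instead of calling `getproxies_environment()`, so the sketch stays deterministic; names are illustrative, not Synapse's API):

```python
from typing import Dict, List, Optional, Tuple

def resolve_proxies(
    config: Dict[str, object], env: Dict[str, str]
) -> Tuple[Optional[str], Optional[str], Optional[List[str]]]:
    """Return (http_proxy, https_proxy, no_proxy_hosts), preferring config over env."""
    http_proxy = config.get("http_proxy", env.get("http"))
    https_proxy = config.get("https_proxy", env.get("https"))
    no_proxy_hosts = config.get("no_proxy_hosts")
    if no_proxy_hosts is None and "no" in env:
        # The no_proxy env value is a comma-separated list of hosts/CIDR ranges.
        no_proxy_hosts = env["no"].split(",")
    return http_proxy, https_proxy, no_proxy_hosts

# Config overrides env for http; env fills the gaps for https and no_proxy.
result = resolve_proxies(
    {"http_proxy": "http://cfg:3128"},
    {"http": "http://env:8080", "https": "http://env:8080", "no": "localhost,10.0.0.0/8"},
)
```

Keeping the merge in one function gives a single place to validate types, which is why the real code raises `ConfigError` right after each lookup.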
|
||||
|
||||
|
||||
class ServerConfig(Config):
|
||||
section = "server"
|
||||
|
||||
@@ -718,6 +831,17 @@ class ServerConfig(Config):
|
||||
)
|
||||
)
|
||||
|
||||
# Figure out forward proxy config for outgoing HTTP requests.
|
||||
#
|
||||
# Prefer values from the file config over the environment variables
|
||||
self.proxy_config = parse_proxy_config(config)
|
||||
logger.debug(
|
||||
"Using proxy settings: http_proxy=%s, https_proxy=%s, no_proxy=%s",
|
||||
self.proxy_config.http_proxy,
|
||||
self.proxy_config.https_proxy,
|
||||
self.proxy_config.no_proxy_hosts,
|
||||
)
|
||||
|
||||
self.cleanup_extremities_with_dummy_events = config.get(
|
||||
"cleanup_extremities_with_dummy_events", True
|
||||
)
|
||||
|
||||
@@ -174,6 +174,10 @@ class WriterLocations:
|
||||
default=[MAIN_PROCESS_INSTANCE_NAME],
|
||||
converter=_instance_to_list_converter,
|
||||
)
|
||||
thread_subscriptions: List[str] = attr.ib(
|
||||
default=["master"],
|
||||
converter=_instance_to_list_converter,
|
||||
)
|
||||
|
||||
|
||||
@attr.s(auto_attribs=True)
|
||||
|
||||
@@ -152,6 +152,8 @@ class Keyring:
    def __init__(
        self, hs: "HomeServer", key_fetchers: "Optional[Iterable[KeyFetcher]]" = None
    ):
        self.server_name = hs.hostname

        if key_fetchers is None:
            # Always fetch keys from the database.
            mutable_key_fetchers: List[KeyFetcher] = [StoreKeyFetcher(hs)]
@@ -169,7 +171,8 @@ class Keyring:
        self._fetch_keys_queue: BatchingQueue[
            _FetchKeyRequest, Dict[str, Dict[str, FetchKeyResult]]
        ] = BatchingQueue(
            "keyring_server",
            name="keyring_server",
            server_name=self.server_name,
            clock=hs.get_clock(),
            # The method called to fetch each key
            process_batch_callback=self._inner_fetch_key_requests,
@@ -473,8 +476,12 @@ class Keyring:

class KeyFetcher(metaclass=abc.ABCMeta):
    def __init__(self, hs: "HomeServer"):
        self.server_name = hs.hostname
        self._queue = BatchingQueue(
            self.__class__.__name__, hs.get_clock(), self._fetch_keys
            name=self.__class__.__name__,
            server_name=self.server_name,
            clock=hs.get_clock(),
            process_batch_callback=self._fetch_keys,
        )

    async def get_keys(

@@ -34,6 +34,7 @@ class InviteAutoAccepter:
    def __init__(self, config: AutoAcceptInvitesConfig, api: ModuleApi):
        # Keep a reference to the Module API.
        self._api = api
        self.server_name = api.server_name
        self._config = config

        if not self._config.enabled:
@@ -113,6 +114,7 @@ class InviteAutoAccepter:
            # that occurs when responding to invites over federation (see https://github.com/matrix-org/synapse-auto-accept-invite/issues/12)
            run_as_background_process(
                "retry_make_join",
                self.server_name,
                self._retry_make_join,
                event.state_key,
                event.state_key,

@@ -74,6 +74,7 @@ from synapse.federation.transport.client import SendJoinResponse
from synapse.http.client import is_unknown_endpoint
from synapse.http.types import QueryParams
from synapse.logging.opentracing import SynapseTags, log_kv, set_tag, tag_args, trace
from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import JsonDict, StrCollection, UserID, get_domain_from_id
from synapse.types.handlers.policy_server import RECOMMENDATION_OK, RECOMMENDATION_SPAM
from synapse.util.async_helpers import concurrently_execute
@@ -85,7 +86,9 @@ if TYPE_CHECKING:

logger = logging.getLogger(__name__)

sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["type"])
sent_queries_counter = Counter(
    "synapse_federation_client_sent_queries", "", labelnames=["type", SERVER_NAME_LABEL]
)


PDU_RETRY_TIME_MS = 1 * 60 * 1000
@@ -209,7 +212,10 @@ class FederationClient(FederationBase):
        Returns:
            The JSON object from the response
        """
        sent_queries_counter.labels(query_type).inc()
        sent_queries_counter.labels(
            type=query_type,
            **{SERVER_NAME_LABEL: self.server_name},
        ).inc()

        return await self.transport_layer.make_query(
            destination,
@@ -231,7 +237,10 @@ class FederationClient(FederationBase):
        Returns:
            The JSON object from the response
        """
        sent_queries_counter.labels("client_device_keys").inc()
        sent_queries_counter.labels(
            type="client_device_keys",
            **{SERVER_NAME_LABEL: self.server_name},
        ).inc()
        return await self.transport_layer.query_client_keys(
            destination, content, timeout
        )
@@ -242,7 +251,10 @@ class FederationClient(FederationBase):
        """Query the device keys for a list of user ids hosted on a remote
        server.
        """
        sent_queries_counter.labels("user_devices").inc()
        sent_queries_counter.labels(
            type="user_devices",
            **{SERVER_NAME_LABEL: self.server_name},
        ).inc()
        return await self.transport_layer.query_user_devices(
            destination, user_id, timeout
        )
@@ -264,7 +276,10 @@ class FederationClient(FederationBase):
        Returns:
            The JSON object from the response
        """
        sent_queries_counter.labels("client_one_time_keys").inc()
        sent_queries_counter.labels(
            type="client_one_time_keys",
            **{SERVER_NAME_LABEL: self.server_name},
        ).inc()

        # Convert the query with counts into a stable and unstable query and check
        # if attempting to claim more than 1 OTK.

@@ -82,6 +82,7 @@ from synapse.logging.opentracing import (
    tag_args,
    trace,
)
from synapse.metrics import SERVER_NAME_LABEL
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.replication.http.federation import (
    ReplicationFederationSendEduRestServlet,
@@ -104,12 +105,18 @@ TRANSACTION_CONCURRENCY_LIMIT = 10

logger = logging.getLogger(__name__)

received_pdus_counter = Counter("synapse_federation_server_received_pdus", "")
received_pdus_counter = Counter(
    "synapse_federation_server_received_pdus", "", labelnames=[SERVER_NAME_LABEL]
)

received_edus_counter = Counter("synapse_federation_server_received_edus", "")
received_edus_counter = Counter(
    "synapse_federation_server_received_edus", "", labelnames=[SERVER_NAME_LABEL]
)

received_queries_counter = Counter(
    "synapse_federation_server_received_queries", "", ["type"]
    "synapse_federation_server_received_queries",
    "",
    labelnames=["type", SERVER_NAME_LABEL],
)

pdu_process_time = Histogram(
@@ -434,7 +441,9 @@ class FederationServer(FederationBase):
        report back to the sending server.
        """

        received_pdus_counter.inc(len(transaction.pdus))
        received_pdus_counter.labels(**{SERVER_NAME_LABEL: self.server_name}).inc(
            len(transaction.pdus)
        )

        origin_host, _ = parse_server_name(origin)

@@ -553,7 +562,7 @@ class FederationServer(FederationBase):
        """Process the EDUs in a received transaction."""

        async def _process_edu(edu_dict: JsonDict) -> None:
            received_edus_counter.inc()
            received_edus_counter.labels(**{SERVER_NAME_LABEL: self.server_name}).inc()

            edu = Edu(
                origin=origin,
@@ -668,7 +677,10 @@ class FederationServer(FederationBase):
    async def on_query_request(
        self, query_type: str, args: Dict[str, str]
    ) -> Tuple[int, Dict[str, Any]]:
        received_queries_counter.labels(query_type).inc()
        received_queries_counter.labels(
            type=query_type,
            **{SERVER_NAME_LABEL: self.server_name},
        ).inc()
        resp = await self.registry.on_query(query_type, args)
        return 200, resp

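The counter changes above all follow one pattern: the metric is declared with an extra `SERVER_NAME_LABEL` in `labelnames`, and increments go through `.labels(...)` with the homeserver name splatted in via `**{SERVER_NAME_LABEL: self.server_name}`. A stdlib-only sketch of those semantics (the real code uses `prometheus_client.Counter`, and the concrete value of `SERVER_NAME_LABEL` is an assumption here):

```python
from collections import defaultdict
from typing import Dict, List, Tuple

SERVER_NAME_LABEL = "server_name"  # assumed value of the real constant


class _Child:
    """A counter bound to one concrete set of label values."""

    def __init__(self, values: Dict[Tuple[str, ...], float], key: Tuple[str, ...]):
        self._values = values
        self._key = key

    def inc(self, amount: float = 1.0) -> None:
        self._values[self._key] += amount


class Counter:
    """Minimal imitation of a labelled prometheus_client Counter."""

    def __init__(self, name: str, documentation: str, labelnames: List[str]) -> None:
        self.name = name
        self.labelnames = tuple(labelnames)
        self.values: Dict[Tuple[str, ...], float] = defaultdict(float)

    def labels(self, **labelvalues: str) -> _Child:
        # Every declared label must be supplied, exactly once.
        assert set(labelvalues) == set(self.labelnames)
        key = tuple(labelvalues[label] for label in self.labelnames)
        return _Child(self.values, key)


received_queries_counter = Counter(
    "synapse_federation_server_received_queries",
    "",
    labelnames=["type", SERVER_NAME_LABEL],
)

# Same call shape as the diff: the constant-named label is splatted in,
# so each homeserver in a multi-tenant process gets its own time series.
received_queries_counter.labels(
    type="profile", **{SERVER_NAME_LABEL: "hs.example.com"}
).inc()
```

The `**{SERVER_NAME_LABEL: ...}` splat is needed because the label name is held in a constant rather than written literally as a keyword.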
@@ -160,6 +160,7 @@ from synapse.federation.sender.transaction_manager import TransactionManager
from synapse.federation.units import Edu
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.metrics import (
    SERVER_NAME_LABEL,
    LaterGauge,
    event_processing_loop_counter,
    event_processing_loop_room_count,
@@ -189,11 +190,13 @@ logger = logging.getLogger(__name__)
sent_pdus_destination_dist_count = Counter(
    "synapse_federation_client_sent_pdu_destinations_count",
    "Number of PDUs queued for sending to one or more destinations",
    labelnames=[SERVER_NAME_LABEL],
)

sent_pdus_destination_dist_total = Counter(
    "synapse_federation_client_sent_pdu_destinations",
    "Total number of PDUs queued for sending across all destinations",
    labelnames=[SERVER_NAME_LABEL],
)

# Time (in s) to wait before trying to wake up destinations that have
@@ -296,6 +299,7 @@ class _DestinationWakeupQueue:

    Staggers waking up of per destination queues to ensure that we don't attempt
    to start TLS connections with many hosts all at once, leading to pinned CPU.

    """

    # The maximum duration in seconds between queuing up a destination and it
@@ -303,6 +307,10 @@ class _DestinationWakeupQueue:
    _MAX_TIME_IN_QUEUE = 30.0

    sender: "FederationSender" = attr.ib()
    server_name: str = attr.ib()
    """
    Our homeserver name (used to label metrics) (`hs.hostname`).
    """
    clock: Clock = attr.ib()
    max_delay_s: int = attr.ib()

@@ -427,7 +435,7 @@ class FederationSender(AbstractFederationSender):
            1.0 / hs.config.ratelimiting.federation_rr_transactions_per_room_per_second
        )
        self._destination_wakeup_queue = _DestinationWakeupQueue(
            self, self.clock, max_delay_s=rr_txn_interval_per_room_s
            self, self.server_name, self.clock, max_delay_s=rr_txn_interval_per_room_s
        )

        # Regularly wake up destinations that have outstanding PDUs to be caught up
@@ -435,6 +443,7 @@ class FederationSender(AbstractFederationSender):
            run_as_background_process,
            WAKEUP_RETRY_PERIOD_SEC * 1000.0,
            "wake_destinations_needing_catchup",
            self.server_name,
            self._wake_destinations_needing_catchup,
        )

@@ -477,7 +486,9 @@ class FederationSender(AbstractFederationSender):

        # fire off a processing loop in the background
        run_as_background_process(
            "process_event_queue_for_federation", self._process_event_queue_loop
            "process_event_queue_for_federation",
            self.server_name,
            self._process_event_queue_loop,
        )

    async def _process_event_queue_loop(self) -> None:
@@ -700,13 +711,19 @@ class FederationSender(AbstractFederationSender):
                    "federation_sender"
                ).set(ts)

                events_processed_counter.inc(len(event_entries))
                events_processed_counter.labels(
                    **{SERVER_NAME_LABEL: self.server_name}
                ).inc(len(event_entries))

                event_processing_loop_room_count.labels("federation_sender").inc(
                    len(events_by_room)
                )
                event_processing_loop_room_count.labels(
                    name="federation_sender",
                    **{SERVER_NAME_LABEL: self.server_name},
                ).inc(len(events_by_room))

                event_processing_loop_counter.labels("federation_sender").inc()
                event_processing_loop_counter.labels(
                    name="federation_sender",
                    **{SERVER_NAME_LABEL: self.server_name},
                ).inc()

                synapse.metrics.event_processing_positions.labels(
                    "federation_sender"
@@ -727,8 +744,12 @@ class FederationSender(AbstractFederationSender):
        if not destinations:
            return

        sent_pdus_destination_dist_total.inc(len(destinations))
        sent_pdus_destination_dist_count.inc()
        sent_pdus_destination_dist_total.labels(
            **{SERVER_NAME_LABEL: self.server_name}
        ).inc(len(destinations))
        sent_pdus_destination_dist_count.labels(
            **{SERVER_NAME_LABEL: self.server_name}
        ).inc()

        assert pdu.internal_metadata.stream_ordering


@@ -40,7 +40,7 @@ from synapse.federation.units import Edu
from synapse.handlers.presence import format_user_presence_state
from synapse.logging import issue9533_logger
from synapse.logging.opentracing import SynapseTags, set_tag
from synapse.metrics import sent_transactions_counter
from synapse.metrics import SERVER_NAME_LABEL, sent_transactions_counter
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import JsonDict, ReadReceipt
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
@@ -56,13 +56,15 @@ logger = logging.getLogger(__name__)


sent_edus_counter = Counter(
    "synapse_federation_client_sent_edus", "Total number of EDUs successfully sent"
    "synapse_federation_client_sent_edus",
    "Total number of EDUs successfully sent",
    labelnames=[SERVER_NAME_LABEL],
)

sent_edus_by_type = Counter(
    "synapse_federation_client_sent_edus_by_type",
    "Number of sent EDUs successfully sent, by event type",
    ["type"],
    labelnames=["type", SERVER_NAME_LABEL],
)


@@ -91,7 +93,7 @@ class PerDestinationQueue:
        transaction_manager: "synapse.federation.sender.TransactionManager",
        destination: str,
    ):
        self._server_name = hs.hostname
        self.server_name = hs.hostname
        self._clock = hs.get_clock()
        self._storage_controllers = hs.get_storage_controllers()
        self._store = hs.get_datastores().main
@@ -311,6 +313,7 @@ class PerDestinationQueue:

        run_as_background_process(
            "federation_transaction_transmission_loop",
            self.server_name,
            self._transaction_transmission_loop,
        )

@@ -322,7 +325,12 @@ class PerDestinationQueue:
            # This will throw if we wouldn't retry. We do this here so we fail
            # quickly, but we will later check this again in the http client,
            # hence why we throw the result away.
            await get_retry_limiter(self._destination, self._clock, self._store)
            await get_retry_limiter(
                destination=self._destination,
                our_server_name=self.server_name,
                clock=self._clock,
                store=self._store,
            )

            if self._catching_up:
                # we potentially need to catch-up first
@@ -362,10 +370,17 @@ class PerDestinationQueue:
                    self._destination, pending_pdus, pending_edus
                )

                sent_transactions_counter.inc()
                sent_edus_counter.inc(len(pending_edus))
                sent_transactions_counter.labels(
                    **{SERVER_NAME_LABEL: self.server_name}
                ).inc()
                sent_edus_counter.labels(
                    **{SERVER_NAME_LABEL: self.server_name}
                ).inc(len(pending_edus))
                for edu in pending_edus:
                    sent_edus_by_type.labels(edu.edu_type).inc()
                    sent_edus_by_type.labels(
                        type=edu.edu_type,
                        **{SERVER_NAME_LABEL: self.server_name},
                    ).inc()

        except NotRetryingDestination as e:
            logger.debug(
@@ -566,7 +581,7 @@ class PerDestinationQueue:
            new_pdus = await filter_events_for_server(
                self._storage_controllers,
                self._destination,
                self._server_name,
                self.server_name,
                new_pdus,
                redact=False,
                filter_out_erased_senders=True,
@@ -590,7 +605,9 @@ class PerDestinationQueue:
                self._destination, room_catchup_pdus, []
            )

            sent_transactions_counter.inc()
            sent_transactions_counter.labels(
                **{SERVER_NAME_LABEL: self.server_name}
            ).inc()

            # We pulled this from the DB, so it'll be non-null
            assert pdu.internal_metadata.stream_ordering
@@ -613,7 +630,7 @@ class PerDestinationQueue:
        # Send at most limit EDUs for receipts.
        for content in self._pending_receipt_edus[:limit]:
            yield Edu(
                origin=self._server_name,
                origin=self.server_name,
                destination=self._destination,
                edu_type=EduTypes.RECEIPT,
                content=content,
@@ -639,7 +656,7 @@ class PerDestinationQueue:
        )
        edus = [
            Edu(
                origin=self._server_name,
                origin=self.server_name,
                destination=self._destination,
                edu_type=edu_type,
                content=content,
@@ -666,7 +683,7 @@ class PerDestinationQueue:

        edus = [
            Edu(
                origin=self._server_name,
                origin=self.server_name,
                destination=self._destination,
                edu_type=EduTypes.DIRECT_TO_DEVICE,
                content=content,
@@ -739,7 +756,7 @@ class _TransactionQueueManager:

            pending_edus.append(
                Edu(
                    origin=self.queue._server_name,
                    origin=self.queue.server_name,
                    destination=self.queue._destination,
                    edu_type=EduTypes.PRESENCE,
                    content={"push": presence_to_add},

@@ -38,6 +38,9 @@ logger = logging.getLogger(__name__)
class AccountValidityHandler:
    def __init__(self, hs: "HomeServer"):
        self.hs = hs
        self.server_name = (
            hs.hostname
        )  # nb must be called this for @wrap_as_background_process
        self.config = hs.config
        self.store = hs.get_datastores().main
        self.send_email_handler = hs.get_send_email_handler()

@@ -358,6 +358,7 @@ class AdminHandler:
        user_id: str,
        rooms: list,
        requester: JsonMapping,
        use_admin: bool,
        reason: Optional[str],
        limit: Optional[int],
    ) -> str:
@@ -368,6 +369,7 @@ class AdminHandler:
            user_id: the user ID of the user whose events should be redacted
            rooms: the rooms in which to redact the user's events
            requester: the user requesting the events
            use_admin: whether to use the admin account to issue the redactions
            reason: reason for requesting the redaction, ie spam, etc
            limit: limit on the number of events in each room to redact

@@ -395,6 +397,7 @@ class AdminHandler:
                "rooms": rooms,
                "requester": requester,
                "user_id": user_id,
                "use_admin": use_admin,
                "reason": reason,
                "limit": limit,
            },
@@ -426,9 +429,17 @@ class AdminHandler:
        user_id = task.params.get("user_id")
        assert user_id is not None

        # puppet the user if they're ours, otherwise use admin to redact
        use_admin = task.params.get("use_admin", False)

        # default to puppeting the user unless they are not local or it's been requested to
        # use the admin user to issue the redactions
        requester_id = (
            admin.user.to_string()
            if use_admin or not self.hs.is_mine_id(user_id)
            else user_id
        )
        requester = create_requester(
            user_id if self.hs.is_mine_id(user_id) else admin.user.to_string(),
            requester_id,
            authenticated_entity=admin.user.to_string(),
        )

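The `AdminHandler` hunk above names the decision as `requester_id`: redactions are issued while puppeting the target user only when that user is local and `use_admin` was not requested; otherwise they are issued as the admin. The rule can be isolated as a pure function (a sketch; `is_mine` stands in for `hs.is_mine_id(user_id)`):

```python
def pick_requester_id(
    user_id: str, admin_user_id: str, use_admin: bool, is_mine: bool
) -> str:
    """Puppet the target user unless they are remote or admin was requested."""
    if use_admin or not is_mine:
        return admin_user_id
    return user_id
```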
@@ -42,6 +42,7 @@ from synapse.events import EventBase
from synapse.handlers.presence import format_user_presence_state
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.metrics import (
    SERVER_NAME_LABEL,
    event_processing_loop_counter,
    event_processing_loop_room_count,
)
@@ -68,12 +69,16 @@ if TYPE_CHECKING:

logger = logging.getLogger(__name__)

events_processed_counter = Counter("synapse_handlers_appservice_events_processed", "")
events_processed_counter = Counter(
    "synapse_handlers_appservice_events_processed", "", labelnames=[SERVER_NAME_LABEL]
)


class ApplicationServicesHandler:
    def __init__(self, hs: "HomeServer"):
        self.server_name = hs.hostname
        self.server_name = (
            hs.hostname
        )  # nb must be called this for @wrap_as_background_process
        self.store = hs.get_datastores().main
        self.is_mine_id = hs.is_mine_id
        self.appservice_api = hs.get_application_service_api()
@@ -166,7 +171,9 @@ class ApplicationServicesHandler:
                    except Exception:
                        logger.error("Application Services Failure")

                run_as_background_process("as_scheduler", start_scheduler)
                run_as_background_process(
                    "as_scheduler", self.server_name, start_scheduler
                )
                self.started_scheduler = True

            # Fork off pushes to these services
@@ -203,13 +210,19 @@ class ApplicationServicesHandler:
                    "appservice_sender"
                ).set(upper_bound)

                events_processed_counter.inc(len(events))
                events_processed_counter.labels(
                    **{SERVER_NAME_LABEL: self.server_name}
                ).inc(len(events))

                event_processing_loop_room_count.labels("appservice_sender").inc(
                    len(events_by_room)
                )
                event_processing_loop_room_count.labels(
                    name="appservice_sender",
                    **{SERVER_NAME_LABEL: self.server_name},
                ).inc(len(events_by_room))

                event_processing_loop_counter.labels("appservice_sender").inc()
                event_processing_loop_counter.labels(
                    name="appservice_sender",
                    **{SERVER_NAME_LABEL: self.server_name},
                ).inc()

                if events:
                    now = self.clock.time_msec()

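Several constructors in this diff carry the comment `# nb must be called this for @wrap_as_background_process`, presumably because the decorator looks up `self.server_name` on the bound instance at call time rather than taking it as an argument. A hedged sketch of that mechanism (not Synapse's real decorator, which also sets up logging contexts and metrics):

```python
import functools
from typing import Any, Callable, Tuple


def wrap_as_background_process(desc: str) -> Callable:
    """Sketch: decorate a method so it runs as a named background process."""

    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(self: Any, *args: Any, **kwargs: Any) -> Tuple[str, str, Any]:
            # The decorator relies on the attribute being named exactly
            # `server_name` on the instance it is bound to.
            server_name = self.server_name
            return desc, server_name, func(self, *args, **kwargs)

        return wrapper

    return decorator


class Handler:
    def __init__(self) -> None:
        self.server_name = "hs.example.com"  # nb must be called this

    @wrap_as_background_process("tick")
    def tick(self) -> str:
        return "ok"
```

Renaming the attribute (e.g. to `_server_name`) would break the decorated methods at runtime, which is why the diff standardises on `self.server_name` throughout.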
@@ -70,6 +70,7 @@ from synapse.http import get_request_user_agent
from synapse.http.server import finish_request, respond_with_html
from synapse.http.site import SynapseRequest
from synapse.logging.context import defer_to_thread
from synapse.metrics import SERVER_NAME_LABEL
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.databases.main.registration import (
    LoginTokenExpired,
@@ -95,7 +96,7 @@ INVALID_USERNAME_OR_PASSWORD = "Invalid username or password"
invalid_login_token_counter = Counter(
    "synapse_user_login_invalid_login_tokens",
    "Counts the number of rejected m.login.token on /login",
    ["reason"],
    labelnames=["reason", SERVER_NAME_LABEL],
)


@@ -199,6 +200,7 @@ class AuthHandler:
    SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000

    def __init__(self, hs: "HomeServer"):
        self.server_name = hs.hostname
        self.store = hs.get_datastores().main
        self.auth = hs.get_auth()
        self.auth_blocking = hs.get_auth_blocking()
@@ -247,6 +249,7 @@ class AuthHandler:
                run_as_background_process,
                5 * 60 * 1000,
                "expire_old_sessions",
                self.server_name,
                self._expire_old_sessions,
            )

@@ -271,8 +274,6 @@ class AuthHandler:
            hs.config.sso.sso_account_deactivated_template
        )

        self._server_name = hs.config.server.server_name

        # cast to tuple for use with str.startswith
        self._whitelisted_sso_clients = tuple(hs.config.sso.sso_client_whitelist)

@@ -1478,11 +1479,20 @@ class AuthHandler:
        try:
            return await self.store.consume_login_token(login_token)
        except LoginTokenExpired:
            invalid_login_token_counter.labels("expired").inc()
            invalid_login_token_counter.labels(
                reason="expired",
                **{SERVER_NAME_LABEL: self.server_name},
            ).inc()
        except LoginTokenReused:
            invalid_login_token_counter.labels("reused").inc()
            invalid_login_token_counter.labels(
                reason="reused",
                **{SERVER_NAME_LABEL: self.server_name},
            ).inc()
        except NotFoundError:
            invalid_login_token_counter.labels("not found").inc()
            invalid_login_token_counter.labels(
                reason="not found",
                **{SERVER_NAME_LABEL: self.server_name},
            ).inc()

        raise AuthError(403, "Invalid login token", errcode=Codes.FORBIDDEN)

@@ -1857,7 +1867,7 @@ class AuthHandler:
        html = self._sso_redirect_confirm_template.render(
            display_url=display_url,
            redirect_url=redirect_url,
            server_name=self._server_name,
            server_name=self.server_name,
            new_user=new_user,
            user_id=registered_user_id,
            user_profile=user_profile_data,

@@ -39,6 +39,7 @@ class DeactivateAccountHandler:
    def __init__(self, hs: "HomeServer"):
        self.store = hs.get_datastores().main
        self.hs = hs
        self.server_name = hs.hostname
        self._auth_handler = hs.get_auth_handler()
        self._device_handler = hs.get_device_handler()
        self._room_member_handler = hs.get_room_member_handler()
@@ -187,6 +188,9 @@ class DeactivateAccountHandler:
        # Remove account data (including ignored users and push rules).
        await self.store.purge_account_data_for_user(user_id)

        # Remove thread subscriptions for the user
        await self.store.purge_thread_subscription_settings_for_user(user_id)

        # Delete any server-side backup keys
        await self.store.bulk_delete_backup_keys_and_versions_for_user(user_id)

@@ -240,7 +244,9 @@ class DeactivateAccountHandler:
        pending deactivation, if it isn't already running.
        """
        if not self._user_parter_running:
            run_as_background_process("user_parter_loop", self._user_parter_loop)
            run_as_background_process(
                "user_parter_loop", self.server_name, self._user_parter_loop
            )

    async def _user_parter_loop(self) -> None:
        """Loop that parts deactivated users from rooms"""

@@ -110,12 +110,13 @@ class DelayedEventsHandler:
                # Can send the events in background after having awaited on marking them as processed
                run_as_background_process(
                    "_send_events",
                    self.server_name,
                    self._send_events,
                    events,
                )

            self._initialized_from_db = run_as_background_process(
                "_schedule_db_events", _schedule_db_events
                "_schedule_db_events", self.server_name, _schedule_db_events
            )
        else:
            self._repl_client = ReplicationAddedDelayedEventRestServlet.make_client(hs)
@@ -140,7 +141,9 @@ class DelayedEventsHandler:
            finally:
                self._event_processing = False

        run_as_background_process(
            "delayed_events.notify_new_event", process)
        run_as_background_process(
            "delayed_events.notify_new_event", self.server_name, process
        )

    async def _unsafe_process_new_event(self) -> None:
        # If self._event_pos is None then means we haven't fetched it from the DB yet
@@ -450,6 +453,7 @@ class DelayedEventsHandler:
                delay_sec,
                run_as_background_process,
                "_send_on_timeout",
                self.server_name,
                self._send_on_timeout,
            )
        else:

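Several hunks move `looping_call(run_as_background_process, interval, desc, func)` to keyword form while inserting the new `server_name` argument. This works because `looping_call` simply forwards any extra positional and keyword arguments to the callable it schedules; a minimal sketch of that forwarding (the real `Clock.looping_call` schedules the call repeatedly instead of invoking it once):

```python
from typing import Any, Callable, List, Tuple


def looping_call(f: Callable[..., Any], interval_ms: float, *args: Any, **kwargs: Any) -> Any:
    # Real Clock.looping_call schedules f(*args, **kwargs) every interval_ms;
    # here we invoke it once to show the argument forwarding.
    return f(*args, **kwargs)


calls: List[Tuple[str, str]] = []


def run_as_background_process(desc: str, server_name: str, func: Callable[[], Any]) -> Any:
    # Sketch of the new signature: server_name now comes right after desc.
    calls.append((desc, server_name))
    return func()


# Keyword form, as in the DeviceHandler hunk, survives parameter insertions.
looping_call(
    run_as_background_process,
    30 * 1000,
    desc="delete_stale_devices",
    server_name="hs.example.com",
    func=lambda: None,
)
```

Using keywords at these call sites is what made threading `server_name` through safe: a new positional parameter would silently shift the old arguments.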
@@ -193,8 +193,9 @@ class DeviceHandler:
            self.clock.looping_call(
                run_as_background_process,
                DELETE_STALE_DEVICES_INTERVAL_MS,
                "delete_stale_devices",
                self._delete_stale_devices,
                desc="delete_stale_devices",
                server_name=self.server_name,
                func=self._delete_stale_devices,
            )

    async def _delete_stale_devices(self) -> None:
@@ -963,6 +964,9 @@ class DeviceWriterHandler(DeviceHandler):
    def __init__(self, hs: "HomeServer"):
        super().__init__(hs)

        self.server_name = (
            hs.hostname
        )  # nb must be called this for @measure_func and @wrap_as_background_process
        # We only need to poke the federation sender explicitly if its on the
        # same instance. Other federation sender instances will get notified by
        # `synapse.app.generic_worker.FederationSenderHandler` when it sees it
@@ -1440,6 +1444,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
    def __init__(self, hs: "HomeServer", device_handler: DeviceWriterHandler):
        super().__init__(hs)

        self.server_name = hs.hostname
        self.federation = hs.get_federation_client()
        self.server_name = hs.hostname  # nb must be called this for @measure_func
        self.clock = hs.get_clock()  # nb must be called this for @measure_func
@@ -1470,6 +1475,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
        self.clock.looping_call(
            run_as_background_process,
            30 * 1000,
            server_name=self.server_name,
            func=self._maybe_retry_device_resync,
            desc="_maybe_retry_device_resync",
        )
@@ -1591,6 +1597,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
        await self.store.mark_remote_users_device_caches_as_stale([user_id])
        run_as_background_process(
            "_maybe_retry_device_resync",
            self.server_name,
            self.multi_user_device_resync,
            [user_id],
            False,

@@ -187,7 +187,9 @@ class FederationHandler:
        # were shut down.
        if not hs.config.worker.worker_app:
            run_as_background_process(
                "resume_sync_partial_state_room", self._resume_partial_state_room_sync
                "resume_sync_partial_state_room",
                self.server_name,
                self._resume_partial_state_room_sync,
            )

    @trace
@@ -316,6 +318,7 @@ class FederationHandler:
        )
        run_as_background_process(
            "_maybe_backfill_inner_anyway_with_max_depth",
            self.server_name,
            self.maybe_backfill,
            room_id=room_id,
            # We use `MAX_DEPTH` so that we find all backfill points next
@@ -798,7 +801,10 @@ class FederationHandler:
            # have. Hence we fire off the background task, but don't wait for it.

            run_as_background_process(
                "handle_queued_pdus", self._handle_queued_pdus, room_queue
                "handle_queued_pdus",
                self.server_name,
                self._handle_queued_pdus,
                room_queue,
            )

    async def do_knock(
@@ -1870,7 +1876,9 @@ class FederationHandler:
        )

        run_as_background_process(
            desc="sync_partial_state_room", func=_sync_partial_state_room_wrapper
            desc="sync_partial_state_room",
            server_name=self.server_name,
            func=_sync_partial_state_room_wrapper,
        )

    async def _sync_partial_state_room(

@@ -76,6 +76,7 @@ from synapse.logging.opentracing import (
    tag_args,
    trace,
)
from synapse.metrics import SERVER_NAME_LABEL
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.federation import (
    ReplicationFederationSendEventsRestServlet,
@@ -105,6 +106,7 @@ logger = logging.getLogger(__name__)
soft_failed_event_counter = Counter(
    "synapse_federation_soft_failed_events_total",
    "Events received over federation that we marked as soft_failed",
    labelnames=[SERVER_NAME_LABEL],
)

# Added to debug performance and track progress on optimizations
@@ -146,6 +148,7 @@ class FederationEventHandler:
    """

    def __init__(self, hs: "HomeServer"):
        self.server_name = hs.hostname
        self._clock = hs.get_clock()
        self._store = hs.get_datastores().main
        self._state_store = hs.get_datastores().state
@@ -170,7 +173,6 @@ class FederationEventHandler:

        self._is_mine_id = hs.is_mine_id
        self._is_mine_server_name = hs.is_mine_server_name
        self._server_name = hs.hostname
        self._instance_name = hs.get_instance_name()

        self._config = hs.config
@@ -249,7 +251,7 @@ class FederationEventHandler:
            # Note that if we were never in the room then we would have already
            # dropped the event, since we wouldn't know the room version.
            is_in_room = await self._event_auth_handler.is_host_in_room(
                room_id, self._server_name
                room_id, self.server_name
            )
            if not is_in_room:
                logger.info(
@@ -930,6 +932,7 @@ class FederationEventHandler:
        if len(events_with_failed_pull_attempts) > 0:
            run_as_background_process(
                "_process_new_pulled_events_with_failed_pull_attempts",
                self.server_name,
                _process_new_pulled_events,
                events_with_failed_pull_attempts,
            )
@@ -1523,6 +1526,7 @@ class FederationEventHandler:
        if resync:
            run_as_background_process(
                "resync_device_due_to_pdu",
                self.server_name,
                self._resync_device,
                event.sender,
            )
@@ -2049,7 +2053,9 @@ class FederationEventHandler:
                "hs": origin,
            },
        )
        soft_failed_event_counter.inc()
        soft_failed_event_counter.labels(
            **{SERVER_NAME_LABEL: self.server_name}
        ).inc()
        event.internal_metadata.soft_failed = True

    async def _load_or_fetch_auth_events_for_event(

@@ -92,6 +92,7 @@ class MessageHandler:
     """Contains some read only APIs to get state about a room"""
 
     def __init__(self, hs: "HomeServer"):
+        self.server_name = hs.hostname
         self.auth = hs.get_auth()
         self.clock = hs.get_clock()
         self.state = hs.get_state_handler()
@@ -107,7 +108,7 @@ class MessageHandler:
 
         if not hs.config.worker.worker_app:
             run_as_background_process(
-                "_schedule_next_expiry", self._schedule_next_expiry
+                "_schedule_next_expiry", self.server_name, self._schedule_next_expiry
             )
 
     async def get_room_data(
@@ -439,6 +440,7 @@ class MessageHandler:
             delay,
             run_as_background_process,
             "_expire_event",
+            self.server_name,
             self._expire_event,
             event_id,
         )
@@ -541,6 +543,7 @@ class EventCreationHandler:
         self.clock.looping_call(
             lambda: run_as_background_process(
                 "send_dummy_events_to_fill_extremities",
+                self.server_name,
                 self._send_dummy_events_to_fill_extremities,
             ),
             5 * 60 * 1000,
@@ -1942,6 +1945,7 @@ class EventCreationHandler:
         # matters as sometimes presence code can take a while.
         run_as_background_process(
             "bump_presence_active_time",
+            self.server_name,
             self._bump_active_time,
             requester.user,
             requester.device_id,
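The recurring change in the hunks above threads the homeserver's name into `run_as_background_process` as a new second positional argument, ahead of the callable. A toy sketch of that calling convention (the helper below is a stand-in that just records the server and runs the callable inline — not Synapse's real implementation, which schedules the task and labels its metrics per server):

```python
started = []


def run_as_background_process(desc, server_name, func, *args):
    # Toy stand-in: record which server the background task was started
    # for, then run the callable inline. The signature change (server_name
    # as the second positional argument) is the point being illustrated.
    started.append((desc, server_name))
    return func(*args)


def _schedule_next_expiry():
    return "scheduled"


result = run_as_background_process(
    "_schedule_next_expiry", "example.com", _schedule_next_expiry
)
```

With the server name captured at the call site, per-server metric labelling needs no global state.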
@@ -79,12 +79,12 @@ class PaginationHandler:
 
     def __init__(self, hs: "HomeServer"):
         self.hs = hs
+        self.server_name = hs.hostname
         self.auth = hs.get_auth()
         self.store = hs.get_datastores().main
         self._storage_controllers = hs.get_storage_controllers()
         self._state_storage_controller = self._storage_controllers.state
         self.clock = hs.get_clock()
-        self._server_name = hs.hostname
         self._room_shutdown_handler = hs.get_room_shutdown_handler()
         self._relations_handler = hs.get_relations_handler()
         self._worker_locks = hs.get_worker_locks_handler()
@@ -119,6 +119,7 @@ class PaginationHandler:
                 run_as_background_process,
                 job.interval,
                 "purge_history_for_rooms_in_range",
+                self.server_name,
                 self.purge_history_for_rooms_in_range,
                 job.shortest_max_lifetime,
                 job.longest_max_lifetime,
@@ -245,6 +246,7 @@ class PaginationHandler:
         # other purges in the same room.
         run_as_background_process(
             PURGE_HISTORY_ACTION_NAME,
+            self.server_name,
             self.purge_history,
             room_id,
             token,
@@ -395,7 +397,7 @@ class PaginationHandler:
             write=True,
         ):
             # first check that we have no users in this room
-            joined = await self.store.is_host_joined(room_id, self._server_name)
+            joined = await self.store.is_host_joined(room_id, self.server_name)
             if joined:
                 if force:
                     logger.info(
@@ -604,6 +606,7 @@ class PaginationHandler:
             # for a costly federation call and processing.
             run_as_background_process(
                 "maybe_backfill_in_the_background",
+                self.server_name,
                 self.hs.get_federation_handler().maybe_backfill,
                 room_id,
                 curr_topo,
@@ -105,7 +105,7 @@ from synapse.api.presence import UserDevicePresenceState, UserPresenceState
 from synapse.appservice import ApplicationService
 from synapse.events.presence_router import PresenceRouter
 from synapse.logging.context import run_in_background
-from synapse.metrics import LaterGauge
+from synapse.metrics import SERVER_NAME_LABEL, LaterGauge
 from synapse.metrics.background_process_metrics import (
     run_as_background_process,
     wrap_as_background_process,
@@ -137,24 +137,40 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)
 
 
-notified_presence_counter = Counter("synapse_handler_presence_notified_presence", "")
+notified_presence_counter = Counter(
+    "synapse_handler_presence_notified_presence", "", labelnames=[SERVER_NAME_LABEL]
+)
 federation_presence_out_counter = Counter(
-    "synapse_handler_presence_federation_presence_out", ""
+    "synapse_handler_presence_federation_presence_out",
+    "",
+    labelnames=[SERVER_NAME_LABEL],
 )
-presence_updates_counter = Counter("synapse_handler_presence_presence_updates", "")
-timers_fired_counter = Counter("synapse_handler_presence_timers_fired", "")
+presence_updates_counter = Counter(
+    "synapse_handler_presence_presence_updates", "", labelnames=[SERVER_NAME_LABEL]
+)
+timers_fired_counter = Counter(
+    "synapse_handler_presence_timers_fired", "", labelnames=[SERVER_NAME_LABEL]
+)
 federation_presence_counter = Counter(
-    "synapse_handler_presence_federation_presence", ""
+    "synapse_handler_presence_federation_presence", "", labelnames=[SERVER_NAME_LABEL]
 )
-bump_active_time_counter = Counter("synapse_handler_presence_bump_active_time", "")
+bump_active_time_counter = Counter(
+    "synapse_handler_presence_bump_active_time", "", labelnames=[SERVER_NAME_LABEL]
+)
 
-get_updates_counter = Counter("synapse_handler_presence_get_updates", "", ["type"])
+get_updates_counter = Counter(
+    "synapse_handler_presence_get_updates", "", labelnames=["type", SERVER_NAME_LABEL]
+)
 
 notify_reason_counter = Counter(
-    "synapse_handler_presence_notify_reason", "", ["locality", "reason"]
+    "synapse_handler_presence_notify_reason",
+    "",
+    labelnames=["locality", "reason", SERVER_NAME_LABEL],
 )
 state_transition_counter = Counter(
-    "synapse_handler_presence_state_transition", "", ["locality", "from", "to"]
+    "synapse_handler_presence_state_transition",
+    "",
+    labelnames=["locality", "from", "to", SERVER_NAME_LABEL],
 )
 
 # If a user was last active in the last LAST_ACTIVE_GRANULARITY, consider them
@@ -484,6 +500,7 @@ class _NullContextManager(ContextManager[None]):
 class WorkerPresenceHandler(BasePresenceHandler):
     def __init__(self, hs: "HomeServer"):
         super().__init__(hs)
+        self.server_name = hs.hostname
         self._presence_writer_instance = hs.config.worker.writers.presence[0]
 
         # Route presence EDUs to the right worker
@@ -517,6 +534,7 @@ class WorkerPresenceHandler(BasePresenceHandler):
             "shutdown",
             run_as_background_process,
             "generic_presence.on_shutdown",
+            self.server_name,
             self._on_shutdown,
         )
 
@@ -666,7 +684,9 @@ class WorkerPresenceHandler(BasePresenceHandler):
             old_state = self.user_to_current_state.get(new_state.user_id)
             self.user_to_current_state[new_state.user_id] = new_state
             is_mine = self.is_mine_id(new_state.user_id)
-            if not old_state or should_notify(old_state, new_state, is_mine):
+            if not old_state or should_notify(
+                old_state, new_state, is_mine, self.server_name
+            ):
                 state_to_notify.append(new_state)
 
         stream_id = token
@@ -747,7 +767,9 @@ class WorkerPresenceHandler(BasePresenceHandler):
 class PresenceHandler(BasePresenceHandler):
     def __init__(self, hs: "HomeServer"):
         super().__init__(hs)
-        self.server_name = hs.hostname
+        self.server_name = (
+            hs.hostname
+        )  # nb must be called this for @wrap_as_background_process
         self.wheel_timer: WheelTimer[str] = WheelTimer()
         self.notifier = hs.get_notifier()
 
@@ -815,6 +837,7 @@ class PresenceHandler(BasePresenceHandler):
             "shutdown",
             run_as_background_process,
             "presence.on_shutdown",
+            self.server_name,
             self._on_shutdown,
         )
 
@@ -972,6 +995,7 @@ class PresenceHandler(BasePresenceHandler):
                     prev_state,
                     new_state,
                     is_mine=self.is_mine_id(user_id),
+                    our_server_name=self.server_name,
                     wheel_timer=self.wheel_timer,
                     now=now,
                     # When overriding disabled presence, don't kick off all the
@@ -991,10 +1015,14 @@ class PresenceHandler(BasePresenceHandler):
 
         # TODO: We should probably ensure there are no races hereafter
 
-        presence_updates_counter.inc(len(new_states))
+        presence_updates_counter.labels(
+            **{SERVER_NAME_LABEL: self.server_name}
+        ).inc(len(new_states))
 
         if to_notify:
-            notified_presence_counter.inc(len(to_notify))
+            notified_presence_counter.labels(
+                **{SERVER_NAME_LABEL: self.server_name}
+            ).inc(len(to_notify))
             await self._persist_and_notify(list(to_notify.values()))
 
         self.unpersisted_users_changes |= {s.user_id for s in new_states}
@@ -1013,7 +1041,9 @@ class PresenceHandler(BasePresenceHandler):
             if user_id not in to_notify
         }
         if to_federation_ping:
-            federation_presence_out_counter.inc(len(to_federation_ping))
+            federation_presence_out_counter.labels(
+                **{SERVER_NAME_LABEL: self.server_name}
+            ).inc(len(to_federation_ping))
 
             hosts_to_states = await get_interested_remotes(
                 self.store,
@@ -1063,7 +1093,9 @@ class PresenceHandler(BasePresenceHandler):
                 for user_id in users_to_check
             ]
 
-            timers_fired_counter.inc(len(states))
+            timers_fired_counter.labels(**{SERVER_NAME_LABEL: self.server_name}).inc(
+                len(states)
+            )
 
             # Set of user ID & device IDs which are currently syncing.
             syncing_user_devices = {
@@ -1097,7 +1129,7 @@ class PresenceHandler(BasePresenceHandler):
 
         user_id = user.to_string()
 
-        bump_active_time_counter.inc()
+        bump_active_time_counter.labels(**{SERVER_NAME_LABEL: self.server_name}).inc()
 
         now = self.clock.time_msec()
 
@@ -1349,7 +1381,9 @@ class PresenceHandler(BasePresenceHandler):
                 updates.append(prev_state.copy_and_replace(**new_fields))
 
         if updates:
-            federation_presence_counter.inc(len(updates))
+            federation_presence_counter.labels(
+                **{SERVER_NAME_LABEL: self.server_name}
+            ).inc(len(updates))
             await self._update_states(updates)
 
     async def set_state(
@@ -1495,7 +1529,9 @@ class PresenceHandler(BasePresenceHandler):
             finally:
                 self._event_processing = False
 
-        run_as_background_process("presence.notify_new_event", _process_presence)
+        run_as_background_process(
+            "presence.notify_new_event", self.server_name, _process_presence
+        )
 
     async def _unsafe_process(self) -> None:
         # Loop round handling deltas until we're up to date
@@ -1660,7 +1696,10 @@ class PresenceHandler(BasePresenceHandler):
 
 
 def should_notify(
-    old_state: UserPresenceState, new_state: UserPresenceState, is_mine: bool
+    old_state: UserPresenceState,
+    new_state: UserPresenceState,
+    is_mine: bool,
+    our_server_name: str,
 ) -> bool:
     """Decides if a presence state change should be sent to interested parties."""
     user_location = "remote"
@@ -1671,19 +1710,38 @@ def should_notify(
         return False
 
     if old_state.status_msg != new_state.status_msg:
-        notify_reason_counter.labels(user_location, "status_msg_change").inc()
+        notify_reason_counter.labels(
+            locality=user_location,
+            reason="status_msg_change",
+            **{SERVER_NAME_LABEL: our_server_name},
+        ).inc()
         return True
 
     if old_state.state != new_state.state:
-        notify_reason_counter.labels(user_location, "state_change").inc()
+        notify_reason_counter.labels(
+            locality=user_location,
+            reason="state_change",
+            **{SERVER_NAME_LABEL: our_server_name},
+        ).inc()
         state_transition_counter.labels(
-            user_location, old_state.state, new_state.state
+            **{
+                "locality": user_location,
+                # `from` is a reserved word in Python so we have to label it this way if
+                # we want to use keyword args.
+                "from": old_state.state,
+                "to": new_state.state,
+                SERVER_NAME_LABEL: our_server_name,
+            },
         ).inc()
         return True
 
     if old_state.state == PresenceState.ONLINE:
         if new_state.currently_active != old_state.currently_active:
-            notify_reason_counter.labels(user_location, "current_active_change").inc()
+            notify_reason_counter.labels(
+                locality=user_location,
+                reason="current_active_change",
+                **{SERVER_NAME_LABEL: our_server_name},
+            ).inc()
             return True
 
         if (
@@ -1693,14 +1751,18 @@ def should_notify(
             # Only notify about last active bumps if we're not currently active
             if not new_state.currently_active:
                 notify_reason_counter.labels(
-                    user_location, "last_active_change_online"
+                    locality=user_location,
+                    reason="last_active_change_online",
+                    **{SERVER_NAME_LABEL: our_server_name},
                 ).inc()
                 return True
 
     elif new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
         # Always notify for a transition where last active gets bumped.
         notify_reason_counter.labels(
-            user_location, "last_active_change_not_online"
+            locality=user_location,
+            reason="last_active_change_not_online",
+            **{SERVER_NAME_LABEL: our_server_name},
         ).inc()
         return True
 
@@ -1767,6 +1829,7 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
+        self.server_name = hs.hostname
         self.get_presence_handler = hs.get_presence_handler
         self.get_presence_router = hs.get_presence_router
         self.server_name = hs.hostname
         self.clock = hs.get_clock()
         self.store = hs.get_datastores().main
 
@@ -1878,7 +1941,10 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
 
             # If we have the full list of changes for presence we can
             # simply check which ones share a room with the user.
-            get_updates_counter.labels("stream").inc()
+            get_updates_counter.labels(
+                type="stream",
+                **{SERVER_NAME_LABEL: self.server_name},
+            ).inc()
 
             sharing_users = await self.store.do_users_share_a_room(
                 user_id, updated_users
@@ -1891,7 +1957,10 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
         else:
             # Too many possible updates. Find all users we can see and check
             # if any of them have changed.
-            get_updates_counter.labels("full").inc()
+            get_updates_counter.labels(
+                type="full",
+                **{SERVER_NAME_LABEL: self.server_name},
+            ).inc()
 
             users_interested_in = (
                 await self.store.get_users_who_share_room_with_user(user_id)
@@ -2141,6 +2210,7 @@ def handle_update(
     prev_state: UserPresenceState,
     new_state: UserPresenceState,
     is_mine: bool,
+    our_server_name: str,
     wheel_timer: WheelTimer,
     now: int,
     persist: bool,
@@ -2153,6 +2223,7 @@ def handle_update(
         prev_state
         new_state
         is_mine: Whether the user is ours
+        our_server_name: The homeserver name of the our server (`hs.hostname`)
         wheel_timer
         now: Time now in ms
        persist: True if this state should persist until another update occurs.
@@ -2221,7 +2292,7 @@ def handle_update(
         )
 
     # Check whether the change was something worth notifying about
-    if should_notify(prev_state, new_state, is_mine):
+    if should_notify(prev_state, new_state, is_mine, our_server_name):
         new_state = new_state.copy_and_replace(last_federation_update_ts=now)
         persist_and_notify = True
 
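The `**{...}` unpacking used for `state_transition_counter` above is not just style: as the diff's own comment notes, `from` is a reserved word in Python, so it cannot appear as a literal keyword argument. A minimal stand-alone sketch of the workaround (the `labels` function here is an illustrative stand-in, not prometheus_client's):

```python
def labels(**kwargs):
    # Minimal stand-in for a metrics .labels(...) call: it just echoes
    # back the label mapping it was given.
    return dict(kwargs)


# Writing labels(from="online", ...) would be a SyntaxError, because `from`
# is a reserved word; unpacking a dict sidesteps that restriction while
# still passing the labels by name.
transition = labels(
    **{
        "locality": "local",
        "from": "online",
        "to": "unavailable",
        "server_name": "example.com",
    }
)
```

This is why the hunk switches that one counter to a dict literal while the others use plain keyword arguments.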
@@ -124,7 +124,7 @@ class ProfileHandler:
         except RequestSendFailed as e:
             raise SynapseError(502, "Failed to fetch profile") from e
         except HttpResponseException as e:
-            if e.code < 500 and e.code != 404:
+            if e.code < 500 and e.code not in (403, 404):
                 # Other codes are not allowed in c2s API
                 logger.info(
                     "Server replied with wrong response: %s %s", e.code, e.msg
@@ -45,6 +45,7 @@ from synapse.api.errors import (
 from synapse.appservice import ApplicationService
 from synapse.config.server import is_threepid_reserved
 from synapse.http.servlet import assert_params_in_dict
+from synapse.metrics import SERVER_NAME_LABEL
 from synapse.replication.http.login import RegisterDeviceReplicationServlet
 from synapse.replication.http.register import (
     ReplicationPostRegisterActionsServlet,
@@ -62,29 +63,38 @@ logger = logging.getLogger(__name__)
 registration_counter = Counter(
     "synapse_user_registrations_total",
     "Number of new users registered (since restart)",
-    ["guest", "shadow_banned", "auth_provider"],
+    labelnames=["guest", "shadow_banned", "auth_provider", SERVER_NAME_LABEL],
 )
 
 login_counter = Counter(
     "synapse_user_logins_total",
     "Number of user logins (since restart)",
-    ["guest", "auth_provider"],
+    labelnames=["guest", "auth_provider", SERVER_NAME_LABEL],
 )
 
 
-def init_counters_for_auth_provider(auth_provider_id: str) -> None:
+def init_counters_for_auth_provider(auth_provider_id: str, server_name: str) -> None:
     """Ensure the prometheus counters for the given auth provider are initialised
 
     This fixes a problem where the counters are not reported for a given auth provider
     until the user first logs in/registers.
+
+    Args:
+        auth_provider_id: The ID of the auth provider to initialise counters for.
+        server_name: Our server name (used to label metrics) (this should be `hs.hostname`).
     """
     for is_guest in (True, False):
-        login_counter.labels(guest=is_guest, auth_provider=auth_provider_id)
+        login_counter.labels(
+            guest=is_guest,
+            auth_provider=auth_provider_id,
+            **{SERVER_NAME_LABEL: server_name},
+        )
         for shadow_banned in (True, False):
             registration_counter.labels(
                 guest=is_guest,
                 shadow_banned=shadow_banned,
                 auth_provider=auth_provider_id,
+                **{SERVER_NAME_LABEL: server_name},
             )
 
 
@@ -97,6 +107,7 @@ class LoginDict(TypedDict):
 
 class RegistrationHandler:
     def __init__(self, hs: "HomeServer"):
+        self.server_name = hs.hostname
         self.store = hs.get_datastores().main
         self._storage_controllers = hs.get_storage_controllers()
         self.clock = hs.get_clock()
@@ -112,7 +123,6 @@ class RegistrationHandler:
         self._account_validity_handler = hs.get_account_validity_handler()
         self._user_consent_version = self.hs.config.consent.user_consent_version
         self._server_notices_mxid = hs.config.servernotices.server_notices_mxid
-        self._server_name = hs.hostname
         self._user_types_config = hs.config.user_types
 
         self._spam_checker_module_callbacks = hs.get_module_api_callbacks().spam_checker
@@ -138,7 +148,9 @@ class RegistrationHandler:
         )
         self.refresh_token_lifetime = hs.config.registration.refresh_token_lifetime
 
-        init_counters_for_auth_provider("")
+        init_counters_for_auth_provider(
+            auth_provider_id="", server_name=self.server_name
+        )
 
     async def check_username(
         self,
@@ -362,6 +374,7 @@ class RegistrationHandler:
             guest=make_guest,
             shadow_banned=shadow_banned,
             auth_provider=(auth_provider_id or ""),
+            **{SERVER_NAME_LABEL: self.server_name},
         ).inc()
 
         # If the user does not need to consent at registration, auto-join any
@@ -422,7 +435,7 @@
         if self.hs.config.registration.auto_join_user_id:
             fake_requester = create_requester(
                 self.hs.config.registration.auto_join_user_id,
-                authenticated_entity=self._server_name,
+                authenticated_entity=self.server_name,
             )
 
             # If the room requires an invite, add the user to the list of invites.
@@ -435,7 +448,7 @@
             requires_join = True
         else:
             fake_requester = create_requester(
-                user_id, authenticated_entity=self._server_name
+                user_id, authenticated_entity=self.server_name
             )
 
         # Choose whether to federate the new room.
@@ -467,7 +480,7 @@
 
                 await room_member_handler.update_membership(
                     requester=create_requester(
-                        user_id, authenticated_entity=self._server_name
+                        user_id, authenticated_entity=self.server_name
                     ),
                     target=UserID.from_string(user_id),
                     room_id=room_id,
@@ -493,7 +506,7 @@
             if requires_join:
                 await room_member_handler.update_membership(
                     requester=create_requester(
-                        user_id, authenticated_entity=self._server_name
+                        user_id, authenticated_entity=self.server_name
                     ),
                     target=UserID.from_string(user_id),
                     room_id=room_id,
@@ -539,7 +552,7 @@
             # we don't have a local user in the room to craft up an invite with.
             requires_invite = await self.store.is_host_joined(
                 room_id,
-                self._server_name,
+                self.server_name,
             )
 
             if requires_invite:
@@ -567,7 +580,7 @@
                 await room_member_handler.update_membership(
                     requester=create_requester(
                         self.hs.config.registration.auto_join_user_id,
-                        authenticated_entity=self._server_name,
+                        authenticated_entity=self.server_name,
                     ),
                     target=UserID.from_string(user_id),
                     room_id=room_id,
@@ -579,7 +592,7 @@
             # Send the join.
             await room_member_handler.update_membership(
                 requester=create_requester(
-                    user_id, authenticated_entity=self._server_name
+                    user_id, authenticated_entity=self.server_name
                 ),
                 target=UserID.from_string(user_id),
                 room_id=room_id,
@@ -790,6 +803,7 @@
         login_counter.labels(
             guest=is_guest,
             auth_provider=(auth_provider_id or ""),
+            **{SERVER_NAME_LABEL: self.server_name},
        ).inc()
 
         return (
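As the docstring above explains, `init_counters_for_auth_provider` exists because a labelled time series only shows up in the metrics export once its exact label combination has been touched; pre-touching every combination makes zero-activity providers visible from startup. A dict-based toy of that registry behaviour (illustrative only, not prometheus_client's internals):

```python
series = {}


def touch_labels(**kwargs):
    # Touching a label combination creates its series at 0 without
    # incrementing it, mirroring Counter.labels(...) with no .inc().
    key = tuple(sorted(kwargs.items()))
    series.setdefault(key, 0)


def init_counters_for_auth_provider(auth_provider_id, server_name):
    # Pre-create the guest/non-guest series for this provider so they
    # report 0 instead of being absent until the first login.
    for is_guest in (True, False):
        touch_labels(
            guest=is_guest,
            auth_provider=auth_provider_id,
            server_name=server_name,
        )


init_counters_for_auth_provider("oidc", "example.com")
```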
@@ -66,6 +66,7 @@ from synapse.api.errors import (
     SynapseError,
 )
 from synapse.api.filtering import Filter
+from synapse.api.ratelimiting import Ratelimiter
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
 from synapse.event_auth import validate_event_for_room_version
 from synapse.events import EventBase
@@ -131,7 +132,12 @@ class RoomCreationHandler:
         self.room_member_handler = hs.get_room_member_handler()
         self._event_auth_handler = hs.get_event_auth_handler()
         self.config = hs.config
-        self.request_ratelimiter = hs.get_request_ratelimiter()
+        self.common_request_ratelimiter = hs.get_request_ratelimiter()
+        self.creation_ratelimiter = Ratelimiter(
+            store=self.store,
+            clock=self.clock,
+            cfg=self.config.ratelimiting.rc_room_creation,
+        )
 
         # Room state based off defined presets
         self._presets_dict: Dict[str, Dict[str, Any]] = {
@@ -203,7 +209,11 @@
         Raises:
             ShadowBanError if the requester is shadow-banned.
         """
-        await self.request_ratelimiter.ratelimit(requester)
+        await self.creation_ratelimiter.ratelimit(requester, update=False)
+
+        # then apply the ratelimits
+        await self.common_request_ratelimiter.ratelimit(requester)
+        await self.creation_ratelimiter.ratelimit(requester)
 
         user_id = requester.user.to_string()
 
@@ -809,11 +819,23 @@
         )
 
         if ratelimit:
-            # Rate limit once in advance, but don't rate limit the individual
-            # events in the room — room creation isn't atomic and it's very
-            # janky if half the events in the initial state don't make it because
-            # of rate limiting.
-            await self.request_ratelimiter.ratelimit(requester)
+            # Limit the rate of room creations,
+            # using both the limiter specific to room creations as well
+            # as the general request ratelimiter.
+            #
+            # Note that we don't rate limit the individual
+            # events in the room — room creation isn't atomic and
+            # historically it was very janky if half the events in the
+            # initial state don't make it because of rate limiting.
+
+            # First check the room creation ratelimiter without updating it
+            # (this is so we don't consume a token if the other ratelimiter doesn't
+            # allow us to proceed)
+            await self.creation_ratelimiter.ratelimit(requester, update=False)
+
+            # then apply the ratelimits
+            await self.common_request_ratelimiter.ratelimit(requester)
+            await self.creation_ratelimiter.ratelimit(requester)
 
         room_version_id = config.get(
             "room_version", self.config.server.default_room_version.identifier
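The two-stage pattern above — probe the stricter limiter with `update=False`, then apply both limiters for real — avoids burning a token from one limiter when the other would reject the request anyway. A toy token-bucket sketch of the idea (the class and its synchronous `ratelimit` signature are illustrative stand-ins, not Synapse's async `Ratelimiter` API):

```python
class TokenBucket:
    """Toy ratelimiter sketching the `update=False` pre-check used above."""

    def __init__(self, burst: int) -> None:
        self.tokens = burst

    def ratelimit(self, *, update: bool = True) -> None:
        if self.tokens <= 0:
            raise RuntimeError("rate limited")
        if update:
            # Only consume a token when actually recording the action.
            self.tokens -= 1


creation = TokenBucket(burst=1)
common = TokenBucket(burst=5)

# First check the stricter limiter without consuming a token, so a rejection
# by the general limiter wouldn't have cost the caller a creation token...
creation.ratelimit(update=False)
# ...then record the action against both limiters.
common.ratelimit()
creation.ratelimit()
```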
@@ -2164,6 +2164,7 @@ class RoomForgetterHandler(StateDeltasHandler):
         super().__init__(hs)
 
         self._hs = hs
+        self.server_name = hs.hostname
         self._store = hs.get_datastores().main
         self._storage_controllers = hs.get_storage_controllers()
         self._clock = hs.get_clock()
@@ -2195,7 +2196,9 @@
             finally:
                 self._is_processing = False
 
-        run_as_background_process("room_forgetter.notify_new_event", process)
+        run_as_background_process(
+            "room_forgetter.notify_new_event", self.server_name, process
+        )
 
     async def _unsafe_process(self) -> None:
         # If self.pos is None then means we haven't fetched it from DB
@@ -202,7 +202,7 @@ class SsoHandler:
     def __init__(self, hs: "HomeServer"):
         self._clock = hs.get_clock()
         self._store = hs.get_datastores().main
-        self._server_name = hs.hostname
+        self.server_name = hs.hostname
         self._is_mine_server_name = hs.is_mine_server_name
         self._registration_handler = hs.get_registration_handler()
         self._auth_handler = hs.get_auth_handler()
@@ -238,7 +238,9 @@
         p_id = p.idp_id
         assert p_id not in self._identity_providers
         self._identity_providers[p_id] = p
-        init_counters_for_auth_provider(p_id)
+        init_counters_for_auth_provider(
+            auth_provider_id=p_id, server_name=self.server_name
+        )
 
     def get_identity_providers(self) -> Mapping[str, SsoIdentityProvider]:
         """Get the configured identity providers"""
@@ -569,7 +571,7 @@
             return attributes
 
         # Check if this mxid already exists
-        user_id = UserID(attributes.localpart, self._server_name).to_string()
+        user_id = UserID(attributes.localpart, self.server_name).to_string()
         if not await self._store.get_users_by_id_case_insensitive(user_id):
             # This mxid is free
             break
@@ -907,7 +909,7 @@
 
         # render an error page.
         html = self._bad_user_template.render(
-            server_name=self._server_name,
+            server_name=self.server_name,
             user_id_to_verify=user_id_to_verify,
         )
         respond_with_html(request, 200, html)
@@ -959,7 +961,7 @@
 
         if contains_invalid_mxid_characters(localpart):
             raise SynapseError(400, "localpart is invalid: %s" % (localpart,))
-        user_id = UserID(localpart, self._server_name).to_string()
+        user_id = UserID(localpart, self.server_name).to_string()
         user_infos = await self._store.get_users_by_id_case_insensitive(user_id)
 
         logger.info("[session %s] users: %s", session_id, user_infos)
@@ -54,6 +54,7 @@ class StatsHandler:
 
     def __init__(self, hs: "HomeServer"):
         self.hs = hs
+        self.server_name = hs.hostname
         self.store = hs.get_datastores().main
         self._storage_controllers = hs.get_storage_controllers()
         self.state = hs.get_state_handler()
@@ -89,7 +90,7 @@
             finally:
                 self._is_processing = False
 
-        run_as_background_process("stats.notify_new_event", process)
+        run_as_background_process("stats.notify_new_event", self.server_name, process)
 
     async def _unsafe_process(self) -> None:
         # If self.pos is None then means we haven't fetched it from DB
@@ -63,6 +63,7 @@ from synapse.logging.opentracing import (
     start_active_span,
     trace,
 )
+from synapse.metrics import SERVER_NAME_LABEL
 from synapse.storage.databases.main.event_push_actions import RoomNotifCounts
 from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
 from synapse.storage.databases.main.stream import PaginateFunction
@@ -104,7 +105,7 @@ non_empty_sync_counter = Counter(
     "Count of non empty sync responses. type is initial_sync/full_state_sync"
     "/incremental_sync. lazy_loaded indicates if lazy loaded members were "
     "enabled for that request.",
-    ["type", "lazy_loaded"],
+    labelnames=["type", "lazy_loaded", SERVER_NAME_LABEL],
 )
 
 # Store the cache that tracks which lazy-loaded members have been sent to a given
@@ -614,7 +615,11 @@ class SyncHandler:
             lazy_loaded = "true"
         else:
             lazy_loaded = "false"
-        non_empty_sync_counter.labels(sync_label, lazy_loaded).inc()
+        non_empty_sync_counter.labels(
+            type=sync_label,
+            lazy_loaded=lazy_loaded,
+            **{SERVER_NAME_LABEL: self.server_name},
+        ).inc()
 
         return result
synapse/handlers/thread_subscriptions.py (new file, 126 lines)
@@ -0,0 +1,126 @@
+import logging
+from typing import TYPE_CHECKING, Optional
+
+from synapse.api.errors import AuthError, NotFoundError
+from synapse.storage.databases.main.thread_subscriptions import ThreadSubscription
+from synapse.types import UserID
+
+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
+logger = logging.getLogger(__name__)
+
+
+class ThreadSubscriptionsHandler:
+    def __init__(self, hs: "HomeServer"):
+        self.store = hs.get_datastores().main
+        self.event_handler = hs.get_event_handler()
+        self.auth = hs.get_auth()
+
+    async def get_thread_subscription_settings(
+        self,
+        user_id: UserID,
+        room_id: str,
+        thread_root_event_id: str,
+    ) -> Optional[ThreadSubscription]:
+        """Get thread subscription settings for a specific thread and user.
+        Checks that the thread root is both a real event and also that it is visible
+        to the user.
+
+        Args:
+            user_id: The ID of the user
+            thread_root_event_id: The event ID of the thread root
+
+        Returns:
+            A `ThreadSubscription` containing the active subscription settings or None if not set
+        """
+        # First check that the user can access the thread root event
+        # and that it exists
+        try:
+            event = await self.event_handler.get_event(
+                user_id, room_id, thread_root_event_id
+            )
+            if event is None:
+                raise NotFoundError("No such thread root")
+        except AuthError:
+            raise NotFoundError("No such thread root")
+
+        return await self.store.get_subscription_for_thread(
+            user_id.to_string(), event.room_id, thread_root_event_id
+        )
+
+    async def subscribe_user_to_thread(
+        self,
+        user_id: UserID,
+        room_id: str,
+        thread_root_event_id: str,
+        *,
+        automatic: bool,
+    ) -> Optional[int]:
+        """Sets or updates a user's subscription settings for a specific thread root.
+
+        Args:
+            requester_user_id: The ID of the user whose settings are being updated.
+            thread_root_event_id: The event ID of the thread root.
+            automatic: whether the user was subscribed by an automatic decision by
+                their client.
+
+        Returns:
+            The stream ID for this update, if the update isn't no-opped.
+
+        Raises:
+            NotFoundError if the user cannot access the thread root event, or it isn't
+                known to this homeserver.
+        """
+        # First check that the user can access the thread root event
+        # and that it exists
+        try:
+            event = await self.event_handler.get_event(
+                user_id, room_id, thread_root_event_id
+            )
+            if event is None:
+                raise NotFoundError("No such thread root")
+        except AuthError:
+            logger.info("rejecting thread subscriptions change (thread not accessible)")
+            raise NotFoundError("No such thread root")
+
+        return await self.store.subscribe_user_to_thread(
+            user_id.to_string(),
+            event.room_id,
+            thread_root_event_id,
+            automatic=automatic,
+        )
+
+    async def unsubscribe_user_from_thread(
+        self, user_id: UserID, room_id: str, thread_root_event_id: str
+    ) -> Optional[int]:
+        """Clears a user's subscription settings for a specific thread root.
+
+        Args:
+            requester_user_id: The ID of the user whose settings are being updated.
+            thread_root_event_id: The event ID of the thread root.
+
+        Returns:
+            The stream ID for this update, if the update isn't no-opped.
+
+        Raises:
+            NotFoundError if the user cannot access the thread root event, or it isn't
+                known to this homeserver.
+        """
+        # First check that the user can access the thread root event
+        # and that it exists
+        try:
+            event = await self.event_handler.get_event(
+                user_id, room_id, thread_root_event_id
+            )
+            if event is None:
+                raise NotFoundError("No such thread root")
|
||||
except AuthError:
|
||||
logger.info("rejecting thread subscriptions change (thread not accessible)")
|
||||
raise NotFoundError("No such thread root")
|
||||
|
||||
return await self.store.unsubscribe_user_from_thread(
|
||||
user_id.to_string(),
|
||||
event.room_id,
|
||||
thread_root_event_id,
|
||||
)
|
||||
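The handler above defers persistence to three datastore methods and only adds the visibility check in front of them. The sketch below is a toy in-memory stand-in for that store contract (the class name, key shape, and stream-ID bookkeeping here are illustrative assumptions, not the real Synapse datastore), showing the subscribe/lookup/unsubscribe flow and the `Optional[int]` no-op return:

```python
import asyncio
from dataclasses import dataclass
from typing import Dict, Optional, Tuple


@dataclass
class ThreadSubscription:
    automatic: bool


class InMemoryThreadSubscriptionStore:
    """Toy stand-in for the datastore methods the handler calls."""

    def __init__(self) -> None:
        # (user_id, room_id, thread_root_event_id) -> subscription settings
        self._subs: Dict[Tuple[str, str, str], ThreadSubscription] = {}
        self._stream_id = 0

    async def subscribe_user_to_thread(
        self, user_id: str, room_id: str, thread_root_event_id: str, *, automatic: bool
    ) -> Optional[int]:
        key = (user_id, room_id, thread_root_event_id)
        if self._subs.get(key) == ThreadSubscription(automatic=automatic):
            return None  # no-op: settings unchanged, no stream ID allocated
        self._subs[key] = ThreadSubscription(automatic=automatic)
        self._stream_id += 1
        return self._stream_id

    async def get_subscription_for_thread(
        self, user_id: str, room_id: str, thread_root_event_id: str
    ) -> Optional[ThreadSubscription]:
        return self._subs.get((user_id, room_id, thread_root_event_id))

    async def unsubscribe_user_from_thread(
        self, user_id: str, room_id: str, thread_root_event_id: str
    ) -> Optional[int]:
        if self._subs.pop((user_id, room_id, thread_root_event_id), None) is None:
            return None  # no-op: nothing to clear
        self._stream_id += 1
        return self._stream_id


async def main() -> None:
    store = InMemoryThreadSubscriptionStore()
    print(await store.subscribe_user_to_thread("@a:hs", "!r:hs", "$root", automatic=False))  # 1
    print(await store.subscribe_user_to_thread("@a:hs", "!r:hs", "$root", automatic=False))  # None
    print(await store.unsubscribe_user_from_thread("@a:hs", "!r:hs", "$root"))  # 2


asyncio.run(main())
```

The identical second subscribe is swallowed as a no-op, matching the "if the update isn't no-opped" wording in the docstrings.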
@@ -80,7 +80,9 @@ class FollowerTypingHandler:
     def __init__(self, hs: "HomeServer"):
         self.store = hs.get_datastores().main
         self._storage_controllers = hs.get_storage_controllers()
-        self.server_name = hs.config.server.server_name
+        self.server_name = (
+            hs.hostname
+        )  # nb must be called this for @wrap_as_background_process
         self.clock = hs.get_clock()
         self.is_mine_id = hs.is_mine_id
         self.is_mine_server_name = hs.is_mine_server_name
@@ -143,7 +145,11 @@ class FollowerTypingHandler:
         last_fed_poke = self._member_last_federation_poke.get(member, None)
         if not last_fed_poke or last_fed_poke + FEDERATION_PING_INTERVAL <= now:
             run_as_background_process(
-                "typing._push_remote", self._push_remote, member=member, typing=True
+                "typing._push_remote",
+                self.server_name,
+                self._push_remote,
+                member=member,
+                typing=True,
             )

             # Add a paranoia timer to ensure that we always have a timer for
@@ -216,6 +222,7 @@ class FollowerTypingHandler:
         if self.federation:
             run_as_background_process(
                 "_send_changes_in_typing_to_remotes",
+                self.server_name,
                 self._send_changes_in_typing_to_remotes,
                 row.room_id,
                 prev_typing,
@@ -378,7 +385,11 @@ class TypingWriterHandler(FollowerTypingHandler):
         if self.hs.is_mine_id(member.user_id):
             # Only send updates for changes to our own users.
             run_as_background_process(
-                "typing._push_remote", self._push_remote, member, typing
+                "typing._push_remote",
+                self.server_name,
+                self._push_remote,
+                member,
+                typing,
             )

         self._push_update_local(member=member, typing=typing)
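The hunks above all make the same change: `run_as_background_process` now takes the owning server's name as a second positional argument, ahead of the callable, so background-process metrics can be labelled per server. A toy sketch of that calling convention (a hypothetical mini wrapper, not Synapse's implementation):

```python
import asyncio
from collections import Counter
from typing import Any, Awaitable, Callable, Tuple

# Counts background-process starts keyed by (description, server name),
# standing in for the per-server metrics that motivate the new argument.
background_process_starts: Counter = Counter()


def run_as_background_process(
    desc: str,
    server_name: str,
    func: Callable[..., Awaitable[Any]],
    *args: Any,
    **kwargs: Any,
) -> "asyncio.Task":
    """Hypothetical mini version of the wrapper: record a labelled start,
    then schedule the coroutine as a task."""
    background_process_starts[(desc, server_name)] += 1
    return asyncio.ensure_future(func(*args, **kwargs))


async def _push_remote(member: str, typing: bool) -> Tuple[str, bool]:
    return (member, typing)


async def main() -> Tuple[str, bool]:
    task = run_as_background_process(
        "typing._push_remote",
        "hs1.example.com",  # hypothetical homeserver name
        _push_remote,
        member="@alice:hs1.example.com",
        typing=True,
    )
    return await task


result = asyncio.run(main())
print(result)
print(background_process_starts[("typing._push_remote", "hs1.example.com")])
```

Each call site in the diff follows this shape exactly: description first, then `self.server_name`, then the function and its arguments.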
@@ -192,7 +192,9 @@ class UserDirectoryHandler(StateDeltasHandler):
             self._is_processing = False

         self._is_processing = True
-        run_as_background_process("user_directory.notify_new_event", process)
+        run_as_background_process(
+            "user_directory.notify_new_event", self.server_name, process
+        )

     async def handle_local_profile_change(
         self, user_id: str, profile: ProfileInfo
@@ -606,7 +608,9 @@ class UserDirectoryHandler(StateDeltasHandler):
             self._is_refreshing_remote_profiles = False

         self._is_refreshing_remote_profiles = True
-        run_as_background_process("user_directory.refresh_remote_profiles", process)
+        run_as_background_process(
+            "user_directory.refresh_remote_profiles", self.server_name, process
+        )

     async def _unsafe_refresh_remote_profiles(self) -> None:
         limit = MAX_SERVERS_TO_REFRESH_PROFILES_FOR_IN_ONE_GO - len(
@@ -688,7 +692,9 @@ class UserDirectoryHandler(StateDeltasHandler):

         self._is_refreshing_remote_profiles_for_servers.add(server_name)
         run_as_background_process(
-            "user_directory.refresh_remote_profiles_for_remote_server", process
+            "user_directory.refresh_remote_profiles_for_remote_server",
+            self.server_name,
+            process,
         )

     async def _unsafe_refresh_remote_profiles_for_remote_server(
@@ -66,6 +66,9 @@ class WorkerLocksHandler:
    """

    def __init__(self, hs: "HomeServer") -> None:
+        self.server_name = (
+            hs.hostname
+        )  # nb must be called this for @wrap_as_background_process
        self._reactor = hs.get_reactor()
        self._store = hs.get_datastores().main
        self._clock = hs.get_clock()
@@ -85,6 +85,7 @@ from synapse.http.replicationagent import ReplicationAgent
 from synapse.http.types import QueryParams
 from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.logging.opentracing import set_tag, start_active_span, tags
+from synapse.metrics import SERVER_NAME_LABEL
 from synapse.types import ISynapseReactor, StrSequence
 from synapse.util import json_decoder
 from synapse.util.async_helpers import timeout_deferred
@@ -108,9 +109,13 @@ except ImportError:

 logger = logging.getLogger(__name__)

-outgoing_requests_counter = Counter("synapse_http_client_requests", "", ["method"])
+outgoing_requests_counter = Counter(
+    "synapse_http_client_requests", "", labelnames=["method", SERVER_NAME_LABEL]
+)
 incoming_responses_counter = Counter(
-    "synapse_http_client_responses", "", ["method", "code"]
+    "synapse_http_client_responses",
+    "",
+    labelnames=["method", "code", SERVER_NAME_LABEL],
 )

 # the type of the headers map, to be passed to the t.w.h.Headers.
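The counter change above attaches a per-server label using `prometheus_client`'s `labelnames`; because the label name lives in a constant, later call sites unpack it with `**{...}` rather than writing a literal keyword argument. A small self-contained sketch of the same pattern, assuming `SERVER_NAME_LABEL` has the value `"server_name"` (the real constant lives in `synapse.metrics`):

```python
from prometheus_client import CollectorRegistry, Counter

SERVER_NAME_LABEL = "server_name"  # assumed value of the synapse.metrics constant

registry = CollectorRegistry()
outgoing_requests_counter = Counter(
    "synapse_http_client_requests",
    "Outgoing HTTP client requests",
    labelnames=["method", SERVER_NAME_LABEL],
    registry=registry,
)

# The **{...} unpacking mirrors the diff: the label name is a constant,
# so it cannot appear as a literal keyword argument.
outgoing_requests_counter.labels(
    method="GET", **{SERVER_NAME_LABEL: "hs1.example.com"}
).inc()

value = registry.get_sample_value(
    "synapse_http_client_requests_total",
    {"method": "GET", SERVER_NAME_LABEL: "hs1.example.com"},
)
print(value)  # 1.0
```

Note that `prometheus_client` exposes a `Counter` as `<name>_total`, which is why the sample lookup appends the suffix.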
@@ -346,6 +351,7 @@ class BaseHttpClient:
         treq_args: Optional[Dict[str, Any]] = None,
     ):
         self.hs = hs
+        self.server_name = hs.hostname
         self.reactor = hs.get_reactor()

         self._extra_treq_args = treq_args or {}
@@ -384,7 +390,9 @@ class BaseHttpClient:
             RequestTimedOutError if the request times out before the headers are read

         """
-        outgoing_requests_counter.labels(method).inc()
+        outgoing_requests_counter.labels(
+            method=method, **{SERVER_NAME_LABEL: self.server_name}
+        ).inc()

         # log request but strip `access_token` (AS requests for example include this)
         logger.debug("Sending request %s %s", method, redact_uri(uri))
@@ -438,7 +446,11 @@ class BaseHttpClient:

             response = await make_deferred_yieldable(request_deferred)

-            incoming_responses_counter.labels(method, response.code).inc()
+            incoming_responses_counter.labels(
+                method=method,
+                code=response.code,
+                **{SERVER_NAME_LABEL: self.server_name},
+            ).inc()
             logger.info(
                 "Received response to %s %s: %s",
                 method,
@@ -447,7 +459,11 @@ class BaseHttpClient:
             )
             return response
         except Exception as e:
-            incoming_responses_counter.labels(method, "ERR").inc()
+            incoming_responses_counter.labels(
+                method=method,
+                code="ERR",
+                **{SERVER_NAME_LABEL: self.server_name},
+            ).inc()
             logger.info(
                 "Error sending request to %s %s: %s %s",
                 method,
@@ -821,12 +837,12 @@ class SimpleHttpClient(BaseHttpClient):
         pool.cachedConnectionTimeout = 2 * 60

         self.agent: IAgent = ProxyAgent(
-            self.reactor,
-            hs.get_reactor(),
+            reactor=self.reactor,
+            proxy_reactor=hs.get_reactor(),
             connectTimeout=15,
             contextFactory=self.hs.get_http_client_context_factory(),
             pool=pool,
             use_proxy=use_proxy,
             proxy_config=hs.config.server.proxy_config,
         )

         if self._ip_blocklist:
@@ -855,6 +871,7 @@ class ReplicationClient(BaseHttpClient):
             hs: The HomeServer instance to pass in
         """
         super().__init__(hs)
+        self.server_name = hs.hostname

         # Use a pool, but a very small one.
         pool = HTTPConnectionPool(self.reactor)
@@ -891,7 +908,9 @@ class ReplicationClient(BaseHttpClient):
             RequestTimedOutError if the request times out before the headers are read

         """
-        outgoing_requests_counter.labels(method).inc()
+        outgoing_requests_counter.labels(
+            method=method, **{SERVER_NAME_LABEL: self.server_name}
+        ).inc()

         logger.debug("Sending request %s %s", method, uri)

@@ -948,7 +967,11 @@ class ReplicationClient(BaseHttpClient):

             response = await make_deferred_yieldable(request_deferred)

-            incoming_responses_counter.labels(method, response.code).inc()
+            incoming_responses_counter.labels(
+                method=method,
+                code=response.code,
+                **{SERVER_NAME_LABEL: self.server_name},
+            ).inc()
             logger.info(
                 "Received response to %s %s: %s",
                 method,
@@ -957,7 +980,11 @@ class ReplicationClient(BaseHttpClient):
             )
             return response
         except Exception as e:
-            incoming_responses_counter.labels(method, "ERR").inc()
+            incoming_responses_counter.labels(
+                method=method,
+                code="ERR",
+                **{SERVER_NAME_LABEL: self.server_name},
+            ).inc()
             logger.info(
                 "Error sending request to %s %s: %s %s",
                 method,