Compare commits


49 Commits

Author SHA1 Message Date
David Robertson
551b28f307 lint, but mypy sad 2021-11-02 14:05:35 +00:00
David Robertson
263de78dff dumy changelog screw you I just want some CI 2021-11-02 13:57:05 +00:00
David Robertson
54cd97b012 Experimental attempt to validate with attrs
will sytest spot bugs?
2021-11-02 13:07:05 +00:00
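The commit above is terse; for readers unfamiliar with `attrs`-based validation, here is a minimal, purely illustrative sketch (the class and field names are invented, not taken from this branch) of how `attrs` validators reject malformed values at construction time:

```python
# Illustrative only: attrs validators make malformed data fail fast at
# construction time instead of surfacing later (e.g. under sytest).
import attr

@attr.s(slots=True, frozen=True, auto_attribs=True)
class EventSignature:
    server_name: str = attr.ib(validator=attr.validators.instance_of(str))
    key_id: str = attr.ib(validator=attr.validators.instance_of(str))

EventSignature(server_name="example.com", key_id="ed25519:abc")   # ok
# EventSignature(server_name=1, key_id="ed25519:abc")             # raises TypeError
```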
Dirk Klimpel
caa706d825 Fix a bug in unit test test_block_room_and_not_purge (#11226) 2021-11-01 16:10:09 +00:00
reivilibre
69ab3dddbc Make check_event_allowed module API callback not fail open (accept events) when an exception is raised (#11033) 2021-11-01 15:45:56 +00:00
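A minimal sketch of the fail-closed behaviour this commit describes (the names and structure here are assumed for illustration; the documentation change to the third-party rules callbacks further down is the authoritative description):

```python
# Sketch only: if the module callback raises, propagate the error (the
# request typically surfaces a 500) rather than falling back to
# accepting the event.
import logging
from typing import Any, Awaitable, Callable, Dict, Optional, Tuple

logger = logging.getLogger(__name__)

CheckEventAllowed = Callable[
    [dict, dict], Awaitable[Tuple[bool, Optional[Dict[str, Any]]]]
]

async def run_check_event_allowed(
    callback: CheckEventAllowed, event: dict, state: dict
) -> Tuple[bool, Optional[Dict[str, Any]]]:
    try:
        return await callback(event, state)
    except Exception:
        logger.exception("check_event_allowed module callback failed")
        # Fail closed: do not accept the event on module failure.
        raise
```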
Dirk Klimpel
66bdca3e31 Remove deprecated delete room admin API (#11213)
Remove deprecated delete room admin API,
`POST /_synapse/admin/v1/rooms/<room_id>/delete`
2021-11-01 15:11:24 +00:00
Richard van der Hoff
71f9966f27 Support for serving server well-known files (#11211)
Fixes https://github.com/matrix-org/synapse/issues/8308
2021-11-01 15:10:16 +00:00
Brett Bethke
2014098d01 Add domain specific matching for haproxy config (#11128) 2021-11-01 14:16:02 +00:00
Richard van der Hoff
0b99d4c8d2 Docker: avoid changing userid unnecessarily (#11209)
* Docker image: avoid changing user during `generate`

The intention was always that the config files get written as the initial user
(normally root) - only the data directory needs to be writable by Synapse. This
got changed in https://github.com/matrix-org/synapse/pull/5970, but that seems
to have been a mistake.

* Avoid changing user if no explicit UID is given

* changelog
2021-11-01 13:55:30 +00:00
Aaron R
3ae1464efd Support Client-Server API r0.6.1 (#11097)
Fixes #11064

Signed-off-by: Aaron Raimist <aaron@raim.ist>
2021-11-01 13:28:39 +00:00
Sumner Evans
ece84f2c45 Improve code formatting and fix a few typos in docs (#11221)
* Labeled a lot more code blocks with the appropriate type
* Fixed a couple of minor typos (missing/extraneous commas)

Signed-off-by: Sumner Evans <me@sumnerevans.com>
2021-11-01 11:35:55 +00:00
Erik Johnston
82d2168a15 Add metrics to the threadpools (#11178) 2021-11-01 11:21:36 +00:00
Sean Quah
2451003f6f Test that ClientIpStore combines database and in-memory data correctly (#11179) 2021-11-01 11:20:54 +00:00
JohannesKleine
29ffd680bf Stop synapse from saving messages in device_inbox for hidden devices. (#10097)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2021-11-01 10:40:41 +00:00
Brendan Abolivier
e320f5dba3 Deprecate user_may_create_room_with_invites (#11206) 2021-11-01 10:46:08 +01:00
Dirk Klimpel
bfd7a9b65c Fix comments referencing v1.46.0 from PR #10969. (#11212)
#10969 was merged after 1.46.0rc1 was cut and will be included
in v1.47.0rc1 instead.
2021-10-29 13:43:51 -04:00
Brendan Abolivier
ad4eab9862 Add a module API method to retrieve state from a room (#11204) 2021-10-29 16:28:29 +00:00
Sean Quah
3ed17ff651 Clarify lack of Windows support in documentation (#11198) 2021-10-29 14:03:58 +01:00
Patrick Cloke
56e281bf6c Additional type hints for relations database class. (#11205) 2021-10-28 14:35:12 -04:00
Rafael Gonçalves
0e16b418f6 Add knock information in admin exported data (#11171)
Signed-off-by: Rafael Goncalves <rafaelgoncalves@riseup.net>
2021-10-28 18:54:38 +01:00
Shay
e002faee01 Fetch verify key locally rather than trying to do so over federation if origin and host are the same. (#11129)
* add tests for fetching key locally

* add logic to check if origin server is same as host and fetch verify key locally rather than over federation

* add changelog

* slight refactor, add docstring, change changelog entry

* Make changelog entry one line

* remove verify_json_locally and push locality check to process_request, add function process_request_locally

* remove leftover code reference

* refactor to add common call to 'verify_json and associated handling code

* add type hint to process_json

* add some docstrings + very slight refactor
2021-10-28 10:27:17 -07:00
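A hedged sketch of the short-circuit described in this commit (the function and parameter names are invented for illustration and are not Synapse's actual keyring API):

```python
# Sketch: if the origin of the request is our own server name, answer the
# key lookup from the local signing keys instead of making a federation
# request to ourselves.
from typing import Callable, Dict

def get_verify_keys(
    origin: str,
    our_server_name: str,
    local_verify_keys: Dict[str, bytes],
    fetch_over_federation: Callable[[str], Dict[str, bytes]],
) -> Dict[str, bytes]:
    if origin == our_server_name:
        # No federation round-trip needed for our own keys.
        return local_verify_keys
    return fetch_over_federation(origin)
```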
Brendan Abolivier
adc0d35b17 Add a ModuleApi method to update a user's membership in a room (#11147)
Co-authored-by: reivilibre <oliverw@matrix.org>
2021-10-28 16:45:53 +00:00
David Robertson
1bfd141205 Type hints for the remaining two files in synapse.http. (#11164)
* Teach MyPy that the sentinel context is False

This means that if `ctx: LoggingContextOrSentinel`
then `bool(ctx)` narrows us to `ctx:LoggingContext`, which is a really
neat find!

* Annotate RequestMetrics

- Raise errors for sentry if we use the sentinel context
- Ensure we don't raise an error and carry on, but not recording stats
- Include stack trace in the error case to lower Sean's blood pressure

* Make mypy pass for synapse.http.request_metrics

* Make synapse.http.connectproxyclient pass mypy

Co-authored-by: reivilibre <oliverw@matrix.org>
2021-10-28 14:14:42 +01:00
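The "sentinel context is False" trick above relies on annotating `__bool__` with a `Literal` return type so that mypy narrows the union on a truthiness check. A minimal sketch (the type names follow the commit message; the bodies are illustrative):

```python
from typing import Literal, Union

class LoggingContext:
    def __bool__(self) -> Literal[True]:
        return True

class _Sentinel:
    # Literal[False] tells mypy that a truthy check rules this type out.
    def __bool__(self) -> Literal[False]:
        return False

LoggingContextOrSentinel = Union[LoggingContext, _Sentinel]

def record_stats(ctx: LoggingContextOrSentinel) -> None:
    if not ctx:
        # Sentinel context: nothing to record.
        return
    # mypy now knows ctx is a LoggingContext here.
    print("recording stats for", ctx)
```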
Skyler Mäntysaari
a19bf32a03 docs/openid: Add Authentik documentation. (#11151) 2021-10-28 10:31:22 +00:00
Dan Callahan
a1ba7a850a Update scripts to pass Shellcheck lints (#11166) 2021-10-27 21:36:18 +01:00
Dan Callahan
0dffa9d0e0 Merge remote-tracking branch 'origin/develop' into shellcheck
Fixes a merge conflict with debian/changelog

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-27 20:04:00 +01:00
reivilibre
75ca0a6168 Annotate log_function decorator (#10943)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2021-10-27 17:27:23 +01:00
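A sketch of one way to annotate such a decorator so that the wrapped function's signature is preserved, assuming `ParamSpec` (Python 3.10+, or `typing_extensions` on older versions); this is illustrative and not necessarily the exact annotation used in the commit:

```python
import logging
from functools import wraps
from typing import Callable, TypeVar

from typing_extensions import ParamSpec  # typing.ParamSpec on Python 3.10+

P = ParamSpec("P")
R = TypeVar("R")

logger = logging.getLogger(__name__)

def log_function(f: Callable[P, R]) -> Callable[P, R]:
    """Log the name and arguments of every call to the wrapped function."""
    @wraps(f)
    def wrapped(*args: P.args, **kwargs: P.kwargs) -> R:
        logger.debug("Calling %s args=%r kwargs=%r", f.__name__, args, kwargs)
        return f(*args, **kwargs)
    return wrapped
```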
Samuel Philipp
4e393af52f Fixed config parse bug in review_recent_signups (#11191) 2021-10-27 17:25:18 +01:00
Patrick Cloke
19d5dc6931 Refactor Filter to handle fields according to data being filtered. (#11194)
This avoids filtering against fields which cannot exist on an
event source. E.g. presence updates don't have a room.
2021-10-27 11:26:30 -04:00
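An illustrative sketch of the idea (hypothetical names, not Synapse's actual `Filter` class): only check the fields that can exist on the kind of data being filtered, so a presence update is not rejected simply because it has no room:

```python
from typing import Collection

def matches(data: dict, kind: str,
            wanted_rooms: Collection[str],
            wanted_senders: Collection[str]) -> bool:
    if wanted_senders and data.get("sender") not in wanted_senders:
        return False
    if kind == "event":
        # Only events carry a room_id; presence updates do not.
        if wanted_rooms and data.get("room_id") not in wanted_rooms:
            return False
    return True

# A presence update is not rejected merely because it lacks a room_id.
assert matches({"sender": "@alice:example.com"}, "presence",
               wanted_rooms=["!room:example.com"],
               wanted_senders=["@alice:example.com"])
```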
Dirk Klimpel
8d46fac98e Delete messages from device_inbox table when deleting device (#10969)
Fixes: #9346
2021-10-27 16:01:18 +01:00
Patrick Cloke
a930da3291 Include the stable identifier for MSC3288. (#11187)
Includes both the stable and unstable identifier to store-invite
calls to the identity server. In the future we should remove the
unstable identifier.
2021-10-27 14:19:19 +00:00
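A hedged sketch of the dual-identifier pattern mentioned here; the field names (`room_type` and `org.matrix.msc3288.room_type`) are assumed from MSC3288 rather than quoted from this commit:

```python
from typing import Optional

def build_store_invite_body(room_type: Optional[str]) -> dict:
    """Include both identifiers while the ecosystem migrates to the stable one."""
    body: dict = {}
    if room_type is not None:
        body["room_type"] = room_type                     # stable (assumed)
        body["org.matrix.msc3288.room_type"] = room_type  # unstable (assumed)
    return body
```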
Erik Johnston
179dc8ae9e Merge remote-tracking branch 'origin/release-v1.46' into develop 2021-10-27 14:45:40 +01:00
Brendan Abolivier
c7a5e49664 Implement an on_new_event callback (#11126)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2021-10-26 15:17:36 +02:00
Dan Callahan
1afc6ecae1 Changelog
Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:21:40 +01:00
Dan Callahan
d7141e0b8b Fix Shellcheck SC2006: Use $(...) notation
Use $(...) notation instead of legacy backticked `...`.

https://github.com/koalaman/shellcheck/wiki/SC2006

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:55 +01:00
Dan Callahan
b5e910521b Fix Shellcheck SC2129: Consider using {..} >> file
Consider using { cmd1; cmd2; } >> file instead of individual redirects.

https://github.com/koalaman/shellcheck/wiki/SC2129

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
13f084eb58 Fix Shellcheck SC2086: Quote to prevent splitting
Double quote to prevent globbing and word splitting.

https://github.com/koalaman/shellcheck/wiki/SC2086

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
31096132c3 Fix Shellcheck SC2012: Use find instead of ls
Use find instead of ls to better handle non-alphanumeric filenames.

https://github.com/koalaman/shellcheck/wiki/SC2012

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
9d0f9d51d5 Fix Shellcheck SC2016: Single quotes don't expand
Expressions don't expand in single quotes, use double quotes for that.

https://github.com/koalaman/shellcheck/wiki/SC2016

This specifically warned about the '$aregis...' part of the sed script.
Which is a relatively obscure use of sed.

Splitting this into two commands makes its intent more obvious and
avoids contravening Shellcheck's lints.

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
bab2bc844c Fix Shellcheck SC1091: Can't follow file
Not following: (error message here)

https://github.com/koalaman/shellcheck/wiki/SC1091

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
7cf83c0aca Fix Shellcheck SC1001: Meaningless char escapes
This \o will be a regular 'o' in this context.

https://github.com/koalaman/shellcheck/wiki/SC1001

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
99e698d6ed Fix Shellcheck SC2089 and SC2090: Quotes in vars
SC2089: Quotes/backslashes will be treated literally. Use an array.

https://github.com/koalaman/shellcheck/wiki/SC2089

SC2090: Quotes/backslashes in this variable will not be respected.

https://github.com/koalaman/shellcheck/wiki/SC2090

Putting literal JSON in a variable mistakenly triggers these warnings.
Instead of adding ignore directives, this can be avoided by inlining the
JSON data into the curl invocation.

Since the variable is only used in this one location, inlining is fine.

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
dfa6143133 Fix Shellcheck SC2155: Declare + export separately
Declare and assign separately to avoid masking return values.

https://github.com/koalaman/shellcheck/wiki/SC2155

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
6a9d84a676 Fix Shellcheck SC2166: test -a is not well defined
Prefer [ p ] && [ q ] as [ p -a q ] is not well defined.

https://github.com/koalaman/shellcheck/wiki/SC2166

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
6c736fa472 Fix Shellcheck SC2154: variable possibly undefined
var is referenced but not assigned.

https://github.com/koalaman/shellcheck/wiki/SC2154

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
898e3be4c9 Fix Shellcheck SC2064: Use single quotes on traps
Use single quotes, otherwise this expands now rather than when signalled.

https://github.com/koalaman/shellcheck/wiki/SC2064

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
5eb481cd5b Fix Shellcheck SC2115: Ensure never expands to /*
Use "${var:?}" to ensure this never expands to /* .

https://github.com/koalaman/shellcheck/wiki/SC2115

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:54 +01:00
Dan Callahan
64adbb7b54 Fix Shellcheck SC2046: Quote to prevent word split
Quote this to prevent word splitting

https://www.shellcheck.net/wiki/SC2046

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:53 +01:00
Dan Callahan
12d79ff1b6 Fix Shellcheck SC2164: exit in case cd fails.
Use `cd ... || exit` in case cd fails.

https://github.com/koalaman/shellcheck/wiki/SC2164

Signed-off-by: Dan Callahan <danc@element.io>
2021-10-22 23:08:53 +01:00
143 changed files with 2172 additions and 1024 deletions

View File

@@ -3,7 +3,7 @@
# Test for the export-data admin command against sqlite and postgres
set -xe
cd `dirname $0`/../..
cd "$(dirname "$0")/../.."
echo "--- Install dependencies"

View File

@@ -7,7 +7,7 @@
set -xe
cd `dirname $0`/../..
cd "$(dirname "$0")/../.."
echo "--- Install dependencies"

View File

@@ -1,17 +1,8 @@
Synapse 1.46.0 (2021-11-02)
===========================
The cause of the [performance regression affecting Synapse 1.44](https://github.com/matrix-org/synapse/issues/11049) has been identified and fixed. ([\#11177](https://github.com/matrix-org/synapse/issues/11177))
Bugfixes
--------
- Fix a bug introduced in v1.46.0rc1 where URL previews of some XML documents would fail. ([\#11196](https://github.com/matrix-org/synapse/issues/11196))
Synapse 1.46.0rc1 (2021-10-27)
==============================
The cause of the [performance regression affecting Synapse 1.44](https://github.com/matrix-org/synapse/issues/11049) has been identified and fixed. ([\#11177](https://github.com/matrix-org/synapse/issues/11177))
Features
--------

View File

@@ -34,14 +34,6 @@ additional-css = [
"docs/website_files/table-of-contents.css",
"docs/website_files/remove-nav-buttons.css",
"docs/website_files/indent-section-headers.css",
"docs/website_files/version-picker.css",
]
additional-js = [
"docs/website_files/table-of-contents.js",
"docs/website_files/version-picker.js",
"docs/website_files/version.js",
]
theme = "docs/website_files/theme"
[preprocessor.schema_versions]
command = "./scripts-dev/schema_versions.py"
additional-js = ["docs/website_files/table-of-contents.js"]
theme = "docs/website_files/theme"

1
changelog.d/10097.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long-standing bug which allowed hidden devices to receive to-device messages, resulting in unnecessary database bloat.

1
changelog.d/10943.misc Normal file
View File

@@ -0,0 +1 @@
Add type annotations for the `log_function` decorator.

1
changelog.d/10969.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long-standing bug where messages in the `device_inbox` table for deleted devices would persist indefinitely. Contributed by @dklimpel and @JohannesKleine.

1
changelog.d/11033.bugfix Normal file
View File

@@ -0,0 +1 @@
Do not accept events if a third-party rule module API callback raises an exception.

View File

@@ -0,0 +1 @@
Advertise support for Client-Server API r0.6.1.

View File

@@ -0,0 +1 @@
Add an `on_new_event` third-party rules callback to allow Synapse modules to act after an event has been sent into a room.

1
changelog.d/11128.doc Normal file
View File

@@ -0,0 +1 @@
Improve example HAProxy config in the docs to properly handle host headers with port information. This is required for federation over port 443 to work correctly.

1
changelog.d/11129.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix long-standing bug where verification requests could fail in certain cases if whitelist was in place but did not include your own homeserver.

View File

@@ -0,0 +1 @@
Add a module API method to update a user's membership in a room.

1
changelog.d/11151.doc Normal file
View File

@@ -0,0 +1 @@
Add documentation for using Authentik as an OpenID Connect Identity Provider. Contributed by @samip5.

1
changelog.d/11164.misc Normal file
View File

@@ -0,0 +1 @@
Add type hints so that `synapse.http` passes `mypy` checks.

1
changelog.d/11166.misc Normal file
View File

@@ -0,0 +1 @@
Update scripts to pass Shellcheck lints.

1
changelog.d/11171.misc Normal file
View File

@@ -0,0 +1 @@
Add knock information in admin export. Contributed by Rafael Gonçalves.

View File

@@ -0,0 +1 @@
Add metrics for thread pool usage.

1
changelog.d/11179.misc Normal file
View File

@@ -0,0 +1 @@
Add tests to check that `ClientIpStore.get_last_client_ip_by_device` and `get_user_ip_and_agents` combine database and in-memory data correctly.

View File

@@ -0,0 +1 @@
Support the stable room type field for [MSC3288](https://github.com/matrix-org/matrix-doc/pull/3288).

1
changelog.d/11191.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.45.0 which prevented the `synapse_review_recent_signups` script from running. Contributed by @samuel-p.

1
changelog.d/11194.misc Normal file
View File

@@ -0,0 +1 @@
Refactor `Filter` to check different fields depending on the data type.

1
changelog.d/11198.doc Normal file
View File

@@ -0,0 +1 @@
Clarify lack of support for Windows.

View File

@@ -0,0 +1 @@
Add a module API method to retrieve the current state of a room.

1
changelog.d/11205.misc Normal file
View File

@@ -0,0 +1 @@
Improve type hints for the relations datastore.

View File

@@ -0,0 +1 @@
The `user_may_create_room_with_invites` module callback is now deprecated. Please refer to the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#upgrading-to-v1470) for more information.

1
changelog.d/11209.docker Normal file
View File

@@ -0,0 +1 @@
Avoid changing userid when started as a non-root user, and no explicit `UID` is set.

View File

@@ -0,0 +1 @@
Add support for serving `/.well-known/matrix/server` files, to redirect federation traffic to port 443.

1
changelog.d/11212.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long-standing bug where messages in the `device_inbox` table for deleted devices would persist indefinitely. Contributed by @dklimpel and @JohannesKleine.

View File

@@ -0,0 +1 @@
Remove deprecated admin API to delete rooms (`POST /_synapse/admin/v1/rooms/<room_id>/delete`).

1
changelog.d/11221.doc Normal file
View File

@@ -0,0 +1 @@
Improve code formatting and fix a few typos in docs. Contributed by @sumnerevans at Beeper.

1
changelog.d/11226.misc Normal file
View File

@@ -0,0 +1 @@
Fix a bug in unit test `test_block_room_and_not_purge`.

1
changelog.d/11232.misc Normal file
View File

@@ -0,0 +1 @@
TODO: write a proper changelog.

View File

@@ -84,7 +84,9 @@ AUTH="Authorization: Bearer $TOKEN"
###################################################################################################
# finally start pruning the room:
###################################################################################################
POSTDATA='{"delete_local_events":"true"}' # this will really delete local events, so the messages in the room really disappear unless they are restored by remote federation
# this will really delete local events, so the messages in the room really
# disappear unless they are restored by remote federation. This is because
# we pass {"delete_local_events":true} to the curl invocation below.
for ROOM in "${ROOMS_ARRAY[@]}"; do
echo "########################################### $(date) ################# "
@@ -104,7 +106,7 @@ for ROOM in "${ROOMS_ARRAY[@]}"; do
SLEEP=2
set -x
# call purge
OUT=$(curl --header "$AUTH" -s -d $POSTDATA POST "$API_URL/admin/purge_history/$ROOM/$EVENT_ID")
OUT=$(curl --header "$AUTH" -s -d '{"delete_local_events":true}' POST "$API_URL/admin/purge_history/$ROOM/$EVENT_ID")
PURGE_ID=$(echo "$OUT" |grep purge_id|cut -d'"' -f4 )
if [ "$PURGE_ID" == "" ]; then
# probably the history purge is already in progress for $ROOM

View File

@@ -15,7 +15,7 @@ export DH_VIRTUALENV_INSTALL_ROOT=/opt/venvs
# python won't look in the right directory. At least this way, the error will
# be a *bit* more obvious.
#
SNAKE=`readlink -e /usr/bin/python3`
SNAKE=$(readlink -e /usr/bin/python3)
# try to set the CFLAGS so any compiled C extensions are compiled with the most
# generic as possible x64 instructions, so that compiling it on a new Intel chip
@@ -24,7 +24,7 @@ SNAKE=`readlink -e /usr/bin/python3`
# TODO: add similar things for non-amd64, or figure out a more generic way to
# do this.
case `dpkg-architecture -q DEB_HOST_ARCH` in
case $(dpkg-architecture -q DEB_HOST_ARCH) in
amd64)
export CFLAGS=-march=x86-64
;;
@@ -56,8 +56,8 @@ case "$DEB_BUILD_OPTIONS" in
*)
# Copy tests to a temporary directory so that we can put them on the
# PYTHONPATH without putting the uninstalled synapse on the pythonpath.
tmpdir=`mktemp -d`
trap "rm -r $tmpdir" EXIT
tmpdir=$(mktemp -d)
trap 'rm -r $tmpdir' EXIT
cp -r tests "$tmpdir"
@@ -98,7 +98,7 @@ esac
--output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml"
# add a dependency on the right version of python to substvars.
PYPKG=`basename $SNAKE`
PYPKG=$(basename "$SNAKE")
echo "synapse:pydepends=$PYPKG" >> debian/matrix-synapse-py3.substvars

10
debian/changelog vendored
View File

@@ -1,12 +1,8 @@
matrix-synapse-py3 (1.46.0) stable; urgency=medium
matrix-synapse-py3 (1.47.0+nmu1) UNRELEASED; urgency=medium
[ Richard van der Hoff ]
* Compress debs with xz, to fix incompatibility of impish debs with reprepro.
* Update scripts to pass Shellcheck lints.
[ Synapse Packaging team ]
* New synapse release 1.46.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 02 Nov 2021 13:22:53 +0000
-- root <root@cae79a6e79d7> Fri, 22 Oct 2021 22:20:31 +0000
matrix-synapse-py3 (1.46.0~rc1) stable; urgency=medium

View File

@@ -2,6 +2,7 @@
set -e
# shellcheck disable=SC1091
. /usr/share/debconf/confmodule
# try to update the debconf db according to whatever is in the config files

View File

@@ -1,5 +1,6 @@
#!/bin/sh -e
# shellcheck disable=SC1091
. /usr/share/debconf/confmodule
CONFIGFILE_SERVERNAME="/etc/matrix-synapse/conf.d/server_name.yaml"

6
debian/rules vendored
View File

@@ -51,11 +51,5 @@ override_dh_shlibdeps:
override_dh_virtualenv:
./debian/build_virtualenv
override_dh_builddeb:
# force the compression to xzip, to stop dpkg-deb on impish defaulting to zstd
# (which requires reprepro 5.3.0-1.3, which is currently only in 'experimental' in Debian:
# https://metadata.ftp-master.debian.org/changelogs/main/r/reprepro/reprepro_5.3.0-1.3_changelog)
dh_builddeb -- -Zxz
%:
dh $@ --with python-virtualenv

View File

@@ -10,7 +10,7 @@ set -e
apt-get update
apt-get install -y lsb-release
deb=`ls /debs/matrix-synapse-py3_*+$(lsb_release -cs)*.deb | sort | tail -n1`
deb=$(find /debs -name "matrix-synapse-py3_*+$(lsb_release -cs)*.deb" | sort | tail -n1)
debconf-set-selections <<EOF
matrix-synapse matrix-synapse/report-stats boolean false
@@ -19,5 +19,6 @@ EOF
dpkg -i "$deb"
sed -i -e '/port: 8...$/{s/8448/18448/; s/8008/18008/}' -e '$aregistration_shared_secret: secret' /etc/matrix-synapse/homeserver.yaml
sed -i -e 's/port: 8448$/port: 18448/; s/port: 8008$/port: 18008/' /etc/matrix-synapse/homeserver.yaml
echo 'registration_shared_secret: secret' >> /etc/matrix-synapse/homeserver.yaml
systemctl restart matrix-synapse

View File

@@ -6,14 +6,14 @@ DIR="$( cd "$( dirname "$0" )" && pwd )"
PID_FILE="$DIR/servers.pid"
if [ -f $PID_FILE ]; then
if [ -f "$PID_FILE" ]; then
echo "servers.pid exists!"
exit 1
fi
for port in 8080 8081 8082; do
rm -rf $DIR/$port
rm -rf $DIR/media_store.$port
rm -rf "${DIR:?}/$port"
rm -rf "$DIR/media_store.$port"
done
rm -rf $DIR/etc
rm -rf "${DIR:?}/etc"

View File

@@ -4,21 +4,22 @@ DIR="$( cd "$( dirname "$0" )" && pwd )"
CWD=$(pwd)
cd "$DIR/.."
cd "$DIR/.." || exit
mkdir -p demo/etc
export PYTHONPATH=$(readlink -f $(pwd))
PYTHONPATH=$(readlink -f "$(pwd)")
export PYTHONPATH
echo $PYTHONPATH
echo "$PYTHONPATH"
for port in 8080 8081 8082; do
echo "Starting server on port $port... "
https_port=$((port + 400))
mkdir -p demo/$port
pushd demo/$port
pushd demo/$port || exit
#rm $DIR/etc/$port.config
python3 -m synapse.app.homeserver \
@@ -27,75 +28,78 @@ for port in 8080 8081 8082; do
--config-path "$DIR/etc/$port.config" \
--report-stats no
if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
echo "public_baseurl: http://localhost:$port/" >> $DIR/etc/$port.config
echo 'enable_registration: true' >> $DIR/etc/$port.config
# Warning, this heredoc depends on the interaction of tabs and spaces. Please don't
# accidentaly bork me with your fancy settings.
listeners=$(cat <<-PORTLISTENERS
# Configure server to listen on both $https_port and $port
# This overides some of the default settings above
listeners:
- port: $https_port
type: http
tls: true
resources:
- names: [client, federation]
- port: $port
tls: false
bind_addresses: ['::1', '127.0.0.1']
type: http
x_forwarded: true
resources:
- names: [client, federation]
compress: false
PORTLISTENERS
)
echo "${listeners}" >> $DIR/etc/$port.config
# Disable tls for the servers
printf '\n\n# Disable tls on the servers.' >> $DIR/etc/$port.config
echo '# DO NOT USE IN PRODUCTION' >> $DIR/etc/$port.config
echo 'use_insecure_ssl_client_just_for_testing_do_not_use: true' >> $DIR/etc/$port.config
echo 'federation_verify_certificates: false' >> $DIR/etc/$port.config
# Set tls paths
echo "tls_certificate_path: \"$DIR/etc/localhost:$https_port.tls.crt\"" >> $DIR/etc/$port.config
echo "tls_private_key_path: \"$DIR/etc/localhost:$https_port.tls.key\"" >> $DIR/etc/$port.config
if ! grep -F "Customisation made by demo/start.sh" -q "$DIR/etc/$port.config"; then
# Generate tls keys
openssl req -x509 -newkey rsa:4096 -keyout $DIR/etc/localhost\:$https_port.tls.key -out $DIR/etc/localhost\:$https_port.tls.crt -days 365 -nodes -subj "/O=matrix"
openssl req -x509 -newkey rsa:4096 -keyout "$DIR/etc/localhost:$https_port.tls.key" -out "$DIR/etc/localhost:$https_port.tls.crt" -days 365 -nodes -subj "/O=matrix"
# Ignore keys from the trusted keys server
echo '# Ignore keys from the trusted keys server' >> $DIR/etc/$port.config
echo 'trusted_key_servers:' >> $DIR/etc/$port.config
echo ' - server_name: "matrix.org"' >> $DIR/etc/$port.config
echo ' accept_keys_insecurely: true' >> $DIR/etc/$port.config
# Regenerate configuration
{
printf '\n\n# Customisation made by demo/start.sh\n'
echo "public_baseurl: http://localhost:$port/"
echo 'enable_registration: true'
# Reduce the blacklist
blacklist=$(cat <<-BLACK
# Set the blacklist so that it doesn't include 127.0.0.1, ::1
federation_ip_range_blacklist:
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- 'fe80::/64'
- 'fc00::/7'
BLACK
)
echo "${blacklist}" >> $DIR/etc/$port.config
# Warning, this heredoc depends on the interaction of tabs and spaces.
# Please don't accidentaly bork me with your fancy settings.
listeners=$(cat <<-PORTLISTENERS
# Configure server to listen on both $https_port and $port
# This overides some of the default settings above
listeners:
- port: $https_port
type: http
tls: true
resources:
- names: [client, federation]
- port: $port
tls: false
bind_addresses: ['::1', '127.0.0.1']
type: http
x_forwarded: true
resources:
- names: [client, federation]
compress: false
PORTLISTENERS
)
echo "${listeners}"
# Disable tls for the servers
printf '\n\n# Disable tls on the servers.'
echo '# DO NOT USE IN PRODUCTION'
echo 'use_insecure_ssl_client_just_for_testing_do_not_use: true'
echo 'federation_verify_certificates: false'
# Set tls paths
echo "tls_certificate_path: \"$DIR/etc/localhost:$https_port.tls.crt\""
echo "tls_private_key_path: \"$DIR/etc/localhost:$https_port.tls.key\""
# Ignore keys from the trusted keys server
echo '# Ignore keys from the trusted keys server'
echo 'trusted_key_servers:'
echo ' - server_name: "matrix.org"'
echo ' accept_keys_insecurely: true'
# Reduce the blacklist
blacklist=$(cat <<-BLACK
# Set the blacklist so that it doesn't include 127.0.0.1, ::1
federation_ip_range_blacklist:
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- 'fe80::/64'
- 'fc00::/7'
BLACK
)
echo "${blacklist}"
} >> "$DIR/etc/$port.config"
fi
# Check script parameters
if [ $# -eq 1 ]; then
if [ $1 = "--no-rate-limit" ]; then
if [ "$1" = "--no-rate-limit" ]; then
# Disable any rate limiting
ratelimiting=$(cat <<-RC
@@ -137,22 +141,22 @@ for port in 8080 8081 8082; do
burst_count: 1000
RC
)
echo "${ratelimiting}" >> $DIR/etc/$port.config
echo "${ratelimiting}" >> "$DIR/etc/$port.config"
fi
fi
if ! grep -F "full_twisted_stacktraces" -q $DIR/etc/$port.config; then
echo "full_twisted_stacktraces: true" >> $DIR/etc/$port.config
if ! grep -F "full_twisted_stacktraces" -q "$DIR/etc/$port.config"; then
echo "full_twisted_stacktraces: true" >> "$DIR/etc/$port.config"
fi
if ! grep -F "report_stats" -q $DIR/etc/$port.config ; then
echo "report_stats: false" >> $DIR/etc/$port.config
if ! grep -F "report_stats" -q "$DIR/etc/$port.config" ; then
echo "report_stats: false" >> "$DIR/etc/$port.config"
fi
python3 -m synapse.app.homeserver \
--config-path "$DIR/etc/$port.config" \
-D \
popd
popd || exit
done
cd "$CWD"
cd "$CWD" || exit

View File

@@ -8,7 +8,7 @@ for pid_file in $FILES; do
pid=$(cat "$pid_file")
if [[ $pid ]]; then
echo "Killing $pid_file with $pid"
kill $pid
kill "$pid"
fi
done

View File

@@ -65,7 +65,8 @@ The following environment variables are supported in `generate` mode:
* `SYNAPSE_DATA_DIR`: where the generated config will put persistent data
such as the database and media store. Defaults to `/data`.
* `UID`, `GID`: the user id and group id to use for creating the data
directories. Defaults to `991`, `991`.
directories. If unset, and no user is set via `docker run --user`, defaults
to `991`, `991`.
## Running synapse
@@ -97,7 +98,9 @@ The following environment variables are supported in `run` mode:
`<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
* `SYNAPSE_WORKER`: module to execute, used when running synapse with workers.
Defaults to `synapse.app.homeserver`, which is suitable for non-worker mode.
* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
* `UID`, `GID`: the user and group id to run Synapse as. If unset, and no user
is set via `docker run --user`, defaults to `991`, `991`. Note that this user
must have permission to read the config files, and write to the data directories.
* `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
For more complex setups (e.g. for workers) you can also pass your args directly to synapse using `run` mode. For example like this:
@@ -186,7 +189,7 @@ point to another Dockerfile.
## Disabling the healthcheck
If you are using a non-standard port or tls inside docker you can disable the healthcheck
whilst running the above `docker run` commands.
whilst running the above `docker run` commands.
```
--no-healthcheck
@@ -212,7 +215,7 @@ If you wish to point the healthcheck at a different port with docker command, ad
## Setting the healthcheck in docker-compose file
You can add the following to set a custom healthcheck in a docker compose file.
You will need docker-compose version >2.1 for this to work.
You will need docker-compose version >2.1 for this to work.
```
healthcheck:
@@ -226,5 +229,5 @@ healthcheck:
## Using jemalloc
Jemalloc is embedded in the image and will be used instead of the default allocator.
You can read about jemalloc by reading the Synapse
You can read about jemalloc by reading the Synapse
[README](https://github.com/matrix-org/synapse/blob/HEAD/README.rst#help-synapse-is-slow-and-eats-all-my-ram-cpu).

View File

@@ -5,7 +5,7 @@
set -ex
# Get the codename from distro env
DIST=`cut -d ':' -f2 <<< $distro`
DIST=$(cut -d ':' -f2 <<< "${distro:?}")
# we get a read-only copy of the source: make a writeable copy
cp -aT /synapse/source /synapse/build
@@ -17,7 +17,7 @@ cd /synapse/build
# Section to determine which "component" it should go into (see
# https://manpages.debian.org/stretch/reprepro/reprepro.1.en.html#GUESSING)
DEB_VERSION=`dpkg-parsechangelog -SVersion`
DEB_VERSION=$(dpkg-parsechangelog -SVersion)
case $DEB_VERSION in
*~rc*|*~a*|*~b*|*~c*)
sed -ie '/^Section:/c\Section: prerelease' debian/control

View File

@@ -120,6 +120,7 @@ def generate_config_from_template(config_dir, config_path, environ, ownership):
]
if ownership is not None:
log(f"Setting ownership on /data to {ownership}")
subprocess.check_output(["chown", "-R", ownership, "/data"])
args = ["gosu", ownership] + args
@@ -144,12 +145,18 @@ def run_generate_config(environ, ownership):
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
if ownership is not None:
# make sure that synapse has perms to write to the data dir.
log(f"Setting ownership on {data_dir} to {ownership}")
subprocess.check_output(["chown", ownership, data_dir])
# create a suitable log config from our template
log_config_file = "%s/%s.log.config" % (config_dir, server_name)
if not os.path.exists(log_config_file):
log("Creating log config %s" % (log_config_file,))
convert("/conf/log.config", log_config_file, environ)
# generate the main config file, and a signing key.
args = [
"python",
"-m",
@@ -168,29 +175,23 @@ def run_generate_config(environ, ownership):
"--open-private-ports",
]
# log("running %s" % (args, ))
if ownership is not None:
# make sure that synapse has perms to write to the data dir.
subprocess.check_output(["chown", ownership, data_dir])
args = ["gosu", ownership] + args
os.execv("/usr/sbin/gosu", args)
else:
os.execv("/usr/local/bin/python", args)
os.execv("/usr/local/bin/python", args)
def main(args, environ):
mode = args[1] if len(args) > 1 else "run"
desired_uid = int(environ.get("UID", "991"))
desired_gid = int(environ.get("GID", "991"))
synapse_worker = environ.get("SYNAPSE_WORKER", "synapse.app.homeserver")
if (desired_uid == os.getuid()) and (desired_gid == os.getgid()):
ownership = None
else:
ownership = "{}:{}".format(desired_uid, desired_gid)
if ownership is None:
log("Will not perform chmod/gosu as UserID already matches request")
# if we were given an explicit user to switch to, do so
ownership = None
if "UID" in environ:
desired_uid = int(environ["UID"])
desired_gid = int(environ.get("GID", "991"))
ownership = f"{desired_uid}:{desired_gid}"
elif os.getuid() == 0:
# otherwise, if we are running as root, use user 991
ownership = "991:991"
synapse_worker = environ.get("SYNAPSE_WORKER", "synapse.app.homeserver")
# In generate mode, generate a configuration and missing keys, then exit
if mode == "generate":

View File

@@ -15,12 +15,12 @@ in `homeserver.yaml`, to the list of authorized domains. If you have not set
1. Agree to the terms of service and submit.
1. Copy your site key and secret key and add them to your `homeserver.yaml`
configuration file
```
```yaml
recaptcha_public_key: YOUR_SITE_KEY
recaptcha_private_key: YOUR_SECRET_KEY
```
1. Enable the CAPTCHA for new registrations
```
```yaml
enable_registration_captcha: true
```
1. Go to the settings page for the CAPTCHA you just created

View File

@@ -99,7 +99,7 @@ server admin: see [Admin API](../usage/administration/admin_api).
It returns a JSON body like the following:
```jsonc
```json
{
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
"event_json": {
@@ -132,7 +132,7 @@ It returns a JSON body like the following:
},
"type": "m.room.message",
"unsigned": {
"age_ts": 1592291711430,
"age_ts": 1592291711430
}
},
"id": <report_id>,

View File

@@ -27,7 +27,7 @@ Room state data (such as joins, leaves, topic) is always preserved.
To delete local message events as well, set `delete_local_events` in the body:
```
```json
{
"delete_local_events": true
}

View File

@@ -28,7 +28,7 @@ server admin: see [Admin API](../usage/administration/admin_api).
Response:
```
```json
{
"room_id": "!636q39766251:server.com"
}

View File

@@ -87,7 +87,7 @@ GET /_synapse/admin/v1/rooms
A response body like the following is returned:
```jsonc
```json
{
"rooms": [
{
@@ -170,7 +170,7 @@ GET /_synapse/admin/v1/rooms?order_by=size
A response body like the following is returned:
```jsonc
```json
{
"rooms": [
{
@@ -208,7 +208,7 @@ A response body like the following is returned:
}
],
"offset": 0,
"total_rooms": 150
"total_rooms": 150,
"next_token": 100
}
```
@@ -224,7 +224,7 @@ GET /_synapse/admin/v1/rooms?order_by=size&from=100
A response body like the following is returned:
```jsonc
```json
{
"rooms": [
{
@@ -520,16 +520,6 @@ With all that being said, if you still want to try and recover the room:
4. If `new_room_user_id` was given, a 'Content Violation' will have been
created. Consider whether you want to delete that roomm.
## Deprecated endpoint
The previous deprecated API will be removed in a future release, it was:
```
POST /_synapse/admin/v1/rooms/<room_id>/delete
```
It behaves the same way than the current endpoint except the path and the method.
# Make Room Admin API
Grants another user the highest power available to a local user who is in the room.

View File

@@ -10,7 +10,9 @@ The necessary tools are detailed below.
First install them with:
pip install -e ".[lint,mypy]"
```sh
pip install -e ".[lint,mypy]"
```
- **black**
@@ -21,7 +23,9 @@ First install them with:
Have `black` auto-format your code (it shouldn't change any
functionality) with:
black . --exclude="\.tox|build|env"
```sh
black . --exclude="\.tox|build|env"
```
- **flake8**
@@ -30,7 +34,9 @@ First install them with:
Check all application and test code with:
flake8 synapse tests
```sh
flake8 synapse tests
```
- **isort**
@@ -39,7 +45,9 @@ First install them with:
Auto-fix imports with:
isort -rc synapse tests
```sh
isort -rc synapse tests
```
`-rc` means to recursively search the given directories.
@@ -66,15 +74,19 @@ save as it takes a while and is very resource intensive.
Example:
from synapse.types import UserID
...
user_id = UserID(local, server)
```python
from synapse.types import UserID
...
user_id = UserID(local, server)
```
is preferred over:
from synapse import types
...
user_id = types.UserID(local, server)
```python
from synapse import types
...
user_id = types.UserID(local, server)
```
(or any other variant).
@@ -134,28 +146,30 @@ Some guidelines follow:
Example:
## Frobnication ##
```yaml
## Frobnication ##
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobincator_frobber: special_frobber
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobincator_frobber: special_frobber
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
# frobbing distance. Defaults to 1000.
#
#distance: 100
```
Note that the sample configuration is generated from the synapse code
and is maintained by a script, `scripts-dev/generate_sample_config`.

View File

@@ -99,7 +99,7 @@ construct URIs where users can give their consent.
see if an unauthenticated user is viewing the page. This is typically
wrapped around the form that would be used to actually agree to the document:
```
```html
{% if not public_version %}
<!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
<form method="post" action="consent">

View File

@@ -1,4 +1,8 @@
# Delegation
# Delegation of incoming federation traffic
In the following documentation, we use the term `server_name` to refer to that setting
in your homeserver configuration file. It appears at the ends of user ids, and tells
other homeservers where they can find your server.
By default, other homeservers will expect to be able to reach yours via
your `server_name`, on port 8448. For example, if you set your `server_name`
@@ -12,13 +16,21 @@ to a different server and/or port (e.g. `synapse.example.com:443`).
## .well-known delegation
To use this method, you need to be able to alter the
`server_name` 's https server to serve the `/.well-known/matrix/server`
URL. Having an active server (with a valid TLS certificate) serving your
`server_name` domain is out of the scope of this documentation.
To use this method, you need to be able to configure the server at
`https://<server_name>` to serve a file at
`https://<server_name>/.well-known/matrix/server`. There are two ways to do this, shown below.
The URL `https://<server_name>/.well-known/matrix/server` should
return a JSON structure containing the key `m.server` like so:
Note that the `.well-known` file is hosted on the default port for `https` (port 443).
### External server
For maximum flexibility, you need to configure an external server such as nginx, Apache
or HAProxy to serve the `https://<server_name>/.well-known/matrix/server` file. Setting
up such a server is out of the scope of this documentation, but note that it is often
possible to configure your [reverse proxy](reverse_proxy.md) for this.
The URL `https://<server_name>/.well-known/matrix/server` should be configured
return a JSON structure containing the key `m.server` like this:
```json
{
@@ -26,8 +38,9 @@ return a JSON structure containing the key `m.server` like so:
}
```
In our example, this would mean that URL `https://example.com/.well-known/matrix/server`
should return:
In our example (where we want federation traffic to be routed to
`https://synapse.example.com`, on port 443), this would mean that
`https://example.com/.well-known/matrix/server` should return:
```json
{
@@ -38,16 +51,29 @@ should return:
Note, specifying a port is optional. If no port is specified, then it defaults
to 8448.
With .well-known delegation, federating servers will check for a valid TLS
certificate for the delegated hostname (in our example: `synapse.example.com`).
### Serving a `.well-known/matrix/server` file with Synapse
If you are able to set up your domain so that `https://<server_name>` is routed to
Synapse (i.e., the only change needed is to direct federation traffic to port 443
instead of port 8448), then it is possible to configure Synapse to serve a suitable
`.well-known/matrix/server` file. To do so, add the following to your `homeserver.yaml`
file:
```yaml
serve_server_wellknown: true
```
**Note**: this *only* works if `https://<server_name>` is routed to Synapse, so is
generally not suitable if Synapse is hosted at a subdomain such as
`https://synapse.example.com`.
## SRV DNS record delegation
It is also possible to do delegation using a SRV DNS record. However, that is
considered an advanced topic since it's a bit complex to set up, and `.well-known`
delegation is already enough in most cases.
It is also possible to do delegation using a SRV DNS record. However, that is generally
not recommended, as it can be difficult to configure the TLS certificates correctly in
this case, and it offers little advantage over `.well-known` delegation.
However, if you really need it, you can find some documentation on how such a
However, if you really need it, you can find some documentation on what such a
record should look like and how Synapse will use it in [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names).
@@ -68,27 +94,9 @@ wouldn't need any delegation set up.
domain `server_name` points to, you will need to let other servers know how to
find it using delegation.
### Do you still recommend against using a reverse proxy on the federation port?
### Should I use a reverse proxy for federation traffic?
We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.
See [the reverse proxy documentation](reverse_proxy.md) for information on setting up a
Generally, using a reverse proxy for both the federation and client traffic is a good
idea, since it saves handling TLS traffic in Synapse. See
[the reverse proxy documentation](reverse_proxy.md) for information on setting up a
reverse proxy.
### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
This is no longer necessary. If you are using a reverse proxy for all of your
TLS traffic, then you can set `no_tls: True` in the Synapse config.
In that case, the only reason Synapse needs the certificate is to populate a legacy
`tls_fingerprints` field in the federation API. This is ignored by Synapse 0.99.0
and later, and the only time pre-0.99 Synapses will check it is when attempting to
fetch the server keys - and generally this is delegated via `matrix.org`, which
is running a modern version of Synapse.
### Do I need the same certificate for the client and federation port?
No. There is nothing stopping you from using different certificates,
particularly if you are using a reverse proxy.

View File

@@ -8,23 +8,23 @@ easy to run CAS implementation built on top of Django.
1. Create a new virtualenv: `python3 -m venv <your virtualenv>`
2. Activate your virtualenv: `source /path/to/your/virtualenv/bin/activate`
3. Install Django and django-mama-cas:
```
```sh
python -m pip install "django<3" "django-mama-cas==2.4.0"
```
4. Create a Django project in the current directory:
```
```sh
django-admin startproject cas_test .
```
5. Follow the [install directions](https://django-mama-cas.readthedocs.io/en/latest/installation.html#configuring) for django-mama-cas
6. Setup the SQLite database: `python manage.py migrate`
7. Create a user:
```
```sh
python manage.py createsuperuser
```
1. Use whatever you want as the username and password.
2. Leave the other fields blank.
8. Use the built-in Django test server to serve the CAS endpoints on port 8000:
```
```sh
python manage.py runserver
```

View File

@@ -15,6 +15,11 @@ license - in our case, this is almost always Apache Software License v2 (see
# 2. What do I need?
If you are running Windows, the Windows Subsystem for Linux (WSL) is strongly
recommended for development. More information about WSL can be found at
<https://docs.microsoft.com/en-us/windows/wsl/install>. Running Synapse natively
on Windows is not officially supported.
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download).
The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).
@@ -41,8 +46,6 @@ can find many good git tutorials on the web.
# 4. Install the dependencies
## Under Unix (macOS, Linux, BSD, ...)
Once you have installed Python 3 and added the source, please open a terminal and
setup a *virtualenv*, as follows:
@@ -56,10 +59,6 @@ pip install tox
This will install the developer dependencies for the project.
## Under Windows
TBD
# 5. Get in touch.

View File

@@ -89,7 +89,9 @@ To do so, use `scripts-dev/make_full_schema.sh`. This will produce new
Ensure postgres is installed, then run:
./scripts-dev/make_full_schema.sh -p postgres_username -o output_dir/
```sh
./scripts-dev/make_full_schema.sh -p postgres_username -o output_dir/
```
NB at the time of writing, this script predates the split into separate `state`/`main`
databases so will require updates to handle that correctly.

View File

@@ -15,7 +15,7 @@ To make Synapse (and therefore Element) use it:
sp_config:
allow_unknown_attributes: true # Works around a bug with AVA Hashes: https://github.com/IdentityPython/pysaml2/issues/388
metadata:
local: ["samling.xml"]
local: ["samling.xml"]
```
5. Ensure that your `homeserver.yaml` has a setting for `public_baseurl`:
```yaml

View File

@@ -69,9 +69,9 @@ A default policy can be defined as such, in the `retention` section of
the configuration file:
```yaml
default_policy:
min_lifetime: 1d
max_lifetime: 1y
default_policy:
min_lifetime: 1d
max_lifetime: 1y
```
Here, `min_lifetime` and `max_lifetime` have the same meaning and level
@@ -95,14 +95,14 @@ depending on an event's room's policy. This can be done by setting the
file. An example of such configuration could be:
```yaml
purge_jobs:
- longest_max_lifetime: 3d
interval: 12h
- shortest_max_lifetime: 3d
longest_max_lifetime: 1w
interval: 1d
- shortest_max_lifetime: 1w
interval: 2d
purge_jobs:
- longest_max_lifetime: 3d
interval: 12h
- shortest_max_lifetime: 3d
longest_max_lifetime: 1w
interval: 1d
- shortest_max_lifetime: 1w
interval: 2d
```
In this example, we define three jobs:
@@ -141,8 +141,8 @@ purging old events in a room. These limits can be defined as such in the
`retention` section of the configuration file:
```yaml
allowed_lifetime_min: 1d
allowed_lifetime_max: 1y
allowed_lifetime_min: 1d
allowed_lifetime_max: 1y
```
The limits are considered when running purge jobs. If necessary, the

View File

@@ -10,7 +10,7 @@ registered by using the Module API's `register_password_auth_provider_callbacks`
_First introduced in Synapse v1.46.0_
```
```python
auth_checkers: Dict[Tuple[str,Tuple], Callable]
```

View File

@@ -123,42 +123,6 @@ callback returns `True`, Synapse falls through to the next one. The value of the
callback that does not return `True` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
### `user_may_create_room_with_invites`
_First introduced in Synapse v1.44.0_
```python
async def user_may_create_room_with_invites(
user: str,
invites: List[str],
threepid_invites: List[Dict[str, str]],
) -> bool
```
Called when processing a room creation request (right after `user_may_create_room`).
The module is given the Matrix user ID of the user trying to create a room, as well as a
list of Matrix users to invite and a list of third-party identifiers (3PID, e.g. email
addresses) to invite.
An invited Matrix user to invite is represented by their Matrix user IDs, and an invited
3PIDs is represented by a dict that includes the 3PID medium (e.g. "email") through its
`medium` key and its address (e.g. "alice@example.com") through its `address` key.
See [the Matrix specification](https://matrix.org/docs/spec/appendices#pid-types) for more
information regarding third-party identifiers.
If no invite and/or 3PID invite were specified in the room creation request, the
corresponding list(s) will be empty.
**Note**: This callback is not called when a room is cloned (e.g. during a room upgrade)
since no invites are sent when cloning a room. To cover this case, modules also need to
implement `user_may_create_room`.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `True`, Synapse falls through to the next one. The value of the first
callback that does not return `True` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
### `user_may_create_room_alias`
_First introduced in Synapse v1.37.0_

View File

@@ -43,6 +43,14 @@ event with new data by returning the new event's data as a dictionary. In order
that, it is recommended the module calls `event.get_dict()` to get the current event as a
dictionary, and modify the returned dictionary accordingly.
If `check_event_allowed` raises an exception, the module is assumed to have failed.
The event will not be accepted but is not treated as explicitly rejected, either.
An HTTP request causing the module check will likely result in a 500 Internal
Server Error.
When the boolean returned by the module is `False`, the event is rejected.
(Module developers should not use exceptions for rejection.)
Note that replacing the event only works for events sent by local users, not for events
received over federation.
@@ -119,6 +127,27 @@ callback returns `True`, Synapse falls through to the next one. The value of the
callback that does not return `True` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
### `on_new_event`
_First introduced in Synapse v1.47.0_
```python
async def on_new_event(
event: "synapse.events.EventBase",
state_events: "synapse.types.StateMap",
) -> None:
```
Called after sending an event into a room. The module is passed the event, as well
as the state of the room _after_ the event. This means that if the event is a state event,
it will be included in this state.
Note that this callback is called when the event has already been processed and stored
into the room, which means this callback cannot be used to deny persisting the event. To
deny an incoming event, see [`check_event_for_spam`](spam_checker_callbacks.md#check_event_for_spam) instead.
If multiple modules implement this callback, Synapse runs them all in order.
## Example
The example below is a module that implements the third-party rules callback

View File

@@ -21,6 +21,7 @@ such as [Github][github-idp].
[google-idp]: https://developers.google.com/identity/protocols/oauth2/openid-connect
[auth0]: https://auth0.com/
[authentik]: https://goauthentik.io/
[okta]: https://www.okta.com/
[dex-idp]: https://github.com/dexidp/dex
[keycloak-idp]: https://www.keycloak.org/docs/latest/server_admin/#sso-protocols
@@ -209,6 +210,39 @@ oidc_providers:
display_name_template: "{{ user.name }}"
```
### Authentik
[Authentik][authentik] is an open-source IdP solution.
1. Create a provider in Authentik, with type OAuth2/OpenID.
2. The parameters are:
- Client Type: Confidential
- JWT Algorithm: RS256
- Scopes: OpenID, Email and Profile
- RSA Key: Select any available key
- Redirect URIs: `[synapse public baseurl]/_synapse/client/oidc/callback`
3. Create an application for synapse in Authentik and link it to the provider.
4. Note the slug of your application, Client ID and Client Secret.
Synapse config:
```yaml
oidc_providers:
- idp_id: authentik
idp_name: authentik
discover: true
issuer: "https://your.authentik.example.org/application/o/your-app-slug/" # TO BE FILLED: domain and slug
client_id: "your client id" # TO BE FILLED
client_secret: "your client secret" # TO BE FILLED
scopes:
- "openid"
- "profile"
- "email"
user_mapping_provider:
config:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.preferred_username|capitalize }}" # TO BE FILLED: If your users have names in Authentik and you want those in Synapse, this should be replaced with user.name|capitalize.
```
### GitHub
[GitHub][github-idp] is a bit special as it is not an OpenID Connect compliant provider, but

View File

@@ -29,16 +29,20 @@ connect to a postgres database.
Assuming your PostgreSQL database user is called `postgres`, first authenticate as the database user with:
su - postgres
# Or, if your system uses sudo to get administrative rights
sudo -u postgres bash
```sh
su - postgres
# Or, if your system uses sudo to get administrative rights
sudo -u postgres bash
```
Then, create a postgres user and a database with:
# this will prompt for a password for the new user
createuser --pwprompt synapse_user
```sh
# this will prompt for a password for the new user
createuser --pwprompt synapse_user
createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse
createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse
```
The above will create a user called `synapse_user`, and a database called
`synapse`.
@@ -145,20 +149,26 @@ Firstly, shut down the currently running synapse server and copy its
database file (typically `homeserver.db`) to another location. Once the
copy is complete, restart synapse. For instance:
./synctl stop
cp homeserver.db homeserver.db.snapshot
./synctl start
```sh
./synctl stop
cp homeserver.db homeserver.db.snapshot
./synctl start
```
Copy the old config file into a new config file:
cp homeserver.yaml homeserver-postgres.yaml
```sh
cp homeserver.yaml homeserver-postgres.yaml
```
Edit the database section as described in the section *Synapse config*
above and with the SQLite snapshot located at `homeserver.db.snapshot`
simply run:
synapse_port_db --sqlite-database homeserver.db.snapshot \
--postgres-config homeserver-postgres.yaml
```sh
synapse_port_db --sqlite-database homeserver.db.snapshot \
--postgres-config homeserver-postgres.yaml
```
The flag `--curses` displays a coloured curses progress UI.
@@ -170,16 +180,20 @@ To complete the conversion shut down the synapse server and run the port
script one last time, e.g. if the SQLite database is at `homeserver.db`
run:
synapse_port_db --sqlite-database homeserver.db \
--postgres-config homeserver-postgres.yaml
```sh
synapse_port_db --sqlite-database homeserver.db \
--postgres-config homeserver-postgres.yaml
```
Once that has completed, change the synapse config to point at the
PostgreSQL database configuration file `homeserver-postgres.yaml`:
./synctl stop
mv homeserver.yaml homeserver-old-sqlite.yaml
mv homeserver-postgres.yaml homeserver.yaml
./synctl start
```sh
./synctl stop
mv homeserver.yaml homeserver-old-sqlite.yaml
mv homeserver-postgres.yaml homeserver.yaml
./synctl start
```
Synapse should now be running against PostgreSQL.

View File

@@ -52,7 +52,7 @@ to proxied traffic.)
### nginx
```
```nginx
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
@@ -141,7 +141,7 @@ matrix.example.com {
### Apache
```
```apache
<VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com
@@ -170,7 +170,7 @@ matrix.example.com {
**NOTE 2**: It appears that Synapse is currently incompatible with the ModSecurity module for Apache (`mod_security2`). If you need it enabled for other services on your web server, you can disable it for Synapse's two VirtualHosts by including the following lines before each of the two `</VirtualHost>` above:
```
```apache
<IfModule security2_module>
SecRuleEngine off
</IfModule>
@@ -188,7 +188,7 @@ frontend https
http-request set-header X-Forwarded-For %[src]
# Matrix client traffic
acl matrix-host hdr(host) -i matrix.example.com
acl matrix-host hdr(host) -i matrix.example.com matrix.example.com:443
acl matrix-path path_beg /_matrix
acl matrix-path path_beg /_synapse/client

View File

@@ -93,6 +93,24 @@ pid_file: DATADIR/homeserver.pid
#
#public_baseurl: https://example.com/
# Uncomment the following to tell other servers to send federation traffic on
# port 443.
#
# By default, other servers will try to reach our server on port 8448, which can
# be inconvenient in some environments.
#
# Provided 'https://<server_name>/' on port 443 is routed to Synapse, this
# option configures Synapse to serve a file at
# 'https://<server_name>/.well-known/matrix/server'. This will tell other
# servers to send traffic to port 443 instead.
#
# See https://matrix-org.github.io/synapse/latest/delegate.html for more
# information.
#
# Defaults to 'false'.
#
#serve_server_wellknown: true
# Set the soft limit on the number of file descriptors synapse can use
# Zero is used to indicate synapse should set the soft limit to the
# hard limit.
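With the new `serve_server_wellknown` option above enabled (and `https://<server_name>/` routed to Synapse), other homeservers discover the delegation by fetching `/.well-known/matrix/server`. A quick sketch of checking what is served there, assuming the hypothetical server name `example.com`:

```python
# Check the server delegation document, assuming the hypothetical server_name
# example.com with port 443 routed to Synapse.
import json
import urllib.request

with urllib.request.urlopen("https://example.com/.well-known/matrix/server") as resp:
    wellknown = json.load(resp)

# With serve_server_wellknown enabled, the "m.server" key tells other
# homeservers to send federation traffic to port 443 instead of 8448.
print(wellknown)  # e.g. {"m.server": "example.com:443"}
```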


@@ -356,12 +356,14 @@ make install
##### Windows
If you wish to run or develop Synapse on Windows, the Windows Subsystem For
Linux provides a Linux environment on Windows 10 which is capable of using the
Debian, Fedora, or source installation methods. More information about WSL can
be found at <https://docs.microsoft.com/en-us/windows/wsl/install-win10> for
Windows 10 and <https://docs.microsoft.com/en-us/windows/wsl/install-on-server>
for Windows Server.
Running Synapse natively on Windows is not officially supported.
If you wish to run or develop Synapse on Windows, the Windows Subsystem for
Linux provides a Linux environment which is capable of using the Debian, Fedora,
or source installation methods. More information about WSL can be found at
<https://docs.microsoft.com/en-us/windows/wsl/install> for Windows 10/11 and
<https://docs.microsoft.com/en-us/windows/wsl/install-on-server> for
Windows Server.
## Setting up Synapse


@@ -20,7 +20,9 @@ Finally, to actually run your worker-based synapse, you must pass synctl the `-a
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.:
synctl -a $CONFIG/workers start
```sh
synctl -a $CONFIG/workers start
```
Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
@@ -29,4 +31,6 @@ notifications.
To manipulate a specific worker, you pass the -w option to synctl:
synctl -w $CONFIG/workers/worker1.yaml restart
```sh
synctl -w $CONFIG/workers/worker1.yaml restart
```


@@ -40,7 +40,9 @@ This will install and start a systemd service called `coturn`.
1. Configure it:
./configure
```sh
./configure
```
You may need to install `libevent2`: if so, you should do so in
the way recommended by your operating system. You can ignore
@@ -49,22 +51,28 @@ This will install and start a systemd service called `coturn`.
1. Build and install it:
make
make install
```sh
make
make install
```
### Configuration
1. Create or edit the config file in `/etc/turnserver.conf`. The relevant
lines, with example values, are:
use-auth-secret
static-auth-secret=[your secret key here]
realm=turn.myserver.org
```
use-auth-secret
static-auth-secret=[your secret key here]
realm=turn.myserver.org
```
See `turnserver.conf` for explanations of the options. One way to generate
the `static-auth-secret` is with `pwgen`:
pwgen -s 64 1
```sh
pwgen -s 64 1
```
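This is the same secret you will later set as `turn_shared_secret` in Synapse's homeserver configuration. With `use-auth-secret`, coturn does not store per-user passwords; instead clients present time-limited credentials derived from the shared secret (the TURN REST API scheme). A sketch of the derivation, for illustration only (the user ID and lifetime are made up; Synapse computes these credentials itself):

```python
# Illustration of the credential scheme behind use-auth-secret (the TURN REST
# API scheme). The user ID and lifetime below are made up; Synapse derives
# these credentials itself when a client asks for TURN servers.
import base64
import hashlib
import hmac
import time

shared_secret = "your secret key here"  # must match static-auth-secret / turn_shared_secret
user = "@alice:example.com"             # hypothetical Matrix user

# The TURN username embeds an expiry timestamp; the password is the
# base64-encoded HMAC-SHA1 of that username, keyed with the shared secret.
expiry = int(time.time()) + 86400       # valid for one day
username = f"{expiry}:{user}"
password = base64.b64encode(
    hmac.new(shared_secret.encode(), username.encode(), hashlib.sha1).digest()
).decode()

print(username, password)
```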
A `realm` must be specified, but its value is somewhat arbitrary. (It is
sent to clients as part of the authentication flow.) It is conventional to
@@ -73,7 +81,9 @@ This will install and start a systemd service called `coturn`.
1. You will most likely want to configure coturn to write logs somewhere. The
easiest way is normally to send them to the syslog:
syslog
```sh
syslog
```
(in which case, the logs will be available via `journalctl -u coturn` on a
systemd system). Alternatively, coturn can be configured to write to a
@@ -83,31 +93,35 @@ This will install and start a systemd service called `coturn`.
connect to arbitrary IP addresses and ports. The following configuration is
suggested as a minimum starting point:
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
no-tcp-relay
```
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
no-tcp-relay
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
# special case the turn server itself so that client->TURN->TURN->client flows work
allowed-peer-ip=10.0.0.1
# special case the turn server itself so that client->TURN->TURN->client flows work
allowed-peer-ip=10.0.0.1
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
total-quota=1200
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
total-quota=1200
```
1. Also consider supporting TLS/DTLS. To do this, add the following settings
to `turnserver.conf`:
# TLS certificates, including intermediate certs.
# For Let's Encrypt certificates, use `fullchain.pem` here.
cert=/path/to/fullchain.pem
```
# TLS certificates, including intermediate certs.
# For Let's Encrypt certificates, use `fullchain.pem` here.
cert=/path/to/fullchain.pem
# TLS private key file
pkey=/path/to/privkey.pem
# TLS private key file
pkey=/path/to/privkey.pem
```
In this case, replace the `turn:` schemes in the `turn_uri` settings below
with `turns:`.
@@ -126,7 +140,9 @@ This will install and start a systemd service called `coturn`.
If you want to try it anyway, you will at least need to tell coturn its
external IP address:
external-ip=192.88.99.1
```
external-ip=192.88.99.1
```
... and your NAT gateway must forward all of the relayed ports directly
(eg, port 56789 on the external IP must always be forwarded to port
@@ -186,7 +202,7 @@ After updating the homeserver configuration, you must restart synapse:
./synctl restart
```
* If you use systemd:
```
```sh
systemctl restart matrix-synapse.service
```
... and then reload any clients (or wait an hour for them to refresh their


@@ -85,6 +85,29 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.47.0
## Removal of old Room Admin API
The following admin APIs were deprecated in [Synapse 1.34](https://github.com/matrix-org/synapse/blob/v1.34.0/CHANGES.md#deprecations-and-removals)
(released on 2021-05-17) and have now been removed:
- `POST /_synapse/admin/v1/rooms/<room_id>/delete`
Any scripts still using the above APIs should be converted to use the
[Delete Room API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#delete-room-api).
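For example, a script that used to POST to the removed endpoint could call the current Delete Room API roughly as follows (a sketch: the homeserver address, room ID and admin access token are placeholders, and the request options are described in the linked documentation):

```python
# Sketch of calling the replacement Delete Room API. The homeserver address,
# room ID and admin access token are placeholders.
import json
import urllib.request

room_id = "!someroom:example.com"
access_token = "<admin access token>"

request = urllib.request.Request(
    f"https://matrix.example.com/_synapse/admin/v1/rooms/{room_id}",
    data=json.dumps({}).encode(),  # request options are described in the linked docs
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    },
    method="DELETE",
)
with urllib.request.urlopen(request) as resp:
    print(json.load(resp))
```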
## Deprecation of the `user_may_create_room_with_invites` module callback
The `user_may_create_room_with_invites` module callback is deprecated and will be removed in a future
version of Synapse. Modules implementing this callback can instead implement
[`user_may_invite`](https://matrix-org.github.io/synapse/latest/modules/spam_checker_callbacks.html#user_may_invite)
and use the [`get_room_state`](https://github.com/matrix-org/synapse/blob/872f23b95fa980a61b0866c1475e84491991fa20/synapse/module_api/__init__.py#L869-L876)
module API method to infer whether the invite is happening in the context of creating a
room.
We plan to remove this callback in January 2022.
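A rough sketch of the suggested replacement is shown below: a module that registers `user_may_invite` and uses `get_room_state` to look at the room the invite targets. The class name and the heuristic for detecting "the room is still being created" are illustrative assumptions, not something the docs prescribe:

```python
# A sketch only: the class name and the "room is still being created"
# heuristic (creator is the inviter and nobody else has joined yet) are
# assumptions made for illustration.
from synapse.module_api import ModuleApi


class InviteRulesExample:
    def __init__(self, config: dict, api: ModuleApi):
        self._api = api
        api.register_spam_checker_callbacks(user_may_invite=self.user_may_invite)

    async def user_may_invite(self, inviter: str, invitee: str, room_id: str) -> bool:
        # Only fetch the state events we care about.
        state = await self._api.get_room_state(
            room_id,
            [("m.room.create", ""), ("m.room.member", None)],
        )
        create_event = state.get(("m.room.create", ""))
        member_keys = [key for key in state if key[0] == "m.room.member"]

        room_being_created = (
            create_event is not None
            and create_event.sender == inviter
            and len(member_keys) <= 1
        )
        if room_being_created:
            # The logic that used to live in user_may_create_room_with_invites
            # would go here.
            return self._allow_invite_during_creation(inviter, invitee)
        return True

    def _allow_invite_during_creation(self, inviter: str, invitee: str) -> bool:
        # Placeholder policy: allow everything.
        return True
```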
# Upgrading to v1.45.0
## Changes required to media storage provider modules when reading from the Synapse configuration object
@@ -1163,16 +1186,20 @@ For more information on configuring TLS certificates see the
For users who have installed Synapse into a virtualenv, we recommend
doing this by creating a new virtualenv. For example:
virtualenv -p python3 ~/synapse/env3
source ~/synapse/env3/bin/activate
pip install matrix-synapse
```sh
virtualenv -p python3 ~/synapse/env3
source ~/synapse/env3/bin/activate
pip install matrix-synapse
```
You can then start synapse as normal, having activated the new
virtualenv:
cd ~/synapse
source env3/bin/activate
synctl start
```sh
cd ~/synapse
source env3/bin/activate
synctl start
```
Users who have installed from distribution packages should see the
relevant package documentation. See below for notes on Debian
@@ -1184,34 +1211,38 @@ For more information on configuring TLS certificates see the
`<server>.log.config` file. For example, if your `log.config`
file contains:
handlers:
file:
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: homeserver.log
maxBytes: 104857600
backupCount: 10
filters: [context]
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
```yaml
handlers:
file:
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: homeserver.log
maxBytes: 104857600
backupCount: 10
filters: [context]
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
```
Then you should update this to be:
handlers:
file:
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: homeserver.log
maxBytes: 104857600
backupCount: 10
filters: [context]
encoding: utf8
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
```yaml
handlers:
file:
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: homeserver.log
maxBytes: 104857600
backupCount: 10
filters: [context]
encoding: utf8
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
```
There is no need to revert this change if downgrading to
Python 2.
@@ -1297,24 +1328,28 @@ with the HS remotely has been removed.
It has been replaced by specifying a list of application service
registrations in `homeserver.yaml`:
app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
```yaml
app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
```
Where `registration-01.yaml` looks like:
url: <String> # e.g. "https://my.application.service.com"
as_token: <String>
hs_token: <String>
sender_localpart: <String> # This is a new field which denotes the user_id localpart when using the AS token
namespaces:
users:
- exclusive: <Boolean>
regex: <String> # e.g. "@prefix_.*"
aliases:
- exclusive: <Boolean>
regex: <String>
rooms:
- exclusive: <Boolean>
regex: <String>
```yaml
url: <String> # e.g. "https://my.application.service.com"
as_token: <String>
hs_token: <String>
sender_localpart: <String> # This is a new field which denotes the user_id localpart when using the AS token
namespaces:
users:
- exclusive: <Boolean>
regex: <String> # e.g. "@prefix_.*"
aliases:
- exclusive: <Boolean>
regex: <String>
rooms:
- exclusive: <Boolean>
regex: <String>
```
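If you are converting existing registrations, a small sketch like the following can be used to check that a registration file parses and carries the fields shown above (this is not Synapse's own validation; the script name and strictness are assumptions):

```python
# Quick parse check for a registration file; run as:
#   python check_registration.py registration-01.yaml
# (hypothetical script name). This is not Synapse's own validation.
import sys
import yaml

with open(sys.argv[1]) as f:
    registration = yaml.safe_load(f)

# The fields shown in the example above.
for field in ("url", "as_token", "hs_token", "sender_localpart", "namespaces"):
    if field not in registration:
        sys.exit(f"missing field: {field}")

print("ok:", registration["sender_localpart"])
```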
# Upgrading to v0.8.0


@@ -24,11 +24,6 @@ Finally, we also stylise the chapter titles in the left sidebar by indenting the
slightly so that they are more visually distinguishable from the section headers
(the bold titles). This is done through the `indent-section-headers.css` file.
In addition to these modifications, we have added a version picker to the documentation.
Users can switch between documentations for different versions of Synapse.
This functionality was implemented through the `version-picker.js` and
`version-picker.css` files.
More information can be found in mdbook's official documentation for
[injecting page JS/CSS](https://rust-lang.github.io/mdBook/format/config.html)
and


@@ -131,18 +131,6 @@
<i class="fa fa-search"></i>
</button>
{{/if}}
<div class="version-picker">
<div class="dropdown">
<div class="select">
<span></span>
<i class="fa fa-chevron-down"></i>
</div>
<input type="hidden" name="version">
<ul class="dropdown-menu">
<!-- Versions will be added dynamically in version-picker.js -->
</ul>
</div>
</div>
</div>
<h1 class="menu-title">{{ book_title }}</h1>
@@ -321,4 +309,4 @@
{{/if}}
</body>
</html>
</html>


@@ -1,78 +0,0 @@
.version-picker {
display: flex;
align-items: center;
}
.version-picker .dropdown {
width: 130px;
max-height: 29px;
margin-left: 10px;
display: inline-block;
border-radius: 4px;
border: 1px solid var(--theme-popup-border);
position: relative;
font-size: 13px;
color: var(--fg);
height: 100%;
text-align: left;
}
.version-picker .dropdown .select {
cursor: pointer;
display: block;
padding: 5px 2px 5px 15px;
}
.version-picker .dropdown .select > i {
font-size: 10px;
color: var(--fg);
cursor: pointer;
float: right;
line-height: 20px !important;
}
.version-picker .dropdown:hover {
border: 1px solid var(--theme-popup-border);
}
.version-picker .dropdown:active {
background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active:hover,
.version-picker .dropdown.active {
border: 1px solid var(--theme-popup-border);
border-radius: 2px 2px 0 0;
background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active .select > i {
transform: rotate(-180deg);
}
.version-picker .dropdown .dropdown-menu {
position: absolute;
background-color: var(--theme-popup-bg);
width: 100%;
left: -1px;
right: 1px;
margin-top: 1px;
border: 1px solid var(--theme-popup-border);
border-radius: 0 0 4px 4px;
overflow: hidden;
display: none;
max-height: 300px;
overflow-y: auto;
z-index: 9;
}
.version-picker .dropdown .dropdown-menu li {
font-size: 12px;
padding: 6px 20px;
cursor: pointer;
}
.version-picker .dropdown .dropdown-menu {
padding: 0;
list-style: none;
}
.version-picker .dropdown .dropdown-menu li:hover {
background-color: var(--theme-hover);
}
.version-picker .dropdown .dropdown-menu li.active::before {
display: inline-block;
content: "✓";
margin-inline-start: -14px;
width: 14px;
}


@@ -1,127 +0,0 @@
const dropdown = document.querySelector('.version-picker .dropdown');
const dropdownMenu = dropdown.querySelector('.dropdown-menu');
fetchVersions(dropdown, dropdownMenu).then(() => {
initializeVersionDropdown(dropdown, dropdownMenu);
});
/**
* Initialize the dropdown functionality for version selection.
*
* @param {Element} dropdown - The dropdown element.
* @param {Element} dropdownMenu - The dropdown menu element.
*/
function initializeVersionDropdown(dropdown, dropdownMenu) {
// Toggle the dropdown menu on click
dropdown.addEventListener('click', function () {
this.setAttribute('tabindex', 1);
this.classList.toggle('active');
dropdownMenu.style.display = (dropdownMenu.style.display === 'block') ? 'none' : 'block';
});
// Remove the 'active' class and hide the dropdown menu on focusout
dropdown.addEventListener('focusout', function () {
this.classList.remove('active');
dropdownMenu.style.display = 'none';
});
// Handle item selection within the dropdown menu
const dropdownMenuItems = dropdownMenu.querySelectorAll('li');
dropdownMenuItems.forEach(function (item) {
item.addEventListener('click', function () {
dropdownMenuItems.forEach(function (item) {
item.classList.remove('active');
});
this.classList.add('active');
dropdown.querySelector('span').textContent = this.textContent;
dropdown.querySelector('input').value = this.getAttribute('id');
window.location.href = changeVersion(window.location.href, this.textContent);
});
});
};
/**
* This function fetches the available versions from a GitHub repository
* and inserts them into the version picker.
*
* @param {Element} dropdown - The dropdown element.
* @param {Element} dropdownMenu - The dropdown menu element.
* @returns {Promise<Array<string>>} A promise that resolves with an array of available versions.
*/
function fetchVersions(dropdown, dropdownMenu) {
return new Promise((resolve, reject) => {
window.addEventListener("load", () => {
fetch("https://api.github.com/repos/matrix-org/synapse/git/trees/gh-pages", {
cache: "force-cache",
}).then(res =>
res.json()
).then(resObject => {
const excluded = ['dev-docs', 'v1.91.0', 'v1.80.0', 'v1.69.0'];
const tree = resObject.tree.filter(item => item.type === "tree" && !excluded.includes(item.path));
const versions = tree.map(item => item.path).sort(sortVersions);
// Create a list of <li> items for versions
versions.forEach((version) => {
const li = document.createElement("li");
li.textContent = version;
li.id = version;
if (window.SYNAPSE_VERSION === version) {
li.classList.add('active');
dropdown.querySelector('span').textContent = version;
dropdown.querySelector('input').value = version;
}
dropdownMenu.appendChild(li);
});
resolve(versions);
}).catch(ex => {
console.error("Failed to fetch version data", ex);
reject(ex);
})
});
});
}
/**
* Custom sorting function to sort an array of version strings.
*
* @param {string} a - The first version string to compare.
* @param {string} b - The second version string to compare.
* @returns {number} - A negative number if a should come before b, a positive number if b should come before a, or 0 if they are equal.
*/
function sortVersions(a, b) {
// Put 'develop' and 'latest' at the top
if (a === 'develop' || a === 'latest') return -1;
if (b === 'develop' || b === 'latest') return 1;
const versionA = (a.match(/v\d+(\.\d+)+/) || [])[0];
const versionB = (b.match(/v\d+(\.\d+)+/) || [])[0];
return versionB.localeCompare(versionA);
}
/**
* Change the version in a URL path.
*
* @param {string} url - The original URL to be modified.
* @param {string} newVersion - The new version to replace the existing version in the URL.
* @returns {string} The updated URL with the new version.
*/
function changeVersion(url, newVersion) {
const parsedURL = new URL(url);
const pathSegments = parsedURL.pathname.split('/');
// Modify the version
pathSegments[2] = newVersion;
// Reconstruct the URL
parsedURL.pathname = pathSegments.join('/');
return parsedURL.href;
}


@@ -1 +0,0 @@
window.SYNAPSE_VERSION = 'v1.46';


@@ -443,19 +443,19 @@ In the `media_repository` worker configuration file, configure the http listener
expose the `media` resource. For example:
```yaml
worker_listeners:
- type: http
port: 8085
resources:
- names:
- media
worker_listeners:
- type: http
port: 8085
resources:
- names:
- media
```
Note that if running multiple media repositories they must be on the same server
and you must configure a single instance to run the background tasks, e.g.:
```yaml
media_instance_running_background_jobs: "media-repository-1"
media_instance_running_background_jobs: "media-repository-1"
```
Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for both inbound client and federation requests (if they are handled separately).
@@ -492,7 +492,9 @@ must therefore be configured with the location of the main instance, via
the `worker_main_http_uri` setting in the `frontend_proxy` worker configuration
file. For example:
worker_main_http_uri: http://127.0.0.1:8008
```yaml
worker_main_http_uri: http://127.0.0.1:8008
```
### Historical apps


@@ -16,6 +16,7 @@ no_implicit_optional = True
files =
scripts-dev/sign_json,
synapse/__init__.py,
synapse/api,
synapse/appservice,
synapse/config,
@@ -31,16 +32,7 @@ files =
synapse/federation,
synapse/groups,
synapse/handlers,
synapse/http/additional_resource.py,
synapse/http/client.py,
synapse/http/federation/matrix_federation_agent.py,
synapse/http/federation/srv_resolver.py,
synapse/http/federation/well_known_resolver.py,
synapse/http/matrixfederationclient.py,
synapse/http/proxyagent.py,
synapse/http/servlet.py,
synapse/http/server.py,
synapse/http/site.py,
synapse/http,
synapse/logging,
synapse/metrics,
synapse/module_api,
@@ -61,6 +53,7 @@ files =
synapse/storage/databases/main/keys.py,
synapse/storage/databases/main/pusher.py,
synapse/storage/databases/main/registration.py,
synapse/storage/databases/main/relations.py,
synapse/storage/databases/main/session.py,
synapse/storage/databases/main/stream.py,
synapse/storage/databases/main/ui_auth.py,


@@ -42,10 +42,10 @@ echo "--------------------------"
echo
matched=0
for f in `git diff --name-only FETCH_HEAD... -- changelog.d`; do
for f in $(git diff --name-only FETCH_HEAD... -- changelog.d); do
# check that any modified newsfiles on this branch end with a full stop.
lastchar=`tr -d '\n' < $f | tail -c 1`
if [ $lastchar != '.' -a $lastchar != '!' ]; then
lastchar=$(tr -d '\n' < "$f" | tail -c 1)
if [ "$lastchar" != '.' ] && [ "$lastchar" != '!' ]; then
echo -e "\e[31mERROR: newsfragment $f does not end with a '.' or '!'\e[39m" >&2
echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2
exit 1


@@ -25,7 +25,7 @@
# terminators are found, 0 otherwise.
# cd to the root of the repository
cd `dirname $0`/..
cd "$(dirname "$0")/.." || exit
# Find and print files with non-unix line terminators
if find . -path './.git/*' -prune -o -type f -print0 | xargs -0 grep -I -l $'\r$'; then


@@ -24,7 +24,7 @@
set -e
# Change to the repository root
cd "$(dirname $0)/.."
cd "$(dirname "$0")/.."
# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
@@ -61,8 +61,8 @@ cd "$COMPLEMENT_DIR"
EXTRA_COMPLEMENT_ARGS=""
if [[ -n "$1" ]]; then
# A test name regex has been set, supply it to Complement
EXTRA_COMPLEMENT_ARGS+="-run $1 "
EXTRA_COMPLEMENT_ARGS=(-run "$1")
fi
# Run the tests!
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403,msc2716 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403,msc2716 -count=1 "${EXTRA_COMPLEMENT_ARGS[@]}" ./tests/...


@@ -3,7 +3,7 @@
# Exits with 0 if there are no problems, or another code otherwise.
# cd to the root of the repository
cd `dirname $0`/..
cd "$(dirname "$0")/.." || exit
# Restore backup of sample config upon script exit
trap "mv docs/sample_config.yaml.bak docs/sample_config.yaml" EXIT


@@ -60,5 +60,5 @@ DEBIAN_FRONTEND=noninteractive apt-get install -y devscripts
# Update the Debian changelog.
ver=${1}
dch -M -v $(sed -Ee 's/(rc|a|b|c)/~\1/' <<<$ver) "New synapse release $ver."
dch -M -v "$(sed -Ee 's/(rc|a|b|c)/~\1/' <<<"$ver")" "New synapse release $ver."
dch -M -r -D stable ""


@@ -4,7 +4,7 @@
set -e
cd `dirname $0`/..
cd "$(dirname "$0")/.."
SAMPLE_CONFIG="docs/sample_config.yaml"
SAMPLE_LOG_CONFIG="docs/sample_log_config.yaml"


@@ -4,6 +4,6 @@ set -e
# Fetch the current GitHub issue number, add one to it -- presto! The likely
# next PR number.
CURRENT_NUMBER=`curl -s "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1" | jq -r ".[0].number"`
CURRENT_NUMBER=$(curl -s "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1" | jq -r ".[0].number")
CURRENT_NUMBER=$((CURRENT_NUMBER+1))
echo $CURRENT_NUMBER


@@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.46.0"
__version__ = "1.46.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@@ -20,7 +20,12 @@ from typing import List
import attr
from synapse.config._base import RootConfig, find_config_files, read_config_files
from synapse.config._base import (
Config,
RootConfig,
find_config_files,
read_config_files,
)
from synapse.config.database import DatabaseConfig
from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
from synapse.storage.engines import create_engine
@@ -126,7 +131,7 @@ def main():
config_dict,
)
since_ms = time.time() * 1000 - config.parse_duration(config_args.since)
since_ms = time.time() * 1000 - Config.parse_duration(config_args.since)
exclude_users_with_email = config_args.exclude_emails
include_context = not config_args.only_users


@@ -596,3 +596,10 @@ class ShadowBanError(Exception):
This should be caught and a proper "fake" success response sent to the user.
"""
class ModuleFailedException(Exception):
"""
Raised when a module API callback fails, for example because it raised an
exception.
"""


@@ -18,7 +18,8 @@ import json
from typing import (
TYPE_CHECKING,
Awaitable,
Container,
Callable,
Dict,
Iterable,
List,
Optional,
@@ -217,19 +218,19 @@ class FilterCollection:
return self._filter_json
def timeline_limit(self) -> int:
return self._room_timeline_filter.limit()
return self._room_timeline_filter.limit
def presence_limit(self) -> int:
return self._presence_filter.limit()
return self._presence_filter.limit
def ephemeral_limit(self) -> int:
return self._room_ephemeral_filter.limit()
return self._room_ephemeral_filter.limit
def lazy_load_members(self) -> bool:
return self._room_state_filter.lazy_load_members()
return self._room_state_filter.lazy_load_members
def include_redundant_members(self) -> bool:
return self._room_state_filter.include_redundant_members()
return self._room_state_filter.include_redundant_members
def filter_presence(
self, events: Iterable[UserPresenceState]
@@ -276,19 +277,25 @@ class Filter:
def __init__(self, filter_json: JsonDict):
self.filter_json = filter_json
self.types = self.filter_json.get("types", None)
self.not_types = self.filter_json.get("not_types", [])
self.limit = filter_json.get("limit", 10)
self.lazy_load_members = filter_json.get("lazy_load_members", False)
self.include_redundant_members = filter_json.get(
"include_redundant_members", False
)
self.rooms = self.filter_json.get("rooms", None)
self.not_rooms = self.filter_json.get("not_rooms", [])
self.types = filter_json.get("types", None)
self.not_types = filter_json.get("not_types", [])
self.senders = self.filter_json.get("senders", None)
self.not_senders = self.filter_json.get("not_senders", [])
self.rooms = filter_json.get("rooms", None)
self.not_rooms = filter_json.get("not_rooms", [])
self.contains_url = self.filter_json.get("contains_url", None)
self.senders = filter_json.get("senders", None)
self.not_senders = filter_json.get("not_senders", [])
self.labels = self.filter_json.get("org.matrix.labels", None)
self.not_labels = self.filter_json.get("org.matrix.not_labels", [])
self.contains_url = filter_json.get("contains_url", None)
self.labels = filter_json.get("org.matrix.labels", None)
self.not_labels = filter_json.get("org.matrix.not_labels", [])
def filters_all_types(self) -> bool:
return "*" in self.not_types
@@ -302,76 +309,95 @@ class Filter:
def check(self, event: FilterEvent) -> bool:
"""Checks whether the filter matches the given event.
Args:
event: The event, account data, or presence to check against this
filter.
Returns:
True if the event matches
True if the event matches the filter.
"""
# We usually get the full "events" as dictionaries coming through,
# except for presence which actually gets passed around as its own
# namedtuple type.
if isinstance(event, UserPresenceState):
sender: Optional[str] = event.user_id
room_id = None
ev_type = "m.presence"
contains_url = False
labels: List[str] = []
user_id = event.user_id
field_matchers = {
"senders": lambda v: user_id == v,
"types": lambda v: "m.presence" == v,
}
return self._check_fields(field_matchers)
else:
content = event.get("content")
# Content is assumed to be a dict below, so ensure it is. This should
# always be true for events, but account_data has been allowed to
# have non-dict content.
if not isinstance(content, dict):
content = {}
sender = event.get("sender", None)
if not sender:
# Presence events had their 'sender' in content.user_id, but are
# now handled above. We don't know if anything else uses this
# form. TODO: Check this and probably remove it.
content = event.get("content")
# account_data has been allowed to have non-dict content, so
# check type first
if isinstance(content, dict):
sender = content.get("user_id")
sender = content.get("user_id")
room_id = event.get("room_id", None)
ev_type = event.get("type", None)
content = event.get("content") or {}
# check if there is a string url field in the content for filtering purposes
contains_url = isinstance(content.get("url"), str)
labels = content.get(EventContentFields.LABELS, [])
return self.check_fields(room_id, sender, ev_type, labels, contains_url)
field_matchers = {
"rooms": lambda v: room_id == v,
"senders": lambda v: sender == v,
"types": lambda v: _matches_wildcard(ev_type, v),
"labels": lambda v: v in labels,
}
def check_fields(
self,
room_id: Optional[str],
sender: Optional[str],
event_type: Optional[str],
labels: Container[str],
contains_url: bool,
) -> bool:
result = self._check_fields(field_matchers)
if not result:
return result
contains_url_filter = self.contains_url
if contains_url_filter is not None:
contains_url = isinstance(content.get("url"), str)
if contains_url_filter != contains_url:
return False
return True
def _check_fields(self, field_matchers: Dict[str, Callable[[str], bool]]) -> bool:
"""Checks whether the filter matches the given event fields.
Args:
field_matchers: A map of attribute name to callable to use for checking
particular fields.
The attribute name and an inverse (not_<attribute name>) must
exist on the Filter.
The callable should return true if the event's value matches the
filter's value.
Returns:
True if the event fields match
"""
literal_keys = {
"rooms": lambda v: room_id == v,
"senders": lambda v: sender == v,
"types": lambda v: _matches_wildcard(event_type, v),
"labels": lambda v: v in labels,
}
for name, match_func in literal_keys.items():
for name, match_func in field_matchers.items():
# If the event matches one of the disallowed values, reject it.
not_name = "not_%s" % (name,)
disallowed_values = getattr(self, not_name)
if any(map(match_func, disallowed_values)):
return False
# Otherwise, if the event does not match at least one of the allowed values,
# reject it.
allowed_values = getattr(self, name)
if allowed_values is not None:
if not any(map(match_func, allowed_values)):
return False
contains_url_filter = self.filter_json.get("contains_url")
if contains_url_filter is not None:
if contains_url_filter != contains_url:
return False
# Otherwise, accept it.
return True
def filter_rooms(self, room_ids: Iterable[str]) -> Set[str]:
@@ -385,10 +411,10 @@ class Filter:
"""
room_ids = set(room_ids)
disallowed_rooms = set(self.filter_json.get("not_rooms", []))
disallowed_rooms = set(self.not_rooms)
room_ids -= disallowed_rooms
allowed_rooms = self.filter_json.get("rooms", None)
allowed_rooms = self.rooms
if allowed_rooms is not None:
room_ids &= set(allowed_rooms)
@@ -397,15 +423,6 @@ class Filter:
def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
return list(filter(self.check, events))
def limit(self) -> int:
return self.filter_json.get("limit", 10)
def lazy_load_members(self) -> bool:
return self.filter_json.get("lazy_load_members", False)
def include_redundant_members(self) -> bool:
return self.filter_json.get("include_redundant_members", False)
def with_room_ids(self, room_ids: Iterable[str]) -> "Filter":
"""Returns a new filter with the given room IDs appended.


@@ -45,6 +45,7 @@ from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.events.third_party_rules import load_legacy_third_party_event_rules
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.logging.context import PreserveLoggingContext
from synapse.metrics import register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
@@ -351,6 +352,10 @@ async def start(hs: "HomeServer"):
GAIResolver(reactor, getThreadPool=lambda: resolver_threadpool)
)
# Register the threadpools with our metrics.
register_threadpool("default", reactor.getThreadPool())
register_threadpool("gai_resolver", resolver_threadpool)
# Set up the SIGHUP machinery.
if hasattr(signal, "SIGHUP"):


@@ -145,6 +145,20 @@ class FileExfiltrationWriter(ExfiltrationWriter):
for event in state.values():
print(json.dumps(event), file=f)
def write_knock(self, room_id, event, state):
self.write_events(room_id, [event])
# We write the knock state somewhere else as they aren't full events
# and are only a subset of the state at the event.
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
knock_state = os.path.join(room_directory, "knock_state")
with open(knock_state, "a") as f:
for event in state.values():
print(json.dumps(event), file=f)
def finished(self):
return self.base_directory


@@ -100,6 +100,7 @@ from synapse.rest.client.register import (
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
@@ -318,6 +319,8 @@ class GenericWorkerServer(HomeServer):
resources.update({CLIENT_API_PREFIX: resource})
resources.update(build_synapse_client_resource_tree(self))
resources.update({"/.well-known": well_known_resource(self)})
elif name == "federation":
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
elif name == "media":


@@ -66,7 +66,7 @@ from synapse.rest.admin import AdminRestResource
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import WellKnownResource
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.httpresourcetree import create_resource_tree
@@ -189,7 +189,7 @@ class SynapseHomeServer(HomeServer):
"/_matrix/client/unstable": client_resource,
"/_matrix/client/v2_alpha": client_resource,
"/_matrix/client/versions": client_resource,
"/.well-known/matrix/client": WellKnownResource(self),
"/.well-known": well_known_resource(self),
"/_synapse/admin": AdminRestResource(self),
**build_synapse_client_resource_tree(self),
}


@@ -262,6 +262,7 @@ class ServerConfig(Config):
self.print_pidfile = config.get("print_pidfile")
self.user_agent_suffix = config.get("user_agent_suffix")
self.use_frozen_dicts = config.get("use_frozen_dicts", False)
self.serve_server_wellknown = config.get("serve_server_wellknown", False)
self.public_baseurl = config.get("public_baseurl")
if self.public_baseurl is not None:
@@ -774,6 +775,24 @@ class ServerConfig(Config):
#
#public_baseurl: https://example.com/
# Uncomment the following to tell other servers to send federation traffic on
# port 443.
#
# By default, other servers will try to reach our server on port 8448, which can
# be inconvenient in some environments.
#
# Provided 'https://<server_name>/' on port 443 is routed to Synapse, this
# option configures Synapse to serve a file at
# 'https://<server_name>/.well-known/matrix/server'. This will tell other
# servers to send traffic to port 443 instead.
#
# See https://matrix-org.github.io/synapse/latest/delegate.html for more
# information.
#
# Defaults to 'false'.
#
#serve_server_wellknown: true
# Set the soft limit on the number of file descriptors synapse can use
# Zero is used to indicate synapse should set the soft limit to the
# hard limit.


@@ -22,6 +22,7 @@ import attr
from signedjson.key import (
decode_verify_key_bytes,
encode_verify_key_base64,
get_verify_key,
is_signing_algorithm_supported,
)
from signedjson.sign import (
@@ -30,6 +31,7 @@ from signedjson.sign import (
signature_ids,
verify_signed_json,
)
from signedjson.types import VerifyKey
from unpaddedbase64 import decode_base64
from twisted.internet import defer
@@ -177,6 +179,8 @@ class Keyring:
clock=hs.get_clock(),
process_batch_callback=self._inner_fetch_key_requests,
)
self.verify_key = get_verify_key(hs.signing_key)
self.hostname = hs.hostname
async def verify_json_for_server(
self,
@@ -196,6 +200,7 @@ class Keyring:
validity_time: timestamp at which we require the signing key to
be valid. (0 implies we don't care)
"""
request = VerifyJsonRequest.from_json_object(
server_name,
json_object,
@@ -262,6 +267,11 @@ class Keyring:
Codes.UNAUTHORIZED,
)
# If we are the originating server don't fetch verify key for self over federation
if verify_request.server_name == self.hostname:
await self._process_json(self.verify_key, verify_request)
return
# Add the keys we need to verify to the queue for retrieval. We queue
# up requests for the same server so we don't end up with many in flight
# requests for the same keys.
@@ -285,35 +295,8 @@ class Keyring:
if key_result.valid_until_ts < verify_request.minimum_valid_until_ts:
continue
verify_key = key_result.verify_key
json_object = verify_request.get_json_object()
try:
verify_signed_json(
json_object,
verify_request.server_name,
verify_key,
)
verified = True
except SignatureVerifyException as e:
logger.debug(
"Error verifying signature for %s:%s:%s with key %s: %s",
verify_request.server_name,
verify_key.alg,
verify_key.version,
encode_verify_key_base64(verify_key),
str(e),
)
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s: %s"
% (
verify_request.server_name,
verify_key.alg,
verify_key.version,
str(e),
),
Codes.UNAUTHORIZED,
)
await self._process_json(key_result.verify_key, verify_request)
verified = True
if not verified:
raise SynapseError(
@@ -322,6 +305,39 @@ class Keyring:
Codes.UNAUTHORIZED,
)
async def _process_json(
self, verify_key: VerifyKey, verify_request: VerifyJsonRequest
) -> None:
"""Processes the `VerifyJsonRequest`. Raises if the signature can't be
verified.
"""
try:
verify_signed_json(
verify_request.get_json_object(),
verify_request.server_name,
verify_key,
)
except SignatureVerifyException as e:
logger.debug(
"Error verifying signature for %s:%s:%s with key %s: %s",
verify_request.server_name,
verify_key.alg,
verify_key.version,
encode_verify_key_base64(verify_key),
str(e),
)
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s: %s"
% (
verify_request.server_name,
verify_key.alg,
verify_key.version,
str(e),
),
Codes.UNAUTHORIZED,
)
async def _inner_fetch_key_requests(
self, requests: List[_FetchKeyRequest]
) -> Dict[str, Dict[str, FetchKeyResult]]:


@@ -14,7 +14,7 @@
import logging
from typing import TYPE_CHECKING, Any, Awaitable, Callable, List, Optional, Tuple
from synapse.api.errors import SynapseError
from synapse.api.errors import ModuleFailedException, SynapseError
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.types import Requester, StateMap
@@ -36,6 +36,7 @@ CHECK_THREEPID_CAN_BE_INVITED_CALLBACK = Callable[
CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK = Callable[
[str, StateMap[EventBase], str], Awaitable[bool]
]
ON_NEW_EVENT_CALLBACK = Callable[[EventBase, StateMap[EventBase]], Awaitable]
def load_legacy_third_party_event_rules(hs: "HomeServer") -> None:
@@ -152,6 +153,7 @@ class ThirdPartyEventRules:
self._check_visibility_can_be_modified_callbacks: List[
CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK
] = []
self._on_new_event_callbacks: List[ON_NEW_EVENT_CALLBACK] = []
def register_third_party_rules_callbacks(
self,
@@ -163,6 +165,7 @@ class ThirdPartyEventRules:
check_visibility_can_be_modified: Optional[
CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK
] = None,
on_new_event: Optional[ON_NEW_EVENT_CALLBACK] = None,
) -> None:
"""Register callbacks from modules for each hook."""
if check_event_allowed is not None:
@@ -181,6 +184,9 @@ class ThirdPartyEventRules:
check_visibility_can_be_modified,
)
if on_new_event is not None:
self._on_new_event_callbacks.append(on_new_event)
async def check_event_allowed(
self, event: EventBase, context: EventContext
) -> Tuple[bool, Optional[dict]]:
@@ -227,9 +233,10 @@ class ThirdPartyEventRules:
# This module callback needs a rework so that hacks such as
# this one are not necessary.
raise e
except Exception as e:
logger.warning("Failed to run module API callback %s: %s", callback, e)
continue
except Exception:
raise ModuleFailedException(
"Failed to run `check_event_allowed` module API callback"
)
# Return if the event shouldn't be allowed or if the module came up with a
# replacement dict for the event.
@@ -321,6 +328,31 @@ class ThirdPartyEventRules:
return True
async def on_new_event(self, event_id: str) -> None:
"""Let modules act on events after they've been sent (e.g. auto-accepting
invites, etc.)
Args:
event_id: The ID of the event.
Raises:
ModuleFailureError if a callback raised any exception.
"""
# Bail out early without hitting the store if we don't have any callbacks
if len(self._on_new_event_callbacks) == 0:
return
event = await self.store.get_event(event_id)
state_events = await self._get_state_map_for_room(event.room_id)
for callback in self._on_new_event_callbacks:
try:
await callback(event, state_events)
except Exception as e:
logger.exception(
"Failed to run module API callback %s: %s", callback, e
)
async def _get_state_map_for_room(self, room_id: str) -> StateMap[EventBase]:
"""Given a room ID, return the state events of that room.

View File

@@ -227,7 +227,7 @@ class FederationClient(FederationBase):
)
async def backfill(
self, dest: str, room_id: str, limit: int, extremities: Iterable[str]
self, dest: str, room_id: str, limit: int, extremities: Collection[str]
) -> Optional[List[EventBase]]:
"""Requests some more historic PDUs for the given room from the
given destination server.
@@ -237,6 +237,8 @@ class FederationClient(FederationBase):
room_id: The room_id to backfill.
limit: The maximum number of events to return.
extremities: our current backwards extremities, to backfill from
Must be a Collection that is falsy when empty.
(Iterable is not enough here!)
"""
logger.debug("backfill extrem=%s", extremities)
@@ -250,11 +252,22 @@ class FederationClient(FederationBase):
logger.debug("backfill transaction_data=%r", transaction_data)
if not isinstance(transaction_data, dict):
# TODO we probably want an exception type specific to federation
# client validation.
raise TypeError("Backfill transaction_data is not a dict.")
transaction_data_pdus = transaction_data.get("pdus")
if not isinstance(transaction_data_pdus, list):
# TODO we probably want an exception type specific to federation
# client validation.
raise TypeError("transaction_data.pdus is not a list.")
room_version = await self.store.get_room_version(room_id)
pdus = [
event_from_pdu_json(p, room_version, outlier=False)
for p in transaction_data["pdus"]
for p in transaction_data_pdus
]
# Check signatures and hash of pdus, removing any from the list that fail checks


@@ -295,14 +295,16 @@ class FederationServer(FederationBase):
Returns:
HTTP response code and body
"""
response = await self.transaction_actions.have_responded(origin, transaction)
existing_response = await self.transaction_actions.have_responded(
origin, transaction
)
if response:
if existing_response:
logger.debug(
"[%s] We've already responded to this request",
transaction.transaction_id,
)
return response
return existing_response
logger.debug("[%s] Transaction is new", transaction.transaction_id)
@@ -632,7 +634,7 @@ class FederationServer(FederationBase):
async def on_make_knock_request(
self, origin: str, room_id: str, user_id: str, supported_versions: List[str]
) -> Dict[str, Union[EventBase, str]]:
) -> JsonDict:
"""We've received a /make_knock/ request, so we create a partial knock
event for the room and hand that back, along with the room version, to the knocking
homeserver. We do *not* persist or process this event until the other server has


@@ -149,7 +149,6 @@ class TransactionManager:
)
except HttpResponseException as e:
code = e.code
response = e.response
set_tag(tags.ERROR, True)


@@ -15,7 +15,19 @@
import logging
import urllib
from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Tuple, Union
from typing import (
Any,
Awaitable,
Callable,
Collection,
Dict,
Iterable,
List,
Mapping,
Optional,
Tuple,
Union,
)
import attr
import ijson
@@ -100,7 +112,7 @@ class TransportLayerClient:
@log_function
async def backfill(
self, destination: str, room_id: str, event_tuples: Iterable[str], limit: int
self, destination: str, room_id: str, event_tuples: Collection[str], limit: int
) -> Optional[JsonDict]:
"""Requests `limit` previous PDUs in a given context before list of
PDUs.
@@ -108,7 +120,9 @@ class TransportLayerClient:
Args:
destination
room_id
event_tuples
event_tuples:
Must be a Collection that is falsy when empty.
(Iterable is not enough here!)
limit
Returns:
@@ -786,7 +800,7 @@ class TransportLayerClient:
@log_function
def join_group(
self, destination: str, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
) -> Awaitable[JsonDict]:
"""Attempts to join a group"""
path = _create_v1_path("/groups/%s/users/%s/join", group_id, user_id)


@@ -90,6 +90,7 @@ class AdminHandler:
Membership.LEAVE,
Membership.BAN,
Membership.INVITE,
Membership.KNOCK,
),
)
@@ -122,6 +123,13 @@ class AdminHandler:
invited_state = invite.unsigned["invite_room_state"]
writer.write_invite(room_id, invite, invited_state)
if room.membership == Membership.KNOCK:
event_id = room.event_id
knock = await self.store.get_event(event_id, allow_none=True)
if knock:
knock_state = knock.unsigned["knock_room_state"]
writer.write_knock(room_id, knock, knock_state)
continue
# We only want to bother fetching events up to the last time they
@@ -238,6 +246,20 @@ class ExfiltrationWriter(metaclass=abc.ABCMeta):
"""
raise NotImplementedError()
@abc.abstractmethod
def write_knock(
self, room_id: str, event: EventBase, state: StateMap[dict]
) -> None:
"""Write a knock for the room, with associated knock state.
Args:
room_id: The room ID the knock is for.
event: The knock event.
state: A subset of the state at the knock, with a subset of the
event keys (type, state_key, content and sender).
"""
raise NotImplementedError()
@abc.abstractmethod
def finished(self) -> Any:
"""Called when all data has successfully been exported and written.

Some files were not shown because too many files have changed in this diff Show More