Compare commits

..

12 Commits

Author         SHA1        Message                                                     Date
Erik Johnston  6b2d6fdd33  Add some debugging                                          2020-05-15 15:33:11 +01:00
Erik Johnston  d263a4de02  Enable moving event persistence off of master               2020-05-14 17:25:42 +01:00
Erik Johnston  66c1dff3ba  Use new writers config                                      2020-05-14 17:25:41 +01:00
Erik Johnston  96b6023e3b  Make location of events writer configurable                 2020-05-14 17:25:09 +01:00
Erik Johnston  452019064c  Allow ReplicationRestResource to be added to workers        2020-05-14 17:25:09 +01:00
Erik Johnston  7c8e09bcf1  Add a worker store for search insertion                     2020-05-14 17:25:09 +01:00
Erik Johnston  e7f5ac4ed8  Fix lint                                                    2020-05-14 17:09:58 +01:00
Erik Johnston  208ab7b135  Fix typing and add assertion.                               2020-05-14 17:09:58 +01:00
Erik Johnston  41f558ccf7  Newsfile                                                    2020-05-14 17:09:58 +01:00
Erik Johnston  342796d6ac  Move push rules ID gen to push rules worker                 2020-05-14 17:09:58 +01:00
Erik Johnston  bc3fc3927f  Move events ID gens to EventWorkerStore                     2020-05-14 17:09:58 +01:00
Erik Johnston  d67a8b5455  Move replication event stream handling out of slave store  2020-05-14 17:09:58 +01:00
1246 changed files with 42396 additions and 101452 deletions

View File

@@ -0,0 +1,36 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from synapse.storage.engines import create_engine
logger = logging.getLogger("create_postgres_db")
if __name__ == "__main__":
    # Create a PostgresEngine.
    db_engine = create_engine({"name": "psycopg2", "args": {}})

    # Connect to postgres to create the base database.
    # We use "postgres" as a database because it's bound to exist and the "synapse" one
    # doesn't exist yet.
    db_conn = db_engine.module.connect(
        user="postgres", host="postgres", password="postgres", dbname="postgres"
    )
    db_conn.autocommit = True

    cur = db_conn.cursor()
    cur.execute("CREATE DATABASE synapse;")
    cur.close()

    db_conn.close()
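The script has to connect to the built-in `postgres` maintenance database (the `synapse` database does not exist yet), and it has to enable autocommit because PostgreSQL refuses to run `CREATE DATABASE` inside a transaction block. A minimal sketch of a re-run-safe variant — not part of this change set; it assumes the same hard-coded connection details and uses `psycopg2` directly rather than the engine wrapper:

```python
# Hypothetical re-run-safe variant: skip creation if the database already
# exists, so repeating the CI step is harmless. Not part of the diff above.
import psycopg2

db_conn = psycopg2.connect(
    user="postgres", host="postgres", password="postgres", dbname="postgres"
)
db_conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction

cur = db_conn.cursor()
cur.execute("SELECT 1 FROM pg_database WHERE datname = 'synapse'")
if cur.fetchone() is None:
    cur.execute("CREATE DATABASE synapse;")
cur.close()
db_conn.close()
```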

View File

@@ -1,31 +0,0 @@
#!/usr/bin/env python
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import psycopg2
# a very simple replacment for `psql`, to make up for the lack of the postgres client
# libraries in the synapse docker image.
# We use "postgres" as a database because it's bound to exist and the "synapse" one
# doesn't exist yet.
db_conn = psycopg2.connect(
    user="postgres", host="postgres", password="postgres", dbname="postgres"
)
db_conn.autocommit = True
cur = db_conn.cursor()
for c in sys.argv[1:]:
    cur.execute(c)
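Because the script treats each command-line argument as one SQL statement to execute, a caller can drop and recreate the test database in a single invocation — this is exactly how the removed lines of the `test_synapse_port_db.sh` diff below use it. An illustrative call, driven from Python here purely for demonstration:

```python
# Illustrative only: call the psql stand-in with one SQL statement per
# argument, mirroring the invocation in the buildkite script further down.
import subprocess

subprocess.check_call(
    [
        "./.buildkite/scripts/postgres_exec.py",
        "DROP DATABASE synapse",
        "CREATE DATABASE synapse",
    ]
)
```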

View File

@@ -1,16 +1,13 @@
-#!/usr/bin/env bash
-# this script is run by buildkite in a plain `bionic` container; it installs the
-# minimal requirements for tox and hands over to the py3-old tox environment.
+#!/bin/bash
+# this script is run by buildkite in a plain `xenial` container; it installs the
+# minimal requirements for tox and hands over to the py35-old tox environment.
 set -ex
 apt-get update
-apt-get install -y python3 python3-dev python3-pip libxml2-dev libxslt-dev xmlsec1 zlib1g-dev tox
+apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev tox
 export LANG="C.UTF-8"
-# Prevent virtualenv from auto-updating pip to an incompatible version
-export VIRTUALENV_NO_DOWNLOAD=1
-exec tox -e py3-old,combine
+exec tox -e py35-old,combine

View File

@@ -1,10 +1,10 @@
-#!/usr/bin/env bash
+#!/bin/bash
 #
-# Test script for 'synapse_port_db'.
-#  - sets up synapse and deps
-#  - runs the port script on a prepopulated test sqlite db
-#  - also runs it against an new sqlite db
+# Test script for 'synapse_port_db', which creates a virtualenv, installs Synapse along
+# with additional dependencies needed for the test (such as coverage or the PostgreSQL
+# driver), update the schema of the test SQLite database and run background updates on it,
+# create an empty test database in PostgreSQL, then run the 'synapse_port_db' script to
+# test porting the SQLite database to the PostgreSQL database (with coverage).
 set -xe
 cd `dirname $0`/../..
@@ -22,32 +22,15 @@ echo "--- Generate the signing key"
 # Generate the server's signing key.
 python -m synapse.app.homeserver --generate-keys -c .buildkite/sqlite-config.yaml
-echo "--- Prepare test database"
+echo "--- Prepare the databases"
 # Make sure the SQLite3 database is using the latest schema and has no pending background update.
 scripts-dev/update_database --database-config .buildkite/sqlite-config.yaml
 # Create the PostgreSQL database.
-./.buildkite/scripts/postgres_exec.py "CREATE DATABASE synapse"
+./.buildkite/scripts/create_postgres_db.py
-echo "+++ Run synapse_port_db against test database"
+echo "+++ Run synapse_port_db"
-coverage run scripts/synapse_port_db --sqlite-database .buildkite/test_db.db --postgres-config .buildkite/postgres-config.yaml
-#####
-# Now do the same again, on an empty database.
-echo "--- Prepare empty SQLite database"
-# we do this by deleting the sqlite db, and then doing the same again.
-rm .buildkite/test_db.db
-scripts-dev/update_database --database-config .buildkite/sqlite-config.yaml
-# re-create the PostgreSQL database.
-./.buildkite/scripts/postgres_exec.py \
-    "DROP DATABASE synapse" \
-    "CREATE DATABASE synapse"
-echo "+++ Run synapse_port_db against empty database"
+# Run the script
 coverage run scripts/synapse_port_db --sqlite-database .buildkite/test_db.db --postgres-config .buildkite/postgres-config.yaml

Binary file not shown.

View File

@@ -1,10 +1,41 @@
 # This file serves as a blacklist for SyTest tests that we expect will fail in
 # Synapse when run under worker mode. For more details, see sytest-blacklist.
+Message history can be paginated
 Can re-join room if re-invited
+The only membership state included in an initial sync is for all the senders in the timeline
+Local device key changes get to remote servers
+If remote user leaves room we no longer receive device updates
+Forgotten room messages cannot be paginated
+Inbound federation can get public room list
+Members from the gap are included in gappy incr LL sync
+Leaves are present in non-gapped incremental syncs
+Old leaves are present in gapped incremental syncs
+User sees updates to presence from other users in the incremental sync.
+Gapped incremental syncs include all state changes
+Old members are included in gappy incr LL sync if they start speaking
 # new failures as of https://github.com/matrix-org/sytest/pull/732
 Device list doesn't change if remote server is down
+Remote servers cannot set power levels in rooms without existing powerlevels
+Remote servers should reject attempts by non-creators to set the power levels
 # https://buildkite.com/matrix-dot-org/synapse/builds/6134#6f67bf47-e234-474d-80e8-c6e1868b15c5
 Server correctly handles incoming m.device_list_update
+# this fails reliably with a torture level of 100 due to https://github.com/matrix-org/synapse/issues/6536
+Outbound federation requests missing prev_events and then asks for /state_ids and resolves the state
+Can get rooms/{roomId}/members at a given point

View File

@@ -1,35 +1,24 @@
-version: 2.1
+version: 2
 jobs:
   dockerhubuploadrelease:
-    docker:
-      - image: docker:git
+    machine: true
     steps:
       - checkout
-      - docker_prepare
+      - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 .
       - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
-      # for release builds, we want to get the amd64 image out asap, so first
-      # we do an amd64-only build, before following up with a multiarch build.
-      - docker_build:
-          tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
-          platforms: linux/amd64
-      - docker_build:
-          tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
-          platforms: linux/amd64,linux/arm64
+      - run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
+      - run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
   dockerhubuploadlatest:
-    docker:
-      - image: docker:git
+    machine: true
     steps:
       - checkout
-      - docker_prepare
+      - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 .
       - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
-      # for `latest`, we don't want the arm images to disappear, so don't update the tag
-      # until all of the platforms are built.
-      - docker_build:
-          tag: -t matrixdotorg/synapse:latest
-          platforms: linux/amd64,linux/arm64
+      - run: docker push matrixdotorg/synapse:latest
+      - run: docker push matrixdotorg/synapse:latest-py3
 workflows:
+  version: 2
   build:
     jobs:
       - dockerhubuploadrelease:
@@ -42,37 +31,3 @@ workflows:
           filters:
             branches:
               only: master
-commands:
-  docker_prepare:
-    description: Sets up a remote docker server, downloads the buildx cli plugin, and enables multiarch images
-    parameters:
-      buildx_version:
-        type: string
-        default: "v0.4.1"
-    steps:
-      - setup_remote_docker:
-          # 19.03.13 was the most recent available on circleci at the time of
-          # writing.
-          version: 19.03.13
-      - run: apk add --no-cache curl
-      - run: mkdir -vp ~/.docker/cli-plugins/ ~/dockercache
-      - run: curl --silent -L "https://github.com/docker/buildx/releases/download/<< parameters.buildx_version >>/buildx-<< parameters.buildx_version >>.linux-amd64" > ~/.docker/cli-plugins/docker-buildx
-      - run: chmod a+x ~/.docker/cli-plugins/docker-buildx
-      # install qemu links in /proc/sys/fs/binfmt_misc on the docker instance running the circleci job
-      - run: docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
-      # create a context named `builder` for the builds
-      - run: docker context create builder
-      # create a buildx builder using the new context, and set it as the default
-      - run: docker buildx create builder --use
-  docker_build:
-    description: Builds and pushed images to dockerhub using buildx
-    parameters:
-      platforms:
-        type: string
-        default: linux/amd64
-      tag:
-        type: string
-    steps:
-      - run: docker buildx build -f docker/Dockerfile --push --platform << parameters.platforms >> --label gitsha1=${CIRCLE_SHA1} << parameters.tag >> --progress=plain .

View File

@@ -1,8 +0,0 @@
# Black reformatting (#5482).
32e7c9e7f20b57dd081023ac42d6931a8da9b3a3
# Target Python 3.5 with black (#8664).
aff1eb7c671b0a3813407321d2702ec46c71fa56
# Update black to 20.8b1 (#9381).
0a00b7ff14890987f09112a2ae696c61001e6cf1
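As context for the file above: a `.git-blame-ignore-revs` file only takes effect when Git is told about it, either per invocation with `git blame --ignore-revs-file .git-blame-ignore-revs <path>`, or once per clone with `git config blame.ignoreRevsFile .git-blame-ignore-revs` (both require Git 2.23 or later).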

View File

@@ -1,5 +0,0 @@
**If you are looking for support** please ask in **#synapse:matrix.org**
(using a matrix.org account if necessary). We do not use GitHub issues for
support.
**If you want to report a security issue** please see https://matrix.org/security-disclosure-policy/

View File

@@ -6,11 +6,9 @@ about: Create a report to help us improve
 <!--
-**THIS IS NOT A SUPPORT CHANNEL!**
-**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**,
-please ask in **#synapse:matrix.org** (using a matrix.org account if necessary)
-If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/
+**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
+You will likely get better support more quickly if you ask in ** #synapse:matrix.org ** ;)
 This is a bug report template. By following the instructions below and
 filling out the sections with your information, you will help the us to get all

View File

@@ -1,322 +0,0 @@
name: Tests

on:
  push:
    branches: ["develop", "release-*"]
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        toxenv:
          - "check-sampleconfig"
          - "check_codestyle"
          - "check_isort"
          - "mypy"
          - "packaging"
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - run: pip install tox
      - run: tox -e ${{ matrix.toxenv }}

  lint-crlf:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Check line endings
        run: scripts-dev/check_line_terminators.sh

  lint-newsfile:
    if: ${{ github.base_ref == 'develop' || contains(github.base_ref, 'release-') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - run: pip install tox
      - name: Patch Buildkite-specific test script
        run: |
          sed -i -e 's/\$BUILDKITE_PULL_REQUEST/${{ github.event.number }}/' \
            scripts-dev/check-newsfragment
      - run: scripts-dev/check-newsfragment

  lint-sdist:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.x"
      - run: pip install wheel
      - run: python setup.py sdist bdist_wheel
      - uses: actions/upload-artifact@v2
        with:
          name: Python Distributions
          path: dist/*

  # Dummy step to gate other tests on without repeating the whole list
  linting-done:
    if: ${{ always() }} # Run this even if prior jobs were skipped
    needs: [lint, lint-crlf, lint-newsfile, lint-sdist]
    runs-on: ubuntu-latest
    steps:
      - run: "true"

  trial:
    if: ${{ !failure() }} # Allow previous steps to be skipped, but not fail
    needs: linting-done
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.6", "3.7", "3.8", "3.9"]
        database: ["sqlite"]
        include:
          # Newest Python without optional deps
          - python-version: "3.9"
            toxenv: "py-noextras,combine"
          # Oldest Python with PostgreSQL
          - python-version: "3.6"
            database: "postgres"
            postgres-version: "9.6"
          # Newest Python with PostgreSQL
          - python-version: "3.9"
            database: "postgres"
            postgres-version: "13"
    steps:
      - uses: actions/checkout@v2
      - run: sudo apt-get -qq install xmlsec1
      - name: Set up PostgreSQL ${{ matrix.postgres-version }}
        if: ${{ matrix.postgres-version }}
        run: |
          docker run -d -p 5432:5432 \
            -e POSTGRES_PASSWORD=postgres \
            -e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
            postgres:${{ matrix.postgres-version }}
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install tox
      - name: Await PostgreSQL
        if: ${{ matrix.postgres-version }}
        timeout-minutes: 2
        run: until pg_isready -h localhost; do sleep 1; done
      - run: tox -e py,combine
        env:
          TRIAL_FLAGS: "--jobs=2"
          SYNAPSE_POSTGRES: ${{ matrix.database == 'postgres' || '' }}
          SYNAPSE_POSTGRES_HOST: localhost
          SYNAPSE_POSTGRES_USER: postgres
          SYNAPSE_POSTGRES_PASSWORD: postgres
      - name: Dump logs
        # Note: Dumps to workflow logs instead of using actions/upload-artifact
        #       This keeps logs colocated with failing jobs
        #       It also ignores find's exit code; this is a best effort affair
        run: >-
          find _trial_temp -name '*.log'
          -exec echo "::group::{}" \;
          -exec cat {} \;
          -exec echo "::endgroup::" \;
          || true

  trial-olddeps:
    if: ${{ !failure() }} # Allow previous steps to be skipped, but not fail
    needs: linting-done
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Test with old deps
        uses: docker://ubuntu:bionic # For old python and sqlite
        with:
          workdir: /github/workspace
          entrypoint: .buildkite/scripts/test_old_deps.sh
        env:
          TRIAL_FLAGS: "--jobs=2"
      - name: Dump logs
        # Note: Dumps to workflow logs instead of using actions/upload-artifact
        #       This keeps logs colocated with failing jobs
        #       It also ignores find's exit code; this is a best effort affair
        run: >-
          find _trial_temp -name '*.log'
          -exec echo "::group::{}" \;
          -exec cat {} \;
          -exec echo "::endgroup::" \;
          || true

  trial-pypy:
    # Very slow; only run if the branch name includes 'pypy'
    if: ${{ contains(github.ref, 'pypy') && !failure() }}
    needs: linting-done
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["pypy-3.6"]
    steps:
      - uses: actions/checkout@v2
      - run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install tox
      - run: tox -e py,combine
        env:
          TRIAL_FLAGS: "--jobs=2"
      - name: Dump logs
        # Note: Dumps to workflow logs instead of using actions/upload-artifact
        #       This keeps logs colocated with failing jobs
        #       It also ignores find's exit code; this is a best effort affair
        run: >-
          find _trial_temp -name '*.log'
          -exec echo "::group::{}" \;
          -exec cat {} \;
          -exec echo "::endgroup::" \;
          || true

  sytest:
    if: ${{ !failure() }}
    needs: linting-done
    runs-on: ubuntu-latest
    container:
      image: matrixdotorg/sytest-synapse:${{ matrix.sytest-tag }}
      volumes:
        - ${{ github.workspace }}:/src
      env:
        BUILDKITE_BRANCH: ${{ github.head_ref }}
        POSTGRES: ${{ matrix.postgres && 1}}
        MULTI_POSTGRES: ${{ (matrix.postgres == 'multi-postgres') && 1}}
        WORKERS: ${{ matrix.workers && 1 }}
        REDIS: ${{ matrix.redis && 1 }}
        BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
    strategy:
      fail-fast: false
      matrix:
        include:
          - sytest-tag: bionic
          - sytest-tag: bionic
            postgres: postgres
          - sytest-tag: testing
            postgres: postgres
          - sytest-tag: bionic
            postgres: multi-postgres
            workers: workers
          - sytest-tag: buster
            postgres: multi-postgres
            workers: workers
          - sytest-tag: buster
            postgres: postgres
            workers: workers
            redis: redis
    steps:
      - uses: actions/checkout@v2
      - name: Prepare test blacklist
        run: cat sytest-blacklist .buildkite/worker-blacklist > synapse-blacklist-with-workers
      - name: Run SyTest
        run: /bootstrap.sh synapse
        working-directory: /src
      - name: Dump results.tap
        if: ${{ always() }}
        run: cat /logs/results.tap
      - name: Upload SyTest logs
        uses: actions/upload-artifact@v2
        if: ${{ always() }}
        with:
          name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
          path: |
            /logs/results.tap
            /logs/**/*.log*

  portdb:
    if: ${{ !failure() }} # Allow previous steps to be skipped, but not fail
    needs: linting-done
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - python-version: "3.6"
            postgres-version: "9.6"
          - python-version: "3.9"
            postgres-version: "13"
    services:
      postgres:
        image: postgres:${{ matrix.postgres-version }}
        ports:
          - 5432:5432
        env:
          POSTGRES_PASSWORD: "postgres"
          POSTGRES_INITDB_ARGS: "--lc-collate C --lc-ctype C --encoding UTF8"
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v2
      - run: sudo apt-get -qq install xmlsec1
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Patch Buildkite-specific test scripts
        run: |
          sed -i -e 's/host="postgres"/host="localhost"/' .buildkite/scripts/postgres_exec.py
          sed -i -e 's/host: postgres/host: localhost/' .buildkite/postgres-config.yaml
          sed -i -e 's|/src/||' .buildkite/{sqlite,postgres}-config.yaml
          sed -i -e 's/\$TOP/\$GITHUB_WORKSPACE/' .coveragerc
      - run: .buildkite/scripts/test_synapse_port_db.sh

  complement:
    if: ${{ !failure() }}
    needs: linting-done
    runs-on: ubuntu-latest
    container:
      # https://github.com/matrix-org/complement/blob/master/dockerfiles/ComplementCIBuildkite.Dockerfile
      image: matrixdotorg/complement:latest
      env:
        CI: true
      ports:
        - 8448:8448
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
    steps:
      - name: Run actions/checkout@v2 for synapse
        uses: actions/checkout@v2
        with:
          path: synapse
      - name: Run actions/checkout@v2 for complement
        uses: actions/checkout@v2
        with:
          repository: "matrix-org/complement"
          path: complement
      # Build initial Synapse image
      - run: docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
        working-directory: synapse
      # Build a ready-to-run Synapse image based on the initial image above.
      # This new image includes a config file, keys for signing and TLS, and
      # other settings to make it suitable for testing under Complement.
      - run: docker build -t complement-synapse -f Synapse.Dockerfile .
        working-directory: complement/dockerfiles
      # Run Complement
      - run: go test -v -tags synapse_blacklist ./tests
        env:
          COMPLEMENT_BASE_IMAGE: complement-synapse:latest
        working-directory: complement
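The workflow's "Await PostgreSQL" step polls `pg_isready` until the containerized server accepts connections. A rough Python stand-in for the same idea, for local experimentation — the host and credentials below mirror the workflow's defaults and are otherwise assumptions:

```python
# Rough local equivalent of the "Await PostgreSQL" step above: poll until the
# server accepts connections or a timeout expires. Hypothetical helper, not
# part of the removed workflow.
import time

import psycopg2


def wait_for_postgres(host="localhost", user="postgres", password="postgres", timeout=120):
    deadline = time.time() + timeout
    while True:
        try:
            psycopg2.connect(
                user=user, host=host, password=password, dbname="postgres"
            ).close()
            return
        except psycopg2.OperationalError:
            if time.time() >= deadline:
                raise
            time.sleep(1)


if __name__ == "__main__":
    wait_for_postgres()
```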

.gitignore (6 changes)
View File

@@ -6,25 +6,21 @@
 *.egg
 *.egg-info
 *.lock
-*.py[cod]
+*.pyc
 *.snap
 *.tac
 _trial_temp/
 _trial_temp*/
 /out
-.DS_Store
-__pycache__/
 # stuff that is likely to exist when you run a server locally
 /*.db
 /*.log
-/*.log.*
 /*.log.config
 /*.pid
 /.python-version
 /*.signing.key
 /env/
-/.venv*/
 /homeserver*.yaml
 /logs
 /media_store/

CHANGES.md (2703 changes)

File diff suppressed because it is too large.

View File

@@ -1,213 +1,86 @@
-Welcome to Synapse
-This document aims to get you started with contributing to this repo!
-- [1. Who can contribute to Synapse?](#1-who-can-contribute-to-synapse)
-- [2. What do I need?](#2-what-do-i-need)
-- [3. Get the source.](#3-get-the-source)
-- [4. Install the dependencies](#4-install-the-dependencies)
-  * [Under Unix (macOS, Linux, BSD, ...)](#under-unix-macos-linux-bsd-)
-  * [Under Windows](#under-windows)
-- [5. Get in touch.](#5-get-in-touch)
-- [6. Pick an issue.](#6-pick-an-issue)
-- [7. Turn coffee and documentation into code and documentation!](#7-turn-coffee-and-documentation-into-code-and-documentation)
-- [8. Test, test, test!](#8-test-test-test)
-  * [Run the linters.](#run-the-linters)
-  * [Run the unit tests.](#run-the-unit-tests)
-  * [Run the integration tests.](#run-the-integration-tests)
-- [9. Submit your patch.](#9-submit-your-patch)
-  * [Changelog](#changelog)
-    + [How do I know what to call the changelog file before I create the PR?](#how-do-i-know-what-to-call-the-changelog-file-before-i-create-the-pr)
-    + [Debian changelog](#debian-changelog)
-  * [Sign off](#sign-off)
-- [10. Turn feedback into better code.](#10-turn-feedback-into-better-code)
-- [11. Find a new issue.](#11-find-a-new-issue)
-- [Notes for maintainers on merging PRs etc](#notes-for-maintainers-on-merging-prs-etc)
-- [Conclusion](#conclusion)
-# 1. Who can contribute to Synapse?
-Everyone is welcome to contribute code to [matrix.org
-projects](https://github.com/matrix-org), provided that they are willing to
-license their contributions under the same license as the project itself. We
-follow a simple 'inbound=outbound' model for contributions: the act of
-submitting an 'inbound' contribution means that the contributor agrees to
-license the code under the same terms as the project's overall 'outbound'
-license - in our case, this is almost always Apache Software License v2 (see
-[LICENSE](LICENSE)).
+# Contributing code to Matrix
+Everyone is welcome to contribute code to Matrix
+(https://github.com/matrix-org), provided that they are willing to license
+their contributions under the same license as the project itself. We follow a
+simple 'inbound=outbound' model for contributions: the act of submitting an
+'inbound' contribution means that the contributor agrees to license the code
+under the same terms as the project's overall 'outbound' license - in our
+case, this is almost always Apache Software License v2 (see [LICENSE](LICENSE)).
+## How to contribute
+The preferred and easiest way to contribute changes to Matrix is to fork the
+relevant project on github, and then [create a pull request](
+https://help.github.com/articles/using-pull-requests/) to ask us to pull
+your changes into our repo.
+**The single biggest thing you need to know is: please base your changes on
+the develop branch - *not* master.**
-# 2. What do I need?
-The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download).
-The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).
-For some tests, you will need [a recent version of Docker](https://docs.docker.com/get-docker/).
+We use the master branch to track the most recent release, so that folks who
+blindly clone the repo and automatically check out master get something that
+works. Develop is the unstable branch where all the development actually
+happens: the workflow is that contributors should fork the develop branch to
+make a 'feature' branch for a particular contribution, and then make a pull
+request to merge this back into the matrix.org 'official' develop branch. We
+use github's pull request workflow to review the contribution, and either ask
+you to make any refinements needed or merge it and make them ourselves. The
+changes will then land on master when we next do a release.
+We use [Buildkite](https://buildkite.com/matrix-dot-org/synapse) for continuous
+integration. If your change breaks the build, this will be shown in GitHub, so
+please keep an eye on the pull request for feedback.
+To run unit tests in a local development environment, you can use:
+- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
+  for SQLite-backed Synapse on Python 3.5.
+- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
+- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
+  (requires a running local PostgreSQL with access to create databases).
+- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
+  (requires Docker). Entirely self-contained, recommended if you don't want to
+  set up PostgreSQL yourself.
+Docker images are available for running the integration tests (SyTest) locally,
+see the [documentation in the SyTest repo](
+https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
+information.
-# 3. Get the source.
-The preferred and easiest way to contribute changes is to fork the relevant
-project on GitHub, and then [create a pull request](
-https://help.github.com/articles/using-pull-requests/) to ask us to pull your
-changes into our repo.
-Please base your changes on the `develop` branch.
-```sh
-git clone git@github.com:YOUR_GITHUB_USER_NAME/synapse.git
-git checkout develop
-```
-If you need help getting started with git, this is beyond the scope of the document, but you
-can find many good git tutorials on the web.
-# 4. Install the dependencies
-## Under Unix (macOS, Linux, BSD, ...)
-Once you have installed Python 3 and added the source, please open a terminal and
-setup a *virtualenv*, as follows:
-```sh
-cd path/where/you/have/cloned/the/repository
-python3 -m venv ./env
-source ./env/bin/activate
-pip install -e ".[all,lint,mypy,test]"
-pip install tox
-```
-This will install the developer dependencies for the project.
-## Under Windows
-TBD
-# 5. Get in touch.
-Join our developer community on Matrix: #synapse-dev:matrix.org !
-# 6. Pick an issue.
-Fix your favorite problem or perhaps find a [Good First Issue](https://github.com/matrix-org/synapse/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+First+Issue%22)
-to work on.
-# 7. Turn coffee and documentation into code and documentation!
-Synapse's code style is documented [here](docs/code_style.md). Please follow
-it, including the conventions for the [sample configuration
-file](docs/code_style.md#configuration-file-format).
-There is a growing amount of documentation located in the [docs](docs)
-directory. This documentation is intended primarily for sysadmins running their
-own Synapse instance, as well as developers interacting externally with
-Synapse. [docs/dev](docs/dev) exists primarily to house documentation for
-Synapse developers. [docs/admin_api](docs/admin_api) houses documentation
-regarding Synapse's Admin API, which is used mostly by sysadmins and external
-service developers.
-If you add new files added to either of these folders, please use [GitHub-Flavoured
-Markdown](https://guides.github.com/features/mastering-markdown/).
-Some documentation also exists in [Synapse's GitHub
-Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
-contributed to by community authors.
-# 8. Test, test, test!
-<a name="test-test-test"></a>
-While you're developing and before submitting a patch, you'll
-want to test your code.
-## Run the linters.
-The linters look at your code and do two things:
-- ensure that your code follows the coding style adopted by the project;
-- catch a number of errors in your code.
-They're pretty fast, don't hesitate!
-```sh
-source ./env/bin/activate
+## Code style
+All Matrix projects have a well-defined code-style - and sometimes we've even
+got as far as documenting it... For instance, synapse's code style doc lives
+[here](docs/code_style.md).
+To facilitate meeting these criteria you can run `scripts-dev/lint.sh`
+locally. Since this runs the tools listed in the above document, you'll need
+python 3.6 and to install each tool:
+```
+# Install the dependencies
+pip install -U black flake8 flake8-comprehensions isort
+# Run the linter script
 ./scripts-dev/lint.sh
 ```
-Note that this script *will modify your files* to fix styling errors.
-Make sure that you have saved all your files.
-If you wish to restrict the linters to only the files changed since the last commit
-(much faster!), you can instead run:
-```sh
-source ./env/bin/activate
-./scripts-dev/lint.sh -d
-```
-Or if you know exactly which files you wish to lint, you can instead run:
-```sh
-source ./env/bin/activate
+**Note that the script does not just test/check, but also reformats code, so you
+may wish to ensure any new code is committed first**. By default this script
+checks all files and can take some time; if you alter only certain files, you
+might wish to specify paths as arguments to reduce the run-time:
+```
 ./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
 ```
-## Run the unit tests.
-The unit tests run parts of Synapse, including your changes, to see if anything
-was broken. They are slower than the linters but will typically catch more errors.
-```sh
-source ./env/bin/activate
-trial tests
-```
-If you wish to only run *some* unit tests, you may specify
-another module instead of `tests` - or a test class or a method:
-```sh
-source ./env/bin/activate
-trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_invite
-```
-If your tests fail, you may wish to look at the logs:
-```sh
-less _trial_temp/test.log
-```
-## Run the integration tests.
-The integration tests are a more comprehensive suite of tests. They
-run a full version of Synapse, including your changes, to check if
-anything was broken. They are slower than the unit tests but will
-typically catch more errors.
-The following command will let you run the integration test with the most common
-configuration:
-```sh
-$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:py37
-```
-This configuration should generally cover your needs. For more details about other configurations, see [documentation in the SyTest repo](https://github.com/matrix-org/sytest/blob/develop/docker/README.md).
-# 9. Submit your patch.
-Once you're happy with your patch, it's time to prepare a Pull Request.
-To prepare a Pull Request, please:
-1. verify that [all the tests pass](#test-test-test), including the coding style;
-2. [sign off](#sign-off) your contribution;
-3. `git push` your commit to your fork of Synapse;
-4. on GitHub, [create the Pull Request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request);
-5. add a [changelog entry](#changelog) and push it to your Pull Request;
-6. for most contributors, that's all - however, if you are a member of the organization `matrix-org`, on GitHub, please request a review from `matrix.org / Synapse Core`.
+Before pushing new changes, ensure they don't produce linting errors. Commit any
+files that were corrected.
+Please ensure your changes match the cosmetic style of the existing project,
+and **never** mix cosmetic and functional changes in the same commit, as it
+makes it horribly hard to review otherwise.
 ## Changelog
@@ -225,55 +98,24 @@ in the format of `PRnumber.type`. The type can be one of the following:
 * `removal` (also used for deprecations)
 * `misc` (for internal-only changes)
-This file will become part of our [changelog](
-https://github.com/matrix-org/synapse/blob/master/CHANGES.md) at the next
-release, so the content of the file should be a short description of your
-change in the same style as the rest of the changelog. The file can contain Markdown
-formatting, and should end with a full stop (.) or an exclamation mark (!) for
-consistency.
+The content of the file is your changelog entry, which should be a short
+description of your change in the same style as the rest of our [changelog](
+https://github.com/matrix-org/synapse/blob/master/CHANGES.md). The file can
+contain Markdown formatting, and should end with a full stop (.) or an
+exclamation mark (!) for consistency.
 Adding credits to the changelog is encouraged, we value your
 contributions and would like to have you shouted out in the release notes!
 For example, a fix in PR #1234 would have its changelog entry in
-`changelog.d/1234.bugfix`, and contain content like:
-> The security levels of Florbs are now validated when received
-> via the `/federation/florb` endpoint. Contributed by Jane Matrix.
-If there are multiple pull requests involved in a single bugfix/feature/etc,
-then the content for each `changelog.d` file should be the same. Towncrier will
-merge the matching files together into a single changelog entry when we come to
-release.
-### How do I know what to call the changelog file before I create the PR?
-Obviously, you don't know if you should call your newsfile
-`1234.bugfix` or `5678.bugfix` until you create the PR, which leads to a
-chicken-and-egg problem.
-There are two options for solving this:
-1. Open the PR without a changelog file, see what number you got, and *then*
-   add the changelog file to your branch (see [Updating your pull
-   request](#updating-your-pull-request)), or:
-1. Look at the [list of all
-   issues/PRs](https://github.com/matrix-org/synapse/issues?q=), add one to the
-   highest number you see, and quickly open the PR before somebody else claims
-   your number.
-[This
-script](https://github.com/richvdh/scripts/blob/master/next_github_number.sh)
-might be helpful if you find yourself doing this a lot.
-Sorry, we know it's a bit fiddly, but it's *really* helpful for us when we come
-to put together a release!
-### Debian changelog
+`changelog.d/1234.bugfix`, and contain content like "The security levels of
+Florbs are now validated when received over federation. Contributed by Jane
+Matrix.".
+## Debian changelog
 Changes which affect the debian packaging files (in `debian`) are an
-exception to the rule that all changes require a `changelog.d` file.
+exception.
 In this case, you will need to add an entry to the debian changelog for the
 next release. For this, run the following command:
@@ -358,36 +200,21 @@ Git allows you to add this signoff automatically when using the `-s`
 flag to `git commit`, which uses the name and email set in your
 `user.name` and `user.email` git configs.
-# 10. Turn feedback into better code.
-Once the Pull Request is opened, you will see a few things:
-1. our automated CI (Continuous Integration) pipeline will run (again) the linters, the unit tests, the integration tests and more;
-2. one or more of the developers will take a look at your Pull Request and offer feedback.
-From this point, you should:
-1. Look at the results of the CI pipeline.
-   - If there is any error, fix the error.
-2. If a developer has requested changes, make these changes and let us know if it is ready for a developer to review again.
-3. Create a new commit with the changes.
-   - Please do NOT overwrite the history. New commits make the reviewer's life easier.
-   - Push this commits to your Pull Request.
-4. Back to 1.
-Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!
-# 11. Find a new issue.
-By now, you know the drill!
-# Notes for maintainers on merging PRs etc
-There are some notes for those with commit access to the project on how we
-manage git [here](docs/dev/git.md).
-# Conclusion
+## Merge Strategy
+We use the commit history of develop/master extensively to identify
+when regressions were introduced and what changes have been made.
+We aim to have a clean merge history, which means we normally squash-merge
+changes into develop. For small changes this means there is no need to rebase
+to clean up your PR before merging. Larger changes with an organised set of
+commits may be merged as-is, if the history is judged to be useful.
+This use of squash-merging will mean PRs built on each other will be hard to
+merge. We suggest avoiding these where possible, and if required, ensuring
+each PR has a tidy set of commits to ease merging.
+## Conclusion
 That's it! Matrix is a very open and collaborative project as you might expect
 given our obsession with open communication. If we're going to successfully
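The changelog-numbering advice removed in this diff mentions a `next_github_number.sh` helper. For illustration only, a hypothetical Python equivalent — the endpoint and its created-descending default ordering are documented GitHub API behaviour, while the overall approach is inferred from the removed text:

```python
# Hypothetical stand-in for the `next_github_number.sh` helper referenced in
# the removed changelog section: issues and PRs share one number sequence, and
# the issues endpoint returns the most recently created item first by default.
import json
import urllib.request

URL = "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1"

with urllib.request.urlopen(URL) as resp:
    latest = json.load(resp)[0]["number"]

print(f"The next issue/PR number will probably be {latest + 1}")
```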

View File

@@ -1,45 +1,17 @@
-# Installation Instructions
-There are 3 steps to follow under **Installation Instructions**.
-- [Installation Instructions](#installation-instructions)
-  - [Choosing your server name](#choosing-your-server-name)
-  - [Installing Synapse](#installing-synapse)
+- [Choosing your server name](#choosing-your-server-name)
+- [Installing Synapse](#installing-synapse)
   - [Installing from source](#installing-from-source)
-    - [Platform-specific prerequisites](#platform-specific-prerequisites)
+    - [Platform-Specific Instructions](#platform-specific-instructions)
+      - [Debian/Ubuntu/Raspbian](#debianubunturaspbian)
+      - [ArchLinux](#archlinux)
+      - [CentOS/Fedora](#centosfedora)
+      - [macOS](#macos)
+      - [OpenSUSE](#opensuse)
+      - [OpenBSD](#openbsd)
+      - [Windows](#windows)
   - [Prebuilt packages](#prebuilt-packages)
-    - [Docker images and Ansible playbooks](#docker-images-and-ansible-playbooks)
-    - [Debian/Ubuntu](#debianubuntu)
-      - [Matrix.org packages](#matrixorg-packages)
-      - [Downstream Debian packages](#downstream-debian-packages)
-      - [Downstream Ubuntu packages](#downstream-ubuntu-packages)
-    - [Fedora](#fedora)
-    - [OpenSUSE](#opensuse-1)
-    - [SUSE Linux Enterprise Server](#suse-linux-enterprise-server)
-    - [ArchLinux](#archlinux-1)
-    - [Void Linux](#void-linux)
-    - [FreeBSD](#freebsd)
-    - [OpenBSD](#openbsd-1)
-    - [NixOS](#nixos)
-  - [Setting up Synapse](#setting-up-synapse)
-    - [Using PostgreSQL](#using-postgresql)
+- [Setting up Synapse](#setting-up-synapse)
   - [TLS certificates](#tls-certificates)
-  - [Client Well-Known URI](#client-well-known-uri)
   - [Email](#email)
   - [Registering a user](#registering-a-user)
   - [Setting up a TURN server](#setting-up-a-turn-server)
   - [URL previews](#url-previews)
 - [Troubleshooting Installation](#troubleshooting-installation)
-## Choosing your server name
+# Choosing your server name
 It is important to choose the name for your server before you install Synapse,
 because it cannot be changed later.
@@ -55,24 +27,27 @@ that your email address is probably `user@example.com` rather than
 `user@email.example.com`) - but doing so may require more advanced setup: see
 [Setting up Federation](docs/federate.md).
-## Installing Synapse
+# Installing Synapse
-### Installing from source
+## Installing from source
 (Prebuilt packages are available for some platforms - see [Prebuilt packages](#prebuilt-packages).)
-When installing from source please make sure that the [Platform-specific prerequisites](#platform-specific-prerequisites) are already installed.
 System requirements:
 - POSIX-compliant system (tested on Linux & OS X)
-- Python 3.5.2 or later, up to Python 3.9.
+- Python 3.5.2 or later, up to Python 3.8.
 - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
+Synapse is written in Python but some of the libraries it uses are written in
+C. So before we can install Synapse itself we need a working C compiler and the
+header files for Python C extensions. See [Platform-Specific
+Instructions](#platform-specific-instructions) for information on installing
+these on various platforms.
 To install the Synapse homeserver run:
-```sh
+```
 mkdir -p ~/synapse
 virtualenv -p python3 ~/synapse/env
 source ~/synapse/env/bin/activate
@@ -89,7 +64,7 @@ prefer.
 This Synapse installation can then be later upgraded by using pip again with the
 update flag:
-```sh
+```
 source ~/synapse/env/bin/activate
 pip install -U matrix-synapse
 ```
@@ -97,7 +72,7 @@ pip install -U matrix-synapse
 Before you can start Synapse, you will need to generate a configuration
 file. To do this, run (in your virtualenv, as before):
-```sh
+```
 cd ~/synapse
 python -m synapse.app.homeserver \
     --server-name my.domain.name \
@@ -115,58 +90,70 @@ wise to back them up somewhere safe. (If, for whatever reason, you do need to
 change your homeserver's keys, you may find that other homeserver have the
 old key cached. If you update the signing key, you should change the name of the
 key in the `<server name>.signing.key` file (the second word) to something
-different. See the [spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys) for more information on key management).
+different. See the
+[spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys)
+for more information on key management).
 To actually run your new homeserver, pick a working directory for Synapse to
 run (e.g. `~/synapse`), and:
-```sh
+```
 cd ~/synapse
 source env/bin/activate
 synctl start
 ```
-#### Platform-specific prerequisites
-Synapse is written in Python but some of the libraries it uses are written in
-C. So before we can install Synapse itself we need a working C compiler and the
-header files for Python C extensions.
-##### Debian/Ubuntu/Raspbian
+### Platform-Specific Instructions
+#### Debian/Ubuntu/Raspbian
 Installing prerequisites on Ubuntu or Debian:
-```sh
+```
-sudo apt install build-essential python3-dev libffi-dev \
+sudo apt-get install build-essential python3-dev libffi-dev \
     python3-pip python3-setuptools sqlite3 \
     libssl-dev virtualenv libjpeg-dev libxslt1-dev
 ```
-##### ArchLinux
+#### ArchLinux
 Installing prerequisites on ArchLinux:
-```sh
+```
 sudo pacman -S base-devel python python-pip \
     python-setuptools python-virtualenv sqlite3
 ```
-##### CentOS/Fedora
+#### CentOS/Fedora
-Installing prerequisites on CentOS or Fedora Linux:
+Installing prerequisites on CentOS 8 or Fedora>26:
-```sh
+```
 sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
-    libwebp-devel libxml2-dev libxslt-devel libpq-devel \
-    python3-virtualenv libffi-devel openssl-devel python3-devel
+    libwebp-devel tk-devel redhat-rpm-config \
+    python3-virtualenv libffi-devel openssl-devel
 sudo dnf groupinstall "Development Tools"
 ```
-##### macOS
+Installing prerequisites on CentOS 7 or Fedora<=25:
+```
+sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
+    lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \
+    python3-virtualenv libffi-devel openssl-devel
+sudo yum groupinstall "Development Tools"
+```
+Note that Synapse does not support versions of SQLite before 3.11, and CentOS 7
+uses SQLite 3.7. You may be able to work around this by installing a more
+recent SQLite version, but it is recommended that you instead use a Postgres
+database: see [docs/postgres.md](docs/postgres.md).
+#### macOS
 Installing prerequisites on macOS:
-```sh
+```
 xcode-select --install
 sudo easy_install pip
 sudo pip install virtualenv
@@ -176,102 +163,95 @@ brew install pkg-config libffi
 On macOS Catalina (10.15) you may need to explicitly install OpenSSL
 via brew and inform `pip` about it so that `psycopg2` builds:
-```sh
+```
 brew install openssl@1.1
-export LDFLAGS="-L/usr/local/opt/openssl/lib"
-export CPPFLAGS="-I/usr/local/opt/openssl/include"
+export LDFLAGS=-L/usr/local/Cellar/openssl\@1.1/1.1.1d/lib/
 ```
-##### OpenSUSE
+#### OpenSUSE
 Installing prerequisites on openSUSE:
-```sh
+```
 sudo zypper in -t pattern devel_basis
 sudo zypper in python-pip python-setuptools sqlite3 python-virtualenv \
     python-devel libffi-devel libopenssl-devel libjpeg62-devel
 ```
-##### OpenBSD
+#### OpenBSD
-A port of Synapse is available under `net/synapse`. The filesystem
-underlying the homeserver directory (defaults to `/var/synapse`) has to be
-mounted with `wxallowed` (cf. `mount(8)`), so creating a separate filesystem
-and mounting it to `/var/synapse` should be taken into consideration.
-To be able to build Synapse's dependency on python the `WRKOBJDIR`
-(cf. `bsd.port.mk(5)`) for building python, too, needs to be on a filesystem
-mounted with `wxallowed` (cf. `mount(8)`).
-Creating a `WRKOBJDIR` for building python under `/usr/local` (which on a
-default OpenBSD installation is mounted with `wxallowed`):
-```sh
-doas mkdir /usr/local/pobj_wxallowed
-```
-Assuming `PORTS_PRIVSEP=Yes` (cf. `bsd.port.mk(5)`) and `SUDO=doas` are
-configured in `/etc/mk.conf`:
-```sh
-doas chown _pbuild:_pbuild /usr/local/pobj_wxallowed
-```
-Setting the `WRKOBJDIR` for building python:
-```sh
-echo WRKOBJDIR_lang/python/3.7=/usr/local/pobj_wxallowed \\nWRKOBJDIR_lang/python/2.7=/usr/local/pobj_wxallowed >> /etc/mk.conf
-```
-Building Synapse:
-```sh
-cd /usr/ports/net/synapse
-make install
-```
-##### Windows
+Installing prerequisites on OpenBSD:
+```
+doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \
+    libxslt jpeg
+```
+There is currently no port for OpenBSD. Additionally, OpenBSD's security
+settings require a slightly more difficult installation process.
+(XXX: I suspect this is out of date)
+1. Create a new directory in `/usr/local` called `_synapse`. Also, create a
+   new user called `_synapse` and set that directory as the new user's home.
+   This is required because, by default, OpenBSD only allows binaries which need
+   write and execute permissions on the same memory space to be run from
+   `/usr/local`.
+2. `su` to the new `_synapse` user and change to their home directory.
+3. Create a new virtualenv: `virtualenv -p python3 ~/.synapse`
+4. Source the virtualenv configuration located at
+   `/usr/local/_synapse/.synapse/bin/activate`. This is done in `ksh` by
+   using the `.` command, rather than `bash`'s `source`.
+5. Optionally, use `pip` to install `lxml`, which Synapse needs to parse
+   webpages for their titles.
+6. Use `pip` to install this repository: `pip install matrix-synapse`
+7. Optionally, change `_synapse`'s shell to `/bin/false` to reduce the
+   chance of a compromised Synapse server being used to take over your box.
+After this, you may proceed with the rest of the install directions.
+#### Windows
 If you wish to run or develop Synapse on Windows, the Windows Subsystem For
 Linux provides a Linux environment on Windows 10 which is capable of using the
 Debian, Fedora, or source installation methods. More information about WSL can
-be found at <https://docs.microsoft.com/en-us/windows/wsl/install-win10> for
-Windows 10 and <https://docs.microsoft.com/en-us/windows/wsl/install-on-server>
+be found at https://docs.microsoft.com/en-us/windows/wsl/install-win10 for
+Windows 10 and https://docs.microsoft.com/en-us/windows/wsl/install-on-server
 for Windows Server.
-### Prebuilt packages
+## Prebuilt packages
 As an alternative to installing from source, prebuilt packages are available
 for a number of platforms.
-#### Docker images and Ansible playbooks
+### Docker images and Ansible playbooks
-There is an official synapse image available at
-<https://hub.docker.com/r/matrixdotorg/synapse> which can be used with
-the docker-compose file available at [contrib/docker](contrib/docker). Further
-information on this including configuration options is available in the README
-on hub.docker.com.
+There is an offical synapse image available at
+https://hub.docker.com/r/matrixdotorg/synapse which can be used with
+the docker-compose file available at [contrib/docker](contrib/docker). Further information on
+this including configuration options is available in the README on
+hub.docker.com.
 Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
 Dockerfile to automate a synapse server in a single Docker image, at
-<https://hub.docker.com/r/avhost/docker-matrix/tags/>
+https://hub.docker.com/r/avhost/docker-matrix/tags/
 Slavi Pantaleev has created an Ansible playbook,
 which installs the offical Docker image of Matrix Synapse
-along with many other Matrix-related services (Postgres database, Element, coturn,
-ma1sd, SSL support, etc.).
+along with many other Matrix-related services (Postgres database, riot-web, coturn, mxisd, SSL support, etc.).
 For more details, see
-<https://github.com/spantaleev/matrix-docker-ansible-deploy>
+https://github.com/spantaleev/matrix-docker-ansible-deploy
-#### Debian/Ubuntu
-##### Matrix.org packages
+### Debian/Ubuntu
+#### Matrix.org packages
 Matrix.org provides Debian/Ubuntu packages of the latest stable version of
-Synapse via <https://packages.matrix.org/debian/>. They are available for Debian
+Synapse via https://packages.matrix.org/debian/. They are available for Debian
 9 (Stretch), Ubuntu 16.04 (Xenial), and later. To use them:
-```sh
+```
 sudo apt install -y lsb-release wget apt-transport-https
 sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
 echo "deb [signed-by=/usr/share/keyrings/matrix-org-archive-keyring.gpg] https://packages.matrix.org/debian/ $(lsb_release -cs) main" |
@@ -291,61 +271,56 @@ The fingerprint of the repository signing key (as shown by `gpg
/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is /usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`. `AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
##### Downstream Debian packages

We do not recommend using the packages from the default Debian `buster`
repository at this time, as they are old and suffer from known security
vulnerabilities. You can install the latest version of Synapse from
[our repository](#matrixorg-packages) or from `buster-backports`. Please
see the [Debian documentation](https://backports.debian.org/Instructions/)
for information on how to use backports.

If you are using Debian `sid` or testing, Synapse is available in the default
repositories and it should be possible to install it simply with:

```sh
sudo apt install matrix-synapse
```

##### Downstream Ubuntu packages

We do not recommend using the packages in the default Ubuntu repository
at this time, as they are old and suffer from known security vulnerabilities.
The latest version of Synapse can be installed from [our repository](#matrixorg-packages).
#### Fedora

Synapse is in the Fedora repositories as `matrix-synapse`:

```sh
sudo dnf install matrix-synapse
```

Oleg Girko provides Fedora RPMs at
<https://obs.infoserver.lv/project/monitor/matrix-synapse>
#### OpenSUSE

Synapse is in the OpenSUSE repositories as `matrix-synapse`:

```sh
sudo zypper install matrix-synapse
```
#### SUSE Linux Enterprise Server

Unofficial packages are built for SLES 15 in the openSUSE:Backports:SLE-15 repository at
<https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15/standard/>
#### ArchLinux

The quickest way to get up and running with ArchLinux is probably with the community package
<https://www.archlinux.org/packages/community/any/matrix-synapse/>, which should pull in most of
the necessary dependencies.

pip may be outdated (6.0.7-1 and needs to be upgraded to 6.0.8-1):

```sh
sudo pip install --upgrade pip
```
@@ -354,65 +329,38 @@ ELFCLASS32 (x64 Systems), you may need to reinstall py-bcrypt to correctly
compile it under the right architecture. (This should not be needed if
installing under virtualenv):

```sh
sudo pip uninstall py-bcrypt
sudo pip install py-bcrypt
```
#### Void Linux

Synapse can be found in the void repositories as 'synapse':

```sh
xbps-install -Su
xbps-install -S synapse
```

#### FreeBSD

Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:

- Ports: `cd /usr/ports/net-im/py-matrix-synapse && make install clean`
- Packages: `pkg install py37-matrix-synapse`
#### OpenBSD

As of OpenBSD 6.7 Synapse is available as a pre-compiled binary. The filesystem
underlying the homeserver directory (defaults to `/var/synapse`) has to be
mounted with `wxallowed` (cf. `mount(8)`), so creating a separate filesystem
and mounting it to `/var/synapse` should be taken into consideration.

Installing Synapse:

```sh
doas pkg_add synapse
```

#### NixOS

Robin Lambertz has packaged Synapse for NixOS at:
<https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-synapse.nix>
## Setting up Synapse

Once you have installed synapse as above, you will need to configure it.
### Using PostgreSQL
By default Synapse uses [SQLite](https://sqlite.org/) and in doing so trades performance for convenience.
SQLite is only recommended in Synapse for testing purposes or for servers with
very light workloads.
Almost all installations should opt to use [PostgreSQL](https://www.postgresql.org). Advantages include:
- significant performance improvements due to the superior threading and
caching model, smarter query optimiser
- allowing the DB to be run on separate hardware
For information on how to install and use PostgreSQL in Synapse, please see
[docs/postgres.md](docs/postgres.md)
### TLS certificates
The default configuration exposes a single HTTP port on the local
interface: `http://localhost:8008`. It is suitable for local testing,
@@ -426,11 +374,11 @@ The recommended way to do so is to set up a reverse proxy on port
Alternatively, you can configure Synapse to expose an HTTPS port. To do
so, you will need to edit `homeserver.yaml`, as follows:
- First, under the `listeners` section, uncomment the configuration for the
  TLS-enabled listener. (Remove the hash sign (`#`) at the start of
  each line). The relevant lines are like this:
  ```yaml
  - port: 8448
    type: http
    tls: true
@@ -438,12 +386,14 @@ so, you will need to edit `homeserver.yaml`, as follows:
      - names: [client, federation]
  ```
- You will also need to uncomment the `tls_certificate_path` and
  `tls_private_key_path` lines under the `TLS` section. You will need to manage
  provisioning of these certificates yourself; Synapse had built-in ACME
  support, but the ACMEv1 protocol Synapse implements is deprecated, not
  allowed by LetsEncrypt for new sites, and will break for existing sites in
  late 2020. See [ACME.md](docs/ACME.md).
  If you are using your own certificate, be sure to use a `.pem` file that
  includes the full certificate chain including any intermediate certificates
@@ -453,63 +403,8 @@ so, you will need to edit `homeserver.yaml`, as follows:
For a more detailed guide to configuring your server for federation, see
[federate.md](docs/federate.md).
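
As an aside on the `.pem` advice above: a quick way to check whether a
certificate file actually contains the full chain is to count the
`BEGIN CERTIFICATE` blocks in it. A minimal sketch (the path is a placeholder):

```python
# Count the certificates in a PEM bundle; a full-chain file normally
# contains the server certificate plus at least one intermediate.
PEM_PATH = "/path/to/fullchain.pem"  # placeholder path

with open(PEM_PATH) as f:
    certs = f.read().count("-----BEGIN CERTIFICATE-----")

print("found %d certificate(s) in %s" % (certs, PEM_PATH))
if certs < 2:
    print("warning: the file may be missing intermediate certificates")
```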
### Client Well-Known URI
Setting up the client Well-Known URI is optional but if you set it up, it will
allow users to enter their full username (e.g. `@user:<server_name>`) into clients
which support well-known lookup to automatically configure the homeserver and
identity server URLs. This is useful so that users don't have to memorize or think
about the actual homeserver URL you are using.
The URL `https://<server_name>/.well-known/matrix/client` should return JSON in
the following format.
```json
{
"m.homeserver": {
"base_url": "https://<matrix.example.com>"
}
}
```
It can optionally contain identity server information as well.
```json
{
"m.homeserver": {
"base_url": "https://<matrix.example.com>"
},
"m.identity_server": {
"base_url": "https://<identity.example.com>"
}
}
```
To work in browser based clients, the file must be served with the appropriate
Cross-Origin Resource Sharing (CORS) headers. A recommended value would be
`Access-Control-Allow-Origin: *` which would allow all browser based clients to
view it.
In nginx this would be something like:
```nginx
location /.well-known/matrix/client {
return 200 '{"m.homeserver": {"base_url": "https://<matrix.example.com>"}}';
default_type application/json;
add_header Access-Control-Allow-Origin *;
}
```
You should also ensure the `public_baseurl` option in `homeserver.yaml` is set
correctly. `public_baseurl` should be set to the URL that clients will use to
connect to your server. This is the same URL you put for the `m.homeserver`
`base_url` above.
```yaml
public_baseurl: "https://<matrix.example.com>"
```
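
Once the route is in place, it is worth checking that the document and the CORS
header are actually served as expected. A small sketch, assuming the `requests`
package and substituting your own server name:

```python
# Sanity-check the client well-known document and its CORS header.
import requests

resp = requests.get("https://example.com/.well-known/matrix/client")  # your server_name
resp.raise_for_status()

body = resp.json()
assert "base_url" in body.get("m.homeserver", {}), "m.homeserver.base_url missing"
assert resp.headers.get("Access-Control-Allow-Origin") == "*", "CORS header missing"
print("well-known OK:", body["m.homeserver"]["base_url"])
```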
### Email
It is desirable for Synapse to have the capability to send email. This allows
Synapse to send password reset emails, send verifications when an email address
@@ -524,28 +419,18 @@ and `notif_from` fields filled out. You may also need to set `smtp_user`,
If email is not configured, password reset, registration and notifications via
email will be disabled.
### Registering a user

The easiest way to create a new user is to do so from a client like [Element](https://element.io/).

Alternatively, you can do so from the command line. This can be done as follows:

1. If synapse was installed via pip, activate the virtualenv as follows (if Synapse was
   installed via a prebuilt package, `register_new_matrix_user` should already be
   on the search path):
   ```sh
   cd ~/synapse
   source env/bin/activate
   synctl start # if not already running
   ```
2. Run the following command:
   ```sh
   register_new_matrix_user -c homeserver.yaml http://localhost:8008
   ```

This will prompt you to add details for the new user, and will then connect to
the running Synapse to create the new user. For example:
```
New user localpart: erikj
Password:
Confirm password:
@@ -560,12 +445,12 @@ value is generated by `--generate-config`), but it should be kept secret, as
anyone with knowledge of it can register users, including admin accounts,
on your server even if `enable_registration` is `false`.
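
For reference, `register_new_matrix_user` authenticates against the admin
registration endpoint by fetching a nonce and returning an HMAC-SHA1 over it,
keyed with the shared secret. A rough sketch of the same flow (assuming the
`requests` package; values are placeholders, and the endpoint shape reflects
the documentation for Synapse of this era):

```python
# Register a user with the registration shared secret, approximating
# what register_new_matrix_user does. All values are placeholders.
import hashlib
import hmac

import requests

server = "http://localhost:8008"
shared_secret = b"<registration_shared_secret>"
user, password = "erikj", "secret"

nonce = requests.get(server + "/_synapse/admin/v1/register").json()["nonce"]

mac = hmac.new(shared_secret, digestmod=hashlib.sha1)
mac.update(b"\x00".join([nonce.encode("utf8"), user.encode("utf8"),
                         password.encode("utf8"), b"notadmin"]))

resp = requests.post(server + "/_synapse/admin/v1/register", json={
    "nonce": nonce,
    "username": user,
    "password": password,
    "admin": False,
    "mac": mac.hexdigest(),
})
print(resp.json())
```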
### Setting up a TURN server

For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See [docs/turn-howto.md](docs/turn-howto.md) for details.

### URL previews

Synapse includes support for previewing URLs, which is disabled by default. To
turn it on you must enable the `url_preview_enabled: True` config parameter
@@ -575,18 +460,19 @@ This is critical from a security perspective to stop arbitrary Matrix users
spidering 'internal' URLs on your network. At the very least we recommend that
your loopback and RFC1918 IP addresses are blacklisted.

This also requires the optional `lxml` python dependency to be installed. This
in turn requires the `libxml2` library to be available - on Debian/Ubuntu this
means `apt-get install libxml2-dev`, or equivalent for your OS.
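
Once enabled (and Synapse restarted), the feature can be exercised directly
against the media repository endpoint. A sketch, assuming the `requests`
package; the access token is a placeholder and the endpoint path is the `r0`
media API of this era:

```python
# Request a URL preview from the media repository.
import requests

resp = requests.get(
    "http://localhost:8008/_matrix/media/r0/preview_url",
    params={"url": "https://matrix.org", "access_token": "<access_token>"},
)
print(resp.status_code, resp.json())
```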
### Troubleshooting Installation

`pip` seems to leak *lots* of memory during installation. For instance, a Linux
host with 512MB of RAM may run out of memory whilst installing Twisted. If this
happens, you will have to individually install the dependencies which are
failing, e.g.:

```sh
pip install twisted
```

View File

@@ -20,10 +20,9 @@ recursive-include scripts *
recursive-include scripts-dev *
recursive-include synapse *.pyi
recursive-include tests *.py
recursive-include tests *.pem
recursive-include tests *.p8
recursive-include tests *.crt
recursive-include tests *.key

recursive-include synapse/res *
recursive-include synapse/static *.css

View File

@@ -1,7 +1,3 @@
=========================================================
Synapse |support| |development| |license| |pypi| |python|
=========================================================
.. contents::

Introduction
@@ -41,7 +37,7 @@ which handle:
- Eventually-consistent cryptographically secure synchronisation of room
  state across a global open network of federated servers and services
- Sending and receiving extensible messages in a room with (optional)
  end-to-end encryption
- Inviting, joining, leaving, kicking, banning room members
- Managing user accounts (registration, login, logout)
- Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers,
@@ -78,15 +74,7 @@ at the `Matrix spec <https://matrix.org/docs/spec>`_, and experiment with the
Thanks for using Matrix!

Support
=======

For support installing or managing Synapse, please join |room|_ (from a matrix.org
account if necessary) and ask questions there. We do not use GitHub issues for
support requests, only for bug reports and feature requests.

.. |room| replace:: ``#synapse:matrix.org``
.. _room: https://matrix.to/#/#synapse:matrix.org
Synapse Installation
@@ -108,11 +96,12 @@ Unless you are running a test instance of Synapse on your local machine, in
general, you will need to enable TLS support before you can successfully
connect from a client: see `<INSTALL.md#tls-certificates>`_.

An easy way to get started is to login or register via Element at
https://app.element.io/#/login or https://app.element.io/#/register respectively.
You will need to change the server you are logging into from ``matrix.org``
and instead specify a Homeserver URL of ``https://<server_name>:8448``
(or just ``https://<server_name>`` if you are using a reverse proxy).
(Leave the identity server as the default - see `Identity servers`_.)
If you prefer to use another client, refer to our
`client breakdown <https://matrix.org/docs/projects/clients-matrix>`_.
@@ -129,7 +118,7 @@ it, specify ``enable_registration: true`` in ``homeserver.yaml``. (It is then
recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.md>`_.)

Once ``enable_registration`` is set to ``true``, it is possible to register a
user via a Matrix client.
Your new user name will be formed partly from the ``server_name``, and partly
from a localpart you specify when you create the account. Your name will take
@@ -175,6 +164,30 @@ versions of synapse.
.. _UPGRADE.rst: UPGRADE.rst
Using PostgreSQL
================
Synapse offers two database engines:
* `SQLite <https://sqlite.org/>`_
* `PostgreSQL <https://www.postgresql.org>`_
By default Synapse uses SQLite and in doing so trades performance for convenience.
SQLite is only recommended in Synapse for testing purposes or for servers with
light workloads.

Almost all installations should opt to use PostgreSQL. Advantages include:
* significant performance improvements due to the superior threading and
caching model, smarter query optimiser
* allowing the DB to be run on separate hardware
* allowing basic active/backup high-availability with a "hot spare" synapse
pointing at the same DB master, as well as enabling DB replication in
synapse itself.
For information on how to install and use PostgreSQL, please see
`docs/postgres.md <docs/postgres.md>`_.
.. _reverse-proxy:

Using a reverse proxy with Synapse
@@ -183,9 +196,8 @@ Using a reverse proxy with Synapse
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_,
`HAProxy <https://www.haproxy.org/>`_ or
`relayd <https://man.openbsd.org/relayd.8>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
@@ -224,9 +236,10 @@ email address.
Password reset
==============
Users can reset their password through their client. Alternatively, a server admin
can reset a user's password using the `admin API <docs/admin_api/user_admin_api.rst#reset-password>`_
or by directly editing the database as shown below.
First calculate the hash of the new password::
@@ -235,7 +248,7 @@ First calculate the hash of the new password::
    Confirm password:
    $2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Then update the ``users`` table in the database::

    UPDATE users SET password_hash='$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
        WHERE name='@test:test.com';
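
For reference, the hash that ``hash_password`` produces is a bcrypt hash of the
password with the optional ``password_pepper`` from the config appended; a rough
equivalent, assuming the ``bcrypt`` package and an empty pepper:

.. code-block:: python

   # Rough equivalent of hash_password: bcrypt over password + pepper.
   import bcrypt

   password = "new-password"
   pepper = ""  # must match password_pepper in homeserver.yaml, if set

   hashed = bcrypt.hashpw((password + pepper).encode("utf8"), bcrypt.gensalt(12))
   print(hashed.decode("ascii"))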
@@ -244,8 +257,6 @@ Then update the ``users`` table in the database::
Synapse Development
===================
Join our developer community on Matrix: `#synapse-dev:matrix.org <https://matrix.to/#/#synapse-dev:matrix.org>`_
Before setting up a development environment for synapse, make sure you have the
system dependencies (such as the python header files) installed - see
`Installing from source <INSTALL.md#installing-from-source>`_.
@@ -259,48 +270,23 @@ directory of your choice::
Synapse has a number of external dependencies, that are easiest
to install using pip and a virtualenv::
   python3 -m venv ./env
   source ./env/bin/activate
   pip install -e ".[all,test]"

This will run a process of downloading and installing all the needed
dependencies into a virtual env. If any dependencies fail to install,
try installing the failing modules individually::

   pip install -e "module-name"

Once this is done, you may wish to run Synapse's unit tests to
check that everything is installed correctly::

   python -m twisted.trial tests

This should end with a 'PASSED' result (note that exact numbers will
differ)::

   Ran 1337 tests in 716.064s

   PASSED (skips=15, successes=1322)

We recommend using the demo which starts 3 federated instances running on ports ``8080`` - ``8082``::

   ./demo/start.sh

(to stop, you can use ``./demo/stop.sh``)

If you just want to start a single instance of the app and run it directly::

   # Create the homeserver.yaml config once
   python -m synapse.app.homeserver \
       --server-name my.domain.name \
       --config-path homeserver.yaml \
       --generate-config \
       --report-stats=[yes|no]

   # Start the app
   python -m synapse.app.homeserver --config-path homeserver.yaml
Running the Integration Tests
=============================
@@ -314,21 +300,22 @@ Testing with SyTest is recommended for verifying that changes related to the
Client-Server API are functioning correctly. See the `installation instructions
<https://github.com/matrix-org/sytest#installing>`_ for details.
Platform dependencies
=====================

Synapse uses a number of platform dependencies such as Python and PostgreSQL,
and aims to follow supported upstream versions. See the
`<docs/deprecation_policy.md>`_ document for more details.
Troubleshooting
===============
Need help? Join our community support room on Matrix:
`#synapse:matrix.org <https://matrix.to/#/#synapse:matrix.org>`_
Running out of File Handles
---------------------------
@@ -393,12 +380,7 @@ massive excess of outgoing federation requests (see `discussion
indicate that your server is also issuing far more outgoing federation
requests than can be accounted for by your users' activity, this is a
likely cause. The misbehavior can be worked around by setting
the following in the Synapse config file:

.. code-block:: yaml

   presence:
     enabled: false
People can't accept room invitations from me
--------------------------------------------
@@ -412,23 +394,3 @@ something like the following in their logs::
This is normally caused by a misconfiguration in your reverse-proxy. See
`<docs/reverse_proxy.md>`_ and double-check that your settings are correct.
.. |support| image:: https://img.shields.io/matrix/synapse:matrix.org?label=support&logo=matrix
:alt: (get support on #synapse:matrix.org)
:target: https://matrix.to/#/#synapse:matrix.org
.. |development| image:: https://img.shields.io/matrix/synapse-dev:matrix.org?label=development&logo=matrix
:alt: (discuss development on #synapse-dev:matrix.org)
:target: https://matrix.to/#/#synapse-dev:matrix.org
.. |license| image:: https://img.shields.io/github/license/matrix-org/synapse
:alt: (check license in LICENSE file)
:target: LICENSE
.. |pypi| image:: https://img.shields.io/pypi/v/matrix-synapse
:alt: (latest version released on PyPi)
:target: https://pypi.org/project/matrix-synapse
.. |python| image:: https://img.shields.io/pypi/pyversions/matrix-synapse
:alt: (supported python versions)
:target: https://pypi.org/project/matrix-synapse

View File

@@ -5,16 +5,6 @@ Before upgrading check if any special steps are required to upgrade from the
version you currently have installed to the current version of Synapse. The extra
instructions that may be required are listed later in this document.
* Check that your versions of Python and PostgreSQL are still supported.
Synapse follows upstream lifecycles for `Python`_ and `PostgreSQL`_, and
removes support for versions which are no longer maintained.
The website https://endoflife.date also offers convenient summaries.
.. _Python: https://devguide.python.org/devcycle/#end-of-life-branches
.. _PostgreSQL: https://www.postgresql.org/support/versioning/
* If Synapse was installed using `prebuilt packages
  <INSTALL.md#prebuilt-packages>`_, you will need to follow the normal process
  for upgrading those packages.
@@ -85,433 +75,10 @@ for example:
    wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
    dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.34.0
====================
`room_invite_state_types` configuration setting
-----------------------------------------------
The ``room_invite_state_types`` configuration setting has been deprecated and
replaced with ``room_prejoin_state``. See the `sample configuration file <https://github.com/matrix-org/synapse/blob/v1.34.0/docs/sample_config.yaml#L1515>`_.
If you have set ``room_invite_state_types`` to the default value you should simply
remove it from your configuration file. The default value used to be:
.. code:: yaml
room_invite_state_types:
- "m.room.join_rules"
- "m.room.canonical_alias"
- "m.room.avatar"
- "m.room.encryption"
- "m.room.name"
If you have customised this value by adding additional state types, you should
remove ``room_invite_state_types`` and configure ``additional_event_types`` with
your customisations.
If you have customised this value by removing state types, you should rename
``room_invite_state_types`` to ``additional_event_types``, and set
``disable_default_event_types`` to ``true``.
Upgrading to v1.33.0
====================
Account Validity HTML templates can now display a user's expiration date
------------------------------------------------------------------------
This may affect you if you have enabled the account validity feature, and have made use of a
custom HTML template specified by the ``account_validity.template_dir`` or ``account_validity.account_renewed_html_path``
Synapse config options.
The template can now accept an ``expiration_ts`` variable, which represents the unix timestamp in milliseconds of the
future date until which their account has been renewed. See the
`default template <https://github.com/matrix-org/synapse/blob/release-v1.33.0/synapse/res/templates/account_renewed.html>`_
for an example of usage.
Also note that a new HTML template, ``account_previously_renewed.html``, has been added. This is shown to users
when they attempt to renew their account with a valid renewal token that has already been used before. The default
template contents can be found
`here <https://github.com/matrix-org/synapse/blob/release-v1.33.0/synapse/res/templates/account_previously_renewed.html>`_,
and can also accept an ``expiration_ts`` variable. This template replaces the error message users would previously see
upon attempting to use a valid renewal token more than once.
Upgrading to v1.32.0
====================
Regression causing connected Prometheus instances to become overwhelmed
-----------------------------------------------------------------------
This release introduces `a regression <https://github.com/matrix-org/synapse/issues/9853>`_
that can overwhelm connected Prometheus instances. This issue is not present in
Synapse v1.32.0rc1.
If you have been affected, please downgrade to 1.31.0. You then may need to
remove excess writeahead logs in order for Prometheus to recover. Instructions
for doing so are provided
`here <https://github.com/matrix-org/synapse/pull/9854#issuecomment-823472183>`_.
Dropping support for old Python, Postgres and SQLite versions
-------------------------------------------------------------
In line with our `deprecation policy <https://github.com/matrix-org/synapse/blob/release-v1.32.0/docs/deprecation_policy.md>`_,
we've dropped support for Python 3.5 and PostgreSQL 9.5, as they are no longer supported upstream.
This release of Synapse requires Python 3.6+ and PostgreSQL 9.6+ or SQLite 3.22+.
Removal of old List Accounts Admin API
--------------------------------------
The deprecated v1 "list accounts" admin API (``GET /_synapse/admin/v1/users/<user_id>``) has been removed in this version.
The `v2 list accounts API <https://github.com/matrix-org/synapse/blob/master/docs/admin_api/user_admin_api.rst#list-accounts>`_
has been available since Synapse 1.7.0 (2019-12-13), and is accessible under ``GET /_synapse/admin/v2/users``.
The deprecation of the old endpoint was announced with Synapse 1.28.0 (released on 2021-02-25).
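
For reference, a sketch of calling the v2 endpoint with an admin user's access
token (base URL and token are placeholders; assumes the ``requests`` package):

.. code-block:: python

   # List accounts via the v2 admin API.
   import requests

   resp = requests.get(
       "http://localhost:8008/_synapse/admin/v2/users",
       params={"from": 0, "limit": 10},
       headers={"Authorization": "Bearer <admin_access_token>"},
   )
   print(resp.json())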
Application Services must use type ``m.login.application_service`` when registering users
-----------------------------------------------------------------------------------------
In compliance with the
`Application Service spec <https://matrix.org/docs/spec/application_service/r0.1.2#server-admin-style-permissions>`_,
Application Services are now required to use the ``m.login.application_service`` type when registering users via the
``/_matrix/client/r0/register`` endpoint. This behaviour was deprecated in Synapse v1.30.0.
Please ensure your Application Services are up to date.
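
Concretely, an application service registration request now needs to carry the
type in its body, authenticated with the AS token. A sketch (token and localpart
are placeholders; assumes the ``requests`` package):

.. code-block:: python

   # Register a user on behalf of an application service.
   import requests

   resp = requests.post(
       "http://localhost:8008/_matrix/client/r0/register",
       params={"access_token": "<as_token>"},
       json={"type": "m.login.application_service", "username": "_bridge_alice"},
   )
   print(resp.status_code, resp.json())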
Upgrading to v1.29.0
====================
Requirement for X-Forwarded-Proto header
----------------------------------------
When using Synapse with a reverse proxy (in particular, when using the
`x_forwarded` option on an HTTP listener), Synapse now expects to receive an
`X-Forwarded-Proto` header on incoming HTTP requests. If it is not set, Synapse
will log a warning on each received request.
To avoid the warning, administrators using a reverse proxy should ensure that
the reverse proxy sets `X-Forwarded-Proto` header to `https` or `http` to
indicate the protocol used by the client.
Synapse also requires the `Host` header to be preserved.
See the `reverse proxy documentation <docs/reverse_proxy.md>`_, where the
example configurations have been updated to show how to set these headers.
(Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it
sets `X-Forwarded-Proto` by default.)
Upgrading to v1.27.0
====================
Changes to callback URI for OAuth2 / OpenID Connect and SAML2
-------------------------------------------------------------
This version changes the URI used for callbacks from OAuth2 and SAML2 identity providers:
* If your server is configured for single sign-on via an OpenID Connect or OAuth2 identity
provider, you will need to add ``[synapse public baseurl]/_synapse/client/oidc/callback``
to the list of permitted "redirect URIs" at the identity provider.
See `docs/openid.md <docs/openid.md>`_ for more information on setting up OpenID
Connect.
* If your server is configured for single sign-on via a SAML2 identity provider, you will
need to add ``[synapse public baseurl]/_synapse/client/saml2/authn_response`` as a permitted
"ACS location" (also known as "allowed callback URLs") at the identity provider.
The "Issuer" in the "AuthnRequest" to the SAML2 identity provider is also updated to
``[synapse public baseurl]/_synapse/client/saml2/metadata.xml``. If your SAML2 identity
provider uses this property to validate or otherwise identify Synapse, its configuration
will need to be updated to use the new URL. Alternatively you could create a new, separate
"EntityDescriptor" in your SAML2 identity provider with the new URLs and leave the URLs in
the existing "EntityDescriptor" as they were.
Changes to HTML templates
-------------------------
The HTML templates for SSO and email notifications now have `Jinja2's autoescape <https://jinja.palletsprojects.com/en/2.11.x/api/#autoescaping>`_
enabled for files ending in ``.html``, ``.htm``, and ``.xml``. If you have customised
these templates and see issues when viewing them you might need to update them.
It is expected that most configurations will need no changes.
If you have customised the templates *names* for these templates, it is recommended
to verify they end in ``.html`` to ensure autoescape is enabled.
The above applies to the following templates:
* ``add_threepid.html``
* ``add_threepid_failure.html``
* ``add_threepid_success.html``
* ``notice_expiry.html``
* ``notice_expiry.html``
* ``notif_mail.html`` (which, by default, includes ``room.html`` and ``notif.html``)
* ``password_reset.html``
* ``password_reset_confirmation.html``
* ``password_reset_failure.html``
* ``password_reset_success.html``
* ``registration.html``
* ``registration_failure.html``
* ``registration_success.html``
* ``sso_account_deactivated.html``
* ``sso_auth_bad_user.html``
* ``sso_auth_confirm.html``
* ``sso_auth_success.html``
* ``sso_error.html``
* ``sso_login_idp_picker.html``
* ``sso_redirect_confirm.html``
Upgrading to v1.26.0
====================
Rolling back to v1.25.0 after a failed upgrade
----------------------------------------------
v1.26.0 includes a lot of large changes. If something problematic occurs, you
may want to roll back to a previous version of Synapse. Because v1.26.0 also
includes a new database schema version, reverting that version is also required
alongside the generic rollback instructions mentioned above. In short, to roll
back to v1.25.0 you need to:
1. Stop the server
2. Decrease the schema version in the database:
.. code:: sql
UPDATE schema_version SET version = 58;
3. Delete the ignored users & chain cover data:
.. code:: sql
DROP TABLE IF EXISTS ignored_users;
UPDATE rooms SET has_auth_chain_index = false;
For PostgreSQL run:
.. code:: sql
TRUNCATE event_auth_chain_links;
TRUNCATE event_auth_chains;
For SQLite run:
.. code:: sql
DELETE FROM event_auth_chain_links;
DELETE FROM event_auth_chains;
4. Mark the deltas as not run (so they will re-run on upgrade).
.. code:: sql
DELETE FROM applied_schema_deltas WHERE version = 59 AND file = "59/01ignored_user.py";
DELETE FROM applied_schema_deltas WHERE version = 59 AND file = "59/06chain_cover_index.sql";
5. Downgrade Synapse by following the instructions for your installation method
in the "Rolling back to older versions" section above.
Upgrading to v1.25.0
====================
Last release supporting Python 3.5
----------------------------------
This is the last release of Synapse which guarantees support with Python 3.5,
which passed its upstream End of Life date several months ago.
We will attempt to maintain support through March 2021, but without guarantees.
In the future, Synapse will follow upstream schedules for ending support of
older versions of Python and PostgreSQL. Please upgrade to at least Python 3.6
and PostgreSQL 9.6 as soon as possible.
Blacklisting IP ranges
----------------------
Synapse v1.25.0 includes new settings, ``ip_range_blacklist`` and
``ip_range_whitelist``, for controlling outgoing requests from Synapse for federation,
identity servers, push, and for checking key validity for third-party invite events.
The previous setting, ``federation_ip_range_blacklist``, is deprecated. The new
``ip_range_blacklist`` defaults to private IP ranges if it is not defined.
If you have never customised ``federation_ip_range_blacklist`` it is recommended
that you remove that setting.
If you have customised ``federation_ip_range_blacklist`` you should update the
setting name to ``ip_range_blacklist``.
If you have a custom push server that is reached via private IP space you may
need to customise ``ip_range_blacklist`` or ``ip_range_whitelist``.
Upgrading to v1.24.0
====================
Custom OpenID Connect mapping provider breaking change
------------------------------------------------------
This release allows the OpenID Connect mapping provider to perform normalisation
of the localpart of the Matrix ID. This allows for the mapping provider to
specify different algorithms, instead of the `default way <https://matrix.org/docs/spec/appendices#mapping-from-other-character-sets>`_.
If your Synapse configuration uses a custom mapping provider
(`oidc_config.user_mapping_provider.module` is specified and not equal to
`synapse.handlers.oidc_handler.JinjaOidcMappingProvider`) then you *must* ensure
that `map_user_attributes` of the mapping provider performs some normalisation
of the `localpart` returned. To match previous behaviour you can use the
`map_username_to_mxid_localpart` function provided by Synapse. An example is
shown below:
.. code-block:: python
from synapse.types import map_username_to_mxid_localpart
class MyMappingProvider:
def map_user_attributes(self, userinfo, token):
# ... your custom logic ...
sso_user_id = ...
localpart = map_username_to_mxid_localpart(sso_user_id)
return {"localpart": localpart}
Removal of historical Synapse Admin API
---------------------------------------
Historically, the Synapse Admin API has been accessible under:
* ``/_matrix/client/api/v1/admin``
* ``/_matrix/client/unstable/admin``
* ``/_matrix/client/r0/admin``
* ``/_synapse/admin/v1``
The endpoints with ``/_matrix/client/*`` prefixes have been removed as of v1.24.0.
The Admin API is now only accessible under:
* ``/_synapse/admin/v1``
The only exception is the ``/admin/whois`` endpoint, which is
`also available via the client-server API <https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid>`_.
The deprecation of the old endpoints was announced with Synapse 1.20.0 (released
on 2020-09-22) and makes it easier for homeserver admins to lock down external
access to the Admin API endpoints.
Upgrading to v1.23.0
====================
Structured logging configuration breaking changes
-------------------------------------------------
This release deprecates use of the ``structured: true`` logging configuration for
structured logging. If your logging configuration contains ``structured: true``
then it should be modified based on the `structured logging documentation
<https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md>`_.
The ``structured`` and ``drains`` logging options are now deprecated and should
be replaced by standard logging configuration of ``handlers`` and ``formatters``.
A future release of Synapse will make using ``structured: true`` an error.
Upgrading to v1.22.0
====================
ThirdPartyEventRules breaking changes
-------------------------------------
This release introduces a backwards-incompatible change to modules making use of
``ThirdPartyEventRules`` in Synapse. If you make use of a module defined under the
``third_party_event_rules`` config option, please make sure it is updated to handle
the below change:
The ``http_client`` argument is no longer passed to modules as they are initialised. Instead,
modules are expected to make use of the ``http_client`` property on the ``ModuleApi`` class.
Modules are now passed a ``module_api`` argument during initialisation, which is an instance of
``ModuleApi``. ``ModuleApi`` instances have a ``http_client`` property which acts the same as
the ``http_client`` argument previously passed to ``ThirdPartyEventRules`` modules.
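
A sketch of what a module's initialisation might look like after this change
(the class and config shape are illustrative, not taken from a real module):

.. code-block:: python

   # After this change the HTTP client is obtained from the ModuleApi
   # instance rather than being passed in directly.
   class ExampleThirdPartyEventRules:
       def __init__(self, config, module_api):
           self.config = config
           self.http_client = module_api.http_client  # was an http_client argument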
Upgrading to v1.21.0
====================
Forwarding ``/_synapse/client`` through your reverse proxy
----------------------------------------------------------
The `reverse proxy documentation
<https://github.com/matrix-org/synapse/blob/develop/docs/reverse_proxy.md>`_ has been updated
to include reverse proxy directives for ``/_synapse/client/*`` endpoints. As the user password
reset flow now uses endpoints under this prefix, **you must update your reverse proxy
configurations for user password reset to work**.
Additionally, note that the `Synapse worker documentation
<https://github.com/matrix-org/synapse/blob/develop/docs/workers.md>`_ has been updated to
state that the ``/_synapse/client/password_reset/email/submit_token`` endpoint can be handled
by all workers. If you make use of Synapse's worker feature, please update your reverse proxy
configuration to reflect this change.
New HTML templates
------------------
A new HTML template,
`password_reset_confirmation.html <https://github.com/matrix-org/synapse/blob/develop/synapse/res/templates/password_reset_confirmation.html>`_,
has been added to the ``synapse/res/templates`` directory. If you are using a
custom template directory, you may want to copy the template over and modify it.
Note that as of v1.20.0, templates do not need to be included in custom template
directories for Synapse to start. The default templates will be used if a custom
template cannot be found.
This page will appear to the user after clicking a password reset link that has
been emailed to them.
To complete password reset, the page must include a way to make a `POST`
request to
``/_synapse/client/password_reset/{medium}/submit_token``
with the query parameters from the original link, presented as a URL-encoded form. See the file
itself for more details.
Updated Single Sign-on HTML Templates
-------------------------------------
The ``saml_error.html`` template was removed from Synapse and replaced with the
``sso_error.html`` template. If your Synapse is configured to use SAML and a
custom ``sso_redirect_confirm_template_dir`` configuration then any customisations
of the ``saml_error.html`` template will need to be merged into the ``sso_error.html``
template. These templates are similar, but the parameters are slightly different:
* The ``msg`` parameter should be renamed to ``error_description``.
* There is no longer a ``code`` parameter for the response code.
* A string ``error`` parameter is available that includes a short hint of why a
user is seeing the error page.
Upgrading to v1.18.0
====================
Docker `-py3` suffix will be removed in future versions
-------------------------------------------------------
From 10th August 2020, we will no longer publish Docker images with the `-py3` tag suffix. The images tagged with the `-py3` suffix have been identical to the non-suffixed tags since release 0.99.0, and the suffix is obsolete.
On 10th August, we will remove the `latest-py3` tag. Existing per-release tags (such as `v1.18.0-py3`) will not be removed, but no new `-py3` tags will be added.
Scripts relying on the `-py3` suffix will need to be updated.
Redis replication is now recommended in lieu of TCP replication
---------------------------------------------------------------
When setting up worker processes, we now recommend the use of a Redis server for replication. **The old direct TCP connection method is deprecated and will be removed in a future release.**
See `docs/workers.md <docs/workers.md>`_ for more details.
Upgrading to v1.14.0
====================
This version includes a database update which is run as part of the upgrade,
and which may take a couple of minutes in the case of a large server. Synapse
will not respond to HTTP requests while this update is taking place.
Upgrading to v1.13.0
====================

Incorrect database migration in old synapse versions
----------------------------------------------------

1
changelog.d/6391.feature Normal file
View File

@@ -0,0 +1 @@
Synapse's cache factor can now be configured in `homeserver.yaml` by the `caches.global_factor` setting. Additionally, `caches.per_cache_factors` controls the cache factors for individual caches.

1
changelog.d/7256.feature Normal file
View File

@@ -0,0 +1 @@
Add OpenID Connect login/registration support. Contributed by Quentin Gliech, on behalf of [les Connecteurs](https://connecteu.rs).

1
changelog.d/7281.misc Normal file
View File

@@ -0,0 +1 @@
Add MultiWriterIdGenerator to support multiple concurrent writers of streams.

1
changelog.d/7317.feature Normal file
View File

@@ -0,0 +1 @@
Add room details admin endpoint. Contributed by Awesome Technologies Innovationslabor GmbH.

1
changelog.d/7374.misc Normal file
View File

@@ -0,0 +1 @@
Move catchup of replication streams logic to worker.

1
changelog.d/7382.misc Normal file
View File

@@ -0,0 +1 @@
Add typing annotations in `synapse.federation`.

1
changelog.d/7396.misc Normal file
View File

@@ -0,0 +1 @@
Convert the room handler to async/await.

1
changelog.d/7398.docker Normal file
View File

@@ -0,0 +1 @@
Update docker runtime image to Alpine v3.11. Contributed by @Starbix.

1
changelog.d/7428.misc Normal file
View File

@@ -0,0 +1 @@
Improve performance of `get_e2e_cross_signing_key`.

1
changelog.d/7429.misc Normal file
View File

@@ -0,0 +1 @@
Improve performance of `mark_as_sent_devices_by_remote`.

1
changelog.d/7435.feature Normal file
View File

@@ -0,0 +1 @@
Allow for using more than one spam checker module at once.

1
changelog.d/7436.misc Normal file
View File

@@ -0,0 +1 @@
Support any process writing to cache invalidation stream.

1
changelog.d/7440.misc Normal file
View File

@@ -0,0 +1 @@
Refactor event persistence database functions in preparation for allowing them to be run on non-master processes.

1
changelog.d/7445.misc Normal file
View File

@@ -0,0 +1 @@
Add type hints to the SAML handler.

1
changelog.d/7448.misc Normal file
View File

@@ -0,0 +1 @@
Remove storage method `get_hosts_in_room` that is no longer called anywhere.

1
changelog.d/7449.misc Normal file
View File

@@ -0,0 +1 @@
Fix some typos in the notice_expiry templates.

1
changelog.d/7458.doc Normal file
View File

@@ -0,0 +1 @@
Update information about mapping providers for SAML and OpenID.

1
changelog.d/7459.misc Normal file
View File

@@ -0,0 +1 @@
Convert the federation handler to async/await.

1
changelog.d/7460.misc Normal file
View File

@@ -0,0 +1 @@
Convert the search handler to async/await.

1
changelog.d/7470.misc Normal file
View File

@@ -0,0 +1 @@
Fix linting errors in new version of Flake8.

1
changelog.d/7475.misc Normal file
View File

@@ -0,0 +1 @@
Have all instances correctly respond to REPLICATE command.

1
changelog.d/7477.doc Normal file
View File

@@ -0,0 +1 @@
Fix copy-paste error in `ServerNoticesConfig` docstring. Contributed by @ptman.

1
changelog.d/7482.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix Redis reconnection logic that can result in missed updates over replication if master reconnects to Redis without restarting.

1
changelog.d/7490.misc Normal file
View File

@@ -0,0 +1 @@
Clean up replication unit tests.

1
changelog.d/7491.misc Normal file
View File

@@ -0,0 +1 @@
Move event stream handling out of slave store.

1
changelog.d/7492.misc Normal file
View File

@@ -0,0 +1 @@
Allow censoring of events to happen on workers.

1
changelog.d/7493.misc Normal file
View File

@@ -0,0 +1 @@
Move EventStream handling into default ReplicationDataHandler.

1
changelog.d/7495.feature Normal file
View File

@@ -0,0 +1 @@
Add `instance_map` config and route replication calls.

View File

@@ -1 +0,0 @@
Ensure python uses `malloc` when running Synapse in Docker.

View File

@@ -15,6 +15,11 @@
# limitations under the License.

""" Starts a synapse client console. """

import argparse
import cmd
import getpass
@@ -23,15 +28,12 @@ import shlex
import sys
import time
import urllib
from http import TwistedHttpClient
from typing import Optional

import nacl.encoding
import nacl.signing
import urlparse
from signedjson.sign import SignatureVerifyException, verify_signed_json

from twisted.internet import defer, reactor, threads
CONFIG_JSON = "cmdclient_config.json"
@@ -93,7 +95,7 @@ class SynapseCmd(cmd.Cmd):
        return self.config["user"].split(":")[1]

    def do_config(self, line):
        """Show the config for this client: "config"
        Edit a key value mapping: "config key value" e.g. "config token 1234"
        Config variables:
            user: The username to auth with.
@@ -361,7 +363,7 @@ class SynapseCmd(cmd.Cmd):
            print(e)

    def do_topic(self, line):
        """ "topic [set|get] <roomid> [<newtopic>]"
        Set the topic for a room: topic set <roomid> <newtopic>
        Get the topic for a room: topic get <roomid>
        """
@@ -491,7 +493,7 @@ class SynapseCmd(cmd.Cmd):
"list messages <roomid> from=END&to=START&limit=3" "list messages <roomid> from=END&to=START&limit=3"
""" """
args = self._parse(line, ["type", "roomid", "qp"]) args = self._parse(line, ["type", "roomid", "qp"])
if "type" not in args or "roomid" not in args: if not "type" in args or not "roomid" in args:
print("Must specify type and room ID.") print("Must specify type and room ID.")
return return
if args["type"] not in ["members", "messages"]: if args["type"] not in ["members", "messages"]:
@@ -506,7 +508,7 @@ class SynapseCmd(cmd.Cmd):
            try:
                key_value = key_value_str.split("=")
                qp[key_value[0]] = key_value[1]
            except Exception:
                print("Bad query param: %s" % key_value)
                return
@@ -583,7 +585,7 @@ class SynapseCmd(cmd.Cmd):
            parsed_url = urlparse.urlparse(args["path"])
            qp.update(urlparse.parse_qs(parsed_url.query))
            args["path"] = parsed_url.path
        except Exception:
            pass

        reactor.callFromThread(
@@ -608,8 +610,7 @@ class SynapseCmd(cmd.Cmd):
    @defer.inlineCallbacks
    def _do_event_stream(self, timeout):
        res = yield defer.ensureDeferred(
            self.http_client.get_json(
                self._url() + "/events",
                {
                    "access_token": self._tok(),
@@ -617,7 +618,6 @@ class SynapseCmd(cmd.Cmd):
"from": self.event_stream_token, "from": self.event_stream_token,
}, },
) )
)
print(json.dumps(res, indent=4)) print(json.dumps(res, indent=4))
if "chunk" in res: if "chunk" in res:
@@ -691,7 +691,7 @@ class SynapseCmd(cmd.Cmd):
        self._do_presence_state(2, line)

    def _parse(self, line, keys, force_keys=False):
        """Parses the given line.

        Args:
            line : The line to parse
@@ -719,10 +719,10 @@ class SynapseCmd(cmd.Cmd):
        method,
        path,
        data=None,
        query_params: Optional[dict] = None,
        alt_text=None,
    ):
        """Runs an HTTP request and pretty prints the output.

        Args:
            method: HTTP method
@@ -730,8 +730,6 @@ class SynapseCmd(cmd.Cmd):
            data: Raw JSON data if any
            query_params: dict of query parameters to add to the url
        """
        query_params = query_params or {"access_token": None}

        url = self._url() + path
        if "access_token" in query_params:
            query_params["access_token"] = self._tok()
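        # For reference, the reason for moving `query_params` to an
        # `Optional[dict] = None` default above: a mutable default argument is
        # created once at function definition time and shared between calls.
        # Illustrative sketch (not part of this file):
        #
        #     def add(item, bucket=[]):   # one shared list for every call
        #         bucket.append(item)
        #         return bucket
        #
        #     add(1)  # [1]
        #     add(2)  # [1, 2]  <- state leaked from the first call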
@@ -774,10 +772,10 @@ def main(server_url, identity_server_url, username, token, config_path):
            syn_cmd.config = json.load(config)
            try:
                http_client.verbose = "on" == syn_cmd.config["verbose"]
            except Exception:
                pass
        print("Loaded config from %s" % config_path)
    except Exception:
        pass
    # Twisted-specific: Runs the command processor in Twisted's event loop

View File

@@ -1,3 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,21 +13,23 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import json from __future__ import print_function
import urllib
from pprint import pformat
from typing import Optional
from twisted.internet import defer, reactor
from twisted.web.client import Agent, readBody from twisted.web.client import Agent, readBody
from twisted.web.http_headers import Headers from twisted.web.http_headers import Headers
from twisted.internet import defer, reactor
from pprint import pformat
import json
import urllib
-class HttpClient:
-    """Interface for talking json over http"""
+class HttpClient(object):
+    """ Interface for talking json over http
+    """
def put_json(self, url, data): def put_json(self, url, data):
"""Sends the specifed json data using PUT """ Sends the specifed json data using PUT
Args: Args:
url (str): The URL to PUT data to. url (str): The URL to PUT data to.
@@ -40,7 +43,7 @@ class HttpClient:
pass pass
def get_json(self, url, args=None): def get_json(self, url, args=None):
"""Gets some json from the given host homeserver and path """ Gets some json from the given host homeserver and path
Args: Args:
url (str): The URL to GET data from. url (str): The URL to GET data from.
@@ -57,7 +60,7 @@ class HttpClient:
class TwistedHttpClient(HttpClient): class TwistedHttpClient(HttpClient):
"""Wrapper around the twisted HTTP client api. """ Wrapper around the twisted HTTP client api.
Attributes: Attributes:
agent (twisted.web.client.Agent): The twisted Agent used to send the agent (twisted.web.client.Agent): The twisted Agent used to send the
@@ -85,9 +88,9 @@ class TwistedHttpClient(HttpClient):
body = yield readBody(response) body = yield readBody(response)
defer.returnValue(json.loads(body)) defer.returnValue(json.loads(body))
-    def _create_put_request(self, url, json_data, headers_dict: Optional[dict] = None):
-        """Wrapper of _create_request to issue a PUT request"""
-        headers_dict = headers_dict or {}
+    def _create_put_request(self, url, json_data, headers_dict={}):
+        """ Wrapper of _create_request to issue a PUT request
+        """
if "Content-Type" not in headers_dict: if "Content-Type" not in headers_dict:
raise defer.error(RuntimeError("Must include Content-Type header for PUTs")) raise defer.error(RuntimeError("Must include Content-Type header for PUTs"))
@@ -96,22 +99,15 @@ class TwistedHttpClient(HttpClient):
"PUT", url, producer=_JsonProducer(json_data), headers_dict=headers_dict "PUT", url, producer=_JsonProducer(json_data), headers_dict=headers_dict
) )
-    def _create_get_request(self, url, headers_dict: Optional[dict] = None):
-        """Wrapper of _create_request to issue a GET request"""
-        return self._create_request("GET", url, headers_dict=headers_dict or {})
+    def _create_get_request(self, url, headers_dict={}):
+        """ Wrapper of _create_request to issue a GET request
+        """
+        return self._create_request("GET", url, headers_dict=headers_dict)
@defer.inlineCallbacks @defer.inlineCallbacks
     def do_request(
-        self,
-        method,
-        url,
-        data=None,
-        qparams=None,
-        jsonreq=True,
-        headers: Optional[dict] = None,
+        self, method, url, data=None, qparams=None, jsonreq=True, headers={}
     ):
-        headers = headers or {}
if qparams: if qparams:
url = "%s?%s" % (url, urllib.urlencode(qparams, True)) url = "%s?%s" % (url, urllib.urlencode(qparams, True))
@@ -132,12 +128,9 @@ class TwistedHttpClient(HttpClient):
defer.returnValue(json.loads(body)) defer.returnValue(json.loads(body))
     @defer.inlineCallbacks
-    def _create_request(
-        self, method, url, producer=None, headers_dict: Optional[dict] = None
-    ):
-        """Creates and sends a request to the given url"""
-        headers_dict = headers_dict or {}
+    def _create_request(self, method, url, producer=None, headers_dict={}):
+        """ Creates and sends a request to the given url
+        """
headers_dict["User-Agent"] = ["Synapse Cmd Client"] headers_dict["User-Agent"] = ["Synapse Cmd Client"]
retries_left = 5 retries_left = 5
@@ -176,7 +169,7 @@ class TwistedHttpClient(HttpClient):
return d return d
class _RawProducer: class _RawProducer(object):
def __init__(self, data): def __init__(self, data):
self.data = data self.data = data
self.body = data self.body = data
@@ -193,8 +186,9 @@ class _RawProducer:
pass pass
-class _JsonProducer:
-    """Used by the twisted http client to create the HTTP body from json"""
+class _JsonProducer(object):
+    """ Used by the twisted http client to create the HTTP body from json
+    """
def __init__(self, jsn): def __init__(self, jsn):
self.data = jsn self.data = jsn
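(For context, Twisted's `Agent` reads request bodies through the `IBodyProducer` interface, which is what `_JsonProducer` implements by hand. A minimal standalone version, a sketch rather than the repo's code, looks like this:)

```python
import json

from twisted.internet import defer
from twisted.web.iweb import IBodyProducer
from zope.interface import implementer


@implementer(IBodyProducer)
class JsonProducer:
    """Produce a JSON-encoded HTTP body for twisted.web.client.Agent."""

    def __init__(self, obj):
        self.body = json.dumps(obj).encode("utf-8")
        self.length = len(self.body)  # required by IBodyProducer

    def startProducing(self, consumer):
        consumer.write(self.body)
        return defer.succeed(None)

    def pauseProducing(self):
        pass

    def stopProducing(self):
        pass
```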


@@ -50,7 +50,7 @@ services:
- traefik.http.routers.https-synapse.tls.certResolver=le-ssl - traefik.http.routers.https-synapse.tls.certResolver=le-ssl
db: db:
image: docker.io/postgres:12-alpine image: docker.io/postgres:10-alpine
# Change that password, of course! # Change that password, of course!
environment: environment:
- POSTGRES_USER=synapse - POSTGRES_USER=synapse


@@ -63,7 +63,8 @@ class CursesStdIO:
self.redraw() self.redraw()
def redraw(self): def redraw(self):
"""method for redisplaying lines based on internal list of lines""" """ method for redisplaying lines
based on internal list of lines """
self.stdscr.clear() self.stdscr.clear()
self.paintStatus(self.statusText) self.paintStatus(self.statusText)
@@ -140,7 +141,7 @@ class CursesStdIO:
curses.endwin() curses.endwin()
class Callback: class Callback(object):
def __init__(self, stdio): def __init__(self, stdio):
self.stdio = stdio self.stdio = stdio


@@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd # Copyright 2014-2016 OpenMarket Ltd
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
@@ -27,24 +28,27 @@ Currently assumes the local address is localhost:<port>
""" """
+from synapse.federation import ReplicationHandler
+from synapse.federation.units import Pdu
+from synapse.util import origin_from_ucid
+from synapse.app.homeserver import SynapseHomeServer
+# from synapse.logging.utils import log_function
+from twisted.internet import reactor, defer
+from twisted.python import log
 import argparse
-import curses.wrapper
 import json
 import logging
 import os
 import re
 import cursesio
+import curses.wrapper
-from twisted.internet import defer, reactor
-from twisted.python import log
-from synapse.app.homeserver import SynapseHomeServer
-from synapse.federation import ReplicationHandler
-from synapse.federation.units import Pdu
-from synapse.util import origin_from_ucid
-# from synapse.logging.utils import log_function
logger = logging.getLogger("example") logger = logging.getLogger("example")
@@ -54,8 +58,8 @@ def excpetion_errback(failure):
logging.exception(failure) logging.exception(failure)
-class InputOutput:
-    """This is responsible for basic I/O so that a user can interact with
+class InputOutput(object):
+    """ This is responsible for basic I/O so that a user can interact with
     the example app.
     """
@@ -67,10 +71,11 @@ class InputOutput:
self.server = server self.server = server
def on_line(self, line): def on_line(self, line):
"""This is where we process commands.""" """ This is where we process commands.
"""
try: try:
m = re.match(r"^join (\S+)$", line) m = re.match("^join (\S+)$", line)
if m: if m:
# The `sender` wants to join a room. # The `sender` wants to join a room.
(room_name,) = m.groups() (room_name,) = m.groups()
@@ -79,7 +84,7 @@ class InputOutput:
# self.print_line("OK.") # self.print_line("OK.")
return return
m = re.match(r"^invite (\S+) (\S+)$", line) m = re.match("^invite (\S+) (\S+)$", line)
if m: if m:
# `sender` wants to invite someone to a room # `sender` wants to invite someone to a room
room_name, invitee = m.groups() room_name, invitee = m.groups()
@@ -88,7 +93,7 @@ class InputOutput:
# self.print_line("OK.") # self.print_line("OK.")
return return
m = re.match(r"^send (\S+) (.*)$", line) m = re.match("^send (\S+) (.*)$", line)
if m: if m:
# `sender` wants to message a room # `sender` wants to message a room
room_name, body = m.groups() room_name, body = m.groups()
@@ -97,7 +102,7 @@ class InputOutput:
# self.print_line("OK.") # self.print_line("OK.")
return return
m = re.match(r"^backfill (\S+)$", line) m = re.match("^backfill (\S+)$", line)
if m: if m:
# we want to backfill a room # we want to backfill a room
(room_name,) = m.groups() (room_name,) = m.groups()
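(The `r"..."` prefixes added on the left are the usual fix for regex literals: in a raw string, `\S` reaches the `re` module untouched instead of being treated as an unrecognised string escape, which newer Pythons flag with a `DeprecationWarning`. For example:)

```python
import re

# Both spellings currently match, but only the raw string is future-proof.
m = re.match(r"^join (\S+)$", "join #room:example.com")
assert m is not None and m.group(1) == "#room:example.com"
```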
@@ -130,8 +135,8 @@ class IOLoggerHandler(logging.Handler):
self.io.print_log(msg) self.io.print_log(msg)
-class Room:
-    """Used to store (in memory) the current membership state of a room, and
+class Room(object):
+    """ Used to store (in memory) the current membership state of a room, and
which home servers we should send PDUs associated with the room to. which home servers we should send PDUs associated with the room to.
""" """
@@ -146,7 +151,8 @@ class Room:
self.have_got_metadata = False self.have_got_metadata = False
def add_participant(self, participant): def add_participant(self, participant):
"""Someone has joined the room""" """ Someone has joined the room
"""
self.participants.add(participant) self.participants.add(participant)
self.invited.discard(participant) self.invited.discard(participant)
@@ -157,13 +163,14 @@ class Room:
self.oldest_server = server self.oldest_server = server
def add_invited(self, invitee): def add_invited(self, invitee):
"""Someone has been invited to the room""" """ Someone has been invited to the room
"""
self.invited.add(invitee) self.invited.add(invitee)
self.servers.add(origin_from_ucid(invitee)) self.servers.add(origin_from_ucid(invitee))
class HomeServer(ReplicationHandler): class HomeServer(ReplicationHandler):
"""A very basic home server implentation that allows people to join a """ A very basic home server implentation that allows people to join a
room and then invite other people. room and then invite other people.
""" """
@@ -177,7 +184,8 @@ class HomeServer(ReplicationHandler):
self.output = output self.output = output
def on_receive_pdu(self, pdu): def on_receive_pdu(self, pdu):
"""We just received a PDU""" """ We just received a PDU
"""
pdu_type = pdu.pdu_type pdu_type = pdu.pdu_type
if pdu_type == "sy.room.message": if pdu_type == "sy.room.message":
@@ -193,21 +201,34 @@ class HomeServer(ReplicationHandler):
% (pdu.context, pdu.pdu_type, json.dumps(pdu.content)) % (pdu.context, pdu.pdu_type, json.dumps(pdu.content))
) )
# def on_state_change(self, pdu):
##self.output.print_line("#%s (state) %s *** %s" %
##(pdu.context, pdu.state_key, pdu.pdu_type)
##)
# if "joinee" in pdu.content:
# self._on_join(pdu.context, pdu.content["joinee"])
# elif "invitee" in pdu.content:
# self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"])
def _on_message(self, pdu): def _on_message(self, pdu):
"""We received a message""" """ We received a message
"""
self.output.print_line( self.output.print_line(
"#%s %s %s" % (pdu.context, pdu.content["sender"], pdu.content["body"]) "#%s %s %s" % (pdu.context, pdu.content["sender"], pdu.content["body"])
) )
def _on_join(self, context, joinee): def _on_join(self, context, joinee):
"""Someone has joined a room, either a remote user or a local user""" """ Someone has joined a room, either a remote user or a local user
"""
room = self._get_or_create_room(context) room = self._get_or_create_room(context)
room.add_participant(joinee) room.add_participant(joinee)
self.output.print_line("#%s %s %s" % (context, joinee, "*** JOINED")) self.output.print_line("#%s %s %s" % (context, joinee, "*** JOINED"))
def _on_invite(self, origin, context, invitee): def _on_invite(self, origin, context, invitee):
"""Someone has been invited""" """ Someone has been invited
"""
room = self._get_or_create_room(context) room = self._get_or_create_room(context)
room.add_invited(invitee) room.add_invited(invitee)
@@ -220,7 +241,8 @@ class HomeServer(ReplicationHandler):
@defer.inlineCallbacks @defer.inlineCallbacks
def send_message(self, room_name, sender, body): def send_message(self, room_name, sender, body):
"""Send a message to a room!""" """ Send a message to a room!
"""
destinations = yield self.get_servers_for_context(room_name) destinations = yield self.get_servers_for_context(room_name)
try: try:
@@ -238,7 +260,8 @@ class HomeServer(ReplicationHandler):
@defer.inlineCallbacks @defer.inlineCallbacks
def join_room(self, room_name, sender, joinee): def join_room(self, room_name, sender, joinee):
"""Join a room!""" """ Join a room!
"""
self._on_join(room_name, joinee) self._on_join(room_name, joinee)
destinations = yield self.get_servers_for_context(room_name) destinations = yield self.get_servers_for_context(room_name)
@@ -259,7 +282,8 @@ class HomeServer(ReplicationHandler):
@defer.inlineCallbacks @defer.inlineCallbacks
def invite_to_room(self, room_name, sender, invitee): def invite_to_room(self, room_name, sender, invitee):
"""Invite someone to a room!""" """ Invite someone to a room!
"""
self._on_invite(self.server_name, room_name, invitee) self._on_invite(self.server_name, room_name, invitee)
destinations = yield self.get_servers_for_context(room_name) destinations = yield self.get_servers_for_context(room_name)
@@ -290,7 +314,7 @@ class HomeServer(ReplicationHandler):
return self.replication_layer.backfill(dest, room_name, limit) return self.replication_layer.backfill(dest, room_name, limit)
def _get_room_remote_servers(self, room_name): def _get_room_remote_servers(self, room_name):
return list(self.joined_rooms.setdefault(room_name).servers) return [i for i in self.joined_rooms.setdefault(room_name).servers]
def _get_or_create_room(self, room_name): def _get_or_create_room(self, room_name):
return self.joined_rooms.setdefault(room_name, Room(room_name)) return self.joined_rooms.setdefault(room_name, Room(room_name))
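(Worth noting: `_get_room_remote_servers` uses the one-argument form `setdefault(room_name)`, which inserts and returns `None` for an unknown room, so the `.servers` attribute lookup there would raise; the two-argument form used here supplies a fresh `Room` instead. In short:)

```python
rooms = {}

r = rooms.setdefault("!unknown")  # one-arg form defaults to None
assert r is None                  # r.servers would raise AttributeError


class Room:
    def __init__(self, name):
        self.name = name
        self.servers = set()


r = rooms.setdefault("!new", Room("!new"))  # two-arg form inserts the default
assert r.servers == set()
```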
@@ -310,7 +334,7 @@ def main(stdscr):
user = args.user user = args.user
server_name = origin_from_ucid(user) server_name = origin_from_ucid(user)
# Set up logging ## Set up logging ##
root_logger = logging.getLogger() root_logger = logging.getLogger()
@@ -330,7 +354,7 @@ def main(stdscr):
observer = log.PythonLoggingObserver() observer = log.PythonLoggingObserver()
observer.start() observer.start()
# Set up synapse server ## Set up synapse server
curses_stdio = cursesio.CursesStdIO(stdscr) curses_stdio = cursesio.CursesStdIO(stdscr)
input_output = InputOutput(curses_stdio, user) input_output = InputOutput(curses_stdio, user)
@@ -344,16 +368,16 @@ def main(stdscr):
input_output.set_home_server(hs) input_output.set_home_server(hs)
# Add input_output logger ## Add input_output logger
io_logger = IOLoggerHandler(input_output) io_logger = IOLoggerHandler(input_output)
io_logger.setFormatter(formatter) io_logger.setFormatter(formatter)
root_logger.addHandler(io_logger) root_logger.addHandler(io_logger)
# Start! ## Start! ##
try: try:
port = int(server_name.split(":")[1]) port = int(server_name.split(":")[1])
except Exception: except:
port = 12345 port = 12345
app_hs.get_http_server().start_listening(port) app_hs.get_http_server().start_listening(port)


@@ -3,4 +3,4 @@
0. Set up Prometheus and Grafana. Out of scope for this readme. Useful documentation about using Grafana with Prometheus: http://docs.grafana.org/features/datasources/prometheus/ 0. Set up Prometheus and Grafana. Out of scope for this readme. Useful documentation about using Grafana with Prometheus: http://docs.grafana.org/features/datasources/prometheus/
1. Have your Prometheus scrape your Synapse. https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md 1. Have your Prometheus scrape your Synapse. https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md
2. Import dashboard into Grafana. Download `synapse.json`. Import it to Grafana and select the correct Prometheus datasource. http://docs.grafana.org/reference/export_import/ 2. Import dashboard into Grafana. Download `synapse.json`. Import it to Grafana and select the correct Prometheus datasource. http://docs.grafana.org/reference/export_import/
3. Set up required recording rules. https://github.com/matrix-org/synapse/tree/master/contrib/prometheus 3. Set up additional recording rules

File diff suppressed because it is too large.


@@ -1,10 +1,4 @@
-import argparse
-import cgi
-import datetime
-import json
-import pydot
-import urllib2
+from __future__ import print_function
# Copyright 2014-2016 OpenMarket Ltd # Copyright 2014-2016 OpenMarket Ltd
# #
@@ -21,6 +15,15 @@ import urllib2
# limitations under the License. # limitations under the License.
import sqlite3
import pydot
import cgi
import json
import datetime
import argparse
import urllib2
def make_name(pdu_id, origin): def make_name(pdu_id, origin):
return "%s@%s" % (pdu_id, origin) return "%s@%s" % (pdu_id, origin)
@@ -30,7 +33,7 @@ def make_graph(pdus, room, filename_prefix):
node_map = {} node_map = {}
origins = set() origins = set()
colors = {"red", "green", "blue", "yellow", "purple"} colors = set(("red", "green", "blue", "yellow", "purple"))
for pdu in pdus: for pdu in pdus:
origins.add(pdu.get("origin")) origins.add(pdu.get("origin"))
@@ -46,7 +49,7 @@ def make_graph(pdus, room, filename_prefix):
try: try:
c = colors.pop() c = colors.pop()
color_map[o] = c color_map[o] = c
except Exception: except:
print("Run out of colours!") print("Run out of colours!")
color_map[o] = "black" color_map[o] = "black"


@@ -13,13 +13,12 @@
# limitations under the License. # limitations under the License.
-import argparse
-import cgi
-import datetime
-import json
 import sqlite3
 import pydot
+import cgi
+import json
+import datetime
+import argparse
from synapse.events import FrozenEvent from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze from synapse.util.frozenutils import unfreeze
@@ -99,7 +98,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
for prev_id, _ in event.prev_events: for prev_id, _ in event.prev_events:
try: try:
end_node = node_map[prev_id] end_node = node_map[prev_id]
except Exception: except:
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,)) end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
node_map[prev_id] = end_node node_map[prev_id] = end_node


@@ -1,12 +1,4 @@
-import argparse
-import cgi
-import datetime
-import pydot
-import simplejson as json
-from synapse.events import FrozenEvent
-from synapse.util.frozenutils import unfreeze
+from __future__ import print_function
# Copyright 2016 OpenMarket Ltd # Copyright 2016 OpenMarket Ltd
# #
@@ -23,6 +15,18 @@ from synapse.util.frozenutils import unfreeze
# limitations under the License. # limitations under the License.
import pydot
import cgi
import simplejson as json
import datetime
import argparse
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
from six import string_types
def make_graph(file_name, room_id, file_prefix, limit): def make_graph(file_name, room_id, file_prefix, limit):
print("Reading lines") print("Reading lines")
with open(file_name) as f: with open(file_name) as f:
@@ -58,7 +62,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for key, value in unfreeze(event.get_dict()["content"]).items(): for key, value in unfreeze(event.get_dict()["content"]).items():
if value is None: if value is None:
value = "<null>" value = "<null>"
elif isinstance(value, str): elif isinstance(value, string_types):
pass pass
else: else:
value = json.dumps(value) value = json.dumps(value)
@@ -104,7 +108,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for prev_id, _ in event.prev_events: for prev_id, _ in event.prev_events:
try: try:
end_node = node_map[prev_id] end_node = node_map[prev_id]
except Exception: except:
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,)) end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
node_map[prev_id] = end_node node_map[prev_id] = end_node


@@ -10,15 +10,17 @@ the bridge.
Requires: Requires:
npm install jquery jsdom npm install jquery jsdom
""" """
-import json
-import subprocess
-import time
+from __future__ import print_function
 import gevent
 import grequests
 from BeautifulSoup import BeautifulSoup
+import json
+import urllib
+import subprocess
+import time
ACCESS_TOKEN = "" # ACCESS_TOKEN="" #
MATRIXBASE = "https://matrix.org/_matrix/client/api/v1/" MATRIXBASE = "https://matrix.org/_matrix/client/api/v1/"
MYUSERNAME = "@davetest:matrix.org" MYUSERNAME = "@davetest:matrix.org"
@@ -193,12 +195,15 @@ class TrivialXmppClient:
time.sleep(7) time.sleep(7)
print("SSRC spammer started") print("SSRC spammer started")
while self.running: while self.running:
-            ssrcMsg = "<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>" % {
+            ssrcMsg = (
+                "<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>"
+                % {
                 "tojid": "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid),
                 "nick": self.userId,
                 "assrc": self.ssrcs["audio"],
                 "vssrc": self.ssrcs["video"],
             }
+            )
res = self.sendIq(ssrcMsg) res = self.sendIq(ssrcMsg)
print("reply from ssrc announce: ", res) print("reply from ssrc announce: ", res)
time.sleep(10) time.sleep(10)
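(The stanza above is a single long template filled in with `%`-formatting against a dict, which keeps the many substitutions readable by name rather than by position. Cut down to its essentials, with illustrative values:)

```python
template = (
    "<presence to='%(tojid)s'>"
    "<nick>%(nick)s</nick>"
    "<media><source type='audio' ssrc='%(assrc)s'/></media>"
    "</presence>"
)
# Named placeholders mean the order of the dict entries does not matter.
print(template % {"tojid": "room@conf/abc", "nick": "@davetest:matrix.org", "assrc": "1234"})
```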


@@ -20,7 +20,6 @@ Add a new job to the main prometheus.conf file:
``` ```
### for Prometheus v2 ### for Prometheus v2
Add a new job to the main prometheus.yml file: Add a new job to the main prometheus.yml file:
```yaml ```yaml
@@ -30,12 +29,9 @@ Add a new job to the main prometheus.yml file:
scheme: "https" scheme: "https"
static_configs: static_configs:
- targets: ["my.server.here:port"] - targets: ['SERVER.LOCATION:PORT']
``` ```
An example of a Prometheus configuration with workers can be found in
[metrics-howto.md](https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md).
To use `synapse.rules` add To use `synapse.rules` add
```yaml ```yaml


@@ -9,7 +9,7 @@
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#process_resource_utime"), node: document.querySelector("#process_resource_utime"),
expr: "rate(process_cpu_seconds_total[2m]) * 100", expr: "rate(process_cpu_seconds_total[2m]) * 100",
name: "[[job]]-[[index]]", name: "[[job]]",
min: 0, min: 0,
max: 100, max: 100,
renderer: "line", renderer: "line",
@@ -22,12 +22,12 @@ new PromConsole.Graph({
</script> </script>
<h3>Memory</h3> <h3>Memory</h3>
<div id="process_resident_memory_bytes"></div> <div id="process_resource_maxrss"></div>
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#process_resident_memory_bytes"), node: document.querySelector("#process_resource_maxrss"),
expr: "process_resident_memory_bytes", expr: "process_psutil_rss:max",
name: "[[job]]-[[index]]", name: "Maxrss",
min: 0, min: 0,
renderer: "line", renderer: "line",
height: 150, height: 150,
@@ -43,8 +43,8 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#process_fds"), node: document.querySelector("#process_fds"),
expr: "process_open_fds", expr: "process_open_fds{job='synapse'}",
name: "[[job]]-[[index]]", name: "FDs",
min: 0, min: 0,
renderer: "line", renderer: "line",
height: 150, height: 150,
@@ -62,8 +62,8 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#reactor_total_time"), node: document.querySelector("#reactor_total_time"),
expr: "rate(python_twisted_reactor_tick_time_sum[2m])", expr: "rate(python_twisted_reactor_tick_time:total[2m]) / 1000",
name: "[[job]]-[[index]]", name: "time",
max: 1, max: 1,
min: 0, min: 0,
renderer: "area", renderer: "area",
@@ -80,8 +80,8 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#reactor_average_time"), node: document.querySelector("#reactor_average_time"),
expr: "rate(python_twisted_reactor_tick_time_sum[2m]) / rate(python_twisted_reactor_tick_time_count[2m])", expr: "rate(python_twisted_reactor_tick_time:total[2m]) / rate(python_twisted_reactor_tick_time:count[2m]) / 1000",
name: "[[job]]-[[index]]", name: "time",
min: 0, min: 0,
renderer: "line", renderer: "line",
height: 150, height: 150,
@@ -97,14 +97,14 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#reactor_pending_calls"), node: document.querySelector("#reactor_pending_calls"),
expr: "rate(python_twisted_reactor_pending_calls_sum[30s]) / rate(python_twisted_reactor_pending_calls_count[30s])", expr: "rate(python_twisted_reactor_pending_calls:total[30s])/rate(python_twisted_reactor_pending_calls:count[30s])",
name: "[[job]]-[[index]]", name: "calls",
min: 0, min: 0,
renderer: "line", renderer: "line",
height: 150, height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yTitle: "Pending Calls" yTitle: "Pending Cals"
}) })
</script> </script>
@@ -115,7 +115,7 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_storage_query_time"), node: document.querySelector("#synapse_storage_query_time"),
expr: "sum(rate(synapse_storage_query_time_count[2m])) by (verb)", expr: "rate(synapse_storage_query_time:count[2m])",
name: "[[verb]]", name: "[[verb]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
@@ -129,8 +129,8 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transaction_time"), node: document.querySelector("#synapse_storage_transaction_time"),
expr: "topk(10, rate(synapse_storage_transaction_time_count[2m]))", expr: "rate(synapse_storage_transaction_time:count[2m])",
name: "[[job]]-[[index]] [[desc]]", name: "[[desc]]",
min: 0, min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
@@ -140,12 +140,12 @@ new PromConsole.Graph({
</script> </script>
<h3>Transaction execution time</h3> <h3>Transaction execution time</h3>
<div id="synapse_storage_transactions_time_sec"></div> <div id="synapse_storage_transactions_time_msec"></div>
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transactions_time_sec"), node: document.querySelector("#synapse_storage_transactions_time_msec"),
expr: "rate(synapse_storage_transaction_time_sum[2m])", expr: "rate(synapse_storage_transaction_time:total[2m]) / 1000",
name: "[[job]]-[[index]] [[desc]]", name: "[[desc]]",
min: 0, min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
@@ -154,33 +154,34 @@ new PromConsole.Graph({
}) })
</script> </script>
<h3>Average time waiting for database connection</h3> <h3>Database scheduling latency</h3>
<div id="synapse_storage_avg_waiting_time"></div> <div id="synapse_storage_schedule_time"></div>
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_storage_avg_waiting_time"), node: document.querySelector("#synapse_storage_schedule_time"),
expr: "rate(synapse_storage_schedule_time_sum[2m]) / rate(synapse_storage_schedule_time_count[2m])", expr: "rate(synapse_storage_schedule_time:total[2m]) / 1000",
name: "[[job]]-[[index]]", name: "Total latency",
min: 0, min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s", yUnits: "s/s",
yTitle: "Time" yTitle: "Usage"
}) })
</script> </script>
<h3>Cache request rate</h3> <h3>Cache hit ratio</h3>
<div id="synapse_cache_request_rate"></div> <div id="synapse_cache_ratio"></div>
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_cache_request_rate"), node: document.querySelector("#synapse_cache_ratio"),
expr: "rate(synapse_util_caches_cache:total[2m])", expr: "rate(synapse_util_caches_cache:total[2m]) * 100",
name: "[[job]]-[[index]] [[name]]", name: "[[name]]",
min: 0, min: 0,
max: 100,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "rps", yUnits: "%",
yTitle: "Cache request rate" yTitle: "Percentage"
}) })
</script> </script>
@@ -190,7 +191,7 @@ new PromConsole.Graph({
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_cache_size"), node: document.querySelector("#synapse_cache_size"),
expr: "synapse_util_caches_cache:size", expr: "synapse_util_caches_cache:size",
name: "[[job]]-[[index]] [[name]]", name: "[[name]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "", yUnits: "",
@@ -205,8 +206,8 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_request_count_servlet"), node: document.querySelector("#synapse_http_server_request_count_servlet"),
expr: "rate(synapse_http_server_in_flight_requests_count[2m])", expr: "rate(synapse_http_server_request_count:servlet[2m])",
name: "[[job]]-[[index]] [[method]] [[servlet]]", name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s", yUnits: "req/s",
@@ -218,8 +219,8 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_request_count_servlet_minus_events"), node: document.querySelector("#synapse_http_server_request_count_servlet_minus_events"),
expr: "rate(synapse_http_server_in_flight_requests_count{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])", expr: "rate(synapse_http_server_request_count:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
name: "[[job]]-[[index]] [[method]] [[servlet]]", name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s", yUnits: "req/s",
@@ -232,8 +233,8 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_time_avg"), node: document.querySelector("#synapse_http_server_response_time_avg"),
expr: "rate(synapse_http_server_response_time_seconds_sum[2m]) / rate(synapse_http_server_response_count[2m])", expr: "rate(synapse_http_server_response_time_seconds[2m]) / rate(synapse_http_server_response_count[2m]) / 1000",
name: "[[job]]-[[index]] [[servlet]]", name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req", yUnits: "s/req",
@@ -276,7 +277,7 @@ new PromConsole.Graph({
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_ru_utime"), node: document.querySelector("#synapse_http_server_response_ru_utime"),
expr: "rate(synapse_http_server_response_ru_utime_seconds[2m])", expr: "rate(synapse_http_server_response_ru_utime_seconds[2m])",
name: "[[job]]-[[index]] [[servlet]]", name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s", yUnits: "s/s",
@@ -291,7 +292,7 @@ new PromConsole.Graph({
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_db_txn_duration"), node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
expr: "rate(synapse_http_server_response_db_txn_duration_seconds[2m])", expr: "rate(synapse_http_server_response_db_txn_duration_seconds[2m])",
name: "[[job]]-[[index]] [[servlet]]", name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s", yUnits: "s/s",
@@ -305,8 +306,8 @@ new PromConsole.Graph({
<script> <script>
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_send_time_avg"), node: document.querySelector("#synapse_http_server_send_time_avg"),
expr: "rate(synapse_http_server_response_time_seconds_sum{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m])", expr: "rate(synapse_http_server_response_time_second{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
name: "[[job]]-[[index]] [[servlet]]", name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req", yUnits: "s/req",
@@ -322,7 +323,7 @@ new PromConsole.Graph({
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_federation_client_sent"), node: document.querySelector("#synapse_federation_client_sent"),
expr: "rate(synapse_federation_client_sent[2m])", expr: "rate(synapse_federation_client_sent[2m])",
name: "[[job]]-[[index]] [[type]]", name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s", yUnits: "req/s",
@@ -336,7 +337,7 @@ new PromConsole.Graph({
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_federation_server_received"), node: document.querySelector("#synapse_federation_server_received"),
expr: "rate(synapse_federation_server_received[2m])", expr: "rate(synapse_federation_server_received[2m])",
name: "[[job]]-[[index]] [[type]]", name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s", yUnits: "req/s",
@@ -366,7 +367,7 @@ new PromConsole.Graph({
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_listeners"), node: document.querySelector("#synapse_notifier_listeners"),
expr: "synapse_notifier_listeners", expr: "synapse_notifier_listeners",
name: "[[job]]-[[index]]", name: "listeners",
min: 0, min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix, yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
@@ -381,7 +382,7 @@ new PromConsole.Graph({
new PromConsole.Graph({ new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_notified_events"), node: document.querySelector("#synapse_notifier_notified_events"),
expr: "rate(synapse_notifier_notified_events[2m])", expr: "rate(synapse_notifier_notified_events[2m])",
name: "[[job]]-[[index]]", name: "events",
yAxisFormatter: PromConsole.NumberFormatter.humanize, yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize, yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "events/s", yUnits: "events/s",


@@ -58,21 +58,3 @@ groups:
labels: labels:
type: "PDU" type: "PDU"
expr: 'synapse_federation_transaction_queue_pending_pdus + 0' expr: 'synapse_federation_transaction_queue_pending_pdus + 0'
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_type="remote"})
labels:
type: remote
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity="*client*",origin_type="local"})
labels:
type: local
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity!="*client*",origin_type="local"})
labels:
type: bridges
- record: synapse_storage_events_persisted_by_event_type
expr: sum without(origin_entity, origin_type) (synapse_storage_events_persisted_events_sep)
- record: synapse_storage_events_persisted_by_origin
expr: sum without(type) (synapse_storage_events_persisted_events_sep)


@@ -1,4 +1,4 @@
#!/usr/bin/env bash #!/bin/bash
# this script will use the api: # this script will use the api:
# https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst # https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env bash #!/bin/bash
DOMAIN=yourserver.tld DOMAIN=yourserver.tld
# add this user as admin in your home server: # add this user as admin in your home server:


@@ -1,11 +1,15 @@
#!/usr/bin/env python #!/usr/bin/env python
+from __future__ import print_function
+from argparse import ArgumentParser
 import json
+import requests
 import sys
 import urllib
-from argparse import ArgumentParser
-import requests
+try:
+    raw_input
+except NameError:  # Python 3
+    raw_input = input
def _mkurl(template, kws): def _mkurl(template, kws):
@@ -52,7 +56,7 @@ def main(hs, room_id, access_token, user_id_prefix, why):
print("The following user IDs will be kicked from %s" % room_name) print("The following user IDs will be kicked from %s" % room_name)
for uid in kick_list: for uid in kick_list:
print(uid) print(uid)
doit = input("Continue? [Y]es\n") doit = raw_input("Continue? [Y]es\n")
if len(doit) > 0 and doit.lower() == "y": if len(doit) > 0 and doit.lower() == "y":
print("Kicking members...") print("Kicking members...")
# encode them all # encode them all


@@ -15,9 +15,6 @@
[Unit] [Unit]
Description=Synapse Matrix homeserver Description=Synapse Matrix homeserver
# If you are using postgresql to persist data, uncomment this line to make sure
# synapse starts after the postgresql service.
# After=postgresql.service
[Service] [Service]
Type=notify Type=notify


@@ -33,44 +33,34 @@ esac
# Use --builtin-venv to use the better `venv` module from CPython 3.4+ rather # Use --builtin-venv to use the better `venv` module from CPython 3.4+ rather
# than the 2/3 compatible `virtualenv`. # than the 2/3 compatible `virtualenv`.
# Pin pip to 20.3.4 to fix breakage in 21.0 on py3.5 (xenial)
dh_virtualenv \ dh_virtualenv \
--install-suffix "matrix-synapse" \ --install-suffix "matrix-synapse" \
--builtin-venv \ --builtin-venv \
--setuptools \
--python "$SNAKE" \ --python "$SNAKE" \
--upgrade-pip-to="20.3.4" \ --upgrade-pip \
--preinstall="lxml" \ --preinstall="lxml" \
--preinstall="mock" \ --preinstall="mock" \
--extra-pip-arg="--no-cache-dir" \ --extra-pip-arg="--no-cache-dir" \
--extra-pip-arg="--compile" \ --extra-pip-arg="--compile" \
--extras="all,systemd,test" --extras="all,systemd"
PACKAGE_BUILD_DIR="debian/matrix-synapse-py3" PACKAGE_BUILD_DIR="debian/matrix-synapse-py3"
VIRTUALENV_DIR="${PACKAGE_BUILD_DIR}${DH_VIRTUALENV_INSTALL_ROOT}/matrix-synapse" VIRTUALENV_DIR="${PACKAGE_BUILD_DIR}${DH_VIRTUALENV_INSTALL_ROOT}/matrix-synapse"
TARGET_PYTHON="${VIRTUALENV_DIR}/bin/python" TARGET_PYTHON="${VIRTUALENV_DIR}/bin/python"
case "$DEB_BUILD_OPTIONS" in # we copy the tests to a temporary directory so that we can put them on the
*nocheck*) # PYTHONPATH without putting the uninstalled synapse on the pythonpath.
# Skip running tests if "nocheck" present in $DEB_BUILD_OPTIONS tmpdir=`mktemp -d`
;; trap "rm -r $tmpdir" EXIT
*) cp -r tests "$tmpdir"
# Copy tests to a temporary directory so that we can put them on the
# PYTHONPATH without putting the uninstalled synapse on the pythonpath.
tmpdir=`mktemp -d`
trap "rm -r $tmpdir" EXIT
cp -r tests "$tmpdir" PYTHONPATH="$tmpdir" \
"${TARGET_PYTHON}" -B -m twisted.trial --reporter=text -j2 tests
PYTHONPATH="$tmpdir" \
"${TARGET_PYTHON}" -m twisted.trial --reporter=text -j2 tests
;;
esac
# build the config file # build the config file
"${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_config" \ "${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_config" \
--config-dir="/etc/matrix-synapse" \ --config-dir="/etc/matrix-synapse" \
--data-dir="/var/lib/matrix-synapse" | --data-dir="/var/lib/matrix-synapse" |
perl -pe ' perl -pe '
@@ -96,7 +86,7 @@ esac
' > "${PACKAGE_BUILD_DIR}/etc/matrix-synapse/homeserver.yaml" ' > "${PACKAGE_BUILD_DIR}/etc/matrix-synapse/homeserver.yaml"
# build the log config file # build the log config file
"${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_log_config" \ "${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_log_config" \
--output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml" --output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml"
# add a dependency on the right version of python to substvars. # add a dependency on the right version of python to substvars.

debian/changelog (vendored)

@@ -1,274 +1,16 @@
-matrix-synapse-py3 (1.33.2) stable; urgency=medium
+<<<<<<< HEAD
+matrix-synapse-py3 (1.12.3ubuntu1) UNRELEASED; urgency=medium
* New synapse release 1.33.2.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 May 2021 11:17:59 +0100
matrix-synapse-py3 (1.33.1) stable; urgency=medium
* New synapse release 1.33.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 06 May 2021 14:06:33 +0100
matrix-synapse-py3 (1.33.0) stable; urgency=medium
* New synapse release 1.33.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 05 May 2021 14:15:27 +0100
matrix-synapse-py3 (1.32.2) stable; urgency=medium
* New synapse release 1.32.2.
-- Synapse Packaging team <packages@matrix.org> Wed, 22 Apr 2021 12:43:52 +0100
matrix-synapse-py3 (1.32.1) stable; urgency=medium
* New synapse release 1.32.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 21 Apr 2021 14:00:55 +0100
matrix-synapse-py3 (1.32.0) stable; urgency=medium
[ Dan Callahan ]
* Skip tests when DEB_BUILD_OPTIONS contains "nocheck".
[ Synapse Packaging team ]
* New synapse release 1.32.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 Apr 2021 14:28:39 +0100
matrix-synapse-py3 (1.31.0) stable; urgency=medium
* New synapse release 1.31.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 06 Apr 2021 13:08:29 +0100
matrix-synapse-py3 (1.30.1) stable; urgency=medium
* New synapse release 1.30.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 26 Mar 2021 12:01:28 +0000
matrix-synapse-py3 (1.30.0) stable; urgency=medium
* New synapse release 1.30.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 22 Mar 2021 13:15:34 +0000
matrix-synapse-py3 (1.29.0) stable; urgency=medium
[ Jonathan de Jong ]
* Remove the python -B flag (don't generate bytecode) in scripts and documentation.
[ Synapse Packaging team ]
* New synapse release 1.29.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 08 Mar 2021 13:51:50 +0000
matrix-synapse-py3 (1.28.0) stable; urgency=medium
* New synapse release 1.28.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Feb 2021 10:21:57 +0000
matrix-synapse-py3 (1.27.0) stable; urgency=medium
[ Dan Callahan ]
* Fix build on Ubuntu 16.04 LTS (Xenial).
[ Synapse Packaging team ]
* New synapse release 1.27.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 16 Feb 2021 13:11:28 +0000
matrix-synapse-py3 (1.26.0) stable; urgency=medium
[ Richard van der Hoff ]
* Remove dependency on `python3-distutils`.
[ Synapse Packaging team ]
* New synapse release 1.26.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 27 Jan 2021 12:43:35 -0500
matrix-synapse-py3 (1.25.0) stable; urgency=medium
[ Dan Callahan ]
* Update dependencies to account for the removal of the transitional
dh-systemd package from Debian Bullseye.
[ Synapse Packaging team ]
* New synapse release 1.25.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 13 Jan 2021 10:14:55 +0000
matrix-synapse-py3 (1.24.0) stable; urgency=medium
* New synapse release 1.24.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 09 Dec 2020 10:14:30 +0000
matrix-synapse-py3 (1.23.1) stable; urgency=medium
* New synapse release 1.23.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 09 Dec 2020 10:40:39 +0000
matrix-synapse-py3 (1.23.0) stable; urgency=medium
* New synapse release 1.23.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 18 Nov 2020 11:41:28 +0000
matrix-synapse-py3 (1.22.1) stable; urgency=medium
* New synapse release 1.22.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 30 Oct 2020 15:25:37 +0000
matrix-synapse-py3 (1.22.0) stable; urgency=medium
* New synapse release 1.22.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 27 Oct 2020 12:07:12 +0000
matrix-synapse-py3 (1.21.2) stable; urgency=medium
[ Synapse Packaging team ]
* New synapse release 1.21.2.
-- Synapse Packaging team <packages@matrix.org> Thu, 15 Oct 2020 09:23:27 -0400
matrix-synapse-py3 (1.21.1) stable; urgency=medium
[ Synapse Packaging team ]
* New synapse release 1.21.1.
[ Andrew Morgan ]
* Explicitly install "test" python dependencies.
-- Synapse Packaging team <packages@matrix.org> Tue, 13 Oct 2020 10:24:13 +0100
matrix-synapse-py3 (1.21.0) stable; urgency=medium
* New synapse release 1.21.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 12 Oct 2020 15:47:44 +0100
matrix-synapse-py3 (1.20.1) stable; urgency=medium
* New synapse release 1.20.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 24 Sep 2020 16:25:22 +0100
matrix-synapse-py3 (1.20.0) stable; urgency=medium
[ Synapse Packaging team ]
* New synapse release 1.20.0.
[ Dexter Chua ]
* Use Type=notify in systemd service
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Sep 2020 15:19:32 +0100
matrix-synapse-py3 (1.19.3) stable; urgency=medium
* New synapse release 1.19.3.
-- Synapse Packaging team <packages@matrix.org> Fri, 18 Sep 2020 14:59:30 +0100
matrix-synapse-py3 (1.19.2) stable; urgency=medium
* New synapse release 1.19.2.
-- Synapse Packaging team <packages@matrix.org> Wed, 16 Sep 2020 12:50:30 +0100
matrix-synapse-py3 (1.19.1) stable; urgency=medium
* New synapse release 1.19.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 27 Aug 2020 10:50:19 +0100
matrix-synapse-py3 (1.19.0) stable; urgency=medium
[ Synapse Packaging team ]
* New synapse release 1.19.0.
[ Aaron Raimist ]
* Fix outdated documentation for SYNAPSE_CACHE_FACTOR
-- Synapse Packaging team <packages@matrix.org> Mon, 17 Aug 2020 14:06:42 +0100
matrix-synapse-py3 (1.18.0) stable; urgency=medium
* New synapse release 1.18.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 30 Jul 2020 10:55:53 +0100
matrix-synapse-py3 (1.17.0) stable; urgency=medium
* New synapse release 1.17.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 13 Jul 2020 10:20:31 +0100
matrix-synapse-py3 (1.16.1) stable; urgency=medium
* New synapse release 1.16.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 10 Jul 2020 12:09:24 +0100
matrix-synapse-py3 (1.17.0rc1) stable; urgency=medium
* New synapse release 1.17.0rc1.
-- Synapse Packaging team <packages@matrix.org> Thu, 09 Jul 2020 16:53:12 +0100
matrix-synapse-py3 (1.16.0) stable; urgency=medium
* New synapse release 1.16.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 08 Jul 2020 11:03:48 +0100
matrix-synapse-py3 (1.15.2) stable; urgency=medium
* New synapse release 1.15.2.
-- Synapse Packaging team <packages@matrix.org> Thu, 02 Jul 2020 10:34:00 -0400
matrix-synapse-py3 (1.15.1) stable; urgency=medium
* New synapse release 1.15.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 16 Jun 2020 10:27:50 +0100
matrix-synapse-py3 (1.15.0) stable; urgency=medium
* New synapse release 1.15.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 11 Jun 2020 13:27:06 +0100
matrix-synapse-py3 (1.14.0) stable; urgency=medium
* New synapse release 1.14.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 28 May 2020 10:37:27 +0000
matrix-synapse-py3 (1.13.0) stable; urgency=medium
[ Patrick Cloke ]
 * Add information about .well-known files to Debian installation scripts.
-[ Synapse Packaging team ]
-* New synapse release 1.13.0.
--- Synapse Packaging team <packages@matrix.org>  Tue, 19 May 2020 09:16:56 -0400
+ -- Patrick Cloke <patrickc@matrix.org>  Mon, 06 Apr 2020 10:10:38 -0400
+=======
 matrix-synapse-py3 (1.12.4) stable; urgency=medium
 * New synapse release 1.12.4.
 -- Synapse Packaging team <packages@matrix.org>  Thu, 23 Apr 2020 10:58:14 -0400
+>>>>>>> master
matrix-synapse-py3 (1.12.3) stable; urgency=medium matrix-synapse-py3 (1.12.3) stable; urgency=medium

debian/control (vendored)

@@ -3,11 +3,9 @@ Section: contrib/python
Priority: extra Priority: extra
Maintainer: Synapse Packaging team <packages@matrix.org> Maintainer: Synapse Packaging team <packages@matrix.org>
# keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv. # keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv.
-# TODO: Remove the dependency on dh-systemd after dropping support for Ubuntu xenial
-# On all other supported releases, it's merely a transitional package which
-# does nothing but depends on debhelper (> 9.20160709)
 Build-Depends:
-    debhelper (>= 9.20160709) | dh-systemd,
+    debhelper (>= 9),
+    dh-systemd,
     dh-virtualenv (>= 1.1),
libsystemd-dev, libsystemd-dev,
libpq-dev, libpq-dev,
@@ -31,6 +29,7 @@ Pre-Depends: dpkg (>= 1.16.1)
Depends: Depends:
adduser, adduser,
debconf, debconf,
python3-distutils|libpython3-stdlib (<< 3.6),
${misc:Depends}, ${misc:Depends},
${shlibs:Depends}, ${shlibs:Depends},
${synapse:pydepends}, ${synapse:pydepends},


@@ -1,2 +1,2 @@
# Specify environment variables used when running Synapse # Specify environment variables used when running Synapse
# SYNAPSE_CACHE_FACTOR=0.5 (default) # SYNAPSE_CACHE_FACTOR=1 (default)


@@ -2,7 +2,7 @@
Description=Synapse Matrix homeserver Description=Synapse Matrix homeserver
[Service] [Service]
Type=notify Type=simple
User=matrix-synapse User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse EnvironmentFile=/etc/default/matrix-synapse

debian/synctl.1 (vendored)

@@ -44,7 +44,7 @@ Configuration file may be generated as follows:
. .
.nf .nf
$ python \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name> $ python \-B \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name>
. .
.fi .fi
. .

debian/synctl.ronn (vendored)

@@ -41,25 +41,24 @@ process.
Configuration file may be generated as follows: Configuration file may be generated as follows:
$ python -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name> $ python -B -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name>
## ENVIRONMENT ## ENVIRONMENT
* `SYNAPSE_CACHE_FACTOR`: * `SYNAPSE_CACHE_FACTOR`:
-    Synapse's architecture is quite RAM hungry currently - we deliberately
-    cache a lot of recent room data and metadata in RAM in order to speed up
-    common requests. We'll improve this in the future, but for now the easiest
-    way to either reduce the RAM usage (at the risk of slowing things down)
-    is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
-    variable. The default is 0.5, which can be decreased to reduce RAM usage
-    in memory constrained enviroments, or increased if performance starts to
-    degrade.
-
-    However, degraded performance due to a low cache factor, common on
-    machines with slow disks, often leads to explosions in memory use due
-    backlogged requests. In this case, reducing the cache factor will make
-    things worse. Instead, try increasing it drastically. 2.0 is a good
-    starting value.
+    Synapse's architecture is quite RAM hungry currently - a lot of
+    recent room data and metadata is deliberately cached in RAM in
+    order to speed up common requests. This will be improved in
+    future, but for now the easiest way to either reduce the RAM usage
+    (at the risk of slowing things down) is to set the
+    SYNAPSE_CACHE_FACTOR environment variable. Roughly speaking, a
+    SYNAPSE_CACHE_FACTOR of 1.0 will max out at around 3-4GB of
+    resident memory - this is what we currently run the matrix.org
+    on. The default setting is currently 0.1, which is probably around
+    a ~700MB footprint. You can dial it down further to 0.02 if
+    desired, which targets roughly ~512MB. Conversely you can dial it
+    up if you need performance for lots of users and have a box with a
+    lot of RAM.
## COPYRIGHT ## COPYRIGHT


@@ -1,4 +1,4 @@
#!/usr/bin/env bash #!/bin/bash
set -e set -e


@@ -1,4 +1,4 @@
#!/usr/bin/env bash #!/bin/bash
DIR="$( cd "$( dirname "$0" )" && pwd )" DIR="$( cd "$( dirname "$0" )" && pwd )"
@@ -30,8 +30,6 @@ for port in 8080 8081 8082; do
if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
echo "public_baseurl: http://localhost:$port/" >> $DIR/etc/$port.config
echo 'enable_registration: true' >> $DIR/etc/$port.config echo 'enable_registration: true' >> $DIR/etc/$port.config
# Warning, this heredoc depends on the interaction of tabs and spaces. Please don't # Warning, this heredoc depends on the interaction of tabs and spaces. Please don't
@@ -96,48 +94,18 @@ for port in 8080 8081 8082; do
    # Check script parameters
    if [ $# -eq 1 ]; then
        if [ $1 = "--no-rate-limit" ]; then
-            # Disable any rate limiting
-            ratelimiting=$(cat <<-RC
-            rc_message:
-              per_second: 1000
-              burst_count: 1000
-            rc_registration:
-              per_second: 1000
-              burst_count: 1000
-            rc_login:
-              address:
-                per_second: 1000
-                burst_count: 1000
-              account:
-                per_second: 1000
-                burst_count: 1000
-              failed_attempts:
-                per_second: 1000
-                burst_count: 1000
-            rc_admin_redaction:
-              per_second: 1000
-              burst_count: 1000
-            rc_joins:
-              local:
-                per_second: 1000
-                burst_count: 1000
-              remote:
-                per_second: 1000
-                burst_count: 1000
-            rc_3pid_validation:
-              per_second: 1000
-              burst_count: 1000
-            rc_invites:
-              per_room:
-                per_second: 1000
-                burst_count: 1000
-              per_user:
-                per_second: 1000
-                burst_count: 1000
-            RC
-            )
-            echo "${ratelimiting}" >> $DIR/etc/$port.config
+            # messages rate limit
+            echo 'rc_messages_per_second: 1000' >> $DIR/etc/$port.config
+            echo 'rc_message_burst_count: 1000' >> $DIR/etc/$port.config
+
+            # registration rate limit
+            printf 'rc_registration:\n  per_second: 1000\n  burst_count: 1000\n' >> $DIR/etc/$port.config
+
+            # login rate limit
+            echo 'rc_login:' >> $DIR/etc/$port.config
+            printf '  address:\n    per_second: 1000\n    burst_count: 1000\n' >> $DIR/etc/$port.config
+            printf '  account:\n    per_second: 1000\n    burst_count: 1000\n' >> $DIR/etc/$port.config
+            printf '  failed_attempts:\n    per_second: 1000\n    burst_count: 1000\n' >> $DIR/etc/$port.config
        fi
    fi
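For reference, the YAML that the added lines build up with `echo`/`printf` can equally be produced with PyYAML; a sketch (the target path is illustrative):

```python
# Append the same no-rate-limit overrides as the shell lines above.
import yaml

overrides = {
    "rc_messages_per_second": 1000,
    "rc_message_burst_count": 1000,
    "rc_registration": {"per_second": 1000, "burst_count": 1000},
    "rc_login": {
        "address": {"per_second": 1000, "burst_count": 1000},
        "account": {"per_second": 1000, "burst_count": 1000},
        "failed_attempts": {"per_second": 1000, "burst_count": 1000},
    },
}

with open("etc/8080.config", "a") as f:  # illustrative path
    f.write("\n" + yaml.dump(overrides))
```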

View File

@@ -1,4 +1,4 @@
-#!/usr/bin/env bash
+#!/bin/bash
DIR="$( cd "$( dirname "$0" )" && pwd )"

demo/webserver.py Normal file
View File

@@ -0,0 +1,59 @@
import argparse
import BaseHTTPServer
import os
import SimpleHTTPServer
import cgi, logging
from daemonize import Daemonize
class SimpleHTTPRequestHandlerWithPOST(SimpleHTTPServer.SimpleHTTPRequestHandler):
UPLOAD_PATH = "upload"
"""
Accept all POST requests as file uploads
"""
def do_POST(self):
path = os.path.join(self.UPLOAD_PATH, os.path.basename(self.path))
length = self.headers["content-length"]
data = self.rfile.read(int(length))
with open(path, "wb") as fh:
fh.write(data)
self.send_response(200)
self.send_header("Content-Type", "application/json")
self.end_headers()
# Return the absolute path of the uploaded file
self.wfile.write('{"url":"/%s"}' % path)
def setup():
parser = argparse.ArgumentParser()
parser.add_argument("directory")
parser.add_argument("-p", "--port", dest="port", type=int, default=8080)
parser.add_argument("-P", "--pid-file", dest="pid", default="web.pid")
args = parser.parse_args()
# Get absolute path to directory to serve, as daemonize changes to '/'
os.chdir(args.directory)
dr = os.getcwd()
httpd = BaseHTTPServer.HTTPServer(("", args.port), SimpleHTTPRequestHandlerWithPOST)
def run():
os.chdir(dr)
httpd.serve_forever()
daemon = Daemonize(
app="synapse-webclient", pid=args.pid, action=run, auto_close_fds=False
)
daemon.start()
if __name__ == "__main__":
setup()
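The upload handler can be exercised with any HTTP client; a Python 3 client sketch (the server itself is Python 2 and defaults to port 8080):

```python
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8080/example.txt", data=b"hello", method="POST"
)
with urllib.request.urlopen(req) as resp:
    # The handler replies with the path it stored the upload under,
    # e.g. {"url": "/upload/example.txt"}
    print(json.load(resp))
```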

View File

@@ -11,72 +11,63 @@
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
#
-ARG PYTHON_VERSION=3.8
+ARG PYTHON_VERSION=3.7

###
### Stage 0: builder
###
-FROM docker.io/python:${PYTHON_VERSION}-slim as builder
+FROM docker.io/python:${PYTHON_VERSION}-alpine3.11 as builder

# install the OS build deps
-RUN apt-get update && apt-get install -y \
-    build-essential \
-    libffi-dev \
-    libjpeg-dev \
-    libpq-dev \
-    libssl-dev \
-    libwebp-dev \
-    libxml++2.6-dev \
-    libxslt1-dev \
-    openssl \
-    rustc \
-    zlib1g-dev \
-    && rm -rf /var/lib/apt/lists/*
-
-# Copy just what we need to pip install
+RUN apk add \
+        build-base \
+        libffi-dev \
+        libjpeg-turbo-dev \
+        libressl-dev \
+        libxslt-dev \
+        linux-headers \
+        postgresql-dev \
+        zlib-dev
+
+# build things which have slow build steps, before we copy synapse, so that
+# the layer can be cached.
+#
+# (we really just care about caching a wheel here, as the "pip install" below
+# will install them again.)
+RUN pip install --prefix="/install" --no-warn-script-location \
+        cryptography \
+        msgpack-python \
+        pillow \
+        pynacl
+
+# now install synapse and all of the python deps to /install.
+COPY synapse /synapse/synapse/
COPY scripts /synapse/scripts/
COPY MANIFEST.in README.rst setup.py synctl /synapse/
-COPY synapse/__init__.py /synapse/synapse/__init__.py
-COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
-
-# To speed up rebuilds, install all of the dependencies before we copy over
-# the whole synapse project so that this layer in the Docker cache can be
-# used while you develop on the source
-#
-# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
RUN pip install --prefix="/install" --no-warn-script-location \
    /synapse[all]
-
-# Copy over the rest of the project
-COPY synapse /synapse/synapse/
-
-# Install the synapse package itself and all of its children packages.
-#
-# This is aiming at installing only the `packages=find_packages(...)` from `setup.py`
-RUN pip install --prefix="/install" --no-deps --no-warn-script-location /synapse

###
### Stage 1: runtime
###
-FROM docker.io/python:${PYTHON_VERSION}-slim
-
-LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
-LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/synapse/blob/master/docker/README.md'
-LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
-LABEL org.opencontainers.image.licenses='Apache-2.0'
-
-RUN apt-get update && apt-get install -y \
-    curl \
-    gosu \
-    libjpeg62-turbo \
-    libpq5 \
-    libwebp6 \
-    xmlsec1 \
-    libjemalloc2 \
-    libssl-dev \
-    openssl \
-    && rm -rf /var/lib/apt/lists/*
+FROM docker.io/python:${PYTHON_VERSION}-alpine3.11
+
+# xmlsec is required for saml support
+RUN apk add --no-cache --virtual .runtime_deps \
+        libffi \
+        libjpeg-turbo \
+        libressl \
+        libxslt \
+        libpq \
+        zlib \
+        su-exec \
+        tzdata \
+        xmlsec

COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py
@@ -87,6 +78,3 @@ VOLUME ["/data"]
EXPOSE 8008/tcp 8009/tcp 8448/tcp

ENTRYPOINT ["/start.py"]
-
-HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
-    CMD curl -fSs http://localhost:8008/health || exit 1

View File

@@ -27,19 +27,15 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
    wget

# fetch and unpack the package
-# TODO: Upgrade to 1.2.2 once xenial is dropped
-RUN mkdir /dh-virtualenv
-RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/ac6e1b1.tar.gz
-RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz
+RUN wget -q -O /dh-virtuenv-1.1.tar.gz https://github.com/spotify/dh-virtualenv/archive/1.1.tar.gz
+RUN tar xvf /dh-virtuenv-1.1.tar.gz

-# install its build deps. We do another apt-cache-update here, because we might
-# be using a stale cache from docker build.
-RUN apt-get update -qq -o Acquire::Languages=none \
-    && cd /dh-virtualenv \
-    && env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -y --no-install-recommends"
+# install its build deps
+RUN cd dh-virtualenv-1.1/ \
+    && env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -yqq --no-install-recommends"

# build it
-RUN cd /dh-virtualenv && dpkg-buildpackage -us -uc -b
+RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b

###
### Stage 1
@@ -51,22 +47,17 @@ FROM ${distro}
ARG distro=""
ENV distro ${distro}

-# Python < 3.7 assumes LANG="C" means ASCII-only and throws on printing unicode
-# http://bugs.python.org/issue19846
-ENV LANG C.UTF-8
-
# Install the build dependencies
#
# NB: keep this list in sync with the list of build-deps in debian/control
# TODO: it would be nice to do that automatically.
-# TODO: Remove the dh-systemd stanza after dropping support for Ubuntu xenial
-# it's a transitional package on all other, more recent releases
RUN apt-get update -qq -o Acquire::Languages=none \
    && env DEBIAN_FRONTEND=noninteractive apt-get install \
        -yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
        build-essential \
        debhelper \
        devscripts \
+        dh-systemd \
        libsystemd-dev \
        lsb-release \
        pkg-config \
@@ -75,18 +66,14 @@ RUN apt-get update -qq -o Acquire::Languages=none \
        python3-setuptools \
        python3-venv \
        sqlite3 \
-        libpq-dev \
-        xmlsec1 \
-    && ( env DEBIAN_FRONTEND=noninteractive apt-get install \
-        -yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
-        dh-systemd || true )
+        libpq-dev

-COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /
+COPY --from=builder /dh-virtualenv_1.1-1_all.deb /

# install dhvirtualenv. Update the apt cache again first, in case we got a
# cached cache from docker the first time.
RUN apt-get update -qq -o Acquire::Languages=none \
-    && apt-get install -yq /dh-virtualenv_1.2~dev-1_all.deb
+    && apt-get install -yq /dh-virtualenv_1.1-1_all.deb

WORKDIR /synapse/source
ENTRYPOINT ["bash","/synapse/source/docker/build_debian.sh"]

View File

@@ -1,23 +0,0 @@
# Inherit from the official Synapse docker image
FROM matrixdotorg/synapse
# Install deps
RUN apt-get update
RUN apt-get install -y supervisor redis nginx
# Remove the default nginx sites
RUN rm /etc/nginx/sites-enabled/default
# Copy Synapse worker, nginx and supervisord configuration template files
COPY ./docker/conf-workers/* /conf/
# Expose nginx listener port
EXPOSE 8080/tcp
# Volume for user-editable config files, logs etc.
VOLUME ["/data"]
# A script to read environment variables and create the necessary
# files to run the desired worker configuration. Will start supervisord.
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]

View File

@@ -1,140 +0,0 @@
# Running tests against a dockerised Synapse
It's possible to run integration tests against Synapse
using [Complement](https://github.com/matrix-org/complement). Complement is a Matrix Spec
compliance test suite for homeservers, and supports any homeserver docker image configured
to listen on ports 8008/8448. This document contains instructions for building Synapse
docker images that can be run inside Complement for testing purposes.
Note that running Synapse's unit tests from within the docker image is not supported.
## Testing with SQLite and single-process Synapse
> Note that `scripts-dev/complement.sh` is a script that will automatically build
> and run an SQLite-based, single-process instance of Synapse against Complement.
The instructions below will set up Complement testing for a single-process,
SQLite-based Synapse deployment.
Start by building the base Synapse docker image. If you wish to run tests with the latest
release of Synapse, instead of your current checkout, you can skip this step. From the
root of the repository:
```sh
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
This will build an image with the tag `matrixdotorg/synapse`.
Next, build the Synapse image for Complement. You will need a local checkout
of Complement. Change to the root of your Complement checkout and run:
```sh
docker build -t complement-synapse -f "dockerfiles/Synapse.Dockerfile" dockerfiles
```
This will build an image with the tag `complement-synapse`, which can be handed to
Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
[Complement's documentation](https://github.com/matrix-org/complement/#running) for
how to run the tests, as well as the various available command line flags.
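For reference, Complement is usually driven with `go test` from its checkout, selecting the homeserver image via `COMPLEMENT_BASE_IMAGE`; a minimal sketch:

```python
# Run the Complement suite against the image built above (sketch;
# assumes Go is installed and the cwd is a Complement checkout).
import os
import subprocess

env = dict(os.environ, COMPLEMENT_BASE_IMAGE="complement-synapse")
subprocess.run(["go", "test", "-v", "./tests/..."], env=env, check=True)
```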
## Testing with PostgreSQL and single or multi-process Synapse
The above docker image only supports running Synapse with SQLite and in a
single-process topology. The following instructions are used to build a Synapse image for
Complement that supports either single or multi-process topology with a PostgreSQL
database backend.
As with the single-process image, build the base Synapse docker image. If you wish to run
tests with the latest release of Synapse, instead of your current checkout, you can skip
this step. From the root of the repository:
```sh
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
This will build an image with the tag `matrixdotorg/synapse`.
Next, we build a new image with worker support based on `matrixdotorg/synapse:latest`.
Again, from the root of the repository:
```sh
docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers .
```
This will build an image with the tag `matrixdotorg/synapse-workers`.
It's worth noting at this point that this image is fully functional, and
can be used for local testing. See instructions for using the container
under
[Running the Dockerfile-worker image standalone](#running-the-dockerfile-worker-image-standalone)
below.
Finally, build the Synapse image for Complement, which is based on
`matrixdotorg/synapse-workers`. You will need a local checkout of Complement. Change to
the root of your Complement checkout and run:
```sh
docker build -t matrixdotorg/complement-synapse-workers -f dockerfiles/SynapseWorkers.Dockerfile dockerfiles
```
This will build an image with the tag `matrixdotorg/complement-synapse-workers`, which can be handed to
Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
[Complement's documentation](https://github.com/matrix-org/complement/#running) for
how to run the tests, as well as the various available command line flags.
## Running the Dockerfile-worker image standalone
For manual testing of a multi-process Synapse instance in Docker,
[Dockerfile-workers](Dockerfile-workers) is a Dockerfile that will produce an image
bundling all necessary components together for a workerised homeserver instance.
This includes any desired Synapse worker processes, a nginx to route traffic accordingly,
a redis for worker communication and a supervisord instance to start up and monitor all
processes. You will need to provide your own postgres container to connect to, and TLS
is not handled by the container.
Once you've built the image using the above instructions, you can run it. Be sure
you've set up a volume according to the [usual Synapse docker instructions](README.md).
Then run something along the lines of:
```
docker run -d --name synapse \
--mount type=volume,src=synapse-data,dst=/data \
-p 8008:8008 \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=no \
-e POSTGRES_HOST=postgres \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=somesecret \
-e SYNAPSE_WORKER_TYPES=synchrotron,media_repository,user_dir \
-e SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK=1 \
matrixdotorg/synapse-workers
```
...substituting `POSTGRES*` variables for those that match a postgres host you have
available (usually a running postgres docker container).
The `SYNAPSE_WORKER_TYPES` environment variable is a comma-separated list of workers to
use when running the container. All possible worker names are defined by the keys of the
`WORKERS_CONFIG` variable in [this script](configure_workers_and_start.py), which the
Dockerfile makes use of to generate appropriate worker, nginx and supervisord config
files.
Sharding is supported for a subset of workers, in line with the
[worker documentation](../docs/workers.md). To run multiple instances of a given worker
type, simply specify the type multiple times in `SYNAPSE_WORKER_TYPES`
(e.g. `SYNAPSE_WORKER_TYPES=event_creator,event_creator...`), as in the sketch below.
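A sketch of the naming logic (mirroring the `worker_type_counter` in `configure_workers_and_start.py`):

```python
# Instances are named worker_type + an incrementing counter,
# e.g. event_creator1, event_creator2, synchrotron1.
worker_types = "event_creator,event_creator,synchrotron".split(",")

counters = {}
for worker_type in worker_types:
    worker_type = worker_type.strip()
    counters[worker_type] = counters.get(worker_type, 0) + 1
    print(worker_type + str(counters[worker_type]))
```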
Otherwise, `SYNAPSE_WORKER_TYPES` can either be left empty or unset to spawn no workers
(leaving only the main process). The container is configured to use redis-based worker
mode.
Logs for workers and the main process are logged to stdout and can be viewed with
standard `docker logs` tooling. Worker logs contain their worker name
after the timestamp.
Setting `SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK=1` will cause worker logs to be written to
`<data_dir>/logs/<worker_name>.log`. Logs are kept for 1 week and rotate every
day at 00:00, according to the container's clock. Logging for the main process must still be
configured by modifying the homeserver's log config in your Synapse data volume.
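For reference, the file-plus-buffer setup in the generated log config corresponds to roughly this handler wiring (a sketch; the log path is an example):

```python
import logging
import logging.handlers

file_handler = logging.handlers.TimedRotatingFileHandler(
    "/data/logs/synchrotron1.log", when="midnight", backupCount=6, encoding="utf8"
)
# Writes are buffered for efficiency; WARNING (level 30) and above
# flush to disk immediately.
buffered = logging.handlers.MemoryHandler(
    capacity=10, flushLevel=logging.WARNING, target=file_handler
)
logging.getLogger().addHandler(buffered)
```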

View File

@@ -2,28 +2,26 @@
This Docker image will run Synapse as a single process. By default it uses a
sqlite database; for production use you should connect it to a separate
-postgres database. The image also does *not* provide a TURN server.
+postgres database.

-This image should work on all platforms that are supported by Docker upstream.
-Note that Docker's WS1-backend Linux Containers on Windows
-platform is [experimental](https://github.com/docker/for-win/issues/6470) and
-is not supported by this image.
+The image also does *not* provide a TURN server.

## Volumes

-By default, the image expects a single volume, located at `/data`, that will hold:
+By default, the image expects a single volume, located at ``/data``, that will hold:

* configuration files;
+* temporary files during uploads;
* uploaded media and thumbnails;
* the SQLite database if you do not configure postgres;
* the appservices configuration.

You are free to use separate volumes depending on storage endpoints at your
-disposal. For instance, `/data/media` could be stored on a large but low
+disposal. For instance, ``/data/media`` could be stored on a large but low
performance hdd storage while other files could be stored on high performance
endpoints.

-In order to setup an application service, simply create an `appservices`
+In order to setup an application service, simply create an ``appservices``
directory in the data volume and write the application service Yaml
configuration file there. Multiple application services are supported.
@@ -56,8 +54,6 @@ The following environment variables are supported in `generate` mode:
* `SYNAPSE_SERVER_NAME` (mandatory): the server public hostname.
* `SYNAPSE_REPORT_STATS` (mandatory, `yes` or `no`): whether to enable
  anonymous statistics reporting.
-* `SYNAPSE_HTTP_PORT`: the port Synapse should listen on for http traffic.
-  Defaults to `8008`.
* `SYNAPSE_CONFIG_DIR`: where additional config files (such as the log config
  and event signing key) will be stored. Defaults to `/data`.
* `SYNAPSE_CONFIG_PATH`: path to the file to be generated. Defaults to
@@ -78,8 +74,6 @@ docker run -d --name synapse \
  matrixdotorg/synapse:latest
```

-(assuming 8008 is the port Synapse is configured to listen on for http traffic.)
-
You can then check that it has started correctly with:

```
@@ -89,7 +83,7 @@ docker logs synapse
If all is well, you should now be able to connect to http://localhost:8008 and
see a confirmation message.

-The following environment variables are supported in `run` mode:
+The following environment variables are supported in run mode:

* `SYNAPSE_CONFIG_DIR`: where additional config files are stored. Defaults to
  `/data`.
@@ -100,35 +94,6 @@ The following environment variables are supported in `run` mode:
* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
* `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
For more complex setups (e.g. for workers) you can also pass your args directly to synapse using `run` mode. For example like this:
```
docker run -d --name synapse \
--mount type=volume,src=synapse-data,dst=/data \
-p 8008:8008 \
matrixdotorg/synapse:latest run \
-m synapse.app.generic_worker \
--config-path=/data/homeserver.yaml \
--config-path=/data/generic_worker.yaml
```
If you do not provide `-m`, the value of the `SYNAPSE_WORKER` environment variable is used. If you do not provide at least one `--config-path` or `-c`, the value of the `SYNAPSE_CONFIG_PATH` environment variable is used instead.
## Generating an (admin) user
After synapse is running, you may wish to create a user via `register_new_matrix_user`.
This requires a `registration_shared_secret` to be set in your config file. Synapse
must be restarted to pick up this change.
You can then call the script:
```
docker exec -it synapse register_new_matrix_user http://localhost:8008 -c /data/homeserver.yaml --help
```
Remember to remove the `registration_shared_secret` and restart if you no-longer need it.
## TLS support

The default configuration exposes a single HTTP port: http://localhost:8008. It
@@ -182,48 +147,3 @@ docker build -t matrixdotorg/synapse -f docker/Dockerfile .
You can choose to build a different docker image by changing the value of the `-f` flag to
point to another Dockerfile.
## Disabling the healthcheck
If you are using a non-standard port or tls inside docker you can disable the healthcheck
whilst running the above `docker run` commands.
```
--no-healthcheck
```
## Disabling the healthcheck in docker-compose file
If you wish to disable the healthcheck via docker-compose, append the following to your service configuration.
```
healthcheck:
disable: true
```
## Setting custom healthcheck on docker run
If you wish to point the healthcheck at a different port with docker command, add the following
```
--health-cmd 'curl -fSs http://localhost:1234/health'
```
## Setting the healthcheck in docker-compose file
You can add the following to set a custom healthcheck in a docker compose file.
You will need docker-compose version >2.1 for this to work.
```
healthcheck:
test: ["CMD", "curl", "-fSs", "http://localhost:8008/health"]
interval: 15s
timeout: 5s
retries: 3
start_period: 5s
```
## Using jemalloc
Jemalloc is embedded in the image and will be used instead of the default allocator.
You can read about jemalloc by reading the Synapse [README](../README.md).

View File

@@ -1,4 +1,4 @@
-#!/usr/bin/env bash
+#!/bin/bash

# The script to build the Debian package, as run inside the Docker image.

View File

@@ -1,27 +0,0 @@
# This file contains the base config for the reverse proxy, as part of ../Dockerfile-workers.
# configure_workers_and_start.py uses and amends to this file depending on the workers
# that have been selected.
{{ upstream_directives }}
server {
# Listen on an unoccupied port number
listen 8008;
listen [::]:8008;
server_name localhost;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 100M;
{{ worker_locations }}
# Send all other traffic to the main process
    location ~* ^(\/_matrix|\/_synapse) {
proxy_pass http://localhost:8080;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
}
}

View File

@@ -1,9 +0,0 @@
# This file contains the base for the shared homeserver config file between Synapse workers,
# as part of ./Dockerfile-workers.
# configure_workers_and_start.py uses and amends to this file depending on the workers
# that have been selected.
redis:
enabled: true
{{ shared_worker_config }}

View File

@@ -1,41 +0,0 @@
# This file contains the base config for supervisord, as part of ../Dockerfile-workers.
# configure_workers_and_start.py uses and amends to this file depending on the workers
# that have been selected.
[supervisord]
nodaemon=true
user=root
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
priority=500
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
username=www-data
autorestart=true
[program:redis]
command=/usr/bin/redis-server /etc/redis/redis.conf --daemonize no
priority=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
username=redis
autorestart=true
[program:synapse_main]
command=/usr/local/bin/python -m synapse.app.homeserver --config-path="{{ main_config_path }}" --config-path=/conf/workers/shared.yaml
priority=10
# Log startup failures to supervisord's stdout/err
# Regular synapse logs will still go in the configured data directory
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=unexpected
exitcodes=0
# Additional process blocks
{{ worker_config }}

View File

@@ -1,26 +0,0 @@
# This is a configuration template for a single worker instance, and is
# used by Dockerfile-workers.
# Values will change depending on whichever workers are selected when
# running that image.
worker_app: "{{ app }}"
worker_name: "{{ name }}"
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: {{ port }}
{% if listener_resources %}
resources:
- names:
{%- for resource in listener_resources %}
- {{ resource }}
{%- endfor %}
{% endif %}
worker_log_config: {{ worker_log_config_filepath }}
{{ worker_extra_conf }}
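For illustration, the template renders the same way the configure script's `convert()` helper does; the sample values below mirror a `synchrotron` worker and are illustrative:

```python
import jinja2

with open("/conf/worker.yaml.j2") as f:  # path as used in the image
    template = f.read()

print(
    jinja2.Template(template, autoescape=False).render(
        app="synapse.app.generic_worker",
        name="synchrotron1",
        port=18009,
        listener_resources=["client"],
        worker_log_config_filepath="/conf/workers/synchrotron1.log.config",
        worker_extra_conf="",
    )
)
```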

View File

@@ -40,9 +40,7 @@ listeners:
    compress: false
{% endif %}

-  # Allow configuring in case we want to reverse proxy 8008
-  # using another process in the same container
-  - port: {{ SYNAPSE_HTTP_PORT or 8008 }}
+  - port: 8008
    tls: false
    bind_addresses: ['::']
    type: http

@@ -91,7 +89,8 @@ federation_rc_concurrent: 3

## Files ##

media_store_path: "/data/media"
-max_upload_size: "{{ SYNAPSE_MAX_UPLOAD_SIZE or "50M" }}"
+uploads_path: "/data/uploads"
+max_upload_size: "{{ SYNAPSE_MAX_UPLOAD_SIZE or "10M" }}"
max_image_pixels: "32M"
dynamic_thumbnails: false

@@ -175,10 +174,18 @@ report_stats: False

## API Configuration ##

+room_invite_state_types:
+    - "m.room.join_rules"
+    - "m.room.canonical_alias"
+    - "m.room.avatar"
+    - "m.room.name"
+
{% if SYNAPSE_APPSERVICES %}
app_service_config_files:
{% for appservice in SYNAPSE_APPSERVICES %}    - "{{ appservice }}"
{% endfor %}
+{% else %}
+app_service_config_files: []
{% endif %}

macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"

@@ -191,10 +198,12 @@ old_signing_keys: {}
key_refresh_interval: "1d" # 1 Day.

# The trusted servers to download signing keys from.
-trusted_key_servers:
-  - server_name: matrix.org
-    verify_keys:
-      "ed25519:auto": "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"
+perspectives:
+  servers:
+    "matrix.org":
+      verify_keys:
+        "ed25519:auto":
+          key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"

password_config:
  enabled: true

View File

@@ -2,37 +2,18 @@ version: 1
formatters:
    precise:
-{% if worker_name %}
-        format: '%(asctime)s - worker:{{ worker_name }} - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
-{% else %}
-        format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
-{% endif %}
+        format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'

-filters:
-    context:
-        (): synapse.logging.context.LoggingContextFilter
-        request: ""
-
handlers:
-    file:
-        class: logging.handlers.TimedRotatingFileHandler
-        formatter: precise
-        filename: {{ LOG_FILE_PATH or "homeserver.log" }}
-        when: "midnight"
-        backupCount: 6  # Does not include the current log file.
-        encoding: utf8
-
-    # Default to buffering writes to log file for efficiency. This means that
-    # there will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
-    # logs will still be flushed immediately.
-    buffer:
-        class: logging.handlers.MemoryHandler
-        target: file
-        # The capacity is the number of log lines that are buffered before
-        # being written to disk. Increasing this will lead to better
-        # performance, at the expense of it taking longer for log lines to
-        # be written to disk.
-        capacity: 10
-        flushLevel: 30  # Flush for WARNING logs as well
-
    console:
        class: logging.StreamHandler
        formatter: precise
-        filters: [context]

loggers:
    synapse.storage.SQL:
@@ -42,11 +23,6 @@ loggers:

root:
    level: {{ SYNAPSE_LOG_LEVEL or "INFO" }}
-
-{% if LOG_FILE_PATH %}
-    handlers: [console, buffer]
-{% else %}
    handlers: [console]
-{% endif %}

disable_existing_loggers: false

View File

@@ -1,558 +0,0 @@
#!/usr/bin/env python
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script reads environment variables and generates a shared Synapse worker,
# nginx and supervisord configs depending on the workers requested.
#
# The environment variables it reads are:
# * SYNAPSE_SERVER_NAME: The desired server_name of the homeserver.
# * SYNAPSE_REPORT_STATS: Whether to report stats.
# * SYNAPSE_WORKER_TYPES: A comma separated list of worker names as specified in WORKERS_CONFIG
# below. Leave empty for no workers, or set to '*' for all possible workers.
#
# NOTE: According to Complement's ENTRYPOINT expectations for a homeserver image (as defined
# in the project's README), this script may be run multiple times, and functionality should
# continue to work if so.
import os
import subprocess
import sys
import jinja2
import yaml
MAIN_PROCESS_HTTP_LISTENER_PORT = 8080
WORKERS_CONFIG = {
"pusher": {
"app": "synapse.app.pusher",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": {"start_pushers": False},
"worker_extra_conf": "",
},
"user_dir": {
"app": "synapse.app.user_dir",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$"
],
"shared_extra_conf": {"update_user_directory": False},
"worker_extra_conf": "",
},
"media_repository": {
"app": "synapse.app.media_repository",
"listener_resources": ["media"],
"endpoint_patterns": [
"^/_matrix/media/",
"^/_synapse/admin/v1/purge_media_cache$",
"^/_synapse/admin/v1/room/.*/media.*$",
"^/_synapse/admin/v1/user/.*/media.*$",
"^/_synapse/admin/v1/media/.*$",
"^/_synapse/admin/v1/quarantine_media/.*$",
],
"shared_extra_conf": {"enable_media_repo": False},
"worker_extra_conf": "enable_media_repo: true",
},
"appservice": {
"app": "synapse.app.appservice",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": {"notify_appservices": False},
"worker_extra_conf": "",
},
"federation_sender": {
"app": "synapse.app.federation_sender",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": {"send_federation": False},
"worker_extra_conf": "",
},
"synchrotron": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(v2_alpha|r0)/sync$",
"^/_matrix/client/(api/v1|v2_alpha|r0)/events$",
"^/_matrix/client/(api/v1|r0)/initialSync$",
"^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"federation_reader": {
"app": "synapse.app.generic_worker",
"listener_resources": ["federation"],
"endpoint_patterns": [
"^/_matrix/federation/(v1|v2)/event/",
"^/_matrix/federation/(v1|v2)/state/",
"^/_matrix/federation/(v1|v2)/state_ids/",
"^/_matrix/federation/(v1|v2)/backfill/",
"^/_matrix/federation/(v1|v2)/get_missing_events/",
"^/_matrix/federation/(v1|v2)/publicRooms",
"^/_matrix/federation/(v1|v2)/query/",
"^/_matrix/federation/(v1|v2)/make_join/",
"^/_matrix/federation/(v1|v2)/make_leave/",
"^/_matrix/federation/(v1|v2)/send_join/",
"^/_matrix/federation/(v1|v2)/send_leave/",
"^/_matrix/federation/(v1|v2)/invite/",
"^/_matrix/federation/(v1|v2)/query_auth/",
"^/_matrix/federation/(v1|v2)/event_auth/",
"^/_matrix/federation/(v1|v2)/exchange_third_party_invite/",
"^/_matrix/federation/(v1|v2)/user/devices/",
"^/_matrix/federation/(v1|v2)/get_groups_publicised$",
"^/_matrix/key/v2/query",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"federation_inbound": {
"app": "synapse.app.generic_worker",
"listener_resources": ["federation"],
"endpoint_patterns": ["/_matrix/federation/(v1|v2)/send/"],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"event_persister": {
"app": "synapse.app.generic_worker",
"listener_resources": ["replication"],
"endpoint_patterns": [],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"background_worker": {
"app": "synapse.app.generic_worker",
"listener_resources": [],
"endpoint_patterns": [],
# This worker cannot be sharded. Therefore there should only ever be one background
# worker, and it should be named background_worker1
"shared_extra_conf": {"run_background_tasks_on": "background_worker1"},
"worker_extra_conf": "",
},
"event_creator": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact",
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
"^/_matrix/client/(api/v1|r0|unstable)/join/",
"^/_matrix/client/(api/v1|r0|unstable)/profile/",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"frontend_proxy": {
"app": "synapse.app.frontend_proxy",
"listener_resources": ["client", "replication"],
"endpoint_patterns": ["^/_matrix/client/(api/v1|r0|unstable)/keys/upload"],
"shared_extra_conf": {},
"worker_extra_conf": (
"worker_main_http_uri: http://127.0.0.1:%d"
% (MAIN_PROCESS_HTTP_LISTENER_PORT,)
),
},
}
# Templates for sections that may be inserted multiple times in config files
SUPERVISORD_PROCESS_CONFIG_BLOCK = """
[program:synapse_{name}]
command=/usr/local/bin/python -m {app} \
--config-path="{config_path}" \
--config-path=/conf/workers/shared.yaml \
--config-path=/conf/workers/{name}.yaml
autorestart=unexpected
priority=500
exitcodes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
"""
NGINX_LOCATION_CONFIG_BLOCK = """
    location ~* {endpoint} {{
        proxy_pass {upstream};
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }}
"""
NGINX_UPSTREAM_CONFIG_BLOCK = """
upstream {upstream_worker_type} {{
{body}
}}
"""
# Utility functions
def log(txt: str):
"""Log something to the stdout.
Args:
txt: The text to log.
"""
print(txt)
def error(txt: str):
"""Log something and exit with an error code.
Args:
txt: The text to log in error.
"""
log(txt)
sys.exit(2)
def convert(src: str, dst: str, **template_vars):
"""Generate a file from a template
Args:
src: Path to the input file.
dst: Path to write to.
template_vars: The arguments to replace placeholder variables in the template with.
"""
# Read the template file
with open(src) as infile:
template = infile.read()
# Generate a string from the template. We disable autoescape to prevent template
# variables from being escaped.
rendered = jinja2.Template(template, autoescape=False).render(**template_vars)
# Write the generated contents to a file
#
# We use append mode in case the files have already been written to by something else
# (for instance, as part of the instructions in a dockerfile).
with open(dst, "a") as outfile:
# In case the existing file doesn't end with a newline
outfile.write("\n")
outfile.write(rendered)
def add_sharding_to_shared_config(
shared_config: dict,
worker_type: str,
worker_name: str,
worker_port: int,
) -> None:
"""Given a dictionary representing a config file shared across all workers,
append sharded worker information to it for the current worker_type instance.
Args:
shared_config: The config dict that all worker instances share (after being converted to YAML)
worker_type: The type of worker (one of those defined in WORKERS_CONFIG).
worker_name: The name of the worker instance.
worker_port: The HTTP replication port that the worker instance is listening on.
"""
# The instance_map config field marks the workers that write to various replication streams
instance_map = shared_config.setdefault("instance_map", {})
# Worker-type specific sharding config
if worker_type == "pusher":
shared_config.setdefault("pusher_instances", []).append(worker_name)
elif worker_type == "federation_sender":
shared_config.setdefault("federation_sender_instances", []).append(worker_name)
elif worker_type == "event_persister":
# Event persisters write to the events stream, so we need to update
# the list of event stream writers
shared_config.setdefault("stream_writers", {}).setdefault("events", []).append(
worker_name
)
# Map of stream writer instance names to host/ports combos
instance_map[worker_name] = {
"host": "localhost",
"port": worker_port,
}
elif worker_type == "media_repository":
# The first configured media worker will run the media background jobs
shared_config.setdefault("media_instance_running_background_jobs", worker_name)
def generate_base_homeserver_config():
"""Starts Synapse and generates a basic homeserver config, which will later be
modified for worker support.
Raises: CalledProcessError if calling start.py returned a non-zero exit code.
"""
# start.py already does this for us, so just call that.
# note that this script is copied in by the official, monolith dockerfile
os.environ["SYNAPSE_HTTP_PORT"] = str(MAIN_PROCESS_HTTP_LISTENER_PORT)
subprocess.check_output(["/usr/local/bin/python", "/start.py", "migrate_config"])
def generate_worker_files(environ, config_path: str, data_dir: str):
"""Read the desired list of workers from environment variables and generate
shared homeserver, nginx and supervisord configs.
Args:
environ: _Environ[str]
config_path: Where to output the generated Synapse main worker config file.
data_dir: The location of the synapse data directory. Where log and
user-facing config files live.
"""
# Note that yaml cares about indentation, so care should be taken to insert lines
# into files at the correct indentation below.
# shared_config is the contents of a Synapse config file that will be shared amongst
# the main Synapse process as well as all workers.
# It is intended mainly for disabling functionality when certain workers are spun up,
# and adding a replication listener.
# First read the original config file and extract the listeners block. Then we'll add
# another listener for replication. Later we'll write out the result.
listeners = [
{
"port": 9093,
"bind_address": "127.0.0.1",
"type": "http",
"resources": [{"names": ["replication"]}],
}
]
with open(config_path) as file_stream:
original_config = yaml.safe_load(file_stream)
original_listeners = original_config.get("listeners")
if original_listeners:
listeners += original_listeners
# The shared homeserver config. The contents of which will be inserted into the
# base shared worker jinja2 template.
#
# This config file will be passed to all workers, including Synapse's main process.
shared_config = {"listeners": listeners}
# The supervisord config. The contents of which will be inserted into the
# base supervisord jinja2 template.
#
# Supervisord will be in charge of running everything, from redis to nginx to Synapse
# and all of its worker processes. Load the config template, which defines a few
# services that are necessary to run.
supervisord_config = ""
# Upstreams for load-balancing purposes. This dict takes the form of a worker type to the
# ports of each worker. For example:
# {
#   worker_type: {1234, 1235, ...}
# }
# and will be used to construct 'upstream' nginx directives.
nginx_upstreams = {}
# A map of: {"endpoint": "upstream"}, where "upstream" is a str representing what will be
# placed after the proxy_pass directive. The main benefit to representing this data as a
# dict over a str is that we can easily deduplicate endpoints across multiple instances
# of the same worker.
#
# An nginx site config that will be amended to depending on the workers that are
# spun up. To be placed in /etc/nginx/conf.d.
nginx_locations = {}
# Read the desired worker configuration from the environment
worker_types = environ.get("SYNAPSE_WORKER_TYPES")
if worker_types is None:
# No workers, just the main process
worker_types = []
else:
# Split type names by comma
worker_types = worker_types.split(",")
# Create the worker configuration directory if it doesn't already exist
os.makedirs("/conf/workers", exist_ok=True)
# Start worker ports from this arbitrary port
worker_port = 18009
# A counter of worker_type -> int. Used for determining the name for a given
# worker type when generating its config file, as each worker's name is just
# worker_type + instance #
worker_type_counter = {}
# For each worker type specified by the user, create config values
for worker_type in worker_types:
worker_type = worker_type.strip()
worker_config = WORKERS_CONFIG.get(worker_type)
if worker_config:
worker_config = worker_config.copy()
else:
log(worker_type + " is an unknown worker type! It will be ignored")
continue
new_worker_count = worker_type_counter.setdefault(worker_type, 0) + 1
worker_type_counter[worker_type] = new_worker_count
# Name workers by their type concatenated with an incrementing number
# e.g. federation_reader1
worker_name = worker_type + str(new_worker_count)
worker_config.update(
{"name": worker_name, "port": worker_port, "config_path": config_path}
)
# Update the shared config with any worker-type specific options
shared_config.update(worker_config["shared_extra_conf"])
# Check if more than one instance of this worker type has been specified
worker_type_total_count = worker_types.count(worker_type)
if worker_type_total_count > 1:
# Update the shared config with sharding-related options if necessary
add_sharding_to_shared_config(
shared_config, worker_type, worker_name, worker_port
)
# Enable the worker in supervisord
supervisord_config += SUPERVISORD_PROCESS_CONFIG_BLOCK.format_map(worker_config)
# Add nginx location blocks for this worker's endpoints (if any are defined)
for pattern in worker_config["endpoint_patterns"]:
# Determine whether we need to load-balance this worker
if worker_type_total_count > 1:
# Create or add to a load-balanced upstream for this worker
nginx_upstreams.setdefault(worker_type, set()).add(worker_port)
# Upstreams are named after the worker_type
upstream = "http://" + worker_type
else:
upstream = "http://localhost:%d" % (worker_port,)
# Note that this endpoint should proxy to this upstream
nginx_locations[pattern] = upstream
# Write out the worker's logging config file
# Check whether we should write worker logs to disk, in addition to the console
extra_log_template_args = {}
if environ.get("SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK"):
extra_log_template_args["LOG_FILE_PATH"] = "{dir}/logs/{name}.log".format(
dir=data_dir, name=worker_name
)
# Render and write the file
log_config_filepath = "/conf/workers/{name}.log.config".format(name=worker_name)
convert(
"/conf/log.config",
log_config_filepath,
worker_name=worker_name,
**extra_log_template_args,
)
# Then a worker config file
convert(
"/conf/worker.yaml.j2",
"/conf/workers/{name}.yaml".format(name=worker_name),
**worker_config,
worker_log_config_filepath=log_config_filepath,
)
worker_port += 1
# Build the nginx location config blocks
nginx_location_config = ""
for endpoint, upstream in nginx_locations.items():
nginx_location_config += NGINX_LOCATION_CONFIG_BLOCK.format(
endpoint=endpoint,
upstream=upstream,
)
# Determine the load-balancing upstreams to configure
nginx_upstream_config = ""
for upstream_worker_type, upstream_worker_ports in nginx_upstreams.items():
body = ""
for port in upstream_worker_ports:
body += " server localhost:%d;\n" % (port,)
# Add to the list of configured upstreams
nginx_upstream_config += NGINX_UPSTREAM_CONFIG_BLOCK.format(
upstream_worker_type=upstream_worker_type,
body=body,
)
# Finally, we'll write out the config files.
# Shared homeserver config
convert(
"/conf/shared.yaml.j2",
"/conf/workers/shared.yaml",
shared_worker_config=yaml.dump(shared_config),
)
# Nginx config
convert(
"/conf/nginx.conf.j2",
"/etc/nginx/conf.d/matrix-synapse.conf",
worker_locations=nginx_location_config,
upstream_directives=nginx_upstream_config,
)
# Supervisord config
convert(
"/conf/supervisord.conf.j2",
"/etc/supervisor/conf.d/supervisord.conf",
main_config_path=config_path,
worker_config=supervisord_config,
)
# Ensure the logging directory exists
log_dir = data_dir + "/logs"
if not os.path.exists(log_dir):
os.mkdir(log_dir)
def start_supervisord():
"""Starts up supervisord which then starts and monitors all other necessary processes
Raises: CalledProcessError if calling start.py returns a non-zero exit code.
"""
subprocess.run(["/usr/bin/supervisord"], stdin=subprocess.PIPE)
def main(args, environ):
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
# override SYNAPSE_NO_TLS, we don't support TLS in worker mode,
# this needs to be handled by a frontend proxy
environ["SYNAPSE_NO_TLS"] = "yes"
# Generate the base homeserver config if one does not yet exist
if not os.path.exists(config_path):
log("Generating base homeserver config")
generate_base_homeserver_config()
# This script may be run multiple times (mostly by Complement, see note at top of file).
# Don't re-configure workers in this instance.
mark_filepath = "/conf/workers_have_been_configured"
if not os.path.exists(mark_filepath):
# Always regenerate all other config files
generate_worker_files(environ, config_path, data_dir)
# Mark workers as being configured
with open(mark_filepath, "w") as f:
f.write("")
# Start supervisord, which will start Synapse, all of the configured worker
# processes, redis, nginx etc. according to the config we created above.
start_supervisord()
if __name__ == "__main__":
main(sys.argv, os.environ)

View File

@@ -1,4 +1,4 @@
-#!/usr/bin/env bash
+#!/bin/bash

# This script runs the PostgreSQL tests inside a Docker container. It expects
# the relevant source files to be mounted into /src (done automatically by the

View File

@@ -3,7 +3,6 @@
import codecs
import glob
import os
-import platform
import subprocess
import sys

@@ -121,7 +120,7 @@ def generate_config_from_template(config_dir, config_path, environ, ownership):
    if ownership is not None:
        subprocess.check_output(["chown", "-R", ownership, "/data"])
-        args = ["gosu", ownership] + args
+        args = ["su-exec", ownership] + args

    subprocess.check_output(args)

@@ -173,14 +172,14 @@ def run_generate_config(environ, ownership):
        # make sure that synapse has perms to write to the data dir.
        subprocess.check_output(["chown", ownership, data_dir])

-        args = ["gosu", ownership] + args
-        os.execv("/usr/sbin/gosu", args)
+        args = ["su-exec", ownership] + args
+        os.execv("/sbin/su-exec", args)
    else:
        os.execv("/usr/local/bin/python", args)


def main(args, environ):
-    mode = args[1] if len(args) > 1 else "run"
+    mode = args[1] if len(args) > 1 else None
    desired_uid = int(environ.get("UID", "991"))
    desired_gid = int(environ.get("GID", "991"))
    synapse_worker = environ.get("SYNAPSE_WORKER", "synapse.app.homeserver")

@@ -190,7 +189,7 @@ def main(args, environ):
        ownership = "{}:{}".format(desired_uid, desired_gid)

    if ownership is None:
-        log("Will not perform chmod/gosu as UserID already matches request")
+        log("Will not perform chmod/su-exec as UserID already matches request")

    # In generate mode, generate a configuration and missing keys, then exit
    if mode == "generate":

@@ -206,28 +205,11 @@ def main(args, environ):
            config_dir, config_path, environ, ownership
        )

-    if mode != "run":
+    if mode is not None:
        error("Unknown execution mode '%s'" % (mode,))

-    args = args[2:]
-
-    if "-m" not in args:
-        args = ["-m", synapse_worker] + args
-
-    jemallocpath = "/usr/lib/%s-linux-gnu/libjemalloc.so.2" % (platform.machine(),)
-
-    if os.path.isfile(jemallocpath):
-        environ["LD_PRELOAD"] = jemallocpath
-        environ["PYTHONMALLOC"] = "malloc"
-    else:
-        log("Could not find %s, will not use" % (jemallocpath,))
-
-    # if there are no config files passed to synapse, try adding the default file
-    if not any(p.startswith("--config-path") or p.startswith("-c") for p in args):
-        config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
-        config_path = environ.get(
-            "SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
-        )
+    config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
+    config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")

    if not os.path.exists(config_path):
        if "SYNAPSE_SERVER_NAME" in environ:

@@ -250,16 +232,14 @@ running with 'migrate_config'. See the README for more details.
            % (config_path,)
        )

-    args += ["--config-path", config_path]
-
-    log("Starting synapse with args " + " ".join(args))
-
-    args = ["python"] + args
+    log("Starting synapse with config file " + config_path)
+
+    args = ["python", "-m", synapse_worker, "--config-path", config_path]

    if ownership is not None:
-        args = ["gosu", ownership] + args
-        os.execve("/usr/sbin/gosu", args, environ)
+        args = ["su-exec", ownership] + args
+        os.execv("/sbin/su-exec", args)
    else:
-        os.execve("/usr/local/bin/python", args, environ)
+        os.execv("/usr/local/bin/python", args)


if __name__ == "__main__":

View File

@@ -10,16 +10,5 @@
# homeserver.yaml. Instead, if you are starting from scratch, please generate
# a fresh config using Synapse by following the instructions in INSTALL.md.

-# Configuration options that take a time period can be set using a number
-# followed by a letter. Letters have the following meanings:
-# s = second
-# m = minute
-# h = hour
-# d = day
-# w = week
-# y = year
-# For example, setting redaction_retention_period: 5m would remove redacted
-# messages from the database after 5 minutes, rather than 5 months.
-
################################################################################

View File

@@ -12,14 +12,13 @@ introduced support for automatically provisioning certificates through
In [March 2019](https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430),
Let's Encrypt announced that they were deprecating version 1 of the ACME
protocol, with the plan to disable the use of it for new accounts in
-November 2019, for new domains in June 2020, and for existing accounts and
-domains in June 2021.
+November 2019, and for existing accounts in June 2020.

Synapse doesn't currently support version 2 of the ACME protocol, which
means that:

* for existing installs, Synapse's built-in ACME support will continue
-  to work until June 2021.
+  to work until June 2020.
* for new installs, this feature will not work at all.

Either way, it is recommended to move from Synapse's ACME support

View File

@@ -4,21 +4,17 @@ Admin APIs
This directory includes documentation for the various synapse specific admin
APIs available.

-Authenticating as a server admin
---------------------------------
-
-Many of the API calls in the admin api will require an `access_token` for a
-server admin. (Note that a server admin is distinct from a room admin.)
-
-A user can be marked as a server admin by updating the database directly, e.g.:
-
-.. code-block:: sql
-
-    UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
-
-A new server admin user can also be created using the
-``register_new_matrix_user`` script.
-
-Many of the API calls listed in the documentation here will require to include an admin `access_token`.
+Only users that are server admins can use these APIs. A user can be marked as a
+server admin by updating the database directly, e.g.:
+
+``UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'``
+
+Restarting may be required for the changes to register.
+
+Using an admin access_token
+###########################
+
Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
Once you have your `access_token`, to include it in a request, the best option is to add the token to a request header:
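For example, with the Python `requests` library (the endpoint shown is illustrative; substitute your homeserver and token):

```python
import requests

resp = requests.get(
    "https://matrix.example.com/_synapse/admin/v1/server_version",
    headers={"Authorization": "Bearer <access_token>"},
)
print(resp.json())
```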

View File

@@ -4,11 +4,11 @@ This API lets a server admin delete a local group. Doing so will kick all
users out of the group so that their clients will correctly handle the group
being deleted.
The API is:
```
POST /_synapse/admin/v1/delete_group/<group_id>
```
-To use it, you will need to authenticate by providing an `access_token` for a
-server admin: see [README.rst](README.rst).
+including an `access_token` of a server admin.
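For illustration, a minimal sketch using Python's `requests` library (the homeserver URL, token and group ID are placeholders, not part of this API's documentation):
```python
import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "<admin_access_token>"    # access token of a server admin
GROUP_ID = "+example-group:matrix.example.com"

# Delete the group; members are kicked so their clients handle the deletion.
resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/delete_group/{GROUP_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={},
)
resp.raise_for_status()
```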


@@ -1,172 +0,0 @@
# Show reported events
This API returns information about reported events.
The API is:
```
GET /_synapse/admin/v1/event_reports?from=0&limit=10
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
It returns a JSON body like the following:
```json
{
"event_reports": [
{
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
"id": 2,
"reason": "foo",
"score": -100,
"received_ts": 1570897107409,
"canonical_alias": "#alias1:matrix.org",
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
"name": "Matrix HQ",
"sender": "@foobar:matrix.org",
"user_id": "@foo:matrix.org"
},
{
"event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
"id": 3,
"reason": "bar",
"score": -100,
"received_ts": 1598889612059,
"canonical_alias": "#alias2:matrix.org",
"room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
"name": "Your room name here",
"sender": "@foobar:matrix.org",
"user_id": "@bar:matrix.org"
}
],
"next_token": 2,
"total": 4
}
```
To paginate, check for `next_token` and if present, call the endpoint again with `from`
set to the value of `next_token`. This will return a new page.
If the endpoint does not return a `next_token` then there are no more reports to
paginate through.
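As an illustration, a small Python sketch of this pagination loop (assuming the `requests` library; the homeserver URL and token are placeholders):
```python
import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "<admin_access_token>"

def fetch_all_event_reports():
    """Follow `next_token` until the server stops returning one."""
    reports = []
    params = {"from": 0, "limit": 100}
    while True:
        resp = requests.get(
            f"{BASE_URL}/_synapse/admin/v1/event_reports",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            params=params,
        )
        resp.raise_for_status()
        body = resp.json()
        reports.extend(body["event_reports"])
        if "next_token" not in body:
            break  # no more pages
        params["from"] = body["next_token"]
    return reports
```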
**URL parameters:**
* `limit`: integer - Optional, used for pagination, denoting the maximum number
of items to return in this call. Defaults to `100`.
* `from`: integer - Optional, used for pagination, denoting the offset in the
returned results. This should be treated as an opaque value and not explicitly set to
anything other than the return value of `next_token` from a previous call. Defaults to `0`.
* `dir`: string - Direction of event report order: whether to fetch the most recent
first (`b`) or the oldest first (`f`). Defaults to `b`.
* `user_id`: string - Optional. Filters to only return reports made by users whose user
ID contains this value. This is the user who reported the event and wrote the reason.
* `room_id`: string - Optional. Filters to only return reports for rooms whose room ID
contains this value.
**Response**
The following fields are returned in the JSON response body:
* `id`: integer - ID of event report.
* `received_ts`: integer - The timestamp (in milliseconds since the unix epoch) when this
report was sent.
* `room_id`: string - The ID of the room in which the event being reported is located.
* `name`: string - The name of the room.
* `event_id`: string - The ID of the reported event.
* `user_id`: string - This is the user who reported the event and wrote the reason.
* `reason`: string - Comment made by the `user_id` in this report. May be blank.
* `score`: integer - Content is reported based upon a negative score, where -100 is
"most offensive" and 0 is "inoffensive".
* `sender`: string - This is the ID of the user who sent the original message/event that
was reported.
* `canonical_alias`: string - The canonical alias of the room. `null` if the room does not
have a canonical alias set.
* `next_token`: integer - Indication for pagination. See above.
* `total`: integer - Total number of event reports related to the query
(`user_id` and `room_id`).
# Show details of a specific event report
This API returns information about a specific event report.
The API is:
```
GET /_synapse/admin/v1/event_reports/<report_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
It returns a JSON body like the following:
```jsonc
{
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
"event_json": {
"auth_events": [
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
"$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
],
"content": {
"body": "matrix.org: This Week in Matrix",
"format": "org.matrix.custom.html",
"formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
"msgtype": "m.notice"
},
"depth": 546,
"hashes": {
"sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
},
"origin": "matrix.org",
"origin_server_ts": 1592291711430,
"prev_events": [
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
],
"prev_state": [],
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
"sender": "@foobar:matrix.org",
"signatures": {
"matrix.org": {
"ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
}
},
"type": "m.room.message",
"unsigned": {
"age_ts": 1592291711430,
}
},
"id": <report_id>,
"reason": "foo",
"score": -100,
"received_ts": 1570897107409,
"canonical_alias": "#alias1:matrix.org",
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
"name": "Matrix HQ",
"sender": "@foobar:matrix.org",
"user_id": "@foo:matrix.org"
}
```
**URL parameters:**
* `report_id`: string - The ID of the event report.
**Response**
The following fields are returned in the JSON response body:
* `id`: integer - ID of event report.
* `received_ts`: integer - The timestamp (in milliseconds since the unix epoch) when this
report was sent.
* `room_id`: string - The ID of the room in which the event being reported is located.
* `name`: string - The name of the room.
* `event_id`: string - The ID of the reported event.
* `user_id`: string - This is the user who reported the event and wrote the reason.
* `reason`: string - Comment made by the `user_id` in this report. May be blank.
* `score`: integer - Content is reported based upon a negative score, where -100 is
"most offensive" and 0 is "inoffensive".
* `sender`: string - This is the ID of the user who sent the original message/event that
was reported.
* `canonical_alias`: string - The canonical alias of the room. `null` if the room does not
have a canonical alias set.
* `event_json`: object - Details of the original event that was reported.
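For illustration, fetching a single report with Python's `requests` (placeholder URL and token):
```python
import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "<admin_access_token>"

def get_event_report(report_id: int) -> dict:
    """Fetch one report, including the full `event_json`."""
    resp = requests.get(
        f"{BASE_URL}/_synapse/admin/v1/event_reports/{report_id}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()

report = get_event_report(2)
print(report["reason"], report["score"])
```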


@@ -1,35 +1,15 @@
-# Contents
-- [Querying media](#querying-media)
-  * [List all media in a room](#list-all-media-in-a-room)
-  * [List all media uploaded by a user](#list-all-media-uploaded-by-a-user)
-- [Quarantine media](#quarantine-media)
-  * [Quarantining media by ID](#quarantining-media-by-id)
-  * [Quarantining media in a room](#quarantining-media-in-a-room)
-  * [Quarantining all media of a user](#quarantining-all-media-of-a-user)
-  * [Protecting media from being quarantined](#protecting-media-from-being-quarantined)
-- [Delete local media](#delete-local-media)
-  * [Delete a specific local media](#delete-a-specific-local-media)
-  * [Delete local media by date or size](#delete-local-media-by-date-or-size)
-- [Purge Remote Media API](#purge-remote-media-api)
-# Querying media
-These APIs allow extracting media information from the homeserver.
-## List all media in a room
+# List all media in a room
This API gets a list of known media in a room.
-However, it only shows media from unencrypted events or rooms.
The API is:
```
GET /_synapse/admin/v1/room/<room_id>/media
```
-To use it, you will need to authenticate by providing an `access_token` for a
-server admin: see [README.rst](README.rst).
-The API returns a JSON body like the following:
-```json
+including an `access_token` of a server admin.
+It returns a JSON body like the following:
+```
{
  "local": [
    "mxc://localhost/xwvutsrqponmlkjihgfedcba",
@@ -42,12 +22,6 @@ The API returns a JSON body like the following:
}
```
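For illustration, the room media listing might be called from Python like this (a sketch; URL, token and room ID are placeholders):
```python
import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "<admin_access_token>"
ROOM_ID = "!roomid12345:example.org"

resp = requests.get(
    f"{BASE_URL}/_synapse/admin/v1/room/{ROOM_ID}/media",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print("Local media in room:", resp.json()["local"])
```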
## List all media uploaded by a user
Listing all media that has been uploaded by a local user can be achieved through
the use of the [List media of a user](user_admin_api.rst#list-media-of-a-user)
Admin API.
# Quarantine media
Quarantining media means that it is marked as inaccessible by users. It applies
@@ -72,7 +46,7 @@ form of `abcdefg12345...`.
Response:
-```json
+```
{}
```
@@ -92,18 +66,14 @@ Where `room_id` is in the form of `!roomid12345:example.org`.
Response:
-```json
+```
{
-  "num_quarantined": 10
+  "num_quarantined": 10 # The number of media items successfully quarantined
}
```
-The following fields are returned in the JSON response body:
-* `num_quarantined`: integer - The number of media items successfully quarantined
Note that there is a legacy endpoint, `POST
-/_synapse/admin/v1/quarantine_media/<room_id>`, that operates the same.
+/_synapse/admin/v1/quarantine_media/<room_id >`, that operates the same.
However, it is deprecated and may be removed in a future release.
## Quarantining all media of a user
@@ -120,155 +90,13 @@ POST /_synapse/admin/v1/user/<user_id>/media/quarantine
{}
```
-URL Parameters
-* `user_id`: string - User ID in the form of `@bob:example.org`
+Where `user_id` is in the form of `@bob:example.org`.
Response:
-```json
+```
{
-  "num_quarantined": 10
+  "num_quarantined": 10 # The number of media items successfully quarantined
}
```
The following fields are returned in the JSON response body:
* `num_quarantined`: integer - The number of media items successfully quarantined
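For illustration, a sketch of quarantining a user's media from Python (placeholder URL, token and user ID):
```python
import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "<admin_access_token>"
USER_ID = "@bob:example.org"

# Quarantine all media uploaded by this user.
resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/user/{USER_ID}/media/quarantine",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={},
)
resp.raise_for_status()
print("Quarantined", resp.json()["num_quarantined"], "media items")
```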
## Protecting media from being quarantined
This API protects a single piece of local media from being quarantined using the
above APIs. This is useful for sticker packs and other shared media which you do
not want to get quarantined, especially when
[quarantining media in a room](#quarantining-media-in-a-room).
Request:
```
POST /_synapse/admin/v1/media/protect/<media_id>
{}
```
Where `media_id` is in the form of `abcdefg12345...`.
Response:
```json
{}
```
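As a sketch (placeholder URL, token and media ID):
```python
import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "<admin_access_token>"
MEDIA_ID = "abcdefg12345"  # local media ID to protect

# Mark this media as protected so quarantine operations skip it.
resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/media/protect/{MEDIA_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={},
)
resp.raise_for_status()  # the response body is an empty JSON object
```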
# Delete local media
This API deletes the *local* media from the disk of your own server.
This includes any local thumbnails and copies of media downloaded from
remote homeservers.
This API will not affect media that has been uploaded to external
media repositories (e.g. https://github.com/turt2live/matrix-media-repo/).
See also [Purge Remote Media API](#purge-remote-media-api).
## Delete a specific local media
Delete a specific `media_id`.
Request:
```
DELETE /_synapse/admin/v1/media/<server_name>/<media_id>
{}
```
URL Parameters
* `server_name`: string - The name of your local server (e.g. `matrix.org`)
* `media_id`: string - The ID of the media (e.g. `abcdefghijklmnopqrstuvwx`)
Response:
```json
{
"deleted_media": [
"abcdefghijklmnopqrstuvwx"
],
"total": 1
}
```
The following fields are returned in the JSON response body:
* `deleted_media`: an array of strings - List of deleted `media_id`
* `total`: integer - Total number of deleted `media_id`
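For illustration (placeholder values as before):
```python
import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "<admin_access_token>"
SERVER_NAME = "matrix.example.com"       # your local server name
MEDIA_ID = "abcdefghijklmnopqrstuvwx"

resp = requests.delete(
    f"{BASE_URL}/_synapse/admin/v1/media/{SERVER_NAME}/{MEDIA_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print("Deleted:", resp.json()["deleted_media"])
```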
## Delete local media by date or size
Request:
```
POST /_synapse/admin/v1/media/<server_name>/delete?before_ts=<before_ts>
{}
```
URL Parameters
* `server_name`: string - The name of your local server (e.g. `matrix.org`).
* `before_ts`: string representing a positive integer - Unix timestamp in ms.
Files that were last used before this timestamp will be deleted. It is the timestamp of
last access, not the timestamp of creation.
* `size_gt`: Optional - string representing a positive integer - Size of the media in bytes.
Files that are larger will be deleted. Defaults to `0`.
* `keep_profiles`: Optional - string representing a boolean - Whether to keep files that
are still used in image data (e.g. a user profile or room avatar).
If `false`, these files will be deleted as well. Defaults to `true`.
Response:
```json
{
"deleted_media": [
"abcdefghijklmnopqrstuvwx",
"abcdefghijklmnopqrstuvwz"
],
"total": 2
}
```
The following fields are returned in the JSON response body:
* `deleted_media`: an array of strings - List of deleted `media_id`
* `total`: integer - Total number of deleted `media_id`
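For illustration, a sketch that deletes media not accessed in the last 30 days and larger than 1 MB, while keeping profile images (placeholder values):
```python
import time

import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "<admin_access_token>"
SERVER_NAME = "matrix.example.com"

# Unix timestamp in ms for "30 days ago" (timestamp of last access).
before_ts = int((time.time() - 30 * 24 * 3600) * 1000)

resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/media/{SERVER_NAME}/delete",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"before_ts": before_ts, "size_gt": 1_000_000, "keep_profiles": "true"},
    json={},
)
resp.raise_for_status()
print("Deleted", resp.json()["total"], "media items")
```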
# Purge Remote Media API
The purge remote media API allows server admins to purge old cached remote media.
The API is:
```
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
{}
```
URL Parameters
* `before_ts`: string representing a positive integer - Unix timestamp in ms.
All cached media that was last accessed before this timestamp will be removed.
Response:
```json
{
"deleted": 10
}
```
The following fields are returned in the JSON response body:
* `deleted`: integer - The number of media items successfully deleted
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.
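For illustration, purging remote media not accessed in the last week (placeholder values):
```python
import time

import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
ACCESS_TOKEN = "<admin_access_token>"

# Unix timestamp in ms for "7 days ago".
before_ts = int((time.time() - 7 * 24 * 3600) * 1000)

resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/purge_media_cache",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"before_ts": before_ts},
    json={},
)
resp.raise_for_status()
print("Purged", resp.json()["deleted"], "cached media items")
```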


@@ -15,8 +15,7 @@ The API is:
``POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]``
-To use it, you will need to authenticate by providing an ``access_token`` for a
-server admin: see `README.rst <README.rst>`_.
+including an ``access_token`` of a server admin.
By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
@@ -55,10 +54,8 @@ It is possible to poll for updates on recent purges with a second API;
``GET /_synapse/admin/v1/purge_history_status/<purge_id>``
-Again, you will need to authenticate by providing an ``access_token`` for a
-server admin.
-This API returns a JSON body like the following:
+(again, with a suitable ``access_token``). This API returns a JSON body like
+the following:
.. code:: json


@@ -0,0 +1,17 @@
Purge Remote Media API
======================
The purge remote media API allows server admins to purge old cached remote
media.
The API is::
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>&access_token=<access_token>
{}
Which will remove all cached media that was last accessed before
``<unix_timestamp_in_ms>``.
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.


@@ -1,8 +1,5 @@
-Deprecated: Purge room API
-==========================
-**The old Purge room API is deprecated and will be removed in a future release.
-See the new [Delete Room API](rooms.md#delete-room-api) for more details.**
+Purge room API
+==============
This API will remove all trace of a room from your database.

Some files were not shown because too many files have changed in this diff.