Merge branch 'develop' into matrix-org-hotfixes

Patrick Cloke 2020-12-15 08:23:14 -05:00
commit 33a349df91
191 changed files with 2918 additions and 1230 deletions


@@ -5,9 +5,10 @@ jobs:
- image: docker:git
steps:
- checkout
- setup_remote_docker
- docker_prepare
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
# for release builds, we want to get the amd64 image out asap, so first
# we do an amd64-only build, before following up with a multiarch build.
- docker_build:
tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
platforms: linux/amd64
@@ -20,12 +21,10 @@ jobs:
- image: docker:git
steps:
- checkout
- setup_remote_docker
- docker_prepare
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- docker_build:
tag: -t matrixdotorg/synapse:latest
platforms: linux/amd64
# for `latest`, we don't want the arm images to disappear, so don't update the tag
# until all of the platforms are built.
- docker_build:
tag: -t matrixdotorg/synapse:latest
platforms: linux/amd64,linux/arm/v7,linux/arm64
@@ -46,12 +45,16 @@ workflows:
commands:
docker_prepare:
description: Downloads the buildx cli plugin and enables multiarch images
description: Sets up a remote docker server, downloads the buildx cli plugin, and enables multiarch images
parameters:
buildx_version:
type: string
default: "v0.4.1"
steps:
- setup_remote_docker:
# 19.03.13 was the most recent available on circleci at the time of
# writing.
version: 19.03.13
- run: apk add --no-cache curl
- run: mkdir -vp ~/.docker/cli-plugins/ ~/dockercache
- run: curl --silent -L "https://github.com/docker/buildx/releases/download/<< parameters.buildx_version >>/buildx-<< parameters.buildx_version >>.linux-amd64" > ~/.docker/cli-plugins/docker-buildx


@@ -1,3 +1,116 @@
Synapse 1.25.0 (2020-xx-xx)
===========================
Removal warning
---------------
The old [Purge Room API](https://github.com/matrix-org/synapse/tree/master/docs/admin_api/purge_room.md)
and [Shutdown Room API](https://github.com/matrix-org/synapse/tree/master/docs/admin_api/shutdown_room.md)
are deprecated and will be removed in a future release. They will be replaced by the
[Delete Room API](https://github.com/matrix-org/synapse/tree/master/docs/admin_api/rooms.md#delete-room-api).
`POST /_synapse/admin/v1/rooms/<room_id>/delete` replaces `POST /_synapse/admin/v1/purge_room` and
`POST /_synapse/admin/v1/shutdown_room/<room_id>`.
Synapse 1.24.0 (2020-12-09)
===========================
Due to the two security issues highlighted below, server administrators are
encouraged to update Synapse. We are not aware of these vulnerabilities being
exploited in the wild.
Security advisory
-----------------
The following issues are fixed in v1.23.1 and v1.24.0.
- There is a denial of service attack
([CVE-2020-26257](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26257))
against the federation APIs in which future events will not be correctly sent
to other servers over federation. This affects all servers that participate in
open federation. (Fixed in [#8776](https://github.com/matrix-org/synapse/pull/8776)).
- Synapse may be affected by OpenSSL
[CVE-2020-1971](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1971).
Synapse administrators should ensure that they have the latest versions of
the cryptography Python package installed.
To upgrade Synapse along with the cryptography package:
* Administrators using the [`matrix.org` Docker
image](https://hub.docker.com/r/matrixdotorg/synapse/) or the [Debian/Ubuntu
packages from
`matrix.org`](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#matrixorg-packages)
should ensure that they have version 1.24.0 or 1.23.1 installed: these images include
the updated packages.
* Administrators who have [installed Synapse from
source](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#installing-from-source)
should upgrade the cryptography package within their virtualenv by running:
```sh
<path_to_virtualenv>/bin/pip install 'cryptography>=3.3'
```
* Administrators who have installed Synapse from distribution packages should
consult the information from their distributions.
Internal Changes
----------------
- Add a maximum version for pysaml2 on Python 3.5. ([\#8898](https://github.com/matrix-org/synapse/issues/8898))
Synapse 1.23.1 (2020-12-09)
===========================
Due to the two security issues highlighted below, server administrators are
encouraged to update Synapse. We are not aware of these vulnerabilities being
exploited in the wild.
Security advisory
-----------------
The following issues are fixed in v1.23.1 and v1.24.0.
- There is a denial of service attack
([CVE-2020-26257](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26257))
against the federation APIs in which future events will not be correctly sent
to other servers over federation. This affects all servers that participate in
open federation. (Fixed in [#8776](https://github.com/matrix-org/synapse/pull/8776)).
- Synapse may be affected by OpenSSL
[CVE-2020-1971](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1971).
Synapse administrators should ensure that they have the latest versions of
the cryptography Python package installed.
To upgrade Synapse along with the cryptography package:
* Administrators using the [`matrix.org` Docker
image](https://hub.docker.com/r/matrixdotorg/synapse/) or the [Debian/Ubuntu
packages from
`matrix.org`](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#matrixorg-packages)
should ensure that they have version 1.24.0 or 1.23.1 installed: these images include
the updated packages.
* Administrators who have [installed Synapse from
source](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#installing-from-source)
should upgrade the cryptography package within their virtualenv by running:
```sh
<path_to_virtualenv>/bin/pip install 'cryptography>=3.3'
```
* Administrators who have installed Synapse from distribution packages should
consult the information from their distributions.
Bugfixes
--------
- Fix a bug in some federation APIs which could lead to unexpected behaviour if different parameters were set in the URI and the request body. ([\#8776](https://github.com/matrix-org/synapse/issues/8776))
Internal Changes
----------------
- Add a maximum version for pysaml2 on Python 3.5. ([\#8898](https://github.com/matrix-org/synapse/issues/8898))
Synapse 1.24.0rc2 (2020-12-04)
==============================


@@ -557,10 +557,9 @@ This is critical from a security perspective to stop arbitrary Matrix users
spidering 'internal' URLs on your network. At the very least we recommend that
your loopback and RFC1918 IP addresses are blacklisted.
This also requires the optional `lxml` and `netaddr` python dependencies to be
installed. This in turn requires the `libxml2` library to be available - on
Debian/Ubuntu this means `apt-get install libxml2-dev`, or equivalent for
your OS.
This also requires the optional `lxml` python dependency to be installed. This
in turn requires the `libxml2` library to be available - on Debian/Ubuntu this
means `apt-get install libxml2-dev`, or equivalent for your OS.
# Troubleshooting Installation


@@ -75,6 +75,27 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.25.0
====================
Blacklisting IP ranges
----------------------
Synapse v1.25.0 includes new settings, ``ip_range_blacklist`` and
``ip_range_whitelist``, for controlling outgoing requests from Synapse for federation,
identity servers, push, and for checking key validity for third-party invite events.
The previous setting, ``federation_ip_range_blacklist``, is deprecated. The new
``ip_range_blacklist`` defaults to private IP ranges if it is not defined.
If you have never customised ``federation_ip_range_blacklist`` it is recommended
that you remove that setting.
If you have customised ``federation_ip_range_blacklist`` you should update the
setting name to ``ip_range_blacklist``.
If you have a custom push server that is reached via private IP space you may
need to customise ``ip_range_blacklist`` or ``ip_range_whitelist``.
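For illustration, migrating a customised blacklist might look like this in the homeserver configuration file (the ranges and the whitelisted address below are placeholders, not recommendations from this document):

```yaml
# Before (deprecated as of v1.25.0):
#federation_ip_range_blacklist:
#  - '10.0.0.0/8'
#  - '192.168.0.0/16'

# After: the same ranges under the new name, plus a whitelist entry
# for a hypothetical push server that is only reachable on private
# IP space.
ip_range_blacklist:
  - '10.0.0.0/8'
  - '192.168.0.0/16'
ip_range_whitelist:
  - '192.168.1.50/32'
```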
Upgrading to v1.24.0
====================

changelog.d/8802.doc Normal file

@@ -0,0 +1 @@
Fix the "Event persist rate" section of the included grafana dashboard by adding missing prometheus rules.

changelog.d/8821.bugfix Normal file

@@ -0,0 +1 @@
Apply an IP range blacklist to push and key revocation requests.

changelog.d/8827.bugfix Normal file

@@ -0,0 +1 @@
Fix bug where we might not correctly calculate the current state for rooms with multiple extremities.

changelog.d/8829.removal Normal file

@@ -0,0 +1 @@
Deprecate Shutdown Room and Purge Room Admin APIs.

changelog.d/8837.bugfix Normal file

@@ -0,0 +1 @@
Fix a long standing bug in the register admin endpoint (`/_synapse/admin/v1/register`) when the `mac` field was not provided. The endpoint now properly returns a 400 error. Contributed by @edwargix.

changelog.d/8839.doc Normal file

@@ -0,0 +1 @@
Combine related media admin API docs.

changelog.d/8853.feature Normal file

@@ -0,0 +1 @@
Add optional HTTP authentication to replication endpoints.

changelog.d/8858.bugfix Normal file

@@ -0,0 +1 @@
Fix a long-standing bug on Synapse instances supporting Single-Sign-On, where users would be prompted to enter their password to confirm certain actions, even though they have not set a password.

changelog.d/8861.misc Normal file

@@ -0,0 +1 @@
Remove some unnecessary stubbing from unit tests.

changelog.d/8862.bugfix Normal file

@@ -0,0 +1 @@
Fix a longstanding bug where a 500 error would be returned if the `Content-Length` header was not provided to the upload media resource.

changelog.d/8864.misc Normal file

@@ -0,0 +1 @@
Remove unused `FakeResponse` class from unit tests.

changelog.d/8865.bugfix Normal file

@@ -0,0 +1 @@
Add additional validation to pusher URLs to be compliant with the specification.

changelog.d/8867.bugfix Normal file

@@ -0,0 +1 @@
Fix the error code that is returned when a user tries to register on a homeserver on which new-user registration has been disabled.

changelog.d/8870.bugfix Normal file

@@ -0,0 +1 @@
Apply an IP range blacklist to push and key revocation requests.

changelog.d/8872.bugfix Normal file

@@ -0,0 +1 @@
Fix a bug where `PUT /_synapse/admin/v2/users/<user_id>` failed to create a new user when `avatar_url` is specified. Bug introduced in Synapse v1.9.0.

changelog.d/8873.doc Normal file

@@ -0,0 +1 @@
Fix an error in the documentation for the SAML username mapping provider.

changelog.d/8874.feature Normal file

@@ -0,0 +1 @@
Improve the error messages printed as a result of configuration problems for extension modules.

changelog.d/8879.misc Normal file

@@ -0,0 +1 @@
Pass `room_id` to `get_auth_chain_difference`.

changelog.d/8880.misc Normal file

@@ -0,0 +1 @@
Add type hints to push module.

changelog.d/8881.misc Normal file

@@ -0,0 +1 @@
Simplify logic for handling user-interactive-auth via single-sign-on servers.

changelog.d/8882.misc Normal file

@@ -0,0 +1 @@
Add type hints to push module.

changelog.d/8883.bugfix Normal file

@@ -0,0 +1 @@
Fix a 500 error when attempting to preview an empty HTML file.

changelog.d/8886.feature Normal file

@@ -0,0 +1 @@
Add number of local devices to Room Details Admin API. Contributed by @dklimpel.

changelog.d/8887.feature Normal file

@@ -0,0 +1 @@
Add `X-Robots-Tag` header to stop web crawlers from indexing media.

changelog.d/8890.feature Normal file

@@ -0,0 +1 @@
Spam-checkers may now define their methods as `async`.

changelog.d/8891.doc Normal file

@@ -0,0 +1 @@
Clarify comments around template directories in `sample_config.yaml`.

changelog.d/8897.feature Normal file

@@ -0,0 +1 @@
Add support for allowing users to pick their own user ID during a single-sign-on login.

changelog.d/8900.feature Normal file

@@ -0,0 +1 @@
Add support for allowing users to pick their own user ID during a single-sign-on login.

changelog.d/8901.misc Normal file

@@ -0,0 +1 @@
Add type hints to push module.

changelog.d/8905.misc Normal file

@@ -0,0 +1 @@
Skip the SAML tests if the requirements (`pysaml2` and `xmlsec1`) aren't available.

changelog.d/8906.misc Normal file

@@ -0,0 +1 @@
Fix multiarch docker image builds.

changelog.d/8909.misc Normal file

@@ -0,0 +1 @@
Don't publish `latest` docker image until all archs are built.

changelog.d/8911.feature Normal file

@@ -0,0 +1 @@
Add support for allowing users to pick their own user ID during a single-sign-on login.

changelog.d/8916.misc Normal file

@@ -0,0 +1 @@
Various clean-ups to the structured logging and logging context code.

changelog.d/8918.bugfix Normal file

@@ -0,0 +1 @@
Fix occasional deadlock when handling SIGHUP.

changelog.d/8920.bugfix Normal file

@@ -0,0 +1 @@
Fix login API to not ratelimit application services that have ratelimiting disabled.

changelog.d/8921.bugfix Normal file

@@ -0,0 +1 @@
Fix bug where we ratelimited auto joining of rooms on registration (using `auto_join_rooms` config).

changelog.d/8935.misc Normal file

@@ -0,0 +1 @@
Various clean-ups to the structured logging and logging context code.

changelog.d/8937.bugfix Normal file

@@ -0,0 +1 @@
Fix bug introduced in Synapse v1.24.0 which would cause an exception on startup if both `enabled` and `localdb_enabled` were set to `False` in the `password_config` setting of the configuration file.


@@ -58,3 +58,21 @@ groups:
labels:
type: "PDU"
expr: 'synapse_federation_transaction_queue_pending_pdus + 0'
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_type="remote"})
labels:
type: remote
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity="*client*",origin_type="local"})
labels:
type: local
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity!="*client*",origin_type="local"})
labels:
type: bridges
- record: synapse_storage_events_persisted_by_event_type
expr: sum without(origin_entity, origin_type) (synapse_storage_events_persisted_events_sep)
- record: synapse_storage_events_persisted_by_origin
expr: sum without(type) (synapse_storage_events_persisted_events_sep)

debian/changelog vendored

@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.24.0) stable; urgency=medium
* New synapse release 1.24.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 09 Dec 2020 10:14:30 +0000
matrix-synapse-py3 (1.23.1) stable; urgency=medium
* New synapse release 1.23.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 09 Dec 2020 10:40:39 +0000
matrix-synapse-py3 (1.23.0) stable; urgency=medium
* New synapse release 1.23.0.


@@ -69,7 +69,8 @@ RUN apt-get update -qq -o Acquire::Languages=none \
python3-setuptools \
python3-venv \
sqlite3 \
libpq-dev \
xmlsec1
COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /


@@ -1,3 +1,14 @@
# Contents
- [List all media in a room](#list-all-media-in-a-room)
- [Quarantine media](#quarantine-media)
* [Quarantining media by ID](#quarantining-media-by-id)
* [Quarantining media in a room](#quarantining-media-in-a-room)
* [Quarantining all media of a user](#quarantining-all-media-of-a-user)
- [Delete local media](#delete-local-media)
* [Delete a specific local media](#delete-a-specific-local-media)
* [Delete local media by date or size](#delete-local-media-by-date-or-size)
- [Purge Remote Media API](#purge-remote-media-api)
# List all media in a room
This API gets a list of known media in a room.
@@ -11,7 +22,7 @@ To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
The API returns a JSON body like the following:
```json
{
"local": [
"mxc://localhost/xwvutsrqponmlkjihgfedcba",
@@ -48,7 +59,7 @@ form of `abcdefg12345...`.
Response:
```json
{}
```
@@ -68,12 +79,16 @@ Where `room_id` is in the form of `!roomid12345:example.org`.
Response:
```json
{
"num_quarantined": 10
}
```
The following fields are returned in the JSON response body:
* `num_quarantined`: integer - The number of media items successfully quarantined
Note that there is a legacy endpoint, `POST
/_synapse/admin/v1/quarantine_media/<room_id>`, that operates the same.
However, it is deprecated and may be removed in a future release.
@@ -92,23 +107,29 @@ POST /_synapse/admin/v1/user/<user_id>/media/quarantine
{}
```
Where `user_id` is in the form of `@bob:example.org`.
URL Parameters
* `user_id`: string - User ID in the form of `@bob:example.org`
Response:
```json
{
"num_quarantined": 10
}
```
The following fields are returned in the JSON response body:
* `num_quarantined`: integer - The number of media items successfully quarantined
# Delete local media
This API deletes the *local* media from the disk of your own server.
This includes any local thumbnails and copies of media downloaded from
remote homeservers.
This API will not affect media that has been uploaded to external
media repositories (e.g https://github.com/turt2live/matrix-media-repo/).
See also [Purge Remote Media API](#purge-remote-media-api).
## Delete a specific local media
Delete a specific `media_id`.
@@ -180,3 +201,38 @@ The following fields are returned in the JSON response body:
* `deleted_media`: an array of strings - List of deleted `media_id`
* `total`: integer - Total number of deleted `media_id`
# Purge Remote Media API
The purge remote media API allows server admins to purge old cached remote media.
The API is:
```
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
{}
```
URL Parameters
* `unix_timestamp_in_ms`: string representing a positive integer - Unix timestamp in ms.
All cached media that was last accessed before this timestamp will be removed.
Response:
```json
{
"deleted": 10
}
```
The following fields are returned in the JSON response body:
* `deleted`: integer - The number of media items successfully deleted
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.
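To sketch how an administrator might drive this endpoint from a script (the homeserver URL and access token below are placeholders; only the endpoint path and the `before_ts` parameter come from the documentation above):

```python
import json
import time
import urllib.request


def purge_media_cache_request(base_url, access_token, before_ts_ms):
    """Build (but do not send) the POST request for the purge_media_cache API."""
    url = f"{base_url}/_synapse/admin/v1/purge_media_cache?before_ts={before_ts_ms}"
    return urllib.request.Request(
        url,
        data=json.dumps({}).encode("utf-8"),
        headers={"Authorization": f"Bearer {access_token}"},
        method="POST",
    )


# Purge remote media last accessed more than 30 days ago.
before_ts = int((time.time() - 30 * 24 * 3600) * 1000)
req = purge_media_cache_request(
    "https://synapse.example.com", "<admin_access_token>", before_ts
)
print(req.get_method(), req.full_url)
```

Sending the request (and authenticating as a server admin) is left out here; the sketch only shows how the URL and body are assembled.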


@@ -1,20 +0,0 @@
Purge Remote Media API
======================
The purge remote media API allows server admins to purge old cached remote
media.
The API is::
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
{}
... which will remove all cached media that was last accessed before
``<unix_timestamp_in_ms>``.
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.


@@ -1,12 +1,13 @@
Purge room API
==============
Deprecated: Purge room API
==========================
**The old Purge room API is deprecated and will be removed in a future release.
See the new [Delete Room API](rooms.md#delete-room-api) for more details.**
This API will remove all trace of a room from your database.
All local users must have left the room before it can be removed.
See also: [Delete Room API](rooms.md#delete-room-api)
The API is:
```


@@ -1,3 +1,14 @@
# Contents
- [List Room API](#list-room-api)
* [Parameters](#parameters)
* [Usage](#usage)
- [Room Details API](#room-details-api)
- [Room Members API](#room-members-api)
- [Delete Room API](#delete-room-api)
* [Parameters](#parameters-1)
* [Response](#response)
* [Undoing room shutdowns](#undoing-room-shutdowns)
# List Room API
The List Room admin API allows server admins to get a list of rooms on their
@@ -76,7 +87,7 @@ GET /_synapse/admin/v1/rooms
Response:
```jsonc
{
"rooms": [
{
@@ -128,7 +139,7 @@ GET /_synapse/admin/v1/rooms?search_term=TWIM
Response:
```json
{
"rooms": [
{
@@ -163,7 +174,7 @@ GET /_synapse/admin/v1/rooms?order_by=size
Response:
```jsonc
{
"rooms": [
{
@@ -219,14 +230,14 @@ GET /_synapse/admin/v1/rooms?order_by=size&from=100
Response:
```jsonc
{
"rooms": [
{
"room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
"name": "Music Theory",
"canonical_alias": "#musictheory:matrix.org",
"joined_members": 127,
"joined_local_members": 2,
"version": "1",
"creator": "@foo:matrix.org",
@@ -243,7 +254,7 @@ Response:
"room_id": "!twcBhHVdZlQWuuxBhN:termina.org.uk",
"name": "weechat-matrix",
"canonical_alias": "#weechat-matrix:termina.org.uk",
"joined_members": 137,
"joined_local_members": 20,
"version": "4",
"creator": "@foo:termina.org.uk",
@@ -278,6 +289,7 @@ The following fields are possible in the JSON response body:
* `canonical_alias` - The canonical (main) alias address of the room.
* `joined_members` - How many users are currently in the room.
* `joined_local_members` - How many local users are currently in the room.
* `joined_local_devices` - How many local devices are currently in the room.
* `version` - The version of the room as a string.
* `creator` - The `user_id` of the room creator.
* `encryption` - Algorithm of end-to-end encryption of messages. Is `null` if encryption is not active.
@@ -300,15 +312,16 @@ GET /_synapse/admin/v1/rooms/<room_id>
Response:
```json
{
"room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
"name": "Music Theory",
"avatar": "mxc://matrix.org/AQDaVFlbkQoErdOgqWRgiGSV",
"topic": "Theory, Composition, Notation, Analysis",
"canonical_alias": "#musictheory:matrix.org",
"joined_members": 127,
"joined_local_members": 2,
"joined_local_devices": 2,
"version": "1",
"creator": "@foo:matrix.org",
"encryption": null,
@@ -342,12 +355,12 @@ GET /_synapse/admin/v1/rooms/<room_id>/members
Response:
```json
{
"members": [
"@foo:matrix.org",
"@bar:matrix.org",
"@foobar:matrix.org"
],
"total": 3
}
@@ -357,8 +370,6 @@ Response:
The Delete Room admin API allows server admins to remove rooms from server
and block these rooms.
It is a combination and improvement of "[Shutdown room](shutdown_room.md)"
and "[Purge room](purge_room.md)" API.
Shuts down a room. Moves all local users and room aliases automatically to a
new room if `new_room_user_id` is set. Otherwise local users only
@@ -455,3 +466,30 @@ The following fields are returned in the JSON response body:
* `local_aliases` - An array of strings representing the local aliases that were migrated from
the old room to the new.
* `new_room_id` - A string representing the room ID of the new room.
## Undoing room shutdowns
*Note*: This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level,
the structure can and does change without notice.
First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible
to recover at all:
* If the room was invite-only, your users will need to be re-invited.
* If the room no longer has any members at all, it'll be impossible to rejoin.
* The first user to rejoin will have to do so via an alias on a different server.
With all that being said, if you still want to try and recover the room:
1. For safety reasons, shut down Synapse.
2. In the database, run `DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';`
* For caution: it's recommended to run this in a transaction: `BEGIN; DELETE ...;`, verify you got 1 result, then `COMMIT;`.
* The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
3. Restart Synapse.
You will have to manually handle, if you so choose, the following:
* Aliases that would have been redirected to the Content Violation room.
* Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
* Removal of the Content Violation room if desired.
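The careful-deletion pattern from step 2 can be sketched with an in-memory SQLite database standing in for Synapse's database (the table and column names come from the guide above; the room ID and everything else here is illustrative):

```python
import sqlite3

# Stand-in database containing the table named in the guide, with one
# blocked room already recorded.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blocked_rooms (room_id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO blocked_rooms VALUES ('!example:example.org')")
conn.commit()

# Step 2: delete the block inside a transaction. The room ID is the one
# supplied to the shutdown API, not the Content Violation room. Verify
# that exactly one row was affected before committing.
cur = conn.execute(
    "DELETE FROM blocked_rooms WHERE room_id = ?", ("!example:example.org",)
)
if cur.rowcount == 1:
    conn.commit()
else:
    conn.rollback()

remaining = conn.execute("SELECT COUNT(*) FROM blocked_rooms").fetchone()[0]
print(remaining)  # 0
```

The same check-then-commit shape is what the guide's `BEGIN; DELETE ...; COMMIT;` recommendation expresses for a production database.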


@@ -1,4 +1,7 @@
# Shutdown room API
# Deprecated: Shutdown room API
**The old Shutdown room API is deprecated and will be removed in a future release.
See the new [Delete Room API](rooms.md#delete-room-api) for more details.**
Shuts down a room, preventing new joins and moves local users and room aliases automatically
to a new room. The new room will be created with the user specified by the
@@ -10,8 +13,6 @@ disallow any further invites or joins.
The local server will only have the power to move local user and room aliases to
the new room. Users on other servers will be unaffected.
See also: [Delete Room API](rooms.md#delete-room-api)
## API
You will need to authenticate with an access token for an admin user.


@@ -144,6 +144,35 @@ pid_file: DATADIR/homeserver.pid
#
#enable_search: false
# Prevent outgoing requests from being sent to the following blacklisted IP address
# CIDR ranges. If this option is not specified then it defaults to private IP
# address ranges (see the example below).
#
# The blacklist applies to the outbound requests for federation, identity servers,
# push servers, and for checking key validity for third-party invite events.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
# This option replaces federation_ip_range_blacklist in Synapse v1.25.0.
#
#ip_range_blacklist:
# - '127.0.0.0/8'
# - '10.0.0.0/8'
# - '172.16.0.0/12'
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '192.0.0.0/24'
# - '169.254.0.0/16'
# - '198.18.0.0/15'
# - '192.0.2.0/24'
# - '198.51.100.0/24'
# - '203.0.113.0/24'
# - '224.0.0.0/4'
# - '::1/128'
# - 'fe80::/10'
# - 'fc00::/7'
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
@@ -642,26 +671,17 @@ acme:
# - nyc.example.com
# - syd.example.com
-# Prevent federation requests from being sent to the following
-# blacklist IP address CIDR ranges. If this option is not specified, or
-# specified with an empty list, no ip range blacklist will be enforced.
-#
-# As of Synapse v1.4.0 this option also affects any outbound requests to identity
-# servers provided by user input.
-#
-# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
-# listed here, since they correspond to unroutable addresses.)
-#
-federation_ip_range_blacklist:
-  - '127.0.0.0/8'
-  - '10.0.0.0/8'
-  - '172.16.0.0/12'
-  - '192.168.0.0/16'
-  - '100.64.0.0/10'
-  - '169.254.0.0/16'
-  - '::1/128'
-  - 'fe80::/64'
-  - 'fc00::/7'
+# List of IP address CIDR ranges that should be allowed for federation,
+# identity servers, push servers, and for checking key validity for
+# third-party invite events. This is useful for specifying exceptions to
+# wide-ranging blacklisted target IP ranges - e.g. for communication with
+# a push server only visible in your network.
+#
+# This whitelist overrides ip_range_blacklist and defaults to an empty
+# list.
+#
+#ip_range_whitelist:
+#  - '192.168.1.1'
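The whitelist/blacklist interplay can be sketched in plain Python. This is a hedged illustration using the stdlib `ipaddress` module rather than the netaddr `IPSet` that Synapse uses internally, and the decision function is an assumption for illustration, not Synapse's actual code:

```python
from ipaddress import ip_address, ip_network

# Illustrative ranges: two private blacklisted networks plus ULA space,
# and a single hypothetical internal push server on the whitelist.
BLACKLIST = [ip_network(c) for c in ("10.0.0.0/8", "192.168.0.0/16", "fc00::/7")]
WHITELIST = [ip_network("192.168.1.1/32")]

def ip_allowed(ip: str) -> bool:
    addr = ip_address(ip)
    # The whitelist wins over any blacklist range that also covers the address.
    if any(addr in net for net in WHITELIST):
        return True
    return not any(addr in net for net in BLACKLIST)
```

So `192.168.1.1` is reachable even though `192.168.0.0/16` is blacklisted, while its neighbours remain blocked.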
# Report prometheus metrics on the age of PDUs being sent to and received from # Report prometheus metrics on the age of PDUs being sent to and received from
# the following domains. This can be used to give an idea of "delay" on inbound # the following domains. This can be used to give an idea of "delay" on inbound
@ -953,9 +973,15 @@ media_store_path: "DATADIR/media_store"
# - '172.16.0.0/12' # - '172.16.0.0/12'
# - '192.168.0.0/16' # - '192.168.0.0/16'
# - '100.64.0.0/10' # - '100.64.0.0/10'
# - '192.0.0.0/24'
# - '169.254.0.0/16' # - '169.254.0.0/16'
# - '198.18.0.0/15'
# - '192.0.2.0/24'
# - '198.51.100.0/24'
# - '203.0.113.0/24'
# - '224.0.0.0/4'
# - '::1/128' # - '::1/128'
# - 'fe80::/64' # - 'fe80::/10'
# - 'fc00::/7' # - 'fc00::/7'
# List of IP address CIDR ranges that the URL preview spider is allowed # List of IP address CIDR ranges that the URL preview spider is allowed
@ -1877,11 +1903,8 @@ sso:
# - https://my.custom.client/ # - https://my.custom.client/
# Directory in which Synapse will try to find the template files below.
-# If not set, default templates from within the Synapse package will be used.
-#
-# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
-# If you *do* uncomment it, you will need to make sure that all the templates
-# below are in the directory.
+# If not set, or the files named below are not found within the template
+# directory, default templates from within the Synapse package will be used.
#
# Synapse will look for the following templates in this directory: # Synapse will look for the following templates in this directory:
# #
@ -2111,9 +2134,8 @@ email:
#validation_token_lifetime: 15m #validation_token_lifetime: 15m
# Directory in which Synapse will try to find the template files below.
-# If not set, default templates from within the Synapse package will be used.
-#
-# Do not uncomment this setting unless you want to customise the templates.
+# If not set, or the files named below are not found within the template
+# directory, default templates from within the Synapse package will be used.
#
# Synapse will look for the following templates in this directory: # Synapse will look for the following templates in this directory:
# #
@ -2587,6 +2609,13 @@ opentracing:
# #
#run_background_tasks_on: worker1 #run_background_tasks_on: worker1
# A shared secret used by the replication APIs to authenticate HTTP requests
# from workers.
#
# By default this is unused and traffic is not authenticated.
#
#worker_replication_secret: ""
# Configuration for Redis when using workers. This *must* be enabled when # Configuration for Redis when using workers. This *must* be enabled when
# using workers (unless using old style direct TCP configuration). # using workers (unless using old style direct TCP configuration).
@@ -22,6 +22,8 @@ well as some specific methods:
* `user_may_create_room`
* `user_may_create_room_alias`
* `user_may_publish_room`
+* `check_username_for_spam`
+* `check_registration_for_spam`

The details of each of these methods (as well as their inputs and outputs)
are documented in the `synapse.events.spamcheck.SpamChecker` class.
@@ -32,28 +34,33 @@ call back into the homeserver internals.

### Example

```python
+from synapse.spam_checker_api import RegistrationBehaviour
+
class ExampleSpamChecker:
    def __init__(self, config, api):
        self.config = config
        self.api = api

-    def check_event_for_spam(self, foo):
+    async def check_event_for_spam(self, foo):
        return False  # allow all events

-    def user_may_invite(self, inviter_userid, invitee_userid, room_id):
+    async def user_may_invite(self, inviter_userid, invitee_userid, room_id):
        return True  # allow all invites

-    def user_may_create_room(self, userid):
+    async def user_may_create_room(self, userid):
        return True  # allow all room creations

-    def user_may_create_room_alias(self, userid, room_alias):
+    async def user_may_create_room_alias(self, userid, room_alias):
        return True  # allow all room aliases

-    def user_may_publish_room(self, userid, room_id):
+    async def user_may_publish_room(self, userid, room_id):
        return True  # allow publishing of all rooms

-    def check_username_for_spam(self, user_profile):
+    async def check_username_for_spam(self, user_profile):
        return False  # allow all usernames
+
+    async def check_registration_for_spam(self, email_threepid, username, request_info):
+        return RegistrationBehaviour.ALLOW  # allow all registrations
```
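Since these callbacks are coroutines, a homeserver (or a test harness) must await them. A minimal, self-contained sketch; the `RegistrationBehaviour` enum below is a stand-in for the real one in `synapse.spam_checker_api`, and the driving code is hypothetical:

```python
import asyncio
from enum import Enum

# Stand-in so this sketch runs outside a homeserver; the real
# RegistrationBehaviour lives in synapse.spam_checker_api.
class RegistrationBehaviour(Enum):
    ALLOW = "allow"
    SHADOW_BAN = "shadow_ban"
    DENY = "deny"

class ExampleSpamChecker:
    def __init__(self, config, api):
        self.config = config
        self.api = api

    async def user_may_invite(self, inviter_userid, invitee_userid, room_id):
        return True  # allow all invites

    async def check_registration_for_spam(self, email_threepid, username, request_info):
        return RegistrationBehaviour.ALLOW  # allow all registrations

async def run_checks():
    checker = ExampleSpamChecker(config={}, api=None)
    # The callbacks are coroutines and must be awaited.
    allowed = await checker.user_may_invite(
        "@a:example.com", "@b:example.com", "!r:example.com"
    )
    behaviour = await checker.check_registration_for_spam(None, "alice", [])
    return allowed, behaviour

allowed, behaviour = asyncio.run(run_checks())
```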
## Configuration ## Configuration
@@ -116,11 +116,13 @@ comment these options out and use those specified by the module instead.

A custom mapping provider must specify the following methods:

-* `__init__(self, parsed_config)`
+* `__init__(self, parsed_config, module_api)`
  - Arguments:
    - `parsed_config` - A configuration object that is the return value of the
      `parse_config` method. You should set any configuration options needed by
      the module here.
+    - `module_api` - a `synapse.module_api.ModuleApi` object which provides the
+      stable API available for extension modules.
* `parse_config(config)`
  - This method should have the `@staticmethod` decoration.
  - Arguments:
@@ -89,7 +89,8 @@ shared configuration file.

Normally, only a couple of changes are needed to make an existing configuration
file suitable for use with workers. First, you need to enable an "HTTP replication
listener" for the main process; and secondly, you need to enable redis-based
-replication. For example:
+replication. Optionally, a shared secret can be used to authenticate HTTP
+traffic between workers. For example:

```yaml
@@ -103,6 +104,9 @@ listeners:
    resources:
      - names: [replication]

+# Add a random shared secret to authenticate traffic.
+worker_replication_secret: ""
+
redis:
    enabled: true
```
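`worker_replication_secret` just needs to be a high-entropy string shared by the main process and all workers; one way to generate such a value (an illustration, not something Synapse mandates):

```python
import secrets

# One way to generate a suitable high-entropy value; the identical string
# must then be set as worker_replication_secret on the main process and on
# every worker.
secret = secrets.token_hex(32)  # 32 random bytes as 64 hex characters
```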
@ -43,6 +43,7 @@ files =
synapse/handlers/room_member.py, synapse/handlers/room_member.py,
synapse/handlers/room_member_worker.py, synapse/handlers/room_member_worker.py,
synapse/handlers/saml_handler.py, synapse/handlers/saml_handler.py,
synapse/handlers/sso.py,
synapse/handlers/sync.py, synapse/handlers/sync.py,
synapse/handlers/ui_auth, synapse/handlers/ui_auth,
synapse/http/client.py, synapse/http/client.py,
@ -55,8 +56,7 @@ files =
synapse/metrics, synapse/metrics,
synapse/module_api, synapse/module_api,
synapse/notifier.py, synapse/notifier.py,
synapse/push/pusherpool.py, synapse/push,
synapse/push/push_rule_evaluator.py,
synapse/replication, synapse/replication,
synapse/rest, synapse/rest,
synapse/server.py, synapse/server.py,
@ -31,6 +31,8 @@ class SynapsePlugin(Plugin):
) -> Optional[Callable[[MethodSigContext], CallableType]]: ) -> Optional[Callable[[MethodSigContext], CallableType]]:
if fullname.startswith( if fullname.startswith(
"synapse.util.caches.descriptors._CachedFunction.__call__" "synapse.util.caches.descriptors._CachedFunction.__call__"
) or fullname.startswith(
"synapse.util.caches.descriptors._LruCachedFunction.__call__"
): ):
return cached_function_method_signature return cached_function_method_signature
return None return None
@ -48,7 +48,7 @@ try:
except ImportError: except ImportError:
pass pass
-__version__ = "1.24.0rc2"
+__version__ = "1.24.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)): if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when # We import here so that we don't have to install a bunch of deps when
@ -31,7 +31,9 @@ from synapse.api.errors import (
MissingClientTokenError, MissingClientTokenError,
) )
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.appservice import ApplicationService
from synapse.events import EventBase from synapse.events import EventBase
from synapse.http.site import SynapseRequest
from synapse.logging import opentracing as opentracing from synapse.logging import opentracing as opentracing
from synapse.storage.databases.main.registration import TokenLookupResult from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import StateMap, UserID from synapse.types import StateMap, UserID
@ -474,7 +476,7 @@ class Auth:
now = self.hs.get_clock().time_msec() now = self.hs.get_clock().time_msec()
return now < expiry return now < expiry
-    def get_appservice_by_req(self, request):
+    def get_appservice_by_req(self, request: SynapseRequest) -> ApplicationService:
token = self.get_access_token_from_request(request) token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token) service = self.store.get_app_service_by_token(token)
if not service: if not service:
@ -245,6 +245,8 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
# Set up the SIGHUP machinery. # Set up the SIGHUP machinery.
if hasattr(signal, "SIGHUP"): if hasattr(signal, "SIGHUP"):
reactor = hs.get_reactor()
@wrap_as_background_process("sighup") @wrap_as_background_process("sighup")
def handle_sighup(*args, **kwargs): def handle_sighup(*args, **kwargs):
# Tell systemd our state, if we're using it. This will silently fail if # Tell systemd our state, if we're using it. This will silently fail if
@ -260,7 +262,9 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
# is so that we're in a sane state, e.g. flushing the logs may fail # is so that we're in a sane state, e.g. flushing the logs may fail
# if the sighup happens in the middle of writing a log entry. # if the sighup happens in the middle of writing a log entry.
def run_sighup(*args, **kwargs): def run_sighup(*args, **kwargs):
hs.get_clock().call_later(0, handle_sighup, *args, **kwargs) # `callFromThread` should be "signal safe" as well as thread
# safe.
reactor.callFromThread(handle_sighup, *args, **kwargs)
signal.signal(signal.SIGHUP, run_sighup) signal.signal(signal.SIGHUP, run_sighup)
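`reactor.callFromThread` is used because a raw signal handler must do as little as possible and defer real work to the event loop. The same pattern, sketched with stdlib asyncio instead of Twisted (an analogy under stated assumptions, not Synapse's code; Unix-only, since it relies on `SIGHUP`):

```python
import asyncio
import os
import signal

# Sketch of the pattern with stdlib asyncio; Synapse uses Twisted's
# reactor.callFromThread, which is likewise signal- and thread-safe.
def install_sighup_handler(loop: asyncio.AbstractEventLoop, handle_sighup) -> None:
    def run_sighup(signum, frame):
        # Do as little as possible inside the raw signal handler: just hand
        # the real work off to the event loop in a thread-safe way.
        loop.call_soon_threadsafe(handle_sighup)

    signal.signal(signal.SIGHUP, run_sighup)

async def main() -> bool:
    fired = asyncio.Event()
    install_sighup_handler(asyncio.get_running_loop(), fired.set)
    os.kill(os.getpid(), signal.SIGHUP)  # equivalent to `kill -HUP <pid>`
    await asyncio.wait_for(fired.wait(), timeout=5)
    return True
```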
@ -266,7 +266,6 @@ class GenericWorkerPresence(BasePresenceHandler):
super().__init__(hs) super().__init__(hs)
self.hs = hs self.hs = hs
self.is_mine_id = hs.is_mine_id self.is_mine_id = hs.is_mine_id
self.http_client = hs.get_simple_http_client()
self._presence_enabled = hs.config.use_presence self._presence_enabled = hs.config.use_presence
@ -19,7 +19,7 @@ import gc
import logging import logging
import os import os
import sys import sys
from typing import Iterable from typing import Iterable, Iterator
from twisted.application import service from twisted.application import service
from twisted.internet import defer, reactor from twisted.internet import defer, reactor
@ -90,7 +90,7 @@ class SynapseHomeServer(HomeServer):
tls = listener_config.tls tls = listener_config.tls
site_tag = listener_config.http_options.tag site_tag = listener_config.http_options.tag
if site_tag is None: if site_tag is None:
-            site_tag = port
+            site_tag = str(port)
# We always include a health resource. # We always include a health resource.
resources = {"/health": HealthResource()} resources = {"/health": HealthResource()}
@ -107,7 +107,10 @@ class SynapseHomeServer(HomeServer):
logger.debug("Configuring additional resources: %r", additional_resources) logger.debug("Configuring additional resources: %r", additional_resources)
module_api = self.get_module_api() module_api = self.get_module_api()
for path, resmodule in additional_resources.items(): for path, resmodule in additional_resources.items():
handler_cls, config = load_module(resmodule) handler_cls, config = load_module(
resmodule,
("listeners", site_tag, "additional_resources", "<%s>" % (path,)),
)
handler = handler_cls(config, module_api) handler = handler_cls(config, module_api)
if IResource.providedBy(handler): if IResource.providedBy(handler):
resource = handler resource = handler
@ -342,7 +345,10 @@ def setup(config_options):
"Synapse Homeserver", config_options "Synapse Homeserver", config_options
) )
except ConfigError as e: except ConfigError as e:
sys.stderr.write("\nERROR: %s\n" % (e,)) sys.stderr.write("\n")
for f in format_config_error(e):
sys.stderr.write(f)
sys.stderr.write("\n")
sys.exit(1) sys.exit(1)
if not config: if not config:
@ -445,6 +451,38 @@ def setup(config_options):
return hs return hs
def format_config_error(e: ConfigError) -> Iterator[str]:
"""
Formats a config error neatly
The idea is to format the immediate error, plus the "causes" of those errors,
hopefully in a way that makes sense to the user. For example:
Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
Failed to parse config for module 'JinjaOidcMappingProvider':
invalid jinja template:
unexpected end of template, expected 'end of print statement'.
Args:
e: the error to be formatted
Returns: An iterator which yields string fragments to be formatted
"""
yield "Error in configuration"
if e.path:
yield " at '%s'" % (".".join(e.path),)
yield ":\n %s" % (e.msg,)
e = e.__cause__
indent = 1
while e:
indent += 1
yield ":\n%s%s" % (" " * indent, str(e))
e = e.__cause__
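A runnable sketch of this formatting logic, using a minimal stand-in for `synapse.config._base.ConfigError`, shows how chained causes are indented one level per hop:

```python
from typing import Iterable, Iterator, Optional

# Minimal stand-in for synapse.config._base.ConfigError, carrying a message
# and an optional path within the configuration.
class ConfigError(Exception):
    def __init__(self, msg: str, path: Optional[Iterable[str]] = None):
        super().__init__(msg)
        self.msg = msg
        self.path = path

def format_config_error(e: ConfigError) -> Iterator[str]:
    # The immediate error first, then each chained cause indented deeper.
    yield "Error in configuration"
    if e.path:
        yield " at '%s'" % (".".join(e.path),)
    yield ":\n  %s" % (e.msg,)
    cause = e.__cause__
    indent = 1
    while cause:
        indent += 1
        yield ":\n%s%s" % ("  " * indent, str(cause))
        cause = cause.__cause__

try:
    try:
        raise ValueError("invalid jinja template")
    except ValueError as inner:
        raise ConfigError(
            "Failed to parse config for module 'JinjaOidcMappingProvider'",
            path=["oidc_config", "user_mapping_provider"],
        ) from inner
except ConfigError as err:
    message = "".join(format_config_error(err))
```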
class SynapseService(service.Service): class SynapseService(service.Service):
""" """
A twisted Service class that will start synapse. Used to run synapse A twisted Service class that will start synapse. Used to run synapse
@ -23,7 +23,7 @@ import urllib.parse
from collections import OrderedDict from collections import OrderedDict
from hashlib import sha256 from hashlib import sha256
from textwrap import dedent from textwrap import dedent
from typing import Any, Callable, List, MutableMapping, Optional from typing import Any, Callable, Iterable, List, MutableMapping, Optional
import attr import attr
import jinja2 import jinja2
@ -32,7 +32,17 @@ import yaml
class ConfigError(Exception):
-    pass
+    """Represents a problem parsing the configuration
+
+    Args:
+        msg: A textual description of the error.
+        path: Where appropriate, an indication of where in the configuration
+            the problem lies.
+    """
+
+    def __init__(self, msg: str, path: Optional[Iterable[str]] = None):
+        self.msg = msg
+        self.path = path
# We split these messages out to allow packages to override with package # We split these messages out to allow packages to override with package
@ -1,4 +1,4 @@
from typing import Any, List, Optional from typing import Any, Iterable, List, Optional
from synapse.config import ( from synapse.config import (
api, api,
@ -35,7 +35,10 @@ from synapse.config import (
workers, workers,
) )
-class ConfigError(Exception): ...
+class ConfigError(Exception):
+    def __init__(self, msg: str, path: Optional[Iterable[str]] = None):
+        self.msg = msg
+        self.path = path
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
MISSING_REPORT_STATS_SPIEL: str MISSING_REPORT_STATS_SPIEL: str
@ -38,6 +38,22 @@ def validate_config(
try: try:
jsonschema.validate(config, json_schema) jsonschema.validate(config, json_schema)
except jsonschema.ValidationError as e: except jsonschema.ValidationError as e:
raise json_error_to_config_error(e, config_path)
def json_error_to_config_error(
e: jsonschema.ValidationError, config_path: Iterable[str]
) -> ConfigError:
"""Converts a json validation error to a user-readable ConfigError
Args:
e: the exception to be converted
config_path: the path within the config file. This will be used as a basis
for the error message.
Returns:
a ConfigError
"""
# copy `config_path` before modifying it. # copy `config_path` before modifying it.
path = list(config_path) path = list(config_path)
for p in list(e.path): for p in list(e.path):
@ -45,7 +61,4 @@ def validate_config(
path.append("<item %i>" % p) path.append("<item %i>" % p)
else: else:
path.append(str(p)) path.append(str(p))
return ConfigError(e.message, path)
raise ConfigError(
"Unable to parse configuration: %s at %s" % (e.message, ".".join(path))
)
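The path-building loop above renders jsonschema's mix of mapping keys and list indices as a dotted path; a standalone sketch of that logic (without the jsonschema dependency):

```python
from typing import Iterable, List, Union

def format_error_path(
    config_path: Iterable[str], error_path: Iterable[Union[str, int]]
) -> List[str]:
    # Mirrors the loop above: integer components are list indices, so they
    # are rendered as "<item N>"; string components are mapping keys.
    path = list(config_path)
    for p in error_path:
        if isinstance(p, int):
            path.append("<item %i>" % p)
        else:
            path.append(str(p))
    return path

# e.g. an error at listeners[0].resources[1].names
rendered = ".".join(format_error_path(("listeners",), [0, "resources", 1, "names"]))
```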
@ -390,9 +390,8 @@ class EmailConfig(Config):
#validation_token_lifetime: 15m #validation_token_lifetime: 15m
# Directory in which Synapse will try to find the template files below.
-# If not set, default templates from within the Synapse package will be used.
-#
-# Do not uncomment this setting unless you want to customise the templates.
+# If not set, or the files named below are not found within the template
+# directory, default templates from within the Synapse package will be used.
#
# Synapse will look for the following templates in this directory: # Synapse will look for the following templates in this directory:
# #
@ -12,12 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
from typing import Optional from typing import Optional
from netaddr import IPSet from synapse.config._base import Config
from synapse.config._base import Config, ConfigError
from synapse.config._util import validate_config from synapse.config._util import validate_config
@ -36,23 +33,6 @@ class FederationConfig(Config):
for domain in federation_domain_whitelist: for domain in federation_domain_whitelist:
self.federation_domain_whitelist[domain] = True self.federation_domain_whitelist[domain] = True
self.federation_ip_range_blacklist = config.get(
"federation_ip_range_blacklist", []
)
# Attempt to create an IPSet from the given ranges
try:
self.federation_ip_range_blacklist = IPSet(
self.federation_ip_range_blacklist
)
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
)
federation_metrics_domains = config.get("federation_metrics_domains") or [] federation_metrics_domains = config.get("federation_metrics_domains") or []
validate_config( validate_config(
_METRICS_FOR_DOMAINS_SCHEMA, _METRICS_FOR_DOMAINS_SCHEMA,
@ -76,26 +56,17 @@ class FederationConfig(Config):
# - nyc.example.com # - nyc.example.com
# - syd.example.com # - syd.example.com
-# Prevent federation requests from being sent to the following
-# blacklist IP address CIDR ranges. If this option is not specified, or
-# specified with an empty list, no ip range blacklist will be enforced.
-#
-# As of Synapse v1.4.0 this option also affects any outbound requests to identity
-# servers provided by user input.
-#
-# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
-# listed here, since they correspond to unroutable addresses.)
-#
-federation_ip_range_blacklist:
-  - '127.0.0.0/8'
-  - '10.0.0.0/8'
-  - '172.16.0.0/12'
-  - '192.168.0.0/16'
-  - '100.64.0.0/10'
-  - '169.254.0.0/16'
-  - '::1/128'
-  - 'fe80::/64'
-  - 'fc00::/7'
+# List of IP address CIDR ranges that should be allowed for federation,
+# identity servers, push servers, and for checking key validity for
+# third-party invite events. This is useful for specifying exceptions to
+# wide-ranging blacklisted target IP ranges - e.g. for communication with
+# a push server only visible in your network.
+#
+# This whitelist overrides ip_range_blacklist and defaults to an empty
+# list.
+#
+#ip_range_whitelist:
+#  - '192.168.1.1'
# Report prometheus metrics on the age of PDUs being sent to and received from # Report prometheus metrics on the age of PDUs being sent to and received from
# the following domains. This can be used to give an idea of "delay" on inbound # the following domains. This can be used to give an idea of "delay" on inbound
@ -206,7 +206,7 @@ def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) ->
# filter options, but care must when using e.g. MemoryHandler to buffer # filter options, but care must when using e.g. MemoryHandler to buffer
# writes. # writes.
-    log_context_filter = LoggingContextFilter(request="")
+    log_context_filter = LoggingContextFilter()
log_metadata_filter = MetadataFilter({"server_name": config.server_name}) log_metadata_filter = MetadataFilter({"server_name": config.server_name})
old_factory = logging.getLogRecordFactory() old_factory = logging.getLogRecordFactory()
@ -66,7 +66,7 @@ class OIDCConfig(Config):
( (
self.oidc_user_mapping_provider_class, self.oidc_user_mapping_provider_class,
self.oidc_user_mapping_provider_config, self.oidc_user_mapping_provider_config,
) = load_module(ump_config) ) = load_module(ump_config, ("oidc_config", "user_mapping_provider"))
# Ensure loaded user mapping module has defined all necessary methods # Ensure loaded user mapping module has defined all necessary methods
required_methods = [ required_methods = [
@ -36,7 +36,7 @@ class PasswordAuthProviderConfig(Config):
providers.append({"module": LDAP_PROVIDER, "config": ldap_config}) providers.append({"module": LDAP_PROVIDER, "config": ldap_config})
providers.extend(config.get("password_providers") or []) providers.extend(config.get("password_providers") or [])
for provider in providers: for i, provider in enumerate(providers):
mod_name = provider["module"] mod_name = provider["module"]
# This is for backwards compat when the ldap auth provider resided # This is for backwards compat when the ldap auth provider resided
@ -45,7 +45,8 @@ class PasswordAuthProviderConfig(Config):
mod_name = LDAP_PROVIDER mod_name = LDAP_PROVIDER
(provider_class, provider_config) = load_module( (provider_class, provider_config) = load_module(
{"module": mod_name, "config": provider["config"]} {"module": mod_name, "config": provider["config"]},
("password_providers", "<item %i>" % i),
) )
self.password_providers.append((provider_class, provider_config)) self.password_providers.append((provider_class, provider_config))
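Threading a config path into `load_module` lets errors name the exact list item that failed. A hypothetical, simplified sketch — the `load_module` below is a stand-in for illustration, not Synapse's real `synapse.util.module_loader.load_module`:

```python
from typing import Any, Dict, Tuple

# Hypothetical stand-in: it only demonstrates how the extra config-path
# argument improves error messages.
def load_module(provider: Dict[str, Any], config_path: Tuple[str, ...]):
    if "module" not in provider:
        raise ValueError("expected a 'module' key at %s" % (".".join(config_path),))
    return provider["module"], provider.get("config")

# The second entry is deliberately malformed.
providers = [{"module": "ldap_auth_provider.LdapAuthProvider", "config": {}}, {}]
errors = []
for i, provider in enumerate(providers):
    try:
        load_module(provider, ("password_providers", "<item %i>" % i))
    except ValueError as e:
        errors.append(str(e))
```

The resulting message pinpoints `password_providers.<item 1>` rather than vaguely blaming the whole option.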
@ -17,6 +17,9 @@ import os
from collections import namedtuple from collections import namedtuple
from typing import Dict, List from typing import Dict, List
from netaddr import IPSet
from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST
from synapse.python_dependencies import DependencyException, check_requirements from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module from synapse.util.module_loader import load_module
@ -142,7 +145,7 @@ class ContentRepositoryConfig(Config):
# them to be started. # them to be started.
self.media_storage_providers = [] # type: List[tuple] self.media_storage_providers = [] # type: List[tuple]
for provider_config in storage_providers: for i, provider_config in enumerate(storage_providers):
# We special case the module "file_system" so as not to need to # We special case the module "file_system" so as not to need to
# expose FileStorageProviderBackend # expose FileStorageProviderBackend
if provider_config["module"] == "file_system": if provider_config["module"] == "file_system":
@ -151,7 +154,9 @@ class ContentRepositoryConfig(Config):
".FileStorageProviderBackend" ".FileStorageProviderBackend"
) )
provider_class, parsed_config = load_module(provider_config) provider_class, parsed_config = load_module(
provider_config, ("media_storage_providers", "<item %i>" % i)
)
wrapper_config = MediaStorageProviderConfig( wrapper_config = MediaStorageProviderConfig(
provider_config.get("store_local", False), provider_config.get("store_local", False),
@ -182,9 +187,6 @@ class ContentRepositoryConfig(Config):
"to work" "to work"
) )
# netaddr is a dependency for url_preview
from netaddr import IPSet
self.url_preview_ip_range_blacklist = IPSet( self.url_preview_ip_range_blacklist = IPSet(
config["url_preview_ip_range_blacklist"] config["url_preview_ip_range_blacklist"]
) )
@ -213,6 +215,10 @@ class ContentRepositoryConfig(Config):
# strip final NL # strip final NL
formatted_thumbnail_sizes = formatted_thumbnail_sizes[:-1] formatted_thumbnail_sizes = formatted_thumbnail_sizes[:-1]
ip_range_blacklist = "\n".join(
" # - '%s'" % ip for ip in DEFAULT_IP_RANGE_BLACKLIST
)
return ( return (
r""" r"""
## Media Store ## ## Media Store ##
@ -283,15 +289,7 @@ class ContentRepositoryConfig(Config):
# you uncomment the following list as a starting point. # you uncomment the following list as a starting point.
# #
#url_preview_ip_range_blacklist: #url_preview_ip_range_blacklist:
# - '127.0.0.0/8' %(ip_range_blacklist)s
# - '10.0.0.0/8'
# - '172.16.0.0/12'
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '169.254.0.0/16'
# - '::1/128'
# - 'fe80::/64'
# - 'fc00::/7'
# List of IP address CIDR ranges that the URL preview spider is allowed # List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist. # to access even if they are specified in url_preview_ip_range_blacklist.
@ -180,7 +180,7 @@ class _RoomDirectoryRule:
self._alias_regex = glob_to_regex(alias) self._alias_regex = glob_to_regex(alias)
self._room_id_regex = glob_to_regex(room_id) self._room_id_regex = glob_to_regex(room_id)
except Exception as e: except Exception as e:
-            raise ConfigError("Failed to parse glob into regex: %s", e)
+            raise ConfigError("Failed to parse glob into regex") from e
def matches(self, user_id, room_id, aliases): def matches(self, user_id, room_id, aliases):
"""Tests if this rule matches the given user_id, room_id and aliases. """Tests if this rule matches the given user_id, room_id and aliases.
@ -125,7 +125,7 @@ class SAML2Config(Config):
( (
self.saml2_user_mapping_provider_class, self.saml2_user_mapping_provider_class,
self.saml2_user_mapping_provider_config, self.saml2_user_mapping_provider_config,
) = load_module(ump_dict) ) = load_module(ump_dict, ("saml2_config", "user_mapping_provider"))
# Ensure loaded user mapping module has defined all necessary methods # Ensure loaded user mapping module has defined all necessary methods
# Note parse_config() is already checked during the call to load_module # Note parse_config() is already checked during the call to load_module
@ -23,6 +23,7 @@ from typing import Any, Dict, Iterable, List, Optional, Set
import attr import attr
import yaml import yaml
from netaddr import IPSet
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.http.endpoint import parse_and_validate_server_name from synapse.http.endpoint import parse_and_validate_server_name
@ -39,6 +40,34 @@ logger = logging.Logger(__name__)
# in the list. # in the list.
DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"] DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
DEFAULT_IP_RANGE_BLACKLIST = [
# Localhost
"127.0.0.0/8",
# Private networks.
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
# Carrier grade NAT.
"100.64.0.0/10",
# Address registry.
"192.0.0.0/24",
# Link-local networks.
"169.254.0.0/16",
# Testing networks.
"198.18.0.0/15",
"192.0.2.0/24",
"198.51.100.0/24",
"203.0.113.0/24",
# Multicast.
"224.0.0.0/4",
# Localhost
"::1/128",
# Link-local addresses.
"fe80::/10",
# Unique local addresses.
"fc00::/7",
]
DEFAULT_ROOM_VERSION = "6" DEFAULT_ROOM_VERSION = "6"
ROOM_COMPLEXITY_TOO_GREAT = ( ROOM_COMPLEXITY_TOO_GREAT = (
@ -256,6 +285,38 @@ class ServerConfig(Config):
# due to resource constraints # due to resource constraints
self.admin_contact = config.get("admin_contact", None) self.admin_contact = config.get("admin_contact", None)
ip_range_blacklist = config.get(
"ip_range_blacklist", DEFAULT_IP_RANGE_BLACKLIST
)
# Attempt to create an IPSet from the given ranges
try:
self.ip_range_blacklist = IPSet(ip_range_blacklist)
except Exception as e:
raise ConfigError("Invalid range(s) provided in ip_range_blacklist.") from e
# Always blacklist 0.0.0.0, ::
+        self.ip_range_blacklist.update(["0.0.0.0", "::"])
+
+        try:
+            self.ip_range_whitelist = IPSet(config.get("ip_range_whitelist", ()))
+        except Exception as e:
+            raise ConfigError("Invalid range(s) provided in ip_range_whitelist.") from e
+
+        # The federation_ip_range_blacklist is used for backwards-compatibility
+        # and only applies to federation and identity servers. If it is not given,
+        # default to ip_range_blacklist.
+        federation_ip_range_blacklist = config.get(
+            "federation_ip_range_blacklist", ip_range_blacklist
+        )
+        try:
+            self.federation_ip_range_blacklist = IPSet(federation_ip_range_blacklist)
+        except Exception as e:
+            raise ConfigError(
+                "Invalid range(s) provided in federation_ip_range_blacklist."
+            ) from e
+        # Always blacklist 0.0.0.0, ::
+        self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])

         if self.public_baseurl is not None:
             if self.public_baseurl[-1] != "/":
                 self.public_baseurl += "/"
@@ -561,6 +622,10 @@ class ServerConfig(Config):
     def generate_config_section(
         self, server_name, data_dir_path, open_private_ports, listeners, **kwargs
     ):
+        ip_range_blacklist = "\n".join(
+            " # - '%s'" % ip for ip in DEFAULT_IP_RANGE_BLACKLIST
+        )
+
         _, bind_port = parse_and_validate_server_name(server_name)
         if bind_port is not None:
             unsecure_port = bind_port - 400
@@ -752,6 +817,21 @@ class ServerConfig(Config):
         #
         #enable_search: false

+        # Prevent outgoing requests from being sent to the following blacklisted IP address
+        # CIDR ranges. If this option is not specified then it defaults to private IP
+        # address ranges (see the example below).
+        #
+        # The blacklist applies to the outbound requests for federation, identity servers,
+        # push servers, and for checking key validity for third-party invite events.
+        #
+        # (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
+        # listed here, since they correspond to unroutable addresses.)
+        #
+        # This option replaces federation_ip_range_blacklist in Synapse v1.25.0.
+        #
+        #ip_range_blacklist:
+%(ip_range_blacklist)s
+
         # List of ports that Synapse should listen on, their purpose and their
         # configuration.
         #
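For reference, the blacklist behaviour that the hunk above configures can be sketched with the standard-library `ipaddress` module. Synapse itself uses `netaddr`'s `IPSet`, and its `DEFAULT_IP_RANGE_BLACKLIST` is longer than shown here; the range list and helper below are an illustrative subset, not Synapse code:

```python
import ipaddress

# An illustrative subset of the default blacklisted ranges (private/loopback).
DEFAULT_RANGES = [
    ipaddress.ip_network(cidr)
    for cidr in ["127.0.0.0/8", "10.0.0.0/8", "192.168.0.0/16", "::1/128"]
]


def is_blacklisted(address: str) -> bool:
    """Return True if the address falls inside any blacklisted range."""
    ip = ipaddress.ip_address(address)
    # 0.0.0.0 and :: are always refused, mirroring the config comment above.
    if ip == ipaddress.ip_address("0.0.0.0") or ip == ipaddress.ip_address("::"):
        return True
    # version-mismatched comparisons simply return False, so mixing v4/v6 is safe
    return any(ip in net for net in DEFAULT_RANGES)


print(is_blacklisted("192.168.1.5"))    # True: private address refused
print(is_blacklisted("93.184.216.34"))  # False: public address allowed
```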
View file
@@ -33,13 +33,14 @@ class SpamCheckerConfig(Config):
             # spam checker, and thus was simply a dictionary with module
             # and config keys. Support this old behaviour by checking
             # to see if the option resolves to a dictionary
-            self.spam_checkers.append(load_module(spam_checkers))
+            self.spam_checkers.append(load_module(spam_checkers, ("spam_checker",)))
         elif isinstance(spam_checkers, list):
-            for spam_checker in spam_checkers:
+            for i, spam_checker in enumerate(spam_checkers):
+                config_path = ("spam_checker", "<item %i>" % i)
                 if not isinstance(spam_checker, dict):
-                    raise ConfigError("spam_checker syntax is incorrect")
-                self.spam_checkers.append(load_module(spam_checker))
+                    raise ConfigError("expected a mapping", config_path)
+                self.spam_checkers.append(load_module(spam_checker, config_path))
         else:
             raise ConfigError("spam_checker syntax is incorrect")
View file
@@ -93,11 +93,8 @@ class SSOConfig(Config):
         #  - https://my.custom.client/

         # Directory in which Synapse will try to find the template files below.
-        # If not set, default templates from within the Synapse package will be used.
-        #
-        # DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
-        # If you *do* uncomment it, you will need to make sure that all the templates
-        # below are in the directory.
+        # If not set, or the files named below are not found within the template
+        # directory, default templates from within the Synapse package will be used.
         #
         # Synapse will look for the following templates in this directory:
         #
View file
@@ -26,7 +26,9 @@ class ThirdPartyRulesConfig(Config):
         provider = config.get("third_party_event_rules", None)
         if provider is not None:
-            self.third_party_event_rules = load_module(provider)
+            self.third_party_event_rules = load_module(
+                provider, ("third_party_event_rules",)
+            )

     def generate_config_section(self, **kwargs):
         return """\
View file
@@ -85,6 +85,9 @@ class WorkerConfig(Config):
         # The port on the main synapse for HTTP replication endpoint
         self.worker_replication_http_port = config.get("worker_replication_http_port")

+        # The shared secret used for authentication when connecting to the main synapse.
+        self.worker_replication_secret = config.get("worker_replication_secret", None)
+
         self.worker_name = config.get("worker_name", self.worker_app)

         self.worker_main_http_uri = config.get("worker_main_http_uri", None)
@@ -185,6 +188,13 @@ class WorkerConfig(Config):
         # data). If not provided this defaults to the main process.
         #
         #run_background_tasks_on: worker1
+
+        # A shared secret used by the replication APIs to authenticate HTTP requests
+        # from workers.
+        #
+        # By default this is unused and traffic is not authenticated.
+        #
+        #worker_replication_secret: ""
         """

     def read_arguments(self, args):
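The new `worker_replication_secret` option follows the usual shared-secret pattern: each replication request presents the secret, and the receiving side compares it in constant time, with no check at all when the option is unset. A rough sketch of that pattern — the header name and helper function are illustrative assumptions, not Synapse's actual replication code:

```python
import hmac
from typing import Dict, Optional


def check_replication_auth(headers: Dict[str, str], configured_secret: Optional[str]) -> bool:
    """Accept the request if no secret is configured (traffic unauthenticated,
    as the default config above notes), or if the presented token matches."""
    if configured_secret is None:
        return True
    presented = headers.get("Authorization", "")
    expected = "Bearer %s" % configured_secret
    # compare_digest avoids leaking the secret via timing differences
    return hmac.compare_digest(presented, expected)
```

With no secret configured every request passes; once one is set, a missing or wrong token is rejected.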
View file
@@ -578,7 +578,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
     def __init__(self, hs):
         super().__init__(hs)
         self.clock = hs.get_clock()
-        self.client = hs.get_http_client()
+        self.client = hs.get_federation_http_client()
         self.key_servers = self.config.key_servers

     async def get_keys(self, keys_to_fetch):
@@ -748,7 +748,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
     def __init__(self, hs):
         super().__init__(hs)
         self.clock = hs.get_clock()
-        self.client = hs.get_http_client()
+        self.client = hs.get_federation_http_client()

     async def get_keys(self, keys_to_fetch):
         """
View file
@@ -15,10 +15,11 @@
 # limitations under the License.

 import inspect
-from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
+from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union

 from synapse.spam_checker_api import RegistrationBehaviour
 from synapse.types import Collection
+from synapse.util.async_helpers import maybe_awaitable

 if TYPE_CHECKING:
     import synapse.events
@@ -39,7 +40,9 @@ class SpamChecker:
         else:
             self.spam_checkers.append(module(config=config))

-    def check_event_for_spam(self, event: "synapse.events.EventBase") -> bool:
+    async def check_event_for_spam(
+        self, event: "synapse.events.EventBase"
+    ) -> Union[bool, str]:
         """Checks if a given event is considered "spammy" by this server.

         If the server considers an event spammy, then it will be rejected if
@@ -50,15 +53,16 @@ class SpamChecker:
             event: the event to be checked

         Returns:
-            True if the event is spammy.
+            True or a string if the event is spammy. If a string is returned it
+            will be used as the error message returned to the user.
         """
         for spam_checker in self.spam_checkers:
-            if spam_checker.check_event_for_spam(event):
+            if await maybe_awaitable(spam_checker.check_event_for_spam(event)):
                 return True

         return False

-    def user_may_invite(
+    async def user_may_invite(
         self, inviter_userid: str, invitee_userid: str, room_id: str
     ) -> bool:
         """Checks if a given user may send an invite
@@ -75,14 +79,18 @@ class SpamChecker:
         """
         for spam_checker in self.spam_checkers:
             if (
-                spam_checker.user_may_invite(inviter_userid, invitee_userid, room_id)
+                await maybe_awaitable(
+                    spam_checker.user_may_invite(
+                        inviter_userid, invitee_userid, room_id
+                    )
+                )
                 is False
             ):
                 return False

         return True

-    def user_may_create_room(self, userid: str) -> bool:
+    async def user_may_create_room(self, userid: str) -> bool:
         """Checks if a given user may create a room

         If this method returns false, the creation request will be rejected.
@@ -94,12 +102,15 @@ class SpamChecker:
             True if the user may create a room, otherwise False
         """
         for spam_checker in self.spam_checkers:
-            if spam_checker.user_may_create_room(userid) is False:
+            if (
+                await maybe_awaitable(spam_checker.user_may_create_room(userid))
+                is False
+            ):
                 return False

         return True

-    def user_may_create_room_alias(self, userid: str, room_alias: str) -> bool:
+    async def user_may_create_room_alias(self, userid: str, room_alias: str) -> bool:
         """Checks if a given user may create a room alias

         If this method returns false, the association request will be rejected.
@@ -112,12 +123,17 @@ class SpamChecker:
             True if the user may create a room alias, otherwise False
         """
         for spam_checker in self.spam_checkers:
-            if spam_checker.user_may_create_room_alias(userid, room_alias) is False:
+            if (
+                await maybe_awaitable(
+                    spam_checker.user_may_create_room_alias(userid, room_alias)
+                )
+                is False
+            ):
                 return False

         return True

-    def user_may_publish_room(self, userid: str, room_id: str) -> bool:
+    async def user_may_publish_room(self, userid: str, room_id: str) -> bool:
         """Checks if a given user may publish a room to the directory

         If this method returns false, the publish request will be rejected.
@@ -130,12 +146,17 @@ class SpamChecker:
             True if the user may publish the room, otherwise False
         """
         for spam_checker in self.spam_checkers:
-            if spam_checker.user_may_publish_room(userid, room_id) is False:
+            if (
+                await maybe_awaitable(
+                    spam_checker.user_may_publish_room(userid, room_id)
+                )
+                is False
+            ):
                 return False

         return True

-    def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool:
+    async def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool:
         """Checks if a user ID or display name are considered "spammy" by this server.

         If the server considers a username spammy, then it will not be included in
@@ -157,12 +178,12 @@ class SpamChecker:
             if checker:
                 # Make a copy of the user profile object to ensure the spam checker
                 # cannot modify it.
-                if checker(user_profile.copy()):
+                if await maybe_awaitable(checker(user_profile.copy())):
                     return True

         return False

-    def check_registration_for_spam(
+    async def check_registration_for_spam(
         self,
         email_threepid: Optional[dict],
         username: Optional[str],
@@ -185,7 +206,9 @@ class SpamChecker:
             # spam checker
             checker = getattr(spam_checker, "check_registration_for_spam", None)
             if checker:
-                behaviour = checker(email_threepid, username, request_info)
+                behaviour = await maybe_awaitable(
+                    checker(email_threepid, username, request_info)
+                )
                 assert isinstance(behaviour, RegistrationBehaviour)
                 if behaviour != RegistrationBehaviour.ALLOW:
                     return behaviour
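The conversion above leans on `maybe_awaitable` so that both legacy synchronous spam-checker modules and new coroutine-based ones keep working through the same call sites. A minimal stand-in for that helper, to illustrate the pattern (Synapse's real implementation lives in `synapse.util.async_helpers` and is additionally Deferred-aware; the checker functions here are invented examples):

```python
import asyncio
import inspect


async def maybe_awaitable(value):
    """Await the value if it is awaitable, otherwise return it as-is."""
    if inspect.isawaitable(value):
        return await value
    return value


def sync_checker(event):          # old-style module: plain return value
    return False


async def async_checker(event):   # new-style module: coroutine
    return "this looks like spam"


async def run_checkers(event):
    results = []
    for checker in (sync_checker, async_checker):
        # one code path handles both kinds of module
        results.append(await maybe_awaitable(checker(event)))
    return results


print(asyncio.run(run_checkers({})))  # [False, 'this looks like spam']
```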
View file
@@ -78,6 +78,7 @@ class FederationBase:

         ctx = current_context()

+        @defer.inlineCallbacks
         def callback(_, pdu: EventBase):
             with PreserveLoggingContext(ctx):
                 if not check_event_content_hash(pdu):
@@ -105,7 +106,11 @@
                     )
                     return redacted_event

-                if self.spam_checker.check_event_for_spam(pdu):
+                result = yield defer.ensureDeferred(
+                    self.spam_checker.check_event_for_spam(pdu)
+                )
+                if result:
                     logger.warning(
                         "Event contains spam, redacting %s: %s",
                         pdu.event_id,
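Because `check_event_for_spam` is now a coroutine while `callback` runs in Twisted's generator-based `@defer.inlineCallbacks` style, the hunk above bridges the two with `defer.ensureDeferred`, which wraps a coroutine in a Deferred that the generator can `yield`. The same shape in asyncio terms — an analogy only, not Twisted code, with an invented spam predicate:

```python
import asyncio


async def check_event_for_spam(pdu) -> bool:
    # stand-in for the now-async spam check
    return "spam" in pdu.get("body", "")


async def callback(pdu):
    # ensure_future plays the role defer.ensureDeferred plays in Twisted:
    # it turns a coroutine into a Future that future-based code can hold on to.
    fut = asyncio.ensure_future(check_event_for_spam(pdu))
    result = await fut
    if result:
        return "redacted"
    return "ok"


print(asyncio.run(callback({"body": "buy spam now"})))  # redacted
```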
View file
@@ -845,7 +845,6 @@ class FederationHandlerRegistry:
     def __init__(self, hs: "HomeServer"):
         self.config = hs.config
-        self.http_client = hs.get_simple_http_client()
         self.clock = hs.get_clock()
         self._instance_name = hs.get_instance_name()
View file
@@ -35,7 +35,7 @@ class TransportLayerClient:
     def __init__(self, hs):
         self.server_name = hs.hostname
-        self.client = hs.get_http_client()
+        self.client = hs.get_federation_http_client()

     @log_function
     def get_room_state_ids(self, destination, room_id, event_id):
View file
@@ -1462,7 +1462,7 @@ def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=N
     Args:
         hs (synapse.server.HomeServer): homeserver
-        resource (TransportLayerServer): resource class to register to
+        resource (JsonResource): resource class to register to
         authenticator (Authenticator): authenticator to use
         ratelimiter (util.ratelimitutils.FederationRateLimiter): ratelimiter to use
         servlet_groups (list[str], optional): List of servlet groups to register.
View file
@@ -32,6 +32,10 @@ logger = logging.getLogger(__name__)
 class BaseHandler:
     """
     Common base class for the event handlers.
+
+    Deprecated: new code should not use this. Instead, Handler classes should define the
+    fields they actually need. The utility methods should either be factored out to
+    standalone helper functions, or to different Handler classes.
     """

     def __init__(self, hs: "HomeServer"):
View file
@@ -14,7 +14,6 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-import inspect
 import logging
 import time
 import unicodedata
@@ -22,6 +21,7 @@ import urllib.parse
 from typing import (
     TYPE_CHECKING,
     Any,
+    Awaitable,
     Callable,
     Dict,
     Iterable,
@@ -36,6 +36,8 @@ import attr
 import bcrypt
 import pymacaroons

+from twisted.web.http import Request
+
 from synapse.api.constants import LoginType
 from synapse.api.errors import (
     AuthError,
@@ -56,6 +58,7 @@ from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.module_api import ModuleApi
 from synapse.types import JsonDict, Requester, UserID
 from synapse.util import stringutils as stringutils
+from synapse.util.async_helpers import maybe_awaitable
 from synapse.util.msisdn import phone_number_to_msisdn
 from synapse.util.threepids import canonicalise_email
@@ -193,39 +196,27 @@ class AuthHandler(BaseHandler):
         self.hs = hs  # FIXME better possibility to access registrationHandler later?
         self.macaroon_gen = hs.get_macaroon_generator()
         self._password_enabled = hs.config.password_enabled
-        self._sso_enabled = (
-            hs.config.cas_enabled or hs.config.saml2_enabled or hs.config.oidc_enabled
-        )
-
-        # we keep this as a list despite the O(N^2) implication so that we can
-        # keep PASSWORD first and avoid confusing clients which pick the first
-        # type in the list. (NB that the spec doesn't require us to do so and
-        # clients which favour types that they don't understand over those that
-        # they do are technically broken)
+        self._password_localdb_enabled = hs.config.password_localdb_enabled

         # start out by assuming PASSWORD is enabled; we will remove it later if not.
-        login_types = []
-        if hs.config.password_localdb_enabled:
-            login_types.append(LoginType.PASSWORD)
+        login_types = set()
+        if self._password_localdb_enabled:
+            login_types.add(LoginType.PASSWORD)

         for provider in self.password_providers:
-            if hasattr(provider, "get_supported_login_types"):
-                for t in provider.get_supported_login_types().keys():
-                    if t not in login_types:
-                        login_types.append(t)
+            login_types.update(provider.get_supported_login_types().keys())

         if not self._password_enabled:
+            login_types.discard(LoginType.PASSWORD)
+
+        # Some clients just pick the first type in the list. In this case, we want
+        # them to use PASSWORD (rather than token or whatever), so we want to make sure
+        # that comes first, where it's present.
+        self._supported_login_types = []
+        if LoginType.PASSWORD in login_types:
+            self._supported_login_types.append(LoginType.PASSWORD)
             login_types.remove(LoginType.PASSWORD)
-
-        self._supported_login_types = login_types
-
-        # Login types and UI Auth types have a heavy overlap, but are not
-        # necessarily identical. Login types have SSO (and other login types)
-        # added in the rest layer, see synapse.rest.client.v1.login.LoginRestServerlet.on_GET.
-        ui_auth_types = login_types.copy()
-        if self._sso_enabled:
-            ui_auth_types.append(LoginType.SSO)
-        self._supported_ui_auth_types = ui_auth_types
+        self._supported_login_types.extend(login_types)

         # Ratelimiter for failed auth during UIA. Uses same ratelimit config
         # as per `rc_login.failed_attempts`.
@@ -339,7 +330,10 @@ class AuthHandler(BaseHandler):
         self._failed_uia_attempts_ratelimiter.ratelimit(user_id, update=False)

         # build a list of supported flows
-        flows = [[login_type] for login_type in self._supported_ui_auth_types]
+        supported_ui_auth_types = await self._get_available_ui_auth_types(
+            requester.user
+        )
+        flows = [[login_type] for login_type in supported_ui_auth_types]

         try:
             result, params, session_id = await self.check_ui_auth(
@@ -351,7 +345,7 @@ class AuthHandler(BaseHandler):
             raise

         # find the completed login type
-        for login_type in self._supported_ui_auth_types:
+        for login_type in supported_ui_auth_types:
             if login_type not in result:
                 continue
@@ -367,6 +361,41 @@ class AuthHandler(BaseHandler):

         return params, session_id

+    async def _get_available_ui_auth_types(self, user: UserID) -> Iterable[str]:
+        """Get a list of the authentication types this user can use
+        """
+        ui_auth_types = set()
+
+        # if the HS supports password auth, and the user has a non-null password, we
+        # support password auth
+        if self._password_localdb_enabled and self._password_enabled:
+            lookupres = await self._find_user_id_and_pwd_hash(user.to_string())
+            if lookupres:
+                _, password_hash = lookupres
+                if password_hash:
+                    ui_auth_types.add(LoginType.PASSWORD)
+
+        # also allow auth from password providers
+        for provider in self.password_providers:
+            for t in provider.get_supported_login_types().keys():
+                if t == LoginType.PASSWORD and not self._password_enabled:
+                    continue
+                ui_auth_types.add(t)
+
+        # if sso is enabled, allow the user to log in via SSO iff they have a mapping
+        # from sso to mxid.
+        if self.hs.config.saml2.saml2_enabled or self.hs.config.oidc.oidc_enabled:
+            if await self.store.get_external_ids_by_user(user.to_string()):
+                ui_auth_types.add(LoginType.SSO)
+
+        # Our CAS impl does not (yet) correctly register users in user_external_ids,
+        # so always offer that if it's available.
+        if self.hs.config.cas.cas_enabled:
+            ui_auth_types.add(LoginType.SSO)
+
+        return ui_auth_types
+
     def get_enabled_auth_types(self):
         """Return the enabled user-interactive authentication types
@@ -831,7 +860,7 @@ class AuthHandler(BaseHandler):
     async def validate_login(
         self, login_submission: Dict[str, Any], ratelimit: bool = False,
-    ) -> Tuple[str, Optional[Callable[[Dict[str, str]], None]]]:
+    ) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
         """Authenticates the user for the /login API

         Also used by the user-interactive auth flow to validate auth types which don't
@@ -974,7 +1003,7 @@ class AuthHandler(BaseHandler):
     async def _validate_userid_login(
         self, username: str, login_submission: Dict[str, Any],
-    ) -> Tuple[str, Optional[Callable[[Dict[str, str]], None]]]:
+    ) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
         """Helper for validate_login

         Handles login, once we've mapped 3pids onto userids
@@ -1029,7 +1058,7 @@ class AuthHandler(BaseHandler):
             if result:
                 return result

-        if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
+        if login_type == LoginType.PASSWORD and self._password_localdb_enabled:
             known_login_type = True

             # we've already checked that there is a (valid) password field
@@ -1052,7 +1081,7 @@ class AuthHandler(BaseHandler):
     async def check_password_provider_3pid(
         self, medium: str, address: str, password: str
-    ) -> Tuple[Optional[str], Optional[Callable[[Dict[str, str]], None]]]:
+    ) -> Tuple[Optional[str], Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
         """Check if a password provider is able to validate a thirdparty login

         Args:
@@ -1303,15 +1332,14 @@ class AuthHandler(BaseHandler):
         )

     async def complete_sso_ui_auth(
-        self, registered_user_id: str, session_id: str, request: SynapseRequest,
+        self, registered_user_id: str, session_id: str, request: Request,
     ):
         """Having figured out a mxid for this user, complete the HTTP request

         Args:
             registered_user_id: The registered user ID to complete SSO login for.
+            session_id: The ID of the user-interactive auth session.
             request: The request to complete.
-            client_redirect_url: The URL to which to redirect the user at the end of the
-            process.
         """
         # Mark the stage of the authentication as successful.
         # Save the user who authenticated with SSO, this will be used to ensure
@@ -1327,7 +1355,7 @@ class AuthHandler(BaseHandler):
     async def complete_sso_login(
         self,
         registered_user_id: str,
-        request: SynapseRequest,
+        request: Request,
         client_redirect_url: str,
         extra_attributes: Optional[JsonDict] = None,
     ):
@@ -1355,7 +1383,7 @@ class AuthHandler(BaseHandler):
     def _complete_sso_login(
         self,
         registered_user_id: str,
-        request: SynapseRequest,
+        request: Request,
         client_redirect_url: str,
         extra_attributes: Optional[JsonDict] = None,
     ):
@@ -1609,6 +1637,6 @@ class PasswordProvider:
         # This might return an awaitable, if it does block the log out
         # until it completes.
-        result = g(user_id=user_id, device_id=device_id, access_token=access_token,)
-        if inspect.isawaitable(result):
-            await result
+        await maybe_awaitable(
+            g(user_id=user_id, device_id=device_id, access_token=access_token,)
+        )
View file
@@ -133,7 +133,9 @@ class DirectoryHandler(BaseHandler):
                 403, "You must be in the room to create an alias for it"
             )

-        if not self.spam_checker.user_may_create_room_alias(user_id, room_alias):
+        if not await self.spam_checker.user_may_create_room_alias(
+            user_id, room_alias
+        ):
             raise AuthError(403, "This user is not permitted to create this alias")

         if not self.config.is_alias_creation_allowed(
@@ -409,7 +411,7 @@ class DirectoryHandler(BaseHandler):
         """
         user_id = requester.user.to_string()

-        if not self.spam_checker.user_may_publish_room(user_id, room_id):
+        if not await self.spam_checker.user_may_publish_room(user_id, room_id):
             raise AuthError(
                 403, "This user is not permitted to publish rooms to the room list"
             )
View file
@@ -140,7 +140,7 @@ class FederationHandler(BaseHandler):
         self._message_handler = hs.get_message_handler()
         self._server_notices_mxid = hs.config.server_notices_mxid
         self.config = hs.config
-        self.http_client = hs.get_simple_http_client()
+        self.http_client = hs.get_proxied_blacklisted_http_client()
         self._instance_name = hs.get_instance_name()
         self._replication = hs.get_replication_data_handler()
@@ -1593,7 +1593,7 @@ class FederationHandler(BaseHandler):
         if self.hs.config.block_non_admin_invites:
             raise SynapseError(403, "This server does not accept room invites")

-        if not self.spam_checker.user_may_invite(
+        if not await self.spam_checker.user_may_invite(
             event.sender, event.state_key, event.room_id
         ):
             raise SynapseError(
View file
@@ -46,13 +46,13 @@ class IdentityHandler(BaseHandler):
     def __init__(self, hs):
         super().__init__(hs)

+        # An HTTP client for contacting trusted URLs.
         self.http_client = SimpleHttpClient(hs)
-        # We create a blacklisting instance of SimpleHttpClient for contacting identity
-        # servers specified by clients
+        # An HTTP client for contacting identity servers specified by clients.
         self.blacklisting_http_client = SimpleHttpClient(
             hs, ip_blacklist=hs.config.federation_ip_range_blacklist
         )
-        self.federation_http_client = hs.get_http_client()
+        self.federation_http_client = hs.get_federation_http_client()
         self.hs = hs

     async def threepid_from_creds(
View file
@@ -744,7 +744,7 @@ class EventCreationHandler:
                 event.sender,
             )

-        spam_error = self.spam_checker.check_event_for_spam(event)
+        spam_error = await self.spam_checker.check_event_for_spam(event)
         if spam_error:
             if not isinstance(spam_error, str):
                 spam_error = "Spam is not permitted here"
View file
@@ -674,6 +674,21 @@ class OidcHandler(BaseHandler):
             self._sso_handler.render_error(request, "invalid_token", str(e))
             return

+        # first check if we're doing a UIA
+        if ui_auth_session_id:
+            try:
+                remote_user_id = self._remote_id_from_userinfo(userinfo)
+            except Exception as e:
+                logger.exception("Could not extract remote user id")
+                self._sso_handler.render_error(request, "mapping_error", str(e))
+                return
+
+            return await self._sso_handler.complete_sso_ui_auth_request(
+                self._auth_provider_id, remote_user_id, ui_auth_session_id, request
+            )
+
+        # otherwise, it's a login
+
         # Pull out the user-agent and IP from the request.
         user_agent = request.get_user_agent("")
         ip_address = self.hs.get_ip_from_request(request)
@@ -698,11 +713,6 @@ class OidcHandler(BaseHandler):
             extra_attributes = await get_extra_attributes(userinfo, token)

         # and finally complete the login
-        if ui_auth_session_id:
-            await self._auth_handler.complete_sso_ui_auth(
-                user_id, ui_auth_session_id, request
-            )
-        else:
-            await self._auth_handler.complete_sso_login(
-                user_id, request, client_redirect_url, extra_attributes
-            )
+        await self._auth_handler.complete_sso_login(
+            user_id, request, client_redirect_url, extra_attributes
+        )
@@ -856,14 +866,11 @@ class OidcHandler(BaseHandler):
             The mxid of the user
         """
         try:
-            remote_user_id = self._user_mapping_provider.get_remote_user_id(userinfo)
+            remote_user_id = self._remote_id_from_userinfo(userinfo)
         except Exception as e:
             raise MappingException(
                 "Failed to extract subject from OIDC response: %s" % (e,)
             )
-        # Some OIDC providers use integer IDs, but Synapse expects external IDs
-        # to be strings.
-        remote_user_id = str(remote_user_id)

         # Older mapping providers don't accept the `failures` argument, so we
         # try and detect support.
@@ -933,6 +940,19 @@ class OidcHandler(BaseHandler):
             grandfather_existing_users,
         )

+    def _remote_id_from_userinfo(self, userinfo: UserInfo) -> str:
+        """Extract the unique remote id from an OIDC UserInfo block
+
+        Args:
+            userinfo: An object representing the user given by the OIDC provider
+        Returns:
+            remote user id
+        """
+        remote_user_id = self._user_mapping_provider.get_remote_user_id(userinfo)
+        # Some OIDC providers use integer IDs, but Synapse expects external IDs
+        # to be strings.
+        return str(remote_user_id)
+
 UserAttributeDict = TypedDict(
     "UserAttributeDict", {"localpart": str, "display_name": Optional[str]}
View file
@@ -18,7 +18,6 @@ from typing import List, Tuple
 from synapse.appservice import ApplicationService
 from synapse.handlers._base import BaseHandler
 from synapse.types import JsonDict, ReadReceipt, get_domain_from_id
-from synapse.util.async_helpers import maybe_awaitable
 
 logger = logging.getLogger(__name__)
@@ -98,11 +97,9 @@ class ReceiptsHandler(BaseHandler):
         self.notifier.on_new_event("receipt_key", max_batch_id, rooms=affected_room_ids)
 
         # Note that the min here shouldn't be relied upon to be accurate.
-        await maybe_awaitable(
-            self.hs.get_pusherpool().on_new_receipts(
-                min_batch_id, max_batch_id, affected_room_ids
-            )
-        )
+        await self.hs.get_pusherpool().on_new_receipts(
+            min_batch_id, max_batch_id, affected_room_ids
+        )
 
         return True
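The hunk above drops `maybe_awaitable`, which existed so callers could treat `on_new_receipts` uniformly while some implementations were still synchronous; once the callee is always a coroutine, a plain `await` suffices. A minimal sketch of such a helper (an illustration, not Synapse's exact implementation; the callee names are hypothetical):

```python
import asyncio
import inspect


async def maybe_awaitable(value):
    # Await `value` if it is awaitable, otherwise pass it through unchanged.
    if inspect.isawaitable(value):
        return await value
    return value


def sync_on_new_receipts(min_id, max_id):
    # Old-style synchronous callee.
    return max_id - min_id


async def async_on_new_receipts(min_id, max_id):
    # New-style coroutine callee: `await maybe_awaitable(...)` still works,
    # but a bare `await` is now enough, which is what the diff switches to.
    return max_id - min_id


async def main():
    a = await maybe_awaitable(sync_on_new_receipts(1, 3))
    b = await maybe_awaitable(async_on_new_receipts(1, 3))
    return a, b


print(asyncio.run(main()))  # → (2, 2)
```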
View file
@@ -187,7 +187,7 @@ class RegistrationHandler(BaseHandler):
         """
         self.check_registration_ratelimit(address)
 
-        result = self.spam_checker.check_registration_for_spam(
+        result = await self.spam_checker.check_registration_for_spam(
             threepid, localpart, user_agent_ips or [],
         )
View file
@@ -358,7 +358,7 @@ class RoomCreationHandler(BaseHandler):
         """
         user_id = requester.user.to_string()
 
-        if not self.spam_checker.user_may_create_room(user_id):
+        if not await self.spam_checker.user_may_create_room(user_id):
             raise SynapseError(403, "You are not permitted to create rooms")
 
         creation_content = {
@@ -440,6 +440,7 @@ class RoomCreationHandler(BaseHandler):
             invite_list=[],
             initial_state=initial_state,
             creation_content=creation_content,
+            ratelimit=False,
         )
 
         # Transfer membership events
@@ -608,7 +609,7 @@ class RoomCreationHandler(BaseHandler):
                 403, "You are not permitted to create rooms", Codes.FORBIDDEN
             )
 
-        if not is_requester_admin and not self.spam_checker.user_may_create_room(
+        if not is_requester_admin and not await self.spam_checker.user_may_create_room(
             user_id
         ):
             raise SynapseError(403, "You are not permitted to create rooms")
@@ -735,6 +736,7 @@ class RoomCreationHandler(BaseHandler):
             room_alias=room_alias,
             power_level_content_override=power_level_content_override,
             creator_join_profile=creator_join_profile,
+            ratelimit=ratelimit,
         )
 
         if "name" in config:
@@ -838,6 +840,7 @@ class RoomCreationHandler(BaseHandler):
         room_alias: Optional[RoomAlias] = None,
         power_level_content_override: Optional[JsonDict] = None,
         creator_join_profile: Optional[JsonDict] = None,
+        ratelimit: bool = True,
     ) -> int:
         """Sends the initial events into a new room.
@@ -884,7 +887,7 @@ class RoomCreationHandler(BaseHandler):
             creator.user,
             room_id,
             "join",
-            ratelimit=False,
+            ratelimit=ratelimit,
             content=creator_join_profile,
         )
View file
@@ -204,7 +204,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         # Only rate-limit if the user actually joined the room, otherwise we'll end
         # up blocking profile updates.
-        if newly_joined:
+        if newly_joined and ratelimit:
             time_now_s = self.clock.time()
             (
                 allowed,
@@ -428,7 +428,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             )
             block_invite = True
 
-        if not self.spam_checker.user_may_invite(
+        if not await self.spam_checker.user_may_invite(
             requester.user.to_string(), target.to_string(), room_id
         ):
             logger.info("Blocking invite due to spam checker")
@@ -508,11 +508,14 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
                 raise AuthError(403, "Guest access not allowed")
 
         if not is_host_in_room:
+            if ratelimit:
                time_now_s = self.clock.time()
                (
                    allowed,
                    time_allowed,
-                ) = self._join_rate_limiter_remote.can_requester_do_action(requester,)
+                ) = self._join_rate_limiter_remote.can_requester_do_action(
+                    requester,
+                )
 
                if not allowed:
                    raise LimitExceededError(
View file
@@ -34,7 +34,6 @@ from synapse.types import (
     map_username_to_mxid_localpart,
     mxid_localpart_allowed_characters,
 )
-from synapse.util.async_helpers import Linearizer
 from synapse.util.iterutils import chunk_seq
 
 if TYPE_CHECKING:
@@ -81,9 +80,6 @@ class SamlHandler(BaseHandler):
         # a map from saml session id to Saml2SessionData object
         self._outstanding_requests_dict = {}  # type: Dict[str, Saml2SessionData]
 
-        # a lock on the mappings
-        self._mapping_lock = Linearizer(name="saml_mapping", clock=self.clock)
-
         self._sso_handler = hs.get_sso_handler()
 
     def handle_redirect_request(
@@ -183,6 +179,24 @@ class SamlHandler(BaseHandler):
             saml2_auth.in_response_to, None
         )
 
+        # first check if we're doing a UIA
+        if current_session and current_session.ui_auth_session_id:
+            try:
+                remote_user_id = self._remote_id_from_saml_response(saml2_auth, None)
+            except MappingException as e:
+                logger.exception("Failed to extract remote user id from SAML response")
+                self._sso_handler.render_error(request, "mapping_error", str(e))
+                return
+
+            return await self._sso_handler.complete_sso_ui_auth_request(
+                self._auth_provider_id,
+                remote_user_id,
+                current_session.ui_auth_session_id,
+                request,
+            )
+
+        # otherwise, we're handling a login request.
+
         # Ensure that the attributes of the logged in user meet the required
         # attributes.
         for requirement in self._saml2_attribute_requirements:
@@ -206,13 +220,6 @@ class SamlHandler(BaseHandler):
             self._sso_handler.render_error(request, "mapping_error", str(e))
             return
 
-        # Complete the interactive auth session or the login.
-        if current_session and current_session.ui_auth_session_id:
-            await self._auth_handler.complete_sso_ui_auth(
-                user_id, current_session.ui_auth_session_id, request
-            )
-        else:
-            await self._auth_handler.complete_sso_login(user_id, request, relay_state)
+        await self._auth_handler.complete_sso_login(user_id, request, relay_state)
 
     async def _map_saml_response_to_user(
@@ -239,16 +246,10 @@ class SamlHandler(BaseHandler):
             RedirectException: some mapping providers may raise this if they need
                 to redirect to an interstitial page.
         """
-        remote_user_id = self._user_mapping_provider.get_remote_user_id(
+        remote_user_id = self._remote_id_from_saml_response(
             saml2_auth, client_redirect_url
         )
 
-        if not remote_user_id:
-            raise MappingException(
-                "Failed to extract remote user id from SAML response"
-            )
-
         async def saml_response_to_remapped_user_attributes(
             failures: int,
         ) -> UserAttributes:
@@ -294,7 +295,6 @@ class SamlHandler(BaseHandler):
             return None
 
-        with (await self._mapping_lock.queue(self._auth_provider_id)):
         return await self._sso_handler.get_mxid_from_sso(
             self._auth_provider_id,
             remote_user_id,
@@ -304,6 +304,35 @@ class SamlHandler(BaseHandler):
             grandfather_existing_users,
         )
 
+    def _remote_id_from_saml_response(
+        self,
+        saml2_auth: saml2.response.AuthnResponse,
+        client_redirect_url: Optional[str],
+    ) -> str:
+        """Extract the unique remote id from a SAML2 AuthnResponse
+
+        Args:
+            saml2_auth: The parsed SAML2 response.
+            client_redirect_url: The redirect URL passed in by the client.
+
+        Returns:
+            remote user id
+
+        Raises:
+            MappingException if there was an error extracting the user id
+        """
+        # It's not obvious why we need to pass in the redirect URI to the mapping
+        # provider, but we do :/
+        remote_user_id = self._user_mapping_provider.get_remote_user_id(
+            saml2_auth, client_redirect_url
+        )
+
+        if not remote_user_id:
+            raise MappingException(
+                "Failed to extract remote user id from SAML response"
+            )
+
+        return remote_user_id
+
     def expire_sessions(self):
         expire_before = self.clock.time_msec() - self._saml2_session_lifetime
         to_expire = set()
View file
@@ -17,10 +17,12 @@ from typing import TYPE_CHECKING, Awaitable, Callable, List, Optional
 
 import attr
 
+from twisted.web.http import Request
+
 from synapse.api.errors import RedirectException
-from synapse.handlers._base import BaseHandler
 from synapse.http.server import respond_with_html
 from synapse.types import UserID, contains_invalid_mxid_characters
+from synapse.util.async_helpers import Linearizer
 
 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -42,14 +44,19 @@ class UserAttributes:
     emails = attr.ib(type=List[str], default=attr.Factory(list))
 
 
-class SsoHandler(BaseHandler):
+class SsoHandler:
     # The number of attempts to ask the mapping provider for when generating an MXID.
     _MAP_USERNAME_RETRIES = 1000
 
     def __init__(self, hs: "HomeServer"):
-        super().__init__(hs)
+        self._store = hs.get_datastore()
+        self._server_name = hs.hostname
         self._registration_handler = hs.get_registration_handler()
         self._error_template = hs.config.sso_error_template
+        self._auth_handler = hs.get_auth_handler()
+
+        # a lock on the mappings
+        self._mapping_lock = Linearizer(name="sso_user_mapping", clock=hs.get_clock())
 
     def render_error(
         self, request, error: str, error_description: Optional[str] = None
@@ -95,7 +102,7 @@ class SsoHandler(BaseHandler):
         )
 
         # Check if we already have a mapping for this user.
-        previously_registered_user_id = await self.store.get_user_by_external_id(
+        previously_registered_user_id = await self._store.get_user_by_external_id(
             auth_provider_id, remote_user_id,
         )
 
@@ -169,6 +176,10 @@ class SsoHandler(BaseHandler):
                 to an additional page. (e.g. to prompt for more information)
         """
+        # grab a lock while we try to find a mapping for this user. This seems...
+        # optimistic, especially for implementations that end up redirecting to
+        # interstitial pages.
+        with await self._mapping_lock.queue(auth_provider_id):
             # first of all, check if we already have a mapping for this user
             previously_registered_user_id = await self.get_sso_user_by_remote_user_id(
                 auth_provider_id, remote_user_id,
@@ -181,12 +192,22 @@ class SsoHandler(BaseHandler):
             previously_registered_user_id = await grandfather_existing_users()
             if previously_registered_user_id:
                 # Future logins should also match this user ID.
-                await self.store.record_user_external_id(
+                await self._store.record_user_external_id(
                     auth_provider_id, remote_user_id, previously_registered_user_id
                 )
                 return previously_registered_user_id
 
             # Otherwise, generate a new user.
+            attributes = await self._call_attribute_mapper(sso_to_matrix_id_mapper)
+            user_id = await self._register_mapped_user(
+                attributes, auth_provider_id, remote_user_id, user_agent, ip_address,
+            )
+            return user_id
+
+    async def _call_attribute_mapper(
+        self, sso_to_matrix_id_mapper: Callable[[int], Awaitable[UserAttributes]],
+    ) -> UserAttributes:
+        """Call the attribute mapper function in a loop, until we get a unique userid"""
         for i in range(self._MAP_USERNAME_RETRIES):
             try:
                 attributes = await sso_to_matrix_id_mapper(i)
@@ -214,8 +235,8 @@ class SsoHandler(BaseHandler):
             )
 
             # Check if this mxid already exists
-            user_id = UserID(attributes.localpart, self.server_name).to_string()
-            if not await self.store.get_users_by_id_case_insensitive(user_id):
+            user_id = UserID(attributes.localpart, self._server_name).to_string()
+            if not await self._store.get_users_by_id_case_insensitive(user_id):
                 # This mxid is free
                 break
         else:
@@ -224,7 +245,16 @@ class SsoHandler(BaseHandler):
             raise MappingException(
                 "Unable to generate a Matrix ID from the SSO response"
             )
+        return attributes
 
+    async def _register_mapped_user(
+        self,
+        attributes: UserAttributes,
+        auth_provider_id: str,
+        remote_user_id: str,
+        user_agent: str,
+        ip_address: str,
+    ) -> str:
         # Since the localpart is provided via a potentially untrusted module,
         # ensure the MXID is valid before registering.
         if contains_invalid_mxid_characters(attributes.localpart):
@@ -238,7 +268,47 @@ class SsoHandler(BaseHandler):
             user_agent_ips=[(user_agent, ip_address)],
         )
 
-        await self.store.record_user_external_id(
+        await self._store.record_user_external_id(
             auth_provider_id, remote_user_id, registered_user_id
         )
         return registered_user_id
+
+    async def complete_sso_ui_auth_request(
+        self,
+        auth_provider_id: str,
+        remote_user_id: str,
+        ui_auth_session_id: str,
+        request: Request,
+    ) -> None:
+        """
+        Given an SSO ID, retrieve the user ID for it and complete UIA.
+
+        Note that this requires that the user is mapped in the "user_external_ids"
+        table. This will be the case if they have ever logged in via SAML or OIDC in
+        recentish synapse versions, but may not be for older users.
+
+        Args:
+            auth_provider_id: A unique identifier for this SSO provider, e.g.
+                "oidc" or "saml".
+            remote_user_id: The unique identifier from the SSO provider.
+            ui_auth_session_id: The ID of the user-interactive auth session.
+            request: The request to complete.
+        """
+        user_id = await self.get_sso_user_by_remote_user_id(
+            auth_provider_id, remote_user_id,
+        )
+        if not user_id:
+            logger.warning(
+                "Remote user %s/%s has not previously logged in here: UIA will fail",
+                auth_provider_id,
+                remote_user_id,
+            )
+            # Let the UIA flow handle this the same as if they presented creds for a
+            # different user.
+            user_id = ""
+
+        await self._auth_handler.complete_sso_ui_auth(
+            user_id, ui_auth_session_id, request
+        )
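The `_call_attribute_mapper` loop above retries the mapping provider with an increasing attempt counter until it yields an unused localpart, giving up after `_MAP_USERNAME_RETRIES`. A simplified sketch of that loop; the stand-in provider and the set of taken names are invented for illustration:

```python
import asyncio

MAP_USERNAME_RETRIES = 1000
taken_localparts = {"alice", "alice1"}  # hypothetical already-registered users


async def mapper(attempt: int) -> str:
    # Stand-in mapping provider: suffix the attempt number on retries.
    return "alice" if attempt == 0 else "alice%d" % attempt


async def generate_localpart() -> str:
    for i in range(MAP_USERNAME_RETRIES):
        localpart = await mapper(i)
        if localpart not in taken_localparts:
            # This localpart is free
            return localpart
    # Loop exhausted without finding a free name.
    raise RuntimeError("Unable to generate a unique localpart")


print(asyncio.run(generate_localpart()))  # → alice2
```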
View file
@@ -81,11 +81,11 @@ class UserDirectoryHandler(StateDeltasHandler):
         results = await self.store.search_user_dir(user_id, search_term, limit)
 
         # Remove any spammy users from the results.
-        results["results"] = [
-            user
-            for user in results["results"]
-            if not self.spam_checker.check_username_for_spam(user)
-        ]
+        non_spammy_users = []
+        for user in results["results"]:
+            if not await self.spam_checker.check_username_for_spam(user):
+                non_spammy_users.append(user)
+        results["results"] = non_spammy_users
 
         return results
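Because `check_username_for_spam` is now a coroutine, the hunk above replaces the list comprehension with an explicit loop that can `await` each check. The shape of that change, sketched with a stand-in checker (the spam rule here is invented for illustration):

```python
import asyncio


async def check_username_for_spam(user: str) -> bool:
    # Stand-in for the spam checker; real implementations may do I/O.
    return user.startswith("spam")


async def filter_results(users):
    # Awaiting a predicate inside a plain comprehension needs an async
    # comprehension; the explicit loop is the form the diff chooses.
    non_spammy_users = []
    for user in users:
        if not await check_username_for_spam(user):
            non_spammy_users.append(user)
    return non_spammy_users


print(asyncio.run(filter_results(["@alice:hs", "spam-bot", "@bob:hs"])))
```

On Python 3.6+ an async comprehension (`[u for u in users if not await check_username_for_spam(u)]` inside an `async def`) would behave the same; the loop form is simply more explicit.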
View file
@@ -125,7 +125,7 @@ def _make_scheduler(reactor):
     return _scheduler
 
 
-class IPBlacklistingResolver:
+class _IPBlacklistingResolver:
     """
     A proxy for reactor.nameResolver which only produces non-blacklisted IP
     addresses, preventing DNS rebinding attacks on URL preview.
@@ -199,6 +199,35 @@ class IPBlacklistingResolver:
         return r
 
 
+@implementer(IReactorPluggableNameResolver)
+class BlacklistingReactorWrapper:
+    """
+    A Reactor wrapper which will prevent DNS resolution to blacklisted IP
+    addresses, to prevent DNS rebinding.
+    """
+
+    def __init__(
+        self,
+        reactor: IReactorPluggableNameResolver,
+        ip_whitelist: Optional[IPSet],
+        ip_blacklist: IPSet,
+    ):
+        self._reactor = reactor
+
+        # We need to use a DNS resolver which filters out blacklisted IP
+        # addresses, to prevent DNS rebinding.
+        self._nameResolver = _IPBlacklistingResolver(
+            self._reactor, ip_whitelist, ip_blacklist
+        )
+
+    def __getattr__(self, attr: str) -> Any:
+        # Passthrough to the real reactor except for the DNS resolver.
+        if attr == "nameResolver":
+            return self._nameResolver
+        else:
+            return getattr(self._reactor, attr)
+
+
 class BlacklistingAgentWrapper(Agent):
     """
     An Agent wrapper which will prevent access to IP addresses being accessed
@@ -292,22 +321,11 @@ class SimpleHttpClient:
             self.user_agent = self.user_agent.encode("ascii")
 
         if self._ip_blacklist:
-            real_reactor = hs.get_reactor()
             # If we have an IP blacklist, we need to use a DNS resolver which
             # filters out blacklisted IP addresses, to prevent DNS rebinding.
-            nameResolver = IPBlacklistingResolver(
-                real_reactor, self._ip_whitelist, self._ip_blacklist
+            self.reactor = BlacklistingReactorWrapper(
+                hs.get_reactor(), self._ip_whitelist, self._ip_blacklist
             )
-
-            @implementer(IReactorPluggableNameResolver)
-            class Reactor:
-                def __getattr__(_self, attr):
-                    if attr == "nameResolver":
-                        return nameResolver
-                    else:
-                        return getattr(real_reactor, attr)
-
-            self.reactor = Reactor()
         else:
             self.reactor = hs.get_reactor()
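`BlacklistingReactorWrapper` relies on `__getattr__` to forward everything except `nameResolver` to the real reactor, replacing the ad-hoc inner class the hunk removes. The delegation pattern in isolation (class and attribute names here are illustrative, not Twisted's):

```python
class OverridingProxy:
    # Forward all attribute access to `target`, except one overridden
    # attribute. `__getattr__` is only consulted when normal lookup fails,
    # so the proxy's own fields resolve without recursion.
    def __init__(self, target, override_name, override_value):
        self._target = target
        self._override_name = override_name
        self._override_value = override_value

    def __getattr__(self, attr):
        if attr == self._override_name:
            return self._override_value
        return getattr(self._target, attr)


class FakeReactor:
    name_resolver = "plain resolver"
    seconds = staticmethod(lambda: 0.0)


proxy = OverridingProxy(FakeReactor(), "name_resolver", "blacklisting resolver")
print(proxy.name_resolver)  # → blacklisting resolver
print(proxy.seconds())      # → 0.0
```

Promoting the proxy to a named, `@implementer`-decorated class lets other call sites (such as the federation client) reuse the same wrapper instead of re-creating it inline.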
View file
@@ -16,7 +16,7 @@ import logging
 import urllib.parse
 from typing import List, Optional
 
-from netaddr import AddrFormatError, IPAddress
+from netaddr import AddrFormatError, IPAddress, IPSet
 from zope.interface import implementer
 
 from twisted.internet import defer
@@ -31,6 +31,7 @@ from twisted.web.http_headers import Headers
 from twisted.web.iweb import IAgent, IAgentEndpointFactory, IBodyProducer
 
 from synapse.crypto.context_factory import FederationPolicyForHTTPS
+from synapse.http.client import BlacklistingAgentWrapper
 from synapse.http.federation.srv_resolver import Server, SrvResolver
 from synapse.http.federation.well_known_resolver import WellKnownResolver
 from synapse.logging.context import make_deferred_yieldable, run_in_background
@@ -70,6 +71,7 @@ class MatrixFederationAgent:
         reactor: IReactorCore,
         tls_client_options_factory: Optional[FederationPolicyForHTTPS],
         user_agent: bytes,
+        ip_blacklist: IPSet,
         _srv_resolver: Optional[SrvResolver] = None,
         _well_known_resolver: Optional[WellKnownResolver] = None,
     ):
@@ -90,13 +92,19 @@ class MatrixFederationAgent:
         self.user_agent = user_agent
 
         if _well_known_resolver is None:
+            # Note that the name resolver has already been wrapped in a
+            # IPBlacklistingResolver by MatrixFederationHttpClient.
             _well_known_resolver = WellKnownResolver(
                 self._reactor,
-                agent=Agent(
+                agent=BlacklistingAgentWrapper(
+                    Agent(
                         self._reactor,
                         pool=self._pool,
                         contextFactory=tls_client_options_factory,
                     ),
+                    self._reactor,
+                    ip_blacklist=ip_blacklist,
+                ),
                 user_agent=self.user_agent,
             )