Mirror of https://github.com/element-hq/synapse (synced 2024-10-02 12:42:41 +00:00)

Commit 1ede511299: Merge branch 'develop' into register-email-3pid-race

123 changed files with 6869 additions and 2251 deletions
.github/workflows/docs.yaml (30 lines changed, vendored)

@@ -85,3 +85,33 @@ jobs:
           github_token: ${{ secrets.GITHUB_TOKEN }}
           publish_dir: ./book
           destination_dir: ./${{ needs.pre.outputs.branch-version }}
+
+  ################################################################################
+  pages-devdocs:
+    name: GitHub Pages (developer docs)
+    runs-on: ubuntu-latest
+    needs:
+      - pre
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: "Set up Sphinx"
+        uses: matrix-org/setup-python-poetry@v1
+        with:
+          python-version: "3.x"
+          poetry-version: "1.3.2"
+          groups: "dev-docs"
+          extras: ""
+
+      - name: Build the documentation
+        run: |
+          cd dev-docs
+          poetry run make html
+
+      # Deploy to the target directory.
+      - name: Deploy to gh pages
+        uses: peaceiris/actions-gh-pages@4f9cc6602d3f66b9c108549d475ec49e8ef4d45e # v4.0.0
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+          publish_dir: ./dev-docs/_build/html
+          destination_dir: ./dev-docs/${{ needs.pre.outputs.branch-version }}
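The new `pages-devdocs` job publishes to a per-branch directory via `./dev-docs/${{ needs.pre.outputs.branch-version }}`. As a rough sketch of how a branch-version value like that can be derived from a git ref (the ref names here are hypothetical, and the workflow's actual `pre` job may compute its output differently):

```shell
# Hedged sketch: derive a versioned docs destination directory from a git ref.
# The ref names are made up; Synapse's real `pre` job may differ.
ref="refs/heads/release-v1.108"
branch="${ref#refs/heads/}"   # strip the refs/heads/ prefix -> release-v1.108
version="${branch#release-}"  # strip a release- prefix if present -> v1.108
echo "would deploy developer docs to ./dev-docs/${version}"
# -> would deploy developer docs to ./dev-docs/v1.108
```

For a non-release branch such as `develop`, the `release-` prefix strip is a no-op, so the directory would simply be `./dev-docs/develop`.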
CHANGES.md (118 lines changed)

@@ -1,3 +1,121 @@

# Synapse 1.108.0rc1 (2024-05-21)

### Features

- Add a feature that allows clients to query the configured federation whitelist. Disabled by default. ([\#16848](https://github.com/element-hq/synapse/issues/16848), [\#17199](https://github.com/element-hq/synapse/issues/17199))
- Add the ability to allow numeric user IDs with a specific prefix when in the CAS flow. Contributed by Aurélien Grimpard. ([\#17098](https://github.com/element-hq/synapse/issues/17098))

### Bugfixes

- Fix bug where push rules would be empty in `/sync` for some accounts. Introduced in v1.93.0. ([\#17142](https://github.com/element-hq/synapse/issues/17142))
- Add support for optional whitespace around the Federation API's `Authorization` header's parameter commas. ([\#17145](https://github.com/element-hq/synapse/issues/17145))
- Fix bug where disabling room publication prevented public rooms being created on workers. ([\#17177](https://github.com/element-hq/synapse/issues/17177), [\#17184](https://github.com/element-hq/synapse/issues/17184))
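The `Authorization` header fix above (#17145) is about splitting the header's `X-Matrix` parameters on commas while tolerating optional surrounding whitespace. A minimal shell illustration of that kind of tolerant split, using a made-up header value (Synapse's actual parser is Python and lives server-side):

```shell
# Made-up X-Matrix header value; note the inconsistent spacing around commas.
header='X-Matrix origin=origin.example.com , key="ed25519:1",sig="ABCDEF"'
# Split parameters on commas, allowing optional whitespace on either side
# (uses GNU sed's \n in the replacement).
printf '%s\n' "${header#X-Matrix }" | sed 's/[[:space:]]*,[[:space:]]*/\n/g'
```

This prints one parameter per line (`origin=...`, `key=...`, `sig=...`) regardless of whether the commas were padded with spaces.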
### Improved Documentation

- Document [`/v1/make_knock`](https://spec.matrix.org/v1.10/server-server-api/#get_matrixfederationv1make_knockroomiduserid) and [`/v1/send_knock/`](https://spec.matrix.org/v1.10/server-server-api/#put_matrixfederationv1send_knockroomideventid) federation endpoints as worker-compatible. ([\#17058](https://github.com/element-hq/synapse/issues/17058))
- Update User Admin API with note about prefixing OIDC `external_id` providers. ([\#17139](https://github.com/element-hq/synapse/issues/17139))
- Clarify the state of the created room when using the `autocreate_auto_join_room_preset` config option. ([\#17150](https://github.com/element-hq/synapse/issues/17150))
- Update the Admin FAQ with the current libjemalloc version for the latest Debian stable. Additionally, update the name of the "push_rules" stream in the Workers documentation. ([\#17171](https://github.com/element-hq/synapse/issues/17171))

### Internal Changes

- Add note to reflect that [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886) is closed but will remain supported for some time. ([\#17151](https://github.com/element-hq/synapse/issues/17151))
- Update dependency PyO3 to 0.21. ([\#17162](https://github.com/element-hq/synapse/issues/17162))
- Fix linter errors found in PR #17147. ([\#17166](https://github.com/element-hq/synapse/issues/17166))
- Bump black from 24.2.0 to 24.4.2. ([\#17170](https://github.com/element-hq/synapse/issues/17170))
- Cache literal sync filter validation for performance. ([\#17186](https://github.com/element-hq/synapse/issues/17186))
- Improve performance by fixing a reactor pause. ([\#17192](https://github.com/element-hq/synapse/issues/17192))
- Route `/make_knock` and `/send_knock` federation APIs to the federation reader worker in Complement test runs. ([\#17195](https://github.com/element-hq/synapse/issues/17195))
- Prepare sync handler to be able to return different sync responses (`SyncVersion`). ([\#17200](https://github.com/element-hq/synapse/issues/17200))
- Organize the sync cache key parameter outside of the sync config (separate concerns). ([\#17201](https://github.com/element-hq/synapse/issues/17201))
- Refactor `SyncResultBuilder` assembly into its own function. ([\#17202](https://github.com/element-hq/synapse/issues/17202))
- Rename `joined_rooms` to `joined_room_ids` for clarity. ([\#17203](https://github.com/element-hq/synapse/issues/17203), [\#17208](https://github.com/element-hq/synapse/issues/17208))
- Add a short pause when rate-limiting a request. ([\#17210](https://github.com/element-hq/synapse/issues/17210))

### Updates to locked dependencies

* Bump cryptography from 42.0.5 to 42.0.7. ([\#17180](https://github.com/element-hq/synapse/issues/17180))
* Bump gitpython from 3.1.41 to 3.1.43. ([\#17181](https://github.com/element-hq/synapse/issues/17181))
* Bump immutabledict from 4.1.0 to 4.2.0. ([\#17179](https://github.com/element-hq/synapse/issues/17179))
* Bump sentry-sdk from 1.40.3 to 2.1.1. ([\#17178](https://github.com/element-hq/synapse/issues/17178))
* Bump serde from 1.0.200 to 1.0.201. ([\#17183](https://github.com/element-hq/synapse/issues/17183))
* Bump serde_json from 1.0.116 to 1.0.117. ([\#17182](https://github.com/element-hq/synapse/issues/17182))

# Synapse 1.107.0 (2024-05-14)

No significant changes since 1.107.0rc1.

# Synapse 1.107.0rc1 (2024-05-07)

### Features

- Add preliminary support for [MSC3823: Account Suspension](https://github.com/matrix-org/matrix-spec-proposals/pull/3823). ([\#17051](https://github.com/element-hq/synapse/issues/17051))
- Declare support for [Matrix v1.10](https://matrix.org/blog/2024/03/22/matrix-v1.10-release/). Contributed by @clokep. ([\#17082](https://github.com/element-hq/synapse/issues/17082))
- Add support for [MSC4115: membership metadata on events](https://github.com/matrix-org/matrix-spec-proposals/pull/4115). ([\#17104](https://github.com/element-hq/synapse/issues/17104), [\#17137](https://github.com/element-hq/synapse/issues/17137))

### Bugfixes

- Fix the search feature of Element Android on homeservers using SQLite by returning search terms as search highlights. ([\#17000](https://github.com/element-hq/synapse/issues/17000))
- Fix a bug introduced in v1.52.0 where the `destination` query parameter for the [Destination Rooms Admin API](https://element-hq.github.io/synapse/v1.105/usage/administration/admin_api/federation.html#destination-rooms) failed to actually filter returned rooms. ([\#17077](https://github.com/element-hq/synapse/issues/17077))
- For MSC3266 room summaries, support queries at the recommended endpoint of `/_matrix/client/unstable/im.nheko.summary/summary/{roomIdOrAlias}`. The existing endpoint of `/_matrix/client/unstable/im.nheko.summary/rooms/{roomIdOrAlias}/summary` is deprecated. ([\#17078](https://github.com/element-hq/synapse/issues/17078))
- Apply user email & picture during OIDC registration if present & selected. ([\#17120](https://github.com/element-hq/synapse/issues/17120))
- Improve error message for cross-signing reset with [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861) enabled. ([\#17121](https://github.com/element-hq/synapse/issues/17121))
- Fix a bug which meant that to-device messages received over federation could be dropped when the server was under load, or when networking issues disrupted communication between Synapse processes or with the database. ([\#17127](https://github.com/element-hq/synapse/issues/17127))
- Fix bug where `StreamChangeCache` would not respect configured cache factors. ([\#17152](https://github.com/element-hq/synapse/issues/17152))

### Updates to the Docker image

- Correct licensing metadata on the Docker image. ([\#17141](https://github.com/element-hq/synapse/issues/17141))

### Improved Documentation

- Update the `event_cache_size` and `global_factor` configuration options' documentation. ([\#17071](https://github.com/element-hq/synapse/issues/17071))
- Remove broken Sphinx docs. ([\#17073](https://github.com/element-hq/synapse/issues/17073), [\#17148](https://github.com/element-hq/synapse/issues/17148))
- Add `RuntimeDirectory` to the example `matrix-synapse.service` systemd unit. ([\#17084](https://github.com/element-hq/synapse/issues/17084))
- Fix various small typos throughout the docs. ([\#17114](https://github.com/element-hq/synapse/issues/17114))
- Update `enable_notifs` configuration documentation. ([\#17116](https://github.com/element-hq/synapse/issues/17116))
- Update the Upgrade Notes with the latest minimum supported Rust version of 1.66.0. Contributed by @jahway603. ([\#17140](https://github.com/element-hq/synapse/issues/17140))

### Internal Changes

- Enable [MSC3266](https://github.com/matrix-org/matrix-spec-proposals/pull/3266) by default in the Synapse Complement image. ([\#17105](https://github.com/element-hq/synapse/issues/17105))
- Add optimisation to `StreamChangeCache.get_entities_changed(..)`. ([\#17130](https://github.com/element-hq/synapse/issues/17130))

### Updates to locked dependencies

* Bump furo from 2024.1.29 to 2024.4.27. ([\#17133](https://github.com/element-hq/synapse/issues/17133))
* Bump idna from 3.6 to 3.7. ([\#17136](https://github.com/element-hq/synapse/issues/17136))
* Bump jsonschema from 4.21.1 to 4.22.0. ([\#17157](https://github.com/element-hq/synapse/issues/17157))
* Bump lxml from 5.1.0 to 5.2.1. ([\#17158](https://github.com/element-hq/synapse/issues/17158))
* Bump phonenumbers from 8.13.29 to 8.13.35. ([\#17106](https://github.com/element-hq/synapse/issues/17106))
* Bump pillow from 10.2.0 to 10.3.0. ([\#17146](https://github.com/element-hq/synapse/issues/17146))
* Bump pydantic from 2.6.4 to 2.7.0. ([\#17107](https://github.com/element-hq/synapse/issues/17107))
* Bump pydantic from 2.7.0 to 2.7.1. ([\#17160](https://github.com/element-hq/synapse/issues/17160))
* Bump pyicu from 2.12 to 2.13. ([\#17109](https://github.com/element-hq/synapse/issues/17109))
* Bump serde from 1.0.197 to 1.0.198. ([\#17111](https://github.com/element-hq/synapse/issues/17111))
* Bump serde from 1.0.198 to 1.0.199. ([\#17132](https://github.com/element-hq/synapse/issues/17132))
* Bump serde from 1.0.199 to 1.0.200. ([\#17161](https://github.com/element-hq/synapse/issues/17161))
* Bump serde_json from 1.0.115 to 1.0.116. ([\#17112](https://github.com/element-hq/synapse/issues/17112))
* Bump tornado from 6.2 to 6.4. ([\#17131](https://github.com/element-hq/synapse/issues/17131))
* Bump twisted from 23.10.0 to 24.3.0. ([\#17135](https://github.com/element-hq/synapse/issues/17135))
* Bump types-bleach from 6.1.0.1 to 6.1.0.20240331. ([\#17110](https://github.com/element-hq/synapse/issues/17110))
* Bump types-pillow from 10.2.0.20240415 to 10.2.0.20240423. ([\#17159](https://github.com/element-hq/synapse/issues/17159))
* Bump types-setuptools from 69.0.0.20240125 to 69.5.0.20240423. ([\#17134](https://github.com/element-hq/synapse/issues/17134))

# Synapse 1.106.0 (2024-04-30)

No significant changes since 1.106.0rc1.

# Synapse 1.106.0rc1 (2024-04-25)

### Features
Cargo.lock (215 lines changed, generated)

Dependency version bumps recorded in the lockfile (the corresponding `source` and `checksum` lines are updated to match; unchanged context entries are omitted):

aho-corasick        1.0.2   -> 1.1.3
anyhow              1.0.82  -> 1.0.86
arc-swap            1.5.1   -> 1.7.1
autocfg             1.1.0   -> 1.3.0
bitflags            1.3.2   -> 2.5.0
block-buffer        0.10.3  -> 0.10.4
generic-array       0.14.6  -> 0.14.7
getrandom           0.2.14  -> 0.2.15
indoc               2.0.4   -> 2.0.5
itoa                1.0.4   -> 1.0.11
libc                0.2.153 -> 0.2.154
lock_api            0.4.9   -> 0.4.12
memchr              2.6.3   -> 2.7.2
memoffset           0.9.0   -> 0.9.1
once_cell           1.15.0  -> 1.19.0
parking_lot         0.12.1  -> 0.12.2
parking_lot_core    0.9.3   -> 0.9.10  (dependency "windows-sys" replaced by "windows-targets")
proc-macro2         1.0.76  -> 1.0.82
pyo3                0.20.3  -> 0.21.2
pyo3-build-config   0.20.3  -> 0.21.2
pyo3-ffi            0.20.3  -> 0.21.2
pyo3-log            0.9.0   -> 0.10.0
pyo3-macros         0.20.3  -> 0.21.2
pyo3-macros-backend 0.20.3  -> 0.21.2
pythonize           0.20.0  -> 0.21.1
quote               1.0.35  -> 1.0.36
redox_syscall       0.2.16  -> 0.5.1
regex-automata      0.4.4   -> 0.4.6
regex-syntax        0.8.2   -> 0.8.3
ryu                 1.0.11  -> 1.0.18
scopeguard          1.1.0   -> 1.2.0
serde               1.0.198 -> 1.0.202
serde_derive        1.0.198 -> 1.0.202
serde_json          1.0.116 -> 1.0.117
sha1                0.10.5  -> 0.10.6
smallvec            1.10.0  -> 1.13.2
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "subtle"
|
name = "subtle"
|
||||||
version = "2.4.1"
|
version = "2.5.0"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "6bdef32e8150c2a081110b42772ffe7d7c9032b606bc226c8260fd97e0976601"
|
checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc"
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "syn"
|
name = "syn"
|
||||||
version = "2.0.48"
|
version = "2.0.61"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "0f3531638e407dfc0814761abb7c00a5b54992b849452a0646b7f65c9f770f3f"
|
checksum = "c993ed8ccba56ae856363b1845da7266a7cb78e1d146c8a32d54b45a8b831fc9"
|
||||||
dependencies = [
|
dependencies = [
|
||||||
"proc-macro2",
|
"proc-macro2",
|
||||||
"quote",
|
"quote",
|
||||||
|
@ -585,15 +585,15 @@ dependencies = [
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "target-lexicon"
|
name = "target-lexicon"
|
||||||
version = "0.12.4"
|
version = "0.12.14"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "c02424087780c9b71cc96799eaeddff35af2bc513278cda5c99fc1f5d026d3c1"
|
checksum = "e1fc403891a21bcfb7c37834ba66a547a8f402146eba7265b5a6d88059c9ff2f"
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "typenum"
|
name = "typenum"
|
||||||
version = "1.15.0"
|
version = "1.17.0"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "dcf81ac59edc17cc8697ff311e8f5ef2d99fcbd9817b34cec66f90b6c3dfd987"
|
checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "ulid"
|
name = "ulid"
|
||||||
|
@ -608,9 +608,9 @@ dependencies = [
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "unicode-ident"
|
name = "unicode-ident"
|
||||||
version = "1.0.5"
|
version = "1.0.12"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "6ceab39d59e4c9499d4e5a8ee0e2735b891bb7308ac83dfb4e80cad195c9f6f3"
|
checksum = "3354b9ac3fae1ff6755cb6db53683adb661634f67557942dea4facebec0fee4b"
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "unindent"
|
name = "unindent"
|
||||||
|
@ -695,44 +695,65 @@ dependencies = [
|
||||||
]
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "windows-sys"
|
name = "windows-targets"
|
||||||
version = "0.36.1"
|
version = "0.52.5"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "ea04155a16a59f9eab786fe12a4a450e75cdb175f9e0d80da1e17db09f55b8d2"
|
checksum = "6f0713a46559409d202e70e28227288446bf7841d3211583a4b53e3f6d96e7eb"
|
||||||
dependencies = [
|
dependencies = [
|
||||||
|
"windows_aarch64_gnullvm",
|
||||||
"windows_aarch64_msvc",
|
"windows_aarch64_msvc",
|
||||||
"windows_i686_gnu",
|
"windows_i686_gnu",
|
||||||
|
"windows_i686_gnullvm",
|
||||||
"windows_i686_msvc",
|
"windows_i686_msvc",
|
||||||
"windows_x86_64_gnu",
|
"windows_x86_64_gnu",
|
||||||
|
"windows_x86_64_gnullvm",
|
||||||
"windows_x86_64_msvc",
|
"windows_x86_64_msvc",
|
||||||
]
|
]
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "windows_aarch64_msvc"
|
name = "windows_aarch64_gnullvm"
|
||||||
version = "0.36.1"
|
version = "0.52.5"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "9bb8c3fd39ade2d67e9874ac4f3db21f0d710bee00fe7cab16949ec184eeaa47"
|
checksum = "7088eed71e8b8dda258ecc8bac5fb1153c5cffaf2578fc8ff5d61e23578d3263"
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "windows_aarch64_msvc"
|
||||||
|
version = "0.52.5"
|
||||||
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
|
checksum = "9985fd1504e250c615ca5f281c3f7a6da76213ebd5ccc9561496568a2752afb6"
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "windows_i686_gnu"
|
name = "windows_i686_gnu"
|
||||||
version = "0.36.1"
|
version = "0.52.5"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "180e6ccf01daf4c426b846dfc66db1fc518f074baa793aa7d9b9aaeffad6a3b6"
|
checksum = "88ba073cf16d5372720ec942a8ccbf61626074c6d4dd2e745299726ce8b89670"
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "windows_i686_gnullvm"
|
||||||
|
version = "0.52.5"
|
||||||
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
|
checksum = "87f4261229030a858f36b459e748ae97545d6f1ec60e5e0d6a3d32e0dc232ee9"
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "windows_i686_msvc"
|
name = "windows_i686_msvc"
|
||||||
version = "0.36.1"
|
version = "0.52.5"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "e2e7917148b2812d1eeafaeb22a97e4813dfa60a3f8f78ebe204bcc88f12f024"
|
checksum = "db3c2bf3d13d5b658be73463284eaf12830ac9a26a90c717b7f771dfe97487bf"
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "windows_x86_64_gnu"
|
name = "windows_x86_64_gnu"
|
||||||
version = "0.36.1"
|
version = "0.52.5"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "4dcd171b8776c41b97521e5da127a2d86ad280114807d0b2ab1e462bc764d9e1"
|
checksum = "4e4246f76bdeff09eb48875a0fd3e2af6aada79d409d33011886d3e1581517d9"
|
||||||
|
|
||||||
|
[[package]]
|
||||||
|
name = "windows_x86_64_gnullvm"
|
||||||
|
version = "0.52.5"
|
||||||
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
|
checksum = "852298e482cd67c356ddd9570386e2862b5673c85bd5f88df9ab6802b334c596"
|
||||||
|
|
||||||
[[package]]
|
[[package]]
|
||||||
name = "windows_x86_64_msvc"
|
name = "windows_x86_64_msvc"
|
||||||
version = "0.36.1"
|
version = "0.52.5"
|
||||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||||
checksum = "c811ca4a8c853ef420abd8592ba53ddbbac90410fab6903b3e79972a631f7680"
|
checksum = "bec47e5bfd1bff0eeaf6d8b485cc1074891a197ab4225d504cb7a1ab88b02bf0"
|
||||||
|
|
|
@@ -1 +0,0 @@
-Fixed search feature of Element Android on homeservers using SQLite by returning search terms as search highlights.

@@ -1 +0,0 @@
-Update `event_cache_size` and `global_factor` configuration documentation.

@@ -1 +0,0 @@
-Fixes a bug introduced in v1.52.0 where the `destination` query parameter for the [Destination Rooms Admin API](https://element-hq.github.io/synapse/v1.105/usage/administration/admin_api/federation.html#destination-rooms) failed to actually filter returned rooms.

@@ -1 +0,0 @@
-For MSC3266 room summaries, support queries at the recommended endpoint of `/_matrix/client/unstable/im.nheko.summary/summary/{roomIdOrAlias}`. The existing endpoint of `/_matrix/client/unstable/im.nheko.summary/rooms/{roomIdOrAlias}/summary` is deprecated.

@@ -1 +0,0 @@
-Add `RuntimeDirectory` to the example `matrix-synapse.service` systemd unit.

@@ -1 +0,0 @@
-Fix various small typos throughout the docs.

@@ -1 +0,0 @@
-Update `enable_notifs` configuration documentation.

@@ -1 +0,0 @@
-Improve the error message for cross-signing reset with MSC3861 enabled.
changelog.d/17147.feature (new file)
@@ -0,0 +1 @@
+Add the ability to auto-accept invites on behalf of users. See the [`auto_accept_invites`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#auto-accept-invites) config option for details.

changelog.d/17167.feature (new file)
@@ -0,0 +1 @@
+Add experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync/e2ee` endpoint for To-Device messages and device encryption info.

changelog.d/17176.misc (new file)
@@ -0,0 +1 @@
+Log exceptions when failing to auto-join a new user according to the `auto_join_rooms` option.

changelog.d/17204.doc (new file)
@@ -0,0 +1 @@
+Update OIDC documentation: by default Synapse does not query the userinfo endpoint, so claims should be put in the `id_token`.

changelog.d/17211.misc (new file)
@@ -0,0 +1 @@
+Reduce the work of calculating outbound device list updates.

changelog.d/17213.feature (new file)
@@ -0,0 +1 @@
+Support MSC3916 by adding unstable media endpoints to `_matrix/client` (#17213).

changelog.d/17216.misc (new file)
@@ -0,0 +1 @@
+Improve performance of calculating device list changes in `/sync`.

changelog.d/17219.feature (new file)
@@ -0,0 +1 @@
+Add logging to tasks managed by the task scheduler, showing CPU and database usage.
debian/changelog (vendored, 24 lines changed)
@@ -1,3 +1,27 @@
+matrix-synapse-py3 (1.108.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.108.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 21 May 2024 10:54:13 +0100
+
+matrix-synapse-py3 (1.107.0) stable; urgency=medium
+
+  * New Synapse release 1.107.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 14 May 2024 14:15:34 +0100
+
+matrix-synapse-py3 (1.107.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.107.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 07 May 2024 16:26:26 +0100
+
+matrix-synapse-py3 (1.106.0) stable; urgency=medium
+
+  * New Synapse release 1.106.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 30 Apr 2024 11:51:43 +0100
+
 matrix-synapse-py3 (1.106.0~rc1) stable; urgency=medium

   * New Synapse release 1.106.0rc1.
@@ -1,20 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS    ?=
-SPHINXBUILD   ?= sphinx-build
-SOURCEDIR     = .
-BUILDDIR      = _build
-
-# Put it first so that "make" without argument is like "make help".
-help:
-	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

@@ -1,50 +0,0 @@
-# Configuration file for the Sphinx documentation builder.
-#
-# For the full list of built-in configuration values, see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Project information -----------------------------------------------------
-# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
-
-project = "Synapse development"
-copyright = "2023, The Matrix.org Foundation C.I.C."
-author = "The Synapse Maintainers and Community"
-
-# -- General configuration ---------------------------------------------------
-# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
-
-extensions = [
-    "autodoc2",
-    "myst_parser",
-]
-
-templates_path = ["_templates"]
-exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
-
-
-# -- Options for Autodoc2 ----------------------------------------------------
-
-autodoc2_docstring_parser_regexes = [
-    # this will render all docstrings as 'MyST' Markdown
-    (r".*", "myst"),
-]
-
-autodoc2_packages = [
-    {
-        "path": "../synapse",
-        # Don't render documentation for everything as a matter of course
-        "auto_mode": False,
-    },
-]
-
-
-# -- Options for MyST (Markdown) ---------------------------------------------
-
-# myst_heading_anchors = 2
-
-
-# -- Options for HTML output -------------------------------------------------
-# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
-
-html_theme = "furo"
-html_static_path = ["_static"]

@@ -1,22 +0,0 @@
-.. Synapse Developer Documentation documentation master file, created by
-   sphinx-quickstart on Mon Mar 13 08:59:51 2023.
-   You can adapt this file completely to your liking, but it should at least
-   contain the root `toctree` directive.
-
-Welcome to the Synapse Developer Documentation!
-===========================================================
-
-.. toctree::
-   :maxdepth: 2
-   :caption: Contents:
-
-   modules/federation_sender
-
-
-
-Indices and tables
-==================
-
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`

@@ -1,5 +0,0 @@
-Federation Sender
-=================
-
-```{autodoc2-docstring} synapse.federation.sender
-```
@@ -163,7 +163,7 @@ FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm
 LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
 LABEL org.opencontainers.image.documentation='https://github.com/element-hq/synapse/blob/master/docker/README.md'
 LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
-LABEL org.opencontainers.image.licenses='Apache-2.0'
+LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later'

 RUN \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
@@ -92,8 +92,6 @@ allow_device_name_lookup_over_federation: true
 ## Experimental Features ##

 experimental_features:
-  # client-side support for partial state in /send_join responses
-  faster_joins: true
   # Enable support for polls
   msc3381_polls_enabled: true
   # Enable deleting device-specific notification settings stored in account data

@@ -104,6 +102,10 @@ experimental_features:
   msc3874_enabled: true
   # no UIA for x-signing upload for the first time
   msc3967_enabled: true
+  # Expose a room summary for public rooms
+  msc3266_enabled: true
+
+  msc4115_membership_on_events: true

 server_notices:
   system_mxid_localpart: _server
@@ -211,6 +211,8 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
             "^/_matrix/federation/(v1|v2)/make_leave/",
             "^/_matrix/federation/(v1|v2)/send_join/",
             "^/_matrix/federation/(v1|v2)/send_leave/",
+            "^/_matrix/federation/v1/make_knock/",
+            "^/_matrix/federation/v1/send_knock/",
             "^/_matrix/federation/(v1|v2)/invite/",
             "^/_matrix/federation/(v1|v2)/query_auth/",
             "^/_matrix/federation/(v1|v2)/event_auth/",
@@ -141,8 +141,8 @@ Body parameters:
   provider for SSO (Single sign-on). More details are in the configuration manual under the
   sections [sso](../usage/configuration/config_documentation.md#sso) and [oidc_providers](../usage/configuration/config_documentation.md#oidc_providers).
 - `auth_provider` - **string**, required. The unique, internal ID of the external identity provider.
-  The same as `idp_id` from the homeserver configuration. Note that no error is raised if the
-  provided value is not in the homeserver configuration.
+  The same as `idp_id` from the homeserver configuration. If using OIDC, this value should be prefixed
+  with `oidc-`. Note that no error is raised if the provided value is not in the homeserver configuration.
 - `external_id` - **string**, required. An identifier for the user in the external identity provider.
   When the user logs in to the identity provider, this must be the unique ID that they map to.
 - `admin` - **bool**, optional, defaults to `false`. Whether the user is a homeserver administrator,
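The `oidc-` prefixing rule for `auth_provider` described in the hunk above can be captured in a small helper. This is an illustrative sketch, not Synapse code: the function name and the `is_oidc` flag are invented for the example; only the field names and the prefix rule come from the documentation.

```python
def external_id_entry(idp_id: str, external_id: str, *, is_oidc: bool = False) -> dict:
    """Build one external-ID mapping for the admin API body,
    prefixing OIDC providers with "oidc-" as documented."""
    auth_provider = idp_id
    if is_oidc and not idp_id.startswith("oidc-"):
        auth_provider = f"oidc-{idp_id}"
    return {"auth_provider": auth_provider, "external_id": external_id}
```

For example, `external_id_entry("keycloak", "alice", is_oidc=True)` yields `{"auth_provider": "oidc-keycloak", "external_id": "alice"}`.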
@@ -525,6 +525,8 @@ oidc_providers:
   (`Options > Security > ID Token signature algorithm` and `Options > Security >
   Access Token signature algorithm`)
 - Scopes: OpenID, Email and Profile
+- Force claims into `id_token`
+  (`Options > Advanced > Force claims to be returned in ID Token`)
 - Allowed redirection addresses for login (`Options > Basic > Allowed
   redirection addresses for login`):
   `[synapse public baseurl]/_synapse/client/oidc/callback`
@@ -98,6 +98,7 @@ A custom mapping provider must specify the following methods:
   either accept this localpart or pick their own username. Otherwise this
   option has no effect. If omitted, defaults to `False`.
   - `display_name`: An optional string, the display name for the user.
+  - `picture`: An optional string, the avatar URL for the user.
   - `emails`: A list of strings, the email address(es) to associate with
     this user. If omitted, defaults to an empty list.
 * `async def get_extra_attributes(self, userinfo, token)`
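The attribute dict described above (now including the optional `picture` key) can be sketched as follows. This is a hypothetical illustration of the return value of a custom mapping provider's attribute-mapping method; the claim names (`preferred_username`, `name`, etc.) are assumptions for the example.

```python
def map_user_attributes(userinfo: dict) -> dict:
    # Shape of the dict a custom mapping provider returns, per the list above.
    return {
        "localpart": userinfo["preferred_username"],
        "display_name": userinfo.get("name"),   # optional display name
        "picture": userinfo.get("picture"),     # optional avatar URL (newly documented)
        "emails": [userinfo["email"]] if "email" in userinfo else [],
    }
```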
@@ -117,6 +117,14 @@ each upgrade are complete before moving on to the next upgrade, to avoid
 stacking them up. You can monitor the currently running background updates with
 [the Admin API](usage/administration/admin_api/background_updates.html#status).

+# Upgrading to v1.106.0
+
+## Minimum supported Rust version
+
+The minimum supported Rust version has been increased from v1.65.0 to v1.66.0.
+Users building from source will need to ensure their `rustc` version is up to
+date.
+
 # Upgrading to v1.100.0

 ## Minimum supported Rust version
@@ -250,10 +250,10 @@ Using [libjemalloc](https://jemalloc.net) can also yield a significant
 improvement in overall memory use, and especially in terms of giving back
 RAM to the OS. To use it, the library must simply be put in the
 LD_PRELOAD environment variable when launching Synapse. On Debian, this
-can be done by installing the `libjemalloc1` package and adding this
+can be done by installing the `libjemalloc2` package and adding this
 line to `/etc/default/matrix-synapse`:

-    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
+    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2

 This made a significant difference on Python 2.7 - it's unclear how
 much of an improvement it provides on Python 3.x.
@@ -1232,6 +1232,31 @@ federation_domain_whitelist:
   - syd.example.com
 ```
 ---
+### `federation_whitelist_endpoint_enabled`
+
+Enables an endpoint for fetching the federation whitelist config.
+
+The request method and path is `GET /_synapse/client/v1/config/federation_whitelist`, and the
+response format is:
+
+```json
+{
+    "whitelist_enabled": true, // Whether the federation whitelist is being enforced
+    "whitelist": [ // Which server names are allowed by the whitelist
+        "example.com"
+    ]
+}
+```
+
+If `whitelist_enabled` is `false` then the server is permitted to federate with all others.
+
+The endpoint requires authentication.
+
+Example configuration:
+```yaml
+federation_whitelist_endpoint_enabled: true
+```
+---
 ### `federation_metrics_domains`

 Report prometheus metrics on the age of PDUs being sent to and received from
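The response documented above can be interpreted on the client side. A minimal sketch, assuming only the documented JSON shape; the helper name is invented and this is not Synapse code:

```python
def is_federation_allowed(response: dict, server_name: str) -> bool:
    """Return True if the homeserver would federate with `server_name`,
    according to the federation-whitelist endpoint's JSON response."""
    if not response.get("whitelist_enabled", False):
        return True  # no whitelist enforced: federation is unrestricted
    return server_name in response.get("whitelist", [])
```

Note that when `whitelist_enabled` is `false` the `whitelist` list is irrelevant: federation is permitted with all servers.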
@@ -2591,6 +2616,11 @@ Possible values for this option are:
 * "trusted_private_chat": an invitation is required to join this room and the invitee is
   assigned a power level of 100 upon joining the room.

+Each preset will set up a room in the same manner as if it were provided as the `preset` parameter when
+calling the
+[`POST /_matrix/client/v3/createRoom`](https://spec.matrix.org/latest/client-server-api/#post_matrixclientv3createroom)
+Client-Server API endpoint.
+
 If a value of "private_chat" or "trusted_private_chat" is used then
 `auto_join_mxid_localpart` must also be configured.
@@ -3528,6 +3558,15 @@ Has the following sub-options:
   users. This allows the CAS SSO flow to be limited to sign in only, rather than
   automatically registering users that have a valid SSO login but do not have
   a pre-registered account. Defaults to true.
+* `allow_numeric_ids`: set to `true` to allow numeric user IDs (default `false`).
+  This allows the CAS SSO flow to provide user IDs composed of numbers only.
+  These identifiers will be prefixed by the letter "u" by default.
+  The prefix can be configured using the `numeric_ids_prefix` option.
+  Be careful to choose the prefix correctly to avoid any possible conflicts
+  (e.g. user 1234 becomes u1234 when a user u1234 already exists).
+* `numeric_ids_prefix`: the prefix you wish to add in front of a numeric user ID
+  when the `allow_numeric_ids` option is set to `true`.
+  By default, the prefix is the letter "u" and only alphanumeric characters are allowed.

 *Added in Synapse 1.93.0.*

@@ -3542,6 +3581,8 @@ cas_config:
   userGroup: "staff"
   department: None
   enable_registration: true
+  allow_numeric_ids: true
+  numeric_ids_prefix: "numericuser"
 ```
 ---
 ### `sso`
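The numeric-ID mapping that `allow_numeric_ids` and `numeric_ids_prefix` describe above amounts to a one-line transformation. A hypothetical sketch for illustration only (not Synapse's actual implementation):

```python
def map_cas_user_id(user_id: str, allow_numeric_ids: bool = False, prefix: str = "u") -> str:
    """Prefix purely numeric CAS user IDs, per the sub-options above."""
    if allow_numeric_ids and user_id.isdigit():
        return prefix + user_id
    return user_id
```

As the documentation warns, this mapping can collide with existing users: `map_cas_user_id("1234", True)` produces `u1234`, which clashes with any pre-existing user `u1234`.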
@@ -4554,3 +4595,32 @@ background_updates:
     min_batch_size: 10
     default_batch_size: 50
 ```
+---
+## Auto Accept Invites
+
+Configuration settings related to automatically accepting invites.
+
+---
+### `auto_accept_invites`
+
+Automatically accepting invites controls whether users are presented with an invite request or
+are instead automatically joined to a room when receiving an invite. Set the `enabled` sub-option to
+`true` to enable auto-accepting invites. Defaults to `false`.
+This setting has the following sub-options:
+* `enabled`: Whether to run the auto-accept invites logic. Defaults to `false`.
+* `only_for_direct_messages`: Whether invites should be automatically accepted for all room types, or only
+  for direct messages. Defaults to `false`.
+* `only_from_local_users`: Whether to only automatically accept invites from users on this homeserver. Defaults to `false`.
+* `worker_to_run_on`: Which worker to run this module on. This must match the `worker_name`.
+
+NOTE: Care should be taken not to enable this setting if the `synapse_auto_accept_invite` module is enabled and installed.
+The two modules will compete to perform the same task and may result in undesired behaviour. For example, multiple join
+events could be generated from a single invite.
+
+Example configuration:
+```yaml
+auto_accept_invites:
+  enabled: true
+  only_for_direct_messages: true
+  only_from_local_users: true
+  worker_to_run_on: "worker_1"
+```
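The decision the `auto_accept_invites` sub-options control can be sketched as a simple predicate. This is a hypothetical illustration of the documented semantics, not the module's actual code:

```python
def should_auto_accept(
    enabled: bool,
    invite_is_direct_message: bool,
    inviter_is_local: bool,
    only_for_direct_messages: bool = False,
    only_from_local_users: bool = False,
) -> bool:
    """Decide whether an incoming invite would be auto-accepted,
    per the sub-options documented above."""
    if not enabled:
        return False
    if only_for_direct_messages and not invite_is_direct_message:
        return False
    if only_from_local_users and not inviter_is_local:
        return False
    return True
```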
@@ -211,6 +211,8 @@ information.
     ^/_matrix/federation/v1/make_leave/
     ^/_matrix/federation/(v1|v2)/send_join/
     ^/_matrix/federation/(v1|v2)/send_leave/
+    ^/_matrix/federation/v1/make_knock/
+    ^/_matrix/federation/v1/send_knock/
     ^/_matrix/federation/(v1|v2)/invite/
     ^/_matrix/federation/v1/event_auth/
     ^/_matrix/federation/v1/timestamp_to_event/
|
@@ -535,7 +537,7 @@ the stream writer for the `presence` stream:
 ##### The `push_rules` stream

 The following endpoints should be routed directly to the worker configured as
-the stream writer for the `push` stream:
+the stream writer for the `push_rules` stream:

     ^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/
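The corrected routing rule can be exercised with a quick regex check (a sketch; the pattern is copied verbatim from the docs hunk above):

```python
import re

# Requests matching this pattern should be routed to the worker
# configured as the stream writer for the `push_rules` stream.
PUSH_RULES_RE = re.compile(r"^/_matrix/client/(api/v1|r0|v3|unstable)/pushrules/")

print(bool(PUSH_RULES_RE.match("/_matrix/client/v3/pushrules/global/")))  # True
print(bool(PUSH_RULES_RE.match("/_matrix/client/v3/sync")))               # False
```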
poetry.lock (generated; 1189 lines changed)
File diff suppressed because it is too large.
@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"

 [tool.poetry]
 name = "matrix-synapse"
-version = "1.106.0rc1"
+version = "1.108.0rc1"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
@@ -364,17 +364,6 @@ towncrier = ">=18.6.0rc1"
 tomli = ">=1.2.3"

-
-# Dependencies for building the development documentation
-[tool.poetry.group.dev-docs]
-optional = true
-
-[tool.poetry.group.dev-docs.dependencies]
-sphinx = {version = "^6.1", python = "^3.8"}
-sphinx-autodoc2 = {version = ">=0.4.2,<0.6.0", python = "^3.8"}
-myst-parser = {version = "^1.0.0", python = "^3.8"}
-furo = ">=2022.12.7,<2025.0.0"
-
 [build-system]
 # The upper bounds here are defensive, intended to prevent situations like
 # https://github.com/matrix-org/synapse/issues/13849 and
@@ -30,14 +30,14 @@ http = "1.1.0"
 lazy_static = "1.4.0"
 log = "0.4.17"
 mime = "0.3.17"
-pyo3 = { version = "0.20.0", features = [
+pyo3 = { version = "0.21.0", features = [
     "macros",
     "anyhow",
     "abi3",
     "abi3-py38",
 ] }
-pyo3-log = "0.9.0"
+pyo3-log = "0.10.0"
-pythonize = "0.20.0"
+pythonize = "0.21.0"
 regex = "1.6.0"
 sha2 = "0.10.8"
 serde = { version = "1.0.144", features = ["derive"] }
@@ -25,21 +25,21 @@ use std::net::Ipv4Addr;
 use std::str::FromStr;

 use anyhow::Error;
-use pyo3::prelude::*;
+use pyo3::{prelude::*, pybacked::PyBackedStr};
 use regex::Regex;

 use crate::push::utils::{glob_to_regex, GlobMatchType};

 /// Called when registering modules with python.
-pub fn register_module(py: Python<'_>, m: &PyModule) -> PyResult<()> {
-    let child_module = PyModule::new(py, "acl")?;
+pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
+    let child_module = PyModule::new_bound(py, "acl")?;
     child_module.add_class::<ServerAclEvaluator>()?;

-    m.add_submodule(child_module)?;
+    m.add_submodule(&child_module)?;

     // We need to manually add the module to sys.modules to make `from
     // synapse.synapse_rust import acl` work.
-    py.import("sys")?
+    py.import_bound("sys")?
         .getattr("modules")?
         .set_item("synapse.synapse_rust.acl", child_module)?;
@@ -59,8 +59,8 @@ impl ServerAclEvaluator {
     #[new]
     pub fn py_new(
         allow_ip_literals: bool,
-        allow: Vec<&str>,
-        deny: Vec<&str>,
+        allow: Vec<PyBackedStr>,
+        deny: Vec<PyBackedStr>,
     ) -> Result<Self, Error> {
         let allow = allow
             .iter()
@@ -20,8 +20,10 @@

 //! Implements the internal metadata class attached to events.
 //!
-//! The internal metadata is a bit like a `TypedDict`, in that it is stored as a
-//! JSON dict in the DB. Most events have zero, or only a few, of these keys
+//! The internal metadata is a bit like a `TypedDict`, in that most of
+//! it is stored as a JSON dict in the DB (the exceptions being `outlier`
+//! and `stream_ordering` which have their own columns in the database).
+//! Most events have zero, or only a few, of these keys
 //! set. Therefore, since we care more about memory size than performance here,
 //! we store these fields in a mapping.
 //!
@@ -36,9 +38,10 @@ use anyhow::Context;
 use log::warn;
 use pyo3::{
     exceptions::PyAttributeError,
+    pybacked::PyBackedStr,
     pyclass, pymethods,
-    types::{PyDict, PyString},
-    IntoPy, PyAny, PyObject, PyResult, Python,
+    types::{PyAnyMethods, PyDict, PyDictMethods, PyString},
+    Bound, IntoPy, PyAny, PyObject, PyResult, Python,
 };

 /// Definitions of the various fields of the internal metadata.
@@ -57,7 +60,7 @@ enum EventInternalMetadataData {

 impl EventInternalMetadataData {
     /// Convert the field to its name and python object.
-    fn to_python_pair<'a>(&self, py: Python<'a>) -> (&'a PyString, PyObject) {
+    fn to_python_pair<'a>(&self, py: Python<'a>) -> (&'a Bound<'a, PyString>, PyObject) {
         match self {
             EventInternalMetadataData::OutOfBandMembership(o) => {
                 (pyo3::intern!(py, "out_of_band_membership"), o.into_py(py))
@@ -88,10 +91,13 @@ impl EventInternalMetadataData {
     /// Converts from python key/values to the field.
     ///
     /// Returns `None` if the key is a valid but unrecognized string.
-    fn from_python_pair(key: &PyAny, value: &PyAny) -> PyResult<Option<Self>> {
-        let key_str: &str = key.extract()?;
+    fn from_python_pair(
+        key: &Bound<'_, PyAny>,
+        value: &Bound<'_, PyAny>,
+    ) -> PyResult<Option<Self>> {
+        let key_str: PyBackedStr = key.extract()?;

-        let e = match key_str {
+        let e = match &*key_str {
             "out_of_band_membership" => EventInternalMetadataData::OutOfBandMembership(
                 value
                     .extract()
@@ -208,11 +214,11 @@ pub struct EventInternalMetadata {
 #[pymethods]
 impl EventInternalMetadata {
     #[new]
-    fn new(dict: &PyDict) -> PyResult<Self> {
+    fn new(dict: &Bound<'_, PyDict>) -> PyResult<Self> {
         let mut data = Vec::with_capacity(dict.len());

         for (key, value) in dict.iter() {
-            match EventInternalMetadataData::from_python_pair(key, value) {
+            match EventInternalMetadataData::from_python_pair(&key, &value) {
                 Ok(Some(entry)) => data.push(entry),
                 Ok(None) => {}
                 Err(err) => {
@@ -234,8 +240,11 @@ impl EventInternalMetadata {
         self.clone()
     }

+    /// Get a dict holding the data stored in the `internal_metadata` column in the database.
+    ///
+    /// Note that `outlier` and `stream_ordering` are stored in separate columns so are not returned here.
     fn get_dict(&self, py: Python<'_>) -> PyResult<PyObject> {
-        let dict = PyDict::new(py);
+        let dict = PyDict::new_bound(py);

         for entry in &self.data {
             let (key, value) = entry.to_python_pair(py);
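In Python terms, the storage split that the new doc comment describes can be sketched as follows. This is purely illustrative (the helper name is hypothetical); it only shows which fields end up in the JSON blob versus their own columns.

```python
def internal_metadata_dict(fields: dict) -> dict:
    # `outlier` and `stream_ordering` live in dedicated database
    # columns, so they are excluded from the JSON blob stored in
    # the `internal_metadata` column.
    return {
        k: v for k, v in fields.items()
        if k not in ("outlier", "stream_ordering")
    }

print(internal_metadata_dict(
    {"out_of_band_membership": True, "outlier": False, "stream_ordering": 42}))
# {'out_of_band_membership': True}
```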
@@ -20,20 +20,23 @@

 //! Classes for representing Events.

-use pyo3::{types::PyModule, PyResult, Python};
+use pyo3::{
+    types::{PyAnyMethods, PyModule, PyModuleMethods},
+    Bound, PyResult, Python,
+};

 mod internal_metadata;

 /// Called when registering modules with python.
-pub fn register_module(py: Python<'_>, m: &PyModule) -> PyResult<()> {
-    let child_module = PyModule::new(py, "events")?;
+pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
+    let child_module = PyModule::new_bound(py, "events")?;
     child_module.add_class::<internal_metadata::EventInternalMetadata>()?;

-    m.add_submodule(child_module)?;
+    m.add_submodule(&child_module)?;

     // We need to manually add the module to sys.modules to make `from
     // synapse.synapse_rust import events` work.
-    py.import("sys")?
+    py.import_bound("sys")?
         .getattr("modules")?
         .set_item("synapse.synapse_rust.events", child_module)?;
@@ -17,8 +17,8 @@ use headers::{Header, HeaderMapExt};
 use http::{HeaderName, HeaderValue, Method, Request, Response, StatusCode, Uri};
 use pyo3::{
     exceptions::PyValueError,
-    types::{PyBytes, PySequence, PyTuple},
-    PyAny, PyResult,
+    types::{PyAnyMethods, PyBytes, PyBytesMethods, PySequence, PyTuple},
+    Bound, PyAny, PyResult,
 };

 use crate::errors::SynapseError;

@@ -28,10 +28,11 @@ use crate::errors::SynapseError;
 /// # Errors
 ///
 /// Returns an error if calling the `read` on the Python object failed
-fn read_io_body(body: &PyAny, chunk_size: usize) -> PyResult<Bytes> {
+fn read_io_body(body: &Bound<'_, PyAny>, chunk_size: usize) -> PyResult<Bytes> {
     let mut buf = BytesMut::new();
     loop {
-        let bytes: &PyBytes = body.call_method1("read", (chunk_size,))?.downcast()?;
+        let bound = &body.call_method1("read", (chunk_size,))?;
+        let bytes: &Bound<'_, PyBytes> = bound.downcast()?;
         if bytes.as_bytes().is_empty() {
             return Ok(buf.into());
         }
@@ -50,17 +51,19 @@ fn read_io_body(body: &PyAny, chunk_size: usize) -> PyResult<Bytes> {
 /// # Errors
 ///
 /// Returns an error if the Python object doesn't properly implement `IRequest`
-pub fn http_request_from_twisted(request: &PyAny) -> PyResult<Request<Bytes>> {
+pub fn http_request_from_twisted(request: &Bound<'_, PyAny>) -> PyResult<Request<Bytes>> {
     let content = request.getattr("content")?;
-    let body = read_io_body(content, 4096)?;
+    let body = read_io_body(&content, 4096)?;

     let mut req = Request::new(body);

-    let uri: &PyBytes = request.getattr("uri")?.downcast()?;
+    let bound = &request.getattr("uri")?;
+    let uri: &Bound<'_, PyBytes> = bound.downcast()?;
     *req.uri_mut() =
         Uri::try_from(uri.as_bytes()).map_err(|_| PyValueError::new_err("invalid uri"))?;

-    let method: &PyBytes = request.getattr("method")?.downcast()?;
+    let bound = &request.getattr("method")?;
+    let method: &Bound<'_, PyBytes> = bound.downcast()?;
     *req.method_mut() = Method::from_bytes(method.as_bytes())
         .map_err(|_| PyValueError::new_err("invalid method"))?;
@@ -71,14 +74,17 @@ pub fn http_request_from_twisted(request: &PyAny) -> PyResult<Request<Bytes>> {

     for header in headers_iter {
         let header = header?;
-        let header: &PyTuple = header.downcast()?;
-        let name: &PyBytes = header.get_item(0)?.downcast()?;
+        let header: &Bound<'_, PyTuple> = header.downcast()?;
+        let bound = &header.get_item(0)?;
+        let name: &Bound<'_, PyBytes> = bound.downcast()?;
         let name = HeaderName::from_bytes(name.as_bytes())
             .map_err(|_| PyValueError::new_err("invalid header name"))?;

-        let values: &PySequence = header.get_item(1)?.downcast()?;
+        let bound = &header.get_item(1)?;
+        let values: &Bound<'_, PySequence> = bound.downcast()?;
         for index in 0..values.len()? {
-            let value: &PyBytes = values.get_item(index)?.downcast()?;
+            let bound = &values.get_item(index)?;
+            let value: &Bound<'_, PyBytes> = bound.downcast()?;
             let value = HeaderValue::from_bytes(value.as_bytes())
                 .map_err(|_| PyValueError::new_err("invalid header value"))?;
             req.headers_mut().append(name.clone(), value);
@@ -100,7 +106,10 @@ pub fn http_request_from_twisted(request: &PyAny) -> PyResult<Request<Bytes>> {
 /// # Errors
 ///
 /// Returns an error if the Python object doesn't properly implement `IRequest`
-pub fn http_response_to_twisted<B>(request: &PyAny, response: Response<B>) -> PyResult<()>
+pub fn http_response_to_twisted<B>(
+    request: &Bound<'_, PyAny>,
+    response: Response<B>,
+) -> PyResult<()>
 where
     B: Buf,
 {
@@ -38,7 +38,7 @@ fn reset_logging_config() {

 /// The entry point for defining the Python module.
 #[pymodule]
-fn synapse_rust(py: Python<'_>, m: &PyModule) -> PyResult<()> {
+fn synapse_rust(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
     m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
     m.add_function(wrap_pyfunction!(get_rust_file_digest, m)?)?;
     m.add_function(wrap_pyfunction!(reset_logging_config, m)?)?;
@@ -66,7 +66,7 @@ use log::warn;
 use pyo3::exceptions::PyTypeError;
 use pyo3::prelude::*;
 use pyo3::types::{PyBool, PyList, PyLong, PyString};
-use pythonize::{depythonize, pythonize};
+use pythonize::{depythonize_bound, pythonize};
 use serde::de::Error as _;
 use serde::{Deserialize, Serialize};
 use serde_json::Value;
@@ -78,19 +78,19 @@ pub mod evaluator;
 pub mod utils;

 /// Called when registering modules with python.
-pub fn register_module(py: Python<'_>, m: &PyModule) -> PyResult<()> {
-    let child_module = PyModule::new(py, "push")?;
+pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
+    let child_module = PyModule::new_bound(py, "push")?;
     child_module.add_class::<PushRule>()?;
     child_module.add_class::<PushRules>()?;
     child_module.add_class::<FilteredPushRules>()?;
     child_module.add_class::<PushRuleEvaluator>()?;
     child_module.add_function(wrap_pyfunction!(get_base_rule_ids, m)?)?;

-    m.add_submodule(child_module)?;
+    m.add_submodule(&child_module)?;

     // We need to manually add the module to sys.modules to make `from
     // synapse.synapse_rust import push` work.
-    py.import("sys")?
+    py.import_bound("sys")?
         .getattr("modules")?
         .set_item("synapse.synapse_rust.push", child_module)?;
@@ -271,12 +271,12 @@ pub enum SimpleJsonValue {

 impl<'source> FromPyObject<'source> for SimpleJsonValue {
     fn extract(ob: &'source PyAny) -> PyResult<Self> {
-        if let Ok(s) = <PyString as pyo3::PyTryFrom>::try_from(ob) {
+        if let Ok(s) = ob.downcast::<PyString>() {
             Ok(SimpleJsonValue::Str(Cow::Owned(s.to_string())))
         // A bool *is* an int, ensure we try bool first.
-        } else if let Ok(b) = <PyBool as pyo3::PyTryFrom>::try_from(ob) {
+        } else if let Ok(b) = ob.downcast::<PyBool>() {
             Ok(SimpleJsonValue::Bool(b.extract()?))
-        } else if let Ok(i) = <PyLong as pyo3::PyTryFrom>::try_from(ob) {
+        } else if let Ok(i) = ob.downcast::<PyLong>() {
             Ok(SimpleJsonValue::Int(i.extract()?))
         } else if ob.is_none() {
             Ok(SimpleJsonValue::Null)
@@ -299,7 +299,7 @@ pub enum JsonValue {

 impl<'source> FromPyObject<'source> for JsonValue {
     fn extract(ob: &'source PyAny) -> PyResult<Self> {
-        if let Ok(l) = <PyList as pyo3::PyTryFrom>::try_from(ob) {
+        if let Ok(l) = ob.downcast::<PyList>() {
             match l.iter().map(SimpleJsonValue::extract).collect() {
                 Ok(a) => Ok(JsonValue::Array(a)),
                 Err(e) => Err(PyTypeError::new_err(format!(
@@ -370,8 +370,8 @@ impl IntoPy<PyObject> for Condition {
 }

 impl<'source> FromPyObject<'source> for Condition {
-    fn extract(ob: &'source PyAny) -> PyResult<Self> {
-        Ok(depythonize(ob)?)
+    fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
+        Ok(depythonize_bound(ob.clone())?)
     }
 }
@@ -26,8 +26,10 @@ use headers::{
 use http::{header::ETAG, HeaderMap, Response, StatusCode, Uri};
 use mime::Mime;
 use pyo3::{
-    exceptions::PyValueError, pyclass, pymethods, types::PyModule, Py, PyAny, PyObject, PyResult,
-    Python, ToPyObject,
+    exceptions::PyValueError,
+    pyclass, pymethods,
+    types::{PyAnyMethods, PyModule, PyModuleMethods},
+    Bound, Py, PyAny, PyObject, PyResult, Python, ToPyObject,
 };
 use ulid::Ulid;
@@ -109,7 +111,7 @@ impl RendezvousHandler {
     #[pyo3(signature = (homeserver, /, capacity=100, max_content_length=4*1024, eviction_interval=60*1000, ttl=60*1000))]
     fn new(
         py: Python<'_>,
-        homeserver: &PyAny,
+        homeserver: &Bound<'_, PyAny>,
         capacity: usize,
         max_content_length: u64,
         eviction_interval: u64,
@@ -150,7 +152,7 @@ impl RendezvousHandler {
     }

     fn _evict(&mut self, py: Python<'_>) -> PyResult<()> {
-        let clock = self.clock.as_ref(py);
+        let clock = self.clock.bind(py);
         let now: u64 = clock.call_method0("time_msec")?.extract()?;
         let now = SystemTime::UNIX_EPOCH + Duration::from_millis(now);
         self.evict(now);
@@ -158,12 +160,12 @@ impl RendezvousHandler {
         Ok(())
     }

-    fn handle_post(&mut self, py: Python<'_>, twisted_request: &PyAny) -> PyResult<()> {
+    fn handle_post(&mut self, py: Python<'_>, twisted_request: &Bound<'_, PyAny>) -> PyResult<()> {
         let request = http_request_from_twisted(twisted_request)?;

         let content_type = self.check_input_headers(request.headers())?;

-        let clock = self.clock.as_ref(py);
+        let clock = self.clock.bind(py);
         let now: u64 = clock.call_method0("time_msec")?.extract()?;
         let now = SystemTime::UNIX_EPOCH + Duration::from_millis(now);
@@ -197,7 +199,12 @@ impl RendezvousHandler {
         Ok(())
     }

-    fn handle_get(&mut self, py: Python<'_>, twisted_request: &PyAny, id: &str) -> PyResult<()> {
+    fn handle_get(
+        &mut self,
+        py: Python<'_>,
+        twisted_request: &Bound<'_, PyAny>,
+        id: &str,
+    ) -> PyResult<()> {
         let request = http_request_from_twisted(twisted_request)?;

         let if_none_match: Option<IfNoneMatch> = request.headers().typed_get_optional()?;
@@ -233,7 +240,12 @@ impl RendezvousHandler {
         Ok(())
     }

-    fn handle_put(&mut self, py: Python<'_>, twisted_request: &PyAny, id: &str) -> PyResult<()> {
+    fn handle_put(
+        &mut self,
+        py: Python<'_>,
+        twisted_request: &Bound<'_, PyAny>,
+        id: &str,
+    ) -> PyResult<()> {
         let request = http_request_from_twisted(twisted_request)?;

         let content_type = self.check_input_headers(request.headers())?;
@@ -281,7 +293,7 @@ impl RendezvousHandler {
         Ok(())
     }

-    fn handle_delete(&mut self, twisted_request: &PyAny, id: &str) -> PyResult<()> {
+    fn handle_delete(&mut self, twisted_request: &Bound<'_, PyAny>, id: &str) -> PyResult<()> {
         let _request = http_request_from_twisted(twisted_request)?;

         let id: Ulid = id.parse().map_err(|_| NotFoundError::new())?;
@@ -298,16 +310,16 @@ impl RendezvousHandler {
     }
 }

-pub fn register_module(py: Python<'_>, m: &PyModule) -> PyResult<()> {
-    let child_module = PyModule::new(py, "rendezvous")?;
+pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
+    let child_module = PyModule::new_bound(py, "rendezvous")?;

     child_module.add_class::<RendezvousHandler>()?;

-    m.add_submodule(child_module)?;
+    m.add_submodule(&child_module)?;

     // We need to manually add the module to sys.modules to make `from
     // synapse.synapse_rust import rendezvous` work.
-    py.import("sys")?
+    py.import_bound("sys")?
         .getattr("modules")?
         .set_item("synapse.synapse_rust.rendezvous", child_module)?;
@@ -214,7 +214,17 @@ fi

 extra_test_args=()

-test_packages="./tests/csapi ./tests ./tests/msc3874 ./tests/msc3890 ./tests/msc3391 ./tests/msc3930 ./tests/msc3902 ./tests/msc3967"
+test_packages=(
+    ./tests/csapi
+    ./tests
+    ./tests/msc3874
+    ./tests/msc3890
+    ./tests/msc3391
+    ./tests/msc3930
+    ./tests/msc3902
+    ./tests/msc3967
+    ./tests/msc4115
+)

 # Enable dirty runs, so tests will reuse the same container where possible.
 # This significantly speeds up tests, but increases the possibility of test pollution.
|
||||||
export PASS_SYNAPSE_LOG_TESTING=1
|
export PASS_SYNAPSE_LOG_TESTING=1
|
||||||
|
|
||||||
# Run the tests!
|
# Run the tests!
|
||||||
echo "Images built; running complement with ${extra_test_args[@]} $@ $test_packages"
|
echo "Images built; running complement with ${extra_test_args[@]} $@ ${test_packages[@]}"
|
||||||
cd "$COMPLEMENT_DIR"
|
cd "$COMPLEMENT_DIR"
|
||||||
|
|
||||||
go test -v -tags "synapse_blacklist" -count=1 "${extra_test_args[@]}" "$@" $test_packages
|
go test -v -tags "synapse_blacklist" -count=1 "${extra_test_args[@]}" "$@" "${test_packages[@]}"
|
||||||
|
|
|
@@ -91,7 +91,6 @@ else
       "synapse" "docker" "tests"
       "scripts-dev"
       "contrib" "synmark" "stubs" ".ci"
-      "dev-docs"
     )
   fi
 fi
@@ -127,7 +127,7 @@ BOOLEAN_COLUMNS = {
     "redactions": ["have_censored"],
     "room_stats_state": ["is_federatable"],
     "rooms": ["is_public", "has_auth_chain_index"],
-    "users": ["shadow_banned", "approved", "locked"],
+    "users": ["shadow_banned", "approved", "locked", "suspended"],
    "un_partial_stated_event_stream": ["rejection_status_changed"],
     "users_who_share_rooms": ["share_private"],
     "per_user_experimental_features": ["enabled"],
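The port script uses this map to turn SQLite's 0/1 integers into real booleans when copying rows to Postgres. A minimal sketch of that conversion (the helper name is hypothetical; only the `BOOLEAN_COLUMNS` excerpt comes from the hunk above):

```python
BOOLEAN_COLUMNS = {
    # Excerpt from the map above, including the newly added "suspended".
    "users": ["shadow_banned", "approved", "locked", "suspended"],
}

def convert_booleans(table: str, headers: list, row: list) -> list:
    # SQLite stores booleans as 0/1 integers; convert the columns
    # registered for this table to real bools (leaving NULLs alone)
    # before inserting into Postgres.
    bool_cols = set(BOOLEAN_COLUMNS.get(table, ()))
    return [
        bool(v) if c in bool_cols and v is not None else v
        for c, v in zip(headers, row)
    ]

print(convert_booleans("users", ["name", "suspended"], ["@alice:hs", 1]))
# ['@alice:hs', True]
```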
@@ -234,6 +234,13 @@ class EventContentFields:
     TO_DEVICE_MSGID: Final = "org.matrix.msgid"


+class EventUnsignedContentFields:
+    """Fields found inside the 'unsigned' data on events"""
+
+    # Requesting user's membership, per MSC4115
+    MSC4115_MEMBERSHIP: Final = "io.element.msc4115.membership"
+
+
 class RoomTypes:
     """Understood values of the room_type field of m.room.create events."""
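A sketch of how a server might attach this field when serializing an event for a client. The helper is hypothetical and illustrative only; the constant is the one defined in the hunk above.

```python
from typing import Final

# Field name added above, per MSC4115.
MSC4115_MEMBERSHIP: Final = "io.element.msc4115.membership"

def annotate_membership(serialized_event: dict, membership: str) -> dict:
    # Return a copy of the event with the requesting user's membership
    # recorded under `unsigned`, leaving the input dict untouched.
    unsigned = dict(serialized_event.get("unsigned", {}))
    unsigned[MSC4115_MEMBERSHIP] = membership
    return {**serialized_event, "unsigned": unsigned}

event = {"type": "m.room.message"}
print(annotate_membership(event, "join")["unsigned"])
# {'io.element.msc4115.membership': 'join'}
```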
@@ -316,6 +316,10 @@ class Ratelimiter:
         )

         if not allowed:
+            # We pause for a bit here to stop clients from "tight-looping" on
+            # retrying their request.
+            await self.clock.sleep(0.5)
+
             raise LimitExceededError(
                 limiter_name=self._limiter_name,
                 retry_after_ms=int(1000 * (time_allowed - time_now_s)),
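The rejection-path pause can be demonstrated in isolation. This is a standalone sketch (Synapse's version uses its own clock abstraction and a 0.5s pause; a shorter pause is used here to keep the demo quick):

```python
import asyncio
import time

async def reject_with_pause(pause_s: float = 0.05) -> None:
    # Mirrors the pattern in the hunk above: sleep briefly before
    # raising, so a client that immediately retries cannot tight-loop
    # against the rate limiter.
    await asyncio.sleep(pause_s)
    raise RuntimeError("rate limit exceeded")

start = time.monotonic()
try:
    asyncio.run(reject_with_pause())
except RuntimeError:
    pass
elapsed = time.monotonic() - start
print(elapsed >= 0.04)  # True: the rejection path always waits first
```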
@@ -68,6 +68,7 @@ from synapse.config._base import format_config_error
 from synapse.config.homeserver import HomeServerConfig
 from synapse.config.server import ListenerConfig, ManholeConfig, TCPListenerConfig
 from synapse.crypto import context_factory
+from synapse.events.auto_accept_invites import InviteAutoAccepter
 from synapse.events.presence_router import load_legacy_presence_router
 from synapse.handlers.auth import load_legacy_password_auth_providers
 from synapse.http.site import SynapseSite

@@ -582,6 +583,11 @@ async def start(hs: "HomeServer") -> None:
         m = module(config, module_api)
         logger.info("Loaded module %s", m)

+    if hs.config.auto_accept_invites.enabled:
+        # Start the local auto_accept_invites module.
+        m = InviteAutoAccepter(hs.config.auto_accept_invites, module_api)
+        logger.info("Loaded local module %s", m)
+
     load_legacy_spam_checkers(hs)
     load_legacy_third_party_event_rules(hs)
     load_legacy_presence_router(hs)
@@ -23,6 +23,7 @@ from synapse.config import (  # noqa: F401
     api,
     appservice,
     auth,
+    auto_accept_invites,
     background_updates,
     cache,
     captcha,

@@ -120,6 +121,7 @@ class RootConfig:
     federation: federation.FederationConfig
     retention: retention.RetentionConfig
     background_updates: background_updates.BackgroundUpdateConfig
+    auto_accept_invites: auto_accept_invites.AutoAcceptInvitesConfig
 
     config_classes: List[Type["Config"]] = ...
     config_files: List[str]
43  synapse/config/auto_accept_invites.py  Normal file

@@ -0,0 +1,43 @@
+#
+# This file is licensed under the Affero General Public License (AGPL) version 3.
+#
+# Copyright (C) 2024 New Vector, Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# See the GNU Affero General Public License for more details:
+# <https://www.gnu.org/licenses/agpl-3.0.html>.
+#
+# Originally licensed under the Apache License, Version 2.0:
+# <http://www.apache.org/licenses/LICENSE-2.0>.
+#
+# [This file includes modifications made by New Vector Limited]
+#
+#
+from typing import Any
+
+from synapse.types import JsonDict
+
+from ._base import Config
+
+
+class AutoAcceptInvitesConfig(Config):
+    section = "auto_accept_invites"
+
+    def read_config(self, config: JsonDict, **kwargs: Any) -> None:
+        auto_accept_invites_config = config.get("auto_accept_invites") or {}
+
+        self.enabled = auto_accept_invites_config.get("enabled", False)
+
+        self.accept_invites_only_for_direct_messages = auto_accept_invites_config.get(
+            "only_for_direct_messages", False
+        )
+
+        self.accept_invites_only_from_local_users = auto_accept_invites_config.get(
+            "only_from_local_users", False
+        )
+
+        self.worker_to_run_on = auto_accept_invites_config.get("worker_to_run_on")
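The `read_config` above defaults every option when the section is missing or null. A standalone sketch of the same defaulting logic, using plain dicts in place of the parsed YAML (the function name is hypothetical):

```python
def read_auto_accept_invites(config: dict) -> dict:
    # Mirrors AutoAcceptInvitesConfig.read_config: the `or {}` also covers an
    # explicit `auto_accept_invites: null` in the YAML.
    section = config.get("auto_accept_invites") or {}
    return {
        "enabled": section.get("enabled", False),
        "only_for_direct_messages": section.get("only_for_direct_messages", False),
        "only_from_local_users": section.get("only_from_local_users", False),
        "worker_to_run_on": section.get("worker_to_run_on"),
    }
```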
@@ -66,6 +66,17 @@ class CasConfig(Config):
 
         self.cas_enable_registration = cas_config.get("enable_registration", True)
 
+        self.cas_allow_numeric_ids = cas_config.get("allow_numeric_ids")
+        self.cas_numeric_ids_prefix = cas_config.get("numeric_ids_prefix")
+        if (
+            self.cas_numeric_ids_prefix is not None
+            and self.cas_numeric_ids_prefix.isalnum() is False
+        ):
+            raise ConfigError(
+                "Only alphanumeric characters are allowed for numeric IDs prefix",
+                ("cas_config", "numeric_ids_prefix"),
+            )
+
         self.idp_name = cas_config.get("idp_name", "CAS")
         self.idp_icon = cas_config.get("idp_icon")
         self.idp_brand = cas_config.get("idp_brand")

@@ -77,6 +88,8 @@ class CasConfig(Config):
             self.cas_displayname_attribute = None
             self.cas_required_attributes = []
             self.cas_enable_registration = False
+            self.cas_allow_numeric_ids = False
+            self.cas_numeric_ids_prefix = "u"
 
         # CAS uses a legacy required attributes mapping, not the one provided by
@@ -332,6 +332,9 @@ class ExperimentalConfig(Config):
         # MSC3391: Removing account data.
         self.msc3391_enabled = experimental.get("msc3391_enabled", False)
 
+        # MSC3575 (Sliding Sync API endpoints)
+        self.msc3575_enabled: bool = experimental.get("msc3575_enabled", False)
+
         # MSC3773: Thread notifications
         self.msc3773_enabled: bool = experimental.get("msc3773_enabled", False)
 

@@ -432,3 +435,11 @@ class ExperimentalConfig(Config):
                 "You cannot have MSC4108 both enabled and delegated at the same time",
                 ("experimental", "msc4108_delegation_endpoint"),
             )
+
+        self.msc4115_membership_on_events = experimental.get(
+            "msc4115_membership_on_events", False
+        )
+
+        self.msc3916_authenticated_media_enabled = experimental.get(
+            "msc3916_authenticated_media_enabled", False
+        )
@@ -42,6 +42,10 @@ class FederationConfig(Config):
             for domain in federation_domain_whitelist:
                 self.federation_domain_whitelist[domain] = True
 
+        self.federation_whitelist_endpoint_enabled = config.get(
+            "federation_whitelist_endpoint_enabled", False
+        )
+
         federation_metrics_domains = config.get("federation_metrics_domains") or []
         validate_config(
             _METRICS_FOR_DOMAINS_SCHEMA,
@@ -23,6 +23,7 @@ from .account_validity import AccountValidityConfig
 from .api import ApiConfig
 from .appservice import AppServiceConfig
 from .auth import AuthConfig
+from .auto_accept_invites import AutoAcceptInvitesConfig
 from .background_updates import BackgroundUpdateConfig
 from .cache import CacheConfig
 from .captcha import CaptchaConfig

@@ -105,4 +106,5 @@ class HomeServerConfig(RootConfig):
         RedisConfig,
         ExperimentalConfig,
         BackgroundUpdateConfig,
+        AutoAcceptInvitesConfig,
     ]
196  synapse/events/auto_accept_invites.py  Normal file

@@ -0,0 +1,196 @@
+#
+# This file is licensed under the Affero General Public License (AGPL) version 3.
+#
+# Copyright 2021 The Matrix.org Foundation C.I.C
+# Copyright (C) 2024 New Vector, Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# See the GNU Affero General Public License for more details:
+# <https://www.gnu.org/licenses/agpl-3.0.html>.
+#
+# Originally licensed under the Apache License, Version 2.0:
+# <http://www.apache.org/licenses/LICENSE-2.0>.
+#
+# [This file includes modifications made by New Vector Limited]
+#
+#
+import logging
+from http import HTTPStatus
+from typing import Any, Dict, Tuple
+
+from synapse.api.constants import AccountDataTypes, EventTypes, Membership
+from synapse.api.errors import SynapseError
+from synapse.config.auto_accept_invites import AutoAcceptInvitesConfig
+from synapse.module_api import EventBase, ModuleApi, run_as_background_process
+
+logger = logging.getLogger(__name__)
+
+
+class InviteAutoAccepter:
+    def __init__(self, config: AutoAcceptInvitesConfig, api: ModuleApi):
+        # Keep a reference to the Module API.
+        self._api = api
+        self._config = config
+
+        if not self._config.enabled:
+            return
+
+        should_run_on_this_worker = config.worker_to_run_on == self._api.worker_name
+
+        if not should_run_on_this_worker:
+            logger.info(
+                "Not accepting invites on this worker (configured: %r, here: %r)",
+                config.worker_to_run_on,
+                self._api.worker_name,
+            )
+            return
+
+        logger.info(
+            "Accepting invites on this worker (here: %r)", self._api.worker_name
+        )
+
+        # Register the callback.
+        self._api.register_third_party_rules_callbacks(
+            on_new_event=self.on_new_event,
+        )
+
+    async def on_new_event(self, event: EventBase, *args: Any) -> None:
+        """Listens for new events, and if the event is an invite for a local user then
+        automatically accepts it.
+
+        Args:
+            event: The incoming event.
+        """
+        # Check if the event is an invite for a local user.
+        is_invite_for_local_user = (
+            event.type == EventTypes.Member
+            and event.is_state()
+            and event.membership == Membership.INVITE
+            and self._api.is_mine(event.state_key)
+        )
+
+        # Only accept invites for direct messages if the configuration mandates it.
+        is_direct_message = event.content.get("is_direct", False)
+        is_allowed_by_direct_message_rules = (
+            not self._config.accept_invites_only_for_direct_messages
+            or is_direct_message is True
+        )
+
+        # Only accept invites from remote users if the configuration mandates it.
+        is_from_local_user = self._api.is_mine(event.sender)
+        is_allowed_by_local_user_rules = (
+            not self._config.accept_invites_only_from_local_users
+            or is_from_local_user is True
+        )
+
+        if (
+            is_invite_for_local_user
+            and is_allowed_by_direct_message_rules
+            and is_allowed_by_local_user_rules
+        ):
+            # Make the user join the room. We run this as a background process to circumvent a race condition
+            # that occurs when responding to invites over federation (see https://github.com/matrix-org/synapse-auto-accept-invite/issues/12)
+            run_as_background_process(
+                "retry_make_join",
+                self._retry_make_join,
+                event.state_key,
+                event.state_key,
+                event.room_id,
+                "join",
+                bg_start_span=False,
+            )
+
+            if is_direct_message:
+                # Mark this room as a direct message!
+                await self._mark_room_as_direct_message(
+                    event.state_key, event.sender, event.room_id
+                )
+
+    async def _mark_room_as_direct_message(
+        self, user_id: str, dm_user_id: str, room_id: str
+    ) -> None:
+        """
+        Marks a room (`room_id`) as a direct message with the counterparty `dm_user_id`
+        from the perspective of the user `user_id`.
+
+        Args:
+            user_id: the user for whom the membership is changing
+            dm_user_id: the user performing the membership change
+            room_id: room id of the room the user is invited to
+        """
+
+        # This is a dict of User IDs to tuples of Room IDs
+        # (get_global will return a frozendict of tuples as it freezes the data,
+        # but we should accept either frozen or unfrozen variants.)
+        # Be careful: we convert the outer frozendict into a dict here,
+        # but the contents of the dict are still frozen (tuples in lieu of lists,
+        # etc.)
+        dm_map: Dict[str, Tuple[str, ...]] = dict(
+            await self._api.account_data_manager.get_global(
+                user_id, AccountDataTypes.DIRECT
+            )
+            or {}
+        )
+
+        if dm_user_id not in dm_map:
+            dm_map[dm_user_id] = (room_id,)
+        else:
+            dm_rooms_for_user = dm_map[dm_user_id]
+            assert isinstance(dm_rooms_for_user, (tuple, list))
+
+            dm_map[dm_user_id] = tuple(dm_rooms_for_user) + (room_id,)
+
+        await self._api.account_data_manager.put_global(
+            user_id, AccountDataTypes.DIRECT, dm_map
+        )
+
+    async def _retry_make_join(
+        self, sender: str, target: str, room_id: str, new_membership: str
+    ) -> None:
+        """
+        A function to retry sending the `make_join` request with an increasing backoff. This is
+        implemented to work around a race condition when receiving invites over federation.
+
+        Args:
+            sender: the user performing the membership change
+            target: the user for whom the membership is changing
+            room_id: room id of the room to join to
+            new_membership: the type of membership event (in this case will be "join")
+        """
+
+        sleep = 0
+        retries = 0
+        join_event = None
+
+        while retries < 5:
+            try:
+                await self._api.sleep(sleep)
+                join_event = await self._api.update_room_membership(
+                    sender=sender,
+                    target=target,
+                    room_id=room_id,
+                    new_membership=new_membership,
+                )
+            except SynapseError as e:
+                if e.code == HTTPStatus.FORBIDDEN:
+                    logger.debug(
+                        f"Update_room_membership was forbidden. This can sometimes be expected for remote invites. Exception: {e}"
+                    )
+                else:
+                    logger.warn(
+                        f"Update_room_membership raised the following unexpected (SynapseError) exception: {e}"
+                    )
+            except Exception as e:
+                logger.warn(
+                    f"Update_room_membership raised the following unexpected exception: {e}"
+                )
+
+            sleep = 2**retries
+            retries += 1
+
+            if join_event is not None:
+                break
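`_retry_make_join` above sleeps `2**retries` seconds between attempts, capped at five tries. The resulting delay schedule, extracted into an isolated sketch:

```python
def backoff_schedule(max_retries: int = 5) -> list:
    # Mirrors the loop in _retry_make_join: the first attempt is immediate,
    # then each retry doubles the sleep (2**retries seconds).
    sleeps = []
    sleep = 0
    retries = 0
    while retries < max_retries:
        sleeps.append(sleep)
        sleep = 2**retries
        retries += 1
    return sleeps
```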
@@ -49,7 +49,7 @@ from synapse.api.errors import Codes, SynapseError
 from synapse.api.room_versions import RoomVersion
 from synapse.types import JsonDict, Requester
 
-from . import EventBase
+from . import EventBase, make_event_from_dict
 
 if TYPE_CHECKING:
     from synapse.handlers.relations import BundledAggregations

@@ -82,17 +82,14 @@ def prune_event(event: EventBase) -> EventBase:
     """
     pruned_event_dict = prune_event_dict(event.room_version, event.get_dict())
 
-    from . import make_event_from_dict
-
     pruned_event = make_event_from_dict(
         pruned_event_dict, event.room_version, event.internal_metadata.get_dict()
     )
 
-    # copy the internal fields
+    # Copy the bits of `internal_metadata` that aren't returned by `get_dict`
     pruned_event.internal_metadata.stream_ordering = (
         event.internal_metadata.stream_ordering
     )
 
     pruned_event.internal_metadata.outlier = event.internal_metadata.outlier
 
     # Mark the event as redacted

@@ -101,6 +98,29 @@ def prune_event(event: EventBase) -> EventBase:
     return pruned_event
 
 
+def clone_event(event: EventBase) -> EventBase:
+    """Take a copy of the event.
+
+    This is mostly useful because it does a *shallow* copy of the `unsigned` data,
+    which means it can then be updated without corrupting the in-memory cache. Note that
+    other properties of the event, such as `content`, are *not* (currently) copied here.
+    """
+    # XXX: We rely on at least one of `event.get_dict()` and `make_event_from_dict()`
+    # making a copy of `unsigned`. Currently, both do, though I don't really know why.
+    # Still, as long as they do, there's not much point doing yet another copy here.
+    new_event = make_event_from_dict(
+        event.get_dict(), event.room_version, event.internal_metadata.get_dict()
+    )
+
+    # Copy the bits of `internal_metadata` that aren't returned by `get_dict`.
+    new_event.internal_metadata.stream_ordering = (
+        event.internal_metadata.stream_ordering
+    )
+    new_event.internal_metadata.outlier = event.internal_metadata.outlier
+
+    return new_event
+
+
 def prune_event_dict(room_version: RoomVersion, event_dict: JsonDict) -> JsonDict:
     """Redacts the event_dict in the same way as `prune_event`, except it
     operates on dicts rather than event objects
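The new `clone_event` exists so that `unsigned` can be annotated without corrupting the cached event. The hazard it guards against can be shown with plain dicts (a sketch only, not Synapse's `EventBase`):

```python
# Mutating a cached event's `unsigned` directly corrupts every reader of
# the cache, because they all share the same inner dict:
cached_event = {"event_id": "$e1", "unsigned": {"age": 1234}}
alias = cached_event
alias["unsigned"]["membership"] = "join"
corrupted = "membership" in cached_event["unsigned"]  # True: cache polluted

# A clone with a freshly (shallow-)copied `unsigned` is safe to annotate:
cached_event = {"event_id": "$e1", "unsigned": {"age": 1234}}
clone = dict(cached_event)
clone["unsigned"] = dict(cached_event["unsigned"])
clone["unsigned"]["membership"] = "join"
```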
@@ -546,7 +546,25 @@ class FederationServer(FederationBase):
                 edu_type=edu_dict["edu_type"],
                 content=edu_dict["content"],
             )
-            await self.registry.on_edu(edu.edu_type, origin, edu.content)
+            try:
+                await self.registry.on_edu(edu.edu_type, origin, edu.content)
+            except Exception:
+                # If there was an error handling the EDU, we must reject the
+                # transaction.
+                #
+                # Some EDU types (notably, to-device messages) are, despite their name,
+                # expected to be reliable; if we weren't able to do something with it,
+                # we have to tell the sender that, and the only way the protocol gives
+                # us to do so is by sending an HTTP error back on the transaction.
+                #
+                # We log the exception now, and then raise a new SynapseError to cause
+                # the transaction to be failed.
+                logger.exception("Error handling EDU of type %s", edu.edu_type)
+                raise SynapseError(500, f"Error handing EDU of type {edu.edu_type}")
+
+            # TODO: if the first EDU fails, we should probably abort the whole
+            # thing rather than carrying on with the rest of them. That would
+            # probably be best done inside `concurrently_execute`.
 
         await concurrently_execute(
             _process_edu,

@@ -1414,12 +1432,7 @@ class FederationHandlerRegistry:
         handler = self.edu_handlers.get(edu_type)
         if handler:
             with start_active_span_from_edu(content, "handle_edu"):
-                try:
-                    await handler(origin, content)
-                except SynapseError as e:
-                    logger.info("Failed to handle edu %r: %r", edu_type, e)
-                except Exception:
-                    logger.exception("Failed to handle edu %r", edu_type)
+                await handler(origin, content)
             return
 
         # Check if we can route it somewhere else that isn't us

@@ -1428,17 +1441,12 @@ class FederationHandlerRegistry:
             # Pick an instance randomly so that we don't overload one.
             route_to = random.choice(instances)
 
-            try:
-                await self._send_edu(
-                    instance_name=route_to,
-                    edu_type=edu_type,
-                    origin=origin,
-                    content=content,
-                )
-            except SynapseError as e:
-                logger.info("Failed to handle edu %r: %r", edu_type, e)
-            except Exception:
-                logger.exception("Failed to handle edu %r", edu_type)
+            await self._send_edu(
+                instance_name=route_to,
+                edu_type=edu_type,
+                origin=origin,
+                content=content,
+            )
             return
 
         # Oh well, let's just log and move on.
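The hunks above move EDU error handling from per-handler swallowing to failing the whole transaction. A toy model of that control-flow change (`RuntimeError` standing in for Synapse's `SynapseError(500)`; all names hypothetical):

```python
def process_transaction(edus, handlers):
    # Sketch: a failure in any handler now propagates and rejects the whole
    # /send transaction, instead of being logged and ignored per handler.
    for edu_type, content in edus:
        handler = handlers.get(edu_type)
        if handler is None:
            continue  # unknown EDU types are still skipped
        try:
            handler(content)
        except Exception as e:
            raise RuntimeError(f"Error handling EDU of type {edu_type}") from e
    return "ok"

def broken_handler(content):
    raise ValueError("boom")
```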
@@ -180,7 +180,11 @@ def _parse_auth_header(header_bytes: bytes) -> Tuple[str, str, str, Optional[str
     """
     try:
         header_str = header_bytes.decode("utf-8")
-        params = re.split(" +", header_str)[1].split(",")
+        space_or_tab = "[ \t]"
+        params = re.split(
+            rf"{space_or_tab}*,{space_or_tab}*",
+            re.split(r"^X-Matrix +", header_str, maxsplit=1)[1],
+        )
         param_dict: Dict[str, str] = {
             k.lower(): v for k, v in [param.split("=", maxsplit=1) for param in params]
         }
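The replacement regexes above tolerate whitespace around the commas when splitting an `X-Matrix` Authorization header. The same two-stage split, extracted into a standalone sketch (the function name is hypothetical):

```python
import re

def split_x_matrix_params(header_str: str) -> dict:
    # First strip the "X-Matrix " auth scheme, then split the parameter
    # list on commas with optional surrounding spaces/tabs.
    space_or_tab = "[ \t]"
    params = re.split(
        rf"{space_or_tab}*,{space_or_tab}*",
        re.split(r"^X-Matrix +", header_str, maxsplit=1)[1],
    )
    return {k.lower(): v for k, v in (p.split("=", maxsplit=1) for p in params)}
```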
@@ -42,6 +42,7 @@ class AdminHandler:
         self._device_handler = hs.get_device_handler()
         self._storage_controllers = hs.get_storage_controllers()
         self._state_storage_controller = self._storage_controllers.state
+        self._hs_config = hs.config
         self._msc3866_enabled = hs.config.experimental.msc3866.enabled
 
     async def get_whois(self, user: UserID) -> JsonMapping:

@@ -217,7 +218,10 @@ class AdminHandler:
             )
 
             events = await filter_events_for_client(
-                self._storage_controllers, user_id, events
+                self._storage_controllers,
+                user_id,
+                events,
+                msc4115_membership_on_events=self._hs_config.experimental.msc4115_membership_on_events,
             )
 
             writer.write_events(room_id, events)
@@ -78,6 +78,8 @@ class CasHandler:
         self._cas_displayname_attribute = hs.config.cas.cas_displayname_attribute
         self._cas_required_attributes = hs.config.cas.cas_required_attributes
         self._cas_enable_registration = hs.config.cas.cas_enable_registration
+        self._cas_allow_numeric_ids = hs.config.cas.cas_allow_numeric_ids
+        self._cas_numeric_ids_prefix = hs.config.cas.cas_numeric_ids_prefix
 
         self._http_client = hs.get_proxied_http_client()
 

@@ -188,6 +190,9 @@ class CasHandler:
         for child in root[0]:
             if child.tag.endswith("user"):
                 user = child.text
+                # if numeric user IDs are allowed and username is numeric then we add the prefix so Synapse can handle it
+                if self._cas_allow_numeric_ids and user is not None and user.isdigit():
+                    user = f"{self._cas_numeric_ids_prefix}{user}"
             if child.tag.endswith("attributes"):
                 for attribute in child:
                     # ElementTree library expands the namespace in
@@ -159,20 +159,32 @@ class DeviceWorkerHandler:
 
     @cancellable
     async def get_device_changes_in_shared_rooms(
-        self, user_id: str, room_ids: StrCollection, from_token: StreamToken
+        self,
+        user_id: str,
+        room_ids: StrCollection,
+        from_token: StreamToken,
+        now_token: Optional[StreamToken] = None,
     ) -> Set[str]:
         """Get the set of users whose devices have changed who share a room with
         the given user.
         """
+        now_device_lists_key = self.store.get_device_stream_token()
+        if now_token:
+            now_device_lists_key = now_token.device_list_key
+
         changed_users = await self.store.get_device_list_changes_in_rooms(
-            room_ids, from_token.device_list_key
+            room_ids,
+            from_token.device_list_key,
+            now_device_lists_key,
         )
 
         if changed_users is not None:
             # We also check if the given user has changed their device. If
             # they're in no rooms then the above query won't include them.
             changed = await self.store.get_users_whose_devices_changed(
-                from_token.device_list_key, [user_id]
+                from_token.device_list_key,
+                [user_id],
+                to_key=now_device_lists_key,
             )
             changed_users.update(changed)
             return changed_users

@@ -190,7 +202,9 @@ class DeviceWorkerHandler:
         tracked_users.add(user_id)
 
         changed = await self.store.get_users_whose_devices_changed(
-            from_token.device_list_key, tracked_users
+            from_token.device_list_key,
+            tracked_users,
+            to_key=now_device_lists_key,
         )
 
         return changed

@@ -892,6 +906,13 @@ class DeviceHandler(DeviceWorkerHandler):
                     context=opentracing_context,
                 )
 
+                await self.store.mark_redundant_device_lists_pokes(
+                    user_id=user_id,
+                    device_id=device_id,
+                    room_id=room_id,
+                    converted_upto_stream_id=stream_id,
+                )
+
                 # Notify replication that we've updated the device list stream.
                 self.notifier.notify_replication()
@@ -104,6 +104,9 @@ class DeviceMessageHandler:
         """
         Handle receiving to-device messages from remote homeservers.
 
+        Note that any errors thrown from this method will cause the federation /send
+        request to receive an error response.
+
         Args:
             origin: The remote homeserver.
             content: The JSON dictionary containing the to-device messages.
@@ -148,6 +148,7 @@ class EventHandler:
     def __init__(self, hs: "HomeServer"):
         self.store = hs.get_datastores().main
         self._storage_controllers = hs.get_storage_controllers()
+        self._config = hs.config
 
     async def get_event(
         self,

@@ -189,7 +190,11 @@ class EventHandler:
         is_peeking = not is_user_in_room
 
         filtered = await filter_events_for_client(
-            self._storage_controllers, user.to_string(), [event], is_peeking=is_peeking
+            self._storage_controllers,
+            user.to_string(),
+            [event],
+            is_peeking=is_peeking,
+            msc4115_membership_on_events=self._config.experimental.msc4115_membership_on_events,
         )
 
         if not filtered:
@@ -221,7 +221,10 @@ class InitialSyncHandler:
                 ).addErrback(unwrapFirstError)
 
                 messages = await filter_events_for_client(
-                    self._storage_controllers, user_id, messages
+                    self._storage_controllers,
+                    user_id,
+                    messages,
+                    msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
                 )
 
                 start_token = now_token.copy_and_replace(StreamKeyType.ROOM, token)

@@ -380,6 +383,7 @@ class InitialSyncHandler:
             requester.user.to_string(),
             messages,
             is_peeking=is_peeking,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         start_token = StreamToken.START.copy_and_replace(StreamKeyType.ROOM, token)

@@ -494,6 +498,7 @@ class InitialSyncHandler:
             requester.user.to_string(),
             messages,
             is_peeking=is_peeking,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         start_token = now_token.copy_and_replace(StreamKeyType.ROOM, token)
@@ -623,6 +623,7 @@ class PaginationHandler:
             user_id,
             events,
             is_peeking=(member_event_id is None),
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         # if after the filter applied there are no more events
@@ -20,7 +20,7 @@
 #
 import logging
 import random
-from typing import TYPE_CHECKING, Optional, Union
+from typing import TYPE_CHECKING, List, Optional, Union
 
 from synapse.api.errors import (
     AuthError,
@@ -64,8 +64,10 @@ class ProfileHandler:
         self.user_directory_handler = hs.get_user_directory_handler()
         self.request_ratelimiter = hs.get_request_ratelimiter()
 
-        self.max_avatar_size = hs.config.server.max_avatar_size
-        self.allowed_avatar_mimetypes = hs.config.server.allowed_avatar_mimetypes
+        self.max_avatar_size: Optional[int] = hs.config.server.max_avatar_size
+        self.allowed_avatar_mimetypes: Optional[List[str]] = (
+            hs.config.server.allowed_avatar_mimetypes
+        )
 
         self._is_mine_server_name = hs.is_mine_server_name
 
@@ -337,6 +339,12 @@ class ProfileHandler:
             return False
 
         if self.max_avatar_size:
+            if media_info.media_length is None:
+                logger.warning(
+                    "Forbidding avatar change to %s: unknown media size",
+                    mxc,
+                )
+                return False
             # Ensure avatar does not exceed max allowed avatar size
             if media_info.media_length > self.max_avatar_size:
                 logger.warning(
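The profile hunk above adds a guard for media whose length is unknown before comparing against the configured limit. A reduced, standalone sketch of that check (the function name and shape here are illustrative, not Synapse's actual API):

```python
from typing import Optional


def avatar_size_ok(media_length: Optional[int], max_avatar_size: Optional[int]) -> bool:
    """Mirror of the diff's logic: when a size limit is configured, an
    unknown media size is rejected, as is any size over the limit."""
    if max_avatar_size:
        if media_length is None:
            # Unknown size: forbid the avatar change, matching the new branch.
            return False
        if media_length > max_avatar_size:
            return False
    return True
```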
@@ -590,7 +590,7 @@ class RegistrationHandler:
                     # moving away from bare excepts is a good thing to do.
                     logger.error("Failed to join new user to %r: %r", r, e)
                 except Exception as e:
-                    logger.error("Failed to join new user to %r: %r", r, e)
+                    logger.error("Failed to join new user to %r: %r", r, e, exc_info=True)
 
     async def _auto_join_rooms(self, user_id: str) -> None:
         """Automatically joins users to auto join rooms - creating the room in the first place
@@ -95,6 +95,7 @@ class RelationsHandler:
         self._event_handler = hs.get_event_handler()
         self._event_serializer = hs.get_event_client_serializer()
         self._event_creation_handler = hs.get_event_creation_handler()
+        self._config = hs.config
 
     async def get_relations(
         self,
@@ -163,6 +164,7 @@ class RelationsHandler:
             user_id,
             events,
             is_peeking=(member_event_id is None),
+            msc4115_membership_on_events=self._config.experimental.msc4115_membership_on_events,
         )
 
         # The relations returned for the requested event do include their
@@ -608,6 +610,7 @@ class RelationsHandler:
             user_id,
             events,
             is_peeking=(member_event_id is None),
+            msc4115_membership_on_events=self._config.experimental.msc4115_membership_on_events,
         )
 
         aggregations = await self.get_bundled_aggregations(
@@ -1476,6 +1476,7 @@ class RoomContextHandler:
             user.to_string(),
             events,
             is_peeking=is_peeking,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
 
         event = await self.store.get_event(
@@ -752,6 +752,36 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
             and requester.user.to_string() == self._server_notices_mxid
         )
 
+        requester_suspended = await self.store.get_user_suspended_status(
+            requester.user.to_string()
+        )
+        if action == Membership.INVITE and requester_suspended:
+            raise SynapseError(
+                403,
+                "Sending invites while account is suspended is not allowed.",
+                Codes.USER_ACCOUNT_SUSPENDED,
+            )
+
+        if target.to_string() != requester.user.to_string():
+            target_suspended = await self.store.get_user_suspended_status(
+                target.to_string()
+            )
+        else:
+            target_suspended = requester_suspended
+
+        if action == Membership.JOIN and target_suspended:
+            raise SynapseError(
+                403,
+                "Joining rooms while account is suspended is not allowed.",
+                Codes.USER_ACCOUNT_SUSPENDED,
+            )
+        if action == Membership.KNOCK and target_suspended:
+            raise SynapseError(
+                403,
+                "Knocking on rooms while account is suspended is not allowed.",
+                Codes.USER_ACCOUNT_SUSPENDED,
+            )
+
         if (
             not self.allow_per_room_profiles and not is_requester_server_notices_user
         ) or requester.shadow_banned:
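The membership hunk above reduces to a small decision function: a suspended requester may not invite, and a suspended target (or the requester themself) may not join or knock. A minimal sketch of that rule set, using hypothetical names and a plain `PermissionError` in place of `SynapseError`:

```python
from enum import Enum


class Membership(str, Enum):
    INVITE = "invite"
    JOIN = "join"
    KNOCK = "knock"


def check_membership_allowed(
    action: Membership, requester_suspended: bool, target_suspended: bool
) -> None:
    """Raise if a suspended account attempts an invite, join, or knock."""
    if action is Membership.INVITE and requester_suspended:
        raise PermissionError("Sending invites while account is suspended is not allowed.")
    if action is Membership.JOIN and target_suspended:
        raise PermissionError("Joining rooms while account is suspended is not allowed.")
    if action is Membership.KNOCK and target_suspended:
        raise PermissionError("Knocking on rooms while account is suspended is not allowed.")
```

Note that, as in the diff, an invite is blocked on the *requester's* suspension while join and knock are blocked on the *target's*.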
@@ -480,7 +480,10 @@ class SearchHandler:
             filtered_events = await search_filter.filter([r["event"] for r in results])
 
             events = await filter_events_for_client(
-                self._storage_controllers, user.to_string(), filtered_events
+                self._storage_controllers,
+                user.to_string(),
+                filtered_events,
+                msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
             )
 
             events.sort(key=lambda e: -rank_map[e.event_id])
@@ -579,7 +582,10 @@ class SearchHandler:
             filtered_events = await search_filter.filter([r["event"] for r in results])
 
             events = await filter_events_for_client(
-                self._storage_controllers, user.to_string(), filtered_events
+                self._storage_controllers,
+                user.to_string(),
+                filtered_events,
+                msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
             )
 
             room_events.extend(events)
@@ -664,11 +670,17 @@ class SearchHandler:
             )
 
             events_before = await filter_events_for_client(
-                self._storage_controllers, user.to_string(), res.events_before
+                self._storage_controllers,
+                user.to_string(),
+                res.events_before,
+                msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
             )
 
             events_after = await filter_events_for_client(
-                self._storage_controllers, user.to_string(), res.events_after
+                self._storage_controllers,
+                user.to_string(),
+                res.events_after,
+                msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
             )
 
             context: JsonDict = {
@@ -169,6 +169,7 @@ class UsernameMappingSession:
     # attributes returned by the ID mapper
    display_name: Optional[str]
    emails: StrCollection
+    avatar_url: Optional[str]
 
     # An optional dictionary of extra attributes to be provided to the client in the
     # login response.
@@ -183,6 +184,7 @@ class UsernameMappingSession:
     # choices made by the user
    chosen_localpart: Optional[str] = None
    use_display_name: bool = True
+    use_avatar: bool = True
    emails_to_use: StrCollection = ()
    terms_accepted_version: Optional[str] = None
 
@@ -660,6 +662,9 @@ class SsoHandler:
             remote_user_id=remote_user_id,
             display_name=attributes.display_name,
             emails=attributes.emails,
+            avatar_url=attributes.picture,
+            # Default to using all mapped emails. Will be overwritten in handle_submit_username_request.
+            emails_to_use=attributes.emails,
             client_redirect_url=client_redirect_url,
             expiry_time_ms=now + self._MAPPING_SESSION_VALIDITY_PERIOD_MS,
             extra_login_attributes=extra_login_attributes,
@@ -812,7 +817,7 @@ class SsoHandler:
         server_name = profile["avatar_url"].split("/")[-2]
         media_id = profile["avatar_url"].split("/")[-1]
         if self._is_mine_server_name(server_name):
-            media = await self._media_repo.store.get_local_media(media_id)
+            media = await self._media_repo.store.get_local_media(media_id)  # type: ignore[has-type]
             if media is not None and upload_name == media.upload_name:
                 logger.info("skipping saving the user avatar")
                 return True
@@ -966,6 +971,7 @@ class SsoHandler:
         session_id: str,
         localpart: str,
         use_display_name: bool,
+        use_avatar: bool,
         emails_to_use: Iterable[str],
     ) -> None:
         """Handle a request to the username-picker 'submit' endpoint
@@ -988,6 +994,7 @@ class SsoHandler:
         # update the session with the user's choices
         session.chosen_localpart = localpart
         session.use_display_name = use_display_name
+        session.use_avatar = use_avatar
 
         emails_from_idp = set(session.emails)
         filtered_emails: Set[str] = set()
@@ -1068,6 +1075,9 @@ class SsoHandler:
         if session.use_display_name:
             attributes.display_name = session.display_name
 
+        if session.use_avatar:
+            attributes.picture = session.avatar_url
+
         # the following will raise a 400 error if the username has been taken in the
         # meantime.
         user_id = await self._register_mapped_user(
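The sync handler changes that follow lean on `typing.overload` with `Literal` enum members so that callers passing a specific sync version get the matching concrete return type, while the single runtime implementation returns the union. A minimal standalone sketch of that pattern (the names and return types here are illustrative, not Synapse's):

```python
from enum import Enum
from typing import Dict, List, Literal, Union, overload


class SyncVersion(Enum):
    SYNC_V2 = "sync_v2"
    E2EE_SYNC = "e2ee_sync"


@overload
def build_result(version: Literal[SyncVersion.SYNC_V2]) -> Dict[str, str]: ...
@overload
def build_result(version: Literal[SyncVersion.E2EE_SYNC]) -> List[str]: ...
def build_result(version: SyncVersion) -> Union[Dict[str, str], List[str]]:
    # One runtime implementation; the overloads exist purely so a type
    # checker (e.g. mypy) can narrow the return type from the enum
    # member that was passed in.
    if version is SyncVersion.SYNC_V2:
        return {"next_batch": "s1"}
    return ["to_device_event"]
```

With this shape, `build_result(SyncVersion.SYNC_V2)` type-checks as a dict and `build_result(SyncVersion.E2EE_SYNC)` as a list, without duplicating the dispatch logic.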
@ -20,6 +20,7 @@
|
||||||
#
|
#
|
||||||
import itertools
|
import itertools
|
||||||
import logging
|
import logging
|
||||||
|
from enum import Enum
|
||||||
from typing import (
|
from typing import (
|
||||||
TYPE_CHECKING,
|
TYPE_CHECKING,
|
||||||
AbstractSet,
|
AbstractSet,
|
||||||
|
@ -27,11 +28,14 @@ from typing import (
|
||||||
Dict,
|
Dict,
|
||||||
FrozenSet,
|
FrozenSet,
|
||||||
List,
|
List,
|
||||||
|
Literal,
|
||||||
Mapping,
|
Mapping,
|
||||||
Optional,
|
Optional,
|
||||||
Sequence,
|
Sequence,
|
||||||
Set,
|
Set,
|
||||||
Tuple,
|
Tuple,
|
||||||
|
Union,
|
||||||
|
overload,
|
||||||
)
|
)
|
||||||
|
|
||||||
import attr
|
import attr
|
||||||
|
@ -112,12 +116,30 @@ LAZY_LOADED_MEMBERS_CACHE_MAX_SIZE = 100
|
||||||
SyncRequestKey = Tuple[Any, ...]
|
SyncRequestKey = Tuple[Any, ...]
|
||||||
|
|
||||||
|
|
||||||
|
class SyncVersion(Enum):
|
||||||
|
"""
|
||||||
|
Enum for specifying the version of sync request. This is used to key which type of
|
||||||
|
sync response that we are generating.
|
||||||
|
|
||||||
|
This is different than the `sync_type` you might see used in other code below; which
|
||||||
|
specifies the sub-type sync request (e.g. initial_sync, full_state_sync,
|
||||||
|
incremental_sync) and is really only relevant for the `/sync` v2 endpoint.
|
||||||
|
"""
|
||||||
|
|
||||||
|
# These string values are semantically significant because they are used in the the
|
||||||
|
# metrics
|
||||||
|
|
||||||
|
# Traditional `/sync` endpoint
|
||||||
|
SYNC_V2 = "sync_v2"
|
||||||
|
# Part of MSC3575 Sliding Sync
|
||||||
|
E2EE_SYNC = "e2ee_sync"
|
||||||
|
|
||||||
|
|
||||||
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||||
class SyncConfig:
|
class SyncConfig:
|
||||||
user: UserID
|
user: UserID
|
||||||
filter_collection: FilterCollection
|
filter_collection: FilterCollection
|
||||||
is_guest: bool
|
is_guest: bool
|
||||||
request_key: SyncRequestKey
|
|
||||||
device_id: Optional[str]
|
device_id: Optional[str]
|
||||||
|
|
||||||
|
|
||||||
|
@ -263,6 +285,26 @@ class SyncResult:
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@attr.s(slots=True, frozen=True, auto_attribs=True)
|
||||||
|
class E2eeSyncResult:
|
||||||
|
"""
|
||||||
|
Attributes:
|
||||||
|
next_batch: Token for the next sync
|
||||||
|
to_device: List of direct messages for the device.
|
||||||
|
device_lists: List of user_ids whose devices have changed
|
||||||
|
device_one_time_keys_count: Dict of algorithm to count for one time keys
|
||||||
|
for this device
|
||||||
|
device_unused_fallback_key_types: List of key types that have an unused fallback
|
||||||
|
key
|
||||||
|
"""
|
||||||
|
|
||||||
|
next_batch: StreamToken
|
||||||
|
to_device: List[JsonDict]
|
||||||
|
device_lists: DeviceListUpdates
|
||||||
|
device_one_time_keys_count: JsonMapping
|
||||||
|
device_unused_fallback_key_types: List[str]
|
||||||
|
|
||||||
|
|
||||||
class SyncHandler:
|
class SyncHandler:
|
||||||
def __init__(self, hs: "HomeServer"):
|
def __init__(self, hs: "HomeServer"):
|
||||||
self.hs_config = hs.config
|
self.hs_config = hs.config
|
||||||
|
@ -305,17 +347,68 @@ class SyncHandler:
|
||||||
|
|
||||||
self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync
|
self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync
|
||||||
|
|
||||||
|
@overload
|
||||||
async def wait_for_sync_for_user(
|
async def wait_for_sync_for_user(
|
||||||
self,
|
self,
|
||||||
requester: Requester,
|
requester: Requester,
|
||||||
sync_config: SyncConfig,
|
sync_config: SyncConfig,
|
||||||
|
sync_version: Literal[SyncVersion.SYNC_V2],
|
||||||
|
request_key: SyncRequestKey,
|
||||||
since_token: Optional[StreamToken] = None,
|
since_token: Optional[StreamToken] = None,
|
||||||
timeout: int = 0,
|
timeout: int = 0,
|
||||||
full_state: bool = False,
|
full_state: bool = False,
|
||||||
) -> SyncResult:
|
) -> SyncResult: ...
|
||||||
|
|
||||||
|
@overload
|
||||||
|
async def wait_for_sync_for_user(
|
||||||
|
self,
|
||||||
|
requester: Requester,
|
||||||
|
sync_config: SyncConfig,
|
||||||
|
sync_version: Literal[SyncVersion.E2EE_SYNC],
|
||||||
|
request_key: SyncRequestKey,
|
||||||
|
since_token: Optional[StreamToken] = None,
|
||||||
|
timeout: int = 0,
|
||||||
|
full_state: bool = False,
|
||||||
|
) -> E2eeSyncResult: ...
|
||||||
|
|
||||||
|
@overload
|
||||||
|
async def wait_for_sync_for_user(
|
||||||
|
self,
|
||||||
|
requester: Requester,
|
||||||
|
sync_config: SyncConfig,
|
||||||
|
sync_version: SyncVersion,
|
||||||
|
request_key: SyncRequestKey,
|
||||||
|
since_token: Optional[StreamToken] = None,
|
||||||
|
timeout: int = 0,
|
||||||
|
full_state: bool = False,
|
||||||
|
) -> Union[SyncResult, E2eeSyncResult]: ...
|
||||||
|
|
||||||
|
async def wait_for_sync_for_user(
|
||||||
|
self,
|
||||||
|
requester: Requester,
|
||||||
|
sync_config: SyncConfig,
|
||||||
|
sync_version: SyncVersion,
|
||||||
|
request_key: SyncRequestKey,
|
||||||
|
since_token: Optional[StreamToken] = None,
|
||||||
|
timeout: int = 0,
|
||||||
|
full_state: bool = False,
|
||||||
|
) -> Union[SyncResult, E2eeSyncResult]:
|
||||||
"""Get the sync for a client if we have new data for it now. Otherwise
|
"""Get the sync for a client if we have new data for it now. Otherwise
|
||||||
wait for new data to arrive on the server. If the timeout expires, then
|
wait for new data to arrive on the server. If the timeout expires, then
|
||||||
return an empty sync result.
|
return an empty sync result.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
requester: The user requesting the sync response.
|
||||||
|
sync_config: Config/info necessary to process the sync request.
|
||||||
|
sync_version: Determines what kind of sync response to generate.
|
||||||
|
request_key: The key to use for caching the response.
|
||||||
|
since_token: The point in the stream to sync from.
|
||||||
|
timeout: How long to wait for new data to arrive before giving up.
|
||||||
|
full_state: Whether to return the full state for each room.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
When `SyncVersion.SYNC_V2`, returns a full `SyncResult`.
|
||||||
|
When `SyncVersion.E2EE_SYNC`, returns a `E2eeSyncResult`.
|
||||||
"""
|
"""
|
||||||
# If the user is not part of the mau group, then check that limits have
|
# If the user is not part of the mau group, then check that limits have
|
||||||
# not been exceeded (if not part of the group by this point, almost certain
|
# not been exceeded (if not part of the group by this point, almost certain
|
||||||
|
@ -324,9 +417,10 @@ class SyncHandler:
|
||||||
await self.auth_blocking.check_auth_blocking(requester=requester)
|
await self.auth_blocking.check_auth_blocking(requester=requester)
|
||||||
|
|
||||||
res = await self.response_cache.wrap(
|
res = await self.response_cache.wrap(
|
||||||
sync_config.request_key,
|
request_key,
|
||||||
self._wait_for_sync_for_user,
|
self._wait_for_sync_for_user,
|
||||||
sync_config,
|
sync_config,
|
||||||
|
sync_version,
|
||||||
since_token,
|
since_token,
|
||||||
timeout,
|
timeout,
|
||||||
full_state,
|
full_state,
|
||||||
|
@ -335,14 +429,48 @@ class SyncHandler:
|
||||||
logger.debug("Returning sync response for %s", user_id)
|
logger.debug("Returning sync response for %s", user_id)
|
||||||
return res
|
return res
|
||||||
|
|
||||||
|
@overload
|
||||||
async def _wait_for_sync_for_user(
|
async def _wait_for_sync_for_user(
|
||||||
self,
|
self,
|
||||||
sync_config: SyncConfig,
|
sync_config: SyncConfig,
|
||||||
|
sync_version: Literal[SyncVersion.SYNC_V2],
|
||||||
since_token: Optional[StreamToken],
|
since_token: Optional[StreamToken],
|
||||||
timeout: int,
|
timeout: int,
|
||||||
full_state: bool,
|
full_state: bool,
|
||||||
cache_context: ResponseCacheContext[SyncRequestKey],
|
cache_context: ResponseCacheContext[SyncRequestKey],
|
||||||
) -> SyncResult:
|
) -> SyncResult: ...
|
||||||
|
|
||||||
|
@overload
|
||||||
|
async def _wait_for_sync_for_user(
|
||||||
|
self,
|
||||||
|
sync_config: SyncConfig,
|
||||||
|
sync_version: Literal[SyncVersion.E2EE_SYNC],
|
||||||
|
since_token: Optional[StreamToken],
|
||||||
|
timeout: int,
|
||||||
|
full_state: bool,
|
||||||
|
cache_context: ResponseCacheContext[SyncRequestKey],
|
||||||
|
) -> E2eeSyncResult: ...
|
||||||
|
|
||||||
|
@overload
|
||||||
|
async def _wait_for_sync_for_user(
|
||||||
|
self,
|
||||||
|
sync_config: SyncConfig,
|
||||||
|
sync_version: SyncVersion,
|
||||||
|
since_token: Optional[StreamToken],
|
||||||
|
timeout: int,
|
||||||
|
full_state: bool,
|
||||||
|
cache_context: ResponseCacheContext[SyncRequestKey],
|
||||||
|
) -> Union[SyncResult, E2eeSyncResult]: ...
|
||||||
|
|
||||||
|
async def _wait_for_sync_for_user(
|
||||||
|
self,
|
||||||
|
sync_config: SyncConfig,
|
||||||
|
sync_version: SyncVersion,
|
||||||
|
since_token: Optional[StreamToken],
|
||||||
|
timeout: int,
|
||||||
|
full_state: bool,
|
||||||
|
cache_context: ResponseCacheContext[SyncRequestKey],
|
||||||
|
) -> Union[SyncResult, E2eeSyncResult]:
|
||||||
"""The start of the machinery that produces a /sync response.
|
"""The start of the machinery that produces a /sync response.
|
||||||
|
|
||||||
See https://spec.matrix.org/v1.1/client-server-api/#syncing for full details.
|
See https://spec.matrix.org/v1.1/client-server-api/#syncing for full details.
|
||||||
|
@ -363,9 +491,11 @@ class SyncHandler:
|
||||||
else:
|
else:
|
||||||
sync_type = "incremental_sync"
|
sync_type = "incremental_sync"
|
||||||
|
|
||||||
|
sync_label = f"{sync_version}:{sync_type}"
|
||||||
|
|
||||||
context = current_context()
|
context = current_context()
|
||||||
if context:
|
if context:
|
||||||
context.tag = sync_type
|
context.tag = sync_label
|
||||||
|
|
||||||
# if we have a since token, delete any to-device messages before that token
|
# if we have a since token, delete any to-device messages before that token
|
||||||
# (since we now know that the device has received them)
|
# (since we now know that the device has received them)
|
||||||
|
@ -383,15 +513,19 @@ class SyncHandler:
|
||||||
if timeout == 0 or since_token is None or full_state:
|
if timeout == 0 or since_token is None or full_state:
|
||||||
# we are going to return immediately, so don't bother calling
|
# we are going to return immediately, so don't bother calling
|
||||||
# notifier.wait_for_events.
|
# notifier.wait_for_events.
|
||||||
result: SyncResult = await self.current_sync_for_user(
|
result: Union[SyncResult, E2eeSyncResult] = (
|
||||||
sync_config, since_token, full_state=full_state
|
await self.current_sync_for_user(
|
||||||
|
sync_config, sync_version, since_token, full_state=full_state
|
||||||
|
)
|
||||||
)
|
)
|
||||||
else:
|
else:
|
||||||
# Otherwise, we wait for something to happen and report it to the user.
|
# Otherwise, we wait for something to happen and report it to the user.
|
||||||
async def current_sync_callback(
|
async def current_sync_callback(
|
||||||
before_token: StreamToken, after_token: StreamToken
|
before_token: StreamToken, after_token: StreamToken
|
||||||
) -> SyncResult:
|
) -> Union[SyncResult, E2eeSyncResult]:
|
||||||
return await self.current_sync_for_user(sync_config, since_token)
|
return await self.current_sync_for_user(
|
||||||
|
sync_config, sync_version, since_token
|
||||||
|
)
|
||||||
|
|
||||||
result = await self.notifier.wait_for_events(
|
result = await self.notifier.wait_for_events(
|
||||||
sync_config.user.to_string(),
|
sync_config.user.to_string(),
|
||||||
|
@ -416,27 +550,81 @@ class SyncHandler:
|
||||||
lazy_loaded = "true"
|
lazy_loaded = "true"
|
||||||
else:
|
else:
|
||||||
lazy_loaded = "false"
|
lazy_loaded = "false"
|
||||||
non_empty_sync_counter.labels(sync_type, lazy_loaded).inc()
|
non_empty_sync_counter.labels(sync_label, lazy_loaded).inc()
|
||||||
|
|
||||||
return result
|
return result
|
||||||
|
|
||||||
|
@overload
|
||||||
|
async def current_sync_for_user(
|
||||||
|
self,
|
||||||
|
sync_config: SyncConfig,
|
||||||
|
sync_version: Literal[SyncVersion.SYNC_V2],
|
||||||
|
since_token: Optional[StreamToken] = None,
|
||||||
|
full_state: bool = False,
|
||||||
|
) -> SyncResult: ...
|
||||||
|
|
||||||
|
@overload
|
||||||
|
async def current_sync_for_user(
|
||||||
|
self,
|
||||||
|
sync_config: SyncConfig,
|
||||||
|
sync_version: Literal[SyncVersion.E2EE_SYNC],
|
||||||
|
since_token: Optional[StreamToken] = None,
|
||||||
|
full_state: bool = False,
|
||||||
|
) -> E2eeSyncResult: ...
|
||||||
|
|
||||||
|
@overload
|
||||||
|
async def current_sync_for_user(
|
||||||
|
self,
|
||||||
|
sync_config: SyncConfig,
|
||||||
|
sync_version: SyncVersion,
|
||||||
|
since_token: Optional[StreamToken] = None,
|
||||||
|
full_state: bool = False,
|
||||||
|
) -> Union[SyncResult, E2eeSyncResult]: ...
|
||||||
|
|
||||||
async def current_sync_for_user(
|
async def current_sync_for_user(
|
||||||
self,
|
self,
|
||||||
sync_config: SyncConfig,
|
sync_config: SyncConfig,
|
||||||
|
sync_version: SyncVersion,
|
||||||
since_token: Optional[StreamToken] = None,
|
since_token: Optional[StreamToken] = None,
|
||||||
full_state: bool = False,
|
full_state: bool = False,
|
||||||
) -> SyncResult:
|
) -> Union[SyncResult, E2eeSyncResult]:
|
||||||
"""Generates the response body of a sync result, represented as a SyncResult.
|
"""
|
||||||
|
Generates the response body of a sync result, represented as a
|
||||||
|
`SyncResult`/`E2eeSyncResult`.
|
||||||
|
|
||||||
This is a wrapper around `generate_sync_result` which starts an open tracing
|
This is a wrapper around `generate_sync_result` which starts an open tracing
|
||||||
span to track the sync. See `generate_sync_result` for the next part of your
|
span to track the sync. See `generate_sync_result` for the next part of your
|
||||||
indoctrination.
|
indoctrination.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
sync_config: Config/info necessary to process the sync request.
|
||||||
|
sync_version: Determines what kind of sync response to generate.
|
||||||
|
since_token: The point in the stream to sync from.p.
|
||||||
|
full_state: Whether to return the full state for each room.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
When `SyncVersion.SYNC_V2`, returns a full `SyncResult`.
|
||||||
|
When `SyncVersion.E2EE_SYNC`, returns a `E2eeSyncResult`.
|
||||||
"""
|
"""
|
||||||
with start_active_span("sync.current_sync_for_user"):
|
with start_active_span("sync.current_sync_for_user"):
|
||||||
log_kv({"since_token": since_token})
|
log_kv({"since_token": since_token})
|
||||||
sync_result = await self.generate_sync_result(
|
|
||||||
|
# Go through the `/sync` v2 path
|
||||||
|
if sync_version == SyncVersion.SYNC_V2:
|
||||||
|
sync_result: Union[SyncResult, E2eeSyncResult] = (
|
||||||
|
await self.generate_sync_result(
|
||||||
sync_config, since_token, full_state
|
sync_config, since_token, full_state
|
||||||
)
|
)
|
||||||
|
)
|
||||||
|
# Go through the MSC3575 Sliding Sync `/sync/e2ee` path
|
||||||
|
elif sync_version == SyncVersion.E2EE_SYNC:
|
||||||
|
sync_result = await self.generate_e2ee_sync_result(
|
||||||
|
sync_config, since_token
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
raise Exception(
|
||||||
|
f"Unknown sync_version (this is a Synapse problem): {sync_version}"
|
||||||
|
)
|
||||||
|
|
||||||
set_tag(SynapseTags.SYNC_RESULT, bool(sync_result))
|
set_tag(SynapseTags.SYNC_RESULT, bool(sync_result))
|
||||||
return sync_result
|
return sync_result
|
||||||
|
@ -596,6 +784,7 @@ class SyncHandler:
|
||||||
sync_config.user.to_string(),
|
sync_config.user.to_string(),
|
||||||
recents,
|
recents,
|
||||||
always_include_ids=current_state_ids,
|
always_include_ids=current_state_ids,
|
||||||
|
msc4115_membership_on_events=self.hs_config.experimental.msc4115_membership_on_events,
|
||||||
)
|
)
|
||||||
log_kv({"recents_after_visibility_filtering": len(recents)})
|
log_kv({"recents_after_visibility_filtering": len(recents)})
|
||||||
else:
|
else:
|
||||||
|
@ -681,6 +870,7 @@ class SyncHandler:
|
||||||
sync_config.user.to_string(),
|
sync_config.user.to_string(),
|
||||||
loaded_recents,
|
loaded_recents,
|
||||||
always_include_ids=current_state_ids,
|
always_include_ids=current_state_ids,
|
||||||
|
msc4115_membership_on_events=self.hs_config.experimental.msc4115_membership_on_events,
|
||||||
)
|
)
|
||||||
|
|
||||||
loaded_recents = []
|
loaded_recents = []
|
||||||
|
@@ -1516,128 +1706,17 @@ class SyncHandler:
             # See https://github.com/matrix-org/matrix-doc/issues/1144
             raise NotImplementedError()
 
-        # Note: we get the users room list *before* we get the current token, this
-        # avoids checking back in history if rooms are joined after the token is fetched.
-        token_before_rooms = self.event_sources.get_current_token()
-        mutable_joined_room_ids = set(await self.store.get_rooms_for_user(user_id))
-
-        # NB: The now_token gets changed by some of the generate_sync_* methods,
-        # this is due to some of the underlying streams not supporting the ability
-        # to query up to a given point.
-        # Always use the `now_token` in `SyncResultBuilder`
-        now_token = self.event_sources.get_current_token()
-        log_kv({"now_token": now_token})
-
-        # Since we fetched the users room list before the token, there's a small window
-        # during which membership events may have been persisted, so we fetch these now
-        # and modify the joined room list for any changes between the get_rooms_for_user
-        # call and the get_current_token call.
-        membership_change_events = []
-        if since_token:
-            membership_change_events = await self.store.get_membership_changes_for_user(
-                user_id,
-                since_token.room_key,
-                now_token.room_key,
-                self.rooms_to_exclude_globally,
-            )
-
-            mem_last_change_by_room_id: Dict[str, EventBase] = {}
-            for event in membership_change_events:
-                mem_last_change_by_room_id[event.room_id] = event
-
-            # For the latest membership event in each room found, add/remove the room ID
-            # from the joined room list accordingly. In this case we only care if the
-            # latest change is JOIN.
-
-            for room_id, event in mem_last_change_by_room_id.items():
-                assert event.internal_metadata.stream_ordering
-                if (
-                    event.internal_metadata.stream_ordering
-                    < token_before_rooms.room_key.stream
-                ):
-                    continue
-
-                logger.info(
-                    "User membership change between getting rooms and current token: %s %s %s",
-                    user_id,
-                    event.membership,
-                    room_id,
-                )
-                # User joined a room - we have to then check the room state to ensure we
-                # respect any bans if there's a race between the join and ban events.
-                if event.membership == Membership.JOIN:
-                    user_ids_in_room = await self.store.get_users_in_room(room_id)
-                    if user_id in user_ids_in_room:
-                        mutable_joined_room_ids.add(room_id)
-                # The user left the room, or left and was re-invited but not joined yet
-                else:
-                    mutable_joined_room_ids.discard(room_id)
-
-        # Tweak the set of rooms to return to the client for eager (non-lazy) syncs.
-        mutable_rooms_to_exclude = set(self.rooms_to_exclude_globally)
-        if not sync_config.filter_collection.lazy_load_members():
-            # Non-lazy syncs should never include partially stated rooms.
-            # Exclude all partially stated rooms from this sync.
-            results = await self.store.is_partial_state_room_batched(
-                mutable_joined_room_ids
-            )
-            mutable_rooms_to_exclude.update(
-                room_id
-                for room_id, is_partial_state in results.items()
-                if is_partial_state
-            )
-            membership_change_events = [
-                event
-                for event in membership_change_events
-                if not results.get(event.room_id, False)
-            ]
-
-        # Incremental eager syncs should additionally include rooms that
-        # - we are joined to
-        # - are full-stated
-        # - became fully-stated at some point during the sync period
-        # (These rooms will have been omitted during a previous eager sync.)
-        forced_newly_joined_room_ids: Set[str] = set()
-        if since_token and not sync_config.filter_collection.lazy_load_members():
-            un_partial_stated_rooms = (
-                await self.store.get_un_partial_stated_rooms_between(
-                    since_token.un_partial_stated_rooms_key,
-                    now_token.un_partial_stated_rooms_key,
-                    mutable_joined_room_ids,
-                )
-            )
-            results = await self.store.is_partial_state_room_batched(
-                un_partial_stated_rooms
-            )
-            forced_newly_joined_room_ids.update(
-                room_id
-                for room_id, is_partial_state in results.items()
-                if not is_partial_state
-            )
-
-        # Now we have our list of joined room IDs, exclude as configured and freeze
-        joined_room_ids = frozenset(
-            room_id
-            for room_id in mutable_joined_room_ids
-            if room_id not in mutable_rooms_to_exclude
+        sync_result_builder = await self.get_sync_result_builder(
+            sync_config,
+            since_token,
+            full_state,
         )
 
         logger.debug(
             "Calculating sync response for %r between %s and %s",
             sync_config.user,
-            since_token,
-            now_token,
+            sync_result_builder.since_token,
+            sync_result_builder.now_token,
         )
-
-        sync_result_builder = SyncResultBuilder(
-            sync_config,
-            full_state,
-            since_token=since_token,
-            now_token=now_token,
-            joined_room_ids=joined_room_ids,
-            excluded_room_ids=frozenset(mutable_rooms_to_exclude),
-            forced_newly_joined_room_ids=frozenset(forced_newly_joined_room_ids),
-            membership_change_events=membership_change_events,
-        )
 
         logger.debug("Fetching account data")
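The block removed above (and relocated into `get_sync_result_builder` later in this diff) guards against a race: the joined-room list is fetched *before* the stream token, so membership events persisted in between must be replayed on top of the list, keeping only the latest event per room. A minimal standalone sketch of that reconciliation (plain dataclasses, not Synapse's actual storage API):

```python
from dataclasses import dataclass
from typing import Dict, List, Set


@dataclass
class MembershipEvent:
    room_id: str
    membership: str  # "join" or "leave" (simplified)
    stream_ordering: int


def reconcile_joined_rooms(
    joined_before_token: Set[str],
    token_before_rooms: int,
    changes_up_to_now: List[MembershipEvent],
) -> Set[str]:
    # Keep only the *latest* membership event per room, as the handler does
    # with mem_last_change_by_room_id.
    last_change: Dict[str, MembershipEvent] = {}
    for ev in changes_up_to_now:
        last_change[ev.room_id] = ev

    joined = set(joined_before_token)
    for room_id, ev in last_change.items():
        # Changes already reflected in the fetched room list can be skipped.
        if ev.stream_ordering < token_before_rooms:
            continue
        if ev.membership == "join":
            joined.add(room_id)
        else:
            joined.discard(room_id)
    return joined
```

The real handler additionally re-checks room state on JOIN to respect a racing ban; this sketch only shows the add/discard bookkeeping.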
@@ -1749,6 +1828,239 @@ class SyncHandler:
             next_batch=sync_result_builder.now_token,
         )
 
+    async def generate_e2ee_sync_result(
+        self,
+        sync_config: SyncConfig,
+        since_token: Optional[StreamToken] = None,
+    ) -> E2eeSyncResult:
+        """
+        Generates the response body of a MSC3575 Sliding Sync `/sync/e2ee` result.
+
+        This is represented by a `E2eeSyncResult` struct, which is built from small
+        pieces using a `SyncResultBuilder`. The `sync_result_builder` is passed as a
+        mutable ("inout") parameter to various helper functions. These retrieve and
+        process the data which forms the sync body, often writing to the
+        `sync_result_builder` to store their output.
+
+        At the end, we transfer data from the `sync_result_builder` to a new `E2eeSyncResult`
+        instance to signify that the sync calculation is complete.
+        """
+        user_id = sync_config.user.to_string()
+        app_service = self.store.get_app_service_by_user_id(user_id)
+        if app_service:
+            # We no longer support AS users using /sync directly.
+            # See https://github.com/matrix-org/matrix-doc/issues/1144
+            raise NotImplementedError()
+
+        sync_result_builder = await self.get_sync_result_builder(
+            sync_config,
+            since_token,
+            full_state=False,
+        )
+
+        # 1. Calculate `to_device` events
+        await self._generate_sync_entry_for_to_device(sync_result_builder)
+
+        # 2. Calculate `device_lists`
+        # Device list updates are sent if a since token is provided.
+        device_lists = DeviceListUpdates()
+        include_device_list_updates = bool(since_token and since_token.device_list_key)
+        if include_device_list_updates:
+            # Note that _generate_sync_entry_for_rooms sets sync_result_builder.joined, which
+            # is used in calculate_user_changes below.
+            #
+            # TODO: Running `_generate_sync_entry_for_rooms()` is a lot of work just to
+            # figure out the membership changes/derived info needed for
+            # `_generate_sync_entry_for_device_list()`. In the future, we should try to
+            # refactor this away.
+            (
+                newly_joined_rooms,
+                newly_left_rooms,
+            ) = await self._generate_sync_entry_for_rooms(sync_result_builder)
+
+            # This uses the sync_result_builder.joined which is set in
+            # `_generate_sync_entry_for_rooms`, if that didn't find any joined
+            # rooms for some reason it is a no-op.
+            (
+                newly_joined_or_invited_or_knocked_users,
+                newly_left_users,
+            ) = sync_result_builder.calculate_user_changes()
+
+            device_lists = await self._generate_sync_entry_for_device_list(
+                sync_result_builder,
+                newly_joined_rooms=newly_joined_rooms,
+                newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
+                newly_left_rooms=newly_left_rooms,
+                newly_left_users=newly_left_users,
+            )
+
+        # 3. Calculate `device_one_time_keys_count` and `device_unused_fallback_key_types`
+        device_id = sync_config.device_id
+        one_time_keys_count: JsonMapping = {}
+        unused_fallback_key_types: List[str] = []
+        if device_id:
+            # TODO: We should have a way to let clients differentiate between the states of:
+            #   * no change in OTK count since the provided since token
+            #   * the server has zero OTKs left for this device
+            # Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
+            one_time_keys_count = await self.store.count_e2e_one_time_keys(
+                user_id, device_id
+            )
+            unused_fallback_key_types = list(
+                await self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
+            )
+
+        return E2eeSyncResult(
+            to_device=sync_result_builder.to_device,
+            device_lists=device_lists,
+            device_one_time_keys_count=one_time_keys_count,
+            device_unused_fallback_key_types=unused_fallback_key_types,
+            next_batch=sync_result_builder.now_token,
+        )
+
+    async def get_sync_result_builder(
+        self,
+        sync_config: SyncConfig,
+        since_token: Optional[StreamToken] = None,
+        full_state: bool = False,
+    ) -> "SyncResultBuilder":
+        """
+        Assemble a `SyncResultBuilder` with all of the initial context to
+        start building up the sync response:
+
+        - Membership changes between the last sync and the current sync.
+        - Joined room IDs (minus any rooms to exclude).
+        - Rooms that became fully-stated/un-partial stated since the last sync.
+
+        Args:
+            sync_config: Config/info necessary to process the sync request.
+            since_token: The point in the stream to sync from.
+            full_state: Whether to return the full state for each room.
+
+        Returns:
+            `SyncResultBuilder` ready to start generating parts of the sync response.
+        """
+        user_id = sync_config.user.to_string()
+
+        # Note: we get the users room list *before* we get the current token, this
+        # avoids checking back in history if rooms are joined after the token is fetched.
+        token_before_rooms = self.event_sources.get_current_token()
+        mutable_joined_room_ids = set(await self.store.get_rooms_for_user(user_id))
+
+        # NB: The `now_token` gets changed by some of the `generate_sync_*` methods,
+        # this is due to some of the underlying streams not supporting the ability
+        # to query up to a given point.
+        # Always use the `now_token` in `SyncResultBuilder`
+        now_token = self.event_sources.get_current_token()
+        log_kv({"now_token": now_token})
+
+        # Since we fetched the users room list before the token, there's a small window
+        # during which membership events may have been persisted, so we fetch these now
+        # and modify the joined room list for any changes between the get_rooms_for_user
+        # call and the get_current_token call.
+        membership_change_events = []
+        if since_token:
+            membership_change_events = await self.store.get_membership_changes_for_user(
+                user_id,
+                since_token.room_key,
+                now_token.room_key,
+                self.rooms_to_exclude_globally,
+            )
+
+            mem_last_change_by_room_id: Dict[str, EventBase] = {}
+            for event in membership_change_events:
+                mem_last_change_by_room_id[event.room_id] = event
+
+            # For the latest membership event in each room found, add/remove the room ID
+            # from the joined room list accordingly. In this case we only care if the
+            # latest change is JOIN.
+
+            for room_id, event in mem_last_change_by_room_id.items():
+                assert event.internal_metadata.stream_ordering
+                if (
+                    event.internal_metadata.stream_ordering
+                    < token_before_rooms.room_key.stream
+                ):
+                    continue
+
+                logger.info(
+                    "User membership change between getting rooms and current token: %s %s %s",
+                    user_id,
+                    event.membership,
+                    room_id,
+                )
+                # User joined a room - we have to then check the room state to ensure we
+                # respect any bans if there's a race between the join and ban events.
+                if event.membership == Membership.JOIN:
+                    user_ids_in_room = await self.store.get_users_in_room(room_id)
+                    if user_id in user_ids_in_room:
+                        mutable_joined_room_ids.add(room_id)
+                # The user left the room, or left and was re-invited but not joined yet
+                else:
+                    mutable_joined_room_ids.discard(room_id)
+
+        # Tweak the set of rooms to return to the client for eager (non-lazy) syncs.
+        mutable_rooms_to_exclude = set(self.rooms_to_exclude_globally)
+        if not sync_config.filter_collection.lazy_load_members():
+            # Non-lazy syncs should never include partially stated rooms.
+            # Exclude all partially stated rooms from this sync.
+            results = await self.store.is_partial_state_room_batched(
+                mutable_joined_room_ids
+            )
+            mutable_rooms_to_exclude.update(
+                room_id
+                for room_id, is_partial_state in results.items()
+                if is_partial_state
+            )
+            membership_change_events = [
+                event
+                for event in membership_change_events
+                if not results.get(event.room_id, False)
+            ]
+
+        # Incremental eager syncs should additionally include rooms that
+        # - we are joined to
+        # - are full-stated
+        # - became fully-stated at some point during the sync period
+        # (These rooms will have been omitted during a previous eager sync.)
+        forced_newly_joined_room_ids: Set[str] = set()
+        if since_token and not sync_config.filter_collection.lazy_load_members():
+            un_partial_stated_rooms = (
+                await self.store.get_un_partial_stated_rooms_between(
+                    since_token.un_partial_stated_rooms_key,
+                    now_token.un_partial_stated_rooms_key,
+                    mutable_joined_room_ids,
+                )
+            )
+            results = await self.store.is_partial_state_room_batched(
+                un_partial_stated_rooms
+            )
+            forced_newly_joined_room_ids.update(
+                room_id
+                for room_id, is_partial_state in results.items()
+                if not is_partial_state
+            )
+
+        # Now we have our list of joined room IDs, exclude as configured and freeze
+        joined_room_ids = frozenset(
+            room_id
+            for room_id in mutable_joined_room_ids
+            if room_id not in mutable_rooms_to_exclude
+        )
+
+        sync_result_builder = SyncResultBuilder(
+            sync_config,
+            full_state,
+            since_token=since_token,
+            now_token=now_token,
+            joined_room_ids=joined_room_ids,
+            excluded_room_ids=frozenset(mutable_rooms_to_exclude),
+            forced_newly_joined_room_ids=frozenset(forced_newly_joined_room_ids),
+            membership_change_events=membership_change_events,
+        )
+
+        return sync_result_builder
+
     @measure_func("_generate_sync_entry_for_device_list")
     async def _generate_sync_entry_for_device_list(
         self,
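The new `generate_e2ee_sync_result` above assembles only the E2EE-relevant sections of a sync response (to-device events, device list churn, OTK counts, fallback key types). A hedged sketch of the resulting wire shape, using plain dicts rather than Synapse's `E2eeSyncResult` attrs class (field names follow the construction above; the helper is illustrative, not part of Synapse):

```python
from typing import Any, Dict, List, Mapping


def build_e2ee_sync_response(
    to_device: List[Dict[str, Any]],
    device_lists_changed: List[str],
    device_lists_left: List[str],
    one_time_keys_count: Mapping[str, int],
    unused_fallback_key_types: List[str],
    next_batch: str,
) -> Dict[str, Any]:
    # Mirrors the E2eeSyncResult fields: only E2EE sections, no room timelines,
    # presence, or account data.
    return {
        "to_device": {"events": to_device},
        "device_lists": {
            "changed": device_lists_changed,
            "left": device_lists_left,
        },
        "device_one_time_keys_count": dict(one_time_keys_count),
        "device_unused_fallback_key_types": unused_fallback_key_types,
        "next_batch": next_batch,
    }
```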
@@ -1797,40 +2109,16 @@ class SyncHandler:
 
             users_that_have_changed = set()
 
-            joined_rooms = sync_result_builder.joined_room_ids
+            joined_room_ids = sync_result_builder.joined_room_ids
 
             # Step 1a, check for changes in devices of users we share a room
             # with
-            #
-            # We do this in two different ways depending on what we have cached.
-            # If we already have a list of all the user that have changed since
-            # the last sync then it's likely more efficient to compare the rooms
-            # they're in with the rooms the syncing user is in.
-            #
-            # If we don't have that info cached then we get all the users that
-            # share a room with our user and check if those users have changed.
-            cache_result = self.store.get_cached_device_list_changes(
-                since_token.device_list_key
-            )
-            if cache_result.hit:
-                changed_users = cache_result.entities
-
-                result = await self.store.get_rooms_for_users(changed_users)
-
-                for changed_user_id, entries in result.items():
-                    # Check if the changed user shares any rooms with the user,
-                    # or if the changed user is the syncing user (as we always
-                    # want to include device list updates of their own devices).
-                    if user_id == changed_user_id or any(
-                        rid in joined_rooms for rid in entries
-                    ):
-                        users_that_have_changed.add(changed_user_id)
-            else:
             users_that_have_changed = (
                 await self._device_handler.get_device_changes_in_shared_rooms(
                     user_id,
-                    sync_result_builder.joined_room_ids,
+                    joined_room_ids,
                     from_token=since_token,
+                    now_token=sync_result_builder.now_token,
                 )
             )
 
@@ -1856,7 +2144,7 @@ class SyncHandler:
             # Remove any users that we still share a room with.
             left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
             for user_id, entries in left_users_rooms.items():
-                if any(rid in joined_rooms for rid in entries):
+                if any(rid in joined_room_ids for rid in entries):
                     newly_left_users.discard(user_id)
 
         return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
@@ -1943,23 +2231,19 @@ class SyncHandler:
             )
 
             if push_rules_changed:
-                global_account_data = {
-                    AccountDataTypes.PUSH_RULES: await self._push_rules_handler.push_rules_for_user(
-                        sync_config.user
-                    ),
-                    **global_account_data,
-                }
+                global_account_data = dict(global_account_data)
+                global_account_data[AccountDataTypes.PUSH_RULES] = (
+                    await self._push_rules_handler.push_rules_for_user(sync_config.user)
+                )
         else:
             all_global_account_data = await self.store.get_global_account_data_for_user(
                 user_id
             )
 
-            global_account_data = {
-                AccountDataTypes.PUSH_RULES: await self._push_rules_handler.push_rules_for_user(
-                    sync_config.user
-                ),
-                **all_global_account_data,
-            }
+            global_account_data = dict(all_global_account_data)
+            global_account_data[AccountDataTypes.PUSH_RULES] = (
+                await self._push_rules_handler.push_rules_for_user(sync_config.user)
+            )
 
         account_data_for_user = (
             await sync_config.filter_collection.filter_global_account_data(
@@ -22,11 +22,27 @@
 import logging
 from io import BytesIO
 from types import TracebackType
-from typing import Optional, Tuple, Type
+from typing import TYPE_CHECKING, List, Optional, Tuple, Type
 
 from PIL import Image
 
+from synapse.api.errors import Codes, SynapseError, cs_error
+from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
+from synapse.http.server import respond_with_json
+from synapse.http.site import SynapseRequest
 from synapse.logging.opentracing import trace
+from synapse.media._base import (
+    FileInfo,
+    ThumbnailInfo,
+    respond_404,
+    respond_with_file,
+    respond_with_responder,
+)
+from synapse.media.media_storage import MediaStorage
+
+if TYPE_CHECKING:
+    from synapse.media.media_repository import MediaRepository
+    from synapse.server import HomeServer
 
 logger = logging.getLogger(__name__)
 
@@ -231,3 +247,471 @@ class Thumbnailer:
     def __del__(self) -> None:
         # Make sure we actually do close the image, rather than leak data.
         self.close()
+
+
+class ThumbnailProvider:
+    def __init__(
+        self,
+        hs: "HomeServer",
+        media_repo: "MediaRepository",
+        media_storage: MediaStorage,
+    ):
+        self.hs = hs
+        self.media_repo = media_repo
+        self.media_storage = media_storage
+        self.store = hs.get_datastores().main
+        self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
+
+    async def respond_local_thumbnail(
+        self,
+        request: SynapseRequest,
+        media_id: str,
+        width: int,
+        height: int,
+        method: str,
+        m_type: str,
+        max_timeout_ms: int,
+    ) -> None:
+        media_info = await self.media_repo.get_local_media_info(
+            request, media_id, max_timeout_ms
+        )
+        if not media_info:
+            return
+
+        thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
+        await self._select_and_respond_with_thumbnail(
+            request,
+            width,
+            height,
+            method,
+            m_type,
+            thumbnail_infos,
+            media_id,
+            media_id,
+            url_cache=bool(media_info.url_cache),
+            server_name=None,
+        )
+
+    async def select_or_generate_local_thumbnail(
+        self,
+        request: SynapseRequest,
+        media_id: str,
+        desired_width: int,
+        desired_height: int,
+        desired_method: str,
+        desired_type: str,
+        max_timeout_ms: int,
+    ) -> None:
+        media_info = await self.media_repo.get_local_media_info(
+            request, media_id, max_timeout_ms
+        )
+
+        if not media_info:
+            return
+
+        thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
+        for info in thumbnail_infos:
+            t_w = info.width == desired_width
+            t_h = info.height == desired_height
+            t_method = info.method == desired_method
+            t_type = info.type == desired_type
+
+            if t_w and t_h and t_method and t_type:
+                file_info = FileInfo(
+                    server_name=None,
+                    file_id=media_id,
+                    url_cache=bool(media_info.url_cache),
+                    thumbnail=info,
+                )
+
+                responder = await self.media_storage.fetch_media(file_info)
+                if responder:
+                    await respond_with_responder(
+                        request, responder, info.type, info.length
+                    )
+                    return
+
+        logger.debug("We don't have a thumbnail of that size. Generating")
+
+        # Okay, so we generate one.
+        file_path = await self.media_repo.generate_local_exact_thumbnail(
+            media_id,
+            desired_width,
+            desired_height,
+            desired_method,
+            desired_type,
+            url_cache=bool(media_info.url_cache),
+        )
+
+        if file_path:
+            await respond_with_file(request, desired_type, file_path)
+        else:
+            logger.warning("Failed to generate thumbnail")
+            raise SynapseError(400, "Failed to generate thumbnail.")
+
+    async def select_or_generate_remote_thumbnail(
+        self,
+        request: SynapseRequest,
+        server_name: str,
+        media_id: str,
+        desired_width: int,
+        desired_height: int,
+        desired_method: str,
+        desired_type: str,
+        max_timeout_ms: int,
+    ) -> None:
+        media_info = await self.media_repo.get_remote_media_info(
+            server_name, media_id, max_timeout_ms
+        )
+        if not media_info:
+            respond_404(request)
+            return
+
+        thumbnail_infos = await self.store.get_remote_media_thumbnails(
+            server_name, media_id
+        )
+
+        file_id = media_info.filesystem_id
+
+        for info in thumbnail_infos:
+            t_w = info.width == desired_width
+            t_h = info.height == desired_height
+            t_method = info.method == desired_method
+            t_type = info.type == desired_type
+
+            if t_w and t_h and t_method and t_type:
+                file_info = FileInfo(
+                    server_name=server_name,
+                    file_id=file_id,
+                    thumbnail=info,
+                )
+
+                responder = await self.media_storage.fetch_media(file_info)
+                if responder:
+                    await respond_with_responder(
+                        request, responder, info.type, info.length
+                    )
+                    return
+
+        logger.debug("We don't have a thumbnail of that size. Generating")
+
+        # Okay, so we generate one.
+        file_path = await self.media_repo.generate_remote_exact_thumbnail(
+            server_name,
+            file_id,
+            media_id,
+            desired_width,
+            desired_height,
+            desired_method,
+            desired_type,
+        )
+
+        if file_path:
+            await respond_with_file(request, desired_type, file_path)
+        else:
+            logger.warning("Failed to generate thumbnail")
+            raise SynapseError(400, "Failed to generate thumbnail.")
+
+    async def respond_remote_thumbnail(
+        self,
+        request: SynapseRequest,
+        server_name: str,
+        media_id: str,
+        width: int,
+        height: int,
+        method: str,
+        m_type: str,
+        max_timeout_ms: int,
+    ) -> None:
+        # TODO: Don't download the whole remote file
+        # We should proxy the thumbnail from the remote server instead of
+        # downloading the remote file and generating our own thumbnails.
+        media_info = await self.media_repo.get_remote_media_info(
+            server_name, media_id, max_timeout_ms
+        )
+        if not media_info:
+            return
+
+        thumbnail_infos = await self.store.get_remote_media_thumbnails(
+            server_name, media_id
+        )
+        await self._select_and_respond_with_thumbnail(
+            request,
+            width,
+            height,
+            method,
+            m_type,
+            thumbnail_infos,
+            media_id,
+            media_info.filesystem_id,
+            url_cache=False,
+            server_name=server_name,
+        )
+
+    async def _select_and_respond_with_thumbnail(
+        self,
+        request: SynapseRequest,
+        desired_width: int,
+        desired_height: int,
+        desired_method: str,
+        desired_type: str,
+        thumbnail_infos: List[ThumbnailInfo],
+        media_id: str,
+        file_id: str,
+        url_cache: bool,
+        server_name: Optional[str] = None,
+    ) -> None:
+        """
+        Respond to a request with an appropriate thumbnail from the previously generated thumbnails.
+
+        Args:
+            request: The incoming request.
+            desired_width: The desired width, the returned thumbnail may be larger than this.
+            desired_height: The desired height, the returned thumbnail may be larger than this.
+            desired_method: The desired method used to generate the thumbnail.
+            desired_type: The desired content-type of the thumbnail.
+            thumbnail_infos: A list of thumbnail info of candidate thumbnails.
+            file_id: The ID of the media that a thumbnail is being requested for.
+            url_cache: True if this is from a URL cache.
+            server_name: The server name, if this is a remote thumbnail.
+        """
+        logger.debug(
+            "_select_and_respond_with_thumbnail: media_id=%s desired=%sx%s (%s) thumbnail_infos=%s",
+            media_id,
+            desired_width,
+            desired_height,
+            desired_method,
+            thumbnail_infos,
+        )
+
+        # If `dynamic_thumbnails` is enabled, we expect Synapse to go down a
+        # different code path to handle it.
+        assert not self.dynamic_thumbnails
+
+        if thumbnail_infos:
+            file_info = self._select_thumbnail(
+                desired_width,
+                desired_height,
+                desired_method,
+                desired_type,
+                thumbnail_infos,
+                file_id,
+                url_cache,
+                server_name,
+            )
+            if not file_info:
+                logger.info("Couldn't find a thumbnail matching the desired inputs")
+                respond_404(request)
+                return
+
+            # The thumbnail property must exist.
+            assert file_info.thumbnail is not None
+
+            responder = await self.media_storage.fetch_media(file_info)
+            if responder:
+                await respond_with_responder(
+                    request,
+                    responder,
+                    file_info.thumbnail.type,
+                    file_info.thumbnail.length,
+                )
+                return
+
+            # If we can't find the thumbnail we regenerate it. This can happen
+            # if e.g. we've deleted the thumbnails but still have the original
+            # image somewhere.
+            #
+            # Since we have an entry for the thumbnail in the DB we a) know we
+            # have have successfully generated the thumbnail in the past (so we
+            # don't need to worry about repeatedly failing to generate
+            # thumbnails), and b) have already calculated that appropriate
+            # width/height/method so we can just call the "generate exact"
+            # methods.
+
+            # First let's check that we do actually have the original image
+            # still. This will throw a 404 if we don't.
+            # TODO: We should refetch the thumbnails for remote media.
+            await self.media_storage.ensure_media_is_in_local_cache(
+                FileInfo(server_name, file_id, url_cache=url_cache)
+            )
+
+            if server_name:
+                await self.media_repo.generate_remote_exact_thumbnail(
+                    server_name,
+                    file_id=file_id,
+                    media_id=media_id,
+                    t_width=file_info.thumbnail.width,
+                    t_height=file_info.thumbnail.height,
+                    t_method=file_info.thumbnail.method,
+                    t_type=file_info.thumbnail.type,
+                )
+            else:
+                await self.media_repo.generate_local_exact_thumbnail(
+                    media_id=media_id,
+                    t_width=file_info.thumbnail.width,
+                    t_height=file_info.thumbnail.height,
+                    t_method=file_info.thumbnail.method,
+                    t_type=file_info.thumbnail.type,
+                    url_cache=url_cache,
+                )
+
+            responder = await self.media_storage.fetch_media(file_info)
+            await respond_with_responder(
+                request,
+                responder,
+                file_info.thumbnail.type,
+                file_info.thumbnail.length,
+            )
+        else:
+            # This might be because:
+            # 1. We can't create thumbnails for the given media (corrupted or
+            #    unsupported file type), or
+            # 2. The thumbnailing process never ran or errored out initially
+            #    when the media was first uploaded (these bugs should be
+            #    reported and fixed).
+            # Note that we don't attempt to generate a thumbnail now because
+            # `dynamic_thumbnails` is disabled.
+            logger.info("Failed to find any generated thumbnails")
+
+            assert request.path is not None
+            respond_with_json(
+                request,
+                400,
+                cs_error(
+                    "Cannot find any thumbnails for the requested media ('%s'). This might mean the media is not a supported_media_format=(%s) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)"
+                    % (
+                        request.path.decode(),
+                        ", ".join(THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP.keys()),
+                    ),
+                    code=Codes.UNKNOWN,
+                ),
+                send_cors=True,
+            )
+
+    def _select_thumbnail(
+        self,
+        desired_width: int,
+        desired_height: int,
+        desired_method: str,
+        desired_type: str,
+        thumbnail_infos: List[ThumbnailInfo],
|
||||||
|
file_id: str,
|
||||||
|
url_cache: bool,
|
||||||
|
server_name: Optional[str],
|
||||||
|
) -> Optional[FileInfo]:
|
||||||
|
"""
|
||||||
|
Choose an appropriate thumbnail from the previously generated thumbnails.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
desired_width: The desired width, the returned thumbnail may be larger than this.
|
||||||
|
desired_height: The desired height, the returned thumbnail may be larger than this.
|
||||||
|
desired_method: The desired method used to generate the thumbnail.
|
||||||
|
desired_type: The desired content-type of the thumbnail.
|
||||||
|
thumbnail_infos: A list of thumbnail infos of candidate thumbnails.
|
||||||
|
file_id: The ID of the media that a thumbnail is being requested for.
|
||||||
|
url_cache: True if this is from a URL cache.
|
||||||
|
server_name: The server name, if this is a remote thumbnail.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
The thumbnail which best matches the desired parameters.
|
||||||
|
"""
|
||||||
|
desired_method = desired_method.lower()
|
||||||
|
|
||||||
|
# The chosen thumbnail.
|
||||||
|
thumbnail_info = None
|
||||||
|
|
||||||
|
d_w = desired_width
|
||||||
|
d_h = desired_height
|
||||||
|
|
||||||
|
if desired_method == "crop":
|
||||||
|
# Thumbnails that match equal or larger sizes of desired width/height.
|
||||||
|
crop_info_list: List[
|
||||||
|
Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
|
||||||
|
] = []
|
||||||
|
# Other thumbnails.
|
||||||
|
crop_info_list2: List[
|
||||||
|
Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
|
||||||
|
] = []
|
||||||
|
for info in thumbnail_infos:
|
||||||
|
# Skip thumbnails generated with different methods.
|
||||||
|
if info.method != "crop":
|
||||||
|
continue
|
||||||
|
|
||||||
|
t_w = info.width
|
||||||
|
t_h = info.height
|
||||||
|
aspect_quality = abs(d_w * t_h - d_h * t_w)
|
||||||
|
min_quality = 0 if d_w <= t_w and d_h <= t_h else 1
|
||||||
|
size_quality = abs((d_w - t_w) * (d_h - t_h))
|
||||||
|
type_quality = desired_type != info.type
|
||||||
|
length_quality = info.length
|
||||||
|
if t_w >= d_w or t_h >= d_h:
|
||||||
|
crop_info_list.append(
|
||||||
|
(
|
||||||
|
aspect_quality,
|
||||||
|
min_quality,
|
||||||
|
size_quality,
|
||||||
|
type_quality,
|
||||||
|
length_quality,
|
||||||
|
info,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
crop_info_list2.append(
|
||||||
|
(
|
||||||
|
aspect_quality,
|
||||||
|
min_quality,
|
||||||
|
size_quality,
|
||||||
|
type_quality,
|
||||||
|
length_quality,
|
||||||
|
info,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
# Pick the most appropriate thumbnail. Some values of `desired_width` and
|
||||||
|
# `desired_height` may result in a tie, in which case we avoid comparing on
|
||||||
|
# the thumbnail info and pick the thumbnail that appears earlier
|
||||||
|
# in the list of candidates.
|
||||||
|
if crop_info_list:
|
||||||
|
thumbnail_info = min(crop_info_list, key=lambda t: t[:-1])[-1]
|
||||||
|
elif crop_info_list2:
|
||||||
|
thumbnail_info = min(crop_info_list2, key=lambda t: t[:-1])[-1]
|
||||||
|
elif desired_method == "scale":
|
||||||
|
# Thumbnails that match equal or larger sizes of desired width/height.
|
||||||
|
info_list: List[Tuple[int, bool, int, ThumbnailInfo]] = []
|
||||||
|
# Other thumbnails.
|
||||||
|
info_list2: List[Tuple[int, bool, int, ThumbnailInfo]] = []
|
||||||
|
|
||||||
|
for info in thumbnail_infos:
|
||||||
|
# Skip thumbnails generated with different methods.
|
||||||
|
if info.method != "scale":
|
||||||
|
continue
|
||||||
|
|
||||||
|
t_w = info.width
|
||||||
|
t_h = info.height
|
||||||
|
size_quality = abs((d_w - t_w) * (d_h - t_h))
|
||||||
|
type_quality = desired_type != info.type
|
||||||
|
length_quality = info.length
|
||||||
|
if t_w >= d_w or t_h >= d_h:
|
||||||
|
info_list.append((size_quality, type_quality, length_quality, info))
|
||||||
|
else:
|
||||||
|
info_list2.append(
|
||||||
|
(size_quality, type_quality, length_quality, info)
|
||||||
|
)
|
||||||
|
# Pick the most appropriate thumbnail. Some values of `desired_width` and
|
||||||
|
# `desired_height` may result in a tie, in which case we avoid comparing on
|
||||||
|
# the thumbnail info and pick the thumbnail that appears earlier
|
||||||
|
# in the list of candidates.
|
||||||
|
if info_list:
|
||||||
|
thumbnail_info = min(info_list, key=lambda t: t[:-1])[-1]
|
||||||
|
elif info_list2:
|
||||||
|
thumbnail_info = min(info_list2, key=lambda t: t[:-1])[-1]
|
||||||
|
|
||||||
|
if thumbnail_info:
|
||||||
|
return FileInfo(
|
||||||
|
file_id=file_id,
|
||||||
|
url_cache=url_cache,
|
||||||
|
server_name=server_name,
|
||||||
|
thumbnail=thumbnail_info,
|
||||||
|
)
|
||||||
|
|
||||||
|
# No matching thumbnail was found.
|
||||||
|
return None
|
||||||
|
|
|
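The tuple-based ranking used by `_select_thumbnail` can be illustrated standalone. The sketch below uses a hypothetical `Candidate` type (not Synapse's `ThumbnailInfo`) to show how penalty tuples plus `min()` pick the best crop thumbnail, with thumbnails at least as large as requested preferred over smaller ones:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Candidate:
    width: int
    height: int
    type: str
    length: int


def pick_crop_thumbnail(
    d_w: int, d_h: int, desired_type: str, candidates: List[Candidate]
) -> Optional[Candidate]:
    """Rank candidates by a tuple of penalties; min() picks the best.

    Mirrors the ordering above: aspect-ratio mismatch first, then whether
    the candidate is at least as large as requested, then area difference,
    content-type mismatch, and finally file size.
    """
    larger, smaller = [], []
    for c in candidates:
        quality = (
            abs(d_w * c.height - d_h * c.width),             # aspect_quality
            0 if d_w <= c.width and d_h <= c.height else 1,  # min_quality
            abs((d_w - c.width) * (d_h - c.height)),         # size_quality
            desired_type != c.type,                          # type_quality
            c.length,                                        # length_quality
        )
        (larger if c.width >= d_w or c.height >= d_h else smaller).append(
            (quality, c)
        )
    # Prefer thumbnails at least as large as requested; fall back to smaller.
    for group in (larger, smaller):
        if group:
            return min(group, key=lambda t: t[0])[1]
    return None
```

Note that because type mismatch sorts after size difference, an exactly sized thumbnail of the "wrong" content type can beat a larger thumbnail of the requested type.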
@@ -721,6 +721,7 @@ class Notifier:
                     user.to_string(),
                     new_events,
                     is_peeking=is_peeking,
+                    msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
                 )
             elif keyname == StreamKeyType.PRESENCE:
                 now = self.clock.time_msec()

@@ -529,7 +529,10 @@ class Mailer:
         }

         the_events = await filter_events_for_client(
-            self._storage_controllers, user_id, results.events_before
+            self._storage_controllers,
+            user_id,
+            results.events_before,
+            msc4115_membership_on_events=self.hs.config.experimental.msc4115_membership_on_events,
         )
         the_events.append(notif_event)

@@ -55,6 +55,7 @@ from synapse.replication.tcp.streams.partial_state import (
 )
 from synapse.types import PersistedEventPosition, ReadReceipt, StreamKeyType, UserID
 from synapse.util.async_helpers import Linearizer, timeout_deferred
+from synapse.util.iterutils import batch_iter
 from synapse.util.metrics import Measure

 if TYPE_CHECKING:

@@ -111,6 +112,15 @@ class ReplicationDataHandler:
             token: stream token for this batch of rows
             rows: a list of Stream.ROW_TYPE objects as returned by Stream.parse_row.
         """
+        all_room_ids: Set[str] = set()
+        if stream_name == DeviceListsStream.NAME:
+            if any(row.entity.startswith("@") and not row.is_signature for row in rows):
+                prev_token = self.store.get_device_stream_token()
+                all_room_ids = await self.store.get_all_device_list_changes(
+                    prev_token, token
+                )
+                self.store.device_lists_in_rooms_have_changed(all_room_ids, token)
+
         self.store.process_replication_rows(stream_name, instance_name, token, rows)
         # NOTE: this must be called after process_replication_rows to ensure any
         # cache invalidations are first handled before any stream ID advances.

@@ -145,14 +155,14 @@ class ReplicationDataHandler:
                 StreamKeyType.TO_DEVICE, token, users=entities
             )
         elif stream_name == DeviceListsStream.NAME:
-            all_room_ids: Set[str] = set()
-            for row in rows:
-                if row.entity.startswith("@") and not row.is_signature:
-                    room_ids = await self.store.get_rooms_for_user(row.entity)
-                    all_room_ids.update(room_ids)
-            self.notifier.on_new_event(
-                StreamKeyType.DEVICE_LIST, token, rooms=all_room_ids
-            )
+            # `all_room_ids` can be large, so let's wake up those streams in batches
+            for batched_room_ids in batch_iter(all_room_ids, 100):
+                self.notifier.on_new_event(
+                    StreamKeyType.DEVICE_LIST, token, rooms=batched_room_ids
+                )
+
+                # Yield to reactor so that we don't block.
+                await self._clock.sleep(0)
         elif stream_name == PushersStream.NAME:
             for row in rows:
                 if row.deleted:

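The batching change above depends on `synapse.util.iterutils.batch_iter`, which chunks a potentially large iterable into fixed-size groups so the notifier can be woken incrementally. A minimal stand-in with the same shape (a sketch, not Synapse's actual implementation) is:

```python
from itertools import islice
from typing import Iterable, Iterator, Tuple, TypeVar

T = TypeVar("T")


def batch_iter(iterable: Iterable[T], size: int) -> Iterator[Tuple[T, ...]]:
    """Yield tuples of at most `size` items from `iterable`, in order."""
    it = iter(iterable)
    while True:
        batch = tuple(islice(it, size))
        if not batch:
            # Iterator exhausted; stop cleanly.
            return
        yield batch
```

Pairing each 100-room batch with a zero-length sleep, as the diff does, yields control back to the Twisted reactor between batches so one large device-list update cannot block the event loop.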
synapse/rest/client/media.py (new file, 205 lines)

@@ -0,0 +1,205 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2020 The Matrix.org Foundation C.I.C.
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#

import logging
import re

from synapse.http.server import (
    HttpServer,
    respond_with_json,
    respond_with_json_bytes,
    set_corp_headers,
    set_cors_headers,
)
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.media._base import (
    DEFAULT_MAX_TIMEOUT_MS,
    MAXIMUM_ALLOWED_MAX_TIMEOUT_MS,
    respond_404,
)
from synapse.media.media_repository import MediaRepository
from synapse.media.media_storage import MediaStorage
from synapse.media.thumbnailer import ThumbnailProvider
from synapse.server import HomeServer
from synapse.util.stringutils import parse_and_validate_server_name

logger = logging.getLogger(__name__)


class UnstablePreviewURLServlet(RestServlet):
    """
    Same as `GET /_matrix/media/r0/preview_url`, this endpoint provides a generic preview API
    for URLs which outputs Open Graph (https://ogp.me/) responses (with some Matrix
    specific additions).

    This does have trade-offs compared to other designs:

    * Pros:
      * Simple and flexible; can be used by any clients at any point
    * Cons:
      * If each homeserver provides one of these independently, all the homeservers in a
        room may needlessly DoS the target URI
      * The URL metadata must be stored somewhere, rather than just using Matrix
        itself to store the media.
      * Matrix cannot be used to distribute the metadata between homeservers.
    """

    PATTERNS = [
        re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/preview_url$")
    ]

    def __init__(
        self,
        hs: "HomeServer",
        media_repo: "MediaRepository",
        media_storage: MediaStorage,
    ):
        super().__init__()

        self.auth = hs.get_auth()
        self.clock = hs.get_clock()
        self.media_repo = media_repo
        self.media_storage = media_storage
        assert self.media_repo.url_previewer is not None
        self.url_previewer = self.media_repo.url_previewer

    async def on_GET(self, request: SynapseRequest) -> None:
        requester = await self.auth.get_user_by_req(request)
        url = parse_string(request, "url", required=True)
        ts = parse_integer(request, "ts")
        if ts is None:
            ts = self.clock.time_msec()

        og = await self.url_previewer.preview(url, requester.user, ts)
        respond_with_json_bytes(request, 200, og, send_cors=True)


class UnstableMediaConfigResource(RestServlet):
    PATTERNS = [
        re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/config$")
    ]

    def __init__(self, hs: "HomeServer"):
        super().__init__()
        config = hs.config
        self.clock = hs.get_clock()
        self.auth = hs.get_auth()
        self.limits_dict = {"m.upload.size": config.media.max_upload_size}

    async def on_GET(self, request: SynapseRequest) -> None:
        await self.auth.get_user_by_req(request)
        respond_with_json(request, 200, self.limits_dict, send_cors=True)


class UnstableThumbnailResource(RestServlet):
    PATTERNS = [
        re.compile(
            "/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
        )
    ]

    def __init__(
        self,
        hs: "HomeServer",
        media_repo: "MediaRepository",
        media_storage: MediaStorage,
    ):
        super().__init__()

        self.store = hs.get_datastores().main
        self.media_repo = media_repo
        self.media_storage = media_storage
        self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
        self._is_mine_server_name = hs.is_mine_server_name
        self._server_name = hs.hostname
        self.prevent_media_downloads_from = hs.config.media.prevent_media_downloads_from
        self.thumbnailer = ThumbnailProvider(hs, media_repo, media_storage)
        self.auth = hs.get_auth()

    async def on_GET(
        self, request: SynapseRequest, server_name: str, media_id: str
    ) -> None:
        # Validate the server name, raising if invalid
        parse_and_validate_server_name(server_name)
        await self.auth.get_user_by_req(request)

        set_cors_headers(request)
        set_corp_headers(request)
        width = parse_integer(request, "width", required=True)
        height = parse_integer(request, "height", required=True)
        method = parse_string(request, "method", "scale")
        # TODO Parse the Accept header to get a prioritised list of thumbnail types.
        m_type = "image/png"
        max_timeout_ms = parse_integer(
            request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
        )
        max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)

        if self._is_mine_server_name(server_name):
            if self.dynamic_thumbnails:
                await self.thumbnailer.select_or_generate_local_thumbnail(
                    request, media_id, width, height, method, m_type, max_timeout_ms
                )
            else:
                await self.thumbnailer.respond_local_thumbnail(
                    request, media_id, width, height, method, m_type, max_timeout_ms
                )
            self.media_repo.mark_recently_accessed(None, media_id)
        else:
            # Don't let users download media from configured domains, even if it
            # is already downloaded. This is Trust & Safety tooling to make some
            # media inaccessible to local users.
            # See `prevent_media_downloads_from` config docs for more info.
            if server_name in self.prevent_media_downloads_from:
                respond_404(request)
                return

            remote_resp_function = (
                self.thumbnailer.select_or_generate_remote_thumbnail
                if self.dynamic_thumbnails
                else self.thumbnailer.respond_remote_thumbnail
            )
            await remote_resp_function(
                request,
                server_name,
                media_id,
                width,
                height,
                method,
                m_type,
                max_timeout_ms,
            )
            self.media_repo.mark_recently_accessed(server_name, media_id)


def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
    if hs.config.experimental.msc3916_authenticated_media_enabled:
        media_repo = hs.get_media_repository()
        if hs.config.media.url_preview_enabled:
            UnstablePreviewURLServlet(
                hs, media_repo, media_repo.media_storage
            ).register(http_server)
        UnstableMediaConfigResource(hs).register(http_server)
        UnstableThumbnailResource(hs, media_repo, media_repo.media_storage).register(
            http_server
        )

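The unstable routes in the new file are plain compiled regexes rather than the usual `client_patterns` helper; the thumbnail pattern carries two named capture groups. A quick sketch of how such a pattern extracts its path parameters (example path values only):

```python
import re
from typing import Optional, Tuple

# Same pattern as UnstableThumbnailResource.PATTERNS above.
THUMBNAIL_PATTERN = re.compile(
    "/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/"
    "(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
)


def parse_thumbnail_path(path: str) -> Optional[Tuple[str, str]]:
    """Extract (server_name, media_id) from a thumbnail request path."""
    m = THUMBNAIL_PATTERN.match(path)
    if m is None:
        return None
    return m.group("server_name"), m.group("media_id")
```

The captured `server_name` and `media_id` become the keyword arguments passed to `on_GET`, which is why the handler signature takes them by name.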
@@ -34,6 +34,9 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)


+# n.b. [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886) has now been closed.
+# However, we want to keep this implementation around for some time.
+# TODO: define an end-of-life date for this implementation.
 class MSC3886RendezvousServlet(RestServlet):
     """
     This is a placeholder implementation of [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886)

@@ -40,6 +40,7 @@ from synapse.handlers.sync import (
     KnockedSyncResult,
     SyncConfig,
     SyncResult,
+    SyncVersion,
 )
 from synapse.http.server import HttpServer
 from synapse.http.servlet import RestServlet, parse_boolean, parse_integer, parse_string

@@ -47,6 +48,7 @@ from synapse.http.site import SynapseRequest
 from synapse.logging.opentracing import trace_with_opname
 from synapse.types import JsonDict, Requester, StreamToken
 from synapse.util import json_decoder
+from synapse.util.caches.lrucache import LruCache

 from ._base import client_patterns, set_timeline_upper_limit

@@ -110,6 +112,11 @@ class SyncRestServlet(RestServlet):
         self._msc2654_enabled = hs.config.experimental.msc2654_enabled
         self._msc3773_enabled = hs.config.experimental.msc3773_enabled

+        self._json_filter_cache: LruCache[str, bool] = LruCache(
+            max_size=1000,
+            cache_name="sync_valid_filter",
+        )
+
     async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         # This will always be set by the time Twisted calls us.
         assert request.args is not None

@@ -177,7 +184,13 @@ class SyncRestServlet(RestServlet):
                     filter_object = json_decoder.decode(filter_id)
                 except Exception:
                     raise SynapseError(400, "Invalid filter JSON", errcode=Codes.NOT_JSON)
+
+                # We cache the validation, as this can get quite expensive if people use
+                # a literal json blob as a query param.
+                if not self._json_filter_cache.get(filter_id):
                     self.filtering.check_valid_filter(filter_object)
+                    self._json_filter_cache[filter_id] = True
+
                 set_timeline_upper_limit(
                     filter_object, self.hs.config.server.filter_timeline_limit
                 )

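The caching added above (validate a literal JSON filter blob once, then remember the raw string) can be sketched with a plain dict standing in for Synapse's `LruCache`. The `check_valid_filter` body here is a hypothetical placeholder for the real, expensive validation:

```python
import json
from typing import Dict


class FilterValidator:
    """Validate JSON filter strings, caching results keyed by the raw string."""

    def __init__(self) -> None:
        self._cache: Dict[str, bool] = {}
        self.validations = 0  # counts actual (non-cached) validations

    def check_valid_filter(self, filter_object: dict) -> None:
        # Stand-in for the real validation; raises on bad input.
        if not isinstance(filter_object.get("room", {}), dict):
            raise ValueError("invalid filter")

    def validate(self, filter_id: str) -> dict:
        filter_object = json.loads(filter_id)
        # Only pay the validation cost once per distinct literal blob.
        if not self._cache.get(filter_id):
            self.validations += 1
            self.check_valid_filter(filter_object)
            self._cache[filter_id] = True
        return filter_object
```

Keying on the raw string (rather than the parsed object) is what makes this cheap: clients that pass the same literal blob on every `/sync` hit the cache without any normalisation step.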
@@ -197,7 +210,6 @@ class SyncRestServlet(RestServlet):
             user=user,
             filter_collection=filter_collection,
             is_guest=requester.is_guest,
-            request_key=request_key,
             device_id=device_id,
         )

@@ -220,6 +232,8 @@ class SyncRestServlet(RestServlet):
         sync_result = await self.sync_handler.wait_for_sync_for_user(
             requester,
             sync_config,
+            SyncVersion.SYNC_V2,
+            request_key,
             since_token=since_token,
             timeout=timeout,
             full_state=full_state,

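The `request_key` moved here is a tuple identifying one logical sync request (version, user, timeout, since token), used to deduplicate identical requests against a response cache. A simplified, synchronous stand-in for that idea (the real cache also deduplicates concurrent awaits, which this sketch does not) looks like:

```python
from typing import Any, Callable, Dict, Hashable


class ResponseCache:
    """Share the result of requests keyed by a hashable tuple.

    Simplified sketch: memoises by key, so two identical requests only
    compute the response once.
    """

    def __init__(self) -> None:
        self._cache: Dict[Hashable, Any] = {}

    def wrap(self, key: Hashable, compute: Callable[[], Any]) -> Any:
        if key not in self._cache:
            self._cache[key] = compute()
        return self._cache[key]
```

Because the key includes the sync version, a `/sync` v2 request and an `/sync/e2ee` request from the same user with the same parameters never share a cached response.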
@@ -553,5 +567,176 @@ class SyncRestServlet(RestServlet):
         return result


+class SlidingSyncE2eeRestServlet(RestServlet):
+    """
+    API endpoint for MSC3575 Sliding Sync `/sync/e2ee`. This is being introduced as part
+    of Sliding Sync but doesn't have any sliding window component. It's just a way to
+    get E2EE events without having to sit through a big initial sync (`/sync` v2). And
+    we can avoid encryption events being backed up by the main sync response.
+
+    Having To-Device messages split out to this sync endpoint also helps when clients
+    need to have 2 or more sync streams open at a time, e.g. a push notification process
+    and a main process. This can cause the two processes to race to fetch the To-Device
+    events, resulting in the need for complex synchronisation rules to ensure the token
+    is correctly and atomically exchanged between processes.
+
+    GET parameters::
+        timeout(int): How long to wait for new events in milliseconds.
+        since(batch_token): Batch token when asking for incremental deltas.
+
+    Response JSON::
+        {
+            "next_batch": // batch token for the next /sync
+            "to_device": {
+                // list of to-device events
+                "events": [
+                    {
+                        "content": { "algorithm": "m.olm.v1.curve25519-aes-sha2", "ciphertext": { ... }, "org.matrix.msgid": "abcd", "session_id": "abcd" },
+                        "type": "m.room.encrypted",
+                        "sender": "@alice:example.com",
+                    }
+                    // ...
+                ]
+            },
+            "device_lists": {
+                "changed": ["@alice:example.com"],
+                "left": ["@bob:example.com"]
+            },
+            "device_one_time_keys_count": {
+                "signed_curve25519": 50
+            },
+            "device_unused_fallback_key_types": [
+                "signed_curve25519"
+            ]
+        }
+    """
+
+    PATTERNS = client_patterns(
+        "/org.matrix.msc3575/sync/e2ee$", releases=[], v1=False, unstable=True
+    )
+
+    def __init__(self, hs: "HomeServer"):
+        super().__init__()
+        self.hs = hs
+        self.auth = hs.get_auth()
+        self.store = hs.get_datastores().main
+        self.sync_handler = hs.get_sync_handler()
+
+        # Filtering only matters for the `device_lists` because it requires a bunch of
+        # derived information from rooms (see how `_generate_sync_entry_for_rooms()`
+        # prepares a bunch of data for `_generate_sync_entry_for_device_list()`).
+        self.only_member_events_filter_collection = FilterCollection(
+            self.hs,
+            {
+                "room": {
+                    # We only care about membership events for the `device_lists`.
+                    # Membership will tell us whether a user has joined/left a room and
+                    # if there are new devices to encrypt for.
+                    "timeline": {
+                        "types": ["m.room.member"],
+                    },
+                    "state": {
+                        "types": ["m.room.member"],
+                    },
+                    # We don't want any extra account_data generated because it's not
+                    # returned by this endpoint. This helps us avoid work in
+                    # `_generate_sync_entry_for_rooms()`
+                    "account_data": {
+                        "not_types": ["*"],
+                    },
+                    # We don't want any extra ephemeral data generated because it's not
+                    # returned by this endpoint. This helps us avoid work in
+                    # `_generate_sync_entry_for_rooms()`
+                    "ephemeral": {
+                        "not_types": ["*"],
+                    },
+                },
+                # We don't want any extra account_data generated because it's not
+                # returned by this endpoint. (This is just here for good measure)
+                "account_data": {
+                    "not_types": ["*"],
+                },
+                # We don't want any extra presence data generated because it's not
+                # returned by this endpoint. (This is just here for good measure)
+                "presence": {
+                    "not_types": ["*"],
+                },
+            },
+        )
+
+    async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        requester = await self.auth.get_user_by_req(request, allow_guest=True)
+        user = requester.user
+        device_id = requester.device_id
+
+        timeout = parse_integer(request, "timeout", default=0)
+        since = parse_string(request, "since")
+
+        sync_config = SyncConfig(
+            user=user,
+            filter_collection=self.only_member_events_filter_collection,
+            is_guest=requester.is_guest,
+            device_id=device_id,
+        )
+
+        since_token = None
+        if since is not None:
+            since_token = await StreamToken.from_string(self.store, since)
+
+        # Request cache key
+        request_key = (
+            SyncVersion.E2EE_SYNC,
+            user,
+            timeout,
+            since,
+        )
+
+        # Gather data for the response
+        sync_result = await self.sync_handler.wait_for_sync_for_user(
+            requester,
+            sync_config,
+            SyncVersion.E2EE_SYNC,
+            request_key,
+            since_token=since_token,
+            timeout=timeout,
+            full_state=False,
+        )
+
+        # The client may have disconnected by now; don't bother to serialize the
+        # response if so.
+        if request._disconnected:
+            logger.info("Client has disconnected; not serializing response.")
+            return 200, {}
+
+        response: JsonDict = defaultdict(dict)
+        response["next_batch"] = await sync_result.next_batch.to_string(self.store)
+
+        if sync_result.to_device:
+            response["to_device"] = {"events": sync_result.to_device}
+
+        if sync_result.device_lists.changed:
+            response["device_lists"]["changed"] = list(sync_result.device_lists.changed)
+        if sync_result.device_lists.left:
+            response["device_lists"]["left"] = list(sync_result.device_lists.left)
+
+        # We always include this because https://github.com/vector-im/element-android/issues/3725
+        # The spec isn't terribly clear on when this can be omitted and how a client would tell
+        # the difference between "no keys present" and "nothing changed" in terms of whole field
+        # absent / individual key type entry absent
+        # Corresponding synapse issue: https://github.com/matrix-org/synapse/issues/10456
+        response["device_one_time_keys_count"] = sync_result.device_one_time_keys_count
+
+        # https://github.com/matrix-org/matrix-doc/blob/54255851f642f84a4f1aaf7bc063eebe3d76752b/proposals/2732-olm-fallback-keys.md
+        # states that this field should always be included, as long as the server supports the feature.
+        response["device_unused_fallback_key_types"] = (
+            sync_result.device_unused_fallback_key_types
+        )
+
+        return 200, response
+
+
 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     SyncRestServlet(hs).register(http_server)
+
+    if hs.config.experimental.msc3575_enabled:
+        SlidingSyncE2eeRestServlet(hs).register(http_server)

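The response assembly in `SlidingSyncE2eeRestServlet.on_GET` uses `defaultdict(dict)` so optional nested fields such as `device_lists` only appear in the JSON when something is actually written into them. A minimal sketch of that pattern (hypothetical helper, simplified field set):

```python
from collections import defaultdict
from typing import Any, Dict, List


def build_e2ee_response(
    next_batch: str,
    to_device: List[dict],
    changed: List[str],
    left: List[str],
) -> Dict[str, Any]:
    response: Dict[str, Any] = defaultdict(dict)
    response["next_batch"] = next_batch

    if to_device:
        response["to_device"] = {"events": to_device}

    # Writing into the nested dict creates "device_lists" on demand;
    # if neither branch runs, the key never exists.
    if changed:
        response["device_lists"]["changed"] = changed
    if left:
        response["device_lists"]["left"] = left

    return dict(response)
```

This keeps empty sections out of the payload without any explicit "is the section empty?" bookkeeping.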
@@ -89,6 +89,7 @@ class VersionsRestServlet(RestServlet):
                     "v1.7",
                     "v1.8",
                     "v1.9",
+                    "v1.10",
                 ],
                 # as per MSC1497:
                 "unstable_features": {

@@ -22,23 +22,18 @@

 import logging
 import re
-from typing import TYPE_CHECKING, List, Optional, Tuple
+from typing import TYPE_CHECKING

-from synapse.api.errors import Codes, SynapseError, cs_error
-from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
-from synapse.http.server import respond_with_json, set_corp_headers, set_cors_headers
+from synapse.http.server import set_corp_headers, set_cors_headers
 from synapse.http.servlet import RestServlet, parse_integer, parse_string
 from synapse.http.site import SynapseRequest
 from synapse.media._base import (
     DEFAULT_MAX_TIMEOUT_MS,
     MAXIMUM_ALLOWED_MAX_TIMEOUT_MS,
-    FileInfo,
-    ThumbnailInfo,
     respond_404,
-    respond_with_file,
-    respond_with_responder,
 )
 from synapse.media.media_storage import MediaStorage
+from synapse.media.thumbnailer import ThumbnailProvider
 from synapse.util.stringutils import parse_and_validate_server_name

 if TYPE_CHECKING:
@@ -66,10 +61,11 @@ class ThumbnailResource(RestServlet):
         self.store = hs.get_datastores().main
         self.media_repo = media_repo
         self.media_storage = media_storage
-        self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
         self._is_mine_server_name = hs.is_mine_server_name
         self._server_name = hs.hostname
         self.prevent_media_downloads_from = hs.config.media.prevent_media_downloads_from
+        self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
+        self.thumbnail_provider = ThumbnailProvider(hs, media_repo, media_storage)

     async def on_GET(
         self, request: SynapseRequest, server_name: str, media_id: str
@@ -91,11 +87,11 @@ class ThumbnailResource(RestServlet):

         if self._is_mine_server_name(server_name):
             if self.dynamic_thumbnails:
-                await self._select_or_generate_local_thumbnail(
+                await self.thumbnail_provider.select_or_generate_local_thumbnail(
                     request, media_id, width, height, method, m_type, max_timeout_ms
                 )
             else:
-                await self._respond_local_thumbnail(
+                await self.thumbnail_provider.respond_local_thumbnail(
                     request, media_id, width, height, method, m_type, max_timeout_ms
                 )
             self.media_repo.mark_recently_accessed(None, media_id)
@@ -109,9 +105,9 @@ class ThumbnailResource(RestServlet):
                 return

             remote_resp_function = (
-                self._select_or_generate_remote_thumbnail
+                self.thumbnail_provider.select_or_generate_remote_thumbnail
                 if self.dynamic_thumbnails
-                else self._respond_remote_thumbnail
+                else self.thumbnail_provider.respond_remote_thumbnail
             )
             await remote_resp_function(
                 request,
@@ -124,457 +120,3 @@ class ThumbnailResource(RestServlet):
                 max_timeout_ms,
             )
             self.media_repo.mark_recently_accessed(server_name, media_id)
-
-    async def _respond_local_thumbnail(
-        self,
-        request: SynapseRequest,
-        media_id: str,
-        width: int,
-        height: int,
-        method: str,
-        m_type: str,
-        max_timeout_ms: int,
-    ) -> None:
-        media_info = await self.media_repo.get_local_media_info(
-            request, media_id, max_timeout_ms
-        )
-        if not media_info:
-            return
-
-        thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
-        await self._select_and_respond_with_thumbnail(
-            request,
-            width,
-            height,
-            method,
-            m_type,
-            thumbnail_infos,
-            media_id,
-            media_id,
-            url_cache=bool(media_info.url_cache),
-            server_name=None,
-        )
-
-    async def _select_or_generate_local_thumbnail(
-        self,
-        request: SynapseRequest,
-        media_id: str,
-        desired_width: int,
-        desired_height: int,
-        desired_method: str,
-        desired_type: str,
-        max_timeout_ms: int,
-    ) -> None:
-        media_info = await self.media_repo.get_local_media_info(
-            request, media_id, max_timeout_ms
-        )
-
-        if not media_info:
-            return
-
-        thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
-        for info in thumbnail_infos:
-            t_w = info.width == desired_width
-            t_h = info.height == desired_height
-            t_method = info.method == desired_method
-            t_type = info.type == desired_type
-
-            if t_w and t_h and t_method and t_type:
-                file_info = FileInfo(
-                    server_name=None,
-                    file_id=media_id,
-                    url_cache=bool(media_info.url_cache),
-                    thumbnail=info,
-                )
-
-                responder = await self.media_storage.fetch_media(file_info)
-                if responder:
-                    await respond_with_responder(
-                        request, responder, info.type, info.length
-                    )
-                    return
-
-        logger.debug("We don't have a thumbnail of that size. Generating")
-
-        # Okay, so we generate one.
-        file_path = await self.media_repo.generate_local_exact_thumbnail(
-            media_id,
-            desired_width,
-            desired_height,
-            desired_method,
-            desired_type,
-            url_cache=bool(media_info.url_cache),
-        )
-
-        if file_path:
-            await respond_with_file(request, desired_type, file_path)
-        else:
-            logger.warning("Failed to generate thumbnail")
-            raise SynapseError(400, "Failed to generate thumbnail.")
-
-    async def _select_or_generate_remote_thumbnail(
-        self,
-        request: SynapseRequest,
-        server_name: str,
-        media_id: str,
-        desired_width: int,
-        desired_height: int,
-        desired_method: str,
-        desired_type: str,
-        max_timeout_ms: int,
-    ) -> None:
-        media_info = await self.media_repo.get_remote_media_info(
-            server_name, media_id, max_timeout_ms
-        )
-        if not media_info:
-            respond_404(request)
-            return
-
-        thumbnail_infos = await self.store.get_remote_media_thumbnails(
-            server_name, media_id
-        )
-
-        file_id = media_info.filesystem_id
-
-        for info in thumbnail_infos:
-            t_w = info.width == desired_width
-            t_h = info.height == desired_height
-            t_method = info.method == desired_method
-            t_type = info.type == desired_type
-
-            if t_w and t_h and t_method and t_type:
-                file_info = FileInfo(
-                    server_name=server_name,
-                    file_id=file_id,
-                    thumbnail=info,
-                )
-
-                responder = await self.media_storage.fetch_media(file_info)
-                if responder:
-                    await respond_with_responder(
-                        request, responder, info.type, info.length
-                    )
-                    return
-
-        logger.debug("We don't have a thumbnail of that size. Generating")
-
-        # Okay, so we generate one.
-        file_path = await self.media_repo.generate_remote_exact_thumbnail(
-            server_name,
-            file_id,
-            media_id,
-            desired_width,
-            desired_height,
-            desired_method,
-            desired_type,
-        )
-
-        if file_path:
-            await respond_with_file(request, desired_type, file_path)
-        else:
-            logger.warning("Failed to generate thumbnail")
-            raise SynapseError(400, "Failed to generate thumbnail.")
-
-    async def _respond_remote_thumbnail(
-        self,
-        request: SynapseRequest,
-        server_name: str,
-        media_id: str,
-        width: int,
-        height: int,
-        method: str,
-        m_type: str,
-        max_timeout_ms: int,
-    ) -> None:
-        # TODO: Don't download the whole remote file
-        # We should proxy the thumbnail from the remote server instead of
-        # downloading the remote file and generating our own thumbnails.
-        media_info = await self.media_repo.get_remote_media_info(
-            server_name, media_id, max_timeout_ms
-        )
-        if not media_info:
-            return
-
-        thumbnail_infos = await self.store.get_remote_media_thumbnails(
-            server_name, media_id
-        )
-        await self._select_and_respond_with_thumbnail(
-            request,
-            width,
-            height,
-            method,
-            m_type,
-            thumbnail_infos,
-            media_id,
-            media_info.filesystem_id,
-            url_cache=False,
-            server_name=server_name,
-        )
-
-    async def _select_and_respond_with_thumbnail(
-        self,
-        request: SynapseRequest,
-        desired_width: int,
-        desired_height: int,
-        desired_method: str,
-        desired_type: str,
-        thumbnail_infos: List[ThumbnailInfo],
-        media_id: str,
-        file_id: str,
-        url_cache: bool,
-        server_name: Optional[str] = None,
-    ) -> None:
-        """
-        Respond to a request with an appropriate thumbnail from the previously generated thumbnails.
-
-        Args:
-            request: The incoming request.
-            desired_width: The desired width, the returned thumbnail may be larger than this.
-            desired_height: The desired height, the returned thumbnail may be larger than this.
-            desired_method: The desired method used to generate the thumbnail.
-            desired_type: The desired content-type of the thumbnail.
-            thumbnail_infos: A list of thumbnail info of candidate thumbnails.
-            file_id: The ID of the media that a thumbnail is being requested for.
-            url_cache: True if this is from a URL cache.
-            server_name: The server name, if this is a remote thumbnail.
-        """
-        logger.debug(
-            "_select_and_respond_with_thumbnail: media_id=%s desired=%sx%s (%s) thumbnail_infos=%s",
-            media_id,
-            desired_width,
-            desired_height,
-            desired_method,
-            thumbnail_infos,
-        )
-
-        # If `dynamic_thumbnails` is enabled, we expect Synapse to go down a
-        # different code path to handle it.
-        assert not self.dynamic_thumbnails
-
-        if thumbnail_infos:
-            file_info = self._select_thumbnail(
-                desired_width,
-                desired_height,
-                desired_method,
-                desired_type,
-                thumbnail_infos,
-                file_id,
-                url_cache,
-                server_name,
-            )
-            if not file_info:
-                logger.info("Couldn't find a thumbnail matching the desired inputs")
-                respond_404(request)
-                return
-
-            # The thumbnail property must exist.
-            assert file_info.thumbnail is not None
-
-            responder = await self.media_storage.fetch_media(file_info)
-            if responder:
-                await respond_with_responder(
-                    request,
-                    responder,
-                    file_info.thumbnail.type,
-                    file_info.thumbnail.length,
-                )
-                return
-
-            # If we can't find the thumbnail we regenerate it. This can happen
-            # if e.g. we've deleted the thumbnails but still have the original
-            # image somewhere.
-            #
-            # Since we have an entry for the thumbnail in the DB we a) know we
-            # have have successfully generated the thumbnail in the past (so we
-            # don't need to worry about repeatedly failing to generate
-            # thumbnails), and b) have already calculated that appropriate
-            # width/height/method so we can just call the "generate exact"
-            # methods.
-
-            # First let's check that we do actually have the original image
-            # still. This will throw a 404 if we don't.
-            # TODO: We should refetch the thumbnails for remote media.
-            await self.media_storage.ensure_media_is_in_local_cache(
-                FileInfo(server_name, file_id, url_cache=url_cache)
-            )
-
-            if server_name:
-                await self.media_repo.generate_remote_exact_thumbnail(
-                    server_name,
-                    file_id=file_id,
-                    media_id=media_id,
-                    t_width=file_info.thumbnail.width,
-                    t_height=file_info.thumbnail.height,
-                    t_method=file_info.thumbnail.method,
-                    t_type=file_info.thumbnail.type,
-                )
-            else:
-                await self.media_repo.generate_local_exact_thumbnail(
-                    media_id=media_id,
-                    t_width=file_info.thumbnail.width,
-                    t_height=file_info.thumbnail.height,
-                    t_method=file_info.thumbnail.method,
-                    t_type=file_info.thumbnail.type,
-                    url_cache=url_cache,
-                )
-
-            responder = await self.media_storage.fetch_media(file_info)
-            await respond_with_responder(
-                request,
-                responder,
-                file_info.thumbnail.type,
-                file_info.thumbnail.length,
-            )
-        else:
-            # This might be because:
-            # 1. We can't create thumbnails for the given media (corrupted or
-            #    unsupported file type), or
-            # 2. The thumbnailing process never ran or errored out initially
-            #    when the media was first uploaded (these bugs should be
-            #    reported and fixed).
-            # Note that we don't attempt to generate a thumbnail now because
-            # `dynamic_thumbnails` is disabled.
-            logger.info("Failed to find any generated thumbnails")
-
-            assert request.path is not None
-            respond_with_json(
-                request,
-                400,
-                cs_error(
-                    "Cannot find any thumbnails for the requested media ('%s'). This might mean the media is not a supported_media_format=(%s) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)"
-                    % (
-                        request.path.decode(),
-                        ", ".join(THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP.keys()),
-                    ),
-                    code=Codes.UNKNOWN,
-                ),
-                send_cors=True,
-            )
-
-    def _select_thumbnail(
-        self,
-        desired_width: int,
-        desired_height: int,
-        desired_method: str,
-        desired_type: str,
-        thumbnail_infos: List[ThumbnailInfo],
-        file_id: str,
-        url_cache: bool,
-        server_name: Optional[str],
-    ) -> Optional[FileInfo]:
-        """
-        Choose an appropriate thumbnail from the previously generated thumbnails.
-
-        Args:
-            desired_width: The desired width, the returned thumbnail may be larger than this.
-            desired_height: The desired height, the returned thumbnail may be larger than this.
-            desired_method: The desired method used to generate the thumbnail.
-            desired_type: The desired content-type of the thumbnail.
-            thumbnail_infos: A list of thumbnail infos of candidate thumbnails.
-            file_id: The ID of the media that a thumbnail is being requested for.
-            url_cache: True if this is from a URL cache.
-            server_name: The server name, if this is a remote thumbnail.
-
-        Returns:
-            The thumbnail which best matches the desired parameters.
-        """
-        desired_method = desired_method.lower()
-
-        # The chosen thumbnail.
-        thumbnail_info = None
-
-        d_w = desired_width
-        d_h = desired_height
-
-        if desired_method == "crop":
-            # Thumbnails that match equal or larger sizes of desired width/height.
-            crop_info_list: List[
-                Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
-            ] = []
-            # Other thumbnails.
-            crop_info_list2: List[
-                Tuple[int, int, int, bool, Optional[int], ThumbnailInfo]
-            ] = []
-            for info in thumbnail_infos:
-                # Skip thumbnails generated with different methods.
-                if info.method != "crop":
-                    continue
-
-                t_w = info.width
-                t_h = info.height
-                aspect_quality = abs(d_w * t_h - d_h * t_w)
-                min_quality = 0 if d_w <= t_w and d_h <= t_h else 1
-                size_quality = abs((d_w - t_w) * (d_h - t_h))
-                type_quality = desired_type != info.type
-                length_quality = info.length
-                if t_w >= d_w or t_h >= d_h:
-                    crop_info_list.append(
-                        (
-                            aspect_quality,
-                            min_quality,
-                            size_quality,
-                            type_quality,
-                            length_quality,
-                            info,
-                        )
-                    )
-                else:
-                    crop_info_list2.append(
-                        (
-                            aspect_quality,
-                            min_quality,
-                            size_quality,
-                            type_quality,
-                            length_quality,
-                            info,
-                        )
-                    )
-            # Pick the most appropriate thumbnail. Some values of `desired_width` and
-            # `desired_height` may result in a tie, in which case we avoid comparing on
-            # the thumbnail info and pick the thumbnail that appears earlier
-            # in the list of candidates.
-            if crop_info_list:
-                thumbnail_info = min(crop_info_list, key=lambda t: t[:-1])[-1]
-            elif crop_info_list2:
-                thumbnail_info = min(crop_info_list2, key=lambda t: t[:-1])[-1]
-        elif desired_method == "scale":
-            # Thumbnails that match equal or larger sizes of desired width/height.
-            info_list: List[Tuple[int, bool, int, ThumbnailInfo]] = []
-            # Other thumbnails.
-            info_list2: List[Tuple[int, bool, int, ThumbnailInfo]] = []
-
-            for info in thumbnail_infos:
-                # Skip thumbnails generated with different methods.
-                if info.method != "scale":
-                    continue
-
-                t_w = info.width
-                t_h = info.height
-                size_quality = abs((d_w - t_w) * (d_h - t_h))
-                type_quality = desired_type != info.type
-                length_quality = info.length
-                if t_w >= d_w or t_h >= d_h:
-                    info_list.append((size_quality, type_quality, length_quality, info))
-                else:
-                    info_list2.append(
-                        (size_quality, type_quality, length_quality, info)
-                    )
-            # Pick the most appropriate thumbnail. Some values of `desired_width` and
-            # `desired_height` may result in a tie, in which case we avoid comparing on
-            # the thumbnail info and pick the thumbnail that appears earlier
-            # in the list of candidates.
-            if info_list:
-                thumbnail_info = min(info_list, key=lambda t: t[:-1])[-1]
-            elif info_list2:
-                thumbnail_info = min(info_list2, key=lambda t: t[:-1])[-1]
-
-        if thumbnail_info:
-            return FileInfo(
-                file_id=file_id,
-                url_cache=url_cache,
-                server_name=server_name,
-                thumbnail=thumbnail_info,
-            )
-
-        # No matching thumbnail was found.
-        return None
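The removed `_select_thumbnail` ranking picks a candidate by lexicographic comparison of quality tuples: aspect-ratio mismatch first, then whether the thumbnail is at least the requested size, then size distance, content type, and byte length. A minimal standalone sketch of the "crop" branch, using plain tuples instead of `ThumbnailInfo`:

```python
from typing import List, Optional, Tuple

# Minimal standalone sketch of the "crop" ranking removed above. A candidate
# here is a (width, height, content_type, length) tuple, not a ThumbnailInfo.
Candidate = Tuple[int, int, str, int]


def select_crop_thumbnail(
    d_w: int, d_h: int, desired_type: str, candidates: List[Candidate]
) -> Optional[Candidate]:
    exact_or_larger = []  # thumbnails at least as wide or tall as requested
    smaller = []          # everything else
    for t_w, t_h, t_type, length in candidates:
        aspect_quality = abs(d_w * t_h - d_h * t_w)  # 0 == same aspect ratio
        min_quality = 0 if d_w <= t_w and d_h <= t_h else 1
        size_quality = abs((d_w - t_w) * (d_h - t_h))
        type_quality = desired_type != t_type        # False sorts first
        key = (aspect_quality, min_quality, size_quality, type_quality, length)
        bucket = exact_or_larger if t_w >= d_w or t_h >= d_h else smaller
        bucket.append((key, (t_w, t_h, t_type, length)))
    # Prefer the larger-or-equal bucket; compare only on the quality key so
    # ties keep the earlier candidate.
    for bucket in (exact_or_larger, smaller):
        if bucket:
            return min(bucket, key=lambda t: t[0])[1]
    return None


# A 64x64 request prefers the matching-aspect 96x96 over a huge 800x600.
best = select_crop_thumbnail(
    64, 64, "image/png",
    [(800, 600, "image/png", 50_000), (96, 96, "image/png", 4_000)],
)
assert best == (96, 96, "image/png", 4_000)
```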
@@ -23,6 +23,7 @@ from typing import TYPE_CHECKING, Mapping

 from twisted.web.resource import Resource

+from synapse.rest.synapse.client.federation_whitelist import FederationWhitelistResource
 from synapse.rest.synapse.client.new_user_consent import NewUserConsentResource
 from synapse.rest.synapse.client.pick_idp import PickIdpResource
 from synapse.rest.synapse.client.pick_username import pick_username_resource
@@ -77,6 +78,9 @@ def build_synapse_client_resource_tree(hs: "HomeServer") -> Mapping[str, Resourc
         # To be removed in Synapse v1.32.0.
         resources["/_matrix/saml2"] = res

+    if hs.config.federation.federation_whitelist_endpoint_enabled:
+        resources[FederationWhitelistResource.PATH] = FederationWhitelistResource(hs)
+
     if hs.config.experimental.msc4108_enabled:
         resources["/_synapse/client/rendezvous"] = MSC4108RendezvousSessionResource(hs)
synapse/rest/synapse/client/federation_whitelist.py (new file, 66 lines)

@@ -0,0 +1,66 @@
+#
+# This file is licensed under the Affero General Public License (AGPL) version 3.
+#
+# Copyright (C) 2024 New Vector, Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# See the GNU Affero General Public License for more details:
+# <https://www.gnu.org/licenses/agpl-3.0.html>.
+#
+
+import logging
+from typing import TYPE_CHECKING, Tuple
+
+from synapse.http.server import DirectServeJsonResource
+from synapse.http.site import SynapseRequest
+from synapse.types import JsonDict
+
+if TYPE_CHECKING:
+    from synapse.server import HomeServer
+
+logger = logging.getLogger(__name__)
+
+
+class FederationWhitelistResource(DirectServeJsonResource):
+    """Custom endpoint (disabled by default) to fetch the federation whitelist
+    config.
+
+    Only enabled if `federation_whitelist_endpoint_enabled` feature is enabled.
+
+    Response format:
+
+        {
+            "whitelist_enabled": true,  // Whether the federation whitelist is being enforced
+            "whitelist": [              // Which server names are allowed by the whitelist
+                "example.com"
+            ]
+        }
+    """
+
+    PATH = "/_synapse/client/v1/config/federation_whitelist"
+
+    def __init__(self, hs: "HomeServer"):
+        super().__init__()
+
+        self._federation_whitelist = hs.config.federation.federation_domain_whitelist
+
+        self._auth = hs.get_auth()
+
+    async def _async_render_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        await self._auth.get_user_by_req(request)
+
+        whitelist = []
+        if self._federation_whitelist:
+            # federation_whitelist is actually a dict, not a list
+            whitelist = list(self._federation_whitelist)
+
+        return_dict: JsonDict = {
+            "whitelist_enabled": self._federation_whitelist is not None,
+            "whitelist": whitelist,
+        }
+
+        return 200, return_dict
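An authenticated client can query the new endpoint with an ordinary GET. A sketch of what that request might look like, where the homeserver URL and access token are placeholders and the endpoint path and response shape come from the docstring above:

```python
import json
import urllib.request

# Sketch: querying the new (disabled-by-default) whitelist endpoint added
# above. `homeserver` and `access_token` are placeholder values; the path
# is FederationWhitelistResource.PATH from the new file.
def fetch_federation_whitelist(homeserver: str, access_token: str) -> dict:
    req = urllib.request.Request(
        f"{homeserver}/_synapse/client/v1/config/federation_whitelist",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Expected response shape, per the docstring in the diff:
example = {"whitelist_enabled": True, "whitelist": ["example.com"]}
assert set(example) == {"whitelist_enabled", "whitelist"}
```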
@@ -113,6 +113,7 @@ class AccountDetailsResource(DirectServeHtmlResource):
                 "display_name": session.display_name,
                 "emails": session.emails,
                 "localpart": localpart,
+                "avatar_url": session.avatar_url,
             },
         }

@@ -134,6 +135,7 @@ class AccountDetailsResource(DirectServeHtmlResource):
         try:
             localpart = parse_string(request, "username", required=True)
             use_display_name = parse_boolean(request, "use_display_name", default=False)
+            use_avatar = parse_boolean(request, "use_avatar", default=False)

             try:
                 emails_to_use: List[str] = [
@@ -147,5 +149,5 @@ class AccountDetailsResource(DirectServeHtmlResource):
             return

         await self._sso_handler.handle_submit_username_request(
-            request, session_id, localpart, use_display_name, emails_to_use
+            request, session_id, localpart, use_display_name, use_avatar, emails_to_use
         )
@@ -70,10 +70,7 @@ from synapse.types import (
 from synapse.util import json_decoder, json_encoder
 from synapse.util.caches.descriptors import cached, cachedList
 from synapse.util.caches.lrucache import LruCache
-from synapse.util.caches.stream_change_cache import (
-    AllEntitiesChangedResult,
-    StreamChangeCache,
-)
+from synapse.util.caches.stream_change_cache import StreamChangeCache
 from synapse.util.cancellation import cancellable
 from synapse.util.iterutils import batch_iter
 from synapse.util.stringutils import shortstr
@@ -132,6 +129,20 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
             prefilled_cache=device_list_prefill,
         )

+        device_list_room_prefill, min_device_list_room_id = self.db_pool.get_cache_dict(
+            db_conn,
+            "device_lists_changes_in_room",
+            entity_column="room_id",
+            stream_column="stream_id",
+            max_value=device_list_max,
+            limit=10000,
+        )
+        self._device_list_room_stream_cache = StreamChangeCache(
+            "DeviceListRoomStreamChangeCache",
+            min_device_list_room_id,
+            prefilled_cache=device_list_room_prefill,
+        )
+
         (
             user_signature_stream_prefill,
             user_signature_stream_list_id,
@@ -209,6 +220,13 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
                 row.entity, token
             )

+    def device_lists_in_rooms_have_changed(
+        self, room_ids: StrCollection, token: int
+    ) -> None:
+        "Record that device lists have changed in rooms"
+        for room_id in room_ids:
+            self._device_list_room_stream_cache.entity_has_changed(room_id, token)
+
     def get_device_stream_token(self) -> int:
         return self._device_list_id_gen.get_current_token()

@@ -832,16 +850,6 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
         )
         return {device[0]: db_to_json(device[1]) for device in devices}

-    def get_cached_device_list_changes(
-        self,
-        from_key: int,
-    ) -> AllEntitiesChangedResult:
-        """Get set of users whose devices have changed since `from_key`, or None
-        if that information is not in our cache.
-        """
-
-        return self._device_list_stream_cache.get_all_entities_changed(from_key)
-
     @cancellable
     async def get_all_devices_changed(
         self,
@@ -1457,7 +1465,7 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):

     @cancellable
     async def get_device_list_changes_in_rooms(
-        self, room_ids: Collection[str], from_id: int
+        self, room_ids: Collection[str], from_id: int, to_id: int
     ) -> Optional[Set[str]]:
         """Return the set of users whose devices have changed in the given rooms
         since the given stream ID.
@@ -1473,9 +1481,15 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
         if min_stream_id > from_id:
             return None

+        changed_room_ids = self._device_list_room_stream_cache.get_entities_changed(
+            room_ids, from_id
+        )
+        if not changed_room_ids:
+            return set()
+
         sql = """
             SELECT DISTINCT user_id FROM device_lists_changes_in_room
-            WHERE {clause} AND stream_id >= ?
+            WHERE {clause} AND stream_id > ? AND stream_id <= ?
         """

         def _get_device_list_changes_in_rooms_txn(
@@ -1487,11 +1501,12 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
             return {user_id for user_id, in txn}

         changes = set()
-        for chunk in batch_iter(room_ids, 1000):
+        for chunk in batch_iter(changed_room_ids, 1000):
             clause, args = make_in_list_sql_clause(
                 self.database_engine, "room_id", chunk
             )
             args.append(from_id)
+            args.append(to_id)

             changes |= await self.db_pool.runInteraction(
                 "get_device_list_changes_in_rooms",
@@ -1502,6 +1517,34 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):

         return changes

+    async def get_all_device_list_changes(self, from_id: int, to_id: int) -> Set[str]:
+        """Return the set of rooms where devices have changed since the given
+        stream ID.
+
+        Will raise an exception if the given stream ID is too old.
+        """
+
+        min_stream_id = await self._get_min_device_lists_changes_in_room()
+
+        if min_stream_id > from_id:
+            raise Exception("stream ID is too old")
+
+        sql = """
+            SELECT DISTINCT room_id FROM device_lists_changes_in_room
+            WHERE stream_id > ? AND stream_id <= ?
+        """
+
+        def _get_all_device_list_changes_txn(
+            txn: LoggingTransaction,
+        ) -> Set[str]:
+            txn.execute(sql, (from_id, to_id))
+            return {room_id for room_id, in txn}
+
+        return await self.db_pool.runInteraction(
+            "get_all_device_list_changes",
+            _get_all_device_list_changes_txn,
+        )
+
     async def get_device_list_changes_in_room(
         self, room_id: str, min_stream_id: int
     ) -> Collection[Tuple[str, str]]:
@@ -1962,8 +2005,8 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
||||||
async def add_device_change_to_streams(
|
async def add_device_change_to_streams(
|
||||||
self,
|
self,
|
||||||
user_id: str,
|
user_id: str,
|
||||||
device_ids: Collection[str],
|
device_ids: StrCollection,
|
||||||
room_ids: Collection[str],
|
room_ids: StrCollection,
|
||||||
) -> Optional[int]:
|
) -> Optional[int]:
|
||||||
"""Persist that a user's devices have been updated, and which hosts
|
"""Persist that a user's devices have been updated, and which hosts
|
||||||
(if any) should be poked.
|
(if any) should be poked.
|
||||||
|
@ -2118,12 +2161,36 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||||
},
|
},
|
||||||
)
|
)
|
||||||
|
|
||||||
|
async def mark_redundant_device_lists_pokes(
|
||||||
|
self,
|
||||||
|
user_id: str,
|
||||||
|
device_id: str,
|
||||||
|
room_id: str,
|
||||||
|
converted_upto_stream_id: int,
|
||||||
|
) -> None:
|
||||||
|
"""If we've calculated the outbound pokes for a given room/device list
|
||||||
|
update, mark any subsequent changes as already converted"""
|
||||||
|
|
||||||
|
sql = """
|
||||||
|
UPDATE device_lists_changes_in_room
|
||||||
|
SET converted_to_destinations = true
|
||||||
|
WHERE stream_id > ? AND user_id = ? AND device_id = ?
|
||||||
|
AND room_id = ? AND NOT converted_to_destinations
|
||||||
|
"""
|
||||||
|
|
||||||
|
def mark_redundant_device_lists_pokes_txn(txn: LoggingTransaction) -> None:
|
||||||
|
txn.execute(sql, (converted_upto_stream_id, user_id, device_id, room_id))
|
||||||
|
|
||||||
|
return await self.db_pool.runInteraction(
|
||||||
|
"mark_redundant_device_lists_pokes", mark_redundant_device_lists_pokes_txn
|
||||||
|
)
|
||||||
|
|
||||||
def _add_device_outbound_room_poke_txn(
|
def _add_device_outbound_room_poke_txn(
|
||||||
self,
|
self,
|
||||||
txn: LoggingTransaction,
|
txn: LoggingTransaction,
|
||||||
user_id: str,
|
user_id: str,
|
||||||
device_ids: Iterable[str],
|
device_ids: StrCollection,
|
||||||
room_ids: Collection[str],
|
room_ids: StrCollection,
|
||||||
stream_ids: List[int],
|
stream_ids: List[int],
|
||||||
context: Dict[str, str],
|
context: Dict[str, str],
|
||||||
) -> None:
|
) -> None:
|
||||||
|
@ -2161,6 +2228,10 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
|
||||||
],
|
],
|
||||||
)
|
)
|
||||||
|
|
||||||
|
txn.call_after(
|
||||||
|
self.device_lists_in_rooms_have_changed, room_ids, max(stream_ids)
|
||||||
|
)
|
||||||
|
|
||||||
async def get_uncoverted_outbound_room_pokes(
|
async def get_uncoverted_outbound_room_pokes(
|
||||||
self, start_stream_id: int, start_room_id: str, limit: int = 10
|
self, start_stream_id: int, start_room_id: str, limit: int = 10
|
||||||
) -> List[Tuple[str, str, str, int, Optional[Dict[str, str]]]]:
|
) -> List[Tuple[str, str, str, int, Optional[Dict[str, str]]]]:
|
||||||
|
|
|
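The hunks above change the stream bounds from `stream_id >= ?` to the half-open interval `stream_id > ? AND stream_id <= ?`, and batch room IDs into SQL `IN` clauses. A minimal standalone sketch of that query shape (the table layout and helper names here are simplified stand-ins for Synapse's `batch_iter`/`make_in_list_sql_clause`, not the real implementations):

```python
import sqlite3
from itertools import islice

def batch_iter(iterable, size):
    """Yield lists of at most `size` items (mirrors the role of Synapse's batch_iter)."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def get_device_list_changes_in_rooms(conn, room_ids, from_id, to_id, batch_size=1000):
    """Users whose devices changed in `room_ids` within the half-open interval
    (from_id, to_id] -- the `stream_id > ? AND stream_id <= ?` bounds above."""
    changes = set()
    for chunk in batch_iter(room_ids, batch_size):
        placeholders = ", ".join("?" for _ in chunk)
        sql = (
            "SELECT DISTINCT user_id FROM device_lists_changes_in_room "
            f"WHERE room_id IN ({placeholders}) AND stream_id > ? AND stream_id <= ?"
        )
        changes |= {user_id for (user_id,) in conn.execute(sql, (*chunk, from_id, to_id))}
    return changes

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE device_lists_changes_in_room (user_id TEXT, room_id TEXT, stream_id INTEGER)"
)
conn.executemany(
    "INSERT INTO device_lists_changes_in_room VALUES (?, ?, ?)",
    [("@a:hs", "!r1", 5), ("@b:hs", "!r1", 10), ("@c:hs", "!r2", 15)],
)
# stream_id 5 is excluded (must be strictly > from_id); 10 and 15 are included.
print(sorted(get_device_list_changes_in_rooms(conn, ["!r1", "!r2"], 5, 15)))  # ['@b:hs', '@c:hs']
```

The half-open interval means tokens compose cleanly: querying `(a, b]` then `(b, c]` never double-counts the row at `b`.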
@@ -236,7 +236,8 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
                     consent_server_notice_sent, appservice_id, creation_ts, user_type,
                     deactivated, COALESCE(shadow_banned, FALSE) AS shadow_banned,
                     COALESCE(approved, TRUE) AS approved,
-                    COALESCE(locked, FALSE) AS locked
+                    COALESCE(locked, FALSE) AS locked,
+                    suspended
                 FROM users
                 WHERE name = ?
                 """,
@@ -261,6 +262,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
                 shadow_banned,
                 approved,
                 locked,
+                suspended,
             ) = row

             return UserInfo(
@@ -277,6 +279,7 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
                 user_type=user_type,
                 approved=bool(approved),
                 locked=bool(locked),
+                suspended=bool(suspended),
             )

         return await self.db_pool.runInteraction(
@@ -1180,6 +1183,27 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
         # Convert the potential integer into a boolean.
         return bool(res)

+    @cached()
+    async def get_user_suspended_status(self, user_id: str) -> bool:
+        """
+        Determine whether the user's account is suspended.
+        Args:
+            user_id: The user ID of the user in question
+        Returns:
+            True if the user's account is suspended, false if it is not suspended or
+            if the user ID cannot be found.
+        """
+
+        res = await self.db_pool.simple_select_one_onecol(
+            table="users",
+            keyvalues={"name": user_id},
+            retcol="suspended",
+            allow_none=True,
+            desc="get_user_suspended",
+        )
+
+        return bool(res)
+
     async def get_threepid_validation_session(
         self,
         medium: Optional[str],
@@ -2213,6 +2237,35 @@ class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
         self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
         txn.call_after(self.is_guest.invalidate, (user_id,))

+    async def set_user_suspended_status(self, user_id: str, suspended: bool) -> None:
+        """
+        Set whether the user's account is suspended in the `users` table.
+
+        Args:
+            user_id: The user ID of the user in question
+            suspended: True if the user is suspended, false if not
+        """
+        await self.db_pool.runInteraction(
+            "set_user_suspended_status",
+            self.set_user_suspended_status_txn,
+            user_id,
+            suspended,
+        )
+
+    def set_user_suspended_status_txn(
+        self, txn: LoggingTransaction, user_id: str, suspended: bool
+    ) -> None:
+        self.db_pool.simple_update_one_txn(
+            txn=txn,
+            table="users",
+            keyvalues={"name": user_id},
+            updatevalues={"suspended": suspended},
+        )
+        self._invalidate_cache_and_stream(
+            txn, self.get_user_suspended_status, (user_id,)
+        )
+        self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
+
     async def set_user_locked_status(self, user_id: str, locked: bool) -> None:
         """Set the `locked` property for the provided user to the provided value.
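Note the pairing above: `get_user_suspended_status` is `@cached()`, so `set_user_suspended_status_txn` must explicitly invalidate that cache (and `get_user_by_id`, whose row now carries the column too) or readers would see stale values. A toy illustration of that contract using `functools.lru_cache` as a stand-in for Synapse's `@cached` (the backing dict and names are made up for the sketch):

```python
from functools import lru_cache

_db = {"@alice:hs": False}  # pretend `users.suspended` column

@lru_cache(maxsize=None)
def get_user_suspended_status(user_id: str) -> bool:
    return bool(_db.get(user_id))

def set_user_suspended_status(user_id: str, suspended: bool) -> None:
    _db[user_id] = suspended
    # Mirrors _invalidate_cache_and_stream(txn, self.get_user_suspended_status, (user_id,)):
    # without this, the cached getter keeps returning the old value.
    get_user_suspended_status.cache_clear()

assert get_user_suspended_status("@alice:hs") is False
set_user_suspended_status("@alice:hs", True)
print(get_user_suspended_status("@alice:hs"))  # True
```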
@@ -21,13 +21,11 @@
 #

 import logging
-from abc import abstractmethod
 from enum import Enum
 from typing import (
     TYPE_CHECKING,
     AbstractSet,
     Any,
-    Awaitable,
     Collection,
     Dict,
     List,
@@ -53,7 +51,7 @@ from synapse.api.room_versions import RoomVersion, RoomVersions
 from synapse.config.homeserver import HomeServerConfig
 from synapse.events import EventBase
 from synapse.replication.tcp.streams.partial_state import UnPartialStatedRoomStream
-from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
+from synapse.storage._base import db_to_json, make_in_list_sql_clause
 from synapse.storage.database import (
     DatabasePool,
     LoggingDatabaseConnection,
@@ -1684,6 +1682,58 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):

         return True

+    async def set_room_is_public(self, room_id: str, is_public: bool) -> None:
+        await self.db_pool.simple_update_one(
+            table="rooms",
+            keyvalues={"room_id": room_id},
+            updatevalues={"is_public": is_public},
+            desc="set_room_is_public",
+        )
+
+    async def set_room_is_public_appservice(
+        self, room_id: str, appservice_id: str, network_id: str, is_public: bool
+    ) -> None:
+        """Edit the appservice/network specific public room list.
+
+        Each appservice can have a number of published room lists associated
+        with them, keyed off of an appservice defined `network_id`, which
+        basically represents a single instance of a bridge to a third party
+        network.
+
+        Args:
+            room_id
+            appservice_id
+            network_id
+            is_public: Whether to publish or unpublish the room from the list.
+        """
+
+        if is_public:
+            await self.db_pool.simple_upsert(
+                table="appservice_room_list",
+                keyvalues={
+                    "appservice_id": appservice_id,
+                    "network_id": network_id,
+                    "room_id": room_id,
+                },
+                values={},
+                insertion_values={
+                    "appservice_id": appservice_id,
+                    "network_id": network_id,
+                    "room_id": room_id,
+                },
+                desc="set_room_is_public_appservice_true",
+            )
+        else:
+            await self.db_pool.simple_delete(
+                table="appservice_room_list",
+                keyvalues={
+                    "appservice_id": appservice_id,
+                    "network_id": network_id,
+                    "room_id": room_id,
+                },
+                desc="set_room_is_public_appservice_false",
+            )
+

 class _BackgroundUpdates:
     REMOVE_TOMESTONED_ROOMS_BG_UPDATE = "remove_tombstoned_rooms_from_directory"
@@ -1702,7 +1752,7 @@ _REPLACE_ROOM_DEPTH_SQL_COMMANDS = (
 )


-class RoomBackgroundUpdateStore(SQLBaseStore):
+class RoomBackgroundUpdateStore(RoomWorkerStore):
     def __init__(
         self,
         database: DatabasePool,
@@ -1935,14 +1985,6 @@ class RoomBackgroundUpdateStore(SQLBaseStore):

         return len(rooms)

-    @abstractmethod
-    def set_room_is_public(self, room_id: str, is_public: bool) -> Awaitable[None]:
-        # this will need to be implemented if a background update is performed with
-        # existing (tombstoned, public) rooms in the database.
-        #
-        # It's overridden by RoomStore for the synapse master.
-        raise NotImplementedError()
-
     async def has_auth_chain_index(self, room_id: str) -> bool:
         """Check if the room has (or can have) a chain cover index.

@@ -2349,62 +2391,6 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
             },
         )

-    async def set_room_is_public(self, room_id: str, is_public: bool) -> None:
-        await self.db_pool.simple_update_one(
-            table="rooms",
-            keyvalues={"room_id": room_id},
-            updatevalues={"is_public": is_public},
-            desc="set_room_is_public",
-        )
-
-        self.hs.get_notifier().on_new_replication_data()
-
-    async def set_room_is_public_appservice(
-        self, room_id: str, appservice_id: str, network_id: str, is_public: bool
-    ) -> None:
-        """Edit the appservice/network specific public room list.
-
-        Each appservice can have a number of published room lists associated
-        with them, keyed off of an appservice defined `network_id`, which
-        basically represents a single instance of a bridge to a third party
-        network.
-
-        Args:
-            room_id
-            appservice_id
-            network_id
-            is_public: Whether to publish or unpublish the room from the list.
-        """
-
-        if is_public:
-            await self.db_pool.simple_upsert(
-                table="appservice_room_list",
-                keyvalues={
-                    "appservice_id": appservice_id,
-                    "network_id": network_id,
-                    "room_id": room_id,
-                },
-                values={},
-                insertion_values={
-                    "appservice_id": appservice_id,
-                    "network_id": network_id,
-                    "room_id": room_id,
-                },
-                desc="set_room_is_public_appservice_true",
-            )
-        else:
-            await self.db_pool.simple_delete(
-                table="appservice_room_list",
-                keyvalues={
-                    "appservice_id": appservice_id,
-                    "network_id": network_id,
-                    "room_id": room_id,
-                },
-                desc="set_room_is_public_appservice_false",
-            )
-
-        self.hs.get_notifier().on_new_replication_data()
-
     async def add_event_report(
         self,
         room_id: str,
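The room-store hunks are a pull-up refactor: `set_room_is_public` and `set_room_is_public_appservice` move from the master-only `RoomStore` down into `RoomWorkerStore`, so the `@abstractmethod` stub on the background-update store can be deleted and `RoomBackgroundUpdateStore` now inherits a real implementation. A toy model of why that hierarchy change works (class and method names mimic the diff, bodies are stand-ins):

```python
class RoomWorkerStore:
    """Base store available on every process; now owns the real implementation."""

    def __init__(self) -> None:
        self.rooms: dict[str, bool] = {}  # room_id -> is_public

    def set_room_is_public(self, room_id: str, is_public: bool) -> None:
        self.rooms[room_id] = is_public


class RoomBackgroundUpdateStore(RoomWorkerStore):
    # Background updates (e.g. removing tombstoned rooms from the directory)
    # can now call set_room_is_public directly, instead of relying on an
    # abstract stub that only the master overrode.
    def remove_tombstoned_room(self, room_id: str) -> None:
        self.set_room_is_public(room_id, False)


class RoomStore(RoomBackgroundUpdateStore):
    pass  # no override needed any more


store = RoomStore()
store.set_room_is_public("!r:hs", True)
store.remove_tombstoned_room("!r:hs")
print(store.rooms)  # {'!r:hs': False}
```

(The real change also drops the `self.hs.get_notifier().on_new_replication_data()` calls from the moved methods, visible in the `-` lines above.)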
@@ -19,7 +19,7 @@
 #
 #

-SCHEMA_VERSION = 84  # remember to update the list below when updating
+SCHEMA_VERSION = 85  # remember to update the list below when updating
 """Represents the expectations made by the codebase about the database schema

 This should be incremented whenever the codebase changes its requirements on the
@@ -136,6 +136,9 @@ Changes in SCHEMA_VERSION = 83
 Changes in SCHEMA_VERSION = 84
     - No longer assumes that `event_auth_chain_links` holds transitive links, and
       so read operations must do graph traversal.
+
+Changes in SCHEMA_VERSION = 85
+    - Add a column `suspended` to the `users` table
 """
synapse/storage/schema/main/delta/85/01_add_suspended.sql (new file, 14 lines)
@@ -0,0 +1,14 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+ALTER TABLE users ADD COLUMN suspended BOOLEAN DEFAULT FALSE NOT NULL;
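Because the new column is `NOT NULL` with a default, existing rows are backfilled at migration time, so the `SELECT` does not need to wrap `suspended` in `COALESCE` the way it must for the older, nullable `locked`/`approved` columns. A small sqlite3 demonstration of that difference (a stand-in `users` table; sqlite spells the default `0` where Postgres would accept `FALSE`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Older columns like `locked` are nullable, so pre-existing rows can hold NULL
# -- hence the COALESCE(locked, FALSE) in the SELECT.
conn.execute("CREATE TABLE users (name TEXT, locked BOOLEAN)")
conn.execute("INSERT INTO users VALUES ('@alice:hs', NULL)")

# The 85/01_add_suspended.sql migration: NOT NULL plus a default means the
# existing row is backfilled, and no COALESCE is needed for the new column.
conn.execute("ALTER TABLE users ADD COLUMN suspended BOOLEAN DEFAULT 0 NOT NULL")

row = conn.execute(
    "SELECT COALESCE(locked, 0) AS locked, suspended FROM users WHERE name = ?",
    ("@alice:hs",),
).fetchone()
locked, suspended = (bool(v) for v in row)
print(locked, suspended)  # False False
```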
@@ -1156,6 +1156,7 @@ class UserInfo:
         user_type: User type (None for normal user, 'support' and 'bot' other options).
         approved: If the user has been "approved" to register on the server.
         locked: Whether the user's account has been locked
+        suspended: Whether the user's account is currently suspended
     """

     user_id: UserID
@@ -1171,6 +1172,7 @@ class UserInfo:
     is_shadow_banned: bool
     approved: bool
     locked: bool
+    suspended: bool


 class UserProfile(TypedDict):
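The `UserInfo` change is the last link in a positional chain: the SQL `SELECT` adds a column, the tuple unpacking in `get_user_by_id` gains a matching name, the raw integer is coerced with `bool()`, and the attrs class grows a field. A compressed illustration of that chain using a stdlib dataclass in place of the attrs class (field set trimmed to three booleans for the sketch):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserInfo:
    approved: bool
    locked: bool
    suspended: bool

# e.g. (approved, locked, suspended) straight from the database row: integers,
# so each must be coerced -- forgetting any step breaks the whole chain.
row = (1, 0, 1)
approved, locked, suspended = row
info = UserInfo(approved=bool(approved), locked=bool(locked), suspended=bool(suspended))
print(info)  # UserInfo(approved=True, locked=False, suspended=True)
```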
@@ -115,7 +115,7 @@ class StreamChangeCache:
         """
         new_size = math.floor(self._original_max_size * factor)
         if new_size != self._max_size:
-            self.max_size = new_size
+            self._max_size = new_size
             self._evict()
             return True
         return False
@@ -165,7 +165,7 @@ class StreamChangeCache:
         return False

     def get_entities_changed(
-        self, entities: Collection[EntityType], stream_pos: int
+        self, entities: Collection[EntityType], stream_pos: int, _perf_factor: int = 1
     ) -> Union[Set[EntityType], FrozenSet[EntityType]]:
         """
         Returns the subset of the given entities that have had changes after the given position.
@@ -177,6 +177,8 @@ class StreamChangeCache:
         Args:
             entities: Entities to check for changes.
             stream_pos: The stream position to check for changes after.
+            _perf_factor: Used by unit tests to choose when to use each
+                optimisation.

         Return:
             A subset of entities which have changed after the given stream position.
@@ -184,6 +186,22 @@ class StreamChangeCache:
             This will be all entities if the given stream position is at or earlier
             than the earliest known stream position.
         """
+        if not self._cache or stream_pos <= self._earliest_known_stream_pos:
+            self.metrics.inc_misses()
+            return set(entities)
+
+        # If there have been tonnes of changes compared with the number of
+        # entities, it is faster to check each entities stream ordering
+        # one-by-one.
+        max_stream_pos, _ = self._cache.peekitem()
+        if max_stream_pos - stream_pos > _perf_factor * len(entities):
+            self.metrics.inc_hits()
+            return {
+                entity
+                for entity in entities
+                if self._entity_to_key.get(entity, -1) > stream_pos
+            }
+
         cache_result = self.get_all_entities_changed(stream_pos)
         if cache_result.hit:
             # We now do an intersection, trying to do so in the most efficient
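The new fast path in `get_entities_changed` picks between two strategies: when the number of changes since `stream_pos` dwarfs the number of entities being asked about, it is cheaper to probe each entity's last-change position than to enumerate every change and intersect. A minimal model of that heuristic (the class below is a stripped-down stand-in, not the real `StreamChangeCache`):

```python
class MiniStreamChangeCache:
    def __init__(self) -> None:
        self._entity_to_key: dict[str, int] = {}  # entity -> last stream pos it changed at

    def note_change(self, entity: str, stream_pos: int) -> None:
        self._entity_to_key[entity] = stream_pos

    def get_entities_changed(self, entities, stream_pos, _perf_factor=1):
        max_pos = max(self._entity_to_key.values(), default=stream_pos)
        if max_pos - stream_pos > _perf_factor * len(entities):
            # Many changes, few entities: probe each entity individually.
            return {e for e in entities if self._entity_to_key.get(e, -1) > stream_pos}
        # Otherwise: enumerate everything changed after stream_pos and intersect.
        changed = {e for e, pos in self._entity_to_key.items() if pos > stream_pos}
        return changed & set(entities)

cache = MiniStreamChangeCache()
for pos, entity in enumerate(["@a:hs", "@b:hs", "@c:hs"], start=1):
    cache.note_change(entity, pos)
print(sorted(cache.get_entities_changed({"@b:hs", "@c:hs"}, 1)))  # ['@b:hs', '@c:hs']
```

`_perf_factor` exists so unit tests can force either branch; both must return the same answer, only the cost profile differs.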
@@ -24,7 +24,12 @@ from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional, Set

 from twisted.python.failure import Failure

-from synapse.logging.context import nested_logging_context
+from synapse.logging.context import (
+    ContextResourceUsage,
+    LoggingContext,
+    nested_logging_context,
+    set_current_context,
+)
 from synapse.metrics import LaterGauge
 from synapse.metrics.background_process_metrics import (
     run_as_background_process,
@@ -81,6 +86,8 @@ class TaskScheduler:
     MAX_CONCURRENT_RUNNING_TASKS = 5
     # Time from the last task update after which we will log a warning
     LAST_UPDATE_BEFORE_WARNING_MS = 24 * 60 * 60 * 1000  # 24hrs
+    # Report a running task's status and usage every so often.
+    OCCASIONAL_REPORT_INTERVAL_MS = 5 * 60 * 1000  # 5 minutes

     def __init__(self, hs: "HomeServer"):
         self._hs = hs
@@ -346,6 +353,33 @@ class TaskScheduler:
         assert task.id not in self._running_tasks
         await self._store.delete_scheduled_task(task.id)

+    @staticmethod
+    def _log_task_usage(
+        state: str, task: ScheduledTask, usage: ContextResourceUsage, active_time: float
+    ) -> None:
+        """
+        Log a line describing the state and usage of a task.
+        The log line is inspired by / a copy of the request log line format,
+        but with irrelevant fields removed.
+
+        active_time: Time that the task has been running for, in seconds.
+        """
+
+        logger.info(
+            "Task %s: %.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d)"
+            " [%d dbevts] %r, %r",
+            state,
+            active_time,
+            usage.ru_utime,
+            usage.ru_stime,
+            usage.db_sched_duration_sec,
+            usage.db_txn_duration_sec,
+            int(usage.db_txn_count),
+            usage.evt_db_fetch_count,
+            task.resource_id,
+            task.params,
+        )
+
     async def _launch_task(self, task: ScheduledTask) -> None:
         """Launch a scheduled task now.

@@ -360,8 +394,32 @@
         )
         function = self._actions[task.action]

+        def _occasional_report(
+            task_log_context: LoggingContext, start_time: float
+        ) -> None:
+            """
+            Helper to log a 'Task continuing' line every so often.
+            """
+            current_time = self._clock.time()
+            calling_context = set_current_context(task_log_context)
+            try:
+                usage = task_log_context.get_resource_usage()
+                TaskScheduler._log_task_usage(
+                    "continuing", task, usage, current_time - start_time
+                )
+            finally:
+                set_current_context(calling_context)
+
         async def wrapper() -> None:
-            with nested_logging_context(task.id):
+            with nested_logging_context(task.id) as log_context:
+                start_time = self._clock.time()
+                occasional_status_call = self._clock.looping_call(
+                    _occasional_report,
+                    TaskScheduler.OCCASIONAL_REPORT_INTERVAL_MS,
+                    log_context,
+                    start_time,
+                )
                 try:
                     (status, result, error) = await function(task)
                 except Exception:
@@ -383,6 +441,13 @@
                 )
                 self._running_tasks.remove(task.id)

+                current_time = self._clock.time()
+                usage = log_context.get_resource_usage()
+                TaskScheduler._log_task_usage(
+                    status.value, task, usage, current_time - start_time
+                )
+                occasional_status_call.stop()
+
                 # Try launch a new task since we've finished with this one.
                 self._clock.call_later(0.1, self._launch_scheduled_tasks)
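To make the new `_log_task_usage` output concrete, here is the same format string evaluated with made-up numbers (only the format layout is taken from the diff; the values and the small wrapper function are illustrative):

```python
def format_task_usage_line(state, resource_id, params, active_time,
                           ru_utime, ru_stime, db_sched, db_txn,
                           txn_count, evt_fetches):
    """Render the request-style usage line used by _log_task_usage."""
    return (
        "Task %s: %.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d)"
        " [%d dbevts] %r, %r"
        % (state, active_time, ru_utime, ru_stime,
           db_sched, db_txn, int(txn_count), evt_fetches, resource_id, params)
    )

line = format_task_usage_line(
    "complete", "!room:hs", {"limit": 10},
    active_time=1.5, ru_utime=0.2, ru_stime=0.05,
    db_sched=0.01, db_txn=0.3, txn_count=4, evt_fetches=12,
)
print(line)
# Task complete: 1.500sec (0.200sec, 0.050sec) (0.010sec/0.300sec/4) [12 dbevts] '!room:hs', {'limit': 10}
```

The fields read: wall-clock active time, then (user CPU, system CPU), then (DB scheduling wait / DB transaction time / transaction count), then event-fetch count, resource ID, and task parameters.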
@@ -36,10 +36,15 @@ from typing import (

 import attr

-from synapse.api.constants import EventTypes, HistoryVisibility, Membership
+from synapse.api.constants import (
+    EventTypes,
+    EventUnsignedContentFields,
+    HistoryVisibility,
+    Membership,
+)
 from synapse.events import EventBase
 from synapse.events.snapshot import EventContext
-from synapse.events.utils import prune_event
+from synapse.events.utils import clone_event, prune_event
 from synapse.logging.opentracing import trace
 from synapse.storage.controllers import StorageControllers
 from synapse.storage.databases.main import DataStore
@@ -77,6 +82,7 @@ async def filter_events_for_client(
     is_peeking: bool = False,
     always_include_ids: FrozenSet[str] = frozenset(),
     filter_send_to_client: bool = True,
+    msc4115_membership_on_events: bool = False,
 ) -> List[EventBase]:
     """
     Check which events a user is allowed to see. If the user can see the event but its
@@ -95,9 +101,12 @@
         filter_send_to_client: Whether we're checking an event that's going to be
             sent to a client. This might not always be the case since this function can
             also be called to check whether a user can see the state at a given point.
+        msc4115_membership_on_events: Whether to include the requesting user's
+            membership in the "unsigned" data, per MSC4115.

     Returns:
-        The filtered events.
+        The filtered events. If `msc4115_membership_on_events` is true, the `unsigned`
+        data is annotated with the membership state of `user_id` at each event.
     """
     # Filter out events that have been soft failed so that we don't relay them
     # to clients.
@@ -134,7 +143,8 @@
     )

     def allowed(event: EventBase) -> Optional[EventBase]:
-        return _check_client_allowed_to_see_event(
+        state_after_event = event_id_to_state.get(event.event_id)
+        filtered = _check_client_allowed_to_see_event(
             user_id=user_id,
             event=event,
             clock=storage.main.clock,
@@ -142,13 +152,45 @@
             sender_ignored=event.sender in ignore_list,
             always_include_ids=always_include_ids,
             retention_policy=retention_policies[room_id],
-            state=event_id_to_state.get(event.event_id),
+            state=state_after_event,
             is_peeking=is_peeking,
             sender_erased=erased_senders.get(event.sender, False),
         )
+        if filtered is None:
+            return None
+
+        if not msc4115_membership_on_events:
+            return filtered
+
+        # Annotate the event with the user's membership after the event.
+        #
+        # Normally we just look in `state_after_event`, but if the event is an outlier
+        # we won't have such a state. The only outliers that are returned here are the
+        # user's own membership event, so we can just inspect that.
+        user_membership_event: Optional[EventBase]
+        if event.type == EventTypes.Member and event.state_key == user_id:
+            user_membership_event = event
+        elif state_after_event is not None:
+            user_membership_event = state_after_event.get((EventTypes.Member, user_id))
+        else:
+            # unreachable!
+            raise Exception("Missing state for event that is not user's own membership")
+
+        user_membership = (
+            user_membership_event.membership
+            if user_membership_event
+            else Membership.LEAVE
+        )
+
+        # Copy the event before updating the unsigned data: this shouldn't be persisted
+        # to the cache!
+        cloned = clone_event(filtered)
+        cloned.unsigned[EventUnsignedContentFields.MSC4115_MEMBERSHIP] = user_membership
+
+        return cloned

-    # Check each event: gives an iterable of None or (a potentially modified)
-    # EventBase.
+    # Check each event: gives an iterable of None or (a modified) EventBase.
     filtered_events = map(allowed, events)

     # Turn it into a list and remove None entries before returning.
@@ -396,7 +438,13 @@ def _check_client_allowed_to_see_event(

 @attr.s(frozen=True, slots=True, auto_attribs=True)
 class _CheckMembershipReturn:
-    "Return value of _check_membership"
+    """Return value of `_check_membership`.
+
+    Attributes:
+        allowed: Whether the user should be allowed to see the event.
+        joined: Whether the user was joined to the room at the event.
+    """
+
     allowed: bool
     joined: bool
@@ -408,12 +456,7 @@ def _check_membership(
     state: StateMap[EventBase],
     is_peeking: bool,
 ) -> _CheckMembershipReturn:
-    """Check whether the user can see the event due to their membership
-
-    Returns:
-        True if they can, False if they can't, plus the membership of the user
-        at the event.
-    """
+    """Check whether the user can see the event due to their membership"""
     # If the event is the user's own membership event, use the 'most joined'
|
||||||
# membership
|
# membership
|
||||||
membership = None
|
membership = None
|
||||||
|
@ -435,7 +478,7 @@ def _check_membership(
|
||||||
if membership == "leave" and (
|
if membership == "leave" and (
|
||||||
prev_membership == "join" or prev_membership == "invite"
|
prev_membership == "join" or prev_membership == "invite"
|
||||||
):
|
):
|
||||||
return _CheckMembershipReturn(True, membership == Membership.JOIN)
|
return _CheckMembershipReturn(True, False)
|
||||||
|
|
||||||
new_priority = MEMBERSHIP_PRIORITY.index(membership)
|
new_priority = MEMBERSHIP_PRIORITY.index(membership)
|
||||||
old_priority = MEMBERSHIP_PRIORITY.index(prev_membership)
|
old_priority = MEMBERSHIP_PRIORITY.index(prev_membership)
|
||||||
|
|
|
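The hunk above clones the event before writing the membership annotation into `unsigned`, so the copy held in the event cache is never mutated. A minimal standalone sketch of that copy-before-annotate pattern (the `CachedEvent` class and the unsigned key name here are hypothetical stand-ins, not Synapse's actual types):

```python
import copy
from typing import Any, Dict


class CachedEvent:
    """Hypothetical stand-in for an event object held in a shared cache."""

    def __init__(self, event_id: str, unsigned: Dict[str, Any]) -> None:
        self.event_id = event_id
        self.unsigned = unsigned


def annotate_membership(event: CachedEvent, membership: str) -> CachedEvent:
    # Copy before mutating: the original may be shared via a cache, so a
    # per-requester annotation must not leak into other requesters' views.
    cloned = CachedEvent(event.event_id, copy.deepcopy(event.unsigned))
    cloned.unsigned["example.membership"] = membership  # hypothetical key
    return cloned


cached = CachedEvent("$abc", {"age": 1})
annotated = annotate_membership(cached, "join")
assert "example.membership" not in cached.unsigned  # cache copy untouched
assert annotated.unsigned["example.membership"] == "join"
```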
@@ -116,8 +116,9 @@ class TestRatelimiter(unittest.HomeserverTestCase):
         # Should raise
         with self.assertRaises(LimitExceededError) as context:
             self.get_success_or_raise(
-                limiter.ratelimit(None, key="test_id", _time_now_s=5)
+                limiter.ratelimit(None, key="test_id", _time_now_s=5), by=0.5
             )

         self.assertEqual(context.exception.retry_after_ms, 5000)

         # Shouldn't raise
@@ -192,7 +193,7 @@ class TestRatelimiter(unittest.HomeserverTestCase):
         # Second attempt, 1s later, will fail
         with self.assertRaises(LimitExceededError) as context:
             self.get_success_or_raise(
-                limiter.ratelimit(None, key=("test_id",), _time_now_s=1)
+                limiter.ratelimit(None, key=("test_id",), _time_now_s=1), by=0.5
             )
         self.assertEqual(context.exception.retry_after_ms, 9000)
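The ratelimiter tests above assert on `retry_after_ms` after a rejected attempt. As a rough illustration of how a sliding-window limiter derives such a retry interval (a simplified sketch, not Synapse's actual `Ratelimiter`):

```python
from typing import Dict, List, Tuple


class SimpleRateLimiter:
    """Allows at most `burst` actions per `period_s` seconds. Simplified sketch."""

    def __init__(self, burst: int, period_s: float) -> None:
        self.burst = burst
        self.period_s = period_s
        self.history: Dict[str, List[float]] = {}  # key -> action timestamps

    def try_action(self, key: str, now_s: float) -> Tuple[bool, float]:
        """Returns (allowed, retry_after_s)."""
        recent = [t for t in self.history.get(key, []) if t > now_s - self.period_s]
        if len(recent) >= self.burst:
            # Allowed again once the oldest recent action leaves the window.
            retry_after = recent[0] + self.period_s - now_s
            self.history[key] = recent
            return False, retry_after
        recent.append(now_s)
        self.history[key] = recent
        return True, 0.0


limiter = SimpleRateLimiter(burst=1, period_s=10.0)
assert limiter.try_action("test_id", now_s=0.0) == (True, 0.0)
# A second attempt at t=5s is rejected; retry in 5s (cf. retry_after_ms == 5000).
assert limiter.try_action("test_id", now_s=5.0) == (False, 5.0)
```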
657 tests/events/test_auto_accept_invites.py (new file)
@@ -0,0 +1,657 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2021 The Matrix.org Foundation C.I.C
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import asyncio
from asyncio import Future
from http import HTTPStatus
from typing import Any, Awaitable, Dict, List, Optional, Tuple, TypeVar, cast
from unittest.mock import Mock

import attr
from parameterized import parameterized

from twisted.test.proto_helpers import MemoryReactor

from synapse.api.constants import EventTypes
from synapse.api.errors import SynapseError
from synapse.config.auto_accept_invites import AutoAcceptInvitesConfig
from synapse.events.auto_accept_invites import InviteAutoAccepter
from synapse.federation.federation_base import event_from_pdu_json
from synapse.handlers.sync import JoinedSyncResult, SyncRequestKey, SyncVersion
from synapse.module_api import ModuleApi
from synapse.rest import admin
from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.types import StreamToken, create_requester
from synapse.util import Clock

from tests.handlers.test_sync import generate_sync_config
from tests.unittest import (
    FederatingHomeserverTestCase,
    HomeserverTestCase,
    TestCase,
    override_config,
)


class AutoAcceptInvitesTestCase(FederatingHomeserverTestCase):
    """
    Integration test cases for auto-accepting invites.
    """

    servlets = [
        admin.register_servlets,
        login.register_servlets,
        room.register_servlets,
    ]

    def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
        hs = self.setup_test_homeserver()
        self.handler = hs.get_federation_handler()
        self.store = hs.get_datastores().main
        return hs

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.sync_handler = self.hs.get_sync_handler()
        self.module_api = hs.get_module_api()

    @parameterized.expand(
        [
            [False],
            [True],
        ]
    )
    @override_config(
        {
            "auto_accept_invites": {
                "enabled": True,
            },
        }
    )
    def test_auto_accept_invites(self, direct_room: bool) -> None:
        """Test that a user automatically joins a room when invited, if the
        module is enabled.
        """
        # A local user who sends an invite
        inviting_user_id = self.register_user("inviter", "pass")
        inviting_user_tok = self.login("inviter", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        # Create a room and send an invite to the other user
        room_id = self.helper.create_room_as(
            inviting_user_id,
            is_public=False,
            tok=inviting_user_tok,
        )

        self.helper.invite(
            room_id,
            inviting_user_id,
            invited_user_id,
            tok=inviting_user_tok,
            extra_data={"is_direct": direct_room},
        )

        # Check that the invite receiving user has automatically joined the room when syncing
        join_updates, _ = sync_join(self, invited_user_id)
        self.assertEqual(len(join_updates), 1)

        join_update: JoinedSyncResult = join_updates[0]
        self.assertEqual(join_update.room_id, room_id)

    @override_config(
        {
            "auto_accept_invites": {
                "enabled": False,
            },
        }
    )
    def test_module_not_enabled(self) -> None:
        """Test that a user does not automatically join a room when invited,
        if the module is not enabled.
        """
        # A local user who sends an invite
        inviting_user_id = self.register_user("inviter", "pass")
        inviting_user_tok = self.login("inviter", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        # Create a room and send an invite to the other user
        room_id = self.helper.create_room_as(
            inviting_user_id, is_public=False, tok=inviting_user_tok
        )

        self.helper.invite(
            room_id,
            inviting_user_id,
            invited_user_id,
            tok=inviting_user_tok,
        )

        # Check that the invite receiving user has not automatically joined the room when syncing
        join_updates, _ = sync_join(self, invited_user_id)
        self.assertEqual(len(join_updates), 0)

    @override_config(
        {
            "auto_accept_invites": {
                "enabled": True,
            },
        }
    )
    def test_invite_from_remote_user(self) -> None:
        """Test that an invite from a remote user results in the invited user
        automatically joining the room.
        """
        # A remote user who sends the invite
        remote_server = "otherserver"
        remote_user = "@otheruser:" + remote_server

        # A local user who creates the room
        creator_user_id = self.register_user("creator", "pass")
        creator_user_tok = self.login("creator", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        room_id = self.helper.create_room_as(
            room_creator=creator_user_id, tok=creator_user_tok
        )
        room_version = self.get_success(self.store.get_room_version(room_id))

        invite_event = event_from_pdu_json(
            {
                "type": EventTypes.Member,
                "content": {"membership": "invite"},
                "room_id": room_id,
                "sender": remote_user,
                "state_key": invited_user_id,
                "depth": 32,
                "prev_events": [],
                "auth_events": [],
                "origin_server_ts": self.clock.time_msec(),
            },
            room_version,
        )
        self.get_success(
            self.handler.on_invite_request(
                remote_server,
                invite_event,
                invite_event.room_version,
            )
        )

        # Check that the invite receiving user has automatically joined the room when syncing
        join_updates, _ = sync_join(self, invited_user_id)
        self.assertEqual(len(join_updates), 1)

        join_update: JoinedSyncResult = join_updates[0]
        self.assertEqual(join_update.room_id, room_id)

    @parameterized.expand(
        [
            [False, False],
            [True, True],
        ]
    )
    @override_config(
        {
            "auto_accept_invites": {
                "enabled": True,
                "only_for_direct_messages": True,
            },
        }
    )
    def test_accept_invite_direct_message(
        self,
        direct_room: bool,
        expect_auto_join: bool,
    ) -> None:
        """Tests that, if the module is configured to only accept DM invites, invites to DM rooms are still
        automatically accepted. Otherwise they are rejected.
        """
        # A local user who sends an invite
        inviting_user_id = self.register_user("inviter", "pass")
        inviting_user_tok = self.login("inviter", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        # Create a room and send an invite to the other user
        room_id = self.helper.create_room_as(
            inviting_user_id,
            is_public=False,
            tok=inviting_user_tok,
        )

        self.helper.invite(
            room_id,
            inviting_user_id,
            invited_user_id,
            tok=inviting_user_tok,
            extra_data={"is_direct": direct_room},
        )

        if expect_auto_join:
            # Check that the invite receiving user has automatically joined the room when syncing
            join_updates, _ = sync_join(self, invited_user_id)
            self.assertEqual(len(join_updates), 1)

            join_update: JoinedSyncResult = join_updates[0]
            self.assertEqual(join_update.room_id, room_id)
        else:
            # Check that the invite receiving user has not automatically joined the room when syncing
            join_updates, _ = sync_join(self, invited_user_id)
            self.assertEqual(len(join_updates), 0)

    @parameterized.expand(
        [
            [False, True],
            [True, False],
        ]
    )
    @override_config(
        {
            "auto_accept_invites": {
                "enabled": True,
                "only_from_local_users": True,
            },
        }
    )
    def test_accept_invite_local_user(
        self, remote_inviter: bool, expect_auto_join: bool
    ) -> None:
        """Tests that, if the module is configured to only accept invites from local users, invites
        from local users are still automatically accepted. Otherwise they are rejected.
        """
        # A local user who sends an invite
        creator_user_id = self.register_user("inviter", "pass")
        creator_user_tok = self.login("inviter", "pass")

        # A local user who receives an invite
        invited_user_id = self.register_user("invitee", "pass")
        self.login("invitee", "pass")

        # Create a room and send an invite to the other user
        room_id = self.helper.create_room_as(
            creator_user_id, is_public=False, tok=creator_user_tok
        )

        if remote_inviter:
            room_version = self.get_success(self.store.get_room_version(room_id))

            # A remote user who sends the invite
            remote_server = "otherserver"
            remote_user = "@otheruser:" + remote_server

            invite_event = event_from_pdu_json(
                {
                    "type": EventTypes.Member,
                    "content": {"membership": "invite"},
                    "room_id": room_id,
                    "sender": remote_user,
                    "state_key": invited_user_id,
                    "depth": 32,
                    "prev_events": [],
                    "auth_events": [],
                    "origin_server_ts": self.clock.time_msec(),
                },
                room_version,
            )
            self.get_success(
                self.handler.on_invite_request(
                    remote_server,
                    invite_event,
                    invite_event.room_version,
                )
            )
        else:
            self.helper.invite(
                room_id,
                creator_user_id,
                invited_user_id,
                tok=creator_user_tok,
            )

        if expect_auto_join:
            # Check that the invite receiving user has automatically joined the room when syncing
            join_updates, _ = sync_join(self, invited_user_id)
            self.assertEqual(len(join_updates), 1)

            join_update: JoinedSyncResult = join_updates[0]
            self.assertEqual(join_update.room_id, room_id)
        else:
            # Check that the invite receiving user has not automatically joined the room when syncing
            join_updates, _ = sync_join(self, invited_user_id)
            self.assertEqual(len(join_updates), 0)


_request_key = 0


def generate_request_key() -> SyncRequestKey:
    global _request_key
    _request_key += 1
    return ("request_key", _request_key)


def sync_join(
    testcase: HomeserverTestCase,
    user_id: str,
    since_token: Optional[StreamToken] = None,
) -> Tuple[List[JoinedSyncResult], StreamToken]:
    """Perform a sync request for the given user and return the user join updates
    they've received, as well as the next_batch token.

    This method assumes testcase.sync_handler points to the homeserver's sync handler.

    Args:
        testcase: The testcase that is currently being run.
        user_id: The ID of the user to generate a sync response for.
        since_token: An optional token to indicate from at what point to sync from.

    Returns:
        A tuple containing a list of join updates, and the sync response's
        next_batch token.
    """
    requester = create_requester(user_id)
    sync_config = generate_sync_config(requester.user.to_string())
    sync_result = testcase.get_success(
        testcase.hs.get_sync_handler().wait_for_sync_for_user(
            requester,
            sync_config,
            SyncVersion.SYNC_V2,
            generate_request_key(),
            since_token,
        )
    )

    return sync_result.joined, sync_result.next_batch


class InviteAutoAccepterInternalTestCase(TestCase):
    """
    Test cases which exercise the internals of the InviteAutoAccepter.
    """

    def setUp(self) -> None:
        self.module = create_module()
        self.user_id = "@peter:test"
        self.invitee = "@lesley:test"
        self.remote_invitee = "@thomas:remote"

        # We know our module API is a mock, but mypy doesn't.
        self.mocked_update_membership: Mock = self.module._api.update_room_membership  # type: ignore[assignment]

    async def test_accept_invite_with_failures(self) -> None:
        """Tests that receiving an invite for a local user makes the module attempt to
        make the invitee join the room. This test verifies that it works if the call to
        update membership returns exceptions before successfully completing and returning an event.
        """
        invite = MockEvent(
            sender="@inviter:test",
            state_key="@invitee:test",
            type="m.room.member",
            content={"membership": "invite"},
        )

        join_event = MockEvent(
            sender="someone",
            state_key="someone",
            type="m.room.member",
            content={"membership": "join"},
        )
        # the first two calls raise an exception while the third call is successful
        self.mocked_update_membership.side_effect = [
            SynapseError(HTTPStatus.FORBIDDEN, "Forbidden"),
            SynapseError(HTTPStatus.FORBIDDEN, "Forbidden"),
            make_awaitable(join_event),
        ]

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        await self.retry_assertions(
            self.mocked_update_membership,
            3,
            sender=invite.state_key,
            target=invite.state_key,
            room_id=invite.room_id,
            new_membership="join",
        )

    async def test_accept_invite_failures(self) -> None:
        """Tests that receiving an invite for a local user makes the module attempt to
        make the invitee join the room. This test verifies that if the update_membership call
        fails consistently, _retry_make_join will break the loop after the set number of retries and
        execution will continue.
        """
        invite = MockEvent(
            sender=self.user_id,
            state_key=self.invitee,
            type="m.room.member",
            content={"membership": "invite"},
        )
        self.mocked_update_membership.side_effect = SynapseError(
            HTTPStatus.FORBIDDEN, "Forbidden"
        )

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        await self.retry_assertions(
            self.mocked_update_membership,
            5,
            sender=invite.state_key,
            target=invite.state_key,
            room_id=invite.room_id,
            new_membership="join",
        )

    async def test_not_state(self) -> None:
        """Tests that receiving an invite that's not a state event does nothing."""
        invite = MockEvent(
            sender=self.user_id, type="m.room.member", content={"membership": "invite"}
        )

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        self.mocked_update_membership.assert_not_called()

    async def test_not_invite(self) -> None:
        """Tests that receiving a membership update that's not an invite does nothing."""
        invite = MockEvent(
            sender=self.user_id,
            state_key=self.user_id,
            type="m.room.member",
            content={"membership": "join"},
        )

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        self.mocked_update_membership.assert_not_called()

    async def test_not_membership(self) -> None:
        """Tests that receiving a state event that's not a membership update does
        nothing.
        """
        invite = MockEvent(
            sender=self.user_id,
            state_key=self.user_id,
            type="org.matrix.test",
            content={"foo": "bar"},
        )

        # Stop mypy from complaining that we give on_new_event a MockEvent rather than an
        # EventBase.
        await self.module.on_new_event(event=invite)  # type: ignore[arg-type]

        self.mocked_update_membership.assert_not_called()

    def test_config_parse(self) -> None:
        """Tests that a correct configuration parses."""
        config = {
            "auto_accept_invites": {
                "enabled": True,
                "only_for_direct_messages": True,
                "only_from_local_users": True,
            }
        }
        parsed_config = AutoAcceptInvitesConfig()
        parsed_config.read_config(config)

        self.assertTrue(parsed_config.enabled)
        self.assertTrue(parsed_config.accept_invites_only_for_direct_messages)
        self.assertTrue(parsed_config.accept_invites_only_from_local_users)

    def test_runs_on_only_one_worker(self) -> None:
        """
        Tests that the module only runs on the specified worker.
        """
        # By default, we run on the main process...
        main_module = create_module(
            config_override={"auto_accept_invites": {"enabled": True}}, worker_name=None
        )
        cast(
            Mock, main_module._api.register_third_party_rules_callbacks
        ).assert_called_once()

        # ...and not on other workers (like synchrotrons)...
        sync_module = create_module(worker_name="synchrotron42")
        cast(
            Mock, sync_module._api.register_third_party_rules_callbacks
        ).assert_not_called()

        # ...unless we configured them to be the designated worker.
        specified_module = create_module(
            config_override={
                "auto_accept_invites": {
                    "enabled": True,
                    "worker_to_run_on": "account_data1",
                }
            },
            worker_name="account_data1",
        )
        cast(
            Mock, specified_module._api.register_third_party_rules_callbacks
        ).assert_called_once()

    async def retry_assertions(
        self, mock: Mock, call_count: int, **kwargs: Any
    ) -> None:
        """
        This is a hacky way to ensure that the assertions are not called before the other coroutine
        has a chance to call `update_room_membership`. It catches the exception caused by a failure,
        and sleeps the thread before retrying, up until 5 tries.

        Args:
            call_count: the number of times the mock should have been called
            mock: the mocked function we want to assert on
            kwargs: keyword arguments to assert that the mock was called with
        """

        i = 0
        while i < 5:
            try:
                # Check that the mocked method is called the expected amount of times and with the right
                # arguments to attempt to make the user join the room.
                mock.assert_called_with(**kwargs)
                self.assertEqual(call_count, mock.call_count)
                break
            except AssertionError as e:
                i += 1
                if i == 5:
                    # we've used up the tries, force the test to fail as we've already caught the exception
                    self.fail(e)
                await asyncio.sleep(1)


@attr.s(auto_attribs=True)
class MockEvent:
    """Mocks an event. Only exposes properties the module uses."""

    sender: str
    type: str
    content: Dict[str, Any]
    room_id: str = "!someroom"
    state_key: Optional[str] = None

    def is_state(self) -> bool:
        """Checks if the event is a state event by checking if it has a state key."""
        return self.state_key is not None

    @property
    def membership(self) -> str:
        """Extracts the membership from the event. Should only be called on an event
        that's a membership event, and will raise a KeyError otherwise.
        """
        membership: str = self.content["membership"]
        return membership


T = TypeVar("T")
TV = TypeVar("TV")


async def make_awaitable(value: T) -> T:
    return value


def make_multiple_awaitable(result: TV) -> Awaitable[TV]:
    """
    Makes an awaitable, suitable for mocking an `async` function.
    This uses Futures as they can be awaited multiple times so can be returned
    to multiple callers.
    """
    future: Future[TV] = Future()
    future.set_result(result)
    return future


def create_module(
    config_override: Optional[Dict[str, Any]] = None, worker_name: Optional[str] = None
) -> InviteAutoAccepter:
    # Create a mock based on the ModuleApi spec, but override some mocked functions
    # because some capabilities are needed for running the tests.
    module_api = Mock(spec=ModuleApi)
    module_api.is_mine.side_effect = lambda a: a.split(":")[1] == "test"
    module_api.worker_name = worker_name
    module_api.sleep.return_value = make_multiple_awaitable(None)

    if config_override is None:
        config_override = {}

    config = AutoAcceptInvitesConfig()
    config.read_config(config_override)

    return InviteAutoAccepter(config, module_api)
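The `make_multiple_awaitable` helper in the new test file relies on the fact that a `Future` with a result set can be awaited any number of times, whereas a coroutine object can be awaited only once. That distinction can be demonstrated in isolation (a standalone sketch, independent of the Synapse test code):

```python
import asyncio
from typing import Tuple


async def coro() -> int:
    return 42


async def main() -> Tuple[int, bool, int, int]:
    # A coroutine object can only be awaited once...
    c = coro()
    first = await c
    try:
        await c
        reawaited = True
    except RuntimeError:
        # "cannot reuse already awaited coroutine"
        reawaited = False

    # ...but a Future with its result set can be awaited repeatedly,
    # so it can be handed out to multiple callers (as a mock return value).
    fut: asyncio.Future = asyncio.get_running_loop().create_future()
    fut.set_result(42)
    return first, reawaited, await fut, await fut


print(asyncio.run(main()))  # → (42, False, 42, 42)
```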
@@ -36,7 +36,7 @@ from synapse.server import HomeServer
 from synapse.types import JsonDict, StreamToken, create_requester
 from synapse.util import Clock

-from tests.handlers.test_sync import generate_sync_config
+from tests.handlers.test_sync import SyncRequestKey, SyncVersion, generate_sync_config
 from tests.unittest import (
     FederatingHomeserverTestCase,
     HomeserverTestCase,
@@ -498,6 +498,15 @@ def send_presence_update(
     return channel.json_body


+_request_key = 0
+
+
+def generate_request_key() -> SyncRequestKey:
+    global _request_key
+    _request_key += 1
+    return ("request_key", _request_key)
+
+
 def sync_presence(
     testcase: HomeserverTestCase,
     user_id: str,
@@ -521,7 +530,11 @@ def sync_presence(
     sync_config = generate_sync_config(requester.user.to_string())
     sync_result = testcase.get_success(
         testcase.hs.get_sync_handler().wait_for_sync_for_user(
-            requester, sync_config, since_token
+            requester,
+            sync_config,
+            SyncVersion.SYNC_V2,
+            generate_request_key(),
+            since_token,
         )
     )
@@ -32,6 +32,7 @@ from synapse.events.utils import (
     PowerLevelsContent,
     SerializeEventConfig,
     _split_field,
+    clone_event,
     copy_and_fixup_power_levels_contents,
     maybe_upsert_event_field,
     prune_event,
@@ -611,6 +612,29 @@ class PruneEventTestCase(stdlib_unittest.TestCase):
         )


+class CloneEventTestCase(stdlib_unittest.TestCase):
+    def test_unsigned_is_copied(self) -> None:
+        original = make_event_from_dict(
+            {
+                "type": "A",
+                "event_id": "$test:domain",
+                "unsigned": {"a": 1, "b": 2},
+            },
+            RoomVersions.V1,
+            {"txn_id": "txn"},
+        )
+        original.internal_metadata.stream_ordering = 1234
+        self.assertEqual(original.internal_metadata.stream_ordering, 1234)
+
+        cloned = clone_event(original)
+        cloned.unsigned["b"] = 3
+
+        self.assertEqual(original.unsigned, {"a": 1, "b": 2})
+        self.assertEqual(cloned.unsigned, {"a": 1, "b": 3})
+        self.assertEqual(cloned.internal_metadata.stream_ordering, 1234)
+        self.assertEqual(cloned.internal_metadata.txn_id, "txn")
+
+
 class SerializeEventTestCase(stdlib_unittest.TestCase):
     def serialize(self, ev: EventBase, fields: Optional[List[str]]) -> JsonDict:
         return serialize_event(
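The new `CloneEventTestCase` checks that mutating `unsigned` on the clone leaves the original event untouched. The failure mode it guards against is the usual shared-mutable-dict problem, sketched here with plain dicts rather than Synapse's event types:

```python
import copy

original = {"type": "A", "unsigned": {"a": 1, "b": 2}}

# A shallow copy shares the nested `unsigned` dict with the original...
shallow = dict(original)
shallow["unsigned"]["b"] = 3
assert original["unsigned"]["b"] == 3  # the original was corrupted

# ...so a clone intended for mutation must copy the nested mapping too.
original["unsigned"]["b"] = 2
deep = copy.deepcopy(original)
deep["unsigned"]["b"] = 3
assert original["unsigned"] == {"a": 1, "b": 2}  # original preserved
assert deep["unsigned"] == {"a": 1, "b": 3}
```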