Mirror of https://github.com/element-hq/synapse (synced 2024-08-19 15:00:25 +00:00)

commit 73f2903bc5

Merge branch 'madlittlemods/sliding-sync-room-data' into madlittlemods/sliding-sync-required-state

Conflicts:
	synapse/handlers/sliding_sync.py

91 changed files with 4359 additions and 728 deletions
.github/ISSUE_TEMPLATE.md (vendored): 2 lines changed

@@ -2,4 +2,4 @@
 (using a matrix.org account if necessary). We do not use GitHub issues for
 support.
 
-**If you want to report a security issue** please see https://matrix.org/security-disclosure-policy/
+**If you want to report a security issue** please see https://element.io/security/security-disclosure-policy
.github/ISSUE_TEMPLATE/BUG_REPORT.yml (vendored): 2 lines changed

@@ -7,7 +7,7 @@ body:
 **THIS IS NOT A SUPPORT CHANNEL!**
 **IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**, please ask in **[#synapse:matrix.org](https://matrix.to/#/#synapse:matrix.org)** (using a matrix.org account if necessary).
 
-If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/
+If you want to report a security issue, please see https://element.io/security/security-disclosure-policy
 
 This is a bug report form. By following the instructions below and completing the sections with your information, you will help the us to get all the necessary data to fix your issue.
 
.github/workflows/tests.yml (vendored): 20 lines changed

@@ -21,6 +21,7 @@ jobs:
       trial: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.trial }}
       integration: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.integration }}
       linting: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.linting }}
+      linting_readme: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.linting_readme }}
     steps:
       - uses: dorny/paths-filter@v3
         id: filter
@@ -72,6 +73,9 @@ jobs:
             - 'pyproject.toml'
             - 'poetry.lock'
             - '.github/workflows/tests.yml'
 
+          linting_readme:
+            - 'README.rst'
+
   check-sampleconfig:
     runs-on: ubuntu-latest
@@ -269,6 +273,20 @@ jobs:
 
       - run: cargo fmt --check
 
+  # This is to detect issues with the rst file, which can otherwise cause issues
+  # when uploading packages to PyPi.
+  lint-readme:
+    runs-on: ubuntu-latest
+    needs: changes
+    if: ${{ needs.changes.outputs.linting_readme == 'true' }}
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+        with:
+          python-version: "3.x"
+      - run: "pip install rstcheck"
+      - run: "rstcheck --report-level=WARNING README.rst"
+
   # Dummy step to gate other tests on without repeating the whole list
   linting-done:
     if: ${{ !cancelled() }} # Run this even if prior jobs were skipped
@@ -284,6 +302,7 @@ jobs:
       - lint-clippy
       - lint-clippy-nightly
       - lint-rustfmt
+      - lint-readme
     runs-on: ubuntu-latest
     steps:
       - uses: matrix-org/done-action@v2
@@ -301,6 +320,7 @@ jobs:
             lint-clippy
             lint-clippy-nightly
             lint-rustfmt
+            lint-readme
 
 
   calculate-test-jobs:
CHANGES.md: 91 lines changed

@@ -1,3 +1,94 @@
+# Synapse 1.110.0rc2 (2024-06-26)
+
+### Internal Changes
+
+- Fix uploading packages to PyPi. ([\#17363](https://github.com/element-hq/synapse/issues/17363))
+
+
+# Synapse 1.110.0rc1 (2024-06-26)
+
+### Features
+
+- Add initial implementation of an experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17187](https://github.com/element-hq/synapse/issues/17187))
+- Add experimental support for [MSC3823](https://github.com/matrix-org/matrix-spec-proposals/pull/3823) - Account suspension. ([\#17255](https://github.com/element-hq/synapse/issues/17255))
+- Improve ratelimiting in Synapse. ([\#17256](https://github.com/element-hq/synapse/issues/17256))
+- Add support for the unstable [MSC4151](https://github.com/matrix-org/matrix-spec-proposals/pull/4151) report room API. ([\#17270](https://github.com/element-hq/synapse/issues/17270), [\#17296](https://github.com/element-hq/synapse/issues/17296))
+- Filter for public and empty rooms added to Admin-API [List Room API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#list-room-api). ([\#17276](https://github.com/element-hq/synapse/issues/17276))
+- Add `is_dm` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17277](https://github.com/element-hq/synapse/issues/17277))
+- Add `is_encrypted` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17281](https://github.com/element-hq/synapse/issues/17281))
+- Include user membership in events served to clients, per [MSC4115](https://github.com/matrix-org/matrix-spec-proposals/pull/4115). ([\#17282](https://github.com/element-hq/synapse/issues/17282))
+- Do not require user-interactive authentication for uploading cross-signing keys for the first time, per [MSC3967](https://github.com/matrix-org/matrix-spec-proposals/pull/3967). ([\#17284](https://github.com/element-hq/synapse/issues/17284))
+- Add `stream_ordering` sort to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17293](https://github.com/element-hq/synapse/issues/17293))
+- `register_new_matrix_user` now supports a --password-file flag, which
+  is useful for scripting. ([\#17294](https://github.com/element-hq/synapse/issues/17294))
+- `register_new_matrix_user` now supports a --exists-ok flag to allow registration of users that already exist in the database.
+  This is useful for scripts that bootstrap user accounts with initial passwords. ([\#17304](https://github.com/element-hq/synapse/issues/17304))
+- Add support for via query parameter from [MSC4156](https://github.com/matrix-org/matrix-spec-proposals/pull/4156). ([\#17322](https://github.com/element-hq/synapse/issues/17322))
+- Add `is_invite` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17335](https://github.com/element-hq/synapse/issues/17335))
+- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md) by adding a federation /download endpoint. ([\#17350](https://github.com/element-hq/synapse/issues/17350))
+
+### Bugfixes
+
+- Fix searching for users with their exact localpart whose ID includes a hyphen. ([\#17254](https://github.com/element-hq/synapse/issues/17254))
+- Fix wrong retention policy being used when filtering events. ([\#17272](https://github.com/element-hq/synapse/issues/17272))
+- Fix bug where OTKs were not always included in `/sync` response when using workers. ([\#17275](https://github.com/element-hq/synapse/issues/17275))
+- Fix a long-standing bug where an invalid 'from' parameter to [`/notifications`](https://spec.matrix.org/v1.10/client-server-api/#get_matrixclientv3notifications) would result in an Internal Server Error. ([\#17283](https://github.com/element-hq/synapse/issues/17283))
+- Fix edge case in `/sync` returning the wrong state when using sharded event persisters. ([\#17295](https://github.com/element-hq/synapse/issues/17295))
+- Add initial implementation of an experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17301](https://github.com/element-hq/synapse/issues/17301))
+- Fix email notification subject when invited to a space. ([\#17336](https://github.com/element-hq/synapse/issues/17336))
+
+### Improved Documentation
+
+- Add missing quotes for example for `exclude_rooms_from_sync`. ([\#17308](https://github.com/element-hq/synapse/issues/17308))
+- Update header in the README to visually fix the auto-generated table of contents. ([\#17329](https://github.com/element-hq/synapse/issues/17329))
+- Fix stale references to the Foundation's Security Disclosure Policy. ([\#17341](https://github.com/element-hq/synapse/issues/17341))
+- Add default values for `rc_invites.per_issuer` to docs. ([\#17347](https://github.com/element-hq/synapse/issues/17347))
+- Fix an error in the docs for `search_all_users` parameter under `user_directory`. ([\#17348](https://github.com/element-hq/synapse/issues/17348))
+
+### Internal Changes
+
+- Remove unused `expire_access_token` option in the Synapse Docker config file. Contributed by @AaronDewes. ([\#17198](https://github.com/element-hq/synapse/issues/17198))
+- Use fully-qualified `PersistedEventPosition` when returning `RoomsForUser` to facilitate proper comparisons and `RoomStreamToken` generation. ([\#17265](https://github.com/element-hq/synapse/issues/17265))
+- Add debug logging for when room keys are uploaded, including whether they are replacing other room keys. ([\#17266](https://github.com/element-hq/synapse/issues/17266))
+- Handle OTK uploads off master. ([\#17271](https://github.com/element-hq/synapse/issues/17271))
+- Don't try and resync devices for remote users whose servers are marked as down. ([\#17273](https://github.com/element-hq/synapse/issues/17273))
+- Re-organize Pydantic models and types used in handlers. ([\#17279](https://github.com/element-hq/synapse/issues/17279))
+- Expose the worker instance that persisted the event on `event.internal_metadata.instance_name`. ([\#17300](https://github.com/element-hq/synapse/issues/17300))
+- Update the README with Element branding, improve headers and fix the #synapse:matrix.org support room link rendering. ([\#17324](https://github.com/element-hq/synapse/issues/17324))
+- Change path of the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync implementation to `/org.matrix.simplified_msc3575/sync` since our simplified API is slightly incompatible with what's in the current MSC. ([\#17331](https://github.com/element-hq/synapse/issues/17331))
+- Handle device lists notifications for large accounts more efficiently in worker mode. ([\#17333](https://github.com/element-hq/synapse/issues/17333), [\#17358](https://github.com/element-hq/synapse/issues/17358))
+- Do not block event sending/receiving while calculating large event auth chains. ([\#17338](https://github.com/element-hq/synapse/issues/17338))
+- Tidy up `parse_integer` docs and call sites to reflect the fact that they require non-negative integers by default, and bring `parse_integer_from_args` default in alignment. Contributed by Denis Kasak (@dkasak). ([\#17339](https://github.com/element-hq/synapse/issues/17339))
+
+
+### Updates to locked dependencies
+
+* Bump authlib from 1.3.0 to 1.3.1. ([\#17343](https://github.com/element-hq/synapse/issues/17343))
+* Bump dawidd6/action-download-artifact from 3.1.4 to 5. ([\#17289](https://github.com/element-hq/synapse/issues/17289))
+* Bump dawidd6/action-download-artifact from 5 to 6. ([\#17313](https://github.com/element-hq/synapse/issues/17313))
+* Bump docker/build-push-action from 5 to 6. ([\#17312](https://github.com/element-hq/synapse/issues/17312))
+* Bump jinja2 from 3.1.3 to 3.1.4. ([\#17287](https://github.com/element-hq/synapse/issues/17287))
+* Bump lazy_static from 1.4.0 to 1.5.0. ([\#17355](https://github.com/element-hq/synapse/issues/17355))
+* Bump msgpack from 1.0.7 to 1.0.8. ([\#17317](https://github.com/element-hq/synapse/issues/17317))
+* Bump netaddr from 1.2.1 to 1.3.0. ([\#17353](https://github.com/element-hq/synapse/issues/17353))
+* Bump packaging from 24.0 to 24.1. ([\#17352](https://github.com/element-hq/synapse/issues/17352))
+* Bump phonenumbers from 8.13.37 to 8.13.39. ([\#17315](https://github.com/element-hq/synapse/issues/17315))
+* Bump regex from 1.10.4 to 1.10.5. ([\#17290](https://github.com/element-hq/synapse/issues/17290))
+* Bump requests from 2.31.0 to 2.32.2. ([\#17345](https://github.com/element-hq/synapse/issues/17345))
+* Bump sentry-sdk from 2.1.1 to 2.3.1. ([\#17263](https://github.com/element-hq/synapse/issues/17263))
+* Bump sentry-sdk from 2.3.1 to 2.6.0. ([\#17351](https://github.com/element-hq/synapse/issues/17351))
+* Bump tornado from 6.4 to 6.4.1. ([\#17344](https://github.com/element-hq/synapse/issues/17344))
+* Bump mypy from 1.8.0 to 1.9.0. ([\#17297](https://github.com/element-hq/synapse/issues/17297))
+* Bump types-jsonschema from 4.21.0.20240311 to 4.22.0.20240610. ([\#17288](https://github.com/element-hq/synapse/issues/17288))
+* Bump types-netaddr from 1.2.0.20240219 to 1.3.0.20240530. ([\#17314](https://github.com/element-hq/synapse/issues/17314))
+* Bump types-pillow from 10.2.0.20240423 to 10.2.0.20240520. ([\#17285](https://github.com/element-hq/synapse/issues/17285))
+* Bump types-pyyaml from 6.0.12.12 to 6.0.12.20240311. ([\#17316](https://github.com/element-hq/synapse/issues/17316))
+* Bump typing-extensions from 4.11.0 to 4.12.2. ([\#17354](https://github.com/element-hq/synapse/issues/17354))
+* Bump urllib3 from 2.0.7 to 2.2.2. ([\#17346](https://github.com/element-hq/synapse/issues/17346))
+
 # Synapse 1.109.0 (2024-06-18)
 
 ### Internal Changes
Cargo.lock (generated): 12 lines changed

@@ -212,9 +212,9 @@ dependencies = [
 
 [[package]]
 name = "lazy_static"
-version = "1.4.0"
+version = "1.5.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
+checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
 
 [[package]]
 name = "libc"
@@ -234,9 +234,9 @@ dependencies = [
 
 [[package]]
 name = "log"
-version = "0.4.21"
+version = "0.4.22"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "90ed8c1e510134f979dbc4f070f87d4313098b704861a105fe34231c70a3901c"
+checksum = "a7a70ba024b9dc04c27ea2f0c0548feb474ec5c54bba33a7f72f873a39d07b24"
 
 [[package]]
 name = "memchr"
@@ -505,9 +505,9 @@ dependencies = [
 
 [[package]]
 name = "serde_json"
-version = "1.0.117"
+version = "1.0.119"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "455182ea6142b14f93f4bc5320a2b31c1f266b66a4a5c858b013302a5d8cbfc3"
+checksum = "e8eddb61f0697cc3989c5d64b452f5488e2b8a60fd7d5076a3045076ffef8cb0"
 dependencies = [
  "itoa",
  "ryu",
README.rst: 22 lines changed

@@ -1,20 +1,20 @@
 .. image:: https://github.com/element-hq/product/assets/87339233/7abf477a-5277-47f3-be44-ea44917d8ed7
   :height: 60px
 
-===========================================================================================================
-Element Synapse - Matrix homeserver implementation |support| |development| |documentation| |license| |pypi| |python|
-===========================================================================================================
+**Element Synapse - Matrix homeserver implementation**
 
-Synapse is an open source `Matrix <https://matrix.org>`_ homeserver
+|support| |development| |documentation| |license| |pypi| |python|
+
+Synapse is an open source `Matrix <https://matrix.org>`__ homeserver
 implementation, written and maintained by `Element <https://element.io>`_.
-`Matrix <https://github.com/matrix-org>`_ is the open standard for
+`Matrix <https://github.com/matrix-org>`__ is the open standard for
 secure and interoperable real time communications. You can directly run
 and manage the source code in this repository, available under an AGPL
 license. There is no support provided from Element unless you have a
 subscription.
 
 Subscription alternative
-------------------------
+========================
 
 Alternatively, for those that need an enterprise-ready solution, Element
 Server Suite (ESS) is `available as a subscription <https://element.io/pricing>`_.
@@ -119,7 +119,7 @@ impact to other applications will be minimal.
 
 
 🧪 Testing a new installation
-============================
+=============================
 
 The easiest way to try out your new Synapse installation is by connecting to it
 from a web client.
@@ -173,10 +173,10 @@ As when logging in, you will need to specify a "Custom server". Specify your
 desired ``localpart`` in the 'User name' box.
 
 🎯 Troubleshooting and support
-=============================
+==============================
 
 🚀 Professional support
-----------------------
+-----------------------
 
 Enterprise quality support for Synapse including SLAs is available as part of an
 `Element Server Suite (ESS) <https://element.io/pricing>` subscription.
@@ -185,7 +185,7 @@ If you are an existing ESS subscriber then you can raise a `support request <htt
 and access the `knowledge base <https://ems-docs.element.io>`.
 
 🤝 Community support
--------------------
+--------------------
 
 The `Admin FAQ <https://element-hq.github.io/synapse/latest/usage/administration/admin_faq.html>`_
 includes tips on dealing with some common problems. For more details, see
@@ -202,7 +202,7 @@ issues for support requests, only for bug reports and feature requests.
 .. _docs: docs
 
 🪪 Identity Servers
-==================
+===================
 
 Identity servers have the job of mapping email addresses and other 3rd Party
 IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs
Deleted changelog.d files:

@@ -1 +0,0 @@
-Add initial implementation of an experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Remove unused `expire_access_token` option in the Synapse Docker config file. Contributed by @AaronDewes.
@@ -1 +0,0 @@
-Fix searching for users with their exact localpart whose ID includes a hyphen.
@@ -1 +0,0 @@
-Improve ratelimiting in Synapse (#17256).
@@ -1 +0,0 @@
-Use fully-qualified `PersistedEventPosition` when returning `RoomsForUser` to facilitate proper comparisons and `RoomStreamToken` generation.
@@ -1 +0,0 @@
-Add debug logging for when room keys are uploaded, including whether they are replacing other room keys.
@@ -1 +0,0 @@
-Add support for the unstable [MSC4151](https://github.com/matrix-org/matrix-spec-proposals/pull/4151) report room API.
@@ -1 +0,0 @@
-Handle OTK uploads off master.
@@ -1 +0,0 @@
-Fix wrong retention policy being used when filtering events.
@@ -1 +0,0 @@
-Don't try and resync devices for remote users whose servers are marked as down.
@@ -1 +0,0 @@
-Fix bug where OTKs were not always included in `/sync` response when using workers.
@@ -1 +0,0 @@
-Filter for public and empty rooms added to Admin-API [List Room API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#list-room-api).
@@ -1 +0,0 @@
-Add `is_dm` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Re-organize Pydantic models and types used in handlers.
@@ -1 +0,0 @@
-Add `is_encrypted` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Include user membership in events served to clients, per MSC4115.
@@ -1 +0,0 @@
-Fix a long-standing bug where an invalid 'from' parameter to [`/notifications`](https://spec.matrix.org/v1.10/client-server-api/#get_matrixclientv3notifications) would result in an Internal Server Error.
@@ -1 +0,0 @@
-Do not require user-interactive authentication for uploading cross-signing keys for the first time, per MSC3967.
@@ -1 +0,0 @@
-Add `stream_ordering` sort to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1,2 +0,0 @@
-`register_new_matrix_user` now supports a --password-file flag, which
-is useful for scripting.
@@ -1 +0,0 @@
-Fix edge case in `/sync` returning the wrong state when using sharded event persisters.
@@ -1 +0,0 @@
-Add support for the unstable [MSC4151](https://github.com/matrix-org/matrix-spec-proposals/pull/4151) report room API.
@@ -1 +0,0 @@
-Bump `mypy` from 1.8.0 to 1.9.0.
@@ -1 +0,0 @@
-Expose the worker instance that persisted the event on `event.internal_metadata.instance_name`.
@@ -1 +0,0 @@
-Add initial implementation of an experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1,2 +0,0 @@
-`register_new_matrix_user` now supports a --exists-ok flag to allow registration of users that already exist in the database.
-This is useful for scripts that bootstrap user accounts with initial passwords.
@@ -1 +0,0 @@
-Add missing quotes for example for `exclude_rooms_from_sync`.
@@ -1 +0,0 @@
-Add support for via query parameter from MSC4156.
@@ -1 +0,0 @@
-Update the README with Element branding, improve headers and fix the #synapse:matrix.org support room link rendering.
@@ -1 +0,0 @@
-This is a changelog so tests will run.
@@ -1 +0,0 @@
-Change path of the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync implementation to `/org.matrix.simplified_msc3575/sync` since our simplified API is slightly incompatible with what's in the current MSC.
changelog.d/17356.doc (new file): 1 line

@@ -0,0 +1 @@
+Clarify `url_preview_url_blacklist` is a usability feature.

changelog.d/17362.bugfix (new file): 1 line

@@ -0,0 +1 @@
+Fix rare race which causes no new to-device messages to be received from remote server.

changelog.d/17363.misc (new file): 1 line

@@ -0,0 +1 @@
+Fix uploading packages to PyPi.

changelog.d/17367.misc (new file): 1 line

@@ -0,0 +1 @@
+Add CI check for the README.

changelog.d/17371.misc (new file): 1 line

@@ -0,0 +1 @@
+Limit size of presence EDUs to 50 entries.
debian/changelog (vendored): 11 lines changed

@@ -1,8 +1,15 @@
-matrix-synapse-py3 (1.109.0+nmu1) UNRELEASED; urgency=medium
+matrix-synapse-py3 (1.110.0~rc2) stable; urgency=medium
+
+  * New Synapse release 1.110.0rc2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 26 Jun 2024 18:14:48 +0200
+
+matrix-synapse-py3 (1.110.0~rc1) stable; urgency=medium
 
   * `register_new_matrix_user` now supports a --password-file and a --exists-ok flag.
+  * New Synapse release 1.110.0rc1.
 
- -- Synapse Packaging team <packages@matrix.org>  Tue, 18 Jun 2024 13:29:36 +0100
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 26 Jun 2024 14:07:56 +0200
 
 matrix-synapse-py3 (1.109.0) stable; urgency=medium
 
@@ -1759,8 +1759,9 @@ rc_3pid_validation:
 ### `rc_invites`
 
 This option sets ratelimiting how often invites can be sent in a room or to a
-specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10` and
-`per_user` defaults to `per_second: 0.003`, `burst_count: 5`.
+specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10`,
+`per_user` defaults to `per_second: 0.003`, `burst_count: 5`, and `per_issuer`
+defaults to `per_second: 0.3`, `burst_count: 10`.
 
 Client requests that invite user(s) when [creating a
 room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3createroom)
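Written out as a homeserver.yaml fragment, the `rc_invites` defaults look like this; the values are the documented defaults, and the nested `per_room`/`per_user`/`per_issuer` layout follows the option names above:

```yaml
rc_invites:
  per_room:
    per_second: 0.3
    burst_count: 10
  per_user:
    per_second: 0.003
    burst_count: 5
  per_issuer:
    per_second: 0.3
    burst_count: 10
```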
@@ -1975,9 +1976,10 @@ This will not prevent the listed domains from accessing media themselves.
 It simply prevents users on this server from downloading media originating
 from the listed servers.
 
-This will have no effect on media originating from the local server.
-This only affects media downloaded from other Matrix servers, to
-block domains from URL previews see [`url_preview_url_blacklist`](#url_preview_url_blacklist).
+This will have no effect on media originating from the local server. This only
+affects media downloaded from other Matrix servers, to control URL previews see
+[`url_preview_ip_range_blacklist`](#url_preview_ip_range_blacklist) or
+[`url_preview_url_blacklist`](#url_preview_url_blacklist).
 
 Defaults to an empty list (nothing blocked).
 
@@ -2129,12 +2131,14 @@ url_preview_ip_range_whitelist:
 ---
 ### `url_preview_url_blacklist`
 
-Optional list of URL matches that the URL preview spider is
-denied from accessing. You should use `url_preview_ip_range_blacklist`
-in preference to this, otherwise someone could define a public DNS
-entry that points to a private IP address and circumvent the blacklist.
-This is more useful if you know there is an entire shape of URL that
-you know that will never want synapse to try to spider.
+Optional list of URL matches that the URL preview spider is denied from
+accessing. This is a usability feature, not a security one. You should use
+`url_preview_ip_range_blacklist` in preference to this, otherwise someone could
+define a public DNS entry that points to a private IP address and circumvent
+the blacklist. Applications that perform redirects or serve different content
+when detecting that Synapse is accessing them can also bypass the blacklist.
+This is more useful if you know there is an entire shape of URL that you know
+that you do not want Synapse to preview.
 
 Each list entry is a dictionary of url component attributes as returned
 by urlparse.urlsplit as applied to the absolute form of the URL. See
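For illustration, a hedged example of the urlsplit-attribute entries described above (the specific values are placeholders):

```yaml
url_preview_url_blacklist:
  # deny any URL containing a username component, e.g. http://user@example.com
  - username: '*'
  # deny a domain and all of its subdomains
  - netloc: 'example.com'
  - netloc: '*.example.com'
  # deny all plain-HTTP previews
  - scheme: 'http'
```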
@@ -2718,7 +2722,7 @@ Example configuration:
 session_lifetime: 24h
 ```
 ---
-### `refresh_access_token_lifetime`
+### `refreshable_access_token_lifetime`
 
 Time that an access token remains valid for, if the session is using refresh tokens.
 
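A minimal example of the renamed option (the `5m` value here is a placeholder, chosen to match what the Synapse docs give as the default):

```yaml
refreshable_access_token_lifetime: 5m
```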
@@ -3806,7 +3810,8 @@ This setting defines options related to the user directory.
 This option has the following sub-options:
 * `enabled`: Defines whether users can search the user directory. If false then
   empty responses are returned to all queries. Defaults to true.
-* `search_all_users`: Defines whether to search all users visible to your HS at the time the search is performed. If set to true, will return all users who share a room with the user from the homeserver.
+* `search_all_users`: Defines whether to search all users visible to your homeserver at the time the search is performed.
+  If set to true, will return all users known to the homeserver matching the search query.
   If false, search results will only contain users
   visible in public rooms and users sharing a room with the requester.
   Defaults to false.
 
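A sketch of the sub-options above in `homeserver.yaml` (the values shown are the stated defaults):

```yaml
user_directory:
  enabled: true
  search_all_users: false
```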
@@ -62,6 +62,6 @@ following documentation:
 
 ## Reporting a security vulnerability
 
-If you've found a security issue in Synapse or any other Matrix.org Foundation
-project, please report it to us in accordance with our [Security Disclosure
-Policy](https://www.matrix.org/security-disclosure-policy/). Thank you!
+If you've found a security issue in Synapse or any other Element project,
+please report it to us in accordance with our [Security Disclosure
+Policy](https://element.io/security/security-disclosure-policy). Thank you!

148 poetry.lock generated
@@ -35,13 +35,13 @@ tests-no-zope = ["attrs[tests-mypy]", "cloudpickle", "hypothesis", "pympler", "p
 
 [[package]]
 name = "authlib"
-version = "1.3.0"
+version = "1.3.1"
 description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients."
 optional = true
 python-versions = ">=3.8"
 files = [
-    {file = "Authlib-1.3.0-py2.py3-none-any.whl", hash = "sha256:9637e4de1fb498310a56900b3e2043a206b03cb11c05422014b0302cbc814be3"},
-    {file = "Authlib-1.3.0.tar.gz", hash = "sha256:959ea62a5b7b5123c5059758296122b57cd2585ae2ed1c0622c21b371ffdae06"},
+    {file = "Authlib-1.3.1-py2.py3-none-any.whl", hash = "sha256:d35800b973099bbadc49b42b256ecb80041ad56b7fe1216a362c7943c088f377"},
+    {file = "authlib-1.3.1.tar.gz", hash = "sha256:7ae843f03c06c5c0debd63c9db91f9fda64fa62a42a77419fa15fbb7e7a58917"},
 ]
 
 [package.dependencies]
@@ -403,43 +403,43 @@ files = [
 
 [[package]]
 name = "cryptography"
-version = "42.0.7"
+version = "42.0.8"
 description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "cryptography-42.0.7-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:a987f840718078212fdf4504d0fd4c6effe34a7e4740378e59d47696e8dfb477"},
+    {file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:81d8a521705787afe7a18d5bfb47ea9d9cc068206270aad0b96a725022e18d2e"},
-    {file = "cryptography-42.0.7-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:bd13b5e9b543532453de08bcdc3cc7cebec6f9883e886fd20a92f26940fd3e7a"},
+    {file = "cryptography-42.0.8-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:961e61cefdcb06e0c6d7e3a1b22ebe8b996eb2bf50614e89384be54c48c6b63d"},
-    {file = "cryptography-42.0.7-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a79165431551042cc9d1d90e6145d5d0d3ab0f2d66326c201d9b0e7f5bf43604"},
+    {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e3ec3672626e1b9e55afd0df6d774ff0e953452886e06e0f1eb7eb0c832e8902"},
-    {file = "cryptography-42.0.7-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a47787a5e3649008a1102d3df55424e86606c9bae6fb77ac59afe06d234605f8"},
+    {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e599b53fd95357d92304510fb7bda8523ed1f79ca98dce2f43c115950aa78801"},
-    {file = "cryptography-42.0.7-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:02c0eee2d7133bdbbc5e24441258d5d2244beb31da5ed19fbb80315f4bbbff55"},
+    {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5226d5d21ab681f432a9c1cf8b658c0cb02533eece706b155e5fbd8a0cdd3949"},
-    {file = "cryptography-42.0.7-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:5e44507bf8d14b36b8389b226665d597bc0f18ea035d75b4e53c7b1ea84583cc"},
+    {file = "cryptography-42.0.8-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:6b7c4f03ce01afd3b76cf69a5455caa9cfa3de8c8f493e0d3ab7d20611c8dae9"},
-    {file = "cryptography-42.0.7-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:7f8b25fa616d8b846aef64b15c606bb0828dbc35faf90566eb139aa9cff67af2"},
+    {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:2346b911eb349ab547076f47f2e035fc8ff2c02380a7cbbf8d87114fa0f1c583"},
-    {file = "cryptography-42.0.7-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:93a3209f6bb2b33e725ed08ee0991b92976dfdcf4e8b38646540674fc7508e13"},
+    {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:ad803773e9df0b92e0a817d22fd8a3675493f690b96130a5e24f1b8fabbea9c7"},
-    {file = "cryptography-42.0.7-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:e6b8f1881dac458c34778d0a424ae5769de30544fc678eac51c1c8bb2183e9da"},
+    {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2f66d9cd9147ee495a8374a45ca445819f8929a3efcd2e3df6428e46c3cbb10b"},
-    {file = "cryptography-42.0.7-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:3de9a45d3b2b7d8088c3fbf1ed4395dfeff79d07842217b38df14ef09ce1d8d7"},
+    {file = "cryptography-42.0.8-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d45b940883a03e19e944456a558b67a41160e367a719833c53de6911cabba2b7"},
-    {file = "cryptography-42.0.7-cp37-abi3-win32.whl", hash = "sha256:789caea816c6704f63f6241a519bfa347f72fbd67ba28d04636b7c6b7da94b0b"},
+    {file = "cryptography-42.0.8-cp37-abi3-win32.whl", hash = "sha256:a0c5b2b0585b6af82d7e385f55a8bc568abff8923af147ee3c07bd8b42cda8b2"},
-    {file = "cryptography-42.0.7-cp37-abi3-win_amd64.whl", hash = "sha256:8cb8ce7c3347fcf9446f201dc30e2d5a3c898d009126010cbd1f443f28b52678"},
+    {file = "cryptography-42.0.8-cp37-abi3-win_amd64.whl", hash = "sha256:57080dee41209e556a9a4ce60d229244f7a66ef52750f813bfbe18959770cfba"},
-    {file = "cryptography-42.0.7-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:a3a5ac8b56fe37f3125e5b72b61dcde43283e5370827f5233893d461b7360cd4"},
+    {file = "cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:dea567d1b0e8bc5764b9443858b673b734100c2871dc93163f58c46a97a83d28"},
-    {file = "cryptography-42.0.7-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:779245e13b9a6638df14641d029add5dc17edbef6ec915688f3acb9e720a5858"},
+    {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c4783183f7cb757b73b2ae9aed6599b96338eb957233c58ca8f49a49cc32fd5e"},
-    {file = "cryptography-42.0.7-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0d563795db98b4cd57742a78a288cdbdc9daedac29f2239793071fe114f13785"},
+    {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0608251135d0e03111152e41f0cc2392d1e74e35703960d4190b2e0f4ca9c70"},
-    {file = "cryptography-42.0.7-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:31adb7d06fe4383226c3e963471f6837742889b3c4caa55aac20ad951bc8ffda"},
+    {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dc0fdf6787f37b1c6b08e6dfc892d9d068b5bdb671198c72072828b80bd5fe4c"},
-    {file = "cryptography-42.0.7-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:efd0bf5205240182e0f13bcaea41be4fdf5c22c5129fc7ced4a0282ac86998c9"},
+    {file = "cryptography-42.0.8-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:9c0c1716c8447ee7dbf08d6db2e5c41c688544c61074b54fc4564196f55c25a7"},
-    {file = "cryptography-42.0.7-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:a9bc127cdc4ecf87a5ea22a2556cab6c7eda2923f84e4f3cc588e8470ce4e42e"},
+    {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:fff12c88a672ab9c9c1cf7b0c80e3ad9e2ebd9d828d955c126be4fd3e5578c9e"},
-    {file = "cryptography-42.0.7-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:3577d029bc3f4827dd5bf8bf7710cac13527b470bbf1820a3f394adb38ed7d5f"},
+    {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:cafb92b2bc622cd1aa6a1dce4b93307792633f4c5fe1f46c6b97cf67073ec961"},
-    {file = "cryptography-42.0.7-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2e47577f9b18723fa294b0ea9a17d5e53a227867a0a4904a1a076d1646d45ca1"},
+    {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:31f721658a29331f895a5a54e7e82075554ccfb8b163a18719d342f5ffe5ecb1"},
-    {file = "cryptography-42.0.7-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:1a58839984d9cb34c855197043eaae2c187d930ca6d644612843b4fe8513c886"},
+    {file = "cryptography-42.0.8-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b297f90c5723d04bcc8265fc2a0f86d4ea2e0f7ab4b6994459548d3a6b992a14"},
-    {file = "cryptography-42.0.7-cp39-abi3-win32.whl", hash = "sha256:e6b79d0adb01aae87e8a44c2b64bc3f3fe59515280e00fb6d57a7267a2583cda"},
+    {file = "cryptography-42.0.8-cp39-abi3-win32.whl", hash = "sha256:2f88d197e66c65be5e42cd72e5c18afbfae3f741742070e3019ac8f4ac57262c"},
-    {file = "cryptography-42.0.7-cp39-abi3-win_amd64.whl", hash = "sha256:16268d46086bb8ad5bf0a2b5544d8a9ed87a0e33f5e77dd3c3301e63d941a83b"},
+    {file = "cryptography-42.0.8-cp39-abi3-win_amd64.whl", hash = "sha256:fa76fbb7596cc5839320000cdd5d0955313696d9511debab7ee7278fc8b5c84a"},
-    {file = "cryptography-42.0.7-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:2954fccea107026512b15afb4aa664a5640cd0af630e2ee3962f2602693f0c82"},
+    {file = "cryptography-42.0.8-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:ba4f0a211697362e89ad822e667d8d340b4d8d55fae72cdd619389fb5912eefe"},
-    {file = "cryptography-42.0.7-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:362e7197754c231797ec45ee081f3088a27a47c6c01eff2ac83f60f85a50fe60"},
+    {file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:81884c4d096c272f00aeb1f11cf62ccd39763581645b0812e99a91505fa48e0c"},
-    {file = "cryptography-42.0.7-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:4f698edacf9c9e0371112792558d2f705b5645076cc0aaae02f816a0171770fd"},
+    {file = "cryptography-42.0.8-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c9bb2ae11bfbab395bdd072985abde58ea9860ed84e59dbc0463a5d0159f5b71"},
-    {file = "cryptography-42.0.7-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:5482e789294854c28237bba77c4c83be698be740e31a3ae5e879ee5444166582"},
+    {file = "cryptography-42.0.8-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:7016f837e15b0a1c119d27ecd89b3515f01f90a8615ed5e9427e30d9cdbfed3d"},
-    {file = "cryptography-42.0.7-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:e9b2a6309f14c0497f348d08a065d52f3020656f675819fc405fb63bbcd26562"},
+    {file = "cryptography-42.0.8-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5a94eccb2a81a309806027e1670a358b99b8fe8bfe9f8d329f27d72c094dde8c"},
-    {file = "cryptography-42.0.7-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:d8e3098721b84392ee45af2dd554c947c32cc52f862b6a3ae982dbb90f577f14"},
+    {file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:dec9b018df185f08483f294cae6ccac29e7a6e0678996587363dc352dc65c842"},
-    {file = "cryptography-42.0.7-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c65f96dad14f8528a447414125e1fc8feb2ad5a272b8f68477abbcc1ea7d94b9"},
+    {file = "cryptography-42.0.8-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:343728aac38decfdeecf55ecab3264b015be68fc2816ca800db649607aeee648"},
-    {file = "cryptography-42.0.7-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:36017400817987670037fbb0324d71489b6ead6231c9604f8fc1f7d008087c68"},
+    {file = "cryptography-42.0.8-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:013629ae70b40af70c9a7a5db40abe5d9054e6f4380e50ce769947b73bf3caad"},
-    {file = "cryptography-42.0.7.tar.gz", hash = "sha256:ecbfbc00bf55888edda9868a4cf927205de8499e7fabe6c050322298382953f2"},
+    {file = "cryptography-42.0.8.tar.gz", hash = "sha256:8d09d05439ce7baa8e9e95b07ec5b6c886f548deb7e0f69ef25f64b3bce842f2"},
 ]
 
 [package.dependencies]
@@ -1461,13 +1461,13 @@ test = ["lxml", "pytest (>=4.6)", "pytest-cov"]
 
 [[package]]
 name = "netaddr"
-version = "1.2.1"
+version = "1.3.0"
 description = "A network address manipulation library for Python"
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "netaddr-1.2.1-py3-none-any.whl", hash = "sha256:bd9e9534b0d46af328cf64f0e5a23a5a43fca292df221c85580b27394793496e"},
-    {file = "netaddr-1.2.1.tar.gz", hash = "sha256:6eb8fedf0412c6d294d06885c110de945cf4d22d2b510d0404f4e06950857987"},
+    {file = "netaddr-1.3.0-py3-none-any.whl", hash = "sha256:c2c6a8ebe5554ce33b7d5b3a306b71bbb373e000bbbf2350dd5213cc56e3dbbe"},
+    {file = "netaddr-1.3.0.tar.gz", hash = "sha256:5c3c3d9895b551b763779ba7db7a03487dc1f8e3b385af819af341ae9ef6e48a"},
 ]
 
 [package.extras]
@@ -1488,13 +1488,13 @@ tests = ["Sphinx", "doubles", "flake8", "flake8-quotes", "gevent", "mock", "pyte
 
 [[package]]
 name = "packaging"
-version = "24.0"
+version = "24.1"
 description = "Core utilities for Python packages"
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 files = [
-    {file = "packaging-24.0-py3-none-any.whl", hash = "sha256:2ddfb553fdf02fb784c234c7ba6ccc288296ceabec964ad2eae3777778130bc5"},
-    {file = "packaging-24.0.tar.gz", hash = "sha256:eb82c5e3e56209074766e6885bb04b8c38a0c015d0a30036ebe7ece34c9989e9"},
+    {file = "packaging-24.1-py3-none-any.whl", hash = "sha256:5b8f2217dbdbd2f7f384c41c628544e6d52f2d0f53c6d0c3ea61aa5d1d7ff124"},
+    {file = "packaging-24.1.tar.gz", hash = "sha256:026ed72c8ed3fcce5bf8950572258698927fd1dbda10a5e981cdf0ac37f4f002"},
 ]
 
 [[package]]
@@ -2157,13 +2157,13 @@ rpds-py = ">=0.7.0"
 
 [[package]]
 name = "requests"
-version = "2.31.0"
+version = "2.32.2"
 description = "Python HTTP for Humans."
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 files = [
-    {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"},
-    {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"},
+    {file = "requests-2.32.2-py3-none-any.whl", hash = "sha256:fc06670dd0ed212426dfeb94fc1b983d917c4f9847c863f313c9dfaaffb7c23c"},
+    {file = "requests-2.32.2.tar.gz", hash = "sha256:dd951ff5ecf3e3b3aa26b40703ba77495dab41da839ae72ef3c8e5d8e2433289"},
 ]
 
 [package.dependencies]
@@ -2387,13 +2387,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
 
 [[package]]
 name = "sentry-sdk"
-version = "2.3.1"
+version = "2.6.0"
 description = "Python client for Sentry (https://sentry.io)"
 optional = true
 python-versions = ">=3.6"
 files = [
-    {file = "sentry_sdk-2.3.1-py2.py3-none-any.whl", hash = "sha256:c5aeb095ba226391d337dd42a6f9470d86c9fc236ecc71cfc7cd1942b45010c6"},
-    {file = "sentry_sdk-2.3.1.tar.gz", hash = "sha256:139a71a19f5e9eb5d3623942491ce03cf8ebc14ea2e39ba3e6fe79560d8a5b1f"},
+    {file = "sentry_sdk-2.6.0-py2.py3-none-any.whl", hash = "sha256:422b91cb49378b97e7e8d0e8d5a1069df23689d45262b86f54988a7db264e874"},
+    {file = "sentry_sdk-2.6.0.tar.gz", hash = "sha256:65cc07e9c6995c5e316109f138570b32da3bd7ff8d0d0ee4aaf2628c3dd8127d"},
 ]
 
 [package.dependencies]
@@ -2598,22 +2598,22 @@ files = [
 
 [[package]]
 name = "tornado"
-version = "6.4"
+version = "6.4.1"
 description = "Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed."
 optional = true
-python-versions = ">= 3.8"
+python-versions = ">=3.8"
 files = [
-    {file = "tornado-6.4-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:02ccefc7d8211e5a7f9e8bc3f9e5b0ad6262ba2fbb683a6443ecc804e5224ce0"},
+    {file = "tornado-6.4.1-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:163b0aafc8e23d8cdc3c9dfb24c5368af84a81e3364745ccb4427669bf84aec8"},
-    {file = "tornado-6.4-cp38-abi3-macosx_10_9_x86_64.whl", hash = "sha256:27787de946a9cffd63ce5814c33f734c627a87072ec7eed71f7fc4417bb16263"},
+    {file = "tornado-6.4.1-cp38-abi3-macosx_10_9_x86_64.whl", hash = "sha256:6d5ce3437e18a2b66fbadb183c1d3364fb03f2be71299e7d10dbeeb69f4b2a14"},
-    {file = "tornado-6.4-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f7894c581ecdcf91666a0912f18ce5e757213999e183ebfc2c3fdbf4d5bd764e"},
+    {file = "tornado-6.4.1-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2e20b9113cd7293f164dc46fffb13535266e713cdb87bd2d15ddb336e96cfc4"},
-    {file = "tornado-6.4-cp38-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e43bc2e5370a6a8e413e1e1cd0c91bedc5bd62a74a532371042a18ef19e10579"},
+    {file = "tornado-6.4.1-cp38-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8ae50a504a740365267b2a8d1a90c9fbc86b780a39170feca9bcc1787ff80842"},
-    {file = "tornado-6.4-cp38-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f0251554cdd50b4b44362f73ad5ba7126fc5b2c2895cc62b14a1c2d7ea32f212"},
+    {file = "tornado-6.4.1-cp38-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:613bf4ddf5c7a95509218b149b555621497a6cc0d46ac341b30bd9ec19eac7f3"},
-    {file = "tornado-6.4-cp38-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:fd03192e287fbd0899dd8f81c6fb9cbbc69194d2074b38f384cb6fa72b80e9c2"},
+    {file = "tornado-6.4.1-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:25486eb223babe3eed4b8aecbac33b37e3dd6d776bc730ca14e1bf93888b979f"},
-    {file = "tornado-6.4-cp38-abi3-musllinux_1_1_i686.whl", hash = "sha256:88b84956273fbd73420e6d4b8d5ccbe913c65d31351b4c004ae362eba06e1f78"},
+    {file = "tornado-6.4.1-cp38-abi3-musllinux_1_2_i686.whl", hash = "sha256:454db8a7ecfcf2ff6042dde58404164d969b6f5d58b926da15e6b23817950fc4"},
-    {file = "tornado-6.4-cp38-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:71ddfc23a0e03ef2df1c1397d859868d158c8276a0603b96cf86892bff58149f"},
+    {file = "tornado-6.4.1-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a02a08cc7a9314b006f653ce40483b9b3c12cda222d6a46d4ac63bb6c9057698"},
-    {file = "tornado-6.4-cp38-abi3-win32.whl", hash = "sha256:6f8a6c77900f5ae93d8b4ae1196472d0ccc2775cc1dfdc9e7727889145c45052"},
+    {file = "tornado-6.4.1-cp38-abi3-win32.whl", hash = "sha256:d9a566c40b89757c9aa8e6f032bcdb8ca8795d7c1a9762910c722b1635c9de4d"},
-    {file = "tornado-6.4-cp38-abi3-win_amd64.whl", hash = "sha256:10aeaa8006333433da48dec9fe417877f8bcc21f48dda8d661ae79da357b2a63"},
+    {file = "tornado-6.4.1-cp38-abi3-win_amd64.whl", hash = "sha256:b24b8982ed444378d7f21d563f4180a2de31ced9d8d84443907a0a64da2072e7"},
-    {file = "tornado-6.4.tar.gz", hash = "sha256:72291fa6e6bc84e626589f1c29d90a5a6d593ef5ae68052ee2ef000dfd273dee"},
+    {file = "tornado-6.4.1.tar.gz", hash = "sha256:92d3ab53183d8c50f8204a51e6f91d18a15d5ef261e84d452800d4ff6fc504e9"},
 ]
 
 [[package]]
@@ -2906,24 +2906,24 @@ urllib3 = ">=2"
 
 [[package]]
 name = "types-setuptools"
-version = "69.5.0.20240423"
+version = "70.1.0.20240627"
 description = "Typing stubs for setuptools"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "types-setuptools-69.5.0.20240423.tar.gz", hash = "sha256:a7ba908f1746c4337d13f027fa0f4a5bcad6d1d92048219ba792b3295c58586d"},
-    {file = "types_setuptools-69.5.0.20240423-py3-none-any.whl", hash = "sha256:a4381e041510755a6c9210e26ad55b1629bc10237aeb9cb8b6bd24996b73db48"},
+    {file = "types-setuptools-70.1.0.20240627.tar.gz", hash = "sha256:385907a47b5cf302b928ce07953cd91147d5de6f3da604c31905fdf0ec309e83"},
+    {file = "types_setuptools-70.1.0.20240627-py3-none-any.whl", hash = "sha256:c7bdf05cd0a8b66868b4774c7b3c079d01ae025d8c9562bfc8bf2ff44d263c9c"},
 ]
 
 [[package]]
 name = "typing-extensions"
-version = "4.11.0"
+version = "4.12.2"
 description = "Backported and Experimental Type Hints for Python 3.8+"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "typing_extensions-4.11.0-py3-none-any.whl", hash = "sha256:c1f94d72897edaf4ce775bb7558d5b79d8126906a14ea5ed1635921406c0387a"},
-    {file = "typing_extensions-4.11.0.tar.gz", hash = "sha256:83f085bd5ca59c80295fc2a82ab5dac679cbe02b9f33f7d83af68e241bea51b0"},
+    {file = "typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d"},
+    {file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
 ]
 
 [[package]]
@@ -2939,18 +2939,18 @@ files = [
 
 [[package]]
 name = "urllib3"
-version = "2.0.7"
+version = "2.2.2"
 description = "HTTP library with thread-safe connection pooling, file post, and more."
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 files = [
-    {file = "urllib3-2.0.7-py3-none-any.whl", hash = "sha256:fdb6d215c776278489906c2f8916e6e7d4f5a9b602ccbcfdf7f016fc8da0596e"},
-    {file = "urllib3-2.0.7.tar.gz", hash = "sha256:c97dfde1f7bd43a71c8d2a58e369e9b2bf692d1334ea9f9cae55add7d0dd0f84"},
+    {file = "urllib3-2.2.2-py3-none-any.whl", hash = "sha256:a448b2f64d686155468037e1ace9f2d2199776e17f0a46610480d311f73e3472"},
+    {file = "urllib3-2.2.2.tar.gz", hash = "sha256:dd505485549a7a552833da5e6063639d0d177c04f23bc3864e41e5dc5f612168"},
 ]
 
 [package.extras]
 brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"]
-secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"]
+h2 = ["h2 (>=4,<5)"]
 socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
 zstd = ["zstandard (>=0.18.0)"]
 
@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"
 
 [tool.poetry]
 name = "matrix-synapse"
-version = "1.109.0"
+version = "1.110.0rc2"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
 
@@ -433,6 +433,10 @@ class ExperimentalConfig(Config):
             ("experimental", "msc4108_delegation_endpoint"),
         )
 
+        self.msc3823_account_suspension = experimental.get(
+            "msc3823_account_suspension", False
+        )
+
         self.msc3916_authenticated_media_enabled = experimental.get(
             "msc3916_authenticated_media_enabled", False
         )
 
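A hedged sketch of enabling the flags read above in `homeserver.yaml` (both default to off; `experimental_features` is the key Synapse uses for MSC flags, and enabling unstable features on a production server is not advised):

```yaml
experimental_features:
  msc3823_account_suspension: true
  msc3916_authenticated_media_enabled: true
```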
@@ -21,6 +21,7 @@
 #
 import datetime
 import logging
+from collections import OrderedDict
 from types import TracebackType
 from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple, Type
 
@@ -68,6 +69,10 @@ sent_edus_by_type = Counter(
 # If the retry interval is larger than this then we enter "catchup" mode
 CATCHUP_RETRY_INTERVAL = 60 * 60 * 1000
 
+# Limit how many presence states we add to each presence EDU, to ensure that
+# they are bounded in size.
+MAX_PRESENCE_STATES_PER_EDU = 50
+
 
 class PerDestinationQueue:
     """
@@ -144,7 +149,7 @@ class PerDestinationQueue:
 
         # Map of user_id -> UserPresenceState of pending presence to be sent to this
         # destination
-        self._pending_presence: Dict[str, UserPresenceState] = {}
+        self._pending_presence: OrderedDict[str, UserPresenceState] = OrderedDict()
 
         # List of room_id -> receipt_type -> user_id -> receipt_dict,
         #
@@ -399,7 +404,7 @@ class PerDestinationQueue:
         # through another mechanism, because this is all volatile!
         self._pending_edus = []
         self._pending_edus_keyed = {}
-        self._pending_presence = {}
+        self._pending_presence.clear()
         self._pending_receipt_edus = []
 
         self._start_catching_up()
@@ -721,22 +726,26 @@ class _TransactionQueueManager:
 
         # Add presence EDU.
         if self.queue._pending_presence:
+            # Only send max 50 presence entries in the EDU, to bound the amount
+            # of data we're sending.
+            presence_to_add: List[JsonDict] = []
+            while (
+                self.queue._pending_presence
+                and len(presence_to_add) < MAX_PRESENCE_STATES_PER_EDU
+            ):
+                _, presence = self.queue._pending_presence.popitem(last=False)
+                presence_to_add.append(
+                    format_user_presence_state(presence, self.queue._clock.time_msec())
+                )
+
             pending_edus.append(
                 Edu(
                     origin=self.queue._server_name,
                     destination=self.queue._destination,
                     edu_type=EduTypes.PRESENCE,
-                    content={
-                        "push": [
-                            format_user_presence_state(
-                                presence, self.queue._clock.time_msec()
-                            )
-                            for presence in self.queue._pending_presence.values()
-                        ]
-                    },
+                    content={"push": presence_to_add},
                 )
             )
-            self.queue._pending_presence = {}
 
         # Add read receipt EDUs.
         pending_edus.extend(self.queue._get_receipt_edus(force_flush=False, limit=5))
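The change above switches the pending-presence map to an `OrderedDict` and drains at most `MAX_PRESENCE_STATES_PER_EDU` entries per EDU, leaving the rest queued for the next transaction. A standalone sketch of that pattern (the constant, names, and dummy values here are illustrative, not Synapse's):

```python
from collections import OrderedDict

MAX_PER_BATCH = 3  # stand-in for MAX_PRESENCE_STATES_PER_EDU (50 in the diff)

def drain_batch(pending: OrderedDict) -> list:
    """Pop at most MAX_PER_BATCH of the oldest entries, preserving FIFO order."""
    batch = []
    while pending and len(batch) < MAX_PER_BATCH:
        # popitem(last=False) removes the oldest entry, so anything beyond
        # the cap stays queued for the next batch instead of being dropped.
        _, state = pending.popitem(last=False)
        batch.append(state)
    return batch

pending = OrderedDict((f"@u{i}:example.org", f"state{i}") for i in range(5))
first = drain_batch(pending)   # ["state0", "state1", "state2"]
second = drain_batch(pending)  # ["state3", "state4"]
```

This also explains why `reset` now calls `self._pending_presence.clear()` rather than rebinding to `{}`: the attribute must stay an `OrderedDict` for `popitem(last=False)` to work.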
@@ -33,6 +33,7 @@ from synapse.federation.transport.server.federation import (
     FEDERATION_SERVLET_CLASSES,
     FederationAccountStatusServlet,
     FederationUnstableClientKeysClaimServlet,
+    FederationUnstableMediaDownloadServlet,
 )
 from synapse.http.server import HttpServer, JsonResource
 from synapse.http.servlet import (
@ -315,6 +316,13 @@ def register_servlets(
|
||||||
):
|
):
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
if servletclass == FederationUnstableMediaDownloadServlet:
|
||||||
|
if (
|
||||||
|
not hs.config.server.enable_media_repo
|
||||||
|
or not hs.config.experimental.msc3916_authenticated_media_enabled
|
||||||
|
):
|
||||||
|
continue
|
||||||
|
|
||||||
servletclass(
|
servletclass(
|
||||||
hs=hs,
|
hs=hs,
|
||||||
authenticator=authenticator,
|
authenticator=authenticator,
|
||||||
|
|
|
@@ -360,13 +360,29 @@ class BaseFederationServlet:
                                 "request"
                             )
                             return None
-                        response = await func(
-                            origin, content, request.args, *args, **kwargs
-                        )
-                else:
-                    response = await func(
-                        origin, content, request.args, *args, **kwargs
-                    )
+                        if (
+                            func.__self__.__class__.__name__  # type: ignore
+                            == "FederationUnstableMediaDownloadServlet"
+                        ):
+                            response = await func(
+                                origin, content, request, *args, **kwargs
+                            )
+                        else:
+                            response = await func(
+                                origin, content, request.args, *args, **kwargs
+                            )
+                else:
+                    if (
+                        func.__self__.__class__.__name__  # type: ignore
+                        == "FederationUnstableMediaDownloadServlet"
+                    ):
+                        response = await func(
+                            origin, content, request, *args, **kwargs
+                        )
+                    else:
+                        response = await func(
+                            origin, content, request.args, *args, **kwargs
+                        )
             finally:
                 # if we used the origin's context as the parent, add a new span using
                 # the servlet span as a parent, so that we have a link
@@ -44,10 +44,13 @@ from synapse.federation.transport.server._base import (
 )
 from synapse.http.servlet import (
     parse_boolean_from_args,
+    parse_integer,
     parse_integer_from_args,
     parse_string_from_args,
     parse_strings_from_args,
 )
+from synapse.http.site import SynapseRequest
+from synapse.media._base import DEFAULT_MAX_TIMEOUT_MS, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
 from synapse.types import JsonDict
 from synapse.util import SYNAPSE_VERSION
 from synapse.util.ratelimitutils import FederationRateLimiter
@@ -787,6 +790,43 @@ class FederationAccountStatusServlet(BaseFederationServerServlet):
         return 200, {"account_statuses": statuses, "failures": failures}
 
 
+class FederationUnstableMediaDownloadServlet(BaseFederationServerServlet):
+    """
+    Implementation of new federation media `/download` endpoint outlined in MSC3916. Returns
+    a multipart/mixed response consisting of a JSON object and the requested media
+    item. This endpoint only returns local media.
+    """
+
+    PATH = "/media/download/(?P<media_id>[^/]*)"
+    PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc3916"
+    RATELIMIT = True
+
+    def __init__(
+        self,
+        hs: "HomeServer",
+        ratelimiter: FederationRateLimiter,
+        authenticator: Authenticator,
+        server_name: str,
+    ):
+        super().__init__(hs, authenticator, ratelimiter, server_name)
+        self.media_repo = self.hs.get_media_repository()
+
+    async def on_GET(
+        self,
+        origin: Optional[str],
+        content: Literal[None],
+        request: SynapseRequest,
+        media_id: str,
+    ) -> None:
+        max_timeout_ms = parse_integer(
+            request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
+        )
+        max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
+        await self.media_repo.get_local_media(
+            request, media_id, None, max_timeout_ms, federation=True
+        )
+
+
 FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
     FederationSendServlet,
     FederationEventServlet,
@@ -818,4 +858,5 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
     FederationV1SendKnockServlet,
     FederationMakeKnockServlet,
     FederationAccountStatusServlet,
+    FederationUnstableMediaDownloadServlet,
 )
@@ -642,6 +642,17 @@ class EventCreationHandler:
         """
         await self.auth_blocking.check_auth_blocking(requester=requester)
 
+        if event_dict["type"] == EventTypes.Message:
+            requester_suspended = await self.store.get_user_suspended_status(
+                requester.user.to_string()
+            )
+            if requester_suspended:
+                raise SynapseError(
+                    403,
+                    "Sending messages while account is suspended is not allowed.",
+                    Codes.USER_ACCOUNT_SUSPENDED,
+                )
+
         if event_dict["type"] == EventTypes.Create and event_dict["state_key"] == "":
             room_version_id = event_dict["content"]["room_version"]
             maybe_room_version_obj = KNOWN_ROOM_VERSIONS.get(room_version_id)
@@ -21,6 +21,7 @@ import logging
 from typing import (
     TYPE_CHECKING,
     AbstractSet,
+    Any,
     Dict,
     Final,
     List,
@@ -36,7 +37,8 @@ from immutabledict import immutabledict
 from synapse.api.constants import AccountDataTypes, Direction, EventTypes, Membership
 from synapse.events import EventBase
 from synapse.events.utils import strip_event
-from synapse.storage.roommember import RoomsForUser
+from synapse.handlers.relations import BundledAggregations
+from synapse.storage.databases.main.stream import CurrentStateDeltaMembership
 from synapse.types import (
     JsonDict,
     PersistedEventPosition,
@@ -56,28 +58,9 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)
 
 
-def convert_event_to_rooms_for_user(event: EventBase) -> RoomsForUser:
-    """
-    Quick helper to convert an event to a `RoomsForUser` object.
-    """
-    # These fields should be present for all persisted events
-    assert event.internal_metadata.stream_ordering is not None
-    assert event.internal_metadata.instance_name is not None
-
-    return RoomsForUser(
-        room_id=event.room_id,
-        sender=event.sender,
-        membership=event.membership,
-        event_id=event.event_id,
-        event_pos=PersistedEventPosition(
-            event.internal_metadata.instance_name,
-            event.internal_metadata.stream_ordering,
-        ),
-        room_version_id=event.room_version.identifier,
-    )
-
-
-def filter_membership_for_sync(*, membership: str, user_id: str, sender: str) -> bool:
+def filter_membership_for_sync(
+    *, membership: str, user_id: str, sender: Optional[str]
+) -> bool:
     """
     Returns True if the membership event should be included in the sync response,
     otherwise False.
@@ -94,7 +77,13 @@ def filter_membership_for_sync(*, membership: str, user_id: str, sender: str) ->
     #
     # This logic includes kicks (leave events where the sender is not the same user) and
     # can be read as "anything that isn't a leave or a leave with a different sender".
-    return membership != Membership.LEAVE or sender != user_id
+    #
+    # When `sender=None` and `membership=Membership.LEAVE`, it means that a state reset
+    # happened that removed the user from the room, or the user was the last person
+    # locally to leave the room which caused the server to leave the room. In both
+    # cases, we can just remove the rooms since they are no longer relevant to the user.
+    # They could still be added back later if they are `newly_left`.
+    return membership != Membership.LEAVE or sender not in (user_id, None)
 
 
 R = TypeVar("R")
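The updated predicate above can be exercised in isolation. A sketch with a plain string standing in for `Membership.LEAVE`, showing the three cases the new comment describes (kicks kept, self-leaves dropped, state resets with `sender=None` dropped):

```python
from typing import Optional

LEAVE = "leave"  # stand-in for Membership.LEAVE

def filter_membership_for_sync(
    *, membership: str, user_id: str, sender: Optional[str]
) -> bool:
    # Everything is included except a leave by the user themselves or a
    # leave with no sender (a state reset); kicks (a leave sent by someone
    # else) are still included.
    return membership != LEAVE or sender not in (user_id, None)

me = "@me:example.org"
keep_join = filter_membership_for_sync(membership="join", user_id=me, sender=me)
keep_kick = filter_membership_for_sync(
    membership=LEAVE, user_id=me, sender="@admin:example.org"
)
drop_self_leave = filter_membership_for_sync(membership=LEAVE, user_id=me, sender=me)
drop_state_reset = filter_membership_for_sync(membership=LEAVE, user_id=me, sender=None)
```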
@@ -229,6 +218,28 @@ class StateKeys:
     LAZY: Final = "$LAZY"
 
 
+@attr.s(slots=True, frozen=True, auto_attribs=True)
+class _RoomMembershipForUser:
+    """
+    Attributes:
+        event_id: The event ID of the membership event
+        event_pos: The stream position of the membership event
+        membership: The membership state of the user in the room
+        sender: The person who sent the membership event
+        newly_joined: Whether the user newly joined the room during the given token
+            range
+    """
+
+    event_id: Optional[str]
+    event_pos: PersistedEventPosition
+    membership: str
+    sender: Optional[str]
+    newly_joined: bool
+
+    def copy_and_replace(self, **kwds: Any) -> "_RoomMembershipForUser":
+        return attr.evolve(self, **kwds)
+
+
 class SlidingSyncHandler:
     def __init__(self, hs: "HomeServer"):
         self.clock = hs.get_clock()
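`copy_and_replace` in the new frozen attrs class is a thin wrapper over `attr.evolve`: fixups never mutate an instance, they produce an updated copy. The same pattern can be sketched with the stdlib `dataclasses.replace` (a simplified analogue, not Synapse's actual class):

```python
from dataclasses import dataclass, replace
from typing import Any, Optional

# Stdlib analogue of the attrs-based _RoomMembershipForUser above:
# frozen instances are never mutated; fixups produce updated copies.
@dataclass(frozen=True)
class RoomMembership:
    event_id: Optional[str]
    membership: str
    sender: Optional[str]
    newly_joined: bool = False

    def copy_and_replace(self, **kwds: Any) -> "RoomMembership":
        return replace(self, **kwds)

original = RoomMembership(event_id="$ev1", membership="join", sender="@me:example.org")
updated = original.copy_and_replace(newly_joined=True)
```

The frozen/evolve combination is what makes the later `newly_joined` fixups safe: the original snapshot entries stay untouched while the sync map is rewritten.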
@@ -392,10 +403,12 @@ class SlidingSyncHandler:
             # We're going to loop through the sorted list of rooms starting
             # at the range start index and keep adding rooms until we fill
             # up the range or run out of rooms.
+            #
+            # Both sides of range are inclusive
             current_range_index = range[0]
             range_end_index = range[1]
             while (
-                current_range_index < range_end_index
+                current_range_index <= range_end_index
                 and current_range_index <= len(sorted_room_info) - 1
             ):
                 room_id, _ = sorted_room_info[current_range_index]
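The `<` to `<=` fix matters because both ends of a sliding sync range are inclusive: a request for `[0, 2]` means three rooms, so the loop must also admit the index equal to `range_end_index`. A sketch of the corrected loop shape (illustrative names, not the handler's):

```python
from typing import List

def take_range(sorted_rooms: List[str], start: int, end: int) -> List[str]:
    """Collect entries for an inclusive [start, end] range, stopping early
    if the list runs out."""
    out: List[str] = []
    index = start
    # `<= end` because both sides of the range are inclusive.
    while index <= end and index <= len(sorted_rooms) - 1:
        out.append(sorted_rooms[index])
        index += 1
    return out

rooms = ["!a", "!b", "!c", "!d"]
full = take_range(rooms, 0, 2)    # inclusive: three rooms, not two
short = take_range(rooms, 2, 10)  # range larger than the remaining list
```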
@@ -460,7 +473,7 @@ class SlidingSyncHandler:
                 user=sync_config.user,
                 room_id=room_id,
                 room_sync_config=room_sync_config,
-                rooms_for_user_membership_at_to_token=sync_room_map[room_id],
+                rooms_membership_for_user_at_to_token=sync_room_map[room_id],
                 from_token=from_token,
                 to_token=to_token,
             )
@@ -479,7 +492,7 @@ class SlidingSyncHandler:
         user: UserID,
         to_token: StreamToken,
         from_token: Optional[StreamToken] = None,
-    ) -> Dict[str, RoomsForUser]:
+    ) -> Dict[str, _RoomMembershipForUser]:
         """
         Fetch room IDs that should be listed for this user in the sync response (the
         full room list that will be filtered, sorted, and sliced).
@@ -528,13 +541,17 @@ class SlidingSyncHandler:
 
         # Our working list of rooms that can show up in the sync response
         sync_room_id_set = {
-            room_for_user.room_id: room_for_user
-            for room_for_user in room_for_user_list
-            if filter_membership_for_sync(
+            # Note: The `room_for_user` we're assigning here will need to be fixed up
+            # (below) because they are potentially from the current snapshot time
+            # instead from the time of the `to_token`.
+            room_for_user.room_id: _RoomMembershipForUser(
+                event_id=room_for_user.event_id,
+                event_pos=room_for_user.event_pos,
                 membership=room_for_user.membership,
-                user_id=user_id,
                 sender=room_for_user.sender,
+                newly_joined=False,
             )
+            for room_for_user in room_for_user_list
         }
 
         # Get the `RoomStreamToken` that represents the spot we queried up to when we got
@@ -569,14 +586,9 @@ class SlidingSyncHandler:
         #
         # - 1a) Remove rooms that the user joined after the `to_token`
         # - 1b) Add back rooms that the user left after the `to_token`
+        # - 1c) Update room membership events to the point in time of the `to_token`
         # - 2) Add back newly_left rooms (> `from_token` and <= `to_token`)
-        #
-        # Below, we're doing two separate lookups for membership changes. We could
-        # request everything for both fixups in one range, [`from_token.room_key`,
-        # `membership_snapshot_token`), but we want to avoid raw `stream_ordering`
-        # comparison without `instance_name` (which is flawed). We could refactor
-        # `event.internal_metadata` to include `instance_name` but it might turn out a
-        # little difficult and a bigger, broader Synapse change than we want to make.
+        # - 3) Figure out which rooms are `newly_joined`
 
         # 1) -----------------------------------------------------
 
@ -586,159 +598,198 @@ class SlidingSyncHandler:
|
||||||
# If our `to_token` is already the same or ahead of the latest room membership
|
# If our `to_token` is already the same or ahead of the latest room membership
|
||||||
# for the user, we don't need to do any "2)" fix-ups and can just straight-up
|
# for the user, we don't need to do any "2)" fix-ups and can just straight-up
|
||||||
# use the room list from the snapshot as a base (nothing has changed)
|
# use the room list from the snapshot as a base (nothing has changed)
|
||||||
membership_change_events_after_to_token = []
|
current_state_delta_membership_changes_after_to_token = []
|
||||||
if not membership_snapshot_token.is_before_or_eq(to_token.room_key):
|
if not membership_snapshot_token.is_before_or_eq(to_token.room_key):
|
||||||
membership_change_events_after_to_token = (
|
current_state_delta_membership_changes_after_to_token = (
|
||||||
await self.store.get_membership_changes_for_user(
|
await self.store.get_current_state_delta_membership_changes_for_user(
|
||||||
user_id,
|
user_id,
|
||||||
from_key=to_token.room_key,
|
from_key=to_token.room_key,
|
||||||
to_key=membership_snapshot_token,
|
to_key=membership_snapshot_token,
|
||||||
excluded_rooms=self.rooms_to_exclude_globally,
|
excluded_room_ids=self.rooms_to_exclude_globally,
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
|
|
||||||
# 1) Assemble a list of the last membership events in some given ranges. Someone
|
# 1) Assemble a list of the first membership event after the `to_token` so we can
|
||||||
# could have left and joined multiple times during the given range but we only
|
# step backward to the previous membership that would apply to the from/to
|
||||||
# care about end-result so we grab the last one.
|
# range.
|
||||||
last_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
|
first_membership_change_by_room_id_after_to_token: Dict[
|
||||||
# We also need the first membership event after the `to_token` so we can step
|
str, CurrentStateDeltaMembership
|
||||||
# backward to the previous membership that would apply to the from/to range.
|
] = {}
|
||||||
first_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
|
for membership_change in current_state_delta_membership_changes_after_to_token:
|
||||||
for event in membership_change_events_after_to_token:
|
|
||||||
last_membership_change_by_room_id_after_to_token[event.room_id] = event
|
|
||||||
# Only set if we haven't already set it
|
# Only set if we haven't already set it
|
||||||
first_membership_change_by_room_id_after_to_token.setdefault(
|
first_membership_change_by_room_id_after_to_token.setdefault(
|
||||||
event.room_id, event
|
membership_change.room_id, membership_change
|
||||||
)
|
)
|
||||||
|
|
||||||
# 1) Fixup
|
# 1) Fixup
|
||||||
|
#
|
||||||
|
# Since we fetched a snapshot of the users room list at some point in time after
|
||||||
|
# the from/to tokens, we need to revert/rewind some membership changes to match
|
||||||
|
# the point in time of the `to_token`.
|
||||||
for (
|
for (
|
||||||
last_membership_change_after_to_token
|
room_id,
|
||||||
) in last_membership_change_by_room_id_after_to_token.values():
|
first_membership_change_after_to_token,
|
||||||
room_id = last_membership_change_after_to_token.room_id
|
) in first_membership_change_by_room_id_after_to_token.items():
|
||||||
|
# 1a) Remove rooms that the user joined after the `to_token`
|
||||||
|
if first_membership_change_after_to_token.prev_event_id is None:
|
||||||
|
sync_room_id_set.pop(room_id, None)
|
||||||
|
# 1b) 1c) From the first membership event after the `to_token`, step backward to the
|
||||||
|
# previous membership that would apply to the from/to range.
|
||||||
|
else:
|
||||||
|
# We don't expect these fields to be `None` if we have a `prev_event_id`
|
||||||
|
# but we're being defensive since it's possible that the prev event was
|
||||||
|
# culled from the database.
|
||||||
|
if (
|
||||||
|
first_membership_change_after_to_token.prev_event_pos is not None
|
||||||
|
and first_membership_change_after_to_token.prev_membership
|
||||||
|
is not None
|
||||||
|
):
|
||||||
|
sync_room_id_set[room_id] = _RoomMembershipForUser(
|
||||||
|
event_id=first_membership_change_after_to_token.prev_event_id,
|
||||||
|
event_pos=first_membership_change_after_to_token.prev_event_pos,
|
||||||
|
membership=first_membership_change_after_to_token.prev_membership,
|
||||||
|
sender=first_membership_change_after_to_token.prev_sender,
|
||||||
|
newly_joined=False,
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
# If we can't find the previous membership event, we shouldn't
|
||||||
|
# include the room in the sync response since we can't determine the
|
||||||
|
# exact membership state and shouldn't rely on the current snapshot.
|
||||||
|
sync_room_id_set.pop(room_id, None)
|
||||||
|
|
||||||
# We want to find the first membership change after the `to_token` then step
|
# Filter the rooms that that we have updated room membership events to the point
|
||||||
# backward to know the membership in the from/to range.
|
# in time of the `to_token` (from the "1)" fixups)
|
||||||
first_membership_change_after_to_token = (
|
filtered_sync_room_id_set = {
|
||||||
first_membership_change_by_room_id_after_to_token.get(room_id)
|
room_id: room_membership_for_user
|
||||||
)
|
for room_id, room_membership_for_user in sync_room_id_set.items()
|
||||||
assert first_membership_change_after_to_token is not None, (
|
if filter_membership_for_sync(
|
||||||
"If there was a `last_membership_change_after_to_token` that we're iterating over, "
|
membership=room_membership_for_user.membership,
|
||||||
+ "then there should be corresponding a first change. For example, even if there "
|
|
||||||
+ "is only one event after the `to_token`, the first and last event will be same event. "
|
|
||||||
+ "This is probably a mistake in assembling the `last_membership_change_by_room_id_after_to_token`"
|
|
||||||
+ "/`first_membership_change_by_room_id_after_to_token` dicts above."
|
|
||||||
)
|
|
||||||
# TODO: Instead of reading from `unsigned`, refactor this to use the
|
|
||||||
# `current_state_delta_stream` table in the future. Probably a new
|
|
||||||
# `get_membership_changes_for_user()` function that uses
|
|
||||||
# `current_state_delta_stream` with a join to `room_memberships`. This would
|
|
||||||
# help in state reset scenarios since `prev_content` is looking at the
|
|
||||||
# current branch vs the current room state. This is all just data given to
|
|
||||||
# the client so no real harm to data integrity, but we'd like to be nice to
|
|
||||||
# the client. Since the `current_state_delta_stream` table is new, it
|
|
||||||
# doesn't have all events in it. Since this is Sliding Sync, if we ever need
|
|
||||||
# to, we can signal the client to throw all of their state away by sending
|
|
||||||
# "operation: RESET".
|
|
||||||
prev_content = first_membership_change_after_to_token.unsigned.get(
|
|
||||||
"prev_content", {}
|
|
||||||
)
|
|
||||||
prev_membership = prev_content.get("membership", None)
|
|
||||||
prev_sender = first_membership_change_after_to_token.unsigned.get(
|
|
||||||
"prev_sender", None
|
|
||||||
)
|
|
||||||
|
|
||||||
# Check if the previous membership (membership that applies to the from/to
|
|
||||||
# range) should be included in our `sync_room_id_set`
|
|
||||||
should_prev_membership_be_included = (
|
|
||||||
prev_membership is not None
|
|
||||||
and prev_sender is not None
|
|
||||||
and filter_membership_for_sync(
|
|
||||||
membership=prev_membership,
|
|
||||||
user_id=user_id,
|
|
||||||
sender=prev_sender,
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
# Check if the last membership (membership that applies to our snapshot) was
|
|
||||||
# already included in our `sync_room_id_set`
|
|
||||||
was_last_membership_already_included = filter_membership_for_sync(
|
|
||||||
membership=last_membership_change_after_to_token.membership,
|
|
||||||
user_id=user_id,
|
user_id=user_id,
|
||||||
sender=last_membership_change_after_to_token.sender,
|
sender=room_membership_for_user.sender,
|
||||||
)
|
)
|
||||||
|
}
|
||||||
# 1a) Add back rooms that the user left after the `to_token`
|
|
||||||
#
|
|
||||||
# For example, if the last membership event after the `to_token` is a leave
|
|
||||||
# event, then the room was excluded from `sync_room_id_set` when we first
|
|
||||||
# crafted it above. We should add these rooms back as long as the user also
|
|
||||||
# was part of the room before the `to_token`.
|
|
||||||
if (
|
|
||||||
not was_last_membership_already_included
|
|
||||||
and should_prev_membership_be_included
|
|
||||||
):
|
|
||||||
sync_room_id_set[room_id] = convert_event_to_rooms_for_user(
|
|
||||||
last_membership_change_after_to_token
|
|
||||||
)
|
|
||||||
# 1b) Remove rooms that the user joined (hasn't left) after the `to_token`
|
|
||||||
#
|
|
||||||
# For example, if the last membership event after the `to_token` is a "join"
|
|
||||||
# event, then the room was included `sync_room_id_set` when we first crafted
|
|
||||||
# it above. We should remove these rooms as long as the user also wasn't
|
|
||||||
# part of the room before the `to_token`.
|
|
||||||
elif (
|
|
||||||
was_last_membership_already_included
|
|
||||||
and not should_prev_membership_be_included
|
|
||||||
):
|
|
||||||
del sync_room_id_set[room_id]
|
|
||||||
|
|
||||||
# 2) -----------------------------------------------------
|
# 2) -----------------------------------------------------
|
||||||
# We fix-up newly_left rooms after the first fixup because it may have removed
|
# We fix-up newly_left rooms after the first fixup because it may have removed
|
||||||
# some left rooms that we can figure out our newly_left in the following code
|
# some left rooms that we can figure out are newly_left in the following code
|
||||||
|
|
||||||
# 2) Fetch membership changes that fall in the range from `from_token` up to `to_token`
|
# 2) Fetch membership changes that fall in the range from `from_token` up to `to_token`
|
||||||
membership_change_events_in_from_to_range = []
|
current_state_delta_membership_changes_in_from_to_range = []
|
||||||
if from_token:
|
if from_token:
|
||||||
membership_change_events_in_from_to_range = (
|
current_state_delta_membership_changes_in_from_to_range = (
|
||||||
await self.store.get_membership_changes_for_user(
|
await self.store.get_current_state_delta_membership_changes_for_user(
|
||||||
user_id,
|
user_id,
|
||||||
from_key=from_token.room_key,
|
from_key=from_token.room_key,
|
||||||
to_key=to_token.room_key,
|
to_key=to_token.room_key,
|
||||||
excluded_rooms=self.rooms_to_exclude_globally,
|
excluded_room_ids=self.rooms_to_exclude_globally,
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
|
|
||||||
# 2) Assemble a list of the last membership events in some given ranges. Someone
|
# 2) Assemble a list of the last membership events in some given ranges. Someone
|
||||||
# could have left and joined multiple times during the given range but we only
|
# could have left and joined multiple times during the given range but we only
|
||||||
# care about end-result so we grab the last one.
|
# care about end-result so we grab the last one.
|
||||||
last_membership_change_by_room_id_in_from_to_range: Dict[str, EventBase] = {}
|
last_membership_change_by_room_id_in_from_to_range: Dict[
|
||||||
for event in membership_change_events_in_from_to_range:
|
str, CurrentStateDeltaMembership
|
||||||
last_membership_change_by_room_id_in_from_to_range[event.room_id] = event
|
] = {}
|
||||||
|
# We also want to assemble a list of the first membership events during the token
|
||||||
|
# range so we can step backward to the previous membership that would apply to
|
||||||
|
# before the token range to see if we have `newly_joined` the room.
|
||||||
|
first_membership_change_by_room_id_in_from_to_range: Dict[
|
||||||
|
str, CurrentStateDeltaMembership
|
||||||
|
] = {}
|
||||||
|
# Keep track if the room has a non-join event in the token range so we can later
|
||||||
|
# tell if it was a `newly_joined` room. If the last membership event in the
|
||||||
|
# token range is a join and there is also some non-join in the range, we know
|
||||||
|
# they `newly_joined`.
|
||||||
|
has_non_join_event_by_room_id_in_from_to_range: Dict[str, bool] = {}
|
||||||
|
for (
|
||||||
|
membership_change
|
||||||
|
) in current_state_delta_membership_changes_in_from_to_range:
|
||||||
|
room_id = membership_change.room_id
|
||||||
|
|
||||||
|
last_membership_change_by_room_id_in_from_to_range[room_id] = (
|
||||||
|
membership_change
|
||||||
|
)
|
||||||
|
# Only set if we haven't already set it
|
||||||
|
first_membership_change_by_room_id_in_from_to_range.setdefault(
|
||||||
|
room_id, membership_change
|
||||||
|
)
|
||||||
|
|
||||||
|
if membership_change.membership != Membership.JOIN:
|
||||||
|
has_non_join_event_by_room_id_in_from_to_range[room_id] = True
|
||||||
|
|
||||||
# 2) Fixup
|
# 2) Fixup
|
||||||
|
#
|
||||||
|
# 3) We also want to assemble a list of possibly newly joined rooms. Someone
|
||||||
|
# could have left and joined multiple times during the given range but we only
|
||||||
|
# care about whether they are joined at the end of the token range so we are
|
||||||
|
# working with the last membership even in the token range.
|
||||||
|
possibly_newly_joined_room_ids = set()
|
||||||
for (
|
for (
|
||||||
last_membership_change_in_from_to_range
|
last_membership_change_in_from_to_range
|
||||||
) in last_membership_change_by_room_id_in_from_to_range.values():
|
) in last_membership_change_by_room_id_in_from_to_range.values():
|
||||||
room_id = last_membership_change_in_from_to_range.room_id
|
room_id = last_membership_change_in_from_to_range.room_id
|
||||||
|
|
||||||
|
# 3)
|
||||||
|
if last_membership_change_in_from_to_range.membership == Membership.JOIN:
|
||||||
|
possibly_newly_joined_room_ids.add(room_id)
|
||||||
|
|
||||||
# 2) Add back newly_left rooms (> `from_token` and <= `to_token`). We
|
            # 2) Add back newly_left rooms (> `from_token` and <= `to_token`). We
            # include newly_left rooms because the last event that the user should see
            # is their own leave event
            if last_membership_change_in_from_to_range.membership == Membership.LEAVE:
-                sync_room_id_set[room_id] = convert_event_to_rooms_for_user(
-                    last_membership_change_in_from_to_range
-                )
+                filtered_sync_room_id_set[room_id] = _RoomMembershipForUser(
+                    event_id=last_membership_change_in_from_to_range.event_id,
+                    event_pos=last_membership_change_in_from_to_range.event_pos,
+                    membership=last_membership_change_in_from_to_range.membership,
+                    sender=last_membership_change_in_from_to_range.sender,
+                    newly_joined=False,
+                )

-        return sync_room_id_set
+        # 3) Figure out `newly_joined`
+        for room_id in possibly_newly_joined_room_ids:
+            has_non_join_in_from_to_range = (
+                has_non_join_event_by_room_id_in_from_to_range.get(room_id, False)
+            )
+            # If the last membership event in the token range is a join and there is
+            # also some non-join in the range, we know they `newly_joined`.
+            if has_non_join_in_from_to_range:
+                # We found a `newly_joined` room (we left and joined within the token range)
+                filtered_sync_room_id_set[room_id] = filtered_sync_room_id_set[
+                    room_id
+                ].copy_and_replace(newly_joined=True)
+            else:
+                prev_event_id = first_membership_change_by_room_id_in_from_to_range[
+                    room_id
+                ].prev_event_id
+                prev_membership = first_membership_change_by_room_id_in_from_to_range[
+                    room_id
+                ].prev_membership
+
+                if prev_event_id is None:
+                    # We found a `newly_joined` room (we are joining the room for the
+                    # first time within the token range)
+                    filtered_sync_room_id_set[room_id] = filtered_sync_room_id_set[
+                        room_id
+                    ].copy_and_replace(newly_joined=True)
+                # Last resort, we need to step back to the previous membership event
+                # just before the token range to see if we're joined then or not.
+                elif prev_membership != Membership.JOIN:
+                    # We found a `newly_joined` room (we left before the token range
+                    # and joined within the token range)
+                    filtered_sync_room_id_set[room_id] = filtered_sync_room_id_set[
+                        room_id
+                    ].copy_and_replace(newly_joined=True)
+
+        return filtered_sync_room_id_set
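The `newly_joined` decision above can be condensed into a small sketch (simplified, standalone helper; not Synapse's actual code): a room is newly joined when the last membership in the token range is a join and either some non-join also happened in the range, there was no membership at all before the range, or the membership just before the range was not a join.

```python
from typing import Optional


def is_newly_joined(
    last_membership: str,
    has_non_join_in_range: bool,
    prev_event_id: Optional[str],
    prev_membership: Optional[str],
) -> bool:
    """Simplified mirror of the `newly_joined` logic in the patch above."""
    if last_membership != "join":
        return False
    if has_non_join_in_range:
        # Left and re-joined within the token range
        return True
    if prev_event_id is None:
        # First ever join happened within the token range
        return True
    # Fall back to the membership just before the token range
    return prev_membership != "join"
```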
     async def filter_rooms(
         self,
         user: UserID,
-        sync_room_map: Dict[str, RoomsForUser],
+        sync_room_map: Dict[str, _RoomMembershipForUser],
         filters: SlidingSyncConfig.SlidingSyncList.Filters,
         to_token: StreamToken,
-    ) -> Dict[str, RoomsForUser]:
+    ) -> Dict[str, _RoomMembershipForUser]:
         """
         Filter rooms based on the sync request.

@@ -774,7 +825,7 @@ class SlidingSyncHandler:

         # Flatten out the map
         dm_room_id_set = set()
-        if dm_map:
+        if isinstance(dm_map, dict):
             for room_ids in dm_map.values():
                 # Account data should be a list of room IDs. Ignore anything else
                 if isinstance(room_ids, list):
@@ -813,8 +864,21 @@ class SlidingSyncHandler:
             ):
                 filtered_room_id_set.remove(room_id)

-        if filters.is_invite:
-            raise NotImplementedError()
+        # Filter for rooms that the user has been invited to
+        if filters.is_invite is not None:
+            # Make a copy so we don't run into an error: `Set changed size during
+            # iteration`, when we filter out and remove items
+            for room_id in list(filtered_room_id_set):
+                room_for_user = sync_room_map[room_id]
+                # If we're looking for invite rooms, filter out rooms that the user is
+                # not invited to and vice versa
+                if (
+                    filters.is_invite and room_for_user.membership != Membership.INVITE
+                ) or (
+                    not filters.is_invite
+                    and room_for_user.membership == Membership.INVITE
+                ):
+                    filtered_room_id_set.remove(room_id)

         if filters.room_types:
             raise NotImplementedError()
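The `is_invite` filter above reduces to a symmetric keep/drop rule. A minimal sketch with a hypothetical room-membership map (plain strings instead of `Membership` constants):

```python
def filter_is_invite(room_memberships: dict, is_invite: bool) -> set:
    """Keep only invite rooms when is_invite is True, drop them when False."""
    kept = set(room_memberships)
    # Iterate over a copy so we can remove items while filtering
    for room_id in list(kept):
        membership = room_memberships[room_id]
        if (is_invite and membership != "invite") or (
            not is_invite and membership == "invite"
        ):
            kept.remove(room_id)
    return kept
```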
@@ -836,9 +900,9 @@ class SlidingSyncHandler:

     async def sort_rooms(
         self,
-        sync_room_map: Dict[str, RoomsForUser],
+        sync_room_map: Dict[str, _RoomMembershipForUser],
         to_token: StreamToken,
-    ) -> List[Tuple[str, RoomsForUser]]:
+    ) -> List[Tuple[str, _RoomMembershipForUser]]:
         """
         Sort by `stream_ordering` of the last event that the user should see in the
         room. `stream_ordering` is unique so we get a stable sort.
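The sort described in the docstring above is a plain recency sort: because `stream_ordering` is unique per event, ordering rooms by the stream ordering of their most recent relevant event can never produce ties. A simplified sketch (hypothetical helper, integer orderings):

```python
def sort_rooms_by_recency(last_orderings: dict) -> list:
    """Return room IDs ordered by the stream ordering of their last event, newest first."""
    return sorted(
        last_orderings,
        key=lambda room_id: last_orderings[room_id],
        reverse=True,
    )
```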
@@ -891,7 +955,7 @@ class SlidingSyncHandler:
         user: UserID,
         room_id: str,
         room_sync_config: RoomSyncConfig,
-        rooms_for_user_membership_at_to_token: RoomsForUser,
+        rooms_membership_for_user_at_to_token: _RoomMembershipForUser,
         from_token: Optional[StreamToken],
         to_token: StreamToken,
     ) -> SlidingSyncResult.RoomResult:
@@ -905,42 +969,45 @@ class SlidingSyncHandler:
             room_id: The room ID to fetch data for
             room_sync_config: Config for what data we should fetch for a room in the
                 sync response.
-            rooms_for_user_membership_at_to_token: Membership information for the user
+            rooms_membership_for_user_at_to_token: Membership information for the user
                 in the room at the time of `to_token`.
             from_token: The point in the stream to sync from.
             to_token: The point in the stream to sync up to.
         """

         # Assemble the list of timeline events
-        timeline_events: List[EventBase] = []
-        limited = False
-        # We want to start off using the `to_token` (vs `from_token`) because we look
-        # backwards from the `to_token` up to the `timeline_limit` and we might not
-        # reach the `from_token` before we hit the limit. We will update the room stream
-        # position once we've fetched the events to point to the earliest event fetched.
-        prev_batch_token = to_token
-        if room_sync_config.timeline_limit > 0:
-            newly_joined = False
-            if (
-                # We can only determine new-ness if we have a `from_token` to define our range
-                from_token is not None
-                and rooms_for_user_membership_at_to_token.membership == Membership.JOIN
-            ):
-                newly_joined = (
-                    rooms_for_user_membership_at_to_token.event_pos.persisted_after(
-                        from_token.room_key
-                    )
-                )
+        #
+        # It would be nice to make the `rooms` response more uniform regardless of
+        # membership. Currently, we have to make all of these optional because
+        # `invite`/`knock` rooms only have `stripped_state`. See
+        # https://github.com/matrix-org/matrix-spec-proposals/pull/3575#discussion_r1653045932
+        timeline_events: Optional[List[EventBase]] = None
+        bundled_aggregations: Optional[Dict[str, BundledAggregations]] = None
+        limited: Optional[bool] = None
+        prev_batch_token: Optional[StreamToken] = None
+        num_live: Optional[int] = None
+        if (
+            room_sync_config.timeline_limit > 0
+            # No timeline for invite/knock rooms (just `stripped_state`)
+            and rooms_membership_for_user_at_to_token.membership
+            not in (Membership.INVITE, Membership.KNOCK)
+        ):
+            limited = False
+            # We want to start off using the `to_token` (vs `from_token`) because we look
+            # backwards from the `to_token` up to the `timeline_limit` and we might not
+            # reach the `from_token` before we hit the limit. We will update the room stream
+            # position once we've fetched the events to point to the earliest event fetched.
+            prev_batch_token = to_token

             # We're going to paginate backwards from the `to_token`
             from_bound = to_token.room_key
             # People shouldn't see past their leave/ban event
-            if rooms_for_user_membership_at_to_token.membership in (
+            if rooms_membership_for_user_at_to_token.membership in (
                 Membership.LEAVE,
                 Membership.BAN,
             ):
                 from_bound = (
-                    rooms_for_user_membership_at_to_token.event_pos.to_room_stream_token()
+                    rooms_membership_for_user_at_to_token.event_pos.to_room_stream_token()
                 )

             # Determine whether we should limit the timeline to the token range.
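The bound selection above can be summarized in a few lines (hypothetical helper, with stream positions simplified to integers): pagination starts backwards from `to_token`, but for a left/banned user the start is clamped to their leave/ban position so they cannot see later events.

```python
def choose_from_bound(to_token_pos: int, membership: str, membership_pos: int) -> int:
    """Pick where backwards pagination starts for a given membership."""
    if membership in ("leave", "ban"):
        # People shouldn't see past their leave/ban event
        return membership_pos
    return to_token_pos
```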
@@ -954,7 +1021,8 @@ class SlidingSyncHandler:
             # connection before
             to_bound = (
                 from_token.room_key
-                if from_token is not None and not newly_joined
+                if from_token is not None
+                and not rooms_membership_for_user_at_to_token.newly_joined
                 else None
             )

@@ -986,18 +1054,19 @@ class SlidingSyncHandler:
                 stream=timeline_events[0].internal_metadata.stream_ordering - 1
             )

-            # TODO: Does `newly_joined` affect `limited`? It does in sync v2 but I fail
-            # to understand why.
-
             # Make sure we don't expose any events that the client shouldn't see
             timeline_events = await filter_events_for_client(
                 self.storage_controllers,
                 user.to_string(),
                 timeline_events,
-                is_peeking=rooms_for_user_membership_at_to_token.membership
+                is_peeking=rooms_membership_for_user_at_to_token.membership
                 != Membership.JOIN,
                 filter_send_to_client=True,
             )
+            # TODO: Filter out `EventTypes.CallInvite` in public rooms,
+            # see https://github.com/element-hq/synapse/issues/17359
+
+            # TODO: Handle timeline gaps (`get_timeline_gaps()`)

             # Determine how many "live" events we have (events within the given token range).
             #
@@ -1027,6 +1096,15 @@ class SlidingSyncHandler:
                     # this more with a binary search (bisect).
                     break

+            # If the timeline is `limited=True`, the client does not have all events
+            # necessary to calculate aggregations themselves.
+            if limited:
+                bundled_aggregations = (
+                    await self.relations_handler.get_bundled_aggregations(
+                        timeline_events, user.to_string()
+                    )
+                )
+
             # Update the `prev_batch_token` to point to the position that allows us to
             # keep paginating backwards from the oldest event we return in the timeline.
             prev_batch_token = prev_batch_token.copy_and_replace(
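The "live" event count mentioned in the hunks above walks the timeline backwards and stops at the first event older than the token range (the comment in the patch notes a bisect could do the same faster). A simplified sketch with stream positions as plain integers (hypothetical helper):

```python
def count_live_events(stream_positions: list, from_pos: int) -> int:
    """Count events whose stream position is newer than `from_pos`.

    `stream_positions` is ordered oldest -> newest, so we walk backwards
    and stop at the first event at or before the range start.
    """
    num_live = 0
    for pos in reversed(stream_positions):
        if pos > from_pos:
            num_live += 1
        else:
            # Everything earlier is also outside the range (could bisect instead)
            break
    return num_live
```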
@@ -1036,12 +1114,16 @@ class SlidingSyncHandler:
         # Figure out any stripped state events for invite/knocks. This allows the
         # potential joiner to identify the room.
         stripped_state: List[JsonDict] = []
-        if rooms_for_user_membership_at_to_token.membership in (
+        if rooms_membership_for_user_at_to_token.membership in (
             Membership.INVITE,
             Membership.KNOCK,
         ):
+            # This should never happen. If someone is invited/knocked on room, then
+            # there should be an event for it.
+            assert rooms_membership_for_user_at_to_token.event_id is not None
+
             invite_or_knock_event = await self.store.get_event(
-                rooms_for_user_membership_at_to_token.event_id
+                rooms_membership_for_user_at_to_token.event_id
             )

             stripped_state = []
@@ -1056,17 +1138,11 @@ class SlidingSyncHandler:

             stripped_state.append(strip_event(invite_or_knock_event))

-        # TODO: Handle timeline gaps (`get_timeline_gaps()`)
-
-        # If the timeline is `limited=True`, the client does not have all events
-        # necessary to calculate aggregations themselves.
-        bundled_aggregations = None
-        if limited:
-            bundled_aggregations = (
-                await self.relations_handler.get_bundled_aggregations(
-                    timeline_events, user.to_string()
-                )
-            )
+        # TODO: Handle state resets. For example, if we see
+        # `rooms_membership_for_user_at_to_token.membership = Membership.LEAVE` but
+        # `required_state` doesn't include it, we should indicate to the client that a
+        # state reset happened. Perhaps we should indicate this by setting `initial:
+        # True` and empty `required_state`.

         # TODO: Since we can't determine whether we've already sent a room down this
         # Sliding Sync connection before (we plan to add this optimization in the
@@ -119,14 +119,15 @@ def parse_integer(
         default: value to use if the parameter is absent, defaults to None.
         required: whether to raise a 400 SynapseError if the parameter is absent,
             defaults to False.
-        negative: whether to allow negative integers, defaults to True.
+        negative: whether to allow negative integers, defaults to False (disallowing
+            negatives).

     Returns:
         An int value or the default.

     Raises:
         SynapseError: if the parameter is absent and required, if the
             parameter is present and not an integer, or if the
-            parameter is illegitimate negative.
+            parameter is illegitimately negative.
     """
     args: Mapping[bytes, Sequence[bytes]] = request.args  # type: ignore
     return parse_integer_from_args(args, name, default, required, negative)

@@ -164,7 +165,7 @@ def parse_integer_from_args(
     name: str,
     default: Optional[int] = None,
     required: bool = False,
-    negative: bool = True,
+    negative: bool = False,
 ) -> Optional[int]:
     """Parse an integer parameter from the request string

@@ -174,7 +175,8 @@ def parse_integer_from_args(
         default: value to use if the parameter is absent, defaults to None.
         required: whether to raise a 400 SynapseError if the parameter is absent,
             defaults to False.
-        negative: whether to allow negative integers, defaults to True.
+        negative: whether to allow negative integers, defaults to False (disallowing
+            negatives).

     Returns:
         An int value or the default.

@@ -182,7 +184,7 @@ def parse_integer_from_args(
     Raises:
         SynapseError: if the parameter is absent and required, if the
             parameter is present and not an integer, or if the
-            parameter is illegitimate negative.
+            parameter is illegitimately negative.
     """
     name_bytes = name.encode("ascii")
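The behavioral change above flips the `negative` default from True to False, so a negative query parameter is now rejected unless a caller opts in. A minimal sketch of that validation (hypothetical helper, not Synapse's actual `parse_integer`):

```python
def parse_integer_value(raw: str, negative: bool = False) -> int:
    """Parse an integer query-parameter value, rejecting negatives by default."""
    try:
        value = int(raw)
    except ValueError:
        raise ValueError("query parameter must be an integer")
    if not negative and value < 0:
        # With the new default, this is the case that previously slipped through
        raise ValueError("query parameter must not be negative")
    return value
```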
@@ -25,7 +25,16 @@ import os
 import urllib
 from abc import ABC, abstractmethod
 from types import TracebackType
-from typing import Awaitable, Dict, Generator, List, Optional, Tuple, Type
+from typing import (
+    TYPE_CHECKING,
+    Awaitable,
+    Dict,
+    Generator,
+    List,
+    Optional,
+    Tuple,
+    Type,
+)

 import attr

@@ -37,8 +46,13 @@ from synapse.api.errors import Codes, cs_error
 from synapse.http.server import finish_request, respond_with_json
 from synapse.http.site import SynapseRequest
 from synapse.logging.context import make_deferred_yieldable
+from synapse.util import Clock
 from synapse.util.stringutils import is_ascii

+if TYPE_CHECKING:
+    from synapse.storage.databases.main.media_repository import LocalMedia
+

 logger = logging.getLogger(__name__)

 # list all text content types that will have the charset default to UTF-8 when
@@ -260,6 +274,68 @@ def _can_encode_filename_as_token(x: str) -> bool:
     return True


+async def respond_with_multipart_responder(
+    clock: Clock,
+    request: SynapseRequest,
+    responder: "Optional[Responder]",
+    media_info: "LocalMedia",
+) -> None:
+    """
+    Responds to requests originating from the federation media `/download` endpoint by
+    streaming a multipart/mixed response
+
+    Args:
+        clock:
+        request: the federation request to respond to
+        responder: the responder which will send the response
+        media_info: metadata about the media item
+    """
+    if not responder:
+        respond_404(request)
+        return
+
+    # If we have a responder we *must* use it as a context manager.
+    with responder:
+        if request._disconnected:
+            logger.warning(
+                "Not sending response to request %s, already disconnected.", request
+            )
+            return
+
+        from synapse.media.media_storage import MultipartFileConsumer
+
+        # note that currently the json_object is just {}, this will change when linked media
+        # is implemented
+        multipart_consumer = MultipartFileConsumer(
+            clock, request, media_info.media_type, {}, media_info.media_length
+        )
+
+        logger.debug("Responding to media request with responder %s", responder)
+        if media_info.media_length is not None:
+            content_length = multipart_consumer.content_length()
+            assert content_length is not None
+            request.setHeader(b"Content-Length", b"%d" % (content_length,))
+
+        request.setHeader(
+            b"Content-Type",
+            b"multipart/mixed; boundary=%s" % multipart_consumer.boundary,
+        )
+
+        try:
+            await responder.write_to_consumer(multipart_consumer)
+        except Exception as e:
+            # The majority of the time this will be due to the client having gone
+            # away. Unfortunately, Twisted simply throws a generic exception at us
+            # in that case.
+            logger.warning("Failed to write to consumer: %s %s", type(e), e)
+
+            # Unregister the producer, if it has one, so Twisted doesn't complain
+            if request.producer:
+                request.unregisterProducer()
+
+        finish_request(request)
+
+
 async def respond_with_responder(
     request: SynapseRequest,
     responder: "Optional[Responder]",
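The multipart/mixed response that `MultipartFileConsumer` streams has a fixed shape: a JSON metadata part, then the file part, each introduced by a boundary line and a Content-Type header, with a closing boundary at the end. An illustrative, non-streaming sketch of that framing (field layout simplified; the real framing lives in the consumer's `write()`/`unregisterProducer()`):

```python
import json
from uuid import uuid4

CRLF = b"\r\n"


def build_multipart_body(json_object: dict, file_bytes: bytes, file_type: str) -> bytes:
    """Assemble a multipart/mixed body: JSON part first, then the file part."""
    boundary = uuid4().hex.encode("ascii")
    body = CRLF + b"--" + boundary + CRLF
    body += b"Content-Type: application/json" + CRLF + CRLF
    body += json.dumps(json_object).encode("utf-8") + CRLF
    body += b"--" + boundary + CRLF
    body += b"Content-Type: " + file_type.encode("utf-8") + CRLF + CRLF
    body += file_bytes
    # Closing boundary marks the end of the multipart message
    body += CRLF + b"--" + boundary + b"--" + CRLF
    return body
```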
@@ -54,6 +54,7 @@ from synapse.media._base import (
     ThumbnailInfo,
     get_filename_from_headers,
     respond_404,
+    respond_with_multipart_responder,
     respond_with_responder,
 )
 from synapse.media.filepath import MediaFilePaths

@@ -429,6 +430,7 @@ class MediaRepository:
         media_id: str,
         name: Optional[str],
         max_timeout_ms: int,
+        federation: bool = False,
     ) -> None:
         """Responds to requests for local media, if exists, or returns 404.

@@ -440,6 +442,7 @@ class MediaRepository:
                 the filename in the Content-Disposition header of the response.
             max_timeout_ms: the maximum number of milliseconds to wait for the
                 media to be uploaded.
+            federation: whether the local media being fetched is for a federation request

         Returns:
             Resolves once a response has successfully been written to request

@@ -460,9 +463,14 @@ class MediaRepository:
         file_info = FileInfo(None, media_id, url_cache=bool(url_cache))

         responder = await self.media_storage.fetch_media(file_info)
-        await respond_with_responder(
-            request, responder, media_type, media_length, upload_name
-        )
+        if federation:
+            await respond_with_multipart_responder(
+                self.clock, request, responder, media_info
+            )
+        else:
+            await respond_with_responder(
+                request, responder, media_type, media_length, upload_name
+            )

     async def get_remote_media(
         self,
@@ -19,9 +19,12 @@
 #
 #
 import contextlib
+import json
 import logging
 import os
 import shutil
+from contextlib import closing
+from io import BytesIO
 from types import TracebackType
 from typing import (
     IO,
@@ -30,24 +33,35 @@ from typing import (
     AsyncIterator,
     BinaryIO,
     Callable,
+    List,
     Optional,
     Sequence,
     Tuple,
     Type,
+    Union,
+    cast,
 )
+from uuid import uuid4

 import attr
+from zope.interface import implementer

+from twisted.internet import interfaces
 from twisted.internet.defer import Deferred
 from twisted.internet.interfaces import IConsumer
 from twisted.protocols.basic import FileSender

 from synapse.api.errors import NotFoundError
-from synapse.logging.context import defer_to_thread, make_deferred_yieldable
+from synapse.logging.context import (
+    defer_to_thread,
+    make_deferred_yieldable,
+    run_in_background,
+)
 from synapse.logging.opentracing import start_active_span, trace, trace_with_opname
 from synapse.util import Clock
 from synapse.util.file_consumer import BackgroundFileConsumer

+from ..types import JsonDict
 from ._base import FileInfo, Responder
 from .filepath import MediaFilePaths

@@ -57,6 +71,8 @@ if TYPE_CHECKING:

 logger = logging.getLogger(__name__)

+CRLF = b"\r\n"
+

 class MediaStorage:
     """Responsible for storing/fetching files from local sources.

@@ -174,7 +190,7 @@ class MediaStorage:
         and configured storage providers.

         Args:
-            file_info
+            file_info: Metadata about the media file

         Returns:
             Returns a Responder if the file was found, otherwise None.

@@ -316,7 +332,7 @@ class FileResponder(Responder):
     """Wraps an open file that can be sent to a request.

     Args:
-        open_file: A file like object to be streamed to the client,
+        open_file: A file like object to be streamed to the client,
             is closed when finished streaming.
     """

@@ -370,3 +386,240 @@ class ReadableFileWrapper:

         # We yield to the reactor by sleeping for 0 seconds.
         await self.clock.sleep(0)
+
+
+@implementer(interfaces.IConsumer)
+@implementer(interfaces.IPushProducer)
+class MultipartFileConsumer:
+    """Wraps a given consumer so that any data that gets written to it gets
+    converted to a multipart format.
+    """
+
+    def __init__(
+        self,
+        clock: Clock,
+        wrapped_consumer: interfaces.IConsumer,
+        file_content_type: str,
+        json_object: JsonDict,
+        content_length: Optional[int] = None,
+    ) -> None:
+        self.clock = clock
+        self.wrapped_consumer = wrapped_consumer
+        self.json_field = json_object
+        self.json_field_written = False
+        self.content_type_written = False
+        self.file_content_type = file_content_type
+        self.boundary = uuid4().hex.encode("ascii")
+
+        # The producer that registered with us, and if it's a push or pull
+        # producer.
+        self.producer: Optional["interfaces.IProducer"] = None
+        self.streaming: Optional[bool] = None
+
+        # Whether the wrapped consumer has asked us to pause.
+        self.paused = False
+
+        self.length = content_length
+
+    ### IConsumer APIs ###
+
+    def registerProducer(
+        self, producer: "interfaces.IProducer", streaming: bool
+    ) -> None:
+        """
+        Register to receive data from a producer.
+
+        This sets self to be a consumer for a producer. When this object runs
+        out of data (as when a send(2) call on a socket succeeds in moving the
+        last data from a userspace buffer into a kernelspace buffer), it will
+        ask the producer to resumeProducing().
+
+        For L{IPullProducer} providers, C{resumeProducing} will be called once
+        each time data is required.
+
+        For L{IPushProducer} providers, C{pauseProducing} will be called
+        whenever the write buffer fills up and C{resumeProducing} will only be
+        called when it empties. The consumer will only call C{resumeProducing}
+        to balance a previous C{pauseProducing} call; the producer is assumed
+        to start in an un-paused state.
+
+        @param streaming: C{True} if C{producer} provides L{IPushProducer},
+            C{False} if C{producer} provides L{IPullProducer}.
+
+        @raise RuntimeError: If a producer is already registered.
+        """
+        self.producer = producer
+        self.streaming = streaming
+
+        self.wrapped_consumer.registerProducer(self, True)
+
+        # kick off producing if `self.producer` is not a streaming producer
+        if not streaming:
+            self.resumeProducing()
+
+    def unregisterProducer(self) -> None:
+        """
+        Stop consuming data from a producer, without disconnecting.
+        """
+        self.wrapped_consumer.write(CRLF + b"--" + self.boundary + b"--" + CRLF)
+        self.wrapped_consumer.unregisterProducer()
+        self.paused = True
+
+    def write(self, data: bytes) -> None:
+        """
+        The producer will write data by calling this method.
+
+        The implementation must be non-blocking and perform whatever
+        buffering is necessary. If the producer has provided enough data
+        for now and it is a L{IPushProducer}, the consumer may call its
+        C{pauseProducing} method.
+        """
+        if not self.json_field_written:
+            self.wrapped_consumer.write(CRLF + b"--" + self.boundary + CRLF)
+
+            content_type = Header(b"Content-Type", b"application/json")
+            self.wrapped_consumer.write(bytes(content_type) + CRLF)
+
+            json_field = json.dumps(self.json_field)
+            json_bytes = json_field.encode("utf-8")
+            self.wrapped_consumer.write(CRLF + json_bytes)
+            self.wrapped_consumer.write(CRLF + b"--" + self.boundary + CRLF)
+
+            self.json_field_written = True
+
+        # if we haven't written the content type yet, do so
+        if not self.content_type_written:
+            type = self.file_content_type.encode("utf-8")
+            content_type = Header(b"Content-Type", type)
+            self.wrapped_consumer.write(bytes(content_type) + CRLF + CRLF)
+            self.content_type_written = True
+
+        self.wrapped_consumer.write(data)
+
+    ### IPushProducer APIs ###
+
+    def stopProducing(self) -> None:
+        """
+        Stop producing data.
+
+        This tells a producer that its consumer has died, so it must stop
+        producing data for good.
+        """
+        assert self.producer is not None
+
+        self.paused = True
+        self.producer.stopProducing()
+
+    def pauseProducing(self) -> None:
+        """
+        Pause producing data.
+
+        Tells a producer that it has produced too much data to process for
+        the time being, and to stop until C{resumeProducing()} is called.
+        """
+        assert self.producer is not None
+
+        self.paused = True
+
+        if self.streaming:
+            cast("interfaces.IPushProducer", self.producer).pauseProducing()
+        else:
+            self.paused = True
+
+    def resumeProducing(self) -> None:
+        """
+        Resume producing data.
+
+        This tells a producer to re-add itself to the main loop and produce
+        more data for its consumer.
+        """
+        assert self.producer is not None
+
+        if self.streaming:
+            cast("interfaces.IPushProducer", self.producer).resumeProducing()
+        else:
+            # If the producer is not a streaming producer we need to start
+            # repeatedly calling `resumeProducing` in a loop.
+            run_in_background(self._resumeProducingRepeatedly)
+
+    def content_length(self) -> Optional[int]:
+        """
+        Calculate the content length of the multipart response
+        in bytes.
+        """
+        if not self.length:
+            return None
+        # calculate length of json field and content-type header
+        json_field = json.dumps(self.json_field)
+        json_bytes = json_field.encode("utf-8")
+        json_length = len(json_bytes)
+
+        type = self.file_content_type.encode("utf-8")
+        content_type = Header(b"Content-Type", type)
+        type_length = len(bytes(content_type))
+
+        # 154 is the length of the elements that aren't variable, ie
+        # CRLFs and boundary strings, etc
+        self.length += json_length + type_length + 154
|
||||||
|
|
||||||
|
return self.length
|
||||||
|
|
||||||
|
### Internal APIs. ###
|
||||||
|
|
||||||
|
async def _resumeProducingRepeatedly(self) -> None:
|
||||||
|
assert self.producer is not None
|
||||||
|
assert not self.streaming
|
||||||
|
|
||||||
|
producer = cast("interfaces.IPullProducer", self.producer)
|
||||||
|
|
||||||
|
self.paused = False
|
||||||
|
while not self.paused:
|
||||||
|
producer.resumeProducing()
|
||||||
|
await self.clock.sleep(0)
|
||||||
|
|
||||||
|
|
||||||
|
class Header:
|
||||||
|
"""
|
||||||
|
`Header` This class is a tiny wrapper that produces
|
||||||
|
request headers. We can't use standard python header
|
||||||
|
class because it encodes unicode fields using =? bla bla ?=
|
||||||
|
encoding, which is correct, but no one in HTTP world expects
|
||||||
|
that, everyone wants utf-8 raw bytes. (stolen from treq.multipart)
|
||||||
|
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(
|
||||||
|
self,
|
||||||
|
name: bytes,
|
||||||
|
value: Any,
|
||||||
|
params: Optional[List[Tuple[Any, Any]]] = None,
|
||||||
|
):
|
||||||
|
self.name = name
|
||||||
|
self.value = value
|
||||||
|
self.params = params or []
|
||||||
|
|
||||||
|
def add_param(self, name: Any, value: Any) -> None:
|
||||||
|
self.params.append((name, value))
|
||||||
|
|
||||||
|
def __bytes__(self) -> bytes:
|
||||||
|
with closing(BytesIO()) as h:
|
||||||
|
h.write(self.name + b": " + escape(self.value).encode("us-ascii"))
|
||||||
|
if self.params:
|
||||||
|
for name, val in self.params:
|
||||||
|
h.write(b"; ")
|
||||||
|
h.write(escape(name).encode("us-ascii"))
|
||||||
|
h.write(b"=")
|
||||||
|
h.write(b'"' + escape(val).encode("utf-8") + b'"')
|
||||||
|
h.seek(0)
|
||||||
|
return h.read()
|
||||||
|
|
||||||
|
|
||||||
|
def escape(value: Union[str, bytes]) -> str:
|
||||||
|
"""
|
||||||
|
This function prevents header values from corrupting the request,
|
||||||
|
a newline in the file name parameter makes form-data request unreadable
|
||||||
|
for a majority of parsers. (stolen from treq.multipart)
|
||||||
|
"""
|
||||||
|
if isinstance(value, bytes):
|
||||||
|
value = value.decode("utf-8")
|
||||||
|
return value.replace("\r", "").replace("\n", "").replace('"', '\\"')
|
||||||
|
|
|
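The `escape` helper above is the writer's whole defence against header injection: without it, a crafted filename could terminate a multipart header early. A minimal standalone sketch of the same sanitisation (a reimplementation for illustration, not the module itself):

```python
from typing import Union


def escape(value: Union[str, bytes]) -> str:
    # Strip CR/LF so an embedded newline can't break the multipart
    # framing, and backslash-escape quotes so the quoted parameter
    # value stays intact.
    if isinstance(value, bytes):
        value = value.decode("utf-8")
    return value.replace("\r", "").replace("\n", "").replace('"', '\\"')


# A malicious filename loses its newline and its bare quote:
assert escape('evil"\r\nname') == 'evil\\"name'
```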
@@ -28,7 +28,7 @@ import jinja2
 from markupsafe import Markup
 from prometheus_client import Counter
 
-from synapse.api.constants import EventTypes, Membership, RoomTypes
+from synapse.api.constants import EventContentFields, EventTypes, Membership, RoomTypes
 from synapse.api.errors import StoreError
 from synapse.config.emailconfig import EmailSubjectConfig
 from synapse.events import EventBase
@@ -716,7 +716,8 @@ class Mailer:
             )
             if (
                 create_event
-                and create_event.content.get("room_type") == RoomTypes.SPACE
+                and create_event.content.get(EventContentFields.ROOM_TYPE)
+                == RoomTypes.SPACE
             ):
                 return self.email_subjects.invite_from_person_to_space % {
                     "person": inviter_name,
@@ -114,13 +114,19 @@ class ReplicationDataHandler:
         """
        all_room_ids: Set[str] = set()
        if stream_name == DeviceListsStream.NAME:
-            if any(row.entity.startswith("@") and not row.is_signature for row in rows):
+            if any(not row.is_signature and not row.hosts_calculated for row in rows):
                prev_token = self.store.get_device_stream_token()
                all_room_ids = await self.store.get_all_device_list_changes(
                    prev_token, token
                )
                self.store.device_lists_in_rooms_have_changed(all_room_ids, token)
 
+            # If we're sending federation we need to update the device lists
+            # outbound pokes stream change cache with updated hosts.
+            if self.send_handler and any(row.hosts_calculated for row in rows):
+                hosts = await self.store.get_destinations_for_device(token)
+                self.store.device_lists_outbound_pokes_have_changed(hosts, token)
+
        self.store.process_replication_rows(stream_name, instance_name, token, rows)
        # NOTE: this must be called after process_replication_rows to ensure any
        # cache invalidations are first handled before any stream ID advances.
@@ -433,12 +439,11 @@ class FederationSenderHandler:
            # The entities are either user IDs (starting with '@') whose devices
            # have changed, or remote servers that we need to tell about
            # changes.
-            hosts = {
-                row.entity
-                for row in rows
-                if not row.entity.startswith("@") and not row.is_signature
-            }
-            await self.federation_sender.send_device_messages(hosts, immediate=False)
+            if any(row.hosts_calculated for row in rows):
+                hosts = await self.store.get_destinations_for_device(token)
+                await self.federation_sender.send_device_messages(
+                    hosts, immediate=False
+                )
 
        elif stream_name == ToDeviceStream.NAME:
            # The to_device stream includes stuff to be pushed to both local
@@ -549,10 +549,14 @@ class DeviceListsStream(_StreamFromIdGen):
 
     @attr.s(slots=True, frozen=True, auto_attribs=True)
     class DeviceListsStreamRow:
-        entity: str
+        user_id: str
         # Indicates that a user has signed their own device with their user-signing key
         is_signature: bool
 
+        # Indicates if this is a notification that we've calculated the hosts we
+        # need to send the update to.
+        hosts_calculated: bool
+
     NAME = "device_lists"
     ROW_TYPE = DeviceListsStreamRow
 
@@ -594,13 +598,13 @@ class DeviceListsStream(_StreamFromIdGen):
         upper_limit_token = min(upper_limit_token, signatures_to_token)
 
         device_updates = [
-            (stream_id, (entity, False))
-            for stream_id, (entity,) in device_updates
+            (stream_id, (entity, False, hosts))
+            for stream_id, (entity, hosts) in device_updates
             if stream_id <= upper_limit_token
         ]
 
         signatures_updates = [
-            (stream_id, (entity, True))
+            (stream_id, (entity, True, False))
             for stream_id, (entity,) in signatures_updates
             if stream_id <= upper_limit_token
         ]
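The reshaped replication row can be illustrated with a stdlib `dataclass` standing in for the `attr.s` decorator (a sketch of the row's shape, not Synapse's actual class):

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen mirrors attr.s(frozen=True): rows are immutable
class DeviceListsStreamRow:
    user_id: str            # renamed from `entity`: always a user ID now
    is_signature: bool      # user signed their own device with their user-signing key
    hosts_calculated: bool  # destination hosts for this update have been computed


row = DeviceListsStreamRow("@alice:example.org", False, True)
```

Consumers of the stream now branch on the explicit `hosts_calculated` flag instead of sniffing whether the old `entity` string started with `@`.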
@@ -101,6 +101,7 @@ from synapse.rest.admin.users import (
     ResetPasswordRestServlet,
     SearchUsersRestServlet,
     ShadowBanRestServlet,
+    SuspendAccountRestServlet,
     UserAdminServlet,
     UserByExternalId,
     UserByThreePid,
@@ -327,6 +328,8 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     BackgroundUpdateRestServlet(hs).register(http_server)
     BackgroundUpdateStartJobRestServlet(hs).register(http_server)
     ExperimentalFeaturesRestServlet(hs).register(http_server)
+    if hs.config.experimental.msc3823_account_suspension:
+        SuspendAccountRestServlet(hs).register(http_server)
 
 
 def register_servlets_for_client_rest_resource(
@@ -61,8 +61,8 @@ class ListDestinationsRestServlet(RestServlet):
     async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         await assert_requester_is_admin(self._auth, request)
 
-        start = parse_integer(request, "from", default=0, negative=False)
-        limit = parse_integer(request, "limit", default=100, negative=False)
+        start = parse_integer(request, "from", default=0)
+        limit = parse_integer(request, "limit", default=100)
 
         destination = parse_string(request, "destination")
 
@@ -181,8 +181,8 @@ class DestinationMembershipRestServlet(RestServlet):
         if not await self._store.is_destination_known(destination):
             raise NotFoundError("Unknown destination")
 
-        start = parse_integer(request, "from", default=0, negative=False)
-        limit = parse_integer(request, "limit", default=100, negative=False)
+        start = parse_integer(request, "from", default=0)
+        limit = parse_integer(request, "limit", default=100)
 
         direction = parse_enum(request, "dir", Direction, default=Direction.FORWARDS)
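The `negative=False` arguments disappear across these servlets because rejecting negative values became the helper's default behaviour in this change. A simplified sketch of that contract (a hypothetical reimplementation for illustration, not the real `synapse.http.servlet.parse_integer`):

```python
from typing import Optional


def parse_integer(
    args: dict, name: str, default: Optional[int] = None,
    required: bool = False, negative: bool = False,
) -> Optional[int]:
    # Negative values are now rejected unless the caller explicitly
    # opts in with negative=True.
    raw = args.get(name)
    if raw is None:
        if required:
            raise ValueError(f"Missing required query parameter {name}")
        return default
    value = int(raw)
    if not negative and value < 0:
        raise ValueError(f"Query parameter {name} must be non-negative")
    return value
```

With that default, call sites like `parse_integer(request, "from", default=0)` get the non-negative check for free.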
@@ -311,8 +311,8 @@ class DeleteMediaByDateSize(RestServlet):
     ) -> Tuple[int, JsonDict]:
         await assert_requester_is_admin(self.auth, request)
 
-        before_ts = parse_integer(request, "before_ts", required=True, negative=False)
-        size_gt = parse_integer(request, "size_gt", default=0, negative=False)
+        before_ts = parse_integer(request, "before_ts", required=True)
+        size_gt = parse_integer(request, "size_gt", default=0)
         keep_profiles = parse_boolean(request, "keep_profiles", default=True)
 
         if before_ts < 30000000000:  # Dec 1970 in milliseconds, Aug 2920 in seconds
@@ -377,8 +377,8 @@ class UserMediaRestServlet(RestServlet):
         if user is None:
             raise NotFoundError("Unknown user")
 
-        start = parse_integer(request, "from", default=0, negative=False)
-        limit = parse_integer(request, "limit", default=100, negative=False)
+        start = parse_integer(request, "from", default=0)
+        limit = parse_integer(request, "limit", default=100)
 
         # If neither `order_by` nor `dir` is set, set the default order
         # to newest media is on top for backward compatibility.
@@ -421,8 +421,8 @@ class UserMediaRestServlet(RestServlet):
         if user is None:
             raise NotFoundError("Unknown user")
 
-        start = parse_integer(request, "from", default=0, negative=False)
-        limit = parse_integer(request, "limit", default=100, negative=False)
+        start = parse_integer(request, "from", default=0)
+        limit = parse_integer(request, "limit", default=100)
 
         # If neither `order_by` nor `dir` is set, set the default order
         # to newest media is on top for backward compatibility.
@@ -63,10 +63,10 @@ class UserMediaStatisticsRestServlet(RestServlet):
             ),
         )
 
-        start = parse_integer(request, "from", default=0, negative=False)
-        limit = parse_integer(request, "limit", default=100, negative=False)
-        from_ts = parse_integer(request, "from_ts", default=0, negative=False)
-        until_ts = parse_integer(request, "until_ts", negative=False)
+        start = parse_integer(request, "from", default=0)
+        limit = parse_integer(request, "limit", default=100)
+        from_ts = parse_integer(request, "from_ts", default=0)
+        until_ts = parse_integer(request, "until_ts")
 
         if until_ts is not None:
             if until_ts <= from_ts:
@@ -27,11 +27,13 @@ from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union
 
 import attr
 
+from synapse._pydantic_compat import HAS_PYDANTIC_V2
 from synapse.api.constants import Direction, UserTypes
 from synapse.api.errors import Codes, NotFoundError, SynapseError
 from synapse.http.servlet import (
     RestServlet,
     assert_params_in_dict,
+    parse_and_validate_json_object_from_request,
     parse_boolean,
     parse_enum,
     parse_integer,
@@ -49,10 +51,17 @@ from synapse.rest.client._base import client_patterns
 from synapse.storage.databases.main.registration import ExternalIDReuseException
 from synapse.storage.databases.main.stats import UserSortOrder
 from synapse.types import JsonDict, JsonMapping, UserID
+from synapse.types.rest import RequestBodyModel
 
 if TYPE_CHECKING:
     from synapse.server import HomeServer
 
+if TYPE_CHECKING or HAS_PYDANTIC_V2:
+    from pydantic.v1 import StrictBool
+else:
+    from pydantic import StrictBool
+
 
 logger = logging.getLogger(__name__)
 
 
@@ -90,8 +99,8 @@ class UsersRestServletV2(RestServlet):
     async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         await assert_requester_is_admin(self.auth, request)
 
-        start = parse_integer(request, "from", default=0, negative=False)
-        limit = parse_integer(request, "limit", default=100, negative=False)
+        start = parse_integer(request, "from", default=0)
+        limit = parse_integer(request, "limit", default=100)
 
         user_id = parse_string(request, "user_id")
         name = parse_string(request, "name", encoding="utf-8")
@@ -732,6 +741,36 @@ class DeactivateAccountRestServlet(RestServlet):
         return HTTPStatus.OK, {"id_server_unbind_result": id_server_unbind_result}
 
 
+class SuspendAccountRestServlet(RestServlet):
+    PATTERNS = admin_patterns("/suspend/(?P<target_user_id>[^/]*)$")
+
+    def __init__(self, hs: "HomeServer"):
+        self.auth = hs.get_auth()
+        self.is_mine = hs.is_mine
+        self.store = hs.get_datastores().main
+
+    class PutBody(RequestBodyModel):
+        suspend: StrictBool
+
+    async def on_PUT(
+        self, request: SynapseRequest, target_user_id: str
+    ) -> Tuple[int, JsonDict]:
+        requester = await self.auth.get_user_by_req(request)
+        await assert_user_is_admin(self.auth, requester)
+
+        if not self.is_mine(UserID.from_string(target_user_id)):
+            raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only suspend local users")
+
+        if not await self.store.get_user_by_id(target_user_id):
+            raise NotFoundError("User not found")
+
+        body = parse_and_validate_json_object_from_request(request, self.PutBody)
+        suspend = body.suspend
+        await self.store.set_user_suspended_status(target_user_id, suspend)
+
+        return HTTPStatus.OK, {f"user_{target_user_id}_suspended": suspend}
+
+
 class AccountValidityRenewServlet(RestServlet):
     PATTERNS = admin_patterns("/account_validity/validity$")
@@ -108,6 +108,19 @@ class ProfileDisplaynameRestServlet(RestServlet):
 
         propagate = _read_propagate(self.hs, request)
 
+        requester_suspended = (
+            await self.hs.get_datastores().main.get_user_suspended_status(
+                requester.user.to_string()
+            )
+        )
+
+        if requester_suspended:
+            raise SynapseError(
+                403,
+                "Updating displayname while account is suspended is not allowed.",
+                Codes.USER_ACCOUNT_SUSPENDED,
+            )
+
         await self.profile_handler.set_displayname(
             user, requester, new_name, is_admin, propagate=propagate
         )
@@ -167,6 +180,19 @@ class ProfileAvatarURLRestServlet(RestServlet):
 
         propagate = _read_propagate(self.hs, request)
 
+        requester_suspended = (
+            await self.hs.get_datastores().main.get_user_suspended_status(
+                requester.user.to_string()
+            )
+        )
+
+        if requester_suspended:
+            raise SynapseError(
+                403,
+                "Updating avatar URL while account is suspended is not allowed.",
+                Codes.USER_ACCOUNT_SUSPENDED,
+            )
+
         await self.profile_handler.set_avatar_url(
             user, requester, new_avatar_url, is_admin, propagate=propagate
         )
@@ -510,7 +510,7 @@ class PublicRoomListRestServlet(RestServlet):
             if server:
                 raise e
 
-        limit: Optional[int] = parse_integer(request, "limit", 0, negative=False)
+        limit: Optional[int] = parse_integer(request, "limit", 0)
         since_token = parse_string(request, "since")
 
         if limit == 0:
@@ -1120,6 +1120,20 @@ class RoomRedactEventRestServlet(TransactionRestServlet):
     ) -> Tuple[int, JsonDict]:
         content = parse_json_object_from_request(request)
 
+        requester_suspended = await self._store.get_user_suspended_status(
+            requester.user.to_string()
+        )
+
+        if requester_suspended:
+            event = await self._store.get_event(event_id, allow_none=True)
+            if event:
+                if event.sender != requester.user.to_string():
+                    raise SynapseError(
+                        403,
+                        "You can only redact your own events while account is suspended.",
+                        Codes.USER_ACCOUNT_SUSPENDED,
+                    )
+
         # Ensure the redacts property in the content matches the one provided in
         # the URL.
         room_version = await self._store.get_room_version(room_id)
@@ -1430,16 +1444,7 @@ class RoomHierarchyRestServlet(RestServlet):
         requester = await self._auth.get_user_by_req(request, allow_guest=True)
 
         max_depth = parse_integer(request, "max_depth")
-        if max_depth is not None and max_depth < 0:
-            raise SynapseError(
-                400, "'max_depth' must be a non-negative integer", Codes.BAD_JSON
-            )
-
         limit = parse_integer(request, "limit")
-        if limit is not None and limit <= 0:
-            raise SynapseError(
-                400, "'limit' must be a positive integer", Codes.BAD_JSON
-            )
 
         return 200, await self._room_summary_handler.get_room_hierarchy(
             requester,
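The redaction guard added above reduces to a small predicate: a suspended account may only redact its own events. In isolation (a hypothetical helper for illustration, not part of the servlet):

```python
def may_redact(requester_id: str, event_sender: str, suspended: bool) -> bool:
    # A suspended account may only redact events it sent itself;
    # unsuspended accounts fall through to the normal auth checks.
    if not suspended:
        return True
    return event_sender == requester_id


assert may_redact("@a:example.org", "@b:example.org", suspended=False)
assert may_redact("@a:example.org", "@a:example.org", suspended=True)
assert not may_redact("@a:example.org", "@b:example.org", suspended=True)
```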
@@ -973,31 +973,13 @@ class SlidingSyncRestServlet(RestServlet):
             requester=requester,
         )
 
-        serialized_rooms = {}
+        serialized_rooms: Dict[str, JsonDict] = {}
         for room_id, room_result in rooms.items():
-            serialized_timeline = await self.event_serializer.serialize_events(
-                room_result.timeline_events,
-                time_now,
-                config=serialize_options,
-                bundle_aggregations=room_result.bundled_aggregations,
-            )
-
-            serialized_required_state = await self.event_serializer.serialize_events(
-                room_result.required_state,
-                time_now,
-                config=serialize_options,
-            )
-
             serialized_rooms[room_id] = {
-                "required_state": serialized_required_state,
-                "timeline": serialized_timeline,
-                "prev_batch": await room_result.prev_batch.to_string(self.store),
-                "limited": room_result.limited,
                 "joined_count": room_result.joined_count,
                 "invited_count": room_result.invited_count,
                 "notification_count": room_result.notification_count,
                 "highlight_count": room_result.highlight_count,
-                "num_live": room_result.num_live,
             }
 
             if room_result.name:
@@ -1014,12 +996,47 @@ class SlidingSyncRestServlet(RestServlet):
             if room_result.initial:
                 serialized_rooms[room_id]["initial"] = room_result.initial
 
+            # This will be omitted for invite/knock rooms with `stripped_state`
+            if room_result.required_state is not None:
+                serialized_required_state = (
+                    await self.event_serializer.serialize_events(
+                        room_result.required_state,
+                        time_now,
+                        config=serialize_options,
+                    )
+                )
+                serialized_rooms[room_id]["required_state"] = serialized_required_state
+
+            # This will be omitted for invite/knock rooms with `stripped_state`
+            if room_result.timeline_events is not None:
+                serialized_timeline = await self.event_serializer.serialize_events(
+                    room_result.timeline_events,
+                    time_now,
+                    config=serialize_options,
+                    bundle_aggregations=room_result.bundled_aggregations,
+                )
+                serialized_rooms[room_id]["timeline"] = serialized_timeline
+
+            # This will be omitted for invite/knock rooms with `stripped_state`
+            if room_result.limited is not None:
+                serialized_rooms[room_id]["limited"] = room_result.limited
+
+            # This will be omitted for invite/knock rooms with `stripped_state`
+            if room_result.prev_batch is not None:
+                serialized_rooms[room_id]["prev_batch"] = (
+                    await room_result.prev_batch.to_string(self.store)
+                )
+
+            # This will be omitted for invite/knock rooms with `stripped_state`
+            if room_result.num_live is not None:
+                serialized_rooms[room_id]["num_live"] = room_result.num_live
+
             # Field should be absent on non-DM rooms
             if room_result.is_dm:
                 serialized_rooms[room_id]["is_dm"] = room_result.is_dm
 
             # Stripped state only applies to invite/knock rooms
-            if room_result.stripped_state:
+            if room_result.stripped_state is not None:
                 # TODO: `knocked_state` but that isn't specced yet.
                 #
                 # TODO: Instead of adding `knocked_state`, it would be good to rename
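The serializer change above turns several always-present room fields into optional ones keyed on `None`, so invite/knock rooms that only carry `stripped_state` omit them entirely instead of serializing nulls. The pattern in isolation (a hypothetical `build_room_payload` with simplified fields, not the servlet's code):

```python
from typing import Optional


def build_room_payload(
    joined_count: int,
    invited_count: int,
    limited: Optional[bool] = None,
    num_live: Optional[int] = None,
) -> dict:
    # Counts are always present; fields left as None (e.g. for
    # invite/knock rooms) are omitted rather than emitted as null.
    payload = {"joined_count": joined_count, "invited_count": invited_count}
    if limited is not None:
        payload["limited"] = limited
    if num_live is not None:
        payload["num_live"] = num_live
    return payload


assert build_room_payload(3, 1) == {"joined_count": 3, "invited_count": 1}
```

Note the `is not None` checks: falsy-but-present values such as `limited=False` or `num_live=0` still make it into the payload.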
@@ -617,6 +617,17 @@ class EventsPersistenceStorageController:
                     room_id, chunk
                 )
 
+                with Measure(self._clock, "calculate_chain_cover_index_for_events"):
+                    # We now calculate chain ID/sequence numbers for any state events we're
+                    # persisting. We ignore out of band memberships as we're not in the room
+                    # and won't have their auth chain (we'll fix it up later if we join the
+                    # room).
+                    #
+                    # See: docs/auth_chain_difference_algorithm.md
+                    new_event_links = await self.persist_events_store.calculate_chain_cover_index_for_events(
+                        room_id, [e for e, _ in chunk]
+                    )
+
                 await self.persist_events_store._persist_events_and_state_updates(
                     room_id,
                     chunk,
@@ -624,6 +635,7 @@ class EventsPersistenceStorageController:
                     new_forward_extremities=new_forward_extremities,
                     use_negative_stream_ordering=backfilled,
                     inhibit_local_membership_updates=backfilled,
+                    new_event_links=new_event_links,
                 )
 
                 return replaced_events
@@ -825,14 +825,13 @@ class DeviceInboxWorkerStore(SQLBaseStore):
             # Check if we've already inserted a matching message_id for that
             # origin. This can happen if the origin doesn't receive our
             # acknowledgement from the first time we received the message.
-            already_inserted = self.db_pool.simple_select_one_txn(
+            already_inserted = self.db_pool.simple_select_list_txn(
                 txn,
                 table="device_federation_inbox",
                 keyvalues={"origin": origin, "message_id": message_id},
                 retcols=("message_id",),
-                allow_none=True,
             )
-            if already_inserted is not None:
+            if already_inserted:
                 return
 
             # Add an entry for this message_id so that we know we've processed
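The check above gives at-most-once processing of federated to-device messages: if the origin never saw our acknowledgement and re-sends, the `(origin, message_id)` pair is already recorded and the redelivery is dropped. The dedup logic in miniature (a hypothetical in-memory stand-in for the `device_federation_inbox` table):

```python
from typing import Callable, Set, Tuple


def deliver_once(
    seen: Set[Tuple[str, str]], origin: str, message_id: str,
    handler: Callable[[], None],
) -> bool:
    # Process the message only if this (origin, message_id) pair
    # hasn't been recorded yet; otherwise drop the redelivery.
    key = (origin, message_id)
    if key in seen:
        return False
    seen.add(key)
    handler()
    return True
```

A retried delivery with the same message ID becomes a no-op, so the handler's side effects run exactly once.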
@@ -164,22 +164,24 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
             prefilled_cache=user_signature_stream_prefill,
         )

-        (
-            device_list_federation_prefill,
-            device_list_federation_list_id,
-        ) = self.db_pool.get_cache_dict(
-            db_conn,
-            "device_lists_outbound_pokes",
-            entity_column="destination",
-            stream_column="stream_id",
-            max_value=device_list_max,
-            limit=10000,
-        )
-        self._device_list_federation_stream_cache = StreamChangeCache(
-            "DeviceListFederationStreamChangeCache",
-            device_list_federation_list_id,
-            prefilled_cache=device_list_federation_prefill,
-        )
+        self._device_list_federation_stream_cache = None
+        if hs.should_send_federation():
+            (
+                device_list_federation_prefill,
+                device_list_federation_list_id,
+            ) = self.db_pool.get_cache_dict(
+                db_conn,
+                "device_lists_outbound_pokes",
+                entity_column="destination",
+                stream_column="stream_id",
+                max_value=device_list_max,
+                limit=10000,
+            )
+            self._device_list_federation_stream_cache = StreamChangeCache(
+                "DeviceListFederationStreamChangeCache",
+                device_list_federation_list_id,
+                prefilled_cache=device_list_federation_prefill,
+            )

         if hs.config.worker.run_background_tasks:
             self._clock.looping_call(
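The constructor change above makes the federation stream-change cache optional: it is only built on workers that actually send federation, and callers must cope with it being `None`. A minimal sketch of the cache contract the calling code relies on (simplified and unbounded, so it never has to answer "unknown"; not Synapse's actual `StreamChangeCache` implementation):

```python
from typing import Dict

class SimpleStreamChangeCache:
    """Tiny model of the StreamChangeCache contract: remember the last
    stream position at which each entity changed, and answer "has this
    entity changed since position X?"."""

    def __init__(self) -> None:
        self._last_change: Dict[str, int] = {}

    def entity_has_changed(self, entity: str, stream_pos: int) -> None:
        # Record the change, keeping the highest position seen so far.
        prev = self._last_change.get(entity, 0)
        self._last_change[entity] = max(prev, stream_pos)

    def has_entity_changed(self, entity: str, stream_pos: int) -> bool:
        # Changed strictly after the given position?
        return self._last_change.get(entity, 0) > stream_pos

cache = SimpleStreamChangeCache()
cache.entity_has_changed("remote.example.com", 5)
assert cache.has_entity_changed("remote.example.com", 4)
assert not cache.has_entity_changed("remote.example.com", 5)
```

The real cache also bounds its size and tracks the earliest position it knows about, so it can report "possibly changed" for entities it has evicted.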
@@ -207,23 +209,30 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
     ) -> None:
         for row in rows:
             if row.is_signature:
-                self._user_signature_stream_cache.entity_has_changed(row.entity, token)
+                self._user_signature_stream_cache.entity_has_changed(row.user_id, token)
                 continue

             # The entities are either user IDs (starting with '@') whose devices
             # have changed, or remote servers that we need to tell about
             # changes.
-            if row.entity.startswith("@"):
-                self._device_list_stream_cache.entity_has_changed(row.entity, token)
-                self.get_cached_devices_for_user.invalidate((row.entity,))
-                self._get_cached_user_device.invalidate((row.entity,))
-                self.get_device_list_last_stream_id_for_remote.invalidate((row.entity,))
-            else:
-                self._device_list_federation_stream_cache.entity_has_changed(
-                    row.entity, token
+            if not row.hosts_calculated:
+                self._device_list_stream_cache.entity_has_changed(row.user_id, token)
+                self.get_cached_devices_for_user.invalidate((row.user_id,))
+                self._get_cached_user_device.invalidate((row.user_id,))
+                self.get_device_list_last_stream_id_for_remote.invalidate(
+                    (row.user_id,)
                 )

+    def device_lists_outbound_pokes_have_changed(
+        self, destinations: StrCollection, token: int
+    ) -> None:
+        assert self._device_list_federation_stream_cache is not None
+
+        for destination in destinations:
+            self._device_list_federation_stream_cache.entity_has_changed(
+                destination, token
+            )
+
     def device_lists_in_rooms_have_changed(
         self, room_ids: StrCollection, token: int
     ) -> None:
@@ -363,6 +372,11 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
             EDU contents.
         """
         now_stream_id = self.get_device_stream_token()
+        if from_stream_id == now_stream_id:
+            return now_stream_id, []
+
+        if self._device_list_federation_stream_cache is None:
+            raise Exception("Func can only be used on federation senders")

         has_changed = self._device_list_federation_stream_cache.has_entity_changed(
             destination, int(from_stream_id)
@@ -1018,10 +1032,10 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
         # This query Does The Right Thing where it'll correctly apply the
         # bounds to the inner queries.
         sql = """
-            SELECT stream_id, entity FROM (
-                SELECT stream_id, user_id AS entity FROM device_lists_stream
+            SELECT stream_id, user_id, hosts FROM (
+                SELECT stream_id, user_id, false AS hosts FROM device_lists_stream
                 UNION ALL
-                SELECT stream_id, destination AS entity FROM device_lists_outbound_pokes
+                SELECT DISTINCT stream_id, user_id, true AS hosts FROM device_lists_outbound_pokes
             ) AS e
             WHERE ? < stream_id AND stream_id <= ?
             ORDER BY stream_id ASC
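The rewritten query above stops overloading a single `entity` column (user ID or destination) and instead tags each row with a `hosts` flag, deduplicating the per-destination poke rows with `DISTINCT`. A toy sqlite version of the same shape, with hypothetical data and the booleans spelled `0`/`1` for portability:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE device_lists_stream (stream_id INTEGER, user_id TEXT);
    CREATE TABLE device_lists_outbound_pokes (stream_id INTEGER, user_id TEXT);
    INSERT INTO device_lists_stream VALUES (1, '@alice:test');
    -- Two pokes for the same change, e.g. one per destination server.
    INSERT INTO device_lists_outbound_pokes VALUES (2, '@alice:test');
    INSERT INTO device_lists_outbound_pokes VALUES (2, '@alice:test');
    """
)
sql = """
    SELECT stream_id, user_id, hosts FROM (
        SELECT stream_id, user_id, 0 AS hosts FROM device_lists_stream
        UNION ALL
        SELECT DISTINCT stream_id, user_id, 1 AS hosts
        FROM device_lists_outbound_pokes
    ) AS e
    WHERE ? < stream_id AND stream_id <= ?
    ORDER BY stream_id ASC
"""
rows = conn.execute(sql, (0, 10)).fetchall()
# The duplicate poke rows collapse to one row flagged hosts=1.
assert rows == [(1, "@alice:test", 0), (2, "@alice:test", 1)]
```
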
@@ -1577,6 +1591,14 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
             get_device_list_changes_in_room_txn,
         )

+    async def get_destinations_for_device(self, stream_id: int) -> StrCollection:
+        return await self.db_pool.simple_select_onecol(
+            table="device_lists_outbound_pokes",
+            keyvalues={"stream_id": stream_id},
+            retcol="destination",
+            desc="get_destinations_for_device",
+        )
+

 class DeviceBackgroundUpdateStore(SQLBaseStore):
     def __init__(
@@ -2109,18 +2131,18 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
         user_id: str,
         device_id: str,
         hosts: Collection[str],
-        stream_ids: List[int],
+        stream_id: int,
         context: Optional[Dict[str, str]],
     ) -> None:
-        for host in hosts:
-            txn.call_after(
-                self._device_list_federation_stream_cache.entity_has_changed,
-                host,
-                stream_ids[-1],
-            )
+        if self._device_list_federation_stream_cache:
+            for host in hosts:
+                txn.call_after(
+                    self._device_list_federation_stream_cache.entity_has_changed,
+                    host,
+                    stream_id,
+                )

         now = self._clock.time_msec()
-        stream_id_iterator = iter(stream_ids)

         encoded_context = json_encoder.encode(context)
         mark_sent = not self.hs.is_mine_id(user_id)
@@ -2129,7 +2151,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
                 (
                     destination,
                     self._instance_name,
-                    next(stream_id_iterator),
+                    stream_id,
                     user_id,
                     device_id,
                     mark_sent,
@@ -2314,22 +2336,22 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
             return

        def add_device_list_outbound_pokes_txn(
-            txn: LoggingTransaction, stream_ids: List[int]
+            txn: LoggingTransaction, stream_id: int
        ) -> None:
            self._add_device_outbound_poke_to_stream_txn(
                txn,
                user_id=user_id,
                device_id=device_id,
                hosts=hosts,
-                stream_ids=stream_ids,
+                stream_id=stream_id,
                context=context,
            )

-        async with self._device_list_id_gen.get_next_mult(len(hosts)) as stream_ids:
+        async with self._device_list_id_gen.get_next() as stream_id:
            return await self.db_pool.runInteraction(
                "add_device_list_outbound_pokes",
                add_device_list_outbound_pokes_txn,
-                stream_ids,
+                stream_id,
            )

    async def add_remote_device_list_to_pending(
@@ -123,9 +123,9 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
         if stream_name == DeviceListsStream.NAME:
             for row in rows:
                 assert isinstance(row, DeviceListsStream.DeviceListsStreamRow)
-                if row.entity.startswith("@"):
+                if not row.hosts_calculated:
                     self._get_e2e_device_keys_for_federation_query_inner.invalidate(
-                        (row.entity,)
+                        (row.user_id,)
                     )

         super().process_replication_rows(stream_name, instance_name, token, rows)
@@ -148,6 +148,10 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
             500000, "_event_auth_cache", size_callback=len
         )

+        # Flag used by unit tests to disable fallback when there is no chain cover
+        # index.
+        self.tests_allow_no_chain_cover_index = True
+
         self._clock.looping_call(self._get_stats_for_federation_staging, 30 * 1000)

         if isinstance(self.database_engine, PostgresEngine):
@@ -220,8 +224,10 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
             )
         except _NoChainCoverIndex:
             # For whatever reason we don't actually have a chain cover index
-            # for the events in question, so we fall back to the old method.
-            pass
+            # for the events in question, so we fall back to the old method
+            # (except in tests)
+            if not self.tests_allow_no_chain_cover_index:
+                raise

         return await self.db_pool.runInteraction(
             "get_auth_chain_ids",
@@ -271,7 +277,7 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
         if events_missing_chain_info:
             # This can happen due to e.g. downgrade/upgrade of the server. We
             # raise an exception and fall back to the previous algorithm.
-            logger.info(
+            logger.error(
                 "Unexpectedly found that events don't have chain IDs in room %s: %s",
                 room_id,
                 events_missing_chain_info,
@@ -482,8 +488,10 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
             )
         except _NoChainCoverIndex:
             # For whatever reason we don't actually have a chain cover index
-            # for the events in question, so we fall back to the old method.
-            pass
+            # for the events in question, so we fall back to the old method
+            # (except in tests)
+            if not self.tests_allow_no_chain_cover_index:
+                raise

         return await self.db_pool.runInteraction(
             "get_auth_chain_difference",
@@ -710,7 +718,7 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
         if events_missing_chain_info - event_to_auth_ids.keys():
             # Uh oh, we somehow haven't correctly done the chain cover index,
             # bail and fall back to the old method.
-            logger.info(
+            logger.error(
                 "Unexpectedly found that events don't have chain IDs in room %s: %s",
                 room_id,
                 events_missing_chain_info - event_to_auth_ids.keys(),
@@ -34,7 +34,6 @@ from typing import (
     Optional,
     Set,
     Tuple,
-    Union,
     cast,
 )

@@ -100,6 +99,23 @@ class DeltaState:
         return not self.to_delete and not self.to_insert and not self.no_longer_in_room


+@attr.s(slots=True, auto_attribs=True)
+class NewEventChainLinks:
+    """Information about new auth chain links that need to be added to the DB.
+
+    Attributes:
+        chain_id, sequence_number: the IDs corresponding to the event being
+            inserted, and the starting point of the links
+
+        links: Lists the links that need to be added, 2-tuple of the chain
+            ID/sequence number of the end point of the link.
+    """
+
+    chain_id: int
+    sequence_number: int
+
+    links: List[Tuple[int, int]] = attr.Factory(list)
+
+
 class PersistEventsStore:
     """Contains all the functions for writing events to the database.
@@ -148,6 +164,7 @@ class PersistEventsStore:
         *,
         state_delta_for_room: Optional[DeltaState],
         new_forward_extremities: Optional[Set[str]],
+        new_event_links: Dict[str, NewEventChainLinks],
         use_negative_stream_ordering: bool = False,
         inhibit_local_membership_updates: bool = False,
     ) -> None:
@@ -217,6 +234,7 @@ class PersistEventsStore:
                 inhibit_local_membership_updates=inhibit_local_membership_updates,
                 state_delta_for_room=state_delta_for_room,
                 new_forward_extremities=new_forward_extremities,
+                new_event_links=new_event_links,
             )
             persist_event_counter.inc(len(events_and_contexts))
@@ -243,6 +261,87 @@ class PersistEventsStore:
                 (room_id,), frozenset(new_forward_extremities)
             )

+    async def calculate_chain_cover_index_for_events(
+        self, room_id: str, events: Collection[EventBase]
+    ) -> Dict[str, NewEventChainLinks]:
+        # Filter to state events, and ensure there are no duplicates.
+        state_events = []
+        seen_events = set()
+        for event in events:
+            if not event.is_state() or event.event_id in seen_events:
+                continue
+
+            state_events.append(event)
+            seen_events.add(event.event_id)
+
+        if not state_events:
+            return {}
+
+        return await self.db_pool.runInteraction(
+            "_calculate_chain_cover_index_for_events",
+            self.calculate_chain_cover_index_for_events_txn,
+            room_id,
+            state_events,
+        )
+
+    def calculate_chain_cover_index_for_events_txn(
+        self, txn: LoggingTransaction, room_id: str, state_events: Collection[EventBase]
+    ) -> Dict[str, NewEventChainLinks]:
+        # We now calculate chain ID/sequence numbers for any state events we're
+        # persisting. We ignore out of band memberships as we're not in the room
+        # and won't have their auth chain (we'll fix it up later if we join the
+        # room).
+        #
+        # See: docs/auth_chain_difference_algorithm.md
+
+        # We ignore legacy rooms that we aren't filling the chain cover index
+        # for.
+        row = self.db_pool.simple_select_one_txn(
+            txn,
+            table="rooms",
+            keyvalues={"room_id": room_id},
+            retcols=("room_id", "has_auth_chain_index"),
+            allow_none=True,
+        )
+        if row is None or row[1] is False:
+            return {}
+
+        # Filter out events that we've already calculated.
+        rows = self.db_pool.simple_select_many_txn(
+            txn,
+            table="event_auth_chains",
+            column="event_id",
+            iterable=[e.event_id for e in state_events],
+            keyvalues={},
+            retcols=("event_id",),
+        )
+        already_persisted_events = {event_id for event_id, in rows}
+        state_events = [
+            event
+            for event in state_events
+            if event.event_id not in already_persisted_events
+        ]
+
+        if not state_events:
+            return {}
+
+        # We need to know the type/state_key and auth events of the events we're
+        # calculating chain IDs for. We don't rely on having the full Event
+        # instances as we'll potentially be pulling more events from the DB and
+        # we don't need the overhead of fetching/parsing the full event JSON.
+        event_to_types = {e.event_id: (e.type, e.state_key) for e in state_events}
+        event_to_auth_chain = {e.event_id: e.auth_event_ids() for e in state_events}
+        event_to_room_id = {e.event_id: e.room_id for e in state_events}
+
+        return self._calculate_chain_cover_index(
+            txn,
+            self.db_pool,
+            self.store.event_chain_id_gen,
+            event_to_room_id,
+            event_to_types,
+            event_to_auth_chain,
+        )
+
     async def _get_events_which_are_prevs(self, event_ids: Iterable[str]) -> List[str]:
         """Filter the supplied list of event_ids to get those which are prev_events of
         existing (non-outlier/rejected) events.
@@ -358,6 +457,7 @@ class PersistEventsStore:
         inhibit_local_membership_updates: bool,
         state_delta_for_room: Optional[DeltaState],
         new_forward_extremities: Optional[Set[str]],
+        new_event_links: Dict[str, NewEventChainLinks],
     ) -> None:
         """Insert some number of room events into the necessary database tables.

@@ -466,7 +566,9 @@ class PersistEventsStore:
         # Insert into event_to_state_groups.
         self._store_event_state_mappings_txn(txn, events_and_contexts)

-        self._persist_event_auth_chain_txn(txn, [e for e, _ in events_and_contexts])
+        self._persist_event_auth_chain_txn(
+            txn, [e for e, _ in events_and_contexts], new_event_links
+        )

         # _store_rejected_events_txn filters out any events which were
         # rejected, and returns the filtered list.
@@ -496,7 +598,11 @@ class PersistEventsStore:
         self,
         txn: LoggingTransaction,
         events: List[EventBase],
+        new_event_links: Dict[str, NewEventChainLinks],
     ) -> None:
+        if new_event_links:
+            self._persist_chain_cover_index(txn, self.db_pool, new_event_links)
+
         # We only care about state events, so this if there are no state events.
         if not any(e.is_state() for e in events):
             return
@@ -519,60 +625,6 @@ class PersistEventsStore:
             ],
         )

-        # We now calculate chain ID/sequence numbers for any state events we're
-        # persisting. We ignore out of band memberships as we're not in the room
-        # and won't have their auth chain (we'll fix it up later if we join the
-        # room).
-        #
-        # See: docs/auth_chain_difference_algorithm.md
-
-        # We ignore legacy rooms that we aren't filling the chain cover index
-        # for.
-        rows = cast(
-            List[Tuple[str, Optional[Union[int, bool]]]],
-            self.db_pool.simple_select_many_txn(
-                txn,
-                table="rooms",
-                column="room_id",
-                iterable={event.room_id for event in events if event.is_state()},
-                keyvalues={},
-                retcols=("room_id", "has_auth_chain_index"),
-            ),
-        )
-        rooms_using_chain_index = {
-            room_id for room_id, has_auth_chain_index in rows if has_auth_chain_index
-        }
-
-        state_events = {
-            event.event_id: event
-            for event in events
-            if event.is_state() and event.room_id in rooms_using_chain_index
-        }
-
-        if not state_events:
-            return
-
-        # We need to know the type/state_key and auth events of the events we're
-        # calculating chain IDs for. We don't rely on having the full Event
-        # instances as we'll potentially be pulling more events from the DB and
-        # we don't need the overhead of fetching/parsing the full event JSON.
-        event_to_types = {
-            e.event_id: (e.type, e.state_key) for e in state_events.values()
-        }
-        event_to_auth_chain = {
-            e.event_id: e.auth_event_ids() for e in state_events.values()
-        }
-        event_to_room_id = {e.event_id: e.room_id for e in state_events.values()}
-
-        self._add_chain_cover_index(
-            txn,
-            self.db_pool,
-            self.store.event_chain_id_gen,
-            event_to_room_id,
-            event_to_types,
-            event_to_auth_chain,
-        )
-
     @classmethod
     def _add_chain_cover_index(
         cls,
@@ -583,6 +635,35 @@
         event_to_types: Dict[str, Tuple[str, str]],
         event_to_auth_chain: Dict[str, StrCollection],
     ) -> None:
+        """Calculate and persist the chain cover index for the given events.
+
+        Args:
+            event_to_room_id: Event ID to the room ID of the event
+            event_to_types: Event ID to type and state_key of the event
+            event_to_auth_chain: Event ID to list of auth event IDs of the
+                event (events with no auth events can be excluded).
+        """
+
+        new_event_links = cls._calculate_chain_cover_index(
+            txn,
+            db_pool,
+            event_chain_id_gen,
+            event_to_room_id,
+            event_to_types,
+            event_to_auth_chain,
+        )
+        cls._persist_chain_cover_index(txn, db_pool, new_event_links)
+
+    @classmethod
+    def _calculate_chain_cover_index(
+        cls,
+        txn: LoggingTransaction,
+        db_pool: DatabasePool,
+        event_chain_id_gen: SequenceGenerator,
+        event_to_room_id: Dict[str, str],
+        event_to_types: Dict[str, Tuple[str, str]],
+        event_to_auth_chain: Dict[str, StrCollection],
+    ) -> Dict[str, NewEventChainLinks]:
         """Calculate the chain cover index for the given events.

         Args:
@@ -590,6 +671,10 @@
             event_to_types: Event ID to type and state_key of the event
             event_to_auth_chain: Event ID to list of auth event IDs of the
                 event (events with no auth events can be excluded).
+
+        Returns:
+            A mapping with any new auth chain links we need to add, keyed by
+            event ID.
         """

         # Map from event ID to chain ID/sequence number.
@@ -708,11 +793,11 @@
             room_id = event_to_room_id.get(event_id)
             if room_id:
                 e_type, state_key = event_to_types[event_id]
-                db_pool.simple_insert_txn(
+                db_pool.simple_upsert_txn(
                     txn,
                     table="event_auth_chain_to_calculate",
+                    keyvalues={"event_id": event_id},
                     values={
-                        "event_id": event_id,
                         "room_id": room_id,
                         "type": e_type,
                         "state_key": state_key,
@@ -724,7 +809,7 @@
                 break

         if not events_to_calc_chain_id_for:
-            return
+            return {}

         # Allocate chain ID/sequence numbers to each new event.
         new_chain_tuples = cls._allocate_chain_ids(
@@ -739,23 +824,10 @@
         )
         chain_map.update(new_chain_tuples)

-        db_pool.simple_insert_many_txn(
-            txn,
-            table="event_auth_chains",
-            keys=("event_id", "chain_id", "sequence_number"),
-            values=[
-                (event_id, c_id, seq)
-                for event_id, (c_id, seq) in new_chain_tuples.items()
-            ],
-        )
-
-        db_pool.simple_delete_many_txn(
-            txn,
-            table="event_auth_chain_to_calculate",
-            keyvalues={},
-            column="event_id",
-            values=new_chain_tuples,
-        )
+        to_return = {
+            event_id: NewEventChainLinks(chain_id, sequence_number)
+            for event_id, (chain_id, sequence_number) in new_chain_tuples.items()
+        }

         # Now we need to calculate any new links between chains caused by
         # the new events.
@@ -825,10 +897,38 @@
             auth_chain_id, auth_sequence_number = chain_map[auth_id]

             # Step 2a, add link between the event and auth event
+            to_return[event_id].links.append((auth_chain_id, auth_sequence_number))
             chain_links.add_link(
                 (chain_id, sequence_number), (auth_chain_id, auth_sequence_number)
             )

+        return to_return
+
+    @classmethod
+    def _persist_chain_cover_index(
+        cls,
+        txn: LoggingTransaction,
+        db_pool: DatabasePool,
+        new_event_links: Dict[str, NewEventChainLinks],
+    ) -> None:
+        db_pool.simple_insert_many_txn(
+            txn,
+            table="event_auth_chains",
+            keys=("event_id", "chain_id", "sequence_number"),
+            values=[
+                (event_id, new_links.chain_id, new_links.sequence_number)
+                for event_id, new_links in new_event_links.items()
+            ],
+        )
+
+        db_pool.simple_delete_many_txn(
+            txn,
+            table="event_auth_chain_to_calculate",
+            keyvalues={},
+            column="event_id",
+            values=new_event_links,
+        )
+
         db_pool.simple_insert_many_txn(
             txn,
             table="event_auth_chain_links",
@@ -838,7 +938,16 @@
                 "target_chain_id",
                 "target_sequence_number",
             ),
-            values=list(chain_links.get_additions()),
+            values=[
+                (
+                    new_links.chain_id,
+                    new_links.sequence_number,
+                    target_chain_id,
+                    target_sequence_number,
+                )
+                for new_links in new_event_links.values()
+                for (target_chain_id, target_sequence_number) in new_links.links
+            ],
         )

     @staticmethod
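The hunks above split `_add_chain_cover_index` into a calculate step, which returns per-event `NewEventChainLinks`, and a separate persist step. The index itself works by giving each state event a (chain ID, sequence number) position and recording cross-chain links, so auth-chain membership becomes a position comparison. A toy sketch of that reachability idea, with hypothetical event names and single-hop links only (the real index composes links transitively; this is not Synapse code):

```python
from typing import Dict, Set, Tuple

# Each event sits at a (chain_id, sequence_number) position; a position
# implies all earlier positions in the same chain.
chain_of: Dict[str, Tuple[int, int]] = {
    "create": (1, 1),
    "power": (1, 2),
    "alice_join": (2, 1),
}
# A link (src, dst) records that src's chain position can reach dst's
# chain up to dst's sequence number.
links: Set[Tuple[Tuple[int, int], Tuple[int, int]]] = {((2, 1), (1, 2))}

def reaches(a: Tuple[int, int], b: Tuple[int, int]) -> bool:
    """True if position `a` has `b` in its auth chain (single-hop only)."""
    if a[0] == b[0]:
        # Same chain: everything at or before `a` is reachable.
        return b[1] <= a[1]
    return any(
        src[0] == a[0] and src[1] <= a[1] and dst[0] == b[0] and b[1] <= dst[1]
        for src, dst in links
    )

assert reaches(chain_of["power"], chain_of["create"])
assert reaches(chain_of["alice_join"], chain_of["power"])
assert not reaches(chain_of["create"], chain_of["power"])
```

Splitting calculation from persistence lets callers (such as the sliding-sync work this branch feeds) compute the links ahead of time and hand them to the event-persistence transaction.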
@@ -44,6 +44,7 @@ what sort order was used:
 import logging
 from typing import (
     TYPE_CHECKING,
+    AbstractSet,
     Any,
     Collection,
     Dict,
@@ -62,7 +63,7 @@ from typing_extensions import Literal

 from twisted.internet import defer

-from synapse.api.constants import Direction
+from synapse.api.constants import Direction, EventTypes, Membership
 from synapse.api.filtering import Filter
 from synapse.events import EventBase
 from synapse.logging.context import make_deferred_yieldable, run_in_background
@@ -111,6 +112,32 @@ class _EventsAround:
     end: RoomStreamToken


+@attr.s(slots=True, frozen=True, auto_attribs=True)
+class CurrentStateDeltaMembership:
+    """
+    Attributes:
+        event_id: The "current" membership event ID in this room.
+        event_pos: The position of the "current" membership event in the event stream.
+        prev_event_id: The previous membership event in this room that was replaced by
+            the "current" one. May be `None` if there was no previous membership event.
+        room_id: The room ID of the membership event.
+        membership: The membership state of the user in the room
+        sender: The person who sent the membership event
+    """
+
+    room_id: str
+    # Event
+    event_id: Optional[str]
+    event_pos: PersistedEventPosition
+    membership: str
+    sender: Optional[str]
+    # Prev event
+    prev_event_id: Optional[str]
+    prev_event_pos: Optional[PersistedEventPosition]
+    prev_membership: Optional[str]
+    prev_sender: Optional[str]
+
+
 def generate_pagination_where_clause(
     direction: Direction,
     column_names: Tuple[str, str],
@@ -390,6 +417,43 @@ def _filter_results(
     return True
 
 
+def _filter_results_by_stream(
+    lower_token: Optional[RoomStreamToken],
+    upper_token: Optional[RoomStreamToken],
+    instance_name: str,
+    stream_ordering: int,
+) -> bool:
+    """
+    This function only works with "live" tokens with `stream_ordering` only. See
+    `_filter_results(...)` if you want to work with all tokens.
+
+    Returns True if the event persisted by the given instance at the given
+    stream_ordering falls between the two tokens (taking a None
+    token to mean unbounded).
+
+    Used to filter results from fetching events in the DB against the given
+    tokens. This is necessary to handle the case where the tokens include
+    position maps, which we handle by fetching more than necessary from the DB
+    and then filtering (rather than attempting to construct a complicated SQL
+    query).
+    """
+    if lower_token:
+        assert lower_token.topological is None
+
+        # If these are live tokens we compare the stream ordering against the
+        # writers stream position.
+        if stream_ordering <= lower_token.get_stream_pos_for_instance(instance_name):
+            return False
+
+    if upper_token:
+        assert upper_token.topological is None
+
+        if upper_token.get_stream_pos_for_instance(instance_name) < stream_ordering:
+            return False
+
+    return True
+
+
 def filter_to_clause(event_filter: Optional[Filter]) -> Tuple[str, List[str]]:
     # NB: This may create SQL clauses that don't optimise well (and we don't
     # have indices on all possible clauses). E.g. it may create
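The bounds check added in this hunk treats the range as half-open: the lower token is exclusive, the upper token is inclusive, and a per-writer position in the token's instance map overrides the default stream position. A minimal standalone sketch of that logic, using a simple stand-in for `RoomStreamToken` (the `FakeToken` class is an illustration, not the real type):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class FakeToken:
    """Hypothetical stand-in for RoomStreamToken: a default stream position
    plus an optional per-writer position map."""
    stream: int
    instance_map: Dict[str, int] = field(default_factory=dict)
    topological: Optional[int] = None

    def get_stream_pos_for_instance(self, instance_name: str) -> int:
        # Fall back to the default stream position when the writer has no entry.
        return self.instance_map.get(instance_name, self.stream)


def filter_by_stream(
    lower: Optional[FakeToken],
    upper: Optional[FakeToken],
    instance_name: str,
    stream_ordering: int,
) -> bool:
    # Keep events in the half-open range (lower, upper]; None means unbounded.
    if lower and stream_ordering <= lower.get_stream_pos_for_instance(instance_name):
        return False
    if upper and upper.get_stream_pos_for_instance(instance_name) < stream_ordering:
        return False
    return True


lo = FakeToken(stream=5)
hi = FakeToken(stream=10, instance_map={"worker2": 8})
print(filter_by_stream(lo, hi, "master", 5))   # lower bound is exclusive
print(filter_by_stream(lo, hi, "master", 10))  # upper bound is inclusive
print(filter_by_stream(lo, hi, "worker2", 9))  # per-writer position wins
```

Fetching slightly more than needed and filtering in Python this way avoids encoding the per-writer position map into the SQL query itself, as the docstring notes.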
@@ -731,6 +795,191 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
 
         return ret, key
 
+    async def get_current_state_delta_membership_changes_for_user(
+        self,
+        user_id: str,
+        from_key: RoomStreamToken,
+        to_key: RoomStreamToken,
+        excluded_room_ids: Optional[List[str]] = None,
+    ) -> List[CurrentStateDeltaMembership]:
+        """
+        Fetch membership events (and the previous event that was replaced by that one)
+        for a given user.
+
+        Note: This function only works with "live" tokens with `stream_ordering` only.
+
+        We're looking for membership changes in the token range (> `from_key` and <=
+        `to_key`).
+
+        Please be mindful to only use this with `from_key` and `to_key` tokens that are
+        recent enough to be after when the first local user joined the room. Otherwise,
+        the results may be incomplete or too greedy. For example, if you use a token
+        range before the first local user joined the room, you will see 0 events since
+        `current_state_delta_stream` tracks what the server thinks is the current state
+        of the room as time goes. It does not track how state progresses from the
+        beginning of the room. So for example, when you remotely join a room, the first
+        rows will just be the state when you joined and progress from there.
+
+        You can probably reasonably use this with `/sync` because the `to_key` passed in
+        will be the "current" now token and the range will cover when the user joined
+        the room.
+
+        Args:
+            user_id: The user ID to fetch membership events for.
+            from_key: The point in the stream to sync from (fetching events > this point).
+            to_key: The token to fetch rooms up to (fetching events <= this point).
+            excluded_room_ids: Optional list of room IDs to exclude from the results.
+
+        Returns:
+            All membership changes to the current state in the token range. Events are
+            sorted by `stream_ordering` ascending.
+        """
+        # Start by ruling out cases where a DB query is not necessary.
+        if from_key == to_key:
+            return []
+
+        if from_key:
+            has_changed = self._membership_stream_cache.has_entity_changed(
+                user_id, int(from_key.stream)
+            )
+            if not has_changed:
+                return []
+
+        def f(txn: LoggingTransaction) -> List[CurrentStateDeltaMembership]:
+            # To handle tokens with a non-empty instance_map we fetch more
+            # results than necessary and then filter down
+            min_from_id = from_key.stream
+            max_to_id = to_key.get_max_stream_pos()
+
+            args: List[Any] = [min_from_id, max_to_id, EventTypes.Member, user_id]
+
+            # TODO: It would be good to assert that the `from_token`/`to_token` is >=
+            # the first row in `current_state_delta_stream` for the rooms we're
+            # interested in. Otherwise, we will end up with empty results and not know
+            # it.
+
+            # We could `COALESCE(e.stream_ordering, s.stream_id)` to get more accurate
+            # stream positioning when available but given our usages, we can avoid the
+            # complexity. Between two (valid) stream tokens, we will still get all of
+            # the state changes. Since those events are persisted in a batch, valid
+            # tokens will either be before or after the batch of events.
+            #
+            # `stream_ordering` from the `events` table is more accurate when available
+            # since the `current_state_delta_stream` table only tracks that the current
+            # state is at this stream position (not what stream position the state event
+            # was added) and uses the *minimum* stream position for batches of events.
+            sql = """
+                SELECT
+                    s.room_id,
+                    e.event_id,
+                    s.instance_name,
+                    s.stream_id,
+                    m.membership,
+                    e.sender,
+                    s.prev_event_id,
+                    e_prev.instance_name AS prev_instance_name,
+                    e_prev.stream_ordering AS prev_stream_ordering,
+                    m_prev.membership AS prev_membership,
+                    e_prev.sender AS prev_sender
+                FROM current_state_delta_stream AS s
+                    LEFT JOIN events AS e ON e.event_id = s.event_id
+                    LEFT JOIN room_memberships AS m ON m.event_id = s.event_id
+                    LEFT JOIN events AS e_prev ON e_prev.event_id = s.prev_event_id
+                    LEFT JOIN room_memberships AS m_prev ON m_prev.event_id = s.prev_event_id
+                WHERE s.stream_id > ? AND s.stream_id <= ?
+                    AND s.type = ?
+                    AND s.state_key = ?
+                ORDER BY s.stream_id ASC
+            """
+
+            txn.execute(sql, args)
+
+            membership_changes: List[CurrentStateDeltaMembership] = []
+            for (
+                room_id,
+                event_id,
+                instance_name,
+                stream_ordering,
+                membership,
+                sender,
+                prev_event_id,
+                prev_instance_name,
+                prev_stream_ordering,
+                prev_membership,
+                prev_sender,
+            ) in txn:
+                assert room_id is not None
+                assert instance_name is not None
+                assert stream_ordering is not None
+
+                if _filter_results_by_stream(
+                    from_key,
+                    to_key,
+                    instance_name,
+                    stream_ordering,
+                ):
+                    # When the server leaves a room, it will insert new rows into the
+                    # `current_state_delta_stream` table with `event_id = null` for all
+                    # current state. This means we might already have a row for the
+                    # leave event and then another for the same leave where the
+                    # `event_id=null` but the `prev_event_id` is pointing back at the
+                    # earlier leave event. We don't want to report the leave, if we
+                    # already have a leave event.
+                    if event_id is None and prev_membership == Membership.LEAVE:
+                        continue
+
+                    membership_change = CurrentStateDeltaMembership(
+                        room_id=room_id,
+                        # Event
+                        event_id=event_id,
+                        event_pos=PersistedEventPosition(
+                            instance_name=instance_name,
+                            stream=stream_ordering,
+                        ),
+                        # When `s.event_id = null`, we won't be able to get respective
+                        # `room_membership` but can assume the user has left the room
+                        # because this only happens when the server leaves a room
+                        # (meaning everyone locally left) or a state reset which removed
+                        # the person from the room.
+                        membership=(
+                            membership if membership is not None else Membership.LEAVE
+                        ),
+                        sender=sender,
+                        # Prev event
+                        prev_event_id=prev_event_id,
+                        prev_event_pos=(
+                            PersistedEventPosition(
+                                instance_name=prev_instance_name,
+                                stream=prev_stream_ordering,
+                            )
+                            if (
+                                prev_instance_name is not None
+                                and prev_stream_ordering is not None
+                            )
+                            else None
+                        ),
+                        prev_membership=prev_membership,
+                        prev_sender=prev_sender,
+                    )
+
+                    membership_changes.append(membership_change)
+
+            return membership_changes
+
+        membership_changes = await self.db_pool.runInteraction(
+            "get_current_state_delta_membership_changes_for_user", f
+        )
+
+        room_ids_to_exclude: AbstractSet[str] = set()
+        if excluded_room_ids is not None:
+            room_ids_to_exclude = set(excluded_room_ids)
+
+        return [
+            membership_change
+            for membership_change in membership_changes
+            if membership_change.room_id not in room_ids_to_exclude
+        ]
+
     @cancellable
     async def get_membership_changes_for_user(
         self,
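The deduplication rule in the loop above (drop `event_id = null` rows whose previous membership was already a leave, but keep other `null` rows as implicit leaves) can be illustrated with plain tuples standing in for DB rows. The row shape here is a simplified assumption, not the real query result:

```python
from typing import List, Optional, Tuple

# (event_id, membership, prev_membership): simplified stand-ins for the
# columns the real query selects.
Row = Tuple[Optional[str], Optional[str], Optional[str]]


def dedupe_leaves(rows: List[Row]) -> List[str]:
    """Collect memberships, treating event_id=None rows as leaves, but
    dropping them when they merely echo an earlier leave event."""
    memberships: List[str] = []
    for event_id, membership, prev_membership in rows:
        if event_id is None and prev_membership == "leave":
            continue  # the leave was already reported by an earlier row
        # A None event_id with no prior leave still counts as a leave
        # (server left the room or a state reset removed the user).
        memberships.append(membership if membership is not None else "leave")
    return memberships


rows: List[Row] = [
    ("$join", "join", None),      # normal join
    ("$leave", "leave", "join"),  # explicit leave event
    (None, None, "leave"),        # server-leave echo of the same leave: dropped
    (None, None, "join"),         # state reset with no prior leave: kept as leave
]
print(dedupe_leaves(rows))  # ['join', 'leave', 'leave']
```

This mirrors why the method can still report a membership of `leave` even when no membership event row exists for it.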
@@ -766,10 +1015,11 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
 
         ignore_room_clause = ""
         if excluded_rooms is not None and len(excluded_rooms) > 0:
-            ignore_room_clause = "AND e.room_id NOT IN (%s)" % ",".join(
-                "?" for _ in excluded_rooms
+            ignore_room_clause, ignore_room_args = make_in_list_sql_clause(
+                txn.database_engine, "e.room_id", excluded_rooms, negative=True
             )
-            args = args + excluded_rooms
+            ignore_room_clause = f"AND {ignore_room_clause}"
+            args += ignore_room_args
 
         sql = """
             SELECT m.event_id, instance_name, topological_ordering, stream_ordering
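The hunk above swaps a hand-rolled `NOT IN (...)` string for `make_in_list_sql_clause`, which picks a clause appropriate to the database engine. A minimal sketch of what such a helper produces for the SQLite-style placeholder case; the function name and exact output format here are a simplified assumption, not Synapse's real helper:

```python
from typing import Iterable, List, Tuple


def in_list_sql_clause(
    column: str, values: Iterable[str], negative: bool = False
) -> Tuple[str, List[str]]:
    """Build a parameterised (NOT) IN clause plus its argument list."""
    args = list(values)
    placeholders = ", ".join("?" for _ in args)
    op = "NOT IN" if negative else "IN"
    return f"{column} {op} ({placeholders})", args


clause, args = in_list_sql_clause("e.room_id", ["!a:x", "!b:x"], negative=True)
print(clause)  # e.room_id NOT IN (?, ?)
print(args)    # ['!a:x', '!b:x']
```

Returning the clause and its bound arguments together keeps the query parameterised (no string interpolation of user data) and lets the caller splice both into the final SQL and argument list, which is exactly what the diff does with `ignore_room_clause` and `ignore_room_args`.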
@@ -32,7 +32,10 @@
  * limitations under the License.
  */
 
+-- Tracks what the server thinks is the current state of the room as time goes. It does
+-- not track how state progresses from the beginning of the room. So for example, when
+-- you remotely join a room, the first rows will just be the state when you joined and
+-- progress from there.
 CREATE TABLE current_state_delta_stream (
     stream_id BIGINT NOT NULL,
     room_id TEXT NOT NULL,
@@ -75,9 +75,6 @@ class PaginationConfig:
             raise SynapseError(400, "'to' parameter is invalid")
 
         limit = parse_integer(request, "limit", default=default_limit)
-        if limit < 0:
-            raise SynapseError(400, "Limit must be 0 or above")
-
         limit = min(limit, MAX_LIMIT)
 
         try:
@@ -31,10 +31,12 @@ else:
     from pydantic import Extra
 
 from synapse.events import EventBase
-from synapse.handlers.relations import BundledAggregations
 from synapse.types import JsonDict, JsonMapping, StreamToken, UserID
 from synapse.types.rest.client import SlidingSyncBody
 
+if TYPE_CHECKING:
+    from synapse.handlers.relations import BundledAggregations
+
 
 class ShutdownRoomParams(TypedDict):
     """
@@ -195,18 +197,24 @@ class SlidingSyncResult:
         avatar: Optional[str]
         heroes: Optional[List[EventBase]]
         initial: bool
-        required_state: List[EventBase]
-        timeline_events: List[EventBase]
-        bundled_aggregations: Optional[Dict[str, BundledAggregations]]
+        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
+        required_state: Optional[List[EventBase]]
+        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
+        timeline_events: Optional[List[EventBase]]
+        bundled_aggregations: Optional[Dict[str, "BundledAggregations"]]
         is_dm: bool
+        # Optional because it's only relevant to invite/knock rooms
         stripped_state: Optional[List[JsonDict]]
-        prev_batch: StreamToken
-        limited: bool
+        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
+        prev_batch: Optional[StreamToken]
+        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
+        limited: Optional[bool]
         joined_count: int
         invited_count: int
         notification_count: int
         highlight_count: int
-        num_live: int
+        # Only optional because it won't be included for invite/knock rooms with `stripped_state`
+        num_live: Optional[int]
 
 
     @attr.s(slots=True, frozen=True, auto_attribs=True)
     class SlidingWindowList:
@@ -154,10 +154,6 @@ class SlidingSyncBody(RequestBodyModel):
             (Max 1000 messages)
         """
 
-        class IncludeOldRooms(RequestBodyModel):
-            timeline_limit: StrictInt
-            required_state: List[Tuple[StrictStr, StrictStr]]
-
         required_state: List[Tuple[StrictStr, StrictStr]]
         # mypy workaround via https://github.com/pydantic/pydantic/issues/156#issuecomment-1130883884
         if TYPE_CHECKING:
173 tests/federation/test_federation_media.py (new file)
@@ -0,0 +1,173 @@
+#
+# This file is licensed under the Affero General Public License (AGPL) version 3.
+#
+# Copyright (C) 2024 New Vector, Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# See the GNU Affero General Public License for more details:
+# <https://www.gnu.org/licenses/agpl-3.0.html>.
+#
+# Originally licensed under the Apache License, Version 2.0:
+# <http://www.apache.org/licenses/LICENSE-2.0>.
+#
+# [This file includes modifications made by New Vector Limited]
+#
+#
+import io
+import os
+import shutil
+import tempfile
+
+from twisted.test.proto_helpers import MemoryReactor
+
+from synapse.media.filepath import MediaFilePaths
+from synapse.media.media_storage import MediaStorage
+from synapse.media.storage_provider import (
+    FileStorageProviderBackend,
+    StorageProviderWrapper,
+)
+from synapse.server import HomeServer
+from synapse.types import UserID
+from synapse.util import Clock
+
+from tests import unittest
+from tests.test_utils import SMALL_PNG
+from tests.unittest import override_config
+
+
+class FederationUnstableMediaDownloadsTest(unittest.FederatingHomeserverTestCase):
+
+    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
+        super().prepare(reactor, clock, hs)
+        self.test_dir = tempfile.mkdtemp(prefix="synapse-tests-")
+        self.addCleanup(shutil.rmtree, self.test_dir)
+        self.primary_base_path = os.path.join(self.test_dir, "primary")
+        self.secondary_base_path = os.path.join(self.test_dir, "secondary")
+
+        hs.config.media.media_store_path = self.primary_base_path
+
+        storage_providers = [
+            StorageProviderWrapper(
+                FileStorageProviderBackend(hs, self.secondary_base_path),
+                store_local=True,
+                store_remote=False,
+                store_synchronous=True,
+            )
+        ]
+
+        self.filepaths = MediaFilePaths(self.primary_base_path)
+        self.media_storage = MediaStorage(
+            hs, self.primary_base_path, self.filepaths, storage_providers
+        )
+        self.media_repo = hs.get_media_repository()
+
+    @override_config(
+        {"experimental_features": {"msc3916_authenticated_media_enabled": True}}
+    )
+    def test_file_download(self) -> None:
+        content = io.BytesIO(b"file_to_stream")
+        content_uri = self.get_success(
+            self.media_repo.create_content(
+                "text/plain",
+                "test_upload",
+                content,
+                46,
+                UserID.from_string("@user_id:whatever.org"),
+            )
+        )
+        # test with a text file
+        channel = self.make_signed_federation_request(
+            "GET",
+            f"/_matrix/federation/unstable/org.matrix.msc3916/media/download/{content_uri.media_id}",
+        )
+        self.pump()
+        self.assertEqual(200, channel.code)
+
+        content_type = channel.headers.getRawHeaders("content-type")
+        assert content_type is not None
+        assert "multipart/mixed" in content_type[0]
+        assert "boundary" in content_type[0]
+
+        # extract boundary
+        boundary = content_type[0].split("boundary=")[1]
+        # split on boundary and check that json field and expected value exist
+        stripped = channel.text_body.split("\r\n" + "--" + boundary)
+        # TODO: the json object expected will change once MSC3911 is implemented, currently
+        # {} is returned for all requests as a placeholder (per MSC3196)
+        found_json = any(
+            "\r\nContent-Type: application/json\r\n\r\n{}" in field
+            for field in stripped
+        )
+        self.assertTrue(found_json)
+
+        # check that the text file and expected value exist
+        found_file = any(
+            "\r\nContent-Type: text/plain\r\n\r\nfile_to_stream" in field
+            for field in stripped
+        )
+        self.assertTrue(found_file)
+
+        content = io.BytesIO(SMALL_PNG)
+        content_uri = self.get_success(
+            self.media_repo.create_content(
+                "image/png",
+                "test_png_upload",
+                content,
+                67,
+                UserID.from_string("@user_id:whatever.org"),
+            )
+        )
+        # test with an image file
+        channel = self.make_signed_federation_request(
+            "GET",
+            f"/_matrix/federation/unstable/org.matrix.msc3916/media/download/{content_uri.media_id}",
+        )
+        self.pump()
+        self.assertEqual(200, channel.code)
+
+        content_type = channel.headers.getRawHeaders("content-type")
+        assert content_type is not None
+        assert "multipart/mixed" in content_type[0]
+        assert "boundary" in content_type[0]
+
+        # extract boundary
+        boundary = content_type[0].split("boundary=")[1]
+        # split on boundary and check that json field and expected value exist
+        body = channel.result.get("body")
+        assert body is not None
+        stripped_bytes = body.split(b"\r\n" + b"--" + boundary.encode("utf-8"))
+        found_json = any(
+            b"\r\nContent-Type: application/json\r\n\r\n{}" in field
+            for field in stripped_bytes
+        )
+        self.assertTrue(found_json)
+
+        # check that the png file exists and matches what was uploaded
+        found_file = any(SMALL_PNG in field for field in stripped_bytes)
+        self.assertTrue(found_file)
+
+    @override_config(
+        {"experimental_features": {"msc3916_authenticated_media_enabled": False}}
+    )
+    def test_disable_config(self) -> None:
+        content = io.BytesIO(b"file_to_stream")
+        content_uri = self.get_success(
+            self.media_repo.create_content(
+                "text/plain",
+                "test_upload",
+                content,
+                46,
+                UserID.from_string("@user_id:whatever.org"),
+            )
+        )
+        channel = self.make_signed_federation_request(
+            "GET",
+            f"/_matrix/federation/unstable/org.matrix.msc3916/media/download/{content_uri.media_id}",
+        )
+        self.pump()
+        self.assertEqual(404, channel.code)
+        self.assertEqual(channel.json_body.get("errcode"), "M_UNRECOGNIZED")
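The boundary-splitting check used in the test above can be sketched standalone: build a `multipart/mixed` body by hand and split it on `"\r\n--" + boundary`, the same way the test inspects the federation response. The boundary value and payloads here are made up for illustration:

```python
# Build a minimal multipart/mixed body, then split it the way the test does.
boundary = "abc123"  # hypothetical boundary value
body = (
    f"--{boundary}\r\n"
    "Content-Type: application/json\r\n\r\n{}\r\n"
    f"--{boundary}\r\n"
    "Content-Type: text/plain\r\n\r\nfile_to_stream\r\n"
    f"--{boundary}--"
)

# Each CRLF-prefixed boundary line separates one part from the next.
parts = body.split("\r\n" + "--" + boundary)

# First part should be the JSON metadata placeholder, second the file bytes.
found_json = any(
    "Content-Type: application/json\r\n\r\n{}" in part for part in parts
)
found_file = any(
    "Content-Type: text/plain\r\n\r\nfile_to_stream" in part for part in parts
)
print(found_json, found_file)  # True True
```

Splitting on the boundary rather than using a full MIME parser is enough here because the test only needs to assert that both expected parts are present somewhere in the body.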
@@ -27,6 +27,8 @@ from twisted.internet import defer
 from twisted.test.proto_helpers import MemoryReactor
 
 from synapse.api.constants import EduTypes, RoomEncryptionAlgorithms
+from synapse.api.presence import UserPresenceState
+from synapse.federation.sender.per_destination_queue import MAX_PRESENCE_STATES_PER_EDU
 from synapse.federation.units import Transaction
 from synapse.handlers.device import DeviceHandler
 from synapse.rest import admin
@@ -266,6 +268,123 @@ class FederationSenderReceiptsTestCases(HomeserverTestCase):
         )
 
 
+class FederationSenderPresenceTestCases(HomeserverTestCase):
+    """
+    Test federation sending for presence updates.
+    """
+
+    def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
+        self.federation_transport_client = Mock(spec=["send_transaction"])
+        self.federation_transport_client.send_transaction = AsyncMock()
+        hs = self.setup_test_homeserver(
+            federation_transport_client=self.federation_transport_client,
+        )
+
+        return hs
+
+    def default_config(self) -> JsonDict:
+        config = super().default_config()
+        config["federation_sender_instances"] = None
+        return config
+
+    def test_presence_simple(self) -> None:
+        "Test that sending a single presence update works"
+
+        mock_send_transaction: AsyncMock = (
+            self.federation_transport_client.send_transaction
+        )
+        mock_send_transaction.return_value = {}
+
+        sender = self.hs.get_federation_sender()
+        self.get_success(
+            sender.send_presence_to_destinations(
+                [UserPresenceState.default("@user:test")],
+                ["server"],
+            )
+        )
+
+        self.pump()
+
+        # expect a call to send_transaction
+        mock_send_transaction.assert_awaited_once()
+
+        json_cb = mock_send_transaction.call_args[0][1]
+        data = json_cb()
+        self.assertEqual(
+            data["edus"],
+            [
+                {
+                    "edu_type": EduTypes.PRESENCE,
+                    "content": {
+                        "push": [
+                            {
+                                "presence": "offline",
+                                "user_id": "@user:test",
+                            }
+                        ]
+                    },
+                }
+            ],
+        )
+
+    def test_presence_batched(self) -> None:
+        """Test that sending lots of presence updates to a destination are
+        batched, rather than having them all sent in one EDU."""
+
+        mock_send_transaction: AsyncMock = (
+            self.federation_transport_client.send_transaction
+        )
+        mock_send_transaction.return_value = {}
+
+        sender = self.hs.get_federation_sender()
+
+        # We now send lots of presence updates to force the federation sender to
+        # batch them up.
+        number_presence_updates_to_send = MAX_PRESENCE_STATES_PER_EDU * 2
+        self.get_success(
+            sender.send_presence_to_destinations(
+                [
+                    UserPresenceState.default(f"@user{i}:test")
+                    for i in range(number_presence_updates_to_send)
+                ],
+                ["server"],
+            )
+        )
+
+        self.pump()
+
+        # We should have seen at least one transaction be sent by now.
+        mock_send_transaction.assert_called()
+
+        # We don't want to specify exactly how the presence EDUs get sent out,
+        # could be one per transaction or multiple per transaction. We just want
+        # to assert that a) each presence EDU has bounded number of updates, and
+        # b) that all updates get sent out.
+        presence_edus = []
+        for transaction_call in mock_send_transaction.call_args_list:
+            json_cb = transaction_call[0][1]
+            data = json_cb()
+
+            for edu in data["edus"]:
+                self.assertEqual(edu.get("edu_type"), EduTypes.PRESENCE)
+                presence_edus.append(edu)
+
+        # A set of all user presence we see, this should end up matching the
+        # number we sent out above.
+        seen_users: Set[str] = set()
+
+        for edu in presence_edus:
+            presence_states = edu["content"]["push"]
+
+            # This is where we actually check that the number of presence
+            # updates is bounded.
+            self.assertLessEqual(len(presence_states), MAX_PRESENCE_STATES_PER_EDU)
+
+            seen_users.update(p["user_id"] for p in presence_states)
+
+        self.assertEqual(len(seen_users), number_presence_updates_to_send)
+
+
 class FederationSenderDevicesTestCases(HomeserverTestCase):
     """
     Test federation sending to update devices.
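The bounded-batching property asserted by `test_presence_batched` (each EDU carries at most `MAX_PRESENCE_STATES_PER_EDU` updates, and no update is lost) can be sketched with a plain chunking helper. The cap value below is illustrative, not Synapse's actual constant:

```python
from typing import List

MAX_PER_EDU = 50  # illustrative cap, not Synapse's real MAX_PRESENCE_STATES_PER_EDU


def batch_presence(user_ids: List[str], cap: int = MAX_PER_EDU) -> List[List[str]]:
    """Split a flat list of presence updates into EDU-sized chunks."""
    return [user_ids[i : i + cap] for i in range(0, len(user_ids), cap)]


updates = [f"@user{i}:test" for i in range(MAX_PER_EDU * 2)]
edus = batch_presence(updates)

print(len(edus))                                  # 2
print(all(len(e) <= MAX_PER_EDU for e in edus))   # True
print(sum(len(e) for e in edus) == len(updates))  # True
```

These are the same two invariants the test checks against the mocked transport: every chunk is bounded, and the union of all chunks covers every update sent.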
(File diff suppressed because it is too large)
@@ -37,6 +37,7 @@ from synapse.api.constants import ApprovalNoticeMedium, LoginType, UserTypes
 from synapse.api.errors import Codes, HttpResponseException, ResourceLimitError
 from synapse.api.room_versions import RoomVersions
 from synapse.media.filepath import MediaFilePaths
+from synapse.rest import admin
 from synapse.rest.client import (
     devices,
     login,
@ -5005,3 +5006,86 @@ class AllowCrossSigningReplacementTestCase(unittest.HomeserverTestCase):
|
||||||
)
|
)
|
||||||
         assert timestamp is not None
         self.assertGreater(timestamp, self.clock.time_msec())
+
+
+class UserSuspensionTestCase(unittest.HomeserverTestCase):
+    servlets = [
+        synapse.rest.admin.register_servlets,
+        login.register_servlets,
+        admin.register_servlets,
+    ]
+
+    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
+        self.admin = self.register_user("thomas", "hackme", True)
+        self.admin_tok = self.login("thomas", "hackme")
+
+        self.bad_user = self.register_user("teresa", "hackme")
+        self.bad_user_tok = self.login("teresa", "hackme")
+
+        self.store = hs.get_datastores().main
+
+    @override_config({"experimental_features": {"msc3823_account_suspension": True}})
+    def test_suspend_user(self) -> None:
+        # test that suspending user works
+        channel = self.make_request(
+            "PUT",
+            f"/_synapse/admin/v1/suspend/{self.bad_user}",
+            {"suspend": True},
+            access_token=self.admin_tok,
+        )
+        self.assertEqual(channel.code, 200)
+        self.assertEqual(channel.json_body, {f"user_{self.bad_user}_suspended": True})
+
+        res = self.get_success(self.store.get_user_suspended_status(self.bad_user))
+        self.assertEqual(True, res)
+
+        # test that un-suspending user works
+        channel2 = self.make_request(
+            "PUT",
+            f"/_synapse/admin/v1/suspend/{self.bad_user}",
+            {"suspend": False},
+            access_token=self.admin_tok,
+        )
+        self.assertEqual(channel2.code, 200)
+        self.assertEqual(channel2.json_body, {f"user_{self.bad_user}_suspended": False})
+
+        res2 = self.get_success(self.store.get_user_suspended_status(self.bad_user))
+        self.assertEqual(False, res2)
+
+        # test that trying to un-suspend user who isn't suspended doesn't cause problems
+        channel3 = self.make_request(
+            "PUT",
+            f"/_synapse/admin/v1/suspend/{self.bad_user}",
+            {"suspend": False},
+            access_token=self.admin_tok,
+        )
+        self.assertEqual(channel3.code, 200)
+        self.assertEqual(channel3.json_body, {f"user_{self.bad_user}_suspended": False})
+
+        res3 = self.get_success(self.store.get_user_suspended_status(self.bad_user))
+        self.assertEqual(False, res3)
+
+        # test that trying to suspend user who is already suspended doesn't cause problems
+        channel4 = self.make_request(
+            "PUT",
+            f"/_synapse/admin/v1/suspend/{self.bad_user}",
+            {"suspend": True},
+            access_token=self.admin_tok,
+        )
+        self.assertEqual(channel4.code, 200)
+        self.assertEqual(channel4.json_body, {f"user_{self.bad_user}_suspended": True})
+
+        res4 = self.get_success(self.store.get_user_suspended_status(self.bad_user))
+        self.assertEqual(True, res4)
+
+        channel5 = self.make_request(
+            "PUT",
+            f"/_synapse/admin/v1/suspend/{self.bad_user}",
+            {"suspend": True},
+            access_token=self.admin_tok,
+        )
+        self.assertEqual(channel5.code, 200)
+        self.assertEqual(channel5.json_body, {f"user_{self.bad_user}_suspended": True})
+
+        res5 = self.get_success(self.store.get_user_suspended_status(self.bad_user))
+        self.assertEqual(True, res5)
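The admin suspension tests above always build the same request path and expect the same response shape. As a minimal illustrative sketch (these helpers are not Synapse APIs, just a restatement of the endpoint contract the tests assert on):

```python
import json


def suspend_request(user_id: str, suspend: bool) -> tuple:
    """Build the (path, JSON body) pair for the admin suspension endpoint."""
    path = f"/_synapse/admin/v1/suspend/{user_id}"
    body = json.dumps({"suspend": suspend})
    return path, body


def expected_response(user_id: str, suspend: bool) -> dict:
    """The response body shape the tests above assert on."""
    return {f"user_{user_id}_suspended": suspend}
```

Note the endpoint is idempotent in the tests: suspending an already-suspended user (or un-suspending a non-suspended one) returns the same 200 response.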
@@ -3819,3 +3819,108 @@ class TimestampLookupTestCase(unittest.HomeserverTestCase):
         # Make sure the outlier event is not returned
         self.assertNotEqual(channel.json_body["event_id"], outlier_event.event_id)
+
+
+class UserSuspensionTests(unittest.HomeserverTestCase):
+    servlets = [
+        admin.register_servlets,
+        login.register_servlets,
+        room.register_servlets,
+        profile.register_servlets,
+    ]
+
+    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
+        self.user1 = self.register_user("thomas", "hackme")
+        self.tok1 = self.login("thomas", "hackme")
+
+        self.user2 = self.register_user("teresa", "hackme")
+        self.tok2 = self.login("teresa", "hackme")
+
+        self.room1 = self.helper.create_room_as(room_creator=self.user1, tok=self.tok1)
+        self.store = hs.get_datastores().main
+
+    def test_suspended_user_cannot_send_message_to_room(self) -> None:
+        # set the user as suspended
+        self.get_success(self.store.set_user_suspended_status(self.user1, True))
+
+        channel = self.make_request(
+            "PUT",
+            f"/rooms/{self.room1}/send/m.room.message/1",
+            access_token=self.tok1,
+            content={"body": "hello", "msgtype": "m.text"},
+        )
+        self.assertEqual(
+            channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
+        )
+
+    def test_suspended_user_cannot_change_profile_data(self) -> None:
+        # set the user as suspended
+        self.get_success(self.store.set_user_suspended_status(self.user1, True))
+
+        channel = self.make_request(
+            "PUT",
+            f"/_matrix/client/v3/profile/{self.user1}/avatar_url",
+            access_token=self.tok1,
+            content={"avatar_url": "mxc://matrix.org/wefh34uihSDRGhw34"},
+            shorthand=False,
+        )
+        self.assertEqual(
+            channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
+        )
+
+        channel2 = self.make_request(
+            "PUT",
+            f"/_matrix/client/v3/profile/{self.user1}/displayname",
+            access_token=self.tok1,
+            content={"displayname": "something offensive"},
+            shorthand=False,
+        )
+        self.assertEqual(
+            channel2.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
+        )
+
+    def test_suspended_user_cannot_redact_messages_other_than_their_own(self) -> None:
+        # first user sends message
+        self.make_request("POST", f"/rooms/{self.room1}/join", access_token=self.tok2)
+        res = self.helper.send_event(
+            self.room1,
+            "m.room.message",
+            {"body": "hello", "msgtype": "m.text"},
+            tok=self.tok2,
+        )
+        event_id = res["event_id"]
+
+        # second user sends message
+        self.make_request("POST", f"/rooms/{self.room1}/join", access_token=self.tok1)
+        res2 = self.helper.send_event(
+            self.room1,
+            "m.room.message",
+            {"body": "bad_message", "msgtype": "m.text"},
+            tok=self.tok1,
+        )
+        event_id2 = res2["event_id"]
+
+        # set the second user as suspended
+        self.get_success(self.store.set_user_suspended_status(self.user1, True))
+
+        # second user can't redact first user's message
+        channel = self.make_request(
+            "PUT",
+            f"/_matrix/client/v3/rooms/{self.room1}/redact/{event_id}/1",
+            access_token=self.tok1,
+            content={"reason": "bogus"},
+            shorthand=False,
+        )
+        self.assertEqual(
+            channel.json_body["errcode"], "ORG.MATRIX.MSC3823.USER_ACCOUNT_SUSPENDED"
+        )
+
+        # but can redact their own
+        channel = self.make_request(
+            "PUT",
+            f"/_matrix/client/v3/rooms/{self.room1}/redact/{event_id2}/1",
+            access_token=self.tok1,
+            content={"reason": "bogus"},
+            shorthand=False,
+        )
+        self.assertEqual(channel.code, 200)
@@ -20,7 +20,7 @@
 #
 import json
 import logging
-from typing import List
+from typing import Dict, List

 from parameterized import parameterized, parameterized_class

@@ -1239,12 +1239,58 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
         self.event_sources = hs.get_event_sources()
         self.storage_controllers = hs.get_storage_controllers()

+    def _add_new_dm_to_global_account_data(
+        self, source_user_id: str, target_user_id: str, target_room_id: str
+    ) -> None:
+        """
+        Helper to handle inserting a new DM for the source user into global account data
+        (handles all of the list merging).
+
+        Args:
+            source_user_id: The user ID of the DM mapping we're going to update
+            target_user_id: User ID of the person the DM is with
+            target_room_id: Room ID of the DM
+        """
+
+        # Get the current DM map
+        existing_dm_map = self.get_success(
+            self.store.get_global_account_data_by_type_for_user(
+                source_user_id, AccountDataTypes.DIRECT
+            )
+        )
+        # Scrutinize the account data since it has no concrete type. We're just copying
+        # everything into a known type. It should be a mapping from user ID to a list of
+        # room IDs. Ignore anything else.
+        new_dm_map: Dict[str, List[str]] = {}
+        if isinstance(existing_dm_map, dict):
+            for user_id, room_ids in existing_dm_map.items():
+                if isinstance(user_id, str) and isinstance(room_ids, list):
+                    for room_id in room_ids:
+                        if isinstance(room_id, str):
+                            new_dm_map[user_id] = new_dm_map.get(user_id, []) + [
+                                room_id
+                            ]
+
+        # Add the new DM to the map
+        new_dm_map[target_user_id] = new_dm_map.get(target_user_id, []) + [
+            target_room_id
+        ]
+        # Save the DM map to global account data
+        self.get_success(
+            self.store.add_account_data_for_user(
+                source_user_id,
+                AccountDataTypes.DIRECT,
+                new_dm_map,
+            )
+        )
+
     def _create_dm_room(
         self,
         inviter_user_id: str,
         inviter_tok: str,
         invitee_user_id: str,
         invitee_tok: str,
+        should_join_room: bool = True,
     ) -> str:
         """
         Helper to create a DM room as the "inviter" and invite the "invitee" user to the
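The merging logic added in `_add_new_dm_to_global_account_data` can be stated as a small pure function: copy only well-typed entries out of the untyped `m.direct` account data, then append the new DM room under the target user. A standalone sketch of that same logic (`add_dm` is an illustrative name, not a Synapse helper):

```python
from typing import Dict, List


def add_dm(dm_map: object, target_user_id: str, target_room_id: str) -> Dict[str, List[str]]:
    """Copy only str -> List[str] entries from the existing `m.direct` map
    (ignoring anything malformed), then append the new DM room."""
    new_dm_map: Dict[str, List[str]] = {}
    if isinstance(dm_map, dict):
        for user_id, room_ids in dm_map.items():
            if isinstance(user_id, str) and isinstance(room_ids, list):
                for room_id in room_ids:
                    if isinstance(room_id, str):
                        new_dm_map.setdefault(user_id, []).append(room_id)
    # Add the new DM to the map
    new_dm_map.setdefault(target_user_id, []).append(target_room_id)
    return new_dm_map
```

The defensive copying matters because `m.direct` is client-supplied account data with no enforced schema, so malformed entries are silently dropped rather than crashing the merge.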
@@ -1265,24 +1311,17 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
             tok=inviter_tok,
             extra_data={"is_direct": True},
         )
-        # Person that was invited joins the room
-        self.helper.join(room_id, invitee_user_id, tok=invitee_tok)
+        if should_join_room:
+            # Person that was invited joins the room
+            self.helper.join(room_id, invitee_user_id, tok=invitee_tok)

         # Mimic the client setting the room as a direct message in the global account
-        # data
-        self.get_success(
-            self.store.add_account_data_for_user(
-                invitee_user_id,
-                AccountDataTypes.DIRECT,
-                {inviter_user_id: [room_id]},
-            )
+        # data for both users.
+        self._add_new_dm_to_global_account_data(
+            invitee_user_id, inviter_user_id, room_id
         )
-        self.get_success(
-            self.store.add_account_data_for_user(
-                inviter_user_id,
-                AccountDataTypes.DIRECT,
-                {invitee_user_id: [room_id]},
-            )
+        self._add_new_dm_to_global_account_data(
+            inviter_user_id, invitee_user_id, room_id
         )

         return room_id
@@ -1400,15 +1439,28 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
         user2_tok = self.login(user2_id, "pass")

         # Create a DM room
-        dm_room_id = self._create_dm_room(
+        joined_dm_room_id = self._create_dm_room(
             inviter_user_id=user1_id,
             inviter_tok=user1_tok,
             invitee_user_id=user2_id,
             invitee_tok=user2_tok,
+            should_join_room=True,
+        )
+        invited_dm_room_id = self._create_dm_room(
+            inviter_user_id=user1_id,
+            inviter_tok=user1_tok,
+            invitee_user_id=user2_id,
+            invitee_tok=user2_tok,
+            should_join_room=False,
         )

         # Create a normal room
-        room_id = self.helper.create_room_as(user1_id, tok=user1_tok, is_public=True)
+        room_id = self.helper.create_room_as(user1_id, tok=user2_tok)
+        self.helper.join(room_id, user1_id, tok=user1_tok)
+
+        # Create a room that user1 is invited to
+        invite_room_id = self.helper.create_room_as(user1_id, tok=user2_tok)
+        self.helper.invite(invite_room_id, src=user2_id, targ=user1_id, tok=user2_tok)

         # Make the Sliding Sync request
         channel = self.make_request(
@@ -1416,18 +1468,34 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
             self.sync_endpoint,
             {
                 "lists": {
+                    # Absense of filters does not imply "False" values
+                    "all": {
+                        "ranges": [[0, 99]],
+                        "required_state": [],
+                        "timeline_limit": 1,
+                        "filters": {},
+                    },
+                    # Test single truthy filter
                     "dms": {
                         "ranges": [[0, 99]],
                         "required_state": [],
                         "timeline_limit": 1,
                         "filters": {"is_dm": True},
                     },
-                    "foo-list": {
+                    # Test single falsy filter
+                    "non-dms": {
                         "ranges": [[0, 99]],
                         "required_state": [],
                         "timeline_limit": 1,
                         "filters": {"is_dm": False},
                     },
+                    # Test how multiple filters should stack (AND'd together)
+                    "room-invites": {
+                        "ranges": [[0, 99]],
+                        "required_state": [],
+                        "timeline_limit": 1,
+                        "filters": {"is_dm": False, "is_invite": True},
+                    },
                 }
             },
             access_token=user1_tok,
@@ -1437,32 +1505,59 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
         # Make sure it has the foo-list we requested
         self.assertListEqual(
             list(channel.json_body["lists"].keys()),
-            ["dms", "foo-list"],
+            ["all", "dms", "non-dms", "room-invites"],
             channel.json_body["lists"].keys(),
         )

-        # Make sure the list includes the room we are joined to
+        # Make sure the lists have the correct rooms
+        self.assertListEqual(
+            list(channel.json_body["lists"]["all"]["ops"]),
+            [
+                {
+                    "op": "SYNC",
+                    "range": [0, 99],
+                    "room_ids": [
+                        invite_room_id,
+                        room_id,
+                        invited_dm_room_id,
+                        joined_dm_room_id,
+                    ],
+                }
+            ],
+            list(channel.json_body["lists"]["all"]),
+        )
         self.assertListEqual(
             list(channel.json_body["lists"]["dms"]["ops"]),
             [
                 {
                     "op": "SYNC",
                     "range": [0, 99],
-                    "room_ids": [dm_room_id],
+                    "room_ids": [invited_dm_room_id, joined_dm_room_id],
                 }
             ],
             list(channel.json_body["lists"]["dms"]),
         )
         self.assertListEqual(
-            list(channel.json_body["lists"]["foo-list"]["ops"]),
+            list(channel.json_body["lists"]["non-dms"]["ops"]),
             [
                 {
                     "op": "SYNC",
                     "range": [0, 99],
-                    "room_ids": [room_id],
+                    "room_ids": [invite_room_id, room_id],
                 }
             ],
-            list(channel.json_body["lists"]["foo-list"]),
+            list(channel.json_body["lists"]["non-dms"]),
+        )
+        self.assertListEqual(
+            list(channel.json_body["lists"]["room-invites"]["ops"]),
+            [
+                {
+                    "op": "SYNC",
+                    "range": [0, 99],
+                    "room_ids": [invite_room_id],
+                }
+            ],
+            list(channel.json_body["lists"]["room-invites"]),
         )

     def test_sort_list(self) -> None:
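The three filtered lists in this hunk encode the filter semantics being tested: an empty `filters` dict matches everything, a single filter matches rooms with that exact value, and multiple filters are AND'd together. A minimal sketch of those semantics (illustrative only, not the Synapse filtering implementation):

```python
def matches_filters(room: dict, filters: dict) -> bool:
    """All supplied filters must match (AND semantics); an empty
    filters dict matches every room, as the "all" list relies on."""
    if "is_dm" in filters and room.get("is_dm", False) != filters["is_dm"]:
        return False
    if "is_invite" in filters and room.get("is_invite", False) != filters["is_invite"]:
        return False
    return True
```

For example, a joined DM passes `{"is_dm": True}` but fails `{"is_dm": False, "is_invite": True}`, which is why it shows up in the `dms` list but not `room-invites`.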
@@ -1522,6 +1617,98 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
             channel.json_body["lists"]["foo-list"],
         )

+    def test_sliced_windows(self) -> None:
+        """
+        Test that the `lists` `ranges` are sliced correctly. Both sides of each range
+        are inclusive.
+        """
+        user1_id = self.register_user("user1", "pass")
+        user1_tok = self.login(user1_id, "pass")
+
+        _room_id1 = self.helper.create_room_as(user1_id, tok=user1_tok, is_public=True)
+        room_id2 = self.helper.create_room_as(user1_id, tok=user1_tok, is_public=True)
+        room_id3 = self.helper.create_room_as(user1_id, tok=user1_tok, is_public=True)
+
+        # Make the Sliding Sync request for a single room
+        channel = self.make_request(
+            "POST",
+            self.sync_endpoint,
+            {
+                "lists": {
+                    "foo-list": {
+                        "ranges": [[0, 0]],
+                        "required_state": [
+                            ["m.room.join_rules", ""],
+                            ["m.room.history_visibility", ""],
+                            ["m.space.child", "*"],
+                        ],
+                        "timeline_limit": 1,
+                    }
+                }
+            },
+            access_token=user1_tok,
+        )
+        self.assertEqual(channel.code, 200, channel.json_body)
+
+        # Make sure it has the foo-list we requested
+        self.assertListEqual(
+            list(channel.json_body["lists"].keys()),
+            ["foo-list"],
+            channel.json_body["lists"].keys(),
+        )
+        # Make sure the list is sorted in the way we expect
+        self.assertListEqual(
+            list(channel.json_body["lists"]["foo-list"]["ops"]),
+            [
+                {
+                    "op": "SYNC",
+                    "range": [0, 0],
+                    "room_ids": [room_id3],
+                }
+            ],
+            channel.json_body["lists"]["foo-list"],
+        )
+
+        # Make the Sliding Sync request for the first two rooms
+        channel = self.make_request(
+            "POST",
+            self.sync_endpoint,
+            {
+                "lists": {
+                    "foo-list": {
+                        "ranges": [[0, 1]],
+                        "required_state": [
+                            ["m.room.join_rules", ""],
+                            ["m.room.history_visibility", ""],
+                            ["m.space.child", "*"],
+                        ],
+                        "timeline_limit": 1,
+                    }
+                }
+            },
+            access_token=user1_tok,
+        )
+        self.assertEqual(channel.code, 200, channel.json_body)
+
+        # Make sure it has the foo-list we requested
+        self.assertListEqual(
+            list(channel.json_body["lists"].keys()),
+            ["foo-list"],
+            channel.json_body["lists"].keys(),
+        )
+        # Make sure the list is sorted in the way we expect
+        self.assertListEqual(
+            list(channel.json_body["lists"]["foo-list"]["ops"]),
+            [
+                {
+                    "op": "SYNC",
+                    "range": [0, 1],
+                    "room_ids": [room_id3, room_id2],
+                }
+            ],
+            channel.json_body["lists"]["foo-list"],
+        )
+
     def test_rooms_limited_initial_sync(self) -> None:
         """
         Test that we mark `rooms` as `limited=True` when we saturate the `timeline_limit`
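The `test_sliced_windows` test exercises the detail that both ends of a sliding-sync `ranges` entry are inclusive, so `[0, 1]` yields two rooms. A sketch of that windowing over an already-sorted room list (the helper name is illustrative, not Synapse's implementation):

```python
def slice_window(sorted_room_ids: list, ranges: list) -> list:
    """Apply inclusive [start, end] ranges to a sorted room list,
    producing SYNC ops like the ones asserted on above."""
    ops = []
    for start, end in ranges:
        ops.append(
            {
                "op": "SYNC",
                "range": [start, end],
                # end + 1 because Python slices are half-open but the
                # sliding-sync range is inclusive on both sides
                "room_ids": sorted_room_ids[start : end + 1],
            }
        )
    return ops
```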
@@ -1788,9 +1975,7 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
         channel = self.make_request(
             "POST",
             self.sync_endpoint
-            + f"?pos={self.get_success(
-                from_token.to_string(self.store)
-            )}",
+            + f"?pos={self.get_success(from_token.to_string(self.store))}",
             {
                 "lists": {
                     "foo-list": {
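This hunk collapses a multi-line expression inside an f-string replacement field onto one line (multi-line replacement fields are only valid from Python 3.12). The resulting URL construction amounts to appending the `pos` token as a query parameter for incremental syncs; a trivial sketch of that shape (`sync_url` is an illustrative name):

```python
from typing import Optional


def sync_url(endpoint: str, pos: Optional[str]) -> str:
    """Append the `pos` token as a query parameter; omit it for initial syncs."""
    if pos is None:
        return endpoint
    return endpoint + f"?pos={pos}"
```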
@@ -1837,9 +2022,12 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):

     def test_rooms_invite_shared_history_initial_sync(self) -> None:
         """
-        Test that `rooms` we are invited to have some stripped `invite_state` and that
-        we can't see any timeline events because the history visiblity is `shared` and
-        we haven't joined the room yet.
+        Test that `rooms` we are invited to have some stripped `invite_state` during an
+        initial sync.
+
+        This is an `invite` room so we should only have `stripped_state` (no `timeline`)
+        but we also shouldn't see any timeline events because the history visiblity is
+        `shared` and we haven't joined the room yet.
         """
         user1_id = self.register_user("user1", "pass")
         user1_tok = self.login(user1_id, "pass")
@@ -1882,27 +2070,133 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
         )
         self.assertEqual(channel.code, 200, channel.json_body)

-        # Should not see anything (except maybe the invite event) because we haven't
-        # joined yet (history visibility is `shared`) (`filter_events_for_client(...)`
-        # is doing the work here)
-        self.assertEqual(
-            channel.json_body["rooms"][room_id1]["timeline"],
-            [],
-            channel.json_body["rooms"][room_id1]["timeline"],
-        )
-        # No "live" events in an initial sync (no `from_token` to define the "live"
-        # range) and no events returned in the timeline anyway so nothing could be
-        # "live".
-        self.assertEqual(
-            channel.json_body["rooms"][room_id1]["num_live"],
-            0,
+        # `timeline` is omitted for `invite` rooms with `stripped_state`
+        self.assertIsNone(
+            channel.json_body["rooms"][room_id1].get("timeline"),
             channel.json_body["rooms"][room_id1],
         )
-        # Even though we don't get any timeline events because they are filtered out,
-        # there is still more to paginate
-        self.assertEqual(
-            channel.json_body["rooms"][room_id1]["limited"],
-            True,
+        # `num_live` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
+        self.assertIsNone(
+            channel.json_body["rooms"][room_id1].get("num_live"),
+            channel.json_body["rooms"][room_id1],
+        )
+        # `limited` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
+        self.assertIsNone(
+            channel.json_body["rooms"][room_id1].get("limited"),
+            channel.json_body["rooms"][room_id1],
+        )
+        # `prev_batch` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
+        self.assertIsNone(
+            channel.json_body["rooms"][room_id1].get("prev_batch"),
+            channel.json_body["rooms"][room_id1],
+        )
+        # We should have some `stripped_state` so the potential joiner can identify the
+        # room (we don't care about the order).
+        self.assertCountEqual(
+            channel.json_body["rooms"][room_id1]["invite_state"],
+            [
+                {
+                    "content": {"creator": user2_id, "room_version": "10"},
+                    "sender": user2_id,
+                    "state_key": "",
+                    "type": "m.room.create",
+                },
+                {
+                    "content": {"join_rule": "public"},
+                    "sender": user2_id,
+                    "state_key": "",
+                    "type": "m.room.join_rules",
+                },
+                {
+                    "content": {"displayname": user2.localpart, "membership": "join"},
+                    "sender": user2_id,
+                    "state_key": user2_id,
+                    "type": "m.room.member",
+                },
+                {
+                    "content": {"displayname": user1.localpart, "membership": "invite"},
+                    "sender": user2_id,
+                    "state_key": user1_id,
+                    "type": "m.room.member",
+                },
+            ],
+            channel.json_body["rooms"][room_id1]["invite_state"],
+        )
+
+    def test_rooms_invite_shared_history_incremental_sync(self) -> None:
+        """
+        Test that `rooms` we are invited to have some stripped `invite_state` during an
+        incremental sync.
+
+        This is an `invite` room so we should only have `stripped_state` (no `timeline`)
+        but we also shouldn't see any timeline events because the history visiblity is
+        `shared` and we haven't joined the room yet.
+        """
+        user1_id = self.register_user("user1", "pass")
+        user1_tok = self.login(user1_id, "pass")
+        user1 = UserID.from_string(user1_id)
+        user2_id = self.register_user("user2", "pass")
+        user2_tok = self.login(user2_id, "pass")
+        user2 = UserID.from_string(user2_id)
+
+        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
+        # Ensure we're testing with a room with `shared` history visibility which means
+        # history visible until you actually join the room.
+        history_visibility_response = self.helper.get_state(
+            room_id1, EventTypes.RoomHistoryVisibility, tok=user2_tok
+        )
+        self.assertEqual(
+            history_visibility_response.get("history_visibility"),
+            HistoryVisibility.SHARED,
+        )
+
+        self.helper.send(room_id1, "activity before invite1", tok=user2_tok)
+        self.helper.send(room_id1, "activity before invite2", tok=user2_tok)
+        self.helper.invite(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
+        self.helper.send(room_id1, "activity after invite3", tok=user2_tok)
+        self.helper.send(room_id1, "activity after invite4", tok=user2_tok)
+
+        from_token = self.event_sources.get_current_token()
+
+        self.helper.send(room_id1, "activity after token5", tok=user2_tok)
+        self.helper.send(room_id1, "activity after toekn6", tok=user2_tok)
+
+        # Make the Sliding Sync request
+        channel = self.make_request(
+            "POST",
+            self.sync_endpoint
+            + f"?pos={self.get_success(from_token.to_string(self.store))}",
+            {
+                "lists": {
+                    "foo-list": {
+                        "ranges": [[0, 1]],
+                        "required_state": [],
+                        "timeline_limit": 3,
+                    }
+                }
+            },
+            access_token=user1_tok,
+        )
+        self.assertEqual(channel.code, 200, channel.json_body)
+
+        # `timeline` is omitted for `invite` rooms with `stripped_state`
+        self.assertIsNone(
+            channel.json_body["rooms"][room_id1].get("timeline"),
+            channel.json_body["rooms"][room_id1],
+        )
+        # `num_live` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
+        self.assertIsNone(
+            channel.json_body["rooms"][room_id1].get("num_live"),
+            channel.json_body["rooms"][room_id1],
+        )
+        # `limited` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
+        self.assertIsNone(
+            channel.json_body["rooms"][room_id1].get("limited"),
+            channel.json_body["rooms"][room_id1],
+        )
+        # `prev_batch` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
+        self.assertIsNone(
+            channel.json_body["rooms"][room_id1].get("prev_batch"),
             channel.json_body["rooms"][room_id1],
         )
         # We should have some `stripped_state` so the potential joiner can identify the
@@ -1940,9 +2234,14 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):

     def test_rooms_invite_world_readable_history_initial_sync(self) -> None:
         """
-        Test that `rooms` we are invited to have some stripped `invite_state` and that
-        we can't see any timeline events because the history visiblity is `shared` and
-        we haven't joined the room yet.
+        Test that `rooms` we are invited to have some stripped `invite_state` during an
+        initial sync.
+
+        This is an `invite` room so we should only have `stripped_state` (no `timeline`)
+        but depending on the semantics we decide, we could potentially see some
+        historical events before/after the `from_token` because the history is
+        `world_readable`. Same situation for events after the `from_token` if the
+        history visibility was set to `invited`.
         """
         user1_id = self.register_user("user1", "pass")
         user1_tok = self.login(user1_id, "pass")
@@ -1978,12 +2277,10 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
         )

         self.helper.send(room_id1, "activity before1", tok=user2_tok)
-        event_response2 = self.helper.send(room_id1, "activity before2", tok=user2_tok)
-        use1_invite_response = self.helper.invite(
-            room_id1, src=user2_id, targ=user1_id, tok=user2_tok
-        )
-        event_response3 = self.helper.send(room_id1, "activity after3", tok=user2_tok)
-        event_response4 = self.helper.send(room_id1, "activity after4", tok=user2_tok)
+        self.helper.send(room_id1, "activity before2", tok=user2_tok)
+        self.helper.invite(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
+        self.helper.send(room_id1, "activity after3", tok=user2_tok)
+        self.helper.send(room_id1, "activity after4", tok=user2_tok)

         # Make the Sliding Sync request
         channel = self.make_request(
@ -2003,31 +2300,151 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
        )
        self.assertEqual(channel.code, 200, channel.json_body)

        # `timeline` is omitted for `invite` rooms with `stripped_state`
        self.assertIsNone(
            channel.json_body["rooms"][room_id1].get("timeline"),
            channel.json_body["rooms"][room_id1],
        )
        # `num_live` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
        self.assertIsNone(
            channel.json_body["rooms"][room_id1].get("num_live"),
            channel.json_body["rooms"][room_id1],
        )
        # `limited` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
        self.assertIsNone(
            channel.json_body["rooms"][room_id1].get("limited"),
            channel.json_body["rooms"][room_id1],
        )
        # `prev_batch` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
        self.assertIsNone(
            channel.json_body["rooms"][room_id1].get("prev_batch"),
            channel.json_body["rooms"][room_id1],
        )
        # We should have some `stripped_state` so the potential joiner can identify the
        # room (we don't care about the order).
        self.assertCountEqual(
            channel.json_body["rooms"][room_id1]["invite_state"],
            [
                {
                    "content": {"creator": user2_id, "room_version": "10"},
                    "sender": user2_id,
                    "state_key": "",
                    "type": "m.room.create",
                },
                {
                    "content": {"join_rule": "public"},
                    "sender": user2_id,
                    "state_key": "",
                    "type": "m.room.join_rules",
                },
                {
                    "content": {"displayname": user2.localpart, "membership": "join"},
                    "sender": user2_id,
                    "state_key": user2_id,
                    "type": "m.room.member",
                },
                {
                    "content": {"displayname": user1.localpart, "membership": "invite"},
                    "sender": user2_id,
                    "state_key": user1_id,
                    "type": "m.room.member",
                },
            ],
            channel.json_body["rooms"][room_id1]["invite_state"],
        )

    def test_rooms_invite_world_readable_history_incremental_sync(self) -> None:
        """
        Test that `rooms` we are invited to have some stripped `invite_state` during an
        incremental sync.

        This is an `invite` room so we should only have `stripped_state` (no `timeline`)
        but depending on the semantics we decide, we could potentially see some
        historical events before/after the `from_token` because the history is
        `world_readable`. Same situation for events after the `from_token` if the
        history visibility was set to `invited`.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user1 = UserID.from_string(user1_id)
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")
        user2 = UserID.from_string(user2_id)

        room_id1 = self.helper.create_room_as(
            user2_id,
            tok=user2_tok,
            extra_content={
                "preset": "public_chat",
                "initial_state": [
                    {
                        "content": {
                            "history_visibility": HistoryVisibility.WORLD_READABLE
                        },
                        "state_key": "",
                        "type": EventTypes.RoomHistoryVisibility,
                    }
                ],
            },
        )
        # Ensure we're testing with a room with `world_readable` history visibility
        # which means events are visible to anyone even without membership.
        history_visibility_response = self.helper.get_state(
            room_id1, EventTypes.RoomHistoryVisibility, tok=user2_tok
        )
        self.assertEqual(
            history_visibility_response.get("history_visibility"),
            HistoryVisibility.WORLD_READABLE,
        )

        self.helper.send(room_id1, "activity before invite1", tok=user2_tok)
        self.helper.send(room_id1, "activity before invite2", tok=user2_tok)
        self.helper.invite(room_id1, src=user2_id, targ=user1_id, tok=user2_tok)
        self.helper.send(room_id1, "activity after invite3", tok=user2_tok)
        self.helper.send(room_id1, "activity after invite4", tok=user2_tok)

        from_token = self.event_sources.get_current_token()

        self.helper.send(room_id1, "activity after token5", tok=user2_tok)
        self.helper.send(room_id1, "activity after token6", tok=user2_tok)

        # Make the Sliding Sync request
        channel = self.make_request(
            "POST",
            self.sync_endpoint
            + f"?pos={self.get_success(from_token.to_string(self.store))}",
            {
                "lists": {
                    "foo-list": {
                        "ranges": [[0, 1]],
                        "required_state": [],
                        # Large enough to see the latest events and before the invite
                        "timeline_limit": 4,
                    }
                }
            },
            access_token=user1_tok,
        )
        self.assertEqual(channel.code, 200, channel.json_body)

        # `timeline` is omitted for `invite` rooms with `stripped_state`
        self.assertIsNone(
            channel.json_body["rooms"][room_id1].get("timeline"),
            channel.json_body["rooms"][room_id1],
        )
        # `num_live` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
        self.assertIsNone(
            channel.json_body["rooms"][room_id1].get("num_live"),
            channel.json_body["rooms"][room_id1],
        )
        # `limited` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
        self.assertIsNone(
            channel.json_body["rooms"][room_id1].get("limited"),
            channel.json_body["rooms"][room_id1],
        )
        # `prev_batch` is omitted for `invite` rooms with `stripped_state` (no timeline anyway)
        self.assertIsNone(
            channel.json_body["rooms"][room_id1].get("prev_batch"),
            channel.json_body["rooms"][room_id1],
        )
        # We should have some `stripped_state` so the potential joiner can identify the
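The stripped-state shape asserted in the tests above is all a client has to work with for an `invite` room. A minimal client-side sketch (not Synapse code; `summarise_invite` is a hypothetical helper) of pulling the renderable fields out of `invite_state`:

```python
from typing import Dict, List, Optional


def summarise_invite(invite_state: List[Dict]) -> Dict[str, Optional[str]]:
    """Extract the fields a client needs to render an invite from stripped state."""
    summary: Dict[str, Optional[str]] = {
        "creator": None,
        "join_rule": None,
        "inviter": None,
    }
    for event in invite_state:
        if event["type"] == "m.room.create":
            summary["creator"] = event["content"].get("creator")
        elif event["type"] == "m.room.join_rules":
            summary["join_rule"] = event["content"].get("join_rule")
        elif event["type"] == "m.room.member":
            # The stripped membership event for the invited user names the inviter
            if event["content"].get("membership") == "invite":
                summary["inviter"] = event.get("sender")
    return summary


stripped = [
    {
        "type": "m.room.create",
        "sender": "@user2:test",
        "state_key": "",
        "content": {"creator": "@user2:test", "room_version": "10"},
    },
    {
        "type": "m.room.join_rules",
        "sender": "@user2:test",
        "state_key": "",
        "content": {"join_rule": "public"},
    },
    {
        "type": "m.room.member",
        "sender": "@user2:test",
        "state_key": "@user1:test",
        "content": {"membership": "invite", "displayname": "user1"},
    },
]
summary = summarise_invite(stripped)
```

The event dicts mirror the ones the test expects in `invite_state`; order does not matter, which is why the test uses `assertCountEqual`.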
@ -2163,9 +2580,7 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
        channel = self.make_request(
            "POST",
            self.sync_endpoint
            + f"?pos={self.get_success(from_token.to_string(self.store))}",
            {
                "lists": {
                    "foo-list": {
@ -2233,9 +2648,7 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
        channel = self.make_request(
            "POST",
            self.sync_endpoint
            + f"?pos={self.get_success(from_token.to_string(self.store))}",
            {
                "lists": {
                    "foo-list": {

@ -36,6 +36,14 @@ class DeviceStoreTestCase(HomeserverTestCase):
    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.store = hs.get_datastores().main

    def default_config(self) -> JsonDict:
        config = super().default_config()

        # We 'enable' federation otherwise `get_device_updates_by_remote` will
        # throw an exception.
        config["federation_sender_instances"] = ["master"]
        return config

    def add_device_change(self, user_id: str, device_ids: List[str], host: str) -> None:
        """Add a device list change for the given device to
        `device_lists_outbound_pokes` table.

@ -447,7 +447,14 @@ class EventChainStoreTestCase(HomeserverTestCase):
            )

            # Actually call the function that calculates the auth chain stuff.
            new_event_links = (
                persist_events_store.calculate_chain_cover_index_for_events_txn(
                    txn, events[0].room_id, [e for e in events if e.is_state()]
                )
            )
            persist_events_store._persist_event_auth_chain_txn(
                txn, events, new_event_links
            )

        self.get_success(
            persist_events_store.db_pool.runInteraction(
@ -365,12 +365,19 @@ class EventFederationWorkerStoreTestCase(tests.unittest.HomeserverTestCase):
            },
        )

        events = [
            cast(EventBase, FakeEvent(event_id, room_id, AUTH_GRAPH[event_id]))
            for event_id in AUTH_GRAPH
        ]
        new_event_links = (
            self.persist_events.calculate_chain_cover_index_for_events_txn(
                txn, room_id, [e for e in events if e.is_state()]
            )
        )
        self.persist_events._persist_event_auth_chain_txn(
            txn,
            events,
            new_event_links,
        )

        self.get_success(
@ -544,6 +551,9 @@ class EventFederationWorkerStoreTestCase(tests.unittest.HomeserverTestCase):
        rooms.
        """

        # We allow partial covers for this test
        self.hs.get_datastores().main.tests_allow_no_chain_cover_index = True

        room_id = "@ROOM:local"

        # The silly auth graph we use to test the auth difference algorithm,
@ -628,13 +638,20 @@ class EventFederationWorkerStoreTestCase(tests.unittest.HomeserverTestCase):
            )

            # Insert all events apart from 'B'
            events = [
                cast(EventBase, FakeEvent(event_id, room_id, auth_graph[event_id]))
                for event_id in auth_graph
                if event_id != "b"
            ]
            new_event_links = (
                self.persist_events.calculate_chain_cover_index_for_events_txn(
                    txn, room_id, [e for e in events if e.is_state()]
                )
            )
            self.persist_events._persist_event_auth_chain_txn(
                txn,
                events,
                new_event_links,
            )

            # Now we insert the event 'B' without a chain cover, by temporarily
@ -647,9 +664,14 @@ class EventFederationWorkerStoreTestCase(tests.unittest.HomeserverTestCase):
                updatevalues={"has_auth_chain_index": False},
            )

            events = [cast(EventBase, FakeEvent("b", room_id, auth_graph["b"]))]
            new_event_links = (
                self.persist_events.calculate_chain_cover_index_for_events_txn(
                    txn, room_id, [e for e in events if e.is_state()]
                )
            )
            self.persist_events._persist_event_auth_chain_txn(
                txn, events, new_event_links
            )

            self.store.db_pool.simple_update_txn(

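The hunks above all make the same mechanical change: the chain-cover links are now computed up front from the state events and then passed explicitly into `_persist_event_auth_chain_txn`, instead of that function computing them internally. A toy, self-contained sketch of the two-phase shape (the `Event` type and the link format here are illustrative stand-ins, not Synapse's real types):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Event:
    event_id: str
    state: bool = False

    def is_state(self) -> bool:
        return self.state


def calculate_links(state_events: List[Event]) -> Dict[str, Tuple[int, int]]:
    # Phase 1: assign each *state* event a (chain, sequence) position; a
    # stand-in for `calculate_chain_cover_index_for_events_txn`.
    return {ev.event_id: (1, i + 1) for i, ev in enumerate(state_events)}


def persist(events: List[Event], links: Dict[str, Tuple[int, int]]) -> List[str]:
    # Phase 2: persist every event; only state events carry chain positions.
    # A stand-in for `_persist_event_auth_chain_txn(txn, events, new_event_links)`.
    return [f"{ev.event_id}@{links.get(ev.event_id)}" for ev in events]


events = [Event("a", state=True), Event("b"), Event("c", state=True)]
links = calculate_links([e for e in events if e.is_state()])
rows = persist(events, links)
```

Splitting the calculation out lets the tests (and callers) compute the cover index for just the state events of a batch before handing the whole batch to the persist step, mirroring the `[e for e in events if e.is_state()]` filtering in the diffs above.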
@ -21,20 +21,32 @@
import logging
from typing import List, Tuple
from unittest.mock import AsyncMock, patch

from immutabledict import immutabledict

from twisted.test.proto_helpers import MemoryReactor

from synapse.api.constants import Direction, EventTypes, Membership, RelationTypes
from synapse.api.filtering import Filter
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events import FrozenEventV3
from synapse.federation.federation_client import SendJoinResult
from synapse.rest import admin
from synapse.rest.client import login, room
from synapse.server import HomeServer
from synapse.storage.databases.main.stream import CurrentStateDeltaMembership
from synapse.types import (
    JsonDict,
    PersistedEventPosition,
    RoomStreamToken,
    UserID,
    create_requester,
)
from synapse.util import Clock

from tests.test_utils.event_injection import create_event
from tests.unittest import FederatingHomeserverTestCase, HomeserverTestCase

logger = logging.getLogger(__name__)

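The next hunk adds tests for `get_current_state_delta_membership_changes_for_user(...)`, whose docstrings stress one subtlety: when the server leaves a room, it inserts rows with `event_id = null` into `current_state_delta_stream` for *every* piece of current state. A simplified, self-contained model (plain tuples standing in for the real table rows; not Synapse code) of filtering those deltas down to one user's membership changes:

```python
from typing import List, Optional, Tuple

# (stream_id, type, state_key, event_id) -- a toy stand-in for a
# `current_state_delta_stream` row.
Delta = Tuple[int, str, str, Optional[str]]


def membership_deltas_for_user(rows: List[Delta], user_id: str) -> List[Delta]:
    # Keep only membership rows for this user; note that rows with a null
    # event_id (server-side leave) still count as membership changes.
    return [
        row for row in rows
        if row[1] == "m.room.member" and row[2] == user_id
    ]


rows: List[Delta] = [
    (7, "m.room.member", "@user1:test", "$join"),
    (9, "m.room.member", "@user2:test", "$leave2"),
    # Server left the room: null-event rows are written for *all* current state
    (10, "m.room.create", "", None),
    (10, "m.room.member", "@user1:test", None),
    (10, "m.room.member", "@user2:test", None),
]
changes = membership_deltas_for_user(rows, "@user1:test")
```

Filtering on `type`/`state_key` alone (and not on `event_id`) is the point: the null-event leave row must still surface as a "leave" for user1, which is exactly what `test_we_cause_server_left_room` below asserts with `event_id=None`.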
@ -543,3 +555,859 @@ class GetLastEventInRoomBeforeStreamOrderingTestCase(HomeserverTestCase):
                }
            ),
        )


class GetCurrentStateDeltaMembershipChangesForUserTestCase(HomeserverTestCase):
    """
    Test `get_current_state_delta_membership_changes_for_user(...)`
    """

    servlets = [
        admin.register_servlets,
        room.register_servlets,
        login.register_servlets,
    ]

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.store = hs.get_datastores().main
        self.event_sources = hs.get_event_sources()
        self.state_handler = self.hs.get_state_handler()
        persistence = hs.get_storage_controllers().persistence
        assert persistence is not None
        self.persistence = persistence

    def test_returns_membership_events(self) -> None:
        """
        A basic test that a membership event in the token range is returned for the user.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        before_room1_token = self.event_sources.get_current_token()

        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
        join_response = self.helper.join(room_id1, user1_id, tok=user1_tok)
        join_pos = self.get_success(
            self.store.get_position_for_event(join_response["event_id"])
        )

        after_room1_token = self.event_sources.get_current_token()

        membership_changes = self.get_success(
            self.store.get_current_state_delta_membership_changes_for_user(
                user1_id,
                from_key=before_room1_token.room_key,
                to_key=after_room1_token.room_key,
            )
        )

        # Let the whole diff show on failure
        self.maxDiff = None
        self.assertEqual(
            membership_changes,
            [
                CurrentStateDeltaMembership(
                    room_id=room_id1,
                    event_id=join_response["event_id"],
                    event_pos=join_pos,
                    membership="join",
                    sender=user1_id,
                    prev_event_id=None,
                    prev_event_pos=None,
                    prev_membership=None,
                    prev_sender=None,
                )
            ],
        )

    def test_server_left_room_after_us(self) -> None:
        """
        Test that when probing over part of the DAG where the server left the room *after
        us*, we still see the join and leave changes.

        This is to make sure we play nicely with this behavior: When the server leaves a
        room, it will insert new rows with `event_id = null` into the
        `current_state_delta_stream` table for all current state.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        before_room1_token = self.event_sources.get_current_token()

        room_id1 = self.helper.create_room_as(
            user2_id,
            tok=user2_tok,
            extra_content={
                "power_level_content_override": {
                    "users": {
                        user2_id: 100,
                        # Allow user1 to send state in the room
                        user1_id: 100,
                    }
                }
            },
        )
        join_response1 = self.helper.join(room_id1, user1_id, tok=user1_tok)
        join_pos1 = self.get_success(
            self.store.get_position_for_event(join_response1["event_id"])
        )
        # Make sure that random other non-member state that happens to have a `state_key`
        # matching the user ID doesn't mess with things.
        self.helper.send_state(
            room_id1,
            event_type="foobarbazdummy",
            state_key=user1_id,
            body={"foo": "bar"},
            tok=user1_tok,
        )
        # User1 should leave the room first
        leave_response1 = self.helper.leave(room_id1, user1_id, tok=user1_tok)
        leave_pos1 = self.get_success(
            self.store.get_position_for_event(leave_response1["event_id"])
        )

        # User2 should also leave the room (everyone has left the room which means the
        # server is no longer in the room).
        self.helper.leave(room_id1, user2_id, tok=user2_tok)

        after_room1_token = self.event_sources.get_current_token()

        # Get the membership changes for the user.
        #
        # At this point, the `current_state_delta_stream` table should look like the
        # following. When the server leaves a room, it will insert new rows with
        # `event_id = null` for all current state.
        #
        # | stream_id | room_id | type                        | state_key      | event_id | prev_event_id |
        # |-----------|---------|-----------------------------|----------------|----------|---------------|
        # | 2         | !x:test | 'm.room.create'             | ''             | $xxx     | None          |
        # | 3         | !x:test | 'm.room.member'             | '@user2:test'  | $aaa     | None          |
        # | 4         | !x:test | 'm.room.history_visibility' | ''             | $xxx     | None          |
        # | 4         | !x:test | 'm.room.join_rules'         | ''             | $xxx     | None          |
        # | 4         | !x:test | 'm.room.power_levels'       | ''             | $xxx     | None          |
        # | 7         | !x:test | 'm.room.member'             | '@user1:test'  | $ooo     | None          |
        # | 8         | !x:test | 'foobarbazdummy'            | '@user1:test'  | $xxx     | None          |
        # | 9         | !x:test | 'm.room.member'             | '@user1:test'  | $ppp     | $ooo          |
        # | 10        | !x:test | 'foobarbazdummy'            | '@user1:test'  | None     | $xxx          |
        # | 10        | !x:test | 'm.room.create'             | ''             | None     | $xxx          |
        # | 10        | !x:test | 'm.room.history_visibility' | ''             | None     | $xxx          |
        # | 10        | !x:test | 'm.room.join_rules'         | ''             | None     | $xxx          |
        # | 10        | !x:test | 'm.room.member'             | '@user1:test'  | None     | $ppp          |
        # | 10        | !x:test | 'm.room.member'             | '@user2:test'  | None     | $aaa          |
        # | 10        | !x:test | 'm.room.power_levels'       |                | None     | $xxx          |
        membership_changes = self.get_success(
            self.store.get_current_state_delta_membership_changes_for_user(
                user1_id,
                from_key=before_room1_token.room_key,
                to_key=after_room1_token.room_key,
            )
        )

        # Let the whole diff show on failure
        self.maxDiff = None
        self.assertEqual(
            membership_changes,
            [
                CurrentStateDeltaMembership(
                    room_id=room_id1,
                    event_id=join_response1["event_id"],
                    event_pos=join_pos1,
                    membership="join",
                    sender=user1_id,
                    prev_event_id=None,
                    prev_event_pos=None,
                    prev_membership=None,
                    prev_sender=None,
                ),
                CurrentStateDeltaMembership(
                    room_id=room_id1,
                    event_id=leave_response1["event_id"],
                    event_pos=leave_pos1,
                    membership="leave",
                    sender=user1_id,
                    prev_event_id=join_response1["event_id"],
                    prev_event_pos=join_pos1,
                    prev_membership="join",
                    prev_sender=user1_id,
                ),
            ],
        )

    def test_server_left_room_after_us_later(self) -> None:
        """
        Test when the user leaves the room, then sometime later, everyone else leaves
        the room, causing the server to leave the room, we shouldn't see any membership
        changes.

        This is to make sure we play nicely with this behavior: When the server leaves a
        room, it will insert new rows with `event_id = null` into the
        `current_state_delta_stream` table for all current state.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id1, user1_id, tok=user1_tok)
        # User1 should leave the room first
        self.helper.leave(room_id1, user1_id, tok=user1_tok)

        after_user1_leave_token = self.event_sources.get_current_token()

        # User2 should also leave the room (everyone has left the room which means the
        # server is no longer in the room).
        self.helper.leave(room_id1, user2_id, tok=user2_tok)

        after_server_leave_token = self.event_sources.get_current_token()

        # Join another room as user1 just to advance the stream_ordering and bust
        # `_membership_stream_cache`
        room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
        self.helper.join(room_id2, user1_id, tok=user1_tok)

        # Get the membership changes for the user.
        #
        # At this point, the `current_state_delta_stream` table should look like the
        # following. When the server leaves a room, it will insert new rows with
        # `event_id = null` for all current state.
        #
        # TODO: Add DB rows to better see what's going on.
        membership_changes = self.get_success(
            self.store.get_current_state_delta_membership_changes_for_user(
                user1_id,
                from_key=after_user1_leave_token.room_key,
                to_key=after_server_leave_token.room_key,
            )
        )

        # Let the whole diff show on failure
        self.maxDiff = None
        self.assertEqual(
            membership_changes,
            [],
        )

    def test_we_cause_server_left_room(self) -> None:
        """
        Test that when probing over part of the DAG where the user leaves the room
        causing the server to leave the room (because we were the last local user in the
        room), we still see the join and leave changes.

        This is to make sure we play nicely with this behavior: When the server leaves a
        room, it will insert new rows with `event_id = null` into the
        `current_state_delta_stream` table for all current state.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        before_room1_token = self.event_sources.get_current_token()

        room_id1 = self.helper.create_room_as(
            user2_id,
            tok=user2_tok,
            extra_content={
                "power_level_content_override": {
                    "users": {
                        user2_id: 100,
                        # Allow user1 to send state in the room
                        user1_id: 100,
                    }
                }
            },
        )
        join_response1 = self.helper.join(room_id1, user1_id, tok=user1_tok)
        join_pos1 = self.get_success(
            self.store.get_position_for_event(join_response1["event_id"])
        )
        # Make sure that random other non-member state that happens to have a `state_key`
        # matching the user ID doesn't mess with things.
        self.helper.send_state(
            room_id1,
            event_type="foobarbazdummy",
            state_key=user1_id,
            body={"foo": "bar"},
            tok=user1_tok,
        )

        # User2 should leave the room first.
        self.helper.leave(room_id1, user2_id, tok=user2_tok)

        # User1 (the person we're testing with) should also leave the room (everyone has
        # left the room which means the server is no longer in the room).
        leave_response1 = self.helper.leave(room_id1, user1_id, tok=user1_tok)
        leave_pos1 = self.get_success(
            self.store.get_position_for_event(leave_response1["event_id"])
        )

        after_room1_token = self.event_sources.get_current_token()

        # Get the membership changes for the user.
        #
        # At this point, the `current_state_delta_stream` table should look like the
        # following. When the server leaves a room, it will insert new rows with
        # `event_id = null` for all current state.
        #
        # | stream_id | room_id   | type                        | state_key     | event_id | prev_event_id |
        # |-----------|-----------|-----------------------------|---------------|----------|---------------|
        # | 2         | '!x:test' | 'm.room.create'             | ''            | '$xxx'   | None          |
        # | 3         | '!x:test' | 'm.room.member'             | '@user2:test' | '$aaa'   | None          |
        # | 4         | '!x:test' | 'm.room.history_visibility' | ''            | '$xxx'   | None          |
        # | 4         | '!x:test' | 'm.room.join_rules'         | ''            | '$xxx'   | None          |
        # | 4         | '!x:test' | 'm.room.power_levels'       | ''            | '$xxx'   | None          |
        # | 7         | '!x:test' | 'm.room.member'             | '@user1:test' | '$ooo'   | None          |
        # | 8         | '!x:test' | 'foobarbazdummy'            | '@user1:test' | '$xxx'   | None          |
        # | 9         | '!x:test' | 'm.room.member'             | '@user2:test' | '$bbb'   | '$aaa'        |
        # | 10        | '!x:test' | 'foobarbazdummy'            | '@user1:test' | None     | '$xxx'        |
        # | 10        | '!x:test' | 'm.room.create'             | ''            | None     | '$xxx'        |
        # | 10        | '!x:test' | 'm.room.history_visibility' | ''            | None     | '$xxx'        |
        # | 10        | '!x:test' | 'm.room.join_rules'         | ''            | None     | '$xxx'        |
        # | 10        | '!x:test' | 'm.room.member'             | '@user1:test' | None     | '$ooo'        |
        # | 10        | '!x:test' | 'm.room.member'             | '@user2:test' | None     | '$bbb'        |
        # | 10        | '!x:test' | 'm.room.power_levels'       | ''            | None     | '$xxx'        |
        membership_changes = self.get_success(
            self.store.get_current_state_delta_membership_changes_for_user(
                user1_id,
                from_key=before_room1_token.room_key,
                to_key=after_room1_token.room_key,
            )
        )

        # Let the whole diff show on failure
        self.maxDiff = None
        self.assertEqual(
            membership_changes,
            [
                CurrentStateDeltaMembership(
                    room_id=room_id1,
                    event_id=join_response1["event_id"],
                    event_pos=join_pos1,
                    membership="join",
                    sender=user1_id,
                    prev_event_id=None,
                    prev_event_pos=None,
                    prev_membership=None,
                    prev_sender=None,
                ),
                CurrentStateDeltaMembership(
                    room_id=room_id1,
                    event_id=None,  # leave_response1["event_id"],
                    event_pos=leave_pos1,
                    membership="leave",
                    sender=None,  # user1_id,
                    prev_event_id=join_response1["event_id"],
                    prev_event_pos=join_pos1,
                    prev_membership="join",
                    prev_sender=user1_id,
                ),
            ],
        )

    def test_different_user_membership_persisted_in_same_batch(self) -> None:
        """
        Test batch of membership events from different users being processed at once.
        This will result in all of the memberships being stored in the
        `current_state_delta_stream` table with the same `stream_ordering` even though
        the individual events have different `stream_ordering`s.
        """
        user1_id = self.register_user("user1", "pass")
        _user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")
        user3_id = self.register_user("user3", "pass")
        _user3_tok = self.login(user3_id, "pass")
        user4_id = self.register_user("user4", "pass")
        _user4_tok = self.login(user4_id, "pass")

        before_room1_token = self.event_sources.get_current_token()

        # User2 is just the designated person to create the room (we do this across the
        # tests to be consistent)
        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)

        # Persist the user1, user3, and user4 join events in the same batch so they all
        # end up in the `current_state_delta_stream` table with the same
        # stream_ordering.
        join_event3, join_event_context3 = self.get_success(
            create_event(
                self.hs,
                sender=user3_id,
                type=EventTypes.Member,
                state_key=user3_id,
                content={"membership": "join"},
                room_id=room_id1,
            )
        )
        # We want to put user1 in the middle of the batch. This way, regardless of the
        # implementation that inserts rows into `current_state_delta_stream` (whether it
        # be minimum/maximum of stream position of the batch), we will still catch bugs.
        join_event1, join_event_context1 = self.get_success(
            create_event(
                self.hs,
                sender=user1_id,
                type=EventTypes.Member,
                state_key=user1_id,
                content={"membership": "join"},
                room_id=room_id1,
            )
        )
        join_event4, join_event_context4 = self.get_success(
            create_event(
                self.hs,
                sender=user4_id,
                type=EventTypes.Member,
                state_key=user4_id,
                content={"membership": "join"},
                room_id=room_id1,
            )
        )
        self.get_success(
            self.persistence.persist_events(
                [
                    (join_event3, join_event_context3),
                    (join_event1, join_event_context1),
                    (join_event4, join_event_context4),
                ]
            )
        )

        after_room1_token = self.event_sources.get_current_token()

        # Get the membership changes for the user.
        #
        # At this point, the `current_state_delta_stream` table should look like (notice
        # those three memberships at the end with `stream_id=7` because we persisted
        # them in the same batch):
        #
        # | stream_id | room_id   | type                        | state_key     | event_id | prev_event_id |
        # |-----------|-----------|-----------------------------|---------------|----------|---------------|
        # | 2         | '!x:test' | 'm.room.create'             | ''            | '$xxx'   | None          |
        # | 3         | '!x:test' | 'm.room.member'             | '@user2:test' | '$xxx'   | None          |
        # | 4         | '!x:test' | 'm.room.history_visibility' | ''            | '$xxx'   | None          |
        # | 4         | '!x:test' | 'm.room.join_rules'         | ''            | '$xxx'   | None          |
        # | 4         | '!x:test' | 'm.room.power_levels'       | ''            | '$xxx'   | None          |
        # | 7         | '!x:test' | 'm.room.member'             | '@user3:test' | '$xxx'   | None          |
        # | 7         | '!x:test' | 'm.room.member'             | '@user1:test' | '$xxx'   | None          |
        # | 7         | '!x:test' | 'm.room.member'             | '@user4:test' | '$xxx'   | None          |
        membership_changes = self.get_success(
            self.store.get_current_state_delta_membership_changes_for_user(
                user1_id,
                from_key=before_room1_token.room_key,
                to_key=after_room1_token.room_key,
            )
        )

        join_pos3 = self.get_success(
            self.store.get_position_for_event(join_event3.event_id)
        )

        # Let the whole diff show on failure
        self.maxDiff = None
        self.assertEqual(
            membership_changes,
            [
                CurrentStateDeltaMembership(
                    room_id=room_id1,
                    event_id=join_event1.event_id,
|
||||||
|
# Ideally, this would be `join_pos1` (to match the `event_id`) but
|
||||||
|
# when events are persisted in a batch, they are all stored in the
|
||||||
|
# `current_state_delta_stream` table with the minimum
|
||||||
|
# `stream_ordering` from the batch.
|
||||||
|
event_pos=join_pos3,
|
||||||
|
membership="join",
|
||||||
|
sender=user1_id,
|
||||||
|
prev_event_id=None,
|
||||||
|
prev_event_pos=None,
|
||||||
|
prev_membership=None,
|
||||||
|
prev_sender=None,
|
||||||
|
),
|
||||||
|
],
|
||||||
|
)
|
||||||
|
|
||||||
|
    def test_state_reset(self) -> None:
        """
        Test a state reset scenario where the user gets removed from the room (when
        there is no corresponding leave event).
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
        join_response1 = self.helper.join(room_id1, user1_id, tok=user1_tok)
        join_pos1 = self.get_success(
            self.store.get_position_for_event(join_response1["event_id"])
        )

        before_reset_token = self.event_sources.get_current_token()

        # Send another state event to make a position for the state reset to happen at
        dummy_state_response = self.helper.send_state(
            room_id1,
            event_type="foobarbaz",
            state_key="",
            body={"foo": "bar"},
            tok=user2_tok,
        )
        dummy_state_pos = self.get_success(
            self.store.get_position_for_event(dummy_state_response["event_id"])
        )

        # Mock a state reset removing the membership for user1 in the current state
        self.get_success(
            self.store.db_pool.simple_delete(
                table="current_state_events",
                keyvalues={
                    "room_id": room_id1,
                    "type": EventTypes.Member,
                    "state_key": user1_id,
                },
                desc="state reset user in current_state_delta_stream",
            )
        )
        self.get_success(
            self.store.db_pool.simple_insert(
                table="current_state_delta_stream",
                values={
                    "stream_id": dummy_state_pos.stream,
                    "room_id": room_id1,
                    "type": EventTypes.Member,
                    "state_key": user1_id,
                    "event_id": None,
                    "prev_event_id": join_response1["event_id"],
                    "instance_name": dummy_state_pos.instance_name,
                },
                desc="state reset user in current_state_delta_stream",
            )
        )

        # Manually bust the cache since we're just manually messing with the database
        # and not causing an actual state reset.
        self.store._membership_stream_cache.entity_has_changed(
            user1_id, dummy_state_pos.stream
        )

        after_reset_token = self.event_sources.get_current_token()

        membership_changes = self.get_success(
            self.store.get_current_state_delta_membership_changes_for_user(
                user1_id,
                from_key=before_reset_token.room_key,
                to_key=after_reset_token.room_key,
            )
        )

        # Let the whole diff show on failure
        self.maxDiff = None
        self.assertEqual(
            membership_changes,
            [
                CurrentStateDeltaMembership(
                    room_id=room_id1,
                    event_id=None,
                    event_pos=dummy_state_pos,
                    membership="leave",
                    sender=None,  # user1_id,
                    prev_event_id=join_response1["event_id"],
                    prev_event_pos=join_pos1,
                    prev_membership="join",
                    prev_sender=user1_id,
                ),
            ],
        )

    def test_excluded_room_ids(self) -> None:
        """
        Test that the `excluded_room_ids` option excludes changes from the specified rooms.
        """
        user1_id = self.register_user("user1", "pass")
        user1_tok = self.login(user1_id, "pass")
        user2_id = self.register_user("user2", "pass")
        user2_tok = self.login(user2_id, "pass")

        before_room1_token = self.event_sources.get_current_token()

        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok)
        join_response1 = self.helper.join(room_id1, user1_id, tok=user1_tok)
        join_pos1 = self.get_success(
            self.store.get_position_for_event(join_response1["event_id"])
        )

        room_id2 = self.helper.create_room_as(user2_id, tok=user2_tok)
        join_response2 = self.helper.join(room_id2, user1_id, tok=user1_tok)
        join_pos2 = self.get_success(
            self.store.get_position_for_event(join_response2["event_id"])
        )

        after_room1_token = self.event_sources.get_current_token()

        # First test that the rooms are returned without the `excluded_room_ids` option
        membership_changes = self.get_success(
            self.store.get_current_state_delta_membership_changes_for_user(
                user1_id,
                from_key=before_room1_token.room_key,
                to_key=after_room1_token.room_key,
            )
        )

        # Let the whole diff show on failure
        self.maxDiff = None
        self.assertEqual(
            membership_changes,
            [
                CurrentStateDeltaMembership(
                    room_id=room_id1,
                    event_id=join_response1["event_id"],
                    event_pos=join_pos1,
                    membership="join",
                    sender=user1_id,
                    prev_event_id=None,
                    prev_event_pos=None,
                    prev_membership=None,
                    prev_sender=None,
                ),
                CurrentStateDeltaMembership(
                    room_id=room_id2,
                    event_id=join_response2["event_id"],
                    event_pos=join_pos2,
                    membership="join",
                    sender=user1_id,
                    prev_event_id=None,
                    prev_event_pos=None,
                    prev_membership=None,
                    prev_sender=None,
                ),
            ],
        )

        # Then test that `excluded_room_ids` excludes room2 as expected
        membership_changes = self.get_success(
            self.store.get_current_state_delta_membership_changes_for_user(
                user1_id,
                from_key=before_room1_token.room_key,
                to_key=after_room1_token.room_key,
                excluded_room_ids=[room_id2],
            )
        )

        # Let the whole diff show on failure
        self.maxDiff = None
        self.assertEqual(
            membership_changes,
            [
                CurrentStateDeltaMembership(
                    room_id=room_id1,
                    event_id=join_response1["event_id"],
                    event_pos=join_pos1,
                    membership="join",
                    sender=user1_id,
                    prev_event_id=None,
                    prev_event_pos=None,
                    prev_membership=None,
                    prev_sender=None,
                )
            ],
        )


class GetCurrentStateDeltaMembershipChangesForUserFederationTestCase(
    FederatingHomeserverTestCase
):
    """
    Test `get_current_state_delta_membership_changes_for_user(...)` when joining remote federated rooms.
    """

    servlets = [
        admin.register_servlets_for_client_rest_resource,
        room.register_servlets,
        login.register_servlets,
    ]

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.sliding_sync_handler = self.hs.get_sliding_sync_handler()
        self.store = self.hs.get_datastores().main
        self.event_sources = hs.get_event_sources()
        self.room_member_handler = hs.get_room_member_handler()

    def test_remote_join(self) -> None:
        """
        Test remote join where the first rows in `current_state_delta_stream` will just
        be the state when you joined the remote room.
        """
        user1_id = self.register_user("user1", "pass")
        _user1_tok = self.login(user1_id, "pass")

        before_join_token = self.event_sources.get_current_token()

        intially_unjoined_room_id = f"!example:{self.OTHER_SERVER_NAME}"

        # Remotely join a room on another homeserver.
        #
        # To do this we have to mock the responses from the remote homeserver. We also
        # patch out a bunch of event checks on our end.
        create_event_source = {
            "auth_events": [],
            "content": {
                "creator": f"@creator:{self.OTHER_SERVER_NAME}",
                "room_version": self.hs.config.server.default_room_version.identifier,
            },
            "depth": 0,
            "origin_server_ts": 0,
            "prev_events": [],
            "room_id": intially_unjoined_room_id,
            "sender": f"@creator:{self.OTHER_SERVER_NAME}",
            "state_key": "",
            "type": EventTypes.Create,
        }
        self.add_hashes_and_signatures_from_other_server(
            create_event_source,
            self.hs.config.server.default_room_version,
        )
        create_event = FrozenEventV3(
            create_event_source,
            self.hs.config.server.default_room_version,
            {},
            None,
        )
        creator_join_event_source = {
            "auth_events": [create_event.event_id],
            "content": {
                "membership": "join",
            },
            "depth": 1,
            "origin_server_ts": 1,
            "prev_events": [],
            "room_id": intially_unjoined_room_id,
            "sender": f"@creator:{self.OTHER_SERVER_NAME}",
            "state_key": f"@creator:{self.OTHER_SERVER_NAME}",
            "type": EventTypes.Member,
        }
        self.add_hashes_and_signatures_from_other_server(
            creator_join_event_source,
            self.hs.config.server.default_room_version,
        )
        creator_join_event = FrozenEventV3(
            creator_join_event_source,
            self.hs.config.server.default_room_version,
            {},
            None,
        )

        # Our local user is going to remote join the room
        join_event_source = {
            "auth_events": [create_event.event_id],
            "content": {"membership": "join"},
            "depth": 1,
            "origin_server_ts": 100,
            "prev_events": [creator_join_event.event_id],
            "sender": user1_id,
            "state_key": user1_id,
            "room_id": intially_unjoined_room_id,
            "type": EventTypes.Member,
        }
        add_hashes_and_signatures(
            self.hs.config.server.default_room_version,
            join_event_source,
            self.hs.hostname,
            self.hs.signing_key,
        )
        join_event = FrozenEventV3(
            join_event_source,
            self.hs.config.server.default_room_version,
            {},
            None,
        )

        mock_make_membership_event = AsyncMock(
            return_value=(
                self.OTHER_SERVER_NAME,
                join_event,
                self.hs.config.server.default_room_version,
            )
        )
        mock_send_join = AsyncMock(
            return_value=SendJoinResult(
                join_event,
                self.OTHER_SERVER_NAME,
                state=[create_event, creator_join_event],
                auth_chain=[create_event, creator_join_event],
                partial_state=False,
                servers_in_room=frozenset(),
            )
        )

        with patch.object(
            self.room_member_handler.federation_handler.federation_client,
            "make_membership_event",
            mock_make_membership_event,
        ), patch.object(
            self.room_member_handler.federation_handler.federation_client,
            "send_join",
            mock_send_join,
        ), patch(
            "synapse.event_auth._is_membership_change_allowed",
            return_value=None,
        ), patch(
            "synapse.handlers.federation_event.check_state_dependent_auth_rules",
            return_value=None,
        ):
            self.get_success(
                self.room_member_handler.update_membership(
                    requester=create_requester(user1_id),
                    target=UserID.from_string(user1_id),
                    room_id=intially_unjoined_room_id,
                    action=Membership.JOIN,
                    remote_room_hosts=[self.OTHER_SERVER_NAME],
                )
            )

        after_join_token = self.event_sources.get_current_token()

        # Get the membership changes for the user.
        #
        # At this point, the `current_state_delta_stream` table should look like the
        # following. Notice that all of the events are at the same `stream_id` because
        # the current state starts out where we remotely joined:
        #
        # | stream_id | room_id                      | type            | state_key                    | event_id | prev_event_id |
        # |-----------|------------------------------|-----------------|------------------------------|----------|---------------|
        # | 2         | '!example:other.example.com' | 'm.room.member' | '@user1:test'                | '$xxx'   | None          |
        # | 2         | '!example:other.example.com' | 'm.room.create' | ''                           | '$xxx'   | None          |
        # | 2         | '!example:other.example.com' | 'm.room.member' | '@creator:other.example.com' | '$xxx'   | None          |
        membership_changes = self.get_success(
            self.store.get_current_state_delta_membership_changes_for_user(
                user1_id,
                from_key=before_join_token.room_key,
                to_key=after_join_token.room_key,
            )
        )

        join_pos = self.get_success(
            self.store.get_position_for_event(join_event.event_id)
        )

        # Let the whole diff show on failure
        self.maxDiff = None
        self.assertEqual(
            membership_changes,
            [
                CurrentStateDeltaMembership(
                    room_id=intially_unjoined_room_id,
                    event_id=join_event.event_id,
                    event_pos=join_pos,
                    membership="join",
                    sender=user1_id,
                    prev_event_id=None,
                    prev_event_pos=None,
                    prev_membership=None,
                    prev_sender=None,
                ),
            ],
        )

@@ -344,6 +344,8 @@ class HomeserverTestCase(TestCase):
         self._hs_args = {"clock": self.clock, "reactor": self.reactor}
         self.hs = self.make_homeserver(self.reactor, self.clock)
 
+        self.hs.get_datastores().main.tests_allow_no_chain_cover_index = False
+
         # Honour the `use_frozen_dicts` config option. We have to do this
         # manually because this is taken care of in the app `start` code, which
         # we don't run. Plus we want to reset it on tearDown.