mirror of https://github.com/element-hq/synapse
synced 2024-09-29 15:52:43 +00:00

Merge branch 'develop' into register-email-3pid-race

commit 3261ca84cc
122 changed files with 5385 additions and 1450 deletions
.github/workflows/docs-pr-netlify.yaml (vendored, 2 lines changed)

@@ -14,7 +14,7 @@ jobs:
       # There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
       # (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
       - name: 📥 Download artifact
-        uses: dawidd6/action-download-artifact@09f2f74827fd3a8607589e5ad7f9398816f540fe # v3.1.4
+        uses: dawidd6/action-download-artifact@deb3bb83256a78589fef6a7b942e5f2573ad7c13 # v5
         with:
           workflow: docs-pr.yaml
           run_id: ${{ github.event.workflow_run.id }}
CHANGES.md (71 lines changed)

@@ -1,3 +1,74 @@
+# Synapse 1.109.0rc2 (2024-06-11)
+
+### Bugfixes
+
+- Fix bug where one-time-keys were not always included in `/sync` response when using workers. Introduced in v1.109.0rc1. ([\#17275](https://github.com/element-hq/synapse/issues/17275))
+- Fix bug where `/sync` could get stuck due to an edge case in device lists handling. Introduced in v1.109.0rc1. ([\#17292](https://github.com/element-hq/synapse/issues/17292))
+
+# Synapse 1.109.0rc1 (2024-06-04)
+
+### Features
+
+- Add the ability to auto-accept invites on behalf of users. See the [`auto_accept_invites`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#auto-accept-invites) config option for details. ([\#17147](https://github.com/element-hq/synapse/issues/17147))
+- Add experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync/e2ee` endpoint for to-device messages and device encryption info. ([\#17167](https://github.com/element-hq/synapse/issues/17167))
+- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/issues/3916) by adding unstable media endpoints to `/_matrix/client`. ([\#17213](https://github.com/element-hq/synapse/issues/17213))
+- Add logging to tasks managed by the task scheduler, showing CPU and database usage. ([\#17219](https://github.com/element-hq/synapse/issues/17219))
+
+### Bugfixes
+
+- Fix deduplication of membership events so that it does not create unused state groups. ([\#17164](https://github.com/element-hq/synapse/issues/17164))
+- Fix bug where duplicate events could be sent down sync when using workers that are overloaded. ([\#17215](https://github.com/element-hq/synapse/issues/17215))
+- Ignore attempts to send to-device messages to bad users, to avoid log spam when we try to connect to the bad server. ([\#17240](https://github.com/element-hq/synapse/issues/17240))
+- Fix handling of duplicate concurrent uploads of device one-time-keys. ([\#17241](https://github.com/element-hq/synapse/issues/17241))
+- Fix reporting of default tags to Sentry, such as the worker name. Broke in v1.108.0. ([\#17251](https://github.com/element-hq/synapse/issues/17251))
+- Fix bug where typing updates would not be sent when using workers after a restart. ([\#17252](https://github.com/element-hq/synapse/issues/17252))
+
+### Improved Documentation
+
+- Update the LemonLDAP documentation to say that claims should be explicitly included in the returned `id_token`, as Synapse won't request them. ([\#17204](https://github.com/element-hq/synapse/issues/17204))
+
+### Internal Changes
+
+- Improve DB usage when fetching related events. ([\#17083](https://github.com/element-hq/synapse/issues/17083))
+- Log exceptions when failing to auto-join a new user according to the `auto_join_rooms` option. ([\#17176](https://github.com/element-hq/synapse/issues/17176))
+- Reduce the work of calculating outbound device list updates. ([\#17211](https://github.com/element-hq/synapse/issues/17211))
+- Improve performance of calculating device list changes in `/sync`. ([\#17216](https://github.com/element-hq/synapse/issues/17216))
+- Move towards using `MultiWriterIdGenerator` everywhere. ([\#17226](https://github.com/element-hq/synapse/issues/17226))
+- Replace all usages of `StreamIdGenerator` with `MultiWriterIdGenerator`. ([\#17229](https://github.com/element-hq/synapse/issues/17229))
+- Change the `allow_unsafe_locale` config option to also apply when setting up new databases. ([\#17238](https://github.com/element-hq/synapse/issues/17238))
+- Fix errors in logs about closing incorrect logging contexts when media gets rejected by a module. ([\#17239](https://github.com/element-hq/synapse/issues/17239), [\#17246](https://github.com/element-hq/synapse/issues/17246))
+- Clean invalid destinations out of the `device_federation_outbox` table. ([\#17242](https://github.com/element-hq/synapse/issues/17242))
+- Stop logging errors when receiving invalid User IDs in key query requests. ([\#17250](https://github.com/element-hq/synapse/issues/17250))
+
+### Updates to locked dependencies
+
+* Bump anyhow from 1.0.83 to 1.0.86. ([\#17220](https://github.com/element-hq/synapse/issues/17220))
+* Bump bcrypt from 4.1.2 to 4.1.3. ([\#17224](https://github.com/element-hq/synapse/issues/17224))
+* Bump lxml from 5.2.1 to 5.2.2. ([\#17261](https://github.com/element-hq/synapse/issues/17261))
+* Bump mypy-zope from 1.0.3 to 1.0.4. ([\#17262](https://github.com/element-hq/synapse/issues/17262))
+* Bump phonenumbers from 8.13.35 to 8.13.37. ([\#17235](https://github.com/element-hq/synapse/issues/17235))
+* Bump prometheus-client from 0.19.0 to 0.20.0. ([\#17233](https://github.com/element-hq/synapse/issues/17233))
+* Bump pyasn1 from 0.5.1 to 0.6.0. ([\#17223](https://github.com/element-hq/synapse/issues/17223))
+* Bump pyicu from 2.13 to 2.13.1. ([\#17236](https://github.com/element-hq/synapse/issues/17236))
+* Bump pyopenssl from 24.0.0 to 24.1.0. ([\#17234](https://github.com/element-hq/synapse/issues/17234))
+* Bump serde from 1.0.201 to 1.0.202. ([\#17221](https://github.com/element-hq/synapse/issues/17221))
+* Bump serde from 1.0.202 to 1.0.203. ([\#17232](https://github.com/element-hq/synapse/issues/17232))
+* Bump twine from 5.0.0 to 5.1.0. ([\#17225](https://github.com/element-hq/synapse/issues/17225))
+* Bump types-psycopg2 from 2.9.21.20240311 to 2.9.21.20240417. ([\#17222](https://github.com/element-hq/synapse/issues/17222))
+* Bump types-pyopenssl from 24.0.0.20240311 to 24.1.0.20240425. ([\#17260](https://github.com/element-hq/synapse/issues/17260))
+
+# Synapse 1.108.0 (2024-05-28)
+
+No significant changes since 1.108.0rc1.
+
+
 # Synapse 1.108.0rc1 (2024-05-21)
 
 ### Features
Cargo.lock (generated, 12 lines changed)

@@ -444,9 +444,9 @@ dependencies = [
 
 [[package]]
 name = "regex"
-version = "1.10.4"
+version = "1.10.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c117dbdfde9c8308975b6a18d71f3f385c89461f7b3fb054288ecf2a2058ba4c"
+checksum = "b91213439dad192326a0d7c6ee3955910425f441d7038e0d6933b0aec5c4517f"
 dependencies = [
  "aho-corasick",
  "memchr",

@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 
 [[package]]
 name = "serde"
-version = "1.0.202"
+version = "1.0.203"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "226b61a0d411b2ba5ff6d7f73a476ac4f8bb900373459cd00fab8512828ba395"
+checksum = "7253ab4de971e72fb7be983802300c30b5a7f0c2e56fab8abfc6a214307c0094"
 dependencies = [
  "serde_derive",
 ]
 
 [[package]]
 name = "serde_derive"
-version = "1.0.202"
+version = "1.0.203"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6048858004bcff69094cd972ed40a32500f153bd3be9f716b2eed2e8217c4838"
+checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba"
 dependencies = [
  "proc-macro2",
  "quote",
(deleted changelog.d entry)
@@ -1 +0,0 @@
-Add the ability to auto-accept invites on the behalf of users. See the [`auto_accept_invites`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#auto-accept-invites) config option for details.

(deleted changelog.d entry)
@@ -1 +0,0 @@
-Add experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync/e2ee` endpoint for To-Device messages and device encryption info.

changelog.d/17172.feature (new file, 2 lines)
@@ -0,0 +1,2 @@
+Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md)
+by adding a federation /download endpoint (#17172).

(deleted changelog.d entry)
@@ -1 +0,0 @@
-Log exceptions when failing to auto-join new user according to the `auto_join_rooms` option.

changelog.d/17187.feature (new file, 1 line)
@@ -0,0 +1 @@
+Add initial implementation of an experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.

(deleted changelog.d entry)
@@ -1 +0,0 @@
-Update OIDC documentation: by default Matrix doesn't query userinfo endpoint, then claims should be put on id_token.

(deleted changelog.d entry)
@@ -1 +0,0 @@
-Reduce work of calculating outbound device lists updates.

(deleted changelog.d entry)
@@ -1 +0,0 @@
-Support MSC3916 by adding unstable media endpoints to `_matrix/client` (#17213).

(deleted changelog.d entry)
@@ -1 +0,0 @@
-Improve performance of calculating device lists changes in `/sync`.

(deleted changelog.d entry)
@@ -1 +0,0 @@
-Add logging to tasks managed by the task scheduler, showing CPU and database usage.

changelog.d/17254.bugfix (new file, 1 line)
@@ -0,0 +1 @@
+Fix searching for users with their exact localpart whose ID includes a hyphen.

changelog.d/17256.feature (new file, 1 line)
@@ -0,0 +1 @@
+Improve ratelimiting in Synapse (#17256).

changelog.d/17265.misc (new file, 1 line)
@@ -0,0 +1 @@
+Use fully-qualified `PersistedEventPosition` when returning `RoomsForUser` to facilitate proper comparisons and `RoomStreamToken` generation.

changelog.d/17266.misc (new file, 1 line)
@@ -0,0 +1 @@
+Add debug logging for when room keys are uploaded, including whether they are replacing other room keys.

changelog.d/17270.feature (new file, 1 line)
@@ -0,0 +1 @@
+Add support for the unstable [MSC4151](https://github.com/matrix-org/matrix-spec-proposals/pull/4151) report room API.

changelog.d/17271.misc (new file, 1 line)
@@ -0,0 +1 @@
+Handle OTK uploads off master.

changelog.d/17272.bugfix (new file, 1 line)
@@ -0,0 +1 @@
+Fix wrong retention policy being used when filtering events.

changelog.d/17273.misc (new file, 1 line)
@@ -0,0 +1 @@
+Don't try and resync devices for remote users whose servers are marked as down.

changelog.d/17275.bugfix (new file, 1 line)
@@ -0,0 +1 @@
+Fix bug where OTKs were not always included in `/sync` response when using workers.

changelog.d/17279.misc (new file, 1 line)
@@ -0,0 +1 @@
+Re-organize Pydantic models and types used in handlers.
debian/changelog (vendored, 18 lines changed)

@@ -1,3 +1,21 @@
+matrix-synapse-py3 (1.109.0~rc2) stable; urgency=medium
+
+  * New synapse release 1.109.0rc2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 11 Jun 2024 13:20:17 +0000
+
+matrix-synapse-py3 (1.109.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.109.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 04 Jun 2024 09:42:46 +0100
+
+matrix-synapse-py3 (1.108.0) stable; urgency=medium
+
+  * New Synapse release 1.108.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 28 May 2024 11:54:22 +0100
+
 matrix-synapse-py3 (1.108.0~rc1) stable; urgency=medium
 
   * New Synapse release 1.108.0rc1.
@@ -242,12 +242,11 @@ host    all         all             ::1/128     ident
 
 ### Fixing incorrect `COLLATE` or `CTYPE`
 
-Synapse will refuse to set up a new database if it has the wrong values of
-`COLLATE` and `CTYPE` set. Synapse will also refuse to start an existing database with incorrect values
-of `COLLATE` and `CTYPE` unless the config flag `allow_unsafe_locale`, found in the
-`database` section of the config, is set to true. Using different locales can cause issues if the locale library is updated from
-underneath the database, or if a different version of the locale is used on any
-replicas.
+Synapse will refuse to start when using a database with incorrect values of
+`COLLATE` and `CTYPE` unless the config flag `allow_unsafe_locale`, found in the
+`database` section of the config, is set to true. Using different locales can
+cause issues if the locale library is updated from underneath the database, or
+if a different version of the locale is used on any replicas.
 
 If you have a database with an unsafe locale, the safest way to fix the issue is to dump the database and recreate it with
 the correct locale parameter (as shown above). It is also possible to change the
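For reference, the `allow_unsafe_locale` flag discussed above lives in the `database` section of the homeserver config. A minimal sketch follows; the connection parameters are illustrative placeholders, not values taken from this commit:

```yaml
database:
  name: psycopg2
  args:
    user: synapse_user
    database: synapse
    host: localhost
  # Skip the COLLATE/CTYPE safety check. Risky: a locale library change
  # underneath the database can corrupt text indexes.
  allow_unsafe_locale: true
```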
@@ -1946,6 +1946,24 @@ Example configuration:
 max_image_pixels: 35M
 ```
 ---
+### `remote_media_download_burst_count`
+
+Remote media downloads are ratelimited using a [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket), where a given "bucket" is keyed to the IP address of the requester when requesting remote media downloads. This configuration option sets the size of the bucket against which the size in bytes of downloads is penalized: if the bucket is full, i.e. a given number of bytes have already been downloaded, further downloads will be denied until the bucket drains. Defaults to 500MiB. See also `remote_media_download_per_second`, which determines the rate at which the "bucket" is emptied and thus has available space to authorize new requests.
+
+Example configuration:
+```yaml
+remote_media_download_burst_count: 200M
+```
+---
+### `remote_media_download_per_second`
+
+Works in conjunction with `remote_media_download_burst_count` to ratelimit remote media downloads: this configuration option determines the rate at which the "bucket" (see above) leaks, in bytes per second. As requests to download remote media are made, their size in bytes is added to the bucket, and once the bucket has reached its capacity, no more requests are allowed until a number of bytes has "drained" from it. This setting determines the drain rate, so the larger the number, the faster the bucket leaks and the more bytes can be downloaded over a shorter period of time. Defaults to 87KiB per second. See also `remote_media_download_burst_count`.
+
+Example configuration:
+```yaml
+remote_media_download_per_second: 40K
+```
+---
 ### `prevent_media_downloads_from`
 
 A list of domains to never download media from. Media from these
poetry.lock (generated, 379 lines changed)

@@ -912,13 +912,13 @@ trio = ["async_generator", "trio"]
 
 [[package]]
 name = "jinja2"
-version = "3.1.3"
+version = "3.1.4"
 description = "A very fast and expressive template engine."
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "Jinja2-3.1.3-py3-none-any.whl", hash = "sha256:7d6d50dd97d52cbc355597bd845fabfbac3f551e1f99619e39a35ce8c370b5fa"},
+    {file = "jinja2-3.1.4-py3-none-any.whl", hash = "sha256:bc5dd2abb727a5319567b7a813e6a2e7318c39f4f487cfe6c89c6f9c7d25197d"},
-    {file = "Jinja2-3.1.3.tar.gz", hash = "sha256:ac8bd6544d4bb2c9792bf3a159e80bba8fda7f07e81bc3aed565432d5925ba90"},
+    {file = "jinja2-3.1.4.tar.gz", hash = "sha256:4a3aee7acbbe7303aede8e9648d13b8bf88a429282aa6122a993f0ac800cb369"},
 ]
 
 [package.dependencies]
@@ -1005,165 +1005,153 @@ pyasn1 = ">=0.4.6"
 
 [[package]]
 name = "lxml"
-version = "5.2.1"
+version = "5.2.2"
 description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
 optional = true
 python-versions = ">=3.6"
 files = [
-    {file = "lxml-5.2.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:1f7785f4f789fdb522729ae465adcaa099e2a3441519df750ebdccc481d961a1"},
+    {file = "lxml-5.2.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:364d03207f3e603922d0d3932ef363d55bbf48e3647395765f9bfcbdf6d23632"},
-    {file = "lxml-5.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6cc6ee342fb7fa2471bd9b6d6fdfc78925a697bf5c2bcd0a302e98b0d35bfad3"},
+    {file = "lxml-5.2.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:50127c186f191b8917ea2fb8b206fbebe87fd414a6084d15568c27d0a21d60db"},
-    {file = "lxml-5.2.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:794f04eec78f1d0e35d9e0c36cbbb22e42d370dda1609fb03bcd7aeb458c6377"},
+    {file = "lxml-5.2.2-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:74e4f025ef3db1c6da4460dd27c118d8cd136d0391da4e387a15e48e5c975147"},
-    {file = "lxml-5.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c817d420c60a5183953c783b0547d9eb43b7b344a2c46f69513d5952a78cddf3"},
+    {file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:981a06a3076997adf7c743dcd0d7a0415582661e2517c7d961493572e909aa1d"},
-    {file = "lxml-5.2.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2213afee476546a7f37c7a9b4ad4d74b1e112a6fafffc9185d6d21f043128c81"},
+    {file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aef5474d913d3b05e613906ba4090433c515e13ea49c837aca18bde190853dff"},
-    {file = "lxml-5.2.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b070bbe8d3f0f6147689bed981d19bbb33070225373338df755a46893528104a"},
+    {file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1e275ea572389e41e8b039ac076a46cb87ee6b8542df3fff26f5baab43713bca"},
-    {file = "lxml-5.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e02c5175f63effbd7c5e590399c118d5db6183bbfe8e0d118bdb5c2d1b48d937"},
+    {file = "lxml-5.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5b65529bb2f21ac7861a0e94fdbf5dc0daab41497d18223b46ee8515e5ad297"},
-    {file = "lxml-5.2.1-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:3dc773b2861b37b41a6136e0b72a1a44689a9c4c101e0cddb6b854016acc0aa8"},
+    {file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:bcc98f911f10278d1daf14b87d65325851a1d29153caaf146877ec37031d5f36"},
-    {file = "lxml-5.2.1-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:d7520db34088c96cc0e0a3ad51a4fd5b401f279ee112aa2b7f8f976d8582606d"},
+    {file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_ppc64le.whl", hash = "sha256:b47633251727c8fe279f34025844b3b3a3e40cd1b198356d003aa146258d13a2"},
-    {file = "lxml-5.2.1-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:bcbf4af004f98793a95355980764b3d80d47117678118a44a80b721c9913436a"},
+    {file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_s390x.whl", hash = "sha256:fbc9d316552f9ef7bba39f4edfad4a734d3d6f93341232a9dddadec4f15d425f"},
-    {file = "lxml-5.2.1-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:a2b44bec7adf3e9305ce6cbfa47a4395667e744097faed97abb4728748ba7d47"},
+    {file = "lxml-5.2.2-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:13e69be35391ce72712184f69000cda04fc89689429179bc4c0ae5f0b7a8c21b"},
-    {file = "lxml-5.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:1c5bb205e9212d0ebddf946bc07e73fa245c864a5f90f341d11ce7b0b854475d"},
+    {file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:3b6a30a9ab040b3f545b697cb3adbf3696c05a3a68aad172e3fd7ca73ab3c835"},
-    {file = "lxml-5.2.1-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:2c9d147f754b1b0e723e6afb7ba1566ecb162fe4ea657f53d2139bbf894d050a"},
+    {file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:a233bb68625a85126ac9f1fc66d24337d6e8a0f9207b688eec2e7c880f012ec0"},
-    {file = "lxml-5.2.1-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:3545039fa4779be2df51d6395e91a810f57122290864918b172d5dc7ca5bb433"},
+    {file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:dfa7c241073d8f2b8e8dbc7803c434f57dbb83ae2a3d7892dd068d99e96efe2c"},
-    {file = "lxml-5.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a91481dbcddf1736c98a80b122afa0f7296eeb80b72344d7f45dc9f781551f56"},
+    {file = "lxml-5.2.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:1a7aca7964ac4bb07680d5c9d63b9d7028cace3e2d43175cb50bba8c5ad33316"},
-    {file = "lxml-5.2.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:2ddfe41ddc81f29a4c44c8ce239eda5ade4e7fc305fb7311759dd6229a080052"},
+    {file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:ae4073a60ab98529ab8a72ebf429f2a8cc612619a8c04e08bed27450d52103c0"},
-    {file = "lxml-5.2.1-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:a7baf9ffc238e4bf401299f50e971a45bfcc10a785522541a6e3179c83eabf0a"},
+    {file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:ffb2be176fed4457e445fe540617f0252a72a8bc56208fd65a690fdb1f57660b"},
-    {file = "lxml-5.2.1-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:31e9a882013c2f6bd2f2c974241bf4ba68c85eba943648ce88936d23209a2e01"},
+    {file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:e290d79a4107d7d794634ce3e985b9ae4f920380a813717adf61804904dc4393"},
-    {file = "lxml-5.2.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:0a15438253b34e6362b2dc41475e7f80de76320f335e70c5528b7148cac253a1"},
+    {file = "lxml-5.2.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:96e85aa09274955bb6bd483eaf5b12abadade01010478154b0ec70284c1b1526"},
-    {file = "lxml-5.2.1-cp310-cp310-win32.whl", hash = "sha256:6992030d43b916407c9aa52e9673612ff39a575523c5f4cf72cdef75365709a5"},
+    {file = "lxml-5.2.2-cp310-cp310-win32.whl", hash = "sha256:f956196ef61369f1685d14dad80611488d8dc1ef00be57c0c5a03064005b0f30"},
-    {file = "lxml-5.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:da052e7962ea2d5e5ef5bc0355d55007407087392cf465b7ad84ce5f3e25fe0f"},
+    {file = "lxml-5.2.2-cp310-cp310-win_amd64.whl", hash = "sha256:875a3f90d7eb5c5d77e529080d95140eacb3c6d13ad5b616ee8095447b1d22e7"},
-    {file = "lxml-5.2.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:70ac664a48aa64e5e635ae5566f5227f2ab7f66a3990d67566d9907edcbbf867"},
+    {file = "lxml-5.2.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:45f9494613160d0405682f9eee781c7e6d1bf45f819654eb249f8f46a2c22545"},
-    {file = "lxml-5.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1ae67b4e737cddc96c99461d2f75d218bdf7a0c3d3ad5604d1f5e7464a2f9ffe"},
+    {file = "lxml-5.2.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b0b3f2df149efb242cee2ffdeb6674b7f30d23c9a7af26595099afaf46ef4e88"},
-    {file = "lxml-5.2.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f18a5a84e16886898e51ab4b1d43acb3083c39b14c8caeb3589aabff0ee0b270"},
+    {file = "lxml-5.2.2-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d28cb356f119a437cc58a13f8135ab8a4c8ece18159eb9194b0d269ec4e28083"},
-    {file = "lxml-5.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c6f2c8372b98208ce609c9e1d707f6918cc118fea4e2c754c9f0812c04ca116d"},
+    {file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:657a972f46bbefdbba2d4f14413c0d079f9ae243bd68193cb5061b9732fa54c1"},
-    {file = "lxml-5.2.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:394ed3924d7a01b5bd9a0d9d946136e1c2f7b3dc337196d99e61740ed4bc6fe1"},
+    {file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b74b9ea10063efb77a965a8d5f4182806fbf59ed068b3c3fd6f30d2ac7bee734"},
-    {file = "lxml-5.2.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5d077bc40a1fe984e1a9931e801e42959a1e6598edc8a3223b061d30fbd26bbc"},
+    {file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:07542787f86112d46d07d4f3c4e7c760282011b354d012dc4141cc12a68cef5f"},
-    {file = "lxml-5.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:764b521b75701f60683500d8621841bec41a65eb739b8466000c6fdbc256c240"},
+    {file = "lxml-5.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:303f540ad2dddd35b92415b74b900c749ec2010e703ab3bfd6660979d01fd4ed"},
-    {file = "lxml-5.2.1-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:3a6b45da02336895da82b9d472cd274b22dc27a5cea1d4b793874eead23dd14f"},
+    {file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:2eb2227ce1ff998faf0cd7fe85bbf086aa41dfc5af3b1d80867ecfe75fb68df3"},
-    {file = "lxml-5.2.1-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:5ea7b6766ac2dfe4bcac8b8595107665a18ef01f8c8343f00710b85096d1b53a"},
+    {file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_ppc64le.whl", hash = "sha256:1d8a701774dfc42a2f0b8ccdfe7dbc140500d1049e0632a611985d943fcf12df"},
-    {file = "lxml-5.2.1-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:e196a4ff48310ba62e53a8e0f97ca2bca83cdd2fe2934d8b5cb0df0a841b193a"},
+    {file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_s390x.whl", hash = "sha256:56793b7a1a091a7c286b5f4aa1fe4ae5d1446fe742d00cdf2ffb1077865db10d"},
-    {file = "lxml-5.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:200e63525948e325d6a13a76ba2911f927ad399ef64f57898cf7c74e69b71095"},
+    {file = "lxml-5.2.2-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:eb00b549b13bd6d884c863554566095bf6fa9c3cecb2e7b399c4bc7904cb33b5"},
-    {file = "lxml-5.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:dae0ed02f6b075426accbf6b2863c3d0a7eacc1b41fb40f2251d931e50188dad"},
+    {file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1a2569a1f15ae6c8c64108a2cd2b4a858fc1e13d25846be0666fc144715e32ab"},
-    {file = "lxml-5.2.1-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:ab31a88a651039a07a3ae327d68ebdd8bc589b16938c09ef3f32a4b809dc96ef"},
+    {file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:8cf85a6e40ff1f37fe0f25719aadf443686b1ac7652593dc53c7ef9b8492b115"},
-    {file = "lxml-5.2.1-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:df2e6f546c4df14bc81f9498bbc007fbb87669f1bb707c6138878c46b06f6510"},
+    {file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:d237ba6664b8e60fd90b8549a149a74fcc675272e0e95539a00522e4ca688b04"},
-    {file = "lxml-5.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5dd1537e7cc06efd81371f5d1a992bd5ab156b2b4f88834ca852de4a8ea523fa"},
+    {file = "lxml-5.2.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0b3f5016e00ae7630a4b83d0868fca1e3d494c78a75b1c7252606a3a1c5fc2ad"},
-    {file = "lxml-5.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:9b9ec9c9978b708d488bec36b9e4c94d88fd12ccac3e62134a9d17ddba910ea9"},
+    {file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:23441e2b5339bc54dc949e9e675fa35efe858108404ef9aa92f0456929ef6fe8"},
-    {file = "lxml-5.2.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:8e77c69d5892cb5ba71703c4057091e31ccf534bd7f129307a4d084d90d014b8"},
+    {file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:2fb0ba3e8566548d6c8e7dd82a8229ff47bd8fb8c2da237607ac8e5a1b8312e5"},
-    {file = "lxml-5.2.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:a8d5c70e04aac1eda5c829a26d1f75c6e5286c74743133d9f742cda8e53b9c2f"},
+    {file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:79d1fb9252e7e2cfe4de6e9a6610c7cbb99b9708e2c3e29057f487de5a9eaefa"},
-    {file = "lxml-5.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c94e75445b00319c1fad60f3c98b09cd63fe1134a8a953dcd48989ef42318534"},
+    {file = "lxml-5.2.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6dcc3d17eac1df7859ae01202e9bb11ffa8c98949dcbeb1069c8b9a75917e01b"},
-    {file = "lxml-5.2.1-cp311-cp311-win32.whl", hash = "sha256:4951e4f7a5680a2db62f7f4ab2f84617674d36d2d76a729b9a8be4b59b3659be"},
+    {file = "lxml-5.2.2-cp311-cp311-win32.whl", hash = "sha256:4c30a2f83677876465f44c018830f608fa3c6a8a466eb223535035fbc16f3438"},
-    {file = "lxml-5.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:5c670c0406bdc845b474b680b9a5456c561c65cf366f8db5a60154088c92d102"},
+    {file = "lxml-5.2.2-cp311-cp311-win_amd64.whl", hash = "sha256:49095a38eb333aaf44c06052fd2ec3b8f23e19747ca7ec6f6c954ffea6dbf7be"},
-    {file = "lxml-5.2.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:abc25c3cab9ec7fcd299b9bcb3b8d4a1231877e425c650fa1c7576c5107ab851"},
+    {file = "lxml-5.2.2-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:7429e7faa1a60cad26ae4227f4dd0459efde239e494c7312624ce228e04f6391"},
-    {file = "lxml-5.2.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:6935bbf153f9a965f1e07c2649c0849d29832487c52bb4a5c5066031d8b44fd5"},
+    {file = "lxml-5.2.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:50ccb5d355961c0f12f6cf24b7187dbabd5433f29e15147a67995474f27d1776"},
-    {file = "lxml-5.2.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d793bebb202a6000390a5390078e945bbb49855c29c7e4d56a85901326c3b5d9"},
+    {file = "lxml-5.2.2-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc911208b18842a3a57266d8e51fc3cfaccee90a5351b92079beed912a7914c2"},
-    {file = "lxml-5.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afd5562927cdef7c4f5550374acbc117fd4ecc05b5007bdfa57cc5355864e0a4"},
+    {file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:33ce9e786753743159799fdf8e92a5da351158c4bfb6f2db0bf31e7892a1feb5"},
-    {file = "lxml-5.2.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0e7259016bc4345a31af861fdce942b77c99049d6c2107ca07dc2bba2435c1d9"},
+    {file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ec87c44f619380878bd49ca109669c9f221d9ae6883a5bcb3616785fa8f94c97"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:530e7c04f72002d2f334d5257c8a51bf409db0316feee7c87e4385043be136af"},
|
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:08ea0f606808354eb8f2dfaac095963cb25d9d28e27edcc375d7b30ab01abbf6"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59689a75ba8d7ffca577aefd017d08d659d86ad4585ccc73e43edbfc7476781a"},
|
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75a9632f1d4f698b2e6e2e1ada40e71f369b15d69baddb8968dcc8e683839b18"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:f9737bf36262046213a28e789cc82d82c6ef19c85a0cf05e75c670a33342ac2c"},
|
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:74da9f97daec6928567b48c90ea2c82a106b2d500f397eeb8941e47d30b1ca85"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:3a74c4f27167cb95c1d4af1c0b59e88b7f3e0182138db2501c353555f7ec57f4"},
|
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:0969e92af09c5687d769731e3f39ed62427cc72176cebb54b7a9d52cc4fa3b73"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:68a2610dbe138fa8c5826b3f6d98a7cfc29707b850ddcc3e21910a6fe51f6ca0"},
|
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:9164361769b6ca7769079f4d426a41df6164879f7f3568be9086e15baca61466"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:f0a1bc63a465b6d72569a9bba9f2ef0334c4e03958e043da1920299100bc7c08"},
|
{file = "lxml-5.2.2-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:d26a618ae1766279f2660aca0081b2220aca6bd1aa06b2cf73f07383faf48927"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c2d35a1d047efd68027817b32ab1586c1169e60ca02c65d428ae815b593e65d4"},
|
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ab67ed772c584b7ef2379797bf14b82df9aa5f7438c5b9a09624dd834c1c1aaf"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:79bd05260359170f78b181b59ce871673ed01ba048deef4bf49a36ab3e72e80b"},
|
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:3d1e35572a56941b32c239774d7e9ad724074d37f90c7a7d499ab98761bd80cf"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:865bad62df277c04beed9478fe665b9ef63eb28fe026d5dedcb89b537d2e2ea6"},
|
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:8268cbcd48c5375f46e000adb1390572c98879eb4f77910c6053d25cc3ac2c67"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:44f6c7caff88d988db017b9b0e4ab04934f11e3e72d478031efc7edcac6c622f"},
|
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:e282aedd63c639c07c3857097fc0e236f984ceb4089a8b284da1c526491e3f3d"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:71e97313406ccf55d32cc98a533ee05c61e15d11b99215b237346171c179c0b0"},
|
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6dfdc2bfe69e9adf0df4915949c22a25b39d175d599bf98e7ddf620a13678585"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:057cdc6b86ab732cf361f8b4d8af87cf195a1f6dc5b0ff3de2dced242c2015e0"},
|
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:4aefd911793b5d2d7a921233a54c90329bf3d4a6817dc465f12ffdfe4fc7b8fe"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:f3bbbc998d42f8e561f347e798b85513ba4da324c2b3f9b7969e9c45b10f6169"},
|
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:8b8df03a9e995b6211dafa63b32f9d405881518ff1ddd775db4e7b98fb545e1c"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:491755202eb21a5e350dae00c6d9a17247769c64dcf62d8c788b5c135e179dc4"},
|
{file = "lxml-5.2.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f11ae142f3a322d44513de1018b50f474f8f736bc3cd91d969f464b5bfef8836"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-win32.whl", hash = "sha256:8de8f9d6caa7f25b204fc861718815d41cbcf27ee8f028c89c882a0cf4ae4134"},
|
{file = "lxml-5.2.2-cp312-cp312-win32.whl", hash = "sha256:16a8326e51fcdffc886294c1e70b11ddccec836516a343f9ed0f82aac043c24a"},
|
||||||
{file = "lxml-5.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:f2a9efc53d5b714b8df2b4b3e992accf8ce5bbdfe544d74d5c6766c9e1146a3a"},
|
{file = "lxml-5.2.2-cp312-cp312-win_amd64.whl", hash = "sha256:bbc4b80af581e18568ff07f6395c02114d05f4865c2812a1f02f2eaecf0bfd48"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:70a9768e1b9d79edca17890175ba915654ee1725975d69ab64813dd785a2bd5c"},
|
{file = "lxml-5.2.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:e3d9d13603410b72787579769469af730c38f2f25505573a5888a94b62b920f8"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c38d7b9a690b090de999835f0443d8aa93ce5f2064035dfc48f27f02b4afc3d0"},
|
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:38b67afb0a06b8575948641c1d6d68e41b83a3abeae2ca9eed2ac59892b36706"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5670fb70a828663cc37552a2a85bf2ac38475572b0e9b91283dc09efb52c41d1"},
|
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c689d0d5381f56de7bd6966a4541bff6e08bf8d3871bbd89a0c6ab18aa699573"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:958244ad566c3ffc385f47dddde4145088a0ab893504b54b52c041987a8c1863"},
|
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_28_x86_64.whl", hash = "sha256:cf2a978c795b54c539f47964ec05e35c05bd045db5ca1e8366988c7f2fe6b3ce"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b6241d4eee5f89453307c2f2bfa03b50362052ca0af1efecf9fef9a41a22bb4f"},
|
{file = "lxml-5.2.2-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:739e36ef7412b2bd940f75b278749106e6d025e40027c0b94a17ef7968d55d56"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:2a66bf12fbd4666dd023b6f51223aed3d9f3b40fef06ce404cb75bafd3d89536"},
|
{file = "lxml-5.2.2-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d8bbcd21769594dbba9c37d3c819e2d5847656ca99c747ddb31ac1701d0c0ed9"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_1_ppc64le.whl", hash = "sha256:9123716666e25b7b71c4e1789ec829ed18663152008b58544d95b008ed9e21e9"},
|
{file = "lxml-5.2.2-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:2304d3c93f2258ccf2cf7a6ba8c761d76ef84948d87bf9664e14d203da2cd264"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_1_s390x.whl", hash = "sha256:0c3f67e2aeda739d1cc0b1102c9a9129f7dc83901226cc24dd72ba275ced4218"},
|
{file = "lxml-5.2.2-cp36-cp36m-win32.whl", hash = "sha256:02437fb7308386867c8b7b0e5bc4cd4b04548b1c5d089ffb8e7b31009b961dc3"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:5d5792e9b3fb8d16a19f46aa8208987cfeafe082363ee2745ea8b643d9cc5b45"},
|
{file = "lxml-5.2.2-cp36-cp36m-win_amd64.whl", hash = "sha256:edcfa83e03370032a489430215c1e7783128808fd3e2e0a3225deee278585196"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_2_aarch64.whl", hash = "sha256:88e22fc0a6684337d25c994381ed8a1580a6f5ebebd5ad41f89f663ff4ec2885"},
|
{file = "lxml-5.2.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:28bf95177400066596cdbcfc933312493799382879da504633d16cf60bba735b"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_2_ppc64le.whl", hash = "sha256:21c2e6b09565ba5b45ae161b438e033a86ad1736b8c838c766146eff8ceffff9"},
|
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a745cc98d504d5bd2c19b10c79c61c7c3df9222629f1b6210c0368177589fb8"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_2_s390x.whl", hash = "sha256:afbbdb120d1e78d2ba8064a68058001b871154cc57787031b645c9142b937a62"},
|
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1b590b39ef90c6b22ec0be925b211298e810b4856909c8ca60d27ffbca6c12e6"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:627402ad8dea044dde2eccde4370560a2b750ef894c9578e1d4f8ffd54000461"},
|
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b336b0416828022bfd5a2e3083e7f5ba54b96242159f83c7e3eebaec752f1716"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-win32.whl", hash = "sha256:e89580a581bf478d8dcb97d9cd011d567768e8bc4095f8557b21c4d4c5fea7d0"},
|
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:c2faf60c583af0d135e853c86ac2735ce178f0e338a3c7f9ae8f622fd2eb788c"},
|
||||||
{file = "lxml-5.2.1-cp36-cp36m-win_amd64.whl", hash = "sha256:59565f10607c244bc4c05c0c5fa0c190c990996e0c719d05deec7030c2aa8289"},
|
{file = "lxml-5.2.2-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:4bc6cb140a7a0ad1f7bc37e018d0ed690b7b6520ade518285dc3171f7a117905"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:857500f88b17a6479202ff5fe5f580fc3404922cd02ab3716197adf1ef628029"},
|
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7ff762670cada8e05b32bf1e4dc50b140790909caa8303cfddc4d702b71ea184"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:56c22432809085b3f3ae04e6e7bdd36883d7258fcd90e53ba7b2e463efc7a6af"},
|
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:57f0a0bbc9868e10ebe874e9f129d2917750adf008fe7b9c1598c0fbbfdde6a6"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a55ee573116ba208932e2d1a037cc4b10d2c1cb264ced2184d00b18ce585b2c0"},
|
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:a6d2092797b388342c1bc932077ad232f914351932353e2e8706851c870bca1f"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:6cf58416653c5901e12624e4013708b6e11142956e7f35e7a83f1ab02f3fe456"},
|
{file = "lxml-5.2.2-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:60499fe961b21264e17a471ec296dcbf4365fbea611bf9e303ab69db7159ce61"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:64c2baa7774bc22dd4474248ba16fe1a7f611c13ac6123408694d4cc93d66dbd"},
|
{file = "lxml-5.2.2-cp37-cp37m-win32.whl", hash = "sha256:d9b342c76003c6b9336a80efcc766748a333573abf9350f4094ee46b006ec18f"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:74b28c6334cca4dd704e8004cba1955af0b778cf449142e581e404bd211fb619"},
|
{file = "lxml-5.2.2-cp37-cp37m-win_amd64.whl", hash = "sha256:b16db2770517b8799c79aa80f4053cd6f8b716f21f8aca962725a9565ce3ee40"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:7221d49259aa1e5a8f00d3d28b1e0b76031655ca74bb287123ef56c3db92f213"},
|
{file = "lxml-5.2.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7ed07b3062b055d7a7f9d6557a251cc655eed0b3152b76de619516621c56f5d3"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:3dbe858ee582cbb2c6294dc85f55b5f19c918c2597855e950f34b660f1a5ede6"},
|
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f60fdd125d85bf9c279ffb8e94c78c51b3b6a37711464e1f5f31078b45002421"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:04ab5415bf6c86e0518d57240a96c4d1fcfc3cb370bb2ac2a732b67f579e5a04"},
|
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8a7e24cb69ee5f32e003f50e016d5fde438010c1022c96738b04fc2423e61706"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_2_ppc64le.whl", hash = "sha256:6ab833e4735a7e5533711a6ea2df26459b96f9eec36d23f74cafe03631647c41"},
|
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23cfafd56887eaed93d07bc4547abd5e09d837a002b791e9767765492a75883f"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_2_s390x.whl", hash = "sha256:f443cdef978430887ed55112b491f670bba6462cea7a7742ff8f14b7abb98d75"},
|
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:19b4e485cd07b7d83e3fe3b72132e7df70bfac22b14fe4bf7a23822c3a35bff5"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:9e2addd2d1866fe112bc6f80117bcc6bc25191c5ed1bfbcf9f1386a884252ae8"},
|
{file = "lxml-5.2.2-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:7ce7ad8abebe737ad6143d9d3bf94b88b93365ea30a5b81f6877ec9c0dee0a48"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-win32.whl", hash = "sha256:f51969bac61441fd31f028d7b3b45962f3ecebf691a510495e5d2cd8c8092dbd"},
|
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:e49b052b768bb74f58c7dda4e0bdf7b79d43a9204ca584ffe1fb48a6f3c84c66"},
|
||||||
{file = "lxml-5.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:b0b58fbfa1bf7367dde8a557994e3b1637294be6cf2169810375caf8571a085c"},
|
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d14a0d029a4e176795cef99c056d58067c06195e0c7e2dbb293bf95c08f772a3"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:804f74efe22b6a227306dd890eecc4f8c59ff25ca35f1f14e7482bbce96ef10b"},
|
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:be49ad33819d7dcc28a309b86d4ed98e1a65f3075c6acd3cd4fe32103235222b"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:08802f0c56ed150cc6885ae0788a321b73505d2263ee56dad84d200cab11c07a"},
|
{file = "lxml-5.2.2-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:a6d17e0370d2516d5bb9062c7b4cb731cff921fc875644c3d751ad857ba9c5b1"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0f8c09ed18ecb4ebf23e02b8e7a22a05d6411911e6fabef3a36e4f371f4f2585"},
|
{file = "lxml-5.2.2-cp38-cp38-win32.whl", hash = "sha256:5b8c041b6265e08eac8a724b74b655404070b636a8dd6d7a13c3adc07882ef30"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e3d30321949861404323c50aebeb1943461a67cd51d4200ab02babc58bd06a86"},
|
{file = "lxml-5.2.2-cp38-cp38-win_amd64.whl", hash = "sha256:f61efaf4bed1cc0860e567d2ecb2363974d414f7f1f124b1df368bbf183453a6"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:b560e3aa4b1d49e0e6c847d72665384db35b2f5d45f8e6a5c0072e0283430533"},
|
{file = "lxml-5.2.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:fb91819461b1b56d06fa4bcf86617fac795f6a99d12239fb0c68dbeba41a0a30"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:058a1308914f20784c9f4674036527e7c04f7be6fb60f5d61353545aa7fcb739"},
|
{file = "lxml-5.2.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d4ed0c7cbecde7194cd3228c044e86bf73e30a23505af852857c09c24e77ec5d"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:adfb84ca6b87e06bc6b146dc7da7623395db1e31621c4785ad0658c5028b37d7"},
|
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54401c77a63cc7d6dc4b4e173bb484f28a5607f3df71484709fe037c92d4f0ed"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:417d14450f06d51f363e41cace6488519038f940676ce9664b34ebf5653433a5"},
|
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:625e3ef310e7fa3a761d48ca7ea1f9d8718a32b1542e727d584d82f4453d5eeb"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:a2dfe7e2473f9b59496247aad6e23b405ddf2e12ef0765677b0081c02d6c2c0b"},
|
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:519895c99c815a1a24a926d5b60627ce5ea48e9f639a5cd328bda0515ea0f10c"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:bf2e2458345d9bffb0d9ec16557d8858c9c88d2d11fed53998512504cd9df49b"},
|
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c7079d5eb1c1315a858bbf180000757db8ad904a89476653232db835c3114001"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:58278b29cb89f3e43ff3e0c756abbd1518f3ee6adad9e35b51fb101c1c1daaec"},
|
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:343ab62e9ca78094f2306aefed67dcfad61c4683f87eee48ff2fd74902447726"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:64641a6068a16201366476731301441ce93457eb8452056f570133a6ceb15fca"},
|
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:cd9e78285da6c9ba2d5c769628f43ef66d96ac3085e59b10ad4f3707980710d3"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:78bfa756eab503673991bdcf464917ef7845a964903d3302c5f68417ecdc948c"},
|
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_ppc64le.whl", hash = "sha256:546cf886f6242dff9ec206331209db9c8e1643ae642dea5fdbecae2453cb50fd"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:11a04306fcba10cd9637e669fd73aa274c1c09ca64af79c041aa820ea992b637"},
|
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_s390x.whl", hash = "sha256:02f6a8eb6512fdc2fd4ca10a49c341c4e109aa6e9448cc4859af5b949622715a"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-win32.whl", hash = "sha256:66bc5eb8a323ed9894f8fa0ee6cb3e3fb2403d99aee635078fd19a8bc7a5a5da"},
|
{file = "lxml-5.2.2-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:339ee4a4704bc724757cd5dd9dc8cf4d00980f5d3e6e06d5847c1b594ace68ab"},
|
||||||
{file = "lxml-5.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:9676bfc686fa6a3fa10cd4ae6b76cae8be26eb5ec6811d2a325636c460da1806"},
|
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:0a028b61a2e357ace98b1615fc03f76eb517cc028993964fe08ad514b1e8892d"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:cf22b41fdae514ee2f1691b6c3cdeae666d8b7fa9434de445f12bbeee0cf48dd"},
|
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:f90e552ecbad426eab352e7b2933091f2be77115bb16f09f78404861c8322981"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ec42088248c596dbd61d4ae8a5b004f97a4d91a9fd286f632e42e60b706718d7"},
|
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:d83e2d94b69bf31ead2fa45f0acdef0757fa0458a129734f59f67f3d2eb7ef32"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cd53553ddad4a9c2f1f022756ae64abe16da1feb497edf4d9f87f99ec7cf86bd"},
|
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a02d3c48f9bb1e10c7788d92c0c7db6f2002d024ab6e74d6f45ae33e3d0288a3"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:feaa45c0eae424d3e90d78823f3828e7dc42a42f21ed420db98da2c4ecf0a2cb"},
|
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:6d68ce8e7b2075390e8ac1e1d3a99e8b6372c694bbe612632606d1d546794207"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ddc678fb4c7e30cf830a2b5a8d869538bc55b28d6c68544d09c7d0d8f17694dc"},
|
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:453d037e09a5176d92ec0fd282e934ed26d806331a8b70ab431a81e2fbabf56d"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:853e074d4931dbcba7480d4dcab23d5c56bd9607f92825ab80ee2bd916edea53"},
|
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:3b019d4ee84b683342af793b56bb35034bd749e4cbdd3d33f7d1107790f8c472"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc4691d60512798304acb9207987e7b2b7c44627ea88b9d77489bbe3e6cc3bd4"},
|
{file = "lxml-5.2.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:cb3942960f0beb9f46e2a71a3aca220d1ca32feb5a398656be934320804c0df9"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:beb72935a941965c52990f3a32d7f07ce869fe21c6af8b34bf6a277b33a345d3"},
|
{file = "lxml-5.2.2-cp39-cp39-win32.whl", hash = "sha256:ac6540c9fff6e3813d29d0403ee7a81897f1d8ecc09a8ff84d2eea70ede1cdbf"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_28_ppc64le.whl", hash = "sha256:6588c459c5627fefa30139be4d2e28a2c2a1d0d1c265aad2ba1935a7863a4913"},
|
{file = "lxml-5.2.2-cp39-cp39-win_amd64.whl", hash = "sha256:610b5c77428a50269f38a534057444c249976433f40f53e3b47e68349cca1425"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_28_s390x.whl", hash = "sha256:588008b8497667f1ddca7c99f2f85ce8511f8f7871b4a06ceede68ab62dff64b"},
|
{file = "lxml-5.2.2-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:b537bd04d7ccd7c6350cdaaaad911f6312cbd61e6e6045542f781c7f8b2e99d2"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:b6787b643356111dfd4032b5bffe26d2f8331556ecb79e15dacb9275da02866e"},
|
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4820c02195d6dfb7b8508ff276752f6b2ff8b64ae5d13ebe02e7667e035000b9"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:7c17b64b0a6ef4e5affae6a3724010a7a66bda48a62cfe0674dabd46642e8b54"},
|
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2a09f6184f17a80897172863a655467da2b11151ec98ba8d7af89f17bf63dae"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:27aa20d45c2e0b8cd05da6d4759649170e8dfc4f4e5ef33a34d06f2d79075d57"},
|
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:76acba4c66c47d27c8365e7c10b3d8016a7da83d3191d053a58382311a8bf4e1"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:d4f2cc7060dc3646632d7f15fe68e2fa98f58e35dd5666cd525f3b35d3fed7f8"},
|
{file = "lxml-5.2.2-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b128092c927eaf485928cec0c28f6b8bead277e28acf56800e972aa2c2abd7a2"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:ff46d772d5f6f73564979cd77a4fffe55c916a05f3cb70e7c9c0590059fb29ef"},
|
{file = "lxml-5.2.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:ae791f6bd43305aade8c0e22f816b34f3b72b6c820477aab4d18473a37e8090b"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:96323338e6c14e958d775700ec8a88346014a85e5de73ac7967db0367582049b"},
|
{file = "lxml-5.2.2-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a2f6a1bc2460e643785a2cde17293bd7a8f990884b822f7bca47bee0a82fc66b"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:52421b41ac99e9d91934e4d0d0fe7da9f02bfa7536bb4431b4c05c906c8c6919"},
|
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e8d351ff44c1638cb6e980623d517abd9f580d2e53bfcd18d8941c052a5a009"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:7a7efd5b6d3e30d81ec68ab8a88252d7c7c6f13aaa875009fe3097eb4e30b84c"},
|
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bec4bd9133420c5c52d562469c754f27c5c9e36ee06abc169612c959bd7dbb07"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:0ed777c1e8c99b63037b91f9d73a6aad20fd035d77ac84afcc205225f8f41188"},
|
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:55ce6b6d803890bd3cc89975fca9de1dff39729b43b73cb15ddd933b8bc20484"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-win32.whl", hash = "sha256:644df54d729ef810dcd0f7732e50e5ad1bd0a135278ed8d6bcb06f33b6b6f708"},
|
{file = "lxml-5.2.2-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8ab6a358d1286498d80fe67bd3d69fcbc7d1359b45b41e74c4a26964ca99c3f8"},
|
||||||
{file = "lxml-5.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:9ca66b8e90daca431b7ca1408cae085d025326570e57749695d6a01454790e95"},
|
{file = "lxml-5.2.2-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:06668e39e1f3c065349c51ac27ae430719d7806c026fec462e5693b08b95696b"},
|
||||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9b0ff53900566bc6325ecde9181d89afadc59c5ffa39bddf084aaedfe3b06a11"},
|
{file = "lxml-5.2.2-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:9cd5323344d8ebb9fb5e96da5de5ad4ebab993bbf51674259dbe9d7a18049525"},
|
||||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fd6037392f2d57793ab98d9e26798f44b8b4da2f2464388588f48ac52c489ea1"},
|
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:89feb82ca055af0fe797a2323ec9043b26bc371365847dbe83c7fd2e2f181c34"},
|
||||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b9c07e7a45bb64e21df4b6aa623cb8ba214dfb47d2027d90eac197329bb5e94"},
|
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e481bba1e11ba585fb06db666bfc23dbe181dbafc7b25776156120bf12e0d5a6"},
|
||||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3249cc2989d9090eeac5467e50e9ec2d40704fea9ab72f36b034ea34ee65ca98"},
|
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:9d6c6ea6a11ca0ff9cd0390b885984ed31157c168565702959c25e2191674a14"},
|
||||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:f42038016852ae51b4088b2862126535cc4fc85802bfe30dea3500fdfaf1864e"},
|
{file = "lxml-5.2.2-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:3d98de734abee23e61f6b8c2e08a88453ada7d6486dc7cdc82922a03968928db"},
|
||||||
{file = "lxml-5.2.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:533658f8fbf056b70e434dff7e7aa611bcacb33e01f75de7f821810e48d1bb66"},
|
{file = "lxml-5.2.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:69ab77a1373f1e7563e0fb5a29a8440367dec051da6c7405333699d07444f511"},
|
||||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:622020d4521e22fb371e15f580d153134bfb68d6a429d1342a25f051ec72df1c"},
|
{file = "lxml-5.2.2-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:34e17913c431f5ae01d8658dbf792fdc457073dcdfbb31dc0cc6ab256e664a8d"},
|
||||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:efa7b51824aa0ee957ccd5a741c73e6851de55f40d807f08069eb4c5a26b2baa"},
|
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:05f8757b03208c3f50097761be2dea0aba02e94f0dc7023ed73a7bb14ff11eb0"},
|
||||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9c6ad0fbf105f6bcc9300c00010a2ffa44ea6f555df1a2ad95c88f5656104817"},
|
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6a520b4f9974b0a0a6ed73c2154de57cdfd0c8800f4f15ab2b73238ffed0b36e"},
|
||||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:e233db59c8f76630c512ab4a4daf5a5986da5c3d5b44b8e9fc742f2a24dbd460"},
|
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5e097646944b66207023bc3c634827de858aebc226d5d4d6d16f0b77566ea182"},
|
||||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:6a014510830df1475176466b6087fc0c08b47a36714823e58d8b8d7709132a96"},
|
{file = "lxml-5.2.2-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:b5e4ef22ff25bfd4ede5f8fb30f7b24446345f3e79d9b7455aef2836437bc38a"},
|
||||||
{file = "lxml-5.2.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:d38c8f50ecf57f0463399569aa388b232cf1a2ffb8f0a9a5412d0db57e054860"},
|
{file = "lxml-5.2.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:ff69a9a0b4b17d78170c73abe2ab12084bdf1691550c5629ad1fe7849433f324"},
|
||||||
{file = "lxml-5.2.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:5aea8212fb823e006b995c4dda533edcf98a893d941f173f6c9506126188860d"},
|
{file = "lxml-5.2.2.tar.gz", hash = "sha256:bb2dc4898180bea79863d5487e5f9c7c34297414bad54bcd0f0852aee9cfdb87"},
|
||||||
-    {file = "lxml-5.2.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ff097ae562e637409b429a7ac958a20aab237a0378c42dabaa1e3abf2f896e5f"},
-    {file = "lxml-5.2.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f5d65c39f16717a47c36c756af0fb36144069c4718824b7533f803ecdf91138"},
-    {file = "lxml-5.2.1-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:3d0c3dd24bb4605439bf91068598d00c6370684f8de4a67c2992683f6c309d6b"},
-    {file = "lxml-5.2.1-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e32be23d538753a8adb6c85bd539f5fd3b15cb987404327c569dfc5fd8366e85"},
-    {file = "lxml-5.2.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:cc518cea79fd1e2f6c90baafa28906d4309d24f3a63e801d855e7424c5b34144"},
-    {file = "lxml-5.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a0af35bd8ebf84888373630f73f24e86bf016642fb8576fba49d3d6b560b7cbc"},
-    {file = "lxml-5.2.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8aca2e3a72f37bfc7b14ba96d4056244001ddcc18382bd0daa087fd2e68a354"},
-    {file = "lxml-5.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ca1e8188b26a819387b29c3895c47a5e618708fe6f787f3b1a471de2c4a94d9"},
-    {file = "lxml-5.2.1-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c8ba129e6d3b0136a0f50345b2cb3db53f6bda5dd8c7f5d83fbccba97fb5dcb5"},
-    {file = "lxml-5.2.1-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e998e304036198b4f6914e6a1e2b6f925208a20e2042563d9734881150c6c246"},
-    {file = "lxml-5.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:d3be9b2076112e51b323bdf6d5a7f8a798de55fb8d95fcb64bd179460cdc0704"},
-    {file = "lxml-5.2.1.tar.gz", hash = "sha256:3f7765e69bbce0906a7c74d5fe46d2c7a7596147318dbc08e4a2431f3060e306"},
]
|
]
|
||||||
|
|
||||||
[package.extras]
|
[package.extras]
|
||||||
@@ -1454,17 +1442,17 @@ files = [
 
 [[package]]
 name = "mypy-zope"
-version = "1.0.3"
+version = "1.0.4"
 description = "Plugin for mypy to support zope interfaces"
 optional = false
 python-versions = "*"
 files = [
-    {file = "mypy-zope-1.0.3.tar.gz", hash = "sha256:149081bd2754d947747baefac569bb1c2bc127b4a2cc1fa505492336946bb3b4"},
-    {file = "mypy_zope-1.0.3-py3-none-any.whl", hash = "sha256:7a30ce1a2589173f0be66662c9a9179f75737afc40e4104df4c76fb5a8421c14"},
+    {file = "mypy-zope-1.0.4.tar.gz", hash = "sha256:a9569e73ae85a65247787d98590fa6d4290e76f26aabe035d1c3e94a0b9ab6ee"},
+    {file = "mypy_zope-1.0.4-py3-none-any.whl", hash = "sha256:c7298f93963a84f2b145c2b5cc98709fc2a5be4adf54bfe23fa7fdd8fd19c975"},
 ]
 
 [package.dependencies]
-mypy = ">=1.0.0,<1.9.0"
+mypy = ">=1.0.0,<1.10.0"
 "zope.interface" = "*"
 "zope.schema" = "*"
 
@@ -1536,13 +1524,13 @@ files = [
 
 [[package]]
 name = "phonenumbers"
-version = "8.13.35"
+version = "8.13.37"
 description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
 optional = false
 python-versions = "*"
 files = [
-    {file = "phonenumbers-8.13.35-py2.py3-none-any.whl", hash = "sha256:58286a8e617bd75f541e04313b28c36398be6d4443a778c85e9617a93c391310"},
-    {file = "phonenumbers-8.13.35.tar.gz", hash = "sha256:64f061a967dcdae11e1c59f3688649e697b897110a33bb74d5a69c3e35321245"},
+    {file = "phonenumbers-8.13.37-py2.py3-none-any.whl", hash = "sha256:4ea00ef5012422c08c7955c21131e7ae5baa9a3ef52cf2d561e963f023006b80"},
+    {file = "phonenumbers-8.13.37.tar.gz", hash = "sha256:bd315fed159aea0516f7c367231810fe8344d5bec26156b88fa18374c11d1cf2"},
 ]
 
 [[package]]
@@ -1673,13 +1661,13 @@ test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2.1)", "pytes
 
 [[package]]
 name = "prometheus-client"
-version = "0.19.0"
+version = "0.20.0"
 description = "Python client for the Prometheus monitoring system."
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "prometheus_client-0.19.0-py3-none-any.whl", hash = "sha256:c88b1e6ecf6b41cd8fb5731c7ae919bf66df6ec6fafa555cd6c0e16ca169ae92"},
-    {file = "prometheus_client-0.19.0.tar.gz", hash = "sha256:4585b0d1223148c27a225b10dbec5ae9bc4c81a99a3fa80774fa6209935324e1"},
+    {file = "prometheus_client-0.20.0-py3-none-any.whl", hash = "sha256:cde524a85bce83ca359cc837f28b8c0db5cac7aa653a588fd7e84ba061c329e7"},
+    {file = "prometheus_client-0.20.0.tar.gz", hash = "sha256:287629d00b147a32dcb2be0b9df905da599b2d82f80377083ec8463309a4bb89"},
 ]
 
 [package.extras]
@@ -1915,12 +1903,12 @@ plugins = ["importlib-metadata"]
 
 [[package]]
 name = "pyicu"
-version = "2.13"
+version = "2.13.1"
 description = "Python extension wrapping the ICU C++ API"
 optional = true
 python-versions = "*"
 files = [
-    {file = "PyICU-2.13.tar.gz", hash = "sha256:d481be888975df3097c2790241bbe8518f65c9676a74957cdbe790e559c828f6"},
+    {file = "PyICU-2.13.1.tar.gz", hash = "sha256:d4919085eaa07da12bade8ee721e7bbf7ade0151ca0f82946a26c8f4b98cdceb"},
 ]
 
 [[package]]
@@ -1997,13 +1985,13 @@ tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"]
 
 [[package]]
 name = "pyopenssl"
-version = "24.0.0"
+version = "24.1.0"
 description = "Python wrapper module around the OpenSSL library"
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "pyOpenSSL-24.0.0-py3-none-any.whl", hash = "sha256:ba07553fb6fd6a7a2259adb9b84e12302a9a8a75c44046e8bb5d3e5ee887e3c3"},
-    {file = "pyOpenSSL-24.0.0.tar.gz", hash = "sha256:6aa33039a93fffa4563e655b61d11364d01264be8ccb49906101e02a334530bf"},
+    {file = "pyOpenSSL-24.1.0-py3-none-any.whl", hash = "sha256:17ed5be5936449c5418d1cd269a1a9e9081bc54c17aed272b45856a3d3dc86ad"},
+    {file = "pyOpenSSL-24.1.0.tar.gz", hash = "sha256:cabed4bfaa5df9f1a16c0ef64a0cb65318b5cd077a7eda7d6970131ca2f41a6f"},
 ]
 
 [package.dependencies]
@@ -2011,7 +1999,7 @@ cryptography = ">=41.0.5,<43"
 
 [package.extras]
 docs = ["sphinx (!=5.2.0,!=5.2.0.post0,!=7.2.5)", "sphinx-rtd-theme"]
-test = ["flaky", "pretend", "pytest (>=3.0.1)"]
+test = ["pretend", "pytest (>=3.0.1)", "pytest-rerunfailures"]
 
 [[package]]
 name = "pysaml2"
@@ -2399,13 +2387,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]
 
 [[package]]
 name = "sentry-sdk"
-version = "2.1.1"
+version = "2.3.1"
 description = "Python client for Sentry (https://sentry.io)"
 optional = true
 python-versions = ">=3.6"
 files = [
-    {file = "sentry_sdk-2.1.1-py2.py3-none-any.whl", hash = "sha256:99aeb78fb76771513bd3b2829d12613130152620768d00cd3e45ac00cb17950f"},
-    {file = "sentry_sdk-2.1.1.tar.gz", hash = "sha256:95d8c0bb41c8b0bc37ab202c2c4a295bb84398ee05f4cdce55051cd75b926ec1"},
+    {file = "sentry_sdk-2.3.1-py2.py3-none-any.whl", hash = "sha256:c5aeb095ba226391d337dd42a6f9470d86c9fc236ecc71cfc7cd1942b45010c6"},
+    {file = "sentry_sdk-2.3.1.tar.gz", hash = "sha256:139a71a19f5e9eb5d3623942491ce03cf8ebc14ea2e39ba3e6fe79560d8a5b1f"},
 ]
 
 [package.dependencies]
@@ -2427,7 +2415,7 @@ django = ["django (>=1.8)"]
 falcon = ["falcon (>=1.4)"]
 fastapi = ["fastapi (>=0.79.0)"]
 flask = ["blinker (>=1.1)", "flask (>=0.11)", "markupsafe"]
-grpcio = ["grpcio (>=1.21.1)"]
+grpcio = ["grpcio (>=1.21.1)", "protobuf (>=3.8.0)"]
 httpx = ["httpx (>=0.16.0)"]
 huey = ["huey (>=2)"]
 huggingface-hub = ["huggingface-hub (>=0.22)"]
@@ -2782,6 +2770,20 @@ files = [
 [package.dependencies]
 types-html5lib = "*"
 
+[[package]]
+name = "types-cffi"
+version = "1.16.0.20240331"
+description = "Typing stubs for cffi"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "types-cffi-1.16.0.20240331.tar.gz", hash = "sha256:b8b20d23a2b89cfed5f8c5bc53b0cb8677c3aac6d970dbc771e28b9c698f5dee"},
+    {file = "types_cffi-1.16.0.20240331-py3-none-any.whl", hash = "sha256:a363e5ea54a4eb6a4a105d800685fde596bc318089b025b27dee09849fe41ff0"},
+]
+
+[package.dependencies]
+types-setuptools = "*"
+
 [[package]]
 name = "types-commonmark"
 version = "0.9.2.20240106"
@@ -2806,13 +2808,13 @@ files = [
 
 [[package]]
 name = "types-jsonschema"
-version = "4.21.0.20240311"
+version = "4.22.0.20240610"
 description = "Typing stubs for jsonschema"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "types-jsonschema-4.21.0.20240311.tar.gz", hash = "sha256:f7165ce70abd91df490c73b089873afd2899c5e56430ee495b64f851ad01f287"},
-    {file = "types_jsonschema-4.21.0.20240311-py3-none-any.whl", hash = "sha256:e872f5661513824edf9698f73a66c9c114713d93eab58699bd0532e7e6db5750"},
+    {file = "types-jsonschema-4.22.0.20240610.tar.gz", hash = "sha256:f82ab9fe756e3a2642ea9712c46b403ce61eb380b939b696cff3252af42f65b0"},
+    {file = "types_jsonschema-4.22.0.20240610-py3-none-any.whl", hash = "sha256:89996b9bd1928f820a0e252b2844be21cd2e55d062b6fa1048d88453006ad89e"},
 ]
 
 [package.dependencies]
@@ -2842,13 +2844,13 @@ files = [
 
 [[package]]
 name = "types-pillow"
-version = "10.2.0.20240423"
+version = "10.2.0.20240520"
 description = "Typing stubs for Pillow"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "types-Pillow-10.2.0.20240423.tar.gz", hash = "sha256:696e68b9b6a58548fc307a8669830469237c5b11809ddf978ac77fafa79251cd"},
-    {file = "types_Pillow-10.2.0.20240423-py3-none-any.whl", hash = "sha256:bd12923093b96c91d523efcdb66967a307f1a843bcfaf2d5a529146c10a9ced3"},
+    {file = "types-Pillow-10.2.0.20240520.tar.gz", hash = "sha256:130b979195465fa1e1676d8e81c9c7c30319e8e95b12fae945e8f0d525213107"},
+    {file = "types_Pillow-10.2.0.20240520-py3-none-any.whl", hash = "sha256:33c36494b380e2a269bb742181bea5d9b00820367822dbd3760f07210a1da23d"},
 ]
 
 [[package]]
@@ -2864,17 +2866,18 @@ files = [
 
 [[package]]
 name = "types-pyopenssl"
-version = "24.0.0.20240311"
+version = "24.1.0.20240425"
 description = "Typing stubs for pyOpenSSL"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "types-pyOpenSSL-24.0.0.20240311.tar.gz", hash = "sha256:7bca00cfc4e7ef9c5d2663c6a1c068c35798e59670595439f6296e7ba3d58083"},
-    {file = "types_pyOpenSSL-24.0.0.20240311-py3-none-any.whl", hash = "sha256:6e8e8bfad34924067333232c93f7fc4b369856d8bea0d5c9d1808cb290ab1972"},
+    {file = "types-pyOpenSSL-24.1.0.20240425.tar.gz", hash = "sha256:0a7e82626c1983dc8dc59292bf20654a51c3c3881bcbb9b337c1da6e32f0204e"},
+    {file = "types_pyOpenSSL-24.1.0.20240425-py3-none-any.whl", hash = "sha256:f51a156835555dd2a1f025621e8c4fbe7493470331afeef96884d1d29bf3a473"},
 ]
 
 [package.dependencies]
 cryptography = ">=35.0.0"
+types-cffi = "*"
 
 [[package]]
 name = "types-pyyaml"
@@ -3184,4 +3187,4 @@ user-search = ["pyicu"]
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.8.0"
-content-hash = "987f8eccaa222367b1a2e15b0d496586ca50d46ca1277e69694922d31c93ce5b"
+content-hash = "107c8fb5c67360340854fbdba3c085fc5f9c7be24bcb592596a914eea621faea"

@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"
 
 [tool.poetry]
 name = "matrix-synapse"
-version = "1.108.0rc1"
+version = "1.109.0rc2"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"
@@ -200,10 +200,8 @@ netaddr = ">=0.7.18"
 # add a lower bound to the Jinja2 dependency.
 Jinja2 = ">=3.0"
 bleach = ">=1.4.3"
-# We use `ParamSpec` and `Concatenate`, which were added in `typing-extensions` 3.10.0.0.
-# Additionally we need https://github.com/python/typing/pull/817 to allow types to be
-# generic over ParamSpecs.
-typing-extensions = ">=3.10.0.1"
+# We use `Self`, which were added in `typing-extensions` 4.0.
+typing-extensions = ">=4.0"
 # We enforce that we have a `cryptography` version that bundles an `openssl`
 # with the latest security patches.
 cryptography = ">=3.4.7"

@@ -777,22 +777,74 @@ class Porter:
         await self._setup_events_stream_seqs()
         await self._setup_sequence(
             "un_partial_stated_event_stream_sequence",
-            ("un_partial_stated_event_stream",),
+            [("un_partial_stated_event_stream", "stream_id")],
         )
         await self._setup_sequence(
-            "device_inbox_sequence", ("device_inbox", "device_federation_outbox")
+            "device_inbox_sequence",
+            [
+                ("device_inbox", "stream_id"),
+                ("device_federation_outbox", "stream_id"),
+            ],
         )
         await self._setup_sequence(
             "account_data_sequence",
-            ("room_account_data", "room_tags_revisions", "account_data"),
+            [
+                ("room_account_data", "stream_id"),
+                ("room_tags_revisions", "stream_id"),
+                ("account_data", "stream_id"),
+            ],
+        )
+        await self._setup_sequence(
+            "receipts_sequence",
+            [
+                ("receipts_linearized", "stream_id"),
+            ],
+        )
+        await self._setup_sequence(
+            "presence_stream_sequence",
+            [
+                ("presence_stream", "stream_id"),
+            ],
         )
-        await self._setup_sequence("receipts_sequence", ("receipts_linearized",))
-        await self._setup_sequence("presence_stream_sequence", ("presence_stream",))
         await self._setup_auth_chain_sequence()
         await self._setup_sequence(
             "application_services_txn_id_seq",
-            ("application_services_txns",),
-            "txn_id",
+            [
+                (
+                    "application_services_txns",
+                    "txn_id",
+                )
+            ],
+        )
+        await self._setup_sequence(
+            "device_lists_sequence",
+            [
+                ("device_lists_stream", "stream_id"),
+                ("user_signature_stream", "stream_id"),
+                ("device_lists_outbound_pokes", "stream_id"),
+                ("device_lists_changes_in_room", "stream_id"),
+                ("device_lists_remote_pending", "stream_id"),
+                ("device_lists_changes_converted_stream_position", "stream_id"),
+            ],
+        )
+        await self._setup_sequence(
+            "e2e_cross_signing_keys_sequence",
+            [
+                ("e2e_cross_signing_keys", "stream_id"),
+            ],
+        )
+        await self._setup_sequence(
+            "push_rules_stream_sequence",
+            [
+                ("push_rules_stream", "stream_id"),
+            ],
+        )
+        await self._setup_sequence(
+            "pushers_sequence",
+            [
+                ("pushers", "id"),
+                ("deleted_pushers", "stream_id"),
+            ],
         )
 
         # Step 3. Get tables.
@@ -1101,12 +1153,11 @@ class Porter:
     async def _setup_sequence(
         self,
         sequence_name: str,
-        stream_id_tables: Iterable[str],
-        column_name: str = "stream_id",
+        stream_id_tables: Iterable[Tuple[str, str]],
     ) -> None:
         """Set a sequence to the correct value."""
         current_stream_ids = []
-        for stream_id_table in stream_id_tables:
+        for stream_id_table, column_name in stream_id_tables:
             max_stream_id = cast(
                 int,
                 await self.sqlite_store.db_pool.simple_select_one_onecol(

@@ -50,7 +50,7 @@ class Membership:
     KNOCK: Final = "knock"
     LEAVE: Final = "leave"
     BAN: Final = "ban"
-    LIST: Final = (INVITE, JOIN, KNOCK, LEAVE, BAN)
+    LIST: Final = {INVITE, JOIN, KNOCK, LEAVE, BAN}
 
 
 class PresenceState:

@@ -681,17 +681,17 @@ def setup_sentry(hs: "HomeServer") -> None:
     )
 
     # We set some default tags that give some context to this instance
-    with sentry_sdk.configure_scope() as scope:
-        scope.set_tag("matrix_server_name", hs.config.server.server_name)
+    global_scope = sentry_sdk.Scope.get_global_scope()
+    global_scope.set_tag("matrix_server_name", hs.config.server.server_name)
 
     app = (
         hs.config.worker.worker_app
         if hs.config.worker.worker_app
         else "synapse.app.homeserver"
     )
     name = hs.get_instance_name()
-    scope.set_tag("worker_app", app)
-    scope.set_tag("worker_name", name)
+    global_scope.set_tag("worker_app", app)
+    global_scope.set_tag("worker_name", name)
 
 
 def setup_sdnotify(hs: "HomeServer") -> None:

@@ -443,3 +443,6 @@ class ExperimentalConfig(Config):
         self.msc3916_authenticated_media_enabled = experimental.get(
             "msc3916_authenticated_media_enabled", False
         )
+
+        # MSC4151: Report room API (Client-Server API)
+        self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)

@@ -218,3 +218,13 @@ class RatelimitConfig(Config):
             "rc_media_create",
             defaults={"per_second": 10, "burst_count": 50},
         )
+
+        self.remote_media_downloads = RatelimitSettings(
+            key="rc_remote_media_downloads",
+            per_second=self.parse_size(
+                config.get("remote_media_download_per_second", "87K")
+            ),
+            burst_count=self.parse_size(
+                config.get("remote_media_download_burst_count", "500M")
+            ),
+        )

@@ -47,9 +47,9 @@ from synapse.events.utils import (
     validate_canonicaljson,
 )
 from synapse.http.servlet import validate_json_object
-from synapse.rest.models import RequestBodyModel
 from synapse.storage.controllers.state import server_acl_evaluator_from_event
 from synapse.types import EventID, JsonDict, RoomID, StrCollection, UserID
+from synapse.types.rest import RequestBodyModel
 
 
 class EventValidator:

@@ -56,6 +56,7 @@ from synapse.api.errors import (
     SynapseError,
     UnsupportedRoomVersionError,
 )
+from synapse.api.ratelimiting import Ratelimiter
 from synapse.api.room_versions import (
     KNOWN_ROOM_VERSIONS,
     EventFormatVersions,
@@ -1877,6 +1878,8 @@ class FederationClient(FederationBase):
         output_stream: BinaryIO,
         max_size: int,
         max_timeout_ms: int,
+        download_ratelimiter: Ratelimiter,
+        ip_address: str,
     ) -> Tuple[int, Dict[bytes, List[bytes]]]:
         try:
             return await self.transport_layer.download_media_v3(
@@ -1885,6 +1888,8 @@ class FederationClient(FederationBase):
                 output_stream=output_stream,
                 max_size=max_size,
                 max_timeout_ms=max_timeout_ms,
+                download_ratelimiter=download_ratelimiter,
+                ip_address=ip_address,
             )
         except HttpResponseException as e:
             # If an error is received that is due to an unrecognised endpoint,
@@ -1905,6 +1910,8 @@ class FederationClient(FederationBase):
                 output_stream=output_stream,
                 max_size=max_size,
                 max_timeout_ms=max_timeout_ms,
+                download_ratelimiter=download_ratelimiter,
+                ip_address=ip_address,
             )
 
 

@@ -674,7 +674,7 @@ class FederationServer(FederationBase):
         # This is in addition to the HS-level rate limiting applied by
         # BaseFederationServlet.
         # type-ignore: mypy doesn't seem able to deduce the type of the limiter(!?)
-        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(  # type: ignore[has-type]
+        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(
             requester=None,
             key=room_id,
             update=False,
@@ -717,7 +717,7 @@ class FederationServer(FederationBase):
             SynapseTags.SEND_JOIN_RESPONSE_IS_PARTIAL_STATE,
             caller_supports_partial_state,
         )
-        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(  # type: ignore[has-type]
+        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(
            requester=None,
            key=room_id,
            update=False,

@@ -43,6 +43,7 @@ import ijson
 
 from synapse.api.constants import Direction, Membership
 from synapse.api.errors import Codes, HttpResponseException, SynapseError
+from synapse.api.ratelimiting import Ratelimiter
 from synapse.api.room_versions import RoomVersion
 from synapse.api.urls import (
     FEDERATION_UNSTABLE_PREFIX,
@@ -819,6 +820,8 @@ class TransportLayerClient:
         output_stream: BinaryIO,
         max_size: int,
         max_timeout_ms: int,
+        download_ratelimiter: Ratelimiter,
+        ip_address: str,
     ) -> Tuple[int, Dict[bytes, List[bytes]]]:
         path = f"/_matrix/media/r0/download/{destination}/{media_id}"
 
@@ -834,6 +837,8 @@ class TransportLayerClient:
                 "allow_remote": "false",
                 "timeout_ms": str(max_timeout_ms),
             },
+            download_ratelimiter=download_ratelimiter,
+            ip_address=ip_address,
         )
 
     async def download_media_v3(
@@ -843,6 +848,8 @@ class TransportLayerClient:
         output_stream: BinaryIO,
         max_size: int,
         max_timeout_ms: int,
+        download_ratelimiter: Ratelimiter,
+        ip_address: str,
     ) -> Tuple[int, Dict[bytes, List[bytes]]]:
         path = f"/_matrix/media/v3/download/{destination}/{media_id}"
 
@@ -862,6 +869,8 @@ class TransportLayerClient:
                 "allow_redirect": "true",
             },
             follow_redirects=True,
+            download_ratelimiter=download_ratelimiter,
+            ip_address=ip_address,
         )
 
 

@@ -19,6 +19,7 @@
 # [This file includes modifications made by New Vector Limited]
 #
 #
+import inspect
 import logging
 from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple, Type
 
@@ -33,6 +34,7 @@ from synapse.federation.transport.server.federation import (
     FEDERATION_SERVLET_CLASSES,
     FederationAccountStatusServlet,
     FederationUnstableClientKeysClaimServlet,
+    FederationUnstableMediaDownloadServlet,
 )
 from synapse.http.server import HttpServer, JsonResource
 from synapse.http.servlet import (
@@ -315,6 +317,28 @@ def register_servlets(
         ):
             continue
 
+        if servletclass == FederationUnstableMediaDownloadServlet:
+            if (
+                not hs.config.server.enable_media_repo
+                or not hs.config.experimental.msc3916_authenticated_media_enabled
+            ):
+                continue
+
+            # don't load the endpoint if the storage provider is incompatible
+            media_repo = hs.get_media_repository()
+            load_download_endpoint = True
+            for provider in media_repo.media_storage.storage_providers:
+                signature = inspect.signature(provider.backend.fetch)
+                if "federation" not in signature.parameters:
+                    logger.warning(
+                        f"Federation media `/download` endpoint will not be enabled as storage provider {provider.backend} is not compatible with this endpoint."
+                    )
+                    load_download_endpoint = False
+                    break
+
+            if not load_download_endpoint:
+                continue
+
         servletclass(
             hs=hs,
             authenticator=authenticator,

@@ -360,13 +360,29 @@ class BaseFederationServlet:
                                 "request"
                             )
                             return None
+                        if (
+                            func.__self__.__class__.__name__  # type: ignore
+                            == "FederationUnstableMediaDownloadServlet"
+                        ):
+                            response = await func(
+                                origin, content, request, *args, **kwargs
+                            )
+                        else:
+                            response = await func(
+                                origin, content, request.args, *args, **kwargs
+                            )
+                else:
+                    if (
+                        func.__self__.__class__.__name__  # type: ignore
+                        == "FederationUnstableMediaDownloadServlet"
+                    ):
+                        response = await func(
+                            origin, content, request, *args, **kwargs
+                        )
+                    else:
                         response = await func(
                             origin, content, request.args, *args, **kwargs
                         )
-                else:
-                    response = await func(
-                        origin, content, request.args, *args, **kwargs
-                    )
             finally:
                 # if we used the origin's context as the parent, add a new span using
                 # the servlet span as a parent, so that we have a link

@@ -44,10 +44,13 @@ from synapse.federation.transport.server._base import (
 )
 from synapse.http.servlet import (
     parse_boolean_from_args,
+    parse_integer,
     parse_integer_from_args,
     parse_string_from_args,
     parse_strings_from_args,
 )
+from synapse.http.site import SynapseRequest
+from synapse.media._base import DEFAULT_MAX_TIMEOUT_MS, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
 from synapse.types import JsonDict
 from synapse.util import SYNAPSE_VERSION
 from synapse.util.ratelimitutils import FederationRateLimiter
|
@@ -787,6 +790,43 @@ class FederationAccountStatusServlet(BaseFederationServerServlet):
         return 200, {"account_statuses": statuses, "failures": failures}


+class FederationUnstableMediaDownloadServlet(BaseFederationServerServlet):
+    """
+    Implementation of new federation media `/download` endpoint outlined in MSC3916. Returns
+    a multipart/form-data response consisting of a JSON object and the requested media
+    item. This endpoint only returns local media.
+    """
+
+    PATH = "/media/download/(?P<media_id>[^/]*)"
+    PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc3916"
+    RATELIMIT = True
+
+    def __init__(
+        self,
+        hs: "HomeServer",
+        ratelimiter: FederationRateLimiter,
+        authenticator: Authenticator,
+        server_name: str,
+    ):
+        super().__init__(hs, authenticator, ratelimiter, server_name)
+        self.media_repo = self.hs.get_media_repository()
+
+    async def on_GET(
+        self,
+        origin: Optional[str],
+        content: Literal[None],
+        request: SynapseRequest,
+        media_id: str,
+    ) -> None:
+        max_timeout_ms = parse_integer(
+            request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
+        )
+        max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
+        await self.media_repo.get_local_media(
+            request, media_id, None, max_timeout_ms, federation=True
+        )
+
+
 FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
     FederationSendServlet,
     FederationEventServlet,

@@ -818,4 +858,5 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
     FederationV1SendKnockServlet,
     FederationMakeKnockServlet,
     FederationAccountStatusServlet,
+    FederationUnstableMediaDownloadServlet,
 )
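The new servlet's docstring says the endpoint answers with a multipart response carrying a JSON object plus the media item. As a rough client-side sketch of splitting such a body with the standard-library `email` parser (the boundary, part order, and content types below are illustrative, not the exact MSC3916 wire format):

```python
import json
from email.parser import BytesParser
from email.policy import default

boundary = "gc0p4Jq0M2Yt08jU534c0p"
# Hand-built body standing in for a federation /download response:
# a JSON part followed by the raw media part.
body = (
    f"--{boundary}\r\n"
    "Content-Type: application/json\r\n\r\n"
    "{}\r\n"
    f"--{boundary}\r\n"
    "Content-Type: application/octet-stream\r\n\r\n"
    "media-bytes\r\n"
    f"--{boundary}--\r\n"
).encode()

# Prepend a top-level Content-Type header so the MIME parser treats the
# payload as multipart and splits it on the boundary.
msg = BytesParser(policy=default).parsebytes(
    b"Content-Type: multipart/mixed; boundary=" + boundary.encode()
    + b"\r\n\r\n" + body
)
parts = list(msg.iter_parts())
metadata = json.loads(parts[0].get_content())  # the JSON part
media = parts[1].get_content()  # the media part, returned as bytes
```

A real client would take the boundary from the response's `Content-Type` header rather than hard-coding it.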
@@ -126,13 +126,7 @@ class AdminHandler:
         # Get all rooms the user is in or has been in
         rooms = await self._store.get_rooms_for_local_user_where_membership_is(
             user_id,
-            membership_list=(
-                Membership.JOIN,
-                Membership.LEAVE,
-                Membership.BAN,
-                Membership.INVITE,
-                Membership.KNOCK,
-            ),
+            membership_list=Membership.LIST,
         )

         # We only try and fetch events for rooms the user has been in. If

@@ -179,7 +173,7 @@ class AdminHandler:
         if room.membership == Membership.JOIN:
             stream_ordering = self._store.get_room_max_stream_ordering()
         else:
-            stream_ordering = room.stream_ordering
+            stream_ordering = room.event_pos.stream

         from_key = RoomStreamToken(topological=0, stream=0)
         to_key = RoomStreamToken(stream=stream_ordering)
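The admin hunk above collapses the hand-written membership tuple into `Membership.LIST`. A sketch of why that is behaviour-preserving, using a simplified stand-in for `synapse.api.constants.Membership` (the values match the Matrix membership states, but the class here is not Synapse's):

```python
# Simplified stand-in for synapse.api.constants.Membership: LIST gathers
# every membership value so call sites don't re-enumerate them.
class Membership:
    INVITE = "invite"
    JOIN = "join"
    KNOCK = "knock"
    LEAVE = "leave"
    BAN = "ban"
    LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)

# The tuple the old AdminHandler code spelled out by hand:
explicit = (
    Membership.JOIN,
    Membership.LEAVE,
    Membership.BAN,
    Membership.INVITE,
    Membership.KNOCK,
)

# Order differs, but the set of membership states queried is unchanged.
same_states = set(explicit) == set(Membership.LIST)
```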
@@ -236,6 +236,13 @@ class DeviceMessageHandler:
         local_messages = {}
         remote_messages: Dict[str, Dict[str, Dict[str, JsonDict]]] = {}
         for user_id, by_device in messages.items():
+            if not UserID.is_valid(user_id):
+                logger.warning(
+                    "Ignoring attempt to send device message to invalid user: %r",
+                    user_id,
+                )
+                continue
+
             # add an opentracing log entry for each message
             for device_id, message_content in by_device.items():
                 log_kv(
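The hunk above drops to-device messages addressed to invalid user IDs instead of trying to route them. A rough sketch of that guard with a simplified validity check (Synapse's `UserID.is_valid` is stricter than this shape test):

```python
def is_valid_user_id(s: str) -> bool:
    # Matrix user IDs look like "@localpart:domain"; this only checks the
    # overall shape, not the allowed character sets.
    if not s.startswith("@") or ":" not in s:
        return False
    localpart, domain = s[1:].split(":", 1)
    return bool(localpart) and bool(domain)

messages = {
    "@alice:example.org": {"DEVICE": {"body": "hi"}},
    "not-a-user-id": {"DEVICE": {"body": "dropped"}},
}

# Mirror of the loop's early `continue`: invalid recipients are skipped.
delivered = {u: m for u, m in messages.items() if is_valid_user_id(u)}
```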
@@ -35,6 +35,7 @@ from synapse.api.errors import CodeMessageException, Codes, NotFoundError, Synap
 from synapse.handlers.device import DeviceHandler
 from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace
+from synapse.replication.http.devices import ReplicationUploadKeysForUserRestServlet
 from synapse.types import (
     JsonDict,
     JsonMapping,

@@ -45,7 +46,10 @@ from synapse.types import (
 from synapse.util import json_decoder
 from synapse.util.async_helpers import Linearizer, concurrently_execute
 from synapse.util.cancellation import cancellable
-from synapse.util.retryutils import NotRetryingDestination
+from synapse.util.retryutils import (
+    NotRetryingDestination,
+    filter_destinations_by_retry_limiter,
+)

 if TYPE_CHECKING:
     from synapse.server import HomeServer

@@ -53,6 +57,9 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)


+ONE_TIME_KEY_UPLOAD = "one_time_key_upload_lock"
+
+
 class E2eKeysHandler:
     def __init__(self, hs: "HomeServer"):
         self.config = hs.config

@@ -62,6 +69,7 @@ class E2eKeysHandler:
         self._appservice_handler = hs.get_application_service_handler()
         self.is_mine = hs.is_mine
         self.clock = hs.get_clock()
+        self._worker_lock_handler = hs.get_worker_locks_handler()

         federation_registry = hs.get_federation_registry()

@@ -82,6 +90,12 @@ class E2eKeysHandler:
                 edu_updater.incoming_signing_key_update,
             )

+            self.device_key_uploader = self.upload_device_keys_for_user
+        else:
+            self.device_key_uploader = (
+                ReplicationUploadKeysForUserRestServlet.make_client(hs)
+            )
+
         # doesn't really work as part of the generic query API, because the
         # query request requires an object POST, but we abuse the
         # "query handler" interface.

@@ -145,6 +159,11 @@ class E2eKeysHandler:
         remote_queries = {}

         for user_id, device_ids in device_keys_query.items():
+            if not UserID.is_valid(user_id):
+                # Ignore invalid user IDs, which is the same behaviour as if
+                # the user existed but had no keys.
+                continue
+
             # we use UserID.from_string to catch invalid user ids
             if self.is_mine(UserID.from_string(user_id)):
                 local_query[user_id] = device_ids

@@ -259,10 +278,8 @@ class E2eKeysHandler:
             "%d destinations to query devices for", len(remote_queries_not_in_cache)
         )

-        async def _query(
-            destination_queries: Tuple[str, Dict[str, Iterable[str]]]
-        ) -> None:
-            destination, queries = destination_queries
+        async def _query(destination: str) -> None:
+            queries = remote_queries_not_in_cache[destination]
             return await self._query_devices_for_destination(
                 results,
                 cross_signing_keys,

@@ -272,9 +289,20 @@ class E2eKeysHandler:
                 timeout,
             )

+        # Only try and fetch keys for destinations that are not marked as
+        # down.
+        filtered_destinations = await filter_destinations_by_retry_limiter(
+            remote_queries_not_in_cache.keys(),
+            self.clock,
+            self.store,
+            # Let's give an arbitrary grace period for those hosts that are
+            # only recently down
+            retry_due_within_ms=60 * 1000,
+        )
+
         await concurrently_execute(
             _query,
-            remote_queries_not_in_cache.items(),
+            filtered_destinations,
             10,
             delay_cancellation=True,
         )
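Before fanning out device-list queries, the hunk above filters out destinations whose retry limiter says they are down. An in-memory sketch of that filter (the real `filter_destinations_by_retry_limiter` consults the database; `retry_due_at_ms` here is an assumed stand-in for that state):

```python
RETRY_DUE_WITHIN_MS = 60 * 1000  # same grace period as the hunk above

def filter_destinations(destinations, retry_due_at_ms, now_ms):
    kept = []
    for dest in destinations:
        due = retry_due_at_ms.get(dest)
        # Keep hosts that never failed, plus hosts whose next retry is due
        # now or within the grace period.
        if due is None or due <= now_ms + RETRY_DUE_WITHIN_MS:
            kept.append(dest)
    return kept

now = 1_000_000
retry_due_at_ms = {
    "down.example": now + 600_000,   # not retryable for 10 minutes: skip
    "flaky.example": now + 30_000,   # due within the grace period: keep
}
kept = filter_destinations(
    ["ok.example", "down.example", "flaky.example"], retry_due_at_ms, now
)
```

Skipping known-down hosts keeps the `concurrently_execute` fan-out from burning its concurrency slots on destinations that will fail fast anyway.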
@@ -775,36 +803,17 @@ class E2eKeysHandler:
             "one_time_keys": A mapping from algorithm to number of keys for that
                 algorithm, including those previously persisted.
         """
-        # This can only be called from the main process.
-        assert isinstance(self.device_handler, DeviceHandler)
-
         time_now = self.clock.time_msec()

         # TODO: Validate the JSON to make sure it has the right keys.
         device_keys = keys.get("device_keys", None)
         if device_keys:
-            logger.info(
-                "Updating device_keys for device %r for user %s at %d",
-                device_id,
-                user_id,
-                time_now,
-            )
-            log_kv(
-                {
-                    "message": "Updating device_keys for user.",
-                    "user_id": user_id,
-                    "device_id": device_id,
-                }
-            )
-            # TODO: Sign the JSON with the server key
-            changed = await self.store.set_e2e_device_keys(
-                user_id, device_id, time_now, device_keys
-            )
-            if changed:
-                # Only notify about device updates *if* the keys actually changed
-                await self.device_handler.notify_device_update(user_id, [device_id])
-        else:
-            log_kv({"message": "Not updating device_keys for user", "user_id": user_id})
+            await self.device_key_uploader(
+                user_id=user_id,
+                device_id=device_id,
+                keys={"device_keys": device_keys},
+            )
         one_time_keys = keys.get("one_time_keys", None)
         if one_time_keys:
             log_kv(

@@ -840,6 +849,49 @@ class E2eKeysHandler:
             {"message": "Did not update fallback_keys", "reason": "no keys given"}
         )

+        result = await self.store.count_e2e_one_time_keys(user_id, device_id)
+
+        set_tag("one_time_key_counts", str(result))
+        return {"one_time_key_counts": result}
+
+    @tag_args
+    async def upload_device_keys_for_user(
+        self, user_id: str, device_id: str, keys: JsonDict
+    ) -> None:
+        """
+        Args:
+            user_id: user whose keys are being uploaded.
+            device_id: device whose keys are being uploaded.
+            device_keys: the `device_keys` of an /keys/upload request.
+
+        """
+        # This can only be called from the main process.
+        assert isinstance(self.device_handler, DeviceHandler)
+
+        time_now = self.clock.time_msec()
+
+        device_keys = keys["device_keys"]
+        logger.info(
+            "Updating device_keys for device %r for user %s at %d",
+            device_id,
+            user_id,
+            time_now,
+        )
+        log_kv(
+            {
+                "message": "Updating device_keys for user.",
+                "user_id": user_id,
+                "device_id": device_id,
+            }
+        )
+        # TODO: Sign the JSON with the server key
+        changed = await self.store.set_e2e_device_keys(
+            user_id, device_id, time_now, device_keys
+        )
+        if changed:
+            # Only notify about device updates *if* the keys actually changed
+            await self.device_handler.notify_device_update(user_id, [device_id])
+
         # the device should have been registered already, but it may have been
         # deleted due to a race with a DELETE request. Or we may be using an
         # old access_token without an associated device_id. Either way, we

@@ -847,53 +899,56 @@ class E2eKeysHandler:
         # keys without a corresponding device.
         await self.device_handler.check_device_registered(user_id, device_id)

-        result = await self.store.count_e2e_one_time_keys(user_id, device_id)
-
-        set_tag("one_time_key_counts", str(result))
-        return {"one_time_key_counts": result}
-
     async def _upload_one_time_keys_for_user(
         self, user_id: str, device_id: str, time_now: int, one_time_keys: JsonDict
     ) -> None:
-        logger.info(
-            "Adding one_time_keys %r for device %r for user %r at %d",
-            one_time_keys.keys(),
-            device_id,
-            user_id,
-            time_now,
-        )
+        # We take out a lock so that we don't have to worry about a client
+        # sending duplicate requests.
+        lock_key = f"{user_id}_{device_id}"
+        async with self._worker_lock_handler.acquire_lock(
+            ONE_TIME_KEY_UPLOAD, lock_key
+        ):
+            logger.info(
+                "Adding one_time_keys %r for device %r for user %r at %d",
+                one_time_keys.keys(),
+                device_id,
+                user_id,
+                time_now,
+            )

             # make a list of (alg, id, key) tuples
             key_list = []
             for key_id, key_obj in one_time_keys.items():
                 algorithm, key_id = key_id.split(":")
                 key_list.append((algorithm, key_id, key_obj))

             # First we check if we have already persisted any of the keys.
             existing_key_map = await self.store.get_e2e_one_time_keys(
                 user_id, device_id, [k_id for _, k_id, _ in key_list]
             )

             new_keys = []  # Keys that we need to insert. (alg, id, json) tuples.
             for algorithm, key_id, key in key_list:
                 ex_json = existing_key_map.get((algorithm, key_id), None)
                 if ex_json:
                     if not _one_time_keys_match(ex_json, key):
                         raise SynapseError(
                             400,
                             (
                                 "One time key %s:%s already exists. "
                                 "Old key: %s; new key: %r"
+                            )
+                            % (algorithm, key_id, ex_json, key),
                         )
-                        % (algorithm, key_id, ex_json, key),
-                    )
-                else:
-                    new_keys.append(
-                        (algorithm, key_id, encode_canonical_json(key).decode("ascii"))
-                    )
+                else:
+                    new_keys.append(
+                        (algorithm, key_id, encode_canonical_json(key).decode("ascii"))
+                    )

             log_kv({"message": "Inserting new one_time_keys.", "keys": new_keys})
-        await self.store.add_e2e_one_time_keys(user_id, device_id, time_now, new_keys)
+            await self.store.add_e2e_one_time_keys(
+                user_id, device_id, time_now, new_keys
+            )

     async def upload_signing_keys_for_user(
         self, user_id: str, keys: JsonDict
@@ -247,6 +247,12 @@ class E2eRoomKeysHandler:
                 if current_room_key:
                     if self._should_replace_room_key(current_room_key, room_key):
                         log_kv({"message": "Replacing room key."})
+                        logger.debug(
+                            "Replacing room key. room=%s session=%s user=%s",
+                            room_id,
+                            session_id,
+                            user_id,
+                        )
                         # updates are done one at a time in the DB, so send
                         # updates right away rather than batching them up,
                         # like we do with the inserts

@@ -256,6 +262,12 @@ class E2eRoomKeysHandler:
                         changed = True
                     else:
                         log_kv({"message": "Not replacing room_key."})
+                        logger.debug(
+                            "Not replacing room key. room=%s session=%s user=%s",
+                            room_id,
+                            session_id,
+                            user_id,
+                        )
                 else:
                     log_kv(
                         {

@@ -265,6 +277,12 @@ class E2eRoomKeysHandler:
                         }
                     )
                     log_kv({"message": "Replacing room key."})
+                    logger.debug(
+                        "Inserting new room key. room=%s session=%s user=%s",
+                        room_id,
+                        session_id,
+                        user_id,
+                    )
                     to_insert.append((room_id, session_id, room_key))
                     changed = True
@@ -199,7 +199,7 @@ class InitialSyncHandler:
                 )
             elif event.membership == Membership.LEAVE:
                 room_end_token = RoomStreamToken(
-                    stream=event.stream_ordering,
+                    stream=event.event_pos.stream,
                 )
                 deferred_room_state = run_in_background(
                     self._state_storage_controller.get_state_for_events,
@@ -496,13 +496,6 @@ class EventCreationHandler:

         self.room_prejoin_state_types = self.hs.config.api.room_prejoin_state

-        self.membership_types_to_include_profile_data_in = {
-            Membership.JOIN,
-            Membership.KNOCK,
-        }
-        if self.hs.config.server.include_profile_data_on_invite:
-            self.membership_types_to_include_profile_data_in.add(Membership.INVITE)
-
         self.send_event = ReplicationSendEventRestServlet.make_client(hs)
         self.send_events = ReplicationSendEventsRestServlet.make_client(hs)

@@ -594,8 +587,6 @@ class EventCreationHandler:
         Creates an FrozenEvent object, filling out auth_events, prev_events,
         etc.

-        Adds display names to Join membership events.
-
         Args:
             requester
             event_dict: An entire event

@@ -672,29 +663,6 @@ class EventCreationHandler:

         self.validator.validate_builder(builder)

-        if builder.type == EventTypes.Member:
-            membership = builder.content.get("membership", None)
-            target = UserID.from_string(builder.state_key)
-
-            if membership in self.membership_types_to_include_profile_data_in:
-                # If event doesn't include a display name, add one.
-                profile = self.profile_handler
-                content = builder.content
-
-                try:
-                    if "displayname" not in content:
-                        displayname = await profile.get_displayname(target)
-                        if displayname is not None:
-                            content["displayname"] = displayname
-                    if "avatar_url" not in content:
-                        avatar_url = await profile.get_avatar_url(target)
-                        if avatar_url is not None:
-                            content["avatar_url"] = avatar_url
-                except Exception as e:
-                    logger.info(
-                        "Failed to get profile information for %r: %s", target, e
-                    )
-
         is_exempt = await self._is_exempt_from_privacy_policy(builder, requester)
         if require_consent and not is_exempt:
             await self.assert_accepted_privacy_policy(requester)
@@ -27,7 +27,6 @@ from synapse.api.constants import Direction, EventTypes, Membership
 from synapse.api.errors import SynapseError
 from synapse.api.filtering import Filter
 from synapse.events.utils import SerializeEventConfig
-from synapse.handlers.room import ShutdownRoomParams, ShutdownRoomResponse
 from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
 from synapse.logging.opentracing import trace
 from synapse.metrics.background_process_metrics import run_as_background_process

@@ -41,6 +40,7 @@ from synapse.types import (
     StreamKeyType,
     TaskStatus,
 )
+from synapse.types.handlers import ShutdownRoomParams, ShutdownRoomResponse
 from synapse.types.state import StateFilter
 from synapse.util.async_helpers import ReadWriteLock
 from synapse.visibility import filter_events_for_client
@@ -393,9 +393,9 @@ class RelationsHandler:

             # Attempt to find another event to use as the latest event.
             potential_events, _ = await self._main_store.get_relations_for_event(
+                room_id,
                 event_id,
                 event,
-                room_id,
                 RelationTypes.THREAD,
                 direction=Direction.FORWARDS,
             )
@@ -40,7 +40,6 @@ from typing import (
 )

 import attr
-from typing_extensions import TypedDict

 import synapse.events.snapshot
 from synapse.api.constants import (

@@ -88,6 +87,7 @@ from synapse.types import (
     UserID,
     create_requester,
 )
+from synapse.types.handlers import ShutdownRoomParams, ShutdownRoomResponse
 from synapse.types.state import StateFilter
 from synapse.util import stringutils
 from synapse.util.caches.response_cache import ResponseCache

@@ -1780,63 +1780,6 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
         return self.store.get_current_room_stream_token_for_room_id(room_id)


-class ShutdownRoomParams(TypedDict):
-    """
-    Attributes:
-        requester_user_id:
-            User who requested the action. Will be recorded as putting the room on the
-            blocking list.
-        new_room_user_id:
-            If set, a new room will be created with this user ID
-            as the creator and admin, and all users in the old room will be
-            moved into that room. If not set, no new room will be created
-            and the users will just be removed from the old room.
-        new_room_name:
-            A string representing the name of the room that new users will
-            be invited to. Defaults to `Content Violation Notification`
-        message:
-            A string containing the first message that will be sent as
-            `new_room_user_id` in the new room. Ideally this will clearly
-            convey why the original room was shut down.
-            Defaults to `Sharing illegal content on this server is not
-            permitted and rooms in violation will be blocked.`
-        block:
-            If set to `true`, this room will be added to a blocking list,
-            preventing future attempts to join the room. Defaults to `false`.
-        purge:
-            If set to `true`, purge the given room from the database.
-        force_purge:
-            If set to `true`, the room will be purged from database
-            even if there are still users joined to the room.
-    """
-
-    requester_user_id: Optional[str]
-    new_room_user_id: Optional[str]
-    new_room_name: Optional[str]
-    message: Optional[str]
-    block: bool
-    purge: bool
-    force_purge: bool
-
-
-class ShutdownRoomResponse(TypedDict):
-    """
-    Attributes:
-        kicked_users: An array of users (`user_id`) that were kicked.
-        failed_to_kick_users:
-            An array of users (`user_id`) that that were not kicked.
-        local_aliases:
-            An array of strings representing the local aliases that were
-            migrated from the old room to the new.
-        new_room_id: A string representing the room ID of the new room.
-    """
-
-    kicked_users: List[str]
-    failed_to_kick_users: List[str]
-    local_aliases: List[str]
-    new_room_id: Optional[str]
-
-
 class RoomShutdownHandler:
     DEFAULT_MESSAGE = (
         "Sharing illegal content on this server is not permitted and rooms in"
@ -106,6 +106,13 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
|
||||||
self.event_auth_handler = hs.get_event_auth_handler()
|
self.event_auth_handler = hs.get_event_auth_handler()
|
||||||
self._worker_lock_handler = hs.get_worker_locks_handler()
|
self._worker_lock_handler = hs.get_worker_locks_handler()
|
||||||
|
|
||||||
|
self._membership_types_to_include_profile_data_in = {
|
||||||
|
Membership.JOIN,
|
||||||
|
Membership.KNOCK,
|
||||||
|
}
|
||||||
|
if self.hs.config.server.include_profile_data_on_invite:
|
||||||
|
self._membership_types_to_include_profile_data_in.add(Membership.INVITE)
|
||||||
|
|
||||||
self.member_linearizer: Linearizer = Linearizer(name="member")
|
self.member_linearizer: Linearizer = Linearizer(name="member")
|
||||||
self.member_as_limiter = Linearizer(max_count=10, name="member_as_limiter")
|
self.member_as_limiter = Linearizer(max_count=10, name="member_as_limiter")
|
||||||
|
|
||||||
|
@@ -785,9 +792,8 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         if (
             not self.allow_per_room_profiles and not is_requester_server_notices_user
         ) or requester.shadow_banned:
-            # Strip profile data, knowing that new profile data will be added to the
-            # event's content in event_creation_handler.create_event() using the target's
-            # global profile.
+            # Strip profile data, knowing that new profile data will be added to
+            # the event's content below using the target's global profile.
             content.pop("displayname", None)
             content.pop("avatar_url", None)

@@ -823,6 +829,29 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
         if action in ["kick", "unban"]:
             effective_membership_state = "leave"

+        if effective_membership_state not in Membership.LIST:
+            raise SynapseError(400, "Invalid membership key")
+
+        # Add profile data for joins etc, if no per-room profile.
+        if (
+            effective_membership_state
+            in self._membership_types_to_include_profile_data_in
+        ):
+            # If event doesn't include a display name, add one.
+            profile = self.profile_handler
+
+            try:
+                if "displayname" not in content:
+                    displayname = await profile.get_displayname(target)
+                    if displayname is not None:
+                        content["displayname"] = displayname
+                if "avatar_url" not in content:
+                    avatar_url = await profile.get_avatar_url(target)
+                    if avatar_url is not None:
+                        content["avatar_url"] = avatar_url
+            except Exception as e:
+                logger.info("Failed to get profile information for %r: %s", target, e)
+
         # if this is a join with a 3pid signature, we may need to turn a 3pid
         # invite into a normal invite before we can handle the join.
         if third_party_signed is not None:
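The profile-backfill rule in the hunk above (client-supplied values win; lookup failures are non-fatal) can be exercised in isolation. This is a minimal sketch, not Synapse's API: `backfill_profile` and its callable parameters are hypothetical stand-ins for the async profile handler calls.

```python
from typing import Callable, Dict, Optional


def backfill_profile(
    content: Dict[str, str],
    get_displayname: Callable[[], Optional[str]],
    get_avatar_url: Callable[[], Optional[str]],
) -> Dict[str, str]:
    """Mirror the try/except block above: only fill fields the client omitted."""
    try:
        if "displayname" not in content:
            displayname = get_displayname()
            if displayname is not None:
                content["displayname"] = displayname
        if "avatar_url" not in content:
            avatar_url = get_avatar_url()
            if avatar_url is not None:
                content["avatar_url"] = avatar_url
    except Exception:
        # Profile lookup failures are non-fatal; send the event without profile data.
        pass
    return content
```

Note that a `None` profile value is simply left out rather than written as `None`, matching the diff.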
441 synapse/handlers/sliding_sync.py Normal file
@@ -0,0 +1,441 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2024 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import logging
from typing import TYPE_CHECKING, AbstractSet, Dict, List, Optional

from immutabledict import immutabledict

from synapse.api.constants import Membership
from synapse.events import EventBase
from synapse.types import Requester, RoomStreamToken, StreamToken, UserID
from synapse.types.handlers import OperationType, SlidingSyncConfig, SlidingSyncResult

if TYPE_CHECKING:
    from synapse.server import HomeServer

logger = logging.getLogger(__name__)


def filter_membership_for_sync(*, membership: str, user_id: str, sender: str) -> bool:
    """
    Returns True if the membership event should be included in the sync response,
    otherwise False.

    Args:
        membership: The membership state of the user in the room.
        user_id: The user ID that the membership applies to.
        sender: The person who sent the membership event.
    """

    # Everything except `Membership.LEAVE` because we want everything that's *still*
    # relevant to the user. There are a few more things to include in the sync
    # response (newly_left) but those are handled separately.
    #
    # This logic includes kicks (leave events where the sender is not the same user)
    # and can be read as "anything that isn't a leave, or a leave with a different
    # sender".
    return membership != Membership.LEAVE or sender != user_id
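The predicate above reduces to a single boolean rule; this standalone sketch inlines it with string literals (rather than Synapse's `Membership` constants) to show the kick case, a leave sent by someone other than the user, is kept.

```python
def keep_in_sync(membership: str, user_id: str, sender: str) -> bool:
    # "Anything that isn't a self-leave": a leave only drops out of the sync
    # response when the user left on their own; a leave sent by another user
    # is a kick and stays visible.
    return membership != "leave" or sender != user_id
```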
class SlidingSyncHandler:
    def __init__(self, hs: "HomeServer"):
        self.clock = hs.get_clock()
        self.store = hs.get_datastores().main
        self.auth_blocking = hs.get_auth_blocking()
        self.notifier = hs.get_notifier()
        self.event_sources = hs.get_event_sources()
        self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync

    async def wait_for_sync_for_user(
        self,
        requester: Requester,
        sync_config: SlidingSyncConfig,
        from_token: Optional[StreamToken] = None,
        timeout_ms: int = 0,
    ) -> SlidingSyncResult:
        """Get the sync for a client if we have new data for it now. Otherwise
        wait for new data to arrive on the server. If the timeout expires, then
        return an empty sync result.
        """
        # If the user is not part of the mau group, then check that limits have
        # not been exceeded (if not part of the group by this point, almost certain
        # auth_blocking will occur)
        await self.auth_blocking.check_auth_blocking(requester=requester)

        # TODO: If the To-Device extension is enabled and we have a `from_token`, delete
        # any to-device messages before that token (since we now know that the device
        # has received them). (see sync v2 for how to do this)

        # If we're working with a user-provided token, we need to make sure to wait for
        # this worker to catch up with the token so we don't skip past any incoming
        # events or future events if the user is nefariously, manually modifying the
        # token.
        if from_token is not None:
            # We need to make sure this worker has caught up with the token. If
            # this returns false, it means we timed out waiting, and we should
            # just return an empty response.
            before_wait_ts = self.clock.time_msec()
            if not await self.notifier.wait_for_stream_token(from_token):
                logger.warning(
                    "Timed out waiting for worker to catch up. Returning empty response"
                )
                return SlidingSyncResult.empty(from_token)

            # If we've spent significant time waiting to catch up, take it off
            # the timeout.
            after_wait_ts = self.clock.time_msec()
            if after_wait_ts - before_wait_ts > 1_000:
                timeout_ms -= after_wait_ts - before_wait_ts
                timeout_ms = max(timeout_ms, 0)

        # We're going to respond immediately if the timeout is 0 or if this is an
        # initial sync (without a `from_token`) so we can avoid calling
        # `notifier.wait_for_events()`.
        if timeout_ms == 0 or from_token is None:
            now_token = self.event_sources.get_current_token()
            result = await self.current_sync_for_user(
                sync_config,
                from_token=from_token,
                to_token=now_token,
            )
        else:
            # Otherwise, we wait for something to happen and report it to the user.
            async def current_sync_callback(
                before_token: StreamToken, after_token: StreamToken
            ) -> SlidingSyncResult:
                return await self.current_sync_for_user(
                    sync_config,
                    from_token=from_token,
                    to_token=after_token,
                )

            result = await self.notifier.wait_for_events(
                sync_config.user.to_string(),
                timeout_ms,
                current_sync_callback,
                from_token=from_token,
            )

        return result
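The "take the wait off the timeout" step above is a small pure calculation worth isolating: if catching up to the client's token took more than a second, the remaining long-poll budget shrinks by the time already spent, clamped at zero. A sketch (function name is illustrative, not Synapse's):

```python
def remaining_timeout_ms(timeout_ms: int, before_ts: int, after_ts: int) -> int:
    """Shrink the long-poll timeout by time already spent catching up.

    Waits under one second are ignored, matching the `> 1_000` check above.
    """
    waited = after_ts - before_ts
    if waited > 1_000:
        # Never go negative: a zero timeout means "respond immediately".
        timeout_ms = max(timeout_ms - waited, 0)
    return timeout_ms
```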
    async def current_sync_for_user(
        self,
        sync_config: SlidingSyncConfig,
        to_token: StreamToken,
        from_token: Optional[StreamToken] = None,
    ) -> SlidingSyncResult:
        """
        Generates the response body of a Sliding Sync result, represented as a
        `SlidingSyncResult`.
        """
        user_id = sync_config.user.to_string()
        app_service = self.store.get_app_service_by_user_id(user_id)
        if app_service:
            # We no longer support AS users using /sync directly.
            # See https://github.com/matrix-org/matrix-doc/issues/1144
            raise NotImplementedError()

        # Get all of the room IDs that the user should be able to see in the sync
        # response
        room_id_set = await self.get_sync_room_ids_for_user(
            sync_config.user,
            from_token=from_token,
            to_token=to_token,
        )

        # Assemble sliding window lists
        lists: Dict[str, SlidingSyncResult.SlidingWindowList] = {}
        if sync_config.lists:
            for list_key, list_config in sync_config.lists.items():
                # TODO: Apply filters
                #
                # TODO: Exclude partially stated rooms unless the `required_state` has
                # `["m.room.member", "$LAZY"]`
                filtered_room_ids = room_id_set
                # TODO: Apply sorts
                sorted_room_ids = sorted(filtered_room_ids)

                ops: List[SlidingSyncResult.SlidingWindowList.Operation] = []
                if list_config.ranges:
                    for range in list_config.ranges:
                        ops.append(
                            SlidingSyncResult.SlidingWindowList.Operation(
                                op=OperationType.SYNC,
                                range=range,
                                room_ids=sorted_room_ids[range[0] : range[1]],
                            )
                        )

                lists[list_key] = SlidingSyncResult.SlidingWindowList(
                    count=len(sorted_room_ids),
                    ops=ops,
                )

        return SlidingSyncResult(
            next_pos=to_token,
            lists=lists,
            # TODO: Gather room data for rooms in lists and `sync_config.room_subscriptions`
            rooms={},
            extensions={},
        )
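Each requested window in the loop above becomes a `SYNC` operation carrying a contiguous slice of the sorted room list. A stripped-down sketch of just the slicing (note the handler slices with `range[0]:range[1]`, so the upper bound behaves as a Python-style exclusive bound in this version of the code):

```python
from typing import List, Tuple


def window_ops(
    sorted_room_ids: List[str], ranges: List[Tuple[int, int]]
) -> List[List[str]]:
    # One slice of room IDs per requested [start, end) window.
    return [sorted_room_ids[start:end] for start, end in ranges]
```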
    async def get_sync_room_ids_for_user(
        self,
        user: UserID,
        to_token: StreamToken,
        from_token: Optional[StreamToken] = None,
    ) -> AbstractSet[str]:
        """
        Fetch room IDs that should be listed for this user in the sync response (the
        full room list that will be filtered, sorted, and sliced).

        We're looking for rooms where the user has the following state in the token
        range (> `from_token` and <= `to_token`):

        - `invite`, `join`, `knock`, `ban` membership events
        - Kicks (`leave` membership events where `sender` is different from the
          `user_id`/`state_key`)
        - `newly_left` (rooms that were left during the given token range)
        - In order for bans/kicks to not show up in sync, you need to `/forget` those
          rooms. This doesn't modify the event itself though and only adds the
          `forgotten` flag to the `room_memberships` table in Synapse. There isn't a way
          to tell when a room was forgotten at the moment so we can't factor it into the
          from/to range.
        """
        user_id = user.to_string()

        # First grab a current snapshot of rooms for the user
        # (also handles forgotten rooms)
        room_for_user_list = await self.store.get_rooms_for_local_user_where_membership_is(
            user_id=user_id,
            # We want to fetch any kind of membership (joined and left rooms) in order
            # to get the `event_pos` of the latest room membership event for the
            # user.
            #
            # We will filter out the rooms that don't belong below (see
            # `filter_membership_for_sync`)
            membership_list=Membership.LIST,
            excluded_rooms=self.rooms_to_exclude_globally,
        )

        # If the user has never joined any rooms before, we can just return an empty set
        if not room_for_user_list:
            return set()

        # Our working list of rooms that can show up in the sync response
        sync_room_id_set = {
            room_for_user.room_id
            for room_for_user in room_for_user_list
            if filter_membership_for_sync(
                membership=room_for_user.membership,
                user_id=user_id,
                sender=room_for_user.sender,
            )
        }

        # Get the `RoomStreamToken` that represents the spot we queried up to when we got
        # our membership snapshot from `get_rooms_for_local_user_where_membership_is()`.
        #
        # First, we need to get the max stream_ordering of each event persister instance
        # that we queried events from.
        instance_to_max_stream_ordering_map: Dict[str, int] = {}
        for room_for_user in room_for_user_list:
            instance_name = room_for_user.event_pos.instance_name
            stream_ordering = room_for_user.event_pos.stream

            current_instance_max_stream_ordering = (
                instance_to_max_stream_ordering_map.get(instance_name)
            )
            if (
                current_instance_max_stream_ordering is None
                or stream_ordering > current_instance_max_stream_ordering
            ):
                instance_to_max_stream_ordering_map[instance_name] = stream_ordering

        # Then assemble the `RoomStreamToken`
        membership_snapshot_token = RoomStreamToken(
            # Minimum position in the `instance_map`
            stream=min(instance_to_max_stream_ordering_map.values()),
            instance_map=immutabledict(instance_to_max_stream_ordering_map),
        )

        # If our `to_token` is already the same or ahead of the latest room membership
        # for the user, we can just straight-up return the room list (nothing has
        # changed)
        if membership_snapshot_token.is_before_or_eq(to_token.room_key):
            return sync_room_id_set

        # Since we fetched the user's room list at some point in time after the from/to
        # tokens, we need to revert/rewind some membership changes to match the point in
        # time of the `to_token`. In particular, we need to make these fixups:
        #
        # - 1a) Remove rooms that the user joined after the `to_token`
        # - 1b) Add back rooms that the user left after the `to_token`
        # - 2) Add back newly_left rooms (> `from_token` and <= `to_token`)
        #
        # Below, we're doing two separate lookups for membership changes. We could
        # request everything for both fixups in one range, [`from_token.room_key`,
        # `membership_snapshot_token`), but we want to avoid raw `stream_ordering`
        # comparison without `instance_name` (which is flawed). We could refactor
        # `event.internal_metadata` to include `instance_name` but it might turn out a
        # little difficult and a bigger, broader Synapse change than we want to make.

        # 1) -----------------------------------------------------

        # 1) Fetch membership changes that fall in the range from `to_token` up to
        # `membership_snapshot_token`
        membership_change_events_after_to_token = (
            await self.store.get_membership_changes_for_user(
                user_id,
                from_key=to_token.room_key,
                to_key=membership_snapshot_token,
                excluded_rooms=self.rooms_to_exclude_globally,
            )
        )

        # 1) Assemble a list of the last membership events in some given ranges. Someone
        # could have left and joined multiple times during the given range but we only
        # care about the end result so we grab the last one.
        last_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
        # We also need the first membership event after the `to_token` so we can step
        # backward to the previous membership that would apply to the from/to range.
        first_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
        for event in membership_change_events_after_to_token:
            last_membership_change_by_room_id_after_to_token[event.room_id] = event
            # Only set if we haven't already set it
            first_membership_change_by_room_id_after_to_token.setdefault(
                event.room_id, event
            )

        # 1) Fixup
        for (
            last_membership_change_after_to_token
        ) in last_membership_change_by_room_id_after_to_token.values():
            room_id = last_membership_change_after_to_token.room_id

            # We want to find the first membership change after the `to_token` then step
            # backward to know the membership in the from/to range.
            first_membership_change_after_to_token = (
                first_membership_change_by_room_id_after_to_token.get(room_id)
            )
            assert first_membership_change_after_to_token is not None, (
                "If there was a `last_membership_change_after_to_token` that we're iterating over, "
                + "then there should be a corresponding first change. For example, even if there "
                + "is only one event after the `to_token`, the first and last event will be the same event. "
                + "This is probably a mistake in assembling the `last_membership_change_by_room_id_after_to_token`"
                + "/`first_membership_change_by_room_id_after_to_token` dicts above."
            )
            # TODO: Instead of reading from `unsigned`, refactor this to use the
            # `current_state_delta_stream` table in the future. Probably a new
            # `get_membership_changes_for_user()` function that uses
            # `current_state_delta_stream` with a join to `room_memberships`. This would
            # help in state reset scenarios since `prev_content` is looking at the
            # current branch vs the current room state. This is all just data given to
            # the client so no real harm to data integrity, but we'd like to be nice to
            # the client. Since the `current_state_delta_stream` table is new, it
            # doesn't have all events in it. Since this is Sliding Sync, if we ever need
            # to, we can signal the client to throw all of their state away by sending
            # "operation: RESET".
            prev_content = first_membership_change_after_to_token.unsigned.get(
                "prev_content", {}
            )
            prev_membership = prev_content.get("membership", None)
            prev_sender = first_membership_change_after_to_token.unsigned.get(
                "prev_sender", None
            )

            # Check if the previous membership (membership that applies to the from/to
            # range) should be included in our `sync_room_id_set`
            should_prev_membership_be_included = (
                prev_membership is not None
                and prev_sender is not None
                and filter_membership_for_sync(
                    membership=prev_membership,
                    user_id=user_id,
                    sender=prev_sender,
                )
            )

            # Check if the last membership (membership that applies to our snapshot) was
            # already included in our `sync_room_id_set`
            was_last_membership_already_included = filter_membership_for_sync(
                membership=last_membership_change_after_to_token.membership,
                user_id=user_id,
                sender=last_membership_change_after_to_token.sender,
            )

            # 1a) Add back rooms that the user left after the `to_token`
            #
            # For example, if the last membership event after the `to_token` is a leave
            # event, then the room was excluded from `sync_room_id_set` when we first
            # crafted it above. We should add these rooms back as long as the user also
            # was part of the room before the `to_token`.
            if (
                not was_last_membership_already_included
                and should_prev_membership_be_included
            ):
                sync_room_id_set.add(room_id)
            # 1b) Remove rooms that the user joined (hasn't left) after the `to_token`
            #
            # For example, if the last membership event after the `to_token` is a "join"
            # event, then the room was included in `sync_room_id_set` when we first
            # crafted it above. We should remove these rooms as long as the user also
            # wasn't part of the room before the `to_token`.
            elif (
                was_last_membership_already_included
                and not should_prev_membership_be_included
            ):
                sync_room_id_set.discard(room_id)

        # 2) -----------------------------------------------------
        # We fix up newly_left rooms after the first fixup because it may have removed
        # some left rooms that we can figure out were newly_left in the following code

        # 2) Fetch membership changes that fall in the range from `from_token` up to `to_token`
        membership_change_events_in_from_to_range = []
        if from_token:
            membership_change_events_in_from_to_range = (
                await self.store.get_membership_changes_for_user(
                    user_id,
                    from_key=from_token.room_key,
                    to_key=to_token.room_key,
                    excluded_rooms=self.rooms_to_exclude_globally,
                )
            )

        # 2) Assemble a list of the last membership events in some given ranges. Someone
        # could have left and joined multiple times during the given range but we only
        # care about the end result so we grab the last one.
        last_membership_change_by_room_id_in_from_to_range: Dict[str, EventBase] = {}
        for event in membership_change_events_in_from_to_range:
            last_membership_change_by_room_id_in_from_to_range[event.room_id] = event

        # 2) Fixup
        for (
            last_membership_change_in_from_to_range
        ) in last_membership_change_by_room_id_in_from_to_range.values():
            room_id = last_membership_change_in_from_to_range.room_id

            # 2) Add back newly_left rooms (> `from_token` and <= `to_token`). We
            # include newly_left rooms because the last event that the user should see
            # is their own leave event
            if last_membership_change_in_from_to_range.membership == Membership.LEAVE:
                sync_room_id_set.add(room_id)

        return sync_room_id_set
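The snapshot-token assembly above follows a small invariant: collect the maximum `stream_ordering` seen per event-persister instance, then use the minimum of those maxima as the token's base `stream` position, since every instance has definitely been observed at least that far. A standalone sketch (function name is illustrative, not Synapse's):

```python
from typing import Dict, List, Tuple


def snapshot_positions(
    event_positions: List[Tuple[str, int]]
) -> Tuple[int, Dict[str, int]]:
    """Given (instance_name, stream_ordering) pairs, return the token parts.

    Mirrors the loop above: max per instance, then min across instances.
    """
    instance_to_max: Dict[str, int] = {}
    for instance_name, stream_ordering in event_positions:
        current = instance_to_max.get(instance_name)
        if current is None or stream_ordering > current:
            instance_to_max[instance_name] = stream_ordering
    # The base `stream` is the minimum of the per-instance maxima.
    return min(instance_to_max.values()), instance_to_max
```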
@@ -284,6 +284,27 @@ class SyncResult:
             or self.device_lists
         )
 
+    @staticmethod
+    def empty(
+        next_batch: StreamToken,
+        device_one_time_keys_count: JsonMapping,
+        device_unused_fallback_key_types: List[str],
+    ) -> "SyncResult":
+        "Return a new empty result"
+        return SyncResult(
+            next_batch=next_batch,
+            presence=[],
+            account_data=[],
+            joined=[],
+            invited=[],
+            knocked=[],
+            archived=[],
+            to_device=[],
+            device_lists=DeviceListUpdates(),
+            device_one_time_keys_count=device_one_time_keys_count,
+            device_unused_fallback_key_types=device_unused_fallback_key_types,
+        )
+
 
 @attr.s(slots=True, frozen=True, auto_attribs=True)
 class E2eeSyncResult:
@@ -497,6 +518,45 @@ class SyncHandler:
         if context:
             context.tag = sync_label
 
+        if since_token is not None:
+            # We need to make sure this worker has caught up with the token. If
+            # this returns false it means we timed out waiting, and we should
+            # just return an empty response.
+            start = self.clock.time_msec()
+            if not await self.notifier.wait_for_stream_token(since_token):
+                logger.warning(
+                    "Timed out waiting for worker to catch up. Returning empty response"
+                )
+                device_id = sync_config.device_id
+                one_time_keys_count: JsonMapping = {}
+                unused_fallback_key_types: List[str] = []
+                if device_id:
+                    user_id = sync_config.user.to_string()
+                    # TODO: We should have a way to let clients differentiate between the states of:
+                    #   * no change in OTK count since the provided since token
+                    #   * the server has zero OTKs left for this device
+                    #  Spec issue: https://github.com/matrix-org/matrix-doc/issues/3298
+                    one_time_keys_count = await self.store.count_e2e_one_time_keys(
+                        user_id, device_id
+                    )
+                    unused_fallback_key_types = list(
+                        await self.store.get_e2e_unused_fallback_key_types(
+                            user_id, device_id
+                        )
+                    )
+
+                cache_context.should_cache = False  # Don't cache empty responses
+                return SyncResult.empty(
+                    since_token, one_time_keys_count, unused_fallback_key_types
+                )
+
+            # If we've spent significant time waiting to catch up, take it off
+            # the timeout.
+            now = self.clock.time_msec()
+            if now - start > 1_000:
+                timeout -= now - start
+                timeout = max(timeout, 0)
+
         # if we have a since token, delete any to-device messages before that token
         # (since we now know that the device has received them)
         if since_token is not None:
@@ -1942,7 +2002,7 @@ class SyncHandler:
         """
         user_id = sync_config.user.to_string()
 
-        # Note: we get the users room list *before* we get the current token, this
+        # Note: we get the users room list *before* we get the `now_token`, this
         # avoids checking back in history if rooms are joined after the token is fetched.
         token_before_rooms = self.event_sources.get_current_token()
         mutable_joined_room_ids = set(await self.store.get_rooms_for_user(user_id))

@@ -1954,10 +2014,10 @@ class SyncHandler:
         now_token = self.event_sources.get_current_token()
         log_kv({"now_token": now_token})
 
-        # Since we fetched the users room list before the token, there's a small window
-        # during which membership events may have been persisted, so we fetch these now
-        # and modify the joined room list for any changes between the get_rooms_for_user
-        # call and the get_current_token call.
+        # Since we fetched the users room list before calculating the `now_token` (see
+        # above), there's a small window during which membership events may have been
+        # persisted, so we fetch these now and modify the joined room list for any
+        # changes between the get_rooms_for_user call and the get_current_token call.
         membership_change_events = []
         if since_token:
             membership_change_events = await self.store.get_membership_changes_for_user(

@@ -1967,16 +2027,19 @@ class SyncHandler:
             self.rooms_to_exclude_globally,
         )
 
-        mem_last_change_by_room_id: Dict[str, EventBase] = {}
+        last_membership_change_by_room_id: Dict[str, EventBase] = {}
         for event in membership_change_events:
-            mem_last_change_by_room_id[event.room_id] = event
+            last_membership_change_by_room_id[event.room_id] = event
 
         # For the latest membership event in each room found, add/remove the room ID
         # from the joined room list accordingly. In this case we only care if the
         # latest change is JOIN.
 
-        for room_id, event in mem_last_change_by_room_id.items():
+        for room_id, event in last_membership_change_by_room_id.items():
             assert event.internal_metadata.stream_ordering
+            # As a shortcut, skip any events that happened before we got our
+            # `get_rooms_for_user()` snapshot (any changes are already represented
+            # in that list).
             if (
                 event.internal_metadata.stream_ordering
                 < token_before_rooms.room_key.stream
@@ -2770,7 +2833,7 @@ class SyncHandler:
                 continue
 
             leave_token = now_token.copy_and_replace(
-                StreamKeyType.ROOM, RoomStreamToken(stream=event.stream_ordering)
+                StreamKeyType.ROOM, RoomStreamToken(stream=event.event_pos.stream)
             )
             room_entries.append(
                 RoomSyncResultBuilder(
@@ -477,9 +477,9 @@ class TypingWriterHandler(FollowerTypingHandler):
 
         rows = []
         for room_id in changed_rooms:
-            serial = self._room_serials[room_id]
-            if last_id < serial <= current_id:
-                typing = self._room_typing[room_id]
+            serial = self._room_serials.get(room_id)
+            if serial and last_id < serial <= current_id:
+                typing = self._room_typing.get(room_id, set())
                 rows.append((serial, [room_id, list(typing)]))
         rows.sort()
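The typing fix above replaces direct indexing with `.get`, so a room that disappears from the serial map between computing `changed_rooms` and reading its serial no longer raises `KeyError`. A standalone sketch of the same defensive pattern (plain dicts stand in for the handler's state; `sorted()` is used instead of `list()` only to make the output deterministic for testing):

```python
from typing import Dict, List, Set, Tuple


def typing_rows(
    changed_rooms: List[str],
    room_serials: Dict[str, int],
    room_typing: Dict[str, Set[str]],
    last_id: int,
    current_id: int,
) -> List[Tuple[int, list]]:
    rows: List[Tuple[int, list]] = []
    for room_id in changed_rooms:
        serial = room_serials.get(room_id)  # may be None if the room was evicted
        if serial and last_id < serial <= current_id:
            # Missing typing state degrades to "nobody typing" rather than crashing.
            typing = room_typing.get(room_id, set())
            rows.append((serial, [room_id, sorted(typing)]))
    rows.sort()
    return rows
```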
@ -57,7 +57,7 @@ from twisted.internet.interfaces import IReactorTime
|
||||||
from twisted.internet.task import Cooperator
|
from twisted.internet.task import Cooperator
|
||||||
from twisted.web.client import ResponseFailed
|
from twisted.web.client import ResponseFailed
|
||||||
from twisted.web.http_headers import Headers
|
from twisted.web.http_headers import Headers
|
||||||
from twisted.web.iweb import IAgent, IBodyProducer, IResponse
|
from twisted.web.iweb import UNKNOWN_LENGTH, IAgent, IBodyProducer, IResponse
|
||||||
|
|
||||||
import synapse.metrics
|
import synapse.metrics
|
||||||
import synapse.util.retryutils
|
import synapse.util.retryutils
|
||||||
|
@ -68,6 +68,7 @@ from synapse.api.errors import (
|
||||||
RequestSendFailed,
|
RequestSendFailed,
|
||||||
SynapseError,
|
SynapseError,
|
||||||
)
|
)
|
||||||
|
from synapse.api.ratelimiting import Ratelimiter
|
||||||
from synapse.crypto.context_factory import FederationPolicyForHTTPS
|
from synapse.crypto.context_factory import FederationPolicyForHTTPS
|
||||||
from synapse.http import QuieterFileBodyProducer
|
from synapse.http import QuieterFileBodyProducer
|
||||||
from synapse.http.client import (
|
from synapse.http.client import (
|
||||||
|
@@ -1411,9 +1412,11 @@ class MatrixFederationHttpClient:
         destination: str,
         path: str,
         output_stream: BinaryIO,
+        download_ratelimiter: Ratelimiter,
+        ip_address: str,
+        max_size: int,
         args: Optional[QueryParams] = None,
         retry_on_dns_fail: bool = True,
-        max_size: Optional[int] = None,
         ignore_backoff: bool = False,
         follow_redirects: bool = False,
     ) -> Tuple[int, Dict[bytes, List[bytes]]]:
@@ -1422,6 +1425,10 @@
             destination: The remote server to send the HTTP request to.
             path: The HTTP path to GET.
             output_stream: File to write the response body to.
+            download_ratelimiter: a ratelimiter to limit remote media downloads, keyed to
+                requester IP
+            ip_address: IP address of the requester
+            max_size: maximum allowable size in bytes of the file
             args: Optional dictionary used to create the query string.
             ignore_backoff: true to ignore the historical backoff data
                 and try the request anyway.
@@ -1441,11 +1448,27 @@
                 federation whitelist
             RequestSendFailed: If there were problems connecting to the
                 remote, due to e.g. DNS failures, connection timeouts etc.
+            SynapseError: If the requested file exceeds ratelimits
         """
         request = MatrixFederationRequest(
             method="GET", destination=destination, path=path, query=args
         )

+        # check for a minimum balance of 1MiB in ratelimiter before initiating request
+        send_req, _ = await download_ratelimiter.can_do_action(
+            requester=None, key=ip_address, n_actions=1048576, update=False
+        )
+
+        if not send_req:
+            msg = "Requested file size exceeds ratelimits"
+            logger.warning(
+                "{%s} [%s] %s",
+                request.txn_id,
+                request.destination,
+                msg,
+            )
+            raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
+
         response = await self._send_request(
             request,
             retry_on_dns_fail=retry_on_dns_fail,
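The hunk above performs a two-phase ratelimiter check: before sending the request it asks whether roughly 1 MiB of headroom remains, without spending it (`update=False`); the real deduction happens later once the expected body size is known. A minimal sketch of that pattern, using a hypothetical `DownloadRatelimiter` (not Synapse's actual `Ratelimiter` class):

```python
from typing import Dict


class DownloadRatelimiter:
    """Toy per-key byte-budget limiter (illustrative only): each key may
    spend up to `burst_bytes` before being refused."""

    def __init__(self, burst_bytes: int) -> None:
        self.burst_bytes = burst_bytes
        self._spent: Dict[str, int] = {}

    def can_do_action(self, key: str, n_actions: int, update: bool = True) -> bool:
        spent = self._spent.get(key, 0)
        allowed = spent + n_actions <= self.burst_bytes
        if allowed and update:
            # A pre-flight check passes update=False: it only asks
            # "would this fit?" and leaves the balance untouched.
            self._spent[key] = spent + n_actions
        return allowed


limiter = DownloadRatelimiter(burst_bytes=2 * 1048576)
# Pre-flight: require at least 1 MiB of headroom, but don't spend it yet.
assert limiter.can_do_action("10.0.0.1", 1048576, update=False)
assert limiter._spent.get("10.0.0.1", 0) == 0  # nothing deducted
```

The `update=False` pre-check avoids charging a requester for a download that is refused before a single byte is fetched.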
@@ -1455,12 +1478,36 @@

         headers = dict(response.headers.getAllRawHeaders())

+        expected_size = response.length
+        # if we don't get an expected length then use the max length
+        if expected_size == UNKNOWN_LENGTH:
+            expected_size = max_size
+            logger.debug(
+                f"File size unknown, assuming file is max allowable size: {max_size}"
+            )
+
+        read_body, _ = await download_ratelimiter.can_do_action(
+            requester=None,
+            key=ip_address,
+            n_actions=expected_size,
+        )
+        if not read_body:
+            msg = "Requested file size exceeds ratelimits"
+            logger.warning(
+                "{%s} [%s] %s",
+                request.txn_id,
+                request.destination,
+                msg,
+            )
+            raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)
+
         try:
-            d = read_body_with_max_size(response, output_stream, max_size)
+            # add a byte of headroom to max size as function errs at >=
+            d = read_body_with_max_size(response, output_stream, expected_size + 1)
             d.addTimeout(self.default_timeout_seconds, self.reactor)
             length = await make_deferred_yieldable(d)
         except BodyExceededMaxSize:
-            msg = "Requested file is too large > %r bytes" % (max_size,)
+            msg = "Requested file is too large > %r bytes" % (expected_size,)
             logger.warning(
                 "{%s} [%s] %s",
                 request.txn_id,
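Two details of the hunk above are easy to miss: when the remote sends no `Content-Length`, the code charges the ratelimiter for the full `max_size`, and the cap passed to the body reader gets one byte of headroom because the reader errors once the running total reaches the cap. A sketch of both behaviours, using toy stand-ins rather than the Twisted/Synapse implementations:

```python
from typing import Iterable

UNKNOWN_LENGTH = -1  # stand-in sentinel for twisted.web.iweb.UNKNOWN_LENGTH


def effective_expected_size(content_length: int, max_size: int) -> int:
    # No advertised length: assume the worst case so the ratelimiter
    # charges the full allowance up front.
    if content_length == UNKNOWN_LENGTH:
        return max_size
    return content_length


class BodyExceededMaxSize(Exception):
    pass


def read_body_with_max_size(chunks: Iterable[bytes], max_size: int) -> int:
    # Toy model of the streaming reader: it raises once total >= max_size,
    # which is why the caller passes expected_size + 1 to allow a body of
    # exactly expected_size bytes.
    total = 0
    for chunk in chunks:
        total += len(chunk)
        if total >= max_size:
            raise BodyExceededMaxSize()
    return total
```

With `expected_size = 4`, passing `expected_size + 1` lets a 4-byte body through while still rejecting 5 bytes.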
@@ -25,7 +25,16 @@ import os
 import urllib
 from abc import ABC, abstractmethod
 from types import TracebackType
-from typing import Awaitable, Dict, Generator, List, Optional, Tuple, Type
+from typing import (
+    TYPE_CHECKING,
+    Awaitable,
+    Dict,
+    Generator,
+    List,
+    Optional,
+    Tuple,
+    Type,
+)

 import attr

@@ -39,6 +48,11 @@ from synapse.http.site import SynapseRequest
 from synapse.logging.context import make_deferred_yieldable
 from synapse.util.stringutils import is_ascii

+if TYPE_CHECKING:
+    from synapse.media.media_storage import MultipartResponder
+    from synapse.storage.databases.main.media_repository import LocalMedia
+
 logger = logging.getLogger(__name__)

 # list all text content types that will have the charset default to UTF-8 when
@@ -260,6 +274,53 @@ def _can_encode_filename_as_token(x: str) -> bool:
     return True


+async def respond_with_multipart_responder(
+    request: SynapseRequest,
+    responder: "Optional[MultipartResponder]",
+    media_info: "LocalMedia",
+) -> None:
+    """
+    Responds via a Multipart responder for the federation media `/download` requests
+
+    Args:
+        request: the federation request to respond to
+        responder: the Multipart responder which will send the response
+        media_info: metadata about the media item
+    """
+    if not responder:
+        respond_404(request)
+        return
+
+    # If we have a responder we *must* use it as a context manager.
+    with responder:
+        if request._disconnected:
+            logger.warning(
+                "Not sending response to request %s, already disconnected.", request
+            )
+            return
+
+        logger.debug("Responding to media request with responder %s", responder)
+        if media_info.media_length is not None:
+            request.setHeader(b"Content-Length", b"%d" % (media_info.media_length,))
+        request.setHeader(
+            b"Content-Type", b"multipart/mixed; boundary=%s" % responder.boundary
+        )
+
+        try:
+            await responder.write_to_consumer(request)
+        except Exception as e:
+            # The majority of the time this will be due to the client having gone
+            # away. Unfortunately, Twisted simply throws a generic exception at us
+            # in that case.
+            logger.warning("Failed to write to consumer: %s %s", type(e), e)
+
+            # Unregister the producer, if it has one, so Twisted doesn't complain
+            if request.producer:
+                request.unregisterProducer()
+
+    finish_request(request)
+
+
 async def respond_with_responder(
     request: SynapseRequest,
     responder: "Optional[Responder]",

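The handler above advertises `multipart/mixed; boundary=...` and delegates the body to the responder. A rough sketch of the body shape MSC3916 describes — a JSON metadata part followed by the file part — with a field layout that is illustrative, not byte-exact to Synapse's `MultipartResponder`:

```python
import json
from typing import Tuple
from uuid import uuid4

CRLF = b"\r\n"


def build_multipart_body(
    metadata: dict, file_bytes: bytes, file_content_type: bytes
) -> Tuple[bytes, bytes]:
    """Build a two-part multipart/mixed body: JSON metadata, then the file."""
    boundary = uuid4().hex.encode("ascii")
    parts = [
        # Part 1: JSON metadata about the media item.
        b"--" + boundary + CRLF
        + b"Content-Type: application/json" + CRLF + CRLF
        + json.dumps(metadata).encode("utf-8") + CRLF,
        # Part 2: the raw file bytes with their own content type.
        b"--" + boundary + CRLF
        + b"Content-Type: " + file_content_type + CRLF + CRLF
        + file_bytes + CRLF,
        # Closing delimiter.
        b"--" + boundary + b"--" + CRLF,
    ]
    return boundary, b"".join(parts)
```

The boundary returned here is what the handler would place in the `Content-Type: multipart/mixed; boundary=...` header.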
@@ -42,6 +42,7 @@ from synapse.api.errors import (
     SynapseError,
     cs_error,
 )
+from synapse.api.ratelimiting import Ratelimiter
 from synapse.config.repository import ThumbnailRequirement
 from synapse.http.server import respond_with_json
 from synapse.http.site import SynapseRequest
@@ -53,10 +54,11 @@ from synapse.media._base import (
     ThumbnailInfo,
     get_filename_from_headers,
     respond_404,
+    respond_with_multipart_responder,
     respond_with_responder,
 )
 from synapse.media.filepath import MediaFilePaths
-from synapse.media.media_storage import MediaStorage
+from synapse.media.media_storage import MediaStorage, MultipartResponder
 from synapse.media.storage_provider import StorageProviderWrapper
 from synapse.media.thumbnailer import Thumbnailer, ThumbnailError
 from synapse.media.url_previewer import UrlPreviewer
@@ -111,6 +113,12 @@ class MediaRepository:
         )
         self.prevent_media_downloads_from = hs.config.media.prevent_media_downloads_from

+        self.download_ratelimiter = Ratelimiter(
+            store=hs.get_storage_controllers().main,
+            clock=hs.get_clock(),
+            cfg=hs.config.ratelimiting.remote_media_downloads,
+        )
+
         # List of StorageProviders where we should search for media and
         # potentially upload to.
         storage_providers = []
@@ -422,6 +430,7 @@ class MediaRepository:
         media_id: str,
         name: Optional[str],
         max_timeout_ms: int,
+        federation: bool = False,
     ) -> None:
         """Responds to requests for local media, if exists, or returns 404.

@@ -433,6 +442,7 @@ class MediaRepository:
                 the filename in the Content-Disposition header of the response.
             max_timeout_ms: the maximum number of milliseconds to wait for the
                 media to be uploaded.
+            federation: whether the local media being fetched is for a federation request

         Returns:
             Resolves once a response has successfully been written to request
@@ -452,10 +462,17 @@ class MediaRepository:

         file_info = FileInfo(None, media_id, url_cache=bool(url_cache))

-        responder = await self.media_storage.fetch_media(file_info)
-        await respond_with_responder(
-            request, responder, media_type, media_length, upload_name
-        )
+        responder = await self.media_storage.fetch_media(
+            file_info, media_info, federation
+        )
+        if federation:
+            # this really should be a Multipart responder but just in case
+            assert isinstance(responder, MultipartResponder)
+            await respond_with_multipart_responder(request, responder, media_info)
+        else:
+            await respond_with_responder(
+                request, responder, media_type, media_length, upload_name
+            )

     async def get_remote_media(
         self,
@@ -464,6 +481,7 @@ class MediaRepository:
         media_id: str,
         name: Optional[str],
         max_timeout_ms: int,
+        ip_address: str,
     ) -> None:
         """Respond to requests for remote media.

@@ -475,6 +493,7 @@ class MediaRepository:
                 the filename in the Content-Disposition header of the response.
             max_timeout_ms: the maximum number of milliseconds to wait for the
                 media to be uploaded.
+            ip_address: the IP address of the requester

         Returns:
             Resolves once a response has successfully been written to request
@@ -500,7 +519,11 @@ class MediaRepository:
         key = (server_name, media_id)
         async with self.remote_media_linearizer.queue(key):
             responder, media_info = await self._get_remote_media_impl(
-                server_name, media_id, max_timeout_ms
+                server_name,
+                media_id,
+                max_timeout_ms,
+                self.download_ratelimiter,
+                ip_address,
             )

         # We deliberately stream the file outside the lock
@@ -517,7 +540,7 @@ class MediaRepository:
             respond_404(request)

     async def get_remote_media_info(
-        self, server_name: str, media_id: str, max_timeout_ms: int
+        self, server_name: str, media_id: str, max_timeout_ms: int, ip_address: str
     ) -> RemoteMedia:
         """Gets the media info associated with the remote file, downloading
         if necessary.

@@ -527,6 +550,7 @@ class MediaRepository:
             media_id: The media ID of the content (as defined by the remote server).
             max_timeout_ms: the maximum number of milliseconds to wait for the
                 media to be uploaded.
+            ip_address: IP address of the requester

         Returns:
             The media info of the file

@@ -542,7 +566,11 @@ class MediaRepository:
         key = (server_name, media_id)
         async with self.remote_media_linearizer.queue(key):
             responder, media_info = await self._get_remote_media_impl(
-                server_name, media_id, max_timeout_ms
+                server_name,
+                media_id,
+                max_timeout_ms,
+                self.download_ratelimiter,
+                ip_address,
             )

         # Ensure we actually use the responder so that it releases resources
@@ -553,7 +581,12 @@ class MediaRepository:
         return media_info

     async def _get_remote_media_impl(
-        self, server_name: str, media_id: str, max_timeout_ms: int
+        self,
+        server_name: str,
+        media_id: str,
+        max_timeout_ms: int,
+        download_ratelimiter: Ratelimiter,
+        ip_address: str,
     ) -> Tuple[Optional[Responder], RemoteMedia]:
         """Looks for media in local cache, if not there then attempt to
         download from remote server.

@@ -564,6 +597,9 @@ class MediaRepository:
                 remote server).
             max_timeout_ms: the maximum number of milliseconds to wait for the
                 media to be uploaded.
+            download_ratelimiter: a ratelimiter limiting remote media downloads, keyed to
+                requester IP.
+            ip_address: the IP address of the requester

         Returns:
             A tuple of responder and the media info of the file.

@@ -596,7 +632,7 @@ class MediaRepository:

         try:
             media_info = await self._download_remote_file(
-                server_name, media_id, max_timeout_ms
+                server_name, media_id, max_timeout_ms, download_ratelimiter, ip_address
             )
         except SynapseError:
             raise
@@ -630,6 +666,8 @@ class MediaRepository:
         server_name: str,
         media_id: str,
         max_timeout_ms: int,
+        download_ratelimiter: Ratelimiter,
+        ip_address: str,
     ) -> RemoteMedia:
         """Attempt to download the remote file from the given server name,
         using the given file_id as the local id.

@@ -641,6 +679,9 @@ class MediaRepository:
                 locally generated.
             max_timeout_ms: the maximum number of milliseconds to wait for the
                 media to be uploaded.
+            download_ratelimiter: a ratelimiter limiting remote media downloads, keyed to
+                requester IP
+            ip_address: the IP address of the requester

         Returns:
             The media info of the file.

@@ -650,7 +691,7 @@ class MediaRepository:

         file_info = FileInfo(server_name=server_name, file_id=file_id)

-        with self.media_storage.store_into_file(file_info) as (f, fname, finish):
+        async with self.media_storage.store_into_file(file_info) as (f, fname):
             try:
                 length, headers = await self.client.download_media(
                     server_name,
@@ -658,6 +699,8 @@ class MediaRepository:
                     output_stream=f,
                     max_size=self.max_upload_size,
                     max_timeout_ms=max_timeout_ms,
+                    download_ratelimiter=download_ratelimiter,
+                    ip_address=ip_address,
                 )
             except RequestSendFailed as e:
                 logger.warning(
@@ -693,8 +736,6 @@ class MediaRepository:
                 )
                 raise SynapseError(502, "Failed to fetch remote media")

-            await finish()
-
             if b"Content-Type" in headers:
                 media_type = headers[b"Content-Type"][0].decode("ascii")
             else:
@@ -1045,17 +1086,17 @@ class MediaRepository:
             ),
         )

-        with self.media_storage.store_into_file(file_info) as (
-            f,
-            fname,
-            finish,
-        ):
+        async with self.media_storage.store_into_file(file_info) as (f, fname):
             try:
                 await self.media_storage.write_to_file(t_byte_source, f)
-                await finish()
             finally:
                 t_byte_source.close()

+            # We flush and close the file to ensure that the bytes have
+            # been written before getting the size.
+            f.flush()
+            f.close()
+
             t_len = os.path.getsize(fname)

         # Write to database

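Both call sites above drop the explicit `finish()` callback because `store_into_file` becomes an async context manager: the post-write steps run automatically when the `async with` body exits cleanly, and the partial file is removed on error. A minimal sketch of that shape, with hypothetical names and the spam check / storage providers reduced to a log list:

```python
import asyncio
import contextlib
import os
import tempfile
from typing import AsyncIterator, BinaryIO, List, Tuple

post_write_log: List[str] = []  # stands in for spam check / storage providers


@contextlib.asynccontextmanager
async def store_into_file(
    directory: str, media_id: str
) -> AsyncIterator[Tuple[BinaryIO, str]]:
    fname = os.path.join(directory, media_id)
    try:
        with open(fname, "wb") as f:
            yield f, fname
        # Runs only if the caller's writes completed without raising,
        # replacing the old explicit `await finish()`.
        post_write_log.append("spam_check")
        post_write_log.append("store_to_providers")
    except Exception:
        os.remove(fname)  # clean up the partial file, as the real code does
        raise


async def demo() -> bytes:
    with tempfile.TemporaryDirectory() as d:
        async with store_into_file(d, "abc123") as (f, fname):
            f.write(b"media bytes")
        with open(fname, "rb") as f2:
            return f2.read()
```

The advantage of this shape is that a caller can no longer forget to invoke the finish step, which the old code had to guard against with a `finished_called` flag.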
@@ -19,26 +19,33 @@
 #
 #
 import contextlib
+import json
 import logging
 import os
 import shutil
+from contextlib import closing
+from io import BytesIO
 from types import TracebackType
 from typing import (
     IO,
     TYPE_CHECKING,
     Any,
-    Awaitable,
+    AsyncIterator,
     BinaryIO,
     Callable,
-    Generator,
+    List,
     Optional,
     Sequence,
     Tuple,
     Type,
+    Union,
 )
+from uuid import uuid4

 import attr
+from zope.interface import implementer

+from twisted.internet import defer, interfaces
 from twisted.internet.defer import Deferred
 from twisted.internet.interfaces import IConsumer
 from twisted.protocols.basic import FileSender
@@ -49,15 +56,19 @@ from synapse.logging.opentracing import start_active_span, trace, trace_with_opn
 from synapse.util import Clock
 from synapse.util.file_consumer import BackgroundFileConsumer

+from ..storage.databases.main.media_repository import LocalMedia
+from ..types import JsonDict
 from ._base import FileInfo, Responder
 from .filepath import MediaFilePaths

 if TYPE_CHECKING:
-    from synapse.media.storage_provider import StorageProvider
+    from synapse.media.storage_provider import StorageProviderWrapper
     from synapse.server import HomeServer

 logger = logging.getLogger(__name__)

+CRLF = b"\r\n"
+

 class MediaStorage:
     """Responsible for storing/fetching files from local sources.

@@ -74,7 +85,7 @@ class MediaStorage:
         hs: "HomeServer",
         local_media_directory: str,
         filepaths: MediaFilePaths,
-        storage_providers: Sequence["StorageProvider"],
+        storage_providers: Sequence["StorageProviderWrapper"],
     ):
         self.hs = hs
         self.reactor = hs.get_reactor()
@@ -97,11 +108,9 @@ class MediaStorage:
             the file path written to in the primary media store
         """

-        with self.store_into_file(file_info) as (f, fname, finish_cb):
+        async with self.store_into_file(file_info) as (f, fname):
             # Write to the main media repository
             await self.write_to_file(source, f)
-            # Write to the other storage providers
-            await finish_cb()

         return fname
@@ -111,32 +120,27 @@ class MediaStorage:
         await defer_to_thread(self.reactor, _write_file_synchronously, source, output)

     @trace_with_opname("MediaStorage.store_into_file")
-    @contextlib.contextmanager
-    def store_into_file(
+    @contextlib.asynccontextmanager
+    async def store_into_file(
         self, file_info: FileInfo
-    ) -> Generator[Tuple[BinaryIO, str, Callable[[], Awaitable[None]]], None, None]:
-        """Context manager used to get a file like object to write into, as
+    ) -> AsyncIterator[Tuple[BinaryIO, str]]:
+        """Async Context manager used to get a file like object to write into, as
         described by file_info.

-        Actually yields a 3-tuple (file, fname, finish_cb), where file is a file
-        like object that can be written to, fname is the absolute path of file
-        on disk, and finish_cb is a function that returns an awaitable.
+        Actually yields a 2-tuple (file, fname,), where file is a file
+        like object that can be written to and fname is the absolute path of file
+        on disk.

         fname can be used to read the contents from after upload, e.g. to
         generate thumbnails.

-        finish_cb must be called and waited on after the file has been successfully been
-        written to. Should not be called if there was an error. Checks for spam and
-        stores the file into the configured storage providers.
-
         Args:
             file_info: Info about the file to store

         Example:

-            with media_storage.store_into_file(info) as (f, fname, finish_cb):
+            async with media_storage.store_into_file(info) as (f, fname,):
                 # .. write into f ...
-                await finish_cb()
         """

         path = self._file_info_to_path(file_info)
@@ -145,72 +149,55 @@
         dirname = os.path.dirname(fname)
         os.makedirs(dirname, exist_ok=True)

-        finished_called = [False]
-
-        main_media_repo_write_trace_scope = start_active_span(
-            "writing to main media repo"
-        )
-        main_media_repo_write_trace_scope.__enter__()
-
         try:
-            with open(fname, "wb") as f:
-
-                async def finish() -> None:
-                    # When someone calls finish, we assume they are done writing to the main media repo
-                    main_media_repo_write_trace_scope.__exit__(None, None, None)
-
-                    with start_active_span("writing to other storage providers"):
-                        # Ensure that all writes have been flushed and close the
-                        # file.
-                        f.flush()
-                        f.close()
-
-                        spam_check = await self._spam_checker_module_callbacks.check_media_file_for_spam(
-                            ReadableFileWrapper(self.clock, fname), file_info
-                        )
-                        if spam_check != self._spam_checker_module_callbacks.NOT_SPAM:
-                            logger.info("Blocking media due to spam checker")
-                            # Note that we'll delete the stored media, due to the
-                            # try/except below. The media also won't be stored in
-                            # the DB.
-                            # We currently ignore any additional field returned by
-                            # the spam-check API.
-                            raise SpamMediaException(errcode=spam_check[0])
-
-                        for provider in self.storage_providers:
-                            with start_active_span(str(provider)):
-                                await provider.store_file(path, file_info)
-
-                        finished_called[0] = True
-
-                yield f, fname, finish
+            with start_active_span("writing to main media repo"):
+                with open(fname, "wb") as f:
+                    yield f, fname
+
+            with start_active_span("writing to other storage providers"):
+                spam_check = (
+                    await self._spam_checker_module_callbacks.check_media_file_for_spam(
+                        ReadableFileWrapper(self.clock, fname), file_info
+                    )
+                )
+                if spam_check != self._spam_checker_module_callbacks.NOT_SPAM:
+                    logger.info("Blocking media due to spam checker")
+                    # Note that we'll delete the stored media, due to the
+                    # try/except below. The media also won't be stored in
+                    # the DB.
+                    # We currently ignore any additional field returned by
+                    # the spam-check API.
+                    raise SpamMediaException(errcode=spam_check[0])
+
+                for provider in self.storage_providers:
+                    with start_active_span(str(provider)):
+                        await provider.store_file(path, file_info)
+
         except Exception as e:
             try:
-                main_media_repo_write_trace_scope.__exit__(
-                    type(e), None, e.__traceback__
-                )
                 os.remove(fname)
             except Exception:
                 pass

             raise e from None

-        if not finished_called:
-            exc = Exception("Finished callback not called")
-            main_media_repo_write_trace_scope.__exit__(
-                type(exc), None, exc.__traceback__
-            )
-            raise exc
-
-    async def fetch_media(self, file_info: FileInfo) -> Optional[Responder]:
+    async def fetch_media(
+        self,
+        file_info: FileInfo,
+        media_info: Optional[LocalMedia] = None,
+        federation: bool = False,
+    ) -> Optional[Responder]:
         """Attempts to fetch media described by file_info from the local cache
         and configured storage providers.

         Args:
-            file_info
+            file_info: Metadata about the media file
+            media_info: Metadata about the media item
+            federation: Whether this file is being fetched for a federation request

         Returns:
-            Returns a Responder if the file was found, otherwise None.
+            If the file was found returns a Responder (a Multipart Responder if the requested
+            file is for the federation /download endpoint), otherwise None.
         """
         paths = [self._file_info_to_path(file_info)]

@@ -230,12 +217,19 @@
             local_path = os.path.join(self.local_media_directory, path)
             if os.path.exists(local_path):
                 logger.debug("responding with local file %s", local_path)
-                return FileResponder(open(local_path, "rb"))
+                if federation:
+                    assert media_info is not None
+                    boundary = uuid4().hex.encode("ascii")
+                    return MultipartResponder(
+                        open(local_path, "rb"), media_info, boundary
+                    )
+                else:
+                    return FileResponder(open(local_path, "rb"))
             logger.debug("local file %s did not exist", local_path)

         for provider in self.storage_providers:
             for path in paths:
-                res: Any = await provider.fetch(path, file_info)
+                res: Any = await provider.fetch(path, file_info, media_info, federation)
                 if res:
                     logger.debug("Streaming %s from %s", path, provider)
                     return res
@ -349,7 +343,7 @@ class FileResponder(Responder):
|
||||||
"""Wraps an open file that can be sent to a request.
|
"""Wraps an open file that can be sent to a request.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
open_file: A file like object to be streamed ot the client,
|
open_file: A file like object to be streamed to the client,
|
||||||
is closed when finished streaming.
|
is closed when finished streaming.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -370,6 +364,38 @@ class FileResponder(Responder):
|
||||||
self.open_file.close()
|
self.open_file.close()
|
||||||
|
|
||||||
|
|
||||||
|
class MultipartResponder(Responder):
|
||||||
|
"""Wraps an open file, formats the response according to MSC3916 and sends it to a
|
||||||
|
federation request.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
open_file: A file like object to be streamed to the client,
|
||||||
|
is closed when finished streaming.
|
||||||
|
media_info: metadata about the media item
|
||||||
|
boundary: bytes to use for the multipart response boundary
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, open_file: IO, media_info: LocalMedia, boundary: bytes) -> None:
|
||||||
|
self.open_file = open_file
|
||||||
|
self.media_info = media_info
|
||||||
|
self.boundary = boundary
|
||||||
|
|
||||||
|
def write_to_consumer(self, consumer: IConsumer) -> Deferred:
|
||||||
|
return make_deferred_yieldable(
|
||||||
|
MultipartFileSender().beginFileTransfer(
|
||||||
|
self.open_file, consumer, self.media_info.media_type, {}, self.boundary
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
def __exit__(
|
||||||
|
self,
|
||||||
|
exc_type: Optional[Type[BaseException]],
|
||||||
|
exc_val: Optional[BaseException],
|
||||||
|
exc_tb: Optional[TracebackType],
|
||||||
|
) -> None:
|
||||||
|
self.open_file.close()
|
||||||
|
|
||||||
|
|
 class SpamMediaException(NotFoundError):
     """The media was blocked by a spam checker, so we simply 404 the request (in
     the same way as if it was quarantined).
 
@@ -403,3 +429,151 @@ class ReadableFileWrapper:
 
             # We yield to the reactor by sleeping for 0 seconds.
             await self.clock.sleep(0)
+
+
+@implementer(interfaces.IProducer)
+class MultipartFileSender:
+    """
+    A producer that sends the contents of a file to a federation request in the format
+    outlined in MSC3916 - a multipart/format-data response where the first field is a
+    JSON object and the second is the requested file.
+
+    This is a slight re-writing of twisted.protocols.basic.FileSender to achieve the format
+    outlined above.
+    """
+
+    CHUNK_SIZE = 2**14
+
+    lastSent = ""
+    deferred: Optional[defer.Deferred] = None
+
+    def beginFileTransfer(
+        self,
+        file: IO,
+        consumer: IConsumer,
+        file_content_type: str,
+        json_object: JsonDict,
+        boundary: bytes,
+    ) -> Deferred:
+        """
+        Begin transferring a file
+
+        Args:
+            file: The file object to read data from
+            consumer: The synapse request to write the data to
+            file_content_type: The content-type of the file
+            json_object: The JSON object to write to the first field of the response
+            boundary: bytes to be used as the multipart/form-data boundary
+
+        Returns: A deferred whose callback will be invoked when the file has
+        been completely written to the consumer. The last byte written to the
+        consumer is passed to the callback.
+        """
+        self.file: Optional[IO] = file
+        self.consumer = consumer
+        self.json_field = json_object
+        self.json_field_written = False
+        self.content_type_written = False
+        self.file_content_type = file_content_type
+        self.boundary = boundary
+        self.deferred: Deferred = defer.Deferred()
+        self.consumer.registerProducer(self, False)
+        # while it's not entirely clear why this assignment is necessary, it mirrors
+        # the behavior in FileSender.beginFileTransfer and thus is preserved here
+        deferred = self.deferred
+        return deferred
+
+    def resumeProducing(self) -> None:
+        # write the first field, which will always be a json field
+        if not self.json_field_written:
+            self.consumer.write(CRLF + b"--" + self.boundary + CRLF)
+
+            content_type = Header(b"Content-Type", b"application/json")
+            self.consumer.write(bytes(content_type) + CRLF)
+
+            json_field = json.dumps(self.json_field)
+            json_bytes = json_field.encode("utf-8")
+            self.consumer.write(json_bytes)
+            self.consumer.write(CRLF + b"--" + self.boundary + CRLF)
+
+            self.json_field_written = True
+
+        chunk: Any = ""
+        if self.file:
+            # if we haven't written the content type yet, do so
+            if not self.content_type_written:
+                type = self.file_content_type.encode("utf-8")
+                content_type = Header(b"Content-Type", type)
+                self.consumer.write(bytes(content_type) + CRLF)
+                self.content_type_written = True
+
+            chunk = self.file.read(self.CHUNK_SIZE)
+
+        if not chunk:
+            # we've reached the end of the file
+            self.consumer.write(CRLF + b"--" + self.boundary + b"--" + CRLF)
+            self.file = None
+            self.consumer.unregisterProducer()
+
+            if self.deferred:
+                self.deferred.callback(self.lastSent)
+                self.deferred = None
+            return
+
+        self.consumer.write(chunk)
+        self.lastSent = chunk[-1:]
+
+    def pauseProducing(self) -> None:
+        pass
+
+    def stopProducing(self) -> None:
+        if self.deferred:
+            self.deferred.errback(Exception("Consumer asked us to stop producing"))
+            self.deferred = None
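Concretely, the producer added above frames the response as exactly two parts: a JSON field, then the file, then the closing boundary, with no blank line between a part's `Content-Type` header and its body. A standalone sketch of the resulting byte stream (the helper name and sample values here are illustrative, not part of the patch):

```python
import json

CRLF = b"\r\n"

def build_multipart_body(
    json_object: dict, file_bytes: bytes, file_content_type: str, boundary: bytes
) -> bytes:
    # Part 1: the JSON field; part 2: the file; then the closing boundary,
    # in the same order MultipartFileSender streams them.
    out = CRLF + b"--" + boundary + CRLF
    out += b"Content-Type: application/json" + CRLF
    out += json.dumps(json_object).encode("utf-8")
    out += CRLF + b"--" + boundary + CRLF
    out += b"Content-Type: " + file_content_type.encode("utf-8") + CRLF
    out += file_bytes
    out += CRLF + b"--" + boundary + b"--" + CRLF
    return out

body = build_multipart_body({}, b"<png bytes>", "image/png", b"abc123")
```

A federation client parsing the `/download` response would split on `--<boundary>`, decode the first part as JSON, and treat the second part as the raw media bytes.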
+
+
+class Header:
+    """
+    `Header` This class is a tiny wrapper that produces
+    request headers. We can't use standard python header
+    class because it encodes unicode fields using =? bla bla ?=
+    encoding, which is correct, but no one in HTTP world expects
+    that, everyone wants utf-8 raw bytes. (stolen from treq.multipart)
+    """
+
+    def __init__(
+        self,
+        name: bytes,
+        value: Any,
+        params: Optional[List[Tuple[Any, Any]]] = None,
+    ):
+        self.name = name
+        self.value = value
+        self.params = params or []
+
+    def add_param(self, name: Any, value: Any) -> None:
+        self.params.append((name, value))
+
+    def __bytes__(self) -> bytes:
+        with closing(BytesIO()) as h:
+            h.write(self.name + b": " + escape(self.value).encode("us-ascii"))
+            if self.params:
+                for name, val in self.params:
+                    h.write(b"; ")
+                    h.write(escape(name).encode("us-ascii"))
+                    h.write(b"=")
+                    h.write(b'"' + escape(val).encode("utf-8") + b'"')
+            h.seek(0)
+            return h.read()
+
+
+def escape(value: Union[str, bytes]) -> str:
+    """
+    This function prevents header values from corrupting the request,
+    a newline in the file name parameter makes form-data request unreadable
+    for a majority of parsers. (stolen from treq.multipart)
+    """
+    if isinstance(value, bytes):
+        value = value.decode("utf-8")
+    return value.replace("\r", "").replace("\n", "").replace('"', '\\"')
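The `escape` helper above is self-contained and easy to exercise on its own; without it, a CRLF injected into a header parameter would terminate the header early and break the multipart framing:

```python
def escape(value) -> str:
    # Mirrors the helper in the diff above: strip CR/LF and escape double
    # quotes so a value cannot break out of its header.
    if isinstance(value, bytes):
        value = value.decode("utf-8")
    return value.replace("\r", "").replace("\n", "").replace('"', '\\"')

# A filename with an injected header line is flattened into a harmless value.
cleaned = escape('evil.png"\r\nX-Injected: 1')
```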
@@ -24,14 +24,16 @@ import logging
 import os
 import shutil
 from typing import TYPE_CHECKING, Callable, Optional
+from uuid import uuid4
 
 from synapse.config._base import Config
 from synapse.logging.context import defer_to_thread, run_in_background
 from synapse.logging.opentracing import start_active_span, trace_with_opname
 from synapse.util.async_helpers import maybe_awaitable
 
+from ..storage.databases.main.media_repository import LocalMedia
 from ._base import FileInfo, Responder
-from .media_storage import FileResponder
+from .media_storage import FileResponder, MultipartResponder
 
 logger = logging.getLogger(__name__)
 
@@ -55,13 +57,21 @@ class StorageProvider(metaclass=abc.ABCMeta):
     """
 
     @abc.abstractmethod
-    async def fetch(self, path: str, file_info: FileInfo) -> Optional[Responder]:
+    async def fetch(
+        self,
+        path: str,
+        file_info: FileInfo,
+        media_info: Optional[LocalMedia] = None,
+        federation: bool = False,
+    ) -> Optional[Responder]:
         """Attempt to fetch the file described by file_info and stream it
         into writer.
 
         Args:
             path: Relative path of file in local cache
             file_info: The metadata of the file.
+            media_info: metadata of the media item
+            federation: Whether the requested media is for a federation request
 
         Returns:
             Returns a Responder if the provider has the file, otherwise returns None.
 
@@ -124,7 +134,13 @@ class StorageProviderWrapper(StorageProvider):
             run_in_background(store)
 
     @trace_with_opname("StorageProviderWrapper.fetch")
-    async def fetch(self, path: str, file_info: FileInfo) -> Optional[Responder]:
+    async def fetch(
+        self,
+        path: str,
+        file_info: FileInfo,
+        media_info: Optional[LocalMedia] = None,
+        federation: bool = False,
+    ) -> Optional[Responder]:
         if file_info.url_cache:
             # Files in the URL preview cache definitely aren't stored here,
             # so avoid any potentially slow I/O or network access.
@@ -132,7 +148,9 @@ class StorageProviderWrapper(StorageProvider):
 
         # store_file is supposed to return an Awaitable, but guard
         # against improper implementations.
-        return await maybe_awaitable(self.backend.fetch(path, file_info))
+        return await maybe_awaitable(
+            self.backend.fetch(path, file_info, media_info, federation)
+        )
 
 
 class FileStorageProviderBackend(StorageProvider):
 
@@ -172,11 +190,23 @@ class FileStorageProviderBackend(StorageProvider):
         )
 
     @trace_with_opname("FileStorageProviderBackend.fetch")
-    async def fetch(self, path: str, file_info: FileInfo) -> Optional[Responder]:
+    async def fetch(
+        self,
+        path: str,
+        file_info: FileInfo,
+        media_info: Optional[LocalMedia] = None,
+        federation: bool = False,
+    ) -> Optional[Responder]:
         """See StorageProvider.fetch"""
 
         backup_fname = os.path.join(self.base_directory, path)
         if os.path.isfile(backup_fname):
+            if federation:
+                assert media_info is not None
+                boundary = uuid4().hex.encode("ascii")
+                return MultipartResponder(
+                    open(backup_fname, "rb"), media_info, boundary
+                )
             return FileResponder(open(backup_fname, "rb"))
 
         return None
@@ -359,9 +359,10 @@ class ThumbnailProvider:
         desired_method: str,
         desired_type: str,
         max_timeout_ms: int,
+        ip_address: str,
     ) -> None:
         media_info = await self.media_repo.get_remote_media_info(
-            server_name, media_id, max_timeout_ms
+            server_name, media_id, max_timeout_ms, ip_address
         )
         if not media_info:
             respond_404(request)
 
@@ -422,12 +423,13 @@ class ThumbnailProvider:
         method: str,
         m_type: str,
         max_timeout_ms: int,
+        ip_address: str,
     ) -> None:
         # TODO: Don't download the whole remote file
         # We should proxy the thumbnail from the remote server instead of
         # downloading the remote file and generating our own thumbnails.
         media_info = await self.media_repo.get_remote_media_info(
-            server_name, media_id, max_timeout_ms
+            server_name, media_id, max_timeout_ms, ip_address
        )
         if not media_info:
             return
@@ -592,7 +592,7 @@ class UrlPreviewer:
 
         file_info = FileInfo(server_name=None, file_id=file_id, url_cache=True)
 
-        with self.media_storage.store_into_file(file_info) as (f, fname, finish):
+        async with self.media_storage.store_into_file(file_info) as (f, fname):
             if url.startswith("data:"):
                 if not allow_data_urls:
                     raise SynapseError(
@@ -603,8 +603,6 @@ class UrlPreviewer:
             else:
                 download_result = await self._download_url(url, f)
 
-            await finish()
-
         try:
             time_now_ms = self.clock.time_msec()
@@ -763,6 +763,29 @@ class Notifier:
 
         return result
 
+    async def wait_for_stream_token(self, stream_token: StreamToken) -> bool:
+        """Wait for this worker to catch up with the given stream token."""
+
+        start = self.clock.time_msec()
+        while True:
+            current_token = self.event_sources.get_current_token()
+            if stream_token.is_before_or_eq(current_token):
+                return True
+
+            now = self.clock.time_msec()
+
+            if now - start > 10_000:
+                return False
+
+            logger.info(
+                "Waiting for current token to reach %s; currently at %s",
+                stream_token,
+                current_token,
+            )
+
+            # TODO: be better
+            await self.clock.sleep(0.5)
+
     async def _get_room_ids(
         self, user: UserID, explicit_room_id: Optional[str]
     ) -> Tuple[StrCollection, bool]:
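The bounded wait added to `Notifier` is a plain poll-with-deadline loop. A synchronous sketch of the same shape (`wait_until` and its arguments are illustrative, not part of the patch; the real method is async and reads a stream token rather than a predicate):

```python
import time

def wait_until(condition, timeout_ms: int = 10_000, interval_s: float = 0.5) -> bool:
    # Poll until the condition holds or the deadline passes, mirroring
    # wait_for_stream_token's 10-second cap and 0.5-second sleep.
    start = time.monotonic()
    while True:
        if condition():
            return True
        if (time.monotonic() - start) * 1000 > timeout_ms:
            return False
        time.sleep(interval_s)
```

Returning `False` on timeout (instead of raising) lets the caller fall back gracefully when a worker never catches up, which is the behaviour the handler relies on.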
@@ -53,7 +53,7 @@ from synapse.rest.client import (
     register,
     relations,
     rendezvous,
-    report_event,
+    reporting,
     room,
     room_keys,
     room_upgrade_rest_servlet,
 
@@ -128,7 +128,7 @@ class ClientRestResource(JsonResource):
         tags.register_servlets(hs, client_resource)
         account_data.register_servlets(hs, client_resource)
         if is_main_process:
-            report_event.register_servlets(hs, client_resource)
+            reporting.register_servlets(hs, client_resource)
             openid.register_servlets(hs, client_resource)
             notifications.register_servlets(hs, client_resource)
             devices.register_servlets(hs, client_resource)
@@ -56,14 +56,14 @@ from synapse.http.servlet import (
 from synapse.http.site import SynapseRequest
 from synapse.metrics import threepid_send_requests
 from synapse.push.mailer import Mailer
-from synapse.rest.client.models import (
+from synapse.types import JsonDict
+from synapse.types.rest import RequestBodyModel
+from synapse.types.rest.client import (
     AuthenticationData,
     ClientSecretStr,
     EmailRequestTokenBody,
     MsisdnRequestTokenBody,
 )
-from synapse.rest.models import RequestBodyModel
-from synapse.types import JsonDict
 from synapse.util.msisdn import phone_number_to_msisdn
 from synapse.util.stringutils import assert_valid_client_secret, random_string
 from synapse.util.threepids import check_3pid_allowed, validate_email
@@ -42,9 +42,9 @@ from synapse.http.servlet import (
 )
 from synapse.http.site import SynapseRequest
 from synapse.rest.client._base import client_patterns, interactive_auth_handler
-from synapse.rest.client.models import AuthenticationData
-from synapse.rest.models import RequestBodyModel
 from synapse.types import JsonDict
+from synapse.types.rest import RequestBodyModel
+from synapse.types.rest.client import AuthenticationData
 
 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -41,8 +41,8 @@ from synapse.http.servlet import (
 )
 from synapse.http.site import SynapseRequest
 from synapse.rest.client._base import client_patterns
-from synapse.rest.models import RequestBodyModel
 from synapse.types import JsonDict, RoomAlias
+from synapse.types.rest import RequestBodyModel
 
 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -36,7 +36,6 @@ from synapse.http.servlet import (
 )
 from synapse.http.site import SynapseRequest
 from synapse.logging.opentracing import log_kv, set_tag
-from synapse.replication.http.devices import ReplicationUploadKeysForUserRestServlet
 from synapse.rest.client._base import client_patterns, interactive_auth_handler
 from synapse.types import JsonDict, StreamToken
 from synapse.util.cancellation import cancellable
 
@@ -105,13 +104,8 @@ class KeyUploadServlet(RestServlet):
         self.auth = hs.get_auth()
         self.e2e_keys_handler = hs.get_e2e_keys_handler()
         self.device_handler = hs.get_device_handler()
-
-        if hs.config.worker.worker_app is None:
-            # if main process
-            self.key_uploader = self.e2e_keys_handler.upload_keys_for_user
-        else:
-            # then a worker
-            self.key_uploader = ReplicationUploadKeysForUserRestServlet.make_client(hs)
+        self._clock = hs.get_clock()
+        self._store = hs.get_datastores().main
 
     async def on_POST(
         self, request: SynapseRequest, device_id: Optional[str]
 
@@ -151,9 +145,10 @@ class KeyUploadServlet(RestServlet):
                 400, "To upload keys, you must pass device_id when authenticating"
             )
 
-        result = await self.key_uploader(
+        result = await self.e2e_keys_handler.upload_keys_for_user(
             user_id=user_id, device_id=device_id, keys=body
         )
 
         return 200, result
@@ -174,6 +174,7 @@ class UnstableThumbnailResource(RestServlet):
             respond_404(request)
             return
 
+        ip_address = request.getClientAddress().host
         remote_resp_function = (
             self.thumbnailer.select_or_generate_remote_thumbnail
             if self.dynamic_thumbnails
 
@@ -188,6 +189,7 @@ class UnstableThumbnailResource(RestServlet):
                 method,
                 m_type,
                 max_timeout_ms,
+                ip_address,
             )
             self.media_repo.mark_recently_accessed(server_name, media_id)
@@ -1,99 +0,0 @@
-#
-# This file is licensed under the Affero General Public License (AGPL) version 3.
-#
-# Copyright 2022 The Matrix.org Foundation C.I.C.
-# Copyright (C) 2023 New Vector, Ltd
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU Affero General Public License as
-# published by the Free Software Foundation, either version 3 of the
-# License, or (at your option) any later version.
-#
-# See the GNU Affero General Public License for more details:
-# <https://www.gnu.org/licenses/agpl-3.0.html>.
-#
-# Originally licensed under the Apache License, Version 2.0:
-# <http://www.apache.org/licenses/LICENSE-2.0>.
-#
-# [This file includes modifications made by New Vector Limited]
-#
-#
-from typing import TYPE_CHECKING, Dict, Optional
-
-from synapse._pydantic_compat import HAS_PYDANTIC_V2
-
-if TYPE_CHECKING or HAS_PYDANTIC_V2:
-    from pydantic.v1 import Extra, StrictInt, StrictStr, constr, validator
-else:
-    from pydantic import Extra, StrictInt, StrictStr, constr, validator
-
-from synapse.rest.models import RequestBodyModel
-from synapse.util.threepids import validate_email
-
-
-class AuthenticationData(RequestBodyModel):
-    """
-    Data used during user-interactive authentication.
-
-    (The name "Authentication Data" is taken directly from the spec.)
-
-    Additional keys will be present, depending on the `type` field. Use
-    `.dict(exclude_unset=True)` to access them.
-    """
-
-    class Config:
-        extra = Extra.allow
-
-    session: Optional[StrictStr] = None
-    type: Optional[StrictStr] = None
-
-
-if TYPE_CHECKING:
-    ClientSecretStr = StrictStr
-else:
-    # See also assert_valid_client_secret()
-    ClientSecretStr = constr(
-        regex="[0-9a-zA-Z.=_-]",  # noqa: F722
-        min_length=1,
-        max_length=255,
-        strict=True,
-    )
-
-
-class ThreepidRequestTokenBody(RequestBodyModel):
-    client_secret: ClientSecretStr
-    id_server: Optional[StrictStr]
-    id_access_token: Optional[StrictStr]
-    next_link: Optional[StrictStr]
-    send_attempt: StrictInt
-
-    @validator("id_access_token", always=True)
-    def token_required_for_identity_server(
-        cls, token: Optional[str], values: Dict[str, object]
-    ) -> Optional[str]:
-        if values.get("id_server") is not None and token is None:
-            raise ValueError("id_access_token is required if an id_server is supplied.")
-        return token
-
-
-class EmailRequestTokenBody(ThreepidRequestTokenBody):
-    email: StrictStr
-
-    # Canonicalise the email address. The addresses are all stored canonicalised
-    # in the database. This allows the user to reset his password without having to
-    # know the exact spelling (eg. upper and lower case) of address in the database.
-    # Without this, an email stored in the database as "foo@bar.com" would cause
-    # user requests for "FOO@bar.com" to raise a Not Found error.
-    _email_validator = validator("email", allow_reuse=True)(validate_email)
-
-
-if TYPE_CHECKING:
-    ISO3116_1_Alpha_2 = StrictStr
-else:
-    # Per spec: two-letter uppercase ISO-3166-1-alpha-2
-    ISO3116_1_Alpha_2 = constr(regex="[A-Z]{2}", strict=True)
-
-
-class MsisdnRequestTokenBody(ThreepidRequestTokenBody):
-    country: ISO3116_1_Alpha_2
-    phone_number: StrictStr
@ -23,17 +23,28 @@ import logging
|
||||||
from http import HTTPStatus
|
from http import HTTPStatus
|
||||||
from typing import TYPE_CHECKING, Tuple
|
from typing import TYPE_CHECKING, Tuple
|
||||||
|
|
||||||
|
from synapse._pydantic_compat import HAS_PYDANTIC_V2
|
||||||
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
|
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
|
||||||
from synapse.http.server import HttpServer
|
from synapse.http.server import HttpServer
|
||||||
from synapse.http.servlet import RestServlet, parse_json_object_from_request
|
from synapse.http.servlet import (
|
||||||
|
RestServlet,
|
||||||
|
parse_and_validate_json_object_from_request,
|
||||||
|
parse_json_object_from_request,
|
||||||
|
)
|
||||||
from synapse.http.site import SynapseRequest
|
from synapse.http.site import SynapseRequest
|
||||||
from synapse.types import JsonDict
|
from synapse.types import JsonDict
|
||||||
|
from synapse.types.rest import RequestBodyModel
|
||||||
 from ._base import client_patterns

 if TYPE_CHECKING:
     from synapse.server import HomeServer

+if TYPE_CHECKING or HAS_PYDANTIC_V2:
+    from pydantic.v1 import StrictStr
+else:
+    from pydantic import StrictStr

 logger = logging.getLogger(__name__)


@@ -95,5 +106,49 @@ class ReportEventRestServlet(RestServlet):
         return 200, {}


+class ReportRoomRestServlet(RestServlet):
+    # https://github.com/matrix-org/matrix-spec-proposals/pull/4151
+    PATTERNS = client_patterns(
+        "/org.matrix.msc4151/rooms/(?P<room_id>[^/]*)/report$",
+        releases=[],
+        v1=False,
+        unstable=True,
+    )
+
+    def __init__(self, hs: "HomeServer"):
+        super().__init__()
+        self.hs = hs
+        self.auth = hs.get_auth()
+        self.clock = hs.get_clock()
+        self.store = hs.get_datastores().main
+
+    class PostBody(RequestBodyModel):
+        reason: StrictStr
+
+    async def on_POST(
+        self, request: SynapseRequest, room_id: str
+    ) -> Tuple[int, JsonDict]:
+        requester = await self.auth.get_user_by_req(request)
+        user_id = requester.user.to_string()
+
+        body = parse_and_validate_json_object_from_request(request, self.PostBody)
+
+        room = await self.store.get_room(room_id)
+        if room is None:
+            raise NotFoundError("Room does not exist")
+
+        await self.store.add_room_report(
+            room_id=room_id,
+            user_id=user_id,
+            reason=body.reason,
+            received_ts=self.clock.time_msec(),
+        )
+
+        return 200, {}
+
+
 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ReportEventRestServlet(hs).register(http_server)
+
+    if hs.config.experimental.msc4151_enabled:
+        ReportRoomRestServlet(hs).register(http_server)
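The new servlet above validates the request body before touching the database: the report `reason` must be present and must be a string (enforced upstream via a pydantic `StrictStr` model). A minimal stdlib-only sketch of that validation step, with the helper name `validate_report_body` being hypothetical:

```python
def validate_report_body(body: dict) -> str:
    # Mirrors the StrictStr-validated PostBody: `reason` must be a string;
    # anything else (missing, int, None) is rejected before any storage call.
    reason = body.get("reason")
    if not isinstance(reason, str):
        raise ValueError("'reason' must be a string")
    return reason
```

A malformed body would then surface as a 400-style error in the servlet rather than a failure deeper in the storage layer.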
@@ -292,6 +292,9 @@ class RoomStateEventRestServlet(RestServlet):
         try:
             if event_type == EventTypes.Member:
                 membership = content.get("membership", None)
+                if not isinstance(membership, str):
+                    raise SynapseError(400, "Invalid membership (must be a string)")
+
                 event_id, _ = await self.room_member_handler.update_membership(
                     requester,
                     target=UserID.from_string(state_key),
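The hunk above adds an up-front type guard: a non-string `membership` in an `m.room.member` state event is rejected with a 400 instead of failing inside the membership handler. A small sketch of that guard in isolation (the function name `check_membership` is illustrative):

```python
def check_membership(content: dict) -> str:
    # Reject a missing or non-string `membership` before calling into the
    # membership handler, mirroring the new servlet-level check.
    membership = content.get("membership", None)
    if not isinstance(membership, str):
        raise ValueError("Invalid membership (must be a string)")
    return membership
```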
@@ -33,6 +33,7 @@ from synapse.events.utils import (
     format_event_raw,
 )
 from synapse.handlers.presence import format_user_presence_state
+from synapse.handlers.sliding_sync import SlidingSyncConfig, SlidingSyncResult
 from synapse.handlers.sync import (
     ArchivedSyncResult,
     InvitedSyncResult,
@@ -43,10 +44,17 @@ from synapse.handlers.sync import (
     SyncVersion,
 )
 from synapse.http.server import HttpServer
-from synapse.http.servlet import RestServlet, parse_boolean, parse_integer, parse_string
+from synapse.http.servlet import (
+    RestServlet,
+    parse_and_validate_json_object_from_request,
+    parse_boolean,
+    parse_integer,
+    parse_string,
+)
 from synapse.http.site import SynapseRequest
 from synapse.logging.opentracing import trace_with_opname
 from synapse.types import JsonDict, Requester, StreamToken
+from synapse.types.rest.client import SlidingSyncBody
 from synapse.util import json_decoder
 from synapse.util.caches.lrucache import LruCache

@@ -735,8 +743,228 @@ class SlidingSyncE2eeRestServlet(RestServlet):
         return 200, response


+class SlidingSyncRestServlet(RestServlet):
+    """
+    API endpoint for MSC3575 Sliding Sync `/sync`. Allows clients to request a
+    subset (sliding window) of rooms, state, and timeline events (just what they
+    need) in order to bootstrap quickly and subscribe to only what the client
+    cares about. Because the client can specify what it cares about, we can
+    respond quickly and skip all of the work we would normally have to do for a
+    sync v2 response.
+
+    Request query parameters:
+        timeout: How long to wait for new events in milliseconds.
+        pos: Stream position token when asking for incremental deltas.
+
+    Request body::
+
+        {
+            // Sliding Window API
+            "lists": {
+                "foo-list": {
+                    "ranges": [ [0, 99] ],
+                    "sort": [ "by_notification_level", "by_recency", "by_name" ],
+                    "required_state": [
+                        ["m.room.join_rules", ""],
+                        ["m.room.history_visibility", ""],
+                        ["m.space.child", "*"]
+                    ],
+                    "timeline_limit": 10,
+                    "filters": {
+                        "is_dm": true
+                    },
+                    "bump_event_types": [ "m.room.message", "m.room.encrypted" ],
+                }
+            },
+            // Room Subscriptions API
+            "room_subscriptions": {
+                "!sub1:bar": {
+                    "required_state": [ ["*","*"] ],
+                    "timeline_limit": 10,
+                    "include_old_rooms": {
+                        "timeline_limit": 1,
+                        "required_state": [ ["m.room.tombstone", ""], ["m.room.create", ""] ],
+                    }
+                }
+            },
+            // Extensions API
+            "extensions": {}
+        }
+
+    Response JSON::
+
+        {
+            "next_pos": "s58_224_0_13_10_1_1_16_0_1",
+            "lists": {
+                "foo-list": {
+                    "count": 1337,
+                    "ops": [{
+                        "op": "SYNC",
+                        "range": [0, 99],
+                        "room_ids": [
+                            "!foo:bar",
+                            // ... 99 more room IDs
+                        ]
+                    }]
+                }
+            },
+            // Aggregated rooms from lists and room subscriptions
+            "rooms": {
+                // Room from room subscription
+                "!sub1:bar": {
+                    "name": "Alice and Bob",
+                    "avatar": "mxc://...",
+                    "initial": true,
+                    "required_state": [
+                        {"sender":"@alice:example.com","type":"m.room.create", "state_key":"", "content":{"creator":"@alice:example.com"}},
+                        {"sender":"@alice:example.com","type":"m.room.join_rules", "state_key":"", "content":{"join_rule":"invite"}},
+                        {"sender":"@alice:example.com","type":"m.room.history_visibility", "state_key":"", "content":{"history_visibility":"joined"}},
+                        {"sender":"@alice:example.com","type":"m.room.member", "state_key":"@alice:example.com", "content":{"membership":"join"}}
+                    ],
+                    "timeline": [
+                        {"sender":"@alice:example.com","type":"m.room.create", "state_key":"", "content":{"creator":"@alice:example.com"}},
+                        {"sender":"@alice:example.com","type":"m.room.join_rules", "state_key":"", "content":{"join_rule":"invite"}},
+                        {"sender":"@alice:example.com","type":"m.room.history_visibility", "state_key":"", "content":{"history_visibility":"joined"}},
+                        {"sender":"@alice:example.com","type":"m.room.member", "state_key":"@alice:example.com", "content":{"membership":"join"}},
+                        {"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"A"}},
+                        {"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"B"}},
+                    ],
+                    "prev_batch": "t111_222_333",
+                    "joined_count": 41,
+                    "invited_count": 1,
+                    "notification_count": 1,
+                    "highlight_count": 0
+                },
+                // rooms from list
+                "!foo:bar": {
+                    "name": "The calculated room name",
+                    "avatar": "mxc://...",
+                    "initial": true,
+                    "required_state": [
+                        {"sender":"@alice:example.com","type":"m.room.join_rules", "state_key":"", "content":{"join_rule":"invite"}},
+                        {"sender":"@alice:example.com","type":"m.room.history_visibility", "state_key":"", "content":{"history_visibility":"joined"}},
+                        {"sender":"@alice:example.com","type":"m.space.child", "state_key":"!foo:example.com", "content":{"via":["example.com"]}},
+                        {"sender":"@alice:example.com","type":"m.space.child", "state_key":"!bar:example.com", "content":{"via":["example.com"]}},
+                        {"sender":"@alice:example.com","type":"m.space.child", "state_key":"!baz:example.com", "content":{"via":["example.com"]}}
+                    ],
+                    "timeline": [
+                        {"sender":"@alice:example.com","type":"m.room.join_rules", "state_key":"", "content":{"join_rule":"invite"}},
+                        {"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"A"}},
+                        {"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"B"}},
+                        {"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"C"}},
+                        {"sender":"@alice:example.com","type":"m.room.message", "content":{"body":"D"}},
+                    ],
+                    "prev_batch": "t111_222_333",
+                    "joined_count": 4,
+                    "invited_count": 0,
+                    "notification_count": 54,
+                    "highlight_count": 3
+                },
+                // ... 99 more items
+            },
+            "extensions": {}
+        }
+    """
+
+    PATTERNS = client_patterns(
+        "/org.matrix.msc3575/sync$", releases=[], v1=False, unstable=True
+    )
+
+    def __init__(self, hs: "HomeServer"):
+        super().__init__()
+        self.auth = hs.get_auth()
+        self.store = hs.get_datastores().main
+        self.filtering = hs.get_filtering()
+        self.sliding_sync_handler = hs.get_sliding_sync_handler()
+
+    # TODO: Update this to `on_GET` once we figure out how we want to handle params
+    async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        requester = await self.auth.get_user_by_req(request, allow_guest=True)
+        user = requester.user
+        device_id = requester.device_id
+
+        timeout = parse_integer(request, "timeout", default=0)
+        # Position in the stream
+        from_token_string = parse_string(request, "pos")
+
+        from_token = None
+        if from_token_string is not None:
+            from_token = await StreamToken.from_string(self.store, from_token_string)
+
+        # TODO: We currently don't know whether we're going to use sticky params or
+        # maybe some filters like sync v2 where they are built up once and referenced
+        # by filter ID. For now, we will just prototype with always passing everything
+        # in.
+        body = parse_and_validate_json_object_from_request(request, SlidingSyncBody)
+        logger.info("Sliding sync request: %r", body)
+
+        sync_config = SlidingSyncConfig(
+            user=user,
+            device_id=device_id,
+            # FIXME: Currently, we're just manually copying the fields from the
+            # `SlidingSyncBody` into the config. How can we guarantee into the future
+            # that we don't forget any? I would like something more structured like
+            # `copy_attributes(from=body, to=config)`
+            lists=body.lists,
+            room_subscriptions=body.room_subscriptions,
+            extensions=body.extensions,
+        )
+
+        sliding_sync_results = await self.sliding_sync_handler.wait_for_sync_for_user(
+            requester,
+            sync_config,
+            from_token,
+            timeout,
+        )
+
+        # The client may have disconnected by now; don't bother to serialize the
+        # response if so.
+        if request._disconnected:
+            logger.info("Client has disconnected; not serializing response.")
+            return 200, {}
+
+        response_content = await self.encode_response(sliding_sync_results)
+
+        return 200, response_content
+
+    # TODO: Is there a better way to encode things?
+    async def encode_response(
+        self,
+        sliding_sync_result: SlidingSyncResult,
+    ) -> JsonDict:
+        response: JsonDict = defaultdict(dict)
+
+        response["next_pos"] = await sliding_sync_result.next_pos.to_string(self.store)
+        serialized_lists = self.encode_lists(sliding_sync_result.lists)
+        if serialized_lists:
+            response["lists"] = serialized_lists
+        response["rooms"] = {}  # TODO: sliding_sync_result.rooms
+        response["extensions"] = {}  # TODO: sliding_sync_result.extensions
+
+        return response
+
+    def encode_lists(
+        self, lists: Dict[str, SlidingSyncResult.SlidingWindowList]
+    ) -> JsonDict:
+        def encode_operation(
+            operation: SlidingSyncResult.SlidingWindowList.Operation,
+        ) -> JsonDict:
+            return {
+                "op": operation.op.value,
+                "range": operation.range,
+                "room_ids": operation.room_ids,
+            }
+
+        serialized_lists = {}
+        for list_key, list_result in lists.items():
+            serialized_lists[list_key] = {
+                "count": list_result.count,
+                "ops": [encode_operation(op) for op in list_result.ops],
+            }
+
+        return serialized_lists
+
+
 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     SyncRestServlet(hs).register(http_server)

     if hs.config.experimental.msc3575_enabled:
+        SlidingSyncRestServlet(hs).register(http_server)
         SlidingSyncE2eeRestServlet(hs).register(http_server)
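The `encode_lists` helper in the diff above flattens each sliding window list into a `{"count", "ops"}` JSON object. A self-contained sketch of that serialization, using toy stand-ins (`OpType`, `Operation`, `SlidingWindowList`) for the handler's result types:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List


class OpType(Enum):
    SYNC = "SYNC"


@dataclass
class Operation:
    op: OpType
    range: List[int]
    room_ids: List[str]


@dataclass
class SlidingWindowList:
    count: int
    ops: List[Operation]


def encode_lists(lists: Dict[str, SlidingWindowList]) -> dict:
    # Each list becomes {"count", "ops"}; each operation is flattened to its
    # op name, window range, and the room IDs that fill that range.
    def encode_operation(op: Operation) -> dict:
        return {"op": op.op.value, "range": op.range, "room_ids": op.room_ids}

    return {
        key: {"count": lst.count, "ops": [encode_operation(o) for o in lst.ops]}
        for key, lst in lists.items()
    }
```

This matches the shape of the `"lists"` object in the response JSON sketched in the servlet docstring.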
@@ -149,6 +149,8 @@ class VersionsRestServlet(RestServlet):
                         is not None
                     )
                 ),
+                # MSC4151: Report room API (Client-Server API)
+                "org.matrix.msc4151": self.config.experimental.msc4151_enabled,
             },
         },
     )
@@ -41,9 +41,9 @@ from synapse.http.servlet import (
     parse_and_validate_json_object_from_request,
     parse_integer,
 )
-from synapse.rest.models import RequestBodyModel
 from synapse.storage.keys import FetchKeyResultForRemote
 from synapse.types import JsonDict
+from synapse.types.rest import RequestBodyModel
 from synapse.util import json_decoder
 from synapse.util.async_helpers import yieldable_gather_results
@@ -97,6 +97,12 @@ class DownloadResource(RestServlet):
             respond_404(request)
             return

+        ip_address = request.getClientAddress().host
         await self.media_repo.get_remote_media(
-            request, server_name, media_id, file_name, max_timeout_ms
+            request,
+            server_name,
+            media_id,
+            file_name,
+            max_timeout_ms,
+            ip_address,
         )
@@ -104,6 +104,7 @@ class ThumbnailResource(RestServlet):
             respond_404(request)
             return

+        ip_address = request.getClientAddress().host
         remote_resp_function = (
             self.thumbnail_provider.select_or_generate_remote_thumbnail
             if self.dynamic_thumbnails
@@ -118,5 +119,6 @@ class ThumbnailResource(RestServlet):
             method,
             m_type,
             max_timeout_ms,
+            ip_address,
         )
         self.media_repo.mark_recently_accessed(server_name, media_id)
@@ -109,6 +109,7 @@ from synapse.handlers.room_summary import RoomSummaryHandler
 from synapse.handlers.search import SearchHandler
 from synapse.handlers.send_email import SendEmailHandler
 from synapse.handlers.set_password import SetPasswordHandler
+from synapse.handlers.sliding_sync import SlidingSyncHandler
 from synapse.handlers.sso import SsoHandler
 from synapse.handlers.stats import StatsHandler
 from synapse.handlers.sync import SyncHandler
@@ -554,6 +555,9 @@ class HomeServer(metaclass=abc.ABCMeta):
     def get_sync_handler(self) -> SyncHandler:
         return SyncHandler(self)

+    def get_sliding_sync_handler(self) -> SlidingSyncHandler:
+        return SlidingSyncHandler(self)
+
     @cache_in_self
     def get_room_list_handler(self) -> RoomListHandler:
         return RoomListHandler(self)
@@ -2461,7 +2461,11 @@ class DatabasePool:


 def make_in_list_sql_clause(
-    database_engine: BaseDatabaseEngine, column: str, iterable: Collection[Any]
+    database_engine: BaseDatabaseEngine,
+    column: str,
+    iterable: Collection[Any],
+    *,
+    negative: bool = False,
 ) -> Tuple[str, list]:
     """Returns an SQL clause that checks the given column is in the iterable.

@@ -2474,6 +2478,7 @@ def make_in_list_sql_clause(
         database_engine
         column: Name of the column
         iterable: The values to check the column against.
+        negative: Whether we should check for inequality, i.e. `NOT IN`

     Returns:
         A tuple of SQL query and the args
@@ -2482,9 +2487,19 @@ def make_in_list_sql_clause(
     if database_engine.supports_using_any_list:
         # This should hopefully be faster, but also makes postgres query
         # stats easier to understand.
-        return "%s = ANY(?)" % (column,), [list(iterable)]
+        if not negative:
+            clause = f"{column} = ANY(?)"
+        else:
+            clause = f"{column} != ALL(?)"
+
+        return clause, [list(iterable)]
     else:
-        return "%s IN (%s)" % (column, ",".join("?" for _ in iterable)), list(iterable)
+        params = ",".join("?" for _ in iterable)
+        if not negative:
+            clause = f"{column} IN ({params})"
+        else:
+            clause = f"{column} NOT IN ({params})"
+        return clause, list(iterable)


 # These overloads ensure that `columns` and `iterable` values have the same length.
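The hunk above adds a `negative` keyword to `make_in_list_sql_clause`: on engines that support array parameters (Postgres) negation is expressed as `!= ALL(array)`, while on the placeholder-expanding path (SQLite) it is `NOT IN (?,…)`. A minimal standalone sketch of both branches, with `supports_any_list` standing in for the engine capability check:

```python
from typing import Any, Collection, List, Tuple


def make_in_list_sql_clause(
    column: str,
    iterable: Collection[Any],
    *,
    supports_any_list: bool = False,
    negative: bool = False,
) -> Tuple[str, List[Any]]:
    if supports_any_list:
        # Postgres-style path: one array-valued parameter; negation uses
        # `!= ALL` since `NOT (x = ANY(a))` is equivalent to `x != ALL(a)`.
        clause = f"{column} = ANY(?)" if not negative else f"{column} != ALL(?)"
        return clause, [list(iterable)]
    # SQLite-style path: one placeholder per value; negation uses `NOT IN`.
    params = ",".join("?" for _ in iterable)
    clause = f"{column} IN ({params})" if not negative else f"{column} NOT IN ({params})"
    return clause, list(iterable)
```

Note the argument shape differs between the two paths: the array path passes a single list-valued parameter, the placeholder path passes one parameter per value.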
@@ -43,11 +43,9 @@ from synapse.storage.database import (
 )
 from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
 from synapse.storage.databases.main.push_rule import PushRulesWorkerStore
-from synapse.storage.engines import PostgresEngine
 from synapse.storage.util.id_generators import (
     AbstractStreamIdGenerator,
     MultiWriterIdGenerator,
-    StreamIdGenerator,
 )
 from synapse.types import JsonDict, JsonMapping
 from synapse.util import json_encoder
@@ -75,37 +73,20 @@ class AccountDataWorkerStore(PushRulesWorkerStore, CacheInvalidationWorkerStore)

         self._account_data_id_gen: AbstractStreamIdGenerator

-        if isinstance(database.engine, PostgresEngine):
-            self._account_data_id_gen = MultiWriterIdGenerator(
-                db_conn=db_conn,
-                db=database,
-                notifier=hs.get_replication_notifier(),
-                stream_name="account_data",
-                instance_name=self._instance_name,
-                tables=[
-                    ("room_account_data", "instance_name", "stream_id"),
-                    ("room_tags_revisions", "instance_name", "stream_id"),
-                    ("account_data", "instance_name", "stream_id"),
-                ],
-                sequence_name="account_data_sequence",
-                writers=hs.config.worker.writers.account_data,
-            )
-        else:
-            # Multiple writers are not supported for SQLite.
-            #
-            # We shouldn't be running in worker mode with SQLite, but its useful
-            # to support it for unit tests.
-            self._account_data_id_gen = StreamIdGenerator(
-                db_conn,
-                hs.get_replication_notifier(),
-                "room_account_data",
-                "stream_id",
-                extra_tables=[
-                    ("account_data", "stream_id"),
-                    ("room_tags_revisions", "stream_id"),
-                ],
-                is_writer=self._instance_name in hs.config.worker.writers.account_data,
-            )
+        self._account_data_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="account_data",
+            instance_name=self._instance_name,
+            tables=[
+                ("room_account_data", "instance_name", "stream_id"),
+                ("room_tags_revisions", "instance_name", "stream_id"),
+                ("account_data", "instance_name", "stream_id"),
+            ],
+            sequence_name="account_data_sequence",
+            writers=hs.config.worker.writers.account_data,
+        )

         account_max = self.get_max_account_data_stream_id()
         self._account_data_stream_cache = StreamChangeCache(
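The hunk above drops the SQLite-only `StreamIdGenerator` branch and uses `MultiWriterIdGenerator` unconditionally. Both generators honour the same basic contract, which a toy single-writer version makes concrete (this is an illustrative sketch, not Synapse's implementation): `get_next()` hands out strictly increasing stream IDs and `get_current_token()` reports the most recently allocated one.

```python
import threading


class ToyStreamIdGenerator:
    # Toy single-writer stream ID generator illustrating the contract:
    # IDs are strictly increasing, and the current token never goes backwards.
    def __init__(self, start: int = 0) -> None:
        self._lock = threading.Lock()
        self._current = start

    def get_next(self) -> int:
        with self._lock:
            self._current += 1
            return self._current

    def get_current_token(self) -> int:
        with self._lock:
            return self._current
```

The multi-writer variant additionally persists per-instance positions in the listed tables and a shared sequence so several workers can allocate IDs concurrently.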
@@ -318,7 +318,13 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         self._invalidate_local_get_event_cache(redacts)  # type: ignore[attr-defined]
         # Caches which might leak edits must be invalidated for the event being
         # redacted.
-        self._attempt_to_invalidate_cache("get_relations_for_event", (redacts,))
+        self._attempt_to_invalidate_cache(
+            "get_relations_for_event",
+            (
+                room_id,
+                redacts,
+            ),
+        )
         self._attempt_to_invalidate_cache("get_applicable_edit", (redacts,))
         self._attempt_to_invalidate_cache("get_thread_id", (redacts,))
         self._attempt_to_invalidate_cache("get_thread_id_for_receipts", (redacts,))
@@ -345,7 +351,13 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         )

         if relates_to:
-            self._attempt_to_invalidate_cache("get_relations_for_event", (relates_to,))
+            self._attempt_to_invalidate_cache(
+                "get_relations_for_event",
+                (
+                    room_id,
+                    relates_to,
+                ),
+            )
             self._attempt_to_invalidate_cache("get_references_for_event", (relates_to,))
             self._attempt_to_invalidate_cache("get_applicable_edit", (relates_to,))
             self._attempt_to_invalidate_cache("get_thread_summary", (relates_to,))
@@ -380,9 +392,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         self._attempt_to_invalidate_cache(
             "get_unread_event_push_actions_by_room_for_user", (room_id,)
         )
+        self._attempt_to_invalidate_cache("get_relations_for_event", (room_id,))

         self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)
-        self._attempt_to_invalidate_cache("get_relations_for_event", None)
         self._attempt_to_invalidate_cache("get_applicable_edit", None)
         self._attempt_to_invalidate_cache("get_thread_id", None)
         self._attempt_to_invalidate_cache("get_thread_id_for_receipts", None)
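The hunks above change the `get_relations_for_event` cache key from `(event_id,)` to `(room_id, event_id)`, so invalidation must now supply both parts, and purging a room can target just that room's prefix instead of wiping the whole cache. A toy dict-backed cache illustrating that keying change (the class and method names are illustrative, not Synapse's):

```python
class TupleKeyedCache:
    # Toy cache keyed on (room_id, event_id). Invalidation takes the full
    # tuple; a key of None clears everything, mirroring
    # `_attempt_to_invalidate_cache(name, None)`.
    def __init__(self) -> None:
        self._entries = {}

    def set(self, room_id: str, event_id: str, value) -> None:
        self._entries[(room_id, event_id)] = value

    def get(self, room_id: str, event_id: str):
        return self._entries.get((room_id, event_id))

    def attempt_to_invalidate(self, key) -> None:
        if key is None:
            self._entries.clear()
        else:
            self._entries.pop(key, None)
```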
@ -50,16 +50,15 @@ from synapse.storage.database import (
|
||||||
LoggingTransaction,
|
LoggingTransaction,
|
||||||
make_in_list_sql_clause,
|
make_in_list_sql_clause,
|
||||||
)
|
)
|
||||||
from synapse.storage.engines import PostgresEngine
|
|
||||||
from synapse.storage.util.id_generators import (
|
from synapse.storage.util.id_generators import (
|
||||||
AbstractStreamIdGenerator,
|
AbstractStreamIdGenerator,
|
||||||
MultiWriterIdGenerator,
|
MultiWriterIdGenerator,
|
||||||
StreamIdGenerator,
|
|
||||||
)
|
)
|
||||||
from synapse.types import JsonDict
|
from synapse.types import JsonDict
|
||||||
from synapse.util import json_encoder
|
from synapse.util import json_encoder
|
||||||
from synapse.util.caches.expiringcache import ExpiringCache
|
from synapse.util.caches.expiringcache import ExpiringCache
|
||||||
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
||||||
|
from synapse.util.stringutils import parse_and_validate_server_name
|
||||||
|
|
||||||
if TYPE_CHECKING:
|
if TYPE_CHECKING:
|
||||||
from synapse.server import HomeServer
|
from synapse.server import HomeServer
|
||||||
|
@ -89,35 +88,23 @@ class DeviceInboxWorkerStore(SQLBaseStore):
|
||||||
expiry_ms=30 * 60 * 1000,
|
expiry_ms=30 * 60 * 1000,
|
||||||
)
|
)
|
||||||
|
|
||||||
if isinstance(database.engine, PostgresEngine):
|
self._can_write_to_device = (
|
||||||
self._can_write_to_device = (
|
self._instance_name in hs.config.worker.writers.to_device
|
||||||
self._instance_name in hs.config.worker.writers.to_device
|
)
|
||||||
)
|
|
||||||
|
|
||||||
self._to_device_msg_id_gen: AbstractStreamIdGenerator = (
|
self._to_device_msg_id_gen: AbstractStreamIdGenerator = MultiWriterIdGenerator(
|
||||||
MultiWriterIdGenerator(
|
db_conn=db_conn,
|
||||||
db_conn=db_conn,
|
db=database,
|
||||||
db=database,
|
notifier=hs.get_replication_notifier(),
|
||||||
notifier=hs.get_replication_notifier(),
|
stream_name="to_device",
|
||||||
stream_name="to_device",
|
instance_name=self._instance_name,
|
||||||
instance_name=self._instance_name,
|
tables=[
|
||||||
tables=[
|
("device_inbox", "instance_name", "stream_id"),
|
||||||
("device_inbox", "instance_name", "stream_id"),
|
("device_federation_outbox", "instance_name", "stream_id"),
|
||||||
("device_federation_outbox", "instance_name", "stream_id"),
|
],
|
||||||
],
|
sequence_name="device_inbox_sequence",
|
||||||
sequence_name="device_inbox_sequence",
|
writers=hs.config.worker.writers.to_device,
|
||||||
writers=hs.config.worker.writers.to_device,
|
)
|
||||||
)
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
self._can_write_to_device = True
|
|
||||||
self._to_device_msg_id_gen = StreamIdGenerator(
|
|
||||||
db_conn,
|
|
||||||
hs.get_replication_notifier(),
|
|
||||||
"device_inbox",
|
|
||||||
"stream_id",
|
|
||||||
extra_tables=[("device_federation_outbox", "stream_id")],
|
|
||||||
)
|
|
||||||
|
|
||||||
max_device_inbox_id = self._to_device_msg_id_gen.get_current_token()
|
max_device_inbox_id = self._to_device_msg_id_gen.get_current_token()
|
||||||
device_inbox_prefill, min_device_inbox_id = self.db_pool.get_cache_dict(
|
device_inbox_prefill, min_device_inbox_id = self.db_pool.get_cache_dict(
|
||||||
|
@ -978,6 +965,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
|
||||||
class DeviceInboxBackgroundUpdateStore(SQLBaseStore):
|
class DeviceInboxBackgroundUpdateStore(SQLBaseStore):
|
||||||
DEVICE_INBOX_STREAM_ID = "device_inbox_stream_drop"
|
DEVICE_INBOX_STREAM_ID = "device_inbox_stream_drop"
|
||||||
REMOVE_DEAD_DEVICES_FROM_INBOX = "remove_dead_devices_from_device_inbox"
|
REMOVE_DEAD_DEVICES_FROM_INBOX = "remove_dead_devices_from_device_inbox"
|
||||||
|
CLEANUP_DEVICE_FEDERATION_OUTBOX = "cleanup_device_federation_outbox"
|
||||||
|
|
||||||
def __init__(
|
def __init__(
|
||||||
self,
|
self,
|
||||||
|
@ -1003,6 +991,11 @@ class DeviceInboxBackgroundUpdateStore(SQLBaseStore):
|
||||||
self._remove_dead_devices_from_device_inbox,
|
self._remove_dead_devices_from_device_inbox,
|
||||||
)
|
)
|
||||||
|
|
||||||
|
self.db_pool.updates.register_background_update_handler(
|
||||||
|
self.CLEANUP_DEVICE_FEDERATION_OUTBOX,
|
||||||
|
self._cleanup_device_federation_outbox,
|
||||||
|
)
|
||||||
|
|
||||||
async def _background_drop_index_device_inbox(
|
async def _background_drop_index_device_inbox(
|
||||||
self, progress: JsonDict, batch_size: int
|
self, progress: JsonDict, batch_size: int
|
     ) -> int:
@@ -1094,6 +1087,75 @@ class DeviceInboxBackgroundUpdateStore(SQLBaseStore):
 
         return batch_size
 
+    async def _cleanup_device_federation_outbox(
+        self,
+        progress: JsonDict,
+        batch_size: int,
+    ) -> int:
+        def _cleanup_device_federation_outbox_txn(
+            txn: LoggingTransaction,
+        ) -> bool:
+            if "max_stream_id" in progress:
+                max_stream_id = progress["max_stream_id"]
+            else:
+                txn.execute("SELECT max(stream_id) FROM device_federation_outbox")
+                res = cast(Tuple[Optional[int]], txn.fetchone())
+                if res[0] is None:
+                    # this can only happen if the `device_inbox` table is empty, in which
+                    # case we have no work to do.
+                    return True
+                else:
+                    max_stream_id = res[0]
+
+            start = progress.get("stream_id", 0)
+            stop = start + batch_size
+
+            sql = """
+                SELECT destination FROM device_federation_outbox
+                WHERE ? < stream_id AND stream_id <= ?
+            """
+
+            txn.execute(sql, (start, stop))
+
+            destinations = {d for d, in txn}
+            to_remove = set()
+            for d in destinations:
+                try:
+                    parse_and_validate_server_name(d)
+                except ValueError:
+                    to_remove.add(d)
+
+            self.db_pool.simple_delete_many_txn(
+                txn,
+                table="device_federation_outbox",
+                column="destination",
+                values=to_remove,
+                keyvalues={},
+            )
+
+            self.db_pool.updates._background_update_progress_txn(
+                txn,
+                self.CLEANUP_DEVICE_FEDERATION_OUTBOX,
+                {
+                    "stream_id": stop,
+                    "max_stream_id": max_stream_id,
+                },
+            )
+
+            return stop >= max_stream_id
+
+        finished = await self.db_pool.runInteraction(
+            "_cleanup_device_federation_outbox",
+            _cleanup_device_federation_outbox_txn,
+        )
+
+        if finished:
+            await self.db_pool.updates._end_background_update(
+                self.CLEANUP_DEVICE_FEDERATION_OUTBOX,
+            )
+
+        return batch_size
+
 
 class DeviceInboxStore(DeviceInboxWorkerStore, DeviceInboxBackgroundUpdateStore):
     pass
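The `_cleanup_device_federation_outbox` background update above batches over `device_federation_outbox` and deletes rows whose `destination` is not a valid server name. A standalone sketch of that filtering step; `parse_server_name` below is a simplified stand-in for Synapse's `parse_and_validate_server_name`, not the real validator:

```python
import re

# Rough approximation of a valid host[:port] destination; the real
# validator accepts more forms (IPv6 literals, etc.).
VALID_HOST = re.compile(r"^[A-Za-z0-9.\-]+(:\d+)?$")


def parse_server_name(name: str) -> None:
    # Raise ValueError for empty or clearly malformed names,
    # mirroring the real function's error behaviour.
    if not name or not VALID_HOST.match(name):
        raise ValueError(f"invalid server name: {name!r}")


def invalid_destinations(destinations: set) -> set:
    # Collect the destinations that fail validation, as the
    # background update does before simple_delete_many_txn.
    to_remove = set()
    for d in destinations:
        try:
            parse_server_name(d)
        except ValueError:
            to_remove.add(d)
    return to_remove


print(sorted(invalid_destinations({"matrix.org", "", "bad host!"})))
# → ['', 'bad host!']
```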
--- a/synapse/storage/databases/main/devices.py
+++ b/synapse/storage/databases/main/devices.py
@@ -57,10 +57,7 @@ from synapse.storage.database import (
 from synapse.storage.databases.main.end_to_end_keys import EndToEndKeyWorkerStore
 from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
 from synapse.storage.types import Cursor
-from synapse.storage.util.id_generators import (
-    AbstractStreamIdGenerator,
-    StreamIdGenerator,
-)
+from synapse.storage.util.id_generators import MultiWriterIdGenerator
 from synapse.types import (
     JsonDict,
     JsonMapping,
@@ -99,19 +96,26 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
 
         # In the worker store this is an ID tracker which we overwrite in the non-worker
         # class below that is used on the main process.
-        self._device_list_id_gen = StreamIdGenerator(
-            db_conn,
-            hs.get_replication_notifier(),
-            "device_lists_stream",
-            "stream_id",
-            extra_tables=[
-                ("user_signature_stream", "stream_id"),
-                ("device_lists_outbound_pokes", "stream_id"),
-                ("device_lists_changes_in_room", "stream_id"),
-                ("device_lists_remote_pending", "stream_id"),
-                ("device_lists_changes_converted_stream_position", "stream_id"),
+        self._device_list_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="device_lists_stream",
+            instance_name=self._instance_name,
+            tables=[
+                ("device_lists_stream", "instance_name", "stream_id"),
+                ("user_signature_stream", "instance_name", "stream_id"),
+                ("device_lists_outbound_pokes", "instance_name", "stream_id"),
+                ("device_lists_changes_in_room", "instance_name", "stream_id"),
+                ("device_lists_remote_pending", "instance_name", "stream_id"),
+                (
+                    "device_lists_changes_converted_stream_position",
+                    "instance_name",
+                    "stream_id",
+                ),
             ],
-            is_writer=hs.config.worker.worker_app is None,
+            sequence_name="device_lists_sequence",
+            writers=["master"],
         )
 
         device_list_max = self._device_list_id_gen.get_current_token()
@@ -762,6 +766,7 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
                 "stream_id": stream_id,
                 "from_user_id": from_user_id,
                 "user_ids": json_encoder.encode(user_ids),
+                "instance_name": self._instance_name,
             },
         )
 
@@ -1582,6 +1587,8 @@ class DeviceBackgroundUpdateStore(SQLBaseStore):
     ):
         super().__init__(database, db_conn, hs)
 
+        self._instance_name = hs.get_instance_name()
+
         self.db_pool.updates.register_background_index_update(
             "device_lists_stream_idx",
             index_name="device_lists_stream_user_id",
@@ -1694,6 +1701,7 @@ class DeviceBackgroundUpdateStore(SQLBaseStore):
             "device_lists_outbound_pokes",
             {
                 "stream_id": stream_id,
+                "instance_name": self._instance_name,
                 "destination": destination,
                 "user_id": user_id,
                 "device_id": device_id,
@@ -1730,10 +1738,6 @@ class DeviceBackgroundUpdateStore(SQLBaseStore):
 
 
 class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
-    # Because we have write access, this will be a StreamIdGenerator
-    # (see DeviceWorkerStore.__init__)
-    _device_list_id_gen: AbstractStreamIdGenerator
-
     def __init__(
         self,
         database: DatabasePool,
@@ -2092,9 +2096,9 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="device_lists_stream",
-            keys=("stream_id", "user_id", "device_id"),
+            keys=("instance_name", "stream_id", "user_id", "device_id"),
             values=[
-                (stream_id, user_id, device_id)
+                (self._instance_name, stream_id, user_id, device_id)
                 for stream_id, device_id in zip(stream_ids, device_ids)
             ],
         )
@@ -2124,6 +2128,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
         values = [
             (
                 destination,
+                self._instance_name,
                 next(stream_id_iterator),
                 user_id,
                 device_id,
@@ -2139,6 +2144,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
             table="device_lists_outbound_pokes",
             keys=(
                 "destination",
+                "instance_name",
                 "stream_id",
                 "user_id",
                 "device_id",
@@ -2157,7 +2163,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
             device_id,
             {
                 stream_id: destination
-                for (destination, stream_id, _, _, _, _, _) in values
+                for (destination, _, stream_id, _, _, _, _, _) in values
             },
         )
 
@@ -2210,6 +2216,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
                 "device_id",
                 "room_id",
                 "stream_id",
+                "instance_name",
                 "converted_to_destinations",
                 "opentracing_context",
             ),
@@ -2219,6 +2226,7 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
                 device_id,
                 room_id,
                 stream_id,
+                self._instance_name,
                 # We only need to calculate outbound pokes for local users
                 not self.hs.is_mine_id(user_id),
                 encoded_context,
@@ -2338,7 +2346,10 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
                 "user_id": user_id,
                 "device_id": device_id,
             },
-            values={"stream_id": stream_id},
+            values={
+                "stream_id": stream_id,
+                "instance_name": self._instance_name,
+            },
             desc="add_remote_device_list_to_pending",
         )
 
@@ -2388,15 +2399,16 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
             `FALSE` have not been converted.
         """
 
-        return cast(
-            Tuple[int, str],
-            await self.db_pool.simple_select_one(
-                table="device_lists_changes_converted_stream_position",
-                keyvalues={},
-                retcols=["stream_id", "room_id"],
-                desc="get_device_change_last_converted_pos",
-            ),
+        # There should be only one row in this table, though we want to
+        # future-proof ourselves for when we have multiple rows (one for each
+        # instance). So to handle that case we take the minimum of all rows.
+        rows = await self.db_pool.simple_select_list(
+            table="device_lists_changes_converted_stream_position",
+            keyvalues={},
+            retcols=["stream_id", "room_id"],
+            desc="get_device_change_last_converted_pos",
        )
+        return cast(Tuple[int, str], min(rows))
 
     async def set_device_change_last_converted_pos(
         self,
@@ -2411,6 +2423,10 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):
         await self.db_pool.simple_update_one(
             table="device_lists_changes_converted_stream_position",
             keyvalues={},
-            updatevalues={"stream_id": stream_id, "room_id": room_id},
+            updatevalues={
+                "stream_id": stream_id,
+                "instance_name": self._instance_name,
+                "room_id": room_id,
+            },
             desc="set_device_change_last_converted_pos",
         )
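One detail in the devices hunks above: `get_device_change_last_converted_pos` now selects all rows and takes `min(rows)`. That relies on Python's lexicographic tuple ordering, so the row with the smallest `stream_id` wins and `room_id` only breaks ties. A small illustration with made-up values:

```python
# Rows as (stream_id, room_id), one per writer instance; sample data only.
rows = [
    (2005, "!room-b:example.org"),
    (1990, "!room-a:example.org"),
    (2001, "!room-c:example.org"),
]

# min() compares tuples element by element, so it picks the row with
# the lowest stream_id, i.e. the least-converted position.
stream_id, room_id = min(rows)
print(stream_id, room_id)
# → 1990 !room-a:example.org
```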
--- a/synapse/storage/databases/main/end_to_end_keys.py
+++ b/synapse/storage/databases/main/end_to_end_keys.py
@@ -58,7 +58,7 @@ from synapse.storage.database import (
 )
 from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
 from synapse.storage.engines import PostgresEngine
-from synapse.storage.util.id_generators import StreamIdGenerator
+from synapse.storage.util.id_generators import MultiWriterIdGenerator
 from synapse.types import JsonDict, JsonMapping
 from synapse.util import json_decoder, json_encoder
 from synapse.util.caches.descriptors import cached, cachedList
@@ -1448,11 +1448,17 @@ class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
     ):
         super().__init__(database, db_conn, hs)
 
-        self._cross_signing_id_gen = StreamIdGenerator(
-            db_conn,
-            hs.get_replication_notifier(),
-            "e2e_cross_signing_keys",
-            "stream_id",
+        self._cross_signing_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="e2e_cross_signing_keys",
+            instance_name=self._instance_name,
+            tables=[
+                ("e2e_cross_signing_keys", "instance_name", "stream_id"),
+            ],
+            sequence_name="e2e_cross_signing_keys_sequence",
+            writers=["master"],
         )
 
     async def set_e2e_device_keys(
@@ -1627,6 +1633,7 @@ class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
                 "keytype": key_type,
                 "keydata": json_encoder.encode(key),
                 "stream_id": stream_id,
+                "instance_name": self._instance_name,
             },
         )
--- a/synapse/storage/databases/main/events.py
+++ b/synapse/storage/databases/main/events.py
@@ -95,6 +95,10 @@ class DeltaState:
     to_insert: StateMap[str]
     no_longer_in_room: bool = False
 
+    def is_noop(self) -> bool:
+        """Whether this state delta is actually empty"""
+        return not self.to_delete and not self.to_insert and not self.no_longer_in_room
+
 
 class PersistEventsStore:
     """Contains all the functions for writing events to the database.
@@ -1017,6 +1021,9 @@ class PersistEventsStore:
     ) -> None:
         """Update the current state stored in the datatabase for the given room"""
 
+        if state_delta.is_noop():
+            return
+
         async with self._stream_id_gen.get_next() as stream_ordering:
             await self.db_pool.runInteraction(
                 "update_current_state",
@@ -1923,7 +1930,12 @@ class PersistEventsStore:
 
         # Any relation information for the related event must be cleared.
         self.store._invalidate_cache_and_stream(
-            txn, self.store.get_relations_for_event, (redacted_relates_to,)
+            txn,
+            self.store.get_relations_for_event,
+            (
+                room_id,
+                redacted_relates_to,
+            ),
         )
         if rel_type == RelationTypes.REFERENCE:
             self.store._invalidate_cache_and_stream(
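The `DeltaState.is_noop()` hunk above lets `update_current_state` return before allocating a stream ordering or opening a transaction when the delta carries no changes. A minimal standalone version of the same check (field types simplified; the real class uses Synapse's `StateMap` and attrs-style definitions):

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

StateKey = Tuple[str, str]  # (event type, state key)


@dataclass
class DeltaState:
    # Simplified stand-ins for the real StateMap-typed fields.
    to_delete: Dict[StateKey, str] = field(default_factory=dict)
    to_insert: Dict[StateKey, str] = field(default_factory=dict)
    no_longer_in_room: bool = False

    def is_noop(self) -> bool:
        """Whether this state delta is actually empty."""
        return (
            not self.to_delete and not self.to_insert and not self.no_longer_in_room
        )


# An empty delta is a no-op; any populated field makes it real work.
assert DeltaState().is_noop()
assert not DeltaState(no_longer_in_room=True).is_noop()
```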
--- a/synapse/storage/databases/main/events_bg_updates.py
+++ b/synapse/storage/databases/main/events_bg_updates.py
@@ -1181,7 +1181,7 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
 
         results = list(txn)
         # (event_id, parent_id, rel_type) for each relation
-        relations_to_insert: List[Tuple[str, str, str]] = []
+        relations_to_insert: List[Tuple[str, str, str, str]] = []
         for event_id, event_json_raw in results:
             try:
                 event_json = db_to_json(event_json_raw)
@@ -1214,7 +1214,8 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
             if not isinstance(parent_id, str):
                 continue
 
-            relations_to_insert.append((event_id, parent_id, rel_type))
+            room_id = event_json["room_id"]
+            relations_to_insert.append((room_id, event_id, parent_id, rel_type))
 
         # Insert the missing data, note that we upsert here in case the event
         # has already been processed.
@@ -1223,18 +1224,27 @@ class EventsBackgroundUpdatesStore(SQLBaseStore):
             txn=txn,
             table="event_relations",
             key_names=("event_id",),
-            key_values=[(r[0],) for r in relations_to_insert],
+            key_values=[(r[1],) for r in relations_to_insert],
             value_names=("relates_to_id", "relation_type"),
-            value_values=[r[1:] for r in relations_to_insert],
+            value_values=[r[2:] for r in relations_to_insert],
         )
 
         # Iterate the parent IDs and invalidate caches.
-        cache_tuples = {(r[1],) for r in relations_to_insert}
         self._invalidate_cache_and_stream_bulk(  # type: ignore[attr-defined]
-            txn, self.get_relations_for_event, cache_tuples  # type: ignore[attr-defined]
+            txn,
+            self.get_relations_for_event,  # type: ignore[attr-defined]
+            {
+                (
+                    r[0],  # room_id
+                    r[2],  # parent_id
+                )
+                for r in relations_to_insert
+            },
         )
         self._invalidate_cache_and_stream_bulk(  # type: ignore[attr-defined]
-            txn, self.get_thread_summary, cache_tuples  # type: ignore[attr-defined]
+            txn,
+            self.get_thread_summary,  # type: ignore[attr-defined]
+            {(r[1],) for r in relations_to_insert},
         )
 
         if results:
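The background-update hunk above widens each relation tuple to `(room_id, event_id, parent_id, rel_type)` and derives the `get_relations_for_event` cache keys from positions 0 and 2. A toy illustration of that key extraction (sample room and event IDs are made up):

```python
# Each relation row now carries the room ID first; illustrative data only.
relations_to_insert = [
    ("!room:a", "$ev1", "$parent1", "m.thread"),
    ("!room:a", "$ev2", "$parent1", "m.annotation"),
]

# get_relations_for_event is invalidated per (room_id, parent_id) pair;
# building a set means duplicate pairs collapse into a single cache key,
# so two relations to the same parent cost one invalidation, not two.
relation_cache_keys = {(r[0], r[2]) for r in relations_to_insert}
print(relation_cache_keys)
# → {('!room:a', '$parent1')}
```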
--- a/synapse/storage/databases/main/events_worker.py
+++ b/synapse/storage/databases/main/events_worker.py
@@ -75,12 +75,10 @@ from synapse.storage.database import (
     LoggingDatabaseConnection,
     LoggingTransaction,
 )
-from synapse.storage.engines import PostgresEngine
 from synapse.storage.types import Cursor
 from synapse.storage.util.id_generators import (
     AbstractStreamIdGenerator,
     MultiWriterIdGenerator,
-    StreamIdGenerator,
 )
 from synapse.storage.util.sequence import build_sequence_generator
 from synapse.types import JsonDict, get_domain_from_id
@@ -195,51 +193,35 @@ class EventsWorkerStore(SQLBaseStore):
 
         self._stream_id_gen: AbstractStreamIdGenerator
         self._backfill_id_gen: AbstractStreamIdGenerator
-        if isinstance(database.engine, PostgresEngine):
-            # If we're using Postgres than we can use `MultiWriterIdGenerator`
-            # regardless of whether this process writes to the streams or not.
-            self._stream_id_gen = MultiWriterIdGenerator(
-                db_conn=db_conn,
-                db=database,
-                notifier=hs.get_replication_notifier(),
-                stream_name="events",
-                instance_name=hs.get_instance_name(),
-                tables=[("events", "instance_name", "stream_ordering")],
-                sequence_name="events_stream_seq",
-                writers=hs.config.worker.writers.events,
-            )
-            self._backfill_id_gen = MultiWriterIdGenerator(
-                db_conn=db_conn,
-                db=database,
-                notifier=hs.get_replication_notifier(),
-                stream_name="backfill",
-                instance_name=hs.get_instance_name(),
-                tables=[("events", "instance_name", "stream_ordering")],
-                sequence_name="events_backfill_stream_seq",
-                positive=False,
-                writers=hs.config.worker.writers.events,
-            )
-        else:
-            # Multiple writers are not supported for SQLite.
-            #
-            # We shouldn't be running in worker mode with SQLite, but its useful
-            # to support it for unit tests.
-            self._stream_id_gen = StreamIdGenerator(
-                db_conn,
-                hs.get_replication_notifier(),
-                "events",
-                "stream_ordering",
-                is_writer=hs.get_instance_name() in hs.config.worker.writers.events,
-            )
-            self._backfill_id_gen = StreamIdGenerator(
-                db_conn,
-                hs.get_replication_notifier(),
-                "events",
-                "stream_ordering",
-                step=-1,
-                extra_tables=[("ex_outlier_stream", "event_stream_ordering")],
-                is_writer=hs.get_instance_name() in hs.config.worker.writers.events,
-            )
+        self._stream_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="events",
+            instance_name=hs.get_instance_name(),
+            tables=[
+                ("events", "instance_name", "stream_ordering"),
+                ("current_state_delta_stream", "instance_name", "stream_id"),
+                ("ex_outlier_stream", "instance_name", "event_stream_ordering"),
+            ],
+            sequence_name="events_stream_seq",
+            writers=hs.config.worker.writers.events,
+        )
+        self._backfill_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="backfill",
+            instance_name=hs.get_instance_name(),
+            tables=[
+                ("events", "instance_name", "stream_ordering"),
+                ("ex_outlier_stream", "instance_name", "event_stream_ordering"),
+            ],
+            sequence_name="events_backfill_stream_seq",
+            positive=False,
+            writers=hs.config.worker.writers.events,
+        )
 
         events_max = self._stream_id_gen.get_current_token()
         curr_state_delta_prefill, min_curr_state_delta_id = self.db_pool.get_cache_dict(
@@ -309,27 +291,17 @@ class EventsWorkerStore(SQLBaseStore):
 
         self._un_partial_stated_events_stream_id_gen: AbstractStreamIdGenerator
 
-        if isinstance(database.engine, PostgresEngine):
-            self._un_partial_stated_events_stream_id_gen = MultiWriterIdGenerator(
-                db_conn=db_conn,
-                db=database,
-                notifier=hs.get_replication_notifier(),
-                stream_name="un_partial_stated_event_stream",
-                instance_name=hs.get_instance_name(),
-                tables=[
-                    ("un_partial_stated_event_stream", "instance_name", "stream_id")
-                ],
-                sequence_name="un_partial_stated_event_stream_sequence",
-                # TODO(faster_joins, multiple writers) Support multiple writers.
-                writers=["master"],
-            )
-        else:
-            self._un_partial_stated_events_stream_id_gen = StreamIdGenerator(
-                db_conn,
-                hs.get_replication_notifier(),
-                "un_partial_stated_event_stream",
-                "stream_id",
-            )
+        self._un_partial_stated_events_stream_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="un_partial_stated_event_stream",
+            instance_name=hs.get_instance_name(),
+            tables=[("un_partial_stated_event_stream", "instance_name", "stream_id")],
+            sequence_name="un_partial_stated_event_stream_sequence",
+            # TODO(faster_joins, multiple writers) Support multiple writers.
+            writers=["master"],
+        )
 
     def get_un_partial_stated_events_token(self, instance_name: str) -> int:
         return (
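The events_worker change above keeps two ID generators over the same `events` table: a forward one for newly persisted events and a backwards one (`positive=False`) for backfilled history, so backfilled rows sort before stream ordering 0. A toy model of the two counters:

```python
import itertools

# Forward stream: 1, 2, 3, ... for events persisted now.
forward = itertools.count(1)

# Backfill stream: -1, -2, -3, ... so older history always sorts
# before every "live" stream ordering.
backward = itertools.count(-1, -1)

live = [next(forward) for _ in range(3)]
backfilled = [next(backward) for _ in range(3)]

print(live, backfilled)
# → [1, 2, 3] [-1, -2, -3]
```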
--- a/synapse/storage/databases/main/presence.py
+++ b/synapse/storage/databases/main/presence.py
@@ -40,13 +40,11 @@ from synapse.storage.database import (
     LoggingTransaction,
 )
 from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
-from synapse.storage.engines import PostgresEngine
 from synapse.storage.engines._base import IsolationLevel
 from synapse.storage.types import Connection
 from synapse.storage.util.id_generators import (
     AbstractStreamIdGenerator,
     MultiWriterIdGenerator,
-    StreamIdGenerator,
 )
 from synapse.util.caches.descriptors import cached, cachedList
 from synapse.util.caches.stream_change_cache import StreamChangeCache
@@ -91,21 +89,16 @@ class PresenceStore(PresenceBackgroundUpdateStore, CacheInvalidationWorkerStore)
             self._instance_name in hs.config.worker.writers.presence
         )
 
-        if isinstance(database.engine, PostgresEngine):
-            self._presence_id_gen = MultiWriterIdGenerator(
-                db_conn=db_conn,
-                db=database,
-                notifier=hs.get_replication_notifier(),
-                stream_name="presence_stream",
-                instance_name=self._instance_name,
-                tables=[("presence_stream", "instance_name", "stream_id")],
-                sequence_name="presence_stream_sequence",
-                writers=hs.config.worker.writers.presence,
-            )
-        else:
-            self._presence_id_gen = StreamIdGenerator(
-                db_conn, hs.get_replication_notifier(), "presence_stream", "stream_id"
-            )
+        self._presence_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="presence_stream",
+            instance_name=self._instance_name,
+            tables=[("presence_stream", "instance_name", "stream_id")],
+            sequence_name="presence_stream_sequence",
+            writers=hs.config.worker.writers.presence,
+        )
 
         self.hs = hs
         self._presence_on_startup = self._get_active_presence(db_conn)
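The presence hunk above, like the others in this commit, deletes the `if isinstance(database.engine, PostgresEngine)` branch because `MultiWriterIdGenerator` now works on SQLite as well. A toy model of the multi-writer idea these hunks converge on: each writer records its position per `instance_name`, and the safely readable token is bounded by the slowest writer. The class and method names below are illustrative, not Synapse's actual API:

```python
class MultiWriterIds:
    """Toy multi-writer ID allocator; real IDs come from a DB sequence."""

    def __init__(self, writers):
        # Each writer starts having persisted nothing.
        self.positions = {w: 0 for w in writers}
        self.next_id = 1

    def get_next(self, instance):
        # Hand out a globally unique, monotonically increasing ID and
        # record it as this instance's latest persisted position.
        stream_id = self.next_id
        self.next_id += 1
        self.positions[instance] = stream_id
        return stream_id

    def current_token(self):
        # Readers may only trust IDs up to the slowest writer's position,
        # since a lagging writer could still persist a smaller ID.
        return min(self.positions.values())


gen = MultiWriterIds(["master", "worker1"])
gen.get_next("master")
gen.get_next("worker1")
print(gen.current_token())
# → 1
```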
--- a/synapse/storage/databases/main/push_rule.py
+++ b/synapse/storage/databases/main/push_rule.py
@@ -53,7 +53,7 @@ from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
 from synapse.storage.databases.main.roommember import RoomMemberWorkerStore
 from synapse.storage.engines import PostgresEngine, Sqlite3Engine
 from synapse.storage.push_rule import InconsistentRuleException, RuleNotFoundException
-from synapse.storage.util.id_generators import IdGenerator, StreamIdGenerator
+from synapse.storage.util.id_generators import IdGenerator, MultiWriterIdGenerator
 from synapse.synapse_rust.push import FilteredPushRules, PushRule, PushRules
 from synapse.types import JsonDict
 from synapse.util import json_encoder, unwrapFirstError
@@ -126,7 +126,7 @@ class PushRulesWorkerStore(
     `get_max_push_rules_stream_id` which can be called in the initializer.
     """
 
-    _push_rules_stream_id_gen: StreamIdGenerator
+    _push_rules_stream_id_gen: MultiWriterIdGenerator
 
     def __init__(
         self,
@@ -140,14 +140,17 @@ class PushRulesWorkerStore(
             hs.get_instance_name() in hs.config.worker.writers.push_rules
         )
 
-        # In the worker store this is an ID tracker which we overwrite in the non-worker
-        # class below that is used on the main process.
-        self._push_rules_stream_id_gen = StreamIdGenerator(
-            db_conn,
-            hs.get_replication_notifier(),
-            "push_rules_stream",
-            "stream_id",
-            is_writer=self._is_push_writer,
+        self._push_rules_stream_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="push_rules_stream",
+            instance_name=self._instance_name,
+            tables=[
+                ("push_rules_stream", "instance_name", "stream_id"),
+            ],
+            sequence_name="push_rules_stream_sequence",
+            writers=hs.config.worker.writers.push_rules,
         )
 
         push_rules_prefill, push_rules_id = self.db_pool.get_cache_dict(
@@ -880,6 +883,7 @@ class PushRulesWorkerStore(
             raise Exception("Not a push writer")
 
         values = {
+            "instance_name": self._instance_name,
             "stream_id": stream_id,
             "event_stream_ordering": event_stream_ordering,
             "user_id": user_id,
@ -40,10 +40,7 @@ from synapse.storage.database import (
|
||||||
LoggingDatabaseConnection,
|
LoggingDatabaseConnection,
|
||||||
LoggingTransaction,
|
LoggingTransaction,
|
||||||
)
|
)
|
||||||
from synapse.storage.util.id_generators import (
|
from synapse.storage.util.id_generators import MultiWriterIdGenerator
|
||||||
AbstractStreamIdGenerator,
|
|
||||||
StreamIdGenerator,
|
|
||||||
)
|
|
||||||
from synapse.types import JsonDict
|
from synapse.types import JsonDict
|
||||||
from synapse.util import json_encoder
|
from synapse.util import json_encoder
|
||||||
from synapse.util.caches.descriptors import cached
|
from synapse.util.caches.descriptors import cached
|
||||||
|
@ -84,15 +81,20 @@ class PusherWorkerStore(SQLBaseStore):
|
||||||
):
|
):
|
||||||
super().__init__(database, db_conn, hs)
|
super().__init__(database, db_conn, hs)
|
||||||
|
|
||||||
# In the worker store this is an ID tracker which we overwrite in the non-worker
|
self._instance_name = hs.get_instance_name()
|
||||||
 # class below that is used on the main process.
-        self._pushers_id_gen = StreamIdGenerator(
-            db_conn,
-            hs.get_replication_notifier(),
-            "pushers",
-            "id",
-            extra_tables=[("deleted_pushers", "stream_id")],
-            is_writer=hs.config.worker.worker_app is None,
+        self._pushers_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="pushers",
+            instance_name=self._instance_name,
+            tables=[
+                ("pushers", "instance_name", "id"),
+                ("deleted_pushers", "instance_name", "stream_id"),
+            ],
+            sequence_name="pushers_sequence",
+            writers=["master"],
         )

         self.db_pool.updates.register_background_update_handler(
@@ -655,7 +657,7 @@ class PusherBackgroundUpdatesStore(SQLBaseStore):
 class PusherStore(PusherWorkerStore, PusherBackgroundUpdatesStore):
     # Because we have write access, this will be a StreamIdGenerator
     # (see PusherWorkerStore.__init__)
-    _pushers_id_gen: AbstractStreamIdGenerator
+    _pushers_id_gen: MultiWriterIdGenerator

     async def add_pusher(
         self,
@@ -688,6 +690,7 @@ class PusherStore(PusherWorkerStore, PusherBackgroundUpdatesStore):
                 "last_stream_ordering": last_stream_ordering,
                 "profile_tag": profile_tag,
                 "id": stream_id,
+                "instance_name": self._instance_name,
                 "enabled": enabled,
                 "device_id": device_id,
                 # XXX(quenting): We're only really persisting the access token ID
@@ -735,6 +738,7 @@ class PusherStore(PusherWorkerStore, PusherBackgroundUpdatesStore):
             table="deleted_pushers",
             values={
                 "stream_id": stream_id,
+                "instance_name": self._instance_name,
                 "app_id": app_id,
                 "pushkey": pushkey,
                 "user_id": user_id,
@@ -773,9 +777,15 @@ class PusherStore(PusherWorkerStore, PusherBackgroundUpdatesStore):
         self.db_pool.simple_insert_many_txn(
             txn,
             table="deleted_pushers",
-            keys=("stream_id", "app_id", "pushkey", "user_id"),
+            keys=("stream_id", "instance_name", "app_id", "pushkey", "user_id"),
             values=[
-                (stream_id, pusher.app_id, pusher.pushkey, user_id)
+                (
+                    stream_id,
+                    self._instance_name,
+                    pusher.app_id,
+                    pusher.pushkey,
+                    user_id,
+                )
                 for stream_id, pusher in zip(stream_ids, pushers)
             ],
         )
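The switch above relies on each stream row carrying the writer's instance name next to its stream ID, so that consumers can track one position per writer. A toy sketch of that idea (hypothetical class, not Synapse's `MultiWriterIdGenerator`):

```python
import itertools
from typing import Dict, List, Tuple


class ToyMultiWriterIdGen:
    """Toy model: several writers draw IDs from one shared sequence and
    tag each allocation with their instance name, mirroring the
    (table, instance_name, id) triples passed to the real generator."""

    def __init__(self, start: int = 1) -> None:
        self._seq = itertools.count(start)
        self.rows: List[Tuple[str, int]] = []  # (instance_name, stream_id)

    def get_next(self, instance_name: str) -> int:
        stream_id = next(self._seq)
        self.rows.append((instance_name, stream_id))
        return stream_id

    def max_position_per_instance(self) -> Dict[str, int]:
        # What replication consumers need: the highest ID seen per writer.
        positions: Dict[str, int] = {}
        for name, stream_id in self.rows:
            positions[name] = max(positions.get(name, 0), stream_id)
        return positions


gen = ToyMultiWriterIdGen()
gen.get_next("master")   # 1
gen.get_next("worker1")  # 2
gen.get_next("master")   # 3
print(gen.max_position_per_instance())  # {'master': 3, 'worker1': 2}
```

With `writers=["master"]` only one instance allocates for now, but the per-row `instance_name` column already supports adding writers later.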
@@ -44,12 +44,10 @@ from synapse.storage.database import (
     LoggingDatabaseConnection,
     LoggingTransaction,
 )
-from synapse.storage.engines import PostgresEngine
 from synapse.storage.engines._base import IsolationLevel
 from synapse.storage.util.id_generators import (
     AbstractStreamIdGenerator,
     MultiWriterIdGenerator,
-    StreamIdGenerator,
 )
 from synapse.types import (
     JsonDict,
@@ -80,35 +78,20 @@ class ReceiptsWorkerStore(SQLBaseStore):
         # class below that is used on the main process.
         self._receipts_id_gen: AbstractStreamIdGenerator

-        if isinstance(database.engine, PostgresEngine):
-            self._can_write_to_receipts = (
-                self._instance_name in hs.config.worker.writers.receipts
-            )
+        self._can_write_to_receipts = (
+            self._instance_name in hs.config.worker.writers.receipts
+        )

-            self._receipts_id_gen = MultiWriterIdGenerator(
-                db_conn=db_conn,
-                db=database,
-                notifier=hs.get_replication_notifier(),
-                stream_name="receipts",
-                instance_name=self._instance_name,
-                tables=[("receipts_linearized", "instance_name", "stream_id")],
-                sequence_name="receipts_sequence",
-                writers=hs.config.worker.writers.receipts,
-            )
-        else:
-            self._can_write_to_receipts = True
-
-            # Multiple writers are not supported for SQLite.
-            #
-            # We shouldn't be running in worker mode with SQLite, but its useful
-            # to support it for unit tests.
-            self._receipts_id_gen = StreamIdGenerator(
-                db_conn,
-                hs.get_replication_notifier(),
-                "receipts_linearized",
-                "stream_id",
-                is_writer=hs.get_instance_name() in hs.config.worker.writers.receipts,
-            )
+        self._receipts_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="receipts",
+            instance_name=self._instance_name,
+            tables=[("receipts_linearized", "instance_name", "stream_id")],
+            sequence_name="receipts_sequence",
+            writers=hs.config.worker.writers.receipts,
+        )

         super().__init__(database, db_conn, hs)
@@ -169,9 +169,9 @@ class RelationsWorkerStore(SQLBaseStore):
     @cached(uncached_args=("event",), tree=True)
     async def get_relations_for_event(
         self,
+        room_id: str,
         event_id: str,
         event: EventBase,
-        room_id: str,
         relation_type: Optional[str] = None,
         event_type: Optional[str] = None,
         limit: int = 5,
@@ -58,13 +58,11 @@ from synapse.storage.database import (
     LoggingTransaction,
 )
 from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
-from synapse.storage.engines import PostgresEngine
 from synapse.storage.types import Cursor
 from synapse.storage.util.id_generators import (
     AbstractStreamIdGenerator,
     IdGenerator,
     MultiWriterIdGenerator,
-    StreamIdGenerator,
 )
 from synapse.types import JsonDict, RetentionPolicy, StrCollection, ThirdPartyInstanceID
 from synapse.util import json_encoder
@@ -155,27 +153,17 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):

         self._un_partial_stated_rooms_stream_id_gen: AbstractStreamIdGenerator

-        if isinstance(database.engine, PostgresEngine):
-            self._un_partial_stated_rooms_stream_id_gen = MultiWriterIdGenerator(
-                db_conn=db_conn,
-                db=database,
-                notifier=hs.get_replication_notifier(),
-                stream_name="un_partial_stated_room_stream",
-                instance_name=self._instance_name,
-                tables=[
-                    ("un_partial_stated_room_stream", "instance_name", "stream_id")
-                ],
-                sequence_name="un_partial_stated_room_stream_sequence",
-                # TODO(faster_joins, multiple writers) Support multiple writers.
-                writers=["master"],
-            )
-        else:
-            self._un_partial_stated_rooms_stream_id_gen = StreamIdGenerator(
-                db_conn,
-                hs.get_replication_notifier(),
-                "un_partial_stated_room_stream",
-                "stream_id",
-            )
+        self._un_partial_stated_rooms_stream_id_gen = MultiWriterIdGenerator(
+            db_conn=db_conn,
+            db=database,
+            notifier=hs.get_replication_notifier(),
+            stream_name="un_partial_stated_room_stream",
+            instance_name=self._instance_name,
+            tables=[("un_partial_stated_room_stream", "instance_name", "stream_id")],
+            sequence_name="un_partial_stated_room_stream_sequence",
+            # TODO(faster_joins, multiple writers) Support multiple writers.
+            writers=["master"],
+        )

     def process_replication_position(
         self, stream_name: str, instance_name: str, token: int
@@ -2219,6 +2207,7 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
         super().__init__(database, db_conn, hs)

         self._event_reports_id_gen = IdGenerator(db_conn, "event_reports", "id")
+        self._room_reports_id_gen = IdGenerator(db_conn, "room_reports", "id")

         self._instance_name = hs.get_instance_name()

@@ -2428,6 +2417,37 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
         )
         return next_id

+    async def add_room_report(
+        self,
+        room_id: str,
+        user_id: str,
+        reason: str,
+        received_ts: int,
+    ) -> int:
+        """Add a room report
+
+        Args:
+            room_id: The room ID being reported.
+            user_id: User who reports the room.
+            reason: Description that the user specifies.
+            received_ts: Time when the user submitted the report (milliseconds).
+
+        Returns:
+            Id of the room report.
+        """
+        next_id = self._room_reports_id_gen.get_next()
+        await self.db_pool.simple_insert(
+            table="room_reports",
+            values={
+                "id": next_id,
+                "received_ts": received_ts,
+                "room_id": room_id,
+                "user_id": user_id,
+                "reason": reason,
+            },
+            desc="add_room_report",
+        )
+        return next_id
+
     async def block_room(self, room_id: str, user_id: str) -> None:
         """Marks the room as blocked.
@@ -476,7 +476,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
         )

         sql = """
-            SELECT room_id, e.sender, c.membership, event_id, e.stream_ordering, r.room_version
+            SELECT room_id, e.sender, c.membership, event_id, e.instance_name, e.stream_ordering, r.room_version
             FROM local_current_membership AS c
             INNER JOIN events AS e USING (room_id, event_id)
             INNER JOIN rooms AS r USING (room_id)
@@ -488,7 +488,17 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
         )

         txn.execute(sql, (user_id, *args))
-        results = [RoomsForUser(*r) for r in txn]
+        results = [
+            RoomsForUser(
+                room_id=room_id,
+                sender=sender,
+                membership=membership,
+                event_id=event_id,
+                event_pos=PersistedEventPosition(instance_name, stream_ordering),
+                room_version_id=room_version,
+            )
+            for room_id, sender, membership, event_id, instance_name, stream_ordering, room_version in txn
+        ]

         return results
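Switching from `RoomsForUser(*r)` to explicit keyword construction makes the row-to-field mapping survive column reordering in the SELECT. A small standalone illustration of the pattern (toy types standing in for Synapse's):

```python
from typing import List, NamedTuple, Tuple


class ToyRoomsForUser(NamedTuple):
    room_id: str
    sender: str
    membership: str
    event_id: str
    instance_name: str
    stream_ordering: int
    room_version_id: str


# Rows in the shape the widened SELECT produces (now including instance_name).
rows: List[Tuple[str, str, str, str, str, int, str]] = [
    ("!a:x", "@u:x", "join", "$e1", "master", 10, "10"),
]

# Keyword construction: if the SELECT gains or reorders a column, the
# unpacking below fails loudly instead of silently shifting field values.
results = [
    ToyRoomsForUser(
        room_id=room_id,
        sender=sender,
        membership=membership,
        event_id=event_id,
        instance_name=instance_name,
        stream_ordering=stream_ordering,
        room_version_id=room_version,
    )
    for room_id, sender, membership, event_id, instance_name, stream_ordering, room_version in rows
]
print(results[0].stream_ordering)  # 10
```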
|
@ -1281,7 +1281,7 @@ def _parse_words_with_regex(search_term: str) -> List[str]:
|
||||||
Break down search term into words, when we don't have ICU available.
|
Break down search term into words, when we don't have ICU available.
|
||||||
See: `_parse_words`
|
See: `_parse_words`
|
||||||
"""
|
"""
|
||||||
return re.findall(r"([\w\-]+)", search_term, re.UNICODE)
|
return re.findall(r"([\w-]+)", search_term, re.UNICODE)
|
||||||
|
|
||||||
|
|
||||||
def _parse_words_with_icu(search_term: str) -> List[str]:
|
def _parse_words_with_icu(search_term: str) -> List[str]:
|
||||||
|
@ -1303,15 +1303,69 @@ def _parse_words_with_icu(search_term: str) -> List[str]:
|
||||||
if j < 0:
|
if j < 0:
|
||||||
break
|
break
|
||||||
|
|
||||||
result = search_term[i:j]
|
# We want to make sure that we split on `@` and `:` specifically, as
|
||||||
|
# they occur in user IDs.
|
||||||
|
for result in re.split(r"[@:]+", search_term[i:j]):
|
||||||
|
results.append(result.strip())
|
||||||
|
|
||||||
|
i = j
|
||||||
|
|
||||||
|
# libicu will break up words that have punctuation in them, but to handle
|
||||||
|
# cases where user IDs have '-', '.' and '_' in them we want to *not* break
|
||||||
|
# those into words and instead allow the DB to tokenise them how it wants.
|
||||||
|
#
|
||||||
|
# In particular, user-71 in postgres gets tokenised to "user, -71", and this
|
||||||
|
# will not match a query for "user, 71".
|
||||||
|
new_results: List[str] = []
|
||||||
|
i = 0
|
||||||
|
while i < len(results):
|
||||||
|
curr = results[i]
|
||||||
|
|
||||||
|
prev = None
|
||||||
|
next = None
|
||||||
|
if i > 0:
|
||||||
|
prev = results[i - 1]
|
||||||
|
if i + 1 < len(results):
|
||||||
|
next = results[i + 1]
|
||||||
|
|
||||||
|
i += 1
|
||||||
|
|
||||||
# libicu considers spaces and punctuation between words as words, but we don't
|
# libicu considers spaces and punctuation between words as words, but we don't
|
||||||
# want to include those in results as they would result in syntax errors in SQL
|
# want to include those in results as they would result in syntax errors in SQL
|
||||||
# queries (e.g. "foo bar" would result in the search query including "foo & &
|
# queries (e.g. "foo bar" would result in the search query including "foo & &
|
||||||
# bar").
|
# bar").
|
||||||
if len(re.findall(r"([\w\-]+)", result, re.UNICODE)):
|
if not curr:
|
||||||
results.append(result)
|
continue
|
||||||
|
|
||||||
i = j
|
if curr in ["-", ".", "_"]:
|
||||||
|
prefix = ""
|
||||||
|
suffix = ""
|
||||||
|
|
||||||
return results
|
# Check if the next item is a word, and if so use it as the suffix.
|
||||||
|
# We check for if its a word as we don't want to concatenate
|
||||||
|
# multiple punctuation marks.
|
||||||
|
if next is not None and re.match(r"\w", next):
|
||||||
|
suffix = next
|
||||||
|
i += 1 # We're using next, so we skip it in the outer loop.
|
||||||
|
else:
|
||||||
|
# We want to avoid creating terms like "user-", as we should
|
||||||
|
# strip trailing punctuation.
|
||||||
|
continue
|
||||||
|
|
||||||
|
if prev and re.match(r"\w", prev) and new_results:
|
||||||
|
prefix = new_results[-1]
|
||||||
|
new_results.pop()
|
||||||
|
|
||||||
|
# We might not have a prefix here, but that's fine as we want to
|
||||||
|
# ensure that we don't strip preceding punctuation e.g. '-71'
|
||||||
|
# shouldn't be converted to '71'.
|
||||||
|
|
||||||
|
new_results.append(f"{prefix}{curr}{suffix}")
|
||||||
|
continue
|
||||||
|
elif not re.match(r"\w", curr):
|
||||||
|
# Ignore other punctuation
|
||||||
|
continue
|
||||||
|
|
||||||
|
new_results.append(curr)
|
||||||
|
|
||||||
|
return new_results
|
||||||
|
|
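The second pass above can be exercised in isolation. Here is a simplified standalone sketch of the punctuation-merging step (not the exact Synapse function) that rejoins `-`, `.` and `_` with neighbouring word tokens, so that a tokeniser which split `user-71` into `["user", "-", "71"]` yields `["user-71"]`:

```python
import re
from typing import List


def merge_punctuation(tokens: List[str]) -> List[str]:
    """Rejoin '-', '.', '_' with adjacent word tokens; drop other
    punctuation-only tokens (spaces, '@', etc.)."""
    out: List[str] = []
    i = 0
    while i < len(tokens):
        curr = tokens[i]
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        i += 1
        if not curr:
            continue
        if curr in ("-", ".", "_"):
            # Only merge when a word token follows; "user-" loses its dash.
            if nxt is None or not re.match(r"\w", nxt):
                continue
            i += 1  # consume the suffix token here
            # Pull back the previous word (if any) so "-71" stays "-71"
            # while "user", "-", "71" becomes "user-71".
            prefix = out.pop() if out and re.match(r"\w", out[-1]) else ""
            out.append(f"{prefix}{curr}{nxt}")
            continue
        if not re.match(r"\w", curr):
            continue  # ignore other punctuation tokens
        out.append(curr)
    return out


print(merge_punctuation(["user", "-", "71"]))  # ['user-71']
print(merge_punctuation(["-", "71"]))          # ['-71']
print(merge_punctuation(["user", "-"]))        # ['user']
```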
|
@ -142,6 +142,10 @@ class PostgresEngine(
|
||||||
apply stricter checks on new databases versus existing database.
|
apply stricter checks on new databases versus existing database.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
allow_unsafe_locale = self.config.get("allow_unsafe_locale", False)
|
||||||
|
if allow_unsafe_locale:
|
||||||
|
return
|
||||||
|
|
||||||
collation, ctype = self.get_db_locale(txn)
|
collation, ctype = self.get_db_locale(txn)
|
||||||
|
|
||||||
errors = []
|
errors = []
|
||||||
|
@ -155,7 +159,9 @@ class PostgresEngine(
|
||||||
if errors:
|
if errors:
|
||||||
raise IncorrectDatabaseSetup(
|
raise IncorrectDatabaseSetup(
|
||||||
"Database is incorrectly configured:\n\n%s\n\n"
|
"Database is incorrectly configured:\n\n%s\n\n"
|
||||||
"See docs/postgres.md for more information." % ("\n".join(errors))
|
"See docs/postgres.md for more information. You can override this check by"
|
||||||
|
"setting 'allow_unsafe_locale' to true in the database config.",
|
||||||
|
"\n".join(errors),
|
||||||
)
|
)
|
||||||
|
|
||||||
def convert_param_style(self, sql: str) -> str:
|
def convert_param_style(self, sql: str) -> str:
|
||||||
|
|
|
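The early-return guard added above follows a common pattern: a config flag that lets operators skip an environment safety check. A minimal standalone sketch of the flow (hypothetical names, not Synapse's actual `PostgresEngine` method):

```python
from typing import List, Mapping


class IncorrectDatabaseSetup(Exception):
    pass


def check_locale(config: Mapping[str, object], collation: str, ctype: str) -> None:
    """Raise unless the database locale is 'C', unless the operator has
    explicitly opted out via `allow_unsafe_locale`."""
    if config.get("allow_unsafe_locale", False):
        return  # operator accepted the risk; skip the check entirely

    errors: List[str] = []
    if collation != "C":
        errors.append(f"COLLATE is set to {collation!r} instead of 'C'")
    if ctype != "C":
        errors.append(f"CTYPE is set to {ctype!r} instead of 'C'")
    if errors:
        raise IncorrectDatabaseSetup(
            "Database is incorrectly configured:\n\n%s" % "\n".join(errors)
        )


check_locale({"allow_unsafe_locale": True}, "en_US.UTF-8", "en_US.UTF-8")  # no error
check_locale({}, "C", "C")  # no error
```

Gating on the flag before running the queries (rather than swallowing the exception afterwards) keeps the happy path cheap and the failure message intact for everyone who has not opted out.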
@@ -35,7 +35,7 @@ class RoomsForUser:
     sender: str
     membership: str
     event_id: str
-    stream_ordering: int
+    event_pos: PersistedEventPosition
     room_version_id: str
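Replacing the bare `stream_ordering: int` with an `event_pos` that also carries the writer's instance name lets a position be compared against a multi-writer stream token, which tracks one position per writer. A rough sketch of that shape (assumed fields, inferred from this hunk only, not the real `PersistedEventPosition`):

```python
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)
class ToyEventPosition:
    instance_name: str
    stream: int

    def persisted_after(self, token: Dict[str, int]) -> bool:
        # A multi-writer token maps each writer to its last-seen position;
        # this event is "new" if it is past the token's position for *its*
        # writer, regardless of where other writers are.
        return self.stream > token.get(self.instance_name, 0)


pos = ToyEventPosition("worker1", 42)
print(pos.persisted_after({"master": 100, "worker1": 40}))  # True
print(pos.persisted_after({"master": 100, "worker1": 42}))  # False
```

A single global `int` cannot express this, since two writers may allocate interleaved IDs from the shared sequence.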
|
@ -0,0 +1,27 @@
|
||||||
|
--
|
||||||
|
-- This file is licensed under the Affero General Public License (AGPL) version 3.
|
||||||
|
--
|
||||||
|
-- Copyright (C) 2024 New Vector, Ltd
|
||||||
|
--
|
||||||
|
-- This program is free software: you can redistribute it and/or modify
|
||||||
|
-- it under the terms of the GNU Affero General Public License as
|
||||||
|
-- published by the Free Software Foundation, either version 3 of the
|
||||||
|
-- License, or (at your option) any later version.
|
||||||
|
--
|
||||||
|
-- See the GNU Affero General Public License for more details:
|
||||||
|
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
|
||||||
|
|
||||||
|
-- Add `instance_name` columns to stream tables to allow them to be used with
|
||||||
|
-- `MultiWriterIdGenerator`
|
||||||
|
ALTER TABLE device_lists_stream ADD COLUMN instance_name TEXT;
|
||||||
|
ALTER TABLE user_signature_stream ADD COLUMN instance_name TEXT;
|
||||||
|
ALTER TABLE device_lists_outbound_pokes ADD COLUMN instance_name TEXT;
|
||||||
|
ALTER TABLE device_lists_changes_in_room ADD COLUMN instance_name TEXT;
|
||||||
|
ALTER TABLE device_lists_remote_pending ADD COLUMN instance_name TEXT;
|
||||||
|
|
||||||
|
ALTER TABLE e2e_cross_signing_keys ADD COLUMN instance_name TEXT;
|
||||||
|
|
||||||
|
ALTER TABLE push_rules_stream ADD COLUMN instance_name TEXT;
|
||||||
|
|
||||||
|
ALTER TABLE pushers ADD COLUMN instance_name TEXT;
|
||||||
|
ALTER TABLE deleted_pushers ADD COLUMN instance_name TEXT;
|
|
@@ -0,0 +1,54 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+-- Add sequences for stream tables to allow them to be used with
+-- `MultiWriterIdGenerator`
+CREATE SEQUENCE IF NOT EXISTS device_lists_sequence;
+
+-- We need to take the max across all the device lists tables as they share the
+-- ID generator
+SELECT setval('device_lists_sequence', (
+    SELECT GREATEST(
+        (SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_stream),
+        (SELECT COALESCE(MAX(stream_id), 1) FROM user_signature_stream),
+        (SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_outbound_pokes),
+        (SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_changes_in_room),
+        (SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_remote_pending),
+        (SELECT COALESCE(MAX(stream_id), 1) FROM device_lists_changes_converted_stream_position)
+    )
+));
+
+CREATE SEQUENCE IF NOT EXISTS e2e_cross_signing_keys_sequence;
+
+SELECT setval('e2e_cross_signing_keys_sequence', (
+    SELECT COALESCE(MAX(stream_id), 1) FROM e2e_cross_signing_keys
+));
+
+
+CREATE SEQUENCE IF NOT EXISTS push_rules_stream_sequence;
+
+SELECT setval('push_rules_stream_sequence', (
+    SELECT COALESCE(MAX(stream_id), 1) FROM push_rules_stream
+));
+
+
+CREATE SEQUENCE IF NOT EXISTS pushers_sequence;
+
+-- We need to take the max across all the pusher tables as they share the
+-- ID generator
+SELECT setval('pushers_sequence', (
+    SELECT GREATEST(
+        (SELECT COALESCE(MAX(id), 1) FROM pushers),
+        (SELECT COALESCE(MAX(stream_id), 1) FROM deleted_pushers)
+    )
+));
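Seeding each sequence with the `GREATEST` of the per-table maxima matters because tables like `pushers` and `deleted_pushers` share a single ID space: starting the sequence below any existing ID would hand out duplicates. The same seeding arithmetic, sketched in Python against SQLite purely for illustration (the migration itself runs the Postgres SQL above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE pushers (id INTEGER);
    CREATE TABLE deleted_pushers (stream_id INTEGER);
    INSERT INTO pushers (id) VALUES (7), (12);
    INSERT INTO deleted_pushers (stream_id) VALUES (30);
    """
)

# Mirrors: SELECT GREATEST(COALESCE(MAX(id), 1), COALESCE(MAX(stream_id), 1))
# (SQLite's multi-argument MAX() plays the role of Postgres's GREATEST.)
row = conn.execute(
    """
    SELECT MAX(
        COALESCE((SELECT MAX(id) FROM pushers), 1),
        COALESCE((SELECT MAX(stream_id) FROM deleted_pushers), 1)
    )
    """
).fetchone()
next_seq_start = row[0]
print(next_seq_start)  # 30 -- the sequence must start past every existing ID
```

The `COALESCE(..., 1)` keeps the seed well-defined on a fresh database where both tables are empty.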
@@ -0,0 +1,15 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
+    (8504, 'cleanup_device_federation_outbox', '{}');
@@ -0,0 +1,16 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+-- Add `instance_name` columns to stream tables to allow them to be used with
+-- `MultiWriterIdGenerator`
+ALTER TABLE device_lists_changes_converted_stream_position ADD COLUMN instance_name TEXT;
synapse/storage/schema/main/delta/85/06_add_room_reports.sql (new file, 20 lines)
@@ -0,0 +1,20 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2024 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+CREATE TABLE room_reports (
+    id BIGINT NOT NULL PRIMARY KEY,
+    received_ts BIGINT NOT NULL,
+    room_id TEXT NOT NULL,
+    user_id TEXT NOT NULL,
+    reason TEXT NOT NULL
+);