mirror of https://github.com/element-hq/synapse
synced 2024-10-06 13:32:40 +00:00

commit 555ba4b891

Merge branch 'madlittlemods/msc3575-sliding-sync-0.0.1' into madlittlemods/msc3575-sliding-sync-filtering

Conflicts:
	tests/handlers/test_sliding_sync.py
	tests/rest/client/test_sync.py

43 changed files with 914 additions and 379 deletions
54	CHANGES.md

@@ -1,3 +1,57 @@
+# Synapse 1.109.0rc1 (2024-06-04)
+
+### Features
+
+- Add the ability to auto-accept invites on the behalf of users. See the [`auto_accept_invites`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#auto-accept-invites) config option for details. ([\#17147](https://github.com/element-hq/synapse/issues/17147))
+- Add experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync/e2ee` endpoint for to-device messages and device encryption info. ([\#17167](https://github.com/element-hq/synapse/issues/17167))
+- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/issues/3916) by adding unstable media endpoints to `/_matrix/client`. ([\#17213](https://github.com/element-hq/synapse/issues/17213))
+- Add logging to tasks managed by the task scheduler, showing CPU and database usage. ([\#17219](https://github.com/element-hq/synapse/issues/17219))
+
+### Bugfixes
+
+- Fix deduplicating of membership events to not create unused state groups. ([\#17164](https://github.com/element-hq/synapse/issues/17164))
+- Fix bug where duplicate events could be sent down sync when using workers that are overloaded. ([\#17215](https://github.com/element-hq/synapse/issues/17215))
+- Ignore attempts to send to-device messages to bad users, to avoid log spam when we try to connect to the bad server. ([\#17240](https://github.com/element-hq/synapse/issues/17240))
+- Fix handling of duplicate concurrent uploading of device one-time-keys. ([\#17241](https://github.com/element-hq/synapse/issues/17241))
+- Fix reporting of default tags to Sentry, such as worker name. Broke in v1.108.0. ([\#17251](https://github.com/element-hq/synapse/issues/17251))
+- Fix bug where typing updates would not be sent when using workers after a restart. ([\#17252](https://github.com/element-hq/synapse/issues/17252))
+
+### Improved Documentation
+
+- Update the LemonLDAP documentation to say that claims should be explicitly included in the returned `id_token`, as Synapse won't request them. ([\#17204](https://github.com/element-hq/synapse/issues/17204))
+
+### Internal Changes
+
+- Improve DB usage when fetching related events. ([\#17083](https://github.com/element-hq/synapse/issues/17083))
+- Log exceptions when failing to auto-join new user according to the `auto_join_rooms` option. ([\#17176](https://github.com/element-hq/synapse/issues/17176))
+- Reduce work of calculating outbound device lists updates. ([\#17211](https://github.com/element-hq/synapse/issues/17211))
+- Improve performance of calculating device lists changes in `/sync`. ([\#17216](https://github.com/element-hq/synapse/issues/17216))
+- Move towards using `MultiWriterIdGenerator` everywhere. ([\#17226](https://github.com/element-hq/synapse/issues/17226))
+- Replaces all usages of `StreamIdGenerator` with `MultiWriterIdGenerator`. ([\#17229](https://github.com/element-hq/synapse/issues/17229))
+- Change the `allow_unsafe_locale` config option to also apply when setting up new databases. ([\#17238](https://github.com/element-hq/synapse/issues/17238))
+- Fix errors in logs about closing incorrect logging contexts when media gets rejected by a module. ([\#17239](https://github.com/element-hq/synapse/issues/17239), [\#17246](https://github.com/element-hq/synapse/issues/17246))
+- Clean out invalid destinations from `device_federation_outbox` table. ([\#17242](https://github.com/element-hq/synapse/issues/17242))
+- Stop logging errors when receiving invalid User IDs in key query requests. ([\#17250](https://github.com/element-hq/synapse/issues/17250))
+
+
+
+### Updates to locked dependencies
+
+* Bump anyhow from 1.0.83 to 1.0.86. ([\#17220](https://github.com/element-hq/synapse/issues/17220))
+* Bump bcrypt from 4.1.2 to 4.1.3. ([\#17224](https://github.com/element-hq/synapse/issues/17224))
+* Bump lxml from 5.2.1 to 5.2.2. ([\#17261](https://github.com/element-hq/synapse/issues/17261))
+* Bump mypy-zope from 1.0.3 to 1.0.4. ([\#17262](https://github.com/element-hq/synapse/issues/17262))
+* Bump phonenumbers from 8.13.35 to 8.13.37. ([\#17235](https://github.com/element-hq/synapse/issues/17235))
+* Bump prometheus-client from 0.19.0 to 0.20.0. ([\#17233](https://github.com/element-hq/synapse/issues/17233))
+* Bump pyasn1 from 0.5.1 to 0.6.0. ([\#17223](https://github.com/element-hq/synapse/issues/17223))
+* Bump pyicu from 2.13 to 2.13.1. ([\#17236](https://github.com/element-hq/synapse/issues/17236))
+* Bump pyopenssl from 24.0.0 to 24.1.0. ([\#17234](https://github.com/element-hq/synapse/issues/17234))
+* Bump serde from 1.0.201 to 1.0.202. ([\#17221](https://github.com/element-hq/synapse/issues/17221))
+* Bump serde from 1.0.202 to 1.0.203. ([\#17232](https://github.com/element-hq/synapse/issues/17232))
+* Bump twine from 5.0.0 to 5.1.0. ([\#17225](https://github.com/element-hq/synapse/issues/17225))
+* Bump types-psycopg2 from 2.9.21.20240311 to 2.9.21.20240417. ([\#17222](https://github.com/element-hq/synapse/issues/17222))
+* Bump types-pyopenssl from 24.0.0.20240311 to 24.1.0.20240425. ([\#17260](https://github.com/element-hq/synapse/issues/17260))
+
 # Synapse 1.108.0 (2024-05-28)
 
 No significant changes since 1.108.0rc1.
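Of the feature entries in the diff above, the experimental MSC3575 `/sync/e2ee` endpoint ([\#17167](https://github.com/element-hq/synapse/issues/17167)) is the one this merge branch builds on. Below is a minimal Python sketch of long-polling it; the unstable path prefix, the `/sync`-style `since`/`timeout` parameters, and the response fields are assumptions based on the MSC and the stable `/sync` API, not details confirmed by this diff.

```python
# Minimal sketch of polling the experimental endpoint. Assumptions (not
# confirmed by this diff): the unstable path prefix, /sync-style `since` and
# `timeout` query parameters, and the E2EE-related response fields.
from typing import Optional

import requests

HOMESERVER = "https://synapse.example.com"  # hypothetical homeserver
ACCESS_TOKEN = "syt_example_token"          # hypothetical access token


def poll_e2ee_sync(since: Optional[str] = None, timeout_ms: int = 30000) -> dict:
    """Long-poll the experimental MSC3575 /sync/e2ee endpoint once."""
    params: dict = {"timeout": timeout_ms}
    if since is not None:
        params["since"] = since
    resp = requests.get(
        f"{HOMESERVER}/_matrix/client/unstable/org.matrix.msc3575/sync/e2ee",
        params=params,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=timeout_ms / 1000 + 10,  # allow for the server-side long-poll
    )
    resp.raise_for_status()
    return resp.json()


body = poll_e2ee_sync()
print(body.get("next_batch"))                   # feed back as `since` next call
print(body.get("to_device", {}).get("events"))  # to-device messages
print(body.get("device_lists"))                 # device-list changes
```

As with `/sync`, the returned `next_batch` token would be fed back as `since` on the next call, so a client can loop on `poll_e2ee_sync` to receive only the encryption-related parts of sync.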
@@ -1 +0,0 @@
-Improve DB usage when fetching related events.

@@ -1 +0,0 @@
-Add the ability to auto-accept invites on the behalf of users. See the [`auto_accept_invites`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#auto-accept-invites) config option for details.

@@ -1 +0,0 @@
-Fix deduplicating of membership events to not create unused state groups.

@@ -1 +0,0 @@
-Add experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync/e2ee` endpoint for To-Device messages and device encryption info.

@@ -1 +0,0 @@
-Log exceptions when failing to auto-join new user according to the `auto_join_rooms` option.

@@ -1 +0,0 @@
-Update OIDC documentation: by default Matrix doesn't query userinfo endpoint, then claims should be put on id_token.

@@ -1 +0,0 @@
-Reduce work of calculating outbound device lists updates.

@@ -1 +0,0 @@
-Support MSC3916 by adding unstable media endpoints to `_matrix/client` (#17213).

@@ -1 +0,0 @@
-Fix bug where duplicate events could be sent down sync when using workers that are overloaded.

@@ -1 +0,0 @@
-Improve performance of calculating device lists changes in `/sync`.

@@ -1 +0,0 @@
-Add logging to tasks managed by the task scheduler, showing CPU and database usage.

@@ -1 +0,0 @@
-Move towards using `MultiWriterIdGenerator` everywhere.

@@ -1 +0,0 @@
-Replaces all usages of `StreamIdGenerator` with `MultiWriterIdGenerator`.

@@ -1 +0,0 @@
-Change the `allow_unsafe_locale` config option to also apply when setting up new databases.

@@ -1 +0,0 @@
-Fix errors in logs about closing incorrect logging contexts when media gets rejected by a module.

@@ -1 +0,0 @@
-Ignore attempts to send to-device messages to bad users, to avoid log spam when we try to connect to the bad server.

@@ -1 +0,0 @@
-Fix handling of duplicate concurrent uploading of device one-time-keys.

@@ -1 +0,0 @@
-Clean out invalid destinations from `device_federation_outbox` table.

@@ -1 +0,0 @@
-Fix errors in logs about closing incorrect logging contexts when media gets rejected by a module.
1	changelog.d/17265.misc	Normal file

@@ -0,0 +1 @@
+Use fully-qualified `PersistedEventPosition` when returning `RoomsForUser` to facilitate proper comparisons and `RoomStreamToken` generation.
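As a toy illustration of why a fully-qualified position matters — this is an assumed simplification, not Synapse's actual `PersistedEventPosition`, `RoomsForUser`, or `RoomStreamToken` implementation — a position that records which writer persisted an event can be compared correctly against a multi-writer stream token, whereas a bare stream ordering is ambiguous when several event persisters advance the stream concurrently:

```python
# Toy illustration only -- an assumed simplification, NOT Synapse's real
# PersistedEventPosition / RoomStreamToken implementation.
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)
class PersistedEventPosition:
    """A fully-qualified event position: the writer that persisted the event
    plus its stream ordering on that writer."""
    instance_name: str
    stream: int


@dataclass(frozen=True)
class RoomStreamToken:
    """A multi-writer stream token: the highest stream ordering seen from
    each event-persister instance."""
    instance_map: Dict[str, int]

    def covers(self, pos: PersistedEventPosition) -> bool:
        # A position is "before or at" this token only relative to the
        # writer that actually persisted it; comparing a bare integer
        # against a single global counter would get this wrong.
        return pos.stream <= self.instance_map.get(pos.instance_name, 0)


token = RoomStreamToken(instance_map={"persister1": 100, "persister2": 90})

print(token.covers(PersistedEventPosition("persister1", 95)))  # True
print(token.covers(PersistedEventPosition("persister2", 95)))  # False:
# persister2 has only reached stream ordering 90, so 95 is not yet covered,
# even though persister1 has already passed 95.
```

Under that reading, callers receiving `RoomsForUser` can generate and compare tokens without guessing which writer a bare stream ordering came from.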
6	debian/changelog	vendored

@@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.109.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.109.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 04 Jun 2024 09:42:46 +0100
+
 matrix-synapse-py3 (1.108.0) stable; urgency=medium
 
   * New Synapse release 1.108.0.
337	poetry.lock	generated

@@ -1005,165 +1005,153 @@ pyasn1 = ">=0.4.6"

 [[package]]
 name = "lxml"
-version = "5.2.1"
+version = "5.2.2"
 description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
 optional = true
 python-versions = ">=3.6"
 files = [
-    {file = "lxml-5.2.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:1f7785f4f789fdb522729ae465adcaa099e2a3441519df750ebdccc481d961a1"},
+    {file = "lxml-5.2.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:364d03207f3e603922d0d3932ef363d55bbf48e3647395765f9bfcbdf6d23632"},
    [... the remaining lxml 5.2.1 wheel entries are likewise replaced by the corresponding lxml 5.2.2 wheel entries; individual hashes omitted here ...]
-    {file = "lxml-5.2.1.tar.gz", hash = "sha256:3f7765e69bbce0906a7c74d5fe46d2c7a7596147318dbc08e4a2431f3060e306"},
+    {file = "lxml-5.2.2.tar.gz", hash = "sha256:bb2dc4898180bea79863d5487e5f9c7c34297414bad54bcd0f0852aee9cfdb87"},
 ]

 [package.extras]
@@ -1454,17 +1442,17 @@ files = [

 [[package]]
 name = "mypy-zope"
-version = "1.0.3"
+version = "1.0.4"
 description = "Plugin for mypy to support zope interfaces"
 optional = false
 python-versions = "*"
 files = [
-    {file = "mypy-zope-1.0.3.tar.gz", hash = "sha256:149081bd2754d947747baefac569bb1c2bc127b4a2cc1fa505492336946bb3b4"},
-    {file = "mypy_zope-1.0.3-py3-none-any.whl", hash = "sha256:7a30ce1a2589173f0be66662c9a9179f75737afc40e4104df4c76fb5a8421c14"},
+    {file = "mypy-zope-1.0.4.tar.gz", hash = "sha256:a9569e73ae85a65247787d98590fa6d4290e76f26aabe035d1c3e94a0b9ab6ee"},
+    {file = "mypy_zope-1.0.4-py3-none-any.whl", hash = "sha256:c7298f93963a84f2b145c2b5cc98709fc2a5be4adf54bfe23fa7fdd8fd19c975"},
 ]

 [package.dependencies]
-mypy = ">=1.0.0,<1.9.0"
+mypy = ">=1.0.0,<1.10.0"
 "zope.interface" = "*"
 "zope.schema" = "*"
@@ -2399,13 +2387,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]

 [[package]]
 name = "sentry-sdk"
-version = "2.1.1"
+version = "2.3.1"
 description = "Python client for Sentry (https://sentry.io)"
 optional = true
 python-versions = ">=3.6"
 files = [
-    {file = "sentry_sdk-2.1.1-py2.py3-none-any.whl", hash = "sha256:99aeb78fb76771513bd3b2829d12613130152620768d00cd3e45ac00cb17950f"},
-    {file = "sentry_sdk-2.1.1.tar.gz", hash = "sha256:95d8c0bb41c8b0bc37ab202c2c4a295bb84398ee05f4cdce55051cd75b926ec1"},
+    {file = "sentry_sdk-2.3.1-py2.py3-none-any.whl", hash = "sha256:c5aeb095ba226391d337dd42a6f9470d86c9fc236ecc71cfc7cd1942b45010c6"},
+    {file = "sentry_sdk-2.3.1.tar.gz", hash = "sha256:139a71a19f5e9eb5d3623942491ce03cf8ebc14ea2e39ba3e6fe79560d8a5b1f"},
 ]

 [package.dependencies]
@ -2427,7 +2415,7 @@ django = ["django (>=1.8)"]
|
||||||
falcon = ["falcon (>=1.4)"]
|
falcon = ["falcon (>=1.4)"]
|
||||||
fastapi = ["fastapi (>=0.79.0)"]
|
fastapi = ["fastapi (>=0.79.0)"]
|
||||||
flask = ["blinker (>=1.1)", "flask (>=0.11)", "markupsafe"]
|
flask = ["blinker (>=1.1)", "flask (>=0.11)", "markupsafe"]
|
||||||
grpcio = ["grpcio (>=1.21.1)"]
|
grpcio = ["grpcio (>=1.21.1)", "protobuf (>=3.8.0)"]
|
||||||
httpx = ["httpx (>=0.16.0)"]
|
httpx = ["httpx (>=0.16.0)"]
|
||||||
huey = ["huey (>=2)"]
|
huey = ["huey (>=2)"]
|
||||||
huggingface-hub = ["huggingface-hub (>=0.22)"]
|
huggingface-hub = ["huggingface-hub (>=0.22)"]
|
||||||
|
@ -2782,6 +2770,20 @@ files = [
 [package.dependencies]
 types-html5lib = "*"

+[[package]]
+name = "types-cffi"
+version = "1.16.0.20240331"
+description = "Typing stubs for cffi"
+optional = false
+python-versions = ">=3.8"
+files = [
+    {file = "types-cffi-1.16.0.20240331.tar.gz", hash = "sha256:b8b20d23a2b89cfed5f8c5bc53b0cb8677c3aac6d970dbc771e28b9c698f5dee"},
+    {file = "types_cffi-1.16.0.20240331-py3-none-any.whl", hash = "sha256:a363e5ea54a4eb6a4a105d800685fde596bc318089b025b27dee09849fe41ff0"},
+]
+
+[package.dependencies]
+types-setuptools = "*"
+
 [[package]]
 name = "types-commonmark"
 version = "0.9.2.20240106"

@@ -2864,17 +2866,18 @@ files = [
 [[package]]
 name = "types-pyopenssl"
-version = "24.0.0.20240311"
+version = "24.1.0.20240425"
 description = "Typing stubs for pyOpenSSL"
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "types-pyOpenSSL-24.0.0.20240311.tar.gz", hash = "sha256:7bca00cfc4e7ef9c5d2663c6a1c068c35798e59670595439f6296e7ba3d58083"},
+    {file = "types-pyOpenSSL-24.1.0.20240425.tar.gz", hash = "sha256:0a7e82626c1983dc8dc59292bf20654a51c3c3881bcbb9b337c1da6e32f0204e"},
-    {file = "types_pyOpenSSL-24.0.0.20240311-py3-none-any.whl", hash = "sha256:6e8e8bfad34924067333232c93f7fc4b369856d8bea0d5c9d1808cb290ab1972"},
+    {file = "types_pyOpenSSL-24.1.0.20240425-py3-none-any.whl", hash = "sha256:f51a156835555dd2a1f025621e8c4fbe7493470331afeef96884d1d29bf3a473"},
 ]

 [package.dependencies]
 cryptography = ">=35.0.0"
+types-cffi = "*"

 [[package]]
 name = "types-pyyaml"

@@ -3184,4 +3187,4 @@ user-search = ["pyicu"]
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.8.0"
-content-hash = "987f8eccaa222367b1a2e15b0d496586ca50d46ca1277e69694922d31c93ce5b"
+content-hash = "107c8fb5c67360340854fbdba3c085fc5f9c7be24bcb592596a914eea621faea"

@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"
 [tool.poetry]
 name = "matrix-synapse"
-version = "1.108.0"
+version = "1.109.0rc1"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "AGPL-3.0-or-later"

@@ -681,8 +681,8 @@ def setup_sentry(hs: "HomeServer") -> None:
     )

     # We set some default tags that give some context to this instance
-    with sentry_sdk.configure_scope() as scope:
-        scope.set_tag("matrix_server_name", hs.config.server.server_name)
+    global_scope = sentry_sdk.Scope.get_global_scope()
+    global_scope.set_tag("matrix_server_name", hs.config.server.server_name)

     app = (
         hs.config.worker.worker_app
@@ -690,8 +690,8 @@ def setup_sentry(hs: "HomeServer") -> None:
         else "synapse.app.homeserver"
     )
     name = hs.get_instance_name()
-    scope.set_tag("worker_app", app)
-    scope.set_tag("worker_name", name)
+    global_scope.set_tag("worker_app", app)
+    global_scope.set_tag("worker_name", name)


 def setup_sdnotify(hs: "HomeServer") -> None:
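The hunk above tracks sentry-sdk's 2.x API: `configure_scope()` is deprecated there, and tags set on a locally configured scope no longer reliably apply to every event, which is what broke the default Sentry tags in v1.108.0. A minimal sketch of the replacement pattern, using placeholder tag values rather than real homeserver config:

```python
import sentry_sdk

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")  # assumed DSN

# Tags on the *global* scope are merged into every event from every thread,
# which is the right place for instance-wide context such as the worker name.
global_scope = sentry_sdk.Scope.get_global_scope()
global_scope.set_tag("matrix_server_name", "example.com")    # placeholder value
global_scope.set_tag("worker_app", "synapse.app.homeserver")  # placeholder value
global_scope.set_tag("worker_name", "main")                   # placeholder value
```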
@@ -674,7 +674,7 @@ class FederationServer(FederationBase):
         # This is in addition to the HS-level rate limiting applied by
         # BaseFederationServlet.
         # type-ignore: mypy doesn't seem able to deduce the type of the limiter(!?)
-        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(  # type: ignore[has-type]
+        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(
             requester=None,
             key=room_id,
             update=False,
@@ -717,7 +717,7 @@ class FederationServer(FederationBase):
             SynapseTags.SEND_JOIN_RESPONSE_IS_PARTIAL_STATE,
             caller_supports_partial_state,
         )
-        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(  # type: ignore[has-type]
+        await self._room_member_handler._join_rate_per_room_limiter.ratelimit(
             requester=None,
             key=room_id,
             update=False,
@@ -126,13 +126,7 @@ class AdminHandler:
         # Get all rooms the user is in or has been in
         rooms = await self._store.get_rooms_for_local_user_where_membership_is(
             user_id,
-            membership_list=(
-                Membership.JOIN,
-                Membership.LEAVE,
-                Membership.BAN,
-                Membership.INVITE,
-                Membership.KNOCK,
-            ),
+            membership_list=Membership.LIST,
         )

         # We only try and fetch events for rooms the user has been in. If
@@ -179,7 +173,7 @@ class AdminHandler:
             if room.membership == Membership.JOIN:
                 stream_ordering = self._store.get_room_max_stream_ordering()
             else:
-                stream_ordering = room.stream_ordering
+                stream_ordering = room.event_pos.stream

             from_key = RoomStreamToken(topological=0, stream=0)
             to_key = RoomStreamToken(stream=stream_ordering)
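For reference, `Membership.LIST` gathers every membership value so call sites no longer enumerate them by hand. A sketch of the shape; the exact definition in `synapse.api.constants` may differ:

```python
class Membership:
    """String constants for the m.room.member `membership` field (sketch)."""
    INVITE = "invite"
    JOIN = "join"
    KNOCK = "knock"
    LEAVE = "leave"
    BAN = "ban"
    # One tuple with every state, so callers can pass `membership_list=Membership.LIST`
    # instead of spelling out the five values at each call site.
    LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)

assert "join" in Membership.LIST
```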
@@ -149,6 +149,11 @@ class E2eKeysHandler:
         remote_queries = {}

         for user_id, device_ids in device_keys_query.items():
+            if not UserID.is_valid(user_id):
+                # Ignore invalid user IDs, which is the same behaviour as if
+                # the user existed but had no keys.
+                continue
+
             # we use UserID.from_string to catch invalid user ids
             if self.is_mine(UserID.from_string(user_id)):
                 local_query[user_id] = device_ids
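The guard above relies on `UserID.is_valid`, which rejects strings that don't parse as `@localpart:domain`. A rough standalone illustration of the same skip-don't-fail behaviour (not Synapse's actual parser):

```python
def is_valid_mxid(user_id: str) -> bool:
    # Very rough approximation: the real check also validates the sigil,
    # the localpart characters, and that a domain is actually present.
    return user_id.startswith("@") and ":" in user_id

device_keys_query = {"@alice:example.com": [], "not-a-user-id": ["DEVICE1"]}
for user_id, device_ids in device_keys_query.items():
    if not is_valid_mxid(user_id):
        # Same behaviour as a valid user with no keys: silently skip, so we
        # never try to resolve a bogus "server name" over federation.
        continue
    print(f"would query keys for {user_id}: {device_ids}")
```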
@@ -199,7 +199,7 @@ class InitialSyncHandler:
                 )
             elif event.membership == Membership.LEAVE:
                 room_end_token = RoomStreamToken(
-                    stream=event.stream_ordering,
+                    stream=event.event_pos.stream,
                 )
                 deferred_room_state = run_in_background(
                     self._state_storage_controller.get_state_for_events,
@@ -27,7 +27,6 @@ from synapse.api.constants import Direction, EventTypes, Membership
 from synapse.api.errors import SynapseError
 from synapse.api.filtering import Filter
 from synapse.events.utils import SerializeEventConfig
-from synapse.handlers.room import ShutdownRoomParams, ShutdownRoomResponse
 from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
 from synapse.logging.opentracing import trace
 from synapse.metrics.background_process_metrics import run_as_background_process
@@ -38,6 +37,8 @@ from synapse.types import (
     JsonMapping,
     Requester,
     ScheduledTask,
+    ShutdownRoomParams,
+    ShutdownRoomResponse,
     StreamKeyType,
     TaskStatus,
 )
@@ -40,7 +40,6 @@ from typing import (
 )

 import attr
-from typing_extensions import TypedDict

 import synapse.events.snapshot
 from synapse.api.constants import (
@@ -81,6 +80,8 @@ from synapse.types import (
     RoomAlias,
     RoomID,
     RoomStreamToken,
+    ShutdownRoomParams,
+    ShutdownRoomResponse,
     StateMap,
     StrCollection,
     StreamKeyType,
@@ -1780,63 +1781,6 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
         return self.store.get_current_room_stream_token_for_room_id(room_id)


-class ShutdownRoomParams(TypedDict):
-    """
-    Attributes:
-        requester_user_id:
-            User who requested the action. Will be recorded as putting the room on the
-            blocking list.
-        new_room_user_id:
-            If set, a new room will be created with this user ID
-            as the creator and admin, and all users in the old room will be
-            moved into that room. If not set, no new room will be created
-            and the users will just be removed from the old room.
-        new_room_name:
-            A string representing the name of the room that new users will
-            be invited to. Defaults to `Content Violation Notification`
-        message:
-            A string containing the first message that will be sent as
-            `new_room_user_id` in the new room. Ideally this will clearly
-            convey why the original room was shut down.
-            Defaults to `Sharing illegal content on this server is not
-            permitted and rooms in violation will be blocked.`
-        block:
-            If set to `true`, this room will be added to a blocking list,
-            preventing future attempts to join the room. Defaults to `false`.
-        purge:
-            If set to `true`, purge the given room from the database.
-        force_purge:
-            If set to `true`, the room will be purged from database
-            even if there are still users joined to the room.
-    """
-
-    requester_user_id: Optional[str]
-    new_room_user_id: Optional[str]
-    new_room_name: Optional[str]
-    message: Optional[str]
-    block: bool
-    purge: bool
-    force_purge: bool
-
-
-class ShutdownRoomResponse(TypedDict):
-    """
-    Attributes:
-        kicked_users: An array of users (`user_id`) that were kicked.
-        failed_to_kick_users:
-            An array of users (`user_id`) that that were not kicked.
-        local_aliases:
-            An array of strings representing the local aliases that were
-            migrated from the old room to the new.
-        new_room_id: A string representing the room ID of the new room.
-    """
-
-    kicked_users: List[str]
-    failed_to_kick_users: List[str]
-    local_aliases: List[str]
-    new_room_id: Optional[str]
-
-
 class RoomShutdownHandler:
     DEFAULT_MESSAGE = (
         "Sharing illegal content on this server is not permitted and rooms in"
@@ -1,8 +1,28 @@
+#
+# This file is licensed under the Affero General Public License (AGPL) version 3.
+#
+# Copyright (C) 2024 New Vector, Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# See the GNU Affero General Public License for more details:
+# <https://www.gnu.org/licenses/agpl-3.0.html>.
+#
+# Originally licensed under the Apache License, Version 2.0:
+# <http://www.apache.org/licenses/LICENSE-2.0>.
+#
+# [This file includes modifications made by New Vector Limited]
+#
+#
 import logging
 from enum import Enum
 from typing import TYPE_CHECKING, AbstractSet, Dict, Final, List, Optional, Tuple

 import attr
+from immutabledict import immutabledict

 from synapse._pydantic_compat import HAS_PYDANTIC_V2
@@ -22,7 +42,9 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)


-# Everything except `Membership.LEAVE`
+# Everything except `Membership.LEAVE` because we want everything that's *still*
+# relevant to the user. There are a few more things to include in the sync response
+# (kicks, newly_left) but those are handled separately.
 MEMBERSHIP_TO_DISPLAY_IN_SYNC = (
     Membership.INVITE,
     Membership.JOIN,
@@ -31,6 +53,24 @@ MEMBERSHIP_TO_DISPLAY_IN_SYNC = (
 )


+def filter_membership_for_sync(*, membership: str, user_id: str, sender: str) -> bool:
+    """
+    Returns True if the membership event should be included in the sync response,
+    otherwise False.
+
+    Attributes:
+        membership: The membership state of the user in the room.
+        user_id: The user ID that the membership applies to
+        sender: The person who sent the membership event
+    """
+
+    return (
+        membership in MEMBERSHIP_TO_DISPLAY_IN_SYNC
+        # Include kicks
+        or (membership == Membership.LEAVE and sender != user_id)
+    )
+
+
 class SlidingSyncConfig(SlidingSyncBody):
     """
     Inherit from `SlidingSyncBody` since we need all of the same fields and add a few
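To make the kick case concrete, here is a sketch of how the new helper behaves, reusing its logic with plain strings rather than Synapse's `Membership` constants:

```python
def filter_membership_for_sync(*, membership: str, user_id: str, sender: str) -> bool:
    # Mirrors the helper above: show everything still relevant to the user,
    # plus leaves that someone *else* sent (i.e. kicks).
    return membership in ("invite", "join", "knock", "ban") or (
        membership == "leave" and sender != user_id
    )

me = "@alice:example.com"
# A leave I sent myself is not shown...
assert not filter_membership_for_sync(membership="leave", user_id=me, sender=me)
# ...but a leave sent by someone else (a kick) is.
assert filter_membership_for_sync(
    membership="leave", user_id=me, sender="@mod:example.com"
)
```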
@@ -170,7 +210,7 @@ class SlidingSyncResult:

     next_pos: StreamToken
     lists: Dict[str, SlidingWindowList]
-    rooms: List[RoomResult]
+    rooms: Dict[str, RoomResult]
     extensions: JsonMapping

     def __bool__(self) -> bool:
@@ -180,10 +220,21 @@ class SlidingSyncResult:
         """
         return bool(self.lists or self.rooms or self.extensions)

+    @staticmethod
+    def empty(next_pos: StreamToken) -> "SlidingSyncResult":
+        "Return a new empty result"
+        return SlidingSyncResult(
+            next_pos=next_pos,
+            lists={},
+            rooms={},
+            extensions={},
+        )
+

 class SlidingSyncHandler:
     def __init__(self, hs: "HomeServer"):
         self.hs_config = hs.config
+        self.clock = hs.get_clock()
         self.store = hs.get_datastores().main
         self.auth_blocking = hs.get_auth_blocking()
         self.notifier = hs.get_notifier()
@@ -195,7 +246,7 @@ class SlidingSyncHandler:
|
requester: Requester,
|
||||||
sync_config: SlidingSyncConfig,
|
sync_config: SlidingSyncConfig,
|
||||||
from_token: Optional[StreamToken] = None,
|
from_token: Optional[StreamToken] = None,
|
||||||
timeout: int = 0,
|
timeout_ms: int = 0,
|
||||||
) -> SlidingSyncResult:
|
) -> SlidingSyncResult:
|
||||||
"""Get the sync for a client if we have new data for it now. Otherwise
|
"""Get the sync for a client if we have new data for it now. Otherwise
|
||||||
wait for new data to arrive on the server. If the timeout expires, then
|
wait for new data to arrive on the server. If the timeout expires, then
|
||||||
|
@ -210,7 +261,32 @@ class SlidingSyncHandler:
|
||||||
# any to-device messages before that token (since we now know that the device
|
# any to-device messages before that token (since we now know that the device
|
||||||
# has received them). (see sync v2 for how to do this)
|
# has received them). (see sync v2 for how to do this)
|
||||||
|
|
||||||
if timeout == 0 or from_token is None:
|
# If we're working with a user-provided token, we need to make sure to wait for
|
||||||
|
# this worker to catch up with the token so we don't skip past any incoming
|
||||||
|
# events or future events if the user is nefariously, manually modifying the
|
||||||
|
# token.
|
||||||
|
if from_token is not None:
|
||||||
|
# We need to make sure this worker has caught up with the token. If
|
||||||
|
# this returns false, it means we timed out waiting, and we should
|
||||||
|
# just return an empty response.
|
||||||
|
before_wait_ts = self.clock.time_msec()
|
||||||
|
if not await self.notifier.wait_for_stream_token(from_token):
|
||||||
|
logger.warning(
|
||||||
|
"Timed out waiting for worker to catch up. Returning empty response"
|
||||||
|
)
|
||||||
|
return SlidingSyncResult.empty(from_token)
|
||||||
|
|
||||||
|
# If we've spent significant time waiting to catch up, take it off
|
||||||
|
# the timeout.
|
||||||
|
after_wait_ts = self.clock.time_msec()
|
||||||
|
if after_wait_ts - before_wait_ts > 1_000:
|
||||||
|
timeout_ms -= after_wait_ts - before_wait_ts
|
||||||
|
timeout_ms = max(timeout_ms, 0)
|
||||||
|
|
||||||
|
# We're going to respond immediately if the timeout is 0 or if this is an
|
||||||
|
# initial sync (without a `from_token`) so we can avoid calling
|
||||||
|
# `notifier.wait_for_events()`.
|
||||||
|
if timeout_ms == 0 or from_token is None:
|
||||||
now_token = self.event_sources.get_current_token()
|
now_token = self.event_sources.get_current_token()
|
||||||
result = await self.current_sync_for_user(
|
result = await self.current_sync_for_user(
|
||||||
sync_config,
|
sync_config,
|
||||||
|
@ -230,7 +306,7 @@ class SlidingSyncHandler:
|
||||||
|
|
||||||
result = await self.notifier.wait_for_events(
|
result = await self.notifier.wait_for_events(
|
||||||
sync_config.user.to_string(),
|
sync_config.user.to_string(),
|
||||||
timeout,
|
timeout_ms,
|
||||||
current_sync_callback,
|
current_sync_callback,
|
||||||
from_token=from_token,
|
from_token=from_token,
|
||||||
)
|
)
|
||||||
|
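The intent of the catch-up branch above, reduced to a sketch: any time spent waiting for this worker to reach the client's token is charged against the client's long-poll budget, so a slow catch-up cannot double the total wait. `wait_for_catch_up` is a placeholder, not a real Synapse API:

```python
import time

async def long_poll_budget(timeout_ms: int, wait_for_catch_up) -> int:
    """Return the remaining long-poll budget after catching up (placeholder logic)."""
    before = time.monotonic_ns() // 1_000_000
    await wait_for_catch_up()  # analogous to notifier.wait_for_stream_token(...)
    after = time.monotonic_ns() // 1_000_000

    waited = after - before
    if waited > 1_000:  # only bother adjusting if we waited over a second
        timeout_ms = max(timeout_ms - waited, 0)
    return timeout_ms
```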
@@ -295,7 +371,7 @@ class SlidingSyncHandler:
             next_pos=to_token,
             lists=lists,
             # TODO: Gather room data for rooms in lists and `sync_config.room_subscriptions`
-            rooms=[],
+            rooms={},
             extensions={},
         )

@@ -306,10 +382,12 @@ class SlidingSyncHandler:
         from_token: Optional[StreamToken] = None,
     ) -> AbstractSet[str]:
         """
-        Fetch room IDs that should be listed for this user in the sync response.
+        Fetch room IDs that should be listed for this user in the sync response (the
+        full room list that will be filtered, sorted, and sliced).

         We're looking for rooms that the user has not left (`invite`, `knock`, `join`,
-        and `ban`) or newly_left rooms that are > `from_token` and <= `to_token`.
+        and `ban`), or kicks (`leave` where the `sender` is different from the
+        `state_key`), or newly_left rooms that are > `from_token` and <= `to_token`.
         """
         user_id = user.to_string()

@@ -317,11 +395,11 @@ class SlidingSyncHandler:
         room_for_user_list = await self.store.get_rooms_for_local_user_where_membership_is(
             user_id=user_id,
             # We want to fetch any kind of membership (joined and left rooms) in order
-            # to get the `stream_ordering` of the latest room membership event for the
+            # to get the `event_pos` of the latest room membership event for the
             # user.
             #
-            # We will filter out the rooms that the user has left below (see
-            # `MEMBERSHIP_TO_DISPLAY_IN_SYNC`)
+            # We will filter out the rooms that don't belong below (see
+            # `filter_membership_for_sync`)
             membership_list=Membership.LIST,
             excluded_rooms=self.rooms_to_exclude_globally,
         )
@@ -334,101 +412,99 @@ class SlidingSyncHandler:
         sync_room_id_set = {
             room_for_user.room_id
             for room_for_user in room_for_user_list
-            if room_for_user.membership in MEMBERSHIP_TO_DISPLAY_IN_SYNC
+            if filter_membership_for_sync(
+                membership=room_for_user.membership,
+                user_id=user_id,
+                sender=room_for_user.sender,
+            )
         }

-        # Find the stream_ordering of the latest room membership event which will mark
-        # the spot we queried up to.
-        max_stream_ordering_from_room_list = max(
-            room_for_user.stream_ordering for room_for_user in room_for_user_list
-        )
+        # Get the `RoomStreamToken` that represents the spot we queried up to when we got
+        # our membership snapshot from `get_rooms_for_local_user_where_membership_is()`.
+        #
+        # First, we need to get the max stream_ordering of each event persister instance
+        # that we queried events from.
+        instance_to_max_stream_ordering_map: Dict[str, int] = {}
+        for room_for_user in room_for_user_list:
+            instance_name = room_for_user.event_pos.instance_name
+            stream_ordering = room_for_user.event_pos.stream
+
+            current_instance_max_stream_ordering = (
+                instance_to_max_stream_ordering_map.get(instance_name)
+            )
+            if (
+                current_instance_max_stream_ordering is None
+                or stream_ordering > current_instance_max_stream_ordering
+            ):
+                instance_to_max_stream_ordering_map[instance_name] = stream_ordering
+
+        # Then assemble the `RoomStreamToken`
+        membership_snapshot_token = RoomStreamToken(
+            # Minimum position in the `instance_map`
+            stream=min(
+                stream_ordering
+                for stream_ordering in instance_to_max_stream_ordering_map.values()
+            ),
+            instance_map=immutabledict(instance_to_max_stream_ordering_map),
+        )

         # If our `to_token` is already the same or ahead of the latest room membership
         # for the user, we can just straight-up return the room list (nothing has
         # changed)
-        if max_stream_ordering_from_room_list <= to_token.room_key.stream:
+        if membership_snapshot_token.is_before_or_eq(to_token.room_key):
             return sync_room_id_set

         # We assume the `from_token` is before or at-least equal to the `to_token`
-        assert (
-            from_token is None or from_token.room_key.stream <= to_token.room_key.stream
-        ), f"{from_token.room_key.stream if from_token else None} <= {to_token.room_key.stream}"
+        assert from_token is None or from_token.room_key.is_before_or_eq(
+            to_token.room_key
+        ), f"{from_token.room_key if from_token else None} < {to_token.room_key}"

-        # We assume the `from_token`/`to_token` is before the `max_stream_ordering_from_room_list`
-        assert (
-            from_token is None
-            or from_token.room_key.stream < max_stream_ordering_from_room_list
-        ), f"{from_token.room_key.stream if from_token else None} < {max_stream_ordering_from_room_list}"
-        assert (
-            to_token.room_key.stream < max_stream_ordering_from_room_list
-        ), f"{to_token.room_key.stream} < {max_stream_ordering_from_room_list}"
+        # We assume the `from_token`/`to_token` is before the `membership_snapshot_token`
+        assert from_token is None or from_token.room_key.is_before_or_eq(
+            membership_snapshot_token
+        ), f"{from_token.room_key if from_token else None} < {membership_snapshot_token}"
+        assert to_token.room_key.is_before_or_eq(
+            membership_snapshot_token
+        ), f"{to_token.room_key} < {membership_snapshot_token}"

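The snapshot-token assembly above, in miniature: take the max position seen per event-persister instance, then build a token whose `stream` is the minimum of those maxima (a safe lower bound) with the full per-instance map alongside. A toy version with plain dicts rather than Synapse's `RoomStreamToken`:

```python
# (instance_name, stream_ordering) pairs as they might come back from the
# membership query on a multi-writer deployment (assumed values).
rows = [("persister1", 10), ("persister2", 7), ("persister1", 13), ("persister2", 9)]

instance_to_max: dict[str, int] = {}
for instance, stream in rows:
    if stream > instance_to_max.get(instance, -1):
        instance_to_max[instance] = stream

snapshot = {
    # Positions at or below this stream are known-persisted on *every* instance.
    "stream": min(instance_to_max.values()),
    "instance_map": instance_to_max,
}
print(snapshot)  # {'stream': 9, 'instance_map': {'persister1': 13, 'persister2': 9}}
```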
         # Since we fetched the users room list at some point in time after the from/to
         # tokens, we need to revert/rewind some membership changes to match the point in
-        # time of the `to_token`.
+        # time of the `to_token`. In particular, we need to make these fixups:
         #
         # - 1) Add back newly_left rooms (> `from_token` and <= `to_token`)
         # - 2a) Remove rooms that the user joined after the `to_token`
         # - 2b) Add back rooms that the user left after the `to_token`
-        membership_change_events = await self.store.get_membership_changes_for_user(
-            user_id,
-            # Start from the `from_token` if given, otherwise from the `to_token` so we
-            # can still do the 2) fixups.
-            from_key=from_token.room_key if from_token else to_token.room_key,
-            # Fetch up to our membership snapshot
-            to_key=RoomStreamToken(stream=max_stream_ordering_from_room_list),
-            excluded_rooms=self.rooms_to_exclude_globally,
-        )
-
-        # Assemble a list of the last membership events in some given ranges. Someone
-        # could have left and joined multiple times during the given range but we only
-        # care about end-result so we grab the last one.
-        last_membership_change_by_room_id_in_from_to_range: Dict[str, EventBase] = {}
-        last_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
-        # We also need the first membership event after the `to_token` so we can step
-        # backward to the previous membership that would apply to the from/to range.
-        first_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
-        for event in membership_change_events:
-            assert event.internal_metadata.stream_ordering
-
-            if (
-                (
-                    from_token is None
-                    or event.internal_metadata.stream_ordering
-                    > from_token.room_key.stream
-                )
-                and event.internal_metadata.stream_ordering <= to_token.room_key.stream
-            ):
-                last_membership_change_by_room_id_in_from_to_range[event.room_id] = (
-                    event
-                )
-            elif (
-                event.internal_metadata.stream_ordering > to_token.room_key.stream
-                and event.internal_metadata.stream_ordering
-                <= max_stream_ordering_from_room_list
-            ):
-                last_membership_change_by_room_id_after_to_token[event.room_id] = event
-                # Only set if we haven't already set it
-                first_membership_change_by_room_id_after_to_token.setdefault(
-                    event.room_id, event
-                )
-            else:
-                # We don't expect this to happen since we should only be fetching
-                # `membership_change_events` that fall in the given ranges above. It
-                # doesn't hurt anything to ignore an event we don't need but may
-                # indicate a bug in the logic above.
-                raise AssertionError(
-                    "Membership event with stream_ordering=%s should fall in the given ranges above"
-                    + " (%d > x <= %d) or (%d > x <= %d). We shouldn't be fetching extra membership"
-                    + " events that aren't used.",
-                    event.internal_metadata.stream_ordering,
-                    from_token.room_key.stream if from_token else None,
-                    to_token.room_key.stream,
-                    to_token.room_key.stream,
-                    max_stream_ordering_from_room_list,
-                )
-
-        # 1)
+        #
+        # Below, we're doing two separate lookups for membership changes. We could
+        # request everything for both fixups in one range, [`from_token.room_key`,
+        # `membership_snapshot_token`), but we want to avoid raw `stream_ordering`
+        # comparison without `instance_name` (which is flawed). We could refactor
+        # `event.internal_metadata` to include `instance_name` but it might turn out a
+        # little difficult and a bigger, broader Synapse change than we want to make.
+
+        # 1) -----------------------------------------------------
+
+        # 1) Fetch membership changes that fall in the range from `from_token` up to `to_token`
+        membership_change_events_in_from_to_range = []
+        if from_token:
+            membership_change_events_in_from_to_range = (
+                await self.store.get_membership_changes_for_user(
+                    user_id,
+                    from_key=from_token.room_key,
+                    to_key=to_token.room_key,
+                    excluded_rooms=self.rooms_to_exclude_globally,
+                )
+            )
+
+        # 1) Assemble a list of the last membership events in some given ranges. Someone
+        # could have left and joined multiple times during the given range but we only
+        # care about end-result so we grab the last one.
+        last_membership_change_by_room_id_in_from_to_range: Dict[str, EventBase] = {}
+        for event in membership_change_events_in_from_to_range:
+            assert event.internal_metadata.stream_ordering
+            last_membership_change_by_room_id_in_from_to_range[event.room_id] = event
+
+        # 1) Fixup
         for (
             last_membership_change_in_from_to_range
         ) in last_membership_change_by_room_id_in_from_to_range.values():
@@ -440,7 +516,36 @@ class SlidingSyncHandler:
             if last_membership_change_in_from_to_range.membership == Membership.LEAVE:
                 sync_room_id_set.add(room_id)

-        # 2)
+        # 2) -----------------------------------------------------
+
+        # 2) Fetch membership changes that fall in the range from `to_token` up to
+        # `membership_snapshot_token`
+        membership_change_events_after_to_token = (
+            await self.store.get_membership_changes_for_user(
+                user_id,
+                from_key=to_token.room_key,
+                to_key=membership_snapshot_token,
+                excluded_rooms=self.rooms_to_exclude_globally,
+            )
+        )
+
+        # 2) Assemble a list of the last membership events in some given ranges. Someone
+        # could have left and joined multiple times during the given range but we only
+        # care about end-result so we grab the last one.
+        last_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
+        # We also need the first membership event after the `to_token` so we can step
+        # backward to the previous membership that would apply to the from/to range.
+        first_membership_change_by_room_id_after_to_token: Dict[str, EventBase] = {}
+        for event in membership_change_events_after_to_token:
+            assert event.internal_metadata.stream_ordering
+
+            last_membership_change_by_room_id_after_to_token[event.room_id] = event
+            # Only set if we haven't already set it
+            first_membership_change_by_room_id_after_to_token.setdefault(
+                event.room_id, event
+            )
+
+        # 2) Fixup
         for (
             last_membership_change_after_to_token
         ) in last_membership_change_by_room_id_after_to_token.values():
@@ -458,33 +563,65 @@ class SlidingSyncHandler:
                     + "This is probably a mistake in assembling the `last_membership_change_by_room_id_after_to_token`"
                     + "/`first_membership_change_by_room_id_after_to_token` dicts above."
                 )
+            # TODO: Instead of reading from `unsigned`, refactor this to use the
+            # `current_state_delta_stream` table in the future. Probably a new
+            # `get_membership_changes_for_user()` function that uses
+            # `current_state_delta_stream` with a join to `room_memberships`. This would
+            # help in state reset scenarios since `prev_content` is looking at the
+            # current branch vs the current room state. This is all just data given to
+            # the client so no real harm to data integrity, but we'd like to be nice to
+            # the client. Since the `current_state_delta_stream` table is new, it
+            # doesn't have all events in it. Since this is Sliding Sync, if we ever need
+            # to, we can signal the client to throw all of their state away by sending
+            # "operation: RESET".
             prev_content = first_membership_change_after_to_token.unsigned.get(
                 "prev_content", {}
             )
             prev_membership = prev_content.get("membership", None)
+            prev_sender = first_membership_change_after_to_token.unsigned.get(
+                "prev_sender", None
+            )
+
+            # Check if the previous membership (membership that applies to the from/to
+            # range) should be included in our `sync_room_id_set`
+            should_prev_membership_be_included = (
+                prev_membership is not None
+                and prev_sender is not None
+                and filter_membership_for_sync(
+                    membership=prev_membership,
+                    user_id=user_id,
+                    sender=prev_sender,
+                )
+            )
+
+            # Check if the last membership (membership that applies to our snapshot) was
+            # already included in our `sync_room_id_set`
+            was_last_membership_already_included = filter_membership_for_sync(
+                membership=last_membership_change_after_to_token.membership,
+                user_id=user_id,
+                sender=last_membership_change_after_to_token.sender,
+            )

             # 2a) Add back rooms that the user left after the `to_token`
             #
-            # If the last membership event after the `to_token` is a leave event, then
-            # the room was excluded from the
-            # `get_rooms_for_local_user_where_membership_is()` results. We should add
-            # these rooms back as long as the user was part of the room before the
-            # `to_token`.
+            # For example, if the last membership event after the `to_token` is a leave
+            # event, then the room was excluded from `sync_room_id_set` when we first
+            # crafted it above. We should add these rooms back as long as the user also
+            # was part of the room before the `to_token`.
             if (
-                last_membership_change_after_to_token.membership == Membership.LEAVE
-                and prev_membership is not None
-                and prev_membership != Membership.LEAVE
+                not was_last_membership_already_included
+                and should_prev_membership_be_included
             ):
                 sync_room_id_set.add(room_id)
             # 2b) Remove rooms that the user joined (hasn't left) after the `to_token`
             #
-            # If the last membership event after the `to_token` is a "join" event, then
-            # the room was included in the `get_rooms_for_local_user_where_membership_is()`
-            # results. We should remove these rooms as long as the user wasn't part of
-            # the room before the `to_token`.
+            # For example, if the last membership event after the `to_token` is a "join"
+            # event, then the room was included in `sync_room_id_set` when we first
+            # crafted it above. We should remove these rooms as long as the user also
+            # wasn't part of the room before the `to_token`.
             elif (
-                last_membership_change_after_to_token.membership != Membership.LEAVE
-                and (prev_membership is None or prev_membership == Membership.LEAVE)
+                was_last_membership_already_included
+                and not should_prev_membership_be_included
             ):
                 sync_room_id_set.discard(room_id)
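A worked example of fixups 2a/2b: suppose the snapshot says `leave` (self-initiated) but the membership that applied at `to_token`, recovered from `prev_content`/`prev_sender`, was `join`. The snapshot excluded the room, so 2a adds it back; the mirror case (snapshot `join`, previous `leave`) is removed by 2b. A sketch using the same include/exclude predicate as above (hypothetical helper name):

```python
def included(membership: str, user_id: str, sender: str) -> bool:
    # Same shape as filter_membership_for_sync() above.
    return membership in ("invite", "join", "knock", "ban") or (
        membership == "leave" and sender != user_id
    )

me = "@alice:example.com"
sync_room_id_set = set()  # snapshot said self-leave, so the room was excluded

# Membership at to_token was "join":
prev_included = included("join", me, me)
# Membership at the snapshot was a self-leave:
last_included = included("leave", me, me)

if not last_included and prev_included:
    sync_room_id_set.add("!room:example.com")      # fixup 2a: add back
elif last_included and not prev_included:
    sync_room_id_set.discard("!room:example.com")  # fixup 2b: remove

assert sync_room_id_set == {"!room:example.com"}
```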
@@ -2808,7 +2808,7 @@ class SyncHandler:
                 continue

             leave_token = now_token.copy_and_replace(
-                StreamKeyType.ROOM, RoomStreamToken(stream=event.stream_ordering)
+                StreamKeyType.ROOM, RoomStreamToken(stream=event.event_pos.stream)
             )
             room_entries.append(
                 RoomSyncResultBuilder(
@@ -477,9 +477,9 @@ class TypingWriterHandler(FollowerTypingHandler):

         rows = []
         for room_id in changed_rooms:
-            serial = self._room_serials[room_id]
-            if last_id < serial <= current_id:
-                typing = self._room_typing[room_id]
+            serial = self._room_serials.get(room_id)
+            if serial and last_id < serial <= current_id:
+                typing = self._room_typing.get(room_id, set())
                 rows.append((serial, [room_id, list(typing)]))
         rows.sort()
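The typing fix above is the classic "in-memory map emptied by a restart" hazard: the serial maps are rebuilt from live traffic, so `changed_rooms` can reference a room the restarted worker has not seen yet. A sketch of the failure and the defensive form:

```python
room_serials: dict[str, int] = {}   # empty again after a worker restart
room_typing: dict[str, set] = {}
changed_rooms = ["!stale:example.com"]  # remembered by a peer, unknown to us

rows = []
for room_id in changed_rooms:
    # room_serials[room_id] here would raise KeyError and kill the update loop;
    # .get() plus the truthiness check just skips rooms we know nothing about.
    serial = room_serials.get(room_id)
    if serial and 0 < serial <= 10:
        rows.append((serial, [room_id, list(room_typing.get(room_id, set()))]))
print(rows)  # [] -- the stale room is skipped instead of crashing
```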
@@ -914,6 +914,12 @@ class SlidingSyncRestServlet(RestServlet):
             timeout,
         )

+        # The client may have disconnected by now; don't bother to serialize the
+        # response if so.
+        if request._disconnected:
+            logger.info("Client has disconnected; not serializing response.")
+            return 200, {}
+
         response_content = await self.encode_response(sliding_sync_results)

         return 200, response_content
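The early return above skips response serialization for clients that have already gone away; `request._disconnected` is Twisted's flag for a dropped connection. A rough shape of the guard, with a stand-in request object rather than a real servlet:

```python
import logging

logger = logging.getLogger(__name__)

async def respond(request, results, encode) -> tuple:
    # If the TCP connection is already gone there is nobody to read the body,
    # so skip the (potentially expensive) serialization work entirely.
    if getattr(request, "_disconnected", False):
        logger.info("Client has disconnected; not serializing response.")
        return 200, {}
    return 200, await encode(results)
```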
@@ -476,7 +476,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
         )

         sql = """
-            SELECT room_id, e.sender, c.membership, event_id, e.stream_ordering, r.room_version
+            SELECT room_id, e.sender, c.membership, event_id, e.instance_name, e.stream_ordering, r.room_version
             FROM local_current_membership AS c
             INNER JOIN events AS e USING (room_id, event_id)
             INNER JOIN rooms AS r USING (room_id)
@@ -488,7 +488,17 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
         )

         txn.execute(sql, (user_id, *args))
-        results = [RoomsForUser(*r) for r in txn]
+        results = [
+            RoomsForUser(
+                room_id=room_id,
+                sender=sender,
+                membership=membership,
+                event_id=event_id,
+                event_pos=PersistedEventPosition(instance_name, stream_ordering),
+                room_version_id=room_version,
+            )
+            for room_id, sender, membership, event_id, instance_name, stream_ordering, room_version in txn
+        ]

         return results

@@ -35,7 +35,7 @@ class RoomsForUser:
     sender: str
     membership: str
     event_id: str
-    stream_ordering: int
+    event_pos: PersistedEventPosition
     room_version_id: str

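Replacing the bare `stream_ordering: int` with `event_pos` pairs the ordering with the event persister that allocated it, which is what makes cross-instance comparisons sound. A sketch of the shape; Synapse's real class also carries comparison helpers, so this is just the idea:

```python
import attr

@attr.s(slots=True, frozen=True, auto_attribs=True)
class PersistedEventPosition:
    """Where an event sits in the stream, qualified by which writer persisted it."""
    instance_name: str
    stream: int

pos = PersistedEventPosition("persister1", 742)
# Comparing raw stream orderings from *different* instances is flawed, which is
# why the sliding-sync code above keeps a per-instance map instead of one max.
print(pos.instance_name, pos.stream)
```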
@@ -1279,3 +1279,60 @@ class ScheduledTask:
     result: Optional[JsonMapping]
     # Optional error that should be assigned a value when the status is FAILED
     error: Optional[str]
+
+
+class ShutdownRoomParams(TypedDict):
+    """
+    Attributes:
+        requester_user_id:
+            User who requested the action. Will be recorded as putting the room on the
+            blocking list.
+        new_room_user_id:
+            If set, a new room will be created with this user ID
+            as the creator and admin, and all users in the old room will be
+            moved into that room. If not set, no new room will be created
+            and the users will just be removed from the old room.
+        new_room_name:
+            A string representing the name of the room that new users will
+            be invited to. Defaults to `Content Violation Notification`
+        message:
+            A string containing the first message that will be sent as
+            `new_room_user_id` in the new room. Ideally this will clearly
+            convey why the original room was shut down.
+            Defaults to `Sharing illegal content on this server is not
+            permitted and rooms in violation will be blocked.`
+        block:
+            If set to `true`, this room will be added to a blocking list,
+            preventing future attempts to join the room. Defaults to `false`.
+        purge:
+            If set to `true`, purge the given room from the database.
+        force_purge:
+            If set to `true`, the room will be purged from database
+            even if there are still users joined to the room.
+    """
+
+    requester_user_id: Optional[str]
+    new_room_user_id: Optional[str]
+    new_room_name: Optional[str]
+    message: Optional[str]
+    block: bool
+    purge: bool
+    force_purge: bool
+
+
+class ShutdownRoomResponse(TypedDict):
+    """
+    Attributes:
+        kicked_users: An array of users (`user_id`) that were kicked.
+        failed_to_kick_users:
+            An array of users (`user_id`) that were not kicked.
+        local_aliases:
+            An array of strings representing the local aliases that were
+            migrated from the old room to the new.
+        new_room_id: A string representing the room ID of the new room.
+    """
+
+    kicked_users: List[str]
+    failed_to_kick_users: List[str]
+    local_aliases: List[str]
+    new_room_id: Optional[str]
@@ -1,8 +1,27 @@
+#
+# This file is licensed under the Affero General Public License (AGPL) version 3.
+#
+# Copyright (C) 2024 New Vector, Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# See the GNU Affero General Public License for more details:
+# <https://www.gnu.org/licenses/agpl-3.0.html>.
+#
+# Originally licensed under the Apache License, Version 2.0:
+# <http://www.apache.org/licenses/LICENSE-2.0>.
+#
+# [This file includes modifications made by New Vector Limited]
+#
+#
 import logging

 from twisted.test.proto_helpers import MemoryReactor

-from synapse.api.constants import AccountDataTypes, EventTypes, JoinRules
+from synapse.api.constants import AccountDataTypes, EventTypes, JoinRules, Membership
 from synapse.api.room_versions import RoomVersions
 from synapse.rest import admin
 from synapse.rest.client import knock, login, room
@@ -143,7 +162,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
             b"{}",
             user1_tok,
         )
-        self.assertEqual(200, channel.code, channel.result)
+        self.assertEqual(channel.code, 200, channel.result)

         after_room_token = self.event_sources.get_current_token()
@@ -165,6 +184,128 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
             },
         )

+    def test_get_kicked_room(self) -> None:
+        """
+        Test that a room that the user was kicked from still shows up. When the user
+        comes back to their client, they should see that they were kicked.
+        """
+        user1_id = self.register_user("user1", "pass")
+        user1_tok = self.login(user1_id, "pass")
+        user2_id = self.register_user("user2", "pass")
+        user2_tok = self.login(user2_id, "pass")
+
+        # Setup the kick room (user2 kicks user1 from the room)
+        kick_room_id = self.helper.create_room_as(
+            user2_id, tok=user2_tok, is_public=True
+        )
+        self.helper.join(kick_room_id, user1_id, tok=user1_tok)
+        # Kick user1 from the room
+        self.helper.change_membership(
+            room=kick_room_id,
+            src=user2_id,
+            targ=user1_id,
+            tok=user2_tok,
+            membership=Membership.LEAVE,
+            extra_data={
+                "reason": "Bad manners",
+            },
+        )
+
+        after_kick_token = self.event_sources.get_current_token()
+
+        room_id_results = self.get_success(
+            self.sliding_sync_handler.get_sync_room_ids_for_user(
+                UserID.from_string(user1_id),
+                from_token=after_kick_token,
+                to_token=after_kick_token,
+            )
+        )
+
+        # The kicked room should show up
+        self.assertEqual(room_id_results, {kick_room_id})
+
+    def test_forgotten_rooms(self) -> None:
+        """
+        Forgotten rooms do not show up even if we forget after the from/to range.
+
+        Ideally, we would be able to track when the `/forget` happens and apply it
+        accordingly in the token range but the forgotten flag is only an extra bool in
+        the `room_memberships` table.
+        """
+        user1_id = self.register_user("user1", "pass")
+        user1_tok = self.login(user1_id, "pass")
+        user2_id = self.register_user("user2", "pass")
+        user2_tok = self.login(user2_id, "pass")
+
+        # Setup a normal room that we leave. This won't show up in the sync response
+        # because we left it before our token but is good to check anyway.
+        leave_room_id = self.helper.create_room_as(
+            user2_id, tok=user2_tok, is_public=True
+        )
+        self.helper.join(leave_room_id, user1_id, tok=user1_tok)
+        self.helper.leave(leave_room_id, user1_id, tok=user1_tok)
+
+        # Setup the ban room (user2 bans user1 from the room)
+        ban_room_id = self.helper.create_room_as(
+            user2_id, tok=user2_tok, is_public=True
+        )
+        self.helper.join(ban_room_id, user1_id, tok=user1_tok)
+        self.helper.ban(ban_room_id, src=user2_id, targ=user1_id, tok=user2_tok)
+
+        # Setup the kick room (user2 kicks user1 from the room)
+        kick_room_id = self.helper.create_room_as(
+            user2_id, tok=user2_tok, is_public=True
+        )
+        self.helper.join(kick_room_id, user1_id, tok=user1_tok)
+        # Kick user1 from the room
+        self.helper.change_membership(
+            room=kick_room_id,
+            src=user2_id,
+            targ=user1_id,
+            tok=user2_tok,
+            membership=Membership.LEAVE,
+            extra_data={
+                "reason": "Bad manners",
+            },
+        )
+
+        before_room_forgets = self.event_sources.get_current_token()
+
+        # Forget the room after we already have our tokens. This doesn't change
+        # the membership event itself but will mark it internally in Synapse
+        channel = self.make_request(
+            "POST",
+            f"/_matrix/client/r0/rooms/{leave_room_id}/forget",
+            content={},
+            access_token=user1_tok,
+        )
+        self.assertEqual(channel.code, 200, channel.result)
+        channel = self.make_request(
+            "POST",
+            f"/_matrix/client/r0/rooms/{ban_room_id}/forget",
+            content={},
+            access_token=user1_tok,
+        )
+        self.assertEqual(channel.code, 200, channel.result)
+        channel = self.make_request(
+            "POST",
+            f"/_matrix/client/r0/rooms/{kick_room_id}/forget",
+            content={},
+            access_token=user1_tok,
+        )
+        self.assertEqual(channel.code, 200, channel.result)
+
+        room_id_results = self.get_success(
+            self.sliding_sync_handler.get_sync_room_ids_for_user(
+                UserID.from_string(user1_id),
+                from_token=before_room_forgets,
+                to_token=before_room_forgets,
+            )
+        )
+
+        # We shouldn't see the room because it was forgotten
+        self.assertEqual(room_id_results, set())
+
     def test_only_newly_left_rooms_show_up(self) -> None:
         """
         Test that newly_left rooms still show up in the sync response but rooms that
@@ -228,7 +369,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
     def test_join_during_range_and_left_room_after_to_token(self) -> None:
         """
         Room still shows up if we left the room but were joined during the
-        from_token/to_token. See condition "2b)" comments in the
+        from_token/to_token. See condition "2a)" comments in the
         `get_sync_room_ids_for_user()` method.
         """
         user1_id = self.register_user("user1", "pass")
@@ -258,7 +399,7 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
     def test_join_before_range_and_left_room_after_to_token(self) -> None:
         """
         Room still shows up if we left the room but were joined before the `from_token`
-        so it should show up. See condition "2b)" comments in the
+        so it should show up. See condition "2a)" comments in the
         `get_sync_room_ids_for_user()` method.
         """
         user1_id = self.register_user("user1", "pass")
|
||||||
# We should still see the room because we were joined before the `from_token`
|
# We should still see the room because we were joined before the `from_token`
|
||||||
self.assertEqual(room_id_results, {room_id1})
|
self.assertEqual(room_id_results, {room_id1})
|
||||||
|
|
||||||
|
def test_kicked_before_range_and_left_after_to_token(self) -> None:
|
||||||
|
"""
|
||||||
|
Room still shows up if we left the room but were kicked before the `from_token`
|
||||||
|
so it should show up. See condition "2a)" comments in the
|
||||||
|
`get_sync_room_ids_for_user()` method.
|
||||||
|
"""
|
||||||
|
user1_id = self.register_user("user1", "pass")
|
||||||
|
user1_tok = self.login(user1_id, "pass")
|
||||||
|
user2_id = self.register_user("user2", "pass")
|
||||||
|
user2_tok = self.login(user2_id, "pass")
|
||||||
|
|
||||||
|
# Setup the kick room (user2 kicks user1 from the room)
|
||||||
|
kick_room_id = self.helper.create_room_as(
|
||||||
|
user2_id, tok=user2_tok, is_public=True
|
||||||
|
)
|
||||||
|
self.helper.join(kick_room_id, user1_id, tok=user1_tok)
|
||||||
|
# Kick user1 from the room
|
||||||
|
self.helper.change_membership(
|
||||||
|
room=kick_room_id,
|
||||||
|
src=user2_id,
|
||||||
|
targ=user1_id,
|
||||||
|
tok=user2_tok,
|
||||||
|
membership=Membership.LEAVE,
|
||||||
|
extra_data={
|
||||||
|
"reason": "Bad manners",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
after_kick_token = self.event_sources.get_current_token()
|
||||||
|
|
||||||
|
# Leave the room after we already have our tokens
|
||||||
|
#
|
||||||
|
# We have to join before we can leave (leave -> leave isn't a valid transition
|
||||||
|
# or at least it doesn't work in Synapse, 403 forbidden)
|
||||||
|
self.helper.join(kick_room_id, user1_id, tok=user1_tok)
|
||||||
|
self.helper.leave(kick_room_id, user1_id, tok=user1_tok)
|
||||||
|
|
||||||
|
room_id_results = self.get_success(
|
||||||
|
self.sliding_sync_handler.get_sync_room_ids_for_user(
|
||||||
|
UserID.from_string(user1_id),
|
||||||
|
from_token=after_kick_token,
|
||||||
|
to_token=after_kick_token,
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
# We shouldn't see the room because it was forgotten
|
||||||
|
self.assertEqual(room_id_results, {kick_room_id})
|
||||||
|
|
||||||
def test_newly_left_during_range_and_join_leave_after_to_token(self) -> None:
|
def test_newly_left_during_range_and_join_leave_after_to_token(self) -> None:
|
||||||
"""
|
"""
|
||||||
Newly left room should show up. But we're also testing that joining and leaving
|
Newly left room should show up. But we're also testing that joining and leaving
|
||||||
|
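At the event level, the kick in the new test is nothing special: an `m.room.member` event with `membership: leave` whose sender (`src`) differs from the target (`targ`), with the reason carried in the event content. The client-server API's dedicated kick endpoint produces the same membership transition; a sketch of the equivalent request (an illustrative alternative to the `change_membership` helper, not what the test itself does):

    # Equivalent kick via the dedicated client-server endpoint: same leave
    # event with sender != target, reason carried in the request body.
    channel = self.make_request(
        "POST",
        f"/_matrix/client/v3/rooms/{kick_room_id}/kick",
        content={"user_id": user1_id, "reason": "Bad manners"},
        access_token=user2_tok,
    )
    self.assertEqual(channel.code, 200, channel.result)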
@@ -354,6 +543,40 @@ class GetSyncRoomIdsForUserTestCase(HomeserverTestCase):
         # Room shouldn't show up because it was left before the `from_token`
         self.assertEqual(room_id_results, set())

+    def test_leave_before_range_and_join_after_to_token(self) -> None:
+        """
+        Old left room shouldn't show up. But we're also testing that joining after the
+        `to_token` doesn't mess with the results. See condition "2b)" comments in the
+        `get_sync_room_ids_for_user()` method.
+        """
+        user1_id = self.register_user("user1", "pass")
+        user1_tok = self.login(user1_id, "pass")
+        user2_id = self.register_user("user2", "pass")
+        user2_tok = self.login(user2_id, "pass")
+
+        # We create the room with user2 so the room isn't left with no members
+        # when we leave, and we can still re-join.
+        room_id1 = self.helper.create_room_as(user2_id, tok=user2_tok, is_public=True)
+        # Join and leave the room before the from/to range
+        self.helper.join(room_id1, user1_id, tok=user1_tok)
+        self.helper.leave(room_id1, user1_id, tok=user1_tok)
+
+        after_room1_token = self.event_sources.get_current_token()
+
+        # Join the room after we already have our tokens
+        self.helper.join(room_id1, user1_id, tok=user1_tok)
+
+        room_id_results = self.get_success(
+            self.sliding_sync_handler.get_sync_room_ids_for_user(
+                UserID.from_string(user1_id),
+                from_token=after_room1_token,
+                to_token=after_room1_token,
+            )
+        )
+
+        # Room shouldn't show up because it was left before the `from_token`
+        self.assertEqual(room_id_results, set())
+
     def test_join_leave_multiple_times_during_range_and_after_to_token(
         self,
     ) -> None:
@@ -32,7 +32,7 @@ from twisted.web.resource import Resource
 from synapse.api.constants import EduTypes
 from synapse.api.errors import AuthError
 from synapse.federation.transport.server import TransportLayerServer
-from synapse.handlers.typing import TypingWriterHandler
+from synapse.handlers.typing import FORGET_TIMEOUT, TypingWriterHandler
 from synapse.http.federation.matrix_federation_agent import MatrixFederationAgent
 from synapse.server import HomeServer
 from synapse.types import JsonDict, Requester, StreamKeyType, UserID, create_requester

@@ -501,3 +501,54 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
                 }
             ],
         )
+
+    def test_prune_typing_replication(self) -> None:
+        """Regression test for `get_all_typing_updates` breaking when we prune
+        old updates
+        """
+        self.room_members = [U_APPLE, U_BANANA]
+
+        instance_name = self.hs.get_instance_name()
+
+        self.get_success(
+            self.handler.started_typing(
+                target_user=U_APPLE,
+                requester=create_requester(U_APPLE),
+                room_id=ROOM_ID,
+                timeout=10000,
+            )
+        )
+
+        rows, _, _ = self.get_success(
+            self.handler.get_all_typing_updates(
+                instance_name=instance_name,
+                last_id=0,
+                current_id=self.handler.get_current_token(),
+                limit=100,
+            )
+        )
+        self.assertEqual(rows, [(1, [ROOM_ID, [U_APPLE.to_string()]])])
+
+        self.reactor.advance(20000)
+
+        rows, _, _ = self.get_success(
+            self.handler.get_all_typing_updates(
+                instance_name=instance_name,
+                last_id=1,
+                current_id=self.handler.get_current_token(),
+                limit=100,
+            )
+        )
+        self.assertEqual(rows, [(2, [ROOM_ID, []])])
+
+        self.reactor.advance(FORGET_TIMEOUT)
+
+        rows, _, _ = self.get_success(
+            self.handler.get_all_typing_updates(
+                instance_name=instance_name,
+                last_id=1,
+                current_id=self.handler.get_current_token(),
+                limit=100,
+            )
+        )
+        self.assertEqual(rows, [])
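The assertions above fix the wire shape of typing replication rows, `(stream_id, [room_id, list_of_typing_user_ids])`, and the regression being guarded: after old updates are pruned (the `FORGET_TIMEOUT` advance), `get_all_typing_updates` must return nothing rather than crash or resurrect stale rows. A rough sketch of how a replication consumer might page this stream under those assumptions (the paging loop is illustrative, not Synapse's actual replication code):

    # Illustrative paging loop over the typing stream.
    last_id = 0
    while True:
        rows, upto_token, limited = self.get_success(
            self.handler.get_all_typing_updates(
                instance_name=instance_name,
                last_id=last_id,
                current_id=self.handler.get_current_token(),
                limit=100,
            )
        )
        for stream_id, (room_id, typing_user_ids) in rows:
            pass  # apply the typing update for room_id at stream_id
        last_id = upto_token
        if not limited:
            break  # `limited` False means we've consumed up to current_id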
@@ -154,7 +154,10 @@ class EventsWorkerStoreTestCase(BaseWorkerStoreTestCase):
                     USER_ID,
                     "invite",
                     event.event_id,
-                    event.internal_metadata.stream_ordering,
+                    PersistedEventPosition(
+                        self.hs.get_instance_name(),
+                        event.internal_metadata.stream_ordering,
+                    ),
                     RoomVersions.V1.identifier,
                 )
             ],
@@ -35,7 +35,7 @@ from synapse.api.constants import (
 )
 from synapse.rest.client import devices, knock, login, read_marker, receipts, room, sync
 from synapse.server import HomeServer
-from synapse.types import JsonDict
+from synapse.types import JsonDict, RoomStreamToken, StreamKeyType
 from synapse.util import Clock

 from tests import unittest
@@ -1229,6 +1229,8 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
     def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
         self.store = hs.get_datastores().main
         self.sync_endpoint = "/_matrix/client/unstable/org.matrix.msc3575/sync"
+        self.store = hs.get_datastores().main
+        self.event_sources = hs.get_event_sources()

     def _create_dm_room(
         self,
@@ -1331,6 +1333,60 @@ class SlidingSyncTestCase(unittest.HomeserverTestCase):
             channel.json_body["lists"]["foo-list"],
         )

+    def test_wait_for_sync_token(self) -> None:
+        """
+        Test that the worker waits until it has caught up to the given token
+        """
+        alice_user_id = self.register_user("alice", "correcthorse")
+        alice_access_token = self.login(alice_user_id, "correcthorse")
+
+        # Create a future token that will cause us to wait. Since we never send a
+        # new event to reach that future stream_ordering, the worker will wait for
+        # the full timeout.
+        current_token = self.event_sources.get_current_token()
+        future_position_token = current_token.copy_and_replace(
+            StreamKeyType.ROOM,
+            RoomStreamToken(stream=current_token.room_key.stream + 1),
+        )
+
+        future_position_token_serialized = self.get_success(
+            future_position_token.to_string(self.store)
+        )
+
+        # Make the Sliding Sync request
+        channel = self.make_request(
+            "POST",
+            self.sync_endpoint + f"?pos={future_position_token_serialized}",
+            {
+                "lists": {
+                    "foo-list": {
+                        "ranges": [[0, 99]],
+                        "sort": ["by_notification_level", "by_recency", "by_name"],
+                        "required_state": [
+                            ["m.room.join_rules", ""],
+                            ["m.room.history_visibility", ""],
+                            ["m.space.child", "*"],
+                        ],
+                        "timeline_limit": 1,
+                    }
+                }
+            },
+            access_token=alice_access_token,
+            await_result=False,
+        )
+        # Block for 10 seconds to make `notifier.wait_for_stream_token(from_token)`
+        # time out
+        with self.assertRaises(TimedOutException):
+            channel.await_result(timeout_ms=9900)
+        channel.await_result(timeout_ms=200)
+        self.assertEqual(channel.code, 200, channel.json_body)
+
+        # We expect the `next_pos` in the result to be the same as what we
+        # requested because we weren't able to find anything new yet.
+        self.assertEqual(
+            channel.json_body["next_pos"], future_position_token_serialized
+        )
+
     def test_filter_list(self) -> None:
         """
         Test that filters apply to lists
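The blocking behaviour in `test_wait_for_sync_token` comes from the server refusing to compute a response until its streams reach the client-supplied `?pos=` token. A sketch of that handler-side pattern, built around the `notifier.wait_for_stream_token(from_token)` call named in the test's comment (the surrounding handler shape is an assumption, not Synapse's exact code):

    # Handler-side sketch; only `wait_for_stream_token` is taken from the
    # test's comment, the rest is illustrative.
    async def current_sync_for_user(self, sync_config, from_token):
        # The client may hand us a token from our future, e.g. if it last
        # spoke to a more up-to-date worker. Block until this process has
        # caught up, bounded by the request timeout.
        await self.notifier.wait_for_stream_token(from_token)
        # If nothing new arrived in time, respond with `next_pos` equal to
        # the requested token so the client simply retries from there.
        ...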
@@ -330,9 +330,12 @@ class RestHelper:
             data,
         )

-        assert channel.code == expect_code, "Expected: %d, got: %d, resp: %r" % (
+        assert (
+            channel.code == expect_code
+        ), "Expected: %d, got: %d, PUT %s -> resp: %r" % (
             expect_code,
             channel.code,
+            path,
             channel.result["body"],
         )