Mirror of https://github.com/element-hq/synapse, synced 2024-09-17 18:55:10 +00:00

Merge remote-tracking branch 'origin/develop' into squah/faster_room_joins_unblock_lazy_loading_sync_2

Commit 6fd9f45619

116 changed files with 5039 additions and 2905 deletions
14  .github/workflows/tests.yml (vendored)

@@ -53,10 +53,22 @@ jobs:
     env:
       PULL_REQUEST_NUMBER: ${{ github.event.number }}

+  lint-pydantic:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+        with:
+          ref: ${{ github.event.pull_request.head.sha }}
+          fetch-depth: 0
+      - uses: matrix-org/setup-python-poetry@v1
+        with:
+          extras: "all"
+      - run: poetry run scripts-dev/check_pydantic_models.py
+
   # Dummy step to gate other tests on without repeating the whole list
   linting-done:
     if: ${{ !cancelled() }} # Run this even if prior jobs were skipped
-    needs: [lint, lint-crlf, lint-newsfile, check-sampleconfig, check-schema-delta]
+    needs: [lint, lint-crlf, lint-newsfile, lint-pydantic, check-sampleconfig, check-schema-delta]
     runs-on: ubuntu-latest
     steps:
       - run: "true"
17  CHANGES.md

@@ -1,3 +1,18 @@
+Synapse 1.65.0 (2022-08-16)
+===========================
+
+No significant changes since 1.65.0rc2.
+
+
+Synapse 1.65.0rc2 (2022-08-11)
+==============================
+
+Internal Changes
+----------------
+
+- Revert 'Remove the unspecced `room_id` field in the `/hierarchy` response. ([\#13365](https://github.com/matrix-org/synapse/issues/13365))' to give more time for clients to update. ([\#13501](https://github.com/matrix-org/synapse/issues/13501))
+
+
 Synapse 1.65.0rc1 (2022-08-09)
 ==============================

@@ -16,7 +31,7 @@ Bugfixes
 --------

 - Update the version of the LDAP3 auth provider module included in the `matrixdotorg/synapse` DockerHub images and the Debian packages hosted on packages.matrix.org to 0.2.2. This version fixes a regression in the module. ([\#13470](https://github.com/matrix-org/synapse/issues/13470))
-- Fix a bug introduced in Synapse v1.41.0 where the `/hierarchy` API returned non-standard information (a `room_id` field under each entry in `children_state`). ([\#13365](https://github.com/matrix-org/synapse/issues/13365))
+- Fix a bug introduced in Synapse v1.41.0 where the `/hierarchy` API returned non-standard information (a `room_id` field under each entry in `children_state`) (this was reverted in v1.65.0rc2, see changelog notes above). ([\#13365](https://github.com/matrix-org/synapse/issues/13365))
 - Fix a bug introduced in Synapse 0.24.0 that would respond with the wrong error status code to `/joined_members` requests when the requester is not a current member of the room. Contributed by @andrewdoh. ([\#13374](https://github.com/matrix-org/synapse/issues/13374))
 - Fix bug in handling of typing events for appservices. Contributed by Nick @ Beeper (@fizzadar). ([\#13392](https://github.com/matrix-org/synapse/issues/13392))
 - Fix a bug introduced in Synapse 1.57.0 where rooms listed in `exclude_rooms_from_sync` in the configuration file would not be properly excluded from incremental syncs. ([\#13408](https://github.com/matrix-org/synapse/issues/13408))
1  changelog.d/13188.feature (new file)
+Improve validation of request bodies for the following client-server API endpoints: [`/account/password`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3accountpassword), [`/account/password/email/requestToken`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3accountpasswordemailrequesttoken), [`/account/deactivate`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3accountdeactivate) and [`/account/3pid/email/requestToken`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3account3pidemailrequesttoken).

1  changelog.d/13459.misc (new file)
+Faster joins: update the rejected state of events during de-partial-stating.

1  changelog.d/13472.doc (new file)
+Add `openssl` example for generating registration HMAC digest.

1  changelog.d/13485.misc (new file)
+Add comments about how event push actions are rotated.

1  changelog.d/13488.misc (new file)
+Use literals in place of `HTTPStatus` constants in tests.

1  changelog.d/13489.misc (new file)
+Instrument the federation/backfill part of `/messages` for understandable traces in Jaeger.

1  changelog.d/13492.doc (new file)
+Document that event purging related to the `redaction_retention_period` config option is executed only every 5 minutes.

1  changelog.d/13493.misc (new file)
+Modify HTML template content to better support mobile devices' screen sizes.

2  changelog.d/13497.doc (new file)
+Add a warning to retention documentation regarding the possibility of database corruption.

1  changelog.d/13499.misc (new file)
+Instrument `FederationStateIdsServlet` (`/state_ids`) for understandable traces in Jaeger.

1  changelog.d/13502.misc (new file)
+Add a linter script which will reject non-strict types in Pydantic models.

1  changelog.d/13503.feature (new file)
+Add forgotten status to Room Details API.

1  changelog.d/13514.bugfix (new file)
+Faster room joins: make `/joined_members` block whilst the room is partial stated.

1  changelog.d/13515.doc (new file)
+Document that the `DOCKER_BUILDKIT=1` flag is needed to build the docker image.

1  changelog.d/13522.misc (new file)
+Improve performance of sending messages in rooms with thousands of local users.

1  changelog.d/13531.misc (new file)
+Faster room joins: Refuse to start when faster joins is enabled on a deployment with workers, since worker configurations are not currently supported.

1  changelog.d/13533.misc (new file)
+Track HTTP response times over 10 seconds from `/messages` (`synapse_room_message_list_rest_servlet_response_time_seconds`).

1  changelog.d/13535.misc (new file)
+Add metrics to time how long it takes us to do backfill processing (`synapse_federation_backfill_processing_before_time_seconds`, `synapse_federation_backfill_processing_after_time_seconds`).

1  changelog.d/13536.doc (new file)
+Add missing links in `user_consent` section of configuration manual.

1  changelog.d/13538.doc (new file)
+Fix the doc and some warnings that were referring to the nonexistent `custom_templates_directory` setting (instead of `custom_template_directory`).

1  changelog.d/13544.misc (new file)
+Add metrics to track rate limiter queue timing (`synapse_rate_limit_queue_wait_time_seconds`).

1  changelog.d/13547.misc (new file)
+Improve performance of sending messages in rooms with thousands of local users.
(One file diff suppressed because it is too large.)
12  debian/changelog (vendored)

@@ -1,3 +1,15 @@
+matrix-synapse-py3 (1.65.0) stable; urgency=medium
+
+  * New Synapse release 1.65.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 16 Aug 2022 16:51:26 +0100
+
+matrix-synapse-py3 (1.65.0~rc2) stable; urgency=medium
+
+  * New Synapse release 1.65.0rc2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Thu, 11 Aug 2022 11:38:18 +0100
+
 matrix-synapse-py3 (1.65.0~rc1) stable; urgency=medium

   * New Synapse release 1.65.0rc1.
@@ -191,7 +191,7 @@ If you need to build the image from a Synapse checkout, use the following `docker
 build` command from the repo's root:

 ```
-docker build -t matrixdotorg/synapse -f docker/Dockerfile .
+DOCKER_BUILDKIT=1 docker build -t matrixdotorg/synapse -f docker/Dockerfile .
 ```

 You can choose to build a different docker image by changing the value of the `-f` flag to
@@ -46,7 +46,24 @@ As an example:

 The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being
 the shared secret and the content being the nonce, user, password, either the
 string "admin" or "notadmin", and optionally the user_type
-each separated by NULs. For an example of generation in Python:
+each separated by NULs.
+
+Here is an easy way to generate the HMAC digest if you have Bash and OpenSSL:
+
+```bash
+# Update these values and then paste this code block into a bash terminal
+nonce='thisisanonce'
+username='pepper_roni'
+password='pizza'
+admin='admin'
+secret='shared_secret'
+
+printf '%s\0%s\0%s\0%s' "$nonce" "$username" "$password" "$admin" |
+  openssl sha1 -hmac "$secret" |
+  awk '{print $2}'
+```
+
+For an example of generation in Python:

 ```python
 import hmac, hashlib
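The Python example referenced in the diff above is cut off. As an illustrative sketch (not necessarily the exact code from the documentation), the bash recipe translates to Python like this, using the same placeholder values:

```python
import hashlib
import hmac

# Placeholder values matching the bash example above
nonce = "thisisanonce"
username = "pepper_roni"
password = "pizza"
admin = "admin"  # or "notadmin"
secret = "shared_secret"

# Nonce, user, password and admin flag, each separated by NUL bytes
content = "\0".join((nonce, username, password, admin)).encode("utf8")

# HMAC-SHA1 keyed with the shared secret, hex-encoded
mac = hmac.new(secret.encode("utf8"), content, hashlib.sha1).hexdigest()
print(mac)
```

The printed digest should match the output of the `openssl sha1 -hmac` pipeline for the same inputs.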
@@ -302,6 +302,8 @@ The following fields are possible in the JSON response body:

 * `state_events` - Total number of state_events of a room. Complexity of the room.
 * `room_type` - The type of the room taken from the room's creation event; for example "m.space" if the room is a space.
   If the room does not define a type, the value will be `null`.
+* `forgotten` - Whether all local users have
+  [forgotten](https://spec.matrix.org/latest/client-server-api/#leaving-rooms) the room.

 The API is:

@@ -330,7 +332,8 @@ A response body like the following is returned:
     "guest_access": null,
     "history_visibility": "shared",
     "state_events": 93534,
-    "room_type": "m.space"
+    "room_type": "m.space",
+    "forgotten": false
 }
 ```
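To illustrate consuming the new field, here is a hedged Python sketch that reads `forgotten` out of a Room Details response body; `room_is_forgotten` is a helper name invented for this example, and the sample payload is trimmed from the documented response above:

```python
import json


def room_is_forgotten(response_body: str) -> bool:
    """Return True if every local user has forgotten the room.

    `forgotten` was added to the Room Details admin API response in this
    change; treat a missing key as False when talking to older servers.
    """
    details = json.loads(response_body)
    return bool(details.get("forgotten", False))


# Trimmed sample response based on the documentation above
sample = '{"state_events": 93534, "room_type": "m.space", "forgotten": false}'
print(room_is_forgotten(sample))  # prints False
```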
@@ -8,7 +8,8 @@ and allow server and room admins to configure how long messages should
 be kept in a homeserver's database before being purged from it.

 **Please note that, as this feature isn't part of the Matrix
 specification yet, this implementation is to be considered as
-experimental.**
+experimental. There are known bugs which may cause database corruption.
+Proceed with caution.**

 A message retention policy is mainly defined by its `max_lifetime`
 parameter, which defines how long a message can be kept around after
@@ -9,7 +9,7 @@ in, allowing them to specify custom templates:

 ```yaml
 templates:
-  custom_templates_directory: /path/to/custom/templates/
+  custom_template_directory: /path/to/custom/templates/
 ```

 If this setting is not set, or the files named below are not found within the directory,
@@ -759,6 +759,10 @@ allowed_avatar_mimetypes: ["image/png", "image/jpeg", "image/gif"]

 How long to keep redacted events in unredacted form in the database. After
 this period redacted events get replaced with their redacted form in the DB.
+
+Synapse will check whether the retention period has concluded for redacted
+events every 5 minutes. Thus, even if this option is set to `0`, Synapse may
+still take up to 5 minutes to purge redacted events from the database.

 Defaults to `7d`. Set to `null` to disable.

 Example configuration:
@@ -845,7 +849,11 @@ which are older than the room's maximum retention period. Synapse will also
 filter events received over federation so that events that should have been
 purged are ignored and not stored again.

-The message retention policies feature is disabled by default.
+The message retention policies feature is disabled by default. Please be advised
+that enabling this feature carries some risk. There are known bugs with the implementation
+which can cause database corruption. Setting retention to delete older history
+is less risky than deleting newer history but in general caution is advised when enabling this
+experimental feature. You can read more about this feature [here](../../message_retention_policies.md).

 This setting has the following sub-options:
 * `default_policy`: Default retention policy. If set, Synapse will apply it to rooms that lack the
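For context, a retention configuration that enables the feature might look like the following sketch. The option names follow the configuration manual, but the values are purely illustrative; double-check against your Synapse version before use, since the feature is experimental:

```yaml
retention:
  enabled: true
  # Applied to rooms whose own retention policy does not set these values.
  default_policy:
    min_lifetime: 1d
    max_lifetime: 1y
  # Background jobs that purge expired events.
  purge_jobs:
    - longest_max_lifetime: 3d
      interval: 12h
    - shortest_max_lifetime: 3d
      interval: 1d
```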
@@ -3344,7 +3352,7 @@ user_directory:

 For detailed instructions on user consent configuration, see [here](../../consent_tracking.md).

 Parts of this section are required if enabling the `consent` resource under
-`listeners`, in particular `template_dir` and `version`. # TODO: link `listeners`
+[`listeners`](#listeners), in particular `template_dir` and `version`.

 * `template_dir`: gives the location of the templates for the HTML forms.
   This directory should contain one subdirectory per language (eg, `en`, `fr`),

@@ -3356,7 +3364,7 @@ Parts of this section are required if enabling the `consent` resource under
   parameter.

 * `server_notice_content`: if enabled, will send a user a "Server Notice"
-  asking them to consent to the privacy policy. The `server_notices` section ##TODO: link
+  asking them to consent to the privacy policy. The [`server_notices` section](#server_notices)
   must also be configured for this to work. Notices will *not* be sent to
   guest users unless `send_server_notice_to_guests` is set to true.
2  mypy.ini

@@ -1,6 +1,6 @@
 [mypy]
 namespace_packages = True
-plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
+plugins = pydantic.mypy, mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
 follow_imports = normal
 check_untyped_defs = True
 show_error_codes = True
54  poetry.lock (generated)

@@ -778,6 +778,21 @@ category = "main"
 optional = false
 python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"

+[[package]]
+name = "pydantic"
+version = "1.9.1"
+description = "Data validation and settings management using python type hints"
+category = "main"
+optional = false
+python-versions = ">=3.6.1"
+
+[package.dependencies]
+typing-extensions = ">=3.7.4.3"
+
+[package.extras]
+dotenv = ["python-dotenv (>=0.10.4)"]
+email = ["email-validator (>=1.0.3)"]
+
 [[package]]
 name = "pyflakes"
 version = "2.4.0"

@@ -1563,7 +1578,7 @@ url_preview = ["lxml"]

 [metadata]
 lock-version = "1.1"
 python-versions = "^3.7.1"
-content-hash = "c24bbcee7e86dbbe7cdbf49f91a25b310bf21095452641e7440129f59b077f78"
+content-hash = "7de518bf27967b3547eab8574342cfb67f87d6b47b4145c13de11112141dbf2d"

 [metadata.files]
 attrs = [

@@ -2260,6 +2275,43 @@ pycparser = [
     {file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
     {file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
 ]
+pydantic = [
+    {file = "pydantic-1.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c8098a724c2784bf03e8070993f6d46aa2eeca031f8d8a048dff277703e6e193"},
+    {file = "pydantic-1.9.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c320c64dd876e45254bdd350f0179da737463eea41c43bacbee9d8c9d1021f11"},
+    {file = "pydantic-1.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18f3e912f9ad1bdec27fb06b8198a2ccc32f201e24174cec1b3424dda605a310"},
+    {file = "pydantic-1.9.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c11951b404e08b01b151222a1cb1a9f0a860a8153ce8334149ab9199cd198131"},
+    {file = "pydantic-1.9.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:8bc541a405423ce0e51c19f637050acdbdf8feca34150e0d17f675e72d119580"},
+    {file = "pydantic-1.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:e565a785233c2d03724c4dc55464559639b1ba9ecf091288dd47ad9c629433bd"},
+    {file = "pydantic-1.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:a4a88dcd6ff8fd47c18b3a3709a89adb39a6373f4482e04c1b765045c7e282fd"},
+    {file = "pydantic-1.9.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:447d5521575f18e18240906beadc58551e97ec98142266e521c34968c76c8761"},
+    {file = "pydantic-1.9.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:985ceb5d0a86fcaa61e45781e567a59baa0da292d5ed2e490d612d0de5796918"},
+    {file = "pydantic-1.9.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:059b6c1795170809103a1538255883e1983e5b831faea6558ef873d4955b4a74"},
+    {file = "pydantic-1.9.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:d12f96b5b64bec3f43c8e82b4aab7599d0157f11c798c9f9c528a72b9e0b339a"},
+    {file = "pydantic-1.9.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:ae72f8098acb368d877b210ebe02ba12585e77bd0db78ac04a1ee9b9f5dd2166"},
+    {file = "pydantic-1.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:79b485767c13788ee314669008d01f9ef3bc05db9ea3298f6a50d3ef596a154b"},
+    {file = "pydantic-1.9.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:494f7c8537f0c02b740c229af4cb47c0d39840b829ecdcfc93d91dcbb0779892"},
+    {file = "pydantic-1.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f0f047e11febe5c3198ed346b507e1d010330d56ad615a7e0a89fae604065a0e"},
+    {file = "pydantic-1.9.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:969dd06110cb780da01336b281f53e2e7eb3a482831df441fb65dd30403f4608"},
+    {file = "pydantic-1.9.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:177071dfc0df6248fd22b43036f936cfe2508077a72af0933d0c1fa269b18537"},
+    {file = "pydantic-1.9.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:9bcf8b6e011be08fb729d110f3e22e654a50f8a826b0575c7196616780683380"},
+    {file = "pydantic-1.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a955260d47f03df08acf45689bd163ed9df82c0e0124beb4251b1290fa7ae728"},
+    {file = "pydantic-1.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9ce157d979f742a915b75f792dbd6aa63b8eccaf46a1005ba03aa8a986bde34a"},
+    {file = "pydantic-1.9.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0bf07cab5b279859c253d26a9194a8906e6f4a210063b84b433cf90a569de0c1"},
+    {file = "pydantic-1.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d93d4e95eacd313d2c765ebe40d49ca9dd2ed90e5b37d0d421c597af830c195"},
+    {file = "pydantic-1.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1542636a39c4892c4f4fa6270696902acb186a9aaeac6f6cf92ce6ae2e88564b"},
+    {file = "pydantic-1.9.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a9af62e9b5b9bc67b2a195ebc2c2662fdf498a822d62f902bf27cccb52dbbf49"},
+    {file = "pydantic-1.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:fe4670cb32ea98ffbf5a1262f14c3e102cccd92b1869df3bb09538158ba90fe6"},
+    {file = "pydantic-1.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:9f659a5ee95c8baa2436d392267988fd0f43eb774e5eb8739252e5a7e9cf07e0"},
+    {file = "pydantic-1.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b83ba3825bc91dfa989d4eed76865e71aea3a6ca1388b59fc801ee04c4d8d0d6"},
+    {file = "pydantic-1.9.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1dd8fecbad028cd89d04a46688d2fcc14423e8a196d5b0a5c65105664901f810"},
+    {file = "pydantic-1.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:02eefd7087268b711a3ff4db528e9916ac9aa18616da7bca69c1871d0b7a091f"},
+    {file = "pydantic-1.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7eb57ba90929bac0b6cc2af2373893d80ac559adda6933e562dcfb375029acee"},
+    {file = "pydantic-1.9.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:4ce9ae9e91f46c344bec3b03d6ee9612802682c1551aaf627ad24045ce090761"},
+    {file = "pydantic-1.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:72ccb318bf0c9ab97fc04c10c37683d9eea952ed526707fabf9ac5ae59b701fd"},
+    {file = "pydantic-1.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:61b6760b08b7c395975d893e0b814a11cf011ebb24f7d869e7118f5a339a82e1"},
+    {file = "pydantic-1.9.1-py3-none-any.whl", hash = "sha256:4988c0f13c42bfa9ddd2fe2f569c9d54646ce84adc5de84228cfe83396f3bd58"},
+    {file = "pydantic-1.9.1.tar.gz", hash = "sha256:1ed987c3ff29fff7fd8c3ea3a3ea877ad310aae2ef9889a119e22d3f2db0691a"},
+]
 pyflakes = [
     {file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
     {file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
@@ -54,7 +54,7 @@ skip_gitignore = true

 [tool.poetry]
 name = "matrix-synapse"
-version = "1.65.0rc1"
+version = "1.65.0"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "Apache-2.0"

@@ -158,6 +158,9 @@ packaging = ">=16.1"
 # At the time of writing, we only use functions from the version `importlib.metadata`
 # which shipped in Python 3.8. This corresponds to version 1.4 of the backport.
 importlib_metadata = { version = ">=1.4", python = "<3.8" }
+# This is the most recent version of Pydantic available on common distros.
+pydantic = ">=1.7.4"
+

 # Optional Dependencies
425  scripts-dev/check_pydantic_models.py (new executable file)

@@ -0,0 +1,425 @@
+#! /usr/bin/env python
+# Copyright 2022 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+A script which enforces that Synapse always uses strict types when defining a Pydantic
+model.
+
+Pydantic does not yet offer a strict mode, but it is planned for pydantic v2. See
+
+    https://github.com/pydantic/pydantic/issues/1098
+    https://pydantic-docs.helpmanual.io/blog/pydantic-v2/#strict-mode
+
+Until then, this script is a best effort to stop us from introducing type coercion bugs
+(like the infamous stringy power levels fixed in room version 10).
+"""
+import argparse
+import contextlib
+import functools
+import importlib
+import logging
+import os
+import pkgutil
+import sys
+import textwrap
+import traceback
+import unittest.mock
+from contextlib import contextmanager
+from typing import Any, Callable, Dict, Generator, List, Set, Type, TypeVar
+
+from parameterized import parameterized
+from pydantic import BaseModel as PydanticBaseModel, conbytes, confloat, conint, constr
+from pydantic.typing import get_args
+from typing_extensions import ParamSpec
+
+logger = logging.getLogger(__name__)
+
+CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG: List[Callable] = [
+    constr,
+    conbytes,
+    conint,
+    confloat,
+]
+
+TYPES_THAT_PYDANTIC_WILL_COERCE_TO = [
+    str,
+    bytes,
+    int,
+    float,
+    bool,
+]
+
+
+P = ParamSpec("P")
+R = TypeVar("R")
+
+
+class ModelCheckerException(Exception):
+    """Dummy exception. Allows us to detect unwanted types during a module import."""
+
+
+class MissingStrictInConstrainedTypeException(ModelCheckerException):
+    factory_name: str
+
+    def __init__(self, factory_name: str):
+        self.factory_name = factory_name
+
+
+class FieldHasUnwantedTypeException(ModelCheckerException):
+    message: str
+
+    def __init__(self, message: str):
+        self.message = message
+
+
+def make_wrapper(factory: Callable[P, R]) -> Callable[P, R]:
+    """We patch `constr` and friends with wrappers that enforce strict=True."""
+
+    @functools.wraps(factory)
+    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
+        # type-ignore: should be redundant once we can use https://github.com/python/mypy/pull/12668
+        if "strict" not in kwargs:  # type: ignore[attr-defined]
+            raise MissingStrictInConstrainedTypeException(factory.__name__)
+        if not kwargs["strict"]:  # type: ignore[index]
+            raise MissingStrictInConstrainedTypeException(factory.__name__)
+        return factory(*args, **kwargs)
+
+    return wrapper
+
+
+def field_type_unwanted(type_: Any) -> bool:
+    """Very rough attempt to detect if a type is unwanted as a Pydantic annotation.
+
+    At present, we exclude types which will coerce, or any generic type involving types
+    which will coerce."""
+    logger.debug("Is %s unwanted?", type_)
+    if type_ in TYPES_THAT_PYDANTIC_WILL_COERCE_TO:
+        logger.debug("yes")
+        return True
+    logger.debug("Maybe. Subargs are %s", get_args(type_))
+    rv = any(field_type_unwanted(t) for t in get_args(type_))
+    logger.debug("Conclusion: %s %s unwanted", type_, "is" if rv else "is not")
+    return rv
+
+
+class PatchedBaseModel(PydanticBaseModel):
+    """A patched version of BaseModel that inspects fields after models are defined.
+
+    We complain loudly if we see an unwanted type.
|
||||||
|
|
||||||
|
Beware: ModelField.type_ is presumably private; this is likely to be very brittle.
|
||||||
|
"""
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def __init_subclass__(cls: Type[PydanticBaseModel], **kwargs: object):
|
||||||
|
for field in cls.__fields__.values():
|
||||||
|
# Note that field.type_ and field.outer_type are computed based on the
|
||||||
|
# annotation type, see pydantic.fields.ModelField._type_analysis
|
||||||
|
if field_type_unwanted(field.outer_type_):
|
||||||
|
# TODO: this only reports the first bad field. Can we find all bad ones
|
||||||
|
# and report them all?
|
||||||
|
raise FieldHasUnwantedTypeException(
|
||||||
|
f"{cls.__module__}.{cls.__qualname__} has field '{field.name}' "
|
||||||
|
f"with unwanted type `{field.outer_type_}`"
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@contextmanager
|
||||||
|
def monkeypatch_pydantic() -> Generator[None, None, None]:
|
||||||
|
"""Patch pydantic with our snooping versions of BaseModel and the con* functions.
|
||||||
|
|
||||||
|
If the snooping functions see something they don't like, they'll raise a
|
||||||
|
ModelCheckingException instance.
|
||||||
|
"""
|
||||||
|
with contextlib.ExitStack() as patches:
|
||||||
|
# Most Synapse code ought to import the patched objects directly from
|
||||||
|
# `pydantic`. But we also patch their containing modules `pydantic.main` and
|
||||||
|
# `pydantic.types` for completeness.
|
||||||
|
patch_basemodel1 = unittest.mock.patch(
|
||||||
|
"pydantic.BaseModel", new=PatchedBaseModel
|
||||||
|
)
|
||||||
|
patch_basemodel2 = unittest.mock.patch(
|
||||||
|
"pydantic.main.BaseModel", new=PatchedBaseModel
|
||||||
|
)
|
||||||
|
patches.enter_context(patch_basemodel1)
|
||||||
|
patches.enter_context(patch_basemodel2)
|
||||||
|
for factory in CONSTRAINED_TYPE_FACTORIES_WITH_STRICT_FLAG:
|
||||||
|
wrapper: Callable = make_wrapper(factory)
|
||||||
|
patch1 = unittest.mock.patch(f"pydantic.{factory.__name__}", new=wrapper)
|
||||||
|
patch2 = unittest.mock.patch(
|
||||||
|
f"pydantic.types.{factory.__name__}", new=wrapper
|
||||||
|
)
|
||||||
|
patches.enter_context(patch1)
|
||||||
|
patches.enter_context(patch2)
|
||||||
|
yield
|
||||||
|
|
||||||
|
|
||||||
|
def format_model_checker_exception(e: ModelCheckerException) -> str:
|
||||||
|
"""Work out which line of code caused e. Format the line in a human-friendly way."""
|
||||||
|
# TODO. FieldHasUnwantedTypeException gives better error messages. Can we ditch the
|
||||||
|
# patches of constr() etc, and instead inspect fields to look for ConstrainedStr
|
||||||
|
# with strict=False? There is some difficulty with the inheritance hierarchy
|
||||||
|
# because StrictStr < ConstrainedStr < str.
|
||||||
|
if isinstance(e, FieldHasUnwantedTypeException):
|
||||||
|
return e.message
|
||||||
|
elif isinstance(e, MissingStrictInConstrainedTypeException):
|
||||||
|
frame_summary = traceback.extract_tb(e.__traceback__)[-2]
|
||||||
|
return (
|
||||||
|
f"Missing `strict=True` from {e.factory_name}() call \n"
|
||||||
|
+ traceback.format_list([frame_summary])[0].lstrip()
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
raise ValueError(f"Unknown exception {e}") from e
|
||||||
|
|
||||||
|
|
||||||
|
def lint() -> int:
|
||||||
|
"""Try to import all of Synapse and see if we spot any Pydantic type coercions.
|
||||||
|
|
||||||
|
Print any problems, then return a status code suitable for sys.exit."""
|
||||||
|
failures = do_lint()
|
||||||
|
if failures:
|
||||||
|
print(f"Found {len(failures)} problem(s)")
|
||||||
|
for failure in sorted(failures):
|
||||||
|
print(failure)
|
||||||
|
return os.EX_DATAERR if failures else os.EX_OK
|
||||||
|
|
||||||
|
|
||||||
|
def do_lint() -> Set[str]:
|
||||||
|
"""Try to import all of Synapse and see if we spot any Pydantic type coercions."""
|
||||||
|
failures = set()
|
||||||
|
|
||||||
|
with monkeypatch_pydantic():
|
||||||
|
logger.debug("Importing synapse")
|
||||||
|
try:
|
||||||
|
# TODO: make "synapse" an argument so we can target this script at
|
||||||
|
# a subpackage
|
||||||
|
module = importlib.import_module("synapse")
|
||||||
|
except ModelCheckerException as e:
|
||||||
|
logger.warning("Bad annotation found when importing synapse")
|
||||||
|
failures.add(format_model_checker_exception(e))
|
||||||
|
return failures
|
||||||
|
|
||||||
|
try:
|
||||||
|
logger.debug("Fetching subpackages")
|
||||||
|
module_infos = list(
|
||||||
|
pkgutil.walk_packages(module.__path__, f"{module.__name__}.")
|
||||||
|
)
|
||||||
|
except ModelCheckerException as e:
|
||||||
|
logger.warning("Bad annotation found when looking for modules to import")
|
||||||
|
failures.add(format_model_checker_exception(e))
|
||||||
|
return failures
|
||||||
|
|
||||||
|
for module_info in module_infos:
|
||||||
|
logger.debug("Importing %s", module_info.name)
|
||||||
|
try:
|
||||||
|
importlib.import_module(module_info.name)
|
||||||
|
except ModelCheckerException as e:
|
||||||
|
logger.warning(
|
||||||
|
f"Bad annotation found when importing {module_info.name}"
|
||||||
|
)
|
||||||
|
failures.add(format_model_checker_exception(e))
|
||||||
|
|
||||||
|
return failures
|
||||||
|
|
||||||
|
|
||||||
|
def run_test_snippet(source: str) -> None:
|
||||||
|
"""Exec a snippet of source code in an isolated environment."""
|
||||||
|
# To emulate `source` being called at the top level of the module,
|
||||||
|
# the globals and locals we provide apparently have to be the same mapping.
|
||||||
|
#
|
||||||
|
# > Remember that at the module level, globals and locals are the same dictionary.
|
||||||
|
# > If exec gets two separate objects as globals and locals, the code will be
|
||||||
|
# > executed as if it were embedded in a class definition.
|
||||||
|
globals_: Dict[str, object]
|
||||||
|
locals_: Dict[str, object]
|
||||||
|
globals_ = locals_ = {}
|
||||||
|
exec(textwrap.dedent(source), globals_, locals_)
|
||||||
|
|
||||||
|
|
||||||
|
class TestConstrainedTypesPatch(unittest.TestCase):
|
||||||
|
def test_expression_without_strict_raises(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
from pydantic import constr
|
||||||
|
constr()
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_called_as_module_attribute_raises(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
import pydantic
|
||||||
|
pydantic.constr()
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_wildcard_import_raises(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
from pydantic import *
|
||||||
|
constr()
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_alternative_import_raises(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
from pydantic.types import constr
|
||||||
|
constr()
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_alternative_import_attribute_raises(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
import pydantic.types
|
||||||
|
pydantic.types.constr()
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_kwarg_but_no_strict_raises(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
from pydantic import constr
|
||||||
|
constr(min_length=10)
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_kwarg_strict_False_raises(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
from pydantic import constr
|
||||||
|
constr(strict=False)
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_kwarg_strict_True_doesnt_raise(self) -> None:
|
||||||
|
with monkeypatch_pydantic():
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
from pydantic import constr
|
||||||
|
constr(strict=True)
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_annotation_without_strict_raises(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
from pydantic import constr
|
||||||
|
x: constr()
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_field_annotation_without_strict_raises(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
from pydantic import BaseModel, conint
|
||||||
|
class C:
|
||||||
|
x: conint()
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
class TestFieldTypeInspection(unittest.TestCase):
|
||||||
|
@parameterized.expand(
|
||||||
|
[
|
||||||
|
("str",),
|
||||||
|
("bytes"),
|
||||||
|
("int",),
|
||||||
|
("float",),
|
||||||
|
("bool"),
|
||||||
|
("Optional[str]",),
|
||||||
|
("Union[None, str]",),
|
||||||
|
("List[str]",),
|
||||||
|
("List[List[str]]",),
|
||||||
|
("Dict[StrictStr, str]",),
|
||||||
|
("Dict[str, StrictStr]",),
|
||||||
|
("TypedDict('D', x=int)",),
|
||||||
|
]
|
||||||
|
)
|
||||||
|
def test_field_holding_unwanted_type_raises(self, annotation: str) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
f"""
|
||||||
|
from typing import *
|
||||||
|
from pydantic import *
|
||||||
|
class C(BaseModel):
|
||||||
|
f: {annotation}
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
@parameterized.expand(
|
||||||
|
[
|
||||||
|
("StrictStr",),
|
||||||
|
("StrictBytes"),
|
||||||
|
("StrictInt",),
|
||||||
|
("StrictFloat",),
|
||||||
|
("StrictBool"),
|
||||||
|
("constr(strict=True, min_length=10)",),
|
||||||
|
("Optional[StrictStr]",),
|
||||||
|
("Union[None, StrictStr]",),
|
||||||
|
("List[StrictStr]",),
|
||||||
|
("List[List[StrictStr]]",),
|
||||||
|
("Dict[StrictStr, StrictStr]",),
|
||||||
|
("TypedDict('D', x=StrictInt)",),
|
||||||
|
]
|
||||||
|
)
|
||||||
|
def test_field_holding_accepted_type_doesnt_raise(self, annotation: str) -> None:
|
||||||
|
with monkeypatch_pydantic():
|
||||||
|
run_test_snippet(
|
||||||
|
f"""
|
||||||
|
from typing import *
|
||||||
|
from pydantic import *
|
||||||
|
class C(BaseModel):
|
||||||
|
f: {annotation}
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def test_field_holding_str_raises_with_alternative_import(self) -> None:
|
||||||
|
with monkeypatch_pydantic(), self.assertRaises(ModelCheckerException):
|
||||||
|
run_test_snippet(
|
||||||
|
"""
|
||||||
|
from pydantic.main import BaseModel
|
||||||
|
class C(BaseModel):
|
||||||
|
f: str
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
parser = argparse.ArgumentParser()
|
||||||
|
parser.add_argument("mode", choices=["lint", "test"], default="lint", nargs="?")
|
||||||
|
parser.add_argument("-v", "--verbose", action="store_true")
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
args = parser.parse_args(sys.argv[1:])
|
||||||
|
logging.basicConfig(
|
||||||
|
format="%(asctime)s %(name)s:%(lineno)d %(levelname)s %(message)s",
|
||||||
|
level=logging.DEBUG if args.verbose else logging.INFO,
|
||||||
|
)
|
||||||
|
# suppress logs we don't care about
|
||||||
|
logging.getLogger("xmlschema").setLevel(logging.WARNING)
|
||||||
|
if args.mode == "lint":
|
||||||
|
sys.exit(lint())
|
||||||
|
elif args.mode == "test":
|
||||||
|
unittest.main(argv=sys.argv[:1])
|
|
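The linter above exists because pydantic v1 coerces compatible values by default: a field annotated with a bare `int` happily accepts the string `"100"`. A minimal illustration of the lax-versus-strict behaviour the script hunts for (assuming pydantic is installed; `LaxEvent` and `StrictEvent` are hypothetical models for demonstration only):

```python
from pydantic import BaseModel, StrictInt, ValidationError


class LaxEvent(BaseModel):
    # A plain `int` annotation lets pydantic silently coerce "100" -> 100,
    # the stringy-power-levels class of bug mentioned in the docstring above.
    power_level: int


class StrictEvent(BaseModel):
    # StrictInt refuses anything that is not already an int.
    power_level: StrictInt


lax = LaxEvent(power_level="100")  # accepted: the string is coerced to int 100

try:
    StrictEvent(power_level="100")
    strict_rejected = False
except ValidationError:
    strict_rejected = True  # the strict annotation rejects the string
```

This is why the script patches `BaseModel.__init_subclass__`: any field annotated with a coercible type such as bare `int` or `str` can then be flagged at import time.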
@@ -106,4 +106,5 @@ isort "${files[@]}"
 python3 -m black "${files[@]}"
 ./scripts-dev/config-lint.sh
 flake8 "${files[@]}"
+./scripts-dev/check_pydantic_models.py lint
 mypy
@@ -441,6 +441,13 @@ def start(config_options: List[str]) -> None:
             "synapse.app.user_dir",
         )

+    if config.experimental.faster_joins_enabled:
+        raise ConfigError(
+            "You have enabled the experimental `faster_joins` config option, but it is "
+            "not compatible with worker deployments yet. Please disable `faster_joins` "
+            "or run Synapse as a single process deployment instead."
+        )
+
     synapse.events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
     synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
@@ -23,7 +23,7 @@ LEGACY_TEMPLATE_DIR_WARNING = """
 This server's configuration file is using the deprecated 'template_dir' setting in the
 'account_validity' section. Support for this setting has been deprecated and will be
 removed in a future version of Synapse. Server admins should instead use the new
-'custom_templates_directory' setting documented here:
+'custom_template_directory' setting documented here:
 https://matrix-org.github.io/synapse/latest/templates.html
 ---------------------------------------------------------------------------------------"""

@@ -53,7 +53,7 @@ LEGACY_TEMPLATE_DIR_WARNING = """
 This server's configuration file is using the deprecated 'template_dir' setting in the
 'email' section. Support for this setting has been deprecated and will be removed in a
 future version of Synapse. Server admins should instead use the new
-'custom_templates_directory' setting documented here:
+'custom_template_directory' setting documented here:
 https://matrix-org.github.io/synapse/latest/templates.html
 ---------------------------------------------------------------------------------------"""

@@ -26,7 +26,7 @@ LEGACY_TEMPLATE_DIR_WARNING = """
 This server's configuration file is using the deprecated 'template_dir' setting in the
 'sso' section. Support for this setting has been deprecated and will be removed in a
 future version of Synapse. Server admins should instead use the new
-'custom_templates_directory' setting documented here:
+'custom_template_directory' setting documented here:
 https://matrix-org.github.io/synapse/latest/templates.html
 ---------------------------------------------------------------------------------------"""
@@ -61,7 +61,7 @@ from synapse.federation.federation_base import (
 )
 from synapse.federation.transport.client import SendJoinResponse
 from synapse.http.types import QueryParams
-from synapse.logging.opentracing import trace
+from synapse.logging.opentracing import SynapseTags, set_tag, tag_args, trace
 from synapse.types import JsonDict, UserID, get_domain_from_id
 from synapse.util.async_helpers import concurrently_execute
 from synapse.util.caches.expiringcache import ExpiringCache
@@ -235,6 +235,7 @@ class FederationClient(FederationBase):
         )

     @trace
+    @tag_args
     async def backfill(
         self, dest: str, room_id: str, limit: int, extremities: Collection[str]
     ) -> Optional[List[EventBase]]:
@@ -337,6 +338,8 @@ class FederationClient(FederationBase):

         return None

+    @trace
+    @tag_args
     async def get_pdu(
         self,
         destinations: Iterable[str],
@@ -448,6 +451,8 @@ class FederationClient(FederationBase):

         return event_copy

+    @trace
+    @tag_args
     async def get_room_state_ids(
         self, destination: str, room_id: str, event_id: str
     ) -> Tuple[List[str], List[str]]:
@@ -467,6 +472,23 @@ class FederationClient(FederationBase):
         state_event_ids = result["pdu_ids"]
         auth_event_ids = result.get("auth_chain_ids", [])

+        set_tag(
+            SynapseTags.RESULT_PREFIX + "state_event_ids",
+            str(state_event_ids),
+        )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "state_event_ids.length",
+            str(len(state_event_ids)),
+        )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "auth_event_ids",
+            str(auth_event_ids),
+        )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "auth_event_ids.length",
+            str(len(auth_event_ids)),
+        )
+
         if not isinstance(state_event_ids, list) or not isinstance(
             auth_event_ids, list
         ):
@@ -474,6 +496,8 @@ class FederationClient(FederationBase):

         return state_event_ids, auth_event_ids

+    @trace
+    @tag_args
     async def get_room_state(
         self,
         destination: str,
@@ -533,6 +557,7 @@ class FederationClient(FederationBase):

         return valid_state_events, valid_auth_events

+    @trace
     async def _check_sigs_and_hash_and_fetch(
         self,
         origin: str,
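The `@tag_args` decorator stacked under `@trace` in the hunks above records a function's arguments on the active opentracing span (the real implementation lives in `synapse.logging.opentracing`). A rough stdlib-only sketch of that decorator shape, with a hypothetical `RECORDED_CALLS` list standing in for span tags:

```python
import functools

RECORDED_CALLS = []  # stand-in for tags set on an opentracing span


def tag_args(fn):
    """Record each call's positional and keyword arguments before running fn."""

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        RECORDED_CALLS.append((fn.__name__, args, kwargs))
        return fn(*args, **kwargs)

    return wrapper


@tag_args
def get_room_state_ids(destination, room_id, event_id):
    # Stub body: the real method performs a federation request.
    return ([], [])


state_ids, auth_ids = get_room_state_ids("example.org", "!room:example.org", "$event")
```

Because the wrapper only observes arguments and then delegates, it composes freely with other decorators such as `@trace`.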
@@ -61,7 +61,12 @@ from synapse.logging.context import (
     nested_logging_context,
     run_in_background,
 )
-from synapse.logging.opentracing import log_kv, start_active_span_from_edu, trace
+from synapse.logging.opentracing import (
+    log_kv,
+    start_active_span_from_edu,
+    tag_args,
+    trace,
+)
 from synapse.metrics.background_process_metrics import wrap_as_background_process
 from synapse.replication.http.federation import (
     ReplicationFederationSendEduRestServlet,
@@ -547,6 +552,8 @@ class FederationServer(FederationBase):

         return 200, resp

+    @trace
+    @tag_args
     async def on_state_ids_request(
         self, origin: str, room_id: str, event_id: str
     ) -> Tuple[int, JsonDict]:
@@ -569,6 +576,8 @@ class FederationServer(FederationBase):

         return 200, resp

+    @trace
+    @tag_args
     async def _on_state_ids_request_compute(
         self, room_id: str, event_id: str
     ) -> JsonDict:
@@ -32,6 +32,7 @@ from typing import (
 )

 import attr
+from prometheus_client import Histogram
 from signedjson.key import decode_verify_key_bytes
 from signedjson.sign import verify_signed_json
 from unpaddedbase64 import decode_base64
@@ -59,7 +60,7 @@ from synapse.events.validator import EventValidator
 from synapse.federation.federation_client import InvalidResponseError
 from synapse.http.servlet import assert_params_in_dict
 from synapse.logging.context import nested_logging_context
-from synapse.logging.opentracing import trace
+from synapse.logging.opentracing import SynapseTags, set_tag, tag_args, trace
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.module_api import NOT_SPAM
 from synapse.replication.http.federation import (
@@ -79,6 +80,24 @@ if TYPE_CHECKING:

 logger = logging.getLogger(__name__)

+# Added to debug performance and track progress on optimizations
+backfill_processing_before_timer = Histogram(
+    "synapse_federation_backfill_processing_before_time_seconds",
+    "sec",
+    [],
+    buckets=(
+        1.0,
+        5.0,
+        10.0,
+        20.0,
+        30.0,
+        40.0,
+        60.0,
+        80.0,
+        "+Inf",
+    ),
+)
+

 def get_domains_from_state(state: StateMap[EventBase]) -> List[Tuple[str, int]]:
     """Get joined domains from state
@@ -138,6 +157,7 @@ class FederationHandler:
     def __init__(self, hs: "HomeServer"):
         self.hs = hs

+        self.clock = hs.get_clock()
         self.store = hs.get_datastores().main
         self._storage_controllers = hs.get_storage_controllers()
         self._state_storage_controller = self._storage_controllers.state
@@ -197,12 +217,39 @@ class FederationHandler:
             return. This is used as part of the heuristic to decide if we
             should back paginate.
         """
+        # Starting the processing time here so we can include the room backfill
+        # linearizer lock queue in the timing
+        processing_start_time = self.clock.time_msec()
+
         async with self._room_backfill.queue(room_id):
-            return await self._maybe_backfill_inner(room_id, current_depth, limit)
+            return await self._maybe_backfill_inner(
+                room_id,
+                current_depth,
+                limit,
+                processing_start_time=processing_start_time,
+            )

     async def _maybe_backfill_inner(
-        self, room_id: str, current_depth: int, limit: int
+        self,
+        room_id: str,
+        current_depth: int,
+        limit: int,
+        *,
+        processing_start_time: int,
     ) -> bool:
+        """
+        Checks whether the `current_depth` is at or approaching any backfill
+        points in the room and if so, will backfill. We only care about
+        checking backfill points that happened before the `current_depth`
+        (meaning less than or equal to the `current_depth`).
+
+        Args:
+            room_id: The room to backfill in.
+            current_depth: The depth to check at for any upcoming backfill points.
+            limit: The max number of events to request from the remote federated server.
+            processing_start_time: The time when `maybe_backfill` started
+                processing. Only used for timing.
+        """
         backwards_extremities = [
             _BackfillPoint(event_id, depth, _BackfillPointType.BACKWARDS_EXTREMITY)
             for event_id, depth in await self.store.get_oldest_event_ids_with_depth_in_room(
@@ -370,6 +417,14 @@ class FederationHandler:
         logger.debug(
             "_maybe_backfill_inner: extremities_to_request %s", extremities_to_request
         )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "extremities_to_request",
+            str(extremities_to_request),
+        )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "extremities_to_request.length",
+            str(len(extremities_to_request)),
+        )

         # Now we need to decide which hosts to hit first.

@@ -425,6 +480,11 @@ class FederationHandler:

             return False

+        processing_end_time = self.clock.time_msec()
+        backfill_processing_before_timer.observe(
+            (processing_start_time - processing_end_time) / 1000
+        )
+
         success = await try_backfill(likely_domains)
         if success:
             return True
@@ -1081,6 +1141,8 @@ class FederationHandler:

         return event

+    @trace
+    @tag_args
     async def get_state_ids_for_pdu(self, room_id: str, event_id: str) -> List[str]:
         """Returns the state at the event. i.e. not including said event."""
         event = await self.store.get_event(event_id, check_room_id=room_id)
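Both new Histograms time a whole block of work: `_maybe_backfill_inner` above records start/end timestamps by hand, while the `backfill_processing_after_timer.time()` context below wraps the block directly. That wrap-a-block timing pattern can be sketched with the standard library alone (`observe_seconds` is a hypothetical stand-in for prometheus_client's `Histogram.time()`):

```python
import time
from contextlib import contextmanager


@contextmanager
def observe_seconds(samples):
    """Record the wall-clock duration of the enclosed block, even on error."""
    start = time.perf_counter()
    try:
        yield
    finally:
        samples.append(time.perf_counter() - start)


backfill_durations = []
with observe_seconds(backfill_durations):
    time.sleep(0.01)  # stands in for processing backfilled events
```

Recording in a `finally` block means a sample is observed even when the wrapped work raises, matching how a context-manager timer behaves.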
@ -29,7 +29,7 @@ from typing import (
|
||||||
Tuple,
|
Tuple,
|
||||||
)
|
)
|
||||||
|
|
||||||
from prometheus_client import Counter
|
from prometheus_client import Counter, Histogram
|
||||||
|
|
||||||
from synapse import event_auth
|
from synapse import event_auth
|
||||||
from synapse.api.constants import (
|
from synapse.api.constants import (
|
||||||
|
@ -59,7 +59,13 @@ from synapse.events import EventBase
|
||||||
from synapse.events.snapshot import EventContext
|
from synapse.events.snapshot import EventContext
|
||||||
from synapse.federation.federation_client import InvalidResponseError
|
from synapse.federation.federation_client import InvalidResponseError
|
||||||
from synapse.logging.context import nested_logging_context
|
from synapse.logging.context import nested_logging_context
|
||||||
from synapse.logging.opentracing import trace
|
from synapse.logging.opentracing import (
|
||||||
|
SynapseTags,
|
||||||
|
set_tag,
|
||||||
|
start_active_span,
|
||||||
|
tag_args,
|
||||||
|
trace,
|
||||||
|
)
|
||||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||||
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
|
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
|
||||||
from synapse.replication.http.federation import (
|
from synapse.replication.http.federation import (
|
||||||
|
@@ -92,6 +98,26 @@ soft_failed_event_counter = Counter(
     "Events received over federation that we marked as soft_failed",
 )
 
+# Added to debug performance and track progress on optimizations
+backfill_processing_after_timer = Histogram(
+    "synapse_federation_backfill_processing_after_time_seconds",
+    "sec",
+    [],
+    buckets=(
+        1.0,
+        5.0,
+        10.0,
+        20.0,
+        30.0,
+        40.0,
+        60.0,
+        80.0,
+        120.0,
+        180.0,
+        "+Inf",
+    ),
+)
+
 
 class FederationEventHandler:
     """Handles events that originated from federation.
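The `backfill_processing_after_timer` metric above is a standard `prometheus_client` histogram. A minimal sketch of the same pattern (custom buckets plus the `.time()` context manager), using an illustrative metric name and a private registry so it is self-contained:

```python
import time

from prometheus_client import CollectorRegistry, Histogram

# Illustrative metric name and private registry; the real metric in the diff
# is `synapse_federation_backfill_processing_after_time_seconds`.
registry = CollectorRegistry()
processing_timer = Histogram(
    "example_processing_time_seconds",
    "Time spent processing a batch",
    [],  # no labels
    buckets=(1.0, 5.0, 10.0, "+Inf"),
    registry=registry,
)

with processing_timer.time():  # one observation is recorded on exit
    time.sleep(0.01)  # stand-in for the work being measured

count = registry.get_sample_value("example_processing_time_seconds_count")
```

Wrapping the whole block in `.time()` is why the diff below can indent the existing backfill logic under a `with` statement without otherwise changing it.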
@@ -410,6 +436,7 @@ class FederationEventHandler:
             prev_member_event,
         )
 
+    @trace
     async def process_remote_join(
         self,
         origin: str,
@@ -597,20 +624,21 @@ class FederationEventHandler:
         if not events:
             return
 
-        # if there are any events in the wrong room, the remote server is buggy and
-        # should not be trusted.
-        for ev in events:
-            if ev.room_id != room_id:
-                raise InvalidResponseError(
-                    f"Remote server {dest} returned event {ev.event_id} which is in "
-                    f"room {ev.room_id}, when we were backfilling in {room_id}"
-                )
+        with backfill_processing_after_timer.time():
+            # if there are any events in the wrong room, the remote server is buggy and
+            # should not be trusted.
+            for ev in events:
+                if ev.room_id != room_id:
+                    raise InvalidResponseError(
+                        f"Remote server {dest} returned event {ev.event_id} which is in "
+                        f"room {ev.room_id}, when we were backfilling in {room_id}"
+                    )
 
-        await self._process_pulled_events(
-            dest,
-            events,
-            backfilled=True,
-        )
+            await self._process_pulled_events(
+                dest,
+                events,
+                backfilled=True,
+            )
 
     @trace
     async def _get_missing_events_for_pdu(
@@ -715,7 +743,7 @@ class FederationEventHandler:
 
     @trace
     async def _process_pulled_events(
-        self, origin: str, events: Iterable[EventBase], backfilled: bool
+        self, origin: str, events: Collection[EventBase], backfilled: bool
     ) -> None:
         """Process a batch of events we have pulled from a remote server
 
@@ -730,6 +758,15 @@ class FederationEventHandler:
             backfilled: True if this is part of a historical batch of events (inhibits
                 notification to clients, and validation of device keys.)
         """
+        set_tag(
+            SynapseTags.FUNC_ARG_PREFIX + "event_ids",
+            str([event.event_id for event in events]),
+        )
+        set_tag(
+            SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
+            str(len(events)),
+        )
+        set_tag(SynapseTags.FUNC_ARG_PREFIX + "backfilled", str(backfilled))
         logger.debug(
             "processing pulled backfilled=%s events=%s",
             backfilled,
@@ -753,6 +790,7 @@ class FederationEventHandler:
                 await self._process_pulled_event(origin, ev, backfilled=backfilled)
 
     @trace
+    @tag_args
     async def _process_pulled_event(
         self, origin: str, event: EventBase, backfilled: bool
     ) -> None:
@@ -854,6 +892,7 @@ class FederationEventHandler:
             else:
                 raise
 
+    @trace
     async def _compute_event_context_with_maybe_missing_prevs(
         self, dest: str, event: EventBase
     ) -> EventContext:
@@ -970,6 +1009,8 @@ class FederationEventHandler:
             event, state_ids_before_event=state_map, partial_state=partial_state
         )
 
+    @trace
+    @tag_args
     async def _get_state_ids_after_missing_prev_event(
         self,
         destination: str,
@@ -1009,10 +1050,10 @@ class FederationEventHandler:
         logger.debug("Fetching %i events from cache/store", len(desired_events))
         have_events = await self._store.have_seen_events(room_id, desired_events)
 
-        missing_desired_events = desired_events - have_events
+        missing_desired_event_ids = desired_events - have_events
         logger.debug(
             "We are missing %i events (got %i)",
-            len(missing_desired_events),
+            len(missing_desired_event_ids),
             len(have_events),
         )
 
@@ -1024,13 +1065,30 @@ class FederationEventHandler:
         # already have a bunch of the state events. It would be nice if the
         # federation api gave us a way of finding out which we actually need.
 
-        missing_auth_events = set(auth_event_ids) - have_events
-        missing_auth_events.difference_update(
-            await self._store.have_seen_events(room_id, missing_auth_events)
+        missing_auth_event_ids = set(auth_event_ids) - have_events
+        missing_auth_event_ids.difference_update(
+            await self._store.have_seen_events(room_id, missing_auth_event_ids)
         )
-        logger.debug("We are also missing %i auth events", len(missing_auth_events))
+        logger.debug("We are also missing %i auth events", len(missing_auth_event_ids))
 
-        missing_events = missing_desired_events | missing_auth_events
+        missing_event_ids = missing_desired_event_ids | missing_auth_event_ids
+
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "missing_auth_event_ids",
+            str(missing_auth_event_ids),
+        )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "missing_auth_event_ids.length",
+            str(len(missing_auth_event_ids)),
+        )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "missing_desired_event_ids",
+            str(missing_desired_event_ids),
+        )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "missing_desired_event_ids.length",
+            str(len(missing_desired_event_ids)),
+        )
 
         # Making an individual request for each of 1000s of events has a lot of
         # overhead. On the other hand, we don't really want to fetch all of the events
@@ -1041,13 +1099,13 @@ class FederationEventHandler:
         #
         # TODO: might it be better to have an API which lets us do an aggregate event
         # request
-        if (len(missing_events) * 10) >= len(auth_event_ids) + len(state_event_ids):
+        if (len(missing_event_ids) * 10) >= len(auth_event_ids) + len(state_event_ids):
             logger.debug("Requesting complete state from remote")
             await self._get_state_and_persist(destination, room_id, event_id)
         else:
-            logger.debug("Fetching %i events from remote", len(missing_events))
+            logger.debug("Fetching %i events from remote", len(missing_event_ids))
             await self._get_events_and_persist(
-                destination=destination, room_id=room_id, event_ids=missing_events
+                destination=destination, room_id=room_id, event_ids=missing_event_ids
             )
 
         # We now need to fill out the state map, which involves fetching the
@@ -1104,6 +1162,14 @@ class FederationEventHandler:
                 event_id,
                 failed_to_fetch,
             )
+            set_tag(
+                SynapseTags.RESULT_PREFIX + "failed_to_fetch",
+                str(failed_to_fetch),
+            )
+            set_tag(
+                SynapseTags.RESULT_PREFIX + "failed_to_fetch.length",
+                str(len(failed_to_fetch)),
+            )
 
         if remote_event.is_state() and remote_event.rejected_reason is None:
             state_map[
@@ -1112,6 +1178,8 @@ class FederationEventHandler:
 
         return state_map
 
+    @trace
+    @tag_args
     async def _get_state_and_persist(
         self, destination: str, room_id: str, event_id: str
     ) -> None:
@@ -1133,6 +1201,7 @@ class FederationEventHandler:
             destination=destination, room_id=room_id, event_ids=(event_id,)
         )
 
+    @trace
     async def _process_received_pdu(
         self,
         origin: str,
@@ -1283,6 +1352,7 @@ class FederationEventHandler:
         except Exception:
             logger.exception("Failed to resync device for %s", sender)
 
+    @trace
     async def _handle_marker_event(self, origin: str, marker_event: EventBase) -> None:
         """Handles backfilling the insertion event when we receive a marker
         event that points to one.
@@ -1414,6 +1484,8 @@ class FederationEventHandler:
 
         return event_from_response
 
+    @trace
+    @tag_args
     async def _get_events_and_persist(
         self, destination: str, room_id: str, event_ids: Collection[str]
     ) -> None:
@@ -1459,6 +1531,7 @@ class FederationEventHandler:
         logger.info("Fetched %i events of %i requested", len(events), len(event_ids))
         await self._auth_and_persist_outliers(room_id, events)
 
+    @trace
     async def _auth_and_persist_outliers(
         self, room_id: str, events: Iterable[EventBase]
     ) -> None:
@@ -1477,6 +1550,16 @@ class FederationEventHandler:
         """
         event_map = {event.event_id: event for event in events}
 
+        event_ids = event_map.keys()
+        set_tag(
+            SynapseTags.FUNC_ARG_PREFIX + "event_ids",
+            str(event_ids),
+        )
+        set_tag(
+            SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
+            str(len(event_ids)),
+        )
+
         # filter out any events we have already seen. This might happen because
         # the events were eagerly pushed to us (eg, during a room join), or because
         # another thread has raced against us since we decided to request the event.
@@ -1593,6 +1676,7 @@ class FederationEventHandler:
             backfilled=True,
         )
 
+    @trace
     async def _check_event_auth(
         self, origin: Optional[str], event: EventBase, context: EventContext
     ) -> None:
@@ -1631,6 +1715,14 @@ class FederationEventHandler:
         claimed_auth_events = await self._load_or_fetch_auth_events_for_event(
             origin, event
         )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "claimed_auth_events",
+            str([ev.event_id for ev in claimed_auth_events]),
+        )
+        set_tag(
+            SynapseTags.RESULT_PREFIX + "claimed_auth_events.length",
+            str(len(claimed_auth_events)),
+        )
 
         # ... and check that the event passes auth at those auth events.
         # https://spec.matrix.org/v1.3/server-server-api/#checks-performed-on-receipt-of-a-pdu:
@@ -1728,6 +1820,7 @@ class FederationEventHandler:
             )
             context.rejected = RejectedReason.AUTH_ERROR
 
+    @trace
     async def _maybe_kick_guest_users(self, event: EventBase) -> None:
         if event.type != EventTypes.GuestAccess:
             return
@@ -1935,6 +2028,8 @@ class FederationEventHandler:
         # instead we raise an AuthError, which will make the caller ignore it.
         raise AuthError(code=HTTPStatus.FORBIDDEN, msg="Auth events could not be found")
 
+    @trace
+    @tag_args
     async def _get_remote_auth_chain_for_event(
         self, destination: str, room_id: str, event_id: str
     ) -> None:
@@ -1963,6 +2058,7 @@ class FederationEventHandler:
 
         await self._auth_and_persist_outliers(room_id, remote_auth_events)
 
+    @trace
     async def _run_push_actions_and_persist_event(
         self, event: EventBase, context: EventContext, backfilled: bool = False
     ) -> None:
@@ -2071,8 +2167,17 @@ class FederationEventHandler:
         self._message_handler.maybe_schedule_expiry(event)
 
         if not backfilled:  # Never notify for backfilled events
-            for event in events:
-                await self._notify_persisted_event(event, max_stream_token)
+            with start_active_span("notify_persisted_events"):
+                set_tag(
+                    SynapseTags.RESULT_PREFIX + "event_ids",
+                    str([ev.event_id for ev in events]),
+                )
+                set_tag(
+                    SynapseTags.RESULT_PREFIX + "event_ids.length",
+                    str(len(events)),
+                )
+                for event in events:
+                    await self._notify_persisted_event(event, max_stream_token)
 
         return max_stream_token.stream
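Most of the hunks above simply add `@trace` (and sometimes `@tag_args`) decorators to `FederationEventHandler` methods. As an illustration only (the real decorator in `synapse.logging.opentracing` opens a Jaeger span via `start_active_span`), here is a toy async-aware decorator with the same wrapping shape, where a finished "span" is just the function's name appended to a list:

```python
import asyncio
import functools
from typing import Any, Awaitable, Callable, List

# Toy stand-in for a tracer: the real @trace records a span; this records
# the traced function's name so the wrapping mechanics are visible.
finished_spans: List[str] = []


def trace(func: Callable[..., Awaitable[Any]]) -> Callable[..., Awaitable[Any]]:
    """Wrap an async function so a span is recorded when it finishes."""

    @functools.wraps(func)
    async def wrapper(*args: Any, **kwargs: Any) -> Any:
        try:
            return await func(*args, **kwargs)
        finally:
            finished_spans.append(func.__name__)

    return wrapper


@trace
async def process_remote_join(origin: str) -> str:
    # Hypothetical body standing in for the real federation logic.
    return f"joined via {origin}"


result = asyncio.run(process_remote_join("example.org"))
```

The span is recorded even when the wrapped coroutine raises, mirroring why the real decorator can be applied to failure-prone federation paths without losing spans.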
@@ -331,7 +331,11 @@ class MessageHandler:
                 msg="Getting joined members while not being a current member of the room is forbidden.",
             )
 
-        users_with_profile = await self.store.get_users_in_room_with_profiles(room_id)
+        users_with_profile = (
+            await self._state_storage_controller.get_users_in_room_with_profiles(
+                room_id
+            )
+        )
 
         # If this is an AS, double check that they are allowed to see the members.
         # This can either be because the AS user is in the room or because there
@@ -453,6 +453,7 @@ class RoomSummaryHandler:
             "type": e.type,
             "state_key": e.state_key,
             "content": e.content,
+            "room_id": e.room_id,
             "sender": e.sender,
             "origin_server_ts": e.origin_server_ts,
         }
@@ -23,9 +23,12 @@ from typing import (
     Optional,
     Sequence,
     Tuple,
+    Type,
+    TypeVar,
     overload,
 )
 
+from pydantic import BaseModel, ValidationError
 from typing_extensions import Literal
 
 from twisted.web.server import Request
@@ -694,6 +697,28 @@ def parse_json_object_from_request(
     return content
 
 
+Model = TypeVar("Model", bound=BaseModel)
+
+
+def parse_and_validate_json_object_from_request(
+    request: Request, model_type: Type[Model]
+) -> Model:
+    """Parse a JSON object from the body of a twisted HTTP request, then deserialise and
+    validate using the given pydantic model.
+
+    Raises:
+        SynapseError if the request body couldn't be decoded as JSON or
+            if it wasn't a JSON object.
+    """
+    content = parse_json_object_from_request(request, allow_empty_body=False)
+    try:
+        instance = model_type.parse_obj(content)
+    except ValidationError as e:
+        raise SynapseError(HTTPStatus.BAD_REQUEST, str(e), errcode=Codes.BAD_JSON)
+
+    return instance
+
+
 def assert_params_in_dict(body: JsonDict, required: Iterable[str]) -> None:
     absent = []
     for k in required:
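The new `parse_and_validate_json_object_from_request` helper leans on pydantic's `parse_obj` raising `ValidationError` for bad input. A self-contained sketch of the same parse-then-translate pattern, with a hypothetical `RenameBody` model and a plain `ValueError` standing in for Synapse's `SynapseError(400)`:

```python
from http import HTTPStatus

from pydantic import BaseModel, ValidationError


# Hypothetical request-body model for illustration; the helper in the diff
# applies the same parse_obj + ValidationError handling to real requests.
class RenameBody(BaseModel):
    displayname: str


def parse_and_validate(content: dict) -> RenameBody:
    try:
        return RenameBody.parse_obj(content)
    except ValidationError as e:
        # Synapse raises SynapseError(HTTPStatus.BAD_REQUEST, ..., Codes.BAD_JSON)
        raise ValueError(f"{HTTPStatus.BAD_REQUEST.value}: {e}")


ok = parse_and_validate({"displayname": "Alice"})
```

Centralising the `ValidationError` translation means every servlet using the helper returns a consistent 400 with `Codes.BAD_JSON` rather than ad-hoc per-endpoint checks.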
@@ -310,6 +310,19 @@ class SynapseTags:
     # The name of the external cache
     CACHE_NAME = "cache.name"
 
+    # Used to tag function arguments
+    #
+    # Tag a named arg. The name of the argument should be appended to this prefix.
+    FUNC_ARG_PREFIX = "ARG."
+    # Tag extra variadic number of positional arguments (`def foo(first, second, *extras)`)
+    FUNC_ARGS = "args"
+    # Tag keyword args
+    FUNC_KWARGS = "kwargs"
+
+    # Some intermediate result that's interesting to the function. The label for
+    # the result should be appended to this prefix.
+    RESULT_PREFIX = "RESULT."
+
 
 class SynapseBaggage:
     FORCE_TRACING = "synapse-force-tracing"
@@ -967,9 +980,9 @@ def tag_args(func: Callable[P, R]) -> Callable[P, R]:
         # first argument only if it's named `self` or `cls`. This isn't fool-proof
         # but handles the idiomatic cases.
         for i, arg in enumerate(args[1:], start=1):  # type: ignore[index]
-            set_tag("ARG_" + argspec.args[i], str(arg))
-        set_tag("args", str(args[len(argspec.args) :]))  # type: ignore[index]
-        set_tag("kwargs", str(kwargs))
+            set_tag(SynapseTags.FUNC_ARG_PREFIX + argspec.args[i], str(arg))
+        set_tag(SynapseTags.FUNC_ARGS, str(args[len(argspec.args) :]))  # type: ignore[index]
+        set_tag(SynapseTags.FUNC_KWARGS, str(kwargs))
         yield
 
     return _custom_sync_async_decorator(func, _wrapping_logic)
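The `tag_args` change above swaps the ad-hoc `"ARG_"`/`"args"`/`"kwargs"` strings for the new `SynapseTags` constants. A simplified synchronous sketch of the mechanism, with tags collected into a dict rather than set on an opentracing span (names and the `fetch` function are illustrative):

```python
import inspect
from typing import Any, Callable, Dict

# Hypothetical tag sink: the real code calls opentracing's set_tag(); a dict
# makes the ARG./args/kwargs naming convention visible.
tags: Dict[str, str] = {}

FUNC_ARG_PREFIX = "ARG."
FUNC_ARGS = "args"
FUNC_KWARGS = "kwargs"


def tag_args(func: Callable[..., Any]) -> Callable[..., Any]:
    """Tag each named positional argument, plus variadic args and kwargs."""
    argspec = inspect.getfullargspec(func)

    def wrapper(*args: Any, **kwargs: Any) -> Any:
        for i, arg in enumerate(args):
            if i < len(argspec.args):
                tags[FUNC_ARG_PREFIX + argspec.args[i]] = str(arg)
        # Anything beyond the named parameters went into *extras.
        tags[FUNC_ARGS] = str(args[len(argspec.args):])
        tags[FUNC_KWARGS] = str(kwargs)
        return func(*args, **kwargs)

    return wrapper


@tag_args
def fetch(room_id: str, *extras: str, limit: int = 10) -> int:
    return limit


value = fetch("!room:example.org", "extra1", limit=5)
```

Prefixed tag names (`ARG.`, `RESULT.`) keep argument and result tags grouped together in the tracing UI, which is the point of the constants.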
@ -14,128 +14,235 @@
|
||||||
# See the License for the specific language governing permissions and
|
# See the License for the specific language governing permissions and
|
||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
|
|
||||||
import copy
|
"""
|
||||||
from typing import Any, Dict, List
|
Push rules is the system used to determine which events trigger a push (and a
|
||||||
|
bump in notification counts).
|
||||||
|
|
||||||
from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
|
This consists of a list of "push rules" for each user, where a push rule is a
|
||||||
|
pair of "conditions" and "actions". When a user receives an event Synapse
|
||||||
|
iterates over the list of push rules until it finds one where all the conditions
|
||||||
|
match the event, at which point "actions" describe the outcome (e.g. notify,
|
||||||
|
highlight, etc).
|
||||||
|
|
||||||
|
Push rules are split up into 5 different "kinds" (aka "priority classes"), which
|
||||||
|
are run in order:
|
||||||
|
1. Override — highest priority rules, e.g. always ignore notices
|
||||||
|
2. Content — content specific rules, e.g. @ notifications
|
||||||
|
3. Room — per room rules, e.g. enable/disable notifications for all messages
|
||||||
|
in a room
|
||||||
|
4. Sender — per sender rules, e.g. never notify for messages from a given
|
||||||
|
user
|
||||||
|
5. Underride — the lowest priority "default" rules, e.g. notify for every
|
||||||
|
message.
|
||||||
|
|
||||||
|
The set of "base rules" are the list of rules that every user has by default. A
|
||||||
|
user can modify their copy of the push rules in one of three ways:
|
||||||
|
|
||||||
|
1. Adding a new push rule of a certain kind
|
||||||
|
2. Changing the actions of a base rule
|
||||||
|
3. Enabling/disabling a base rule.
|
||||||
|
|
||||||
|
The base rules are split into whether they come before or after a particular
|
||||||
|
kind, so the order of push rule evaluation would be: base rules for before
|
||||||
|
"override" kind, user defined "override" rules, base rules after "override"
|
||||||
|
kind, etc, etc.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import itertools
|
||||||
|
import logging
|
||||||
|
from typing import Dict, Iterator, List, Mapping, Sequence, Tuple, Union
|
||||||
|
|
||||||
|
import attr
|
||||||
|
|
||||||
|
from synapse.config.experimental import ExperimentalConfig
|
||||||
|
from synapse.push.rulekinds import PRIORITY_CLASS_MAP
|
||||||
|
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
|
@attr.s(auto_attribs=True, slots=True, frozen=True)
|
||||||
"""Combine the list of rules set by the user with the default push rules
|
class PushRule:
|
||||||
|
"""A push rule
|
||||||
|
|
||||||
Args:
|
Attributes:
|
||||||
rawrules: The rules the user has modified or set.
|
rule_id: a unique ID for this rule
|
||||||
|
priority_class: what "kind" of push rule this is (see
|
||||||
Returns:
|
`PRIORITY_CLASS_MAP` for mapping between int and kind)
|
||||||
A new list with the rules set by the user combined with the defaults.
|
conditions: the sequence of conditions that all need to match
|
||||||
|
actions: the actions to apply if all conditions are met
|
||||||
|
default: is this a base rule?
|
||||||
|
default_enabled: is this enabled by default?
|
||||||
"""
|
"""
|
||||||
ruleslist = []
|
|
||||||
|
|
||||||
# Grab the base rules that the user has modified.
|
rule_id: str
|
||||||
# The modified base rules have a priority_class of -1.
|
priority_class: int
|
||||||
modified_base_rules = {r["rule_id"]: r for r in rawrules if r["priority_class"] < 0}
|
conditions: Sequence[Mapping[str, str]]
|
||||||
|
actions: Sequence[Union[str, Mapping]]
|
||||||
|
default: bool = False
|
||||||
|
default_enabled: bool = True
|
||||||
|
|
||||||
# Remove the modified base rules from the list, They'll be added back
|
|
||||||
# in the default positions in the list.
|
|
||||||
rawrules = [r for r in rawrules if r["priority_class"] >= 0]
|
|
||||||
|
|
||||||
# shove the server default rules for each kind onto the end of each
|
@attr.s(auto_attribs=True, slots=True, frozen=True, weakref_slot=False)
|
||||||
current_prio_class = list(PRIORITY_CLASS_INVERSE_MAP)[-1]
|
class PushRules:
|
||||||
|
"""A collection of push rules for an account.
|
||||||
|
|
||||||
ruleslist.extend(
|
Can be iterated over, producing push rules in priority order.
|
||||||
make_base_prepend_rules(
|
"""
|
||||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
|
|
||||||
|
# A mapping from rule ID to push rule that overrides a base rule. These will
|
||||||
|
# be returned instead of the base rule.
|
||||||
|
overriden_base_rules: Dict[str, PushRule] = attr.Factory(dict)
|
||||||
|
|
||||||
|
# The following stores the custom push rules at each priority class.
|
||||||
|
#
|
||||||
|
# We keep these separate (rather than combining into one big list) to avoid
|
||||||
|
# copying the base rules around all the time.
|
||||||
|
override: List[PushRule] = attr.Factory(list)
|
||||||
|
content: List[PushRule] = attr.Factory(list)
|
||||||
|
room: List[PushRule] = attr.Factory(list)
|
||||||
|
sender: List[PushRule] = attr.Factory(list)
|
||||||
|
underride: List[PushRule] = attr.Factory(list)
|
||||||
|
|
||||||
|
def __iter__(self) -> Iterator[PushRule]:
|
||||||
|
# When iterating over the push rules we need to return the base rules
|
||||||
|
# interspersed at the correct spots.
|
||||||
|
for rule in itertools.chain(
|
||||||
|
BASE_PREPEND_OVERRIDE_RULES,
|
||||||
|
self.override,
|
||||||
|
BASE_APPEND_OVERRIDE_RULES,
|
||||||
|
self.content,
|
||||||
|
BASE_APPEND_CONTENT_RULES,
|
||||||
|
self.room,
|
||||||
|
self.sender,
|
||||||
|
self.underride,
|
||||||
|
BASE_APPEND_UNDERRIDE_RULES,
|
||||||
|
):
|
||||||
|
# Check if a base rule has been overriden by a custom rule. If so
|
||||||
|
# return that instead.
|
||||||
|
override_rule = self.overriden_base_rules.get(rule.rule_id)
|
||||||
|
if override_rule:
|
||||||
|
yield override_rule
|
||||||
|
else:
|
||||||
|
yield rule
|
||||||
|
|
||||||
|
def __len__(self) -> int:
|
||||||
|
# The length is mostly used by caches to get a sense of "size" / amount
|
||||||
|
# of memory this object is using, so we only count the number of custom
|
||||||
|
# rules.
|
||||||
|
return (
|
||||||
|
len(self.overriden_base_rules)
|
||||||
|
+ len(self.override)
|
||||||
|
+ len(self.content)
|
||||||
|
+ len(self.room)
|
||||||
|
+ len(self.sender)
|
||||||
|
+ len(self.underride)
|
||||||
)
|
)
|
||||||
)
|
|
||||||
|
|
||||||
for r in rawrules:
|
|
||||||
if r["priority_class"] < current_prio_class:
|
|
||||||
while r["priority_class"] < current_prio_class:
|
|
||||||
ruleslist.extend(
|
|
||||||
make_base_append_rules(
|
|
||||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
|
|
||||||
modified_base_rules,
|
|
||||||
)
|
|
||||||
)
|
|
||||||
current_prio_class -= 1
|
|
||||||
if current_prio_class > 0:
|
|
||||||
ruleslist.extend(
|
|
||||||
make_base_prepend_rules(
|
|
||||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
|
|
||||||
modified_base_rules,
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
ruleslist.append(r)
|
@attr.s(auto_attribs=True, slots=True, frozen=True, weakref_slot=False)
|
||||||
|
class FilteredPushRules:
|
||||||
|
"""A wrapper around `PushRules` that filters out disabled experimental push
|
||||||
|
rules, and includes the "enabled" state for each rule when iterated over.
|
||||||
|
"""
|
||||||
|
|
||||||
while current_prio_class > 0:
|
push_rules: PushRules
|
||||||
ruleslist.extend(
|
enabled_map: Dict[str, bool]
|
||||||
make_base_append_rules(
|
experimental_config: ExperimentalConfig
|
||||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
|
|
||||||
|
def __iter__(self) -> Iterator[Tuple[PushRule, bool]]:
|
||||||
|
for rule in self.push_rules:
|
||||||
|
if not _is_experimental_rule_enabled(
|
||||||
|
rule.rule_id, self.experimental_config
|
||||||
|
):
|
||||||
|
continue
|
||||||
|
|
||||||
|
enabled = self.enabled_map.get(rule.rule_id, rule.default_enabled)
|
||||||
|
|
||||||
|
yield rule, enabled
|
||||||
|
|
||||||
|
def __len__(self) -> int:
|
||||||
|
return len(self.push_rules)
|
||||||
|
|
||||||
|
|
||||||
|
DEFAULT_EMPTY_PUSH_RULES = PushRules()
|
||||||
|
|
||||||
|
|
||||||
|
-        current_prio_class -= 1
-        if current_prio_class > 0:
-            ruleslist.extend(
-                make_base_prepend_rules(
-                    PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
-                )
-            )
-
-    return ruleslist
-
-
-def make_base_append_rules(
-    kind: str, modified_base_rules: Dict[str, Dict[str, Any]]
-) -> List[Dict[str, Any]]:
-    rules = []
-
-    if kind == "override":
-        rules = BASE_APPEND_OVERRIDE_RULES
-    elif kind == "underride":
-        rules = BASE_APPEND_UNDERRIDE_RULES
-    elif kind == "content":
-        rules = BASE_APPEND_CONTENT_RULES
-
-    # Copy the rules before modifying them
-    rules = copy.deepcopy(rules)
-    for r in rules:
-        # Only modify the actions, keep the conditions the same.
-        assert isinstance(r["rule_id"], str)
-        modified = modified_base_rules.get(r["rule_id"])
-        if modified:
-            r["actions"] = modified["actions"]
-
-    return rules
+def compile_push_rules(rawrules: List[PushRule]) -> PushRules:
+    """Given a set of custom push rules return a `PushRules` instance (which
+    includes the base rules).
+    """
+
+    if not rawrules:
+        # Fast path to avoid allocating empty lists when there are no custom
+        # rules for the user.
+        return DEFAULT_EMPTY_PUSH_RULES
+
+    rules = PushRules()
+
+    for rule in rawrules:
+        # We need to decide which bucket each custom push rule goes into.
+
+        # If it has the same ID as a base rule then it overrides that...
+        overriden_base_rule = BASE_RULES_BY_ID.get(rule.rule_id)
+        if overriden_base_rule:
+            rules.overriden_base_rules[rule.rule_id] = attr.evolve(
+                overriden_base_rule, actions=rule.actions
+            )
+            continue
+
+        # ... otherwise it gets added to the appropriate priority class bucket
+        collection: List[PushRule]
+        if rule.priority_class == 5:
+            collection = rules.override
+        elif rule.priority_class == 4:
+            collection = rules.content
+        elif rule.priority_class == 3:
+            collection = rules.room
+        elif rule.priority_class == 2:
+            collection = rules.sender
+        elif rule.priority_class == 1:
+            collection = rules.underride
+        elif rule.priority_class <= 0:
+            logger.info(
+                "Got rule with priority class less than zero, but doesn't override a base rule: %s",
+                rule,
+            )
+            continue
+        else:
+            # We log and continue here so as not to break event sending
+            logger.error("Unknown priority class: %s", rule.priority_class)
+            continue
+
+        collection.append(rule)
+
+    return rules
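The priority-class dispatch in `compile_push_rules` maps a numeric class onto one of five collections and skips anything it cannot place, so one malformed custom rule cannot break event sending. A condensed sketch of that bucketing (`sort_rules`, the dict-based rules, and the class numbering shown here are illustrative, not the real API):

```python
from typing import Dict, List, Optional

# Assumed numbering, matching PRIORITY_CLASS_MAP in the diff.
COLLECTIONS: Dict[int, str] = {
    5: "override",
    4: "content",
    3: "room",
    2: "sender",
    1: "underride",
}


def sort_rules(rawrules: List[dict]) -> Dict[str, List[dict]]:
    # One bucket per priority class name; unknown or non-positive
    # classes are skipped (the real code logs and continues).
    out: Dict[str, List[dict]] = {name: [] for name in COLLECTIONS.values()}
    for rule in rawrules:
        name: Optional[str] = COLLECTIONS.get(rule["priority_class"])
        if name is None:
            continue
        out[name].append(rule)
    return out


print(sort_rules([{"priority_class": 5}, {"priority_class": 0}]))
```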
-def make_base_prepend_rules(
-    kind: str,
-    modified_base_rules: Dict[str, Dict[str, Any]],
-) -> List[Dict[str, Any]]:
-    rules = []
-
-    if kind == "override":
-        rules = BASE_PREPEND_OVERRIDE_RULES
-
-    # Copy the rules before modifying them
-    rules = copy.deepcopy(rules)
-    for r in rules:
-        # Only modify the actions, keep the conditions the same.
-        assert isinstance(r["rule_id"], str)
-        modified = modified_base_rules.get(r["rule_id"])
-        if modified:
-            r["actions"] = modified["actions"]
-
-    return rules
+def _is_experimental_rule_enabled(
+    rule_id: str, experimental_config: ExperimentalConfig
+) -> bool:
+    """Used by `FilteredPushRules` to filter out experimental rules when they
+    have not been enabled.
+    """
+    if (
+        rule_id == "global/override/.org.matrix.msc3786.rule.room.server_acl"
+        and not experimental_config.msc3786_enabled
+    ):
+        return False
+    if (
+        rule_id == "global/underride/.org.matrix.msc3772.thread_reply"
+        and not experimental_config.msc3772_enabled
+    ):
+        return False
+    return True
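`_is_experimental_rule_enabled` hard-codes the two experimental rule IDs against their config flags. The same gating can be sketched table-driven; `is_enabled` and the plain-dict config are stand-ins for the real `ExperimentalConfig` attribute lookups:

```python
# Map each experimental rule ID to the config flag that gates it.
EXPERIMENTAL_FLAGS = {
    "global/override/.org.matrix.msc3786.rule.room.server_acl": "msc3786_enabled",
    "global/underride/.org.matrix.msc3772.thread_reply": "msc3772_enabled",
}


def is_enabled(rule_id: str, config: dict) -> bool:
    flag = EXPERIMENTAL_FLAGS.get(rule_id)
    if flag is None:
        # Not an experimental rule: never filtered out.
        return True
    return bool(config.get(flag, False))


print(is_enabled("global/underride/.m.rule.message", {}))  # True
print(is_enabled(
    "global/underride/.org.matrix.msc3772.thread_reply",
    {"msc3772_enabled": False},
))  # False
```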
-# We have to annotate these types, otherwise mypy infers them as
-# `List[Dict[str, Sequence[Collection[str]]]]`.
-BASE_APPEND_CONTENT_RULES: List[Dict[str, Any]] = [
-    {
-        "rule_id": "global/content/.m.rule.contains_user_name",
-        "conditions": [
+BASE_APPEND_CONTENT_RULES = [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["content"],
+        rule_id="global/content/.m.rule.contains_user_name",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "content.body",
@@ -143,29 +250,33 @@ BASE_APPEND_CONTENT_RULES: List[Dict[str, Any]] = [
                 "pattern_type": "user_localpart",
             }
         ],
-        "actions": [
+        actions=[
             "notify",
             {"set_tweak": "sound", "value": "default"},
             {"set_tweak": "highlight"},
         ],
-    }
+    )
 ]
-BASE_PREPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
-    {
-        "rule_id": "global/override/.m.rule.master",
-        "enabled": False,
-        "conditions": [],
-        "actions": ["dont_notify"],
-    }
+BASE_PREPEND_OVERRIDE_RULES = [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["override"],
+        rule_id="global/override/.m.rule.master",
+        default_enabled=False,
+        conditions=[],
+        actions=["dont_notify"],
+    )
 ]
-BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
-    {
-        "rule_id": "global/override/.m.rule.suppress_notices",
-        "conditions": [
+BASE_APPEND_OVERRIDE_RULES = [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["override"],
+        rule_id="global/override/.m.rule.suppress_notices",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "content.msgtype",
@@ -173,13 +284,15 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_suppress_notices",
             }
         ],
-        "actions": ["dont_notify"],
-    },
+        actions=["dont_notify"],
+    ),
     # NB. .m.rule.invite_for_me must be higher prio than .m.rule.member_event
     # otherwise invites will be matched by .m.rule.member_event
-    {
-        "rule_id": "global/override/.m.rule.invite_for_me",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["override"],
+        rule_id="global/override/.m.rule.invite_for_me",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "type",
@@ -195,21 +308,23 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
             # Match the requester's MXID.
             {"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
         ],
-        "actions": [
+        actions=[
             "notify",
             {"set_tweak": "sound", "value": "default"},
             {"set_tweak": "highlight", "value": False},
         ],
-    },
+    ),
     # Will we sometimes want to know about people joining and leaving?
     # Perhaps: if so, this could be expanded upon. Seems the most usual case
     # is that we don't though. We add this override rule so that even if
     # the room rule is set to notify, we don't get notifications about
     # join/leave/avatar/displayname events.
     # See also: https://matrix.org/jira/browse/SYN-607
-    {
-        "rule_id": "global/override/.m.rule.member_event",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["override"],
+        rule_id="global/override/.m.rule.member_event",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "type",
@@ -217,24 +332,28 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_member",
             }
         ],
-        "actions": ["dont_notify"],
-    },
+        actions=["dont_notify"],
+    ),
     # This was changed from underride to override so it's closer in priority
     # to the content rules where the user name highlight rule lives. This
     # way a room rule is lower priority than both but a custom override rule
     # is higher priority than both.
-    {
-        "rule_id": "global/override/.m.rule.contains_display_name",
-        "conditions": [{"kind": "contains_display_name"}],
-        "actions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["override"],
+        rule_id="global/override/.m.rule.contains_display_name",
+        conditions=[{"kind": "contains_display_name"}],
+        actions=[
             "notify",
             {"set_tweak": "sound", "value": "default"},
             {"set_tweak": "highlight"},
         ],
-    },
+    ),
-    {
-        "rule_id": "global/override/.m.rule.roomnotif",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["override"],
+        rule_id="global/override/.m.rule.roomnotif",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "content.body",
@@ -247,11 +366,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_roomnotif_pl",
             },
         ],
-        "actions": ["notify", {"set_tweak": "highlight", "value": True}],
-    },
+        actions=["notify", {"set_tweak": "highlight", "value": True}],
+    ),
-    {
-        "rule_id": "global/override/.m.rule.tombstone",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["override"],
+        rule_id="global/override/.m.rule.tombstone",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "type",
@@ -265,11 +386,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_tombstone_statekey",
             },
         ],
-        "actions": ["notify", {"set_tweak": "highlight", "value": True}],
-    },
+        actions=["notify", {"set_tweak": "highlight", "value": True}],
+    ),
-    {
-        "rule_id": "global/override/.m.rule.reaction",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["override"],
+        rule_id="global/override/.m.rule.reaction",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "type",
@@ -277,14 +400,16 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_reaction",
             }
         ],
-        "actions": ["dont_notify"],
-    },
+        actions=["dont_notify"],
+    ),
     # XXX: This is an experimental rule that is only enabled if msc3786_enabled
     # is enabled, if it is not the rule gets filtered out in _load_rules() in
     # PushRulesWorkerStore
-    {
-        "rule_id": "global/override/.org.matrix.msc3786.rule.room.server_acl",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["override"],
+        rule_id="global/override/.org.matrix.msc3786.rule.room.server_acl",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "type",
@@ -298,15 +423,17 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_room_server_acl_state_key",
             },
         ],
-        "actions": [],
-    },
+        actions=[],
+    ),
 ]
-BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
-    {
-        "rule_id": "global/underride/.m.rule.call",
-        "conditions": [
+BASE_APPEND_UNDERRIDE_RULES = [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["underride"],
+        rule_id="global/underride/.m.rule.call",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "type",
@@ -314,17 +441,19 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_call",
             }
         ],
-        "actions": [
+        actions=[
             "notify",
             {"set_tweak": "sound", "value": "ring"},
             {"set_tweak": "highlight", "value": False},
         ],
-    },
+    ),
     # XXX: once m.direct is standardised everywhere, we should use it to detect
     # a DM from the user's perspective rather than this heuristic.
-    {
-        "rule_id": "global/underride/.m.rule.room_one_to_one",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["underride"],
+        rule_id="global/underride/.m.rule.room_one_to_one",
+        conditions=[
             {"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
             {
                 "kind": "event_match",
@@ -333,17 +462,19 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_message",
             },
         ],
-        "actions": [
+        actions=[
             "notify",
             {"set_tweak": "sound", "value": "default"},
             {"set_tweak": "highlight", "value": False},
         ],
-    },
+    ),
     # XXX: this is going to fire for events which aren't m.room.messages
     # but are encrypted (e.g. m.call.*)...
-    {
-        "rule_id": "global/underride/.m.rule.encrypted_room_one_to_one",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["underride"],
+        rule_id="global/underride/.m.rule.encrypted_room_one_to_one",
+        conditions=[
             {"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
             {
                 "kind": "event_match",
@@ -352,15 +483,17 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_encrypted",
             },
         ],
-        "actions": [
+        actions=[
             "notify",
             {"set_tweak": "sound", "value": "default"},
             {"set_tweak": "highlight", "value": False},
         ],
-    },
+    ),
-    {
-        "rule_id": "global/underride/.org.matrix.msc3772.thread_reply",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["underride"],
+        rule_id="global/underride/.org.matrix.msc3772.thread_reply",
+        conditions=[
             {
                 "kind": "org.matrix.msc3772.relation_match",
                 "rel_type": "m.thread",
@@ -368,11 +501,13 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "sender_type": "user_id",
             }
         ],
-        "actions": ["notify", {"set_tweak": "highlight", "value": False}],
-    },
+        actions=["notify", {"set_tweak": "highlight", "value": False}],
+    ),
-    {
-        "rule_id": "global/underride/.m.rule.message",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["underride"],
+        rule_id="global/underride/.m.rule.message",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "type",
@@ -380,13 +515,15 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_message",
             }
         ],
-        "actions": ["notify", {"set_tweak": "highlight", "value": False}],
-    },
+        actions=["notify", {"set_tweak": "highlight", "value": False}],
+    ),
     # XXX: this is going to fire for events which aren't m.room.messages
     # but are encrypted (e.g. m.call.*)...
-    {
-        "rule_id": "global/underride/.m.rule.encrypted",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["underride"],
+        rule_id="global/underride/.m.rule.encrypted",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "type",
@@ -394,11 +531,13 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_encrypted",
             }
         ],
-        "actions": ["notify", {"set_tweak": "highlight", "value": False}],
-    },
+        actions=["notify", {"set_tweak": "highlight", "value": False}],
+    ),
-    {
-        "rule_id": "global/underride/.im.vector.jitsi",
-        "conditions": [
+    PushRule(
+        default=True,
+        priority_class=PRIORITY_CLASS_MAP["underride"],
+        rule_id="global/underride/.im.vector.jitsi",
+        conditions=[
             {
                 "kind": "event_match",
                 "key": "type",
@@ -418,29 +557,27 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                 "_cache_key": "_is_state_event",
             },
         ],
-        "actions": ["notify", {"set_tweak": "highlight", "value": False}],
-    },
+        actions=["notify", {"set_tweak": "highlight", "value": False}],
+    ),
 ]
 BASE_RULE_IDS = set()

+BASE_RULES_BY_ID: Dict[str, PushRule] = {}
+
 for r in BASE_APPEND_CONTENT_RULES:
-    r["priority_class"] = PRIORITY_CLASS_MAP["content"]
-    r["default"] = True
-    BASE_RULE_IDS.add(r["rule_id"])
+    BASE_RULE_IDS.add(r.rule_id)
+    BASE_RULES_BY_ID[r.rule_id] = r

 for r in BASE_PREPEND_OVERRIDE_RULES:
-    r["priority_class"] = PRIORITY_CLASS_MAP["override"]
-    r["default"] = True
-    BASE_RULE_IDS.add(r["rule_id"])
+    BASE_RULE_IDS.add(r.rule_id)
+    BASE_RULES_BY_ID[r.rule_id] = r

 for r in BASE_APPEND_OVERRIDE_RULES:
-    r["priority_class"] = PRIORITY_CLASS_MAP["override"]
-    r["default"] = True
-    BASE_RULE_IDS.add(r["rule_id"])
+    BASE_RULE_IDS.add(r.rule_id)
+    BASE_RULES_BY_ID[r.rule_id] = r

 for r in BASE_APPEND_UNDERRIDE_RULES:
-    r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
-    r["default"] = True
-    BASE_RULE_IDS.add(r["rule_id"])
+    BASE_RULE_IDS.add(r.rule_id)
+    BASE_RULES_BY_ID[r.rule_id] = r
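The loops above build both a set of base rule IDs and an ID-to-rule index in one pass; the index is what lets `compile_push_rules` detect that a custom rule overrides a base rule. A condensed sketch with a stand-in `Rule` dataclass:

```python
from dataclasses import dataclass
from typing import Dict, Set


@dataclass(frozen=True)
class Rule:
    # Stand-in for the PushRule attrs class.
    rule_id: str


base_rules = [
    Rule("global/content/.m.rule.contains_user_name"),
    Rule("global/override/.m.rule.master"),
]

BASE_RULE_IDS: Set[str] = set()
BASE_RULES_BY_ID: Dict[str, Rule] = {}
for r in base_rules:
    BASE_RULE_IDS.add(r.rule_id)
    BASE_RULES_BY_ID[r.rule_id] = r

print("global/override/.m.rule.master" in BASE_RULE_IDS)  # True
```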
@@ -15,7 +15,18 @@

 import itertools
 import logging
-from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Set, Tuple, Union
+from typing import (
+    TYPE_CHECKING,
+    Collection,
+    Dict,
+    Iterable,
+    List,
+    Mapping,
+    Optional,
+    Set,
+    Tuple,
+    Union,
+)

 from prometheus_client import Counter
@@ -30,6 +41,7 @@ from synapse.util.caches import register_cache
 from synapse.util.metrics import measure_func
 from synapse.visibility import filter_event_for_clients_with_state

+from .baserules import FilteredPushRules, PushRule
 from .push_rule_evaluator import PushRuleEvaluatorForEvent

 if TYPE_CHECKING:
@@ -112,7 +124,7 @@ class BulkPushRuleEvaluator:
     async def _get_rules_for_event(
         self,
         event: EventBase,
-    ) -> Dict[str, List[Dict[str, Any]]]:
+    ) -> Dict[str, FilteredPushRules]:
         """Get the push rules for all users who may need to be notified about
         the event.

@@ -186,7 +198,7 @@ class BulkPushRuleEvaluator:
         return pl_event.content if pl_event else {}, sender_level

     async def _get_mutual_relations(
-        self, event: EventBase, rules: Iterable[Dict[str, Any]]
+        self, event: EventBase, rules: Iterable[Tuple[PushRule, bool]]
     ) -> Dict[str, Set[Tuple[str, str]]]:
         """
         Fetch event metadata for events which related to the same event as the given event.
@@ -216,12 +228,11 @@ class BulkPushRuleEvaluator:

         # Pre-filter to figure out which relation types are interesting.
         rel_types = set()
-        for rule in rules:
-            # Skip disabled rules.
-            if "enabled" in rule and not rule["enabled"]:
+        for rule, enabled in rules:
+            if not enabled:
                 continue

-            for condition in rule["conditions"]:
+            for condition in rule.conditions:
                 if condition["kind"] != "org.matrix.msc3772.relation_match":
                     continue

@@ -254,7 +265,7 @@ class BulkPushRuleEvaluator:
         count_as_unread = _should_count_as_unread(event, context)

         rules_by_user = await self._get_rules_for_event(event)
-        actions_by_user: Dict[str, List[Union[dict, str]]] = {}
+        actions_by_user: Dict[str, Collection[Union[Mapping, str]]] = {}

         room_member_count = await self.store.get_number_joined_users_in_room(
             event.room_id
@@ -317,15 +328,13 @@ class BulkPushRuleEvaluator:
             # current user, it'll be added to the dict later.
             actions_by_user[uid] = []

-        for rule in rules:
-            if "enabled" in rule and not rule["enabled"]:
+        for rule, enabled in rules:
+            if not enabled:
                 continue

-            matches = evaluator.check_conditions(
-                rule["conditions"], uid, display_name
-            )
+            matches = evaluator.check_conditions(rule.conditions, uid, display_name)
             if matches:
-                actions = [x for x in rule["actions"] if x != "dont_notify"]
+                actions = [x for x in rule.actions if x != "dont_notify"]
                 if actions and "notify" in actions:
                     # Push rules say we should notify the user of this event
                     actions_by_user[uid] = actions
@@ -18,16 +18,15 @@ from typing import Any, Dict, List, Optional
 from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
 from synapse.types import UserID

+from .baserules import FilteredPushRules, PushRule
+

 def format_push_rules_for_user(
-    user: UserID, ruleslist: List
+    user: UserID, ruleslist: FilteredPushRules
 ) -> Dict[str, Dict[str, list]]:
     """Converts a list of rawrules and a enabled map into nested dictionaries
     to match the Matrix client-server format for push rules"""

-    # We're going to be mutating this a lot, so do a deep copy
-    ruleslist = copy.deepcopy(ruleslist)
-
     rules: Dict[str, Dict[str, List[Dict[str, Any]]]] = {
         "global": {},
         "device": {},
@@ -35,11 +34,30 @@ def format_push_rules_for_user(

     rules["global"] = _add_empty_priority_class_arrays(rules["global"])

-    for r in ruleslist:
-        template_name = _priority_class_to_template_name(r["priority_class"])
+    for r, enabled in ruleslist:
+        template_name = _priority_class_to_template_name(r.priority_class)
+
+        rulearray = rules["global"][template_name]
+
+        template_rule = _rule_to_template(r)
+        if not template_rule:
+            continue
+
+        rulearray.append(template_rule)
+
+        template_rule["enabled"] = enabled
+
+        if "conditions" not in template_rule:
+            # Not all formatted rules have explicit conditions, e.g. "room"
+            # rules omit them as they can be derived from the kind and rule ID.
+            #
+            # If the formatted rule has no conditions then we can skip the
+            # formatting of conditions.
+            continue

         # Remove internal stuff.
-        for c in r["conditions"]:
+        template_rule["conditions"] = copy.deepcopy(template_rule["conditions"])
+        for c in template_rule["conditions"]:
             c.pop("_cache_key", None)

             pattern_type = c.pop("pattern_type", None)
@@ -52,16 +70,6 @@ def format_push_rules_for_user(
         if sender_type == "user_id":
             c["sender"] = user.to_string()

-        rulearray = rules["global"][template_name]
-
-        template_rule = _rule_to_template(r)
-        if template_rule:
-            if "enabled" in r:
-                template_rule["enabled"] = r["enabled"]
-            else:
-                template_rule["enabled"] = True
-            rulearray.append(template_rule)
-
     return rules
@@ -71,24 +79,24 @@ def _add_empty_priority_class_arrays(d: Dict[str, list]) -> Dict[str, list]:
     return d

-def _rule_to_template(rule: Dict[str, Any]) -> Optional[Dict[str, Any]]:
-    unscoped_rule_id = None
-    if "rule_id" in rule:
-        unscoped_rule_id = _rule_id_from_namespaced(rule["rule_id"])
+def _rule_to_template(rule: PushRule) -> Optional[Dict[str, Any]]:
+    templaterule: Dict[str, Any]

-    template_name = _priority_class_to_template_name(rule["priority_class"])
+    unscoped_rule_id = _rule_id_from_namespaced(rule.rule_id)
+
+    template_name = _priority_class_to_template_name(rule.priority_class)
     if template_name in ["override", "underride"]:
-        templaterule = {k: rule[k] for k in ["conditions", "actions"]}
+        templaterule = {"conditions": rule.conditions, "actions": rule.actions}
     elif template_name in ["sender", "room"]:
-        templaterule = {"actions": rule["actions"]}
-        unscoped_rule_id = rule["conditions"][0]["pattern"]
+        templaterule = {"actions": rule.actions}
+        unscoped_rule_id = rule.conditions[0]["pattern"]
     elif template_name == "content":
-        if len(rule["conditions"]) != 1:
+        if len(rule.conditions) != 1:
             return None
-        thecond = rule["conditions"][0]
+        thecond = rule.conditions[0]
         if "pattern" not in thecond:
             return None
-        templaterule = {"actions": rule["actions"]}
+        templaterule = {"actions": rule.actions}
         templaterule["pattern"] = thecond["pattern"]
     else:
         # This should not be reached unless this function is not kept in sync
@@ -97,8 +105,8 @@ def _rule_to_template(rule: Dict[str, Any]) -> Optional[Dict[str, Any]]:

     if unscoped_rule_id:
         templaterule["rule_id"] = unscoped_rule_id
-    if "default" in rule:
-        templaterule["default"] = rule["default"]
+    if rule.default:
+        templaterule["default"] = True
     return templaterule
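`_rule_to_template` renders a `content` rule for clients as `{rule_id, pattern, actions}` and returns `None` when the rule's single condition has no pattern. A sketch of just that branch (`content_rule_to_template` and the `split("/")[-1]` de-namespacing are simplified stand-ins for the real helpers):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class Rule:
    # Stand-in for the PushRule attrs class.
    rule_id: str
    priority_class: int
    conditions: List[Dict[str, Any]] = field(default_factory=list)
    actions: List[Any] = field(default_factory=list)


def content_rule_to_template(rule: Rule) -> Optional[Dict[str, Any]]:
    # A content rule must have exactly one condition carrying a pattern;
    # anything else is unrenderable for clients.
    if len(rule.conditions) != 1 or "pattern" not in rule.conditions[0]:
        return None
    return {
        "rule_id": rule.rule_id.split("/")[-1],  # crude de-namespacing
        "pattern": rule.conditions[0]["pattern"],
        "actions": rule.actions,
    }


r = Rule(
    "global/content/.m.rule.contains_user_name",
    priority_class=4,
    conditions=[{"kind": "event_match", "key": "content.body", "pattern": "alice"}],
    actions=["notify"],
)
print(content_rule_to_template(r))
```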
@@ -15,7 +15,18 @@
 import logging
 import re
-from typing import Any, Dict, List, Mapping, Optional, Pattern, Set, Tuple, Union
+from typing import (
+    Any,
+    Dict,
+    List,
+    Mapping,
+    Optional,
+    Pattern,
+    Sequence,
+    Set,
+    Tuple,
+    Union,
+)

 from matrix_common.regex import glob_to_regex, to_word_pattern

@@ -32,14 +43,14 @@ INEQUALITY_EXPR = re.compile("^([=<>]*)([0-9]*)$")


 def _room_member_count(
-    ev: EventBase, condition: Dict[str, Any], room_member_count: int
+    ev: EventBase, condition: Mapping[str, Any], room_member_count: int
 ) -> bool:
     return _test_ineq_condition(condition, room_member_count)


 def _sender_notification_permission(
     ev: EventBase,
-    condition: Dict[str, Any],
+    condition: Mapping[str, Any],
     sender_power_level: int,
     power_levels: Dict[str, Union[int, Dict[str, int]]],
 ) -> bool:

@@ -54,7 +65,7 @@ def _sender_notification_permission(
     return sender_power_level >= room_notif_level


-def _test_ineq_condition(condition: Dict[str, Any], number: int) -> bool:
+def _test_ineq_condition(condition: Mapping[str, Any], number: int) -> bool:
     if "is" not in condition:
         return False
     m = INEQUALITY_EXPR.match(condition["is"])

@@ -137,7 +148,7 @@ class PushRuleEvaluatorForEvent:
         self._condition_cache: Dict[str, bool] = {}

     def check_conditions(
-        self, conditions: List[dict], uid: str, display_name: Optional[str]
+        self, conditions: Sequence[Mapping], uid: str, display_name: Optional[str]
     ) -> bool:
         """
         Returns true if a user's conditions/user ID/display name match the event.

@@ -169,7 +180,7 @@ class PushRuleEvaluatorForEvent:
         return True

     def matches(
-        self, condition: Dict[str, Any], user_id: str, display_name: Optional[str]
+        self, condition: Mapping[str, Any], user_id: str, display_name: Optional[str]
     ) -> bool:
         """
         Returns true if a user's condition/user ID/display name match the event.

@@ -204,7 +215,7 @@ class PushRuleEvaluatorForEvent:
         # endpoint with an unknown kind, see _rule_tuple_from_request_object.
         return True

-    def _event_match(self, condition: dict, user_id: str) -> bool:
+    def _event_match(self, condition: Mapping, user_id: str) -> bool:
         """
         Check an "event_match" push rule condition.

@@ -269,7 +280,7 @@ class PushRuleEvaluatorForEvent:

         return bool(r.search(body))

-    def _relation_match(self, condition: dict, user_id: str) -> bool:
+    def _relation_match(self, condition: Mapping, user_id: str) -> bool:
         """
         Check an "relation_match" push rule condition.
@@ -1 +1,12 @@
-<html><body>Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</body><html>
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</title>
+</head>
+<body>
+    Your account is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.
+</body>
+</html>

@@ -1 +1,12 @@
-<html><body>Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</body><html>
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.</title>
+</head>
+<body>
+    Your account has been successfully renewed and is valid until {{ expiration_ts|format_ts("%d-%m-%Y") }}.
+</body>
+</html>

@@ -1,9 +1,14 @@
-<html>
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>Request to add an email address to your Matrix account</title>
+</head>
 <body>
 <p>A request to add an email address to your Matrix account has been received. If this was you, please click the link below to confirm adding this email:</p>

 <a href="{{ link }}">{{ link }}</a>

 <p>If this was not you, you can safely ignore this email. Thank you.</p>
 </body>
 </html>

@@ -1,8 +1,13 @@
-<html>
-<head></head>
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>Request failed</title>
+</head>
 <body>
 <p>The request failed for the following reason: {{ failure_reason }}.</p>
-
 <p>No changes have been made to your account.</p>
 </body>
 </html>

@@ -1,6 +1,12 @@
-<html>
-<head></head>
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>Your email has now been validated</title>
+</head>
 <body>
 <p>Your email has now been validated, please return to your client. You may now close this window.</p>
 </body>
 </html>

@@ -1,8 +1,8 @@
 <html>
 <head>
 <title>Success!</title>
-<meta name='viewport' content='width=device-width, initial-scale=1,
-    user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
+<meta http-equiv="X-UA-Compatible" content="IE=edge">
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
 <link rel="stylesheet" href="/_matrix/static/client/register/style.css">
 <script>
 if (window.onAuthDone) {

@@ -1 +1,12 @@
-<html><body>Invalid renewal token.</body><html>
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>Invalid renewal token.</title>
+</head>
+<body>
+    Invalid renewal token.
+</body>
+</html>

@@ -1,6 +1,8 @@
 <!doctype html>
 <html lang="en">
 <head>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <style type="text/css">
 {% include 'mail.css' without context %}
 {% include "mail-%s.css" % app_name ignore missing without context %}

@@ -1,6 +1,8 @@
 <!doctype html>
 <html lang="en">
 <head>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <style type="text/css">
 {%- include 'mail.css' without context %}
 {%- include "mail-%s.css" % app_name ignore missing without context %}

@@ -1,4 +1,9 @@
-<html>
+<html lang="en">
+<head>
+    <title>Password reset</title>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+</head>
 <body>
 <p>A password reset request has been received for your Matrix account. If this was you, please click the link below to confirm resetting your password:</p>

@@ -1,5 +1,9 @@
-<html>
-<head></head>
+<html lang="en">
+<head>
+    <title>Password reset confirmation</title>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+</head>
 <body>
 <!--Use a hidden form to resubmit the information necessary to reset the password-->
 <form method="post">

@@ -1,5 +1,9 @@
-<html>
-<head></head>
+<html lang="en">
+<head>
+    <title>Password reset failure</title>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+</head>
 <body>
 <p>The request failed for the following reason: {{ failure_reason }}.</p>

@@ -1,5 +1,8 @@
-<html>
-<head></head>
+<html lang="en">
+<head>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+</head>
 <body>
 <p>Your email has now been validated, please return to your client to reset your password. You may now close this window.</p>
 </body>

@@ -1,8 +1,8 @@
 <html>
 <head>
 <title>Authentication</title>
-<meta name='viewport' content='width=device-width, initial-scale=1,
-    user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
+<meta http-equiv="X-UA-Compatible" content="IE=edge">
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
 <script src="https://www.recaptcha.net/recaptcha/api.js"
     async defer></script>
 <script src="//code.jquery.com/jquery-1.11.2.min.js"></script>

@@ -1,4 +1,9 @@
-<html>
+<html lang="en">
+<head>
+    <title>Registration</title>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+</head>
 <body>
 <p>You have asked us to register this email with a new Matrix account. If this was you, please click the link below to confirm your email address:</p>

@@ -1,5 +1,8 @@
-<html>
-<head></head>
+<html lang="en">
+<head>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+</head>
 <body>
 <p>Validation failed for the following reason: {{ failure_reason }}.</p>
 </body>

@@ -1,5 +1,9 @@
-<html>
-<head></head>
+<html lang="en">
+<head>
+    <title>Your email has now been validated</title>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+</head>
 <body>
 <p>Your email has now been validated, please return to your client. You may now close this window.</p>
 </body>

@@ -1,8 +1,8 @@
-<html>
+<html lang="en">
 <head>
 <title>Authentication</title>
-<meta name='viewport' content='width=device-width, initial-scale=1,
-    user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
+<meta http-equiv="X-UA-Compatible" content="IE=edge">
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
 <link rel="stylesheet" href="/_matrix/static/client/register/style.css">
 </head>
 <body>

@@ -3,8 +3,8 @@
 <head>
     <meta charset="UTF-8">
     <title>SSO account deactivated</title>
-    <meta name="viewport" content="width=device-width, user-scalable=no">
-    <style type="text/css">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0"> <style type="text/css">
       {% include "sso.css" without context %}
     </style>
 </head>

@@ -3,7 +3,8 @@
 <head>
     <title>Create your account</title>
     <meta charset="utf-8">
-    <meta name="viewport" content="width=device-width, user-scalable=no">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <script type="text/javascript">
       let wasKeyboard = false;
       document.addEventListener("mousedown", function() { wasKeyboard = false; });

@@ -3,7 +3,8 @@
 <head>
     <meta charset="UTF-8">
     <title>Authentication failed</title>
-    <meta name="viewport" content="width=device-width, user-scalable=no">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <style type="text/css">
       {% include "sso.css" without context %}
     </style>

@@ -3,7 +3,8 @@
 <head>
     <meta charset="UTF-8">
     <title>Confirm it's you</title>
-    <meta name="viewport" content="width=device-width, user-scalable=no">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <style type="text/css">
       {% include "sso.css" without context %}
     </style>

@@ -3,7 +3,8 @@
 <head>
     <meta charset="UTF-8">
     <title>Authentication successful</title>
-    <meta name="viewport" content="width=device-width, user-scalable=no">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <style type="text/css">
       {% include "sso.css" without context %}
     </style>

@@ -3,7 +3,8 @@
 <head>
     <meta charset="UTF-8">
     <title>Authentication failed</title>
-    <meta name="viewport" content="width=device-width, user-scalable=no">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <style type="text/css">
       {% include "sso.css" without context %}

@@ -1,6 +1,8 @@
 <!DOCTYPE html>
 <html lang="en">
 <head>
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <meta charset="UTF-8">
     <title>Choose identity provider</title>
     <style type="text/css">

@@ -3,7 +3,8 @@
 <head>
     <meta charset="UTF-8">
     <title>Agree to terms and conditions</title>
-    <meta name="viewport" content="width=device-width, user-scalable=no">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <style type="text/css">
       {% include "sso.css" without context %}

@@ -3,7 +3,8 @@
 <head>
     <meta charset="UTF-8">
     <title>Continue to your account</title>
-    <meta name="viewport" content="width=device-width, user-scalable=no">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
     <style type="text/css">
       {% include "sso.css" without context %}

@@ -1,8 +1,8 @@
 <html>
 <head>
 <title>Authentication</title>
-<meta name='viewport' content='width=device-width, initial-scale=1,
-    user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
+<meta http-equiv="X-UA-Compatible" content="IE=edge">
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
 <link rel="stylesheet" href="/_matrix/static/client/register/style.css">
 </head>
 <body>
@@ -303,6 +303,7 @@ class RoomRestServlet(RestServlet):

         members = await self.store.get_users_in_room(room_id)
         ret["joined_local_devices"] = await self.store.count_devices_by_users(members)
+        ret["forgotten"] = await self.store.is_locally_forgotten_room(room_id)

         return HTTPStatus.OK, ret
@ -15,10 +15,11 @@
|
||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
import logging
|
import logging
|
||||||
import random
|
import random
|
||||||
from http import HTTPStatus
|
|
||||||
from typing import TYPE_CHECKING, Optional, Tuple
|
from typing import TYPE_CHECKING, Optional, Tuple
|
||||||
from urllib.parse import urlparse
|
from urllib.parse import urlparse
|
||||||
|
|
||||||
|
from pydantic import StrictBool, StrictStr, constr
|
||||||
|
|
||||||
from twisted.web.server import Request
|
from twisted.web.server import Request
|
||||||
|
|
||||||
from synapse.api.constants import LoginType
|
from synapse.api.constants import LoginType
|
||||||
|
@ -34,12 +35,15 @@ from synapse.http.server import HttpServer, finish_request, respond_with_html
|
||||||
from synapse.http.servlet import (
|
from synapse.http.servlet import (
|
||||||
RestServlet,
|
RestServlet,
|
||||||
assert_params_in_dict,
|
assert_params_in_dict,
|
||||||
|
parse_and_validate_json_object_from_request,
|
||||||
parse_json_object_from_request,
|
parse_json_object_from_request,
|
||||||
parse_string,
|
parse_string,
|
||||||
)
|
)
|
||||||
from synapse.http.site import SynapseRequest
|
from synapse.http.site import SynapseRequest
|
||||||
from synapse.metrics import threepid_send_requests
|
from synapse.metrics import threepid_send_requests
|
||||||
from synapse.push.mailer import Mailer
|
from synapse.push.mailer import Mailer
|
||||||
|
from synapse.rest.client.models import AuthenticationData, EmailRequestTokenBody
|
||||||
|
from synapse.rest.models import RequestBodyModel
|
||||||
from synapse.types import JsonDict
|
from synapse.types import JsonDict
|
||||||
from synapse.util.msisdn import phone_number_to_msisdn
|
from synapse.util.msisdn import phone_number_to_msisdn
|
||||||
from synapse.util.stringutils import assert_valid_client_secret, random_string
|
from synapse.util.stringutils import assert_valid_client_secret, random_string
|
||||||
|
@ -82,32 +86,16 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
|
||||||
400, "Email-based password resets have been disabled on this server"
|
400, "Email-based password resets have been disabled on this server"
|
||||||
)
|
)
|
||||||
|
|
||||||
body = parse_json_object_from_request(request)
|
body = parse_and_validate_json_object_from_request(
|
||||||
|
request, EmailRequestTokenBody
|
||||||
|
)
|
||||||
|
|
||||||
assert_params_in_dict(body, ["client_secret", "email", "send_attempt"])
|
if body.next_link:
|
||||||
|
|
||||||
# Extract params from body
|
|
||||||
client_secret = body["client_secret"]
|
|
||||||
assert_valid_client_secret(client_secret)
|
|
||||||
|
|
||||||
# Canonicalise the email address. The addresses are all stored canonicalised
|
|
||||||
# in the database. This allows the user to reset his password without having to
|
|
||||||
# know the exact spelling (eg. upper and lower case) of address in the database.
|
|
||||||
# Stored in the database "foo@bar.com"
|
|
||||||
# User requests with "FOO@bar.com" would raise a Not Found error
|
|
||||||
try:
|
|
||||||
email = validate_email(body["email"])
|
|
||||||
except ValueError as e:
|
|
||||||
raise SynapseError(400, str(e))
|
|
||||||
send_attempt = body["send_attempt"]
|
|
||||||
next_link = body.get("next_link") # Optional param
|
|
||||||
|
|
||||||
if next_link:
|
|
||||||
# Raise if the provided next_link value isn't valid
|
# Raise if the provided next_link value isn't valid
|
||||||
assert_valid_next_link(self.hs, next_link)
|
assert_valid_next_link(self.hs, body.next_link)
|
||||||
|
|
||||||
await self.identity_handler.ratelimit_request_token_requests(
|
await self.identity_handler.ratelimit_request_token_requests(
|
||||||
request, "email", email
|
request, "email", body.email
|
||||||
)
|
)
|
||||||
|
|
||||||
# The email will be sent to the stored address.
|
# The email will be sent to the stored address.
|
||||||
|
@ -115,7 +103,7 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
|
||||||
# an email address which is controlled by the attacker but which, after
|
# an email address which is controlled by the attacker but which, after
|
||||||
# canonicalisation, matches the one in our database.
|
# canonicalisation, matches the one in our database.
|
||||||
existing_user_id = await self.hs.get_datastores().main.get_user_id_by_threepid(
|
existing_user_id = await self.hs.get_datastores().main.get_user_id_by_threepid(
|
||||||
"email", email
|
"email", body.email
|
||||||
)
|
)
|
||||||
|
|
||||||
if existing_user_id is None:
|
if existing_user_id is None:
|
||||||
|
@ -135,26 +123,26 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
|
||||||
# Have the configured identity server handle the request
|
# Have the configured identity server handle the request
|
||||||
ret = await self.identity_handler.request_email_token(
|
ret = await self.identity_handler.request_email_token(
|
||||||
self.hs.config.registration.account_threepid_delegate_email,
|
self.hs.config.registration.account_threepid_delegate_email,
|
||||||
email,
|
body.email,
|
||||||
client_secret,
|
body.client_secret,
|
||||||
send_attempt,
|
body.send_attempt,
|
||||||
next_link,
|
body.next_link,
|
||||||
)
|
)
|
||||||
else:
|
else:
|
||||||
# Send password reset emails from Synapse
|
# Send password reset emails from Synapse
|
||||||
sid = await self.identity_handler.send_threepid_validation(
|
sid = await self.identity_handler.send_threepid_validation(
|
||||||
email,
|
body.email,
|
||||||
client_secret,
|
body.client_secret,
|
||||||
send_attempt,
|
body.send_attempt,
|
||||||
self.mailer.send_password_reset_mail,
|
self.mailer.send_password_reset_mail,
|
||||||
next_link,
|
body.next_link,
|
||||||
)
|
)
|
||||||
|
|
||||||
# Wrap the session id in a JSON object
|
# Wrap the session id in a JSON object
|
||||||
ret = {"sid": sid}
|
ret = {"sid": sid}
|
||||||
|
|
||||||
threepid_send_requests.labels(type="email", reason="password_reset").observe(
|
threepid_send_requests.labels(type="email", reason="password_reset").observe(
|
||||||
send_attempt
|
body.send_attempt
|
||||||
)
|
)
|
||||||
|
|
||||||
return 200, ret
|
return 200, ret
|
||||||
|
@ -172,16 +160,23 @@ class PasswordRestServlet(RestServlet):
|
||||||
self.password_policy_handler = hs.get_password_policy_handler()
|
self.password_policy_handler = hs.get_password_policy_handler()
|
||||||
self._set_password_handler = hs.get_set_password_handler()
|
self._set_password_handler = hs.get_set_password_handler()
|
||||||
|
|
||||||
|
class PostBody(RequestBodyModel):
|
||||||
|
auth: Optional[AuthenticationData] = None
|
||||||
|
logout_devices: StrictBool = True
|
||||||
|
if TYPE_CHECKING:
|
||||||
|
# workaround for https://github.com/samuelcolvin/pydantic/issues/156
|
||||||
|
new_password: Optional[StrictStr] = None
|
||||||
|
else:
|
||||||
|
new_password: Optional[constr(max_length=512, strict=True)] = None
|
||||||
|
|
||||||
@interactive_auth_handler
|
@interactive_auth_handler
|
||||||
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
|
async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
|
||||||
body = parse_json_object_from_request(request)
|
body = parse_and_validate_json_object_from_request(request, self.PostBody)
|
||||||
|
|
||||||
# we do basic sanity checks here because the auth layer will store these
|
# we do basic sanity checks here because the auth layer will store these
|
||||||
# in sessions. Pull out the new password provided to us.
|
# in sessions. Pull out the new password provided to us.
|
||||||
new_password = body.pop("new_password", None)
|
new_password = body.new_password
|
||||||
if new_password is not None:
|
if new_password is not None:
|
||||||
if not isinstance(new_password, str) or len(new_password) > 512:
|
|
||||||
raise SynapseError(400, "Invalid password")
|
|
||||||
self.password_policy_handler.validate_password(new_password)
|
self.password_policy_handler.validate_password(new_password)
|
||||||
|
|
||||||
# there are two possibilities here. Either the user does not have an
|
# there are two possibilities here. Either the user does not have an
|
||||||
|
@ -201,7 +196,7 @@ class PasswordRestServlet(RestServlet):
|
||||||
params, session_id = await self.auth_handler.validate_user_via_ui_auth(
|
params, session_id = await self.auth_handler.validate_user_via_ui_auth(
|
||||||
requester,
|
requester,
|
||||||
request,
|
request,
|
||||||
body,
|
body.dict(),
|
||||||
"modify your account password",
|
"modify your account password",
|
||||||
)
|
)
|
||||||
except InteractiveAuthIncompleteError as e:
|
except InteractiveAuthIncompleteError as e:
|
||||||
|
@ -224,7 +219,7 @@ class PasswordRestServlet(RestServlet):
|
||||||
result, params, session_id = await self.auth_handler.check_ui_auth(
|
result, params, session_id = await self.auth_handler.check_ui_auth(
|
||||||
[[LoginType.EMAIL_IDENTITY]],
|
[[LoginType.EMAIL_IDENTITY]],
|
||||||
request,
|
request,
|
||||||
body,
|
body.dict(),
|
||||||
"modify your account password",
|
"modify your account password",
|
||||||
)
|
)
|
||||||
except InteractiveAuthIncompleteError as e:
|
except InteractiveAuthIncompleteError as e:
|
||||||
|
@@ -299,37 +294,33 @@ class DeactivateAccountRestServlet(RestServlet):
         self.auth_handler = hs.get_auth_handler()
         self._deactivate_account_handler = hs.get_deactivate_account_handler()
 
+    class PostBody(RequestBodyModel):
+        auth: Optional[AuthenticationData] = None
+        id_server: Optional[StrictStr] = None
+        # Not specced, see https://github.com/matrix-org/matrix-spec/issues/297
+        erase: StrictBool = False
+
     @interactive_auth_handler
     async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        body = parse_json_object_from_request(request)
-        erase = body.get("erase", False)
-        if not isinstance(erase, bool):
-            raise SynapseError(
-                HTTPStatus.BAD_REQUEST,
-                "Param 'erase' must be a boolean, if given",
-                Codes.BAD_JSON,
-            )
-
+        body = parse_and_validate_json_object_from_request(request, self.PostBody)
         requester = await self.auth.get_user_by_req(request)
 
         # allow ASes to deactivate their own users
         if requester.app_service:
             await self._deactivate_account_handler.deactivate_account(
-                requester.user.to_string(), erase, requester
+                requester.user.to_string(), body.erase, requester
             )
             return 200, {}
 
         await self.auth_handler.validate_user_via_ui_auth(
             requester,
             request,
-            body,
+            body.dict(),
             "deactivate your account",
         )
         result = await self._deactivate_account_handler.deactivate_account(
-            requester.user.to_string(),
-            erase,
-            requester,
-            id_server=body.get("id_server"),
+            requester.user.to_string(), body.erase, requester, id_server=body.id_server
         )
         if result:
             id_server_unbind_result = "success"
@@ -364,28 +355,15 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
                 "Adding emails have been disabled due to lack of an email config"
             )
             raise SynapseError(
-                400, "Adding an email to your account is disabled on this server"
+                400,
+                "Adding an email to your account is disabled on this server",
             )
 
-        body = parse_json_object_from_request(request)
-        assert_params_in_dict(body, ["client_secret", "email", "send_attempt"])
-        client_secret = body["client_secret"]
-        assert_valid_client_secret(client_secret)
+        body = parse_and_validate_json_object_from_request(
+            request, EmailRequestTokenBody
+        )
 
-        # Canonicalise the email address. The addresses are all stored canonicalised
-        # in the database.
-        # This ensures that the validation email is sent to the canonicalised address
-        # as it will later be entered into the database.
-        # Otherwise the email will be sent to "FOO@bar.com" and stored as
-        # "foo@bar.com" in database.
-        try:
-            email = validate_email(body["email"])
-        except ValueError as e:
-            raise SynapseError(400, str(e))
-        send_attempt = body["send_attempt"]
-        next_link = body.get("next_link")  # Optional param
-
-        if not await check_3pid_allowed(self.hs, "email", email):
+        if not await check_3pid_allowed(self.hs, "email", body.email):
             raise SynapseError(
                 403,
                 "Your email domain is not authorized on this server",
@@ -393,14 +371,14 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
             )
 
         await self.identity_handler.ratelimit_request_token_requests(
-            request, "email", email
+            request, "email", body.email
         )
 
-        if next_link:
+        if body.next_link:
             # Raise if the provided next_link value isn't valid
-            assert_valid_next_link(self.hs, next_link)
+            assert_valid_next_link(self.hs, body.next_link)
 
-        existing_user_id = await self.store.get_user_id_by_threepid("email", email)
+        existing_user_id = await self.store.get_user_id_by_threepid("email", body.email)
 
         if existing_user_id is not None:
             if self.config.server.request_token_inhibit_3pid_errors:
@@ -419,26 +397,26 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
             # Have the configured identity server handle the request
             ret = await self.identity_handler.request_email_token(
                 self.hs.config.registration.account_threepid_delegate_email,
-                email,
-                client_secret,
-                send_attempt,
-                next_link,
+                body.email,
+                body.client_secret,
+                body.send_attempt,
+                body.next_link,
             )
         else:
             # Send threepid validation emails from Synapse
             sid = await self.identity_handler.send_threepid_validation(
-                email,
-                client_secret,
-                send_attempt,
+                body.email,
+                body.client_secret,
+                body.send_attempt,
                 self.mailer.send_add_threepid_mail,
-                next_link,
+                body.next_link,
             )
 
             # Wrap the session id in a JSON object
             ret = {"sid": sid}
 
         threepid_send_requests.labels(type="email", reason="add_threepid").observe(
-            send_attempt
+            body.send_attempt
         )
 
         return 200, ret
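The comments removed above explain why addresses are canonicalised before storage: "FOO@bar.com" and "foo@bar.com" must resolve to the same database row. A minimal stdlib sketch of that idea (Synapse's real `validate_email` in `synapse.util.threepids` performs additional length and format checks; this is only an illustration):

```python
def canonicalise_email(address: str) -> str:
    """Lower-case an email address so lookups are case-insensitive.

    Illustrative only: the real helper also enforces length limits and
    stricter parsing.
    """
    local, sep, domain = address.partition("@")
    if not sep or not local or not domain:
        raise ValueError("Unable to parse email address")
    return address.lower()
```

The canonical form is both sent the validation email and stored, so later lookups by threepid always match.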
69  synapse/rest/client/models.py  Normal file
@@ -0,0 +1,69 @@
+# Copyright 2022 The Matrix.org Foundation C.I.C.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from typing import TYPE_CHECKING, Dict, Optional
+
+from pydantic import Extra, StrictInt, StrictStr, constr, validator
+
+from synapse.rest.models import RequestBodyModel
+from synapse.util.threepids import validate_email
+
+
+class AuthenticationData(RequestBodyModel):
+    """
+    Data used during user-interactive authentication.
+
+    (The name "Authentication Data" is taken directly from the spec.)
+
+    Additional keys will be present, depending on the `type` field. Use `.dict()` to
+    access them.
+    """
+
+    class Config:
+        extra = Extra.allow
+
+    session: Optional[StrictStr] = None
+    type: Optional[StrictStr] = None
+
+
+class EmailRequestTokenBody(RequestBodyModel):
+    if TYPE_CHECKING:
+        client_secret: StrictStr
+    else:
+        # See also assert_valid_client_secret()
+        client_secret: constr(
+            regex="[0-9a-zA-Z.=_-]",  # noqa: F722
+            min_length=0,
+            max_length=255,
+            strict=True,
+        )
+    email: StrictStr
+    id_server: Optional[StrictStr]
+    id_access_token: Optional[StrictStr]
+    next_link: Optional[StrictStr]
+    send_attempt: StrictInt
+
+    @validator("id_access_token", always=True)
+    def token_required_for_identity_server(
+        cls, token: Optional[str], values: Dict[str, object]
+    ) -> Optional[str]:
+        if values.get("id_server") is not None and token is None:
+            raise ValueError("id_access_token is required if an id_server is supplied.")
+        return token
+
+    # Canonicalise the email address. The addresses are all stored canonicalised
+    # in the database. This allows the user to reset his password without having to
+    # know the exact spelling (eg. upper and lower case) of address in the database.
+    # Without this, an email stored in the database as "foo@bar.com" would cause
+    # user requests for "FOO@bar.com" to raise a Not Found error.
+    _email_validator = validator("email", allow_reuse=True)(validate_email)
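The `token_required_for_identity_server` validator above enforces a cross-field rule: `id_access_token` becomes mandatory whenever `id_server` is supplied, while unknown keys are silently dropped. A dependency-free sketch of the same parsing rules using only the standard library (the class name and `parse` helper are invented for illustration; the real code relies on pydantic):

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass(frozen=True)  # frozen mirrors RequestBodyModel's allow_mutation = False
class EmailRequestToken:
    """Plain-Python sketch of EmailRequestTokenBody's validation rules."""

    client_secret: str
    email: str
    send_attempt: int
    id_server: Optional[str] = None
    id_access_token: Optional[str] = None
    next_link: Optional[str] = None

    @classmethod
    def parse(cls, body: Dict[str, Any]) -> "EmailRequestToken":
        # Ignore unknown fields, mirroring RequestBodyModel's Extra.ignore.
        known = {f: body[f] for f in cls.__dataclass_fields__ if f in body}
        # Cross-field rule: an id_server without an id_access_token is invalid.
        if known.get("id_server") is not None and known.get("id_access_token") is None:
            raise ValueError("id_access_token is required if an id_server is supplied.")
        return cls(**known)
```

Unknown client-supplied keys (e.g. unstable spec extensions) are discarded rather than rejected, which is the same trade-off `RequestBodyModel` makes.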
@@ -19,6 +19,8 @@ import re
 from typing import TYPE_CHECKING, Awaitable, Dict, List, Optional, Tuple
 from urllib import parse as urlparse
 
+from prometheus_client.core import Histogram
+
 from twisted.web.server import Request
 
 from synapse import event_auth
@@ -60,6 +62,35 @@ if TYPE_CHECKING:
 
 logger = logging.getLogger(__name__)
 
+# This is an extra metric on top of `synapse_http_server_response_time_seconds`
+# which times the same sort of thing but this one allows us to see values
+# greater than 10s. We use a separate dedicated histogram with its own buckets
+# so that we don't increase the cardinality of the general one because it's
+# multiplied across hundreds of servlets.
+messsages_response_timer = Histogram(
+    "synapse_room_message_list_rest_servlet_response_time_seconds",
+    "sec",
+    [],
+    buckets=(
+        0.005,
+        0.01,
+        0.025,
+        0.05,
+        0.1,
+        0.25,
+        0.5,
+        1.0,
+        2.5,
+        5.0,
+        10.0,
+        30.0,
+        60.0,
+        120.0,
+        180.0,
+        "+Inf",
+    ),
+)
+
 
 class TransactionRestServlet(RestServlet):
     def __init__(self, hs: "HomeServer"):
@@ -560,6 +591,7 @@ class RoomMessageListRestServlet(RestServlet):
         self.auth = hs.get_auth()
         self.store = hs.get_datastores().main
 
+    @messsages_response_timer.time()
    async def on_GET(
         self, request: SynapseRequest, room_id: str
     ) -> Tuple[int, JsonDict]:
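The dedicated histogram above trades a second metric for visibility into requests slower than 10s, with explicit bucket bounds instead of the defaults. A stdlib-only sketch of the same bucketed-timer idea (the real code uses `prometheus_client`'s `Histogram.time()`; the names and the module-level counter list here are illustrative only):

```python
import bisect
import time
from typing import List

# Bucket upper bounds in seconds, mirroring the explicit buckets chosen above.
BUCKETS: List[float] = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5,
                        5.0, 10.0, 30.0, 60.0, 120.0, 180.0, float("inf")]
# observations[i] counts calls whose duration was <= BUCKETS[i] ("le" semantics).
observations = [0] * len(BUCKETS)


def timed(func):
    """Decorator that records each call's duration into the bucketed histogram."""

    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            # bisect_left finds the first bucket bound >= elapsed.
            observations[bisect.bisect_left(BUCKETS, elapsed)] += 1

    return wrapper


@timed
def handler() -> str:
    return "ok"
```

Because the histogram is a separate, unlabelled metric, its extra buckets do not multiply across the hundreds of labelled servlet time series, which is the cardinality concern the comment in the diff describes.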
23  synapse/rest/models.py  Normal file
@@ -0,0 +1,23 @@
+from pydantic import BaseModel, Extra
+
+
+class RequestBodyModel(BaseModel):
+    """A custom version of Pydantic's BaseModel which
+
+     - ignores unknown fields and
+     - does not allow fields to be overwritten after construction,
+
+    but otherwise uses Pydantic's default behaviour.
+
+    Ignoring unknown fields is a useful default. It means that clients can provide
+    unstable fields not known to the server without the request being refused outright.
+
+    Subclassing in this way is recommended by
+    https://pydantic-docs.helpmanual.io/usage/model_config/#change-behaviour-globally
+    """
+
+    class Config:
+        # By default, ignore fields that we don't recognise.
+        extra = Extra.ignore
+        # By default, don't allow fields to be reassigned after parsing.
+        allow_mutation = False
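A rough stdlib analogue of the two defaults `RequestBodyModel` sets (drop unknown JSON keys at construction, forbid reassignment after parsing); the class and field names here are invented for illustration and are not part of Synapse:

```python
class RequestBody:
    """Sketch of RequestBodyModel's defaults: unknown keys are dropped when
    the body is parsed, and parsed values cannot be reassigned afterwards."""

    __slots__ = ("_data",)
    _fields = ("session", "type")  # the declared fields, like pydantic's schema

    def __init__(self, **raw):
        # Keep only declared fields; anything else is silently ignored.
        object.__setattr__(self, "_data", {k: raw.get(k) for k in self._fields})

    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # Mirrors allow_mutation = False.
        raise TypeError("request bodies are read-only once parsed")
```

With pydantic, both behaviours come for free from the `Config` inner class, which is why the diff introduces a single shared base model rather than repeating the config per servlet.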
@@ -3,7 +3,8 @@
 <head>
 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
 <title> Login </title>
-<meta name='viewport' content='width=device-width, initial-scale=1, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
+<meta http-equiv="X-UA-Compatible" content="IE=edge">
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
 <link rel="stylesheet" href="style.css">
 <script src="js/jquery-3.4.1.min.js"></script>
 <script src="js/login.js"></script>
@@ -2,7 +2,8 @@
 <html>
 <head>
 <title> Registration </title>
-<meta name='viewport' content='width=device-width, initial-scale=1, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
+<meta http-equiv="X-UA-Compatible" content="IE=edge">
+<meta name="viewport" content="width=device-width, initial-scale=1.0">
 <link rel="stylesheet" href="style.css">
 <script src="js/jquery-3.4.1.min.js"></script>
 <script src="https://www.recaptcha.net/recaptcha/api/js/recaptcha_ajax.js"></script>
@@ -45,8 +45,14 @@ from twisted.internet import defer
 from synapse.api.constants import EventTypes, Membership
 from synapse.events import EventBase
 from synapse.events.snapshot import EventContext
-from synapse.logging import opentracing
 from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
+from synapse.logging.opentracing import (
+    SynapseTags,
+    active_span,
+    set_tag,
+    start_active_span_follows_from,
+    trace,
+)
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.storage.controllers.state import StateStorageController
 from synapse.storage.databases import Databases
@@ -223,7 +229,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
             queue.append(end_item)
 
         # also add our active opentracing span to the item so that we get a link back
-        span = opentracing.active_span()
+        span = active_span()
         if span:
             end_item.parent_opentracing_span_contexts.append(span.context)
 
@@ -234,7 +240,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
             res = await make_deferred_yieldable(end_item.deferred.observe())
 
             # add another opentracing span which links to the persist trace.
-            with opentracing.start_active_span_follows_from(
+            with start_active_span_follows_from(
                 f"{task.name}_complete", (end_item.opentracing_span_context,)
             ):
                 pass
@@ -266,7 +272,7 @@ class _EventPeristenceQueue(Generic[_PersistResult]):
             queue = self._get_drainining_queue(room_id)
             for item in queue:
                 try:
-                    with opentracing.start_active_span_follows_from(
+                    with start_active_span_follows_from(
                         item.task.name,
                         item.parent_opentracing_span_contexts,
                         inherit_force_tracing=True,
@@ -355,7 +361,7 @@ class EventsPersistenceStorageController:
                 f"Found an unexpected task type in event persistence queue: {task}"
             )
 
-    @opentracing.trace
+    @trace
     async def persist_events(
         self,
         events_and_contexts: Iterable[Tuple[EventBase, EventContext]],
@@ -380,9 +386,21 @@ class EventsPersistenceStorageController:
             PartialStateConflictError: if attempting to persist a partial state event in
                 a room that has been un-partial stated.
         """
+        event_ids: List[str] = []
         partitioned: Dict[str, List[Tuple[EventBase, EventContext]]] = {}
         for event, ctx in events_and_contexts:
             partitioned.setdefault(event.room_id, []).append((event, ctx))
+            event_ids.append(event.event_id)
+
+        set_tag(
+            SynapseTags.FUNC_ARG_PREFIX + "event_ids",
+            str(event_ids),
+        )
+        set_tag(
+            SynapseTags.FUNC_ARG_PREFIX + "event_ids.length",
+            str(len(event_ids)),
+        )
+        set_tag(SynapseTags.FUNC_ARG_PREFIX + "backfilled", str(backfilled))
 
         async def enqueue(
             item: Tuple[str, List[Tuple[EventBase, EventContext]]]
@@ -418,7 +436,7 @@ class EventsPersistenceStorageController:
             self.main_store.get_room_max_token(),
         )
 
-    @opentracing.trace
+    @trace
     async def persist_event(
         self, event: EventBase, context: EventContext, backfilled: bool = False
     ) -> Tuple[EventBase, PersistedEventPosition, RoomStreamToken]:
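The `set_tag(SynapseTags.FUNC_ARG_PREFIX + ...)` calls above record a function's arguments as tags on the active tracing span. A generic sketch of that pattern as a decorator, with a plain dict standing in for the span (all names here are illustrative, not Synapse's actual API):

```python
import functools
import inspect
from typing import Any, Callable, Dict

# Stand-in for an opentracing span: tags are just collected into a dict.
current_span_tags: Dict[str, Any] = {}

ARG_PREFIX = "ARG_"  # analogous in spirit to SynapseTags.FUNC_ARG_PREFIX


def tag_args(func: Callable) -> Callable:
    """Record every bound argument of the wrapped call as a span tag."""

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = inspect.signature(func).bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            # Stringify values so arbitrary objects are safe to attach.
            current_span_tags[ARG_PREFIX + name] = str(value)
        return func(*args, **kwargs)

    return wrapper


@tag_args
def persist(event_ids, backfilled=False):
    return len(event_ids)
```

Stringifying up front is the same choice the diff makes (`str(event_ids)`, `str(backfilled)`): tag values must be cheap, serialisable, and never hold references to large event objects.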
@@ -29,7 +29,8 @@ from typing import (
 
 from synapse.api.constants import EventTypes
 from synapse.events import EventBase
-from synapse.logging.opentracing import trace
+from synapse.logging.opentracing import tag_args, trace
+from synapse.storage.roommember import ProfileInfo
 from synapse.storage.state import StateFilter
 from synapse.storage.util.partial_state_events_tracker import (
     PartialCurrentStateTracker,
@@ -228,6 +229,7 @@ class StateStorageController:
         return {event: event_to_state[event] for event in event_ids}
 
     @trace
+    @tag_args
     async def get_state_ids_for_events(
         self,
         event_ids: Collection[str],
@@ -348,6 +350,7 @@ class StateStorageController:
         )
 
     @trace
+    @tag_args
     async def get_state_group_for_events(
         self,
         event_ids: Collection[str],
@@ -489,6 +492,7 @@ class StateStorageController:
             prev_stream_id, max_stream_id
         )
 
+    @trace
     async def get_current_state(
         self, room_id: str, state_filter: Optional[StateFilter] = None
     ) -> StateMap[EventBase]:
@@ -522,3 +526,15 @@ class StateStorageController:
         await self._partial_state_room_tracker.await_full_state(room_id)
 
         return await self.stores.main.get_current_hosts_in_room(room_id)
+
+    async def get_users_in_room_with_profiles(
+        self, room_id: str
+    ) -> Dict[str, ProfileInfo]:
+        """
+        Get the current users in the room with their profiles.
+        If the room is currently partial-stated, this will block until the room has
+        full state.
+        """
+        await self._partial_state_room_tracker.await_full_state(room_id)
+
+        return await self.stores.main.get_users_in_room_with_profiles(room_id)
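`await_full_state` blocks callers until a partial-stated room has received its full state, after which the normal store query runs. A minimal asyncio sketch of that gate pattern (the class and method names are invented for illustration; Synapse's tracker also handles cancellation and multiple rooms):

```python
import asyncio
from typing import Dict


class PartialStateTracker:
    """Readers wait on a per-room event that is set once state is complete."""

    def __init__(self) -> None:
        self._partial_rooms: Dict[str, asyncio.Event] = {}

    def mark_partial(self, room_id: str) -> None:
        self._partial_rooms[room_id] = asyncio.Event()

    def mark_full(self, room_id: str) -> None:
        # Wake every coroutine blocked in await_full_state for this room.
        ev = self._partial_rooms.pop(room_id, None)
        if ev is not None:
            ev.set()

    async def await_full_state(self, room_id: str) -> None:
        ev = self._partial_rooms.get(room_id)
        if ev is not None:
            await ev.wait()


async def demo() -> str:
    tracker = PartialStateTracker()
    tracker.mark_partial("!room")

    async def reader() -> str:
        await tracker.await_full_state("!room")
        return "full state available"

    task = asyncio.ensure_future(reader())
    await asyncio.sleep(0)      # reader is now blocked on the event
    tracker.mark_full("!room")  # un-partial-state the room
    return await task
```

Rooms that were never partial-stated pass straight through, so the common case pays no synchronisation cost.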
@@ -33,6 +33,7 @@ from synapse.api.constants import MAX_DEPTH, EventTypes
 from synapse.api.errors import StoreError
 from synapse.api.room_versions import EventFormatVersions, RoomVersion
 from synapse.events import EventBase, make_event_from_dict
+from synapse.logging.opentracing import tag_args, trace
 from synapse.metrics.background_process_metrics import wrap_as_background_process
 from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
 from synapse.storage.database import (
@@ -126,6 +127,8 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
         )
         return await self.get_events_as_list(event_ids)
 
+    @trace
+    @tag_args
     async def get_auth_chain_ids(
         self,
         room_id: str,
@@ -709,6 +712,8 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
         # Return all events where not all sets can reach them.
         return {eid for eid, n in event_to_missing_sets.items() if n}
 
+    @trace
+    @tag_args
     async def get_oldest_event_ids_with_depth_in_room(
         self, room_id: str
     ) -> List[Tuple[str, int]]:
@@ -767,6 +772,7 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
             room_id,
         )
 
+    @trace
     async def get_insertion_event_backward_extremities_in_room(
         self, room_id: str
     ) -> List[Tuple[str, int]]:
@@ -1339,6 +1345,8 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
         event_results.reverse()
         return event_results
 
+    @trace
+    @tag_args
     async def get_successor_events(self, event_id: str) -> List[str]:
         """Fetch all events that have the given event as a prev event
 
@@ -1375,6 +1383,7 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
             _delete_old_forward_extrem_cache_txn,
         )
 
+    @trace
     async def insert_insertion_extremity(self, event_id: str, room_id: str) -> None:
         await self.db_pool.simple_upsert(
             table="insertion_event_extremities",
@@ -74,7 +74,17 @@ receipt.
 """
 
 import logging
-from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union, cast
+from typing import (
+    TYPE_CHECKING,
+    Collection,
+    Dict,
+    List,
+    Mapping,
+    Optional,
+    Tuple,
+    Union,
+    cast,
+)
 
 import attr
 
@@ -154,7 +164,9 @@ class NotifCounts:
     highlight_count: int = 0
 
 
-def _serialize_action(actions: List[Union[dict, str]], is_highlight: bool) -> str:
+def _serialize_action(
+    actions: Collection[Union[Mapping, str]], is_highlight: bool
+) -> str:
     """Custom serializer for actions. This allows us to "compress" common actions.
 
     We use the fact that most users have the same actions for notifs (and for
@@ -227,7 +239,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
         user_id: str,
     ) -> NotifCounts:
         """Get the notification count, the highlight count and the unread message count
-        for a given user in a given room after the given read receipt.
+        for a given user in a given room after their latest read receipt.
 
         Note that this function assumes the user to be a current member of the room,
         since it's either called by the sync handler to handle joined room entries, or by
@@ -238,9 +250,8 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
             user_id: The user to retrieve the counts for.
 
         Returns
-            A dict containing the counts mentioned earlier in this docstring,
-            respectively under the keys "notify_count", "highlight_count" and
-            "unread_count".
+            A NotifCounts object containing the notification count, the highlight count
+            and the unread message count.
         """
         return await self.db_pool.runInteraction(
             "get_unread_event_push_actions_by_room",
@@ -255,6 +266,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
         room_id: str,
         user_id: str,
     ) -> NotifCounts:
+        # Get the stream ordering of the user's latest receipt in the room.
         result = self.get_last_receipt_for_user_txn(
             txn,
             user_id,
@@ -266,13 +278,11 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
             ),
         )
 
-        stream_ordering = None
         if result:
             _, stream_ordering = result
-
-        if stream_ordering is None:
-            # Either last_read_event_id is None, or it's an event we don't have (e.g.
-            # because it's been purged), in which case retrieve the stream ordering for
+        else:
+            # If the user has no receipts in the room, retrieve the stream ordering for
             # the latest membership event from this user in this room (which we assume is
             # a join).
             event_id = self.db_pool.simple_select_one_onecol_txn(
@@ -289,10 +299,26 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
         )
 
     def _get_unread_counts_by_pos_txn(
-        self, txn: LoggingTransaction, room_id: str, user_id: str, stream_ordering: int
+        self,
+        txn: LoggingTransaction,
+        room_id: str,
+        user_id: str,
+        receipt_stream_ordering: int,
     ) -> NotifCounts:
         """Get the number of unread messages for a user/room that have happened
         since the given stream ordering.
+
+        Args:
+            txn: The database transaction.
+            room_id: The room ID to get unread counts for.
+            user_id: The user ID to get unread counts for.
+            receipt_stream_ordering: The stream ordering of the user's latest
+                receipt in the room. If there are no receipts, the stream ordering
+                of the user's join event.
+
+        Returns
+            A NotifCounts object containing the notification count, the highlight count
+            and the unread message count.
         """
 
         counts = NotifCounts()
@@ -320,7 +346,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
                 OR last_receipt_stream_ordering = ?
             )
         """,
-            (room_id, user_id, stream_ordering, stream_ordering),
+            (room_id, user_id, receipt_stream_ordering, receipt_stream_ordering),
         )
         row = txn.fetchone()
 
@@ -338,17 +364,20 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
             AND stream_ordering > ?
             AND highlight = 1
         """
-        txn.execute(sql, (user_id, room_id, stream_ordering))
+        txn.execute(sql, (user_id, room_id, receipt_stream_ordering))
         row = txn.fetchone()
         if row:
             counts.highlight_count += row[0]
 
         # Finally we need to count push actions that aren't included in the
-        # summary returned above, e.g. recent events that haven't been
-        # summarised yet, or the summary is empty due to a recent read receipt.
-        stream_ordering = max(stream_ordering, summary_stream_ordering)
+        # summary returned above. This might be due to recent events that haven't
+        # been summarised yet or the summary is out of date due to a recent read
+        # receipt.
+        start_unread_stream_ordering = max(
+            receipt_stream_ordering, summary_stream_ordering
+        )
         notify_count, unread_count = self._get_notif_unread_count_for_user_room(
-            txn, room_id, user_id, stream_ordering
+            txn, room_id, user_id, start_unread_stream_ordering
         )
 
         counts.notify_count += notify_count
@@ -733,7 +762,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
     async def add_push_actions_to_staging(
         self,
         event_id: str,
-        user_id_actions: Dict[str, List[Union[dict, str]]],
+        user_id_actions: Dict[str, Collection[Union[Mapping, str]]],
         count_as_unread: bool,
     ) -> None:
         """Add the push actions for the event to the push action staging area.
@@ -750,7 +779,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
         # This is a helper function for generating the necessary tuple that
         # can be used to insert into the `event_push_actions_staging` table.
         def _gen_entry(
-            user_id: str, actions: List[Union[dict, str]]
+            user_id: str, actions: Collection[Union[Mapping, str]]
         ) -> Tuple[str, str, str, int, int, int]:
             is_highlight = 1 if _action_has_highlight(actions) else 0
             notif = 1 if "notify" in actions else 0
@@ -1151,8 +1180,6 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
             txn: The database transaction.
             old_rotate_stream_ordering: The previous maximum event stream ordering.
|
old_rotate_stream_ordering: The previous maximum event stream ordering.
|
||||||
rotate_to_stream_ordering: The new maximum event stream ordering to summarise.
|
rotate_to_stream_ordering: The new maximum event stream ordering to summarise.
|
||||||
|
|
||||||
Returns whether the archiving process has caught up or not.
|
|
||||||
"""
|
"""
|
||||||
|
|
||||||
# Calculate the new counts that should be upserted into event_push_summary
|
# Calculate the new counts that should be upserted into event_push_summary
|
||||||
|
@ -1238,9 +1265,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
|
||||||
(rotate_to_stream_ordering,),
|
(rotate_to_stream_ordering,),
|
||||||
)
|
)
|
||||||
|
|
||||||
async def _remove_old_push_actions_that_have_rotated(
|
async def _remove_old_push_actions_that_have_rotated(self) -> None:
|
||||||
self,
|
|
||||||
) -> None:
|
|
||||||
"""Clear out old push actions that have been summarised."""
|
"""Clear out old push actions that have been summarised."""
|
||||||
|
|
||||||
# We want to clear out anything that is older than a day that *has* already
|
# We want to clear out anything that is older than a day that *has* already
|
||||||
|
@ -1397,7 +1422,7 @@ class EventPushActionsStore(EventPushActionsWorkerStore):
|
||||||
]
|
]
|
||||||
|
|
||||||
|
|
||||||
def _action_has_highlight(actions: List[Union[dict, str]]) -> bool:
|
def _action_has_highlight(actions: Collection[Union[Mapping, str]]) -> bool:
|
||||||
for action in actions:
|
for action in actions:
|
||||||
if not isinstance(action, dict):
|
if not isinstance(action, dict):
|
||||||
continue
|
continue
|
||||||
|
|
|
@ -40,6 +40,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersions
from synapse.events import EventBase, relation_from_event
from synapse.events.snapshot import EventContext
+from synapse.logging.opentracing import trace
from synapse.storage._base import db_to_json, make_in_list_sql_clause
from synapse.storage.database import (
    DatabasePool,

@ -145,6 +146,7 @@ class PersistEventsStore:
        self._backfill_id_gen: AbstractStreamIdGenerator = self.store._backfill_id_gen
        self._stream_id_gen: AbstractStreamIdGenerator = self.store._stream_id_gen

+    @trace
    async def _persist_events_and_state_updates(
        self,
        events_and_contexts: List[Tuple[EventBase, EventContext]],
@ -54,6 +54,7 @@ from synapse.logging.context import (
    current_context,
    make_deferred_yieldable,
)
+from synapse.logging.opentracing import start_active_span, tag_args, trace
from synapse.metrics.background_process_metrics import (
    run_as_background_process,
    wrap_as_background_process,

@ -430,6 +431,8 @@ class EventsWorkerStore(SQLBaseStore):

        return {e.event_id: e for e in events}

+    @trace
+    @tag_args
    async def get_events_as_list(
        self,
        event_ids: Collection[str],

@ -1090,23 +1093,42 @@ class EventsWorkerStore(SQLBaseStore):
        """
        fetched_event_ids: Set[str] = set()
        fetched_events: Dict[str, _EventRow] = {}
-        events_to_fetch = event_ids

-        while events_to_fetch:
-            row_map = await self._enqueue_events(events_to_fetch)
+        async def _fetch_event_ids_and_get_outstanding_redactions(
+            event_ids_to_fetch: Collection[str],
+        ) -> Collection[str]:
+            """
+            Fetch all of the given event_ids and return any associated redaction event_ids
+            that we still need to fetch in the next iteration.
+            """
+            row_map = await self._enqueue_events(event_ids_to_fetch)

            # we need to recursively fetch any redactions of those events
            redaction_ids: Set[str] = set()
-            for event_id in events_to_fetch:
+            for event_id in event_ids_to_fetch:
                row = row_map.get(event_id)
                fetched_event_ids.add(event_id)
                if row:
                    fetched_events[event_id] = row
                    redaction_ids.update(row.redactions)

-            events_to_fetch = redaction_ids.difference(fetched_event_ids)
-            if events_to_fetch:
-                logger.debug("Also fetching redaction events %s", events_to_fetch)
+            event_ids_to_fetch = redaction_ids.difference(fetched_event_ids)
+            return event_ids_to_fetch
+
+        # Grab the initial list of events requested
+        event_ids_to_fetch = await _fetch_event_ids_and_get_outstanding_redactions(
+            event_ids
+        )
+        # Then go and recursively find all of the associated redactions
+        with start_active_span("recursively fetching redactions"):
+            while event_ids_to_fetch:
+                logger.debug("Also fetching redaction events %s", event_ids_to_fetch)
+
+                event_ids_to_fetch = (
+                    await _fetch_event_ids_and_get_outstanding_redactions(
+                        event_ids_to_fetch
+                    )
+                )

        # build a map from event_id to EventBase
        event_map: Dict[str, EventBase] = {}

@ -1424,6 +1446,8 @@ class EventsWorkerStore(SQLBaseStore):

        return {r["event_id"] for r in rows}

+    @trace
+    @tag_args
    async def have_seen_events(
        self, room_id: str, event_ids: Iterable[str]
    ) -> Set[str]:

@ -2200,3 +2224,63 @@ class EventsWorkerStore(SQLBaseStore):
            (room_id,),
        )
        return [row[0] for row in txn]
+
+    def mark_event_rejected_txn(
+        self,
+        txn: LoggingTransaction,
+        event_id: str,
+        rejection_reason: Optional[str],
+    ) -> None:
+        """Mark an event that was previously accepted as rejected, or vice versa
+
+        This can happen, for example, when resyncing state during a faster join.
+
+        Args:
+            txn:
+            event_id: ID of event to update
+            rejection_reason: reason it has been rejected, or None if it is now accepted
+        """
+        if rejection_reason is None:
+            logger.info(
+                "Marking previously-processed event %s as accepted",
+                event_id,
+            )
+            self.db_pool.simple_delete_txn(
+                txn,
+                "rejections",
+                keyvalues={"event_id": event_id},
+            )
+        else:
+            logger.info(
+                "Marking previously-processed event %s as rejected(%s)",
+                event_id,
+                rejection_reason,
+            )
+            self.db_pool.simple_upsert_txn(
+                txn,
+                table="rejections",
+                keyvalues={"event_id": event_id},
+                values={
+                    "reason": rejection_reason,
+                    "last_check": self._clock.time_msec(),
+                },
+            )
+        self.db_pool.simple_update_txn(
+            txn,
+            table="events",
+            keyvalues={"event_id": event_id},
+            updatevalues={"rejection_reason": rejection_reason},
+        )
+
+        self.invalidate_get_event_cache_after_txn(txn, event_id)
+
+        # TODO(faster_joins): invalidate the cache on workers. Ideally we'd just
+        #   call '_send_invalidation_to_replication', but we actually need the other
+        #   end to call _invalidate_local_get_event_cache() rather than (just)
+        #   _get_event_cache.invalidate().
+        #
+        #   One solution might be to (somehow) get the workers to call
+        #   _invalidate_caches_for_event() (though that will invalidate more than
+        #   strictly necessary).
+        #
+        #   https://github.com/matrix-org/synapse/issues/12994
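The `events_worker` refactor above pulls the "fetch a batch, collect the redaction ids it references, repeat until nothing is outstanding" step into a helper so the recursive part can be wrapped in a tracing span. The control flow can be sketched in isolation; this is a simplified, synchronous stand-in (the `ROWS` map and `fetch_events_with_redactions` name are hypothetical, playing the role of `_enqueue_events` and the surrounding loop):

```python
from typing import Collection, Dict, Set

# Hypothetical row store: each fetched id may reference further redaction ids.
ROWS: Dict[str, Set[str]] = {
    "$a": {"$redact1"},
    "$b": set(),
    "$redact1": {"$redact2"},
    "$redact2": set(),
}


def fetch_events_with_redactions(event_ids: Collection[str]) -> Dict[str, Set[str]]:
    """Fetch the given ids plus, recursively, any redactions they reference."""
    fetched: Dict[str, Set[str]] = {}
    fetched_ids: Set[str] = set()

    def fetch_round(ids: Collection[str]) -> Set[str]:
        # Fetch one batch and return the redaction ids still outstanding.
        outstanding: Set[str] = set()
        for event_id in ids:
            fetched_ids.add(event_id)
            row = ROWS.get(event_id)
            if row is not None:
                fetched[event_id] = row
                outstanding.update(row)
        # Subtracting fetched_ids guarantees termination even on cycles.
        return outstanding - fetched_ids

    to_fetch = fetch_round(event_ids)
    while to_fetch:
        to_fetch = fetch_round(to_fetch)
    return fetched


print(sorted(fetch_events_with_redactions(["$a", "$b"])))
# → ['$a', '$b', '$redact1', '$redact2']
```

Keeping the per-round work in one helper is what lets the real code put only the follow-up rounds inside the "recursively fetching redactions" span while the initial fetch stays outside it.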
@ -14,11 +14,23 @@
# limitations under the License.
import abc
import logging
-from typing import TYPE_CHECKING, Collection, Dict, List, Optional, Tuple, Union, cast
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Collection,
+    Dict,
+    List,
+    Mapping,
+    Optional,
+    Sequence,
+    Tuple,
+    Union,
+    cast,
+)

from synapse.api.errors import StoreError
from synapse.config.homeserver import ExperimentalConfig
-from synapse.push.baserules import list_with_base_rules
+from synapse.push.baserules import FilteredPushRules, PushRule, compile_push_rules
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage.database import (

@ -50,60 +62,30 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)


-def _is_experimental_rule_enabled(
-    rule_id: str, experimental_config: ExperimentalConfig
-) -> bool:
-    """Used by `_load_rules` to filter out experimental rules when they
-    have not been enabled.
-    """
-    if (
-        rule_id == "global/override/.org.matrix.msc3786.rule.room.server_acl"
-        and not experimental_config.msc3786_enabled
-    ):
-        return False
-    if (
-        rule_id == "global/underride/.org.matrix.msc3772.thread_reply"
-        and not experimental_config.msc3772_enabled
-    ):
-        return False
-    return True
-
-
def _load_rules(
    rawrules: List[JsonDict],
    enabled_map: Dict[str, bool],
    experimental_config: ExperimentalConfig,
-) -> List[JsonDict]:
-    ruleslist = []
-    for rawrule in rawrules:
-        rule = dict(rawrule)
-        rule["conditions"] = db_to_json(rawrule["conditions"])
-        rule["actions"] = db_to_json(rawrule["actions"])
-        rule["default"] = False
-        ruleslist.append(rule)
-
-    # We're going to be mutating this a lot, so copy it. We also filter out
-    # any experimental default push rules that aren't enabled.
-    rules = [
-        rule
-        for rule in list_with_base_rules(ruleslist)
-        if _is_experimental_rule_enabled(rule["rule_id"], experimental_config)
-    ]
-
-    for i, rule in enumerate(rules):
-        rule_id = rule["rule_id"]
-
-        if rule_id not in enabled_map:
-            continue
-        if rule.get("enabled", True) == bool(enabled_map[rule_id]):
-            continue
-
-        # Rules are cached across users.
-        rule = dict(rule)
-        rule["enabled"] = bool(enabled_map[rule_id])
-        rules[i] = rule
-
-    return rules
+) -> FilteredPushRules:
+    """Take the DB rows returned from the DB and convert them into a full
+    `FilteredPushRules` object.
+    """
+
+    ruleslist = [
+        PushRule(
+            rule_id=rawrule["rule_id"],
+            priority_class=rawrule["priority_class"],
+            conditions=db_to_json(rawrule["conditions"]),
+            actions=db_to_json(rawrule["actions"]),
+        )
+        for rawrule in rawrules
+    ]
+
+    push_rules = compile_push_rules(ruleslist)
+
+    filtered_rules = FilteredPushRules(push_rules, enabled_map, experimental_config)
+
+    return filtered_rules


# The ABCMeta metaclass ensures that it cannot be instantiated without

@ -162,7 +144,7 @@ class PushRulesWorkerStore(
        raise NotImplementedError()

    @cached(max_entries=5000)
-    async def get_push_rules_for_user(self, user_id: str) -> List[JsonDict]:
+    async def get_push_rules_for_user(self, user_id: str) -> FilteredPushRules:
        rows = await self.db_pool.simple_select_list(
            table="push_rules",
            keyvalues={"user_name": user_id},

@ -216,11 +198,11 @@ class PushRulesWorkerStore(
    @cachedList(cached_method_name="get_push_rules_for_user", list_name="user_ids")
    async def bulk_get_push_rules(
        self, user_ids: Collection[str]
-    ) -> Dict[str, List[JsonDict]]:
+    ) -> Dict[str, FilteredPushRules]:
        if not user_ids:
            return {}

-        results: Dict[str, List[JsonDict]] = {user_id: [] for user_id in user_ids}
+        raw_rules: Dict[str, List[JsonDict]] = {user_id: [] for user_id in user_ids}

        rows = await self.db_pool.simple_select_many_batch(
            table="push_rules",

@ -234,11 +216,13 @@ class PushRulesWorkerStore(
        rows.sort(key=lambda row: (-int(row["priority_class"]), -int(row["priority"])))

        for row in rows:
-            results.setdefault(row["user_name"], []).append(row)
+            raw_rules.setdefault(row["user_name"], []).append(row)

        enabled_map_by_user = await self.bulk_get_push_rules_enabled(user_ids)

-        for user_id, rules in results.items():
+        results: Dict[str, FilteredPushRules] = {}
+
+        for user_id, rules in raw_rules.items():
            results[user_id] = _load_rules(
                rules, enabled_map_by_user.get(user_id, {}), self.hs.config.experimental
            )

@ -345,8 +329,8 @@ class PushRuleStore(PushRulesWorkerStore):
        user_id: str,
        rule_id: str,
        priority_class: int,
-        conditions: List[Dict[str, str]],
-        actions: List[Union[JsonDict, str]],
+        conditions: Sequence[Mapping[str, str]],
+        actions: Sequence[Union[Mapping[str, Any], str]],
        before: Optional[str] = None,
        after: Optional[str] = None,
    ) -> None:

@ -817,7 +801,7 @@ class PushRuleStore(PushRulesWorkerStore):
        return self._push_rules_stream_id_gen.get_current_token()

    async def copy_push_rule_from_room_to_room(
-        self, new_room_id: str, user_id: str, rule: dict
+        self, new_room_id: str, user_id: str, rule: PushRule
    ) -> None:
        """Copy a single push rule from one room to another for a specific user.

@ -827,21 +811,27 @@ class PushRuleStore(PushRulesWorkerStore):
            rule: A push rule.
        """
        # Create new rule id
-        rule_id_scope = "/".join(rule["rule_id"].split("/")[:-1])
+        rule_id_scope = "/".join(rule.rule_id.split("/")[:-1])
        new_rule_id = rule_id_scope + "/" + new_room_id

+        new_conditions = []
+
        # Change room id in each condition
-        for condition in rule.get("conditions", []):
+        for condition in rule.conditions:
+            new_condition = condition
            if condition.get("key") == "room_id":
-                condition["pattern"] = new_room_id
+                new_condition = dict(condition)
+                new_condition["pattern"] = new_room_id
+
+            new_conditions.append(new_condition)

        # Add the rule for the new room
        await self.add_push_rule(
            user_id=user_id,
            rule_id=new_rule_id,
-            priority_class=rule["priority_class"],
-            conditions=rule["conditions"],
-            actions=rule["actions"],
+            priority_class=rule.priority_class,
+            conditions=new_conditions,
+            actions=rule.actions,
        )

    async def copy_push_rules_from_room_to_room_for_user(

@ -859,8 +849,11 @@ class PushRuleStore(PushRulesWorkerStore):
        user_push_rules = await self.get_push_rules_for_user(user_id)

        # Get rules relating to the old room and copy them to the new room
-        for rule in user_push_rules:
-            conditions = rule.get("conditions", [])
+        for rule, enabled in user_push_rules:
+            if not enabled:
+                continue
+
+            conditions = rule.conditions
            if any(
                (c.get("key") == "room_id" and c.get("pattern") == old_room_id)
                for c in conditions
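The `copy_push_rule_from_room_to_room` change above stops mutating the original rule's conditions in place and instead builds a fresh list, copying only the conditions it needs to edit. That copy-on-write rewrite can be sketched on its own; `rewrite_room_conditions` is a hypothetical helper with plain dicts standing in for the `PushRule` condition mappings:

```python
from typing import Any, Dict, List, Mapping, Sequence


def rewrite_room_conditions(
    conditions: Sequence[Mapping[str, Any]], new_room_id: str
) -> List[Mapping[str, Any]]:
    """Return the conditions with any room_id pattern pointed at new_room_id,
    copying each changed condition rather than mutating the caller's data."""
    new_conditions: List[Mapping[str, Any]] = []
    for condition in conditions:
        new_condition = condition
        if condition.get("key") == "room_id":
            new_condition = dict(condition)  # shallow copy before editing
            new_condition["pattern"] = new_room_id
        new_conditions.append(new_condition)
    return new_conditions


old = [{"kind": "event_match", "key": "room_id", "pattern": "!old:example.org"}]
new = rewrite_room_conditions(old, "!new:example.org")
print(new[0]["pattern"])  # → !new:example.org
print(old[0]["pattern"])  # → !old:example.org (original left untouched)
```

Leaving the source rule untouched matters here because, after this diff, rules are shared immutable `PushRule` objects cached across users rather than per-call dicts.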
@ -161,7 +161,7 @@ class ReceiptsWorkerStore(SQLBaseStore):
            receipt_type: The receipt types to fetch.

        Returns:
-            The latest receipt, if one exists.
+            The event ID and stream ordering of the latest receipt, if one exists.
        """

        clause, args = make_in_list_sql_clause(
|
@ -283,6 +283,9 @@ class RoomMemberWorkerStore(EventsWorkerStore):
|
||||||
|
|
||||||
Returns:
|
Returns:
|
||||||
A mapping from user ID to ProfileInfo.
|
A mapping from user ID to ProfileInfo.
|
||||||
|
|
||||||
|
Preconditions:
|
||||||
|
- There is full state available for the room (it is not partial-stated).
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def _get_users_in_room_with_profiles(
|
def _get_users_in_room_with_profiles(
|
||||||
|
@ -1212,6 +1215,30 @@ class RoomMemberWorkerStore(EventsWorkerStore):
|
||||||
"get_forgotten_rooms_for_user", _get_forgotten_rooms_for_user_txn
|
"get_forgotten_rooms_for_user", _get_forgotten_rooms_for_user_txn
|
||||||
)
|
)
|
||||||
|
|
||||||
|
async def is_locally_forgotten_room(self, room_id: str) -> bool:
|
||||||
|
"""Returns whether all local users have forgotten this room_id.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
room_id: The room ID to query.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Whether the room is forgotten.
|
||||||
|
"""
|
||||||
|
|
||||||
|
sql = """
|
||||||
|
SELECT count(*) > 0 FROM local_current_membership
|
||||||
|
INNER JOIN room_memberships USING (room_id, event_id)
|
||||||
|
WHERE
|
||||||
|
room_id = ?
|
||||||
|
AND forgotten = 0;
|
||||||
|
"""
|
||||||
|
|
||||||
|
rows = await self.db_pool.execute("is_forgotten_room", None, sql, room_id)
|
||||||
|
|
||||||
|
# `count(*)` returns always an integer
|
||||||
|
# If any rows still exist it means someone has not forgotten this room yet
|
||||||
|
return not rows[0][0]
|
||||||
|
|
||||||
async def get_rooms_user_has_been_in(self, user_id: str) -> Set[str]:
|
async def get_rooms_user_has_been_in(self, user_id: str) -> Set[str]:
|
||||||
"""Get all rooms that the user has ever been in.
|
"""Get all rooms that the user has ever been in.
|
||||||
|
|
||||||
|
|
|
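The new `is_locally_forgotten_room` query above hinges on `count(*) > 0` over rows with `forgotten = 0`: the room counts as forgotten only when no unforgotten local membership remains. The same predicate can be exercised against an in-memory SQLite table (the single `membership` table here is a deliberately simplified stand-in for the real `local_current_membership`/`room_memberships` join):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE membership (room_id TEXT, user_id TEXT, forgotten INTEGER)"
)
conn.executemany(
    "INSERT INTO membership VALUES (?, ?, ?)",
    [
        ("!room1", "@alice", 1),
        ("!room1", "@bob", 1),  # everyone local has forgotten !room1
        ("!room2", "@alice", 1),
        ("!room2", "@bob", 0),  # @bob has not forgotten !room2
    ],
)


def is_locally_forgotten(room_id: str) -> bool:
    # count(*) > 0 evaluates to 1 iff someone still has forgotten = 0.
    (anyone_remembers,) = conn.execute(
        "SELECT count(*) > 0 FROM membership WHERE room_id = ? AND forgotten = 0",
        (room_id,),
    ).fetchone()
    return not anyone_remembers


print(is_locally_forgotten("!room1"))  # → True
print(is_locally_forgotten("!room2"))  # → False
```

Pushing the aggregation into SQL means the answer is a single boolean-ish integer per query, rather than fetching every membership row to inspect in Python.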
@ -430,6 +430,11 @@ class StateGroupWorkerStore(EventsWorkerStore, SQLBaseStore):
            updatevalues={"state_group": state_group},
        )

+        # the event may now be rejected where it was not before, or vice versa,
+        # in which case we need to update the rejected flags.
+        if bool(context.rejected) != (event.rejected_reason is not None):
+            self.mark_event_rejected_txn(txn, event.event_id, context.rejected)
+
        self.db_pool.simple_delete_one_txn(
            txn,
            table="partial_state_events",
@ -539,15 +539,6 @@ class StateFilter:
            is_mine_id: a callable which confirms if a given state_key matches a mxid
               of a local user
        """
-
-        # TODO(faster_joins): it's not entirely clear that this is safe. In particular,
-        #  there may be circumstances in which we return a piece of state that, once we
-        #  resync the state, we discover is invalid. For example: if it turns out that
-        #  the sender of a piece of state wasn't actually in the room, then clearly that
-        #  state shouldn't have been returned.
-        #  We should at least add some tests around this to see what happens.
-        #  https://github.com/matrix-org/synapse/issues/13006
-
        # if we haven't requested membership events, then it depends on the value of
        # 'include_others'
        if EventTypes.Member not in self.types:
@ -20,6 +20,7 @@ from twisted.internet import defer
from twisted.internet.defer import Deferred

from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
+from synapse.logging.opentracing import trace_with_opname
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.util import unwrapFirstError

@ -58,6 +59,7 @@ class PartialStateEventsTracker:
        for o in observers:
            o.callback(None)

+    @trace_with_opname("PartialStateEventsTracker.await_full_state")
    async def await_full_state(self, event_ids: Collection[str]) -> None:
        """Wait for all the given events to have full state.

@ -151,6 +153,7 @@ class PartialCurrentStateTracker:
        for o in observers:
            o.callback(None)

+    @trace_with_opname("PartialCurrentStateTracker.await_full_state")
    async def await_full_state(self, room_id: str) -> None:
        # We add the deferred immediately so that the DB call to check for
        # partial state doesn't race when we unpartial the room.